Two weeks ago I published a couple of posts on the subject of ontologies, in which moral issues were raised. I followed the traces left by Udi, who commented on those two posts, and bumped into Udi's LiveJournal. There, probably as a follow-up to his comments on this blog, I found a text which I think has some poetic essence in its structure and content. I've asked for Udi's permission to bring it here. I call this bit Moral OWL (OWL being the anagrammatic TLA for the W3C Semantic Web standard, aka the Web Ontology Language).
Monday, August 22nd, 2005, 7:07 pm
Practical (geek) religion
This is what I'm going to do:
- Build an OWL ontology, defining the rules for what good behavior is.
- Add all of my behavior decisions to this ontology.
- Run the reasoner to see whether the behavior decisions are consistent with the rules of good behavior or not.
- If so, use the brain reasoner to decide whether the behavior decision is really good or not; if the brain disagrees, update the rules of good behavior.
- Execute only behavior decisions that are found to be good.
- Run forward chaining to see what behavior decisions can be deduced from the ontology, with the goal of maximizing good behavior actions (& eventually good events). A rough sketch of this pipeline in code follows the list.
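To make the pipeline concrete, here is a minimal sketch of how one might wire it up today in Python with the owlready2 library (a real OWL toolkit whose bundled reasoner, HermiT, needs a Java runtime). The ontology IRI, the class names and the sample decision are my own illustrative inventions, not part of Udi's text.

```python
# Minimal sketch of the "Moral OWL" pipeline, assuming owlready2.
# The IRI, classes and the individual below are hypothetical examples.
from owlready2 import (get_ontology, Thing, AllDisjoint,
                       sync_reasoner, OwlReadyInconsistentOntologyError)

# Step 1: define the rules of good behavior as an OWL ontology.
onto = get_ontology("http://example.org/moral.owl")

with onto:
    class BehaviorDecision(Thing):
        pass

    class GoodBehavior(BehaviorDecision):
        pass

    class BadBehavior(BehaviorDecision):
        pass

    # Rule: no decision can be both good and bad.
    AllDisjoint([GoodBehavior, BadBehavior])

    # Step 2: add a behavior decision as an individual.
    decision = BehaviorDecision("return_lost_wallet")
    decision.is_a.append(GoodBehavior)

# Step 3: run the reasoner (HermiT, via a Java runtime). An
# inconsistency means a decision clashes with the rules, so either
# the decision or the rules (step 4) must change.
try:
    with onto:
        sync_reasoner()
    print("Decisions are consistent with the rules of good behavior.")
except OwlReadyInconsistentOntologyError:
    print("A decision contradicts the rules; rethink it, or the rules.")

# Step 6: inspect what the reasoner deduced about each decision.
for d in BehaviorDecision.instances():
    print(d.name, "->", [c.name for c in d.is_a])
```

An inconsistent ontology is the reasoner's way of saying a decision breaks the rules; at that point, per the list above, the brain reasoner arbitrates whether the decision or the rules should be updated.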
I thought about this Moral OWL for a while, and concluded that even though it appears to be good, it is actually a dangerous tool. Once it is in the VAT (in its simple Web 2.0 offering, or in its advanced, real-time thought-processing capabilities), it's a matter of time until we get scrutinized, tagged and even filtered out! Whoever does this might use a proprietary Moral OWL, in which good is evil and evil is good.
So, indeed, technology in itself is not evil. And though I approve of burning such a Moral OWL into the brains of the Intelligent Machines (classic for Asimov's Robots; highly desirable for Agent Smiths), I fear its potentially dangerous application to mortals. The Mor(t)al OWL has to be put in a safe.