Saturday, September 10, 2005

Moral OWL

Two weeks ago I wrote a couple of posts on the subject of ontologies, in which moral issues were raised. Following the traces left by Udi, who commented on those two posts, I bumped into Udi's LiveJournal. There, probably as a follow-up to his comments on this blog, I found a text which I think has some poetic essence in its structure and content. I've asked for Udi's permission to bring it here. I call this bit Moral OWL (OWL is an anagrammatic TLA for the W3C Semantic Web standard, the Web Ontology Language).



Monday, August 22nd, 2005, 7:07 pm

Practical (geek) religion

This is what I'm going to do:
- Build an OWL ontology, defining the rules for what good behavior is.
- Add all of my behavior decisions to this ontology.
- Run the reasoner to see whether the behavior decisions are consistent with the rules of good behavior or not (see the sketch after this list).
- If so, use the brain reasoner to decide whether the behavior decision is good or not; if not, update the rules of good behavior.
- Execute only behavior decisions that are found to be good.
- Run forward chaining to see what behavior decisions can be deduced from the ontology, with the goal of maximizing good behavior actions (& eventually good events).
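To make the geekery concrete: the first three steps of this plan could be sketched today in Python with the owlready2 library. Everything below is my own hypothetical illustration, not Udi's code; the ontology IRI and the names (GoodBehavior, BehaviorDecision, decides) are invented for the example.

    # A hypothetical sketch, assuming Python 3 with the owlready2 library
    # (pip install owlready2; running the reasoner also requires Java).
    from owlready2 import *

    # Step 1: build an OWL ontology defining the rules of good behavior.
    # The IRI and all names here are invented for illustration.
    onto = get_ontology("http://example.org/moral-owl.owl")

    with onto:
        class Behavior(Thing): pass
        class GoodBehavior(Behavior): pass
        class BadBehavior(Behavior): pass
        # One rule of good behavior: nothing is both good and bad.
        AllDisjoint([GoodBehavior, BadBehavior])

        class BehaviorDecision(Thing): pass
        class decides(BehaviorDecision >> Behavior): pass

    # Step 2: add a behavior decision to the ontology.
    helping = GoodBehavior("helping_a_stranger")
    decision = BehaviorDecision("todays_decision")
    decision.decides = [helping]

    # Step 3: run the reasoner; it raises an error if the decisions
    # contradict the rules (e.g. if "helping_a_stranger" were also
    # asserted to be a BadBehavior).
    try:
        with onto:
            sync_reasoner()
        print("Behavior decisions are consistent with the rules.")
    except OwlReadyInconsistentOntologyError:
        print("Inconsistent - time to update the rules of good behavior.")

The forward-chaining step is heavier machinery; reasoners such as Pellet can infer new facts from rules, which is roughly where the last step of the plan would begin.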

I thought about this Moral OWL for a while, and concluded that even though it appears to be good, it is actually a dangerous tool. Once in the VAT (in its simple Web 2.0 offering, or in its advanced, real-time thought-processing capabilities), it's only a matter of time until we get scrutinized, tagged and even filtered out! Whoever does this might use a proprietary Moral OWL, in which good is evil and evil is good.

So, indeed, technology in itself is not evil. And though I approve of burning such a Moral OWL into the brains of the Intelligent Machines (classic for Asimov's Robots; highly desirable for Agent Smiths), I fear its potentially dangerous application to mortals. The Mor(t)al OWL has to be kept in a safe.

2 Comments:

Blogger Udi h Bauman said...

Actually, part of my motivation was indeed to eventually apply it to Agent Smiths, but my primary goal was to improve myself. As an engineer, I need to model my brain & its environment, in order to be able to fix its workings.

But the distinction between applying the Moral OWL to mortals & to machines is a temporary thing. As Kurzweil says, the answer to the question of who has higher intelligence, humans or computers, will soon depend on how you define a human.

Besides, what's the difference between a Moral OWL & any other religion/ethical code? I just find this language better suited to my needs. Many logical positivists would have been sooooo happy to be alive today!!

Thanks for the very interesting comments & posts.

Udi

8:09 PM  
Blogger Muli Koppel said...

Udi

I think you touched on all the points that make the Moral OWL a frightening tool. Do you really want to model your brain into a rigid schema? Don't you want to have a nasty thought here and there? Do you really want to live in a police state, controlled by the Moral OWL? I also don't get your Kurzweilian point. Assuming Robots will have higher intelligence than humans, would they, the Robots, want to live under the Moral OWL? ("Do Androids Dream of Electric Sheep?" by Philip K. Dick, filmed as Blade Runner, touches this point and postulates that intelligent robots would go against the Moral OWL just the same.)

Nevertheless, I very much like the originality of your idea.

9:53 PM  
