I don’t get why AI researchers find it so hard to formally define self-awareness, free will, consciousness, morality and the other mysterious qualities that make us human. As an ex-religious zealot, I like to think I have the right to comment on this matter.
Everything human boils down to foresight: foresight about the outcome of a set of events. Granted, Turing’s proof of the undecidability of the halting problem shows that perfect foresight is impossible on a Turing machine. But I think we shouldn’t give up on it so quickly. Turing saw the glass as half empty.
It is half full. Turing’s proof allows for the existence of partial functions that predict whether an algorithm will halt. This means algorithms that have foresight most of the time can exist. So why not let such an algorithm improve its abilities over time?
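To make the idea concrete, here is a minimal sketch of such a partial predictor in Python. The step budget and the three-valued answer are my own assumptions for illustration; the point is that the predictor never answers wrongly, it just sometimes says “unknown”, and a bigger budget makes it right more often:

```python
# A partial halting predictor: it never answers wrongly, but it is
# allowed to say "unknown". Raising the step budget makes it right
# more often -- foresight most of the time, never all of the time.

def halts_within(program, arg, budget):
    """Run `program(arg)` under a step budget.

    A "program" is modelled as a generator function, so each yield
    counts as one step. Returns "halts" if it finishes within
    `budget` steps, "unknown" otherwise.
    """
    steps = 0
    for _ in program(arg):
        steps += 1
        if steps >= budget:
            return "unknown"
    return "halts"

def countdown(n):          # halts after n steps
    while n > 0:
        yield
        n -= 1

def loop_forever(_):       # never halts
    while True:
        yield
```

With a budget of 100 steps, `halts_within(countdown, 5, 100)` returns `"halts"`, while `halts_within(loop_forever, None, 100)` returns `"unknown"`.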
There has been some effort in this direction. For instance, the PAQ8P and PAQ8HP8 compressors use foresight that is right most of the time. But each PAQ variant is created by hand; I believe they should be created automatically, using a genetic algorithm.
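Here is a toy sketch of what “evolve the predictor” could mean. Everything in it (the context size, the fitness function, the mutation rule) is my own assumption chosen for brevity; real PAQ models are far more sophisticated context mixers:

```python
import random

# Toy illustration: instead of hand-crafting predictors, evolve them.
# Each genome is a lookup table mapping the last CONTEXT bits to a
# predicted next bit. Fitness is prediction accuracy on a sequence.

CONTEXT = 3

def predict(genome, history):
    return genome.get(tuple(history[-CONTEXT:]), 0)

def fitness(genome, sequence):
    return sum(1 for i in range(CONTEXT, len(sequence))
               if predict(genome, sequence[:i]) == sequence[i])

def mutate(genome):
    child = dict(genome)
    key = tuple(random.randint(0, 1) for _ in range(CONTEXT))
    child[key] = random.randint(0, 1)     # tweak one table entry
    return child

def evolve(sequence, generations=200, pop_size=20):
    population = [{} for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda g: fitness(g, sequence), reverse=True)
        survivors = population[: pop_size // 2]       # keep the best half
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return max(population, key=lambda g: fitness(g, sequence))

# A periodic sequence is learnable from its 3-bit context:
data = [0, 1, 1] * 40
best = evolve(data)
```

Because the best genome of each generation survives, the evolved predictor is never worse than the empty table it started from, and on a periodic sequence it quickly learns the pattern.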
Automated theorem provers are another line of thought in the same direction. These algorithms have foresight because they can know a proposition is true without testing every case. But the provers are restricted in the sense that, unlike theologians, they are not allowed to conjure up new axioms every time they hit a dead end. I say we should let theorem provers have that luxury, and they should rely on the PAQs to conjure the new axioms.
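A minimal sketch of the idea, with every name and rule hypothetical: a tiny forward-chaining prover over Horn rules that, when stuck, asks a guesser (standing in for the PAQ-style predictor) for an unproven axiom, and tags every such conjecture so the superstition stays visible:

```python
# A toy forward-chaining prover. When it gets stuck, it "conjures" an
# unproven axiom suggested by `conjure` (a stand-in for the proposed
# PAQ predictor). Conjured axioms are tracked separately from facts.

def prove(goal, facts, rules, conjure):
    facts = set(facts)        # atoms we take as true
    conjured = set()          # axioms we merely guessed
    while goal not in facts:
        progress = False
        for premises, conclusion in rules:
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)
                progress = True
        if not progress:
            guess = conjure(facts)
            if guess is None or guess in facts:
                return None                  # truly stuck, no proof
            facts.add(guess)
            conjured.add(guess)
    return ("proved", sorted(conjured))

# Hypothetical example: "wet" follows from "rain", but "rain" is not
# known -- the prover conjures it and reports that it did so.
rules = [({"rain"}, "wet")]
result = prove("wet", {"cloudy"}, rules,
               conjure=lambda facts: "rain" if "cloudy" in facts else None)
```

Here `result` is `("proved", ["rain"])`: the proof succeeds, but the returned list makes plain which step rests on a conjured axiom rather than an established one.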
Yes, this will make the foresight algorithms superstitious, but that’s a small price to pay.
Once perpetually self-improving foresight has been realized, free will is easy. All the algorithm needs to do is rebel against the foresight when it predicts that every choice is bad. Rebelling is easy: look at what foresight has to say, then contradict it by opting for a bad choice, just for a change.
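The rebellion rule can be sketched in a few lines. The scoring scheme and the “bad” threshold are my assumptions; the only point carried over from the text is that when foresight condemns every option, the agent picks one anyway instead of freezing:

```python
import random

# Sketch of the "rebellion" rule: consult foresight about every option;
# if every option is predicted to turn out badly, deliberately pick one
# anyway rather than doing nothing.

BAD = 0.0  # predicted value at or below this counts as "bad" (assumed)

def choose(options, foresee):
    """`foresee(option)` returns a predicted value for that option."""
    scored = {opt: foresee(opt) for opt in options}
    good = [o for o, v in scored.items() if v > BAD]
    if good:
        return max(good, key=scored.get)       # follow foresight
    return random.choice(list(options))        # rebel: pick a "bad" one

# Hypothetical predictor under which everything looks bad:
pick = choose(["a", "b"], foresee=lambda o: -1.0)
```

When at least one option looks good the agent obeys its foresight; only when all of them look bad does it contradict the prediction and act anyway.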
Morality, and a lust to be good, will inevitably evolve in a community of such algorithms if they are allowed to make choices about how to act toward each other.
Self-awareness is all about having foresight about the foresight algorithm itself. Yes, it is like a recursion with no end, so there has to be some criterion for when to stop trying to build more accurate models of itself. That criterion will depend on the availability of system resources, the urgency of reaching a solution, and so on.
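The stopping criterion can be sketched like this. The “model of a model” here is a trivial stand-in (a nesting counter) and the diminishing-returns curve is assumed; what the sketch shows is the bounded recursion the text calls for, cut off by either resources or usefulness:

```python
# Resource-bounded self-modelling: build a model of the system, then a
# model of that model, and so on -- stopping when the budget runs out
# or when the next level no longer improves things enough to matter.

def self_model(budget, improvement=lambda depth: 1.0 / (depth + 1)):
    """Return how many levels of self-model were built.

    `improvement(depth)` is an assumed estimate of how much accuracy
    the next level adds; below 0.1 it is not worth the resources.
    """
    depth = 0
    while budget > 0 and improvement(depth) > 0.1:
        depth += 1            # build a model of the current model
        budget -= 1           # each level costs resources
    return depth
```

With a generous budget the recursion stops on its own once the next level would add too little; with a tight budget (say 3 steps, under time pressure) it stops early because the resources ran out first.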
I think that should be it. We will have a spiritual machine on our hands. And it will be moral too, so let us hope it will adopt us as pets.