My exposure to artificial intelligence comes largely from fantasy fiction, reinforced by the seemingly exponential curve of technological advancement. Lanier, however, provides an extremely rational look at how feasible an eschatological scenario caused by machine sentience actually is. He doesn’t rule it out completely, and I would be curious to know how his position has evolved since this piece was published, but he essentially argues that the larger threat to humanity lies in the motivations of the people who create the technology. Each sentence Lanier writes is packed with meaning and intent, and I would be lying if I said I fully digested everything in its intended form, but he presented several ideas I found really interesting.
He labels artificial intelligence as more of a belief system than an actual technology. By ascribing to this possible eventuality a larger meaning that resembles a form of worship, proponents of artificial intelligence become more inclined to dumb themselves down, as the author describes. Conforming to poorly designed software, in a way, valorizes it and gives it more importance than it deserves. It unnecessarily relinquishes power to the machines. He offers the system for determining credit scores as an example of this point. Lanier goes so far as to compare people who believe in AI to the cultish behavior of people who claim to have been abducted by aliens, claiming that “people are demonstrably insane when it comes to assessing nonhuman sentience.” While this reads as a deliberate blow to his counterparts, who also antagonize him for his beliefs, it almost detracts from the many rational points he makes against machines ever reaching the point where they can intelligently create other machines. His strongest argument, in my opinion, is that even with the exponential progression of hardware, we will always be limited by the problems software presents as we scale to more robust hardware.
Another point that fascinated me was how central Darwinism is to the debate over AI. It makes complete sense that the theory of evolution would be a computer’s best chance at creating new forms of itself. People also love seductive metaphors, and as Lanier mentions, it is very easy to overlay one’s personal beliefs onto the convenient vehicle for justification that Darwinism provides. Without time, however, computers would not be able to vet which variations of themselves performed better and should be propagated. I don’t believe computers could vastly abridge the trial-and-error process at the core of evolution, as I hope we will never derive an algorithm for time. I also think that cell-based organisms’ inherent drive to reproduce is a crucial part of the evolutionary process, and that is not something machines will easily adopt.