Singularity Summit: Thoughts on AGI
As most of you know, one of the commonly proposed paths to The Singularity is the development of artificial general intelligence (AGI). As you can read in my rundown of the Singularity Summit, speakers showcased a lot of progress in hardware substrate and software infrastructure, but no significant conceptual advances in implementing executive function in software.
Absence of evidence isn’t necessarily evidence of absence, but I believe that if anyone were making headway on this problem, the chances that someone at the summit would have alluded to it are high. Therefore, I predict that the first being with substantially higher g than current humans is much more likely to be an augmented human than an AGI [Edit: more thoughts on electronically enhancing humans here].
I was into AI about 15 years ago. I worked for a pretty successful startup whose product was based on narrow-AI techniques. I then left to start a company with a couple of my colleagues, focused on doing custom enterprise software development with IDEs that integrated narrow-AI techniques. I had a decent grasp of the literature and attended the occasional conference. Like many of my peers, I forsook this field for distributed computing and the Internet.
Now, in every other area of information technology I can think of, if I were to come back to it after a 15-year absence, I would be blown away with the progress. Not only am I not blown away, I am a bit despondent over the complete lack of conceptual breakthroughs. Yes, we have much more powerful hardware. Yes, we can simulate quite large assemblies of neurons. Yes, we can process language and vision much better. But I haven’t seen anything new about building systems that choose which general goals to pursue, formulate plans to achieve them, and make mid-course corrections in execution.
The most impressive thing I’ve seen is from Eliezer Yudkowsky. He’s working on Friendly AI at the Singularity Institute (notably, he did not present at this Singularity Summit). I’ve been following his posts at Overcoming Bias. He’s a scary-smart autodidact who has integrated a bunch of different fields into some very nuanced thinking. But he’s trying to hit a smaller target than just AGI: the subset that we would consider “friendly”. Obviously, this is a harder problem, and he hasn’t yet figured out (that he’s mentioned) how to define the boundary of the friendly subset of AGI. If anybody can do it, Eliezer can, but a lot of other smart guys have tackled the AGI problem over the years and we don’t have much to show for it.
Therefore, I see the path toward AGI via purposeful design as rather unlikely. Now, if you believe that executive function is an emergent property, there is still the path to AGI where a collection of programs “wakes up” one day; on that view, successfully implementing lower-order cognitive functions counts as progress. I am maximally uncertain about this proposition.
What I think is likely is that humans will gradually augment their own intelligence. Eventually, we’ll either be smart enough or have enough instrumentation plugged directly into our brains that we’ll be able to determine what constitutes executive function. But by then, we’ll already be superhuman, and AGI will be a smaller leap.