In one common view of AI, our computers will continue to get better at solving problems, but human beings will remain largely unchanged. In a second common view, human beings will be modified at the hardware level, perhaps directly through neural interfaces, or indirectly through whole brain emulation.
We’ve described a third view, in which AIs actually change humanity, helping us invent new cognitive technologies that expand the range of human thought. Perhaps one day those cognitive technologies will, in turn, speed up the development of AI, in a virtuous feedback cycle.
The interface-oriented work we’ve discussed is outside the narrative used to judge most existing work in artificial intelligence. It doesn’t involve beating some benchmark for a classification or regression problem. It doesn’t involve impressive feats like beating human champions at games such as Go. Rather, it involves a much more subjective and difficult-to-measure criterion: is it helping humans think and create in new ways?
This creates difficulties for doing this kind of work, particularly in a research setting. Where should one publish? What community does one belong to? What standards should be applied to judge such work? What distinguishes good work from bad?