http://video.bigthink.com/player.js?width=516&height=344&embedCode=RhNDNpOjftFhwnwDN7N–tO01S8pP9xq

Ray Kurzweil, author of (among other books) The Age of Spiritual Machines, expounds on the promises and pitfalls of the coming expansion of GNR (genetics, nanotech, and robotics) technology, claiming that by 2029 scientists will have effectively modelled the human mind, producing artificial intelligence fully capable of passing a Turing test.

Two points are worth noting about this particular video:

First, it is narrow to focus only on the “terrorist threats” opened up by the development of this sort of technology, and unrealistic to expect that military research in such areas will be put to strictly “defence” purposes. Kurzweil elides the fact that the very people he is working with to ostensibly combat the pitfalls of this technology (the US military) are the most likely to exploit it.

Second, Kurzweil is right to say that these advances pose an “existential” threat to us, but not for the reasons he believes. A threat is not existential insofar as it threatens our existence, but rather (if I can venture an off-the-cuff formulation) insofar as it threatens our very humanity. Consider the fantasy of a set of artificial intelligences that are of equal or greater intelligence than humans, that have perfect recall of facts and figures, and that lack much if not all of the physicality that at least in part constitutes human-ness. It should raise serious questions about what implications the production of such an intelligence might have for human decision-making, for politics, and for freedom in general.

Jürgen Habermas and others have already indicated ways in which genetic engineering poses a threat to human autonomy, and robotics of the sort envisioned by Kurzweil should provoke similar concerns.