I came across an interesting request the other day from a DARPA consultant who was seeking input from the Slashdot/computer-geek community on projects that were pushing the boundaries of “neuromorphic computing” (roughly, brain-inspired AI).

Among the goals of the project are:

measuring and understanding biological brains, creating AI systems, and investigating the fundamental nature of intelligence.

There’s something funny, something more than a little hubristic, about the way this is put, as if it were just another US military project that was, oh-by-the-way, seeking to determine the fundamental nature of intelligence.

But beyond being a bit of a grandiose task, I think it’s framed the wrong way: intelligence is not something like elephants, electricity, or even quarks – it isn’t “out there, waiting to be described” in the way that everyday physical objects are. Saying you’re setting out to understand “the fundamental nature of intelligence” is a bit like saying you want to determine once and for all the “fundamental nature of art”. Well, it turns out that such a project is pretty much doomed to fail, since how society defines, treats, and values intelligence (or art) is highly specific to its cultural and temporal context.

Daniel Goleman’s pioneering work on emotional intelligence, for example, is not so much an instance of “coming to a better grasp of the fundamental nature of intelligence” as it is a redefinition of intelligence, a re-valuation of traits that were previously less strongly associated with it. It was a “persuasive definition”, to use Charles Stevenson’s term, rather than a pure explication in the Quinean sense (if such a thing is indeed possible). But the word that comes so readily to mind when describing Goleman’s work (“pioneering”) reveals how deep our tendency runs to view such work as grasping towards unexplored territory, shedding light into darkness; that is, as revealing something “out there”, hitherto undiscovered.

But if there is no fundamental, universal nature of intelligence, then setting out to uncover the truth of intelligence is likely just to reinforce and re-privilege certain existing notions of what intelligence is. I’m not saying that research into intelligence (even that conducted by comp-sci Ph.D.s with DARPA funding) will not be fruitful; nor am I saying that it will be overly one-sided (there is a fair bit of research going into making robots more emotionally “intelligent”). All I mean to suggest is that those conducting such research be aware that they are entering a particularly value-laden field, one that deals with phenomena more socially constructed (on Ian Hacking’s scale, from The Social Construction of What?) than those of most other sciences.