Note: I despise the term "meatspace." However, as it's a sort of left-hand derivative of an uncritical acceptance of the computational model of mind, perhaps the irony will gel at some point in what follows. No guarantee is implied.
Financial concerns were partly responsible for [Frederick] Terman's attention to output and external support. A desire to see Stanford produce large numbers of Ph.D.s and to build steeples in particular fields of national importance also reflected his commitment to increasing Stanford's influence and prestige. To these concerns Terman brought a particular interest in efficiency and a faith in statistical measures of merit, both of which would have pleased his father, who had dedicated his professional life to the dubious goal of quantifying intelligence and had regarded the university as a business enterprise, albeit an inexcusably inefficient one.
If Terman was carrying into the postwar period the interests of some prewar corporate progressives, he was also representing (although perhaps in extreme fashion) the enthusiasms of the cold war decades. The early postwar years were marked by a general faith in objective measures of merit and the widespread use of intelligence, achievement, and personality tests at all levels of the educational system. The application of standards of efficiency and productivity to the university was also in keeping with the interests of the period.
Free associating off the above passage, I immediately think of three other books. Occupational hazard.
The third issue that generated controversy within the discipline (and outside of it) was over the characterization of intelligence as a biological potential genetically determined from birth. Especially potent at the level of groups, where this use of intelligence ... had proven integral to the development of scientific racism, the biological basis for an individual's degree of intelligence had been promoted by most of the first generation of American testers as well as by their allies in the eugenics movement.
Both "first generation of American testers" and "their allies in the eugenics movement" apply specifically to Fred's pop, Lewis Terman.
The book describes (pp. 271-73) the January 1934 translation by one Dr. George Dock -- "a member of the advisory board for California's Human Betterment Foundation and one of the most highly regarded physicians in the United States" -- of a recently published Nazi compulsory sterilization statute: the "Law for the Prevention of Hereditarily Diseased Offspring."
...as he translated the official proclamation of this far-reaching German law, Dr. Dock did not really expect to discover what he did: immediately after the full text of the law, the Nazi government had singled out the example of California and the statistics provided by the Human Betterment Foundation to justify what it knew would be a controversial national program....
[Dock wrote:] "I think the reference to the California work, and the work of the Foundation is a very significant thing. The matter has given me a better opinion of Mr. Hitler than I had before."
All the leaves are brown. And the sky is grey.
Finally, with respect to Fred Terman's "particular interest in efficiency," how could I not think of that turn-of-the-[last]-century elitist scumbag Frederick Taylor, "the father of Scientific Management"? Library Journal said this about Taylorism Transformed: Scientific Management Theory Since 1945 (University of North Carolina Press, 1991):
...this intellectual history of modern management theory traces the two sides of Taylorism (power and value) and its effect on modern management writers such as Herbert Simon, Peter Drucker, Elton Mayo, and Abraham Maslow. He discusses the bureaucracy of centralized power and the corporatism of worker participation, which he describes as "the opposite side of the same Taylorist coin." ...this title is a detailed argument that "Taylor's bureaucratic and corporate successors transcended his techniques but not his premises."
We've run into Maslow before here on Mystic B -- and will again. For now, just keep in mind that his mentor was Edward L. Thorndike, a colleague of Lewis Terman who was equally involved in intelligence testing and its use to support "scientific racism."
Herb Simon is another story -- one that will link us back to where this started: with Fred Terman at Stanford. One of the signal events of the cold war, and a major driver of government funding for Big Science, was the successful launch of Sputnik by the Soviet Union in 1957. As a direct result, the Advanced Research Projects Agency (ARPA) was created a year later under the US Department of Defense. Over the intervening decades, DARPA, as the agency came to be known, expanded its funding to include what was, by the early '60s, coming to be called "cognitive science" -- a new field which owed much to Herbert Simon. I would occasionally run into the guy in the halls of the Robotics Institute at Carnegie Mellon University. But cutting to the chase, this graphic from DARPA's 50th anniversary Strategic Plan (PDF, March 2007) will, at a glance, give you the basic idea of what developed over the ensuing five decades. God Bless America: your PAL.
Howard Gardner documented this paradigm shift -- and it actually was one, in this case, in the true Kuhnian sense -- in The Mind's New Science: A History of the Cognitive Revolution. In a section titled "De-Emphasis on Affect, Context, Culture, and History," he writes about some of the pushback (now largely forgotten) that the new field was met with (pp. 41-42)...
Though mainstream cognitive scientists do not necessarily bear any animus against the affective realm [i.e., emotion], against the context that surrounds any action or thought, or against historical or cultural analyses, in practice they attempt to factor out these elements to the maximum extent possible.... And so, at least provisionally, most cognitive scientists attempt to so define and investigate problems that an adequate account can be given without resorting to these murky concepts.
Critics of cognitivism have responded in two principal ways. Some critics hold that factors like affect, history, or context will never be explicable by science: they are inherently humanistic or aesthetic dimensions, destined to fall within the province of other disciplines or practices. Since these factors are central to human experience, any science that attempts to exclude them is doomed from the start. Other critics agree that some or all of these features are of the essence in human experience, but do not feel that they are insusceptible to scientific explanation. Their quarrel with an antiseptic cognitive science is that it is wrong to bracket these dimensions artificially. Instead, cognitive scientists should from the first put their noses to the grindstone and incorporate such dimensions fully into their models of thought and behavior.
The main feature of our architecture is the notion of self-aware cognition that we believe is necessary for human-like cognitive growth. Our approach is inspired by studies of the human brain-mind: in particular, by theoretical models of representations of agency in the higher associative human brain areas. This feature (a theory of mind including representations of one’s self) allows the system to maintain human-like attention, focus on the most relevant features and aspects of a situation, and come up with ideas and initiatives that may not follow from formal logic. The result is a robust cognitive system capable of significant cognitive growth.
Note the present-tense use of "is" in that last sentence. This rhetorical trope is commonly known as "wishful thinking," and it's been relentlessly deployed for most of DARPA's 50-year history. As it turns out, your PAL is a wholly imaginary friend. So much for getting rid of "murky concepts."
In the references at the end of that paper, I was interested -- fascinated is a better word -- to find this: Mindblindness: An Essay on Autism and Theory of Mind by Simon Baron-Cohen (MIT Press, 1995). Amazon reader Hubert Cross, self-identifying as autistic, includes the following in his review:
As a kid, I didn't see people like objects, but I didn't quite see them as people either. They were there, but they were not very important. That is as far as I can go explaining how it was for me. The only thing I can add is that I am not giving you anything more than a faint idea of how it really was.
What does Simon Baron-Cohen do? He introduces the concept of "skinbags." Bags of skin that move and talk like people but that are not quite people.
"Skinbags" is precisely what people were for me. They moved and talked, but they had no feelings. It was not that I believed that they had no feelings; it was that it never crossed my mind to consider the possibility.
A Newsweek review of Mindblindness ("Blind to Other Minds," Geoffrey Cowley, 14 August 1995) contains statements such as: "Scientists sometimes think of the mind as an all-purpose learning machine, equally receptive to any kind of information." And: "...by the time they turn 4, [children] display a 'theory-of-mind mechanism,' an awareness that other people are capable of different mental actions, such as pretending or believing or misunderstanding."
Machines and mechanisms: it has become quite common to encounter such "digital analogs" of mental abilities and dynamics. The ease with which such assumptions are made and accepted is itself a product of the so-called cognitive revolution -- and the cold war Big Science that fostered it. In The Closed World: Computers and the Politics of Discourse in Cold War America (MIT Press, 1996), Paul Edwards writes (p.2)...
...computers inspired new psychological theories built around concepts of "information processing." Cybernetics, AI, and cognitive psychology relied crucially upon computers as metaphors and models for minds conceived as problem-solving, self-controlling, symbol-processing systems.
After a while, Bruner himself became disenchanted with the Cognitive Revolution, or at least with what it had become. "That revolution," he wrote at the beginning of his 1990 Acts of Meaning -- a "goodbye to all that" proclamation -- "was intended to bring 'mind' back into the human sciences after a long and cold winter of objectivism... [But it] has now been diverted into issues that are marginal to the impulse that brought it into being. Instead, it has been technicalized in a manner that undermines that original impulse. This is not to say that it has failed: far from it, for cognitive science must surely be among the leading growth shares on the academic bourse. It may rather be that it has become diverted by success, a success whose technological virtuosity has cost dear. Some critics... even argue that the new cognitive science, the child of the revolution, has gained its technical success at the price of dehumanizing the very concept of mind it had sought to reestablish in psychology, and that it has thereby estranged itself from the other human sciences and the humanities."
Here's a question to ponder at the end (for now) of these loosely connected and intertwining threads. Is it possible that "autism" is, if not an effect of, at least a metaphorical mirror held up to the "antiseptic cognitive science" that has increasingly come to define human beings as mindblinded skinbags? In the case of the computational model of mind, minus even the skin.
If you want to really spook yourself on this general theme, go read The Geek Syndrome by Steve Silberman (Wired, December 2001). The sub-title slug reads: "Autism - and its milder cousin Asperger's syndrome - is surging among the children of Silicon Valley."
As ever, to be continued...
posted by Christopher Locke
Saturday, April 21, 2007