I promised to write more about my thoughts on how the question of what software is—and does—relates to fMRI research. Tonight seems the perfect time to do so, as I just received feedback on my paper proposal. Besides that, my calendar tells me that the coming days are assigned to other projects of mine, so I’d better use this rainy night to share my ideas with you. (I divided the remaining days among the three final written assignments for this term; and, as a colleague told me yesterday, I did so with stereotypically German precision.)

In case this looks like way too much free time to you, let me assure you that I have had to add quite a few appointments since last week, and my regular classes aren’t listed. The apparently “German” thing, by the way, is the “+x” notation, indicating how many days I have left for the project I’m working on that day. Consequently, “+0” means: don’t go to sleep before you’ve finished this assignment; you have to hand it in tomorrow … (I’m not attaching the schedule for the following two weeks, because some family members might see it and have a heart attack.)
The paper for my History and Philosophy of Technology class under the direction of Heidi Voskuhl bears the working title A Thick Enough Concept of fMRI: The Role of Statistical Technologies Embedded in Neuroscientific Imaging. I am deriving this terminology from an Isis focus section on “Thick Things.” In the introduction to this 2007 issue, Ken Alder calls for scholarship that “invoke[s] two interrelated aspects of the artifactual life”: first, the effort of assembling material parts and the act of “shaping the material world” (A), and second, the inscribed politics, the meanings of technology, and the designers’ and users’ visions and intentions (B). Alder specifies this second facet of “things” by asserting that they are “as much assemblages of ethical, aesthetic, and political prescriptions as they are elements in the service of any narrow utility.”
This is certainly true, but “things” are even more than that.
In the paper I am writing for my HistPhilTech class, I argue that historians and philosophers of technology and science need to integrate a third dimension of contemporary techno-scientific apparatuses into their critical studies: immaterial technologies in the form of software (C). I am focusing on statistical data analysis in neuroscientific fMRI research; the software embedded in the technological fMRI complex (C) is neither res extensa (A) nor an articulation of individual intent (B), yet it is crucial for fMRI. It functions as a mediator between Alder’s “material parts” (A) and the “prescriptions” he writes about (B).
Amongst other frameworks, I’m drawing on Lev Manovich’s theory of digital media, which I addressed briefly in my last post. LM’s claim that the use of digital media transforms aesthetics, systems of knowledge, and networks of interaction applies to the computer-based statistical analysis of fMRI data as well. Anne Beaulieu’s analysis of neuroscience as a “cyberscience,” dependent on and shaped by computer technologies, resonates with these claims and further illustrates the applicability of LM’s framework to the neurosciences.
This is not an entirely theoretical paper (even though I’m a little afraid of hitting my primary sources over the head with all the frameworks I’m building on): I’m also drawing on primary sources. I’m using neuroscientific accounts from the special issue of NeuroImage published two years ago for the 20th anniversary of fMRI in order to demonstrate that brain researchers view their imaging technologies as instruments from which they can separate themselves. Contemporary neuroscientists tend to consider statistical technologies an external tool for testing the reliability (i.e., replicability) and validity (i.e., accuracy) of their imaging technologies, not an embedded part of them.
Adapting Wendy Chun’s examination of fiber-optic technologies, as summarized in my last post, suggests that the researchers’ feeling of absolute control over their apparatuses might be illusory. As one can learn from LM’s examination of digital media and WC’s analysis of the internet, it is impossible for users to distance themselves from digital technology.
With a little help from Johan Huizinga’s and Roger Caillois’s concepts of the far-reaching influence of play on human culture, Natasha Schüll’s analysis of digital gambling can strengthen the applicability of WC’s and LM’s accounts: all three agree that the use of digital technology creates an illusion of control and simultaneously threatens the users’ self-determination, changes their modes of perception and knowledge acquisition, and makes it impossible for them to distance themselves from the apparatus and its software.
In other words, neuroscientists cannot treat computer-based statistical analyses as an external test of the reliability and validity of neuroimaging results, because their bodies and minds are interwoven with the material and immaterial technologies of the fMRI assemblage.
I argue that neuroscientists’ intentions and networks are shaped by the immaterial parts of the technologies they interact with, as becomes clear when one traces how researcher and machine jointly perform statistical calculations. Put more abstractly, I illustrate that human and social visions and objectives (B) are molded by statistical software (C), which is the necessary intermediary to the material parts of an fMRI scanner (A).
Thus, a “thick enough” theory of techno-scientific tools in the digital age should include at least these three levels of investigation.
Would anybody like to propose a fourth one?