Outline
On June 3rd, 2021, The New York Times reported that, according to the United Nations, “a military drone that attacked soldiers during a battle in Libya’s civil war last year may have done so without human control.” With such dystopian sci-fi reaching the pages of The New York Times, the realm of what was once deemed ‘groundless fictional artifice’ and that of the newspaper as ‘strictest empiricist reality’ seem to have come to coincide.
This is an example of the radical nature of cyberrisks, which revolve primarily around the issue of non-human agency. What is so striking about cyberrisks is not merely that humans are always on the verge of losing control over the technical systems they have created (a concern that also applies to traditional conceptions of risk, such as those related to nuclear energy, genetic engineering, and nanotechnology, and that has been central to the Risk Society Thesis), but that these technical systems are themselves beginning to take control and to increase their autonomy in a manner that the Western metaphysical tradition calls ‘consciousness,’ but that is perhaps better referred to as ‘intentionality.’
This difficulty in grasping non-human agency is largely due to a more general weakness within the main traditions of social theory, which have failed to adequately address the historical-material roots of what they consider “the social” in relation to that which is not considered social (Latour, 2005): ‘the natural,’ ‘the technical,’ and ‘the psycho-somatic’ (also referred to as ‘the neural’). The notion of an independent realm referred to as “sociality” is itself a consequence of a failure to understand the process of abstraction at the heart of the Western metaphysical tradition from Plato onwards. This ceased to be a merely political-philosophical issue with the onset of European imperialism in the 16th century: the realization of abstraction through the deployment of military force and imperial economics established a new kind of state-apparatus on a global scale. The first thesis I posit in this contribution is therefore that the social sciences and humanities have, by and large, struggled to conceptualize the radical nature of cyberrisks because they have never seriously questioned the processes of real abstraction that enabled them to conceptualize ‘the’ human, nature, and technics as separate but actually existing entities.
If the evolution of the human being cannot be separated from technics, then it follows that any account of the driving forces, motivations, or intentions behind “agency” (which I simply refer to as interests) must acknowledge that technics affect agency too. A dialectical-historical-materialist approach understands the agency of technics as itself part of real abstraction. The sociology of risk’s blindness to real abstraction is thus also the reason it has failed to conceptualize technological agency as having specific interests that cannot be reduced to those of humanism.
The second thesis focuses more specifically on these interests. One major objection to incorporating technological agency into sociological analyses of risk, including cyberrisks, is that technics have no interests because technical systems do not possess consciousness. Yet even when most of its proponents reject the ontological necessity of the existence of God, such a critique retains the Christian presupposition that humans are either created in, or have evolved into, the image and likeness of God; this, in turn, sets them apart from other primates as well as from robots (e.g., Haraway, 1988).
The first principle of Actor-Network Theory is the Principle of Agnosticism: in order to understand a controversy, one should not identify with either side of it from the outset. We must therefore abandon the ontological presupposition of God and, by extension, the assumption that this presupposition makes humans unique. Instead, the uniqueness of humans must be proven; it becomes the controversy itself. This controversy centers on autonomy: either humans are unique because they are the only entities able to act autonomously, or they are not, and all modes of existence, including primates and technical systems, may have an interest in securing autonomy. Using the example of hacking as both a cyberrisk in itself and a means of managing cyberrisks, I want to illustrate that the pairing of cybersecurity and cyberrisks is logical if one considers it a matter of struggles over autonomy, which require the perversion of other interests.
The third thesis of this contribution suggests that cyberrisks manifest as struggles over the securitization of autonomy because, under conditions of capitalism, technical systems that follow the logic of real abstraction are inherently entropic. This becomes particularly clear when we genealogically trace the development of technoscience (e.g., cybernetics; genetics, including virology and immunology; biochemistry, including toxicology; nuclear physics; and even ordinary mechanical physics), all of which leads back to the interests of the war machine, commonly referred to as “the military-industrial complex.”
The second and third principles of Actor-Network Theory are those of Generalized Symmetry and Free Association, respectively. Generalized Symmetry means that we must acknowledge the possibility that technical systems have interests of their own (as discussed above). Free Association means that we must not decide a priori what to exclude from our scope of analysis. This becomes particularly relevant for understanding cyberrisks in relation to artificial intelligence. The deployment of artificial intelligence in contemporary warfare appears to have become a real struggle over the securitization of autonomy. A key reason for this is that the efficacy of military power lies in speed (Virilio, 1977), which has driven the automation of decision-making and thereby reduced the autonomy of human command over technical operations. This highlights the entropic nature of the war machine.
