

Image by Benjamin Blättler



Lab Author: Nandita Biswas Mellamphy

The future is not just ahead and ‘to come’; it can also be found in the past. Looking back at how ancient cultures imagined futuristic technologies, one can readily find depictions of artificial life, robots, self-propelling objects and/or flying machines.

Example: Ancient Indian flying vehicles.  


Image: Rukma Vimana, Sundara Vimana and Shakuna Vimana Source: Bharadwaaja (1973)



Is it possible to make a time machine? Physicists suggest that while it may be possible to engineer particle physics to allow a time machine to work, an adequate theory of quantum gravity still needs to be developed.


Image: Dark, Netflix television series, 2017-2020



Origami, the ancient art of folding, holds a key to understanding bio-nano-technical design. Folding is not just a design principle; it also determines function in organic systems.


Quote/Example: “Either it is the fold of the infinite, or the constant folds [replis] of finitude which curve the outside and constitute the inside.”

― Gilles Deleuze, Foucault

Link: The Origami Code



Once dismissed by scientists as 'junk DNA', viruses are in fact sophisticated biological machines, coordinating a range of different molecular machines: entry machines, replication machines, assembly machines, and genome-packaging machines.




Can machines be universally moral? 

Researchers at MIT have created a platform for crowd-sourcing responses to moral dilemmas. While some AI ethicists argue in the affirmative, others maintain that we cannot outsource moral decision-making to automated intelligence.




Can blockchain design principles be used to create a more equitable and trustworthy framework for politics?


Champions argue that blockchain can introduce not only more transparency and autonomy into current practices, but potentially set the stage for novel techno-political imaginaries.

Image: Guedda Hassan Mohamed, "Blockchain" (2017) 




“What post-structuralism has tended to leave undertheorized […] is the nature of the impersonal and perpetual mediation machine itself, the machinic aspects of network-centricity that are anonymous, non-organic, and non-human”.


Quote: Dan Mellamphy and Nandita Biswas Mellamphy, “Nietzsche and Networks, Nietzschean Networks: The Digital Dionysus” in The Digital Dionysus: Nietzsche and the Network-Centric Condition (2016).

Image: Perry Hall, Beneath a Violet Skin (2014)



“The expansive, unlimited affectspace or dracage-zone of War-as-Machine arises from and builds upon “differentiation, falsification, divergence, mass hysteria, terminal catabolism and disintegration in the direction of something other than death” – here a larval, crypto-fractal kind of ‘terror’”.

Quote: Dan Mellamphy and Nandita Biswas Mellamphy, “Phileas Fogg, or the Cyclonic Passepartout: On the Alchemical Elements of War” in Leper Creativity: Cyclonopedia Symposium (2012).




When it comes to classifying objects, deep learning neural networks can be tricked into seeing and hearing things. Although they are inspired by the human visual cortex, the resemblance between artificial neural networks and human neural networks may be merely superficial.
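The fragility described above can be illustrated with a toy model. For a linear classifier, a perturbation of uniform, tiny per-pixel magnitude, aligned against the decision boundary, is enough to flip the predicted label, in the spirit of the fast-gradient-sign method. Everything below (the weights, the 'image', the threshold) is invented for illustration, not drawn from any real network:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "image" of 100 pixels; toy linear classifier with score = w . x + b.
# The label is positive if the score exceeds zero. All values are arbitrary.
w = rng.normal(size=100)
b = 0.0
x = rng.normal(size=100)

def predict(x):
    return 1 if w @ x + b > 0 else -1

label = predict(x)

# Fast-gradient-sign-style perturbation: shift every pixel by the same tiny
# step epsilon, in the direction that most increases the opposing score.
epsilon = 2.0 * abs(w @ x + b) / np.abs(w).sum()  # just enough to cross the boundary
x_adv = x - label * epsilon * np.sign(w)

print(predict(x), predict(x_adv))  # the two labels differ
```

Real attacks on deep networks work the same way, except that the gradient must be computed through many nonlinear layers; the perturbation can remain imperceptible to humans while completely changing the model's output.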



Nonviolent protests are twice as likely to succeed as armed conflicts – and studies show that those engaging a threshold of 3.5% of the population have never failed to bring about change.

Bahar Noorizadeh, Governance machines and the future of futures (2018)



There is growing awareness of the effects of bias in machine learning. Gender biases are embedded in the way language is used. Training machine learning algorithms to desist from perpetuating gender biases requires an understanding of how gender ideology is manifested in language.
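One widely used diagnostic for such embedded bias, following work on debiasing word embeddings, projects word vectors onto a 'gender direction'. The vectors below are invented toy values, not a real trained embedding; the point is only to show the mechanics of measuring and removing a bias component:

```python
import numpy as np

# Toy 3-d "embeddings" (invented values; real embeddings have hundreds of
# dimensions and are learned from large text corpora).
emb = {
    "he":       np.array([ 1.0, 0.1, 0.0]),
    "she":      np.array([-1.0, 0.1, 0.0]),
    "engineer": np.array([ 0.6, 0.8, 0.2]),
    "nurse":    np.array([-0.6, 0.8, 0.2]),
}

# A crude gender direction: the difference between "he" and "she".
g = emb["he"] - emb["she"]
g = g / np.linalg.norm(g)

def gender_score(word):
    """Projection of a word vector onto the gender direction;
    positive leans 'he', negative leans 'she'."""
    v = emb[word]
    return float(v @ g / np.linalg.norm(v))

print(gender_score("engineer"))  # positive: leans male in this toy data
print(gender_score("nurse"))     # negative: leans female

# One debiasing step: remove the gender component from a word vector.
def neutralize(v):
    return v - (v @ g) * g

print(float(neutralize(emb["engineer"]) @ g))  # ~0 after neutralization
```

In real embeddings trained on large corpora, occupation words often show exactly this kind of skew, which is why a neutralization step (or a retraining objective) is applied before such vectors are used downstream.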



The current global pandemic has exposed deep failures across the world such as socio-economic inequality and systemic discrimination. The pandemic has disproportionately disadvantaged the working class, especially vulnerable populations like women, transgender people, and the elderly. Nevertheless, workers around the world are resisting and standing in opposition to capitalist responses to the COVID-19 pandemic.

Image: In South Minneapolis, an artwork by Melodee Strong, Mama, portrays grieving mothers (Credit: Melodee Strong)




“There is no transparency or accountability in the algorithm’s space of play, and so we must begin instead from notions of opacity and partiality.” Louise Amoore, 2018.

Example: Louise Amoore, Professor of Political Geography and author of The Politics of Possibility: Risk and Security Beyond Probability and Cloud Ethics, speculates on the ever-heightening ability of algorithms to coat the public sphere with vague and obscure ideological premises.

Image: Suzanne Treister, ALGORITHM 2.0/Algorithm Dream Machine, 2015-16.



(In memory of Bernard Stiegler)

Neuropower is a circuit of power that views humans not primarily as individuals or as populations, but as vehicles and conduits for information circulation, as brains and neural networks that can be coded, decoded and recoded. The challenge for social networks is to transfigure neuropower into a noopower by politically reimagining individuation.

Suzanne Treister, 2016, Installation, From Psychedelics via the Counterculture to the Possible Futures of Humanity



Discussions of ethical AI could be enriched by thinking anew about human/nonhuman relationalities, and debates would benefit from confronting questions of whether nonhuman intelligences can be conceptualized in terms other than humanistic.

Example: Underground fungal networks allow plants to communicate across vast spaces in ways that resemble the human internet.

Image: The mycelium of a fungus spreading through soil (Credit: Nigel Cattlin / Alamy)




A critical approach to research should incorporate history, inequality, power, and culture. Effective analysis of disinformation, some researchers argue, requires an approach that is historically grounded, foregrounding questions of how power, institutions, and socio-technical structures shape disinformation.




Scientists and artists are working together to create a 3D printer that can produce ultra-lightweight floating sculptures. Members of MIT’s Center for Art, Science, and Technology are exploring how computation and digital information can transform objects in real time. Just as earlier technologies revolutionized the distribution of images, sounds, and texts, leading to an era of limitless remixing and sampling, 3D fabrication techniques promise to remake both the world of materials and the material world.

Orbiting_Conceptual Drawing. Courtesy of Thom Kubli and Hiroshi Ishii.



The body's cells confront a tricky problem when infected by viruses: they must eradicate the infection without harming themselves. To explain how cells do this, scientists have visualized a tiny cellular machine that chops viral genetic material into pieces.



Autonomous machine intelligences have been making original art for over fifty years.  AARON, created by artist and computer programmer Harold Cohen, is one of the longest-running AIs and creates original artistic images.

Example: Joseph Nechvatal’s Computer Virus Project 2.0, 2001.


AARON, Socrates’ Garden, 1984.



Can sound have an existence separate from its source? Pierre Schaeffer, a French radio engineer, coined the term “sound object” (objet sonore) which relates to the experience of “acousmatic listening” or hearing without seeing the causes behind it. ‘Acousmatics’ were the disciples of the ancient philosopher Pythagoras who were initiated into the mysteries of his teachings only by hearing the voice of their master while he was hidden behind a curtain.


Example: Contemporary sound artists like Noise Orchestra explore methods of synthesizing sound out of light with self-built machines. 

Image: Florian Hecker, Courtesy the artist, Sadie Coles HQ, London; Galerie Neu, Berlin



Today warfare is conducted not only in military battlespaces by martial personnel using armed force and weapons; increasingly, warfare is emergent and masked, creeping into the realms of everyday culture and sweeping across social networks using familiar and ordinary platforms of social communication as weapons for gaining advantage over opponents.  Hypercamouflaged warfare comes to encroach upon civilian arenas and to seep into civil society.  Digital cultural techniques and technological regimes and social media become fertile ground for the use of warfare techniques like covert surveillance, tracking and targeting to influence civilian domains and social relations.

Image: Sorting Daemon, 2003; David Rokeby



Is it a bird? Is it a plane?  It’s neither.  It is the face of future warfare. Not only is AI being used in specific domains of military activity to enhance performance in particular environments (e.g. air, land, sea, and cyberspace), but questions of global balance of power are being impacted by the strategic risks associated with the development of autonomous machine-controlled decision-making systems capable of learning from experience and improving performance relative to specific goals.


By 2030, it is predicted that machine capabilities will have increased to the point that humans will not only have become the weakest component in a wide array of systems and processes, but will fade out of the decision-making loop entirely. Fully autonomous lethal weapons systems, or ‘killer robots’ as they are known, will be making combat decisions without requiring any human input.



In 2010, the United States reportedly succeeded in installing a piece of malware (dubbed Stuxnet) at Natanz, an Iranian nuclear plant, disrupting the enrichment process and causing centrifuges to spin out of control. Iran retaliated with malware attacks on Bank of America, among other American institutions. Undetected, imperceptible, and uncontrolled, the malware squirmed into and infected other systems all over the world. Such cyber-attacks, despite official denials, are not just a matter of “hacking” or “spying” but are offensive capabilities that make it possible to conduct warfare without declaring war.

Untitled (zero day exploits), 2019; Chris Dorland



The ability of a machine to ‘see’ allows it to identify the signatures (not necessarily just visible ones) of enemies, targets, landmarks and anything else it is trained to recognize. Machine vision allows a program to extract salient features from a landscape or image that it can then use for classification and pattern recognition. Paul Virilio’s 1994 book The Vision Machine warned that machine vision was giving rise to ‘vision machines’ capable not just of recognizing patterns but of interpreting entire visual fields. “For the first time in history, we are dealing with images that are not only created by machines, but that are also meant to be seen by machines. We are now in a situation in which we share the perception of our environment with our machinic other. This has given rise to the philosophical problem that Virilio called the ‘splitting of viewpoint’. How can we understand a world seen by a synthetic, sightless vision? What modes of representation are created by it? And how does this affect the way we see the world?” (Goeting, 2018).
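The feature extraction described here typically begins with convolution: a small filter slides across the image, and structures such as edges produce strong responses that later stages use for classification. A minimal sketch with a Sobel-style vertical-edge filter on a synthetic image (illustrative only, not any particular vision system):

```python
import numpy as np

# Synthetic 8x8 "image": dark left half, bright right half.
img = np.zeros((8, 8))
img[:, 4:] = 1.0

# Sobel filter that responds to vertical edges (left-right brightness jumps).
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

def convolve2d(image, kernel):
    """Valid-mode 2-D convolution (strictly, cross-correlation,
    as in most vision libraries)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

response = convolve2d(img, sobel_x)
# The filter fires only around the column where the brightness jumps.
print(np.abs(response).max(axis=0))
```

Deep networks stack many such filters, learning their weights from data instead of fixing them by hand; the adversarial fragility discussed earlier arises because those learned filters respond to statistical regularities rather than to edges a human would recognize.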

Image: Probably Chelsea, 2017; Heather Dewey-Hagborg. Thirty different possible portraits of Chelsea Manning algorithmically-generated by an analysis of her DNA.



Is desire programmable? In the last century, the American industrial model consisted in finding ways to entice people to buy a limited range of mass-produced items; by the late 20th century, however, neoliberal consumerism took root around the capitalization of the programmability of human desires. Neoliberal consumerism is always trying to satisfy desire by calculating and predicting it, rendering desire, on the one hand, ever-elusive and never-ending, while on the other, manufacturing, synchronizing, standardizing and commodifying it. The neoliberal machinery—involving government, corporations, public relations and advertising—capitalized on the exploration of the inner feelings of lifestyle groups in order to invent a whole new range of brands and identities, and thus win elections and direct economies.


This kind of tactical informatic exploitation has become strategic and globalized. Today, in the age of algorithmic enlightenment, consumption has become hyper-consumption and addiction (most of all, to connectivity). The human has itself become an informational hub that can be measured, standardized and exploited. Algorithms reduce human expression and action to ‘machine-readable’ form, and in this sense the human becomes both post-humanized and machinified, as well as pre-humanized and animalized, by the myriad effects of this global process of human adaptation and acculturation to algorithmic logic.

Image: Suzanne Treister, HFT The Gardener/Video stills and Photo works/High Frequency Trading Floor, 2014-15.



Through advances in deep learning algorithms, autonomous flocks of drones behave like birds: they can fly, swarm, and move collectively. The possibilities of such ‘deep learning’ appeal to the Defense Advanced Research Projects Agency (DARPA), a research agency of the United States Department of Defense charged with developing military applications for emerging technologies. In 2013, DARPA investigated real-time machine systems that could mimic mammalian intelligence via cortical processing that recognizes patterns in spatial and temporal data, allowing systems to respond flexibly and creatively on the battlefield without any human input.
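Flocking of this kind is classically modelled with Reynolds-style 'boids' rules, in which each agent steers by cohesion (toward the group's centre), alignment (toward the group's average heading), and separation (away from crowding). The sketch below uses invented weights and is a minimal illustration, not a model of any actual drone or DARPA system:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 20
pos = rng.uniform(-1, 1, size=(N, 2))    # starting positions in the plane
vel = rng.uniform(-0.1, 0.1, size=(N, 2))

def step(pos, vel, dt=0.1):
    """One boids update: cohesion, alignment, separation
    (all-to-all neighbourhoods, for simplicity)."""
    cohesion = pos.mean(axis=0) - pos     # steer toward the flock centre
    alignment = vel.mean(axis=0) - vel    # match the flock's average heading
    diff = pos[:, None, :] - pos[None, :, :]            # pairwise offsets
    dist2 = (diff ** 2).sum(axis=-1) + 1e-2             # softened; self-pairs contribute zero
    separation = (diff / dist2[..., None]).sum(axis=1)  # push away from close neighbours
    vel = vel + dt * (0.5 * cohesion + 0.3 * alignment + 0.01 * separation)
    return pos + dt * vel, vel

for _ in range(200):
    pos, vel = step(pos, vel)

# After many steps the agents settle into a compact group moving together.
spread = np.linalg.norm(pos - pos.mean(axis=0), axis=1).mean()
print(round(float(spread), 3))
```

Because every agent follows the same three local rules, coordinated group motion emerges without any central controller, which is precisely what makes the approach attractive for autonomous swarms.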

September 2018, Burning Man - Photo: Rahi Rezvani



In the 1960s, the United States Defense Department’s Advanced Research Projects Agency undertook a program called ARPAnet. A precursor to the contemporary ‘internet’, ARPAnet’s purpose was to allow the Pentagon to share data and research internally on the same network. In 1967, two years before ARPAnet went live, a computer engineer at RAND Corporation, the California think tank, wrote a paper called “Security and Privacy in Computer Systems,” praising the goals of ARPAnet but also warning of the security risks of what he called ‘on-line’ networks. Once multiple users could gain access to data from unprotected locations, an intruder could break into the network and reach both unclassified and secret information relevant to national security. Eventually, under Ronald Reagan’s national security directive entitled “National Policy on Telecommunications and Automated Information Systems Security,” the National Security Agency, which had originally been created in 1952 to intercept foreign communications, was placed in charge of securing all of the nation’s computer servers and networks.



Frank Herbert’s science-fiction classic Dune is a literary work about re-engineering ecology, namely the political, religious, military and ecological re-design of the planet Arrakis. The first book of Dune describes the Fremen, a people autochthonous to Arrakis whose entire culture depends on tracking and exploiting another life form indigenous to the planet, that is, the sand-worm.  The Fremen ecology is parasitic to that of the sand-worm, but it has evolved to be able to track not only its own ecological movements across the planet’s desert landscape but has also developed the abilities to hunt and manipulate the underground mechanisms of sand-work ecology.  The entire Fremen culture, including its brutal mystical religion, is engineered to trap and hook worm-movement across given expanses; the hook also happens to be the very technical device that permits Fremen to interact directly with the sand-worms: they use actual hooks to attach themselves to moving sand-worms, to climb and then ride them. The hook and the gesture of ‘hooking’ become the central but elusive strategy of an ecological perspective that uses the interaction of elemental movements of sand and water to track, trap and manipulate the movement of sand-worms across the Arrakeen landscape. Gestural ecology – its ‘hooklike’ mechanism – becomes the device that allows Fremen to control and communicate with their mystical god, the Shai-hulud, becoming a vehicle for the propagation of Fremen theocracy. The mobile ‘logic’ of a gestural ecology operates not by way of theoretical knowledge (e.g. the contemplative mode of the philosopher), but by way of a more cunning intelligence that transforms gesture into technique.

Image: Dune, Director: Denis Villeneuve, 2021.



Whether in the role of portable psychotherapist or love object, AI has long been conceptualized as a type of ‘companion species’ for humans. In The Three Stigmata of Palmer Eldritch, published in 1965, American science-fiction author Philip K. Dick creates Dr. Smile, a suitcase-sized machine that people carry around as their own personal psychotherapist. In Dick’s near-future world, Dr. Smile is a symptom of the highly neurotic nature of contemporary human beings, whose mental health is so fragile and technologically reinforced that they need a 24/7 portable, ‘on call’ source of psychological coaching and companionship. In another famous book, Do Androids Dream of Electric Sheep? (made popular by the film adaptations Blade Runner and Blade Runner 2049), another technology appears: the mood organ, a device that allows the user to tune into and choose to feel different moods. In Dick’s world, these technologies are not examples of human ingenuity and innovation but are instead indicative of frailty and deterioration, ultimately serving as technical companions for people who live in a society that has been degraded and desensitized. This criticism was brilliantly captured by Mamoru Oshii in his Ghost in the Shell films, especially the second, Innocence, in which he shows that while humans are dependent on machines, machinic intelligence and artificial forms of life, they nonetheless mistreat and enslave them without a second thought. The established oppositions between culture and nature, and human and machine, are fraught and can be technically manipulated so as to shed light on a dimension that remains indiscernible to Humanism: that it is by way of technical objects and technical existence that human beings most authentically relate to their living milieu and to living processes.

Ghost in the Shell: Inosensu, Mamoru Oshii, 2004.



The ‘body’ has been a central concept and site of subjectivity in the genealogies of humanism. Yet in the age of ubiquitous mediation, there are more than just fleshy, lived bodies with which to contend. Governmentality and virtuality are intertwined insofar as contemporary political technologies increasingly involve control and production of not only physical fleshy bodies and power-relations, but also of ‘data-bodies’: virtual assemblages composed of information connected to an individual, group or network. For example, calls for ‘digital democracy’, ‘e-governance’, ‘predictive policing’, as well as ‘smart cities’ all promise to replace failing political arrangements with digital feedback and technological solutionism. But are these endeavors also part of a nebulous process of data hegemony, a condition in which data and the datalogical are in command of decision-making processes? Data-bodies proliferate and meta-stabilize in serpentine schemas of informational connectivity/control that seek to both exploit human-centered resources, as well as replace them with algorithmic, artificial-intelligence-driven technologies that appear human-friendly, ultimately seeking to minimize human oversight. Surveillance capitalism is based on exploiting the virtual qualities of data-bodies, especially their legal and normative loopholes, since most laws apply to physical rather than virtual bodies. Neither merely object/commodity, nor merely subjective extensions of organic bodies, data-bodies are double-faced mechanisms that potentially erect and collapse subjectivities and objectivities.

Image: Thousand Little Brothers, 2014; Hassan Elahi



The dream of ‘ubiquitous connectivity’ is to unite humans and non-humans in an ever-tightening mesh of mechanisms that would cater to every need and desire, from the most mundane to the most exotic. This imagined “networked future,” wherein seemingly-innocuous and wonderfully-useful apparatuses do our bidding, is, however, a trap that lures the human being with digital elixirs: tantalizing prosthetics that appear to extend, expand and enlarge the dominion (never mind the desires) of what in fact is an ever-waning species—a species on its way out in a world with very few exits, moreover. Champions of what today is being called ‘algorithmic governance’ suggest that this model can free humans from the foibles of hierarchical and traditional forms of power, but this utopianism may end up being the defining duplicity of the digital era. Algorithmic regulation, in other words, is the control and regulation of network behaviour conducted by automated informational processes that produce so-called ‘desired outcomes’ for humans, based on real-time, modulated feedback. It is a paradigm of self-organization in which networks are governed, managed, and reproduced by the capture and processing of digital information. Algorithmic governance colonizes and propagates by creating more opportunities for digitally regulating information, thus creating the conditions for continued algorithmic expansion into networks of increasingly planetary scale. Like Francis Bacon’s New Atlantis, which describes a utopia ruled by ‘Salomon’s House’, a college of benevolent scientific keepers of knowledge, algorithmic governance promises the rule of algorithmic knowledge applied to the betterment of human beings. And yet, might this utopian promise turn out to really mean ‘more production of data in machine-readable form’? From such a machine perspective, the human being finds itself at once both post-humanized and machinified, as well as pre-humanized and animalized.




Is the evolution of technology connected to the human experience of boredom? How do humans co-evolve with technologies by way of the catalyzing effects of boredom? Boredom is the negative effect of human relations that have lost their novelty; but boredom also becomes the positive catalyst for technological innovation and televisual inventions, like the motion-picture camera, as well as for the development of comedic techniques like the pratfall, custard pies, double-takes, and ‘the chase’. It is not just that boredom is both empowering and disempowering, enabling and disabling. It is that the Möbius-like relation between boredom and technology suggests a kind of ontological aporia of origin in which the bored human, or the human experience of boredom, is not possible without the technical, and vice versa. As in the chicken-and-egg paradox, the catalyst for technical solutions becomes connected to the very experience of humans growing bored with the very technologies that were once solutions to the problems of boredom. We see this paradox reflected in popular culture as well as in neuroscientific research. Boredom, on the one hand, is said to be necessary for creativity; on the other, it is said to contribute negatively to cognitive, affective, and social disorders. The paradox of boredom and technology may imply that the innovative and potentially emancipatory/democratic effects of technological developments that attenuate boredom inevitably lead to the uncanny return of totalitarian effects (perhaps in the form of a microfascism of boredom). The question of politics is inevitably tied up with the paradoxical relation between technics and boredom, with what one could call, following the late French political philosopher of media Bernard Stiegler, the technics of attention.
For Stiegler, technologies are crucial in these processes of phenomenological and ‘psychic’ experience, through the various mental, sensory and physiological means by which we capture the world and it captures us.   But they are also agents of routinization, synchronization and homogenization, or programmatisation.  So the paradoxical relation between technics and boredom, from a political point of view, has both curative and poisonous socio-political effects: novel forms of collectivity may be engendered but the modes of interaction are limited.




As trends towards total digitization continue (e.g. the internet of things and smart cities), the complete integration of physical and virtual environments will mean that every aspect of the environment, including the body, becomes potentially data-rich and ready to be tapped, even ‘fracked’, a term borrowed from hydraulic fracturing, the oil-extraction process that causes earthquakes and other environmental hazards. It might be worth taking the analogy seriously and asking how, in the age of cognitive capitalism, humans have become subject to fracking, in this case fracking for information. Not only are humans colonizing each other by way of technologies and information, but in the empire of the digital, all humans are being colonized by digital mediation as it takes hold over the planet and in every sphere of knowledge.

Image: Didier William, Ki moun ki rele Olympia, 2018



How to resist, subvert, and even overturn the surveillant gaze of the increasingly ubiquitous machines of surveillance capitalism? This gaze could be described as the signature device of contemporary forms of power, our mobile all-seeing eye. As Paul Virilio wrote in his 1989 book War and Cinema: The Logistics of Perception, “In a technicians’ version of an all-seeing Divinity, the drive is on for a general system of illumination that will allow everything to be seen and known, at every moment and in every place” (1989: 4). The global architecture of surveillance technologies plays a key role in the establishment of such a ‘general system of illumination’, driven by information extraction, analysis, manipulation, commodification and control. Big data has a foundational role in this new regime, which promises (but fails) to shed light on the opacity of human problems through the power of computing. In the name of counter-terrorism and patriotism, new identification technologies like biometrics and backscatter x-rays are being used in airports and border security to compel bodies – especially bodies that are covered or veiled – to unveil their ‘hidden truths’. Muslim women’s bodies have been subjected to the disciplinary violence of the biometric surveillant gaze. Imaging technologies aimed at stripping the body and revealing what is beneath are being used, buttressed by police power and the rhetoric of scientific certainty, to justify making Muslim women more and more visible, and thus more and more subject to surveillance intervention. States and corporations are increasingly relying on a political rationality that equates ‘security’ with ‘visibility’. And yet, against the backdrop of surveillance capitalism and its surveillant technologies, this politics of transparency has taken on new and perhaps even insidious connotations.



Norbert Wiener’s Cybernetics: or Control and Communication in the Animal and the Machine had first been published in 1948 by French publisher Hermann et Cie.  Surprisingly, the first French translation of the book only appeared in 2014 (as La Cybernétique: information et regulation dans le vivant et la machine).  ‘Cybernetics,’ a word Wiener had taken from the ancient Greek word kubernētēs (meaning ‘governor’), was not merely about communication in animal and machines, but also a schema for communication between various isolated sciences in which the driving force was ‘information’, not ‘reason’. Through the universalization of the concept of information, cybernetics was conceived, especially by Wiener, as a kind of ‘Enlightenment’ that would liberate humans from servitude and renew the scientific and moral spirit of humanity (especially in the aftermath of the consequences of the development of the atomic bomb).  France had played a decisive role in the development of cybernetics – even Wiener was unaware that the term has been used in the 19th century by the French physicist André-Marie Ampère— not only in the publication of Wiener’s book, but also in using cybernetics as a platform for discussing well-established themes in French intellectual history such as automation, subjectivity, cognition, volition, the differences and similarities between human and artificial intelligence, as well as the emancipatory or enslaving potentials of the new techno-sciences. While Wienerian cybernetics had always defined itself in relation to humanist ideals, in France, cybernetics came to be associated strongly with anti-humanism, even posthumanism.  
In privileging the concept of ‘information’ and in blurring the theoretical boundaries between the living and non-living, Wienerian cybernetics (which had already started to decline by the 1960s giving way to a second wave of cybernetics) had hoped to offer a new vision of humanism but instead may have revealed a kind of post-humanist impulse that would come to influence the development of French structuralism and post-structuralism.

Image via deviantart



To harness the trends and potentials of digital social mediation has become a priority both for conducting warfare and for conceptualizing military doctrine. ‘Commanding the trend’ is a mechanism of persuasion in social networking that is fast becoming a weapon of war: subversive agents exploit pre-existing social networks, especially through algorithmic/automatic techniques, to covertly introduce propaganda into social media platforms and to ensure the quick and cost-effective circulation of messages, narratives and false information. It is not simply that social media have come to be used as tools of warfare, but also that military rationale and practice have struggled to keep up with and adapt to the quixotic transformations and permanent disruptive effects of ever-expanding digital networking practices.

Image: Camouflage (2013), Suzanne Treister.



Are humans becoming obsolete? A major part of contemporary digital communications happens not between people, but between devices, about people, and over the internet of things (IoT).  With increased reliance on machine to machine communications, humans are quickly going from being ‘in the loop’ of control and command to being ‘out of the loop’ altogether. As technologies become ‘autonomous’ requiring little to no human oversight, will the implementation of self-learning machines and AI systems also lead to human obsolescence in governance processes?  What kind of politics can be mobilized in the context of a planetary future in which humans are no longer ‘in the loop’?  The philosophical challenge seems to lie in imagining politics as not necessarily grounded in assumptions about humans being in-the-loop of command and control. How to conceive of such a politics? Can it be called a ‘politics’ at all?  Or is it instead about something altogether ‘beyond’ politics – perhaps a post-human politics?

Tian Xiaolei, “the Poem,” video, 7’11”, 2014



In a network-centric and infodemic age, contagion and virality have become not just icons but models of networked connectivity. The asymmetrical tendencies of contemporary digital capitalism result from the entanglement of the powers of centralization and striation with the affective, dispersed logics of contagious communication. In an age of networks, contagious communication has itself become part of a new protocological paradigm of information, mediation, and networked control. The virality of communication has become furious, like the ancient Furies, described by the Greek tragedian Aeschylus and others as a “bloody ravening pack.” Furious media engage affect and communicate through contagion within the distributed (and disturbed) logic of the ‘swarm.’


Image: John Singer Sargent, 1921.



The current global pandemic and the rise of extreme weather caused by climate change have many key international institutions talking about the need for transnational, global, and planetary-level cooperation. Many favour the idea of the ‘global citizen’ and theories of cosmopolitan citizenship. Notwithstanding the various conceptions of cosmopolitanism in contemporary political, social and moral philosophy, the standard vision claims that all human beings, regardless of their differences, can and should be regarded as citizens of a single human community or confederation. Standard cosmopolitanisms thus tend to depend on the primacy of the Liberal Enlightenment conception of the human individual endowed with basic human rights, and are, as such, grounded in Humanism. At the outset of the 21st century, however, widespread engagement with the advanced digital technologies of the information age has extended human boundaries beyond the merely global to increasingly planetary, post-planetary, and even trans-human scales. Transhumanist cosmopolitans might seek to extend the concept of cosmopolitan citizenship beyond the limits of the past to include non-human (that is, animal, vegetal, mineral and machinic) life-forms, which would entail the affirmation of hybrid human/non-human interfaces and identities. Efforts to actualize transhuman cosmopolitan agendas, however, have been accompanied by the widespread adoption of impoverishing and instrumentalizing fantasies of technological enlightenment. Post-human post-colonialists would thus criticize the ‘prometheanism’ of trans-human cosmopolitan narratives, emphasizing instead their complicity with the inequities and inequalities produced by a hyper-consumptive, hyper-technical form of capitalism. As the recent anthology Critical Posthumanism and Planetary Futures asks, what kinds of futures await contemporary societies “in this age of the limit condition of the human” (ed. Banerji and Paranjape, 2016)?

Diagram used by Johannes Kepler to establish his laws of planetary motion. Photo: Wikimedia Commons.



It is no coincidence that the information-surveillance powers now available to police, governments and third-party security corporations are being compared to quasi-futuristic sci-fi scenarios like that of Philip K. Dick’s 1956 short story ‘The Minority Report,’ adapted by Steven Spielberg into a very successful 2002 film, in which advanced technologies enable police to predict, identify and incarcerate people before they actually commit a crime. John Anderton, the cop at the heart of the Pre-crime unit, is, of course, totally on board with its goals, until he himself becomes a victim of its predictive technologies: identified as a ‘pre-criminal,’ he goes on the run to evade the authorities in hot pursuit. The Department of ‘Pre-crime’ criminalizes people before they have even committed a crime (a genuine ethical dilemma under current legal frameworks of individual rights and responsibilities), bringing the norms of war-time into the civilian spaces of peace-time. In this way, civilian spaces begin to operate as if they were military battlefields. The normalization of information-surveillance technologies militarizes civilian spaces, as the logic of war invades the physical and psychic spaces of civil society on a global scale.



The ‘war on terror’ refers to the global military campaign spearheaded by the United States after September 11, 2001 to deal with terrorist threats. Rather than naming a defined set of military imperatives governed by the regular norms of war (such as the requirement that nations determine and delimit a geographical theatre of war), the term has stretched beyond those conceptual parameters to designate, at once, a set of practices and policies (military interventions, covert operations, agencies and institutions) as well as an entire body of cultural and political beliefs, assumptions, justifications and narratives for determining not only who are allies and who are enemies, but also for delimiting the bounds of national identity and individual rights. The global adoption of counter-terrorism technologies is converging with the proliferation of globally co-ordinated systems and processes for capturing, storing and cross-referencing digital information of any and every sort, in the name of ‘security’. In this scenario, the ‘war on terror’ is being waged by intelligent machines of information surveillance that are being developed and justified in its name, from biometric technologies like facial recognition and retinal scanning in border security to predictive technologies that harvest, record, and even forecast behaviour. Today, war is made ambiguous through its entanglement with the growing but complex mesh of everyday media. One of the graver implications is that the ‘war on terror’ in the age of intelligent machines is not so much about using the rules of war to isolate and eliminate those undertaking terrorist activity as about extending war into all spheres of life, using the techniques and technologies of war to manage and govern entire civilian populations.

Image: Hito Steyerl, “How Not to Be Seen: A Fucking Didactic Educational .MOV File” (still), 2013.

bottom of page