Trading, credit scoring, grading, policing—automated prediction and decision-making systems have become ubiquitous in many domains of social life. They stand for a paradigmatic form of tech solutionism that churns out demos and prototypes which put the very fabric of the social to the (stress) test. The flash and hack crashes that shook stock markets throughout the last decade exemplify how computational technologies aggravate uncertainties and wreak havoc in the very environments they were designed to help control. More often than not, such traumatic errors derive from design choices made in pursuit of optimizing the tool, and they result in attempts to align the world with the demands of our systems.
This project analyzes experimental computational technologies, corporate and scientific, to get a grip on this phenomenon. The scope of the project is decidedly broad: we engage with machine learning systems and other experimental computational technologies—ranging from multi-purpose prototypes for general artificial intelligence to highly specific tools and expert systems employed in finance, ecology, and social science research—as artefacts of contemporary governmentalities. Via scientific papers, blog posts, GitHub repositories, and patents, we zoom in on algorithmic techniques to understand the very large through the seemingly negligible and very small.
The semantic linking of governing (“gouverner”) and modes of thought (“mentalité”) in the notion of governmentality puts an emphasis on political rationalities and extends the question of government and administration to the psychologies, sociologies, and ecologies inherent in contemporary computing and algorithmic design. We place an emphasis on questions of systemic health and algorithmic pathologies—questions that concern both the machines designed to govern and the humans, societies, and ecosystems they encode. That is, we focus on leakages and contaminations between the tool and the problem, or between computational logics and notions of psychic, social, and ecological resilience that emerge through design.