I’d Prefer To Be Too Many To Name
Curtis Roth
More and more I imagine the next thing will set everything right. Like faster-acting melatonin gummies, or a mail-order mattress, or timing my daily internet intake – all to assuage a growing unease that remains difficult to specify. If I’m not alone in this sentiment, then perhaps it could be said that never before have we had so many specific solutions for such general problems. And at no other time has this been more apparent than during the rolling quarantines of our present moment, where the cruel indifference of our public institutions is offset by the obligation to maintain an endless array of self-care regimens, from starting sourdough to learning Mandarin. Informed by earlier self-actualization movements like Quantified Self (QS) or Neuro-Linguistic Programming (NLP), the contemporary economy of self-care regards life as the confluence of so many discrete signals. This cybernetic understanding of being suggests that our futures might be positively steered by meticulously managing the flows from which our lives are constituted. We’re told our futures now depend on the constant interrogation of these signals; in other words, self-care entails the responsibility to relentlessly self-profile. Today, many are offered the ability to manage the minutiae of their lives at an unimaginably fine resolution. But like melatonin gummies on the deck of the Titanic, the responsibility to self-profile grows increasingly perverse amid the growing uninhabitability of reality itself.
While I might not be alone in my impulse to neurotically profile my own life, this impulse is far from universally accessible. I write this text from the United States, following weeks of public protests against state-sponsored racial violence and the uneven death toll of a viral pandemic that’s normalized by the powerful as the cost of doing business while disproportionately killing the poor. If the responsibility to continuously manage one’s life can be understood as a technique for directing my future, such events remind us that one of the ways in which power remains powerful is by unevenly distributing these techniques of living. I’m compelled to profile myself while others are brutally profiled.
Such incongruities between techniques of the self are also present in the ways in which life is captured by contemporary online surveillance. Until recently, the most common way to profile an internet user was through Challenge-Response Authentication. These profiling processes are typically used to differentiate human beings from bots, and to allocate a user’s privileges appropriately. Challenge-Response Authentication is usually encountered as annoying JavaScript CAPTCHA apps, requiring users to retype distorted lines of text, or select all of the images containing traffic signals from a nine-square grid. CAPTCHAs entail a sensory-cognitive task presumed easy for humans and difficult for computers. Importantly, these tests don’t care which particular user you might be, only whether the user in question is a human being or not.
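To make the mechanics concrete, a challenge-response exchange can be reduced to a toy sketch of my own (illustrative only, and eliding the perceptual distortion that makes real CAPTCHAs hard for machines): the server poses a challenge, and only a matching response unlocks the user’s privileges.

```python
import secrets
import string

# Toy challenge-response check (hypothetical): the server generates a
# challenge it presumes a human can solve, then compares the user's response.
def make_challenge(length: int = 6) -> str:
    return "".join(secrets.choice(string.ascii_uppercase) for _ in range(length))

def verify(challenge: str, response: str) -> bool:
    # Real CAPTCHAs render the challenge as distorted text or an image grid;
    # here only the comparison step is shown.
    return response.strip().upper() == challenge

challenge = make_challenge()          # would be displayed as a distorted image
print(verify(challenge, challenge))   # a "human" retyping it correctly passes
```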
Problematically, however, CAPTCHAs profile this human being through the narrow threshold of specific abilities that are far from universally human. For example, a bot and a visually impaired user are equally unable to select all of the images containing traffic signals from a nine-square grid. In order to expand this overly narrow circumscription of the human, in 2014 Google unveiled an application called “no CAPTCHA reCAPTCHA.”1 The unwieldy name signifies a comparatively painless process: a small check box accompanied by the succinct assertion “I’m not a robot.” A user agrees simply by checking the box and is immediately authenticated. But while reCAPTCHA was unveiled through a narrative of increased accessibility, it simultaneously facilitated a new regime of surveillance built atop a radically different conception of life itself. Unlike previous Challenge-Response tests, reCAPTCHA isn’t strictly interested in whether the user is a human being; it also registers the user as a specific human being in the process.
Clicking a reCAPTCHA doesn’t confirm your humanity through a test; rather, it infers it from your ability to enter into a legal agreement with Google. By clicking “I’m not a robot” the user submits to a process of continual surveillance designed to calculate their humanity in perpetuity. After accepting the agreement, each user is saddled with a tracking cookie and assigned a ‘risk score’ indicating a live calculation of their potential for malicious activity while using a site.2 While Google refuses to indicate what factors make up users’ risk scores, security researchers have theorized that they are derived through a combination of hardware and software fingerprinting, as well as the live tracking of the cursor gestures of individual users.3 Today, these two models of user authentication exist in an uneven patchwork of surveillance across the web. But critically, CAPTCHA and reCAPTCHA are not only competing models of authentication, but competing techniques for exerting power.
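For a sense of what a site operator actually receives, the sketch below queries the siteverify endpoint used by the score-based version of reCAPTCHA, which returns a number between 0.0 (likely a bot) and 1.0 (likely human); the cutoff and handler names are assumptions for illustration, and nothing about how the score is computed is exposed.

```python
import requests

# Hedged sketch: the operator forwards the client-side token to Google's
# siteverify endpoint and receives back a risk score, not an explanation.
def risk_score(token: str, secret_key: str) -> float:
    resp = requests.post(
        "https://www.google.com/recaptcha/api/siteverify",
        data={"secret": secret_key, "response": token},
        timeout=10,
    )
    return resp.json().get("score", 0.0)  # 0.0 (likely bot) to 1.0 (likely human)

# Hypothetical usage: the threshold is the site operator's choice, not Google's.
# if risk_score(client_token, SECRET_KEY) < 0.5:
#     require_additional_verification()
```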
Theorist Byung-Chul Han differentiates the techniques implicit in CAPTCHA and reCAPTCHA by drawing a contrast between the biopolitics of the industrial state and the psychopolitics employed under neoliberalism.4 Like CAPTCHAs, biopolitics exerts power over life by construing it through systems of norms, such as the cognitive-perceptual criterion tested by a user’s selection of traffic signals from a nine-square grid. For Byung-Chul Han, while norms such as citizenship, gender, or physical ability have proved useful for calibrating the productivity of bodies, they prove less useful in conscripting the psyche upon which neoliberal production increasingly depends.5 While CAPTCHAs differentiate humans from bots through binary categorization, reCAPTCHAs regard life as the ever-changing aggregate of probabilities processed from a user’s behavior. Such systems are psychopolitical, in that they allocate freedom by modeling a user’s cognitive states such as their attention, arousal, or ennui.
Whether biometrically or psychometrically, such attempts at profiling are invariably directed toward the monetization of users’ futures. It’s of no real interest on the back end whether a user is a human or a bot in any ontological sense; rather, what is at stake is the probability of a user behaving in ways that are reliably profitable. Crucially, CAPTCHA and reCAPTCHA, along with the bio- and psychopolitical techniques that underwrite them, project the future through two distinct regimes of probability.
CAPTCHAs rely on a mathematical method known as frequentism, the dominant technique for statistical analysis prior to the 21st century. According to Justin Joque, “[frequentism] defines probability as the long-run frequency of a system.”6 Through frequentism, a static prediction is made and then proven or disproven based on the frequency of its occurrence over a series of instances. Like many biopolitical demographic techniques, frequentism works at the level of total systems over long runs. The assertion that a human can complete a CAPTCHA while a bot cannot depends on a static and universal conception of the capacities of all humans and all bots for all time.
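Read literally, the definition Joque cites can be written out in a few lines: the probability assigned to an event is simply the proportion of trials in which it occurs, and it stabilizes only over a long run. The coin-flip below is a stand-in example of my own, not anything CAPTCHA-specific.

```python
import random

# Frequentist probability as long-run frequency: estimate P(event) by the
# proportion of repeated trials in which the event occurs.
def long_run_frequency(trials: int) -> float:
    heads = sum(random.random() < 0.5 for _ in range(trials))
    return heads / trials

for n in (10, 1_000, 100_000):
    print(n, long_run_frequency(n))   # converges toward 0.5 only as n grows
```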
ReCAPTCHA, on the other hand, relies on an alternative predictive technique known as Bayesian probability. Though first theorized in the 18th century, Bayesian methods remained prohibitively inefficient until recent advances in computation. Rather than a stable prediction, Bayesian probability allows a prediction to be updated after each discrete event.7 Instead of static hypotheses, Bayesianism can establish probabilities for individual events. ReCAPTCHA doesn’t require any preexisting definition of what constitutes a human user, only that the behaviors of a particular user presumed to be human continue to be similar to the behaviors of other presumably human users. My surfing behaviors, recorded by Google’s tracking cookies, inform predictive models of a general human user that eventually determine the risk scores of others. Crucially, this flexibility is afforded by the Bayesian method’s ability to perpetually incorporate new inputs. While subjects modeled through CAPTCHAs are what they will always be, reCAPTCHA regards the user as an evolving confluence of signals amongst a spectrum of similarly evolving users.
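The contrast can be made concrete with a minimal Bayesian update, a Beta-Bernoulli sketch of my own rather than Google’s actual model: the estimate of how “human-like” a user is begins as a prior and is revised after every discrete observation, so it is never finished.

```python
# Minimal Bayesian updating with a Beta(alpha, beta) belief over the chance
# that a user's next action looks "human"; each observation revises the belief.
def update(alpha: float, beta: float, looked_human: bool) -> tuple[float, float]:
    return (alpha + 1, beta) if looked_human else (alpha, beta + 1)

alpha, beta = 1.0, 1.0                           # uninformative prior
for signal in [True, True, False, True, True]:   # hypothetical behavioral signals
    alpha, beta = update(alpha, beta, signal)
    print(round(alpha / (alpha + beta), 3))      # posterior mean after each event
```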
In this sense, today’s economies of self-care rely on a model of life in which the future is realized through ad hoc Bayesian principles. I don’t need to know the precise ways in which my melatonin intake and mattress type contribute toward my personal fulfillment, only that by fine-tuning such inputs I am more likely to eventually find fulfillment. The connection between Bayesian statistics, self-care, and economic privilege is made explicit in organizations like the Silicon Valley-based Less Wrong group. Founded by artificial intelligence researcher Eliezer Yudkowsky in 2009, and supported by radical libertarian financier Peter Thiel, Less Wrong is a techno-utopian doomsday cult.8 The organization employs Bayesian statistical methods to maximize its members’ pleasure as they collectively hurtle toward the technological singularity, and the end of human life as we understand it. The forward-looking nature of such groups, along with the myriad ways in which self-care is now expected to substitute for state-care, would seem to confirm the growing sense that the present moment constitutes some sort of epochal shift. One in which frequentism is supplanted by Bayesianism, biopolitics by psychopolitics, and Keynesianism by neoliberalism.
Instead, I would argue that the disproportionate suffering made explicit over the last several months suggests otherwise. Like the internet’s uneven muck of CAPTCHAs and reCAPTCHAs, today we occupy a moment in which life itself is a wildly unstable concept. Less one thing following another than every past turned productive by living-on in simultaneity. Where the responsibilities of governance are outsourced to the psyches of some as the obligation of self-care, even while others are murdered through much cruder techniques of population management. This isn’t to call for a more equitable distribution of suffering, but rather to suggest that any model of life precipitates the possibility of another future. And that if design has something to offer the present moment, it is our ability to make new configurations of life real. To offer the present muck ways to be that allow for a more just future. One in which the capturing of life as information, implicit in all contemporary profiling, is no longer merely the raw material means to others’ ends, but a form of self-determination.
1, 2, 3. Schwab, Katharine. “Google’s New reCAPTCHA Has a Dark Side,” Fast Company, June 19, 2019.
4, 5. Han, Byung-Chul. Psychopolitics: Neoliberalism and New Technologies of Power. London: Verso, 2017.
6, 7. Joque, Justin. “Chances Are,” Real Life, March 28, 2019.
8. Tiku, Nitasha. “Faith, Hope and Singularity: Entering the Matrix with New York’s Futurist Set,” Observer, July 25, 2012.
Curtis Roth is an Associate Professor at the Knowlton School of Architecture at The Ohio State University. His work examines new formations of subjectivities within networks of computation, labor and distance.