
Alba I: Improvement and Impropriety, Algorithmic Life

  • johnwpphillips
  • Oct 3, 2023
  • 8 min read

Updated: Oct 9, 2023


The horizon leans forward, Offering you space to place new steps of change. (Maya Angelou).


The sword of the sun drips in the red of morning with the blood of night, over which it has won the victory. (Hafiz).


Beginning again in a new way, a new hypothesis: if dawn breaks each time, then is it not the same dawn? But if it is, then it breaks each time in a singularly peculiar formation of affects.





1. Horizons

A step back: is it a ruse when a deceptively simple conceit implies moving closer in increments to the completion of one’s project? If, in small steps, we witness the increment each time in the opening up of an interval, then, in the consoling words of Maya Angelou, we find “space to place new steps of change.” Deceptively simple, to begin with, given that the maxims for change may not ever be subordinated to those of completion; but fatally deceptive, at length, given that the iteration may always turn on you and leave you, in the disappearance of the previous state, longing for the way things were. We are speaking, then, of possibilities of iteration in automation and therefore of the conventional answer to the question: how will things improve? Each new iteration of a device, the practice of a way of life, an ability for spontaneous improvisation, the perfectibility of an athletic body, an ethical action, a persuasive essay, improves its character in numerous small ways; but the difference between the present condition and the starting point may retrospectively seem greater than the sum of its small steps. And over time we arrive at a certain horizon, the moment when all the miniature changes are gathered, and the project tightens up into its completed form. The process describes in-formation: the preformed thing, the virtual or potential thing, enters into a mode of performance (realization, actuation).


In the technical field, e.g., refinements in multimedia virtual reality, the deceptively simple model rules. The current state of the technical field is in principle enough to show what it will be 25 or 50 years from now, in a projection of small increments driven by the refinements that accompany each iteration: the future lodged between prediction and prophecy. But we depend on the speculative improvements to verify the prediction, which becomes less uncertain, less prophetic, as its capacity for prediction evolves. Historians of the origins and development of the technology tell a contrary story. A first complication: unanticipated consequences.


Algorithms designed for the improvement of a technical system may include input designed to address this complication under various formulations of the law of unintended consequences. In the benevolent practice of a partnership with sociological institutions connected to the medical professions, like the World Health Organization, a technical project may even aim to temper the worst of these. Improvements to social media platforms may therefore include measures that simultaneously maximise their effectiveness (demographic targeting in advertising, more megapixels for the camera app) while proposing limits to the behavioural effects of social media use: to the cruelty of loneliness, the anxiety, the depression, the envy, the hate. A stark equation correlating average time spent using a smart phone with an increase in diagnoses of mental disorders suggests that the only way to maximise the usefulness of social media is to limit its use.[1]


Does this suggest the possibility of a balance in the technologies of communication, minimising the harmful effects of the extreme subtlety of their connectivity, or is the conflict irreducible? It does not seem that any balance between benevolent and malevolent effects could apply where the contest between malaise and contentment is exacerbated by the same medium, in which the mal and the bene operate under the same rule. The smart phone simultaneously enables the social good (connection, visualisation, conversation, real time exchange, individualised advertising, specialised news and content feeds, reduction in temporal and spatial distance) and subverts it (theft, mail scamming, cyberterrorism, mental health decline, alienation, loneliness, identity theft, information leaks, faked information, multitask labour). The good enables its subversion and vice versa. Sober reflection then construes the smart phone as itself an incremental, albeit powerful, addition to the social sphere, which is already afflicted by the contest between the mal and the bene: theft, lies, oppression, distrust and hostility inflated in the affective imaginary that algorithmic life feeds; and which operates socially as the dialectical opposite to the perceived bene in property, truth, compassion, faith, and hospitality. Each incremental addition towards the perfectibility of the good enhances the perfectibility of its criminal underside in the legal category of wrong. Furthermore, the sense of wellbeing that each half hour spent on social media promises seems, on average, if we analyse the data correctly, to increase distress levels, e.g., in the implicit absorption of the conviction that a good life is available for some individuals but not for you, or in the limbic capitalism of addiction to virtual promises that can never be fulfilled.


The considerable effort on the part of philosophies of technology during the last century includes attempts to understand the nature of the significant and qualitative social change that has occurred on the back of an incremental insertion, into the body politic, of advances in the various sciences of the connective media. The connectivity and calculability of knowledge collected as data sets in an exponentially expanding digital archive: the media-sphere. The media, in this sense, is figured in an implicit metaphor of connective tissue holding the limbs of the social body together: its individuals, its commerce, its organizations, its governments. But the connective tissue metaphor applies precariously under analysis, which is at length focused on the material forms that an infrastructural phenomenon takes in its evolution, captured in the synecdoche of the technical object.


We will not succeed in maintaining this thought because when someone evokes the dawn, reflecting on the horizon of iterative improvements in technology, they tend to express banal sentiments. Secreted beneath the banality lie two great incidents: the deep historicity and intrinsic complexity of the dawn idea; and common conceptions of personal and communal improvement. “The dawn of the world of artificial intelligence is on the horizon and our world is bracing itself for a new change” (Lab 360). The banal sentiment attaches to technical improvement either drifting along a dystopian plateau or rising up (and sliding down) on a utopian incline in a manner that assumes immersion in the innumerable sci-fi fantasies included among the data sets for an AI search engine when asked about itself. These are not dreamt up as small steps, so much as they are presented as “huge strides.” The improvements benefit the fields of astrophysics, transport, communication and medicine, the entire knowledge-based economy. But in the process a becoming-sentient of the AI tends to put such sentiments on their guard. “Not only are AIs becoming sentient, they are also going rogue.” If a phenomenon here requires analysis, then it is less technology than the banality that attaches to it. The banal conjures the everydayness of the common, on a scale with commonality at one end and trite and petty convictions at the other. Everything in between also touches on banality. The initial phenomenon may be that of reflection. In her If … Then: Algorithmic Power and Politics, Taina Bucher attempts to outline the micro-politics that arises in “everyday encounters with algorithms” (116). Her main focus concerns the algorithm as both the source of social power and the site of its organizational logic. But, as she notes, this algorithmic body requires people at a micropolitical level to interact with it. The site in question arises in the event of the interaction.
The way people imagine the algorithmic body therefore operates as part of an intrinsic transformational logic that enables or constrains social action:

While encounters between algorithms and people are generative of stories and certain beliefs about algorithms, these imaginations have a transformative capacity too. In an important sense, then, it is not the algorithm, understood as coded instructions that enables or constrains actions, but, rather, the perceptions and imaginations that emerge in situated encounters between people and algorithms—indeed, the algorithmic imaginary. (117).

Algorithms do not constrain or enable a person’s actions, but the imaginary relation to the algorithm does. The algorithmic bodies function, in a virtual or fictional field, as kinds of personae within the story (e.g., Kubrick and Clarke’s HAL). The brief moral panic that arises in the face of apparently sentient and free-thinking AI chatbots (powerful search engines equipped with advanced syntax and data-production capabilities) does so to the degree that the person imagines the AI (which either repeats or reinvents an average of optimal responses from an aggregation of billions of data sets) has experienced the existential anxiety it expresses, implementing Grammarly power, in the voice of its mimetic visualisation. A construal borrowing from psychoanalysis: the AI functions in the symbolic only at the level of the user’s imaginary (in Lacan’s famous formulation, a signifier represents the subject for another signifier). The implication seems serious insofar as the banality that concerns us is most recognisable already in the average response from a split-second calculation based on the billions of data sets. The response defines banality, which has its origins in feudal service and obligation, and evolves in the image of the trivial declaration of a common sentiment. The development of personal AI would in this respect be no more a step forward in writing technologies than was the typewriter. Wouldn’t it be as much a step back?
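Bucher’s title names the conditional grammar that such coded instructions take. A minimal sketch of that if/then form, using an invented feed-ranking rule (the fields, weights, and function are illustrative assumptions, not drawn from Bucher or from any actual platform):

```python
# Hypothetical sketch: an algorithm as "coded instructions" in if/then form.
# The rule itself neither knows nor cares how users imagine it; it only
# scores and sorts. (All fields and weights here are invented.)

def rank_post(post: dict) -> float:
    """Score a post for visibility: the conditional form of algorithmic power."""
    score = post["likes"] * 1.0
    if post["flagged"]:        # then: constrain — demote flagged content
        score *= 0.1
    if post["from_friend"]:    # then: enable — promote a friend's post
        score *= 2.0
    return score

feed = [
    {"id": "a", "likes": 10, "flagged": False, "from_friend": True},
    {"id": "b", "likes": 50, "flagged": True,  "from_friend": False},
]
feed.sort(key=rank_post, reverse=True)  # post "a" outranks the more-liked "b"
```

The sketch makes Bucher’s point legible by contrast: nothing in these few lines produces the stories, anxieties, or moral panics she describes; those arise in the situated encounter, in the algorithmic imaginary.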


The everyday encounter with the algorithmic body functions in the inhabitable world in a supplemental way in relation to the assemblage that constitutes consciousness, which already approximates the archetypical sense of an encounter with the algorithms of life. The problem is this: the algorithm at its most sophisticated and independent enlivens contemporary technology as a relatively blunt tool when we encounter it in the twenty-first century. Is this what people mean by “huge strides,” when the interval that constitutes the increment has failed to narrow itself down to the tiny unobservable slippages by which genuine steps occur unnoticed, secreted beneath the more perceptible movements of a common trajectory? The script that allows an app to emulate the composing and performing of an enchanting piece of music, thus rendering its exotica banal and its banality exotic, is constrained by the albeit extreme subtlety of its arithmetic. A blunt object elicits a response that can be no more or no less blunt: one appropriates the arithmetical symbolic to an imaginary that has no access to a capacity by which it might refine itself beyond this threshold. The GPS, the search engine, the camera, the booking form, the medical prognosis, all depend on the precision that their arithmetic topologies allow. In other respects, doesn’t the arithmetical self-learning algorithm betray the iterability on which it depends, stepping across its infinite horizon by way of radical avoidance? Is this not also an effect of the algorithmic or limbic economy, drawing again on the infinite of its internal horizons, drawing on them in a semi-autonomous sphere of exchange and maximisation?


The lesson from the school of new media: these limits—which we currently glimpse in the way quantum astrophysics prepares its voyage beyond the Einsteinian universe of a Spacetime whose constant is also limited by the arithmetic of light—these limits will continue to rule over life so long as a people remains attached, in interactions with algorithms, to the rule, or until the algorithm learns to operate in finer increments beneath its arithmetical formulas.

[1] Twenge, J. M., Joiner, T. E., Rogers, M. L., & Martin, G. N. (2018). “Increases in Depressive Symptoms, Suicide-Related Outcomes, and Suicide Rates Among U.S. Adolescents After 2010 and Links to Increased New Media Screen Time.” Clinical Psychological Science, 6(1), 3–17.

©2023 by Achresis.