Progress and development have always been among capitalism’s core articles of faith. The (often dubious) successes of the technical transformation and exploitation of nature and human beings cannot be overlooked. However, such developments can also have fatal psychosocial and ecological consequences. We cannot say that technical progress is a good thing in itself, or that it actually leads to greater prosperity, as is often claimed. Technical developments, or rather technical development paradigms, exist in the context of the valorizing movement of capital. If a new technology promises a cost advantage or opens up new possibilities for accumulation, whether as process innovation in production or as an (expanded) mass consumption of commodities, it is developed and produced, while its necessity and inevitability are proclaimed at the same time. (That said, it can also be cheaper to wear out workers than to rationalize them away through technology. Automation is by no means implemented everywhere it is theoretically possible, nor is it always feasible in practice; see Becker 2017 and Moody 2019.) The capitalist ideology of progress and development always comes with a certain optimism and many promises of happiness. Conversely, it also comes with a pessimism regarding the familiar, and with the implicit or explicit threat that we must accept progress as fate and are driven to adapt and reinvent ourselves in the process of “self-determination.” Otherwise, one is simply one of the “losers,” a status for which biologistic rationalizations can undoubtedly be found as well: rationalizations that provide a genetic or neurological “explanation” for poverty, conservatism, etc. It is no coincidence that these promises of happiness rest on ideological, often wildly exaggerated and untenable claims, and on unexamined assumptions and vulgar-materialist or utilitarian anthropologies (see Schnetker 2019). At the same time, technological development, with its sometimes insane promises, is accompanied by a corresponding background music of legitimation. People emphasize how unstoppable technological development is, how desirable and unavoidable it is, and what opportunities, but also risks, it holds. When people say that “development can no longer be stopped,” this development appears unstoppable to the optimists/apocalyptics as well as to the “realists,” since the social dynamics underlying it are not scrutinized and questioned as such. We are not dealing here with an actual natural law (as in the case of an imminent volcanic eruption, which really cannot be stopped), and yet the fetishistic valorizing movement of capital appears to the subjects living under its influence as precisely that, even though it is not (cf. Kurz 2012).
No matter what the issue is: “progress” is the solution, which often amounts to nothing more than digitalization and cost cutting. The digitalization critic Evgeny Morozov called this way of thinking, in which one takes the perspective of a hammer and everything looks like a nail, “solutionism” (Morozov 2013). Particularly zealous disciples of solutionism are the ideologues of Silicon Valley, especially representatives of transhumanist ideology, who do not shy away from contemplating the rationalizing-away of humans as such, and who even consider it desirable for humans to disappear or transform into “cyborgs” (cf. Wagner 2016). Transhumanism is therefore a technocratic death cult (see Konicz 2018 and Meyer 2020) that updates social Darwinism and eugenics (see Jansen 2018). These legitimizing ideologies and their “prophets” indeed display traits usually found among religious fundamentalists; it is not for nothing that the term “technology evangelist” has arisen. AI ideologues believe that humans, because of their fallibility, need a man-made artificial intelligence to deal with things like climate change. Transhumanists strive for salvation through technology, even if this may mean the destruction of humanity. In addition to big data and digitalization (Meyer 2018), an almost omnipresent hype in the current capitalist regime (to which “Chinese-style socialism” naturally belongs) is so-called artificial intelligence (cf. e.g. Simanowski 2020). Artificial intelligence has been on everyone’s lips since the release of ChatGPT at the end of 2022, at the latest.
What can we make of the hype surrounding artificial intelligence? Some predict massive disruptions in the economy (Industry 4.0, Internet of Things) and AI overtaking and replacing humans; the human being is essentially treated as a discontinued model. According to this line of thought, AI can and will be used in education, medicine, logistics, the culture industry, journalism, the military, art, etc., in other words, everywhere. People hold out the prospect of many jobs or entire kinds of work disappearing altogether, while once again downplaying the social consequences this would have. They tend to numb themselves with ignorance or optimism, assuming that many new job opportunities will be created, while a latent threat always hangs over those who fall by the wayside in this game of “musical chairs” and do not prove flexible or resilient enough. However, AI is not creating a high-tech paradise, as the fundamentalist AI preachers would have us believe, but rather predominantly precarious work. AI as “capitalist intelligence” (see wildcat no. 112, 42ff. and Seppmann 2017) serves the rationalization of capital: cutting costs, speeding up logistics, compressing work, accelerating and maintaining the valorization process, and continuing competition at all levels.
As current or “upcoming” developments show, AI systems are ideally suited for managing the crisis (see Konicz 2024). They are predestined to subjugate capitalist “human material” by evaluating huge amounts of data (big data) and assessing and selecting this human material according to its usability or “future viability” (law enforcement, insurance, health, surveillance, etc.). When AI systems make predictions, they always do so on the basis of a statistical evaluation of “what already is.” This leads to fatal positive feedback loops: for example, someone does not get a job or a loan because they come from a “social hotspot” or presumably from a “criminal milieu,” as evidenced by corresponding “police work.” The police are in turn mobilized to screen said milieus, since crime is also likely to occur there in the future, as their work has already shown in the past and will show again due to AI and algorithms (search and find!). And thus it is “confirmed” that the criminal milieu is a criminal milieu and that black people or foreigners are more “inclined” to commit crimes than those who are less in the crosshairs of the police and justice system (cf. O’Neil 2016). A racist reality is thus perpetuated algorithmically.
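To make this feedback loop concrete, the following minimal simulation sketches the mechanism (all numbers, the district names, and the allocation rule are hypothetical, chosen purely for illustration; no real system is modeled here):

```python
import random

# Minimal sketch of the predictive-policing feedback loop described above.
# Two districts have the SAME underlying crime rate; the initial patrol
# allocation is slightly biased toward district A (the "social hotspot").
TRUE_CRIME_RATE = 0.05           # identical in both districts by construction
patrols = {"A": 60, "B": 40}     # initial allocation: slight bias toward A
recorded = {"A": 0, "B": 0}      # crimes recorded so far (the "data")

random.seed(0)

for year in range(20):
    for district in ("A", "B"):
        # Recorded crime depends on patrol presence: you only find
        # what you look for ("search and find!").
        for _ in range(patrols[district]):
            if random.random() < TRUE_CRIME_RATE:
                recorded[district] += 1
    # "Predictive" reallocation: patrol where the data shows more crime.
    total = recorded["A"] + recorded["B"]
    if total:
        patrols["A"] = round(100 * recorded["A"] / total)
        patrols["B"] = 100 - patrols["A"]

print(recorded, patrols)
# Typical outcome: the initial bias is reproduced and entrenched. District A
# ends up with more recorded crime and therefore more patrols, although both
# districts are identical by construction.
```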
If you are caught in the “tentacles” of an AI system or algorithm due to a misjudgment, it is usually not possible to “object” (and the users of an AI system themselves do not know why an AI has “decided” one way and not another in a specific case; even if the “trade secret” were abolished, the “decision-making” of the AI would remain opaque). That AI systems make mistakes (mistakes, that is, from the point of view of users and those affected) has to do with the fact that reality cannot be sorted neatly and that AI systems cannot understand (social and situational) context, which is why language programs have problems with sarcasm and irony. Statistical evaluations of the frequency of words or word combinations do not produce meaning (see the sketch after this paragraph). Statistical evaluations of data do not lead to an understanding of the genesis of that data (or of the social phenomena reflected in it). The fatal flaw of AI is that it is impossible to know what mistakes these systems make or will make and when, or how exactly these mistakes come about. The mistakes that AIs such as speech and image recognition programs make show that they do not understand what they have “learned” (cf. Lenzen 2023, 48ff., 133ff.). And if AI systems produce nonsensical results, it is very difficult to “repair” them through retraining (in contrast to “normal” computer programs, which can be fixed by locating the errors in the program code).
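That frequency statistics do not produce meaning can be illustrated with a deliberately primitive sketch: a bigram text generator that continues a text purely on the basis of co-occurrence counts. (The toy corpus is made up; real language models are vastly more complex, but they too operate statistically.)

```python
import random
from collections import defaultdict

# Toy "language model": it continues text purely from word co-occurrence
# counts. It has no concept of meaning, context, or irony; it only
# reproduces statistical regularities of its training text.
corpus = (
    "the bank raised the interest rate . "
    "the bank of the river is steep . "
    "the rate of the river is unknown ."
).split()

# Count which word follows which.
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

random.seed(1)
word, output = "the", ["the"]
for _ in range(12):
    word = random.choice(follows[word])  # pick a statistically seen successor
    output.append(word)

print(" ".join(output))
# Produces fluent-looking but senseless text, e.g. mixing the financial and
# the geographical sense of "bank": frequency statistics, not understanding.
```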
Artificial intelligence, and “computational thinking” in general, have a long history, and AI has already gone through several periods of hype (see Weizenbaum 1982, Dreyfus 1978, Irrgang; Klawitter 1990, Larson 2021). That such hype keeps returning at a “higher level,” despite all the criticism, is obviously due to its capitalist “usefulness” and to the optimistic promises and apocalyptic fears associated with it. These promises and fears often accompany technological developments and are rehashed again and again; they may have been repeatedly disappointed or refuted, but they cannot be killed off. That AI research, and the interest in funding it, has experienced a “winter” on several occasions is due to an underestimation of the complexity of developing artificial intelligence and to the fact that, for a long time, computer technology was insufficiently developed (and too little digitized data was available to train “artificial neural networks”).
Quite apart from the repressive applications and capitalist use of AI systems, apt objections have been formulated against the concept of intelligence commonly used in the “AI scene.”
The media liked to report, with a great deal of sensationalism, that an AI could play chess or Go better than any human, which some interpreted to mean that humans would soon become a “discontinued model.” Artificial intelligence is indeed far superior to human intelligence when it comes to storing huge amounts of data and evaluating it statistically (with certain weightings and model assumptions). However, conceptualization and judgement are not the same as memorizing a telephone directory or every bit of insurance data. There is no doubt that AI systems can recognize patterns in huge amounts of data that would otherwise have been overlooked; no human could ever have coped with such data volumes in a lifetime. This is why AI systems would more accurately be called pattern recognition programs. It should be noted that correlations, i.e. the patterns that are detected, come nowhere near proving causality (see the sketch after this paragraph). This applies to statistics in general, something that those who believe that more and more data will lead to ever more knowledge (so that theory could be dispensed with) do not seem to consider! Such programs can indeed be usefully employed as a scientific tool (and not as a substitute for theoretical thinking), for example in astrophysics, medicine, molecular biology, solid-state physics, etc. (cf. Bischoff 2022, 109ff.); they are by no means suited only to the repression or selection of people.
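A minimal sketch of the correlation problem (with made-up data): two series that are causally unrelated but both happen to trend upward over time register as an almost perfect “pattern”:

```python
import numpy as np

# Two causally unrelated series that merely share an upward trend.
# A pattern detector would flag this "relationship"; it proves nothing
# about causation. All data is invented.
rng = np.random.default_rng(42)
years = np.arange(2000, 2020)

ice_cream_sales = 100 + 5 * (years - 2000) + rng.normal(0, 3, 20)
software_patents = 2000 + 90 * (years - 2000) + rng.normal(0, 60, 20)

r = np.corrcoef(ice_cream_sales, software_patents)[0, 1]
print(f"correlation: {r:.3f}")  # close to 1.0, yet no causal link exists
```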
The fact that a computer program can beat a world chess champion has a lot to do with the fact that this program has memorized billions of move combinations (and can estimate the most advantageous next moves using a programmed heuristic, so it does not have to memorize all of them). What is usually not mentioned is that these programs are hyperspecialists. A chess program (in the sense of an “artificial neural network”) cannot also learn to play Go, whereas a human being can learn both without simultaneously unlearning something learned before (cf. Larson 2021, 28ff.; see the sketch after this paragraph). This is also why some people, when talking about AI, are not referring to such hyperspecialists (weak AI). Instead, they believe the term “artificial intelligence” should be reserved for an artificial general intelligence, i.e. one that can potentially do “everything” and is ultimately capable of developing consciousness (whatever exactly that is), also called strong AI. However, this kind of intelligence is (and will presumably remain) pure fiction outside the world of science fiction, the delusional world of transhumanists (Schnetker 2019), and the “millenarian redemption rhetoric” of Silicon Valley ideologues (Nida-Rümelin; Weidenfeld 2023, 252). It should not be forgotten that “artificial intelligence” is also a marketing term; it is used to describe various things that often have nothing to do with AI, but rather with banal statistics programs or databases. This is why you do not come across much in-depth theoretical reflection where this term is commonly used in the press (there are, of course, exceptions). This applies all the more to the propaganda of the tech giants (for example, the chatbot LaMDA developed by Google is alleged to have developed sentience and consciousness).
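The hyperspecialist problem, i.e. that training on a second task “overwrites” the first, is known in machine learning as catastrophic forgetting. A minimal sketch with a toy linear classifier and two made-up tasks (not an actual chess or Go engine) shows the effect:

```python
import numpy as np

# Catastrophic forgetting in miniature: a tiny classifier is trained on
# task A, then on task B; afterwards its performance on task A collapses.
rng = np.random.default_rng(0)

def make_task(w_true):
    X = rng.normal(size=(200, 2))
    y = (X @ w_true > 0).astype(float)    # labels follow a linear rule
    return X, y

def train(w, X, y, epochs=200, lr=0.1):
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-(X @ w)))    # logistic regression
        w -= lr * X.T @ (p - y) / len(y)  # gradient step
    return w

def accuracy(w, X, y):
    return np.mean(((X @ w) > 0) == y)

task_a = make_task(np.array([1.0, 0.0]))   # task A: one rule
task_b = make_task(np.array([-1.0, 0.2]))  # task B: a conflicting rule

w = np.zeros(2)
w = train(w, *task_a)
print("after A: accuracy on A =", accuracy(w, *task_a))  # ~1.0
w = train(w, *task_b)
print("after B: accuracy on A =", accuracy(w, *task_a))  # collapses
print("         accuracy on B =", accuracy(w, *task_b))  # ~1.0
```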
A central objection to “computational thinking” or artificial intelligence is the equation of intelligence with computation or rule-based instructions. The computer scientist Erik J. Larson points out that computer programs (regardless of what they are called) can only proceed deductively (symbolic AI) or inductively (sub-symbolic AI) (training an AI with data is nothing other than induction). However, according to Charles Sanders Peirce (1839-1914), to whom Larson refers, what characterizes human thinking is abduction, which combines inductive and deductive elements without being reducible to them. Human thinking can neither be limited to deduction (logic, i.e. the derivation of a concrete from a universal) nor to induction (the collection of facts or data and the generalization that may result from this). Abduction is rather something that could be described as hypothesizing. Hypothesizing implies initially ignoring certain facts or interpretations in order to allow them to appear in a new light in a different context, within the framework of a new “paradigm.” Larson illustrates this with Copernicus: “When Copernicus posited that the earth revolved around the sun and not vice versa, he ignored mountains of evidence and data accumulated over the centuries by astronomers working with the older, Ptolemaic model. He redrew everything with the sun at the center, and worked out a useable heliocentric model. Importantly, the initial Copernican model was actually less predictive despite its being correct. It was initially only a framework that, if completed, could offer elegant explanations to replace the increasingly convoluted ones, such as planetary retrograde motion, plaguing the Ptolemaic model. Only by first ignoring all the data or reconceptualizing it could Copernicus reject the geocentric model and infer a radical new structure to the solar system (And note that this raises the question: How would ‘big data’ have helped? The data was all fit to the wrong model).” (Larson 2021, 104).
Any notion of a “difference between essence and appearance” remains alien to logical reasoning and statistics. With induction and deduction alone, without their being mediated by some third thing, neither “novelty” nor “creativity” can be explained. Artificial intelligences are therefore nothing more than “stochastic parrots” (Emily M. Bender).[1] If you were to train an AI only on circles, it would never suddenly start drawing squares. Artificial intelligences can basically only interpolate, i.e. operate with known values, with “what has already been,” and not extrapolate (Otte 2023, 60ff.; see the sketch after this paragraph). Only the latter would produce something new, for the new or creative cannot be formalized. In principle, computers, and thus artificial intelligences, i.e. “AI devices” (Ralf Otte), can only solve problems that can be represented as an algorithm (an algorithm is a calculation or rule for action that can be formalized, translated into binary numbers, and arrives at a result after a finite number of steps), i.e. problems that can be translated into a formal language. AIs therefore operate only in a world of mathematics (and even mathematics cannot be completely formalized; there are also mathematical problems that have no solution, for which no algorithm can be found), and those aspects of reality that cannot be represented by an algorithm remain alien to AI. This is where an AI device reaches its fundamental limits, no matter how clever it may seem. This is why autonomous driving, for example, is likely to remain an illusion, as AI expert Ralf Otte points out. The only way to realize it would be to mathematize the environment, i.e. “transform the natural environment […] into a deterministic environment.” But autonomous driving takes place in a natural environment, and it is not possible to transfer reality as such into algorithms or to “artificially enrich the whole world with [IP] addresses or cameras, even with the mass use of 5G technology, just to make it more predictable for the robot cars” (ibid., 342).
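The limitation to interpolation can also be sketched in a few lines (a polynomial fit stands in here for any purely data-driven model; the function and data are made up): inside the range of the training data the predictions look impressive, outside it they fail badly:

```python
import numpy as np

# A model fitted to known values interpolates well inside the range of its
# training data and fails outside it: it only "knows" what has already been.
rng = np.random.default_rng(7)
x_train = np.linspace(0, 5, 30)
y_train = np.sin(x_train) + rng.normal(0, 0.05, 30)

coeffs = np.polyfit(x_train, y_train, deg=7)  # fit within [0, 5]

for x in (2.5, 4.0, 8.0, 12.0):
    pred, true = np.polyval(coeffs, x), np.sin(x)
    print(f"x={x:5.1f}  prediction={pred:12.2f}  actual={true:6.2f}")
# Inside [0, 5] the predictions track sin(x); outside that range the
# polynomial drifts arbitrarily far from the actual values.
```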
Another objection to the concept of intelligence in the prevailing AI discourse, according to philosopher Manuela Lenzen, is the restriction of intelligence to human intelligence (cf. Lenzen 2023). Instead of understanding artificial intelligence as a quality in its own right, people are all too quick to compare it with human intelligence. This leads to unrealistic assessments and a misjudgment of human intelligence. People tend to get hung up on nonsense and ignore what AIs actually can and cannot do. Lenzen argues that we can talk about artificial intelligence without devaluing humans and without falling into mythology (for example, the idea that AI will soon surpass humans in everything and take over the world). Rather, intelligence should be understood as a more general phenomenon that also occurs in nature and is by no means a monopoly of Homo sapiens (even though Homo sapiens commands a capacity for abstraction that far eclipses that of “non-human animals” and is in this respect indeed a “unique specimen” in nature). Intelligence is the property of an organism that allows it to be part of an environment and to act in that environment in a “sophisticated” way, i.e. ultimately to survive. Intelligence, as Lenzen explains, is thus by no means just something “mental,” purely cognitive; it is tied to a body acting in an environment. This can be described as embodied cognition or embodied intelligence. The approach of robotics is accordingly to “teach” a physical machine to act in a certain environment through trial and error (i.e. not primarily by feeding it large amounts of data). Just as a small child learns to grasp or walk (learning by doing), a robot is trained to do the same (see the sketch after this paragraph). Of course, we are infinitely far from being able to create artificial intelligence in the sense of an artificial general intelligence.
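Learning by doing in the robotics sense can be sketched with the simplest form of reinforcement learning: a toy Q-learning agent in a one-dimensional “corridor” (a hypothetical setup, far simpler than any real robot controller) that learns by acting and being “rewarded,” not by being fed large amounts of data:

```python
import random

# Trial-and-error learning (Q-learning): an agent in a corridor of cells
# 0..5 learns to reach the goal purely by acting and collecting rewards.
N, GOAL = 6, 5
Q = [[0.0, 0.0] for _ in range(N)]  # value of (left, right) in each cell
random.seed(3)

for episode in range(300):
    pos = 0
    while pos != GOAL:
        # Explore sometimes; otherwise exploit what has been learned so far.
        if random.random() < 0.2:
            a = random.randrange(2)
        else:
            a = Q[pos].index(max(Q[pos]))
        nxt = max(0, pos - 1) if a == 0 else min(N - 1, pos + 1)
        reward = 1.0 if nxt == GOAL else -0.01       # the goal pays off
        Q[pos][a] += 0.5 * (reward + 0.9 * max(Q[nxt]) - Q[pos][a])
        pos = nxt

print([q.index(max(q)) for q in Q[:-1]])  # learned policy: 1 = "go right"
# After training, the agent has "grasped" the corridor by acting in it:
# in every cell its best action is to move right, toward the goal.
```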
We can therefore say, and this has been stated repeatedly (cf. e.g. Weizenbaum 1982, 268ff. and Larson 2021), that the AI discourse reduces human intelligence to an overly simplistic image. Quite a few AI theorists have adopted a tautology: intelligence is defined as something calculable (rule-based thinking/action), i.e. something that can be translated into an algorithm, and computers can do exactly that. One then notes with astonishment that computers possess intelligence (or at least appear intelligent, so that they would be on a par with humans as soon as humans could no longer tell whether a computer or a human was talking/writing to them; this is known as the Turing test) and will soon have more computing power than the human brain (which assumes that the brain is essentially a computer). That this reduction seems plausible and credible to many is probably due to the actual reduction of human intelligence to the imperatives of the capitalist valorization process (see Seppmann 2017). The panic that AI will replace and enslave us is precisely the echo of capitalism’s general imposition that a person must constantly prove and rationalize themselves, and of the rarely articulated threat directed at those who fail to do so. The humanization of machines makes sense precisely where the human being tends to be reduced to a machine, or “willfully” reduces himself to one, and consequently experiences himself as little more than an apparatus executing algorithms (undoubtedly with the corresponding psychological consequences, cognitive dissonances, and repressions). Emil Post, a (less well-known) computer theorist alongside Alan Turing, used an assembly line worker as a model for theoretically understanding a computer and what it can or should be able to do (cf. Heintz 1993, 166ff.; see the sketch after this paragraph). The computer essentially does what humans do (or are supposed to do!) when they work on an assembly line: perform identical actions according to rules. It is therefore not at all surprising that a machine can in principle perform such actions much better and more efficiently than a human reduced to machine-like behavior ever could. The claim that artificial intelligence could surpass human intelligence and will almost inevitably enslave humanity suggests that those who propagate and seriously believe it have a rather limited horizon. Take, for example, the “philosophy professor” Nick Bostrom, who spends hundreds of pages of his book Superintelligence dreaming up all kinds of horror scenarios and worrying about how they might be prevented, without, of course, questioning capitalism at any point. So when people talk about humans as a “discontinued model,” this means that the human being, reduced to variable capital, is in fact increasingly a discontinued model, and with it capitalism itself (cf. Konicz 2024a). However, neither optimists nor apocalyptics want to hear anything about a crisis of capitalist society or an inner barrier to capital valorization (cf. e.g. Ortlieb 2013 and Kurz 2012).
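Post’s assembly-line model of computation can be made tangible in a few lines: a machine that does nothing but repeat one identical, rule-based action, namely read a symbol, consult a rule table, write, move, change state. (The rule table below is a made-up example that increments a binary number; in principle, anything algorithmic can be written as such a table.)

```python
# Rule-following computation in Post's and Turing's sense: the "worker"
# repeats one identical operation per step. This example rule table
# increments a binary number on the tape.
# (state, symbol) -> (write, move, next_state)
RULES = {
    ("right", "0"): ("0", +1, "right"),  # walk to the end of the number
    ("right", "1"): ("1", +1, "right"),
    ("right", " "): (" ", -1, "carry"),
    ("carry", "1"): ("0", -1, "carry"),  # 1 + 1 = 0, carry the 1
    ("carry", "0"): ("1", 0, "halt"),    # absorb the carry
    ("carry", " "): ("1", 0, "halt"),    # overflow into a new digit
}

tape = list(" 1011 ")                    # binary 11, padded with blanks
pos, state = 1, "right"
while state != "halt":
    write, move, state = RULES[(state, tape[pos])]
    tape[pos] = write                    # identical action, every step
    pos += move

print("".join(tape).strip())             # -> 1100 (binary 12)
```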
Literature
Author collective. 2023. wildcat no. 112.
Becker, Matthias Martin. 2017. Automatisierung und Ausbeutung: Was wird aus der Arbeit im digitalen Kapitalismus? Vienna: Promedia.
Bischoff, Manon (ed.). 2022. Künstliche Intelligenz: Vom Schachspieler zur Superintelligenz? Berlin: Springer.
Bostrom, Nick. 2014. Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press.
Dreyfus, Hubert L. 1978. What Computers Can’t Do: The Limits of Artificial Intelligence. New York: HarperCollins.
Heintz, Bettina. 1993. Die Herrschaft der Regel: Zur Grundlagengeschichte des Computers. Frankfurt: Campus.
Irrgang, Bernhard and Jörg Klawitter (eds.). 1990. Künstliche Intelligenz (Edition Universitas). Stuttgart: Wissenschaftliche Verlagsgesellschaft.
Jansen, Markus. 2018. Radikale Paradiese: Die Moderne und der Traum von der perfekten Welt. Würzburg: Königshausen & Neumann.
Konicz, Tomasz. 2018. AI and Capital: In the Singularity Longed for by Silicon Valley, the Automatic Subject Would Come into Itself. Available on exit-online.org.
Konicz, Tomasz. 2024. AI and Crisis Management. Available at https://exitinenglish.com/2024/08/01/ai-and-crisis-management/.
Konicz, Tomasz. 2024a. AI: The Final Boost to Automation. Available at https://exitinenglish.com/2024/08/03/ai-the-final-boost-to-automation/.
Kurz, Robert. 2012. Geld ohne Wert: Grundrisse zu einer Transformation der Kritik der Politischen Ökonomie. Berlin: Horlemann.
Larson, Erik J. 2021. The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do. Cambridge: Belknap.
Lenzen, Manuela. 2023. Der elektronische Spiegel: Menschliches Denken und künstliche Intelligenz. Munich: C.H. Beck.
Meyer, Thomas. 2018. Big Data and The Smart New World as the Highest Stage of Positivism. Available at: https://exitinenglish.com/2022/02/07/big-data-and-the-smart-new-world-as-the-highest-stage-of-positivism/.
Meyer, Thomas. 2020. “Zwischen Selbstvernichtung und technokratischem Machbarkeitswahn: Transhumanismus als Rassenhygiene von heute.” Available on exit-online.org.
Moody, Kim. 2019. “Schnelle Technologie, langsames Wachstum: Roboter und die Zukunft der Arbeit.” In Marx und die Roboter: Vernetzte Produktion, Künstliche Intelligenz und lebendige Arbeit, edited by Florian Butollo and Sabine Nuss, 132-155. Berlin: Dietz.
Morozov, Evgeny. 2013. Smarte neue Welt: Digitale Technik und die Freiheit des Menschen. Munich: Karl Blessing.
Nida-Rümelin, Julian and Nathalie Weidenfeld. 2023. Was kann und was darf künstliche Intelligenz? – Ein Plädoyer für Digitalen Humanismus. Munich: Piper.
O’Neil, Cathy. 2016. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. New York: Penguin.
Ortlieb, Claus Peter. 2013. “A Contradiction between Matter and Form: On the Significance of the Production of Relative Surplus Value in the Dynamic of Terminal Crisis.” In Marxism and the Critique of Value, edited by Neil Larsen, Mathias Nilges, Josh Robinson, and Nicholas Brown, 77-122, Chicago: M-C-M’.
Otte, Ralf. 2023. Künstliche Intelligenz für Dummies. Weinheim: Wiley-VCH.
Schnetker, Max Franz Johann. 2019. Transhumanistische Mythologie: Rechte Utopien einer technologischen Erlösung durch künstliche Intelligenz. Münster: Unrast.
Seppmann, Werner. 2017. Kritik des Computers: Der Kapitalismus und die Digitalisierung des Sozialen. Kassel: Mangroven.
Simanowski, Roberto. 2020. Todesalgorithmus: Das Dilemma der künstlichen Intelligenz. Vienna: Passagen.
Wagner, Thomas. 2016. Robokratie: Google, das Silicon Valley und der Mensch als Auslaufmodell. Cologne: PapyRossa.
Weizenbaum, Joseph. 1982. Die Macht der Computer und die Ohnmacht der Vernunft. Frankfurt: Suhrkamp.
[1] https://en.wikipedia.org/wiki/Stochastic_parrot
Originally published on exit-online.org.