Cross-reference: Jaron Lanier, »There is no A.I.«

April 17, 2024

Exactly one year ago, the computer scientist, contentious technology philosopher, and digital futurist Jaron Lanier published a piece worth discussing and critiquing in The New Yorker, titled »There is no A.I.«.

His core argument: an ›artificial intelligence‹ as it has been addressed and imagined in public (and social-scientific) discourse in recent years does not yet exist, nor will it in the foreseeable future. The very term ›artificial intelligence‹ (AI), he argues, gives rise to a series of misunderstandings and fosters false conceptions of the technology:

»The most pragmatic position is to think of A.I. as a tool, not a creature. […] Mythologizing the technology only makes it more likely that we’ll fail to operate it well—and this kind of thinking limits our imaginations, tying them to yesterday’s dreams. We can work better under the assumption that there is no such thing as A.I.

[…] The new programs mash up work done by human minds. What’s innovative is that the mashup process has become guided and constrained, so that the results are usable and often striking. […] It’s easy to attribute intelligence to the new systems; they have a flexibility and unpredictability that we don’t usually associate with computer technology. But this flexibility arises from simple mathematics.

A large language model like GPT-4 contains a cumulative record of how particular words coincide in the vast amounts of text that the program has processed. […] When you enter a query consisting of certain words in a certain order, your entry is correlated with what’s in the model; the results can come out a little differently each time, because of the complexity of correlating billions of entries. The non-repeating nature of this process can make it feel lively.«
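Lanier's description of a »cumulative record of how particular words coincide« can be made concrete with a toy sketch: a minimal bigram model (a deliberate, vastly simplified assumption on my part, not how GPT-4 actually works) that records which words follow which in a small corpus and then samples from that record, so that results »come out a little differently each time«:

```python
import random
from collections import defaultdict

# Toy corpus standing in for the "vast amounts of text" a real model processes.
corpus = (
    "the cat sat on the mat and the dog sat on the rug "
    "and the cat saw the dog"
).split()

# A cumulative record of how particular words coincide:
# for every word, remember which words followed it in the corpus.
follows = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current].append(nxt)

def generate(start: str, length: int = 8) -> str:
    """Mash up the recorded co-occurrences into a new word sequence."""
    word, out = start, [start]
    for _ in range(length):
        candidates = follows.get(word)
        if not candidates:
            break
        # Random sampling makes the output non-repeating from run to run.
        word = random.choice(candidates)
        out.append(word)
    return " ".join(out)

print(generate("the"))  # e.g. "the cat sat on the rug and the dog sat"
```

Even this crude sketch shows the point of the quote: the "liveliness" of the output comes from sampling over recorded human text, i.e. from simple mathematics, not from a creature inside the machine.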

Seen in this light, the outputs of today's AI systems rest on a blending of prior human achievements (though one could well debate here whether human creativity does not likewise rest on a recombination of existing works). Against this background, Jaron Lanier proposes understanding AI as a new form of social collaboration:

»In my view, the most accurate way to understand what we are building today is as an innovative form of social collaboration. […] After all, what is civilization but social collaboration? Seeing A.I. as a way of working together, rather than as a technology for creating independent, intelligent beings, may make it less mysterious—less like HAL 9000 or Commander Data. But that’s good, because mystery only makes mismanagement more likely.«

Nevertheless, Lanier considers it important that artificially generated content such as deepfakes be clearly labeled as such, and that AI solutions be subject to regulation altogether. That, however, first presupposes that AI outputs become more traceable:

»The systems must be made more transparent. We need to get better at saying what is going on inside them and why. This won’t be easy. The problem is that the large-model A.I. systems we are talking about aren’t made of explicit ideas. There is no definite representation of what the system ›wants,‹ no label for when it is doing a particular thing, like manipulating a person. There is only a giant ocean of jello—a vast mathematical mixing.«

According to Jaron Lanier, the key to greater system transparency and to a more productive approach to AI lies in a stronger focus on the people behind the AI models and their outputs:

»At the same time, it’s not true that the interior of a big model has to be a trackless wilderness. We may not know what an ›idea‹ is from a formal, computational point of view, but there could be tracks made not of ideas but of people. At some point in the past, a real person created an illustration that was input as data into the model, and, in combination with contributions from other people, this was transformed into a fresh image. Big-model A.I. is made of people—and the way to open the black box is to reveal them.

[…] Anything engineered—cars, bridges, buildings—can cause harm to people, and yet we have built a civilization on engineering. It’s by increasing and broadening human awareness, responsibility, and participation that we can make automation safe; conversely, if we treat our inventions as occult objects, we can hardly be good engineers. Seeing A.I. as a form of social collaboration is more actionable: it gives us access to the engine room, which is made of people.«
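Lanier does not spell out how such »tracks made not of ideas but of people« would be implemented. As a purely hypothetical sketch (all names and structures below are my assumptions, not from his text), one could imagine each training item carrying a provenance record of its human contributor, so that an output can be tallied back to the people whose work informed it:

```python
from dataclasses import dataclass

# Hypothetical provenance record: every training item remembers
# the real person behind it and where the contribution came from.
@dataclass(frozen=True)
class TrainingItem:
    content: str
    contributor: str
    source_url: str

def attribute(output_items: list[TrainingItem]) -> dict[str, int]:
    """Count how often each contributor's work informed an output."""
    credits: dict[str, int] = {}
    for item in output_items:
        credits[item.contributor] = credits.get(item.contributor, 0) + 1
    return credits

items = [
    TrainingItem("an illustration", "Alice", "https://example.org/a"),
    TrainingItem("a photograph", "Bob", "https://example.org/b"),
    TrainingItem("another illustration", "Alice", "https://example.org/c"),
]
print(attribute(items))  # {'Alice': 2, 'Bob': 1}
```

Opening the black box, in this reading, is less a question of explaining the mathematics than of keeping such records of the people in the engine room.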
