IFAMD Marktbemerkung 2024.04
Why “artificial intelligence” could still become the bad word of the century
“Computers beat us at chess – but fail at «rock, paper, scissors»,” writes the journalist Marie-Astrid Langer as a side note in an article about an avatar startup, probably without suspecting how deeply this touches the foundations of “artificial intelligence”. She hits the nail on the head!
Everyone knows the game, in German under the name “Schnick, Schnack, Schnuck”, and for everyone else it is quickly explained: two players simultaneously decide on one of the three symbols rock, paper or scissors, and a winner between the two is determined by a simple logic: scissors beats paper, paper beats rock, and rock beats scissors. Only if both players have chosen the same symbol is the round a “draw”, and the game is repeated until a winner is determined.
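For the technically minded reader, these rules fit in a few lines of code. The following is a minimal sketch in Python; the names and the repeat-until-decided helper are our own illustration, not any particular implementation:

import random

SYMBOLS = ("rock", "paper", "scissors")
BEATS = {"scissors": "paper", "paper": "rock", "rock": "scissors"}  # winner -> loser

def play_round(a, b):
    """Return 'draw', 'a' or 'b' for one simultaneous round."""
    if a == b:
        return "draw"
    return "a" if BEATS[a] == b else "b"

def play_until_decided(choose_a, choose_b):
    """Repeat drawn rounds until a winner is determined, as the rules demand."""
    while True:
        a, b = choose_a(), choose_b()
        result = play_round(a, b)
        if result != "draw":
            return result, a, b

# Example: two players drawing uniformly at random.
print(play_until_decided(lambda: random.choice(SYMBOLS),
                         lambda: random.choice(SYMBOLS)))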
From the perspective of the individual player, each round is a random experiment in which each of the three events “win”, “lose” or “draw” occurs with probability one third. One’s own choice of symbol merely determines how the opponent’s possible symbols are mapped onto the events “win”, “lose” or “draw”. Completely independent of the choice of one’s own symbol, every symbol the opponent can choose, and with it the result “win”, “lose” or “draw”, has a probability of one third.
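This equal distribution is easy to verify by simulation. A short sketch, again purely illustrative, in which the crude fixed strategy “always rock” stands in for any strategy whatsoever (the definitions repeat those of the sketch above):

from collections import Counter
import random

SYMBOLS = ("rock", "paper", "scissors")
BEATS = {"scissors": "paper", "paper": "rock", "rock": "scissors"}

def outcome(mine, theirs):
    if mine == theirs:
        return "draw"
    return "win" if BEATS[mine] == theirs else "lose"

# Against an opponent drawing uniformly at random, even "always rock"
# wins, loses and draws one third of the time each.
counts = Counter(outcome("rock", random.choice(SYMBOLS)) for _ in range(100_000))
print({k: round(v / 100_000, 3) for k, v in counts.items()})
# e.g. {'win': 0.334, 'draw': 0.333, 'lose': 0.333}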
Anyone who has ever devised a strategy for this game, determining their choice of symbol from past results or from other influences such as the opponent’s facial expressions, will quickly be inclined to contradict Ms. Langer’s quote. Of course, such strategies are programmable and can be simulated by computers; no “artificial intelligence” with self-learning neural networks or other bells and whistles is needed for that. You can even, and every computer can do this, play with signals, i.e. announce a certain symbol to your opponent in advance and gamble with your own credibility. But the truthful strategy must always be kept secret, otherwise you become predictable and have no chance of winning. Because of this ultimate information deficit, the game remains for the other side a random experiment with a uniform probability of one third per outcome.
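Such strategies are indeed trivially programmable. Here is a frequency-counting sketch of our own invention, which beats a biased human opponent but, against uniform randomness, can still do no better than one third:

from collections import Counter
import random

SYMBOLS = ("rock", "paper", "scissors")
BEATS = {"scissors": "paper", "paper": "rock", "rock": "scissors"}
COUNTER = {loser: winner for winner, loser in BEATS.items()}  # what beats what

def history_strategy(opponent_history):
    """Play the counter to the opponent's most frequent symbol so far."""
    if not opponent_history:
        return random.choice(SYMBOLS)
    favourite = Counter(opponent_history).most_common(1)[0][0]
    return COUNTER[favourite]

# The moment this strategy is known, it is itself exploitable: the
# information deficit, not the algorithm, is what keeps it viable.
print(history_strategy(["rock", "rock", "paper"]))  # -> 'paper'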
In the end, the game is always a game against chance, and if you play it against a computer it simply becomes uninteresting: you might just as well roll a die. So why is the game still so popular? Because it is always played when something is really at stake: when one of the two players has to do something, or when one receives something that the two cannot or do not want to share. That is exactly what makes the game exciting. Suddenly it is fun to use one’s own choice of symbol to influence how the other’s symbol is embedded in the result space “win”, “lose” or “draw”, even though the other’s symbol will in fact always remain a random event.
But with a computer you should play neither for who has to do something nor for who gets something. A computer should simply do everything it can do, and a computer cannot receive anything; it is only a thing itself. The exciting question, however, is whether a player who wants to play rock, paper, scissors with another player for a good or a task should have an avatar play for them. Possibly not as a pure random generator, but with a secret strategy that reacts to the game history, to the opponent’s facial expressions and to many other conceivable circumstances, or even with the signaling strategy indicated above; then, at the latest, we have to contradict Ms. Langer again. However: the difference is the responsibility! The question is who gets something or has to do something, i.e. who takes responsibility for “winning” or “losing”. If it is the person behind the machine, then it makes sense to use the machine as an aid. But that is exactly the problem our society is heading towards with the term “artificial intelligence” for increasingly powerful algorithms which, as dumb program code on a Turing machine, are anything but intelligent: from managers in companies to drivers (keyword: autonomous driving) to state officials such as judges (!), many people in charge today dream of being able to hand over their responsibility to an “artificial intelligence”. The term “artificial intelligence” suggests exactly that, and for this reason it can become the bad word of the century.

None other than Google experienced a high-profile shipwreck at the end of February 2024 with its chatbot Gemini, which promised to be the most deeply trained AI text generator currently available. After a few days, Google CEO Sundar Pichai had to say of his own product that certain controversial answers from Gemini were “completely unacceptable”. Google promises to correct certain opinions expressed by Gemini, but the question arises which is more worrying: Gemini’s “opinions” or their manual “correction”?
Just as an algorithm cannot take responsibility, it cannot have or represent an “opinion of its own”. An AI text generator only ever delivers the regression, put simply the “average”, of all the training data available to it. The most that can be expected of Google is that this training data exploits the available Internet universe more thoroughly than the AI text generators of other providers do. But every interpretation and expression of opinion, however questionable, that an AI text generator produces is always just a highly interesting condensation of all the information and opinions in the training data, and it must be interpreted and used exactly as such. The very approach of measuring Gemini by “its opinion” is presumptuous. Yet when Google intervenes in Gemini’s results, Google does in fact take responsibility for the answers returned, which, however, makes them far less interesting.
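The point can be made concrete with a deliberately tiny caricature of a text generator, a bigram model; corpus and names are invented for illustration, and a real system differs from this in scale rather than in kind:

from collections import Counter, defaultdict

# Toy "text generator": its every answer is a statistic of the corpus.
corpus = "the cat sat on the mat and the cat ate the fish".split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def most_likely_next(word):
    """Return the most frequent successor of `word` in the training data."""
    return bigrams[word].most_common(1)[0][0]

print(most_likely_next("the"))  # 'cat': an average of the data, not an opinion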
Dr. Gregor Berz
IFAMD GmbH, in April 2024