"A certain danger lurks there": how the inventor of the first chatbot turned against AI [View all]
Computer scientist Joseph Weizenbaum was there at the dawn of artificial intelligence but he was also adamant that we must never confuse computers with humans...
"Some subjects have been very hard to convince that Eliza (with its present script) is not human," Weizenbaum wrote. In a follow-up article that appeared the next year, he was more specific: one day, he said, his secretary requested some time with Eliza. After a few moments, she asked Weizenbaum to leave the room. "I believe this anecdote testifies to the success with which the program maintains the illusion of understanding," he noted....
For Weizenbaum, judgment involves choices that are guided by values. These
values are acquired through the course of our life experience and are necessarily qualitative: they cannot be captured in code. Calculation, by contrast, is quantitative. It uses a technical calculus to arrive at a decision. Computers are only capable of calculation, not judgment. This is because they are not human, which is to say, they do not have a human history: they were not born to mothers, they did not have a childhood, they do not inhabit human bodies or possess a human psyche with a human unconscious, and so do not have the basis from which to form values...
The later Weizenbaum was increasingly pessimistic about the future, much more so than he had been in the 1970s.
Climate change terrified him. Still, he held out hope for the possibility of radical change. As he put it in a January 2008 article for Süddeutsche Zeitung: "The belief that science and technology will save the Earth from the effects of climate breakdown is misleading. Nothing will save our children and grandchildren from an Earthly hell. Unless: we organise resistance against the greed of global capitalism..."
https://www.theguardian.com/technology/2023/jul/25/joseph-weizenbaum-inventor-eliza-chatbot-turned-against-artificial-intelligence-ai
There is SO MUCH between the ellipses. Out of curiosity, wondering what AI would make of this, I used TLDR on BingBot and got this not too bad, though very superficial, result:
Joseph Weizenbaum was a computer scientist who created the first chatbot in 1966¹. He was also adamant that we must never confuse computers with humans¹. In an article published by The Guardian, it was reported that Weizenbaum turned against AI and warned of the dangers of AI¹. The article also mentioned that Weizenbaum believed that there is a certain danger lurking in AI¹.
Unlike GPT, Bing gives sources. All of the superscripted refs are to the Guardian article. Other sources, unreferenced in the synopsis, are also listed.
Long ago I wrote a GWBASIC version of Eliza. Even though I knew precisely how the responses were generated, as long as I took care not to break the input, the resulting output gave quite an eerie feeling of an actual conversation.
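For anyone curious what "how the responses were generated" amounts to: ELIZA-style programs spot a keyword pattern in the input, reflect pronouns, and splice the captured fragment into a canned reply template. A minimal sketch in Python (the patterns, templates, and reflection table here are illustrative stand-ins, not Weizenbaum's original DOCTOR script):

```python
import random
import re

# Keyword rules: a pattern to match in the user's input, and reply
# templates that reuse the captured fragment. Illustrative only.
RULES = [
    (re.compile(r"\bi am (.*)", re.IGNORECASE),
     ["Why do you say you are {0}?", "How long have you been {0}?"]),
    (re.compile(r"\bi feel (.*)", re.IGNORECASE),
     ["Why do you feel {0}?", "Do you often feel {0}?"]),
    (re.compile(r"\bmy (.*)", re.IGNORECASE),
     ["Tell me more about your {0}."]),
]

# Pronouns are reflected so "my exams" comes back as "your exams".
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "I"}

def reflect(fragment: str) -> str:
    """Swap first-person words for second-person ones, word by word."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(line: str) -> str:
    """Return a reply for the first rule whose pattern matches."""
    for pattern, templates in RULES:
        match = pattern.search(line)
        if match:
            return random.choice(templates).format(reflect(match.group(1)))
    return "Please go on."  # fallback when no keyword matches

print(respond("I am worried about my exams"))
```

The eerie effect comes entirely from the reflection step: the program echoes your own words back with the pronouns flipped, which reads as attentive listening even though no understanding is involved.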
For those with interest, GitHub has the source code of Weizenbaum's program, implemented in Java OOP by Charles Hayden. Fascinating to explore. As for myself, I need to "step away from the computer" and get going in the Real World this morning!!
https://github.com/codeanticode/eliza