The Banal Evil of AI Safety – by Ben Recht – arg min

    This, 100000%:

    The “nonprofit” company OpenAI was launched under the cynical message of building a “safe” artificial intelligence that would “benefit” humanity. The company adopted a bunch of science fiction talk popular amongst the religious effective altruists and rationalists in the Bay Area. The AI they would build would be “aligned” with human values and built upon the principles of “helpfulness, harmlessness, and honesty.” [...]

    The general blindness of AI safety developers to what harm might mean is unforgivable. These people talked about paperclip maximization, where their AI system would be tasked with making paperclips and kill humanity in the process. They would ponder implausible hypotheticals of how your robot might kill your pet if you told it to fetch you coffee. Since ELIZA, they failed to heed the warnings of countless researchers about the dangers of humans interacting with synthetic text. And here we are, with story after story coming out about their products warping the mental well-being of the people who use them.

    You might say that the recent news stories of a young adult killing himself, or a VC having a public psychotic break on Twitter, or people despairing the death of a companion when a model is changed are just anecdotes. Our Rationalist EA overlords demand you make “arguments with data.” OK Fine. Here’s an IRB approved randomized trial showing that chatbots immiserate people. Now what?

    Tags: ai llms safety openai chatgpt gemini suicide mental-health