Links for 2023-09-27

  • LLMs as hall monitors

    lcamtuf with a solid prediction for the future of content moderation: it’s LLMs.

    Here’s what I fear more, and what’s already coming true: LLMs make it possible to build infinitely scalable, personal hall monitors that follow you on social media, evaluate your behavior, and dispense punishment. It is the cost-effective solution to the content-moderation woes that society demands Big Tech address. And here’s the harbinger of things to come, presented as a success story: https://pcgamer.com/blizzard-bans-250000-overwatch-2-cheaters-says-its-ai-that-analyses-voice-chat-is-warning-naughty-players-and-can-often-correct-negative-behaviour-immediately/

    And the thing is, it will work, and it will work better than human moderators. It will reduce costs and improve outcomes. Some parties will *demand* that other platforms follow. I suspect the chilling effect on online speech will be profound when there is nothing you can get away with – and when there is no recourse for errors other than appealing to “customer service” run by the same LLM.

    Human moderation sucks. It’s costly, inconsistent, and carries privacy risks. It’s a liability if you’re fighting abuse or child porn. But this is also a plus: it forces us to apply moderation judiciously and leaves some space for unhindered expression to remain.

    (tags: moderation llms future ai ml hall-monitors content mods)