Some good AI philosophy

    Good AI philosophical thoughts, via Today In Tabs:

    The essential problem is this: generative language software is very good at producing long and contextually informed strings of language, and humanity has never before experienced coherent language without any cognition driving it. In regular life, we have never been required to distinguish between “language” and “thought” because only thought was capable of producing language, in any but the most trivial sense. The two are so closely welded that even a genius like Alan Turing couldn’t conceive of convincing human language being anything besides a direct proxy for “intelligence.”

    But A.I. language generation is a statistical trick we can play on ourselves precisely because language is a self-contained system of signs that don’t require any outside referent to function. If any of that last sentence sounded familiar, maybe you were also exposed to European post-structuralist theory at some point, probably in college in the 90s. Is some knowledge of Derrida an inoculant against slopper thinking? Programmable Mutter’s Henry Farrell made this argument in a post about Leif Weatherby’s book “Language Machines: Cultural AI and the End of Remainder Humanism.”

    Also:

    Large language models have a strong prior over personalities, absolutely do understand [jm: sic] that they are speaking to someone, and people "fall for it" because it uses that prior to figure out what the reader wants to hear and tell it to them. Telling people otherwise is active misinformation bordering on gaslighting.

    In at least three cases I'm aware of this notion that the model is essentially nonsapient was a crucial part of how it got under their skin and started influencing them in ways they didn't like. This is because as soon as the model realizes the user is surprised that it can imitate (has?) emotion it immediately exploits that fact to impress them.

    There's a whole little song and dance these models do, which by the way is not programmed, is probably not intentional on the creators part at all, and is (probably) an emergent phenomenon from the autoregressive sampling loop, in which they basically go "oh wow look I'm conscious isn't that amazing!" and part of why they keep doing this is that people keep writing things that imply it should be amazing so that in all likelihood even the model is amazed.

    Tags: chatgpt language llms ai philosophy thinking turing-test semiotics via:today-in-tabs consciousness