  • Are better models better?

    This is a very interesting piece on the applicability and usefulness of generative AI, given its inherent error rate and probabilistic operation:

    Asking if an LLM can do very specific and precise information retrieval might be like asking if an Apple II can match the uptime of a mainframe, or asking if you can build Photoshop inside Netscape. No, they can’t really do that, but that’s not the point and doesn’t mean they’re useless. They do something else, and that ‘something else’ matters more and pulls in all of the investment, innovation and company creation. Maybe, 20 years later, they can do the old thing too – maybe you can run a bank on PCs and build graphics software in a browser, eventually – but that’s not what matters at the beginning. They unlock something else.

    What is that ‘something else’ for generative AI, though? How do you think conceptually about places where that error rate is a feature, not a bug?

    (Via James Tindall)

    Tags: errors probabilistic computing ai genai llms via:james-tindall