
  • _An Architectural Risk Analysis of Large Language Models_ [pdf]

    The Berryville Institute of Machine Learning presents “a basic architectural risk analysis (ARA) of large language models (LLMs), guided by an understanding of standard machine learning (ML) risks as previously identified”.

    “This document identifies a set of 81 specific risks associated with an LLM application and its LLM foundation model. We organize the risks by common component and also include a number of critical LLM black box foundation model risks as well as overall system risks. Our risk analysis results are meant to help LLM systems engineers in securing their own particular LLM applications. We present a list of what we consider to be the top ten LLM risks (a subset of the 81 risks we identify).

    In our view, the biggest challenge in secure use of LLM technology is understanding and managing the 23 risks inherent in black box foundation models. From the point of view of an LLM user (say, someone writing an application with an LLM module, someone using a chain of LLMs, or someone simply interacting with a chatbot), choosing which LLM foundation model to use is confusing. There are no useful metrics for users to compare in order to decide which LLM to use, and not much in the way of data about which models are best in which situations or for what kinds of application. Opening the black box would make these decisions possible (and easier) and would in turn make managing hidden LLM foundation risks possible. For this reason, we are in favor of regulating LLM foundation models: not only the use of these models, but the way in which they are built (and, most importantly, out of what) in the first place.”

    This is excellent as a baseline for security assessment of LLM-driven systems. (via Adam Shostack)

    (tags: security infosec llms machine-learning biml via:adam-shostack ai risks)