An artificial intelligence model can be made to spout gibberish if a single one of the many billions of numbers that compose it is altered.
Large language models (LLMs) like the one behind OpenAI’s ChatGPT contain billions of parameters, or weights: the numerical values that represent each “neuron” of their neural network. These are the values that get tuned and tweaked during training so the AI can learn abilities such as generating text. Input is passed through these weights, which determine the most statistically likely output.…
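To see why a single altered number can be so disruptive, consider a minimal sketch of what a one-bit change can do to a weight stored as a floating-point number. This is an illustrative assumption, not the article's own experiment: the 64-bit format and the choice of bit are hypothetical, but they show how flipping one high bit can swing a value by hundreds of orders of magnitude.

```python
import struct

def flip_bit(value: float, bit: int) -> float:
    """Return `value` with one bit of its 64-bit binary representation flipped."""
    # Reinterpret the float's bytes as a 64-bit integer, toggle one bit,
    # then reinterpret the result as a float again.
    (as_int,) = struct.unpack("<Q", struct.pack("<d", value))
    (flipped,) = struct.unpack("<d", struct.pack("<Q", as_int ^ (1 << bit)))
    return flipped

weight = 0.5                       # a typical small neural-network weight
corrupted = flip_bit(weight, 62)   # flip the highest exponent bit
print(weight, corrupted)           # the corrupted weight is astronomically large
```

A weight of 0.5 becomes roughly 9 × 10^307 here, which would overwhelm any computation it feeds into and help explain why a single corrupted value can derail a model's output.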