Jeremy Howard, one of the pioneers of training and fine-tuning large language models, has a great article here on “What policy makers need to know about AI (and what goes wrong if they don’t)”. Despite the technical subject, the article is very accessible to everyday readers.
The article’s broad point is that policy makers don’t understand how large language models work. Howard notes that “AI models are general purpose computation devices, which can be quickly and easily modified and used for any purpose. Therefore, the requirement that they must be shown to be safe prior to release, means that they can’t be released at all.”
The full article is worth reading if you’re building in AI or are interested in how this powerful new technology will be regulated.