Since large language models (LLMs) exploded into the mainstream, there have been multiple calls from tech leaders and AI scientists to pause development and weigh perceived risks on a par with a pandemic or nuclear war.
The reason behind this narrative is safety. Policymakers, labs and independent experts say they need time to develop and implement safety protocols for these AI technologies, and to mitigate the risk of a doomsday or "extinction" event.
This rhetoric has, of course, rung alarm bells among the general public. The most notable escalation of the conversation came in March, when an open letter calling for an immediate pause on training experiments connected to LLMs was signed by some of the most influential and authoritative names in tech.