The speed of AI development and its effects on risk assessment

The increasing power of the latest artificial intelligence systems is stretching traditional evaluation methods to the breaking point, posing a challenge to businesses and public bodies over how best to work with the fast-evolving technology.

Flaws in the evaluation criteria commonly used to gauge performance, accuracy, and safety are being exposed as more models come to market, according to people who build, test, and invest in AI tools. The traditional tools are easy to manipulate and too narrow for the complexity of the latest models, they said.

The accelerating technology race sparked by the 2022 release of OpenAI’s chatbot ChatGPT and fed by tens of billions of dollars from venture capitalists and big tech companies, such as Microsoft, Google, and Amazon, has obliterated many older yardsticks for assessing AI’s progress.

“A public benchmark has a lifespan,” said Aidan Gomez, founder and chief executive of AI start-up Cohere. “It’s useful until people have optimized [their models] to it or gamed it. That used to take a couple of years; now it’s a couple of months.”

Governments are also struggling with how to deploy and manage the risks of the latest AI models. Last week the US and UK signed a landmark bilateral arrangement on AI safety, building on new AI institutes that the two countries set up last year to “minimize surprise… from rapid and unexpected advances in AI.”