Source: https://arstechnica.com/
The increasing power of the latest artificial intelligence systems is stretching traditional evaluation methods to the breaking point, posing a challenge to businesses and public bodies over how best to work with the fast-evolving technology.
Flaws in the evaluation criteria commonly used to gauge performance, accuracy, and safety are being exposed as more models come to market, according to people who build, test, and invest in AI tools. The traditional tools are easy to manipulate and too narrow for the complexity of the latest models, they said.
The accelerating technology race sparked by the 2022 release of OpenAI's chatbot ChatGPT, and fed by tens of billions of dollars from venture capitalists and big tech companies such as Microsoft, Google, and Amazon, has obliterated many older yardsticks for assessing AI's progress.
"A public benchmark has a lifespan," said Aidan Gomez, founder and chief executive of AI start-up Cohere. "It's useful until people have optimized [their models] to it or gamed it. That used to take a couple of years; now it's a couple of months."
Governments are also struggling with how to deploy and manage the risks of the latest AI models. Last week the US and UK signed a landmark bilateral arrangement on AI safety, building on new AI institutes that the two countries set up last year to "minimize surprise … from rapid and unexpected advances in AI."