“The foundations of AI systems are flaky.”

Krishna Gummadi, a director at the Max Planck Institute for Software Systems, explains that AI models are not inherently active or agentic. Rather, it is the computing architecture embedding the model in a platform that gives it agency: the more responsibility or decision-making power the architecture grants the model, the more agentic it becomes. ChatGPT's core engine, for example, is a GPT model; it is the surrounding architecture, which connects the model to tools such as search engines and calculators, that enables its agency. Over the last two years, AI models have increasingly been equipped with tools they can invoke at their own discretion, and a recent study found that GPT models use these tools in a growing fraction of conversations. As AI agents become more capable, they bring both benefits and risks to society.
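The separation Gummadi describes can be illustrated with a minimal sketch: the model itself only maps text to text, while a surrounding loop (the "computing architecture") decides which tools it may invoke and executes its tool calls. Everything here is illustrative, not any vendor's actual API; `stub_model` stands in for a real language model, and the `CALL`/`FINAL` protocol is an assumption made for the example.

```python
from typing import Callable, Dict

# Registry of tools the architecture exposes to the model.
# A hypothetical "calculator" tool evaluates simple arithmetic.
TOOLS: Dict[str, Callable[[str], str]] = {
    "calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),
}

def stub_model(prompt: str) -> str:
    """Stand-in for an LLM: emits a tool call when it sees arithmetic."""
    if any(ch.isdigit() for ch in prompt):
        return "CALL calculator 2 + 3"
    return "FINAL I can only answer arithmetic questions here."

def run_agent(prompt: str) -> str:
    """The agent loop: the architecture, not the model, executes tool calls."""
    reply = stub_model(prompt)
    while reply.startswith("CALL "):
        _, tool_name, arg = reply.split(" ", 2)
        result = TOOLS[tool_name](arg)  # invoke the tool the model chose
        # Feed the tool result back; a single step suffices for this sketch.
        reply = f"FINAL The answer is {result}."
    return reply.removeprefix("FINAL ").strip()

print(run_agent("What is 2 + 3?"))  # → The answer is 5.
```

The point of the sketch is that granting the model more tools, or more loop iterations, widens its decision-making power without changing the model itself.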