Agentic AI Could Be "The Next Productivity Revolution" But Is "Far From Ready," a Researcher Warns
The technology has the potential to broaden access to computing and automate drudgery, but its privacy implications overshadow its promise.
Flavio Esposito, associate professor of computer science at Saint Louis University, is urging caution on the use of "agentic" artificial intelligence able to take actions on your behalf — calling the technology "far from ready," even as its capabilities expand.
"OpenAI recently released ChatGPT Atlas, a new Internet browser guided by an AI agent," Esposito, an AI and computer science researcher, explains of his concerns. "Unlike traditional browsers that wait for you to click and type, these browsers (Atlas is not the first) can act on your behalf. You don't browse the web; you tell the browser what you want. OpenAI advertises it with an example on how it is easier to 'book the cheapest flight to London next Friday,' and it will search, compare, and complete the purchase automatically."
Having a digital personal assistant available all hours of the day to handle quotidian drudgery is a heck of a sales pitch, but Esposito warns that caution is required. These "agentic" systems, he says, are indeed capable of automating tedious tasks, and can also provide assistance to those with disabilities or limited technical skills — helping to bridge the digital divide.
The technology, however, is "far from ready," Esposito says: users must grant high levels of access to the agents, which are based on statistical-continuation large language model (LLM) technology, handing them full control of browser software that had previously been under direct human command.
"Agentic AI can either become the next great productivity revolution or the next great breach of trust," Esposito warns, pointing to existing research on "indirect prompt injection" in which malicious websites and emails contain strings interpreted by the agent as instructions from its user and executed accordingly, "and which path we take depends on how deliberately we build the guardrails today."
The researcher has also highlighted other under-examined consequences of putting a large language model in charge of a browser, given LLMs' propensity for "hallucinations," in which the statistically-selected continuation tokens returned in response to a prompt bear no resemblance to fact or reality yet are presented as reasoned truth. Among them: whether such an agent's access to personal data is covered by privacy frameworks and regulations, including HIPAA and FERPA in healthcare and education respectively.
So far, there are few answers, but Esposito is leading the charge with a National Science Foundation-funded project on attacks against an AI-powered open radio access network system designed for cellular networks, the results of which may inform similar research into agentic AI systems.
Main article image courtesy of Hanna Barakat & Archival Images of AI + AIxDESIGN, CC-BY 4.0.
Freelance journalist, technical author, hacker, tinkerer, erstwhile sysadmin. For hire: freelance@halfacree.co.uk.