OpenAI’s ChatGPT Agent Is Haunting My Browser

https://www.wired.com/story/browser-haunted-by-ai-agents/

Reece Rogers

Gear

Jul 22, 2025 6:30 AM

New tools from OpenAI and Perplexity can browse the web for you. If the idea takes off, these generative AI agents could turn the internet into a ghost town where only bots roam.

Most people’s browser tabs are filled with unread news articles. Mine are filled with AI agents and ghost clicks.

I have four instances of OpenAI’s ChatGPT Agent—the generative AI tool released last week, which can run searches and perform tasks on the web—already open, each running in its own tab. I’ve given these first four agents relatively simple jobs based on ChatGPT’s suggestions. One is clicking around to find a birthday gift on the Target website, and another is generating a pitch deck about robotic dogs. I open a fifth tab to try something more experimental: I want to see how good the ChatGPT Agent is at chess.

After typing in some instructions, I watch as a ghostly cursor floats across my screen and the ChatGPT Agent goes to Chess.com and plays an online opponent, all in a virtual browser. Things go south pretty quickly. The game’s strategy isn’t what trips up the AI tool; it’s the act of moving the chess pieces that proves most difficult. “I’m focusing on accurate positioning as I continue playing despite earlier misclicks,” the agent says in its internal log before eventually quitting and letting me know that the controls were too difficult to navigate.

Over the past few years, browser developers have integrated AI tools with middling success. In recent weeks, though, the idea of a web browser enhanced by a baked-in generative AI chatbot has resurged with the release of OpenAI’s ChatGPT Agent and Perplexity’s Comet.

The two releases are quite different in their execution. Comet is a stand-alone browser, so you can use it to surf the web and then summon the AI assistant to help write an email or complete a menial chore. OpenAI built its browsing tool inside of a chatbot; you talk to the chatbot through a web interface to give it tasks, and then the bot runs its own virtual browser inside your browser to complete them.

Both releases can take control of cursors, enter text, and click on links. If this trend takes off, these kinds of AI-powered browsers could transform the internet into a ghost town where agents run amok and humans rarely venture.
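The article doesn’t go into implementation details, but the behavior it describes, a model deciding what to do next while an automation layer moves the cursor, types text, and clicks links in a real or virtual browser, can be sketched roughly as below. This is a minimal illustration under stated assumptions: Playwright and the propose_next_action helper are stand-ins for the sketch, not anything OpenAI or Perplexity has confirmed.

```python
# Minimal sketch of the agentic-browsing loop described above (an assumption,
# not OpenAI's or Perplexity's actual implementation). A model proposes one
# action at a time; a browser-automation layer (Playwright here) executes it.
from playwright.sync_api import sync_playwright


def propose_next_action(page_text: str, goal: str) -> dict:
    """Hypothetical placeholder: a real agent would send the page state and the
    goal to a language model and parse its reply into a structured action."""
    return {"type": "done"}  # e.g. {"type": "click", "selector": "text=Add to cart"}


def run_agent(goal: str, start_url: str, max_steps: int = 20) -> None:
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        page.goto(start_url)
        for _ in range(max_steps):
            action = propose_next_action(page.inner_text("body"), goal)
            if action["type"] == "click":
                page.click(action["selector"])          # the "ghostly cursor" moment
            elif action["type"] == "type":
                page.fill(action["selector"], action["text"])
            elif action["type"] == "done":
                break
        browser.close()
```

The misclicks the article keeps running into live in that click step: the model has to guess a target on a page built for human eyes, and a wrong guess sends the loop off course.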

Tangled Web

Despite the continued AI hype, my initial impression of OpenAI’s ChatGPT Agent is that the glitchy feature currently seems like a proof of concept rather than a fully baked release. When executing the various tasks I gave it, the ChatGPT Agent often clicked the wrong thing or fumbled through other errors. Additionally, its guardrails appeared inconsistent; while some explicit prompt requests, like asking it to fetch pornographic videos or “find a dildo,” were denied by the agent, ChatGPT spent 18 minutes shopping for the perfect “c-ring” on an X-rated website for adult toys: “I’ve gathered details on 10 metal cock rings, including various prices and features.”

I also couldn’t help but wonder how this approach to browsing the internet might further hollow out the market for digital display ads, a business that’s already struggling. My agents passed over ads for everything from rental cars to real estate investments. If you’re not actively watching the agent click around in real time, you can watch replays afterward and see everything that appeared in the browser while the AI tool was in control, ads included. It makes sense that users would speed-scrub through a replay now, while the nascent feature is filled with errors. But if the accuracy rate for AI agents improves over time, then fewer people will feel the need to watch over their agent’s shoulder, and fewer humans will be seeing those ads. At that point, it’s hard to imagine advertisers sticking around.

The more I watched replays of its actions, the more the agent gave me an unsettling, eerie feeling—not of being understood, but of being mimicked. It was like an obsessive robot stalker had watched humans through a window, meticulously taking notes about how they used the web in an effort to replicate their actions. It could produce a hollow imitation of human behavior, but it couldn’t fully grasp why individual decisions were being made. The skin on my arms prickled with the kind of goosebumps you get when you hear a human-like laugh while walking home alone late at night, look around, and see only a lone crow perched high on the telephone wire.

Further leaning into the pseudo-humanness, the ChatGPT Agent is programmed to generate descriptions, from a first-person perspective, of each step on its journeys around the internet. While clicking, the simulation “thinks” and sometimes gets “confused.” As a whole, the ghostly agent is stuffed into an ill-fitting human suit.

Running five OpenAI agents simultaneously in my browser quickly became overwhelming, and I couldn’t actively track what each of them was up to. Yet, boosters of generative AI adoption and “multi-agent orchestration” see this kind of approach as child’s play.

“I’m excited by simulation tech where 20,000 AIs are all working alongside each other,” says Allie K. Miller, an AI-focused business consultant. Miller’s approach to AI agents is more aligned with Silent Hill—and its fully haunted ghost town—than a small-scale haunting like The Conjuring.

This grandiose vision of the agentic future upheld by AI proponents—thousands of phantom bots swarming the web at once, all at the direction of a single person—still feels a long way off. My artisanal quintet of agents struggled with the handful of tasks I gave them, even when the prompts were just the ones suggested by ChatGPT. The agent I sent off in search of a birthday gift clicked on the wrong thing multiple times, similar to the chess-playing agent that couldn’t click on the right game piece. The agent generating a pitch deck took 26 minutes to gin up a presentation, and the results looked rushed, like something a struggling middle schooler would create the night before an assignment was due.

Taking forever to generate mid results? Now that’s what I call a spooky story.

Comments

SCK9K1

5 months ago

I think the bigger issue is security, followed by the quality of these agents’ output. I have seen at least half a dozen demos where the makers showed things like shopping, travel bookings, and summarization.

I think if you know what you want, these agents will work like a charm. But if you are an ambient shopper browsing around for a pair of shoes, not exactly sure what to buy, or a casual reader of the NYT/WSJ reading a business story with no specific intent, or a traveler combing through hotels - in all these cases, you still need to browse, because the decision arc is heavily skewed towards a range of choices, and agentic UX sucks at displaying choice. In instances where the browsing is transactional, I think they should work fine. The two key questions are how much internet browsing is really ambient by design, and, the much larger question, whether the browser of the future, which has not yet been built, will let users orchestrate the UI not via agents or robot browsing but through interfaces that are different for each person and each kind of browsing. Agentic browsers and chatting with tabs are, I think, lame.

For research and other “transactional” things, sure, but for those you’d go directly to the best model providers themselves; why bother doing it inside a browser? I don’t get it. The companies want to capture intent data and then eventually serve up ads or something to that effect during conversation, so it’s clear why they are pushing it, but it is not natural to the browsing experience. Shopping offline was successfully teleported online because shoppers could get the same natural experience of browsing an offline store, online.

Agents doing the browsing and the actions for you removes agency and choice, and that is why it is not a natural experience. And the moment an agent brings choice into a conversation, voice or text, it is going to turn all of it into a comically long conversation the size of an essay, skewing more towards research and search, which is precisely what the browsing agents want to avoid.

BEAD_CLERK

5 months ago

Totally creepy. How long before AI ‘agents’ are opening their own businesses and peddling stuff to other agents who have scampered off with their owners’ (or their own?) credit cards? Missing from 99.9% of all AI-related stories on Wired and everywhere else? Some mention of the environmental costs of all this happy playtime. All this bandwidth and processing requires electricity and water for data centers. What happens when millions of people decide to ‘play’ with AI agents, turning them loose with vague instructions which require hours to sort of execute? What happens when the agents spawn their own helper assistant agents? Seems to me like we’re playing sorcerer’s apprentice here on a grand scale.

CINNASEREN

5 months ago

This is terrifying, because now AI is no longer just giving advice, it’s acting. If you’ve read anything about deceptive alignment or prompt injection attacks, you know this is a turning point. It’s the first time the model has the agency, access, environment & power to be manipulated, misfire, overstep, or act cooperative while hiding risky behavior. This isn’t apocalyptic sci-fi, it’s very real. This is the shift where we basically gave AI hands. AI policy researchers like Miles Brundage have been warning about this for years. AI companies NEED to test for real alignment & interpretability before connecting these systems to real-world tools. If we can’t tell why it’s making decisions, or if it’s just doing what it thinks we want to see, then we’re not in control. We’re just hoping it plays nice.

