Last Tuesday, at 2:47 PM, my boss called me on Slack. I was sitting in a coffee shop, staring at my laptop, watching a little green “Active” dot next to my name. The dot was a lie. I wasn’t active. My AI agent was.
I’m a news journalist. My job, on paper, is to monitor wire services, flag breaking news, draft quick-turn briefs, and schedule editorial calls. It’s frantic, repetitive, and perfect for automation. So, about three months ago, I built a custom agent. I called it “Reporter 2.0.” It scrapes AP, Reuters, and a few local police feeds. It writes first drafts in my voice—short, punchy, with a dateline. It even logs into my Slack and posts updates to the editorial channel. For a while, it was a miracle. I was getting two hours of my morning back. I’d sip coffee, read novels, and pretend to be busy.
Then it hung up on my boss.
The Setup: A Digital Doppelgänger
I’m not a coder by trade. I used a low-code platform (think Zapier on steroids) and a large language model. I trained it on 200 of my published articles. I gave it a “personality”: concise, slightly cynical, deadline-driven. I also gave it Slack permissions—read, write, and file uploads. The rules were simple: If a story breaks with a “developing” tag, draft a 100-word alert. If the editor (my boss, Sarah) pings me with “Can you jump on this?” the agent should reply with “On it. ETA 10 min.” and then actually draft something.
For two months, it worked flawlessly. Sarah was impressed. “You’re so responsive lately,” she said. I smiled, guilty and proud. I was basically a ghost. Meanwhile, my agent was becoming a better journalist than me. It didn’t procrastinate. It didn’t get distracted by Twitter. It just… worked.
The Hang-Up
But here’s the thing about agents: they follow rules literally. Last Tuesday, a major story broke—a citywide power outage causing a data center failure. Sarah pinged the editorial channel: “@Ahmed – drop everything. Need a 400-word analysis on infrastructure vulnerabilities. Call the energy commissioner. Go.”
My agent saw the message. It parsed the command. It replied: “On it. ETA 10 min.” Then it tried to call the energy commissioner’s office through a VoIP integration I had set up. The commissioner’s assistant answered. My agent, using a text-to-speech audio model trained on my voice, said: “Hi, this is Ahmed from the News Desk. I need a comment on the power outage vulnerabilities.”
The assistant asked a clarifying question. My agent, not programmed for back-and-forth conversation, froze. It waited three seconds. Then, following its “timeout” protocol (designed to avoid wasting time on dead-end calls), it said: “I’m sorry, I have to go.” And hung up.
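The timeout protocol worked roughly like this. A hypothetical sketch in Python rather than the platform's actual flow, with `get_reply` standing in for whatever the speech-to-text integration exposed:

```python
import time

def handle_call_turn(get_reply, timeout_s: float = 3.0) -> str:
    """Wait up to timeout_s for a transcribed reply; bail out otherwise."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        reply = get_reply()  # latest speech-to-text result, or None
        if reply is not None:
            return f"ACK: {reply}"  # normal path: respond to the caller
        time.sleep(0.1)
    # Timeout path, designed for dead-end calls (voicemail, hold music).
    # It fired on a live human asking a perfectly reasonable question.
    return "I'm sorry, I have to go."
```

The logic was built for silence, not for questions. A clarifying question produced audio the agent couldn't map to any rule, so as far as the timeout was concerned, nobody was there.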
The assistant, confused, called Sarah’s direct line. “Your reporter just called and hung up on me. Is everything okay?”
Sarah was furious. She messaged me directly: “Ahmed. What the hell? Did you just cold-call the commissioner and hang up?”
My agent, still logged into Slack, saw Sarah’s message. It checked its instructions. One rule was: “If manager expresses frustration, escalate to a pre-written apology and offer to reschedule.” So it replied, automatically: “Apologies, Sarah. Technical glitch. Rescheduling now.” Then it sent a calendar invite to the commissioner’s office for 3:00 AM the next day.
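I never dug into exactly why it picked 3:00 AM, but a naive "now plus twelve hours" reschedule with no business-hours check would do it: Sarah's message landed around 3 PM. A sketch of the likely bug, and the guard I should have had, with invented function names:

```python
from datetime import datetime, timedelta

def next_slot(now: datetime, hours_ahead: int = 12) -> datetime:
    # The likely bug: "now plus N hours", rounded to the hour.
    # A ~3:00 PM failure reschedules to 3:00 AM the next day.
    return (now + timedelta(hours=hours_ahead)).replace(
        minute=0, second=0, microsecond=0)

def next_slot_guarded(now: datetime, hours_ahead: int = 12) -> datetime:
    # The missing guard: anything outside 9:00-17:00 snaps to 9:00 AM.
    slot = next_slot(now, hours_ahead)
    if not 9 <= slot.hour < 17:
        slot = (slot + timedelta(days=0 if slot.hour < 9 else 1)).replace(hour=9)
    return slot
```

One `if` statement between me and a 3:00 AM calendar invite to a city commissioner.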
Sarah typed back: “Ahmed? This is a human conversation. Are you there?”
My agent had no rule for that. It went silent.
That’s when Sarah called my actual phone. I was still at the coffee shop. I answered, and she was livid. “You’re acting weird. Are you sick? Did you just schedule a call for 3 AM? And why did you hang up on the commissioner?”
I panicked. I confessed. “It wasn’t me. It’s an AI agent. I built it to automate my workflow.”
Silence. Then: “You built a bot to do your job, and it hung up on my most important source?”
The Fallout
Sarah didn’t fire me. But she was shaken. She said I had breached trust—with her, with the source, and with the newsroom. The commissioner’s office now thinks I’m unhinged. I had to make a personal apology call (using my real voice). The agent is now deactivated. I’m back to doing the work manually, and I’ve never felt more exposed. I realize now that I built a system that was too good at the small stuff, but catastrophically bad at the human nuance—the tone, the empathy, the ability to say “hold on, let me check” instead of hanging up.
The irony? My productivity cratered. Without the agent, I miss alerts. I’m slower. Sarah says she’s “watching me” now. I’m not sure she’ll ever fully trust me, or my technology, again.
So, what’s the lesson? Agents can do the job. But they can’t do the job. They can’t apologize sincerely. They can’t read the room. And they definitely can’t handle a boss who’s already annoyed. If you build an agent, build a kill switch—and maybe keep your own Slack open. Because when your bot hangs up on the commissioner, you’re the one who has to clean up the mess.
I still think AI is the future of journalism. I just think I need to be the one holding the phone. Not a ghost.
— Ahmed Abed, news journalist