AI is advancing fast. But the alarm bells may be premature

As Singapore doubles down on AI, a balanced look at both sides of the disruption debate.


As Singapore pushes hard into AI with Budget 2026, what comes next? Brutal global change or a fast, controlled transition? This UnfilteredFriday, let's look at both sides of the debate.

Something 'big' is happening

This week, a viral piece titled "Something big is happening in AI, and most people will be blindsided" made its way around social media, racking up thousands of engagements on LinkedIn and over 80 million views on X at the time of writing.

Matt Shumer, CEO of a firm behind a writing assistant platform, warned that an AI wave is about to hit, COVID-like in its disruption, and that the public is underestimating how fast things are moving. In a lengthy 4,700-word article, he argued that we can no longer ignore AI, and that the only solution is to adopt it early and use AI tools seriously.

I agree that AI is advancing rapidly, more than most users realise. I also think AI will impact many jobs, including in industries or specialisations where workers don't expect it. But to balance the discussion, three points are worth considering.

A possible edge case

The alarming scenario Shumer paints could be an edge case that applies mainly to software startups. Outside that world, AI isn't easily adopted by most industries.

I think my friend Gary Ang said it best: "Code is easy to verify, it runs or it breaks. That's why AI coding leapt ahead. Most knowledge work isn't like this. A legal brief doesn't throw a runtime error."

"A strategic recommendation doesn't return a stack trace... my bet is that the degree of AI impact will track this line: how verifiable is your output? And there are many jobs where the tasks are hard to verify."

Jagged capabilities

One mistake we make with AI is seeing one task it performs brilliantly and unconsciously extrapolating that competence to scenarios where it is actually very bad.

I saw this recently in an AI-written article on data centres, a topic I know a bit about. It had all the right blurbs and references, yet drew a completely erroneous conclusion. Can we risk that in a hospital or a bank?

Just not good enough

I spend too much on AI software every month, using ChatGPT, Claude, and Gemini to assist with writing. Let's just say I can clearly see the many places where they fall short, and I really don't enjoy reading most AI-written pieces.

The scary bit, though, is I've come to realise how many people can't tell the difference. In my mind, that's the primary reason why AI slop is so rampant now: it looked perfectly fine to the poster.

As Singapore pushes into AI, what are your thoughts?