What AI means for prompts, judgement, and code
Three shifts in working with AI, from a conversation with AI Singapore's Laurence Liew.
Prompt engineering is dead. Memorising facts no longer signals competence. And reading every line of code you ship? Also gone.
I spoke with AI Singapore's Laurence Liew on the sidelines of ElasticON in March. Three things he shared stuck with me.
From prompts to intent
Remember the days when every other post here peddled AI "prompt packs"? I've personally never bought into them, and Laurence himself pushed back on the idea of learning prompt engineering.
These days, he observed, intent and context engineering have replaced prompt engineering: modern LLMs are sophisticated enough to work out the best prompting techniques on their own.
According to Laurence, what matters now is articulating intent clearly and providing enough context to constrain the model's reasoning, and reduce hallucination.
In short: what's valuable isn't crafting clever prompts but knowing and clearly articulating what you really want.
Judgement is the new skill
This one surprised me, as I hadn't really thought about it before. As we touched on AI education, Laurence reflected on what to teach in AI Singapore's apprenticeship programme, given that AI agents can now code.
His thinking: regurgitating facts is no longer proof of competence. Critical and computational thinking, and reasoning, remain valuable. Humans working with LLMs will have to constantly exercise judgement.
So which faculties teach judgement? Law and journalism, it turns out. Lawyers weigh evidence, challenge claims, and test credibility. Journalists separate fact from fabrication by questioning every account. These are the skills that anyone working with AI needs to develop.
From programmer to architect
This next area struck a chord. I've recently shared my own experience with Claude Code, working on a web-based app which at last count was over 80,000 lines of code.
On this, Laurence noted that reading every line of AI-generated code is no longer feasible when agents generate thousands of lines in minutes.
He said: "You cannot operate at a level of a programmer anymore. You need to operate at a level of an architect. You have to operate also as a QA developer, because you need to think, how do I test? How do I build test harness and so on and so forth, right while building my systems."
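The QA mindset Laurence describes can be made concrete with a small sketch. Everything below is hypothetical and invented for illustration (the `parse_price` helper stands in for any AI-generated function): rather than reading every generated line, you write a harness that pins down the behaviour you actually require, and treat the code as untrusted until it passes.

```python
# Hypothetical sketch: treat AI-generated code as untrusted until it
# passes a test harness you wrote yourself.

def parse_price(text: str) -> float:
    """Stand-in for an AI-generated helper: parse a price like '$1,234.50'."""
    cleaned = text.strip().lstrip("$").replace(",", "")
    return float(cleaned)

def run_harness() -> None:
    # The harness encodes *your* intent, independently of how the
    # generated code happens to be written.
    cases = {
        "$1,234.50": 1234.50,
        "99": 99.0,
        "  $0.99 ": 0.99,
    }
    for raw, expected in cases.items():
        got = parse_price(raw)
        assert abs(got - expected) < 1e-9, f"{raw!r}: got {got}, want {expected}"

run_harness()
```

The point isn't this particular function; it's that the harness, not line-by-line review, becomes your guarantee of correctness as the generated codebase grows.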
In closing
I agree that developing apps has fundamentally changed. And it's not just one part of the workflow that's changed; it's the whole thing.
The real work hasn't gone away. It's just moved. With AI, it is less about the doing, more about the thinking.
What's your take?
Full-length video interview with Laurence Liew of AI Singapore.