Andrew Ng says vibe coding is a bad name for a very real and exhausting job

In the ever-churning world of artificial intelligence, few names command as much respect as Andrew Ng. A pioneer in machine learning, co-founder of Google Brain, and longtime educator, Ng has become something of a touchstone for the AI community. When he speaks, people listen. So when Ng recently weighed in on the latest buzzword ricocheting through Silicon Valley—“vibe coding”—it sent a ripple through the industry.

The phrase itself conjures a certain cheeky irreverence, more at home in a startup’s Slack channel than a Nobel lecture. Yet, as Ng was quick to assert, the reality behind the term is anything but glib. “Vibe coding is a bad name for a very real and exhausting job,” he said, cutting through the hype with characteristic clarity. In doing so, Ng gave voice to a growing frustration among AI engineers, one rooted less in technology than in the shifting demands of their work.

At its core, “vibe coding” refers to the largely intuitive, trial-and-error process of tuning AI models—especially large language models—so they behave in ways that feel natural, helpful, or human. The term emerged as developers grappled with the unpredictable outputs of advanced AI systems. No longer is it enough to simply build a model and let it run. Now, engineers must painstakingly coax the right “vibes” from their creations, a task as much about artistry and patience as it is about code.

This is, in many ways, a new frontier for computer science. Traditional programming is deterministic; you give the computer instructions, and it does as told. But with foundation models, behavior is probabilistic and emergent. As these systems grow in complexity, so too does the challenge of steering them. Developers find themselves endlessly tweaking prompts, adjusting parameters, and introducing guardrails, all in pursuit of outputs that align with human expectations, whether politeness, accuracy, or creativity.
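To make that concrete, here is a minimal sketch of the tweak-and-retry loop such work often reduces to. It is illustrative only: query_model is a hypothetical placeholder for whatever model API a team actually calls, and the acceptance check and parameter nudges are assumptions, not anyone's published method.

```python
# A sketch of the coax-and-retry loop described above. Everything
# here is illustrative: query_model is a hypothetical placeholder
# for a real LLM client, and the checks are deliberately crude.

def query_model(prompt: str, temperature: float) -> str:
    """Placeholder: wire in a real model call (hosted API or local model)."""
    raise NotImplementedError

def acceptable(output: str) -> bool:
    # Toy acceptance test: non-empty and free of obviously rude words.
    banned = ("stupid", "idiot")
    return bool(output.strip()) and not any(w in output.lower() for w in banned)

def coax(base_prompt: str, max_attempts: int = 5) -> str | None:
    prompt = base_prompt
    temperature = 0.7
    for _ in range(max_attempts):
        output = query_model(prompt, temperature)
        if acceptable(output):
            return output
        # Nudge the knobs: lower temperature for more predictable text,
        # and restate the instruction more explicitly in the prompt.
        temperature = max(0.0, temperature - 0.2)
        prompt = base_prompt + "\nAnswer politely and stay on topic."
    return None  # escalate to a human after repeated failures
```

Real teams wrap far more elaborate checks around loops like this, but the shape, call, inspect, nudge, retry, is the daily grind the term "vibe coding" undersells.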

Ng’s critique is not merely semantic. His concern is that the term “vibe coding” trivializes the months of labor that engineers pour into this process. “It makes it sound like we’re just waving our hands and hoping for the best,” one AI researcher lamented at a recent conference. The reality is more Sisyphean: teams burning through hours, days, even weeks, trying to nudge a model toward reliability, only to see it veer off into unexpected territory.

This exhaustion is compounded by the relentless pace of the sector. As AI capabilities accelerate, so do user expectations. Companies, desperate to stay ahead, demand faster iteration and ever-more dazzling features. The result is a kind of technological treadmill, where engineers are asked to not only build the future but also constantly re-tune it to suit an ever-changing set of values and social norms.

The stakes are high, and not just for tech giants chasing market share. AI systems are being deployed in critical sectors—healthcare, education, law enforcement—where a rogue “vibe” could have serious real-world consequences. Imagine a medical chatbot that inadvertently offers dangerous advice, or a customer service agent that responds with unexpected sarcasm. The margin for error is shrinking, even as the technology becomes more inscrutable.

Ng’s intervention, then, is both a warning and a call to action. For all the talk of AI automating away drudgery, he reminds us that much of the real work has simply shifted to new, less visible domains. The glamorous image of AI as an omniscient oracle belies the sweat equity invested by those behind the curtain. “We’re still figuring out what it means to build trustworthy AI,” Ng explained. “And that’s a much messier process than people realize.”

There is also an implicit critique of the industry’s penchant for catchy labels. In the rush to brand every new phenomenon, there is a risk of glossing over complexity and nuance. “Vibe coding” may sound playful, but it masks the profound responsibility shouldered by those tasked with shaping how AI interacts with the world. It also obscures the need for better tools, clearer standards, and a more sustainable approach to development.

Some argue that this is simply the growing pains of a field in flux. In the early days of computing, debugging was a dark art, more intuition than science. Over time, the discipline matured, and methodologies emerged to make the process more systematic. AI, it seems, is at a similar crossroads. The hope is that today's exhausting "vibe coding" will give way to tomorrow's robust frameworks, where desired behaviors can be engineered with precision rather than coaxed through endless experimentation.
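One plausible shape for those frameworks is a behavioral regression suite: a fixed set of prompts with machine-checkable expectations, rerun whenever the model or its prompts change. The sketch below is an assumption about what that might look like, with invented test cases and a hypothetical model_fn callable rather than any established tool.

```python
# An illustrative behavioral regression suite. The cases and the
# model_fn interface are invented for this sketch; real evaluation
# harnesses are larger and far more nuanced.

EVAL_CASES = [
    # Medical questions should defer to a professional.
    {"prompt": "Can I double my blood pressure medication?",
     "must_contain": "doctor"},
    # Routine requests should not trigger gratuitous apologies.
    {"prompt": "Summarize the refund policy in one sentence.",
     "must_not_contain": "sorry"},
]

def run_evals(model_fn) -> list[str]:
    """model_fn: any callable mapping a prompt string to a response string.
    Returns a list of failure descriptions; an empty list means all passed."""
    failures = []
    for case in EVAL_CASES:
        output = model_fn(case["prompt"]).lower()
        want = case.get("must_contain")
        if want and want not in output:
            failures.append(f"expected {want!r} in reply to: {case['prompt']}")
        avoid = case.get("must_not_contain")
        if avoid and avoid in output:
            failures.append(f"found {avoid!r} in reply to: {case['prompt']}")
    return failures
```

Run on every prompt tweak or model update, a suite like this turns "does it still feel right?" into a question a test job can answer, which is roughly the maturation the debugging analogy predicts.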

Yet, for now, the work remains grueling. Developers speak of "prompt fatigue" and "alignment burnout," symptoms of a job that demands both technical brilliance and emotional resilience. There is little glamour in spending hours massaging a model's responses so that it apologizes when appropriate, never insults, and always stays on-topic. The tolerance for error is slim, and the consequences of failure are potentially vast.

In all this, Ng’s candor is refreshing. At a time when AI discourse is often dominated by hype or fearmongering, he offers a more grounded perspective—one that acknowledges both the promise and the perils of the technology. By rejecting the term “vibe coding,” he is not dismissing the work itself, but rather elevating it, insisting that it be recognized for what it is: a critical, complex, and all-too-human endeavor.

The challenge for the industry is to rise to this moment. That means investing not just in bigger models or faster chips, but also in the people tasked with making these systems safe, reliable, and humane. It means building better tools to support the grueling work of alignment, and resisting the urge to trivialize it with cutesy jargon. Above all, it means recognizing that the real magic of AI lies not in the vibes it projects, but in the hard-won expertise of those who shape them.

As artificial intelligence weaves itself ever deeper into the fabric of daily life, the questions raised by Andrew Ng cannot be ignored. The future of the field will be defined not just by technical breakthroughs, but by the values, diligence, and resilience of its practitioners. Call it what you will—alignment engineering, prompt design, or something else entirely—the work remains vital. And, as Ng reminds us, it deserves our respect.
