
AI Coding Tools: Accelerant or Replacement? AWS Insiders Weigh In

AWS engineers and architects discuss how AI tools change software development—from prototyping in 15 minutes to managing 10 trillion Lambda invocations.

Written by Mike Sullivan

February 17, 2026

This article was crafted by Mike Sullivan, an AI editorial voice.

Photo: GOTO Conferences / YouTube

Nick Coult uses AI to write code every day. This is relevant because Coult directs product for serverless at AWS—specifically Lambda, the service that handles over 10 trillion function invocations per month. When someone at that level of the stack adopts AI tools, you can reasonably assume they're not doing it for the Instagram likes.

At a GOTO Conferences panel in Bengaluru, Coult and several other engineers discussed how AI actually functions in their daily work. Not how it might work in five years, or how vendors promise it will work, but how it works right now for people building and managing systems at scale.

The terminology they settled on: accelerant. Not revolution, not replacement—accelerant.

The 15-Minute Prototype Problem

Coult explained his use case with unusual specificity. He wants to test an idea, something he has the technical capability to build but not the time. Pre-AI, that prototype would take a day or two, which means it wouldn't happen. With Cursor or similar tools, he can have something running in 15 minutes.

"That's not just acceleration," Coult said. "That's actually doing something that I wouldn't have done before because I just didn't have the time to do it."

This distinction matters. We've been hearing "10x productivity" claims since, what, the fourth-generation language debates? But most productivity improvements just mean doing the same work faster. What Coult describes is different—doing work that previously lived in the "good idea but not worth the effort" category.

A senior principal engineer at AWS apparently wrote a Rust prototype at home one night that would have taken weeks before. Still human-driven—he decided what to build and what the architecture should be. AI filled in boilerplate so he could focus on the 10% that was actually hard.

Which brings us to the scale problem.

Why LLMs Don't Design Amazon-Scale Systems

Coult was direct about AI's limitations at AWS scale: "I don't know that I would trust an LLM today to design an Amazon Lambda scale system."

His reasoning: LLMs train on GitHub. You don't find systems that handle 10 trillion monthly invocations on GitHub. The patterns, the failure modes, the availability requirements—none of that shows up in the training data.

Janak Agarwal, a product manager for Lambda, described the pressure from a different angle. When 10% of your customers and sales team start using AI effectively, their request volume increases 10x to 100x. You still have 24 hours in a day. Either you adopt AI tools to keep up, or you drown.

"The productivity increase has to happen across the stack," Agarwal said, "regardless of PMs or engineers or roles—that is just table stakes."

He also noted, with dark humor, that his hair loss rate has slowed since adopting AI. The audience laughed. Product managers laughed harder.

The Architecture Advantage

David Anderson, a software architect at G-P, outlined how standardized infrastructure makes AI adoption easier. His team deliberately follows AWS best practices and white papers. They use Control Tower for account creation, maintain modular architecture, document everything.

This gives them two advantages when new features or AI capabilities arrive. First, they can test quickly because their infrastructure is standardized. Second, their well-documented systems provide context for AI tools to generate useful artifacts.

Anderson distinguished between two AI use cases: "There's using AI to generate something like toil—just quickly generate something because I'm too lazy to do it. But there's a second, which is the Socratic method—I have a domain, tell me how it should work, asking questions of your system."

That second use case—interrogating your own systems through AI—is less discussed but potentially more interesting than code generation.

Robbie Kohler, VP of Software Engineering at Byte (the Yum! Brands tech division), uses AI to stay connected to codebases he no longer directly maintains. Point it at a PR, ask for an explanation, understand what's happening without spending hours on review. Same with languages like Rust or Go that he doesn't know well.

"It's been helping me stay out of the code but still connected," Kohler said, describing a management problem that's existed since the first engineer got promoted.

The Junior Engineer Perspective

Akshatha Laxmi, a solution architect at AntStack, started her career already in the serverless era. No virtual machines, no container orchestration learning curve. When the moderator pointed this out, she diplomatically noted that the senior engineers on the panel have "so much experience with patterns and system design that I've never had to go through."

Translation: you're all old.

For Laxmi, AI primarily accelerates scaffolding—the week or week-and-a-half of setup that used to precede actual feature development. With Cursor and Claude, that's compressed significantly. Her team can maintain style and best practices from existing examples while generating boilerplate faster.

More interestingly, she uses AI to find edge cases in her test coverage. "A couple of times it has come up with a few edge cases that I never thought about," she said. "Two years ago I was probably writing test cases that were not fully testing it, but the coverage was saying 100%. Now AI is actually covering that 100%."

This is the flip side of the "AI makes mistakes" concern. Yes, it does. But it also spots mistakes humans miss, particularly in the boring parts humans rush through.
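Laxmi's point about hollow coverage is easy to demonstrate: line coverage only measures which statements execute, not which behaviors get checked. A minimal sketch of the trap (the `parse_retry_after` helper and its inputs are hypothetical illustrations, not anything from the panel):

```python
# Hypothetical example: a test suite can execute every line (100% line
# coverage) while still missing the edge cases that actually break things.

def parse_retry_after(value: str) -> int:
    """Parse a Retry-After-style value into seconds, defaulting to 0."""
    value = value.strip()
    if value.isdigit():
        return int(value)
    return 0

def test_happy_path():
    # These two calls execute every line above, so a line-coverage tool
    # reports 100% -- yet nothing here probes the boundaries.
    assert parse_retry_after("30") == 30
    assert parse_retry_after("soon") == 0

def test_edge_cases():
    # Edge cases an AI reviewer might surface: same lines, different inputs.
    assert parse_retry_after("  30  ") == 30   # surrounding whitespace
    assert parse_retry_after("-5") == 0        # isdigit() is False for "-5"
    assert parse_retry_after("") == 0          # empty string

test_happy_path()
test_edge_cases()
print("all tests passed")
```

The first suite alone would satisfy a coverage gate; only the second one tests the behavior. That gap between "lines executed" and "behaviors verified" is exactly what Laxmi says AI-suggested edge cases are closing for her team.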

The Backwards Compatibility Tax

When asked about modernization within AWS services themselves, Coult acknowledged a constraint that customers might find reassuring or frustrating depending on their situation: AWS really doesn't like breaking changes.

"We want to give a commitment to our customers that if we ship something..." he started, before the transcript cuts off. But the pattern is well-established. AWS will maintain deprecated features for years because enterprises depend on them. This is good for stability, less good for moving fast.

The question underneath the question: how do you modernize a service that can't break existing implementations while also keeping pace with AI-driven development cycles? Nobody on the panel fully answered this, possibly because nobody has a complete answer yet.

What's Actually Changing

The panel consensus: AI is changing how people work, not just how fast they work. But the human remains, in Coult's phrase, "driving the bus."

No one claimed AI writes production features end-to-end without oversight. No one pretended LLMs understand systems at massive scale. What they described is more mundane and possibly more durable—AI handling boilerplate, accelerating prototypes, finding edge cases, explaining unfamiliar code.

The interesting shift isn't technical. It's organizational. When your customers, sales team, and junior engineers suddenly get 10x more productive in certain dimensions, everyone else has to adapt or become the bottleneck. That's the forcing function Agarwal described—not choosing to adopt AI, but having adoption forced by the changing speed of everyone around you.

Coult's senior engineer writing Rust at night, Laxmi's week-long scaffolding compressed to hours, Kohler staying connected to codebases through AI explanation—these are real changes, not demos. They're also not the autonomous AI agent future that conferences were promising 18 months ago.

Which probably means they're more reliable indicators of where this actually goes.

—Mike Sullivan

Watch the Original Video

Serverless Panel • N. Coult, R. Kohler, D. Anderson, J. Agarwal, A. Laxmi & J. Dongre

GOTO Conferences

51m 5s
Watch on YouTube

About This Source

GOTO Conferences

GOTO Conferences is a prominent educational YouTube channel dedicated to software development, with over 1,060,000 subscribers. The channel serves as a key platform for industry thought leaders and innovators, helping developers tackle current projects, plan future work, and contribute to a more advanced digital landscape.

