Danister. Back to portfolio
Engineering AI & Leadership 6 min read

AI Across the Full Engineering Stack. From Code to the C-Suite.

A practical look at AI's impact across building, debugging, documentation, and engineering leadership. From someone who's been in the middle of it.

I'll be honest. I didn't think the management layer would be where AI surprised me most. When I first started using it seriously, the value proposition was obvious and narrow: write code faster, get unstuck faster, ship faster. That was enough to pay attention. But the moment that changed everything for me wasn't a coding session. It was when I ran a single prompt across multiple MCP integrations and got a cross-team metrics evaluation that would have taken me hours to assemble manually. That's when I stopped thinking about AI as a tool and started thinking about it as infrastructure. And the first casualty of that shift? My confidence in off-the-shelf software.

I'm an Engineering Manager leading a tribe of 12 engineers across fintech products, influencing four other squads across the wider organisation. Over the past year, AI has touched nearly every part of how I work. From the way my engineers write and debug code, to how I run sprint planning, manage performance, and communicate with senior stakeholders. What follows is an honest breakdown of where I've seen it matter most.

· · ·
01

Building. Agentic AI and the New Developer Workflow

The coding story is well-documented by now, so I'll be specific about what's actually changed. Agentic AI: tools that don't just suggest code but plan, scaffold, and execute across a codebase. That shift has fundamentally changed what a single engineer can hold in their head at once.

The biggest shift isn't speed. It's confidence. Engineers are attempting things they'd previously have scoped as two-sprint efforts. Complex integrations, framework migrations, new service scaffolds. The barrier between "I think I can do this" and "I'll start today" has collapsed. When the AI can handle the boilerplate, the research, and the first-pass implementation, engineers get to spend their cognitive budget on the decisions that actually require judgment.

What I've also noticed is that junior engineers are closing the gap faster. Not because AI is doing their thinking, but because they can get immediate, contextual feedback on their work rather than waiting for a code review cycle. The feedback loop has compressed in a way that accelerates growth.

"The barrier between 'I think I can do this' and 'I'll start today' has collapsed."

The risk, which I'll name plainly: engineers who don't develop the instinct to validate AI output are building on sand. Speed is meaningless if the foundation is wrong. Good AI-augmented engineering still requires a sharp engineer. It just requires less of their time on the tedious parts.

· · ·
02

Debugging and Observability. From Reactive to Investigative

Debugging has traditionally been the most exhausting part of engineering. Not because it's hard in theory, but because it's unpredictable in practice. You don't know how deep the rabbit hole goes until you're already in it at 11pm.

AI changes the shape of that problem in two meaningful ways. The first is triage: feeding logs, traces, and error patterns into an AI that can surface likely root causes faster than a human scanning line by line. The second, and more significant, is scheduled investigation. One use case we're actively exploring: automating pre-emptive analysis, systems that surface anomalies before they become incidents, rather than after an engineer gets paged. Early signals are promising.
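To make "first pass" concrete, here is a minimal sketch of the kind of anomaly flag such a system might run on a schedule: bucketed error counts checked against a trailing baseline. The window size and threshold are illustrative assumptions, not our production configuration, and the real pipeline would feed flagged buckets into a deeper investigation rather than page anyone directly.

```python
from statistics import mean, stdev

def flag_anomalies(error_counts, window=6, threshold=3.0):
    """Flag time buckets whose error count deviates sharply from the
    trailing window: a cheap first-pass signal worth escalating to a
    deeper (human or AI-assisted) investigation."""
    anomalies = []
    for i in range(window, len(error_counts)):
        baseline = error_counts[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma == 0:
            # Flat baseline: any increase is notable.
            if error_counts[i] > mu:
                anomalies.append(i)
        elif (error_counts[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

# A quiet series with one spike flags only the spike.
print(flag_anomalies([5, 6, 5, 7, 6, 5, 50, 5]))  # → [6]
```

The point isn't the statistics, which are deliberately naive here; it's that this runs continuously without anyone being awake for it.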

If this plays out the way we expect, it would translate directly into a reduction in reactive on-call burden. Engineers who are currently in a cycle of being woken up, investigating, patching, and documenting could operate in a more sustainable rhythm. The system does the first pass. The engineer reviews and acts. That distinction would matter enormously for team health and retention.

The observability space in particular is where I think AI will have its most underrated impact. The data was always there. In the logs, in the traces, in the metrics. What was missing was the bandwidth to interrogate it continuously. AI provides that bandwidth.

"Giving AI access to production environments is powerful and risky in equal measure. Getting that balance right is one of the more interesting challenges ahead."

That said, connecting AI to live systems raises real questions worth sitting with. How do you scope permissions correctly? How do you ensure a misworded prompt doesn't trigger something irreversible in a production environment? How do you restrict access in a way that preserves the value without introducing new risk? These aren't reasons to avoid it. They're interesting challenges to design around, and the teams that get this right early will have a meaningful advantage.

· · ·
03

Documentation and Knowledge. The Work That Always Got Deprioritised

Documentation is the perennial casualty of delivery pressure. It's not that engineers don't value it. It's that it sits at the end of the sprint, after the thing that just shipped, and before the next thing that needs to start. It gets squeezed.

AI has changed my relationship with this problem in a practical way: the cost of producing good documentation has dropped dramatically. Runbooks, API specs, onboarding guides, ways-of-working documents. Things that would previously require a dedicated block of focused writing time can now be drafted, structured, and iterated in a fraction of the time.

More importantly, knowledge transfer, one of the most fragile parts of any engineering organisation, becomes more resilient. When an engineer leaves or moves squads, the risk of taking undocumented context with them is real. AI-assisted documentation, done consistently, starts to close that gap. It's not perfect, but it's meaningfully better than the alternative.

The cultural shift required here is small but important: engineers need to see documentation as a first-class output, not an afterthought. AI makes that easier to justify. The effort is lower, the output is higher quality, and the argument for doing it is harder to dismiss.

· · ·
04

Leadership and Delivery. Where the Real Surprise Was

This is where my perspective diverges from most AI-in-engineering articles, which tend to stop at the individual contributor layer. The impact on engineering leadership. On planning, performance management, stakeholder communication, and delivery. That's where I've seen the most unexpected value.

The clearest example: evaluating team metrics. Across six squads, with data sitting in Jira, LinearB, and Azure DevOps, getting a coherent picture of engineering health used to require either significant manual aggregation or a dedicated analytics effort. Now, with MCP integrations and a well-constructed prompt, I can pull cycle times, deployment frequency, incident trends, and story point distribution across all squads in a single workflow. What used to take hours of context-switching takes minutes.
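The aggregation step underneath that workflow is simple once the data is reachable. Here is a sketch of the shape of it, collapsing per-item delivery records into a per-squad summary. The field names (`squad`, `cycle_time_days`, and so on) are assumptions for illustration, not the actual Jira, LinearB, or Azure DevOps schemas; in practice the records arrive via MCP tool calls and the AI handles the normalisation.

```python
from collections import defaultdict
from statistics import median

def squad_health(records):
    """Collapse normalised delivery records into a per-squad summary:
    median cycle time (days) and deployment count."""
    by_squad = defaultdict(lambda: {"cycle_times": [], "deploys": 0})
    for r in records:
        s = by_squad[r["squad"]]
        if r["type"] == "work_item":
            s["cycle_times"].append(r["cycle_time_days"])
        elif r["type"] == "deployment":
            s["deploys"] += 1
    return {
        squad: {
            "median_cycle_time": median(v["cycle_times"]) if v["cycle_times"] else None,
            "deployments": v["deploys"],
        }
        for squad, v in by_squad.items()
    }

summary = squad_health([
    {"squad": "payments", "type": "work_item", "cycle_time_days": 3},
    {"squad": "payments", "type": "work_item", "cycle_time_days": 5},
    {"squad": "payments", "type": "deployment"},
])
print(summary)  # → {'payments': {'median_cycle_time': 4.0, 'deployments': 1}}
```

The hard part was never this loop. It was getting all three tools to answer the same question in the same shape, and that's precisely the part the MCP layer now absorbs.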

Stakeholder communication has also shifted. Writing an executive summary, a vendor performance review, or a structured proposal for a new initiative. These are tasks that require clear thinking first, but the drafting itself no longer takes the time it used to. I can focus on the judgment. What to say, what to emphasise, what to leave out. And let the execution follow quickly.

Another example that's hard to ignore: CapEx/OpEx reporting. Pulling engineer activity data through integrations, applying internal and external costing models, and calculating the split between capital and operational expenditure used to be a manual, time-consuming exercise. Now, with the right prompt and the right connections in place, that monthly report for finance is ready in minutes. One prompt. Every month. Done.
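The calculation itself is straightforward, which is exactly why automating it pays off. A minimal sketch, assuming activity data has already been categorised upstream into capitalisable and operational hours (the categorisation rules, and the function and field names here, are illustrative, not our internal model):

```python
def capex_opex_split(activity, rates):
    """Split engineering cost into capital vs operational expenditure.
    `activity` maps engineer -> hours already bucketed by category;
    `rates` maps engineer -> hourly cost."""
    capex = opex = 0.0
    for eng, hours in activity.items():
        rate = rates[eng]
        capex += hours["capex_hours"] * rate
        opex += hours["opex_hours"] * rate
    total = capex + opex
    return {
        "capex": round(capex, 2),
        "opex": round(opex, 2),
        "capex_pct": round(100 * capex / total, 1) if total else 0.0,
    }

report = capex_opex_split(
    {"eng_a": {"capex_hours": 100, "opex_hours": 60}},
    {"eng_a": 50},
)
print(report)  # → {'capex': 5000.0, 'opex': 3000.0, 'capex_pct': 62.5}
```

None of this arithmetic is novel. What changed is that the activity data, the costing model, and the report generation now sit behind one prompt instead of one afternoon.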

Planning and estimation have become more grounded. When I can query historical velocity data, surface patterns in past sprint performance, and model capacity scenarios quickly, the planning conversation becomes more honest and less instinct-based. That makes it better for the team and better for the business.

"The impact on engineering leadership is where I've seen the most unexpected value. And where almost no one is talking."

But planning surfaces an interesting challenge of its own. When AI starts accelerating your team's output, historical velocity data becomes a less reliable baseline. If throughput has shifted significantly, past sprints are no longer a clean predictor of future ones. How do you rebase your planning model? How do you forecast with confidence when the inputs are changing? That's a question I'm actively working through. And one I'll be exploring in a separate post on how AI is reshaping story point estimation and what that means for planning at scale.

Where I Land

The Question I Keep Coming Back To

Here's the thought I can't shake: what happens to off-the-shelf tools and SaaS products built around access to data?

If I can connect to my data directly, via APIs, via MCPs, and shape it exactly how I need it, for my workflow, my team, my context, why would I spend months adapting to how a vendor thinks I should work? The traditional value proposition of enterprise software was that it aggregated capability you couldn't build yourself. That proposition is weakening.

I'm not saying every tool becomes redundant. But I think the tools that survive will be the ones that do something genuinely irreplaceable. Not just the ones that sit between you and your data.

There are threads here I haven't fully pulled yet. How AI access to production systems should be governed. How story point estimation works when AI is doing part of the work. How you forecast delivery when your team's velocity baseline has fundamentally shifted. These are interesting challenges. And they deserve their own space. More on those soon.

AI hasn't just changed how my engineers write code. It's changed what questions I ask, how fast I can answer them, and what I expect from the systems I build and buy. That's a bigger shift than I anticipated. And I think we're still early.

· · ·