Software developers are reporting marked productivity gains with AI, but new research from Harness shows use of the technology is creating greater downstream challenges.
Findings from the company’s 2026 State of Engineering Excellence report suggest areas such as code quality and validation are being neglected amid the mass integration of AI, creating bigger workloads and leading to higher levels of burnout.
Indeed, Harness found that many developers are now spending more of their day on manual remediation tasks. Around 81% said they spend more time in code reviews than before the adoption of AI tools, for example.
More than one-quarter (28%) reported spending 30% longer on these tasks on average, according to Harness.
Worse still, nearly one-third of that activity isn’t tracked by teams, creating a trend of “invisible work”.
Organizations estimate that roughly 31% of developer time is now consumed by invisible work, which typically involves reviewing AI-generated code, fixing bugs, and switching between disparate tools.
Trevor Stuart, SVP and General Manager at Harness, said the findings show that AI is not only changing how developers build, but also how they spend their working day.
This, Stuart noted, means organizations need to overhaul how they measure developer productivity to compensate for changing workflows.
“Cloud and the internet were infrastructure revolutions layered underneath the developer,” he said.
“AI is reshaping the developer’s job entirely, and the measurement frameworks that the industry has relied on for the past decade weren’t built for this new unit of work.”
Changing developer metrics
Harness noted that previous frameworks for measuring productivity and efficiency simply aren’t able to keep up with current AI-fueled working patterns.
Around 89% of tech leaders still trust metrics that don’t accurately reflect AI’s impact on individual developers or teams at large.
Moreover, 94% said key considerations such as tech debt and developer burnout rates are missing from metrics, painting a convoluted picture of overall performance that obscures what is actually happening.
“The biggest AI challenge is measurement itself,” the company noted in a blog post detailing the findings.
“When asked to name the single biggest challenge, the top answers are all visibility problems: measuring true productivity impact (26%), maintaining code quality with AI (24%), and proving ROI to leadership (18%).”
Measuring developer performance
According to Harness, changes to how performance is tracked need to take AI into account.
The company recommends that enterprises approach this by considering the impact of tasks such as code validation, as well as the overall quality of code produced by AI tools.
This, the company noted, will paint a clearer picture of how workloads are changing due to new follow-up tasks created by AI.
Organizations should also “treat AI performance as its own discipline”, the study noted. This includes tracking AI agent accuracy, acceptance rates of AI outputs, and costs associated with the tools separately from human developer output.
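In practice, tracking AI performance as its own discipline could look something like the sketch below: aggregating acceptance rates and tooling costs from a log of AI suggestions, kept separate from human developer output. The class and field names here are illustrative assumptions, not part of Harness's methodology, and the per-token price is a placeholder.

```python
from dataclasses import dataclass

@dataclass
class AISuggestion:
    """One AI-generated code suggestion and its review outcome (illustrative)."""
    accepted: bool                      # did a human keep the suggestion?
    tokens_used: int                    # tokens billed for generating it
    cost_per_1k_tokens: float = 0.002   # placeholder price, not a real vendor rate

def ai_tool_metrics(suggestions: list[AISuggestion]) -> dict:
    """Aggregate AI-specific metrics separately from human output."""
    total = len(suggestions)
    accepted = sum(1 for s in suggestions if s.accepted)
    cost = sum(s.tokens_used / 1000 * s.cost_per_1k_tokens for s in suggestions)
    return {
        "suggestions": total,
        "acceptance_rate": accepted / total if total else 0.0,
        "total_cost_usd": round(cost, 4),
    }

# Example: three suggestions, two accepted by reviewers
log = [AISuggestion(True, 500), AISuggestion(False, 800), AISuggestion(True, 300)]
print(ai_tool_metrics(log))
```

The point of a separate aggregate like this is that acceptance rate and cost say nothing about an individual developer's productivity; they describe the tool, which is exactly the separation the report recommends.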
Developers should also be given a say in how performance is measured due to the influx of AI tools, according to Harness. Nearly half (49%) of developers said they want to be involved in defining metrics themselves.
This is crucial, the report noted, largely because there’s a growing perception gap on how AI is actually impacting teams.
Managers, for example, are nearly four times more likely than frontline practitioners to report no concerns about how productivity metrics are measured.