A SecDevOps Reflection on the 2025 "State of Software Engineering Excellence" Report
Why this report matters to me
If you've followed my writing, you know I treat DevOps (and its grown-up sibling, SecDevOps) as a team sport that balances speed with safety, people with process, and sustainability with shareholder value. When Harness published its State of Software Engineering Excellence 2025 study last week, I grabbed a coffee and devoured all 30-plus pages.
The headline is blunt: most organisations are still missing the mark on developer experience and DevOps maturity, and they're paying for it in lost productivity, spiralling risk, and frustrated talent. (harness.io)
Snapshot of the findings
| Trouble spot | Survey data |
|---|---|
| Slow feedback loops | 67 % of teams can't spin up a full build/test environment in 15 minutes |
| Manual toil | 64 % still deploy infra code by hand; 50 % ship apps manually |
| Code-review drag | 61 % wait > 1 day for a review |
| Quality gates missing | 55 % of CI/CD pipelines lack any gating |
| Incident readiness gaps | 52 % lack key tools for incident response |
| Security blind spots | 38 % don't scan builds; 1 in 10 let critical bugs hit prod; median fix time ≥ 7 days for high-severity vulnerabilities |
Add to that:
- Only 29 % have an up-to-date software catalogue.
- Just 19 % provide structured up-skilling for devs.
- A quarter of teams admit > 70 % of user stories lack clear acceptance criteria.
Put bluntly again, the DevOps engine is knocking, the check-security light is flashing, and we're still driving 120 km/h on the 132.
Why does this gap persist?
In my consulting rounds I see three recurring anti-patterns:
- Role-centric "DevOps" – Hiring a "pipeline wizard" and hoping magic trickles outward.
- Tool-first obsession – Buying best-of-breed widgets without addressing processes or team dynamics.
- Short-term velocity worship – Shipping features faster today while stacking tomorrow's technical debt (and stress injuries).
The Harness data validates all three: lots of shiny tools, but manual gates; pockets of excellence, but no cross-functional ownership; heroic engineers, but brittle systems.
Pros of the current trajectory (yes, there are some)
| What's working | Why it still deserves credit |
|---|---|
| Awareness is up | 650 leaders cared enough to benchmark themselves; that's cultural progress. |
| Platform thinking is emerging | The study champions unified software delivery platforms, exactly what modern SecDevOps preaches. |
| Security training exists | 56 %* are at least training devs annually. Not perfect, but a foundation. (harness.io) |
| Metrics beat anecdotes | Using a maturity assessment moves the conversation from "feelings" to evidence. |
*(Glass-half-empty side note: it also means 44 % train less than annually, or never.)
Cons (the real cost centre)
- Productivity haemorrhage – Every 30-minute context switch waiting for a code review is compound interest on waste.
- Security debt – A week to patch high-sev vulns is an eternity to an attacker; supply-chain exploits don't wait for sprint planning.
- People burnout – Manual deployments and after-hours incidents create the very exhaustion AI tooling claims to cure.
- Innovation drag – Talent stuck chasing broken pipelines isn't inventing the next feature set.
Multiply those by global salary averages and you understand the "millions" Harness flags. (harness.io)
A SecDevOps lens on the recommendations
Harness (unsurprisingly) prescribes a platform-centric approach. I agree, with caveats.
| Harness prescription | My SecDevOps-flavoured refinement |
|---|---|
| Automate pipelines end-to-end | Yes, but policy-as-code must gate every stage: SAST, SBOM generation, IaC drift detection, and secrets scanning, with tools such as Snyk or SonarQube wired into the gates (see the sketch after this table). |
| Adopt internal developer portals (IDPs) | Absolutely, but ensure least-privilege defaults and guard-railed self-service to avoid "shadow IT/infra." |
| Upskill continuously | Non-negotiable. Pair mandatory secure-coding sessions with incident-response rotations to build real empathy. |
| Measure maturity across five dimensions | Do it, and add DORA metrics to keep throughput and reliability in view, ideally read through a SecDevOps lens. (en.wikipedia.org) |
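To make the policy-as-code point concrete, here is a minimal sketch (in Python, purely illustrative) of a gate a pipeline stage could run before promoting a build. The file locations, report schema, and severity threshold are my own assumptions for the sketch, not anything the Harness report or a specific scanner prescribes.

```python
#!/usr/bin/env python3
"""Illustrative policy-as-code gate: fail a pipeline stage when a build
violates a simple security policy. File locations, the report schema and
the severity threshold are hypothetical placeholders."""

import json
import pathlib
import sys

SBOM_PATH = pathlib.Path("artifacts/sbom.json")           # assumed SBOM location
REPORT_PATH = pathlib.Path("artifacts/scan-report.json")   # assumed aggregated scan output
BLOCKING_SEVERITIES = {"critical", "high"}                 # policy: block promotion on these


def main() -> int:
    violations = []

    # Policy 1: every build must ship an SBOM.
    if not SBOM_PATH.exists():
        violations.append(f"missing SBOM at {SBOM_PATH}")

    # Policy 2: no unresolved high/critical findings from SAST, secrets or IaC scans.
    if REPORT_PATH.exists():
        for finding in json.loads(REPORT_PATH.read_text()):
            severity = str(finding.get("severity", "")).lower()
            if severity in BLOCKING_SEVERITIES:
                violations.append(
                    f"{finding.get('tool', '?')}: {finding.get('id', '?')} ({severity})"
                )
    else:
        violations.append(f"missing scan report at {REPORT_PATH}")

    if violations:
        print("Policy gate FAILED:")
        for violation in violations:
            print(f"  - {violation}")
        return 1  # non-zero exit code blocks the stage

    print("Policy gate passed.")
    return 0


if __name__ == "__main__":
    sys.exit(main())
```

In practice this logic tends to live in a policy engine (OPA/Conftest, or your delivery platform's governance features) rather than a bespoke script, but the principle stands: the gate is versioned, reviewable code, not a manual sign-off.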
Culture change beats platform change
You can't secure what you don't understand; you can't automate what you haven't agreed to standardise. That calls for:
- Blameless, brutally honest retrospectives – surface the frictions behind those environments that can't even be spun up in 15 minutes.
- Shared SLOs – dev, ops, and security owning the same uptime, MTTR, and change-failure goals (a minimal calculation sketch follows this list).
- Sustainability KPIs – track on-call load, after-hours pages, and training hours as first-class metrics, not "nice to haves". (See the emerging Sustainable DevOps research (arxiv.org) and our article Sustainability Is the New Performance.)
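A quick, hedged illustration of the shared-SLO idea: if dev, ops, and security compute change-failure rate and MTTR from the same records, nobody can argue about whose dashboard is "right". The record shapes below are assumptions made for the sketch; the metric definitions follow the usual DORA formulations.

```python
"""Illustrative shared-SLO arithmetic: one set of records, one set of numbers
that dev, ops and security all own. Record shapes are hypothetical."""

from dataclasses import dataclass
from datetime import datetime, timedelta


@dataclass
class Deployment:
    deployed_at: datetime
    caused_incident: bool  # did this change trigger a rollback or incident?


@dataclass
class Incident:
    opened_at: datetime
    resolved_at: datetime


def change_failure_rate(deployments: list[Deployment]) -> float:
    """Share of deployments that led to an incident or rollback."""
    if not deployments:
        return 0.0
    return sum(d.caused_incident for d in deployments) / len(deployments)


def mean_time_to_restore(incidents: list[Incident]) -> timedelta:
    """Average time from incident opened to service restored (MTTR)."""
    if not incidents:
        return timedelta(0)
    return sum((i.resolved_at - i.opened_at for i in incidents), timedelta(0)) / len(incidents)


# Example: two deployments, one of which caused an incident.
deploys = [Deployment(datetime(2025, 6, 2, 10, 0), False),
           Deployment(datetime(2025, 6, 3, 15, 0), True)]
print(change_failure_rate(deploys))  # 0.5 -- the number everyone answers for
```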
People friendliness: the missing variable
The report focuses on dollars and defects. Let's add a human dimension:
- Manual toil = burnout.
- Waiting on reviews = disengagement.
- Slow incident resolution = paging the same hero at 3 a.m.
Addressing these isn't just a retention strategy; it's risk management. Tired minds mis-configure prod.
My four-point action plan
1. Map the waste. Time-in-queue, not lines-of-code, is my leading indicator. If your PRs sit for more than 12 hours, fix the feedback loop before buying another scanner (see the sketch after this list).
2. Shift security conversations left. Tools catch patterns; humans catch plausibility. Schedule brown-bag threat-modelling sessions so devs smell danger before a pull request exists.
3. Invest in platform and people equally. A robust IDP without a rotation plan is shelf-ware. Pair every new self-service feature with a skills module and a psychological-safety check-in.
4. Run a maturity assessment. If you don't know how, or who should run it, contact us. You need to know where you are if you intend to get somewhere.
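For point 1, here is a minimal sketch of the time-in-queue measurement, assuming you can export when each pull request was opened and when it received its first review (most code-hosting platforms expose both timestamps). The sample data and the 12-hour threshold are placeholders that simply mirror the rule of thumb above.

```python
"""Illustrative time-in-queue measurement for code reviews.
Input format is an assumption: (pr_id, opened_at, first_review_at) tuples
exported from your code-hosting platform."""

from datetime import datetime, timedelta
from statistics import median

THRESHOLD = timedelta(hours=12)  # rule of thumb from the action plan above

# Hypothetical export; in practice this comes from your platform's API or audit log.
prs = [
    ("PR-101", datetime(2025, 6, 2, 9, 0), datetime(2025, 6, 2, 11, 30)),
    ("PR-102", datetime(2025, 6, 2, 14, 0), datetime(2025, 6, 3, 16, 45)),
    ("PR-103", datetime(2025, 6, 3, 8, 15), datetime(2025, 6, 3, 9, 0)),
]

# Time each PR waited for its first review, and which ones breached the threshold.
waits = [(pr_id, reviewed - opened) for pr_id, opened, reviewed in prs]
stuck = [(pr_id, wait) for pr_id, wait in waits if wait > THRESHOLD]

print(f"median wait: {median(w for _, w in waits)}")
print(f"PRs waiting > {THRESHOLD}: {[pr_id for pr_id, _ in stuck]}")
```

Track the median and the tail, not individual culprits: the point is to expose the queue, not to name and shame reviewers.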
Where AI fits (and where it doesn't)
Copilot-style agents can suggest secure configs and auto-generate tests. But they can't:
- Negotiate acceptance criteria with stakeholders.
- Decide your organisation's risk appetite.
- Hold a retrospective on why a critical bug slipped.
Automation is the valet, not the driver.
A note on sustainability
The study hints at cost savings from eliminating manual tasks. Saving money is great; saving energy, cognitive load, and morale is better. The most sustainable software organisation is one that:
- Deploys with low carbon and low cortisol.
- Learns faster than it burns talent.
- Automates the boring and leaves space for mastery.
Closing thoughts
Harness's data isn't a doom scroll; it's a mirror. Yes, most teams remain stuck in mid-maturity limbo. But mirrors are powerful: once you see the gap, you can close it.
So ask yourself:
Are we optimising pipelines while ignoring people? Are we chasing velocity and missing resilience? Are we buying tools to hide symptoms, or investing in culture to cure causes?

I know which answer keeps me in the game for the long haul. What about you? Let's continue this conversation. Share your team's biggest DevOps maturity hurdle or success story in the comments, and let's build a more secure, sustainable, and human-friendly engineering culture together.
References:
1. The State of Software Engineering Excellence 2025 – Harness
2. New report reveals alarming state of software engineering excellence … – PR Newswire
3. Accelerate State of DevOps Report 2023 – Google Cloud / DORA
4. Accelerate State of DevOps Report 2024 – Google Cloud / DORA
5. Organisations are failing on DevOps experience and maturity – Digit.fyi
6. Measuring GitHub Copilot’s Impact on Productivity – Communications of the ACM
7. Predicting Attrition among Software Professionals: Antecedents and … – ACM Digital Library