With more than 100 new CVEs reported every day last year, keeping on top of vulnerabilities is growing steadily more difficult. External resources like the National Vulnerability Database (NVD) and MITRE’s CVE programme have long served as bedrocks in this shifting landscape, providing organisations with dependable information on the most critical vulnerabilities.
However, recent developments have exposed the fragility of the systems underpinning global vulnerability management. Earlier this year, funding uncertainty around MITRE’s CVE programme sent ripples through the cybersecurity community, while CISA has recently lost most of its senior leadership, throwing further doubt on the agency’s future. Meanwhile, NIST is still struggling to clear a significant backlog of vulnerability enrichment work.
With some predictions estimating as many as 50,000 new CVEs ahead in 2025, organisations must modernise how they source, interpret and act on vulnerability intelligence, building the ability to tackle vulnerabilities even without external databases.
Redundancy, not reliance: rebuilding trust in CVE feeds
The sheer quantity of vulnerabilities catalogued by the NVD and MITRE’s CVE programme is impressive, but they have long provided more than just numerical identifiers. For over two decades, they have offered critical enrichment that helps security teams understand how vulnerabilities function and which systems are affected.
The recent threats to stalwart sources like the NVD have been an eye-opener for many vulnerability management teams. If these sources were to disappear, how would they manage?
Without this enriched data, teams are left sifting through raw CVE entries devoid of the actionable detail required for effective prioritisation. This means delayed remediation, inconsistent interpretation, and an increased chance of critical threats being overlooked.
There is some good news with the recent launch of the EU Vulnerability Database (EUVD), a new platform monitoring critical and exploited security flaws. Led by ENISA, this project is a positive step toward reinforcing the availability of vulnerability data and reducing the reliance on a single feed. However, resilience demands more than one alternative.
Hopefully we’ll see Europe invest in building a fully-fledged, sovereign enrichment capability which is not just a mirror of existing projects, but a reliable, open and trusted source in its own right.
In the meantime, security teams must act to safeguard their own sources of vulnerability data. Investing in third-party CVE intelligence feeds – ones that enrich data independently of the NVD – is a critical safeguard against disruption in external systems.
Why legacy VM is no longer fit for purpose
Part of the reason the stability of the big CVE sources is such an issue is that many organisations are still grappling with outdated vulnerability management practices that simply can’t keep pace with today’s threat environment.
Vulnerabilities are often tracked in spreadsheets, assessed in silos, and patched by overstretched teams with little coordination. As a result, issues slip through the cracks – not due to negligence, but because the process is inherently broken.
Responsibility for vulnerability oversight often falls to SOC teams, whose reactive remit leaves little room for prevention. Meanwhile, cloud, IT, and development teams rely on different tools, with different scoring systems and no common language for risk.
Without normalised data and central oversight, comparing risks across systems is near impossible. Add to that the fatigue of manually collating insights from disparate sources, and it’s no wonder many security professionals feel disengaged or overwhelmed.
The case for a Vulnerability Operations Centre (VOC)
To break free from reactive, fragmented vulnerability management, many organisations are now adopting a centralised approach through a Vulnerability Operations Centre (VOC). Much like a SOC focuses on incident response, the VOC provides dedicated oversight for identifying, prioritising and co-ordinating vulnerability remediation.
A fully operational VOC consolidates vulnerability data from multiple sources, including databases like the NVD and EUVD, along with commercial feeds, into a unified platform. From here, analysts can run automated queries, identify critical exposures, and launch remediation campaigns aligned with business priorities. Normalised scoring across cloud, infrastructure, and application environments enables meaningful comparison of risk, regardless of where a vulnerability originates.
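The consolidation and normalisation step can be sketched in a few lines. This is a minimal illustration only: the feed names, record shape, and scoring scales below are assumptions for the example, not any vendor’s actual schema.

```python
# Illustrative VOC-style feed consolidation: deduplicate findings by
# CVE ID and map each source's severity onto a common 0-10 scale.
from dataclasses import dataclass

@dataclass
class Finding:
    cve_id: str
    source: str
    score: float      # severity on the source's own scale
    scale_max: float  # top of that scale (e.g. 10 for CVSS-style feeds)

def normalise(finding: Finding) -> float:
    """Map any source's severity onto a common 0-10 scale."""
    return round(10 * finding.score / finding.scale_max, 1)

def consolidate(findings: list[Finding]) -> dict[str, float]:
    """Deduplicate by CVE ID, keeping the highest normalised score so
    that disagreement between feeds errs on the side of caution."""
    merged: dict[str, float] = {}
    for f in findings:
        merged[f.cve_id] = max(normalise(f), merged.get(f.cve_id, 0.0))
    return merged

feeds = [
    Finding("CVE-2024-0001", "nvd", 9.8, 10),
    Finding("CVE-2024-0001", "vendor-feed", 72, 100),  # same flaw, different scale
    Finding("CVE-2024-0002", "euvd", 5.4, 10),
]
print(consolidate(feeds))  # {'CVE-2024-0001': 9.8, 'CVE-2024-0002': 5.4}
```

Taking the maximum across disagreeing feeds is one defensible policy; a real platform might instead weight sources by trust or recency.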
Crucially, the VOC equips security teams with the information to focus on what really matters to their organisation. AI and automation reduce the burden of triage and ticket chasing, while real-time tracking of SLAs ensures remediation stays on course. Contextual analysis – not just CVSS scores – determines which vulnerabilities pose the greatest risk in each environment.
Perhaps most importantly, the VOC can detect and reprioritise long-forgotten vulnerabilities that gain new urgency as threat actors adopt them. With so many breaches exploiting vulnerabilities that have been known for a year or more, maintaining this continuous, contextual awareness is essential to reliable security.
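The contextual reprioritisation described above can be sketched as a simple scoring function. The weighting scheme here is an assumption chosen for illustration; the point is only that exploitation status and asset context can outrank raw severity.

```python
# Illustrative contextual prioritisation: an old, medium-severity CVE's
# effective priority rises when it appears in an actively-exploited list
# (a KEV-style feed) or sits on a business-critical asset.
def effective_priority(cvss: float, exploited: bool, asset_critical: bool) -> float:
    """Combine base severity with exploitation and asset context,
    capped at 10. Weights are illustrative assumptions."""
    score = cvss
    if exploited:
        score += 3.0       # active exploitation outranks raw severity
    if asset_critical:
        score += 1.5
    return min(score, 10.0)

# A year-old medium flaw newly seen in exploitation on a critical server
# now outranks a fresh critical with no known exploits.
old_flaw = effective_priority(6.1, exploited=True, asset_critical=True)
new_flaw = effective_priority(9.1, exploited=False, asset_critical=False)
print(old_flaw > new_flaw)  # True
```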
Building resilience before the next disruption
The uncertainty surrounding long-relied-upon CVE programmes makes it clear that businesses cannot assume continuity when it comes to external databases.
The answer is a combination of resilience and efficiency. Security teams should diversify their enriched intelligence sources so they aren’t reliant on any single feed, but they also need a centralised and agile system for managing the vulnerabilities themselves.
With these processes in place, VM teams will be ready for whatever comes next, even when the established ‘vanguards’ of vulnerability intelligence show more cracks.
Written by
Sylvain Cortes
VP Strategy
Hackuity