In a recent forum post, we introduced the EU Cyber Resilience Act (CRA) and what it means for the Ansible community: the timeline, who is affected, and Red Hat’s role as an open-source software steward. This post is the companion to that discussion. It goes deeper into the philosophy behind the CRA and a question the regulation forces us to confront: why has security been treated as an afterthought for so long, and what does it look like when we finally make it the default? In short, it is about moving from “default trust” to “trust but verify” as the security strategy.
The Ansible community can view the CRA as an opportunity to improve our security model, not just as a compliance checklist. At its core, it codifies a principle that many in the security community have been advocating for years: security must be built in from the start, not bolted on after the fact.
A Shift in the Security Mindset
For decades, the dominant model in software development has been: ship first, patch later. Products reach users with known vulnerabilities. Security updates are optional, inconsistent, or time-limited in ways that are invisible to consumers. Users have no reliable way to evaluate whether the product they are about to deploy is secure - or whether the vendor will still be providing patches a year from now.
This is not a failure of individual developers or companies. It is a systemic market failure. When security is invisible to the consumer, there is no competitive incentive to invest in it. A product that ships faster and cheaper will win market share over one that spent months on threat modelling and secure design - even if the cheaper product is riddled with vulnerabilities.
The CRA exists because voluntary approaches have not been enough to fix this. The EU decided that baseline cybersecurity cannot be left to market forces alone.
What “Security by Default” Means Under the CRA
The CRA’s essential cybersecurity requirements (Annex I) are deliberately framed around the principle that products must be secure in their default configuration. This is not about achieving perfect security - it is about shifting the burden.
Before the CRA, the burden was on the user: research the product, assess the risks, configure it securely, hope the vendor ships patches, and monitor for vulnerabilities yourself. Under the CRA, the burden shifts to the entity placing the product on the market:
- Products must be designed and developed with security in mind - not as a feature to be added later, but as a property of the product from the first line of code.
- Products must be delivered without known exploitable vulnerabilities at the time they are placed on the market.
- Products must be configured securely by default - the out-of-the-box experience should not require users to harden the product themselves.
- Security updates must be provided automatically where feasible, for the entire defined support period.
- Vulnerabilities must be actively handled - identified, documented, remediated, and reported - not ignored until someone files a CVE.
This is a fundamental reorientation. Security is not a premium feature. It is a baseline expectation.
The Role of the Steward: Making Security the Culture, Not Just the Rule
In the previous forum post, we explained how Red Hat acts as the open-source software steward for Ansible under the CRA, absorbing compliance burdens so that community volunteers are not exposed to regulatory liability. But the steward role is about more than compliance logistics. It is about embedding security into the culture of the project.
Article 24 requires stewards to put in place a cybersecurity policy that fosters:
- The development of secure products
- Effective vulnerability handling by developers
- Voluntary reporting of vulnerabilities
- Sharing of vulnerability information within the open source community
Notice what this is asking for: not a checklist that sits in a drawer, but a living practice. The steward’s job is to make it easier for contributors to do the secure thing by default - through tooling, processes, documentation, and culture - rather than relying on individual developers to remember security as a separate concern.
This is the difference between “we have a security policy” and “security is how we work.”
Manufacturers: You Own the Lifecycle
Where the steward fosters the culture, the manufacturer owns the outcome. The CRA draws a sharp line here that is worth understanding clearly.
A manufacturer is any entity that places a product with digital elements on the EU market. If you take open source software, package it into a product, and ship it to customers, you are a manufacturer. The full weight of the CRA falls on manufacturers:
- Secure by design - Your product must meet the essential cybersecurity requirements in Annex I before it reaches the market. Security cannot be deferred to a future release.
- Conformity assessment - You must evaluate your product against the CRA requirements and affix the CE marking. For higher-risk categories, this requires third-party assessment.
- Vulnerability handling - You must identify and document vulnerabilities, provide security updates for the support period, and maintain a coordinated vulnerability disclosure policy.
- Incident reporting - Actively exploited vulnerabilities and severe incidents must be reported to ENISA and the relevant CSIRT within 24 hours (early warning) and 72 hours (full notification). This is not optional, and the clock starts as soon as you become aware of the issue.
- Support period - You must define and communicate how long you will provide security updates. Users must know before they buy.
- Administrative fines - Non-compliance can result in fines of up to 15 million EUR or 2.5% of global annual turnover, whichever is higher.
The message is clear: if you profit from placing a product on the market, you are responsible for its security - not your users, not the upstream open source project, not the foundation that hosts the code.
Steward vs. Manufacturer: Where the Line Falls
This distinction matters enormously for the open source ecosystem. A steward and a manufacturer may both work with the same codebase, but their obligations are different because their roles are different.
| | Manufacturer | Open-Source Software Steward |
|---|---|---|
| Places product on the market | Yes | No |
| Core obligation | Product must be secure by default | Foster a culture of secure development; serve as a bridge between the open source project and external CRA enquiries |
| Conformity assessment / CE marking | Yes | No |
| Vulnerability handling | Full lifecycle obligations | Cybersecurity policy encouraging best practices |
| Reporting to ENISA | Mandatory for any product with digital elements | Scoped to involvement in certain activities, such as code development or infrastructure management |
| Administrative fines | Up to 15M EUR / 2.5% turnover | Exempt (Article 64(10)) |
| Typical examples | Companies selling software, IoT device vendors, cloud providers bundling FOSS | Eclipse Foundation, Linux Foundation, Apache Software Foundation, commercial companies (like Red Hat for Ansible community projects) |
The CRA does not punish open source. It targets the point where software enters the commercial supply chain. Stewards are treated as partners in the security ecosystem, not as enforcement targets.
What This Means in Practice for the Ansible Community
In the previous forum post we laid out the practical impact: contributors and maintainers are shielded by Red Hat’s stewardship, and users can expect greater transparency about the security posture of collections and tooling.
But the deeper shift is cultural. The CRA gives us an opportunity - and an obligation - to ask ourselves whether we are treating security as a default or as an afterthought in our daily work:
- When we write a new module or plugin, are we considering what happens if the input is malicious, or are we assuming benign use?
- When we review a pull request, are we evaluating security properties alongside functionality, or are we deferring that to “a security review later”?
- When we discover a vulnerability, are we reporting it through the proper channels, or are we opening a public issue because it is easier?
- When we set default configuration values, are we choosing the secure option, or the convenient one?
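To make the last two questions concrete, here is a minimal sketch of what secure-by-default choices can look like in a module's argument spec. The parameter names and the small helper are invented for illustration, written in plain Python so the pattern is visible without Ansible installed; the option keys mirror real Ansible `argument_spec` conventions, where `default` supplies the out-of-the-box value and `no_log` keeps a secret out of logs and registered output.

```python
# Hypothetical argument spec for an illustrative module. The secure choice
# is the default; users must explicitly opt out of it.
ARGUMENT_SPEC = {
    'url': {'type': 'str', 'required': True},
    # Secure default: TLS certificate validation is ON unless the user
    # explicitly disables it - not the other way around.
    'validate_certs': {'type': 'bool', 'default': True},
    # no_log marks the token as sensitive so it is never echoed in output.
    'api_token': {'type': 'str', 'required': True, 'no_log': True},
}


def apply_defaults(spec, user_params):
    """Merge user-supplied parameters over the spec's defaults."""
    params = {name: opts['default']
              for name, opts in spec.items() if 'default' in opts}
    params.update(user_params)
    return params


# The caller did not mention validate_certs, so the secure default applies.
params = apply_defaults(ARGUMENT_SPEC, {'url': 'https://example.com',
                                        'api_token': 's3cret'})
assert params['validate_certs'] is True
```

The point of the pattern is that doing nothing leaves the user in a safe state: weakening security requires a deliberate, visible choice in the playbook, which is exactly the kind of secure-by-default posture the CRA asks for.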
None of these questions are new. But the CRA makes them non-negotiable for anyone whose software ends up in a product on the EU market. And for the rest of us, it is a useful mirror: if a regulation had to be written to enforce these practices, perhaps we as an industry were not doing them consistently enough on our own.
Security Is Not a Feature
The CRA’s most important impact is not any specific obligation or timeline. It is the framing. For years, security has been marketed as a feature - something you pay extra for, enable with a toggle, or get from a premium tier. The CRA rejects this framing entirely. Security is a property that products must have before they can be sold. It is not a competitive differentiator. It is a floor.
For an open source community like Ansible, with billions of downloads and millions of users who trust it, this is actually liberating. We do not need to justify investing time in security tooling, vulnerability handling processes, or secure defaults. The CRA makes the case for us: this is not optional, this is not overhead, this is how software should be built.
Security as default, not afterthought. The CRA makes it law. We should make it a habit.
Call to Action
This is a community effort, and we want your voice in it. Here is where things stand and how you can participate.
What is happening right now:
- We are updating the Ansible community security policy to align with the CRA’s expectations around secure development practices and vulnerability handling. You will see changes landing in the coming weeks.
- We are working closely with Red Hat Product Security to tighten the vulnerability management process for Ansible community projects. This means clearer reporting channels, faster triage, and more transparent communication when vulnerabilities are identified. Expect positive changes on that front soon.
- We are planning follow-up posts in this series that will dig into specific topics: vulnerability disclosure workflows, what secure defaults look like in practice for collections and plugins, and how the CRA timeline affects the Ansible ecosystem.
What we need from you:
- Ask questions: If anything about the CRA, its timeline, or how it affects your work as a contributor, maintainer, or user is unclear, ask. No question is too basic. We would rather address concerns early than have them become blockers later.
- Share your thoughts and concerns: Are there areas where you think the community’s security practices could improve? Do you see gaps in how we handle vulnerabilities or set defaults? This is the time to raise them.
- Stay tuned: More posts in this series are on the way. Each one will tackle a specific aspect of CRA readiness and what it means for day-to-day work in the Ansible community.
- Get involved: If you are interested in helping with security policy, vulnerability handling processes, documentation, or tooling, we would love to have you. Reach out on the forum or through the usual community channels and let us know where you would like to contribute.
This is not a top-down compliance exercise. The CRA gives us a framework, but the community decides how we build security into our culture. The more perspectives we have in that conversation, the better the outcome will be for everyone.
Let us hear from you.
Further Reading
- EU Cyber Resilience Act and What It Means for Ansible?
- Cyber Resilience Act - Official Summary by European Commission
- CRA and Open Source by European Commission
- Article 24 - Obligations of Open-Source Software Stewards
- Open Regulatory Compliance Working Group, Eclipse Foundation
- CRA Implementation FAQ by European Commission
- OpenSSF - EU Cyber Resilience Act Resources