
Guide to implementing a coordinated vulnerability disclosure process for open source projects


Before you begin

No open source project is perfect. At some point in the life of your project, someone--a user, a contributor, or a security researcher--will find a vulnerability that affects the safety and usefulness of your project.

This guide is intended to help open source maintainers create and maintain a coordinated vulnerability response process.

About this guide

This guide was produced by the Google Open Source Programs Office security program, Google’s vulnerability response team, and Google’s infrastructure security team. It was written for Googlers working in open source projects, with the types of projects Google tends to open source and contribute to in mind. We share this guide in the hope of helping all open source projects manage vulnerabilities well, but not all of the advice here is applicable to every open source project.

Who's a vulnerability reporter?

There’s no single pattern for how security issues are reported, or why people report them. That’s one of the things that can make vulnerability management and disclosure tricky: the human on the other side is, well, a human, with their own wants, needs, and interests in the disclosure. This is why the phrase “coordinated vulnerability disclosure” is now preferred over “responsible disclosure.” There are two parties involved, and you need to coordinate and work together!

Very broadly, reporters fall into two camps: those with a direct connection to the project, and those with an indirect connection. Direct reporters are active users of the project, or were hired to do work on behalf of a direct user. Because they are direct users, they have a strong motivation for an issue to be patched and smoothly rolled out. They might want to help develop and test a patch—they have a reason to see this through to a fix.

Indirect reporters may be security researchers, people doing penetration testing or security audits, or may stumble across an issue in your project as the result of chasing an issue in a dependent project. They may want to be highly involved in the patching and disclosure process, including coordinating publicity for their work, or they may just want to send over the issue and not be involved further.

Neither of these attitudes is wrong. By alerting you to an issue, they have done your project a massive favor. Unfortunately, there are incentives to not report vulnerabilities, and in some rare cases that incentive is quite a bit of cash (there is a market for undisclosed exploits).

Reporters deserve thanks for taking the time to find you and go through your process, and one of the ways to thank them is to make your process as discoverable, smooth, and low-friction as possible (we’ll mention other important ways to thank reporters in the Response Process).

Setting up the vulnerability management "infrastructure"

Before you can set up a vulnerability reporting process, there are some important pieces of infrastructure your project needs to have in place.

The next section will walk through in detail how these pieces come into play to handle a vulnerability report, but we’re going to introduce them here, in the order that you’ll encounter them in the process.

Create a vulnerability management team (VMT)

To keep security issues to a “need to know” basis while they’re being resolved, you need a small team who can be available to respond to issues. If you have a small project, you’ll want to split this work up amongst your maintainers. This team’s primary responsibility is coordination: they will be the reporter’s point of contact throughout the process, keep them informed (if they’d like to be), and keep the security issue moving through the process. You will want some team members who are familiar with the project’s release mechanisms and security, but that does not need to be everyone. Part of “coordinating” is knowing when and who to bring in when you need help beyond your team’s knowledge.

Recommendation: For larger projects, 3-7 team members with experience in security, engineering, and program management. For smaller projects, you’ll want to divide the responsibilities among maintainers. Create an email alias (security@[yourdomain] is recommended) for these team members. Make sure at least 2 team members have the correct permissions to generate security issues/advisories on your development platform (ie Admin on GitHub for Security Advisory).

Set up report intake

Location

You’ll need an easy, obvious way to be alerted to issues. Where should this information live?

If you are using GitHub

GitHub Security Advisory is the feature that displays the “Security Policy” and “Security Advisory” information in the top-level Security tab on a GitHub repository. To populate the “Security Policy” field, create a SECURITY.md file in your root, docs, or .github folder. (GitHub documentation: Creating a Security Policy) Whatever you decide, our recommendation is to also put a link to the SECURITY.md in your README. The Security tab isn’t obvious to everyone; the README puts this information front and center. (Just putting disclosure information in the README will not populate the Security tab.)
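As a rough illustration, a minimal SECURITY.md might look something like the sketch below. The alias, response window, and wording are placeholders to adapt to your project; the Templates directory referenced later in this guide has fuller examples.

```markdown
# Security Policy

## Reporting a security issue

Please do not report security issues through public GitHub issues.

Instead, email the vulnerability management team at security@example.org
(placeholder alias) with:

* A description of the issue and the steps you took that produced the behavior
* Affected versions and any relevant details about your environment
* Whether you would like to be credited in the advisory

We will acknowledge your report within a few business days, keep you informed
(if you'd like) as we assess and patch the issue, and credit you in the
disclosure unless you prefer otherwise.
```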

If you are using another git service

It is recommended to put your security policy in the same place where you document how to report issues, with a distinct callout for “Reporting a security issue.” If this page is not a top-level page, we recommend also adding a link to this documentation on a landing page, a security features page, a contact page, or other prominent, heavily-trafficked page. If you have site search, “vulnerability,” “report security,” and “security issue” are common keywords that you’ll want to incorporate.

Every project organizes itself differently. The goal here is: “make it obvious.”

Intake Method

Your intake method will depend on how you plan to privately develop and test your patch. Whatever method you pick, clear documentation and consistency across the vulnerability management team (VMT) will help you stay organized and responsive. (Half your reports coming in via email and half coming in through Launchpad security issues is a recipe for miscommunication.) Inevitably you’ll receive a report through the “wrong” method; just kindly help the report get into your workflow and keep going.

If you are using GitHub

GitHub Security Advisory is a GitHub feature that allows selected users to privately share information about reported issues, develop patches on a private branch, and publish a security advisory. If you plan to use GitHub Security Advisory for private patch development, follow the directions below. Otherwise, follow the directions for another git service.

The GitHub Security Advisory workflow starts when a repo or org admin opens a Security Advisory; general users cannot create a Security Advisory or create a private “security issue” out of a standard GitHub issue.

Your Security Policy should instruct reporters to email the VMT with a vulnerability report (see SECURITY.md templates). The VMT will then open a Security Advisory and add the reporter as a collaborator (see GitHub documentation on GitHub Security Advisory). It is also appropriate to email that alias for questions about the vulnerability disclosure process.

If you are using another git service

If you are using a security issue tracker (eg Launchpad, Buganizer), your Security Policy should instruct reporters to open a security issue in that tracker. It is also appropriate to email the VMT alias for questions about the vulnerability disclosure process or if there are problems opening a security issue.

If you do not have an issue tracker with a security issue feature, you need an alternative method for intake. Your intake solution should restrict access to the content of the messages to verified identities. However, this solution also has to be accessible and low friction. The reporter is doing you a favor; don’t add more steps than absolutely necessary. In the spirit of this balance, our recommendation is that using email for intake is okay, and having email available as an alternative method of intake can help make sure issues get to you.

Private patch development

Later sections will cover how to determine whether something is a security issue or a regular issue, and whether it is something you will patch privately and then disclose. If it is a security issue and you will be issuing a patch, you will need a way to privately develop and test your work. If you test a particular patch in public, an observant attacker may recognize the vulnerability and exploit it before you’re able to issue a patch.

If you are using GitHub

You have a decision to make: Will you use the GitHub Security Advisory feature to do private development of a patch? Based on the feature set of GitHub Security Advisory at the time of writing, our recommendation is that if you are a project using GitHub, you should use the private development features to generate your patch there.

Pros: Keeps all development within one platform; makes it easy to add external contributors (eg the reporter, or other experts who can help with patching); and when the vulnerability is disclosed, it is easy to flip the work from “private” to “public.”

Cons: At the time of writing, private forks created as part of GitHub Security Advisory do not have access to integrations like CI systems (see documentation), so you will need to run tests locally.

If you need to test against hardware or systems not already included in your testing suite but available somewhere else (for example, internally at your company), it may be faster to fork the project, develop, and test outside of GitHub’s private branches. However, this does introduce the challenge of keeping your internal fork up to date with main while you develop a patch, and restrictions on who can help.

Running private mirrors can be done, but we do not recommend this as the default. If you run a private mirror for developing and testing security patches, you will want to have this set up and operational before you have a vulnerability report.

If you are using another git service

There are many issue trackers that are able to separate security issues from regular issues. Whatever tracker you select, the following features are strongly recommended for your vulnerability reporting system:

  • A changelog is available for each ticket
  • Membership can be restricted, and member identity is compatible with multi-factor authentication
  • Private issues/tickets can be made public after disclosure
  • Issues and coordination communication are not ephemeral
  • The reporting process does not require the reporter to create an account with a service that is not already used by the project or is not a commonly used developer tool

Establish a CNA contact

CNAs (CVE Numbering Authorities) are organizations who can assign CVE numbers to new vulnerabilities. CNAs have various scopes, and do not issue CVEs outside of their scopes. (eg While the (fictitious) SpeakerCompany uses open source software in their products, their scope could be restricted to vulnerabilities only found in SpeakerCompany software, and they would not handle a CVE request for an upstream issue.) There are many CNAs; the only “pre-work” for the VMT is to know of at least one CNA whose scope covers your project and who you will go to first for a CVE assignment. (MITRE, the organization that manages CVE administration, is also a “CNA of Last Resort” for open source projects.)

Embargo list

TL;DR: Embargoed notification requires careful administration and management, adds additional responsibility for the VMT, and adds time to the disclosure process. Unless your project has a significant vendor ecosystem, embargoed notification is probably not necessary.

When companies offer your project as a managed service or your project is critical to their infrastructure, and their infrastructure has the potential to expose users, it is probably appropriate to have an “embargo list.” An embargo list is a read-only announcement list whose membership is restricted to particular users. Depending on the nature of your project and the vulnerability, a user of a managed service might be dependent on their provider to take action to reduce that user’s exposure. A notification under embargo, prior to the public disclosure, gives service providers time to prepare so they can patch quickly after the public disclosure and reduce the time their users are exposed.

Embargoed notification is not about avoiding PR issues or providing high-profile users with preferential treatment; it is about protecting users from damaging exploits by giving preparation time to the distributors and providers that control those users’ systems. It can also give distributors a chance to test and qualify the patch across diverse environments and report problems that can be fixed prior to public release. This extra testing validation can be valuable for complex patches. Make sure someone on the VMT is monitoring for replies to the embargo announcement.

Using an embargoed notification is not without risk. An embargoed notification expands the number of people with early awareness and adds extra time between when the vulnerability is discovered and when it’s patched. As the Project Zero team states, “We have observed several unintended outcomes from vulnerability sharing under embargo arrangements, such as: increased risk of leaks, slower patch release cycles, and inconsistent criteria for inclusion.” When deciding to use an embargoed notification, consider the severity and exploitability of your vulnerability, the patching complexity (does the provider actually need the time to prepare, or is this an easily rolled out patch?), the resource cost in running and managing an embargoed notification cycle, and the breadth of your embargo list.

If an embargo list is relevant to your project, you will want to create a restricted, read-only announcement list that is administered by your VMT. The VMT is responsible for approving access requests and maintaining an accurate list (e.g. removing outdated members), but it is the provider’s responsibility to request access to your list. List the requirements and directions for requesting access in your security documentation.

Communication templates

The more you have pre-written, the less there is to do when you have an issue to respond to. See the Templates directory for security policy (SECURITY.md), embargoed notification, and public disclosure templates.

The vulnerability response process

Runbook

See Runbook.md for step-by-step directions on the vulnerability response and disclosure process.

Response process

  1. Acknowledge the issue

    A 90-day disclosure deadline is the current norm in vulnerability disclosure. This means that the reporter will give the project 90 days to respond, patch, and publicly disclose the vulnerability before publicly disclosing it themselves. However, depending on whether the issue is being actively exploited or there are problems in patch rollout, less or more time (respectively) may be appropriate and agreed on by both parties. That’s why ongoing communication with your reporter is critical.

    It starts with acknowledging that you have received their issue. At this point you likely haven’t assessed the issue; you’re just letting them know that you’re on it.

  2. Assess the issue

    To assess if an issue is a vulnerability, you will need:

    • Documented steps the reporter took that created the behavior
    • Any relevant information about systems, versions, or packages involved

    Not everything reported as a security issue is a security issue. Generally, something is a security issue if it compromises data availability, data integrity, or data confidentiality. This may happen by way of elevated permissions or access, but what separates a security issue from unwanted behavior (a bug) is a compromise in one or more of those areas.

    Like bugs, intentional design decisions that do not have “optimized” security are not vulnerabilities. A suggestion for better security is not the same as a vulnerability. A vulnerability creates a situation where something is not working as intended and gives unintended access to data, systems, or resources.

    Assessment and response:

    • Working as intended: Let the reporter know this is the intended behavior. If they think this behavior could be improved, they can file a feature request. Close the security issue. When responding with this assessment, try to explain why you arrived at this conclusion, in case the original report was unclear and the VMT has unintentionally misunderstood it.

    • Bug: Let the reporter know this is unwanted behavior but not a security issue, and ask them to refile it as a bug. Close the security issue.

    • Feature request: Let the reporter know this is the intended behavior. If they think this behavior could be improved, they can file a feature request. Close the security issue.

    • Vulnerability: Let the reporter know that you have confirmed this is unwanted behavior that creates a security issue. Proceed with the process.
  3. Create a patch for the issue

    Let the reporter know you have confirmed the issue, will begin developing a patch, and will be requesting a CVE entry. Ask the reporter if they would like to be involved in the patch development process. Using your private development and testing tooling, develop a patch and prepare (but do not cut) a release.

    In your assessment process, you should have identified what versions are affected. As you prepare your patch, take note of backwards compatibility and upgrade requirements (for example: v1.0.0 is affected, but the patch is not compatible, and users will need to upgrade to v1.7.0 or above to apply the patch). You will need to communicate these details in your disclosure announcements.

    For issues in patching, see the Troubleshooting section of the guide.

  4. Get a CVE for the issue

    Ask the reporter if they would like to be involved in writing the CVE entry, and if they would like to be credited in the entry. (Recognition is one of the many ways we thank reporters!)

    Go through your identified CNA to have a CVE number reserved and submit a description. Let your CNA know you are working on a patch and, if applicable, will be doing embargoed notifications before public disclosure. Keep your CNA up to date on your public disclosure date so they can coordinate listing your CVE entry.

  5. (If applicable) Notify providers under embargo

    Embargo notifications are sent anywhere from 3-30 work days before the intended date of public disclosure. This timeframe depends on the severity and exploitability of the issue, the complexity of the patch, and the type of providers your project is used by (can the providers feasibly qualify and patch in 5 days? 10 days?). Also consider holidays and significant events that could impact the provider’s ability to prepare and adjust your dates accordingly (eg if your project is heavily used by retailers, don’t expect them to be able to prepare over the US Black Friday shopping days).

    Your notification should include the CVE number, issue description, reporter credit (if applicable), affected versions, how the patch will be made available, and the public disclosure date. See the corresponding template examples in the guide, and the sketch after this list.

  6. Cut a release and publicly disclose the issue

    On the day of public disclosure, publish your disclosure announcement (see templates). If using GitHub Security Advisories, “publishing” your private Security Advisory will add it to the “Security” tab. If you are not using GitHub Security Advisories, publish the announcement to your release notes and/or security bulletins.

    It’s recommended to also send the announcement to appropriate mailing lists for your community (eg a security-announce@ list, and even a general mailing list for high-impact vulnerabilities). Rough sketches of an embargoed notification and a public disclosure announcement follow this list.
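As a rough illustration of step 5, an embargoed notification covering the elements above might look something like the sketch below. The CVE ID, versions, dates, and alias are placeholders; the Templates directory contains the maintained versions.

```markdown
Subject: [EMBARGOED] Security issue in <project>, public disclosure on YYYY-MM-DD

This notification is shared under embargo. Please do not share it outside
your organization or act on it publicly before the disclosure date.

* CVE: CVE-YYYY-NNNNN (reserved)
* Description: <one-paragraph summary of the issue and its impact>
* Affected versions: <for example, v1.0.0 through v1.6.x>
* Patch availability: a fixed release (<version>) will be published on the
  disclosure date
* Public disclosure date: YYYY-MM-DD
* Credit: <reporter, if they wish to be credited>

Please send questions, or problems found while qualifying the patch, to
security@<yourdomain>.
```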
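And as a rough illustration of step 6, a public disclosure announcement might be structured like the sketch below; again, every identifier and version shown is a placeholder.

```markdown
Subject: <project> security release <version> addressing CVE-YYYY-NNNNN

A vulnerability (CVE-YYYY-NNNNN) was reported in <project> that could allow
<brief impact statement>. Thank you to <reporter> for reporting this issue.

* Affected versions: <for example, v1.0.0 through v1.6.x>
* Fixed in: <version>
* Upgrade notes: <for example, the patch is not compatible with v1.0.0; users
  must upgrade to v1.7.0 or above>
* Workarounds: <if any>

See the full advisory at <link to the Security Advisory or security bulletin>.
```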

Publishing your vulnerability management process

It can be beneficial to both reporters and users to publish what your project does when it receives a security issue, and whether you have a time-based disclosure deadline (eg 90 days). This helps reporters follow the process along, and helps users have context for how an issue was handled when they see a disclosure.

Troubleshooting the process

Our reporter isn't very responsive

After the initial report, how responsive the reporter chooses to be is up to them (that’s the “coordination” part of Coordinated Vulnerability Disclosure). If you receive a report that you are not able to reproduce and have tried multiple times to reach the reporter, send them a polite, final note that you were not able to reproduce the issue and will not be issuing a security advisory. Encourage them to reopen the issue if they are able to reproduce it in the future.

Patch development isn't going well

If you’re struggling to develop a patch that fully resolves the issue, you have a couple of options:

  1. Get more help. It is okay to expand the people working on an issue beyond the VMT when you’re struggling to create a fix. Is there a project contributor who has particular knowledge of the affected area? Do you know someone who specializes in this security area? (eg networking security, container security, etc) Do VMT members have resources at their company (eg vuln response teams) who can help?

  2. Patch partially (break the exploitation chain) before 90 days. If you’ve gotten more help, the 90-day window is coming up, and you still don’t have a complete fix, a patch that breaks the exploitation chain before the public disclosure date is preferable to no patch. This does not mean you stop working on a complete fix after disclosure, but that you release the solution you do have.

    In this option, it is important you communicate and document that this patch does not resolve the issue entirely. It is critical that users understand their exposure level even after patching. When you have a comprehensive fix, remember to add updates to past announcements to point users to the latest information. (For example, your release notes for the comprehensive fix could say, "Further security improvements addressing $CVEID.")

  3. Disclose without a patch and document it well. If an issue is unresolvable, it is better that users know than not know. “Security through obscurity” is a weak defense in vulnerability management. Any existing vulnerability can be found and exploited by bad actors. Document the issue well, including any related work-arounds for common environments, and continue to work on it in public.

Someone disclosed a vulnerability without working with us

Maybe it’s found in a research paper, an article, or on social media: if someone discloses a vulnerability in your project that you had no prior awareness of, the best thing to do is treat it as a regular project issue (it is, after all, already public), but assign it high priority and communicate with your users, particularly if it’s a publicized or critical issue. Let them know you’re aware of the issue, how it’s being handled, and where they should watch for updates. Handling an issue of this type publicly removes a significant part of the communication burden, as it allows others to find this information without having to contact the VMT.

Acknowledgements

Thank you to the Google Open Source Programs Office, the Google vulnerability response team, the Google infrastructure security team, and Project Zero team for their work on this guide. Thank you to the wider security and open source communities whose work informed this guide, including the OpenStack Vulnerability Management Process, Project Zero’s disclosure process, and the Kubernetes security and disclosure process.