
CI checks for build, test, lint, and dep vulnerabilities #23

Merged
merged 28 commits into from
Jun 15, 2022
68 changes: 68 additions & 0 deletions .github/workflows/pr.yml
@@ -0,0 +1,68 @@
# Checks for PRs
# See: https://github.com/actions-rs/example/blob/master/.github/workflows/quickstart.yml

on: [pull_request]

name: PR

jobs:
@keyz keyz Jun 7, 2022

Would it make sense for "Checkout + fetch dependencies" to be a separate job, and have other jobs depend on that? I guess this way we can use Cargo.lock as the key for caching the toolchain and dependencies across CI runs

Contributor Author

@emostov emostov Jun 7, 2022


If we could run "Checkout + fetch dependencies" once and then have subsequent jobs run in the same environment, it would make sense because it saves us from having to do it for each job. But AFAICT every job runs in its own environment, so it won't save us any time; it would just make sure subsequent jobs don't run if the fetch fails?


Yup, every job runs in its own environment. However, there's a ~global cache that every job can read/write. I think we can have the "checkout + fetch" job write to the cache, then let the other jobs restore from it: maybe something like this
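A hedged sketch of that idea (the job name, cache paths, and key are assumptions, not part of this PR; it uses `actions/cache` keyed on `Cargo.lock`):

```yaml
# Hypothetical warm-up job: writes the cargo cache that later jobs restore.
fetch:
  name: Checkout + fetch dependencies
  runs-on: ubuntu-latest
  steps:
    - name: Checkout sources
      uses: actions/checkout@v2

    - name: Cache cargo registry and target dir
      uses: actions/cache@v2
      with:
        path: |
          ~/.cargo/registry
          ~/.cargo/git
          target
        key: ${{ runner.os }}-cargo-${{ hashFiles('**/Cargo.lock') }}

    - name: Fetch dependencies
      run: cargo fetch
```

Downstream jobs would then declare `needs: fetch` and repeat the same cache step; on an exact key hit `actions/cache` restores the saved paths, so each job skips the network fetch.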

  test:
    name: Test Suite
    runs-on: ubuntu-latest
    steps:
      - name: Checkout sources
        uses: actions/checkout@v2

      - name: Install stable toolchain
        uses: actions-rs/toolchain@v1
        with:
          profile: minimal
          toolchain: stable
          override: true

      - name: Run cargo test
        uses: actions-rs/cargo@v1
        with:
          command: test

  rustfmt:
    name: rustfmt
    runs-on: ubuntu-latest
    steps:
      - name: Checkout sources
        uses: actions/checkout@v2

      - name: Install nightly toolchain
        uses: actions-rs/toolchain@v1
        with:
          profile: minimal
          toolchain: nightly
          override: true
          components: rustfmt, clippy

      - name: Run cargo fmt
        uses: actions-rs/cargo@v1
        with:
          command: fmt
          args: --all -- --check

  clippy:
    name: clippy
    runs-on: ubuntu-latest
    steps:
      - name: Checkout sources
        uses: actions/checkout@v2

      - name: Install stable toolchain
        uses: actions-rs/toolchain@v1
        with:
          profile: minimal
          toolchain: stable
          override: true
          components: rustfmt, clippy

      - name: Run cargo clippy
        uses: actions-rs/cargo@v1
        with:
          command: clippy
          args: --all-targets -- -D warnings
56 changes: 30 additions & 26 deletions README.md
@@ -1,47 +1,51 @@
# QuorumOS

This is a WIP.

## Submitting a PR

Before a PR can be merged it must:

Be formatted

```bash
cargo +nightly fmt
```

Pass the linter

```bash
cargo clippy

# to fix some types of lints you can run
cargo clippy --fix
```

And pass all tests

```bash
cargo test
```

## System requirements

- openssl >= 1.1.0

## Key parts

### Enclave

- houses nitro server
- see `qos-core`

### Host

- EC2 instance where the nitro enclave lives inside
- has client for talking to the nitro enclave
- has server for incoming requests from the outside world
- see `qos-host`

### End user

- anything making a request to the host
- see `qos-client`
19 changes: 18 additions & 1 deletion qos-client/src/attest/mod.rs
@@ -1,22 +1,39 @@
//! Attestation specific logic

pub mod nitro;

/// Attestation error.
#[derive(Debug)]
pub enum AttestError {
/// `webpki::Error` wrapper.
WebPki(webpki::Error),
/// Invalid certificate chain.
InvalidCertChain(webpki::Error),
/// `openssl::error::ErrorStack` wrapper.
OpenSSLError(openssl::error::ErrorStack),
/// `aws_nitro_enclaves_nsm_api::api::Error` wrapper.
Nsm(aws_nitro_enclaves_nsm_api::api::Error),
/// Invalid end entity certificate. In the case of Nitro this means the
/// NSM's certificate was invalid.
InvalidEndEntityCert,
/// Invalid COSE Sign1 structure signature. In the case of Nitro this means
/// the end entity's signature of the attestation doc was invalid.
InvalidCOSESign1Signature,
/// Invalid COSE Sign1 structure.
InvalidCOSESign1Structure,
/// Invalid hash digest.
InvalidDigest,
/// Invalid NSM module id.
InvalidModuleId,
/// Invalid PCR.
InvalidPcr,
/// Invalid certificate authority bundle.
InvalidCABundle,
/// Invalid time.
InvalidTimeStamp,
/// Invalid public key.
InvalidPubKey,
/// Invalid bytes.
InvalidBytes,
}
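Most of these variants wrap a library error, which call sites then bubble up with `?`. A minimal sketch of the wrapper pattern, using a hypothetical `ParseError` stand-in rather than the real `webpki`/`openssl` types:

```rust
// Hypothetical stand-in for a library error such as `webpki::Error`.
#[derive(Debug)]
pub struct ParseError;

/// Simplified error enum mirroring the wrapper style above.
#[derive(Debug)]
pub enum AttestError {
    /// `ParseError` wrapper.
    Parse(ParseError),
}

// A `From` impl lets `?` convert the library error automatically.
impl From<ParseError> for AttestError {
    fn from(e: ParseError) -> Self {
        AttestError::Parse(e)
    }
}

fn parse(ok: bool) -> Result<u8, ParseError> {
    if ok { Ok(42) } else { Err(ParseError) }
}

pub fn attest(ok: bool) -> Result<u8, AttestError> {
    let version = parse(ok)?; // ParseError -> AttestError::Parse via From
    Ok(version)
}
```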

22 changes: 11 additions & 11 deletions qos-client/src/attest/nitro/mod.rs
@@ -21,15 +21,15 @@ static AWS_NITRO_CERT_SIG_ALG: &[&webpki::SignatureAlgorithm] =
/// Corresponds to `MockNsm` attestation document response. This time is
/// valid for the mock and should only be used for testing.
#[cfg(any(feature = "mock", test))]
pub const MOCK_SECONDS_SINCE_EPOCH: u64 = 1_652_756_400;

/// AWS Nitro root CA certificate.
///
/// This should be validated against the checksum:
/// `8cf60e2b2efca96c6a9e71e851d00c1b6991cc09eadbe64a6a1d1b1eb9faff7c`. This
/// checksum and the certificate should be manually verified against
/// <https://docs.aws.amazon.com/enclaves/latest/user/verify-root.html>.
pub const AWS_ROOT_CERT_PEM: &[u8] =
std::include_bytes!("./static/aws_root_cert.pem");

/// Extract a DER encoded certificate from bytes representing a PEM encoded
@@ -99,7 +99,7 @@ pub fn attestation_doc_from_der(

/// Verify the certificate chain against the root & end entity certificates.
fn verify_certificate_chain(
cabundle: &[ByteBuf],
root_cert: &[u8],
end_entity_certificate: &[u8],
validation_time: u64,
@@ -109,7 +109,7 @@ fn verify_certificate_chain(
// (first element). Ordering is: root cert .. intermediate certs ..
// end entity cert.
let intermediate_certs: Vec<_> =
cabundle[1..].iter().map(|x| x.as_slice()).collect();

let anchor = vec![webpki::TrustAnchor::try_from_cert_der(root_cert)?];
let anchors = webpki::TlsServerTrustAnchors(&anchor);
@@ -121,7 +121,7 @@
&intermediate_certs,
webpki::Time::from_seconds_since_unix_epoch(validation_time),
)
.map_err(AttestError::InvalidCertChain)?;

Ok(())
}
@@ -136,7 +136,7 @@ fn verify_cose_sign1_sig(

// Expect v3
if ee_cert.version() != X509_V3 {
return Err(AttestError::InvalidEndEntityCert);
}

let ee_cert_pub_key = ee_cert.public_key()?;
@@ -145,10 +145,10 @@
let is_valid_sig = cose_sign1
.verify_signature(&ee_cert_pub_key)
.map_err(|_| AttestError::InvalidCOSESign1Signature)?;
if is_valid_sig {
Ok(())
} else {
Err(AttestError::InvalidCOSESign1Signature)
}
}

@@ -290,7 +290,7 @@ mod test {
}

{
let valid = attestation_doc;
// Don't pop anything, just want to sanity check that we get a
// corrupt signature on the cose sign1 structure.

38 changes: 17 additions & 21 deletions qos-client/src/attest/nitro/syntactic_validation.rs
@@ -4,7 +4,7 @@ use std::collections::BTreeMap;

use aws_nitro_enclaves_nsm_api::api::Digest;

use super::{AttestError, ByteBuf};

const MIN_PCR_COUNT: usize = 1;
const MAX_PRC_COUNT: usize = 32;
@@ -19,8 +19,8 @@ const MIN_CERT_LEN: usize = 1;
const MAX_CERT_LEN: usize = 1024;

/// Mandatory field
pub(super) fn module_id(id: &str) -> Result<(), AttestError> {
if id.is_empty() {
Err(AttestError::InvalidModuleId)
} else {
Ok(())
@@ -44,7 +44,7 @@ pub(super) fn pcrs(pcrs: &BTreeMap<usize, ByteBuf>) -> Result<(), AttestError> {
}
}
/// Mandatory field
pub(super) fn cabundle(cabundle: &[ByteBuf]) -> Result<(), AttestError> {
let is_valid_len = cabundle.len() >= MIN_CERT_CHAIN_LEN;
let is_valid_entries = cabundle
.iter()
@@ -58,10 +58,10 @@ pub(super) fn cabundle(cabundle: &Vec<ByteBuf>) -> Result<(), AttestError> {
}
/// Mandatory field
pub(super) fn digest(d: Digest) -> Result<(), AttestError> {
if d == Digest::SHA384 {
Ok(())
} else {
Err(AttestError::InvalidDigest)
}
}
/// Mandatory field
Expand All @@ -77,7 +77,7 @@ pub(super) fn public_key(pub_key: &Option<ByteBuf>) -> Result<(), AttestError> {
if let Some(key) = pub_key {
(key.len() >= MIN_PUB_KEY_LEN && key.len() <= MAX_PUB_KEY_LEN)
.then(|| ())
.ok_or(AttestError::InvalidPubKey)?;
}

Ok(())
Expand All @@ -93,7 +93,7 @@ pub(super) fn nonce(n: &Option<ByteBuf>) -> Result<(), AttestError> {

fn bytes_512(val: &Option<ByteBuf>) -> Result<(), AttestError> {
if let Some(val) = val {
(val.len() <= 512).then(|| ()).ok_or(AttestError::InvalidBytes)?;
}

Ok(())
@@ -149,15 +149,15 @@ mod test {
#[test]
fn cabundle_works() {
let valid_cert = ByteBuf::from(vec![42]);
assert!(cabundle(&[valid_cert]).is_ok());

assert!(cabundle(&[]).is_err());

let short_cert = ByteBuf::new();
assert!(cabundle(&[short_cert]).is_err());

let long_cert = ByteBuf::from((0..1025).map(|_| 3).collect::<Vec<_>>());
assert!(cabundle(&[long_cert]).is_err());
}

#[test]
@@ -182,19 +182,15 @@
assert!(pcrs(&BTreeMap::from([(33, pcr32.clone())])).is_err());

// Valid
assert!(pcrs(&BTreeMap::from([(0, pcr48), (32, pcr32), (5, pcr64)]))
.is_ok());

assert!(pcrs(&BTreeMap::from([(5, pcr_invalid)])).is_err());
}

#[test]
fn module_id_works() {
assert!(module_id("").is_err());
assert!(module_id("1").is_ok());
}
}
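The validators in this module share one idiom: a boolean check is turned into a `Result` via `bool::then` plus `Option::ok_or`. A self-contained sketch of the pattern with a stand-in error type (the real code maps failures onto `AttestError` variants):

```rust
/// Stand-in for the module's `AttestError`.
#[derive(Debug, PartialEq)]
pub enum CheckError {
    /// Value exceeded the allowed length.
    TooLong,
}

/// Mirrors the style of `bytes_512` above: an optional field is valid when
/// absent and length-checked when present.
pub fn bytes_512(val: &Option<Vec<u8>>) -> Result<(), CheckError> {
    if let Some(val) = val {
        // `then(|| ())` yields Some(()) when the check passes, None otherwise;
        // `ok_or` maps the None onto the error variant, and `?` propagates it.
        (val.len() <= 512).then(|| ()).ok_or(CheckError::TooLong)?;
    }
    Ok(())
}
```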