add fields for scores #8
The JSON format could look something like this:
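A hypothetical sketch of the proposed shape (all field names, element names, and values here are illustrative, not the actual example from the thread):

```json
{
  "criteria": [
    {
      "name": "example-criterion",
      "severity_benchmarks": {
        "attacker_prevalence": 0.8,
        "attack_impact": 0.5,
        "countermeasure_coverage": 0.4
      }
    }
  ]
}
```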
If the above example were put through docgen.py, it would compute the following values:
The effective weights of all criteria in the threat model *might* add up to 100, depending on whether our criteria allow a wallet to perfectly mitigate every attack we list, but the sum of all criteria's effective weights would definitely not exceed 100. All of an element's severity benchmarks would simply be multiplied together to derive its final weight, effectiveness, or score. A future version of this project could introduce more complex arithmetic relationships between the scores, but I don't think we need that for v1.0.0.
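The multiplication described above could be sketched as follows (the function name, field names, and values are assumptions for illustration, not docgen.py's actual API):

```python
from math import prod

def effective_weight(base_weight, severity_benchmarks):
    """Multiply an element's base weight by each of its severity
    benchmarks (floats in [0, 1]) to get its effective weight."""
    return base_weight * prod(severity_benchmarks)

# Hypothetical criteria whose base weights sum to 100; since each
# benchmark is at most 1.0, effective weights can only shrink, so
# their sum never exceeds 100.
criteria = [
    {"weight": 60, "severity_benchmarks": [0.5, 0.8]},  # -> 24.0
    {"weight": 40, "severity_benchmarks": [1.0]},       # -> 40.0
]
total = sum(effective_weight(c["weight"], c["severity_benchmarks"])
            for c in criteria)
assert total <= 100  # 64.0 in this example
```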
LGTM 👍
A few changes to my proposed new format:
Currently, scores are expressed in the JSON file as 'weights' (integers) and 'effectiveness' (floats) for attackers, attacks, countermeasures, and criteria.
I suggest that we instead accumulate these scores from a series of score sub criteria. In the OBPP v2 threat model, we refer to these as "acceptance criteria" -- rules of thumb for how we derive subjective values that compare various threat model elements -- but in the JSON format I propose we refer to them as "severity benchmarks" to avoid confusion with what we're currently calling criteria.
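To illustrate the contrast (names and values are hypothetical, not a finalized schema), a criterion that today carries a flat weight and effectiveness would instead carry the benchmarks those values are derived from:

```json
{
  "current": {
    "name": "example-criterion",
    "weight": 20,
    "effectiveness": 0.75
  },
  "proposed": {
    "name": "example-criterion",
    "severity_benchmarks": [0.8, 0.5]
  }
}
```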