add fields for scores #8

Open · kristovatlas opened this issue Aug 10, 2016 · 3 comments

@kristovatlas (Member)

Currently, scores are expressed in the JSON file as 'weights' (integer values) for attackers and attacks, and 'effectiveness' (float values) for countermeasures and criteria.

I suggest that we instead accumulate these scores from a series of score sub-criteria. In the OBPP v2 threat model, we refer to these as "acceptance criteria" -- rules of thumb for how we derive the subjective values that compare various threat model elements -- but in the JSON format I propose we call them "severity benchmarks" to avoid confusion with what we currently call criteria.

@kristovatlas (Member Author) commented Aug 10, 2016

The JSON format could look something like this:

'attackers':
[
    {
        'name': 'attacker 1',
        'weight': -1, // leave as -1 in order to be accumulated from severity benchmarks by docgen.py
        'severity-benchmarks': [
            {
                'description': 'quantity of likely attackers',
                'weight': 50, // an int between 0 and 100 where 0 is "few" and 100 is "many"
                'max-weight': 100,
                'min-weight': 0
            },
            {
                'description': 'temporal window of attacks',
                'weight': 100, // an int between 0 and 100 where 0 is "short" and 100 is "long"
                'max-weight': 100,
                'min-weight': 0
            }
        ],
        'attacks': [
            {
                'name': 'attack 1',
                'weight': -1,
                'severity-benchmarks': [
                    {
                        'description': 'probability of attack success if unmitigated',
                        'weight': 100,
                        'max-weight': 100,
                        'min-weight': 0
                    },
                    {
                        'description': 'severity of information gained in successful attack',
                        'weight': 50,
                        'max-weight': 100,
                        'min-weight': 0
                    }
                ],
                'countermeasures': [
                    {
                        'name': 'countermeasure 1',
                        'effectiveness': -1,
                        'severity-benchmarks': [
                            {
                                'description': 'likelihood of mitigation if completely implemented',
                                'score': 1.0,
                                'max-score': 1.0,
                                'min-score': 0.0,
                                'comment': 'countermeasure 1 would always be effective if completely implemented because blah blah blah'
                            },
                            {
                                'description': 'severity of information protected by countermeasure',
                                'score': 0.75,
                                'max-score': 1.0,
                                'min-score': 0.0,
                                'comment': 'countermeasure 1 would only protect 75% of the data lost by attack 1 because blah blah blah'
                            }
                        ],
                        'criteria': [
                            {
                                'name': 'criterion 1',
                                'effectiveness': -1,
                                'severity-benchmarks': [
                                    {
                                        'description': 'thoroughness of implementing countermeasure',
                                        'score': 0.60,
                                        'max-score': 1.0,
                                        'min-score': 0.0,
                                        'comment': 'completion of criterion 1 indicates a 60% application of countermeasure 1 because blah blah blah'
                                    }
                                ]
                            }
                        ]
                    }
                ]
            }
        ]
    }
]

If the above example were put through docgen.py, it would compute the following values:

  • attacker 1's weight would be: 50/(100-0) * 100/(100-0) * 100 = 50
  • attack 1's effective weight would be: 50 * ( 100/(100-0) * 50/(100-0) * 100 ) / 100 = 50 * 50 / 100 = 25
  • countermeasure 1's effectiveness under attack 1 would be: 1.0/(1.0-0.0) * 0.75/(1.0-0.0) * 1.0 = 0.75
  • countermeasure 1's effective weight under attack 1 would be: 25 * 0.75 = 18.75
  • criterion 1's effectiveness under countermeasure 1 under attack 1 would be: 0.75 * 0.60/(1.0-0.0) = 0.75 * 0.60 = 0.45
  • criterion 1's effective weight would be: 18.75 * 0.45 = 8.4375

The effective weights of all criteria in the threat model might sum to exactly 100, depending on whether our criteria allow a wallet to perfectly mitigate every attack we list, but the sum of all criteria's effective weights would never exceed 100.

All of these severity benchmarks would simply be multiplied together to derive a final weight, effectiveness, or score. The next version of this project could include complex arithmetic relationships between the scores, but I don't think we need this for v1.0.0.
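
For illustration, here's a minimal Python sketch of that accumulation. These helpers are hypothetical, not docgen.py's actual code; the field names follow the example JSON above.

def normalize(benchmark):
    """Scale a benchmark's raw value into [0, 1] using its declared bounds."""
    if 'weight' in benchmark:
        return benchmark['weight'] / float(benchmark['max-weight'] - benchmark['min-weight'])
    return benchmark['score'] / (benchmark['max-score'] - benchmark['min-score'])

def accumulate(benchmarks, scale=1.0):
    """Multiply all normalized benchmark values together, then apply a
    scale (100 for integer weights, 1.0 for float effectiveness scores)."""
    result = scale
    for benchmark in benchmarks:
        result *= normalize(benchmark)
    return result

# Reproducing attacker 1's weight from the worked example above:
attacker_benchmarks = [
    {'description': 'quantity of likely attackers',
     'weight': 50, 'max-weight': 100, 'min-weight': 0},
    {'description': 'temporal window of attacks',
     'weight': 100, 'max-weight': 100, 'min-weight': 0},
]
print(accumulate(attacker_benchmarks, scale=100))  # -> 50.0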

@dcousens

LGTM 👍

@kristovatlas (Member Author)

A few changes to my proposed new format:

  • No more need to specify a weight of -1 as a placeholder; either specify the weight/effectiveness directly or provide one or more severity benchmarks.
  • Severity benchmarks may have arbitrary depths of sub-benchmarks.
  • A "relationship" field is required and is set to either "direct" or "inverse". The score is determined by multiplying by all direct-relationship scores and dividing by all inverse-relationship scores (see the sketch after the example below).
'attackers':
[
    {
        'name': 'attacker 1',
        'min-weight': 0,
        'max-weight': 100,
        'severity-benchmarks': [
            {
                'description': 'likelihood of attack against average user',
                'relationship': 'direct',
                'weight': 50, // an int between 0 and 100 where 0 is "unlikely" and 100 is "likely"
                'max-weight': 100,
                'min-weight': 0
            },
            {
                'description': 'temporal window of attacks',
                'relationship': 'direct',
                'weight': 100, // an int between 0 and 100 where 0 is "short" and 100 is "long"
                'max-weight': 100,
                'min-weight': 0
            }
        ],
        'attacks': [
            {
                'name': 'attack 1',
                'min-weight': 0,
                'max-weight': 100,
                'severity-benchmarks': [
                    {
                        'description': 'probability of attack success if unmitigated',
                        'relationship': 'direct',
                        'weight': 100,
                        'max-weight': 100,
                        'min-weight': 0
                    },
                    {
                        'description': 'severity of information gained in successful attack',
                        'relationship': 'direct',
                        'weight': 50,
                        'max-weight': 100,
                        'min-weight': 0
                    }
                ],
                'countermeasures': [
                    {
                        'name': 'countermeasure 1',
                        'relationship': 'direct',
                        'severity-benchmarks': [
                            {
                                'description': 'likelihood of mitigation if completely implemented',
                                'relationship': 'direct',
                                'score': 1.0,
                                'max-score': 1.0,
                                'min-score': 0.0,
                                'comment': 'countermeasure 1 would always be effective if completely implemented because blah blah blah'
                            },
                            {
                                'description': 'severity of information protected by countermeasure',
                                'relationship': 'direct',
                                'score': 0.75,
                                'max-score': 1.0,
                                'min-score': 0.0,
                                'comment': 'countermeasure 1 would only protect 75% of the data lost by attack 1 because blah blah blah'
                            }
                        ],
                        'criteria': [
                            {
                                'name': 'criterion 1',
                                'severity-benchmarks': [
                                    {
                                        'description': 'thoroughness of implementing countermeasure',
                                        'relationship': 'direct',
                                        'score': 0.60,
                                        'max-score': 1.0,
                                        'min-score': 0.0,
                                        'comment': 'completion of criterion 1 indicates a 60% application of countermeasure 1 because blah blah blah'
                                    }
                                ]
                            }
                        ]
                    }
                ]
            }
        ]
    }
]
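
And here is a minimal Python sketch of the revised accumulation under these rules. Again, this is hypothetical, not actual docgen.py code; I'm also assuming nested sub-benchmarks reuse the 'severity-benchmarks' key, since the example above doesn't show a nested one.

def normalize(benchmark):
    """Scale a benchmark's raw value into [0, 1] using its declared bounds."""
    if 'weight' in benchmark:
        return benchmark['weight'] / float(benchmark['max-weight'] - benchmark['min-weight'])
    return benchmark['score'] / (benchmark['max-score'] - benchmark['min-score'])

def benchmark_value(benchmark):
    """A benchmark's value is the combined value of its sub-benchmarks if it
    has any (arbitrary depth), otherwise its own normalized score."""
    subs = benchmark.get('severity-benchmarks')
    if subs:
        return combine(subs)
    return normalize(benchmark)

def combine(benchmarks):
    """Multiply by every 'direct' benchmark's value and divide by every
    'inverse' benchmark's value."""
    result = 1.0
    for benchmark in benchmarks:
        value = benchmark_value(benchmark)
        if benchmark['relationship'] == 'inverse':
            result /= value  # a zero-valued inverse benchmark would be a modeling error
        else:
            result *= value
    return result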
