Add script for benchmarking simulators with different parameters #621
Conversation
Force-pushed from 231f253 to 6af60fc.
Codecov Report

@@            Coverage Diff             @@
##             main     #621      +/-   ##
==========================================
+ Coverage   80.20%   80.27%   +0.06%
==========================================
  Files         155      154       -1
  Lines       11132    11166      +34
==========================================
+ Hits         8928     8963      +35
+ Misses       2204     2203       -1
Could we include in the benchmark the performance of the main AutoEmulate pipeline (i.e. test model performance given different numbers N of simulated data points, but otherwise keeping the default AutoEmulate settings)?
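The suggested sweep over dataset sizes could look roughly like the sketch below. This is a minimal, self-contained stand-in: `simulate` is a toy simulator and the polynomial fit is a placeholder for the real AutoEmulate run with default settings, whose actual API may differ.

```python
import numpy as np

def simulate(x):
    # Toy stand-in for an expensive simulator (assumption: scalar input).
    return np.sin(3 * x) + 0.1 * x**2

def benchmark(n_values, seed=0):
    """For each dataset size N, fit an emulator and report test RMSE.

    The polynomial fit is a placeholder for the default AutoEmulate
    pipeline; only the N-vs-accuracy loop structure is the point here.
    """
    rng = np.random.default_rng(seed)
    x_test = np.linspace(0.0, 1.0, 200)
    y_test = simulate(x_test)
    results = {}
    for n in n_values:
        # Draw N training inputs and run the simulator on them.
        x_train = rng.uniform(0.0, 1.0, n)
        y_train = simulate(x_train)
        # Placeholder emulator: low-degree polynomial regression.
        coeffs = np.polyfit(x_train, y_train, deg=min(5, n - 1))
        pred = np.polyval(coeffs, x_test)
        results[n] = float(np.sqrt(np.mean((pred - y_test) ** 2)))
    return results

if __name__ == "__main__":
    for n, rmse in benchmark([10, 50, 200]).items():
        print(f"N={n:4d}  RMSE={rmse:.4f}")
```

Running the loop for a grid of N values would produce the N-vs-accuracy curve the comment asks for.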
Force-pushed from 9eb54fd to 9250d61.
Closes #454.
This PR adds a script for benchmarking simulators with different parameters.
It might work well to run a version of this via GitHub Actions at regular intervals.
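A scheduled run could be wired up with a workflow along these lines; the script path, Python version, and cron schedule below are all hypothetical placeholders, not part of this PR.

```yaml
name: benchmark
on:
  schedule:
    - cron: "0 3 * * 1"   # weekly, Monday 03:00 UTC (placeholder schedule)
  workflow_dispatch: {}    # allow manual runs too
jobs:
  benchmark:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install .
      - run: python scripts/benchmark_simulators.py   # hypothetical path
```

Publishing the resulting timings as a workflow artifact would make regressions easy to spot between runs.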