CI failure #649
JingfeiPeng added a commit that referenced this issue on Mar 7, 2021:
* Do not use the eager update strategy
* Upgrade numpy (inspect only)

Co-authored-by: Jenish Patel <[email protected]>
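The "inspect only" numpy step presumably checks the installed version rather than unconditionally reinstalling it. A minimal sketch of such a check, assuming it is done in Python; the `MINIMUM_NUMPY` floor is a hypothetical value, not taken from the SMARTS CI configuration:

```python
# Hedged sketch: inspect the numpy version installed in CI and fail fast
# if it is too old. MINIMUM_NUMPY is an illustrative threshold only.
import numpy
from packaging.version import Version  # from the 'packaging' package

MINIMUM_NUMPY = Version("1.19.0")  # hypothetical floor for illustration

installed = Version(numpy.__version__)
if installed < MINIMUM_NUMPY:
    raise SystemExit(
        f"numpy {installed} is older than {MINIMUM_NUMPY}; upgrade it first."
    )
print(f"numpy {installed} satisfies the requirement.")
```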
JingfeiPeng added a commit that referenced this issue on Mar 7, 2021.
JenishPatel99 added a commit that referenced this issue on Apr 12, 2021:
* Update evaluate and train to eval per episode
* Fix tests and change normalization
* Fix formatting
* Record episode for train/eval at the same time
* Train performance averages last eval_rate episodes
* Add gap_mode and calculate generalization gap
* Enable explore during eval for better comparison
* Fix formatting
* Adjust CI (#649) (#650): removes the eager update strategy in CI and automatically upgrades the numpy version in CI
* Fix averaging in record_episode
* Only average scalars in record_episode
* Check for scalar when calculating gap
* Fix formatting
* Fix bugs introduced by merge
* Fix calculate_gap
* Merge 0.4.15 version changes to master
* Fix averaging over episodes
* Fix formatting
* Check eval_episodes is 0 before evaluation
* Make evaluations run in parallel to training (see the first sketch after this list)
* Evaluate on train and test without exploration
* Make train/test occur in one ray.get
* Empty commit: create ultra-develop
* Add on_shoulder event (#714) (#717)
* Add on_shoulder to events in observations
* Update changelogs
* Display grade intervals (episodes)
* Update open agent example
* Update changelog
* Format
* Remove episode_count
* Remove episode_count from tests
* ShowBase refactor (#706)
* First pass at ShowBase refactor into Renderer; also added Provider base class (WIP)
* Updated the pybullet examples; added comments
* Added changelog entry
* Find collidee by vehicle.id, not node in scene graph; miscellaneous minor cleanup
* Fix wrong issue number in CHANGELOG
* Made ShowBaseInstance class private
* Fixups from review: actually use Provider
* Added some comments to explain why overlapping SMARTS is still not a good idea
* Fixups from review (better encapsulation)
* Fixed oops
* Don't use ShowBase clock for main simulation clock
* Hide other sim_root nodes when rendering
* Added simple multiple-renderers test
* WIP: scenario density information available during training
* Isolate density extraction from non-gb runs
* WIP: evaluate updated to record test densities
* Make done_creteria independent from events (#728)
* Update changelog
* Map offset fixes (#727)
* Use possibly-shifted road network file when starting Sumo
* Use a unique name for the shifted net file, just in case
* Don't use a unique shifted_net filename every time after all, to avoid accumulation of cruft
* Updated CHANGELOG
* Fixed changelog location
* Add CHANGELOG line for the previous map offset fix saying that scenario maps need to be rebuilt
* Describe map creation (#609 #689) (#701)
* Added description for map creation
* Mentioned paper in docs
* Moved the paper mention to the front page
* Update changelog
* Added description of lane connection
* Fix minor typos
* Add sub-headings
  Co-authored-by: Stephen Hockema <[email protected]>
* Make the envision server killable by a terminal signal
* Remove averaging over training episodes
* Restructure evaluation_check with eval_mode = False
* Fix formatting
* Fix headless argument in ULTRA scripts
* Refactored train, evaluate, and coordinator; scenario data available from evaluate
* Updated config file
* Try split CI test method
* Updated gb tests and split evaluate tests into single and multi
* Clean up
* Formatted
* Fix tests
* Add rollout_fragment_length and max_episode_steps to rllib (see the second sketch after this list)
* Add new flags to documentation
* Fix CI
* Fix flag in example
* Renderer optional (#729)
* First pass adding renderer_optional param to SMARTS
* Updated CHANGELOG
* Recategorized the renderer changes in CHANGELOG
* Fixed location of changelog entry
* Be more careful with CHANGELOG changes going forward
* Added new RendererException (per review)
* Added shifted_map-AUTOGEN.net.xml to .gitignore (due to the already-merged "maps_offset_fixes" PR)
* Only create Renderer on demand (if a camera-based sensor is added)
* Ensure evaluation is not run in test_train.py
* Check if log_dir doesn't exist in test_train.py
* Fix typo in argument and set eval_episodes to 0
* Check if not exists for paths
* Fix formatting
* Update evaluate and train to evaluate per episode (#639), which repeats the per-episode evaluation items listed above and adds:
* Check for valid eval_episodes inside evaluation_check
* Removed averaging feature of record_episode
  Co-authored-by: Jenish Patel <[email protected]>
* Merge with ultra-rllib-speedup-tests
* Rename evaluation_task_id for training
* Fix CI
* Split evaluate tests into heavy and light
* Update smarts/core/sumo_road_network.py
* Simplify argument to ray.wait(...)
* Remove print statements
* Fix defaults
* Fix eval-rate in docs
* Scenario density analysis now available in non-gb mode
* Static evaluation models for test
* Refactored train/evaluate/episodes to store scenarios inside of the episode.info object
* Fixed gb test error

Co-authored-by: AlexLewandowski <[email protected]>
Co-authored-by: Jeffer Jingfei Peng <[email protected]>
Co-authored-by: Tucker <[email protected]>
Co-authored-by: christianjans <[email protected]>
Co-authored-by: sah-huawei <[email protected]>
Co-authored-by: Stephen Hockema <[email protected]>
Co-authored-by: Valax Kong <[email protected]>
Co-authored-by: Tucker Alban <[email protected]>
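Several entries above ("Make evaluations run in parallel to training", "Make train/test occur in one ray.get", "Simplify argument to ray.wait(...)") point at one pattern: evaluation runs as a background Ray task that the training loop polls rather than blocks on. A minimal sketch of that pattern, with a placeholder evaluate task standing in for ULTRA's actual evaluator (the function body, episode counts, and result dict are illustrative assumptions):

```python
# Hedged sketch: evaluation overlapping with training via Ray.
# The evaluate() body below is a stand-in, not ULTRA's real evaluator.
import time

import ray

ray.init(ignore_reinit_error=True)

@ray.remote
def evaluate(episode_index):
    # Placeholder for an evaluation rollout; the sleep simulates work.
    time.sleep(0.1)
    return {"episode": episode_index, "score": 0.0}

pending = []  # object refs for in-flight evaluation tasks
for episode in range(20):
    # ... one training episode would run here ...
    if episode % 5 == 0:
        # Launch an evaluation without blocking the training loop.
        pending.append(evaluate.remote(episode))
    if pending:
        # timeout=0 makes ray.wait a non-blocking poll for finished tasks.
        done, pending = ray.wait(pending, num_returns=len(pending), timeout=0)
        for result in ray.get(done):  # one ray.get for all finished refs
            print("eval:", result)

# Drain evaluations still in flight once training ends.
for result in ray.get(pending):
    print("eval:", result)
```

The non-blocking poll keeps training from ever stalling on evaluation; results are harvested whenever they happen to be ready.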
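The log also mentions adding rollout_fragment_length and max_episode_steps to the RLlib path. As a rough illustration of where such knobs could sit in a 2021-era RLlib (Ray 1.x) trainer config: rollout_fragment_length is a standard RLlib config key, while the env name, the values, and placing max_episode_steps inside env_config are assumptions:

```python
# Hedged sketch: a Ray 1.x-era RLlib trainer config showing where
# rollout_fragment_length and a per-episode step cap could be threaded in.
# The env id, values, and env_config layout are illustrative assumptions.
from ray.rllib.agents.ppo import PPOTrainer

config = {
    "env": "ultra_env-v0",            # hypothetical registered env id
    "rollout_fragment_length": 200,   # steps gathered per worker rollout
    "env_config": {
        "max_episode_steps": 1000,    # hypothetical cap forwarded to the env
    },
}
# trainer = PPOTrainer(config=config)  # uncomment to build the trainer
```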
BUG REPORT
High Level Description
CI is suddenly failing for no apparent reason.
SMARTS version
develop branch
Previous associated issues
n/a
Steps to reproduce the bug
Make a new branch off of develop and push it (this triggers the CI workflow).
Resulting and expected behaviour
CI was expected to pass, since the develop base branch is passing, but it fails instead.
Error logs and screenshots
https://github.com/huawei-noah/SMARTS/runs/2040512338
System information
GitHub Actions
Impact [If known]
Spurious CI failures that do not reflect actual regressions in the code under test.