How to control (or at least speed up) time in the simulator? #17
Comments
Second that question. Also, how will time be handled in the competition? If it is "real-time", how do we deal with the fact that our controller, which has potentially been trained on different hardware, suddenly has to work on a new timescale? (A problem that would be unlikely to occur on a real drone, as those usually use real-time operating systems with predictable and constant time stepping.) |
You can access your
Note that the default clock speed (1) will be used for both the qualifiers and the live competition. We will be releasing the hardware specs and estimated performance of the live competition machine in the near future. |
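For reference, in stock AirSim the clock speed is configured through the ClockSpeed field of settings.json; whether the competition binaries read a user-provided settings.json at all is exactly what is unclear in this thread, so treat the snippet below purely as an illustration.

```json
{
  "SettingsVersion": 1.2,
  "SimMode": "Multirotor",
  "ClockSpeed": 10.0
}
```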
Okay, cool, thanks. Having the hardware specs helps in the real-time case. It could still trip up the agent if the system needs to swap memory or the Python garbage collector kicks in. I don't know how easy or hard it is to do in Unreal Engine, but using proper stepping would be cleaner. By that I mean that between every two consecutive observations (and actions) exactly the same amount of game time passes. |
Using a stepper clock is a good idea. I'll bring it up with the AirSimNeurips team. |
This part of the official AirSim documentation is worrying on this matter: it implies that what is being simulated between two time steps is clockspeed-dependent and hardware-dependent? |
Is there any update/progress being made regarding this issue? |
I have been struggling with this issue for quite a while now and I haven't come up with a satisfactory solution yet. I think it is overly difficult to get deterministic control over time in AirSim and to speed it up without changing the actual simulation, which sadly makes the simulator very impractical for Reinforcement Learning. |
"You might see artifacts like object moving past obstacles because collision is not detected." Is there any easy way of checking when this has occurred in a given simulation? I was trying to see what clock speed my hardware can reliably handle. |
Actually, I am trying to make something that is clockspeed-independent and hardware-independent, at least in terms of simulated time (not accounting for the 'quality' of the simulation), but this is very challenging. I have implemented two ways of running the simulator for a should-be-constant amount of simulated time. The first one is the following:
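A minimal sketch of one such pattern, assuming the airsimneurips Python client, a vehicle named "drone_1", and placeholder velocities and durations (the (:/) comments mark the client-side gaps discussed next):

```python
import airsimneurips

STEP = 0.1  # simulated seconds we want to elapse per environment step

client = airsimneurips.MultirotorClient()
client.confirmConnection()

client.simPause(False)
# (:/) the simulator is already running here, before the command is received
f = client.moveByVelocityAsync(1.0, 0.0, 0.0, duration=STEP, vehicle_name="drone_1")
# (:/) wall-clock time also passes between the command completing and join() returning
f.join()
# (:/) and again here, until simPause(True) actually takes effect
client.simPause(True)
```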
However, I am afraid things still happen in the simulation where the (:/) markers are, and that this gets worse when we set a higher clockspeed, am I correct? The second one is the following:
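That is, something along these lines (again only a sketch, reusing the client from the previous snippet and placeholder values):

```python
STEP = 0.1  # desired simulated seconds per environment step

# Issue the action, then ask the simulator itself to advance by STEP of
# simulated time and pause again, instead of waiting on the client side.
f = client.moveByVelocityAsync(1.0, 0.0, 0.0, duration=STEP, vehicle_name="drone_1")
client.simContinueForTime(STEP)
```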
The problem here is that what actually happens during simContinueForTime() is very hardware-dependent, in the sense that a simulation running with e.g. clockspeed=100.0 won't actually run exactly 100 times faster than real time, am I correct? Could we have a way to make this predictable? |
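One way to check the effective speed-up empirically (a sketch; it assumes the multirotor state timestamps are in nanoseconds as in stock AirSim, and that the client exposes simIsPause so we can wait for the requested period to finish):

```python
import time

state_before = client.getMultirotorState(vehicle_name="drone_1")
wall_before = time.time()

client.simContinueForTime(5.0)   # request 5 simulated seconds
while not client.simIsPause():   # wait until the simulator has paused again
    time.sleep(0.01)

state_after = client.getMultirotorState(vehicle_name="drone_1")
wall_elapsed = time.time() - wall_before

sim_elapsed = (state_after.timestamp - state_before.timestamp) / 1e9  # ns -> s
print(f"simulated {sim_elapsed:.2f}s in {wall_elapsed:.2f}s of wall time, "
      f"effective speed-up {sim_elapsed / wall_elapsed:.1f}x")
```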
Hey everyone |
Really? I had the feeling that using f.join() and then simPause(True) had this problem of things continuing to be simulated between the two calls (e.g. when we set a high clockspeed), which didn't seem to happen with simContinueForTime()? (Also, what I actually do is use a duration for actions that is larger than the duration passed to simContinueForTime(), so I don't fall into the regime where the drone has finished its action and falls back to the default behaviour for a small amount of time.) |
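Concretely, the pattern described above amounts to something like this (a sketch with illustrative numbers):

```python
STEP = 0.05     # simulated seconds advanced per environment step
MARGIN = 10.0   # command each action for much longer than one step

vx, vy, vz = 1.0, 0.0, 0.0  # placeholder action from the controller

# Request the velocity over a window far longer than one step, so the drone
# never finishes the command and falls back to its default behaviour while
# the simulator is still advancing.
client.moveByVelocityAsync(vx, vy, vz, duration=STEP * MARGIN, vehicle_name="drone_1")
client.simContinueForTime(STEP)   # but only advance one step of simulated time
```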
Well, there are annoying problems with the new Linux binaries. When using simPause(True), everything gets paused so thoroughly that it becomes impossible to use e.g. F10 to get the mouse back when it is captured by the window, short of opening a terminal to kill the simulator; and when using simContinueForTime() (which I do for the aforementioned reasons), the simulator seems to run but the visualization window stays frozen. |
Looking into simContinueForTime |
Hello everyone,
So, I am experimenting with Reinforcement Learning strategies to develop a controller for the competition. Since this is very sample-intensive, I need to speed AirSim up as much as possible to generate as many samples as possible (for now, I understand it only runs in real time?).
Is it possible to basically control time in airsimneurips, please?
This is an obvious requirement for making the engine more useful than the real world for training Machine Learning algorithms in general, and it seems people have tried this in microsoft/AirSim#901 by modifying the settings.json file, but I don't think we have access to this file?
Regards,
Yann.