I installed the predictor today and I think it is working, except that it gets no GFS data. The progress window says "Server says: downloaded 0% of GFS files" about 20 times, followed by the message "There was an error in running the prediction: Unknown error exit".
The directory "gfs" exists and is writeable, and I corrected the path in predict.py. The temporary folder /tmp/pydap-cache was created by the program, but it is still empty (the same goes for the gfs directory).
py.log contains:
INFO: Looking for latest dataset which covers .....
INFO: Picked IP: 140.90.33.61
ERROR: Could not locate a dataset for the requested time
My clock runs on local time (Europe/Berlin). Fair enough, I can work around the fact that a launch time falling between UTC and MESZ (CEST) throws a "...is in the past" error. But even when I enter a time 5 hours in the future (or a whole day), the error in py.log stays the same...
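For what it's worth, the UTC/MESZ confusion can be sidestepped by converting the local launch time to UTC before entering it. A minimal sketch (not taken from the predictor's code, just standard-library Python):

```python
# Convert a local Europe/Berlin launch time to UTC, so that comparisons
# against the model's UTC clock compare like with like.
from datetime import datetime
from zoneinfo import ZoneInfo  # Python 3.9+

local = datetime(2024, 6, 1, 14, 30, tzinfo=ZoneInfo("Europe/Berlin"))
utc = local.astimezone(ZoneInfo("UTC"))
print(utc.strftime("%Y-%m-%d %H:%M UTC"))  # 2024-06-01 12:30 UTC (CEST is UTC+2)
```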
Is there a way to get more verbose error messages?
Is it possible to add variables to sites.json for burst altitude and ascent/descent rate for each location?
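Something like the following is what I have in mind; the field names here (`burst_altitude`, `ascent_rate`, `descent_rate`) are my own invention, not an existing schema:

```python
# Hypothetical sites.json entry with per-site launch defaults,
# loaded the way the predictor might read it.
import json

sites = json.loads("""
{
  "Churchill": {
    "latitude": 52.2135,
    "longitude": 0.0964,
    "burst_altitude": 30000,
    "ascent_rate": 5.0,
    "descent_rate": 5.0
  }
}
""")
site = sites["Churchill"]
print(site["burst_altitude"], site["ascent_rate"])  # 30000 5.0
```

The web form could then pre-fill these values whenever a launch site is selected.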
Andreas
Many thanks for your fast reply. That is indeed the problem, but it persists today too. Maybe the NOAA servers are overloaded, for example.
I tried another balloon forecast site, http://weather.uwyo.edu/polar/balloon_traj.html, and it works; habhub has trouble fetching GFS data. And this morning I noticed that the new 00z model was available there 30 minutes before it arrived at habhub... Are there mirrors or other GFS servers that pick up new models faster, and/or have more bandwidth or better stability? I see the server used can be changed in predict.py...