BUG: --- Logging error --- #509
Hi @fMizki -- I'm sorry you're running into this issue. I can see the crash occurred after the script had been running for a while; I'm sure that must be frustrating. I also know that the error messages that we get from
There are a couple of things I can think of that might explain these errors, but I'm not sure how to narrow them down. Can you please do the following?
Let's try that first and then we can take it from there.
Replying with some thoughts. Possible explanations:
Thank you for your reply.
Well, when I failed in
When I ran just 'vak predict' as
I'm sure all the paths I specified for vak in the .toml file are correct.
I agree. Well, actually, even though I'm on the more powerful machine, the time for prep has not been reduced. This makes me think that there might be something to do with
Sorry, to be clear, what I mean is that there's an error caused by how the code in
But now that I think about it, that can't be the explanation, or you would get similar errors for other directories.
Yes, I think there are multiple things interacting here, and that's what's making it confusing. What's particularly surprising is that the code continues execution after the logging-related errors. This is the new environment you made where you installed
I would like to try and create your environment on my Windows machine and see if I can reproduce these errors.
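To make it easier to recreate the environment on another machine, a small diagnostic like the sketch below could report the installed versions of the packages that show up in these tracebacks (the package list is my assumption about what is relevant here, not an official vak command):

import importlib.metadata as md

# print the installed version of each package that appears in the tracebacks
for pkg in ("vak", "dask", "numpy", "torch", "tqdm"):
    try:
        print(pkg, md.version(pkg))
    except md.PackageNotFoundError:
        print(pkg, "not installed")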
I'm going to reply with the full tracebacks you attached above so it's more searchable.

(vak-env-cuda113) Z:\cyn191>(vak prep manual/211225-220204/cyn191_predict11.toml && vak predict manual/211225-220204/cyn191_predict11.toml )
determined that purpose of config file is: predict
will add 'csv_path' option to 'PREDICT' section
purpose for dataset: predict
will not split dataset
creating dataset from spectrogram files in: manual\211225-220204\predict_data11
validating set of spectrogram files
[########################################] | 100% Completed | 1hr 36min 34.7s
--- Logging error ---
Traceback (most recent call last):
File "C:\Users\finch\anaconda3\envs\vak-env-cuda113\lib\logging\__init__.py", line 1104, in emit
self.flush()
File "C:\Users\finch\anaconda3\envs\vak-env-cuda113\lib\logging\__init__.py", line 1084, in flush
self.stream.flush()
FileNotFoundError: [Errno 2] No such file or directory
Call stack:
File "C:\Users\finch\anaconda3\envs\vak-env-cuda113\Scripts\vak-script.py", line 9, in <module>
sys.exit(main())
File "C:\Users\finch\anaconda3\envs\vak-env-cuda113\lib\site-packages\vak\__main__.py", line 45, in main
cli.cli(command=args.command, config_file=args.configfile)
File "C:\Users\finch\anaconda3\envs\vak-env-cuda113\lib\site-packages\vak\cli\cli.py", line 30, in cli
COMMAND_FUNCTION_MAP[command](toml_path=config_file)
File "C:\Users\finch\anaconda3\envs\vak-env-cuda113\lib\site-packages\vak\cli\prep.py", line 132, in prep
vak_df, csv_path = core.prep(
File "C:\Users\finch\anaconda3\envs\vak-env-cuda113\lib\site-packages\vak\core\prep.py", line 205, in prep
vak_df = dataframe.from_files(
File "C:\Users\finch\anaconda3\envs\vak-env-cuda113\lib\site-packages\vak\io\dataframe.py", line 171, in from_files
vak_df = spect.to_dataframe(**to_dataframe_kwargs, logger=logger)
File "C:\Users\finch\anaconda3\envs\vak-env-cuda113\lib\site-packages\vak\io\spect.py", line 262, in to_dataframe
log_or_print(
File "C:\Users\finch\anaconda3\envs\vak-env-cuda113\lib\site-packages\vak\logging.py", line 66, in log_or_print
log_method(msg)
Message: 'creating pandas.DataFrame representing dataset from spectrogram files'
Arguments: ()
creating pandas.DataFrame representing dataset from spectrogram files
[ ] | 2% Completed | 53min 10.2s
Traceback (most recent call last):
File "C:\Users\finch\anaconda3\envs\vak-env-cuda113\Scripts\vak-script.py", line 9, in <module>
sys.exit(main())
File "C:\Users\finch\anaconda3\envs\vak-env-cuda113\lib\site-packages\vak\__main__.py", line 45, in main
cli.cli(command=args.command, config_file=args.configfile)
File "C:\Users\finch\anaconda3\envs\vak-env-cuda113\lib\site-packages\vak\cli\cli.py", line 30, in cli
COMMAND_FUNCTION_MAP[command](toml_path=config_file)
File "C:\Users\finch\anaconda3\envs\vak-env-cuda113\lib\site-packages\vak\cli\prep.py", line 132, in prep
vak_df, csv_path = core.prep(
File "C:\Users\finch\anaconda3\envs\vak-env-cuda113\lib\site-packages\vak\core\prep.py", line 205, in prep
vak_df = dataframe.from_files(
File "C:\Users\finch\anaconda3\envs\vak-env-cuda113\lib\site-packages\vak\io\dataframe.py", line 171, in from_files
vak_df = spect.to_dataframe(**to_dataframe_kwargs, logger=logger)
File "C:\Users\finch\anaconda3\envs\vak-env-cuda113\lib\site-packages\vak\io\spect.py", line 268, in to_dataframe
records = list(spect_path_annot_tuples.map(_to_record))
File "C:\Users\finch\anaconda3\envs\vak-env-cuda113\lib\site-packages\dask\bag\core.py", line 1471, in __iter__
return iter(self.compute())
File "C:\Users\finch\anaconda3\envs\vak-env-cuda113\lib\site-packages\dask\base.py", line 292, in compute
(result,) = compute(self, traverse=False, **kwargs)
File "C:\Users\finch\anaconda3\envs\vak-env-cuda113\lib\site-packages\dask\base.py", line 575, in compute
results = schedule(dsk, keys, **kwargs)
File "C:\Users\finch\anaconda3\envs\vak-env-cuda113\lib\site-packages\dask\multiprocessing.py", line 220, in get
result = get_async(
File "C:\Users\finch\anaconda3\envs\vak-env-cuda113\lib\site-packages\dask\local.py", line 508, in get_async
raise_exception(exc, tb)
File "C:\Users\finch\anaconda3\envs\vak-env-cuda113\lib\site-packages\dask\multiprocessing.py", line 110, in reraise
raise exc
dask.multiprocessing.OSError: [Errno 22] Invalid argument
Traceback
---------
File "C:\Users\finch\anaconda3\envs\vak-env-cuda113\lib\site-packages\dask\local.py", line 221, in execute_task
result = _execute_task(task, data)
File "C:\Users\finch\anaconda3\envs\vak-env-cuda113\lib\site-packages\dask\core.py", line 119, in _execute_task
return func(*(_execute_task(a, cache) for a in args))
File "C:\Users\finch\anaconda3\envs\vak-env-cuda113\lib\site-packages\dask\bag\core.py", line 1845, in reify
seq = list(seq)
File "C:\Users\finch\anaconda3\envs\vak-env-cuda113\lib\site-packages\dask\bag\core.py", line 2033, in __next__
return self.f(*vals)
File "C:\Users\finch\anaconda3\envs\vak-env-cuda113\lib\site-packages\vak\io\spect.py", line 226, in _to_record
spect_dur = spect_dict[spect_key].shape[-1] * timebin_dur
File "C:\Users\finch\anaconda3\envs\vak-env-cuda113\lib\site-packages\numpy\lib\npyio.py", line 243, in __getitem__
return format.read_array(bytes,
File "C:\Users\finch\anaconda3\envs\vak-env-cuda113\lib\site-packages\numpy\lib\format.py", line 778, in read_array
data = _read_bytes(fp, read_size, "array data")
File "C:\Users\finch\anaconda3\envs\vak-env-cuda113\lib\site-packages\numpy\lib\format.py", line 907, in _read_bytes
r = fp.read(size - len(data))
File "C:\Users\finch\anaconda3\envs\vak-env-cuda113\lib\zipfile.py", line 925, in read
data = self._read1(n)
File "C:\Users\finch\anaconda3\envs\vak-env-cuda113\lib\zipfile.py", line 995, in _read1
data = self._read2(n)
File "C:\Users\finch\anaconda3\envs\vak-env-cuda113\lib\zipfile.py", line 1025, in _read2
data = self._fileobj.read(n)
File "C:\Users\finch\anaconda3\envs\vak-env-cuda113\lib\zipfile.py", line 745, in read
data = self._file.read(n)

This is the first error you report ("1:")
We can see in 1: that the error in the middle appears to be one of the mysterious logging tracebacks, which does not force the program to halt execution:

File "C:\Users\finch\anaconda3\envs\vak-env-cuda113\lib\site-packages\vak\logging.py", line 66, in log_or_print
log_method(msg)
Message: 'creating pandas.DataFrame representing dataset from spectrogram files'
Arguments: ()
[ ] | 2% Completed | 53min 10.2s

We can tell because it continued running afterwards (for an hour before it crashed 🙁), but the next error is definitely fatal.
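For anyone who finds this later: the reason these "--- Logging error ---" blocks are non-fatal is that Python's logging module catches exceptions raised while a handler emits or flushes a record, prints the report to stderr, and lets the program keep going. A minimal sketch of that behavior in plain Python, with a hypothetical handler standing in for a log-file stream that the network drive has invalidated:

import logging

class BrokenStreamHandler(logging.StreamHandler):
    def flush(self):
        # simulate the stream failure seen in the tracebacks above
        raise OSError(22, "Invalid argument")

logger = logging.getLogger("demo")
logger.addHandler(BrokenStreamHandler())
logger.warning("this record triggers '--- Logging error ---' on stderr")
print("...but execution continues past the logging error")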
This is the second error you report ("2:")

(vak-env-cuda113) Z:\cyn191>(vak prep manual/211225-220204/cyn191_predict11.toml && vak predict manual/211225-220204/cyn191_predict11.toml )
determined that purpose of config file is: predict
will add 'csv_path' option to 'PREDICT' section
purpose for dataset: predict
will not split dataset
creating dataset from spectrogram files in: manual\211225-220204\predict_data11
--- Logging error ---
Traceback (most recent call last):
File "C:\Users\finch\anaconda3\envs\vak-env-cuda113\lib\logging\__init__.py", line 1104, in emit
self.flush()
File "C:\Users\finch\anaconda3\envs\vak-env-cuda113\lib\logging\__init__.py", line 1084, in flush
self.stream.flush()
FileNotFoundError: [Errno 2] No such file or directory
Call stack:
File "C:\Users\finch\anaconda3\envs\vak-env-cuda113\Scripts\vak-script.py", line 9, in <module>
sys.exit(main())
File "C:\Users\finch\anaconda3\envs\vak-env-cuda113\lib\site-packages\vak\__main__.py", line 45, in main
cli.cli(command=args.command, config_file=args.configfile)
File "C:\Users\finch\anaconda3\envs\vak-env-cuda113\lib\site-packages\vak\cli\cli.py", line 30, in cli
COMMAND_FUNCTION_MAP[command](toml_path=config_file)
File "C:\Users\finch\anaconda3\envs\vak-env-cuda113\lib\site-packages\vak\cli\prep.py", line 132, in prep
vak_df, csv_path = core.prep(
File "C:\Users\finch\anaconda3\envs\vak-env-cuda113\lib\site-packages\vak\core\prep.py", line 205, in prep
vak_df = dataframe.from_files(
File "C:\Users\finch\anaconda3\envs\vak-env-cuda113\lib\site-packages\vak\io\dataframe.py", line 171, in from_files
vak_df = spect.to_dataframe(**to_dataframe_kwargs, logger=logger)
File "C:\Users\finch\anaconda3\envs\vak-env-cuda113\lib\site-packages\vak\io\spect.py", line 199, in to_dataframe
files.spect.is_valid_set_of_spect_files(
File "C:\Users\finch\anaconda3\envs\vak-env-cuda113\lib\site-packages\vak\files\spect.py", line 193, in is_valid_set_of_spect_files
log_or_print("validating set of spectrogram files", logger=logger, level="info")
File "C:\Users\finch\anaconda3\envs\vak-env-cuda113\lib\site-packages\vak\logging.py", line 66, in log_or_print
log_method(msg)
Message: 'validating set of spectrogram files'
Arguments: ()
validating set of spectrogram files
[########################################] | 100% Completed | 1hr 27min 34.9s
--- Logging error ---
Traceback (most recent call last):
File "C:\Users\finch\anaconda3\envs\vak-env-cuda113\lib\logging\__init__.py", line 1104, in emit
self.flush()
File "C:\Users\finch\anaconda3\envs\vak-env-cuda113\lib\logging\__init__.py", line 1084, in flush
self.stream.flush()
OSError: [Errno 22] Invalid argument
Call stack:
File "C:\Users\finch\anaconda3\envs\vak-env-cuda113\Scripts\vak-script.py", line 9, in <module>
sys.exit(main())
File "C:\Users\finch\anaconda3\envs\vak-env-cuda113\lib\site-packages\vak\__main__.py", line 45, in main
cli.cli(command=args.command, config_file=args.configfile)
File "C:\Users\finch\anaconda3\envs\vak-env-cuda113\lib\site-packages\vak\cli\cli.py", line 30, in cli
COMMAND_FUNCTION_MAP[command](toml_path=config_file)
File "C:\Users\finch\anaconda3\envs\vak-env-cuda113\lib\site-packages\vak\cli\prep.py", line 132, in prep
vak_df, csv_path = core.prep(
File "C:\Users\finch\anaconda3\envs\vak-env-cuda113\lib\site-packages\vak\core\prep.py", line 205, in prep
vak_df = dataframe.from_files(
File "C:\Users\finch\anaconda3\envs\vak-env-cuda113\lib\site-packages\vak\io\dataframe.py", line 171, in from_files
vak_df = spect.to_dataframe(**to_dataframe_kwargs, logger=logger)
File "C:\Users\finch\anaconda3\envs\vak-env-cuda113\lib\site-packages\vak\io\spect.py", line 262, in to_dataframe
log_or_print(
File "C:\Users\finch\anaconda3\envs\vak-env-cuda113\lib\site-packages\vak\logging.py", line 66, in log_or_print
log_method(msg)
Message: 'creating pandas.DataFrame representing dataset from spectrogram files'
Arguments: ()
creating pandas.DataFrame representing dataset from spectrogram files
[########################################] | 100% Completed | 1hr 0min 52.2s
--- Logging error ---
Traceback (most recent call last):
File "C:\Users\finch\anaconda3\envs\vak-env-cuda113\lib\logging\__init__.py", line 1104, in emit
self.flush()
File "C:\Users\finch\anaconda3\envs\vak-env-cuda113\lib\logging\__init__.py", line 1084, in flush
self.stream.flush()
OSError: [Errno 22] Invalid argument
Call stack:
File "C:\Users\finch\anaconda3\envs\vak-env-cuda113\Scripts\vak-script.py", line 9, in <module>
sys.exit(main())
File "C:\Users\finch\anaconda3\envs\vak-env-cuda113\lib\site-packages\vak\__main__.py", line 45, in main
cli.cli(command=args.command, config_file=args.configfile)
File "C:\Users\finch\anaconda3\envs\vak-env-cuda113\lib\site-packages\vak\cli\cli.py", line 30, in cli
COMMAND_FUNCTION_MAP[command](toml_path=config_file)
File "C:\Users\finch\anaconda3\envs\vak-env-cuda113\lib\site-packages\vak\cli\prep.py", line 132, in prep
vak_df, csv_path = core.prep(
File "C:\Users\finch\anaconda3\envs\vak-env-cuda113\lib\site-packages\vak\core\prep.py", line 241, in prep
log_or_print(
File "C:\Users\finch\anaconda3\envs\vak-env-cuda113\lib\site-packages\vak\logging.py", line 66, in log_or_print
log_method(msg)
Message: 'saving dataset as a .csv file: manual\\211225-220204\\output\\prep\\predict_data11_prep_220518_071706.csv'
Arguments: ()
saving dataset as a .csv file: manual\211225-220204\output\prep\predict_data11_prep_220518_071706.csv
Logging results to manual\211225-220204\output\prep
loading SpectScaler from path: manual\train\results\results_220515_113717\StandardizeSpect
loading labelmap from path: manual\train\results\results_220515_113717\labelmap.json
loading dataset to predict from csv path: manual\211225-220204\output\prep\predict_data11_prep_220518_071706.csv
will save annotations in .csv file: manual\211225-220204\output\cyn191_predict11.annot.csv
dataset has timebins with duration: 0.00145
shape of input to networks used for predictions: torch.Size([1, 1025, 176])
instantiating models from model-config map:/n{'TweetyNet': {'optimizer': {'lr': 0.001}, 'network': {}, 'loss': {}, 'metrics': {}}}
loading checkpoint for TweetyNet from path: manual\train\results\results_220515_113717\TweetyNet\checkpoints\max-val-acc-checkpoint.pt
Loading checkpoint from:
manual\train\results\results_220515_113717\TweetyNet\checkpoints\max-val-acc-checkpoint.pt
running predict method of TweetyNet
batch 4999 / 5000: 100%|█████████████████████████████████████████████████████████| 5000/5000 [1:17:54<00:00, 1.07it/s]
0%| | 0/5000 [00:00<?, ?it/s]converting predictions to annotations
80%|████████████████████████████████████████████████████████████▌ | 3983/5000 [1:45:54<27:02, 1.60s/it]
Traceback (most recent call last):
File "C:\Users\finch\anaconda3\envs\vak-env-cuda113\Scripts\vak-script.py", line 9, in <module>
sys.exit(main())
File "C:\Users\finch\anaconda3\envs\vak-env-cuda113\lib\site-packages\vak\__main__.py", line 45, in main
cli.cli(command=args.command, config_file=args.configfile)
File "C:\Users\finch\anaconda3\envs\vak-env-cuda113\lib\site-packages\vak\cli\cli.py", line 30, in cli
COMMAND_FUNCTION_MAP[command](toml_path=config_file)
File "C:\Users\finch\anaconda3\envs\vak-env-cuda113\lib\site-packages\vak\cli\predict.py", line 42, in predict
core.predict(
File "C:\Users\finch\anaconda3\envs\vak-env-cuda113\lib\site-packages\vak\core\predict.py", line 236, in predict
for ind, batch in enumerate(progress_bar):
File "C:\Users\finch\anaconda3\envs\vak-env-cuda113\lib\site-packages\tqdm\std.py", line 1195, in __iter__
for obj in iterable:
File "C:\Users\finch\anaconda3\envs\vak-env-cuda113\lib\site-packages\torch\utils\data\dataloader.py", line 530, in __next__
data = self._next_data()
File "C:\Users\finch\anaconda3\envs\vak-env-cuda113\lib\site-packages\torch\utils\data\dataloader.py", line 1224, in _next_data
return self._process_data(data)
File "C:\Users\finch\anaconda3\envs\vak-env-cuda113\lib\site-packages\torch\utils\data\dataloader.py", line 1250, in _process_data
data.reraise()
File "C:\Users\finch\anaconda3\envs\vak-env-cuda113\lib\site-packages\torch\_utils.py", line 457, in reraise
raise exception
OSError: Caught OSError in DataLoader worker process 3.
Original Traceback (most recent call last):
File "C:\Users\finch\anaconda3\envs\vak-env-cuda113\lib\site-packages\torch\utils\data\_utils\worker.py", line 287, in _worker_loop
data = fetcher.fetch(index)
File "C:\Users\finch\anaconda3\envs\vak-env-cuda113\lib\site-packages\torch\utils\data\_utils\fetch.py", line 49, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "C:\Users\finch\anaconda3\envs\vak-env-cuda113\lib\site-packages\torch\utils\data\_utils\fetch.py", line 49, in <listcomp>
data = [self.dataset[idx] for idx in possibly_batched_index]
File "C:\Users\finch\anaconda3\envs\vak-env-cuda113\lib\site-packages\vak\datasets\vocal_dataset.py", line 75, in __getitem__
spect = spect_dict[self.spect_key]
File "C:\Users\finch\anaconda3\envs\vak-env-cuda113\lib\site-packages\numpy\lib\npyio.py", line 243, in __getitem__
return format.read_array(bytes,
File "C:\Users\finch\anaconda3\envs\vak-env-cuda113\lib\site-packages\numpy\lib\format.py", line 778, in read_array
data = _read_bytes(fp, read_size, "array data")
File "C:\Users\finch\anaconda3\envs\vak-env-cuda113\lib\site-packages\numpy\lib\format.py", line 907, in _read_bytes
r = fp.read(size - len(data))
File "C:\Users\finch\anaconda3\envs\vak-env-cuda113\lib\zipfile.py", line 925, in read
data = self._read1(n)
File "C:\Users\finch\anaconda3\envs\vak-env-cuda113\lib\zipfile.py", line 995, in _read1
data = self._read2(n)
File "C:\Users\finch\anaconda3\envs\vak-env-cuda113\lib\zipfile.py", line 1025, in _read2
data = self._fileobj.read(n)
File "C:\Users\finch\anaconda3\envs\vak-env-cuda113\lib\zipfile.py", line 745, in read
data = self._file.read(n)
OSError: [Errno 22] Invalid argument
Same thing for "2:" -- there's the
And then there's the final fatal error when
A difference in this case being that it happened inside a
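One hedged way to rule out the "some of the files are broken" explanation would be to fully read every array in each spectrogram .npz file up front, outside of the dask and DataLoader workers: if the same OSError shows up for specific files when reading from the network drive but not from a local copy, that points at the drive rather than the files. A minimal sketch, where the directory path is an illustrative assumption:

from pathlib import Path
import numpy as np

spect_dir = Path(r"manual/211225-220204/predict_data11")  # assumed location of the spectrogram files

bad_files = []
for npz_path in sorted(spect_dir.glob("*.npz")):
    try:
        with np.load(npz_path) as npz:
            for key in npz.files:
                _ = npz[key]  # force a full read of each array
    except OSError as err:
        bad_files.append((npz_path, err))

print(f"{len(bad_files)} file(s) failed to read")
for path, err in bad_files:
    print(path, err)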
This is starting to look similar to what @yardencsGitHub and I experienced on Windows previously:
Just trying to document this in detail to increase the likelihood other people find it when they search. @fMizki, if you could reply to this comment above
Well, as we discussed over e-mail, I had been working on a network drive, and this could be the cause, so I tried on the local drive.
As you can see, I got no error, even though I tried with the same files that had been giving me the logging error. Moreover, the time for prep is greatly reduced. I will share this on #507. All of my problems seem to be rooted in the working drive. I never expected this. I'm learning a lot.
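In case it helps anyone who finds this later, a minimal sketch of the workaround: copy the spectrogram files from the network share to a local drive before running 'vak prep', and point the data directory option in the .toml config at the local copy. The source and destination paths below are illustrative, not taken from the actual config:

import shutil
from pathlib import Path

network_dir = Path(r"Z:\cyn191\manual\211225-220204\predict_data11")  # network share (assumed)
local_dir = Path(r"C:\Users\finch\vak_local\predict_data11")          # local destination (assumed)

local_dir.parent.mkdir(parents=True, exist_ok=True)
shutil.copytree(network_dir, local_dir, dirs_exist_ok=True)  # dirs_exist_ok needs Python 3.8+
print(f"copied {sum(1 for _ in local_dir.glob('*.npz'))} .npz files to {local_dir}")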
🙌 🙌 🙌 🚀 🚀 🚀 You do not have to apologize. I will raise an issue to add something about this to the docs and make sure to give you credit for your contribution.
Thank you!
Hi,
I got an error I couldn't understand.
In the file I attached, I copied the errors.
The errors '1:' and '3:' in the file are what I got when I ran
vak prep and predict
though I didn't get them when I ran on other files whose parameters are exactly the same as the ones I used when I got the error. The only difference is the files being predicted, so it is possible that some of them are broken. And when I ran
vak prep and predict
again with exactly the same files as in '1:' (I just ran the same command without changing any files), I got the error '2:'.

logging_error.txt
I'm sorry if it is a problem with my computer or my files.