Allow csv and text file support on sleap track #1875
Conversation
Actionable comments posted: 7
Outside diff range, codebase verification and nitpick comments (2)

sleap/nn/inference.py (2)

Lines 5288-5289: Update the function docstring to reflect the new return type, which now includes lists of providers, data paths, and output paths.

```diff
- `(provider, data_path)` with the data `Provider` and path to the data that was specified in the args.
+ `(provider_list, data_path_list, output_path_list)` with the data `Provider`, path to the data that was specified in the args, and list of output paths if a CSV file was inputted.
```

Line 5302: Remove the unnecessary blank line for better readability.
Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Files selected for processing (1)
- sleap/nn/inference.py (5 hunks)
Additional context used

Ruff

sleap/nn/inference.py

- 5313-5313: Undefined name `pandas` (F821)
- 5325-5325: Undefined name `pandas` (F821)
- 5327-5327: Undefined name `pandas` (F821)
Actionable comments posted: 7
Outside diff range, codebase verification and nitpick comments (7)

tests/nn/test_inference.py (4)

Lines 1751-1756: The function signature and setup are correct, but consider adding a docstring describing the purpose and steps of the test.

```python
def test_sleap_track_csv_input(
    min_centroid_model_path: str,
    min_centered_instance_model_path: str,
    centered_pair_vid_path,
    tmpdir,
):
    """Test sleap_track with CSV input.

    Args:
        min_centroid_model_path: Path to the centroid model.
        min_centered_instance_model_path: Path to the centered instance model.
        centered_pair_vid_path: Path to the video file.
        tmpdir: Temporary directory for test files.
    """
```

Lines 1758-1769: The setup correctly creates a temporary directory and copies the video files into it. However, consider a more descriptive variable name for `num_copies`.

```diff
- num_copies = 3
+ num_video_copies = 3
```

Lines 1775-1781: The CSV file creation is correct, but consider adding a check that the file was created successfully.

```diff
+ assert csv_file_path.exists(), "CSV file was not created successfully."
```

Lines 1794-1801: The assertions correctly check for the existence of the expected output files. However, consider adding more detailed assertions to verify the content of the output files.

```diff
+ for expected_output_file in output_paths:
+     assert Path(expected_output_file).exists(), f"Output file {expected_output_file} does not exist."
+     # Add more detailed assertions to verify the content of the output files if necessary.
```

sleap/nn/inference.py (3)

Lines 5289-5290: Update the docstring to mention that the function now returns lists of providers, data paths, and output paths.

```diff
- `(provider_list, data_path_list, output_path_list)` with the data `Provider`, path to the data
- that was specified in the args, and list out output paths if a csv file was inputed.
+ `(provider_list, data_path_list, output_path_list)` with the data `Provider`, list of data paths,
+ and list of output paths if a CSV file was inputted.
```

Line 5360: Add a comment explaining the purpose of the loop; it will help future developers.

```diff
+ # Iterate over each file path to create providers.
  for file_path in raw_data_path_list:
```

Lines 5533-5534: Update the return statement to match the return type documented in the docstring.

```diff
- provider_list, data_path_list, output_path_list = _make_provider_from_cli(args)
+ return provider_list, data_path_list, output_path_list
```
Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Files selected for processing (2)
- sleap/nn/inference.py (6 hunks)
- tests/nn/test_inference.py (2 hunks)
Additional comments not posted (4)

tests/nn/test_inference.py (2)

Lines 1770-1773: The generation of output paths is correct and follows the expected pattern.

Lines 1785-1791: The sleap-track command is correctly constructed. Ensure that the command arguments are valid and that the `sleap_track` function handles the CSV input as expected.

sleap/nn/inference.py (2)

Lines 5304-5305: To maintain consistency and avoid potential issues, initialize `output_path_list` as an empty list instead of `None`.

```diff
- output_path_list = None
+ output_path_list = []
```

Lines 5356-5357: To maintain consistency, initialize `provider_list` and `data_path_list` as typed empty lists.

```diff
- provider_list = []
- data_path_list = []
+ provider_list: List[Provider] = []
+ data_path_list: List[str] = []
```

Likely invalid or redundant comment.
Codecov Report. Attention: Patch coverage is

Additional details and impacted files:

```
@@           Coverage Diff            @@
##           develop    #1875   +/-   ##
===========================================
+ Coverage    73.30%   74.36%   +1.05%
===========================================
  Files          134      135       +1
  Lines        24087    24781     +694
===========================================
+ Hits         17658    18429     +771
+ Misses        6429     6352      -77
```

View full report in Codecov by Sentry.
Actionable comments posted: 0
Outside diff range, codebase verification and nitpick comments (3)

tests/nn/test_inference.py (2)

Lines 1771-1821: Consider adding assertions to validate CSV file parsing. While the test ensures that the expected output files are created, it would be beneficial to also check that the CSV file is correctly read and parsed.

```diff
+ # Assert CSV file is correctly read and parsed
+ with open(csv_file_path, mode="r") as csv_file:
+     csv_reader = csv.reader(csv_file)
+     headers = next(csv_reader)
+     assert headers == ["data_path", "output_path"]
+     rows = list(csv_reader)
+     assert len(rows) == num_copies
```

Lines 1866-1909: Consider adding assertions to validate text file parsing. While the test ensures that the expected output files are created, it would be beneficial to also check that the text file is correctly read and parsed.

```diff
+ # Assert text file is correctly read and parsed
+ with open(txt_file_path, mode="r") as txt_file:
+     lines = txt_file.readlines()
+     assert len(lines) == num_copies
+     for i, line in enumerate(lines):
+         assert line.strip() == str(file_paths[i])
```

sleap/nn/inference.py (1)

Lines 5289-5290: Clarify the return type in the docstring. It should clearly specify that the function returns a tuple of lists: `(provider_list, data_path_list, output_path_list)`.

```diff
- `(provider_list, data_path_list, output_path_list)` with the data `Provider`, path to the data
- that was specified in the args, and list out output paths if a csv file was inputed.
+ `(provider_list, data_path_list, output_path_list)` with the data `Provider`, list of data paths,
+ and list of output paths if a CSV file was provided.
```
Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Files selected for processing (2)
- sleap/nn/inference.py (6 hunks)
- tests/nn/test_inference.py (3 hunks)
Additional comments not posted (5)
tests/nn/test_inference.py (1)

Lines 1824-1863: LGTM! The test function is well-structured and covers important edge cases for invalid CSV inputs.

sleap/nn/inference.py (4)

Lines 5304-5305: To maintain consistency and avoid potential issues, initialize `output_path_list` as an empty list instead of `None`.

```diff
- output_path_list = None
+ output_path_list = []
```

Lines 5312-5329: To ensure consistency, initialize `output_path_list` as an empty list when the 'output_path' column is absent.

```diff
- output_path_list = df["output_path"].tolist()
+ output_path_list = df["output_path"].tolist() if "output_path" in df.columns else []
```

Lines 5350-5353: Filter video files more accurately. Ensure that only supported video files are included in `raw_data_path_list`.

```diff
- raw_data_path_list = [
-     file_path for file_path in data_path_obj.iterdir() if file_path.is_file()
- ]
+ supported_extensions = {".mp4", ".avi", ".mov", ".mkv"}
+ raw_data_path_list = [
+     file_path for file_path in data_path_obj.iterdir()
+     if file_path.is_file() and file_path.suffix.lower() in supported_extensions
+ ]
```

Lines 5388-5396: Improve error handling for video reading. Include the exception in the error message for better debugging.

```diff
- except Exception:
-     print(f"Error reading file: {file_path.as_posix()}")
+ except Exception as e:
+     print(f"Error reading file: {file_path.as_posix()}. Error: {e}")
```
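For context, the CSV branch being reviewed can be sketched roughly as follows. This is a minimal, hypothetical sketch, not the actual sleap source: the PR itself uses pandas, but the stdlib `csv` module is used here to keep the sketch dependency-free, and the `data_path`/`output_path` column names follow the convention discussed in this review.

```python
import csv


def read_path_csv(csv_path):
    """Read a CSV with a 'data_path' column and optional 'output_path' column.

    Hypothetical sketch of the sleap-track CSV-handling behavior under review.
    """
    with open(csv_path, newline="") as f:
        reader = csv.DictReader(f)
        if reader.fieldnames is None or "data_path" not in reader.fieldnames:
            raise ValueError(
                f"Column 'data_path' does not exist in the CSV file: {csv_path}"
            )
        has_output = "output_path" in reader.fieldnames
        data_paths, output_paths = [], []
        for row in reader:
            data_paths.append(row["data_path"])
            if has_output:
                output_paths.append(row["output_path"])
    # Empty list (rather than None) when no output_path column, per the
    # reviewer's suggestion about initializing output_path_list.
    return data_paths, output_paths
```
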
sleap/nn/inference.py (outdated)

```python
if "data_path" in df.columns:
    raw_data_path_list = df["data_path"].tolist()
else:
    raise ValueError(
        f"Column 'data_path' does not exist in the CSV file: {data_path}"
    )
```
Design choice: we may want to be less sensitive to column naming. It's probably fair to assume that the first column of the CSV is the input path and the second column is the output path.

It'd be great to add some logic to auto-detect whether the column names are present in the first row or not, and ignore them appropriately. An easy way would be to just do a `Path(df.iloc[0, 0]).exists()` and, if not (or if it's not a path; not sure if pathlib does any checking), assume that it's a column-name row and skip it.
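That auto-detection idea could be sketched like this (a hypothetical helper, not part of the PR). Note that `Path.exists()` only checks the filesystem and does no format validation, so a header row whose first cell happened to name a real file would be misdetected as data.

```python
from pathlib import Path


def split_header(rows):
    """Drop a leading header row if its first cell is not an existing path.

    `rows` is a list of CSV rows (lists of strings): first column = input
    path, optional second column = output path, per the suggestion above.
    """
    if rows and not Path(rows[0][0]).exists():
        # First cell doesn't point at a real file: treat row 0 as a header.
        rows = rows[1:]
    data_paths = [r[0] for r in rows]
    output_paths = [r[1] for r in rows if len(r) > 1]
    return data_paths, output_paths
```
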
Actionable comments posted: 7
Outside diff range, codebase verification and nitpick comments (4)

tests/nn/test_inference.py (3)

Lines 1764-1803: Ensure temporary directory cleanup. Currently, the function does not ensure that the temporary directory is cleaned up after the test. Consider using `tmpdir`'s context manager to handle this.

```diff
- slp_path = Path(tmpdir.mkdir("mp4_directory"))
+ with tmpdir.mkdir("mp4_directory") as slp_path:
```

Lines 1817-1854: Ensure temporary directory cleanup, as above.

```diff
- csv_missing_column_path = tmpdir / "missing_column.csv"
+ with tmpdir as csv_missing_column_path:
```

Lines 1857-1890: Ensure temporary directory cleanup, as above.

```diff
- slp_path = Path(tmpdir.mkdir("mp4_directory"))
+ with tmpdir.mkdir("mp4_directory") as slp_path:
```

sleap/nn/inference.py (1)

Lines 5289-5290: Update the docstring to reflect that the function now returns a tuple with `provider_list`, `data_path_list`, and `output_path_list`.

```diff
- `(provider_list, data_path_list, output_path_list)` with the data `Provider`, path to the data
- that was specified in the args, and list out output paths if a csv file was inputed.
+ `(provider_list, data_path_list, output_path_list)` with the data `Provider`, path to the data
+ that was specified in the args, and list of output paths if a CSV file was provided.
```
Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Files selected for processing (2)
- sleap/nn/inference.py (8 hunks)
- tests/nn/test_inference.py (7 hunks)
Additional context used
Ruff
sleap/nn/inference.py
- 5632-5632: Local variable `e` is assigned to but never used. Remove the assignment to the unused variable `e` (F841)
Additional comments not posted (2)
sleap/nn/inference.py (2)
Lines 5304-5305: LGTM! Initializing `output_path_list` to `None` is appropriate.

Lines 5345-5354: LGTM! The logic for handling text files is straightforward and correct.
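For reference, the text-file branch being approved here amounts to reading one input path per line; a minimal sketch under that assumption (the function name is hypothetical, not the sleap source):

```python
def read_path_txt(txt_path):
    """Read a text file listing one data path per line, skipping blank lines."""
    with open(txt_path) as f:
        return [line.strip() for line in f if line.strip()]
```
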
Actionable comments posted: 3
Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Files selected for processing (2)
- sleap/nn/inference.py (8 hunks)
- tests/nn/test_inference.py (8 hunks)
Files skipped from review as they are similar to previous changes (1)
- sleap/nn/inference.py
Additional context used
Ruff
tests/nn/test_inference.py
- 13-13: `tensorflow_hub` imported but unused. Remove the unused import `tensorflow_hub` (F401)
Additional comments not posted (1)
tests/nn/test_inference.py (1)
Lines 1823-1861: Ensure the specific exception message is checked. Currently, the test only checks for a `ValueError` exception; also check the message to verify the correct error is raised.

```diff
- with pytest.raises(ValueError):
+ with pytest.raises(ValueError, match="Expected error message"):
```

Likely invalid or redundant comment.
```python
def test_sleap_track_csv_input(
    min_centroid_model_path: str,
    min_centered_instance_model_path: str,
    centered_pair_vid_path,
    tmpdir,
):

    # Create temporary directory with the structured video files
    slp_path = Path(tmpdir.mkdir("mp4_directory"))

    # Copy and paste the video into the temp dir multiple times
    num_copies = 3
    file_paths = []
    for i in range(num_copies):
        # Construct the destination path with a unique name
        dest_path = slp_path / f"centered_pair_vid_copy_{i}.mp4"
        shutil.copy(centered_pair_vid_path, dest_path)
        file_paths.append(dest_path)

    # Generate output paths for each data_path
    output_paths = [
        file_path.with_suffix(".TESTpredictions.slp") for file_path in file_paths
    ]

    # Create a CSV file with the file paths
    csv_file_path = slp_path / "file_paths.csv"
    with open(csv_file_path, mode="w", newline="") as csv_file:
        csv_writer = csv.writer(csv_file)
        csv_writer.writerow(["data_path", "output_path"])
        for data_path, output_path in zip(file_paths, output_paths):
            csv_writer.writerow([data_path, output_path])

    slp_path_obj = Path(slp_path)

    # Create sleap-track command
    args = (
        f"{csv_file_path} --model {min_centroid_model_path} "
        f"--tracking.tracker simple "
        f"--model {min_centered_instance_model_path} --video.index 0 --frames 1-3 --cpu"
    ).split()

    slp_path_list = [file for file in slp_path_obj.iterdir() if file.is_file()]

    # Run inference
    sleap_track(args=args)

    # Assert predictions file exists
    expected_extensions = available_video_exts()

    for file_path in slp_path_list:
        if file_path.suffix in expected_extensions:
            expected_output_file = file_path.with_suffix(".TESTpredictions.slp")
            assert Path(expected_output_file).exists()
```
Ensure all expected output files are checked.

The current implementation only checks for `.TESTpredictions.slp` files. Ensure that all expected output files are checked, regardless of the input file extension.

```diff
- for file_path in slp_path_list:
-     if file_path.suffix in expected_extensions:
-         expected_output_file = file_path.with_suffix(".TESTpredictions.slp")
-         assert Path(expected_output_file).exists()
+ for file_path in file_paths:
+     expected_output_file = file_path.with_suffix(".TESTpredictions.slp")
+     assert Path(expected_output_file).exists()
```
```python
def test_sleap_track_text_file_input(
    min_centroid_model_path: str,
    min_centered_instance_model_path: str,
    centered_pair_vid_path,
    tmpdir,
):

    # Create temporary directory with the structured video files
    slp_path = Path(tmpdir.mkdir("mp4_directory"))

    # Copy and paste the video into the temp dir multiple times
    num_copies = 3
    file_paths = []
    for i in range(num_copies):
        # Construct the destination path with a unique name
        dest_path = slp_path / f"centered_pair_vid_copy_{i}.mp4"
        shutil.copy(centered_pair_vid_path, dest_path)
        file_paths.append(dest_path)

    # Create a text file with the file paths
    txt_file_path = slp_path / "file_paths.txt"
    with open(txt_file_path, mode="w") as txt_file:
        for file_path in file_paths:
            txt_file.write(f"{file_path}\n")

    slp_path_obj = Path(slp_path)

    # Create sleap-track command
    args = (
        f"{txt_file_path} --model {min_centroid_model_path} "
        f"--tracking.tracker simple "
        f"--model {min_centered_instance_model_path} --video.index 0 --frames 1-3 --cpu"
    ).split()

    slp_path_list = [file for file in slp_path_obj.iterdir() if file.is_file()]

    # Run inference
    sleap_track(args=args)

    # Assert predictions file exists
    expected_extensions = available_video_exts()

    for file_path in slp_path_list:
        if file_path.suffix in expected_extensions:
            expected_output_file = Path(file_path).with_suffix(".predictions.slp")
            assert Path(expected_output_file).exists()
```
Ensure all expected output files are checked.

The current implementation only checks for `.predictions.slp` files. Ensure that all expected output files are checked, regardless of the input file extension.

```diff
- for file_path in slp_path_list:
-     if file_path.suffix in expected_extensions:
-         expected_output_file = Path(file_path).with_suffix(".predictions.slp")
-         assert Path(expected_output_file).exists()
+ for file_path in file_paths:
+     expected_output_file = file_path.with_suffix(".predictions.slp")
+     assert Path(expected_output_file).exists()
```
Actionable comments posted: 0
Outside diff range, codebase verification and nitpick comments (4)

tests/nn/test_inference.py (4)

Lines 1815-1820: Ensure all expected output files are checked. The current implementation only checks for `.TESTpredictions.slp` files; check all expected output files, regardless of the input file extension.

```diff
- for file_path in slp_path_list:
-     if file_path.suffix in expected_extensions:
-         expected_output_file = file_path.with_suffix(".TESTpredictions.slp")
-         assert Path(expected_output_file).exists()
+ for file_path in file_paths:
+     expected_output_file = file_path.with_suffix(".TESTpredictions.slp")
+     assert Path(expected_output_file).exists()
```

Lines 1848-1851: Ensure the specific exception message is checked. Currently, the test only checks for a `ValueError`; check the message to verify the correct error is raised.

```diff
- with pytest.raises(
-     ValueError,
- ):
+ with pytest.raises(ValueError, match="Expected error message"):
```

Lines 1861-1862: Ensure the specific exception message is checked, as above.

```diff
- with pytest.raises(ValueError):
+ with pytest.raises(ValueError, match="Expected error message"):
```

Lines 1905-1910: Ensure all expected output files are checked. The current implementation only checks for `.predictions.slp` files; check all expected output files, regardless of the input file extension.

```diff
- for file_path in slp_path_list:
-     if file_path.suffix in expected_extensions:
-         expected_output_file = Path(file_path).with_suffix(".predictions.slp")
-         assert Path(expected_output_file).exists()
+ for file_path in file_paths:
+     expected_output_file = file_path.with_suffix(".predictions.slp")
+     assert Path(expected_output_file).exists()
```
Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Files selected for processing (1)
- tests/nn/test_inference.py (8 hunks)
Additional context used
Learnings (1)
tests/nn/test_inference.py (1)
Learnt from talmo (PR talmolab/sleap#1875, tests/nn/test_inference.py:1804-1814, 2024-07-23T23:52:30.705Z): When checking for expected output files in tests, use the supported video file extensions from `sleap.io.video.available_video_exts()` to ensure all formats are covered.

Ruff

tests/nn/test_inference.py

- 13-13: `tensorflow_hub` imported but unused. Remove the unused import `tensorflow_hub` (F401)
Actionable comments posted: 5
Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Files selected for processing (1)
- tests/nn/test_inference.py (8 hunks)
Additional context used
Learnings (1)
tests/nn/test_inference.py (1)
Learnt from talmo (PR talmolab/sleap#1875, tests/nn/test_inference.py:1804-1814, 2024-07-23T23:52:30.705Z): When checking for expected output files in tests, use the supported video file extensions from `sleap.io.video.available_video_exts()` to ensure all formats are covered.

Ruff

tests/nn/test_inference.py

- 13-13: `tensorflow_hub` imported but unused. Remove the unused import `tensorflow_hub` (F401)
```python
with pytest.raises(ValueError):
    sleap_track(args=args_empty)
```
Ensure the specific exception message is checked.

Currently, the test only checks for a `ValueError` exception. Ensure that the specific exception message is checked to verify the correct error is raised.

```diff
- with pytest.raises(ValueError):
+ with pytest.raises(ValueError, match="Expected error message"):
```
```python
expected_extensions = available_video_exts()

for file_path in slp_path_list:
    if file_path.suffix in expected_extensions:
        expected_output_file = Path(file_path).with_suffix(".predictions.slp")
        assert Path(expected_output_file).exists()
```
Ensure all expected output files are checked.

The current implementation only checks for `.predictions.slp` files. Ensure that all expected output files are checked, regardless of the input file extension.

```diff
- for file_path in slp_path_list:
-     if file_path.suffix in expected_extensions:
-         expected_output_file = file_path.with_suffix(".predictions.slp")
-         assert Path(expected_output_file).exists()
+ for file_path in file_paths:
+     expected_output_file = file_path.with_suffix(".predictions.slp")
+     assert Path(expected_output_file).exists()
```
Committable suggestion was skipped due to low confidence.
```python
sleap_track(args=args)


def test_sleap_track_csv_input(
```
Ensure all expected output files are checked.

The current implementation only checks for `.TESTpredictions.slp` files. Ensure that all expected output files are checked, regardless of the input file extension.

```diff
- for file_path in slp_path_list:
-     if file_path.suffix in expected_extensions:
-         expected_output_file = file_path.with_suffix(".TESTpredictions.slp")
-         assert Path(expected_output_file).exists()
+ for file_path in file_paths:
+     expected_output_file = file_path.with_suffix(".TESTpredictions.slp")
+     assert Path(expected_output_file).exists()
```
```python
# Run inference and expect ValueError for missing 'data_path' column
with pytest.raises(
    ValueError,
):
    sleap_track(args=args_missing_column)
```
Ensure specific exception message is checked.
Currently, the test only checks for a `ValueError` exception. Ensure that the specific exception message is checked to verify the correct error is raised.
```diff
- with pytest.raises(
-     ValueError,
- ):
+ with pytest.raises(ValueError, match="Expected error message"):
```
```python
    # Run inference and expect ValueError for missing 'data_path' column
    with pytest.raises(ValueError, match="Expected error message"):
        sleap_track(args=args_missing_column)
```
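For context, `pytest.raises(..., match=...)` searches the pattern as a regular expression against the string form of the raised exception. A minimal stdlib-only sketch of the same check — the `raise_missing_column` stand-in and its message are illustrative, not SLEAP's actual error text:

```python
import re

def raise_missing_column():
    # Illustrative stand-in for sleap_track called on a CSV
    # that lacks the expected 'data_path' column.
    raise ValueError("Column 'data_path' not found in the CSV file.")

def assert_raises_matching(func, exc_type, pattern):
    """Mimic pytest.raises(exc_type, match=pattern): the pattern is
    re.search-ed against str(exception), not matched exactly."""
    try:
        func()
    except exc_type as exc:
        assert re.search(pattern, str(exc)), (
            f"Message {str(exc)!r} does not match pattern {pattern!r}"
        )
        return exc
    raise AssertionError(f"{exc_type.__name__} was not raised")

err = assert_raises_matching(raise_missing_column, ValueError, r"data_path")
```

With `match=` in place, an unrelated `ValueError` (say, one about an empty file) fails the test instead of passing silently.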
looks great! just make sure to add to documentation
looks good!
- Check the test csv function to see if that is the behavior you want now that we are not specifying a column name.
- You can go ahead and add to the CLI documentation.
- Manually test your changes by running the sleap-track command with different arguments to see if you can break it somehow.
Great job!
…ort-to-sleap-track' into elise/add-csv-and-text-file-support-to-sleap-track
Actionable comments posted: 2
Outside diff range, codebase verification and nitpick comments (1)
docs/guides/cli.md (1)
141-144: Inconsistency in `data_path` description

The expanded description for `data_path` is present in one instance but not in others within the same file. To maintain clarity and consistency, please update all instances of `data_path` to include the expanded description.

- File: `docs/guides/cli.md`
- Instances to update:

```
usage: sleap-inspect [-h] [--verbose] data_path
usage: sleap-render [-h] [-o OUTPUT] [-f FPS] [--scale SCALE] [--crop CROP] [--frames FRAMES] [--video-index VIDEO_INDEX] data_path
```
Analysis chain

LGTM! Verify consistency across the documentation.

The expanded description for `data_path` enhances clarity on acceptable input formats. Ensure that similar updates are made consistently across all relevant documentation.
Scripts executed
The following scripts were executed for the analysis:
Script:
```sh
#!/bin/bash
# Description: Verify that the expanded description for `data_path` is consistent across all relevant documentation.
# Test: Search for the `data_path` argument in the documentation. Expect: Consistent descriptions.
rg --type md -A 5 $'data_path'
```

Length of output: 2464
Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Files selected for processing (3)
- docs/guides/cli.md (2 hunks)
- sleap/nn/inference.py (8 hunks)
- tests/nn/test_inference.py (8 hunks)
Additional context used
Learnings (1)
tests/nn/test_inference.py (1)
Learnt from: talmo
PR: talmolab/sleap#1875
File: tests/nn/test_inference.py:1804-1814
Timestamp: 2024-07-23T23:52:30.705Z
Learning: When checking for expected output files in tests, use the supported video file extensions from `sleap.io.video.available_video_exts()` to ensure all formats are covered.
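The learning above can be sketched as a small filter over candidate paths. Here `available_video_exts` is a hypothetical stand-in returning a few common extensions; in SLEAP the real list comes from `sleap.io.video.available_video_exts()`:

```python
from pathlib import Path

def available_video_exts():
    # Hypothetical stand-in; SLEAP's supported extensions come from
    # sleap.io.video.available_video_exts().
    return ("mp4", "avi", "mov")

def expected_prediction_outputs(paths):
    """Map each supported video file to its expected predictions path."""
    exts = {f".{ext}" for ext in available_video_exts()}
    return [
        p.with_suffix(".predictions.slp")
        for p in map(Path, paths)
        if p.suffix in exts
    ]

# Unsupported files (here, notes.txt) are skipped rather than asserted on.
outs = expected_prediction_outputs(["clip.mp4", "notes.txt", "session.avi"])
```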
Additional comments not posted (6)
docs/guides/cli.md (1)
159-159: LGTM!

The expanded description for `output` enhances the flexibility of the output specification.

tests/nn/test_inference.py (2)
1862-1863: Ensure specific exception message is checked.

Currently, the test only checks for a `ValueError` exception. Ensure that the specific exception message is checked to verify the correct error is raised.

```diff
- with pytest.raises(ValueError):
+ with pytest.raises(ValueError, match="Expected error message"):
```

Likely invalid or redundant comment.
1848-1852: Ensure specific exception message is checked.

Currently, the test only checks for a `ValueError` exception. Ensure that the specific exception message is checked to verify the correct error is raised.

```diff
- with pytest.raises(
-     ValueError,
- ):
+ with pytest.raises(ValueError, match="Expected error message"):
```

Likely invalid or redundant comment.
sleap/nn/inference.py (3)

5306-5307: Initialize `output_path_list` as an empty list.

To maintain consistency and avoid potential issues, initialize `output_path_list` as an empty list instead of `None`.

```diff
- output_path_list = None
+ output_path_list = []
```
5314-5346: Ensure robust handling of CSV files.

The logic for handling CSV files is sound, but consider adding more detailed error messages for better debugging.

```diff
- raise ValueError(f"CSV file is empty: {data_path}. Error: {e}") from e
+ raise ValueError(f"CSV file is empty or invalid: {data_path}. Error: {e}") from e
```
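For illustration, the kind of CSV handling under review can be sketched with the standard library alone. `read_data_paths_from_csv` is a hypothetical helper, not the PR's exact implementation; it only mirrors the documented behavior (a `data_path` column, with a `ValueError` naming the file on empty or malformed input):

```python
import csv
import tempfile
from pathlib import Path

def read_data_paths_from_csv(csv_path):
    """Read video paths from the 'data_path' column of a CSV.

    Hypothetical helper mirroring the behavior under review: an empty
    file or a missing 'data_path' column raises a ValueError that
    names the offending file.
    """
    with open(csv_path, newline="") as f:
        reader = csv.DictReader(f)
        if reader.fieldnames is None:
            raise ValueError(f"CSV file is empty: {csv_path}.")
        if "data_path" not in reader.fieldnames:
            raise ValueError(f"Column 'data_path' not found in CSV file: {csv_path}.")
        return [Path(row["data_path"]) for row in reader]

# Demo: a small CSV listing two videos.
with tempfile.NamedTemporaryFile("w", suffix=".csv", delete=False, newline="") as tmp:
    tmp.write("data_path\nvideos/a.mp4\nvideos/b.mp4\n")
paths = read_data_paths_from_csv(tmp.name)
```

Including the offending filename in every raised message, as the review suggests, makes batch runs over many CSVs much easier to debug.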
5401-5409: Improve error handling for video reading.

Include the file name in the error message for better debugging.

```diff
- except Exception:
-     print(f"Error reading file: {file_path.as_posix()}")
+ except Exception as e:
+     print(f"Error reading file: {file_path.as_posix()}. Error: {e}")
```
```python
    if file_path.suffix in expected_extensions:
        expected_output_file = Path(file_path).with_suffix(".predictions.slp")
        assert Path(expected_output_file).exists()
```
Ensure all expected output files are checked.
The current implementation only checks for `.predictions.slp` files. Ensure that all expected output files are checked, regardless of the input file extension.
```diff
- for file_path in slp_path_list:
-     if file_path.suffix in expected_extensions:
-         expected_output_file = Path(file_path).with_suffix(".predictions.slp")
-         assert Path(expected_output_file).exists()
+ for file_path in file_paths:
+     expected_output_file = file_path.with_suffix(".predictions.slp")
+     assert Path(expected_output_file).exists()
```
```python
    for file_path in file_paths:
        expected_output_file = file_path.with_suffix(".predictions.slp")
        assert Path(expected_output_file).exists()
```
```python
    expected_extensions = available_video_exts()

    for file_path in slp_path_list:
        if file_path.suffix in expected_extensions:
            expected_output_file = file_path.with_suffix(".TESTpredictions.slp")
            assert Path(expected_output_file).exists()
```
Ensure all expected output files are checked.
The current implementation only checks for `.TESTpredictions.slp` files. Ensure that all expected output files are checked, regardless of the input file extension.
```diff
- for file_path in slp_path_list:
-     if file_path.suffix in expected_extensions:
-         expected_output_file = file_path.with_suffix(".TESTpredictions.slp")
-         assert Path(expected_output_file).exists()
+ for file_path in file_paths:
+     expected_output_file = file_path.with_suffix(".TESTpredictions.slp")
+     assert Path(expected_output_file).exists()
```
Committable suggestion was skipped due to low confidence.
Actionable comments posted: 0
Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Files selected for processing (1)
- docs/guides/cli.md (2 hunks)
Additional comments not posted (2)
docs/guides/cli.md (2)
141-144: Improved clarity and usability of the `data_path` argument.

The expanded description of the `data_path` argument to include multiple input formats significantly improves the clarity and usability of the CLI documentation.
159-159: Enhanced functionality and flexibility of the `-o OUTPUT` argument.

The modified description of the `-o OUTPUT` argument to specify that the output can be a filename or a directory path enhances the functionality and flexibility of the CLI.
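One way to read the directory-or-filename behavior of `-o`: when a directory is given, each input video gets its own predictions file inside it. The helper below is a sketch under assumed conventions (a suffix-less path is treated as a directory, and `.predictions.slp` is the default suffix) — it is not SLEAP's actual resolution logic:

```python
from pathlib import Path

def resolve_output_path(data_path, output=None):
    """Pick a predictions filename for one input video.

    Assumed behavior: no -o -> sibling .predictions.slp file;
    -o directory -> same basename inside that directory;
    otherwise -o is taken literally as the output filename.
    """
    data_path = Path(data_path)
    if output is None:
        return data_path.with_suffix(".predictions.slp")
    output = Path(output)
    if output.suffix == "":  # treat suffix-less paths as directories
        return output / data_path.with_suffix(".predictions.slp").name
    return output

p1 = resolve_output_path("videos/clip.mp4")
p2 = resolve_output_path("videos/clip.mp4", "results")
p3 = resolve_output_path("videos/clip.mp4", "out.slp")
```

In real code the directory case would more likely be detected with `Path.is_dir()` on an existing path; the suffix check here just keeps the sketch filesystem-free.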
Awesome work!
* Remove no-op code from #1498
* Add options to set background color when exporting video (#1328)
* implement #921
* simplified form / refractor
* Add test function and update cli docs
* Improve test function to check background color
* Improve comments
* Change background options to lowercase
* Use coderabbitai suggested `fill`
---------
Co-authored-by: Shrivaths Shyam <[email protected]>
Co-authored-by: Liezl Maree <[email protected]>
* Increase range on batch size (#1513)
* Increase range on batch size
* Set maximum to a factor of 2
* Set default callable for `match_lists_function` (#1520)
* Set default for `match_lists_function`
* Move test code to official tests
* Check using expected values
* Allow passing in `Labels` to `app.main` (#1524)
* Allow passing in `Labels` to `app.main`
* Load the labels object through command
* Add warning when unable to switch back to CPU mode
* Replace (broken) `--unrag` with `--ragged` (#1539)
* Fix unrag always set to true in sleap-export
* Replace unrag with ragged
* Fix typos
* Add function to create app (#1546)
* Refactor `AddInstance` command (#1561)
* Refactor AddInstance command
* Add staticmethod wrappers
* Return early from set_visible_nodes
* Import DLC with uniquebodyparts, add Tracks (#1562)
* Import DLC with uniquebodyparts, add Tracks
* add tests
* correct tests
* Make the hdf5 videos store as int8 format (#1559)
* make the hdf5 video dataset type as proper int8 by padding with zeros
* add gzip compression
* Scale new instances to new frame size (#1568)
* Fix typehinting in `AddInstance`
* brought over changes from my own branch
* added suggestions
* Ensured google style comments
---------
Co-authored-by: roomrys <[email protected]>
Co-authored-by: sidharth srinath <[email protected]>
* Fix package export (#1619) Add check for empty videos
* Add resize/scroll to training GUI (#1565)
* Make resizable training GUI and add adaptive scroll bar
* Set a maximum window size
---------
Co-authored-by: Liezl Maree <[email protected]>
* support loading slp files with non-compound types and str in metadata (#1566) Co-authored-by: Liezl Maree <[email protected]>
* change inference pipeline option to tracking-only (#1666) change inference pipeline none option to tracking-only
* Add ABL:AOC 2023 Workshop link (#1673)
* Add ABL:AOC 2023 Workshop link
* Trigger website build
* Graceful failing with seeking errors (#1712)
* Don't try to seek to faulty last frame on provider initialization
* Catch seeking errors and pass
* Lint
* Fix IndexError for hdf5 file import for single instance analysis files (#1695)
* Fix hdf5 read for single instance analysis files
* Add test
* Small test files
* removing unneccessary fixtures
* Replace imgaug with albumentations (#1623) What's the worst that could happen?
* Initial commit
* Fix augmentation
* Update more deps requirements
* Use pip for installing albumentations and avoid reinstalling OpenCV
* Update other conda envs
* Fix out of bounds albumentations issues and update dependencies (#1724)
* Install albumentations using conda-forge in environment file
* Conda install albumentations
* Add ndx-pose to pypi requirements
* Keep out of bounds points
* Black
* Add ndx-pose to conda install in environment file
* Match environment file without cuda
* Ordered dependencies
* Add test
* Delete comments
* Add conda packages to mac environment file
* Order dependencies in pypi requirements
* Add tests with zeroes and NaNs for augmentation
* Back
* Black
* Make comment one line
* Add todo for later
* Black
* Update to new TensorFlow conda package (#1726)
* Build conda package locally
* Try 2.8.4
* Merge develop into branch to fix dependencies
* Change tensorflow version to 2.7.4 in where conda packages are used
* Make tensorflow requirements in pypi looser
* Conda package has TensorFlow 2.7.0 and h5py and numpy installed via conda
* Change tensorflow version in `environment_no_cuda.yml` to test using CI
* Test new sleap/tensorflow package
* Reset build number
* Bump version
* Update mac deps
* Update to Arm64 Mac runners
* pin `importlib-metadata`
* Pin more stuff on mac
* constrain `opencv` version due to new qt dependencies
* Update more mac stuff
* Patches to get to green
* More mac skipping
---------
Co-authored-by: Talmo Pereira <[email protected]>
Co-authored-by: Talmo Pereira <[email protected]>
* Fix CI on macosx-arm64 (#1734)
* Build conda package locally
* Try 2.8.4
* Merge develop into branch to fix dependencies
* Change tensorflow version to 2.7.4 in where conda packages are used
* Make tensorflow requirements in pypi looser
* Conda package has TensorFlow 2.7.0 and h5py and numpy installed via conda
* Change tensorflow version in `environment_no_cuda.yml` to test using CI
* Test new sleap/tensorflow package
* Reset build number
* Bump version
* Update mac deps
* Update to Arm64 Mac runners
* pin `importlib-metadata`
* Pin more stuff on mac
* constrain `opencv` version due to new qt dependencies
* Update more mac stuff
* Patches to get to green
* More mac skipping
* Re-enable mac tests
* Handle GPU re-init
* Fix mac build CI
* Widen tolerance for movenet correctness test
* Fix build ci
* Try for manual build without upload
* Try to reduce training CI time
* Rework actions
* Fix miniforge usage
* Tweaks
* Fix build ci
* Disable manual build
* Try merging CI coverage
* GPU/CPU usage in tests
* Lint
* Clean up
* Fix test skip condition
* Remove scratch test
---------
Co-authored-by: eberrigan <[email protected]>
* Add option to export to CSV via sleap-convert and API (#1730)
* Add csv as a format option
* Add analysis to format
* Add csv suffix to output path
* Add condition for csv analysis file
* Add export function to Labels class
* delete print statement
* lint
* Add `analysis.csv` as parametrize input for `sleap-convert` tests
* test `export_csv` method added to `Labels` class
* black formatting
* use `Path` to construct filename
* add `analysis.csv` to cli guide for `sleap-convert`
---------
Co-authored-by: Talmo Pereira <[email protected]>
* Only propagate Transpose Tracks when propagate is checked (#1748) Fix always-propagate transpose tracks issue
* View Hyperparameter nonetype fix (#1766) Pass config getter argument to fetch hyperparameters
* Adding ragged metadata to `info.json` (#1765) Add ragged metadata to info.json file
* Add batch size to GUI for inference (#1771)
* Fix conda builds (#1776)
* test conda packages in a test environment as part of CI
* do not test sleap import using conda build
* use github environment variables to define build path for each OS in the matrix and add print statements for testing
* figure out paths one OS at a time
* github environment variables work in subsequent steps not current step
* use local builds first
* print env info
* try simple environment creation
* try conda instead of mamba
* fix windows build path
* fix windows build path
* add comment to reference pull request
* remove test stage from conda build for macs and test instead by creating the environment in a workflow
* test workflow by pushing to current branch
* test conda package on macos runner
* Mac build does not need nvidia channel
* qudida and albumentations are conda installed now
* add comment with original issue
* use python 3.9
* use conda match specifications syntax
* make print statements more readable for troubleshooting python versioning
* clean up build file
* update version for pre-release
* add TODO
* add tests for conda packages before uploading
* update ci comments and branches
* remove macos test of pip wheel since python 3.9 is not supported by setup-python action
* Upgrade build actions for release (#1779)
* update `build.yml` so it matches updates from `build_manual.yml`
* test `build.yml` without uploading
* test again using build_manual.yml
* build pip wheel with Ubuntu and turn off caching so build.yml exactly matches build_manual.yml
* `build.yml` on release only and upload
* testing caching
* `use-only-tar-bz2: true` makes environment unsolvable, change it back
* Update .github/workflows/build_manual.yml Co-authored-by: Liezl Maree <[email protected]>
* Update .github/workflows/build.yml Co-authored-by: Liezl Maree <[email protected]>
* bump pre-release version
* fix version for pre-release
* run build and upload on release!
* try setting `CACHE_NUMBER` to 1 with `use-only-tar-bz2` set to true
* increasing the cache number to reset the cache does work when `use-only-tar-bz2` is set to true
* publish and upload on release only
---------
Co-authored-by: Liezl Maree <[email protected]>
* Add ZMQ support via GUI and CLI (#1780)
* Add ZMQ support via GUI and CLI, automatic port handler, separate utils module for the functions
* Change menu name to match deleting predictions beyond max instance (#1790) Change menu and function names
* Fix website build and remove build cache across workflows (#1786)
* test with build_manual on push
* comment out caching in build manual
* remove cache step from builad manual since environment resolves when this is commented out
* comment out cache in build ci
* remove cache from build on release
* remove cache from website build
* test website build on push
* add name to checkout step
* update checkout to v4
* update checkout to v4 in build ci
* remove cache since build ci works without it
* update upload-artifact to v4 in build ci
* update second chechout to v4 in build ci
* update setup-python to v5 in build ci
* update download-artifact to v4 in build ci
* update checkout to v4 in build ci
* update checkout to v4 in website build
* update setup-miniconda to v3.0.3 in website build
* update actions-gh-pages to v4 in website build
* update actions checkout and setup-python in ci
* update checkout action in ci to v4
* pip install lxml[html_clean] because of error message during action
* add error message to website to explain why pip install lxml[html_clean]
* remove my branch for pull request
* Bump to 1.4.1a1 (#1791)
* bump versions to 1.4.1a1
* we can change the version on the installation page since this will be merged into the develop branch and not main
* Fix windows conda package upload and build ci (#1792)
* windows OS is 2022 not 2019 on runner
* upload windows conda build manually but not pypi build
* remove comment and run build ci
* change build manual back so that it doesn't upload
* remove branch from build manual
* update installation docs for 1.4.1a1
* Fix zmq inference (#1800)
* Ensure that we always pass in the zmq_port dict to LossViewer
* Ensure zmq_ports has correct keys inside LossViewer
* Use specified controller and publish ports for first attempted addresses
* Add test for ports being set in LossViewer
* Add max attempts to find unused port
* Fix find free port loop and add for controller port also
* Improve code readablility and reuse
* Improve error message when unable to find free port
* Set selected instance to None after removal (#1808)
* Add test that selected instance set to None after removal
* Set selected instance to None after removal
* Add `InstancesList` class to handle backref to `LabeledFrame` (#1807)
* Add InstancesList class to handle backref to LabeledFrame
* Register structure/unstructure hooks for InstancesList
* Add tests for the InstanceList class
* Handle case where instance are passed in but labeled_frame is None
* Add tests relevant methods in LabeledFrame
* Delegate setting frame to InstancesList
* Add test for PredictedInstance.frame after complex merge
* Add todo comment to not use Instance.frame
* Add rtest for InstasnceList.remove
* Use normal list for informative `merged_instances`
* Add test for copy and clear
* Add copy and clear methods, use normal lists in merge method
* Bump to v1.4.1a2 (#1835) bump to 1.4.1a2
* Updated trail length viewing options (#1822)
* updated trail length optptions
* Updated trail length options in the view menu
* Updated `prefs` to include length info from `preferences.yaml`
* Added trail length as method of `MainWindow`
* Updated trail length documentation
* black formatting
---------
Co-authored-by: Keya Loding <[email protected]>
* Handle case when no frame selection for trail overlay (#1832)
* Menu option to open preferences directory and update to util functions to pathlib (#1843)
* Add menu to view preferences directory and update to pathlib
* text formatting
* Add `Keep visualizations` checkbox to training GUI (#1824)
* Renamed save_visualizations to view_visualizations for clarity
* Added Delete Visualizations button to the training pipeline gui, exposed del_viz_predictions config option to the user
* Reverted view_ back to save_ and changed new training checkbox to Keep visualization images after training.
* Fixed keep_viz config option state override bug and updated keep_viz doc description
* Added test case for reading training CLI argument correctly
* Removed unnecessary testing code
* Creating test case to check for viz folder
* Finished tests to check CLI argument reading and viz directory existence
* Use empty string instead of None in cli args test
* Use keep_viz_images false in most all test configs (except test to override config)
---------
Co-authored-by: roomrys <[email protected]>
* Allowing inference on multiple videos via `sleap-track` (#1784)
* implementing proposed code changes from issue #1777
* comments
* configuring output_path to support multiple video inputs
* fixing errors from preexisting test cases
* Test case / code fixes
* extending test cases for mp4 folders
* test case for output directory
* black and code rabbit fixes
* code rabbit fixes
* as_posix errors resolved
* syntax error
* adding test data
* black
* output error resolved
* edited for push to dev branch
* black
* errors fixed, test cases implemented
* invalid output test and invalid input test
* deleting debugging statements
* deleting print statements
* black
* deleting unnecessary test case
* implemented tmpdir
* deleting extraneous file
* fixing broken test case
* fixing test_sleap_track_invalid_output
* removing support for multiple slp files
* implementing talmo's comments
* adding comments
* Add object keypoint similarity method (#1003)
* Add object keypoint similarity method
* fix max_tracking
* correct off-by-one error
* correct off-by-one error
* Generate suggestions using max point displacement threshold (#1862)
* create function max_point_displacement, _max_point_displacement_video. Add to yaml file. Create test for new function . . . will need to edit
* remove unnecessary for loop, calculate proper displacement, adjusted tests accordingly
* Increase range for displacement threshold
* Fix frames not found bug
* Return the latter frame index
* Lint
---------
Co-authored-by: roomrys <[email protected]>
* Added Three Different Cases for Adding a New Instance (#1859)
* implemented paste with offset
* right click and then default will paste the new instance at the location of the cursor
* modified the logics for creating new instance
* refined the logic
* fixed the logic for right click
* refined logics for adding new instance at a specific location
* Remove print statements
* Comment code
* Ensure that we choose a non nan reference node
* Move OOB nodes to closest in-bounds position
---------
Co-authored-by: roomrys <[email protected]>
* Allow csv and text file support on sleap track (#1875)
* initial changes
* csv support and test case
* increased code coverage
* Error fixing, black, deletion of (self-written) unused code
* final edits
* black
* documentation changes
* documentation changes
* Fix GUI crash on scroll (#1883)
* Only pass wheelEvent to children that can handle it
* Add test for wheelEvent
* Fix typo to allow rendering videos with mp4 (Mac) (#1892) Fix typo to allow rendering videos with mp4
* Do not apply offset when double clicking a `PredictedInstance` (#1888)
* Add offset argument to newInstance and AddInstance
* Apply offset of 10 for Add Instance menu button (Ctrl + I)
* Add offset for docks Add Instance button
* Make the QtVideoPlayer context menu unit-testable
* Add test for creating a new instance
* Add test for "New Instance" button in `InstancesDock`
* Fix typo in docstring
* Add docstrings and typehinting
* Remove unused imports and sort imports
* Refactor video writer to use imageio instead of skvideo (#1900)
* modify `VideoWriter` to use imageio with ffmpeg backend
* check to see if ffmpeg is present
* use the new check for ffmpeg
* import imageio.v2
* add imageio-ffmpeg to environments to test
* using avi format for now
* remove SKvideo videowriter
* test `VideoWriterImageio` minimally
* add more documentation for ffmpeg
* default mp4 for ffmpeg should be mp4
* print using `IMAGEIO` when using ffmpeg
* mp4 for ffmpeg
* use mp4 ending in test
* test `VideoWriterImageio` with avi file extension
* test video with odd size
* remove redundant filter since imageio-ffmpeg resizes automatically
* black
* remove unused import
* use logging instead of print statement
* import cv2 is needed for resize
* remove logging
* Use `Video.from_filename` when structuring videos (#1905)
* Use Video.from_filename when structuring videos
* Modify removal_test_labels to have extension in filename
* Use | instead of + in key commands (#1907)
* Use | instead of + in key commands
* Lint
* Replace QtDesktop widget in preparation for PySide6 (#1908)
* Replace to-be-depreciated QDesktopWidget
* Remove unused imports and sort remaining imports
* Remove unsupported |= operand to prepare for PySide6 (#1910) Fixes TypeError: unsupported operand type(s) for |=: 'int' and 'Option'
* Use positional argument for exception type (#1912) traceback.format_exception has changed it's first positional argument's name from etype to exc in python 3.7 to 3.10
* Replace all Video structuring with Video.cattr() (#1911)
* Remove unused AsyncVideo class (#1917) Remove unused AsyncVideo
* Refactor `LossViewer` to use matplotlib (#1899)
* use updated syntax for QtAgg backend of matplotlib
* start add features to `MplCanvas` to replace QtCharts features in `LossViewer` (untested)
* remove QtCharts imports and replace with MplCanvas
* remove QtCharts imports and replace with MplCanvas
* start using MplCanvas in LossViwer instead of QtCharts (untested)
* use updated syntax
* Uncomment all commented out QtChart
* Add debug code
* Refactor monitor to use LossViewer._init_series method
* Add monitor only debug code
* Add methods for setting up axes and legend
* Add the matplotlib canvas to the widget
* Resize axis with data (no log support yet)
* Try using PathCollection for "batch"
* Get "batch" plotting with ax.scatter (no log support yet)
* Add log support
* Add a _resize_axis method
* Modify init_series to work for ax.plot as well
* Use matplotlib to plot epoch_loss line
* Add method _add_data_to_scatter
* Add _add_data_to_plot method
* Add docstring to _resize_axes
* Add matplotlib plot for val_loss
* Add matplotlib scatter for val_loss_best
* Avoid errors with setting log scale before any positive values
* Add x and y axes labels
* Set title (removing html tags)
* Add legend
* Adjust positioning of plot
* Lint
* Leave MplCanvas unchanged
* Removed unused training_monitor.LossViewer
* Resize fonts
* Move legend outside of plot
* Add debug code for montitor aesthetics
* Use latex formatting to bold parts of title
* Make axes aesthetic
* Add midpoint grid lines
* Set initial limits on x and y axes to be 0+
* Ensure x axis minimum is always resized to 0+
* Adjust plot to account for plateau patience title
* Add debug code for plateau patience title line
* Lint
* Set thicker line width
* Remove unused import
* Set log axis on initialization
* Make tick labels smaller
* Move plot down a smidge
* Move ylabel left a bit
* Lint
* Add class LossPlot
* Refactor LossViewer to use LossPlot
* Remove QtCharts code
* Remove debug codes
* Allocate space for figure items based on item's size
* Refactor LossPlot to use underscores for internal methods
* Ensure y_min, y_max not equal Otherwise we get an unnecessary teminal message: UserWarning: Attempting to set identical bottom == top == 3.0 results in singular transformations; automatically expanding. self.axes.set_ylim(y_min, y_max)
---------
Co-authored-by: roomrys <[email protected]>
Co-authored-by: roomrys <[email protected]>
* Refactor `LossViewer` to use underscores for internal method names (#1919) Refactor LossViewer to use underscores for internal method names
* Manually handle `Instance.from_predicted` structuring when not `None` (#1930)
* Use `tf.math.mod` instead of `%` (#1931)
* Option for Max Stride to be 128 (#1941) Co-authored-by: Max Weinberg <[email protected]>
* Add discussion comment workflow (#1945)
* Add a bot to autocomment on workflow
* Use github markdown warning syntax
* Add a multiline warning
* Change happy coding to happy SLEAPing Co-authored-by: Talmo Pereira <[email protected]>
---------
Co-authored-by: roomrys <[email protected]>
Co-authored-by: Talmo Pereira <[email protected]>
* Add comment on issue workflow (#1946)
* Add workflow to test conda packages (#1935)
* Add missing imageio-ffmpeg to meta.ymls (#1943)
* Update installation docs 1.4.1 (#1810)
* [wip] Updated installation docs
* Add tabs for different OS installations
* Move installation methods to tabs
* Use tabs.css
* FIx styling error (line under last tab in terminal hint)
* Add installation instructions before TOC
* Replace mamba with conda
* Lint
* Find good light colors not switching when change dark/light themes
* Get color scheme switching with dark/light toggle button
* Upgrade website build dependencies
* Remove seemingly unneeded dependencies from workflow
* Add myst-nb>=0.16.0 lower bound
* Trigger dev website build
* Fix minor typo in css
* Add miniforge and one-liner installs for package managers
---------
Co-authored-by: roomrys <[email protected]>
Co-authored-by: Talmo Pereira <[email protected]>
* Add imageio dependencies for pypi wheel (#1950) Add imagio dependencies for pypi wheel Co-authored-by: roomrys <[email protected]>
* Do not always color skeletons table black (#1952) Co-authored-by: roomrys <[email protected]>
* Remove no module named work error (#1956)
* Do not always color skeletons table black
* Remove offending (possibly unneeded) line that causes the no module named work error to print in terminal
* Remove offending (possibly unneeded) line that causes the no module named work error to print in terminal
* Remove accidentally added changes
* Add (failing) test to ensure menu-item updates with state change
* Reconnect callback for menu-item (using lambda)
* Add (failing) test to ensure menu-item updates with state change Do not assume inital state
* Reconnect callback for menu-item (using lambda)
---------
Co-authored-by: roomrys <[email protected]>
* Add `normalized_instance_similarity` method (#1939)
* Add normalize function
* Expose normalization function
* Fix tests
* Expose object keypoint sim function
* Fix tests
* Handle skeleton decoding internally (#1961)
* Reorganize (and add) imports
* Add (and reorganize) imports
* Modify decode_preview_image to return bytes if specified
* Implement (minimally tested) replace_jsonpickle_decode
* Add support for using idx_to_node map i.e. loading from Labels (slp file)
* Ignore None items in reduce_list
* Convert large function to SkeletonDecoder class
* Update SkeletonDecoder.decode docstring
* Move decode_preview_image to SkeletonDecoder
* Use SkeletonDecoder instead of jsonpickle in tests
* Remove unused imports
* Add test for decoding dict vs tuple pystates
* Handle skeleton encoding internally (#1970)
* start class `SkeletonEncoder`
* _encoded_objects need to be a dict to add to
* add notebook for testing
* format
* fix type in docstring
* finish classmethod for encoding Skeleton as a json string
* test encoded Skeleton as json string by decoding it
* add test for decoded encoded skeleton
* update jupyter notebook for easy testing
* constraining attrs in dev environment to make sure decode format is always the same locally
* encode links first then encode source then target then type
* save first enconding statically as an input to _get_or_assign_id so that we do not always get py/id
* save first encoding statically
* first encoding is passed to _get_or_assign_id
* use first_encoding variable to determine if we should assign a py/id
* add print statements for debugging
* update notebook for easy testing
* black
* remove comment
* adding attrs constraint to show this passes for certain attrs version only
* add import
* switch out jsonpickle.encode
* oops remove import
* can attrs be unconstrained?
* forgot comma
* pin attrs for testing
* test Skeleton from json, template, with symmetries, and template
* use SkeletonEncoder.encode
* black
* try removing None values in EdgeType reduced
* Handle case when nodes are replaced by integer indices from caller
* Remove prototyping notebook
* Remove attrs pins
* Remove sort keys (which flips the neccessary ordering of our py/ids)
* Do not add extra indents to encoded file
* Only append links after fully encoded (fat-finger)
* Remove outdated comment
* Lint
---------
Co-authored-by: Talmo Pereira <[email protected]>
Co-authored-by: roomrys <[email protected]>
* Pin ndx-pose<0.2.0 (#1978)
* Pin ndx-pose<0.2.0
* Typo
* Sort encoded `Skeleton` dictionary for backwards compatibility (#1975)
* Add failing test to check that encoded Skeleton is sorted
* Sort Skeleton dictionary before encoding
* Remove unused import
* Disable comment bot for now
* Fix COCO Dataset Loading for Invisible Keypoints (#2035) Update coco.py # Fix COCO Dataset Loading for Invisible Keypoints ## Issue When loading COCO datasets, keypoints marked as invisible (flag=0) are currently skipped and later placed randomly within the instance's bounding box. However, in COCO format, these keypoints may still have valid coordinate information that should be preserved (see toy_dataset for expected vs. current behavior).
## Changes Modified the COCO dataset loading logic to: - Check if invisible keypoints (flag=0) have non-zero coordinates - If coordinates are (0,0), skip the point (existing behavior) - If coordinates are not (0,0), create the point at those coordinates but mark it as not visible - Maintain existing behavior for visible (flag=2) and labeled * Lint * Add tracking score as seekbar header options (#2047) * Add `tracking_score` as a constructor arg for `PredictedInstance` * Add `tracking_score` to ID models * Add fixture with tracking scores * Add tracking score to seekbar header * Add bonsai guide for sleap docs (#2050) * [WIP] Add bonsai guide page * Add more information to the guide with images * add branch for website build * Typos * fix links * Include suggestions * Add more screenshots and refine the doc * Remove branch from website workflow * Completed documentation edits from PR made by reviewer + review bot. --------- Co-authored-by: Shrivaths Shyam <[email protected]> Co-authored-by: Liezl Maree <[email protected]> * Don't mark complete on instance scaling (#2049) * Add check for instances with track assigned before training ID models (#2053) * Add menu item for deleting instances beyond frame limit (#1797) * Add menu item for deleting instances beyond frame limit * Add test function to test the instances returned * typos * Update docstring * Add frame range form * Extend command to use frame range --------- Co-authored-by: Talmo Pereira <[email protected]> * Highlight instance box on hover (#2055) * Make node marker and label sizes configurable via preferences (#2057) * Make node marker and label sizes configurable via preferences * Fix test * Enable touchpad pinch to zoom (#2058) * Fix import PySide2 -> qtpy (#2065) * Fix import PySide2 -> qtpy * Remove unnecessary print statements. 
* Add channels for pip conda env (#2067) * Add channels for pypi conda env * Trigger dev website build * Separate the video name and its filepath columns in `VideoTablesModel` (#2052) * add option to show video names with filepath * add doc * new feature added successfully * delete unnecessary code * remove attributes from video object * Update dataviews.py * remove all properties * delete toggle option * remove video show * fix the order of the columns * remove options * Update sleap/gui/dataviews.py Co-authored-by: Liezl Maree <[email protected]> * Update sleap/gui/dataviews.py Co-authored-by: Liezl Maree <[email protected]> * use pathlib instead of substrings * Update dataviews.py Co-authored-by: Liezl Maree <[email protected]> * Use Path instead of pathlib.Path and sort imports and remove unused imports * Use item.filename instead of getattr --------- Co-authored-by: Liezl Maree <[email protected]> * Make status bar dependent on UI mode (#2063) * remove bug for dark mode * fix toggle case --------- Co-authored-by: Liezl Maree <[email protected]> * Bump version to 1.4.1 (#2062) * Bump version to 1.4.1 * Trigger conda/pypi builds (no upload) * Trigger website build * Add dev channel to installation instructions --------- Co-authored-by: Talmo Pereira <[email protected]> * Add -c sleap/label/dev channel for win/linux - also trigger website build --------- Co-authored-by: Scott Yang <[email protected]> Co-authored-by: Shrivaths Shyam <[email protected]> Co-authored-by: getzze <[email protected]> Co-authored-by: Lili Karashchuk <[email protected]> Co-authored-by: Sidharth Srinath <[email protected]> Co-authored-by: sidharth srinath <[email protected]> Co-authored-by: Talmo Pereira <[email protected]> Co-authored-by: KevinZ0217 <[email protected]> Co-authored-by: Elizabeth <[email protected]> Co-authored-by: Talmo Pereira <[email protected]> Co-authored-by: eberrigan <[email protected]> Co-authored-by: vaibhavtrip29 <[email protected]> Co-authored-by: Keya Loding 
<[email protected]> Co-authored-by: Keya Loding <[email protected]> Co-authored-by: Hajin Park <[email protected]> Co-authored-by: Elise Davis <[email protected]> Co-authored-by: gqcpm <[email protected]> Co-authored-by: Andrew Park <[email protected]> Co-authored-by: roomrys <[email protected]> Co-authored-by: MweinbergUmass <[email protected]> Co-authored-by: Max Weinberg <[email protected]> Co-authored-by: DivyaSesh <[email protected]> Co-authored-by: Felipe Parodi <[email protected]> Co-authored-by: croblesMed <[email protected]>
Description
After expanding `sleap-track` to support multiple video inputs at once, we want to further simplify this process for projects with a large number of inputs.
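For a project with many videos, the idea is to pass `sleap-track` a single CSV or plain-text file listing the inputs rather than every video path individually. As a sketch of how such a list file could be generated (the column names `data_path` and `output_path` are illustrative assumptions, not a confirmed SLEAP format):

```python
import csv
from pathlib import Path


def write_video_csv(video_dir, csv_path, output_dir=None):
    """Write a CSV listing all .mp4 files in `video_dir`, one row per video.

    If `output_dir` is given, an `output_path` column is added pairing each
    video with a predictions filename. Column names are hypothetical.
    """
    rows = []
    for video in sorted(Path(video_dir).glob("*.mp4")):
        row = {"data_path": str(video)}
        if output_dir is not None:
            row["output_path"] = str(
                Path(output_dir) / (video.stem + ".predictions.slp")
            )
        rows.append(row)
    if not rows:
        raise ValueError(f"No .mp4 files found in {video_dir}")
    with open(csv_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(rows[0].keys()))
        writer.writeheader()
        writer.writerows(rows)
    return rows
```

The resulting file would then be passed to `sleap-track` in place of a single video path.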
Types of changes
Does this address any currently open issues?
#1870
Outside contributors checklist
Thank you for contributing to SLEAP!
❤️
Summary by CodeRabbit
New Features
Bug Fixes
Documentation
Updated descriptions for the data_path and the -o OUTPUT arguments in the CLI documentation, enhancing user understanding.
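Building on the updated `data_path` and `-o OUTPUT` descriptions, a minimal sketch of how a CSV or text `data_path` might be expanded into per-video data and output path lists (column names and fallback behavior are assumptions for illustration, not the exact SLEAP implementation):

```python
import csv
from pathlib import Path


def expand_data_path(data_path):
    """Expand a .csv or .txt list file into (data_paths, output_paths).

    A CSV is assumed to have a `data_path` column and an optional
    `output_path` column; a text file holds one video path per line.
    Any other path is treated as a single video. All names hypothetical.
    """
    path = Path(data_path)
    suffix = path.suffix.lower()
    if suffix == ".csv":
        with open(path, newline="") as f:
            rows = list(csv.DictReader(f))
        data_paths = [row["data_path"] for row in rows]
        output_paths = [row.get("output_path") for row in rows]
    elif suffix == ".txt":
        lines = [line.strip() for line in path.read_text().splitlines()]
        data_paths = [line for line in lines if line]  # skip blank lines
        output_paths = [None] * len(data_paths)
    else:
        data_paths, output_paths = [str(path)], [None]
    return data_paths, output_paths
```

A `None` output path would fall back to the CLI-level `-o OUTPUT` argument or a default predictions filename next to each video.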