refactor(api): stop mocking protocol_api.validation in test_instrument_context.py#17994
Conversation
  move_to_location: types.Location
- well: Optional[labware.Well] = None
+ well: Optional[labware.Well]
My IDE linter doesn't want us to set it to None here, since we override it below.
  ).then_return(WellTarget(well=mock_well, location=None, in_place=False))
  decoy.when(mock_well.top()).then_return(top_location)
- subject.blow_out(location=input_location)
+ subject.blow_out(location=mock_well)
I believe the point of this test is to call blow_out() with a Well (hence its name), not a Location.
Calling it with a Well is how you trigger defaulting to the top of the well, which is the only logical reason why top_location is defined in this test.
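A minimal sketch of the defaulting behavior this test targets. `Well`, `Location`, and `resolve_target` below are illustrative stand-ins, not the real Opentrons `InstrumentContext` code:

```python
from typing import NamedTuple, Optional, Union


class Location(NamedTuple):
    point: tuple
    labware: Optional[object] = None


class Well:
    def top(self) -> Location:
        # A bare Well defaults to its top location.
        return Location(point=(0.0, 0.0, 10.0), labware=self)


def resolve_target(target: Union[Well, Location]) -> Location:
    """If given a Well, default to its top; a Location is used as-is."""
    return target.top() if isinstance(target, Well) else target


well = Well()
assert resolve_target(well) == well.top()       # Well -> defaults to top
explicit = Location(point=(1.0, 2.0, 3.0))
assert resolve_target(explicit) is explicit     # Location -> used verbatim
```

So passing `mock_well` (a `Well`) is what makes the subject consult `mock_well.top()`, which is why the test stubs it.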
  ).then_return([mock_well])
  decoy.when(mock_validation.ensure_new_tip_policy("never")).then_return(
      TransferTipPolicyV2.ONCE
  )
  mock_validation.validate_location(location=None, last_location=None)
  ).then_raise(mock_validation.NoLocationError())
  with pytest.raises(RuntimeError):
      subject.aspirate(location=None)
This is an example of how mocking out the validation does us a disservice: the entire point of this test is to make sure that validate_location() raises an error when the location is missing. By mocking out validate_location(), we end up not testing our implementation at all, which made this test useless.
This test wasn't exactly testing that validate_location() raises an error when location is missing. Rather it was testing that instrument_context.aspirate() raises an error when validate_location() raises an error. I'd say the name of the test is misleading.
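In other words, the behavior under test is the error translation in the caller. A toy sketch with hypothetical stand-ins (names mirror the snippet above, but this is not the real Opentrons implementation):

```python
class NoLocationError(Exception):
    """Raised by the validator when no location can be determined."""


def validate_location(location, last_location):
    # Stand-in for protocol_api.validation.validate_location().
    if location is None and last_location is None:
        raise NoLocationError("no location provided")
    return location if location is not None else last_location


def aspirate(location=None, last_location=None):
    # What the test exercises: the caller translates the validator's
    # NoLocationError into a RuntimeError for the user.
    try:
        target = validate_location(location, last_location)
    except NoLocationError as error:
        raise RuntimeError("Cannot aspirate without a location.") from error
    return target


try:
    aspirate(location=None)
except RuntimeError:
    pass  # expected: the validator's error surfaced as RuntimeError
else:
    raise AssertionError("expected RuntimeError")

assert aspirate(location="A1") == "A1"  # a valid location passes through
```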
I understand your philosophy better now after your explanation. But I would still assert that this way of writing unit tests is dangerous:
Suppose I lost my mind one day and decided to change the implementation of validate_location() to:
def validate_location(...):
    if target_location is None:
        # raise NoLocationError()  # before
        raise ZeroDivisionError()
Knowing that all changes should be tested, I also go and update the unit test in test_validation.py:
def test_validate_raises_no_location_error() -> None:
    # with pytest.raises(subject.NoLocationError):  # before
    with pytest.raises(ZeroDivisionError):
        subject.validate_location(location=None, last_location=None)
Then I send out a PR claiming that I've updated the unit tests, and all other tests (including this test in test_instrument_context.py) also pass -- which is true -- so this change is safe and should be approved.
With the original approach, the test was asserting that if validate_location() raises NoLocationError, then aspirate() does the right thing. But if validate_location() does not raise NoLocationError, this test wouldn't raise any alarms.
You're right that we could hypothetically write an integration test that tests the combination of aspirate() and validate_location() -- but there are LOTS of potential combinations, and realistically, we're not going to be able to write so many integration tests.
So I think the best way to ensure correctness is to just let aspirate() call the real validation function if it's not too onerous. And in this case, calling the real validation function is also less work, since we don't have to write the mock.
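The failure mode described above can be sketched with toy stand-ins (using `unittest.mock` rather than decoy; all names here are illustrative). A mocked-dependency test pins the exception type inside the stub, so it keeps passing even after the real validator drifts; an unmocked test exercises the real function and would catch the drift:

```python
from unittest.mock import Mock


class NoLocationError(Exception):
    pass


def real_validate(location):
    # The real validator. Imagine a PR changed this to raise
    # ZeroDivisionError instead -- only the unmocked test would notice.
    if location is None:
        raise NoLocationError()


def aspirate(location, validate):
    # Caller that translates the validator's error for the user.
    try:
        validate(location)
    except NoLocationError as error:
        raise RuntimeError() from error


# Mocked style: the stub, not the real validator, decides the exception
# type, so this test passes no matter what real_validate actually raises.
stub = Mock(side_effect=NoLocationError())
try:
    aspirate(None, stub)
except RuntimeError:
    pass

# Unmocked style: exercises real_validate, so a drift in its exception
# type would escape as an unhandled error right here.
try:
    aspirate(None, real_validate)
except RuntimeError:
    pass
```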
TamarZanzouri
left a comment
LGTM! Thank you for your attention :-)
I don't have super strong opinions about this but let me explain why this mock existed.
We mock any dependencies of the test subject so that the tests are true 'unit' tests, in that they test only the things in that unit. In this case, we have tests that check that the validations are handled correctly in the functions. We shouldn't need to worry about whether a validation is correct or not, because that's the job of the unit tests written for the validation functions. And on the other side, we have integration tests that test the function as a whole, including the validations.
Your point about abusing the mocked functions is taken. Some of us (including me) have a habit of making the mocks purposely ridiculous to convey two things:
- that it doesn't matter what the return value of a mock is because we are not concerned with it. OR
- to make sure that a mocked function is actually called and its return value is dictated by the stub we wrote in the test
But I can see how it can be confusing to a reader. Maybe we should stop doing that.
Lastly, I do understand that mocking every dependency, even an isolated one like validation, shifts the burden onto integration tests to cover how the function behaves together with its dependencies, and we admittedly don't have great coverage in integration tests, so I see the value of the change you made here. But I think this change is valuable only for complicated functions, since it makes testing the numerous cases of complex functions easier. For any functions/test subjects that don't have a lot of test cases, we should use pure unit tests + integration tests. This approach keeps the unit tests free from any issues in the dependencies.
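The 'deliberately ridiculous stub' convention described above can be sketched like this (using `unittest.mock` rather than decoy; the names are illustrative only). The stub returns a value the real dependency never would, so if the subject's result equals that sentinel, the dependency must really have been consulted:

```python
from unittest.mock import Mock


def apply_policy(policy_text, parser):
    # Subject under test: whatever the parser returns is used verbatim.
    return parser(policy_text)


# A real parser would never map "never" to "ONCE". Because the result
# below equals the sentinel, we know it came from the stub -- i.e. the
# dependency was actually called, not bypassed or hardcoded.
parser = Mock(return_value="ONCE")
result = apply_policy("never", parser)
assert result == "ONCE"
parser.assert_called_once_with("never")
```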
…t_context.py (#17994)

# Overview

For some reason, we were mocking out the entire `protocol_api/validation.py` module in `test_instrument_context.py`. There is no reason to do that, since the validation functions are pure functions that don't rely on external objects, and we really ought to test the validation functions in our tests.

Furthermore, we were abusing the mocks to make the validation functions do wild things: like mistranslating `Location`s into `WellTarget`s, or turning the `"never"` tip-replacement policy into `TransferTipPolicyV2.ONCE`. There is no good reason to do that, and it just makes the test logic ridiculously hard to follow.

I need to fix this because I want to write a test for my change to the `mix()` function, but it was very hard to do so given how we were using `mock_validation`.

## Test Plan and Hands on Testing

I examined all the usages of `mock_validation` by hand. It was very painful.

## Risk assessment

Low. This is a test-only change.
ddcc4
left a comment
> Your point about abusing the mocked functions is taken. Some of us (including me) have a habit of making the mocks purposely ridiculous to convey two things:
> - that it doesn't matter what the return value of a mock is because we are not concerned with it. OR
> - to make sure that a mocked function is actually called and its return value is dictated by the stub we wrote in the test
>
> For any functions/test subjects that don't have a lot of test cases, we should use pure unit tests + integration tests. This approach keeps the unit tests free from any issues in the dependencies.
Ah, thank you for that explanation! I wasn't aware that that's what you're trying to do. The code makes more sense in that case.
But still, I would point out that this approach leads to worse test coverage, plus it takes more lines of code.
See my comment below for an example why.
…t_context.py (#17994)

# Overview

For some reason, we were mocking out the entire `protocol_api/validation.py` module in `test_instrument_context.py`. There is no reason to do that, since the validation functions are pure functions that don't rely on external objects, and we really ought to test the validation functions in our tests.

Furthermore, we were abusing the mocks to make the validation functions do wild things: like mistranslating `Location`s into `WellTarget`s, or turning the `"never"` tip-replacement policy into `TransferTipPolicyV2.ONCE`. There is no good reason to do that, and it just makes the test logic ridiculously hard to follow.

I need to fix this because I want to write a test for my change to the `mix()` function, but it was very hard to do so given how we were using `mock_validation`.

## Test Plan and Hands on Testing

I examined all the usages of `mock_validation` by hand. It was very painful.

## Risk assessment

Low. This is a test-only change.

(cherry picked from commit 41043f32776a2de932c029e72e26043e781186ef)
…t_context.py (#17994)

# Overview

For some reason, we were mocking out the entire `protocol_api/validation.py` module in `test_instrument_context.py`. There is no reason to do that, since the validation functions are pure functions that don't rely on external objects, and we really ought to test the validation functions in our tests.

Furthermore, we were abusing the mocks to make the validation functions do wild things: like mistranslating `Location`s into `WellTarget`s, or turning the `"never"` tip-replacement policy into `TransferTipPolicyV2.ONCE`. There is no good reason to do that, and it just makes the test logic ridiculously hard to follow.

I need to fix this because I want to write a test for my change to the `mix()` function, but it was very hard to do so given how we were using `mock_validation`.

## Test Plan and Hands on Testing

I examined all the usages of `mock_validation` by hand. It was very painful.

## Risk assessment

Low. This is a test-only change.

(cherry picked from commit 2fece0c)
…t_context.py (#17994) # Overview For some reason, we were mocking out the entire `protocol_api/validation.py` module in `test_instrument_context.py`. There is no reason to do that, since the validation functions are pure functions that don't rely on external objects, and we really ought to test the validation functions in our tests. Furthermore, we were abusing the mocks to make the validation functions do wild things: like mistranslating `Location`s into `WellTarget`s, or turning the `"never"` tip-replacement policy into `TransferTipPolicyV2.ONCE`. There is no good reason to do that, and it just makes the test logic ridiculously hard to follow. I need to fix this because I want to write a test for my change to the `mix()` function, but it was very hard to do so given how we were using `mock_validation`. ## Test Plan and Hands on Testing I examined all the usages of `mock_validation` by hand. It was very painful. ## Risk assessment Low. This is a test-only change. (cherry picked from commit 2fece0c)
…t_context.py (#17994) # Overview For some reason, we were mocking out the entire `protocol_api/validation.py` module in `test_instrument_context.py`. There is no reason to do that, since the validation functions are pure functions that don't rely on external objects, and we really ought to test the validation functions in our tests. Furthermore, we were abusing the mocks to make the validation functions do wild things: like mistranslating `Location`s into `WellTarget`s, or turning the `"never"` tip-replacement policy into `TransferTipPolicyV2.ONCE`. There is no good reason to do that, and it just makes the test logic ridiculously hard to follow. I need to fix this because I want to write a test for my change to the `mix()` function, but it was very hard to do so given how we were using `mock_validation`. ## Test Plan and Hands on Testing I examined all the usages of `mock_validation` by hand. It was very painful. ## Risk assessment Low. This is a test-only change. (cherry picked from commit 2fece0c)
## Overview

For some reason, we were mocking out the entire `protocol_api/validation.py` module in `test_instrument_context.py`. There is no reason to do that, since the validation functions are pure functions that don't rely on external objects, and we really ought to test the validation functions in our tests.

Furthermore, we were abusing the mocks to make the validation functions do wild things: like mistranslating `Location`s into `WellTarget`s, or turning the `"never"` tip-replacement policy into `TransferTipPolicyV2.ONCE`. There is no good reason to do that, and it just makes the test logic ridiculously hard to follow.

I need to fix this because I want to write a test for my change to the `mix()` function, but it was very hard to do so given how we were using `mock_validation`.
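To make the "mistranslation" concrete, here is a minimal, hypothetical sketch in plain Python (using `unittest.mock` rather than the actual opentrons code or its decoy-based fixtures): a stand-in pure validator next to the kind of mock behavior the old tests configured.

```python
# Hypothetical stand-in for a pure validation function like
# ensure_new_tip_policy() -- NOT the actual opentrons implementation.
from unittest import mock


def ensure_new_tip_policy(value: str) -> str:
    """Map a user-facing tip policy string to an internal policy name."""
    allowed = {"once": "ONCE", "never": "NEVER", "always": "ALWAYS"}
    if value not in allowed:
        raise ValueError(f"invalid tip policy: {value!r}")
    return allowed[value]


# Calling the real function exercises the actual mapping:
assert ensure_new_tip_policy("never") == "NEVER"

# The old tests instead mocked the validator and made it "mistranslate":
mock_validation = mock.Mock()
mock_validation.ensure_new_tip_policy.return_value = "ONCE"

# The mock now claims "never" means ONCE -- the surrounding test still
# passes, but it no longer checks the real behavior at all.
assert mock_validation.ensure_new_tip_policy("never") == "ONCE"
```

Because the validator is a pure function, calling it directly in a test costs nothing, and it catches exactly the regressions that a mock configured this way would hide.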
## Test Plan and Hands on Testing

I examined all the usages of `mock_validation` by hand. It was very painful.

## Risk assessment
Low. This is a test-only change.

(cherry picked from commit 2fece0c)