refactor(api): stop mocking protocol_api.validation in test_instrument_context.py #17994

Merged
ddcc4 merged 2 commits into edge from dc-mock-validation
Apr 7, 2025

Conversation

Contributor

@ddcc4 ddcc4 commented Apr 4, 2025

Overview

For some reason, we were mocking out the entire protocol_api/validation.py module in test_instrument_context.py. There is no reason to do that, since the validation functions are pure functions that don't rely on external objects, and we really ought to test the validation functions in our tests.

Furthermore, we were abusing the mocks to make the validation functions do wild things: like mistranslating Locations into WellTargets, or turning the "never" tip-replacement policy into TransferTipPolicyV2.ONCE. There is no good reason to do that, and it just makes the test logic ridiculously hard to follow.
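To illustrate why a pure validation helper doesn't need mocking, here is a hypothetical sketch (the names mirror the discussion, but this is not the real Opentrons code):

```python
# Hypothetical sketch -- not the real Opentrons implementation.
# A pure validation function depends only on its inputs, so a test can
# call it directly instead of stubbing it out.
from enum import Enum


class TransferTipPolicyV2(Enum):
    ONCE = "once"
    NEVER = "never"
    ALWAYS = "always"


def ensure_new_tip_policy(value: str) -> TransferTipPolicyV2:
    """Pure function: same input, same output, no external state."""
    try:
        return TransferTipPolicyV2(value.lower())
    except ValueError:
        raise ValueError(f"'{value}' is not a valid tip policy") from None


# No stub needed -- and no risk of a stub inventing an impossible
# translation like "never" -> ONCE.
assert ensure_new_tip_policy("never") is TransferTipPolicyV2.NEVER
```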

I need to fix this because I want to write a test for my change to the mix() function, but it was very hard to do so given how we were using mock_validation.

Test Plan and Hands on Testing

I examined all the usages of mock_validation by hand. It was very painful.

Risk assessment

Low. This is a test-only change.

@ddcc4 ddcc4 requested a review from a team as a code owner April 4, 2025 23:01

move_to_location: types.Location
-    well: Optional[labware.Well] = None
+    well: Optional[labware.Well]
Contributor Author

My IDE linter doesn't want us to set it to None here, since we override it below.

).then_return(WellTarget(well=mock_well, location=None, in_place=False))
decoy.when(mock_well.top()).then_return(top_location)
-    subject.blow_out(location=input_location)
+    subject.blow_out(location=mock_well)
Contributor Author

I believe the point of this test is to call blow_out() with a Well (hence its name), not a Location.

Calling it with a Well is how you trigger defaulting to the top of the well, which is the only logical reason why top_location is defined in this test.
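The defaulting behavior at issue can be sketched like this (a simplified stand-in, assuming only that a bare Well defaults to its top; these classes are illustrative, not the real `InstrumentContext` API):

```python
# Illustrative sketch of "pass a Well, default to the well's top".
# Simplified stand-ins, not the real Opentrons types.
from dataclasses import dataclass
from typing import Tuple, Union


@dataclass
class Location:
    point: Tuple[float, float, float]


@dataclass
class Well:
    depth: float

    def top(self) -> Location:
        return Location(point=(0.0, 0.0, self.depth))


def blow_out(location: Union[Well, Location]) -> Location:
    # When given a bare Well, default to the top of that well;
    # an explicit Location passes through unchanged.
    if isinstance(location, Well):
        return location.top()
    return location


well = Well(depth=10.0)
assert blow_out(well) == well.top()           # Well -> top of the well
explicit = Location(point=(1.0, 2.0, 3.0))
assert blow_out(explicit) == explicit         # Location -> unchanged
```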

).then_return([mock_well])
decoy.when(mock_validation.ensure_new_tip_policy("never")).then_return(
TransferTipPolicyV2.ONCE
)
Contributor Author

This is just silly.

mock_validation.validate_location(location=None, last_location=None)
).then_raise(mock_validation.NoLocationError())
with pytest.raises(RuntimeError):
subject.aspirate(location=None)
Contributor Author

This is an example of how mocking out the validation does us a disservice: the entire point of this test is to make sure that validate_location() raises an error when the location is missing. By mocking out validate_location(), we end up not testing our implementation at all, which made this test useless.
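A minimal sketch of the un-mocked approach (hypothetical names; it assumes, as the original `pytest.raises(RuntimeError)` suggests, that the no-location error is a RuntimeError subclass):

```python
# Hedged sketch: with the real (pure) validator in the call path, the
# aspirate-level test exercises the actual error behavior, not a stub's.
from typing import Optional


class NoLocationError(RuntimeError):
    """Stand-in for the real validation error (assumed RuntimeError subclass)."""


def validate_location(location: Optional[str], last_location: Optional[str]) -> str:
    # Real validation logic, not a mock that is told what to raise.
    if location is None and last_location is None:
        raise NoLocationError("no location to aspirate from")
    return location if location is not None else last_location


def aspirate(location: Optional[str], last_location: Optional[str] = None) -> str:
    # The subject delegates to the real validator, so this test breaks
    # if validate_location ever stops raising NoLocationError.
    return validate_location(location, last_location)


try:
    aspirate(location=None)
except RuntimeError:
    pass  # the behavior the original test meant to pin down
else:
    raise AssertionError("expected a RuntimeError")
```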

Member

This test wasn't exactly testing that validate_location() raises an error when location is missing. Rather it was testing that instrument_context.aspirate() raises an error when validate_location() raises an error. I'd say the name of the test is misleading.

Contributor Author

I understand your philosophy better now after your explanation. But I would still assert that this way of writing unit tests is dangerous:

Suppose I lost my mind one day and decided to change the implementation of validate_location() to:

def validate_location(...):
    if target_location is None:
        # raise NoLocationError()  # before
        raise ZeroDivisionError()

Knowing that all changes should be tested, I also go and update the unit test in test_validation.py:

def test_validate_raises_no_location_error() -> None:
    # with pytest.raises(subject.NoLocationError):  # before

    with pytest.raises(ZeroDivisionError):
        subject.validate_location(location=None, last_location=None)

Then I send out a PR claiming that I've updated the unit tests, and all other tests (including this test in test_instrument_context.py) also pass -- which is true -- so this change is safe and should be approved.


With the original approach, the test was asserting that if validate_location() raises NoLocationError, then aspirate() does the right thing. But if validate_location() does not raise NoLocationError, this test wouldn't raise any alarms.

You're right that we could hypothetically write an integration test that tests the combination of aspirate() and validate_location() -- but there are LOTS of potential combinations, and realistically, we're not going to be able to write so many integration tests.

So I think the best way to ensure correctness is to just let aspirate() call the real validation function if it's not too onerous. And in this case, calling the real validation function is also less work, since we don't have to write the mock.

Contributor

@TamarZanzouri TamarZanzouri left a comment

LGTM! Thank you for your attention :-)

@ddcc4 ddcc4 merged commit 2fece0c into edge Apr 7, 2025
24 checks passed
@ddcc4 ddcc4 deleted the dc-mock-validation branch April 7, 2025 15:39
Member

@sanni-t sanni-t left a comment

I don't have super strong opinions about this but let me explain why this mock existed.

We mock any dependencies of the test subject so that the tests are true 'unit' tests, in that they test only the things in that unit. In this case, we have tests that check that the validations are handled correctly in the functions. We shouldn't need to worry about whether a validation is correct or not, because that's the job of the unit tests written for the validation functions. And on the other side, we have integration tests that test the function as a whole, including the validations.

Your point about abusing the mocked functions is taken. Some of us (including me) have a habit of making the mocks purposely ridiculous to convey two things:

  • that it doesn't matter what the return value of a mock is because we are not concerned with it. OR
  • to make sure that a mocked function is actually called and its return value is dictated by the stub we wrote in the test

But I can see how it can be confusing to a reader. Maybe we should stop doing that.
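The "purposely ridiculous stub" pattern described above can be sketched with `unittest.mock` (the real tests use the decoy library; the names here are illustrative):

```python
# Sketch of the deliberately-ridiculous-stub pattern, using unittest.mock
# instead of the decoy library that the real tests use.
from unittest import mock

validation = mock.Mock()
# An obviously fake return value signals that the caller must be getting
# this value from the stub, not computing it itself.
validation.ensure_new_tip_policy.return_value = "OBVIOUSLY_FAKE_POLICY"


def subject(policy_arg: str) -> str:
    # The unit under test just forwards whatever the validator returns.
    return validation.ensure_new_tip_policy(policy_arg)


assert subject("never") == "OBVIOUSLY_FAKE_POLICY"
validation.ensure_new_tip_policy.assert_called_once_with("never")
```

The trade-off, as noted above, is that the absurd value can read as intentional behavior to someone skimming the test.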

Lastly, I do understand that mocking every dependency, even an isolated one like validation, shifts the burden of testing the function together with its dependencies onto integration tests, and we admittedly don't have great coverage there, so I see the value of the change you made here. But I think this change is valuable only for complicated functions, since it makes testing the numerous cases of complex functions easier. For any functions/test subjects that don't have a lot of test cases, we should use pure unit tests + integration tests. This approach keeps the unit tests free from any issues in the dependencies.


ahiuchingau pushed a commit that referenced this pull request Apr 7, 2025
Contributor Author

@ddcc4 ddcc4 left a comment

Your point about abusing the mocked functions is taken. Some of us (including me) have a habit of making the mocks purposely ridiculous to convey two things:

  • that it doesn't matter what the return value of a mock is because we are not concerned with it. OR
  • to make sure that a mocked function is actually called and its return value is dictated by the stub we wrote in the test

For any functions/test subjects that don't have a lot of test cases, we should use pure unit tests + integration tests. This approach keeps the unit tests free from any issues in the dependencies.

Ah, thank you for that explanation! I wasn't aware that that's what you're trying to do. The code makes more sense in that case.

But still, I would point out that this approach leads to worse test coverage, plus it takes more lines of code.

See my comment above for an example of why.

caila-marashaj pushed a commit that referenced this pull request Apr 8, 2025
caila-marashaj pushed a commit that referenced this pull request Apr 11, 2025
caila-marashaj pushed a commit that referenced this pull request Apr 11, 2025
caila-marashaj pushed a commit that referenced this pull request Apr 11, 2025
ddcc4 added a commit that referenced this pull request May 16, 2025
ddcc4 added a commit that referenced this pull request May 17, 2025
ddcc4 added a commit that referenced this pull request May 17, 2025
ddcc4 added a commit that referenced this pull request May 17, 2025
ddcc4 added a commit that referenced this pull request May 17, 2025 (cherry picked from commit 41043f32776a2de932c029e72e26043e781186ef)
ddcc4 added a commit that referenced this pull request May 17, 2025 (cherry picked from commit 41043f32776a2de932c029e72e26043e781186ef)
ddcc4 added a commit that referenced this pull request May 17, 2025 (cherry picked from commit 41043f32776a2de932c029e72e26043e781186ef)
ddcc4 added a commit that referenced this pull request May 17, 2025 (cherry picked from commit 2fece0c)
ddcc4 added a commit that referenced this pull request May 17, 2025 (cherry picked from commit 2fece0c)
ddcc4 added a commit that referenced this pull request May 17, 2025 (cherry picked from commit 2fece0c)
ddcc4 added a commit that referenced this pull request May 19, 2025 (cherry picked from commit 2fece0c)
ddcc4 added a commit that referenced this pull request May 19, 2025 (cherry picked from commit 2fece0c)
ddcc4 added a commit that referenced this pull request May 19, 2025 (cherry picked from commit 2fece0c)
ddcc4 added a commit that referenced this pull request May 20, 2025 (cherry picked from commit 2fece0c)
ddcc4 added a commit that referenced this pull request May 20, 2025
…t_context.py (#17994)

# Overview

For some reason, we were mocking out the entire
`protocol_api/validation.py` module in `test_instrument_context.py`.
There is no reason to do that, since the validation functions are pure
functions that don't rely on external objects, and we really ought to
test the validation functions in our tests.

Furthermore, we were abusing the mocks to make the validation functions
do wild things: like mistranslating `Location`s into `WellTarget`s, or
turning the `"never"` tip-replacement policy into
`TransferTipPolicyV2.ONCE`. There is no good reason to do that, and it
just makes the test logic ridiculously hard to follow.

I need to fix this because I want to write a test for my change to the
`mix()` function, but it was very hard to do so given how we were using
`mock_validation`.

## Test Plan and Hands on Testing

I examined all the usages of `mock_validation` by hand. It was very
painful.

## Risk assessment

Low. This is a test-only change.

(cherry picked from commit 2fece0c)
ddcc4 added a commit that referenced this pull request May 22, 2025
…t_context.py (#17994)

# Overview

For some reason, we were mocking out the entire
`protocol_api/validation.py` module in `test_instrument_context.py`.
There is no reason to do that, since the validation functions are pure
functions that don't rely on external objects, and we really ought to
test the validation functions in our tests.

Furthermore, we were abusing the mocks to make the validation functions
do wild things: like mistranslating `Location`s into `WellTarget`s, or
turning the `"never"` tip-replacement policy into
`TransferTipPolicyV2.ONCE`. There is no good reason to do that, and it
just makes the test logic ridiculously hard to follow.

I need to fix this because I want to write a test for my change to the
`mix()` function, but it was very hard to do so given how we were using
`mock_validation`.

## Test Plan and Hands on Testing

I examined all the usages of `mock_validation` by hand. It was very
painful.

## Risk assessment

Low. This is a test-only change.

(cherry picked from commit 2fece0c)
ddcc4 added a commit that referenced this pull request May 23, 2025
…t_context.py (#17994)

# Overview

For some reason, we were mocking out the entire
`protocol_api/validation.py` module in `test_instrument_context.py`.
There is no reason to do that, since the validation functions are pure
functions that don't rely on external objects, and we really ought to
test the validation functions in our tests.

Furthermore, we were abusing the mocks to make the validation functions
do wild things: like mistranslating `Location`s into `WellTarget`s, or
turning the `"never"` tip-replacement policy into
`TransferTipPolicyV2.ONCE`. There is no good reason to do that, and it
just makes the test logic ridiculously hard to follow.

I need to fix this because I want to write a test for my change to the
`mix()` function, but it was very hard to do so given how we were using
`mock_validation`.

## Test Plan and Hands on Testing

I examined all the usages of `mock_validation` by hand. It was very
painful.

## Risk assessment

Low. This is a test-only change.

(cherry picked from commit 2fece0c)
ddcc4 added a commit that referenced this pull request May 24, 2025
…t_context.py (#17994)

# Overview

For some reason, we were mocking out the entire
`protocol_api/validation.py` module in `test_instrument_context.py`.
There is no reason to do that, since the validation functions are pure
functions that don't rely on external objects, and we really ought to
test the validation functions in our tests.

Furthermore, we were abusing the mocks to make the validation functions
do wild things: like mistranslating `Location`s into `WellTarget`s, or
turning the `"never"` tip-replacement policy into
`TransferTipPolicyV2.ONCE`. There is no good reason to do that, and it
just makes the test logic ridiculously hard to follow.

I need to fix this because I want to write a test for my change to the
`mix()` function, but it was very hard to do so given how we were using
`mock_validation`.

## Test Plan and Hands on Testing

I examined all the usages of `mock_validation` by hand. It was very
painful.

## Risk assessment

Low. This is a test-only change.

(cherry picked from commit 2fece0c)
ddcc4 added a commit that referenced this pull request May 24, 2025
…t_context.py (#17994)

# Overview

For some reason, we were mocking out the entire
`protocol_api/validation.py` module in `test_instrument_context.py`.
There is no reason to do that, since the validation functions are pure
functions that don't rely on external objects, and we really ought to
test the validation functions in our tests.

Furthermore, we were abusing the mocks to make the validation functions
do wild things: like mistranslating `Location`s into `WellTarget`s, or
turning the `"never"` tip-replacement policy into
`TransferTipPolicyV2.ONCE`. There is no good reason to do that, and it
just makes the test logic ridiculously hard to follow.

I need to fix this because I want to write a test for my change to the
`mix()` function, but it was very hard to do so given how we were using
`mock_validation`.

## Test Plan and Hands on Testing

I examined all the usages of `mock_validation` by hand. It was very
painful.

## Risk assessment

Low. This is a test-only change.

(cherry picked from commit 2fece0c)
ddcc4 added a commit that referenced this pull request May 29, 2025
…t_context.py (#17994)

# Overview

For some reason, we were mocking out the entire
`protocol_api/validation.py` module in `test_instrument_context.py`.
There is no reason to do that, since the validation functions are pure
functions that don't rely on external objects, and we really ought to
test the validation functions in our tests.

Furthermore, we were abusing the mocks to make the validation functions
do wild things: like mistranslating `Location`s into `WellTarget`s, or
turning the `"never"` tip-replacement policy into
`TransferTipPolicyV2.ONCE`. There is no good reason to do that, and it
just makes the test logic ridiculously hard to follow.

I need to fix this because I want to write a test for my change to the
`mix()` function, but it was very hard to do so given how we were using
`mock_validation`.

## Test Plan and Hands on Testing

I examined all the usages of `mock_validation` by hand. It was very
painful.

## Risk assessment

Low. This is a test-only change.

(cherry picked from commit 2fece0c)
ddcc4 added a commit that referenced this pull request May 29, 2025
…t_context.py (#17994)

# Overview

For some reason, we were mocking out the entire
`protocol_api/validation.py` module in `test_instrument_context.py`.
There is no reason to do that, since the validation functions are pure
functions that don't rely on external objects, and we really ought to
test the validation functions in our tests.

Furthermore, we were abusing the mocks to make the validation functions
do wild things: like mistranslating `Location`s into `WellTarget`s, or
turning the `"never"` tip-replacement policy into
`TransferTipPolicyV2.ONCE`. There is no good reason to do that, and it
just makes the test logic ridiculously hard to follow.

I need to fix this because I want to write a test for my change to the
`mix()` function, but it was very hard to do so given how we were using
`mock_validation`.

## Test Plan and Hands on Testing

I examined all the usages of `mock_validation` by hand. It was very
painful.

## Risk assessment

Low. This is a test-only change.

(cherry picked from commit 2fece0c)
ddcc4 added a commit that referenced this pull request May 29, 2025
…t_context.py (#17994)

# Overview

For some reason, we were mocking out the entire
`protocol_api/validation.py` module in `test_instrument_context.py`.
There is no reason to do that, since the validation functions are pure
functions that don't rely on external objects, and we really ought to
test the validation functions in our tests.

Furthermore, we were abusing the mocks to make the validation functions
do wild things: like mistranslating `Location`s into `WellTarget`s, or
turning the `"never"` tip-replacement policy into
`TransferTipPolicyV2.ONCE`. There is no good reason to do that, and it
just makes the test logic ridiculously hard to follow.

I need to fix this because I want to write a test for my change to the
`mix()` function, but it was very hard to do so given how we were using
`mock_validation`.

## Test Plan and Hands on Testing

I examined all the usages of `mock_validation` by hand. It was very
painful.

## Risk assessment

Low. This is a test-only change.

(cherry picked from commit 2fece0c)
ddcc4 added a commit that referenced this pull request May 29, 2025
…t_context.py (#17994)

# Overview

For some reason, we were mocking out the entire
`protocol_api/validation.py` module in `test_instrument_context.py`.
There is no reason to do that, since the validation functions are pure
functions that don't rely on external objects, and we really ought to
test the validation functions in our tests.

Furthermore, we were abusing the mocks to make the validation functions
do wild things: like mistranslating `Location`s into `WellTarget`s, or
turning the `"never"` tip-replacement policy into
`TransferTipPolicyV2.ONCE`. There is no good reason to do that, and it
just makes the test logic ridiculously hard to follow.

I need to fix this because I want to write a test for my change to the
`mix()` function, but it was very hard to do so given how we were using
`mock_validation`.

## Test Plan and Hands on Testing

I examined all the usages of `mock_validation` by hand. It was very
painful.

## Risk assessment

Low. This is a test-only change.

(cherry picked from commit 2fece0c)