
cannot read samples during task.stop! in tests #247

Open
g-arjones opened this issue May 21, 2020 · 12 comments

@g-arjones
Contributor

I have a stopHook that writes to an output port. Short of calling task.orocos_task.stop in a separate thread, I didn't find any other way to read the samples in tests.

@doudou changed the title from "cannot read samples during task.stop!" to "cannot read samples during task.stop! in tests" on May 22, 2020
@doudou
Member

doudou commented May 22, 2020

Mmmmmm. Not really sure why not, actually.

The have_one_new_sample predicates create their own reader/writer directly from the orocos.rb layer (to avoid asynchronicity in setup).

These readers aren't even disconnected (which is a bug, but it is in the end harmless since the task's process will be terminated after the test).
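
Roughly, the predicate's setup amounts to something like this at the orocos.rb level (a sketch based on the description above, not the actual implementation; the port name "out" is illustrative):

    # Sketch: read through the task's raw orocos.rb handle, bypassing
    # the Syskit-managed connections.
    reader = task.orocos_task.port("out").reader
    sample = reader.read_new # nil until a new sample arrives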

So, you should be able to read. Can you give more info, in the form of "test", "expected behavior", "actual behavior"?

@g-arjones
Contributor Author

g-arjones commented May 22, 2020

> So, you should be able to read

Yeah, I got it wrong. After more investigation, I realized that the problem is in the second step of the hook, which expects a response from the device (the test code, in this case). What seems to be happening is that calling task.stop! within expect_execution disconnects the ports before I have a chance to write the response. Maybe an example will make things clearer (if what I am saying even makes sense):

I expected this to call the stop hook, read a sample from the driver, and write a response:

expect_execution { task.stop! }
    .to { have_one_new_sample task.out_port }
expect_execution { syskit_write task.in_port, make_response }
    .to { finish task }

I can read the sample (have_one_new_sample is fulfilled) but the response never gets to the driver (so it times out and breaks during the stop hook). Now, I've tried a bunch of things and a couple of them worked:

(1)

execution_engine.once { task.stop! }
expect_execution
    .join_all_waiting_work(false)
    .to { have_one_new_sample task.out_port }
expect_execution { syskit_write task.in_port, make_response }
    .to { finish task }

(2)

t = Thread.new { task.orocos_task.stop }
expect_execution.to { have_one_new_sample task.out_port }
expect_execution { syskit_write task.in_port, make_response }
    .to { finish task }
t.join

@g-arjones
Contributor Author

Probably a different issue, but I also could not find a way to test the emission of an exception event during configuration. The test always fails because the "expected" start event is never emitted.

@doudou
Member

doudou commented May 22, 2020

Could you start a new issue for that? I have to look it up. The concept of configuration does not map very well onto Roby's task state handling. A failure to configure maps to a failure to start, not to an event.
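
For illustration, that distinction would make such a test look for a failed start rather than an emitted event (a sketch assuming Roby's fail_to_start expectation applies to configuration errors, which this thread does not confirm):

    # Sketch: expect the task to fail to start when configure raises,
    # instead of expecting an event emission.
    expect_execution { task.start! }
        .to { fail_to_start task }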

@doudou
Member

doudou commented May 22, 2020

You nailed it with join_all_waiting_work. To make the execution_expectations feel as synchronous as possible, the harness waits for all futures to finish before returning. One of them is the call to stop.

Now, to be honest, I'm not entirely sure it makes that much sense. If you want synchronous behavior, you should be waiting for the end condition in the to { } block.
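
As a sketch of that idea (multiple expectations can go in a single to block; this ignores that this particular driver also needs a response written in between):

    # Sketch: wait for both the stop-hook sample and the stop event
    # in the to { } block, instead of relying on join_all_waiting_work.
    expect_execution { task.stop! }
        .join_all_waiting_work(false)
        .to do
            have_one_new_sample task.out_port
            emit task.stop_event
        end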

@g-arjones
Contributor Author

g-arjones commented May 22, 2020

> Could you start a new issue for that?

Definitely.

> A failure to configure maps to a failure to start, not to an event

I think that makes sense, provided that I can still catch the exception events that happen in the meantime. I tried disabling join_all_waiting_work and validate_unexpected_errors but I still can't catch the exceptions. I do catch a stop_event, though (which makes less sense, IMO).
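
In code, the attempt described here would look roughly like this (a sketch; the exception_event name and the exact combination of options are assumptions, not taken verbatim from this thread):

    # Sketch: disable both harness options and wait for the exception
    # event that the configuration failure is expected to produce.
    expect_execution { task.start! }
        .join_all_waiting_work(false)
        .validate_unexpected_errors(false)
        .to { emit task.exception_event }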

> You nailed it with join_all_waiting_work

I guessed as much but what I don't get is why I have to use execution_engine.once { task.stop! } to make it work. How is that different from passing the block to expect_execution? Also, is it OK to use execution_engine with the test harness?

> If you want synchronous behavior, you should be waiting for the end condition in the to { }

Yeah, I have the same impression. join_all_waiting_work should default to false, and maybe the minitest integration bits should add an after/teardown block that joins the waiting work.
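
A minimal sketch of that teardown idea (assuming the execution engine exposes a join_all_waiting_work-style method; the method name here is a guess, not a documented API):

    # Sketch: join any still-pending asynchronous work after each test,
    # so individual expectations don't have to wait for it themselves.
    def teardown
        execution_engine&.join_all_waiting_work
        super
    end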

@doudou
Member

doudou commented May 22, 2020

I've done a quick test here, and I don't need the once. The following works (after having hacked the task to send and receive one message in stopHook):

        @task = syskit_deploy(
            OroGen.iodrivers_base.test.PeriodicTask.deployed_as("task_under_test")
        )
        setup_iodrivers_base_with_fd
        syskit_configure_and_start(@task)

        sample = expect_execution { task.stop! }
            .join_all_waiting_work(false)
            .to { have_one_new_sample task.rx_port }
        expect_execution { syskit_write task.tx_port, sample }
            .to { emit task.stop_event }

@doudou
Member

doudou commented May 22, 2020

Also, check this out: rock-core/rock-and-syskit#61

@g-arjones
Contributor Author

> I don't need the once

Yes, I guess you are right. I probably missed that among the myriad of things I was trying. So the conclusion is that there's nothing wrong with the harness, right? Except for the fact that one may have to disable the join_all_waiting_work option very often...

@g-arjones
Contributor Author

Also, it's not very clear to me what #scheduler() does. Could you please clarify?

@doudou
Member

doudou commented May 22, 2020

The scheduler is the object that auto-starts tasks. In the case of Syskit, it also controls whether a component will be auto-configured.

Since there's no harness-compatible syskit_configure, it lets the system auto-configure things. And thanks to join_all_waiting_work being true, the harness waits for configuration to finish.

All very ad-hoc. The harness definitely needs a syskit_configure action that's compatible with expect_execution, and a syskit_is_configured predicate.
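
Until such a predicate exists, one could approximate it with a generic achieve expectation (a sketch; it assumes Roby's achieve expectation and Syskit's setup? flag behave as expected here):

    # Sketch: wait until the scheduler has configured the task, using
    # achieve as a stand-in for a dedicated syskit_is_configured predicate.
    expect_execution
        .to { achieve { task.setup? } }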

@g-arjones
Contributor Author

👍

Feel free to close. Thanks for the insights.
