
Create multiagent environment #6

Closed
HighlyAuditory opened this issue Mar 7, 2019 · 9 comments

Comments

@HighlyAuditory

Expected Results

My goal is to create a scene where other agents move around while an ego view perceives these movements. I would also like to keep all the images from the ego view, along with the trajectory and angles of the ego camera. Is the platform compatible with this kind of task? Is there a tutorial I could look at?

Thanks!

@abhiskk
Contributor

abhiskk commented Mar 7, 2019

@LlamasI, this feature is currently not supported. Multi-agent support is on our future roadmap. However, there is a pseudo-multiagent experimentation setup you can use:

  1. Set up multiple environments, similar to train_ppo.py.
  2. Set up all of these environments inside the same house.
  3. Give each environment a single agent. You can then run the multiple agents in parallel inside multiple environments but the same house (pseudo-multiagent).
  4. Note that there is currently no way for the agents to see each other or physically interact with each other (collisions, etc.).
  5. You can, however, set up interactions between the agents on your side (for example, communication between the agents).
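
The steps above can be sketched in Python. This is a minimal, hypothetical illustration, not Habitat's actual API: `DummyEnv`, the scene name `house_0.glb`, and the `move_forward` action are stand-ins for a real `habitat.Env`, house file, and action space, and threads stand in for the parallel workers a train_ppo.py-style setup would use.

```python
import queue
import threading

class DummyEnv:
    """Stand-in for a single-agent environment built from one house file.

    In real Habitat this would be a habitat.Env constructed from a config
    similar to the one used by train_ppo.py.
    """

    def __init__(self, scene_id, agent_id):
        self.scene_id = scene_id
        self.agent_id = agent_id
        self.steps = 0

    def step(self, action):
        # A real env would return observations (RGB, depth, pose, ...);
        # here we return a small record so the flow is visible.
        self.steps += 1
        return {"agent": self.agent_id, "scene": self.scene_id,
                "action": action, "t": self.steps}

def run_agent(agent_id, scene_id, num_steps, outbox):
    # One environment, one agent -- steps 1 and 3 of the recipe above.
    env = DummyEnv(scene_id, agent_id)
    for _ in range(num_steps):
        # The shared queue is the "communication on your side" of step 5.
        outbox.put(env.step("move_forward"))

def run_pseudo_multiagent(num_agents=2, num_steps=3, scene="house_0.glb"):
    # Every worker loads the *same* house (step 2) but owns its own env,
    # so the agents never see or collide with each other (step 4).
    outbox = queue.Queue()
    workers = [
        threading.Thread(target=run_agent, args=(i, scene, num_steps, outbox))
        for i in range(num_agents)
    ]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    return [outbox.get() for _ in range(num_agents * num_steps)]

observations = run_pseudo_multiagent()
print(len(observations))  # 2 agents x 3 steps = 6 observations
```

The key design point of the recipe is that parallelism lives entirely outside the simulator: each worker owns an independent environment, which is why the agents cannot observe one another.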

@HighlyAuditory
Author

HighlyAuditory commented Mar 7, 2019

> @LlamasI, this feature is currently not supported. Multi-agent support is on our future roadmap. However, there is a pseudo-multiagent experimentation setup you can use:
>
>   1. Set up multiple environments, similar to train_ppo.py.
>   2. Set up all of these environments inside the same house.
>   3. Give each environment a single agent. You can then run the multiple agents in parallel inside multiple environments but the same house (pseudo-multiagent).
>   4. Note that there is currently no way for the agents to see each other or physically interact with each other (collisions, etc.).

Thanks a lot for the answer!
May I ask whether a simpler case with one agent and multiple dynamic objects is possible? For example, some mesh moving around inside the scene. Can the agent see those moving objects?

@abhiskk
Contributor

abhiskk commented Mar 8, 2019

@LlamasI, at the moment it is not possible. A moving-mesh feature would go hand in hand with better multiagent support, but this feature is not in place right now.

@HighlyAuditory
Author

Thanks!

@xiaotaw

xiaotaw commented Nov 25, 2019

Looking forward to multi-agent support too. [smile]

@abhiskk's pseudo-multiagent setup seems good enough to meet my needs, though.

@srama2512
Contributor

@abhiskk - what is the timeframe for enabling multi-agent support in Habitat? Can it be expected in the near future? Also, are there any plans for a standard implementation of the pseudo-multiagent setup?

@mathfac
Contributor

mathfac commented Jan 6, 2020

Hi @srama2512! We will have team planning in the near future and will let you know.

erikwijmans pushed a commit to erikwijmans/habitat-lab that referenced this issue Mar 5, 2020
mathfac pushed a commit that referenced this issue May 6, 2020
dhruvbatra pushed a commit that referenced this issue May 10, 2020
@smorad
Contributor

smorad commented Jul 7, 2021

Has there been any update on this? I see that multi-agent support exists in many places in the codebase, but there are no guides on how to add additional agents. Adding AGENT_1, AGENT_2, ... to the habitat config doesn't work.

@JiayunjieJYJ

JiayunjieJYJ commented Sep 14, 2021

> Has there been any update on this? I see that multi-agent support exists in many places in the codebase, but there are no guides on how to add additional agents. Adding AGENT_1, AGENT_2, ... to the habitat config doesn't work.

@smorad I have the same question, do you know how to create multi agent environment in habitat?

vincentpierre added a commit that referenced this issue May 2, 2022
cpaxton added a commit that referenced this issue Jan 17, 2023
jimmytyyang pushed a commit that referenced this issue Jan 21, 2024
jimmytyyang pushed a commit that referenced this issue Jan 22, 2024
7 participants