
feat (environment): Swarm Covering Environment #273

Open · zombie-einstein opened this issue Feb 6, 2025 · 1 comment
Labels: enhancement (New feature or request)

Comments

@zombie-einstein (Contributor) commented Feb 6, 2025

I was looking at implementing an environment similar to this one from PettingZoo (which itself I believe is taken from an older OpenAI multi-agent env library), where a team of agents has to simultaneously cover (i.e. come within a certain range of) a set of targets (the same number as there are agents) whilst not colliding with one another.

It should be possible to reuse some of the existing code from the search-and-rescue environment, such as the agents' local visualisation and the matching of agents to targets (the main difference between this and search-and-rescue is that the targets are fully visible and need to be continually covered instead of only being found once).

  • State: A 2D space with a set of targets distributed across it, and a set of agents whose positions are updated each step according to their velocities. A target is covered when an agent is within a fixed range of it, and uncovered otherwise (see the state sketch after this list).
  • Rewards: The original implementation uses a distance-based reward between agents and targets, but there are a couple of other options that could be considered (sketched after this list):
    • Shared rewards based on the fraction of covered targets.
    • Individual rewards when an agent covers a target, divided equally if multiple agents are covering the same target.
    • Sparse rewards awarded only when the agents succeed in covering all the targets.
  • Actions: Agents individually update their velocity. In contrast to search-and-rescue, I was looking into a simplified drone/quadcopter-style flight model, i.e. agents pitch/roll/rotate with some momentum (but in 2D space); see the dynamics sketch below.
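
A rough sketch of what the state and coverage check could look like (all names are illustrative, not final), assuming a `chex` dataclass in the style of the existing Jumanji environments:

```python
import chex
import jax.numpy as jnp


@chex.dataclass
class State:
    agent_pos: chex.Array   # (num_agents, 2) agent positions in the unit square
    agent_vel: chex.Array   # (num_agents, 2) agent velocities
    target_pos: chex.Array  # (num_targets, 2) static target positions
    key: chex.PRNGKey       # random key carried for resets/noise


def target_covered(state: State, cover_range: float) -> chex.Array:
    """Boolean mask of shape (num_targets,): True if any agent is within cover_range."""
    # Pairwise target-agent distances: (num_targets, num_agents)
    deltas = state.target_pos[:, None, :] - state.agent_pos[None, :, :]
    dists = jnp.linalg.norm(deltas, axis=-1)
    return jnp.any(dists < cover_range, axis=-1)
```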
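
Continuing from the state sketch above, the first two reward options might look something like this (again just illustrative):

```python
def shared_reward(covered: chex.Array, num_agents: int) -> chex.Array:
    # Every agent receives the fraction of targets currently covered.
    frac = jnp.mean(covered.astype(jnp.float32))
    return jnp.full((num_agents,), frac)


def split_individual_reward(state: State, cover_range: float) -> chex.Array:
    # Each covered target contributes 1, divided equally between the agents covering it.
    deltas = state.target_pos[:, None, :] - state.agent_pos[None, :, :]
    covering = (jnp.linalg.norm(deltas, axis=-1) < cover_range).astype(jnp.float32)
    per_target = covering / jnp.maximum(covering.sum(axis=-1, keepdims=True), 1.0)
    return per_target.sum(axis=0)  # (num_agents,) individual rewards
```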
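
And a minimal version of the momentum/turn-rate dynamics I had in mind for the actions (the constants and the wrap-around behaviour are placeholders):

```python
def step_agents(
    state: State, actions: chex.Array, dt: float = 0.1, drag: float = 0.95
) -> State:
    # actions: (num_agents, 2) -> thrust along the current heading and a turn rate.
    thrust, turn = actions[:, 0], actions[:, 1]
    heading = jnp.arctan2(state.agent_vel[:, 1], state.agent_vel[:, 0]) + turn * dt
    accel = thrust[:, None] * jnp.stack([jnp.cos(heading), jnp.sin(heading)], axis=-1)
    new_vel = drag * state.agent_vel + accel * dt  # momentum carried between steps
    new_pos = (state.agent_pos + new_vel * dt) % 1.0  # wrap around the unit square
    return state.replace(agent_pos=new_pos, agent_vel=new_vel)
```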
zombie-einstein added the enhancement label Feb 6, 2025
@sash-a (Collaborator) commented Feb 7, 2025

Hey @zombie-einstein, this would be great to have and it's a classic cooperative MARL env, but since it already exists in JaxMARL it would be great if we could make it harder and put our own spin on it.

Also, Jumanji has the tricky requirement of needing to put an industry-related spin on its environments.
