
Compatibility for Gym 0.13.1 #50

Status: Open. This pull request wants to merge 64 commits into base: master.

Commits:
- cd234ab: Messing with mountain car (ymkymkymkymx, Jul 15, 2019)
- 3703822: Update README.md (ymkymkymkymx, Jul 15, 2019)
- 6a5113c: Update README.md (ymkymkymkymx, Jul 15, 2019)
- ba131a8: Create EnvironmentIdeas.md (dpakalarry, Jul 17, 2019)
- 367a1c6: Update and rename EnvironmentIdeas.md to ScenarioIdeas.md (dpakalarry, Jul 17, 2019)
- 593f2cb: Added Scenario Idea 1 (dpakalarry, Jul 17, 2019)
- c6ac044: start finding incompatibilities for latest gym version (tkclough, Jul 17, 2019)
- 15fed32: Merge remote-tracking branch 'origin/master' (tkclough, Jul 17, 2019)
- 113c199: (no message) (tkclough, Jul 17, 2019)
- 38490a2: downgrade (ymkymkymkymx, Jul 17, 2019)
- d763aa3: Merge branch 'master' of https://github.com/jarbus/multiagent-particl… (ymkymkymkymx, Jul 17, 2019)
- 2ef3400: Added testing.py to play around with a scenario (dpakalarry, Jul 17, 2019)
- 5fbefa9: Merge branch 'master' of https://github.com/jarbus/multiagent-particl… (dpakalarry, Jul 17, 2019)
- b68236c: (no message) (tkclough, Jul 17, 2019)
- a24235d: Merge remote-tracking branch 'origin/master' (tkclough, Jul 17, 2019)
- d7e489a: (no message) (tkclough, Jul 17, 2019)
- 2db0588: (no message) (tkclough, Jul 17, 2019)
- c7dfadf: Fixed reraise error (dpakalarry, Jul 17, 2019)
- 0732a28: Update changes.txt (dpakalarry, Jul 17, 2019)
- dddd559: adding documentation (jarbus, Jul 17, 2019)
- 7d66dd0: Merge branch 'master' of github.com:jarbus/multiagent-particle-envs (jarbus, Jul 17, 2019)
- eb09d38: More documentation (jarbus, Jul 17, 2019)
- 990ca85: Added some comments to testing.py to better understand (dpakalarry, Jul 20, 2019)
- 8a6cad2: Path for "scenario.py" in the documentation (zrysnd, Jul 20, 2019)
- fcbd86b: More info on simple_crypto.py (Brin775, Jul 20, 2019)
- b7ceac7: More on multiagent/core.py (zrysnd, Jul 20, 2019)
- 4314fb6: compatible? (jarbus, Jul 20, 2019)
- 42b1be8: Merge branch 'master' of github.com:jarbus/multiagent-particle-envs (jarbus, Jul 20, 2019)
- 28de1ad: Added race scenario (dpakalarry, Jul 20, 2019)
- dd886dd: Update ScenarioIdeas.md (dpakalarry, Jul 20, 2019)
- 77e9486: Update testing.py (dpakalarry, Jul 20, 2019)
- f643ac2: Update testing.py (dpakalarry, Jul 20, 2019)
- bd12a43: Modified policy.py so that agents can go to the landmark automaticall… (linlinbest, Jul 20, 2019)
- e004d7b: Merge branch 'master' of https://github.com/jarbus/multiagent-particl… (linlinbest, Jul 20, 2019)
- f26f54c: more details (zrysnd, Jul 20, 2019)
- ab27aaa: more details (zrysnd, Jul 20, 2019)
- b192819: added reward and observation function to BaseScenario (zrysnd, Jul 20, 2019)
- 51e2cb8: a customized scenario (zrysnd, Jul 20, 2019)
- ee9d068: adding dictionary<agent, landmark> (zrysnd, Jul 20, 2019)
- e2f2fde: remards based on distance between agent and its target (zrysnd, Jul 20, 2019)
- 54d39c6: minor changes (zrysnd, Jul 20, 2019)
- 883ccaf: leave policy unchanged for now (zrysnd, Jul 20, 2019)
- 4753e06: environment no longer printing message, leave printing in script (zrysnd, Jul 20, 2019)
- b151774: agent landmark position fixed (zrysnd, Jul 20, 2019)
- 1ff2b05: more reward closer (zrysnd, Jul 20, 2019)
- 8fbb1bf: documenting visualization (zrysnd, Jul 20, 2019)
- c358b4b: documenting visualization (zrysnd, Jul 20, 2019)
- 2c2d347: Added race.py (not finished) (Brin775, Jul 21, 2019)
- 927f504: Tweak scenarioideas.md (jarbus, Jul 21, 2019)
- 2e9d4ec: race tweaks (jarbus, Jul 21, 2019)
- 2c178b7: Commenting (dpakalarry, Jul 22, 2019)
- b5541b2: Merge branch 'master' of https://github.com/jarbus/multiagent-particl… (dpakalarry, Jul 22, 2019)
- 4f2b97c: Setup testing.py for the scenario (dpakalarry, Jul 23, 2019)
- 7f59b61: Added comments (dpakalarry, Jul 23, 2019)
- 1f97b16: Added Idea 3 to ScenarioIdeas.md (SimplySonder, Jul 24, 2019)
- cbfea47: Add files via upload (linlinbest, Jul 24, 2019)
- df8bf17: Update ScenarioIdeas.md (syhdd, Jul 24, 2019)
- 447ac7b: Update ScenarioIdeas.md (syhdd, Jul 24, 2019)
- 6ffece2: Add new Scenario Idea (syhdd, Jul 24, 2019)
- d01ecf0: reward based on cheat/cooperate (zrysnd, Jul 24, 2019)
- 3e8fc13: update readme (jarbus, Jul 24, 2019)
- 250d49a: clean up for pull (jarbus, Jul 26, 2019)
- b82e19b: push cleanup (jarbus, Jul 26, 2019)
- 6ec57e7: push cleanup (jarbus, Jul 26, 2019)
3 changes: 2 additions & 1 deletion .gitignore
@@ -1,3 +1,4 @@
__pycache__/
*.egg-info/
*.pyc
*.pyc
.idea/
5 changes: 3 additions & 2 deletions multiagent/environment.py
@@ -210,7 +210,7 @@ def render(self, mode='human'):
else:
word = alphabet[np.argmax(other.state.c)]
message += (other.name + ' to ' + agent.name + ': ' + word + ' ')
print(message)
# print(message)

for i in range(len(self.viewers)):
# create viewers (if necessary)
@@ -231,7 +231,8 @@ def render(self, mode='human'):
geom = rendering.make_circle(entity.size)
xform = rendering.Transform()
if 'agent' in entity.name:
geom.set_color(*entity.color, alpha=0.5)
color = (entity.color[0], entity.color[1], entity.color[2], 0.5)
geom.set_color(*color)
else:
geom.set_color(*entity.color)
geom.add_attr(xform)
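The environment.py hunk above replaces `geom.set_color(*entity.color, alpha=0.5)` with an explicit RGBA tuple that is unpacked positionally. A plausible reason is that mixing `*`-unpacking with a trailing keyword argument is a syntax error on Python 2 and breaks when `set_color` does not accept an `alpha` keyword. A minimal sketch of the same pattern (the `Geom` class here is an illustrative stand-in, not the repo's renderer):

```python
class Geom:
    """Illustrative stand-in for the renderer's geometry class."""
    def __init__(self):
        self._color = (0, 0, 0, 1)

    def set_color(self, r, g, b, alpha=1):
        self._color = (r, g, b, alpha)

entity_color = (0.25, 0.5, 0.75)  # an agent's RGB color
geom = Geom()

# Instead of geom.set_color(*entity_color, alpha=0.5), build the full
# RGBA tuple first and unpack it as plain positional arguments:
color = (entity_color[0], entity_color[1], entity_color[2], 0.5)
geom.set_color(*color)
```

The end result is identical; only the call syntax changes, which keeps the line valid on Python 2 and independent of whether `set_color` names its alpha parameter.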
7 changes: 5 additions & 2 deletions multiagent/multi_discrete.py
@@ -4,7 +4,8 @@
import numpy as np

import gym
from gym.spaces import prng
from gym.utils import seeding


class MultiDiscrete(gym.Space):
"""
@@ -27,10 +28,12 @@ def __init__(self, array_of_param_array):
self.high = np.array([x[1] for x in array_of_param_array])
self.num_discrete_space = self.low.shape[0]

self.random = seeding.np_random()

def sample(self):
""" Returns a array with one sample from each discrete action space """
# For each row: round(random .* (max - min) + min, 0)
random_array = prng.np_random.rand(self.num_discrete_space)
random_array = self.random.rand(self.num_discrete_space)
return [int(x) for x in np.floor(np.multiply((self.high - self.low + 1.), random_array) + self.low)]
def contains(self, x):
return len(x) == self.num_discrete_space and (np.array(x) >= self.low).all() and (np.array(x) <= self.high).all()
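The multi_discrete.py hunk swaps the removed `gym.spaces.prng` module for `gym.utils.seeding`. One caveat worth noting: in most gym versions `seeding.np_random()` returns a `(rng, seed)` pair, so `self.random = seeding.np_random()` as committed would likely need to unpack the tuple before `self.random.rand(...)` works. A self-contained sketch of the intended sampler, with a plain `numpy.random.RandomState` standing in for the seeded rng (an assumption, to avoid depending on a particular gym version):

```python
import numpy as np

class MultiDiscrete:
    """Sketch of the patched space: each row of array_of_param_array is a
    [min, max] pair for one discrete sub-space."""

    def __init__(self, array_of_param_array, seed=None):
        self.low = np.array([x[0] for x in array_of_param_array])
        self.high = np.array([x[1] for x in array_of_param_array])
        self.num_discrete_space = self.low.shape[0]
        # Stand-in for seeding.np_random(seed)[0] -- note the [0]: the gym
        # helper returns (rng, seed), and only the rng should be kept.
        self.random = np.random.RandomState(seed)

    def sample(self):
        # For each row: floor(random * (max - min + 1) + min)
        random_array = self.random.rand(self.num_discrete_space)
        return [int(x) for x in
                np.floor((self.high - self.low + 1.) * random_array + self.low)]

    def contains(self, x):
        return (len(x) == self.num_discrete_space
                and (np.array(x) >= self.low).all()
                and (np.array(x) <= self.high).all())

space = MultiDiscrete([[0, 4], [0, 1], [0, 1]], seed=0)
s = space.sample()
```

Seeding through a held rng rather than the global `prng.np_random` keeps sampling reproducible per space instance, which is the behavior the old module provided globally.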
3 changes: 3 additions & 0 deletions multiagent/policy.py
@@ -1,6 +1,8 @@
import numpy as np
from pyglet.window import key

from multiagent.scenarios.simple import Scenario

# individual agent policy
class Policy(object):
def __init__(self):
@@ -14,6 +16,7 @@ class InteractivePolicy(Policy):
def __init__(self, env, agent_index):
super(InteractivePolicy, self).__init__()
self.env = env
#self.agent_index = agent_index
# hard-coded keyboard events
self.move = [False for i in range(4)]
self.comm = [False for i in range(env.world.dim_c)]
21 changes: 16 additions & 5 deletions multiagent/rendering.py
@@ -11,19 +11,30 @@
os.environ['DYLD_FALLBACK_LIBRARY_PATH'] += ':/usr/lib'
# (JDS 2016/04/15): avoid bug on Anaconda 2.3.0 / Yosemite

from gym.utils import reraise
#from gym.utils import reraise
from gym import error

try:
import pyglet
except ImportError as e:
reraise(suffix="HINT: you can install pyglet directly via 'pip install pyglet'. But if you really just want to install all Gym dependencies and not have to think about it, 'pip install -e .[all]' or 'pip install gym[all]' will do it.")
#reraise(suffix="HINT: you can install pyglet directly via 'pip install pyglet'. But if you really just want to install all Gym dependencies and not have to think about it, 'pip install -e .[all]' or 'pip install gym[all]' will do it.")
raise ImportError('''
Cannot import pyglet.
HINT: you can install pyglet directly via 'pip install pyglet'.
But if you really just want to install all Gym dependencies and not have to think about it,
'pip install -e .[all]' or 'pip install gym[all]' will do it.
''')

try:
from pyglet.gl import *
except ImportError as e:
reraise(prefix="Error occured while running `from pyglet.gl import *`",suffix="HINT: make sure you have OpenGL install. On Ubuntu, you can run 'apt-get install python-opengl'. If you're running on a server, you may need a virtual frame buffer; something like this should work: 'xvfb-run -s \"-screen 0 1400x900x24\" python <your_script.py>'")

#reraise(prefix="Error occured while running `from pyglet.gl import *`",suffix="HINT: make sure you have OpenGL install. On Ubuntu, you can run 'apt-get install python-opengl'. If you're running on a server, you may need a virtual frame buffer; something like this should work: 'xvfb-run -s \"-screen 0 1400x900x24\" python <your_script.py>'")
raise ImportError('''
Error occured while running `from pyglet.gl import *`
HINT: make sure you have OpenGL install. On Ubuntu, you can run 'apt-get install python-opengl'.
If you're running on a server, you may need a virtual frame buffer; something like this should work:
'xvfb-run -s \"-screen 0 1400x900x24\" python <your_script.py>'
''')
import math
import numpy as np

@@ -342,4 +353,4 @@ def close(self):
self.window.close()
self.isopen = False
def __del__(self):
self.close()
self.close()
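The rendering.py hunks above work around the removal of `gym.utils.reraise` (dropped from gym before 0.13) by raising a plain `ImportError` carrying the same installation hint. The pattern generalizes to any optional dependency; a small sketch, where the helper name `require` and the hint text are illustrative:

```python
import importlib

def require(module_name, hint):
    """Import a module or fail with an actionable hint, standing in for
    the removed gym.utils.reraise helper (name is illustrative)."""
    try:
        return importlib.import_module(module_name)
    except ImportError:
        raise ImportError(
            "Cannot import %s.\nHINT: %s" % (module_name, hint))

# math is stdlib, so this import always succeeds:
math_mod = require("math", "math ships with Python")
```

Inlining the hint into the `ImportError` message preserves the user-facing guidance ("pip install pyglet", "apt-get install python-opengl") without depending on a gym internal that no longer exists.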
4 changes: 4 additions & 0 deletions multiagent/scenario.py
@@ -8,3 +8,7 @@ def make_world(self):
# create initial conditions of the world
def reset_world(self, world):
raise NotImplementedError()
def reward(self, agent, world):
raise NotImplementedError()
def observation(self, agent, world):
raise NotImplementedError()
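With `reward()` and `observation()` added to `BaseScenario`, every concrete scenario is now expected to override all four hooks. A toy illustrative subclass (the scenario name and the simplified world/agent objects are invented for the sketch; the real repo uses its `World` and `Agent` classes from multiagent/core.py):

```python
import numpy as np

class BaseScenario:
    # Mirror of the patched base class: four abstract hooks.
    def make_world(self):
        raise NotImplementedError()
    def reset_world(self, world):
        raise NotImplementedError()
    def reward(self, agent, world):
        raise NotImplementedError()
    def observation(self, agent, world):
        raise NotImplementedError()

class GoToLandmarkScenario(BaseScenario):
    """Toy scenario: one landmark; reward is negative distance to it."""
    def make_world(self):
        world = type("World", (), {})()  # simplified stand-in for core.World
        world.landmark = np.zeros(2)
        return world

    def reset_world(self, world):
        world.landmark = np.array([1.0, 1.0])

    def reward(self, agent, world):
        # Closer to the landmark means a higher (less negative) reward.
        return -float(np.linalg.norm(agent.pos - world.landmark))

    def observation(self, agent, world):
        # Own position plus relative landmark position.
        return np.concatenate([agent.pos, world.landmark - agent.pos])
```

Declaring the two new hooks on the base class makes a missing override fail loudly with `NotImplementedError` instead of an `AttributeError` deep inside the environment step.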