multi-agent problem #84
Comments
Did you find a solution to the problem?
Sorry, I haven't found a solution yet.
I solved the problem. The issue was that I hadn't started two separate instances of Minecraft on different ports. I'm not sure if it helps you, but try starting one instance on port 10000 and another on port 10001. I did it using the Anaconda Prompt: go to your installed Minecraft directory (if you use the Conda environment, it's set to %MALMO_MINECRAFT_ROOT%, so use that). These steps are also described in the tutorial on the website, but I overlooked them too. Hope this helps! (All of this was done on a Windows system.)
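A rough sketch of the launch steps described above, for the Anaconda Prompt on Windows. This assumes the standard Malmo conda setup where `%MALMO_MINECRAFT_ROOT%` points at the bundled Minecraft folder and `launchClient.bat` accepts a `-port` option; check your own install's tutorial if the paths differ.

```shell
:: First Anaconda Prompt window: start a Minecraft client on port 10000.
cd %MALMO_MINECRAFT_ROOT%
launchClient.bat -port 10000

:: Second Anaconda Prompt window: start a second client on port 10001.
cd %MALMO_MINECRAFT_ROOT%
launchClient.bat -port 10001
```

Each multi-agent role needs its own running Minecraft client, which is why two windows (and two ports) are required.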
It works! Thank you very much! |
No, at the moment I'm trying to figure out how to get the observations and how many actions are in the action space. Do you have any info about that? I see the agents in both Minecraft instances, but they only look each other in the face and do nothing. At the end of the countdown I see both in full size; I don't know why.
Sorry, I get the same situation where the agents only look at each other too. But when I printed the observations, they were changing, so maybe that's just what it looks like while the code runs? You can find some information about the envs in 'marLo-master\marlo\envs'. I got some information there and hope it'll help you.
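To make "printing the observations" concrete: MarLo envs follow the Gym convention, where `reset()` returns the first observation and `step()` returns a 4-tuple. The class below is a hypothetical stub standing in for the real env (which needs a running Minecraft client); the frame shape and the contents of `info` are assumptions for illustration only.

```python
import numpy as np

class StubMarloEnv:
    """Tiny stand-in for a Gym-style MarLo env (hypothetical: the real
    env talks to a running Minecraft client and returns RGB frames)."""

    def reset(self):
        # The real env returns the first game frame; here, a dummy black frame.
        return np.zeros((84, 84, 3), dtype=np.uint8)

    def step(self, action):
        # Gym convention: (observation, reward, done, info).
        obs = np.zeros((84, 84, 3), dtype=np.uint8)
        reward = 0.0
        done = False
        info = {}  # extra mission data can show up in here
        return obs, reward, done, info

env = StubMarloEnv()
obs = env.reset()
obs, reward, done, info = env.step(0)
print(obs.shape)  # (84, 84, 3)
```

Printing `obs` (or `info`) after each `step()` call is the simplest way to see what the env is actually handing back while the agents appear idle on screen.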
Could be. Yeah, I found the info about the envs, but the problem is that I can't find proper docs where the functions to extract observations are explained. Do you know where to look for that?
Sorry, I can't find the docs either, and I have no idea how to extract the info. Sorry that I can't help you with this.
`observation = env.reset()` — that's the normal way. The observation is a picture of the actual game taken at that step, so you can train your agent on it. `action = np.argmax(action_outputs)` — that's how I pass actions to the API. But I didn't get continuous movements to work, I don't know why, so my agent is really stupid at the moment. There is a kind of hidden doc: go to the examples and look at how they are implemented. Hope that helps!
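The `np.argmax` step above can be sketched as a small greedy-selection helper. The network outputs here are made-up numbers for a hypothetical 4-action discrete action space; only the argmax pattern itself comes from the comment.

```python
import numpy as np

def select_action(action_outputs):
    """Greedy action selection: index of the highest-scoring output."""
    return int(np.argmax(action_outputs))

# Hypothetical network outputs for a 4-action discrete action space.
action_outputs = np.array([0.1, 0.7, 0.15, 0.05])
action = select_action(action_outputs)
print(action)  # 1
```

The resulting integer is what gets passed to `env.step(action)`; always picking the argmax is purely exploitative, which is one reason a freshly initialized agent can look "stupid" early in training.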
Sorry to bother you guys, but when I run the multi_agent example, I always get the 'not enough clients in the client pool' error. I checked the issues for a solution, but I still don't know how to fix the problem of running role 0 first. How can I run multi-agent successfully? Thank you!
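One common cause of 'not enough clients in the client pool' is that the pool passed to the env lists fewer clients than there are agent roles. A minimal sketch of a two-client pool, assuming two Minecraft instances were launched on ports 10000 and 10001 as described earlier; the env ID and parameter name in the comment are illustrative, not confirmed from this thread.

```python
# One (host, port) entry per running Minecraft client; a two-agent
# mission needs at least two entries, one per role.
client_pool = [('127.0.0.1', 10000), ('127.0.0.1', 10001)]

# With the real library you would pass the pool at env creation,
# roughly like this (untested, needs both Minecraft clients running):
# import marlo
# join_tokens = marlo.make('MarLo-SomeMultiAgentEnv-v0',   # hypothetical ID
#                          params={"client_pool": client_pool})

print(len(client_pool))  # 2
```

If the pool only contains one entry, the second role has no client to claim, which matches the error seen when starting role 0 first.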