Just want to say - thank you! #50
Replies: 13 comments
-
Great, thanks for the feedback, so glad you like it! Yes … For your other questions, I have written a few blog posts about the development which may help: https://domwil.co.uk/minimaxer/. Part 2 discusses presorting, and part 4 has the …
I have actually removed … In the latest versions there is an … I do want to look into some neural net approaches at some point; maybe one day I will find the time. Would be happy to chat about it, and can look through your code if you want help trying to get it to work.
-
Wow! Your blog posts are super helpful, I'll have to read them over a few times :) (consider linking to them in your README if you haven't already). I never thought about the generator function, super interesting, I'll have to wrap my head around how you've implemented it at some point. You're blowing my mind. Simply setting that option to … At this point my usage of your algorithm allows my AI to search a depth of 2 in 2 seconds, while a depth of 3 takes over a minute. One interesting observation when I use the …:
Early in my game, …
Later in my game, …
How is it possible that SO many more nodes are searched in the same amount of time? I'll do some digging, but figured I should ask in case it's something obvious.
EDIT: it seems like my GetMoves callback is to blame (I'm trying really hard to stay on top of performance, but WOW it is difficult). I guess your blog post on measuring performance will come in handy for me, heh.
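Rough sketch of the kind of timing I have in mind, assuming a getMoves-style callback of my own (nothing minimaxer-specific; the names are placeholders):

```ts
// Timing sketch: wrap your own move-generation callback to see how much
// total time it consumes across an entire search.
function timed<A extends unknown[], R>(label: string, fn: (...args: A) => R) {
  let totalMs = 0;
  let calls = 0;
  const wrapped = (...args: A): R => {
    const start = performance.now();
    const result = fn(...args);
    totalMs += performance.now() - start;
    calls += 1;
    return result;
  };
  const report = () =>
    console.log(`${label}: ${calls} calls, ${totalMs.toFixed(1)} ms total`);
  return { wrapped, report };
}

// Usage sketch: pass `wrapped` wherever the search expects the callback,
// then call report() once the search has finished.
// const { wrapped, report } = timed("getMoves", myGetMoves);
```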
-
One more question. Do I set the …? (I'm checking by logging ….)
-
Thanks. I am happy you found the posts helpful and that the options are helping the performance. Sounds like you have the time search working as well. Yes, I need to update the README and all the docs; it's probably quite difficult for people to use at the moment. Got a few improvements to make to the code as well, particularly the maxn algorithm (3+ players). You will have to make a few changes, but I recommend the latest package version; it has some potentially faster sort functions as well. I intend not to change the API again. Warning: I think I did some awful things like changing the argument order in some of the callbacks. It looks like your game has quite a large branching factor if you are getting 2 million nodes within 3 moves. You might start to run into memory limitations with a tree that large; I had issues searching for 5+ seconds. For performance, I recommend using a flamegraph to see the functions that are taking the most time. I've used Clinic.js before and it was super helpful. The 3 things that might be improvable are:
- If possible, only clone the subset of the gamestate that changes with the move that is played (see line 510 here; a rough sketch follows after this list). I had a quick look at your game and I'm not sure if this is viable. You might be able to benchmark each function individually as well (blog post and code). I guess this is the blog post you are referring to.
- One trick is to remove moves that are obviously bad to reduce the search space. Only possible with some games.
- If you are using negamax, setting the node aim in the create child callback does nothing, however you must set it correctly on the root node. I need to document this properly.
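To illustrate the cloning point, a generic sketch (made-up Board/Move types, not your game's structures and not minimaxer code): copy only the parts the move mutates and share everything else by reference.

```ts
// Illustration only: instead of deep-cloning the whole gamestate per child,
// copy just the pieces the move touches and share the rest by reference.
interface Board {
  cells: number[];          // mutated by moves
  history: string[];        // only appended to
  config: { size: number }; // never changes during a game
}

interface Move { index: number; value: number }

function applyMoveCloned(board: Board, move: Move): Board {
  const cells = board.cells.slice(); // copy only the array that changes
  cells[move.index] = move.value;
  return {
    cells,
    history: [...board.history, `${move.index}:${move.value}`],
    config: board.config,            // shared reference, never mutated
  };
}
```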
-
Excellent, appreciate the recommendations. Never heard of Clinic.js, but with the Chrome profiler / flamegraph I was able to optimize some of the obvious culprits (Object.assign, map / reduce). I'll try that tool out and hopefully squeeze some more performance out of things. OK, sounds great about the … Thanks again!!
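For anyone reading along, the sort of swap I mean is roughly this (illustrative only, not my real code):

```ts
// In a function called millions of times, a plain loop avoids the closure
// and allocation overhead of reduce/map. `scores` stands in for per-move values.
function sumReduce(scores: number[]): number {
  return scores.reduce((acc, s) => acc + s, 0);
}

function sumLoop(scores: number[]): number {
  let total = 0;
  for (let i = 0; i < scores.length; i++) {
    total += scores[i];
  }
  return total;
}
```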
-
Ah nice, did not know Chrome had a profiler. I have just made some large changes to the docs; it would be good to know what you think.
Also, one thing I forgot to add that might be relevant: setting …
-
Super useful, nice! One thing I noticed in your mancala example is that in the child callback you are manually setting … Also, you are using …
Does this mean I can manually set the …? One thing I'm curious about is why …
-
I'm looking forward to …
-
Thanks for the feedback. Yes, I need to sort out the examples; they aren't very good at the moment and I keep using them to test stuff, hence why there is a weird combo of options. I do use the mancala one for minimax though, hence explicitly setting the aim. Optimal negamax just uses a different recursive function without all the if statements etc. checking for options, so it is slightly quicker. See normal and optimal. It's only a few percent quicker, so you can just set the options you like with …
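Very roughly, the difference looks like this (a toy sketch on a static value tree, not the actual minimaxer code):

```ts
// Toy sketch of why a specialised recursion can be marginally faster:
// the general version re-checks options at every node, the specialised
// one has those decisions baked in.
interface TreeNode { value: number; children: TreeNode[] }
interface Options { presort: boolean }

function searchGeneral(node: TreeNode, opts: Options): number {
  if (node.children.length === 0) return node.value;
  const children = opts.presort
    ? [...node.children].sort((a, b) => b.value - a.value)
    : node.children;
  let best = -Infinity;
  for (const child of children) {
    best = Math.max(best, -searchGeneral(child, opts)); // negamax-style flip
  }
  return best;
}

// Specialised: no options object, no per-node branching on settings.
function searchOptimal(node: TreeNode): number {
  if (node.children.length === 0) return node.value;
  let best = -Infinity;
  for (const child of node.children) {
    best = Math.max(best, -searchOptimal(child));
  }
  return best;
}
```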
-
Random selection with pruning is tricky. The issue is that the values of the nodes aren't necessarily correct if any of their children have been pruned. So you could end up in a situation where a few nodes appear to have the same value, but one of them is actually far worse; it never checked its worse child because that child was pruned. I have a few ideas to fix it:
- One would be to track whether any descendants of a node have been pruned, and to discount that node from random selection. For your game, when only looking 2 moves ahead, this might work well.
- Another idea is to change the pruning condition from >= to >, therefore keeping all children that have the same value. Then the …
- Another is to maintain at least the best n moves by searching their sub-trees completely, so sort of a cross between pruning and no pruning. First thought is that this would be complex to implement.
Edit: You can also add randomness to your evaluation function to make the bot play more interestingly.
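For example, something along these lines (a generic sketch; baseEvaluate and the jitter size are stand-ins for your own evaluation function):

```ts
// Sketch: add a small random jitter to the evaluation so the bot breaks
// ties between near-equal moves differently from game to game.
function noisyEvaluate(
  baseEvaluate: (state: unknown) => number,
  jitter = 0.01,
): (state: unknown) => number {
  return (state) => baseEvaluate(state) + (Math.random() - 0.5) * jitter;
}

// Usage sketch: pass noisyEvaluate(myEvaluate) to the search instead of myEvaluate.
```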
-
Yeah, alpha-beta pruning is more complicated and less effective, which will be a challenge. There are also a number of interesting strategies to use when choosing the best moves. The basic algorithm assumes each player tries to maximise their own score, but you could do things like: …
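For reference, the basic "each player maximises their own score" assumption can be pictured roughly like this (a toy sketch on a static tree, not how minimaxer implements maxn):

```ts
// Toy max-n sketch for 3+ players: each node carries one score per player,
// and the player to move picks the child whose score vector is best for them.
interface MaxnNode {
  playerToMove: number; // index into the scores array
  scores?: number[];    // set at leaves by the evaluation
  children: MaxnNode[];
}

function maxn(node: MaxnNode): number[] {
  if (node.children.length === 0) return node.scores ?? [];
  let best: number[] | undefined;
  for (const child of node.children) {
    const childScores = maxn(child);
    if (
      best === undefined ||
      childScores[node.playerToMove] > best[node.playerToMove]
    ) {
      best = childScores;
    }
  }
  return best ?? [];
}
```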
-
I considered this but haven't tried it yet. I haven't thought through the consequences for performance (like whether pruning would be less effective). I'll play around with it! Question for you: I see in your normal and optimal that … I have to revisit the combination of options vs …
-
Yeah, I imagine the RNG overhead could be a bit much with 1 million+ nodes. Hard to say what the impact on pruning would be; my gut feeling is that on average it wouldn't change. The class inheritance got a bit out of hand, so some of the functionality is a bit hidden. The generator is created in getChildren, which the normal negamax function calls. Optimal does it directly to avoid the function call overhead and unnecessary checks. Changing the options throughout the game is a neat idea I haven't tried yet. I have altered the evaluation function though, which can be quite useful. Also going to move this to discussions.
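The rough idea of the generator approach, stripped of all the library plumbing (hypothetical State shape, not minimaxer's internals):

```ts
// Children are produced lazily, so siblings after a pruning cutoff are
// never constructed at all.
interface State { remaining: number[]; score: number }

function* childGenerator(state: State): Generator<State> {
  for (const move of state.remaining) {
    // A child is only built when the search pulls the next value.
    yield {
      remaining: state.remaining.filter((m) => m !== move),
      score: state.score + move,
    };
  }
}

// Usage sketch: the consumer can break out early on an alpha-beta cutoff.
// for (const child of childGenerator(parent)) { /* ...break on prune... */ }
```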
-
Your implementation is very nice and has been very useful. I've replaced my shoddy implementation from years ago with yours and it has allowed me to (mostly) focus on the other aspects of building a game AI.
Tonight I stumbled upon using your presort and pruneByPathLength options, and it solved an issue where my AI would, instead of making a winning move, make a different move that eventually would lead to a win. (So I guess this is what pruneByPathLength is for?)
Are there any tradeoffs to using pruneByPathLength and presort? They seem to speed up the algorithm, so it isn't clear to me whether there is a negative consequence.
Some things I still need to explore:
- genBased, because Chrome garbage collection is taking up some time.
- postSort, and whether it actually speeds up the algorithm.
Would be interesting to chat at some point. Curious if you have explored any neural net approaches?
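(Back to pruneByPathLength for a second: the generic trick I would otherwise reach for is discounting wins by depth, roughly like the sketch below; no idea whether that is what the option actually does internally.)

```ts
// Generic "prefer the quicker win" sketch, independent of minimaxer:
// discount terminal scores by depth so a win found in 1 move outscores
// the same win found 5 moves deeper, and losses are postponed.
const WIN_SCORE = 1_000_000;

function terminalScore(iWin: boolean, depthFromRoot: number): number {
  return iWin
    ? WIN_SCORE - depthFromRoot   // shallower wins score higher
    : -WIN_SCORE + depthFromRoot; // deeper losses score (slightly) higher
}
```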
In any case - thank you for sharing your code which is well crafted.