Markov part 2
As shown in my first Markov post, I now have a piece of AI which can predict what is going to happen next based on previous experience. From the testing I've done, this piece of technology works exactly as I expected, but there is one small problem: it isn't fun. Because I'm using this technology in a video game, there needs to be some potential for enjoyment, and playing soccer against a machine that always makes the perfect decision just isn't enjoyable.
To fix this problem I have added some extra functionality to my script. Originally the AI would determine where the ball was going to go next and would always go that way; my plan now is to implement a roulette wheel style of selection.
The roulette wheel selection checks the total number of times the player has been in their current situation, and then considers every choice they have previously made from it. Instead of simply picking the most common previous choice and predicting that, this style gives each option a chance proportional to how many times it has been chosen before, and uses random number generation to decide which option is selected. The more often an option has been taken by the player, the greater representation it has on the wheel and therefore the greater its chance of being chosen.
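For anyone curious what that looks like in code, here is a minimal sketch of the idea in Python. My actual script isn't written in Python and the choice names and data structure here are purely illustrative, but the weighting logic is the same: each option gets a slice of the wheel sized by how often it has been seen.

```python
import random

def roulette_select(choice_counts):
    """Pick a choice with probability proportional to how often it was observed.

    choice_counts: dict mapping a choice (e.g. "pass_left") to the number of
    times the player made that choice in the current situation.
    """
    total = sum(choice_counts.values())
    # Spin the wheel: a random point somewhere along the total count.
    spin = random.uniform(0, total)
    running = 0
    for choice, count in choice_counts.items():
        running += count
        if spin <= running:
            return choice
    # Fallback for floating point edge cases: return the last option.
    return choice

# Illustrative example: the player has passed left 6 times, shot 3 times,
# and dribbled once from this situation.
counts = {"pass_left": 6, "shoot": 3, "dribble": 1}
print(roulette_select(counts))  # prints "pass_left" roughly 60% of the time
```

Because the selection is weighted rather than absolute, the most common choice still wins most of the time, but the rarer options get picked occasionally too.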
The end result is that, whilst the Markov chain still generally makes the correct choice, it feels more real because it is no longer perfect.