This article first explains the rationale behind the bot architecture used in CodinGame-Scala-Kit, then demonstrates the architecture style with code.
A bot is a computer program that communicates with a referee system; a referee system controls two or more bots.
The referee defines the game rules: it distributes the game state to the bots, collects their actions, and updates the game, repeating this distribute-collect-update loop until the game is over.
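The distribute-collect-update loop can be sketched as follows. This is an illustrative model only; `State`, `Action`, `Bot`, and the toy three-turn rule are assumptions, not the actual referee API.

```scala
// Hypothetical sketch of a referee's distribute-collect-update loop.
object RefereeLoop {
  final case class State(turn: Int, over: Boolean)
  final case class Action(move: String)

  // A bot, seen from the referee, is a function from observed state to action
  type Bot = State => Action

  def update(state: State, actions: Vector[Action]): State =
    // toy rule: the game ends after 3 turns, regardless of the actions
    State(state.turn + 1, over = state.turn + 1 >= 3)

  def play(bots: Vector[Bot], initial: State): State = {
    var state = initial
    while (!state.over) {
      val actions = bots.map(b => b(state)) // distribute state, collect actions
      state = update(state, actions)        // update the game
    }
    state
  }
}
```

A real referee would also validate actions and decide a winner; the point here is only the shape of the loop.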
The following design principles are driven by challenges in bot programming. Each principle tackles one problem from a specific perspective, and the final architecture proposal should take all of them into account.
As explained above, a bot should
- communicate with the referee system
- model the game domain such as game rules, state, action
- implement a fighting strategy
Among these concerns, the communication mechanism is strictly constrained by the referee system. We have more flexibility in domain modeling, but the game rules must be respected. The strategy is the heart of the bot, and it is where we can be creative. On the one hand, separating these concerns lets us distinguish the fixed parts from the moving parts; on the other hand, it helps us invest time and effort in the right module.
Debugging helps developers diagnose a bot’s behavior. However, a bot is often executed on a remote server, and even if it is possible to write some logs, the debugging capabilities remain limited. If we could replay the game in the developer’s local environment, these debugging issues would be resolved. Therefore, the architecture should enable developers to write replayable code.
In most games, the player who looks ahead more steps than the others wins; better performance leads to better results. That said, premature optimization is the root of all evil, so performance tuning should be driven by measurement. The proposed architecture should make benchmarking and profiling easy to set up.
A well-designed game should have a large action space. A game remains playable only when one cannot figure out a winning strategy within a reasonable amount of time.
Given that it is usually extremely difficult to solve a game analytically, we often try a candidate action and check how good it is. To know whether an action is a good one, we play what-if scenarios: in a what-if scenario, we apply an action to a given state and then assess the quality of the updated game state. Playing a what-if scenario is called a simulation.
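The try-and-assess idea above can be sketched as a pure function. The types and the distance-based heuristic below are illustrative assumptions, not the kit's actual API:

```scala
// Minimal what-if simulation sketch.
object WhatIf {
  final case class GameState(x: Int)
  sealed trait Move
  case object MoveLeft  extends Move
  case object MoveRight extends Move

  // apply an action to a state, producing the updated state (no side effects)
  def simulate(s: GameState, m: Move): GameState = m match {
    case MoveLeft  => GameState(s.x - 1)
    case MoveRight => GameState(s.x + 1)
  }

  // assess the quality of a state (toy heuristic: closer to x = 10 is better)
  def score(s: GameState): Int = -math.abs(10 - s.x)

  // try each candidate action and keep the one whose what-if state scores best
  def bestMove(s: GameState, candidates: Seq[Move]): Move =
    candidates.maxBy(m => score(simulate(s, m)))
}
```

Because `simulate` is pure, the same function serves both the live bot and offline experiments.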
The Input/Output layer handles communication with the referee system. It reads input into a state and writes an action to output.
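An I/O layer of this shape might look like the sketch below. The line format and all names here are assumptions for illustration; the kit's real parsers target each game's specific protocol:

```scala
// Sketch of an I/O layer: parse referee input into a state, and serialize
// the chosen action back into the format the referee expects.
object GameIO {
  final case class State(myHp: Int, oppHp: Int)
  final case class Action(command: String)

  // read one turn's input (here assumed to be a single line: "myHp oppHp")
  def readState(line: String): State = {
    val Array(a, b) = line.trim.split(" ")
    State(a.toInt, b.toInt)
  }

  // write the chosen action as an output line
  def writeAction(a: Action): String = a.command
}
```

In a live bot, `readState` would be fed from `scala.io.StdIn.readLine()` and `writeAction`'s result passed to `println`; keeping the parsing pure makes it testable offline.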
Behind the I/O layer lies the domain layer, where we model the game state and actions. They are pure input and output data for the bot logic; all I/O-related side effects are removed.
The Bot module is a logic module responsible for making action decisions upon game state changes. It is where most of the work should be done.
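Put together, the model and logic layers reduce to plain data and a pure decision function. The types and the threshold rule below are illustrative, not the kit's actual code:

```scala
// Sketch of the model and logic layers: state and action are pure data,
// and the bot is a pure function from state to action.
object BotLogic {
  final case class State(myHp: Int, oppHp: Int) // pure input data
  sealed trait Action                           // pure output data
  case object Attack extends Action
  case object Heal   extends Action

  // decision logic: no I/O, so it is trivial to test, replay, and benchmark
  def react(s: State): Action =
    if (s.myHp < 20) Heal else Attack
}
```

Because `react` touches no I/O, the same function can be driven by the referee in production and by recorded states during local debugging.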
Separating the concerns into I/O, Model, and Logic layers helps us meet our requirements for debugging, performance, and simulation.
- Debugging: to replay a game locally, we only need to serialize the state object
- Performance: to benchmark the bot logic, we only need to feed it the serialized state object
- Simulation: the simulator takes an action and an existing state as input and produces an updated state
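The debugging point above hinges on state serialization. A minimal sketch, with assumed types and a one-line text format (the kit's actual serialization differs):

```scala
// Sketch of local replay: if the state serializes to a single line, a logged
// game can be replayed by feeding those lines back into the decision logic.
object Replay {
  final case class State(turn: Int, hp: Int)

  def serialize(s: State): String = s"${s.turn} ${s.hp}"

  def deserialize(line: String): State = {
    val Array(t, h) = line.trim.split(" ")
    State(t.toInt, h.toInt)
  }

  // rebuild each logged state and re-run the decision logic locally
  def replay(log: Vector[String], react: State => String): Vector[String] =
    log.map(line => react(deserialize(line)))
}
```

A round trip (`deserialize(serialize(s)) == s`) is the property that makes a remote game reproducible on a developer's machine.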
The following code examples are taken from CodinGame-Scala-Kit.
For more details on how the state is serialized, refer to my first post on debugging in CodinGame-Scala-Kit.
Benchmarking and profiling are powered by JMH, the sbt-jmh plugin, and Java Flight Recorder.
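A JMH benchmark for the logic layer can be as small as the sketch below. It assumes the sbt-jmh plugin is enabled in the build (`addSbtPlugin("pl.project13.scala" % "sbt-jmh" % ...)` with a real version number); `GameState` and `Bot.react` are hypothetical stand-ins for the kit's actual types:

```scala
import org.openjdk.jmh.annotations.{Benchmark, Scope, State}

// Hypothetical state and logic under test; replace with the kit's real types.
final case class GameState(hp: Int)
object Bot {
  def react(s: GameState): String = if (s.hp < 20) "HEAL" else "ATTACK"
}

@State(Scope.Benchmark)
class BotBenchmark {
  // in practice, deserialize a state captured from a real game
  val state = GameState(50)

  // JMH measures the throughput/latency of the pure decision function
  @Benchmark
  def react(): String = Bot.react(state)
}
```

It runs with `sbt "jmh:run BotBenchmark"`; because the logic layer is I/O-free, a single serialized state is all the benchmark needs.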
The architecture proposal is influenced by ideas from functional programming, such as
- side effects isolation
- data and logic separation
Please feel free to leave your comments.