Machine Learning masterclass #1 – How Coda uses ML to predict LTV
At Coda we use machine learning (ML) in three different areas, and over the coming weeks we are going to explain these processes in a little more depth.
The first area in which we use ML is to predict the success of a game. We have developed a Lifetime Value (LTV) model which enables us to calculate how much a game is likely to make for its creator. As soon as a game is published on the app store we monitor how well it’s doing and, using machine learning, we are able to quickly gauge how successful it will be.
The second use for ML is game annotation. We curate images of games on the app stores and label them with attributes such as genre, artistic style, game mechanic, 2D or 3D, and so on. Until recently this had to be done manually. Now a deep learning model has taken over: it takes screenshots of untagged games and adds those tags automatically.
The last one, which is currently in development, is the SDK client model. Even though it’s in its early stages we are very excited about its huge potential to optimise games. We will talk more about this in a few weeks.
We wanted to share with you a bit more about the role that machine learning plays in determining a game’s LTV, and how we use this information. So here Muhammed Miah, ML Engineer at Coda, explains the process in more detail.
One of the ways Coda uses machine learning is to assess the potential value of a game. What process do you go through to set this up?
So ‘value’ here refers to the total advertising revenue that we expect a game to generate. Two key components are necessary to make that assessment possible: the data and the algorithm.
Starting with user acquisition data, we look at how much it costs to get a user to play the game and couple that with actual gameplay behaviour. The moment a game is published on the app store it starts generating interesting metrics. For example, we get early clues about its revenue potential from how long users spend in their first session and whether they come back to the game.
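To make that concrete, here is a minimal sketch of how early signals like these could be pulled out of raw session data. The table layout, column names and helper function are illustrative assumptions for this post, not Coda’s actual schema or pipeline.

```python
import pandas as pd

def early_signals(events: pd.DataFrame) -> pd.DataFrame:
    """Derive simple early engagement signals per user from raw session events.

    Assumes a table with columns user_id, session_start and session_end
    (timestamps); these are illustrative names, not Coda's real schema.
    """
    events = events.sort_values(["user_id", "session_start"])
    first = events.groupby("user_id").first()

    # Length of each user's very first session, in minutes.
    first_session_minutes = (
        first["session_end"] - first["session_start"]
    ).dt.total_seconds() / 60

    # Did the user come back more than a day after first opening the game?
    last_seen = events.groupby("user_id")["session_start"].max()
    returned = (last_seen - first["session_start"]) > pd.Timedelta(hours=24)

    return pd.DataFrame({
        "first_session_minutes": first_session_minutes,
        "returned_after_day_one": returned.astype(int),
    })
```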
One thing worth mentioning is that we have historical data for every game that has gone through Coda. We are in fact comparing the behaviour of users of a specific game with that of all past Coda games, so we can see where it ranks against other games while making the prediction.
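One way to picture that ranking step: take a new game’s early metric and read off where it sits in the distribution of the same metric across past titles. A toy illustration follows; the numbers are made up, not Coda data.

```python
import numpy as np

def percentile_rank(new_value: float, historical_values: np.ndarray) -> float:
    """Share of past games that the new game matches or beats, expressed 0-100."""
    return 100.0 * np.mean(historical_values <= new_value)

# Hypothetical day-1 return rates of past games vs. a new title's rate of 0.48.
past_rates = np.array([0.22, 0.31, 0.35, 0.41, 0.44, 0.50, 0.55])
print(f"New game percentile vs. past titles: {percentile_rank(0.48, past_rates):.0f}")
```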
The prediction itself is made by a machine learning algorithm called a ‘Random Forest’. It is fed the data mentioned above and estimates how long users are likely to play the game and how many ads they are likely to see in that time. From that, the algorithm arrives at a figure for how much the game will make.
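As a rough illustration of how a Random Forest could turn those signals into a revenue figure, here is a sketch using scikit-learn. The feature set, the numbers and the impressions-times-eCPM conversion are simplifying assumptions made for this post; the production model is more involved.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Illustrative per-game early-signal features (not Coda's real feature set):
# [median first-session minutes, day-1 return rate, cost per install in USD]
X_history = np.array([
    [6.2, 0.41, 0.35],
    [3.1, 0.22, 0.50],
    [9.8, 0.55, 0.30],
    # ... in practice, many more historical games
])
# For each historical game: average ad impressions a user saw over their lifetime.
impressions_history = np.array([85.0, 30.0, 140.0])

forest = RandomForestRegressor(n_estimators=200, random_state=0)
forest.fit(X_history, impressions_history)

# A newly published game, described by its first days of data.
new_game = np.array([[7.4, 0.48, 0.33]])
expected_impressions = forest.predict(new_game)[0]

# Convert impressions to revenue with an assumed eCPM (revenue per 1,000 ads shown).
ECPM_USD = 9.0  # illustrative figure, not an actual rate
ltv_per_user = expected_impressions * ECPM_USD / 1000
print(f"Expected ad impressions per user: {expected_impressions:.0f}")
print(f"Predicted per-user LTV: ${ltv_per_user:.2f}")
```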
How will this evolve in the future?
We want to predict more than just lifetime value. We want to gauge players’ total playtime as well as engagement per level. This will give us a better understanding of our users and of exactly which parts of our games they enjoy the most.
Also, we will be rolling out models to help us understand a game’s revenue potential at every stage of its lifecycle. With this in place, we will have an idea of what to expect even from just the adverts that bring users into the game. I believe this will have tremendous value.
So once you have the data and LTV what happens next?
One of the key reasons for ascertaining the lifetime value is to help us make a decision about whether to go forward with the game or not.
The question boils down to: ‘Is the lifetime value bigger than the user acquisition cost?’ If the predicted LTV is much higher than the user acquisition cost, we go forward with the game. If it’s not much higher, or is actually lower than the user acquisition cost, then it’s clear that we have to make changes. The worst-case scenario, of course, is having to cancel the game.
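In code, that go/no-go logic is essentially a comparison between the predicted LTV and the user acquisition cost. A toy version follows; the 1.3x margin and the wording of the outcomes are illustrative, not Coda’s actual thresholds.

```python
def decide(predicted_ltv: float, user_acquisition_cost: float,
           margin: float = 1.3) -> str:
    """Toy go/no-go rule: compare predicted per-user LTV with the cost of
    acquiring that user. The 1.3x margin is an assumed threshold."""
    if predicted_ltv >= margin * user_acquisition_cost:
        return "scale up user acquisition"
    if predicted_ltv >= user_acquisition_cost:
        return "iterate on the game before spending more"
    return "rework or, in the worst case, cancel"

print(decide(predicted_ltv=0.75, user_acquisition_cost=0.40))  # scale up user acquisition
print(decide(predicted_ltv=0.35, user_acquisition_cost=0.40))  # rework or, in the worst case, cancel
```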
So far we have put over 1,500 games through the Coda pipeline. This has given us an enormous amount of data about what makes a game successful, and I am glad to be able to leverage it when new games enter the Coda ecosystem.
So it is about balancing the cost to acquire users with the LTV?
Correct. What we want is for the machine learning algorithm to suggest that a game will make more than we spend on it. When that is true, we can then push that game forward much faster and acquire users more aggressively.
What is interesting is that our machine learning system is self-optimising: as more data comes in, it automatically retrains and produces better predictions. In essence, this is why we chose to add machine learning to this step of the process. Now that this is automated, we are able to pursue promising games much faster and help many more developers.
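A minimal sketch of that retraining idea, assuming new game outcomes are simply folded into the training set and the forest is refit; in reality this would run on a schedule, with validation before a new model replaces the old one.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def retrain(X_history: np.ndarray, y_history: np.ndarray,
            X_new: np.ndarray, y_new: np.ndarray) -> RandomForestRegressor:
    """Append the latest games' features and observed LTVs, then refit the forest."""
    X = np.vstack([X_history, X_new])
    y = np.concatenate([y_history, y_new])
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X, y)
    return model
```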