How Much Do Financial Incentives Affect Your Product?

Starting in October of 2014, a group of statisticians has been analysing one of the biggest surveys of the game development industry ever undertaken. After putting production methodologies like Agile, Scrum, and Waterfall, as well as various financial incentive schemes, under the microscope, the extent to which they affect the final product might surprise you.

Scrum Board picture by Shutterstock

Called the Game Outcomes Project, the survey asked over 120 questions spanning all aspects of developers' careers, and received over 750 responses. These were weighed against their projects' success indicators, such as ROI, delays, critic ratings, and internal goals.

This survey was focused on the games industry, but one could easily extrapolate some of its larger points to the wider software industry, especially when it comes to issues like crunch (which we'll get to later in the week).

So how much of an effect does production methodology have?

According to the Game Outcomes Project, not much!

Image by Game Outcomes Project

To quote Paul Tozour, who leads the project:

Given that production methodologies seem to be a game development holy grail for some, one would expect to see major differences, and that Scrum in particular would be far out in the lead. But these differences are tiny, with a huge amount of variation in each category, and the correlations between the production methodology and the score have a p-value too high for us to deny the assumption that the data is independent. Scrum, agile, and “other” in particular are essentially indistinguishable from one another.

“Unknown” is far higher than one would expect, while “Other/ad-hoc” is also remarkably high, indicating that there are effective production methodologies available that aren’t on our list (interestingly, we asked those in the “other” category for more detail, and the Cerny method was listed as the production methodology for the top-scoring game project in that category).

That's one of the more surprising findings of the project, and a hard one to explain. There are many who swear by their production methodologies, but the empirical data here suggests, at least in the gaming space, that perhaps some are overhyped.
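To see what "a p-value too high to deny independence" means in practice, here's a minimal permutation-test sketch. The scores below are made up for illustration (they are not the project's data), and `perm_test_diff_of_means` is a hypothetical helper name: when two groups overlap this heavily, random relabelings produce gaps as large as the observed one most of the time, so there's no evidence that the grouping predicts the score.

```python
import random
from statistics import mean

random.seed(0)

# Hypothetical outcome scores by methodology (illustrative only).
groups = {
    "scrum":     [62, 58, 65, 60, 57, 63],
    "waterfall": [59, 61, 56, 64, 60, 58],
}

def perm_test_diff_of_means(a, b, n_perm=10_000):
    """Two-sided permutation test: how often does a random
    relabeling of the scores produce a mean gap at least as
    large as the observed one?"""
    observed = abs(mean(a) - mean(b))
    pooled = a + b
    hits = 0
    for _ in range(n_perm):
        random.shuffle(pooled)
        gap = abs(mean(pooled[:len(a)]) - mean(pooled[len(a):]))
        if gap >= observed:
            hits += 1
    return hits / n_perm

p = perm_test_diff_of_means(groups["scrum"], groups["waterfall"])
# With groups this similar, p lands well above 0.05, so we
# cannot reject the assumption that methodology and score
# are independent.
print(f"p = {p:.3f}")
```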

As for the financial incentives, the project quizzed participants on the types of incentives offered: individual, team-based, Metacritic-based, and royalties. Guess which option had the best result?

It looks like individual incentives win out. Again, to quote Tozour for perspective on the statistical significance:

Of these five forms of incentives, only individual incentives showed statistical significance. Game projects offering individually-tailored compensation (64 out of the 273 responses) had an average score of 63.2 (standard deviation 18.6), while those that did not offer individual compensation had a mean game outcome score of 56.5 (standard deviation 17.7).
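The quoted summary statistics are enough to check that significance claim yourself. A sketch using Welch's t-test (the project doesn't say which test it ran, so this is an assumption), with the second group size derived as 273 minus 64, and a normal approximation for the p-value since the degrees of freedom are large:

```python
from math import sqrt
from statistics import NormalDist

# Summary statistics quoted from the Game Outcomes Project:
# 64 of 273 projects offered individual incentives.
n1, mean1, sd1 = 64, 63.2, 18.6    # individual incentives
n2, mean2, sd2 = 209, 56.5, 17.7   # no individual incentives (273 - 64)

# Welch's t-statistic for two groups with unequal variances.
se = sqrt(sd1**2 / n1 + sd2**2 / n2)
t = (mean1 - mean2) / se

# With well over 100 effective degrees of freedom, the
# t-distribution is close to normal, so a normal approximation
# of the two-sided p-value is reasonable here.
p = 2 * (1 - NormalDist().cdf(abs(t)))
print(f"t = {t:.2f}, p = {p:.3f}")  # t ≈ 2.55, p ≈ 0.011
```

The gap clears the conventional 0.05 threshold, consistent with Tozour's statement that only individual incentives showed statistical significance.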

We've long spoken ill of practices like offering bonuses based on Metacritic ratings, but it's always been more of a moral point. Developers missing out because of a few bad reviews (which may or may not have been accurate) doesn't seem fair. A project could miss a bonus threshold by half a point because some site gave it a 3 on a whim, and that takes things out of the hands of the developers.

But now it helps to have some empirical evidence to point to and say: no, actually, that just doesn't work. It'd be one thing if it were morals versus profit, but now it's morals and profit versus bad practice.

I highly encourage you to read the full post on the Game Outcomes Project blog, but we'll be putting up some more information on the project throughout the coming week. There were some great lessons learned, ideas challenged, and suspicions confirmed.

[The Intelligence Engine]

