In-Depth: Demystifying the IGF Judging Process
[In this informational piece, indie game creator and Independent Games Festival judge Jens Bergensten (Harvest: Massive Encounter) discusses the process of judging the IGF to help entrants understand what their game goes through, with personal views from two other 2010 IGF judges and IGF Chairman Simon Carless.]
This year I had the great privilege of taking part in the Independent Games Festival as a judge, getting a chance to evaluate the best independent games that have been recently released or are still in development. (The finalists for the Main Competition were recently announced.) This task has been really fun, even though it took a little more of my time than I had expected.
I wanted to share my experience because when we participated in the IGF in 2008 and 2009, the process was somewhat of a black box. Your game was dropped into the box and wasn't seen again until three months later, when an e-mail with commiserations and a few judge comments told you relatively little about the process your title went through.
Of course, this isn't that different from most competitions out there (even judge comments are rare in other contests), but we had no idea what had happened in that time, and we obviously wanted more insight into how the decisions were made, since the IGF is among the most important events for small game start-ups like ours.
So I wrote this article with the permission of the IGF organizers, based on my experience of being on both "sides", without ever getting the luxury of a nomination.
Differences between IGF 2009 and IGF 2010
First of all, some things had changed since the last IGF:
- Much greater emphasis on giving written feedback with your scores
- There was a "Nuovo" category. Judges were able to nominate games to this category (a "yes" or "no" option), and those games that were nominated would be evaluated by a committee.
- There were more than twice as many judges (over 150!).
In addition, the IGF judge site (new or not) worked great! It was very easy to navigate, and you could write messages on any game to the other judges. It was like a small community, except that it was not possible to see the list of judges, though you could see their names when they commented on particular games.
The scoring method this year was a value between 1 and 100 in each category, with specified intervals for the different scores. According to comments I've heard from judges in earlier years, this scale was more precisely defined this year.
The score ranges this year were:
90-100: Concept and execution are both superb, with no obvious faults. Very strong candidate for finalist in this category. Difficult to imagine further improvement.
80-89: Candidate for finalist in this category, with good execution on a solid concept. Possible improvements are more apparent.
60-79: Still above average, but unlikely to make finalist. Perhaps a strong execution on a less interesting concept, or a neat concept with lackluster execution.
40-59: An average experience. No major defects, but not very compelling either.
20-39: Actively failing on some aspect. Maybe the concept is terrible, or maybe the execution is severely lacking. They tried, but big problems.
1-19: Outright bad. Completely missed the mark.
The different categories were:
Excellence In Design: Scores will be based on the quality and execution of each entry's design, including game mechanic design, level design, and difficulty balancing.
Excellence In Audio: Scores will be based on the innovation, quality, and impressiveness of each entry's music and sound effects.
Excellence In Visual Art: Scores will be based on the innovation, quality, and impressiveness of each entry's appearance and visual effects.
Technical Excellence: Scores will be based on the technical mastery and innovation demonstrated by each entry's game engine and code base.
Overall Rating: Scores will be based on your impression of the game.
Note that the overall rating is not an average score. When I evaluated my set of games, I scored this value based on how the game made me feel. I didn't care if the game was visually stunning or innovative; I mainly cared whether it was enjoyable to play.
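As a rough illustration, the score bands listed above amount to a simple threshold lookup. This is just my sketch of the rubric, not anything from the actual judging software, and the short band labels are my own shorthand:

```python
# Sketch: map a 1-100 IGF score to its rubric band.
# Band labels are abbreviated from the article's descriptions;
# this is illustrative only, not official IGF code.
BANDS = [
    (90, "Superb: strong finalist candidate"),
    (80, "Finalist candidate, improvements more apparent"),
    (60, "Above average, unlikely finalist"),
    (40, "Average experience"),
    (20, "Actively failing on some aspect"),
    (1,  "Outright bad"),
]

def score_band(score: int) -> str:
    """Return the rubric band for a given score."""
    if not 1 <= score <= 100:
        raise ValueError("IGF scores run from 1 to 100")
    for floor, label in BANDS:
        if score >= floor:
            return label
    raise AssertionError("unreachable")
```

For example, a score of 70 falls in the 60-79 band: above average, but unlikely to make finalist.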
Judge Game List
Each judge was assigned a set of games depending on which tools he or she had available. I, for example, did not have access to any dev kits, so I was only able to judge games that had either a Windows or Mac OS X executable. I was given 14 games to evaluate, which seems to match up pretty well with the lists of my friends and fellow judges Alex May and Erik Svedang. Erik and I had one game in common; otherwise the games were randomly distributed.
It's worth pointing out that the game list was not sorted alphabetically. The order of the games seemed completely random, so having a game starting with an "A" gave no benefit here (it only helps in the list on the main site).
I was only allowed to score the games in my list, but I could try the other games if I liked, to gain perspective on the quality of the other titles. It was also possible to write comments on the games that would be displayed to the other judges (but not to the game authors). This was quite handy when you needed tips or technical support from the other judges.
The game presentation page looked like this (screenshot taken with permission from Jonatan, page with permission from IGF organizers):
As you can see, judges get both the short and the long description, together with contact information and download links. Some long descriptions (visible to judges only) contain a lot of walkthrough and technical information.
It's also worth noting that game developers are allowed to upload updates to their games while the competition is running. I always made sure I downloaded the latest build when I started evaluating a new game, and sometimes this was several weeks after the submission deadline (November 1st). A few weeks is an eternity to an indie game developer!
Judges who volunteered to evaluate the student games would receive a new list of games after the main competition had finished. This list was also displayed in a random order, and I received 15 games to evaluate. One difference to the main competition was that student games were only rated in a single category ("Overall"). The scores would be averaged among the judges who evaluated the games, and the best ones would be invited to GDC.
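The student-competition ranking described above (average the judges' single "Overall" scores, best games first) can be sketched in a few lines. The game names and score values below are hypothetical, purely for illustration:

```python
from statistics import mean

# Hypothetical student-game scores: game -> list of judges' "Overall" scores.
# These numbers are made up; they only illustrate the averaging described
# in the article, not real IGF data.
scores = {
    "Game A": [72, 65, 80],
    "Game B": [90, 88, 85],
    "Game C": [55, 60, 58],
}

# Rank games by their average score, best first; the top of this
# list would be the games invited to GDC.
ranking = sorted(scores, key=lambda game: mean(scores[game]), reverse=True)
```

With the sample numbers above, "Game B" (average 87.67) would rank first.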
Following all of the first-round judging, the finalists are given a couple of weeks to upload new versions of their games, if they have any updates. After that, the IGF judges have about a month to play all the finalists and simply vote for their winner in each category - quite straightforward.
Second Opinion: Alex May (Eufloria/Dyson creator, IGF judge.)
Having been a finalist last year and a judge this year, I found the change of perspective very interesting. I was given 14 games to judge, and less than a month in which to do it - a little over two days per game in my case. Judging by the general activity of the judge comments on the individual games, many judges, like me, left it quite late before starting.
Nonetheless I gave each game a good bash and brought in third parties I could trust for multiplayer games when other judges were unavailable. I was forced to leave significant written feedback for each game, which is excellent.
A quick bit of maths suggests that an entry in the 2010 IGF would have around 8 judges playing it, with one or two maybe unable to run it due to particular system and internet configurations (I noticed a few of these cropping up - people unable to connect to internet games etc).
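That bit of maths works out as follows, using the approximate figures mentioned in this article (150+ judges, roughly 14 games each, 300+ entries); since the judge and entry counts are both lower bounds, the real figure plausibly lands near the "around 8" estimate:

```python
# Back-of-the-envelope estimate of judges per IGF entry, using the
# approximate figures from the article. These are rough inputs, not
# exact IGF statistics.
judges = 150
games_per_judge = 14
entries = 300

total_assignments = judges * games_per_judge   # 2100 judge-game pairs
judges_per_entry = total_assignments / entries # 7.0, in the ballpark of "around 8"
```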
Overall I found the process really easy and fun. I could amend my scores and feedback any time I liked, right up until the votes were counted. This was good because sometimes my opinion of a game changed for better or worse on a return visit, and I was able to go back, add more feedback, and change the scores.
I would imagine that the 100-point scale was chosen to give good granularity in the results. I personally never strayed from multiples of 5, effectively rendering it a 20-point scale. Judges were not privy to any statistics beyond whether a game had received any scores at all.
A few things that I personally felt as a judge:
- I really wanted to give the best feedback I could, as I know what it feels like to receive it as an IGF entrant.
- I really wanted more time for each game. Partly my own fault.
- Given the choice, I would have preferred to be assigned more games rather than to have more time for the entries I already had.
- I felt the comments system was wasted on most of the judges, since many never commented on a game at all. Also, as a judge, I felt the system could have been even more open to us, showing who had submitted reviews, who was assigned to each game, and so on.
I think Jens has opened up the process very well. Hopefully this article will help explain the process better to entrants and interested parties.
Third Opinion: Michael Rose (IndieGames.com editor, judge)
As a judge for the IGF this year, I was assigned quite a variety of games, with a mixture of puzzlers, platformers, shmups and first-person shooters, as well as some... oddities. I remember looking down my list for the first time and spotting some names I recognized and others I hadn't a clue about.
I decided at that point that the only way I was going to judge this list of games fairly was if I played them from top to bottom, not picking out the ones I had already come across or was previously excited to play.
The first obstacle to overcome is having misgivings about a game before even playing it. We've all done it at some point - be it a screenshot, a clumsy game description, or a trailer of suspect quality, it's easy to conclude that you're not going to enjoy a game before even installing it. Obviously this isn't fair in the slightest, so for me it was very important to set all such thoughts aside.
My range of titles turned out to be quite the mixed bag, with a number of superb gaming experiences slotted in between other not-so-fantastic ones. My personal means of scoring each game was with pen and paper at the ready, noting good and bad points as I went along, and using them to come to a conclusion at the end.
Along with the scoring, there was also 'Anonymous Feedback' to be given - obligatory for the first time since the competition began. This, I felt, was incredibly important. To understand how important the feedback was, I put myself in the shoes of a developer. I've just submitted what I believe is my best work ever. More than anything now, I want to know what people think. I don't just want a string of numbers thrown back at me with no explanation as to what they mean. If I'm scoring low in the Audio section, I want to know why!
With this in mind, I made sure to give each of my entries a decent amount of feedback, be it praise or constructive criticism. I didn't dance around the subject though - if something was good I said so, and if something was bad I made sure the developer understood that I didn't enjoy that specific area as much as I would have liked.
An area that I felt mildly confused about was the topic of length. I had games in my list which were over in a matter of minutes, then I had other titles which went on and on for hours. Now clearly these shorter games weren't short due to the developers being lazy or running out of ideas - this is just how the developer chose to express him or herself.
But then if a developer has put, say, a month of work in, and produced something short but sweet - but then another developer has slaved away for a whole year, crafting something wonderful with a good few hours of play to explore, should one get precedence over another? It's a tricky one, I believe.
The other feature for judges was the 'Judge Notes'. At the bottom of each game page was a comment box allowing judges to discuss the game. Comments were a mix of technical notes and other chatter, although I generally preferred that only technical info be posted there.
Overall, I felt that every game had as much chance as any other, which really is a remarkable achievement considering there were 300+ games and 150+ judges to co-ordinate. The judging was a very painless experience, and it was easy to slot playing through my games in with the rest of my work.
Simon Carless' Comments (IGF Chairman)
Since the Independent Games Festival is becoming so important to many smaller game developers' lives, we felt it was important to go further than any other game industry award has (as far as I'm aware), and give a full account, from the judges themselves, of how we run IGF voting.
We're committed to transparency, because we know that indie game creators want to understand what happened to their game in the process of IGF judging. The contest is a truly democratic process, and we're proud of that -- and of this year's finalists.
We will be polling judges about some of their comments and continuing to hone the process, and if you'd like to add to the discussion, mail us at firstname.lastname@example.org.
Finally, I'd like to thank Jens, Alex and Mike for stepping up and explaining things, and all judges for spending many hours of their time voting this year. In addition, thanks to Matthew Wegner and Steve Swink for their sterling work on the back end and judging, and Kris Graft for much of the organizational heft this year. And roll on March!