Thibaut’s Think Tank: Score inflation and why we critics are to blame
Hi everyone! In this new regular feature, I’ll try to tackle one of the more touchy subjects in the world of gaming. I honestly hope it can interest you and that it may lead to lively yet mature discussions. If you have remarks or suggestions, feel free to e-mail them to email@example.com or respond in the comments section. We’ll kick off with review score inflation.
Review score inflation? What the…? Glad you asked. Put simply, review score inflation is the current phenomenon where critics are rewarding games with ever-increasing scores, and far more frequently than before. Sometimes those top marks are thoroughly deserved, but they can also be a bit flattering, to put it gently.
Let me say one thing clearly: by itself, there's nothing wrong with high review scores. Titles like Mass Effect 2 or Uncharted 2: Among Thieves are solid gold, and in five years' time, they'll still be part of our collective memory. They deserve every single bit of praise they've gotten. If you're a good student, you deserve good grades. Simple as that.
But there are boundaries. And they are being crossed more and more often. Perfect 10s are regularly awarded to games that don't deserve them in the slightest. For example, let's have a look at our Japanese friends at the leading magazine Famitsu (pretty much the Bible of game journalism). Between 1998 and 2008, Famitsu granted its perfect score of 40/40 to a mere six games (Zelda: Ocarina of Time, Soul Calibur, Vagrant Story, Zelda: The Wind Waker, Nintendogs and Final Fantasy XII). While there may be some doubt about the inclusion of Nintendogs, the fact remains that the other five are clear inductees for an imaginary gaming hall of fame.
Before you say ‘Six? That ain’t too bad’, keep in mind that the problem is a relatively new phenomenon. Between 2008 and 2010, Famitsu gave a perfect score to no fewer than nine games. That’s one and a half times as many in three years as in the entire decade before! Now, there’s something to be said for Super Smash Bros. Brawl and the incredible Metal Gear Solid 4, but are there really people who believe that Bayonetta (a very good game in its own right) and Monster Hunter Tri deserve a place in the gaming pantheon next to Ocarina of Time? Didn’t think so.
And it gets worse! What to think of the French at PSM3, who rewarded Uncharted 2 with a surreal 21/20? What’s next? 25/20 for Uncharted 3? 107% for Mass Effect 3? With stunts like that, reviewers lose their hard-earned credibility. And rightly so, as far as I’m concerned. Because how can you explain to a reasonable person that a game has improved upon perfection, without said person laughing in your face?
But let he who is without sin cast the first stone. In the past, I too have indulged in handing out scores that in hindsight have proven a tad optimistic, like Okami (94%; 90% would have been more correct) or Tom Clancy’s G.R.A.W. 2 (92%? Painful mistake; more like 85!). I also have my doubts about scores that my dear colleagues have awarded, but I’ll wisely keep those to myself.
*cough* That game about Noble team *cough*
Describing the trend is one thing; explaining it is something else entirely. I can’t give you an all-encompassing explanation (the problem is more complicated than that), but here are a few factors that, in my opinion, have an impact.
Without a doubt, the most important factor is time pressure. Reviewers always try to provide their readers with timely reviews, in the hope of preventing those readers from spending their hard-earned cash on games they won’t actually like or that are simply subpar. But in the rush to get those articles out the door as quickly as possible, writers regularly forget to take a step back. Often, critics are still under the spell of a game’s initial impact when they put pen to paper. You could compare it to the first weeks of being in love: in the beginning, everything seems picture perfect, but a couple of weeks later, when you look back, it starts to dawn on you that you may have been overlooking some of your loved one’s less attractive qualities.
A second explanation is simply the fact that more and more great games are coming out. Development budgets keep skyrocketing, which has a positive impact on the quality of the average game. Developers are also getting more and more experienced. Videogames are still a young medium, but at least they’re out of their diapers. Developers have learned from earlier mistakes, both their own and those of others. They have learned what works and what doesn’t, which in turn leads to better games.
But alongside the games themselves, the standards by which they are judged also need to evolve. A game of the quality that earned 9/10 last year shouldn’t automatically earn it again this year. Otherwise, inflation is inevitable: in a situation like that, scores need to climb higher and higher if truly great titles are to stand out at all, until, finally, it gets ridiculous. In other words: we need to be more critical and only grant 90%+ scores to those few games everyone should own. Personally, I’m thinking along the lines of six or so games a year (which equates to about one every two months). That would be a significant step down from the 15 games (in 20 versions) that earned 90% or more last year.
But from personal experience, I know that a lot of gamers only buy 90+ games. By itself, that’s not a bad principle (it can never hurt to be cautious), but with the devalued state of current 90% scores (which I hope this article has demonstrated), they also run a greater risk of buying a dud. In essence, the gaming public needs to change its mentality too: games that earn 80% or more are definitely worth buying as well.
Review scores do offer significant advantages: a score provides a simple summary of the game and allows for easy comparisons with other games. Ideally, however, the text alone would suffice and the scores would be omitted, but we have a long way to go before that becomes even a remote possibility.
In the meantime, we should be aware of the ongoing inflation of scores and try to do something about it. That means being more critical and thinking longer and harder about the grades we hand out. Inevitably, we’ll still make mistakes (we’re human, after all), but we can at least try to keep those to a minimum.
Thanks for reading and see you next time!