My friend Cliff (not pictured) and I have been having a conversation about whether the Amazon Breakthrough Novel Award (ABNA) will actually find a “breakthrough” novel.
Here’s his opinion, which was so pullquote-worthy, I’m lifting it out of the comments section of another blog post:
You simply cannot go through 5,000 submissions in two months, letting amateur reviewers cherrypick their favorite genres and disqualify everyone else. The average Amazon reviewer has neither the skill, intelligence, education, nor any tangible qualification to be entrusted with the task. The contest’s (not AWARD) a joke…
As you can see, Cliff lacks neither the ability to form an opinion nor the ability to articulate it pithily.
I’m a little wafflier about the process. (Wafflier isn’t a word? It should be.) For example, the Publishers Weekly review on my Amazon page is riddled with errors, the kind a reviewer would make only if they hadn’t read carefully, or, more likely, hadn’t read at all.
If you hadn’t read my book, for instance, it would be easy to argue, “Otis’s unhappiness is undeniable, but it is also ambiguously rendered; he makes random comments like ‘I’m becoming one of you,’ but these thoughts don’t lead to anything.” A particularly odd observation, because Otis’s “becoming” is the central inner conflict of the book!
Even though the PW review is a dog’s breakfast, some of the user comments are really astute. For example, one reviewer noted that chapter two
sounds a slightly flat note in that it presents a conflicting image of Roberto. Whereas on the golf course his putting troubles have led him to take extreme chances, in the casino he’s the cool mathematician always in control. I don’t understand the logic of this turnaround …
What a difference actually doing the reading makes, eh?
New Yorker writer James Surowiecki (I’m predisposed to like any writer with a hard-to-spell last name) has a book called The Wisdom of Crowds, in which he summarizes a number of group dynamics studies and notes, “The simplest way to get reliably good answers is just to ask the group each time.” (Here’s an excerpt.)
The ABNA contest is allegedly judged on the basis of the PW review, an Amazon Top Reviewer review, and user reviews. So in theory, the wisdom of the crowd would mitigate one unfounded review (good or bad), and an aggregate of reviews would better reflect a work’s actual value.
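Surowiecki’s claim is easy to sketch with a few made-up numbers (mine, not anything Amazon has disclosed about its actual scoring):

```python
# Toy illustration: with enough independent ratings, one unfounded
# review barely moves the aggregate. All figures are hypothetical.
from statistics import mean, median

ratings = [4, 5, 4, 4, 5, 4, 5, 4]   # a made-up crowd of fair ratings
outlier = 1                           # one careless or hostile review

mean_alone = mean(ratings)                   # 4.375
mean_skewed = mean(ratings + [outlier])      # 4.0
median_alone = median(ratings)               # 4.0
median_skewed = median(ratings + [outlier])  # 4

print(mean_alone, mean_skewed)      # the outlier costs the mean ~0.4
print(median_alone, median_skewed)  # the median barely notices it
```

With only two or three reviews, the same bad-faith rating would drag the average much further, which is exactly the critical-mass worry.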
Of course, this doesn’t entirely apply to the Amazon contest. First, people who aren’t interested in reviewing probably won’t write a review anyway, so the ratings will likely skew higher than if a true cross-section of readers weighed in. (Then again, Amazon probably knows this.)
Second, there’s the friends-and-family phenomenon … i.e., Mom’s not going to trash my book. In a best-case scenario, Amazon (or whoever) would be able to tease apart the obliged reviewers, the ones doing the writer a favor, from the impartial ones. One way would be to check how many other ABNA excerpts they’d reviewed.
Third, without a critical mass of reviews, one or two odd ones are far more likely to skew the process.
Fourth, Amazon hasn’t been totally transparent about its judging process, so it’s impossible to know what counts and what doesn’t.
I’m just hoping that more people will review my work fairly and that I reach a critical mass. After getting rooked by PW, that’s about all I can ask.