[Air-L] The review process for IR15

Daren Brabham brabham at usc.edu
Wed May 7 09:30:21 PDT 2014


I actually think the review process this year was quite smooth. While I had
some papers out of my exact area of expertise, they were still familiar
enough to me to handle, and the reviews I got were quite thoughtful and
helpful. Kudos to the committee. It is also nice to be able to indicate a
level of familiarity with a topic when you submit your review. That way,
presumably, the one reviewer who hated the paper but indicated he/she wasn't
that familiar with the topic would be overruled by reviewers who liked the
paper and were more familiar with it - like a weighted score, or at least a
brief qualitative assessment in borderline cases.

It may never be possible to slot papers to reviewers with absolute precision
when it comes to expertise. There are too many factors that make the
matching game messy:

1) the large volume of papers and the need for speedy deadlines don't allow a
busy program chair much time to devise a perfect, equitable matchmaking
arrangement. This also makes returning a paper to the chair because you
don't fit it that well kind of unreasonable...it's just too much to juggle
on a deadline;

2) a conference centered on an interdisciplinary topical area (Internet
studies) rather than a single disciplinary tradition or single
methodological approach means there will always be a few papers that do X
and just a few reviewers perfectly suited to review X;

3) whether constrained by the short-ish length of the submission or a
decision by the author to be vague when it comes to methodological
procedures or theoretical underpinnings, many submissions do not make it
easily known what kind of reviewer would be most appropriate for them.
Sometimes a paper's keywords indicate the paper is about "e-learning," but
really it may be that it's a lab experimental study of information
processing that just happens to use an e-learning system as its research
site...so it's no wonder a reviewer who takes a critical Marxist approach to
understanding how online education is affecting higher ed will struggle with
a submission like that and wonder why a program chair sent it to him/her.
But, as I said in point (1), ain't nobody got time for that! The program
chair has to be able to rely on keywords for assignments;

4) AoIR welcomes papers in varying stages of completion, so a proposal for a
quantitative study that will take place sometime between the paper submission
deadline and the conference in October may not satisfy a quantitative
reviewer who would prefer to dig deeper into the exact statistical
tests/data analysis used in the paper. Ditto a paper that proposes some
intricate critical essay but whose findings aren't fully baked. Everyone has
a different level of expectation for how "done" a paper should be when it is
presented or submitted, and every reviewer has a favorite part of a paper
that he/she picks apart the most - method, literature, analysis, whatever.
So when these sections don't exist yet (because the study is in progress), a
reviewer expecting to see them is more critical, I think; and

5) I'm guessing a lot of reviewers don't follow through and get their
reviews done, which leaves the program chairs having to send abandoned
papers out to not-the-most-perfect-fit reviewers just to get decisions
rendered in time. This is the easiest problem to fix - if people
would just do their reviews on time.


The somewhat odd length of the AoIR submissions is still a controversial
topic, but I'll reiterate why I think it is an improvement to the
abstract-only format of a few years ago. When it was just abstracts, I had a
tough time as a reviewer assessing the quality of a submission, because
sometimes the only methodological explanation for how an author was
collecting or analyzing data was that they were doing "an analysis" of some
text/case/etc. Sorry, but I've seen too many Internet studies papers that
pass off armchair theorizing or flimsy fanboy gushing about some
website/video game as "analysis." If academic research is about creating new
knowledge, there have to be some expectations for how we get to create this
knowledge...some sort of accepted procedures, a way to communicate which
bodies of literature are propping up an argument, etc. When it was just
abstracts, I think too many papers were getting accepted because they dealt
with trendy, cool, or weird topics and too many papers were getting rejected
that were making solid contributions to knowledge that just happened to be
about less sexy topics. At least with an extended abstract/short paper
format, there is more room to lay out one's argument, data collection
procedures (if applicable), relevant literature, and preliminary findings
(if applicable). And while my examples have all been about social science,
the same is true about critical/cultural or rhetorical work: these kinds of
essays need much more than like 200 words to really make a convincing
argument that they're worthy of being presented at an international
scholarly conference.

The only other obvious alternative would be to require full-length papers,
like many other conferences do. But requiring 25-page papers that are
considerably more polished and complete will change the dynamics of the
conference (from dialogue and workshopping of ideas to one-way delivery of
fully formed arguments), and it means that, in order to have a paper ready
for the next year's AoIR submission deadline, you'll probably have to start
working on a paper right after the previous year's AoIR conference ends.


db

---
Daren C. Brabham, Ph.D.
Assistant Professor, Annenberg School for Communication & Journalism
Editor, Case Studies in Strategic Communication | www.csscjournal.org
University of Southern California
3502 Watt Way, Los Angeles, CA 90089
(213) 740-2007 office | (801) 633-4796 cell
brabham at usc.edu | www.darenbrabham.com




