[Air-l] 988 exabytes of info created in 2010

Bram Dov Abramson bda at bazu.org
Wed Mar 7 10:36:07 PST 2007


>> Perhaps there are domains of activity where it is often true.
>> But it's certainly not my experience as both a producer and
>> user of non-academic indicators-based research, each in both
>> commercial settings and in government.

> Sounds like a researchable question, no?  Where do 'actual decision
> makers' get their information sources from? Why do they turn to it?
> What's the role of 'names that you know' and expectations of how
> their 'audiences/bosses' will respond to the source?

Sure.  But it's one thing for Source A to beat Source B; it's another to
turn to Source A because there is no Source B.  My experience is that most
'audiences/bosses' are pretty free and easy about the range of acceptable
Source As, as long as the source answers the question they think is
important -- I have seen just about any name brand do, from IDC to
eMarketer to anything with the word "university" or "college" attached.

Now, where data are available from both, say, Research Service Inc. and
Local Research University, the issue you are talking about would probably
come up.  I would offer only two caveats.  First, it comes up much more
rarely than one would think.  Even on questions where multiple attempts at
the answer exist, people often stop at the first reasonable one they find,
rather than evaluating the range of options -- in other words, "names that
you know", but as a search strategy rather than a weighing of alternative
findings.  That's why things like citation in the media and search engine
placement are so important to industry researchers: if you can be the
first stop, you may well be the only stop.  And, second, even where it
does come up, other criteria are more important: depending on who the
'audience/bosses' are, a more recent/conservative/aggressive answer may be
preferred.  My experience has almost always been that criteria such as
these -- i.e. if both Source A and Source B exist, which better fits the
story I am trying to tell -- outweigh any reputational issues.

But, for sure, researchable: till then, anecdotes are all we've got. ;-)

> Certainly there are failings in the review system
> for academic research, but that one can point to
> them is evidence for the point I was, perhaps clumsily,
> trying to make:  at least there is some form of public
> review of academic (and government) research, some ethic of
> open review that one can rest some weight of quality upon.
>
> There doesn't seem to be any equivalent system for commercial
> research, revealing methods seems optional, at best.

I think I know what you mean.  And, intuitively, it makes sense: academic
research has a relatively uniform institutional structure, so there should
be less variability in research quality than in research conducted in, for
instance, commercial or government settings that are outside the academic
world.

And, variability-wise, maybe that's true.  But my sense has also been
that, on the one hand, academic review standards in some journals are very
limited, for reasons ranging from disciplinary insularity to lax oversight
of journals' own review processes -- the opposite is true elsewhere, of
course, but nonetheless.  On the other hand, industry researchers know
that their primary audience may well be the exact industry participants
best positioned to review their work, and that reputational risk may spur
them to be very rigorous indeed.

Putting all that together, then, my sense is that the primary harm of
lousy non-academic research (and especially of industry research that is
sold, and whose producers are therefore incentivized to have the media
talk about it) derives from its reporting in the popular media.  I believe
that lousy industry research on telecom and IT just doesn't cut it among
its primary consumers, who will know if it is way wrong.  The same
research, though, may be very widely reported in the news media, who not
only have few ways of knowing whether the study is way wrong, but rarely
consult -- and rarely have the time to consult -- the sober-second-thought
sources who might offer a reality check.

In fact, something similar occurred with the telecom meltdown of the late
1990s.  The 100-day traffic-doubling myth derived not from any industry
research, but from the media's pick-up, amplification, and distortion of a
claim made by a single spokesperson at an Internet backbone provider.  The
absorption of that myth into an environment inclined to believe it, and
lacking competing evidence to rebut it, was one of the more significant
bad-"research" consequences around.  Except that there was no "research"
at all.

That's why I would agree:

> Now if the commercial firms got together a review system,
> that would be a very interesting 'industry self regulation'
> development :)

A review system that functioned as a certification standard, and that was
publicized and adopted widely enough to make reporters regard certified
studies differently from non-certified ones, would probably do more good
than harm.  And, you know, it is probably the sort of thing where an
academic-led effort could successfully play the role of neutral arbiter.

cheers
Bram
