[Air-l] 988 exabytes of info created in 2010

James Howison jhowison at syr.edu
Wed Mar 7 09:29:26 PST 2007


Hi Bram, thanks for the thoughtful discussion.

On Mar 7, 2007, at 11:10 AM, Bram Dov Abramson wrote:

>> The results of commercial research firms are always so nicely
>> packaged and illustrated---and maybe that's one of the reasons that
>> actual real world decisions are based on them.  (50 Rhode Islands?
>> That sounds like a lot, I'd better buy more ;)
>>
>> Which, of course, makes them 'important' but for reasons other than
>> their 'truth'.  Their importance seems self-reinforcing
>
> Oh, I don't know.  The latter thesis -- that, faced with different
> answers to a similar question, "actual real world decision" makers
> prefer the truthiness of name-brand commercial research -- is no doubt
> possible.  Perhaps there are domains of activity where it is often
> true.  But it's certainly not my experience as both a producer and
> user of non-academic indicators-based research, each in both
> commercial settings and in government.

Sounds like a researchable question, no?  Where do 'actual decision
makers' get their information?  Why do they turn to particular sources?
What's the role of 'names that you know', and of expectations about how
their audiences or bosses will respond to the source?

> Rather, I've found that a less sneering version of the former thesis
> fits a lot better.  But it's not just packaging and illustration: it's
> choice of topic matter.  Any time I have seen non-academic research I
> produced used "out there", it's been because there were very few
> alternative sources for answers to the question being asked.  For that
> matter, when there did exist alternative sources, they tended if not
> to converge, at least to take each other into account.

I think you are right: the questions are closer to what the 'market'
wants.  But there's quite a bit wrapped up in that concept that
research hoping to be 'relevant' can learn from.  Why are such
questions being asked?  By whom, and for what purpose?

> In this instance, the few-alternatives thesis seems to be the case.
> The EMC-sponsored studies, first at Berkeley and now at IDC, have
> gotten a lot of press.  I have not seen a lot of other comprehensive
> attempts to answer the "how much information" question -- a rather
> different proposition, obviously, than what one thinks of that
> question.
>
> For what it's worth, an IDC narrative of how they counted things, as
> opposed to a third-party news story about it, is at:
>
> http://www.emc.com/about/destination/digital_universe/pdf/Expanding_Digital_Universe_IDC_WhitePaper_022507.pdf

Thanks, that's useful.

>>  research huh?  Does that qualify as an 'A Journal'? ;)
>
> I wouldn't think so.  Academic and other forms of research each have
> their own norms and control mechanisms.  They differ from each other,
> and they differ internally.  I would wager that those who place all
> weight on how they differ from each other, and no weight on how they
> differ internally, will end up with trust mechanisms that disappoint
> them in the long run.  I have seen very careful and properly-reviewed
> non-academic research -- even shepherded some through reviewing
> myself -- and, by the same token, some very inaccurate and
> poorly-reviewed academic journal articles.
>
> So perhaps the journal/not-journal binary is not a good totalising
> lens through which to view the wide world of doing research and other
> words for figuring stuff out.

Maybe it was a semi-joke? ;)

>> Is anyone aware of research that asks the question,
>> 'how reliable are the predictions of commercial
>> research firms?'
>
> ... or that asks that question of any form of institutional research.
> I wonder how useful such studies would really be.

Certainly there are failings in the review system for academic
research, but that one can point to them is evidence for the point I
was, perhaps clumsily, trying to make: at least there is some form of
public review of academic (and government) research, some ethic of
open review on which one can rest some weight of quality.

There doesn't seem to be any equivalent system for commercial
research; revealing methods seems optional at best.  The market
reputation of the company, or maybe of the author, could function in
that way, but if that's confounded by the sort of effects we discussed
above, then how can anyone have confidence in it?

You mention 'different norms and control systems' for commercial
research above (and internal variance); what do you have in mind
there?  Having worked in consulting, I can imagine internal reviews up
the chain of researchers and supervisors, but are there other forms of
review?  External ones, even?  Maybe an ISO 9000-style certification
for internal quality procedures?  The recent work on Wikipedia has
brought 'standards of knowledge review and reliability' to the fore,
yet we haven't heard much about commercial research in that regard.

>   In the telecom and IT sectors there have, from time to time, been
> scorecards in industry magazines, which were fairly up front about
> how they did their scoring.  I suppose this could get you started.
>
> (Parenthetically: one firm, eMarketer, does something like this.  More
> specifically, and at least the last time I checked, their business
> model was that, rather than undertaking primary research of their own,
> they bought or borrowed, etc., other primary research and conducted
> meta-research -- that is, showed what all the other research firms
> were saying, then sold that as a competing product.  I suppose there
> are issues of shooting one's golden goose here.  But that's another
> story.)

Thanks, interesting.

> More helpful from a moving-things-forward standpoint, I would think,
> would be to investigate the methodologies being used.  The standard by
> which predictions should be judged is a bit of a moving target --
> ensuring that a prediction is based on sound reasoning and data is
> more do-able, and probably more reasonable.

Makes sense.  At least then users of research would have some
yardstick by which to judge commercial research, but there still
wouldn't be an institutional quality system analogous to the academic
review process (or to the role of journalists and FOI requests in
government research), or even the edit-war-until-we-drop of Wikipedia.

Now if the commercial firms got together on a review system, that
would be a very interesting 'industry self-regulation' development :)

Cheers,
James
