[Air-L] Help with Facebook Research

elw at stderr.org
Wed Oct 3 14:37:50 PDT 2007


> I have always been curious about the TOS on this. If I set up a group of 
> people to click and record each page, I'm in the clear. So, what if it's 
> a bookmark file they are clicking from? What if the outbound links are 
> automatically filtered and collated? What if my browser is pre-fetching 
> pages? I guess the question is: at what point does it become automated?


I expect that one of the real goals of that clause of the TOS is to prevent 
someone from slurping out all of 'their' (our) data and using it to set up 
a competing SNS.  Maybe not in quite those terms - but effectively.

I would love, love, love for folks to have better access to the innards of 
a few of these sites, so that the butt-ugly hacks needed to extract data 
from them without offending anyone or breaking a site's TOS cease to be 
necessary....


> It seems to me that there should be a kind of Turing Test for scraping 
> and crawling: if you can't tell from the server side that it's not a 
> human, then it should be considered a human.
>
> I know, that's not a practical proposal, but I just *wish* that was how 
> it was handled.


I wish it too.  It would make so many things so much easier.

--elijah
