February 15, 2005

What's Google up to?

The robots.txt file is supposed to be a tool for keeping search engines away from directories on your web site you don't want spidered or indexed. The major search engines all claim to obey it, but warn that there may be a delay between when a robots.txt file is changed and when a spider reads and follows it. All nice and good in print, but the reality is scary.
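
For reference, the file's format is dead simple. A robots.txt that blocks all spiders from two directories looks like this (the directory names here are just placeholders):

User-agent: *
Disallow: /board1/
Disallow: /board2/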

To cut down on bandwidth use, I recently listed two directories containing seldom-used message boards as disallowed in my robots.txt. Almost immediately Google began hitting those directories with the fervor of a teenage hacker. The index page of one alone received 692 hits in a single day from Googlebots.
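
(For anyone who wants to check their own logs, here is a rough sketch of how to tally that up from a standard Apache combined-format access log -- the log filename is a placeholder, and the script assumes Python:)

import re
from collections import Counter

hits = Counter()
with open("access.log") as log:                 # placeholder filename
    for raw in log:
        line = raw.rstrip()
        # In combined log format the user-agent is the last quoted field
        ua = re.search(r'"([^"]*)"$', line)
        if ua and "Googlebot" in ua.group(1):
            fields = line.split()
            if len(fields) > 6:
                hits[fields[6]] += 1            # field 7 is the requested path

for path, count in hits.most_common(10):
    print(count, path)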

Now add that bit of info to the recent story from Reuters about hackers discovering a “wealth” of information regarding things most people don't want on the internet -- at Google.com. (I mentioned it here.) Could Google be using the robots.txt files to intentionally harvest data people want hidden?

Not scary enough for you? Well, add to that the problems Michelle Malkin, Charles Johnson and other bloggers have had getting their blogs listed on Google News. Apparently Google has refused to add Conservative blogs, but has no problem adding Liberal blogs such as Wonkette or the Democratic Underground.

Given all that, it should come as no surprise what I reported earlier today about the political contributions of Google employees.

Let's add it up: Google, a blatantly Liberal entity, is found to have tons of sensitive data archived on its site, and seems to be using robots.txt files to sniff out where that sensitive information is hidden. Why would they want it, and what do they plan to do with it? The last election was pretty dirty, and stuff was being dug up left and right. Could Google be building a “dirt chest” of secrets to unload during the next election?

Posted by Jack Lewis at February 15, 2005 03:42 PM

Trackback Pings

TrackBack URL for this entry:
http://www.jacklewis.net/cgi-bin/mt/jl-tb.cgi/251

Listed below are links to weblogs that reference What's Google up to?:

» Et Tu, Google? from Kobayashi Maru
Jack Lewis weaves an additional thread into the already-ominous story about Google's apparent left-leaning track record with regards to including conservative blogs in its news section: [Read More]

Tracked on February 16, 2005 07:40 AM

» Google Abusing robots.txt? from Myopic Zeal
Frankly, I'm skeptical, but Jack Lewis has an interesting anecdotal observation and makes a case. Let's add it up: Google a blatantly Liberal entity, is found to have tons of sensitive data archived on its site, and seems to be using the robots.txt ... [Read More]

Tracked on February 16, 2005 08:39 AM

» Google mind control (beta version) from Mazurland Weblog
I checked out Little Green Footballs for some morning inspiration and found this interesting link to an article claiming that Google is somehow suppressing links to conservative sites through their search engine. [Read More]

Tracked on February 16, 2005 10:25 AM

» Google's Bias from PeteHoliday.com
Jack Lewis plays the "what if" game and suggests that Google is building up a treasure trove of dirt to be used in future elections. He cites an alleged problem getting conservative blogs listed in Google News and the recent info that Google's emp... [Read More]

Tracked on February 16, 2005 11:23 AM

Comments

So then, the answer to this problem seems obvious; include references to all the websites that Google won't ordinarily include in your robots.txt file. That way, the Googlebots can hit away to their heart's content, and help drive traffic to the politically incorrect infidel sites.

Posted by: Alexander the Grape at February 15, 2005 08:29 PM

Hmm. Info-mining for political purposes? Hardly impossible. Given the company they keep, it should certainly be considered as a possibility.

Posted by: Final Historian at February 15, 2005 08:38 PM

Ouch.

Posted by: Kevin P. at February 15, 2005 09:57 PM

Well, here's an idea for the paranoid.

If you don't want people to see your most sensitive, confidential data, umm... don't store it on the Internets.

Posted by: Gumby at February 15, 2005 09:57 PM

A robots.txt file is a lousy way to "hide" data that shouldn't be world-readable in the first place.
I don't believe that Google would bother doing this for political effect, but I can't think of any other reason.
For example, Mozilla uses a robots file to discourage crawlers from browsing its automatically-generated source code display (lxr.mozilla.org). Why would a search engine want to fill up its index with crap?

Posted by: dr_dog at February 15, 2005 10:09 PM

And you guys on the right love to point out how the left is full of lunatic conspiracy theories. What the f***, man. Listen to yourself for a moment.

By the way, go check the LGF archives to find the huge celebration when they complained to Google News about DailyKos and then got them unlisted. Success!

Posted by: wtf at February 15, 2005 10:13 PM

I don't work for Google, but I have some insight into how data designers think. See below.

I am a data hog and have been for some time. We collect and use every bit of data that comes our way on our systems. We track what users do and how they do it. Unless we are specifically told not to, we collect and hold onto anything and everything that comes along.

I think Google is driven in this case by Data Greed (Normal) and the need for Closure.

Google is in the business of collecting information and referencing it. I see no reason why they would not ambitiously seek out every nook and cranny. (I would.) They are even worse data hogs than we are. This is the data greed part.

As for blowing past the robots.txt: mathematically, if you want to exclude some subset, you subtract it from the main set. But first you have to define the subset.

I don't know what the internal ethics are at Google with respect to robots.txt files, but my guess is that Google may want a positive, enumerated list of what NOT to put up. Such an exclusion list may be a better way of assuring themselves they do not publish off-limits links. Mathematically, it makes sense.

And that is how I would design it. The real problem is not crawling YOUR site, but what to do with the data on other sites pointing to the stuff under your robots.txt? How do you exclude these secondary references?

The only way is to develop an exclusion list.

This may become the Internet's Dark Matter someday - we know it's out there and it's most of the information, but we just can't get to it!!


Posted by: puredata at February 15, 2005 10:13 PM

The behavior you describe is not typically associated with Google. The most logical explanation is something else using a Google user-agent, trolling robots.txt files for sensitive data.

It would only take me a few seconds to write a script which does just that, and then hammers the hell out of your server.
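
Something like this would do it (a sketch in Python; the site address is obviously a placeholder):

import urllib.request

FAKE_UA = {"User-Agent": "Googlebot/2.1 (+http://www.google.com/bot.html)"}
BASE = "http://example.com"                     # placeholder target

# Grab robots.txt while claiming to be Googlebot...
req = urllib.request.Request(BASE + "/robots.txt", headers=FAKE_UA)
robots = urllib.request.urlopen(req).read().decode()

# ...then fetch every path the site asked crawlers to stay away from.
for line in robots.splitlines():
    if line.lower().startswith("disallow:"):
        path = line.split(":", 1)[1].strip()
        if path:
            hidden = urllib.request.Request(BASE + path, headers=FAKE_UA)
            print(path, urllib.request.urlopen(hidden).status)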

Posted by: Mason at February 15, 2005 10:42 PM

Did you try a reverse DNS lookup on the IP addresses which hit your site? Are they actually from Google? A user-agent is suggestive, but proves nothing.
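
The check is straightforward, something like this (a sketch; genuine Google crawler hostnames end in googlebot.com or google.com, and the forward lookup guards against a faked PTR record):

import socket

def is_real_googlebot(ip):
    # Reverse-resolve the address...
    try:
        host = socket.gethostbyaddr(ip)[0]
    except socket.herror:
        return False
    if not host.endswith((".googlebot.com", ".google.com")):
        return False
    # ...then forward-confirm that the name maps back to the same IP.
    return ip in socket.gethostbyname_ex(host)[2]

# Substitute an address from your own logs:
print(is_real_googlebot("66.249.66.1"))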

robots.txt is supposedly there so that dynamically generated content isn't thwacked-upon by the bot, because it wouldn't add anything, and because it would put unnecessary load on the server.

robots.txt is not for hiding things you don't want seen. Yes, the Wayback Machine [ http://en.wikipedia.org/wiki/Internet_Archive ] lets you use it to de-archive content you'd rather forget, but that's not its main purpose.

Read more about the Robots Exclusion Standard on Wikipedia. [ http://en.wikipedia.org/wiki/Robots.txt ] (Where else?)

Posted by: grendelkhan at February 15, 2005 10:52 PM

more Google bias

type the words:

miserable failure

in the search area and press the 'I'm feeling lucky' button

Posted by: Mark Macy at February 15, 2005 10:57 PM

If you don't want a page to be seen in the Google listings and you don't want the page to be spidered:

1) Don't have any "normal" links on other pages pointing to it
2) Put them in a password protected directory
3) If you must, link to the page(s) in question from other pages using the following code example:

<span onclick="self.location='http://www.pajamahadin.com'"><u>Here is a link</u></span>

Something like this will make the link clickable for a user, but a search engine spider will not recognize it as a link to crawl and add to its index.

Posted by: PajamaHadin at February 15, 2005 11:00 PM

Mark, that "miserable failure" bit isn't bias, it's a googlebomb.

Posted by: Mason at February 15, 2005 11:20 PM

Now, would a certain individual be responsible for that single result from the "googlebomb?"

Posted by: MikeD at February 16, 2005 12:33 AM

Additional Google Bias:

Type the word:

Impeach

in the search area and press the 'I'm feeling lucky' button. Why isn't Andrew Johnson or William Jefferson Clinton first in line..??

D. Ehlert

Posted by: David Ehlert at February 16, 2005 12:46 AM

No. Not unless you want to pin it on Dick Gephardt. The point of a googlebomb is that it games the system. You want someone to blame, start here:

http://www.dailykos.com/story/2004/1/22/17418/3789

Posted by: Mason at February 16, 2005 12:53 AM

Mark & David,

Try "great president" for a surprise.

More about googlebombing: http://en.wikipedia.org/wiki/Googlebomb

Posted by: Mason at February 16, 2005 01:01 AM

I think this is just a little beat-up... does anyone actually think someone at Google has any interest in reading the hundreds of millions of pages that people may not have wanted indexed, just to try and find something interesting... the googlebomb stuff is interesting though... follow the Wikipedia link above....

Posted by: stephen at February 16, 2005 04:16 AM

man, put some tinfoil on your hat and go buy some duct tape. how insane are you?

Posted by: john at February 16, 2005 04:47 AM

I wouldn't put it past the lefties at Googoo.

Democrats have turned into a desperate cult of hateful obstructionists.

Posted by: zvi wolfe at February 16, 2005 06:56 AM

Seriously, you people are insane. I don't even know what to say.

Posted by: wtf at February 16, 2005 08:46 AM

Keep in mind that Google, though technically a "public company," is under no obligation to be politically neutral.

Perhaps some day a more conservative version of the webcrawler will pop up, exposing people to the wisdom of Michelle Malkin above all others.

Let's face it, Google is a bunch of California nerds who are encouraged to take ping-pong breaks between coding sessions. It would not surprise me that they'd want to shut out sites that they find ideologically disagreeable. I certainly would not want to associate myself with the likes of LGF. Have you seen the kind of comments it allows?

Posted by: Johnny Mainstream at February 16, 2005 09:08 AM

Odds are your robots.txt file is incorrect. It is also possible that some other party is crawling under a fake name, searching the files hidden by robots.txt for sensitive information.

Posted by: Mark Fox at February 16, 2005 09:29 AM

You know, the left is claiming the exact opposite.

http://homepage.mac.com/mazurs/iblog/index.html

Posted by: Chris at February 16, 2005 10:28 AM

"And that is how I would design it. The real problem is not crawling YOUR site, but what to do with the data on other sites pointing to the stuff under your robots.txt? How do you exclude these secondary references?

The only way is to develop an exclusion list."

-----

Umm... no. The data is stored in a hierarchy -- robots.txt is going to either list specific files, or roots in the hierarchy... you don't need a fully enumerated list to know that something exists in a certain branch of the hierarchy and, in fact, that's probably the worst way to do it.

If all of the files in /a/ are excluded, and Google wants to index /a/foo.htm... why on earth would you want to enumerate all of the files in /a/ just to find out that foo is there? You already know it's there. Besides that, the method would require a continual re-indexing of the folder... which is what robots.txt is there to avoid.
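
In code, the test is nothing more than a prefix match against the Disallow roots (a sketch, not a claim about Google's actual implementation):

# Disallow roots parsed straight out of robots.txt
disallowed = ["/a/", "/cgi-bin/"]

def is_excluded(path):
    # No directory enumeration needed -- just a prefix check
    return any(path.startswith(root) for root in disallowed)

print(is_excluded("/a/foo.htm"))   # True, without ever listing /a/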

Pure nonsense.

Posted by: Pete Holiday at February 16, 2005 11:41 AM

LGF didn't have a thing to do with DailyKos being delisted from Google -- Kos himself asked Google to delist the site.

http://dailykos.com/story/2005/2/15/05114/9646

"I asked for Google to drop Daily Kos.

The wingers had nothing to do with it."

"because...

it was pulling up random diaries and pasting them on the google news homepage, with the implication that it was "sanctioned" content.

Given what's sometimes written in the diaries, I was uncomfortable with that. I don't mind taking heat for things I write, but for things that other people write? I didn't want to deal with that."

Posted by: Stephen Tyson at February 16, 2005 11:58 AM

How does Google unindex previously indexed material? If Google previously indexed content that is now covered by robots.txt, maybe it needs to see that content again to remove it from its archives?
But a smart bot should be able to delete any indexed material that later gets referenced by robots.txt.
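
Conceptually it would just be a filter pass over what's already in the index (a guess at the logic, not how Google actually does it):

# Hypothetical: purge already-indexed URLs that a fresh robots.txt now disallows
index = {"/board/topic1.html": "cached page", "/public/page.html": "cached page"}
new_disallows = ["/board/"]

for url in list(index):
    if any(url.startswith(root) for root in new_disallows):
        del index[url]          # previously indexed, now off limits

print(index)                    # only /public/page.html survives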

Posted by: bill at February 16, 2005 03:35 PM

This is absurd. I'm curious as to how much you know about the underlying technology used by Google.

Posted by: Chuck at February 16, 2005 06:46 PM

I believe skynet is becoming self-aware ;)

Posted by: John Connor at February 17, 2005 01:42 AM

No I'm not.

Posted by: skynet at February 18, 2005 12:42 PM
