Search Engine Death Penalty: WhenU Kicked Off Of Yahoo And Google

from the take-that dept

WhenU, the adware/spyware company that already has quite a bit of controversy surrounding it for both (a) sneaking its software onto users’ machines and (b) then popping up competitive ads when surfers visit websites, has now been removed from both Yahoo and Google search results after it was determined that the company was using tricks to boost its rankings in both. WhenU blames an outside search engine optimizer it hired, but the episode does raise some issues. Considering the power that sites like Google and Yahoo have over anyone finding your site, how long will it be until someone sues a search engine for kicking their site out of search results?



Comments on “Search Engine Death Penalty: WhenU Kicked Off Of Yahoo And Google”

11 Comments
Ben Edelman (user link) says:

Re: Googlebot is bandwidth-intensive, slow

I agree that in principle a second googlebot that mimics IE would be a great advance for Google. If that ‘bot only crawled top results for major keywords, it might not even be so hard for Google.

But there are a couple of problems:

1) A far-reaching bot, one that crawls deep, would be slow and bandwidth-intensive. Google’s existing crawl takes, what, a month (?), though really it’s a never-ending task at this point. All indications are that Google continually faces a constraint on how much of the web it can crawl: it would seemingly crawl more if it could, but it continually drops low-PageRank sites in an attempt to focus efforts where they’re most needed.
2) Google doesn’t look favorably on folks tampering with HTTP headers. But the crawler you propose would have fake HTTP headers (User Agent, Referer, etc.). Google might be uncomfortable with this.
3) The crawler would still come from Google’s IP blocks. Some cloaking is based on HTTP headers (user agent or, in WhenU’s case as to the research I just posted, Referer header or lack thereof). A second ‘bot could catch these tricks. But cloaking on the basis of ‘bot IP would be much harder to find: Google would have to use another set of IPs for this project, and once word got out which IPs those were, the smart cloakers would adjust their behavior accordingly.

All in all, I think it’s actually not such an easy task. Wouldn’t want to be in Google’s shoes!

Ben Edelman
benedelman.org
