19 July 2010 | 07:01 AM America/Los_Angeles

An Open Index As A Business...

My recent post about speeding up the Internet by creating an open index of web sites to cut down on robot traffic received a lot of attention and a lot of great responses.

Several people wrote that it would be important to be able to verify the information on a web site. That's true, but if there were just one "openbot" verifying the data and penalizing the sites that tried to cheat, we could have a fairly clean index.
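As a rough sketch of what one verification pass might look like, assuming sites self-report a checksum when they submit their data (the entry format, the SHA-256 scheme, and the demote-on-mismatch policy here are all assumptions, not any real openbot's behavior):

```python
import hashlib
import urllib.request

# Hypothetical index entry: a URL plus the checksum the site reported
# when it submitted its data. Both values are placeholders.
INDEX_ENTRY = {
    "url": "http://example.com/",
    "reported_sha256": "0" * 64,
}

def verify_entry(entry):
    """Fetch the live page and compare it to the reported checksum."""
    with urllib.request.urlopen(entry["url"]) as response:
        body = response.read()
    actual = hashlib.sha256(body).hexdigest()
    # A mismatch means the submission is stale or dishonest; the openbot
    # could then demote or flag the entry (the policy is an assumption).
    return actual == entry["reported_sha256"]

if __name__ == "__main__":
    print("entry verified:", verify_entry(INDEX_ENTRY))
```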

Also, there could be a business model here. If there were one open index, all the bots could query just that database, because it would have the best information.
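In that world, a bot's "crawl" collapses into a lookup. A minimal sketch, assuming a hypothetical HTTP API for the index (the endpoint and the response format are invented for illustration):

```python
import json
import urllib.parse
import urllib.request

# Hypothetical endpoint for the shared open index; no such service
# exists, so this URL and the response shape are purely illustrative.
OPEN_INDEX_API = "https://openindex.example/api/pages"

def lookup(url):
    """Ask the central index about a page instead of crawling it.

    One small request replaces a full fetch-and-parse of the page by
    every bot that wants the same data.
    """
    query = OPEN_INDEX_API + "?url=" + urllib.parse.quote(url, safe="")
    with urllib.request.urlopen(query) as response:
        return json.load(response)  # e.g. title, links, last-modified
```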

The open index could even charge for access, especially if it offered a variety of analysis tools or offered to run custom algorithms. That would be far cheaper than each company crawling the web, collecting masses of data, and then trying to analyze it on its own.

The value is in the analysis and not in the index.

For most web sites, nearly 50 percent of traffic goes to serving robots that are all crawling exactly the same data. A central repository, updated as soon as anything changes, would go a long way toward freeing up a substantial amount of broadband capacity, at far less cost than adding more servers, more network capacity, and the energy required to run it all.
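The "updated as soon as anything changes" part could work like the old sitemap-ping convention: a site pushes one small notification when a page changes, the index re-crawls once, and every other bot reads the fresh copy from the index. A sketch, with the endpoint and payload shape assumed for illustration:

```python
import json
import urllib.request

# Hypothetical "ping" endpoint on the central repository; the URL and
# the JSON payload are assumptions, modeled loosely on sitemap pings.
PING_ENDPOINT = "https://openindex.example/api/ping"

def notify_change(page_url):
    """Tell the central index a page changed, so it re-fetches once.

    The site sends one small POST; downstream bots then get the fresh
    copy from the index instead of each re-fetching the page.
    """
    payload = json.dumps({"url": page_url}).encode("utf-8")
    request = urllib.request.Request(
        PING_ENDPOINT,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return response.status == 200
```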

A central shared index would save a tremendous amount of energy and carbon dioxide, and we'd get a faster Internet for a fraction of the cost of building one out.