17 September 2006 | 2:28 PM America/Los_Angeles

We badly need a way to verify sources of online content - we need a "trust trackback"



LonelyGirl15 was exposed as a fake video blogger, scripted by a Hollywood production team. Many millions had watched the videos, and many thousands tried to find out who was behind them.


What happens in a future world where phishing is applied to news sources rather than to banking sites, and where there aren't enough watchdogs to spot the fakes?


A little while ago, Google News was carrying a hacked headline that was anti-US and anti-Israel. That was easy to spot; but what if Google News, or some other large news aggregator, were carrying a Reuters story that had been more subtly altered?


Google News does not use humans to spot problems; it compiles its news stories using algorithms. But can those algorithms spot fakes? Clearly not in this case.


In the future, or even now, how can we know if a Microsoft press release really came from Microsoft? And the same goes for nearly every other piece of information we find on the internet. Tampered news stories might not be noticed for days or weeks.




Validating trusted sources of information is going to be very important. Part of that trust will come from going to the web sites of long-established media brands such as the New York Times, and from anti-phishing technologies such as OpenDNS that make sure your browser is reading a legitimate site.


This ability to know that a news source (an individual, a company, an organisation, a community, or a government) really said what it is reported to have said in a news story, an online post, an email, or any other distribution channel is incredibly important. Otherwise, others will sow misinformation in very sophisticated ways, for commercial gain.


There will be many opportunities for such misinformation in the online world. With so many sources of information, and more coming our way, there won't be enough online sleuths to flag the fakesters, as there were with LonelyGirl15.


This means we need a way to verify that specific chunks of content originate from a particular individual, company, organisation, community, or government.


A reader should be able to click a "trust" button and have the content verified.


For example, a news story consists of content from the journalist and news organization; content from the company (the CEO said..., our customers said..., the analyst said...); and information from other sources (the company's stock price, related announcements from other companies, related stories, and so on). An online reader has to have a means of validating each of those sources of information.
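To make the idea concrete, here is a minimal sketch of what a per-chunk "trust" check could look like, assuming each chunk of a story carries the identity of its claimed source plus a digital signature, and that the reader's software already holds the sources' published public keys. The chunk format, the trusted_keys registry, and the verify_chunk function are hypothetical illustrations, not an existing system; the example uses the third-party Python "cryptography" package.

```python
# Hypothetical per-chunk verification: one possible shape of a "trust" check.
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

# Hypothetical registry mapping a claimed source (e.g. "Reuters", "Microsoft")
# to the public key that source has published.
trusted_keys: dict[str, ed25519.Ed25519PublicKey] = {}

def verify_chunk(source: str, text: str, signature: bytes) -> bool:
    """Return True only if `text` was signed by the key registered for `source`."""
    public_key = trusted_keys.get(source)
    if public_key is None:
        return False  # unknown source: nothing to check the claim against
    try:
        public_key.verify(signature, text.encode("utf-8"))
        return True
    except InvalidSignature:
        return False  # the content was altered, or the signature is forged
```

A "trust" button in the reader would simply run this check for every quoted chunk and flag the ones that fail or that come from sources with no published key.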


This issue of sourcing also applies to the new media release project I've been working on with corporations and PR agencies, which is focused on ways of releasing company information onto the internet in many forms, such as vidcasts, podcasts, and text press releases.


Those companies and organisations have a duty to release their information in such a way that its origin can be verified and that others cannot alter the content surreptitiously.
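On the publisher's side, one way to meet that duty is to sign each release and publish a detached signature alongside it. The sketch below shows the idea under those assumptions; the file names and key handling are hypothetical, and it again relies on the third-party Python "cryptography" package rather than any particular industry standard.

```python
# Hypothetical publisher-side signing of a press release.
from cryptography.hazmat.primitives.asymmetric import ed25519

# The private key would be generated once and kept secret by the company;
# the matching public key would be published, e.g. on the company's web site.
private_key = ed25519.Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# Sign the release and write a detached signature next to it.
with open("press_release.txt", "rb") as f:   # hypothetical file name
    release = f.read()
signature = private_key.sign(release)
with open("press_release.txt.sig", "wb") as f:
    f.write(signature)

# Anyone holding the published public key can detect tampering:
# verify() raises InvalidSignature if the text or signature was changed.
public_key.verify(signature, release)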


For this next phase of the Internet, we badly need a mechanism to verify the source of information that we read online.


This is about creating a type of "trust trackback" that is part of the secure core infrastructure of the internet. Who is up to this task?


- - -


Coming up:


A report on my Sunday meeting with a delegation of Spanish technologists from the remote region of Asturias in northern Spain. This is a fascinating group of researchers, academics, and business representatives who are thinking in terms of community rather than technology. They are in town visiting Silicon Valley's leading companies and research organisations.