San Francisco, CA | 31 August 2019 | 16:03 America/Los_Angeles

LGBTQ - The Missing Letters In Google's Alphabet And The Moral Limits Of Algorithms

Where is the machine learning training data for morals and ethics?

Google has been sued by a group of independent video producers for restricting their YouTube channels and not allowing them to buy advertising to promote their LGBTQ-related content.

Eight plaintiffs representing the LGBTQ community -- a protected class under California laws -- claim Google is guilty of: "Discrimination, fraud, unfair and deceptive business practices, unlawful restraint of speech and breach of consumer contract."

"We've all tried repeatedly to communicate with Google/YouTube to treat us fairly and work with us to allow our voices to be heard and inspire systemic change," said Celso Dulay, one of the plaintiffs in the lawsuit and host of GlitterBombTV.

"It's shameful that so many in the LGBTQ+ community on Google/YouTube are restricted, censored or blocked -- yet we're repeatedly being subjected to harassment by homophobic and racist hate-mongers who are free to post vile and obscene content."

Dulay said that he never had trouble buying Google Ads to promote GlitterBombTV until a hidden policy change blocked his ads just days before Christmas Day 2018. Dulay said that he recorded a conversation with a representative of Google Ads who explained there is a company policy against "the gay thing."

Google denies that it discriminates against protected classes, saying it focuses on limiting hate speech and shocking content and does not filter content based on sexual orientation.

Foremski's Take: 

Four years ago, Google quietly dropped its founders' motto, "Don't be evil," as Alphabet, Google's new holding company, adopted "Do the right thing."

How do you do the right thing? The unspoken mantra of a software engineering culture is that the answers are always in the data -- as long as you know where to look.

But there is a flaw in this approach, one that cannot be solved by more data or computational brute force.

Google certainly knows how to collect massive amounts of data, analyze it, and use it to train its AI systems and algorithms.

By using software to make key decisions about what to publish, Google saves a ton of money compared with employing human editors and creators. More importantly, it is key to Google's protected legal status as a platform, unlike a newspaper, which controls what it publishes and carries full responsibility for it.

Avoiding humans is a smart strategy, but the trouble is that Google's algorithms are not that smart -- especially on cultural and political issues, where they cannot discriminate between legitimate and harmful content. The software has no understanding of what it is viewing.

It's not just LGBTQ content that is a problem for the software. Google cannot tell the difference between a well-researched YouTube history channel about the Second World War and an extreme hate speech channel -- and will demonetize and demonize them equally.

This flaw in its operating strategy becomes ever more evident as Google gets larger: its dumb algorithms are making ever bigger dumb decisions that affect more and more people.

It needs to make high-level executive decisions in an area it knows little about: moral and ethical leadership. It is a minefield with no machine-ready solution. Where is the data to train machines about morals and ethics? For example, if you were to train a general AI system on all of human history, it would likely conclude that violence, war and genocide are successful strategies.

Morality and ethics cannot be extrapolated from history; they must be taught, usually by family and community. If you start instructing the software, you become an editor, and that means Google, Facebook and others could lose their legal protections as platforms and become legally responsible for everything they publish -- just like a newspaper or magazine.

A lot of our problems with fake news and hate speech could be solved by using human editors. We did not have this problem with traditional media, which was very good at keeping the public safe. But with machines we have a big problem. When Mark Zuckerberg was called to testify before Congress in 2018, he said it would be five years before AI was able to filter out fake news and hate speech.

Google's struggle in the coming decade will be about how a software engineering culture that depends on data to inform decisions figures out moral and ethical positions on some of society's most sensitive cultural and political issues.

Good luck with that.

(An earlier version of this column appears on ZDNET: Tom Foremski IMHO.)


About Silicon Valley Watcher

In 2004 Tom Foremski became the first journalist to leave a major newspaper, the Financial Times, to become a full-time journalist blogger. Arriving from London in 1984, he was one of the first journalists to cover Silicon Valley. He was named one of the "top 25 innovators of 2014" by the Holmes Report, and LinkedIn named him #4 in its Top Ten Media Writers of the Year.