Who knew it could get this bad?

In the past two weeks, Facebook Inc. outed fake accounts that bear marks of Russian foreign agents, the U.S. president called out Twitter for allegedly suppressing the visibility of conservative politicians, and Apple Inc., Facebook, YouTube and Pinterest all removed accounts or content from alt-right figurehead Alex Jones and his Infowars media empire, sparking a discussion about whether bans by tech giants constitute an abridgement of free speech.

In one way or another, all these events were about decisions made by humans, but spurred or enforced by algorithms. The issues concern the values of these companies and how they overlap—or don’t—with the values of the American public. But they’re also about the inescapable reality that, given the vast scale of these internet services, those values must be translated into cold, hard code.

We have many of the artificial-intelligence tools needed to identify misinformation, bots, fake accounts, influence campaigns, echo chambers and the cognitive, social and algorithmic biases that combine to distort what we see and hear. There’s just no consensus on how to use them.

Tech giants already talk about what some of these tools accomplish, if not how they work. Facebook uses AI to flag everything from hate speech to suicide risk. Twitter uses it to flag potential harassers. And YouTube uses it to help keep terrorist content off the service.

Despite these efforts, researchers and policy wonks think what’s needed are stronger—and possibly concerted—commitments by the humans who run these firms. In the best scenario, tech giants would collaborate on tools in a way that would be transparent to the outside world.

Call in the bots

That filter bubbles, hate speech and harassment, and state-sponsored disinformation should all be monitored by a common set of tools is less about the flexibility of AI and more about the fact that these problems are linked.

Take the connection between terrorist accounts and disinformation campaigns. Both can be detected by the same methods, says Hamidreza Alvari, a graduate student at Arizona State University, who came up with a method for identifying the accounts of malicious actors on social networks.

While some attempts to detect social-media accounts of malicious actors rely on content or language filters that terrorists and disinformers have proved capable of confusing, Mr. Alvari’s algorithm looks for accounts that spread content further and faster than expected. Since this is the goal of terrorist recruiters and propagandists alike, the method could be on the front lines of algorithmic filtering across social networks. Humans still need to make the final determination, to avoid false positives.
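
That idea can be illustrated in a few lines of code. The sketch below is not Mr. Alvari’s actual algorithm; it simply flags accounts whose posts spread much further and faster than the network average, with the field names and cutoff chosen as assumptions for the example, and it leaves the final judgment to a human reviewer.

```python
from statistics import mean, stdev

def flag_suspicious_spreaders(posts, z_threshold=3.0):
    """Illustrative sketch only: flag accounts whose content spreads
    unusually far (reshares) and fast (reshares per hour) compared with
    the network average. Input: dicts with 'account', 'reshares',
    'hours_live'; the field names and the z-score cutoff are assumptions."""
    per_account = {}
    for p in posts:
        stats = per_account.setdefault(p["account"], {"reach": 0.0, "speed": 0.0, "n": 0})
        stats["reach"] += p["reshares"]
        stats["speed"] += p["reshares"] / max(p["hours_live"], 0.1)
        stats["n"] += 1

    if len(per_account) < 2:
        return []
    reach_avgs = [s["reach"] / s["n"] for s in per_account.values()]
    speed_avgs = [s["speed"] / s["n"] for s in per_account.values()]

    r_mu, r_sd = mean(reach_avgs), stdev(reach_avgs) or 1.0
    v_mu, v_sd = mean(speed_avgs), stdev(speed_avgs) or 1.0

    flagged = []
    for account, s in per_account.items():
        reach_z = (s["reach"] / s["n"] - r_mu) / r_sd
        speed_z = (s["speed"] / s["n"] - v_mu) / v_sd
        # Flag only when both reach and speed are far above expectation;
        # a human still makes the final call to avoid false positives.
        if reach_z > z_threshold and speed_z > z_threshold:
            flagged.append(account)
    return flagged
```

A production system would model expected spread per topic and per follower count rather than lean on a single network-wide baseline, but the outlier-hunting logic is the same.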

Algorithms could also be used to identify and disrupt social-media echo chambers, where people increasingly communicate with and witness the behavior of people who align with their own social and political views. The key would be showing users a deliberately more diverse assortment of content.

To quantify just how bad the echo-chamber effect is, Indiana University computer science professor Filippo Menczer and his colleagues examined how likely social networks are to direct users to information from just a handful of sources.

By this measure, the “homogeneity bias” of Facebook is worse than the bias in search engines. This suggests search engines are better at sending users to a variety of sources outside their “filter bubble.” Twitter turned out even worse than Facebook, and YouTube was worst of all.

Companies already constantly tweak their algorithms to increase engagement; measuring homogeneity bias and prioritizing its reduction could be incorporated into that process.
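
As a simplified illustration of what such a measurement might look like, the sketch below scores how concentrated the sources in a user’s feed are using a Herfindahl-style index over domains; this metric is an assumption chosen for the example, not necessarily the homogeneity-bias measure the Indiana researchers published.

```python
from collections import Counter
from urllib.parse import urlparse

def source_concentration(feed_urls):
    """Roughly 1/N for a feed drawing evenly on N sources, 1.0 for a feed
    fed by a single source. A Herfindahl-style stand-in, not the
    published homogeneity-bias metric."""
    domains = [urlparse(u).netloc.lower() for u in feed_urls if u]
    if not domains:
        return 0.0
    counts = Counter(domains)
    total = len(domains)
    return sum((c / total) ** 2 for c in counts.values())

# Example: a feed dominated by one outlet scores closer to 1.0.
feed = [
    "https://example-news.com/a", "https://example-news.com/b",
    "https://example-news.com/c", "https://other-site.org/d",
]
print(source_concentration(feed))  # 0.625 for this toy feed
```

A platform could track a score like this alongside its engagement metrics and treat a drop in concentration as a goal when it tunes its ranking algorithms.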

There’s also the possibility of training a deep-learning AI system on known malicious behavior, in order to identify tells that even humans might not be able to spot or describe. Researchers at universities in Hong Kong, Qatar and South Korea have had some success using this strategy to identify rumors on Chinese social-media site Sina Weibo.
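
As a rough sketch of how that training step works (with invented example posts, not the researchers’ model or their Weibo data), one could fit a small neural-network classifier on posts a human has already labeled as rumor or not, then use it to score new posts for review:

```python
# Toy sketch with made-up labels; the cited research trained deep models
# on large labeled corpora from Sina Weibo.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

labeled_posts = [
    ("Breaking: miracle cure suppressed by officials, share before it is deleted", 1),
    ("Forward this now, banks will freeze all accounts at midnight", 1),
    ("City council meeting moved to Thursday, agenda posted online", 0),
    ("New bus schedule takes effect next week, details on the transit site", 0),
]
texts = [text for text, _ in labeled_posts]
labels = [label for _, label in labeled_posts]

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
)
model.fit(texts, labels)

# Score an unseen post; anything above a chosen threshold goes to a human reviewer.
new_post = ["Share immediately: secret plan to shut the power grid tonight"]
print(model.predict_proba(new_post)[0][1])  # estimated probability it is a rumor
```

The hard part in practice is not the model but the labeled data: the approach only works once humans have identified enough examples of the behavior the system is supposed to learn.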

Bots and bias

Existing approaches to weeding out malicious content aren’t without issues. For instance, Twitter’s algorithms failed to auto-suggest conservative politicians when users typed their names in the service’s drop-down search feature. After President Trump and others accused Twitter of “shadow banning,” the company blamed its “behavioral ranking” algorithm and promised a fix.

Where is the line between maintaining quality of information and flat-out censorship? That’s the core question all tech giants are now wrestling with. Facebook Chief Executive Mark Zuckerberg recently said he’d rather not censor even speech as extreme as Holocaust denial, because it’s not Facebook’s responsibility to be the arbiter of truth.

Yet Facebook and other content companies remove content or limit its reach every day, and these decisions sometimes seem haphazard. Apple CEO Tim Cook’s decision to pull Infowars content apparently sparked action by other tech giants. Mr. Jones and Infowars have pushed unfounded theories that include calling the Sandy Hook Elementary School massacre in 2012 a hoax, and accusing prominent Democrats of running a global child-sex ring.

A way forward

One solution could be an independent but industry-sponsored organization that would allow companies to pool information about state-sponsored and malicious actors, and possibly tools for fighting them, says Jonathan Morgan, CEO of New Knowledge, a security company that protects companies from disinformation and social-media manipulation.

Such an organization already exists for coordinating action against terrorists online: the Global Internet Forum to Counter Terrorism. U.S. members, including Secretary of Homeland Security Kirstjen Nielsen, and U.S. tech giants are reluctant to expand its mission to include disinformation campaigns, but European members would like to.

Another model could be the nonprofit Information Sharing and Analysis Centers, covering sectors from financial services to electricity, which share information about cyber threats. This pooled resource allows a kind of herd immunity, where one attack victim can quickly alert others.

An industry organization that monitors malicious online behavior should be backstopped by some kind of government body, Mr. Morgan says, much as the FEC monitors election activity and the FCC rules on media matters.

Politics is an obstacle here. In the wake of the banning of Infowars from the “mainstream internet,” fans of the site and its ideology are accusing tech giants of conspiring to silence conservative voices.

Never in history has our reality been so affected by systems created by so few. The logic of the algorithms that power the internet is decided by a handful of companies. If we can’t live without these AIs, and their decisions are tied to everything from our First Amendment rights to the sanctity of our voting system, it seems that more transparency is needed. How can such decisions continue to be made without input from the millions whom they affect?

Write to Christopher Mims at christopher.mims@wsj.com
