Barnabé Monnot

Year 4 PhD candidate in Singapore, research in algorithmic game theory and large systems with a data-driven approach.

Publications

Posts

Teaching

News

CV

Fake news and info laundering

Yesterday, the New York Times published How to Fix Facebook? We Asked 9 Experts, evidently in response to the latest news that Facebook had handed Congress details about paid advertisements by foreign interests attempting to influence the outcome of the 2016 election. Fake news is front and center in the debate, and behind it lies the technical question of how to spot and defuse it. The article offers a few insights, including some that may well point to a solution.

The second comment, by Wired co-founder Kevin Kelly, is not so much a technical feat as a policy one: it essentially asks Facebook to adopt a Know-Your-Customer (KYC) policy, similar to the ones banks follow. This is indeed not unreasonable. Information is power, so unregulated access to that power (fueled by the advertisers' access to large amounts of money) leads to bad outcomes, such as 126 million users seeing the flagged ads.

In a way, fake news resembles money laundering: the process of turning “dirty” money into “clean” money by discreetly introducing it into the banking system, for instance through structuring (breaking the amount into smaller portions and re-aggregating them at the end of the process). Fake news can be understood as info laundering: taking a “dirty” (fake) piece of information and turning it into clean news (perhaps measured by its degree of influence, or how much trust is given to it) by introducing it into the system.

KYC allows banks to screen the myriad accounts that may be used to introduce the small amounts during the structuring phase. To break info laundering, KYC can likewise be employed to flag the small accounts that introduce bad ads into the system.

How is that technically feasible? Now that Facebook has submitted to Congress the details of advertisers engaged in fake news and influence campaigns, this information constitutes a ready-made labeled training set that its engineers can use to flag future bad accounts. This is the process that banks and regulated payment authorities use to detect fraud, and such techniques have been readily available for some time. It is also perhaps in line with Eli Pariser’s point that some of this information should be made available for research. Developing models to flag bad accounts emitting dirty info – which certainly differ from off-the-shelf credit card fraud detection methods – could be key to stopping the spread of fake news and info laundering.
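To make the idea concrete, here is a minimal sketch of what such a supervised flagging model could look like. The per-account features (ad spend, account age, share of posts that are paid ads) and the tiny labeled set are entirely hypothetical stand-ins for the data Facebook handed to Congress; the model is plain logistic regression trained by gradient descent, not Facebook's actual pipeline.

```python
import numpy as np

# Hypothetical per-account features:
# [ad spend (thousands of USD), account age (years), fraction of posts that are paid ads].
# Labels: 1 = account flagged in the labeled set, 0 = clean.
X = np.array([
    [50.0, 0.2, 0.90],   # high spend, brand-new account, mostly ads -> flagged
    [40.0, 0.5, 0.80],
    [60.0, 0.1, 0.95],
    [2.0, 6.0, 0.05],    # modest spend, old account, few ads -> clean
    [1.0, 4.0, 0.10],
    [3.0, 8.0, 0.02],
])
y = np.array([1, 1, 1, 0, 0, 0], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logistic(X, y, lr=0.1, steps=5000):
    # Standardize features so gradient descent behaves well.
    mu, sigma = X.mean(axis=0), X.std(axis=0)
    Xs = (X - mu) / sigma
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = sigmoid(Xs @ w + b)          # predicted flag probability
        w -= lr * (Xs.T @ (p - y)) / len(y)
        b -= lr * (p - y).mean()
    return w, b, mu, sigma

def flag_score(x, model):
    # Probability that a new account belongs to the flagged class.
    w, b, mu, sigma = model
    return sigmoid(((x - mu) / sigma) @ w + b)

model = train_logistic(X, y)
suspicious = np.array([55.0, 0.3, 0.85])  # resembles the flagged cluster
ordinary = np.array([1.5, 5.0, 0.08])     # resembles the clean cluster
# A suspicious-looking account should score much higher than an ordinary one.
print(flag_score(suspicious, model), flag_score(ordinary, model))
```

In practice the feature set would be far richer (payment origin, targeting patterns, content signals) and the model more sophisticated, but the workflow is the same: train on the labeled accounts, then screen new advertisers at the KYC stage.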

EDIT 01/11: I found this great post linking game theory and the spread of fake news, an interesting take that could offer ways to slow it down (e.g., rewarding non-fake news so that propagating fake ones is costlier?).

Back to posts