Proof of Work & Fake News


Can we use "Proof of Work" in order to reduce fake news?

This post is mostly on two talks I've attended this month: the first is with Bram Cohen the founder of Bittorrent, who offered the "proof of space and time" and presented his own cryptocurrency, Chia. The second is Prof. Lawrence Lessig, the person who got me into cyberlaw to begin with. I've finally met Prof. Lessig this week, after attending conferences where he presented remotely, when he visited Israel to celebrate the 20th anniversary of the Freedom of Information Act.


Prof. Lessig had an intimate discussion with members of the Israeli Digital Rights Movement about technological apparatuses to reduce hostile interference by bad actors in political decision-making. Be it a state that wants to interfere with another state's election, or a faction inside a state that wishes to change the rules and use behavioral targeting to insert "fake news" or extreme opinions in order to cultivate extreme views among those who are prone to violent beliefs.

The discussion with Prof. Lessig brought me back to spam, and more precisely, to hashcash: Adam Back's invention from 1997, which was meant to solve the email-spam problem of the time. Since then, machine learning has advanced enough that we no longer have a serious spam issue: Gmail has been quite vigilant, sometimes too vigilant, in protecting our peace and quiet, at the price of harming and impeding free speech.

Why? Well, if you consider it, **spam is speech**.

For the same reason, we do not want to put a cap on "fake news" (fake news, for this purpose, means automated, machine-generated posts meant to incite or steer people toward specific actions using behavioral targeting, not "just something we don't agree with" or "incorrect facts"). Let's get back to basics: fake news is speech, and as speech it is protected.

We envision a marketplace of speech in which truth triumphs: one where spreading too much misinformation eventually puts a price tag on the speaker, who is required to provide the facts backing their claims. However, this is hardly the case today.

So, what can we do to fight fake news (as I have described it, not other "fake news")? We can do what Adam Back proposed twenty years ago, as an integral part of any social network.

Adam's proposal (and I'm leaving the math out, because I can't explain it as elegantly as mathematicians could) was that requiring computing power for speech puts a price tag on it. If, for example, sending an email message costs my computer one second of CPU time, then a legitimate actor sending a thousand emails per day has no problem. A spammer sending millions of messages, however, will need far more computing power.
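
To make the idea concrete, here is a minimal sketch of a hashcash-style stamp in Python, assuming a SHA-256 puzzle with a fixed number of leading zero bits; the function names and stamp format are illustrative, not Adam Back's original specification:

```python
import hashlib
import itertools

def mint_stamp(message: str, difficulty_bits: int = 20) -> int:
    """Find a nonce so that SHA-256(message + nonce) falls below a target
    with `difficulty_bits` leading zero bits. The sender must grind through
    nonces; there is no shortcut."""
    target = 1 << (256 - difficulty_bits)
    for nonce in itertools.count():
        digest = hashlib.sha256(f"{message}:{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce

def verify_stamp(message: str, nonce: int, difficulty_bits: int = 20) -> bool:
    """Verification costs a single hash, so the recipient's work is negligible."""
    target = 1 << (256 - difficulty_bits)
    digest = hashlib.sha256(f"{message}:{nonce}".encode()).digest()
    return int.from_bytes(digest, "big") < target
```

At around 20 bits of difficulty, minting a stamp takes on the order of a second of commodity CPU time, while verifying it takes a single hash. That asymmetry is exactly what makes a thousand emails cheap and a million prohibitively expensive.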

How do I envision CPU for speech in general? The general rule is that in order to post something on a social network, the person's processor will need to solve a hash puzzle, finding a value below a certain target, and the difficulty shall increase (meaning harder puzzles to solve) as a person gains more followers and greater reach.

Now, people will be able to delegate their computing power to such a speaker if they believe in them (just as a person's voting power can be delegated on Steem). This means that someone who doesn't have much to say, but is a legitimate actor, can vouch for CNN and lend them computing power. CNN, in turn, would have to provide an enormous amount of processing power in order to reach a bigger audience.
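
One way such delegation could be modeled, loosely analogous to delegating voting power on Steem, is a simple ledger of CPU budget; the names here (ComputeLedger, delegate, spend) are mine, purely for illustration:

```python
from collections import defaultdict

class ComputeLedger:
    """Toy ledger of delegated CPU budget, measured in CPU-seconds.
    Followers who trust a publisher pledge work on its behalf; every
    exposure the publisher buys draws that budget down."""

    def __init__(self):
        self.budget = defaultdict(float)

    def delegate(self, supporter: str, publisher: str, cpu_seconds: float) -> None:
        # The supporter contributes proof-of-work capacity to the publisher.
        self.budget[publisher] += cpu_seconds

    def spend(self, publisher: str, cpu_seconds: float) -> bool:
        # Reaching a larger audience costs more; refuse if the budget is short.
        if self.budget[publisher] < cpu_seconds:
            return False
        self.budget[publisher] -= cpu_seconds
        return True

ledger = ComputeLedger()
ledger.delegate("alice", "CNN", 5.0)
ledger.delegate("bob", "CNN", 5.0)
print(ledger.spend("CNN", 8.0))  # True: enough delegated budget for this reach
print(ledger.spend("CNN", 8.0))  # False: the budget is exhausted
```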

Now, this means that for each exposure of a message the speaker has to provide CPU time. Built into the social network's structure, this in turn means less spam.

Why? Well, because in order to post even one fake news story and reach millions of readers, a person would have to supply an enormous amount of computing power.

We can use existing algorithms such as CryptoNight or similar ASIC-resistant algorithms (do note that, like Casio watches, ASIC-resistant is not ASIC-proof), but I would very much like to avoid any proof-of-stake here, for one reason: we need to avoid putting a monetary price on speech. Putting a CPU price on speech should be considered extreme as well; however, within the realm of protected speech its impact would be quite limited. The growth of difficulty should be exponential: sending a message to fewer than 150 people (a number chosen for the Dunbar number, which holds that a person is unlikely to maintain more than 150 stable social relationships) shall be almost CPU-free, sending to 1,000 will require a few seconds of CPU time, and sending to 100,000 shall require something like an hour of CPU time.
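
A rough sketch of that exponential schedule, with constants chosen only to match the examples above (near-free under 150 recipients, a few seconds around 1,000, roughly an hour at 100,000), could look like this:

```python
import math

DUNBAR = 150         # audience size below which posting is essentially free
BASE_SECONDS = 5.0   # target cost at ~1,000 recipients
GROWTH = 720 ** 0.5  # chosen so that 100,000 recipients cost roughly an hour

def cpu_cost_seconds(audience: int) -> float:
    """Hypothetical difficulty schedule: negligible below the Dunbar number,
    then growing exponentially with each order of magnitude of audience."""
    if audience <= DUNBAR:
        return 0.0
    return BASE_SECONDS * GROWTH ** (math.log10(audience) - 3)

for n in (100, 1_000, 10_000, 100_000, 1_000_000):
    print(f"{n:>9} recipients -> {cpu_cost_seconds(n):8.1f} CPU-seconds")
```

With these constants, a post to a million readers costs around a day of CPU time, which is the point: reach beyond one's natural social circle becomes progressively more expensive to manufacture.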

I believe that implementing this as part of any social network would reduce "fake news" better than any human fact-checking effort or any AI built to spot fake news.
