Facial Recognition Tech Tested By UK Police Was Wrong 96% Of The Time According To Big Brother Watch


By Aaron Kesel

Facial recognition is highly flawed. Activist Post has consistently reported on studies finding that the technology's accuracy isn't all it's marketed to be. Now, a watchdog observing UK Metropolitan Police trials says the technology has misidentified members of the public as potential criminals in as many as 96 percent of matches, according to a press release from Big Brother Watch. Those misidentified include a 14-year-old black child in a school uniform who was stopped and fingerprinted by police.

In eight trials in London between 2016 and 2018, the technology generated "false positives": alerts that wrongly flagged individuals as crime suspects as they passed through an area covered by a facial recognition camera.
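To make the headline figure concrete, here is a minimal sketch of how a misidentification rate like the 96 percent cited above is computed from deployment tallies. The counts below are hypothetical, chosen only to illustrate the arithmetic; they are not Big Brother Watch's actual per-trial figures.

```python
# Hypothetical per-deployment tallies: total alerts raised by the
# system versus alerts later confirmed as genuine matches.
deployments = [
    {"alerts": 25, "confirmed_matches": 1},
    {"alerts": 50, "confirmed_matches": 2},
]

total_alerts = sum(d["alerts"] for d in deployments)
false_positives = sum(d["alerts"] - d["confirmed_matches"] for d in deployments)

# Misidentification rate = wrong alerts / all alerts
misidentification_rate = false_positives / total_alerts
print(f"Misidentification rate: {misidentification_rate:.0%}")  # 96%
```

Note that this rate is measured over the people the system flagged, not over everyone scanned, which is why it can be so high even when the vast majority of passers-by trigger no alert at all.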

Big Brother Watch, the watchdog organization that received the data through a freedom of information request, demanded police drop using the technology. Big Brother Watch further warned of the Orwellian consequences of using it, arguing that it “breaches fundamental human rights protecting privacy and freedom of expression.”

“This is a turning point for civil liberties in the UK. If police push ahead with facial recognition surveillance, members of the public could be tracked across Britain’s colossal CCTV networks," Director Silkie Carlo said. “For a nation that opposed ID cards and rejected the national DNA database, the notion of live facial recognition turning citizens into walking ID cards is chilling.”

Further, according to Big Brother Watch, police scored a 100% misidentification rate in two separate deployments at the Westfield shopping centre in Stratford, London. It is a horrifying thought that this technology is now being used to harass citizens as they shop.

Of course, we know that facial recognition technology is being, or will soon be, tested in UK supermarkets for the first time to verify the age of customers buying alcohol and cigarettes at special self-checkout machines, as Activist Post reported.

The company supplying the devices to be used in supermarkets, according to the Telegraph, is the U.S. company NCR, which makes self-checkout machines for Asda, Tesco, and other UK supermarkets.

NCR has announced the integration of facial recognition technology from Yoti with its “FastLane” tills within supermarkets.

FastLanes are currently used by UK retailers Tesco, Sainsbury's, Marks & Spencer, Boots, and WHSmith. While not all of these retailers will be part of the pilot program, it's important to note how widespread this could become.

Meanwhile, hundreds of retail stores, and soon thousands, are investigating another biometric facial recognition product called FaceFirst to build a database of shoplifters as an anti-theft measure, Activist Post reported.

FaceFirst is designed to scan faces from as far as 50 to 100 feet away. As customers walk through a store entrance, the video camera captures repeated images of each shopper and chooses the clearest one to store.

The software then analyzes that image and compares it to a database of “bad customers” that the retailer has compiled; if there is a match, the software sends an alert to store employees that a “high risk” customer has entered the door.
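The matching pipeline described above can be sketched roughly as follows. This is a generic illustration of watchlist matching, assuming a face-embedding step and a cosine-similarity comparison; the function names, sharpness metric, threshold, and data are all hypothetical stand-ins and do not reflect FaceFirst's actual internals.

```python
import math

def cosine_similarity(a, b):
    # Similarity between two face-embedding vectors (1.0 = identical direction)
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def best_frame(frames):
    # "Chooses the clearest one to store": keep the frame with the
    # highest sharpness score (stand-in for a real blur/quality metric)
    return max(frames, key=lambda f: f["sharpness"])

def check_watchlist(frames, watchlist, threshold=0.9):
    # Embed the clearest capture and compare it to the retailer's
    # "bad customer" database; alert staff on the first strong match.
    face = best_frame(frames)["embedding"]
    for entry in watchlist:
        if cosine_similarity(face, entry["embedding"]) >= threshold:
            return f"ALERT: high-risk match for {entry['label']}"
    return None

# Hypothetical data: repeated captures of one shopper, plus one database entry
frames = [
    {"sharpness": 0.4, "embedding": (0.1, 0.9, 0.2)},
    {"sharpness": 0.8, "embedding": (0.2, 0.9, 0.1)},
]
watchlist = [{"label": "record #17", "embedding": (0.21, 0.88, 0.12)}]

print(check_watchlist(frames, watchlist))
```

The `threshold` parameter is exactly where the false-positive problem discussed throughout this article lives: set it too low and innocent shoppers are flagged; set it too high and the system matches no one.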

The future of shopping seems to have biometric scanners written all over it, a worrying prospect for privacy advocates.

Several privacy advocate groups, attorneys, and even recently Microsoft, which also markets its own facial recognition system, have all raised concerns over the technology, pointing to issues of consent, racial profiling, and the potential to use images gathered through facial recognition cameras as evidence of criminal guilt by law enforcement.

“We don’t want to live in a world where government bureaucrats can enter in your name into a database and get a record of where you’ve been and what your financial, political, sexual, and medical associations and activities are,” Jay Stanley, an attorney with ACLU, told BuzzFeed News about the use of facial recognition cameras in retail stores. “And we don’t want a world in which people are being stopped and hassled by authorities because they bear resemblance to some scary character.”

The technology currently has a lot of problems. Activist Post recently reported that Amazon's own "Rekognition" software erroneously, and hilariously, matched 28 members of Congress to people who have been arrested for crimes, according to the ACLU. Maybe the technology was trying to tell us something? But then it should have labeled more than just African American members of Congress as criminals; either the technology has a racial bias, or this is just more evidence of how inaccurate it is.

Activist Post previously reported on another test of facial recognition technology in Britain that resulted in 35 false matches and one erroneous arrest. The technology has been demonstrated to be far from foolproof.

Many likely laughed at the paranoia this writer has expressed about facial recognition technology; however, vindication came swiftly when Amazon announced it wanted to create a "Crime News Network" to monitor neighborhoods with its Ring doorbell facial recognition cameras. At this point, they are literally recreating George Orwell's 1984 or reinventing the East German Stasi.

Amazon employees opposed to the company selling facial recognition technology to the government have protested the decision. More than 20 shareholder groups have sent letters to Amazon CEO Jeff Bezos urging him to stop selling the company's face recognition software to law enforcement.

“We are concerned the technology would be used to unfairly and disproportionately target and surveil people of color, immigrants, and civil society organizations,” the shareholders, which reportedly include Social Equity Group and Northwest Coalition for Responsible Investment, wrote. “We are concerned sales may be expanded to foreign governments, including authoritarian regimes.”

Another letter, sent in January 2019, was organized by Open Mic, a nonprofit focused on corporate accountability, and filed by the Sisters of St. Joseph of Brentwood; both letters warned that the technology poses "potential civil and human rights risks."

Numerous civil rights organizations have also co-signed a letter demanding Amazon stop assisting government surveillance, and several lawmakers have voiced concerns that the company's facial recognition software could be misused, The Hill reported.

The American Civil Liberties Union (ACLU) obtained hundreds of pages of documents showing Amazon offering the software to law enforcement agencies across the country.

In a 2018 report, the ACLU called Amazon’s facial recognition project a “threat to civil liberties.”

Amazon essentially shrugged off the employees' and shareholders' concerns: the head of the company's public sector cloud computing business stated that her team is "unwaveringly" committed to the U.S. government.

“We are unwaveringly in support of our law enforcement, defense and intelligence community,” Teresa Carlson, vice president of the worldwide public sector for Amazon Web Services, said July 20th at the Aspen Security Forum in Colorado, FedScoop reported.

Amazon has since released an update that, according to the company, fixed the lighting problems that caused inaccuracy in its systems.

This also follows a report by the U.S. Government Accountability Office (GAO) that the facial recognition technology the FBI is using for the Next Generation Identification-Interstate Photo System failed privacy and accuracy tests, as Activist Post reported.

In 2018 it was reported that the FBI and other law enforcement agencies were using this same Amazon Rekognition technology to sift through surveillance data.

Defense One reports that "AI-Enabled Cameras That Detect Crime Before it Occurs Will Soon Invade the Physical World"; such systems are in the works and were on display at ISC West, a recent security technology conference in Las Vegas.

Activist Post has previously argued that the rise of facial recognition technology is inevitable and that, as a result, the death of privacy is sure to come with it.

This writer continues to cover facial recognition technology, from Amazon helping law enforcement with its Rekognition software, to DHS wanting to use it for border control, to the Olympics wanting to use the tech for security.

It has now been reported that facial recognition has evolved: researchers at the University of Bradford have found that "facial recognition technology works even when only half a face is visible," according to EurekAlert. This upgraded technology hasn't been tested by police, to this writer's knowledge; let's hope it never is, for if it is, civil liberties and privacy will cease to exist.

Elsewhere in the world, facial recognition and other biometrics are beginning to emerge everywhere. In Malta, Prime Minister Joseph Muscat recently confirmed plans to add facial recognition to the CCTV surveillance cameras in zones around the country.

“The police are doing a good job but there’s a lot of work that still needs to be done to step up enforcement,” Muscat said in an interview on ONE Radio today. “We are looking into safe city concepts to prevent antisocial behaviour, whereby CCTV systems with technology that can identify law-breakers can do away with the need to have police stationed 24/7 in certain areas.”

Meanwhile, China is planning to merge its 170+ million security cameras with artificial intelligence and facial recognition technology to create a mega-surveillance state. This compounds with China’s “social credit system” that ranks citizens based on their behavior, and rewards and punishes depending on those scores.

Consent to be identified by the government whenever and wherever we go is approval to have the government decide whether, when, and where we are allowed to travel. Put bluntly: it is very dangerous.

The scary part is that intelligence agencies would be able to link their surveillance dragnet into CCTV cameras, and into companies like Facebook that utilize the technology, to track someone's location in real time.

For more on facial recognition technology and what’s to come for our future, see this writer’s previous article “The Rise Of Facial Recognition Technology Is Now Inevitable.”

This writer is not sure what's worse: the technology being inaccurate, or the technology evolving past these inaccuracies. In other words, citizens being mindlessly harassed, or the soon-to-be-manifested surveillance state that several private companies are creating, one that won't just be used by police but also by businesses as an anti-theft measure, as has been reported before. This raises questions about protecting citizens' privacy: how are these databases secured, from an information technology point of view, and how long are these companies allowed to retain data obtained without a warrant by a simple scan of a person's face as they walk past facial recognition-equipped cameras in a public place like a shopping center?

Image credit: EFF.org

By @An0nkn0wledge

Aaron Kesel writes for Activist Post. Support us at Patreon. Follow us on Minds, Steemit, SoMee, BitChute, Facebook and Twitter. Ready for solutions? Subscribe to our premium newsletter Counter Markets.

Provide, protect and profit from what is coming! Get a free issue of Counter Markets today.


The problem is that facial recognition software is racist and sexist.

No, really, that is its biggest problem, and if you understand that, you know just how stupid this technology is right now.

The #1 thing a facial recognition software can detect is race!
With somewhere around 80% accuracy.

The #2 thing a facial recognition software can detect is sex.
With around 50% accuracy.

Everything else is much lower in accuracy.
So, the developers are trying to keep under wraps that the software is sexist and racist.
Even going so far as to seriously handicap the software, so that this doesn't leak out.
You know, things like f-c-book identifying apes as black people. Or black people as apes.

So, what is really happening?
Stores are not pushing forward this technology. Someone is paying them to do this.

T.H.E.Y. want the power of being able to track everyone. It is their wet dream. So, we will see all kinds of roll outs of this broken technology.

And it will be used to bust people for stupid crimes, even if they are completely innocent.

Interesting, I've played with facial recognition a little myself and I've seen it fairly successful, but only really ever put 2 faces to the test haha.

I think it's actually far worse if the technology worked perfectly.

There are a ton of laws in every country, which means that many people break laws multiple times per week, if not per day. If there were perfect enforcement, everyone would be in prison, or fined multiple times per day, or perhaps arrested several times per week. Maybe tased by a drone.

The way the world is set up currently, it relies on many people getting away with crimes but being fearful that they might get arrested, and thus not doing it. How many people stole something as a child? Imagine if your kid took something off the shelf, and from then on, you could never bring him into a store without him being red-flagged. Or what if you actually did steal something and were then harassed every single time you went into a store, or even just followed around or given the evil eye because the monitor told staff that you stole something once.

What about ex drug addicts? What if the system alerted employees when they went certain places, like stores, so they could never again buy cold medicine without being distrusted, or would be considered a security risk in a hospital?

Even the Bible is fully aware that humans are sinners. If we're gonna institute a system that identifies and tracks everyone, we have to make a lot less things illegal, and establish limits on what can be tracked, who gets access to what information, how long you'll be in the system, etc. It's good it's not working, because we don't need more people ending up in jail, which doesn't help anyone anyway.

Good post.Let's stop this control mess.
Resteemed.

This article should be in a museum. Technology deployed in 2016, which had been in development for several years, is more relevant to the Precambrian era than to 2019.

These tools advance more in 90 days than the automobile industry did in any 20 year period.

Racism? This is a First World problem. Discuss this with Chinese in China, Filipinos in the Philippines, etc., etc. I have; there is absolutely no concern or care about Western views of racism.

Just for a few minutes, set aside the topic and insert any typical digital technology development. None have been stopped. None. If the West is appalled by it, the East adopts it. If governments hate it, the people love it. AOL was a third-generation company, yet it rose from the ashes of two failed companies to give rise to the internet, only to fall itself and become an even larger corporate digital media conglomerate.

I'm old enough to remember when it was illegal to take encryption software outside the USA. Politicians, 4-star generals and media mouthpieces proclaimed how terrorists and drug cartels would use it to end Western civilization as we knew it... and here we are, globally using some of the most perfect cryptography known to mankind to send messages.

Let that sink in a minute and explain to me how facial recognition can be stopped. Okay, time's up. It will not be stopped.

It would only take three studies from universities such as Johns Hopkins publishing (and by the way, they are most likely already written) how this technology will save your grandmother's life, reduce child trafficking, "insert whatever pulls at the heart strings of a nation"; then MSM will proclaim it for weeks with so-called experts, and you are on the fast track to public acceptance.

This, just as every past "hair on fire" issue that comes up, shall also come to pass.

In the near future an AI will methodically comb through this post and its comments, think for a moment, then chuckle a few ones and zeros simultaneously from its quantum computing brain.

Cheers

