
Artificial Intelligence (AI) and Machine Learning are used to power a variety of important modern software technologies. For instance, AI powers analytics software, Google’s bugspot tool, and code compilers for programmers. AI also powers the facial recognition software commonly used by law enforcement, landlords, and private citizens.

Of all the uses for AI-powered software, facial recognition is a big deal. Security teams from large buildings that rely on video surveillance – like schools and airports – can benefit greatly from this technology. An AI algorithm has the potential to detect a known criminal or an unauthorized person on the property. Some systems can identify guns while others can track each individual’s movements and provide a real-time update regarding their location with a single click.

Facial recognition software has phenomenal potential

Police in the U.S. have used facial recognition software to successfully identify mass shooting suspects. Police in New Delhi, India, used this tech to identify close to 3,000 missing children in four days. AI-powered software scanned 45,000 photos of children living in orphanages and foster homes and matched 2,930 kids to photos in the government’s lost child database. That’s an impressive success rate.

Facial recognition software is also used by governments to help refugees find their families through the online database called REFUNITE. This database combines data from multiple agencies and allows users to perform their own searches.

Despite the potential, AI-powered software is biased

Facial recognition software is purported to enhance public safety since AI algorithms can be more accurate than the human eye. However, that’s only true if you’re a white male. The truth is, artificial intelligence algorithms have an implicit bias against women and people with dark skin. That bias is present in two major types of software: facial recognition software and risk assessment software.

For instance, researchers from MIT’s Media Lab ran an experiment in which facial recognition software misidentified dark-skinned women as men up to 35% of the time. Error rates were highest for women and for people with dark skin, and highest of all for dark-skinned women.
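To make the methodology concrete, here is a minimal sketch of the kind of per-group error audit such studies perform. The records, group labels, and predictions below are hypothetical placeholders, not the MIT data.

```python
# Hypothetical bias audit: compare misclassification rates across groups.
# The records below are invented for illustration; they are not MIT's data.
from collections import defaultdict

# Each record: (demographic group, true gender, gender predicted by the model)
results = [
    ("light-skinned male",   "male",   "male"),
    ("light-skinned female", "female", "female"),
    ("dark-skinned male",    "male",   "male"),
    ("dark-skinned female",  "female", "male"),    # misclassified
    ("dark-skinned female",  "female", "female"),
    ("dark-skinned female",  "female", "male"),    # misclassified
]

errors, totals = defaultdict(int), defaultdict(int)
for group, truth, predicted in results:
    totals[group] += 1
    errors[group] += int(truth != predicted)

for group, total in totals.items():
    print(f"{group}: {errors[group] / total:.0%} error rate over {total} samples")
```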

Another area of bias is seen in risk assessments. Some jails use a computer program to predict the likelihood of each inmate committing a crime in the future. Unfortunately, time has already shown these assessments are biased against people with dark skin, who are generally scored as a higher risk than light-skinned people. The problem is that risk assessment scores are used by authorities to inform decisions as a person moves through the criminal justice system. Judges frequently use these scores to determine bond amounts and whether a person should receive parole.

In 2014, U.S. Attorney General Eric Holder called for the U.S. Sentencing Commission to study the use of risk assessment scores because he saw the potential for bias. The commission chose not to study risk scores. However, an independent, nonprofit news organization called ProPublica studied the scores and found them to be remarkably unreliable in forecasting violent crime. They studied more than 7,000 people in Broward County, Florida, and found that only 20% of people predicted to commit violent crimes actually did.
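In other words, the scores’ precision on violent crime was roughly 20%. A back-of-the-envelope version of that figure, using round illustrative numbers rather than ProPublica’s raw counts:

```python
# Illustrative arithmetic only; these are round numbers, not ProPublica's data.
flagged_as_high_risk = 1_000   # people the score predicted would commit a violent crime
actually_committed = 200       # of those, the number who actually did
precision = actually_committed / flagged_as_high_risk
print(f"Precision of the violent-crime prediction: {precision:.0%}")   # 20%
```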

This bias has been known for quite some time, yet experts have yet to create a solution. People wouldn’t be so alarmed at the error rate if the technology wasn’t already in use by governments and police.

The ACLU concluded facial recognition software used by police is biased

In 2018, the American Civil Liberties Union (ACLU) ran a test to see if Amazon’s facial recognition software used by police has a racial bias. The results? Twenty-eight U.S. Congress members were falsely matched with mugshots, including California representative and Harvard graduate Jimmy Gomez. The ACLU’s test revealed 40% of false matches involved people of color.

Despite the large error rate, Amazon’s facial recognition tool (Rekognition) is already in use by police. Civil liberties groups and lawmakers are deeply concerned that using this software as-is could harm minorities. Activists are calling for government regulation to prevent abuse, arguing that the software is going mainstream too soon.

Are governments suppressing AI’s racial bias?

For two years in a row, Canadian immigration authorities denied visas to approximately two dozen AI academics hoping to attend a major conference on artificial intelligence. Researchers from the group Black in AI were planning to educate people about AI’s racial bias but were denied visas in 2018 and 2019. After the group pressured the government, some denials were reversed in 2019.

The Canadian government denied the visas, claiming it had no assurance the researchers would leave Canada at the end of their visit. The group and many of its supporters don’t believe the visa denials were legitimate. Canada’s economy routinely benefits from overseas visitors, who spent more than $21 billion in 2018. Why would Canada deny that many visas two years in a row unless it were trying to keep the researchers from voicing their concerns?

Although there’s no direct evidence of intentional suppression, the whole situation is odd and deserves to be thoroughly investigated.

Why does AI struggle to identify women and people with dark skin?

Bias against women and people of color has existed in AI-powered software for years, even before facial recognition went mainstream.

Due to lower color contrast, it makes sense that darker skin would make it harder for computer algorithms to identify facial features. It’s also possible that the photos used to train AI systems include more light-skinned people and more males than dark-skinned people and females. Both factors likely contribute to the problem.

Computers might have a hard time identifying facial features when women are wearing makeup to hide wrinkles or when they have a short haircut. AI-powered algorithms can only be trained to recognize patterns; if short hair is registered as a factor that indicates a male, that will skew results.

While the issue appears straightforward, there’s one factor that some facial recognition critics aren’t accounting for: the racial and gender bias seems to show up in facial analysis rather than facial recognition. The two terms are often used interchangeably, but they are distinct processes.

Facial recognition vs. facial analysis

When MIT conducted a study with facial recognition tools from Microsoft and IBM, they found those tools had lower error rates than Amazon’s Rekognition. In response, Amazon disputed the results of MIT’s study, claiming researchers used “facial analysis” and not “facial recognition” to test for bias.

Facial recognition identifies facial features and attempts to match a face to an existing database of faces. Facial analysis uses facial characteristics to infer attributes such as gender or race, or to detect something like a fatigued driver. An Amazon spokesperson says it doesn’t make sense to use facial analysis to gauge the accuracy of facial recognition, and that’s a fair claim.
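The distinction is easier to see in code. The sketch below is a simplified, hypothetical illustration (the embedding function, the gallery, and the attribute classifier are placeholders, not any vendor’s actual API): recognition matches a face against a database of known identities, while analysis infers attributes of a single face without any database.

```python
import numpy as np

def embed(face_image: np.ndarray) -> np.ndarray:
    """Placeholder for a real face-embedding model."""
    return face_image.flatten()[:128].astype(float)

# Facial recognition needs a gallery of known identities to match against.
GALLERY = {
    "person_a": np.random.rand(128),
    "person_b": np.random.rand(128),
}

def recognize(face_image: np.ndarray, threshold: float = 0.8):
    """Facial *recognition*: find the closest identity in an existing database."""
    query = embed(face_image)
    best_id, best_score = None, -1.0
    for identity, stored in GALLERY.items():
        score = float(np.dot(query, stored) /
                      (np.linalg.norm(query) * np.linalg.norm(stored) + 1e-9))
        if score > best_score:
            best_id, best_score = identity, score
    return best_id if best_score >= threshold else None   # "no match" is a valid outcome

def analyze(face_image: np.ndarray) -> dict:
    """Facial *analysis*: estimate attributes of one face; no database involved."""
    features = embed(face_image)
    return {"estimated_gender": "female" if features.mean() > 0.5 else "male"}

face = np.random.rand(64, 64)            # stand-in for a captured video frame
print("recognition:", recognize(face))   # an identity from GALLERY, or None
print("analysis:", analyze(face))        # attributes inferred from the face alone
```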

While the two processes are not the same, facial analysis still plays a significant role in identifying suspects and should be more accurate before being used by police. For instance, if a suspect is captured on video but can’t be clearly seen, has no previous arrests, and can’t be matched to a database, facial analysis will be used to obtain the suspect’s identity. If that suspect is a female wrongly identified as a male, they might never be found.

Are we using facial recognition software too soon?

While it’s not a surprise, it’s a disappointment to know that biased software is being deployed in situations that can have serious consequences.

While the benefits of using facial recognition software are clear, it’s time for this technology to be regulated and for developers to be forced to improve its accuracy before it’s deployed in high-stakes situations.

Frank Landman

Frank is a freelance journalist who has worked in various editorial capacities for over 10 years. He covers trends in technology as they relate to business.


Suspect can’t be compelled to reveal “64-character” password, court rules


The Fifth Amendment to the US Constitution bars people from being forced to turn over personal passwords to police, the Pennsylvania Supreme Court ruled this week.

In a 4-3 ruling, justices from Pennsylvania’s highest court overturned a lower-court order that required the suspect in a child-pornography case to turn over a 64-character password to his computer. The lower-court ruling had held that the compelled disclosure didn’t violate the defendant’s Fifth Amendment rights because of statements he made to police during questioning.

“It’s 64 characters and why would I give that to you,” Joseph J. Davis of Pennsylvania’s Luzerne County told investigators in response to their request for his password. “We both know what’s on there. It’s only going to hurt me. No fucking way I’m going to give it to you.”

A foregone conclusion

Prosecutors in the case said a legal doctrine known as the “foregone conclusion exception” permitted the compelled disclosure of Davis’ password. The doctrine, which originally applied to the compelled production of paper documents, holds that Fifth Amendment protections against self-incrimination don’t apply when the government already knows of the existence, location, and content of the sought-after material.

In requiring Davis to turn over his password to investigators, the lower court agreed with prosecutors that the password demand fell under the foregone conclusion exception. The lower court said the exception applied because, under previous US Supreme Court precedent, the password was tantamount to a key or other tangible property and didn’t reveal the “contents” of the defendant’s mind.

The Pennsylvania Supreme Court’s majority disagreed. In a ruling handed down on Wednesday, Justice Debra Todd wrote for the majority:

Based upon these cases rendered by the United States Supreme Court regarding the scope of the Fifth Amendment, we conclude that compelling the disclosure of a password to a computer, that is, the act of production, is testimonial. Distilled to its essence, the revealing of a computer password is a verbal communication, not merely a physical act that would be nontestimonial in nature. There is no physical manifestation of a password, unlike a handwriting sample, blood draw, or a voice exemplar. As a passcode is necessarily memorized, one cannot reveal a passcode without revealing the contents of one’s mind. Indeed, a password to a computer is, by its nature, intentionally personalized and so unique as to accomplish its intended purpose―keeping information contained therein confidential and insulated from discovery. Here, under United States Supreme Court precedent, we find that the Commonwealth is seeking the electronic equivalent to a combination to a wall safe—the passcode to unlock Appellant’s computer. The Commonwealth is seeking the password, not as an end, but as a pathway to the files being withheld. As such, the compelled production of the computer’s password demands the recall of the contents of Appellant’s mind, and the act of production carries with it the implied factual assertions that will be used to incriminate him. Thus, we hold that compelling Appellant to reveal a password to a computer is testimonial in nature.


Understanding where we are in the pursuit of self-driving cars can be as confusing as understanding where we are in the pursuit of AI. Over the past few years, the flood of companies entering the space and the constant news updates have made it seem as if fully autonomous vehicles are just barely out of reach. The past couple weeks have been no different: Uber announced a new CEO and $1 billion investment for its self-driving unit, Waymo launched a ride-hailing app to open up its service to more riders in Phoenix, and Tesla unveiled a new custom AI chip that promises to unlock full autonomy.

But driverless vehicles have stayed in beta, and carmakers have wildly differing estimates of how many years we still have to go. In early April, Ford CEO Jim Hackett expressed a conservative stance, admitting that the company had initially “overestimated the arrival of autonomous vehicles.” It still plans to launch its first self-driving fleet in 2021, but with significantly dialed-back capabilities. In contrast, Tesla’s chief, Elon Musk, bullishly claimed that self-driving technology will likely be safer than human intervention in cars by 2020. “I’d be shocked if it’s not next year at the latest,” he said.

I’m not in the business of prediction. But I recently sat down with Amnon Shashua, the CEO of Mobileye, to understand the challenges of reaching full autonomy. Acquired by Intel in 2017, the Israel-based maker of self-driving tech has partnerships with more than two dozen carmakers and has become one of the leading players in the space.

Shashua presented challenges in technology, regulation, and business.

Building a safe car. From a technical perspective, Shashua splits driverless technology into two parts: its perception and its decision-making capabilities. The first challenge, he says, is to build a self-driving system that can perceive the road better than the best human driver. In the US, the current car fatality rate is about one death per 1 million hours of driving. Without drunk driving or texting, the rate probably decreases by a factor of 10. Effectively that means a self-driving car’s perception system should fail, at an absolute maximum, once in every 10 million hours of driving.

But currently the best driving assistance systems incorrectly perceive something in their environment once every tens of thousands of hours, Shashua says. “We’re talking about a three-orders-of-magnitude gap.” In addition to improving computer vision, he sees two other necessary components to closing that gap. The first is to create redundancies in the perception system using cameras, radar, and lidar. The second is to build highly detailed maps of the environment to make it even easier for a car to process its surroundings.
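As a rough check on that gap, the arithmetic below simply plugs in the approximate figures quoted above (all of them Shashua’s estimates, not measured values):

```python
# Back-of-the-envelope arithmetic using the approximate figures quoted above.
human_fatality_rate = 1 / 1_000_000               # ~1 death per 1 million driving hours
attentive_human_rate = human_fatality_rate / 10   # without drunk or distracted driving
target_failure_rate = attentive_human_rate        # ~1 failure per 10 million hours

current_perception_error_rate = 1 / 10_000        # ~1 misperception per tens of thousands of hours

gap = current_perception_error_rate / target_failure_rate
print(f"Target: one failure per {1 / target_failure_rate:,.0f} hours")
print(f"Today:  roughly one error per {1 / current_perception_error_rate:,.0f} hours")
print(f"Gap:    about {gap:,.0f}x, i.e. roughly three orders of magnitude")
```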

Building a useful car. The second challenge is to build a system that can make reasonable decisions, such as how fast to drive and when to change lanes. But defining what constitutes “reasonable” is less a technical challenge than a regulatory one, says Shashua. Anytime a driverless car makes a decision, it has to make a trade-off between safety and usefulness. “I can be completely safe if I don’t drive or if I drive very slowly,” he says, “but then I’m not useful, and society will not want those vehicles on the road.” Regulators must therefore formalize the bounds of reasonable decision-making so that automakers can program their cars to act only within those bounds. This also creates a legal framework for evaluating blame when a driverless car gets into an accident: if the decision-making system did in fact fail to stay within those bounds, then it would be liable.
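As a toy illustration of what acting only within regulator-defined bounds could look like, the sketch below checks a proposed maneuver against a hypothetical minimum following-gap rule. The rule, the numbers, and the data structures are assumptions made for illustration, not any regulator’s standard or Mobileye’s actual model.

```python
from dataclasses import dataclass

@dataclass
class Maneuver:
    speed_mps: float        # proposed speed, in meters per second
    gap_to_lead_m: float    # distance to the vehicle ahead, in meters

# Hypothetical regulator-defined bound: keep at least a 2-second following gap.
MIN_TIME_GAP_S = 2.0

def within_bounds(m: Maneuver) -> bool:
    """Return True only if the proposed maneuver respects the formalized bound."""
    required_gap_m = m.speed_mps * MIN_TIME_GAP_S
    return m.gap_to_lead_m >= required_gap_m

# The planner would only execute maneuvers that pass this check; after an
# accident, liability could be assessed by testing whether the check was violated.
proposed = Maneuver(speed_mps=25.0, gap_to_lead_m=40.0)   # ~90 km/h with a 40 m gap
print(within_bounds(proposed))   # False: at 25 m/s the rule requires at least 50 m
```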

Building an affordable car. The last challenge is to create a cost-effective car, so consumers are willing to switch to driverless. In the near term, with the technology still at tens of thousands of dollars, only a ride-hailing business will be financially sustainable. In that context, “you are removing the driver from the equation, and the driver costs more than tens of thousands of dollars,” Shashua explains. But individual consumers would probably not pay a premium over a few thousand dollars for the technology. In the long term, that means if automakers intend to sell driverless passenger cars, they need to figure out how to create much more precise systems than exist today at a fraction of the cost. “So the robo-taxi—we’re talking about the 2021, 2022 time frame,” he says. “Passenger cars will come a few years later.”

Mobileye is now working to overcome these challenges on all fronts. It has been refining its perception system, creating detailed road maps, and working with regulators in China, the US, Europe, and Israel to standardize the rules of autonomous driving behavior. (And it’s certainly not alone: Tesla, Uber, and Waymo are all pursuing similar strategies.) The company plans to launch a driverless robo-taxi service with Volkswagen in Tel Aviv by 2022.

This story originally appeared in our Webby-nominated AI newsletter The Algorithm.


Coralogix, a startup that wants to bring automation and intelligence to logging, announced a $10 million Series A investment today.

The round was led by Aleph with participation from StageOne Ventures, Janvest Capital Partners and 2B Angels. Today’s investment brings the total raised to $16.2 million, according to the company.

CEO and co-founder Ariel Assaraf says his company focuses on two main areas: logging and analysis. The startup has been doing traditional application performance monitoring up until now, but today it also announced it was getting into security logging, where it tracks logs for anomalies and shares this information with security information and event management (SIEM) tools.

“We do standard log analytics in terms of ingesting, parsing, visualizing, alerting and searching for log data at scale using scaled, secure infrastructure,” Assaraf said. In addition, the company has developed a set of algorithms that analyze the data, learn patterns of expected behavior, and use those patterns to recognize and resolve problems in an automated fashion.

“So the idea is to generally monitor a system automatically for customers plus giving them the tools to quickly drill down into data, understand how it behaves and get context to the issues that they see,” he said.

For instance, the tool could learn that a certain sequence of events, such as a user logging in, being authenticated, and then being redirected to the application or website, happens the same way every time. If something deviates from that pattern, the system will recognize it and alert the DevOps team that something is amiss.
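A minimal sketch of that idea, with hypothetical event names and a deliberately simple matching rule (this is only an illustration of the general approach, not Coralogix’s actual algorithms): learn the expected sequence from past sessions, then flag sessions that deviate from it.

```python
# Illustrative only: flag log sessions that deviate from an expected event sequence.
EXPECTED_SEQUENCE = ["login_attempt", "auth_success", "redirect_to_app"]  # hypothetical events

def is_anomalous(session_events: list[str]) -> bool:
    """Return True if a session's events deviate from the learned sequence."""
    return session_events != EXPECTED_SEQUENCE

sessions = {
    "session_1": ["login_attempt", "auth_success", "redirect_to_app"],
    "session_2": ["login_attempt", "auth_failure", "login_attempt", "auth_failure"],
}

alerts = [sid for sid, events in sessions.items() if is_anomalous(events)]
print("Notify the DevOps team about:", alerts)   # ['session_2']
```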

The company, which has offices in Tel Aviv, San Francisco and Kiev, was founded in 2015. It already has 1,500 customers, including Postman, Fiverr, KFC and Caesars Palace. They’ve been able to build the company with just 30 people to this point, but want to expand the sales and marketing team to help build out the customer base further. The new money should help in that regard.



Steve O’Hear / TechCrunch:

Leavy.co, a travel app that pays users upfront when they put their rooms up for rent while traveling, discloses $14M seed funding from January, led by Prime Ventures — Leavy.co, the Paris-born startup that offers a travel app for millennials to help them travel more without getting into further debt …


