
The Promise of Artificial Intelligence Flourishes Only in the Absence of Bias

The global dialogue concerning the future of artificial intelligence (AI), at least in the media, appears to have become very binary. On one side of the debate, technophiles excitedly praise the coming singularity, when machines and humans will merge. Across the aisle, visionaries like Elon Musk are portrayed as neo-Luddites, prophets of doom offering grim auguries of a robot-spawned dystopia. The reality, as always, lies somewhere between the two sensationalized extremes. And on one point, I believe both sides agree. Yes, AI promises a wealth of advances that can better society -- and the people who work in it. However, it can also deliver some disastrous results, as we’ve already seen. The solution comes down to this: AI will learn what we humans teach it. The best way to ensure our mutual success is to approach machine learning the way we should approach hiring: by eliminating bias from the process.

The Digital, the Divine and the Data

Artificial intelligence holds much promise for business and society. In our industry, AI is streamlining recruiting, creating meaningful employee experiences, and reshaping the technologies that help us create genuine talent ecosystems. As for a singularity, we’re already witnessing the stirrings. Nanotechnologies in medicine, for example, could potentially repair damaged tissues, eliminate cancerous cells, and more. Yet the idea of robots interacting with people has stoked understandable fears.

Consider an article by John Brandon in VentureBeat. “In the next 25 years, AI will evolve to the point where it will know more on an intellectual level than any human,” he noted. “In the next 50 or 100 years, an AI might know more than the entire population of the planet put together.” What are the possible ramifications? Well, some are astounding. Others, a little more dire.

Anthony Levandowski, the engineer behind Google’s self-driving vehicles, recently filed paperwork to create a non-profit religious group called Way of the Future, which seeks to promote the worship of a deity based on artificial intelligence. And AI experts such as Vince Lynch believe the concept is entirely feasible. In fact, Lynch produced a simplified model that was capable of constructing its own eerily coherent biblical phrases.

“An AI that is all-powerful in the next 25-50 years could decide to write a similar AI bible for humans to follow, one that matches its own collective intelligence,” Brandon explained. “It might tell you what to do each day, or where to travel, or how to live your life.” That’s a scary thought for a lot of people, given what the contents of that scripture could be.

And this is where Elon Musk’s tireless warnings about regulating AI come into play. Musk has no designs on killing off AI or stalling its progress. On the contrary, he is building groundbreaking artificial intelligence systems to evolve Tesla’s autonomous vehicles and SpaceX’s rocket engineering. He understands the significance of AI and the benefits it brings. He also recognizes that what we teach our children throughout their formative years (in this case, the technologies we’re developing) influences their behaviors, thoughts, perspectives, demeanors, and actions. In this manner, AI transcends artifice and becomes Dynamic Digital Intelligence (I just made that up, so I’m claiming the copyright now). Musk demonstrated this with one poignant tweet.

On October 26, an AI called Sophia, created by roboticist David Hanson, chided Musk during a live conversation with New York Times columnist Andrew Ross Sorkin. When discussing the values that an AI would embrace to protect humanity, Sophia described a robot’s ability to become empathetic and cherish compassion. Sorkin replied, “We all believe you but want to prevent a bad future.” At that point, Sophia told Sorkin that he had been “reading too much Elon Musk.”

The exchange had its share of wit and humor. However, Musk’s response ultimately revealed the essence of the potential problems with machine learning: us. “Just feed it The Godfather movies as input. What’s the worst that could happen?” he asked.

Human Bias Becomes AI’s Bias

“Robotic artificial intelligence platforms that are increasingly replacing human decision makers are inherently racist and sexist, experts have warned,” wrote Henry Bodkin in the Telegraph, citing a critical study from the Foundation for Responsible Robotics.

At the end of the day, algorithms amount to an array of correlations. And correlation alone, as an elementary tenet of research, does not imply causation -- which is how false positives and false negatives arise. For example, the data would tell us that being overweight correlates with eating food. That doesn’t mean we should stop eating.
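To make that concrete, consider a minimal sketch of how a model can absorb a biased correlation and treat it as signal. This is not any vendor’s actual system; it uses scikit-learn with synthetic data and invented feature names, purely for illustration:

```python
# A minimal, hypothetical sketch (synthetic data, invented feature names):
# a classifier trained on biased historical hiring decisions learns the
# correlation between a protected attribute and past outcomes, and then
# reproduces it -- even though the attribute says nothing about ability.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

skill = rng.normal(size=n)          # the signal we actually care about
group = rng.integers(0, 2, size=n)  # a protected attribute (0 or 1)

# Historical decisions: driven by skill, but past reviewers also penalized group 1.
hired = (skill - 1.5 * group + rng.normal(scale=0.5, size=n)) > 0

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# Two equally skilled candidates who differ only in group membership:
candidates = np.array([[1.0, 0.0], [1.0, 1.0]])
print(model.predict_proba(candidates)[:, 1])
# The model assigns a markedly lower hiring probability to the group-1
# candidate: the historical bias, learned as if it were signal.
```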

This issue was the subject of a TED Talk by Joy Buolamwini, who discovered similar problems with facial recognition algorithms.

MIT grad student Joy Buolamwini was working with facial analysis software when she noticed a problem: the software didn’t detect her face -- because the people who coded the algorithm hadn't taught it to identify a broad range of skin tones and facial structures. Now she’s on a mission to fight bias in machine learning, a phenomenon she calls the “coded gaze.” It’s an eye-opening talk about the need for accountability in coding ... as algorithms take over more and more aspects of our lives.

Here’s another example. One system that analyzes candidates’ social media data flagged a profile picture of a same-sex couple kissing as “sexually explicit material.” The photo was not lewd or meant to be provocative. The technology simply couldn’t recognize a committed, non-traditional relationship and reconcile the image as a normal expression of love rather than “graphic content.” And there’s more.

  • A program designed to shortlist university medical candidates systematically selected against women, Black applicants, and other ethnic minorities.
  • Boston University researchers discovered bias in AI algorithms by training a machine to analyze text collected from Google News. They posed this analogy to the computer: “Man is to computer programmer as woman is to x.” The AI responded, “Homemaker.” (A sketch of how to reproduce this analogy test appears after this list.)
  • Another U.S.-built platform studied Internet images to shape its contextual learning systems. When shown a picture of a man in a kitchen, the AI insisted that the individual was a woman.
  • A more chilling example came from a computer program used by criminal courts to assess the risk of recidivism. The Correctional Offender Management Profiling for Alternative Sanctions system “was much more prone to mistakenly label black defendants as likely to reoffend.” That finding arrived at a contentious moment in the UK, where lawmakers are debating regulations to govern “killer robots” able to identify, target, and kill without human control.
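For readers who want to see this for themselves, the analogy test from the Boston University work can be reproduced with off-the-shelf tools. Below is a sketch, assuming the gensim library and the publicly available Google News word2vec vectors; exact rankings may vary from the published results:

```python
# A sketch of the analogy test described above, assuming the gensim library
# and the public "word2vec-google-news-300" vectors (a large one-time download).
import gensim.downloader as api

vectors = api.load("word2vec-google-news-300")

# "Man is to computer programmer as woman is to x" becomes vector arithmetic:
# computer_programmer - man + woman ~= x
results = vectors.most_similar(
    positive=["woman", "computer_programmer"],
    negative=["man"],
    topn=3,
)
for word, score in results:
    print(word, round(score, 3))
# In the published study, "homemaker" topped the list -- the embedding simply
# mirrors the associations present in the text it was trained on.
```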

One of the most well-known validations erupted on social media when Microsoft unveiled Tay, a chatbot that began spewing racist, sexist, and otherwise derogatory comments within hours after users exploited its machine learning capabilities to teach it those behaviors.

Without Diversity, What Will AI Become?

All of us at Crowdstaffing passionately embrace exponential technologies and the tremendous benefits they will bring to this world. Humans have sought to automate manual processes for as long as they’ve walked the Earth. Don’t believe me? The wheel. Carriages. Ships. Looms. Assembly lines. Garage door openers. Dishwashing machines. Traffic signals (yes, people once directed traffic by hand). We clamor for the latest gadgets, and we revel in the ease that automation affords us. We’re not afraid of machines. We’re afraid of having less.

If we capitalize on the efficiencies and competitive advantages of automation, we can find new ways to support human talent -- not displace it. PwC believes the rise of automation will actually boost productivity and generate additional jobs elsewhere in the economy.

As Rally Health’s Tom Perrault observed in Harvard Business Review, “What can’t be replaced in any organization imaginable in the future is precisely what seems overlooked today: liberal arts skills, such as creativity, empathy, listening, and vision. These skills, not digital or technological ones, will hold the keys to a company’s future success.”

Like Elon Musk, we believe in the hope and future prosperity that machines will open to us. Also like Musk, we approach these advances carefully, recommending responsible parenting in overseeing the development of our digital children. If the worry is having less, then ignoring the threats that bias creates will certainly make that fear a reality.

Machine learning is simply a mirror that reflects the knowledge, attitudes, and behaviors of its first teachers. If an exclusive class of individuals serves as the model, machines will learn to discriminate against, exclude, and misjudge your potential customers, workers, and leaders.

As talent acquisition leaders, we have seen firsthand how a lack of inclusion and unfettered bias can hobble the hiring process. In April 2016, after serving jury duty, I was inspired to use juror selection as a blueprint for eliminating bias from recruiting. Yet we must extend that practice into the intelligence systems we’re developing. The success of technology, business, the workforce, and the world depends on an inclusive, unbiased embrace of humankind.

Sunil Bagai
Sunil is a Silicon Valley entrepreneur, thought leader, and influencer who is transforming the way companies think about and acquire talent. Blending vision, technology, and business skills honed in the most innovative corporate environments, he has launched a new model for recruitment called Crowdstaffing, which is being tapped successfully by top global brands. Sunil is passionate about building a company that provides value to the complete staffing ecosystem, including clients, candidates, and recruiters.
