Experts: Ethics a must in AI

(Image courtesy Gerd Altmann via Pixabay.)

Artificial intelligence is increasingly becoming a part of our daily lives, both in the workplace and at home.

Some AI experts are stressing the need to make AI ethical and keep it human-friendly.

Bias in programming, security concerns, and a lack of public knowledge about how AI works all need to be addressed to develop and maintain a healthy relationship between people and the technology they use.

“This is the year AI ethics become absolutely mandatory functions in most businesses, not just talk,” Alex Spinelli, chief technology officer at LivePerson and former global head of Alexa OS for Amazon, told Hypergrid Business.

Ethical AI today

Companies are just starting to consider responsible use of AI as a part of their business model.

“An increasing number of enterprises are getting behind responsible AI as a component to business success, but only twenty-five percent of companies said unbiased AI is mission-critical,” according to the 2020 State of AI and Machine Learning Report.

“There are inherent risks in not considering ethics in your AI thought process, which may include AI not working for a diverse user base, not focusing on wellness and fair pay for the AI supply chain, or creating privacy issues if, for example, your AI is trained using data users didn’t consent to be used for that process,” the report said.

Data transparency will become increasingly important in the future.

“Companies that truly believe in open-source sharing of data, and that give reassurances via transparency, will win the battle of AI. Companies that hoard data and do not share it with the rest of the community will enjoy marketing buzz, but will ultimately fail to gain the trust of both their users and the larger community,” Josh Rickard, security engineer at security solutions company Swimlane, told Hypergrid Business.

Bias is a big problem in AI programming.

Amazon scrapped an AI recruiting system after discovering it was biased against women. And in a 2018 ACLU test of Amazon’s facial recognition software, the system falsely matched twenty-eight members of Congress with mugshots of people who had been arrested.

In 2020, Amazon and Microsoft paused sales of facial recognition technology to law enforcement, and IBM exited the business altogether, citing concerns about bias against women and people of color.
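
Audits for this kind of bias often start with simple statistics. As a rough illustration, not drawn from any of the systems above, the Python sketch below compares selection rates across two invented demographic groups and flags a disparate-impact ratio below 0.8, the “four-fifths rule” threshold used in U.S. employment-discrimination guidance:

    # Hypothetical sketch: flag possible bias by comparing selection
    # rates across groups; the data and group labels are invented.
    from collections import defaultdict

    def selection_rates(decisions):
        """decisions: iterable of (group, was_selected) pairs."""
        totals, chosen = defaultdict(int), defaultdict(int)
        for group, was_selected in decisions:
            totals[group] += 1
            if was_selected:
                chosen[group] += 1
        return {g: chosen[g] / totals[g] for g in totals}

    def disparate_impact_ratio(decisions):
        """Lowest group selection rate divided by the highest."""
        rates = selection_rates(decisions)
        return min(rates.values()) / max(rates.values()), rates

    # Invented audit data: 60% of group A selected vs. 35% of group B.
    audit = ([("A", True)] * 60 + [("A", False)] * 40
             + [("B", True)] * 35 + [("B", False)] * 65)
    ratio, rates = disparate_impact_ratio(audit)
    print(rates)                   # {'A': 0.6, 'B': 0.35}
    print(f"ratio = {ratio:.2f}")  # 0.58 -- below 0.8, flag for review

Production audits go well beyond this, examining per-group error rates and the representativeness of training data, but even a check this simple can surface the kind of skew described above.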

Organizing an ethical AI future

A variety of new but non-binding frameworks have been established to further the conversation about the ethical use of AI and to guide its responsible and secure deployment.

The Partnership on AI, which includes leading companies like Amazon, Google, Microsoft, and IBM, was established in 2016 to formulate best practices for AI technologies, advance the public’s understanding of how AI works, and provide a platform for discussion about AI’s influence on people and society.

The Partnership on AI launched Closing Gaps in Responsible AI in 2020 to gather insights into how to inform and empower changemakers, activists, and policymakers to put responsible AI into practice.

“Operationalizing these principles is a complex process in relatively early stages, and currently the gap between intent and practice is large, while documentation on learning from experience remains lacking,” the Partnership on AI says on its website.

In 2019, forty-two countries signed on to a set of value-based principles for the responsible stewardship of trustworthy AI developed through the Organisation for Economic Co-operation and Development, or OECD.

These principles state that AI should benefit people and the planet by driving inclusive growth, AI should respect the rule of law and human rights, and there should be transparent disclosure so people can understand and challenge AI-based outcomes.

Security risks should be continually assessed and managed, and organizations and individuals that deploy AI systems should be held accountable for their proper functioning in line with the OECD principles.

This includes knowing whether you’re interacting with a human or an AI.

“You should always know if you’re having a conversation with an AI,” said Spinelli. “It should never pretend to be human.”

Even if you’re only interacting with AI, a human being should be available at some point in the process.

“Human-in-the-loop AI is here to stay,” Ramprakash Ramamoorthy, product manager at ManageEngine, an IT service management provider serving Fortune 100 companies, told Hypergrid Business.

“Virtually no AI models are correct 100% of the time. Algorithmic decision-making requires a human in the loop to verify the integrity of the data, audit the model, provide explanations for decisions, and adjust the model for unseen phenomena,” he said.

A human should also make sure the data is used as intended.

“It is vital that the data within AI-models is used as it was intended to be used—and only as it was intended to be used,” said Ramamoorthy.

“It will likely be a challenge for the regulators to keep up. Despite the probable increase in AI-powered cyberattacks and lawmakers’ failure to stay ahead of technological innovation, the future of AI looks bright. Artificial intelligence is here to augment humans’ work lives; it is not going to replace them,” he said.
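
As a rough sketch of what the human-in-the-loop pattern Ramamoorthy describes can look like in practice, the Python below routes low-confidence model decisions to a human reviewer. The model, confidence threshold, and review step are all hypothetical stand-ins for illustration, not details of any vendor’s product:

    # Hypothetical human-in-the-loop routing: decisions the model is
    # unsure about are escalated to a person instead of auto-applied.
    CONFIDENCE_THRESHOLD = 0.90  # assumed cutoff; tune per application

    def classify(record):
        """Stand-in for a real model: returns (label, confidence)."""
        return ("approve", 0.95 if record.get("verified") else 0.60)

    def human_review(record, suggested_label):
        """Stand-in for a review queue; a person confirms or overrides."""
        print(f"Escalating to human reviewer: {record} -> {suggested_label}")
        return suggested_label

    def decide(record):
        label, confidence = classify(record)
        if confidence < CONFIDENCE_THRESHOLD:
            return human_review(record, label), True  # human-reviewed
        return label, False                           # fully automated

    print(decide({"verified": True}))   # ('approve', False): automated
    print(decide({"verified": False}))  # escalated, then ('approve', True)

The data-verification, auditing, and explanation duties Ramamoorthy lists would typically hook into the same escalation path, so reviewers can see why the model was unsure.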

AI is everywhere

Like it or not, AI is here to stay.

“I see a lot of pitches from companies, and you rarely see a software or web product that isn’t AI-based,” Steve Shwartz, an AI investor and author of the upcoming book Evil Robots, Killer Computers, and Other Myths: The Truth About AI and the Future of Humanity, told Hypergrid Business.

“One thing we’ll start seeing is companies emerging to help people who develop AI software make the software not discriminate, make sure it’s compliant with laws, and analyze the risks involved in rolling out AI software,” said Shwartz.

More compassionate AI

“I think 2021 is the year we start to talk about tech where AI becomes more compassionate,” said Spinelli.

Spinelli’s company LivePerson, among others, has taken the EqualAI pledge, in which signers agree to strive to use AI as a tool to reduce harmful bias rather than replicate and spread it.

“We’ve committed to addressing bias in our own AI technologies and we encourage others to do the same,” said Spinelli.

New and friendlier AI-based services are showing up in big business.

“We launched something called Bella Loves Me,” said Spinelli. “It’s a challenger bank, and what we wanted to do was take an AI experience and really think about how to make it a warm, compassionate, empathetic experience. It’s not a cold, hard, evil machine. If you take the view that AI can augment us and assist us — not replace us — we can use that as a guiding light.”

MetaStellar news editor Alex Korolov is also a freelance technology writer who covers AI, cybersecurity, and enterprise virtual reality. His stories have also been published at CIO magazine, Network World, Data Center Knowledge, and Hypergrid Business. Find him on Twitter at @KorolovAlex and on LinkedIn at Alex Korolov.