How much can we trust AI? How to build confidence before a large-scale deployment


Organizations must still build trust in AI before they deploy it at scale. Here are some simple steps to make AI more dependable and ethical.

Image: iStock/metamorworks

In 2019, Amazon’s facial-recognition technology erroneously identified Duron Harmon of the New England Patriots, Brad Marchand of the Boston Bruins and 25 other New England athletes as criminals when it matched their photos to a database of mugshots.

SEE: Artificial Intelligence Ethics Policy (TechRepublic Premium)

How can artificial intelligence be better, and when will companies and their customers be able to trust it?

“The issue of mistrust in AI systems was a major theme at IBM’s annual customer and developer conference this year,” said Ron Poznansky, who works on design productivity at IBM. “To put it bluntly, most people don’t trust AI—at least, not enough to put it into production. A 2018 study conducted by The Economist found that 94% of business executives believe that adopting AI is important to solving strategic challenges; however, the MIT Sloan Management Review found in 2018 that only 18% of organizations are true AI ‘pioneers,’ having extensively adopted AI into their offerings and processes. This gap illustrates a very real usability problem that we have in the AI community: People want our technology, but it isn’t working for them in its current state.”

Poznansky feels that lack of trust is a major issue.

“There are some very good reasons why people don’t trust AI tools just yet,” he said. “For starters, there’s the hot-button issue of bias. Recent high-profile incidents have justifiably garnered significant media attention, helping to make machine learning bias a household term. Organizations are justifiably hesitant to implement systems that might end up producing racist, sexist or otherwise biased outputs down the line.”


Understand AI bias

At the same time, Poznansky and others remind companies that AI is biased by design, and that as long as companies understand the nature of that bias, they can use AI with confidence.

As an example, when a major AI molecular experiment aimed at identifying COVID treatments was conducted in Europe, research that did not discuss the molecule in question was deliberately excluded in order to speed time to results, an intentional bias that the researchers understood and accepted.

That said, analytics drift can occur when your AI moves away from the original business use case it was intended to address, or when underlying AI technologies such as machine learning “learn” from data patterns and form inaccurate conclusions.
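One way to watch for this kind of drift is to compare the distribution of the model's recent predictions against a baseline captured when the model was first validated. The minimal Python sketch below uses a population stability index for that comparison; the thresholds, variable names and sample data are illustrative assumptions, not a standard prescribed by any particular vendor.

```python
import numpy as np

def population_stability_index(baseline, recent, bins=10):
    """Compare two score distributions; a higher PSI suggests more drift.
    Illustrative rule of thumb: < 0.1 stable, 0.1-0.25 watch, > 0.25 investigate."""
    # Bin edges come from the baseline so both samples are measured the same way.
    edges = np.histogram_bin_edges(baseline, bins=bins)
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    recent_pct = np.histogram(recent, bins=edges)[0] / len(recent)
    # Avoid division by zero and log(0) in sparse bins.
    base_pct = np.clip(base_pct, 1e-6, None)
    recent_pct = np.clip(recent_pct, 1e-6, None)
    return float(np.sum((recent_pct - base_pct) * np.log(recent_pct / base_pct)))

# Stand-in data: scores the model produced at launch vs. scores from last week.
baseline_scores = np.random.beta(2, 5, size=10_000)
recent_scores = np.random.beta(2.5, 4, size=10_000)

psi = population_stability_index(baseline_scores, recent_scores)
if psi > 0.25:
    print(f"PSI={psi:.3f}: significant drift - review the model against its original use case")
else:
    print(f"PSI={psi:.3f}: prediction distribution looks stable")
```

In practice, a check like this would run on a schedule, with alerts routed to whoever owns the model's original business use case.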

Find a midpoint

To avoid skewed results from AI, the gold-standard methodology today is to check and recheck AI results to confirm they fall within 95% agreement of what a team of human subject matter experts would conclude. In other cases, companies might decide that 70% agreement is enough for an AI model to at least start producing recommendations that humans can take under advisement.
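A minimal Python sketch of that kind of check might look like the following, which compares a model's conclusions against labels from a panel of subject matter experts and applies the 95% and 70% thresholds mentioned above; the function name, verdicts and sample labels are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class ValidationResult:
    agreement: float   # fraction of cases where the model matched the expert consensus
    verdict: str

def validate_against_experts(model_labels, expert_labels,
                             production_threshold=0.95, advisory_threshold=0.70):
    """Compare model conclusions with human subject-matter-expert conclusions."""
    if len(model_labels) != len(expert_labels):
        raise ValueError("Need one expert label per model label")
    matches = sum(m == e for m, e in zip(model_labels, expert_labels))
    agreement = matches / len(model_labels)
    if agreement >= production_threshold:
        verdict = "meets the gold standard - suitable for broader deployment"
    elif agreement >= advisory_threshold:
        verdict = "advisory only - humans should review each recommendation"
    else:
        verdict = "below the advisory bar - keep the model out of the workflow"
    return ValidationResult(agreement, verdict)

# Illustrative check against a small expert-reviewed sample.
model = ["approve", "deny", "approve", "approve", "deny"]
experts = ["approve", "deny", "approve", "deny", "deny"]
result = validate_against_experts(model, experts)
print(f"{result.agreement:.0%} agreement: {result.verdict}")
```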

SEE: We need to pay attention to AI bias before it’s too late (TechRepublic)

Arriving at a suitable compromise on the degree of accuracy that AI delivers, while understanding where its intentional biases and blind spots are likely to be, is a midpoint solution that organizations can apply when working with AI.

Finding a midpoint that balances accuracy against bias allows companies to do three things:

  1. They can immediately start using their AI in the business, with the caveat that humans will review and then either accept or reject each AI conclusion (see the sketch after this list).
  2. They can continue to enhance the accuracy of the AI in the same way that they enhance other business software with new functions and features.
  3. They can encourage a healthy collaboration between data science, IT and end-business users.
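For the first point, a human-in-the-loop step can be as simple as a queue that holds every AI conclusion until a person accepts or rejects it. The Python sketch below is an illustrative outline under that assumption, with hypothetical class and field names rather than a reference implementation.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class AIConclusion:
    case_id: str
    recommendation: str
    confidence: float          # the model's own confidence score, 0.0-1.0

@dataclass
class ReviewQueue:
    """Holds every AI conclusion for a human decision before anything is acted on."""
    pending: List[AIConclusion] = field(default_factory=list)
    accepted: List[AIConclusion] = field(default_factory=list)
    rejected: List[AIConclusion] = field(default_factory=list)

    def submit(self, conclusion: AIConclusion) -> None:
        self.pending.append(conclusion)

    def review_all(self, reviewer: Callable[[AIConclusion], bool]) -> None:
        # The reviewer callback stands in for a human accepting or rejecting each item.
        for item in self.pending:
            (self.accepted if reviewer(item) else self.rejected).append(item)
        self.pending.clear()

# Illustrative run: a reviewer who only signs off on high-confidence recommendations.
queue = ReviewQueue()
queue.submit(AIConclusion("case-001", "flag for audit", confidence=0.91))
queue.submit(AIConclusion("case-002", "flag for audit", confidence=0.42))
queue.review_all(lambda c: c.confidence >= 0.8)
print(f"accepted={len(queue.accepted)}, rejected={len(queue.rejected)}")
```

Keeping the accepted and rejected decisions also gives the data science team labeled feedback for the second point, improving the model over time.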

“Solving this urgent problem of lack of trust in AI … starts by addressing the sources of mistrust,” Poznansky said. “To tackle the issue of bias, datasets [should be] designed to expand training data to eliminate blind spots.”
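Expanding training data starts with knowing where the blind spots are. The short Python sketch below flags groups that are underrepresented in a training set; the 5% floor, the field names and the sample records are illustrative assumptions only.

```python
from collections import Counter

def find_blind_spots(records, group_key, min_share=0.05):
    """Flag groups whose share of the training data falls below a chosen floor.
    min_share is an illustrative cutoff, not an industry standard."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    return {group: count / total
            for group, count in counts.items()
            if count / total < min_share}

# Illustrative training set with obviously thin slices.
training_records = (
    [{"region": "northeast"}] * 600 +
    [{"region": "midwest"}] * 350 +
    [{"region": "southwest"}] * 50 +
    [{"region": "pacific"}] * 5
)

for group, share in find_blind_spots(training_records, "region").items():
    print(f"Blind spot: '{group}' is only {share:.1%} of training data - collect more examples")
```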

