
It’s true that there has been progress in the field of data protection in the US, thanks to the introduction of laws such as the California Consumer Privacy Act (CCPA) and non-binding documents such as the Blueprint for an AI Bill of Rights. However, there are currently no laws requiring tech companies to limit AI bias and discrimination.
As a result, many companies are lagging behind in developing ethical, privacy-protective tools. About 80% of data scientists in the US are male and 66% are white, a lack of diversity and representation among the people building automated decision-making tools that often leads to skewed results.
A major shift in design review processes is essential if technology companies are to consider all people when designing and developing their products. Otherwise, organizations risk losing customers to the competition, damaging their reputation and facing serious lawsuits. According to IBM, about 85% of IT professionals believe that consumers choose companies that are transparent about how their AI algorithms are developed, monitored and used. We can expect this number to grow as more users push back against biased technology.
So, what should companies keep in mind when analyzing their prototypes? Here are four questions that development teams should ask themselves:
Have we ruled out all forms of discrimination in our AI?
Technology can change society as we know it, but it will ultimately fail if it doesn’t benefit everyone equally.
To create effective, unbiased AI, teams should develop a list of questions to ask during the evaluation process that will help them identify potential issues in their designs.
There are many methods AI teams can use to test their models, but before doing so, it is important to evaluate the end goal and ask whether any groups will be disproportionately affected by the model’s outcomes.
For example, AI teams should consider that facial recognition technology can inadvertently discriminate against people of color, a well-documented problem in AI algorithms. A 2018 study by the American Civil Liberties Union found that Amazon’s facial recognition incorrectly matched 28 members of the US Congress with mugshots. Nearly 40% of those false matches were people of color, even though people of color make up only about 20% of Congress.
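This kind of disparity can be caught before deployment with a simple audit. The sketch below (Python with pandas; the dataset, column names and the 2x threshold are illustrative assumptions, not a standard) compares false-positive match rates across demographic groups, the same failure mode the ACLU test exposed.

```python
import pandas as pd

# Hypothetical audit log: one row per match decision, recording the
# subject's demographic group, the model's prediction, and ground truth.
# Column names and data are illustrative assumptions.
results = pd.DataFrame({
    "group":           ["A", "A", "A", "A", "B", "B", "B", "B"],
    "predicted_match": [0,   0,   0,   1,   1,   1,   1,   0],
    "true_match":      [0,   0,   0,   0,   0,   0,   0,   0],
})

# False-positive rate per group: how often the model asserts a match
# where none exists, the error behind the mistaken mugshot matches.
negatives = results[results["true_match"] == 0]
fpr = negatives.groupby("group")["predicted_match"].mean()
print(fpr)  # group A: 0.25, group B: 0.75

# A crude red flag: one group's error rate far exceeding another's
# suggests the model disproportionately harms that group.
if fpr.max() > 2 * fpr.min():
    print("Warning: false-positive rates differ sharply across groups")
```

In practice, a team would run a check like this over a held-out evaluation set labeled with demographic attributes and investigate any group whose error rate stands out.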
By asking difficult questions, AI teams can find new ways to improve their models and work to prevent such failures. For example, a thorough analysis can help them determine whether they need to gather more data or bring in someone, such as a privacy expert, to review their product.
Plot4AI is a great tool for those who want to get started.