
Why Diversifying Tech is Integral for Fairer Artificial Intelligence


Artificial intelligence is rapidly integrating into everyday life, revolutionizing industries such as criminal justice, healthcare and finance. Whether asking Siri about the weather or requesting Alexa to play music, you are using AI. On a more advanced level, artificial intelligence is being used for complex tasks such as diagnosing patients with diseases and predictive policing in the criminal justice system. AI involves developing computer systems that can perform tasks that previously required human intelligence. To do so, the field focuses on creating algorithms that learn from the data they are given and make predictions based on what they have learned.

AI is a powerful tool capable of tackling complex social problems, such as unequal access to education, bias in decision-making processes and gaps in other essential services. It does so by pinpointing areas of need and improving the allocation of resources based on data-driven insights. But while AI has the potential to chip away at these issues, without proper regulation to ensure its safe development and deployment, it can just as easily perpetuate biases and infringe on individual rights. At the Internet’s inception, it was promoted as an open and free space; combined with how quickly it has evolved, it has outpaced, and continues to outpace, the regulations set by policymakers. That lack of a regulatory framework has contributed to many of the issues we face today, from the spread of misinformation to algorithmic discrimination. To ensure that the advancement of AI benefits all of humanity, it is vital to incorporate a diverse group of perspectives into AI development and to implement better regulation and oversight.

Algorithms learn from the data given to them, much as a student learns from a textbook: the only information an AI model can train on and make predictions from is the data it has been fed. The more data it has access to, the more it can refine its predictions, just as a student gains a deeper understanding by reading more books on a subject. But if the data provided contains biases, the predictions the algorithm makes will be biased too. If a facial recognition system is trained primarily on white male faces, it will not recognize women and people of color as reliably, leading to potential harm.
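For readers curious about the mechanics, this pattern can be shown in miniature. The following Python sketch is a deliberately toy example, not a real facial recognition system: the two groups, their feature distributions and the sample sizes are all invented for illustration, and it uses the scikit-learn library.

```python
# A toy sketch, not a real facial recognition system: synthetic data,
# invented groups ("A" and "B") and assumed sample sizes.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Each group's examples come from a slightly different distribution,
    # so a decision rule tuned to one group fits the other poorly.
    X = rng.normal(loc=shift, scale=1.0, size=(n, 5))
    y = (X.sum(axis=1) + rng.normal(scale=0.5, size=n)) > shift * 5
    return X, y.astype(int)

# Skewed training set: 950 examples from group A, only 50 from group B.
X_a, y_a = make_group(950, shift=0.0)
X_b, y_b = make_group(50, shift=1.5)
model = LogisticRegression().fit(np.vstack([X_a, X_b]),
                                 np.concatenate([y_a, y_b]))

# Evaluate on fresh, equal-sized samples from each group.
for name, shift in [("group A", 0.0), ("group B", 1.5)]:
    X_test, y_test = make_group(500, shift)
    print(name, "accuracy:", round(model.score(X_test, y_test), 3))
```

On a typical run, the model is far more accurate on group A than on group B, which it scores near chance. Nothing about the model is malicious; the skew in the training data alone produces the gap, which is exactly the failure mode of a facial recognition system trained mostly on white male faces.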

The application of artificial intelligence systems in real life has shown that biased data perpetuates existing inequalities and prejudice. PredPol, a predictive policing tool, is just one example of how biased training data targets minority communities. The tool acts like a weather forecast for crime, rating neighborhoods on a spectrum from low-risk to high-risk. However, the algorithm learns from policing data with decades of bias baked into it. In the United States, a Black person is five times more likely to be arrested than a white person. These algorithms only magnify the bigotry and racism ingrained in the system: PredPol disproportionately flags minority communities as high-risk, reinforcing the pattern of people of color being more likely to be arrested. While artificial intelligence is presented as a guarantee of objectivity, in reality human prejudice is deeply embedded in the system.
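The feedback loop at work here can be made concrete with a small, purely hypothetical simulation. The neighborhoods, crime rates and patrol rule below are invented for illustration and are not PredPol’s actual algorithm; the point is only that when patrols follow past arrests and patrols generate new arrests, the record feeds on itself.

```python
# A purely hypothetical simulation of the feedback loop, not PredPol's
# actual algorithm: patrols follow past arrests, and patrols create arrests.
import random

random.seed(0)

true_crime_rate = {"A": 0.10, "B": 0.10}  # identical underlying crime rates
recorded_arrests = {"A": 10, "B": 50}     # B starts over-policed on paper

for year in range(1, 6):
    total = sum(recorded_arrests.values())
    # The "forecast": send 100 patrols in proportion to recorded arrests.
    patrols = {n: round(100 * a / total) for n, a in recorded_arrests.items()}
    for n, p in patrols.items():
        # Each patrol has the same chance of observing a crime in either
        # neighborhood, but B gets far more patrols, so far more arrests.
        recorded_arrests[n] += sum(random.random() < true_crime_rate[n]
                                   for _ in range(p))
    print(f"year {year}: patrols={patrols}, arrests={recorded_arrests}")
```

Even though both neighborhoods have identical real crime rates, the one that started with more recorded arrests keeps drawing roughly five times the patrols and five times the new arrests, year after year. The data never corrects itself toward the equal underlying rates; the historical bias is self-perpetuating.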

The result is long-lasting damage: data-driven technologies that legitimize discriminatory policing. Let’s paint the picture. Brisha Borden was an 18-year-old Black girl who was running late to pick up her god-sister, so she picked up an unlocked bike she found on the street. When the owner saw her, she quickly dropped the bike, but it was too late, and she was charged with burglary and petty theft of $80. By comparison, Vernon Prater was a white man charged with shoplifting $86.35 worth of goods from Home Depot, and he had previously been convicted of armed robbery and served prison time.

When both were run through an AI risk assessment algorithm that estimates the likelihood of committing another crime, Borden was rated high-risk while Prater was rated low-risk. The government is encouraging the use of these algorithms in the criminal justice system, yet with no transparency or accountability around how police departments use the models, they will continue to put minority communities in unsafe environments.

The tech field is disproportionately white and male. When the perspectives behind tech programming are not diverse, blind spots accumulate throughout the development of AI products, producing biased algorithms and discriminatory predictions. The biases embedded in AI systems reflect a larger problem: the lack of representation of people of color and women in the tech field. For example, Black engineers represent just 2.5% of Google’s workforce and only 4% of Facebook’s and Microsoft’s. In addition, Google’s workforce is only 32% female, and Facebook’s female workforce has declined to 36.7%.

Current AI systems continuously fail to serve the needs of diverse communities. Some facial recognition systems have failed to detect Black faces at all unless the person wore a white mask. To pinpoint the limitations of these algorithms, it is vital to recognize that AI models are not inherently objective or neutral. Tech urgently needs greater inclusivity and diversity to ensure that these AI systems are developed fairly and ethically.

In an interview with New University, Jeffrey Krichmar, a professor of cognitive sciences at UCI, expressed hope that awareness is growing of the problematic lack of diversity among those creating these algorithms.

“And it’s getting better when I started years ago, it wasn’t as diverse of who was actually, you know, doing some of the algorithms,” Krichmar said. 

While diversity is increasing behind the scenes among the engineers who create these algorithms, it is also vital that the government be able to properly regulate advancing tech to ensure it progresses responsibly. Current government officials are not technologically literate enough to supervise this expanding field.

Professor Krichmar is hopeful that as younger, more technologically savvy people enter the workforce, they will play a critical role in managing AI companies and their algorithms. Because younger generations are both more proficient with technology and more aware of the social issues it causes, they are better equipped to ensure transparency and diversity in AI development and to create models that are objective and fair.

“I was in Washington, DC and [a Federal liaison] took me around to different Congressional offices. And it was kind of shocking the lack of technical know-how on Capitol Hill. So I think, hopefully, young people like yourself or others will think about that,” Krichmar said. 

In the end, the path to fairer AI runs through a regulatory framework that ensures the ethical and responsible development of new technologies while also protecting the rights of all people.

The European Commission is leading by example, introducing the first-ever legal framework on AI. The framework highlights the importance of transparency and human oversight in automation to eliminate bias and discrimination, and it sets a precedent for AI regulation that other countries can follow to check Big Tech’s power and ensure that AI acts objectively. The United States should follow in the European Commission’s footsteps and implement a regulatory framework that ensures accountability and diverse perspectives in the creation of AI systems.

The rise of automation and our growing dependence on algorithms for critical decisions highlight why algorithmic bias needs to be addressed. While growing awareness of the problem within industry and government paints a promising future, there is still a long way to go before artificial intelligence serves us all equitably and fairly. Instead of codifying the past, the tech industry must diversify the teams behind the scenes and ensure transparency through third-party regulation to achieve objective artificial intelligence.

Sriskandha Kandimalla is an Opinion Staff Writer. She can be reached at skandima@uci.edu