To achieve genuine AI fairness for people with disabilities, we need to take a few small initiatives
Much has been said about how artificial intelligence supports and even transforms the lives of disabled people. From specialized applications to robots that act as aides, technology has accelerated the lives of people with disabilities. Although AI has changed their daily routines for the better, critical problems like discrimination, inequality, and bias remain in the shadows. To achieve genuine AI fairness for people with disabilities, we need to take a few small initiatives.
The influence of artificial intelligence on people's lives is no joke. Whether or not you have a disability, AI helps you greatly in your daily life. After all, the whole point of creating disruptive innovations with artificial intelligence is to automate routine, labor-intensive jobs. One such time-consuming task is finding the right person for a job. It involves sorting resumes by talent, experience, educational qualifications, skills, and more. Owing to its complexity, many companies have computerized the resume-filtering process, so only the most eligible candidates are even invited to interview.
While artificial intelligence takes care of these critical jobs, it often leaves out one part: AI fairness. Initially, discrimination was widely observed on the basis of gender, race, and age, but recent reports suggest that it has also undermined the job opportunities of people with disabilities. Fortunately, leading tech companies are taking initiatives to streamline AI fairness for people with disabilities. For example, IBM conducted a workshop at the ASSETS 2019 conference to build a cross-disciplinary community around AI fairness, accountability, transparency, and ethics (FATE) for the specific situation of people with disabilities. In this article, we walk through a few initiatives that companies could put forward to wipe out AI mistreatment.
Detect the Downsides
No category of disabled people is uniformly eligible or ineligible for a particular job; it depends entirely on the job's requirements. Therefore, instead of ruling out disabled people's applications, AI should be designed to evaluate the possibility of their recruitment. A system might first identify the kind of disability an individual has and compare it with the nature of the job. To do so, the AI application should be fed unbiased historical data from which it can learn the relevant outcomes. Although some disabilities directly affect the nature of a job, people can still use technology or software to address the challenge. For example, if a blind person applies for a banking job, we can't completely rule out the possibility of his recruitment: he can use specialized software, such as a screen reader, to do the work. Artificial intelligence should be intelligent enough to recognize these nuances and pave the way for people with disabilities to get an opportunity.
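The idea above can be sketched in code. This is a minimal illustration, not a production screening system; all names (the `ACCOMMODATIONS` table, the barrier labels, `screening_decision`) are hypothetical. The key design choice it demonstrates is that the model never auto-rejects: where an accommodation is known, the application proceeds, and anything uncertain is escalated to a human reviewer.

```python
# Hypothetical mapping from a type of barrier to assistive technologies
# known to address it. In practice this would come from accessibility
# experts, not a hard-coded dictionary.
ACCOMMODATIONS = {
    "visual": {"screen_reader", "braille_display", "magnification"},
    "hearing": {"captioning", "sign_language_interpreter"},
    "mobility": {"voice_control", "adjustable_workstation"},
}

def screening_decision(required_capabilities, applicant_barriers):
    """Return 'proceed' when every barrier overlapping the job's
    requirements has a known accommodation; otherwise escalate to a
    human reviewer -- never issue an automatic rejection."""
    overlap = required_capabilities & applicant_barriers
    if all(ACCOMMODATIONS.get(barrier) for barrier in overlap):
        return "proceed"
    return "human_review"

# The article's example: a blind applicant for a job that involves
# visual tasks. Screen readers exist, so the application proceeds.
print(screening_decision({"visual"}, {"visual"}))  # proceed
```

A barrier with no entry in the table does not trigger a rejection either; it simply routes the application to a person who can judge the case.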
Include Extreme Diversity
From a general point of view, discrimination against disabled people might look a lot like racial or gender inequality. But it is far more than simple ignorance: it has many dimensions, varies in intensity and impact, and often changes over time. Disability itself takes many forms, including physical, sensory, and communication barriers. Therefore, to achieve AI fairness for people with disabilities, training datasets should be balanced, and technology should be trained in such a way that people with disabilities are not sidelined by its developments.
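One simple way to balance a training dataset, sketched below under the assumption that each record carries a group label, is random oversampling: duplicating records from underrepresented groups until every group is as large as the biggest one. The function name and record shape are illustrative; real pipelines would more likely use a library such as imbalanced-learn, and oversampling is only one of several rebalancing strategies.

```python
import random

def oversample(records, group_key):
    """Random oversampling: duplicate records from underrepresented
    groups until every group matches the size of the largest one."""
    groups = {}
    for record in records:
        groups.setdefault(record[group_key], []).append(record)
    target = max(len(members) for members in groups.values())
    balanced = []
    for members in groups.values():
        balanced.extend(members)
        # random.choices samples with replacement; k=0 adds nothing
        balanced.extend(random.choices(members, k=target - len(members)))
    return balanced
```

Because disability takes many forms, the grouping here would need to be far finer than a single binary label for the balancing to be meaningful.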
Ensure Data Privacy
Like healthcare data, the data of disabled people is sensitive. Many countries restrain companies from collecting disability data due to privacy concerns, but this kind of 'fairness through unawareness' only makes things worse, such as by employing an ineligible candidate for a job. Therefore, countries should both ensure data privacy and loosen their hold on companies' data-collection policies, so that artificial intelligence models can be trained on unbiased data that values the talents of people with disabilities. Even then, caution is needed: if the data is anonymized, the unusual nature of a person's situation may still make them 're-identifiable.'
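The re-identification risk can be made concrete with the notion of k-anonymity: group records by the attributes an outsider might know (so-called quasi-identifiers) and look at the smallest group. The sketch below is illustrative; the field names are hypothetical, and real privacy audits involve far more than this single number.

```python
from collections import Counter

def k_anonymity(records, quasi_identifiers):
    """Size of the smallest group when records are bucketed by the
    given quasi-identifiers. k == 1 means at least one person has a
    unique combination and is therefore re-identifiable."""
    buckets = Counter(
        tuple(record[field] for field in quasi_identifiers)
        for record in records
    )
    return min(buckets.values())
```

A person with a rare disability in a small town may be the only record with that combination of attributes, giving k = 1 even though their name was removed.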
Test the Model for Bias
Data reflects real-world behavior, and discriminatory data exists because humans discriminate against certain kinds of people. Unfair as that is, using the same data to carry out critical tasks like resume filtering only perpetuates the problem. Therefore, AI solution providers should develop a plan to tackle bias in source data in order to shield their models against existing discriminatory patterns. They should also explore new methods to increase the representation of people with disabilities.
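A basic bias test compares selection rates between groups. The sketch below, with hypothetical function names, applies the 'four-fifths' rule of thumb from US employment guidelines: if one group's selection rate is below 80% of another's, the model is flagged for adverse impact. This is only one of many fairness metrics, but it is easy to run against any screening model's outputs.

```python
def selection_rate(outcomes):
    """Fraction of applicants in a group who were selected (1 = hired)."""
    return sum(outcomes) / len(outcomes)

def passes_four_fifths(group_a, group_b, threshold=0.8):
    """Compare selection rates between two applicant groups. If the
    lower rate is below `threshold` (80%) of the higher one, the model
    shows adverse impact under the four-fifths rule."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b) >= threshold

# Group B is selected every time; group A only 75% of the time.
# 0.75 / 1.0 = 0.75 < 0.8, so this screening outcome is flagged.
print(passes_four_fifths([1, 1, 1, 0], [1, 1, 1, 1]))  # False
```

Running such a check on held-out data for each disability group, before a model is deployed, turns "test the model for bias" from a slogan into a repeatable step in the release process.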