An interesting piece on ethical questions in Artificial Intelligence and how they can be addressed.
How can machines discriminate?

First, it is important to define the three different ways that algorithmic bias can manifest in models. Take loan underwriting, for example:

● Direct discrimination occurs when an algorithm penalises a protected group based on sensitive attributes, such as race or sex, or based on a specific feature commonly possessed by a protected group. For example, a loan underwriting algorithm may judge ethnic minority or female applicants to be a higher loan risk than a Caucasian male applicant, either based on their ethnicity directly or based on their zip code (a proxy for the part of the city where they reside).

● Indirect discrimination occurs when an algorithm consistently produces a disparate outcome for a protected group even though it does not overtly use obvious group features. For example, the underwriting model may disproportionately grant loans to men because of men's higher average incomes.

● Individual discrimination occurs when an algorithm treats two individuals unequally despite their possessing similar features. For example, an ethnic minority candidate and a Caucasian candidate may be matched in age, profession and income, yet the model judges the ethnic minority candidate to be the higher loan risk.
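The indirect case above can be made concrete with a simple check: even when a model never sees a group attribute, its approval rates per group can still diverge. The sketch below computes per-group approval rates and their ratio; the data and function names are hypothetical, and the 0.8 threshold comes from the "four-fifths rule" used in US employment law, not from the article.

```python
# Minimal sketch: detecting disparate outcomes in underwriting decisions.
# All data here is hypothetical and for illustration only.

def approval_rates(decisions):
    """Compute the approval rate for each group.

    decisions: list of (group, approved) pairs.
    """
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact(decisions, protected, reference):
    """Ratio of the protected group's approval rate to the reference
    group's. Ratios below 0.8 are commonly flagged for review
    (the 'four-fifths rule')."""
    rates = approval_rates(decisions)
    return rates[protected] / rates[reference]

# Hypothetical underwriting outcomes: (group, loan_approved)
outcomes = [
    ("male", True), ("male", True), ("male", True), ("male", False),
    ("female", True), ("female", False), ("female", False), ("female", False),
]

ratio = disparate_impact(outcomes, protected="female", reference="male")
print(round(ratio, 2))  # → 0.33, well below the 0.8 threshold
```

A check like this is purely diagnostic: it flags a disparate outcome but says nothing about whether the disparity is justified by legitimate risk factors, which is where the direct/indirect distinction above becomes important.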