Full text not available from this repository.

Abstract
In recent years, there has been a push in the Artificial Intelligence (AI) field to simplify the application of machine learning so that it can be more widely adopted. While this reduces barriers to entry for AI and machine learning, it also introduces the risk that persons or organisations with insufficient expertise will use these systems to make decisions that have a significant impact on society based on discriminatory factors. Implementers and decision-makers need a good understanding of the features the system might use or infer from to make predictions, and of how these can affect their stakeholders. In this paper, we outline the risks of this phenomenon occurring in a specific case: the application of machine learning to secondary school student grades. We demonstrate that naïve approaches can have unanticipated consequences and can generate predictions based on discriminatory factors such as gender or race. Applying such flawed predictions to decisions such as awards and scholarships could entrench detrimental bias in education systems.
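The mechanism described above — a naïve model reproducing historical bias — can be illustrated with a minimal sketch. The cohort, group labels, and the five-point grade penalty below are entirely hypothetical synthetic data, not from the paper: true ability is drawn from the same distribution for both groups, but the historical grades used as training labels carry a penalty for one group, and a naïve group-mean predictor faithfully learns that disparity.

```python
import random
import statistics

random.seed(0)

# Hypothetical synthetic cohort (illustrative assumption, not real data):
# true ability is identically distributed across groups "A" and "B",
# but historical grades apply a biased 5-point penalty to group "B".
students = []
for _ in range(1000):
    group = random.choice(["A", "B"])
    ability = random.gauss(70, 10)
    historical_grade = ability - (5 if group == "B" else 0)  # biased label
    students.append((group, ability, historical_grade))

# A naïve "model": predict the mean historical grade of the student's group.
group_means = {
    g: statistics.mean(s[2] for s in students if s[0] == g)
    for g in ("A", "B")
}

# The model reproduces the historical bias: group B is predicted roughly
# 5 points lower, even though true ability is the same in both groups.
gap = group_means["A"] - group_means["B"]
print(f"Predicted gap between groups: {gap:.1f} points")
```

The same effect occurs more subtly when the protected attribute is omitted but a correlated proxy feature (e.g. postcode) remains in the training data, which is why simply deleting the sensitive column is not a sufficient safeguard.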
| Item Type | Paper presented at a conference, workshop or other event, and published in the proceedings |
|---|---|
| Uncontrolled Keywords | Artificial Intelligence, Machine Learning, Naïve approaches, Risks of AI |
| Subjects | Q Science > QA Mathematics > QA76 Computer software |
| Divisions | Schools > Centre for Business, Information Technology and Enterprise > School of Information Technology |
| Depositing User | Sunitha Prabhu |
| Date Deposited | 18 Nov 2019 21:53 |
| Last Modified | 21 Jul 2023 08:24 |
| URI | http://researcharchive.wintec.ac.nz/id/eprint/6946 |