Ethics & Law in AI
The use of artificial intelligence (AI) in higher education poses various challenges for teachers. These range from data quality, data protection, copyright, and ethics to bias, transparency, resources, acceptance, fairness, and accessibility.
The legal classification of AI remains unclear in many respects. Until it is clarified, the following applies: lecturers may not require students to use AI as part of their courses, and in particular may not require them to register for an AI service.
Data is the fuel of AI: artificial intelligence learns and makes predictions based on data. A large amount of data is needed to train AI models and use them effectively. This leads to questions about the collection, storage and use of personal information.
Users should be informed about how their data is used and what decisions are made based on their data. Obtaining consent and transparency in the use of data are important aspects of data protection in AI.
However, popular AI applications such as ChatGPT or Bing offer no way to trace how personal data is used, because their algorithms are not public and their processing procedures are complex.
Please note the following when using AI applications: do not enter any personal data unless it has been securely and completely anonymized beforehand. Data entered into AI systems such as ChatGPT or Bing may be used for training purposes and can resurface in outputs shown to other users worldwide; it is also linked to the registered account. Research secrets must likewise not be entered.
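The anonymization requirement can be partially automated before any text reaches an AI service. The sketch below is a minimal, hypothetical pre-processing step using only Python's standard library; the patterns (e-mail, phone, an assumed seven-digit matriculation number) are illustrative, and real anonymization requires far more than regular expressions (names, free-text identifiers, context).

```python
import re

# Hypothetical placeholder patterns -- illustrative only, not a
# complete anonymization solution.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d /-]{7,}\d"),
    "MATRICULATION_NO": re.compile(r"\b\d{7}\b"),  # assumed 7-digit format
}

def pseudonymize(text: str) -> str:
    """Replace matches of each pattern with a generic placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Please grade the essay by jane.doe@example.edu (matr. 1234567)."
print(pseudonymize(prompt))
```

Even with such a filter in place, the safest policy remains the one stated above: treat everything typed into an external AI system as potentially public.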
To date, German law has had a relatively clear understanding of which works are protected by copyright: Works protected by copyright are only personal intellectual creations, i.e. results created by humans. Since AI-generated products in the form of texts, images, music, code, etc. are the result of so-called neural networks, copyright protection is generally ruled out for them due to the lack of creation by a human being.
If, on the other hand, the results involve a not insignificant creative contribution by a human and were merely created with the help of AI, copyright protection is at least conceivable. This may be the case, for example, if teachers instruct the service to significantly rewrite a text (or programming code, etc.) they have written themselves according to certain specifications, or, conversely, if they extensively edit an AI-generated text after output, i.e. significantly change it according to their own ideas, add content, and adapt the wording to their needs.
It is also important to note that entering copyrighted works (e.g. parts of a lecture script or term paper) into an AI application can infringe the rights of the author. Users must check whether they hold the necessary rights to everything they enter or upload.
Licence Note: “Urheberrecht und Datenschutz bei ChatGPT & Co. in der Hochschullehre“, Andrea Schlotfeldt | HOOU@HAW, CC BY-SA 4.0
AI applications are only as good as the data used to train them: AI models may be trained on unbalanced or biased data, which can lead to discriminatory or unfair results. Always question AI-generated content critically and sensitize your students to this issue.
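How skewed training data produces unfair results can be shown with a deliberately simple toy model. The sketch below uses entirely made-up numbers: a "model" that merely learns historical acceptance rates per group will faithfully reproduce whatever imbalance its training data contains.

```python
# Hypothetical training data: group A was historically favoured.
training_data = (
    [("A", True)] * 80 + [("A", False)] * 20 +
    [("B", True)] * 30 + [("B", False)] * 70
)

def acceptance_rate(group: str) -> float:
    """Share of positive outcomes for one group in the training data."""
    outcomes = [ok for g, ok in training_data if g == group]
    return sum(outcomes) / len(outcomes)

def predict(group: str) -> bool:
    # The "model": accept whenever the historical rate exceeds 50 %.
    return acceptance_rate(group) > 0.5

print(predict("A"), predict("B"))  # the model simply mirrors the historical skew
```

Real machine-learning models are far more complex, but the underlying mechanism is the same: a model optimized to fit its data will encode the data's biases unless these are detected and corrected.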
The topic of bias is well summarized in the following video (in German):
AI-Campus (licensed CC BY-SA 4.0)
A well-founded examination of AI tools should also cover deepfakes and fake news, which can appear deceptively real with the help of AI.
This video from the ARD media library provides a good overview (in German).
- “Urheberrecht und Datenschutz bei ChatGPT & Co. in der Hochschullehre“, Andrea Schlotfeldt | HOOU@HAW, CC BY-SA 4.0
- ChatGPT: Risks and challenges from a Data Privacy perspective
- Ringvorlesung: Alexa, ChatGPT & Co: Wie haltet ihr es mit der Ethik? - Eine interdisziplinäre Perspektive auf KI
- Umgang mit Deep-Fake. Aktuelle Stunde. 02.04.2023. WDR