Emotion Recognition Technologies and Dignity in AI-Based Surveillance Capitalism
DOI: https://doi.org/10.69970/gjlhd.v12i2.1273

Abstract
Businesses, governments and other entities are increasingly offered AI-based 'emotion recognition' biometric systems, promoted as tools providing robust insights into the honesty, comprehension or health support needs of individuals, particularly students and employees. Australian universities may consider adopting this technology as they expand their use of AI in learning and assessment platforms and student support systems. Notwithstanding their possible utility, automated emotion recognition systems pose legal and human rights challenges: they may be used deterministically; they may lack reproducibility, replicability and validity; and they are susceptible to bias. Further, they rely on the non-consensual or co-opted participation of individuals, whose dignity is eroded by the consequent reduction from persons to data subjects. This article evaluates such systems through a dignitarian human rights lens, highlighting the need for a precautionary approach.
License
Copyright (c) 2025 Griffith Journal of Law & Human Dignity

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.