Evaluating privacy compliance and ethical standards for personal assistant AI under GDPR and CCPA

Thesis title: Evaluating privacy compliance and ethical standards for personal assistant AI under GDPR and CCPA
Author: García Ramírez, Mariel
Thesis type: Diploma thesis
Supervisor: Sigmund, Tomáš
Opponents: Pavlů, David
Thesis language: English
Abstract:
The growing use of Personal Assistant Artificial Intelligence (PAAI) systems in daily life raises questions of privacy, legal compliance, and ethics. Systems such as Apple Siri and Amazon Alexa are built into smartphones, smart speakers, and other home devices, where they continuously process vast amounts of user data through voice recognition and behavioral profiling. Because of their dual role as data collectors and service facilitators, these systems attract growing scrutiny over algorithmic accountability, data governance, and consumer autonomy within digital ecosystems. This thesis seeks to determine whether these assistants comply with two of the most significant privacy laws in the world: the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA). Drawing on Nissenbaum's (2004) framework of contextual integrity, the study reinterprets privacy as the preservation of appropriate information flows across a variety of social contexts. Together with Solove's (2006) taxonomy of privacy harms and Zuboff's (2020) critique of surveillance capitalism, these frameworks underpin a comprehensive method for evaluating how well Siri and Alexa meet legal and ethical privacy standards. The theoretical synthesis and literature review support the selection of these two systems as case studies, since Siri and Alexa differ markedly in their data design principles, economic strategies, and the regulatory scrutiny they face. Using these theoretical tools, the thesis provides a thorough comparative analysis of consent processes, third-party access, interface design, and data minimization strategies. The approach is primarily qualitative and is based on the examination of documents including court rulings, privacy policies, compliance reports, news coverage, and technical literature. This method reveals systemic differences in legal compliance, including unclear third-party data practices and insufficient enforcement of user rights. Particular attention is paid to legal precedents, such as the investigation into Apple's App Tracking Transparency (ATT) framework and the fines imposed on Amazon under the GDPR; these cases illustrate gaps between stated privacy standards and real-world practice, and they show that platform-level privacy solutions can strengthen ecosystem governance and user protection. The findings show that while both PAAI systems exhibit partial compliance with the GDPR and CCPA, they differ significantly in how they implement user autonomy, transparency, and justice. The study concludes that ethical governance involves more than legal compliance; it also entails legally enforceable commitments to preserve context and obtain meaningful consent. By providing a straightforward method for evaluating AI-powered digital assistants and advocating stricter regulation to support their ethical and legal development, the research contributes to both academic and practical discussions.
Keywords: Regulatory framework; CCPA; GDPR; Ethics; Privacy

Information about study

Study programme: Information Systems Management/Management of Business Informatics
Type of study programme: Master's study programme
Assigned degree: Ing.
Institutions assigning academic degree: Vysoká škola ekonomická v Praze
Faculty: Faculty of Informatics and Statistics
Department: Department of Systems Analysis

Information on submission and defense

Date of assignment: 7. 11. 2024
Date of submission: 26. 6. 2025
Date of defense: 2025

Files for download

The files will be available after the defense of the thesis.
