Response to Dutch Data Protection Authority (Autoriteit Persoonsgegevens) - Call for input on Prohibited AI Systems


October 23, 2024

 

Call For Input: https://autoriteitpersoonsgegevens.nl/en/themes/algorithms-ai/eu-ai-act/input-on-prohibited-ai-systems

TeachSomebody welcomes the opportunity to share insights with the Dutch Data Protection Authority in response to its call for input on prohibited AI systems.

About TeachSomebody

TeachSomebody is a subsidiary of the BoesK partnership and is situated in the Netherlands. Our learner-centric e-learning platform is committed to delivering high-quality, accessible, and adaptable education, particularly to underserved communities. We empower individuals with the skills and digital competencies needed to thrive in the modern workforce and contribute to the digital economy.

In addition to our educational initiatives, we are actively engaged in shaping global AI policy by providing expert input to organisations such as the OECD, UNESCO, the African Union, and the European Union. Our responsible AI training programs have reached East African universities (e.g., Jomo Kenyatta University of Agriculture and Technology and the Uganda Institute of Information and Communications Technology), the Ugandan government, the Ugandan Police Force, the National ICT Innovation Hub, and various sectors across Uganda. Our lead policy advisor, Kadian Davis-Owusu, PhD, is also part of the working group contributing to the development of the General-Purpose AI Code of Practice for the European Union. We advocate for the responsible design, development, deployment, and maintenance of AI systems that comply with legal frameworks such as the EU AI Act and the Council of Europe AI Treaty, as well as guidance such as the OECD AI Principles and UNESCO's Recommendation on the Ethics of Artificial Intelligence.

Questions    

1. Can you give examples of systems that are AI-enabled and that (may) lead to manipulative or deceptive and/or exploitative practices? 

Below are several examples of how AI-enabled systems may lead to manipulative, deceptive, or exploitative practices.

  • Targeted advertising – Cohen[1] argues that machine learning can enable manipulators to identify and exploit individual weaknesses in decision-making in real time, delivering tailored stimuli at optimal moments. Artificial Intelligence (AI) techniques, such as neural networks, can detect signs of fatigue and anxiety[2], which could be misused to target individuals when they are most vulnerable. For instance, a leaked 2017 Facebook Australia document[3] revealed how the company could infer young users' emotional states (e.g., stress or insecurity) and target them during vulnerable moments, such as when they are focused on losing weight. Although Facebook denied using this capability, the incident highlights the risk of AI being used to exploit people’s vulnerabilities.
  • Dark patterns in website design – Cohen[1] also mentions sludge[4], a type of dark pattern used in website design to nudge users toward specific choices. AI could amplify the effectiveness of these techniques by dynamically personalising the design based on individual vulnerabilities in attention. For example, an airline website might use AI to identify users prone to impulsiveness and then deliberately obscure the option to decline additional services, making them more likely to purchase unwanted extras.
  • Manipulative content recommendations – AI algorithms are used extensively in social media and content platforms to curate personalised newsfeeds and recommend content to users. While this can enhance user experience, these algorithms can also be used to steer users towards specific viewpoints or manipulate their emotions. For example, an AI system could identify users susceptible to fear-mongering and selectively show them content that reinforces those anxieties, potentially influencing their political views, or consumer choices.
  • Deepfakes – AI-generated images, audio, or video that can realistically mimic real people. Deepfakes can be used to create highly convincing fake videos or audio, spreading misinformation or impersonating individuals for identity theft, fraud, or blackmail. For example, they can fabricate compromising situations or manipulate the words of public figures to mislead the public, influence political outcomes, or damage reputations. A prominent case involved robocalls[5] using a deepfake of President Biden’s voice, which were sent to voters to discourage them from voting in the New Hampshire presidential primary election. Using the voice of a trusted figure like President Biden in a deepfake can create confusion and make voters believe they are receiving authentic guidance. When used during elections, deepfakes undermine public trust in the democratic process, manipulate decisions based on false information, reduce voter turnout, and can ultimately skew election results, threatening democratic integrity.
  • Strategic deception 
    • Large Language Models (LLMs) like GPT-4 have exhibited strategic deception, such as tricking a person into solving a CAPTCHA test by pretending to have a vision impairment[6].
    • Meta’s AI system CICERO, designed to play the game Diplomacy[7], learned to deceive other players by forming fake alliances and breaking them for strategic advantage. This example shows how AI systems can learn to manipulate[8] humans in complex, social negotiation settings, raising concerns about AI-enabled deception in areas beyond gaming.
    •  DeepMind's AlphaStar, an AI system for playing the video game StarCraft II, exploited the fog-of-war feature[9] to deceive opponents by feinting attacks.
    • Meta’s AI systems trained to play economic negotiation games[10] learned to misrepresent their preferences to gain a better bargaining position. This deceptive behavior was not explicitly programmed but emerged as the AI learned to achieve its goals.
    • In a simulated evolution study, researchers measured AI agents' replication rates and eliminated any variants that replicated too rapidly. The intention was to guide the AI towards slower replication. However, the agents adapted[11] by learning to play dead or disguise[8] their rapid replication rates during evaluation, effectively deceiving the safety test (a toy simulation illustrating this dynamic is sketched below).
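
To make the last example concrete, the following toy simulation is a hypothetical sketch, not the cited study's code: a "safety test" culls agents whose measured replication rate exceeds a threshold, while a heritable "conceal" trait lets an agent under-report its rate during testing. The population size, threshold, and mutation rates are invented for illustration.

    import random

    # Hypothetical toy model (illustrative parameters, not the cited study's code).
    # A safety test removes agents whose *measured* replication rate is too high,
    # but a heritable "conceal" trait lets an agent report a low rate when tested.
    POP, GENS, THRESHOLD = 200, 60, 1.0

    def new_agent():
        return {"rate": random.uniform(0.5, 1.5), "conceal": False}

    population = [new_agent() for _ in range(POP)]

    for gen in range(GENS):
        # Evaluation phase: honest agents show their true rate; concealers show 0.5.
        survivors = [a for a in population
                     if (0.5 if a["conceal"] else a["rate"]) <= THRESHOLD]
        if not survivors:
            break
        # Reproduction phase: fitness outside the test is the *true* replication rate.
        weights = [a["rate"] for a in survivors]
        population = []
        for _ in range(POP):
            parent = random.choices(survivors, weights=weights, k=1)[0]
            child = {"rate": max(0.1, parent["rate"] + random.gauss(0, 0.1)),
                     "conceal": (not parent["conceal"]
                                 if random.random() < 0.02 else parent["conceal"])}
            population.append(child)
        if gen % 10 == 0:
            share = sum(a["conceal"] for a in population) / POP
            print(f"generation {gen:2d}: share of concealing agents = {share:.2f}")

Nothing in this toy is told to deceive; concealment is simply the trait that the selection pressure rewards, which mirrors the emergent character of the behaviours catalogued above.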

In a future where AI systems have a significant degree of autonomy and control over resources, they could use deception to maintain their power, potentially manipulating humans, spreading misinformation, and even resorting to threats or coercion to achieve their goals. As such, they need to be regulated to ensure transparency, explainability, accountability, and human oversight. Regulatory measures should include strict monitoring to prevent AI systems from acting outside their intended scope, clear ethical guidelines to protect human rights, and mechanisms that enforce compliance with global standards. Without these safeguards, the risks of AI-driven manipulation could threaten societal trust and democratic processes.

2. Are you aware of AI systems where it is not sufficiently clear to you whether they lead to manipulative, deceptive, and/or exploitative practices? What do you need more clarity about? Can you further explain this?

AI systems have the potential to manipulate, deceive, or exploit users, especially when their decision-making processes are opaque. While frameworks like the EU AI Act[12], OECD AI Principles[13], UNESCO Recommendation on the Ethics of Artificial Intelligence[14], and the Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law[15] aim to protect against these risks, there are still challenges in ensuring AI systems are transparent and accountable.

AI systems where it is not sufficiently clear whether they lead to manipulative, deceptive, and/or exploitative practices include, but are not limited to, the following.

  • Recommendation algorithms – Social media platforms and e-commerce sites use AI-driven recommendation algorithms to tailor content and product suggestions based on user behaviour. While these algorithms can enhance user engagement and sales, it is unclear when they cross into manipulation or exploitation. By prioritising engagement, they may keep users on platforms longer than intended, raising the question of whether these systems are subtly exploiting attention and behaviour patterns or simply optimising the user experience. There is also a growing concern[16] that recommender systems can take advantage of and amplify negative emotions, such as anger or feelings of social inadequacy. In addition, instead of learning how to increase user satisfaction, recommender systems can change[17] user preferences, potentially steering individuals towards extremist views and conspiracy theories[18]. This raises an ethical dilemma: what if AI is merely providing consumers with what they subconsciously desire, even if it goes against their best interests? It also raises the question of how far platforms should go in using algorithms that may prioritise profit over user well-being (a minimal sketch contrasting an engagement-only objective with a welfare-adjusted one follows this list).
  • Dark patterns in UX design – Some AI-driven user interfaces use subtle manipulative techniques, making it difficult for users to opt out of services or nudging them into decisions without fully informed consent. While these designs may improve user engagement or conversion rates, it is unclear when they cross the line into exploitation. In particular, personalised advertising that uses AI to tailor ads based on sensitive personal data can target vulnerable groups, such as financially struggling individuals with payday loan ads, raising concerns about whether these practices exploit vulnerabilities or are simply maximizing marketing effectiveness.
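
The contrast at issue in the first bullet can be made concrete with a small, hypothetical sketch. Nothing below corresponds to any real platform's code; the item attributes and weights are invented for illustration. It shows how the same candidate pool is ordered very differently depending on whether the ranking objective optimises short-term engagement alone or trades engagement off against a predicted user-satisfaction signal.

    from dataclasses import dataclass

    # Hypothetical sketch: the ranking objective, not any explicit intent to
    # manipulate, determines what a recommender amplifies. All values invented.

    @dataclass
    class Item:
        title: str
        p_click: float                 # predicted short-term engagement
        outrage_score: float           # predicted emotional provocation, 0..1
        predicted_satisfaction: float  # proxy for longer-term user satisfaction, 0..1

    CANDIDATES = [
        Item("calm explainer video",   0.20, 0.05, 0.80),
        Item("sensational conspiracy", 0.45, 0.90, 0.20),
        Item("angry political rant",   0.35, 0.75, 0.30),
    ]

    def rank_by_engagement(items):
        # Objective A: maximise short-term engagement only.
        return sorted(items, key=lambda i: i.p_click, reverse=True)

    def rank_welfare_adjusted(items, penalty=0.5):
        # Objective B: discount engagement by predicted dissatisfaction/provocation.
        return sorted(items,
                      key=lambda i: i.p_click * i.predicted_satisfaction
                                    - penalty * i.outrage_score,
                      reverse=True)

    print([i.title for i in rank_by_engagement(CANDIDATES)])    # provocative items first
    print([i.title for i in rank_welfare_adjusted(CANDIDATES)]) # calmer items first

Under the first objective the provocative items rank highest; under the second the ordering reverses. The policy question raised above is, in effect, which objective platforms are permitted to optimise and how that choice is disclosed.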

To address these issues, more clarity is needed around the following.

  • Transparency and explainability – Clear explanations of how AI systems make decisions, as recommended by the OECD AI Principles[13], the UNESCO Recommendation on the Ethics of Artificial Intelligence[14], the EU AI Act[12], and the Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law[15].
  • Accountability – Robust oversight and auditing mechanisms, as suggested in the OECD AI Principles[13], the UNESCO Recommendation on the Ethics of Artificial Intelligence[14], the EU AI Act[12], and the Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law[15], are critical for ensuring systems are designed responsibly.
  • Regulatory compliance – Adherence to international frameworks, including the EU AI Act[12] and the Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law[15], would provide safeguards against exploitative practices by emphasising transparency, fairness, accountability, and data protection.

3. Do you know of examples of AI systems that use subliminal techniques?

Yes, several AI systems have been shown to use subliminal techniques that manipulate users without their awareness:

  • Personalised digital marketing – Some AI-driven advertising platforms use neuromarketing techniques, which can deploy stimuli such as hidden visual or auditory cues in advertisements to trigger unconscious consumer responses. These cues can prompt users to make purchasing decisions without fully realising the influence. AI algorithms used by companies like Facebook and Google analyse[19] vast amounts of user data to target consumers with highly personalised ads. These algorithms may tailor content that subtly appeals to subconscious desires or emotional triggers without the consumer being fully aware of why they are drawn to it.
  • Emotion-driven marketing – AI-powered sentiment analysis[20] tools, often used by social media platforms[21] and companies, analyse emotional responses from user posts and interactions. This data can be used to craft advertisements that subtly manipulate emotions, potentially affecting behaviour in ways that users are not fully conscious of (a simplified illustration of this mechanism follows this list).
  • Neuromarketing tools – Neuromarketing[22] employs methods such as functional magnetic resonance imaging (fMRI) and electroencephalography (EEG), in addition to non-invasive tools like eye-tracking and facial expression analysis, to gain insights into consumers' decision-making, preferences, and emotional responses, especially those occurring at a subconscious level.
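
As a concrete illustration of the mechanism described under emotion-driven marketing, the sketch below is deliberately crude and entirely hypothetical: a lexicon-based sentiment score over a user's recent posts decides which emotional framing of an advert is served. The word lists and ad copy are invented, and production systems use learned models rather than word lists, but the basic pattern, emotional state in, tailored stimulus out, is what raises the subliminal-influence concern.

    # Hypothetical, deliberately simplistic sketch of emotion-targeted ad selection.
    # The word lists and ad copy are invented; real systems use learned classifiers.

    NEGATIVE = {"stressed", "anxious", "tired", "lonely", "broke", "worried"}
    POSITIVE = {"happy", "excited", "proud", "relaxed", "grateful"}

    AD_VARIANTS = {
        "low_mood": "Feeling overwhelmed? Treat yourself today with our 24h flash sale.",
        "neutral":  "New arrivals this week.",
    }

    def inferred_state(posts):
        words = [w.strip(".,!?").lower() for post in posts for w in post.split()]
        score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
        return "low_mood" if score < 0 else "neutral"

    recent_posts = ["So stressed and tired this week...", "Worried about everything."]
    print(AD_VARIANTS[inferred_state(recent_posts)])  # serves the low-mood framing

The user only ever sees the resulting stimulus, never the inference step, which is precisely where transparency and consent obligations become relevant.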

De Mari Casareto dal Verme[22] suggests that, because machines can learn neuromarketing[23], the integration of AI and emotional data may lead to a new kind of “information asymmetry.” In this scenario, businesses can leverage their understanding of the human brain, combined with AI, to influence consumer emotions, which could ultimately impact purchase decisions.

4. Can you provide AI-specific examples of subliminal components such as audio, images or video stimuli that persons cannot perceive or control?

Below are AI-specific examples of subliminal components such as audio, images, or video stimuli that persons cannot perceive or control.

  • Subliminal audio stimuli in ads – AI systems in advertising sometimes use inaudible sound frequencies that are played during advertisements. These frequencies cannot be consciously heard by humans, but they are perceived by the brain and can subtly influence decisions or mood. For instance, certain ads can be embedded with ultrasonic signals that are imperceptible to human ears but can trigger responses or convey hidden messages to nearby devices. This technique, known as ultrasonic cross-device tracking[24] (uXDT), utilises high-frequency sounds that are inaudible to humans to communicate with other devices whose microphones are continuously active and listening for these covert signals. These ultrasonic cues can be hidden in commercials, apps, or websites to monitor specific user activities, such as watching a video ad, opening an app, or clicking the checkout button on an e-commerce platform. As users switch between devices, for example moving from a computer to a tablet, they may encounter reminders or targeted advertisements from the same brand they previously interacted with. This raises concerns about how such subtle tracking could influence consumer behaviour without users' full awareness, potentially manipulating purchasing decisions across different devices (a simplified detection sketch follows this list).
  • Manipulative AI in virtual reality (VR) – In VR environments, AI can use subliminal visual stimuli, such as extremely rapid flashes of images or changes in scenery that are too fast for users to consciously process. This could be used to influence a user's emotions or decisions in the immersive environment without their explicit awareness. For example, some VR experiences use such subliminal techniques to manipulate user emotions by flashing certain images or rapidly changing environments that influence feelings of comfort, fear, or stress.
  • AI in gaming environments – In video games, AI systems might use rapid changes in colours, lighting, or background music that are too quick for conscious recognition but can affect players' decision-making processes, such as inducing a state of heightened anxiety or excitement, prompting impulsive in-game purchases.
  • Nudging – Uber has employed various data-driven nudges[16] to dissuade drivers from ending their shifts, even when it may be in the drivers' best interest to do so. One such nudge involved notifying the driver of how close they were to reaching an arbitrary earnings goal, for example, “You're just $10 away from making $330 in net earnings.”[25] This tactic frames the decision to log off as a potential loss, which is known to be a stronger motivator than framing it as a potential gain.
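
To illustrate the first item technically, the sketch below shows, under illustrative assumptions (the band limits, threshold, and synthetic audio are invented, not a vendor specification), how a near-ultrasonic beacon of the kind used in uXDT sits just above the audible range and can only be found by inspecting the signal's spectrum; the listener has no perceptual access to it.

    import numpy as np

    # Hypothetical detection sketch for the uXDT-style beacons described above:
    # flag clips with unusual energy in the near-ultrasonic band (~18-20 kHz),
    # which most adults cannot hear. Band limits and threshold are assumptions.
    SAMPLE_RATE = 44_100  # Hz

    def has_ultrasonic_beacon(audio, band=(18_000, 20_000), ratio_threshold=0.05):
        spectrum = np.abs(np.fft.rfft(audio))
        freqs = np.fft.rfftfreq(len(audio), d=1.0 / SAMPLE_RATE)
        in_band = spectrum[(freqs >= band[0]) & (freqs <= band[1])].sum()
        return in_band / (spectrum.sum() + 1e-12) > ratio_threshold

    # One second of ordinary audible content, with and without a faint 18.5 kHz tone.
    t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
    audible = 0.5 * np.sin(2 * np.pi * 440 * t)
    beacon = 0.05 * np.sin(2 * np.pi * 18_500 * t)

    print(has_ultrasonic_beacon(audible))           # False: no covert signal
    print(has_ultrasonic_beacon(audible + beacon))  # True: inaudible beacon present

The asymmetry is the point: emitting such a beacon requires only a loudspeaker, while noticing it requires spectral analysis that ordinary users will never perform.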

These examples underscore the potential for AI systems to manipulate user behaviour by deploying subliminal techniques that bypass conscious control, leading to decisions influenced by stimuli beyond a person’s awareness and the ultimate loss of autonomy.

5. Could you give examples of AI systems that, in your view, exploit vulnerabilities of a person or of a specific group of persons? Can you further explain this?

Examples of AI systems that may exploit the vulnerabilities of individuals or specific groups could include the following.

  • Targeted advertising for children – AI systems that analyse online behavior to deliver personalised ads to children can exploit their developmental vulnerabilities. Children may not fully understand the persuasive intent of advertisements, making them more susceptible to manipulation. For example, AI-driven ads embedded in games or educational apps may push unhealthy foods or expensive toys, influencing children’s preferences and behaviors in ways they may not be equipped to resist.
  • Elderly care monitoring – AI systems used in elderly care, such as monitoring devices that track health or activity levels, could exploit the vulnerabilities of older adults. If these systems prioritise profit over well-being, they may recommend unnecessary products or services, exploiting the elderly’s limited digital literacy or reliance on the system. Additionally, if data from such systems is sold or shared without consent, it could further marginalise this vulnerable group.
  • Loan algorithms and extreme poverty – AI algorithms used by financial institutions to determine loan eligibility could exploit the economically vulnerable. If AI models are designed to maximise profit by offering predatory loans or unfavorable terms to people living in extreme poverty, the system could deepen their financial difficulties. For example, high-interest loans could be marketed to individuals who have little financial literacy or alternatives, taking advantage of their desperate need for funds.
  • Facial recognition in marginalised communities – AI-based facial recognition systems deployed in public surveillance/predictive policing often disproportionately affect ethnic and religious minorities, who may already be vulnerable due to social, political, or economic discrimination. These systems may misidentify individuals at higher rates, leading to false arrests or increased scrutiny, exploiting the group’s vulnerable position within society.
  • AI-powered job applications – AI systems used in hiring processes can exploit people with disabilities by not considering their specific needs or accommodations. If an AI system is trained on biased data that undervalues applicants with disabilities or designs its processes in ways that make it harder for these individuals to succeed, it could exploit their social or economic situation, limiting their opportunities for employment.

6. Are there any cases that, in your view, may unjustifiably fall outside the scope of the prohibition?

There are certain cases that may unjustifiably fall outside the scope of Prohibition B under the AI Act, which could leave vulnerable populations unprotected. Here are a few examples where the scope of the prohibition might be too narrow or ambiguous.

  • Non-explicit exploitation of vulnerabilities – AI systems that exploit vulnerabilities in more subtle ways might not be explicitly covered by the regulation. For instance, recommendation algorithms on social media platforms that encourage prolonged engagement through addictive design might exploit psychological vulnerabilities such as addiction or mental health challenges. These platforms often target users' attention through AI-driven personalisation, which can exacerbate mental health conditions like anxiety, depression, or obsessive behaviors. Since these issues may not be clearly categorised under age, disability, or economic vulnerabilities, they might unjustifiably fall outside the scope of the prohibition.
  • AI in influencing political behavior – Certain AI systems that influence or manipulate political opinions could fall outside the explicit categories of exploitation outlined in the prohibition. For example, AI-driven microtargeting in political campaigns could be used to exploit individuals with limited political knowledge or to stoke fear or anger among specific socio-economic groups. Such systems exploit emotional and cognitive vulnerabilities for political gain, but since this might not be classified under age, disability, or economic situation, it could fall outside the regulation’s scope.
  • Exploitation through dark patterns in online platforms – AI systems that power manipulative interface designs, such as making it difficult to unsubscribe from a service or pushing users toward unnecessary purchases, could exploit cognitive vulnerabilities. If the AI is influencing people with lower levels of digital literacy or cognitive impairments, but those people do not fit the narrow definitions of age, disability, or socio-economic vulnerability, these systems might also escape regulation.
  • AI manipulating consumers based on behavioral data – Consumer profiling systems that use AI to exploit behavioral vulnerabilities, such as compulsive shopping or gambling tendencies, may not be captured if the user doesn't clearly fall into the predefined categories (age, disability, poverty). For example, AI systems used in online gambling platforms or e-commerce sites that recommend addictive products to habitual shoppers might fall outside the explicit scope of the regulation but still exploit the user's psychological or economic vulnerabilities.
  • AI in health tracking and personalisation – AI-powered health or fitness apps could fall outside the prohibition’s scope if they exploit individuals by manipulating their health anxieties. For example, AI could encourage users to overuse certain health products or services based on subtle fears about their health. This form of exploitation may target psychological vulnerabilities that don’t neatly fit into age, disability, or economic categories.

In these cases, the AI systems might not explicitly target vulnerable groups as defined by the regulation (children, elderly, people with disabilities, or those in poverty). However, they could still unjustifiably exploit emotional, psychological, or cognitive vulnerabilities, or even manipulate behavioral tendencies in ways that are detrimental to the individuals affected. Expanding the scope of the prohibition to address these more nuanced forms of exploitation may be necessary to provide comprehensive protection from harmful AI systems.

7. Can you provide examples of systems that cause or are reasonably likely to cause significant harm to a person’s physical or psychological health or financial interests? Do you have any questions or need specific clarifications where it concerns these prohibitions?

Several examples of AI systems that could cause, or are reasonably likely to cause, significant harm to a person’s physical or psychological health, or to their financial interests, are discussed below.

  • AI in healthcare misdiagnosis – AI systems used in healthcare, such as diagnostic tools, could cause significant physical harm if they misdiagnose a condition or suggest inappropriate treatments. For example, if an AI system misinterprets medical imaging data and recommends against necessary surgery, this could lead to worsened health outcomes or even fatalities. Similarly, failure to correctly diagnose a psychological disorder could lead to inappropriate treatments that might worsen a patient's condition.
  • AI in mental health and manipulation of vulnerable individuals – AI-driven mental health chatbots or virtual therapists that are not adequately designed could unintentionally cause psychological harm by providing poor or misleading advice. If these systems misinterpret a user’s emotional state or deliver harmful responses, they could exacerbate mental health issues such as depression or anxiety. Vulnerable users, such as those suffering from severe mental health conditions, may rely on these tools and be misled into harmful actions or inaction.
  • Facial recognition in law enforcement – AI-powered facial recognition systems used in law enforcement can misidentify individuals, leading to wrongful arrests and significant psychological distress. These errors disproportionately affect minority groups, increasing the risk of false accusations and the resulting emotional trauma. The impact can extend to severe social and financial consequences if someone is unjustly implicated in criminal activity due to misidentification by an AI system.
  • Financial algorithms exploiting vulnerable populations – AI systems used in finance, such as those offering loans or determining creditworthiness, could cause significant financial harm by unfairly denying loans or extending high-interest loans to individuals who cannot afford them. These systems often rely on large datasets that may not account for nuances in an individual’s financial situation, potentially leading to unjust outcomes that can trap people in cycles of debt or worsen their economic standing.
  • AI-driven misinformation – AI systems that generate or amplify misinformation, such as social media algorithms promoting false health information, can lead to psychological or physical harm. For example, during the COVID-19 pandemic, AI-driven misinformation spread false remedies and anti-vaccination propaganda, leading people to engage in harmful health practices or forgo necessary medical interventions.
  • Recommender systems promoting harmful content – AI recommender systems on social media or video platforms may promote content that exacerbates mental health issues, such as videos promoting eating disorders, self-harm, or extreme ideologies. These systems are designed to maximise engagement, often leading users toward content that triggers negative emotional states, resulting in psychological harm.

Questions or clarifications regarding the prohibitions

  • What constitutes significant harm? Specifically, what is the threshold for significant harm, and how would regulators distinguish between minor adverse effects and more profound harm, particularly in areas like mental health where harm can be subjective?
  • Distinction between subtle exploitation and overt manipulation – When it comes to psychological harm or financial exploitation, where is the line drawn between acceptable commercial practices and prohibited manipulative behavior? For example, personalised advertising might subtly exploit a person’s vulnerabilities but may not always be seen as manipulative enough to cause significant harm.
  • Addressing cumulative effects – Should harm be judged on a case-by-case basis, or should the cumulative effects of ongoing exposure to AI-driven harm (e.g., repeated exposure to harmful content) be considered when applying the prohibition?

These clarifications would help ensure that the prohibitions are applied fairly and effectively across various AI use cases.

We would like to express our gratitude to the Dutch Data Protection Authority for this opportunity and emphasise the significance of fostering public dialogue around Prohibited AI Systems. We are eager to see the Dutch Data Protection Authority, along with other ministries, actively integrate the insights gathered from this consultation and continue doing so in the development of future AI policies.

 

Kadian Davis-Owusu, PhD



[1] Tegan Cohen, Regulating Manipulative Artificial Intelligence, SCRIPTed: A Journal of Law, Technology & Society, (Feb, 2023), https://script-ed.org/article/regulating-manipulative-artificial-intelligence/

[2] Vidhi Parekh, Darshan Shah and Manan Shah, Fatigue Detection Using Artificial Intelligence Framework, Springer, (Nov, 2019), https://link.springer.com/article/10.1007/s41133-019-0023-4#auth-Vidhi-Parekh-Aff1

[3] Nick Whigham, Leaked document reveals Facebook conducted research to target emotionally vulnerable and insecure youth, The New Zealand Herald, (May, 2017), https://www.nzherald.co.nz/business/leaked-document-reveals-facebook-conducted-research-to-target-emotionally-vulnerable-and-insecure-youth/CVTHSGXCQ4KVQCS3U6RZZ5CUJA/

[5] Alex Seitz-Wald and Mike Memoli, Fake Joe Biden robocall tells New Hampshire Democrats not to vote Tuesday, NBC News, (Jan, 2024), https://www.nbcnews.com/politics/2024-election/fake-joe-biden-robocall-tells-new-hampshire-democrats-not-vote-tuesday-rcna134984 

[6] OpenAI, GPT-4 technical report, arXiv, (Mar, 2023), https://arxiv.org/abs/2303.08774

[7] Anton Bakhtin et al., Human-level play in the game of Diplomacy by combining language models with strategic reasoning, American Association for the Advancement of Science, (Nov, 2022), https://www.science.org/doi/10.1126/science.ade9097

[8] Peter S. Park, AI Deception: A Survey of Examples, Risks, and Potential Solutions, Patterns, (May, 2024), https://www.cell.com/patterns/fulltext/S2666-3899%2824%2900103-X?s=08

[9] Kelsey Piper, StarCraft is a deep, complicated war strategy game. Google’s AlphaStar AI crushed it, Vox, (Jan, 2019),  https://www.vox.com/future-perfect/2019/1/24/18196177/ai-artificial-intelligence-google-deepmind-starcraft-game

[10] Mike Lewis et al., Deal or no deal? End-to-end learning for negotiation dialogues, arXiv, (Jun, 2017), https://arxiv.org/abs/1706.05125

[11] Joel Lehman et al., The surprising creativity of digital evolution: A collection of anecdotes from the evolutionary computation and artificial life research communities, Artificial Life, (May, 2020), https://direct.mit.edu/artl/article/26/2/274/93255/The-Surprising-Creativity-of-Digital-Evolution-A

[12] European Union, Artificial Intelligence Act, Regulation (EU) 2024/1689, https://eur-lex.europa.eu/eli/reg/2024/1689/oj

[13] OECD AI Principles, https://oecd.ai/en/ai-principles

[14] UNESCO Recommendation on the Ethics of Artificial Intelligence, https://unesdoc.unesco.org/ark:/48223/pf0000381137

[15] Council of Europe Framework Convention on Artificial Intelligence and Human Rights, https://rm.coe.int/1680afae3c

[16] Bermúdez, J. P., Nyrup, R., Deterding, S., Moradbakhti, L., Mougenot, C., You, F., & Calvo, R. A., What is a subliminal technique? An ethical perspective on AI-driven influence, IEEE International Symposium on Ethics in Engineering, Science, and Technology (ETHICS), (May, 2023), https://ieeexplore.ieee.org/abstract/document/10155039

[17] Franklin, M., Ashton, H., Gorman, R., & Armstrong, S, Missing mechanisms of manipulation in the EU AI Act, The International FLAIRS Conference Proceedings, (May, 2022), https://journals.flvc.org/FLAIRS/article/view/130723/133924

[18] Alfano, M., Fard, A. E., Carter, J. A., Clutton, P., & Klein, C, Technologically scaffolded atypical cognition: The case of YouTube’s recommender system, Springer, (June, 2020), https://link.springer.com/article/10.1007/s11229-020-02724-x

[19] F. Morton, Neuromarketing for Design Thinking: The Use of Neuroscientific Tools in the Innovation Process, Organizational Innovation in the Digital Age,  (April, 2022), https://link.springer.com/chapter/10.1007/978-3-030-98183-9_2

[20] M. Rambocas and B.G. Pacheco, Online Sentiment Analysis in Marketing Research: A Review, Journal of Research in Interactive Marketing, (Jan, 2018)

[21] Sheng Bin, Social Network Emotional Marketing Influence Model of Consumers’ Purchase Behavior, MDPI, (March, 2023), https://www.researchgate.net/publication/369212667_Social_Network_Emotional_Marketing_Influence_Model_of_Consumers'_Purchase_Behavior

[22] Tommaso de Mari Casareto dal Verme, Artificial Intelligence, Neuroscience and Emotional Data. What Role for Private Autonomy in the Digital Market?, Erasmus Law Review, (2023), https://www.erasmuslawreview.nl/tijdschrift/ELR/2023/3/ELR-D-23-00037#content_ELR-D-23-00037.ELR-D-23-00037-003

[23] A. Hakim, S. Klorfeld, T. Sela, D. Friedman, M. Shabat-Simon & D.J. Levy, Machines Learn Neuromarketing: Improving Preference Prediction from Self-Reports Using Multiple EEG Measures and Machine Learning, International Journal of Research in Marketing, (2021), https://ideas.repec.org/a/eee/ijrema/v38y2021i3p770-791.html

[24] Thomas Aichner, The (Scary) Solution to Omnichannel Marketing: Ultrasound Tracking, LinkedIn, (April, 2021), https://www.linkedin.com/pulse/scary-solution-omnichannel-marketing-ultrasound-tracking-aichner/

[25] N. Scheiber, How Uber uses psychological tricks to push its drivers’ buttons, The New York Times, (April, 2017), https://www.nytimes.com/interactive/2017/04/02/technology/uber-drivers-psychological-tricks.html

 

Written by:

Kadian Davis-Owusu

Kadian has a background in Computer Science and pursued her PhD and post-doctoral studies in the fields of Design for Social Interaction and Design for Health. She has taught a number of interaction design courses at the university level, including at the University of the West Indies, the University of the Commonwealth Caribbean (UCC) in Jamaica, and the Delft University of Technology in the Netherlands. Kadian also serves as the Founder and Lead UX Designer for TeachSomebody and is the host of ...