AI is a game-changer poised to impact businesses and individuals significantly in the years ahead. Fueled by investor ambitions, business interests, and consumer enthusiasm, the pace of AI innovation and adoption is set to accelerate. Its importance and influence will grow as AI finds novel and unforeseen applications that transform industries, society, and government operations, delivering immense economic and societal value. AI will revolutionize healthcare, finance, manufacturing, transportation, education, and more. By 2030, AI-enabled autonomous systems, humanoid robots, and AI-driven decision-making will be prevalent across industries and applications.
Amid this promise and excitement, we must not overlook AI’s dark side — the limitations, risks, and societal harms it brings. As Cutter Fellow Steve Andriole aptly described in his 2018 article, AI is “good, disruptive, and scary.”1 Its unintended consequences can be alarming and genuinely harmful when implemented without caution or ethics.
This article offers a forward-looking, balanced perspective on AI’s darker dimensions and potential impact. We explore the technical barriers, risks, and limitations associated with AI while proposing practical remedies. Emphasizing the urgent need for action, we call on all stakeholders (developers, users, governments, and regulatory bodies) to engage responsibly. By addressing these challenges now, we can steer AI toward a future that maximizes its benefits while minimizing its harms.
The Dark Side
Some of AI’s key challenges and risks include:
- Technological barriers. Limitations in achieving true general intelligence, poor data quality, and issues with contextual understanding hinder AI performance.
- Complexity, scalability, and sustainability. Increasingly complex AI systems face challenges in scalability, often requiring massive computational and energy resources to maintain performance.
- Ethical and operational limitations. AI struggles with moral decision-making and often depends on human oversight for critical functions.
- Generalization failures. AI systems can’t yet generalize learned knowledge across tasks or domains.
- Societal harms. These encompass the misuse of AI for malicious and illegal purposes, including creating deepfakes, generating and spreading misinformation, producing biased and discriminatory outcomes, violating data privacy, and enabling surveillance for ulterior motives.
- Security threats. AI systems are vulnerable to sophisticated cyberattacks, including those engineered by other AI systems. Furthermore, AI can aid security attacks on cyber-physical systems.
- Economic and social disruption. AI applications can result in job displacement, increasing inequality due to automation, potential economic instability, and power concentration among tech giants.
- Autonomous systems risks. AI-driven vehicles, drones, and weapons can pose significant dangers when used without adequate human oversight.
We discuss many of these risks below. For more, please see Sumit Mattey’s article, “Unveiling the Shadows: The Dark Side of AI in Modern Society.”2
Bias
Generative AI (GenAI) models, also known as “foundation models,” are trained on vast datasets comprising preexisting information, images, and data sourced from diverse platforms, including the Web. Unfortunately, biases inherent in this data often permeate the model’s outputs. This can result in unfair, biased, inaccurate, or narrowly focused responses, leading to discriminatory outcomes such as racial or gender prejudice.
If a language model is exposed to biased information (intentionally or unintentionally), its responses will reflect those biases. To mitigate this issue and ensure fairness and trustworthiness, it is essential to use unbiased, diverse datasets representing varied perspectives.
AI systems may also exhibit algorithmic bias arising from systematic and repeatable errors embedded within the model’s architecture.3 This can stem from preexisting societal biases, machine learning biases, emergent biases, technical design flaws, or correlation biases. These issues can produce “unfair” outcomes across applications and industries.
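To make this concrete, the sketch below computes a simple group fairness measure (the demographic parity gap) over a set of model decisions. The records, group labels, and tolerance threshold are purely illustrative assumptions, not part of any standard.

```python
# Minimal sketch: measuring the demographic parity gap in model outputs.
# The decision records and the 0.1 tolerance are illustrative only.
from collections import defaultdict

def demographic_parity_gap(records):
    """records: list of (group, approved) pairs, approved in {0, 1}.
    Returns (gap, rates): the largest approval-rate difference between
    groups, plus the per-group rates."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in records:
        totals[group] += 1
        approvals[group] += approved
    rates = {g: approvals[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

if __name__ == "__main__":
    # Hypothetical loan-approval decisions produced by a model
    decisions = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
    gap, rates = demographic_parity_gap(decisions)
    print(f"approval rates: {rates}, parity gap: {gap:.2f}")
    if gap > 0.1:  # illustrative tolerance only
        print("Warning: approval rates differ notably across groups")
```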
An article in the International Journal of Information Management Data Insights thoroughly explores AI biases, providing examples from various sectors and emphasizing the need to address these challenges.4
Misuse & Abuse
GenAI-based text, image, and video generators like ChatGPT, Midjourney, DALL-E 3, and Sora can be used to spread misinformation, promote offensive messages (e.g., sexist or racist rhetoric), and generate harmful material that incites violence or social unrest. These systems can also be used for impersonation in a way that causes reputational damage or financial harm.
Malicious actors can leverage AI chatbots to engage in antisocial or illegal activities, such as learning how to create explosives, commit theft, or cheat in various scenarios.
Robust safeguards, also known as “guardrails,” are needed to address these risks. These measures aim to prevent misuse, ensure the ethical application of AI, and hold those who exploit the technology accountable.
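One minimal form such a guardrail can take is a policy check that screens prompts before they ever reach the model. The sketch below is illustrative only: the blocked categories, keyword patterns, and check_prompt helper are hypothetical, and production guardrails typically rely on trained classifiers applied to both prompts and outputs rather than keyword matching.

```python
import re

# Illustrative blocked-topic patterns; real guardrails use trained
# classifiers, not keyword lists, and screen outputs as well as prompts.
BLOCKED_PATTERNS = {
    "weapons": re.compile(r"\b(build|make)\b.*\bexplosive", re.IGNORECASE),
    "fraud": re.compile(r"\bhow to\b.*\b(steal|launder)\b", re.IGNORECASE),
}

def check_prompt(prompt: str):
    """Return (allowed, reason). A real system would log refusals for review."""
    for category, pattern in BLOCKED_PATTERNS.items():
        if pattern.search(prompt):
            return False, f"blocked: matches '{category}' policy"
    return True, "allowed"

print(check_prompt("How do I make an explosive device?"))
print(check_prompt("Summarize this quarterly report."))
```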
Security
Cybercriminals can use AI to launch sophisticated cyberattacks that evade detection, and AI itself may become a target of advanced cyber threats.5 For example, hackers could use AI-powered content generators to craft personalized, convincing spam messages or embed hidden malicious code in images, dramatically increasing the scale and effectiveness of cybersecurity attacks.
Additionally, users may inadvertently expose sensitive personal or business information by sharing it with chatbots like ChatGPT. Hackers could potentially store, analyze, or misuse this data, raising significant security, ethical, and privacy concerns.
Consider the following real-world scenarios:6
- An executive copied and pasted their company’s 2023 strategy document into a chatbot, asking it to generate PowerPoint slides for a presentation.
- A doctor entered a patient’s name and medical condition into ChatGPT to draft a letter to the patient’s insurance company.
These cases highlight the urgent need for stricter safeguards, operational guidelines, and higher levels of awareness.
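One practical safeguard is to redact obvious personal identifiers before any text is sent to an external chatbot. The sketch below is a minimal illustration: the regular expressions catch only a few common patterns and are no substitute for a dedicated data loss prevention tool.

```python
import re

# Illustrative redaction rules; production systems use dedicated
# data-loss-prevention (DLP) or PII-detection services instead.
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
    (re.compile(r"\b(?:\+?1[ -]?)?\(?\d{3}\)?[ -]?\d{3}-\d{4}\b"), "[PHONE]"),
]

def redact(text: str) -> str:
    """Replace common personal identifiers with placeholders."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

draft = "Patient John Doe (john.doe@example.com, 555-123-4567) was diagnosed..."
print(redact(draft))  # identifiers replaced before text leaves the organization
```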
Legal Issues
Who owns the rights to an AI-generated essay, musical composition, or piece of art? Is it the people who provided the prompts and generated the content or those whose data was used to train the AI model?
It is worth noting that the US Copyright Office ruled that images generated by tools like Midjourney and other AI text-to-image platforms are not protected by US copyright law, as they lack the element of human authorship.7 This decision sparked debate about the legal status of AI-generated creations. Artists filed a class-action lawsuit against companies offering AI-generated art, challenging the legality of training AI systems on datasets that include their work without explicit consent.8 There is an urgent need to address AI’s legal and ethical implications in creative domains.
Impact on Employment & Society
AI content generators have the potential to automate tasks traditionally performed by humans (e.g., writing, editing, and customer service), raising concerns about job displacement. Automated decision-making systems, autonomous systems, and agentic AI could significantly reduce the need for human operators, impacting labor dynamics and raising socioeconomic issues.
AI’s broader societal impact includes the risk of exacerbating inequalities in access to its benefits, privacy erosion, and the degradation of human relationships. These concerns underscore the need for thoughtful policies and ethical considerations as AI advances.
Information Laundering
AI significantly influences how information is disseminated, but its potential biases and manipulations can distort information and/or spread misinformation. Key mechanisms through which AI contributes to information laundering include:9
- Selective and biased presentation. AI systems can omit relevant data to create skewed narratives, and language models trained on biased content can generate misleading information. Malicious actors can intentionally design algorithms to produce misleading or biased outcomes that serve specific agendas.
- Deepfakes and content amplification. AI enables highly realistic, fabricated content (images, videos, and audio) designed to spread false narratives. These technologies can also amplify the visibility of synthetic media, extending its reach while lending it a false sense of legitimacy.
- Misinformation campaigns and echo chambers. Automated bots can rapidly spread false information, creating a false perception of credibility or consensus. Personalization algorithms reinforce existing biases by presenting content aligned with users’ beliefs (echo chambers), making it harder to distinguish true from false information.
There is an urgent need for vigilance, ethical practices, and robust safeguards to prevent the misuse of AI and the spread of misinformation.
Overreliance
Relying on AI for decision-making can undermine human qualities like empathy, creativity, and ethical discernment, which are essential for sound judgment.10 This dependence can lead to dehumanization within organizations, erosion of human judgment, a decline in creative thinking, and a loss of human autonomy.
As organizations increasingly adopt AI-driven decision-making, they run the risk of using AI in contexts that require nuanced judgment and critical thinking, such as crisis management. Business leaders must learn to leverage AI capabilities while preserving leadership’s unique human qualities and retaining the essential role of human judgment.
Isolation, Psychological Manipulation & Social Implications
Despite fostering hyperconnectivity, AI-driven apps often contribute to social isolation. Virtual echo chambers and digital personas can replace human connections, leading to feelings of loneliness and alienation. A growing dependence on virtual interactions and the commercialization of online relationships will further weaken social cohesion, threatening collective well-being.
AI algorithms used by social media platforms and online services exploit human psychology, driving addictive behaviors and exacerbating mental health challenges. Inappropriate implementation and constant use of such applications affect mental health, human relationships, self-perception, and social dynamics.11 There is an urgent need for ethical standards and regulatory oversight to safeguard mental and societal health.
Environmental Impacts
Large language model training is energy-intensive, contributing to climate change and depleting natural resources. Strategies to reduce AI’s carbon footprint are urgently needed.
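A back-of-the-envelope calculation makes the scale concrete. Every figure below (accelerator count, power draw, training duration, grid carbon intensity) is an illustrative assumption, not a measurement of any particular model.

```python
# Back-of-the-envelope training footprint; every number is an assumption.
gpus = 1_000                # hypothetical accelerator count
power_kw_per_gpu = 0.7      # assumed average draw per GPU, incl. overhead (kW)
training_days = 30          # assumed wall-clock training time
grid_kg_co2_per_kwh = 0.4   # assumed grid carbon intensity (kg CO2 / kWh)

energy_kwh = gpus * power_kw_per_gpu * training_days * 24
emissions_tonnes = energy_kwh * grid_kg_co2_per_kwh / 1_000

print(f"Energy: {energy_kwh:,.0f} kWh, emissions: {emissions_tonnes:,.0f} t CO2")
# ~504,000 kWh and ~200 t CO2 under these assumptions -- roughly the annual
# electricity use of dozens of households, for a single training run.
```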
The swift obsolescence of AI hardware accelerates the generation of electronic waste (e-waste), posing challenges for sustainable waste management. Discarded devices and components often end up in landfills or are improperly disposed of, releasing harmful pollutants into ecosystems. Adopting sustainable design practices and responsible end-of-life management is crucial for minimizing e-waste.
Risk Mitigation
AI’s rapid expansion presents both immense opportunities and potential risks, some of which could be irreversible if not addressed. A notable example is the automated trading programs of the 1980s and 1990s, which contributed to market crashes when sell orders from one program triggered cascading selling by others. This prompted financial markets to implement circuit breakers that halt trading when selling activity crosses defined thresholds.
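The same idea can be expressed in a few lines. The sketch below illustrates a generic circuit breaker on automated actions; the 7% drop threshold is made up for illustration and is not any exchange’s actual rule.

```python
# Generic circuit-breaker sketch for automated (e.g., trading) systems.
# The 7% drop threshold is illustrative, not an actual exchange rule.
class CircuitBreaker:
    def __init__(self, reference_price: float, max_drop: float = 0.07):
        self.reference_price = reference_price
        self.max_drop = max_drop
        self.halted = False

    def allow_order(self, current_price: float) -> bool:
        drop = (self.reference_price - current_price) / self.reference_price
        if drop >= self.max_drop:
            self.halted = True  # stop all automated orders until humans review
        return not self.halted

breaker = CircuitBreaker(reference_price=100.0)
print(breaker.allow_order(96.0))   # True: within tolerance
print(breaker.allow_order(92.0))   # False: 8% drop trips the breaker
print(breaker.allow_order(95.0))   # False: stays halted pending human review
```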
Predicting and analyzing AI-driven risks is crucial for business leaders and developers. As companies embrace digitalization and AI, a close interaction between business, AI, and organizational strategies is essential to navigate the digital imperatives of 2030 and beyond.12
Executives planning to integrate AI should analyze its contributions to roles within their organizations and maintain the skills and professional growth ecosystem necessary for their developers to leverage AI effectively in the future.
AI systems must incorporate human oversight to mitigate catastrophic AI-driven risks in automated environments, particularly in critical areas like healthcare, defense, law, and finance. Human-in-the-loop systems ensure that human operators retain control, balancing automation with human expertise and intuition.
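In practice, human-in-the-loop often means routing low-confidence or high-stakes decisions to a person instead of acting on them automatically. The thresholds, labels, and routing rule in the sketch below are illustrative assumptions, not a prescribed design.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    action: str        # what the AI proposes to do
    confidence: float  # model confidence in [0, 1]
    high_stakes: bool  # e.g., medical, legal, or financial consequence

def route(decision: Decision, confidence_floor: float = 0.9) -> str:
    """Illustrative human-in-the-loop gate: automate only routine,
    high-confidence decisions; everything else goes to a human reviewer."""
    if decision.high_stakes or decision.confidence < confidence_floor:
        return "escalate_to_human"
    return "execute_automatically"

print(route(Decision("approve routine invoice", 0.97, high_stakes=False)))
print(route(Decision("deny insurance claim", 0.95, high_stakes=True)))
```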
AI Risk Management
Many organizations are adopting AI, but not enough address its associated risks. A report by the IBM Institute for Business Value revealed that although 96% of leaders believe GenAI increases the risk of a security breach, only 24% of GenAI projects are adequately secured.13
AI risk management offers a structured approach to identifying, mitigating, and addressing these risks. It involves a combination of tools, practices, and principles focused on implementing formal AI risk management frameworks. The goal is to minimize AI’s negative impacts while maximizing its benefits.
The National Institute of Standards and Technology (NIST) introduced the NIST AI Risk Management Framework (AI RMF) to manage the risks AI poses.14 This voluntary framework integrates trustworthiness considerations throughout the AI lifecycle, from design and development to use and evaluation. AI RMF complements and aligns with other AI risk management initiatives.
Responsible Development
Responsible AI development and use are essential for mitigating AI’s ethical concerns and risks. Developers, users, and regulators collectively share this burden. Developers must ensure models are trained on diverse and representative data and implement safeguards to prevent misuse. An interdisciplinary approach is also crucial for addressing the challenges posed by artificial general intelligence (systems that match or surpass human capabilities across a wide range of cognitive tasks), which remains a distant goal for researchers and developers.
Regulatory frameworks are needed to address privacy, bias, and accountability concerns. Accountability and responsibility must also be embedded within an appropriate legal framework to promote the ethical use of these technologies for societal benefit.
Users must be mindful of the data they provide to AI systems, including personal information. They should use AI content generators ethically, posing valid, responsible, and morally acceptable prompts; fact-checking responses; and correcting/editing the responses before use.
General moral principles and a comprehensive overview of AI ethics should be integrated into AI curricula and education for students, as well as training programs for AI developers, data scientists, and AI researchers.
Trustworthy AI
One of the biggest challenges developers and society face is trust in AI. Core principles of responsible, trustworthy AI include fairness, accountability, robustness, safety, privacy, and societal and environmental well-being.15 Preventative measures (guardrails) include requiring developers and deployers of high-risk AI to take specific steps across the AI lifecycle. Governments, technology companies, and individuals must join forces to establish a framework that ensures AI’s ethical and responsible development and use. Key priorities include:
- Comprehensive data protection. Enforce stringent regulations to safeguard personal data and ensure accountability for data breaches.
- Ethical AI practices. Center AI development and deployment around core human values and fundamental rights.
- Transparent algorithms. Mandate transparency in AI and make AI systems explainable and open to auditing to minimize bias and prevent discrimination.
- International collaboration. Develop global standards for AI governance to tackle cross-border issues.
- Public education. Teach individuals to understand AI risks and how to safeguard themselves against potential harms.
Technologists should focus on improving the transparency and explainability of AI systems while actively tackling biases and establishing measures to prevent their misuse.
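One widely used transparency technique is to report which input features most influence a trained model’s predictions. The sketch below applies scikit-learn’s permutation importance to a synthetic dataset purely for illustration; in a real audit, it would run against the production model and data.

```python
# Illustrative explainability check using permutation feature importance.
# Synthetic data only; an audit would use the deployed model and real inputs.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=5, n_informative=3,
                           random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: importance {importance:.3f}")
```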
AI Regulation
Rapid AI advancement has spurred governments worldwide to establish evolving regulatory frameworks to balance its risks and benefits. A recent report offers a detailed overview of AI governance across the US, China, and the EU, covering key topics like system classification, cybersecurity, incident reporting, open source models, and risks tied to hazardous materials.16 It highlights legislative insights and analyzes motivations and expectations for future regulations.
AI regulation enforcement is challenging. A recent research article argues, “If we believe that AI should be regulated, then AI systems must be designed to be regulatable.”17
AI Audit: Ensuring Accountability & Reducing Risks
An AI audit, or algorithmic auditing, evaluates an AI system to ensure ethical, legal, and secure operations. It helps businesses identify risks, detect prohibited activities, address illegal bias, and implement safeguards to mitigate unacceptable risks.
The process includes documenting the AI system, assessing the development team, reviewing test datasets, analyzing inputs and outputs, and examining the model’s internal workings for transparency.
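Analyzing inputs and outputs presupposes that they are captured in the first place. The sketch below shows one hypothetical way to log each model decision with enough context (model version, inputs, output, timestamp) to support a later audit; the schema and field names are assumptions.

```python
import json, time, uuid

def log_decision(model_version: str, inputs: dict, output, path="audit_log.jsonl"):
    """Append one auditable record per model decision (illustrative schema)."""
    record = {
        "id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "model_version": model_version,
        "inputs": inputs,            # ideally redacted of personal data
        "output": output,
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record["id"]

log_decision("credit-model-1.3", {"income": 52_000, "term_months": 36}, "approved")
```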
AI audits also educate executives about AI’s value and challenges. Global organizations provide AI auditing frameworks that guide businesses in responsibly adopting AI. The frameworks support risk mitigation and ensure AI technologies align with ethical standards, fostering trust and enhancing integration into digital transformation strategies, but they have yet to be widely adopted.
Why AI Projects Fail & How to Succeed
More than 80% of AI projects fail, which is twice the already-high failure rate in corporate IT projects that do not involve AI.18 Key reasons for failure include:19
- Problem misunderstanding. Lack of clarity about the problem AI is intended to solve and AI’s capability to address it leads to misaligned objectives.
- Insufficient data. Inadequate or poor-quality data hampers the development of effective AI models and project outcomes.
- Overemphasis on technology. Focusing on the latest AI trends and tools rather than addressing real-world issues reduces project relevance.
- Lack of infrastructure. Weak or inadequate infrastructure for managing data and deploying models undermines project execution.
- Overreach. Applying AI to problems beyond its current capabilities leads to poor outcomes.
Strategies for Success
To overcome these challenges, industry leaders and developers should:
- Bridge the gap between AI’s potential and its successful implementation, ensuring more impactful and sustainable outcomes.
- Clearly define project goals and focus on solving meaningful, real-world problems.
- Invest in robust infrastructure for data management and AI model deployment.
- Recognize AI’s limitations and conduct feasibility assessments with input from technical experts to ensure realistic expectations.
- Collaborate with government and private agencies to address data collection challenges.
- Support employees’ continuing education and training to build expertise in AI implementation.
In the evolving AI landscape, professionals must expand their expertise beyond technical skills to remain competent and relevant.20 This includes staying updated on AI advancements, exploring the potential of AI in their work, addressing ethical and regulatory challenges, mitigating risks, and cultivating multidisciplinary knowledge. Having the knowledge, skills, and abilities to manage AI systems effectively is critical for the quality and success of AI applications.
By establishing clear success metrics, business leaders can identify underperforming AI experiments early and terminate them before costs escalate.21 However, in some cases, pausing a project rather than abandoning it may be more effective, as emerging AI capabilities could address the underlying issues.
Conclusion
If we don’t embrace AI advances, we risk falling behind. But we must remain vigilant about AI’s core issues, limitations, and risks. Failure to address these challenges can result in financial and reputational loss, security vulnerabilities, ethical dilemmas, economic and social disruption, and environmental harm. Addressing the dark side of AI demands awareness, technological innovation, collaboration, and decisive policy interventions. Put simply, we should be asking not just what AI can do, but what it should — and shouldn’t — do. The future of AI lies in our hands. Let’s unlock AI’s potential and benefits by proactively addressing its risks and unintended consequences.
Our future is a race between the growing power of technology and the wisdom with which we use it.
— Stephen Hawking22
References
1 Andriole, Steve. “AI: The Good, the Disruptive, and the Scary.” Amplify, Vol. 31, No. 2, 2018.
2 Mattey, Sumit. “Unveiling the Shadows: The Dark Side of AI in Modern Society.” LinkedIn, 29 April 2024.
3 “Algorithmic bias.” Wikipedia, accessed December 2024.
4 P.S., Varsha. “How Can We Manage Biases in Artificial Intelligence Systems — A Systematic Literature Review.” International Journal of Information Management Data Insights, Vol. 3, No. 1, April 2023.
5 Murugesan, San. “The AI-Cybersecurity Nexus: The Good and the Evil.” IT Professional, Vol. 25, No. 5, September-October 2022.
6 Lemos, Robert. “Employees Are Feeding Sensitive Biz Data to ChatGPT, Raising Security Fears.” Dark Reading, 7 March 2023.
7 Quach, Katyanna. “America: AI Artwork Is Not Authored by Humans, So Can’t Be Protected by Copyright.” The Register, 24 February 2023.
8 Escalante-De Mattei, Shanti. “Artists File Class Action Lawsuit Against AI Image Generator Giants.” ARTnews, 17 January 2023.
9 Fheili, Mohammad Ibrahim. “Information Laundering in the Age of AI: Risks and Countermeasures.” LinkedIn, 1 July 2024.
10 Ribeiro, Brunno Boaventura. “The Dark Side of AI Dependency.” LinkedIn, 22 August 2024.
11 Kak, Ajay. “The Dark Side of AI, Generative AI, and Machine Learning and Examining Its Psychological Consequences.” LinkedIn, 3 September 2024.
12 Mithas, Sunil, San Murugesan, and Priya Seetharaman. “What Is Your Artificial Intelligence Strategy?” IT Professional, Vol. 22, March-April 2020.
13 Badman, Annie. “What Is AI Risk Management?” IBM, accessed December 2024.
14 “AI Risk Management Framework.” National Institute of Standards and Technology (NIST), accessed December 2024.
15 Varsha (see 4).
16 Cheng, Deric, and Elliot McKernon. “New Report: 2024 State of the AI Regulatory Landscape.” Convergence Analysis, 27 May 2024.
17 Shen, Xudong, et al. “Directions of Technical Innovations for Regulatable AI Systems.” Communications of the ACM, Vol. 67, No. 11, October 2024.
18 Kahn, Jeremy. “Want Your Company’s AI Project to Succeed? Don’t Hand It to the Data Scientists, Says This CEO.” Fortune, 26 July 2022.
19 Ryseff, James, Brandon F. De Bruhl, and Sydne J. Newberry. “The Root Causes of Failure for Artificial Intelligence Projects and How They Can Succeed.” RAND, 13 August 2024.
20 Murugesan, San. “To Thrive in the Artificial Intelligence Age, Stay Professionally Fit — Forever.” IEEE Intelligent Systems, Vol. 39, November-December 2024.
21 Gross, Grant. “When Is the Right Time to Dump an AI Project?” CIO, 10 October 2024.
22 Walker, Lauren. “Stephen Hawking Warns Artificial Intelligence Could End Humanity.” Newsweek, 14 May 2015.