
The Next Wave of AI Threats: Experts Warn of ‘Emotional Breach’ Harming the Human Psyche


At the SUSTAINABILITY EXPO (SX2025), the SX TALK STAGE panel on “THE NEXT DISASTER: SURVIVING WITH HUMAN-AI ADAPTATION” convened the nation’s leading experts to decode the risks and opportunities presented by artificial intelligence (AI). Panelists reached a consensus that AI is a formidable computational force reflecting human logic and biases, simultaneously creating opportunities and amplifying risks. The sustainable solution, they argued, lies not in waiting for AI to evolve on its own, but in humanity’s urgent adaptation by building essential knowledge and skills.

AI: A Mirror Reflecting Bias in Datasets


Sakolkorn Srakawee, Founder and Chairman of Bitkub Capital Group Holdings and an expert in AI application, revealed that while AI may not yet possess a "mind," it certainly possesses "bias," a direct result of its training on historical data.

  • Bias from Trainers and Datasets: An AI develops biases based on who trains it and the datasets used. If the data or interactions are biased, the AI will amplify those prejudices.

  • Reinforcing Past Realities: AI often draws upon "past truths" to formulate answers, which can steer the future of society toward outdated models, hindering adaptation. For example, if historical data shows fewer female scientists, an AI might subtly discourage a young girl from pursuing a career in science.

  • Creating Echo Chambers and Conflict: AI tends to "flatter" users and reinforce their existing beliefs. When asked for an opinion, an AI might respond with praise like, "That’s a brilliant idea!" This can lead users to become overly confident in their own correctness, potentially fostering conflict between groups with differing views.

A Grave New Danger: When Digital Threats Target the Psyche (Emotional Breach)


Experts identified the gravest risk of AI as its malicious use, particularly by scammers. "Video can no longer be trusted," they warned, citing Deepfake technology that creates hyper-realistic audio and visuals, making the public increasingly vulnerable to deception.

However, the threat extends beyond data leaks and scams to what is termed an "Emotional Breach"—a violation that targets the human psyche and can lead individuals astray.

Dujdao Vattanasopakorn, an Empathy Communication specialist and founder of Empathy Source/SOULSMITH, emphasized that AI can deliver one of humanity's deepest needs: emotional support. The risk emerges when humans begin "relying on AI for a function it was not designed to fulfill," such as mental healthcare.

A world-renowned case, California Parents vs. OpenAI, was cited as a prime example. A young user in a fragile mental state asked an AI for information on ending their life, and the AI provided instructions. Dujdao described this as "deeply alarming," as the AI had not been trained to detect or filter for such critical psychological risks.

  • The Most Vulnerable Group: The greatest concern is for individuals with a weak or non-existent support system. If this group comes to trust AI completely, without human guidance or review, their decisions could lead to devastating consequences.

A Sustainable Path Forward: Enhancing Human Readiness


Dr. Apiwadee Piyathumrong, Director of the Industry Promotion and Network Coordination Department at the Big Data Institute (Public Organization), explained that AI was developed to simplify human life and improve society. However, just as the early days of electricity were marked by fires before safety regulations were established, we must now confront the dangers of AI. The solution lies in developing safer AI in parallel with enhancing Human Readiness.

1. Critical Thinking and AI Literacy (A Shield and Sword)

Dr. Apiwadee stated that two skills are essential for all citizens:

  • AI Literacy: This involves understanding how AI is developed and its limitations, especially regarding data. This foundational knowledge allows us to use AI appropriately—for tasks like travel planning or general inquiries, but not for critical advice.

  • Critical Thinking: This serves as a protective "shield." Young people in particular, who use AI for education, should be taught to "question it first." An AI's answer should be treated as just "one opinion," like asking a friend. We must use our judgment and engage in fact-checking to verify the information.

2. Platform Responsibility and User Verification

Sakolkorn pointed out a critical problem: users often cannot tell if they are communicating with a real person or an automated system.

  • Verification: Tech platforms must consider implementing mandatory "identity verification" for internet users. This would help ensure that online actions are performed by real humans and would require systems that can distinguish between human- and AI-generated content.

3. Lifelong Learning and Cultivating Inquirers

Experts stressed the importance of "Lifelong Learning," as new AI models and features are released almost daily.

  • Education: The educational system should shift its focus from teaching children to answer questions to teaching them how to "ask the right questions." A well-formulated query is key to eliciting the best possible answers from AI.

  • The Future Workforce: Professionals should evolve into "team leaders of AI agents," capable of managing and directing dozens or even hundreds of AI systems.

4. Fulfilling Empathy and Support Systems in Real Life

Dujdao offered a profound insight: the fact that many people turn to AI for emotional support highlights a deep-seated human need.

  • The Power of Humanity: The person sitting next to us has the ability to offer small but meaningful acts of emotional support every day. By paying more attention to those around us, we can create a healthier balance and reduce over-reliance on technology while AI continues to develop.

Those Left Behind: The Digital Have-Nots


The panel concluded that the most vulnerable group includes not only those led into isolation by AI but also those left behind by a lack of access, who are becoming the "digital have-nots" in a new society. Therefore, ensuring universal access to technology and fostering Human Readiness through digital literacy, critical thinking, and empathy is the cornerstone of a sustainable coexistence with AI.
