
Grassroots Global Advocacy Group Red Flags the Dangers Posed to Humans by 'AI Hallucinations'

Sana Bagersh, Founder of The Global BrainTrust

The grassroots Global BrainTrust calls for an integrated worldwide effort to address the incalculable risks that AI hallucinations pose to underrepresented populations.

“We face a momentous opportunity to steer AI towards a future of empowerment for the whole world, leaving no one behind. Working together, we can realize AI’s benefits, and mitigate its risks, for all.”
— Sana Bagersh

SEATTLE, WASHINGTON, USA, July 12, 2024 /EINPresswire.com/ --
Introduction by Professor Ahmed Banafa
Senior Technology Advisor, The Global BrainTrust

“AI hallucinations” are outputs generated by AI models, usually caused by incorrect or insufficient training data, that contain errors, misinformation, biases, and unfounded assumptions. The Global BrainTrust flags the immediate need to address AI hallucination risks, which can pose incalculable dangers to humans. Risks emerge when policymakers base important decisions on erroneous information, which can lead directly to adverse impacts on humans.

“Data is king. AI results depend on the quantity, quality and accuracy of the datasets available. A ‘hallucination’ can occur in medicine, for example, where AI classifies healthy human tissue as cancerous because it has too few data points of healthy tissue to reference.

“Hallucinations take different forms, such as an incorrect prediction of an unfolding event. A hallucination can be a false positive, identifying a threat that doesn’t exist and potentially triggering an extreme response. Or it can be a false negative, where a real risk goes undetected for lack of data, creating a false sense of security.”
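To make Banafa’s distinction concrete, here is a minimal, purely hypothetical sketch in Python. The density-score feature, the threshold rule and the sample values are invented for illustration and do not come from any real diagnostic system; the point is only how a ceiling learned from too few healthy reference samples produces both kinds of error he describes.

```python
# Hypothetical example: a naive classifier flags tissue as "cancerous"
# whenever a measured density score exceeds a ceiling learned from the
# healthy samples it has seen.

# Only three healthy reference samples -- too few to capture the real
# range of healthy tissue (assume healthy scores can in fact reach ~0.60).
sparse_healthy_scores = [0.30, 0.35, 0.41]
threshold = max(sparse_healthy_scores) + 0.05  # learned "healthy ceiling" = 0.46

def classify(score: float) -> str:
    """Label tissue by comparing its density score to the learned ceiling."""
    return "cancerous" if score > threshold else "healthy"

# A healthy patient whose score lies outside the thin training range:
print(classify(0.55))  # "cancerous" -- false positive: a threat that isn't there

# A cancerous sample that happens to score low:
print(classify(0.40))  # "healthy" -- false negative: a real risk missed
```

With a richer reference set of healthy tissue, the learned ceiling would rise and the false positive would disappear, which is the “quantity, quality and accuracy” remedy Banafa points to.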

SANA BAGERSH - Founder, The Global BrainTrust
“People will be impacted, both positively and negatively. Our concern is that we will become so dependent on AI that we are too complacent, and too lazy, to question results that are outright misinformation, or hallucinations generated inadvertently or maliciously. AI spews back whatever data it is fed, which is why we must ensure the integrity of data that is representative of all parts of the world.

“We know no guardrails exist yet, but we are red-flagging this for all those in the dark, especially developing nations with no access to the conversation, let alone to prevention, protection and recourse.”

ABDULLAH ABONAMAH - Higher Education/ Learning Advisor, The Global BrainTrust
Professor of Computing, Machine Learning, and Analytics, Abonamah offers a simple example: “I asked ChatGPT to count my research papers, and it reported there were sixty, which was inaccurate. I then asked it to list my last 10 papers, and it got only two correct; the other eight were made up. When I challenged it, it apologized and recommended I check Google Scholar. The moral here is that everyone using ChatGPT must verify all responses.”
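Abonamah’s moral lends itself to a simple sketch. The code below is a hypothetical illustration, not a real Google Scholar integration: the trusted-title set and the generated titles are invented placeholders. It shows the minimum habit he recommends, which is to cross-check every generated claim against a source you control before acting on it.

```python
# Hypothetical sketch of the "verify all responses" habit: cross-check a
# generated bibliography against a trusted record before trusting it.

# Stand-in for a verified source, e.g. an export of one's own Google
# Scholar profile (the titles here are invented placeholders).
VERIFIED_TITLES = {
    "a study of cloud computing adoption",
    "machine learning for network analytics",
}

def flag_unverified(generated_titles):
    """Return the generated titles that do NOT appear in the trusted record."""
    return [t for t in generated_titles
            if t.strip().lower() not in VERIFIED_TITLES]

chatgpt_list = [
    "A Study of Cloud Computing Adoption",   # matches the record
    "Quantum Blockchain Pedagogy at Scale",  # fabricated-looking entry
]
print(flag_unverified(chatgpt_list))
# -> ['Quantum Blockchain Pedagogy at Scale']
```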

RAMSI HASHASH - Productivity Advisor, The Global BrainTrust
Productivity specialist Hashash says AI hallucinations are a threat because most people are unaware that AI merely predicts strings of words matching a given query, without applying logic or checking for inconsistencies. A user assumes the text is faultless and shares it, and action is then taken based on the erroneous information.

“AI will increasingly dominate our future, from hospitals, to the aeronautics industry, to schools, to the military, where one day robots will lead military operations.”
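Hashash’s description of AI as a word-string predictor can be illustrated with a toy model. The sketch below is a deliberate simplification (real systems use large neural networks, not bigram counts), and the tiny corpus is invented; but it shows the mechanism he warns about: the model continues text with whatever is statistically likely, with no way to check which continuation is actually true.

```python
# Toy bigram "language model": continues text with the most frequent next
# word from its (invented) training corpus, with no notion of truth.
from collections import Counter, defaultdict

corpus = ("the drug was approved in 2010 . "
          "the drug was withdrawn in 2012 .").split()

follows = defaultdict(Counter)  # word -> counts of the words that follow it
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def continue_text(word, steps=4):
    """Greedily append the statistically most likely next word."""
    out = [word]
    for _ in range(steps):
        candidates = follows[out[-1]].most_common(1)
        if not candidates:
            break
        out.append(candidates[0][0])
    return " ".join(out)

# Fluent, high-probability output -- but the corpus contains two conflicting
# facts, and the model simply picks one without any logic or verification.
print(continue_text("drug"))  # e.g. "drug was approved in 2010"
```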

GABRIELLA KOHLBERG - Government Development Economics Advisor, The Global BrainTrust
Kohlberg, who has developed AR/VR technologies for mental illness symptomatology, says AI hallucinations pose an existential threat because of the immersive realities generated by artificial intelligence.

“AI does not discern who may be exposed to, and succumb to, hallucinations, given the immersive nature and variations of the technology. AI answers can be perceived as absolute truths through confirmation bias, with repeated exposure progressively distancing a person further from reality. Hallucinations derived from AI-informed immersive pseudo-realities can further compound social isolation and group-norm issues, leading to a continuous cycle of mental health degradation. Limited exposure, access to further research, discernment, and fact-checking skills will be required more than ever.”

“Vulnerable communities in developing countries might react more intensely to AI exposure due to a lack of awareness. But the immersive nature of these technologies can easily influence even a tech-proficient individual to the point of hallucination, delusion and psychosis.”

Dr. WASSEEM ABAZA - Entrepreneurship Advisor, The Global BrainTrust
Abaza, a university professor, points to high risks stemming from the incorrect belief that AI validates its outputs. He identifies education, government, security, finance and health as the sectors most susceptible to risk. “AI in its current form is based on a fundamentally flawed assumption about its ‘intelligence,’ as it does not apply critical analysis. It is an accumulation of raw data found on the internet, which can be incorrect because anyone can post anything online, leading inevitably to hallucinations. AI is not ‘bad’ intrinsically, but it requires developers, advocates, advisory groups, and the media to remind users to always apply their own critical thinking.”

BRIE ALEXANDER - African Diaspora Cultural Advisor, The Global BrainTrust
Alexander, who focuses on the broader impact of technology, is concerned about AI hallucination risks to marginalized communities, saying the curation of information influences user perspectives and knowledge.

“AI-driven summarization of search results highlights a significant issue in the modern digital landscape, and the true danger is even more alarming. In the Global South, particularly in Africa, where misinformation has been a strategic tool for decades, risks are magnified. AI-driven curation can exacerbate misinformation, hinder access to diverse perspectives, perpetuate existing biases, and further undermine trust and informed decision-making in these regions.

“Apart from hallucinations, algorithms can inadvertently or deliberately reflect the biases of their creators, often from developed nations, producing information that does not accurately reflect local contexts or perspectives. Another risk is content filtering, where the companies controlling the AI align content with specific agendas, often foreign interests, limiting exposure to diverse and locally relevant viewpoints.”

ZESHAN ZAFAR - Interfaith Affairs Advisor, The Global BrainTrust
An expert in global faith-based affairs, Zafar says people must be made aware of the risks of misleading content, such as deepfakes presented as real videos. “There needs to be a mobilization to criminalize deepfakes, as in child pornography cases, and to establish criminal penalties.”

“We need urgent action from governments and policy advocates to impose proactive measures, new laws, and policies with broad outreach to warn people about the dangers of disinformation. Faith leaders can help raise awareness about the risks to their communities.”

Sana Bagersh
BrandMoxie
+1 206-488-8018
business@brandmoxie.com
Visit us on social media:
LinkedIn
