Is A.I. Dangerous to Humans?
As A.I. becomes more powerful and prevalent, the voices warning about the potential risks of A.I. become increasingly audible.
"Artificial intelligence might lead to the extinction of the human race," warned Stephen Hawking.
This was not an isolated remark from the renowned theoretical physicist.
"[A.I.] scares the hell out of me," Elon Musk, founder of Tesla and SpaceX, famously declared at the SXSW tech festival. "It's capable of far more than almost anyone realises, and its rate of advancement is exponential."
Unease abounds on various fronts, including the rising automation of some occupations, gender and racially-biased algorithms, and autonomous weapons that function without human control (to mention a few). And we're still in the early phases of discovering what A.I. is capable of.
ARTIFICIAL INTELLIGENCE RISKS
1. Job losses as a result of AI automation
2. Social manipulation using AI algorithms
3. Social surveillance implemented by AI technology
4. Artificial intelligence-based bias
5. Increasing socioeconomic inequality
6. Loss of ethics and goodwill
7. AI-based autonomous weapons
8. Financial crises caused by AI algorithms
Questions about who is creating A.I., and for what purposes, make it all the more important to understand its potential drawbacks. In the following sections, we look at the possible threats of artificial intelligence and how they might be mitigated.
1. JOB LOSSES AS A RESULT OF AI AUTOMATION:
As AI-powered job automation gains traction in areas such as marketing, manufacturing, and healthcare, it is becoming a pressing concern. An estimated 85 million jobs are expected to be lost to automation between 2020 and 2025, with Black and Latino workers particularly susceptible.
"The reason we have a low unemployment rate, which doesn't actually capture people who aren't looking for work, is largely because this economy has created a lot of lower-wage service sector jobs," futurist Martin Ford told Built In. "I don't think that's going to continue."
As A.I. robots grow smarter and more capable, the same tasks will require fewer people. And while it is true that A.I. is projected to create 97 million new jobs by 2025, many of those jobs will demand more technical skill, and workers may be left behind if corporations fail to upskill their workforces.
"If you're flipping burgers at McDonald's and more automation comes in, is one of these new jobs going to be a good match for you?" Ford asked. "Or is it possible that the new job will require extensive education or training, or even intrinsic talents, such as exceptionally strong interpersonal skills or creativity, that you do not possess? Because those are the things that computers, at least so far, are not particularly good at." Even occupations requiring doctorate degrees and other post-college training are vulnerable to A.I. displacement.
2. SOCIAL MANIPULATION USING ARTIFICIAL INTELLIGENCE ALGORITHMS:
One of the main hazards of artificial intelligence, according to a 2018 report on its potential misuse, is social manipulation. That concern has become a reality as politicians rely on platforms to promote their views: most recently, Ferdinand Marcos, Jr. used a TikTok troll army to capture the votes of younger Filipinos in the 2022 election.
TikTok is powered by an artificial intelligence algorithm that saturates a user's feed with content related to media they have previously viewed on the platform. Critics of the app point to this process, and to the algorithm's failure to filter out harmful and inaccurate content, casting doubt on TikTok's ability to protect its users from dangerous and misleading media.
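To make the feedback loop concrete, here is a toy sketch of an engagement-driven recommender of the kind described above. This is not TikTok's actual algorithm; the topics, function name, and `explore` parameter are all hypothetical, and the demo disables exploration so the run is deterministic.

```python
import random
from collections import Counter

def recommend(history, catalog, k=5, explore=0.1):
    """Recommend k items, mostly repeating the user's top topic.

    A small 'explore' fraction is random; otherwise the algorithm
    reinforces whatever the user engaged with most -- the feedback
    loop the app's critics describe.
    """
    if not history or random.random() < explore:
        pool = catalog                                  # occasionally surface fresh content
    else:
        pool = [Counter(history).most_common(1)[0][0]]  # double down on the top topic
    return random.choices(pool, k=k)

catalog = ["politics", "sports", "music", "cooking", "conspiracy"]
history = ["conspiracy"]  # a single engagement with a fringe topic...
for _ in range(20):
    # exploration disabled here so the demo is deterministic
    history.extend(recommend(history, catalog, explore=0.0))

print(Counter(history))  # the feed is now saturated with that one topic
```

After twenty rounds, every recommendation repeats the initial topic: one early engagement is enough to lock the simulated feed into a single lane, which is exactly the filtering failure critics worry about.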
"No one knows what's real and what's not," Ford explained. "So it actually leads to a scenario in which you literally cannot believe your own eyes and ears; you can't depend on what we've traditionally considered to be the finest possible evidence... That is going to be a tremendous problem."
3. SOCIAL SURVEILLANCE IMPLEMENTED BY AI TECHNOLOGY:
In addition to this more existential threat, Ford is concerned about how A.I. will affect privacy and security. China's use of facial recognition technology in offices, schools, and other venues is a prominent example: beyond tracking a person's whereabouts, the Chinese government can collect enough data to monitor a person's activities, relationships, and political views.
Another example is police forces in the United States using predictive policing algorithms to predict where crimes will occur. The issue is that these algorithms are driven by arrest rates, which disproportionately affect African-American areas. Police agencies then increase their presence in these neighborhoods, raising concerns about over-policing and whether self-proclaimed democracies can avoid using A.I. as an authoritarian tool.
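The feedback loop in arrest-driven predictive policing can be illustrated with a toy model. All numbers here are hypothetical: two neighborhoods are given identical underlying crime rates, but one begins with more recorded arrests, and patrols are concentrated wherever past arrests are highest.

```python
# Toy model (hypothetical numbers) of the feedback loop in arrest-driven
# predictive policing: two neighborhoods with IDENTICAL true crime rates,
# but one starts out with more recorded arrests.
true_crime_rate = [0.05, 0.05]   # the underlying rates are the same
arrests = [60.0, 40.0]           # historical bias already in the data

for year in range(10):
    # the predicted "hotspot" (most past arrests) gets 80% of patrols...
    hot = arrests.index(max(arrests))
    patrols = [80 if area == hot else 20 for area in range(2)]
    # ...and more patrols produce more recorded arrests, not more crime
    arrests = [a + p * r for a, p, r in zip(arrests, patrols, true_crime_rate)]

print(arrests)  # [100.0, 50.0] -- the recorded gap has widened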
"Authoritarian regimes use or will use it," Ford stated. "The question is, how much does it invade Western countries, democracies, and what constraints do we put on it?"
4. ARTIFICIAL INTELLIGENCE-BASED BIAS:
Artificial intelligence bias in its many forms is also harmful. According to Princeton computer science researcher Olga Russakovsky, A.I. bias extends well beyond gender and ethnicity. Beyond data bias and algorithmic bias (the latter of which can "amplify" the former), A.I. is built by people, and people are inherently biased.
"A.I. researchers are primarily men, from specific racial demographics, who were raised in affluent neighbourhoods, and primarily people without disabilities," Russakovsky explained. "Because we're a fairly homogeneous population, it's difficult to think broadly about global issues."
5. THE INCREASE IN SOCIOECONOMIC INEQUALITY AS A RESULT OF AI:
Companies that fail to acknowledge the biases baked into A.I. algorithms may jeopardize their diversity, equity, and inclusion (DEI) initiatives through AI-powered hiring. The idea that A.I. can measure a candidate's traits through facial and voice analysis remains tainted by racial bias, reproducing the same discriminatory hiring practices that businesses claim to be eliminating.
Another source of worry is the widening socioeconomic gap caused by AI-driven job loss, which exposes the class biases in how A.I. is deployed. Automation has led to wage declines of up to 70% for blue-collar workers who perform more manual, repetitive tasks. Meanwhile, white-collar workers have been largely untouched, and some have even seen their wages rise.
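How an AI hiring tool can inherit discrimination from its training data can be shown in a few lines. This is a deliberately naive sketch, not any real vendor's system; the groups, hire rates, and "model" are all invented for illustration.

```python
from collections import defaultdict

# Hypothetical historical data: group A candidates were hired 70% of the
# time, group B candidates only 30%, for reasons unrelated to merit.
history = [("A", 1)] * 70 + [("A", 0)] * 30 + [("B", 1)] * 30 + [("B", 0)] * 70

# A naive screening "model" that simply learns each group's past hire rate
outcomes = defaultdict(list)
for group, hired in history:
    outcomes[group].append(hired)
model = {group: sum(h) / len(h) for group, h in outcomes.items()}

# Two identical candidates who differ only by group get different scores:
# the model has encoded the historical bias, not candidate quality.
print(model)  # {'A': 0.7, 'B': 0.3}
```

Real screening models are far more complex, but the failure mode is the same: when past decisions were discriminatory, a model optimized to reproduce them will be too.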
6. LOSS OF ETHICS AND GOODWILL AS A RESULT OF AI:
Religious leaders, engineers, journalists, and political officials alike have raised concerns about A.I.'s possible socioeconomic ramifications. At a 2019 Vatican meeting titled "The Common Good in the Digital Age," Pope Francis warned against A.I.'s capacity to "circulate tendentious opinions and false data," emphasizing the far-reaching consequences of letting this technology develop without proper oversight or restraint.
"If humankind's alleged technological advancement were to turn against the interests of all," he said, "this would unfortunately lead to an unfortunate regression to a form of barbarism dictated by the law of the strongest."
The rapid rise of the conversational A.I. tool ChatGPT gives these concerns more substance. Many users have applied the technology to get out of writing assignments, threatening academic integrity and originality. And in the effort to make the tool less harmful, OpenAI reportedly relied on underpaid Kenyan laborers to do the labeling work.
7. ARTIFICIAL INTELLIGENCE-BASED AUTONOMOUS WEAPONS:
As is frequently the case, technological advances have been harnessed for military purposes. When it comes to A.I., some are eager to act before it is too late: over 30,000 people, including A.I. and robotics researchers, signed an open letter in 2016 opposing investment in AI-powered autonomous weapons.
"The key question for humanity today is whether to start a global A.I. arms race or to prevent it from starting," they wrote. "If any major military power pushes ahead with A.I. weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow."
This prediction has come true in the form of Lethal Autonomous Weapon Systems, which locate and destroy targets on their own while operating under few regulations. The spread of such potent and complex weaponry has led several of the world's most powerful nations to give in to anxieties and contribute to a technological cold war.
Many of these new weapons pose significant hazards to civilians on the ground, and the threat is amplified when autonomous weapons fall into the wrong hands. Hackers have mastered a wide range of cyber attacks, so it is not difficult to imagine a rogue actor infiltrating autonomous weapons and causing total devastation.
If political rivalries and warmongering impulses are not reined in, artificial intelligence may be used with the worst of motives.
8. FINANCIAL CRISES CAUSED BY AI ALGORITHMS:
The financial industry has grown more receptive to using A.I. in everyday finance and trading. As a result, algorithmic trading could be responsible for the next major financial crisis.
While A.I. algorithms are not clouded by human judgment or emotion, they also fail to account for context, the interconnectedness of markets, and factors like human trust and fear. These algorithms can execute thousands of trades at breakneck pace, buying and selling within seconds for small profits. The sudden liquidation of thousands of positions can frighten other traders into doing the same, producing abrupt crashes and extreme market volatility.
Incidents like the 2010 Flash Crash and the Knight Capital Flash Crash serve as reminders of what can happen when trade-happy algorithms go haywire, whether or not the rapid, large-volume trading is deliberate.
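The cascade dynamic behind such crashes can be sketched in a few lines. This is a toy model, not any real trading system: the stop-loss levels, the 2% price impact per sale, and the initial dip are all hypothetical numbers chosen to show how one modest move can trigger a chain of automated liquidations.

```python
# Toy model of an algorithmic selling cascade: each bot dumps its position
# when the price falls below its stop-loss level, and every sale pushes
# the price down further, tripping the next bot's threshold.
price = 100.0
stop_losses = [99.0, 97.5, 96.0, 94.0, 91.0]   # one bot per threshold (hypothetical)
impact = 0.02                                  # each sale knocks 2% off the price

price *= 1 - 0.015  # a single modest dip (-1.5%) starts the chain
holding = set(range(len(stop_losses)))
sold = True
while sold:
    sold = False
    for bot in sorted(holding):
        if price < stop_losses[bot]:
            holding.remove(bot)      # bot liquidates...
            price *= 1 - impact      # ...pushing the price down further
            sold = True

print(f"final price: {price:.2f}, bots still holding: {len(holding)}")
```

In this run, a 1.5% dip is enough to knock out every bot in sequence and drive the price down more than 10%, with no human judgment anywhere in the loop, which is the essence of a flash crash.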
FAQS:
1. IS ARTIFICIAL INTELLIGENCE A POTENTIAL HAZARD?
The tech world has long debated the dangers that artificial intelligence poses. Job automation, the spread of fake news, and a dangerous arms race of AI-powered weaponry have been identified as some of the most serious risks.
2. Is A.I. beneficial or detrimental to humans?
Experts emphasize that artificial intelligence technology is neither good nor evil in terms of morality, but its applications might have both beneficial and harmful consequences.
3. Can A.I. robots harm humans?
The oft-quoted rule that "a robot may not harm a human being" comes from Isaac Asimov's fictional Laws of Robotics, not from any real engineering constraint. Actual robots and A.I. systems have no such built-in safeguard: they can injure people through malfunction, flawed design, or deliberate misuse, which is why strict safety standards govern machines that operate alongside humans.
4. Is A.I. going to be dangerous in the future?
Artificial intelligence may erode privacy further in the future. Even today, your movements can be tracked throughout the day: the latest facial recognition technology can pick you out of a crowd, and many security cameras are equipped with it.
