Can Artificial Intelligence Be Dangerous?




OpenAI CEO Sam Altman's Statement to the US Senate


In May 2023, OpenAI CEO Sam Altman testified before a US Senate subcommittee, emphasizing the need to regulate artificial intelligence (AI). His statement was part of a broader discussion of the potential risks associated with AI. Altman suggested that AI's potential to transform the world could be likened to the invention of the printing press, but he acknowledged the necessity of addressing its possible dangers.


The Need for Regulation

Altman agreed that mitigating these potential risks is crucial. One American senator went further, suggesting that while AI holds revolutionary potential, it could prove as dangerous as the atomic bomb.


The Impact of ChatGPT

The introduction of ChatGPT has caused a significant stir in the tech world. Launched on November 30, 2022, the platform has affected many areas of life, including technology, health, education, commerce, business, and banking, and amassed 100 million users within months of its release.


ChatGPT's Capabilities

ChatGPT is a powerful AI tool, a large language model capable of human-like conversation and answering questions. Its latest version can interpret images and even design a website from a simple paper sketch. The model was trained on hundreds of billions of words, a scale that reflects the enormous effort behind its creation. It is also refined over time using feedback from human interaction, and it can handle both simple daily tasks and complex ones such as audits.


Adoption and Popularity

Given its popularity, many companies have integrated ChatGPT into their products. Microsoft built it into its Bing search engine, and Snapchat's inclusion of a ChatGPT-powered chatbot in its app further boosted its fame. Initially used mainly by tech-savvy users, it soon reached the general public thanks to its ability to understand local languages and offer humorous responses, even in Roman Urdu.


Public Reaction in Pakistan

In Pakistan, ChatGPT was widely discussed on social media, with users sharing screenshots of their interactions. Many found it unbelievable and astonishing. One friend initially thought humans were typing the replies until I explained that it is automated software.


Ensuring Safety Through Regulation

While inventions are the mark of advanced societies, ensuring public safety through oversight is vital. This becomes all the more necessary when there is a risk of AI turning the human-machine relationship into one of master and slave.


Mobile Phone Dependency as a Parallel

This scenario becomes clearer when we consider the relationship between humans and mobile phones. We spend significant time on our phones from morning to night, relying heavily on them for both minor and major needs. The absence of rules governing phone use already affects many aspects of life. Similarly, without timely laws for AI, it could soon come to dominate us.


Benefits and Risks of AI

Undoubtedly, AI has many benefits. However, if left unregulated, it poses political, social, economic, and military risks. In politics, AI chatbots can help gauge public opinion on different candidates, assist political leaders in running informed campaigns, manage websites and social media accounts, engage with people, answer their questions, and prepare relevant content. Generative AI can also predict public support and anticipate election results.


Potential Misuse in Politics

Despite these positive uses, unregulated AI can be exploited for character assassination, manipulating public opinion, and distorting facts to support or oppose specific individuals.


ChatGPT's Perspective on Political Impact

When asked about AI's impact on political communication and democracy, ChatGPT itself acknowledged AI's potential influence on strategic and political communication, adding that it is crucial for policymakers to consider ethics and address these risks. Its responses, after all, are generated from patterns in the data on which it was trained.


Economic Implications

In the economic field, AI can enhance productivity and efficiency. One report suggests that generative AI could add about 7 percent to global GDP over the next decade. At the same time, generative AI models like Google Bard and ChatGPT command a breadth of knowledge that no individual can match, which poses another risk. Can AI take over human jobs? Although AI assists humans in many areas, the threat of it becoming a human substitute persists.


Employment Displacement Concerns

Just as Uber and Careem once displaced regular drivers, automated cars may render Uber and Careem drivers jobless. Even if AI doesn't entirely replace humans, it could create new technical and non-technical jobs while rendering some obsolete. Those unwilling to adapt to these changes may suffer losses.


AI in Education

ChatGPT proves beneficial in various social sectors, most notably education. Students use it extensively to complete assignments or summarize essays. While it lightens students' workload, it increases the load on professors, since distinguishing AI-generated content from human-written text is difficult. According to the University of Cambridge, over-reliance on AI could undermine the education sector and threaten its integrity.


Ethical Use in Academia

While AI's use for generating ideas can be helpful, relying entirely on it is misguided. To ensure honesty and merit in education, regulating these tools is necessary.


Military Applications

Although AI's use in civilian matters is extensive, it can also be highly beneficial in the military sector. According to a US Department of Defense official, AI can significantly aid the development of military software. Developers and coders were once reluctant to embrace AI, but it can now assist in decision-making, strategizing, defensive planning, and analyzing data from various sensors. It can also be applied in emerging military technologies such as drones, missiles, jets, and tanks.


Autonomous Weapons and Accountability

The use of generative AI in lethal autonomous weapon systems is particularly dangerous because such weapons, once activated, can select and engage targets on their own. This raises serious concerns about accountability, responsibility, and transparency, and it makes stringent regulation all the more necessary.


Global Perspectives on AI Regulation

Although Italy initially banned ChatGPT, it has since lifted the ban. Still, many countries hesitate to introduce regulations for generative AI, fearing that doing so might set them back in the technology race. In an era of rapid development, talk of banning technology seems irrelevant and illogical, especially for technologically advanced states.


Emphasis on Regulations

Instead of bans, the focus should be on regulations and laws, as emphasized by OpenAI's CEO and other Silicon Valley figures. According to Stanford University's 2023 AI Index, approximately 37 AI-related bills have been passed into law globally, but these regulations address only limited aspects of AI at the national level. Amid intensifying global competition in AI, calls for regulation remain faint, which could lead to irreparable damage in the future.


Implementing proper laws and regulations is the only way to ensure AI's positive and beneficial use for humanity, preventing it from becoming as destructive as an atomic bomb.