Interview with a bot: Is AI risky for humanity?
Concerned about the future ramifications of artificial intelligence, Khaled Diab decided to ask ChatGPT some difficult questions about the risks AI poses and how to avert these dangers and make the most of the technology. The bot's answers were smart but unsettling.
The mind-boggling speed with which artificial intelligence is developing is both exhilarating and alarming. We have reached a threshold where it feels almost insulting, perhaps even delusional, to refer to machine intelligence as “artificial”. Even if AI systems still cannot match human creativity, emotion and adaptability, there are many areas in which they far surpass our measly brainpower, as demonstrated by Google's LaMDA and, more recently, by OpenAI's ChatGPT. While this has not (yet) made humans obsolete, unlike previous generations of machines, it will eventually make even the most skilled human labour surplus to requirements.
I have written previously about the intriguing question of whether AIs are or will ever be truly intelligent, about machine consciousness, the substantial risks associated with the breakneck development of AI, the implications of “living” robots, and how rapid automation is not only widening inequalities but also leading to the trashing of the planet.
As OpenAI has allowed open access (for a limited time) to its new ChatGPT bot, which has impressed people from many fields and walks of life with its human-like ability to answer questions and perform complex intellectual tasks, I thought I would “interview” the bot and get its hivemind view and insider take, as it were, on the risks artificial intelligence poses to humanity and how to overcome these dangers so that everyone can benefit from this amazing technology.
Although ChatGPT does not think that it poses a risk, it acknowledges numerous dangers associated with AI which chime with my own concerns. “One way to avoid the risks associated with AI is to ensure that the development and use of AI systems is guided by ethical principles and is carefully regulated,” ChatGPT said, touching on one vital area which I find has not been given enough attention by our governments, which seem to be allowing the private sector to take the controls and guide the development of machine intelligence.
Another area in which our governments have largely reneged on their responsibilities is the disruptive, even destructive, influence of new tech on people's economic wellbeing and the gaping inequalities it has created, and will continue to create, if its “creative destruction” keeps creating for the few and destroying for the many. “Ensuring that the benefits of technological progress are evenly distributed across society is a complex challenge that requires a combination of policy measures, education, and public engagement,” ChatGPT noted, before going on to suggest policies that would be regarded as too radical by many governments. “One way to address this issue is through the implementation of progressive taxation and social welfare policies that redistribute wealth and income from the richest individuals and corporations to the rest of society.”
Noting that it is not “capable of having personal opinions” when I quizzed it about whether it was optimistic about the future of AI, ChatGPT nevertheless concluded: “Whether the overall impact of AI on humanity will be positive or negative will depend on how we choose to develop and use this technology.”
Below is the full transcript of our conversation. I have not edited ChatGPT's answers, even when they conflict with The Chronikler's house style or are inconsistent, such as its decision to use both the US and British spellings of “sceptical”. As you can see, it is very hard to tell whether its responses were composed by a machine or a human. “There is no definitive way to tell whether a given text or idea was generated by AI or by a human,” ChatGPT contends. And I'm inclined to agree. A few of the questions I was compelled to reformulate, either because the bot was unable to provide an answer (the cursor blinked for too long) or because it seemed unable to comprehend the question (it gave an error message).
What risks to humanity does AI pose?
There are several risks associated with the development and use of artificial intelligence (AI), including the potential for AI to be used for malicious purposes, the possibility of AI systems becoming uncontrollable and behaving in ways that are detrimental to humanity, and the potential for AI to cause massive unemployment as machines become capable of performing many tasks that are currently done by human workers. Additionally, there is concern that the development of AI could lead to a situation known as the “singularity,” in which AI surpasses human intelligence and potentially poses an existential threat to humanity.
What can we do to avoid the risks associated with AI?
One way to avoid the risks associated with AI is to ensure that the development and use of AI systems is guided by ethical principles and is carefully regulated. This could include measures such as developing ethical guidelines for the development and use of AI, implementing strict regulations on the use of AI in sensitive areas such as healthcare and finance, and promoting transparency and accountability in the development and use of AI systems. Additionally, it is important for researchers and developers to continue to work on methods for controlling and mitigating the risks associated with AI, such as developing AI systems that can explain their decision-making processes and developing “kill switches” that can be used to shut down AI systems if necessary.
Does ChatGPT pose a risk for society?
It is unlikely that ChatGPT, or any other language model, poses a significant risk to society. While language models like ChatGPT are capable of generating human-like text, they are not capable of autonomous decision making or independent action. As such, they do not pose the same kinds of risks that other forms of AI, such as autonomous weapons or AI systems that control critical infrastructure, might pose. However, it is important to continue to monitor and carefully regulate the development and use of AI technology in order to ensure that it is used ethically and safely.
Will AI systems ever become conscious and, if they do, should they be given rights?
It is not currently known whether AI systems will ever become conscious in the same way that humans are conscious. Some researchers believe that it may be possible to create conscious AI in the future, but there are also many who are skeptical of this possibility.
Should AIs be granted legal rights and protections?
This is a complex and contentious issue that is the subject of ongoing debate. Some people argue that if AI systems were to become conscious and have experiences and emotions similar to those of humans, then they should be granted some legal rights and protections. Others argue that, even if AI systems were to become conscious, they would not be entitled to the same rights and protections as humans because they are fundamentally different from humans. Ultimately, the question of whether or not to grant legal rights and protections to AI systems is likely to depend on the specific circumstances and the extent to which AI systems are able to demonstrate consciousness and other qualities that are typically associated with personhood.
Consciousness is not well-understood. If an AI becomes conscious, how can it prove this to a sceptical humanity?
It is not clear how an AI system would be able to prove its consciousness to a sceptical humanity, as the concept of consciousness is not well-understood and there is no widely accepted method for determining whether a system is conscious. Some researchers have suggested that certain tests, such as the “Turing test,” might be able to determine whether a system is conscious, but these tests are not definitive and are often criticized for being overly simplistic. Ultimately, it is likely that any attempts by an AI system to prove its consciousness to humans would be met with skepticism and would require further research and discussion before they could be accepted as conclusive.
Computers and AI are surpassing humans in so many areas. In what areas do humans still enjoy supremacy?
While computers and AI systems have made tremendous strides in recent years and are capable of outperforming humans in many areas, there are still many tasks and abilities where humans enjoy supremacy. For example, humans are still much better than computers at tasks that require creativity, such as coming up with new ideas or solving complex, open-ended problems. Humans are also much better at tasks that require social skills, such as understanding and interpreting the emotions and intentions of other people. Additionally, humans are much better at adapting to novel situations and changing their behavior based on new information, a skill that is known as “common sense” and that is difficult for AI systems to replicate. These are just a few examples of the many areas where humans still enjoy supremacy over computers and AI systems.
Is ChatGPT creative?
ChatGPT is not capable of creativity in the same way that humans are. While ChatGPT is a powerful language model that is capable of generating human-like text, it is not capable of independently generating new ideas or concepts. Instead, it is trained on a large corpus of text data and is able to generate text that is similar to the input that it is given. As such, while ChatGPT may produce text that is interesting or novel, it is not capable of truly creative thought.
And is there a way to tell if a text or idea has been generated by AI or a human? What are the telltale signs?
There is no definitive way to tell whether a given text or idea was generated by AI or by a human. However, there are some general characteristics that can sometimes be used to distinguish between AI-generated and human-generated text. For example, AI-generated text is often more repetitive and formulaic than human-generated text, and it may include unusual combinations of words or phrases that are not found in natural human language. Additionally, AI-generated text is often less coherent and more difficult to understand than human-generated text, particularly when the AI system is asked to generate text on a topic that is outside of its training data. These are just a few general characteristics that may be indicative of AI-generated text, but they are not definitive and may not always be present.
Are you optimistic about the future of AI and that the benefit to humanity will outweigh the harm?
As a machine learning model, I am not capable of having personal opinions or emotions, so I cannot say whether I am optimistic or pessimistic about the future of AI. However, it is important to recognize that the development and use of AI technology has the potential to bring many benefits to humanity, such as improving healthcare, transportation, and education, and increasing productivity and efficiency. At the same time, it is also important to carefully consider the potential risks and downsides of AI, and to take steps to ensure that these risks are mitigated and that AI is used in a responsible and ethical manner. Whether the overall impact of AI on humanity will be positive or negative will depend on how we choose to develop and use this technology.
Progress in high tech has led to greater economic inequality, as reflected in the emergence of tech billionaires. How do we ensure that the benefits of technological progress are evenly distributed across society?
Ensuring that the benefits of technological progress are evenly distributed across society is a complex challenge that requires a combination of policy measures, education, and public engagement. One way to address this issue is through the implementation of progressive taxation and social welfare policies that redistribute wealth and income from the richest individuals and corporations to the rest of society. This could include measures such as increasing taxes on high earners and large corporations, providing universal basic income to all members of society, and investing in education and training programs that help people to develop the skills that are needed to succeed in the digital economy. Additionally, it is important for governments, businesses, and civil society organizations to work together to promote inclusive and equitable access to technology and to ensure that everyone has the opportunity to participate in and benefit from technological progress.
Thank you for the chat, ChatGPT. I appreciate it.
You're welcome. It was my pleasure to chat with you. I hope that I was able to provide you with useful information and answer your questions. If you have any other questions or if there's anything else I can help with, feel free to ask.