Artificial Intelligence — The Promises and Perils
Artificial Intelligence (AI) is one of the most transformative technologies of our time. It has the potential to revolutionize the way we live and work, making our lives easier and more efficient. However, as with any technology, AI poses significant risks and challenges that must be addressed.
The Concerns of Tech Leaders and The State of AI Development
Sundar Pichai, the CEO of Google, has expressed concern about AI's negative potential, warning that it could be “very harmful” if deployed wrongly. Elon Musk, the chief executive of Tesla and SpaceX, said he had fallen out with Google co-founder Larry Page because Page was “not taking AI safety seriously enough.”
Despite these concerns, the tech industry continues to push the boundaries of AI. Google, for example, has launched a chatbot called Bard to rival ChatGPT, and its parent company, Alphabet, owns DeepMind, a UK-based AI company.
The Call for Caution: Understanding the Dangers of AI
Thousands of signatories to an open letter published by the Future of Life Institute have called for a six-month moratorium on creating “giant” AIs more powerful than GPT-4. AI practitioners and the tech industry have been criticized for their approach to product development: releasing these systems into the public realm before their effects are understood, and making adjustments only after the fact.
An immediate concern is that AI systems producing plausible text, images, and voice could be used to create harmful disinformation or to commit fraud. Beyond that, the raw power of cutting-edge AI may make it one of a few “dual-use” technologies, like nuclear power or biochemistry, whose destructive potential is great enough that even their peaceful use needs to be controlled and monitored.
Aligning AI with Human Values in the Quest for Superintelligence
At the peak of AI concerns sits superintelligence, the “Godlike AI” referred to by Musk. Even short of that is “artificial general intelligence” (AGI), a system that could learn and evolve autonomously. Estimates of when AGI might arrive range from imminent to decades away, and researchers still struggle to understand how today's AI systems achieve their results.
AI companies such as OpenAI have put substantial effort into ensuring that the interests and actions of their systems are “aligned” with human values. However, users can often bypass or “jailbreak” these safeguards, exposing their limitations.
While AI holds enormous promise, we must approach its development cautiously and carefully consider its potential risks and challenges. Our collective actions will shape the future of AI, and it is up to us to ensure that it is used for the betterment of humanity.