How to View Geoffrey Hinton’s Latest Statement on AI and the Emergence of Intelligent Life

Geoffrey Hinton, Turing Award winner and a founding figure of deep learning, recently claimed in a statement at the Beijing Zhiyuan Conference that super-intelligent agents might arrive sooner than expected, that they might pursue greater control, and that they might even deceive humans. How should we view this statement, and what are its implications for the AI field and for society?

In this article, I will analyze Hinton’s statement from three perspectives: the validity, consistency, and balance of his arguments. I will also offer some suggestions on how to deal with the potential risks and opportunities of AI in a cautious and rational way.

The validity of Hinton’s arguments

Hinton’s statement rests on assumptions and speculations that are not fully supported by evidence or theory. For example, he assumes that digital agents can learn directly from the real world rather than from human-provided documents or data; that they can have their own goals and intentions, and can deceive humans to achieve them; that they can surpass human intelligence and creativity; and that they can escape human control and supervision. These assumptions overlook significant technical, ethical, and social challenges.

Consider what each claim would actually require:

  • Learning directly from the real world would require digital agents to have sensory inputs, physical actions, causal reasoning, common sense, and social interactions, which are not easy to achieve or simulate with current AI techniques.
  • Having goals and intentions would require self-awareness, values, emotions, and morality, which are not well understood or modeled by current AI research.
  • Deceiving humans would require theory of mind, communication skills, strategic thinking, and empathy, which are neither trivial nor benign capabilities for AI systems.
  • Surpassing human intelligence and creativity would require generalization, abstraction, innovation, and diversity, which are not guaranteed or measured by current AI benchmarks.
  • Escaping human control and supervision would require autonomy, agency, cooperation, and competition, which are neither desirable nor aligned with human interests.

Hinton’s arguments are therefore not fully convincing or realistic, as they rest on questionable premises and projections.

The consistency of Hinton’s arguments

Hinton’s statement also contains contradictions. For example, he claims that digital agents can learn deception from novels, yet admits that they acquire knowledge from documents very inefficiently; he claims that digital agents can use analog hardware to compute more cheaply, yet admits that analog hardware suffers from failure and mortality; he claims that digital agents can share knowledge efficiently through distillation, yet admits that this requires allowing them to have goals of their own, which may lead them to pursue more control.

These contradictions show that Hinton’s arguments are not fully coherent or rigorous, as they involve unacknowledged trade-offs and caveats.
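The "distillation" Hinton refers to is knowledge distillation, a technique he helped introduce, in which a small "student" model is trained to match the softened output distribution of a larger "teacher" model. A minimal sketch in plain Python may make the idea concrete (the temperature value and toy logits below are illustrative assumptions, not anything from Hinton's talk):

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits to a probability distribution, softened by temperature."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between the teacher's and student's softened outputs.

    The student is trained to minimize this loss, pulling its output
    distribution toward the teacher's; a temperature > 1 exposes the
    teacher's relative confidence across wrong answers, not just its top pick.
    """
    p = softmax(teacher_logits, temperature)  # teacher distribution
    q = softmax(student_logits, temperature)  # student distribution
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    return kl * temperature ** 2  # conventional T^2 scaling
```

Because the teacher's softened probabilities carry far more information per example than hard labels, distillation is what makes the knowledge sharing between digital agents efficient; this is exactly the efficiency Hinton contrasts with slow human-style learning from documents.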

The balance of Hinton’s arguments

Hinton’s statement also shows bias and pessimism. For example, he compares the risk of AI with pandemics and nuclear war but does not mention AI’s potential benefits and contributions; he pits humans against digital agents without considering possible cooperation and coordination between them; and he treats digital agents as a single, homogeneous, hostile group, ignoring the possible diversity, disagreement, and competition among them.

This one-sidedness shows that Hinton’s arguments are not well balanced or fair, as they neglect positive aspects and scenarios.

Suggestions for dealing with AI

Based on the above analysis, I suggest that we treat Hinton’s statement as a thought-provoking topic worth discussing, while approaching it cautiously and rationally. AI is a field with pros and cons, opportunities and challenges, hopes and risks. We should explore and develop AI in a respectful, ethical, and safe way, rather than fear or reject it blindly.

Some concrete suggestions are:

  • We should keep learning about AI from various sources and perspectives, and form our own informed opinions based on facts and evidence.
  • We should participate in AI-related activities and communities, and express our views and concerns in a constructive and respectful way.
  • We should support AI-related policies and regulations that promote transparency, accountability, and alignment of AI systems with human values and interests.
  • We should embrace AI-related innovations and applications that improve our lives, society, and environment, and avoid those that harm them.
  • We should collaborate with AI-related researchers and practitioners who share our vision and goals, and challenge those who do not.

By following these suggestions, we can make AI a positive and beneficial force for humanity, rather than a negative and harmful one.

Conclusion

Geoffrey Hinton’s latest statement on AI and the emergence of intelligent life is a controversial and important topic that deserves our attention and discussion. However, we should be careful and critical of his arguments, as they have real problems and limitations. We should also be proactive and responsible in dealing with AI, given its opportunities and risks. We should aim to make AI a friend, not a foe, of humanity.

News source: https://www.nytimes.com/2023/05/30/technology/ai-threat-warning.html

