twhlw's personal blog: http://blog.sciencenet.cn/u/twhlw

Blog post

Watch out for the risks in the development, safety and governance of AI

Viewed 736 times | 2023-11-28 14:30 | Personal category: 2023 | Section: Opinion commentary




Opinion 18:45, 27-Nov-2023


Watch out for the risks in the development, safety and governance of AI


Liu Wei



Editor's note: Liu Wei is director of the Human-Computer Interaction and Cognitive Engineering Laboratory at Beijing University of Posts and Telecommunications. The article reflects the author's opinions and not necessarily those of CGTN.

OpenAI recently made a personnel change that caught the world's attention, sparking interest in the risk issues of AI governance.

Science and technology are a double-edged sword, capable of both helping and harming humanity. Artificial intelligence, being an important part of science and technology, is no exception. It has both positive and negative aspects, and it's challenging to determine whether it is more like Pandora's Box or Aladdin's Magic Lamp.

However, artificial intelligence is unique because it is not just a technology or tool but an ecosystem in itself. At present, the harmful and negative aspects of artificial intelligence fall mainly into three scenarios: first, the human factor, which covers bad actors using AI for harmful ends and well-intentioned people misusing AI through error or oversight; second, the machine factor, which involves malfunctions caused by software bugs and hardware failures; and third, adverse environmental changes that cause AI to go out of control.

Beyond these, there are risks generated by combinations of these three factors. These hidden dangers affect not only the industrial landscape, lifestyles, and social fabric of countries around the world; they may also shift the balance of power between nations in the future. National security interests, business operations, and citizens' personal privacy increasingly depend on technologies such as artificial intelligence and the Internet, so the world is entering a new critical stage.

Currently, although artificial intelligence performs excellently in many tasks, such as playing chess better than humans, helping experts discover new protein structures, and generating text to answer questions, it still lacks the ability to perceive various physical environments and the ability to interact effectively in real life. It does not yet qualify as a true human-machine-environment intelligent system.

Dozens of large models of artificial intelligence gathered in the Frontier Trend Hall of the second Global Digital Trade Expo in Hangzhou, east China's Zhejiang Province, November 24, 2023. /CFP

A genuine human-machine-environment system requires diversified cross-functional capabilities, including the ability to perceive, understand, predict, respond to and adjust to natural, social, real, and virtual environments.

In human intelligence, there exist coordinate systems different from the traditional space-time coordinate system, which can be used to describe different aspects and features of human intelligence.

For instance, emotional coordinates can describe the state of human emotions and feelings; social coordinates can map human positions and relationships in social contexts; value coordinates can describe human values and moral concepts; and knowledge coordinates can outline human knowledge structures and cognitive abilities.

These coordinate systems are not independent; they are interrelated and influence each other, together forming the multidimensional nature of human intelligence. Understanding and considering these coordinate systems is vital in researching human intelligence and developing intelligent systems that are realistic and meet human needs.
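The idea of multiple interacting coordinate systems can be made concrete with a small sketch. The code below is purely illustrative: the axis names, the numeric scale, and the weighted combination are all assumptions, not an established model of human intelligence.

```python
from dataclasses import dataclass

@dataclass
class IntelligenceProfile:
    """Illustrative only: each field stands for one of the coordinate
    systems described above (all names and scales are hypothetical)."""
    emotional: float   # state of emotions and feelings
    social: float      # position and relationships in social contexts
    value: float       # values and moral concepts
    knowledge: float   # knowledge structure and cognitive ability

    def combined(self) -> float:
        # The axes are not independent; an equal-weight average is a
        # crude stand-in for their mutual influence.
        return 0.25 * (self.emotional + self.social + self.value + self.knowledge)

p = IntelligenceProfile(emotional=0.6, social=0.7, value=0.8, knowledge=0.9)
print(round(p.combined(), 2))  # 0.75
```

A real model would need far richer representations and genuinely coupled axes; the point here is only that these dimensions sit alongside, not inside, the familiar space-time description.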

In the real world, each person may hold different values about things. AI models built on mathematical algorithms cannot proactively express values of their own. They can provide information and suggestions based on the training data they have been fed, but that information may be influenced by data biases in open-source material on the Internet, in academic papers and books, and by human-encoded values.

Therefore, when it comes to value judgments, it's crucial to consider information and viewpoints from multiple sources, rather than relying solely on the outputs of AI models.
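The advice to weigh several sources rather than a single model output can be sketched as a trivial aggregation step. Everything here is hypothetical: the source labels and the majority-vote rule are stand-ins for a real multi-source review process, not a proposed mechanism.

```python
from collections import Counter

def aggregate_viewpoints(opinions: list[str]) -> str:
    """Return the view supported by the most sources; a minimal
    stand-in for 'considering information from multiple sources'."""
    counts = Counter(opinions)
    view, _ = counts.most_common(1)[0]
    return view

# Hypothetical inputs: one model output alongside two other sources.
sources = ["acceptable", "unacceptable", "unacceptable"]
print(aggregate_viewpoints(sources))  # unacceptable
```

A majority vote is of course far too blunt for genuine value judgments; the sketch only shows that the decision step can sit outside any single model.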

Regarding the impact of artificial intelligence on the human social order, public concerns center on the risks amplified by the widespread application of AI technology, including personal privacy breaches, job displacement, falsification, fraud, and military threats. These risks are not only novel; they are also difficult for the public, consumers and countries to counter. Critics argue that while AI technology brings convenience to society, it also has the potential to disrupt social order.

In light of this, humanity should take a cautious and responsible approach towards AI. While actively promoting the development and application of AI, there should be an emphasis on strengthening its oversight and regulation.

This includes developing universal ethical guidelines and moral standards shared between the East and the West, and creating laws and regulations with broad consensus to ensure the safety, fairness and trustworthiness of AI.

Furthermore, establishing multidisciplinary cooperation involving scientists, engineers, philosophers, policymakers and the public is crucial. This collective effort is needed to explore the development trajectory, application domains and potential risks of AI. Such an approach is the only way to ensure that AI technology development serves human interests and mitigates its impact on human society and values.

In summary, the future development of AI needs effective regulation and control on both technological and societal fronts, while bringing together the achievements of Eastern and Western wisdom, and promoting broad public engagement and discussion across the world. This is essential to ensure that the development of AI technology is in line with humanity's collective welfare and the values of building a shared future for mankind.

(If you want to contribute and have specific expertise, please contact us at opinions@cgtn.com. Follow @thouse_opinions on Twitter to discover the latest commentaries in the CGTN Opinion Section.) 



Reposted from China Global Television Network (CGTN).



https://m.sciencenet.cn/blog-40841-1411563.html
