何毓琦
Danger of Chat-GPT and Generative AI
2024-3-13 22:14

In one of my earlier blogs, I wrote about my experience with Chat-GPT, in which I asked GPT to produce a biography of myself. When I found many untruths in the bio write-up, I questioned GPT as to where it found these facts. GPT confessed that it had made them up.

In the educational field, when students use GPT to produce essays as substitutes for homework and course requirements, professors face the problem of how to grade students fairly.

Today, our local newspaper, the Boston Globe, reported a more serious problem. A 3/13/2024 article reports that a lawyer used GPT to produce a written legal argument for a case he was defending in court. In the well-written legal brief, the lawyer cited three previous cases as precedents to support his reasoning (a well-known and important legal practice). What the lawyer failed to check was that these precedents were nonexistent; the generative AI had simply made them up to support the write-up. Fortunately, the judge checked and discovered the deception, for which the lawyer was disciplined and fined. But do we know how many times such errors have gone undetected and resulted in unjust decisions?

We live in dangerous times! One cannot believe what appears in print or on video unless it is double-checked and verified before acting on such information. Yet every day we are bombarded with an unsolicited information overload, never mind the social media in which many of us willingly engage.

How does one behave safely and comfortably in such an environment?

To reprint this article, please contact the original author for permission, and please note that it comes from 何毓琦's ScienceNet blog.

Link: https://m.sciencenet.cn/blog-1565-1425216.html?mobile=1
