Looking ahead: Industry insiders predict AI legal challenges in 2024

Published: 2024-01-01

Last year, as artificial intelligence (AI) became a more significant tool in everyday use, the legal landscape surrounding the technology also began to take shape.

From the first global regulations and laws taking form to a wave of lawsuits alleging copyright and data violations, AI drew everyone's attention.

With 2024 approaching, Cointelegraph asked industry insiders working at the intersection of law and AI to help unpack the lessons of 2023 and what they mean for the year ahead.

For a full picture of what happened in AI in 2023, don't forget to check out Cointelegraph's "Ultimate 2023 AI Guide."

Delays in enforcing the EU AI Act

In 2023, the European Union became one of the first jurisdictions to make significant progress on legislation regulating the deployment and development of advanced AI models.

The EU AI Act was initially proposed in April and approved by Parliament in June.

On Dec. 8, negotiators from the European Parliament and the Council reached a provisional agreement on the act.

Once fully in force, it will regulate government use of AI in biometric surveillance, oversee large AI systems such as ChatGPT, and set transparency rules that developers must follow before entering the market.

However, the act has already drawn criticism from the tech industry as "over-regulation."

Citing pushback from developers and a record of delays, Lothar Determann, partner at Baker McKenzie and author of "Determann's Field Guide to Artificial Intelligence Law," told Cointelegraph:

"It does not seem entirely unlikely that we will see a similar timeline of delays with the enactment of the EU AI Act."

Determann noted that while the agreement was reached in early December, the final text remains to be seen.

He added that politicians from several key member states, including the president of France, have expressed concerns about the current draft.

"It reminds me of the trajectory of the ePrivacy Regulation, which the EU announced in 2016 would take effect in May 2018 alongside the General Data Protection Regulation, but which still has not been finalized five years later."

Laura De Boel, a partner in the Brussels office of law firm Wilson Sonsini Goodrich & Rosati, also pointed out that the December development is a “political agreement,” with formal adoption yet to come in early 2024.

She explained further that EU lawmakers have included a “phased grace period,” during which:

“The rules on prohibited AI systems will apply after six months, and the rules on General Purpose AI will apply after 12 months,” she said. “The other requirements of the AI Act will apply after 24 months, except that the obligations for high-risk systems defined in Annex II will apply after 36 months.”

Compliance challenges

With a flurry of new regulations entering the scene, 2024 will present some compliance challenges for companies.

De Boel said that the European Commission has already called on AI developers to voluntarily implement the key obligations of the AI Act even before they become mandatory:

“They will need to start building the necessary internal processes and prepare their staff.”

However, Determann said that even without a comprehensive AI regulatory scheme, “we’ll see compliance challenges as businesses grapple with the application of existing regulatory schemes to AI.”

This includes the EU General Data Protection Regulation (GDPR), privacy laws around the world, intellectual property laws, product safety regulations, property laws, trade secrets, confidentiality agreements and industry standards, among others.

On that note, in the United States, the administration of President Joe Biden issued a lengthy executive order on Oct. 30 intended to protect citizens, government agencies and companies by ensuring AI safety standards.

The order established six new standards for AI safety and security, including intentions for ethical AI usage within government agencies.

While Biden was quoted as saying that the order aligns with the government's principles of "safety, security, trust, openness," industry insiders said it has created a "challenging" climate for developers.

This primarily boils down to discerning concrete compliance standards out of vague language.

In a previous interview with Cointelegraph, Adam Struck, a founding partner at Struck Capital and an AI investor, said the order makes it tricky for developers to anticipate future risks and compliance obligations under legislation that is based on assumptions about products that aren't fully developed yet. He said:

“This is certainly challenging for companies and developers, particularly in the open-source community, where the executive order was less directive.”

Related: ChatGPT’s first year marked by existential fear, lawsuits and boardroom drama

More specific laws

Another expectation for the legal landscape in 2024 is more specific, narrowly framed laws. This can already be seen as some countries roll out regulations against AI-generated deepfakes.

Regulators in the U.S. are already considering rules on political deepfakes in the lead-up to the 2024 presidential election. As of late November, India had begun finalizing laws against deepfakes.

Determann cautioned AI-related businesses and those using AI products:

“Moving forward, businesses will need to stay up-to-date on these developments, which will include disclosure requirements for bots, restrictions on ‘deepfakes’ and audit requirements for job application evaluation systems.”

He went on to say that such narrowly focused laws tend to have a "better chance" of achieving their intended impact than overly broad regulations.

“This is because businesses can understand and comply with them more easily, and authorities can enforce them more effectively,” he explained.

There have also been rumblings that 2024 could see regulations focusing on investments in technology. Among lawmakers in the U.S., there has already been talk of regulating investments.

In July, the U.S. Senate backed bipartisan legislation that requires local companies to report any investments in Chinese technologies, particularly semiconductors used in AI.

Copyright clarification

Many in the industry also expect to see clarification of what constitutes copyright infringement and a violation of copyright law.

Over the past year, there have been a number of high-profile copyright lawsuits accusing leading AI developers such as OpenAI, Microsoft, Meta and Google of violating copyright law.

These lawsuits have covered almost every form of content, from art and music, to literature and news.

Most recently, on Dec. 27, The New York Times filed a case against OpenAI and Microsoft for copyright infringement of its news content during the training of AI models. Prominent lawyers in the space have called this particular case a potential "watershed moment" for the AI and copyright space.

With the majority of these cases ongoing, 2024 is expected to bring some sort of resolution to this issue — whether wholly or partially remains to be seen.

What does AI have to say?

In the spirit of all things AI, Cointelegraph decided to ask ChatGPT itself what it believes will come of the legal scene surrounding AI in the upcoming year.

When asked, “Can you give me some predictions for AI law in 2024?” ChatGPT responded with the following assumptions:

  • More “strict regulations” globally about ethical concerns and biases.
  • Developing clearer “liability standards” for AI-related incidents for developers and users.
  • A heightened “emphasis on data privacy laws.”
  • An “increased demand for transparency in AI algorithms.”
  • The emergence of “specialized boards or regulatory bodies” dedicated to overseeing AI implementation.
  • Potential updates to labor laws and new employment models with AI in consideration.
  • Efforts to establish a “cohesive international framework for governing AI technologies.”
  • Implementation of “measures to protect consumers” from misleading or harmful AI applications.

Whether ChatGPT is onto something, only time will tell, and 2024 will be the year that tells us all.

Be sure to keep an eye on this space in 2024 for the latest updates on AI.

Magazine: 10 best long reads about cryptocurrency in 2023
