Cailian, May 22 (Editor Zhao Hao) On Tuesday (May 21) local time, technology giants including Microsoft, Amazon, and OpenAI reached a landmark international agreement on AI safety at the Artificial Intelligence (AI) Seoul Summit.

The summit was co-hosted by South Korea and the United Kingdom as a follow-up to the AI Safety Summit held in the UK in November last year, under the theme of "Safety, Innovation, and Inclusivity."

Under the international agreement, companies from many countries around the world will make voluntary commitments to ensure they develop cutting-edge AI models safely.

Companies building AI models must publish safety frameworks that outline how they measure potential risks, such as the risk of the technology being abused by bad actors.

Tech giants pledge to develop AI models safely: in cases of extreme risk, they will "pull the plug"

The frameworks will also draw "red lines" around risks the companies deem intolerable, including but not limited to automated cyberattacks and the threat of biological weapons.

If such an extreme risk is identified and cannot be sufficiently mitigated, the company must activate a "kill switch" and halt development of the AI model in question.

British Prime Minister Rishi Sunak wrote in a statement: "This is the first time that so many leading AI companies from so many different parts of the world have agreed to the same commitments on AI safety. These commitments will ensure that the world's leading AI companies provide transparency and accountability in their AI development plans."

At the UK AI Safety Summit last November, participants signed the Bletchley Declaration, agreeing that all parties need to work together to establish a common regulatory approach.

The Bletchley Declaration states that AI brings enormous opportunities to the world and has the potential to transform and enhance human well-being, peace, and prosperity. At the same time, AI also poses major risks, including in everyday life: "All issues are critical, and we recognize the need and urgency to address them."

France will host the next AI summit in early 2025, and the companies have agreed to solicit feedback ahead of that meeting and make their frameworks public.

Around the same time, the Council of the European Union formally approved the Artificial Intelligence Act, the world's first comprehensive regulation in the field of AI; parties that violate it will be held accountable. The UK, however, has yet to propose a similar AI law.

Similar to the U.S. approach, the UK has opted for "light-touch" regulation, responding to emerging risks through industry self-regulation and case-by-case oversight, with the aim of avoiding excessive intervention that could hinder technological innovation.

At tomorrow's ministerial meeting, senior officials from various countries will build on today's discussions to work out concrete cooperation plans, such as further ensuring AI safety and promoting sustainable development.