United States Launches AI Safety Institute in Collaboration with UK to Ensure Advanced AI’s Secure Development

United States to Launch AI Safety Institute, Partners with UK Safety Institute

In a significant move, the United States is taking proactive steps to address the growing concerns surrounding advanced artificial intelligence by launching an AI safety institute. This initiative aims to evaluate and mitigate known and emerging risks associated with cutting-edge AI models, often referred to as “frontier” AI.

Secretary of Commerce Gina Raimondo made the official announcement during her speech at the AI Safety Summit in Britain, where she emphasized the importance of collective efforts to ensure AI’s safe development. Raimondo stated, “I will almost certainly be calling on many of you in the audience who are in academia and industry to be part of this consortium. We can’t do it alone; the private sector must step up.”

This collaboration-driven approach signifies a recognition of the multifaceted nature of AI safety and the necessity for diverse expertise from academia, industry, and government agencies to address it effectively. Moreover, Secretary Raimondo pledged to establish a formal partnership between the newly created US AI safety institute and the United Kingdom Safety Institute. This international cooperation reflects a global commitment to AI safety standards.

The new AI safety institute will operate under the National Institute of Standards and Technology (NIST) and take the lead in the U.S. government’s efforts to ensure the safety of advanced AI models. Specifically, the institute will have the following key objectives:

  1. Development of Safety Standards: The institute will facilitate the development of standards for the safety, security, and testing of AI models. These standards are crucial for ensuring that AI technologies are reliable and secure in various applications.
  2. Authentication of AI-Generated Content: Establishing standards for authenticating AI-generated content is vital to combat misinformation and deepfakes, ensuring content accuracy and trustworthiness.
  3. Testing Environments for Researchers: Providing testing environments for researchers to evaluate emerging AI risks and address known impacts. This will enable a more comprehensive understanding of the risks associated with AI and their potential solutions.

This initiative aligns with an executive order signed by President Joe Biden on Monday, underscoring the U.S. government’s commitment to AI safety. The executive order mandates that developers of AI systems, especially those posing risks to U.S. national security, the economy, public health, or safety, must share the results of safety tests with the U.S. government before releasing these systems to the public. This directive falls in line with the Defense Production Act, emphasizing the importance of transparency and accountability in AI development.

Furthermore, the executive order directs various government agencies to establish standards for AI testing, addressing related risks such as chemical, biological, radiological, nuclear, and cybersecurity concerns. These measures aim to create a robust framework for evaluating, testing, and deploying AI technologies while prioritizing the safety and security of the public.

The launch of the U.S. AI safety institute and the accompanying executive order demonstrate the government’s commitment to fostering innovation while ensuring that the development and deployment of AI technologies prioritize safety and security. This multifaceted approach is poised to shape the future of AI development, setting the stage for international collaboration and the establishment of rigorous safety standards.

Source: Agencies
