AI art of a dystopian city run by algorithms

Draft rules issued by the White House are set to regulate the use of artificial intelligence (AI) in the US government. The move comes after President Biden’s executive order, a comprehensive plan to amplify the benefits of AI while mitigating its potential harm. The draft rules require federal agencies to evaluate the algorithms they use, particularly in sectors like law enforcement, healthcare, and housing, for any potential discriminatory or harmful effects. The regulations align the US more closely with the EU’s regulatory approach to AI and mark a significant shift towards transparency and safety.

  • Biden issues draft rules to regulate AI use in the US government, requiring assessment of algorithms by August 2024.
  • The rules follow the executive order to ensure AI is safe, unbiased, and privacy-protecting through standards and tools.
  • They signify a shift towards transparency and safety, aligning the US more closely with the EU’s regulatory approach.

Understanding the draft rules

Under the draft rules, federal agencies must assess all existing algorithms by August 2024 and cease using any that do not comply. The rules apply not only to algorithms currently in use but also to those acquired from private companies. The US Office of Management and Budget (OMB), which issued the draft rules, has highlighted potential harms of AI in areas like healthcare, housing, and law enforcement, where algorithms have previously led to discrimination or denial of services.
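The memo does not prescribe a specific testing method, but one widely used check for the kind of discriminatory effect described above is the "four-fifths rule" (disparate impact ratio). The sketch below is a minimal, hypothetical illustration of that check, not part of the OMB rules; the data and group labels are invented for the example.

```python
# Minimal sketch of a disparate impact check (the "four-fifths rule").
# Hypothetical data; an agency audit would use real decision records.

def selection_rates(decisions):
    """Approval rate per group from (group, approved) records."""
    totals, approvals = {}, {}
    for group, approved in decisions:
        totals[group] = totals.get(group, 0) + 1
        approvals[group] = approvals.get(group, 0) + int(approved)
    return {g: approvals[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions):
    """Ratio of the lowest group approval rate to the highest."""
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values())

# Invented tenant-screening outcomes: (group, approved)
records = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

ratio = disparate_impact_ratio(records)
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # conventional four-fifths threshold
    print("potential disparate impact - flag for review")
```

Here group A is approved 75% of the time and group B only 25%, giving a ratio of 0.33, well below the 0.8 threshold at which US employment guidance treats a selection process as showing potential disparate impact.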

The draft memo addresses potential violations of citizens’ rights through predictive policing, speech-blocking AI, tenant-screening algorithms, and systems affecting immigration or child custody. However, the draft rules exclude models related to national security, and agencies may issue waivers where ceasing AI use would impede critical operations.

What the executive order entails

Biden’s executive order aims to ensure that AI systems are safe and unbiased, that privacy is protected, and that displaced workers are supported. The National Institute of Standards and Technology (NIST) will play a crucial role in AI security, developing standards, tools, and tests. The order also focuses on protecting privacy, advancing equity, and addressing the impact of AI on workers, emphasizing the need for responsible AI innovation that is safe, secure, and trustworthy.

The executive order recognizes the importance of international collaboration in AI governance, and the US has consulted various countries, including the EU and the UK, on AI governance frameworks. This signals the US’s desire to influence global policy and set standards for responsible AI innovation.

Addressing potential problems

While the executive order sets a clear direction for the United States, challenges remain. The Markup, in its section-by-section breakdown of the executive order, highlights key issues, including the creation of safety and security standards and rules for AI technologies that could pose risks to national security or critical infrastructure.

The order also calls for the development of standards, tools, and tests to ensure the safety and security of AI systems, addressing the risks of using AI for engineering dangerous biological materials and protecting against AI-enabled fraud and deception. The draft rules now published are the first step in doing so.

Learning from past mistakes: the Dutch benefits scandal

The Dutch child care benefits scandal serves as a stark warning about the risks of using algorithms without proper safeguards. The Dutch tax authorities used a self-learning algorithm to identify child care benefits fraud, penalizing families on mere suspicion and pushing them into poverty. Tens of thousands of families, often with lower incomes or belonging to ethnic minorities, were affected; some victims died by suicide, and over a thousand children were placed in foster care.

The Dutch case underscores the devastating consequences of automated systems without proper safeguards, as governments worldwide increasingly rely on algorithms and AI.

The draft rules issued by the White House are a promising step towards a safer, more regulated future for AI technology. The challenge now lies in implementing these rules effectively to prevent harm while harnessing AI’s potential benefits.