Holding AI Accountable: NTIA Seeks Public Input to Develop Policy


As artificial intelligence (AI)-powered applications continue to grow in popularity, the National Telecommunications and Information Administration (NTIA) is seeking comments and public input with the aim of crafting a report on AI accountability.

Given the recent rise in popularity of AI-powered applications such as ChatGPT, government and business officials have begun to express concern over the potential dangers and risks associated with such technology, including the use of such applications to commit crimes, infringe intellectual property rights, spread misinformation, and engage in harmful bias. In light of this, regulators in multiple countries have begun to consider ways to encourage use of AI-powered applications in ways that are legal, effective, ethical, safe, and trustworthy.

On March 16, 2023, the US Copyright Office launched an initiative to examine the copyright law and policy issues raised by AI technology, including the scope of copyright in works generated using AI tools and the use of copyrighted materials for machine-learning purposes. The UK government published its AI regulatory framework on April 4, 2023. Now, NTIA has issued an AI Accountability Request for Comment (RFC) through which it is seeking more general feedback from the public on AI accountability measures and policies.

REQUEST FOR COMMENT

With the RFC, the Biden administration is taking a step toward potential regulation of AI technology, which may involve a certification process for AI-powered applications to satisfy prior to release. The RFC states that NTIA is seeking feedback on “what policies can support the development of AI audits, assessments, certifications and other mechanisms to create earned trust in AI systems.” In particular, the announcement indicates that NTIA is seeking input on the following topics:

  • What types of data access are necessary to conduct audits and assessments
  • How regulators and other actors can incentivize and support credible assurance of AI systems along with other forms of accountability
  • What different approaches might be needed in different industry sectors, e.g., employment or healthcare

The RFC lists 34 more targeted questions, including the following:

  • What is the purpose of AI accountability mechanisms such as certifications, audits, and assessments?
  • What AI accountability mechanisms are currently being used?
  • How often should audits or assessments be conducted, and what are the factors that should inform these decisions?
  • Should AI systems be released with quality assurance certifications, especially if they are high risk?
  • What are the most significant barriers to effective AI accountability in the private sector, including barriers to independent AI audits, whether cooperative or adversarial? What are the best strategies and interventions to overcome these barriers?
  • What are the roles of intellectual property rights, terms of service, contractual obligations, or other legal entitlements in fostering or impeding a robust AI accountability ecosystem? For example, do nondisclosure agreements or trade secret protections impede the assessment or audit of AI systems and processes? If so, what legal or policy developments are needed to ensure an effective accountability framework?

NEXT STEPS

The NTIA stated that its RFC questions are not exhaustive and that commenters are not required to respond to all of the questions presented.

In the RFC, the NTIA states that it will rely on these comments, along with other public input on this topic, to draft and issue a report on AI accountability policy development, focusing especially on the AI assurance ecosystem.

The RFC was published in the Federal Register on April 13, 2023, with written comments due by June 12, 2023.

Our intellectual property team is available to assist those making submissions to the NTIA in response to the RFC.

Source: Morgan Lewis
