AI Policy and Governance: What You Need to Know

Dedicated AI policies, in addition to other management frameworks and documentation, help organizations more effectively manage AI governance, data usage, and ethical best practices. AI governance — the practice of monitoring, regulating, and managing AI models and usage — is being rapidly adopted by organizations around the globe.

In this guide, we’ll dissect AI policy and governance in greater detail, explaining how a comprehensive AI policy that emphasizes all areas of AI governance can lead to more manageable, explainable AI and a more ethical and successful operation as a whole.

What Is an AI Policy?

An artificial intelligence policy is a dynamic, documented framework for AI governance that helps organizations set clear guidelines, rules, and principles for how AI technology should be used and developed within the organization.

Creating an AI policy should help your business leaders clarify and highlight any ethical, legal, or compliance standards to which your organization is committed, as well as identify the “who,” “what,” “when,” “why,” and “how” for strategic AI usage that aligns with overall organizational goals and strategies.

Every organization’s AI policy will look a little different to meet their specific objectives for AI governance, but in general, most AI policies include some version of the following components and structural elements:

  • An overarching vision for AI usage and growth in the organization.
  • Mission statements, clear objectives, and/or KPIs that align with this vision.
  • Detailed information about regional, industry-specific, and relevant regulatory compliance laws as well as other ethical considerations.
  • A catalog of approved tools and services that can be used for AI development and deployment purposes.
  • Defined roles and responsibilities related to AI usage.
  • An inventory and procedure for data privacy and security mechanisms.
  • A defined procedure for reporting and addressing AI performance and security issues.
  • Standards for AI model performance evaluation.
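Where a team wants these components to be auditable alongside the written document, the same skeleton can also be captured in machine-readable form and versioned with other governance artifacts. Below is a minimal Python sketch of that idea; every field name and value here is an illustrative assumption, not a standard schema.

```python
# Minimal sketch of an AI policy captured in machine-readable form so it can
# be versioned and referenced by tooling. All field names and values are
# illustrative assumptions, not a standard schema.
AI_POLICY = {
    "vision": "Use AI to improve service delivery without compromising privacy.",
    "objectives": ["Cut average case-handling time 20% by Q4", "Zero PII incidents"],
    "compliance": ["GDPR", "CCPA", "industry-specific rules"],
    "approved_tools": ["internal-llm-gateway", "vetted-vendor-api"],  # hypothetical names
    "roles": {
        "cio": "owns governance implementation",
        "cdo": "owns data quality, privacy, and compliance",
        "ml_teams": "build, deploy, and monitor models",
    },
    "incident_procedure": "Report AI performance or security issues to the review board within 24 hours.",
    "evaluation_standards": {"min_accuracy": 0.90, "max_fairness_gap": 0.05},
}
```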

What Is AI Governance?

AI governance is a group of best practices that includes policies, standardized processes, and data and infrastructure controls that contribute to a more ethical and controlled artificial intelligence ecosystem.

When organizations put appropriate AI governance standards and frameworks in place, training data, algorithms, model infrastructure, and the AI models themselves can be more closely monitored and controlled throughout initial development, training and retraining, deployment, and daily use. This contributes to a more efficient AI operation as well as compliance with relevant data privacy and AI ethics regulations.

Who Manages AI Governance?

AI governance is a complex process, especially if your organization is working with multiple generative AI models or other large-scale AI platforms. The following individuals and teams play important roles in different aspects of AI governance and management:

  • Executive Leaders: An organization’s C-suite and other top leaders should establish the overall direction, goals, and vision for AI governance and associated AI policies. Regardless of their specific title, all business executives should be clear on what AI tools are being used and what regulations and policies are in place to regulate that usage.
  • Chief Information Officer: Unless your organization prefers to have a chief AI officer or chief technical officer oversee this kind of work, the CIO is the primary business leader who takes broader organizational strategies and goals and applies them to actual AI governance development and implementation. This individual is also responsible for ensuring that AI integrates smoothly and securely with all other technologies and infrastructures in your business’s tech stack.
  • Chief Data Officer: The CDO is primarily responsible for data governance and data-level quality assurance. In their role, they work to manage data quality, data privacy and compliance, and transparent data preparation workflows for AI model training sets.
  • Chief Compliance Officer and Legal/Compliance Teams: This individual or group of individuals keeps up with international, national, regional, industry-specific, and other regulations that may impact how your organization can use data — including PII and intellectual property — and AI models. If a chief ethics officer works among this team, this work may go beyond simple compliance management and move toward setting up ethical decision-making and training frameworks.
  • Data Science, AI, and IT Security Teams: These are the teams that handle the hands-on development tasks for training data, algorithms, models, performance monitoring, and security safeguards. While they may not have a hand in setting AI governance standards, they will likely play the biggest role in carrying out these standards.
  • AI Ethics Committee: If your organization has established an AI ethics committee that operates separately from your C-suite executives, these committee members will act as advisors to leadership in establishing governance frameworks that consider AI ethics from all angles, including personal privacy, transparent data sourcing and training, and environmental impact.
  • HR and Learning and Development Teams: These leaders are in charge of incorporating AI governance best practices and rules into the recruitment and hiring process so all new members of the team are aware of the roles and responsibilities they have when using AI. This team may not come up with the actual training materials or goals, but because of their background with other types of training, they may be tasked with leading AI usage training across the organization.
  • Third-Party Consultants: If your organization chooses to hire third-party consultants for data management, AI development, or strategic planning, these individuals may take over some or all of the work covered above. However, you’ll want to make sure key stakeholders in your organization work collaboratively with these advisors to create an AI governance policy that is both comprehensive and fitted to your specific needs.
  • Government and Industry Regulators: Depending on the industry or region you’re working in, third-party regulators could play a major role in determining what AI governance looks like for your organization, as they establish and enforce rules for ethical AI and data use. Many countries and regional groups like the EU are currently working on more comprehensive AI legislation, so expect this group’s role in AI governance to grow quickly in the coming months and years.

Why Is AI Governance Important?

AI governance is one of the most effective ways to establish, organize, and enforce standards for AI development and use that encourage ethical and compliant practices, transparency, continual monitoring and improvement, and cross-team collaboration.

AI governance can improve AI model usage outcomes and help organizations use AI in a way that protects customer data, aligns with compliance requirements, and maintains their reputation as an ethical operator, not only with their customers but also with their partners and the industry at large.

Establishing an independent AI governance strategy can also help your organization get more out of the AI technology you’re using, as creating this type of plan requires your team to flesh out its AI vision, goals, and specific roles and responsibilities in more granular detail. The accountability that gets built into an AI governance strategy helps to prevent and mitigate dangerous biases, create a plan of action for when AI development or use goes awry, and reemphasize the importance of maintaining personal data privacy and security.

The Benefits of Having an AI Policy for AI Governance

An AI policy extends several benefits to organizations that are looking to develop a more comprehensive AI governance strategy. These are just a handful of the ways in which a dedicated policy can help you stay on task, compliant, and aligned with your initial vision:

  • Structured guidance for all AI tool developers and users: This type of AI policy can act as a user manual for both AI developers and users of these tools, as it considers the entire AI lifecycle, from development to deployment to ongoing monitoring and fine-tuning. The standardized rules that are part of this type of policy facilitate cross-organizational buy-in and help your technical teams create a roadmap for AI best practices in real-world scenarios.
  • A mechanism for widespread accountability: AI policies provide documented rules for organizational and role-specific AI best practices. This means that all relevant stakeholders have a point of reference that clearly outlines their roles, responsibilities, procedures, limitations, and prohibitions for AI usage, which helps to avoid both ethical and compliance issues.
  • Better adherence to regulatory and data security laws: While the leaders in your organization are likely aware of regulatory and data security laws and how they apply to your business, chances are most other employees could benefit from additional clarification. Enforcing an AI policy that reiterates these laws and how they apply to your organization can assist your compliance and legal teams in communicating and mitigating issues with compliance laws at all levels of the organization.
  • Clear outline of data privacy standards and mechanisms: Beyond simply stating data security and compliance expectations, AI policies detail how data privacy works and what mechanisms are in place to protect data when it’s handled, stored, and processed for AI models. This level of detail guides all employees in how they should protect an organization’s most sensitive data assets and also gives the business a clear blueprint for what they should look for and where they should look during AI audits.
  • Builds customer trust and brand reputation: As AI’s capabilities and use cases continue to expand, many people are excited about the possibilities while others — probably including many of your customers — are more distrusting of the technology. Establishing an AI policy that enforces AI governance while creating more transparency and explainability is a responsible way to move forward and gives your customers more confidence in how your organization uses AI in its operations.
  • Preparation for incoming AI regulations: While few AI-specific regulations have passed into law at this point, several groups, including the EU, the U.K., and the U.S., are working toward more comprehensive AI regulations and laws. Creating a comprehensive AI policy now can help your organization proactively align with AI best practices before they are required in your regions of operation; doing this work now can reduce the headache of reworking your processes down the line.

AI Policy and Governance Best Practices

If your AI policy is not clear on its expectations for AI governance and general use, your teams may run into issues with noncompliance, security, and other avoidable user errors. Follow these best practices to help every member of your team, regardless of how they work with AI, remain committed to high standards of AI governance:

  • Pay attention to relevant regulations: Consider important regional, national, and industry-specific regulations and stay up-to-date with your knowledge so AI systems remain in compliance at all times.
  • Implement standards for data security and data management: AI is a data-driven technology, so be sure to use appropriate data management tools, strategies, and processes to protect and optimize that asset.
  • Cover the entire AI lifecycle in your AI policy: Your AI policy should not simply focus on how AI models are developed or how they are used. Instead, create a comprehensive policy that covers everything from data preparation and training to model creation and development, model deployment, model monitoring, and model fine-tuning.
  • Establish ethical use standards and requirements: Keep in mind employee-specific roles and responsibilities and set up role-based access controls or other security standards to underpin those rules and protect your consumers’ most sensitive data. Additionally, pay attention to important concepts like AI bias, fairness, data sourcing methods, and other factors that impact ethical use.
  • Create standards for ongoing evaluation of model performance: What will you be looking at when you’re monitoring your models in “the real world”? Your AI policy should detail important performance metrics and KPIs so you can stick to your goals and fairly evaluate performance at all stages of AI usage and development (see the sketch after this list).
  • Accompany your AI policy with dedicated user training: To help all employees understand how your AI governance policy applies to their work, provide dedicated user training that covers cybersecurity, ethical use, and other best practices, ideally with real-world scenarios and examples.
  • Document and regularly update your AI policies: AI policies should not be static documents; they should dynamically change as tooling, user expectations, industry trends and regulations, and other factors shift over time.
  • Communicate your ethical practices to relevant stakeholders and customers: Strategically and transparently communicate your governance standards and details of your policy to third-party investors, customers, partners, and other important stakeholders. This communication strategy helps to establish additional trust in your brand and its ethical approach to AI.
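To make the evaluation standards above concrete, here is a minimal Python sketch of a monitoring check that compares observed model metrics against thresholds an AI policy might define. The metric names and threshold values are assumptions for illustration; a real policy would specify its own KPIs.

```python
# Minimal sketch: flag violations by comparing observed model metrics against
# thresholds defined in an AI policy. Metric names and values are illustrative.
POLICY_THRESHOLDS = {
    "accuracy": 0.90,                # minimum acceptable accuracy
    "demographic_parity_gap": 0.05,  # maximum acceptable fairness gap
}

def check_model_against_policy(observed: dict) -> list[str]:
    """Return human-readable policy violations; an empty list means compliant."""
    violations = []
    accuracy = observed.get("accuracy", 0.0)
    if accuracy < POLICY_THRESHOLDS["accuracy"]:
        violations.append(
            f"accuracy {accuracy:.3f} is below the policy minimum "
            f"of {POLICY_THRESHOLDS['accuracy']:.2f}"
        )
    gap = observed.get("demographic_parity_gap", 1.0)
    if gap > POLICY_THRESHOLDS["demographic_parity_gap"]:
        violations.append(
            f"fairness gap {gap:.3f} exceeds the policy maximum "
            f"of {POLICY_THRESHOLDS['demographic_parity_gap']:.2f}"
        )
    return violations

# Example with illustrative monitoring output: fails the accuracy check.
print(check_model_against_policy({"accuracy": 0.87, "demographic_parity_gap": 0.02}))
```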
Some AI and ML platforms, including Amazon SageMaker, include built-in model governance features to support role-based controls and other usage rules. Source: AWS.
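As one concrete example of such built-in features, SageMaker exposes a Model Cards API for recording a model’s description and review status. The sketch below assumes the boto3 SageMaker client and a hypothetical model name; the card content is a minimal illustration, and field names should be checked against the current AWS model card schema rather than taken as authoritative.

```python
# Minimal sketch: creating a governance model card with the SageMaker Model
# Cards API via boto3. The model name and card content are hypothetical; verify
# schema fields against current AWS documentation before use.
import json
import boto3

sagemaker = boto3.client("sagemaker")

card_content = {
    "model_overview": {
        "model_description": "Customer-support triage classifier",  # illustrative
    },
}

sagemaker.create_model_card(
    ModelCardName="support-triage-v1",  # hypothetical card name
    Content=json.dumps(card_content),
    ModelCardStatus="Draft",            # later: PendingReview, Approved, Archived
)
```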

Bottom Line: Using Your AI Policy and Governance Best Practices for Better AI Outcomes

Most businesses are already using AI in some fashion or will likely adopt the technology soon to keep up with the competition in their industry. Creating and adhering to an AI policy that covers compliance, ethics, security, and practical use cases in detail not only gives these organizations a more strategic leg to stand on when getting started with large-scale AI projects but also helps them meet customer and legal expectations when using AI technology.

Developing detailed AI policies and governance strategies can feel overwhelming, and for organizations that are just dipping their toes into the AI pool, establishing this kind of policy may seem like overkill or a waste of time. But this is the wrong way to look at it; instead, think of your AI governance policy as an insurance policy for the modern enterprise. Especially as AI regulations become more well-defined in the coming months and years, it will pay to have an AI policy that proactively paves the way to more responsible and effective artificial intelligence.

Source: eWeek

City of Seattle Releases Generative Artificial Intelligence Policy Defining Responsible Use for City Employees

Seattle – Today, the City of Seattle released its Generative Artificial Intelligence (AI) Policy to balance the opportunities created by this innovative technology with strong guardrails to ensure it is used responsibly and accountably. The new policy aligns with President Biden’s Executive Order regarding AI announced earlier this week, and positions Seattle to continue to be a national leader in civic innovation and technology.

President Biden’s Executive Order focuses on new standards for AI developers to prioritize safety and security, protect Americans’ privacy, advance equity, protect workers, and more. Seattle Deputy Mayor Greg Wong was in Washington D.C. for the announcement to support these new guidelines.

“Innovation is in Seattle’s DNA, and I see immense opportunity for our region to be an AI powerhouse thanks to our world-leading technology companies and research universities. Now is the time to ensure this new tool is used for good, creating new opportunities and efficiencies rather than reinforcing existing biases or inequities,” said Seattle Mayor Bruce Harrell. “As a city, we have a responsibility to both embrace new technology that can improve our service while keeping a close eye on what matters – our communities and their data and privacy. This policy is the outcome of our One Seattle approach to cross-sector collaboration and will help guide our use of this new technology for years to come.”

The City’s policy was developed after a six-month working period with the Generative AI Advisory Team and City employees. The policy, written by Seattle’s Interim Chief Technology Officer Jim Loter and based on the group’s work, takes a principles-based approach to governing the use of generative AI, allowing greater flexibility as the technology evolves while ensuring alignment with the City’s responsibility to serve residents.

The seven governing principles are:

  1. Innovation and Sustainability  
  2. Transparency and Accountability
  3. Validity and Reliability
  4. Bias and Harm Reduction and Fairness
  5. Privacy Enhancing
  6. Explainability and Interpretability
  7. Security and Resiliency

The City’s new AI policy touches on many aspects of generative AI. It highlights several key factors for responsible use in a municipality, including attributing AI-generated work, having an employee review all AI-generated material before it goes live, and limiting the use of personal information in the data that AI tools draw on to generate their outputs. The policy also stipulates that any work with a third-party vendor or tool must adhere to these principles. These guardrails will help mitigate novel risks that could adversely affect the City’s ability to fulfill its legal commitments and obligations regarding how it uses and manages data and information.

City employees using AI technology will be held accountable for compliance with these commitments. All use of AI technology must go through the same technology reviews as any other new technologies. Those reviews take an in-depth look at privacy, compliance, and security, among others.

“I’m proud of the way the City of Seattle has responded thoroughly to the development of this policy,” said Seattle’s Interim Chief Technology Officer Jim Loter. “Technology is always changing. Our responses to these changes prove we are open to embracing new ways of providing services to our communities, while also mindful of the data we need to protect. I know this is an evolving topic, and I look forward to continuing this work and these conversations with experts in the field who also happen to live in our community and benefit from our services as a City. It truly emphasizes the meaning of One Seattle.”

The City policy applies to generative AI, a special type of AI technology. Generative AI produces new content in response to user requests and prompts by drawing on a model trained on large amounts of data, known as a “large language model.” The capability to create new content, and to continually learn from these large data models, makes it possible for a computerized system to produce content that looks and sounds like it was created by a human. While AI, including generative AI, has the potential to enhance human work across many fields of human enterprise, its use has also raised many questions about the consequences of employing smart systems. Among these are ethics, safety, accuracy, bias, and attribution for human work used to inform AI system models.

The Generative AI Policy Advisory Team included technology industry leaders from the University of Washington, the Allen Institute for AI, and members of the City’s Community Technology Advisory Board (CTAB). Seattle Information Technology employees provided input as well.

What members of the Generative AI Advisory Team had to say about this work and policy development:

Nicole DeCario, Director of AI & Society, and Jacob Morrison, Predoctoral Researcher and Public Policy Lead, both of the Allen Institute for AI and members of the Generative AI Advisory Team

“The City of Seattle is taking a values-driven approach to creating their generative AI policy, carefully weighing the benefits and harms this technology brings. We are grateful to support this work and commend the City on its leadership in prioritizing the responsible use of AI. We hope the City’s policies can provide a blueprint for other municipalities around the country as it becomes increasingly common to interact with AI systems in our daily lives.”

CTAB member Omari Stringer

“As a CTAB member and resident of Seattle, I am happy to see the City of Seattle taking steps to ensure the responsible use of innovative technologies such as Generative AI. Although the pace of innovation often exceeds the pace of policy, it is important to engage with stakeholders early to set a strong foundation for future use cases. While I believe we should tread carefully in this new domain, especially with the importance of the work the City carries out, there are certainly many opportunities for AI to enhance the delivery of services to the public. I appreciate the unique opportunity to provide my voice and expertise to help bridge the gap between innovation and ethics.”

Source: Harrell

AI Policy Yields a Goldmine for Lobbyists

The government’s burgeoning interest in artificial intelligence policy is turning into the next big payday for K Street.

Lobbyists are rushing to sign up AI companies as clients. And K Street firms also are being enlisted by a sprawling constellation of industries and interest groups that want help influencing AI policy.

Cashing in on the latest policy fight is a classic Washington narrative. But unlike, say, cryptocurrency or marijuana regulation, AI policy touches just about every industry. Groups as disparate as the NFL Players Association, Nike, Amazon and the Mayo Clinic have enlisted help from firms to lobby on the matter.

Some lobbyists compared the boom in business opportunities to the cryptocurrency policy debate that brought K Street millions. But AI has the potential to be even bigger.

Lobbyists in the AI space said others across town are angling themselves as subject matter experts as the new work becomes available. Nearly every industry has realized they will be affected by artificial intelligence, and the business community is aggressively looking for intel, they said.

“Every lobbying firm in town is trying to make themselves out to be an expert in everything to try and lure in clients, so AI is just one of them,” said one lobbyist granted anonymity to discuss dynamics on K Street. “I’d be hard-pressed to name you an AI expert downtown. It’s hard enough to pick the AI experts in policymaking positions.”

Another lobbyist said that this past spring, lobbyists without any tech clients began bringing up artificial intelligence at political fundraisers as a means to attract new clients. The same tactic happened with cryptocurrency two years ago, the person said. “The whole point of the business is to find people who need your services and will pay you to do that,” the lobbyist said, laughing.

Carl Thorsen, a lobbyist who has some clients with a stake in the issue, compared the pattern to when Congress was trying to prohibit internet gambling years ago. Suddenly, “every consultant under the sun” was working for an internet gambling client, said Thorsen, who was counsel at the House Judiciary Committee when it handled the issue. His firm has already heard from clients that some consultants are pitching themselves as the AI experts, he added.

“What people don’t understand, they are afraid of, and I’m certain there are plenty of consultants who’ve decided to market themselves as AI experts,” he said.

The lobbying frenzy started long before the White House issued its executive order on AI and comes as Congress starts to dig in on related policy. Broad swathes of industry are seeking new incentives from Washington — including subsidies for AI research and workforce retraining — while avoiding onerous rules on how they develop or deploy the emerging technology. Other industry sectors are squabbling over how AI rules should apply to areas as disparate as copyright, criminal justice, health care, banking and national defense. Looming over it all are calls from some top AI companies for Washington to impose a licensing regime to govern the most advanced AI models — a path some warn would lock in the dominance of leading AI firms like OpenAI.

While disclosure forms suggest OpenAI has not officially hired any lobbyists, it’s still building a ground game in Washington. The company recently tapped law firm DLA Piper to coach CEO Sam Altman on how to testify before Congress. It has also hired Washington lawyer Sy Damle, a partner at Latham & Watkins, to represent it in ongoing copyright lawsuits sparked by its generative AI tools. In September, Damle organized a letter campaign pushing back on possible AI-driven changes to copyright law, though an OpenAI spokesperson said the company had no involvement in that effort. OpenAI is also looking to hire a U.S. congressional lead, budgeting between $230,000 and $280,000 annually for that role.

Altman also gave $200,000 to President Joe Biden’s joint fundraising committee. Shortly after his donation, he participated in a June meeting between Biden and the visiting Indian Prime Minister Narendra Modi. Altman was also invited to the state dinner honoring Modi.

LinkedIn co-founder Reid Hoffman, an AI investor who has sat on OpenAI’s board, has given more than $700,000 to Biden’s joint fundraising committee and has publicly praised the administration’s recent AI executive order. Top Microsoft employees have also given tens of thousands of dollars to Biden’s joint fundraising committee.

“I’m very concerned that AI active executives are trying to cultivate Democrats now just like Big Tech cultivated Democrats a decade or two ago, and Wall Street did a decade before that,” said Jeff Hauser, founder of the Revolving Door Project. “AI knows that decisions in Washington that are made in the next few years will set the course of the industry for a generation. It’s a really good time to invest four-, five-, six-, maybe even seven-digits worth of campaign cash and potentially yield 10- or 11-digit returns.”

Source: Politico

Holding AI Accountable: NTIA Seeks Public Input to Develop Policy

As artificial intelligence (AI) powered applications continue to increase in popularity, the National Telecommunications and Information Administration (NTIA) now seeks comments and public input with the aim of crafting a report on AI accountability.

Given the recent rise in popularity of AI-powered applications such as ChatGPT, government and business officials have begun to express concern over the potential dangers and risks associated with such technology, including the use of such applications to commit crimes, infringe intellectual property rights, spread misinformation, and engage in harmful bias. In light of this, regulators in multiple countries have begun to consider ways to encourage use of AI-powered applications in ways that are legal, effective, ethical, safe, and trustworthy.

On March 16, 2023, the US Copyright Office launched an initiative to examine the copyright law and policy issues raised by AI technology, including the scope of copyright in works generated using AI tools and the use of copyrighted materials for machine-learning purposes. The UK government published its AI regulatory framework on April 4, 2023. Now, NTIA has issued an AI Accountability Request for Comment (RFC) through which it is seeking more general feedback from the public on AI accountability measures and policies.

REQUEST FOR COMMENT

With the RFC, the Biden administration is taking a step toward potential regulation of AI technology, which may involve a certification process for AI-powered applications to satisfy prior to release. The RFC states that NTIA is seeking feedback on “what policies can support the development of AI audits, assessments, certifications and other mechanisms to create earned trust in AI systems.” In particular, the announcement indicates that NTIA is seeking input on the following topics:

  • What types of data access are necessary to conduct audits and assessments
  • How regulators and other actors can incentivize and support credible assurance of AI systems along with other forms of accountability
  • What different approaches might be needed in different industry sectors, e.g., employment or healthcare

The RFC lists 34 more targeted questions, including the following:

  • What is the purpose of AI accountability mechanisms such as certifications, audits, and assessments?
  • What AI accountability mechanisms are currently being used?
  • How often should audits or assessments be conducted, and what are the factors that should inform these decisions?
  • Should AI systems be released with quality assurance certifications, especially if they are high risk?
  • What are the most significant barriers to effective AI accountability in the private sector, including barriers to independent AI audits, whether cooperative or adversarial? What are the best strategies and interventions to overcome these barriers?
  • What are the roles of intellectual property rights, terms of service, contractual obligations, or other legal entitlements in fostering or impeding a robust AI accountability ecosystem? For example, do nondisclosure agreements or trade secret protections impede the assessment or audit of AI systems and processes? If so, what legal or policy developments are needed to ensure an effective accountability framework?

NEXT STEPS

The NTIA stated that its RFC questions are not exhaustive and that commenters are not required to respond to all of the questions presented.

In the RFC, the NTIA states that it will rely on these comments, along with other public input on this topic, to draft and issue a report on AI accountability policy development, focusing especially on the AI assurance ecosystem.

The RFC was published in the Federal Register on April 13, 2023, making written comments due by June 12, 2023.

Our intellectual property team is available to assist those making submissions to the NTIA in response to the RFC.

Source: Morgan Lewis
