technology Archives · Policy Print
https://policyprint.com/tag/technology-2/
News Around the Globe · Tue, 26 Mar 2024

EU opens new investigations into tech ‘gatekeepers’
https://policyprint.com/eu-opens-new-investigations-into-tech-gatekeepers/
Wed, 10 Apr 2024


The announcement highlights the growing regulatory scrutiny on the power of big tech companies and follows the US decision to take legal action against Apple, which it has accused of monopolising the smartphone market and crushing competition.

The European Commission will examine whether the big tech companies are preventing developers from steering customers away from controlled app stores, which could be anti-competitive.

The investigation comes under powers introduced by the Digital Markets Act (DMA), a landmark piece of legislation aimed at curbing the power of big tech. The Commission accuses the companies of non-compliance with the act and of failing to provide a fairer and more open digital space for European citizens and businesses.

Should the investigation conclude that there is a lack of full compliance with the DMA, gatekeeper companies could face heavy fines.

Designated as ‘gatekeepers’ by the DMA, Google owner Alphabet, Amazon, Apple, TikTok owner ByteDance, Meta and Microsoft have special responsibilities because of their dominance of key mobile technologies.

These companies are accused of steering developers away from competitor platforms and imposing various restrictions and limitations on their use.

The big tech companies are facing a growing legal backlash and last month Apple was fined over its iOS ecosystem and business practices by the EU.

Whether this case succeeds or not, it is interesting to note the growing willingness of the authorities to take these tech giants to court.

About time, according to some critics.

Source: New Electronics

China’s government will no longer buy Intel or AMD chips, or Microsoft products, for its PCs
https://policyprint.com/chinas-government-will-no-longer-buy-intel-or-amd-chips-or-microsoft-products-for-its-pcs/
Sun, 07 Apr 2024


China’s government has reportedly begun enforcing, as of this week, a law it passed in December. The law bans the government from purchasing PCs with Intel and AMD chips inside, along with software products from Microsoft, including its Windows operating system.

According to The Financial Times (via PC World), the new rules on purchasing products for China’s government PCs were set in place in December by the country’s Information Technology Security Evaluation Center. They apply to all government bodies and agencies above the country’s township level.

China previously ordered its government offices and agencies to no longer use Microsoft’s Windows OS in 2022, in favor of a homegrown Linux-based OS. As a result, these new guidelines are not expected to affect Microsoft. However, the ban on Intel and AMD chips could result in a noticeable hit in the revenue numbers for both companies.

On the other hand, the ban on these products on China’s government PCs does not include their use in private businesses or by regular consumers in that country.

China previously banned the use of Apple’s iPhone products in its government buildings. It has also banned the use of products from Micron Technology for its infrastructure projects, citing security concerns.

These new moves come some time after the United States government barred the export of fabrication equipment to China’s Semiconductor Manufacturing International Corporation (SMIC), preventing it from making certain chips in that country.

Late in 2023, the US government banned the export of some of Nvidia’s AI GPUs to China. Nvidia has instead developed an AI chip, the H20, specifically made to conform to the restrictions of the US government’s export rules. The company started taking preorders for the H20 in early 2024 and is expected to begin large-scale shipments of those China-specific AI GPUs sometime in the second quarter of 2024.

Source: Neowin

AI Policy and Governance: What You Need to Know
https://policyprint.com/ai-policy-and-governance-what-you-need-to-know/
Wed, 06 Dec 2023


Dedicated AI policies, in addition to other management frameworks and documentation, help organizations more effectively manage AI governance, data usage, and ethical best practices. AI governance — the practice of monitoring, regulating, and managing AI models and usage — is being rapidly adopted by organizations around the globe.

In this guide, we’ll dissect AI policy and governance in greater detail, explaining how a comprehensive AI policy that emphasizes all areas of AI governance can lead to more manageable, explainable AI and a more ethical and successful operation as a whole.

What Is an AI Policy?

An artificial intelligence policy is a dynamic, documented framework for AI governance that helps organizations set clear guidelines, rules, and principles for how AI technology should be used and developed within the organization.

Creating an AI policy should help your business leaders clarify and highlight any ethical, legal, or compliance standards to which your organization is committed, as well as identify the “who,” “what,” “when,” “why,” and “how” for strategic AI usage that aligns with overall organizational goals and strategies.

Every organization’s AI policy will look a little different to meet their specific objectives for AI governance, but in general, most AI policies include some version of the following components and structural elements:

  • An overarching vision for AI usage and growth in the organization.
  • Mission statements, clear objectives, and/or KPIs that align with this vision.
  • Detailed information about regional, industry-specific, and relevant regulatory compliance laws as well as other ethical considerations.
  • A catalog of approved tools and services that can be used for AI development and deployment purposes.
  • Defined roles and responsibilities related to AI usage.
  • An inventory and procedure for data privacy and security mechanisms.
  • A defined procedure for reporting and addressing AI performance and security issues.
  • Standards for AI model performance evaluation.
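As a rough illustration (the section names below are assumptions for the example, not a standard taxonomy), the components above could be captured in a simple machine-readable skeleton that a governance team checks for completeness:

```python
from dataclasses import dataclass, field

# Illustrative section names mirroring the components listed above;
# a real policy would use its own organization's taxonomy.
REQUIRED_SECTIONS = {
    "vision", "objectives", "compliance_and_ethics", "approved_tools",
    "roles_and_responsibilities", "data_privacy_controls",
    "incident_reporting", "model_evaluation_standards",
}

@dataclass
class AIPolicy:
    # Maps a section name to its drafted text.
    sections: dict = field(default_factory=dict)

    def missing_sections(self) -> list:
        """Components from the checklist that have not been drafted yet."""
        return sorted(REQUIRED_SECTIONS - self.sections.keys())

draft = AIPolicy(sections={
    "vision": "Use AI to improve services without compromising privacy.",
    "roles_and_responsibilities": "CIO owns implementation; CDO owns data.",
})
print(draft.missing_sections())
```

A check like this can run in a documentation pipeline so a policy draft is flagged whenever a required component is still empty.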

What Is AI Governance?

AI governance is a group of best practices that includes policies, standardized processes, and data and infrastructure controls that contribute to a more ethical and controlled artificial intelligence ecosystem.

When organizations put appropriate AI governance standards and frameworks in place, training data, algorithms, model infrastructure, and the AI models themselves can be more closely monitored and controlled throughout initial development, training and retraining, deployment, and daily use. This contributes to a more efficient AI operation as well as compliance with relevant data privacy and AI ethics regulations.

Who Manages AI Governance?

AI governance is a complex process, especially if your organization is working with multiple generative AI models or other large-scale AI platforms. The following individuals and teams play important roles in different aspects of AI governance and management:

  • Executive Leaders: An organization’s C-suite and other top leaders should establish the overall direction, goals, and vision for AI governance and associated AI policies. Regardless of their specific title, all business executives should be clear on what AI tools are being used and what regulations and policies are in place to regulate that usage.
  • Chief Information Officer: Unless your organization prefers to have a chief AI officer or chief technical officer oversee this kind of work, the CIO is the primary business leader who takes broader organizational strategies and goals and applies them to actual AI governance development and implementation. This individual is also responsible for ensuring that AI integrates smoothly and securely with all other technologies and infrastructures in your business’s tech stack.
  • Chief Data Officer: The CDO is primarily responsible for data governance and data-level quality assurance. In their role, they work to manage data quality, data privacy and compliance, and transparent data preparation workflows for AI model training sets.
  • Chief Compliance Officer and Legal/Compliance Teams: This individual or group of individuals keeps up with international, national, regional, industry-specific, and other regulations that may impact how your organization can use data — including PII and intellectual property — and AI models. If a chief ethics officer works among this team, this work may go beyond simple compliance management and move toward setting up ethical decision-making and training frameworks.
  • Data Science, AI, and IT Security Teams: These are the teams that handle the hands-on development tasks for training data, algorithms, models, performance monitoring, and security safeguards. While they may not have a hand in setting AI governance standards, they will likely play the biggest role in carrying out these standards.
  • AI Ethics Committee: If your organization has established an AI ethics committee that operates separately from your C-suite executives, these committee members will act as advisors to leadership in establishing governance frameworks that consider AI ethics from all angles, including personal privacy, transparent data sourcing and training, and environmental impact.
  • HR and Learning and Development Teams: These leaders are in charge of incorporating AI governance best practices and rules into the recruitment and hiring process so all new members of the team are aware of the roles and responsibilities they have when using AI. This team may not come up with the actual training materials or goals, but because of their background with other types of training, they may be tasked with leading AI usage training across the organization.
  • Third-Party Consultants: If your organization chooses to hire third-party consultants for data management, AI development, or strategic planning, these individuals may take over some or all of the other taskwork covered above. However, you’ll want to make sure key stakeholders in your organization work collaboratively with these advisors to create an AI governance policy that is both comprehensive and fitted to your specific needs.
  • Government and Industry Regulators: Depending on the industry or region you’re working in, third-party regulators could play a major role in determining what AI governance looks like for your organization, as they establish and enforce rules for ethical AI and data use. Many countries and regional groups like the EU are currently working on more comprehensive AI legislation, so expect this group’s role in AI governance to grow quickly in the coming months and years.

Why Is AI Governance Important?

AI governance is one of the most effective ways to establish, organize, and enforce standards for AI development and use that encourage ethical and compliant practices, transparency, continual monitoring and improvement, and cross-team collaboration.

AI governance can improve AI model usage outcomes and help organizations use AI in a way that protects customer data, aligns with compliance requirements, and maintains their reputation as an ethical operator, not only with their customers but also with their partners and the industry at large.

Establishing an independent AI governance strategy can also help your organization get more out of the AI technology you’re using, as creating this type of plan requires your team to flesh out its AI vision, goals, and specific roles and responsibilities in more granular detail. The accountability that gets built into an AI governance strategy helps to prevent and mitigate dangerous biases, create a plan of action for when AI development or use goes awry, and reemphasize the importance of maintaining personal data privacy and security.

The Benefits of Having an AI Policy for AI Governance

An AI policy extends several benefits to organizations that are looking to develop a more comprehensive AI governance strategy. These are just a handful of the ways in which a dedicated policy can help you stay on task, compliant, and oriented with your initial vision:

  • Structured guidance for all AI tool developers and users: This type of AI policy can act as a user manual for both AI developers and users of these tools, as it considers the entire AI lifecycle, from development to deployment to ongoing monitoring and fine-tuning. The standardized rules that are part of this type of policy facilitate cross-organizational buy-in and help your technical teams create a roadmap for AI best practices in real-world scenarios.
  • A mechanism for widespread accountability: AI policies provide documented rules for organizational and role-specific AI best practices. This means that all relevant stakeholders have a point of reference that clearly outlines their roles, responsibilities, procedures, limitations, and prohibitions for AI usage, which helps to avoid both ethical and compliance issues.
  • Better adherence to regulatory and data security laws: While the leaders in your organization are likely aware of regulatory and data security laws and how they apply to your business, chances are most other employees could benefit from additional clarification. Enforcing an AI policy that reiterates these laws and how they apply to your organization can assist your compliance and legal teams in communicating and mitigating issues with compliance laws at all levels of the organization.
  • Clear outline of data privacy standards and mechanisms: Beyond simply stating data security and compliance expectations, AI policies detail how data privacy works and what mechanisms are in place to protect data when it’s handled, stored, and processed for AI models. This level of detail guides all employees in how they should protect an organization’s most sensitive data assets and also gives the business a clear blueprint for what they should look for and where they should look during AI audits.
  • Builds customer trust and brand reputation: As AI’s capabilities and use cases continue to expand, many people are excited about the possibilities while others — probably including many of your customers — are more distrusting of the technology. Establishing an AI policy that enforces AI governance while creating more transparency and explainability is a responsible way to move forward and gives your customers more confidence in how your organization uses AI in its operations.
  • Preparation for incoming AI regulations: While few AI-specific regulations have passed into law at this point, several groups, including the EU, the U.K., and the U.S., are working toward more comprehensive AI regulations and laws. Creating a comprehensive AI policy now can help your organization proactively align with AI best practices before they are required in your regions of operation; doing this work now can reduce the headache of reworking your processes down the line.

AI Policy and Governance Best Practices

If your AI policy is not clear on its expectations for AI governance and general use, your teams may run into issues with noncompliance, security, and other avoidable user errors. Follow these best practices to help every member of your team, regardless of how they work with AI, remain committed to high standards of AI governance:

  • Pay attention to relevant regulations: Consider important regional, national, and industry-specific regulations and stay up-to-date with your knowledge so AI systems remain in compliance at all times.
  • Implement standards for data security and data management: AI is a data-driven technology, so be sure to use appropriate data management tools, strategies, and processes to protect and optimize that asset.
  • Cover the entire AI lifecycle in your AI policy: Your AI policy should not simply focus on how AI models are developed or how they are used. Instead, create a comprehensive policy that covers everything from data preparation and training to model creation and development, model deployment, model monitoring, and model fine-tuning.
  • Establish ethical use standards and requirements: Keep in mind employee-specific roles and responsibilities and set up role-based access controls or other security standards to underpin those rules and protect your consumers’ most sensitive data. Additionally, pay attention to important concepts like AI bias, fairness, data sourcing methods, and other factors that impact ethical use.
  • Create standards for ongoing evaluation of model performance: What will you be looking at when you’re monitoring your models in “the real world”? Your AI policy should detail important performance metrics and KPIs so you can stick to your goals and fairly evaluate performance at all stages of AI usage and development.
  • Accompany your AI policy with dedicated user training: To help all employees understand how your AI governance policy applies to their work, provide dedicated user training that covers cybersecurity, ethical use, and other best practices, ideally with real-world scenarios and examples.
  • Document and regularly update your AI policies: AI policies should not be static documents; they should dynamically change as tooling, user expectations, industry trends and regulations, and other factors shift over time.
  • Communicate your ethical practices to relevant stakeholders and customers: Strategically and transparently communicate your governance standards and details of your policy to third-party investors, customers, partners, and other important stakeholders. This communication strategy helps to establish additional trust in your brand and its ethical approach to AI.
Some AI and ML platforms, including Amazon SageMaker, include built-in model governance features to support role-based controls and other usage rules. Source: AWS.
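To make the evaluation-standards practice above concrete, here is a minimal, hypothetical sketch (not tied to SageMaker or any real platform; the metric names and thresholds are invented for illustration) of checking a model’s monitored metrics against the KPI thresholds documented in a policy:

```python
# Hypothetical policy thresholds; a real policy would document its own
# metrics, groups, and acceptable ranges.
POLICY_THRESHOLDS = {
    "accuracy": 0.90,        # minimum acceptable accuracy
    "bias_disparity": 0.05,  # maximum allowed disparity between groups
}

def evaluate_model(metrics: dict) -> list:
    """Return a list of policy violations for one model's monitored metrics."""
    violations = []
    if metrics.get("accuracy", 0.0) < POLICY_THRESHOLDS["accuracy"]:
        violations.append("accuracy below policy minimum")
    if metrics.get("bias_disparity", 1.0) > POLICY_THRESHOLDS["bias_disparity"]:
        violations.append("bias disparity above policy maximum")
    return violations

# A model that meets the accuracy bar but exceeds the bias limit:
print(evaluate_model({"accuracy": 0.93, "bias_disparity": 0.08}))
```

Wiring a check like this into scheduled monitoring gives the reporting procedure in your policy something concrete to trigger on.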

Bottom Line: Using Your AI Policy and Governance Best Practices for Better AI Outcomes

Most businesses are already using AI in some fashion or will likely adopt the technology soon to keep up with the competition in their industry. Creating and adhering to an AI policy that covers compliance, ethics, security, and practical use cases in detail not only gives these organizations a more strategic leg to stand on when getting started with large-scale AI projects but also helps them meet customer and legal expectations when using AI technology.

Developing detailed AI policies and governance strategies often feels like an overwhelming process, especially for organizations that are just dipping their toes into the AI pool, where establishing this kind of policy may feel like overkill or a waste of time. But this is the wrong way to look at it; instead, think of your AI governance policy as an insurance policy for the modern enterprise. Especially as AI regulations become more well-defined in the coming months and years, it will pay to have an AI policy that proactively paves the way to more responsible and effective artificial intelligence.

Source: eWeek

City of Seattle Releases Generative Artificial Intelligence Policy Defining Responsible Use for City Employees
https://policyprint.com/city-of-seattle-releases-generative-artificial-intelligence-policy-defining-responsible-use-for-city-employees/
Tue, 14 Nov 2023


Seattle – Today, the City of Seattle released its Generative Artificial Intelligence (AI) Policy to balance the opportunities created by this innovative technology with strong guardrails to ensure it is used responsibly and accountably. The new policy aligns with President Biden’s Executive Order regarding AI announced earlier this week, and positions Seattle to continue to be a national leader in civic innovation and technology.

President Biden’s Executive Order focuses on new standards for AI developers to prioritize safety and security, protect Americans’ privacy, advance equity, protect workers, and more. Seattle Deputy Mayor Greg Wong was in Washington D.C. for the announcement to support these new guidelines.

“Innovation is in Seattle’s DNA, and I see immense opportunity for our region to be an AI powerhouse thanks to our world-leading technology companies and research universities. Now is the time to ensure this new tool is used for good, creating new opportunities and efficiencies rather than reinforcing existing biases or inequities,” said Seattle Mayor Bruce Harrell. “As a city, we have a responsibility to both embrace new technology that can improve our service while keeping a close eye on what matters – our communities and their data and privacy. This policy is the outcome of our One Seattle approach to cross-sector collaboration and will help guide our use of this new technology for years to come.”

The City’s policy was developed after a six-month working period with the Generative AI Advisory Team and City employees. The policy, written by Seattle’s Interim Chief Technology Officer Jim Loter and based on the group’s work, takes a principle-based approach to governing the use of generative AI, which will allow greater flexibility as the technology evolves while ensuring it aligns with the City’s responsibility to serve residents.

The seven governing principles are:

  1. Innovation and Sustainability  
  2. Transparency and Accountability
  3. Validity and Reliability
  4. Bias and Harm Reduction and Fairness
  5. Privacy Enhancing
  6. Explainability and Interpretability
  7. Security and Resiliency

The City’s new AI policy touches on many aspects of generative AI. It highlights several key factors for responsible use in a municipality, including attributing AI-generated work, having an employee review all AI output before it goes live, and limiting the use of personal information in the materials AI uses to develop its product. The policy also stipulates that any work with a third-party vendor or tool must include these principles. This will help mitigate novel risks that could adversely affect the City’s ability to fulfill its legal commitments and obligations about how it uses and manages data and information.

City employees using AI technology will be held accountable for compliance with these commitments. All use of AI technology must go through the same technology reviews as any other new technologies. Those reviews take an in-depth look at privacy, compliance, and security, among others.

“I’m proud of the way the City of Seattle has responded thoroughly to the development of this policy,” said Seattle’s Interim Chief Technology Officer Jim Loter. “Technology is always changing. Our responses to these changes prove we are open to embracing new ways of providing services to our communities, while also mindful of the data we need to protect. I know this is an evolving topic, and I look forward to continuing this work and these conversations with experts in the field who also happen to live in our community and benefit from our services as a City. It truly emphasizes the meaning of One Seattle.”

The City policy applies to generative AI, a special type of AI technology. Generative AI produces new content in response to user requests and prompts by learning from very large collections of data; text-generating systems built this way are known as “large language models.” The capability to create new content, and to continually learn from these large data collections, makes it possible for a computerized system to produce content that looks and sounds like it was made by a human. While AI, including generative AI, has the potential to enhance human work across many fields of enterprise, its use has also raised many questions about the consequences of employing smart systems, among them ethics, safety, accuracy, bias, and attribution for the human work used to inform AI system models.

The Generative AI Policy Advisory Team included technology industry leaders from the University of Washington, the Allen Institute for AI, and members of the City’s Community Technology Advisory Board (CTAB). Seattle Information Technology employees provided input as well.

What members of the Generative AI Advisory Team had to say about this work and policy development:

Nicole DeCario, Director, AI & Society, and Jacob Morrison, Predoctoral Researcher, and Public Policy Lead, Allen Institute for AI and Generative Artificial Intelligence Advisory Board members

“The City of Seattle is taking a values-driven approach to creating their generative AI policy, carefully weighing the benefits and harms this technology brings. We are grateful to support this work and commend the City on its leadership in prioritizing the responsible use of AI. We hope the City’s policies can provide a blueprint for other municipalities around the country as it becomes increasingly common to interact with AI systems in our daily lives.”

CTAB member Omari Stringer

“As a CTAB member and resident of Seattle, I am happy to see the City of Seattle taking steps to ensure the responsible use of innovative technologies such as Generative AI. Although the pace of innovation often exceeds the pace of policy, it is important to engage with stakeholders early to set a strong foundation for future use cases. While I believe we should tread carefully in this new domain, especially with the importance of the work the City carries out, there are certainly many opportunities for AI to enhance the delivery of services to the public. I appreciate the unique opportunity to provide my voice and expertise to help bridge the gap between innovation and ethics.”

Source: Harrell

AI Policy Yields a Goldmine for Lobbyists
https://policyprint.com/ai-policy-yields-a-goldmine-for-lobbyists/
Tue, 07 Nov 2023


The government’s burgeoning interest in artificial intelligence policy is turning into the next big payday for K Street.

Lobbyists are rushing to sign up AI companies as clients. And K Street firms also are being enlisted by a sprawling constellation of industries and interest groups that want help influencing AI policy.

Cashing in on the latest policy fight is a classic Washington narrative. But unlike, say, cryptocurrency or marijuana regulation, AI policy touches just about every industry. Groups as disparate as the NFL Players Association, Nike, Amazon and the Mayo Clinic have enlisted help from firms to lobby on the matter.

Some lobbyists compared the boom in business opportunities to the cryptocurrency policy debate that brought K Street millions. But AI has the potential to be even bigger.

Lobbyists in the AI space said others across town are angling themselves as subject matter experts as the new work becomes available. Nearly every industry has realized they will be affected by artificial intelligence, and the business community is aggressively looking for intel, they said.

“Every lobbying firm in town is trying to make themselves out to be an expert in everything to try and lure in clients, so AI is just one of them,” said one lobbyist granted anonymity to discuss dynamics on K Street. “I’d be hard-pressed to name you an AI expert downtown. It’s hard enough to pick the AI experts in policymaking positions.”

Another lobbyist said that this past spring, lobbyists without any tech clients began bringing up artificial intelligence at political fundraisers as a means to attract new clients. The same tactic happened with cryptocurrency two years ago, the person said. “The whole point of the business is to find people who need your services and will pay you to do that,” the lobbyist said, laughing.

Carl Thorsen, a lobbyist who has some clients with a stake in the issue, compared the pattern to when Congress was trying to prohibit internet gambling years ago. Suddenly, “every consultant under the sun” was working for an internet gambling client, said Thorsen, who was counsel at the House Judiciary Committee when it handled the issue. His firm has already heard from clients that some consultants are pitching themselves as the AI experts, he added.

“What people don’t understand, they are afraid of, and I’m certain there are plenty of consultants who’ve decided to market themselves as AI experts,” he said.

The lobbying frenzy started long before the White House issued its executive order on AI and comes as Congress starts to dig in on related policy. Broad swathes of industry are seeking new incentives from Washington — including subsidies for AI research and workforce retraining — while avoiding onerous rules on how they develop or deploy the emerging technology. Other industry sectors are squabbling over how AI should apply to topics as disparate as copyright, criminal justice, health care, banking and national defense. Looming over it all are calls from some top AI companies for Washington to impose a licensing regime to govern the most advanced AI models — a path some warn would lock in the dominance of leading AI firms like OpenAI.

While disclosure forms suggest OpenAI has not officially hired any lobbyists, it’s still building a ground game in Washington. The company recently tapped law firm DLA Piper to coach CEO Sam Altman on how to testify before Congress. It has also hired Washington lawyer Sy Damle, a partner at Latham & Watkins, to represent it in ongoing copyright lawsuits sparked by its generative AI tools. In September, Damle organized a letter campaign pushing back on possible AI-driven changes to copyright law, though an OpenAI spokesperson said the company had no involvement in that effort. OpenAI is also looking to hire a U.S. congressional lead, budgeting between $230,000 and $280,000 annually for that role.

Altman also gave $200,000 to President Joe Biden’s joint fundraising committee. Shortly after his donation, he participated in a June meeting between Biden and the visiting Indian Prime Minister Narendra Modi. Altman was also invited to the state dinner honoring Modi.

LinkedIn co-founder Reid Hoffman, an AI investor who has sat on OpenAI’s board, has given more than $700,000 to Biden’s joint fundraising committee and has publicly praised the administration’s recent AI executive order. Top Microsoft employees have also given tens of thousands of dollars to Biden’s joint fundraising committee.

“I’m very concerned that AI active executives are trying to cultivate Democrats now just like Big Tech cultivated Democrats a decade or two ago, and Wall Street did a decade before that,” said Jeff Hauser, founder of the Revolving Door Project. “AI knows that decisions in Washington that are made in the next few years will set the course of the industry for a generation. It’s a really good time to invest four-, five-, six-, maybe even seven-digits worth of campaign cash and potentially yield 10- or 11-digit returns.”

Source: Politico

The post AI Policy Yields a Goldmine for Lobbyists appeared first on Policy Print.

The US and China may be ending an agreement on science and technology cooperation https://policyprint.com/the-us-and-china-may-be-ending-an-agreement-on-science-and-technology-cooperation/ Wed, 30 Aug 2023 19:40:12 +0000 https://policyprint.com/?p=3425 A decades-old science and technology cooperative agreement between the United States and China expires on Aug. 27, 2023. On the…


A decades-old science and technology cooperative agreement between the United States and China expires on Aug. 27, 2023. On the surface, an expiring diplomatic agreement may not seem significant. But unless it’s renewed, the quiet end to a cooperative era may have consequences for scientific research and technological innovation.

The possible lapse comes after U.S. Rep. Mike Gallagher, R-Wis., led a congressional group that warned the U.S. State Department in July 2023 to beware of cooperation with China. This group recommended letting the agreement expire without renewal, claiming China has gained a military advantage through its scientific and technological ties with the U.S.

The State Department has dragged its feet on renewing the agreement, only requesting an extension at the last moment to “amend and strengthen” the agreement.

The U.S. is an active international research collaborator, and since 2011 China has been its top scientific partner, displacing the United Kingdom, which had been the U.S.’s most frequent collaborator for decades. China’s domestic research and development spending is closing in on parity with that of the United States. Its scholarly output is growing in both number and quality. According to recent studies, China’s science is becoming increasingly creative, breaking new ground.

As a policy analyst and public affairs professor, I research international collaboration in science and technology and its implications for public policy. Relations between countries are often enhanced by negotiating and signing agreements, and this agreement is no different. The U.S.’s science and technology agreement with China successfully built joint research projects and shared research centers between the two nations.

U.S. scientists can typically work with foreign counterparts without a political agreement. Most aren’t even aware of diplomatic agreements, which are signed long after researchers have worked together. But this is not the case with China, where the 1979 agreement became a prerequisite for and the initiator of cooperation.

A 40-year diplomatic investment

The U.S.-China science and technology agreement was part of a historic opening of relations between the two countries, following decades of antagonism and estrangement. U.S. President Richard Nixon set in motion the process of normalizing relations with China in the early 1970s. President Jimmy Carter continued to seek an improved relationship with China.

China had announced reforms, modernizations and a global opening after an intense period of isolation that stretched from the late 1950s, through the Cultural Revolution, until the early 1970s. Among its “four modernizations” was science and technology, in addition to agriculture, defense and industry.

While China is historically known for inventing gunpowder, paper and the compass, it was not a scientific power in the 1970s. American and Chinese diplomats viewed science as a low-conflict activity, comparable to cultural exchange. They figured starting with a nonthreatening scientific agreement could pave the way for later discussions on more politically sensitive issues.

On July 28, 1979, Carter and Chinese Premier Deng Xiaoping signed an “umbrella agreement” that contained a general statement of intent to cooperate in science and technology, with specifics to be worked out later.

In the years that followed, China’s economy flourished, as did its scientific output. As China’s economy expanded, so did its investment in domestic research and development. This all boosted China’s ability to collaborate in science – aiding its own economy.

Early collaboration under the 1979 umbrella agreement was mostly symbolic and based upon information exchange, but substantive collaborations grew over time.

A major early achievement came when the two countries published research showing mothers could ingest folic acid to prevent birth defects like spina bifida in developing embryos. Other successful partnerships developed renewable energy, rapid diagnostic tests for the SARS virus and a solar-driven method for producing hydrogen fuel.

Joint projects then began to emerge independent of government agreements or aid. Researchers linked up around common interests – this is how nation-to-nation scientific collaboration thrives.

Many of these projects were initiated by Chinese Americans or Chinese nationals working in the United States who cooperated with researchers back home. In the earliest days of the COVID-19 pandemic, these strong ties led to rapid, increased Chinese-U.S. cooperation in response to the crisis.

Time of conflict

Throughout the 2000s and 2010s, scientific collaboration between the two countries increased dramatically – joint research projects expanded, visiting students in science and engineering skyrocketed in number and collaborative publications received more recognition.

As China’s economic and technological success grew, however, U.S. government agencies and Congress began to scrutinize the agreement and its output. Chinese know-how was building military strength, and with China’s military and political influence growing, U.S. officials worried about intellectual property theft, trade secret violations and national security vulnerabilities stemming from connections with the U.S.

Recent U.S. legislation, such as the CHIPS and Science Act, is a direct response to China’s stunning expansion. Through the CHIPS and Science Act, the U.S. will boost its semiconductor industry, seen as the platform for building future industries, while seeking to limit China’s access to advances in AI and electronics.

A victim of success?

Some politicians believe this bilateral science and technology agreement, negotiated in the 1970s as the least contentious form of cooperation – and one renewed many times – may now threaten the United States’ dominance in science and technology. As political and military tensions grow, both countries are wary of renewal of the agreement, even as China has signed similar agreements with over 100 nations.

The United States is stuck in a world that no longer exists – one where it dominates science and technology. China now leads the world in research publications recognized as high-quality work, and it produces many more engineers than the U.S. By all measures, China’s research spending is soaring.

Even if the recent extension results in a renegotiated agreement, the U.S. has signaled to China a reluctance to cooperate. Since 2018, joint publications have dropped in number. Chinese researchers are less willing to come to the U.S. Meanwhile, Chinese researchers who are in the U.S. are increasingly likely to return home, taking valuable knowledge with them.

The U.S. risks being cut off from top know-how as China forges ahead. Perhaps looking at science as a globally shared resource could help both parties craft a truly “win-win” agreement.

Source: The Conversation

Google Updates Its Privacy Policy to Allow Data Scraping for AI Training https://policyprint.com/google-updates-its-privacy-policy-to-allow-data-scraping-for-ai-training/ Sun, 30 Jul 2023 08:00:00 +0000 https://policyprint.com/?p=3347 The latest updates to Google’s privacy policy reveal that Google may use any public information available to train…


The latest updates to Google’s privacy policy reveal that Google may use any public information available to train its various AI products and services.

Google has made updates to its privacy policy that now allow it to take any publicly available data and use it for artificial intelligence (AI) training purposes.

The update to the company’s privacy policy came on July 1 and can be compared to previous versions of the policy via a link published on the site’s update page.

In the latest version, changes can be seen that include the addition of Google’s AI models, Bard and Cloud AI capabilities, to the services it may train by using “information that’s publicly available online” or from “other public sources.”

The updated Google policy conditions (in green) as of July 1, 2023. Source: screenshot 

The policy update makes clear to the public and Google’s users that anything publicly uploaded online could be used in the training of the AI systems it currently develops, as well as future ones.

This update from Google comes shortly after OpenAI, the developer of the popular AI chatbot ChatGPT, was hit with a class-action lawsuit in California over allegedly scraping private information from users via the internet.

The suit claims that OpenAI used data from millions of comments on social media, blogs, Wikipedia and other personal information from users to train ChatGPT without first getting consent to do so. The lawsuit alleges that this violated the copyrights and privacy rights of millions of internet users.

Twitter’s recent limits on the number of tweets users can access, which depend on their account verification status, have fueled rumors across the internet that the change was imposed partly in response to AI data scraping.

Twitter’s developer documentation states that the rate limits were imposed as a method to manage the volume of requests made to Twitter’s application programming interface.

Elon Musk, the owner and former CEO of Twitter, recently tweeted about the platform “getting data pillaged so much that it was degrading service for normal users.”

Source: Cointelegraph

US Looks to Restrict China’s Access to Cloud Computing to Protect Advanced Technology https://policyprint.com/us-looks-to-restrict-chinas-access-to-cloud-computing-to-protect-advanced-technology/ Thu, 20 Jul 2023 08:00:00 +0000 https://policyprint.com/?p=3317 The Biden administration is preparing to restrict Chinese companies’ access to U.S. cloud-computing services, WSJ reported Tuesday, citing people familiar with…


The Biden administration is preparing to restrict Chinese companies’ access to U.S. cloud-computing services, WSJ reported Tuesday, citing people familiar with the situation, in a move that could further strain relations between the world’s economic superpowers. From the report: The new rule, if adopted, would likely require U.S. cloud-service providers such as Amazon.com and Microsoft to seek U.S. government permission before they provide cloud-computing services that use advanced artificial-intelligence chips to Chinese customers, the people said. The Biden administration’s move would follow other recent measures as Washington and Beijing wage a high-stakes conflict over access to the supply chain for the world’s most advanced technology.

Beijing Monday announced export restrictions on metals used in advanced chip manufacturing, days ahead of a visit to China by Treasury Secretary Janet Yellen. The proposed restriction is seen as a means to close a significant loophole. National-security analysts have warned that Chinese AI companies might have bypassed the current export controls rules by using cloud services. These services allow customers to gain powerful computing capabilities without purchasing advanced equipment — including chips — on the control list, such as the A100 chips by American technology company Nvidia.

Source: Slashdot

Belarus To Terminate Cooperation With France in Culture, Education, Science https://policyprint.com/belarus-to-terminate-cooperation-with-france-in-culture-education-science/ Sat, 06 May 2023 12:12:00 +0000 https://policyprint.com/?p=2924 Belarus is going to terminate the agreement with France on cooperation in the field of culture, education, science…

Belarus is going to terminate the agreement with France on cooperation in the field of culture, education, science and technology, and the media.

Report informs, citing Belarusian media, that the bill has been posted on the National Legal Internet Portal.

The date on which the agreement ceases to apply is not specified.

“To recognize as invalid the Law of the Republic of Belarus dated December 2, 2010 No. 203-З ‘On ratification of the agreement between the government of the Republic of Belarus and the government of the French Republic on cooperation in the field of culture, education, science and technology, mass media’ from the moment the agreement is terminated,” the document notes.

The agreement was signed in Paris in January 2010. It provided for close cooperation in education: Belarusian students studied French using modern methods and could pursue an education in France, while French students could, in turn, come to Belarus to study.

TikTok and Higher Education https://policyprint.com/tiktok-and-higher-education/ Sat, 21 Jan 2023 16:29:59 +0000 https://policyprint.com/?p=2683 “There is a lot of political fervor over TikTok and its connections to the Chinese government, and this…


“There is a lot of political fervor over TikTok and its connections to the Chinese government, and this is coming out in the form of these perhaps symbolic bans,” said Kurt Opsahl, general counsel for the Electronic Frontier Foundation. “Those limitations suggested that Auburn was making more of a statement than a new policy.”

Cancel culture of both the left and right meet at the convenient doorstep of the other, China, in the halls of what arguably should be the most protected zone for free speech in the United States: its colleges and universities. Censorship has its requisite foe. With the magician’s sleight of hand, these bans transform bald-faced censorship into a sudden awareness of security risks.

Wow.

Let’s discuss. Legal guidance is necessary to make clear whether government bans on the use of TikTok on government devices and networks apply to higher education. That broader legal question has long been a slippery slope for state institutions. Decades went into the question, for example, of whether Americans With Disabilities Act regulations for the federal and state governments required compliance among state colleges and universities. I am not sure that issue was ever fully settled until the Obama administration pushed regulations clearly under Titles II and III of the legislation, effectively ending the debate.

Psycho-political analysis might help us see how this stampede is taking shape. Real and perceived American concerns about the People’s Republic have coalesced into the only issue upon which our benighted political parties can find common ground. No surprise that the first committee formed in this new Congress centers on this point: the House Select Committee on the Strategic Competition Between the U.S. and the Chinese Communist Party. Anti-Communism held the Republican Party together through much of the second half of the 20th century. As we watch the GOP ricochet between its big-money elites and its angry grassroots, why not resurrect a tried and true formula to keep the peace? And signal a bipartisan gesture as flourish?

Histrionics are the problem with this composition. Real issues do exist in the calculus between the U.S. and the PRC geopolitically, socially and economically. Overheating about a popular application on the internet distracts from the deeper thinking that needs to take shape around how to address military testiness and global competition. Far be it from the Republican Party to be histrionic, however, and I am being as sarcastic as I know how to be in print. From the local congressional race I personally experienced to the masters—Trump, Bannon, Jordan—99 percent of what comes out of their mouths is nothing but drama, obscuring and displacing the possibility of more nuanced thought. Democrats, eager to appear “bipartisan,” had better watch their step and not compete in seeing who can scream the loudest.

Enter hypocrisy, if not idiocy, of the highest order. Myriad groups, for example the Electronic Frontier Foundation, and individuals (and yes, I count myself among them) have been preaching privacy and security issues from the rooftops for at least a generation of sustained discussion on this front. Now, all of a sudden, we have awareness on the part of Congress, state elected officials including governors and higher education administrators that roosts on blocking the ports to a popular app on the internet? With nothing else to say about precisely what those security and privacy issues are apart from generalizations about how applications scrape and use data? Someone in China has access to what a coed at Auburn University likes to watch on TikTok? With no rules around how that data is gathered technologically (i.e., algorithmic design), managed professionally (Sold to advertisers? Delivered on silver platters to Chairman Xi?), or secured administratively (what are the rules?). We might hear echoes of our own circumstances. These challenges are exactly what we face in the United States in the gap that exists between consumers and tech companies. Mirror, mirror on the wall …

Had I not dedicated the lion’s share of my career life to higher education, I might just sit back and laugh. But I cannot. I actually took my career direction seriously (and wrote a dissertation, let us not forget, on Catholic women’s higher education, which was an enterprise that knew something about devotion) and therefore must speak out. Students, faculty, administrators, alumni, stand up in protest to this ridiculous and dangerous rhinoceros. If ever there was an educational moment for us to learn and teach about privacy and security, TikTok provides a most excellent example. We in higher education should take every opportunity to exploit it. We set a very bad precedent, however, to leap over the unique work we can do to educate and instead jump on the censorship bandwagon. Jump off, Auburn, and any other institution headed down that path. Remember your missions! Freedom of thought and speech are both drivers and values necessary to make those missions work. And don’t you let any politician knock you off that mantle that is yours and ours to cherish.

Source: Inside Higher Ed
