Technology Archives · Policy Print

EU opens new investigations into tech ‘gatekeepers’

The announcement highlights the growing regulatory scrutiny on the power of big tech companies and follows the US decision to take legal action against Apple, which it has accused of monopolising the smartphone market and crushing competition.

The European Commission will examine whether the big tech companies are preventing developers from steering customers away from controlled app stores, which could be anti-competitive.

The investigation comes under powers introduced by the Digital Markets Act (DMA), a landmark piece of legislation aimed at curbing the power of big tech. The commission is accusing the companies of non-compliance with the act and of failing to provide a fairer and more open digital space for European citizens and businesses.

Should the investigation conclude that there is a lack of full compliance with the DMA, gatekeeper companies could face heavy fines.

Designated as ‘gatekeepers’ by the DMA, Google owner Alphabet, Amazon, Apple, TikTok owner ByteDance, Meta and Microsoft have special responsibilities because of their dominance of key mobile technologies.

These companies are accused of steering developers away from competitor platforms and imposing various restrictions and limitations on their use.

The big tech companies are facing a growing legal backlash: last month, Apple was fined by the EU over its iOS ecosystem and business practices.

Whether this case succeeds or not, it’s interesting to note the growing willingness of the authorities to take these tech giants to court.

About time, according to some critics.

Source: New Electronic

China’s government will no longer buy Intel or AMD chips, or Microsoft products, for its PCs

China’s government reportedly started enforcing a law this week that it passed in December. The law bans the government from purchasing PCs with Intel and AMD chips inside, along with software products from Microsoft, including its Windows operating system.

According to The Financial Times (via PC World), the new rules on purchasing products for China’s government PCs were set in place in December by the country’s Information Technology Security Evaluation Center. They apply to all government bodies and agencies above the country’s township level.

China previously ordered its government offices and agencies to no longer use Microsoft’s Windows OS in 2022, in favor of a homegrown Linux-based OS. As a result, these new guidelines are not expected to affect Microsoft. However, the ban on Intel and AMD chips could result in a noticeable hit in the revenue numbers for both companies.

The ban applies only to China’s government PCs, however; it does not extend to the use of these products by private businesses or regular consumers in that country.

China previously banned the use of Apple’s iPhone products in its government buildings. It has also banned the use of products from Micron Technology for its infrastructure projects, citing security concerns.

These new moves come after the United States government barred exports to China’s Semiconductor Manufacturing International Corporation (SMIC) of the fabrication equipment needed to make certain chips in that country.

Late in 2023, the US government banned the export of some of Nvidia’s AI GPUs to China. Nvidia has instead developed an AI chip, the H20, specifically made to conform to the restrictions of the US government’s export rules. The company started taking preorders for the H20 chips in early 2024 and is expected to begin large-scale shipments of those China-specific AI GPUs sometime in the second quarter of 2024.

Source: Neowin

Woz calls out US lawmakers for TikTok ban: ‘I don’t like the hypocrisy’

Apple co-founder Steve Wozniak has criticized the US government’s targeting of TikTok, saying it is hypocritical to single out one social media platform for tracking users and not apply the same rule to all.

In an interview with news channel CNN, Woz was asked about Apple’s so-called “walled garden” approach to protecting users, and in response he said he was glad for the protection that he gets, and that Apple does a better job in this respect than other companies.

“And tracking you – tracking you is questionable. But my gosh, look at what we’re accusing TikTok of, and then go look at Facebook and Google and that’s how they make their businesses,” he added. “I mean, Facebook was a great idea. But then they make all their money just by tracking you and advertising, and Apple doesn’t really do that so much.”

Earlier this month, the US Congress passed the Protecting Americans from Foreign Adversary Controlled Applications Act, which aims to force TikTok’s Chinese owner ByteDance to either sell off its US-based biz or face being banned from operating in the country.

“I don’t understand it, I don’t see why,” commented Woz. “What are we saying? We’re saying ‘Oh, you might be tracked by the Chinese.’ Well, they learned it from us.”

Similar points are made in an article in Nikkei Asia, which states that US social media apps have formed a key part of Washington’s global influence operations for many years, and have provided “unparalleled intelligence collection opportunities” and “helped to project certain American political and cultural values into foreign societies.”

Woz continued by saying that “If you have a principle [that] a person should not be tracked without them knowing it, you apply it the same to every company, or every country. You don’t say, ‘Here’s one case where we’re going to outlaw an app, but we’re not gonna do it in these other cases.’ So I don’t like the hypocrisy, and that’s obviously coming from a political realm.”

The engineering brains behind Apple’s early products such as the Apple I and II personal computers, Woz also became an early member of digital rights group the Electronic Frontier Foundation (EFF).

He revealed in the interview that he largely avoids “the social web,” but gets a lot of fun out of watching TikTok “even if it’s just for rescuing dog videos and stuff.”

The Apple co-founder was also reported to have been hospitalized in Mexico City last November with a suspected stroke following a speech at the World Business Forum, but has apparently made a full recovery.

Source: The Register

E.U. launches probes into Meta, Apple and Alphabet under sweeping new tech law

The European Union on Monday began an investigation into Apple, Alphabet and Meta, in its first probe under the sweeping new Digital Markets Act tech legislation.

“Today, the Commission has opened non-compliance investigations under the Digital Markets Act (DMA) into Alphabet’s rules on steering in Google Play and self-preferencing on Google Search, Apple’s rules on steering in the App Store and the choice screen for Safari and Meta’s ‘pay or consent model,’” the European Commission said in a statement.

The first two probes focus on Alphabet and Apple and relate to so-called anti-steering rules. Under the DMA, tech firms are not allowed to block businesses from telling their users about cheaper options for their products or about subscriptions outside of an app store.

“The way that Apple and Alphabet have implemented the DMA rules on anti-steering seems to be at odds with the letter of the law. Apple and Alphabet will still charge various recurring fees, and still limit steering,” the E.U.’s competition chief, Margrethe Vestager, said Monday at a news conference.

Apple has already fallen foul of the E.U.’s rules. This month, the company was fined 1.8 billion euros ($1.95 billion) after the European Commission said it found that Apple had applied restrictions on app developers that prevented them from informing iOS users about alternative and cheaper music subscription services available outside of the app.

In a third inquiry, the commission said it is investigating whether Apple has complied with its DMA obligations to ensure that users can easily uninstall apps on iOS and change default settings. The probe also focuses on whether Apple is actively prompting users with choices to allow them to change default services on iOS, such as for the web browser or search engine.

The commission said that it is “concerned that Apple’s measures, including the design of the web browser choice screen, may be preventing users from truly exercising their choice of services within the Apple ecosystem.”

Apple said it believes it is in compliance with the DMA.

“We’re confident our plan complies with the DMA, and we’ll continue to constructively engage with the European Commission as they conduct their investigations. Teams across Apple have created a wide range of new developer capabilities, features, and tools to comply with the regulation,” an Apple spokesperson told CNBC on Monday.

The fourth probe targets Alphabet, as the European Commission looks into whether the firm’s display of Google search results “may lead to self-preferencing” in relation to Google’s other services, such as Google Shopping, over similar rival offerings.

“To comply with the Digital Markets Act, we have made significant changes to the way our services operate in Europe,” Oliver Bethell, director of competition at Alphabet, said in a statement.

“We have engaged with the European Commission, stakeholders and third parties in dozens of events over the past year to receive and respond to feedback, and to balance conflicting needs within the ecosystem. We will continue to defend our approach in the coming months.”

Alphabet pointed to a blog post from earlier this month, wherein the company outlined some of those changes — including giving Android phone users the option to easily change their default search engine and browser, as well as making it easier for people to see comparison sites in areas like shopping or flights in Google searches.

Meta investigation

The fifth and final investigation focuses on Meta and its so-called ‘pay or consent’ model. Last year, Meta introduced an ad-free subscription model for Facebook and Instagram in Europe. The commission is looking into whether forcing users to choose between paying for the ad-free subscription and consenting to the terms and conditions of the free service is in violation of the DMA.

“The Commission is concerned that the binary choice imposed by Meta’s ‘pay or consent’ model may not provide a real alternative in case users do not consent, thereby not achieving the objective of preventing the accumulation of personal data by gatekeepers,” the commission’s statement said.

Thierry Breton, the E.U.’s internal market commissioner, said during the news conference that there should be “free alternative options” offered by Meta for its services that are “less personalized.”

“Gatekeepers” is a label for large tech firms that are required to comply with the DMA in the E.U.

“We will continue to use all available tools, should any gatekeeper try to circumvent or undermine the obligations of the DMA,” Vestager said.

Meta said subscriptions are a common business model across various industries.

“Subscriptions as an alternative to advertising are a well-established business model across many industries, and we designed Subscription for No Ads to address several overlapping regulatory obligations, including the DMA. We will continue to engage constructively with the Commission,” a Meta spokesperson told CNBC on Monday.

Tech giants at risk of fines

The commission said it intends to conclude its probes within 12 months, but Vestager and Breton stressed during the Monday briefing that the DMA does not dictate a hard deadline for the inquiries. The regulators will inform the companies of their preliminary findings and explain the measures they are taking, or that the gatekeepers should take, to address the commission’s concerns.

If any company is found to have infringed the DMA, the commission can impose fines of up to 10% of the tech firms’ total worldwide turnover. These penalties can increase to 20% in case of repeated infringement.
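
To put those percentage caps in concrete terms, here is a minimal sketch of the fine ceiling the article describes; the turnover figure in the example is hypothetical, not drawn from any company’s accounts.

```python
def dma_max_fine(worldwide_turnover_eur: float, repeat_infringement: bool = False) -> float:
    """Ceiling on a DMA fine as described in the article: 10% of total
    worldwide turnover, rising to 20% for repeated infringement."""
    cap = 0.20 if repeat_infringement else 0.10
    return cap * worldwide_turnover_eur

# Hypothetical gatekeeper with EUR 300 billion in annual worldwide turnover:
print(dma_max_fine(300e9))                            # 30,000,000,000.0 (EUR 30B)
print(dma_max_fine(300e9, repeat_infringement=True))  # 60,000,000,000.0 (EUR 60B)
```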

The commission said it is also looking for facts and information to clarify whether Amazon may be preferencing its own brand products on its e-commerce platform over rivals. The commission is further studying Apple’s new fee structure and other terms and conditions for alternative app stores.

This month, the tech giant announced that users in the E.U. would be able to download apps from websites rather than through its proprietary App Store — a change that Apple has resisted for years.

The E.U.’s inquiries into Apple and Amazon do not constitute formal investigations.

Source: NBC News

TikTok Rapidly Grows Office Footprint, Toughens RTO Policy

The social media giant is eyeing 600K SF in San Jose, Seattle, Nashville

TikTok is undertaking a rapid expansion of its U.S. office footprint as it toughens its return-to-office mandate on workers.

The Chinese-owned social media giant is shopping for what could be more than 600K SF in San Jose, Seattle and Nashville, rapidly expanding an office footprint that now encompasses space in New York, Los Angeles, San Francisco and Austin.

TikTok is using a customized app to monitor its tougher return-to-office policy, which requires its U.S. workforce of 7,000 to be in the physical office at least three days a week, with an unspecified number of workers required to come in five days a week.

The app, which TikTok calls My RTO, tracks badge swipes to determine if employees are fulfilling the RTO mandate.
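
The report does not describe how My RTO works internally; purely as an illustration, a badge-swipe compliance check of the kind described could look something like the sketch below, where all names, data, and thresholds are hypothetical.

```python
from collections import defaultdict
from datetime import date

REQUIRED_DAYS = 3  # the reported baseline; some workers are reportedly required in five days

def weekly_compliance(swipes: list[tuple[str, date]]) -> dict[str, bool]:
    """Return, per employee ID, whether the distinct badge-swipe days in a
    given week meet the in-office requirement."""
    days = defaultdict(set)
    for employee_id, day in swipes:
        days[employee_id].add(day)
    return {emp: len(d) >= REQUIRED_DAYS for emp, d in days.items()}

# Hypothetical week of badge swipes for two employees
swipes = [
    ("emp-1", date(2024, 2, 5)), ("emp-1", date(2024, 2, 6)), ("emp-1", date(2024, 2, 7)),
    ("emp-2", date(2024, 2, 5)),
]
print(weekly_compliance(swipes))  # {'emp-1': True, 'emp-2': False}
```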

TikTok is in talks to occupy 100K SF of the newly built 16-story Moore Building on Music Row in Nashville. Los Angeles-based TikTok has been leasing three floors encompassing about 50K SF at One Nashville, anchoring a WeWork space, according to a report in CoStar.

TikTok parent ByteDance is negotiating a sublease agreement that will expand its footprint at the former Roku complex in San Jose from 660K SF to more than 1M SF, the report said.

ByteDance currently subleases two buildings on Coleman Avenue that Roku decided to vacate in its Coleman Highline portfolio last year. Roku is still seeking a tenant for two other buildings at the Coleman complex.

TikTok also is finalizing plans to double its space at the Lincoln Square North Tower in Bellevue, WA, where it currently leases about 132K SF, taking space that was offloaded by Microsoft last year. TikTok occupies 100K SF in the Key Center, about a block away from the Lincoln Square tower, the report said.

TikTok won’t have any trouble locating available tech space in West Coast locations as many opportunities exist in space listed for sublease by tech companies that have been downsizing their footprints.

Analysts are predicting that TikTok’s U.S. revenue will increase by more than 25% in 2024 to $11B, an amount equal to 3.5% of the total digital ad spend in the country.

Source: Globest

How Apple’s App Tracking Policy Curbs Financial Fraud

An essential adage these days is to protect your private data to keep fraudsters at bay. A new paper has quantified the incidence of financial fraud complaints among app users who follow that advice. Titled “Consumer Surveillance and Financial Fraud,” the paper was co-authored by Wharton finance professor Huan Tang and finance professors Bo Bian at the University of British Columbia and Michaela Pagel at Washington University in St. Louis.

The authors focused on Apple’s App Tracking Transparency (ATT) policy, which by default opts out users on Apple’s iOS platform from sharing their data. They found that a 10% increase in the number of iOS users in a given zip code results in a 3.21% drop in financial fraud complaints from that location. The study also found that “the effects are concentrated in complaints related to lax data security and privacy.”

The drop in financial fraud complaints could grow tenfold if tight privacy laws are universally applied. “If the whole population of [cell phone] users on both the iOS and Android platforms were subject to a policy like the ATT, then the number of financial fraud complaints should drop by 32%, assuming the effect scales up linearly,” Tang said.
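
That tenfold figure is a straight linear extrapolation of the point estimate above: if a 10% increase in a zip code’s iOS share is associated with a 3.21% drop in complaints, then extending ATT-style protection to the whole population, a tenfold larger shift under the paper’s linearity assumption, gives

\[
\Delta_{\text{complaints}} \approx \frac{100\%}{10\%} \times 3.21\% \approx 32\%.
\]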

Apple’s ATT policy, which was launched in April 2021, required all app providers to obtain explicit user permission before tracking them across apps or websites owned by other companies. Consequently, without a user’s permission, Apple would not provide those apps and websites with so-called “mobile identifiers.”

Although the ATT policy only applies to mobile users, it has implications for commercial surveillance and fraud among the general population due to the prevalence of smartphones, the paper pointed out. After the ATT policy, companies with an app are 42% less likely to experience cyber incidents, compared to firms without an app, it added. The paper described the implementation of ATT as “an event that enhances data security and privacy standards.”

A Shock to the Data Industry

The ATT policy dealt “a major shock to the data industry,” especially providers of mobile apps that are available on the Apple App Store or the Google Play store, the paper stated. As of February 2022, only 18% of app users who were asked for tracking permission granted it; the other 82% refused, according to Flurry, a mobile advertising company.

According to Tang, Meta’s Facebook tops the list of ATT casualties. “Facebook is the largest victim of Apple’s privacy campaign, because 98% of Facebook’s revenue comes from targeted ads,” she said. In February 2022, Facebook’s share price plunged a record 26% after it announced its 2021 fourth-quarter results, in which it blamed Apple’s privacy changes and macroeconomic challenges for its forecast of lower revenues in the subsequent quarter. Apple’s privacy policy would cost the company $10 billion in 2022, Facebook had warned. The implementation of ATT also caused sharp falls in the stock prices of other firms that own active iOS apps, the paper noted, citing a companion paper on data privacy in mobile apps that Tang co-authored.

Tang explained how exactly the ATT hurt Facebook. In order to target consumers for advertising, Facebook needs to link different pieces of data from various sources about the same individual using a mobile identifier that links all of the individual’s mobile devices and that links all user choices from different websites, she explained. But after ATT, Facebook couldn’t use mobile identifiers unless iOS users explicitly agreed to share their data with a third party, she added.

Facebook’s Loss, Apple’s Gain

Apple, in contrast, benefited because its users were happy that it was taking steps to protect their privacy, Tang said. “Apple’s privacy campaign is self-serving because it allows the tech giant to tap into the targeted ad industry,” she continued. “And its largest opponent besides Google is Facebook. By taking down Facebook, there’s a void to be filled.” Incidentally, France’s competition authority and Italy’s antitrust agency have accused Apple of abusing its dominance in the market to set unfair conditions.

Apple stepped in later with crowd-level targeting, where it could use aggregated information of specific communities of users it created, Tang added. Other platforms that wanted to target Apple users had to adopt that approach, which allows “less refined targeting,” she explained. As Apple’s guide to search ads stated, “targeting specific audiences will prevent ads from appearing to users who have turned off the Personalized Ads setting.”

Apple had begun tightening the screws on data privacy more than a year before it launched the ATT policy, the paper noted. In December 2020, Apple introduced “nutrition” privacy labels, which required all developers to provide information about their data practices in a standardized and user-friendly format. Developers who failed to comply with that policy faced the risk of having their future app updates rejected by Apple’s app store.

In July 2022, Google too launched data safety forms on its Google Play platform, which also required firms to disclose the types of data they collected from users and how they would use that data. Google’s data safety form also required disclosure of data security practices, including whether the user data is encrypted during transit.

How the Study Tracked Financial Fraud

The authors began with detailed foot traffic data from Safegraph (a provider of datasets on global places) to calculate zip-code-level shares of iPhone users out of all smartphone users. Next, they analyzed data from the Consumer Financial Protection Bureau (CFPB) and the Federal Trade Commission (FTC) on the number of financial fraud complaints and the amount of money lost due to fraud. They then applied the 82% opt-out rate of ATT to arrive at their finding of a 3.21% reduction in financial fraud complaints.
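
As a rough sketch of that pipeline, the snippet below merges a zip-code-level iOS share with complaint counts and fits a simple regression. The file names, column names, and bare OLS specification are all illustrative; the paper’s actual variable construction and identification strategy are considerably more involved.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical inputs: iOS share of smartphones per zip (from foot traffic data)
# and counts of financial fraud complaints per zip (from CFPB/FTC data).
ios = pd.read_csv("ios_share_by_zip.csv")          # columns: zip, ios_share
complaints = pd.read_csv("fraud_complaints.csv")   # columns: zip, n_complaints

panel = ios.merge(complaints, on="zip")
panel["log_complaints"] = np.log1p(panel["n_complaints"])

# A higher iOS share should show up as fewer complaints (negative coefficient).
model = smf.ols("log_complaints ~ ios_share", data=panel).fit()
print(model.params["ios_share"])
```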

Significantly, the study found that trends in the likelihood and number of financial fraud complaints were more pronounced among minorities, women, and younger people, suggesting that these groups are more vulnerable to surveillance and fraud. Those findings contribute to the process of creating new regulations and rules to enhance consumer data protection and privacy, the paper stated.

To isolate CFPB complaints that relate to financial fraud originating from lax data security, the authors used keyword searches to look for indicators such as fraud, scam, or identity theft. They used that in combination with a machine learning method that generates a likelihood of complaints being related to financial fraud caused by data security issues.
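
A toy version of that two-pronged flagging, keyword indicators combined with a learned classifier, might look like the sketch below; the keywords, training examples, and model choice are illustrative stand-ins, not the authors’ actual method.

```python
import re
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

KEYWORDS = re.compile(r"\b(fraud|scam|identity theft)\b", re.IGNORECASE)

# Tiny hypothetical training set; the authors work with real complaint narratives.
texts = [
    "someone opened a credit card in my name, clear identity theft",
    "my mortgage payment was applied to the wrong month",
    "this company ran a scam and drained my checking account",
    "question about my student loan interest rate",
]
labels = [1, 0, 1, 0]  # 1 = fraud related to lax data security

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)

def fraud_likelihood(text: str) -> float:
    """Combine the hard keyword signal with the classifier's probability."""
    keyword_hit = 1.0 if KEYWORDS.search(text) else 0.0
    return max(keyword_hit, clf.predict_proba([text])[0, 1])

print(fraud_likelihood("they stole my identity and opened accounts"))
```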

Main Findings of the Study

  • A 10% increase in the number of iOS users in a given zip code results in a 3.21% drop in financial fraud complaints from that location.
  • About 26% of financial companies listed in the CFPB complaints database own an app, and 11% of them collect and share user data with third parties, such as data brokers, other websites, and advertising networks. The effect of ATT on consumer complaints is more pronounced for companies that are active in the app market, share user data with third parties, or do not encrypt user data in transit.
  • Complaints of financial fraud are more likely in categories like credit reporting and debt collection than in others like student loans and mortgages. Specifically, in a zip code with 10% more iOS users, the ATT policy reduced the number of financial fraud complaints about credit reporting and debt collection by 2.48% and 0.61%, respectively.
  • The ATT policy helped reduce money lost in all complaints by 4.7%. Of that, the money lost as reported in internet and data security complaints would be about 40% less with the ATT policy.

Regulatory Reforms

“Our results provide compelling evidence in favor of industry-led regulations aimed at constraining consumer surveillance practices,” the paper stated. Tang recently presented her findings to the FTC, which she said is eager to use her paper’s findings in its efforts to frame future regulation on data privacy and security.

“For their cost and benefit analysis, the FTC was interested in the cost to consumers when firms collect excessive amounts of data, but it is very hard to find empirical evidence of that,” she said. “This is where our paper comes in. We provide a point estimate.”

According to Tang, Apple’s efforts at strengthening data privacy for cell phone users have advantages over the European Union’s General Data Protection Regulation (GDPR) that was launched in 2018. She said users have found it cumbersome to navigate the privacy notices of firms that pop up on their screens, especially because they are not standardized and require multiple clicks before they can understand how their data might be used. A CNBC report referred to that experience of users as “consent fatigue.”

The paper pointed to other efforts that are underway to limit data transfers across firms, including Google’s plan to phase out third-party cookies in Chrome by 2024. Similar to the GDPR, laws in Virginia and Connecticut require opt-in consent for sharing sensitive personal information, according to a report by OneTrust, a firm that advises companies on issues including privacy standards. Other privacy laws in California, Colorado, and Utah follow an opt-out mechanism for consent in most areas, it added.

Source: Knowledge at Wharton

AI Policy and Governance: What You Need to Know

Dedicated AI policies, in addition to other management frameworks and documentation, help organizations more effectively manage AI governance, data usage, and ethical best practices. AI governance — the practice of monitoring, regulating, and managing AI models and usage — is being rapidly adopted by organizations around the globe.

In this guide, we’ll dissect AI policy and governance in greater detail, explaining how a comprehensive AI policy that emphasizes all areas of AI governance can lead to more manageable, explainable AI and a more ethical and successful operation as a whole.

What Is an AI Policy?

An artificial intelligence policy is a dynamic, documented framework for AI governance that helps organizations set clear guidelines, rules, and principles for how AI technology should be used and developed within the organization.

Creating an AI policy should help your business leaders clarify and highlight any ethical, legal, or compliance standards to which your organization is committed, as well as identify the “who,” “what,” “when,” “why,” and “how” for strategic AI usage that aligns with overall organizational goals and strategies.

Every organization’s AI policy will look a little different to meet its specific objectives for AI governance, but in general, most AI policies include some version of the following components and structural elements (a minimal machine-readable sketch follows the list):

  • An overarching vision for AI usage and growth in the organization.
  • Mission statements, clear objectives, and/or KPIs that align with this vision.
  • Detailed information about regional, industry-specific, and relevant regulatory compliance laws as well as other ethical considerations.
  • A catalog of approved tools and services that can be used for AI development and deployment purposes.
  • Defined roles and responsibilities related to AI usage.
  • An inventory and procedure for data privacy and security mechanisms.
  • A defined procedure for reporting and addressing AI performance and security issues.
  • Standards for AI model performance evaluation.
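
Here is that sketch: a minimal, machine-readable skeleton of the components above. Every field name and value is illustrative, not a standard or a template any regulator prescribes.

```python
# Illustrative AI policy skeleton; all values are placeholders.
AI_POLICY = {
    "vision": "Use AI to improve service delivery without eroding user trust.",
    "objectives": ["Cut manual review time by 20%", "Zero privacy incidents"],
    "compliance": {"regimes": ["GDPR", "sector rules"], "review_cadence_days": 90},
    "approved_tools": ["internal-llm-gateway", "vendor-vision-api"],
    "roles": {"owner": "CIO", "data": "CDO", "ethics_review": "AI Ethics Committee"},
    "data_privacy": {"pii_in_training_data": False, "encrypt_in_transit": True},
    "incident_reporting": {"channel": "ai-incidents@example.com", "sla_hours": 24},
    "model_evaluation": {"metrics": ["accuracy", "bias audit"], "frequency": "quarterly"},
}
```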

What Is AI Governance?

AI governance is a group of best practices that includes policies, standardized processes, and data and infrastructure controls that contribute to a more ethical and controlled artificial intelligence ecosystem.

When organizations put appropriate AI governance standards and frameworks in place, training data, algorithms, model infrastructure, and the AI models themselves can be more closely monitored and controlled throughout initial development, training and retraining, deployment, and daily use. This contributes to a more efficient AI operation as well as compliance with relevant data privacy and AI ethics regulations.

Who Manages AI Governance?

AI governance is a complex process, especially if your organization is working with multiple generative AI models or other large-scale AI platforms. The following individuals and teams play important roles in different aspects of AI governance and management:

  • Executive Leaders: An organization’s C-suite and other top leaders should establish the overall direction, goals, and vision for AI governance and associated AI policies. Regardless of their specific title, all business executives should be clear on what AI tools are being used and what regulations and policies are in place to regulate that usage.
  • Chief Information Officer: Unless your organization prefers to have a chief AI officer or chief technical officer oversee this kind of work, the CIO is the primary business leader who takes broader organizational strategies and goals and applies them to actual AI governance development and implementation. This individual is also responsible for ensuring that AI integrates smoothly and securely with all other technologies and infrastructures in your business’s tech stack.
  • Chief Data Officer: The CDO is primarily responsible for data governance and data-level quality assurance. In their role, they work to manage data quality, data privacy and compliance, and transparent data preparation workflows for AI model training sets.
  • Chief Compliance Officer and Legal/Compliance Teams: This individual or group of individuals keeps up with international, national, regional, industry-specific, and other regulations that may impact how your organization can use data — including PII and intellectual property — and AI models. If a chief ethics officer works among this team, this work may go beyond simple compliance management and move toward setting up ethical decision-making and training frameworks.
  • Data Science, AI, and IT Security Teams: These are the teams that handle the hands-on development tasks for training data, algorithms, models, performance monitoring, and security safeguards. While they may not have a hand in setting AI governance standards, they will likely play the biggest role in carrying out these standards.
  • AI Ethics Committee: If your organization has established an AI ethics committee that operates separately from your C-suite executives, these committee members will act as advisors to leadership in establishing governance frameworks that consider AI ethics from all angles, including personal privacy, transparent data sourcing and training, and environmental impact.
  • HR and Learning and Development Teams: These leaders are in charge of incorporating AI governance best practices and rules into the recruitment and hiring process so all new members of the team are aware of the roles and responsibilities they have when using AI. This team may not come up with the actual training materials or goals, but because of their background with other types of training, they may be tasked with leading AI usage training across the organization.
  • Third-Party Consultants: If your organization chooses to hire third-party consultants for data management, AI development, or strategic planning, these individuals may take over some or all of the other taskwork covered above. However, you’ll want to make sure key stakeholders in your organization work collaboratively with these advisors to create an AI governance policy that is both comprehensive and fitted to your specific needs.
  • Government and Industry Regulators: Depending on the industry or region you’re working in, third-party regulators could play a major role in determining what AI governance looks like for your organization, as they establish and enforce rules for ethical AI and data use. Many countries and regional groups like the EU are currently working on more comprehensive AI legislation, so expect this group’s role in AI governance to grow quickly in the coming months and years.

Why Is AI Governance Important?

AI governance is one of the most effective ways to establish, organize, and enforce standards for AI development and use that encourage ethical and compliant practices, transparency, continual monitoring and improvement, and cross-team collaboration.

AI governance can improve AI model usage outcomes and help organizations use AI in a way that protects customer data, aligns with compliance requirements, and maintains their reputation as an ethical operator, not only with their customers but also with their partners and the industry at large.

Establishing an independent AI governance strategy can also help your organization get more out of the AI technology you’re using, as creating this type of plan requires your team to flesh out its AI vision, goals, and specific roles and responsibilities in more granular detail. The accountability that gets built into an AI governance strategy helps to prevent and mitigate dangerous biases, create a plan of action for when AI development or use goes awry, and reemphasize the importance of maintaining personal data privacy and security.

The Benefits of Having an AI Policy for AI Governance

An AI policy extends several benefits to organizations that are looking to develop a more comprehensive AI governance strategy. These are just a handful of the ways in which a dedicated policy can help you stay on task, compliant, and oriented with your initial vision:

  • Structured guidance for all AI tool developers and users: This type of AI policy can act as a user manual for both AI developers and users of these tools, as it considers the entire AI lifecycle, from development to deployment to ongoing monitoring and fine-tuning. The standardized rules that are part of this type of policy facilitate cross-organizational buy-in and help your technical teams create a roadmap for AI best practices in real-world scenarios.
  • A mechanism for widespread accountability: AI policies provide documented rules for organizational and role-specific AI best practices. This means that all relevant stakeholders have a point of reference that clearly outlines their roles, responsibilities, procedures, limitations, and prohibitions for AI usage, which helps to avoid both ethical and compliance issues.
  • Better adherence to regulatory and data security laws: While the leaders in your organization are likely aware of regulatory and data security laws and how they apply to your business, chances are most other employees could benefit from additional clarification. Enforcing an AI policy that reiterates these laws and how they apply to your organization can assist your compliance and legal teams in communicating and mitigating issues with compliance laws at all levels of the organization.
  • Clear outline of data privacy standards and mechanisms: Beyond simply stating data security and compliance expectations, AI policies detail how data privacy works and what mechanisms are in place to protect data when it’s handled, stored, and processed for AI models. This level of detail guides all employees in how they should protect an organization’s most sensitive data assets and also gives the business a clear blueprint for what they should look for and where they should look during AI audits.
  • Builds customer trust and brand reputation: As AI’s capabilities and use cases continue to expand, many people are excited about the possibilities while others — probably including many of your customers — are more distrusting of the technology. Establishing an AI policy that enforces AI governance while creating more transparency and explainability is a responsible way to move forward and gives your customers more confidence in how your organization uses AI in its operations.
  • Preparation for incoming AI regulations: While few AI-specific regulations have passed into law at this point, several groups, including the EU, the U.K., and the U.S., are working toward more comprehensive AI regulations and laws. Creating a comprehensive AI policy now can help your organization proactively align with AI best practices before they are required in your regions of operation; doing this work now can reduce the headache of reworking your processes down the line.

AI Policy and Governance Best Practices

If your AI policy is not clear on its expectations for AI governance and general use, your teams may run into issues with noncompliance, security, and other avoidable user errors. Follow these best practices to help every member of your team, regardless of how they work with AI, remain committed to high standards of AI governance:

  • Pay attention to relevant regulations: Consider important regional, national, and industry-specific regulations and stay up-to-date with your knowledge so AI systems remain in compliance at all times.
  • Implement standards for data security and data management: AI is a data-driven technology, so be sure to use appropriate data management tools, strategies, and processes to protect and optimize that asset.
  • Cover the entire AI lifecycle in your AI policy: Your AI policy should not simply focus on how AI models are developed or how they are used. Instead, create a comprehensive policy that covers everything from data preparation and training to model creation and development, model deployment, model monitoring, and model fine-tuning.
  • Establish ethical use standards and requirements: Keep in mind employee-specific roles and responsibilities and set up role-based access controls or other security standards to underpin those rules and protect your consumers’ most sensitive data. Additionally, pay attention to important concepts like AI bias, fairness, data sourcing methods, and other factors that impact ethical use.
  • Create standards for ongoing evaluation of model performance: What will you be looking at when you’re monitoring your models in “the real world”? Your AI policy should detail important performance metrics and KPIs so you can stick to your goals and fairly evaluate performance at all stages of AI usage and development.
  • Accompany your AI policy with dedicated user training: To help all employees understand how your AI governance policy applies to their work, provide dedicated user training that covers cybersecurity, ethical use, and other best practices, ideally with real-world scenarios and examples.
  • Document and regularly update your AI policies: AI policies should not be static documents; they should dynamically change as tooling, user expectations, industry trends and regulations, and other factors shift over time.
  • Communicate your ethical practices to relevant stakeholders and customers: Strategically and transparently communicate your governance standards and details of your policy to third-party investors, customers, partners, and other important stakeholders. This communication strategy helps to establish additional trust in your brand and its ethical approach to AI.
[Image caption: Some AI and ML platforms, including Amazon SageMaker, include built-in model governance features to support role-based controls and other usage rules. Source: AWS.]

Bottom Line: Using Your AI Policy and Governance Best Practices for Better AI Outcomes

Most businesses are already using AI in some fashion, or will likely adopt the technology soon to keep up with the competition in their industry. Creating and adhering to an AI policy that covers compliance, ethics, security, and practical use cases in detail not only gives these organizations a more strategic leg to stand on when getting started with large-scale AI projects but also helps them meet customer and legal expectations when using AI technology.

Developing detailed AI policies and governance strategies often feels like an overwhelming process, and especially for organizations that are just dipping their toes into the AI pool, establishing this kind of policy may feel like overkill or a waste of time. But this is the wrong way to look at it; instead, think of your AI governance policy as an insurance policy for the modern enterprise. Especially as AI regulations become more well-defined in the coming months and years, it will pay to have an AI policy that proactively paves the way to more responsible and effective artificial intelligence.

Source: EWeek

City of Seattle Releases Generative Artificial Intelligence Policy Defining Responsible Use for City Employees

Seattle – Today, the City of Seattle released its Generative Artificial Intelligence (AI) Policy to balance the opportunities created by this innovative technology with strong guardrails to ensure it is used responsibly and accountably. The new policy aligns with President Biden’s Executive Order regarding AI announced earlier this week, and positions Seattle to continue to be a national leader in civic innovation and technology.

President Biden’s Executive Order focuses on new standards for AI developers to prioritize safety and security, protect Americans’ privacy, advance equity, protect workers, and more. Seattle Deputy Mayor Greg Wong was in Washington D.C. for the announcement to support these new guidelines.

“Innovation is in Seattle’s DNA, and I see immense opportunity for our region to be an AI powerhouse thanks to our world-leading technology companies and research universities. Now is the time to ensure this new tool is used for good, creating new opportunities and efficiencies rather than reinforcing existing biases or inequities,” said Seattle Mayor Bruce Harrell. “As a city, we have a responsibility to both embrace new technology that can improve our service while keeping a close eye on what matters – our communities and their data and privacy. This policy is the outcome of our One Seattle approach to cross-sector collaboration and will help guide our use of this new technology for years to come.”

The City’s policy was developed after a six-month working period with the Generative AI Advisory Team and City employees. The policy, written by Seattle’s Interim Chief Technology Officer Jim Loter and based on the group’s work, takes a principle-based approach to governing the use of Generative AI, which will allow greater flexibility as the technology evolves while ensuring it aligns with the City’s responsibility to serve residents.

The seven governing principles are:

  1. Innovation and Sustainability  
  2. Transparency and Accountability
  3. Validity and Reliability
  4. Bias and Harm Reduction and Fairness
  5. Privacy Enhancing
  6. Explainability and Interpretability
  7. Security and Resiliency

The City’s new AI policy touches on many aspects of generative AI. It highlights several key factors for responsible use in a municipality, including attributing AI-generated work, having an employee review all AI work before it goes live, and limiting the use of personal information in the materials AI uses to develop its output. The policy also stipulates that any work with a third-party vendor or tool must include these principles for AI. This will help mitigate novel risks that have the potential to adversely affect the City’s ability to fulfill its legal commitments and obligations about how it uses and manages data and information.

City employees using AI technology will be held accountable for compliance with these commitments. All use of AI technology must go through the same technology reviews as any other new technologies. Those reviews take an in-depth look at privacy, compliance, and security, among others.

“I’m proud of the way the City of Seattle has responded thoroughly to the development of this policy,” said Seattle’s Interim Chief Technology Officer Jim Loter. “Technology is always changing. Our responses to these changes prove we are open to embracing new ways of providing services to our communities, while also mindful of the data we need to protect. I know this is an evolving topic, and I look forward to continuing this work and these conversations with experts in the field who also happen to live in our community and benefit from our services as a City. It truly emphasizes the meaning of One Seattle.”

The City policy applies to generative AI, which is a special type of AI technology. Generative AI produces new content for user requests and prompts by learning from large amounts of data called a “large language model.” The capability to create new content, and to continually learn from these large data models makes it possible for a computerized system to produce content that looks and sounds like it was done by a human. While AI, including generative AI, has the potential to enhance human work across many fields of human enterprise, its use has also raised many questions about the consequences of employing smart systems. Among these are ethics, safety, accuracy, bias, and attribution for human work used to inform AI system models. 

The Generative AI Policy Advisory Team included technology industry leaders from the University of Washington, the Allen Institute for AI, and members of the City’s Community Technology Advisory Board (CTAB). Seattle Information Technology employees provided input as well.

What members of the Generative AI Advisory Team had to say about this work and policy development:

Nicole DeCario (Director, AI & Society) and Jacob Morrison (Predoctoral Researcher and Public Policy Lead) of the Allen Institute for AI, members of the Generative Artificial Intelligence Advisory Team

“The City of Seattle is taking a values-driven approach to creating their generative AI policy, carefully weighing the benefits and harms this technology brings. We are grateful to support this work and commend the City on its leadership in prioritizing the responsible use of AI. We hope the City’s policies can provide a blueprint for other municipalities around the country as it becomes increasingly common to interact with AI systems in our daily lives.”

CTAB member Omari Stringer

“As a CTAB member and resident of Seattle, I am happy to see the City of Seattle taking steps to ensure the responsible use of innovative technologies such as Generative AI. Although the pace of innovation often exceeds the pace of policy, it is important to engage with stakeholders early to set a strong foundation for future use cases. While I believe we should tread carefully in this new domain, especially with the importance of the work the City carries out, there are certainly many opportunities for AI to enhance the delivery of services to the public. I appreciate the unique opportunity to provide my voice and expertise to help bridge the gap between innovation and ethics.”

Source: Harrell

AI Policy Yields a Goldmine for Lobbyists

The government’s burgeoning interest in artificial intelligence policy is turning into the next big payday for K Street.

Lobbyists are rushing to sign up AI companies as clients. And K Street firms also are being enlisted by a sprawling constellation of industries and interest groups that want help influencing AI policy.

Cashing in on the latest policy fight is a classic Washington narrative. But unlike, say, cryptocurrency or marijuana regulation, AI policy touches just about every industry. Groups as disparate as the NFL Players Association, Nike, Amazon and the Mayo Clinic have enlisted help from firms to lobby on the matter.

Some lobbyists compared the boom in business opportunities to the cryptocurrency policy debate that brought K Street millions. But AI has the potential to be even bigger.

Lobbyists in the AI space said others across town are angling themselves as subject matter experts as the new work becomes available. Nearly every industry has realized they will be affected by artificial intelligence, and the business community is aggressively looking for intel, they said.

“Every lobbying firm in town is trying to make themselves out to be an expert in everything to try and lure in clients, so AI is just one of them,” said one lobbyist granted anonymity to discuss dynamics on K Street. “I’d be hard-pressed to name you an AI expert downtown. It’s hard enough to pick the AI experts in policymaking positions.”

Another lobbyist said that this past spring, lobbyists without any tech clients began bringing up artificial intelligence at political fundraisers as a means to attract new clients. The same tactic happened with cryptocurrency two years ago, the person said. “The whole point of the business is to find people who need your services and will pay you to do that,” the lobbyist said, laughing.

Carl Thorsen, a lobbyist who has some clients with a stake in the issue, compared the pattern to when Congress was trying to prohibit internet gambling years ago. Suddenly, “every consultant under the sun” was working for an internet gambling client, said Thorsen, who was counsel at the House Judiciary Committee when it handled the issue. His firm has already heard from clients that some consultants are pitching themselves as the AI experts, he added.

“What people don’t understand, they are afraid of, and I’m certain there are plenty of consultants who’ve decided to market themselves as AI experts,” he said.

The lobbying frenzy started long before the White House issued its executive order on AI and comes as Congress starts to dig in on related policy. Broad swathes of industry are seeking new incentives from Washington — including subsidies for AI research and workforce retraining — while avoiding onerous rules on how they develop or deploy the emerging technology. Other industry sectors are squabbling over how AI should apply to topics as disparate as copyright, criminal justice, health care, banking and national defense. Looming over it all are calls from some top AI companies for Washington to impose a licensing regime to govern the most advanced AI models — a path some warn would lock in the dominance of leading AI firms like OpenAI.

While disclosure forms suggest OpenAI has not officially hired any lobbyists, it’s still building a ground game in Washington. The company recently tapped law firm DLA Piper to coach CEO Sam Altman on how to testify before Congress. It has also hired Washington lawyer Sy Damle, a partner at Latham & Watkins, to represent it in ongoing copyright lawsuits sparked by its generative AI tools. In September, Damle organized a letter campaign pushing back on possible AI-driven changes to copyright law, though an OpenAI spokesperson said the company had no involvement in that effort. OpenAI is also looking to hire a U.S. congressional lead, budgeting between $230,000 and $280,000 annually for that role.

Altman also gave $200,000 to President Joe Biden’s joint fundraising committee. Shortly after his donation, he participated in a June meeting between Biden and the visiting Indian Prime Minister Narendra Modi. Altman was also invited to the state dinner honoring Modi.

LinkedIn co-founder Reid Hoffman, an AI investor who has sat on OpenAI’s board, has given more than $700,000 to Biden’s joint fundraising committee and has publicly praised the administration’s recent AI executive order. Top Microsoft employees have also given tens of thousands of dollars to Biden’s joint fundraising committee.

“I’m very concerned that AI active executives are trying to cultivate Democrats now just like Big Tech cultivated Democrats a decade or two ago, and Wall Street did a decade before that,” said Jeff Hauser, founder of the Revolving Door Project. “AI knows that decisions in Washington that are made in the next few years will set the course of the industry for a generation. It’s a really good time to invest four-, five-, six-, maybe even seven-digits worth of campaign cash and potentially yield 10- or 11-digit returns.”

Source: Politico

Google’s Employee-Friendly Leave Policies Set High Standards for Work-Life Balance

In the pursuit of a fulfilling career, one often yearns for a job that not only offers financial security but also nurtures a healthy work-life balance. While such opportunities may seem elusive, tech giant Google has emerged as a beacon of hope, renowned for its employee-centric leave policies that set a benchmark in the corporate world.

Every company has its own set of rules and regulations, often dictating the number of leaves allocated to employees. However, the stark reality in many organizations is that these well-deserved leaves often go unutilized due to the relentless demands of the job. In contrast, Google stands out with its remarkable benefits, luring aspiring professionals to join its ranks.

Google’s leave policies extend far beyond the ordinary, offering an array of enticing benefits. Among these are paid leaves, vacation days, provision of complimentary food and snacks within the workplace, reimbursement for internet and mobile phone expenses, and the flexibility for employees to work remotely for up to four weeks annually. This unwavering commitment to work-life balance has positioned Google as an employer of choice.

Beyond the customary national and festival holidays, Google provides an array of specialized leaves to cater to employees’ diverse needs. These include casual leave, sick leave, adoption leave, maternity leave, and paternity leave, among others. Notably, Google employees are entitled to a generous quota of 15 paid leaves each year. These accrued leaves are deposited into their leave balance account after completing one year of service, affording them the flexibility to utilize these as per their requirements.

Google’s benevolence further extends to granting six casual leaves annually, which employees can avail of when attending to personal matters. These leaves, however, are accessible only after the completion of the probationary period.

Additionally, Google extends the privilege of six sick leaves each year, emphasizing the importance of employee well-being. In a recent enhancement, Google now offers an extended allocation of 20 paid vacation leaves annually, up from the previous 15 days. It is important to note that the extent of paid time off (PTO) varies depending on the employee’s job title and length of service with the company.

Furthermore, Google adopts a hybrid work model, wherein employees have the flexibility to work from home for several days each week. This approach, designed to accommodate diverse needs, reinforces Google’s commitment to fostering a balanced and harmonious work environment.

Source: News 18