POLICY INSIGHT

Why Tech Policy Matters Now

AI is transforming everything from business to public trust, offering breakthroughs in research and productivity while also fueling deepfakes, scams, and data misuse. This Insight explores how tech policy can strike that balance, whether through light-touch rules, proactive regulation, or giving individuals control over their data, and why choices made now on transparency, privacy, infrastructure, and innovation will determine whether technology strengthens democracy or erodes it.

Introduction

We are living through a technological transformation as significant as the invention of the car. Just as society had to decide how to regulate roads, we now face questions about how to govern artificial intelligence, data use, and emerging technologies.

AI has already demonstrated enormous positive potential: it could accelerate research, eliminate inefficiencies, and increase productivity in previously unimaginable ways.

While AI can be a powerful crime-fighting tool, that power can also be used to perpetrate fraud and other crimes, ultimately harming society and democratic institutions.

Technology has always advanced faster than the policies meant to guide it. From social media to artificial intelligence, digital tools are transforming society, education, infrastructure, and public discourse.

As a result, individuals, communities, and governments now face a core challenge: advancing innovation without losing sight of human dignity, opportunity, and trust for future generations.

This Insight provides a starting point for engaging with your representatives on technology policymaking.

 

What is Tech Policy?

Technology policy, or tech policy, is a broad term encompassing both informal norms and formal regulations guiding technology use, development, and governance across society. In a private setting, it can refer to organizational policies or the personal guidelines individuals establish for themselves. It includes everyday decisions, such as families setting parental controls on devices, schools banning cell phones in classrooms, or companies setting rules for using AI tools and safeguarding data.

In the race to adopt AI, many organizations and the government are facing a new challenge: AI sprawl. The rapid, uncoordinated deployment of AI tools across departments, each solving specific, narrow problems, has led to fragmented systems, rising costs, and governance risks. This has prompted businesses to develop corporate policies on implementing and using AI.

In government contexts, tech policy refers to the laws and regulations governing development, deployment, and access to technology. These policies influence whether innovation advances or threatens core values such as privacy, safety, free expression, economic opportunity, and national security.

The next question in tech policy, particularly for AI, is which level of government should legislate, and on what. There is a growing debate over whether the U.S. needs one comprehensive tech and AI framework or many narrow, individual bills. While this conversation is still evolving, a unified federal framework would provide clarity, interoperability, and consistent standards, while allowing agencies to develop sector-specific regulations over time.

At the same time, the federal government has been criticized for moving too slowly, and several states have stepped into the gap. Over fifteen states now have AI commissions working on regulation and policy.

FEDERAL REGULATORY ACTORS

As introduced in The Policy Circle AI Primer: Understanding AI, government tech policy involves two types: internal guidelines for government staff and external regulations. Many executive agencies have usage guidelines or reports available for review. The White House issues both types of policy through executive orders to agencies.

Several key agencies in the executive branch shape AI regulations and policies.

In Congress, the Bipartisan House Task Force on AI released its final report in December 2024, outlining more than 80 policy recommendations but leaving comprehensive federal legislation unresolved.

STATES PUSHING FORWARD

The use and implementation of AI in government is growing, and states have taken the initiative to test new policies. Utah, Illinois, and California have all passed additional laws to protect their citizens. The result is a fragmented, state-by-state set of regulations that could slow innovation and deployment across the country and beyond.

Outward-facing policies that affect business are just part of the equation; internal policies for government staff on the use of AI are just as important. Colorado documented its process for piloting generative AI across agencies and shared it as a template for other states to follow. After a trial period, the majority of participants in the pilot reported feeling more productive when using AI to manage their workloads. The results directly led to the adoption of Colorado’s statewide AI policy.

 

How to Think About Tech Policy

Competing approaches to tech policy balance various priorities. We will refer to these approaches as Minimal Guardrails, Proactive Governance, and Flip the Paradigm.

MINIMAL GUARDRAILS: MOVE FAST, FIX IT LATER

Proponents of this framework argue that innovation should not be slowed by regulation: guardrails should be introduced only after measurable harm is proven. This approach prioritizes speed, experimentation, and global competitiveness, especially in emerging fields like AI and biotech. For some, the hesitancy to regulate stems from concerns about overregulating AI and infringing on other rights, like free speech.

PROACTIVE GOVERNANCE: LEARN FROM THE LAST TWENTY YEARS

This school of thought focuses on the harms of unchecked technology, from mental health impacts and surveillance to misinformation and exploitation. Proponents think that the past should have taught policymakers valuable lessons. Applying those lessons to new technologies, such as AI, means regulating them now, not just reacting after harm has occurred.

FLIP THE PARADIGM: FOCUS ON DATA

As artificial intelligence (AI) becomes embedded in every facet of daily life, a powerful way to shape tech policy is by focusing on data agency: the ability of individuals and organizations to control their own data. AI thrives on vast, high-quality datasets, pushing platforms to harvest more user information, often without clear consent or transparency. Most users don’t fully understand how their data is collected, stored, or used.

A data agency-driven approach to tech policy prioritizes transparency, accountability, and individual control. It means giving people tools to see who is using their data and for what purpose, and control over both. For example, Project Liberty is working to guide policymakers in building balanced frameworks that enable innovation while respecting personal freedom. Just as we’ve come to expect food labels or credit disclosures, we can begin to expect real-time control panels for our digital lives.

The Policy Circle’s Data Privacy and Cybersecurity Insight dives into what data privacy is, as well as the roles of each level of government.

Watch how individuals can become active participants, not passive data sources, in shaping the future of AI (video, 10 minutes).

 

Spokes of Tech Policy

Wheel showing the areas of tech policy that this Insight covers.

These policy views inform how to approach different sectors and themes connected to tech policy. The following “spokes” highlight major areas, not an exhaustive list, illustrating how technology and policy intersect:

  • Transparency and Accountability
  • Consent and Privacy
  • Innovation
  • Land Use and Energy
  • Critical Infrastructure
  • Taxes and Tariffs

TRANSPARENCY AND LIABILITY

In an era when AI models influence decisions across healthcare, education, law enforcement, and finance, questions about how these systems work and their associated liability are at the forefront. Transparency and accountability are foundational to trust in both public and private institutions. Yet even with transparency, it is essential to remember that AI models are not infallible; they are algorithmic systems shaped by data and design, which means accuracy and fairness cannot be guaranteed.

Transparency means ensuring that users can understand how an AI model was developed, what kind of data it was trained on, and the intended purpose of its decisions or outputs. Here are the components of a framework for increased transparency in AI use:

  • Clear liability rules: Developers, deployers, and regulators must know where accountability lies when AI fails.
  • Mandatory transparency tools: Labels, audits, and explainability standards should be required for high-risk systems.
  • Oversight of government use: Public agencies using AI must be held to high standards and remain accountable to citizens. For instance, in Virginia, Governor Glenn Youngkin signed an order authorizing AI to scan state regulations and flag unnecessary provisions, an effort aimed at making regulatory processes more efficient and accountable to taxpayers.
  • Cross-sector collaboration for self-governance: Civil society, academia, and the tech sector co-create norms for ethical AI governance.

Proposed solutions include:

  • AI “nutrition labels” are standardized disclosures showing what data was used, how a model was trained, and its known biases or risks (a hypothetical sketch of such a label follows this list). Labeling does not always have the intended effect; for example, Parental Advisory labels on music backfired by becoming a marketing tool that amplified, rather than limited, controversial content.
  • Explainability mandates, especially in lending, housing, and employment sectors, are necessary to ensure that people can understand and challenge AI-driven decisions.
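
To make the “nutrition label” idea concrete, here is a minimal sketch of what such a disclosure might contain, expressed as a Python data structure. The field names and the example model are hypothetical, not drawn from any existing labeling standard:

```python
from dataclasses import dataclass

@dataclass
class ModelNutritionLabel:
    """A hypothetical AI 'nutrition label'; every field is illustrative."""
    model_name: str
    developer: str
    training_data_sources: list[str]  # where the training data came from
    intended_uses: list[str]          # what the model is designed to do
    known_limitations: list[str]      # documented biases or failure modes
    high_risk: bool                   # flags systems used in sensitive domains

# A disclosure a deployer might publish alongside a lending model.
label = ModelNutritionLabel(
    model_name="ExampleLender-1",  # hypothetical credit-screening model
    developer="Example Corp",
    training_data_sources=["anonymized loan records", "public credit data"],
    intended_uses=["preliminary loan screening"],
    known_limitations=["less accurate for applicants with thin credit files"],
    high_risk=True,  # lending decisions affect livelihoods
)

for field_name, value in vars(label).items():
    print(f"{field_name}: {value}")
```

The point of standardization is that every deployer would fill in the same fields, so users and auditors could compare systems at a glance, much as they compare two food labels.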

Transparency also applies to government use of AI.

At the federal level, the White House issued a July 2025 Executive Order that provided direction on qualifications for AI models that can be used internally. The order also mandated the development of a new AI Action Plan from the OMB. It emphasized economic competitiveness, national security, and U.S. leadership, and explicitly called for AI systems free from ideological bias. Since the U.S. government issues large contracts, procurement standards will likely influence broader commercial AI by pushing vendors to design more accountable and transparent tools.

WHO’S ACCOUNTABLE?

Right now, there is no unified federal law that assigns liability when AI systems fail.

For example:

  • If a facial recognition tool leads to a false arrest, who is liable: the software company, the police department, or the developer?
  • If AI misdiagnoses a patient, does the blame rest with the hospital for implementing AI, the physician for using it, or the developer?

EMERGING POLICIES AND FRAMEWORKS

  • The EU AI Act, which may influence global standards, places liability for outcomes on the organizations that deploy AI in high-risk settings. It also requires risk assessments, human oversight, and precise documentation of how AI systems work.
  • In the U.S., the Content Origin Protection and Integrity from Edited and Deepfaked Media Act (COPIED Act) was introduced in April 2025. It would require AI developers to provide content provenance tools, establish standards for detecting synthetic content, and prohibit unauthorized use of copyrighted material for AI training.
  • Sector-specific regulations (e.g., HIPAA, Equal Credit Opportunity Act) may indirectly cover AI decisions, but are often outdated or unprepared for its complexity. The U.S. House has introduced the Healthy Technology Act of 2025, which would allow certain AI/machine learning systems to be recognized as practitioners and eligible to prescribe drugs under specific federal oversight. The bill implicitly raises questions of liability in medical diagnosis and decision-making.

To explore this topic further, along with some direct applications of AI, see The Policy Circle’s AI and Government Transparency Insight.

CONSENT AND PRIVACY

AI models rely on vast, high-quality datasets to function. Training data has historically been pulled from publicly available content or scraped from the Internet. However, this data increasingly includes personal creations, biometric markers, copyrighted content, and behavioral patterns, making consent and privacy protections increasingly important.

We’re also learning more about how digital platforms, often optimized for engagement, amplify human vulnerabilities, including compulsive behavior, addiction, and even mental health conditions like anxiety. Particularly concerning is the impact on brain development in minors, where unfiltered access to certain content has measurable consequences. That’s why age-appropriate access and consent, especially parental consent, are growing policy priorities.

A SHIFT TOWARDS OPT-IN MODELS

Traditionally, platforms relied on lengthy terms-of-service agreements that users clicked through without much awareness. The trend is shifting toward opt-in by default, in which users actively set the terms of their digital experiences. With AI, this could mean choosing whether personal data can be used to train AI models or whether one’s likeness can appear in AI-generated content.

This user-centered, opt-in approach marks a major evolution in digital rights, essential for building healthier, more respectful online environments.
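
As a rough illustration of what “opt-in by default” means in practice, the sketch below models consent flags that all start as denied, so nothing is permitted until a user actively grants it. The flag names and the check function are hypothetical, not any platform’s actual API:

```python
from dataclasses import dataclass

@dataclass
class ConsentPreferences:
    """Opt-in by default: every permission starts as False (denied)."""
    use_data_for_ai_training: bool = False
    use_likeness_in_generated_content: bool = False
    share_data_with_third_parties: bool = False

def is_permitted(prefs: ConsentPreferences, action: str) -> bool:
    # Unknown actions are denied too, mirroring a default-deny posture.
    return getattr(prefs, action, False)

prefs = ConsentPreferences()              # a new user: everything denied
assert not is_permitted(prefs, "use_data_for_ai_training")

prefs.use_data_for_ai_training = True     # the user explicitly opts in
assert is_permitted(prefs, "use_data_for_ai_training")
```

Contrast this with the traditional click-through model, where every default would be True and the burden falls on the user to find and disable each setting.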

LEGAL AND POLICY ACTION AT THE FEDERAL LEVEL

  • The Children’s Online Privacy Protection Act (COPPA) requires parental consent for data collection from children under 13.
  • The Take It Down Act requires platforms to remove non-consensual explicit imagery within 48 hours of a verified request, with enforcement by the FTC.
  • Ongoing proposals like the Kids Online Safety Act (KOSA) seek to hold platforms accountable for designing safer experiences for young users.
  • The SCREEN Act proposed age verification at the national level.
  • The American Privacy Rights Act (APRA) aimed to create a national standard for data privacy, including access, deletion, and opt-out rights. Although the bill originally had broad support, revisions weakened key protections, and APRA has stalled in Congress with no markup scheduled and its legislative future uncertain.

ACTIVITY AT THE STATE LEVEL

States have taken the lead in digital consent and data protections. Illinois’s Biometric Information Privacy Act (BIPA) requires informed consent before collecting biometric identifiers (e.g., face scans, fingerprints), while California’s Consumer Privacy Act (CCPA) and California Privacy Rights Act (CPRA) give users rights to access, delete, or opt out of the sale of their personal data. The CPRA is often cited as the U.S. counterpart to Europe’s GDPR.

States are now moving beyond general tech policy laws and enacting laws that specifically target the unique risks and opportunities of artificial intelligence systems. Two leading examples, Utah and Colorado, demonstrate the new landscape of AI regulation in the U.S.

  • Utah’s AI Policy Act is the nation’s first law explicitly tailored to generative AI and its impact on consumers. It requires disclosure whenever consumers interact with AI. Any business using generative AI in communications with Utah residents must comply, regardless of where the business is based. The act also established an Office of AI Policy for enforcement and public education, along with the nation’s first AI-focused regulatory sandbox.
  • The Colorado Artificial Intelligence Act targets “high-risk” AI systems that help make sensitive decisions. It requires both AI developers and deployers to implement risk management programs, conduct impact assessments, and ensure reasonable safeguards to prevent discrimination and consumer harm. Consumers must be notified whenever an AI system is involved in a major decision and can appeal or seek human review. Carve-outs exist to avoid overregulating lower-risk uses.

These laws remain fragmented, applying only within individual state borders. While district court rulings could broaden their impact, the result is still a patchwork of protections that leaves businesses navigating inconsistent rules and users with uneven rights depending on their state of residence.

WHY GLOBAL STANDARDS MATTER

Digital platforms and AI companies operate globally, and many are based in jurisdictions with weak privacy protections, such as China, where the state may access data from domestic tech firms, or certain Eastern European countries with limited enforcement.

Without interoperable global rules, firms can relocate to less restrictive countries and evade national protections. International cooperation, through frameworks like the EU’s GDPR or the OECD Privacy Guidelines, is needed to establish consistent standards and reduce regulatory arbitrage.

FUELING INNOVATION, NOT OVERREGULATION

Artificial Intelligence is reshaping the foundations of economic power, military strategy, healthcare delivery, public infrastructure, and even cultural influence. As AI systems increasingly guide how people work, communicate, and learn, the question is how governments should engage and what values will guide the process.

CHINA’S STATE-LED AI MODEL: A GLOBAL CHALLENGE

China has made global AI leadership a cornerstone of its national strategy, aiming to dominate AI research and deployment across defense, manufacturing, and surveillance by 2030. The Chinese government subsidizes AI research and chip and semiconductor production, and dictates how businesses and institutions use AI. There is no functional concept of privacy, consent, or transparency. Instead, real-time facial recognition and the social credit system reflect an AI regime that consolidates power rather than empowers citizens.

China’s AI model is increasingly influential in the Global South, where Chinese firms offer cheap, state-backed AI and surveillance tools without regard for civil liberties. If unchecked, China’s vision for AI could become the global standard, embedding authoritarian values into the code that shapes how the world works.

The World Economic Forum dug into this topic at length (video, 1 hour, 5 minutes).

THE U.S. DILEMMA: FUEL INNOVATION, AVOID OVERREACH

In response, the United States is confronting a strategic dilemma: how to boost innovation without becoming overly interventionist. History shows that federal support, often by funding basic research, has played a foundational role in past technological leaps, from railroads and highways to electricity, the telephone, the internet, and broadband. However, many fear that the government “picking winners and losers” could distort markets and lead to failures, such as the well-known case of Solyndra in solar energy.

Congressional leaders are now debating the appropriate federal role. Should the government invest directly in chip manufacturing, fund basic R&D, or let the private sector lead?

“…fragmented regulations could stall federal IT modernization and hinder U.S. leadership in AI. Without a unified national framework, conflicting state rules risk driving up compliance costs, limiting innovation, and undermining agencies’ ability to adopt the best commercial AI tools available.”

DANIEL CASTRO OF ITIF

FEDERAL ENGAGEMENT: AGENCIES, AUTHORITIES, AND FUNDING MECHANISMS

The Department of Commerce, Department of Defense (DoD), Department of Energy (DOE), and the National Science Foundation (NSF) all play critical roles.

These efforts are enabled under the Commerce Clause and Spending Clause of the U.S. Constitution, which allow Congress to regulate interstate commerce and allocate federal funds for strategic purposes. Still, Congress’s role remains vital: appropriating funds, holding hearings, and ensuring that agency priorities align with national values.

SECURING CRITICAL SUPPLY CHAINS FOR AI INFRASTRUCTURE

AI systems depend on data, software, and physical infrastructure, including rare earth magnets essential for drones, EVs, and advanced computing. The U.S. has long relied on imports, particularly from China, for these materials, posing a national security risk as tensions escalate.

The Department of Defense announced a major investment in MP Materials’ new magnet manufacturing facility in Fort Worth, Texas, known as the “Independence” site, to reduce this dependency. This public-private partnership aims to reestablish a domestic supply chain for neodymium-iron-boron (NdFeB) magnets, a key component in motors and precision-guided systems. The facility has begun producing magnets and is scaling to support U.S. defense and industrial needs. While full permitting details are not public, the facility has received federal tax credits and DoD funding, demonstrating how supply chain security is now a pillar of AI policy and national competitiveness.

STATES AS LABORATORIES OF INNOVATION

At the state level, Utah, Texas, and Virginia have started regulatory sandbox initiatives that let companies test AI technologies under light-touch supervision. These models aim to balance innovation and oversight and may inform national frameworks. Meanwhile, other states are expanding broadband access, research incentives, and procurement reforms that accelerate AI adoption in schools, hospitals, and public safety.

There is a growing consensus that strategic public investment, clear ethical guardrails, and interoperable national standards are necessary to ensure AI serves democracy. But the U.S. is also trying to avoid the trap of bureaucratic overregulation, which Vice President Vance has warned stifles agility and collaboration.

For more on the importance of innovation, check out The Policy Circle’s Economic Growth Brief.

PROTECTING CRITICAL INFRASTRUCTURE

AI is increasingly embedded in vital systems like utilities, transportation, healthcare, and defense, making these services a target. The 2021 Colonial Pipeline ransomware attack temporarily halted operations of one of the largest fuel pipelines in the United States. It demonstrated how attacks on infrastructure can trigger crises and expose deep vulnerabilities. Sophisticated threats, from domestic cyber groups to state-sponsored operations, like China’s Salt Typhoon campaign targeting U.S. communications firms, underscore why tech policy includes investment in resilient systems and coordinated defenses.

THE NEXT WAVE OF COMPUTING

Quantum computing poses significant risks and opportunities. Rather than testing each possibility one at a time like a classical computer, a quantum computer can explore many possibilities simultaneously, meaning it can reach answers to certain complex problems much faster. When paired with AI, it could expedite research in areas like drug development and forecasting.

The race to build quantum computers with low error rates is already underway. Because of this potential for expedited computing, a sufficiently powerful quantum computer could break today’s encryption, which poses a serious threat to privacy and security. NIST is already working on post-quantum encryption standards for when this risk becomes reality, but work continues on what security will look like in a post-classical world.
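
A back-of-the-envelope calculation shows the scale of the concern. Grover’s algorithm, a known quantum search technique, finds a needle among N possibilities in roughly the square root of N steps, so a brute-force search over a 128-bit key space shrinks from about 2^128 classical guesses to about 2^64 quantum iterations. The numbers below illustrate scale only, not a timeline:

```python
# Rough scale comparison: classical brute force vs. Grover's sqrt speedup.
key_bits = 128
classical_guesses = 2 ** key_bits         # ~3.4e38 possibilities to try
grover_iterations = 2 ** (key_bits // 2)  # sqrt of that: 2^64, ~1.8e19

print(f"classical guesses: {classical_guesses:.2e}")
print(f"grover iterations: {grover_iterations:.2e}")
# Public-key schemes like RSA fare worse: Shor's algorithm breaks them
# outright, which is what post-quantum standards are designed to address.
```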

THE FEDERAL GOVERNMENT’S ROLE

The federal government leads by funding infrastructure, setting voluntary standards, and coordinating threat intelligence.

The following agencies are responsible for protecting critical infrastructure:

  • Cybersecurity and Infrastructure Security Agency (CISA) leads efforts to monitor threats and share best practices through programs like the Joint Cyber Defense Collaborative (JCDC).
  • The National Institute of Standards and Technology (NIST) provides the AI Risk Management Framework.
  • Entities like the U.S. Cyber Command (USCYBERCOM) and the National Security Agency (NSA) collaborate through the AI Security Center to protect military and defense systems.
  • The Department of Homeland Security (DHS) and the Federal Bureau of Investigation (FBI) contribute through incident response and investigation.

Executive orders and interagency programs support coordination, research, and crisis response, though no single federal body has oversight over the full AI-cybersecurity interface.

Several Congressional committees are vital in overseeing the agencies that protect critical infrastructure from AI-powered threats.

Key players include the House Committee on Homeland Security, particularly its Subcommittee on Cybersecurity and Infrastructure Protection, and the Senate Homeland Security and Governmental Affairs Committee. These bodies oversee CISA and its risk mitigation efforts.

Meanwhile, the House Armed Services Committee (via its Cyber and Innovation Subcommittee) and the Senate Select Committee on Intelligence monitor national defense and intelligence agencies like USCYBERCOM, the NSA, and the FBI. The House and Senate Commerce Committees also oversee standards bodies like NIST.

These committees are where national policy gets shaped, budgets are authorized, and oversight hearings are held, making them a crucial focus for civic engagement.

THE ROLE OF STATES

While the federal government sets national security standards, states are crucial laboratories of innovation and governance, especially regarding AI integration and cybersecurity. Oversight varies from state to state, but its importance is growing as AI-powered threats increasingly target elections, utilities, and public services.

Many states are establishing Science, Technology, or Innovation Committees within their legislatures to guide the ethical and secure adoption of AI. Others assign oversight responsibilities to Homeland Security, Public Safety, or Emergency Management committees, which monitor the resilience of digital systems tied to infrastructure and essential services like water, energy, and education.

A standout example is New Jersey, where the state legislature is considering creating a centralized Office of Cybersecurity Infrastructure. This office would weave AI safety into the state’s broader cybersecurity strategy, coordinate public–private response efforts, and set a precedent for integrating AI oversight into existing emergency frameworks.

States are also experimenting with incident response plans, AI-safety audits, and secure procurement policies, though many remain in early or reactive stages. Importantly, state appropriations and budget committees determine how much funding is directed toward these initiatives, shaping the future of digital resilience at the local level.

GovRAMP is a non-profit that emerged in 2021 to help standardize cybersecurity practices across state and local governments. Its mission is summarized by its “verify once, serve many” philosophy, meaning cloud service providers undergo one streamlined security validation, and government agencies can rely on that assessment rather than conducting their own each time. It ensures continuous monitoring and aligns with the federal framework through NIST standards.

This model empowers states and local entities, especially those with limited resources, to sharpen cybersecurity postures collaboratively, without duplicated effort or costs. It’s a key example of how states coordinate cybersecurity resilience against AI-driven threats through shared infrastructure and standards.
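
The “verify once, serve many” idea can be pictured as a shared registry: one validation record per cloud provider, consulted by any number of agencies instead of each running its own audit. The sketch below is a hypothetical illustration of that pattern, not GovRAMP’s actual system:

```python
# Hypothetical shared-registry pattern behind "verify once, serve many".
validated_providers: dict[str, str] = {}  # provider name -> validation status

def validate_provider(name: str, status: str) -> None:
    """Record the one streamlined security validation for a provider."""
    validated_providers[name] = status

def agency_check(provider: str) -> bool:
    """Any agency reuses the shared record instead of re-auditing."""
    return validated_providers.get(provider) == "authorized"

validate_provider("ExampleCloudCo", "authorized")  # assessed once
print(agency_check("ExampleCloudCo"))              # True: a city reuses it
print(agency_check("UnknownVendor"))               # False: never validated
```

One assessment amortized across many agencies is what lets small localities get enterprise-grade assurance without paying for their own audits.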

WHAT CITIZENS CAN DO

Communities can strengthen resilience by asking key questions at the local level:

  • Are critical systems protected against AI-driven attacks? Microsoft is actively participating in conversations that cities are having about cybersecurity.
  • Does state policy support public–private threat sharing? Programs like CISA’s JCDC can only succeed if local governments and utilities actively participate.
  • Are there recovery and continuity plans for essential services? Hospitals, schools, and infrastructure systems need resilience built in.

Engaging in infrastructure and AI security policy helps transform these challenges from obscure technical issues into civic priorities that reflect democratic values and local needs. Learn how to use your voice through The Policy Circle’s Civic Engagement Brief.

LAND USE AND ENERGY

Artificial Intelligence isn’t just a digital phenomenon; it’s dependent on physical assets. Every AI model, from image generators to large language models, requires immense computing power. This power demands land, electricity, and water. As AI adoption accelerates across industries, so does the infrastructure needed to support it: data centers, high-voltage lines, cloud campuses, and water-intensive cooling systems. The U.S. electric grid is already feeling the strain. Utilities and regulators warn that surging demand, driven partly by AI, could outpace supply within this decade.

This new reality has forced a national reckoning: how do we balance innovation with resilience? How do we support AI development without overwhelming our communities, power grids, and ecosystems? Land use and energy policy, once back-office concerns, are now front-line issues in tech governance.

Watch the Wall Street Journal’s video on AI and data centers’ energy usage (7 minutes).

FEDERAL ROLES AND RESPONSE

Federal engagement on AI’s energy and land use has mirrored the broader U.S. approach to tech policy: serious activity across multiple agencies but without a unified strategy. The Department of Energy has issued studies warning about rising power demand from data centers and the strain on local grids. At the same time, other agencies promote clean energy adoption and grid modernization. Still other agencies have worked on efficiency standards for advanced computing, but their guidance is largely voluntary. This results in a patchwork where efforts are genuine but often fragmented.

Executive action on this growing problem is concentrated in a few agencies:

  • Department of Energy (DOE): Tracking the explosive growth in data center energy demand and driving strategies across generation, transmission, and demand-side flexibility to meet those needs.
    • The agency has identified 16 federal sites, including national lab campuses, as potential locations to co-locate AI infrastructure with energy generation resources, aiming for streamlined permitting and operational speed.
    • Office of Electricity (OE): Leads DOE’s efforts to ensure the nation’s electric grid is secure and resilient. It focuses on R&D for grid modernization, distributed energy integration, microgrid deployment, and reliable power for mission-critical systems, including data centers.
    • Advanced Research Projects Agency–Energy (ARPA-E): Funds high-risk, innovative energy technologies, such as dynamic cooling systems and grid optimization tools, that are critical for managing the unique power needs of digital infrastructure.

Against this backdrop, the White House has attempted to move beyond piecemeal action. A July 2025 Executive Order, “Accelerating Federal Permitting of Data Center Infrastructure,” sought to streamline approvals for large-scale data centers and their supporting energy projects. By fast-tracking permitting, opening access to federal lands, and providing opportunities for financing, the order reflects an effort to align federal policy more directly with the infrastructure demands of AI. Yet, this raises questions about whether the rush to scale comes at the expense of environmental safeguards and community input.

STATE EXAMPLES

Artificial intelligence may run in the cloud, but it’s rooted in the ground. Data centers, solar fields, nuclear reactors, and transmission lines sit on land that’s privately owned or managed by governments. It’s the states, counties, and local communities that ultimately make the call.

States bear dual responsibilities: driving economic growth while safeguarding power grids strained by the rise of AI. Data centers are notoriously energy-hungry; the rapid build-out of hyperscale facilities threatens to outpace available capacity. Texas has taken a national leadership role with Senate Bill 6 (SB 6), passed in 2025, which mandates that large energy users like data centers and crypto mines provide backup power details, participate in emergency demand response, and share the cost of infrastructure upgrades. This proactive approach balances innovation with grid stability and has become a model other states are watching closely.

Pennsylvania is embracing another path to resilient innovation by doubling down on nuclear energy. A reactor at the iconic Three Mile Island site is scheduled to restart in 2028, delivering zero-emissions electricity to power next-gen digital infrastructure.

Other communities are navigating the push and pull of development and preservation:

  • West Virginia: Residents have pushed back, raising concerns about grid stress, land use, and limited community benefit from incoming data centers.
  • Virginia: Several counties now require data center developers to disclose energy usage and environmental impacts.
  • Tennessee: In Gallatin, Meta partnered with the Tennessee Valley Authority (TVA) to power its data center with solar energy, showcasing a public-private model of sustainable innovation.

These examples highlight the range of approaches states are experimenting with, from strict regulation to clean energy collaboration. The common thread is that AI’s physical footprint has become a concrete local issue, playing out in zoning commissions, utility boards, and statehouses nationwide. See The Policy Circle’s Energy and Environment Brief to follow up on this pressing issue.

TAXATION AND TARIFFS

Artificial intelligence and the digital economy are transforming governments’ decisions about taxation, trade, and industrial competitiveness. Tax policy and tariffs have become central to how governments fund innovation, regulate digital platforms, and assert sovereignty over fast-evolving tech sectors.

Policymakers at the state and city levels are experimenting with digital services taxes (DSTs), targeted surcharges, and semiconductor incentives to ensure the digital economy delivers public returns. These measures can trigger international trade tensions and retaliatory tariffs, prompting coordination efforts through frameworks like the OECD global tax deal.

Meanwhile, U.S. states are shaping the tech landscape through their fiscal levers, whether through taxes on cloud services, reductions in data center subsidies, or new requirements for energy-intensive infrastructure.

This evolving tax landscape reflects an important fact: while fiscal policy remains its own domain, its tools have become central tech policy drivers. Choices about who pays, where investment flows, and how digital infrastructure is taxed will shape AI’s current and future pace.

INTERNATIONAL EXAMPLES

The global landscape shows how these overlaps between tax and tech policy are already playing out, with countries developing DSTs in ways that shape both trade relations and investments in innovation:

  • Canada’s Digital Services Tax and the U.S.: Canada rescinded its digital services tax to restart trade negotiations with the U.S., highlighting the international complexities and policy reversals tied to DSTs.
  • France’s Digital Services Tax: Applies a 3% tax on revenues generated in France by large digital companies (like Google, Amazon, and Facebook). Some proceeds support French AI and cloud-sovereignty initiatives.
  • OECD Global Tax Framework: Over 140 countries have agreed that large multinational companies, especially those offering digital services, should pay taxes not just where they are headquartered, but also where their customers are. This shift means countries can capture tax revenue tied to local digital activity to help fund public infrastructure, public services, and innovation.

International coordination runs through bodies like the OECD, whose global tax framework is described above.

FEDERAL LEGISLATION

At the federal level, recent actions illustrate how tax and industrial policy are being used to shape the trajectory of AI and the broader digital economy:

  • R&D Tax Credit (1981): Longstanding credit that sustains corporate innovation spending. Recent legislation was introduced to restore full expensing and expand incentives for startups and small firms.
  • CHIPS and Science Act (2022): Provides tax credits and subsidies to onshore semiconductor manufacturing, strengthening the hardware backbone for AI.
  • Executive Order on AI Exports (2025): Directs a national strategy to promote the export of U.S. AI technologies, linking trade, tax, and industrial policy to maintain global competitiveness.
  • Foreign DSTs Response (2025): A presidential memorandum revived tariff authority under Section 301 to counter foreign DSTs, authorizing retaliatory measures to protect U.S. digital companies.

Together, these measures show that tax levers are not just fiscal tools but central instruments of U.S. tech competitiveness and innovation strategy. Learn more about fiscal and monetary policy in The Policy Circle’s Taxes Brief or about tariffs in the Trade Brief.

STATE AND LOCAL MOVEMENT

States shape the AI-tech landscape through taxation, surcharges, and incentives. Their options include taxing or exempting digital services (like cloud and AI APIs), adding targeted surcharges on large tech firms, offering tax credits or abatements for R&D and semiconductor manufacturing, and creating (or tightening) incentives for data centers. In practice, these levers set the effective cost of doing AI/tech business in a state, influencing where companies build, expand, or scale back.

Several states have been implementing plans and legislation using these levers. Some of the most significant are:

  • Maryland (2025): Home to the nation’s first digital advertising tax, enacted in 2021. Recently, its “gag rule” (prohibiting platforms from disclosing cost pass-throughs) was struck down by the 4th Circuit, keeping broader legal challenges alive.
  • Washington (2025): Taxes hyperscale computing with an “advanced computing” B&O (Business & Occupation) surcharge that will jump significantly in 2026. This is a rare, highly targeted tax instrument.
  • Iowa (2025): Scaled back its data center electricity and fuel sales-tax exemptions and introduced new reporting and certification requirements, marking a notable retreat from previous policy.
  • Florida (2025): Ended sales-tax breaks for data centers under 100 MW, signaling a broader reconsideration of energy-intensive incentives.
  • New York Green CHIPS (2025): Strengthened its semiconductor push with up to $10 billion in targeted R&D and manufacturing credits tied to environmental performance.

PARTICIPATING IN POLICY-MAKING

  • Clarify tax bases for digital goods and services to reduce patchwork compliance and improve predictability.
  • Tie incentives to transparency and resilience: for example, make energy-use and environmental reporting requirements, or renewable and efficiency commitments, conditions for data-center tax benefits. See what Virginia found in its investigation.
  • Adopt performance-based incentives (jobs, capex, local benefits) with sunset reviews to reassess costs and benefits as AI infrastructure scales.
  • Coordinate with localities: enable or standardize local disclosure and impact rules (cities like Chicago already tax some digital consumption) while avoiding double taxation or conflicts.

 

Questions to Guide Tech Policy

Shaping responsible technology policy is a collaboration between global frameworks and local realities. While many principles around AI, data, and digital systems enjoy broad consensus, the outcomes hinge on how policies are implemented in communities, statehouses, and industries.

To move from ideals to effective action, and ensure that innovation strengthens both trust and competitiveness, leaders and citizens alike can reflect on key questions when evaluating tech policies:

  • Does this policy protect people’s control over personal data and digital identity? Are consent, data use, and privacy safeguards clearly defined and easy to understand?
  • Is the use of AI or other technologies transparent and accountable? Can people understand how decisions are made? Is there meaningful human oversight?
  • Does the policy expand access and opportunity across communities? Does it help close digital divides and include those often left out, such as rural residents or under-resourced populations?
  • Are we building resilient systems, not just compliant ones? How does this policy contribute to long-term readiness in areas like cybersecurity, broadband, education, and workforce development?
  • Does the policy allow innovation to flourish while protecting public trust? Are there ethical guardrails that enable experimentation without compromising human dignity or civil liberties?

These conversation starters help communities shape policies that reflect their values and vision for the future.

They can be used to guide:

  • Legislative or regulatory review.
  • Public hearings or town halls.
  • Strategic planning within agencies.
  • Community conversations or school board discussions.
  • Advocacy and civic engagement efforts.

Whether you’re reviewing a bill about AI in education, data privacy standards, or smart city infrastructure, these questions help ensure that every policy decision reflects core values of trust, inclusion, resilience, and human dignity.

If these five guiding questions aren’t asked where decisions are made, that’s your cue to speak up.

 

Additional Sources

To learn more about what private AI policies could look like, see these guides for businesses and schools:

  • Elizabeth M. Adams, a speaker at The Policy Circle’s 2025 Summit, has developed courses that help businesses create and implement AI use guidelines to fight AI sprawl.
  • TeachAI.org has developed sample toolkits for schools to build and implement AI policy.
Updated: September 26, 2025

