GeoAI and the Law Newsletter
Keeping geospatial professionals informed on the legal and policy issues that will impact GeoAI.
Recent Developments in GeoAI and the Law
The Future of Privacy Forum published a report this week that examines how states are approaching the regulation of AI (link below). The report is a valuable resource for organizations that develop, deploy, or procure AI systems, providing insight into both the similarities in how states are approaching regulation and, perhaps most importantly, the differences. As discussed below in the Deep Dive, these differences may appear minor, since many state laws share the same look and feel. This is not surprising: legislators from different states often work together to develop “uniform” laws, as it is easier to work from a draft than to create something from scratch. However, even the slightest wording change can have a significant impact. For example, a defined term may differ slightly between states, or restrictions in one state may arise only when an AI system is a “controlling factor” in a decision, whereas in another state the same restrictions may be imposed if an AI system merely “facilitates” a decision. Until a comprehensive federal law is developed, understanding these differences will be critical.
Recommended Reading
Human drivers are to blame for most serious Waymo collisions (Understanding AI)
An analysis of a Waymo report indicating that its autonomous vehicles have been involved in fewer accidents than human-driven cars would have been over the same distance (22 million miles).
Data Protection Commission launches inquiry into Google AI model (Irish Data Protection Commission)
The Irish Data Protection Commission announced an inquiry into whether Google complied with its General Data Protection Regulation obligations when developing its foundational AI model, Pathways Language Model 2 (PaLM 2).
China releases AI safety governance framework (OneTrust)
The framework sets out key principles covering AI research and application, the identification of AI safety risks arising from the technology and its applications, and the implementation of tailored preventive measures.
U.S. State AI Legislation: How U.S. State Policymakers Are Approaching Artificial Intelligence Regulation (Future of Privacy Forum)
The report analyzes key trends and concepts from proposed and enacted U.S. state AI legislation.
Commerce proposes new requirements for AI developers, cloud providers (FedScoop)
The proposed rule would establish new reporting requirements covering AI development and computing activities, cybersecurity protocols, and the results of red-teaming exercises.
The Deep Dive
The Future of Privacy Forum’s (FPF) publication “U.S. State AI Legislation: How U.S. State Policymakers Are Approaching Artificial Intelligence Regulation” is a valuable resource for geospatial companies working with AI. The report is a comprehensive analysis of how states are considering regulating AI, examining not only recently passed laws but also bills that have been introduced in state legislatures. FPF also interviewed lawmakers and regulators across the country.
One of the strengths of the report is its analysis of the similarities between the approaches taken by states on a number of key issues. For example, the report identifies common requirements that are imposed upon developers and deployers of AI systems. These requirements, many of which will be familiar to readers of this newsletter, include:
Transparency – States want developers and deployers of AI systems considered particularly risky to make clear how those systems operate, including their decision-making processes and impacts. In addition, states want developers of those AI systems to make certain disclosures to deployers.
Assessments – States also want developers of riskier AI systems to assess, and conduct regular audits of, such systems’ performance, biases, and risks of discrimination.
AI Governance Programs – Comprehensive state AI regulation typically requires developers and deployers of riskier AI systems to create governance programs and/or develop risk management policies and procedures. The intent of these requirements is to help ensure that AI technologies are developed and used responsibly and ethically, and that they operate in compliance with relevant laws and regulations.
However, the report also notes important differences between the various state laws. For example, both the Colorado AI Act and a bill in the California legislature would regulate AI systems that make “consequential decisions,” but the two measures define that term differently. The Colorado Act defines a consequential decision as “a decision that has a material, legal, or similarly significant effect on the provision or denial to any consumer of, or the cost or terms of…” California AB 2930, on the other hand, defines a consequential decision as “a decision or judgment that has a legal, material, or similarly significant effect on an individual’s life relating to access to government benefits or services, assignments of penalties by government, or the impact of, or the cost, terms, or availability of, any of the following…” While both reference decisions with a material, legal, or similarly significant effect, only the California bill specifically covers government benefits and services and enumerates a list of other covered decisions.
Similarly, states are struggling to define what threshold of AI involvement in decision-making should trigger regulation. The report notes that three terms are generally used: “facilitating decision making” (the lowest threshold), “substantial factor” (the middle threshold), and “controlling factor” (the highest threshold). Industry is concerned that the lowest threshold would capture virtually all technologies that use AI, even those posing very low risk, with calculators and Excel cited as examples. Consumer advocates, on the other hand, argue that the highest threshold is too easy to avoid. For example, having a human oversee a decision made by an AI system would arguably keep the system from being a “controlling factor.” Yet a human could simply rubber-stamp the AI’s decision, in which case the oversight would be of limited value.
These differences in definitions and thresholds may seem small, but they will have a significant impact on companies that develop or deploy AI. It will be critical for businesses operating in multiple states to understand the differences, particularly in these early days of AI laws and regulations.
GeoAI and the Law is not legal advice. The reader should consult with a trained lawyer on legal matters associated with GeoAI.
For those interested in learning more about Geospatial Law: Geospatial Law, Policy and Ethics (Routledge, 2025)