GeoAI and the Law Newsletter
Keeping geospatial professionals informed on the legal and policy issues that will impact GeoAI.
Recent Developments in GeoAI and the Law
In light of recent releases of improved “open” AI systems, there has been much discussion of the relative benefits (and risks), from a technology standpoint, of “open source”, “open weight” and “closed source” AI systems. However, what many geospatial professionals may not realize is that similar discussions are also taking place among lawmakers and regulators. For example, proponents of “open” AI systems are concerned that a bill in California could impose liability on certain developers of “open” systems when third parties subsequently use them to create derivative models through fine-tuning that result in “critical harm”. This topic will be discussed in a future Deep Dive, but the issue seemed important enough to raise here.
Recommended Reading
An excellent summary of the practical challenges a lawyer faces in trying to apply AI laws and policies to the “real world”.
Why the Data Ocean is Being Segmented
A detailed discussion of the consequences that the lack of clarity regarding ownership rights in data is having on the availability of training data for LLMs.
Updated in July by the International Association of Privacy Professionals
Quebec municipalities using artificial intelligence to track tree cover, cars, pools
This article discusses how the public may react to the use of GeoAI applications by government agencies.
FACT SHEET: NTIA AI Report Calls for Monitoring, But Not Mandating Restrictions of Open AI Models
The Fact Sheet references NTIA’s Report on Dual-Use Foundation Models with Widely Available Model Weights.
The Deep Dive
One of the reasons this newsletter was started was to highlight the disconnect between how the geospatial community is considering using GeoAI and the legal and regulatory framework that is developing around artificial intelligence (AI) in general. The risk is that the geospatial community will build AI and machine learning (ML) tools that do not satisfy legal and policy requirements, and then become frustrated when those tools are not adopted or the impact of GeoAI falls short of what is believed possible.
As a result, I am making an effort to keep up with what is being written and said about potential GeoAI applications, and to apply that to current legal and regulatory discussions. For example, this week I will be attending a workshop on GeoAI at the Fourteenth Session of the United Nations Committee of Experts on Global Geospatial Information Management. (I will also be speaking on AI as part of a panel at the Forum on the Integration of Terrestrial, Maritime and Cadastral Domains.)
Along those lines, last week I read State of AI for Earth Observation: A concise overview from sensors to applications. This 52-page report includes an excellent—and detailed—discussion of earth observation sensors and how AI/ML can be applied in the earth observation domain. Unfortunately, the discussion of “AI safety” is only half a page long and is pitched at such a high level that it offers little to those trying to develop GeoAI tools that comply with existing safety and security requirements.
For example, one of the sectors identified in the report as a potential market for GeoAI is infrastructure. However, as noted in last week’s newsletter, AI applications for “critical infrastructure” may be considered “high risk” under the EU AI Act (the “AI Act”), which came into force on August 2nd. Providers of high-risk systems under the AI Act must comply with a number of requirements, including risk management, data quality and governance, transparency, and cybersecurity.
Similar requirements apply in the United States. As noted in an earlier edition of the newsletter, certain infrastructure-related applications may be considered “safety-impacting” under guidance published by the Office of Management and Budget. Specifically, under “Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence,” federal agencies wishing to develop, procure, or use AI for new or existing covered safety-impacting activities will need to (i) complete an AI impact assessment; (ii) test the AI for performance in a real-world context; and (iii) independently evaluate the AI. These requirements are in addition to steps agencies must take on an ongoing basis, including regularly evaluating the risks from the use of AI and mitigating emerging risks to rights and safety.
The report concludes with a discussion of how Research and Technology Organizations (RTOs) can “act as a bridge between academia, industry and government”. However, it does not discuss the need to align technical developments in GeoAI with rapidly evolving laws and policies. If such alignment does not occur in RTOs, where will it take place? RTOs are among the few organizations with the ability to pull together a multi-disciplinary team to address these issues, work that could then be built on to advance adoption in other contexts.