GeoAI and the Law Newsletter
Keeping geospatial professionals informed on the legal and policy issues that will impact GeoAI.
Recent Developments in GeoAI and the Law
As has been discussed here several times, there is significant overlap between the data protection/privacy community and the AI community. As a result, one of the best resources for keeping up with legal and policy developments in the AI sector has been the International Association of Privacy Professionals (IAPP). This issue's Deep Dive goes into detail on how Europe's AI Act applies to general-purpose AI models used in the geospatial sector. If you want more information on the AI Act, I suggest you follow the ongoing IAPP series 10 Operational Impacts of the EU AI Act.
Recommended Reading
The good, the not-so-good, and the ugly of the UN’s blueprint for AI (Brookings Institution)
The report, written by a prominent privacy expert and former member of the Obama cabinet, notes the “[r]apidly expanding networks on AI policy, safety, and development [that] have produced unprecedented levels of international cooperation around AI.”
California AI bill passes State Assembly, pushing AI fight to Newsom (Washington Post)
Readers will note that I have highlighted the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act several times in the past few months. As this article reports, the bill “has deepened a rift in the world of AI researchers, developers and entrepreneurs over how serious the potential risks of the technology are.” The bill now goes to California’s governor, who must either sign or veto it.
U.S. AI Safety Institute Signs Agreements Regarding AI Safety Research, Testing and Evaluation With Anthropic and OpenAI (NIST)
According to the press release, the U.S. Artificial Intelligence Safety Institute (Safety Institute) at the U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) announced agreements that enable formal collaboration on AI safety research, testing, and evaluation with both Anthropic and OpenAI. The agreements establish a framework for the Safety Institute to receive access to major new models from each company prior to and following their public release. Readers will recall that the Safety Institute was created pursuant to the Biden Administration’s 2023 Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.
AI Act: Have Your Say on Trustworthy General-Purpose AI (European Union)
The European AI Office has extended the deadline for comments on its consultation on trustworthy general-purpose AI models under the AI Act to September 18, 2024. This is separate from its call for expressions of interest to participate in the drawing-up of the first General-Purpose AI Code of Practice, which closed on August 25. Geospatial businesses that build or use AI systems should consider submitting comments given the impact that the AI Act could have on GeoAI.
Artificial intelligence in New South Wales (Parliament of New South Wales)
Key recommendations of the report include greater protection of the intellectual property rights of creators; liaising with state and federal counterparts to ensure a consistent approach to the governance of artificial intelligence; and that the Government conduct a regulatory gap analysis, in consultation with relevant industry, technical, and legal experts, to (x) assess the relevance and application of existing law to artificial intelligence, (y) identify where changes to existing legislation may be required, and (z) determine where new laws are needed.
The Deep Dive
Readers of this newsletter may recall an earlier discussion around whether the term GeoAI is misleading, as it is simply AI. At the time I said that I generally agree with the premise that it hurts the geospatial community to separate itself from the larger technology community, but I do believe that the geospatial ecosystem differs from other technology sectors, and that there are novel legal issues associated with geospatial information. Therefore, I chose to use the term GeoAI because I believe the legal framework that develops around AI will have a disproportionate impact on geospatial companies.
I came across a good example of this phenomenon when trying to determine how the EU AI Act (the “AI Act”) would apply to businesses that build AI models by fine-tuning the Prithvi-weather-climate model. The Prithvi-weather-climate model, a collaboration between NASA and IBM Research, is a foundation model pre-trained on Modern-Era Retrospective analysis for Research and Applications, Version 2 (MERRA-2) data to support a variety of weather and climate applications. According to NASA, “[t]his application of artificial intelligence (AI) to NASA weather and climate data [emphasis added] has the potential to not only improve public safety [emphasis added], but open new doors into expanding our understanding of meteorological processes.”
The model highlights several of the issues that I consider to be unique to the geospatial sector. First, the versatility of geospatial information makes the same data sets useful for a variety of applications, such as both climate and weather. However, as will be discussed below, the legal risks associated with those applications will vary. Also, the geospatial ecosystem is broad, in this case consisting of government agencies (NASA), the private sector (IBM), and the research community (e.g., climatologists). Each stakeholder in the ecosystem is subject to different laws and policies and has different legal and operational risks that must be considered. For example, NASA has very different authorities and obligations than a research lab at a university. Yet they are both part of the ecosystem, using the same data (and, in this case, models built on the data).
The AI Act regulates both AI systems and “general-purpose AI models.” A general-purpose AI model is defined as “an AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications.” However, the AI Act does not apply to models, including their output, specifically developed and put into service for the sole purpose of scientific research and development.
Presumably the Prithvi-weather-climate model, as used by NASA and researchers, would be exempt under the AI Act. However, according to reports, NASA plans to publish the Prithvi-weather-climate model on Hugging Face, a platform that helps users build, deploy, and train machine learning models. This is in keeping with NASA’s open science principles of making all its data accessible and usable by communities everywhere. Therefore, companies should be able to build AI systems using the model, or fine-tune the Prithvi-weather-climate model to create their own.
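For readers who want a sense of what that looks like in practice, here is a minimal sketch (in Python, using the huggingface_hub library) of how a company might pull published model weights from Hugging Face for local fine-tuning. The repository identifier is a placeholder of my own invention; check Hugging Face for the actual listing once NASA publishes the model.

```python
# Minimal sketch: download published model weights from Hugging Face for
# local fine-tuning. Requires `pip install huggingface_hub`. The repo_id
# below is a placeholder, not a confirmed listing for the
# Prithvi-weather-climate model.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="nasa/prithvi-weather-climate")  # hypothetical repo ID
print("Model files downloaded to:", local_dir)
```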
Given NASA’s open science principles and Hugging Face’s open-source roots, the Prithvi-weather-climate model presumably will be released under some type of open-source license. The AI Act provides that general-purpose AI models released under “free and open-source licenses” are not subject to many of the Act’s transparency provisions, provided certain information about the model, including its weights, model architecture, and model usage, is made publicly available. A license may still qualify as free and open source where it requires users to credit the original provider of the model and to ensure that identical or comparable terms of distribution “are respected.”
Given all this, more research would be required on what information about the model NASA plans to make public to determine whether the “free and open-source license” exclusion applies. Also, as a U.S. federal agency, NASA faces certain limitations on what licensing requirements it can, or should, impose on users, so further research also would be required on the terms of the model’s license on Hugging Face. If either of these conditions is not met, the model might not be considered free and open source, and companies using it might be subject to the AI Act’s transparency requirements.
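One quick first-pass check a company could run is to inspect the license metadata declared on the model’s Hugging Face listing. This is only metadata; the license text itself, read against the AI Act’s criteria, is what actually governs. Again, the repository identifier below is a placeholder.

```python
# First-pass license check: read the license tag declared in a model's
# Hugging Face metadata. Only the actual license text governs; this just
# flags what the model card declares. The repo_id is a placeholder.
from huggingface_hub import HfApi

info = HfApi().model_info("nasa/prithvi-weather-climate")  # hypothetical repo ID
license_tags = [tag for tag in info.tags if tag.startswith("license:")]
print("Declared license tags:", license_tags or "none declared")
```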
Moreover, the AI Act imposes requirements on general-purpose AI models, even those that are free and open source, if the model poses a “systemic risk.” Systemic risk is defined under the AI Act as a risk that is (i) specific to the high-impact capabilities of general-purpose AI models, or (ii) having a significant impact on the [EU] market due to their reach, or due to actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or the society as a whole, that can be propagated at scale across the value chain. This includes the risk “that a particular event could lead to a chain reaction” that could affect a community, a city, or a domain.
The AI Act further defines high-impact capabilities in general-purpose AI models as capabilities that match or exceed the capabilities recorded in the most advanced AI models, as measured in floating point operations (‘FLOPs’). Article 51 states that a general-purpose AI model shall be presumed to have high-impact capabilities when the cumulative amount of computation used for its training, measured in FLOPs, is greater than 10^25. The EU plans to adjust this threshold over time. Therefore, a company will need to understand the Prithvi-weather-climate model’s capabilities, as measured in FLOPs, as well as keep up with changes in the EU’s thresholds, benchmarks, and indicators for the most advanced AI models.
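To make the threshold concrete, here is a back-of-the-envelope sketch. The 6 × parameters × training-tokens approximation for transformer training compute is a common rule of thumb from the scaling-law literature, not anything the AI Act prescribes, and the input numbers are purely illustrative rather than the Prithvi-weather-climate model’s actual figures.

```python
# Back-of-the-envelope estimate of training compute versus the AI Act's
# Article 51 presumption. The 6 * params * tokens heuristic is a common
# rule of thumb for transformer training FLOPs, NOT a method prescribed
# by the AI Act. All input numbers below are illustrative.

AI_ACT_FLOP_PRESUMPTION = 1e25  # Article 51: presumption of high-impact capabilities


def estimate_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Approximate total training compute as 6 * parameters * tokens."""
    return 6.0 * n_parameters * n_training_tokens


# Illustrative numbers only -- not the Prithvi-weather-climate model's actual figures.
flops = estimate_training_flops(n_parameters=2.3e9, n_training_tokens=1e12)
print(f"Estimated training compute: {flops:.2e} FLOPs")
print(f"Exceeds the 10^25 presumption: {flops > AI_ACT_FLOP_PRESUMPTION}")
```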
Even if the Prithvi-weather-climate model does not have “high-impact” capabilities, it still might be considered to pose a systemic risk. Article 51 sets out several ways a model can be determined to have systemic risk, including based on a decision of the Commission, or of qualified experts, that it has capabilities that meet certain criteria. One of the factors, for example, is “whether it has a high impact on the internal market, which is presumed when it has been made available to at least 10,000 registered business users established in the [European] Union”.
While an AI model for climate likely would not reach 10,000 business users, a successful AI weather model could. As a result, a business would need to monitor whether its model’s capabilities and reach might subject it to Article 56, which includes obligations to perform model evaluations, including conducting and documenting “red team testing of the model with a view to identifying and mitigating systemic risk.”
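Putting the two indicators discussed above together, a business’s internal monitoring might boil down to something like the following sketch. The thresholds reflect the AI Act as described in this newsletter; the function, its name, and the example inputs are hypothetical.

```python
# Hypothetical compliance check combining the two systemic-risk indicators
# discussed above. The thresholds come from the AI Act as described here;
# the function name, inputs, and example values are illustrative.

FLOP_PRESUMPTION = 1e25           # Article 51 training-compute presumption
EU_BUSINESS_USER_FACTOR = 10_000  # registered business users in the EU


def systemic_risk_indicators(training_flops: float, eu_business_users: int) -> list[str]:
    """Return the systemic-risk indicators a model appears to trip."""
    tripped = []
    if training_flops > FLOP_PRESUMPTION:
        tripped.append("presumed high-impact capabilities (FLOPs threshold)")
    if eu_business_users >= EU_BUSINESS_USER_FACTOR:
        tripped.append("high impact on the internal market (10,000+ EU business users)")
    return tripped


# Illustrative inputs for a hypothetical fine-tuned weather model.
for indicator in systemic_risk_indicators(training_flops=1.4e22, eu_business_users=12_500):
    print("Indicator tripped:", indicator)
```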
As a reminder, this analysis is not meant to be legal advice. As noted above, much more research needs to be done on the Prithvi-weather-climate model to determine how it might be classified under the AI Act. Moreover, there are many open issues with respect to the AI Act that the EU will address in the coming months before enforcement begins. But hopefully this discussion shows the challenge that geospatial businesses will face in trying to determine whether compliance with the AI Act is required. While every business using AI in Europe will face these challenges, for the reasons noted above, I expect it will be even more challenging for many geospatial businesses.