GeoAI and the Law Newsletter
Keeping geospatial professionals informed on the legal and policy issues that will impact GeoAI.
Summary of Recent Developments in GeoAI and the Law
The suggested readings this week include legal developments from around the globe. Of particular interest to me from a geospatial standpoint was the UN’s resolution on AI. The resolution, which promotes the safe and secure use of AI to achieve the Sustainable Development Goals (SDGs), encourages stakeholders from government, the private sector and civil society to “develop and support regulatory and governance approaches and frameworks related to safe, secure and trustworthy artificial intelligence systems that create an enabling ecosystem at all levels”. Given the growth in GeoAI solutions for the SDGs, I hope the geospatial communities within each nation take an active role in the development of these frameworks.
Recommended Reading
OECD's live repository of AI strategies & policies - OECD.AI
The OECD has developed a live repository of over 1,000 AI policy initiatives from 69 countries, territories and the EU.
Belgian Data Protection Authority issues opinion on repurposing of data for ML/AI model training under GDPR
See beslissing-ten-gronde-nr.-46-2024.pdf (gegevensbeschermingsautoriteit.be).
State of AI Regulation in Africa
A comprehensive list of AI regulations in Africa
UN adopts landmark resolution to regulate AI globally
The resolution can be found here: Seizing the opportunities of safe, secure and trustworthy artificial intelligence systems for sustainable development
China’s New Draft AI Law Prioritizes Industry Development
The draft law was reportedly prepared by Chinese academics and is intended to promote AI development as well as protect Chinese citizens. (A link to the draft law is included in the article.)
The Deep Dive
Each week, the Deep Dive will provide a detailed analysis of how a particular legal matter (e.g., a case, law, regulation, policy, issue) pertaining to AI could impact the geospatial community and/or GeoAI in particular.
As noted in last week’s edition, one of the challenges for geospatial organizations when tracking laws and regulations concerning AI is that a number of different definitions of AI systems are being used worldwide.
Another challenge is that these laws, regulations and policies take many different forms. For example, the EU AI Act has several different designations for AI systems depending on the perceived risk in their use: (1) unacceptable risk (presents a threat to individuals), (2) high risk (could negatively impact safety or human rights), (3) general purpose (i.e., generative AI), (4) limited risk (does not pose a significant risk of harm to the health, safety, or fundamental rights of natural persons) and (5) no risk. The requirements and oversight imposed on developers and deployers of AI systems vary depending upon the designation. Systems posing unacceptable risks, for example, will be banned; those that are high risk must undergo assessment before deployment and may need to be registered; and limited risk AI systems may only require notice.
The State of Utah took another approach with the Artificial Intelligence Policy Act, which will go into effect in May. The Act establishes liability for uses of AI that violate Utah’s consumer protection laws if proper disclosure is not made. It also allows companies considering developing an AI product to enter into a risk mitigation agreement with the Office of Artificial Intelligence Policy (a newly created agency) while more detailed regulations are being developed.
Governor Youngkin of Virginia, on the other hand, issued Executive Order Number 30 (2024) in January, which included provisions directing the Virginia IT Agency (VITA) to issue
“guiding principles for the ethical use of AI, general parameters to determine the business case for AI, a mandatory approval process for all AI capabilities, a set of mandatory disclaimers to accompany any products or outcomes generated by AI, methods to mitigate third-party risks, and measures to ensure that the data of private citizens are protected”.
VITA subsequently published Policy Standards for the Utilization of Artificial Intelligence by the Commonwealth of Virginia, which directs state agencies to procure and use AI only “if there is a positive outcome for the citizens of the Commonwealth.” AI should be the “optimal solution” for the desired outcome, and agencies should conduct a regulatory impact analysis to assess costs and benefits. A state registry for AI was also created, and agencies are instructed to submit any intended use of AI to the registry for approval by several departments.
As a result, both public and private sector geospatial organizations increasingly need to consider the legal environment in which they operate before deploying GeoAI solutions.