GeoAI and the Law Newsletter
Keeping geospatial professionals informed on the legal and policy issues that will impact GeoAI.
Recent Developments in GeoAI and the Law
One of the benefits of using Substack is that you can see how many readers open the links included in the newsletters. Surprisingly, although a large percentage of subscribers open the newsletter, very few open the links. To make sure the newsletter is providing subscribers with worthwhile information, please let me know in the comments section below whether the links are of interest and, if not, what you might like to see instead.
Recommended Reading
Newsom vetoes controversial AI bill (The Hill)
Readers of this newsletter have been able to follow this bill as it moved through the California state legislature. From the article, “’While well-intentioned, SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data,’ Newsom wrote.”
Artificial Intelligence Systems and the GDPR: A Data Protection Perspective (Data Protection Authority of Belgium)
An excellent discussion of the intersection of the General Data Protection Regulation (GDPR) and the Artificial Intelligence (AI) Act in the context of AI system development.
The National Guidelines On AI Governance and Ethics (Malaysia)
The objectives include supporting the implementation of the Malaysia National AI Roadmap 2021–2025, building trustworthiness in AI, managing risks arising from the development and deployment of AI technology, and maximizing the benefits of AI to the nation.
Insuring Generative AI: Risks and Mitigation Strategies (MunichRe)
Insurance is an important tool in any risk management strategy. This report discusses some of the novel risks associated with Generative AI as well as how an entity can assess, price, and insure some of these risks.
Consultation Paper on AI Regulation (UNESCO)
“Since 2016, over thirty countries have passed laws explicitly mentioning Artificial Intelligence (AI) and in 2024, the discussion about AI bills in legislative bodies has increased globally. To better understand the current AI governance environment, UNESCO has mapped the different regulatory approaches for AI. The consultation paper will be published as a policy brief to inform and guide parliamentarians in crafting evidence-based AI legislation.”
The Deep Dive
As noted in the UNESCO report referenced in Recommended Reading, governments around the world are rushing to develop legal and regulatory frameworks for AI. These laws and regulations are being written in ways that are likely to have a significant impact on many of the applications being touted for "GeoAI", such as climate change and the Sustainable Development Goals (SDGs). Yet there is little evidence that the geospatial sector is providing input into these laws and regulations, nor is there much discussion within the geospatial community about the impact they could have on the geospatial sector.
This lack of awareness is unfortunate, as AI regulators are increasingly bringing geospatial data and technology within the scope of AI risk. The most recent case in point is the report from the Data Protection Authority of Belgium on Artificial Intelligence Systems and the GDPR, referenced in Recommended Reading. The report discusses several instances in which standard uses of geospatial information can create risks under these frameworks.
The report notes, for example, that GDPR requires personal data to be accurate and, where necessary, kept up to date. The AI Act builds on this principle by requiring high-risk AI systems to use high-quality and unbiased data to prevent discriminatory outcomes. The example below, taken from the report, shows how outdated geospatial information can introduce bias and therefore raise concerns under the AI Act.
“a financial institution develops an AI system to automate loan approvals. The system analyzes various data points about loan applicants, including credit history, income, and demographics (postal code). However, the training data for the AI system unknowingly reflects historical biases: the data stems from a period when loans were more readily granted in wealthier neighborhoods (with a higher average income). The AI system perpetuates these biases as loan applicants from lower-income neighborhoods might be systematically denied loans, even if they are financially qualified. This results in a discriminatory outcome and might raise serious concerns about the system's compliance with the AI Act.”
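To make the mechanism concrete, here is a minimal, purely illustrative sketch in Python. All of the data, postal codes, and field names are synthetic and hypothetical rather than drawn from the report; the sketch simply shows how a model scored on historically skewed, postal-code-linked approvals can rank two equally qualified applicants very differently.

```python
# Illustrative sketch only: synthetic data and hypothetical names, not taken from
# the Belgian DPA report. Shows how a postal-code feature trained on historically
# skewed approvals can reproduce that skew for equally qualified applicants.

import random

random.seed(42)

def make_history(n=1000):
    """Synthetic 'historical' loan records where approval tracks postal code."""
    records = []
    for _ in range(n):
        postal = random.choice(["WEALTHY-01", "LOWINC-02"])
        income = random.gauss(60_000 if postal == "WEALTHY-01" else 45_000, 10_000)
        base_rate = 0.85 if postal == "WEALTHY-01" else 0.35  # historical bias
        approved = random.random() < base_rate
        records.append({"postal": postal, "income": income, "approved": approved})
    return records

history = make_history()

def postal_rates(records):
    """Historical approval rate per postal code."""
    totals, approvals = {}, {}
    for r in records:
        totals[r["postal"]] = totals.get(r["postal"], 0) + 1
        approvals[r["postal"]] = approvals.get(r["postal"], 0) + r["approved"]
    return {p: approvals[p] / totals[p] for p in totals}

rates = postal_rates(history)

def score(applicant):
    """Naive 'model': blend the postal code's historical rate with an income term."""
    income_term = min(applicant["income"] / 100_000, 1.0)
    return 0.7 * rates[applicant["postal"]] + 0.3 * income_term

# Two equally qualified applicants who differ only by postal code.
a = {"postal": "WEALTHY-01", "income": 55_000}
b = {"postal": "LOWINC-02", "income": 55_000}
print(f"Score, wealthy postal code:      {score(a):.2f}")
print(f"Score, lower-income postal code: {score(b):.2f}")
# The second applicant scores markedly lower despite identical finances -- the kind
# of outcome the AI Act's data-quality provisions are meant to prevent.
```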
Similarly, GDPR requires that personal data be collected for specified and legitimate purposes and that its use be limited to what is necessary for those purposes. This requirement is intended both to prevent excessive collection and to ensure that personal data is not used beyond its intended purpose. The AI Act strengthens the purpose limitation principle for high-risk AI systems by emphasizing the need for the intended purposes to be well-defined and documented. As an example, the report highlights a loan approval AI system used by a financial institution that:
“in addition to standard identification data and credit bureau information, also utilizes geolocation data (e.g., past locations visited) and social media data (e.g., friends' profiles and interests) of a data subject. This extensive data collection, including geolocation and social media data, raises concerns about the system's compliance with the GDPR.”
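As a rough illustration of the purpose limitation principle in practice, the sketch below (with hypothetical field names, not taken from the report) filters an application record down to the fields documented for the loan approval purpose, so that geolocation history and social media data are excluded before they ever reach a model.

```python
# Illustrative sketch only: hypothetical field names. A purpose-limitation filter
# that keeps only the fields documented for the loan-approval purpose.

DOCUMENTED_PURPOSE_FIELDS = {"applicant_id", "credit_history", "income", "employment_status"}

def minimize(raw_record: dict) -> dict:
    """Drop any field not documented for the stated purpose and report what was dropped."""
    kept = {k: v for k, v in raw_record.items() if k in DOCUMENTED_PURPOSE_FIELDS}
    dropped = sorted(set(raw_record) - DOCUMENTED_PURPOSE_FIELDS)
    if dropped:
        print(f"Excluded fields outside the documented purpose: {dropped}")
    return kept

application = {
    "applicant_id": "A-1001",
    "credit_history": "good",
    "income": 55_000,
    "employment_status": "employed",
    "geolocation_history": ["51.05,3.72", "50.85,4.35"],          # past locations visited
    "social_media_profile": {"friends": 312, "interests": ["travel"]},
}

model_input = minimize(application)  # only the documented fields remain
```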
The overlap between geospatial applications that use AI and the new laws and regulations addressing perceived AI risks will continue to grow. Hopefully, the geospatial community will not recognize this intersection too late.