GeoAI and the Law Newsletter
Keeping geospatial professionals informed on the legal and policy issues that will impact GeoAI.
Summary of Recent Developments in GeoAI and the Law
One of the challenges of discussing GeoAI from a legal standpoint is that each of its constituent components (i.e., geospatial and AI) covers a wide range of technologies, disciplines, and applications. For example, the term artificial intelligence is often used to include machine learning and deep learning, while geospatial technologies draw on geographic information science, remote sensing, and computer science. Similarly, GeoAI is being used in numerous applications, including detecting illegal mining, improving health care, supporting defense and intelligence operations, and developing “smart cities.” As discussed in the Deep Dive, these terms can be misleading and require different approaches when determining the legal risks that a geospatial organization will face when developing or deploying AI.
Recommended Reading
FTC Blog Post on Legal and Ethical Use of AI
Recommendations include: (i) don’t misrepresent what these services are or can do; (ii) don’t offer these services without adequately mitigating risks of harmful output; and (iii) don’t violate consumer privacy rights.
Governing with Artificial Intelligence: Are Governments Ready?
This policy paper published by the Organisation for Economic Co-operation and Development (OECD) outlines the key trends and policy challenges in the development, use, and deployment of AI in and by the public sector. It includes discussions of (i) the potential benefits and specific risks associated with AI use in the public sector; (ii) how AI in the public sector can be used to improve productivity, responsiveness, and accountability; and (iii) the key policy issues, with examples of how countries across the OECD are addressing them.
A View from DC: One step forward for the American Privacy Rights Act (IAPP)
This article discusses the removal of several AI-related provisions from a bill in Congress that would, for the first time, create a comprehensive federal privacy law. Many believe there is a good chance the bill could become law.
EDPS Guidelines on generative AI: embracing opportunities, protecting people
The European Data Protection Supervisor (EDPS) published guidance earlier this month. One of the topics it addressed was web scraping, stating: “[t]he EDPS has already cautioned against the use of web scraping techniques to collect personal data, through which individuals may lose control of their personal information when these are collected without their knowledge, against their expectations, and for purposes that are different from those of the original collection. The EDPS has also stressed that the processing of personal data that is publicly available remains subject to EU data protection legislation. In that regard, the use of web scraping techniques to collect data from websites and their use for training purposes might not comply with relevant data protection principles, including data minimisation and the principle of accuracy, insofar as there is no assessment on the reliability of the sources.”
The issue of web scraping was also addressed in a recent report published by the European Data Protection Board.
Apple to delay launch of AI-powered features in Europe, blames EU tech rules
The announcement highlights that while the EU AI Act is not yet in force, there are other laws - in the EU and elsewhere - that impact the development and deployment of AI systems.
The Deep Dive
The term GeoAI is often used as if it referred to a single technology or a narrow set of applications. However, GeoAI actually refers to a broad range of technologies. For example, Claude 3.5 Sonnet defined GeoAI as “the integration of artificial intelligence techniques, including machine learning and deep learning, with geospatial data and technologies to analyze, interpret, and derive insights from location-based information. It combines principles from geographic information science, remote sensing, and computer science to solve complex spatial problems and enhance decision-making processes in various domains.”
The term “geospatial data” is similarly misleading, as it can include diverse data types such as satellite imagery, GPS coordinates, LiDAR point clouds, census data, and weather data. Even the term large language model (LLM) can apply to distinct AI systems that generate text, images, sound, music, and video. (Soon, a single LLM that can generate all of these content types will be generally available.) Moreover, as noted above, the power of both geospatial technologies and AI is that they can be used in a wide range of applications.
Because the technologies and applications are broad and diverse, so are the potential legal issues. While it will be relatively easy to identify the legal issues or laws that might arise, understanding how to apply the law to a particular technology or application is much more challenging. For example, as discussed in last week’s edition of GeoAI and the Law, copyright protection varies between geospatial data types. Similarly, concerns over location privacy will vary depending upon the geospatial information (i.e., its type, accuracy, completeness, and timeliness) and the particular law’s definition of what is being protected.
In addition, as noted in previous editions of this newsletter, GeoAI systems will be subject to different regulatory regimes depending upon the sector(s) in which their applications are used. For example, the Department of Health and Human Services will have different regulations pertaining to AI than the Department of Homeland Security. Similarly, while GeoAI applications for smart cities could be subject to federal, state and local laws and regulations, defense and intelligence applications will have their own set of requirements.
Finally, the legal and policy framework around AI will likely be “risk-based.” That is, certain AI systems or applications will require greater legal and regulatory scrutiny because of their potential for harm. For example, the use of machine learning for computer vision will be subject to a different regulatory regime than generative AI. Similarly, as discussed here, “safety-impacting” GeoAI applications are likely to be subject to different legal requirements than “rights-impacting” GeoAI applications. Understanding the difference will require both identifying the potential risks and developing mitigating measures.
To navigate these issues when developing or deploying AI, geospatial organizations should consider creating a team with a collective understanding of AI, geospatial technology, and the relevant laws. Such multi-disciplinary teams are considered best practice in AI governance and, for the reasons described above, are arguably even more important for GeoAI. To understand the legal obligations and limitations that apply to the organization’s particular use of GeoAI, these teams should consider the sector(s) being targeted, the types of geospatial information being analyzed or created, the potential risks associated with the use of the AI system, and ways to mitigate such risks. Such a multi-disciplinary team will be critical to balancing GeoAI’s potential with operational practicalities, legal requirements, and risks.