GeoAI and the Law Newsletter
Keeping geospatial professionals informed on the legal and policy issues that will impact GeoAI
Recent Developments in GeoAI and the Law
Most changes in the law do not happen overnight. It can take years for a law to move from concept to reality, as words are carefully parsed and negotiated and various stakeholders weigh in. This has certainly been the case in the United States, particularly over the past few years, given the partisanship in Washington.
However, every now and then a Black Swan event results in legislation being hastily drafted and rushed through. For example, the events of 9/11 led to quick passage of the Homeland Security Act of 2002, which moved a large number of agencies into a newly created Department of Homeland Security and granted it significant new authorities.
The fallout from the recent global IT outage, apparently connected to a software update from CrowdStrike, is not over. However, its severity and repercussions seem significant enough that Congress will likely feel the need to act quickly. It is too early to predict what law(s) might arise, but any measures taken will almost certainly impact AI and, by implication, GeoAI.
Recommended Reading
DHS AI Corps Lead: GenAI Needs Program Akin to FedRAMP
To quote from the article: “[a] top official with the Department of Homeland Security (DHS) said Thursday that generative AI (GenAI) tools used within the Federal government should be risk-evaluated through a program similar to the Federal Risk and Authorization Management Program (FedRAMP).”
Employers Find Openings to Share AI Bias Liability With Vendors
In finding that Workday’s automated tools could be held responsible for unlawful discrimination, the court noted, in comparing the tool to a spreadsheet program, that “[b]y contrast, Workday does qualify as an agent because its tools are alleged to perform a traditional hiring function of rejecting candidates at the screening stage and recommending who to advance to subsequent stages, through the use of artificial intelligence and machine learning.”
Berkeley Research Group’s (BRG) 2024 Global AI Regulation Report evaluates where AI regulation stands, the challenges organizations face in complying, and what key stakeholders see as most important for the development of effective AI policy moving forward.
Privacy Enhancing Technology (PET): Proposed Guide on Synthetic Data Generation
Published by Singapore’s Data Protection Authority, the guide focuses on the use of synthetic data generation to produce structured data. It includes good practices and risk assessments/considerations for generating synthetic data in a way that minimizes risk, as well as governance controls, contractual processes, and technical measures.
AI system development: CNIL’s recommendations to comply with the GDPR
France’s Data Protection Authority has published its first recommendations on the application of the GDPR to the development of artificial intelligence systems.
The Deep Dive
Each week, the Deep Dive will provide a detailed analysis of how a particular legal matter (e.g., a case, law, regulation, policy, issue) pertaining to AI could impact the geospatial community and/or GeoAI in particular.
The final text of the EU AI Act (the “AI Act”) was published on July 12, 2024. It enters into force twenty (20) days after publication (i.e., on August 1, 2024), although many provisions will be phased in over the following twenty-four (24) months. The AI Act regulates “AI systems,” defined as “a machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”
The AI Act classifies AI systems by level of risk, which is determined by “[t]he combination of the probability of an occurrence of harm and the severity of the harm.” The level of risk determines the responsibilities of “providers” (a company that develops or has developed an AI system or a general-purpose AI model) and “deployers” (an organization that uses an AI system).
There are several levels of risk. One of the most important for GeoAI businesses is “high-risk AI systems.” These are defined as AI systems that could negatively impact safety or fundamental rights. High-risk systems include products already covered by EU product safety legislation (e.g., medical devices, aircraft, cars) as well as AI systems used in sectors specifically listed in the AI Act that could impact safety or fundamental rights. These sectors include:
Biometrics
Critical infrastructure
Employment
Education
Access to essential private and public services (including eligibility for public services, credit, or healthcare)
Law enforcement
Border Control
The AI Act sets forth specific obligations with which providers of high-risk systems must comply. These include:
Risk management
Data quality and governance
Documentation and traceability
Transparency
Human oversight
Accuracy, cybersecurity and robustness
Demonstrated compliance via conformity assessments
A company selling GeoAI in any of these sectors should assess whether its system qualifies as high risk for purposes of the AI Act. If a company determines that the requirements for high-risk AI systems do apply, it should immediately begin taking steps to ensure that it can comply, as several obligations are likely to require significant changes to current operations and most must be implemented by August 2, 2026.
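For teams beginning that assessment, the first pass can be framed as a simple screening question: does any of the system’s intended use cases fall into a listed sector, and if so, which provider obligations need review? The sketch below is purely illustrative, not a legal determination; the function and category names (screen_geoai_system, HIGH_RISK_SECTORS, PROVIDER_OBLIGATIONS) are our own shorthand for the lists above, and the AI Act’s actual scope turns on detailed annexes that require legal analysis.

```python
# Illustrative first-pass screening aid only; not legal advice and not the
# AI Act's actual classification method. Sector and obligation names are
# paraphrased from the lists in this newsletter.

HIGH_RISK_SECTORS = {
    "biometrics",
    "critical_infrastructure",
    "employment",
    "education",
    "essential_services",  # e.g., eligibility for public services, credit, healthcare
    "law_enforcement",
    "border_control",
}

PROVIDER_OBLIGATIONS = [
    "risk management",
    "data quality and governance",
    "documentation and traceability",
    "transparency",
    "human oversight",
    "accuracy, cybersecurity and robustness",
    "conformity assessment",
]

def screen_geoai_system(use_cases: set[str]) -> dict:
    """Flag which intended use cases fall into a listed high-risk sector.

    Returns the matched sectors and, if any match, the provider obligations
    to review ahead of the August 2, 2026 compliance deadline.
    """
    matched = sorted(use_cases & HIGH_RISK_SECTORS)
    return {
        "potentially_high_risk": bool(matched),
        "matched_sectors": matched,
        "obligations_to_review": PROVIDER_OBLIGATIONS if matched else [],
    }

# Example: a geospatial analytics product marketed for utility-network
# monitoring, site-security analytics, and precision agriculture.
print(screen_geoai_system({"critical_infrastructure", "biometrics", "agriculture"}))
```

Any positive match from a screen like this should trigger review by counsel; the Act’s categories are narrower and more nuanced than a keyword comparison can capture.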
GeoAI and the Law is not legal advice. The reader should consult with a trained lawyer on legal matters associated with AI or geospatial technology or information.
GeoAI and the Law is a free newsletter. However, to foster the development of lawyers who understand this critical sector, it accepts contributions to fund law students who wish to attend geospatial conferences.