GeoAI and the Law Newsletter
Keeping geospatial professionals informed on the legal and policy issues that will impact GeoAI.
Summary of Recent Developments in GeoAI and the Law
It struck me this week, as I was going through the Recommended Readings, that I cannot remember a time in my 30 years of practicing law when a technology and the effort to regulate it have both evolved so rapidly. A colleague compared the development of AI to the development of the internet. In some ways I think he is correct: as with AI, lawmakers and regulators back then did not fully understand the internet or its implications. However, the internet did not advance as quickly as AI is advancing, and in the internet’s early days governments (particularly in the U.S.) generally took a more hands-off approach, which is arguably a big reason why the internet flourished.
Unlike in some recent technology sectors, such as new space or drones and other autonomous vehicles, government approval is, for the most part, not currently needed to operate an AI system. Nor, unlike in the space sector, does the government need to be a major customer for the technology to be adopted. This, in addition to other technological advancements such as computing power and the cloud, is why we have seen such rapid advancements in AI.
However, in contrast with the internet era, governments are scrambling to regulate the technology (paradoxically, while also trying to promote the growth and use of AI within their nations). This is resulting in a patchwork of laws, regulations, Executive Orders, policies, guidance, and more across the globe. Some of these efforts, while (mostly) well intentioned, will prove to be flawed or misconceived. Others will likely be conflicting or confusing.
Time will tell what impact these efforts will have on the growth and adoption of GeoAI. In any event, it will be an interesting journey, from both a technological and a legal standpoint.
Recommended Reading
Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence
OMB’s follow-up guidance to federal agencies on the use of artificial intelligence.
Artificial Intelligence (Regulation) Bill gets a second reading in UK parliament.
According to this report from IAPP, “a guiding principle in the proposed bill revolves around transparency, whereby organizations developing, deploying or using AI must be transparent about it, while testing it thoroughly and in conjunction with existing consumer and data protection as well as intellectual property laws.”
NTIA Artificial Intelligence Accountability Report
One of the eight recommendations is that the “federal government should require that government suppliers, contractors, and grantees adopt sound AI governance and assurance practices for AI used in connection with the contract or grant, including using AI standards and risk management practices recognized by federal agencies, as applicable.”
AI code of ethics, governance guidelines almost complete, says ministry
Earlier this month, the Malaysian government reported that its AI code of ethics and governance guidelines were almost complete. According to a report published this week (behind a paywall), industry groups claim the draft AI guidelines contain confusing, inconsistent recommendations.
Artificial Intelligence Foundation Models Report
The report, published by Australia’s Commonwealth Scientific and Industrial Research Organisation (CSIRO), includes a discussion of policy levers to help governments harness the opportunities and mitigate the risks of AI.
The Deep Dive
Each week, the Deep Dive will provide a detailed analysis on how a particular legal matter (e.g., a case, law, regulation, policy) pertaining to AI could impact the geospatial community and/or GeoAI in particular.
As noted above, this week the Office of Management and Budget (OMB) published “Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence” – guidance for federal agencies wishing to develop, procure, or use AI (the “Guidance”). Of particular note for the geospatial community are the measures federal agencies must take in order to use AI in ways that are considered to impact safety (“safety-impacting”) or rights (“rights-impacting”). Particular attention is paid to these uses because of the potential consequences if the AI system is flawed or fails to act as intended.
An annex to the Guidance includes a list of uses that are presumed to be either safety-impacting or rights-impacting. Many of these uses have a location or geospatial component.
For example, purposes that are presumed to be safety-impacting include:
· Autonomously or semi-autonomously moving vehicles, whether on land, underground, at sea, in the air, or in space;
· Controlling the transport, safety, design, or development of hazardous chemicals or biological agents;
· Controlling industrial emissions and environmental impacts; or
· Choosing to summon first responders to an emergency.
Use of AI is presumed to be rights-impacting if it is used, or expected to be used, in real-world conditions to control or significantly influence the outcomes of any of the following:
· In law enforcement contexts, detecting gunshots, tracking personal vehicles over time in public spaces (including license plate readers), or conducting physical location-monitoring or tracking of individuals;
· Monitoring individuals’ physical location for immigration and detention-related purposes, or forecasting the migration activity of individuals;
· Monitoring tenants in the context of public housing;
· Providing valuations for homes or underwriting mortgages;
· Conducting workplace surveillance;
· Making insurance determinations and risk assessments; or
· Making decisions regarding access to, eligibility for, or revocation of critical government resources or services.
By December 1, before using new or existing covered safety-impacting or rights-impacting AI, a federal agency must:
· Complete an AI impact assessment;
· Test the AI for performance in a real-world context; and
· Independently evaluate the AI.
In addition, on an ongoing basis, agencies should:
· Conduct ongoing monitoring;
· Regularly evaluate risks from the use of AI;
· Mitigate emerging risks to rights and safety;
· Ensure adequate human training and assessment;
· Provide additional human oversight, intervention, and accountability as part of decisions or actions that could result in a significant impact on rights or safety; and
· Provide public notice and plain-language documentation.
Also, before using new or existing rights-impacting AI, agencies should:
· Identify and assess AI’s impact on equity and fairness, and mitigate algorithmic discrimination when it is present; and,
· Consult and incorporate feedback from affected communities and the public.
On an ongoing basis, when using AI for such purposes, agencies should also:
· Conduct ongoing monitoring and mitigation for AI-enabled discrimination;
· Notify negatively affected individuals;
· Maintain human consideration and remedy processes; and
· Maintain options to opt out of AI-enabled decisions.
The Guidance provides useful insight into both the risks governments perceive in the use of AI and possible solutions. The entire geospatial ecosystem should take note.