GeoAI and the Law Newsletter
Keeping geospatial professionals informed on the legal and policy issues that will impact GeoAI.
Recent Developments in GeoAI and the Law
This edition’s Recommended Reading features a diverse set of links, including an analysis of intellectual property rights and generative AI and an effort to develop a comprehensive set of common references and a taxonomy for risks associated with AI. In addition, South Africa and Australia recently published AI policies. The Deep Dive highlights the challenges geospatial companies building or deploying GeoAI will face at the national level as AI laws and policies develop around the globe.
Recommended Reading
South Africa National Artificial Intelligence Policy Framework (Republic of South Africa - Department of Communications and Digital Technologies)
Generative AI and intellectual property: Copyright implications for AI inputs, outputs (IAPP)
The second in a two-part series exploring intellectual property laws, the issues they raise, and how they are being impacted by the development and application of generative AI systems.
Policy for the responsible use of AI in government (Australian Government - Digital Transformation Agency)
Effective September 1, 2024, this policy “aims to create a coordinated approach to government’s use of AI and has been designed to complement and strengthen – not duplicate – existing frameworks in use by the APS [Australian Public Service]. In recognition of the speed and scale of change in this area, the policy is designed to evolve over time as the technology changes, leading practices develop, and the broader regulatory environment matures.”
AI Risk Repository (MIT)
“The risks posed by Artificial Intelligence (AI) are of considerable concern to academics, auditors, policymakers, AI companies, and the public. However, a lack of shared understanding of AI risks can impede our ability to comprehensively discuss, research, and react to them. This paper addresses this gap by creating an AI Risk Repository to serve as a common frame of reference. This creates a foundation for a more coordinated, coherent, and complete approach to defining, auditing, and managing the risks posed by AI systems.”
Illinois Amends its Human Rights Act to include AI
The State of Illinois amended the Illinois Human Rights Act to cover entities that use AI for various employment-related tasks (e.g., hiring, firing, promoting): if the use of AI subjects employees to discrimination on the basis of classes protected under the Act, it may constitute a violation. Of note, the law specifically prohibits using zip code as a proxy for protected classes.
The Deep Dive
I have been interested in several recent reports comparing how different regions of the world are regulating AI, and the impact this could have on geospatial organizations. Two in particular have caught my eye: Regulatory Mapping on Artificial Intelligence In Latin America, published by AccessNow, and Navigating Governance Frameworks for Generative AI Systems in the Asia-Pacific, prepared by the Future of Privacy Forum (FPF). The reports highlight several similarities among the approaches countries are taking. For example, the stated goal of most countries is to develop a framework that promotes the development of AI within both the public and private sectors in a way that is ethical, equitable, and transparent. To quote from South Africa’s National AI Policy, referenced above:
“The National AI Policy Framework for South Africa represents a strategic blueprint aimed at harnessing AI technologies to propel the country’s economic growth, technological advancement, and societal wellbeing. Emphasizing ethical development, the framework prioritizes the responsible deployment of AI that aligns with South Africa’s values and priorities.”
Similarly, Australia’s Policy for Responsible Use of AI in government states:
“AI has an immense potential to improve social and economic wellbeing. Development and deployment of AI is accelerating. It already permeates institutions, infrastructure, products and services, with this transformation occurring across the economy and in government.
For government, the benefits of adopting AI include more efficient and accurate agency operations, better data analysis and evidence-based decisions, and improved service delivery for Australians. Many areas of the Australian Public Service (APS) already use AI to improve their work and engagement with the public.
To unlock innovative use of AI, Australia needs a modern and effective regulatory system. Internationally, governments have introduced new regulations to address AI’s distinct risks, focused on preventative, risk-based guardrails that apply across the supply chain and throughout the AI lifecycle.”
Developing a legal and policy framework that advances the benefits of AI while also protecting against a broad range of potential risks will be a challenge for the public and private sectors alike. However, the challenge for the private sector in general, and the geospatial community in particular, is the variety of government stakeholders within each country that will need to be accounted for. Numerous government agencies have either been given authority over, or have proactively published guidance on, the deployment and regulation of AI. For example, FPF notes in its report that “[w]ithin jurisdictions, a further difference is in which agencies or branches of government have been leading efforts to govern generative AI.” It goes on to note that in Japan, “efforts [to govern AI] have involved the Ministry of Foreign Affairs, the Ministry of Internal Affairs and Communications, the Digital Agency, and the Ministry of Economy, Trade and Industry,” while in Australia (i) the Department of Industry, Science and Resources leads public consultations and framework development and (ii) the eSafety Commissioner has provided guidance to industry on mitigating online safety risks from generative AI. Of note, while data protection authorities in other parts of the world have taken a leading role in AI regulation, according to the report, Australia’s data protection authority “has largely been confined to its participation in the Digital Platform Regulators Forum (DP-REG), and it has not issued any generative AI-specific guidance to date.”
While this decentralization makes sense from a legal standpoint, as the applications of AI are diverse and each country has its own legal system and governmental structure, it will be challenging for geospatial organizations operating in different countries: there likely will not be a single focal point within a government from which to obtain the necessary approvals, or even to identify the applicable regulations. Even organizations within a country, such as national mapping agencies, may struggle to keep track of the various authorities and guidance.