GeoAI and the Law
Keeping geospatial professionals informed on the legal and policy issues that will impact GeoAI
Summary of Recent Developments in GeoAI and the Law
Each week this section will give a high-level overview of legal and policy developments that could have an impact on GeoAI. The legal development that received the greatest attention was Elon Musk’s lawsuit against OpenAI. However, from a GeoAI standpoint, given the important role that California has played in data protection and privacy in the U.S., the development that could have the biggest impact is the California Privacy Protection Agency board’s decision on March 8 to begin the rulemaking process for automated decisionmaking technology. Automated decisionmaking technology is defined in the Draft Risk Assessment and Automated Decisionmaking Technology Regulations (which were circulated to the board prior to the meeting to facilitate discussion) as “any technology that processes personal information and uses computation to execute a decision, replace human decisionmaking, or substantially facilitate human decisionmaking.” This broad definition could encompass many GeoAI applications. A link to the full document can be found below.
Recommended Reading
AI advisory committee wants law enforcement agencies to rethink use case inventory. (Fedscoop)
The article discusses why a government advisory committee believes law enforcement should include the use of automated license plate readers as an AI use case.
Government of India’s Advisory for AI Companies
India reverses course on AI and publishes an advisory that could impact large AI developers and deployers.
Trending Now Podcast (Williams Mullen)
A podcast on court cases involving copyright issues associated with Generative AI training data and outputs.
Draft Risk Assessment and Automated Decisionmaking Technology Regulations
On March 8, the California Privacy Protection Agency’s board moved a step closer to drafting regulations pertaining to automated decisionmaking and privacy.
Researchers tested leading AI models for copyright infringement using popular books, and GPT-4 performed worst
Results from a report by an AI model evaluation company examining how major AI models responded to queries to produce copyrighted content.
AI (and other) Companies: Quietly Changing Your Terms of Service Could Be Unfair or Deceptive
The Federal Trade Commission (FTC) states that it may be an unfair or deceptive practice for a company to change its privacy policies to permit retroactive use of personal information for AI training purposes.
The Deep Dive
Each week, the Deep Dive will provide a detailed analysis of how a particular legal matter (e.g., a case, law, regulation, or policy) pertaining to AI could impact the geospatial community and/or GeoAI in particular.
Our first Deep Dive examines the evolving legal and policy framework in the U.S. The starting point is the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, issued on October 30, 2023 (the “AI Executive Order”). It aims to balance innovation with AI safety, equity, privacy, and security. It directs government agencies to establish safety and testing standards and requires large AI developers and cloud providers to share critical information with the U.S. government. (The AI Executive Order is quite lengthy and detailed, but the Congressional Research Service published Highlights of the 2023 Executive Order on Artificial Intelligence for Congress, which is an easier read.) Subsequently, a number of government agencies have issued policies and guidance in response to the AI Executive Order. The most important of these was the Office of Management and Budget’s draft policy on Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence.
Not surprisingly, Congress has been active on AI, and several bills pertaining to various aspects of AI are currently before it. For example, the AI Foundation Model Transparency Act would direct the Federal Trade Commission, in consultation with the National Institute of Standards and Technology (NIST), the U.S. Copyright Office, and the Office of Science and Technology Policy (OSTP), to set transparency standards for foundation model deployers, requiring them to make certain information publicly available to consumers and to provide consumers and the FTC with information on a model’s training data, model training mechanisms, and whether user data is collected during inference. Other bills before Congress that could impact GeoAI include:
· Assuring Safe, Secure, Ethical, and Stable Systems for AI Act – This bill would direct the President to appoint a task force to establish an organizational structure for artificial intelligence governance and oversight in the U.S. government on matters such as privacy, civil rights, and civil liberties.
· AI Labeling Act of 2023 – This bill would require disclosures for AI-generated content, including visual, image, audio and text content.
· Algorithmic Accountability Act of 2023 – This bill would require certain businesses that use automated decision systems to make critical decisions (e.g., decisions with significant effects on the cost of healthcare, housing, or educational opportunities) to study and report on the impact of those systems on consumers.
Bills that could impact AI have also been introduced in a number of state legislatures. Two of the most noteworthy are AB-2930 (Automated Decision Tools), introduced in the California State Assembly in February, and S.B. No. 2 (An Act Concerning Artificial Intelligence), introduced the same month in the Connecticut General Assembly. Both bills would regulate the development and deployment of high-risk AI systems and generative AI systems by the private sector, but in different ways.
Several court cases are pending that could have an impact on AI in general, and on GeoAI in particular. The most noteworthy is the lawsuit filed by the New York Times against Microsoft and OpenAI. In addition, the FTC is reportedly investigating OpenAI’s data collection, data security, and outputs to determine if consumers are being harmed.
In future issues we will explore these and other matters (both in the U.S. and globally) in more detail, with a focus on their potential impact to GeoAI and the geospatial community.