
Artificial Intelligence (AI)

Last Updated on December 20, 2024

What is artificial intelligence (AI)?

Artificial intelligence is an umbrella term: machine learning, deep learning, generative AI, natural language processing, and image recognition, to name just a few, all fall under it. Technologists once resisted the term because it was applied too broadly to have real meaning, but it has since become widely accepted and used.

  • From Forbes: AI describes machines that learn from human intelligence to automate repetitive tasks, augment cognitive capacities, and facilitate decision-making.
  • From Dataro: Artificial intelligence (AI) is a branch of computer science focused on building smarter machines that can perform tasks that would typically require human thought.

What are the recent legislative developments?

Important artificial intelligence legislation could move early in 2025 if the bipartisan AI regulatory bill sponsored by Senators John Thune (R-SD) and Amy Klobuchar (D-MN) gains traction in the new Congress. The legislation was reported out of the Senate Commerce Committee in mid-2024; see the Federal legislation section below for more details.

In September 2024, California Governor Gavin Newsom vetoed a complex, far-reaching comprehensive AI bill. The governor described it as “well-intentioned” but said its “stringent” requirements would have been onerous for the state’s leading AI companies. The California Legislature did not move to override the veto but is likely to consider comprehensive AI regulatory legislation again in 2025.

What is the issue?

Artificial intelligence differs from other tools because it is so new; as a society and as a sector, we are scrambling to keep up with the implications of its use. Both its positive contributions to our data analysis, creative work, and workflows and its potential risks to our ethics and credibility need to be considered.

Why do nonprofits care?

Like all industries and sectors, nonprofits must adapt to keep pace with a world becoming more reliant on AI. Lagging behind could mean missed opportunities and increased risk of becoming irrelevant and unsustainable over time. 

The nonprofit sector is held to higher ethical standards than other sectors, and its most valuable currencies are trust and credibility. Our ethical standards and approaches matter, even (or especially) when legislation and regulation lag behind technological capabilities.

Federal legislation

In October 2023, President Biden issued an Executive Order (EO) calling for “Safe, Secure, and Trustworthy Artificial Intelligence.” The EO: 

  • Requires developers of the most powerful AI systems to share their safety test results and other critical information with the federal government;
  • Protects against the risks of using AI to engineer dangerous biological materials;
  • Protects Americans from fraud and deception by establishing standards and best practices for detecting AI-generated content; and
  • Orders the development of a National Security Memorandum directing the National Security Council and the White House Chief of Staff to ensure that the US military and intelligence community use AI safely, ethically, and effectively.

In July 2024, the Senate Commerce Committee reported S. 3312, the Artificial Intelligence Research, Innovation, and Accountability Act, out for consideration on the Senate floor. TNPA has worked closely on this legislation with its lead sponsor, Senator John Thune (R-SD); a bipartisan group of eight Senators, four Republicans and four Democrats, has signed on to the bill. The legislation takes a balanced approach to the potential regulation of AI, designating the Department of Commerce as the principal AI regulator and providing for tiered regulation based on the potential risk associated with a given use of AI: the greater the potential risk, the higher the level of regulation.

The Bipartisan Artificial Intelligence Task Force, co-chaired by Senators Martin Heinrich (D-NM) and Mike Rounds (R-SD), continues to explore the AI issue. TNPA has worked closely with Senators Heinrich and Rounds. In the House, the Congressional Artificial Intelligence Caucus, which includes 54 House members, has been co-chaired by Representatives Anna Eshoo (D-CA) and Michael McCaul (R-TX). Representative Eshoo is retiring, and a new co-chair has yet to be named.

State legislation

In May 2024, Colorado became the first state to enact comprehensive AI legislation, SB205, which takes effect February 1, 2026. The law applies generally to developers and deployers of “high-risk AI systems,” defined as AI systems that make, or are a substantial factor in making, a consequential decision. A “consequential decision” is “any decision that has a material legal or similarly significant effect on the provision or denial to any consumer of, or the cost or terms of: (a) education enrollment or an education opportunity, (b) employment or an employment opportunity, (c) a financial or lending service, (d) an essential government service, (e) health-care services, (f) housing, (g) insurance, or (h) a legal service.”

The law addresses AI bias, requiring human oversight throughout the life cycle of AI systems and significant documentation around the use of AI. It applies to any person doing business in Colorado who develops an “AI system” or deploys a “high-risk AI system.” In effect, this means it applies to any organization using a high-risk AI system, whether or not that system is consumer-facing. Notably, the law explicitly excludes a private right of action, leaving enforcement solely to the Colorado Attorney General.

European Union regulation

In March 2024, the European Parliament formally adopted the European Union (EU) Artificial Intelligence Act (AI Act), the world’s first comprehensive law regulating AI. Over the next few years, the AI Act will be further specified and supplemented by secondary EU legislation, including implementing and delegated acts to be adopted by the European Commission.

The law is broad and introduces sweeping new obligations and restrictions. The Act classifies AI systems by their level of risk to health, safety, and fundamental rights, defining four tiers: unacceptable, high, limited, and minimal/none. AI systems that create unacceptable risk, including social credit scoring systems and certain predictive policing applications, are banned entirely. High-risk AI systems are subject to extensive requirements and regulation, while limited-risk AI systems face lighter regulatory burdens.

The EU AI Act is expected to take full effect in 2026 or 2027, although portions of the Act may apply as early as six months after its publication in the Official Journal of the European Union. Enforcement will be conducted by separate regulators in each of the 27 member states, in coordination with a new EU AI Office and EU AI Board. Penalties for noncompliance can reach 35 million euros (approximately $39 million) or 7% of a company’s global annual revenues, whichever is higher.
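To make the “whichever is higher” penalty ceiling concrete, here is a minimal illustrative sketch; the function name and the revenue figures are hypothetical, not drawn from the Act’s text:

    # Illustrative only: the maximum possible EU AI Act fine under the
    # "35 million euros or 7% of revenues, whichever is higher" ceiling.
    def max_fine_eur(annual_revenue_eur: float) -> float:
        FIXED_CAP_EUR = 35_000_000  # fixed ceiling stated in the Act
        REVENUE_SHARE = 0.07        # 7% of a company's annual revenues
        return max(FIXED_CAP_EUR, REVENUE_SHARE * annual_revenue_eur)

    # A company with 1 billion euros in revenue faces a 70 million euro
    # ceiling; below 500 million euros in revenue, the fixed 35 million
    # euro figure governs instead.
    print(max_fine_eur(1_000_000_000))  # 70000000.0
    print(max_fine_eur(100_000_000))    # 35000000

In other words, the percentage-based cap only overtakes the fixed cap once annual revenues exceed 500 million euros.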

The Act defines an AI system as a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.

The law applies extraterritorially and covers all providers, users, manufacturers, and distributors of AI systems used in the EU market.

Prohibited uses of AI are those considered a clear threat to fundamental rights. These banned systems include AI that:

  • exploits vulnerabilities of individuals due to factors such as age, disability, or social or economic circumstances;
  • has the capacity to manipulate human behavior to circumvent free will;
  • employs social scoring based on behavior or personal characteristics;
  • uses emotion recognition in the workplace or educational institutions;
  • scrapes images to create facial recognition databases; and
  • employs biometric categorization systems that rely on sensitive characteristics, such as political, religious, or philosophical beliefs, race, or sexual orientation.
