
Artificial Intelligence (AI)

Last Updated on Tuesday, November 12, 2024

Recent News | September 29, 2024: California Governor Gavin Newsom vetoes comprehensive artificial intelligence legislation. The legislation is complex and far-reaching. Among its provisions: it covers entities that spend $100 million or more in a year on AI development, and it places liability on both the developers and the deployers of AI. Developers of AI face a long list of requirements, including strict prevention of unauthorized access to an AI model, an annual review of protocols, and publication of annual safety and security protocols. In his veto message, the Governor described the legislation as “well intentioned” but noted that its requirements would have imposed “stringent” regulations that would have been onerous for the state’s leading artificial intelligence companies. At this point it is uncertain whether the California Legislature will override the Governor’s veto. Here is a link to the legislation.

What is Artificial Intelligence (AI)?

AI is an umbrella term that covers many different tools. At its most basic, AI refers to machines performing tasks we once thought only humans could do. Here are a few definitions:

  • From Forbes: AI is when machines learn from human intelligence in order to automate repetitive tasks, augment cognitive capacities and facilitate the decision-making process.
  • From Dataro: Artificial intelligence (AI) is a type of computer science focused on building smarter machines that can perform tasks that would typically require human thought.  

Machine learning, deep learning, generative AI, natural language processing, and image recognition, to name just a few, are all part of the AI concept. 

Technologists used to resist the term Artificial Intelligence because it encompassed so many different things that it lacked real meaning. The term has since become accepted and widely used.

What is the issue?

Artificial intelligence, and all the various tools that term encompasses, differs from other technologies because it is so new that we, as a society and as a sector, are scrambling to keep up with the implications of its use. We need to weigh both its positive contributions to our data analysis, creative generation, and workflows AND its potential risks to our ethics and credibility.

Here are eight key ethical implications to consider when using Artificial Intelligence:  

1) Bias and Fairness: 

  • Challenge: AI systems can inherit biases present in training data, leading to discriminatory outcomes. 
  • Ethical Concern: Unintended bias can result in unfair treatment of certain groups and can reinforce or exacerbate existing social and economic disparities. 

2) Transparency and Accountability: 

  • Challenge: The complexity of AI algorithms can make decision-making processes opaque. 
  • Ethical Concern: Lack of transparency may erode trust, and accountability becomes challenging when the rationale behind AI-driven decisions is unclear. 

3) Privacy and Data Security: 

  • Challenge: AI often relies on large datasets, raising concerns about the privacy and security of sensitive information. 
  • Ethical Concern: Mishandling data can compromise the privacy of individuals and expose them to risks. 

4) Inclusivity and Accessibility: 

  • Challenge: AI technologies may not consider diverse perspectives or be accessible to all demographics. 
  • Ethical Concern: Exclusion of certain groups can lead to unequal access to benefits and opportunities provided by AI. 
  • Ethical Concern: Failing to involve diverse stakeholders in decision-making can result in solutions that do not adequately address community needs. 

5) Decision-Making and Autonomy: 

  • Challenge: AI systems may make decisions autonomously, impacting individuals’ lives. 
  • Ethical Concern: Determining the extent of autonomy and accountability for AI decisions raises questions about responsibility and control. 

6) Impact on Employment: 

  • Challenge: Automation through AI may lead to job displacement. 
  • Ethical Concern: Organizations must consider the societal impact and potential harm caused by job loss and strive to mitigate negative consequences. 

7) Sustainability and Environmental Impact: 

  • Challenge: The energy consumption of AI infrastructure can be substantial. 
  • Ethical Concern: Nonprofits need to consider the environmental impact of AI initiatives and prioritize sustainable practices. 

8) Responsible Research and Development: 

  • Challenge: Rapid advancements in AI may outpace ethical considerations. 
  • Ethical Concern: Ethical AI adoption requires continuous evaluation, reflection, and adaptation to evolving ethical standards. 

Why do nonprofits care?

  • Like all industries and sectors, nonprofits must adapt to keep pace with a world becoming more reliant on AI. Lagging behind could mean missed opportunities and an increased risk of becoming irrelevant and unsustainable over time. 

  • The nonprofit sector is held to higher ethical standards than other sectors. The most valuable currencies for nonprofits are trust and credibility. Our ethical standards and approaches matter, even, or especially, when legislation and regulation are not keeping pace with technological capabilities. 

On October 30, 2023, President Biden issued an Executive Order (EO) calling for “Safe, Secure, and Trustworthy Artificial Intelligence.” The EO: 

  • Requires developers of the most powerful AI systems to share their safety test results and other critical information with the federal government; 
  • Protects against the risks of using AI to engineer dangerous biological materials; 
  • Protects Americans from fraud and deception by establishing standards and best practices for detecting AI-generated content; 
  • Orders the development of a National Security Memorandum that directs the National Security Council and the White House Chief of Staff to ensure that the US military and intelligence community use AI safely, ethically, and effectively. 

Federal Legislation

On July 31, the Senate Commerce Committee reported S. 3312, the Artificial Intelligence Research, Innovation, and Accountability Act, out for consideration on the Senate floor. TNPA has been working closely on this legislation with Senator John Thune (R-SD), the Senate Republican Whip and the bill’s lead sponsor. A bipartisan group of eight Senators – four Republicans and four Democrats – has signed on to the Thune bill. The legislation represents a balanced approach to the potential regulation of AI, including designating the Department of Commerce as the principal regulator of AI. It provides for tiered regulation based on the level of potential risk associated with the use of AI – the greater the potential risk, the higher the level of regulation. Given that Congress has recessed until after the November election, the legislation is not expected to be taken up on the Senate floor this year, but its passage out of the Commerce Committee is significant, and the bill could be taken up early next year.

Also, on the Senate side, Majority Leader Chuck Schumer (D-NY) has publicly commented on the need for Congress to develop policies that allow AI to move forward while protecting consumer welfare. A Bipartisan Artificial Intelligence Task Force, co-chaired by Senators Martin Heinrich (D-NM) and Mike Rounds (R-SD), includes 14 other Senators who continue to explore the AI issue. 

Similarly, in the House, the Congressional Artificial Intelligence Caucus is co-chaired by Representatives Anna Eshoo (D-CA) and Michael McCaul (R-TX) and includes 54 House members. 

Among the specific bills introduced in Congress are: 

  • S. 1626, the AI Shield for Kids Act, introduced by Senator Rick Scott (R-FL). This legislation would require the Federal Communications Commission, in consultation with the Federal Trade Commission, to issue rules prohibiting entities from offering consumer artificial intelligence features in their products to minors without parental consent. 
  • H.R. 4223, the National AI Commission Act, introduced jointly by Representatives Ken Buck (R-CO) and Anna Eshoo (D-CA). The legislation would create a bipartisan commission, appointed by the President and Congressional leaders, whose mission would be to evaluate the risks and possible harms of AI innovation and establish the necessary “… long-term guardrails to ensure that artificial intelligence is aligned with the values shared by all Americans.” 

Legislation in the States

In May, Colorado became the first state to enact comprehensive AI legislation. The new law takes effect February 1, 2026. It applies generally to developers and deployers of “high-risk AI systems,” which are defined as AI systems that make, or are a substantial factor in making, a consequential decision. The law also defines “consequential decision” as “any decision that has a material legal or similarly significant effect on the provision or denial to any consumer of, or the cost or terms of: (a) education enrollment or an education opportunity, (b) employment or an employment opportunity, (c) a financial or lending service, (d) an essential government service, (e) health-care services, (f) housing, (g) insurance, or (h) a legal service.”

Here is a copy of the law: SB205. The law is aimed at addressing AI bias, establishing a requirement of human oversight throughout the life cycle of AI systems, and requiring significant documentation around the use of AI. It applies to any person doing business in Colorado who develops an “AI system” or deploys a “high-risk AI system.” In effect, this means it applies to any organization using a high-risk AI system, whether or not that system is consumer-facing. Importantly, the law explicitly excludes a private right of action, leaving enforcement solely to the Colorado Attorney General.

Also at the state level, introduced bills include: 

  • In Illinois – HB 3285 would create an “Artificial Intelligence Consent Act.” Under this legislation, if a person creates an image or video that uses artificial intelligence to mimic or replicate another person’s voice or likeness in a manner that would otherwise deceive an average viewer, the creator must provide a disclosure stating that the replication is not authentic. 
  • In New York – AB 6790 would prohibit the use of inauthentic, AI-created media to influence an election. 

European Union Artificial Intelligence Regulation

On March 13, 2024, the European Parliament formally adopted the European Union (EU) Artificial Intelligence Act (AI Act). This represents the world’s first comprehensive law regulating AI. Over the next few months and years, the AI Act will be specified and supplemented further by secondary EU legislation — implementing and delegated acts to be adopted by the EU Commission.

The law is broad and introduces sweeping new obligations and restrictions. The Act classifies AI systems according to the level of risk they pose to health, safety, and fundamental rights. Under this risk-based approach, there are four levels of risk: unacceptable, high, limited, and minimal/none. AI systems that create unacceptable risk, including social credit scoring systems and certain predictive policing applications, are banned entirely; high-risk AI systems are subject to extensive requirements and regulation; and limited-risk AI systems bear lighter regulatory burdens.

The EU AI Act is expected to take full effect in 2026 or 2027, although portions of the Act may apply as early as six months after publication by the Council of the European Union. Enforcement will be conducted by separate regulators in each of the 27 member states, in coordination with a new EU AI Office and EU AI Board, and noncompliance can result in fines of up to 35 million euros (approximately $39 million) or 7% of a company’s revenues, whichever is higher.

The Act defines an AI system as a machine-based system that, for explicit or implicit objectives, infers from the input it receives how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.

The law applies extraterritorially and covers all providers, users, manufacturers, and distributors of AI systems used in the EU market.

Prohibited uses of AI are those considered a clear threat to fundamental rights. These banned systems include AI: (i) used to exploit vulnerabilities of individuals due to factors such as age, disability, or social or economic circumstances; (ii) with the capacity to manipulate human behavior and circumvent free will; (iii) employing social scoring based on behavior or personal characteristics; (iv) with emotion recognition capabilities used in workplaces and educational institutions; (v) with the capacity to scrape images to create facial recognition databases; and (vi) employing biometric categorization systems that use sensitive characteristics, such as political, religious, or philosophical beliefs, race, or sexual orientation.
