
Artificial Intelligence (AI)

Last Updated on Wednesday, April 17, 2024

Recent Developments | March 13, 2024: The European Parliament formally adopted the European Union (EU) Artificial Intelligence Act (AI Act), the world’s first comprehensive law regulating AI. (See the section below titled “European Union Artificial Intelligence Regulation.”)

What is Artificial Intelligence (AI)?

AI is an umbrella term that includes many different tools. At its most basic, AI encompasses tasks that we once thought only humans could do but that machines can now complete. Here are a few definitions:

  • From Forbes: AI is when machines learn from human intelligence in order to automate repetitive tasks, augment cognitive capacities and facilitate the decision-making process.
  • From Dataro: Artificial intelligence (AI) is a type of computer science focused on building smarter machines that can perform tasks that would typically require human thought.  

Machine learning, deep learning, generative AI, natural language processing, and image recognition, to name just a few, are all part of the AI concept. 

Technologists used to resist the term Artificial Intelligence because it encompassed so many different things that it lacked real meaning. The term has since become accepted and widely used.

What is the issue?

Artificial intelligence, and all the various tools that term encompasses, is different from other technologies because it is so new that we, as a society and as a sector, are scrambling to keep up with the implications of its use. We must weigh both its positive contributions to our data analysis, creative generation, and workflows, and its potential risks to our ethics and credibility.

Here are eight key ethical implications to consider when using Artificial Intelligence:  

1) Bias and Fairness: 

  • Challenge: AI systems can inherit biases present in training data, leading to discriminatory outcomes (see the sketch following this list). 
  • Ethical Concern: Unintended bias can result in unfair treatment of certain groups and reinforce or exacerbate existing social and economic inequalities and disparities. 

2) Transparency and Accountability: 

  • Challenge: The complexity of AI algorithms can make decision-making processes opaque. 
  • Ethical Concern: Lack of transparency may erode trust, and accountability becomes challenging when the rationale behind AI-driven decisions is unclear. 

3) Privacy and Data Security: 

  • Challenge: AI often relies on large datasets, raising concerns about the privacy and security of sensitive information. 
  • Ethical Concern: Mishandling data can compromise the privacy of individuals and expose them to risks. 

4) Inclusivity and Accessibility: 

  • Challenge: AI technologies may not consider diverse perspectives or be accessible to all demographics. 
  • Ethical Concern: Exclusion of certain groups can lead to unequal access to benefits and opportunities provided by AI. 
  • Ethical Concern: Failing to involve diverse stakeholders in decision-making can result in solutions that do not adequately address community needs. 

5) Decision-Making and Autonomy: 

  • Challenge: AI systems may make decisions autonomously, impacting individuals’ lives. 
  • Ethical Concern: Determining the extent of autonomy and accountability for AI decisions raises questions about responsibility and control. 

6) Impact on Employment: 

  • Challenge: Automation through AI may lead to job displacement. 
  • Ethical Concern: Organizations must consider the societal impact and potential harm caused by job loss and strive to mitigate negative consequences. 

7) Sustainability and Environmental Impact: 

  • Challenge: The energy consumption of AI infrastructure can be substantial. 
  • Ethical Concern: Nonprofits need to consider the environmental impact of AI initiatives and prioritize sustainable practices. 

8) Responsible Research and Development: 

  • Challenge: Rapid advancements in AI may outpace ethical considerations. 
  • Ethical Concern: Ethical AI adoption requires continuous evaluation, reflection, and adaptation to evolving ethical standards. 
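
To make the first point above concrete, here is a minimal, hypothetical sketch (in Python) of how a system can inherit bias from its training data. The dataset, group labels, and decision rule are all invented for illustration; real models are far more complex, but the mechanism is the same: a model that optimizes for patterns in skewed historical data will reproduce that skew.

    # Hypothetical illustration only: how a model inherits bias from its data.
    from collections import defaultdict

    # Invented historical loan decisions, skewed by past human bias:
    # equally qualified applicants in group "B" were approved less often.
    training_data = [
        # (group, approved)
        ("A", True), ("A", True), ("A", True), ("A", True),
        ("B", True), ("B", False), ("B", False), ("B", False),
    ]

    # "Training": record the historical approval rate for each group.
    outcomes = defaultdict(list)
    for group, approved in training_data:
        outcomes[group].append(approved)
    approval_rate = {g: sum(a) / len(a) for g, a in outcomes.items()}

    def predict(group: str) -> bool:
        """Approve when the learned historical rate for the group exceeds 50%."""
        return approval_rate[group] > 0.5

    # Two equally qualified applicants receive different outcomes, because
    # the model faithfully reproduces the skew in its training data.
    print(predict("A"))  # True  (group A's historical approval rate: 100%)
    print(predict("B"))  # False (group B's historical approval rate: 25%)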

Why do nonprofits care?

  • Like all industries and sectors, nonprofits must adapt to keep pace with a world becoming more reliant on AI. Lagging behind could mean missed opportunities and an increased risk of becoming irrelevant and unsustainable over time. 
     
  • The nonprofit sector is held to higher ethical standards than other sectors. The most valuable currencies for nonprofits are trust and credibility. Our ethical standards and approaches matter, even (or especially) when legislation and regulation do not keep pace with technological capabilities. 

On October 30, 2023, President Biden issued an Executive Order (EO) calling for “Safe, Secure, and Trustworthy Artificial Intelligence.” The EO: 

  • Requires developers of the most powerful AI systems to share their safety test results and other critical information with the federal government; 
  • Protects against the risks of using AI to engineer dangerous biological materials; 
  • Protects Americans from fraud and deception by establishing standards and best practices for detecting AI-generated content; 
  • Orders the development of a National Security Memorandum that directs the National Security Council and the White House Chief of Staff to ensure that the US military and intelligence community use AI safely, ethically, and effectively. 

Is there legislation relating to AI?

Yes. Congress is beginning to focus on AI, holding hearings in both the House and Senate. 

On the Senate side, Majority Leader Chuck Schumer (D-NY) has publicly commented on the need for Congress to develop policies that allow AI to move forward while protecting consumer welfare. A Bipartisan Artificial Intelligence Task Force, co-chaired by Senators Martin Heinrich (D-NM) and Mike Rounds (R-SD), includes 14 other Senators who continue to explore the AI issue. 

Similarly, in the House, the Congressional Artificial Intelligence Caucus is co-chaired by Representatives Anna Eshoo (D-CA) and Michael McCaul (R-TX) and includes 54 House members. 

Among the specific bills introduced in Congress are: 

  • S. 1626, The AI Shield for Kids Act, introduced by Senator Rick Scott (R-FL). This legislation would require the Federal Communications Commission, in consultation with the Federal Trade Commission, to issue rules prohibiting entities from offering consumer artificial intelligence features in their products to minors without parental consent. 
  • H.R. 4223, The National AI Commission Act, introduced jointly by Representatives Ken Buck (R-CO) and Anna Eshoo (D-CA). The legislation would create a bipartisan commission, appointed by both the President and Congressional leaders, whose mission would be to evaluate the risks and possible harm of AI innovation and establish the necessary “… long-term guardrails to ensure that artificial intelligence is aligned with the values shared by all Americans.” 

At the state level, introduced bills include: 

  • In Illinois – HB 3285 would create an “Artificial Intelligence Consent Act.” Under this legislation, if a person creates an image or video that uses Artificial Intelligence to mimic or replicate another person’s voice or likeness in a manner that would otherwise deceive an average viewer, the creator must provide a disclosure that states that the replication is not authentic. 
  • In New York – AB 6790 would prohibit the use of inauthentic, AI-created media to influence an election. 

European Union Artificial Intelligence Regulation

On March 13, 2024, the European Parliament formally adopted the European Union (EU) Artificial Intelligence Act (AI Act). This represents the world’s first comprehensive law regulating AI. Over the next few months and years, the AI Act will be specified and supplemented further by secondary EU legislation — implementing and delegated acts to be adopted by the EU Commission.

The law is broad and introduces sweeping new obligations and restrictions. The Act classifies AI systems according to the level of risk they pose to health, safety, and fundamental rights. Under this risk-based approach, there are four levels of risk: unacceptable, high, limited, and minimal/none. AI systems that create unacceptable risk, including social credit scoring systems and certain predictive policing applications, are banned entirely; high-risk AI systems are subject to extensive requirements and regulation; and limited-risk AI systems face lighter regulatory burdens.

The EU AI Act is expected to take full effect in 2026 or 2027, although portions of the Act may apply as early as six months after publication by the Council of the European Union. Enforcement will be conducted by separate regulators in each of the 27 member states, in coordination with a new EU AI Office and EU AI Board. Noncompliance could result in fines of up to 35 million euros (approximately $39 million) or 7% of a company’s revenues, whichever is higher.
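
As a rough illustration of how that “whichever is higher” cap works, here is a minimal sketch in Python. The 35 million euro floor and 7% rate come from the paragraph above; the function name and revenue figures are invented for illustration, and this is not legal guidance.

    def max_penalty_eur(annual_revenue_eur: float) -> float:
        """Return the higher of EUR 35 million or 7% of revenue."""
        return max(35_000_000.0, 0.07 * annual_revenue_eur)

    # A company with EUR 200 million in revenue: 7% is EUR 14 million,
    # so the EUR 35 million floor applies.
    print(max_penalty_eur(200_000_000))    # 35000000.0

    # A company with EUR 1 billion in revenue: 7% is EUR 70 million,
    # which exceeds the floor and becomes the cap.
    print(max_penalty_eur(1_000_000_000))  # 70000000.0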

The Act defines an AI system as a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.

The law applies extraterritorially and covers all providers, users, manufacturers, and distributors of AI systems used in the EU market.

Prohibited uses of AI are those considered a clear threat to fundamental rights. These banned systems include AI that: (i) exploits vulnerabilities of individuals due to factors such as age, disability, or social or economic circumstances; (ii) manipulates human behavior to circumvent free will; (iii) employs social scoring based on behavior or personal characteristics; (iv) performs emotion recognition in the workplace or educational institutions; (v) scrapes images to create facial recognition databases; or (vi) employs biometric categorization systems that use sensitive characteristics, such as political, religious, or philosophical beliefs, race, or sexual orientation.
