
ABA House Adopts 3 Key Guidelines for Enhanced AI Use


The American Bar Association (ABA) recently adopted Resolution 604, which addresses the need for accountability, transparency, and traceability in the use of artificial intelligence (AI). The resolution was introduced at the 2023 ABA Midyear Meeting by Lucy Thomson, a founding member of the ABA Cybersecurity Legal Task Force.

The resolution calls on organizations that design, develop, deploy, and use AI to follow specific guidelines. For example, AI developers must ensure that their products, services, and systems are subject to human authority, oversight, and control. Organizations must also be accountable for the consequences of their use of AI and take reasonable steps to prevent harm or injury. Developers must also ensure the transparency and traceability of AI and protect related intellectual property.

The Cybersecurity Legal Task Force has also urged Congress, federal executive agencies, state legislatures, and regulators to adopt these guidelines in AI-related laws and standards. Transparency is a crucial aspect of the resolution: people should know when they are interacting with AI and be able to challenge its outcomes if necessary.

The resolution’s report also emphasizes the importance of traceability in ensuring trustworthy AI. If AI results in undesirable outcomes, traceability helps developers understand what went wrong and how to prevent similar issues in the future. Legal responsibility must lie with responsible individuals and organizations rather than with computers or algorithms.

The resolution was supported by several ABA sections and committees, including the Antitrust Law Section, the Tort Trial and Insurance Practice Section, the Science & Technology Law Section, and the Standing Committee on Law and National Security. The House of Delegates has previously considered two other measures related to AI. Resolution 112, passed in 2019, addressed ethical and legal issues arising from the use of AI in the legal profession.

Resolution 700, adopted in February 2022, called on governmental entities to refrain from using pretrial risk-assessment tools unless the data supporting the risk assessment is transparent, publicly disclosed, and validated to demonstrate the absence of bias.

Artificial intelligence innovations, such as self-driving cars, diagnostic assistants for hospital clinicians, and autonomous weapons systems, have raised significant legal and ethical questions. The White House Office of Science and Technology Policy and organizations such as the U.S. Equal Employment Opportunity Commission (EEOC) are working to ensure that AI complies with federal civil rights laws and does not discriminate. Cities such as New York and states such as California have also taken measures to prevent AI from violating anti-discrimination and privacy laws.

Past ABA President Laurel Bellows also spoke in favor of Resolution 604, emphasizing that lawyers are responsible for staying informed on AI-related issues. She stated that the resolution is a prime example of the House of Delegates’ impact on the world and its citizens.

The American Bar Association's adoption of Resolution 604 highlights the need for accountability, transparency, and traceability in AI. The resolution calls on organizations to follow specific guidelines so that AI is developed and used in a responsible and trustworthy manner. It has the support of various ABA sections and committees, and it aligns with the efforts of government entities and organizations working to ensure the ethical use of AI.

REFERENCES:

ABA House adopts 3 guidelines to improve use of artificial intelligence
