Algorithmically Excluded Part VI: Regulation in a New Administration.

By Anthony May

In Part I of this series, I referenced the following quote from Elon Musk:

[Artificial Intelligence, or AI] doesn’t have to be evil to destroy humanity – if AI has a goal and humanity just happens to come in the way, it will destroy humanity as a matter of course without even thinking about it, no hard feelings.

Mr. Musk’s new boss may have a difference of opinion. Although we’ve yet to encounter an AI Magic 8 Ball, if Musk were to ask whether the Trump Administration will continue the Biden Administration’s efforts to regulate AI, the response would likely be: “Outlook not so good.”

This installment will provide a brief overview of the current administration’s efforts to curtail the disparate impacts of AI tools in employment and provide some educated assessments of what we can expect after January 20, 2025.

I.  Where We Are

The Biden Administration has made no secret of its attempt to regulate AI, particularly in the employment space. President Biden’s October 2023 Executive Order highlighted the need for increased safeguards on AI in a variety of spaces, which sparked a call to action from civil rights organizations. Those entities pushed federal administrative agencies, specifically the Equal Employment Opportunity Commission (EEOC), to establish “know your rights” guidance to help individuals identify when the use of AI to make employment decisions may violate existing laws.

The EEOC, along with several other administrative agencies, answered that call. The EEOC has issued several guidance documents, including the Artificial Intelligence and Algorithmic Fairness Initiative, and made targeting discriminatory uses of AI in employment decisions a strategic priority through 2028. The Department of Justice (DOJ) followed suit, issuing similar guidance. Most recently, in September 2024, the DOJ updated its Evaluation of Corporate Compliance Programs guidance to assist prosecutors in deciding whether corporations could be criminally liable for failing to manage emerging AI risks that could violate applicable laws.

II. Where We’re Going

As we look to January 2025, experts expect a very different view from the incoming President, resulting in White House regulatory rollbacks, increased state legislation, and more pressure on employers to self-regulate.

The Trump Administration is likely to take a more hands-off approach to regulating AI, both in general and in the workplace. In recent comments, President Trump’s selection to lead the Federal Trade Commission (FTC), Andrew Ferguson, stated that he will aim to “end the FTC’s attempt to become an AI regulator[.]” In November 2024, the Society for Human Resource Management (SHRM) hosted a panel of attorneys and policy experts who opined that the Trump Administration will replace Biden’s AI Executive Order “with a more hands-off approach to spark more innovation.” The new administration will also likely withdraw the Blueprint for an AI Bill of Rights, which discussed employers using such technologies against organized labor movements.

According to JD Supra, the Trump Administration may reinstitute some rendition of its February 2019 Maintaining American Leadership in Artificial Intelligence Executive Order, which focused on promoting federal agency guidance for integrating AI technologies into the private sector. Similarly, the change in administration will affect agency guidance from the EEOC and the National Labor Relations Board (NLRB), whose latest guidance, warning employers against surveilling employees participating in protected organized labor actions, may be in jeopardy. Notwithstanding, the Trump Administration may support the National Institute of Standards and Technology’s (NIST) new technical guidance discussing ways to mitigate the risks stemming from generative AI, given that President Trump supported NIST’s AI work in his first term and President Biden subsequently called for this guidance.

The Trump Administration’s stance on AI regulation may also continue the trend of more active AI regulation at the state level. As Rachel See, senior counsel for Seyfarth Shaw LLP, stated recently at SHRM’s panel, “In the absence of federal movement, that really motivates state legislatures to do something.” At least 40 states introduced bills addressing AI in 2024, many of which related to discrimination and automated employment decisions. That trend will likely continue, and perhaps accelerate, in the years to come, which may contribute to a “very complicated compliance regime” as employers continue to hire and employ individuals remotely across many states.

The onus will be on employers to be vigilant and proactive about understanding this patchwork of developments. While it’s possible that President-elect Trump may use Republican majorities in the House and Senate to advance AI legislation and “establish[] a national standard that simplifies regulatory compliance and preempts conflicting regulatory frameworks,” if the Trump Administration takes a more passive approach, employers will need to take steps to stay current on the myriad ways they could violate existing and developing laws. Given the impending uncertainty, it will be incumbent upon employers to prioritize transparency measures and take affirmative steps to audit their existing AI systems to mitigate the risk of inherent AI bias.

I have written and presented extensively on the intersection of AI and employment law, most recently presenting on this topic at the SHRM Talent Conference & Expo and at the 2024 National Employment Lawyers Association (NELA) Annual Convention. If you have questions about this matter or are interested in learning more about how you can take steps now to prevent future litigation, please call us at 410-962-1030 today for a consultation and learn more about my practice here.


Authored by

Anthony May, Partner