Generative artificial intelligence (AI) may have its developmental roots in the 1950s, but the technology burst into the mainstream in 2023. In recent months, AI has sparked curiosity and innovation with capabilities spanning text, images, programming code and data analysis. The rapidly evolving technology creates a challenge as public sector entities seek to leverage AI for efficiency and innovation without jeopardizing privacy, data integrity, compliance and cybersecurity.
The potential applications for AI in the public sector are seemingly limitless. Municipalities can use AI to enhance citizen access to services. Government agencies can tap into the technology’s analytic capabilities to integrate and interpret data. Cities and states can leverage AI to create predictive models that enable anything from better budget allocations to enhanced traffic flows.
With AI guidelines, guardrails and best practices evolving alongside the technology, public sector organizations face the challenge of balancing opportunity with risk. Striking that balance requires continual monitoring and risk assessment across an organization’s official use of AI as well as all contractor, vendor and employee use.
Assessing AI Risks
Public sector organizations face distinct risks when adopting AI because of their responsibilities to the public, the large volumes of confidential information they collect and manage, and the essential services they deliver. The primary risks public entities must consider include:
- The potential for improper access to or misuse of sensitive data, such as tax returns, legal records and health data, is a significant concern for public sector organizations, their vendors and contractors. In addition, public sector entities are typically held to higher transparency standards than their private sector peers.
- Data errors or incorrect decisions based on AI output carry higher stakes for many public sector organizations. In services such as law enforcement, social services and health care, AI-related mistakes can affect human lives, public safety and privacy rights.
- AI hallucinations, in which a system generates false or misleading information, raise potential equity and fairness concerns for public entities. Recent benchmark reports put the hallucination rate of OpenAI’s ChatGPT at 3%, Meta’s models at 5% and Anthropic’s at 8%.
- Bias can also be a factor with AI, depending on the training inputs used, the model architecture and how users interact with the output. As a result, public sector organizations must conduct extensive due diligence when selecting and implementing AI tools. For example, a 2023 study revealed that the IRS was auditing Black taxpayers at roughly three times the rate of other taxpayers, a disparity attributed to bias trained into an automated audit-selection system.
Implementing AI Governance
Mitigating these risks requires public sector organizations to implement comprehensive governance frameworks that address current challenges and adapt as the technology evolves. In the U.S., governance policies and regulations are being established at the federal, state and municipal levels.
In October 2023, the White House issued an Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. The Executive Order outlines a framework for AI governance and risk mitigation and charges federal agencies with specific tasks. For example, it directs federal departments and agencies to designate a chief AI officer, develop an agency-specific AI strategy and align with defined AI safety and security practices.
The Executive Order complements the National Institute of Standards and Technology’s (NIST) AI Risk Management Framework and Secure Software Development Framework. These comprehensive frameworks provide a roadmap to assist government agencies with the design, development and use of AI tools and systems. In addition, the Executive Order taps NIST to develop additional guidance for AI deployment as well as standards and testing procedures for AI developers.
Along with the federal guidance, states and municipalities are also addressing AI governance, many in direct alignment with the NIST AI Risk Management Framework’s guidance on managing risks related to the trustworthiness, privacy, security and transparency of AI systems. Several states, including California, Oklahoma and Virginia, are creating cross-functional teams of AI experts to partner with agency leaders to assess risk, monitor AI use and develop policies.
In today’s fluid environment, public sector organizations must take a risk-based approach to establishing AI governance. AI experts can assist in assessing risks associated with current and prospective AI tools. Advisors can also provide valuable guidance across the NIST framework’s four functions: govern, map, measure and manage.
The public sector industry experts at CBIZ can help organizations navigate AI governance and mitigate risks. Connect with a member of our team and gain access to more resources here.
This article includes input from Elzar Camper, Managing Director, and Ariel Allensworth, Senior Consultant, CBIZ Pivot Point Security. Elzar and Ariel bring a range of consulting, program management and international cyber and physical security experience to the public sector and many other industries.