The release of ChatGPT marked the beginning of a new era for artificial intelligence (AI), democratizing access to powerful tools for both technical and non-technical users. Today, AI is poised to revolutionize multiple aspects of our lives, from how we learn and work to how we create art and cure illness.
In the public sector, state and local government agencies are readily adopting AI tools. A recent survey of senior professionals from state, local, and federal agencies showed that 51% use an AI application several times a week.
Within state and local agencies, teams are exploring multiple AI use cases. They are automating form-based processes to increase employee productivity; implementing generative AI (GenAI)-assisted chatbots to improve digital experiences; enhancing fraud detection; and more.
There’s no doubt that AI has the power to transform government operations and the delivery of citizen services. But this powerful technology also introduces new risks. Addressing those risks requires a comprehensive strategy that evolves alongside an agency’s growing use of AI. Beyond gaining visibility into initial AI usage and establishing baseline protections, every agency must then safeguard its integration of AI capabilities with existing systems and deploy advanced, AI-based defenses to combat increasingly sophisticated threats.
As use cases become clearer, executives in state and local governments have started to recognize the tremendous potential for AI. At the same time, many of the leaders I talk to about AI remain skeptical about its current maturity. CIOs, in particular, have shared with me their concerns about three key issues:
Accuracy: CIOs question the reliability and accuracy of AI-generated output — and they should. That output can be skewed by both inadvertent errors and intentional, malicious efforts to deceive.
If a government agency’s AI model is poisoned by attackers, that agency could inadvertently spread AI-generated misinformation to citizens. Imagine a state government’s chatbot providing incorrect information about unemployment benefits or health services.
Similarly, CIOs want to avoid using inaccurate information as the basis for decision-making. If they use AI to detect fraud in government programs, for example, AI-related errors could lead to time wasted scrutinizing legitimate claims or missed opportunities for detecting fraudulent activity.
Privacy: CIOs have legitimate concerns that sensitive citizen information could be exposed, especially if employees unintentionally input citizen data or files into systems that are collecting data to train AI models. Using an AI tool to analyze citizen health data and build a report, for example, could allow the underlying AI model to collect that sensitive information and later expose it in other contexts.
Security: CIOs are rightly concerned about the multiple ways data and models could be compromised as their agencies deepen the use of AI. For example, attackers might manipulate prompts for citizen-facing AI-based chatbots to trick the models into generating misinformation or to gain access to backend systems. They need ways to address a full array of threats to their AI ecosystem to protect not only citizen data but also agency systems.
Despite these concerns, the vast majority of public sector organizations lack a solid plan for addressing the implications of using AI. Yet some have moved forward with AI deployments anyway, leaving themselves exposed to headline-producing breaches and the dissemination of misinformation.
Many state and local government agencies have already begun their AI journey with pilot programs or limited deployments of AI tools. Even in organizations not actively pursuing AI, individual employees or teams may be using readily available AI tools — a phenomenon known as “shadow AI.” Regardless of whether its use is officially sanctioned, AI is present in most agencies, making it crucial to develop and implement an AI security strategy.
That strategy should have facets that correspond to each stage of the AI journey. In my work with public sector organizations, I have seen them typically progress through three key stages of AI use:
Consumption: Initially, agencies consume AI. They experiment with models, applications, and tools, including GenAI services. This phase often involves both approved and shadow AI use.
Integration: Agencies then begin integrating AI with their existing data and systems. This integration begins to unlock the true value of AI for increasing productivity or improving digital experiences.
Advanced defense: Agencies often implement some security capabilities in the first two stages of their journey. But as they progress with AI adoption, they need to strengthen their security posture. To defend against increasingly sophisticated, AI-assisted threats, agencies must implement advanced security capabilities and AI tools.
Organizations should construct a strategy that matches each of the phases of their AI use. Implementing a baseline defense for protecting data during the AI consumption phase is the first step.
The baseline focuses on establishing comprehensive visibility into AI usage across the organization while developing robust education and policy frameworks. A critical component of this baseline is addressing cyber threats related to large language model (LLM) data acquisition. LLMs continuously ingest data to improve their models, and agencies must pay particular attention to two primary data acquisition vectors.
Web scraping: Many LLMs automatically collect and process information from public websites. Commercial businesses might be more concerned about web scraping than public sector organizations, since scraping could result in the use of a business’s intellectual property for training models. Still, government agencies must be aware that anything they publish on their websites could be collected as training data for AI models; the sketch after this list shows one simple way to identify and control AI crawlers.
Interactions with AI tools: When employees use AI tools, such as GenAI services, they might inadvertently expose sensitive information through their prompts and queries. Even seemingly innocent actions, such as asking an AI tool to analyze a data set or create a visualization, can accidentally transfer sensitive data to external LLM systems.
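As one illustration of controlling AI scraping, the following minimal sketch flags requests whose User-Agent header contains a publicly documented AI crawler token, such as GPTBot (OpenAI) or CCBot (Common Crawl). The token list and the block-versus-serve decision are illustrative assumptions, not a complete bot management policy.

```python
# A minimal sketch, not a production bot management solution: it flags requests
# whose User-Agent contains a publicly documented AI crawler token. The token
# list and the block/serve decision are illustrative assumptions.
AI_CRAWLER_TOKENS = ("GPTBot", "CCBot")

def is_ai_crawler(user_agent: str) -> bool:
    """Return True if the User-Agent header contains a known AI crawler token."""
    ua = user_agent.lower()
    return any(token.lower() in ua for token in AI_CRAWLER_TOKENS)

def handle_request(user_agent: str, path: str) -> str:
    """Serve the request unless policy says the crawler should be blocked."""
    if is_ai_crawler(user_agent):
        return f"403 AI crawler blocked for {path}"
    return f"200 OK for {path}"

if __name__ == "__main__":
    print(handle_request("Mozilla/5.0 (compatible; GPTBot/1.0)", "/benefits"))
    print(handle_request("Mozilla/5.0 (Windows NT 10.0; Win64; x64)", "/benefits"))
```

Because user agents are easy to spoof and robots.txt directives are honored only by well-behaved crawlers, agencies typically layer checks like this behind a dedicated bot management service rather than relying on string matching alone.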
Addressing these risks requires baseline protections. Agencies should establish clear AI usage policies, robust data protection controls, and thorough security measures for safeguarding citizen data. In particular, the baseline strategy should focus on:
Visibility and awareness: Agencies need visibility into which LLMs and AI tools are being used and by whom. They should also determine which AI bots they will allow to scrape their websites, and employ bot management tools to distinguish good from bad bots.
Education and policies: It is critical to educate employees on how AI tools work and how to avoid exposing sensitive data. For example, they should learn how to avoid entering sensitive data into a GenAI tool’s prompt. Agencies should also put in place policies that can reduce the likelihood of security and privacy issues caused by potential employee missteps. And because the AI landscape is dynamic, agencies will need to continuously update education and fine-tune policies.
Data privacy and data loss prevention (DLP): In addition to setting policies for AI usage, agencies must implement DLP tools that prevent personally identifiable information (PII) and other sensitive citizen data from leaking through AI usage. The sketch after this list illustrates the kind of check a DLP control applies to outbound prompts.
Zero Trust: Implementing a Zero Trust Network Access solution can help control shadow AI while also making sure users access only vetted and permitted AI tools.
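To make the DLP point concrete, here is a minimal sketch of a prompt-screening check that redacts common PII patterns before a query leaves the agency. The regular expressions and placeholder behavior are illustrative assumptions, not a complete DLP policy, and real DLP products inspect far more than prompt text.

```python
# A minimal sketch of a DLP-style check: redact common PII patterns in a prompt
# before it is sent to an external GenAI service. Patterns are illustrative.
import re

PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

def redact_prompt(prompt: str) -> tuple[str, list[str]]:
    """Return the prompt with matched PII replaced by placeholders,
    plus the list of pattern names that fired (for logging and alerting)."""
    findings = []
    for name, pattern in PII_PATTERNS.items():
        if pattern.search(prompt):
            findings.append(name)
            prompt = pattern.sub(f"[REDACTED-{name.upper()}]", prompt)
    return prompt, findings

if __name__ == "__main__":
    cleaned, hits = redact_prompt("Summarize the claim for SSN 123-45-6789.")
    print(cleaned)  # Summarize the claim for SSN [REDACTED-SSN].
    print(hits)     # ['ssn']
```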
During the next phase of the AI journey, organizations begin integrating AI tools and models into their existing systems and processes. State and local government agencies might connect AI tools to systems that process forms, or they might incorporate AI-based chatbots into citizen portals.
The process of integration can present technical challenges. For example, an organization’s existing data might lack proper curation and structure for use with AI tools. Meanwhile, integration can create security risks, since many AI tools operate outside of the organization’s direct control.
Addressing these challenges requires focusing on a few critical areas, including data preparation and data privacy.
Data preparation: To prepare data for AI tools, teams must label and classify data. This process also helps protect that data, since teams can implement security controls based on classifications (a minimal classification sketch follows this list).
Data privacy: Agencies need to implement a data privacy framework plus an array of security capabilities to protect data as it flows among internal systems and external AI services. In addition to DLP capabilities, agencies need visibility into and control over the APIs used to connect with external AI services. They need capabilities for network analytics and monitoring. And they need tools such as a cloud access security broker (CASB) and secure web gateway (SWG), which can help control AI traffic flows.
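As a simple illustration of classification-driven controls, the sketch below labels a record “restricted” whenever any field matches a sensitive pattern. The patterns and labels are hypothetical; an agency would follow its own data classification standard and then apply controls such as encryption, DLP, and access policies based on the resulting labels.

```python
# A minimal sketch of rule-based data classification. The field patterns and
# the "restricted"/"public" labels are hypothetical placeholders for an
# agency's own classification scheme.
import re

# Illustrative patterns for values that typically carry sensitive citizen data.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"^\d{3}-\d{2}-\d{4}$"),
    "email": re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$"),
}

def classify_record(record: dict) -> str:
    """Label a record 'restricted' if any field matches a sensitive pattern,
    otherwise 'public'. The label drives which security controls apply."""
    for value in record.values():
        text = str(value)
        for pattern in SENSITIVE_PATTERNS.values():
            if pattern.match(text):
                return "restricted"
    return "public"

if __name__ == "__main__":
    print(classify_record({"name": "Jane Doe", "ssn": "123-45-6789"}))  # restricted
    print(classify_record({"office": "DMV", "city": "Albany"}))         # public
```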
Beyond enhancing employee productivity and delivering new digital experiences to citizens, AI can transform cyber security. Using advanced algorithms and machine learning (ML), AI can help security teams detect emerging threats, manage vulnerabilities, and respond to incidents automatically. Automation of traditionally manual tasks not only increases efficiency but also reduces human error in critical security operations.
Of course, attackers are also harnessing AI to enhance their malicious operations. They are using AI to generate more sophisticated phishing campaigns, improve reconnaissance, and develop advanced malware. Meanwhile, they are targeting vulnerabilities in GenAI systems.
To counter these evolving threats, agencies must adopt a proactive approach to security. They should not only leverage AI's defensive capabilities but also account for AI-specific vulnerabilities and attack vectors with a few best practices:
Select safe AI models: As agencies continue to connect their systems with AI services, they must ensure AI models and model vendors meet their quality and security standards. Maintaining an “allow list” of approved models and AI applications can reduce the risk that poorly secured systems will create vulnerabilities in their environments (see the sketch after this list).
Implement AI-driven security tools: Security tools that use ML or AI can provide advanced threat intelligence, helping agencies identify threats much earlier than traditional tools. At the same time, AI and ML tools can spot vulnerabilities in IT environments so agencies can strengthen their defenses ahead of incidents.
Employ continuous monitoring: IT teams must ensure that AI apps and environments have not been compromised. Continuously monitoring for anomalous behavior is critical for providing early signals of malicious actions.
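As one way to operationalize the allow-list practice above, the following minimal sketch checks a vendor and model pair against an approved set before an internal application forwards a request to an external AI service. The vendor and model names are hypothetical placeholders; the real list would come from the agency’s model-approval process.

```python
# A minimal sketch of enforcing an "allow list" of approved AI models before a
# request is forwarded to an external service. Vendor/model names are
# hypothetical placeholders.
APPROVED_MODELS = {
    ("example-vendor", "chat-model-v1"),
    ("example-vendor", "embedding-model-v2"),
}

class ModelNotApprovedError(Exception):
    """Raised when a requested model is not on the agency's allow list."""

def check_model_approved(vendor: str, model: str) -> None:
    """Raise ModelNotApprovedError if the (vendor, model) pair is not approved."""
    if (vendor, model) not in APPROVED_MODELS:
        raise ModelNotApprovedError(f"{vendor}/{model} is not an approved model")

if __name__ == "__main__":
    check_model_approved("example-vendor", "chat-model-v1")  # passes silently
    try:
        check_model_approved("unknown-vendor", "mystery-model")
    except ModelNotApprovedError as err:
        print(err)
```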
The integration of AI into public sector organizations presents both significant opportunities and security challenges. Success requires a structured approach to security that evolves as organizations expand their use of AI. Organizations must begin with fundamental security controls and progressively enhance their security posture as their AI capabilities and integrations mature. This includes maintaining a strong focus on data protection and governance, implementing continuous monitoring and assessment processes, and investing in ongoing education and policy development.
Cloudflare’s connectivity cloud enables state and local governments to implement a comprehensive security framework for AI with intelligent, cloud-native services in a unified platform. Cloudflare for Public Sector assembles key security, networking, and app development services that meet rigorous FedRAMP standards. With Cloudflare, your agency can continue to advance in your AI journey while retaining control of sensitive data.
This article is part of a series on the latest trends and topics impacting today’s technology decision-makers.
Learn more about how to support the integration of AI in your organization without leaving sensitive data at risk in the guide Ensuring safe AI practices: A CISO’s guide on how to create a scalable AI strategy.
Dan Kent — @danielkent1
Field CTO for Public Sector, Cloudflare
After reading this article you will be able to understand:
The top three AI-related concerns of public sector CIOs
The typical three-stage AI journey for state and local governments
Critical steps for securing AI implementations