The intersection of AI and security: Balancing transformation and risk

As AI proliferates throughout the government space, enhancing operational efficiency, streamlining public services and making measurable impacts, its accelerating capabilities also create new vulnerabilities. Public sector leaders must strike a careful balance, seizing transformative opportunities while combating increasingly sophisticated cyber threats.

Kelly Moan, Chief Information Security Officer of the City of New York, described the ultimate goal: to “enable technology innovation through security, rather than it being a blocker.” Moan joined other public sector technology leaders and Google experts at Google Public Sector Gen AI Live & Labs, in collaboration with GovExec, to discuss strategies for navigating the critical intersection of AI and security.

Thoughtful, strategic AI implementation

As the public sector workforce becomes more familiar with AI use cases and capabilities, concerns around the technology are waning. A newly released study Google commissioned with GovExec demonstrates this growing comfort, finding a 20% decrease in the number of respondents concerned about AI security. Though this represents a marked decrease, 57% still reported concern — and their concerns are not unfounded.

“While AI offers tremendous opportunities to improve cybersecurity, it also offers a lot of additional threats to cybersecurity,” said Suman Taneja, Deputy Chief Information Officer at Hunter College. “Some of the models that we have used for threat prevention or vulnerability assessment and prioritization may not work anymore in the world of deep fakes where malicious actors can do things at scale.”

Despite reservations held by the majority, 40% of respondents also reported cybersecurity as a current AI use case for their agency. These two statistics aren’t at odds but represent the core challenge at the intersection of AI and security: even as AI implementation creates new attack surfaces, AI also offers the potential to help mitigate those risks.

For the City of New York, balancing these two sides depends on thoughtful, intentional implementation of AI — not simply jumping into AI-driven solutions because they’re the current trend but carefully establishing and following a strategic plan. 

“We're endeavoring to be an enabler of artificial intelligence in a responsible and ethical way,” Moan said. “The City released publicly an AI action plan … and we also released principles and guidance that show New Yorkers what and how we think about the use of AI.”

Secure deployment requires guardrails, verification and transparency 

In the mayor’s office, technology is a key element of the city’s overall performance. The NYC mayor recently launched the NYC Performance Management Cabinet, led by Camille Joseph Varlack, Chief of Staff and Deputy Mayor for Administration. At Gen AI Live & Labs, Varlack highlighted the role of technology in this initiative.

“Anytime we are thinking about incorporating technology … we are thinking about all of the creative ways that we can drive city performance,” Varlack said, also noting that she and her team are “always focused on the risk and the security and how we make sure that we are being as careful as possible.”  

Careful consideration is paramount to incorporating AI in a safe and secure manner, especially given the scale of the City of New York, its workforce, and its citizens.

“It's important that New York City be a global leader in applied AI. If we can figure out ways to bring AI to the largest municipal city in the country, I think that is going to be key,” Varlack said. “We have 8.3 million folks that live in this city, and [we are] making sure that we integrate it in a safe, ethical and equitable manner.”

For Farhan Abdullah, Director of Information Technology at the New York City School Construction Authority, success depends on integrating guardrails and transparency to ensure responses are reliable. Increases in efficiency or productivity are meaningless if the output is wrong. It’s important that responses be grounded and clearly link back to source material.

Moreover, AI systems must be designed to only retrieve and process information based on which user is accessing the system. Does the user have rights to the documents that are the source material for responses? 
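In practice, this kind of permission-aware retrieval means filtering the documents an AI system can draw on before any response is generated. The sketch below is purely illustrative — the document names, roles and `retrieve_for_user` helper are assumptions for this example, not any system described by the Authority:

```python
# Illustrative sketch of permission-aware retrieval: before a document can
# ground an AI response, the requesting user's access rights are checked.
# All names here (documents, roles, helper functions) are hypothetical.

from dataclasses import dataclass, field

@dataclass
class Document:
    doc_id: str
    text: str
    allowed_roles: set = field(default_factory=set)  # roles permitted to read this doc

def retrieve_for_user(query: str, user_roles: set, corpus: list) -> list:
    """Return only documents the user is authorized to see.

    A real system would combine this filter with semantic search;
    here we use a naive keyword match over permitted documents.
    """
    permitted = [d for d in corpus if d.allowed_roles & user_roles]
    return [d for d in permitted if query.lower() in d.text.lower()]

corpus = [
    Document("memo-1", "Building permit workflow overview", {"public"}),
    Document("hr-7", "Confidential personnel review notes", {"hr"}),
]

# A user holding only the "public" role never sees the HR document, so
# restricted material cannot leak into the model's context.
results = retrieve_for_user("permit", {"public"}, corpus)
print([d.doc_id for d in results])  # → ['memo-1']
```

The key design choice is that the access check happens at retrieval time, before generation, so the model never receives material the user lacks rights to.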

“Verification mechanisms are not a nice-to-have, they’re a must-have, especially for public sector organizations,” Abdullah said. “We're responsible for the safety and security of our citizens in New York City in our AI policy strategy. These are some of the things that we must have to make sure that we're not relying on unreliable information.”

Workforce upskilling and education

On top of optimizing and securing the technology itself, managing risk around AI requires education. The workforce must be trained on new tools and solutions and comfortable using them. The New York City government employs hundreds of thousands of people. As AI becomes more commonplace among the workforce, each of those people is a potential vector for AI-related cyberattacks or exploitation.

“We're talking to stakeholders who may not understand cybersecurity but also may not understand AI, and so they may not realize that the data they're choosing to offhandedly upload into an LLM might be going somewhere that it shouldn't be going,” Moan said.

To demonstrate these risks and potential fallout, Moan said her team engages in conversations to walk through what a “bad day” would look like by reviewing incident response plans, as well as testing the sensitivity of systems and applications. 

Abdullah also shared how his organization is approaching education among employees: “We launched an AI literacy program across the agency, and our team started identifying business challenges and where AI could fit to improve business processes.” 

For New York City, initial use cases are typically straightforward, achievable projects that can demonstrate immediate business value, whether by generating revenue, enhancing processes or improving user experiences. These types of visible “wins” are a way to clearly demonstrate the value of AI and bring people on board to drive adoption.

As an example, Abdullah highlighted bringing automation to the building code compliance division, which manages a database referencing sections of New York City building code. 

“Imagine those code books are being updated every month, every year,” he said, “and so every time there's an update, the team has to go and manually compare thousands of items across 3,000-plus pages of New York City building code books.”

Instead, the organization deployed an AI solution that creates a comparison report highlighting changes, turning a process that used to take weeks into one that can be completed in hours. It’s a success that Abdullah is looking to scale across the organization.
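The underlying task — flagging what changed between two versions of a long document — can be sketched with Python's standard `difflib`. This is a generic illustration of a change report, not the Authority's actual system, and the sample code-book lines are invented:

```python
# Illustrative sketch: generate a change report between two versions of a
# document using Python's standard difflib. The sample "code book" lines
# are hypothetical, not actual New York City building code.

import difflib

def change_report(old_lines, new_lines):
    """Return human-readable entries for every changed, removed or added line."""
    report = []
    matcher = difflib.SequenceMatcher(None, old_lines, new_lines)
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        if tag == "replace":
            report.append(f"Changed: {old_lines[i1:i2]} -> {new_lines[j1:j2]}")
        elif tag == "delete":
            report.append(f"Removed: {old_lines[i1:i2]}")
        elif tag == "insert":
            report.append(f"Added: {new_lines[j1:j2]}")
    return report

old = ["Section 101: Scope", "Section 102: Permits required"]
new = ["Section 101: Scope", "Section 102: Permits and inspections required"]
for entry in change_report(old, new):
    print(entry)
```

Scaled across thousands of sections, automating this comparison is what turns a weeks-long manual review into one that finishes in hours.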

“We are going to continue to expand our training program, we're going to continue to give access to secure AI tools for our users,” he said. “Our goal is to empower users to use AI effectively and responsibly … and we’re going to continue to develop use cases that can improve our business processes.”

This is the second in a series of articles based on Google Public Sector Gen AI Live & Labs, in collaboration with GovExec, which convened industry experts and leaders across city and state governments, as well as higher education, to discuss this new era of innovation. To help keep you at the forefront of the latest advancements, sign up today to receive the Google Public Sector Newsletter.

For more insights from the Gen AI Live & Labs event, read the first article and discover how AI is accelerating a new era of public sector innovation.

This content is made possible by our sponsor Google; it is not written by and does not necessarily reflect the views of GovExec's editorial staff. 
