In software development, AI needs a human teammate

Presented by General Dynamics

In government software development, the question isn’t whether AI will replace coders, but how humans and machines will work together. AI can already generate code, catch bugs, and suggest fixes in seconds. But only developers can weigh mission priorities, make security tradeoffs, and shape applications for real-world users.

“Anything that can be generated is being generated,” said Todd Bracken, senior solutions director at General Dynamics Information Technology (GDIT). In coding, testing, and even user-interface design and review, “AI can take the full lifecycle out of the hands of the developer and make the developer more of a reviewer.”

This shift doesn’t mean developers are disappearing. Instead, their role is evolving away from writing every line of code and toward guiding requirements, specifications, and decisions that ensure the product meets the mission.

Shifting Roles in the Human–AI Partnership

AI has significant limitations when it comes to complex reasoning, so the human in the loop remains critical. AI doesn’t understand mission priorities, security tradeoffs, or end-user needs.

“We’re seeing some difficulties in using AI to reason through extremely large code bases, like in mainframe modernization,” Bracken noted. That capability may come, “but there’s still a lot of challenge there.”

This means developers are still the ones making decisions, setting direction, and shaping the product. AI can accelerate workflows, but humans anchor the mission context. The future of app development depends on knowing what to delegate to AI and where human oversight is indispensable.

Industry experts agree.

"AI enables federal developers to accelerate innovation by automating routine tasks," said Rob Smith, area vice president of public sector at GitLab. "This frees developers to focus on strategic priorities that require human judgment, such as implementing scalable governance frameworks and ensuring AI deployments align with mission-critical goals."

Developers will also need to emphasize uniquely human skills, such as user experience, design analysis, and communication, alongside coding ability. Those who blend both will be best positioned as AI reshapes the development process.

Starting Small to Build Confidence

While AI offers powerful augmentations, it’s far from perfect. Government agencies will need to strike a thoughtful balance between automation and human oversight. For most, a start-small approach will be the safest course.

A good entry point is small, targeted pilot projects focused on the solutions that deliver the biggest impact right away, such as AI-assisted knowledge management.

“One of our offerings is a software factory knowledge management solution that is AI-assisted,” Bracken said. “It’s loading up all of your user manuals, your style guides, your process guides — things that help developers be more proficient in building solutions — and they can interact with those documents, versus having to search through hundreds of documents just to figure out when they should be submitting a change request.”

Some in government may still be leery of letting AI loose on their data, and this start-small approach can help drive the culture change needed for broader adoption. Minor use cases can demonstrate safe, secure AI implementation. Early pilots may even be best deployed in air-gapped environments to provide additional safeguards.

“In some cases, people aren’t aware that those models are using your data to train,” Bracken said. “So the first thing we came out with last year was ‘private, secure and local,’ meaning we could deploy it anywhere, and there was no ‘phone home’ to some large cloud-based model.”

Building Toward Secure and Sustainable AI

Ultimately, security will be central to gaining confidence in AI-assisted development. A focus on demonstrable safeguards can help agencies and end users trust that data is protected, while also unlocking new efficiencies.

“The reward outweighs the risk,” Bracken said. “AI allows us to do our job better and we can provide better correlations to help us reduce risk faster. But to do that, we have to make sure we apply AI most effectively and build a framework to use it safely and securely.”

In the end, success in AI-assisted development won’t come from the tools alone, but from how agencies choose to use them. By combining human judgment with machine speed, agencies can build software that evolves as quickly as the missions it serves.

This content is made possible by our sponsor GDIT; it is not written by and does not necessarily reflect the views of GovExec's editorial staff.
