What I learned in federal web work
Next week will be my last week on the contract. I decided to leave, and before I move on, I wanted to write down what I actually did day to day.
I wasn't a direct employee of the agency. I worked for a contractor, assigned to the agency's program as a tech lead / architect. That's the arrangement behind a lot of federal web work. You're on the team day to day, but your paycheck comes from somewhere else.
There was a clearance process before I could start. Background check, forms, references, a wait. It's standard for federal contract work, and it shapes onboarding more than most private-sector jobs.
What I actually did
Most of my time went to public-facing content sites. Thousands of pages, visitors in the millions, content in several languages. Most of the translations were handled by an outside service, MotionPoint. A few sites used Drupal's built-in translation instead, for the Spanish, Chinese, and Korean variants. Each had its own quirks around localized URLs and different writing systems.
A lot of the work was keeping a portfolio of existing Drupal sites healthy. Version upgrades, module updates, fixes, and feature work. I was also part of a larger project to consolidate over 400 Drupal sites into 8 hubs. That one is still in progress.
I ran audits. Inventorying thousands of PDFs. Checking accessibility. Finding broken links and stale content. Most of that manual work turned into Python scripts. PDF audits, analytics pulls, scrapers, redirect checks, report generators. The team still uses some of them.
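For flavor, here's roughly the shape of the link-audit side of those scripts: a minimal sketch assuming a placeholder URL list and the requests library. The real versions pulled URLs from site exports and analytics and wrote reports the content team could act on.

```python
import csv
import requests

# Placeholder URLs; the real scripts read these from site exports and analytics pulls.
URLS = [
    "https://www.example.gov/topics/forms",
    "https://www.example.gov/es/temas/formularios",
]

def check_url(url, timeout=10):
    """Follow redirects and report the final status, destination, and hop count."""
    try:
        resp = requests.get(url, timeout=timeout, allow_redirects=True)
        return {
            "url": url,
            "status": resp.status_code,
            "final_url": resp.url,
            "redirects": len(resp.history),
        }
    except requests.RequestException as exc:
        return {"url": url, "status": "error", "final_url": str(exc), "redirects": 0}

def main():
    rows = [check_url(u) for u in URLS]
    with open("link_audit.csv", "w", newline="") as fh:
        writer = csv.DictWriter(fh, fieldnames=["url", "status", "final_url", "redirects"])
        writer.writeheader()
        writer.writerows(rows)

if __name__ == "__main__":
    main()
```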
I supported the data analytics team too. GA4, Google Search Console, and GTM. Pulling reports, fixing tagging issues, investigating discrepancies, and keeping the tracking in sync with site changes. I also automated some of the report generation in Python.
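A trimmed-down sketch of the kind of GA4 pull I mean, assuming the google-analytics-data client and a placeholder property ID. The real reports layered in Search Console data and formatting on top of this.

```python
from google.analytics.data_v1beta import BetaAnalyticsDataClient
from google.analytics.data_v1beta.types import (
    DateRange, Dimension, Metric, RunReportRequest,
)

PROPERTY_ID = "123456789"  # placeholder GA4 property ID

def top_pages(property_id: str, start: str = "28daysAgo"):
    """Pull pageviews by path for the trailing window and return (path, views) rows."""
    client = BetaAnalyticsDataClient()  # uses application default credentials
    request = RunReportRequest(
        property=f"properties/{property_id}",
        dimensions=[Dimension(name="pagePath")],
        metrics=[Metric(name="screenPageViews")],
        date_ranges=[DateRange(start_date=start, end_date="today")],
        limit=50,
    )
    response = client.run_report(request)
    return [
        (row.dimension_values[0].value, row.metric_values[0].value)
        for row in response.rows
    ]

if __name__ == "__main__":
    for path, views in top_pages(PROPERTY_ID):
        print(f"{views}\t{path}")
```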
The first thing I took on was the deployment process. Most of it already ran on Jenkins, but one critical step still needed someone to run a shell script from their laptop. Moving that last piece to Jenkins was my first win.
Later, I added CodeDeploy for the lower environments to speed up deploys there. By the end, a deploy that used to take over an hour ran in a few minutes.
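The actual pipeline lived in Jenkins, but as a sketch of the moving part: kicking off a CodeDeploy deployment from a script looks roughly like this. The application, deployment group, and bucket names are placeholders.

```python
import boto3

def deploy_to_lower_env(bundle_key: str) -> str:
    """Start a CodeDeploy deployment from an artifact already uploaded to S3."""
    client = boto3.client("codedeploy")
    response = client.create_deployment(
        applicationName="cms-app",          # placeholder application name
        deploymentGroupName="dev",          # placeholder lower-environment group
        revision={
            "revisionType": "S3",
            "s3Location": {
                "bucket": "example-deploy-artifacts",
                "key": bundle_key,
                "bundleType": "zip",
            },
        },
        description="Automated deploy to the dev environment",
    )
    return response["deploymentId"]

if __name__ == "__main__":
    print(deploy_to_lower_env("releases/site-2024-06-01.zip"))
```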
On-call was part of the job. Production issues don't wait for business hours.
I also spent time on a content syndication platform outside our main stack. Grails-based, older, not something I'd worked with before. The Groovy I'd been writing for the Jenkins pipelines transferred over, which made picking up the platform faster than it would have been otherwise. I learned it well enough to troubleshoot it and helped with a server upgrade to Ubuntu 22.04.
A few other things I touched:
- Two React apps, smaller front-end pieces layered on top of the CMS and tied into the federal design system.
- AWS day to day. Load balancers, target groups, CodeDeploy, and a fair amount of debugging production issues that only show up with real traffic.
- Testing with Playwright and Cypress, pytest for Python. Toward the end I experimented with Playwright plus an LLM to catch visual regressions that normal diffs miss; there's a rough sketch of that after this list.
- AI demos and proofs-of-concept, showing leadership what modern tools could do and how they might fit without blowing up a security review.
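That Playwright-plus-LLM experiment, sketched out: Playwright captures before-and-after screenshots and a vision-capable model describes meaningful differences. This assumes Playwright's Python bindings and the OpenAI SDK; the URLs, model name, and prompt are placeholders, not what we actually ran.

```python
import base64

from openai import OpenAI
from playwright.sync_api import sync_playwright

def screenshot(url: str, path: str) -> str:
    """Capture a full-page screenshot of the URL and return its path."""
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto(url)
        page.screenshot(path=path, full_page=True)
        browser.close()
    return path

def encode(path: str) -> str:
    with open(path, "rb") as fh:
        return base64.b64encode(fh.read()).decode()

def describe_differences(before_path: str, after_path: str) -> str:
    """Ask a vision-capable model whether two screenshots differ in ways that matter."""
    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any vision-capable model works
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": "Compare these two screenshots of the same page. "
                                         "Ignore dynamic content like dates; call out layout "
                                         "shifts, missing components, or broken styling."},
                {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{encode(before_path)}"}},
                {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{encode(after_path)}"}},
            ],
        }],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    screenshot("https://www.example.gov/", "before.png")     # baseline build
    screenshot("https://staging.example.gov/", "after.png")  # candidate build
    print(describe_differences("before.png", "after.png"))
```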
The stack included Drupal, PHP, Python, JavaScript, React, Groovy, Jenkins, Docker, AWS, Splunk, and USWDS (the US Web Design System). By the end, a lot of AI tooling too (Gemini, OpenAI, vector databases, MCP servers).
A few things I noticed
Process is real. Security reviews, accessibility audits, content freezes, sprint planning across pods. The first month it felt slow. Then you get why it's there.
Jira, Confluence, and Bitbucket were all self-hosted for FedRAMP compliance. Cloud versions weren't an option.
There are a lot of meetings. Standups, sprint planning, architecture reviews, content reviews, pod syncs, one-on-ones, demos, status calls. Some were essential. Some weren't.
Accessibility isn't a checkbox. It changes how you name things, structure pages, and test.
Day to day, I worked with editors and program leads as much as with other engineers.
You work with a lot of outside vendors. Akamai for CDN and edge security, an AWS partner for infrastructure, GSA for search.gov integration, and various audit and review vendors along the way. A real part of the job is running down answers across company boundaries, not just inside your team.
Communication is a big part of the job. Probably more than I expected going in.
I wrote a lot. Status updates, runbooks, handoff docs, tickets. I'm literally writing a handoff email as I type this.
A lot of time went to explaining technical tradeoffs to people who aren't technical.
Toward the end, I did some AI work. AI-assisted content workflows, LLM classifiers for PDFs, and MCP servers. We also started conversations about bringing in coding agents like Codex and Claude Code as part of our workflow. Those approvals are still pending on the government side.
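As a sketch of what I mean by an LLM classifier for PDFs: extract the text, hand it to a model with a fixed category list, keep the answer. The categories and model name here are made up; the real taxonomy came from the content team.

```python
from openai import OpenAI
from pypdf import PdfReader

# Hypothetical category set; the real taxonomy came from the content team.
CATEGORIES = ["form", "report", "fact sheet", "policy memo", "other"]

def classify_pdf(path: str) -> str:
    """Extract the first few pages of text from a PDF and ask an LLM to pick a category."""
    reader = PdfReader(path)
    text = "\n".join(page.extract_text() or "" for page in list(reader.pages)[:3])

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": f"Classify the document into one of: {', '.join(CATEGORIES)}. "
                        "Reply with the category only."},
            {"role": "user", "content": text[:8000]},
        ],
    )
    return response.choices[0].message.content.strip()

if __name__ == "__main__":
    print(classify_pdf("example.pdf"))
```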
Debugging in the cloud is its own thing. AWS production doesn't look like your laptop. You spend a lot of time in Splunk, CloudWatch, and deploy histories.
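A lot of that time reduces to queries like this one: a minimal sketch of pulling recent errors from a CloudWatch log group with boto3, with a placeholder log group name.

```python
import time

import boto3

def recent_errors(log_group: str, minutes: int = 60):
    """Yield recent ERROR-level events from a CloudWatch log group."""
    client = boto3.client("logs")
    start = int((time.time() - minutes * 60) * 1000)  # CloudWatch expects epoch millis
    paginator = client.get_paginator("filter_log_events")
    for page in paginator.paginate(
        logGroupName=log_group,
        startTime=start,
        filterPattern="ERROR",
    ):
        for event in page["events"]:
            yield event["timestamp"], event["message"]

if __name__ == "__main__":
    # Placeholder log group name.
    for ts, msg in recent_errors("/ecs/cms-app-prod"):
        print(ts, msg.strip())
```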
I had the good fortune to lead smart people and work for a manager who trusted me to do the job. That made most of the rest of it easier.
That's it. On to whatever's next.