Enova Deployments: A Journey to Self-Service


by: Sierra Navarro, Head of Technology Operations

When many of us joined Enova in the early 2010s, there was a scheduled (or “standard”) release day each week. All releases happened on this one day, usually a Thursday, and it was an all-day affair.

The team would pile into a conference room at 9 am, expecting to stay until at least 5 pm. We would spend over eight hours deploying releases, eating lunch in the room, and sometimes staying late in the evening if it was a particularly difficult day. 

Perforce commands were run one at a time (we didn’t use Git yet), our batch jobs had to be stopped for the entirety of the deployment, and the release to our monolith application took 60-90 minutes to cycle through the cluster. 

Rinse and repeat through each of our brands (we had 5 apps at the time), add in time to do manual post-release checks, and you’d have finally completed a full day of deployments.

However, if something didn’t go smoothly, the day stretched even longer. Sometimes that meant pushing out a few “emergency” releases on top of the scheduled ones, adding still more time.

Eventually, commands were chained together into the first iteration of our current deployment solution (we affectionately called it deploy-a-tron in those early days), “standard” release days were joined by “off-cycle” releases, and things evolved. 

But we knew we could do better. 

And so we did. 

The 5-10 releases a week in 2010 became 30 releases a week by 2014, which turned into 80 releases a week by 2018, and as many as 150 releases a week in 2019. As we progressed, emergency releases became few and far between and every workday was a “standard” release day.

As release volume increased, we automated key compliance controls such as segregation of duties, a significant requirement in the heavily regulated financial services industry. By automating our controls, we ensured reliability and repeatability as we scaled.

And although we deployed 150+ times in just 5 working days earlier this year, we knew we could do even better.

To this end, the Deployment Engineering team recently tackled two challenging projects: streamlining the deployment pipeline generation process and putting self-service releases into the hands of developers.

 

Self-Service Deployment Pipelines

First, the team built self-service deployment pipelines. This allowed developers to create their own build and deployment pipelines as code in just a matter of minutes.

Previously, setting up a new deployment pipeline meant:

  • Creating a ticket requesting pipelines for a new service or lambda, requiring a lot of back-and-forth
  • Manually copying existing staging and production jobs
  • Manually re-configuring the jobs for the new application

Which…

  • Was not reliably reproducible
  • Made communication and collaboration on changes unnecessarily difficult
  • Required a Deployment Engineer’s time and attention

Under the old process, the time from start to finish ranged anywhere from a few hours to entire weeks. Currently, all we need is a simple ~6-10 line pull request. This request auto-generates staging and production pipeline jobs that are already configured, source-controlled, and ready to go.

Under the new solution, the total time from start to finish is about 25 minutes.
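
To give a feel for the shape of such a change, here is a minimal sketch in Ruby; the file contents, field names, and service name are hypothetical stand-ins rather than our actual definition format.

```ruby
# Hypothetical pipeline definition -- the field names and values below are
# illustrative stand-ins, not Enova's actual schema.
PIPELINE = {
  app:          "loan_quotes_api",        # hypothetical service name
  language:     "ruby",                   # lambda, ruby, go, or vue
  repo:         "enova/loan_quotes_api",  # hypothetical repository
  environments: %w[staging production],   # jobs are generated for each
  notify:       "#deploys-loan-quotes"    # hypothetical alert channel
}.freeze
```

The point is that everything the generator needs fits in a handful of declarative lines, so the change is quick to review and the resulting jobs are source-controlled from the start.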

This results in improved time-to-delivery for our Software Engineering teams as well as a reduced workload for the Deployment Engineering team, all while maintaining compliance controls.

Support is already live for Lambdas, Ruby, Go, and Vue (with Ember close behind) – and although this was a huge win, we didn’t stop there!

 

Self-Service Deployments

Utilizing our new pipeline solution, we took things even further and automated releases down to a single self-service push of a button.

When a developer is ready to push their code to production, they have to do nothing more than hit a button on their deployment ticket, confirm the automatically generated release parameters are correct, and voilà!

No more asking someone in Deployment Engineering to release for you, no waiting in a queue until someone’s available to deploy – just release when you’re ready.

Behind-the-scenes checks review the same requirements we enforced previously – things like ensuring all pull request checks are successful, stories are accepted by stakeholders, and segregation-of-duties checks pass.

If everything passes, the release will go out automatically with just the push of a button.
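
To make the flow concrete, here is a minimal Ruby sketch of that kind of gate; the method name, ticket fields, and rules are hypothetical and heavily simplified rather than our actual implementation.

```ruby
# Hypothetical pre-release gate -- illustrative only, not Enova's real code.
def release_allowed?(ticket)
  checks = {
    "pull request checks passed" => ticket[:pr_checks_green],
    "stories accepted"           => ticket[:stories].all? { |s| s[:accepted] },
    # Segregation of duties: the author cannot be the only approver.
    "segregation of duties"      => (ticket[:approvers] - [ticket[:author]]).any?
  }

  failures = checks.reject { |_name, passed| passed }
  return true if failures.empty?

  # Clear error messaging so troubleshooting is self-service, too.
  failures.each_key { |name| warn "Release blocked: #{name} check failed" }
  false
end

# Example usage with a hypothetical deployment ticket:
ticket = {
  author: "sierra", approvers: ["devan"],
  pr_checks_green: true,
  stories: [{ id: 123, accepted: true }]
}
puts release_allowed?(ticket)  # => true
```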

In the event there are any issues, we’ve added robust error messaging so troubleshooting and resolution are self-service, too.

 

What’s Next?

We’ve certainly come a long way from those early days of all-day releases, but there is always more to do. In the upcoming months, the Deployment Engineering team will be focusing on a variety of enhancements including more self-service solutions, blue-green deployments, canary releases, and establishing a container pipeline. If this kind of challenging work sounds interesting to you, check out our openings across technology and join us in building the next generation of Enova.
