- Posted by Josh Quint, Lead Systems Architect
- AWS, DevOps, Perspective, Strategy
The term DevOps has exploded as a technical buzzword in the last few years. A search on any tech jobs board will reveal a multitude of postings for DevOps engineers, almost as many as for ‘Cloud’ engineers or developers. The job descriptions apply the term to a wide variety of tasks and responsibilities, depending on the organization.
Historically, most organizations used clearly-defined and distinct Developer and Operations teams. These teams were very much separate, with minimal overlap and communication.
The deployment methods used by these organizations typically involved “throwing the code over the wall”: once the Development team decided their code worked, they packaged it up with some documentation and handed it off to the Ops team to deploy and maintain. If the code did not work, the Ops team simply threw it back over the wall with commentary along the lines of “it doesn’t work!”
Much of this friction resulted from a lack of thought about portability: developers wrote the code on one platform, even though Ops would deploy and test it on another.
In an attempt to keep the Development and Operations teams from playing a never-ending game of code volleyball, QA teams stepped into the middle. Leveraging their development background, they could communicate errors back to the Development team more easily, reducing the likelihood of the Ops team receiving faulty applications.
However, adding the QA team also adds complications to the deployment process. In addition to the original wall between Development and Operations, there is now a second wall between QA and Ops. Each team also deploys code in its own environment, and those environments don’t necessarily stay in sync without automation to keep them consistent. With even minor differences between environments, an application that works perfectly in Dev might pass QA, yet still fail catastrophically when the Ops team deploys it to Production.
The DevOps model changes the legacy code deployment strategy by combining Dev, QA, and Ops into an integrated organization, removing the walls and barriers, and softening the distinction between these teams. Environments are treated as part of the code and kept consistent through every stage of the deployment, using the same version-control tools.
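The environment-consistency idea above can be sketched in a few lines. If environment definitions live in version control as plain data, drift between Dev and Production becomes something you can detect automatically rather than discover in an outage. The config shape and the `diff_environments` helper below are hypothetical illustrations, not a real tool:

```python
# Illustrative sketch: environment definitions as version-controlled data.
# The keys and values here are made up for the example.

def diff_environments(base: dict, other: dict) -> dict:
    """Return each key whose value differs, mapped to its (base, other) pair."""
    keys = set(base) | set(other)
    return {k: (base.get(k), other.get(k))
            for k in keys if base.get(k) != other.get(k)}

dev = {"python": "3.11", "db": "postgres-15", "workers": 2}
prod = {"python": "3.10", "db": "postgres-15", "workers": 8}

# drift maps each differing key to its (dev, prod) pair; an empty dict
# would mean the two environments are in sync.
drift = diff_environments(dev, prod)
```

A check like this can run in a deployment pipeline, failing the build when environments have drifted apart instead of letting the difference surface in Production.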
Today, there are several methods to achieve the DevOps goal:
The One-Man Band Method
The One-Man Band approach implements the DevOps strategy by having all members (sometimes, quite literally, one person) be responsible for Development, QA, and Operations. This model is most commonly used in smaller organizations that aren’t able to support many team members.
The axiom “Jack of all trades, master of none” perfectly describes this method. Engineers have to split their time and talents evenly between code development, testing, and operations.
The Silo Method
The Silo model keeps teams separate all the way up to the management level, allowing higher levels of expertise to emerge as each engineer draws on their own strengths, as dictated by their role within the organization or team. Teams are strongly encouraged to keep communications open and work together to solve issues.
Organizationally, this is almost identical to the “throw the code over the wall” approach, causing a tendency to emulate the old model as the teams maintain separation. DevOps then becomes a mantra that the company uses in name only.
The Embedded Method: The Turing Group Approach
Turing Group strives to approach DevOps with the Embedded Method. We have a team of Development-oriented Engineers and Architects working in tandem with a team of Operations-oriented Engineers and Architects. Both teams report to the same management, and both are involved throughout the development, deployment, testing, and operations processes.
The teams begin by thinking through how the software will run, so the Cloud infrastructure is developed more like a piece of software. Engineers on the team can exercise their personal expertise in a particular area, but are fully collaborative with everyone else in the DevOps organization. This process allows for expert design of both software and infrastructure components, without the problems of complete separation. We also find it keeps development efficiently streamlined, as there are fewer surprises when code is developed and deployed into Testing and then to Production environments.
Treating code and infrastructure in a similar manner allows us to build and manage the infrastructure via an iterative code-like development process. Infrastructure changes can be made just as swiftly as application code, and tested as thoroughly as a regular code update. We can handle an increase in load or a different use-case as easily as we can add a feature to the software.
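As a minimal illustration of that idea (the types, names, and numbers here are hypothetical, not our production tooling), handling an increase in load becomes an ordinary, reviewable code edit when infrastructure is expressed as code:

```python
# Illustrative sketch: an infrastructure definition as a plain data type,
# so a capacity change is a testable code change like any other.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class WebTier:
    instance_type: str
    min_size: int
    max_size: int

def scale_for_load(tier: WebTier, expected_rps: int,
                   rps_per_instance: int = 100) -> WebTier:
    """Return a new tier definition sized for the expected request rate."""
    needed = max(tier.min_size, -(-expected_rps // rps_per_instance))  # ceiling division
    return replace(tier, max_size=max(needed, tier.max_size))

base = WebTier(instance_type="t3.medium", min_size=2, max_size=4)
scaled = scale_for_load(base, expected_rps=800)  # max_size grows to 8
```

Because the change is just code, it can go through the same review, version control, and automated testing as an application feature before anything is applied to real infrastructure.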
The Embedded method also saves us roughly 30-40% of the time required by the “over the wall” approach. Not only does it cut down on iterations, but the AWS infrastructure is also flexible enough that we can adjust and revise servers on the fly.
Want to know how our approach to DevOps can benefit your business? Contact us.