The core of a DevOps pipeline consists of the following: continuous
integration/continuous delivery (CI/CD), continuous testing (CT), continuous deployment,
continuous monitoring, continuous feedback/evolution, and continuous operations.
This whitepaper provides insight into what these concepts mean
and how they serve as building blocks for DevOps.
Continuous Integration & Delivery
Before continuous integration (CI) was in place, developers built the application
features in silos and submitted them separately. The concept of CI has completely
changed how developers go about sharing their code changes with the master
branch. With CI, the system frequently integrates the code changes into a central
repository several times a day.
As a result, merging the different code changes becomes easier and less error-prone.
You'll also encounter integration bugs early, and the sooner you spot them, the
easier it is to resolve them.
Continuous delivery (CD) is about incremental delivery of updates/software to
production. While serving as an extension of CI, CD enables you to automate your
entire software release operation. It allows you to look beyond just the unit tests and
perform other tests such as integration tests and UI tests.
As a result, the developers can perform a more comprehensive validation on updates
to ensure bug-free deployment. With CD in place, you increase the frequency of
releasing new features. Consequently, it enhances the customer feedback loop,
thereby creating the opportunity for better customer involvement.
Thus, CI and CD serve as the linchpins of any DevOps pipeline.
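The CI loop described above can be reduced to a simple rule: merge each change into a shared mainline and verify every integration immediately. The following is a minimal sketch of that rule, not a real CI server; the change names and the `run_tests` callback are purely illustrative:

```python
# Simplified sketch of a CI server's integrate-and-verify loop.
# "run_tests" stands in for whatever test runner the project uses.

def integrate(mainline, change, run_tests):
    """Merge one change into the shared mainline, then verify it.

    Returns the new mainline if the tests pass; otherwise the change
    is rejected and the mainline is left untouched.
    """
    candidate = mainline + [change]      # merge into a central copy
    if run_tests(candidate):             # verify every integration
        return candidate                 # integration succeeded
    return mainline                      # reject: bug caught early

# Example: a "test suite" that rejects any change marked broken.
passes = lambda code: all(c != "broken-change" for c in code)

mainline = []
for change in ["feature-a", "broken-change", "feature-b"]:
    mainline = integrate(mainline, change, passes)

print(mainline)   # the broken change never reaches the mainline
```

Because every integration is verified on the spot, a bad change is rejected in isolation instead of surfacing weeks later tangled up with everyone else's work.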
Continuous Testing & Deployment
- Continuous testing (CT) is another key component of a DevOps pipeline. With
continuous testing, you can perform automated tests on the code integrations
accumulated during the continuous integration phase.
- Besides ensuring high-quality application development, continuous testing also
evaluates the release's risks before it proceeds to the delivery pipeline. Apart from
the test-script development, continuous testing doesn't require any other manual effort.
- Testers write the test scripts before the commencement of coding. As a result,
once the code integration happens, the tests begin to run one after the other.
- Automating tests is not always straightforward. It can be a messy business, and it
takes time to learn how to do it effectively. Despite the difficulty and investment
required in getting up to speed on automated testing, it's well worth it.
- Teams that know their complete suite of automatically executed tests have
advantages over those without. Such teams know their tests will notify them of
problems. They feel comfortable making changes. They proceed with confidence.
- There’s an element of ambiguity when people talk about continuous delivery and
continuous deployment. People often interchange the two terms although
there’s a substantial difference between them.
- Continuous deployment succeeds continuous delivery: the updates that
successfully pass through the automated testing are released into production
automatically. As a result, it enables multiple production deployments a day.
- While the goal of continuous delivery is to make your software instantly ready for
release, the actual job of pushing it into production is manual. That's
where continuous deployment comes into the picture.
- And, as mentioned earlier, if the updates can be deployed, they’ll be deployed
automatically through continuous deployment.
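The distinction between delivery and deployment comes down to a single gate at the end of the pipeline: does a passing build wait for a human, or go straight to production? A minimal sketch, with illustrative stage names and build flags:

```python
def pipeline(build, stages, auto_deploy):
    """Run a build through test stages, then apply the release gate.

    Continuous delivery  -> auto_deploy=False: passing builds are marked
    releasable and a human triggers the production push.
    Continuous deployment -> auto_deploy=True: passing builds go to
    production automatically.
    """
    for stage in stages:
        if not stage(build):
            return "rejected"            # failed a test stage
    return "deployed" if auto_deploy else "ready-for-release"

# Hypothetical test stages reading flags on a build record:
unit = lambda b: b["tests_pass"]
integration = lambda b: b["integration_pass"]

good_build = {"tests_pass": True, "integration_pass": True}
print(pipeline(good_build, [unit, integration], auto_deploy=False))  # ready-for-release
print(pipeline(good_build, [unit, integration], auto_deploy=True))   # deployed
```

The only difference between the two models is the value of that one flag; everything upstream of the gate is identical.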
Best Practices: Continuous Deployments
Continuous Deployment Strategies
Blue/Green Or Red/Black
- This is another fail-safe process. In this method, two identical production
environments work in parallel. One is the currently running production
environment receiving all user traffic (depicted as Blue). The other is an idle
clone of it (Green). Both use the same database back end and app configuration.
- The new version of the application is deployed in the green environment and
tested for functionality and performance. Once the testing results are successful,
application traffic is routed from blue to green. Green then becomes the new
production.
- If there is an issue after green becomes live, traffic can be routed back to blue.
- In a blue-green deployment, both systems use the same persistence layer or
database back end. It's essential to keep the application data in sync, and a
mirrored database can help achieve that.
- You can use the primary database (used by blue) for write operations and the
secondary (used by green) for read operations. During the switchover from blue to
green, the database is failed over from primary to secondary. If green also needs
to write during testing, the databases can be kept in bidirectional replication.
- Once green becomes live, you can shut down or recycle the old blue instances. You
might deploy a newer version on those instances and make them the new green
for the next release.
- Blue-green deployments rely on traffic routing. This can be done by updating DNS
CNAMEs for hosts. However, long TTL values can delay these changes.
Alternatively, you can change the load balancer settings so the changes take effect
immediately. Features like connection draining in ELB can be used to serve
in-flight requests before old instances are retired.
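What makes blue-green rollback so fast is that the switchover is just one routing pointer being updated. The sketch below models that with a dictionary; real setups update DNS records or load-balancer target groups instead, and the environment and version names here are made up:

```python
class Router:
    """Routes all traffic to whichever environment is marked 'live'."""

    def __init__(self):
        self.environments = {"blue": "app-v1", "green": None}
        self.live = "blue"                  # blue serves production

    def deploy(self, env, version):
        self.environments[env] = version    # stage the new release

    def switch(self, env):
        self.live = env                     # atomic traffic cutover

    def handle_request(self):
        return self.environments[self.live]

router = Router()
router.deploy("green", "app-v2")   # deploy and test in green
router.switch("green")             # route traffic from blue to green
assert router.handle_request() == "app-v2"

router.switch("blue")              # instant rollback if green misbehaves
assert router.handle_request() == "app-v1"
```

Because the old environment stays intact until you recycle it, rolling back is the same one-line operation as rolling forward.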
Canary Deployment
- Canary deployment is like blue-green, except it's more risk-averse. Instead of
switching from blue to green in one step, you use a phased approach.
- With canary deployment, you deploy the new application code to a small part
of the production infrastructure. Once the application is signed off for
release, only a few users are routed to it. This minimizes any impact.
- With no errors reported, the new version can gradually roll out to the rest of
the infrastructure. The image below demonstrates canary deployment:
- The main challenge of canary deployment is devising a way to route some
users to the new application. Also, some applications may always need the
same group of users for testing, while others may require a different group each time.
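One common answer to the routing challenge above is to hash a stable user identifier into a bucket, so the same users consistently land on the canary while the rollout percentage is raised step by step. A sketch, with illustrative version names and percentages:

```python
import hashlib

def route(user_id, canary_percent):
    """Deterministically send a fixed slice of users to the canary.

    Hashing the user id (rather than choosing randomly per request)
    keeps each user pinned to the same version across requests.
    """
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return "canary" if bucket < canary_percent else "stable"

# The same user always gets the same answer:
assert route("user-42", 10) == route("user-42", 10)

# Roll out gradually by raising the percentage:
users = [f"user-{i}" for i in range(1000)]
share = sum(route(u, 10) == "canary" for u in users) / len(users)
print(f"roughly {share:.0%} of users on the canary")
```

Raising `canary_percent` from 10 to 50 to 100 is the "phased approach" in code: each step only adds users to the canary group, never reshuffles them.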
Best Practices: Continuous Deployment Pipeline
Monitoring your systems and environment is crucial to ensuring that your releases
behave as expected in production.
- In the production environment, the operations team leverages continuous
monitoring to validate if the environment is stable and that the applications do
what they’re supposed to do.
- Rather than monitoring only their systems, DevOps encourages teams to monitor
their applications too. With continuous monitoring in place, you can continuously
keep tabs on your application performance.
- The data thus gathered from monitoring application performance and issues can
be used to discover trends and also identify areas of improvement.
- In order to successfully implement CI/CD, monitoring is crucial. As a starting point,
organizations need both control of and visibility into their DevOps environment,
which they gain by collecting and instrumenting everything.
- Considering the amounts of data involved, this can be an insurmountable challenge
for organizations. Continuous monitoring of your entire DevOps life cycle helps
development and operations teams collaborate to optimize the user experience every
step of the way, leaving more time for your next big innovation.
As part of continuous monitoring, we should be doing the
following three things:
Logging:
- We use logging to represent state transformations within an application. When
things go wrong, we need logs to establish what change in state caused the problem.
- But the problem is that obtaining, transferring, storing, and parsing logs is
expensive. Because of this, it is crucial to log only what is necessary; only
information that can be acted upon should be stored. Log only actionable information.
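The "log only actionable information" rule maps naturally onto log levels: record state transitions that went wrong, and filter out routine noise at the source. A sketch using Python's standard logging module (the `payments` logger and `charge` function are made-up examples):

```python
import logging

# Drop anything below WARNING: only events someone can act on get stored.
logging.basicConfig(level=logging.WARNING, format="%(levelname)s %(message)s")
log = logging.getLogger("payments")

def charge(order_id, amount):
    log.debug("charging order %s", order_id)   # noise: filtered out at source
    if amount <= 0:
        # Actionable: a state transition failed; record exactly what was wrong.
        log.error("order %s rejected: invalid amount %s", order_id, amount)
        return False
    return True

charge("A-1", 25.00)   # logs nothing
charge("A-2", -5.00)   # logs one actionable ERROR with the bad state
```

Filtering at the logger keeps the cost of obtaining, transferring, and storing logs proportional to what you would actually act on.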
Tracing:
- Tracing is another major component of monitoring, and it's becoming even
more useful in microservice architectures. Guides on logging often cover tracing
as well, suggesting the use of correlation IDs for tracing transactions through
the different parts of your microservices architecture.
- When it comes to front-end tracing, you'll have to use browser tools. Google has
excellent documentation on how to trace performance issues in Chrome using
its DevTools suite.
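The correlation-ID approach amounts to minting one ID at the entry point and passing it through every downstream call, so logs from different services can be joined back into a single transaction. A minimal sketch; the service names and log store are invented for illustration:

```python
import uuid

trace_log = []   # stand-in for a centralized log store

def log_span(service, correlation_id, message):
    trace_log.append((correlation_id, service, message))

def checkout(order):
    # The correlation ID is minted once at the entry point...
    cid = str(uuid.uuid4())
    log_span("gateway", cid, f"received {order}")
    reserve_stock(order, cid)      # ...and propagated to every call
    take_payment(order, cid)
    return cid

def reserve_stock(order, cid):
    log_span("inventory", cid, f"reserved stock for {order}")

def take_payment(order, cid):
    log_span("payments", cid, f"charged for {order}")

cid = checkout("order-7")
# Filtering the combined logs by the ID reconstructs the transaction:
trace = [entry for entry in trace_log if entry[0] == cid]
print([service for _, service, _ in trace])   # ['gateway', 'inventory', 'payments']
```

In real systems the ID usually travels in an HTTP header rather than a function argument, but the principle is the same: one ID, stamped on every span of the transaction.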
Instrumentation & Monitoring:
- Instrumenting an application and monitoring the results is how we observe the use
of a system. It is most often used for diagnostic purposes; for example, a
monitoring system can alert developers when the system is not operating as expected.
- Instrumentation tends to be very cheap to compute. Metrics take nanoseconds
to update and some monitoring systems operate on a “pull” model, which
means that the service is not affected by monitoring load.
- Generally the more data you have, the more useful monitoring becomes.
- So typically you would want to instrument all of your services. But make sure
you pick a simple, scalable monitoring system.
- Alerting also matters: the monitoring system should be able to generate proper,
meaningful alerts.
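The cheap-to-compute, pull-model instrumentation described above can be sketched as a registry of in-process counters that a scraper reads on its own schedule, so the service never blocks on the monitoring system. The metric names below follow common convention but are otherwise illustrative:

```python
class Metrics:
    """Cheap in-process counters, read by a scraper on a pull model."""

    def __init__(self):
        self._values = {}

    def inc(self, name, amount=1):
        # Updating a counter is one dict addition: cheap enough to
        # instrument every single request.
        self._values[name] = self._values.get(name, 0) + amount

    def scrape(self):
        # The monitoring system calls this on ITS schedule (pull model),
        # so monitoring load never slows the service itself.
        return dict(self._values)

metrics = Metrics()

def handle_request(ok):
    metrics.inc("http_requests_total")
    if not ok:
        metrics.inc("http_errors_total")

for ok in [True, True, False, True]:
    handle_request(ok)

print(metrics.scrape())   # {'http_requests_total': 4, 'http_errors_total': 1}
```

An alerting rule is then just a condition over scraped values, e.g. fire when the error counter's share of total requests crosses a threshold.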
DevOps monitoring can be done at several different levels:
Application Performance Monitoring (APM):
This is the process of monitoring the backend architecture of an application to resolve
performance issues and bottlenecks on time.
The APM methodology works in three phases:
- Identifying the Problem: This phase involves proactively monitoring an
application for issues before a problem actually occurs. For this, a number of
tools are used to discover problems at the infrastructure and application level,
including user-experience monitoring and synthetic monitoring, wherein user
interactions are synthesized to unveil problems.
- Isolating the Problem: Once problems are identified, they must be isolated to
ensure that they do not impact the entire environment.
- Solving the Problem by Diagnosing the Cause: Once the problem is detected and
isolated, it is diagnosed at the code level to understand its cause and fix it.
Network Performance Monitoring:
It's the practice of consistently checking a network for deficiencies or failures
to ensure continued network performance. This may include monitoring network
components such as servers, routers, firewalls, etc. If any of these components
slows down or fails, network administrators are notified, ensuring that any
network outage is avoided.
Infrastructure Monitoring:
Infrastructure monitoring verifies the availability of IT infrastructure
components in a data center or on cloud infrastructure (IaaS). This involves
monitoring the resources and their availability, and checking for under-utilized
and over-utilized resources to optimize the IT infrastructure and the
operational cost associated with it.
Database Performance Monitoring:
- By monitoring the database, it is possible to track the performance, security,
backups, and file growth of the DB. The main goal of database monitoring is to
examine how a DB server, both hardware and software, is performing.
- This can include taking regular snapshots of performance indicators that help
determine the exact time at which a problem occurred. When DBAs can examine the
time when a problem occurred, they can figure out its possible cause as well.
API Monitoring:
API monitoring is the practice of examining an application's APIs, usually in a
production environment. It gives visibility into the performance, availability, and
functional correctness of APIs, which may include factors like the number of
hits on an API, where an API is called from, how an API responds to a user
request (time spent executing a transaction), and keeping track of poorly
performing APIs.
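Metrics like hit counts and time spent per transaction can be gathered with a small wrapper around each endpoint call. A sketch; the endpoint name, latency budget, and the `slow_endpoint` stand-in are all hypothetical:

```python
import time
from collections import defaultdict

stats = defaultdict(lambda: {"hits": 0, "total_seconds": 0.0})

def monitored(endpoint, call):
    """Record hit count and time spent executing each API transaction."""
    start = time.perf_counter()
    try:
        return call()
    finally:
        elapsed = time.perf_counter() - start
        stats[endpoint]["hits"] += 1
        stats[endpoint]["total_seconds"] += elapsed

def slow_endpoint():
    time.sleep(0.05)               # stand-in for real request handling
    return {"status": 200}

monitored("/orders", slow_endpoint)
monitored("/orders", slow_endpoint)

avg = stats["/orders"]["total_seconds"] / stats["/orders"]["hits"]
# Flag poorly performing APIs against a latency budget:
print("SLOW" if avg > 0.01 else "OK", f"avg={avg:.3f}s over {stats['/orders']['hits']} hits")
```

Comparing the average against a per-endpoint budget is one simple way to "keep track of poorly performing APIs" as described above.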
Continuous Feedback & Operations
- People often overlook continuous feedback in a DevOps pipeline, and it
doesn’t get as much limelight as the other components. However, continuous
feedback is equally valuable.
- In fact, the purpose of continuous feedback resonates very well with one of the
core DevOps goals: product improvement through customer and stakeholder feedback.
- Merely delivering your applications faster doesn’t equate to successful
business outcomes or increased end-user satisfaction. You’ll have to ensure
that you and your end users are on the same page with your releases.
- That’s exactly what continuous feedback can help you do, and that’s why it’s
an important DevOps component.
- Continuous operations is a relatively newer concept. Gartner defines continuous
operations as "those characteristics of a data-processing system that reduce or
eliminate the need for planned downtime, such as scheduled maintenance. One element
of 24-hour-a-day, seven-day-a-week operation."
- The goal of continuous operations is to effectively manage hardware as well as
software changes so that there's only minimal interruption to the end users. Setting
up continuous operations in your DevOps pipeline will cost you a lot. However,
considering the massive advantage that it brings to the table (minimizing core
systems' unavailability), shelling out that money will probably be justified in the
long run.
"Continuous Everything" serves as the backbone for modern-day DevOps.
By applying this methodology to the different phases of development and
operations within agile, your company can achieve its goals.
Contact Inventive-IT for a complimentary assessment to see how the
"Continuous Everything" philosophy can be applied to your business.
Schedule your Assessment Today