What is Cloud Native
Cloud-native is currently one of the biggest trends in the software industry. It has already changed the way we think about developing, deploying, and operating software products. But what exactly is “cloud native”? Cloud native is a lot more than just signing up with a cloud provider and using it to run your existing applications: it affects the design, implementation, deployment, and operation of your application. “Cloud-native” is an approach to building and running applications that exploits the advantages of the cloud computing delivery model. It is about how applications are created and deployed, not where, so it does not require that the apps live in the public cloud as opposed to an on-premises datacenter.
Developing cloud-native applications allows businesses to vastly improve their time-to-market and maximize business opportunities. Moving to the cloud not only helps businesses move faster, it also meets growing customer expectations that products and services be delivered via the cloud with high availability and reliability. The following are a few of the common motivations for moving to cloud-native application architectures:
- Speed - the ability to innovate, experiment, and deliver value more quickly than our competitors.
- Safety - the ability to move rapidly while maintaining stability, availability, and durability.
- Scale - the ability to elastically respond to changes in demand.
In the enterprise, the time it takes to provision new application environments and deploy new versions of software is typically measured in days, weeks, or months. This lack of speed severely limits the risk that can be taken on by any one release, because the cost of making and fixing a mistake is also measured on that same timescale. Internet companies are often cited for their practice of deploying hundreds of times per day. Why are frequent deployments important? If you can deploy hundreds of times per day, you can recover from mistakes almost instantly. If you can recover from mistakes almost instantly, you can take on more risk. If you can take on more risk, you can try wild experiments—the results might turn into your next competitive advantage.
Cloud-native application architectures balance the need to move rapidly with the needs of stability, availability, and durability. It’s possible and essential to have both. We’re not talking about mistake prevention, which has been the focus of many expensive hours of process engineering in the enterprise. Big design up front, exhaustive documentation, architectural review boards, and lengthy regression testing cycles all fly in the face of the speed that we’re seeking. Of course, all of these practices were created with good intentions. Unfortunately, none of them have provided consistently measurable improvements in the number of defects that make it into production.
Our architectures must provide us with the tools necessary to see failure when it happens. We need the ability to measure everything, establish a profile for “what’s normal,” detect deviations from the norm (including absolute values and rate of change), and identify the components contributing to those deviations. Feature-rich metrics, monitoring, alerting, and data visualization frameworks and tools are at the heart of all cloud-native application architectures.
Fault isolation
In order to limit the risk associated with failure, we need to limit the scope of components or features that could be affected by a failure. Monolithic application architectures often possess an undesirable failure mode: when one component fails, the entire application can fail with it. Cloud-native application architectures often employ microservices. By composing systems from microservices, we can limit the scope of a failure in any one microservice to just that microservice, but only if combined with fault tolerance.
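The “detect deviations from the norm” idea above can be sketched in a few lines. This is a minimal illustration, not a real monitoring stack; the window size, sigma threshold, and sample metric values are all assumptions chosen for the example:

```python
from collections import deque


class DeviationDetector:
    """Track a rolling baseline for one metric and flag deviations.

    A toy stand-in for the metrics/alerting tooling a real
    cloud-native platform would use; window and threshold are
    illustrative assumptions.
    """

    def __init__(self, window=10, max_sigma=3.0):
        self.samples = deque(maxlen=window)  # rolling "what's normal" profile
        self.max_sigma = max_sigma

    def observe(self, value):
        """Record a sample; return True if it deviates from the baseline."""
        if len(self.samples) >= 2:
            mean = sum(self.samples) / len(self.samples)
            var = sum((s - mean) ** 2 for s in self.samples) / len(self.samples)
            sigma = var ** 0.5
            deviant = sigma > 0 and abs(value - mean) > self.max_sigma * sigma
        else:
            deviant = False  # not enough history to judge yet
        self.samples.append(value)
        return deviant


if __name__ == "__main__":
    detector = DeviationDetector()
    # Hypothetical steady-state latency samples (ms): none should alert.
    normal = [detector.observe(v) for v in [100, 102, 99, 101, 100, 98]]
    # A spike well outside the norm should be flagged.
    spike = detector.observe(500)
    print(normal, spike)  # → [False, False, False, False, False, False] True
```

A production system would of course also track rate of change and per-component labels so that the deviating component can be identified, as the paragraph above describes.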
Let me explain why anyone should care about creating apps "this way" vs "past approaches" by setting some context. I think "past approaches" can be summed up as "get really good at predicting the future". This is the crux of the discussion. In the past we spent incredible time, energy, and money trying to "predict the future" :
- predict future business models
- predict what customers want
- predict the "best way" to design/architect a system
- predict how to keep applications from failing
- etc, etc.
Instead of trying to "get better at predicting the future", we want to build a system that "gets better at responding to change and unpredictability". This system is made up of people and is delivered through technology.
How do we predict what customers want? We don't; we run cheap experiments as quickly as we can by putting ideas/products/services in front of customers and measuring their impact.
How do we predict the best way to architect a system? We don't; we experiment within our organization and determine what fits best for its employees/capabilities/expectations by observation.
How do we predict how to keep applications from failing? We don't; we expect failure and architect for observability so we can quickly triage failures, restore service and leverage chaos testing to continuously prove this, and on and on.
In the software development world, everything used to be developed using a technique called “waterfall development.” That means lots of meetings to figure out every little bit, every part of the software, all the different things, and then going off for a year to build a giant software package.
The problem is, by the time you’ve built something, a year has passed and the whole thing is done, so if you need to make a fix, it’s a lot harder.
Agile software development is a set of methods that result in fast and frequent delivery of value to your customers. It promotes well planned, small iterations by highly collaborative cross-functional teams. Agile methodologies provide an alternative to the sequential development and long release cycles traditionally associated with waterfall.
Continuous Delivery, enabled by Agile product development practices, is about shipping small batches of software to production constantly, through automation. Continuous delivery makes the act of releasing dull and reliable, so organizations can deliver frequently, at less risk, and get feedback faster from end users.
Microservices is an architectural approach to developing an application as a collection of small services; each service implements a business capability, runs in its own process, and communicates via HTTP APIs or messaging. Each microservice can be deployed, upgraded, scaled, and restarted independently of the other services in the application, typically as part of an automated system, enabling frequent updates to live applications without impacting end customers.
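A microservice that owns one business capability and speaks HTTP can be very small. The sketch below uses only the Python standard library; the “product catalog” capability, its route, and its data are illustrative assumptions, not part of any real system:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# Hypothetical data owned entirely by this one "catalog" service.
PRODUCTS = {"1": {"name": "widget", "price": 9.99}}


class CatalogHandler(BaseHTTPRequestHandler):
    """Minimal HTTP API for a single business capability."""

    def do_GET(self):
        # Route: GET /products/<id>
        parts = self.path.strip("/").split("/")
        if len(parts) == 2 and parts[0] == "products" and parts[1] in PRODUCTS:
            body = json.dumps(PRODUCTS[parts[1]]).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # keep the demo output quiet


if __name__ == "__main__":
    # Port 0 asks the OS for any free port; a real deployment would
    # get its address from the platform's service discovery.
    server = HTTPServer(("127.0.0.1", 0), CatalogHandler)
    port = server.server_address[1]
    threading.Thread(target=server.serve_forever, daemon=True).start()
    with urlopen(f"http://127.0.0.1:{port}/products/1") as resp:
        print(resp.status, resp.read().decode())
    server.shutdown()
```

Because the service runs in its own process and exposes only an HTTP contract, it can be redeployed or scaled without touching any other service, which is exactly the independence the paragraph above describes.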
Containers offer both efficiency and speed compared with standard virtual machines (VMs). Using operating system (OS)-level virtualization, a single OS instance is dynamically divided among one or more isolated containers, each with a unique writable file system and resource quota. The low overhead of creating and destroying containers, combined with the high packing density in a single VM, makes containers an ideal compute vehicle for deploying individual microservices.
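Packaging one microservice per container typically looks something like the following hypothetical Dockerfile sketch; the base image, file names, and entry point are assumptions for illustration:

```dockerfile
# Hypothetical packaging of a single Python microservice.
FROM python:3.12-slim
WORKDIR /app

# Install dependencies first so this layer is cached between builds.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

# One process per container: just this service.
CMD ["python", "service.py"]
```

Each microservice gets its own image built this way, so it can be created, destroyed, and scheduled independently at the density the paragraph above describes.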
Architecture is the process by which we make software engineering decisions that have system-wide impact and are often expensive to reverse. Our architectural decision making directly impacts our success or failure. We can express that impact in three ways:
- Architectural decision making can enhance or detract from our ability to practice DevOps
- Architectural decision making can enhance or detract from our ability to practice Continuous Delivery
- Architectural decision making can exploit or waste the characteristics of Cloud Infrastructure
Adherence to sound architecture principles will help improve our ability to succeed.