Home
Cloud-native is currently one of the biggest trends in the software industry. It has already changed the way we think about developing, deploying, and operating software products. But what exactly is “cloud-native”? Cloud-native is a lot more than just signing up with a cloud provider and using it to run your existing applications. It affects the design, implementation, deployment, and operation of your application. Cloud-native is an approach to building and running applications that exploits the advantages of the cloud computing delivery model. It is about how applications are created and deployed, not only where they run, although in practice it usually implies that the apps live in the public cloud rather than in an on-premises datacenter.
Developing Cloud Native applications allows businesses to vastly improve their time to market and maximize business opportunities. Moving to the cloud not only helps businesses move faster, it also meets growing customer expectations that products and services be delivered via the cloud with high availability and reliability. The following are a few of the common motivations for moving to Cloud Native application architectures:
- Speed: the ability to innovate, experiment, and deliver value more quickly than our competitors.
- Safety: the ability to move rapidly while also maintaining stability, availability, and durability.
- Scale: the ability to respond elastically to changes in demand.
In the enterprise, the time it takes to provision new application environments and deploy new versions of software is typically measured in days, weeks, or months. This lack of speed severely limits the risk that can be taken on by any one release, because the cost of making and fixing a mistake is also measured on that same timescale. Internet companies are often cited for their practice of deploying hundreds of times per day. Why are frequent deployments important? If you can deploy hundreds of times per day, you can recover from mistakes almost instantly. If you can recover from mistakes almost instantly, you can take on more risk. If you can take on more risk, you can try wild experiments—the results might turn into your next competitive advantage.
Cloud-native application architectures balance the need to move rapidly with the needs of stability, availability, and durability. It’s possible and essential to have both. We’re not talking about mistake prevention, which has been the focus of many expensive hours of process engineering in the enterprise. Big design up front, exhaustive documentation, architectural review boards, and lengthy regression testing cycles all fly in the face of the speed that we’re seeking. Of course, all of these practices were created with good intentions. Unfortunately, none of them have provided consistently measurable improvements in the number of defects that make it into production.
Our architectures must provide us with the tools necessary to see failure when it happens. We need the ability to measure everything, establish a profile for “what’s normal,” detect deviations from the norm (including absolute values and rate of change), and identify the components contributing to those deviations. Feature-rich metrics, monitoring, alerting, and data visualization frameworks and tools are at the heart of all cloud-native application architectures. Fault isolation is just as important: in order to limit the risk associated with failure, we need to limit the scope of components or features that could be affected by a failure. Monolithic application architectures often lack this kind of isolation, because a failure in any one component can bring down the entire application. Cloud-native application architectures often employ microservices. By composing systems from microservices, we can limit the scope of a failure in any one microservice to just that microservice, but only if combined with fault tolerance.
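To make the combination of fault isolation and fault tolerance concrete, here is a minimal, hypothetical sketch in Java: a call to a downstream recommendations microservice is wrapped with a timeout and a fallback, so a failure in that one service degrades a single feature rather than the whole application. The `RecommendationClient` interface and the fallback values are assumptions made up for illustration, not part of any particular framework.

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

// Hypothetical client for a downstream "recommendations" microservice.
interface RecommendationClient {
    List<String> fetchRecommendations(String customerId);
}

public class RecommendationGateway {
    private final RecommendationClient client;

    public RecommendationGateway(RecommendationClient client) {
        this.client = client;
    }

    /**
     * Calls the recommendations service with a short timeout and falls back to a
     * static list if the service is slow or failing, so a problem in that one
     * service cannot take the rest of the application down with it.
     */
    public List<String> recommendationsFor(String customerId) {
        List<String> fallback = List.of("bestseller-1", "bestseller-2"); // illustrative fallback
        return CompletableFuture
                .supplyAsync(() -> client.fetchRecommendations(customerId))
                .completeOnTimeout(fallback, 200, TimeUnit.MILLISECONDS)  // tolerate slowness
                .exceptionally(failure -> fallback)                       // tolerate errors
                .join();
    }
}
```

In practice a dedicated resilience library would add retries, circuit breaking, and metrics, but the principle is the same: contain the blast radius of a failing dependency.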
TBD (overview of monoliths, difficulties in provisioning/configuring servers)
MVP
Continuous Delivery, enabled by Agile product development practices, is about shipping small batches of software to production constantly, through automation. Continuous delivery makes the act of releasing dull and reliable, so organizations can deliver frequently, at less risk, and get feedback faster from end users.
Microservices is an architectural approach to developing an application as a collection of small services; each service implements a business capability, runs in its own process, and communicates via HTTP APIs or messaging. Each microservice can be deployed, upgraded, scaled, and restarted independently of the other services in the application, typically as part of an automated system, enabling frequent updates to live applications without impacting end customers.
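As a rough sketch of what “a small service that implements one business capability, runs in its own process, and communicates via an HTTP API” can look like, the example below uses the JDK’s built-in `com.sun.net.httpserver.HttpServer`; the service name, endpoint, and port are illustrative choices, not a recommendation of any particular framework.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

// A self-contained "greeting" microservice: one business capability,
// its own process, communicating over a plain HTTP API.
public class GreetingService {
    public static void main(String[] args) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/greetings", exchange -> {
            byte[] body = "{\"message\":\"hello from the greeting service\"}"
                    .getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body);
            }
        });
        server.start(); // deployed, upgraded, and restarted independently of other services
    }
}
```

Each such service can be built, deployed, and restarted on its own, which is what enables the independent lifecycle described above.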
Containers offer both efficiency and speed compared with standard virtual machines (VMs). Using operating system (OS)-level virtualization, a single OS instance is dynamically divided among one or more isolated containers, each with a unique writable file system and resource quota. The low overhead of creating and destroying containers, combined with the high packing density in a single VM, makes containers an ideal compute vehicle for deploying individual microservices.
TBD (remind audience that just because we are developing or the cloud, the principles of good software design still apply. reinforce that OOP is still applicable for the cloud and that the knowledge developers gained in the past is most likely still valuable)
Cloud Native Architectures enhance our ability to practice DevOps and Continuous Delivery, and they exploit the characteristics of Cloud Infrastructure. I define Cloud Native Architectures as having the following six qualities:
- Modularity (via Microservices)
- Observability
- Deployability
- Testability
- Disposability
- Replaceability
There are a lot of challenges we could face when designing and developing Cloud Native applications, so it is important to follow certain principles while developing them. Heroku developed the Twelve-Factor App, essentially a manifesto describing the rules and guidelines that need to be followed to build a Cloud Native application. These factors serve as an excellent introduction to the discipline of building and deploying applications in the cloud. The Twelve-Factor App describes 12 design principles used in Cloud Native application architectures. We will see a brief summary of these principles here; to see more, visit the 12factor.net site. The 12-factor app methodology addresses the following challenges:
- How can you fail fast?
- How can you get everything you need ready so that, at the push of a button, you and your team can get your work done?
- How do you take that code and, through a repeatable process, make it available for production?
- How can you automate that process with continuous integration, continuous deployment, and sometimes even continuous delivery?
- How do you get people to change methodology, and what processes do they need to define to do the work?
The twelve factors, in brief, are:
- Factor 1: Codebase. Build on top of one codebase, fully tracked in revision control, with many deployments. Deployments should be automated, so everything can run in different environments without extra work.
- Factor 2: Isolated Dependencies. The second factor is about explicitly declaring and isolating dependencies: the app is standalone and needs to install its dependencies, which is why you declare what you need in the code. “Your development, your QA and your production have to be identical for 12-factor apps to actually work because when you scale for web, you can’t have any room for error.”
- Factor 3: Config. Store your configuration in the environment. This factor focuses on how you store configuration values: the database Uniform Resource Identifier (URI), for example, will be different in development, QA, and production (a minimal sketch of this factor and Factor 11 follows the list).
- Factor 4: Backing Services. Treat backing services like attached resources, because you may want different databases depending on which team you’re on. Sometimes dev will want a lot of logs, while QA will want fewer. With this approach, even each developer can have their own config file.
- Factor 5: Build, Release, Run. Strictly separate the build and run stages, making sure everything has the right libraries. Then you put everything together into something that can be released, installed in the environment, and run. “As a developer, I want to be able to develop something that is this nice, tightly deliverable object, thing, that I can give to Ops. Ops then can put it in that environment and run it. And ideally, I will never hear from Ops again, because you’ve made it a well-defined solution” with few dependencies.
- Factor 6: Stateless Processes. Make sure state is stored in a backing store, so you are able to scale out and do what you need to do. The app executes as one or more stateless processes; as you scale up and out, you don’t want state that you need to pass along.
- Factor 7: Port Binding. Exporting services via port binding allows your internal customers to access your endpoints without traversing security. You are then able to access your app directly via a port, which helps you tell whether it’s your app or another point in the stack that isn’t working properly.
- Factor 8: Concurrency. Small, well-defined apps allow scaling out as needed to handle varying loads. Break your app into much smaller pieces and then look for services that you either have to write or can consume.
- Factor 9: Disposability. Make sure changes can take effect very quickly. It’s about being able to start up and shut down fast, and to handle a crash.
- Factor 10: Dev-Prod Parity. Development, staging, and production should be as similar as possible. Continuous deployment needs continuous integration based on matching environments to limit deviation and errors. If you keep dev, staging, and production as similar as possible, anyone can understand and release the app. This is of course simply good development practice, but it also enables a lot more scalability.
- Factor 11: Logs. Your app only worries about producing its logs as an event stream; the execution environment then decides, based on configuration, where that stream is published.
- Factor 12: Admin Processes. Your admin tools ship with the product. For example, do not go messing with the database directly; instead, use the tooling you built alongside your app to check the database. This also means that the privileges are the same across your system, with no more special cases that put your security at risk.
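To ground a couple of these factors in code, here is the minimal sketch referenced from Factor 3 above: it stores config in the environment (Factor 3) and treats logs as an event stream written to stdout (Factor 11). The `DATABASE_URI` and `PORT` variable names are assumptions for illustration only.

```java
// A minimal sketch of Factor 3 (store config in the environment) and
// Factor 11 (treat logs as an event stream written to stdout).
// The variable names DATABASE_URI and PORT are illustrative, not prescribed.
public class AppConfig {
    public static void main(String[] args) {
        // Factor 3: the same build reads different values in dev, QA, and production
        // because the configuration lives in the environment, not in the code.
        String databaseUri = System.getenv()
                .getOrDefault("DATABASE_URI", "jdbc:postgresql://localhost:5432/dev");
        int port = Integer.parseInt(System.getenv().getOrDefault("PORT", "8080"));

        // Factor 11: the app just emits events to stdout; the platform decides
        // where they are aggregated and published.
        System.out.println("event=startup database_uri=" + databaseUri + " port=" + port);

        // Factor 7 would have this self-contained process bind an HTTP listener to `port`;
        // Factor 6 means any state lives behind `databaseUri`, not in this process.
    }
}
```

Because the values come from the environment, the same build artifact can be promoted unchanged from dev to QA to production, which also supports Factors 5 and 10.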
- MASA is a new architectural model introduced by Gartner; the name stands for Mesh App and Service Architecture. It reflects what has emerged over the last five years as organizations have experimented with several digital projects.
- In the race to digital transformation, MASA focuses on enabling rich, fluid and dynamic connections of people, processes, services, content, devices and things. It enables the digital business technology platform within organizations.
- The MASA mesh is based on a multidimensional model in which an application is an interconnected mesh of independent and autonomous apps and services; hence the name. It often incorporates functionality from other applications to create its own functionality, which is in turn shared with external systems via APIs.
- A MASA application covers a specific process or activity. It is made of several apps and services. Each app serves the need of a specific user persona within the process/activity. For example, if you take the Uber application, there are two different apps: “Uber for travellers” and “Uber for drivers”. We should also consider how the different channels – such as mobile, connected things, social media, partners … – interact with those apps and contribute to the global application.
The SOLID principles are not rules. They are not laws. They are not perfect truths. They are statements on the order of ‘An apple a day keeps the doctor away.’ That is a good principle and good advice, but it is not a pure truth, nor is it a rule.
Classes are one of the most fundamental building blocks of modern application development and the foundation of object-oriented design (OOD). Classes consist of both state, exposed through fields and properties, and logic, exposed through methods. Applications that adhere to the single responsibility principle consist of many small classes, each of which has only one responsibility or reason to change, that are used collectively to build higher-level features. Having more, smaller, focused classes makes applications easier to maintain and test.
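As a concrete, hedged illustration of the single responsibility principle, the sketch below separates an invoice calculation from invoice persistence so that each small class has exactly one reason to change; the class and method names are invented for the example.

```java
import java.util.List;

// Each class below has a single responsibility and therefore a single reason to change.

// Responsibility 1: the business calculation.
class InvoiceCalculator {
    double total(List<Double> lineItemPrices) {
        return lineItemPrices.stream().mapToDouble(Double::doubleValue).sum();
    }
}

// Responsibility 2: persistence. Changing the storage mechanism never touches the calculation.
class InvoiceRepository {
    void save(String invoiceId, double total) {
        // e.g. write to a database or another backing service
        System.out.printf("saving invoice %s with total %.2f%n", invoiceId, total);
    }
}

// A higher-level feature composed from the small, focused classes above.
class BillingService {
    private final InvoiceCalculator calculator = new InvoiceCalculator();
    private final InvoiceRepository repository = new InvoiceRepository();

    void bill(String invoiceId, List<Double> lineItemPrices) {
        repository.save(invoiceId, calculator.total(lineItemPrices));
    }
}
```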
When a developer makes a change to an existing codebase, there’s always a risk of introducing new bugs. This isn’t a criticism of the developer’s abilities but merely a reality of software development. As an application continues to grow, the possibility of inadvertently introducing unexpected behavior increases with even the most minor change. The open/closed principle attempts to mitigate this side effect by favoring extension of existing software entities over modification. This principle encourages developers to treat extensibility as a first-class citizen and isolate areas of probable change when writing code. When combined with judicious test coverage and continuous integration, the open/closed principle can significantly increase overall application stability and enable shorter release cycles by reducing the need for extensive regression testing. The goal of the open/closed principle is not to prevent changes completely but to limit them by emphasizing the importance of extensibility.
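The sketch below illustrates the open/closed principle under the same caveat: it is an invented example, not a prescribed design. New discount rules are added by writing new implementations of an existing abstraction, while the code that applies the rules never needs to be modified.

```java
import java.util.List;

// Abstraction that is closed for modification but open for extension.
interface DiscountRule {
    double apply(double orderTotal);
}

// Existing behavior: untouched when new rules appear.
class SeasonalDiscount implements DiscountRule {
    public double apply(double orderTotal) {
        return orderTotal * 0.90; // 10% off
    }
}

// New behavior added by extension, not by editing existing classes.
class LoyaltyDiscount implements DiscountRule {
    public double apply(double orderTotal) {
        return orderTotal - 5.00; // flat reward
    }
}

// This class never needs to change when a new DiscountRule is introduced.
class CheckoutService {
    double finalTotal(double orderTotal, List<DiscountRule> rules) {
        double total = orderTotal;
        for (DiscountRule rule : rules) {
            total = rule.apply(total);
        }
        return Math.max(total, 0.0);
    }
}
```

Adding, say, a hypothetical BulkOrderDiscount later means writing one new class and its tests; CheckoutService and its existing tests stay untouched, which is exactly the reduced regression surface described above.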
“Keep It Simple, Stupid!” – I would add some extra exclamation marks (!!!!) to try to keep this in your mind. The simpler your code is, the simpler it will be to maintain in the future, and of course, if other people see it, they will thank you for it. The KISS principle was coined by Kelly Johnson, and it states that most systems work best if they are kept simple rather than made complex; therefore simplicity should be a key goal in design, and unnecessary complexity should be avoided. My advice is to avoid using fancy features of the programming language you’re working with just because the language lets you use them. This is not to say that you should never use those features, but use them only when there are perceptible benefits to the problem you’re solving. From https://itexico.com/blog/bid/99765/software-development-kiss-yagni-dry-3-principles-to-simplify-your-life
Don’t Repeat Yourself – How many times have you seen similar code in different parts of a system? This principle was formulated by Andrew Hunt and David Thomas in their book The Pragmatic Programmer: every piece of knowledge must have a single, unambiguous, authoritative representation within a system. In other words, you should try to keep the behavior of each piece of functionality in a single piece of code. On the other hand, when the DRY principle is violated, the result is called a WET solution, which stands for either Write Everything Twice or We Enjoy Typing. This principle is very useful, especially in big applications that are constantly maintained, changed, and extended by a lot of programmers. But you also should not abuse it by DRYing everything you do; remember the KISS and YAGNI principles first. From https://itexico.com/blog/bid/99765/software-development-kiss-yagni-dry-3-principles-to-simplify-your-life
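As a small, hedged illustration of DRY, the sketch below moves a validation rule that would otherwise be copy-pasted into two services into a single authoritative method; the rule itself is deliberately simplistic and only meant to show where the knowledge lives.

```java
// Before: the same email rule was duplicated in the signup and the profile-update code.
// After: one authoritative representation of that piece of knowledge.
final class EmailRules {
    private EmailRules() {}

    // Deliberately simplistic rule, for illustration only.
    static boolean isValid(String email) {
        return email != null && email.contains("@") && !email.startsWith("@");
    }
}

class SignupService {
    boolean register(String email) {
        return EmailRules.isValid(email); // reuses the single rule
    }
}

class ProfileService {
    boolean updateEmail(String email) {
        return EmailRules.isValid(email); // same rule, not a second copy
    }
}
```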
“You Aren’t Gonna Need It” – Sometimes, as developers, we try to think far ahead into the future of the project, coding extra features “just in case we need them” or because “we will eventually need them”. Just one word: wrong! I’ll repeat it this way: you didn’t need it, you don’t need it, and in most cases “You Aren’t Gonna Need It”. YAGNI is the principle behind the extreme programming (XP) practice of “Do the Simplest Thing That Could Possibly Work”. Even though this principle comes from XP, it is applicable to all kinds of methodologies and development processes. When you feel an unexplained urge to code some extra features that are not necessary at the moment but that you think will be useful in the future, just calm down and look at all the pending work you have right now. You can’t waste time coding features that you may later need to correct or change because they don’t fit what is needed, or that, in the worst case, will never be used. From https://itexico.com/blog/bid/99765/software-development-kiss-yagni-dry-3-principles-to-simplify-your-life
A key principle of software development and architecture is the notion of separation of concerns. At a low level, this principle is closely related to the Single Responsibility Principle of object-oriented programming. The general idea is that one should avoid co-locating different concerns within the design or code. For instance, if your application includes business logic for identifying certain noteworthy items to display to the user, and your application formats such items in a certain way to make them more noticeable, it would violate separation of concerns if both the logic for determining which items were noteworthy and the formatting of those items were in the same place. The design would be more maintainable, less tightly coupled, and less likely to violate the Don’t Repeat Yourself principle if the logic for determining which items needed formatting were located in a single location (with the other business logic) and were exposed to the user interface code responsible for formatting simply as a property.
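Building on the noteworthy-items example above, here is a hedged sketch of that separation: the business rule that decides which items are noteworthy lives in the domain class and is exposed as a simple property, while the presentation code decides only how such items are displayed. The classes and the price threshold are illustrative assumptions.

```java
import java.util.List;
import java.util.stream.Collectors;

// Domain layer: decides *which* items are noteworthy (illustrative rule: price over 100).
class Item {
    final String name;
    final double price;

    Item(String name, double price) {
        this.name = name;
        this.price = price;
    }

    // The business rule exposed as a simple property for the UI to consume.
    boolean isNoteworthy() {
        return price > 100.0;
    }
}

// Presentation layer: decides only *how* noteworthy items look.
class ItemListRenderer {
    String render(List<Item> items) {
        return items.stream()
                .map(item -> item.isNoteworthy()
                        ? "**" + item.name + "**"   // highlight noteworthy items
                        : item.name)
                .collect(Collectors.joining("\n"));
    }
}
```

Because the noteworthiness rule lives in one place, changing it never requires touching the renderer, and a second UI (say, a report) can reuse the same property without duplicating the logic.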