15 Essentials for Cloud Native Apps

Reading notes on the original article.

One Codebase, One Application

A cloud native application must always consist of a single codebase tracked in a version control system. A codebase is a source code repository, or a set of repositories that share a common root.

A single codebase for an application is used to produce any number of immutable releases, and those releases are destined for different environments. Following this discipline forces teams to analyze the seams of their application and potentially identify monoliths that should be split into microservices. If you have multiple codebases, you have a system that needs to be decomposed, not a single application.

This rule is violated when a single codebase is used to produce multiple applications: for example, a codebase with multiple launch scripts, or even multiple points of execution within a single wrapper module.

Multiple applications in a single codebase often indicate that multiple teams are maintaining one codebase, which can get ugly for a variety of reasons.

In other words, *one codebase, one application* does not mean you're not allowed to share code across multiple applications. It just means that the shared code is itself another codebase.

This also doesn't mean that all shared code must be a microservice. Rather, you should evaluate whether the shared code should be treated as a separately released product.

API First

Assume you have fully embraced all of the other factors discussed in this book. You are building cloud native applications: once code is checked into the repository, tests run automatically, and release candidates can be running in a lab environment within minutes.

Now another team in your organization starts building services that interact with your code. Then another team sees what's happening and joins in, bringing services of their own. Soon you have multiple teams building services with horizontal dependencies, all on different release cadences.

Without discipline, this can become a nightmare of integration failures. To avoid these failures, and to formally recognize your API as a first-class artifact of the development process, *API first* lets teams collaborate against one another's public contracts without interfering with internal development processes.

The concept of *mobile first* has been getting more and more attention. It means that from the very start of a project, everything you do revolves around the idea that what you are building is a product that mobile devices will consume. Similarly, *API first* means that what you are building is an API to be consumed by client applications and services.

By designing your API first, you can work it out with your stakeholders (your internal team, customers, or other teams in your organization that may want to consume your API) before you have coded yourself past the point of no return. With that collaboration in place, you can build user stories, mock the API, and generate documentation that can be used to further socialize the intent and functionality of the service you're building.

All of this can be done to vet (and test!) your direction and plans without having to invest too much in the pipeline that supports a given API.

This pattern is an extension of the contract-first development pattern, in which developers concentrate on building the edges or seams of their application first. Continuously testing those integration points through the CI server allows both teams to use their own services while still maintaining reasonable assurance that everything works together.

*API first* frees organizations from waterfalls and deliberately engineered systems that follow pre-planned orchestration patterns, and allows products to evolve into organic, self-organizing ecosystems that can grow to respond to new and unforeseen demands.
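As a sketch of what contract-first collaboration can look like in practice (the types and endpoints below are invented for illustration), the public contract can be published as its own small, versioned artifact that both the providing team and its consumers build against:

```kotlin
// A hypothetical public contract, published as its own versioned artifact.
// Producers implement this interface; consumers mock it in their tests.
// Internal implementations on either side can change freely without
// breaking the integration, as long as the contract holds.
data class Customer(val id: String, val name: String, val email: String)

interface CustomerApi {
    /** Corresponds to GET /customers/{id}; returns null when not found. */
    fun getCustomer(id: String): Customer?

    /** Corresponds to POST /customers. */
    fun createCustomer(name: String, email: String): Customer
}
```

A CI job that runs the consumers' contract tests against every new build of the provider keeps both sides honest between releases.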

Dependency Management

The cloud is the maturation of the classic enterprise model, and as such our applications need to *grow up* to take advantage of it. Applications can't assume that a server or application container will have everything they need. Instead, applications need to bring their dependencies with them.

Most contemporary programming languages have some facility for managing application dependencies. Maven and Gradle are two of the most popular tools in the Java world. Regardless of which tool is used, these utilities provide a common set of features: they let developers declare dependencies, and they make the tool responsible for ensuring those dependencies are satisfied.

Many of these tools also have the ability to isolate dependencies. This is done by analyzing the declared dependencies and bundling (also called *vendoring*) those dependencies into some sub-structure beneath or within the application artifact itself.
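For example, here is a minimal Gradle build script (Kotlin DSL); the coordinates and versions are illustrative, but the shape is what matters: dependencies are declared, and the tool resolves and bundles them rather than assuming they exist on the target server.

```kotlin
// build.gradle.kts -- a minimal sketch; coordinates and versions are illustrative.
plugins {
    java
}

repositories {
    mavenCentral() // dependencies are resolved from a repository, not found on the host
}

dependencies {
    // Declared here, resolved by Gradle, and vendored into the build artifact.
    implementation("com.fasterxml.jackson.core:jackson-databind:2.15.2")
    testImplementation("org.junit.jupiter:junit-jupiter:5.10.0")
}
```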

Failing to properly isolate dependencies can lead to untold problems. Among the most common dependency-related issues: a developer builds against version X of a library on their workstation, while version X+1 of that library is installed in a central location in production. This can cause everything from outright runtime failures all the way down to subtle, insidious faults that are difficult to diagnose. Left unaddressed, these kinds of failures can bring down an entire server or cost a company millions through undiagnosable data corruption.

Properly managing an application's dependencies is all about the concept of repeatable deployments: nothing about the runtime into which an application is deployed should be assumed unless it is automated. Ideally, the application's container is bundled (or bootstrapped, as some frameworks call it) inside the application's release artifact, or better yet, the application has no container at all.

Design, Build, Release, Run

*Build, release, run* demands a strict separation between the build and run stages of development. This is excellent advice, and failing to heed it can get you into trouble down the road. In addition to the *build, release, run* trio from the original twelve factors, a discrete *design* step is also critical.

The process of going from design to code to run is an iterative one that can happen in as small or as large an amount of time as your team can handle. If the team has a mature CI/CD pipeline, going from design to running in production can take a matter of minutes.

A single codebase is used in the build process to produce a compiled artifact. This artifact is then combined with configuration information that is *external* to the application to produce an *immutable release*. The immutable release is then delivered to a cloud environment (development, QA, production, etc.) and run. The point of this chapter is that each of these deployment stages is isolated and happens separately.

Design

In the world of waterfall application development, we spend a lot of time designing applications before writing a single line of code. This type of software development lifecycle does not adapt well to the needs of modern applications that need to be released frequently.

However, this doesn't mean we don't design at all. Instead, it means we design small releasable features while maintaining a high-level design that informs everything we do. But we also know that designs change, so a small amount of design is part of *each iteration* rather than all of it happening up front.

Application developers understand their application's dependencies best, and the design stage is where they declare those dependencies and decide how they will be vendored, or bundled, into the application. In other words, the developer decides which libraries the application will use and how those libraries will be bundled into an immutable release.

Build

During the build stage, the codebase is converted into a versioned, binary artifact. During this stage, the dependencies declared at design time are fetched and bundled into the build artifact (often referred to simply as a “build”). In the Java world, a build might be a WAR or JAR file; in other languages and frameworks it might be a ZIP file or a binary executable.

Ideally, a build is created by a continuous integration server, and there is a one-to-many relationship between builds and deployments: a single build should be able to be released or deployed to any number of environments, and that same unmodified build should work as expected in each of them. The immutability of this artifact, together with adherence to the other factors (especially *environment parity*), gives you the confidence that if your application works in QA, it will work in production.

**If you find yourself troubleshooting “works on my machine” issues, that is a clear sign that the four stages of this process are likely not as isolated as they should be.**

Release

In the cloud native world, a release is typically created by pushing to your cloud environment. The output of the build stage is combined with environment-specific and application-specific configuration information to produce another immutable artifact: the *release*.

Releases need to be unique, and ideally each release should be tagged with some kind of unique ID, such as a timestamp or an auto-incrementing number.

Say your CI system has just built your application and labeled that artifact build-1234. The CI system can then release it to the development, staging, and production environments. The scheme is up to you, but each release should be unique, because each one combines the original build with *environment-specific* configuration settings.
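A minimal sketch of that relationship (the types here are invented purely for illustration): the same immutable build is paired with different environment-specific configuration, and each pairing gets its own unique release ID.

```kotlin
// Illustrative types only: one immutable build, many uniquely labeled releases.
data class Build(val id: String, val artifactUri: String)

data class Release(
    val releaseId: String,           // unique label, e.g. "build-1234-staging-1704067200"
    val build: Build,                // the unmodified build artifact
    val environment: String,         // "dev", "staging", "production", ...
    val config: Map<String, String>, // environment-specific settings
)

fun cutRelease(build: Build, env: String, config: Map<String, String>): Release =
    Release("${build.id}-$env-${System.currentTimeMillis()}", build, env, config)
```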

If something goes wrong, you want the ability to audit what you have released to each environment and, if necessary, roll back to a previous release. This is another key reason for keeping releases both immutable and uniquely identified.

Run

The run stage is also typically handled by the cloud provider (although developers need to be able to run applications locally). The details vary among providers, but the common pattern is to place your application within some kind of container (Docker, Garden, Warden, etc.) and then issue a command to start your application's process.

It's worth noting that making sure developers can run an application locally on their own workstations, while still allowing it to be deployed to multiple clouds via a CD pipeline, is often a difficult problem to solve. But it is worth solving, because developers need to remain unhindered when building cloud native applications.

When the application is running, the cloud runtime will be responsible for keeping it running, monitoring its health and aggregating its logs, as well as a large number of other management tasks, such as dynamic scaling and fault tolerance.

Configuration, Credentials, and Code

Configuration chemistry

Think of configuration, credentials, and code as volatile substances that explode when combined.

This may sound a bit harsh, but failing to follow this rule will likely cause you untold frustration that will only escalate the closer your application gets to production.

To keep configuration separate from code and credentials, we need a very clear definition of configuration. Configuration is any value that can vary across deployments (for example, developer workstation, QA, and production). This can include:

  • URLs and other information about backing services, such as web services and SMTP servers
  • Information needed to locate and connect to databases
  • Credentials for third-party services such as Amazon AWS, or for APIs like Google Maps, Twitter, and Facebook
  • Information that might normally be bundled in properties files or in XML or YAML configuration

Configuration does not include internal values that are part of the application itself. Likewise, if a value stays the same across all deployments, it belongs in your immutable build artifact and is not configuration.

Credentials are extremely sensitive information and have absolutely no business in a codebase. Often, developers will extract credentials from the compiled source code and put them in properties files or XML configuration, but this doesn't actually solve the problem. Bundled resources, including XML and properties files, are still part of the codebase. This means credentials bundled in resource files that ship with your application still violate this rule.

Treat your application like open source

A litmus test for whether you have properly externalized credentials and configuration is to imagine the consequences of your application's source code being pushed to GitHub.

If the public had access to your code, would you have exposed sensitive information about the resources or services your application relies on? Could people see internal URLs, credentials for backing services, or other information that is sensitive or irrelevant to anyone who doesn't work in your target environments?

If you can open source your codebase without exposing sensitive or environment-specific information, you've probably done a good job of isolating your code, configuration, and credentials.

Obviously we don't want to expose credentials, but the need to externalize configuration is often less apparent. Externalized configuration supports our ability to deploy immutable releases to multiple environments *automatically* via CD pipelines, and it helps us maintain development/production environment parity.

Externalized configuration

It's one thing to say your application's configuration should be *externalized*, but it's quite another to actually do it. If you're building a Java application, you might bundle properties files into your release artifact. Other types of applications and languages tend to favor YAML files, while .NET applications traditionally get their configuration from XML-based *web.config* and *machine.config* files.

You should consider *all* of these to be *anti-patterns* for the cloud. All of these scenarios make it impossible to vary configuration across environments while still keeping your release artifact the same.

The brute-force approach to externalizing configuration is to rip out all of the configuration files, then go through the codebase and refactor it so that all of those values are expected to come from environment variables. Environment variables are considered a best practice for externalized configuration, especially on cloud platforms like Cloud Foundry or Heroku.
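In code, that refactoring usually ends up looking something like the following sketch (the variable names are illustrative): read each value from the environment, and fail fast when a required one is missing.

```kotlin
// Configuration comes from the environment, not from bundled files.
// Variable names are illustrative; fail fast when a required value is absent.
fun requiredEnv(name: String): String =
    System.getenv(name) ?: error("Missing required environment variable: $name")

val databaseUrl = requiredEnv("DATABASE_URL")
val smtpHost = System.getenv("SMTP_HOST") ?: "localhost" // optional, with a local default
```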

Depending on your cloud provider, you may be able to use its tools to manage *backing services* and *service bindings*, which expose structured environment variables containing service credentials and URLs to your application in a secure manner.

Another highly recommended way to externalize configuration is to use a server product designed to expose configuration. One such open source server is Spring Cloud Config Server, but there are countless others. One thing to watch for when shopping for a configuration server is support for version control: if you're externalizing your configuration, you should be able to audit changes to that data and see a history of who changed what, and when. It is this requirement that makes a configuration server that sits on top of a version control repository (such as *git*) so appealing.

Logs

Logs should be treated as *event streams*; that is, a log is a sequence of events emitted from an application, ordered in time. The key to dealing with logs the cloud native way is that a truly cloud native application never routes or stores its own output stream.

A cloud application can make no assumptions about the file system on which it runs, other than that it is ephemeral. A cloud native application writes all of its log entries to stdout and stderr. This scares a lot of people, who fear it means losing control.
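In practice this is as simple as writing one event per line to standard output and letting the platform capture the stream; a minimal sketch:

```kotlin
import java.time.Instant

// Emit one structured event per line on stdout. Where the stream goes --
// aggregation, storage, analysis -- is the platform's concern, not the app's.
fun logEvent(level: String, message: String) {
    println("""{"ts":"${Instant.now()}","level":"$level","msg":"$message"}""")
}

fun main() {
    logEvent("INFO", "order accepted")
    // Errors can go to stderr the same way:
    System.err.println("""{"ts":"${Instant.now()}","level":"ERROR","msg":"payment gateway timeout"}""")
}
```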

You should consider the aggregation, processing, and storage of logs a non-functional requirement that is satisfied not by your application, but by your cloud provider or some other tool suite running alongside the platform.

When your application is decoupled from any knowledge of log storage, processing, and analysis, your code becomes simpler, and you can rely on industry-standard tools and stacks to deal with logs. Moreover, if you need to change the way logs are stored and processed, you can do so *without modifying the application*.

One of the many reasons your application should not control the ultimate fate of its logs is elastic scaling. When you have a fixed number of instances on a fixed number of servers, storing logs on disk seems to make sense. But when your application can dynamically go from 1 running instance to 100, and you have no idea *where* those instances are running, you need your cloud provider to aggregate those logs on your behalf.

Simplifying your application's log emission process lets you shrink your codebase and focus more on your application's core business value.

Disposability

On a cloud instance, an application's life is as ephemeral as the infrastructure that supports it. A cloud native application's processes are *disposable*, which means they can be started or stopped rapidly. **An application cannot scale, deploy, release, or recover rapidly if it cannot start rapidly and shut down gracefully.** We need to build applications that not only recognize this, but also *embrace* it, so they take full advantage of the platform.

**If an application takes minutes to reach a steady state, in today's high-traffic world that could mean turning away hundreds or thousands of requests while the application spins up.** Worse, depending on the platform on which the application is deployed, such a slow startup time might actually trigger alerts or warnings as the application fails its health check. Extremely slow startup times can even prevent your application from starting in the cloud at all.

If your application is under increasing load and you need to rapidly start more instances to handle that load, any delay in startup hampers its ability to cope. If the application doesn't shut down quickly and gracefully, that can also impede the ability to restart it after a failure, and shutting down without properly disposing of resources risks corrupting data.

Many applications are written so that they perform a number of long-running activities during startup, such as fetching data to populate a cache or preparing other runtime dependencies. To truly embrace cloud native architecture, this kind of activity needs to be handled separately. For example, **you could externalize the cache as a *backing service*, so your application can come up and go down rapidly without performing any front-loaded operations**.
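As a sketch of that idea, assuming a Redis backing service and a recent Jedis 4.x client (where setex takes a long TTL): because the cache lives outside the process, a new instance starts instantly and populates entries lazily, on first use.

```kotlin
import redis.clients.jedis.Jedis

// The cache is a backing service rather than in-process state, so instances
// start without a warm-up phase and fill entries on demand.
fun cachedProductName(redis: Jedis, productId: String): String =
    redis.get("product:$productId:name")
        ?: loadProductNameFromDb(productId).also { name ->
            redis.setex("product:$productId:name", 300L, name) // 5-minute TTL
        }

// Stand-in for a query against the system of record.
fun loadProductNameFromDb(productId: String): String =
    TODO("query the database for product $productId")
```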

Backing Services

A *backing service* is any service your application relies on for its functionality. This is a fairly broad definition, and its wide scope is intentional. Some of the most common backing services include data stores, messaging systems, and caching systems, along with many other types of service, including those that perform line-of-business functionality or security.

When building applications designed to run in a cloud environment, where the file system must be treated as ephemeral, you also need to treat file storage or disk as a backing service. You shouldn't read from or write to files on disk the way you would in a regular enterprise application. Instead, file storage should be a backing service bound to your application as a resource.

Picture an application, a set of backing services, and the resource bindings (the connecting lines) for those services. A resource binding is simply the means by which your application connects to a backing service. The resource binding for a database, for example, might include a username, a password, and a URL that allow your application to consume that resource.

We know we should have externalized configuration (kept separate from credentials and code), and that our releases must be immutable. Applying those rules to the way an application consumes backing services, we end up with a few rules for resource binding:

  • An application should *declare* its need for a given backing service, but allow the cloud environment to perform the actual resource binding.
  • The binding of an application to its backing services should be done via external configuration.
  • It should be possible to attach and detach backing services from an application at will, *without redeploying the application*.

For example, suppose you have an application that needs to communicate with an Oracle database. You write your application so that its reliance on that Oracle database is *declared* (the means of declaration is usually specific to the language or toolset). The application's source code assumes that the configuration of the resource binding happens outside the application.

This means there is *never* a line of code in your application that tightly couples it to a specific backing service. Likewise, you might have a backing service for sending email, so you know you will communicate with it via SMTP. But the exact implementation of the mail server should have no impact on your application, nor should your application ever rely on that SMTP server existing at a specific location or with specific credentials.
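A minimal sketch of what that decoupling looks like (the environment variable names are invented): the code declares only that it needs a JDBC-compatible database, and the concrete binding arrives from the environment.

```kotlin
import java.sql.Connection
import java.sql.DriverManager

// The application declares a need for "a relational database." The actual
// binding -- URL, credentials, and therefore which vendor -- is supplied by
// the environment, so swapping Oracle for Postgres needs no code change.
fun openDatabase(): Connection {
    val url = System.getenv("DATABASE_URL") ?: error("DATABASE_URL is not bound")
    val user = System.getenv("DATABASE_USER") ?: error("DATABASE_USER is not bound")
    val password = System.getenv("DATABASE_PASSWORD") ?: error("DATABASE_PASSWORD is not bound")
    return DriverManager.getConnection(url, user, password)
}
```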

Environment Parity

While some organizations have evolved past this, many of us have likely worked in environments where the shared development environment has a different scaling and reliability profile than QA, and QA in turn differs from production. The database drivers used in dev and QA differ from those in production. Security rules, firewalls, and other environmental configuration settings differ too. Some people can deploy to certain environments while others can't. And finally, worst of all: people fear deployment, with little confidence that if the product works in one environment, it will work in another.

While discussing the *design, build, release, run* cycle, I brought up the notion that the “works on my machine” scenario is a cloud native anti-pattern. The same goes for other phrases we have all heard just before losing hours or days to firefighting and troubleshooting: “It works in QA” and “It works in prod.”

The purpose of applying rigor and discipline to environment parity is to give your team, and your entire organization, the confidence that the application *will work everywhere*.

While the opportunities to create gaps between environments are almost limitless, the most common culprits are usually:

  • Time
  • People
  • Resources

Time

In many organizations, it can take weeks or months for code a developer checks in to reach production. In such organizations you will often hear phrases like “the Q3 release” or “the December 20xx release.” Phrases like these are a warning sign to anyone paying attention.

When that much time passes, people often forget which changes went into a release (even with adequate release notes), and, more importantly, the developers have forgotten what the code looked like.

With a modern approach, organizations should strive to shrink the gap between code check-in and production from weeks or months to *minutes or hours*. The end of a proper CD pipeline should run automated tests in multiple environments until the change is automatically pushed to *production*. With cloud support for zero-downtime deployments, this pattern can become the norm.

People

Humans should *never* deploy applications, at least not to any environment other than their own workstation or lab. If the right build pipeline is in place, the application will be deployed automatically to all applicable environments by the CI tools and, subject to the security restrictions within the target cloud instance, can be deployed on demand to other environments.

In fact, even if your target is a public cloud provider, you can still use cloud-hosted CD tools like CloudBees or Wercker to automate your testing and deployment.

Although there are always exceptions, I contend that if you can't deploy with the push of a button, or automatically in response to some event, you're doing it wrong.

Resources

How we use and provision *backing services* is a common compromise. Our application might need a database, and we know that in production we'll connect it to an Oracle or Postgres server. But it's painful to set one up to be available locally for development, so we compromise and use an in-memory database that is *like* the target database.

Every time we make one of these compromises, we widen the gap between our development and production environments, and the wider that gap, the less predictable our application's behavior becomes. As predictability goes down, so does reliability; and if reliability goes down, we lose the *continuous* flow from code check-in to production deployment. It makes everything we do more brittle. Worst of all, we usually don't learn the consequences of widening the dev/prod gap until it's too late.

When evaluating every step in the development lifecycle while building cloud native applications, you need to flag and question every decision that widens the functional gap between your deployment environments, and you need to resist the urge to let your environments drift apart, even when a difference seems trivial at the time.

Administrative Processes

In some cases, administrative processes are actually a *bad* idea, and you should always ask yourself whether an administrative process is what you really want, or whether a different design or architecture would better suit your needs. Examples of administrative processes that should probably be refactored into something else include:

  • Database migrations
  • Interactive programming consoles (REPLs)
  • Running timed scripts, such as nightly batch jobs or hourly imports
  • Running one-off jobs that execute custom code only once

First, let's look at timers (usually managed by tools like cron or Autosys). One thought might be to internalize the timer and have your application wake up every *n* hours to perform its batch operations. On the surface this looks like a fine solution, but what happens when 20 instances of your application are running in one availability zone and another 15 are running in another? If they're all performing the same batch operation on a timer, you have mass chaos on your hands, and corrupted or duplicated data will be just one of the many terrible things that arise from this pattern.

Interactive shells are also problematic for many reasons, but the biggest is that even if it were possible to reach such a shell, you would only be interacting with the ephemeral memory of a single instance. If the application has been built properly as a *stateless process*, I would argue there is little to no value in exposing a REPL for process introspection.

Next, let's look at the mechanisms that trigger timed or batch administrative processes. This is usually a shell script executed by some external timer stimulus (such as cron or Autosys). In the cloud, you can't count on being able to invoke those commands, so you need to find some other way to trigger ad hoc activity in your application.

There are multiple solutions, but the one I find most appealing, especially when migrating the rest of an application to a cloud native architecture, is to expose a RESTful endpoint that can be used to invoke the ad hoc functionality.

This still allows the timed function to be invoked at will, but moves the stimulus for the operation *outside* the application. Moreover, it solves the problem internal timers have on dynamically scaled instances, because the batch operation is now handled *at most once*: a single application instance handles it and can then interact with other *backing services* as needed to complete the task. Securing the batch endpoint so that only authorized personnel can invoke it should also be fairly simple. Better still, your batch operation can now scale elastically and take advantage of all the other benefits of the cloud.
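A sketch of such an endpoint using the JDK's built-in HTTP server (the path, header name, and token scheme are invented for illustration): an external scheduler, or a human, simply POSTs to it, and the platform routes the request to exactly one instance.

```kotlin
import com.sun.net.httpserver.HttpServer
import java.net.InetSocketAddress

fun runNightlyImport() { /* the former cron job body; omitted in this sketch */ }

fun main() {
    val server = HttpServer.create(InetSocketAddress(8080), 0)
    server.createContext("/admin/nightly-import") { exchange ->
        // A trivial shared-secret check stands in for real auth (OAuth2, etc.).
        val expected = System.getenv("ADMIN_TOKEN")
        val authorized =
            expected != null && exchange.requestHeaders.getFirst("X-Admin-Token") == expected
        val status = if (exchange.requestMethod == "POST" && authorized) {
            runNightlyImport()
            204 // No Content: the batch ran
        } else {
            403 // Forbidden: wrong method or missing/invalid token
        }
        exchange.sendResponseHeaders(status, -1L)
        exchange.close()
    }
    server.start()
}
```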

If you still feel the need for administrative processes, make sure you use them in a way that lines up with the facilities your cloud provider offers. In other words, don't spawn new processes from your favorite programming language to run your job; use tools designed to run one-off tasks in a cloud native way.

Port Binding

Avoid container-determined ports

Web applications, especially those already running inside an enterprise, are often executed within some kind of server container. The Java world is full of containers like Tomcat, JBoss, Liberty, and WebSphere. Other web applications might run in other containers, such as Microsoft Internet Information Server (IIS).

In a non-cloud environment, web applications are deployed into these containers, and the container is then responsible for assigning a port to the application when it starts up.

An extremely common pattern in enterprises that manage their own web servers is to host multiple applications in the same container, separating the applications by port number (or URL hierarchy), and then using DNS to provide a user-friendly facade around that server. For example, you might have a (virtual or physical) host called appserver with ports 8080 through 8090 assigned. Rather than making users remember port numbers, DNS is used to associate a hostname like app1 with appserver:8080, app2 with appserver:8081, and so on.

Avoid micromanaging port assignments

Embracing Platform as a Service here allows developers and operations alike to stop this micromanagement. Your cloud provider should manage port assignment for you, because it likely also manages routing, scaling, high availability, and fault tolerance, all of which require the provider to control certain aspects of the network, including routing hostnames to ports and mapping external port numbers to container-internal ports.

The original twelve-factor wording for port binding uses the term “export” because it assumes that a cloud native application is self-contained and is never injected into any kind of external application server or container.

Practicality and the nature of existing enterprise applications may make it difficult or impossible to build applications that way. As a result, a slightly less restrictive guideline is that *there must always be a 1:1 correlation between application and application server*. In other words, your cloud provider might support a web application container, but it is extremely unlikely to support hosting multiple applications in the same container, since that makes durability, scalability, and resilience nearly impossible.

For modern applications, the developer-facing impact of port binding is fairly straightforward: your application might run at http://localhost:12001 on the developer's workstation, at http://192.168.1.10:2000 in QA, and at http://app.company.com in production. An application developed with exported port binding in mind supports this environment-specific port binding without any code changes.
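A minimal sketch, assuming the platform injects the assigned port through a `PORT` environment variable (the convention used by Heroku and Cloud Foundry):

```kotlin
import com.sun.net.httpserver.HttpServer
import java.net.InetSocketAddress

fun main() {
    // Bind to whatever port the environment assigns; fall back for local runs.
    val port = System.getenv("PORT")?.toInt() ?: 8080
    val server = HttpServer.create(InetSocketAddress(port), 0)
    server.createContext("/") { exchange ->
        val body = "listening on port $port\n".toByteArray()
        exchange.sendResponseHeaders(200, body.size.toLong())
        exchange.responseBody.use { it.write(body) }
    }
    server.start()
}
```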

Applications as backing services

Finally, an application developed to allow externalized, runtime port binding can itself act as a backing service for another application. This flexibility, on top of all the other benefits of running in the cloud, is incredibly powerful.

Stateless Processes

Practical Definition of Stateless

One of the questions I hear most often stems from confusion around the concept of statelessness. People want to know how to build a stateless process when, after all, every application needs *some* kind of state, right? Even the simplest application leaves some data floating around, so how can you have a truly stateless process?

A stateless application makes no assumptions about the contents of memory before it handles a request, nor does it make assumptions about the contents of memory after that request has been handled. The application can create and consume transient state while handling a request or transaction, but by the time the client receives its response, that data should all be gone.

Simply put, all long-lasting state must live outside the application, provided by a backing service.

For example, a microservice exposing user-management functionality must be stateless, so the list of all users is maintained in a backing service such as an Oracle or MongoDB database. For obvious reasons, it would make no sense for the database itself to be stateless.

The share-nothing pattern

Processes often communicate with one another by sharing common resources. Even without considering a move to the cloud, there are a number of benefits to adopting a *share-nothing* model.

First, anything shared among processes is a liability that makes all of those processes more brittle. In many high-availability models, processes share data through a variety of techniques to elect a cluster leader, to determine whether a process is a primary or a backup, and so on.

When running in the cloud, all of those options need to be avoided. Your processes can vanish at a moment's notice, with no warning, *and that's a good thing*. Processes come and go, scale horizontally, and are highly disposable. This means anything shared among processes could also vanish, potentially causing a cascading failure.

It should go without saying, but *the file system is not a backing service*. This means you cannot treat files as a means for applications to share data. Disks in the cloud are ephemeral and, in some cases, even read-only.

If processes need to share data, such as session state for a group of processes forming a web farm, that session state should be externalized and made available through a true backing service.

Data cache

A common pattern, especially in long-running, container-based web applications, is to cache frequently used data during process startup. As mentioned earlier in this book, processes need to start and stop quickly, and spending a long time filling an in-memory cache violates that principle.

Worse, relying on an in-memory cache that your application assumes is always available can bloat the application, leaving each of your instances (which should be elastically scalable) consuming far more RAM than it needs.

There are dozens of third-party caching products, including Gemfire and Redis, and all of them are designed to act as a backing-service cache for an application. They can be used for session state, and also for caching data you might otherwise need during startup, avoiding tightly coupled data sharing among processes.

Concurrency

*Concurrency* advises us to scale cloud native applications using the process model. Historically, when an application reached the limit of its capacity, the solution was to increase its size: if an application could handle only a certain number of requests per minute, the preferred cure was simply to make the application *bigger*.

Adding CPU, RAM, and other resources (virtual or physical) to a single monolithic application is called vertical scaling, and this type of behavior is generally unpopular in today’s civilized society.

A more modern approach, and the kind ideally supported by the elastic scalability of the cloud, is scaling *out*, or *horizontally*. Rather than making a single big process even bigger, you create multiple processes and distribute your application's load among them.

Most cloud providers have perfected this capability, even letting you configure rules that dynamically scale the number of application instances based on load or other runtime telemetry available in the system.

If you build disposable, stateless, share-nothing processes, you will be positioned to take full advantage of horizontal scaling and to run multiple concurrent instances of your application.

Telemetry

When monitoring an application, there are usually several categories of data:

  • Application performance monitoring (APM)
  • Domain-specific telemetry
  • Health and system logs

The first, APM, consists of a stream of events that can be used by tools outside the cloud to keep tabs on how well your application is performing. This is something you are responsible for, since the definition and watermarks of performance are specific to your application and its standards. The data used to supply APM dashboards is often fairly generic and can come from multiple applications across multiple lines of business.

Second, domain-specific telemetry is also up to you. It refers to the streams of events and data that make sense to your business and that you can use for your own analytics and reporting. This type of event stream is often fed into a “big data” system for warehousing, analysis, and forecasting.

The difference between APM and domain-specific telemetry may not be immediately apparent. Think of it this way: APM can provide you with the average number of HTTP requests processed per second by your application, while domain-specific telemetry can tell you the number of widgets sold to iPad users in the last 20 minutes.

Finally, health and system logs should be provided by your cloud provider. They make up a stream of events such as application start, shutdown, scaling, web request tracing, and the results of periodic health checks.

The cloud makes many things easy, but monitoring and telemetry are still difficult, probably even more difficult than they are for traditional enterprise applications. When you stare at a firehose that contains regular health checks, request audits, business-level events, and tracking data and performance metrics, that is a staggering amount of data.

When planning your monitoring strategy, you need to consider how much information you will aggregate, how fast it will arrive, and how much of it you will store. If your application dynamically scales from 1 instance to 100, that can also mean a hundredfold increase in log traffic.

Auditing and monitoring of cloud applications are often overlooked, yet they are among the most important things to plan and execute properly for a production deployment. You wouldn't blindly launch a satellite into orbit with no way to monitor it, so don't do the same to your cloud application.

Authentication and authorization

Security is a vital part of any application and of any cloud environment. *Security should never be an afterthought.*

All too often we are so focused on implementing an application's functional requirements that we neglect one of the most important aspects of delivering any application, whether it targets the enterprise, mobile devices, or the cloud: security.

A cloud native application must be a secure application. Your code, whether compiled or raw, is transported across multiple data centers, executed within multiple containers, and accessed by countless clients, some legitimate and many nefarious.

Even if the only reason you implement security in your application is to have an audit trail of which user made which data change, that alone is worth the relatively small amount of time and effort it takes to secure your application's endpoints.

Ideally, all cloud native applications secure all of their endpoints with RBAC (role-based access control). Every request for an application's resources should know *who* is making the request and which roles that consumer belongs to. Those roles dictate whether the calling client has sufficient permission for the application to honor the request.
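A sketch of the idea (the types and role names are invented): every request is resolved to an authenticated caller with a set of roles, and the endpoint checks those roles before doing any work.

```kotlin
// Illustrative only: requests arrive with an authenticated caller identity,
// and role membership decides whether the application honors the request.
data class Caller(val userId: String, val roles: Set<String>)

fun deleteWidget(caller: Caller, widgetId: String) {
    require("widget-admin" in caller.roles) {
        "user ${caller.userId} lacks the widget-admin role"
    }
    // Perform the deletion and record who did it for the audit trail.
    println("widget $widgetId deleted by ${caller.userId}")
}
```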

With tools like OAuth2, OpenID Connect, various SSO servers and standards, plus a near-infinite number of language-specific authentication and authorization libraries, security should be built into an application's development from day one, rather than bolted on after the application is already running in production.

Cloud Native

What is Cloud Native?

Buzzwords and phrases like “SOA,” “cloud native,” and “microservices” all arose because we needed faster, more efficient ways to communicate our thoughts on a subject. This is vital to facilitating meaningful conversations about complex topics, and it's how we end up with a *shared context* or a *common language*.

The problem with buzzwords is that they rely on a mutual, shared understanding among multiple parties. Like the classic game of telephone played at unprecedented scale, this supposedly shared understanding quickly devolves into mutual confusion.

We saw this happen with SOA (service-oriented architecture), and we are watching it happen again with the concept of cloud native. It seems that every time the concept is shared, its meaning changes, until we have as many definitions of cloud native as there are IT professionals.

To understand “cloud native,” we must first understand “cloud.” Many people assume that “cloud” is synonymous with open, unrestricted access over the public internet. While there are cloud offerings of that variety, it's far from a complete definition.

In the context of this book, cloud refers to Platform as a Service (PaaS). PaaS providers expose a platform that hides infrastructure details from application developers, and that platform sits on top of Infrastructure as a Service (IaaS). Examples of PaaS providers include Google App Engine, Red Hat OpenShift, Pivotal Cloud Foundry, Heroku, AppHarbor, and Amazon AWS.

The key takeaway is that cloud is not necessarily synonymous with public: enterprises are standing up their own private clouds, on their own IaaS or on the IaaS of third-party providers such as VMware or Citrix.

The word “native” in “cloud native” also deserves scrutiny. **It creates the false impression that only brand-new, greenfield applications developed natively in the cloud can be considered cloud native. This is completely untrue.**

*A cloud native application is an application that has been designed and implemented to run on a Platform-as-a-Service installation and to embrace horizontal elastic scaling.*

Why use the Cloud?

Not long ago, deploying applications on physical servers was considered the norm for building applications, from large towers in air-conditioned rooms to ultra-thin *1U* devices installed in real data centers.

Bare-metal deployments were fraught with problems and risk: we couldn't scale applications dynamically, the deployment process was difficult, changes in hardware could cause application failures, and hardware failures often caused massive data loss and significant downtime.

This sparked the virtualization revolution. Everyone agreed that bare metal was no longer the way to go, and so the hypervisor was born. The industry decided to put an abstraction layer on top of the hardware so that we could simplify deployments, scale applications out, and hopefully avoid much of the downtime and sensitivity to hardware failure.

In today's world of always-connected smart devices and even smarter software, you would have to search long and hard for a company that doesn't have some kind of software development as a cornerstone of its business. Even in traditional manufacturing, where companies make hard *physical products*, manufacturing doesn't happen without software. Without software you can't organize people to build things efficiently and at scale, and you certainly can't participate in a global marketplace.

No matter what industry you're in, you can't compete in today's marketplace without the ability to rapidly deliver software that *doesn't fail* and that can dynamically scale to handle previously unheard-of volumes of data. If you can't handle *big data*, your competitors will. If you can't produce software that handles massive load while remaining responsive and changing as quickly as the market does, your competitors will find a way to do it.

This brings us to the essence of *cloud native*. Gone are the days when companies could get away with spending inordinate amounts of time and resources on DevOps tasks, on building and maintaining fragile infrastructure, and on worrying about the consequences of production deployments that happen only once in a blue moon.

This is the era of the cloud, and we need to build our applications in a way that embraces it. We should build applications so that we can spend most of our time working on the hedgehog (the one big thing) and let someone else take care of the fox's many little things. Lightning-fast time to market is no longer a nice-to-have; it's a necessity if we are to avoid being left behind by our competitors. We want to put our resources into our business domain and let other experts handle the things they do better than we do.

By building our applications with a *cloud native architecture*, embracing *everything as a service*, and deploying them in cloud environments, we get all of these benefits and more. The question isn't *why cloud native?* The question you have to ask yourself is: *why not?*