The architecture components of an application can be categorized as either frontend or backend. In some scenarios, these components can be hosted in different computing environments. In the tiered hybrid architecture pattern, those computing environments are an on-premises private computing environment and Google Cloud.
Frontend application components are directly exposed to end users or devices. As a result, these applications are often performance sensitive, and software updates that deliver new features and improvements can be frequent. Because frontend applications usually rely on backend applications to store and manage data, and possibly to handle business logic and user input processing, they're often stateless or manage only limited volumes of data.
You can build your frontend applications with various frameworks and technologies to make them accessible and usable. Key factors for a successful frontend application include application performance, responsiveness, and browser compatibility.
Backend application components usually focus on storing and managing data. In some architectures, business logic might be incorporated within the backend component. New releases of backend applications tend to be less frequent than releases for frontend applications. Backend applications have the following challenges to manage:
- Handling a large volume of requests
- Handling a large volume of data
- Securing data
- Keeping data current and consistent across all the system replicas
The three-tier application architecture is one of the most popular implementations for building business web applications, such as ecommerce websites that contain different application components. This architecture contains the following tiers. Each tier operates independently, but the tiers are closely linked and function together:
- Web frontend and presentation tier
- Application tier
- Data access or backend tier
Putting these layers into containers separates their technical needs, like scaling requirements, and helps you migrate them in a phased approach. It also lets you deploy them on platform-agnostic cloud services that are portable across environments, use automated management, and scale with cloud-managed platforms, like Cloud Run or Google Kubernetes Engine (GKE) Enterprise edition. In addition, Google Cloud-managed databases like Cloud SQL can provide the backend as the database layer.
The tiered hybrid architecture pattern focuses on deploying existing frontend application components to the public cloud. In this pattern, you keep any existing backend application components in their private computing environment. Depending on the scale and the specific design of the application, you can migrate frontend application components on a case-by-case basis. For more information, see Migrate to Google Cloud.
If you have an existing application with backend and frontend components hosted in your on-premises environment, consider the limits of your current architecture. For example, as your application scales and the demands on its performance and reliability increase, you should evaluate whether parts of your application should be refactored or moved to a different, more suitable architecture. The tiered hybrid architecture pattern lets you shift some application workloads and components to the cloud before making a complete transition. It's also essential to consider the cost, time, and risk involved in such a migration.
The following diagram shows a typical tiered hybrid architecture pattern.
In the preceding diagram, client requests are sent to the application frontend that is hosted in Google Cloud. In turn, the application frontend sends data back to the on-premises environment where the application backend is hosted (ideally through an API gateway).
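To make this flow more concrete, the following minimal sketch, written in Python, shows a frontend request handler hosted in Google Cloud that forwards calls to the on-premises backend through a gateway endpoint. The gateway URL, path, and token names are hypothetical placeholders for illustration only, not part of the pattern itself.

```python
# Minimal sketch: a frontend handler hosted in Google Cloud that reaches the
# on-premises backend through an API gateway. The URL, path, and token are
# hypothetical placeholders for illustration only.
import os

import requests
from flask import Flask, jsonify

app = Flask(__name__)

# Hypothetical gateway endpoint that fronts the on-premises backend.
BACKEND_GATEWAY_URL = os.environ.get(
    "BACKEND_GATEWAY_URL", "https://api-gateway.example.com")


@app.route("/orders/<order_id>")
def get_order(order_id: str):
    # The frontend stays stateless: it fetches data from the backend per request.
    response = requests.get(
        f"{BACKEND_GATEWAY_URL}/v1/orders/{order_id}",
        headers={"Authorization": f"Bearer {os.environ.get('GATEWAY_TOKEN', '')}"},
        timeout=5,
    )
    response.raise_for_status()
    return jsonify(response.json())


if __name__ == "__main__":
    # Cloud Run-style convention: listen on the port provided by the platform.
    app.run(host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))
```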
With the tiered hybrid architecture pattern, you can take advantage of Google Cloud infrastructure and global services, as shown in the example architecture in the following diagram. The application frontend can be reached through Google Cloud, and you can add elasticity to the frontend by using autoscaling to dynamically and efficiently respond to scaling demand without overprovisioning infrastructure. There are different architectures that you can use to build and run scalable web apps on Google Cloud. Each architecture has advantages and disadvantages for different requirements.
For more information, watch Three ways to run scalable web apps on Google Cloud on YouTube. To learn more about different ways to modernize your ecommerce platform on Google Cloud, see How to build a digital commerce platform on Google Cloud.
In the preceding diagram, the application frontend is hosted on Google Cloud to provide a multi-regional and globally optimized user experience that uses global load balancing, autoscaling, and DDoS protection through Google Cloud Armor.
Over time, the number of applications that you deploy to the public cloud might increase to the point where you might consider moving backend application components to the public cloud. If you expect to serve heavy traffic, opting for cloud-managed services might help you save engineering effort when managing your own infrastructure. Consider this option unless constraints or requirements mandate hosting backend application components on-premises. For example, if your backend data is subject to regulatory restrictions, you probably need to keep that data on-premises. Where applicable and compliant, however, using Sensitive Data Protection capabilities, like de-identification techniques, can help you move that data when necessary.
In the tiered hybrid architecture pattern, you can also use Google Distributed Cloud in some scenarios. Distributed Cloud lets you run Google Kubernetes Engine clusters on dedicated hardware that's provided and maintained by Google and is separate from Google Cloud data centers. To ensure that Distributed Cloud meets your current and future requirements, know the limitations of Distributed Cloud when compared to a conventional cloud-based GKE zone.
Advantages
Focusing on frontend applications first has several advantages, including the following:
- Frontend components depend on backend resources and occasionally on other frontend components.
- Backend components don't depend on frontend components. Therefore, isolating and migrating frontend applications tends to be less complex than migrating backend applications.
- Because frontend applications often are stateless or don't manage data by themselves, they tend to be less challenging to migrate than backends.
- Frontend components can be optimized as part of the migration to use a stateless architecture, as illustrated in the sketch after this list. For more information, watch How to port stateful web apps to Cloud Run on YouTube.
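As a small illustration of moving toward statelessness, the following sketch externalizes session data from process memory into an external store so that any frontend replica can serve any request. It assumes a reachable Redis instance (for example, Memorystore); the host name and key naming are illustrative placeholders only.

```python
# Minimal sketch: externalize session state so frontend replicas stay stateless.
# Assumes a reachable Redis instance (for example, Memorystore); the host and
# key names are illustrative placeholders.
import json
import os

import redis

# In a stateful design, this data would live in process memory and be lost on
# scale-in or restart. Storing it externally keeps every replica equivalent.
_store = redis.Redis(host=os.environ.get("REDIS_HOST", "10.0.0.3"), port=6379)


def save_session(session_id: str, data: dict) -> None:
    # Store the session externally with a TTL so any replica can read it later.
    _store.setex(f"session:{session_id}", 3600, json.dumps(data))


def load_session(session_id: str) -> dict:
    raw = _store.get(f"session:{session_id}")
    return json.loads(raw) if raw else {}
```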
Deploying existing or newly developed frontend applications to the public cloud offers several advantages:
- Many frontend applications are subject to frequent changes. Running these applications in the public cloud simplifies the setup of a continuous integration/continuous deployment (CI/CD) process. You can use CI/CD to send updates in an efficient and automated manner. For more information, see CI/CD on Google Cloud.
- Performance-sensitive frontends with varying traffic load can benefit substantially from the load balancing, multi-regional deployments, Cloud CDN caching, serverless, and autoscaling capabilities that a cloud-based deployment enables (ideally with stateless architecture).
Adopting microservices with containers on a cloud-managed platform, like GKE, lets you use modern architectures like microfrontends, which extend microservices to the frontend components.
Microfrontends are commonly used with frontends that involve multiple teams collaborating on the same application. That kind of team structure requires an iterative approach and continuous maintenance. Some of the advantages of using microfrontends are as follows:
- It can be broken into independent microservice modules for development, testing, and deployment.
- It provides separation that lets individual development teams select their preferred technologies and code.
- It can foster rapid cycles of development and deployment without affecting the rest of the frontend components that might be managed by other teams.
Whether they're implementing user interfaces or APIs, or handling Internet of Things (IoT) data ingestion, frontend applications can benefit from the capabilities of cloud services like Firebase, Pub/Sub, Apigee, Cloud CDN, App Engine, or Cloud Run.
Cloud-managed API proxies help to:
- Decouple the app-facing API from your backend services, like microservices, as illustrated in the sketch after this list.
- Shield apps from backend code changes.
- Support your existing API-driven frontend architectures, like backend for frontend (BFF), microfrontend, and others.
- Expose your APIs on Google Cloud or other environments by implementing API proxies on Apigee.
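To illustrate the decoupling idea behind these proxies, the following sketch shows a thin backend-for-frontend layer that exposes a stable app-facing contract while translating whatever shape the current backend returns. It is a conceptual stand-in written in Python; the backend URL and field names are hypothetical, and a managed proxy such as Apigee would typically implement this translation as policies rather than custom code.

```python
# Minimal sketch of the decoupling idea behind an API proxy or BFF layer:
# the app-facing contract stays stable even if the backend response changes.
# The backend URL and field names are hypothetical placeholders.
import os

import requests
from flask import Flask, jsonify

app = Flask(__name__)
LEGACY_BACKEND_URL = os.environ.get(
    "LEGACY_BACKEND_URL", "https://backend.internal.example.com")


@app.route("/api/customers/<customer_id>")
def get_customer(customer_id: str):
    # Call the backend service; its schema can change independently.
    backend = requests.get(
        f"{LEGACY_BACKEND_URL}/crm/v2/customer?id={customer_id}", timeout=5).json()

    # Map the backend fields to the stable, app-facing shape that clients rely on.
    return jsonify({
        "id": customer_id,
        "displayName": backend.get("cust_full_name", ""),
        "email": backend.get("primary_email", ""),
    })
```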
You can also apply the tiered hybrid pattern in reverse, by deploying backends in the cloud while keeping frontends in private computing environments. Although it's less common, this approach is best applied when you're dealing with a heavyweight and monolithic frontend. In such cases, it might be easier to extract backend functionality iteratively, and to deploy these new backends in the cloud.
The third part of this series discusses possible networking patterns to enable such an architecture. Apigee hybrid serves as a platform for building and managing API proxies in a hybrid deployment model. For more information, see Loosely coupled architecture, including tiered monolithic and microservices architectures.
Best practices
Use the information in this section as you plan for your tiered hybrid architecture.
Best practices to reduce complexity
When you're applying the tiered hybrid architecture pattern, consider the following best practices that can help to reduce its overall deployment and operational complexity:
- Based on the assessment of the communication models of the identified applications, select the most efficient and effective communication solution for those applications.
Because most user interaction involves systems that connect across multiple computing environments, fast and low-latency connectivity between those systems is important. To meet availability and performance expectations, you should design for high availability, low latency, and appropriate throughput levels. From a security point of view, communication needs to be fine-grained and controlled. Ideally, you should expose application components using secure APIs. For more information, see Gated egress.
- To minimize communication latency between environments, select a Google Cloud region that is geographically close to the private computing environment where your application backend components are hosted. For more information, see Best practices for Compute Engine regions selection.
- Minimize tight dependencies between systems that are running in different environments, particularly when communication is handled synchronously. These dependencies can slow performance, decrease overall availability, and potentially incur additional outbound data transfer charges. One way to loosen such dependencies with asynchronous messaging is shown in the sketch after this list.
- With the tiered hybrid architecture pattern, you might have larger volumes of inbound traffic from on-premises environments coming into Google Cloud compared to outbound traffic leaving Google Cloud. Nevertheless, you should know the anticipated outbound data transfer volume leaving Google Cloud. If you plan to use this architecture long term with high outbound data transfer volumes, consider using Cloud Interconnect. Cloud Interconnect can help to optimize connectivity performance and might reduce outbound data transfer charges for traffic that meets certain conditions. For more information, see Cloud Interconnect pricing.
- To protect sensitive information, we recommend encrypting all communications in transit. If encryption is required at the connectivity layer, you can use VPN tunnels, HA VPN over Cloud Interconnect, and MACsec for Cloud Interconnect.
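The following minimal sketch shows one way to loosen a synchronous cross-environment dependency: the frontend publishes an event to Pub/Sub instead of calling the on-premises system directly, so it doesn't block on that system's latency or availability. The project ID, topic name, and payload fields are hypothetical placeholders.

```python
# Minimal sketch: replace a blocking cross-environment call with an
# asynchronous Pub/Sub message that an on-premises subscriber can consume.
# The project ID, topic name, and payload fields are hypothetical.
import json

from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("my-project", "order-events")


def record_order(order: dict) -> None:
    # Publish the event and continue; the backend processes it when available.
    data = json.dumps(order).encode("utf-8")
    future = publisher.publish(topic_path, data, origin="frontend")
    future.result(timeout=30)  # Optional: wait for the publish acknowledgment.


record_order({"order_id": "12345", "sku": "ABC-1", "quantity": 2})
```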
To overcome inconsistencies in protocols, APIs, and authentication mechanisms across diverse backends, we recommend, where applicable, deploying an API gateway or proxy as a unifying facade. This gateway or proxy acts as a centralized control point and performs the following functions (a minimal sketch follows this list):
- Implements additional security measures.
- Shields client apps and other services from backend code changes.
- Facilitates audit trails for communication between all cross-environment applications and their decoupled components.
- Acts as an intermediate communication layer between legacy and modernized services.
- Apigee and Apigee hybrid let you host and manage enterprise-grade and hybrid gateways across on-premises environments, edge, other clouds, and Google Cloud environments.
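The following sketch illustrates, in application code, the kind of centralized controls such a facade provides: it checks authentication and records an audit entry for every cross-environment call before forwarding it to the backend. The header names, logger, and backend URL are hypothetical; a managed gateway such as Apigee would typically enforce these controls as policies rather than custom code.

```python
# Minimal sketch of a unifying facade: every cross-environment call passes
# through one control point that checks authentication and writes an audit
# trail before forwarding the request. Header names, logger configuration,
# and the backend URL are hypothetical placeholders.
import logging
import os

import requests
from flask import Flask, abort, jsonify, request

app = Flask(__name__)
audit_log = logging.getLogger("cross_env_audit")
BACKEND_URL = os.environ.get("BACKEND_URL", "https://legacy.internal.example.com")


@app.route("/facade/<path:resource>")
def forward(resource: str):
    # Centralized security measure: reject requests without a credential.
    token = request.headers.get("Authorization")
    if not token:
        abort(401)

    # Audit trail for communication between the decoupled components.
    audit_log.info("caller=%s resource=%s", request.remote_addr, resource)

    # Intermediate layer between modernized clients and the legacy backend.
    backend_response = requests.get(
        f"{BACKEND_URL}/{resource}", headers={"Authorization": token}, timeout=5)
    return jsonify(backend_response.json()), backend_response.status_code
```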
To facilitate hybrid setups, use Cloud Load Balancing with hybrid connectivity. This approach lets you extend the benefits of cloud load balancing to services hosted in your on-premises computing environment, and it enables phased workload migrations to Google Cloud with minimal or no service disruption, ensuring a smooth transition for the distributed services. For more information, see Hybrid connectivity network endpoint groups overview.
Sometimes, using an API gateway, or a proxy and an Application Load Balancer together, can provide a more robust solution for managing, securing, and distributing API traffic at scale. Using Cloud Load Balancing with API gateways lets you accomplish the following:
- Provide high-performing APIs with Apigee and Cloud CDN, to reduce latency, host APIs globally, and increase availability for peak traffic seasons. For more information, watch Delivering high-performing APIs with Apigee and Cloud CDN on YouTube.
- Implement advanced traffic management.
- Use Google Cloud Armor as a DDoS protection and network security service to protect your APIs.
- Manage efficient load balancing across gateways in multiple regions. For more information, watch Securing APIs and Implementing multi-region failover with Private Service Connect and Apigee on YouTube.
Use API management and a service mesh to secure and control service communication and exposure in a microservices architecture.
- Use Cloud Service Mesh to allow for service-to-service communication that maintains the quality of service in a system composed of distributed services where you can manage authentication, authorization, and encryption between services.
- Use an API management platform like Apigee that lets your organization and external entities consume those services by exposing them as APIs.
Establish common identity between environments so that systems can authenticate securely across environment boundaries.
Deploy CI/CD and configuration management systems in the public cloud. For more information, see Mirrored networking architecture pattern.
To help increase operational efficiency, use consistent tooling and CI/CD pipelines across environments.
Best practices for individual workload and application architectures
- Although the focus lies on frontend applications in this pattern, stay aware of the need to modernize your backend applications. If the development pace of backend applications is substantially slower than for frontend applications, the difference can cause extra complexity.
- Treating APIs as backend interfaces streamlines integrations, frontend development, and service interactions, and it hides backend system complexities. To support this approach, Apigee facilitates API gateway and proxy development and management for hybrid and multicloud deployments.
- Choose the rendering approach for your frontend web application based on the content (static versus dynamic), search engine optimization (SEO) performance, and expectations about page loading speeds.
- When selecting an architecture for content-driven web applications, various options are available, including monolithic, serverless, event-based, and microservice architectures. To select the most suitable architecture, thoroughly assess these options against your current and future application requirements. To help you make an architectural decision that's aligned with your business and technical objectives, see Comparison of different architectures for content-driven web application backends, and Key Considerations for web backends.
With a microservices architecture, you can use containerized applications with Kubernetes as the common runtime layer. With the tiered hybrid architecture pattern, you can run that runtime layer in either of the following scenarios:
- Across both environments (Google Cloud and your on-premises environments): When you use containers and Kubernetes across environments, you have the flexibility to modernize workloads and then migrate them to Google Cloud at different times. That flexibility helps when a workload depends heavily on another workload and can't be migrated individually, or when you want to use hybrid workload portability to use the best resources available in each environment. In all cases, GKE Enterprise can be a key enabling technology. For more information, see GKE Enterprise hybrid environment.
- In a Google Cloud environment for the migrated and modernized application components: Use this approach when you have legacy backends on-premises that lack containerization support or that require significant time and resources to modernize in the short term.
For more information about designing and refactoring a monolithic app to a microservice architecture to modernize your web application architecture, see Introduction to microservices.
You can combine data storage technologies depending on the needs of your web applications. Using Cloud SQL for structured data and Cloud Storage for media files is a common approach to meet diverse data storage needs. That said, the choice depends heavily on your use case. For more information about data storage options for content-driven application backends and effective modalities, see Data Storage Options for Content-Driven Web Apps. Also, see Your Google Cloud database options, explained.
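As a small illustration of combining these storage services, the following sketch uploads a media file to Cloud Storage and records its metadata in a relational database such as Cloud SQL. The bucket name, table schema, and connection URL are hypothetical placeholders; in practice you would connect to Cloud SQL through its connector or the Auth Proxy.

```python
# Minimal sketch: store media files in Cloud Storage and their metadata in a
# relational database such as Cloud SQL. The bucket name, table schema, and
# connection URL are hypothetical placeholders.
import sqlalchemy
from google.cloud import storage

# Hypothetical Cloud SQL (PostgreSQL) connection URL, for example reached
# through the Cloud SQL Auth Proxy listening on localhost.
engine = sqlalchemy.create_engine(
    "postgresql+pg8000://app:password@127.0.0.1:5432/shop")
storage_client = storage.Client()


def save_product_image(product_id: int, local_path: str) -> str:
    # Upload the media file to a Cloud Storage bucket (hypothetical name).
    bucket = storage_client.bucket("example-product-media")
    blob = bucket.blob(f"products/{product_id}.jpg")
    blob.upload_from_filename(local_path)

    # Record the structured metadata in the relational database.
    with engine.begin() as conn:
        conn.execute(
            sqlalchemy.text(
                "INSERT INTO product_images (product_id, gcs_uri) "
                "VALUES (:pid, :uri)"),
            {"pid": product_id, "uri": f"gs://{bucket.name}/{blob.name}"},
        )
    return f"gs://{bucket.name}/{blob.name}"
```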