NOTE: Some aspects of this product are in Beta. The hybrid installation options are GA. To join the Beta program, reach out to your Apigee representative.

About environments

An environment provides an isolated context or "sandbox" for running API proxies. In a single organization, you can create multiple environments.

The following code shows an example overrides configuration where multiple environments are defined.

namespace: my-namespace
org: my-organization
...
envs:
  - name: test
    serviceAccountPaths:
      synchronizer: "your_keypath/synchronizer-manager-service-account.json"
      udca: "your_keypath/analytic-agent-service-account.json"

  - name: prod
    serviceAccountPaths:
      synchronizer: "your_keypath/synchronizer-manager-service-account.json"
      udca: "your_keypath/analytic-agent-service-account.json"
...

Suppose a proxy with the base path /foo1 is deployed to the test environment. You could call the proxy like this:

curl -k https://api.example.com/foo1

When this call hits the ingress, the ingress routes it to the message processor (MP) associated with the test environment, which handles the request.
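The hostname-to-environment mapping comes from environment groups and the virtual hosts configured in your overrides file. As a hedged sketch (the group name test-group and the certificate paths are assumptions for illustration, not values from this document), an overrides fragment along these lines ties the ingress to an environment group:

```yaml
# Illustrative overrides fragment: each virtualhosts entry's name matches an
# environment group defined in the organization. The group's hostnames (for
# example, api.example.com) determine which calls the ingress routes to the
# environments in that group.
virtualhosts:
  - name: test-group                  # assumed environment group name
    sslCertPath: ./certs/keystore.pem # assumed certificate path
    sslKeyPath: ./certs/keystore.key  # assumed key path
```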

Similarly, if foo1 is also deployed to the prod environment, you could make a proxy request like this, to the host alias apiprod.example.com:

curl -k https://apiprod.example.com/foo1

And the call is routed by the ingress to the MP associated with that host.
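The host aliases themselves are the hostnames attached to an environment group. As an illustrative sketch (the group name prod-group is an assumption), an environment group whose traffic for apiprod.example.com is routed to its member environments could be created with a payload like this:

```json
{
  "name": "prod-group",
  "hostnames": ["apiprod.example.com"]
}
```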

Antipattern: Deploy all of your proxies to one hybrid environment.

Best practice: Create multiple environments and deploy a limited number of proxies to each one.

Limit the number of proxy deployments

In hybrid, many environments can share the same virtual hosts, as defined in environment groups. This means you must think carefully about how you manage your proxy deployments to any given environment. The best practice is to create multiple environments and deploy a limited number of proxies to each one.

How many proxies should you deploy to an environment? There is no single answer to this question; however, the following table provides general guidance on why it's a good idea to limit the number of proxies deployed to each environment and what to consider when managing proxy deployments:

Issue to consider: Message processor boot-up time
Description: There is a direct correlation between the amount of time a message processor (MP) takes to boot up and the number of proxies deployed to that MP. In an auto-scaling Kubernetes environment, an increase in boot time might be a problem: the more proxies that are deployed to the MP, the longer it takes for that MP to come up when it needs to be scaled or recreated.

Issue to consider: Scaling performance
Description: If you have several proxies deployed to an environment, and one of the proxies gets so much traffic that it frequently auto-scales, all of the proxies in that environment scale with it. The performance effect of scaling multiple proxies because of a single high-traffic proxy might be a problem.

Issue to consider: Noisy neighbor
Description: If you have several proxies deployed to the same environment, and one proxy crashes, then all of the proxies in the environment are taken down while the MPs restart. By limiting the number of proxies deployed to an environment, you minimize the impact of a single proxy crashing.

Environment configuration reference

For a complete list of environment configuration elements, see envs in the Configuration property reference.

Working with environments

For more information about configuration, see the following topics: