
Preview Environments: A Best Practices Guide

Learn how to apply preview environment best practices in your testing and development workflow, covering automation, data management, secrets handling, and resource allocation
August 5, 2024

Preview environments are temporary, self-contained deployments of an application, typically created for a single branch or pull request. Because each environment is isolated, teams can test multiple changes concurrently, iterate quickly, and deliver new features faster.

This article explores several best practices for fully utilizing preview environments in your testing and development workflow.

Summary of preview environment best practices

  • Automate environment creation and updates with code changes: Integrate your preview environment creation with your version control system so that environments are created and updated automatically whenever code changes.
  • Decide on a data management strategy for your preview environments: Will you use anonymized or generated test data? How will data generation and disposal be handled?
  • Store secrets in a secret manager: Store environment variables and secrets in a secret manager, separate from your code, to protect sensitive data.
  • Allocate just the right amount of resources: Allocate resources (such as CPU and memory) for each preview environment based on its intended use. Smaller environments for basic testing require fewer resources than environments used for performance testing.

In the rest of this article, we examine each of these four key practices in detail, providing a deeper understanding of their implementation. Additionally, we explore important considerations to ensure the successful integration of preview environments into your development workflow.

Automate environment creation and updates with code changes

Integrate your preview environment creation with your version control system for automatic environment creation and updates whenever code changes. This unlocks powerful automation and streamlines the testing process. To understand this better, let’s take a closer look at the Git workflow lifecycle in the context of preview environments.

Git workflow for preview environments

Environment creation

The process starts with a developer committing and pushing code. The push triggers a preconfigured workflow on your version control platform (e.g., GitHub Actions or GitLab CI/CD).

If your team works with many feature branches and frequently pushes code, automation streamlines the testing process. It allows for immediate environment creation and testing as soon as code changes are pushed. The workflow script automatically provisions a new preview environment.

This approach provides a number of benefits:

  • Automated scripts ensure consistent configuration across all preview environments, minimizing the risk of errors during manual setup.
  • Developers don’t need to wait for environments to be set up manually, freeing them to focus on coding and testing.

It also has a few drawbacks:

  • Creating a new environment for every change can strain infrastructure resources, impact performance, and cause delays in environment creation.
  • Testing every change, even those under development, can lead to noisy test results and wasted effort. Prioritizing stable code before testing in preview environments is essential.
  • A manual approach might be better for major code changes or refactoring efforts. This allows for more control over the environment configuration and resource allocation.

Alternatively, environment creation can be triggered by pull requests. You might choose to automate creation only for pull requests with a certain number of commits or exceeding a specific codebase change size, which helps manage resource allocation.

Here, too, there are some upsides:

  • Automating creation can efficiently handle high volumes of PRs, which is a benefit for large teams with frequent development activity.
  • This approach encourages developers to submit more complete and well-tested code for review within PRs, leading to higher overall code quality.

And there are some downsides as well:

  • If a PR is rejected or abandoned, the associated environment might be created and deleted shortly afterward, leading to wasted resources.
  • If PRs contain incomplete or untested code, these environments might be used for unproductive testing, leading to wasted effort.
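
To make the pull-request-triggered flow concrete, here is a minimal GitHub Actions sketch. It creates or updates a preview environment when a PR is opened or updated and tears it down when the PR is closed; the deployment and teardown scripts are placeholders for whatever your preview environment platform provides.

# Sketch: PR-triggered preview environment lifecycle (deployment commands are placeholders)
name: preview-environment
on:
  pull_request:
    types: [opened, synchronize, reopened, closed]

jobs:
  deploy-preview:
    if: github.event.action != 'closed'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Create or update the preview environment for this PR
        run: ./scripts/deploy-preview.sh "pr-${{ github.event.pull_request.number }}"   # hypothetical script

  teardown-preview:
    if: github.event.action == 'closed'
    runs-on: ubuntu-latest
    steps:
      - name: Delete the preview environment for this PR
        run: ./scripts/delete-preview.sh "pr-${{ github.event.pull_request.number }}"   # hypothetical script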

{{banner-large-dark="/banners"}}

Testing in the preview environment

Testers or developers can access the newly created preview environment using a unique URL. The environment reflects the code changes in the specific branch, enabling focused testing.

Note that preview environments should not have access to external production components. Dependencies such as downstream microservices must be mocked or stubbed to keep the environment isolated from production. The same applies to test data used in preview environments, such as user credentials: it must be mocked or anonymized, because even isolated environments pose risks if real data is involved.
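
For example, a third-party payments dependency could be replaced in the preview environment's service definition with a mock server. The sketch below assumes WireMock as the mock and uses an illustrative service name, port, and stub path.

services:
  payments-mock:
    image: wiremock/wiremock        # stands in for the real payments API in the preview environment
    ports:
      - "8080:8080"
    volumes:
      - ./mocks/payments:/home/wiremock   # canned stub mappings; adjust paths to your mock server's conventions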

Code is merged or discarded

If the code changes are approved and merged into the main branch, the workflow might automatically update the existing preview environment for the main branch with the latest code. Alternatively, a separate merge might trigger a new preview environment creation for the main branch.

If the feature branch is discarded, the workflow can automatically delete the associated preview environment, freeing up resources.

Choosing the right method: an example

The ideal approach for triggering preview environment creation depends on your team’s workflow and resource constraints. Weigh the pros and cons of automated creation based on push frequency versus pull requests and consider factors such as resource availability.

Here’s a YAML file that illustrates how to automate the creation of a preview environment.

services:
  db:
    x-cnc:
      name: db
      type: database
      engine: postgres
      version: '14'
      adapter: postgresql
  frontend:
    build:
      context: frontend
      dockerfile: ''
    x-cnc:
      type: frontend
      build: ./build.sh
      url_path: /
      assets_path: assets
    deploy:
      resources:
        limits:
          cpus: 1
          memory: 2G
    x-coherence:
      repo_id: 13
  server:
    build:
      context: backend
      dockerfile: ''
    x-cnc:
      type: backend
      system:
        health_check: ''
      migrate: alembic upgrade head
      url_path: /api
    deploy:
      resources:
        limits:
          cpus: 1
          memory: 2G
    command: python server.py
    x-coherence:
      repo_id: 13

The YAML file above specifies the front-end, back-end, and database services of a full-stack monorepo app and is generated by Coherence as you fill out the forms in the UI. However, if the service edit forms don't include the settings you need for advanced controls, you can edit the YAML file directly.

{{banner-small-4="/banners"}}

Decide on a data management strategy for your preview environments

While preview environments offer isolation for testing code changes, they need data to function realistically. There are two main approaches to provisioning data for preview environments: using pre-existing data or generating data when the environment is created.

Using pre-existing data

You can leverage existing anonymized production data or predefined test data sets. Both offer a realistic testing experience but require careful consideration.

Predefined test data sets are designed for your application and ensure thorough testing of various scenarios. Anonymized production data is the most realistic option but requires strong anonymization techniques to comply with data privacy regulations.

Provisioning data at environment creation

In this approach, you generate synthetic data on demand (without involving actual user information) when the preview environment is created. This approach ensures data privacy and avoids the need for managing anonymized data.
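
One way to wire this up is to include a short-lived seed step in the preview environment definition so that synthetic data is generated every time the environment comes up. In the compose-style sketch below, the seed script and row count are hypothetical; a library such as Faker could back it.

services:
  seed:
    build:
      context: backend
    command: python scripts/seed_synthetic_data.py --rows 500   # hypothetical script that generates synthetic records
    depends_on:
      - db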

Choosing the right approach

Here are a few important things to weigh in making a decision:

  • For a highly realistic testing experience: Consider anonymized production data, but prioritize strong anonymization and data privacy compliance.
  • For broader test coverage: Use predefined test data sets that cover a variety of scenarios and edge cases.
  • For data privacy and avoiding anonymization complexity: Consider synthetic data generation, which eliminates the need to manage sensitive information.
  • A combination approach: You can leverage a mix of these methods, such as using anonymized data for core functionality testing and synthetic data for specific edge cases.

Data security and disposal

Regardless of the chosen approach, here are key data security and disposal considerations:

  • Access controls: Implement strict access controls for the preview environment. Limit access only to authorized personnel who need the data for testing purposes.
  • Data backups (if applicable): If using anonymized production data, consider regular backups to a secure location for specific testing needs. However, to avoid database bloating, prioritize deleting this data when the environment is no longer needed.
  • Data disposal integration: Integrate automated cleanup processes within your CI/CD pipeline to remove data automatically (e.g., upon merging or discarding a branch). This optimizes resource consumption and minimizes data retention risks.

{{banner-small-1="/banners"}}

Store secrets in a secret manager

As discussed above, preview environments rely on data to function properly. However, some of that data, such as API keys, passwords, or database credentials, is sensitive. It must be stored and retrieved securely to prevent unauthorized access or breaches while still allowing developers to test effectively. Here are a few things to keep in mind when dealing with sensitive data.

Separate storage

Secrets should never be stored directly in code repositories or the preview environment configuration. Inject sensitive data (such as passwords and credentials, encryption keys, and API keys) during runtime only. When the application running in the preview environment needs a secret (e.g., database password), it should be retrieved from the secrets manager using a secure API call.
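
As one illustration, a CI workflow step can pull secrets at deploy time instead of baking them into the repository. The sketch below assumes AWS Secrets Manager and the aws-actions/aws-secretsmanager-get-secrets action, with hypothetical secret names and deploy script.

steps:
  # Assumes AWS credentials are already configured for the job (e.g., via OIDC).
  - name: Fetch preview environment secrets
    uses: aws-actions/aws-secretsmanager-get-secrets@v2
    with:
      secret-ids: |
        preview/db-password
        preview/api-key
  - name: Deploy with injected secrets
    run: ./scripts/deploy-preview.sh   # secrets are exposed to this step as environment variables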

Rotation and short-lived credentials

Consider using short-lived credentials or automatic rotation mechanisms for secrets used in preview environments. Use tools like AWS Secrets Manager with automatic key rotation. This service can be configured to generate and rotate API keys or database credentials at regular intervals (e.g., daily or weekly). This ensures that even if an attacker gains access to a temporary credential, its validity expires quickly, reducing the risk of unauthorized access to production systems.

Scoping environment variables

Define specific variables for each environment (like development, staging, or preview), and use different .env files to store configuration details. These files can specify database connection strings, API endpoints, or feature flags. When the application runs in a specific environment, it loads the corresponding environment variable file, ensuring that it uses the appropriate configuration settings for that environment. This allows you to easily test your application with different configurations, like database connections or API keys, on a feature branch before merging it to the main codebase.

For example, you can try a new database connection string in a preview environment before making it live in production. This way, you can ensure that everything works smoothly before making permanent changes. Also, remember to exclude these files from version control (e.g., by adding them to .gitignore) to prevent accidental exposure of sensitive data.
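
In a compose-style definition like the one shown earlier, this per-environment scoping can be expressed with env_file; the file names and the APP_ENV variable below are illustrative.

services:
  server:
    build:
      context: backend
    env_file:
      - .env.${APP_ENV:-preview}   # loads .env.preview unless APP_ENV selects another file; keep these files out of version control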

Allocate just the right amount of resources

Overprovisioning resources drives up costs, while underprovisioning can cause performance bottlenecks and hinder testing efficiency. Here are five strategies to help you manage your resources optimally:

  • Allocate resources (CPU, memory) based on the intended use of each environment. Smaller environments for basic testing can have lower resource quotas than environments used for performance or load testing (see the sketch after this list).
  • Use spot instances for tasks that do not require persistence. They reduce costs considerably and are ideal for fault-tolerant workloads, such as batch or data processing.
  • Set resource thresholds for individual environments. If an environment consistently exceeds its limits, it might indicate the need for a different resource allocation or optimization of the test workload.
  • Implement monitoring tools to track CPU, memory, and storage usage across your preview environments. This will provide insights into resource allocation effectiveness and help identify potential bottlenecks.
  • Promote resource awareness among developers. Encourage them to consider the resource footprints of their code changes and tests when working with preview environments.
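
For example, in a compose-style definition, different preview profiles can be given different limits; the service names and numbers below are purely illustrative.

services:
  server-basic:           # lightweight profile for functional checks
    deploy:
      resources:
        limits:
          cpus: '0.5'
          memory: 512M
  server-load-test:       # larger profile reserved for performance testing
    deploy:
      resources:
        limits:
          cpus: '2'
          memory: 4G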

Another aspect to take into account when discussing resources is vendor lock-in. Avoid getting confined to a particular vendor by choosing a service that supports migration. Coherence, for instance, is built on Cloud Native Computing Foundation (CNCF) standards, providing extensive customization options. This flexibility allows you to migrate your infrastructure and workflows to another platform if and when needed.

{{banner-small-2="/banners"}}

Considerations

While preview environments offer significant advantages for development workflows, they also present some design considerations to keep in mind.

Resource strain

Creating and maintaining numerous preview environments can strain your infrastructure resources, leading to performance slowdowns for both the environments and your development team. Overprovisioning environments or creating them too frequently can exacerbate this issue.

It is also important to look out for orphan environments: development/testing environments that are no longer actively used or maintained. These environments can accumulate over time for various reasons, such as abandoned features or completed projects. To prevent orphan environments from consuming resources, regularly review your infrastructure and archive/delete environments that are no longer required. You can automate this process by implementing policies in your CI/CD pipeline (e.g., time to live) or by using autoscaling features to automatically scale down resources for idle environments. This reduces the costs associated with underutilized resources in orphan environments.
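
A simple way to enforce a time-to-live policy is a scheduled workflow that sweeps for stale environments. In the sketch below, the cleanup script is a placeholder for your platform's CLI or API, and the seven-day threshold is arbitrary.

name: preview-env-ttl
on:
  schedule:
    - cron: '0 3 * * *'   # run nightly at 03:00 UTC
jobs:
  sweep:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Delete preview environments idle for more than 7 days
        run: ./scripts/delete-stale-previews.sh --ttl-days 7   # hypothetical script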

Security concerns

Automated environment creation brings convenience, but it also requires strong security measures. Without proper access controls and security protocols, automated environments can become vulnerable to unauthorized access or security breaches, especially if they contain sensitive data.

Shared services

Shared services can mitigate resource strain. A shared microservice, for example, can be used by many preview environments if it does not contain environment-specific data. This approach optimizes resource allocation and reduces overall infrastructure demands.

{{banner-small-1="/banners"}}

Last thoughts

Preview environments are a powerful tool that can empower development teams of all sizes to achieve greater agility, higher code quality, and faster delivery cycles.

As the practices and technologies described in this article evolve, we can expect preview environments to become an even more essential component of the modern development workflow.