What Are Preview Environments and How Do They Improve Your Development Workflow?
A preview environment is an ephemeral, isolated deployment created automatically for each pull request or feature branch. It gives every in-progress feature its own URL, its own database, and its own running instance of the application — so teams can review, test, and approve changes before anything touches the shared codebase. Preview environments eliminate the bottleneck of a single staging environment and dramatically shorten the feedback loop between writing code and shipping it.
This article explains the problems preview environments solve, how they work across their full lifecycle, and the three main approaches to setting them up — from deploying code directly to servers, to containers, to Kubernetes.
What problems does a shared staging environment create?
The traditional development workflow looks like this:
1. Create a feature branch
2. Commit code and open a pull request
3. Review the code changes
4. Merge the pull request into the base branch
5. Deploy the base branch to a shared staging environment
6. Test the feature in staging
7. Deploy to production or request changes
The critical flaw is step 5: the feature cannot be tested in a realistic environment until it is merged into the base branch and deployed to staging. This creates several problems:
- Conflicting features. When multiple developers merge features into the same branch, their changes interact in unpredictable ways. A feature that worked in isolation may break when combined with another.
- Untestable code. If two features are merged but only one is ready for release, there is no clean way to deploy just the approved feature. The staging branch becomes a mix of ready and unready code.
- Blame ambiguity. When a bug appears in staging after multiple merges, identifying which feature introduced it requires time-consuming investigation.
- Bottleneck. The staging environment becomes a shared resource that teams compete for. One broken deployment blocks testing for everyone.
Preview environments solve all of these problems by giving each feature its own complete environment before it is merged.
What is a preview environment?
A preview environment is a short-lived, fully functional copy of your application that is created automatically when a pull request is opened and destroyed when the pull request is merged or closed. Each preview environment runs in complete isolation — with its own URL, its own database, and its own set of services.
The advantages over a shared staging environment are:
- Independent testing. Every feature is tested in isolation. There is no risk of one developer's work breaking another developer's tests.
- Faster feedback. Reviewers can click a link and see the feature running immediately, without waiting for a staging deployment cycle.
- Clean base branch. Code is only merged after it has been tested and approved in its preview environment, so the base branch stays deployable at all times.
- No environment conflicts. Teams stop competing for access to a single staging server. Ten features in progress means ten independent environments.
- Production-like previews. Each environment mirrors production configuration, so what you see in preview is what you get in production.
The following diagram shows the relationship between production, staging, and preview environments: production and staging are long-lived, while each active pull request gets its own isolated, short-lived environment.
How does the preview environment lifecycle work?
Preview environments are disposable by design. They exist to test a feature, and once the feature is merged or abandoned, the environment is destroyed. This lifecycle is fully automated through CI/CD pipelines. If you are new to automation workflows, check out how to start using GitHub Actions to set up automated deployments.
The lifecycle follows four stages:
- PR opened → environment created. When a developer opens a pull request, the CI/CD pipeline automatically builds and deploys the application to a new, isolated environment.
- PR updated → environment redeployed. Each new commit pushed to the branch triggers a redeployment, so the preview always reflects the latest code.
- PR merged or closed → environment destroyed. When the pull request is merged into the base branch or closed without merging, the environment and all its resources are automatically deleted.
- Unique URL generated. Every preview environment gets a URL derived from the branch name. For example, a branch named feature/user-register in a project called Manhattan might produce https://feature-user-register.manhattan.com.
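The branch-to-URL mapping can be sketched in a few lines of shell. This is a minimal illustration rather than any platform's exact rules — real systems also truncate long names and handle collisions — and the manhattan.com domain is just the article's running example:

```shell
branch="feature/user-register"
project="manhattan"

# lowercase the branch name and replace anything that is not a-z or 0-9 with a hyphen
slug=$(printf '%s' "$branch" | tr '[:upper:]' '[:lower:]' | sed 's/[^a-z0-9]/-/g')

echo "https://${slug}.${project}.com"
```

The same slug is typically reused for DNS records, container names, and database names, so one sanitization function keeps every resource for a preview environment consistently named.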
This lifecycle can span minutes or months depending on the feature size, review process, and business requirements. The key is that environments are created and destroyed automatically — developers do not manage infrastructure manually.
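At its core, that automation is a dispatch on the pull request event. A minimal sketch, where deploy_preview and destroy_preview are hypothetical stand-ins for real deployment scripts and the event names follow GitHub's opened/synchronize/closed convention:

```shell
# Hypothetical stand-ins for the real deploy/teardown scripts a pipeline would call.
deploy_preview()  { echo "deploying preview for $1"; }
destroy_preview() { echo "destroying preview for $1"; }

handle_pr_event() {
  event="$1"; branch="$2"
  case "$event" in
    opened|reopened|synchronize) deploy_preview "$branch" ;;   # stage 1 and 2: create or redeploy
    closed)                      destroy_preview "$branch" ;;  # stage 3: merged or abandoned
  esac
}

handle_pr_event opened feature/user-register
handle_pr_event closed feature/user-register
```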
How do preview environments handle databases?
Each preview environment creates its own exclusive database when the environment is provisioned, and deletes that database when the environment is destroyed. This guarantees complete data isolation between features.
The initial data can come from two sources:
- Cloned from staging. The preview database starts as a copy of the staging database, giving QA testers realistic data to work with.
- Seeded with defaults. The database is populated with a standard seed script, ensuring a consistent starting point for every preview.
Because each environment owns its data independently, testers can create, modify, and delete records without worrying about affecting other environments. If the data gets into a bad state, the environment can be redeployed to reset it to the initial seed values.
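One way to guarantee that isolation is to derive a unique database name from the branch. The sketch below assumes PostgreSQL; the createdb/psql/dropdb commands and the seed.sql script are illustrative and shown as comments because they need a running server:

```shell
branch="feature/user-register"

# derive a database name that is unique per preview environment
db_name="preview_$(printf '%s' "$branch" | sed 's/[^a-zA-Z0-9]/_/g')"
echo "$db_name"

# provisioning and teardown would then look like (hypothetical PostgreSQL commands):
#   createdb "$db_name"
#   psql "$db_name" -f seed.sql   # seed with defaults, or restore a staging dump instead
#   dropdb "$db_name"             # on PR close, or to reset bad state before a redeploy
```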
How do preview environments work with multiple services?
Modern applications often span multiple repositories — a frontend service and a backend API, for example. A new feature might require changes to both: the frontend depends on a new endpoint that only exists in the backend feature branch. Creating a preview environment for only the frontend and pointing it at the staging backend would fail because the new endpoint does not exist there yet.
The solution is a branch naming convention. Both repositories use the same branch name for the feature — for example, feature/new-feature. The preview environment system detects matching branch names across repositories and deploys both services together into the same environment. The frontend preview talks to the backend preview, and the full feature can be tested end to end.
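The matching-branch check can be sketched with git ls-remote. To stay self-contained, the snippet builds a throwaway local repository to stand in for the backend remote; a real pipeline would query the actual repository URL instead:

```shell
# Demo setup: a throwaway local repo playing the role of the backend remote.
tmp=$(mktemp -d)
git -c init.defaultBranch=main init -q "$tmp/backend"
git -C "$tmp/backend" -c user.email=ci@example.com -c user.name=ci \
  commit -q --allow-empty -m init
git -C "$tmp/backend" branch feature/new-feature

# The actual check: does the backend have a branch matching the frontend's?
branch="feature/new-feature"
if git ls-remote --exit-code --heads "$tmp/backend" "$branch" >/dev/null 2>&1; then
  backend_ref="$branch"   # deploy the matching backend branch into the same environment
else
  backend_ref="main"      # otherwise fall back to the backend's base branch
fi
echo "$backend_ref"
```

The fallback matters: most features touch only one repository, so the preview system should deploy the other services from their base branches rather than fail when no matching branch exists.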
How do you set up preview environments?
There are three main approaches to provisioning preview environments, each with increasing levels of automation and scalability.
Can you set up preview environments by deploying code directly to servers?
Yes, but it is the least efficient method. Assuming you have a server dedicated to preview environments, the steps are:
- Install dependencies. Set up the web server, database, language runtime, virtual hosts, and any other services the application needs.
- Create DNS records. Configure the preview URL to point to the server.
- Deploy the code. Upload files via SFTP or a similar method.
This approach has significant problems:
- Slow. Installing and configuring services takes minutes to hours. Deploying hundreds of files over SFTP adds more time. Preview environments should be ready in seconds, not minutes.
- Complex. There are enough steps that a configuration management tool like Ansible, Puppet, or Chef is needed to automate them — and that automation code must itself be written and maintained.
- Error-prone. A misconfigured virtual host, an unavailable package repository, or a failed dependency installation can prevent the environment from being created at all.
Why are containers better for preview environments?
Docker containers eliminate most of the installation and configuration complexity. With Docker installed on the server, the steps become:
- Create DNS records for the preview URL.
- Build the container image in your CI/CD pipeline.
- Push the image to a container registry.
- Run the container on the server.
This is faster and more reliable because the container image includes everything the application needs — runtime, dependencies, configuration — in a single artifact. There is no manual installation step.
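The container steps reduce to a handful of commands. This sketch uses a dry-run wrapper so it executes without a Docker daemon (swap the wrapper body for "$@" in a real pipeline); the registry and image names are made up for illustration:

```shell
slug="feature-user-register"                 # slug derived from the branch name
image="registry.example.com/myapp:${slug}"   # hypothetical registry and image name

# dry-run wrapper: prints each command instead of executing it,
# so this sketch runs without Docker; replace the body with "$@" for real use
run() { echo "+ $*"; }

run docker build -t "$image" .                         # build in CI/CD
run docker push "$image"                               # push to the registry
run docker run -d --name "preview-${slug}" "$image"    # run on the preview server
```

Tagging the image with the branch slug means each preview environment pulls exactly the artifact built from its own branch, and pushing a new commit simply overwrites the tag before the container is restarted.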
However, running containers on a single server still leaves questions unanswered:
- How do you route traffic to the correct container? You need a reverse proxy with dynamic forwarding rules.
- How do you handle SSL certificates for each preview URL?
- How do you scale when one server is not enough for all active previews?
- How do you manage resource allocation across multiple projects?
What is the best way to run preview environments at scale?
The best approach for running preview environments at scale is a container orchestrator like Kubernetes. Instead of managing individual servers, you manage a cluster that handles scheduling, networking, SSL, and scaling automatically. The setup requires:
- A Kubernetes cluster on a cloud provider (DigitalOcean Kubernetes, Google GKE, Amazon EKS, Civo, or similar).
- Dockerfiles for every service in your project. You can use Dockadvisor to check your Dockerfiles for best practices and security issues.
- Kubernetes manifests that define how each service is deployed, exposed, and scaled.
Kubernetes handles the problems that containers on a single server cannot: automatic SSL via cert-manager, ingress routing to preview URLs, horizontal scaling when more resources are needed, and clean teardown when environments are destroyed.
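One common pattern is a Kubernetes namespace per preview environment, because deleting the namespace tears down every resource inside it in one step. The kubectl calls below are wrapped in a dry-run echo so the sketch runs without a cluster; k8s/ is a hypothetical manifests directory:

```shell
slug="feature-user-register"   # slug derived from the branch name
ns="preview-${slug}"

# dry-run wrapper so this sketch runs without a cluster; use "$@" against a real one
run() { echo "+ $*"; }

run kubectl create namespace "$ns"
run kubectl apply -n "$ns" -f k8s/     # deploy every manifest into the namespace
run kubectl delete namespace "$ns"     # teardown: one command removes everything
```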
If managing your own Kubernetes cluster is more infrastructure than your team wants to maintain, platforms like Deckrun simplify this significantly. Deckrun deploys your containers to managed cloud providers and handles the Kubernetes complexity — cluster management, SSL certificates, DNS records, and environment teardown — so your team can focus on building features instead of managing infrastructure.
Comparison: direct deploy vs containers vs Kubernetes
| | Direct deploy | Containers (single server) | Kubernetes |
|---|---|---|---|
| Setup speed | Minutes to hours | Seconds to minutes | Seconds |
| Complexity | High (manual config) | Medium (Docker + reverse proxy) | Low once configured |
| Scalability | Limited to one server | Limited to one server | Scales horizontally |
| SSL/DNS automation | Manual | Partial | Fully automated |
| Maintenance burden | High (scripts, config mgmt) | Medium (Docker, proxy rules) | Low (orchestrator handles it) |
| Reliability | Fragile (many failure points) | Good (containerized) | Excellent (self-healing) |
Frequently asked questions
How long does it take to create a preview environment?
With containers and Kubernetes, a preview environment is typically ready within 30 to 90 seconds after a pull request is opened. The exact time depends on the container image size, the number of services, and whether images are cached. Direct server deployments take significantly longer — often 5 to 15 minutes.
Do preview environments replace staging?
Not necessarily. Preview environments replace the need to merge code into a shared staging branch for testing, but many teams keep a staging environment for integration testing, performance testing, or final QA before production. The key difference is that staging is no longer a bottleneck — features arrive there already tested and approved.
How much do preview environments cost?
Costs depend on the infrastructure approach. On Kubernetes, environments share cluster resources and are destroyed when no longer needed, so you only pay for resources while preview environments are active. For most teams, the cost of running preview environments is a fraction of the developer time saved by eliminating staging bottlenecks and catching bugs earlier.
Can preview environments work with monorepos?
Yes. In a monorepo setup, the CI/CD pipeline detects which services were changed in the pull request and deploys only the affected services to the preview environment. Unchanged services can either be skipped or deployed from the base branch to provide a complete environment.
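The change-detection step can be sketched as follows, using a hard-coded file list in place of the `git diff --name-only origin/main...HEAD` output a real pipeline would consume (the paths are hypothetical):

```shell
# In CI this list would come from: git diff --name-only origin/main...HEAD
changed_files="frontend/src/App.tsx
backend/api/users.go
backend/go.mod"

# the first path segment is the service directory; de-duplicate to get affected services
changed_services=$(printf '%s\n' "$changed_files" | cut -d/ -f1 | sort -u)
echo "$changed_services"
```

The resulting list drives the pipeline: only these service directories are built and deployed, while the rest of the preview environment comes from the base branch.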
Are preview environments secure?
Preview environments should be treated as non-production environments. Best practices include restricting access with authentication or IP allowlists, using separate credentials from production, and automatically destroying environments when pull requests are closed to minimize the attack surface.