How to Deploy Laravel to Kubernetes
Intro
Most tutorials on deploying Laravel to Kubernetes stop at getting a basic app running. But a real Laravel application has queue workers, a scheduler, migrations that need to run on every deploy, environment variables, SSL certificates...
This post is intended for developers who want to deploy a Laravel application to Kubernetes (k8s) on a real cloud provider (e.g., AWS, Google Cloud, DigitalOcean) with a production-ready configuration, not a local Minikube setup. After reading it, you'll know how to:
- Dockerize your Laravel application
- Write the Kubernetes manifests for your Laravel app
- Connect to MySQL
- Run your Laravel database migrations safely
- Deploy queue workers
- Configure Kubernetes to execute your Laravel scheduler
- Set up SSL with Let's Encrypt
- Configure auto-scaling to handle traffic spikes
Prerequisites
- Docker installed
- Basic familiarity with Docker (you don't need to be an expert; we'll walk through the Dockerfile)
- Access to a Kubernetes cluster on a cloud provider (e.g., Amazon EKS, Google GKE, DigitalOcean DOKS)
- kubectl CLI tool installed and set up with access to the Kubernetes cluster
- A working Laravel app on your local computer (this post covers Laravel 13, but is also valid for older versions of Laravel)
- Access to a Container Registry to upload your Docker images
- A MySQL database (we recommend a managed database service from your cloud provider; we'll explain why later)
The following components should be installed in your Kubernetes cluster:
- Metrics Server, to enable auto-scaling with Kubernetes HPA
- Cert Manager with a ClusterIssuer configured for Let's Encrypt, to issue SSL certificates automatically
- Traefik Ingress Controller, to expose your app to the Internet. This guide uses Traefik because the community ingress-nginx project was retired in March 2026 with no further security patches or updates. If you see other tutorials recommending it, they're outdated. If you prefer to stay in the NGINX ecosystem, F5 maintains a separate, actively supported NGINX Ingress Controller.
Dockerizing Your Laravel App for Production
Don't start from a bare Ubuntu or Debian image and install PHP yourself. You'll spend hours configuring extensions, permissions, and security hardening, and you'll probably miss something.
Instead, we're going to use the serversideup/php Nginx/PHP-FPM Docker image as our base. It comes with sensible production defaults, proper permissions, and the extensions most Laravel apps need out of the box.
In this example, we're using PHP 8.4, so the image name and tag are serversideup/php:8.4-fpm-nginx. For building the frontend assets, we're using an Alpine-based Node.js 24 (LTS) image: node:24-alpine.
Node.js is only needed to compile frontend assets. A multi-stage build lets us use it during the build step without including it in the final image, keeping the image small.
Make sure you have a .dockerignore file in the root of your project. Without it, Docker will copy everything into the build context, including node_modules, .env, .git, and vendor. This slows down builds and can leak secrets into your image. At a minimum, exclude:
.env
.git
node_modules
vendor
storage/logs
Here's the final Dockerfile:
# syntax=docker/dockerfile:1
ARG PHP_VERSION=8.4
FROM serversideup/php:${PHP_VERSION}-fpm-nginx AS basephp
COPY . /var/www/html
RUN composer install --optimize-autoloader --prefer-dist --no-dev --no-interaction
# Build the frontend assets with Node.js, starting from the app (with composer dependencies)
FROM node:24-alpine AS node_modules
WORKDIR /var/www/html
COPY --from=basephp /var/www/html /var/www/html
RUN npm install
RUN npm run build
# Final image: basephp plus the compiled public directory from the node_modules stage
FROM basephp
COPY --from=node_modules /var/www/html/public /var/www/html/public
ENV SSL_MODE=off
Before building, you can run your Dockerfile through Dockadvisor to catch common problems: syntax errors, security issues, and style inconsistencies.
For the rest of this guide, we'll assume the image has been built and pushed to a remote registry at: deckrun/laravel:76d57ec. Note we're using the Git commit SHA as the image tag, not latest. This is intentional: immutable tags make deploys predictable and rollbacks trivial.
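As a sketch, deriving that tag in a build script might look like this. The timestamp fallback and the registry name are assumptions for illustration, not part of this guide's setup:

```shell
# Derive an immutable image tag from the current Git commit; fall back to a
# timestamp when not in a repo, so a CI script never silently pushes ":latest".
TAG=$(git rev-parse --short HEAD 2>/dev/null || date +%Y%m%d%H%M%S)
IMAGE="deckrun/laravel:${TAG}"
echo "$IMAGE"
# docker build -t "$IMAGE" . && docker push "$IMAGE"
```

Because the tag is tied to a commit, rolling back is just redeploying a previous tag.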
Kubernetes Manifests — The Core Deployment
Now let's write the Kubernetes manifests to run the app.
Secrets
To store sensitive data like the Laravel APP_KEY, we use a Kubernetes Secret instead of hardcoding it in the Deployment. There are a few reasons for this:
- Secrets can be managed separately from your Deployment YAML, so you don't commit credentials to Git.
- They don't appear in plain text when you run kubectl describe deployment.
- Multiple resources (web, workers, scheduler, migration jobs) can reference the same Secret, so you update credentials in one place.
- On most managed Kubernetes clusters, Secrets are encrypted at rest in etcd.
Heads up: Kubernetes Secrets are only base64-encoded by default, not encrypted. The security comes from access control (RBAC) and encryption at rest, not the format itself. For highly sensitive environments, consider external tools like Vault or External Secrets Operator.
apiVersion: v1
kind: Secret
metadata:
  name: laravel-secret
data:
  APP_KEY: YmFzZTY0OnY5RG9MQ2E3bERkdU1UeEp2dHpWcUpMajdXb2VKeFMzQytleTlQcXhqa0k9Cg==
In practice, you'd create this Secret with kubectl create secret generic laravel-secret --from-literal=APP_KEY=$(php artisan key:generate --show) rather than committing the YAML above.
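If you're curious what kubectl does under the hood, the encoding round trip is plain base64. The key below is a placeholder, not a real Laravel APP_KEY:

```shell
# What `kubectl create secret` does for you: base64-encode the value into .data.
# Placeholder key, not a real APP_KEY.
APP_KEY='base64:placeholder-app-key-value='
ENCODED=$(printf '%s' "$APP_KEY" | base64)
DECODED=$(printf '%s' "$ENCODED" | base64 -d)
echo "$DECODED"
# To inspect a live Secret later:
#   kubectl get secret laravel-secret -o jsonpath='{.data.APP_KEY}' | base64 -d
```

Note the use of printf instead of echo: echo appends a trailing newline, which would end up inside the decoded key.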
Web Deployment
Once we have the secret defined, we can write the files to deploy our app.
Here's the Deployment for our main Laravel app:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: laravel-app
  labels:
    app: laravel-app
spec:
  selector:
    matchLabels:
      app: laravel-app
  template:
    metadata:
      labels:
        app: laravel-app
    spec:
      automountServiceAccountToken: false
      containers:
        - name: laravel-app
          image: deckrun/laravel:76d57ec
          imagePullPolicy: IfNotPresent
          env:
            - name: PORT
              value: "8080"
            - name: APP_DEBUG
              value: "false"
            - name: APP_ENV
              value: production
            - name: LOG_CHANNEL
              value: stderr
            - name: LOG_LEVEL
              value: info
            - name: APP_KEY
              valueFrom:
                secretKeyRef:
                  key: APP_KEY
                  name: laravel-secret
                  optional: false
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /up
              port: 8080
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          ports:
            - containerPort: 8080
              name: http
              protocol: TCP
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /up
              port: 8080
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          resources:
            limits:
              cpu: "1"
              memory: 512Mi
            requests:
              cpu: 500m
              memory: 256Mi
This Deployment defines everything Kubernetes needs to run your Laravel app: the environment variables (with APP_KEY pulled from the Secret we defined earlier), the container image to use, and the port it listens on. The livenessProbe and readinessProbe both hit Laravel's built-in /up health endpoint: the liveness probe restarts the container if the app stops responding, and the readiness probe keeps traffic away from pods that aren't ready to serve requests yet (for example, during startup).
The resources block sets CPU and memory requests and limits, which Kubernetes uses both to schedule pods on nodes with enough capacity and to enforce limits if a pod tries to consume more than it should.
Web Service
The Deployment runs our pods, but pods alone aren't reachable. We need a Service to give them a stable internal address:
apiVersion: v1
kind: Service
metadata:
  name: laravel-app
  labels:
    app: laravel-app
spec:
  selector:
    app: laravel-app
  type: ClusterIP
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: 8080
This Service gives our Deployment a stable internal address. Pods come and go (Kubernetes might restart them, scale them up, or move them to a different node), and each new pod gets a different IP. The Service abstracts that away: anything inside the cluster can reach our Laravel app at laravel-app:80, regardless of which pods are running behind it.
The type: ClusterIP means this Service is only reachable from inside the cluster. That's what we want for a web app: external traffic should come through an Ingress, not directly to the Service. We'll set that up later.
Connecting to MySQL
Now we need to connect our Laravel app to MySQL.
Don't deploy MySQL inside your cluster. Running MySQL in the cluster means you're on the hook for backups, replication, failover, persistent volumes, and version upgrades. You're doing DBA work without being a DBA.
Use a managed database from your cloud provider instead (DigitalOcean Managed MySQL, Amazon RDS, Google Cloud SQL). You get automated backups, high availability, security patches, and monitoring, and you only deal with a connection string. It costs more than running it yourself, but the time you save and the incidents you avoid make it worth it.
We'll add the database password (base64-encoded) to the Secret we already created. In practice, you'd create or update the Secret using kubectl rather than committing the base64-encoded password to Git.
DB_PASSWORD: eUViNXc3cExvMHR5cHlGdFlFTEFnMVgzdXFFZ3B4ZjkK
And add these environment variables to the env section of the Deployment:
- name: DB_CONNECTION
  value: "mysql"
- name: DB_HOST
  value: "private-laravel-app-mysql-prd-do-user-1379367-0.d.db.ondigitalocean.com"
- name: DB_PORT
  value: "25060"
- name: DB_DATABASE
  value: "defaultdb"
- name: DB_USERNAME
  value: "doadmin"
- name: DB_PASSWORD
  valueFrom:
    secretKeyRef:
      key: DB_PASSWORD
      name: laravel-secret
      optional: false
For DB_HOST, use the private hostname from your provider, not the public one. Private hostnames route traffic through your VPC, which is faster and safer: the traffic never leaves the cloud provider's internal network, and the database doesn't need to be exposed to the public internet. Most managed databases let you disable public access entirely once you've confirmed private connectivity works.
Note: You'll need to add these env vars to your Worker Deployment and Scheduler CronJob too.
Running Migrations
Every time you deploy a new version of your Laravel app, you need to run database migrations to update the schema for the new code. These migrations should run exactly once per deployment, which rules out running them in the entrypoint of your container (every pod would try to run them) or as an init container (same problem with multiple replicas). The right tool is a Kubernetes Job: a resource that runs a command to completion, once.
apiVersion: batch/v1
kind: Job
metadata:
  labels:
    app: laravel-app
  name: laravel-migrations-76d57ec
spec:
  ttlSecondsAfterFinished: 3600
  template:
    spec:
      automountServiceAccountToken: false
      restartPolicy: Never
      containers:
        - name: laravel-migrations
          image: deckrun/laravel:76d57ec
          command: ["php", "artisan", "migrate", "--force"]
          env:
            - name: APP_DEBUG
              value: "false"
            - name: APP_ENV
              value: production
            - name: LOG_CHANNEL
              value: stderr
            - name: LOG_LEVEL
              value: info
            - name: APP_KEY
              valueFrom:
                secretKeyRef:
                  key: APP_KEY
                  name: laravel-secret
                  optional: false
            - name: DB_CONNECTION
              value: "mysql"
            - name: DB_HOST
              value: "private-laravel-app-mysql-prd-do-user-1379367-0.d.db.ondigitalocean.com"
            - name: DB_PORT
              value: "25060"
            - name: DB_DATABASE
              value: "defaultdb"
            - name: DB_USERNAME
              value: "doadmin"
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  key: DB_PASSWORD
                  name: laravel-secret
                  optional: false
Two important details here. First, the Job name includes the image tag (laravel-migrations-76d57ec): Jobs are immutable by name, so if you reuse the same name on your next deploy, Kubernetes will refuse to run it again; including the tag creates a new Job per deploy. Second, ttlSecondsAfterFinished: 3600 tells Kubernetes to clean up the Job one hour after it completes, so finished Jobs don't pile up in your cluster. Apply the Job first, wait for it to complete, then roll out the new Deployment:
kubectl apply -f migrations-job.yaml
kubectl wait --for=condition=complete job/laravel-migrations-76d57ec --timeout=300s
kubectl apply -f deployment.yaml
If you skip the wait step and apply both at the same time, your new pods might start serving traffic against the old database schema.
All of this (naming the Job uniquely, waiting for completion, handling retries and cleanup) is boilerplate that every Laravel deployment needs. In Deckrun, you write it once as a pre-deployment hook in your config file, and it runs on every deploy automatically.
Queue Workers
Laravel queue workers run as a separate process. We'll deploy them as their own Deployment, using the same Docker image but running php artisan queue:work instead of serving HTTP:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: laravel-worker
  labels:
    app: laravel-worker
spec:
  selector:
    matchLabels:
      app: laravel-worker
  template:
    metadata:
      labels:
        app: laravel-worker
    spec:
      automountServiceAccountToken: false
      terminationGracePeriodSeconds: 60
      containers:
        - name: laravel-worker
          image: deckrun/laravel:76d57ec
          imagePullPolicy: IfNotPresent
          command:
            - php
            - artisan
            - queue:work
            - --max-time=3600
            - --max-jobs=1000
            - --sleep=3
            - --tries=3
          env:
            - name: APP_DEBUG
              value: "false"
            - name: APP_ENV
              value: production
            - name: LOG_CHANNEL
              value: stderr
            - name: LOG_LEVEL
              value: info
            - name: APP_KEY
              valueFrom:
                secretKeyRef:
                  key: APP_KEY
                  name: laravel-secret
                  optional: false
          resources:
            limits:
              cpu: "1"
              memory: 512Mi
            requests:
              cpu: 500m
              memory: 256Mi
The terminationGracePeriodSeconds: 60 is important here. When Kubernetes needs to stop a pod (during a deploy, a scale-down, or a node drain), it sends a SIGTERM signal and waits for the container to exit cleanly. By default, Kubernetes only waits 30 seconds before force-killing the process with SIGKILL. That's not enough for a queue worker: if it's in the middle of processing a job when the signal arrives, it needs time to finish that job before exiting. Setting the grace period to 60 seconds gives queue:work room to complete the current job and shut down cleanly, instead of leaving jobs in an inconsistent state. If your jobs take longer than 60 seconds to complete, increase this value accordingly.
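A toy shell model of that shutdown sequence, independent of Laravel (the "job" here is just an echo): a process that traps SIGTERM gets to finish its current unit of work and exit cleanly.

```shell
# Toy worker: traps SIGTERM, "finishes the current job", exits 0 -- which is
# what queue:work does when Kubernetes sends SIGTERM during the grace period.
sh -c 'trap "echo finishing current job; exit 0" TERM
       while :; do sleep 0.1; done' &
worker=$!
sleep 0.3                 # worker is mid-loop, "processing"
kill -TERM "$worker"      # what the kubelet sends first
wait "$worker"
status=$?
echo "worker exit status: $status"
```

If the process ignored SIGTERM, or the grace period ran out, the kubelet's follow-up SIGKILL would end it mid-job; the 60-second buffer is what prevents that.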
A note on the queue:work flags: long-running PHP workers tend to leak memory over time, so --max-time=3600 and --max-jobs=1000 make the worker exit after one hour or 1,000 jobs, whichever comes first. Kubernetes then restarts it automatically because it's a Deployment, giving you a fresh process with clean memory. --sleep=3 avoids hammering the queue backend when it's empty, and --tries=3 retries failed jobs before giving up.
In Deckrun, a queue worker is a single entry in your config file, no extra YAML.
The Scheduler
Laravel's scheduler normally runs as a cron entry on your server, executing php artisan schedule:run every minute. On Kubernetes, the equivalent is a CronJob: a resource that spins up a container on a schedule you define.
You might be wondering why we use a CronJob with schedule:run instead of a long-running Deployment with schedule:work. The difference matters: schedule:work keeps a PHP process alive permanently, consuming memory even when there's nothing to run. With a CronJob, Kubernetes spins up a container every minute, runs the scheduled tasks, and the container exits. You get a clean process every time, and kubectl get jobs gives you a history of recent executions with their success or failure status.
apiVersion: batch/v1
kind: CronJob
metadata:
  labels:
    app: laravel-app
  name: laravel-app-scheduler
  namespace: default
spec:
  schedule: '* * * * *'
  jobTemplate:
    spec:
      template:
        metadata:
          labels:
            app: laravel-app
        spec:
          automountServiceAccountToken: false
          containers:
            - name: laravel-app-scheduler
              image: deckrun/laravel:76d57ec
              imagePullPolicy: IfNotPresent
              command:
                - php
                - artisan
                - schedule:run
              env:
                - name: APP_DEBUG
                  value: "false"
                - name: APP_ENV
                  value: production
                - name: LOG_CHANNEL
                  value: stderr
                - name: LOG_LEVEL
                  value: info
                - name: APP_KEY
                  valueFrom:
                    secretKeyRef:
                      key: APP_KEY
                      name: laravel-secret
                      optional: false
              resources:
                limits:
                  cpu: "1"
                  memory: 512Mi
                requests:
                  cpu: 500m
                  memory: 256Mi
          restartPolicy: OnFailure
In Deckrun, the scheduler is one line in your config file, no CronJob YAML required.
SSL and Custom Domains
Getting traffic from the internet to your app requires two things: a way to route external requests to the right Service, and a valid SSL certificate for your domain. On Kubernetes, the first piece is an Ingress resource, and the second is handled by cert-manager issuing Let's Encrypt certificates automatically.
We're assuming both pieces are already in place on your cluster (see the prerequisites): Traefik as the Ingress Controller, and cert-manager with a ClusterIssuer configured for Let's Encrypt. Setting those up is a topic of its own; the cert-manager docs cover it well.
With those installed, the only thing left is to define an Ingress for our app.
Ingress
An Ingress tells Kubernetes how external traffic should reach your Service:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt
  labels:
    app: laravel-app
  name: laravel-app
  namespace: default
spec:
  ingressClassName: traefik
  rules:
    - host: laravel-app.143.244.198.216.nip.io
      http:
        paths:
          - backend:
              service:
                name: laravel-app
                port:
                  number: 80
            path: /
            pathType: Prefix
  tls:
    - hosts:
        - laravel-app.143.244.198.216.nip.io
      secretName: laravel-app-tls-cert
The cert-manager.io/cluster-issuer: letsencrypt annotation tells cert-manager to automatically issue a Let's Encrypt certificate for our domain. Without this, you'd need to generate and rotate certificates manually.
The ingressClassName: traefik field determines which Ingress Controller handles the incoming traffic.
In the rules section, we map our domain to the laravel-app Service on port 80. We're using a nip.io domain here, a free service that maps any IP address to a hostname without needing to buy a domain. In production, you'd replace this with your actual domain (e.g., myapp.example.com).
The tls section references a Secret where cert-manager will store the generated certificate. You don't need to create this Secret yourself; cert-manager handles it automatically.
Once the Ingress is applied, cert-manager will detect the tls block, request a certificate from Let's Encrypt, and store it in the Secret you named. Traefik automatically picks up any Ingress resource with ingressClassName: traefik and starts routing traffic to the backend Service. Traefik will then serve HTTPS traffic for your domain using that certificate, and renew it automatically before it expires.
In Deckrun, you add a custom domain with a single CLI command. Certificates are provisioned and renewed automatically.
Auto-scaling
A single pod is rarely enough for production: traffic spikes will knock it over, and keeping 10 pods running 24/7 when you only need them at peak hours is a waste of money. Kubernetes solves this with the Horizontal Pod Autoscaler (HPA), which automatically adjusts the number of replicas based on metrics you define.
We'll define two HPAs, one for the web app and one for the queue worker, because each scales on different signals:
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  labels:
    app: laravel-app
  name: laravel-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: laravel-app
  minReplicas: 2
  maxReplicas: 5
  metrics:
    - resource:
        name: cpu
        target:
          averageUtilization: 70
          type: Utilization
      type: Resource
The web app scales on CPU because incoming HTTP traffic is the main driver of its workload. With minReplicas: 2, we always keep two pods running to handle small traffic spikes and survive single-pod failures without downtime. When average CPU usage across pods goes above 70%, Kubernetes adds replicas up to a maximum of 5. When traffic drops, it scales back down.
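For intuition, the scaling decision itself is a simple formula (taken from the Kubernetes HPA documentation); the numbers below are a sanity check using this guide's 70% target:

```shell
# The HPA's scaling rule:
#   desiredReplicas = ceil(currentReplicas * currentMetricValue / targetMetricValue)
# With 2 pods averaging 105% CPU against a 70% target, the HPA scales to 3.
current_replicas=2
current_cpu=105   # average utilization across pods, in percent
target_cpu=70
desired=$(( (current_replicas * current_cpu + target_cpu - 1) / target_cpu ))
echo "desired replicas: $desired"
```

The result is then clamped between minReplicas and maxReplicas, so a huge spike still tops out at 5 pods here.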
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  labels:
    app: laravel-app
  name: laravel-worker
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: laravel-worker
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - resource:
        name: memory
        target:
          averageValue: 192Mi
          type: AverageValue
      type: Resource
The worker scales on memory instead of CPU. Queue workers usually spend time waiting on I/O (database queries, external API calls, file operations) rather than burning CPU, so CPU utilization is a poor signal for scaling them. Memory usage, on the other hand, grows with how much data each worker is holding while processing jobs.
Ideally, you'd scale workers based on queue depth (the number of pending jobs), but that requires custom metrics. That's a topic for another post.
These numbers (70%, 192Mi, min and max replicas) are starting points. You'll want to adjust them based on your actual traffic patterns and worker memory usage.
One important thing: once an HPA is active, it owns the replicas field of its Deployment. If you manually change replicas in your Deployment YAML and apply it, the HPA will immediately overwrite your change. If you need to pause auto-scaling temporarily (for example, during a debugging session), delete the HPA first, then scale the Deployment manually.
In Deckrun, auto-scaling is a few lines in your config file. CPU, memory, min and max replicas, applied to web or workers.
The Full Picture
If you've made it this far, let's count what we've built:
- A Dockerfile with multi-stage build
- A .dockerignore
- A Deployment for the web app (with health checks, resources, env vars)
- A Service to expose the web app inside the cluster
- A Job for database migrations (with unique naming per deploy and cleanup)
- A Deployment for queue workers (with graceful shutdown and memory management)
- A CronJob for the scheduler
- An Ingress for SSL and custom domain
- Two HPAs for auto-scaling web and workers
That's over 300 lines of YAML across 9 files, not counting the Dockerfile. And this is the minimum for a production-ready Laravel app. Every time you add a new environment variable, a new domain, or change a resource limit, you edit YAML. Every time you onboard a new developer to your team, they have to learn all of this.
There's another way to do this.
Here's the same Laravel app, deployed with Deckrun:
# deckrun.toml
app = 'my-laravel-app'
[env]
APP_ENV = 'production'
LOG_LEVEL = 'info'
APP_DEBUG = 'false'
[deploy]
[deploy.pre]
command = 'php artisan migrate --force'
[[processes]]
name = 'app'
size = 'small'
[processes.http]
internal_port = 8080
[processes.liveness]
type = 'http'
path = '/up'
[processes.readiness]
type = 'http'
path = '/up'
[processes.autoscaling]
min_replicas = 2
max_replicas = 5
[[processes.autoscaling.metrics]]
resource = 'cpu'
avg_utilization = 70
[[processes]]
name = 'worker'
command = 'php artisan queue:work --max-time=3600 --max-jobs=1000 --sleep=3 --tries=3'
size = 'micro'
[processes.autoscaling]
min_replicas = 1
max_replicas = 10
[[processes.autoscaling.metrics]]
resource = 'memory'
avg_value = '192Mi'
[[cronjobs]]
name = 'scheduler'
schedule = '* * * * *'
command = 'php artisan schedule:run'
size = 'pico'
That's it. One file, under 40 lines. No Secret YAML, no Deployment YAML, no Service, no Ingress, no HPA, no Job, no CronJob. Deckrun generates and manages all of that for you on your own Kubernetes cluster (DigitalOcean or Scaleway), running on infrastructure you control.
Secrets are set from the CLI, so they stay out of your config file and out of your Git repo:
deck secrets set APP_KEY
deck secrets set DB_PASSWORD
And your app gets a free HTTPS subdomain on deploy (something like my-laravel-app.deckrun.app), with a valid SSL certificate out of the box. When you're ready to use your own domain, it's one CLI command away.
You deploy with one command:
deck deploy
No kubectl. No context switching. No copying YAML between files.
And you keep everything that makes Kubernetes worth using: your own cloud provider, your own infrastructure, no vendor lock-in, predictable pricing. What you drop is the 300 lines of boilerplate that look the same in every Laravel project.
Try Deckrun free for 30 days — no credit card required.
FAQ
Should I run MySQL inside Kubernetes?
No. Databases are stateful, and Kubernetes is designed for stateless workloads. Running MySQL in the cluster means you're responsible for backups, replication, failover, persistent volumes, and version upgrades. Use a managed database from your cloud provider instead (DigitalOcean Managed MySQL, Amazon RDS, Google Cloud SQL). It costs more than self-hosting, but the time you save and the incidents you avoid make it worth it.
How do I run Laravel migrations on Kubernetes?
Use a Kubernetes Job that runs php artisan migrate --force once per deployment. Don't run migrations in your container's entrypoint (every pod would try to run them) or as an init container (same problem with multiple replicas). Name the Job uniquely per deploy (for example, including the Git commit SHA) because Jobs are immutable by name in Kubernetes. Apply the Job first, wait for it to complete with kubectl wait, then apply your Deployment to avoid new pods serving traffic against an old schema.
How do I scale Laravel queue workers on Kubernetes?
Deploy queue workers as a separate Kubernetes Deployment (not the same one as your web app) and use a HorizontalPodAutoscaler to scale them based on memory usage. CPU is a poor signal for workers because they typically wait on I/O rather than burning CPU. Memory usage grows with the amount of data each worker holds while processing jobs, which correlates better with actual load. For more precise scaling, you can use custom metrics based on queue depth, but that requires additional tooling like KEDA.
How do I view logs from my Laravel app on Kubernetes?
Use kubectl logs <pod-name> to view logs from a specific pod, or kubectl logs -l app=laravel-app --tail=100 -f to stream logs from all pods with a given label. For this to work reliably, your Laravel app must write logs to stdout/stderr instead of a log file. Set LOG_CHANNEL=stderr in your environment variables so Laravel sends log output where Kubernetes can collect it. For long-term log storage and search, pipe your cluster logs into a centralized logging service (Grafana Loki, Datadog, or your cloud provider's logging product).
Do I need Helm or kustomize to deploy Laravel on Kubernetes?
No, you can deploy Laravel with plain YAML manifests, as shown in this guide. Helm and kustomize become useful when you need to deploy the same application to multiple environments (staging, production, per-customer) with different configurations, or when you're managing many applications across a team. For a single Laravel app in production, they add complexity without much benefit. Start with plain YAML and reach for Helm or kustomize only when you actually need to parameterize your deployments.
How much does it cost to deploy Laravel on Kubernetes?
Expect a minimum of around $80-120/month for a basic production setup on DigitalOcean: about $50-70/month for a 2-node DOKS cluster (2 vCPU / 4 GB each), $15-25/month for a managed MySQL database, $10/month for a load balancer, and a few dollars for a container registry and object storage if you need it. This doesn't include your Laravel app's compute usage, which scales with traffic. Kubernetes is not the cheapest way to deploy a Laravel app (a single VPS is), but it gives you auto-scaling, zero-downtime deploys, and portability in exchange for that baseline cost.