
Overview

The Deployments page provides comprehensive management of Kubernetes deployments. View deployment status, manage replicas, restart deployments, edit YAML manifests, and monitor associated pods.

List View

Features

Real-time Status

Live deployment status with available/desired replicas

Quick Actions

Restart deployments and edit YAML directly from the list

Search & Filter

Find deployments by name and filter by status

Sorting

Sort by name, status, replicas, or age

Deployment Status

Deployments display one of three statuses:
  • Available (green badge): All replicas are running and ready; the deployment is healthy.
  • Progressing (yellow badge): The deployment is currently updating, scaling, or rolling out changes.
  • Degraded (red badge): Some replicas are not ready, or the deployment has issues.
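The badge can be thought of as a function of the replica counts plus a rollout-in-progress signal. A minimal sketch of that classification (illustrative only, not the dashboard's actual logic):

```python
def deployment_status(desired: int, available: int, progressing: bool) -> str:
    """Classify a deployment the way the status badge does (simplified model)."""
    if desired > 0 and available >= desired:
        return "Available"    # all replicas running and ready
    if progressing:
        return "Progressing"  # update, scale, or rollout in flight
    return "Degraded"         # replicas missing and no rollout in progress
```

In practice the dashboard also consults the deployment's status conditions, but replica counts alone cover the common cases.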

Table Columns

  • Name: Deployment name (clickable to view details)
  • Status: Current deployment status badge
  • Replicas: Available/desired replica count (e.g., "3/3")
  • Age: Time since deployment creation
  • Actions: Quick action buttons (Restart, Edit YAML)

Detail View

Click any deployment name to view comprehensive details.

Overview Section

1. Basic Information
  • Namespace: Current namespace
  • Labels: Key-value labels
  • Selectors: Pod selector labels
  • Strategy: Deployment strategy (RollingUpdate/Recreate)
2. Replica Information
  • Desired Replicas: Target number of pods
  • Available Replicas: Currently running and ready
  • Unavailable Replicas: Pods not ready
  • Updated Replicas: Pods running the latest template
3. Timestamps
  • Created: When the deployment was created
  • Last Updated: Most recent change timestamp

Pods Section

View all pods belonging to this deployment. Pod Information:
  • Name: Pod name with status icon
  • Status: Running, Pending, Failed, etc.
  • Node: Node where pod is running
  • Restarts: Number of container restarts
  • Age: Pod uptime
Click any pod name to navigate to the pod detail page with logs and events

Topology View

Visualize deployment relationships. The topology graph shows:
  • Deployment (center node)
  • ConfigMaps used by deployment
  • Secrets used by deployment
  • HorizontalPodAutoscaler (if configured)
  • Pods managed by deployment
Interactive graph - click nodes to navigate to their detail pages

Resource Metrics

Real-time resource usage charts (when metrics are available):
  • CPU Usage: current CPU usage vs. limits
  • Memory Usage: current memory usage vs. limits

Events

Recent events related to this deployment:
  • Scaled up/down events
  • Image pull events
  • Replica set creation
  • Pod scheduling events
  • Error events

Actions

Restart Deployment

Perform a rolling restart (equivalent to kubectl rollout restart):
1. Click Restart Button: In the list view or detail view, click the "Restart" button.
2. Confirm Action: A dialog appears asking for confirmation. Note: this triggers a rolling restart of all pods; ensure your deployment has multiple replicas to avoid downtime.
3. Monitor Progress: Watch the deployment status change to "Progressing" as pods restart.
4. Completion: Status returns to "Available" when all pods are ready.
Use Cases:
  • Apply configuration changes from ConfigMaps/Secrets
  • Clear pod-level issues
  • Restart after environment updates
  • Force image pull (if using :latest tag)
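Under the hood, kubectl rollout restart (and equivalent UI restart buttons) typically trigger the rollout by stamping a timestamp annotation onto the pod template; the template change produces a new ReplicaSet and cycles the pods. Illustrative fragment:

```yaml
spec:
  template:
    metadata:
      annotations:
        # Updated on every restart; the template change triggers a rollout
        kubectl.kubernetes.io/restartedAt: "2024-01-15T10:30:00Z"
```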

Edit YAML

Edit the deployment manifest and create a Pull Request:
1. Open YAML Editor: Click the "Edit YAML" button in the detail view.
2. Make Changes: A Monaco editor appears with the full YAML manifest. Editor features: syntax highlighting, auto-completion, error detection.
3. Common Edits:
  • Update the container image tag
  • Adjust resource limits/requests
  • Add/modify environment variables
  • Change the replica count
  • Update labels/annotations
4. Create Pull Request: Click "Create PR" to submit changes via the GitHub integration. Requires GitHub App setup; see GitHub Integration.
Example: Update Image Tag
spec:
  template:
    spec:
      containers:
      - name: app
        image: myapp:v1.0.0  # Change to v1.1.0
        resources:
          limits:
            memory: "512Mi"
            cpu: "500m"

Status Filter

Filter deployments by status:
  • All: Show all deployments regardless of status
Use the global search bar to find deployments by name:
Search: api-gateway
Result: Filters to deployments containing "api-gateway"
Search is case-insensitive and matches partial names.
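The case-insensitive partial match described above behaves like this sketch (illustrative, not the dashboard's implementation):

```python
def filter_deployments(names: list[str], query: str) -> list[str]:
    """Case-insensitive substring match on deployment names."""
    q = query.lower()
    return [name for name in names if q in name.lower()]

matches = filter_deployments(["api-gateway", "api-gateway-v2", "billing"], "API-Gateway")
# matches == ["api-gateway", "api-gateway-v2"]
```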

ConfigMaps & Secrets

Deployments often reference ConfigMaps and Secrets:
1. View in Topology: See linked ConfigMaps/Secrets in the topology graph.
2. Navigate to Resource: Click ConfigMap/Secret nodes to view details.
3. Update Configuration: Edit ConfigMap/Secret values and restart the deployment to apply the changes.
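For context, a deployment usually wires in a ConfigMap or Secret through environment variables or mounted volumes; a minimal fragment (resource names are illustrative):

```yaml
spec:
  template:
    spec:
      containers:
      - name: app
        envFrom:
        - configMapRef:
            name: app-config        # illustrative ConfigMap
        - secretRef:
            name: app-credentials   # illustrative Secret
```

Environment variables sourced this way are read at container start, which is why changing a ConfigMap or Secret requires restarting the deployment for the new values to take effect.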

Horizontal Pod Autoscaler

If an HPA is configured:
  • View target CPU utilization
  • See min/max replica limits
  • Monitor current replica count
  • Navigate to HPA details
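An HPA targeting a deployment looks roughly like this (autoscaling/v2, all names and values illustrative):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: app-hpa                    # illustrative name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: app                      # the deployment to scale
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70     # target CPU utilization (%)
```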

Best Practices

  • Always configure liveness and readiness probes for zero-downtime deployments.
  • Set CPU/memory requests and limits to prevent resource starvation.
  • Run at least 2 replicas for high availability.
  • Use the RollingUpdate strategy with appropriate maxSurge and maxUnavailable values.
  • Use specific version tags instead of :latest for reproducibility.
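These practices combined into one illustrative deployment fragment (image name, port, and probe path are placeholders):

```yaml
spec:
  replicas: 2                     # at least 2 for high availability
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1                 # one extra pod allowed during rollout
      maxUnavailable: 0           # never drop below the desired count
  template:
    spec:
      containers:
      - name: app
        image: myapp:v1.0.0       # pinned tag, not :latest
        resources:
          requests:
            cpu: "250m"
            memory: "256Mi"
          limits:
            cpu: "500m"
            memory: "512Mi"
        readinessProbe:
          httpGet:
            path: /healthz        # illustrative endpoint
            port: 8080
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
```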

Troubleshooting

Deployment Not Available

Symptom: Deployment shows as Degraded with 0/3 available replicas.
Possible Causes:
  • Image pull errors
  • Resource limits too low
  • Failed readiness probes
  • Node resource constraints
Solutions:
  1. Check Events tab for specific errors
  2. View pod details and logs
  3. Verify resource requests/limits
  4. Check node capacity

Pods Restarting Frequently

Symptom: High restart count in the pods table.
Possible Causes:
  • Application crashes
  • OOMKilled (memory limit exceeded)
  • Failed liveness probes
  • Resource constraints
Solutions:
  1. View pod logs for crash reasons
  2. Check container restart history
  3. Increase memory limits if OOMKilled
  4. Review liveness probe configuration
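For the OOMKilled case, the typical fix is raising the container's memory limit (and usually its request) in the manifest; values here are illustrative:

```yaml
spec:
  template:
    spec:
      containers:
      - name: app
        resources:
          requests:
            memory: "512Mi"
          limits:
            memory: "1Gi"         # raised after repeated OOMKills
```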

Rolling Update Stuck

Symptom: Deployment stuck in "Progressing" status.
Possible Causes:
  • New pods failing readiness checks
  • Insufficient cluster resources
  • Pod disruption budget blocking termination
Solutions:
  1. Check new pod logs and events
  2. Verify resource availability
  3. Review readiness probe configuration
  4. Check pod disruption budgets

Next Steps