When deploying Kubernetes in an enterprise environment or behind a firewall, there is often a need to configure a proxy server for accessing external resources. This is critically important for pulling container images, updating packages, and interacting with external APIs. In this guide, we will cover all levels of proxy configuration in Kubernetes—from node configuration to individual pods.
Why Use a Proxy in Kubernetes Clusters
Kubernetes clusters in enterprise environments often operate in isolated networks with limited internet access. A proxy server serves several critical functions:
- Pulling Container Images — Docker Hub, Google Container Registry, private registries require external access
- Package Updates — Installing dependencies via apt, yum, pip inside containers
- Accessing External APIs — Integration with cloud services, monitoring, logging
- Security — Traffic control, domain filtering, request logging
- Caching — Speeding up repeated requests to the same resources
Without proper proxy configuration, you may encounter errors like "image pull failed," "connection timeout," or "network unreachable" when trying to deploy applications. This is especially critical for automated CI/CD pipelines, where every second of downtime costs money.
Important: For enterprise clusters, it is recommended to use datacenter proxies with high bandwidth and connection stability, as the functionality of the entire infrastructure depends on them.
Levels of Proxy Configuration in Kubernetes
Kubernetes has a multi-layered architecture, and proxies need to be configured at each level depending on the tasks:
| Level | What is Configured | Purpose |
|---|---|---|
| Operating System | System Environment Variables | Access for utilities (curl, wget, apt) |
| Container Runtime (Docker/containerd) | Daemon Configuration | Pulling container images |
| kubelet | kubelet Launch Parameters | Interacting with the API server |
| Pods | Environment Variables in Manifests | Application access to external APIs |
| kubectl | Client Environment Variables | Managing the cluster through a proxy |
Each level requires separate configuration, and skipping any of them can lead to issues. For example, if you configure the proxy only for Docker but not for the pods, images will be pulled, but applications inside the containers will not be able to access external APIs.
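The OS, Docker/containerd, and kubelet levels below all use the same systemd drop-in pattern, differing only in the target service. As a sketch (the proxy URL and NO_PROXY list are placeholders to replace with your own), a small helper can generate the drop-in content for any service:

```shell
#!/bin/sh
# Sketch: emit a systemd drop-in that injects proxy variables into a service.
# The proxy URL and NO_PROXY list are placeholders -- substitute your own.
make_proxy_dropin() {
  proxy_url="$1"
  no_proxy="$2"
  cat <<EOF
[Service]
Environment="HTTP_PROXY=${proxy_url}"
Environment="HTTPS_PROXY=${proxy_url}"
Environment="NO_PROXY=${no_proxy}"
EOF
}

# Example: print the drop-in that would go to
# /etc/systemd/system/docker.service.d/http-proxy.conf
make_proxy_dropin "http://proxy.company.com:8080" \
  "localhost,127.0.0.1,10.0.0.0/8,.cluster.local,.svc"
```

The same output, redirected to the appropriate `*.service.d/http-proxy.conf` path and followed by `systemctl daemon-reload`, covers Docker, containerd, and kubelet alike.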
Configuring Proxy for Docker and containerd
The container runtime is the first component to configure, as it is responsible for pulling container images from external registries. Let's look at the configuration for both popular runtimes.
Configuring Proxy for Docker
For Docker, you need to create a systemd drop-in file that will add environment variables to the Docker service:
# Create a directory for the configuration
sudo mkdir -p /etc/systemd/system/docker.service.d
# Create a file with proxy settings
sudo tee /etc/systemd/system/docker.service.d/http-proxy.conf <<EOF
[Service]
Environment="HTTP_PROXY=http://proxy.company.com:8080"
Environment="HTTPS_PROXY=http://proxy.company.com:8080"
Environment="NO_PROXY=localhost,127.0.0.1,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16,.cluster.local,.svc"
EOF
# Reload the systemd configuration
sudo systemctl daemon-reload
# Restart Docker
sudo systemctl restart docker
# Check that the settings have been applied
sudo systemctl show --property=Environment docker
After this, Docker will be able to pull images through the proxy server. You can check its operation with the command:
docker pull nginx:latest
Configuring Proxy for containerd
containerd is the default container runtime in modern Kubernetes clusters (Docker Engine support via dockershim was removed in Kubernetes 1.24). Configuring the proxy for it is nearly identical:
# Create a directory for the configuration
sudo mkdir -p /etc/systemd/system/containerd.service.d
# Create a file with proxy settings
sudo tee /etc/systemd/system/containerd.service.d/http-proxy.conf <<EOF
[Service]
Environment="HTTP_PROXY=http://proxy.company.com:8080"
Environment="HTTPS_PROXY=http://proxy.company.com:8080"
Environment="NO_PROXY=localhost,127.0.0.1,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16,.cluster.local,.svc"
EOF
# Reload the configuration
sudo systemctl daemon-reload
sudo systemctl restart containerd
# Check the status
sudo systemctl status containerd
Tip: If you are using a private container registry, add its domain to NO_PROXY to avoid unnecessary delays and SSL certificate issues.
Configuring Proxy for kubelet
Kubelet is the Kubernetes agent that runs on each node in the cluster. It also needs access to the API server and external resources. The configuration depends on how Kubernetes is installed.
For kubeadm Clusters
If you are using kubeadm, configure the proxy through systemd:
# Create a directory for kubelet configuration
sudo mkdir -p /etc/systemd/system/kubelet.service.d
# Create a file with proxy settings
sudo tee /etc/systemd/system/kubelet.service.d/http-proxy.conf <<EOF
[Service]
Environment="HTTP_PROXY=http://proxy.company.com:8080"
Environment="HTTPS_PROXY=http://proxy.company.com:8080"
Environment="NO_PROXY=localhost,127.0.0.1,10.96.0.0/12,10.244.0.0/16,.cluster.local,.svc"
EOF
# Reload kubelet
sudo systemctl daemon-reload
sudo systemctl restart kubelet
For Managed Kubernetes (EKS, GKE, AKS)
In managed Kubernetes services, kubelet configuration is usually done through node launch parameters or user data scripts. For example, for AWS EKS:
#!/bin/bash
# User data script for EKS worker nodes
# Configure proxy for the system
cat <<EOF >> /etc/environment
HTTP_PROXY=http://proxy.company.com:8080
HTTPS_PROXY=http://proxy.company.com:8080
NO_PROXY=localhost,127.0.0.1,169.254.169.254,.ec2.internal,.cluster.local
EOF
# Configuration for kubelet
mkdir -p /etc/systemd/system/kubelet.service.d
cat <<EOF > /etc/systemd/system/kubelet.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://proxy.company.com:8080"
Environment="HTTPS_PROXY=http://proxy.company.com:8080"
Environment="NO_PROXY=localhost,127.0.0.1,169.254.169.254,.ec2.internal,.cluster.local"
EOF
systemctl daemon-reload
systemctl restart kubelet
Note the addition of 169.254.169.254 in NO_PROXY — this is the address of the AWS metadata service, which should be accessible without a proxy.
Configuring Proxy at the Pod Level
Even if you have configured the proxy for Docker and kubelet, applications inside the pods will not automatically use the proxy. You need to explicitly specify environment variables in the Kubernetes manifests.
Configuring via Deployment Manifest
The simplest way is to add environment variables to the container specification:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: app
          image: my-company/my-app:latest
          env:
            - name: HTTP_PROXY
              value: "http://proxy.company.com:8080"
            - name: HTTPS_PROXY
              value: "http://proxy.company.com:8080"
            - name: NO_PROXY
              value: "localhost,127.0.0.1,.cluster.local,.svc,10.0.0.0/8"
          ports:
            - containerPort: 8080
Using ConfigMap for Centralized Configuration
To avoid duplicating proxy settings in each Deployment, create a ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: proxy-config
  namespace: default
data:
  HTTP_PROXY: "http://proxy.company.com:8080"
  HTTPS_PROXY: "http://proxy.company.com:8080"
  NO_PROXY: "localhost,127.0.0.1,.cluster.local,.svc,10.0.0.0/8"
Then use this ConfigMap in the Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  template:
    spec:
      containers:
        - name: app
          image: my-company/my-app:latest
          envFrom:
            - configMapRef:
                name: proxy-config
This approach simplifies management: when the proxy address changes, you only need to update the ConfigMap, and after restarting the pods, the new settings will be applied automatically.
Automatic Injection via MutatingWebhook
To automatically add proxy variables to all pods, you can use a MutatingAdmissionWebhook. This is an advanced approach that requires developing your own webhook service, but it allows for centralized management of settings without modifying application manifests.
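As an illustration of the cluster-side half of that setup (the webhook service itself still has to be written; the names, namespace, and path below are hypothetical), the registration manifest would resemble:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: proxy-env-injector          # hypothetical name
webhooks:
  - name: proxy-env-injector.company.com
    clientConfig:
      service:
        name: proxy-injector        # your webhook Service
        namespace: kube-system
        path: /mutate
      caBundle: <base64-encoded CA> # CA that signed the webhook's TLS cert
    rules:
      - operations: ["CREATE"]
        apiGroups: [""]
        apiVersions: ["v1"]
        resources: ["pods"]
    admissionReviewVersions: ["v1"]
    sideEffects: None
```

On each pod creation, the API server calls the service at `/mutate`, and the webhook returns a JSON patch adding the proxy environment variables.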
Correct NO_PROXY Configuration
The NO_PROXY variable defines which addresses and domains should bypass the proxy server. Incorrect NO_PROXY configuration is the most common cause of issues in Kubernetes clusters.
Mandatory Exceptions for Kubernetes
The following addresses and ranges MUST ALWAYS be included in NO_PROXY:
| Address/Range | Purpose |
|---|---|
| localhost, 127.0.0.1 | Local connections |
| .cluster.local | Cluster-internal DNS |
| .svc | Kubernetes services |
| 10.0.0.0/8 | Pod network (depends on CNI) |
| 10.96.0.0/12 | Service network (default) |
| 172.16.0.0/12 | Docker private networks |
| 192.168.0.0/16 | Private local networks |
Example of Complete NO_PROXY Configuration
NO_PROXY=localhost,127.0.0.1,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16,10.96.0.0/12,.cluster.local,.svc,.default.svc,.default.svc.cluster.local,kubernetes.default.svc,kubernetes.default.svc.cluster.local
Warning: Some applications do not support CIDR notation in NO_PROXY. In such cases, use wildcards: 10.* instead of 10.0.0.0/8.
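To see why entries like .cluster.local and .svc work, here is a minimal sketch of the suffix matching most HTTP clients apply to NO_PROXY (exact CIDR and wildcard handling varies by library, so treat this as an approximation, not a specification):

```shell
#!/bin/sh
# Approximate NO_PROXY matching: exact host match, or domain-suffix match
# for entries that begin with a dot. CIDR entries are ignored here, since
# support for them differs between HTTP client libraries.
should_bypass_proxy() {
  host="$1"
  no_proxy_list="$2"
  for entry in $(echo "$no_proxy_list" | tr ',' ' '); do
    case "$entry" in
      .*) case "$host" in *"$entry") return 0 ;; esac ;;  # suffix match
      *)  [ "$host" = "$entry" ] && return 0 ;;           # exact match
    esac
  done
  return 1
}

NO_PROXY_LIST="localhost,127.0.0.1,.cluster.local,.svc"
should_bypass_proxy "my-svc.default.svc" "$NO_PROXY_LIST" && echo "bypass"
should_bypass_proxy "api.example.com" "$NO_PROXY_LIST" || echo "via proxy"
```

Running this prints "bypass" for the in-cluster name and "via proxy" for the external one, which is exactly the split a correct NO_PROXY should produce.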
Configuring kubectl to Work Through a Proxy
If you manage a cluster from a workstation that is behind a proxy, configure the environment variables for kubectl:
# For Linux/macOS - add to ~/.bashrc or ~/.zshrc
export HTTP_PROXY=http://proxy.company.com:8080
export HTTPS_PROXY=http://proxy.company.com:8080
export NO_PROXY=localhost,127.0.0.1,kubernetes.default.svc,.cluster.local
# For Windows PowerShell
$env:HTTP_PROXY="http://proxy.company.com:8080"
$env:HTTPS_PROXY="http://proxy.company.com:8080"
$env:NO_PROXY="localhost,127.0.0.1,kubernetes.default.svc,.cluster.local"
After this, kubectl will be able to connect to the cluster's API server through the proxy. Check its operation:
kubectl cluster-info
kubectl get nodes
Configuring Proxy with Authentication
If the proxy server requires authentication, add the credentials to the URL:
export HTTP_PROXY=http://username:password@proxy.company.com:8080
export HTTPS_PROXY=http://username:password@proxy.company.com:8080
Security: Do not store passwords in plain text in configuration files. Use environment variables or Kubernetes secrets to store proxy credentials.
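One way to keep the credentials out of shell history and manifests is a Kubernetes Secret. As a sketch (the secret name, key names, and credentials below are illustrative), the proxy URL can be base64-encoded into a Secret manifest with a small script:

```shell
#!/bin/sh
# Sketch: render a Secret manifest carrying a proxy URL with embedded
# credentials. All names and values here are illustrative placeholders.
PROXY_USER="svc-account"
PROXY_PASS="s3cret"
PROXY_URL="http://${PROXY_USER}:${PROXY_PASS}@proxy.company.com:8080"

# Kubernetes Secrets store values base64-encoded.
ENCODED=$(printf '%s' "$PROXY_URL" | base64 | tr -d '\n')

cat <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: proxy-credentials
type: Opaque
data:
  HTTP_PROXY: ${ENCODED}
  HTTPS_PROXY: ${ENCODED}
EOF
```

Pods can then consume it with `envFrom: - secretRef: name: proxy-credentials`, the same pattern as the ConfigMap approach; alternatively, `kubectl create secret generic proxy-credentials --from-literal=HTTP_PROXY=...` performs the encoding for you.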
Diagnosing and Solving Common Issues
Even with proper configuration, issues can arise. Let's look at the most common errors and how to resolve them.
"ImagePullBackOff" Error When Pulling Images
Symptoms: Pods do not start, and events show the error "Failed to pull image" or "connection timeout."
Diagnosis:
# Check pod events
kubectl describe pod <pod-name>
# Check proxy settings in Docker/containerd
sudo systemctl show --property=Environment docker
sudo systemctl show --property=Environment containerd
# Try pulling the image manually on the node
sudo docker pull nginx:latest
sudo crictl pull nginx:latest
Solution: Ensure that the proxy is configured for the container runtime and that the image registry domain is not in NO_PROXY.
DNS Resolution Issues Inside the Cluster
Symptoms: Pods cannot communicate with each other by DNS names (e.g., service-name.namespace.svc.cluster.local).
Diagnosis:
# Check DNS from the pod
kubectl run -it --rm debug --image=busybox --restart=Never -- nslookup kubernetes.default
# Check proxy variables in the pod
kubectl exec -it <pod-name> -- env | grep PROXY
Solution: Add .cluster.local and .svc to NO_PROXY.
Slow Performance or Timeouts When Accessing External APIs
Symptoms: Applications are slow or receive timeouts when making requests to external services.
Diagnosis:
# Check proxy availability from the pod
kubectl exec -it <pod-name> -- curl -v -x http://proxy.company.com:8080 https://www.google.com
# Measure response time
kubectl exec -it <pod-name> -- sh -c 'time curl -x http://proxy.company.com:8080 https://api.example.com'
Solution: The issue may be with the performance of the proxy server. Consider using residential proxies with geographically close locations to reduce latency.
SSL/TLS Errors When Working Through a Proxy
Symptoms: Errors like "certificate verify failed" or "SSL handshake failed."
Cause: Some proxy servers perform SSL inspection (decrypting HTTPS traffic), which requires installing the proxy's root certificate.
Solution:
# Create a ConfigMap with the proxy certificate
kubectl create configmap proxy-ca-cert --from-file=ca.crt=/path/to/proxy-ca.crt
# Mount the certificate in the pod and add it to the system store
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      containers:
        - name: app
          volumeMounts:
            - name: proxy-ca
              mountPath: /usr/local/share/ca-certificates/proxy-ca.crt
              subPath: ca.crt
      volumes:
        - name: proxy-ca
          configMap:
            name: proxy-ca-cert
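Mounting the file alone is not always enough: on Debian/Ubuntu-based images the certificate still has to be registered in the system trust store. One approach (assuming the image ships update-ca-certificates; the /app/server path is a placeholder for your application's entrypoint) is to register it before the application starts:

```yaml
containers:
  - name: app
    # /app/server is a placeholder for the application's real entrypoint
    command: ["sh", "-c", "update-ca-certificates && exec /app/server"]
    volumeMounts:
      - name: proxy-ca
        mountPath: /usr/local/share/ca-certificates/proxy-ca.crt
        subPath: ca.crt
```

On RHEL-based images the equivalent is mounting under /etc/pki/ca-trust/source/anchors/ and running update-ca-trust instead.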
Best Practices for Proxies in Production
Based on the experience of operating Kubernetes clusters in enterprise environments, here are recommendations for reliable proxy operation:
1. Use High-Availability Proxy Servers
The proxy becomes a single point of failure for the entire cluster. Set up multiple proxy servers behind a load balancer:
HTTP_PROXY=http://proxy-lb.company.com:8080
Where proxy-lb.company.com is the load balancer in front of several proxy servers.
2. Centralized Configuration Management
Use ConfigMap or Secret to store proxy settings instead of hardcoding them in each manifest:
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-proxy-config
  namespace: kube-system
data:
  HTTP_PROXY: "http://proxy-lb.company.com:8080"
  HTTPS_PROXY: "http://proxy-lb.company.com:8080"
  NO_PROXY: "localhost,127.0.0.1,.cluster.local,.svc,10.0.0.0/8"
3. Monitoring and Alerting
Set up monitoring for the availability of proxy servers and alerts for issues:
- Proxy response time (should be < 100ms for local proxies)
- Number of connection errors to the proxy
- Number of ImagePullBackOff events in the cluster
- CPU and network load on proxy servers
4. Document NO_PROXY Exceptions
Keep documentation of which domains and IP addresses are added to NO_PROXY and why. This will help with troubleshooting and security audits.
5. Test Changes in a Dev Environment
Before changing proxy settings in production, always test in a dev/staging cluster:
# Test pod to check the proxy
apiVersion: v1
kind: Pod
metadata:
  name: proxy-test
spec:
  containers:
    - name: test
      image: curlimages/curl:latest
      command: ["sleep", "3600"]
      env:
        - name: HTTP_PROXY
          value: "http://new-proxy.company.com:8080"
        - name: HTTPS_PROXY
          value: "http://new-proxy.company.com:8080"
# Check the availability of external resources
kubectl exec -it proxy-test -- curl -v https://registry.k8s.io
kubectl exec -it proxy-test -- curl -v https://docker.io
6. Use Different Types of Proxies for Different Tasks
For critical components (pulling images, cluster API), use fast datacenter proxies, and for applications requiring geographical IP diversity—residential or mobile proxies.
7. Regularly Update the NO_PROXY List
When adding new services or changing network topology, update NO_PROXY. Automate this through Helm charts or Kustomize:
# values.yaml for Helm chart
proxy:
  enabled: true
  http: "http://proxy.company.com:8080"
  https: "http://proxy.company.com:8080"
  noProxy:
    - localhost
    - 127.0.0.1
    - .cluster.local
    - .svc
    - 10.0.0.0/8
    - internal-service.company.com
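Inside the chart's templates, a noProxy list like this can be joined into the single comma-separated string the environment variable expects (this fragment assumes a standard Helm 3 chart; `join` and `quote` are built-in Sprig template functions):

```yaml
# templates/deployment.yaml (fragment)
env:
  {{- if .Values.proxy.enabled }}
  - name: HTTP_PROXY
    value: {{ .Values.proxy.http | quote }}
  - name: HTTPS_PROXY
    value: {{ .Values.proxy.https | quote }}
  - name: NO_PROXY
    value: {{ join "," .Values.proxy.noProxy | quote }}
  {{- end }}
```

Keeping the list as YAML entries in values.yaml makes diffs reviewable entry by entry, while the rendered manifest still carries the flat string applications expect.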
Conclusion
Configuring proxies in Kubernetes clusters is a multi-layered task that requires attention to detail at each level: from the operating system and container runtime to individual pods. Proper configuration ensures the smooth operation of the cluster, secure access to external resources, and compliance with corporate security policies.
Key points to remember:
- Configure proxies at all levels: OS, container runtime, kubelet, pods
- Correctly configure NO_PROXY, including all internal cluster networks
- Use centralized management through ConfigMap
- Monitor the availability and performance of proxy servers
- Test changes before applying them in production
For mission-critical Kubernetes clusters, we recommend using reliable datacenter proxies with high availability and low latency. This will ensure stable infrastructure operation and minimize downtime risks due to network access issues.
When issues arise, use a systematic approach to diagnosis: check settings at each level, analyze logs and events, and test connections manually. Most proxy issues in Kubernetes can be resolved with proper configuration of environment variables and NO_PROXY.