Docker containers often require access to external resources through proxies — for scraping, testing from different regions, or bypassing restrictions. Incorrect proxy configuration leads to connection errors, real IP leaks, and application failures. In this article, we will explore all methods of configuring proxies in Docker: from simple environment variables to advanced scenarios with docker-compose and custom networks.
Why Proxies Are Needed in Docker Containers
Docker containers are used in various scenarios where proxies become a necessity. Let's consider the main tasks that proxies solve in containerized applications.
Scraping and Data Collection: If you are running scrapers in Docker containers to collect data from marketplaces (Wildberries, Ozon), social networks, or search engines, proxies protect against IP bans. Containers allow scaling scraping — running 10-50 instances simultaneously, each with its own proxy.
Testing from Different Regions: When developing web applications or mobile APIs, it is often necessary to check how the service works from different countries. Docker containers with proxies from various geolocations allow automating such testing in a CI/CD pipeline.
Automation and Bots: Containers with Selenium, Puppeteer, or Playwright for browser automation require proxies to work with multiple accounts. Each container gets its own proxy and isolated environment.
Bypassing Corporate Restrictions: In some infrastructures, Docker containers must go through a corporate proxy to access the internet. Without proper configuration, containers will not be able to download packages or access external APIs.
Important: Docker containers inherit the network settings of the host system, but proxies need to be configured explicitly. Simply having a proxy on the host does not mean that containers will automatically use it.
Basic Setup via Environment Variables
The simplest way to configure a proxy in a Docker container is to pass environment variables at runtime. This method works for most applications that respect the standard HTTP_PROXY, HTTPS_PROXY, and NO_PROXY variables.
Running a Container with Proxy via docker run:
docker run -d \
-e HTTP_PROXY="http://username:password@proxy.example.com:8080" \
-e HTTPS_PROXY="http://username:password@proxy.example.com:8080" \
-e NO_PROXY="localhost,127.0.0.1,.local" \
your-image:latest
Explanation of Parameters:
- HTTP_PROXY — proxy for HTTP requests
- HTTPS_PROXY — proxy for HTTPS requests (use http://, not https://)
- NO_PROXY — comma-separated list of addresses that should not go through the proxy
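Most HTTP clients (curl, pip, requests, urllib) read these variables automatically. A quick way to check what a Python process inside the container will actually pick up is the stdlib `urllib.request.getproxies()` helper — a minimal sketch, simulating the variables passed via `docker run -e`:

```python
import os
import urllib.request

# Simulate the variables passed via `docker run -e ...`
os.environ["HTTP_PROXY"] = "http://user:pass@proxy.example.com:8080"
os.environ["HTTPS_PROXY"] = "http://user:pass@proxy.example.com:8080"
os.environ["NO_PROXY"] = "localhost,127.0.0.1,.local"

# getproxies() returns the proxy map that urllib (and libraries built on
# top of it) will use for outgoing requests; NO_PROXY appears under "no"
proxies = urllib.request.getproxies()
print(proxies["http"])   # http://user:pass@proxy.example.com:8080
print(proxies["no"])     # localhost,127.0.0.1,.local
```

Running this inside the container is a fast sanity check that the variables reached the process at all.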
Example with SOCKS5 Proxy:
docker run -d \
-e HTTP_PROXY="socks5://username:password@proxy.example.com:1080" \
-e HTTPS_PROXY="socks5://username:password@proxy.example.com:1080" \
your-image:latest
Common Mistake: Using https:// in the proxy URL for HTTPS_PROXY. It is correct to specify http:// or socks5://, even for HTTPS traffic. The protocol in the URL indicates the type of proxy server, not the type of traffic.
Configuring Proxy for Docker Daemon (affects all containers):
If you need all containers to use a proxy by default, configure the Docker daemon. Create a file /etc/systemd/system/docker.service.d/http-proxy.conf:
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:8080"
Environment="HTTPS_PROXY=http://proxy.example.com:8080"
Environment="NO_PROXY=localhost,127.0.0.1,.local"
After making changes, restart Docker:
sudo systemctl daemon-reload
sudo systemctl restart docker
Configuring Proxies in Dockerfile
When building a Docker image that needs to download packages through a proxy (e.g., apt-get, pip, npm), you need to configure the proxy at build time. Docker supports build-time arguments for this.
Example Dockerfile with Proxy Support:
FROM python:3.11-slim
# Arguments for proxy (passed during build)
ARG HTTP_PROXY
ARG HTTPS_PROXY
ARG NO_PROXY
# Setting environment variables for build
ENV HTTP_PROXY=${HTTP_PROXY}
ENV HTTPS_PROXY=${HTTPS_PROXY}
ENV NO_PROXY=${NO_PROXY}
# Installing dependencies through proxy
RUN apt-get update && apt-get install -y curl
# Installing Python packages
COPY requirements.txt .
RUN pip install --proxy ${HTTP_PROXY} -r requirements.txt
# Copying the application
COPY . /app
WORKDIR /app
# Resetting proxy variables for runtime (note: ENV VAR= sets an empty value
# rather than unsetting it, and earlier values remain visible in the image history)
ENV HTTP_PROXY=
ENV HTTPS_PROXY=
CMD ["python", "app.py"]
Building the Image with Proxy Arguments:
docker build \
--build-arg HTTP_PROXY=http://proxy.example.com:8080 \
--build-arg HTTPS_PROXY=http://proxy.example.com:8080 \
--build-arg NO_PROXY=localhost,127.0.0.1 \
-t my-app:latest .
For Node.js Applications (npm via Proxy):
FROM node:18-alpine
ARG HTTP_PROXY
ARG HTTPS_PROXY
# Configuring npm to work through proxy
RUN npm config set proxy ${HTTP_PROXY}
RUN npm config set https-proxy ${HTTPS_PROXY}
COPY package*.json ./
RUN npm install
# Clearing proxy settings for npm
RUN npm config delete proxy
RUN npm config delete https-proxy
COPY . .
CMD ["node", "server.js"]
Tip: If the proxy is needed only for building and not for running the application, clear the environment variables at the end of the Dockerfile. This prevents accidental credential leaks in the container logs.
Proxy Configuration in docker-compose.yml
Docker Compose simplifies proxy management for multi-container applications. You can configure proxies globally or for individual services.
Basic Configuration with Proxy for a Single Service:
version: '3.8'
services:
parser:
image: python:3.11-slim
environment:
- HTTP_PROXY=http://username:password@proxy.example.com:8080
- HTTPS_PROXY=http://username:password@proxy.example.com:8080
- NO_PROXY=localhost,127.0.0.1,db
volumes:
- ./app:/app
working_dir: /app
command: python parser.py
db:
image: postgres:15
# Database does not use proxy
environment:
- POSTGRES_PASSWORD=secret
Using a .env File for Secure Credential Storage:
Create a file .env in the directory with docker-compose.yml:
PROXY_URL=http://username:password@proxy.example.com:8080
NO_PROXY=localhost,127.0.0.1
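Compose parses the .env file as simple KEY=VALUE lines. If the application itself needs the same values outside of Compose, loading them takes only a few lines of stdlib Python — a simplified sketch that ignores the quoting and interpolation rules Compose additionally supports:

```python
import os

def load_env(path: str) -> dict:
    """Minimal .env parser: KEY=VALUE lines; '#' comments and blanks ignored.
    (Simplified -- Docker Compose additionally handles quoting and interpolation.)"""
    values = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            values[key.strip()] = value.strip()
    return values

# Demo: write the .env shown above and load it
with open("/tmp/example.env", "w") as f:
    f.write("PROXY_URL=http://username:password@proxy.example.com:8080\n")
    f.write("NO_PROXY=localhost,127.0.0.1\n")

env = load_env("/tmp/example.env")
os.environ.update(env)
print(env["PROXY_URL"])
```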
Reference the variables in docker-compose.yml:
version: '3.8'
services:
parser:
image: python:3.11-slim
environment:
- HTTP_PROXY=${PROXY_URL}
- HTTPS_PROXY=${PROXY_URL}
- NO_PROXY=${NO_PROXY}
volumes:
- ./app:/app
working_dir: /app
command: python parser.py
Proxy Configuration for Building the Image in docker-compose:
version: '3.8'
services:
app:
build:
context: .
dockerfile: Dockerfile
args:
- HTTP_PROXY=${PROXY_URL}
- HTTPS_PROXY=${PROXY_URL}
- NO_PROXY=${NO_PROXY}
environment:
- HTTP_PROXY=${PROXY_URL}
- HTTPS_PROXY=${PROXY_URL}
ports:
- "3000:3000"
Scaling with Different Proxies for Each Instance:
If you need to run multiple parsers, each with its own proxy, use separate services:
version: '3.8'
services:
parser-1:
image: my-parser:latest
environment:
- PROXY_URL=http://user:pass@proxy1.example.com:8080
volumes:
- ./data:/data
parser-2:
image: my-parser:latest
environment:
- PROXY_URL=http://user:pass@proxy2.example.com:8080
volumes:
- ./data:/data
parser-3:
image: my-parser:latest
environment:
- PROXY_URL=http://user:pass@proxy3.example.com:8080
volumes:
- ./data:/data
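Hand-writing one service per proxy gets tedious past a few instances. A small script can generate the same structure from a proxy list — a sketch where the image name and volume path are illustrative (dump the result with PyYAML to get a compose file):

```python
import json

def build_compose(image: str, proxies: list) -> dict:
    """Generate a compose definition with one parser service per proxy."""
    services = {}
    for i, proxy in enumerate(proxies, start=1):
        services[f"parser-{i}"] = {
            "image": image,
            "environment": [f"PROXY_URL={proxy}"],
            "volumes": ["./data:/data"],
        }
    return {"services": services}

proxies = [
    "http://user:pass@proxy1.example.com:8080",
    "http://user:pass@proxy2.example.com:8080",
    "http://user:pass@proxy3.example.com:8080",
]
compose = build_compose("my-parser:latest", proxies)
print(json.dumps(compose, indent=2))
```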
Proxy Configuration at the Application Level
Some applications do not support standard HTTP_PROXY environment variables. In such cases, you need to configure the proxy in the application code or configuration files.
Python (requests library):
import os
import requests
# Getting the proxy from the environment variable
proxy_url = os.getenv('PROXY_URL', 'http://proxy.example.com:8080')
proxies = {
'http': proxy_url,
'https': proxy_url
}
# Using the proxy in requests
response = requests.get('https://api.example.com/data', proxies=proxies)
print(response.json())
# For SOCKS5 proxies, install the extra dependency: pip install requests[socks]
# proxies = {
# 'http': 'socks5://user:pass@proxy.example.com:1080',
# 'https': 'socks5://user:pass@proxy.example.com:1080'
# }
Python (aiohttp for asynchronous requests):
import os
import aiohttp
import asyncio
async def fetch_with_proxy():
proxy_url = os.getenv('PROXY_URL')
async with aiohttp.ClientSession() as session:
async with session.get(
'https://api.example.com/data',
proxy=proxy_url
) as response:
data = await response.json()
print(data)
asyncio.run(fetch_with_proxy())
Node.js (axios library):
const axios = require('axios');
const { HttpsProxyAgent } = require('https-proxy-agent');
const proxyUrl = process.env.PROXY_URL || 'http://proxy.example.com:8080';
const agent = new HttpsProxyAgent(proxyUrl);
axios.get('https://api.example.com/data', {
httpsAgent: agent
})
.then(response => {
console.log(response.data);
})
.catch(error => {
console.error('Error:', error.message);
});
Node.js (built-in https module):
const https = require('https');
const { HttpsProxyAgent } = require('https-proxy-agent');
const proxyUrl = process.env.PROXY_URL;
const agent = new HttpsProxyAgent(proxyUrl);
const options = {
hostname: 'api.example.com',
port: 443,
path: '/data',
method: 'GET',
agent: agent
};
const req = https.request(options, (res) => {
let data = '';
res.on('data', (chunk) => data += chunk);
res.on('end', () => console.log(JSON.parse(data)));
});
req.on('error', (error) => console.error(error));
req.end();
Selenium with Proxy in Docker Container:
from selenium import webdriver
from selenium.webdriver.common.proxy import Proxy, ProxyType
import os
proxy_url = os.getenv('PROXY_URL', 'proxy.example.com:8080')
# Configuring proxy for Chrome
chrome_options = webdriver.ChromeOptions()
chrome_options.add_argument(f'--proxy-server={proxy_url}')
# For authentication, use an extension or SSH tunnel
driver = webdriver.Chrome(options=chrome_options)
driver.get('https://example.com')
print(driver.title)
driver.quit()
Puppeteer with Proxy:
const puppeteer = require('puppeteer');
(async () => {
const proxyUrl = process.env.PROXY_URL || 'proxy.example.com:8080';
const browser = await puppeteer.launch({
args: [`--proxy-server=${proxyUrl}`],
headless: true
});
const page = await browser.newPage();
// Authentication for the proxy
await page.authenticate({
username: 'your-username',
password: 'your-password'
});
await page.goto('https://example.com');
console.log(await page.title());
await browser.close();
})();
Which Type of Proxy to Choose for Docker
The choice of proxy type depends on the task that your application is solving in the Docker container. Let's review the main scenarios and recommendations.
| Proxy Type | When to Use | Advantages | Disadvantages |
|---|---|---|---|
| Datacenter | Scraping, API requests, testing | High speed, low cost, stability | Easily detected, blocked on secured sites |
| Residential | Working with social networks, marketplaces, complex sites | Real IPs, low risk of blocking, wide geography | More expensive, slower than datacenter, limited traffic |
| Mobile | Testing mobile APIs, bypassing strict blocks | Maximum anonymity, IPs from mobile operators | High cost, limited geography |
Recommendations for Choosing for Specific Tasks:
Scraping Marketplaces and Product Catalogs: If you are scraping Wildberries, Ozon, or other marketplaces through Docker containers, use residential proxies. These platforms actively block datacenter IPs. Set up proxy rotation every 5-10 minutes to simulate different users.
API Testing and Development: For testing your own APIs or integrations with external services, datacenter proxies are suitable. They provide high speed and connection stability, which is important for automated tests in CI/CD.
Selenium/Puppeteer Automation: When running browser automation in containers for working with secured sites (social networks, banks, complex web applications), choose residential proxies. They reduce the likelihood of captchas and blocks.
Geo-distributed Testing: If you need to check service availability from different countries, use residential proxies with specific geolocation selection. Run several Docker containers, each with a proxy from its country.
Scaling Tip: When launching 10+ containers with proxies, use a proxy pool with automatic rotation. This simplifies management and prevents the reuse of a single IP by different containers.
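The rotation pool mentioned in the tip can be as simple as a thread-safe round-robin over the proxy list — a minimal sketch (the proxy URLs are placeholders):

```python
import itertools
import threading

class ProxyPool:
    """Round-robin proxy pool: each call to get() returns the next proxy,
    so concurrent workers never pile onto a single IP."""
    def __init__(self, proxies):
        self._cycle = itertools.cycle(proxies)
        self._lock = threading.Lock()

    def get(self) -> str:
        with self._lock:
            return next(self._cycle)

pool = ProxyPool([
    "http://user:pass@proxy1.example.com:8080",
    "http://user:pass@proxy2.example.com:8080",
])
print(pool.get())  # http://user:pass@proxy1.example.com:8080
print(pool.get())  # http://user:pass@proxy2.example.com:8080
```

A production pool would typically add health checks and time-based rotation on top of this, but the round-robin core stays the same.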
Troubleshooting Common Issues
When working with proxies in Docker containers, typical problems arise. Let's review the most common ones and how to solve them.
Problem 1: Container Cannot Connect to Proxy
Symptoms: "Connection refused", "Proxy connection failed" errors, timeouts when starting the container.
Solutions:
- Check proxy availability from the host: curl -x http://proxy:port https://example.com
- Ensure that the proxy server is reachable from the Docker network. If the proxy listens on the host's localhost, use host.docker.internal (Mac/Windows) or the host's IP on the bridge network
- Check firewall rules — Docker containers may be blocked from making outgoing connections
- For authenticated proxies, verify the username and password, and escape special characters in the URL
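Special characters in the username or password (@, :, /, %) break the proxy URL unless they are percent-encoded. A stdlib sketch of building a safe URL (the credentials are illustrative):

```python
from urllib.parse import quote

def build_proxy_url(user: str, password: str, host: str, port: int,
                    scheme: str = "http") -> str:
    """Percent-encode credentials so characters like @ or : don't break the URL."""
    return f"{scheme}://{quote(user, safe='')}:{quote(password, safe='')}@{host}:{port}"

url = build_proxy_url("user@mail", "p@ss:w0rd", "proxy.example.com", 8080)
print(url)  # http://user%40mail:p%40ss%3Aw0rd@proxy.example.com:8080
```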
Example of Accessing the Proxy on the Host:
# Mac/Windows
docker run -e HTTP_PROXY=http://host.docker.internal:8080 my-image
# Linux (find the host's IP in the bridge network)
ip addr show docker0 # Usually 172.17.0.1
docker run -e HTTP_PROXY=http://172.17.0.1:8080 my-image
Problem 2: DNS Does Not Resolve Through Proxy
Symptoms: "Could not resolve host" errors, DNS requests do not go through the proxy.
Solutions:
- Use a SOCKS5 proxy instead of HTTP — SOCKS5 can resolve DNS on the proxy side (in curl and requests, use socks5h:// to force remote resolution)
- Configure DNS in Docker: add --dns 8.8.8.8 when starting the container
- For applications that do not support DNS resolution through the proxy, use proxychains inside the container
# Dockerfile with proxychains
FROM python:3.11-slim
RUN apt-get update && apt-get install -y proxychains4
# Configuring proxychains
RUN printf "strict_chain\nproxy_dns\n[ProxyList]\nsocks5 proxy.example.com 1080\n" > /etc/proxychains4.conf
# Running the application through proxychains
CMD ["proxychains4", "python", "app.py"]
Problem 3: Real IP Leak from the Container
Symptoms: The target service sees the host or container IP instead of the proxy IP.
Solutions:
- Check that the application is indeed using the proxy: make a request to https://api.ipify.org and verify that the returned IP belongs to the proxy
- Ensure that all HTTP clients in the code are configured to use the proxy
- For WebRTC and WebSocket connections, the proxy may not work — disable WebRTC in browsers
- Check request headers — some libraries add X-Forwarded-For with the real IP
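Headers such as X-Forwarded-For can undo the proxy's anonymity even when the connection itself goes through it. A hedged helper that strips leak-prone headers before sending — the header list here is illustrative, not exhaustive:

```python
# Headers that can reveal the originating IP or the proxy chain
LEAKY_HEADERS = {"x-forwarded-for", "x-real-ip", "via", "forwarded"}

def strip_leaky_headers(headers: dict) -> dict:
    """Return a copy of the headers without IP-revealing entries."""
    return {k: v for k, v in headers.items() if k.lower() not in LEAKY_HEADERS}

headers = {
    "User-Agent": "my-parser/1.0",
    "X-Forwarded-For": "203.0.113.7",   # would expose the real IP
    "Accept": "application/json",
}
print(strip_leaky_headers(headers))
```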
IP Leak Test in the Container:
# Without proxy
docker run --rm curlimages/curl:latest curl https://api.ipify.org
# With proxy
docker run --rm \
-e HTTPS_PROXY=http://proxy.example.com:8080 \
curlimages/curl:latest curl https://api.ipify.org
Problem 4: Slow Performance Through Proxy
Symptoms: High latency, timeouts, slow data loading.
Solutions:
- Check the proxy speed directly from the host — the problem may be with the proxy itself
- Increase timeouts in the application for working with slow proxies
- Use keep-alive connections to reuse TCP connections
- For residential proxies, slow speed is normal; optimize the number of requests
- Configure a connection pool in HTTP clients for parallel requests
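Timeouts through slow residential proxies are best absorbed with retries and exponential backoff rather than a single long timeout. A generic sketch with an injected fetch function, so it is independent of any particular HTTP library:

```python
import time

def fetch_with_retries(fetch, attempts: int = 3, base_delay: float = 0.1):
    """Call fetch(); on failure, wait with exponential backoff and retry.
    Re-raises the last error if all attempts fail."""
    last_error = None
    for attempt in range(attempts):
        try:
            return fetch()
        except Exception as exc:  # in real code, catch the client's timeout error
            last_error = exc
            time.sleep(base_delay * (2 ** attempt))
    raise last_error

# Demo: a fake fetch that times out twice, then succeeds
calls = {"n": 0}
def flaky_fetch():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("proxy timed out")
    return "ok"

print(fetch_with_retries(flaky_fetch))  # ok
```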
Problem 5: Proxy Works for HTTP but Not for HTTPS
Symptoms: HTTP requests go through, HTTPS returns SSL/TLS errors.
Solutions:
- Ensure that the HTTPS_PROXY variable is set (not just HTTP_PROXY)
- Use http:// in the proxy URL for HTTPS_PROXY, not https://
- Check if the proxy supports the CONNECT method for HTTPS tunneling
- For proxies with self-signed certificates, disable SSL verification (only for testing!)
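The CONNECT mechanism can be seen directly in the stdlib http.client, which implements HTTPS-through-proxy exactly this way: open a TCP connection to the proxy, then tunnel raw TLS traffic to the target. A sketch that only configures the tunnel without sending a request (hostnames are placeholders):

```python
import http.client

# HTTPS through an HTTP proxy works via the CONNECT method: the client
# connects to the proxy, then asks it to relay raw TLS bytes to the target.
conn = http.client.HTTPSConnection("proxy.example.com", 8080, timeout=10)
conn.set_tunnel("api.example.com", 443)  # sends "CONNECT api.example.com:443" on connect
# conn.request("GET", "/data")  # would perform the request through the tunnel
print("tunnel target: api.example.com:443")
```

If a proxy rejects the CONNECT request, HTTPS traffic cannot pass through it regardless of the client configuration.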
Security and Credential Management
Storing proxy credentials in Docker containers requires special attention to security. Improper password management can lead to leaks in logs, images, or repositories.
Use Docker Secrets for Production:
Docker Swarm supports a secrets mechanism for securely storing passwords. Create a secret:
echo "http://username:password@proxy.example.com:8080" | docker secret create proxy_url -
Use it in docker-compose for Swarm:
version: '3.8'
services:
app:
image: my-app:latest
secrets:
- proxy_url
environment:
- PROXY_URL_FILE=/run/secrets/proxy_url
command: sh -c 'export HTTP_PROXY=$$(cat $$PROXY_URL_FILE) && python app.py'
secrets:
proxy_url:
external: true
Environment Variables via Files (.env):
For development, use .env files, but NEVER commit them to Git. Add to .gitignore:
# .gitignore
.env
.env.local
*.env
Create a .env.example without real credentials:
# .env.example
PROXY_URL=http://username:password@proxy.example.com:8080
NO_PROXY=localhost,127.0.0.1
Avoid Hardcoding Credentials in Images:
Danger: Never write passwords directly in the Dockerfile using ENV. These values are saved in image layers and are accessible even after deletion.
# ❌ BAD - password will remain in the image
FROM python:3.11
ENV HTTP_PROXY=http://user:secretpass@proxy.com:8080
COPY . /app
# ✅ GOOD - pass through build arguments or runtime variables
FROM python:3.11
ARG HTTP_PROXY
# Used only during build, not saved in the final image
Use Multi-Stage Builds to Clean Up Credentials:
# Build stage with proxy
FROM python:3.11 AS builder
ARG HTTP_PROXY
ENV HTTP_PROXY=${HTTP_PROXY}
COPY requirements.txt .
RUN pip install -r requirements.txt
# Final stage without proxy variables
FROM python:3.11-slim
COPY --from=builder /usr/local/lib/python3.11/site-packages /usr/local/lib/python3.11/site-packages
COPY . /app
WORKDIR /app
CMD ["python", "app.py"]
Rotating Proxy Credentials:
If your proxy provider supports an API for generating temporary credentials, use them instead of static passwords. Create an init script in the container:
#!/bin/bash
# entrypoint.sh
# Getting temporary credentials from the API
PROXY_CREDS=$(curl -s https://api.proxyservice.com/generate-temp-auth)
export HTTP_PROXY="http://${PROXY_CREDS}@proxy.example.com:8080"
# Running the application
exec python app.py
Logging Without Leaking Passwords:
Configure logging so that proxy variables do not appear in the output:
import os
import re
def safe_log_env():
"""Logging environment variables without passwords"""
for key, value in os.environ.items():
if 'PROXY' in key:
# Masking password in URL
safe_value = re.sub(r'://([^:]+):([^@]+)@', r'://\1:****@', value)
print(f"{key}={safe_value}")
else:
print(f"{key}={value}")
safe_log_env()
Conclusion
Configuring proxies in Docker containers is an important skill for developers working with scraping, automation, and distributed systems. You learned how to set up proxies through environment variables, Dockerfile, and docker-compose, how to integrate proxies into application code in Python and Node.js, and how to solve typical connection and security issues.
Key points for successful proxy work in Docker: use environment variables for flexibility, store credentials securely through secrets or .env files, choose the type of proxy depending on the task, and always test for the absence of real IP leaks before deploying to production.
For most scraping and automation tasks in Docker containers, we recommend using residential proxies — they provide high anonymity and minimal risk of blocks when working with secured sites and APIs. If you need maximum speed for API testing or internal tasks, consider datacenter proxies.