
How to Fix Timeout Errors When Using a Proxy

Timeout errors through proxies are a common issue in scraping and automation. We analyze the causes and provide working solutions with code examples.

📅 December 15, 2025

The request hung, the script crashed with a TimeoutError, and the data never arrived. Familiar situation? Timeout errors through a proxy are one of the most common problems in scraping and automation. Let's analyze the causes and go through concrete solutions.

Why timeout errors occur

A timeout is not one problem, but a symptom. Before treating it, you need to understand the cause:

Slow proxy server. An overloaded server or geographically distant proxy adds latency to each request. If your timeout is 10 seconds but the proxy responds in 12 — you get an error.

Blocking on the target site's side. A site may intentionally "hang" suspicious requests instead of explicitly refusing them. This is a tactic against bots — keeping the connection open indefinitely.

DNS issues. The proxy must resolve the domain. If the proxy's DNS server is slow or unavailable — the request hangs at the connection stage.

Incorrect timeout configuration. One general timeout for everything is a common mistake. Connect timeout and read timeout are different things and should be configured separately.

Network issues. Packet loss, unstable proxy connection, routing problems — all of this leads to timeouts.

Types of timeouts and their configuration

Most HTTP libraries support several types of timeouts. Understanding the difference between them is key to proper configuration.

Connect timeout

Time to establish a TCP connection with the proxy and target server. If the proxy is unavailable or the server doesn't respond — this timeout will trigger. Recommended value: 5-10 seconds.

Read timeout

Time to wait for data after the connection is established. The server connected but is silent — read timeout will trigger. For regular pages: 15-30 seconds. For heavy APIs: 60+ seconds.

Total timeout

Total time for the entire request from start to finish. A safeguard against hung connections. Usually: connect + read + buffer.

Example configuration in Python with the requests library:

import requests

proxies = {
    "http": "http://user:pass@proxy.example.com:8080",
    "https": "http://user:pass@proxy.example.com:8080"
}

# Tuple: (connect_timeout, read_timeout)
timeout = (10, 30)

try:
    response = requests.get(
        "https://target-site.com/api/data",
        proxies=proxies,
        timeout=timeout
    )
except requests.exceptions.ConnectTimeout:
    print("Failed to connect to proxy or server")
except requests.exceptions.ReadTimeout:
    print("Server did not send data in time")

For aiohttp (asynchronous Python):

import aiohttp
import asyncio

async def fetch_with_timeout():
    timeout = aiohttp.ClientTimeout(
        total=60,      # Total timeout
        connect=10,    # Connection timeout
        sock_read=30   # Data read timeout
    )
    
    async with aiohttp.ClientSession(timeout=timeout) as session:
        async with session.get(
            "https://target-site.com/api/data",
            proxy="http://user:pass@proxy.example.com:8080"
        ) as response:
            return await response.text()

Retry logic: the right approach

A timeout is not always fatal; a retried request often succeeds. But retries need to be done wisely.

Exponential backoff

Don't hammer the server with retry requests without a pause. Use exponential backoff: each subsequent attempt has an increasing delay.

import requests
import time
import random

def fetch_with_retry(url, proxies, max_retries=3):
    """Request with retry and exponential backoff"""
    
    for attempt in range(max_retries):
        try:
            response = requests.get(
                url,
                proxies=proxies,
                timeout=(10, 30)
            )
            response.raise_for_status()
            return response
            
        except (requests.exceptions.Timeout, 
                requests.exceptions.ConnectionError) as e:
            
            if attempt == max_retries - 1:
                raise  # Last attempt — re-raise the error
            
            # Exponential backoff: 1s, 2s, 4s...
            # + random jitter to avoid creating waves of requests
            delay = (2 ** attempt) + random.uniform(0, 1)
            print(f"Attempt {attempt + 1} failed: {e}")
            print(f"Retrying in {delay:.1f} seconds...")
            time.sleep(delay)

tenacity library

For production code, it's more convenient to use ready-made solutions:

from tenacity import retry, stop_after_attempt, wait_exponential
from tenacity import retry_if_exception_type
import requests

@retry(
    stop=stop_after_attempt(3),
    wait=wait_exponential(multiplier=1, min=1, max=10),
    retry=retry_if_exception_type((
        requests.exceptions.Timeout,
        requests.exceptions.ConnectionError
    ))
)
def fetch_data(url, proxies):
    response = requests.get(url, proxies=proxies, timeout=(10, 30))
    response.raise_for_status()
    return response.json()

Proxy rotation on timeouts

If one proxy constantly gives timeouts — the problem is with it. The logical solution: switch to another.

import requests
from dataclasses import dataclass, field
from typing import Optional
import time

@dataclass
class ProxyManager:
    """Proxy manager with failure tracking"""
    
    proxies: list
    max_failures: int = 3
    cooldown_seconds: int = 300
    _failures: dict = field(default_factory=dict)
    _cooldown_until: dict = field(default_factory=dict)
    
    def get_proxy(self) -> Optional[str]:
        """Get a working proxy"""
        current_time = time.time()
        
        for proxy in self.proxies:
            # Skip proxies on cooldown
            if self._cooldown_until.get(proxy, 0) > current_time:
                continue
            return proxy
        
        return None  # All proxies on cooldown
    
    def report_failure(self, proxy: str):
        """Report a failed request"""
        self._failures[proxy] = self._failures.get(proxy, 0) + 1
        
        if self._failures[proxy] >= self.max_failures:
            # Put proxy on cooldown
            self._cooldown_until[proxy] = time.time() + self.cooldown_seconds
            self._failures[proxy] = 0
            print(f"Proxy {proxy} put on cooldown")
    
    def report_success(self, proxy: str):
        """Reset failure counter on success"""
        self._failures[proxy] = 0


def fetch_with_rotation(url, proxy_manager, max_attempts=5):
    """Request with automatic proxy switching on errors"""
    
    for attempt in range(max_attempts):
        proxy = proxy_manager.get_proxy()
        
        if not proxy:
            raise Exception("No available proxies")
        
        proxies = {"http": proxy, "https": proxy}
        
        try:
            response = requests.get(url, proxies=proxies, timeout=(10, 30))
            response.raise_for_status()
            proxy_manager.report_success(proxy)
            return response
            
        except (requests.exceptions.Timeout, 
                requests.exceptions.ConnectionError):
            proxy_manager.report_failure(proxy)
            print(f"Timeout through {proxy}, trying another...")
            continue
    
    raise Exception(f"Failed to get data after {max_attempts} attempts")

When using residential proxies with automatic rotation, this logic is simplified — the provider automatically switches IPs on each request or at a specified interval.
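
With such a rotating gateway, the client keeps pointing at a single endpoint and a plain retry is enough, because every new attempt already exits through a new IP. A minimal sketch (the gateway host, port, and credentials below are placeholders for whatever your provider gives you):

import requests

# Hypothetical rotating-gateway endpoint: the provider swaps the exit IP
# on every request or at a fixed interval.
GATEWAY = "http://user:pass@gateway.provider.example:10000"
proxies = {"http": GATEWAY, "https": GATEWAY}

def fetch(url, max_retries=3):
    """Retry through the same gateway; each attempt gets a fresh exit IP."""
    for attempt in range(max_retries):
        try:
            return requests.get(url, proxies=proxies, timeout=(10, 30))
        except requests.exceptions.Timeout:
            if attempt == max_retries - 1:
                raise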

Asynchronous requests with timeout control

For large-scale scraping, synchronous requests are inefficient. An asynchronous approach lets you process hundreds of URLs in parallel, but it requires careful timeout handling.

import aiohttp
import asyncio
from typing import List, Tuple

async def fetch_one(
    session: aiohttp.ClientSession,
    url: str,
    proxy: str,
    semaphore: asyncio.Semaphore
) -> Tuple[str, str | None, str | None]:
    """Load one URL through the proxy with timeout handling"""
    
    async with semaphore:  # Limit concurrency
        try:
            async with session.get(url, proxy=proxy) as response:
                content = await response.text()
                return (url, content, None)
                
        except asyncio.TimeoutError:
            return (url, None, "timeout")
        except aiohttp.ClientError as e:
            return (url, None, str(e))


async def fetch_all(
    urls: List[str],
    proxy: str,
    max_concurrent: int = 10
) -> List[Tuple[str, str | None, str | None]]:
    """Batch loading with timeout and concurrency control"""
    
    timeout = aiohttp.ClientTimeout(total=45, connect=10, sock_read=30)
    semaphore = asyncio.Semaphore(max_concurrent)
    
    connector = aiohttp.TCPConnector(
        limit=max_concurrent,
        limit_per_host=5  # No more than 5 connections per host
    )
    
    async with aiohttp.ClientSession(
        timeout=timeout,
        connector=connector
    ) as session:
        # aiohttp takes the proxy per request, so pass it into every task
        tasks = [
            fetch_one(session, url, proxy, semaphore)
            for url in urls
        ]
        results = await asyncio.gather(*tasks)
    
    # Statistics
    success = sum(1 for _, content, _ in results if content)
    timeouts = sum(1 for _, _, error in results if error == "timeout")
    print(f"Successful: {success}, Timeouts: {timeouts}")
    
    return results


# Usage
async def main():
    urls = [f"https://example.com/page/{i}" for i in range(100)]
    results = await fetch_all(
        urls, 
        proxy="http://user:pass@proxy.example.com:8080",
        max_concurrent=10
    )

asyncio.run(main())

Important: don't set the concurrency too high. 50-100 simultaneous requests through one proxy is already a lot; 10-20 per proxy, spread across several proxies, works better.
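
One way to keep the per-proxy load modest is to shard the URL list across several proxies and reuse fetch_all from the example above for each shard. A minimal sketch (the list of proxy URLs is up to you):

import asyncio

async def fetch_sharded(urls, proxy_urls, per_proxy_concurrency=10):
    """Split the URLs across proxies so each one carries only a modest load."""
    # Round-robin sharding: proxy i gets every len(proxy_urls)-th URL
    shards = [urls[i::len(proxy_urls)] for i in range(len(proxy_urls))]
    batches = await asyncio.gather(*[
        fetch_all(shard, proxy, max_concurrent=per_proxy_concurrency)
        for shard, proxy in zip(shards, proxy_urls)
    ])
    return [item for batch in batches for item in batch]  # flatten the results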

Diagnostics: how to find the cause

Before changing settings, determine the source of the problem.

Step 1: Check the proxy directly

# Simple test via curl with timing
curl -x http://user:pass@proxy:8080 \
     -w "Connect: %{time_connect}s\nTotal: %{time_total}s\n" \
     -o /dev/null -s \
     https://httpbin.org/get

If time_connect exceeds 5 seconds, the problem is with the proxy itself or the network path to it.
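
To separate "slow proxy" from "slow path to the proxy", you can also time a bare TCP connect to the proxy port from Python, with no HTTP involved at all (host and port are placeholders):

import socket
import time

def tcp_connect_time(host: str, port: int, timeout: float = 10.0) -> float:
    """Measure only the TCP handshake to the proxy."""
    start = time.monotonic()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return time.monotonic() - start

print(f"TCP connect to proxy: {tcp_connect_time('proxy.example.com', 8080):.2f}s")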

Step 2: Compare with a direct request

import requests
import time

def measure_request(url, proxies=None):
    start = time.time()
    try:
        r = requests.get(url, proxies=proxies, timeout=30)
        elapsed = time.time() - start
        return f"OK: {elapsed:.2f}s, status: {r.status_code}"
    except Exception as e:
        elapsed = time.time() - start
        return f"FAIL: {elapsed:.2f}s, error: {type(e).__name__}"

url = "https://target-site.com"
proxy = {"http": "http://proxy:8080", "https": "http://proxy:8080"}

print("Direct:", measure_request(url))
print("Via proxy:", measure_request(url, proxy))

Step 3: Check different proxy types

Timeouts may depend on the proxy type:

Proxy Type     Typical Latency   Recommended Timeout
Datacenter     50-200 ms         Connect: 5s,  Read: 15s
Residential    200-800 ms        Connect: 10s, Read: 30s
Mobile         300-1500 ms       Connect: 15s, Read: 45s
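
These starting points can live in one place in your code and be selected by proxy type; the values below simply restate the table and should be adjusted to your provider's actual latency:

# Starting-point (connect, read) timeouts by proxy type, from the table above
TIMEOUTS = {
    "datacenter":  (5, 15),
    "residential": (10, 30),
    "mobile":      (15, 45),
}

def timeout_for(proxy_type: str) -> tuple:
    """Return a (connect, read) tuple suitable for requests' timeout argument."""
    return TIMEOUTS.get(proxy_type, (10, 30))

# requests.get(url, proxies=proxies, timeout=timeout_for("residential"))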

Step 4: Log details

import logging
import requests

# Enable debug logging
logging.basicConfig(level=logging.DEBUG)
logging.getLogger("urllib3").setLevel(logging.DEBUG)

# Subsequent requests will now log each stage:
# - connection establishment (to the proxy)
# - the request being sent
# - the response status and any retries

Checklist for solving timeout errors

Quick algorithm for handling timeout errors:

  1. Identify the timeout type — connect or read? These are different problems.
  2. Check the proxy separately — does it work at all? What's the latency?
  3. Increase timeouts — perhaps the values are too aggressive for your proxy type.
  4. Add retry with backoff — isolated timeouts are normal; what matters is resilience.
  5. Configure rotation — automatically switch to another proxy on issues.
  6. Limit concurrency — too many simultaneous requests overload the proxy.
  7. Check the target site — it may be blocking or throttling your requests.

Conclusion

Timeout errors through a proxy are a solvable problem. In most cases, it's enough to properly configure timeouts for your proxy type, add retry logic, and implement rotation on failures. For tasks with high stability requirements, use residential proxies with automatic rotation — learn more at proxycove.com.
