Back to Blog

Proxy Rotation Automation via API: How to Set Up Proxy Switching for Scraping and Arbitrage

A complete guide to automating proxy rotation via API: code examples, integration with scrapers and anti-detect browsers, and solutions to common issues.

📅 February 16, 2026

Manual proxy rotation when working with hundreds of requests is a waste of time and money. API rotation allows you to automatically switch IP addresses during blocks, distribute load, and scale scraping or multi-accounting. In this guide, we'll explore how to set up automatic proxy rotation for different tasks: from marketplace scraping to farming Facebook Ads accounts.

This material is suitable for both developers writing scrapers in Python or Node.js, and arbitrageurs using ready-made tools with API integration.

Why automate proxy rotation via API

Automatic IP address rotation via API solves several critical tasks that specialists face in different areas:

For marketplace and website scraping: When collecting data from Wildberries, Ozon, or Avito, each IP can make a limited number of requests (usually 50-200 per hour). API rotation allows automatic switching to a new IP when reaching the limit or receiving a captcha, ensuring continuous data collection.

For arbitrage and multi-accounting: When working with 20-50 Facebook Ads advertising accounts or Instagram accounts, you need to isolate each profile. API allows programmatically assigning a unique proxy to each account in Dolphin Anty or AdsPower, automatically recreating sessions during blocks.

For SMM automation: Mass posting services on Instagram, TikTok, or VK must distribute actions between IP addresses to avoid rate limits. API provides the ability to dynamically obtain new proxies for each session or group of accounts.

Main advantages of API automation compared to manual rotation:

  • Speed: IP change happens in milliseconds programmatically, without human intervention
  • Scalability: You can manage thousands of proxies simultaneously through a single interface
  • Fault tolerance: Automatic replacement of non-working proxies without stopping the process
  • Flexibility: Configure rotation rules for specific tasks: by time, by number of requests, by geography
  • Cost savings: Optimal traffic usage through load balancing

Typical use case: you're scraping competitor prices on Wildberries. Without API, you need to manually track blocks, log into the proxy provider panel, copy new data, and paste it into the script. With API, all this happens automatically: the script receives a 429 error (Too Many Requests), sends a request to the proxy service API, receives a new IP, and continues working.
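This loop can be sketched in Python. The rotate endpoint, authorization header, and response shape below are illustrative placeholders, not any specific provider's API:

```python
import requests

# Illustrative placeholders — substitute your provider's real endpoint and key
ROTATE_ENDPOINT = "https://api.provider.com/v1/proxy/rotate"
API_KEY = "YOUR_API_KEY"

BLOCK_STATUSES = {403, 429, 503}  # statuses that signal a block or rate limit

def should_rotate(status_code):
    """Decide whether a response status calls for switching IPs."""
    return status_code in BLOCK_STATUSES

def get_fresh_proxy():
    """Request a new IP from the provider API (response shape is assumed)."""
    resp = requests.post(
        ROTATE_ENDPOINT,
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=10,
    )
    resp.raise_for_status()
    p = resp.json()["data"]
    url = f"http://{p['username']}:{p['password']}@{p['ip']}:{p['port']}"
    return {"http": url, "https": url}

def fetch(url, proxies, max_retries=3):
    """GET with automatic IP replacement on 403/429/503."""
    for _ in range(max_retries):
        response = requests.get(url, proxies=proxies, timeout=15)
        if not should_rotate(response.status_code):
            return response
        proxies = get_fresh_proxy()  # blocked — switch IP and retry
    raise RuntimeError(f"Still blocked after {max_retries} attempts")
```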

Types of proxy rotation: sticky sessions vs automatic rotation

Before setting up automation, it's important to understand the difference between types of IP address rotation. The choice of strategy depends on your task.

Sticky Sessions (session proxies)

When using sticky sessions, one IP address is assigned to your session for a certain time (usually from 5 to 30 minutes). Rotation occurs only after the session time expires or upon your API request.

When to use:

  • Working with social media accounts (Instagram, Facebook) — frequent IP changes raise suspicions
  • Filling multi-page forms where you need to maintain a session
  • Testing ads from a specific region during a session
  • Scraping sites with authorization where IP change leads to logout

Example API request for creating a sticky session (usually uses a special login format):

# Format: username-session-SESSIONID:password
# SESSIONID — any string; the same ID = the same IP

proxy = "username-session-abc123:password@gate.proxycove.com:8000"

# All requests with session-abc123 get one IP for the session duration
# For a new IP, use a different SESSIONID: session-xyz789
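A minimal Python helper for building such sticky-session credentials. The gateway host and login format follow the example above; generating a random SESSIONID forces a fresh IP:

```python
import uuid

GATEWAY = "gate.proxycove.com:8000"  # gateway from the example above

def sticky_proxy(username, password, session_id=None):
    """Build a proxy URL whose session ID pins one IP; a new ID gets a new IP."""
    sid = session_id or uuid.uuid4().hex[:8]  # random ID => fresh IP
    return f"http://{username}-session-{sid}:{password}@{GATEWAY}", sid

# Same ID -> the same exit IP for the session lifetime:
url_a, sid = sticky_proxy("username", "password", "abc123")

# No ID -> a random one is generated, so the provider assigns a fresh IP:
url_b, new_sid = sticky_proxy("username", "password")
```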

Automatic rotation on each request

IP address changes with each new connection to the proxy server. This is the standard behavior of residential proxies without specifying session parameters.

When to use:

  • Mass scraping without authorization (prices, contacts, listings)
  • Bypassing aggressive rate limits on public APIs
  • Collecting data from sites that ban IPs after 10-20 requests
  • Checking content availability from different regions

Python usage example (each request = new IP):

import requests

proxy = {
    "http": "http://username:password@gate.proxycove.com:8000",
    "https": "http://username:password@gate.proxycove.com:8000"
}

# Each request will get a new IP
for i in range(10):
    response = requests.get("https://api.ipify.org", proxies=proxy)
    print(f"Request {i+1}, IP: {response.text}")

Timer-based rotation

You programmatically control when to change IP: every N minutes, after M requests, or upon receiving certain errors. This is a hybrid approach implemented through the proxy service API.

When to use:

  • Traffic consumption optimization — rotation only when necessary
  • Working with sites that track patterns (too frequent rotation = ban)
  • Balancing between anonymity and session stability

Rotation Type  | IP Change Frequency | Tasks                           | Traffic Consumption
Sticky Session | 5-30 minutes        | Multi-accounting, authorization | Low
Auto-rotation  | Each request        | Scraping, bypassing rate limits | Medium
Timer-based    | Configurable        | Universal                       | Optimized
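The timer-based strategy described above can be sketched as a small helper that rotates after a request count or an age limit, whichever comes first; the thresholds here are arbitrary examples:

```python
import time

class TimedRotator:
    """Rotate to the next proxy after max_requests uses or max_age seconds."""

    def __init__(self, proxy_list, max_requests=50, max_age=300):
        self.proxy_list = proxy_list
        self.max_requests = max_requests
        self.max_age = max_age
        self.index = 0
        self.count = 0
        self.started = time.monotonic()

    def rotate(self):
        """Advance to the next proxy and reset the counters."""
        self.index = (self.index + 1) % len(self.proxy_list)
        self.count = 0
        self.started = time.monotonic()

    def current(self):
        """Return the proxy for the next request, rotating if a limit is hit."""
        expired = time.monotonic() - self.started >= self.max_age
        if self.count >= self.max_requests or expired:
            self.rotate()
        self.count += 1
        return self.proxy_list[self.index]

# Example: rotate every 50 requests or every 5 minutes
rotator = TimedRotator(
    ["http://user:pass@gate1.com:8000", "http://user:pass@gate2.com:8000"],
    max_requests=50,
    max_age=300,
)
proxy = rotator.current()  # use as proxies={"http": proxy, "https": proxy}
```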

Basics of working with proxy service APIs

Most modern proxy providers offer two management methods: a web panel and an API. The API provides programmatic access to the core functions: getting a list of proxies, creating new sessions, checking balance, and retrieving usage statistics.

Typical proxy service API methods

Although each provider has its own documentation, standard methods usually include:

  • GET /api/v1/proxy/list — get a list of available proxies with filtering by country, type
  • POST /api/v1/proxy/rotate — forcibly change IP in an active session
  • GET /api/v1/account/balance — check remaining traffic or balance
  • GET /api/v1/stats — usage statistics: traffic volume, number of requests, errors
  • POST /api/v1/session/create — create a new sticky session with parameters (country, city, duration)

Authentication usually occurs through an API key in the request header:

curl -X GET "https://api.provider.com/v1/proxy/list?country=US" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json"

Response usually comes in JSON format:

{
  "status": "success",
  "data": {
    "proxies": [
      {
        "ip": "123.45.67.89",
        "port": 8000,
        "country": "US",
        "city": "New York",
        "protocol": "http",
        "username": "user123",
        "password": "pass456"
      }
    ],
    "total": 150,
    "available": 147
  }
}
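A response in this shape can be converted into requests-style proxy dictionaries with a small helper (the sample data repeats the JSON above):

```python
# Sample response in the shape shown above
api_response = {
    "data": {
        "proxies": [
            {
                "ip": "123.45.67.89",
                "port": 8000,
                "protocol": "http",
                "username": "user123",
                "password": "pass456",
            }
        ]
    }
}

def to_requests_proxies(api_response):
    """Convert the provider's proxy list into requests-style proxy dicts."""
    result = []
    for p in api_response["data"]["proxies"]:
        url = f"{p['protocol']}://{p['username']}:{p['password']}@{p['ip']}:{p['port']}"
        result.append({"http": url, "https": url})
    return result

proxy_pool = to_requests_proxies(api_response)
# proxy_pool can now be passed to requests.get(..., proxies=proxy_pool[0])
```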

Session management via API

For tasks requiring control over IP lifetime (multi-accounting, working with accounts), named sessions are created via the API. This allows programmatic management of dozens or hundreds of isolated IP addresses.

Example of creating a session with parameters:

POST /api/v1/session/create
{
  "country": "US",
  "state": "California",
  "session_duration": 600,  // 10 minutes
  "session_id": "facebook_account_001"
}

// Response:
{
  "status": "success",
  "session": {
    "id": "facebook_account_001",
    "proxy": "gate.provider.com:8000",
    "username": "user-session-facebook_account_001",
    "password": "your_password",
    "ip": "45.67.89.123",
    "expires_at": "2024-01-15T15:30:00Z"
  }
}

Now you can use this proxy in your script or antidetect browser, and the IP will remain unchanged for 10 minutes. To extend the session, send a repeat request with the same session_id.
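A renewal sketch in Python, assuming the same /session/create endpoint and payload shape shown above (the host and key are placeholders):

```python
import requests

API_URL = "https://api.provider.com/v1/session/create"  # placeholder endpoint
API_KEY = "YOUR_API_KEY"

def renewal_payload(session_id, duration=600):
    """Re-sending the same session_id extends the sticky IP's lifetime."""
    return {"session_id": session_id, "session_duration": duration}

def renew_session(session_id, duration=600):
    """POST the renewal request and return the new expiry timestamp."""
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json=renewal_payload(session_id, duration),
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["session"]["expires_at"]
```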

Python automation examples: requests, Selenium, Scrapy

Python is the most popular language for scraping and automation. Let's look at examples of integrating API proxy rotation with the main tools.

Automatic proxy rotation in requests

The requests library is used for simple HTTP requests. For automatic rotation, let's create a wrapper class that changes proxies on errors:

import requests
import random
import time

class RotatingProxySession:
    def __init__(self, proxy_list):
        """
        proxy_list: list of dictionaries with proxy data
        [{"http": "http://user:pass@ip:port", "https": "..."}]
        """
        self.proxy_list = proxy_list
        self.current_proxy = None
        self.session = requests.Session()
        self.rotate()
    
    def rotate(self):
        """Select a random proxy from the list"""
        self.current_proxy = random.choice(self.proxy_list)
        self.session.proxies.update(self.current_proxy)
        print(f"Switched to proxy: {self.current_proxy['http']}")
    
    def get(self, url, max_retries=3, **kwargs):
        """GET request with automatic rotation on errors"""
        for attempt in range(max_retries):
            try:
                response = self.session.get(url, timeout=10, **kwargs)
                
                # If blocked — change proxy
                if response.status_code in [403, 429, 503]:
                    print(f"Got {response.status_code}, changing proxy...")
                    self.rotate()
                    time.sleep(2)
                    continue
                
                return response
                
            except requests.exceptions.ProxyError:
                print(f"Proxy not working, attempt {attempt+1}/{max_retries}")
                self.rotate()
                time.sleep(2)
            
            except requests.exceptions.Timeout:
                print("Timeout, changing proxy...")
                self.rotate()
                time.sleep(2)
        
        raise Exception(f"Failed to execute request after {max_retries} attempts")

# Usage:
proxies = [
    {"http": "http://user1:pass@gate1.com:8000", "https": "http://user1:pass@gate1.com:8000"},
    {"http": "http://user2:pass@gate2.com:8000", "https": "http://user2:pass@gate2.com:8000"},
]

session = RotatingProxySession(proxies)

# Scraping Wildberries
for page in range(1, 50):
    url = f"https://www.wildberries.ru/catalog/page={page}"
    response = session.get(url)
    print(f"Page {page}: {response.status_code}")

Integration with Selenium for browser automation

Selenium is used for scraping sites with JavaScript and automating browser actions. To change proxy, you need to recreate the driver, as proxy settings are set during initialization:

from selenium import webdriver
from selenium.webdriver.chrome.options import Options
import time

class SeleniumRotatingProxy:
    def __init__(self, proxy_list):
        self.proxy_list = proxy_list
        self.driver = None
        self.current_proxy_index = 0
    
    def create_driver(self):
        """Create a new driver with current proxy"""
        if self.driver:
            self.driver.quit()
        
        proxy = self.proxy_list[self.current_proxy_index]
        
        chrome_options = Options()
        chrome_options.add_argument(f'--proxy-server={proxy}')
        chrome_options.add_argument('--headless')  # without GUI
        
        self.driver = webdriver.Chrome(options=chrome_options)
        print(f"Created driver with proxy: {proxy}")
    
    def rotate(self):
        """Switch to next proxy"""
        self.current_proxy_index = (self.current_proxy_index + 1) % len(self.proxy_list)
        self.create_driver()
    
    def get_with_retry(self, url, max_retries=3):
        """Open URL with automatic proxy change on errors"""
        for attempt in range(max_retries):
            try:
                if not self.driver:
                    self.create_driver()
                
                self.driver.get(url)
                
                # Check for blocking (e.g., captcha search)
                if "captcha" in self.driver.page_source.lower():
                    print("Captcha detected, changing proxy...")
                    self.rotate()
                    time.sleep(3)
                    continue
                
                return self.driver.page_source
                
            except Exception as e:
                print(f"Error: {e}, changing proxy (attempt {attempt+1})")
                self.rotate()
                time.sleep(3)
        
        raise Exception("Failed to load page")

# Usage:
proxies = [
    "http://user:pass@gate1.com:8000",
    "http://user:pass@gate2.com:8000",
]

bot = SeleniumRotatingProxy(proxies)

# Scraping Ozon
for i in range(10):
    html = bot.get_with_retry(f"https://www.ozon.ru/category/page-{i}")
    print(f"Got HTML for page {i}, length: {len(html)}")

bot.driver.quit()

Scrapy with middleware for proxy rotation

Scrapy is a framework for large-scale scraping. Proxy rotation is implemented through middleware that automatically applies to all requests:

# middlewares.py

import random

class RotatingProxyMiddleware:
    def __init__(self, proxy_list):
        self.proxy_list = proxy_list
    
    @classmethod
    def from_crawler(cls, crawler):
        # Get proxy list from settings
        proxy_list = crawler.settings.getlist('ROTATING_PROXY_LIST')
        return cls(proxy_list)
    
    def process_request(self, request, spider):
        # Assign random proxy to each request
        proxy = random.choice(self.proxy_list)
        request.meta['proxy'] = proxy
        spider.logger.info(f'Using proxy: {proxy}')
    
    def process_exception(self, request, exception, spider):
        # On proxy error — retry the same request through a different proxy
        proxy = random.choice(self.proxy_list)
        spider.logger.warning(f'Proxy error, switching to: {proxy}')
        request.meta['proxy'] = proxy
        request.dont_filter = True  # bypass the dupefilter so the retry isn't dropped
        return request  # reschedule the request

# settings.py

DOWNLOADER_MIDDLEWARES = {
    'myproject.middlewares.RotatingProxyMiddleware': 350,
}

ROTATING_PROXY_LIST = [
    'http://user:pass@gate1.com:8000',
    'http://user:pass@gate2.com:8000',
    'http://user:pass@gate3.com:8000',
]

# Retry requests on errors
RETRY_TIMES = 5
RETRY_HTTP_CODES = [403, 429, 500, 502, 503]

Now each Scrapy request will automatically get a random proxy from the list, and on errors will retry with a different IP.

Node.js automation: axios, Puppeteer, Playwright

Node.js is popular for creating scrapers and bots thanks to its asynchronicity and good integration with browser tools. Let's look at proxy rotation examples in the main libraries.

Axios with automatic rotation

Axios is a library for HTTP requests. Let's create a class with a proxy pool and automatic replacement on errors:

const axios = require('axios');
// https-proxy-agent v5+ exports the class as a named export
const { HttpsProxyAgent } = require('https-proxy-agent');

class RotatingProxyClient {
  constructor(proxyList) {
    this.proxyList = proxyList;
    this.currentIndex = 0;
  }

  getProxy() {
    const proxy = this.proxyList[this.currentIndex];
    this.currentIndex = (this.currentIndex + 1) % this.proxyList.length;
    return proxy;
  }

  async request(url, options = {}, maxRetries = 3) {
    for (let i = 0; i < maxRetries; i++) {
      const proxy = this.getProxy();
      const agent = new HttpsProxyAgent(proxy);

      try {
        const response = await axios.get(url, {
          ...options,
          httpsAgent: agent,
          timeout: 10000,
          // axios throws on 4xx/5xx by default; allow any status so we can inspect it
          validateStatus: () => true
        });

        // If blocked — retry with the next proxy
        if ([403, 429, 503].includes(response.status)) {
          console.log(`Status ${response.status}, changing proxy...`);
          continue;
        }

        return response.data;

      } catch (error) {
        console.log(`Error with proxy ${proxy}: ${error.message}`);
        if (i === maxRetries - 1) throw error;
      }
    }
    throw new Error(`All ${maxRetries} attempts failed or were blocked`);
  }
}

// Usage:
const proxies = [
  'http://user:pass@gate1.com:8000',
  'http://user:pass@gate2.com:8000',
];

const client = new RotatingProxyClient(proxies);

(async () => {
  for (let page = 1; page <= 20; page++) {
    const data = await client.request(`https://api.example.com/products?page=${page}`);
    console.log(`Page ${page}: received ${data.length} products`);
  }
})();

Puppeteer with proxy rotation

Puppeteer controls Chrome browser. Proxy is set at browser launch, so to change it you need to recreate the instance:

const puppeteer = require('puppeteer');

class PuppeteerRotatingProxy {
  constructor(proxyList) {
    this.proxyList = proxyList;
    this.currentIndex = 0;
    this.browser = null;
  }

  async createBrowser() {
    if (this.browser) await this.browser.close();

    const proxy = this.proxyList[this.currentIndex];
    console.log(`Launching browser with proxy: ${proxy}`);

    this.browser = await puppeteer.launch({
      headless: true,
      args: [`--proxy-server=${proxy}`]
    });
  }

  rotate() {
    this.currentIndex = (this.currentIndex + 1) % this.proxyList.length;
  }

  async scrape(url, maxRetries = 3) {
    for (let i = 0; i < maxRetries; i++) {
      try {
        if (!this.browser) await this.createBrowser();

        const page = await this.browser.newPage();
        
        // Proxy authentication (if required)
        await page.authenticate({
          username: 'your_username',
          password: 'your_password'
        });

        await page.goto(url, { waitUntil: 'networkidle2', timeout: 30000 });

        // Check for captcha
        const content = await page.content();
        if (content.includes('captcha')) {
          console.log('Captcha detected, changing proxy...');
          this.rotate();
          await this.createBrowser();
          continue;
        }

        return content;

      } catch (error) {
        console.log(`Error: ${error.message}, attempt ${i+1}`);
        this.rotate();
        await this.createBrowser();
      }
    }
    throw new Error('Failed to load page');
  }
}

// Usage:
const proxies = ['gate1.com:8000', 'gate2.com:8000'];
const scraper = new PuppeteerRotatingProxy(proxies);

(async () => {
  const html = await scraper.scrape('https://www.avito.ru/moskva');
  console.log(`Got HTML length: ${html.length}`);
  await scraper.browser.close();
})();

Playwright with rotation support

Playwright is a modern alternative to Puppeteer with better performance. Proxy setup is similar:

const { chromium } = require('playwright');

async function scrapeWithRotation(urls, proxyList) {
  let proxyIndex = 0;

  for (const url of urls) {
    const proxy = proxyList[proxyIndex];
    
    const browser = await chromium.launch({
      headless: true,
      proxy: {
        server: proxy,
        username: 'your_user',
        password: 'your_pass'
      }
    });

    const page = await browser.newPage();
    
    try {
      await page.goto(url, { timeout: 30000 });
      const title = await page.title();
      console.log(`${url} → ${title} (proxy: ${proxy})`);
    } catch (error) {
      console.log(`Error on ${url}: ${error.message}`);
    }

    await browser.close();
    
    // Next proxy for next URL
    proxyIndex = (proxyIndex + 1) % proxyList.length;
  }
}

const urls = [
  'https://www.wildberries.ru',
  'https://www.ozon.ru',
  'https://www.avito.ru'
];

const proxies = [
  'http://gate1.com:8000',
  'http://gate2.com:8000'
];

scrapeWithRotation(urls, proxies);

API integration with antidetect browsers: Dolphin Anty, AdsPower

For arbitrageurs and SMM specialists working with multi-accounting, manually assigning proxies to each profile in Dolphin Anty or AdsPower takes hours. The APIs of these browsers allow automating profile creation and proxy binding.

Automating Dolphin Anty via API

Dolphin Anty provides a local API (usually at http://localhost:3001/v1.0), through which you can create profiles, assign proxies, and launch browsers programmatically.

Python script example for mass profile creation with unique proxies:

import requests
import json

DOLPHIN_API = "http://localhost:3001/v1.0"
API_TOKEN = "your_dolphin_api_token"

# Proxy list from your provider (obtained via their API)
proxies = [
    {"host": "gate1.com", "port": 8000, "login": "user1", "password": "pass1"},
    {"host": "gate2.com", "port": 8000, "login": "user2", "password": "pass2"},
]

def create_profile_with_proxy(name, proxy):
    """Create profile in Dolphin with proxy binding"""
    
    payload = {
        "name": name,
        "tags": ["Facebook Ads", "Auto-created"],
        "proxy": {
            "type": "http",  # or socks5
            "host": proxy["host"],
            "port": proxy["port"],
            "login": proxy["login"],
            "password": proxy["password"]
        },
        "fingerprint": {
            "os": "win",
            "webRTC": {
                "mode": "altered",
                "fillBasedOnIp": True
            },
            "canvas": {
                "mode": "noise"
            }
        }
    }
    
    headers = {
        "Authorization": f"Bearer {API_TOKEN}",
        "Content-Type": "application/json"
    }
    
    response = requests.post(
        f"{DOLPHIN_API}/browser_profiles",
        headers=headers,
        data=json.dumps(payload)
    )
    
    if response.status_code == 200:
        profile = response.json()
        print(f"✓ Created profile: {name}, ID: {profile['id']}")
        return profile['id']
    else:
        print(f"✗ Error creating {name}: {response.text}")
        return None

# Create 50 profiles with proxy rotation
for i in range(50):
    proxy = proxies[i % len(proxies)]  # circular rotation
    profile_name = f"FB_Account_{i+1:03d}"
    create_profile_with_proxy(profile_name, proxy)

Now you have 50 profiles in Dolphin Anty, each with a unique browser fingerprint and proxy. To launch a profile programmatically:

def start_profile(profile_id):
    """Launch browser profile"""
    response = requests.get(
        f"{DOLPHIN_API}/browser_profiles/{profile_id}/start",
        headers={"Authorization": f"Bearer {API_TOKEN}"}
    )
    
    if response.status_code == 200:
        data = response.json()
        print(f"Profile launched, WebDriver port: {data['automation']['port']}")
        return data['automation']['port']
    else:
        print(f"Launch error: {response.text}")

# Launch profile and control via Selenium
port = start_profile("profile_id_here")

from selenium import webdriver
driver = webdriver.Remote(
    command_executor=f'http://127.0.0.1:{port}',
    options=webdriver.ChromeOptions()
)
driver.get("https://facebook.com")

Automating AdsPower

AdsPower also provides a local API. The logic is similar to Dolphin, but with different endpoints:

import requests

ADSPOWER_API = "http://local.adspower.net:50325/api/v1"

def create_adspower_profile(name, proxy):
    payload = {
        "name": name,
        "group_id": "0",  # Profile group ID
        "domain_name": "facebook.com",
        "open_urls": ["https://facebook.com"],
        "repeat_config": ["0"],
        "username": proxy["login"],
        "password": proxy["password"],
        "proxy_type": "http",
        "proxy_host": proxy["host"],
        "proxy_port": proxy["port"],
        "proxy_user": proxy["login"],
        "proxy_password": proxy["password"]
    }
    
    response = requests.post(
        f"{ADSPOWER_API}/user/create",
        json=payload
    )
    
    if response.json()["code"] == 0:
        user_id = response.json()["data"]["id"]
        print(f"✓ Created AdsPower profile: {name}, ID: {user_id}")
        return user_id
    else:
        print(f"✗ Error: {response.json()['msg']}")

# Creating profiles
for i, proxy in enumerate(proxies):
    create_adspower_profile(f"TikTok_Account_{i+1}", proxy)

Such automation is critically important when working with dozens of accounts. Instead of manually copying proxy data into each profile, you run a script and get a ready infrastructure in minutes.

Error handling and automatic fallback

When working with proxies, situations are inevitable: IP is blocked by the target site, proxy server doesn't respond, traffic runs out. Proper error handling is key to stable automation.

Error types and handling strategies

Error                                  | Cause                        | Solution
HTTP 403 Forbidden                     | IP in the site's ban list    | Change proxy, add delay
HTTP 429 Too Many Requests             | Rate limit exceeded          | Change IP, increase interval
ProxyError / Timeout                   | Proxy server not responding  | Remove from pool, take next
HTTP 407 Proxy Authentication Required | Incorrect login/password     | Check credentials, update
Captcha on page                        | Site detected the bot        | Change IP, use mobile proxies

Implementing smart retry system

Instead of simply retrying the request, let's build a system with exponential backoff and a blacklist of non-working proxies:

import requests
import time
from collections import defaultdict

class SmartProxyRotator:
    def __init__(self, proxy_list):
        self.proxy_list = proxy_list
        self.blacklist = set()  # IPs that don't work
        self.error_count = defaultdict(int)  # error counter by IP
        self.max_errors = 3  # after 3 errors — to blacklist
    
    def get_working_proxy(self):
        """Get proxy that's not in blacklist"""
        available = [p for p in self.proxy_list if p not in self.blacklist]
        if not available:
            # All proxies banned — clear blacklist
            print("⚠ All proxies blocked, resetting blacklist")
            self.blacklist.clear()
            self.error_count.clear()
            available = self.proxy_list
        return available[0]
    
    def mark_error(self, proxy):
        """Mark proxy error"""
        self.error_count[proxy] += 1
        if self.error_count[proxy] >= self.max_errors:
            self.blacklist.add(proxy)
            print(f"✗ Proxy {proxy} added to blacklist")
    
    def request_with_retry(self, url, max_retries=5):
        """Request with smart retries"""
        for attempt in range(max_retries):
            proxy = self.get_working_proxy()
            
            try:
                # Exponential backoff: 1s, 2s, 4s, 8s...
                if attempt > 0:
                    delay = 2 ** attempt
                    print(f"Waiting {delay}s before attempt {attempt+1}")
                    time.sleep(delay)
                
                response = requests.get(
                    url,
                    proxies={"http": proxy, "https": proxy},
                    timeout=15
                )
                
                # Success — reset error counter
                if response.status_code == 200:
                    self.error_count[proxy] = 0
                    return response
                
                # Blocked — change proxy
                elif response.status_code in [403, 429]:
                    print(f"Status {response.status_code}, changing proxy")
                    self.mark_error(proxy)
                    continue
                
            except requests.exceptions.ProxyError:
                print(f"ProxyError with {proxy}")
                self.mark_error(proxy)
                
            except requests.exceptions.Timeout:
                print(f"Timeout with {proxy}")
                self.mark_error(proxy)
        
        raise Exception(f"Failed to execute request after {max_retries} attempts")

# Usage:
proxies = [
    "http://user:pass@gate1.com:8000",
    "http://user:pass@gate2.com:8000",
    "http://user:pass@gate3.com:8000",
]

rotator = SmartProxyRotator(proxies)

for i in range(100):
    try:
        response = rotator.request_with_retry(f"https://api.example.com/data?page={i}")
        print(f"✓ Page {i}: {len(response.text)} bytes")
    except Exception as e:
        print(f"✗ Critical error on page {i}: {e}")

Monitoring and alerts

For production systems, it's important to track proxy pool health in real-time. Add metric logging:

import logging
from collections import defaultdict
from datetime import datetime

class ProxyMonitor:
    def __init__(self):
        self.stats = {
            "total_requests": 0,
            "successful": 0,
            "failed": 0,
            "proxy_errors": defaultdict(int),
            "start_time": datetime.now()
        }
        
        # Configure logging
        logging.basicConfig(
            filename='proxy_rotation.log',
            level=logging.INFO,
            format='%(asctime)s - %(levelname)s - %(message)s'
        )
    
    def log_request(self, proxy, success, error=None):
        self.stats["total_requests"] += 1
        
        if success:
            self.stats["successful"] += 1
            logging.info(f"✓ Success with {proxy}")
        else:
            self.stats["failed"] += 1
            self.stats["proxy_errors"][proxy] += 1
            logging.error(f"✗ Error with {proxy}: {error}")
    
    def get_report(self):
        uptime = datetime.now() - self.stats["start_time"]
        success_rate = (self.stats["successful"] / self.stats["total_requests"] * 100) if self.stats["total_requests"] > 0 else 0
        
        return f"""
=== Proxy Rotation Report ===
Uptime: {uptime}
Total requests: {self.stats["total_requests"]}
Successful: {self.stats["successful"]} ({success_rate:.1f}%)
Errors: {self.stats["failed"]}

Problematic proxies:
{self._format_errors()}
        """
    
    def _format_errors(self):
        sorted_errors = sorted(
            self.stats["proxy_errors"].items(),
            key=lambda x: x[1],
            reverse=True
        )
        return "\n".join([f"  {proxy}: {count} errors" for proxy, count in sorted_errors[:5]])

# Integration with rotator
monitor = ProxyMonitor()

# In request loop:
try:
    response = rotator.request_with_retry(url)
    monitor.log_request(current_proxy, success=True)
except Exception as e:
    monitor.log_request(current_proxy, success=False, error=str(e))

Best practices and traffic optimization

To maximize efficiency of proxy rotation automation and minimize costs, follow these recommendations:

1. Use appropriate rotation type for the task

  • Sticky sessions for multi-accounting and working with accounts — reduces suspicion
  • Auto-rotation for mass scraping without authorization — maximum anonymity
  • Timer-based for tasks with moderate blocking — balance between cost and reliability

2. Implement request delays

Even with proxy rotation, too high request frequency can trigger blocks. Add random delays:

import random
import time

def smart_delay(min_seconds=1, max_seconds=5):
    """Random delay to mimic human behavior"""
    delay = random.uniform(min_seconds, max_seconds)
    time.sleep(delay)

# In scraping loop
for url in urls:
    response = session.get(url)
    process_data(response)
    smart_delay(2, 7)  # 2-7 seconds between requests

3. Cache successful responses

To avoid repeated requests for the same data, implement caching:

import hashlib
import json
import time
from pathlib import Path

class CachedProxySession:
    def __init__(self, cache_dir="./cache"):
        self.cache_dir = Path(cache_dir)
        self.cache_dir.mkdir(exist_ok=True)
    
    def _get_cache_key(self, url):
        return hashlib.md5(url.encode()).hexdigest()
    
    def get(self, url, max_age_hours=24):
        cache_file = self.cache_dir / f"{self._get_cache_key(url)}.json"
        
        # Check cache
        if cache_file.exists():
            cache_age = time.time() - cache_file.stat().st_mtime
            if cache_age < max_age_hours * 3600:
                print(f"✓ Using cached response for {url}")
                return json.loads(cache_file.read_text())
        
        # Make request via the SmartProxyRotator defined earlier
        response = rotator.request_with_retry(url)
        
        # Save to cache
        cache_file.write_text(json.dumps({
            "url": url,
            "data": response.text,
            "timestamp": time.time()
        }))
        
        return response.text

4. Monitor proxy quality

Regularly check proxy performance and remove slow or unreliable ones:

import time
import requests

def test_proxy_speed(proxy, test_url="https://httpbin.org/ip"):
    """Measure proxy response time"""
    start = time.time()
    try:
        response = requests.get(
            test_url,
            proxies={"http": proxy, "https": proxy},
            timeout=10
        )
        elapsed = time.time() - start
        
        if response.status_code == 200:
            return elapsed
        return None
    except requests.RequestException:
        return None

# Test all proxies
proxy_speeds = {}
for proxy in proxy_list:
    speed = test_proxy_speed(proxy)
    if speed:
        proxy_speeds[proxy] = speed
        print(f"{proxy}: {speed:.2f}s")
    else:
        print(f"{proxy}: FAILED")

# Use only fast proxies (< 3 seconds)
fast_proxies = [p for p, s in proxy_speeds.items() if s < 3.0]

5. Distribute load across geolocations

For scraping regional content, use proxies from appropriate countries:

import random
import requests

class GeoProxyRotator:
    def __init__(self, proxies_by_country):
        """
        proxies_by_country: {"US": [...], "UK": [...], "DE": [...]}
        """
        self.proxies_by_country = proxies_by_country
    
    def get_proxy_for_country(self, country_code):
        """Get random proxy from specific country"""
        if country_code not in self.proxies_by_country:
            raise ValueError(f"No proxies for {country_code}")
        return random.choice(self.proxies_by_country[country_code])
    
    def scrape_by_region(self, urls_by_country):
        """Scrape URLs using proxies from matching countries"""
        results = {}
        
        for country, urls in urls_by_country.items():
            proxy = self.get_proxy_for_country(country)
            results[country] = []
            
            for url in urls:
                response = requests.get(
                    url,
                    proxies={"http": proxy, "https": proxy},
                    timeout=15
                )
                results[country].append(response.text)
        
        return results

# Usage
geo_rotator = GeoProxyRotator({
    "US": ["http://user:pass@us-gate1.com:8000", "http://user:pass@us-gate2.com:8000"],
    "UK": ["http://user:pass@uk-gate1.com:8000"],
    "DE": ["http://user:pass@de-gate1.com:8000"]
})

data = geo_rotator.scrape_by_region({
    "US": ["https://amazon.com/product1", "https://amazon.com/product2"],
    "UK": ["https://amazon.co.uk/product1"],
    "DE": ["https://amazon.de/product1"]
})

Conclusion

Automating proxy rotation via API is a critical skill for anyone working with web scraping, multi-accounting, or large-scale data collection. Proper implementation allows you to:

  • Scale operations from dozens to thousands of requests per hour
  • Minimize blocking and captchas through intelligent IP rotation
  • Reduce manual work and human errors
  • Optimize traffic costs through caching and smart retry logic
  • Maintain high success rates with automatic fallback mechanisms

Key takeaways from this guide:

  1. Choose the right rotation strategy — sticky sessions for accounts, auto-rotation for scraping, timer-based for optimization
  2. Implement robust error handling — exponential backoff, blacklisting, automatic retries
  3. Monitor and optimize — track success rates, proxy speeds, and traffic consumption
  4. Use appropriate tools — Python/Node.js for custom solutions, antidetect browsers for multi-accounting
  5. Follow best practices — add delays, cache responses, test proxy quality regularly

Whether you're scraping marketplaces, managing advertising accounts, or automating social media, API-driven proxy rotation is the foundation of reliable, scalable automation. Start with the examples in this guide, adapt them to your specific needs, and gradually build more sophisticated systems as your requirements grow.

Ready to implement proxy rotation?

ProxyCove offers residential and mobile proxies with built-in API support, sticky sessions, and automatic rotation. Perfect for scraping, multi-accounting, and automation at any scale.