If you work with a large number of proxies (scraping marketplaces, managing multiple social media accounts, or running ads), you know the problem: some proxies suddenly stop working, and your tasks grind to a halt. A health check for the proxy pool solves this automatically: the system tests each IP, excludes non-working ones, and routes traffic only through stable connections.
In this guide, we will look at how to set up an automatic health check for your proxy pool, from simple availability checks to advanced monitoring with automatic replacement of faulty proxies. The approach suits any task, from scraping Wildberries to multi-accounting in Facebook Ads.
What is a proxy health check and why is it needed
A health check is an automated monitoring system for a proxy pool that regularly checks each IP address for availability, speed, and correct operation. When you work with dozens or hundreds of proxies, some of them inevitably stop working: they expire, get banned, the provider blocks access, or simply slow down.
Without a health check, you will only learn about the problem when a task fails with an error: the parser won't collect data, an account will get banned due to a non-working proxy, or an ad won't launch. With a configured health check, the system automatically excludes faulty proxies from rotation and uses only stable connections.
Why a health check is needed:
- Operational stability: excluding non-working proxies before they disrupt your tasks
- Time savings: no need to manually check each IP and search for error causes
- Account security: a slow or unstable proxy can raise suspicions from the platform
- Cost optimization: you pay only for working proxies, not for the entire pool
A health check is especially critical for business tasks: if you manage 30 client accounts on Instagram, scrape competitor prices on Ozon, or run ads in Facebook Ads, downtime due to a non-working proxy costs money and reputation.
Methods for checking proxy uptime
There are several levels of proxy checking, from simple availability checks to deep analysis of anonymity and speed. The choice of method depends on your tasks: for scraping, a basic check is sufficient, while multi-accounting on social media also requires checking geolocation and anonymity.
1. Basic availability check (Ping Check)
The simplest method is to send an HTTP request through the proxy to a test server and check if a response is received. Public services like httpbin.org, ip-api.com, or your own test server are usually used.
What is checked: whether the proxy responds to requests or not (status 200 OK). This is the minimum check that filters out completely non-working IPs.
When it is sufficient: scraping public data, gathering information from websites without strict protection, mass tasks where speed of checking is important.
2. Latency check
The response time of the proxy is measured: how many milliseconds pass from sending the request to receiving the response. Slow proxies (more than 3-5 seconds) can cause timeouts and raise suspicions from platforms.
What is checked: response time (latency) and speed stability. Proxies with latency over 5000 ms are usually excluded from the pool.
When it is important: working with social media (Instagram, TikTok), ad accounts (Facebook Ads, Google Ads), tasks where page load speed is critical.
3. Geolocation and IP reputation check
The compliance of the IP with the declared country and city is checked, as well as the reputation of the IP (whether it is on blacklists, whether it is used for spam). For residential proxies, this is critical β platforms check the match of geolocation with account data.
What is checked: country and city of the IP, provider, presence in spam databases (DNSBL, Spamhaus), type of connection (residential/datacenter).
When it is critical: multi-accounting on social media, traffic arbitrage, working with accounts tied to specific cities (for example, posting ads on Avito).
4. Anonymity level check
The level of anonymity of the proxy is determined: whether it transmits headers that reveal your real IP (X-Forwarded-For, Via). Proxies come in three types: transparent (they pass your real IP), anonymous (they hide the IP but show that it is a proxy), and elite (completely anonymous).
What is checked: presence of headers X-Forwarded-For, X-Real-IP, Via, Proxy-Connection. For business tasks, only elite proxies are needed.
When it is mandatory: working with platforms with strict anti-fraud protection (Facebook, Google, TikTok), multi-accounting, traffic arbitrage.
| Check Method | What it checks | For which tasks |
|---|---|---|
| Ping Check | Availability (200 OK) | Scraping, mass data collection |
| Latency Check | Response speed | Social media, ad accounts |
| Geo Check | Geolocation, IP reputation | Multi-accounting, local tasks |
| Anonymity Check | Anonymity level | Arbitrage, anti-fraud platforms |
Basic health check setup: availability check
Let's start with a simple health check setup that checks the availability of each proxy in the pool. This method is suitable for most tasks and takes 10-15 minutes to set up.
Step 1: Prepare the list of proxies
Create a file with your proxies in the format IP:PORT:USER:PASS or http://user:pass@ip:port. Each proxy should be on a new line.
Example of the proxies.txt file:
```
192.168.1.100:8080:user1:pass1
192.168.1.101:8080:user2:pass2
192.168.1.102:8080:user3:pass3
```
Step 2: Choose a test URL
For availability checking, you need a stable server that returns a simple response. Popular options include:
- httpbin.org/ip: returns the proxy's IP address in JSON format
- ip-api.com/json: returns the IP and geolocation
- icanhazip.com: returns only the IP (the fastest)
- Your own server: if you need to check access to a specific site
For a basic check, httpbin.org/ip is sufficient: it is stable and returns a structured response.
Step 3: Set up the checking script
Create a simple script that reads the list of proxies, sends a request through each, and checks the response status. Here is an example in Python (the most popular language for such tasks):
```python
import time
from concurrent.futures import ThreadPoolExecutor

import requests


def check_proxy(proxy_line):
    """Check a single proxy in IP:PORT:USER:PASS format."""
    try:
        # Parse the proxy line
        parts = proxy_line.strip().split(':')
        proxy_url = f"http://{parts[2]}:{parts[3]}@{parts[0]}:{parts[1]}"
        proxies = {
            'http': proxy_url,
            'https': proxy_url
        }

        # Send a request with a 10-second timeout
        start_time = time.time()
        response = requests.get('http://httpbin.org/ip',
                                proxies=proxies,
                                timeout=10)
        latency = (time.time() - start_time) * 1000  # in milliseconds

        if response.status_code == 200:
            return {
                'proxy': proxy_line.strip(),
                'status': 'working',
                'latency': round(latency, 2),
                'ip': response.json().get('origin')
            }
        return {
            'proxy': proxy_line.strip(),
            'status': 'failed',
            'error': f'HTTP {response.status_code}'
        }
    except Exception as e:
        return {
            'proxy': proxy_line.strip(),
            'status': 'failed',
            'error': str(e)
        }


# Read the proxy file, skipping empty lines
with open('proxies.txt', 'r') as f:
    proxy_lines = [line for line in f if line.strip()]

# Check all proxies in parallel (up to 20 at a time)
with ThreadPoolExecutor(max_workers=20) as executor:
    results = list(executor.map(check_proxy, proxy_lines))

# Save working proxies
working_proxies = [r for r in results if r['status'] == 'working']
with open('working_proxies.txt', 'w') as f:
    for proxy in working_proxies:
        f.write(proxy['proxy'] + '\n')

print(f"Checked: {len(proxy_lines)}")
print(f"Working: {len(working_proxies)}")
print(f"Not working: {len(proxy_lines) - len(working_proxies)}")
```
This script checks all proxies in parallel (20 at a time), speeding up the process significantly. The result is a file named working_proxies.txt containing only working proxies.
Step 4: Automate the check
To keep the health check running continuously, set up the script to run automatically on a schedule:
Linux/Mac (cron):
```
# Check every 30 minutes
*/30 * * * * /usr/bin/python3 /path/to/check_proxies.py
```
Windows (Task Scheduler):
- Open "Task Scheduler"
- Create a new task β Trigger: every 30 minutes
- Action: run python.exe with the path to your script
⚠️ Important:
Do not check proxies too frequently (more often than once every 15 minutes): this creates load on test services and may lead to blocking. The optimal frequency is every 30-60 minutes for stable proxies and every 10-15 minutes for tasks where availability is critical.
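As a cross-platform alternative to cron and Task Scheduler, the check can simply be run in a loop from Python itself. Here is a minimal sketch; the `run_periodically` helper is illustrative, not part of any library, and `check_all_proxies` stands for whatever check function you use:

```python
import time


def run_periodically(task, interval_seconds, max_runs=None):
    """Run task every interval_seconds; max_runs limits iterations (None = forever)."""
    runs = 0
    while max_runs is None or runs < max_runs:
        task()
        runs += 1
        if max_runs is not None and runs >= max_runs:
            break
        time.sleep(interval_seconds)
    return runs


# Example: run a health check every 30 minutes
# run_periodically(check_all_proxies, interval_seconds=30 * 60)
```

This keeps the scheduling logic inside the script, at the cost of having to keep the process itself alive (e.g., under systemd or a supervisor).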
Advanced monitoring: speed, geolocation, anonymity
For business tasks, a basic availability check is not enough: you also need to monitor speed, geolocation, and anonymity level. This is especially important for multi-accounting on social media and traffic arbitrage, where platforms strictly check proxies.
Speed and stability check
A slow proxy (latency over 3-5 seconds) can raise suspicions from platforms: Instagram and Facebook track page load times, and a slow connection is a sign of proxy usage. Moreover, slow proxies slow down your work and can cause timeouts.
What to check:
- Latency: average time from request to response. Norm: up to 1000 ms for residential, up to 300 ms for datacenter proxies
- Download speed: how many kilobytes per second are downloaded through the proxy. Norm: at least 500 Kbps
- Stability: check 3-5 consecutive requests: latency should not fluctuate significantly (a spread of more than 50% is a bad sign)
Example of an extended speed check:
```python
import time

import requests


def check_proxy_speed(proxy_url):
    """Check speed and stability with 5 consecutive requests."""
    latencies = []
    for _ in range(5):
        try:
            start = time.time()
            requests.get('http://httpbin.org/ip',
                         proxies={'http': proxy_url, 'https': proxy_url},
                         timeout=10)
            latencies.append((time.time() - start) * 1000)
            time.sleep(0.5)  # pause between requests
        except requests.RequestException:
            return None  # any failed request disqualifies the proxy

    avg_latency = sum(latencies) / len(latencies)
    # Spread between fastest and slowest request, as % of the average
    stability = (max(latencies) - min(latencies)) / avg_latency * 100

    return {
        'avg_latency': round(avg_latency, 2),
        'stability': round(stability, 2),  # % variation
        'status': 'good' if avg_latency < 3000 and stability < 50 else 'slow'
    }
```
Geolocation check
For multi-accounting, it is critical that the geolocation of the proxy matches the account data. If you manage an account for a Moscow company through a proxy from Vladivostok, that is a red flag for the platform. Use the ip-api.com service to check geolocation:
```python
import requests


def check_proxy_geo(proxy_url):
    """Check proxy geolocation via ip-api.com."""
    # The proxy and mobile fields are not in the default ip-api.com response,
    # so they must be requested explicitly via the fields parameter
    url = 'http://ip-api.com/json?fields=status,query,country,city,isp,proxy,mobile'
    try:
        response = requests.get(url,
                                proxies={'http': proxy_url, 'https': proxy_url},
                                timeout=10)
        data = response.json()
        return {
            'ip': data.get('query'),
            'country': data.get('country'),
            'city': data.get('city'),
            'isp': data.get('isp'),
            'proxy_type': data.get('proxy'),  # True if a proxy/VPN is detected
            'mobile': data.get('mobile')      # True for mobile IPs
        }
    except requests.RequestException:
        return None
```
Keep geolocation data for each proxy and use it when distributing tasks: accounts from Moscow go through Moscow proxies, regional ads on Avito through proxies of the required city.
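The distribution idea can be sketched as a simple lookup over the stored geo data. The data structures here are illustrative: `proxy_geo` is assumed to map each proxy URL to the dict returned by a geolocation check like the one above.

```python
def pick_proxy_for_city(proxy_geo, required_city):
    """Return proxies whose checked geolocation matches the required city.

    proxy_geo maps a proxy URL to a geolocation dict (e.g., from check_proxy_geo).
    """
    return [proxy for proxy, geo in proxy_geo.items()
            if geo and geo.get('city') == required_city]


# Illustrative data: geo results for three hypothetical proxies
proxy_geo = {
    'http://user:pass@10.0.0.1:8080': {'city': 'Moscow', 'country': 'Russia'},
    'http://user:pass@10.0.0.2:8080': {'city': 'Kazan', 'country': 'Russia'},
    'http://user:pass@10.0.0.3:8080': {'city': 'Moscow', 'country': 'Russia'},
}

moscow_proxies = pick_proxy_for_city(proxy_geo, 'Moscow')
```

In practice you would refresh `proxy_geo` on every health-check cycle so that a proxy whose exit IP silently changed city drops out of the matching set.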
Anonymity check
Proxies come in three levels of anonymity: transparent, anonymous, and elite. For working with Facebook, Instagram, TikTok, and other platforms with anti-fraud protection, only elite proxies will do: they do not transmit headers that reveal proxy usage.
What to check:
- The headers X-Forwarded-For, X-Real-IP, and Via must be absent
- The IP in the response should match the proxy's IP (not your real IP)
- User-Agent should be transmitted unchanged
```python
import requests


def check_proxy_anonymity(proxy_url):
    """Determine the anonymity level of a proxy."""
    try:
        response = requests.get('http://httpbin.org/headers',
                                proxies={'http': proxy_url, 'https': proxy_url},
                                timeout=10)
        headers = response.json()['headers']

        # Headers that reveal proxy usage
        proxy_headers = ['X-Forwarded-For', 'X-Real-Ip', 'Via', 'Proxy-Connection']
        detected_headers = [h for h in proxy_headers if h in headers]

        if not detected_headers:
            return 'elite'        # completely anonymous
        elif 'X-Forwarded-For' not in headers:
            return 'anonymous'    # hides your IP but reveals that it is a proxy
        else:
            return 'transparent'  # passes your real IP
    except requests.RequestException:
        return None
```
For business tasks, use only elite proxies. Mobile proxies by default have elite level, as they use real IPs from mobile operators.
Automatic rotation: replacing faulty proxies
A health check becomes truly useful when it not only checks proxies but also automatically replaces non-working ones with working ones. This is critical for continuous tasks: scraping marketplaces, price monitoring, auto-posting on social media.
Strategy 1: Pool with priorities
Create two lists of proxies: primary (working) and backup. The health check constantly checks the primary pool, and when a non-working proxy is detected, it replaces it with a proxy from the backup pool.
How it works:
- The health check checks all proxies from the primary pool every 30 minutes
- Non-working proxies are moved to a "quarantine" list
- A working proxy from the backup pool is taken and added to the primary
- After 2-4 hours, quarantined proxies are checked again: if they work, they return to the backup pool
Example implementation:
```python
import json
from datetime import datetime, timedelta

import requests


class ProxyPool:
    def __init__(self):
        self.working = []     # primary pool
        self.backup = []      # backup pool
        self.quarantine = {}  # {proxy: timestamp when it entered quarantine}

    def is_proxy_working(self, proxy_url):
        """Basic availability check (same idea as the script above)."""
        try:
            response = requests.get('http://httpbin.org/ip',
                                    proxies={'http': proxy_url, 'https': proxy_url},
                                    timeout=10)
            return response.status_code == 200
        except requests.RequestException:
            return False

    def check_and_rotate(self):
        """Check the primary pool and replace failed proxies."""
        failed_proxies = []

        # Check the primary pool
        for proxy in self.working:
            if not self.is_proxy_working(proxy):
                failed_proxies.append(proxy)
                self.quarantine[proxy] = datetime.now()

        # Remove non-working proxies from the primary pool
        self.working = [p for p in self.working if p not in failed_proxies]

        # Refill from the backup pool as needed
        for _ in range(len(failed_proxies)):
            if self.backup:
                new_proxy = self.backup.pop(0)
                if self.is_proxy_working(new_proxy):
                    self.working.append(new_proxy)

        # Re-check proxies that have been in quarantine for more than 4 hours;
        # those that recovered go to the backup pool, the rest are dropped
        now = datetime.now()
        for proxy, quarantine_time in list(self.quarantine.items()):
            if now - quarantine_time > timedelta(hours=4):
                if self.is_proxy_working(proxy):
                    self.backup.append(proxy)
                del self.quarantine[proxy]

        self.save_state()

    def save_state(self):
        """Persist the pool state to disk."""
        state = {
            'working': self.working,
            'backup': self.backup,
            'quarantine': {k: v.isoformat() for k, v in self.quarantine.items()}
        }
        with open('proxy_pool_state.json', 'w') as f:
            json.dump(state, f)
```
Strategy 2: Round-robin with exclusion
A simpler approach: use all proxies in turn (round-robin), but temporarily exclude a proxy from rotation for 30-60 minutes if it returns an error. Suitable for tasks where speed is important, not perfect stability.
How it works:
- Proxies are selected in a circle: 1, 2, 3, 4, 1, 2, 3, 4...
- If a proxy returns an error, it is excluded for 30 minutes
- After 30 minutes, the proxy automatically returns to rotation
- If a proxy fails 3 times in a row, it is excluded for 4 hours
This method is good for scraping and mass tasks where you can skip a few requests without critical consequences.
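The rotation rules above can be sketched as a small class. This is an illustrative implementation, not a library API; the ban durations are taken from the description, and time is passed in explicitly so the logic is easy to test:

```python
import itertools
import time


class RoundRobinPool:
    """Round-robin proxy selection with temporary exclusion on errors."""

    def __init__(self, proxies, short_ban=30 * 60, long_ban=4 * 3600):
        self.proxies = list(proxies)
        self.banned_until = {}  # proxy -> timestamp when it may return
        self.fail_streak = {}   # proxy -> consecutive failures
        self.short_ban = short_ban
        self.long_ban = long_ban
        self._cycle = itertools.cycle(self.proxies)

    def get_proxy(self, now=None):
        """Return the next proxy that is not currently excluded."""
        now = time.time() if now is None else now
        for _ in range(len(self.proxies)):
            proxy = next(self._cycle)
            if self.banned_until.get(proxy, 0) <= now:
                return proxy
        return None  # every proxy is excluded right now

    def report_failure(self, proxy, now=None):
        now = time.time() if now is None else now
        self.fail_streak[proxy] = self.fail_streak.get(proxy, 0) + 1
        # 3 failures in a row -> long exclusion, otherwise a short one
        ban = self.long_ban if self.fail_streak[proxy] >= 3 else self.short_ban
        self.banned_until[proxy] = now + ban

    def report_success(self, proxy):
        self.fail_streak[proxy] = 0  # a success resets the streak
```

The caller fetches a proxy with `get_proxy()`, then reports the outcome of each request back with `report_success()` or `report_failure()`.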
Strategy 3: Weighted rotation by metrics
An advanced approach: each proxy is assigned a "weight" based on its metrics (speed, stability, request success rate). Proxies with a high weight are used more often, those with a low weight less often. Suitable for critical tasks: multi-accounting, arbitrage.
Weight formula:
```
weight = (success_rate * 0.5) + (speed_score * 0.3) + (uptime * 0.2)
```
where:
- success_rate: % of successful requests in the last hour (0-100)
- speed_score: 100 - (latency / 50), so the faster the proxy, the higher the score
- uptime: % of time the proxy was available over the last 24 hours
Proxies with a weight above 70 are used for critical tasks (logging into accounts), those between 40 and 70 for regular tasks; below 40, a proxy is temporarily excluded.
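The formula and thresholds translate directly into code. One added assumption here: `speed_score` is clamped to the 0-100 range so that a very slow proxy cannot drive the weight negative:

```python
def proxy_weight(success_rate, latency_ms, uptime):
    """Weighted score: success_rate and uptime are 0-100, latency in ms."""
    speed_score = max(0.0, min(100.0, 100 - latency_ms / 50))
    return success_rate * 0.5 + speed_score * 0.3 + uptime * 0.2


def proxy_tier(weight):
    """Map a weight to the usage tier described in the text."""
    if weight > 70:
        return 'critical'   # account logins and other critical tasks
    if weight >= 40:
        return 'regular'
    return 'excluded'       # temporarily removed from rotation


# A fast, reliable proxy: 98% success, 500 ms latency, 99% uptime
# weight = 0.5*98 + 0.3*90 + 0.2*99 = 95.8 -> 'critical'
w = proxy_weight(98, 500, 99)
```

Recomputing weights on every health-check cycle lets the pool adapt automatically: a proxy that starts timing out loses weight and slides out of the critical tier before it causes a ban.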
Ready-made tools for proxy pool health check
If you do not want to write your own script, use ready-made solutions. Many of them have a web interface, API, and integration with popular tools.
1. ProxyChecker by Proxy-Store
A free utility for Windows/Linux with a graphical interface. Checks availability, speed, anonymity, and geolocation. Supports HTTP, HTTPS, SOCKS4/5. Exports results to TXT, CSV, JSON.
Pros: simple interface, fast checking (up to 1000 proxies per minute), filters by country and speed.
Cons: no automatic rotation, needs to be run manually.
2. Proxy Scraper & Checker
An open-source project in Python with automatic collection of free proxies and health check. Suitable for experiments and testing, but not for business (free proxies are unstable).
Pros: free, automatic proxy collection, customizable checks.
Cons: low quality of free proxies, frequent blocks.
3. Proxy Pool Manager (commercial solutions)
Paid services with a full cycle of proxy management: health check, automatic rotation, API, integration with anti-detect browsers (Dolphin Anty, AdsPower, Multilogin). Examples: Bright Data Proxy Manager, Smartproxy Dashboard, Oxylabs Proxy Rotator.
Pros: all-in-one solution, 24/7 support, ready-made integrations.
Cons: high cost (from $50/month), tied to a specific proxy provider.
4. Built-in health check in anti-detect browsers
If you use anti-detect browsers for multi-accounting, many of them have built-in proxy checks:
- Dolphin Anty: checks availability and speed when adding proxies to a profile
- AdsPower: automatic proxy check before launching a profile
- Multilogin: built-in proxy tester with anonymity check
- GoLogin: checks geolocation and IP reputation
These tools are convenient for SMM specialists and arbitrageurs who work with a small number of accounts (up to 50-100). For larger volumes, a custom solution is needed.
| Tool | Type | Functions | For whom |
|---|---|---|---|
| ProxyChecker | Free utility | Availability, speed, anonymity check | Small business, one-time checks |
| Custom script | Open-source | Full customization, automation | Developers, large pools |
| Proxy Manager | Commercial SaaS | Health check, rotation, API, support | Business, critical tasks |
| Anti-detect browsers | Built-in functionality | Basic check when launching a profile | SMM, arbitrage, up to 100 accounts |
Business use cases
Let's discuss specific cases of how a proxy pool health check solves real business tasks.
Case 1: Scraping competitor prices on marketplaces
Task: a seller on Wildberries scrapes prices from 500 competitors every 2 hours to automatically adjust their prices. A pool of 50 proxies is used.
Problem without health check: some proxies are blocked by Wildberries after 100-200 requests, the parser fails with errors, and data is collected incompletely. It is necessary to manually check and replace proxies every 2-3 days.
Solution with health check: every 30 minutes, the system checks all 50 proxies with a request to Wildberries. Non-working ones (status 403, 429, or timeout) are automatically replaced with backup proxies from a pool of 20 backup proxies. The parser always uses only working proxies.
Result: scraping stability increased from 70% to 98%, manual work reduced from 2 hours a day to 10 minutes a week.
Case 2: Multi-accounting for an SMM agency
Task: an SMM agency manages 80 Instagram accounts for clients through Dolphin Anty. Each account is tied to its own proxy (1 account = 1 proxy).
Problem without health check: if a proxy stops working, the manager only learns about it when they cannot log into the client's account. During this time, Instagram may block the account due to "suspicious activity" (sudden IP change).
Solution with health check: every 60 minutes, the system checks all 80 proxies (availability + geolocation). If a proxy does not respond, the manager receives a notification in Telegram, and in Dolphin Anty, the profile settings are automatically updated to a backup proxy from the same city.
Result: the number of account blocks due to proxy issues decreased from 5-7 per month to 0-1. Savings: ~$500/month on account recovery.
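A notification like the one in this case can be sent through the Telegram Bot API `sendMessage` method. This is a minimal sketch: the bot token and chat ID are placeholders (you get them from @BotFather and your own chat with the bot), and the message format is illustrative:

```python
import requests


def build_alert(proxy, error):
    """Compose the alert text for a failed proxy."""
    return (f"Proxy down: {proxy}\n"
            f"Error: {error}\n"
            f"Switching the profile to a backup proxy.")


def send_telegram_alert(bot_token, chat_id, text):
    """Send text to a Telegram chat via the Bot API sendMessage method."""
    url = f"https://api.telegram.org/bot{bot_token}/sendMessage"
    response = requests.post(url, data={'chat_id': chat_id, 'text': text},
                             timeout=10)
    return response.ok


# Usage (placeholder credentials):
# send_telegram_alert('123456:ABC-token', '-1001234567890',
#                     build_alert('10.0.0.5:8080', 'connection timeout'))
```

Hooking `send_telegram_alert` into the health-check loop, so it fires whenever a proxy moves into quarantine, gives the manager the same near-real-time visibility described in the case.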
Case 3: Traffic arbitrage on Facebook Ads
Task: an arbitrageur runs ads with 15 Facebook Ads accounts. Each account uses its own residential proxy from the USA.
Problem without health check: Facebook strictly checks the stability of IPs. If a proxy "jumps" (IP changes or connection drops), the account is put under review or immediately banned. The cost of losing an account: $200-500 (recovery + downtime of campaigns).
Solution with health check: checks every 15 minutes: availability, speed (latency must be stable), anonymity (elite level). If a proxy shows instability (latency variation over 30%), it is excluded from rotation until the reasons are clarified. For critical accounts, only proxies with uptime > 99.5% over the last 24 hours are used.
Result: the number of bans due to proxy issues decreased from 2-3 per month to 0. ROI increased by 15% due to stable campaign performance.
💡 Tip:
For critical tasks (multi-accounting, arbitrage), use residential proxies with high uptime. They are more expensive than datacenter proxies, but stability and low risk of blocks pay off the price difference.
Common mistakes when setting up health check
Let's discuss typical mistakes that reduce the effectiveness of health checks or create new problems.
Mistake 1: Checking too frequently
Problem: checking every 1-5 minutes creates a huge load on proxies and test services. Public services (httpbin.org, ip-api.com) may block your IP for flooding. Moreover, frequent checks consume traffic: with 100 proxies checked every minute, that is 144,000 requests per day.
Solution: for stable proxies, checking every 30-60 minutes is sufficient. For critical tasks β every 15 minutes. Use your own test server instead of public services if frequent checks are needed.
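A minimal own test server can be sketched with Python's standard library alone. It mirrors the `{"origin": ...}` response shape of httpbin.org/ip so the checking scripts above work against it unchanged; host and port are up to you:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer


class IPHandler(BaseHTTPRequestHandler):
    """Respond to any GET with the caller's IP, like httpbin.org/ip."""

    def do_GET(self):
        body = json.dumps({'origin': self.client_address[0]}).encode()
        self.send_response(200)
        self.send_header('Content-Type', 'application/json')
        self.send_header('Content-Length', str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, format, *args):
        pass  # silence per-request logging


def start_test_server(host='0.0.0.0', port=8080):
    """Start the server in a background daemon thread and return it."""
    server = HTTPServer((host, port), IPHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

Pointing the health check at your own endpoint removes the rate-limit risk entirely and lets you check as often as your proxies can tolerate.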
Mistake 2: Checking only availability
Problem: a proxy may respond to requests (status 200 OK) but be slow (latency 10 seconds) or have incorrect geolocation. For business tasks, such a proxy is useless or even dangerous.
Solution: run a comprehensive check covering availability, speed, geolocation, and anonymity. For multi-accounting, geolocation is critical; for scraping, speed matters most; for arbitrage, you need all of them.
Mistake 3: Lack of quarantine
Problem: a proxy may temporarily "drop" due to server reboot or provider issues, but may work again in 1-2 hours. If such proxies are immediately removed from the pool, you lose working IPs.
Solution: use a quarantine system. Non-working proxies are not removed but excluded for 2-4 hours; after that, they are checked again, and if they work, they return to the pool.
Mistake 4: Ignoring stability metrics
Problem: a proxy may be working but unstable: latency jumps from 500 ms to 5000 ms and short periods of downtime occur, causing intermittent failures in your tasks even though each individual availability check passes.
Solution: track stability metrics over a rolling window: latency spread (a variation of more than 50% is a bad sign) and uptime. Exclude proxies with inconsistent performance even if they respond to every single check.
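This fix can be sketched as a rolling latency window per proxy. The 50% threshold follows the article; the window size and the class itself are illustrative:

```python
from collections import defaultdict, deque


class StabilityTracker:
    """Track recent latencies per proxy and flag unstable ones."""

    def __init__(self, window=10, max_spread_pct=50):
        self.max_spread_pct = max_spread_pct
        # Keep only the last `window` latency samples per proxy
        self.latencies = defaultdict(lambda: deque(maxlen=window))

    def record(self, proxy, latency_ms):
        self.latencies[proxy].append(latency_ms)

    def spread_pct(self, proxy):
        """Spread between fastest and slowest request, as % of the average."""
        samples = self.latencies[proxy]
        if len(samples) < 2:
            return 0.0  # not enough data to judge stability
        avg = sum(samples) / len(samples)
        return (max(samples) - min(samples)) / avg * 100

    def is_stable(self, proxy):
        return self.spread_pct(proxy) <= self.max_spread_pct
```

Feed every measured latency into `record()` during normal operation, and have the health check consult `is_stable()` alongside the availability result when deciding whether a proxy stays in rotation.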