When scraping marketplaces, automating social media, or collecting data via APIs, choosing the right request-sending strategy is critically important. An incorrect configuration can lead to IP blocks, CAPTCHAs, and wasted time. In this guide, we discuss when to use parallel requests for maximum speed and when to use sequential requests for safety.
Difference between Parallel and Sequential Requests
Sequential requests are when your script or program sends requests one by one: it waits for a response to the first request before sending the second. This is slow but safe and appears as natural as possible to the target site.
Parallel requests are when multiple requests (5, 10, 50, or even hundreds) are sent simultaneously without waiting for responses to previous ones. This is significantly faster but puts a load on the server and may raise suspicions from anti-fraud systems.
Imagine scraping prices from 10,000 products on Wildberries. Sequentially, with a 2-second delay between requests, it would take 20,000 seconds, or about 5.5 hours. With 20 parallel streams, the same job takes roughly 17 minutes. The difference is obvious, but there are nuances.
Important: Parallel requests do not mean "send 1000 requests at once." This is controlled parallelism: for example, 10-50 active streams, each with its own delays. Without control, you will get banned immediately.
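The idea above can be sketched with asyncio: a Semaphore caps how many tasks are in flight at once, and each task pauses between its own requests. `fetch` here is a stub (a short sleep) standing in for a real HTTP call, so the pattern is visible without a network; in practice you would swap in aiohttp or httpx.

```python
import asyncio
import random

MAX_CONCURRENT = 10  # hard cap on simultaneous requests
active = 0           # requests currently in flight (for illustration)
peak = 0             # highest concurrency actually observed

async def fetch(url, sem):
    global active, peak
    async with sem:                      # waits if MAX_CONCURRENT tasks are in flight
        active += 1
        peak = max(peak, active)
        await asyncio.sleep(0.01)        # stands in for the HTTP round trip
        active -= 1
    await asyncio.sleep(random.uniform(0.001, 0.005))  # per-stream pause
    return f"done:{url}"

async def main():
    sem = asyncio.Semaphore(MAX_CONCURRENT)
    urls = [f"https://example.com/item/{i}" for i in range(100)]
    return await asyncio.gather(*(fetch(u, sem) for u in urls))

results = asyncio.run(main())
```

Whatever library you use, the key point is the same: the semaphore, not the number of URLs, decides how many requests run at once.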
Comparison of Methods
| Parameter | Sequential | Parallel |
|---|---|---|
| Speed | Slow (1 request at a time) | Fast (10-100+ simultaneously) |
| Risk of Blocking | Low | Medium-High |
| Proxy Load | Minimal | High |
| Configuration Complexity | Simple | Requires Experience |
| Memory Consumption | Low | High |
| Error Handling | Easier to Track | Harder to Log |
When to Use Parallel Requests
Parallel requests are the choice when speed is critical and the volume of data is large. However, it is important to understand: this only works with the correct proxy setup and load control.
Ideal Scenarios for Parallel Requests
1. Scraping Marketplaces with Large Catalogs
If you need to collect prices from 50,000 products on Wildberries or Ozon, sequential scraping will take days. With 20-30 parallel streams and data center proxies, the task can be completed in a few hours.
Configuration: 20-30 streams, each with a separate IP, with a delay of 1-3 seconds between requests within the stream. Rotate IPs every 100-200 requests.
2. Collecting Data from Public APIs
Many APIs (e.g., weather services, company databases, geolocation services) have limits on requests from a single IP: 100-1000 per day. Parallel requests through a pool of proxies allow you to bypass these limits.
Example: You need to collect data on 10,000 companies via an API. The limit is 500 requests/day per IP. Using 20 proxies in parallel = 10,000 requests in one day instead of 20 days.
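The arithmetic in this example generalizes to a small helper (a sketch; `proxies_needed` is a hypothetical name):

```python
import math

def proxies_needed(total_requests, limit_per_ip, days=1):
    """Minimum proxy pool size to finish within the given number of days."""
    return math.ceil(total_requests / (limit_per_ip * days))

print(proxies_needed(10_000, 500))           # 20 IPs to finish in one day
print(proxies_needed(10_000, 500, days=2))   # 10 IPs if two days are acceptable
```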
3. Checking Resource Availability
If you are checking the availability of websites, monitoring mirrors, or checking server status, parallel requests save hours. Here, simulating human behavior is not necessary; only speed matters.
4. Bulk Proxy Testing
When purchasing large pools of proxies (1000+ IPs), you need to quickly check their functionality, speed, and geolocation. Sequential testing will take hours, while parallel testing takes minutes.
Attention: Parallel requests are NOT suitable for working with protected platforms (Facebook Ads, Instagram API, Google Ads), where simulating real user behavior is important. Use sequential requests there.
Key Requirements for Parallel Requests
- Large pool of proxies (at least 10-20 IPs, better 50-100+)
- Automatic IP rotation on errors
- Control of the number of simultaneous streams (no more than 50-100)
- Delays between requests even within streams (0.5-2 sec)
- Error logging for analyzing blocking reasons
- Retry system for timeouts
When to Use Sequential Requests
Sequential requests are the choice when safety and reliability matter more than speed. They simulate real user behavior and minimize the risk of blocks on protected platforms.
Mandatory Scenarios for Sequential Requests
1. Working with Advertising Accounts
Facebook Ads, TikTok Ads, Google Ads track not only IPs but also behavior patterns. Parallel requests from a single account will immediately raise suspicions. One account = one stream = sequential actions with delays of 5-15 seconds.
Example: You manage 20 Facebook advertising accounts through the anti-detect browser Dolphin Anty. Each account operates in a separate profile with a mobile proxy, and actions are strictly sequential: login → check statistics → adjust bids → logout, with delays of 7-12 seconds between actions.
2. Automating Actions on Social Media
Instagram, TikTok, and VK have strict limits on actions: likes, follows, comments. Exceeding the limits or acting too quickly means a shadowban or a complete block. Use only sequential requests with random delays of 20-60 seconds.
Configuration for Instagram: One account can make a maximum of 60 likes/hour. That's 1 like per minute with delays of 45-75 seconds (randomization is important!). Use a separate proxy for each account.
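As a sketch of this configuration, the loop below performs actions strictly one at a time with a random 45-75 second pause after each; `do_like` and the injectable `sleep` are placeholders so the pattern can be shown without a real client:

```python
import random
import time

MIN_DELAY, MAX_DELAY = 45, 75  # seconds; averages to ~60 s, i.e. ~60 likes/hour

def run_likes(targets, do_like, sleep=time.sleep):
    """Perform actions strictly one at a time with a random pause after each."""
    for target in targets:
        do_like(target)                              # the actual client call
        sleep(random.uniform(MIN_DELAY, MAX_DELAY))  # never a fixed interval
```

Injecting `sleep` also makes the loop easy to dry-run before pointing it at a live account.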
3. Authorization and Working with Personal Accounts
Any actions requiring login (email services, banks, marketplaces as a seller) must be performed sequentially. Parallel login attempts from different IPs to one account are a direct path to blocking.
4. Sites with Strict Anti-Bot Protection
Platforms with Cloudflare, Akamai, PerimeterX analyze not only the frequency of requests but also their patterns. If 10 requests arrive simultaneously from one IP or User-Agent, it is a clear sign of a bot. Sequential requests with delays of 3-10 seconds look natural.
5. Small Volume of Data
If you need to scrape 50-100 pages, the time difference between sequential and parallel scraping is insignificant (5 minutes vs. 1 minute). However, the sequential method guarantees no issues.
Proper Delays for Sequential Requests
| Platform/Task | Delay between Requests | Randomization |
|---|---|---|
| Facebook Ads (actions in the account) | 7-15 seconds | ±30% |
| Instagram (likes, follows) | 45-90 seconds | ±40% |
| TikTok (views, likes) | 30-60 seconds | ±35% |
| Google Ads (API requests) | 5-10 seconds | ±25% |
| Scraping with Cloudflare | 3-7 seconds | ±30% |
| Regular sites without protection | 1-3 seconds | ±20% |
Tip: Randomizing delays is critically important. If your script makes a request exactly every 5.00 seconds, that is a bot pattern. Use a random value from 4 to 7 seconds to simulate a human.
Blocking Risks with Different Methods
Understanding the risks helps choose the right strategy and set up protection. Blocks occur not only due to request frequency but also due to their patterns.
What Anti-Fraud Systems Monitor
1. Request Frequency from One IP
If 100 requests per minute come from one IP, it is an obvious bot. Limits vary: regular sites tolerate 10-30 requests/minute; protected platforms, only 2-5 requests/minute.
Solution for Parallel Requests: Distribute requests across a large pool of IPs. For example, 1000 requests/minute = 50 IPs with 20 requests each. This looks like 50 regular users.
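Round-robin assignment over a proxy pool produces exactly this distribution; a sketch with placeholder proxy URLs:

```python
import itertools
from collections import Counter

# Hypothetical pool of 50 proxy URLs.
proxies = [f"http://proxy-{i}.example.net:8080" for i in range(50)]
rotation = itertools.cycle(proxies)  # endless round-robin iterator

# Assign 1000 requests round-robin: each IP receives exactly 20.
assignments = Counter(next(rotation) for _ in range(1000))
```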
2. Identical Intervals Between Requests
Requests exactly every 2.00 seconds are a sign of automation. A real person clicks with varying intervals: 1.8 sec, 3.2 sec, 2.1 sec.
Solution: Add randomization of ±30-50% to the base delay. Instead of a fixed 5 seconds, use random(3.5, 7.5).
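One way to implement this randomization (a symmetric sketch; `jittered_delay` is a hypothetical helper name):

```python
import random

def jittered_delay(base, pct=0.4):
    """Random delay within ±pct of base, e.g. jittered_delay(5, 0.5) -> 2.5..7.5 s."""
    return random.uniform(base * (1 - pct), base * (1 + pct))
```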
3. Lack of Typical User Behavior
A real user does not go directly to the product page: they first visit the homepage, search for a category, click on a product. A bot requests a specific URL immediately.
Solution for Critical Platforms: Simulate the full path of the user. Before scraping a product, make 2-3 requests: homepage → category → product. This slows down the process but reduces the risk of blocking by 70-80%.
4. Suspicious User-Agent and Headers
An outdated User-Agent (e.g., Chrome 95 in 2024) and the absence of Accept-Language or Referer headers are signs of a bot.
Solution: Use current User-Agents (Chrome 120+, Firefox 120+), add a full set of headers like a real browser. Rotate User-Agent along with IP.
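An illustrative header set resembling a desktop Chrome browser; the version string is an example and should be refreshed periodically, not treated as canonical:

```python
# Example browser-like headers. The Chrome version below is illustrative;
# refresh it periodically and rotate the whole set together with the IP.
HEADERS = {
    "User-Agent": (
        "Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
        "AppleWebKit/537.36 (KHTML, like Gecko) "
        "Chrome/124.0.0.0 Safari/537.36"
    ),
    "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
    "Accept-Language": "en-US,en;q=0.9",
    "Referer": "https://www.example.com/",
    "Connection": "keep-alive",
}
# Usage with the requests library: requests.get(url, headers=HEADERS)
```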
Comparison of Blocking Risks
| Scenario | Risk with Sequential | Risk with Parallel |
|---|---|---|
| Scraping Marketplace (10K requests) | Low (5-10%) | Medium (20-30%) |
| Working with Facebook Ads | Low (2-5%) | Critical (80-95%) |
| Instagram Automation | Medium (15-25%) | High (60-80%) |
| Public APIs (within limits) | Very Low (1-3%) | Low (5-10%) |
| Sites with Cloudflare | Medium (10-20%) | High (40-60%) |
Which Proxies are Suitable for Each Method
The type of proxy directly affects the ability to use parallel or sequential requests. Incorrect choice will lead to blocks or overpayment.
Proxies for Parallel Requests
Data Center Proxies are the optimal choice for mass scraping and parallel requests. They are cheap (from $1-3 per IP/month), fast (ping 20-50 ms), and available in large volumes. The downside is that they are easily identified as proxies, so they are not suitable for protected platforms.
When to Use: Scraping marketplaces, collecting data from public sources, checking resource availability, bulk API requests to services without strict protection.
Configuration: Purchase a pool of 50-100 IPs, set up 20-30 parallel streams, each stream uses its own IP. Rotate every 100-200 requests or on error.
Residential Proxies are more expensive (from $3-7 per 1 GB of traffic) but appear as real users. They are suitable for parallel requests to protected platforms if speed is needed, but with caution.
When to Use: Scraping social media (without authorization), collecting data from sites with Cloudflare, working with platforms that block data centers. For parallel requests, a large pool of IPs with automatic rotation is needed.
Important: When making parallel requests through residential proxies, monitor traffic consumption. 10,000 requests can "consume" 5-10 GB, costing $20-50. Data centers are cheaper: unlimited traffic for $100-200/month for 100 IPs.
Proxies for Sequential Requests
Mobile Proxies are the most reliable type for working with protected platforms. IPs look like real mobile devices (4G/5G operators), minimizing the risk of blocks. The downside is that they are expensive (from $50-150 per IP/month).
When to Use: Facebook Ads, Instagram, TikTok, Google Ads, and anywhere else maximum security and simulation of real user behavior are needed. One account = one mobile proxy = sequential actions.
Configuration: Each advertising account or social media account is tied to a separate mobile IP. Actions are strictly sequential with delays of 10-60 seconds. IP is not rotated (one account always works from one IP).
Residential Proxies are a good alternative to mobile ones if the budget is limited. They are suitable for less critical tasks: scraping with authorization, SMM automation, working with marketplaces as a seller.
When to Use: Managing marketplace accounts (Wildberries, Ozon as a seller), automating posting on social media (not mass), scraping data requiring authorization.
Recommendations for Choosing Proxies
| Task | Proxy Type | Request Method | Number of IPs |
|---|---|---|---|
| Scraping Marketplaces (large volume) | Data Centers | Parallel | 50-100+ |
| Facebook Ads (multi-accounting) | Mobile | Sequential | 1 IP per account |
| Instagram Automation | Mobile/Residential | Sequential | 1 IP per account |
| Scraping with Cloudflare | Residential | Parallel (with caution) | 20-50 |
| Public APIs (bulk collection) | Data Centers | Parallel | 10-30 |
| Marketplaces (seller's personal account) | Residential | Sequential | 1 IP per account |
Optimal Settings: Delays, Streams, Timeouts
Proper configuration of parameters is critical for balancing speed and safety. Too aggressive settings will lead to blocks, while too cautious ones will waste time.
Configuring Parallel Requests
Number of Concurrent Streams
This is a key parameter. Too many streams = overload on proxies and the target server. Too few = low speed.
Recommendations:
- Scraping Marketplaces: 20-50 streams with a pool of 50+ proxies
- Public APIs: 10-30 streams, based on API limits
- Protected Sites: 5-15 streams, more = risk of blocking
- Proxy Testing: 50-100 streams (speed is more important here)
Delays Within Streams
Even when working in parallel, each stream should pause between its requests. This reduces the load on one IP and decreases the risk of blocking.
Recommendations:
- Simple Sites: 0.5-2 seconds between requests in one stream
- Marketplaces: 1-3 seconds with randomization ±30%
- Cloudflare Sites: 2-5 seconds with randomization ±40%
- APIs with Limits: calculate based on the limit (e.g., 100 requests/minute = 0.6 sec/request, make it 1 sec for buffer)
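The limit-to-delay arithmetic from the last bullet as a small helper (hypothetical name; the `buffer` factor is an assumption standing in for the article's rounding of 0.6 s up to 1 s):

```python
def delay_for_limit(requests_per_minute, buffer=1.5):
    """Seconds to wait between requests to stay under a per-minute limit.

    buffer > 1 leaves headroom: 100 req/min gives 0.6 s exactly at the
    limit; the default 1.5x buffer brings that to 0.9 s.
    """
    return 60.0 / requests_per_minute * buffer
```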
Timeouts
The time to wait for a response from the server. Too short a timeout = data loss due to slow responses. Too long = hanging streams.
Recommendations:
- Fast Sites: 10-15 seconds
- Slow Sites/APIs: 20-30 seconds
- Through Residential Proxies: +5-10 seconds (they are slower than data centers)
- Connection Timeout: 5-10 seconds (time to establish a connection)
Retry
In case of errors (timeout, 503, proxy block), you need to repeat the request with a different IP. Without retry, you will lose some data.
Configuration: 2-3 attempts per request, change proxy after each failed attempt, pause 3-5 seconds before retry.
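A minimal sketch of this retry policy; `send` and the proxy list are placeholders for your actual HTTP call and pool:

```python
import time

def fetch_with_retry(url, proxies, send, attempts=3, pause=3.0, sleep=time.sleep):
    """Try a request up to `attempts` times, switching proxy after each failure."""
    last_error = None
    for i in range(attempts):
        proxy = proxies[i % len(proxies)]   # a different IP on every attempt
        try:
            return send(url, proxy)
        except Exception as exc:            # timeout, 503, proxy block, ...
            last_error = exc
            sleep(pause)                    # cool off before the next try
    raise last_error
```

Injecting `send` and `sleep` keeps the policy testable and independent of any particular HTTP library.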
Configuring Sequential Requests
Base Delay Between Requests
Depends on the platform and type of actions. The main rule: simulate a real user.
Platform Recommendations:
- Facebook Ads (navigating between sections of the account): 7-15 seconds
- Instagram (likes): 45-90 seconds, maximum 60 likes/hour
- Instagram (follows): 60-120 seconds, maximum 30 follows/hour
- TikTok (views): 30-60 seconds
- Scraping with Authorization: 3-7 seconds
- Marketplaces (actions in the seller's account): 5-10 seconds
Randomization
Is mandatory for all sequential requests. Use a deviation of ±30-50% from the base delay.
Example: Base delay of 10 seconds with randomization of ±40% means actual delays will be 6-14 seconds (a random value each time).
Timeouts
For sequential requests, longer timeouts are acceptable: only one request waits at a time, so a slow response cannot stall a whole pool of streams.
Recommendations: 30-60 seconds for protected platforms (Facebook, Instagram), 15-30 seconds for regular sites.
Practical Advice: Start with conservative settings (fewer streams, longer delays) and gradually increase aggressiveness while monitoring the error rate. If the error rate exceeds 5-10%, step back.
Tools for Implementing Both Methods
The choice of tool depends on your task and technical skills. For business tasks (arbitrage, SMM, e-commerce), use ready-made solutions without coding. For technical tasks ā libraries and frameworks.
Ready-Made Solutions Without Code (for Business)
Anti-detect Browsers for Multi-Accounting
If you work with advertising accounts or social media, anti-detect browsers are the industry standard. They automatically manage proxies, browser fingerprints, and isolate accounts.
Popular Solutions:
- Dolphin Anty: leader for Facebook/TikTok arbitrage, free plan for 10 profiles, easy proxy setup
- AdsPower: good for e-commerce (Amazon, eBay), has automation through RPA (no code)
- Multilogin: the most expensive ($100+/month), but maximum protection for serious arbitrage
- GoLogin: budget alternative ($25/month), suitable for SMM and small teams
How They Work with Proxies: Create a browser profile → attach a proxy → all actions in this profile go through this IP. One profile = one account = sequential actions. For parallel work, open several profiles simultaneously (each with its own proxy).
Scrapers and Parsers (Ready-Made)
For collecting data from marketplaces and websites, there are ready-made tools with GUI that do not require programming.
- Octoparse: visual parser builder, proxy support, can set up parallel streams through the interface
- ParseHub: similar to Octoparse, free plan for 200 pages, delay settings through GUI
- Scrapy Cloud: cloud service for running Scrapy spiders (requires minimal Python knowledge)
SMM Automation (No Code)
For managing social media, there are services with automation through the interface.
- Jarvee: automation for Instagram, TikTok, Twitter, built-in proxy support, delay settings through GUI (caution: aggressive automation leads to bans)
- Ingramer (Inflact): safe automation for Instagram, works through their proxies
- Combin: targeted follows/likes on Instagram, supports external proxies
Technical Tools (for Developers)
If you write your own scripts for scraping or automation, use proven libraries.
Python (most popular for scraping):
- Requests + threading/asyncio: for simple parallel requests, easy to set up proxies
- aiohttp: asynchronous library for high-parallel requests (1000+ simultaneously)
- Scrapy: scraping framework, built-in support for proxy rotation, middleware for delays
- Selenium: for sites with JavaScript, slower but bypasses many protections
- Playwright: modern alternative to Selenium, faster and more convenient
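As a minimal sketch of the "Requests + threading" approach above: a thread pool where each URL is paired with a proxy up front, so threads never share a rotation iterator. `get` is a stub standing in for `requests.get(url, proxies=...)` so the structure runs offline:

```python
import itertools
from concurrent.futures import ThreadPoolExecutor

# Hypothetical pool: 5 proxy URLs, 20 target pages.
proxies = [f"http://proxy-{i}.example.net:8080" for i in range(5)]
urls = [f"https://example.com/p/{i}" for i in range(20)]
tasks = list(zip(urls, itertools.cycle(proxies)))  # assign proxies up front

def get(url, proxy):
    # Real code: requests.get(url, proxies={"http": proxy, "https": proxy})
    return f"{url} via {proxy}"

with ThreadPoolExecutor(max_workers=5) as pool:
    results = list(pool.map(lambda task: get(*task), tasks))
```

`max_workers` plays the same role here as the stream count discussed earlier: it is the knob that bounds concurrency.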
JavaScript/Node.js:
- Axios: popular library for HTTP requests, simple proxy setup
- Puppeteer: headless Chrome Node.js API, great for scraping dynamic content
- Request-Promise: promise-based HTTP requests, easy to use with proxies (note: deprecated along with the underlying request library; prefer Axios for new projects)