If you are running marketplace scraping overnight, farming Facebook Ads accounts for 8 hours, or automating mass posting on Instagram — you have encountered the problem of session disconnections. The proxy changes the IP, the browser loses cookies, the script crashes after three hours of operation. In this guide, we will discuss how to set up stable long-lasting sessions for tasks that run for several hours to a day without interruption.
What is session management and why is it needed
Session management means maintaining the connection state between your tool (browser, script, bot) and the target service over an extended period. For short tasks — scraping 100 products in 5 minutes — this is not critical. But if the task runs for several hours, it is important to preserve:
- The same IP address — so the site does not suspect device spoofing
- Cookies and localStorage — for authentication and tracking actions
- Browser fingerprint — a set of device characteristics (User-Agent, screen resolution, WebGL)
- Script state — which pages have been processed, where it stopped due to a failure
If even one of these parameters changes mid-task, the site may block the account, interrupt scraping with a captcha, or reset the authentication session.
Typical long-running tasks: farming Facebook Ads accounts (6-12 hours of warming up), scraping all products in a category on Wildberries (3-8 hours), mass posting on 50 Instagram accounts (4-10 hours with delays), monitoring competitor prices 24/7.
Common problems with long sessions
Let's examine what most often breaks long-lasting sessions and leads to task stoppage:
1. Proxy IP rotation
Many proxy services by default change the IP every 5-15 minutes. For scraping without authentication, this is fine, but if you are logged into a Facebook Ads account — changing the IP from Moscow to St. Petersburg in the middle of a session will raise suspicion. The platform will request login confirmation, send a code to your phone, or even block the account for suspicious activity.
Solution: use sticky sessions — a mode where the proxy provides the same IP for 10 minutes, 1 hour, or 24 hours. More details on this in the section below.
2. Connection timeout on the proxy side
Some proxy providers terminate the connection if there is no activity for 10-30 minutes. If your script pauses between actions (for example, simulating a person — reading a product for 5 minutes, then moving to the next), the proxy may close the connection. When trying to continue, the script will receive an error and crash.
Solution: set up keep-alive requests (ping the proxy every 2-3 minutes) or choose a provider without strict timeouts. Residential and mobile proxies usually maintain the connection longer than data centers.
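As a sketch, such keep-alive pings can run in a background thread. The 2.5-minute default interval, the ipify endpoint, and the `start_keepalive` helper name are assumptions here; match the interval to your provider's actual idle timeout:

```python
import threading
import requests

def start_keepalive(proxy_url, interval=150, stop_event=None):
    """Periodically send a lightweight request through the proxy so the
    provider does not drop an idle connection. Returns an Event; call
    .set() on it to stop pinging."""
    stop_event = stop_event or threading.Event()

    def _ping():
        # wait() returns False on timeout, True once the event is set
        while not stop_event.wait(interval):
            try:
                requests.get(
                    "https://api.ipify.org",
                    proxies={"http": proxy_url, "https": proxy_url},
                    timeout=10,
                )
            except requests.RequestException:
                pass  # a single failed ping is not fatal; the next one retries

    threading.Thread(target=_ping, daemon=True).start()
    return stop_event

# Usage: start before the long pause, stop when the task finishes
# keepalive = start_keepalive("http://user:pass@proxy.example.com:8000")
# ... long-running work ...
# keepalive.set()
```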
3. Changing browser fingerprint
If you restart the anti-detect browser or the script creates a new instance of the browser — the fingerprint changes. Even with the same IP, the site will see that the User-Agent, screen resolution, font list, or canvas fingerprint differ from the previous session. This triggers anti-fraud systems.
Solution: save the browser profile (in Dolphin Anty, AdsPower, Multilogin this is done automatically) and do not create a new one for each task launch. If using Selenium or Puppeteer — save the User Data Directory with cookies and settings.
4. Loss of script state on failure
The script has been scraping for 6 hours, processed 8000 products out of 10000, and crashed due to a network error. If progress is not saved — you will have to start from scratch. This is especially critical for tasks lasting 12+ hours.
Solution: save intermediate results to a database or file every N iterations (for example, every 100 products). When restarted, the script will continue from the last saved position.
Which proxies are suitable for long-running tasks
Not all types of proxies are equally good for long sessions. Here is a comparison based on stability and IP lifespan:
| Proxy Type | IP Lifespan | Stability | Suitable for |
|---|---|---|---|
| Data centers | Unlimited (static IP) | High, but easily detectable | Scraping without authentication, price monitoring |
| Residential | 10 min — 24 hours (sticky sessions) | Average (depends on the provider) | Account farming, scraping with authentication |
| Mobile | 5-30 minutes (change by operator timer) | Low (frequent IP changes) | Short tasks on social networks, bypassing strict blocks |
| ISP proxies | Unlimited (static residential IP) | Very high | Long tasks with authentication, farming premium accounts |
Recommendations for selection:
- For scraping marketplaces without authentication (Wildberries, Ozon, Yandex.Market) — data centers with static IPs are suitable. They are cheap, fast, and if the site does not strictly block data centers — they can handle tasks for 12+ hours.
- For farming Facebook Ads, TikTok Ads, Google Ads accounts — only residential or ISP proxies with sticky sessions for 24 hours. Mobile proxies are not suitable due to frequent IP changes.
- For Instagram, TikTok automation — residential proxies with sticky sessions for 1-6 hours. If the task is short (posting on 10 accounts in an hour) — mobile proxies can also work.
- For 24/7 monitoring (tracking competitor prices, news scraping) — ISP proxies or data centers, if the site does not block them.
Important: Mobile proxies are NOT suitable for long-running tasks! The IP changes every 5-30 minutes by the mobile operator's timer, and you cannot control this. Use them only for short tasks (account registration, one-time posting, captcha bypass).
Sticky sessions: how to fix the IP for 24 hours
Sticky sessions are a mode of proxy operation where you receive the same IP address for a specified time: 10 minutes, 1 hour, 6 hours, or 24 hours. This is critical for tasks with authentication.
How sticky sessions work
Typically, sticky sessions are implemented through a session ID in the proxy URL. Instead of the standard format:
```
http://username:password@proxy.example.com:8000
```
You add the session parameter:
```
http://username-session-mysession123:password@proxy.example.com:8000
```
Now all requests with the identifier `mysession123` will go through the same IP until the session lifetime expires (usually 10-30 minutes by default). If a longer session is needed, the provider may offer a lifetime parameter:

```
http://username-session-mysession123-lifetime-1440:password@proxy.example.com:8000
```

Here `lifetime-1440` means 1440 minutes (24 hours).
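As an illustration, a small helper can assemble such URLs. The `-session-` / `-lifetime-` suffix convention shown above is only one common format; some providers use `-sessionduration-` or other separators, so treat this sketch as an assumption and check your provider's documentation:

```python
def sticky_proxy_url(user, password, host, port, session_id, lifetime_min=None):
    """Build a sticky-session proxy URL using the `-session-`/`-lifetime-`
    username-suffix convention (provider-specific; verify the format)."""
    username = f"{user}-session-{session_id}"
    if lifetime_min is not None:
        username += f"-lifetime-{lifetime_min}"
    return f"http://{username}:{password}@{host}:{port}"

# Example: a 24-hour session
# sticky_proxy_url("username", "password", "proxy.example.com", 8000,
#                  "mysession123", lifetime_min=1440)
```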
Setting up sticky sessions in popular services
In residential proxies: most providers support sticky sessions through parameters in the username. Check the format in your provider's documentation. Typical options:
- `username-session-ABC123` — fixes the IP for the default time (10-30 minutes)
- `username-session-ABC123-sessionduration-60` — fixes the IP for 60 minutes
- `username-country-us-session-ABC123` — a US IP with session fixation
In ISP proxies: the IP is usually static by default, so sticky sessions are not needed — you always get the same address until you manually change the proxy.
In data centers: the IP is static, no additional settings are needed.
Example of use in an anti-detect browser
Suppose you are farming a Facebook Ads account in Dolphin Anty. The task is 8 hours of warming up (browsing websites, watching videos, liking). Setup:
- Open the browser profile in Dolphin Anty
- Go to the "Proxy" section
- Select the type: HTTP or SOCKS5
- Enter the proxy host and port
- In the "Login" field, specify: `username-session-farm001-sessionduration-480` (480 minutes = 8 hours)
- Enter the password
- Click "Check Proxy" — make sure the IP is recognized
- Save the profile
Now for 8 hours, all requests from this profile will go through one IP. Even if you close the browser and open it again after an hour — using the same session ID (farm001) will get you the same IP.
Tip: Use descriptive session IDs related to the task, for example `farm-fb-account-001` or `parse-wb-electronics`. This will simplify debugging if you have dozens of parallel tasks.
Setting up anti-detect browsers for long sessions
Anti-detect browsers (Dolphin Anty, AdsPower, Multilogin, GoLogin, Octo Browser) are designed specifically for long-lasting sessions with fingerprint preservation. However, there are setup nuances that are critical for tasks lasting 8+ hours.
1. Saving the browser profile
A browser profile is a set of cookies, localStorage, fingerprint (User-Agent, canvas, WebGL, fonts). All anti-detect browsers automatically save profiles upon closing. The main thing is not to create a new profile for each task launch!
Correct approach:
- Create a profile once for a specific task (for example, "Farm FB account #1")
- Set up the proxy with a sticky session
- Perform the first run, log into the account
- Close the browser — the profile will be saved
- On the next launch, open THE SAME profile — authentication and fingerprint will be preserved
Incorrect approach:
- Create a new profile every day for the same task
- Manually delete cookies between launches
- Change the fingerprint (User-Agent, screen resolution) in the middle of the task
2. Configuring fingerprint for stability
For long tasks, choose a REALISTIC fingerprint that matches the proxy. If the proxy is from Russia (Moscow) — do not set the User-Agent from a MacBook Pro with an English locale. Better:
- OS: Windows 10 or 11 (the most popular in Russia)
- Browser: Latest version of Chrome (automatically updates in anti-detect)
- Screen resolution: 1920x1080 (the most common)
- Language: ru-RU, timezone: Europe/Moscow
- WebRTC: disable or spoof to the proxy IP (to prevent leaking the real IP)
In Dolphin Anty and AdsPower, there is a "Create random fingerprint" function — it generates a plausible combination of parameters. For long tasks, this is safer than configuring manually.
3. Disabling automatic updates and reboots
If the task runs for 12 hours, make sure that:
- The computer does not go to sleep (disable in Windows/macOS power settings)
- The antivirus does not reboot the system for updates (postpone updates)
- The anti-detect browser does not update automatically in the middle of the task (disable auto-update in settings or schedule it for nighttime)
4. Using the API of anti-detect browsers for automation
Dolphin Anty, AdsPower, Multilogin provide APIs for managing profiles from scripts. This allows:
- To launch a browser profile from a Python/Node.js script
- To connect to it via Selenium or Puppeteer
- To perform a long task
- To automatically close the profile upon completion
Example of launching a Dolphin Anty profile via API (Python):
```python
import requests
from selenium import webdriver

# Launch the profile via the Dolphin Anty local API
# (automation=1 asks the API to return the remote-debugging port;
# check the current Local API docs for your version)
profile_id = "123456"
response = requests.get(
    f"http://localhost:3001/v1.0/browser_profiles/{profile_id}/start?automation=1"
)
data = response.json()

# Attach Selenium to the already running browser via its debugging port
options = webdriver.ChromeOptions()
options.debugger_address = f"127.0.0.1:{data['automation']['port']}"
driver = webdriver.Chrome(options=options)

# Perform the task
driver.get("https://example.com")
# ... your scraping or automation code ...

# Stop the profile when done
requests.get(f"http://localhost:3001/v1.0/browser_profiles/{profile_id}/stop")
```
This approach ensures that the fingerprint and cookies will be preserved, even if the script crashes — upon restart, you will connect to the same profile.
Automation and state preservation
For tasks lasting 8+ hours, it is critically important to save progress so that in case of failure, you do not have to start from scratch. Let's discuss methods for different tools.
1. Saving progress in a database
If you are scraping 10,000 products from Wildberries, save the results in SQLite, PostgreSQL, or MongoDB after every 50-100 products. The table structure:
```sql
CREATE TABLE parsing_progress (
    id INTEGER PRIMARY KEY,
    url TEXT,
    status TEXT,        -- 'pending', 'completed', 'error'
    data TEXT,          -- JSON with results
    created_at TIMESTAMP
);
```
When starting, the script checks which URLs have not been processed yet (status = 'pending') and continues from there. If the script crashes — upon restart, it will skip already processed products.
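A minimal resume-on-restart sketch with `sqlite3` might look like the following. The helper names (`init_db`, `enqueue`, `pending`, `mark_done`) are illustrative, not from any library:

```python
import sqlite3
import json

def init_db(path=":memory:"):
    """Create the progress table if it does not exist and return the connection."""
    conn = sqlite3.connect(path)
    conn.execute("""CREATE TABLE IF NOT EXISTS parsing_progress (
        id INTEGER PRIMARY KEY,
        url TEXT UNIQUE,
        status TEXT DEFAULT 'pending',
        data TEXT,
        created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP)""")
    return conn

def enqueue(conn, urls):
    # INSERT OR IGNORE: re-running the script does not duplicate URLs
    conn.executemany(
        "INSERT OR IGNORE INTO parsing_progress (url) VALUES (?)",
        [(u,) for u in urls])
    conn.commit()

def pending(conn):
    """URLs still to process -- the resume point after a crash."""
    return [r[0] for r in conn.execute(
        "SELECT url FROM parsing_progress WHERE status = 'pending'")]

def mark_done(conn, url, result):
    conn.execute(
        "UPDATE parsing_progress SET status = 'completed', data = ? WHERE url = ?",
        (json.dumps(result), url))
    conn.commit()
```

On each launch the script calls `enqueue` with the full URL list and then iterates only over `pending(conn)`, so already-completed products are skipped automatically.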
2. Using task queues
For complex tasks (for example, farming 50 Facebook Ads accounts in parallel), use queue systems: Celery (Python), Bull (Node.js), RabbitMQ. The principle:
- Create a list of tasks (50 accounts)
- Each task is independent (its own browser profile, its own proxy)
- Workers take tasks from the queue and execute them
- If a worker crashes — the task returns to the queue and is taken by another worker
This guarantees that no task will be lost, even if some processes crash.
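The requeue-on-failure pattern can be sketched in-process with the standard `queue` module; Celery and Bull add persistence and cross-machine distribution on top of the same idea. The `run_workers` helper below is illustrative:

```python
import queue
import threading

def run_workers(tasks, handler, n_workers=4, max_retries=3):
    """Run handler(task) for each task; a failed task goes back on the
    queue until max_retries, so a crashing worker does not lose it."""
    q = queue.Queue()
    for t in tasks:
        q.put((t, 0))  # (task, attempts so far)
    results, failed = [], []
    lock = threading.Lock()

    def worker():
        while True:
            try:
                task, attempts = q.get_nowait()
            except queue.Empty:
                return  # no more work
            try:
                r = handler(task)
                with lock:
                    results.append(r)
            except Exception:
                if attempts + 1 < max_retries:
                    q.put((task, attempts + 1))  # back on the queue
                else:
                    with lock:
                        failed.append(task)

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results, failed
```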
3. Logging and monitoring
For tasks lasting 12+ hours, set up detailed logging:
- Log every action (opened a page, clicked a button, received data)
- Save screenshots on errors (in Selenium: `driver.save_screenshot('error.png')`)
- Use log levels: INFO for normal actions, WARNING for suspicious situations (captcha, slow loading), ERROR for failures
Example of setting up logging in Python:
```python
import logging

logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(levelname)s - %(message)s',
    handlers=[
        logging.FileHandler('parsing.log'),
        logging.StreamHandler()
    ]
)
logger = logging.getLogger(__name__)

# In the code
logger.info(f"Processed product {product_id}")
logger.warning(f"Slow page loading: {url}")
logger.error(f"Parsing error: {error}")
```
Monitoring and recovery after disconnection
Even with the correct setup of the proxy and browser, the session may disconnect: the network went down, the proxy rebooted, the site issued a captcha. It is important to detect the problem quickly and restore operation.
1. Checking proxy availability
Before starting the task and periodically (every 30-60 minutes), check that the proxy is working:
```python
import requests

def check_proxy(proxy_url):
    try:
        response = requests.get(
            'https://api.ipify.org?format=json',
            proxies={'http': proxy_url, 'https': proxy_url},
            timeout=10
        )
        if response.status_code == 200:
            ip = response.json()['ip']
            logger.info(f"Proxy is working, IP: {ip}")
            return True
    except Exception as e:
        logger.error(f"Proxy is not responding: {e}")
    return False  # non-200 response or exception

# Check before starting
if not check_proxy(proxy_url):
    logger.error("Proxy is unavailable, stopping the task")
    exit(1)
```
2. Handling captchas and blocks
If the site shows a captcha (Google reCAPTCHA, hCaptcha, Cloudflare Turnstile) — the task stops. Solutions:
- Automatic captcha solving: integration with services like 2Captcha, Anti-Captcha, CapMonster. They solve captchas in 10-30 seconds, and the script continues working.
- Change proxy: if the captcha appeared due to a suspicious IP — switch to another proxy from the pool and continue.
- Pause and retry: sometimes captchas appear due to too fast actions. Take a 2-5 minute break, then repeat the request.
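The "change proxy" and "pause and retry" strategies can be combined in one fetch helper. This is a sketch: the `fetch_with_rotation` name, the plain-text `"captcha"` marker check, and the 2-minute cooldown are all assumptions to adapt to the target site:

```python
import itertools
import time
import requests

def fetch_with_rotation(url, proxy_pool, max_attempts=3, cooldown=120):
    """Fetch a URL; if the response looks like a captcha page, pause
    and retry through the next proxy in the pool."""
    proxies = itertools.cycle(proxy_pool)
    for attempt in range(max_attempts):
        proxy = next(proxies)
        resp = requests.get(
            url,
            proxies={"http": proxy, "https": proxy},
            timeout=15,
        )
        # Crude captcha detection -- replace with a check that matches
        # what the target site actually returns
        if "captcha" not in resp.text.lower():
            return resp
        time.sleep(cooldown)  # cool off before retrying from another IP
    raise RuntimeError(f"Captcha on every attempt for {url}")
```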
3. Automatic restart on failure
Wrap the main code in a try-except block and restart the task on error:
```python
import time

max_retries = 3
retry_delay = 60  # seconds

for attempt in range(max_retries):
    try:
        run_parsing()  # main task code
        break          # success -- exit the loop
    except Exception as e:
        logger.error(f"Error on attempt {attempt + 1}: {e}")
        if attempt < max_retries - 1:
            logger.info(f"Restarting in {retry_delay} seconds...")
            time.sleep(retry_delay)
        else:
            logger.error("Maximum number of attempts exceeded, stopping")
            raise
```
4. Notifications about problems
For tasks that run overnight or on weekends, set up notifications for critical errors:
- Telegram bot: sends a message on error (via the python-telegram-bot library)
- Email: via SMTP (smtplib library in Python)
- SMS: via Twilio or similar services
Example of sending a notification in Telegram:
```python
import requests

def send_telegram_alert(message):
    bot_token = "YOUR_BOT_TOKEN"
    chat_id = "YOUR_CHAT_ID"
    url = f"https://api.telegram.org/bot{bot_token}/sendMessage"
    requests.post(url, data={'chat_id': chat_id, 'text': message})

# On error
try:
    run_parsing()
except Exception as e:
    send_telegram_alert(f"⚠️ Parsing error: {e}")
```
Practical use cases
Let's discuss specific tasks and optimal session management setup for each.
Scenario 1: Farming a Facebook Ads account (8 hours of warming up)
Task: Warm up a new Facebook Ads account before launching ads. You need to simulate the behavior of a regular user: logging into Facebook, reading the feed, watching videos, liking, clicking on ads. A total of 8 hours of activity with breaks.
Setup:
- Proxy: Residential with sticky session for 8-12 hours, country — the same as specified in the account (if the account is from the USA — proxy from the USA)
- Browser: Dolphin Anty or AdsPower, create a separate profile for this account
- Fingerprint: Realistic for the country (Windows 10, Chrome, resolution 1920x1080, language en-US for the USA)
- Automation: Script on Selenium with random delays (5-15 minutes between actions), simulating scrolling and mouse movement
- Progress saving: Logging all actions to a file to continue from the last point in case of failure
Risks: Changing the IP in the middle of the session — Facebook will request login confirmation. Too fast actions — the account will come under suspicion.
Scenario 2: Scraping all products in a category on Wildberries (6 hours)
Task: Scrape all products in the "Electronics" category on Wildberries (about 50,000 products). You need to get the name, price, rating, and number of reviews. Scraping is done without authentication.
Setup:
- Proxy: Data center with a static IP (Wildberries usually does not strictly block data centers) or residential with a sticky session for 6+ hours
- Browser: Not mandatory, you can use requests + BeautifulSoup (faster) or Selenium (if the site is JavaScript-based)
- Progress saving: SQLite database, save every 100 products. On restart, skip already processed ones.
- Error handling: If a product fails to load (404, timeout) — skip and continue, log the error
Risks: Wildberries may show a captcha with too frequent requests. Solution — add a delay of 1-3 seconds between products or use a proxy pool with rotation.
Scenario 3: Mass posting on 30 Instagram accounts (5 hours)
Task: Post the same post on 30 client Instagram accounts. Each account has its own text and hashtags. This needs to be done with delays to avoid looking like spam.
Setup:
- Proxy: Residential with sticky session for 1-2 hours, each account has its own proxy (to avoid accounts being linked by IP)
- Browser: Dolphin Anty, create 30 profiles (one for each account), each with its own proxy
- Automation: The script launches profiles sequentially, posts via Instagram Web or API, and closes the profile. Delay between accounts — 10-15 minutes.
- Progress saving: List of accounts in CSV, marking status (posted/pending/error)
Risks: Instagram may block the account for mass actions. Solution — add random delays, simulate human behavior (scrolling the feed before posting).
Scenario 4: Monitoring competitor prices on Ozon 24/7
Task: Monitor prices of 500 competitor products on Ozon every hour, recording changes in a database. The task runs continuously.
Setup:
- Proxy: ISP proxy with a static IP (never changes) or a data center
- Automation: Cron job (Linux) or Task Scheduler (Windows), runs the script every hour
- Data saving: PostgreSQL or MySQL, table with fields: product_id, price, timestamp
- Error handling: If Ozon is unavailable (500 error) — skip the iteration, log it, repeat in an hour
Risks: Ozon may block the IP with too frequent requests. Solution — use a pool of 3-5 proxies with rotation.
Conclusion
Session management for long tasks is a combination of the right proxy choice, anti-detect browser setup, and reliable automation with progress preservation. Key points:
- For tasks with authentication (account farming, working with ad accounts), use residential or ISP proxies with sticky sessions for 6-24 hours
- For scraping without authentication, data centers with static IPs are suitable — they are cheaper and faster
- Mobile proxies are NOT suitable for long-running tasks due to frequent IP changes
- Preserve the browser profile and do not change the fingerprint in the middle of the task
- Be sure to log progress and set up automatic restarts on failures
- For critical tasks, set up notifications for problems (Telegram, email)
If you plan to run tasks for 8+ hours with authentication (account farming, social media automation, working with ad accounts), we recommend trying residential proxies with sticky session support — they provide a stable IP throughout the session and minimal risk of blocks. For scraping marketplaces and monitoring prices without authentication, data center proxies are suitable — they are faster and cheaper with the same connection stability.