I'm stuck. I've used https://www.webshare.io/ and in the beginning I got great results with 100 proxies, around 300 URLs/s including Google.
After a while it slowed down, and now I can't get it above 30 URLs a second.
Now I'm using rotatingproxies.com, a solid speed of 540...
Hi folks :)
I want to make some beer money by sharing some PDFs (behind a paid shortener link) on forums and other community websites.
I spotted 35 forums to do this (not here!), but I don't want to share the links one by one, forum by forum, manually.
I'm searching for some tools to do the job:
Anyone know if it's possible to format the comment poster comments so that it does not post as a single line?
This is a comment. All comments are like this. How do I change it?
I would like to do:
This is a comment.
All comments are like this.
How can I change it...
Hi, all -
I just started using SB last week. I recently set it up on a VPS for long-term scraping.
I went out and got the 10 free Webshare proxies to test. Harvesting URLs from keywords yields zero results and 10 errors.
Webshare says that those public proxies might have been flagged, let's...
Hey guys, I need help.
I have about 20k URLs of local business websites and I want to extract their emails from this huge list. Can ScrapeBox do this, and how accurate is it compared to Hunter.io?
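In principle this kind of extraction is just fetching each page and pattern-matching for addresses. A minimal sketch of that idea (the regex and helper below are my own illustration, not ScrapeBox internals; note that a regex only finds addresses printed on the page, whereas Hunter.io also verifies them):

```python
import re

# Matches common "name@domain.tld" patterns; it will miss obfuscated
# addresses like "name [at] domain" and can pick up false positives.
EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")

def extract_emails(html: str) -> list[str]:
    """Return unique email-looking strings in order of first appearance."""
    seen, found = set(), []
    for match in EMAIL_RE.findall(html):
        if match.lower() not in seen:
            seen.add(match.lower())
            found.append(match)
    return found

page = '<a href="mailto:info@example.com">info@example.com</a> or sales@example.com'
print(extract_emails(page))  # ['info@example.com', 'sales@example.com']
```

For 20k URLs the fetching itself (timeouts, retries, proxies) is the hard part; the matching step stays this simple.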
I'm looking to do my tiered link-building strategy using these tools:
- RankerX - tier 1, tier 2
- GSA SER - tier 2, tier 3, and 4 (maybe)
- ScrapeBox - to scrape web data, and
- SEO Content Machine - to spin articles
May I install/run all these tools on my local desktop? If...
I've been scraping the past few days with everything going as planned, no issues. However, this morning after restarting my computer and ScrapeBox, I am only pulling Google account URLs like the one below...
Maybe I'm missing something, but it looks like there's no option for "deep crawl" in the latest version of scrapebox. Doesn't this make it practically useless? Why would the devs remove the option?
If I'm missing something please let me know.
I do. But it's only working fine and tuned for adult sites like escort sites, cam sites, tube sites and porn blogs. For other niches I'm getting ZERO + ZERO. Am I missing something? My proxies are great and I'm well versed in ScrapeBox (no footprints AT ALL), but what's going on?
I have two...
Hi BHW Users :)
I am very happy today because earlier I didn't know how to find expired domains, but I learned on BlackHatWorld and found 2 old websites with high-quality backlinks. Both domains are related to my niche. I also checked on Archive (dot) Org and both domains look clean
Hello all, first thanks for visiting my thread.
So I have this situation where I'm checking a bulk URL list on google.com manually, using this in Google search:
and I think doing it like this, one by one for each URL, would take too much time if I have...
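The post elides the exact search operator, but assuming it's something like a `site:` index check pasted into Google, the copy-paste step can at least be removed by building the search URLs in bulk (a sketch under that assumption; actually fetching these at scale will trip Google's rate limits, so delays and rotating proxies are still needed):

```python
from urllib.parse import quote_plus

def index_check_url(page_url: str) -> str:
    """Build a Google search URL for a `site:` index check of one page.

    Assumes the manual check is a `site:` query; swap the operator if
    the original check used something else.
    """
    return "https://www.google.com/search?q=" + quote_plus(f"site:{page_url}")

urls = ["example.com/post-1", "example.com/post-2"]
for u in urls:
    print(index_check_url(u))
```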
Can DeCaptcher solve for ScrapeBox > Addons > Google Competition Finder?
If it can, how do I use it? Because I tried checking with 1 proxy + Anti-Captcha and it failed.
For now I use 20 dedicated proxies with a 5s delay per keyword and it works, but it wastes my time if I'm checking 100K keywords. Any suggestions?
What I'm trying to do is search for my niche in every town in the UK, and the info I'm trying to get is URL, email, postcode and phone number. Is ScrapeBox my way to that information? I got some URLs then scraped for emails, but they didn't match up with the URLs, so I had to go through the emails manually...
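The mismatch usually happens when URLs and emails end up as two separate flat lists. Recording which page each address came from keeps the pairing intact, so no manual matching afterwards. A hypothetical sketch of that idea (the regex and helper are my own, not a ScrapeBox feature):

```python
import re

EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")

def emails_by_url(pages: dict[str, str]) -> dict[str, list[str]]:
    """Map each source URL to the emails found in its HTML, preserving
    the url -> email association instead of exporting two flat lists."""
    return {url: sorted(set(EMAIL_RE.findall(html))) for url, html in pages.items()}

pages = {
    "https://plumber.example": "Call us: office@plumber.example",
    "https://cafe.example": "bookings@cafe.example and owner@cafe.example",
}
print(emails_by_url(pages))
```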
I need help scraping approximately 10,000 data files across various known domains. All files are of the same file type. I'd like to find someone with experience using the Grab File feature of ScrapeBox who is comfortable with the process. Please let me know if you're interested and provide a...
Looking for help getting contact information for WAGs (wives and girlfriends of ballers, i.e. professional athletes in the USA).
We would like to gather as much of the following info as possible:
First and Last Name
Instagram User Name
Number of Instagram Followers
I have ScrapeBox, and I am trying to grab all the URLs by crawling 20 levels deep with the "Grab URLs by crawling" feature. The problem is, it only grabs URLs on the same root domain, not the other links or URLs. Is this a basic limitation of the software, or do I...
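If the tool really is restricted to one root domain, the underlying crawl logic isn't: a breadth-first walk can follow links to any domain up to a depth limit. A minimal sketch of that idea (here `link_graph` stands in for real HTTP fetches and link extraction; swap it for a fetcher in practice):

```python
from collections import deque

# Toy "web": each URL maps to the links found on its page. Note the
# links cross domains (a.example -> b.example -> c.example).
link_graph = {
    "https://a.example/": ["https://a.example/page", "https://b.example/"],
    "https://a.example/page": [],
    "https://b.example/": ["https://c.example/"],
    "https://c.example/": [],
}

def crawl(start: str, max_depth: int) -> set[str]:
    """Breadth-first crawl with NO same-domain restriction."""
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        url, depth = queue.popleft()
        if depth >= max_depth:
            continue  # don't expand links beyond the depth limit
        for link in link_graph.get(url, []):
            if link not in seen:
                seen.add(link)
                queue.append((link, depth + 1))
    return seen

print(sorted(crawl("https://a.example/", 2)))
```

With `max_depth=2` this reaches all four URLs, including the off-domain ones a same-root crawl would skip.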