
Why does the Site Crawler fail to import my website?

Site Crawler import failures and solutions: FAQs

The Site Crawler quickly imports website structures into Slickplan’s Sitemap Builder. If an import fails, the questions below cover the most common issues and their solutions:

Why am I unable to crawl more than 10,000 pages?

The Site Crawler can process up to 10,000 pages per crawl. To handle larger websites, run the crawler multiple times and import the site in sections.
We recommend keeping each section under a couple of thousand pages. Slickplan can handle sitemaps of 1,000+ pages, but because the app uses JavaScript to edit and move pages, anything over a few thousand pages can slow down your browser.

Why is the Site Crawler importing only a few pages?

This issue is usually caused by custom server security rules or a restrictive firewall. If possible, adjust your firewall settings to whitelist our site crawler’s IP address and User-Agent:

  • IP:
  • User-Agent: SlickplanCrawler/*

Then try again.
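As a sketch only: on a server fronted by nginx, an IP allowlist entry might look like the following. The IP shown is a documentation placeholder, not the actual crawler IP, and User-Agent allowlisting will depend on your specific firewall product.

```nginx
# Hypothetical nginx sketch: let the crawler's IP through an IP-based block.
# 203.0.113.10 is a placeholder; substitute the real crawler IP listed above.
location / {
    allow 203.0.113.10;  # Slickplan Site Crawler (placeholder IP)
    deny  all;           # your existing restriction stays in place for everyone else
}
```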

Why can’t the crawler import my site?

Some websites use the robots exclusion standard (a robots.txt file) to tell web robots which areas of the site to ignore. To have our crawler bypass these rules, enable the “Ignore robots.txt file rules” option in the Import dialog when using the Site Crawler or Google XML importer.
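For context, this is the kind of rule that makes a well-behaved crawler skip pages. The sketch below uses Python’s standard `urllib.robotparser` with a made-up robots.txt; the paths and user agent are illustrative only.

```python
from urllib import robotparser

# Hypothetical robots.txt content: all crawlers are told to skip /private/.
robots_txt = """\
User-agent: *
Disallow: /private/
"""

parser = robotparser.RobotFileParser()
parser.parse(robots_txt.splitlines())

# A crawler that honors robots.txt skips /private/ but may crawl /blog/.
print(parser.can_fetch("SlickplanCrawler/1.0", "https://example.com/private/page"))  # False
print(parser.can_fetch("SlickplanCrawler/1.0", "https://example.com/blog/post"))     # True
```

Enabling “Ignore robots.txt file rules” simply tells the crawler to skip this check and fetch every page it finds.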

Can I import a website that requires a login and password?

Our Site Crawler currently supports only Basic HTTP authentication; form-based login pages are not supported.
If your site uses a form-based login, you may still be able to import it by uploading a standard sitemap.xml file or a WordPress XML export instead.
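To illustrate the distinction: Basic HTTP authentication is just a single request header containing encoded credentials, which is why an automated crawler can supply it, whereas a form-based login involves a site-specific HTML form and session handling. The credentials below are placeholders.

```python
import base64

# Hypothetical credentials; a real crawl would use your site's Basic-auth login.
username, password = "user", "pass"

# Basic auth is one header: "Basic " + base64("username:password").
token = base64.b64encode(f"{username}:{password}".encode()).decode()
auth_header = f"Authorization: Basic {token}"
print(auth_header)  # Authorization: Basic dXNlcjpwYXNz
```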

If you have any issues, please email us, and we will be happy to help.