
Why does the Site Crawler fail to import my website?

The Site Crawler is a helpful tool that saves you time and simplifies your work. Within a few minutes you can build your sitemap, but what if something goes wrong?

Below are several questions customers frequently ask us:

  1. “Why can’t I crawl more than 10,000 pages?”
    • The Site Crawler is limited to 10,000 pages per crawl; however, you can run the crawler several times and import your website in parts, each part as a separate section (link). Please note that we don’t recommend having more than a couple of thousand pages in one section. Slickplan can handle large sitemaps with 1000+ pages, but once you pass a few thousand you will start to see performance lag while working. The app uses JavaScript frameworks to edit and move pages around, and anything over a few thousand pages is typically too much for the browser to process and reorganize.
  2. “The Site Crawler imports only a few pages from my website. What could be the cause?”
    • This may be caused by custom server security rules or a very restrictive firewall.
      If you can change the firewall settings, or have someone do it for you, please add our site crawler’s IPs or User-Agent to a whitelist and try again:
      – IP:
      – User-Agent: SlickplanCrawler/*
  3. “The crawler can’t import my site. Why?”
    • Some websites use the robots exclusion standard (a robots.txt file), which tells web robots which areas of the website should not be processed or scanned. You can tell our crawler to simply ignore this ruleset. To do so, enable the Ignore robots.txt file option in the Import dialog when using our Site Crawler or Google XML importer.
  4. “Can I import a website that needs login/password?”
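To see how the robots exclusion rules in item 3 work, here is a minimal sketch using Python's standard `urllib.robotparser` module. The robots.txt content, user agent string, and URLs below are illustrative assumptions, not Slickplan's actual values; a crawler that honors robots.txt performs a check like this before fetching each page, and the Ignore robots.txt file option simply skips it.

```python
from urllib.robotparser import RobotFileParser

# Example robots.txt content disallowing one section of a site.
robots_txt = """User-agent: *
Disallow: /private/
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# A robots-respecting crawler would skip the disallowed section...
print(parser.can_fetch("ExampleCrawler/1.0", "https://example.com/private/page"))  # False
# ...but fetch everything else.
print(parser.can_fetch("ExampleCrawler/1.0", "https://example.com/about"))  # True
```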

If you still experience any issues, please send us an email and we will be happy to help.