I configured a monitor as described here and tried to run it manually in Scrapinghub.
But then I get the following error:
[root] Job runtime exception
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/site-packages/sh_scrapy/crawl.py", line 148, in _run_usercode
    _run(args, settings)
  File "/usr/local/lib/python2.7/site-packages/sh_scrapy/crawl.py", line 103, in _run
    _run_scrapy(args, settings)
  File "/usr/local/lib/python2.7/site-packages/sh_scrapy/crawl.py", line 111, in _run_scrapy
    execute(settings=settings)
  File "/usr/local/lib/python2.7/site-packages/scrapy/cmdline.py", line 150, in execute
    _run_print_help(parser, _run_command, cmd, args, opts)
  File "/usr/local/lib/python2.7/site-packages/scrapy/cmdline.py", line 90, in _run_print_help
    func(*a, **kw)
  File "/usr/local/lib/python2.7/site-packages/scrapy/cmdline.py", line 157, in _run_command
    cmd.run(args, opts)
  File "/usr/local/lib/python2.7/site-packages/scrapy/commands/crawl.py", line 57, in run
    self.crawler_process.crawl(spname, **opts.spargs)
  File "/usr/local/lib/python2.7/site-packages/scrapy/crawler.py", line 171, in crawl
    crawler = self.create_crawler(crawler_or_spidercls)
  File "/usr/local/lib/python2.7/site-packages/scrapy/crawler.py", line 200, in create_crawler
    return self._create_crawler(crawler_or_spidercls)
  File "/usr/local/lib/python2.7/site-packages/scrapy/crawler.py", line 205, in _create_crawler
    return Crawler(spidercls, self.settings)
  File "/usr/local/lib/python2.7/site-packages/scrapy/crawler.py", line 41, in __init__
    self.stats = load_object(self.settings['STATS_CLASS'])(self)
  File "/usr/local/lib/python2.7/site-packages/scrapy/utils/misc.py", line 44, in load_object
    mod = import_module(module)
  File "/usr/local/lib/python2.7/importlib/__init__.py", line 37, in import_module
    __import__(name)
ImportError: No module named statscollectors
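For what it's worth, a quick way to check whether that dotted path is importable at all in a given environment is to call Scrapy's `load_object` (the same helper that raises above) directly. This is only a sketch, using the class path from the settings below:

```python
# Minimal check: does the STATS_CLASS dotted path resolve in this
# environment? Uses the same helper that fails in the traceback above.
from scrapy.utils.misc import load_object

cls = load_object(
    "spidermon.contrib.stats.statscollectors.LocalStorageStatsHistoryCollector"
)
print(cls)  # prints the collector class if spidermon is importable
```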
DotScrapy Persistence Add-on is enabled.
And in Spiders > Settings:
STATS_CLASS = spidermon.contrib.stats.statscollectors.LocalStorageStatsHistoryCollector
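For context, the equivalent configuration in a project's settings.py would look roughly like this (a sketch assuming the standard spidermon setup; SPIDERMON_ENABLED and the extension entry come from the spidermon docs rather than from the job settings above):

```python
# settings.py -- sketch of the relevant configuration, assuming spidermon
# is installed in the same environment the spider runs in.
SPIDERMON_ENABLED = True

EXTENSIONS = {
    "spidermon.contrib.scrapy.extensions.Spidermon": 500,
}

# Stats collector that keeps stats from previous runs in the .scrapy
# directory (which is why the DotScrapy Persistence add-on is enabled).
STATS_CLASS = (
    "spidermon.contrib.stats.statscollectors.LocalStorageStatsHistoryCollector"
)
```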