I am trying to schedule a Scrapy 2.1.0 spider using Scrapyd 1.2:
curl --insecure http://localhost:6800/schedule.json -d project=bid -d spider=test
In theory this should start a crawl of the spider "test" within the project "bid". Instead it outputs the error message:
{"node_name": "spider1", "status": "error", "message": "Scrapy 2.1.0 - no active project\n\nUnknown command: list\n\nUse \"scrapy\" to see available commands\n"}
If I cd into the project directory, the project is there with several spiders, and I can start them directly via "cd /var/spiders/ && scrapy crawl test &".
However, running the same command from any other folder also gives me the "no active project" message:
/var$ scrapy list
Scrapy 2.1.0 - no active project
Unknown command: list
Use "scrapy" to see available commands
This looks like the exact same output I get from Scrapyd, so I suspect that I somehow need to configure the working directory where my projects live.
Scrapyd is running, and I can access the console via the web GUI.
What is the right approach to start the job via scrapyd?
Before you can launch your spider with Scrapyd, you have to deploy the project to Scrapyd first. Scrapyd does not look at directories on your disk; it only knows about projects that have been uploaded to it, which is why scheduling fails even though the project runs fine locally with scrapy crawl.
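The usual way to deploy is with the scrapyd-client package and its scrapyd-deploy command. As a sketch (assuming your project root is /var/spiders/ and your settings module is named bid.settings; adjust both to your actual layout), add a deploy target to the scrapy.cfg in the project root pointing at your Scrapyd instance:

```ini
# scrapy.cfg in the project root (e.g. /var/spiders/scrapy.cfg)
[settings]
default = bid.settings

[deploy]
url = http://localhost:6800/
project = bid
```

Then install the client with "pip install scrapyd-client" and run "scrapyd-deploy" from the project root. This packages the project as an egg and uploads it to Scrapyd via its addversion.json endpoint; after that, the schedule.json call from your question should find the project. The "no active project" message from "scrapy list" is unrelated to Scrapyd's configuration: it simply means the command was run in a directory without a scrapy.cfg.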