Multifarious Scrapy examples. Spiders for alexa / amazon / douban / douyu / github / linkedin etc.


Multifarious Scrapy examples with integrated proxies and user agents, which make it comfortable to write a spider.

Don't use it to do anything illegal!

Real spider example: doubanbook


git clone
cd scrapy-examples/doubanbook
scrapy crawl doubanbook


The spider crawls at several depths, and only scrapes real data at depth2.

  • Depth0: The entrance is
  • Depth1: Urls like 外国文学 ("Foreign Literature") from depth0
  • Depth2: Urls like from depth1
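The traversal above can be sketched as a plain breadth-first walk. The link graph below is hypothetical, standing in for the real douban tag and book pages, and only depth-2 pages yield data, matching the spider's behavior:

```python
from collections import deque

# Hypothetical in-memory link graph standing in for the real pages:
# depth0 (tag index) -> depth1 (tag pages) -> depth2 (book pages).
LINKS = {
    'tag_index': ['tag/foreign_literature'],
    'tag/foreign_literature': ['book/1', 'book/2'],
}

def crawl(entry):
    """Breadth-first walk that only 'scrapes' pages at depth 2."""
    queue = deque([(entry, 0)])
    scraped = []
    while queue:
        url, depth = queue.popleft()
        if depth == 2:
            scraped.append(url)  # real data would be extracted here
            continue
        for nxt in LINKS.get(url, []):
            queue.append((nxt, depth + 1))
    return scraped
```

Starting at the tag index, only the two book pages at depth2 are collected.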

Example image: douban book

Available Spiders

  • tutorial
    • dmoz_item
    • douban_book
    • page_recorder
    • douban_tag_book
  • doubanbook
  • linkedin
  • hrtencent
  • sis
  • zhihu
  • alexa
    • alexa


  • Use parse_with_rules to write a spider quickly.
    See the dmoz spider for more details.

  • Proxies

    • If you don't want to use a proxy, just comment out the proxy middleware in settings.
    • If you want to customize it, hack misc/ by yourself.
  • Notice

    • Don't use parse as your method name; it's used internally by CrawlSpider.
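The proxy toggle described above is just a Scrapy settings entry. A hypothetical settings.py excerpt (the middleware path is illustrative only; check misc/ for the actual class name):

```python
# Hypothetical settings.py excerpt. Commenting the entry out disables
# the proxy middleware; the module path below is illustrative only.
DOWNLOADER_MIDDLEWARES = {
    # 'misc.middleware.CustomHttpProxyMiddleware': 400,
}
```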

Advanced Usage

  • Run ./ <PROJECT> to start a new project.
    It will generate most things automatically; the only parts left to write are:
    • PROJECT/PROJECT/spider/

Example of hacking the item and spider definitions

Hacked with additional fields url and description:

from scrapy.item import Item, Field

class exampleItem(Item):
    url = Field()
    name = Field()
    description = Field()

Hacked with start rules and css rules (only the class exampleSpider is shown here):

class exampleSpider(CommonSpider):
    name = "dmoz"
    allowed_domains = [""]
    # The crawler starts on start_urls and follows the valid urls allowed by the rules below.
    start_urls = [
    ]
    rules = [
        Rule(sle(allow=["/Arts/", "/Games/"]), callback='parse_page', follow=True),
    ]

    css_rules = {
        '.directory-url li': {
            '__use': 'dump',  # dump data directly
            '__list': True,   # it's a list
            'url': 'li > a::attr(href)',
            'name': 'a::text',
            'description': 'li::text',
        }
    }

    def parse_page(self, response):
        info('Parse ' + response.url)
        # parse_with_rules is inherited from CommonSpider
        self.parse_with_rules(response, self.css_rules, exampleItem)
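To make the css_rules mapping concrete, here is a simplified, hypothetical sketch of what a parse_with_rules-style helper does. The real implementation lives in this repo's misc package; a tiny fake selector stands in for Scrapy's response so the walk is runnable:

```python
# Simplified, hypothetical sketch of a parse_with_rules-style helper.
# FakeSelector stands in for a Scrapy selector: css() returns children.

class FakeSelector:
    def __init__(self, data):
        self.data = data  # {css_query: [FakeSelector or str, ...]}

    def css(self, query):
        return self.data.get(query, [])

def parse_with_rules(selector, rules):
    """Walk a css_rules dict: outer keys select list nodes, inner keys
    map item fields to css queries; '__'-prefixed keys are control flags."""
    items = []
    for list_css, field_rules in rules.items():
        for node in selector.css(list_css):
            item = {}
            for field, css in field_rules.items():
                if field.startswith('__'):
                    continue  # control keys like __use / __list
                values = node.css(css)
                item[field] = values[0] if values else None
            items.append(item)
    return items
```

Fed a css_rules dict like the one in exampleSpider, each matched list node becomes one item with its url, name, and description fields extracted.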