scrapy-examples

Multifarious Scrapy examples with integrated proxies and user agents, which make it comfortable to write a spider.

Don't use it to do anything illegal!


Real spider example: doubanbook

Tutorial

git clone https://github.com/geekan/scrapy-examples
cd scrapy-examples/doubanbook
scrapy crawl doubanbook

Depth

The spider works through several depth levels and only extracts real data at depth 2, as sketched after the list below.

  • Depth 0: The entrance is http://book.douban.com/tag/
  • Depth 1: URLs like http://book.douban.com/tag/外国文学, found on depth-0 pages
  • Depth 2: URLs like http://book.douban.com/subject/1770782/, found on depth-1 pages
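A minimal sketch of how those depth levels map onto CrawlSpider rules. This is illustrative, not the actual doubanbook code; the class name and parse_book callback are made up:

from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor

class DoubanBookDepthSketch(CrawlSpider):
    name = "doubanbook_sketch"                      # hypothetical name
    allowed_domains = ["book.douban.com"]
    start_urls = ["http://book.douban.com/tag/"]    # depth 0: the tag index

    rules = [
        # depth 1: follow tag pages such as /tag/外国文学, but don't parse them
        Rule(LinkExtractor(allow=[r"/tag/"]), follow=True),
        # depth 2: parse book pages such as /subject/1770782/
        Rule(LinkExtractor(allow=[r"/subject/\d+/"]), callback="parse_book"),
    ]

    def parse_book(self, response):
        # extract a couple of fields to show where the real data comes from
        yield {
            "url": response.url,
            "title": response.css("h1 span::text").extract_first(),
        }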

Example image: douban book


Available Spiders

  • tutorial
    • dmoz_item
    • douban_book
    • page_recorder
    • douban_tag_book
  • doubanbook
  • linkedin
  • hrtencent
  • sis
  • zhihu
  • alexa
    • alexa
    • alexa.cn

Advanced

  • Use parse_with_rules to write a spider quickly.
    See the dmoz spider for more details.

  • Proxies

    • If you don't want to use a proxy, just comment out the proxy middleware in settings.py (a hedged sketch follows this list).
    • If you want to customize it, hack misc/proxy.py yourself.
  • Notice

    • Don't use parse as your callback method name; it's an internal method of CrawlSpider.
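
For reference, disabling the proxy middleware just means commenting its entry out of DOWNLOADER_MIDDLEWARES in the project's settings.py. The class path below is a hypothetical placeholder; use the one from your own generated settings.py:

# settings.py (sketch)
DOWNLOADER_MIDDLEWARES = {
    # Comment this entry out to crawl without a proxy.
    # The class path is illustrative only:
    # 'misc.middleware.CustomHttpProxyMiddleware': 400,
}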

Advanced Usage

  • Run ./startproject.sh <PROJECT> to start a new project.
    It automatically generates most of the project; the only files left for you to write are:
    • PROJECT/PROJECT/items.py
    • PROJECT/PROJECT/spider/spider.py

Example of hacking items.py and spider.py

Hacked items.py with additional fields url and description:

from scrapy.item import Item, Field

class exampleItem(Item):
    url = Field()
    name = Field()
    description = Field()

Hacked spider.py with start rules and css rules (only the class exampleSpider is shown here):

class exampleSpider(CommonSpider):
    name = "dmoz"
    allowed_domains = ["dmoz.org"]
    start_urls = [
        "http://www.dmoz.com/",
    ]
    # The crawler starts from start_urls and follows the links allowed by the rules below.
    # (sle is this repo's alias for Scrapy's link extractor.)
    rules = [
        Rule(sle(allow=["/Arts/", "/Games/"]), callback='parse_page', follow=True),
    ]

    css_rules = {
        '.directory-url li': {
            '__use': 'dump', # dump data directly
            '__list': True, # it's a list
            'url': 'li > a::attr(href)',
            'name': 'a::text',
            'description': 'li::text',
        }
    }

    def parse_page(self, response):
        info('Parse ' + response.url)
        # parse_with_rules is implemented here:
        #   https://github.com/geekan/scrapy-examples/blob/master/misc/spider.py
        return self.parse_with_rules(response, self.css_rules, exampleItem)
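
To run the spider and inspect what the css_rules extract, use Scrapy's feed export from the project directory:

scrapy crawl dmoz -o dmoz.json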