Scraping With Scrapy: Part 1

This post covers installing Scrapy and starting your first project.

What is Scrapy?

Scrapy is an application framework for crawling web sites and extracting structured data, which can be used for a wide range of useful applications like data mining, information processing, or historical archival. It’s all in Python. Read more here.
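To get a feel for what “extracting structured data” means, here is a toy sketch using only the Python standard library (no Scrapy): parse an HTML page and pull out the title and links. This is an illustration of the kind of work Scrapy automates for you, not Scrapy’s own API.

```python
# Toy structured-data extraction with the standard library's html.parser.
# Scrapy does this (and much more) with selectors; this just shows the idea.
from html.parser import HTMLParser

class TitleLinkExtractor(HTMLParser):
    """Collects the page <title> text and every <a href> target."""

    def __init__(self):
        super().__init__()
        self.in_title = False
        self.title = ""
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self.in_title = True
        elif tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

    def handle_endtag(self, tag):
        if tag == "title":
            self.in_title = False

    def handle_data(self, data):
        if self.in_title:
            self.title += data

html = ('<html><head><title>Demo</title></head>'
        '<body><a href="/a">A</a> <a href="/b">B</a></body></html>')
parser = TitleLinkExtractor()
parser.feed(html)
print(parser.title)   # Demo
print(parser.links)   # ['/a', '/b']
```

In a real project Scrapy fetches the pages for you and hands each response to your spider, so you only write the extraction logic.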

Installing Scrapy

1.   Install the build dependencies and lxml.

sudo apt-get install python-dev
sudo apt-get install libevent-dev
sudo apt-get install libxml2 libxml2-dev libxslt-dev
sudo apt-get install python-lxml

2.   Install twisted

sudo apt-get install python-twisted python-libxml2 python-simplejson

3.   Install pyOpenSSL

tar -zxvf pyOpenSSL-0.13.tar.gz
cd pyOpenSSL-0.13
sudo python setup.py install

# If the build fails with an error like "gcc exited with status 1",
# install the OpenSSL headers and retry:
sudo apt-get update
sudo apt-get install libssl-dev

# On RPM-based systems (Fedora/CentOS), the equivalent packages are:
sudo yum install python-devel libxml2-devel libxslt-devel openssl-devel
sudo yum install pyOpenSSL

4.  Install pycrypto

tar -zxvf pycrypto-2.5.tar.gz
cd pycrypto-2.5
sudo python setup.py install

5.   Install easy_install (if you don’t have easy_install):

sudo apt-get install python-setuptools
6.   Install w3lib

sudo easy_install -U w3lib

7.   Install scrapy

sudo easy_install Scrapy

Creating a project in Scrapy

  scrapy startproject my_first_project 

The directory structure will look like:

my_first_project/
    scrapy.cfg
    my_first_project/
        __init__.py
        items.py
        pipelines.py
        settings.py
        spiders/
            __init__.py

I’ll be covering how to write a simple spider and a crawl spider (which recursively crawls a website) in Part 2 of this Scrapy series.
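Before Part 2, the idea behind a crawl spider can be sketched in a few lines: start at one page and recursively follow links, remembering what you have already visited. The “site” below is just an in-memory dict mapping each URL to the URLs it links to; it is a conceptual illustration, not Scrapy code.

```python
# Toy breadth-first crawler over an in-memory "site" (no network, no Scrapy).
# A real crawl spider does the same walk, but over HTTP responses.
from collections import deque

SITE = {
    "/": ["/about", "/blog"],
    "/about": ["/"],
    "/blog": ["/blog/post-1", "/blog/post-2"],
    "/blog/post-1": ["/blog"],
    "/blog/post-2": ["/", "/blog"],
}

def crawl(start):
    """Visit pages breadth-first; the 'seen' set prevents infinite loops."""
    seen = {start}
    queue = deque([start])
    order = []
    while queue:
        url = queue.popleft()
        order.append(url)
        for link in SITE.get(url, []):
            if link not in seen:
                seen.add(link)
                queue.append(link)
    return order

print(crawl("/"))
# ['/', '/about', '/blog', '/blog/post-1', '/blog/post-2']
```

Scrapy’s CrawlSpider adds link-extraction rules, politeness settings, and concurrency on top of exactly this visit-and-dedupe loop.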

Some useful links: Scraper?, Web Crawler?.




#crawler, #programming, #python, #scrapy, #web-crawler