| URL | Inserted at | Status | HTML Title | External Links | Google Analytics? | Click to crawl Url | Actions |
|---|---|---|---|---|---|---|---|
| http://5pider.com.br | 11 months ago | done | 5pider – Servidores Amazon e Infraestrutura de TI | 103 | n/a | Crawl Url | Delete |
| http://www.sportket.com | 1 year ago | done | 江苏快3形态走势图 | 73 | n/a | Crawl Url | Delete |
| http://www.ee.ee | 1 year ago | done | ee.ee | 0 | n/a | Crawl Url | Delete |
| http://www.taniarascia.com/ | 1 year ago | done | Tania Rascia – Web Design and Development | 112 | Yes | Crawl Url | Delete |
| http://oliseglobalagency.org | 2 years ago | done | Olise Global Home | 12 | n/a | Crawl Url | Delete |
| http://jibsengineering.com | 2 years ago | done | JIBS Engineering Service Ltd - Home | 26 | n/a | Crawl Url | Delete |
| http://www.havecv.com/suleiman | 2 years ago | done | Suleiman A Mamman | 25 | n/a | Crawl Url | Delete |
| http://smarbly.com | 2 years ago | done | Smarbly | 14 | n/a | Crawl Url | Delete |
| http://www.havecv.com | 2 years ago | done | HaveCv | 28 | n/a | Crawl Url | Delete |
| http://safsms.com/blog/ | 2 years ago | done | SAFSMS Blog \| by FlexiSAF | 38 | Yes | Crawl Url | Delete |
Web crawling, or spidering, is the process a search engine crawler performs while looking for significant paths or links, starting from the index page of a website.
Web crawlers can be used to gather specific kinds of information from Web pages, such as harvesting e-mail addresses (usually for spam) or finding pages that link to a specific website or app of your choice. Crawlers can also automate maintenance tasks on a Web site, such as checking links or validating HTML code.
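The core of any such crawler is extracting the links from a fetched page. A minimal sketch using only Python's standard library (the `LinkExtractor` class name is my own; a real crawler would feed it HTML downloaded over HTTP):

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect the href of every <a> tag found on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

# In practice the HTML would come from an HTTP response body.
html = '<a href="http://example.com">Home</a> <a href="/about">About</a>'
parser = LinkExtractor()
parser.feed(html)
print(parser.links)  # ['http://example.com', '/about']
```

Counting the "External Links" column in the demo table above would then just be a matter of keeping the hrefs whose hostname differs from the crawled site's own.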
In the tutorial demo above, I created a simple web crawler that does the following:
The system allows the user to insert URLs via a form. After submission, the URLs are saved to a MySQL table.
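The save step can be sketched as follows. The tutorial stores the data in MySQL; sqlite3 is used here only to keep the example self-contained, and the table and column names are assumptions based on the description:

```python
import sqlite3

# Assumed schema: every submitted URL starts with the status 'new'.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE urls (
        id INTEGER PRIMARY KEY,
        url TEXT NOT NULL,
        status TEXT NOT NULL DEFAULT 'new',
        inserted_at TEXT DEFAULT CURRENT_TIMESTAMP
    )
""")

def save_url(conn, url):
    """Insert a URL submitted through the form."""
    conn.execute("INSERT INTO urls (url) VALUES (?)", (url,))
    conn.commit()

save_url(conn, "http://www.taniarascia.com/")
row = conn.execute("SELECT url, status FROM urls").fetchone()
print(row)  # ('http://www.taniarascia.com/', 'new')
```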
Once a URL is saved, the user can see it in a table view and can also delete it. Every URL starts with the default status “new”. When crawling of a URL is completed, the status changes to “done”.
During the crawling process, the status changes to “crawling”. The table shows the status of each URL, and the user can filter by status.
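The status lifecycle described above (new → crawling → done, or “crawling failed” on error) can be expressed as a small transition table; the function and dictionary names here are my own, not part of the tutorial code:

```python
# Allowed status transitions, as described in the text.
VALID_TRANSITIONS = {
    "new": {"crawling"},
    "crawling": {"done", "crawling failed"},
}

def advance_status(current, new):
    """Move a URL to a new status, rejecting impossible jumps."""
    if new not in VALID_TRANSITIONS.get(current, set()):
        raise ValueError(f"cannot go from {current!r} to {new!r}")
    return new

status = advance_status("new", "crawling")
status = advance_status(status, "done")
print(status)  # done
```

Guarding transitions this way prevents, for example, a URL jumping straight from “new” to “done” without ever being crawled.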
The result of each crawl is stored in the database table “urls_metrics”. When the metrics cannot be fetched (e.g. because the URL is offline), the URL’s status changes to “crawling failed”. The Google Analytics column shows “n/a” if the URL doesn’t use Google Analytics. The system also lets the user fetch all URLs with the status “new”, “crawling”, or “done”.
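One way the Google Analytics check could work is to scan the fetched HTML for well-known tracking-script markers. This is a sketch under that assumption, not the tutorial's actual detection code, and the marker list is illustrative rather than exhaustive:

```python
def google_analytics_result(page_html):
    """Return 'Yes' if a Google Analytics snippet appears in the page,
    'n/a' otherwise, matching the column in the demo table."""
    markers = (
        "google-analytics.com/analytics.js",  # analytics.js loader
        "googletagmanager.com/gtag",          # gtag.js loader
        "_gaq.push",                          # legacy ga.js API call
    )
    return "Yes" if any(m in page_html for m in markers) else "n/a"

with_ga = google_analytics_result(
    '<script src="https://www.google-analytics.com/analytics.js"></script>'
)
without_ga = google_analytics_result("<p>no tracking here</p>")
print(with_ga, without_ga)  # Yes n/a
```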