Link Harvesting in Python

I’ve done extensive work with link validation on websites, using a mix of Ruby with Anemone (a spidering library) and Watir (a web automation library).

In this post I’ll cover a similar approach from the Python side, using urllib and BeautifulSoup.  What’s nice about this pairing is how little setup it needs: urllib ships in Python’s standard library, and BeautifulSoup is a single pip install away (pip install beautifulsoup4, plus lxml if you want to use that parser, as the script below does).

I won’t claim that my code is clean or even very good, but in a few hours I put together a Python script that scans all “a” tags on a home page, then iterates over each link found there to collect the links on those pages in turn.  This gives an idea of what can be accomplished and what other interesting variations could be built.

Spidering Script in Python

from bs4 import BeautifulSoup
from urllib.request import urlopen
import re

home_page_urls = []  # List for the homepage links
all_links = []       # List where I'll drop all links later

def scan(base_url, location=0):
    html = urlopen(base_url)
    bt = BeautifulSoup(html.read(), 'lxml')
    links = bt.find_all('a')  # BeautifulSoup grabs all a tags on the page

    for link in links:  # Iterating over each a tag
        a = link.get('href')  # Grabbing the href value; .get() returns None if there isn't one
        if a is None:
            continue
        if re.match(r'^/[a-z0-9]', a):  # Regex matching relative links (e.g. /test.html)
            if location == 0:
                home_page_urls.append(base_url + a)
                all_links.append(base_url + a)
            elif location == 1:
                all_links.append(base_url + a)
        elif re.match(r'https?://\S+', a):  # Full URLs, http or https
            if location == 0:
                home_page_urls.append(a)
                all_links.append(a)
            elif location == 1:
                all_links.append(a)
        print("[*] Total links captured so far: " + str(len(all_links)))

def sub_page_scan(base_url):
    # First get links on the start page
    print("[*] Starting Test on " + base_url)
    scan(base_url)
    for url in home_page_urls:
        scan(url, 1)
    all_link_no_dups = list(set(all_links))
    for link in all_link_no_dups:
        print(link + '\n')

sub_page_scan('http://somesite')

Python Spider Script Breakdown

I built two methods in Python.  The first is the engine that finds all the links on a given URL/page.  Since a site has different layers (home page, sub page, sub-sub page, etc.), I chose to only care about two layers: the homepage and its sub pages.  To distinguish them I used a location parameter, which defaults to 0.  A value of 0 is treated as the homepage; any other value means a sub page.

So when the script is handed a URL, that page is scanned first and treated as the homepage, which populates the homepage list.  That list is iterated over later to collect all the links on each sub page.

I grab the links on each page by having BeautifulSoup collect all the “a” tags:

    bt = BeautifulSoup(html.read(), 'lxml')                                          
    links = bt.find_all('a')

This gives a list of complete a tags, like <a href="/blah.html">blah</a>.  Since I only want to collect the juicy URL, I iterate over this collection with a for loop, pulling the ['href'] attribute off each item; that gives me the URL for each “a” tag that BeautifulSoup found.
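
Here’s a minimal, self-contained sketch of that extraction step (the sample HTML is made up for illustration).  Note that .get('href') is a safer lookup than link['href'], since it returns None instead of raising a KeyError when an a tag has no href:

from bs4 import BeautifulSoup

# Made-up HTML fragment, just to demonstrate the extraction step
sample = '<a href="/blah.html">blah</a> <a name="anchor-only">no href</a>'
soup = BeautifulSoup(sample, 'lxml')

for link in soup.find_all('a'):
    href = link.get('href')  # None if the tag has no href attribute
    if href:
        print(href)  # prints: /blah.html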

Relative vs. Full URLs

For a site with relative paths that start with a “/”, I added some regex to grab those URLs and prepend the base URL to them.  If, however, the link pulled out of the a tag is a full URL (starting with http), I drop it into the list without prepending anything.
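
As an aside, the standard library’s urllib.parse.urljoin handles both cases without any regex, including edge cases like ../ paths and base URLs that end in a page name.  A quick sketch (the URLs here are made up):

from urllib.parse import urljoin

base = 'http://somesite/section/index.html'  # hypothetical base URL

print(urljoin(base, '/test.html'))           # http://somesite/test.html
print(urljoin(base, 'other.html'))           # http://somesite/section/other.html
print(urljoin(base, 'http://elsewhere/x'))   # absolute URLs pass through unchanged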

Those Lists

For convenience, I made two lists: one for the links found on the homepage, and one that accumulates every link found anywhere (homepage and sub pages alike).  The homepage list is iterated over to derive all the links on each sub page, which land in the second list.

Simple stuff using the .append() method in Python.
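
That pattern in isolation:

home_page_urls = []
home_page_urls.append('http://somesite/test.html')  # list now holds one URL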

Controlling the Flow

A second method is used as the controller.  It takes a URL passed in as a parameter and sends it to the first method (without specifying a location), which populates the homepage list with the homepage’s links.

Once the homepage list is populated, the controller iterates over it, passing each link back into the first method to scan that page for more links.  This time it specifies the location as 1, meaning a sub page, so the results only accumulate in the combined list, “all_links.”

Output

The final output is a list of unique (non-repeating) URLs, produced by converting the list to a set, which discards duplicates, and then back to a list.
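
Here’s that dedup step on its own.  One caveat worth knowing: set doesn’t preserve the original order, so if order matters, dict.fromkeys is a common alternative:

links = ['http://a', 'http://b', 'http://a']

unique = list(set(links))             # duplicates gone, order not guaranteed
ordered = list(dict.fromkeys(links))  # keeps first-seen order (Python 3.7+)

print(unique)
print(ordered)  # ['http://a', 'http://b']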

Improvement

I’m sure this script has loads of areas that can be improved.  One that bothers me is the nested “if” statements; they should really be pulled out into a method call rather than nesting multiple ifs, which is somewhat dirty.
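
One possible refactor, just a sketch and not part of the original script: pull the append logic into a small helper (hypothetically called record_link) so scan stays flat:

def record_link(url, location):
    # Hypothetical helper: homepage links (location 0) go in both lists,
    # sub-page links only in the combined list.
    if location == 0:
        home_page_urls.append(url)
    all_links.append(url)

# Inside scan(), the nested ifs then collapse to:
#     if re.match(r'^/[a-z0-9]', a):
#         record_link(base_url + a, location)
#     elif re.match(r'https?://\S+', a):
#         record_link(a, location)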

What Python was Used/Learned

The following Python concepts were used in this exercise:

  • Beautiful Soup module
  • Regular Expressions
  • urllib
  • Lists
  • Methods
  • Parameters (including Default values)
  • If/Else control logic
  • “Set” to remove duplicate values from a list

 
