Three methods of data extraction
- Regular expressions (the re library)
- BeautifulSoup (bs4)
- lxml
* Using the page-download function built earlier, we fetch the HTML of the target page. We take https://guojiadiqu.bmcx.com/AFG__guojiayudiqu/ as the example page.
```python
from get_html import download

url = 'https://guojiadiqu.bmcx.com/AFG__guojiayudiqu/'
page_content = download(url)
```
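The download function comes from the get_html module built in the earlier post and is not shown here. For readers following along, this is a minimal sketch of what such a helper might look like, assuming a plain requests-based fetch; the actual module may differ:

```python
# Hypothetical stand-in for get_html.download; the helper built in the
# earlier post may add retries, a User-Agent header, caching, etc.
import requests

def download(url, timeout=10):
    """Fetch a page and return its HTML as text, or None on failure."""
    try:
        resp = requests.get(url, timeout=timeout)
        resp.raise_for_status()
        resp.encoding = resp.apparent_encoding  # guess the page encoding
        return resp.text
    except requests.RequestException:
        return None
```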
* Suppose we want to scrape the country name and its overview from this page. Below we implement the extraction with each of the three methods in turn.
1. Regular expressions
```python
from get_html import download
import re

url = 'https://guojiadiqu.bmcx.com/AFG__guojiayudiqu/'
page_content = download(url)

# note: findall returns a list of all matches
country = re.findall('class="h2dabiaoti">(.*?)</h2>', page_content)
survey_data = re.findall('<tr><td bgcolor="#FFFFFF" id="wzneirong">(.*?)</td></tr>', page_content)
survey_info_list = re.findall('<p> (.*?)</p>', survey_data[0])
survey_info = ''.join(survey_info_list)
print(country[0], survey_info)
```
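A small optional refinement, not part of the original snippet: if the target td content ever spans multiple lines, passing re.S (DOTALL) lets the dot match newlines, and precompiling the pattern avoids rebuilding it on every call. A minimal sketch (extract_survey is just an illustrative name):

```python
import re

# re.S (DOTALL) lets '.' also match newlines, so the pattern keeps
# working even if the <td> content is wrapped across several lines.
survey_pattern = re.compile(
    r'<td bgcolor="#FFFFFF" id="wzneirong">(.*?)</td>', re.S)

def extract_survey(page_content):
    match = survey_pattern.search(page_content)
    return match.group(1) if match else None
```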
2. BeautifulSoup (bs4)
```python
from get_html import download
from bs4 import BeautifulSoup

url = 'https://guojiadiqu.bmcx.com/AFG__guojiayudiqu/'
html = download(url)

# create a BeautifulSoup object
soup = BeautifulSoup(html, "html.parser")

# search the parse tree
country = soup.find(attrs={'class': 'h2dabiaoti'}).text
survey_info = soup.find(attrs={'id': 'wzneirong'}).text
print(country, survey_info)
```
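An equivalent way to express the same lookups, not used in the original, is BeautifulSoup's CSS-selector interface. A short sketch, assuming the same class and id names on the page:

```python
from bs4 import BeautifulSoup
from get_html import download

html = download('https://guojiadiqu.bmcx.com/AFG__guojiayudiqu/')
soup = BeautifulSoup(html, "html.parser")

# select_one() takes a CSS selector instead of an attrs dict;
# get_text(strip=True) trims surrounding whitespace.
country = soup.select_one('.h2dabiaoti').get_text(strip=True)
survey_info = soup.select_one('#wzneirong').get_text(strip=True)
print(country, survey_info)
```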
3. lxml
```python
from get_html import download
from lxml import etree  # parse-tree module

url = 'https://guojiadiqu.bmcx.com/AFG__guojiayudiqu/'
page_content = download(url)

selector = etree.HTML(page_content)  # we can now run XPath queries against it

# xpath() returns a list of matching elements
country_select = selector.xpath('//*[@id="main_content"]/h2')
for country in country_select:
    print(country.text)

survey_select = selector.xpath('//*[@id="wzneirong"]/p')
for survey_content in survey_select:
    print(survey_content.text, end='')
```
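Before looking at the results, one aside that is not in the original code: XPath can also return the text nodes directly via text(), which removes the explicit loops. A brief sketch under the same assumptions about the page's element ids:

```python
from lxml import etree
from get_html import download

page_content = download('https://guojiadiqu.bmcx.com/AFG__guojiayudiqu/')
selector = etree.HTML(page_content)

# ending an XPath expression with text() yields the text nodes as strings
country = selector.xpath('//*[@id="main_content"]/h2/text()')
survey_info = ''.join(selector.xpath('//*[@id="wzneirong"]/p/text()'))
print(country[0] if country else '', survey_info)
```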
Running results:
Finally, here is the performance comparison of the three methods quoted from *Web Scraping with Python*, as shown in the figure below:
For reference only.