Author: etudiant (chawit)
Board: Python
Title: [Question] Newbie web scraper (getting blocked)
Time: Sun Oct 9 22:41:29 2022
Hi everyone, I'm back with another question. I've recently been scraping data from a crowdfunding platform, but quite often several pages in a row come back with nothing, and then a while later they work again. Does anyone know what the problem is? I'm not sure whether it's related to the connection-check screen in the image below; I sometimes hit it too when I click through pages quickly. If it is related, is there a way around it? Thanks!
https://i.imgur.com/GdWEijn.jpg
My code is attached below.
(The idea was to first scrape each project's id from the listing pages, plug those into the project URL to fetch each project's details, and finally export everything to Excel.)
import requests
import bs4
import time
import random
import pandas as pd

# Empty lists to collect the scraped fields
collect_title = []
collect_category = []
collect_goal = []
collect_final = []
collect_people = []

def get_page_info(URL):
    # The cookie passes the age check for an 18+ project on page 9
    headers = {'cookie': 'age_checked_for=12925;',
               'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) '
                             'AppleWebKit/537.36 (KHTML, like Gecko) '
                             'Chrome/104.0.0.0 Safari/537.36'}
    response = requests.get(URL, headers=headers)
    soup = bs4.BeautifulSoup(response.text, "html.parser")
    data = soup.find_all('div', 'text-black project text-neutral-600 mx-4 mb-4 pb-12 relative h-full')
    for t in data:
        href = t.find("a", "block").attrs["href"]  # project path from the <a> tag's href
        link = "https://www.zeczec.com" + href
        response_2 = requests.get(link, headers=headers)
        soup_2 = bs4.BeautifulSoup(response_2.text, "html.parser")  # parse the project page
        main_info = soup_2.find_all('div', 'container lg:my-8')
        for i in main_info:
            category = i.find_all('a', 'underline text-neutral-600 font-bold inline-block')[1].text.strip()
            title = i.find('h2', 'text-xl mt-2 mb-1 leading-relaxed').text.strip()
            final_cash = i.find('div', 'text-2xl font-bold js-sum-raised whitespace-nowrap leading-relaxed').text.strip()
            goal_cash = i.find('div', 'text-xs leading-relaxed').text.strip()
            people = i.find('span', 'js-backers-count').text.strip()
            # The slices strip the label prefixes from the page text
            final = 'Category: {} Title: {} Goal: {} Raised: {} Backers: {}'.format(
                category, title, goal_cash[6:], final_cash[3:], people)
            print(final)
            collect_category.append(category)
            collect_title.append(title)
            collect_goal.append(goal_cash[6:])
            collect_final.append(final_cash[3:])
            collect_people.append(people)  # push into the collect lists
        time.sleep(2)
for i in range(1, 13, 1):
    print("Page " + str(i))
    URL = "https://www.zeczec.com/categories?category=1&page=" + str(i) + "&type=0"
    get_page_info(URL)
    delay_time = [3, 7, 8, 5]
    delay = random.choice(delay_time)  # random pause between listing pages
    time.sleep(delay)
print(len(collect_goal))  # count how many records were scraped
# print(collect_final)
# print(collect_people)
col1 = "Category"
col2 = "Title"
col3 = "Goal amount"
col4 = "Raised amount"
col5 = "Backers"
data = pd.DataFrame({col1: collect_category, col2: collect_title, col3: collect_goal,
                     col4: collect_final, col5: collect_people})  # one column per list
data.to_excel('music.xlsx', sheet_name='sheet1', index=False)
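
Side note on the request setup: each requests.get above opens a fresh connection. Using a requests.Session (my suggestion, not something the site requires) reuses the connection and carries the user-agent and age-check cookie automatically across the listing and project fetches. A minimal sketch, with the UA string shortened as a placeholder:

import requests

# A Session reuses the TCP connection and sends the same headers/cookies on
# every request, so all page and project fetches look like one browser session.
session = requests.Session()
session.headers.update({'user-agent': 'Mozilla/5.0 ...'})  # full UA string as in the script
session.cookies.set('age_checked_for', '12925')
response = session.get('https://www.zeczec.com/categories?category=1&page=1&type=0')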
The screen when the problem hits (several pages in a row suddenly return nothing, like this). Is it requests, or am I just hitting the site too frequently?
https://i.imgur.com/PaN0z1N.jpg
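
For what it's worth, a rate-limit block like the one in the screenshot usually still returns an HTML page, so requests.get "succeeds" and find_all simply matches nothing; that would explain whole pages coming back empty with no error. A small guard you could add, assuming the block shows up as a non-200 status or a page with zero project cards (the selector is the one from the script above):

import requests
import bs4

CARD_CLASSES = 'text-black project text-neutral-600 mx-4 mb-4 pb-12 relative h-full'

def fetch_listing(url, headers):
    """Return the project cards on a listing page, plus a blocked flag."""
    response = requests.get(url, headers=headers, timeout=10)
    soup = bs4.BeautifulSoup(response.text, "html.parser")
    cards = soup.find_all('div', CARD_CLASSES)
    # A 403/429 status, or zero cards on a page that should have some,
    # points at the connection-check interstitial rather than real data.
    blocked = response.status_code != 200 or not cards
    return cards, blocked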
--
※ Posted from: PTT (ptt.cc), IP: 42.72.132.142 (Taiwan)
※ Article URL: https://webptt.com/cn.aspx?n=bbs/Python/M.1665326491.A.116.html
1F:→ surimodo: It's just too frequent. Bump the time.sleep number up. 10/09 23:59
2F:→ cocoaswifty: You basically gave the answer yourself at the end. 10/09 23:59
I see, thanks everyone. I thought intervals like mine were already long enough QQ
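
Follow-up in case anyone finds this later: besides raising the fixed sleeps, wrapping the fetch in a retry with growing waits means a blocked page gets re-fetched instead of silently skipped. A rough sketch; the wait numbers are my guesses, not anything the site documents:

import random
import time
import requests

def get_with_retry(url, headers, max_tries=5):
    """GET a URL, waiting longer after each blocked-looking response."""
    for attempt in range(max_tries):
        response = requests.get(url, headers=headers, timeout=10)
        if response.status_code == 200:
            return response
        # Blocked or rate-limited: back off exponentially before retrying.
        wait = (2 ** attempt) * 5 + random.uniform(0, 3)
        print('Got {}, retrying in {:.0f}s...'.format(response.status_code, wait))
        time.sleep(wait)
    raise RuntimeError('Still blocked after {} tries: {}'.format(max_tries, url))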
※ Edited: etudiant (42.72.132.142 Taiwan), 10/10/2022 02:55:27