Author: MOONY135 (谈无慾)
Board: C_Sharp
Subject: Re: [Question] Web crawler for the Gossiping board
Date: Wed Apr 25 22:47:48 2018
※ Quoting l8PeakNeymar (十八尖山内马尔):
: This problem has bothered me for a while,
: since all the tutorials online are for Python or Java.
: I'd like to ask about crawling with a C# console project.
: Whenever I try to crawl boards like Gossiping or sex,
: for example when I request this page:
: https://webptt.com/cn.aspx?n=bbs/Gossiping/M.1234567890.A.D55.html
: what comes back is this instead:
: https://webptt.com/cn.aspx?n=/ask/over18
: I'm trying to work out how to send my over-18 confirmation cookie along with the request.
: I've tried a bunch of classes at random:
: System.Net.Cookie, HttpWebRequest, WebRequest...
: None of them worked, because honestly I don't understand how it's supposed to work.
: Could any board member walk me through it? Much appreciated!
: -----
: Sent from JPTT on my Xiaomi Redmi Note 4.
Here's some Python I wrote back in 2015; no idea whether it still works.
The key part should be that payload line.
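In short: do the over-18 confirmation POST inside a requests session, and the cookie it sets rides along on every later request from that session. A minimal sketch of just that step, with the same URL and payload as the script below (whether webptt.com still accepts this form is untested):

import requests

payload = {'from': '/bbs/Gossiping/index.html', 'yes': 'yes'}
rs = requests.session()
# confirm the over-18 prompt once; the session keeps the resulting cookie
rs.post('https://webptt.com/cn.aspx?n=/ask/over18', data=payload)
# later requests through the same session reach the board directly
page = rs.get('https://webptt.com/cn.aspx?n=bbs/Gossiping/index.html')
print(page.url)  # should no longer be the /ask/over18 address

On ptt.cc itself the prompt boils down to a cookie named over18 set to 1, so presetting that cookie is a common shortcut; I haven't checked whether webptt.com honors it. The same idea carries over to C#: keep one HttpClient with a CookieContainer alive across requests instead of building a fresh request every time.

Anyway, the full 2015 script: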
import csv
import datetime, time
import requests
from bs4 import BeautifulSoup
import Ptt_FileGet  # my own helper module; it saves one article's content to CSV (sketched at the end)
tStart = time.time()
payload = {'from':'/bbs/Gossiping/index.html','yes':'yes'}
rs = requests.session()
index_Page = rs.post('https://webptt.com/cn.aspx?n=/ask/over18',
                     verify=False, data=payload)
index_Page = rs.get("https://webptt.com/cn.aspx?n=bbs/Gossiping/index.html")
soup_index_Page = BeautifulSoup(index_Page.text, "html.parser")  # grab each article's URL link
print("soup_index_Page's Type: ",type(soup_index_Page))
index_tag = soup_index_Page.find_all('a', href=True)
page = index_tag[7].get('href')
page_number_index = page.index('.html')
index_num = int(page[page_number_index-4:page_number_index])  # this slice varies with the URL length
day_today = datetime.datetime.now()
day_minus = day_today + datetime.timedelta(days = -1)
day_yest = day_minus.strftime("%m/%d")[1:]
#day_yest = day_today.strftime("%m/%d")[1:]
URL_filename = 'D:\\Ptt_data\\Gossiping_' + day_today.strftime("%m%d") + '_URL.csv'
URL_file = open(URL_filename, 'w', newline='')
#print("I Will Create One For You")
URL_w = csv.writer(URL_file)
URL_w.writerow(['author', 'date', 'link'])  # header written; link file is ready
# this section stores each article's page link
data_filename = 'D:\\Ptt_data\\Gossiping_' + day_today.strftime("%m%d") + '_data.csv'
data_file = open(data_filename, 'w', newline='')
data_w = csv.writer(data_file)
data_w.writerow([u'作者', u'日期', u'标题', u'价格'])  # header row (author, date, title, price); data file is ready
# this section stores each article's data
data_file.close()
count = 0
PTT_URL = 'https://webptt.com/cn.aspx?n='
print("yesterday is ",day_yest)
day_test = ' ' + day_yest  # Post_date carries a leading space; padding here keeps the comparison simple
print(index_num)
while count == 0:
    try:
        # reuse the session here too, so the over-18 cookie goes out with every page
        res_index = rs.get(PTT_URL + '/bbs/' + 'Gossiping' +
                           '/index' + str(index_num) + '.html')
        soup_index = BeautifulSoup(res_index.text, "html.parser")  # grab each article's URL link
        main_container_index = soup_index.select('.r-ent')
        for link in main_container_index:
            try:  # a deleted article breaks these lookups; just skip it
                Post_author = link.select('div.author')[0].text
                Post_date = link.select('div.date')[0].text
                Post_link = link.find('a')['href']  # this line is the key part; odd, I know
                URL_Link = PTT_URL + Post_link
                data = [[Post_author, Post_date, PTT_URL + Post_link]]
                URL_w.writerows(data)
                if Post_date == day_test:
                    Ptt_FileGet.data_save(PTT_URL + Post_link, data_filename)
            except:
                pass
        if Post_date >= day_test:
            index_num = index_num - 1
            print("================", index_num, "====================")
        else:
            count = 1
            print('The End')
    except:
        pass  # network hiccup: retry the same index page
URL_file.close()  # flush the link CSV once the walk finishes
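For completeness: Ptt_FileGet up there is my own helper module, which I never pasted. Below is a rough reconstruction of what its data_save does. The function name and CSV columns follow the calls in the script, the selectors follow ptt.cc's article layout, and the rest is guesswork, so treat it as a placeholder rather than the original.

# Ptt_FileGet.py: hypothetical reconstruction, not the original module
import csv
import requests
from bs4 import BeautifulSoup

def data_save(article_url, data_filename):
    # note: an over-18 board may need the session cookie here as well
    res = requests.get(article_url)
    soup = BeautifulSoup(res.text, "html.parser")
    # on ptt.cc the author/board/title/time sit in .article-meta-value spans;
    # whether webptt.com mirrors that layout is untested
    meta = [tag.text for tag in soup.select('.article-meta-value')]
    if len(meta) < 4:
        return  # layout not as expected; skip quietly
    author, title, date = meta[0], meta[2], meta[3]
    with open(data_filename, 'a', newline='') as f:
        csv.writer(f).writerow([author, date, title, ''])  # price column left blank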
--
※ Posted from: PTT (ptt.cc), from: 1.169.72.64
※ Article URL: https://webptt.com/cn.aspx?n=bbs/C_Sharp/M.1524667672.A.B80.html