Python Login and Scrape Taobao: Code Example. This article shares code for logging into Taobao with Python and scraping item information; it is a decent starting point and worth a look.
#!/usr/bin/env python
# -*- coding:utf-8 -*-
from selenium import webdriver
import time
import datetime
import traceback
import logging
import os
from selenium.webdriver.common.action_chains import ActionChains
import codecs
# Log in
def login(driver, site):
    driver.get(site)
    time.sleep(5)
    try:
        # Click "please log in"
        driver.find_element_by_class_name("h").click()
        time.sleep(5)
        # Enter username and password
        driver.find_element_by_id("TPL_username_1").send_keys(u"yourusername")
        time.sleep(5)
        #print driver.find_element_by_id("TPL_username_1")
        driver.find_element_by_id("TPL_password_1").send_keys(u"yourpsd")
        time.sleep(5)
        # Click the login button
        driver.find_element_by_id("J_SubmitStatic").click()
        time.sleep(30)
    except:
        print(u"failure")
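The fixed time.sleep pauses above are brittle: too short and the page has not finished loading, too long and the crawl wastes minutes. Selenium provides WebDriverWait for condition-based waiting; the underlying idea can be sketched as a small generic polling helper (the wait_for name and its parameters are illustrative, not part of the original code):

```python
import time

def wait_for(condition, timeout=10.0, interval=0.5):
    """Poll condition() until it returns a truthy value or the timeout elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(interval)
    raise TimeoutError("condition not met within %.1f s" % timeout)
```

In the login flow, such a helper could replace each fixed pause with a check that the target element actually exists before clicking it.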
def crawlmarket(driver, filename, site):
    #driver = webdriver.Firefox()
    driver.get(site)
    driver.maximize_window()
    time.sleep(10)
    time.sleep(10)
    test = driver.find_elements_by_xpath("//a[@class='J_ItemLink']")
    # If nothing was found, log in first, then re-query
    if len(test) == 0:
        login(driver, site)
        time.sleep(30)
        test = driver.find_elements_by_xpath("//a[@class='J_ItemLink']")
    resultstrall = ""
    resultstr = ""
    strinfo = ""
    for i in range(0, len(test), 1):
        if test[i].text != "":
            resultstr = test[i].text.strip() + '\n'
            print(resultstr)
            resultstrall += resultstr
    # Did the scrape succeed?
    if resultstrall != "":
        f = codecs.open(filename, 'w', 'utf-8')
        f.write(resultstrall)
        f.close()
    # If not, record the site in the error file
    else:
        strinfo = filename + "," + site
        print(strinfo)
        ferror = codecs.open("", 'a', 'utf-8')
        ferror.write(strinfo)
        ferror.close()
def crawltaobaosousuo(driver, filename, site):
    #driver = webdriver.Firefox()
    driver.get(site)
    driver.maximize_window()
    time.sleep(10)
    driver.get(site)
    time.sleep(30)
    test = driver.find_elements_by_xpath("//a[@class='J_ClickStat']")
    resultstrall = ""
    resultstr = ""
    strinfo = ""
    for i in range(0, len(test), 1):
        if test[i].text != "":
            resultstr = test[i].text.strip() + '\n'
            print(resultstr)
            resultstrall += resultstr
    # Save the results, or record the site in the error file
    if resultstrall != "":
        f = codecs.open(filename, 'w', 'utf-8')
        f.write(resultstrall)
        f.close()
    else:
        strinfo = filename + "," + site
        print(strinfo)
        ferror = codecs.open("", 'a', 'utf-8')
        ferror.write(strinfo)
        ferror.close()
# Parse the task list: each line of 1.txt is "filename,url"
def jiexi(driver):
    f = codecs.open("1.txt", "r", "utf-8")
    for line in f:
        time.sleep(60)
        info = line.split(",")
        href = info[1]
        filename = info[0]
        print(filename)
        if "markets" in href:
            crawlmarket(driver, filename, href)
        else:
            crawltaobaosousuo(driver, filename, href)
    f.close()
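Note that jiexi splits each task line on every comma, so a URL that itself contains a comma would be truncated. A safer split (the parse_task_line helper is hypothetical, not in the original) limits itself to the first comma:

```python
def parse_task_line(line):
    """Split a "filename,url" task line into its two parts.

    Splitting only on the first comma keeps URLs that contain
    commas (e.g. in query strings) intact.
    """
    name, href = line.strip().split(",", 1)
    return name, href
```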
if __name__ == '__main__':
    driver = webdriver.Firefox()
    try:
        jiexi(driver)
    finally:
        # Quit once here rather than inside each crawl function,
        # so the whole task list can run on one browser session.
        driver.quit()
Summary
Suggestions for improvement are welcome. The script can scrape part of Taobao's pages; adapt it to your own needs. Be aware that it can be flagged by Taobao's risk control; personally, I found that scraping without logging in works better.
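Since the script can trip risk control, one common mitigation is to randomize the delay between page loads instead of sleeping a fixed 60 seconds; a minimal sketch (the polite_sleep name and default values are illustrative, not tuned for Taobao):

```python
import random
import time

def polite_sleep(base=30.0, jitter=15.0):
    """Sleep for base plus a random extra of up to jitter seconds,
    so request timing looks less mechanical. Returns the delay used."""
    delay = base + random.uniform(0.0, jitter)
    time.sleep(delay)
    return delay
```

In jiexi, the fixed time.sleep(60) between tasks could be replaced with a call to such a helper.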
That is all of this article's code example for logging into and scraping Taobao with Python; I hope it helps. Interested readers can browse this site's other related topics. If anything is lacking, feel free to point it out in a comment. Thanks for your support!