1. The simplest Python crawler
The simplest Python crawler is nothing more than a direct call to urllib.request.urlopen(url=<some site>) or requests.get(url=<some site>). For example:
Scraping comic links from 漫客栈 — article link: Using a Python crawler to download comics from 漫客栈
Code and running result:
This is the simplest and most basic kind of Python crawler.
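As a concrete sketch, a minimal crawler of this kind looks like the following, using only the standard library (the URL in the commented-out call is a placeholder):

```python
import urllib.request

def fetch(url: str) -> str:
    """Download a page's raw HTML.

    The equivalent one-liner with the requests library is:
        requests.get(url).text
    """
    with urllib.request.urlopen(url, timeout=10) as resp:
        return resp.read().decode("utf-8", errors="replace")

# html = fetch("https://example.com")  # placeholder URL
# print(html[:200])
```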
2. Python crawlers that need headers
Some sites require fields such as User-Agent and Cookie to be sent along with the request. In that case we add a request header: a dictionary holding the User-Agent, Cookie, and similar fields. For example:
Downloading memes (表情包) — article link: Using a Python crawler to download memes
Without the request header:
With the request header:
Adding the header or not makes a big difference.
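A minimal sketch of the header-carrying version, again with the standard library. The User-Agent string and the commented-out Cookie below are placeholders; copy the real values from your own browser's request in DevTools:

```python
import urllib.request

# Placeholder values: copy the real ones from
# DevTools -> Network -> (any request) -> Request Headers.
HEADERS = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    # "Cookie": "sessionid=...",  # only needed if the site checks login state
}

def fetch_with_headers(url: str) -> str:
    """Download HTML while presenting a normal browser's headers."""
    req = urllib.request.Request(url, headers=HEADERS)
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.read().decode("utf-8", errors="replace")
```

With the requests library the same thing is simply `requests.get(url, headers=HEADERS)`.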
3. The data is under the Network tab
Sometimes the data still cannot be scraped even after adding a request header. In that case, open the Network tab in the browser's developer tools and check the XHR and JS filters; the data you need may well be under one of them. For example:
Scraping 王者荣耀 (Honor of Kings) hero skins — article link: Scraping 王者荣耀 hero skins
If you try the second method above, you will find that even with a request header the data is unreachable. Looking at the page source confirms the data is simply not there, so scraping the HTML cannot work. Press F12, click the JS filter under Network, and refresh with F5: the image download links turn out to live in a JSON file listed under JS.
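Once you have spotted the JSON file's URL in the Network panel, downloading and parsing it is straightforward. A sketch follows; the URL and the "list"/"img" field names are made-up placeholders, so substitute whatever you actually see in DevTools:

```python
import json
import urllib.request

# Hypothetical endpoint spotted under Network -> JS; substitute the real URL.
API_URL = "https://example.com/api/hero_skins.json"

def fetch_json(url: str):
    """Download a URL and parse the response body as JSON."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.loads(resp.read().decode("utf-8"))

def image_links(data) -> list:
    """Pull image URLs out of the parsed JSON (field names are assumptions)."""
    return [item["img"] for item in data.get("list", [])]
```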
4. Dynamically loaded data
Some data is loaded dynamically, as on 网易云音乐 (NetEase Cloud Music). The relevant requests do show up under Network, but they are POST requests and fairly complex to reproduce, so we can use the selenium module to drive a real browser instead. I won't walk through that process here; see the article link: Using selenium to download NetEase Cloud Music.
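For cases where you would rather replay the POST request directly instead of driving a browser, the shape is the same as before, just with a request body. This is a generic sketch with placeholder URL and form fields, not NetEase's real API (which is what makes that site complex and pushes the article toward selenium):

```python
import urllib.parse
import urllib.request

def post_form(url: str, fields: dict) -> str:
    """Replay a form POST seen in DevTools (Network tab)."""
    body = urllib.parse.urlencode(fields).encode("utf-8")
    req = urllib.request.Request(url, data=body, method="POST")
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.read().decode("utf-8", errors="replace")

# Placeholder call; the real URL and fields come from inspecting the page:
# post_form("https://example.com/api/search", {"keyword": "song name"})
```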
5. Summary
I have written separate articles on each of the topics above, which readers can find and read on their own. I am still something of a Python crawler beginner, so the coverage here is not very deep; I hope you will bear with me, and I will keep working at it. If this article helped you, please give it a small like. Thanks!