examples-of-web-crawlers
Some very interesting Python crawler examples that are friendly to beginners, mainly crawling Taobao, Tmall, WeChat, WeChat Reading, Douban, QQ, and other sites.
Top Related Projects
A collection of web crawlers
:rainbow: Python3 web crawlers in practice: Taobao, JD, NetEase Cloud Music, Bilibili, 12306, Douyin, Biquge, comic/novel downloads, music/movie downloads, and more
Python ProxyPool for web spider
😮 Python simulated login for several major websites, plus some simple crawlers. Hope it helps ❤️; if you like it, remember to give it a star 🌟
Learn Python by writing code
Quick Overview
The "examples-of-web-crawlers" repository is a collection of Python web crawlers and scrapers for various popular Chinese websites and services. It includes scripts for automating tasks like downloading images from Zhihu, sending messages on WeChat, and scraping data from Bilibili. The project aims to provide practical examples for learning web scraping techniques.
Pros
- Offers a diverse range of real-world web scraping examples
- Includes detailed documentation and instructions for each crawler
- Provides solutions for interacting with popular Chinese platforms
- Demonstrates various scraping techniques and libraries
Cons
- Primarily focused on Chinese websites, limiting its global applicability
- Some examples may become outdated as target websites change
- Potential ethical concerns regarding automated interactions with social platforms
- May require additional configuration for use outside of China
Code Examples
- Downloading images from Zhihu:
from zhihu_crawler import ZhihuCrawler
crawler = ZhihuCrawler()
crawler.login(account="your_account", password="your_password")
crawler.download_images_by_question_id(question_id="26037846")
- Sending WeChat messages:
from wechat_sender import WeChatSender
sender = WeChatSender()
sender.login()
sender.send_message(to_user="Friend's Name", message="Hello from Python!")
- Scraping Bilibili video information:
from bilibili_crawler import BilibiliCrawler
crawler = BilibiliCrawler()
video_info = crawler.get_video_info(bvid="BV1xx411c7mD")
print(video_info)
Getting Started
To use these web crawlers:
- Clone the repository:
  git clone https://github.com/shengqiangzhang/examples-of-web-crawlers.git
- Install the required dependencies:
  cd examples-of-web-crawlers
  pip install -r requirements.txt
- Navigate to the specific crawler's directory and run its script:
  cd 1.xxx爬虫
  python xxx_crawler.py
Make sure to read the README file in each crawler's directory for specific instructions and requirements.
Competitor Comparisons
A collection of web crawlers
Pros of awesome-spider
- Extensive collection of web crawling resources and projects
- Well-organized with categories for different types of crawlers
- Includes a wide range of programming languages and frameworks
Cons of awesome-spider
- Lacks detailed explanations or tutorials for each project
- Some listed projects may be outdated or no longer maintained
- Doesn't provide ready-to-use code examples for beginners
Code Comparison
examples-of-web-crawlers:
import requests
from bs4 import BeautifulSoup
url = 'https://example.com'
response = requests.get(url)
soup = BeautifulSoup(response.text, 'html.parser')
awesome-spider:
No direct code examples are provided in the repository. It serves as a curated list of web crawling projects and resources rather than offering code snippets.
Summary
examples-of-web-crawlers focuses on providing practical, ready-to-use web crawling examples in Python, making it ideal for beginners and those looking for quick implementation. awesome-spider, on the other hand, offers a comprehensive list of web crawling resources across various languages and frameworks, serving as a valuable reference for developers seeking diverse crawling solutions. While examples-of-web-crawlers provides hands-on code examples, awesome-spider excels in breadth of information and language diversity.
:rainbow: Python3 web crawlers in practice: Taobao, JD, NetEase Cloud Music, Bilibili, 12306, Douyin, Biquge, comic/novel downloads, music/movie downloads, and more
Pros of python-spider
- More comprehensive coverage of web scraping techniques and targets
- Includes advanced topics like anti-crawling measures and distributed crawling
- Better organized with separate folders for different scraping projects
Cons of python-spider
- Less focus on practical, real-world applications compared to examples-of-web-crawlers
- May be overwhelming for beginners due to its extensive scope
- Some examples might be outdated or require maintenance
Code Comparison
examples-of-web-crawlers:
def get_info(self, url):
    html = requests.get(url, headers=self.headers).text
    selector = etree.HTML(html)
    name = selector.xpath('//h1[@class="username"]/text()')[0]
    following = selector.xpath('//span[@class="zm-profile-side-following zg-clear"]/a[1]/strong/text()')[0]
    followers = selector.xpath('//span[@class="zm-profile-side-following zg-clear"]/a[2]/strong/text()')[0]
    return name, following, followers
python-spider:
def parse_one_page(html):
    pattern = re.compile('<dd>.*?board-index.*?>(\d+)</i>.*?data-src="(.*?)".*?name"><a'
                         '.*?>(.*?)</a>.*?star">(.*?)</p>.*?releasetime">(.*?)</p>'
                         '.*?integer">(.*?)</i>.*?fraction">(.*?)</i>.*?</dd>', re.S)
    items = re.findall(pattern, html)
    for item in items:
        yield {
            'index': item[0],
            'image': item[1],
            'title': item[2],
            'actor': item[3].strip()[3:],
            'time': item[4].strip()[5:],
            'score': item[5] + item[6]
        }
The code comparison shows that python-spider uses more advanced parsing techniques with regular expressions, while examples-of-web-crawlers relies on XPath for HTML parsing. python-spider's approach is more flexible but potentially more complex for beginners.
Python ProxyPool for web spider
Pros of proxy_pool
- Focuses specifically on proxy management, providing a robust solution for maintaining and utilizing proxy pools
- Offers a RESTful API for easy integration with other applications
- Includes automatic proxy validation and scoring system
Cons of proxy_pool
- Limited in scope compared to examples-of-web-crawlers, which offers a variety of web scraping examples
- May require additional setup and maintenance for proxy servers
- Less beginner-friendly, as it focuses on a more specialized aspect of web scraping
Code Comparison
examples-of-web-crawlers:
import requests
from bs4 import BeautifulSoup
url = 'https://example.com'
response = requests.get(url)
soup = BeautifulSoup(response.text, 'html.parser')
proxy_pool:
import requests
proxy_api = 'http://127.0.0.1:5010/get/'
proxy = requests.get(proxy_api).json().get("proxy")
url = 'https://example.com'
response = requests.get(url, proxies={"http": "http://{}".format(proxy)})
The code snippets demonstrate the difference in focus between the two projects. examples-of-web-crawlers shows a basic web scraping setup, while proxy_pool emphasizes proxy usage in requests.
😮 Python simulated login for several major websites, plus some simple crawlers. Hope it helps ❤️; if you like it, remember to give it a star 🌟
Pros of awesome-python-login-model
- Focuses specifically on login automation for various websites
- Provides more detailed examples for handling complex login scenarios
- Includes additional utilities like CAPTCHA solving and proxy support
Cons of awesome-python-login-model
- Limited to login-related tasks, less diverse in web scraping examples
- Fewer total examples compared to examples-of-web-crawlers
- Less frequently updated, potentially outdated for some websites
Code Comparison
examples-of-web-crawlers:
import requests
from bs4 import BeautifulSoup
url = 'https://example.com'
response = requests.get(url)
soup = BeautifulSoup(response.text, 'html.parser')
awesome-python-login-model:
import requests
from lxml import etree
session = requests.Session()
login_url = 'https://example.com/login'
response = session.get(login_url)
html = etree.HTML(response.text)
Both repositories use Python for web scraping, but awesome-python-login-model tends to use lxml for parsing, while examples-of-web-crawlers often uses BeautifulSoup. The awesome-python-login-model examples typically involve more complex session handling for maintaining login states.
Overall, examples-of-web-crawlers offers a broader range of web scraping examples, while awesome-python-login-model provides more specialized solutions for automating login processes on various websites.
Learn Python by writing code
Pros of LearnPython
- Broader scope covering various Python topics beyond web crawling
- More structured learning approach with organized chapters
- Includes exercises and projects for hands-on practice
Cons of LearnPython
- Less focused on web crawling specifically
- May not provide as many practical, ready-to-use crawler examples
- Could be overwhelming for those solely interested in web crawling
Code Comparison
LearnPython (basic web scraping example):
import requests
from bs4 import BeautifulSoup
url = "https://example.com"
response = requests.get(url)
soup = BeautifulSoup(response.text, "html.parser")
examples-of-web-crawlers (specific crawler example):
import requests
from lxml import etree
headers = {"User-Agent": "Mozilla/5.0"}  # Zhihu rejects requests that lack a browser User-Agent
url = "https://www.zhihu.com/explore"
html = requests.get(url, headers=headers).text
selector = etree.HTML(html)
titles = selector.xpath('//h2[@class="ExploreHomePage-title"]/text()')
The LearnPython repository provides a more general approach to web scraping, while examples-of-web-crawlers offers more specific and targeted crawler examples. LearnPython uses BeautifulSoup for parsing, which is more beginner-friendly, while examples-of-web-crawlers uses lxml, which is faster but may have a steeper learning curve.
README
Some very interesting Python crawler examples, friendly to beginners
Project introduction
Crawler examples for a number of common websites. The code is highly reusable and tends to keep working for a long time. The project code is beginner-friendly: it sticks to simple Python wherever possible and carries extensive comments.
How to download
Users in China who are unsure how to set up a proxy can download from the mirror repository on Gitee instead, for a faster download speed.
1. Taobao simulated login
Usage tutorial
- Click here to download the Chrome browser
- Check your Chrome browser's version, then click here to download the matching version of the chromedriver driver
- Install the following package with pip
- pip install selenium
- Click here to log in to Weibo, and bind your Taobao account and password through Weibo
- Fill in the absolute path to chromedriver in main
- Fill in your Weibo username and password in main
# change this to the full path of your chromedriver
chromedriver_path = "/Users/bird/Desktop/chromedriver.exe"
# change this to your Weibo username
weibo_username = "your Weibo username"
# change this to your Weibo password
weibo_password = "your Weibo password"
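For orientation, here is a minimal sketch of what the Selenium setup above boils down to. It is an illustration only, not the project's actual script: the real code drives the full Weibo-binding login flow.
# Minimal Selenium sketch (illustrative; the project's script handles the Weibo login flow)
from selenium import webdriver
chromedriver_path = "/Users/bird/Desktop/chromedriver.exe"
browser = webdriver.Chrome(executable_path=chromedriver_path)  # executable_path, as in the project's own code
browser.get("https://login.taobao.com/")  # open the Taobao login page
# ...the script then switches to Weibo login and submits weibo_username / weibo_password...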
Demo screenshots
2. Tmall product data crawler
Usage tutorial
- Click here to download the Chrome browser
- Check your Chrome browser's version, then click here to download the matching version of the chromedriver driver
- Install the following packages with pip
- pip install selenium
- pip install pyquery
- Click here to log in to Weibo, and bind your Taobao account and password through Weibo
- Fill in the absolute path to chromedriver in main
- Fill in your Weibo username and password in main
# change this to the full path of your chromedriver
chromedriver_path = "/Users/bird/Desktop/chromedriver.exe"
# change this to your Weibo username
weibo_username = "your Weibo username"
# change this to your Weibo password
weibo_password = "your Weibo password"
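Because this crawler pairs Selenium (to render the logged-in page) with pyquery (to parse it), here is a minimal sketch of that combination; the URL and the ".product" class name are placeholders, not the script's actual selectors.
# Selenium renders the page, pyquery parses it (URL and CSS class are placeholders)
from selenium import webdriver
from pyquery import PyQuery as pq
browser = webdriver.Chrome(executable_path="/Users/bird/Desktop/chromedriver.exe")
browser.get("https://list.tmall.com/")   # a product-list page, reached after logging in
doc = pq(browser.page_source)            # hand the rendered HTML to pyquery
for item in doc(".product").items():     # iterate matched product nodes
    print(item.text())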
Demo screenshots
3. Crawl the data of items you have already bought on Taobao
Usage tutorial
- Click here to download the Chrome browser
- Check your Chrome browser's version, then click here to download the matching version of the chromedriver driver
- Install the following packages with pip
- pip install selenium
- pip install pyquery
- Click here to log in to Weibo, and bind your Taobao account and password through Weibo
- Fill in the absolute path to chromedriver in main
- Fill in your Weibo username and password in main
# change this to the full path of your chromedriver
chromedriver_path = "/Users/bird/Desktop/chromedriver.exe"
# change this to your Weibo username
weibo_username = "your Weibo username"
# change this to your Weibo password
weibo_password = "your Weibo password"
Demo screenshots
4. Remind your girlfriend via WeChat messages at different times every day
Introduction
Sometimes you really want to care for her, but you are so busy that she keeps complaining you don't care enough. You resolve to message her on time next time, even if it's only a few words, but then you forget again. You feel wronged 😭, yet she still thinks you are irresponsible.
Now you no longer need to worry: with Python you can send her scheduled reminder messages and never miss a key moment. Getting up in the morning, lunch, dinner, bedtime: a message goes out on time every day, and it can even help her learn English vocabulary!
When her birthday arrives, it sends wishes automatically. When holidays arrive, such as **Women's Day, Goddess Day, Valentine's Day, Spring Festival, and Christmas**, it sends greetings automatically, so no one can say you have no sense of occasion 😀
Most importantly, you can track your girlfriend's **emotional mood index** in real time, so you no longer need to worry about her getting angry for no apparent reason.
Usage tutorial
- Install the following packages with pip (a minimal sketch of the sending step follows this list)
- pip install wxpy
- pip install requests
- Set up the following
- fill in the relevant information in config.ini
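As a rough sketch of the idea, wxpy reduces the sending step to a few lines; the nickname and message below are placeholders, and the actual script reads its settings from config.ini and handles the daily scheduling itself.
# Minimal wxpy sketch; nickname and message are placeholders
from wxpy import Bot
bot = Bot()                                        # shows a QR code to scan for WeChat login
friend = bot.friends().search('her nickname')[0]   # find the friend by (placeholder) nickname
friend.send('Good morning! Remember to have breakfast.')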
Demo screenshots
5. Crawl 5K-resolution ultra-HD wallpapers
Introduction
Wallpaper choice actually reveals a lot about a computer owner's inner world: some people like landscapes, some like starry skies, some like beautiful women, some like animals. Sooner or later, though, you grow tired of them all, and the moment you decide to change your wallpaper, you discover that the wallpapers online are either low-resolution or watermarked.
On the Mac there is a fresh little wallpaper tool called Pap.er, possibly the best wallpaper app for the Mac, **with built-in 5K ultra-HD wallpapers in many categories. When we want to use them on Windows or Linux, we can consider crawling those 5K ultra-HD wallpapers** for ourselves.
Feature screenshots
How to run
# change into the project directory
cd <directory name>
# first uninstall the dependency libraries
pip uninstall -y -r requirement.txt
# then reinstall the dependency libraries
pip install -r requirement.txt -i https://pypi.tuna.tsinghua.edu.cn/simple
# run
python main.py
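At its core the crawler just fetches image URLs and streams them to disk; a minimal requests sketch, with a placeholder URL standing in for the addresses the script actually discovers:
# Stream one wallpaper to disk (the URL is a placeholder for those the crawler discovers)
import requests
img_url = "https://example.com/wallpaper-5k.jpg"
resp = requests.get(img_url, stream=True, timeout=30)
with open("wallpaper-5k.jpg", "wb") as f:
    for chunk in resp.iter_content(chunk_size=8192):  # chunked writes keep 5K images out of memory
        f.write(chunk)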
6. Crawl Douban ranking movie data (with a GUI version)
Project introduction
This project grew out of a junior-year database course assignment. I often need to search for movies but don't know which ones are highly rated by a large number of reviewers. For convenience, I rewrote the original project, treating it as practice in crawling and visualization. It crawls movie data in two ways: from the ranking list and by movie keyword.
Feature screenshots
How to run
- Open Chrome, enter chrome://version/ in the address bar, and check your current Chrome version
- Open http://chromedriver.storage.googleapis.com/index.html, download the chromedriver driver matching your version, and be sure to unzip it after downloading
- Open the file getMovieInRankingList.py in the current directory, go to line 107, and change executable_path=./chromedriver.exe to the path of your chromedriver driver (see the sketch after these steps)
- Run pip install -r requirement.txt to install the dependencies the program needs
- Run python main.py to start the program
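The edit in step 3 amounts to pointing Selenium at your local driver; the line being modified looks roughly like this:
# in getMovieInRankingList.py: replace the bundled path with your own chromedriver path
from selenium import webdriver
browser = webdriver.Chrome(executable_path="./chromedriver.exe")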
Features
- Search movies by keyword
- Search movies from the ranking list (Top 250)
- Show the IMDB rating and other basic information
- Offer several sites for watching the movie online, no VIP needed
- Offer several cloud-drive sites to search for the movie, so it can be saved to a cloud drive
- Offer several sites to download the movie
- More to come
7. Multithreading + a proxy pool to crawl Tiantian Fund and stock data (no crawler framework needed)
Introduction
When crawlers come up, most people think of Scrapy, but stop at merely knowing how to use it. To build a deeper understanding of how crawlers work, we can implement the multithreaded crawling process by hand and, at the same time, introduce an IP proxy pool for basic anti-blocking.
This crawler targets the Tiantian Fund website: it has anti-crawling measures in place, and the data volume is large enough for the gains from multithreading to be clearly visible.
Technical route
- IP proxy pool
- Multithreading
- Crawling and anti-crawling
Data format
000056,建信消费升级混合,2019-03-26,1.7740,1.7914,0.98,2019-03-27 15:00
000031,华夏复兴混合,2019-03-26,1.5650,1.5709,0.38,2019-03-27 15:00
000048,华夏双债增强债券C,2019-03-26,1.2230,1.2236,0.05,2019-03-27 15:00
000008,嘉实中证500ETF联接A,2019-03-26,1.4417,1.4552,0.93,2019-03-27 15:00
000024,大摩双利增强债券A,2019-03-26,1.1670,1.1674,0.04,2019-03-27 15:00
000054,鹏华双债增利债券,2019-03-26,1.1697,1.1693,-0.03,2019-03-27 15:00
000016,华夏纯债债券C,2019-03-26,1.1790,1.1793,0.03,2019-03-27 15:00
Feature screenshots
Configuration notes
# make sure the following libraries are installed; if any are missing, run pip install <module name> in a Python 3 environment
import requests
import random
import re
import queue
import threading
import csv
import json
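Putting the pieces together, here is a minimal sketch of the multithreading + proxy-pool pattern using the modules listed above; the proxy addresses and the fund URL are placeholders, not the project's actual endpoints:
# Minimal multithreading + proxy-pool sketch (proxy list and URL are placeholders)
import queue
import random
import threading
import requests
PROXIES = ["1.2.3.4:8080", "5.6.7.8:3128"]      # placeholder proxies; a real pool validates these
tasks = queue.Queue()
for code in ["000056", "000031", "000048"]:     # fund codes from the data-format sample above
    tasks.put(code)
def worker():
    while True:
        try:
            code = tasks.get_nowait()
        except queue.Empty:
            return
        proxy = random.choice(PROXIES)          # rotate proxies as a basic anti-blocking measure
        try:
            resp = requests.get("http://fund.example.com/%s.js" % code,
                                proxies={"http": "http://" + proxy}, timeout=10)
            print(code, resp.status_code)
        except requests.RequestException as e:
            print(code, "failed:", e)
threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()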
8. One-click generation of your personal WeChat data report (get to know your WeChat social history)
Introduction
Have you ever wanted to generate a personal WeChat data report and understand your WeChat social history? Now, using Python, we run a full analysis of your WeChat friends, covering nickname, gender, age, region, remark name, personal signature, avatar, group chats, official accounts, and more.
In terms of friend types, it counts your unknown contacts, starred friends, friends who can't see your Moments, and friends whose Moments you don't see. In terms of region, it charts the distribution of all friends across the country and drills into the province with the most friends. Beyond that, it works out the gender ratio of your friends, identifies your closest friends, analyzes your special friends, finds the friends who share the most group chats with you, analyzes your friends' personal signatures and avatars, and further detects which friends use a real-face avatar.
There are plenty of articles online about this kind of analysis, but they are fiddly to run. This program is as simple as it gets: one QR-code scan to log in and you're done.
Feature screenshots
How to run
# change into the project directory
cd <directory name>
# first uninstall the dependency libraries
pip uninstall -y -r requirement.txt
# then reinstall the dependency libraries
pip install -r requirement.txt
# run
python generate_wx_data.py
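The report itself boils down to aggregating over your friend list. The project has its own collection code (which may use a different library), but as an illustrative sketch of the idea, wxpy, already used in example 4, can produce basic friend statistics in a few lines:
# Sketch of friend-list statistics with wxpy (illustrative; not the project's own code)
from wxpy import Bot
bot = Bot()                    # scan the QR code to log in
friends = bot.friends()
print(friends.stats_text())    # wxpy's built-in summary: gender ratio, top locations, etc.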
How to package it into a binary executable
# install pyinstaller
pip install pyinstaller
# change into the project directory
cd <directory name>
# first uninstall the dependency libraries
pip uninstall -y -r requirement.txt
# then reinstall the dependency libraries
pip install -r requirement.txt
# update setuptools
pip install --upgrade setuptools
# package
pyinstaller generate_wx_data.py
9. One-click generation of a personal QQ history report
Introduction
In recent years, with WeChat's popularity, most people no longer use QQ very often, so we don't know our own QQ data particularly well. I believe that being able to generate a personal QQ history report would be a delightful thing.
There are few QQ data-analysis tools online, because the QQ-related interfaces are rather complex. This program, however, is very simple to run, has a friendly interactive interface, and needs only a single QR-code scan to log in.
The data the program currently collects includes: detailed QQ profile data, time online on your phone, online time while not invisible, QQ active periods, the number of one-way friends, a QQ asset analysis, a group-chat analysis, group chats you left over the past year, friends deleted over the past month, all pay-on-behalf records, everyone you care about and everyone who cares about you. Because the relevant data interfaces have access limits, the program does not analyze QQ friends.
Feature screenshots
How to run
# change into the project directory
cd <directory name>
# first uninstall the dependency libraries
pip uninstall -y -r requirement.txt
# then reinstall the dependency libraries
pip install -r requirement.txt
# run
python main.py
10. One-click generation of an e-book of your personal WeChat Moments
Introduction
WeChat Moments holds your data. It keeps beautiful memories and records every little step of our growth. Posting to Moments is, in a sense, a way of recording and savoring life, and of watching each person grow step by step.
Such a precious set of memories: why not preserve it? It only takes the time of a cup of coffee to print your Moments with one click. It can be a paper book or an e-book, kept for the long term: better than phone screenshots, and with a timeline of memories built in.
This book can be used as:
- a birthday gift for your child
- a birthday gift for your partner
- a gift for your future self
- ……
Now you can choose to print an e-book or a paper book. For a paper book, you can pay a third-party service; **for an e-book, we can perfectly well generate it ourselves, and save a tidy sum in the process**.
Feature screenshots
Before we get into the code, let's look at the final result.
E-book result (images courtesy of 出书啦)
Paper-book result (images courtesy of 心书)
How to run
# change into the project directory
cd <directory name>
# first uninstall the dependency libraries
pip uninstall -y -r requirement.txt
# then reinstall the dependency libraries
pip install -r requirement.txt
# run
python main.py
11. One-click analysis of your browsing behavior (visualized as a web page)
Introduction
Want to see what you've been up to over the past year? Whether your time online is slacking off or honest work? Want to write a year-end review but have no data to back it up? Now, here it is.
This is a Chrome browsing-history analysis program that lets you understand your own browsing history. It works with Chrome or any Chromium-based browser. Most browsers in China today are Chromium-based, so it should work with nearly all of them. It does not, however, support IE, Firefox, or Safari.
On the generated page you can see the top-ten rankings of the domains and URLs you visited over time and of your busiest days, along with the related data charts.
Feature screenshots
Before we get into the code, let's look at the final result.
How to run
Online demo: http://39.106.118.77:8090 (an ordinary server, please don't stress-test it)
Running the program is very simple; just follow these commands:
# change into the project directory
cd <directory name>
# first uninstall the dependency libraries
pip uninstall -y -r requirement.txt
# then reinstall the dependency libraries
pip install -r requirement.txt
# run
python app.py
# once it is running, open http://localhost:8090 in your browser
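What makes this analysis possible is that Chrome keeps its history in a plain SQLite database; a minimal read sketch follows. The path below is the usual Windows location and is an assumption; copy the file first, since Chrome locks it while running.
# Read the top-visited URLs from a copy of Chrome's History SQLite database
import sqlite3
history_db = r"C:\Users\you\AppData\Local\Google\Chrome\User Data\Default\History"  # assumed path
conn = sqlite3.connect(history_db)
rows = conn.execute(
    "SELECT url, title, visit_count FROM urls ORDER BY visit_count DESC LIMIT 10"
).fetchall()
for url, title, count in rows:
    print(count, title, url)
conn.close()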
12. One-click export of your WeChat Reading books and notes
This project is adapted from @arry-lee's project wereader; thanks to the original author for providing the source code.
Introduction
The era of reading for all has arrived: reading apps currently have 210 million users, with over 5 million daily active users; users aged 19-35 make up over 60%, users with a bachelor's degree or above make up as much as 80%, and users in Beijing, Shanghai, Guangzhou, Shenzhen, and other provincial capitals and municipalities make up over 80%. I habitually use WeChat Reading myself, and built this little tool to make organizing books and exporting notes easier.
Feature screenshots
Before we get into the code, let's look at the final result.
How to run
# change into the project directory
cd <directory name>
# first uninstall the dependency libraries
pip uninstall -y -r requirement.txt
# then reinstall the dependency libraries
pip install -r requirement.txt -i https://pypi.tuna.tsinghua.edu.cn/simple
# run
python pyqt_gui.py
Notes
The project is continuously updated; you're welcome to star it.
License