[{"data":1,"prerenderedAt":1417},["ShallowReactive",2],{"page-/post/spider/puppeteer-jujin-hot-ranks":3,"surrounding-page":1408},{"id":4,"title":5,"author":6,"body":7,"date":1394,"description":5,"extension":1395,"group":6,"lastmod":1396,"meta":1397,"navigation":945,"path":1400,"rawbody":1401,"seo":1402,"showTitle":5,"stem":1403,"tags":1404,"versions":6,"__hash__":1407},"content/post/spider/puppeteer-jujin-hot-ranks.md","使用puppeteer爬取掘金热榜",null,{"type":8,"value":9,"toc":1378},"minimark",[10,14,18,21,24,28,45,49,77,80,83,86,92,100,104,107,125,128,149,152,155,164,267,270,299,315,332,335,338,341,344,350,353,358,361,366,369,372,375,394,430,433,447,454,457,477,480,483,526,529,532,541,544,566,569,787,790,793,809,812,815,818,829,892,895,898,991,1001,1004,1049,1052,1055,1094,1097,1100,1103,1175,1178,1247,1250,1324,1327,1330,1347,1350,1353,1356,1359,1362,1365,1368,1371,1374],[11,12,13],"h2",{"id":13},"引言",[15,16,17],"p",{},"又开新坑了。准备花点时间研究一下爬虫、自动化方向的技术，当然还是围绕node来展开。",[15,19,20],{},"我会把前端、Node相关体系的技术内容和实际需求融合，力求闭环，闭不了就当把全部干货整理成知识库。文章首发在公众号：早早集市，感兴趣的可以关注一下。",[15,22,23],{},"本篇是基于puppeteer这个库做的爬虫demo。",[11,25,27],{"id":26},"什么是puppeteer","什么是Puppeteer",[15,29,30,31,38,39,44],{},"Puppeteer 是一个 Node 库，它提供了一个高级 API 来通过 ",[32,33,37],"a",{"href":34,"rel":35,"title":37},"https://chromedevtools.github.io/devtools-protocol/",[36],"nofollow","DevTools"," 协议控制 Chromium 或 Chrome。Puppeteer 默认以 ",[32,40,43],{"href":41,"rel":42,"title":43},"https://developers.google.com/web/updates/2017/04/headless-chrome",[36],"headless"," 模式运行，但是可以通过修改配置文件运行“有头”模式。",[11,46,48],{"id":47},"puppeteer能做什么","Puppeteer能做什么",[50,51,52,56,59,62,65,74],"ul",{},[53,54,55],"li",{},"生成页面 PDF。",[53,57,58],{},"抓取 SPA（单页应用）并生成预渲染内容（即“SSR”（服务器端渲染））。",[53,60,61],{},"自动提交表单，进行 UI 测试，键盘输入等。",[53,63,64],{},"创建一个时时更新的自动化测试环境。 使用最新的 JavaScript 和浏览器功能直接在最新版本的Chrome中执行测试。",[53,66,67,68,73],{},"捕获网站的 ",[32,69,72],{"href":70,"rel":71,"title":72},"https://developers.google.com/web/tools/chrome-devtools/evaluate-performance/reference",[36],"timeline trace","，用来帮助分析性能问题。",[53,75,76],{},"测试浏览器扩展。",[11,78,79],{"id":79},"环境准备",[15,81,82],{},"一是可以单独写一个js文件，从头开始写个demo，直接用node运行即可。",[15,84,85],{},"二是写在其他后端项目里。这里我选择写在我之前的nest的项目，方便后续灵感来了之后进一步整合。",[15,87,88],{},[89,90,91],"strong",{},"版本：",[50,93,94,97],{},[53,95,96],{},"node 18.18.2",[53,98,99],{},"puppeteer 21.7.0",[101,102,103],"h3",{"id":103},"安装",[15,105,106],{},"先装puppeteer",[108,109,114],"pre",{"className":110,"code":111,"language":112,"meta":113,"style":113},"language-typescript shiki shiki-themes github-light","pnpm i puppeteer\n","typescript","",[115,116,117],"code",{"__ignoreMap":113},[118,119,122],"span",{"class":120,"line":121},"line",1,[118,123,111],{"class":124},"sgsFI",[15,126,127],{},"如果在公司，网络不好的话，可以换个源试试",[108,129,131],{"className":110,"code":130,"language":112,"meta":113,"style":113},"pnpm config set registry https://registry.npmmirror.com\n\n",[115,132,133],{"__ignoreMap":113},[118,134,135,138,142,145],{"class":120,"line":121},[118,136,137],{"class":124},"pnpm config set registry ",[118,139,141],{"class":140},"s7eDp","https",[118,143,144],{"class":124},":",[118,146,148],{"class":147},"sAwPA","//registry.npmmirror.com\n",[15,150,151],{},"安装完成后，我们可以写一个接口用于测试，每次请求时运行一下爬取函数。然后在service里实现爬取的逻辑。",[11,153,154],{"id":154},"开始编写",[15,156,157,158,163],{},"关于",[32,159,162],{"href":160,"rel":161,"title":162},"https://pptr.dev/",[36],"example","，官网就有，很适合快速学习一下api。我仿照这个example快速开始，只不过我这里换成了掘金，因为例子里的地址因为某些原因，不方便访问。",[108,165,167],{"className":110,"code":166,"language":112,"meta":113,"style":113},"const browser = await puppeteer.launch({\n      headless: false,\n      args: ['--start-fullscreen'],\n    });\nconst page = await browser.newPage();\nawait page.goto('https://juejin.cn/hot/articles');\n\n",[115,168,169,194,206,219,225,246],{"__ignoreMap":113},[118,170,171,175,179,182,185,188,191],{"class":120,"line":121},[118,172,174],{"class":173},"sD7c4","const",[118,176,178],{"class":177},"sYu0t"," browser",[118,180,181],{"class":173}," =",[118,183,184],{"class":173}," await",[118,186,187],{"class":124}," puppeteer.",[118,189,190],{"class":140},"launch",[118,192,193],{"class":124},"({\n",[118,195,197,200,203],{"class":120,"line":196},2,[118,198,199],{"class":124},"      headless: ",[118,201,202],{"class":177},"false",[118,204,205],{"class":124},",\n",[118,207,209,212,216],{"class":120,"line":208},3,[118,210,211],{"class":124},"      args: [",[118,213,215],{"class":214},"sYBdl","'--start-fullscreen'",[118,217,218],{"class":124},"],\n",[118,220,222],{"class":120,"line":221},4,[118,223,224],{"class":124},"    });\n",[118,226,228,230,233,235,237,240,243],{"class":120,"line":227},5,[118,229,174],{"class":173},[118,231,232],{"class":177}," page",[118,234,181],{"class":173},[118,236,184],{"class":173},[118,238,239],{"class":124}," browser.",[118,241,242],{"class":140},"newPage",[118,244,245],{"class":124},"();\n",[118,247,249,252,255,258,261,264],{"class":120,"line":248},6,[118,250,251],{"class":173},"await",[118,253,254],{"class":124}," page.",[118,256,257],{"class":140},"goto",[118,259,260],{"class":124},"(",[118,262,263],{"class":214},"'https://juejin.cn/hot/articles'",[118,265,266],{"class":124},");\n",[15,268,269],{},"先解释一下上边几句。每个api我都附上了官方文档地址，我写代码一般就只看官方文档来写。",[15,271,272,276,277,279,280,283,284,287,288,291,296],{},[32,273,190],{"href":274,"rel":275,"title":190},"https://pptr.dev/api/puppeteer.puppeteernode.launch",[36],"：启动一个浏览器实例，并且可以传入配置参数，如",[115,278,43],{},"可以配置以无头(",[115,281,282],{},"'new'",")或有头模式（",[115,285,286],{},"fasle","）运行。返回值 ",[115,289,290],{},"Promise\u003C",[32,292,295],{"href":293,"rel":294,"title":295},"https://pptr.dev/api/puppeteer.browser",[36],"Browser",[115,297,298],{},">",[15,300,301,305,306,308,313],{},[32,302,242],{"href":303,"rel":304,"title":242},"https://pptr.dev/api/puppeteer.browser.newpage",[36],"：在 default browser context打开一个页面。返回值",[115,307,290],{},[32,309,312],{"href":310,"rel":311,"title":312},"https://pptr.dev/api/puppeteer.page",[36],"Page",[115,314,298],{},[15,316,317,321,322,324,329],{},[32,318,257],{"href":319,"rel":320,"title":257},"https://pptr.dev/api/puppeteer.page.goto",[36],"：导航到一个url。返回值",[115,323,290],{},[32,325,328],{"href":326,"rel":327,"title":328},"https://pptr.dev/api/puppeteer.httpresponse",[36],"HTTPResponse",[115,330,331],{}," | null>",[15,333,334],{},"所以爬取数据的思路，就和自己打开浏览器进入掘金浏览一样，打开网页 ⇒ 输入地址 ⇒ 等待加载 ⇒ 加载完成 ⇒ 看到（拿到）数据 ⇒ 数据存储",[15,336,337],{},"因为掘金不需要登录也可以直接看到文章热榜，所以直接去找页面元素，拿到数据即可。",[15,339,340],{},"对于一个前端来说，去审查元素是比较简单的，如何你不是前端的话，可以用这种方式去获取元素",[101,342,343],{"id":343},"分析页面",[345,346,347],"ol",{},[53,348,349],{},"F12或者右键审查，打开控制台，选中这个工具，去点击页面上的元素",[15,351,352],{},"![[1-img-20241119141175.png]]",[345,354,355],{},[53,356,357],{},"在左侧点击页面上的字后，右侧控制台的元素会自动聚焦",[15,359,360],{},"![[2-img-20241119141155.png]]",[345,362,363],{},[53,364,365],{},"在右侧高亮的元素上右键，可以看到复制，打开后有个复制selector，选择。因为puppeteer是使用的css selector来获取元素。",[15,367,368],{},"![[1-img-20241119141177.png]]",[101,370,371],{"id":371},"获取数据",[15,373,374],{},"puppeteer提供几个api可以用来获取到元素",[50,376,377,386],{},[53,378,379,382,383],{},[115,380,381],{},"Page.$()","  相当于 ",[115,384,385],{},"document.querySelector",[53,387,388,382,391],{},[115,389,390],{},"Page.$$()",[115,392,393],{},"document.querySelectorAll",[15,395,396,397,399,404,407,412,415,416,419,422,424,427],{},"这两个api的返回值分别是",[115,398,290],{},[32,400,403],{"href":401,"rel":402,"title":403},"https://pptr.dev/api/puppeteer.elementhandle",[36],"ElementHandle",[115,405,406],{},"\u003C",[32,408,411],{"href":409,"rel":410,"title":411},"https://pptr.dev/api/puppeteer.nodefor",[36],"NodeFor",[115,413,414],{},"\u003CSelector>> | null>"," 和 ",[115,417,418],{},"Promise\u003CArray\u003C",[32,420,403],{"href":401,"rel":421,"title":403},[36],[115,423,406],{},[32,425,411],{"href":409,"rel":426,"title":411},[36],[115,428,429],{},"\u003CSelector>>>>",[15,431,432],{},"而ElementHandle也有获取元素的相同api",[50,434,435,441],{},[53,436,437,440],{},[115,438,439],{},"ElementHandle.$()"," 相当于在获取了一个元素的基础上，再获取它的子元素",[53,442,443,446],{},[115,444,445],{},"ElementHandle.$$()"," 同理",[15,448,449,450,453],{},"这两个api的返回值和Page的两个api",[89,451,452],{},"返回值相同","。",[15,455,456],{},"获取到元素后，还需要获取元素的值。一般有两种，一种是元素的内容，一种是元素的属性值",[50,458,459,465,471,474],{},[53,460,461,464],{},[89,462,463],{},"Page.$eval","('.selector', el ⇒ el.textContent)  这种方式可以直接获取到元素内容",[53,466,467,470],{},[89,468,469],{},"Page.$$eval","('.selector, (elements) => elements.map((el) => el.getAttribute('href')') 这种则是获取元素的属性，当然这个api是获取所有内容",[53,472,473],{},"ElementHandle.$eval  同理",[53,475,476],{},"ElementHandle.$$eval 同理",[15,478,479],{},"知道了这几个api之后，爬数据基本不是问题了。",[15,481,482],{},"继续写一下代码，粘贴一下刚才复制的selector，看看能否取到数据。",[108,484,486],{"className":110,"code":485,"language":112,"meta":113,"style":113},"const number = await wrap.$eval('#juejin > div:nth-child(1) > div.view-container.hot-lists > main > div.hot-list-body > div.hot-list-wrap > div.hot-list > a:nth-child(1) > div > div.article-item-left > div.article-number.article-number-1', (e) => e.textContent)\n",[115,487,488],{"__ignoreMap":113},[118,489,490,492,495,497,499,502,505,507,510,513,517,520,523],{"class":120,"line":121},[118,491,174],{"class":173},[118,493,494],{"class":177}," number",[118,496,181],{"class":173},[118,498,184],{"class":173},[118,500,501],{"class":124}," wrap.",[118,503,504],{"class":140},"$eval",[118,506,260],{"class":124},[118,508,509],{"class":214},"'#juejin > div:nth-child(1) > div.view-container.hot-lists > main > div.hot-list-body > div.hot-list-wrap > div.hot-list > a:nth-child(1) > div > div.article-item-left > div.article-number.article-number-1'",[118,511,512],{"class":124},", (",[118,514,516],{"class":515},"sqxcx","e",[118,518,519],{"class":124},") ",[118,521,522],{"class":173},"=>",[118,524,525],{"class":124}," e.textContent)\n",[15,527,528],{},"可以看到复制来的selector非常长，是从页面最开始元素开始的，可以自己适当删减一下。一般只具体到它的父元素和它自己的元素就可以了。",[15,530,531],{},"有的时候，元素需要时间加载，比如元素基于接口渲染时，接口或者网速很慢，这个时候获取元素是获取不到的。puppeteer也有api可以等待元素加载出来",[50,533,534],{},[53,535,536],{},[32,537,540],{"href":538,"rel":539,"title":540},"https://pptr.dev/api/puppeteer.page.waitforselector",[36],"Page.waitForSelector()",[15,542,543],{},"比如要等待掘金热榜的文章列表被加载出来",[108,545,547],{"className":110,"code":546,"language":112,"meta":113,"style":113},"  await page.waitForSelector('.hot-list .article-item-wrap');\n\n",[115,548,549],{"__ignoreMap":113},[118,550,551,554,556,559,561,564],{"class":120,"line":121},[118,552,553],{"class":173},"  await",[118,555,254],{"class":124},[118,557,558],{"class":140},"waitForSelector",[118,560,260],{"class":124},[118,562,563],{"class":214},"'.hot-list .article-item-wrap'",[118,565,266],{"class":124},[15,567,568],{},"然后照着葫芦画瓢，再把文章标题、作者、热度等信息也获取到",[108,570,572],{"className":110,"code":571,"language":112,"meta":113,"style":113},"const number = await wrap.$eval(\n  '.article-number',\n  (el) => el.textContent,\n);\nconst title = await wrap.$eval('.article-title', (el) => el.textContent);\nconst hotNumber = await wrap.$eval(\n  '.article-hot .hot-number',\n  (el) => el.textContent,\n);\nconst authorName = await wrap.$eval(\n  '.article-author-name-text',\n  (el) => el.textContent,\n);\nconst authorUrl = await wrap.$eval('.article-author-name', (el) =>\n  el.getAttribute('href'),\n);\n",[115,573,574,591,598,613,617,648,665,673,686,691,709,717,730,735,765,782],{"__ignoreMap":113},[118,575,576,578,580,582,584,586,588],{"class":120,"line":121},[118,577,174],{"class":173},[118,579,494],{"class":177},[118,581,181],{"class":173},[118,583,184],{"class":173},[118,585,501],{"class":124},[118,587,504],{"class":140},[118,589,590],{"class":124},"(\n",[118,592,593,596],{"class":120,"line":196},[118,594,595],{"class":214},"  '.article-number'",[118,597,205],{"class":124},[118,599,600,603,606,608,610],{"class":120,"line":208},[118,601,602],{"class":124},"  (",[118,604,605],{"class":515},"el",[118,607,519],{"class":124},[118,609,522],{"class":173},[118,611,612],{"class":124}," el.textContent,\n",[118,614,615],{"class":120,"line":221},[118,616,266],{"class":124},[118,618,619,621,624,626,628,630,632,634,637,639,641,643,645],{"class":120,"line":227},[118,620,174],{"class":173},[118,622,623],{"class":177}," title",[118,625,181],{"class":173},[118,627,184],{"class":173},[118,629,501],{"class":124},[118,631,504],{"class":140},[118,633,260],{"class":124},[118,635,636],{"class":214},"'.article-title'",[118,638,512],{"class":124},[118,640,605],{"class":515},[118,642,519],{"class":124},[118,644,522],{"class":173},[118,646,647],{"class":124}," el.textContent);\n",[118,649,650,652,655,657,659,661,663],{"class":120,"line":248},[118,651,174],{"class":173},[118,653,654],{"class":177}," hotNumber",[118,656,181],{"class":173},[118,658,184],{"class":173},[118,660,501],{"class":124},[118,662,504],{"class":140},[118,664,590],{"class":124},[118,666,668,671],{"class":120,"line":667},7,[118,669,670],{"class":214},"  '.article-hot .hot-number'",[118,672,205],{"class":124},[118,674,676,678,680,682,684],{"class":120,"line":675},8,[118,677,602],{"class":124},[118,679,605],{"class":515},[118,681,519],{"class":124},[118,683,522],{"class":173},[118,685,612],{"class":124},[118,687,689],{"class":120,"line":688},9,[118,690,266],{"class":124},[118,692,694,696,699,701,703,705,707],{"class":120,"line":693},10,[118,695,174],{"class":173},[118,697,698],{"class":177}," authorName",[118,700,181],{"class":173},[118,702,184],{"class":173},[118,704,501],{"class":124},[118,706,504],{"class":140},[118,708,590],{"class":124},[118,710,712,715],{"class":120,"line":711},11,[118,713,714],{"class":214},"  '.article-author-name-text'",[118,716,205],{"class":124},[118,718,720,722,724,726,728],{"class":120,"line":719},12,[118,721,602],{"class":124},[118,723,605],{"class":515},[118,725,519],{"class":124},[118,727,522],{"class":173},[118,729,612],{"class":124},[118,731,733],{"class":120,"line":732},13,[118,734,266],{"class":124},[118,736,738,740,743,745,747,749,751,753,756,758,760,762],{"class":120,"line":737},14,[118,739,174],{"class":173},[118,741,742],{"class":177}," authorUrl",[118,744,181],{"class":173},[118,746,184],{"class":173},[118,748,501],{"class":124},[118,750,504],{"class":140},[118,752,260],{"class":124},[118,754,755],{"class":214},"'.article-author-name'",[118,757,512],{"class":124},[118,759,605],{"class":515},[118,761,519],{"class":124},[118,763,764],{"class":173},"=>\n",[118,766,768,771,774,776,779],{"class":120,"line":767},15,[118,769,770],{"class":124},"  el.",[118,772,773],{"class":140},"getAttribute",[118,775,260],{"class":124},[118,777,778],{"class":214},"'href'",[118,780,781],{"class":124},"),\n",[118,783,785],{"class":120,"line":784},16,[118,786,266],{"class":124},[15,788,789],{},"可以在nest的控制台打印一下，看看输出的内容是否正确。",[15,791,792],{},"爬取完成后，可以关闭一下browser",[108,794,796],{"className":110,"code":795,"language":112,"meta":113,"style":113},"await browser.close();\n\n",[115,797,798],{"__ignoreMap":113},[118,799,800,802,804,807],{"class":120,"line":121},[118,801,251],{"class":173},[118,803,239],{"class":124},[118,805,806],{"class":140},"close",[118,808,245],{"class":124},[101,810,811],{"id":811},"数据存储",[15,813,814],{},"基于学习的目的，数据可以看情况存储在本地json文件，或者数据库中，或者发送给其他服务器。",[15,816,817],{},"注意不要滥用数据或进行其他违法行为，目的仅仅是学习node，切记切记。",[15,819,820,821,824,825,828],{},"这里我选择把数据",[115,822,823],{},"articleList","存在本地json文件中。用",[115,826,827],{},"JSON.stringify(articleList, null, 2)"," 按缩进为2格式化一下数据。",[108,830,832],{"className":110,"code":831,"language":112,"meta":113,"style":113},"fs.writeFileSync(\n  `./热榜-${+new Date()}.json`,\n  JSON.stringify(articleList, null, 2),\n);\n",[115,833,834,844,863,888],{"__ignoreMap":113},[118,835,836,839,842],{"class":120,"line":121},[118,837,838],{"class":124},"fs.",[118,840,841],{"class":140},"writeFileSync",[118,843,590],{"class":124},[118,845,846,849,852,855,858,861],{"class":120,"line":196},[118,847,848],{"class":214},"  `./热榜-${",[118,850,851],{"class":173},"+new",[118,853,854],{"class":140}," Date",[118,856,857],{"class":214},"()",[118,859,860],{"class":214},"}.json`",[118,862,205],{"class":124},[118,864,865,868,871,874,877,880,883,886],{"class":120,"line":208},[118,866,867],{"class":177},"  JSON",[118,869,870],{"class":124},".",[118,872,873],{"class":140},"stringify",[118,875,876],{"class":124},"(articleList, ",[118,878,879],{"class":177},"null",[118,881,882],{"class":124},", ",[118,884,885],{"class":177},"2",[118,887,781],{"class":124},[118,889,890],{"class":120,"line":221},[118,891,266],{"class":124},[101,893,894],{"id":894},"数据处理",[15,896,897],{},"通过获取到的数据可以发现，很多数据没法直接存储，需要被洗一洗。比如上边文章编号，拿到数据之后有很多空格，存之前我们先处理处理。",[108,899,901],{"className":110,"code":900,"language":112,"meta":113,"style":113},"let reg = /(\\n|\\s)*/g;\n\nfor循环articleList\n  article.number = article.number.replace(reg, '');\n  article.hotNumber = article.hotNumber.replace(reg, '');\n",[115,902,903,941,947,952,973],{"__ignoreMap":113},[118,904,905,908,911,914,917,920,923,926,929,932,935,938],{"class":120,"line":121},[118,906,907],{"class":173},"let",[118,909,910],{"class":124}," reg ",[118,912,913],{"class":173},"=",[118,915,916],{"class":214}," /(",[118,918,919],{"class":177},"\\n",[118,921,922],{"class":173},"|",[118,924,925],{"class":177},"\\s",[118,927,928],{"class":214},")",[118,930,931],{"class":173},"*",[118,933,934],{"class":214},"/",[118,936,937],{"class":173},"g",[118,939,940],{"class":124},";\n",[118,942,943],{"class":120,"line":196},[118,944,946],{"emptyLinePlaceholder":945},true,"\n",[118,948,949],{"class":120,"line":208},[118,950,951],{"class":124},"for循环articleList\n",[118,953,954,957,959,962,965,968,971],{"class":120,"line":221},[118,955,956],{"class":124},"  article.number ",[118,958,913],{"class":173},[118,960,961],{"class":124}," article.number.",[118,963,964],{"class":140},"replace",[118,966,967],{"class":124},"(reg, ",[118,969,970],{"class":214},"''",[118,972,266],{"class":124},[118,974,975,978,980,983,985,987,989],{"class":120,"line":227},[118,976,977],{"class":124},"  article.hotNumber ",[118,979,913],{"class":173},[118,981,982],{"class":124}," article.hotNumber.",[118,984,964],{"class":140},[118,986,967],{"class":124},[118,988,970],{"class":214},[118,990,266],{"class":124},[15,992,993,994,997,998,1000],{},"把多余的",[115,995,996],{},"空格","和",[115,999,919],{},"去掉。",[15,1002,1003],{},"然后当前爬取的是哪个榜单，存的时候我也想记录一下，再去页面上找文章列表上方的榜单名称。",[108,1005,1007],{"className":110,"code":1006,"language":112,"meta":113,"style":113},"let navName = await page.$eval(\n  'div.hot-list-header > div > span.hot-title > span',\n  (el) => el.textContent,\n);\n",[115,1008,1009,1026,1033,1045],{"__ignoreMap":113},[118,1010,1011,1013,1016,1018,1020,1022,1024],{"class":120,"line":121},[118,1012,907],{"class":173},[118,1014,1015],{"class":124}," navName ",[118,1017,913],{"class":173},[118,1019,184],{"class":173},[118,1021,254],{"class":124},[118,1023,504],{"class":140},[118,1025,590],{"class":124},[118,1027,1028,1031],{"class":120,"line":196},[118,1029,1030],{"class":214},"  'div.hot-list-header > div > span.hot-title > span'",[118,1032,205],{"class":124},[118,1034,1035,1037,1039,1041,1043],{"class":120,"line":208},[118,1036,602],{"class":124},[118,1038,605],{"class":515},[118,1040,519],{"class":124},[118,1042,522],{"class":173},[118,1044,612],{"class":124},[118,1046,1047],{"class":120,"line":221},[118,1048,266],{"class":124},[15,1050,1051],{},"榜单名称拿到后，也需要处理一下空格，不再赘述。",[15,1053,1054],{},"这个榜单名称是通过接口获取到然后渲染的，在点击左侧导航栏的时候就可以发现，所以要确保右侧内容被渲染完，我们再去开始爬取行为，所以可以在开头加上这两句。确保页面加载出来的时候，这两块已经被渲染完毕。",[108,1056,1058],{"className":110,"code":1057,"language":112,"meta":113,"style":113},"await page.waitForSelector('.hot-list .article-item-wrap');\nawait page.waitForSelector(\n  'div.hot-list-header > div > span.hot-title > span',\n);\n",[115,1059,1060,1074,1084,1090],{"__ignoreMap":113},[118,1061,1062,1064,1066,1068,1070,1072],{"class":120,"line":121},[118,1063,251],{"class":173},[118,1065,254],{"class":124},[118,1067,558],{"class":140},[118,1069,260],{"class":124},[118,1071,563],{"class":214},[118,1073,266],{"class":124},[118,1075,1076,1078,1080,1082],{"class":120,"line":196},[118,1077,251],{"class":173},[118,1079,254],{"class":124},[118,1081,558],{"class":140},[118,1083,590],{"class":124},[118,1085,1086,1088],{"class":120,"line":208},[118,1087,1030],{"class":214},[118,1089,205],{"class":124},[118,1091,1092],{"class":120,"line":221},[118,1093,266],{"class":124},[15,1095,1096],{},"这样就完成了对热榜-综合榜的爬取。然后再去爬其他榜单。",[101,1098,1099],{"id":1099},"多页爬取",[15,1101,1102],{},"通过分析左侧的导航栏，可以在元素a标签上发现一个href，点击后，页面就会切换到对应的地址。所以我的思路是先采集全部的地址。",[108,1104,1106],{"className":110,"code":1105,"language":112,"meta":113,"style":113},"const navUrls = await page.$$eval(\n  '.sub-nav-item-wrap .nav-item-content a',\n  (elements) => elements.map((el) => el.getAttribute('href')),\n);\n",[115,1107,1108,1126,1133,1171],{"__ignoreMap":113},[118,1109,1110,1112,1115,1117,1119,1121,1124],{"class":120,"line":121},[118,1111,174],{"class":173},[118,1113,1114],{"class":177}," navUrls",[118,1116,181],{"class":173},[118,1118,184],{"class":173},[118,1120,254],{"class":124},[118,1122,1123],{"class":140},"$$eval",[118,1125,590],{"class":124},[118,1127,1128,1131],{"class":120,"line":196},[118,1129,1130],{"class":214},"  '.sub-nav-item-wrap .nav-item-content a'",[118,1132,205],{"class":124},[118,1134,1135,1137,1140,1142,1144,1147,1150,1153,1155,1157,1159,1162,1164,1166,1168],{"class":120,"line":208},[118,1136,602],{"class":124},[118,1138,1139],{"class":515},"elements",[118,1141,519],{"class":124},[118,1143,522],{"class":173},[118,1145,1146],{"class":124}," elements.",[118,1148,1149],{"class":140},"map",[118,1151,1152],{"class":124},"((",[118,1154,605],{"class":515},[118,1156,519],{"class":124},[118,1158,522],{"class":173},[118,1160,1161],{"class":124}," el.",[118,1163,773],{"class":140},[118,1165,260],{"class":124},[118,1167,778],{"class":214},[118,1169,1170],{"class":124},")),\n",[118,1172,1173],{"class":120,"line":221},[118,1174,266],{"class":124},[15,1176,1177],{},"然后把刚才写的爬取综合榜的过程，封装到一个函数里，作为爬取单页的方法。因为每个榜单的样式都是一样的。",[108,1179,1181],{"className":110,"code":1180,"language":112,"meta":113,"style":113},"// 伪代码\n function getPageData() {\n   await 元素加载\n   await 获取元素\n   await 获取内容\n   await 组装数据\n   await 数据处理\n   await 写入文件\n }\n",[115,1182,1183,1188,1199,1207,1214,1221,1228,1235,1242],{"__ignoreMap":113},[118,1184,1185],{"class":120,"line":121},[118,1186,1187],{"class":147},"// 伪代码\n",[118,1189,1190,1193,1196],{"class":120,"line":196},[118,1191,1192],{"class":173}," function",[118,1194,1195],{"class":140}," getPageData",[118,1197,1198],{"class":124},"() {\n",[118,1200,1201,1204],{"class":120,"line":208},[118,1202,1203],{"class":173},"   await",[118,1205,1206],{"class":124}," 元素加载\n",[118,1208,1209,1211],{"class":120,"line":221},[118,1210,1203],{"class":173},[118,1212,1213],{"class":124}," 获取元素\n",[118,1215,1216,1218],{"class":120,"line":227},[118,1217,1203],{"class":173},[118,1219,1220],{"class":124}," 获取内容\n",[118,1222,1223,1225],{"class":120,"line":248},[118,1224,1203],{"class":173},[118,1226,1227],{"class":124}," 组装数据\n",[118,1229,1230,1232],{"class":120,"line":667},[118,1231,1203],{"class":173},[118,1233,1234],{"class":124}," 数据处理\n",[118,1236,1237,1239],{"class":120,"line":675},[118,1238,1203],{"class":173},[118,1240,1241],{"class":124}," 写入文件\n",[118,1243,1244],{"class":120,"line":688},[118,1245,1246],{"class":124}," }\n",[15,1248,1249],{},"然后用一个for of 循环进行多页的爬取，注意不要用forEach，因为要保证能await",[108,1251,1253],{"className":110,"code":1252,"language":112,"meta":113,"style":113},"for (const url of navUrls) {\n  const pageUrl = 'https://juejin.cn' + url;\n  await page.goto(pageUrl);\n  await this.getPageData(page);\n}\n",[115,1254,1255,1274,1293,1304,1319],{"__ignoreMap":113},[118,1256,1257,1260,1263,1265,1268,1271],{"class":120,"line":121},[118,1258,1259],{"class":173},"for",[118,1261,1262],{"class":124}," (",[118,1264,174],{"class":173},[118,1266,1267],{"class":177}," url",[118,1269,1270],{"class":173}," of",[118,1272,1273],{"class":124}," navUrls) {\n",[118,1275,1276,1279,1282,1284,1287,1290],{"class":120,"line":196},[118,1277,1278],{"class":173},"  const",[118,1280,1281],{"class":177}," pageUrl",[118,1283,181],{"class":173},[118,1285,1286],{"class":214}," 'https://juejin.cn'",[118,1288,1289],{"class":173}," +",[118,1291,1292],{"class":124}," url;\n",[118,1294,1295,1297,1299,1301],{"class":120,"line":208},[118,1296,553],{"class":173},[118,1298,254],{"class":124},[118,1300,257],{"class":140},[118,1302,1303],{"class":124},"(pageUrl);\n",[118,1305,1306,1308,1311,1313,1316],{"class":120,"line":221},[118,1307,553],{"class":173},[118,1309,1310],{"class":177}," this",[118,1312,870],{"class":124},[118,1314,1315],{"class":140},"getPageData",[118,1317,1318],{"class":124},"(page);\n",[118,1320,1321],{"class":120,"line":227},[118,1322,1323],{"class":124},"}\n",[15,1325,1326],{},"然后就在项目的根目录，产生了9个json文件了，可以看一下数据爬取的有没有问题。处理过程中，主要问题是要等待页面元素加载完再拿，符合一个正常人去看网页的逻辑。",[15,1328,1329],{},"Puppeteer也提供了其他等待的方法，如：",[50,1331,1332,1335,1338,1341,1344],{},[53,1333,1334],{},"waitForTimeout",[53,1336,1337],{},"waitForFunction",[53,1339,1340],{},"waitForRequest",[53,1342,1343],{},"waitForResponse",[53,1345,1346],{},"等等",[15,1348,1349],{},"后续我再做登录、验证码等自动化操作时再详细总结一下。",[15,1351,1352],{},"以上就是全部内容了👏",[11,1354,1355],{"id":1355},"小结",[15,1357,1358],{},"这篇文章作为一个简单的小demo，记录一下研究puppeteer的开始，感觉可玩性很强。",[15,1360,1361],{},"可以用来在不适合摸鱼的办公环境，爬一下热榜然后通过webhook发到钉钉去看。或者用于签到、领这领那等重复工作。",[15,1363,1364],{},"等我发现了好玩的玩法，再写出来和大家分享 。",[15,1366,1367],{},"另外最近十来天一直在梳理自己的知识库，思考自己的定位问题，对未来的做一下规划，最好和最坏的情况考虑，也在学习如何运营。可以说收获满满，能无限进步的感觉很棒。",[15,1369,1370],{},"虽然也会偶尔动摇，但总体还算坚定，后续也会稳定的输出文章！！！",[15,1372,1373],{},"我是枣把儿，欢迎关注我的公众号：早早集市，来找我玩耍🥳",[1375,1376,1377],"style",{},"html pre.shiki code .sgsFI, html code.shiki .sgsFI{--shiki-default:#24292E}html .default .shiki span {color: var(--shiki-default);background: var(--shiki-default-bg);font-style: var(--shiki-default-font-style);font-weight: var(--shiki-default-font-weight);text-decoration: var(--shiki-default-text-decoration);}html .shiki span {color: var(--shiki-default);background: var(--shiki-default-bg);font-style: var(--shiki-default-font-style);font-weight: var(--shiki-default-font-weight);text-decoration: var(--shiki-default-text-decoration);}html pre.shiki code .s7eDp, html code.shiki .s7eDp{--shiki-default:#6F42C1}html pre.shiki code .sAwPA, html code.shiki .sAwPA{--shiki-default:#6A737D}html pre.shiki code .sD7c4, html code.shiki .sD7c4{--shiki-default:#D73A49}html pre.shiki code .sYu0t, html code.shiki .sYu0t{--shiki-default:#005CC5}html pre.shiki code .sYBdl, html code.shiki .sYBdl{--shiki-default:#032F62}html pre.shiki code .sqxcx, html code.shiki .sqxcx{--shiki-default:#E36209}",{"title":113,"searchDepth":196,"depth":196,"links":1379},[1380,1381,1382,1383,1386,1393],{"id":13,"depth":196,"text":13},{"id":26,"depth":196,"text":27},{"id":47,"depth":196,"text":48},{"id":79,"depth":196,"text":79,"children":1384},[1385],{"id":103,"depth":208,"text":103},{"id":154,"depth":196,"text":154,"children":1387},[1388,1389,1390,1391,1392],{"id":343,"depth":208,"text":343},{"id":371,"depth":208,"text":371},{"id":811,"depth":208,"text":811},{"id":894,"depth":208,"text":894},{"id":1099,"depth":208,"text":1099},{"id":1355,"depth":196,"text":1355},"2025-02-12T04:57:38.000Z","md","2025-08-15T14:58:41.000Z",{"category":1398,"published":1399},"技术","2023-12-11 00:00:00","/post/spider/puppeteer-jujin-hot-ranks","---\ntitle: 使用puppeteer爬取掘金热榜\ndescription: 使用puppeteer爬取掘金热榜\ntags: [\"Puppeteer\", \"Node\"]\ncategory: 技术\npublished: 2023-12-11 00:00:00\ndate: 2025-02-12 12:57:38\nlastmod: 2025-08-15 22:58:41\nshowTitle: 使用puppeteer爬取掘金热榜\n---\n## 引言\n\n又开新坑了。准备花点时间研究一下爬虫、自动化方向的技术，当然还是围绕node来展开。\n\n我会把前端、Node相关体系的技术内容和实际需求融合，力求闭环，闭不了就当把全部干货整理成知识库。文章首发在公众号：早早集市，感兴趣的可以关注一下。\n\n本篇是基于puppeteer这个库做的爬虫demo。\n\n## 什么是Puppeteer\n\nPuppeteer 是一个 Node 库，它提供了一个高级 API 来通过 [DevTools](https://chromedevtools.github.io/devtools-protocol/ \"DevTools\") 协议控制 Chromium 或 Chrome。Puppeteer 默认以 [headless](https://developers.google.com/web/updates/2017/04/headless-chrome \"headless\") 模式运行，但是可以通过修改配置文件运行“有头”模式。\n\n## Puppeteer能做什么\n\n-   生成页面 PDF。\n-   抓取 SPA（单页应用）并生成预渲染内容（即“SSR”（服务器端渲染））。\n-   自动提交表单，进行 UI 测试，键盘输入等。\n-   创建一个时时更新的自动化测试环境。 使用最新的 JavaScript 和浏览器功能直接在最新版本的Chrome中执行测试。\n-   捕获网站的 [timeline trace](https://developers.google.com/web/tools/chrome-devtools/evaluate-performance/reference \"timeline trace\")，用来帮助分析性能问题。\n-   测试浏览器扩展。\n\n## 环境准备\n\n一是可以单独写一个js文件，从头开始写个demo，直接用node运行即可。\n\n二是写在其他后端项目里。这里我选择写在我之前的nest的项目，方便后续灵感来了之后进一步整合。\n\n**版本：**\n\n-   node 18.18.2\n-   puppeteer 21.7.0\n\n### 安装\n\n先装puppeteer\n\n```typescript\npnpm i puppeteer\n```\n\n如果在公司，网络不好的话，可以换个源试试\n\n```typescript\npnpm config set registry https://registry.npmmirror.com\n\n```\n\n安装完成后，我们可以写一个接口用于测试，每次请求时运行一下爬取函数。然后在service里实现爬取的逻辑。\n\n## 开始编写\n\n关于[example](https://pptr.dev/ \"example\")，官网就有，很适合快速学习一下api。我仿照这个example快速开始，只不过我这里换成了掘金，因为例子里的地址因为某些原因，不方便访问。\n\n```typescript\nconst browser = await puppeteer.launch({\n      headless: false,\n      args: ['--start-fullscreen'],\n    });\nconst page = await browser.newPage();\nawait page.goto('https://juejin.cn/hot/articles');\n\n```\n\n先解释一下上边几句。每个api我都附上了官方文档地址，我写代码一般就只看官方文档来写。\n\n[launch](https://pptr.dev/api/puppeteer.puppeteernode.launch \"launch\")：启动一个浏览器实例，并且可以传入配置参数，如`headless`可以配置以无头(`'new'`)或有头模式（`fasle`）运行。返回值 `Promise\u003C`[Browser](https://pptr.dev/api/puppeteer.browser \"Browser\")`>`\n\n[newPage](https://pptr.dev/api/puppeteer.browser.newpage \"newPage\")：在 default browser context打开一个页面。返回值`Promise\u003C`[Page](https://pptr.dev/api/puppeteer.page \"Page\")`>`\n\n[goto](https://pptr.dev/api/puppeteer.page.goto \"goto\")：导航到一个url。返回值`Promise\u003C`[HTTPResponse](https://pptr.dev/api/puppeteer.httpresponse \"HTTPResponse\")` | null>`\n\n\n\n所以爬取数据的思路，就和自己打开浏览器进入掘金浏览一样，打开网页 ⇒ 输入地址 ⇒ 等待加载 ⇒ 加载完成 ⇒ 看到（拿到）数据 ⇒ 数据存储\n\n\n\n因为掘金不需要登录也可以直接看到文章热榜，所以直接去找页面元素，拿到数据即可。\n\n对于一个前端来说，去审查元素是比较简单的，如何你不是前端的话，可以用这种方式去获取元素\n\n### 分析页面\n\n1.  F12或者右键审查，打开控制台，选中这个工具，去点击页面上的元素\n\n![[1-img-20241119141175.png]]\n\n1.  在左侧点击页面上的字后，右侧控制台的元素会自动聚焦\n\n![[2-img-20241119141155.png]]\n\n1.  在右侧高亮的元素上右键，可以看到复制，打开后有个复制selector，选择。因为puppeteer是使用的css selector来获取元素。\n\n![[1-img-20241119141177.png]]\n\n### 获取数据\n\npuppeteer提供几个api可以用来获取到元素\n\n-   `Page.$()`  相当于 `document.querySelector`\n-   `Page.$$()`  相当于 `document.querySelectorAll`\n\n这两个api的返回值分别是`Promise\u003C`[ElementHandle](https://pptr.dev/api/puppeteer.elementhandle \"ElementHandle\")`\u003C`[NodeFor](https://pptr.dev/api/puppeteer.nodefor \"NodeFor\")`\u003CSelector>> | null>` 和 `Promise\u003CArray\u003C`[ElementHandle](https://pptr.dev/api/puppeteer.elementhandle \"ElementHandle\")`\u003C`[NodeFor](https://pptr.dev/api/puppeteer.nodefor \"NodeFor\")`\u003CSelector>>>>`\n\n而ElementHandle也有获取元素的相同api\n\n-   `ElementHandle.$()` 相当于在获取了一个元素的基础上，再获取它的子元素\n-   `ElementHandle.$$()` 同理\n\n这两个api的返回值和Page的两个api**返回值相同**。\n\n获取到元素后，还需要获取元素的值。一般有两种，一种是元素的内容，一种是元素的属性值\n\n-   **Page.\\$eval**('.selector', el ⇒ el.textContent)  这种方式可以直接获取到元素内容\n-   **Page.\\$\\$eval**('.selector, (elements) => elements.map((el) => el.getAttribute('href')') 这种则是获取元素的属性，当然这个api是获取所有内容\n-   ElementHandle.\\$eval  同理\n-   ElementHandle.\\$\\$eval 同理\n\n知道了这几个api之后，爬数据基本不是问题了。\n\n继续写一下代码，粘贴一下刚才复制的selector，看看能否取到数据。\n\n```typescript\nconst number = await wrap.$eval('#juejin > div:nth-child(1) > div.view-container.hot-lists > main > div.hot-list-body > div.hot-list-wrap > div.hot-list > a:nth-child(1) > div > div.article-item-left > div.article-number.article-number-1', (e) => e.textContent)\n```\n\n可以看到复制来的selector非常长，是从页面最开始元素开始的，可以自己适当删减一下。一般只具体到它的父元素和它自己的元素就可以了。\n\n有的时候，元素需要时间加载，比如元素基于接口渲染时，接口或者网速很慢，这个时候获取元素是获取不到的。puppeteer也有api可以等待元素加载出来\n\n-   [Page.waitForSelector()](https://pptr.dev/api/puppeteer.page.waitforselector \"Page.waitForSelector()\")\n\n比如要等待掘金热榜的文章列表被加载出来\n\n```typescript\n  await page.waitForSelector('.hot-list .article-item-wrap');\n\n```\n\n\n\n然后照着葫芦画瓢，再把文章标题、作者、热度等信息也获取到\n\n```typescript\nconst number = await wrap.$eval(\n  '.article-number',\n  (el) => el.textContent,\n);\nconst title = await wrap.$eval('.article-title', (el) => el.textContent);\nconst hotNumber = await wrap.$eval(\n  '.article-hot .hot-number',\n  (el) => el.textContent,\n);\nconst authorName = await wrap.$eval(\n  '.article-author-name-text',\n  (el) => el.textContent,\n);\nconst authorUrl = await wrap.$eval('.article-author-name', (el) =>\n  el.getAttribute('href'),\n);\n```\n\n可以在nest的控制台打印一下，看看输出的内容是否正确。\n\n爬取完成后，可以关闭一下browser\n\n```typescript\nawait browser.close();\n\n```\n\n### 数据存储\n\n基于学习的目的，数据可以看情况存储在本地json文件，或者数据库中，或者发送给其他服务器。\n\n注意不要滥用数据或进行其他违法行为，目的仅仅是学习node，切记切记。\n\n这里我选择把数据`articleList`存在本地json文件中。用`JSON.stringify(articleList, null, 2)` 按缩进为2格式化一下数据。\n\n```typescript\nfs.writeFileSync(\n  `./热榜-${+new Date()}.json`,\n  JSON.stringify(articleList, null, 2),\n);\n```\n\n### 数据处理\n\n通过获取到的数据可以发现，很多数据没法直接存储，需要被洗一洗。比如上边文章编号，拿到数据之后有很多空格，存之前我们先处理处理。\n\n```typescript\nlet reg = /(\\n|\\s)*/g;\n\nfor循环articleList\n  article.number = article.number.replace(reg, '');\n  article.hotNumber = article.hotNumber.replace(reg, '');\n```\n\n把多余的`空格`和`\\n`去掉。\n\n然后当前爬取的是哪个榜单，存的时候我也想记录一下，再去页面上找文章列表上方的榜单名称。\n\n```typescript\nlet navName = await page.$eval(\n  'div.hot-list-header > div > span.hot-title > span',\n  (el) => el.textContent,\n);\n```\n\n榜单名称拿到后，也需要处理一下空格，不再赘述。\n\n这个榜单名称是通过接口获取到然后渲染的，在点击左侧导航栏的时候就可以发现，所以要确保右侧内容被渲染完，我们再去开始爬取行为，所以可以在开头加上这两句。确保页面加载出来的时候，这两块已经被渲染完毕。\n\n```typescript\nawait page.waitForSelector('.hot-list .article-item-wrap');\nawait page.waitForSelector(\n  'div.hot-list-header > div > span.hot-title > span',\n);\n```\n\n这样就完成了对热榜-综合榜的爬取。然后再去爬其他榜单。\n\n### 多页爬取\n\n通过分析左侧的导航栏，可以在元素a标签上发现一个href，点击后，页面就会切换到对应的地址。所以我的思路是先采集全部的地址。\n\n```typescript\nconst navUrls = await page.$$eval(\n  '.sub-nav-item-wrap .nav-item-content a',\n  (elements) => elements.map((el) => el.getAttribute('href')),\n);\n```\n\n然后把刚才写的爬取综合榜的过程，封装到一个函数里，作为爬取单页的方法。因为每个榜单的样式都是一样的。\n\n```typescript\n// 伪代码\n function getPageData() {\n   await 元素加载\n   await 获取元素\n   await 获取内容\n   await 组装数据\n   await 数据处理\n   await 写入文件\n }\n```\n\n然后用一个for of 循环进行多页的爬取，注意不要用forEach，因为要保证能await\n\n```typescript\nfor (const url of navUrls) {\n  const pageUrl = 'https://juejin.cn' + url;\n  await page.goto(pageUrl);\n  await this.getPageData(page);\n}\n```\n\n然后就在项目的根目录，产生了9个json文件了，可以看一下数据爬取的有没有问题。处理过程中，主要问题是要等待页面元素加载完再拿，符合一个正常人去看网页的逻辑。\n\nPuppeteer也提供了其他等待的方法，如：\n\n-   waitForTimeout\n-   waitForFunction\n-   waitForRequest\n-   waitForResponse\n-   等等\n\n后续我再做登录、验证码等自动化操作时再详细总结一下。\n\n以上就是全部内容了👏\n\n## 小结\n\n这篇文章作为一个简单的小demo，记录一下研究puppeteer的开始，感觉可玩性很强。\n\n可以用来在不适合摸鱼的办公环境，爬一下热榜然后通过webhook发到钉钉去看。或者用于签到、领这领那等重复工作。\n\n等我发现了好玩的玩法，再写出来和大家分享 。\n\n另外最近十来天一直在梳理自己的知识库，思考自己的定位问题，对未来的做一下规划，最好和最坏的情况考虑，也在学习如何运营。可以说收获满满，能无限进步的感觉很棒。\n\n虽然也会偶尔动摇，但总体还算坚定，后续也会稳定的输出文章！！！\n\n我是枣把儿，欢迎关注我的公众号：早早集市，来找我玩耍🥳\n\n\n\n\n\n",{"title":5,"description":5},"post/spider/puppeteer-jujin-hot-ranks",[1405,1406],"Puppeteer","Node","jmF5Posx2tLN3l3gS53jvwBkOkPULGKCtEiENeyu20s",[1409,1413],{"title":1410,"path":1411,"stem":1412},"OpenClaw 安装入门（Windows）","/post/zzao/openclaw/openclaw-install-windows","post/zzao/openclaw/openclaw-install-windows",{"title":1414,"path":1415,"stem":1416},"假设你是AI，你的Skill应该是什么样的","/post/zzao/ai-skill-structure","post/zzao/ai-skill-structure",1779005086135]