Commit Graph

699 Commits (7586e3d75c62ee2226af349bad30ab665ec3732d)
 

Author SHA1 Message Date
yihua.huang 7586e3d75c add some tests for github repo downloader 9 years ago
yihua.huang 800f66c4cc Revert "remove some unknown config"
This reverts commit 0e245c9896.
9 years ago
yihua.huang 73ae7a1d52 remove ci for jdk6 9 years ago
yihua.huang 0e245c9896 remove some unknown config 9 years ago
yihua.huang 9ed06ccdf0 update surefire version 9 years ago
Yihua Huang 9d0eeb9000 Merge pull request #218 from bingoko/master
Add PhantomJS headless browser support
9 years ago
Yihua Huang 84b046e4c9 Merge pull request #227 from hsqlu/master
update deprecated method
9 years ago
Yihua Huang cfde3b7657 Merge pull request #237 from SpenceZhou/master
Update RedisScheduler.java
9 years ago
SpenceZhou 165e5a72eb Update RedisScheduler.java
Fix bug in RedisScheduler when getting the total crawled count
9 years ago
Yihua Huang 5f9e1a96f2 Merge pull request #233 from x1ny/master
Fix FileCacheQueueScheduler preventing the program from exiting normally and leaving streams unclosed
9 years ago
Yihua Huang 7d7eb033d3 Merge pull request #234 from chy996633/master
Zhihu crawler
9 years ago
chy996633 afd1617b58 Zhihu crawler 9 years ago
x1ny 90e14b31b0 Fix FileCacheQueueScheduler preventing the program from exiting normally and leaving streams unclosed
FileCacheQueueScheduler starts a thread that runs periodically to save data, but it is not shut down after the crawl finishes, so the program cannot exit; the IO streams are not closed either.

Solution:
Have FileCacheQueueScheduler implement the Closeable interface and shut down the thread and the streams in its close method.
Add a call to close the scheduler in Spider's close method.
9 years ago
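A minimal sketch of the fix described in the commit above, assuming a scheduler that keeps a periodic flush thread and a writer for its cache file; the class and field names here are illustrative, not the actual webmagic source.

```java
import java.io.Closeable;
import java.io.IOException;
import java.io.PrintWriter;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Illustrative sketch: a file-backed scheduler that flushes its queue
// periodically and releases the thread and stream when closed.
public class ClosableFileQueueScheduler implements Closeable {

    private final ScheduledExecutorService flushService =
            Executors.newSingleThreadScheduledExecutor();
    private final PrintWriter urlWriter;

    public ClosableFileQueueScheduler(PrintWriter urlWriter) {
        this.urlWriter = urlWriter;
        // periodically persist the in-memory queue to disk
        flushService.scheduleAtFixedRate(this::flush, 10, 10, TimeUnit.SECONDS);
    }

    private void flush() {
        urlWriter.flush();
    }

    @Override
    public void close() throws IOException {
        // stop the background flush thread so the JVM can exit ...
        flushService.shutdown();
        // ... and close the underlying stream
        urlWriter.close();
    }
}
```

In the same spirit, Spider's close method would check its scheduler (and other components) for Closeable and call close() on them.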
Qiannan Lu 155215290f resolve issue #226 10 years ago
Qiannan Lu 21f81bb8c1 update deprecated method 10 years ago
bingoko 5d365f7bf4 update and validate pom.xml
Update selenium and GhostDriver (PhantomJSDriver) to latest version.
10 years ago
bingoko d3bbece202 Add PhantomJS support for selenium
The configuration file is config.ini
The dependencies are updated in pom.xml.
Update SeleniumDownloader and WebDriverPool to support PhantomJS. 
NOTE: The versions of GhostDriver, Selenium, and PhantomJS are stable
and validated.

A GooglePlay example is under the samples package: GooglePlayProcessor.java
10 years ago
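For context, this is roughly what driving PhantomJS through GhostDriver looks like at the Selenium level; the binary path and URL below are placeholders, and in webmagic itself such settings come from the config.ini mentioned above rather than being hard-coded.

```java
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.phantomjs.PhantomJSDriver;
import org.openqa.selenium.phantomjs.PhantomJSDriverService;
import org.openqa.selenium.remote.DesiredCapabilities;

public class PhantomJsRenderDemo {
    public static void main(String[] args) {
        DesiredCapabilities caps = new DesiredCapabilities();
        // path to the phantomjs binary; placeholder, normally taken from config.ini
        caps.setCapability(
                PhantomJSDriverService.PHANTOMJS_EXECUTABLE_PATH_PROPERTY,
                "/usr/local/bin/phantomjs");

        WebDriver driver = new PhantomJSDriver(caps);
        try {
            driver.get("https://play.google.com/store/apps");
            // fully rendered HTML, including JavaScript-generated content
            String html = driver.getPageSource();
            System.out.println(html.length());
        } finally {
            driver.quit();
        }
    }
}
```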
yihua.huang 56e0cd513a compile error fix 10 years ago
yihua.huang c5740b1840 change assert #200 10 years ago
yihua.huang 67eb632f4d test for issue #200 10 years ago
Yihua Huang b30ca6ce1e Merge pull request #198 from okuc/master
Fix bug where site.setHttpProxy() had no effect
10 years ago
高军 590561a6e4 Fix bug where site.setHttpProxy() had no effect 10 years ago
Yihua Huang 05a1f39569 Merge pull request #193 from EdwardsBean/fix-mppipeline
Bug fix:MultiPagePipeline and DoubleKeyMap concurrent bug
10 years ago
edwardsbean 74962d69b9 fix bug:MultiPagePipeline and DoubleKeyMap concurrent bug 10 years ago
Yihua Huang 6b9d21fcf3 Merge pull request #188 from EdwardsBean/retry_time
add retry sleep time
10 years ago
edwardsbean 4978665633 add retry sleep time 10 years ago
yihua.huang 8ffc1a7093 add NPE check for POST method 10 years ago
yihua.huang 8551b668a0 remove commented code 10 years ago
Yihua Huang 20422f1b63 Merge pull request #161 from zhugw/patch-4
Update FileCacheQueueScheduler.java
10 years ago
zhugw eb3c78b9d8 Update FileCacheQueueScheduler.java
Isn't this more rigorous? Otherwise, when restarting after an interruption, the (first) entry URL would still be added to the queue and written to the file.
But there is another problem now. Say the first pass has crawled everything (determined via spider.getStatus == Stopped), we sleep for 24 hours, and then crawl again (by calling the crawl method recursively).
Unlike restarting after an interruption, at this point lineReader == cursor, so the queue is empty after initialization and the entry URL is already in the urls set, which makes the crawl threads exit immediately. That leaves no way to pick up newly added content on the site.
Solution one:
Right after detecting that the crawl has finished, overwrite the cursor file. On the second crawl the cursor is 0, so all the urls in urls.txt are put back into the queue, and new urls can be discovered through them.
Solution two:
An optimization of solution one. Solution one meets the business requirement, but it does a lot of wasted work, e.g. all the old target urls are still downloaded, extracted and persisted. New content usually appears under the HelpUrls, for example a page gaining a new post or a few extra pages, so from the second pass onward only the HelpUrls need to be put into the queue.

I'd appreciate feedback: is my understanding above correct, are there cases I haven't considered, or is there a simpler solution? Thanks!
11 years ago
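A rough sketch of "solution one" from the commit above: once the spider reports Stopped, overwrite the cursor file so the next pass reloads every URL from urls.txt. The cursor file name and the helper itself are illustrative assumptions, not part of the actual FileCacheQueueScheduler.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

import us.codecraft.webmagic.Spider;

public class CursorReset {

    /**
     * After a full pass has finished, reset the cursor to 0 so the next
     * run re-reads every URL from urls.txt instead of starting with an
     * empty queue.
     */
    public static void resetCursorIfFinished(Spider spider, String cacheDir) throws IOException {
        if (spider.getStatus() == Spider.Status.Stopped) {
            // file name is a placeholder for the scheduler's cursor file
            Path cursorFile = Paths.get(cacheDir, "cursor.txt");
            Files.write(cursorFile, "0".getBytes());
        }
    }
}
```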
Yihua Huang 3a9c1d3002 Merge pull request #159 from zhugw/patch-3
Update Site.java
11 years ago
zhugw bc666e927d Update Site.java
The javadoc of setCycleRetryTimes says: "Set cycleRetryTimes times when download fail, 0 by default. Only work in RedisScheduler."
But looking at the source code, there does not seem to be any such restriction, i.e. that it only works with RedisScheduler. So I'd like to ask whether this javadoc is out of date.
11 years ago
yihua.huang 42a30074c9 update urls.contains to DuplicateRemover in FileCacheQueueScheduler #157 11 years ago
Yihua Huang 689e89a9b2 Merge pull request #157 from zhugw/patch-1
Update FileCacheQueueScheduler.java
11 years ago
zhugw 1db940a088 Update FileCacheQueueScheduler.java
While using it I found duplicate URLs in the urls.txt file. Tracing the source code, I found that on initialization the file is loaded and all urls are read into a set, but when new URLs are added for crawling there is no check whether they already exist in that set (i.e. in the file), which leads to duplicate URLs in the file. I modified the source accordingly; please review.
11 years ago
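A minimal sketch of the change described above, assuming an in-memory set loaded from urls.txt and a writer that appends to it; the names are illustrative rather than the actual FileCacheQueueScheduler fields.

```java
import java.io.PrintWriter;
import java.util.LinkedHashSet;
import java.util.Set;
import java.util.concurrent.LinkedBlockingQueue;

// Illustrative sketch: only enqueue and persist a URL if it is not
// already in the set loaded from urls.txt.
public class DedupedFileQueue {

    private final Set<String> urls = new LinkedHashSet<>();           // loaded from urls.txt on init
    private final LinkedBlockingQueue<String> queue = new LinkedBlockingQueue<>();
    private final PrintWriter urlWriter;                               // appends to urls.txt

    public DedupedFileQueue(PrintWriter urlWriter) {
        this.urlWriter = urlWriter;
    }

    public synchronized void push(String url) {
        // the original code appended unconditionally, producing duplicates in the file
        if (urls.add(url)) {
            queue.add(url);
            urlWriter.println(url);
        }
    }
}
```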
yihua.huang 147401ce5e remove duplicate setPath in ProxyPool 11 years ago
yihua.huang 3734865a6a fix package name =.= 11 years ago
yihua.huang e7668e01b8 fix SourceRegion error and add some tests on it #144 11 years ago
yihua.huang 4e5ba02020 fix test cont' 11 years ago
yihua.huang 4446669c24 fix test 11 years ago
yihua.huang 9866297ec4 Disable jsoup entity escape by default. Set Html.DISABLE_HTML_ENTITY_ESCAPE to false to enable it. #149 11 years ago
yihua.huang 4e6e946dd7 more friendly exception message in PlainText #144 11 years ago
yihua.huang ebb931e0bf update assertj to test scope 11 years ago
yihua.huang af9939622b move thread package out of selector (because it was added by mistake at the beginning) 11 years ago
yihua.huang 2fd8f05fe2 change path separator for various OSes #139 11 years ago
yihua.huang eae37c868b new sample 11 years ago
yihua.huang b3a282e58d some fix for tests #130 11 years ago
yihua.huang b75e64a61b Merge branch 'yxssfxwzy-proxy' 11 years ago
yihua.huang 074d767f45 Merge branch 'proxy' of github.com:yxssfxwzy/webmagic into yxssfxwzy-proxy 11 years ago
zwf 2f89cfc31a add test and fix bug of proxy module 11 years ago