- "Find all" page nodes in the XML file
- Parse all of those pages, collect the data, and find further pages
- Rough script:
from pprint import pprint

from scrapy import Request
from scrapy.spiders import XMLFeedSpider


class test_spider(XMLFeedSpider):
    name = 'test'
    start_urls = ['https://www.example.com']
    custom_settings = {
        'ITEM_PIPELINES': {
            'test.test_pipe': 100,
        },
    }
    itertag = 'pages'

    # XMLFeedSpider invokes parse_node for each <pages> node;
    # the callback must use this exact name
    def parse_node(self, response, node):
        yield Request(
            'https://www.example.com/' + node.xpath('@id').extract_first() + '/xml-out',
            callback=self.parse2,
        )

    def parse2(self, response):
        # id attribute of the root element
        yield {'COLLECT1': response.xpath('/*/@id').extract_first()}
        # the sub-page names arrive as one '^'-separated text node
        pages = response.xpath('//node[@id="page"]/text()').extract_first() or ''
        for text in pages.split('^'):
            if text:
                yield Request(
                    'https://www.example.com/' + text,
                    callback=self.parse3,
                    dont_filter=True,
                )

    def parse3(self, response):
        yield {'COLLECT2': response.xpath('/*/@id').extract_first()}


# matches the 'test.test_pipe' entry in ITEM_PIPELINES
# (assumes this file is importable as module 'test')
class test_pipe(object):
    def process_item(self, item, spider):
        pprint(item)
Desired result, one merged item per top-level page:
{'COLLECT1': 'some data', 'COLLECT2': ['some data', 'some data', …]}
Is there a way to invoke the pipeline once after each parse_node pass, with everything merged into a single item?
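For reference, the kind of merging I have in mind looks roughly like the sketch below. It is not a built-in Scrapy feature: the partial item and a countdown are threaded through cb_kwargs (Scrapy ≥ 1.7; request.meta would serve on older versions), and the merged item is only yielded once the last sub-page callback has run. The state dict and its 'pending' counter are my own invention, not a Scrapy API.

from scrapy import Request
from scrapy.spiders import XMLFeedSpider


class test_spider(XMLFeedSpider):
    name = 'test'
    start_urls = ['https://www.example.com']
    itertag = 'pages'

    def parse_node(self, response, node):
        yield Request(
            'https://www.example.com/' + node.xpath('@id').extract_first() + '/xml-out',
            callback=self.parse2,
        )

    def parse2(self, response):
        # start the merged item here and share it with every sub-request
        item = {'COLLECT1': response.xpath('/*/@id').extract_first(),
                'COLLECT2': []}
        raw = response.xpath('//node[@id="page"]/text()').extract_first() or ''
        pages = [t for t in raw.split('^') if t]
        if not pages:
            yield item  # no sub-pages: emit the item immediately
            return
        # hypothetical countdown state, shared by all sub-requests
        state = {'item': item, 'pending': len(pages)}
        for text in pages:
            yield Request(
                'https://www.example.com/' + text,
                callback=self.parse3,
                cb_kwargs={'state': state},
                dont_filter=True,
            )

    def parse3(self, response, state):
        state['item']['COLLECT2'].append(response.xpath('/*/@id').extract_first())
        state['pending'] -= 1
        if state['pending'] == 0:
            # last sub-page done: the merged item reaches the pipeline once
            yield state['item']

The obvious weakness: if any sub-request fails, 'pending' never reaches zero and the item is silently lost, so a real version would also have to decrement the counter in an errback.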