A member of my chat group asked how to quickly set up a search engine, and after a bit of searching I came across this.
Where the code lives
- Git: https://github.com/asciimoo/s…
The project is thoughtful enough to provide a ready-made Docker image, so you can basically just pull it and go. Run the following:
# stop and remove any existing searx container
cid=$(sudo docker ps -a | grep searx | awk '{print $1}')
echo searx cid is $cid
if [ "$cid" != "" ]; then
    sudo docker stop $cid
    sudo docker rm $cid
fi
# start searx; replace http://yourdomain.com with your own base URL
sudo docker run -d --name searx -e IMAGE_PROXY=True -e BASE_URL=http://yourdomain.com -p 7777:8888 wonderfall/searx
After that it is ready to use: check that the Docker container is up and you can start searching.
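As a quick sanity check, you can also hit the instance from Python. This is a minimal sketch, assuming the port mapping above (host port 7777) and that the json output format is enabled in the instance's settings.yml, which may require editing depending on the searx version:

# smoke-test a local searx instance (assumes port 7777 and json format enabled)
import requests

resp = requests.get('http://localhost:7777/search',
                    params={'q': 'python', 'format': 'json'})
resp.raise_for_status()
for item in resp.json().get('results', [])[:5]:
    print('%s -> %s' % (item['title'], item['url']))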
Thoughts
Pretty convenient, right? Now let's see how the source code implements this.
Opening up the code, the essence is simply to take the results of each request and aggregate them into one big result set; the data source could just as well be a DB or a file. Here is the core engine code (essentially searx's generic JSON engine module):
# searx's generic JSON engine (Python 2 era code, as in the project source)
from urllib import urlencode
from json import loads
from collections import Iterable

# per-engine settings, filled in by each engine's configuration
search_url = None
url_query = None
content_query = None
title_query = None
suggestion_query = ''
results_query = ''

# parameters for engines with paging support
#
paging = False
# number of results on each page
# (only needed if the site requires not a page number, but an offset)
page_size = 1
# number of the first page (usually 0 or 1)
first_page_num = 1


def iterate(iterable):
    # yield (key, value) pairs uniformly for both dicts and lists
    if type(iterable) == dict:
        it = iterable.iteritems()
    else:
        it = enumerate(iterable)
    for index, value in it:
        yield str(index), value


def is_iterable(obj):
    # strings are technically iterable, but we never want to descend into them
    if type(obj) == str:
        return False
    if type(obj) == unicode:
        return False
    return isinstance(obj, Iterable)


def parse(query):
    # split a path like 'a/b/c' into ['a', 'b', 'c'], dropping empty parts
    q = []
    for part in query.split('/'):
        if part == '':
            continue
        else:
            q.append(part)
    return q


def do_query(data, q):
    # recursively collect every value reachable through the key path q
    ret = []
    if not q:
        return ret
    qkey = q[0]

    for key, value in iterate(data):
        if len(q) == 1:
            if key == qkey:
                ret.append(value)
            elif is_iterable(value):
                ret.extend(do_query(value, q))
        else:
            if not is_iterable(value):
                continue
            if key == qkey:
                ret.extend(do_query(value, q[1:]))
            else:
                ret.extend(do_query(value, q))
    return ret


def query(data, query_string):
    q = parse(query_string)
    return do_query(data, q)


def request(query, params):
    # urlencode({'q': ...}) produces 'q=...'; [2:] strips the leading 'q='
    query = urlencode({'q': query})[2:]
    fp = {'query': query}
    if paging and search_url.find('{pageno}') >= 0:
        fp['pageno'] = (params['pageno'] - 1) * page_size + first_page_num
    params['url'] = search_url.format(**fp)
    params['query'] = query
    return params


def response(resp):
    # map the raw JSON response onto searx's {url, title, content} results
    results = []
    json = loads(resp.text)

    if results_query:
        for result in query(json, results_query)[0]:
            url = query(result, url_query)[0]
            title = query(result, title_query)[0]
            content = query(result, content_query)[0]
            results.append({'url': url, 'title': title, 'content': content})
    else:
        for url, title, content in zip(
            query(json, url_query),
            query(json, title_query),
            query(json, content_query)
        ):
            results.append({'url': url, 'title': title, 'content': content})

    if not suggestion_query:
        return results
    for suggestion in query(json, suggestion_query):
        results.append({'suggestion': suggestion})
    return results
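To see what the parse/do_query helpers actually do, here is a small example with made-up data (the dict below and the 'results/title' path are purely illustrative):

# toy illustration of the path-query helpers above (hypothetical data)
data = {
    'results': [
        {'title': 'first hit', 'url': 'http://example.com/1'},
        {'title': 'second hit', 'url': 'http://example.com/2'},
    ]
}

# 'results/title' means: descend into 'results', collect every 'title'
print(query(data, 'results/title'))  # ['first hit', 'second hit']
print(query(data, 'results/url'))    # ['http://example.com/1', 'http://example.com/2']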
The takeaway
On every response we can easily customize the data that gets returned (it can come from the network, a database, or a file). Taking this one step further: if we can hack the response result, we can serve data we crawled ourselves as the results. For something like 1024, you could build your own little "special interests" engine. I won't paste the code here, so have a go at it yourself (a rough sketch follows below); combined with jieba word segmentation it gets even more fun.
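Here is a rough sketch of that idea. Everything below is hypothetical (the module name, LOCAL_INDEX, the matching logic); it relies only on the request/response contract shown above. Note that searx will still fetch whatever URL request() returns, so point it at something reachable, e.g. a static page on your own server:

# my_engine.py - hypothetical searx-style engine that answers from
# locally crawled data instead of a remote site (a sketch, not an
# official searx API; LOCAL_INDEX and the file format are made up)
import json

import jieba  # tokenizer, used to match Chinese queries against titles

LOCAL_INDEX = 'my_crawl.json'  # [{"url": ..., "title": ..., "content": ...}, ...]
_last_query = ''


def request(query, params):
    # searx will actually fetch this URL, so it must be reachable;
    # remember the query so response() can use it
    global _last_query
    _last_query = query
    params['url'] = 'http://localhost/ignored'
    params['query'] = query
    return params


def response(resp):
    # ignore the HTTP response entirely and serve our own crawl
    with open(LOCAL_INDEX) as f:
        docs = json.load(f)
    terms = set(jieba.cut_for_search(_last_query))
    results = []
    for doc in docs:
        if terms & set(jieba.cut_for_search(doc['title'])):
            results.append({'url': doc['url'],
                            'title': doc['title'],
                            'content': doc['content']})
    return results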