When we use ChatGPT to get certain tasks done, multiple rounds of conversation are often required, for example asking ChatGPT to analyze, translate, or summarize an online article or document, then saving the summary to a local text file. Some back-and-forth "negotiation" with ChatGPT is unavoidable along the way. In fact, that negotiation can itself be automated: AutoGPT can decompose tasks for us automatically. After all, anything a program can do, we humans should never have to do by hand.
The only thing we need to provide is a goal. AutoGPT will break it down into a series of subtasks and complete them one by one, simple and efficient.
Configuring AutoGPT
First, make sure Python 3.10.9 is installed locally.
Then clone the project with Git:
git clone https://github.com/Significant-Gravitas/Auto-GPT.git
Next, enter the project directory:
cd Auto-GPT
Install the dependencies:
pip3 install -r requirements.txt
Once installation succeeds, copy the project's configuration template:
cp .env.template .env
The cp command copies the configuration template .env.template to a new configuration file named .env.
Then put your OpenAI API key into the configuration file:
### OPENAI
# OPENAI_API_KEY - OpenAI API Key (Example: my-openai-api-key)
# TEMPERATURE - Sets temperature in OpenAI (Default: 0)
# USE_AZURE - Use Azure OpenAI or not (Default: False)
OPENAI_API_KEY=your-openai-api-key
TEMPERATURE=0
USE_AZURE=False
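AutoGPT loads this file for you, but purely to illustrate what the .env format above encodes, here is a hand-rolled parser sketch (a stand-in for real dotenv loading, not AutoGPT's actual code; the temporary file just plays the role of .env):

```python
import os
import tempfile

def load_env(path: str) -> dict:
    """Parse a .env-style file into a dict, skipping comments and blank lines."""
    values = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            key, _, value = line.partition("=")
            values[key.strip()] = value.strip()
    return values

# Demo with a temporary file standing in for the real .env:
with tempfile.NamedTemporaryFile("w", suffix=".env", delete=False) as f:
    f.write("# OPENAI\nOPENAI_API_KEY=my-openai-api-key\nTEMPERATURE=0\nUSE_AZURE=False\n")
    path = f.name

print(load_env(path))  # {'OPENAI_API_KEY': 'my-openai-api-key', 'TEMPERATURE': '0', 'USE_AZURE': 'False'}
os.remove(path)
```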
Besides the official OpenAI API, AutoGPT also supports Microsoft Azure's endpoints.
If you want to use Azure, set USE_AZURE to True in the config, then copy the azure.yaml.template template to a new azure.yaml configuration file.
Next, fill in your Azure service key in azure.yaml.
Since getting Azure access to the OpenAI API involves a rather cumbersome application process, we will stick with the official OpenAI API here.
Of course, if you would rather not install all those dependencies locally, you can also build an Auto-GPT container with Docker:
docker build -t autogpt .
docker run -it --env-file=./.env -v $PWD/auto_gpt_workspace:/app/auto_gpt_workspace autogpt
Here Docker reads the project's Dockerfile automatically to build the image, which is quite convenient.
At this point, Auto-GPT is fully configured.
Running Auto-GPT
Run this command in the project root:
python3 -m autogpt --debug
This starts AutoGPT:
➜ Auto-GPT git:(master) python -m autogpt --debug
Warning: The file 'AutoGpt.json' does not exist. Local memory would not be saved to a file.
Debug Mode: ENABLED
Welcome to Auto-GPT! Enter the name of your AI and its role below. Entering nothing will load defaults.
Name your AI: For example, 'Entrepreneur-GPT'
AI Name:
First, give your AutoGPT bot a name:
AI Name: v3u.cn
v3u.cn here! I am at your service.
Describe your AI's role: For example,'an AI designed to autonomously develop and run businesses with the sole goal of increasing your net worth.'
v3u.cn is:
Once it has a name, Auto-GPT is at your service.
Next, set a goal for AutoGPT:
v3u.cn is: Analyze the contents of this article,the url is https://v3u.cn/a_id_303,and write the result to goal.txt
Here we ask AutoGPT to analyze and summarize the article at https://v3u.cn/a_id_303 and write the result to a local file named goal.txt.
The program returns:
Enter up to 5 goals for your AI: For example: Increase net worth, Grow Twitter Account, Develop and manage multiple businesses autonomously'
Enter nothing to load defaults, enter nothing when finished.
Goal 1:
Using memory of type: LocalCache
AutoGPT tells you the goal can be split into at most five subtasks. We can define them ourselves, or let the bot decompose the task for us: just press Enter and AutoGPT will do it automatically.
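Under the hood, this decomposition is just another prompt to the model. A rough sketch of the idea follows, with a stubbed model reply instead of a real API call (build_decompose_prompt and parse_subtasks are hypothetical names for illustration, not AutoGPT's actual functions):

```python
import json

def build_decompose_prompt(goal: str, max_goals: int = 5) -> str:
    """Ask the model to split a goal into at most max_goals subtasks, as JSON."""
    return (f"Break the following goal into at most {max_goals} subtasks. "
            f"Reply with a JSON list of strings only.\nGoal: {goal}")

def parse_subtasks(reply: str, max_goals: int = 5) -> list:
    """Parse the model's JSON reply, keeping at most max_goals items."""
    return json.loads(reply)[:max_goals]

# Stubbed reply, standing in for a real gpt-3.5-turbo response:
reply = '["Browse the article", "Summarize its contents", "Write the summary to goal.txt"]'
print(parse_subtasks(reply))  # ['Browse the article', 'Summarize its contents', 'Write the summary to goal.txt']
```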
The program then scrapes the article's content and analyzes it with the gpt-3.5-turbo model:
Goal 1:
Using memory of type: LocalCache
Using Browser: chrome
Token limit: 4000
Memory Stats: (0, (0, 1536))
Token limit: 4000
Send Token Count: 936
Tokens remaining for response: 3064
------------ CONTEXT SENT TO AI ---------------
System: The current time and date is Mon Apr 17 20:29:37 2023
System: This reminds you of these events from your past:
User: Determine which next command to use, and respond using the format specified above:
----------- END OF CONTEXT ----------------
Creating chat completion with model gpt-3.5-turbo, temperature 0.0, max_tokens 3064
The JSON object is valid.
THOUGHTS: Let's start by browsing the article at https://v3u.cn/a_id_303 to get a better understanding of its contents.
REASONING: Browsing the article will allow us to analyze its contents and determine the appropriate next steps.
PLAN:
- Browse the article
- Analyze its contents
- Determine the appropriate next steps
CRITICISM: None
NEXT ACTION: COMMAND = browse_website ARGUMENTS = {'url': 'https://v3u.cn/a_id_303', 'question': 'analyze the contents of the article'}
Enter 'y' to authorise command, 'y -N' to run N continuous commands, 'n' to exit program, or enter feedback for ...
Input:y
-=-=-=-=-=-=-= COMMAND AUTHORISED BY USER -=-=-=-=-=-=-=
[WDM] - Downloading: 100%|████████████████████████████████████████████| 8.04M/8.04M [00:03<00:00, 2.42MB/s]
Text length: 6977 characters
Adding chunk 1 / 1 to memory
Summarizing chunk 1 / 1
Creating chat completion with model gpt-3.5-turbo, temperature 0.0, max_tokens 300
SYSTEM: Command browse_website returned: Error: This model's maximum context length is 4097 tokens. However, you requested 4339 tokens (4039 in the messages, 300 in the completion). Please reduce the length of the messages or completion.
Token limit: 4000
Memory Stats: (2, (2, 1536))
Token limit: 4000
Send Token Count: 1472
Tokens remaining for response: 2528
------------ CONTEXT SENT TO AI ---------------
System: The current time and date is Mon Apr 17 20:30:19 2023
System: This reminds you of these events from your past:
['Assistant Reply: {\n"thoughts": {\n"text":"Let\'s start by browsing the article at https://v3u.cn/a_id_303 to get a better understanding of its contents.",\n "reasoning": "Browsing the article will allow us to analyze its contents and determine the appropriate next steps.",\n "plan": "- Browse the article\\n- Analyze its contents\\n- Determine the appropriate next steps",\n "criticism": "None",\n "speak": "I suggest we start by browsing the article at the given URL to analyze its contents and determine the appropriate next steps."\n},\n "command": {\n "name": "browse_website",\n "args": {\n "url": "https://v3u.cn/a_id_303",\n "question": "analyze the contents of the article"\n}\n }\n} \nResult: Command browse_website returned: Error: This model\'s maximum context length is 4097 tokens. However, you requested 4339 tokens (4039 in the messages, 300 in the completion). Please reduce the length of the messages or completion. \nHuman Feedback: GENERATE NEXT COMMAND JSON']
User: Determine which next command to use, and respond using the format specified above:
Assistant: {
"thoughts": {"text": "Let's start by browsing the article at https://v3u.cn/a_id_303 to get a better understanding of its contents.","reasoning":"Browsing the article will allow us to analyze its contents and determine the appropriate next steps.","plan":"- Browse the article\n- Analyze its contents\n- Determine the appropriate next steps","criticism":"None","speak":"I suggest we start by browsing the article at the given URL to analyze its contents and determine the appropriate next steps."},"command": {"name":"browse_website","args": {"url":"https://v3u.cn/a_id_303","question":"analyze the contents of the article"}
}
}
User: Determine which next command to use, and respond using the format specified above:
----------- END OF CONTEXT ----------------
Creating chat completion with model gpt-3.5-turbo, temperature 0.0, max_tokens 2528
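The "maximum context length is 4097 tokens" error in the log above is why AutoGPT splits long page text into chunks before summarizing (the "Adding chunk 1 / 1 to memory" line). A rough sketch of such chunking, using the common ~4 characters-per-token heuristic rather than a real tokenizer:

```python
def chunk_text(text: str, max_tokens: int, chars_per_token: int = 4) -> list:
    """Split text into pieces that roughly fit a token budget."""
    limit = max_tokens * chars_per_token
    return [text[i:i + limit] for i in range(0, len(text), limit)]

# A 6977-character page (the length reported in the log) needs two 1000-token chunks:
chunks = chunk_text("x" * 6977, max_tokens=1000)
print(len(chunks))  # 2
```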
Finally, it writes the analysis to goal.txt:
The article mainly explains that Apple Mac computers can handle machine learning and deep learning workloads, demonstrates this by installing and running the TensorFlow deep learning framework, and includes an in-depth comparison and benchmark of TensorFlow's CPU and GPU training modes.
The whole run goes through smoothly, end to end.
Conclusion
What sets AutoGPT apart from other AI programs is its specific focus on generating prompts and executing multi-step tasks without human intervention. It can also scan the internet or execute commands on the user's machine to gather information, which distinguishes it from AI programs that can only rely on pre-existing datasets.
AutoGPT's underlying logic is not complicated: it first searches the web for the task at hand, then hands the results and the goal to GPT to get a serialized plan as JSON, then feeds the plan back to GPT piece by piece, and finally uses the shell to create Python files, json.load the plan, and execute it, in a repeated, recursive loop.
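That plan-execute-feed-back loop can be sketched in a few lines. This is a toy illustration with a stubbed model and executor, not AutoGPT's actual code:

```python
import json

def run_agent(goal: str, llm, execute, max_steps: int = 10) -> list:
    """Toy AutoGPT loop: ask the model for the next command as JSON,
    run it, feed the result back into the history, repeat until 'finish'."""
    history, results = [], []
    for _ in range(max_steps):
        command = json.loads(llm(goal, history))   # model proposes the next command
        if command["name"] == "finish":
            break
        result = execute(command)                  # e.g. browse a page, write a file
        history.append((command, result))
        results.append(result)
    return results

# Stubbed model that replays a fixed script, so no API key is needed:
script = iter([
    '{"name": "browse_website", "args": {"url": "https://v3u.cn/a_id_303"}}',
    '{"name": "write_to_file", "args": {"file": "goal.txt"}}',
    '{"name": "finish", "args": {}}',
])
llm = lambda goal, history: next(script)
execute = lambda cmd: "ran " + cmd["name"]
print(run_agent("summarize the article", llm, execute))  # ['ran browse_website', 'ran write_to_file']
```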
Undeniably, simple as the implementation logic is, this amounts to a kind of "self-evolution". As time goes on, AutoGPT should be able to handle ever more complex tasks.