When we use ChatGPT to accomplish certain tasks, we often need multiple rounds of conversation. For example, to have ChatGPT analyze, translate, or summarize an online article or document and then save the summary as a local text file, we inevitably have to "negotiate" back and forth with ChatGPT. In fact, this negotiation can itself be automated: AutoGPT can decompose the task for us automatically. After all, whatever a program can do, a human should never do by hand.
The only thing we need to do is give AutoGPT a goal. AutoGPT will automatically break that goal down into a series of small subtasks and complete them one by one, simply and efficiently.
Configuring AutoGPT
First, make sure Python 3.10.9 is installed locally.
Then run a Git command to clone the project:
git clone https://github.com/Significant-Gravitas/Auto-GPT.git
Then enter the project directory:
cd Auto-GPT
Install the dependencies:
pip3 install -r requirements.txt
Once installation succeeds, copy the project's configuration template:
cp .env.template .env
Here the cp command copies the configuration template .env.template into a new configuration file named .env.
Then fill your OpenAI API key into the configuration file:
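AutoGPT loads these KEY=VALUE pairs from .env into its configuration at startup. As a rough illustration of what that loading step does, here is a minimal sketch; parse_env is a hypothetical helper written for this article, not AutoGPT's actual loader (the project uses the python-dotenv library):

```python
# Minimal sketch of .env parsing; parse_env is a hypothetical helper,
# not AutoGPT's real loader.
def parse_env(text: str) -> dict:
    """Parse KEY=VALUE lines, skipping blanks and '#' comments."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # ignore comments and empty lines
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env

sample = """
### OPENAI
OPENAI_API_KEY=my-openai-api-key
TEMPERATURE=0
USE_AZURE=False
"""
config = parse_env(sample)
print(config["OPENAI_API_KEY"])  # my-openai-api-key
```

Each value then ends up as a setting the program reads at runtime, which is why editing .env is all the configuration AutoGPT needs.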
### OPENAI
# OPENAI_API_KEY - OpenAI API Key (Example: my-openai-api-key)
# TEMPERATURE - Sets temperature in OpenAI (Default: 0)
# USE_AZURE - Use Azure OpenAI or not (Default: False)
OPENAI_API_KEY=your-api-key
TEMPERATURE=0
USE_AZURE=False
Besides the official OpenAI API key, AutoGPT also supports Microsoft Azure's endpoints.
If you want to use Azure's endpoints, set USE_AZURE to True in the configuration, then copy the azure.yaml.template template into a new azure.yaml configuration file.
Then fill your Azure service key into azure.yaml.
Because accessing the OpenAI API through Microsoft Azure involves an extremely cumbersome application process, we will stick with OpenAI's official API here.
Of course, if you would rather not install all those dependencies locally, you can also build Auto-GPT as a Docker container:
docker build -t autogpt .
docker run -it --env-file=./.env -v $PWD/auto_gpt_workspace:/app/auto_gpt_workspace autogpt
Here Docker automatically reads the project's Dockerfile to build the image, which is quite convenient.
At this point, Auto-GPT is fully configured.
Running Auto-GPT
In the project root directory, run:
python3 -m autogpt --debug
to start AutoGPT:
➜ Auto-GPT git:(master) python -m autogpt --debug
Warning: The file 'AutoGpt.json' does not exist. Local memory would not be saved to a file.
Debug Mode: ENABLED
Welcome to Auto-GPT!
Enter the name of your AI and its role below. Entering nothing will load defaults.
Name your AI: For example, 'Entrepreneur-GPT'
AI Name:
First, give your AutoGPT bot a name:
AI Name: v3u.cn
v3u.cn here! I am at your service.
Describe your AI's role: For example, 'an AI designed to autonomously develop and run businesses with the sole goal of increasing your net worth.'
v3u.cn is:
With a name in place, Auto-GPT is at your service.
First, set a goal for AutoGPT:
v3u.cn is: Analyze the contents of this article,the url is https://v3u.cn/a_id_303,and write the result to goal.txt
Here we ask AutoGPT to analyze and summarize the article at https://v3u.cn/a_id_303 and write the result to a local file named goal.txt.
The program returns:
Enter up to 5 goals for your AI:
For example: Increase net worth, Grow Twitter Account, Develop and manage multiple businesses autonomously'
Enter nothing to load defaults, enter nothing when finished.
Goal 1:
Using memory of type: LocalCache
AutoGPT tells us the goal can be split into at most five subtasks. We can enter them ourselves, or let the bot do the decomposition; simply press Enter and AutoGPT will decompose the task automatically.
The program then crawls the article's content and analyzes it with the gpt-3.5-turbo model:
Goal 1:
Using memory of type: LocalCache
Using Browser: chrome
Token limit: 4000
Memory Stats: (0, (0, 1536))
Token limit: 4000
Send Token Count: 936
Tokens remaining for response: 3064
------------ CONTEXT SENT TO AI ---------------
System: The current time and date is Mon Apr 17 20:29:37 2023
System: This reminds you of these events from your past:
User: Determine which next command to use, and respond using the format specified above:
----------- END OF CONTEXT ----------------
Creating chat completion with model gpt-3.5-turbo, temperature 0.0, max_tokens 3064
The JSON object is valid.
THOUGHTS: Let's start by browsing the article at https://v3u.cn/a_id_303 to get a better understanding of its contents.
REASONING: Browsing the article will allow us to analyze its contents and determine the appropriate next steps.
PLAN:
- Browse the article
- Analyze its contents
- Determine the appropriate next steps
CRITICISM: None
NEXT ACTION: COMMAND = browse_website ARGUMENTS = {'url': 'https://v3u.cn/a_id_303', 'question': 'analyze the contents of the article'}
Enter 'y' to authorise command, 'y -N' to run N continuous commands, 'n' to exit program, or enter feedback for ...
Input:y
-=-=-=-=-=-=-= COMMAND AUTHORISED BY USER -=-=-=-=-=-=-=
[WDM] - Downloading: 100%|████████████████████████████████████████████| 8.04M/8.04M [00:03<00:00, 2.42MB/s]
Text length: 6977 characters
Adding chunk 1 / 1 to memory
Summarizing chunk 1 / 1
Creating chat completion with model gpt-3.5-turbo, temperature 0.0, max_tokens 300
SYSTEM: Command browse_website returned: Error: This model's maximum context length is 4097 tokens. However, you requested 4339 tokens (4039 in the messages, 300 in the completion). Please reduce the length of the messages or completion.
Token limit: 4000
Memory Stats: (2, (2, 1536))
Token limit: 4000
Send Token Count: 1472
Tokens remaining for response: 2528
------------ CONTEXT SENT TO AI ---------------
System: The current time and date is Mon Apr 17 20:30:19 2023
System: This reminds you of these events from your past: ['Assistant Reply: {\n "thoughts": {\n "text": "Let\'s start by browsing the article at https://v3u.cn/a_id_303 to get a better understanding of its contents.",\n "reasoning": "Browsing the article will allow us to analyze its contents and determine the appropriate next steps.",\n "plan": "- Browse the article\\n- Analyze its contents\\n- Determine the appropriate next steps",\n "criticism": "None",\n "speak": "I suggest we start by browsing the article at the given URL to analyze its contents and determine the appropriate next steps."\n },\n "command": {\n "name": "browse_website",\n "args": {\n "url": "https://v3u.cn/a_id_303",\n "question": "analyze the contents of the article"\n }\n }\n} \nResult: Command browse_website returned: Error: This model\'s maximum context length is 4097 tokens. However, you requested 4339 tokens (4039 in the messages, 300 in the completion). Please reduce the length of the messages or completion. \nHuman Feedback: GENERATE NEXT COMMAND JSON ']
User: Determine which next command to use, and respond using the format specified above:
Assistant: {
  "thoughts": {
    "text": "Let's start by browsing the article at https://v3u.cn/a_id_303 to get a better understanding of its contents.",
    "reasoning": "Browsing the article will allow us to analyze its contents and determine the appropriate next steps.",
    "plan": "- Browse the article\n- Analyze its contents\n- Determine the appropriate next steps",
    "criticism": "None",
    "speak": "I suggest we start by browsing the article at the given URL to analyze its contents and determine the appropriate next steps."
  },
  "command": {
    "name": "browse_website",
    "args": {
      "url": "https://v3u.cn/a_id_303",
      "question": "analyze the contents of the article"
    }
  }
}
User: Determine which next command to use, and respond using the format specified above:
----------- END OF CONTEXT ----------------
Creating chat completion with model gpt-3.5-turbo, temperature 0.0, max_tokens 2528
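The context-length error in the log above occurs because the scraped page text plus the prompt exceeded the model's 4097-token window. AutoGPT works around this by splitting the text into chunks and summarizing each chunk separately, as the "Adding chunk 1 / 1 to memory" lines show. A rough sketch of that idea, using a naive word-count budget as a hypothetical stand-in for a real tokenizer:

```python
# Naive chunking sketch: split text so each chunk stays under a budget.
# Real token counting is approximated here by word count for illustration.
def split_into_chunks(text: str, max_words: int) -> list:
    words = text.split()
    return [
        " ".join(words[i:i + max_words])
        for i in range(0, len(words), max_words)
    ]

article = "word " * 100  # stand-in for scraped page text
chunks = split_into_chunks(article, max_words=30)
print(len(chunks))  # 4 chunks: 30 + 30 + 30 + 10 words
```

Each chunk is summarized on its own and the summaries are then combined, keeping every individual request under the context limit.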
Finally, it writes the analysis result to goal.txt:
This article mainly demonstrates that Apple Mac computers are capable of machine learning and deep learning work, backing this up by installing and running the deep learning framework TensorFlow, and it also provides an in-depth comparison and benchmark of TensorFlow's CPU and GPU training modes.
Step by step, smooth and seamless.
Conclusion
What sets AutoGPT apart from other AI programs is its specific focus on generating prompts and automatically executing multi-step tasks without human intervention. It can also scan the internet or execute commands on the user's computer to obtain information, which distinguishes it from AI programs that may rely solely on pre-existing datasets.
The underlying logic of AutoGPT is not complicated: it first retrieves information about the task through a search engine, then hands the results and the goal to GPT to produce a serialized plan as JSON, then feeds that plan back to GPT piece by piece, and finally uses the shell to create Python files, json.load the plan, and execute it, repeating the whole process recursively.
Undeniably, simple as the implementation logic is, this amounts to a kind of "self-evolution". I believe that as time goes on, AutoGPT will handle increasingly complex tasks ever better.
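That plan-then-execute loop can be sketched in a few lines of Python. The call_gpt function below is a hypothetical stand-in for the real chat-completion API call, and the canned plan it returns is invented for illustration; the point is the shape of the loop, not a working agent:

```python
import json

def call_gpt(prompt: str) -> str:
    """Hypothetical stand-in for an LLM chat-completion call.
    Returns a canned one-step-at-a-time JSON plan for illustration."""
    return json.dumps({"steps": [
        {"command": "browse_website", "args": {"url": "https://v3u.cn/a_id_303"}},
        {"command": "write_to_file", "args": {"file": "goal.txt"}},
    ]})

def run_agent(goal: str) -> list:
    # 1. Ask the model for a serialized JSON plan for the goal.
    plan = json.loads(call_gpt(f"Plan the steps to achieve: {goal}"))
    executed = []
    # 2. Walk the plan step by step; a real agent would dispatch each
    #    command (browse, write file, run shell, ...) and feed results back.
    for step in plan["steps"]:
        executed.append(step["command"])
    return executed

print(run_agent("summarize https://v3u.cn/a_id_303"))
# ['browse_website', 'write_to_file']
```

In the real project, each command's result is appended to memory and sent back to the model, which is what makes the process recursive rather than a single pass.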