Don't Want to Drive Yourself? Let Python Handle Autonomous Driving for You
01 Installing the Environment
gym is a toolkit for developing and comparing reinforcement learning algorithms; installing the gym library and its sub-scenarios in Python is fairly straightforward. Install gym:
pip install gym
Then install the autonomous driving module. Here we use highway-env, a package published by Edouard Leurent on GitHub (link: https://github.com/eleurent/highway-env):
pip install --user git+https://github.com/eleurent/highway-env
It contains six scenarios:
Highway — "highway-v0"
Merge — "merge-v0"
Roundabout — "roundabout-v0"
Parking — "parking-v0"
Intersection — "intersection-v0"
Racetrack — "racetrack-v0"
Detailed documentation is available here: https://highway-env.readthedo…
02 Configuring the Environment
Once installed, you can experiment with it in code (using the highway scenario as an example):
import gym
import highway_env
%matplotlib inline  # Jupyter magic; remove when running as a plain script

env = gym.make('highway-v0')
env.reset()
for _ in range(3):
    action = env.action_type.actions_indexes["IDLE"]
    obs, reward, done, info = env.step(action)
    env.render()
After running, the following scene is generated in the simulator:
The green vehicle is the ego vehicle. The env class has many configurable parameters; see the original documentation for details.
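For instance, you can print the environment's configuration dictionary to see what is tunable. A minimal sketch (env.config and env.configure are the mechanisms highway-env exposes for this; "vehicles_count" at the top level is the traffic-density setting, and attribute access is assumed to pass through the gym wrapper, just as env.configure does later in this article):
import pprint

env = gym.make("highway-v0")
pprint.pprint(env.config)              # dump every tunable parameter
env.configure({"vehicles_count": 50})  # override a single entry
env.reset()                            # re-create the scene so the change takes effect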
03 Training the Model
3.1 Data Processing
(1) state
The highway-env package does not define any sensors; all vehicle states (observations) are read from the underlying code, which saves a lot of up-front work. According to the documentation, states (observations) come in three output formats: Kinematics, Grayscale Image, and Occupancy grid.
Kinematics
Outputs a V×F matrix, where V is the number of vehicles to observe (including the ego vehicle itself) and F is the number of features recorded.
The generated data is normalized by default, with value ranges [100, 100, 20, 20] (cf. the features_range set below); you can also choose whether the attributes of vehicles other than the ego vehicle use the map's absolute coordinates or coordinates relative to the ego vehicle.
The feature parameters need to be set when defining the environment:
config = \
{
    "observation":
    {
        "type": "Kinematics",
        # observe 5 vehicles (including the ego vehicle)
        "vehicles_count": 5,
        # 7 features in total
        "features": ["presence", "x", "y", "vx", "vy", "cos_h", "sin_h"],
        "features_range":
        {
            "x": [-100, 100],
            "y": [-100, 100],
            "vx": [-20, 20],
            "vy": [-20, 20]
        },
        "absolute": False,
        "order": "sorted"
    },
    "simulation_frequency": 8,  # [Hz]
    "policy_frequency": 2,  # [Hz]
}
Grayscale Image
Generates a W×H grayscale image, where W is the image width and H is the image height.
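I don't use this format below, but for reference, a configuration along these lines appears in the highway-env documentation (a sketch; key names may vary between versions):
config = {
    "observation": {
        "type": "GrayscaleObservation",
        "observation_shape": (128, 64),       # W×H of the rendered image
        "stack_size": 4,                      # number of stacked frames
        "weights": [0.2989, 0.5870, 0.1140],  # RGB-to-gray conversion weights
        "scaling": 1.75,
    },
}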
Occupancy grid
Generates a W×H×F three-dimensional matrix, where a W×H grid describes the vehicles around the ego vehicle and each cell holds F features.
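Again for reference only, a sketch of an Occupancy grid configuration based on the highway-env documentation (key names may vary between versions):
config = {
    "observation": {
        "type": "OccupancyGrid",
        "features": ["presence", "vx", "vy"],
        "grid_size": [[-27.5, 27.5], [-27.5, 27.5]],  # area covered around the ego vehicle [m]
        "grid_step": [5, 5],                          # size of each cell [m]
        "absolute": False,
    },
}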
(2) action
Actions in the highway-env package come in two flavors, continuous and discrete. Continuous actions directly set the throttle and steering angle values, while the discrete type contains 5 meta actions:
{
    0: 'LANE_LEFT',
    1: 'IDLE',
    2: 'LANE_RIGHT',
    3: 'FASTER',
    4: 'SLOWER'
}
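The action space is selected through the same config mechanism; a minimal sketch based on the highway-env documentation ("DiscreteMetaAction" is the default for the highway scenario):
env.configure({"action": {"type": "DiscreteMetaAction"}})  # the 5 meta actions above
# or, for direct throttle / steering control:
env.configure({"action": {"type": "ContinuousAction"}})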
(3) reward
Every scenario in the highway-env package except parking uses the same reward function:
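The formula appeared as an image in the original post; per the highway-env documentation it has roughly this form, where v is the ego vehicle's current speed, collision is a 0/1 indicator, and a, b are the weights mentioned below:
$$R(s, a) = a \cdot \frac{v - v_{\min}}{v_{\max} - v_{\min}} - b \cdot \text{collision}$$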
This function can only be changed in the package's source code; at the outer level you can only adjust the weights. (The reward function for the parking scenario is in the original documentation; I'm too lazy to type out the formula...)
3.2 Building the Model
The structure of a DQN and how to build one were already covered in another of my articles, so I won't explain them in detail here. I use the first state representation, Kinematics, for this demonstration.
Since the amount of state data is small (5 vehicles × 7 features), we can forgo a CNN and simply reshape the two-dimensional [5, 7] data into [1, 35]; the model's input is then 35 values, and the output is the number of discrete actions, 5 in total.
import torch
import torch.nn as nn
from torch.autograd import Variable
import torch.nn.functional as F
import torch.optim as optim
import torchvision.transforms as T
from torch import FloatTensor, LongTensor, ByteTensor
from collections import namedtuple
import random
import numpy as np

Tensor = FloatTensor

EPSILON = 0    # epsilon used for the epsilon-greedy approach
GAMMA = 0.9    # discount factor
TARGET_NETWORK_REPLACE_FREQ = 40  # how frequently the target network updates
MEMORY_CAPACITY = 100
BATCH_SIZE = 80
LR = 0.01      # learning rate
class DQNNet(nn.Module):
    def __init__(self):
        super(DQNNet, self).__init__()
        self.linear1 = nn.Linear(35, 35)
        self.linear2 = nn.Linear(35, 5)

    def forward(self, s):
        s = torch.FloatTensor(s)
        s = s.view(s.size(0), 1, 35)  # flatten the [5, 7] observation into 35 inputs
        s = self.linear1(s)
        s = self.linear2(s)
        return s
class DQN(object):
    def __init__(self):
        self.net, self.target_net = DQNNet(), DQNNet()
        self.learn_step_counter = 0
        self.memory = []
        self.position = 0
        self.capacity = MEMORY_CAPACITY
        self.optimizer = torch.optim.Adam(self.net.parameters(), lr=LR)
        self.loss_func = nn.MSELoss()

    def choose_action(self, s, e):
        x = np.expand_dims(s, axis=0)
        if np.random.uniform() < 1 - e:  # greedy branch
            actions_value = self.net.forward(x)
            action = torch.max(actions_value, -1)[1].data.numpy()
            action = action.max()
        else:                            # random exploration branch
            action = np.random.randint(0, 5)
        return action

    def push_memory(self, s, a, r, s_):
        # cyclic replay buffer of Transition tuples
        if len(self.memory) < self.capacity:
            self.memory.append(None)
        self.memory[self.position] = Transition(torch.unsqueeze(torch.FloatTensor(s), 0),
                                                torch.unsqueeze(torch.FloatTensor(s_), 0),
                                                torch.from_numpy(np.array([a])),
                                                torch.from_numpy(np.array([r], dtype='float32')))
        self.position = (self.position + 1) % self.capacity

    def get_sample(self, batch_size):
        sample = random.sample(self.memory, batch_size)
        return sample

    def learn(self):
        # periodically sync the target network with the online network
        if self.learn_step_counter % TARGET_NETWORK_REPLACE_FREQ == 0:
            self.target_net.load_state_dict(self.net.state_dict())
        self.learn_step_counter += 1

        transitions = self.get_sample(BATCH_SIZE)
        batch = Transition(*zip(*transitions))
        b_s = Variable(torch.cat(batch.state))
        b_s_ = Variable(torch.cat(batch.next_state))
        b_a = Variable(torch.cat(batch.action))
        b_r = Variable(torch.cat(batch.reward))

        # Q(s, a) for the actions actually taken
        q_eval = self.net.forward(b_s).squeeze(1).gather(1, b_a.unsqueeze(1).to(torch.int64))
        # max_a' Q_target(s', a'), detached so no gradients flow into the target net
        q_next = self.target_net.forward(b_s_).detach()
        q_target = b_r + GAMMA * q_next.squeeze(1).max(1)[0].view(BATCH_SIZE, 1).t()
        loss = self.loss_func(q_eval, q_target.t())

        self.optimizer.zero_grad()  # reset the gradient to zero
        loss.backward()
        self.optimizer.step()       # execute back propagation for one step
        return loss

Transition = namedtuple('Transition', ('state', 'next_state', 'action', 'reward'))
3.3 Results
Once every part is implemented, they can be combined to train the model. The workflow is much like working with CARLA, so I won't go into detail.
Initialize the environment (just add in the DQN class from above):
import gym
import highway_env
from matplotlib import pyplot as plt
import numpy as np
import time

config = \
{
    "observation":
    {
        "type": "Kinematics",
        "vehicles_count": 5,
        "features": ["presence", "x", "y", "vx", "vy", "cos_h", "sin_h"],
        "features_range":
        {
            "x": [-100, 100],
            "y": [-100, 100],
            "vx": [-20, 20],
            "vy": [-20, 20]
        },
        "absolute": False,
        "order": "sorted"
    },
    "simulation_frequency": 8,  # [Hz]
    "policy_frequency": 2,  # [Hz]
}

env = gym.make("highway-v0")
env.configure(config)
Train the model:
dqn = DQN()
count = 0

reward = []
avg_reward = 0
all_reward = []

time_ = []
all_time = []

collision_his = []
all_collision = []

while True:
    done = False
    start_time = time.time()
    s = env.reset()
    while not done:
        e = np.exp(-count / 300)  # probability of taking a random action; decays as training progresses
        a = dqn.choose_action(s, e)
        s_, r, done, info = env.step(a)
        env.render()
        dqn.push_memory(s, a, r, s_)

        if (dqn.position != 0) & (dqn.position % 99 == 0):
            loss_ = dqn.learn()
            count += 1
            print('trained times:', count)
            if count % 40 == 0:
                avg_reward = np.mean(reward)
                avg_time = np.mean(time_)
                collision_rate = np.mean(collision_his)

                all_reward.append(avg_reward)
                all_time.append(avg_time)
                all_collision.append(collision_rate)

                plt.plot(all_reward)
                plt.show()
                plt.plot(all_time)
                plt.show()
                plt.plot(all_collision)
                plt.show()

                # reset the per-window statistics
                reward = []
                time_ = []
                collision_his = []

        s = s_
        reward.append(r)

    end_time = time.time()
    episode_time = end_time - start_time
    time_.append(episode_time)

    is_collision = 1 if info['crashed'] else 0
    collision_his.append(is_collision)
I added some plotting calls to the code so that key metrics can be tracked while it runs; an average is computed every 40 training steps.
Average collision rate:
Average epoch duration (s):
Average reward:
As the figures show, the average collision rate gradually falls as the number of training iterations grows, and each epoch lasts longer and longer (an epoch ends immediately when a collision occurs).