About OpenStack: Setting up OpenStack, Docker, and Kubernetes

Original article: Setting up OpenStack, Docker, and Kubernetes. Part 1: OpenStack. Step 1: Create a new Virtual Machine named "StudentName-OS"; the VM should be placed in the 'Lab Final Exam' folder. Deploy the machine according to the configs below. OpenStack controller node: 2 dual-core CPUs, 4 GB RAM, 40 GB HDD, Network Adapter: bridged adapter, CentOS minimal OS (http://mirror.dal.nexril.net/centos/7.9.2009/isos/x86_64/) or CentOS 8 Stream. While spinning up the VM, choose "Minimal Install". During CentOS installation, set the root password to 'Dcne123'. Perform the entire OpenStack part of the final lab as the 'root' user. ...

August 24, 2023 · 10 min · jiezi

About OpenStack: Managing and using Ironic, OpenStack's mysterious bare-metal component

OpenStack is currently the most widely deployed open-source cloud infrastructure in the world, and Ironic is the OpenStack project that provides its bare-metal service. The OpenStack documentation highlights five main uses for bare metal: (1) high-performance computing; (2) compute tasks on hardware that cannot be virtualized; (3) database hosting (some databases run poorly inside a hypervisor); (4) single-tenant, dedicated-hardware, security, reliability, and similar requirements; (5) rapid deployment of cloud infrastructure. At bottom, workloads of the past few years, such as 5G in telecom, machine learning and AI, and even big data, have been pushing toward increasingly specialized equipment and a unified construction model for data centers and cloud environments. People want to automate and control physical hardware through OpenStack Ironic, cutting equipment idle time and the hours operators spend installing and deploying hardware.

Why call OpenStack Ironic a mysterious component? Reason one: Ironic uses the BMC (Baseboard Management Controller), an independent subsystem on the server, together with an additional hardware controller and PXE (Pre-boot Execution Environment) network boot, to clone a prepared operating-system disk image directly onto the physical server, skipping the Kickstart-based automated OS installation and saving time. Reason two: Ironic is invoked through Nova as a virtualization driver that mimics Nova's, so creating and managing physical-server resources follows the same flow as creating and deploying virtualized instances.

Lifting Ironic's veil: as an independent OpenStack module, Ironic interacts with keystone, nova, neutron, cinder, and other components just as virtual-machine deployment does. In both cases instances are created through the Nova API; only the underlying nova-scheduler and nova-compute drivers differ: virtual machines use virtualization technology underneath, while physical machines use PXE and IPMI. The architecture sequence diagram from the OpenStack documentation is as follows:

OpenStack Ironic sequence diagram (source: OpenStack documentation)

The sequence diagram shows that Ironic's flow is fairly complex, mostly to handle interactions with each component and error/exception handling. Its core logic simplifies to this: a user launches a bare-metal instance through the Nova API and Nova Scheduler; the request then goes through the Ironic API to the Ironic Conductor service; Ironic Conductor talks to Neutron (networking), Glance (images), Cinder (storage), and other components to determine the server's operating system, network plan, and so on, hands off to the matching driver, records the information in the Ironic DB, and finally completes the instance deployment, giving the user a successfully provisioned physical machine.

Deploying and using Ironic follows essentially the same pattern as Nova and other common components, in these steps: (1) prepare the environment: a lab setup needs at least two physical servers, one as the Ironic control node (the usual controller node) and one as the Ironic node, i.e. the managed bare-metal node; note that the node must have BMC and PXE enabled, any RAID must be created first, and DHCP must be available on the network; (2) configure the Ironic service: create the database, install and configure the ironic-api and ironic-conductor services, and configure Nova and Neutron; see the official OpenStack Ironic deployment guide for details. ironic-api and ironic-conductor can run on the same or different hosts, and new ironic-conductor hosts can be added as the number of bare-metal nodes grows, though they must stay on the same version as the existing conductors; roughly 100 bare-metal nodes per ironic-conductor is recommended to balance reliability and performance; (3) build or reuse images: deploying a bare-metal node needs two image sets, deploy images and user images. Bare Metal Provisioning uses the deploy images to prepare the bare-metal node (cleaning and so on) before the user images are installed; the user images end up on the node for the end user. The deploy images consist of a .kernel file and an .initramfs file and can be downloaded directly from the official OpenStack releases at https://tarballs.opendev.org/... (recommended for beginners). User images can be built with the disk-image-builder tool, which currently supports only centos/fedora/ubuntu/opensuse; to build images for systems such as UOS, you can also use virtualization tools like virsh: after creating the VM, its qcow2 disk file can serve as the user image; (4) set up the drivers: once all services are configured correctly, register the hardware with the Bare Metal service and confirm that the Compute service sees the available hardware. Once a bare-metal node is in the "available" provision state, the Compute service can see it.

OpenStack Ironic can hit various problems during deployment; after some study, the main failure causes fall into a few categories: 1) environment issues, e.g. it is advisable to deploy the Ironic and Nova services on separate nodes; 2) image issues, mainly self-built images failing with grub.efi not found; 3) configuration issues: the official Ironic docs are updated more slowly than releases, so some settings break, e.g. the error 'ServiceTokenAuthWrapper' object has no attribute '_discovery_cache' can be worked around by editing keystoneauth1/plugin.py. ...
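Since the excerpt stops short of concrete commands, here is a minimal node-enrollment sketch using the OpenStack bare-metal CLI (python-ironicclient), assuming the ipmi hardware type; the node name, BMC address and credentials, MAC address, and image UUIDs are all placeholders:

# Enroll a node with its BMC (IPMI) details and the deploy images
openstack baremetal node create --name bm-node-01 --driver ipmi \
  --driver-info ipmi_address=192.0.2.10 \
  --driver-info ipmi_username=admin \
  --driver-info ipmi_password=secret \
  --driver-info deploy_kernel=<deploy .kernel image UUID> \
  --driver-info deploy_ramdisk=<deploy .initramfs image UUID>
# Register the NIC the node PXE-boots from
openstack baremetal port create 52:54:00:aa:bb:cc --node bm-node-01
# Check the driver settings, then walk the node to the "available" provision state
openstack baremetal node validate bm-node-01
openstack baremetal node manage bm-node-01
openstack baremetal node provide bm-node-01

After "provide", cleaning runs with the deploy images, and nova-scheduler can place bare-metal flavors onto the node, matching the flow described above.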

January 31, 2023 · 1 min · jiezi

About OpenStack: OpenStack hands-on exercise questions

1. On the OpenStack private cloud platform, create an image named cirros from the cirros.qcow2 image using the command line.
[root@controller ~]# glance image-create --name "cirros" --disk-format qcow2 --container-format bare --progress opt/openstack/images/CentOS_7.5_x86_64_XD.qcow2
2. On the OpenStack private cloud platform, create a flavor named Fmin with ID 1, 1024 MB of RAM, a 10 GB disk, and 1 vCPU using the command line.
[root@controller ~]# nova flavor-create Fmin 1 1024 10 1
3. On the OpenStack private cloud platform, write a template server.yml that creates a flavor named "m1.flavor" with ID 1234, 1024 MB of RAM, a 20 GB disk, and 2 vCPUs.
[root@controller ~]# openstack orchestration template version list    # list the template versions available for orchestration
[root@controller ~]# vi server.yaml
server.yaml:
heat_template_version: 2015-04-30
description:
resources:
  flavor:
    type: OS::Nova::Flavor
    properties:
      name: "m1.flavor"
      flavorid: "1234"
      disk: 20
      ram: 1024
      vcpus: 2
outputs:
  flavor_info:
    description: Get the information of virtual machine type
    value: { get_attr: [ flavor, show ] }
[root@controller ~]# heat stack-create m1_flavor_stack -f server.yaml    # create the resources
4. On the OpenStack private cloud platform, use the command line to create an external network extnet with subnet extsubnet, floating-IP range 172.18.x.0/24 (where x is your seat number) and gateway 172.18.x.1, in VLAN mode; create an internal network intnet with subnet intsubnet, instance subnet 192.168.x.0/24 (where x is your seat number) and gateway 192.168.x.1; and connect the internal subnet intsubnet to the external network extnet. Create the external network ...
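The excerpt cuts off at the external-network task; a plausible completion under the modern openstack CLI, taking x = 1 and assuming a physical network named "provider" and VLAN segment 100 (illustrative values only):

[root@controller ~]# openstack network create --external --provider-network-type vlan --provider-physical-network provider --provider-segment 100 extnet
[root@controller ~]# openstack subnet create --network extnet --subnet-range 172.18.1.0/24 --gateway 172.18.1.1 --allocation-pool start=172.18.1.100,end=172.18.1.200 extsubnet
[root@controller ~]# openstack network create intnet
[root@controller ~]# openstack subnet create --network intnet --subnet-range 192.168.1.0/24 --gateway 192.168.1.1 intsubnet
# A router connects intsubnet to extnet
[root@controller ~]# openstack router create router1
[root@controller ~]# openstack router add subnet router1 intsubnet
[root@controller ~]# openstack router set --external-gateway extnet router1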

November 23, 2022 · 3 min · jiezi

About OpenStack: A quick entry-level installation of the openstack-cloudkitty component

Preface: What is CloudKitty? CloudKitty is the rating-as-a-service project for OpenStack and beyond. The project aims to be a universal solution for cloud chargeback and rating. Historically it could only run in an OpenStack context, but it can now also run in standalone mode.

CloudKitty enables metric-based rating: it polls endpoints to retrieve measures and metadata about specific metrics, applies rating rules to the collected data, and pushes the rated data to its storage backend.

CloudKitty is highly modular, which makes adding new features easy.

Architecture: CloudKitty can be split into four main parts:
Data retrieval (API)
Data collection (cloudkitty-processor)
Data rating
Data storage
These parts are handled by two processes, cloudkitty-api and cloudkitty-processor: data retrieval is handled by the cloudkitty-api process, and the remaining parts by cloudkitty-processor.

Here is an overview of the CloudKitty architecture:

Installation:
yum install openstack-cloudkitty-api openstack-cloudkitty-processor openstack-cloudkitty-ui

Configuration: edit /etc/cloudkitty/cloudkitty.conf to configure CloudKitty:
[DEFAULT]
verbose = True
log_dir = /var/log/cloudkitty
[oslo_messaging_rabbit]
rabbit_userid = openstack
rabbit_password = RABBIT_PASSWORD
rabbit_hosts = RABBIT_HOST
[auth]
username = cloudkitty
password = CK_PASSWORD
tenant = service
region = RegionOne
url = http://localhost:5000/v2.0
[keystone_authtoken]
username = cloudkitty
password = CK_PASSWORD
project_name = service
region = RegionOne
auth_url = http://localhost:5000/v2.0
auth_plugin = password
[database]
connection = mysql://cloudkitty:CK_DBPASS@localhost/cloudkitty
[keystone_fetcher]
username = admin
password = ADMIN_PASSWORD
tenant = admin
region = RegionOne
url = http://localhost:5000/v2.0
[ceilometer_collector]
username = cloudkitty
password = CK_PASSWORD
tenant = service
region = RegionOne
url = http://localhost:5000

Set up the database and storage backend ...
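The post truncates at the database and storage step; a minimal sketch of what typically follows, assuming the [database] credentials above, CloudKitty's own cloudkitty-dbsync and cloudkitty-storage-init tools, and RDO-style service names derived from the package names (CK_DBPASS is a placeholder):

mysql -u root -p <<EOF
CREATE DATABASE cloudkitty;
GRANT ALL PRIVILEGES ON cloudkitty.* TO 'cloudkitty'@'localhost' IDENTIFIED BY 'CK_DBPASS';
EOF
# create the schema, then initialize the rating storage backend
cloudkitty-dbsync upgrade
cloudkitty-storage-init
systemctl enable --now openstack-cloudkitty-api.service openstack-cloudkitty-processor.service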

May 2, 2022 · 4 min · jiezi

About OpenStack: A quick entry-level installation of OpenStack's Designate component

Preface: Designate is an open-source DNS-as-a-Service implementation and part of the ecosystem of OpenStack services used to run clouds. Designate is OpenStack's multi-tenant DNSaaS service. It provides a REST API with integrated Keystone authentication. It can be configured to generate records automatically from Nova and Neutron actions. Designate supports a variety of DNS servers, including Bind9 and PowerDNS 4.

Architecture: Designate is made up of several distinct services: API, Producer, Central, Worker, and Mini DNS. It uses an oslo.db-compatible database to store state and data, and an oslo.messaging-compatible message queue for communication between services. Multiple copies of all Designate services can run in tandem to enable highly available deployments, with the API processes usually sitting behind a load balancer.

Prerequisites: source the admin credentials to gain access with administrative privileges:

source admin-openrc
# create the designate user
openstack user create --domain demo --password 000000 designate
# add the admin role to the designate user
openstack role add --project service --user designate admin
# create the designate service entity
openstack service create --name designate --description "DNS" dns
# create the DNS service API endpoints
openstack endpoint create --region RegionOne dns public http://controller:9001/
openstack endpoint create --region RegionOne dns internal http://controller:9001/
openstack endpoint create --region RegionOne dns admin http://controller:9001/

Install and configure the components; install the packages ...
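The excerpt ends just before the package installation; by analogy with the other services in this series, a database-preparation sketch that would normally precede configuration (DESIGNATE_DBPASS is a placeholder):

mysql -u root -p <<EOF
CREATE DATABASE designate;
GRANT ALL PRIVILEGES ON designate.* TO 'designate'@'localhost' IDENTIFIED BY 'DESIGNATE_DBPASS';
GRANT ALL PRIVILEGES ON designate.* TO 'designate'@'%' IDENTIFIED BY 'DESIGNATE_DBPASS';
EOF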

April 29, 2022 · 3 min · jiezi

About OpenStack: OpenStack's 25th release, Yoga, is officially out: 12 years of development forged an authoritative cloud era

On March 30, the OpenStack community officially released its latest version, Yoga. This is the 25th release since 2010, when developers from the NASA Ames Research Center and Rackspace jointly created OpenStack, the open-source infrastructure-as-a-service (IaaS) cloud.

The new Yoga release supports advanced hardware technologies such as SmartNIC DPUs, and keeps the OpenStack core stable and reliable by optimizing integration with cloud-native software such as Kubernetes and Prometheus and by reducing technical debt.

OpenStack Yoga download: https://www.openstack.org/sof...

12 years of development forged an authoritative "cloud" era. Twelve years ago, the cloud people saw was a visible natural phenomenon; today, "cloud" is everything. OpenStack's growth over those 12 years is the model case, and it has ushered in an era of its own.

In 2010, NASA (the US National Aeronautics and Space Administration) and Rackspace jointly initiated and founded OpenStack, an open-source project licensed under the Apache License. From the release of Austin, the first open-source cloud platform version, to the sixth release, Folsom, in September 2012, continuous optimization and refinement toward maturity laid a solid foundation for the platform's steady growth.

In April 2013, OpenStack released its seventh version, Grizzly, adding nearly 230 new features across compute, storage, networking, and shared services, effectively reducing dependence on the central database. In October of the same year, OpenStack released its eighth version, Havana. ...

March 31, 2022 · 2 min · jiezi

About OpenStack: Learning OpenStack Cloud Computing Cookbook, 3rd Edition

Description: Learning OpenStack Cloud Computing Cookbook, 3rd Edition (xz). Content link: https://www.aliyundrive.com/s...

March 14, 2022 · 1 min · jiezi

About OpenStack: 99Cloud's Huang Shuquan: open-source genes and a technology focus, jointly shaping a new 5G edge-computing ecosystem

An expert and practitioner in cloud computing; an early contributor, advocate, and practitioner in the OpenStack community; a global top-ten contributor to multiple OpenStack projects; one of the main initiators of the StarlingX edge-computing project, a member of the StarlingX Technical Steering Committee, and the only Chinese engineer among the first TSC members; technical lead of 99Cloud's work initiating the KATA, Airship, and StarlingX incubation projects.

Interviewee: Huang Shuquan, Senior Technical Director, 99Cloud
Interview and editing: SegmentFault SiFou editorial team

In the 5G era, edge computing and cloud technologies are shaping the future of IoT. As the latest trend in cloud technology, edge computing is also bringing vendors entirely new business opportunities. At the recently concluded 2021 OpenInfra Days China, themed "The Next Decade of Open Source Infrastructure," we saw a presentation from 99Cloud, a leading open-source cloud service provider.

As China's leading provider of open cloud-edge infrastructure, 99Cloud has carried "open-source genes" since its founding in 2012 and was among the first domestic companies dedicated to OpenStack and related open-source services. Within a few years, 99Cloud grew into one of the top companies in the field.

Recently we had the privilege of interviewing Huang Shuquan, Senior Technical Director at 99Cloud, who described how 99Cloud seized the opportunity quickly and precisely, and its efforts to advance edge computing and land it in production.

The future form of open source and cloud computing: hybrid cloud. According to Huang, open-source cloud computing today mainly follows the model standardized by OpenStack. Over the years, 99Cloud has moved from a cloud-centric approach toward edge computing, which is also a new direction for cloud computing itself.

In Huang's view, computing power will be everywhere in the future, and cloud computing will evolve toward hybrid cloud. In this new form, open-source projects for managing basic cloud resources will keep multiplying, for example OpenStack, K8s, and Kata. More diverse computing models and different open-source software frameworks will therefore emerge to serve new forms of computing.

Following the 5G era: actively positioning in edge computing. As one of the first companies dedicated to OpenStack and related open-source services, 99Cloud saw early on that cloud computing was moving toward the edge. Huang said that as early as 2017, 99Cloud began planning in the edge-computing field, co-initiating the new StarlingX project with Intel and Wind River, mainly devoted to managing edge and compute resources. Recently, 99Cloud has also taken part in open-source edge-computing projects based on Edge Gallery.

Beyond contributing to open source and edge computing, 99Cloud is active in landing edge computing in production and has achieved a great deal in the smart-campus field, such as the Huzhou smart park and Hangzhou smart venues.

Huang believes edge computing depends heavily on 5G, and 99Cloud's strength lies precisely in combining infrastructure management with 5G networks. More and more emerging applications, including cloud gaming and AR/VR, will be deployed at the edge, and 99Cloud will invest in these new applications; cloud gaming is among the fields and businesses 99Cloud is actively pursuing.

As 5G networks keep evolving, edge computing is becoming the key to the cloud era. At this OpenInfra event, 99Cloud's technical experts presented memorable topics such as "edge native, edge computing," which made us curious about the key open-source projects 99Cloud will pursue next.

On the Skyline project: jointly shaping a new edge-computing ecosystem. On that question, Huang shared some news. As mentioned at this OpenInfra event, Skyline is 99Cloud's newest contribution to OpenStack: a modern management interface, an OpenStack dashboard, newly developed from 99Cloud's years of accumulated experience, which effectively improves the interface experience and operational efficiency and manages OpenStack resources more efficiently.

Having contributed Skyline to the community, 99Cloud will keep investing in and improving the project to attract more partners to join it and make the whole community thrive.

Overall, 99Cloud has been sharing its years of accumulation in the hybrid-cloud field with the industry and pushing the whole industry toward adoption. In edge cloud, 99Cloud also stays at the industry's forefront, working hand in hand with other companies with the open-source spirit to jointly shape a thriving new edge-computing ecosystem.

The meaning of contributing to open-source communities: technology changes the world. As everyone knows, as OpenStack has evolved and grown in recent years, 99Cloud, a Gold Member of the OpenStack Foundation, has ranked among the top community contributors. What does all this mean for a domestic open-source leader like 99Cloud, and what drives it?

To outsiders, OpenStack may no longer seem so "hot," but in Huang's view that is precisely evidence that OpenStack has become ever more stable. The more widely a technology is used, the more we take it for granted, whereas fresher technologies may draw momentary spikes of attention.

Huang believes that as OpenStack has stabilized in recent years, it has become the de facto standard of cloud computing. As a Gold Member of the OpenInfra Foundation, 99Cloud has long made the open-source spirit part of its own blood. As a company with technical ambition, 99Cloud hopes not only to lead the community's technical direction (for example by donating the Skyline project) but also to attract more companies and individuals and help the community ecosystem prosper.

Huang said 99Cloud's technical faith is the belief that technology can change the world and change society. A company must not only survive; on that basis it must also keep contributing to the community. So far, 99Cloud's community contributions have ranked among the best, showing recognition from users and the broad developer community; that has drawn many like-minded engineers to join and made 99Cloud the choice of more customers. And all of this is exactly "open source empowering the cloud-edge transformation." ...

November 25, 2021 · 1 min · jiezi

About OpenStack: Juneyao's Chen Qinba: amid the digitalization wave, OpenStack propels private cloud development

Chen Qinba holds a master's degree in computer technology from East China University of Science and Technology and joined Juneyao in 2002. He has years of hands-on experience in data centers, databases, information security, cloud computing, and open-source ecosystems. He now serves as a senior manager in Juneyao Group's IT department, responsible for the group's informatization, including office systems such as OA, ERP, email, and video conferencing, and in recent years mainly for private cloud construction.

Interviewee: Chen Qinba, Senior Manager, IT Department, Juneyao Group
Interview and editing: SegmentFault SiFou editorial team

As a large, pragmatic Chinese conglomerate, Juneyao has pursued practical innovation since its founding in 1991 and in recent years has become an industry leader across five business segments: air transport, financial services, modern consumption, education services, and technology.

Some may ask: how did such a rather traditional large enterprise grow step by step into today's leading service-industry player? None of it would have happened without a comprehensive digital transformation and the adoption of open-source cloud computing.

At the recent OpenInfra event themed "The Next Decade of Open Source Infrastructure," we had the privilege of interviewing Chen Qinba, senior manager of Juneyao Group's IT department, for a lively conversation about digital transformation and open-source cloud computing. After reading what follows, you may find the answer you are looking for.

On the trigger for the group's full digital transformation, Chen said that as early as two years ago, Chairman Wang Junjin of Juneyao Group raised the theme of "technology empowerment" to drive the whole group's businesses forward.

The acceleration of the digital transformation came at Juneyao's 2021 annual meeting, where Wang went further: "every business segment must deeply understand the concept of digitalization, embrace it, master new technologies, use the 'methodology' well, and let technology empower the business."

Under the broad digitalization wave, and especially for a company like Juneyao, whose businesses in aviation, education, and consumption lean traditional, digital construction still lagged the industry's best, so greater effort was needed to push it forward.

Difficulties in the group's digital transformation: for a conglomerate like Juneyao, the transformation touches subsidiaries across many industries, so plenty of problems arise.

Chen recalled that Juneyao entered the cloud computing field in early 2017; after detailed internal and external research and comprehensive evaluation, the "first Juneyao cloud" was born in the second half of that year. That cloud spread "from a single point to the whole, then gradually rolled out to the group's five segments."

At first, Juneyao piloted "private cloud" incubation with a cross-border e-commerce startup, turning it into a classic cloud-adoption case; after a month or two, the overall stability of Juneyao's private cloud platform proved it could stand the test.

It was then gradually rolled out to each segment for trial, and after roughly one to two years the effort paid off: Juneyao Group reached an impressive scale of nearly 500 cloud hosts, with more than ten subsidiaries on the cloud.

In other words, starting in 2017, as the "new business on cloud" strategy unfolded, the platform stabilized and matured; in the time since, subsidiaries have also gradually migrated legacy systems onto the cloud, eventually bringing the enterprise fully onto the cloud "from point to plane."

With a traditional architecture, the whole cycle from initiating a new business through implementation to launch was very long. Chen explained why: server hardware procurement alone took two or three months to deliver, and OS installation, deployment, and network commissioning also took a long time, so overall efficiency was low. After digitalization and cloud computing, such work can basically go live within a day or two, or within a week, which both raises efficiency and saves investment: a leap forward for enterprise IT. Cloud workloads adapt on demand, like "agile development," iterating rapidly and scaling resources elastically with the business; the cost of trial and error is low, and no resources are wasted.

Open-source cloud powering business growth: on this topic, Chen offered real, classic cases. He recalled that back in 2018, Juneyao's logistics company (a subsidiary of Juneyao Air) launched a new online Internet platform; the business was growing fast and the schedule was tight. Moving to the cloud reduced IT construction costs with a small initial investment; it sped up system iteration, enabling gray releases and automated deployment; it raised efficiency, letting teams focus on core business and development work; and it enabled innovation, quickly spinning up new business environments for validation and lowering R&D costs.

So after the cloud was introduced, the platform formed a one-stop system of rapid deployment, development, launch, and response. Of course, as the platform's products and services matured, a series of bugs was fixed and plenty of pitfalls stepped in. Eventually, with OpenStack technology and 99Cloud's support, optimizations were made in cloud application scenarios; that experience of mutual adjustment with the cloud vendor was also the start of a formal, deep engagement with open source.

In Chen's view, open source remains the big trend at this stage, and it needs a strong technical support team behind it. Many of Juneyao's business systems already on the cloud depend heavily on the stability and reliability of the cloud platform, especially where Ceph storage and the security of business data are concerned.

Especially now, when the whole world has begun to focus on data security, the topic has become a hot focus at both the national and the enterprise level. So Juneyao's development in the open-source field continues to rely on strong support from partner vendors, who must provide fast technical assurance services.

Data-security issues when an enterprise like Juneyao adopts open source: Chen gave his own insights from three angles.
First, with network isolation as the premise, extended capabilities must still be provided. Cloud hosts are "naturally isolated" components, yet still need resource interconnection with the business departments.
Second, with businesses interconnected, security protection must be considered. Third-party security vendors should be brought in, such as the Sangfor cloud-security resources Juneyao currently works with.
Third, on top of data snapshots, periodic off-site backups are needed. With professional backup tools, keep at least three full backups from different time windows, hardening the last line of defense.

Guaranteeing network isolation while achieving cross-business interconnection is a contradiction, and many problems often arise in real application scenarios.

Therefore, in the process of adopting open source, the business scenarios must be thought through in advance, followed by clear planning, and only then the next-step strategy. For example, Juneyao is currently testing container-platform applications, strengthening cooperation with 99Cloud, and planning deeper cooperation with OpenStack; beyond IaaS, there is demand for PaaS and SaaS services.

Bringing in such professional third-party security companies or cloud vendors to provide more and better professional services is the core of open-source adoption for large enterprises.

On cooperation with 99Cloud/OpenStack: regarding deeper cooperation with OpenStack, Chen said that in terms of usability and security, the security of the Juneyao cloud platform is now being further optimized, hardened, and assessed, and the relevant certifications have already been obtained. The platform's capabilities are also being expanded, for example by adding more high-performance compute-node resources, and the application scale will deepen further. Beyond cloud hosts, Juneyao may next cooperate with OpenStack on container platforms, CMP multi-cloud management platforms, and more.

According to Chen, "Juneyao Cloud Phase I," built on the OpenStack N release, went live on December 28, 2017, rolled out for trial to internal subsidiaries; in 2018 it was successfully shortlisted for the Shanghai Municipal Commission of Economy and Informatization's "Top 10 Cloud Computing Application Demonstrations," and in the same year passed Level 3 filing under the Ministry of Public Security's Multi-Level Protection Scheme for information systems.

To meet higher business requirements, in 2020 Juneyao entered a strategic partnership with 99Cloud and upgraded the "Juneyao cloud platform" to the more stable OpenStack P release; the same year, the Phase II project stood out among 400 participating enterprises, was recognized by the Shanghai Municipal Commission of Economy and Informatization as a 2020 "Enterprises on Cloud" demonstration application, and received special industry-development funds.

Juneyao Group's cloud-adoption work has won recognition from the relevant Shanghai authorities, which will also greatly help the next stage of "Juneyao Cloud."

Currently, more than half of the companies within Juneyao Group, about 64.8%, use Juneyao's own cloud; 17.6% use public cloud; and another 17.6% use hybrid cloud.

Juneyao Cloud is the group's critical information infrastructure, providing support and assurance for business growth and core data security, and carries major strategic significance. In 2021, Juneyao issued a directive to push "enterprise on cloud" across all subsidiaries and branches.

Conclusion: beyond the more traditional fields of aviation and education, in recent years Juneyao has also achieved practical results in emerging fields such as health. From the official 2020 listing of Juneyao Health (stock name: Juneyao Health) to this year's introduction of a brand-new health business, Juneyao Medical, Juneyao is using open source and the cloud to expand and break through in more fields.

Throughout this process, Chen Qinba, as Juneyao Group's informatization "vanguard," will keep upholding his strong support for and reliance on information integrity, actively embracing open source and cloud technologies, forging ahead with the enterprise's digital transformation, continuing to strive to be a benchmark among China's digital-transformation enterprises, actively responding to the nation's "technology-driven" strategies, and pushing steadily toward building a "Digital China"!

November 24, 2021 · 1 min · jiezi

About OpenStack: "OpenStack is dead"? Growth topped 66%. Get the latest OpenInfra news here

Some question whether OpenStack's era has passed, but the data tell us otherwise.

According to this year's OpenStack User Survey, the number of cores managed by OpenStack grew 66% over the past year. Yes! More than 25 million OpenStack cores are in production, with Workday, Yahoo, and Walmart each running over one million cores, and China Mobile running over six million.

Why such sustained demand for infrastructure driven by open-source solutions? Join OpenInfra Live: Keynotes, hosted by the OpenInfra Foundation (formerly the OpenStack Foundation), to hear visionary experts discuss why particular deployments have crossed the million-core threshold, plus exclusive announcements, live demos, OpenStack + Kubernetes, and hybrid cloud economics.

This will be the only chance this year for everyone to get together. Come engage with the global OpenInfra community!

The rebroadcast of the event in China starts at 09:00 Beijing time on Saturday, 2021/11/20. Scan the QR code or click the link to register and get the livestream address: https://pages.segmentfault.co...

Event highlights: Engage with the newest players in the open-source infrastructure field at OpenInfra Live: Keynotes! These two special episodes of OpenInfra Live are your best chance to:

Engage with leaders of global open-source communities such as OpenStack and Kubernetes, and hear how these projects support OpenInfra use cases like hybrid cloud
Dive into hybrid cloud economics and the role open-source technology plays
Celebrate the announcement of this year's Superuser Awards winners
Open-source community leaders from around the world, gathered in one place

Join us and register now. The rebroadcast of the event in China starts at 09:00 Beijing time on Saturday, 2021/11/20. Scan the QR code or click the link to register and get the livestream address: https://pages.segmentfault.co...

November 17, 2021 · 1 min · jiezi

About OpenStack: Pike bare-metal deployment

Variables

ctrl_ip="172.36.214.11" #controller_mgt_ip
#Note: The hostname cannot contain "_"
ctrl_hostname=`cat /etc/hostname`
all_pwd="123456"
#inspector_ip you should set on inspector_interface
inspector_ip="10.0.0.1"
inspector_intface="ens256"
inspector_ippool_start="10.0.0.100"
inspector_ippool_end="10.0.0.200"
source /root/admin-openrc
openstack network create Provision --provider-network-type vxlan --provider-segment 4001
#provision_ip you should set on inspector_interface's vlan subinterface, for example: ens256.1255
provision_vlan="4001"
provision_ip="20.0.0.1"
provision_uuid=`openstack network show Provision | grep id|grep -v pro|grep -v qos|tr -d " "|awk -F '|' '{print$3}'`
echo $provision_uuid
sleep 3

Set inspector interface

sed -i "/BOOTPROTO/cBOOTPROTO=none" /etc/sysconfig/network-scripts/ifcfg-$inspector_intface
sed -i "/ONBOOT/cONBOOT=yes" /etc/sysconfig/network-scripts/ifcfg-$inspector_intface
echo "IPADDR=$inspector_ip" >>/etc/sysconfig/network-scripts/ifcfg-$inspector_intface
echo "PREFIX=24" >>/etc/sysconfig/network-scripts/ifcfg-$inspector_intface

Set provision interface

echo "BOOTPROTO=none" >>/etc/sysconfig/network-scripts/ifcfg-$inspector_intface.$provision_vlan
echo "DEVICE=$inspector_intface.$provision_vlan" >>/etc/sysconfig/network-scripts/ifcfg-$inspector_intface.$provision_vlan
echo "ONBOOT=yes" >>/etc/sysconfig/network-scripts/ifcfg-$inspector_intface.$provision_vlan
echo "IPADDR=$provision_ip" >>/etc/sysconfig/network-scripts/ifcfg-$inspector_intface.$provision_vlan
echo "VLAN=yes" >>/etc/sysconfig/network-scripts/ifcfg-$inspector_intface.$provision_vlan
systemctl restart network
systemctl status network
yum install qemu-img iscsi-initiator-utils python2-ironicclient psmisc gdisk -y

Database ...
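The excerpt truncates at the Database step; following the pattern the Pike install guides use for other services, it presumably continues along these lines ($all_pwd as defined in the Variables block above; a sketch, not the post's actual continuation):

mysql -uroot -p$all_pwd <<EOF
CREATE DATABASE ironic CHARACTER SET utf8;
GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'localhost' IDENTIFIED BY '$all_pwd';
GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'%' IDENTIFIED BY '$all_pwd';
EOF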

September 29, 2021 · 6 min · jiezi

About OpenStack: OpenStack (Stein) converged controller and network node deployment

留神:运行shell,应用"source xx.sh or . xx.sh",不要应用"bash xx.sh" set environment variablesecho "#Add by Ly">>/etc/profileecho "export CONTROLLER_IP=172.36.214.11">>/etc/profileecho "export CTRL_HOST_NAME=stein-ctrl">>/etc/profileecho "export ALL_PASS=123456">>/etc/profilesource /etc/profileB_setup_base_env.shset -e -xyum install -y net-toolsyum install -y expectyum install -y tcpdumpyum install -y python-pipyum install -y treeecho "$CONTROLLER_IP $CTRL_HOST_NAME" >>/etc/hostssystemctl stop firewalldsystemctl disable firewalldsleep 2cp /etc/selinux/config /etc/selinux/config.baksed -i "/SELINUX=enforcing/cSELINUX=disabled" /etc/selinux/configsetenforce 0cp /etc/chrony.conf /etc/chrony.conf.baksed -i "/server 0.centos.pool.ntp.org iburst/cserver 10.165.7.181 iburst" /etc/chrony.confsed -i "/centos.pool.ntp.org/d" /etc/chrony.confsystemctl enable chronydsystemctl restart chronydsystemctl status chronydsleep 2chronyc sourcestimedatectl set-timezone Asia/Shanghaisleep 5#by your diyC_setup_base_soft_about_ctrl_stein.shset -e -xecho "The time now is : $CURDATE"sleep 3yum install centos-release-openstack-stein -yyum install python-openstackclient -yyum install openstack-selinux -yyum install -y mariadbyum install -y mariadb-serveryum install -y python2-PyMySQLtouch /etc/my.cnf.d/openstack.cnfecho "[mysqld]" >>/etc/my.cnf.d/openstack.cnfecho "bind-address = $CONTROLLER_IP" >>/etc/my.cnf.d/openstack.cnfecho "" >>/etc/my.cnf.d/openstack.cnfecho "default-storage-engine = innodb" >>/etc/my.cnf.d/openstack.cnfecho "innodb_file_per_table = on" >>/etc/my.cnf.d/openstack.cnfecho "max_connections = 4096" >>/etc/my.cnf.d/openstack.cnfecho "collation-server = utf8_general_ci" >>/etc/my.cnf.d/openstack.cnfecho "character-set-server = utf8" >>/etc/my.cnf.d/openstack.cnfsystemctl enable mariadb.servicesystemctl start mariadb.servicesystemctl status mariadb.servicesleep 2mysql_secure_installation <<EOFy$ALL_PASS$ALL_PASSyyyyEOF#Message queueyum install rabbitmq-server -ysystemctl enable rabbitmq-server.servicesystemctl start rabbitmq-server.servicesystemctl status rabbitmq-server.servicesleep 2rabbitmqctl add_user openstack $ALL_PASSrabbitmqctl set_permissions openstack ".*" ".*" ".*"#Memcachedyum install memcached python-memcached -ycp /etc/sysconfig/memcached /etc/sysconfig/memcached.baksed -i "/OPTIONS=\"-l 127.0.0.1,::1\"/cOPTIONS=\"-l 127.0.0.1,::1,$CONTROLLER_IP\"" /etc/sysconfig/memcachedsystemctl enable memcached.servicesystemctl start memcached.servicesystemctl status memcached.servicesleep 2#ETCDyum install etcd -ycp /etc/etcd/etcd.conf /etc/etcd/etcd.conf.baksed -i '/ETCD_DATA_DIR/cETCD_DATA_DIR="/var/lib/etcd/default.etcd"' /etc/etcd/etcd.confsed -i "/ETCD_LISTEN_PEER_URLS/cETCD_LISTEN_PEER_URLS=\"http://$CONTROLLER_IP:2380\"" /etc/etcd/etcd.confsed -i "/ETCD_LISTEN_CLIENT_URLS/cETCD_LISTEN_CLIENT_URLS=\"http://$CONTROLLER_IP:2379\"" /etc/etcd/etcd.confsed -i "/ETCD_NAME/cETCD_NAME=\"$CON_HOST_NAME\"" /etc/etcd/etcd.confsed -i "/ETCD_INITIAL_ADVERTISE_PEER_URLS/cETCD_INITIAL_ADVERTISE_PEER_URLS=\"http://$CONTROLLER_IP:2380\"" /etc/etcd/etcd.confsed -i "/ETCD_ADVERTISE_CLIENT_URLS/cETCD_ADVERTISE_CLIENT_URLS=\"http://$CONTROLLER_IP:2379\"" /etc/etcd/etcd.confsed -i "/ETCD_INITIAL_CLUSTER=/cETCD_INITIAL_CLUSTER=\"$CON_HOST_NAME=http://$CONTROLLER_IP:2380\"" /etc/etcd/etcd.confsed -i '/ETCD_INITIAL_CLUSTER_TOKEN/cETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-01"' /etc/etcd/etcd.confsed -i '/ETCD_INITIAL_CLUSTER_STATE/cETCD_INITIAL_CLUSTER_STATE="new"' /etc/etcd/etcd.confsystemctl enable etcdsystemctl start 
etcdsystemctl status etcdsleep 2D_setup_keystone_about_ctrl_stein.shset -e -xyum install openstack-keystone -yyum install httpd -yyum install mod_wsgi -ymysql -N -uroot -p$ALL_PASS<<EOFDROP DATABASE if exists keystone;CREATE DATABASE if not exists keystone;GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY '$ALL_PASS';GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY '$ALL_PASS';EOF#yum install openstack-keystone -y#yum install httpd -y#yum install mod_wsgi -ycp /etc/keystone/keystone.conf /etc/keystone/keystone.conf.bak#[database]sed -i "/#connection = <None>/aconnection = mysql+pymysql://keystone:$ALL_PASS@$CONTROLLER_IP/keystone" /etc/keystone/keystone.conf#[token]sed -i '/provider =/aprovider = fernet' /etc/keystone/keystone.conf#Populate the Identity service databasesu -s /bin/sh -c "keystone-manage db_sync" keystonekeystone-manage fernet_setup --keystone-user keystone --keystone-group keystonekeystone-manage credential_setup --keystone-user keystone --keystone-group keystone#keystone-manage bootstrap --bootstrap-password $ALL_PASS \ --bootstrap-admin-url http://$CONTROLLER_IP:5000/v3/ \ --bootstrap-internal-url http://$CONTROLLER_IP:5000/v3/ \ --bootstrap-public-url http://$CONTROLLER_IP:5000/v3/ \ --bootstrap-region-id RegionOne#ServerNamesed -i "/#ServerName/aServerName $CONTROLLER_IP" /etc/httpd/conf/httpd.conf#Creating a soft linkln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/systemctl enable httpd.servicesystemctl start httpd.servicesystemctl status httpd.service#Configure the administrative accountexport OS_USERNAME=adminexport OS_PASSWORD=$ALL_PASSexport OS_PROJECT_NAME=adminexport OS_USER_DOMAIN_NAME=Defaultexport OS_PROJECT_DOMAIN_NAME=Defaultexport OS_AUTH_URL=http://$CONTROLLER_IP:5000/v3export OS_IDENTITY_API_VERSION=3#Create a domain, projects, users, and rolesopenstack domain create --description "An Example Domain" exampleopenstack project create --domain default --description "Service Project" serviceopenstack project create --domain default --description "Demo Project" myproject/usr/bin/expect << EOFset timeout 15spawn openstack user create --domain default --password-prompt myuserexpect "User*"send "$ALL_PASS\r"expect "Repeat *"send "$ALL_PASS\r"expect eofEOFopenstack role create myroleopenstack role add --project myproject --user myuser myroleunset OS_AUTH_URL OS_PASSWORD/usr/bin/expect << EOFset timeout 15spawn openstack --os-auth-url http://$CONTROLLER_IP:5000/v3 \ --os-project-domain-name Default --os-user-domain-name Default \ --os-project-name admin --os-username admin token issueexpect "*Password*"send "$ALL_PASS\r"expect eofEOF/usr/bin/expect << EOFset timeout 15spawn openstack --os-auth-url http://controller:5000/v3 \ --os-project-domain-name Default --os-user-domain-name Default \ --os-project-name myproject --os-username myuser token issueexpect "*Password*"send "$ALL_PASS\r"expect eofEOF#Creating admin-openrctouch /root/admin-openrcecho "export OS_PROJECT_DOMAIN_NAME=Default" >/root/admin-openrcecho "export OS_USER_DOMAIN_NAME=Default" >>/root/admin-openrcecho "export OS_PROJECT_NAME=admin" >>/root/admin-openrcecho "export OS_USERNAME=admin" >>/root/admin-openrcecho "export OS_PASSWORD=$ALL_PASS" >>/root/admin-openrcecho "export OS_AUTH_URL=http://$CONTROLLER_IP:5000/v3" >>/root/admin-openrcecho "export OS_IDENTITY_API_VERSION=3" >>/root/admin-openrcecho "export OS_IMAGE_API_VERSION=2" >>/root/admin-openrc#Creating demo-openrctouch /root/demo-openrcecho "export OS_PROJECT_DOMAIN_NAME=Default" 
>/root/demo-openrcecho "export OS_USER_DOMAIN_NAME=Default" >>/root/demo-openrcecho "export OS_PROJECT_NAME=myproject" >>/root/demo-openrcecho "export OS_USERNAME=myuser" >>/root/demo-openrcecho "export OS_PASSWORD=$ALL_PASS" >>/root/demo-openrcecho "export OS_AUTH_URL=http://$CONTROLLER_IP:5000/v3" >>/root/demo-openrcecho "export OS_IDENTITY_API_VERSION=3" >>/root/demo-openrcecho "export OS_IMAGE_API_VERSION=2" >>/root/demo-openrcsource /root/admin-openrcopenstack token issuesleep 2E_setup_image_about_ctrl_stein.shset -e -x#Database operations: glancemysql -N -uroot -p$ALL_PASS<<EOFDROP DATABASE if exists glance;CREATE DATABASE if not exists glance;GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY '$ALL_PASS';GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY '$ALL_PASS';EOFsource /root/admin-openrc/usr/bin/expect << EOFset timeout 15spawn openstack user create --domain default --password-prompt glanceexpect "User*"send "$ALL_PASS\r"expect "Repeat*"send "$ALL_PASS\r"expect eofEOFopenstack role add --project service --user glance adminopenstack service create --name glance --description "OpenStack Image" imageopenstack endpoint create --region RegionOne image public http://$CONTROLLER_IP:9292openstack endpoint create --region RegionOne image internal http://$CONTROLLER_IP:9292openstack endpoint create --region RegionOne image admin http://$CONTROLLER_IP:9292yum install openstack-glance -ycp /etc/glance/glance-api.conf /etc/glance/glance-api.conf.bak#[database]sed -i "/#connection =/aconnection = mysql+pymysql://glance:$ALL_PASS@$CONTROLLER_IP/glance" /etc/glance/glance-api.conf#[keystone_authtoken]sed -i "/\[keystone_authtoken]$/apassword = $ALL_PASS" /etc/glance/glance-api.confsed -i "/\[keystone_authtoken]$/ausername = glance" /etc/glance/glance-api.confsed -i "/\[keystone_authtoken]$/aproject_name = service" /etc/glance/glance-api.confsed -i "/\[keystone_authtoken]$/auser_domain_name = Default" /etc/glance/glance-api.confsed -i "/\[keystone_authtoken]$/aproject_domain_name = Default" /etc/glance/glance-api.confsed -i "/\[keystone_authtoken]$/aauth_type = password" /etc/glance/glance-api.confsed -i "/\[keystone_authtoken]$/amemcached_servers = $CONTROLLER_IP:11211" /etc/glance/glance-api.confsed -i "/\[keystone_authtoken]$/aauth_url = http://$CONTROLLER_IP:5000" /etc/glance/glance-api.confsed -i "/\[keystone_authtoken]$/awww_authenticate_uri = http://$CONTROLLER_IP:5000" /etc/glance/glance-api.conf#[paste_deploy]sed -i "/flavor = keystone/cflavor = keystone" /etc/glance/glance-api.conf#[glance_store]sed -i "/\[glance_store]$/afilesystem_store_datadir = /var/lib/glance/images/" /etc/glance/glance-api.confsed -i "/\[glance_store]$/adefault_store = file" /etc/glance/glance-api.confsed -i "/\[glance_store]$/astores = file,http" /etc/glance/glance-api.conf#备份glance-registry.confcp /etc/glance/glance-registry.conf /etc/glance/glance-registry.conf.bak#[database]sed -i "/#connection = <None>/aconnection = mysql+pymysql://glance:$ALL_PASS@$CONTROLLER_IP/glance" /etc/glance/glance-registry.conf#[keystone_authtoken]sed -i "/\[keystone_authtoken]$/apassword = $ALL_PASS" /etc/glance/glance-registry.confsed -i "/\[keystone_authtoken]$/ausername = glance" /etc/glance/glance-registry.confsed -i "/\[keystone_authtoken]$/aproject_name = service" /etc/glance/glance-registry.confsed -i "/\[keystone_authtoken]$/auser_domain_name = Default" /etc/glance/glance-registry.confsed -i "/\[keystone_authtoken]$/aproject_domain_name = Default" /etc/glance/glance-registry.confsed -i 
"/\[keystone_authtoken]$/aauth_type = password" /etc/glance/glance-registry.confsed -i "/\[keystone_authtoken]$/amemcached_servers = $CONTROLLER_IP:11211" /etc/glance/glance-registry.confsed -i "/\[keystone_authtoken]$/aauth_url = http://$CONTROLLER_IP:5000" /etc/glance/glance-registry.confsed -i "/\[keystone_authtoken]$/awww_authenticate_uri = http://$CONTROLLER_IP:5000" /etc/glance/glance-registry.conf#[paste_deploy]sed -i "/flavor = keystone/cflavor = keystone" /etc/glance/glance-registry.conf#Populate the Image service databasesu -s /bin/sh -c "glance-manage db_sync" glancesystemctl enable openstack-glance-api.service openstack-glance-registry.servicesystemctl start openstack-glance-api.service openstack-glance-registry.serviceF_setup_placement_about_ctrl_stein.shset -x -e#mysql -N -uroot -p$ALL_PASS<<EOFCREATE DATABASE placement;GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' IDENTIFIED BY '$ALL_PASS';GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' IDENTIFIED BY '$ALL_PASS';EOFsource /root/admin-openrc/usr/bin/expect << EOFset timeout 15spawn openstack user create --domain default --password-prompt placementexpect "User*"send "$ALL_PASS\r"expect "Repeat*"send "$ALL_PASS\r"expect eofEOFopenstack role add --project service --user placement adminopenstack service create --name placement --description "Placement API" placementopenstack endpoint create --region RegionOne placement public http://$CONTROLLER_IP:8778openstack endpoint create --region RegionOne placement internal http://$CONTROLLER_IP:8778openstack endpoint create --region RegionOne placement admin http://$CONTROLLER_IP:8778yum install openstack-placement-api -y#cp /etc/placement/placement.conf /etc/placement/placement.conf.bak#[placement_database]sed -i "/\[placement_database]$/aconnection = mysql+pymysql://placement:$ALL_PASS@$CONTROLLER_IP/placement" /etc/placement/placement.conf#[api]sed -i "/\[api]$/aauth_strategy = keystone" /etc/placement/placement.conf#[keystone_authtoken]sed -i "/\[keystone_authtoken]$/apassword = $ALL_PASS" /etc/placement/placement.confsed -i "/\[keystone_authtoken]$/ausername = placement" /etc/placement/placement.confsed -i "/\[keystone_authtoken]$/aproject_name = service" /etc/placement/placement.confsed -i "/\[keystone_authtoken]$/auser_domain_name = Default" /etc/placement/placement.confsed -i "/\[keystone_authtoken]$/aproject_domain_name = Default" /etc/placement/placement.confsed -i "/\[keystone_authtoken]$/aauth_type = password" /etc/placement/placement.confsed -i "/\[keystone_authtoken]$/amemcached_servers = $CONTROLLER_IP:11211" /etc/placement/placement.confsed -i "/\[keystone_authtoken]$/aauth_url = http://$CONTROLLER_IP:5000/v3" /etc/placement/placement.confsu -s /bin/sh -c "placement-manage db sync" placementsystemctl restart httpd#verify installationsource /root/admin-openrcplacement-status upgrade check#install osc-placementmkdir /root/.piptouch /root/.pip/pip.confecho "[global]" >/root/.pip/pip.confecho "index-url=http://10.153.3.130/pypi/web/simple" >>/root/.pip/pip.confecho "" >>/root/.pip/pip.confecho "[install]" >>/root/.pip/pip.confecho "trusted-host=10.153.3.130" >>/root/.pip/pip.confpip install osc-placementsed -i "/<\/VirtualHost>/i\ \ <Directory \/usr\/bin>" /etc/httpd/conf.d/00-placement-api.confsed -i "/<\/VirtualHost>/i\ \ \ \ <IfVersion >= 2.4>" /etc/httpd/conf.d/00-placement-api.confsed -i "/<\/VirtualHost>/i\ \ \ \ \ \ \ \ Require all granted" /etc/httpd/conf.d/00-placement-api.confsed -i "/<\/VirtualHost>/i\ \ \ \ <\/IfVersion>" 
/etc/httpd/conf.d/00-placement-api.confsed -i "/<\/VirtualHost>/i\ \ \ \ <IfVersion < 2.4>" /etc/httpd/conf.d/00-placement-api.confsed -i "/<\/VirtualHost>/i\ \ \ \ \ \ \ \ Order allow,deny" /etc/httpd/conf.d/00-placement-api.confsed -i "/<\/VirtualHost>/i\ \ \ \ \ \ \ \ Allow from all" /etc/httpd/conf.d/00-placement-api.confsed -i "/<\/VirtualHost>/i\ \ \ \ <\/IfVersion>" /etc/httpd/conf.d/00-placement-api.confsed -i "/<\/VirtualHost>/i\ \ <\/Directory>" /etc/httpd/conf.d/00-placement-api.confsystemctl restart httpdsystemctl status httpdopenstack --os-placement-api-version 1.2 resource class list --sort-column nameopenstack --os-placement-api-version 1.6 trait list --sort-column nameG_setup_nova_about_ctrl_stein.shset -x -e#mysql -N -uroot -p$ALL_PASS<<EOFDROP DATABASE if exists nova_api;CREATE DATABASE if not exists nova_api;DROP DATABASE if exists nova;CREATE DATABASE if not exists nova;DROP DATABASE if exists nova_cell0;CREATE DATABASE if not exists nova_cell0;GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY '$ALL_PASS';GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY '$ALL_PASS';GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY '$ALL_PASS';GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY '$ALL_PASS';GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY '$ALL_PASS';GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY '$ALL_PASS';EOFsource /root/admin-openrc/usr/bin/expect << EOFset timeout 15spawn openstack user create --domain default --password-prompt novaexpect "User*"send "$ALL_PASS\r"expect "Repeat*"send "$ALL_PASS\r"expect eofEOFopenstack role add --project service --user nova adminopenstack service create --name nova --description "OpenStack Compute" computeopenstack endpoint create --region RegionOne compute public http://$CONTROLLER_IP:8774/v2.1openstack endpoint create --region RegionOne compute internal http://$CONTROLLER_IP:8774/v2.1openstack endpoint create --region RegionOne compute admin http://$CONTROLLER_IP:8774/v2.1yum install -y openstack-nova-apiyum install -y openstack-nova-conductoryum install -y openstack-nova-novncproxyyum install -y openstack-nova-schedulercp /etc/nova/nova.conf /etc/nova/nova.conf.bak#[DEFAULT]sed -i "/\[DEFAULT]$/afirewall_driver = nova.virt.firewall.NoopFirewallDriver" /etc/nova/nova.confsed -i "/\[DEFAULT]$/ause_neutron = True" /etc/nova/nova.confsed -i "/\[DEFAULT]$/amy_ip = $CONTROLLER_IP" /etc/nova/nova.confsed -i "/\[DEFAULT]$/atransport_url = rabbit://openstack:$ALL_PASS@$CONTROLLER_IP" /etc/nova/nova.confsed -i "/\[DEFAULT]$/aenabled_apis = osapi_compute,metadata" /etc/nova/nova.conf#[api_database]sed -i "/\[api_database]$/aconnection = mysql+pymysql://nova:$ALL_PASS@$CONTROLLER_IP/nova_api" /etc/nova/nova.conf#[database]sed -i "/\[database]$/aconnection = mysql+pymysql://nova:$ALL_PASS@$CONTROLLER_IP/nova" /etc/nova/nova.conf#[api]sed -i "/\[api]$/aauth_strategy = keystone" /etc/nova/nova.conf#[keystone_authtoken]sed -i "/\[keystone_authtoken]$/apassword = $ALL_PASS" /etc/nova/nova.confsed -i "/\[keystone_authtoken]$/ausername = nova" /etc/nova/nova.confsed -i "/\[keystone_authtoken]$/aproject_name = service" /etc/nova/nova.confsed -i "/\[keystone_authtoken]$/auser_domain_name = Default" /etc/nova/nova.confsed -i "/\[keystone_authtoken]$/aproject_domain_name = Default" /etc/nova/nova.confsed -i "/\[keystone_authtoken]$/aauth_type = password" /etc/nova/nova.confsed -i "/\[keystone_authtoken]$/amemcached_servers = $CONTROLLER_IP:11211" 
/etc/nova/nova.confsed -i "/\[keystone_authtoken]$/aauth_url = http://$CONTROLLER_IP:5000" /etc/nova/nova.conf#[vnc]sed -i "/\[vnc]$/aserver_proxyclient_address = \$my_ip" /etc/nova/nova.confsed -i "/\[vnc]$/aserver_listen = \$my_ip" /etc/nova/nova.confsed -i "/\[vnc]$/aenabled = true" /etc/nova/nova.conf#[glance]sed -i "/\[glance]$/aapi_servers = http://$CONTROLLER_IP:9292" /etc/nova/nova.conf#[oslo_concurrency]sed -i "/\[oslo_concurrency]$/alock_path = /var/lib/nova/tmp" /etc/nova/nova.conf#[placement]sed -i "/\[placement]$/apassword = $ALL_PASS" /etc/nova/nova.confsed -i "/\[placement]$/ausername = placement" /etc/nova/nova.confsed -i "/\[placement]$/aauth_url = http://$CONTROLLER_IP:5000/v3" /etc/nova/nova.confsed -i "/\[placement]$/auser_domain_name = Default" /etc/nova/nova.confsed -i "/\[placement]$/aauth_type = password" /etc/nova/nova.confsed -i "/\[placement]$/aproject_name = service" /etc/nova/nova.confsed -i "/\[placement]$/aproject_domain_name = Default" /etc/nova/nova.confsed -i "/\[placement]$/aos_region_name = RegionOne" /etc/nova/nova.confsu -s /bin/sh -c "nova-manage api_db sync" novasu -s /bin/sh -c "nova-manage cell_v2 map_cell0" novasu -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" novasu -s /bin/sh -c "nova-manage db sync" novasu -s /bin/sh -c "nova-manage cell_v2 list_cells" novasystemctl enable openstack-nova-api.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.servicesystemctl start openstack-nova-api.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.servicesleep 3#Verify operationsource /root/admin-openrcopenstack compute service listsleep 1openstack catalog listsleep 1openstack image listsleep 1nova-status upgrade checksleep 4H_setup_neutron_about_ctrl_stein.shset -e -x#mysql -N -uroot -p$ALL_PASS<<EOFDROP DATABASE if exists neutron;CREATE DATABASE if not exists neutron;GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY '$ALL_PASS';GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY '$ALL_PASS';EOFsource /root/admin-openrc/usr/bin/expect << EOFspawn openstack user create --domain default --password-prompt neutronexpect "User*"send "$ALL_PASS\r"expect "Repeat*"send "$ALL_PASS\r"expect eofEOFopenstack role add --project service --user neutron adminopenstack service create --name neutron --description "OpenStack Networking" networkopenstack endpoint create --region RegionOne network public http://$CONTROLLER_IP:9696openstack endpoint create --region RegionOne network internal http://$CONTROLLER_IP:9696openstack endpoint create --region RegionOne network admin http://$CONTROLLER_IP:9696yum install -y openstack-neutronyum install -y openstack-neutron-ml2yum install -y openstack-neutron-openvswitchyum install -y ebtables#/etc/neutron/neutron.confcp /etc/neutron/neutron.conf /etc/neutron/neutron.conf.bak#[database]sed -i "/\[database]$/aconnection = mysql+pymysql://neutron:$ALL_PASS@$CONTROLLER_IP/neutron" /etc/neutron/neutron.conf#[DEFAULT]sed -i "/\[DEFAULT]$/anotify_nova_on_port_data_changes = true" /etc/neutron/neutron.confsed -i "/\[DEFAULT]$/anotify_nova_on_port_status_changes = true" /etc/neutron/neutron.confsed -i "/\[DEFAULT]$/aauth_strategy = keystone" /etc/neutron/neutron.confsed -i "/\[DEFAULT]$/atransport_url = rabbit://openstack:$ALL_PASS@$CONTROLLER_IP" /etc/neutron/neutron.confsed -i "/\[DEFAULT]$/aallow_overlapping_ips = true" /etc/neutron/neutron.confsed -i "/\[DEFAULT]$/aservice_plugins = 
router" /etc/neutron/neutron.confsed -i "/\[DEFAULT]$/acore_plugin = ml2" /etc/neutron/neutron.conf#[keystone_authtoken]sed -i "/\[keystone_authtoken]$/apassword = $ALL_PASS" /etc/neutron/neutron.confsed -i "/\[keystone_authtoken]$/ausername = neutron" /etc/neutron/neutron.confsed -i "/\[keystone_authtoken]$/aproject_name = service" /etc/neutron/neutron.confsed -i "/\[keystone_authtoken]$/auser_domain_name = Default" /etc/neutron/neutron.confsed -i "/\[keystone_authtoken]$/aproject_domain_name = Default" /etc/neutron/neutron.confsed -i "/\[keystone_authtoken]$/aauth_type = password" /etc/neutron/neutron.confsed -i "/\[keystone_authtoken]$/amemcached_servers = $CONTROLLER_IP:11211" /etc/neutron/neutron.confsed -i "/\[keystone_authtoken]$/aauth_url = http://$CONTROLLER_IP:5000" /etc/neutron/neutron.confsed -i "/\[keystone_authtoken]$/awww_authenticate_uri = http://$CONTROLLER_IP:5000" /etc/neutron/neutron.conf#[nova]sed -i "/\[nova]$/apassword = $ALL_PASS" /etc/neutron/neutron.confsed -i "/\[nova]$/ausername = nova" /etc/neutron/neutron.confsed -i "/\[nova]$/aproject_name = service" /etc/neutron/neutron.confsed -i "/\[nova]$/aregion_name = RegionOne" /etc/neutron/neutron.confsed -i "/\[nova]$/auser_domain_name = Default" /etc/neutron/neutron.confsed -i "/\[nova]$/aproject_domain_name = Default" /etc/neutron/neutron.confsed -i "/\[nova]$/aauth_type = password" /etc/neutron/neutron.confsed -i "/\[nova]$/aauth_url = http://$CONTROLLER_IP:5000" /etc/neutron/neutron.conf#[oslo_concurrency]sed -i "/\[oslo_concurrency]$/alock_path = /var/lib/neutron/tmp" /etc/neutron/neutron.confcp /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugins/ml2/ml2_conf.ini.bak#[ml2]sed -i "/\[ml2]$/aextension_drivers = port_security" /etc/neutron/plugins/ml2/ml2_conf.inised -i "/\[ml2]$/amechanism_drivers = openvswitch,l2population" /etc/neutron/plugins/ml2/ml2_conf.inised -i "/\[ml2]$/atenant_network_types = vxlan,vlan" /etc/neutron/plugins/ml2/ml2_conf.inised -i "/\[ml2]$/atype_drivers = flat,vlan,vxlan" /etc/neutron/plugins/ml2/ml2_conf.ini#[ml2_type_flat]sed -i "/\[ml2_type_flat]$/aflat_networks = provider" /etc/neutron/plugins/ml2/ml2_conf.ini#[ml2_type_vlan]sed -i "/\[ml2_type_vlan]$/anetwork_vlan_ranges = physicnet:1000:2000" /etc/neutron/plugins/ml2/ml2_conf.ini#[ml2_type_vxlan]sed -i "/\[ml2_type_vxlan]$/avni_ranges = 30000:31000" /etc/neutron/plugins/ml2/ml2_conf.ini#[securitygroup]sed -i "/\[securitygroup]$/aenable_ipset = true" /etc/neutron/plugins/ml2/ml2_conf.ini#/etc/neutron/plugins/ml2/openvswitch_agent.inicp /etc/neutron/plugins/ml2/openvswitch_agent.ini /etc/neutron/plugins/ml2/openvswitch_agent.ini.bak#[agent]#sed -i "/tunnel_types = /atunnel_types = vxlan" /etc/neutron/plugins/ml2/openvswitch_agent.ini#[ovs]#sed -i "/\[ovs]$/alocal_ip = 10.214.1.2" /etc/neutron/plugins/ml2/openvswitch_agent.ini#sed -i "/\[ovs]$/atun_peer_patch_port = patch-int" /etc/neutron/plugins/ml2/openvswitch_agent.ini#sed -i "/\[ovs]$/aint_peer_patch_port = patch-tun" /etc/neutron/plugins/ml2/openvswitch_agent.ini#sed -i "/\[ovs]$/atunnel_bridge = br-tun" /etc/neutron/plugins/ml2/openvswitch_agent.ini#[securitygroup]sed -i "/\[securitygroup]$/aenable_security_group = true" /etc/neutron/plugins/ml2/openvswitch_agent.inised -i "/\[securitygroup]$/afirewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver" /etc/neutron/plugins/ml2/openvswitch_agent.inicp /etc/neutron/l3_agent.ini /etc/neutron/l3_agent.ini.baksed -i "/\[DEFAULT]$/ainterface_driver = 
neutron.agent.linux.interface.OVSInterfaceDriver" /etc/neutron/l3_agent.inicp /etc/neutron/dhcp_agent.ini /etc/neutron/dhcp_agent.ini.baksed -i "/\[DEFAULT]$/aenable_isolated_metadata = true" /etc/neutron/l3_agent.inised -i "/\[DEFAULT]$/adhcp_driver = neutron.agent.linux.dhcp.Dnsmasq" /etc/neutron/dhcp_agent.inised -i "/\[DEFAULT]$/ainterface_driver = neutron.agent.linux.interface.OVSInterfaceDriver" /etc/neutron/dhcp_agent.inised -i "/force_metadata = /aforce_metadata = true" /etc/neutron/dhcp_agent.inicp /etc/neutron/metadata_agent.ini /etc/neutron/metadata_agent.ini.baksed -i "/\[DEFAULT]$/ametadata_proxy_shared_secret = $ALL_PASS" /etc/neutron/metadata_agent.inised -i "/\[DEFAULT]$/anova_metadata_host = $CONTROLLER_IP" /etc/neutron/metadata_agent.ini#Edit /etc/nova/nova.conf file and perform the fllowing actionssed -i "/\[neutron]$/ametadata_proxy_shared_secret = $ALL_PASS" /etc/nova/nova.confsed -i "/\[neutron]$/aservice_metadata_proxy = true" /etc/nova/nova.confsed -i "/\[neutron]$/apassword = $ALL_PASS" /etc/nova/nova.confsed -i "/\[neutron]$/ausername = neutron" /etc/nova/nova.confsed -i "/\[neutron]$/aproject_name = service" /etc/nova/nova.confsed -i "/\[neutron]$/aregion_name = RegionOne" /etc/nova/nova.confsed -i "/\[neutron]$/auser_domain_name = Default" /etc/nova/nova.confsed -i "/\[neutron]$/aproject_domain_name = Default" /etc/nova/nova.confsed -i "/\[neutron]$/aauth_type = password" /etc/nova/nova.confsed -i "/\[neutron]$/aauth_url = http://$CONTROLLER_IP:5000" /etc/nova/nova.confsed -i "/\[neutron]$/aurl = http://$CONTROLLER_IP:9696" /etc/nova/nova.confln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.inisu -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutronsystemctl restart openstack-nova-api.servicesystemctl enable neutron-server.service \ neutron-openvswitch-agent.service neutron-dhcp-agent.service \ neutron-metadata-agent.service neutron-l3-agent.servicesystemctl start neutron-server.service \ neutron-openvswitch-agent.service neutron-dhcp-agent.service \ neutron-metadata-agent.service neutron-l3-agent.servicesleep 4I_setup_dashboard_about_ctrl_stein.shset -x -eyum install openstack-dashboard -y##/etc/openstack-dashboard/local_settingscp /etc/openstack-dashboard/local_settings /etc/openstack-dashboard/local_settings.baksed -i "/OPENSTACK_HOST = /cOPENSTACK_HOST = \"$CONTROLLER_IP\"" /etc/openstack-dashboard/local_settingssed -i "/ALLOWED_HOSTS = /cALLOWED_HOSTS = ['*']" /etc/openstack-dashboard/local_settings#SESSION_ENGINE = 'django.contrib.sessions.backends.cache' #CACHESsed -i "/^CACHES =/iSESSION_ENGINE = 'django.contrib.sessions.backends.cache'" /etc/openstack-dashboard/local_settingssed -i "/^[ \t]*'BACKEND'/a\\ \t'LOCATION': '$CONTROLLER_IP:11211'," /etc/openstack-dashboard/local_settingssed -i 's/django.core.cache.backends.locmem.LocMemCache/django.core.cache.backends.memcached.MemcachedCache/g' /etc/openstack-dashboard/local_settings#sed -i "/OPENSTACK_KEYSTONE_URL/cOPENSTACK_KEYSTONE_URL = \"http://%s:5000/v3\" % OPENSTACK_HOST" /etc/openstack-dashboard/local_settings#sed -i "/OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT/cOPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True" /etc/openstack-dashboard/local_settings#OPENSTACK_API_VERSIONS = {# "identity": 3,# "image": 2,# "volume": 2,#}sed -i "s/#OPENSTACK_API_VERSIONS/OPENSTACK_API_VERSIONS/g" /etc/openstack-dashboard/local_settingssed -i "/# \"identity\": 3,/c\\ \"identity\": 3," 
/etc/openstack-dashboard/local_settingssed -i "/# \"image\": 2,/c\\ \"image\": 2," /etc/openstack-dashboard/local_settingssed -i "/# \"volume\": 2,/c\\ \"volume\": 2," /etc/openstack-dashboard/local_settingssed -i "/# \"compute\": 2,/a}" /etc/openstack-dashboard/local_settings#sed -i "/#OPENSTACK_KEYSTONE_DEFAULT_DOMAIN/cOPENSTACK_KEYSTONE_DEFAULT_DOMAIN = \"Default\"" /etc/openstack-dashboard/local_settingssed -i "/OPENSTACK_KEYSTONE_DEFAULT_ROLE/cOPENSTACK_KEYSTONE_DEFAULT_ROLE = \"user\"" /etc/openstack-dashboard/local_settings#OPENSTACK_NEUTRON_NETWORK = {# ...# 'enable_router': False,# 'enable_quotas': False,# 'enable_distributed_router': False,# 'enable_ha_router': False,# 'enable_lb': False,# 'enable_firewall': False,# 'enable_vpn': False,# 'enable_fip_topology_check': False,#}##/etc/httpd/conf.d/openstack-dashboard.conf#cp /etc/httpd/conf.d/openstack-dashboard.conf /etc/httpd/conf.d/openstack-dashboard.conf.baksed -i "/WSGIScriptAlias/iWSGIApplicationGroup %{GLOBAL}" /etc/httpd/conf.d/openstack-dashboard.conf#systemctl restart httpd.service memcached.servicesystemctl status httpd memcachedsleep 3#Fwaasyum install openstack-neutron-fwaas -yneutron-db-manage --subproject neutron-fwaas upgrade head#lbaasv2yum install openstack-neutron-lbaas -yneutron-db-manage --subproject neutron-lbaas upgrade head#vpnaasyum install openstack-neutron-vpnaas -yneutron-db-manage --subproject neutron-vpnaas upgrade head
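A note on the execution hint at the top of this post (run the shells with "source xx.sh or . xx.sh", not "bash xx.sh"): the stages share state through shell variables such as CONTROLLER_IP, CTRL_HOST_NAME, and ALL_PASS, which are appended to /etc/profile and sourced into the running shell, so each stage must run in the current shell to keep everything it sets available to the next stage. A minimal illustration:

source /etc/profile                      # load CONTROLLER_IP, CTRL_HOST_NAME, ALL_PASS into this shell
. D_setup_keystone_about_ctrl_stein.sh   # sourced: anything it sets stays in this shell for later stages
# bash D_setup_keystone_about_ctrl_stein.sh would run in a child process;
# variables it sets or sources would be lost when it exits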

September 8, 2021 · 10 min · jiezi

About OpenStack: OpenStack Train Controller+Network shell-script deployment

Notes on this Train-release Controller+Network deployment script: a colleague reported that VMs could not be created after using it; investigation showed that on CentOS, when certain config sections were missing, the corresponding writes to the config files failed. The script was therefore revised once, but the revision has not been retested.

#!/bin/bash
#Author: -- Created: 2021.4
#Modified -- Modified: 2021
echo -e "\033[45;37m Openstack Train controller node start to install \033[0m"

#===Variable===
CTRL_HOST_NAME=`cat /etc/hostname | awk '{print $1}'`
ALL_PASS="123456"
CURDATE=`date`

#Get IP address
ipNum=`ip a|grep inet|grep -v 127.0.0.1|grep -v inet6|awk '{print$2}'|awk -F '/' '{print$1}'|tr -d " "|wc -l`
#echo "This host IP address:"
#ip a|grep inet|grep -v 127.0.0.1|grep -v inet6|awk '{print$2}'|awk -F '/' '{print$1}'|tr -d " "
echo "This host IP address: `ip a|grep inet|grep -v 127.0.0.1|grep -v inet6|awk '{print$2}'|awk -F '/' '{print$1}'|tr -d " "`"
if [ "$ipNum" -eq 0 ];then
    echo "This host does not have an IP address, please set one."
    exit 1
fi
if [ "$ipNum" -gt 1 ];then
    echo "This host has multiple IP addresses!"
    echo "Which one do you choose? Please enter the row number."
    while :
    do
        read -p "The number of the row is : " rowNum
        # validate the user's input rather than ipNum (bug fix)
        if [[ "$rowNum" =~ ^[0-9]+$ ]]; then
            if [[ "$rowNum" -gt $ipNum ]]; then
                echo "Invalid row!"
            elif [[ "$rowNum" -le 0 ]]; then
                echo "Invalid row!"
            else
                # pass rowNum into awk with -v; it does not expand inside single quotes (bug fix)
                CONTROLLER_IP=`ip a |grep inet|grep -v 127.0.0.1|grep -v inet6|awk '{print$2}'|awk -F '/' '{print$1}'|awk -v n="$rowNum" 'NR==n'`
                break
            fi
        else
            echo "Invalid row!"
        fi
    done
fi
if [ "$ipNum" -eq 1 ]; then
    CONTROLLER_IP=`ip a |grep inet|grep -v 127.0.0.1|grep -v inet6|awk '{print$2}'|tr -d ' '|awk -F '/' '{print$1}'`
fi
set -e
echo ""
echo "Controller's IP is   : $CONTROLLER_IP"
echo "Controller's name is : $CTRL_HOST_NAME"
echo "Openstack all passwords are : $ALL_PASS"
echo "Starting time : $CURDATE"
echo ""
echo -e "\033[45;37m You can cancel within 10s by 'Ctrl + C' \033[0m"
echo -n "Wait for 10 seconds "
for i in $(seq 10); do echo -n "."; sleep 1; done
echo
#sleep 10
echo "end"
set -x

#===Environment===
yum install vim -y
yum install net-tools -y
yum install ftp -y
yum install expect -y
yum install tcpdump -y
yum install lldpad -y
yum install htop -y
yum install bwm-ng -y
yum install python-pip -y
echo "$CONTROLLER_IP $CTRL_HOST_NAME" >>/etc/hosts
systemctl stop firewalld
systemctl disable firewalld
cp /etc/selinux/config /etc/selinux/config.bak
sed -i "/SELINUX=enforcing/cSELINUX=disabled" /etc/selinux/config
setenforce 0
cp /etc/chrony.conf /etc/chrony.conf.bak
sed -i "/server 0.centos.pool.ntp.org iburst/cserver 10.165.7.181 iburst" /etc/chrony.conf
sed -i "/centos.pool.ntp.org/d" /etc/chrony.conf
systemctl enable chronyd
systemctl restart chronyd
chronyc sources
timedatectl set-timezone Asia/Shanghai
echo "The time now is : $CURDATE"
yum install python-openstackclient -y
yum install openstack-selinux -y

#database
yum install mariadb mariadb-server python2-PyMySQL -y
touch /etc/my.cnf.d/openstack.cnf
echo "[mysqld]" >>/etc/my.cnf.d/openstack.cnf
echo "bind-address = $CONTROLLER_IP" >>/etc/my.cnf.d/openstack.cnf
echo "" >>/etc/my.cnf.d/openstack.cnf
echo "default-storage-engine = innodb" >>/etc/my.cnf.d/openstack.cnf
echo "innodb_file_per_table = on" >>/etc/my.cnf.d/openstack.cnf
echo "max_connections = 4096" >>/etc/my.cnf.d/openstack.cnf
echo "collation-server = utf8_general_ci" >>/etc/my.cnf.d/openstack.cnf
echo "character-set-server = utf8" >>/etc/my.cnf.d/openstack.cnf
systemctl enable mariadb.service
systemctl start mariadb.service
systemctl status mariadb.service
# first (blank) answer = no current root password on a fresh install
mysql_secure_installation <<EOF

y
$ALL_PASS
$ALL_PASS
y
y
y
y
EOF

#Message queue
yum install rabbitmq-server -y
systemctl enable rabbitmq-server.service
systemctl start rabbitmq-server.service
systemctl status rabbitmq-server.service
rabbitmqctl add_user openstack $ALL_PASS
rabbitmqctl set_permissions openstack ".*" ".*" ".*"

#Memcached
yum install -y memcached
yum install -y python-memcached
cp /etc/sysconfig/memcached /etc/sysconfig/memcached.bak
sed -i "/OPTIONS=\"-l 127.0.0.1,::1\"/cOPTIONS=\"-l 127.0.0.1,::1,$CONTROLLER_IP\"" /etc/sysconfig/memcached
systemctl enable memcached.service
systemctl start memcached.service
systemctl status memcached.service

#ETCD
yum install etcd -y
cp /etc/etcd/etcd.conf /etc/etcd/etcd.conf.bak
sed -i '/ETCD_DATA_DIR/cETCD_DATA_DIR="/var/lib/etcd/default.etcd"' /etc/etcd/etcd.conf
sed -i "/ETCD_LISTEN_PEER_URLS/cETCD_LISTEN_PEER_URLS=\"http://$CONTROLLER_IP:2380\"" /etc/etcd/etcd.conf
sed -i "/ETCD_LISTEN_CLIENT_URLS/cETCD_LISTEN_CLIENT_URLS=\"http://$CONTROLLER_IP:2379\"" /etc/etcd/etcd.conf
# the original referenced an undefined $CON_HOST_NAME here; use $CTRL_HOST_NAME (bug fix)
sed -i "/ETCD_NAME/cETCD_NAME=\"$CTRL_HOST_NAME\"" /etc/etcd/etcd.conf
sed -i "/ETCD_INITIAL_ADVERTISE_PEER_URLS/cETCD_INITIAL_ADVERTISE_PEER_URLS=\"http://$CONTROLLER_IP:2380\"" /etc/etcd/etcd.conf
sed -i "/ETCD_ADVERTISE_CLIENT_URLS/cETCD_ADVERTISE_CLIENT_URLS=\"http://$CONTROLLER_IP:2379\"" /etc/etcd/etcd.conf
sed -i "/ETCD_INITIAL_CLUSTER=/cETCD_INITIAL_CLUSTER=\"$CTRL_HOST_NAME=http://$CONTROLLER_IP:2380\"" /etc/etcd/etcd.conf
sed -i '/ETCD_INITIAL_CLUSTER_TOKEN/cETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-01"' /etc/etcd/etcd.conf
sed -i '/ETCD_INITIAL_CLUSTER_STATE/cETCD_INITIAL_CLUSTER_STATE="new"' /etc/etcd/etcd.conf
systemctl enable etcd
systemctl start etcd
systemctl status etcd

#===Identity service===
mysql -N -uroot -p$ALL_PASS<<EOF
DROP DATABASE if exists keystone;
CREATE DATABASE if not exists keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY '$ALL_PASS';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY '$ALL_PASS';
EOF
yum install -y openstack-keystone
yum install -y httpd
yum install -y mod_wsgi
cp /etc/keystone/keystone.conf /etc/keystone/keystone.conf.bak
#[database]  (quote the substitution so a missing header no longer breaks the test; fix the /keystone/keystone.conf path)
if [ "`grep '^\[database\]' /etc/keystone/keystone.conf`" != "[database]" ]; then
    echo "[database]" >> /etc/keystone/keystone.conf
else
    echo "We have this!"
fi
sed -i "/\[database]$/aconnection = mysql+pymysql://keystone:$ALL_PASS@$CONTROLLER_IP/keystone" /etc/keystone/keystone.conf
#[token]
if [ "`grep '^\[token\]' /etc/keystone/keystone.conf`" != "[token]" ]; then
    echo "[token]" >> /etc/keystone/keystone.conf
else
    echo "We have this!"
fi
sed -i '/\[token]$/aprovider = fernet' /etc/keystone/keystone.conf
#Populate the Identity service database
su -s /bin/sh -c "keystone-manage db_sync" keystone
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
keystone-manage bootstrap --bootstrap-password $ALL_PASS \
  --bootstrap-admin-url http://$CONTROLLER_IP:5000/v3/ \
  --bootstrap-internal-url http://$CONTROLLER_IP:5000/v3/ \
  --bootstrap-public-url http://$CONTROLLER_IP:5000/v3/ \
  --bootstrap-region-id RegionOne
#ServerName
sed -i "/#ServerName/aServerName $CONTROLLER_IP" /etc/httpd/conf/httpd.conf
#Creating a soft link
ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
systemctl enable httpd.service
systemctl start httpd.service
#systemctl status httpd.service
#Configure the administrative account
export OS_USERNAME=admin
export OS_PASSWORD=$ALL_PASS
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://$CONTROLLER_IP:5000/v3
export OS_IDENTITY_API_VERSION=3
#Create a domain, projects, users, and roles
openstack domain create --description "An Example Domain" example
openstack project create --domain default --description "Service Project" service
openstack project create --domain default --description "Demo Project" myproject
/usr/bin/expect << EOF
set timeout 15
spawn openstack user create --domain default --password-prompt myuser
expect "User*"
send "$ALL_PASS\r"
expect "Repeat *"
send "$ALL_PASS\r"
expect eof
EOF
openstack role create myrole
openstack role add --project myproject --user myuser myrole
unset OS_AUTH_URL OS_PASSWORD
/usr/bin/expect << EOF
set timeout 15
spawn openstack --os-auth-url http://$CONTROLLER_IP:5000/v3 \
--os-project-domain-name Default --os-user-domain-name Default \
--os-project-name admin --os-username admin token issue
expect "*Password*"
send "$ALL_PASS\r"
expect eof
EOF
# use the IP here too; the hostname "controller" may not resolve on this host (bug fix)
/usr/bin/expect << EOF
set timeout 15
spawn openstack --os-auth-url http://$CONTROLLER_IP:5000/v3 \
--os-project-domain-name Default --os-user-domain-name Default \
--os-project-name myproject --os-username myuser token issue
expect "*Password*"
send "$ALL_PASS\r"
expect eof
EOF
#Creating admin-openrc
touch /root/admin-openrc
echo "export OS_PROJECT_DOMAIN_NAME=Default" >/root/admin-openrc
echo "export OS_USER_DOMAIN_NAME=Default" >>/root/admin-openrc
echo "export OS_PROJECT_NAME=admin" >>/root/admin-openrc
echo "export OS_USERNAME=admin" >>/root/admin-openrc
echo "export OS_PASSWORD=$ALL_PASS" >>/root/admin-openrc
echo "export OS_AUTH_URL=http://$CONTROLLER_IP:5000/v3" >>/root/admin-openrc
echo "export OS_IDENTITY_API_VERSION=3" >>/root/admin-openrc
echo "export OS_IMAGE_API_VERSION=2" >>/root/admin-openrc
#Creating demo-openrc
touch /root/demo-openrc
echo "export OS_PROJECT_DOMAIN_NAME=Default" >/root/demo-openrc
echo "export OS_USER_DOMAIN_NAME=Default" >>/root/demo-openrc
echo "export OS_PROJECT_NAME=myproject" >>/root/demo-openrc
echo "export OS_USERNAME=myuser" >>/root/demo-openrc
echo "export OS_PASSWORD=$ALL_PASS" >>/root/demo-openrc
echo "export OS_AUTH_URL=http://$CONTROLLER_IP:5000/v3" >>/root/demo-openrc
echo "export OS_IDENTITY_API_VERSION=3" >>/root/demo-openrc
echo "export OS_IMAGE_API_VERSION=2" >>/root/demo-openrc
source /root/admin-openrc
openstack token issue
sleep 2

#===Image service===
#Database operations: glance
mysql -N -uroot -p$ALL_PASS<<EOF
DROP DATABASE if exists glance;
CREATE DATABASE if not exists glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY '$ALL_PASS';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY '$ALL_PASS';
EOF
source /root/admin-openrc
/usr/bin/expect << EOF
set timeout 15
spawn openstack user create --domain default --password-prompt glance
expect "User*"
send "$ALL_PASS\r"
expect "Repeat*"
send "$ALL_PASS\r"
expect eof
EOF
openstack role add --project service --user glance admin
openstack service create --name glance --description "OpenStack Image" image
openstack endpoint create --region RegionOne image public http://$CONTROLLER_IP:9292
openstack endpoint create --region RegionOne image internal http://$CONTROLLER_IP:9292
openstack endpoint create --region RegionOne image admin http://$CONTROLLER_IP:9292
yum install openstack-glance -y
cp /etc/glance/glance-api.conf /etc/glance/glance-api.conf.bak
#[database]
if [ "`grep '^\[database\]' /etc/glance/glance-api.conf`" != "[database]" ]; then
    echo "[database]" >> /etc/glance/glance-api.conf
else
    echo "We have this!"
fi
sed -i "/\[database]$/aconnection = mysql+pymysql://glance:$ALL_PASS@$CONTROLLER_IP/glance" /etc/glance/glance-api.conf
#[keystone_authtoken]
if [ "`grep '^\[keystone_authtoken\]' /etc/glance/glance-api.conf`" != "[keystone_authtoken]" ]; then
    echo "[keystone_authtoken]" >> /etc/glance/glance-api.conf
else
    echo "We have this!"
fi
sed -i "/\[keystone_authtoken]$/apassword = $ALL_PASS" /etc/glance/glance-api.conf
sed -i "/\[keystone_authtoken]$/ausername = glance" /etc/glance/glance-api.conf
sed -i "/\[keystone_authtoken]$/aproject_name = service" /etc/glance/glance-api.conf
sed -i "/\[keystone_authtoken]$/auser_domain_name = Default" /etc/glance/glance-api.conf
sed -i "/\[keystone_authtoken]$/aproject_domain_name = Default" /etc/glance/glance-api.conf
sed -i "/\[keystone_authtoken]$/aauth_type = password" /etc/glance/glance-api.conf
sed -i "/\[keystone_authtoken]$/amemcached_servers = $CONTROLLER_IP:11211" /etc/glance/glance-api.conf
sed -i "/\[keystone_authtoken]$/aauth_url = http://$CONTROLLER_IP:5000" /etc/glance/glance-api.conf
sed -i "/\[keystone_authtoken]$/awww_authenticate_uri = http://$CONTROLLER_IP:5000" /etc/glance/glance-api.conf
#[paste_deploy]
if [ "`grep '^\[paste_deploy\]' /etc/glance/glance-api.conf`" != "[paste_deploy]" ]; then
    echo "[paste_deploy]" >> /etc/glance/glance-api.conf
else
    echo "We have this!"
fi
sed -i "/\[paste_deploy]$/aflavor = keystone" /etc/glance/glance-api.conf
#[glance_store]
if [ "`grep '^\[glance_store\]' /etc/glance/glance-api.conf`" != "[glance_store]" ]; then
    echo "[glance_store]" >> /etc/glance/glance-api.conf
else
    echo "We have this!"
fi
sed -i "/\[glance_store]$/afilesystem_store_datadir = /var/lib/glance/images/" /etc/glance/glance-api.conf
sed -i "/\[glance_store]$/adefault_store = file" /etc/glance/glance-api.conf
sed -i "/\[glance_store]$/astores = file,http" /etc/glance/glance-api.conf
#copy glance-registry.conf  (the original appended to malformed paths like /etc/glance//etc/glance/...; fixed)
cp /etc/glance/glance-registry.conf /etc/glance/glance-registry.conf.bak
#[database]
if [ "`grep '^\[database\]' /etc/glance/glance-registry.conf`" != "[database]" ]; then
    echo "[database]" >> /etc/glance/glance-registry.conf
fi
sed -i "/\[database]$/aconnection = mysql+pymysql://glance:$ALL_PASS@$CONTROLLER_IP/glance" /etc/glance/glance-registry.conf
#[keystone_authtoken]
if [ "`grep '^\[keystone_authtoken\]' /etc/glance/glance-registry.conf`" != "[keystone_authtoken]" ]; then
    echo "[keystone_authtoken]" >> /etc/glance/glance-registry.conf
fi
sed -i "/\[keystone_authtoken]$/apassword = $ALL_PASS" /etc/glance/glance-registry.conf
sed -i "/\[keystone_authtoken]$/ausername = glance" /etc/glance/glance-registry.conf
sed -i "/\[keystone_authtoken]$/aproject_name = service" /etc/glance/glance-registry.conf
sed -i "/\[keystone_authtoken]$/auser_domain_name = Default" /etc/glance/glance-registry.conf
sed -i "/\[keystone_authtoken]$/aproject_domain_name = Default" /etc/glance/glance-registry.conf
sed -i "/\[keystone_authtoken]$/aauth_type = password" /etc/glance/glance-registry.conf
sed -i "/\[keystone_authtoken]$/amemcached_servers = $CONTROLLER_IP:11211" /etc/glance/glance-registry.conf
sed -i "/\[keystone_authtoken]$/aauth_url = http://$CONTROLLER_IP:5000" /etc/glance/glance-registry.conf
sed -i "/\[keystone_authtoken]$/awww_authenticate_uri = http://$CONTROLLER_IP:5000" /etc/glance/glance-registry.conf
#[paste_deploy]
if [ "`grep '^\[paste_deploy\]' /etc/glance/glance-registry.conf`" != "[paste_deploy]" ]; then
    echo "[paste_deploy]" >> /etc/glance/glance-registry.conf
fi
sed -i "/\[paste_deploy]$/aflavor = keystone" /etc/glance/glance-registry.conf
su -s /bin/sh -c "glance-manage db_sync" glance
systemctl enable openstack-glance-api.service openstack-glance-registry.service
systemctl start openstack-glance-api.service openstack-glance-registry.service
#systemctl status openstack-glance-api.service openstack-glance-registry.service

#===Placement service===
mysql -N -uroot -p$ALL_PASS<<EOF
CREATE DATABASE placement;
GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' IDENTIFIED BY '$ALL_PASS';
GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' IDENTIFIED BY '$ALL_PASS';
EOF
source /root/admin-openrc
/usr/bin/expect << EOF
set timeout 15
spawn openstack user create --domain default --password-prompt placement
expect "User*"
send "$ALL_PASS\r"
expect "Repeat*"
send "$ALL_PASS\r"
expect eof
EOF
openstack role add --project service --user placement admin
openstack service create --name placement --description "Placement API" placement
openstack endpoint create --region RegionOne placement public http://$CONTROLLER_IP:8778
openstack endpoint create --region RegionOne placement internal http://$CONTROLLER_IP:8778
openstack endpoint create --region RegionOne placement admin http://$CONTROLLER_IP:8778
yum install openstack-placement-api -y
cp /etc/placement/placement.conf /etc/placement/placement.conf.bak
#[placement_database]
if [ "`grep '^\[placement_database\]' /etc/placement/placement.conf`" != "[placement_database]" ]; then
    echo "[placement_database]" >> /etc/placement/placement.conf
fi
sed -i "/\[placement_database]$/aconnection = mysql+pymysql://placement:$ALL_PASS@$CONTROLLER_IP/placement" /etc/placement/placement.conf
#[api]
if [ "`grep '^\[api\]' /etc/placement/placement.conf`" != "[api]" ]; then
    echo "[api]" >> /etc/placement/placement.conf
fi
sed -i "/\[api]$/aauth_strategy = keystone" /etc/placement/placement.conf
#[keystone_authtoken]
if [ "`grep '^\[keystone_authtoken\]' /etc/placement/placement.conf`" != "[keystone_authtoken]" ]; then
    echo "[keystone_authtoken]" >> /etc/placement/placement.conf
fi
sed -i "/\[keystone_authtoken]$/apassword = $ALL_PASS" /etc/placement/placement.conf
sed -i "/\[keystone_authtoken]$/ausername = placement" /etc/placement/placement.conf
sed -i "/\[keystone_authtoken]$/aproject_name = service" /etc/placement/placement.conf
sed -i "/\[keystone_authtoken]$/auser_domain_name = Default" /etc/placement/placement.conf
sed -i "/\[keystone_authtoken]$/aproject_domain_name = Default" /etc/placement/placement.conf
sed -i "/\[keystone_authtoken]$/aauth_type = password" /etc/placement/placement.conf
sed -i "/\[keystone_authtoken]$/amemcached_servers = $CONTROLLER_IP:11211" /etc/placement/placement.conf
sed -i "/\[keystone_authtoken]$/aauth_url = http://$CONTROLLER_IP:5000/v3" /etc/placement/placement.conf
su -s /bin/sh -c "placement-manage db sync" placement
systemctl restart httpd
#verify installation
source /root/admin-openrc
placement-status upgrade check
#install osc-placement
mkdir /root/.pip
touch /root/.pip/pip.conf
echo "[global]" >/root/.pip/pip.conf
echo "index-url=http://10.153.3.130/pypi/web/simple" >>/root/.pip/pip.conf
echo "" >>/root/.pip/pip.conf
echo "[install]" >>/root/.pip/pip.conf
echo "trusted-host=10.153.3.130" >>/root/.pip/pip.conf
pip install osc-placement
sed -i "/<\/VirtualHost>/i\ \ <Directory \/usr\/bin>" /etc/httpd/conf.d/00-placement-api.conf
sed -i "/<\/VirtualHost>/i\ \ \ \ <IfVersion >= 2.4>" /etc/httpd/conf.d/00-placement-api.conf
sed -i "/<\/VirtualHost>/i\ \ \ \ \ \ \ \ Require all granted" /etc/httpd/conf.d/00-placement-api.conf
sed -i "/<\/VirtualHost>/i\ \ \ \ <\/IfVersion>" /etc/httpd/conf.d/00-placement-api.conf
sed -i "/<\/VirtualHost>/i\ \ \ \ <IfVersion < 2.4>" /etc/httpd/conf.d/00-placement-api.conf
sed -i "/<\/VirtualHost>/i\ \ \ \ \ \ \ \ Order allow,deny" /etc/httpd/conf.d/00-placement-api.conf
sed -i "/<\/VirtualHost>/i\ \ \ \ \ \ \ \ Allow from all" /etc/httpd/conf.d/00-placement-api.conf
sed -i "/<\/VirtualHost>/i\ \ \ \ <\/IfVersion>" /etc/httpd/conf.d/00-placement-api.conf
sed -i "/<\/VirtualHost>/i\ \ <\/Directory>" /etc/httpd/conf.d/00-placement-api.conf
systemctl restart httpd
systemctl status httpd
openstack --os-placement-api-version 1.2 resource class list --sort-column name
openstack --os-placement-api-version 1.6 trait list --sort-column name

#===Compute service===
mysql -N -uroot -p$ALL_PASS<<EOF
DROP DATABASE if exists nova_api;
CREATE DATABASE if not exists nova_api;
DROP DATABASE if exists nova;
CREATE DATABASE if not exists nova;
DROP DATABASE if exists nova_cell0;
CREATE DATABASE if not exists nova_cell0;
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY '$ALL_PASS';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY '$ALL_PASS';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY '$ALL_PASS';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY '$ALL_PASS';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY '$ALL_PASS';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY '$ALL_PASS';
EOF
source /root/admin-openrc
/usr/bin/expect << EOF
set timeout 15
spawn openstack user create --domain default --password-prompt nova
expect "User*"
send "$ALL_PASS\r"
expect "Repeat*"
send "$ALL_PASS\r"
expect eof
EOF
openstack role add --project service --user nova admin
openstack service create --name nova --description "OpenStack Compute" compute
openstack endpoint create --region RegionOne compute public http://$CONTROLLER_IP:8774/v2.1
openstack endpoint create --region RegionOne compute internal http://$CONTROLLER_IP:8774/v2.1
openstack endpoint create --region RegionOne compute admin http://$CONTROLLER_IP:8774/v2.1
yum install -y openstack-nova-api
yum install -y openstack-nova-conductor
yum install -y openstack-nova-novncproxy
yum install -y openstack-nova-scheduler
cp /etc/nova/nova.conf /etc/nova/nova.conf.bak
#[DEFAULT]
if [ "`grep '^\[DEFAULT\]' /etc/nova/nova.conf`" != "[DEFAULT]" ]; then
    echo "[DEFAULT]" >> /etc/nova/nova.conf
fi
sed -i "/\[DEFAULT]$/afirewall_driver = nova.virt.firewall.NoopFirewallDriver" /etc/nova/nova.conf
sed -i "/\[DEFAULT]$/ause_neutron = True" /etc/nova/nova.conf
sed -i "/\[DEFAULT]$/amy_ip = $CONTROLLER_IP" /etc/nova/nova.conf
sed -i "/\[DEFAULT]$/atransport_url = rabbit://openstack:$ALL_PASS@$CONTROLLER_IP:5672" /etc/nova/nova.conf
sed -i "/\[DEFAULT]$/aenabled_apis = osapi_compute,metadata" /etc/nova/nova.conf
#[api_database]
if [ "`grep '^\[api_database\]' /etc/nova/nova.conf`" != "[api_database]" ]; then
    echo "[api_database]" >> /etc/nova/nova.conf
fi
sed -i "/\[api_database]$/aconnection = mysql+pymysql://nova:$ALL_PASS@$CONTROLLER_IP/nova_api" /etc/nova/nova.conf
#[database]
if [ "`grep '^\[database\]' /etc/nova/nova.conf`" != "[database]" ]; then
    echo "[database]" >> /etc/nova/nova.conf
fi
sed -i "/\[database]$/aconnection = mysql+pymysql://nova:$ALL_PASS@$CONTROLLER_IP/nova" /etc/nova/nova.conf
#[api]
if [ "`grep '^\[api\]' /etc/nova/nova.conf`" != "[api]" ]; then
    echo "[api]" >> /etc/nova/nova.conf
fi
sed -i "/\[api]$/aauth_strategy = keystone" /etc/nova/nova.conf
#[keystone_authtoken]
if [ "`grep '^\[keystone_authtoken\]' /etc/nova/nova.conf`" != "[keystone_authtoken]" ]; then
    echo "[keystone_authtoken]" >> /etc/nova/nova.conf
fi
sed -i "/\[keystone_authtoken]$/apassword = $ALL_PASS" /etc/nova/nova.conf
sed -i "/\[keystone_authtoken]$/ausername = nova" /etc/nova/nova.conf
sed -i "/\[keystone_authtoken]$/aproject_name = service" /etc/nova/nova.conf
sed -i "/\[keystone_authtoken]$/auser_domain_name = Default" /etc/nova/nova.conf
sed -i "/\[keystone_authtoken]$/aproject_domain_name = Default" /etc/nova/nova.conf
sed -i "/\[keystone_authtoken]$/aauth_type = password" /etc/nova/nova.conf
sed -i "/\[keystone_authtoken]$/amemcached_servers = $CONTROLLER_IP:11211" /etc/nova/nova.conf
sed -i "/\[keystone_authtoken]$/aauth_url = http://$CONTROLLER_IP:5000" /etc/nova/nova.conf
sed -i "/\[keystone_authtoken]$/awww_authenticate_uri = http://$CONTROLLER_IP:5000/" /etc/nova/nova.conf
#[vnc]
if [ "`grep '^\[vnc\]' /etc/nova/nova.conf`" != "[vnc]" ]; then
    echo "[vnc]" >> /etc/nova/nova.conf
fi
sed -i "/\[vnc]$/aserver_proxyclient_address = \$my_ip" /etc/nova/nova.conf
sed -i "/\[vnc]$/aserver_listen = \$my_ip" /etc/nova/nova.conf
sed -i "/\[vnc]$/aenabled = true" /etc/nova/nova.conf
#[glance]
if [ "`grep '^\[glance\]' /etc/nova/nova.conf`" != "[glance]" ]; then
    echo "[glance]" >> /etc/nova/nova.conf
fi
sed -i "/\[glance]$/aapi_servers = http://$CONTROLLER_IP:9292" /etc/nova/nova.conf
#[oslo_concurrency]
if [ "`grep '^\[oslo_concurrency\]' /etc/nova/nova.conf`" != "[oslo_concurrency]" ]; then
    echo "[oslo_concurrency]" >> /etc/nova/nova.conf
fi
sed -i "/\[oslo_concurrency]$/alock_path = \/var\/lib\/nova\/tmp" /etc/nova/nova.conf
#[placement]
if [ "`grep '^\[placement\]' /etc/nova/nova.conf`" != "[placement]" ]; then
    echo "[placement]" >> /etc/nova/nova.conf
fi
sed -i "/\[placement]$/apassword = $ALL_PASS" /etc/nova/nova.conf
sed -i "/\[placement]$/ausername = placement" /etc/nova/nova.conf
sed -i "/\[placement]$/aauth_url = http://$CONTROLLER_IP:5000/v3" /etc/nova/nova.conf
sed -i "/\[placement]$/auser_domain_name = Default" /etc/nova/nova.conf
sed -i "/\[placement]$/aauth_type = password" /etc/nova/nova.conf
sed -i "/\[placement]$/aproject_name = service" /etc/nova/nova.conf
sed -i "/\[placement]$/aproject_domain_name = Default" /etc/nova/nova.conf
sed -i "/\[placement]$/aos_region_name = RegionOne" /etc/nova/nova.conf
su -s /bin/sh -c "nova-manage api_db sync" nova
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
su -s /bin/sh -c "nova-manage db sync" nova
su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova
systemctl enable openstack-nova-api.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
systemctl start openstack-nova-api.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
#Verify operation
source /root/admin-openrc
openstack compute service list
sleep 2
openstack catalog list
sleep 2
openstack image list
sleep 2
nova-status upgrade check
sleep 2

#===Networking Service===
mysql -N -uroot -p$ALL_PASS<<EOF
DROP DATABASE if exists neutron;
CREATE DATABASE if not exists neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY '$ALL_PASS';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY '$ALL_PASS';
EOF
source /root/admin-openrc
/usr/bin/expect << EOF
spawn openstack user create --domain default --password-prompt neutron
expect "User*"
send "$ALL_PASS\r"
expect "Repeat*"
send "$ALL_PASS\r"
expect eof
EOF
openstack role add --project service --user neutron admin
openstack service create --name neutron --description "OpenStack Networking" network
openstack endpoint create --region RegionOne network public http://$CONTROLLER_IP:9696
openstack endpoint create --region RegionOne network internal http://$CONTROLLER_IP:9696
openstack endpoint create --region RegionOne network admin http://$CONTROLLER_IP:9696
yum install -y openstack-neutron
yum install -y openstack-neutron-ml2
yum install -y openstack-neutron-openvswitch
yum install -y ebtables
#/etc/neutron/neutron.conf
cp /etc/neutron/neutron.conf /etc/neutron/neutron.conf.bak
#[database]
if [ "`grep '^\[database\]' /etc/neutron/neutron.conf`" != "[database]" ]; then
    echo "[database]" >> /etc/neutron/neutron.conf
fi
sed -i "/\[database]$/aconnection = mysql+pymysql://neutron:$ALL_PASS@$CONTROLLER_IP/neutron" /etc/neutron/neutron.conf
#[DEFAULT]
if [ "`grep '^\[DEFAULT\]' /etc/neutron/neutron.conf`" != "[DEFAULT]" ]; then
    echo "[DEFAULT]" >> /etc/neutron/neutron.conf
fi
sed -i "/\[DEFAULT]$/anotify_nova_on_port_data_changes = true" /etc/neutron/neutron.conf
sed -i "/\[DEFAULT]$/anotify_nova_on_port_status_changes = true" /etc/neutron/neutron.conf
sed -i "/\[DEFAULT]$/aauth_strategy = keystone" /etc/neutron/neutron.conf
sed -i "/\[DEFAULT]$/atransport_url = rabbit://openstack:$ALL_PASS@$CONTROLLER_IP" /etc/neutron/neutron.conf
sed -i "/\[DEFAULT]$/aallow_overlapping_ips = true" /etc/neutron/neutron.conf
sed -i "/\[DEFAULT]$/aservice_plugins = router" /etc/neutron/neutron.conf
sed -i "/\[DEFAULT]$/acore_plugin = ml2" /etc/neutron/neutron.conf
#[keystone_authtoken]
if [ "`grep '^\[keystone_authtoken\]' /etc/neutron/neutron.conf`" != "[keystone_authtoken]" ]; then
    echo "[keystone_authtoken]" >> /etc/neutron/neutron.conf
fi
sed -i "/\[keystone_authtoken]$/apassword = $ALL_PASS" /etc/neutron/neutron.conf
sed -i "/\[keystone_authtoken]$/ausername = neutron" /etc/neutron/neutron.conf
sed -i "/\[keystone_authtoken]$/aproject_name = service" /etc/neutron/neutron.conf
sed -i "/\[keystone_authtoken]$/auser_domain_name = Default" /etc/neutron/neutron.conf
sed -i "/\[keystone_authtoken]$/aproject_domain_name = Default" /etc/neutron/neutron.conf
sed -i "/\[keystone_authtoken]$/aauth_type = password" /etc/neutron/neutron.conf
sed -i "/\[keystone_authtoken]$/amemcached_servers = $CONTROLLER_IP:11211" /etc/neutron/neutron.conf
sed -i "/\[keystone_authtoken]$/aauth_url = http://$CONTROLLER_IP:5000" /etc/neutron/neutron.conf
sed -i "/\[keystone_authtoken]$/awww_authenticate_uri = http://$CONTROLLER_IP:5000" /etc/neutron/neutron.conf
#[nova]
if [ "`grep '^\[nova\]' /etc/neutron/neutron.conf`" != "[nova]" ]; then
    echo "[nova]" >> /etc/neutron/neutron.conf
fi
sed -i "/\[nova]$/apassword = $ALL_PASS" /etc/neutron/neutron.conf
sed -i "/\[nova]$/ausername = nova" /etc/neutron/neutron.conf
sed -i "/\[nova]$/aproject_name = service" /etc/neutron/neutron.conf
sed -i "/\[nova]$/aregion_name = RegionOne" /etc/neutron/neutron.conf
sed -i "/\[nova]$/auser_domain_name = Default" /etc/neutron/neutron.conf
sed -i "/\[nova]$/aproject_domain_name = Default" /etc/neutron/neutron.conf
sed -i "/\[nova]$/aauth_type = password" /etc/neutron/neutron.conf
sed -i "/\[nova]$/aauth_url = http://$CONTROLLER_IP:5000" /etc/neutron/neutron.conf
#[oslo_concurrency]
if [ "`grep '^\[oslo_concurrency\]' /etc/neutron/neutron.conf`" != "[oslo_concurrency]" ]; then
    echo "[oslo_concurrency]" >> /etc/neutron/neutron.conf
fi
sed -i "/\[oslo_concurrency]$/alock_path = \/var\/lib\/neutron\/tmp" /etc/neutron/neutron.conf
cp /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugins/ml2/ml2_conf.ini.bak
#[ml2]
if [ "`grep '^\[ml2\]' /etc/neutron/plugins/ml2/ml2_conf.ini`" != "[ml2]" ]; then
    echo "[ml2]" >> /etc/neutron/plugins/ml2/ml2_conf.ini
fi
sed -i "/\[ml2]$/aextension_drivers = port_security" /etc/neutron/plugins/ml2/ml2_conf.ini
sed -i "/\[ml2]$/amechanism_drivers = openvswitch,l2population" /etc/neutron/plugins/ml2/ml2_conf.ini
sed -i "/\[ml2]$/atenant_network_types = vxlan,vlan" /etc/neutron/plugins/ml2/ml2_conf.ini
sed -i "/\[ml2]$/atype_drivers = flat,vlan,vxlan" /etc/neutron/plugins/ml2/ml2_conf.ini
#[ml2_type_flat]
if [ "`grep '^\[ml2_type_flat\]' /etc/neutron/plugins/ml2/ml2_conf.ini`" != "[ml2_type_flat]" ]; then
    echo "[ml2_type_flat]" >> /etc/neutron/plugins/ml2/ml2_conf.ini
fi
sed -i "/\[ml2_type_flat]$/aflat_networks = provider" /etc/neutron/plugins/ml2/ml2_conf.ini
#[ml2_type_vlan]
if [ "`grep '^\[ml2_type_vlan\]' /etc/neutron/plugins/ml2/ml2_conf.ini`" != "[ml2_type_vlan]" ]; then
    echo "[ml2_type_vlan]" >> /etc/neutron/plugins/ml2/ml2_conf.ini
fi
sed -i "/\[ml2_type_vlan]$/anetwork_vlan_ranges = physicnet:1000:2000" /etc/neutron/plugins/ml2/ml2_conf.ini
#[ml2_type_vxlan]
if [ "`grep '^\[ml2_type_vxlan\]' /etc/neutron/plugins/ml2/ml2_conf.ini`" != "[ml2_type_vxlan]" ]; then
    echo "[ml2_type_vxlan]" >> /etc/neutron/plugins/ml2/ml2_conf.ini
fi
sed -i "/\[ml2_type_vxlan]$/avni_ranges = 30000:31000" /etc/neutron/plugins/ml2/ml2_conf.ini
#[securitygroup]
if [ "`grep '^\[securitygroup\]' /etc/neutron/plugins/ml2/ml2_conf.ini`" != "[securitygroup]" ]; then
    echo "[securitygroup]" >> /etc/neutron/plugins/ml2/ml2_conf.ini
fi
sed -i "/\[securitygroup]$/aenable_ipset = true" /etc/neutron/plugins/ml2/ml2_conf.ini
#/etc/neutron/plugins/ml2/openvswitch_agent.ini
cp /etc/neutron/plugins/ml2/openvswitch_agent.ini /etc/neutron/plugins/ml2/openvswitch_agent.ini.bak
#[agent]
#sed -i "/tunnel_types = /atunnel_types = vxlan" /etc/neutron/plugins/ml2/openvswitch_agent.ini
#[ovs]
#sed -i "/\[ovs]$/alocal_ip = 10.214.1.2" /etc/neutron/plugins/ml2/openvswitch_agent.ini
#sed -i "/\[ovs]$/atun_peer_patch_port = patch-int" /etc/neutron/plugins/ml2/openvswitch_agent.ini
#sed -i "/\[ovs]$/aint_peer_patch_port = patch-tun" /etc/neutron/plugins/ml2/openvswitch_agent.ini
#sed -i "/\[ovs]$/atunnel_bridge = br-tun" /etc/neutron/plugins/ml2/openvswitch_agent.ini
#[securitygroup]
if [ "`grep '^\[securitygroup\]' /etc/neutron/plugins/ml2/openvswitch_agent.ini`" != "[securitygroup]" ]; then
    echo "[securitygroup]" >> /etc/neutron/plugins/ml2/openvswitch_agent.ini
fi
sed -i "/\[securitygroup]$/aenable_security_group = true" /etc/neutron/plugins/ml2/openvswitch_agent.ini
sed -i "/\[securitygroup]$/afirewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver" /etc/neutron/plugins/ml2/openvswitch_agent.ini
cp /etc/neutron/l3_agent.ini /etc/neutron/l3_agent.ini.bak
if [ "`grep '^\[DEFAULT\]' /etc/neutron/l3_agent.ini`" != "[DEFAULT]" ]; then
    echo "[DEFAULT]" >> /etc/neutron/l3_agent.ini
fi
sed -i "/\[DEFAULT]$/ainterface_driver = neutron.agent.linux.interface.OVSInterfaceDriver" /etc/neutron/l3_agent.ini
cp /etc/neutron/dhcp_agent.ini /etc/neutron/dhcp_agent.ini.bak
if [ "`grep '^\[DEFAULT\]' /etc/neutron/dhcp_agent.ini`" != "[DEFAULT]" ]; then
    echo "[DEFAULT]" >> /etc/neutron/dhcp_agent.ini
fi
# the original wrote enable_isolated_metadata into l3_agent.ini; it belongs in dhcp_agent.ini (bug fix)
sed -i "/\[DEFAULT]$/aenable_isolated_metadata = true" /etc/neutron/dhcp_agent.ini
sed -i "/\[DEFAULT]$/adhcp_driver = neutron.agent.linux.dhcp.Dnsmasq" /etc/neutron/dhcp_agent.ini
sed -i "/\[DEFAULT]$/ainterface_driver = neutron.agent.linux.interface.OVSInterfaceDriver" /etc/neutron/dhcp_agent.ini
sed -i "/force_metadata = /aforce_metadata = true" /etc/neutron/dhcp_agent.ini
#metadata_agent.ini
cp /etc/neutron/metadata_agent.ini /etc/neutron/metadata_agent.ini.bak
if [ "`grep '^\[DEFAULT\]' /etc/neutron/metadata_agent.ini`" != "[DEFAULT]" ]; then
    echo "[DEFAULT]" >> /etc/neutron/metadata_agent.ini
fi
sed -i "/\[DEFAULT]$/ametadata_proxy_shared_secret = $ALL_PASS" /etc/neutron/metadata_agent.ini
sed -i "/\[DEFAULT]$/anova_metadata_host = $CONTROLLER_IP" /etc/neutron/metadata_agent.ini
#nova.conf
if [ "`grep '^\[neutron\]' /etc/nova/nova.conf`" != "[neutron]" ]; then
    echo "[neutron]" >> /etc/nova/nova.conf
fi
sed -i "/\[neutron]$/ametadata_proxy_shared_secret = $ALL_PASS" /etc/nova/nova.conf
sed -i "/\[neutron]$/aservice_metadata_proxy = true" /etc/nova/nova.conf
sed -i "/\[neutron]$/apassword = $ALL_PASS" /etc/nova/nova.conf
sed -i "/\[neutron]$/ausername = neutron" /etc/nova/nova.conf
sed -i "/\[neutron]$/aproject_name = service" /etc/nova/nova.conf
sed -i "/\[neutron]$/aregion_name = RegionOne" /etc/nova/nova.conf
sed -i "/\[neutron]$/auser_domain_name = Default" /etc/nova/nova.conf
sed -i "/\[neutron]$/aproject_domain_name = Default" /etc/nova/nova.conf
sed -i "/\[neutron]$/aauth_type = password" /etc/nova/nova.conf
sed -i "/\[neutron]$/aauth_url = http://$CONTROLLER_IP:5000" /etc/nova/nova.conf
sed -i "/\[neutron]$/aurl = http://$CONTROLLER_IP:9696" /etc/nova/nova.conf
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
systemctl restart openstack-nova-api.service
systemctl enable neutron-server.service \
  neutron-openvswitch-agent.service neutron-dhcp-agent.service \
  neutron-metadata-agent.service neutron-l3-agent.service
systemctl start neutron-server.service \
  neutron-openvswitch-agent.service neutron-dhcp-agent.service \
  neutron-metadata-agent.service neutron-l3-agent.service

#===Dashboard===
yum install openstack-dashboard -y
#/etc/openstack-dashboard/local_settings
cp /etc/openstack-dashboard/local_settings /etc/openstack-dashboard/local_settings.bak
sed -i "/OPENSTACK_HOST = /cOPENSTACK_HOST = \"$CONTROLLER_IP\"" /etc/openstack-dashboard/local_settings
sed -i "/ALLOWED_HOSTS = /cALLOWED_HOSTS = ['*']" /etc/openstack-dashboard/local_settings
sed -i "/SESSION_ENGINE = /aSESSION_ENGINE = 'django.contrib.sessions.backends.cache'" /etc/openstack-dashboard/local_settings
sed -i "/OPENSTACK_KEYSTONE_URL =/aOPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True" /etc/openstack-dashboard/local_settings
sed -i "/OPENSTACK_KEYSTONE_URL =/aOPENSTACK_KEYSTONE_DEFAULT_DOMAIN = \"Default\"" /etc/openstack-dashboard/local_settings
sed -i "/OPENSTACK_KEYSTONE_URL =/aOPENSTACK_KEYSTONE_DEFAULT_ROLE = \"user\"" /etc/openstack-dashboard/local_settings
sed -i "/TIME_ZONE/c#TIME_ZONE = UTC" /etc/openstack-dashboard/local_settings
# append to the real file with a heredoc; the original appended to ./local_settings
# in the current directory and its inner double quotes broke the echo strings (bug fix)
cat >> /etc/openstack-dashboard/local_settings <<'LSEOF'
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'CONTROLLER_IP_PLACEHOLDER:11211',
    }
}
OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 3,
}
LSEOF
sed -i "s/CONTROLLER_IP_PLACEHOLDER/$CONTROLLER_IP/" /etc/openstack-dashboard/local_settings
sed -i "/WSGIScriptAlias/iWSGIApplicationGroup %{GLOBAL}" /etc/httpd/conf.d/openstack-dashboard.conf
#Because of the bugs of Train in CentOS 7.8, we need to do something to work around them.
echo "* soft nofile 1024000" >> /etc/security/limits.conf
echo "* hard nofile 1024000" >> /etc/security/limits.conf
yum install -y lsof
lsof | wc -l
cd /usr/share/openstack-dashboard/
python manage.py make_web_conf --apache > /etc/httpd/conf.d/openstack-dashboard.conf
sed -i "s/WEBROOT = '\/'/WEBROOT = '\/dashboard'/g" /usr/share/openstack-dashboard/openstack_dashboard/defaults.py
sed -i "s/WEBROOT = '\/'/WEBROOT = '\/dashboard'/g" /usr/share/openstack-dashboard/openstack_dashboard/test/settings.py
cd /usr/share/openstack-dashboard/static/dashboard/js/
# the original piped ls through awk '{print}', which is a no-op
for i in `ls`
do
    sed -i "s/WEBROOT = '\/'/WEBROOT = '\/dashboard'/g" $i
    sed -i "s/WEBROOT='\/'/WEBROOT='\/dashboard'/g" $i
    sed -i "s/WEBROOT = \"\/\"/WEBROOT = \"\/dashboard\"/g" $i
    sed -i "s/WEBROOT=\"\/\"/WEBROOT=\"\/dashboard\"/g" $i
done
sed -i "/WSGIScriptAlias/c\ \ \ \ WSGIScriptAlias /dashboard /usr/share/openstack-dashboard/openstack_dashboard/wsgi.py" /etc/httpd/conf.d/openstack-dashboard.conf
sed -i "/Alias/c\ \ \ \ Alias /dashboard/static /usr/share/openstack-dashboard/static" /etc/httpd/conf.d/openstack-dashboard.conf
systemctl restart httpd.service memcached.service
systemctl status httpd memcached

#===MariaDB limits===
sed -i "/\[Service]$/aLimitNOFILE=65535" /usr/lib/systemd/system/mariadb.service
sed -i "/\[Service]$/aLimitNPROC=65535" /usr/lib/systemd/system/mariadb.service
systemctl daemon-reload
systemctl restart mariadb.service

#===Fwaas Lbaasv2 Vpnaas===
yum install openstack-neutron-fwaas -y
neutron-db-manage --subproject neutron-fwaas upgrade head
#lbaasv2
yum install openstack-neutron-lbaas -y
neutron-db-manage --subproject neutron-lbaas upgrade head
#vpnaas
yum install openstack-neutron-vpnaas -y
neutron-db-manage --subproject neutron-vpnaas upgrade head

###8.Block Storage service
##Discover compute
#source /root/admin-openrc
#openstack compute service list --service nova-compute
#su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
##add image
#openstack image create "cirros" --file cirros-0.4.0-x86_64-disk.img --disk-format qcow2 --container-format bare --public

echo -e "\033[45;37mOpenstack Train controller node install end !!!\033[0m"

September 7, 2021 · 13 min · jiezi

On resolving an OpenStack error: During sync_power_state the instance has a pending task (deleting). Skip

Today an instance in OpenStack failed to delete: the dashboard showed its status stuck at "deleting", and the backend log /var/log/nova/nova-compute.log reported the error "During sync_power_state the instance has a pending task (deleting). Skip". On the storage side the corresponding volume had already been removed, so I decided to restart the compute service: systemctl restart openstack-nova-compute. After refreshing the dashboard, all of the stuck instances had been deleted successfully.
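If restarting nova-compute alone does not clear the state, one possible follow-up (a sketch; the instance UUID is a placeholder and admin credentials are assumed) is to reset the instance state and retry the delete:

# Reset the stuck instance, then issue the delete again
source /root/admin-openrc
nova reset-state --active <instance-uuid>
openstack server delete <instance-uuid>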

August 12, 2020 · 1 min · jiezi

Openstack: Fixing scrambled NIC names and ordering after rebooting a CentOS 7 instance

Modify the kernel parameters
1. Modify the kernel boot parameters. To restore the old-style NIC names (ethX), edit GRUB_CMDLINE_LINUX in /etc/default/grub and add net.ifnames=0 (a sample of the result is sketched below).
2. Regenerate the kernel boot configuration (BIOS): grub2-mkconfig -o /boot/grub2/grub.cfg
Modify the udev rules file: delete the existing contents of /usr/lib/udev/rules.d/60-net.rules, then write each NIC's MAC address and its new name into the file, one line per NIC (see the sketch below).
Modify the NIC config files under /etc/sysconfig/network-scripts/; only the MAC address in HWADDR needs to be updated.
Reboot the instance. After the reboot, NIC order, MAC, and IP are all consistent.
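The original post's screenshots of the grub line and the udev rules did not survive extraction; the following is a minimal sketch of what such files typically contain, with placeholder MAC addresses:

# /etc/default/grub (net.ifnames=0 appended to GRUB_CMDLINE_LINUX)
GRUB_CMDLINE_LINUX="crashkernel=auto rhgb quiet net.ifnames=0"

# /usr/lib/udev/rules.d/60-net.rules (one line per NIC; MACs are placeholders)
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="fa:16:3e:00:00:01", NAME="eth0"
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="fa:16:3e:00:00:02", NAME="eth1"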

April 9, 2019 · 1 min · jiezi

Openstack: Mismatched guest CPU type causing slow VM creation and boot

An instance created in OpenStack from a qcow2 image was automatically assigned the EPYC CPU type, yet the compute node's CPU is Intel, so set cpu_mode for the compute node in the nova configuration:
[libvirt]
virt_type = kvm
cpu_mode = host-model
The configuration descriptions below are quoted from the official OpenStack documentation:
virt_type = kvm  (StrOpt) Libvirt domain type (valid options are: kvm, lxc, qemu, uml, xen and parallels)
cpu_mode = None  (StrOpt) Set to "host-model" to clone the host CPU feature flags; to "host-passthrough" to use the host CPU model exactly; to "custom" to use a named CPU model; to "none" to not set any CPU model. If virt_type="kvm|qemu", it will default to "host-model", otherwise it will default to "none"
cpu_model = None  (StrOpt) Set to a named libvirt CPU model (see names listed in /usr/share/libvirt/cpu_map.xml). Only has effect if cpu_mode="custom" and virt_type="kvm|qemu" ...
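A quick way to confirm the change took effect (a sketch; the instance name is a placeholder):

# Restart nova-compute, then inspect the CPU element of a newly built guest
systemctl restart openstack-nova-compute
virsh dumpxml <instance-name> | grep -A2 "<cpu"   # should show mode='host-model'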

April 9, 2019 · 1 min · jiezi

Openstack error: Failed to allocate network(s), not rescheduling

Openstack nova-compute reported:
2019-04-04 16:14:28.788 5468 ERROR nova.compute.manager [req-c8cdadc1-292c-4e48-a0e4-5cbec9bd1874 e5029b459c3d4c36bb02570dca7ece7a b6eb86a5853b45f98e89c69a2136f5ff - default default] [instance: 04931b88-3618-4c36-b49a-2248a9cf5fd0] Failed to allocate network(s): VirtualInterfaceCreateException: \u865a\u62df\u63a5\u53e3\u521b\u5efa\u5931\u8d25
2019-04-04 16:14:28.788 5468 ERROR nova.compute.manager [instance: 04931b88-3618-4c36-b49a-2248a9cf5fd0] Traceback (most recent call last):
2019-04-04 16:14:28.788 5468 ERROR nova.compute.manager [instance: 04931b88-3618-4c36-b49a-2248a9cf5fd0]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2032, in _build_and_run_instance
2019-04-04 16:14:28.788 5468 ERROR nova.compute.manager [instance: 04931b88-3618-4c36-b49a-2248a9cf5fd0]     block_device_info=block_device_info)
2019-04-04 16:14:28.788 5468 ERROR nova.compute.manager [instance: 04931b88-3618-4c36-b49a-2248a9cf5fd0]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 3107, in spawn
2019-04-04 16:14:28.788 5468 ERROR nova.compute.manager [instance: 04931b88-3618-4c36-b49a-2248a9cf5fd0]     destroy_disks_on_failure=True)
2019-04-04 16:14:28.788 5468 ERROR nova.compute.manager [instance: 04931b88-3618-4c36-b49a-2248a9cf5fd0]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 5609, in _create_domain_and_network
2019-04-04 16:14:28.788 5468 ERROR nova.compute.manager [instance: 04931b88-3618-4c36-b49a-2248a9cf5fd0]     raise exception.VirtualInterfaceCreateException()
2019-04-04 16:14:28.788 5468 ERROR nova.compute.manager [instance: 04931b88-3618-4c36-b49a-2248a9cf5fd0] VirtualInterfaceCreateException: \u865a\u62df\u63a5\u53e3\u521b\u5efa\u5931\u8d25
2019-04-04 16:14:28.788 5468 ERROR nova.compute.manager [instance: 04931b88-3618-4c36-b49a-2248a9cf5fd0]
2019-04-04 16:14:28.792 5468 ERROR nova.compute.manager [req-c8cdadc1-292c-4e48-a0e4-5cbec9bd1874 e5029b459c3d4c36bb02570dca7ece7a b6eb86a5853b45f98e89c69a2136f5ff - default default] [instance: 04931b88-3618-4c36-b49a-2248a9cf5fd0] The build of instance 04931b88-3618-4c36-b49a-2248a9cf5fd0 was aborted: failed to allocate network(s), not rescheduling.: BuildAbortException: \u5b9e\u4f8b04931b88-3618-4c36-b49a-2248a9cf5fd0\u7684\u6784\u5efa\u5df2\u4e2d\u6b62\uff1a\u5206\u914d\u7f51\u7edc\u5931\u8d25\uff0c\u4e0d\u91cd\u65b0\u8c03\u5ea6\u3002
2019-04-04 16:14:28.792 5468 ERROR nova.compute.manager [instance: 04931b88-3618-4c36-b49a-2248a9cf5fd0] Traceback (most recent call last):
2019-04-04 16:14:28.792 5468 ERROR nova.compute.manager [instance: 04931b88-3618-4c36-b49a-2248a9cf5fd0]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1841, in _do_build_and_run_instance
2019-04-04 16:14:28.792 5468 ERROR nova.compute.manager [instance: 04931b88-3618-4c36-b49a-2248a9cf5fd0]     filter_properties, request_spec)
2019-04-04 16:14:28.792 5468 ERROR nova.compute.manager [instance: 04931b88-3618-4c36-b49a-2248a9cf5fd0]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2092, in _build_and_run_instance
2019-04-04 16:14:28.792 5468 ERROR nova.compute.manager [instance: 04931b88-3618-4c36-b49a-2248a9cf5fd0]     reason=msg)
2019-04-04 16:14:28.792 5468 ERROR nova.compute.manager [instance: 04931b88-3618-4c36-b49a-2248a9cf5fd0] BuildAbortException: \u5b9e\u4f8b04931b88-3618-4c36-b49a-2248a9cf5fd0\u7684\u6784\u5efa\u5df2\u4e2d\u6b62\uff1a\u5206\u914d\u7f51\u7edc\u5931\u8d25\uff0c\u4e0d\u91cd\u65b0\u8c03\u5ea6\u3002
2019-04-04 16:14:28.792 5468 ERROR nova.compute.manager [instance: 04931b88-3618-4c36-b49a-2248a9cf5fd0]
The cause of the above is that the overlay network between the controller node and the compute node was not connected. ...
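A minimal sketch of how to confirm the overlay (VXLAN) path between the two nodes, assuming the tunnel endpoint IPs configured in each node's openvswitch_agent.ini; the IP is a placeholder:

# On the controller, ping the compute node's tunnel endpoint IP
ping -c 3 <compute-tunnel-ip>
# Check that the VXLAN tunnel ports exist on the tunnel bridge
ovs-vsctl show | grep -A3 br-tun
# Confirm both agents are reported alive
openstack network agent list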

April 4, 2019 · 1 min · jiezi

Openstack: Modifying the system's network quotas

Preface: After neutron is installed and configured, OpenStack enforces dedicated quotas on neutron to govern every tenant's use of network resources, preventing any one tenant from consuming so much that other tenants are affected. Similar to nova's quota, neutron uses a separate driver to implement network quota control.

2. neutron's default quotas. The defaults limit network, port, router, subnet, and floatingip. The quota settings can be read from the neutron config file:
[root@controller ~]# vim /etc/neutron/neutron.conf
[quotas]
quota_driver = neutron.db.quota_db.DbQuotaDriver   # the quota driver
quota_items = network,subnet,port                  # resources covered by quotas
default_quota = -1            # default quota; -1 means unlimited (not enabled)
quota_network = 10            # number of networks that may be created
quota_subnet = 10             # number of subnets that may be created
quota_port = 50               # number of ports allowed
quota_security_group = 10     # number of security groups
quota_security_group_rule = 100   # number of security group rules
quota_vip = 10                # number of VIPs; quota_member and quota_health_monitors below apply to LBaaS
quota_pool = 10               # number of pools
quota_member = -1             # number of members
quota_health_monitors = -1    # number of monitors
quota_router = 10             # number of routers
quota_floatingip = 50         # number of floating IPs

3. Modifying neutron's quotas. View neutron's current quota for a tenant:
[root@controller ~]# keystone tenant-list
+----------------------------------+----------+---------+
| id                               | name     | enabled |
+----------------------------------+----------+---------+
| 842ab3268a2c47e6a4b0d8774de805ae | admin    | True    |
| 7ff1dfb5a6f349958c3a949248e56236 | companyA | True    |   # get the tenant's UUID here
| 10d1465c00d049fab88dec1af0f56b1b | demo     | True    |
| 3b57a14f7c354a979c9f62b60f31a331 | service  | True    |
+----------------------------------+----------+---------+
[root@controller ~]# neutron quota-show --tenant-id 7ff1dfb5a6f349958c3a949248e56236
+---------------------+-------+
| Field               | Value |
+---------------------+-------+
| floatingip          | 50    |
| health_monitor      | -1    |
| member              | -1    |
| network              | 10    |
| pool                | 10    |
| port                | 50    |   # every VM needs an IP, i.e. one port, so this quota is easily exceeded
| router              | 10    |
| security_group      | 10    |
| security_group_rule | 100   |
| subnet              | 10    |
| vip                 | 10    |
+---------------------+-------+
Update the neutron quota:
[root@controller ~]# neutron quota-update --network 20 --subnet 20 --port 100 --router 5 --floatingip 100 --security-group 10 --security-group-rule 100 --tenant-id 7ff1dfb5a6f349958c3a949248e56236
+---------------------+-------+
| Field               | Value |
+---------------------+-------+
| floatingip          | 100   |
| health_monitor      | -1    |
| member              | -1    |
| network             | 20    |
| pool                | 10    |
| port                | 100   |
| router              | 5     |
| security_group      | 10    |
| security_group_rule | 100   |
| subnet              | 20    |
| vip                 | 10    |
+---------------------+-------+
Verify the quota settings:
[root@controller ~]# neutron quota-show --tenant-id 7ff1dfb5a6f349958c3a949248e56236
+---------------------+-------+
| Field               | Value |
+---------------------+-------+
| floatingip          | 100   |
| health_monitor      | -1    |
| member              | -1    |
| network             | 20    |
| pool                | 10    |
| port                | 100   |
| router              | 5     |
| security_group      | 10    |
| security_group_rule | 100   |
| subnet              | 20    |
| vip                 | 10    |
+---------------------+-------+

4. Counting ports:
[root@controller ~]# neutron port-list
+--------------------------------------+------+-------------------+------------------------------------------------------------------------------------+
| id                                   | name | mac_address       | fixed_ips                                                                          |
+--------------------------------------+------+-------------------+------------------------------------------------------------------------------------+
| 0060ec4a-957d-4571-b730-6b4a9bb3baf8 |      | fa:16:3e:48:42:3d | {"subnet_id": "9654a807-d4fa-49f1-abb6-2e45d776c69f", "ip_address": "10.16.4.19"}  |
| 00942be0-a3a9-471d-a4ba-336db0ee1539 |      | fa:16:3e:73:75:03 | {"subnet_id": "ad4a5ffc-3ccc-42c4-89a1-61e7b18632a3", "ip_address": "10.16.6.96"}  |
| 0119045c-8219-4744-bd58-a7e77294832c |      | fa:16:3e:10:ed:7f | {"subnet_id": "9654a807-d4fa-49f1-abb6-2e45d776c69f", "ip_address": "10.16.4.71"}  |
| 04f7d8ea-1849-4938-9ef7-e8114893132f |      | fa:16:3e:50:86:1b | {"subnet_id": "ad4a5ffc-3ccc-42c4-89a1-61e7b18632a3", "ip_address": "10.16.6.27"}  |
[root@controller ~]# neutron port-list |wc -l    # when this exceeds the quota, the quota must be raised
194

5. Summary: As time goes on and more instances join OpenStack, the number of ports grows with them; one IP corresponds to one port, so once the port quota is reached OpenStack blocks the user from allocating more VMs, and the neutron quota has to be raised. For quota-related errors, consult /var/log/neutron/neutron-server.log; the log messages pinpoint the cause, so the details are not repeated here.

6. Appendix: a reading of the code that implements neutron's quotas
[root@controller ~]# vim /usr/lib/python2.6/site-packages/neutron/db/quota_db.py
import sqlalchemy as sa
from neutron.common import exceptions
from neutron.db import model_base
from neutron.db import models_v2
'''
Schema of the quotas table; a tenant with no row here inherits the defaults:
mysql> desc quotas;
+-----------+--------------+------+-----+---------+-------+
| Field     | Type         | Null | Key | Default | Extra |
+-----------+--------------+------+-----+---------+-------+
| id        | varchar(36)  | NO   | PRI | NULL    |       |
| tenant_id | varchar(255) | YES  | MUL | NULL    |       |
| resource  | varchar(255) | YES  |     | NULL    |       |
| limit     | int(11)      | YES  |     | NULL    |       |
+-----------+--------------+------+-----+---------+-------+
'''
class Quota(model_base.BASEV2, models_v2.HasId):
    """Represent a single quota override for a tenant.

    If there is no row for a given tenant id and resource, then the
    default for the quota class is used.
    """
    tenant_id = sa.Column(sa.String(255), index=True)
    resource = sa.Column(sa.String(255))
    limit = sa.Column(sa.Integer)

'''
The concrete quota implementation: quota CRUD backed by the database.
'''
class DbQuotaDriver(object):
    """Driver to perform necessary checks to enforce quotas and obtain quota
    information. The default driver utilizes the local database.
    """
    '''
    Get a tenant's quota; this backs `neutron quota-show --tenant-id <uuid>`.
    '''
    @staticmethod
    def get_tenant_quotas(context, resources, tenant_id):
        """Given a list of resources, retrieve the quotas for the given tenant.

        :param context: The request context, for access checks.
        :param resources: A dictionary of the registered resource keys.
        :param tenant_id: The ID of the tenant to return quotas for.
        :return dict: from resource name to dict of name and limit
        """
        # init with defaults: the default quota items (network, subnet, port, ...)
        tenant_quota = dict((key, resource.default)
                            for key, resource in resources.items())
        # update with tenant specific limits: overlay the latest rows from the DB
        q_qry = context.session.query(Quota).filter_by(tenant_id=tenant_id)
        tenant_quota.update((q['resource'], q['limit']) for q in q_qry)
        return tenant_quota

    '''
    Delete a tenant's quota rows, i.e. `neutron quota-delete`; afterwards the
    tenant falls back to the defaults from the config file.
    '''
    @staticmethod
    def delete_tenant_quota(context, tenant_id):
        """Delete the quota entries for a given tenant_id.

        After deletion, this tenant will use default quota values in conf.
        """
        # query all quota rows, filter down to this tenant, then delete()
        with context.session.begin():
            tenant_quotas = context.session.query(Quota)
            tenant_quotas = tenant_quotas.filter_by(tenant_id=tenant_id)
            tenant_quotas.delete()

    '''
    Get every tenant's quota, i.e. what `neutron quota-list` shows.
    '''
    @staticmethod
    def get_all_quotas(context, resources):
        """Given a list of resources, retrieve the quotas for the all tenants.

        :param context: The request context, for access checks.
        :param resources: A dictionary of the registered resource keys.
        :return quotas: list of dict of tenant_id:, resourcekey1: resourcekey2: ...
        """
        tenant_default = dict((key, resource.default)
                              for key, resource in resources.items())
        all_tenant_quotas = {}
        for quota in context.session.query(Quota):
            tenant_id = quota['tenant_id']
            # avoid setdefault() because only want to copy when actually req'd
            # no row in quotas means the default applies, so copy the defaults;
            # otherwise inherit the stored values
            tenant_quota = all_tenant_quotas.get(tenant_id)
            if tenant_quota is None:
                tenant_quota = tenant_default.copy()
                tenant_quota['tenant_id'] = tenant_id
                all_tenant_quotas[tenant_id] = tenant_quota
            tenant_quota[quota['resource']] = quota['limit']
        return all_tenant_quotas.values()

    '''
    Update a quota setting, i.e. the implementation behind `neutron quota-update`.
    '''
    @staticmethod
    def update_quota_limit(context, tenant_id, resource, limit):
        with context.session.begin():
            tenant_quota = context.session.query(Quota).filter_by(
                tenant_id=tenant_id, resource=resource).first()
            # update an existing row, or insert a new one for this resource
            if tenant_quota:
                tenant_quota.update({'limit': limit})
            else:
                tenant_quota = Quota(tenant_id=tenant_id,
                                     resource=resource,
                                     limit=limit)
                context.session.add(tenant_quota)

    def _get_quotas(self, context, tenant_id, resources, keys):
        """Retrieves the quotas for specific resources.

        A helper method which retrieves the quotas for the specific resources
        identified by keys, and which apply to the current context.

        :param context: The request context, for access checks.
        :param tenant_id: the tenant_id to check quota.
        :param resources: A dictionary of the registered resources.
        :param keys: A list of the desired quotas to retrieve.
        """
        desired = set(keys)
        sub_resources = dict((k, v) for k, v in resources.items()
                             if k in desired)
        # Make sure we accounted for all of them...
        if len(keys) != len(sub_resources):
            unknown = desired - set(sub_resources.keys())
            raise exceptions.QuotaResourceUnknown(unknown=sorted(unknown))
        # Grab and return the quotas (without usages)
        quotas = DbQuotaDriver.get_tenant_quotas(
            context, sub_resources, tenant_id)
        return dict((k, v) for k, v in quotas.items())

    '''
    Quota validation: called during request handling to confirm the tenant's
    proposed values are within the quota.
    '''
    def limit_check(self, context, tenant_id, resources, values):
        """Check simple quota limits.

        For limits, those quotas for which there is no usage synchronization
        function, this method checks that a set of proposed values are
        permitted by the limit restriction. It raises QuotaResourceUnknown if
        a given resource is unknown or not a simple limit resource. If any of
        the proposed values is over the defined quota, an OverQuota exception
        is raised with the sorted list of the resources which are too high.
        Otherwise the method returns nothing.

        :param context: The request context, for access checks.
        :param tenant_id: The tenant_id to check the quota.
        :param resources: A dictionary of the registered resources.
        :param values: A dictionary of the values to check against the quota.
        """
        # Ensure no value is less than zero: quota values may not be negative
        unders = [key for key, val in values.items() if val < 0]
        if unders:
            raise exceptions.InvalidQuotaValue(unders=sorted(unders))
        # Get the applicable quotas
        quotas = self._get_quotas(context, tenant_id, resources, values.keys())
        # Check the quotas and construct a list of the resources that
        # would be put over limit by the desired values
        overs = [key for key, val in values.items()
                 if quotas[key] >= 0 and quotas[key] < val]
        if overs:
            raise exceptions.OverQuota(overs=sorted(overs))
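On newer releases the standalone neutron client is deprecated; roughly the same update can be done with the unified client. A sketch, reusing the tenant ID from the example above:

openstack quota set --networks 20 --subnets 20 --ports 100 --routers 5 --floating-ips 100 7ff1dfb5a6f349958c3a949248e56236
openstack quota show 7ff1dfb5a6f349958c3a949248e56236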

April 3, 2019 · 5 min · jiezi

Openstack: Fixing the "Too many open files" error

The OpenStack web UI would not load; the page returned a 500 error, and the httpd error_log showed the following:
[Tue Apr 02 14:01:05.658276 2019] [:error] [pid 9245] File "/usr/lib/python2.7/site-packages/requests/sessions.py", line 518, in request
[Tue Apr 02 14:01:05.658280 2019] [:error] [pid 9245] File "/usr/lib/python2.7/site-packages/requests/sessions.py", line 639, in send
[Tue Apr 02 14:01:05.658284 2019] [:error] [pid 9245] File "/usr/lib/python2.7/site-packages/requests/adapters.py", line 438, in send
[Tue Apr 02 14:01:05.658287 2019] [:error] [pid 9245] File "/usr/lib/python2.7/site-packages/requests/packages/urllib3/connectionpool.py", line 588, in urlopen
[Tue Apr 02 14:01:05.658291 2019] [:error] [pid 9245] File "/usr/lib/python2.7/site-packages/requests/packages/urllib3/connectionpool.py", line 241, in _get_conn
[Tue Apr 02 14:01:05.658296 2019] [:error] [pid 9245] File "/usr/lib/python2.7/site-packages/requests/packages/urllib3/util/connection.py", line 27, in is_connection_dropped
[Tue Apr 02 14:01:05.658300 2019] [:error] [pid 9245] File "/usr/lib/python2.7/site-packages/requests/packages/urllib3/util/wait.py", line 33, in wait_for_read
[Tue Apr 02 14:01:05.658304 2019] [:error] [pid 9245] File "/usr/lib/python2.7/site-packages/requests/packages/urllib3/util/wait.py", line 22, in _wait_for_io_events
[Tue Apr 02 14:01:05.658308 2019] [:error] [pid 9245] File "/usr/lib/python2.7/site-packages/requests/packages/urllib3/util/selectors.py", line 581, in DefaultSelector
[Tue Apr 02 14:01:05.658312 2019] [:error] [pid 9245] File "/usr/lib/python2.7/site-packages/requests/packages/urllib3/util/selectors.py", line 394, in init
[Tue Apr 02 14:01:05.658316 2019] [:error] [pid 9245] IOError: [Errno 24] Too many open files
Fix: raise the operating system's open-file limit.
Log in to the controller node and run:
[root@controller ~]# ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 60587
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 60587
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
The system default is 1024. Check the current number of open files:
[root@controller ~]# lsof | wc -l
174911
Edit vim /etc/security/limits.conf and append the following at the end of the file:
* soft nofile 1024000
* hard nofile 1024000
The * applies to all users. Reboot the server after the change; the setting then takes effect. ...
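One caveat worth adding: limits.conf applies to PAM login sessions, while services started by systemd take their file-descriptor limit from the unit file. A sketch of raising the limit for httpd specifically, assuming a systemd-based CentOS 7 setup:

mkdir -p /etc/systemd/system/httpd.service.d
cat > /etc/systemd/system/httpd.service.d/limits.conf <<EOF
[Service]
LimitNOFILE=1024000
EOF
systemctl daemon-reload
systemctl restart httpd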

April 2, 2019 · 2 min · jiezi

How open source accelerates the NFV transition

By Pam Baker. Ahead of the upcoming ONS (Open Networking Summit), we spoke with Thomas Nadeau, Red Hat's technical director for NFV, about the role of open source in telecom service provider innovation. Red Hat is known for its open source culture and business model; open source is more than a way to build software, and its "open source as the path to innovation" resonates on many levels. Ahead of ONS, we talked with Red Hat's Thomas Nadeau, who gave a keynote at last year's event, to hear his views on open source's role in service provider innovation. One reason open source is so widely accepted in this industry, he says, is that some very successful projects have become too big for any one company to manage, or to push their boundaries toward further breakthroughs single-handedly. "Projects like Kubernetes are now too big for any one company. These are technologies we as an industry need to work on, because no single company can drive them alone," Nadeau said. "Going forward, to solve these really hard problems, we need open source and the open source software development model." Below are more of his insights on how and where open source is having an innovation impact on telecoms.
Linux.com: Why is open source, generally speaking, central to innovation for telecom service providers?
Nadeau: The first reason is that service providers get more control over their own destiny. Some providers are more actively involved than others. Second, open source means providers do not have to wait a long time for the features they need to be developed. Third, open source frees providers from wrestling with and managing monolithic systems when all they really want is a handful of functions. Fortunately, network equipment vendors are responding to this overkill problem. They are becoming more agile and more modular, and open source is the best way to get there.
Linux.com: In your ONS keynote you said open source gives traditional carriers a level playing field to compete with cloud-scale companies in creating digital services and revenue streams. Please explain how that helps.
Nadeau: That again comes back to Kubernetes. Another is OpenStack. These are the tools these businesses genuinely need, not merely to scale but to exist in today's market. Without open source in virtualization, you are stuck in a proprietary monolith, unable to control your future, and waiting a long time for the features you need to compete. There are two parts to the NFV equation: the infrastructure and the applications. NFV is not just the underlying platform but the continuous push and pull between the platform and the applications that use it. NFV is really the virtualization of functions. It started with monolithic virtual machines (VMs). Then came the "decomposed VM", where individual functions run in a more distributed way for various reasons. Doing so means pulling them apart, which is where SDN comes in, with the control plane separated from the data plane. These concepts also drove changes in the underlying platform, which greatly increased overhead. That in turn sparked interest in container environments as a potential solution, but it is still NFV. You can think of it as the latest version of SOA with composite applications. Kubernetes is the kind of SOA model Google uses; it removes worries about complex networking and storage and lets users spin up applications that just work. For the enterprise application model, that is useful. But not in the NFV case. In the NFV case, on the previous iteration of the OpenStack platform, everyone enjoyed one-to-one network performance. But when we moved it onto OpenShift, which had adopted the latest SOA model, we were back to square one, losing 80% of the performance. So the importance of evolving the underlying platform keeps growing; the pendulum swings, but it is still NFV. Open source lets you adapt to these changes and impacts effectively and quickly. Innovation therefore happens fast and logically, and so do its iterations.
Linux.com: Tell us about the Linux underlying NFV, and why the combination is so powerful.
Nadeau: Linux is open source, and it has always embodied open source in some of its purest senses. Another reason is that it is the dominant choice for the underlying operating system. The reality is that all major networks and all the top networking companies run Linux as the base operating system on all their high-performance platforms. And it is now extremely flexible: you can put it on a Raspberry Pi or on a multimillion-dollar giant router. It is secure, flexible, and scalable, so operators can now really use it as a tool.
Linux.com: Carriers have been trying to redefine themselves. In fact, many are actively looking for ways to move beyond a strictly defensive posture against disruptors, and to go on offense when they are the disruptor. How does network functions virtualization (NFV) help one or both of those strategies?
Nadeau: Telstra and Bell Canada are good examples. They use open source code and work in concert with the partner ecosystem around that code, which lets them do things differently than in the past. Today they do two things differently. One is that they design their own networks. They design their own pieces in many respects, whereas previously they might have had to use monolithic vendor solutions that looked much the same as their competitors'. These telecoms are taking a real "dig in, roll up your sleeves" approach; they know at a deeper level what they are using and can work with distributors or vendors downstream. This goes back to the ecosystem, which is similar to our partner program at Red Hat: the glue that fills the gaps and completes the network solutions the telecoms envision.
Learn more at ONS (Open Networking Summit), April 3-5 at the McEnery Convention Center in San Jose.
KubeCon + CloudNativeCon + Open Source Summit dates: schedule announced April 10, 2019; event held June 24-26, 2019.
KubeCon + CloudNativeCon + Open Source Summit sponsorship plans
KubeCon + CloudNativeCon + Open Source Summit diversity scholarships now accepting applications
KubeCon + CloudNativeCon and Open Source Summit landing in China together for the first time
KubeCon + CloudNativeCon + Open Source Summit ticket window, buy now!
CNCF invites you to join the end-user community

March 26, 2019 · 1 min · jiezi

A roundup of common OpenStack module commands

Keystone
1. List all users:
[root@controller ~]# openstack user list
+----------------------------------+-----------+
| ID                               | Name      |
+----------------------------------+-----------+
| 5dff98790e7849f5a9a6efb8ba984056 | glance    |
| 647ac5b8de4d4b109e3eda68a7d86894 | neutron   |
| a37535eb791a4d219ded1e0163a42aac | demo      |
| aa6274ce2a4b4441bff1522b07afed84 | admin     |
| d217676fff054a3e9c7a3a9b3ed8b1f8 | nova      |
| faf1bbd882a84c4988ad2dae98e1d6a8 | placement |
+----------------------------------+-----------+
2. List the service catalog:
[root@controller ~]# openstack catalog list
+-----------+-----------+-----------------------------------------+
| Name      | Type      | Endpoints                               |
+-----------+-----------+-----------------------------------------+
| nova      | compute   | RegionOne                               |
|           |           |   public: http://controller:8774/v2.1  |
|           |           | RegionOne                               |
|           |           |   internal: http://controller:8774/v2.1 |
|           |           | RegionOne                               |
|           |           |   admin: http://controller:8774/v2.1   |
|           |           |                                         |
| neutron   | network   | RegionOne                               |
|           |           |   internal: http://controller:9696     |
|           |           | RegionOne                               |
|           |           |   admin: http://controller:9696        |
|           |           | RegionOne                               |
|           |           |   public: http://controller:9696       |
|           |           |                                         |
| keystone  | identity  | RegionOne                               |
|           |           |   internal: http://controller:5000/v3/ |
|           |           | RegionOne                               |
|           |           |   public: http://controller:5000/v3/   |
|           |           | RegionOne                               |
|           |           |   admin: http://controller:5000/v3/    |
|           |           |                                         |
| placement | placement | RegionOne                               |
|           |           |   public: http://controller:8778       |
|           |           | RegionOne                               |
|           |           |   admin: http://controller:8778        |
|           |           | RegionOne                               |
|           |           |   internal: http://controller:8778     |
|           |           |                                         |
| glance    | image     | RegionOne                               |
|           |           |   public: http://controller:9292       |
|           |           | RegionOne                               |
|           |           |   internal: http://controller:9292     |
|           |           | RegionOne                               |
|           |           |   admin: http://controller:9292        |
|           |           |                                         |
+-----------+-----------+-----------------------------------------+
Glance
1. List all images:
[root@controller ~]# openstack image list
+--------------------------------------+--------+--------+
| ID                                   | Name   | Status |
+--------------------------------------+--------+--------+
| 92db7e2e-bbe2-4e2e-8c18-4db2a2190ee4 | cirros | active |
| cd83fba7-00a7-44d7-ac7e-5bf66e560e94 | ningsi | active |
+--------------------------------------+--------+--------+
2. Delete a given image:
[root@controller ~]# openstack image delete ID
3. Show an image:
[root@controller ~]# openstack image show 92db7e2e-bbe2-4e2e-8c18-4db2a2190ee4
+------------------+------------------------------------------------------+
| Field            | Value                                                |
+------------------+------------------------------------------------------+
| checksum         | f8ab98ff5e73ebab884d80c9dc9c7290                     |
| container_format | bare                                                 |
| created_at       | 2019-03-25T04:54:34Z                                 |
| disk_format      | qcow2                                                |
| file             | /v2/images/92db7e2e-bbe2-4e2e-8c18-4db2a2190ee4/file |
| id               | 92db7e2e-bbe2-4e2e-8c18-4db2a2190ee4                 |
| min_disk         | 0                                                    |
| min_ram          | 0                                                    |
| name             | cirros                                               |
| owner            | 214903aa0dc24bf39460f82917a0aab2                     |
| protected        | False                                                |
| schema           | /v2/schemas/image                                    |
| size             | 13267968                                             |
| status           | active                                               |
| tags             |                                                      |
| updated_at       | 2019-03-25T04:54:35Z                                 |
| virtual_size     | None                                                 |
| visibility       | public                                               |
+------------------+------------------------------------------------------+
4. Upload an image:
[root@controller ~]# openstack image create "cirros" --file cirros-0.3.5-x86_64-disk.img --disk-format qcow2 --container-format bare --public ...

March 26, 2019 · 2 min · jiezi

Changing the OpenStack network type from VXLAN to VLAN: configuration changes

vi /etc/neutron/plugins/ml2/ml2_conf.ini
Before:
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = openvswitch,l2population
extension_drivers = port_security
[ml2_type_flat]
flat_networks = provider
[ml2_type_vlan]
#network_vlan_ranges = provider:1:1000
[ml2_type_vxlan]
vni_ranges = 1:1000
[securitygroup]
enable_ipset = true
After:
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vlan
mechanism_drivers = openvswitch,l2population
extension_drivers = port_security
[ml2_type_flat]
flat_networks = provider
[ml2_type_vlan]
network_vlan_ranges = provider:1:1000
[ml2_type_vxlan]
# vni_ranges = 1:1000
[securitygroup]
enable_ipset = true
# vi /etc/neutron/plugins/ml2/openvswitch_agent.ini
[agent]
tunnel_types =
l2_population = True
[ovs]
bridge_mappings = provider:br-provider
local_ip = 10.0.0.11
[securitygroup]
firewall_driver = iptables_hybrid
Restart the compute and networking services:
# systemctl restart openstack-nova-api.service
# systemctl restart neutron-*
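After the restart, a quick sanity check that the agents came back and that a VLAN tenant network can actually be allocated; a sketch, with a placeholder network name:

# Agents should all report alive
openstack network agent list
# A new tenant network should now come up as type vlan with a segmentation ID from the configured range
openstack network create test-vlan-net
openstack network show test-vlan-net -c provider:network_type -c provider:segmentation_id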

March 19, 2019 · 1 min · jiezi

OpenStack Queens environment setup (Part 6): the Neutron service

Controller节点:Neutron服务安装网络类型:vxlanLayer2 二层插件采用:openvswitch1、创建neutron数据库,授予权限:$ mysql -u root -pMariaDB [(none)] CREATE DATABASE neutron;MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO ’neutron’@’localhost’ IDENTIFIED BY ‘123456’;MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO ’neutron’@’%’ IDENTIFIED BY ‘123456’;MariaDB [(none)]> exit;2、创建neutron用户:$ . admin-openrc$ openstack user create –domain default –password-prompt neutronUser Password: 123456Repeat User Password: 123456+———————+———————————-+| Field | Value |+———————+———————————-+| domain_id | default || enabled | True || id | 463fd14bf95b4cc49c0378623de56ffa || name | neutron || options | {} || password_expires_at | None |+———————+———————————-+$ openstack role add –project service –user neutron admin3、创建neutron服务实体:$ openstack service create –name neutron –description “OpenStack Networking” network+————-+———————————-+| Field | Value |+————-+———————————-+| description | OpenStack Networking || enabled | True || id | e10e48790ede425ea81e1a62250f124a || name | neutron || type | network |+————-+———————————-+4、创建网络服务API端点:$ openstack endpoint create –region RegionOne network public http://controller:9696+————–+———————————-+| Field | Value |+————–+———————————-+| enabled | True || id | f688ed8f1bf340d78794b600fa512145 || interface | public || region | RegionOne || region_id | RegionOne || service_id | e10e48790ede425ea81e1a62250f124a || service_name | neutron || service_type | network || url | http://controller:9696 |+————–+———————————-+$ openstack endpoint create –region RegionOne network internal http://controller:9696+————–+———————————-+| Field | Value |+————–+———————————-+| enabled | True || id | 571a008230c54cf8bcb1e38a75787c3f || interface | internal || region | RegionOne || region_id | RegionOne || service_id | e10e48790ede425ea81e1a62250f124a || service_name | neutron || service_type | network || url | http://controller:9696 |+————–+———————————-+$ openstack endpoint create –region RegionOne network admin http://controller:9696+————–+———————————-+| Field | Value |+————–+———————————-+| enabled | True || id | a8d654c1c878423789aab3fa7cf634cb || interface | admin || region | RegionOne || region_id | RegionOne || service_id | e10e48790ede425ea81e1a62250f124a || service_name | neutron || service_type | network || url | http://controller:9696 |+————–+———————————-+5、安装及配置:# yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-openvswitch ebtables# vi /etc/neutron/neutron.conf [DEFAULT]auth_strategy = keystonecore_plugin = ml2service_plugins = routerallow_overlapping_ips = Truetransport_url = rabbit://openstack:123456@controllernotify_nova_on_port_status_changes = truenotify_nova_on_port_data_changes = true[database]connection = mysql+pymysql://neutron:123456@controller/neutron[keystone_authtoken]auth_uri = http://controller:5000auth_url = http://controller:35357memcached_servers = controller:11211auth_type = passwordproject_domain_name = defaultuser_domain_name = defaultproject_name = serviceusername = neutronpassword = 123456[nova]auth_url = http://controller:35357auth_type = passwordproject_domain_name = defaultuser_domain_name = defaultregion_name = RegionOneproject_name = serviceusername = novapassword = 123456[oslo_concurrency]lock_path = /var/lib/neutron/tmp# vi /etc/neutron/plugins/ml2/ml2_conf.ini[ml2]type_drivers = flat,vlan,vxlantenant_network_types = vxlanmechanism_drivers = openvswitch,l2populationextension_drivers = port_security[ml2_type_flat]flat_networks = 
# vi /etc/sysctl.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1

# vi /etc/neutron/l3_agent.ini
[DEFAULT]
interface_driver = openvswitch
external_network_bridge =

# vi /etc/neutron/dhcp_agent.ini
[DEFAULT]
interface_driver = openvswitch
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true

# vi /etc/neutron/metadata_agent.ini
[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = 123456

# vi /etc/neutron/plugins/ml2/openvswitch_agent.ini
[agent]
tunnel_types = vxlan
l2_population = True

[ovs]
bridge_mappings = provider:br-provider
local_ip = 10.0.0.11

[securitygroup]
firewall_driver = iptables_hybrid

6. Finish the installation (note: the physical interface must be added as a port on the provider bridge before its IP is moved over):
# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
# systemctl restart openstack-nova-api.service
# systemctl start neutron-server.service
# systemctl start neutron-openvswitch-agent.service
# ovs-vsctl add-br br-provider
# ovs-vsctl add-port br-provider eth0
# ifconfig eth0 0.0.0.0
# ifconfig br-provider 192.100.10.160/24
# route add default gw 192.100.10.1
# systemctl restart neutron-server.service
# systemctl restart neutron-openvswitch-agent.service
# systemctl enable neutron-server.service neutron-openvswitch-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
# systemctl start neutron-dhcp-agent.service neutron-metadata-agent.service
# systemctl enable neutron-l3-agent.service
# systemctl start neutron-l3-agent.service

Compute node:
1. Install and configure (local_ip is the compute node's overlay address, 10.0.0.12 per the environment defined in Part 1):
# yum install openstack-neutron-openvswitch ebtables ipset
# vi /etc/neutron/neutron.conf
[DEFAULT]
auth_strategy = keystone
transport_url = rabbit://openstack:123456@controller

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = 123456

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp

# vi /etc/neutron/plugins/ml2/openvswitch_agent.ini
[ovs]
local_ip = 10.0.0.12

[agent]
tunnel_types = vxlan
l2_population = True

# systemctl restart neutron-openvswitch-agent.service

# vi /etc/nova/nova.conf
...
[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = 123456

2. Finish the installation:
# systemctl restart openstack-nova-compute.service
# systemctl enable neutron-openvswitch-agent.service
# systemctl start neutron-openvswitch-agent.service
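Assuming the agents started cleanly on both nodes, Open vSwitch gives a quick structural check: the agent creates br-int and br-tun automatically, while br-provider was created by hand above (controller only):
# ovs-vsctl show                      # expect br-int and br-tun on both nodes, plus br-provider on the controller
# ovs-vsctl list-ports br-provider    # on the controller, eth0 should be listed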
Controller node, verify operation:
$ . admin-openrc
$ openstack extension list --network
+-----------------------------------------------+---------------------------+---------------------------------------------------------------+
| Name                                          | Alias                     | Description                                                   |
+-----------------------------------------------+---------------------------+---------------------------------------------------------------+
| Default Subnetpools | default-subnetpools | Provides ability to mark and use a subnetpool as the default. |
| Availability Zone | availability_zone | The availability zone extension. |
| Network Availability Zone | network_availability_zone | Availability zone support for network. |
| Auto Allocated Topology Services | auto-allocated-topology | Auto Allocated Topology Services. |
| Neutron L3 Configurable external gateway mode | ext-gw-mode | Extension of the router abstraction for specifying whether SNAT should occur on the external gateway |
| Port Binding | binding | Expose port bindings of a virtual port to external application |
| agent | agent | The agent management extension. |
| Subnet Allocation | subnet_allocation | Enables allocation of subnets from a subnet pool |
| L3 Agent Scheduler | l3_agent_scheduler | Schedule routers among l3 agents |
| Tag support | tag | Enables to set tag on resources. |
| Neutron external network | external-net | Adds external network attribute to network resource. |
| Tag support for resources with standard attribute: trunk, policy, security_group, floatingip | standard-attr-tag | Enables to set tag on resources with standard attribute. |
| Neutron Service Flavors | flavors | Flavor specification for Neutron advanced services. |
| Network MTU | net-mtu | Provides MTU attribute for a network resource. |
| Network IP Availability | network-ip-availability | Provides IP availability data for each network and subnet. |
| Quota management support | quotas | Expose functions for quotas management per tenant |
| If-Match constraints based on revision_number | revision-if-match | Extension indicating that If-Match based on revision_number is supported. |
| HA Router extension | l3-ha | Adds HA capability to routers. |
| Provider Network | provider | Expose mapping of virtual networks to physical networks |
| Multi Provider Network | multi-provider | Expose mapping of virtual networks to multiple physical networks |
| Quota details management support | quota_details | Expose functions for quotas usage statistics per project |
| Address scope | address-scope | Address scopes extension. |
| Neutron Extra Route | extraroute | Extra routes configuration for L3 router |
| Network MTU (writable) | net-mtu-writable | Provides a writable MTU attribute for a network resource. |
| Subnet service types | subnet-service-types | Provides ability to set the subnet service_types field |
| Resource timestamps | standard-attr-timestamp | Adds created_at and updated_at fields to all Neutron resources that have Neutron standard attributes. |
| Neutron Service Type Management | service-type | API for retrieving service providers for Neutron advanced services |
| Router Flavor Extension | l3-flavors | Flavor support for routers. |
| Port Security | port-security | Provides port security |
| Neutron Extra DHCP options | extra_dhcp_opt | Extra options configuration for DHCP. For example PXE boot options to DHCP clients can be specified (e.g. tftp-server, server-ip-address, bootfile-name) |
| Resource revision numbers | standard-attr-revisions | This extension will display the revision number of neutron resources. |
| Pagination support | pagination | Extension that indicates that pagination is enabled. |
| Sorting support | sorting | Extension that indicates that sorting is enabled. |
| security-group | security-group | The security groups extension. |
| DHCP Agent Scheduler | dhcp_agent_scheduler | Schedule networks among dhcp agents |
| Router Availability Zone | router_availability_zone | Availability zone support for router. |
| RBAC Policies | rbac-policies | Allows creation and modification of policies that control tenant access to resources. |
| Tag support for resources: subnet, subnetpool, port, router | tag-ext | Extends tag support to more L2 and L3 resources. |
| standard-attr-description | standard-attr-description | Extension to add descriptions to standard attributes |
| IP address substring filtering | ip-substring-filtering | Provides IP address substring filtering when listing ports |
| Neutron L3 Router | router | Router abstraction for basic L3 forwarding between L2 Neutron networks and access to external networks via a NAT gateway. |
| Allowed Address Pairs | allowed-address-pairs | Provides allowed address pairs |
| project_id field enabled | project-id | Extension that indicates that project_id field is enabled. |
| Distributed Virtual Router | dvr | Enables configuration of Distributed Virtual Routers. |
+-----------------------------------------------+---------------------------+---------------------------------------------------------------+
$ openstack network agent list
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| ID                                   | Agent Type         | Host       | Availability Zone | Alive | State | Binary                    |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| 0fcf4aa9-3592-4552-9b4c-f2b55e23ef6b | DHCP agent         | controller | nova              | :-)   | UP    | neutron-dhcp-agent        |
| 1a08e5eb-d867-4697-850d-bd2400134162 | Metadata agent     | controller | None              | :-)   | UP    | neutron-metadata-agent    |
| 9a33be1e-61bd-4d6b-9ee1-bda6dc7b44cd | Linux bridge agent | controller | None              | :-)   | UP    | neutron-linuxbridge-agent |
| bfdb443d-feee-4006-8618-558b73c3c4a2 | L3 agent           | controller | nova              | :-)   | UP    | neutron-l3-agent          |
| ce5abc8d-504a-4164-ae0f-801e56a06653 | Linux bridge agent | compute    | None              | :-)   | UP    | neutron-linuxbridge-agent |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+ ...
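As an optional end-to-end smoke test (a sketch only; the network names and the 192.100.10.0/24 allocation range are placeholders for this environment), create a flat provider network on the `provider` physical network mapped above, a vxlan tenant network, and a router between them:
$ openstack network create --external --share --provider-physical-network provider --provider-network-type flat extnet
$ openstack subnet create --network extnet --subnet-range 192.100.10.0/24 --gateway 192.100.10.1 --allocation-pool start=192.100.10.200,end=192.100.10.220 --no-dhcp extsubnet
$ openstack network create intnet
$ openstack subnet create --network intnet --subnet-range 172.16.1.0/24 intsubnet
$ openstack router create router1
$ openstack router add subnet router1 intsubnet
$ openstack router set router1 --external-gateway extnet
If everything is healthy, `openstack network list` now shows both networks and the router holds a gateway port on extnet.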

March 19, 2019 · 6 min · jiezi

Openstack Queens Environment Setup (Part 7): Horizon Service

Controller node:
1. Install and configure:
# yum install openstack-dashboard
# vi /etc/openstack-dashboard/local_settings
OPENSTACK_HOST = "controller"
ALLOWED_HOSTS = ['*']
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'controller:11211',
    }
}
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 2,
}
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
TIME_ZONE = "Asia/Shanghai"
...
# vi /etc/httpd/conf.d/openstack-dashboard.conf    add the following line to the file:
WSGIApplicationGroup %{GLOBAL}
...
2. Finish the installation:
# systemctl restart httpd.service memcached.service
Access the dashboard in a web browser at http://controller/dashboard.
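A quick check that Apache is actually serving the dashboard (assuming the default URL path above):
$ curl -sI http://controller/dashboard/ | head -n 1    # expect HTTP 200, or a 302 redirect to the login page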

March 19, 2019 · 1 min · jiezi

Openstack Queens Environment Setup (Part 5): Nova Service

Controller node:
1. Create the nova_api, nova, and nova_cell0 databases and grant privileges:
$ mysql -u root -p
MariaDB [(none)]> CREATE DATABASE nova_api;
MariaDB [(none)]> CREATE DATABASE nova;
MariaDB [(none)]> CREATE DATABASE nova_cell0;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY '123456';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY '123456';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY '123456';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY '123456';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY '123456';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY '123456';
MariaDB [(none)]> exit;

2. Create the nova user:
$ . admin-openrc
$ openstack user create --domain default --password-prompt nova
User Password: 123456
Repeat User Password: 123456
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | 81f1d5dfad5a42bb806d197ceb9881ce |
| name                | nova                             |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+
$ openstack role add --project service --user nova admin

3. Create the nova service entity:
$ openstack service create --name nova --description "OpenStack Compute" compute
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Compute                |
| enabled     | True                             |
| id          | 3e011d345e4442fe8a232ab5ab1f8323 |
| name        | nova                             |
| type        | compute                          |
+-------------+----------------------------------+

4. Create the Compute API service endpoints:
$ openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 343b6a8fc9564623aca0097b2383650d |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 3e011d345e4442fe8a232ab5ab1f8323 |
| service_name | nova                             |
| service_type | compute                          |
| url          | http://controller:8774/v2.1      |
+--------------+----------------------------------+
$ openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 3458cf55ac8b44d58c949fe88bf9afe3 |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 3e011d345e4442fe8a232ab5ab1f8323 |
| service_name | nova                             |
| service_type | compute                          |
| url          | http://controller:8774/v2.1      |
+--------------+----------------------------------+
$ openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 9f9115389c2a49a2874761b92c849bb0 |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 3e011d345e4442fe8a232ab5ab1f8323 |
| service_name | nova                             |
| service_type | compute                          |
| url          | http://controller:8774/v2.1      |
+--------------+----------------------------------+

5. Create the Placement service user, role, and service entity:
$ openstack user create --domain default --password-prompt placement
User Password: 123456
Repeat User Password: 123456
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | 74870bc86a7c4108869c620099bffc30 |
| name                | placement                        |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+
$ openstack role add --project service --user placement admin
$ openstack service create --name placement --description "Placement API" placement
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | Placement API                    |
| enabled     | True                             |
| id          | bbd270a97c3a499fb73765120094e9da |
| name        | placement                        |
| type        | placement                        |
+-------------+----------------------------------+
$ openstack endpoint create --region RegionOne placement public http://controller:8778
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | d79b3b62302a4055924762ac676fc9b4 |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | bbd270a97c3a499fb73765120094e9da |
| service_name | placement                        |
| service_type | placement                        |
| url          | http://controller:8778           |
+--------------+----------------------------------+
$ openstack endpoint create --region RegionOne placement internal http://controller:8778
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 5424919fbee34a7a92946c607706b38a |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | bbd270a97c3a499fb73765120094e9da |
| service_name | placement                        |
| service_type | placement                        |
| url          | http://controller:8778           |
+--------------+----------------------------------+
$ openstack endpoint create --region RegionOne placement admin http://controller:8778
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | d9d5626cdb5442ac91dff8c1588f4726 |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | bbd270a97c3a499fb73765120094e9da |
| service_name | placement                        |
| service_type | placement                        |
| url          | http://controller:8778           |
+--------------+----------------------------------+

6. Install and configure:
# yum install openstack-nova-api openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler openstack-nova-placement-api
# vi /etc/nova/nova.conf
[DEFAULT]
my_ip = 192.100.10.160
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:123456@controller

[api]
auth_strategy = keystone

[api_database]
connection = mysql+pymysql://nova:123456@controller/nova_api

[database]
connection = mysql+pymysql://nova:123456@controller/nova

[glance]
api_servers = http://controller:9292

[keystone_authtoken]
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = 123456

[libvirt]
#virt_type = kvm

[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = 123456
service_metadata_proxy = true
metadata_proxy_shared_secret = 123456

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = 123456

[vnc]
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip
#novncproxy_base_url = http://127.0.0.1:6080/vnc_auto.html

# vi /etc/httpd/conf.d/00-nova-placement-api.conf    append at the bottom:
<Directory /usr/bin>
  <IfVersion >= 2.4>
    Require all granted
  </IfVersion>
  <IfVersion < 2.4>
    Order allow,deny
    Allow from all
  </IfVersion>
</Directory>
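The <Directory /usr/bin> stanza above grants Apache access to the placement WSGI script. Once httpd is restarted (the first command of the next step), the Placement API should answer on port 8778; a quick check, whose expected output is a small JSON document listing the supported API versions:
$ curl http://controller:8778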
7. Finish the installation:
# systemctl restart httpd
# su -s /bin/sh -c "nova-manage api_db sync" nova
# su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
# su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
# su -s /bin/sh -c "nova-manage db sync" nova
# nova-manage cell_v2 list_cells
+-------+--------------------------------------+------------------------------------+-------------------------------------------------+
| Name  | UUID                                 | Transport URL                      | Database Connection                             |
+-------+--------------------------------------+------------------------------------+-------------------------------------------------+
| cell0 | 00000000-0000-0000-0000-000000000000 | none:/                             | mysql+pymysql://nova:****@controller/nova_cell0 |
| cell1 | c795b2eb-4814-4fe7-b9ff-090a1b1b2be5 | rabbit://openstack:****@controller | mysql+pymysql://nova:****@controller/nova       |
+-------+--------------------------------------+------------------------------------+-------------------------------------------------+
# systemctl enable openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
# systemctl start openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service

Compute node:
1. Install and configure:
# yum install openstack-nova-compute
# vi /etc/nova/nova.conf
[DEFAULT]
my_ip = 192.100.10.161
enabled_apis = osapi_compute,metadata
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
transport_url = rabbit://openstack:123456@controller

[api]
auth_strategy = keystone

[vnc]
enabled = True
server_listen = 0.0.0.0
server_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html

[glance]
api_servers = http://controller:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = 123456

[keystone_authtoken]
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = 123456

2. Finish the installation:
# systemctl enable libvirtd.service openstack-nova-compute.service
# systemctl start libvirtd.service openstack-nova-compute.service

Controller node:
1. Add the compute node to the cell database. First confirm the compute service is visible:
$ . admin-openrc
$ openstack compute service list --service nova-compute
+----+--------------+-----------------------+------+---------+-------+----------------------------+
| ID | Binary       | Host                  | Zone | Status  | State | Updated At                 |
+----+--------------+-----------------------+------+---------+-------+----------------------------+
| 9  | nova-compute | localhost.localdomain | nova | enabled | up    | 2018-09-13T02:59:06.000000 |
+----+--------------+-----------------------+------+---------+-------+----------------------------+

2. Discover compute hosts:
# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:332: NotSupportedWarning: Configuration option(s) ['use_tpool'] not supported
  exception.NotSupportedWarning
Found 2 cell mappings.
Skipping cell0 since it does not contain hosts.
Getting computes from cell 'cell1': c795b2eb-4814-4fe7-b9ff-090a1b1b2be5
Checking host mapping for compute host 'localhost.localdomain': 58be78ad-5220-4869-ab31-33c9674ecfd1
Creating host mapping for compute host 'localhost.localdomain': 58be78ad-5220-4869-ab31-33c9674ecfd1
Found 1 unmapped computes in cell: c795b2eb-4814-4fe7-b9ff-090a1b1b2be5
Note: whenever you add a new compute node, you must run nova-manage cell_v2 discover_hosts on the controller to register it. Alternatively, set an appropriate discovery interval in /etc/nova/nova.conf:
[scheduler]
discover_hosts_in_cells_interval = 300
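Note that the compute service registered as 'localhost.localdomain' in the output above because the compute node's hostname was never set. To keep the service list consistent with /etc/hosts, it is worth setting a proper hostname on the compute node, ideally before nova-compute first registers:
# hostnamectl set-hostname compute
# systemctl restart openstack-nova-compute.service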
3. Verification:
$ . admin-openrc
$ openstack compute service list
+----+------------------+-----------------------+----------+---------+-------+----------------------------+
| ID | Binary           | Host                  | Zone     | Status  | State | Updated At                 |
+----+------------------+-----------------------+----------+---------+-------+----------------------------+
|  1 | nova-conductor   | controller            | internal | enabled | up    | 2018-09-13T03:00:28.000000 |
|  3 | nova-consoleauth | controller            | internal | enabled | up    | 2018-09-13T03:00:29.000000 |
|  4 | nova-scheduler   | controller            | internal | enabled | up    | 2018-09-13T03:00:29.000000 |
|  9 | nova-compute     | localhost.localdomain | nova     | enabled | up    | 2018-09-13T03:00:26.000000 |
+----+------------------+-----------------------+----------+---------+-------+----------------------------+
$ openstack catalog list
+-----------+-----------+------------------------------------------+
| Name      | Type      | Endpoints                                |
+-----------+-----------+------------------------------------------+
| keystone  | identity  | RegionOne                                |
|           |           |   public: http://controller:5000/v3/    |
|           |           | RegionOne                                |
|           |           |   internal: http://controller:5000/v3/  |
|           |           | RegionOne                                |
|           |           |   admin: http://controller:5000/v3/     |
|           |           |                                          |
| nova      | compute   | RegionOne                                |
|           |           |   public: http://controller:8774/v2.1   |
|           |           | RegionOne                                |
|           |           |   internal: http://controller:8774/v2.1 |
|           |           | RegionOne                                |
|           |           |   admin: http://controller:8774/v2.1    |
|           |           |                                          |
| glance    | image     | RegionOne                                |
|           |           |   internal: http://controller:9292      |
|           |           | RegionOne                                |
|           |           |   admin: http://controller:9292         |
|           |           | RegionOne                                |
|           |           |   public: http://controller:9292        |
|           |           |                                          |
| placement | placement | RegionOne                                |
|           |           |   internal: http://controller:8778      |
|           |           | RegionOne                                |
|           |           |   public: http://controller:8778        |
|           |           | RegionOne                                |
|           |           |   admin: http://controller:8778         |
|           |           |                                          |
+-----------+-----------+------------------------------------------+
$ openstack image list
+--------------------------------------+--------+--------+
| ID                                   | Name   | Status |
+--------------------------------------+--------+--------+
| ad7da2d4-cb83-4a41-836f-e58e47e899f5 | cirros | active |
+--------------------------------------+--------+--------+
# nova-status upgrade check
/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:332: NotSupportedWarning: Configuration option(s) ['use_tpool'] not supported
  exception.NotSupportedWarning
Option "os_region_name" from group "placement" is deprecated. Use option "region-name" from group "placement".
+--------------------------------+
| Upgrade Check Results          |
+--------------------------------+
| Check: Cells v2                |
| Result: Success                |
| Details: None                  |
+--------------------------------+
| Check: Placement API           |
| Result: Success                |
| Details: None                  |
+--------------------------------+
| Check: Resource Providers      |
| Result: Success                |
| Details: None                  |
+--------------------------------+
| Check: Ironic Flavor Migration |
| Result: Success                |
| Details: None                  |
+--------------------------------+
| Check: API Service Version     |
| Result: Success                |
| Details: None                  |
+--------------------------------+ ...
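With all checks passing, Nova is ready to schedule instances. A minimal flavor for smoke tests, following the pattern in the official install guide (the name, ID, and sizes here are arbitrary choices):
$ openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 1 m1.nano
$ openstack flavor list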

March 19, 2019 · 6 min · jiezi

Openstack Queens Environment Setup (Part 4): Glance Service

1. Create the glance database and grant privileges:
$ mysql -u root -p
MariaDB [(none)]> CREATE DATABASE glance;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY '123456';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY '123456';
MariaDB [(none)]> exit;

2. Create the glance user:
$ . admin-openrc
$ openstack user create --domain default --password-prompt glance
User Password: 123456
Repeat User Password: 123456
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | 5b7e76213b4b4945b7c702be5b595c0e |
| name                | glance                           |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+
$ openstack role add --project service --user glance admin

3. Create the glance service entity:
$ openstack service create --name glance --description "OpenStack Image" image
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Image                  |
| enabled     | True                             |
| id          | b9cfd97d134e4ec2bf19d78306e85a5a |
| name        | glance                           |
| type        | image                            |
+-------------+----------------------------------+

4. Create the image service API endpoints:
$ openstack endpoint create --region RegionOne image public http://controller:9292
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | b9c90172de704ea4a867190ba44fc931 |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | b9cfd97d134e4ec2bf19d78306e85a5a |
| service_name | glance                           |
| service_type | image                            |
| url          | http://controller:9292           |
+--------------+----------------------------------+
$ openstack endpoint create --region RegionOne image internal http://controller:9292
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 074bde7662044e93830f4eca15d9c887 |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | b9cfd97d134e4ec2bf19d78306e85a5a |
| service_name | glance                           |
| service_type | image                            |
| url          | http://controller:9292           |
+--------------+----------------------------------+
$ openstack endpoint create --region RegionOne image admin http://controller:9292
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 17030061f9b84301ac515765706933b2 |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | b9cfd97d134e4ec2bf19d78306e85a5a |
| service_name | glance                           |
| service_type | image                            |
| url          | http://controller:9292           |
+--------------+----------------------------------+

5. Install and configure:
# yum install openstack-glance
# vi /etc/glance/glance-api.conf
[database]
connection = mysql+pymysql://glance:123456@controller/glance

[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = 123456

[paste_deploy]
flavor = keystone

# vi /etc/glance/glance-registry.conf
[database]
connection = mysql+pymysql://glance:123456@controller/glance

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = 123456

[paste_deploy]
flavor = keystone

# su -s /bin/sh -c "glance-manage db_sync" glance

6. Finish the installation:
# systemctl enable openstack-glance-api.service openstack-glance-registry.service
# systemctl start openstack-glance-api.service openstack-glance-registry.service
7. Verify operation:
$ . admin-openrc
$ wget http://download.cirros-cloud.net/0.3.5/cirros-0.3.5-x86_64-disk.img
$ openstack image create "cirros" \
    --file cirros-0.3.5-x86_64-disk.img \
    --disk-format qcow2 --container-format bare \
    --public
+------------------+------------------------------------------------------+
| Field            | Value                                                |
+------------------+------------------------------------------------------+
| checksum         | f8ab98ff5e73ebab884d80c9dc9c7290                     |
| container_format | bare                                                 |
| created_at       | 2018-09-13T00:55:04Z                                 |
| disk_format      | qcow2                                                |
| file             | /v2/images/ad7da2d4-cb83-4a41-836f-e58e47e899f5/file |
| id               | ad7da2d4-cb83-4a41-836f-e58e47e899f5                 |
| min_disk         | 0                                                    |
| min_ram          | 0                                                    |
| name             | cirros                                               |
| owner            | 4a5e42dd8cbf410f85a5f145039d69a6                     |
| protected        | False                                                |
| schema           | /v2/schemas/image                                    |
| size             | 13267968                                             |
| status           | active                                               |
| tags             |                                                      |
| updated_at       | 2018-09-13T00:55:04Z                                 |
| virtual_size     | None                                                 |
| visibility       | public                                               |
+------------------+------------------------------------------------------+
$ openstack image list
+--------------------------------------+--------+--------+
| ID                                   | Name   | Status |
+--------------------------------------+--------+--------+
| ad7da2d4-cb83-4a41-836f-e58e47e899f5 | cirros | active |
+--------------------------------------+--------+--------+ ...
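Assuming the upload above succeeded, the image can be inspected both through the API and on disk; with the filesystem store configured earlier, Glance names the backing file after the image ID:
$ openstack image show cirros
# qemu-img info /var/lib/glance/images/ad7da2d4-cb83-4a41-836f-e58e47e899f5    # should report file format: qcow2 and the virtual size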

March 19, 2019 · 3 min · jiezi

Openstack Queens Environment Setup (Part 2): Supporting Services

Controller node:
1. Install the NTP service:
# yum install chrony
# vi /etc/chrony.conf
server 0.centos.pool.ntp.org iburst
server 1.centos.pool.ntp.org iburst
server 2.centos.pool.ntp.org iburst
server 3.centos.pool.ntp.org iburst
...
allow 192.100.10.0/24
...
# systemctl enable chronyd.service    # start NTP at boot
# systemctl start chronyd.service     # start the NTP service
Verify the NTP service:
# chronyc sources
210 Number of sources = 2
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^- 192.0.2.11                    2   7    12   137  -2814us[-3000us] +/-   43ms
^* 192.0.2.12                    2   6   177    46    +17us[  -23us] +/-   68ms

2. Install the OpenStack packages:
# yum install centos-release-openstack-queens    # OpenStack package repository
# yum upgrade                                    # update all packages
# yum install python-openstackclient             # OpenStack command-line client
# yum install openstack-selinux                  # manages SELinux policies for OpenStack services

3. Stop the firewall:
# systemctl stop firewalld       # stop the firewall service
# systemctl disable firewalld    # keep it disabled across reboots

4. Disable SELinux:
# setenforce 0                   # disable SELinux for the running system
# vi /etc/selinux/config         # disable SELinux permanently
SELINUX=disabled

5. Install the database service:
# yum install mariadb mariadb-server python2-PyMySQL
# vi /etc/my.cnf.d/openstack.cnf
[mysqld]
bind-address = 192.100.10.160
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
# systemctl enable mariadb.service    # start MySQL at boot
# systemctl start mariadb.service     # start the MySQL service
# mysql_secure_installation           # set the MySQL root password -> 123456

6. Install the message queue:
# yum install rabbitmq-server
# systemctl enable rabbitmq-server.service
# systemctl start rabbitmq-server.service
# rabbitmqctl add_user openstack 123456
# rabbitmqctl set_permissions openstack ".*" ".*" ".*"

7. Install the Memcached cache:
# yum install memcached python-memcached
# vi /etc/sysconfig/memcached
OPTIONS="-l 127.0.0.1,::1,controller"
# systemctl enable memcached.service
# systemctl start memcached.service

8. Etcd:
# yum install etcd
# vi /etc/etcd/etcd.conf
#[Member]
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://192.100.10.160:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.100.10.160:2379"
ETCD_NAME="controller"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.100.10.160:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://192.100.10.160:2379"
ETCD_INITIAL_CLUSTER="controller=http://192.100.10.160:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-01"
ETCD_INITIAL_CLUSTER_STATE="new"
# systemctl enable etcd
# systemctl start etcd

Compute node:
1. Install the NTP service (pointing at the controller):
# yum install chrony
# vi /etc/chrony.conf
server controller iburst
...
allow 192.100.10.0/24
...
# systemctl enable chronyd.service    # start NTP at boot
# systemctl start chronyd.service     # start the NTP service

2. Install the OpenStack packages:
# yum install centos-release-openstack-queens
# yum upgrade
# yum install python-openstackclient
# yum install openstack-selinux

3. Stop the firewall:
# systemctl stop firewalld
# systemctl disable firewalld

4. Disable SELinux:
# setenforce 0
# vi /etc/selinux/config
SELINUX=disabled ...
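A few quick smoke tests (a minimal sketch) confirm the supporting services answer before moving on to Keystone:
# rabbitmqctl list_users                           # the openstack user should be listed
# mysql -u root -p123456 -e "SELECT VERSION();"    # MariaDB accepts the root password
# memcached-tool controller:11211 stats | head     # memcached answers on the controller address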

March 19, 2019 · 1 min · jiezi

Openstack Queens Environment Setup (Part 1): Environment Preparation

Environment: based on CentOS Linux release 7.6.1810 (Core)
Controller node:
  eth0: 192.100.10.160/24
  eth1: 10.0.0.11/24
Compute node:
  eth0: 192.100.10.161/24
  eth1: 10.0.0.12/24
NIC 0 (eth0) carries the external + management network -> switch + router
NIC 1 (eth1) carries the overlay network -> currently direct-connected / via a switch
Common password: 123456

Controller node:
Configure the network interfaces:
# vi /etc/sysconfig/network-scripts/ifcfg-eth0
BOOTPROTO=static
IPADDR=192.100.10.160
NETMASK=255.255.255.0
GATEWAY=192.100.10.1
# vi /etc/sysconfig/network-scripts/ifcfg-eth1
BOOTPROTO=static
IPADDR=10.0.0.11
NETMASK=255.255.255.0

Configure host name resolution:
# vi /etc/hosts
# controller
192.100.10.160 controller
# compute
192.100.10.161 compute
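Before installing any packages, it is worth verifying that name resolution and both networks work end to end (a quick check, run from each node):
# ping -c 3 controller       # management network, by name
# ping -c 3 compute
# ping -c 3 10.0.0.12        # overlay network (use 10.0.0.11 when testing from the compute node)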

March 19, 2019 · 1 min · jiezi