About OpenStack: Setting up OpenStack, Docker, and Kubernetes

Original article: Setting up OpenStack, Docker, and Kubernetes. Part 1: OpenStack. Step 1: Create a new virtual machine named "StudentName-OS"; your VM should be placed in the 'Lab Final Exam' folder. Deploy the machine with the following configuration. OpenStack controller node: 2 dual-core CPUs, 4 GB RAM, 40 GB HDD, network adapter: bridged adapter, CentOS minimal OS (http://mirror.dal.nexril.net/centos/7.9.2009/isos/x86_64/ or CentOS 8 Stream). While spinning up the VM, choose "Minimal Install". During CentOS installation, set the root password to 'Dcne123'. Perform the entire OpenStack part of the final lab as the 'root' user. ...

August 24, 2023 · 10 min · jiezi

About OpenStack: Managing and using Ironic, OpenStack's mysterious bare-metal component

OpenStack is the most widely deployed open-source cloud infrastructure in the world, and Ironic is the project that provides its bare-metal service. The official documentation highlights five main uses for bare metal: (1) high-performance computing; (2) compute tasks on hardware that cannot be virtualized; (3) database hosting (some databases run poorly inside a hypervisor); (4) single-tenant or dedicated hardware, security, reliability, and similar requirements; (5) rapid deployment of cloud infrastructure. In essence, over the past few years workloads such as 5G in telecom, machine learning and AI, and big data have pushed the industry toward increasingly specialized hardware and a unified build-out model for data centers and clouds. With OpenStack Ironic, physical hardware can be automated and controlled, reducing machine idle time and the time operators spend installing and provisioning hardware.

Why call Ironic a mysterious component? Reason one: Ironic uses the BMC (Baseboard Management Controller), an independent system on the server, together with PXE (Pre-boot Execution Environment) network boot and extra hardware controllers, to clone a prepared operating-system disk image directly onto the physical server. This avoids a Kickstart-based automated installation and saves time. Reason two: Ironic is invoked through Nova; it acts as a Nova virtualization driver, so creating and managing physical server resources follows the same flow as creating and deploying virtualized instances.

Lifting the veil: as an independent OpenStack module, Ironic interacts with keystone, nova, neutron, cinder, and the other components, and the call flow is the same as for virtual machine deployment — instances are created through the Nova API; only the underlying nova-scheduler and nova-compute drivers differ. Virtual machines use virtualization technology underneath, while physical machines use PXE and IPMI. The official architecture sequence diagram is shown here: OpenStack Ironic sequence diagram (from the OpenStack documentation). The flow in the diagram is fairly complex, mostly to handle interaction among components and error handling. The core logic can be simplified as follows: the user starts a bare-metal instance through the Nova API and Nova Scheduler; the request then goes through the Ironic API to the Ironic Conductor service; the Conductor talks to Neutron (networking), Glance (images), Cinder (storage), and other components to determine the operating system to install, the network plan, and so on; it then drives the corresponding driver, records the information in the Ironic database, and finally completes the instance deployment, handing the user a successfully provisioned physical machine.

Deploying and using Ironic is largely the same as deploying Nova and other common components. The main steps are:

(1) Prepare the environment. A test environment needs at least two physical servers: one Ironic control node (the controller), and one Ironic node, i.e. the managed bare-metal machine. Note that the node must have BMC and PXE available and enabled; if the server has RAID, create the RAID first, and make sure DHCP is available on the network.

(2) Configure the Ironic services: create the database, install and configure the ironic-api and ironic-conductor services, and configure Nova and Neutron; see the official Ironic deployment guide for details. The ironic-api and ironic-conductor services can run on the same host or on different hosts. You can add new ironic-conductor hosts to handle a growing number of bare-metal nodes, but new conductors must run the same version as the existing ones. A rough guideline is about 100 bare-metal nodes per ironic-conductor, to balance reliability and performance.

(3) Build or reuse images. Provisioning a bare-metal node needs two sets of images: deploy images and user images. Bare Metal Provisioning uses the deploy images to prepare the bare-metal node (cleaning and similar operations) ahead of installing the user images; the user images are installed on the node for the user's final use. The deploy images consist of a .kernel file and an .initramfs file; you can download the ones published by OpenStack (recommended for beginners) from https://tarballs.opendev.org/... User images can be built with the disk-image-builder tool, though it currently only supports systems such as centos/fedora/ubuntu/opensuse. To build images for other systems such as UOS, you can also use virtualization tools such as virsh: create a virtual machine, and its qcow2 disk file can then serve as the user image.

(4) Set up the drivers. After all services are configured correctly, register the hardware with the Bare Metal service and confirm that the Compute service sees the available hardware. The Compute service can see a bare-metal node once it is in the available provisioning state.

Problems you may hit while deploying Ironic fall into a few categories: 1) environment layout, for example, it is recommended to deploy the Ironic and Nova services on separate nodes; 2) image issues, mainly home-built images failing because grub.efi cannot be found; 3) configuration issues — the official Ironic documentation can lag behind the release cycle, so some documented settings are wrong; for example, the error 'ServiceTokenAuthWrapper' object has no attribute '_discovery_cache' can be worked around by patching keystoneauth1/plugin.py. ...
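Step (4) above — registering hardware with the Bare Metal service — is typically done by feeding a node inventory file to `openstack baremetal create`. A minimal sketch, assuming the `ipmi` hardware type; the node name, BMC address, credentials, and MAC address below are placeholders, not values from the article:

```yaml
# nodes.yaml -- hypothetical enrollment file for: openstack baremetal create nodes.yaml
nodes:
  - name: bm-node-01
    driver: ipmi
    driver_info:
      ipmi_address: "192.0.2.10"       # placeholder BMC address
      ipmi_username: "admin"           # placeholder BMC credentials
      ipmi_password: "secret"
    resource_class: baremetal
    ports:
      - address: "52:54:00:12:34:56"   # placeholder MAC of the node's PXE NIC
```

After enrollment, a node is moved from `enroll` through `manageable` to `available` (`openstack baremetal node manage` / `openstack baremetal node provide`), at which point the Compute service can schedule instances onto it.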

January 31, 2023 · 1 min · jiezi

About OpenStack: OpenStack practical exercises

1. On the OpenStack private cloud platform, use the CLI to create an image named cirros from the cirros.qcow2 image.

[root@controller ~]# glance image-create --name "cirros" --disk-format qcow2 --container-format bare --progress < /opt/openstack/images/CentOS_7.5_x86_64_XD.qcow2

2. On the OpenStack private cloud platform, use the CLI to create a flavor named Fmin with ID 1, 1024 MB of RAM, a 10 GB disk, and 1 vcpu.

[root@controller ~]# nova flavor-create Fmin 1 1024 10 1

3. On the OpenStack private cloud platform, write a template server.yml that creates a flavor named "m1.flavor" with ID 1234, 1024 MB of RAM, a 20 GB disk, and 2 vcpus.

[root@controller ~]# openstack orchestration template version list   # list the template versions available for orchestration
[root@controller ~]# vi server.yaml

server.yaml:
heat_template_version: 2015-04-30
description:
resources:
  flavor:
    type: OS::Nova::Flavor
    properties:
      name: "m1.flavor"
      flavorid: "1234"
      disk: 20
      ram: 1024
      vcpus: 2
outputs:
  flavor_info:
    description: Get the information of virtual machine type
    value: { get_attr: [ flavor, show ] }

[root@controller ~]# heat stack-create m1_flavor_stack -f server.yaml   # create the resources

4. On the OpenStack private cloud platform, use the CLI to create an external network extnet with subnet extsubnet, floating-IP range 172.18.x.0/24 (where x is the seat number), gateway 172.18.x.1, using vlan mode; create an internal network intnet with subnet intsubnet, instance subnet range 192.168.x.0/24 (where x is the seat number), gateway 192.168.x.1; and connect the internal subnet intsubnet to the external network extnet. Create the external network ...
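The final, truncated network task can also be expressed as a Heat template in the same style as the flavor template above. A sketch, assuming x = 10, a provider physical network label of `provider`, and a VLAN segment of 100 (all three are assumptions; substitute your seat number and your deployment's values):

```yaml
heat_template_version: 2015-04-30
description: External and internal networks for the exercise (sketch, x = 10)
resources:
  extnet:
    type: OS::Neutron::ProviderNet
    properties:
      name: extnet
      network_type: vlan
      physical_network: provider      # assumed physical-network label
      segmentation_id: 100            # placeholder VLAN id
      router_external: true
  extsubnet:
    type: OS::Neutron::Subnet
    properties:
      name: extsubnet
      network: { get_resource: extnet }
      cidr: 172.18.10.0/24
      gateway_ip: 172.18.10.1
  intnet:
    type: OS::Neutron::Net
    properties:
      name: intnet
  intsubnet:
    type: OS::Neutron::Subnet
    properties:
      name: intsubnet
      network: { get_resource: intnet }
      cidr: 192.168.10.0/24
      gateway_ip: 192.168.10.1
  router:
    type: OS::Neutron::Router
    properties:
      external_gateway_info: { network: { get_resource: extnet } }
  router_if:
    type: OS::Neutron::RouterInterface
    properties:
      router: { get_resource: router }
      subnet: { get_resource: intsubnet }
```

The router with an external gateway plus a router interface on intsubnet is what gives intsubnet connectivity to extnet, which is the last requirement of the task.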

November 23, 2022 · 3 min · jiezi

About OpenStack: a quick beginner-level install of the CloudKitty component

@TOC Foreword: What is CloudKitty? CloudKitty is a rating-as-a-service project for OpenStack and other platforms. The project aims to be a generic solution for cloud chargeback and rating. Historically it could only run in an OpenStack context, but it can now also run in standalone mode. CloudKitty performs metric-based rating: it polls endpoints to retrieve measurements and metadata about specific metrics, applies rating rules to the collected data, and pushes the rated data to its storage backend. CloudKitty is highly modular, which makes it easy to add new features.

Architecture: CloudKitty can be divided into four main parts: data retrieval (API), data collection (cloudkitty-processor), data rating, and data storage. These parts are handled by two processes, cloudkitty-api and cloudkitty-processor: data retrieval is handled by cloudkitty-api, and the remaining parts by cloudkitty-processor. Here is an overview of the CloudKitty architecture:

Installation:
yum install openstack-cloudkitty-api openstack-cloudkitty-processor openstack-cloudkitty-ui

Configuration: edit /etc/cloudkitty/cloudkitty.conf to configure CloudKitty:

[DEFAULT]
verbose = True
log_dir = /var/log/cloudkitty

[oslo_messaging_rabbit]
rabbit_userid = openstack
rabbit_password = RABBIT_PASSWORD
rabbit_hosts = RABBIT_HOST

[auth]
username = cloudkitty
password = CK_PASSWORD
tenant = service
region = RegionOne
url = http://localhost:5000/v2.0

[keystone_authtoken]
username = cloudkitty
password = CK_PASSWORD
project_name = service
region = RegionOne
auth_url = http://localhost:5000/v2.0
auth_plugin = password

[database]
connection = mysql://cloudkitty:CK_DBPASS@localhost/cloudkitty

[keystone_fetcher]
username = admin
password = ADMIN_PASSWORD
tenant = admin
region = RegionOne
url = http://localhost:5000/v2.0

[ceilometer_collector]
username = cloudkitty
password = CK_PASSWORD
tenant = service
region = RegionOne
url = http://localhost:5000

Set up the database and storage backend ...
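The truncated last step — setting up the database and storage backend — usually comes down to CloudKitty's management commands plus enabling the two services. A sketch, assuming the packages above are installed and the `[database]` connection is reachable; the service unit names follow RDO packaging and may differ in yours:

```
# create the schema and initialize the storage backend (assumed next steps, not from the excerpt)
cloudkitty-dbsync upgrade
cloudkitty-storage-init

# start the API and processor services
systemctl enable openstack-cloudkitty-api.service openstack-cloudkitty-processor.service
systemctl start openstack-cloudkitty-api.service openstack-cloudkitty-processor.service
```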

May 2, 2022 · 4 min · jiezi

About OpenStack: a quick beginner-level install of the Designate component

@TOC Foreword: Designate is an open-source DNS-as-a-service implementation and part of the ecosystem of OpenStack services for running clouds. Designate is OpenStack's multi-tenant DNSaaS service. It provides a REST API with integrated Keystone authentication. It can be configured to generate records automatically based on Nova and Neutron actions. Designate supports multiple DNS servers, including Bind9 and PowerDNS 4.

Architecture: Designate is composed of several services: API, Producer, Central, Worker, and Mini DNS. It uses an oslo.db-compatible database to store state and data, and an oslo.messaging-compatible message queue for communication between the services. Multiple replicas of all Designate services can run in parallel to enable highly available deployments, with the API processes usually placed behind a load balancer.

Prerequisites: obtain admin credentials for administrator access:

source admin-openrc
# create the designate user
openstack user create --domain demo --password 000000 designate
# add the admin role to the designate user
openstack role add --project service --user designate admin
# create the designate service entity
openstack service create --name designate --description "DNS" dns
# create the DNS service API endpoints
openstack endpoint create --region RegionOne dns public http://controller:9001/
openstack endpoint create --region RegionOne dns internal http://controller:9001/
openstack endpoint create --region RegionOne dns admin http://controller:9001/

Install and configure the components. Install the packages ...
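The truncated package-install step is normally followed by pointing Designate at a DNS backend through /etc/designate/pools.yaml. A minimal single-pool sketch assuming a local Bind9 server; the pool id below is the example UUID from the Designate documentation, and all hosts and ports are placeholders:

```yaml
# /etc/designate/pools.yaml -- single-pool sketch for a local Bind9 backend
- name: default
  id: 794ccc2c-d751-44fe-b57f-8894c9f5c842   # docs' example UUID; any stable UUID works
  description: Default BIND9 pool (sketch)
  attributes: {}
  ns_records:
    - hostname: ns1.example.org.
      priority: 1
  nameservers:
    - host: 127.0.0.1
      port: 53
  targets:
    - type: bind9
      description: local BIND9 instance
      masters:
        - host: 127.0.0.1
          port: 5354                 # Mini DNS listens here by default
      options:
        host: 127.0.0.1
        port: 53
        rndc_host: 127.0.0.1
        rndc_port: 953
        rndc_key_file: /etc/designate/rndc.key
```

The file is loaded into the database with `designate-manage pool update`.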

April 29, 2022 · 3 min · jiezi

About OpenStack: OpenStack's 25th release, Yoga, is out — 12 years of development have forged an authoritative cloud era

On March 30, the OpenStack community officially released its latest version, Yoga. This is the 25th release since 2010, when developers at the NASA Ames Research Center and Rackspace jointly created the open-source infrastructure-as-a-service (IaaS) cloud OpenStack. The new Yoga release supports advanced hardware such as SmartNIC DPUs, and keeps the OpenStack core stable and reliable by optimizing integration with cloud-native software such as Kubernetes and Prometheus and by reducing technical debt. OpenStack Yoga download: https://www.openstack.org/sof... Twelve years of development have forged an authoritative "cloud" era. Twelve years ago, a cloud was a natural phenomenon you could see in the sky; today, "cloud" is everything. OpenStack's development over the past twelve years is the model case, and it ushered in an era of its own. In 2010, NASA (the U.S. National Aeronautics and Space Administration) and Rackspace jointly launched OpenStack as an open-source project under the Apache license. From the first open-source cloud platform release, Austin, to the sixth release, Folsom, in September 2012, continuous optimization and refinement brought the platform to maturity and laid a solid foundation for the steady growth of the open-source cloud platform. In April 2013, OpenStack published its seventh release, Grizzly, adding nearly 230 new features across compute, storage, networking, and shared services, and effectively reducing dependence on a central database. In October of the same year, OpenStack published its eighth release, Havana. ...

March 31, 2022 · 2 min · jiezi

About OpenStack: Learning the OpenStack Cloud Computing Cookbook, 3rd Edition

Description: Learning the OpenStack Cloud Computing Cookbook, 3rd Edition (xz). Content link: https://www.aliyundrive.com/s...

March 14, 2022 · 1 min · jiezi

About OpenStack: 99Cloud's Huang Shuquan — leading with open-source genes and technology, jointly shaping a new 5G edge-computing ecosystem

A cloud computing expert and practitioner; an early contributor, advocate, and practitioner in the OpenStack community; a global top-ten contributor to several OpenStack projects; one of the main initiators of the StarlingX edge-computing project and a member of its Technical Steering Committee — the only Chinese engineer on the first TSC; and the technical lead for 99Cloud's incubation work on the KATA, Airship, and StarlingX projects. Interviewee: Huang Shuquan, Senior Technical Director, 99Cloud. Interview and editing: SegmentFault editorial team.

In the 5G era, edge computing and cloud technology are shaping the future of the IoT. As the latest trend in cloud technology, edge computing is also opening up new business opportunities for vendors. At the recently concluded 2021 OpenInfra Days China conference, "The Next Decade of Open Source Infrastructure," we heard a talk from 99Cloud, a leading open-source cloud service provider. As a leading Chinese provider of open cloud-edge infrastructure, 99Cloud has carried an "open-source gene" since its founding in 2012 and was among the first domestic companies dedicated to OpenStack and related open-source services. In just a few years, it has grown into a top player in the field. Recently we had the opportunity to interview Huang Shuquan, Senior Technical Director at 99Cloud, who described how 99Cloud caught this wave quickly and precisely, and its efforts to advance edge computing and bring it into production.

The future of open source and cloud computing: hybrid cloud. According to Huang, open-source cloud computing today mostly follows the OpenStack-centered model. Over the years, 99Cloud has likewise evolved from cloud-centric computing toward edge computing, which is also a new direction for cloud computing itself. In Huang's view, computing power will be everywhere in the future, and cloud computing will move toward hybrid cloud. In this new landscape, there will be more and more open-source projects for managing basic cloud resources — projects such as OpenStack, K8S, and Kata. More diverse computing models and different open-source software frameworks will therefore emerge to serve these new forms of computing.

Following the 5G era: actively positioning in edge computing. As one of the first companies dedicated to OpenStack and related open-source services, 99Cloud saw the shift of cloud computing toward the edge from the start. Huang says that as early as 2017, 99Cloud began its edge-computing work, co-founding the new StarlingX project with Intel, Wind River, and others, focused on managing compute resources at the edge. Recently, 99Cloud has also joined open-source edge-computing projects based on Edge Gallery. Beyond contributing to open source and edge computing, 99Cloud actively works on landing edge computing in production; it has achieved results in the smart-campus field, such as the Huzhou smart campus and smart venues in Hangzhou. Huang believes edge computing depends heavily on 5G technology, and 99Cloud's strength is precisely combining the management of this infrastructure with 5G networks. More and more new applications — cloud gaming, AR/VR, and others — will be deployed at the edge, and 99Cloud will invest in these new applications; cloud gaming is among the fields and businesses 99Cloud is actively entering. As 5G networks keep evolving, edge computing is gradually becoming the key to the cloud era. At this OpenInfra conference, technical experts from 99Cloud presented topics such as "edge native, edge computing," which left a deep impression and made us curious about the key open-source projects 99Cloud will focus on next.

On the Skyline project: jointly shaping a new edge-computing ecosystem. On that question, Huang shared some news. As mentioned at this OpenInfra conference, Skyline is 99Cloud's newest contribution to OpenStack. It is a modernized management interface — an OpenStack dashboard — an innovative project built on 99Cloud's years of accumulated experience that effectively improves the interface experience and operational efficiency and manages OpenStack resources more efficiently. Having contributed Skyline to the community, 99Cloud will continue to invest in and improve the project, to attract more partners to participate and help the whole community flourish. Overall, 99Cloud has been sharing its years of accumulated experience in the hybrid-cloud field with the industry and jointly pushing the whole industry toward adoption. In edge cloud, 99Cloud also stays at the industry's leading edge, joining hands with other companies with a strong open-source spirit to jointly shape a thriving new edge-computing ecosystem.

What contributing to the community means: technology changes the world. As everyone knows, as OpenStack has evolved and developed in recent years, 99Cloud, as a Gold Member of the OpenStack Foundation, has remained among the top contributors to the community. What does all this mean for a leading domestic open-source company like 99Cloud? And what is the driving force behind it?
In the outside world's eyes, OpenStack may no longer seem so "hot," but in Huang's view that is precisely evidence that OpenStack is becoming more and more stable. The more widely a technology is applied, the more we take it for granted, whereas newer technologies may draw short bursts of high attention. Huang believes that in recent years, as OpenStack has stabilized, it has become the de facto standard for cloud computing. As a Gold Member of the OpenInfra Foundation, 99Cloud has long made the open-source spirit part of its own blood. As a company with technical ambition, 99Cloud hopes not only to lead the community's technical direction (for example, by donating the Skyline project) but also to attract more companies and individuals to help the community's ecosystem prosper. Huang says 99Cloud's belief in technology is the conviction that technology can change the world and change society. As a company, it needs not only to survive but, on that basis, to keep contributing to the community. So far, 99Cloud's contributions to the community rank among the best, which shows that 99Cloud has earned recognition from users and the wider developer community, has attracted many like-minded engineers to join, and has become the choice of more customers — all of which is "open source empowering the cloud-edge transformation." ...

November 25, 2021 · 1 min · jiezi

About OpenStack: Junyao's Chen Qinba — amid the digitalization wave, OpenStack boosts private cloud development

Chen Qinba holds a master's degree in computer technology from East China University of Science and Technology and joined Junyao in 2002. He has years of hands-on experience in data centers, databases, information security, cloud computing, and open-source ecosystems. He is currently a senior manager in Junyao Group's information technology department, mainly responsible for the group's IT construction, including OA, ERP, email, video conferencing, and other office systems, and in recent years mainly for private cloud construction. Interviewee: Chen Qinba, Senior Manager, Information Technology Department, Junyao Group. Interview and editing: SegmentFault editorial team.

As a large, pragmatic Chinese conglomerate, Junyao has been committed to pragmatic innovation since its founding in 1991, and in recent years has become an industry leader across five business segments: air transport, financial services, modern consumption, education services, and technology, opening up new ground. One might ask: how did such a traditionally oriented large enterprise grow step by step into today's leading player in the service industry? Behind all of it lie a comprehensive digital transformation and the use of open-source cloud computing. At the recent OpenInfra event themed "The Next Decade of Open Source Infrastructure," we had the opportunity to interview Chen Qinba and had a lively conversation about digital transformation, open-source cloud computing, and related topics. After reading what follows, you may find the answer you are looking for.

On the trigger for the group's comprehensive digital transformation, Chen explained that as early as two years ago, Junyao Group chairman Wang Junjin raised "technology empowerment" as a theme to drive the whole group's business. The acceleration of the digital transformation came at Junyao's 2021 annual meeting, where chairman Wang went further: "every business segment must deeply understand the concept of digitalization, embrace digitalization, master new technology, apply the 'methodology' well, and empower with technology." Under the broad trend of the digitalization wave — especially for a company like Junyao, with traditional businesses spanning aviation, education, and consumption — the group's level of digital construction still lagged the best in the industry, so a stronger push was needed.

Challenges in the group's digital transformation. For a conglomerate like Junyao, digital transformation touches subsidiaries across many industries, so it runs into plenty of difficulties. Chen recounted that Junyao entered the cloud-computing field in early 2017; after detailed internal and external research and comprehensive evaluation, the "first Junyao cloud" was born in the second half of that year. That cloud spread "from a single point to the whole surface, then gradually out to the group's five segments." At first, Junyao chose a cross-border e-commerce startup as a "private cloud" incubation pilot and turned it into a classic cloud-adoption case; after roughly a month or two, the overall stability of Junyao's "private cloud" platform had proven it could stand the test. It was then gradually handed to the business segments for trial use, and over the following one to two years, the effort paid off: Junyao Group reached a scale of nearly 500 cloud hosts, with more than ten subsidiaries on the cloud. In other words, starting in 2017, as the "new business goes to the cloud" strategy gradually unfolded, the platform stabilized and matured; in the years after, some subsidiaries also began moving accumulated legacy workloads to the cloud — for example, migrating aging business systems — finally achieving comprehensive cloud adoption "from point to surface."

With a traditional architecture, the cycle for a new business from project initiation through implementation to launch is quite long. Explaining why, Chen noted that server hardware procurement alone takes about two to three months to deliver, and OS installation, deployment, and network commissioning also take a long time, so overall efficiency is low. After digitalization and cloud computing, such workloads can basically go live quickly, within a day or two or a week, which both raises efficiency and saves investment — a leap forward for enterprise IT. Cloud workloads adapt on demand, like "agile development": fast iteration, resources scaling elastically with the business, a low cost of "trial and error," and no wasted resources.

Open-source cloud powering business growth. On the topic of "open-source cloud helping enterprise business grow," Chen offered several real, classic cases. He recalled that back in 2018, Juneyao Logistics (a subsidiary of Juneyao Airlines) launched a new online Internet platform; the business was growing fast and the overall schedule was tight. Moving to the cloud reduced IT construction costs with a small initial investment; sped up system iteration, enabling gray releases and automated deployment; raised efficiency, letting teams focus on core business and development work; and enabled new models, quickly standing up new business environments for validation and lowering R&D costs. After introducing the cloud, the platform therefore formed a one-stop system of rapid deployment, development, launch, and response; of course, as the platform's products and services matured, a series of bugs were fixed and quite a few pitfalls were stepped in along the way. In the end, with OpenStack-related technology and support from 99Cloud, cloud application scenarios were optimized. That experience of mutual adjustment with the cloud vendor also marked the beginning of formal, deep engagement with open source. In Chen's view, open source remains the trend at this stage, and behind it there must be a strong technical support team. Many of Junyao's business systems already on the cloud depend heavily on the stability and reliability of the cloud platform, especially where Ceph storage is involved, since it concerns the safety of business data. Particularly now, when the whole world has begun paying attention to data security — at both the national and enterprise level — data-security topics have long been a hot focus. Junyao's development in the open-source field therefore continues to rely on strong support from partner vendors, who need to provide fast, responsive technical assurance services.

Data-security issues when enterprises adopt open source. On the data-security issues involved when a large enterprise like Junyao adopts open source, Chen offered distinctive insights from three angles:
1. Under the premise of network isolation, extended capability must still be provided: cloud hosts, as "naturally isolated" components, must still interconnect resources with each business unit.
2. Under the condition of business interconnection, security protection must still be considered: third-party security providers must be brought in, such as the Sangfor cloud-security resources Junyao currently partners with.
3. On top of data snapshots, regular off-site backups are still needed: with professional backup tools, keep at least three full backups from different points in time, reinforcing the last line of defense.
Guaranteeing network isolation while achieving cross-business interconnection is a contradiction, and many problems often arise in real application scenarios. Therefore, when adopting open source, the business scenarios must be thought through in advance, followed by clear planning, and only then should the next-step strategy be set. For example, Junyao is currently testing container-platform applications, strengthening cooperation with 99Cloud, and planning deeper cooperation with OpenStack next — beyond IaaS, it has needs in PaaS and SaaS services as well. Bringing in such professional third-party security companies or cloud vendors to provide more and better professional services is the core of open-source adoption for large enterprises.

On the cooperation with 99Cloud/OpenStack. Regarding deeper cooperation with OpenStack, Chen said that in terms of usability and security, the security of the Junyao cloud platform is being further optimized, hardened, and assessed, and the relevant certifications have already been obtained. In addition, the platform's capabilities are being further expanded — for example, adding more high-performance compute nodes — and the scale of applications will deepen further. Beyond cloud hosts, Junyao may next cooperate further with OpenStack on container platforms, CMP multi-cloud management platforms, and more. According to Chen, "Junyao Cloud Phase 1," built on the OpenStack N release, went live on December 28, 2017 and was rolled out to internal subsidiaries for trial use. In 2018 it was successfully selected among the Shanghai Municipal Commission of Economy and Informatization's "Top 10 Cloud Computing Application Demonstrations" and, the same year, passed Level 3 of the Ministry of Public Security's classified-protection filing for information systems. To meet higher business requirements, in 2020 Junyao entered a strategic partnership with 99Cloud and upgraded the "Junyao cloud platform" to the more stable OpenStack P release. The same year, the Junyao Cloud Phase 2 project stood out among 400 participating enterprises, was recognized by the Shanghai Municipal Commission of Economy and Informatization as a 2020 "Enterprises to the Cloud" demonstration application, and received special industry-development funding. Junyao Group's cloud-adoption work has been affirmed by the relevant Shanghai authorities, which is also a considerable help to the future development of "Junyao Cloud." At present, more than half of the companies within Junyao Group, about 64.8%, use Junyao's own cloud; 17.6% use public cloud; and another 17.6% use hybrid cloud. Junyao Cloud is key information infrastructure for Junyao Group, providing support and assurance for business development and core data security, and is of major strategic importance. In 2021, Junyao issued a directive to comprehensively push "enterprise to the cloud" across its subsidiaries.

Conclusion. Beyond the more traditional fields such as aviation and education, in recent years Junyao has also achieved practical results in emerging fields such as health. From the official listing of Junyao Health (stock name: Junyao Health) in 2020, to this year's introduction of a brand-new health business, Junyao Medical, Junyao is using open source and cloud adoption to expand into and break through in more fields. In this process, Chen Qinba, as Junyao Group's IT "vanguard," will continue to uphold "high support for and reliance on information integrity," actively embrace open source and cloud technology, press forward with the enterprise's digital transformation, continue striving to be a benchmark for digital transformation among domestic enterprises, actively respond to the national "technology-driven" strategy, and keep contributing to the building of a "Digital China"!

November 24, 2021 · 1 min · jiezi

About OpenStack: OpenStack is dead? Growth tops 66% — get the latest on OpenInfra here

Some question whether OpenStack's time has passed, but the data tell us otherwise. According to this year's OpenStack User Survey, the number of cores managed by OpenStack grew 66% over the past year. Yes! More than 25 million OpenStack cores are in production, with Workday, Yahoo, and Walmart each running over 1 million cores and China Mobile running over 6 million. Why is there such sustained demand for infrastructure driven by open-source solutions? Join OpenInfra Live: Keynotes, hosted by the OpenInfra Foundation (formerly the OpenStack Foundation), to hear visionary experts discuss why particular deployments have crossed the million-core threshold, plus exclusive announcements, live demos, OpenStack + Kubernetes, and hybrid-cloud economics. This will be everyone's only chance to get together this year — come engage with the global OpenInfra community! The rebroadcast of the event in China starts at 09:00 Beijing time on Saturday, 2021/11/20; scan the QR code or follow the link to register and get the livestream address: https://pages.segmentfault.co... Event highlights: engage with the newest players in the open-source infrastructure field at OpenInfra Live: Keynotes! These two special episodes of OpenInfra Live are your best chance to: engage with leaders of global open-source communities such as OpenStack and Kubernetes, and hear how these projects support OpenInfra use cases such as hybrid cloud; dig into hybrid-cloud economics and the role open-source technology plays; and celebrate the announcement of this year's Superuser Awards winners. Open-source community leaders from around the world will gather in one place. Join us — register now. The rebroadcast in China starts at 09:00 Beijing time on Saturday, 2021/11/20; scan the QR code or follow the link to register and get the livestream address: https://pages.segmentfault.co...

November 17, 2021 · 1 min · jiezi

About OpenStack: Pike bare-metal deployment

Variables:

ctrl_ip="172.36.214.11"   # controller_mgt_ip
# Note: The hostname cannot contain "_"
ctrl_hostname=`cat /etc/hostname`
all_pwd="123456"
# inspector_ip: you should set it on inspector_interface
inspector_ip="10.0.0.1"
inspector_intface="ens256"
inspector_ippool_start="10.0.0.100"
inspector_ippool_end="10.0.0.200"

source /root/admin-openrc
openstack network create Provision --provider-network-type vxlan --provider-segment 4001
# provision_ip: you should set it on inspector_interface's vlan subinterface, for example: ens256.1255
provision_vlan="4001"
provision_ip="20.0.0.1"
provision_uuid=`openstack network show Provision | grep id|grep -v pro|grep -v qos|tr -d " "|awk -F '|' '{print$3}'`
echo $provision_uuid
sleep 3

Set inspector interface:

sed -i "/BOOTPROTO/cBOOTPROTO=none" /etc/sysconfig/network-scripts/ifcfg-$inspector_intface
sed -i "/ONBOOT/cONBOOT=yes" /etc/sysconfig/network-scripts/ifcfg-$inspector_intface
echo "IPADDR=$inspector_ip" >>/etc/sysconfig/network-scripts/ifcfg-$inspector_intface
echo "PREFIX=24" >>/etc/sysconfig/network-scripts/ifcfg-$inspector_intface

Set provision interface:

echo "BOOTPROTO=none" >>/etc/sysconfig/network-scripts/ifcfg-$inspector_intface.$provision_vlan
echo "DEVICE=$inspector_intface.$provision_vlan" >>/etc/sysconfig/network-scripts/ifcfg-$inspector_intface.$provision_vlan
echo "ONBOOT=yes" >>/etc/sysconfig/network-scripts/ifcfg-$inspector_intface.$provision_vlan
echo "IPADDR=$provision_ip" >>/etc/sysconfig/network-scripts/ifcfg-$inspector_intface.$provision_vlan
echo "VLAN=yes" >>/etc/sysconfig/network-scripts/ifcfg-$inspector_intface.$provision_vlan
systemctl restart network
systemctl status network
yum install qemu-img iscsi-initiator-utils python2-ironicclient psmisc gdisk -y

Database ...
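Scripts like the one above lean heavily on GNU sed's `c` (change matching line) and `a` (append after matching line) commands to edit config files in place. A self-contained sketch of the idiom on a throwaway file (the path and values are invented for the demo, not taken from the deployment):

```shell
# create a throwaway config file to edit
cat > /tmp/sed-demo.conf <<'EOF'
BOOTPROTO=dhcp
ONBOOT=no
#ServerName example
EOF

# 'c' replaces every line matching the pattern with the given text
sed -i "/BOOTPROTO/cBOOTPROTO=none" /tmp/sed-demo.conf
sed -i "/ONBOOT/cONBOOT=yes" /tmp/sed-demo.conf

# 'a' appends a new line after every line matching the pattern
sed -i "/#ServerName/aServerName 172.36.214.11" /tmp/sed-demo.conf

cat /tmp/sed-demo.conf
```

Both `c` and `a` act on every matching line, so the patterns must be specific enough; this one-letter-command spelling is GNU sed — BSD sed requires a different syntax for the same edits.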

September 29, 2021 · 6 min · jiezi

About OpenStack: Stein combined controller-and-network-node deployment

Note: run the shell scripts with "source xx.sh or . xx.sh", not with "bash xx.sh".

Set environment variables:

echo "#Add by Ly">>/etc/profile
echo "export CONTROLLER_IP=172.36.214.11">>/etc/profile
echo "export CTRL_HOST_NAME=stein-ctrl">>/etc/profile
echo "export ALL_PASS=123456">>/etc/profile
source /etc/profile

B_setup_base_env.sh:

set -e -x
yum install -y net-tools
yum install -y expect
yum install -y tcpdump
yum install -y python-pip
yum install -y tree
echo "$CONTROLLER_IP $CTRL_HOST_NAME" >>/etc/hosts
systemctl stop firewalld
systemctl disable firewalld
sleep 2
cp /etc/selinux/config /etc/selinux/config.bak
sed -i "/SELINUX=enforcing/cSELINUX=disabled" /etc/selinux/config
setenforce 0
cp /etc/chrony.conf /etc/chrony.conf.bak
sed -i "/server 0.centos.pool.ntp.org iburst/cserver 10.165.7.181 iburst" /etc/chrony.conf
sed -i "/centos.pool.ntp.org/d" /etc/chrony.conf
systemctl enable chronyd
systemctl restart chronyd
systemctl status chronyd
sleep 2
chronyc sources
timedatectl set-timezone Asia/Shanghai
sleep 5
#by your diy

C_setup_base_soft_about_ctrl_stein.sh:

set -e -x
echo "The time now is : $CURDATE"
sleep 3
yum install centos-release-openstack-stein -y
yum install python-openstackclient -y
yum install openstack-selinux -y
yum install -y mariadb
yum install -y mariadb-server
yum install -y python2-PyMySQL
touch /etc/my.cnf.d/openstack.cnf
echo "[mysqld]" >>/etc/my.cnf.d/openstack.cnf
echo "bind-address = $CONTROLLER_IP" >>/etc/my.cnf.d/openstack.cnf
echo "" >>/etc/my.cnf.d/openstack.cnf
echo "default-storage-engine = innodb" >>/etc/my.cnf.d/openstack.cnf
echo "innodb_file_per_table = on" >>/etc/my.cnf.d/openstack.cnf
echo "max_connections = 4096" >>/etc/my.cnf.d/openstack.cnf
echo "collation-server = utf8_general_ci" >>/etc/my.cnf.d/openstack.cnf
echo "character-set-server = utf8" >>/etc/my.cnf.d/openstack.cnf
systemctl enable mariadb.service
systemctl start mariadb.service
systemctl status mariadb.service
sleep 2
mysql_secure_installation <<EOF
y
$ALL_PASS
$ALL_PASS
y
y
y
y
EOF

#Message queue
yum install rabbitmq-server -y
systemctl enable rabbitmq-server.service
systemctl start rabbitmq-server.service
systemctl status rabbitmq-server.service
sleep 2
rabbitmqctl add_user openstack $ALL_PASS
rabbitmqctl set_permissions openstack ".*" ".*" ".*"

#Memcached
yum install memcached python-memcached -y
cp /etc/sysconfig/memcached /etc/sysconfig/memcached.bak
sed -i "/OPTIONS=\"-l 127.0.0.1,::1\"/cOPTIONS=\"-l 127.0.0.1,::1,$CONTROLLER_IP\"" /etc/sysconfig/memcached
systemctl enable memcached.service
systemctl start memcached.service
systemctl status memcached.service
sleep 2

#ETCD
yum install etcd -y
cp /etc/etcd/etcd.conf /etc/etcd/etcd.conf.bak
sed -i '/ETCD_DATA_DIR/cETCD_DATA_DIR="/var/lib/etcd/default.etcd"' /etc/etcd/etcd.conf
sed -i "/ETCD_LISTEN_PEER_URLS/cETCD_LISTEN_PEER_URLS=\"http://$CONTROLLER_IP:2380\"" /etc/etcd/etcd.conf
sed -i "/ETCD_LISTEN_CLIENT_URLS/cETCD_LISTEN_CLIENT_URLS=\"http://$CONTROLLER_IP:2379\"" /etc/etcd/etcd.conf
sed -i "/ETCD_NAME/cETCD_NAME=\"$CON_HOST_NAME\"" /etc/etcd/etcd.conf
sed -i "/ETCD_INITIAL_ADVERTISE_PEER_URLS/cETCD_INITIAL_ADVERTISE_PEER_URLS=\"http://$CONTROLLER_IP:2380\"" /etc/etcd/etcd.conf
sed -i "/ETCD_ADVERTISE_CLIENT_URLS/cETCD_ADVERTISE_CLIENT_URLS=\"http://$CONTROLLER_IP:2379\"" /etc/etcd/etcd.conf
sed -i "/ETCD_INITIAL_CLUSTER=/cETCD_INITIAL_CLUSTER=\"$CON_HOST_NAME=http://$CONTROLLER_IP:2380\"" /etc/etcd/etcd.conf
sed -i '/ETCD_INITIAL_CLUSTER_TOKEN/cETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-01"' /etc/etcd/etcd.conf
sed -i '/ETCD_INITIAL_CLUSTER_STATE/cETCD_INITIAL_CLUSTER_STATE="new"' /etc/etcd/etcd.conf
systemctl enable etcd
systemctl start etcd
systemctl status etcd
sleep 2

D_setup_keystone_about_ctrl_stein.sh:

set -e -x
yum install openstack-keystone -y
yum install httpd -y
yum install mod_wsgi -y
mysql -N -uroot -p$ALL_PASS<<EOF
DROP DATABASE if exists keystone;
CREATE DATABASE if not exists keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY '$ALL_PASS';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY '$ALL_PASS';
EOF
#yum install openstack-keystone -y
#yum install httpd -y
#yum install mod_wsgi -y
cp /etc/keystone/keystone.conf /etc/keystone/keystone.conf.bak
#[database]
sed -i "/#connection = <None>/aconnection = mysql+pymysql://keystone:$ALL_PASS@$CONTROLLER_IP/keystone" /etc/keystone/keystone.conf
#[token]
sed -i '/provider =/aprovider = fernet' /etc/keystone/keystone.conf
#Populate the Identity service database
su -s /bin/sh -c "keystone-manage db_sync" keystone
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
#
keystone-manage bootstrap --bootstrap-password $ALL_PASS \
  --bootstrap-admin-url http://$CONTROLLER_IP:5000/v3/ \
  --bootstrap-internal-url http://$CONTROLLER_IP:5000/v3/ \
  --bootstrap-public-url http://$CONTROLLER_IP:5000/v3/ \
  --bootstrap-region-id RegionOne
#ServerName
sed -i "/#ServerName/aServerName $CONTROLLER_IP" /etc/httpd/conf/httpd.conf
#Creating a soft link
ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
systemctl enable httpd.service
systemctl start httpd.service
systemctl status httpd.service
#Configure the administrative account
export OS_USERNAME=admin
export OS_PASSWORD=$ALL_PASS
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://$CONTROLLER_IP:5000/v3
export OS_IDENTITY_API_VERSION=3
#Create a domain, projects, users, and roles
openstack domain create --description "An Example Domain" example
openstack project create --domain default --description "Service Project" service
openstack project create --domain default --description "Demo Project" myproject
/usr/bin/expect << EOF
set timeout 15
spawn openstack user create --domain default --password-prompt myuser
expect "User*"
send "$ALL_PASS\r"
expect "Repeat *"
send "$ALL_PASS\r"
expect eof
EOF
openstack role create myrole
openstack role add --project myproject --user myuser myrole
unset OS_AUTH_URL OS_PASSWORD
/usr/bin/expect << EOF
set timeout 15
spawn openstack --os-auth-url http://$CONTROLLER_IP:5000/v3 \
  --os-project-domain-name Default --os-user-domain-name Default \
  --os-project-name admin --os-username admin token issue
expect "*Password*"
send "$ALL_PASS\r"
expect eof
EOF
/usr/bin/expect << EOF
set timeout 15
spawn openstack --os-auth-url http://controller:5000/v3 \
  --os-project-domain-name Default --os-user-domain-name Default \
  --os-project-name myproject --os-username myuser token issue
expect "*Password*"
send "$ALL_PASS\r"
expect eof
EOF
#Creating admin-openrc
touch /root/admin-openrc
echo "export OS_PROJECT_DOMAIN_NAME=Default" >/root/admin-openrc
echo "export OS_USER_DOMAIN_NAME=Default" >>/root/admin-openrc
echo "export OS_PROJECT_NAME=admin" >>/root/admin-openrc
echo "export OS_USERNAME=admin" >>/root/admin-openrc
echo "export OS_PASSWORD=$ALL_PASS" >>/root/admin-openrc
echo "export OS_AUTH_URL=http://$CONTROLLER_IP:5000/v3" >>/root/admin-openrc
echo "export OS_IDENTITY_API_VERSION=3" >>/root/admin-openrc
echo "export OS_IMAGE_API_VERSION=2" >>/root/admin-openrc
#Creating demo-openrc
touch /root/demo-openrc
echo "export OS_PROJECT_DOMAIN_NAME=Default" >/root/demo-openrc
echo "export OS_USER_DOMAIN_NAME=Default" >>/root/demo-openrc
echo "export OS_PROJECT_NAME=myproject" >>/root/demo-openrc
echo "export OS_USERNAME=myuser" >>/root/demo-openrc
echo "export OS_PASSWORD=$ALL_PASS" >>/root/demo-openrc
echo "export OS_AUTH_URL=http://$CONTROLLER_IP:5000/v3" >>/root/demo-openrc
echo "export OS_IDENTITY_API_VERSION=3" >>/root/demo-openrc
echo "export OS_IMAGE_API_VERSION=2" >>/root/demo-openrc
source /root/admin-openrc
openstack token issue
sleep 2

E_setup_image_about_ctrl_stein.sh:

set -e -x
#Database operations: glance
mysql -N -uroot -p$ALL_PASS<<EOF
DROP DATABASE if exists glance;
CREATE DATABASE if not exists glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY '$ALL_PASS';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY '$ALL_PASS';
EOF
source /root/admin-openrc
/usr/bin/expect << EOF
set timeout 15
spawn openstack user create --domain default --password-prompt glance
expect "User*"
send "$ALL_PASS\r"
expect "Repeat*"
send "$ALL_PASS\r"
expect eof
EOF
openstack role add --project service --user glance admin
openstack service create --name glance --description "OpenStack Image" image
openstack endpoint create --region RegionOne image public http://$CONTROLLER_IP:9292
openstack endpoint create --region RegionOne image internal http://$CONTROLLER_IP:9292
openstack endpoint create --region RegionOne image admin http://$CONTROLLER_IP:9292
yum install openstack-glance -y
cp /etc/glance/glance-api.conf /etc/glance/glance-api.conf.bak
#[database]
sed -i "/#connection =/aconnection = mysql+pymysql://glance:$ALL_PASS@$CONTROLLER_IP/glance" /etc/glance/glance-api.conf
#[keystone_authtoken]
sed -i "/\[keystone_authtoken]$/apassword = $ALL_PASS" /etc/glance/glance-api.conf
sed -i "/\[keystone_authtoken]$/ausername = glance" /etc/glance/glance-api.conf
sed -i "/\[keystone_authtoken]$/aproject_name = service" /etc/glance/glance-api.conf
sed -i "/\[keystone_authtoken]$/auser_domain_name = Default" /etc/glance/glance-api.conf
sed -i "/\[keystone_authtoken]$/aproject_domain_name = Default" /etc/glance/glance-api.conf
sed -i "/\[keystone_authtoken]$/aauth_type = password" /etc/glance/glance-api.conf
sed -i "/\[keystone_authtoken]$/amemcached_servers = $CONTROLLER_IP:11211" /etc/glance/glance-api.conf
sed -i "/\[keystone_authtoken]$/aauth_url = http://$CONTROLLER_IP:5000" /etc/glance/glance-api.conf
sed -i "/\[keystone_authtoken]$/awww_authenticate_uri = http://$CONTROLLER_IP:5000" /etc/glance/glance-api.conf
#[paste_deploy]
sed -i "/flavor = keystone/cflavor = keystone" /etc/glance/glance-api.conf
#[glance_store]
sed -i "/\[glance_store]$/afilesystem_store_datadir = /var/lib/glance/images/" /etc/glance/glance-api.conf
sed -i "/\[glance_store]$/adefault_store = file" /etc/glance/glance-api.conf
sed -i "/\[glance_store]$/astores = file,http" /etc/glance/glance-api.conf
#Back up glance-registry.conf
cp /etc/glance/glance-registry.conf /etc/glance/glance-registry.conf.bak
#[database]
sed -i "/#connection = <None>/aconnection = mysql+pymysql://glance:$ALL_PASS@$CONTROLLER_IP/glance" /etc/glance/glance-registry.conf
#[keystone_authtoken]
sed -i "/\[keystone_authtoken]$/apassword = $ALL_PASS" /etc/glance/glance-registry.conf
sed -i "/\[keystone_authtoken]$/ausername = glance" /etc/glance/glance-registry.conf
sed -i "/\[keystone_authtoken]$/aproject_name = service" /etc/glance/glance-registry.conf
sed -i "/\[keystone_authtoken]$/auser_domain_name = Default" /etc/glance/glance-registry.conf
sed -i "/\[keystone_authtoken]$/aproject_domain_name = Default" /etc/glance/glance-registry.conf
sed -i "/\[keystone_authtoken]$/aauth_type = password" /etc/glance/glance-registry.conf
sed -i "/\[keystone_authtoken]$/amemcached_servers = $CONTROLLER_IP:11211" /etc/glance/glance-registry.conf
sed -i "/\[keystone_authtoken]$/aauth_url = http://$CONTROLLER_IP:5000" /etc/glance/glance-registry.conf
sed -i "/\[keystone_authtoken]$/awww_authenticate_uri = http://$CONTROLLER_IP:5000" /etc/glance/glance-registry.conf
#[paste_deploy]
sed -i "/flavor = keystone/cflavor = keystone" /etc/glance/glance-registry.conf
#Populate the Image service database
su -s /bin/sh -c "glance-manage db_sync" glance
systemctl enable openstack-glance-api.service openstack-glance-registry.service
systemctl start openstack-glance-api.service openstack-glance-registry.service

F_setup_placement_about_ctrl_stein.sh:

set -x -e
#
mysql -N -uroot -p$ALL_PASS<<EOF
CREATE DATABASE placement;
GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' IDENTIFIED BY '$ALL_PASS';
GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' IDENTIFIED BY '$ALL_PASS';
EOF
source /root/admin-openrc
/usr/bin/expect << EOF
set timeout 15
spawn openstack user create --domain default --password-prompt placement
expect "User*"
send "$ALL_PASS\r"
expect "Repeat*"
send "$ALL_PASS\r"
expect eof
EOF
openstack role add --project service --user placement admin
openstack service create --name placement --description "Placement API" placement
openstack endpoint create --region RegionOne placement public http://$CONTROLLER_IP:8778
openstack endpoint create --region RegionOne placement internal http://$CONTROLLER_IP:8778
openstack endpoint create --region RegionOne placement admin http://$CONTROLLER_IP:8778
yum install openstack-placement-api -y
#
cp /etc/placement/placement.conf /etc/placement/placement.conf.bak
#[placement_database]
sed -i "/\[placement_database]$/aconnection = mysql+pymysql://placement:$ALL_PASS@$CONTROLLER_IP/placement" /etc/placement/placement.conf
#[api]
sed -i "/\[api]$/aauth_strategy = keystone" /etc/placement/placement.conf
#[keystone_authtoken]
sed -i "/\[keystone_authtoken]$/apassword = $ALL_PASS" /etc/placement/placement.conf
sed -i "/\[keystone_authtoken]$/ausername = placement" /etc/placement/placement.conf
sed -i "/\[keystone_authtoken]$/aproject_name = service" /etc/placement/placement.conf
sed -i "/\[keystone_authtoken]$/auser_domain_name = Default" /etc/placement/placement.conf
sed -i "/\[keystone_authtoken]$/aproject_domain_name = Default" /etc/placement/placement.conf
sed -i "/\[keystone_authtoken]$/aauth_type = password" /etc/placement/placement.conf
sed -i "/\[keystone_authtoken]$/amemcached_servers = $CONTROLLER_IP:11211" /etc/placement/placement.conf
sed -i "/\[keystone_authtoken]$/aauth_url = http://$CONTROLLER_IP:5000/v3" /etc/placement/placement.conf
su -s /bin/sh -c "placement-manage db sync" placement
systemctl restart httpd
#verify installation
source /root/admin-openrc
placement-status upgrade check
#install osc-placement
mkdir /root/.pip
touch /root/.pip/pip.conf
echo "[global]" >/root/.pip/pip.conf
echo "index-url=http://10.153.3.130/pypi/web/simple" >>/root/.pip/pip.conf
echo "" >>/root/.pip/pip.conf
echo "[install]" >>/root/.pip/pip.conf
echo "trusted-host=10.153.3.130" >>/root/.pip/pip.conf
pip install osc-placement
sed -i "/<\/VirtualHost>/i\ \ <Directory \/usr\/bin>" /etc/httpd/conf.d/00-placement-api.conf
sed -i "/<\/VirtualHost>/i\ \ \ \ <IfVersion >= 2.4>" /etc/httpd/conf.d/00-placement-api.conf
sed -i "/<\/VirtualHost>/i\ \ \ \ \ \ \ \ Require all granted" /etc/httpd/conf.d/00-placement-api.conf
sed -i "/<\/VirtualHost>/i\ \ \ \ <\/IfVersion>" /etc/httpd/conf.d/00-placement-api.conf
sed -i "/<\/VirtualHost>/i\ \ \ \ <IfVersion < 2.4>" /etc/httpd/conf.d/00-placement-api.conf
sed -i "/<\/VirtualHost>/i\ \ \ \ \ \ \ \ Order allow,deny" /etc/httpd/conf.d/00-placement-api.conf
sed -i "/<\/VirtualHost>/i\ \ \ \ \ \ \ \ Allow from all" /etc/httpd/conf.d/00-placement-api.conf
sed -i "/<\/VirtualHost>/i\ \ \ \ <\/IfVersion>" /etc/httpd/conf.d/00-placement-api.conf
sed -i "/<\/VirtualHost>/i\ \ <\/Directory>" /etc/httpd/conf.d/00-placement-api.conf
systemctl restart httpd
systemctl status httpd
openstack --os-placement-api-version 1.2 resource class list --sort-column name
openstack --os-placement-api-version 1.6 trait list --sort-column name

G_setup_nova_about_ctrl_stein.sh:

set -x -e
#
mysql -N -uroot -p$ALL_PASS<<EOF
DROP DATABASE if exists nova_api;
CREATE DATABASE if not exists nova_api;
DROP DATABASE if exists nova;
CREATE DATABASE if not exists nova;
DROP DATABASE if exists nova_cell0;
CREATE DATABASE if not exists nova_cell0;
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY '$ALL_PASS';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY '$ALL_PASS';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY '$ALL_PASS';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY '$ALL_PASS';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY '$ALL_PASS';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY '$ALL_PASS';
EOF
source /root/admin-openrc
/usr/bin/expect << EOF
set timeout 15
spawn openstack user create --domain default --password-prompt nova
expect "User*"
send "$ALL_PASS\r"
expect "Repeat*"
send "$ALL_PASS\r"
expect eof
EOF
openstack role add --project service --user nova admin
openstack service create --name nova
--description "OpenStack Compute" computeopenstack endpoint create --region RegionOne compute public http://$CONTROLLER_IP:8774/v2.1openstack endpoint create --region RegionOne compute internal http://$CONTROLLER_IP:8774/v2.1openstack endpoint create --region RegionOne compute admin http://$CONTROLLER_IP:8774/v2.1yum install -y openstack-nova-apiyum install -y openstack-nova-conductoryum install -y openstack-nova-novncproxyyum install -y openstack-nova-schedulercp /etc/nova/nova.conf /etc/nova/nova.conf.bak#[DEFAULT]sed -i "/\[DEFAULT]$/afirewall_driver = nova.virt.firewall.NoopFirewallDriver" /etc/nova/nova.confsed -i "/\[DEFAULT]$/ause_neutron = True" /etc/nova/nova.confsed -i "/\[DEFAULT]$/amy_ip = $CONTROLLER_IP" /etc/nova/nova.confsed -i "/\[DEFAULT]$/atransport_url = rabbit://openstack:$ALL_PASS@$CONTROLLER_IP" /etc/nova/nova.confsed -i "/\[DEFAULT]$/aenabled_apis = osapi_compute,metadata" /etc/nova/nova.conf#[api_database]sed -i "/\[api_database]$/aconnection = mysql+pymysql://nova:$ALL_PASS@$CONTROLLER_IP/nova_api" /etc/nova/nova.conf#[database]sed -i "/\[database]$/aconnection = mysql+pymysql://nova:$ALL_PASS@$CONTROLLER_IP/nova" /etc/nova/nova.conf#[api]sed -i "/\[api]$/aauth_strategy = keystone" /etc/nova/nova.conf#[keystone_authtoken]sed -i "/\[keystone_authtoken]$/apassword = $ALL_PASS" /etc/nova/nova.confsed -i "/\[keystone_authtoken]$/ausername = nova" /etc/nova/nova.confsed -i "/\[keystone_authtoken]$/aproject_name = service" /etc/nova/nova.confsed -i "/\[keystone_authtoken]$/auser_domain_name = Default" /etc/nova/nova.confsed -i "/\[keystone_authtoken]$/aproject_domain_name = Default" /etc/nova/nova.confsed -i "/\[keystone_authtoken]$/aauth_type = password" /etc/nova/nova.confsed -i "/\[keystone_authtoken]$/amemcached_servers = $CONTROLLER_IP:11211" /etc/nova/nova.confsed -i "/\[keystone_authtoken]$/aauth_url = http://$CONTROLLER_IP:5000" /etc/nova/nova.conf#[vnc]sed -i "/\[vnc]$/aserver_proxyclient_address = \$my_ip" /etc/nova/nova.confsed -i 
"/\[vnc]$/aserver_listen = \$my_ip" /etc/nova/nova.confsed -i "/\[vnc]$/aenabled = true" /etc/nova/nova.conf#[glance]sed -i "/\[glance]$/aapi_servers = http://$CONTROLLER_IP:9292" /etc/nova/nova.conf#[oslo_concurrency]sed -i "/\[oslo_concurrency]$/alock_path = /var/lib/nova/tmp" /etc/nova/nova.conf#[placement]sed -i "/\[placement]$/apassword = $ALL_PASS" /etc/nova/nova.confsed -i "/\[placement]$/ausername = placement" /etc/nova/nova.confsed -i "/\[placement]$/aauth_url = http://$CONTROLLER_IP:5000/v3" /etc/nova/nova.confsed -i "/\[placement]$/auser_domain_name = Default" /etc/nova/nova.confsed -i "/\[placement]$/aauth_type = password" /etc/nova/nova.confsed -i "/\[placement]$/aproject_name = service" /etc/nova/nova.confsed -i "/\[placement]$/aproject_domain_name = Default" /etc/nova/nova.confsed -i "/\[placement]$/aos_region_name = RegionOne" /etc/nova/nova.confsu -s /bin/sh -c "nova-manage api_db sync" novasu -s /bin/sh -c "nova-manage cell_v2 map_cell0" novasu -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" novasu -s /bin/sh -c "nova-manage db sync" novasu -s /bin/sh -c "nova-manage cell_v2 list_cells" novasystemctl enable openstack-nova-api.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.servicesystemctl start openstack-nova-api.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.servicesleep 3#Verify operationsource /root/admin-openrcopenstack compute service listsleep 1openstack catalog listsleep 1openstack image listsleep 1nova-status upgrade checksleep 4H_setup_neutron_about_ctrl_stein.shset -e -x#mysql -N -uroot -p$ALL_PASS<<EOFDROP DATABASE if exists neutron;CREATE DATABASE if not exists neutron;GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY '$ALL_PASS';GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY '$ALL_PASS';EOFsource /root/admin-openrc/usr/bin/expect << EOFspawn openstack user create 
--domain default --password-prompt neutronexpect "User*"send "$ALL_PASS\r"expect "Repeat*"send "$ALL_PASS\r"expect eofEOFopenstack role add --project service --user neutron adminopenstack service create --name neutron --description "OpenStack Networking" networkopenstack endpoint create --region RegionOne network public http://$CONTROLLER_IP:9696openstack endpoint create --region RegionOne network internal http://$CONTROLLER_IP:9696openstack endpoint create --region RegionOne network admin http://$CONTROLLER_IP:9696yum install -y openstack-neutronyum install -y openstack-neutron-ml2yum install -y openstack-neutron-openvswitchyum install -y ebtables#/etc/neutron/neutron.confcp /etc/neutron/neutron.conf /etc/neutron/neutron.conf.bak#[database]sed -i "/\[database]$/aconnection = mysql+pymysql://neutron:$ALL_PASS@$CONTROLLER_IP/neutron" /etc/neutron/neutron.conf#[DEFAULT]sed -i "/\[DEFAULT]$/anotify_nova_on_port_data_changes = true" /etc/neutron/neutron.confsed -i "/\[DEFAULT]$/anotify_nova_on_port_status_changes = true" /etc/neutron/neutron.confsed -i "/\[DEFAULT]$/aauth_strategy = keystone" /etc/neutron/neutron.confsed -i "/\[DEFAULT]$/atransport_url = rabbit://openstack:$ALL_PASS@$CONTROLLER_IP" /etc/neutron/neutron.confsed -i "/\[DEFAULT]$/aallow_overlapping_ips = true" /etc/neutron/neutron.confsed -i "/\[DEFAULT]$/aservice_plugins = router" /etc/neutron/neutron.confsed -i "/\[DEFAULT]$/acore_plugin = ml2" /etc/neutron/neutron.conf#[keystone_authtoken]sed -i "/\[keystone_authtoken]$/apassword = $ALL_PASS" /etc/neutron/neutron.confsed -i "/\[keystone_authtoken]$/ausername = neutron" /etc/neutron/neutron.confsed -i "/\[keystone_authtoken]$/aproject_name = service" /etc/neutron/neutron.confsed -i "/\[keystone_authtoken]$/auser_domain_name = Default" /etc/neutron/neutron.confsed -i "/\[keystone_authtoken]$/aproject_domain_name = Default" /etc/neutron/neutron.confsed -i "/\[keystone_authtoken]$/aauth_type = password" /etc/neutron/neutron.confsed -i 
"/\[keystone_authtoken]$/amemcached_servers = $CONTROLLER_IP:11211" /etc/neutron/neutron.confsed -i "/\[keystone_authtoken]$/aauth_url = http://$CONTROLLER_IP:5000" /etc/neutron/neutron.confsed -i "/\[keystone_authtoken]$/awww_authenticate_uri = http://$CONTROLLER_IP:5000" /etc/neutron/neutron.conf#[nova]sed -i "/\[nova]$/apassword = $ALL_PASS" /etc/neutron/neutron.confsed -i "/\[nova]$/ausername = nova" /etc/neutron/neutron.confsed -i "/\[nova]$/aproject_name = service" /etc/neutron/neutron.confsed -i "/\[nova]$/aregion_name = RegionOne" /etc/neutron/neutron.confsed -i "/\[nova]$/auser_domain_name = Default" /etc/neutron/neutron.confsed -i "/\[nova]$/aproject_domain_name = Default" /etc/neutron/neutron.confsed -i "/\[nova]$/aauth_type = password" /etc/neutron/neutron.confsed -i "/\[nova]$/aauth_url = http://$CONTROLLER_IP:5000" /etc/neutron/neutron.conf#[oslo_concurrency]sed -i "/\[oslo_concurrency]$/alock_path = /var/lib/neutron/tmp" /etc/neutron/neutron.confcp /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugins/ml2/ml2_conf.ini.bak#[ml2]sed -i "/\[ml2]$/aextension_drivers = port_security" /etc/neutron/plugins/ml2/ml2_conf.inised -i "/\[ml2]$/amechanism_drivers = openvswitch,l2population" /etc/neutron/plugins/ml2/ml2_conf.inised -i "/\[ml2]$/atenant_network_types = vxlan,vlan" /etc/neutron/plugins/ml2/ml2_conf.inised -i "/\[ml2]$/atype_drivers = flat,vlan,vxlan" /etc/neutron/plugins/ml2/ml2_conf.ini#[ml2_type_flat]sed -i "/\[ml2_type_flat]$/aflat_networks = provider" /etc/neutron/plugins/ml2/ml2_conf.ini#[ml2_type_vlan]sed -i "/\[ml2_type_vlan]$/anetwork_vlan_ranges = physicnet:1000:2000" /etc/neutron/plugins/ml2/ml2_conf.ini#[ml2_type_vxlan]sed -i "/\[ml2_type_vxlan]$/avni_ranges = 30000:31000" /etc/neutron/plugins/ml2/ml2_conf.ini#[securitygroup]sed -i "/\[securitygroup]$/aenable_ipset = true" /etc/neutron/plugins/ml2/ml2_conf.ini#/etc/neutron/plugins/ml2/openvswitch_agent.inicp /etc/neutron/plugins/ml2/openvswitch_agent.ini 
/etc/neutron/plugins/ml2/openvswitch_agent.ini.bak#[agent]#sed -i "/tunnel_types = /atunnel_types = vxlan" /etc/neutron/plugins/ml2/openvswitch_agent.ini#[ovs]#sed -i "/\[ovs]$/alocal_ip = 10.214.1.2" /etc/neutron/plugins/ml2/openvswitch_agent.ini#sed -i "/\[ovs]$/atun_peer_patch_port = patch-int" /etc/neutron/plugins/ml2/openvswitch_agent.ini#sed -i "/\[ovs]$/aint_peer_patch_port = patch-tun" /etc/neutron/plugins/ml2/openvswitch_agent.ini#sed -i "/\[ovs]$/atunnel_bridge = br-tun" /etc/neutron/plugins/ml2/openvswitch_agent.ini#[securitygroup]sed -i "/\[securitygroup]$/aenable_security_group = true" /etc/neutron/plugins/ml2/openvswitch_agent.inised -i "/\[securitygroup]$/afirewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver" /etc/neutron/plugins/ml2/openvswitch_agent.inicp /etc/neutron/l3_agent.ini /etc/neutron/l3_agent.ini.baksed -i "/\[DEFAULT]$/ainterface_driver = neutron.agent.linux.interface.OVSInterfaceDriver" /etc/neutron/l3_agent.inicp /etc/neutron/dhcp_agent.ini /etc/neutron/dhcp_agent.ini.baksed -i "/\[DEFAULT]$/aenable_isolated_metadata = true" /etc/neutron/l3_agent.inised -i "/\[DEFAULT]$/adhcp_driver = neutron.agent.linux.dhcp.Dnsmasq" /etc/neutron/dhcp_agent.inised -i "/\[DEFAULT]$/ainterface_driver = neutron.agent.linux.interface.OVSInterfaceDriver" /etc/neutron/dhcp_agent.inised -i "/force_metadata = /aforce_metadata = true" /etc/neutron/dhcp_agent.inicp /etc/neutron/metadata_agent.ini /etc/neutron/metadata_agent.ini.baksed -i "/\[DEFAULT]$/ametadata_proxy_shared_secret = $ALL_PASS" /etc/neutron/metadata_agent.inised -i "/\[DEFAULT]$/anova_metadata_host = $CONTROLLER_IP" /etc/neutron/metadata_agent.ini#Edit /etc/nova/nova.conf file and perform the fllowing actionssed -i "/\[neutron]$/ametadata_proxy_shared_secret = $ALL_PASS" /etc/nova/nova.confsed -i "/\[neutron]$/aservice_metadata_proxy = true" /etc/nova/nova.confsed -i "/\[neutron]$/apassword = $ALL_PASS" /etc/nova/nova.confsed -i "/\[neutron]$/ausername = 
neutron" /etc/nova/nova.confsed -i "/\[neutron]$/aproject_name = service" /etc/nova/nova.confsed -i "/\[neutron]$/aregion_name = RegionOne" /etc/nova/nova.confsed -i "/\[neutron]$/auser_domain_name = Default" /etc/nova/nova.confsed -i "/\[neutron]$/aproject_domain_name = Default" /etc/nova/nova.confsed -i "/\[neutron]$/aauth_type = password" /etc/nova/nova.confsed -i "/\[neutron]$/aauth_url = http://$CONTROLLER_IP:5000" /etc/nova/nova.confsed -i "/\[neutron]$/aurl = http://$CONTROLLER_IP:9696" /etc/nova/nova.confln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.inisu -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutronsystemctl restart openstack-nova-api.servicesystemctl enable neutron-server.service \ neutron-openvswitch-agent.service neutron-dhcp-agent.service \ neutron-metadata-agent.service neutron-l3-agent.servicesystemctl start neutron-server.service \ neutron-openvswitch-agent.service neutron-dhcp-agent.service \ neutron-metadata-agent.service neutron-l3-agent.servicesleep 4I_setup_dashboard_about_ctrl_stein.shset -x -eyum install openstack-dashboard -y##/etc/openstack-dashboard/local_settingscp /etc/openstack-dashboard/local_settings /etc/openstack-dashboard/local_settings.baksed -i "/OPENSTACK_HOST = /cOPENSTACK_HOST = \"$CONTROLLER_IP\"" /etc/openstack-dashboard/local_settingssed -i "/ALLOWED_HOSTS = /cALLOWED_HOSTS = ['*']" /etc/openstack-dashboard/local_settings#SESSION_ENGINE = 'django.contrib.sessions.backends.cache' #CACHESsed -i "/^CACHES =/iSESSION_ENGINE = 'django.contrib.sessions.backends.cache'" /etc/openstack-dashboard/local_settingssed -i "/^[ \t]*'BACKEND'/a\\ \t'LOCATION': '$CONTROLLER_IP:11211'," /etc/openstack-dashboard/local_settingssed -i 's/django.core.cache.backends.locmem.LocMemCache/django.core.cache.backends.memcached.MemcachedCache/g' /etc/openstack-dashboard/local_settings#sed -i 
"/OPENSTACK_KEYSTONE_URL/cOPENSTACK_KEYSTONE_URL = \"http://%s:5000/v3\" % OPENSTACK_HOST" /etc/openstack-dashboard/local_settings#sed -i "/OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT/cOPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True" /etc/openstack-dashboard/local_settings#OPENSTACK_API_VERSIONS = {# "identity": 3,# "image": 2,# "volume": 2,#}sed -i "s/#OPENSTACK_API_VERSIONS/OPENSTACK_API_VERSIONS/g" /etc/openstack-dashboard/local_settingssed -i "/# \"identity\": 3,/c\\ \"identity\": 3," /etc/openstack-dashboard/local_settingssed -i "/# \"image\": 2,/c\\ \"image\": 2," /etc/openstack-dashboard/local_settingssed -i "/# \"volume\": 2,/c\\ \"volume\": 2," /etc/openstack-dashboard/local_settingssed -i "/# \"compute\": 2,/a}" /etc/openstack-dashboard/local_settings#sed -i "/#OPENSTACK_KEYSTONE_DEFAULT_DOMAIN/cOPENSTACK_KEYSTONE_DEFAULT_DOMAIN = \"Default\"" /etc/openstack-dashboard/local_settingssed -i "/OPENSTACK_KEYSTONE_DEFAULT_ROLE/cOPENSTACK_KEYSTONE_DEFAULT_ROLE = \"user\"" /etc/openstack-dashboard/local_settings#OPENSTACK_NEUTRON_NETWORK = {# ...# 'enable_router': False,# 'enable_quotas': False,# 'enable_distributed_router': False,# 'enable_ha_router': False,# 'enable_lb': False,# 'enable_firewall': False,# 'enable_vpn': False,# 'enable_fip_topology_check': False,#}##/etc/httpd/conf.d/openstack-dashboard.conf#cp /etc/httpd/conf.d/openstack-dashboard.conf /etc/httpd/conf.d/openstack-dashboard.conf.baksed -i "/WSGIScriptAlias/iWSGIApplicationGroup %{GLOBAL}" /etc/httpd/conf.d/openstack-dashboard.conf#systemctl restart httpd.service memcached.servicesystemctl status httpd memcachedsleep 3#Fwaasyum install openstack-neutron-fwaas -yneutron-db-manage --subproject neutron-fwaas upgrade head#lbaasv2yum install openstack-neutron-lbaas -yneutron-db-manage --subproject neutron-lbaas upgrade head#vpnaasyum install openstack-neutron-vpnaas -yneutron-db-manage --subproject neutron-vpnaas upgrade head

September 8, 2021 · 10 min · jiezi

关于openstack:OpenStackTrain版本ControllerNetworkShell脚本部署

Notes on the Train-version Controller+Network deployment: a colleague reported that virtual machines could not be created after running the script. Investigation showed that on CentOS some configuration sections are absent from the stock config files, so writing the corresponding options into them failed. The script was revised once for this, but the revision has not been re-tested.

#!/bin/bash
#Author: -- Created: 2021.4
#Modified -- Modified: 2021
echo -e "\033[45;37m Openstack Train controller node start to install \033[0m"
#===Variable===
CTRL_HOST_NAME=`cat /etc/hostname | awk '{print $1}'`
ALL_PASS="123456"
CURDATE=`date`
#Get IP address
ipNum=`ip a|grep inet|grep -v 127.0.0.1|grep -v inet6|awk '{print$2}'|awk -F '/' '{print$1}'|tr -d " "|wc -l`
#echo "This host IP address:"
#ip a|grep inet|grep -v 127.0.0.1|grep -v inet6|awk '{print$2}'|awk -F '/' '{print$1}'|tr -d " "
echo "This host IP address: `ip a|grep inet|grep -v 127.0.0.1|grep -v inet6|awk '{print$2}'|awk -F '/' '{print$1}'|tr -d " "`"
if [ "$ipNum" -eq 0 ];then
    echo "This host does not have IP address, Please set it."
    exit 1
fi
if [ "$ipNum" -gt 1 ];then
    echo "This host has multiple IP addresses !"
    echo "Which one you choose, please enter the number of rows."
    while :
    do
        read -p "The number of row is : " rowNum
        if [[ "$rowNum" =~ ^[0-9]+$ ]]; then
            if [[ "$rowNum" -gt $ipNum ]]; then
                echo "Invalid rows!"
            elif [[ "$rowNum" -le 0 ]]; then
                echo "Invalid rows!"
            else
                CONTROLLER_IP=`ip a |grep inet|grep -v 127.0.0.1|grep -v inet6|awk '{print$2}'|awk -F '/' '{print$1}'|awk "NR==$rowNum"`
                break
            fi
        else
            echo "Invalid rows!"
        fi
    done
fi
if [ "$ipNum" -eq 1 ]; then
    CONTROLLER_IP=`ip a |grep inet|grep -v 127.0.0.1|grep -v inet6|awk '{print$2}'|tr -d ' '|awk -F '/' '{print$1}'`
fi
set -e
echo ""
echo "Controller'ip is : $CONTROLLER_IP"
echo "Controller'name is : $CTRL_HOST_NAME"
echo "Openstack all passwords are : $ALL_PASS"
echo "Starting time : $CURDATE"
echo ""
#echo "Your can cancel in 10s by 'Ctrl + D'"
echo -e "\033[45;37m Your can cancel within 10s by 'Ctrl + C' \033[0m"
echo -n "Wait for 10 seconds "
for i in $(seq 10); do echo -n "."; sleep 1; done
echo
#sleep 10
echo "end"
set -x
#===Environment===
yum install vim -y
yum install net-tools -y
yum install ftp -y
yum install expect -y
yum install tcpdump -y
yum install lldpad -y
yum install htop -y
yum install bwm-ng -y
yum install python-pip -y
echo "$CONTROLLER_IP $CTRL_HOST_NAME" >>/etc/hosts
systemctl stop firewalld
systemctl disable firewalld
cp /etc/selinux/config /etc/selinux/config.bak
sed -i "/SELINUX=enforcing/cSELINUX=disabled" /etc/selinux/config
setenforce 0
cp /etc/chrony.conf /etc/chrony.conf.bak
sed -i "/server 0.centos.pool.ntp.org iburst/cserver 10.165.7.181 iburst" /etc/chrony.conf
sed -i "/centos.pool.ntp.org/d" /etc/chrony.conf
systemctl enable chronyd
systemctl restart chronyd
chronyc sources
timedatectl set-timezone Asia/Shanghai
echo "The time now is : $CURDATE"
yum install python-openstackclient -y
yum install openstack-selinux -y
#database
yum install mariadb mariadb-server python2-PyMySQL -y
touch /etc/my.cnf.d/openstack.cnf
echo "[mysqld]" >>/etc/my.cnf.d/openstack.cnf
echo "bind-address = $CONTROLLER_IP" >>/etc/my.cnf.d/openstack.cnf
echo "" >>/etc/my.cnf.d/openstack.cnf
echo "default-storage-engine = innodb" >>/etc/my.cnf.d/openstack.cnf
echo "innodb_file_per_table = on" >>/etc/my.cnf.d/openstack.cnf
echo "max_connections = 4096" >>/etc/my.cnf.d/openstack.cnf
echo "collation-server = utf8_general_ci" >>/etc/my.cnf.d/openstack.cnf
echo "character-set-server = utf8" >>/etc/my.cnf.d/openstack.cnf
systemctl enable mariadb.service
systemctl start
mariadb.service
systemctl status mariadb.service
mysql_secure_installation <<EOF
y
$ALL_PASS
$ALL_PASS
y
y
y
y
EOF
#Message queue
yum install rabbitmq-server -y
systemctl enable rabbitmq-server.service
systemctl start rabbitmq-server.service
systemctl status rabbitmq-server.service
rabbitmqctl add_user openstack $ALL_PASS
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
#Memcached
yum install -y memcached
yum install -y python-memcached
cp /etc/sysconfig/memcached /etc/sysconfig/memcached.bak
sed -i "/OPTIONS=\"-l 127.0.0.1,::1\"/cOPTIONS=\"-l 127.0.0.1,::1,$CONTROLLER_IP\"" /etc/sysconfig/memcached
systemctl enable memcached.service
systemctl start memcached.service
systemctl status memcached.service
#ETCD
yum install etcd -y
cp /etc/etcd/etcd.conf /etc/etcd/etcd.conf.bak
sed -i '/ETCD_DATA_DIR/cETCD_DATA_DIR="/var/lib/etcd/default.etcd"' /etc/etcd/etcd.conf
sed -i "/ETCD_LISTEN_PEER_URLS/cETCD_LISTEN_PEER_URLS=\"http://$CONTROLLER_IP:2380\"" /etc/etcd/etcd.conf
sed -i "/ETCD_LISTEN_CLIENT_URLS/cETCD_LISTEN_CLIENT_URLS=\"http://$CONTROLLER_IP:2379\"" /etc/etcd/etcd.conf
sed -i "/ETCD_NAME/cETCD_NAME=\"$CTRL_HOST_NAME\"" /etc/etcd/etcd.conf
sed -i "/ETCD_INITIAL_ADVERTISE_PEER_URLS/cETCD_INITIAL_ADVERTISE_PEER_URLS=\"http://$CONTROLLER_IP:2380\"" /etc/etcd/etcd.conf
sed -i "/ETCD_ADVERTISE_CLIENT_URLS/cETCD_ADVERTISE_CLIENT_URLS=\"http://$CONTROLLER_IP:2379\"" /etc/etcd/etcd.conf
sed -i "/ETCD_INITIAL_CLUSTER=/cETCD_INITIAL_CLUSTER=\"$CTRL_HOST_NAME=http://$CONTROLLER_IP:2380\"" /etc/etcd/etcd.conf
sed -i '/ETCD_INITIAL_CLUSTER_TOKEN/cETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-01"' /etc/etcd/etcd.conf
sed -i '/ETCD_INITIAL_CLUSTER_STATE/cETCD_INITIAL_CLUSTER_STATE="new"' /etc/etcd/etcd.conf
systemctl enable etcd
systemctl start etcd
systemctl status etcd
#===Identity service===
mysql -N -uroot -p$ALL_PASS<<EOF
DROP DATABASE if exists keystone;
CREATE DATABASE if not exists keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY '$ALL_PASS';
GRANT ALL PRIVILEGES ON keystone.* TO
'keystone'@'%' IDENTIFIED BY '$ALL_PASS';
EOF
yum install -y openstack-keystone
yum install -y httpd
yum install -y mod_wsgi
cp /etc/keystone/keystone.conf /etc/keystone/keystone.conf.bak
if [ "`cat /etc/keystone/keystone.conf|grep '^\[database\]'`" != "[database]" ]; then
    echo "[database]" >> /etc/keystone/keystone.conf
else
    echo "We have this!"
fi
#[database]
sed -i "/\[database]$/aconnection = mysql+pymysql://keystone:$ALL_PASS@$CONTROLLER_IP/keystone" /etc/keystone/keystone.conf
#[token]
if [ "`cat /etc/keystone/keystone.conf|grep '^\[token\]'`" != "[token]" ]; then
    echo "[token]" >> /etc/keystone/keystone.conf
else
    echo "We have this!"
fi
sed -i '/\[token]$/aprovider = fernet' /etc/keystone/keystone.conf
#Populate the Identity service database
su -s /bin/sh -c "keystone-manage db_sync" keystone
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
#
keystone-manage bootstrap --bootstrap-password $ALL_PASS \
  --bootstrap-admin-url http://$CONTROLLER_IP:5000/v3/ \
  --bootstrap-internal-url http://$CONTROLLER_IP:5000/v3/ \
  --bootstrap-public-url http://$CONTROLLER_IP:5000/v3/ \
  --bootstrap-region-id RegionOne
#ServerName
sed -i "/#ServerName/aServerName $CONTROLLER_IP" /etc/httpd/conf/httpd.conf
#Creating a soft link
ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
systemctl enable httpd.service
systemctl start httpd.service
#systemctl status httpd.service
#Configure the administrative account
export OS_USERNAME=admin
export OS_PASSWORD=$ALL_PASS
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://$CONTROLLER_IP:5000/v3
export OS_IDENTITY_API_VERSION=3
#Create a domain, projects, users, and roles
openstack domain create --description "An Example Domain" example
openstack project create --domain default --description "Service Project" service
openstack project create --domain default --description "Demo Project"
myproject/usr/bin/expect << EOFset timeout 15spawn openstack user create --domain default --password-prompt myuserexpect "User*"send "$ALL_PASS\r"expect "Repeat *"send "$ALL_PASS\r"expect eofEOFopenstack role create myroleopenstack role add --project myproject --user myuser myroleunset OS_AUTH_URL OS_PASSWORD/usr/bin/expect << EOFset timeout 15spawn openstack --os-auth-url http://$CONTROLLER_IP:5000/v3 \--os-project-domain-name Default --os-user-domain-name Default \--os-project-name admin --os-username admin token issueexpect "*Password*"send "$ALL_PASS\r"expect eofEOF/usr/bin/expect << EOFset timeout 15spawn openstack --os-auth-url http://controller:5000/v3 \--os-project-domain-name Default --os-user-domain-name Default \--os-project-name myproject --os-username myuser token issueexpect "*Password*"send "$ALL_PASS\r"expect eofEOF#Creating admin-openrctouch /root/admin-openrcecho "export OS_PROJECT_DOMAIN_NAME=Default" >/root/admin-openrcecho "export OS_USER_DOMAIN_NAME=Default" >>/root/admin-openrcecho "export OS_PROJECT_NAME=admin" >>/root/admin-openrcecho "export OS_USERNAME=admin" >>/root/admin-openrcecho "export OS_PASSWORD=$ALL_PASS" >>/root/admin-openrcecho "export OS_AUTH_URL=http://$CONTROLLER_IP:5000/v3" >>/root/admin-openrcecho "export OS_IDENTITY_API_VERSION=3" >>/root/admin-openrcecho "export OS_IMAGE_API_VERSION=2" >>/root/admin-openrc#Creating demo-openrctouch /root/demo-openrcecho "export OS_PROJECT_DOMAIN_NAME=Default" >/root/demo-openrcecho "export OS_USER_DOMAIN_NAME=Default" >>/root/demo-openrcecho "export OS_PROJECT_NAME=myproject" >>/root/demo-openrcecho "export OS_USERNAME=myuser" >>/root/demo-openrcecho "export OS_PASSWORD=$ALL_PASS" >>/root/demo-openrcecho "export OS_AUTH_URL=http://$CONTROLLER_IP:5000/v3" >>/root/demo-openrcecho "export OS_IDENTITY_API_VERSION=3" >>/root/demo-openrcecho "export OS_IMAGE_API_VERSION=2" >>/root/demo-openrcsource /root/admin-openrcopenstack token issuesleep 2#===3.Image service===#Database operations: 
glance
mysql -N -uroot -p$ALL_PASS<<EOF
DROP DATABASE if exists glance;
CREATE DATABASE if not exists glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY '$ALL_PASS';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY '$ALL_PASS';
EOF
source /root/admin-openrc
/usr/bin/expect << EOF
set timeout 15
spawn openstack user create --domain default --password-prompt glance
expect "User*"
send "$ALL_PASS\r"
expect "Repeat*"
send "$ALL_PASS\r"
expect eof
EOF
openstack role add --project service --user glance admin
openstack service create --name glance --description "OpenStack Image" image
openstack endpoint create --region RegionOne image public http://$CONTROLLER_IP:9292
openstack endpoint create --region RegionOne image internal http://$CONTROLLER_IP:9292
openstack endpoint create --region RegionOne image admin http://$CONTROLLER_IP:9292
yum install openstack-glance -y
cp /etc/glance/glance-api.conf /etc/glance/glance-api.conf.bak
if [ "`cat /etc/glance/glance-api.conf|grep '^\[database\]'`" != "[database]" ]; then
    echo "[database]" >> /etc/glance/glance-api.conf
else
    echo "We have this!"
fi
#[database]
sed -i "/\[database]$/aconnection = mysql+pymysql://glance:$ALL_PASS@$CONTROLLER_IP/glance" /etc/glance/glance-api.conf
if [ "`cat /etc/glance/glance-api.conf|grep '^\[keystone_authtoken\]'`" != "[keystone_authtoken]" ]; then
    echo "[keystone_authtoken]" >> /etc/glance/glance-api.conf
else
    echo "We have this!"
fi
#[keystone_authtoken]
sed -i "/\[keystone_authtoken]$/apassword = $ALL_PASS" /etc/glance/glance-api.conf
sed -i "/\[keystone_authtoken]$/ausername = glance" /etc/glance/glance-api.conf
sed -i "/\[keystone_authtoken]$/aproject_name = service" /etc/glance/glance-api.conf
sed -i "/\[keystone_authtoken]$/auser_domain_name = Default" /etc/glance/glance-api.conf
sed -i "/\[keystone_authtoken]$/aproject_domain_name = Default" /etc/glance/glance-api.conf
sed -i "/\[keystone_authtoken]$/aauth_type = password" /etc/glance/glance-api.conf
sed -i "/\[keystone_authtoken]$/amemcached_servers = $CONTROLLER_IP:11211" /etc/glance/glance-api.conf
sed -i "/\[keystone_authtoken]$/aauth_url = http://$CONTROLLER_IP:5000" /etc/glance/glance-api.conf
sed -i "/\[keystone_authtoken]$/awww_authenticate_uri = http://$CONTROLLER_IP:5000" /etc/glance/glance-api.conf
#[paste_deploy]
if [ "`cat /etc/glance/glance-api.conf|grep '^\[paste_deploy\]'`" != "[paste_deploy]" ]; then
    echo "[paste_deploy]" >> /etc/glance/glance-api.conf
else
    echo "We have this!"
fi
sed -i "/\[paste_deploy]$/aflavor = keystone" /etc/glance/glance-api.conf
#[glance_store]
if [ "`cat /etc/glance/glance-api.conf|grep '^\[glance_store\]'`" != "[glance_store]" ]; then
    echo "[glance_store]" >> /etc/glance/glance-api.conf
else
    echo "We have this!"
fi
sed -i "/\[glance_store]$/afilesystem_store_datadir = /var/lib/glance/images/" /etc/glance/glance-api.conf
sed -i "/\[glance_store]$/adefault_store = file" /etc/glance/glance-api.conf
sed -i "/\[glance_store]$/astores = file,http" /etc/glance/glance-api.conf
#copy glance-registry.conf
cp /etc/glance/glance-registry.conf /etc/glance/glance-registry.conf.bak
#[database]
if [ "`cat /etc/glance/glance-registry.conf|grep '^\[database\]'`" != "[database]" ]; then
    echo "[database]" >> /etc/glance/glance-registry.conf
fi
sed -i "/\[database]$/aconnection = mysql+pymysql://glance:$ALL_PASS@$CONTROLLER_IP/glance" /etc/glance/glance-registry.conf
if [ "`cat /etc/glance/glance-registry.conf|grep '^\[keystone_authtoken\]'`" != "[keystone_authtoken]" ]; then
    echo "[keystone_authtoken]" >> /etc/glance/glance-registry.conf
fi
sed -i "/\[keystone_authtoken]$/apassword = $ALL_PASS" /etc/glance/glance-registry.conf
sed -i "/\[keystone_authtoken]$/ausername = glance" /etc/glance/glance-registry.conf
sed -i "/\[keystone_authtoken]$/aproject_name = service" /etc/glance/glance-registry.conf
sed -i "/\[keystone_authtoken]$/auser_domain_name = Default" /etc/glance/glance-registry.conf
sed -i "/\[keystone_authtoken]$/aproject_domain_name = Default" /etc/glance/glance-registry.conf
sed -i "/\[keystone_authtoken]$/aauth_type = password" /etc/glance/glance-registry.conf
sed -i "/\[keystone_authtoken]$/amemcached_servers = $CONTROLLER_IP:11211" /etc/glance/glance-registry.conf
sed -i "/\[keystone_authtoken]$/aauth_url = http://$CONTROLLER_IP:5000" /etc/glance/glance-registry.conf
sed -i "/\[keystone_authtoken]$/awww_authenticate_uri = http://$CONTROLLER_IP:5000" /etc/glance/glance-registry.conf
if [ "`cat /etc/glance/glance-registry.conf|grep '^\[paste_deploy\]'`" != "[paste_deploy]" ]; then
    echo "[paste_deploy]" >> /etc/glance/glance-registry.conf
fi
sed -i "/\[paste_deploy]$/aflavor = keystone" /etc/glance/glance-registry.conf
su -s /bin/sh -c "glance-manage db_sync" glance
systemctl enable openstack-glance-api.service openstack-glance-registry.service
systemctl start openstack-glance-api.service openstack-glance-registry.service
#===Placement service====
mysql -N -uroot -p$ALL_PASS<<EOF
CREATE DATABASE placement;
GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' IDENTIFIED BY '$ALL_PASS';
GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' IDENTIFIED BY '$ALL_PASS';
EOF
source /root/admin-openrc
/usr/bin/expect << EOF
set timeout 15
spawn openstack user create --domain default --password-prompt placement
expect "User*"
send "$ALL_PASS\r"
expect "Repeat*"
send "$ALL_PASS\r"
expect eof
EOF
openstack role add --project service --user placement admin
openstack service create --name placement --description "Placement API" placement
openstack endpoint create --region RegionOne placement public http://$CONTROLLER_IP:8778
openstack endpoint create --region RegionOne placement internal http://$CONTROLLER_IP:8778
openstack endpoint create --region RegionOne placement admin http://$CONTROLLER_IP:8778
yum install openstack-placement-api -y
cp /etc/placement/placement.conf /etc/placement/placement.conf.bak
if [ `cat /etc/placement/placement.conf|grep
'^\[placement_database\]'` != "[placement_database]" ]; then echo "[placement_database]" >> /etc/placement/placement.conffised -i "/\[placement_database]$/aconnection = mysql+pymysql://placement:$ALL_PASS@$CONTROLLER_IP/placement" /etc/placement/placement.confif [ `cat /etc/placement/placement.conf|grep '^\[api\]'` != "[api]" ]; then echo "[api]" >> /etc/placement/placement.conffised -i "/\[api]$/aauth_strategy = keystone" /etc/placement/placement.confif [ `cat /etc/placement/placement.conf|grep '^\[keystone_authtoken\]'` != "[keystone_authtoken]" ]; then echo "[keystone_authtoken]" >> /etc/placement/placement.conffised -i "/\[keystone_authtoken]$/apassword = $ALL_PASS" /etc/placement/placement.confsed -i "/\[keystone_authtoken]$/ausername = placement" /etc/placement/placement.confsed -i "/\[keystone_authtoken]$/aproject_name = service" /etc/placement/placement.confsed -i "/\[keystone_authtoken]$/auser_domain_name = Default" /etc/placement/placement.confsed -i "/\[keystone_authtoken]$/aproject_domain_name = Default" /etc/placement/placement.confsed -i "/\[keystone_authtoken]$/aauth_type = password" /etc/placement/placement.confsed -i "/\[keystone_authtoken]$/amemcached_servers = $CONTROLLER_IP:11211" /etc/placement/placement.confsed -i "/\[keystone_authtoken]$/aauth_url = http://$CONTROLLER_IP:5000/v3" /etc/placement/placement.confsu -s /bin/sh -c "placement-manage db sync" placementsystemctl restart httpd#verify installationsource /root/admin-openrcplacement-status upgrade check#install osc-placementmkdir /root/.piptouch /root/.pip/pip.confecho "[global]" >/root/.pip/pip.confecho "index-url=http://10.153.3.130/pypi/web/simple" >>/root/.pip/pip.confecho "" >>/root/.pip/pip.confecho "[install]" >>/root/.pip/pip.confecho "trusted-host=10.153.3.130" >>/root/.pip/pip.confpip install osc-placementsed -i "/<\/VirtualHost>/i\ \ <Directory \/usr\/bin>" /etc/httpd/conf.d/00-placement-api.confsed -i "/<\/VirtualHost>/i\ \ \ \ <IfVersion >= 2.4>" 
/etc/httpd/conf.d/00-placement-api.confsed -i "/<\/VirtualHost>/i\ \ \ \ \ \ \ \ Require all granted" /etc/httpd/conf.d/00-placement-api.confsed -i "/<\/VirtualHost>/i\ \ \ \ <\/IfVersion>" /etc/httpd/conf.d/00-placement-api.confsed -i "/<\/VirtualHost>/i\ \ \ \ <IfVersion < 2.4>" /etc/httpd/conf.d/00-placement-api.confsed -i "/<\/VirtualHost>/i\ \ \ \ \ \ \ \ Order allow,deny" /etc/httpd/conf.d/00-placement-api.confsed -i "/<\/VirtualHost>/i\ \ \ \ \ \ \ \ Allow from all" /etc/httpd/conf.d/00-placement-api.confsed -i "/<\/VirtualHost>/i\ \ \ \ <\/IfVersion>" /etc/httpd/conf.d/00-placement-api.confsed -i "/<\/VirtualHost>/i\ \ <\/Directory>" /etc/httpd/conf.d/00-placement-api.confsystemctl restart httpdsystemctl status httpdopenstack --os-placement-api-version 1.2 resource class list --sort-column nameopenstack --os-placement-api-version 1.6 trait list --sort-column name#===Compute service===mysql -N -uroot -p$ALL_PASS<<EOFDROP DATABASE if exists nova_api;CREATE DATABASE if not exists nova_api;DROP DATABASE if exists nova;CREATE DATABASE if not exists nova;DROP DATABASE if exists nova_cell0;CREATE DATABASE if not exists nova_cell0;GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY '$ALL_PASS';GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY '$ALL_PASS';GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY '$ALL_PASS';GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY '$ALL_PASS';GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY '$ALL_PASS';GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY '$ALL_PASS';EOFsource /root/admin-openrc/usr/bin/expect << EOFset timeout 15spawn openstack user create --domain default --password-prompt novaexpect "User*"send "$ALL_PASS\r"expect "Repeat*"send "$ALL_PASS\r"expect eofEOFopenstack role add --project service --user nova adminopenstack service create --name nova --description "OpenStack Compute" computeopenstack endpoint create --region 
RegionOne compute public http://$CONTROLLER_IP:8774/v2.1openstack endpoint create --region RegionOne compute internal http://$CONTROLLER_IP:8774/v2.1openstack endpoint create --region RegionOne compute admin http://$CONTROLLER_IP:8774/v2.1yum install -y openstack-nova-apiyum install -y openstack-nova-conductoryum install -y openstack-nova-novncproxyyum install -y openstack-nova-schedulercp /etc/nova/nova.conf /etc/nova/nova.conf.bak#[DEFAULT]if [ `cat /etc/nova/nova.conf|grep '^\[DEFAULT\]'` != "[DEFAULT]" ]; then echo "[DEFAULT]" >> /etc/nova/nova.conffised -i "/\[DEFAULT]$/afirewall_driver = nova.virt.firewall.NoopFirewallDriver" /etc/nova/nova.confsed -i "/\[DEFAULT]$/ause_neutron = True" /etc/nova/nova.confsed -i "/\[DEFAULT]$/amy_ip = $CONTROLLER_IP" /etc/nova/nova.confsed -i "/\[DEFAULT]$/atransport_url = rabbit://openstack:$ALL_PASS@$CONTROLLER_IP:5672" /etc/nova/nova.confsed -i "/\[DEFAULT]$/aenabled_apis = osapi_compute,metadata" /etc/nova/nova.conf#[api_database]if [ `cat /etc/nova/nova.conf|grep '^\[api_database\]'` != "[api_database]" ]; then echo "[api_database]" >> /etc/nova/nova.conffised -i "/\[api_database]$/aconnection = mysql+pymysql://nova:$ALL_PASS@$CONTROLLER_IP/nova_api" /etc/nova/nova.conf#[database]if [ `cat /etc/nova/nova.conf|grep '^\[database\]'` != "[database]" ]; then echo "[database]" >> /etc/nova/nova.conffised -i "/\[database]$/aconnection = mysql+pymysql://nova:$ALL_PASS@$CONTROLLER_IP/nova" /etc/nova/nova.conf#[api]if [ `cat /etc/nova/nova.conf|grep '^\[api\]'` != "[api]" ]; then echo "[api]" >> /etc/nova/nova.conffised -i "/\[api]$/aauth_strategy = keystone" /etc/nova/nova.conf#[keystone_authtoken]if [ `cat /etc/nova/nova.conf|grep '^\[keystone_authtoken\]'` != "[keystone_authtoken]" ]; then echo "[keystone_authtoken]" >> /etc/nova/nova.conffised -i "/\[keystone_authtoken]$/apassword = $ALL_PASS" /etc/nova/nova.confsed -i "/\[keystone_authtoken]$/ausername = nova" /etc/nova/nova.confsed -i "/\[keystone_authtoken]$/aproject_name = 
service" /etc/nova/nova.confsed -i "/\[keystone_authtoken]$/auser_domain_name = Default" /etc/nova/nova.confsed -i "/\[keystone_authtoken]$/aproject_domain_name = Default" /etc/nova/nova.confsed -i "/\[keystone_authtoken]$/aauth_type = password" /etc/nova/nova.confsed -i "/\[keystone_authtoken]$/amemcached_servers = $CONTROLLER_IP:11211" /etc/nova/nova.confsed -i "/\[keystone_authtoken]$/aauth_url = http://$CONTROLLER_IP:5000" /etc/nova/nova.confsed -i "/\[keystone_authtoken]$/awww_authenticate_uri = http://$CONTROLLER_IP:5000/" /etc/nova/nova.conf#[vnc]if [ `cat /etc/nova/nova.conf|grep '^\[vnc\]'` != "[vnc]" ]; then echo "[vnc]" >> /etc/nova/nova.conffised -i "/\[vnc]$/aserver_proxyclient_address = \$my_ip" /etc/nova/nova.confsed -i "/\[vnc]$/aserver_listen = \$my_ip" /etc/nova/nova.confsed -i "/\[vnc]$/aenabled = true" /etc/nova/nova.conf#[glance]if [ `cat /etc/nova/nova.conf|grep '^\[glance\]'` != "[glance]" ]; then echo "[glance]" >> /etc/nova/nova.conffised -i "/\[glance]$/aapi_servers = http://$CONTROLLER_IP:9292" /etc/nova/nova.conf#[oslo_concurrency]if [ `cat /etc/nova/nova.conf|grep '^\[oslo_concurrency\]'` != "[oslo_concurrency]" ]; then echo "[oslo_concurrency]" >> /etc/nova/nova.conffised -i "/\[oslo_concurrency]$/alock_path = \/var\/lib\/nova\/tmp" /etc/nova/nova.conf#[placement]if [ `cat /etc/nova/nova.conf|grep '^\[placement\]'` != "[placement]" ]; then echo "[placement]" >> /etc/nova/nova.conffised -i "/\[placement]$/apassword = $ALL_PASS" /etc/nova/nova.confsed -i "/\[placement]$/ausername = placement" /etc/nova/nova.confsed -i "/\[placement]$/aauth_url = http://$CONTROLLER_IP:5000/v3" /etc/nova/nova.confsed -i "/\[placement]$/auser_domain_name = Default" /etc/nova/nova.confsed -i "/\[placement]$/aauth_type = password" /etc/nova/nova.confsed -i "/\[placement]$/aproject_name = service" /etc/nova/nova.confsed -i "/\[placement]$/aproject_domain_name = Default" /etc/nova/nova.confsed -i "/\[placement]$/aos_region_name = RegionOne" 
/etc/nova/nova.confsu -s /bin/sh -c "nova-manage api_db sync" novasu -s /bin/sh -c "nova-manage cell_v2 map_cell0" novasu -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" novasu -s /bin/sh -c "nova-manage db sync" novasu -s /bin/sh -c "nova-manage cell_v2 list_cells" novasystemctl enable openstack-nova-api.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.servicesystemctl start openstack-nova-api.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service#Verify operationsource /root/admin-openrcopenstack compute service listsleep 2openstack catalog listsleep 2openstack image listsleep 2nova-status upgrade checksleep 2#===Networking Service===mysql -N -uroot -p$ALL_PASS<<EOFDROP DATABASE if exists neutron;CREATE DATABASE if not exists neutron;GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY '$ALL_PASS';GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY '$ALL_PASS';EOFsource /root/admin-openrc/usr/bin/expect << EOFspawn openstack user create --domain default --password-prompt neutronexpect "User*"send "$ALL_PASS\r"expect "Repeat*"send "$ALL_PASS\r"expect eofEOFopenstack role add --project service --user neutron adminopenstack service create --name neutron --description "OpenStack Networking" networkopenstack endpoint create --region RegionOne network public http://$CONTROLLER_IP:9696openstack endpoint create --region RegionOne network internal http://$CONTROLLER_IP:9696openstack endpoint create --region RegionOne network admin http://$CONTROLLER_IP:9696yum install -y openstack-neutronyum install -y openstack-neutron-ml2yum install -y openstack-neutron-openvswitchyum install -y ebtables#/etc/neutron/neutron.confcp /etc/neutron/neutron.conf /etc/neutron/neutron.conf.bak#[database]if [ `cat /etc/neutron/neutron.conf|grep '^\[database\]'` != "[database]" ]; then echo "[database]" >> /etc/neutron/neutron.conffised 
-i "/\[database]$/aconnection = mysql+pymysql://neutron:$ALL_PASS@$CONTROLLER_IP/neutron" /etc/neutron/neutron.conf#[DEFAULT]if [ `cat /etc/neutron/neutron.conf|grep '^\[DEFAULT\]'` != "[DEFAULT]" ]; then echo "[DEFAULT]" >> /etc/neutron/neutron.conffised -i "/\[DEFAULT]$/anotify_nova_on_port_data_changes = true" /etc/neutron/neutron.confsed -i "/\[DEFAULT]$/anotify_nova_on_port_status_changes = true" /etc/neutron/neutron.confsed -i "/\[DEFAULT]$/aauth_strategy = keystone" /etc/neutron/neutron.confsed -i "/\[DEFAULT]$/atransport_url = rabbit://openstack:$ALL_PASS@$CONTROLLER_IP" /etc/neutron/neutron.confsed -i "/\[DEFAULT]$/aallow_overlapping_ips = true" /etc/neutron/neutron.confsed -i "/\[DEFAULT]$/aservice_plugins = router" /etc/neutron/neutron.confsed -i "/\[DEFAULT]$/acore_plugin = ml2" /etc/neutron/neutron.conf#[keystone_authtoken]if [ `cat /etc/neutron/neutron.conf|grep '^\[keystone_authtoken\]'` != "[keystone_authtoken]" ]; then echo "[keystone_authtoken]" >> /etc/neutron/neutron.conffised -i "/\[keystone_authtoken]$/apassword = $ALL_PASS" /etc/neutron/neutron.confsed -i "/\[keystone_authtoken]$/ausername = neutron" /etc/neutron/neutron.confsed -i "/\[keystone_authtoken]$/aproject_name = service" /etc/neutron/neutron.confsed -i "/\[keystone_authtoken]$/auser_domain_name = Default" /etc/neutron/neutron.confsed -i "/\[keystone_authtoken]$/aproject_domain_name = Default" /etc/neutron/neutron.confsed -i "/\[keystone_authtoken]$/aauth_type = password" /etc/neutron/neutron.confsed -i "/\[keystone_authtoken]$/amemcached_servers = $CONTROLLER_IP:11211" /etc/neutron/neutron.confsed -i "/\[keystone_authtoken]$/aauth_url = http://$CONTROLLER_IP:5000" /etc/neutron/neutron.confsed -i "/\[keystone_authtoken]$/awww_authenticate_uri = http://$CONTROLLER_IP:5000" /etc/neutron/neutron.conf#[nova]if [ `cat /etc/neutron/neutron.conf|grep '^\[nova\]'` != "[nova]" ]; then echo "[nova]" >> /etc/neutron/neutron.conffised -i "/\[nova]$/apassword = $ALL_PASS" 
/etc/neutron/neutron.confsed -i "/\[nova]$/ausername = nova" /etc/neutron/neutron.confsed -i "/\[nova]$/aproject_name = service" /etc/neutron/neutron.confsed -i "/\[nova]$/aregion_name = RegionOne" /etc/neutron/neutron.confsed -i "/\[nova]$/auser_domain_name = Default" /etc/neutron/neutron.confsed -i "/\[nova]$/aproject_domain_name = Default" /etc/neutron/neutron.confsed -i "/\[nova]$/aauth_type = password" /etc/neutron/neutron.confsed -i "/\[nova]$/aauth_url = http://$CONTROLLER_IP:5000" /etc/neutron/neutron.conf#[oslo_concurrency]if [ `cat /etc/neutron/neutron.conf|grep '^\[oslo_concurrency\]'` != "[oslo_concurrency]" ]; then echo "[oslo_concurrency]" >> /etc/neutron/neutron.conffised -i "/\[oslo_concurrency]$/alock_path = \/var\/lib/neutron\/tmp" /etc/neutron/neutron.confcp /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugins/ml2/ml2_conf.ini.bak#[ml2]if [ `cat /etc/neutron/plugins/ml2/ml2_conf.ini|grep '^\[ml2\]'` != "[ml2]" ]; then echo "[ml2]" >> /etc/neutron/plugins/ml2/ml2_conf.inifised -i "/\[ml2]$/aextension_drivers = port_security" /etc/neutron/plugins/ml2/ml2_conf.inised -i "/\[ml2]$/amechanism_drivers = openvswitch,l2population" /etc/neutron/plugins/ml2/ml2_conf.inised -i "/\[ml2]$/atenant_network_types = vxlan,vlan" /etc/neutron/plugins/ml2/ml2_conf.inised -i "/\[ml2]$/atype_drivers = flat,vlan,vxlan" /etc/neutron/plugins/ml2/ml2_conf.ini#[ml2_type_flat]if [ `cat /etc/neutron/plugins/ml2/ml2_conf.ini|grep '^\[ml2_type_flat\]'` != "[ml2_type_flat]" ]; then echo "[ml2_type_flat]" >> /etc/neutron/plugins/ml2/ml2_conf.inifised -i "/\[ml2_type_flat]$/aflat_networks = provider" /etc/neutron/plugins/ml2/ml2_conf.ini#[ml2_type_vlan]if [ `cat /etc/neutron/plugins/ml2/ml2_conf.ini|grep '^\[ml2_type_vlan\]'` != "[ml2_type_vlan]" ]; then echo "[ml2_type_vlan]" >> /etc/neutron/plugins/ml2/ml2_conf.inifised -i "/\[ml2_type_vlan]$/anetwork_vlan_ranges = physicnet:1000:2000" /etc/neutron/plugins/ml2/ml2_conf.ini#[ml2_type_vxlan]if [ `cat 
/etc/neutron/plugins/ml2/ml2_conf.ini|grep '^\[ml2_type_vxlan\]'` != "[ml2_type_vxlan]" ]; then echo "[ml2_type_vxlan]" >> /etc/neutron/plugins/ml2/ml2_conf.inifised -i "/\[ml2_type_vxlan]$/avni_ranges = 30000:31000" /etc/neutron/plugins/ml2/ml2_conf.ini#[securitygroup]if [ `cat /etc/neutron/plugins/ml2/ml2_conf.ini|grep '^\[securitygroup\]'` != "[securitygroup]" ]; then echo "[securitygroup]" >> /etc/neutron/plugins/ml2/ml2_conf.inifised -i "/\[securitygroup]$/aenable_ipset = true" /etc/neutron/plugins/ml2/ml2_conf.ini#/etc/neutron/plugins/ml2/openvswitch_agent.inicp /etc/neutron/plugins/ml2/openvswitch_agent.ini /etc/neutron/plugins/ml2/openvswitch_agent.ini.bak#[agent]#sed -i "/tunnel_types = /atunnel_types = vxlan" /etc/neutron/plugins/ml2/openvswitch_agent.ini#[ovs]#sed -i "/\[ovs]$/alocal_ip = 10.214.1.2" /etc/neutron/plugins/ml2/openvswitch_agent.ini#sed -i "/\[ovs]$/atun_peer_patch_port = patch-int" /etc/neutron/plugins/ml2/openvswitch_agent.ini#sed -i "/\[ovs]$/aint_peer_patch_port = patch-tun" /etc/neutron/plugins/ml2/openvswitch_agent.ini#sed -i "/\[ovs]$/atunnel_bridge = br-tun" /etc/neutron/plugins/ml2/openvswitch_agent.ini#[securitygroup]if [ `cat /etc/neutron/plugins/ml2/openvswitch_agent.ini|grep '^\[securitygroup\]'` != "[securitygroup]" ]; then echo "[securitygroup]" >> /etc/neutron/plugins/ml2/openvswitch_agent.inifised -i "/\[securitygroup]$/aenable_security_group = true" /etc/neutron/plugins/ml2/openvswitch_agent.inised -i "/\[securitygroup]$/afirewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver" /etc/neutron/plugins/ml2/openvswitch_agent.inicp /etc/neutron/l3_agent.ini /etc/neutron/l3_agent.ini.bakif [ `cat /etc/neutron/l3_agent.ini|grep '^\[DEFAULT\]'` != "[DEFAULT]" ]; then echo "[DEFAULT]" >> /etc/neutron/l3_agent.inifised -i "/\[DEFAULT]$/ainterface_driver = neutron.agent.linux.interface.OVSInterfaceDriver" /etc/neutron/l3_agent.inicp /etc/neutron/dhcp_agent.ini /etc/neutron/dhcp_agent.ini.bakif [ `cat 
/etc/neutron/dhcp_agent.ini|grep '^\[DEFAULT\]'` != "[DEFAULT]" ]; then echo "[DEFAULT]" >> /etc/neutron/dhcp_agent.inifised -i "/\[DEFAULT]$/aenable_isolated_metadata = true" /etc/neutron/l3_agent.inised -i "/\[DEFAULT]$/adhcp_driver = neutron.agent.linux.dhcp.Dnsmasq" /etc/neutron/dhcp_agent.ini sed -i "/\[DEFAULT]$/ainterface_driver = neutron.agent.linux.interface.OVSInterfaceDriver" /etc/neutron/dhcp_agent.inised -i "/force_metadata = /aforce_metadata = true" /etc/neutron/dhcp_agent.ini#metadata.confif [ `cat /etc/neutron/metadata_agent.ini|grep '^\[DEFAULT\]'` != "[DEFAULT]" ]; then echo "[DEFAULT]" >> /etc/neutron/metadata_agent.inificp /etc/neutron/metadata_agent.ini /etc/neutron/metadata_agent.ini.baksed -i "/\[DEFAULT]$/ametadata_proxy_shared_secret = $ALL_PASS" /etc/neutron/metadata_agent.inised -i "/\[DEFAULT]$/anova_metadata_host = $CONTROLLER_IP" /etc/neutron/metadata_agent.ini#nova.confif [ `cat /etc/nova/nova.conf|grep '^\[neutron\]'` != "[neutron]" ]; then echo "[neutron]" >> /etc/nova/nova.conffised -i "/\[neutron]$/ametadata_proxy_shared_secret = $ALL_PASS" /etc/nova/nova.confsed -i "/\[neutron]$/aservice_metadata_proxy = true" /etc/nova/nova.confsed -i "/\[neutron]$/apassword = $ALL_PASS" /etc/nova/nova.confsed -i "/\[neutron]$/ausername = neutron" /etc/nova/nova.confsed -i "/\[neutron]$/aproject_name = service" /etc/nova/nova.confsed -i "/\[neutron]$/aregion_name = RegionOne" /etc/nova/nova.confsed -i "/\[neutron]$/auser_domain_name = Default" /etc/nova/nova.confsed -i "/\[neutron]$/aproject_domain_name = Default" /etc/nova/nova.confsed -i "/\[neutron]$/aauth_type = password" /etc/nova/nova.confsed -i "/\[neutron]$/aauth_url = http://$CONTROLLER_IP:5000" /etc/nova/nova.confsed -i "/\[neutron]$/aurl = http://$CONTROLLER_IP:9696" /etc/nova/nova.confln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.inisu -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade 
head" neutronsystemctl restart openstack-nova-api.servicesystemctl enable neutron-server.service \ neutron-openvswitch-agent.service neutron-dhcp-agent.service \ neutron-metadata-agent.service neutron-l3-agent.servicesystemctl start neutron-server.service \ neutron-openvswitch-agent.service neutron-dhcp-agent.service \ neutron-metadata-agent.service neutron-l3-agent.service#===Dashboard===yum install openstack-dashboard -y#/etc/openstack-dashboard/local_settingscp /etc/openstack-dashboard/local_settings /etc/openstack-dashboard/local_settings.baksed -i "/OPENSTACK_HOST = /cOPENSTACK_HOST = \"$CONTROLLER_IP\"" /etc/openstack-dashboard/local_settingssed -i "/ALLOWED_HOSTS = /cALLOWED_HOSTS = ['*']" /etc/openstack-dashboard/local_settingssed -i "/SESSION_ENGINE = /aSESSION_ENGINE = 'django.contrib.sessions.backends.cache'" /etc/openstack-dashboard/local_settingssed -i "/OPENSTACK_KEYSTONE_URL =/aOPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True" /etc/openstack-dashboard/local_settingssed -i "/OPENSTACK_KEYSTONE_URL =/aOPENSTACK_KEYSTONE_DEFAULT_DOMAIN = \"Default\"" /etc/openstack-dashboard/local_settingssed -i "/OPENSTACK_KEYSTONE_URL =/aOPENSTACK_KEYSTONE_DEFAULT_ROLE = \"user\"" /etc/openstack-dashboard/local_settingssed -i "/TIME_ZONE/c#TIME_ZONE = UTC" /etc/openstack-dashboard/local_settingsecho "CACHES = { 'default': { 'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache', 'LOCATION': '$CONTROLLER_IP:11211', }}" >> local_settingsecho "OPENSTACK_API_VERSIONS = { "identity": 3, "image": 2, "volume": 3,}" >> local_settingssed -i "/WSGIScriptAlias/iWSGIApplicationGroup %{GLOBAL}" /etc/httpd/conf.d/openstack-dashboard.conf#Because of the bugs of Train in CentOS7.8, we need to do something to solve it.echo "* soft nofile 1024000* hard nofile 1024000" >> /etc/security/limits.confyum install -y lsoflsof | wc -lcd /usr/share/openstack-dashboard/python manage.py make_web_conf --apache > /etc/httpd/conf.d/openstack-dashboard.confsed -i "s/WEBROOT = '\/'/WEBROOT = 
'\/dashboard'/g" /usr/share/openstack-dashboard/openstack_dashboard/defaults.pysed -i "s/WEBROOT = '\/'/WEBROOT = '\/dashboard'/g" /usr/share/openstack-dashboard/openstack_dashboard/test/settings.pycd /usr/share/openstack-dashboard/static/dashboard/js/for i in `ls|awk {print}`dosed -i "s/WEBROOT = '\/'/WEBROOT = '\/dashboard'/g" $ised -i "s/WEBROOT='\/'/WEBROOT='\/dashboard'/g" $ised -i "s/WEBROOT = \"\/\"/WEBROOT = \"\/dashboard\"/g" $ised -i "s/WEBROOT=\"\/\"/WEBROOT=\"\/dashboard\"/g" $idonesed -i "/WSGIScriptAlias/c\ \ \ \ WSGIScriptAlias /dashboard /usr/share/openstack-dashboard/openstack_dashboard/wsgi.py" /etc/httpd/conf.d/openstack-dashboard.confsed -i "/Alias/c\ \ \ \ Alias /dashboard/static /usr/share/openstack-dashboard/static" /etc/httpd/conf.d/openstack-dashboard.confsystemctl restart httpd.service memcached.servicesystemctl status httpd memcached#=== ===sed -i "/\[Service]$/aLimitNOFILE=65535" /usr/lib/systemd/system/mariadb.servicesed -i "/\[Service]$/aLimitNPROC=65535" /usr/lib/systemd/system/mariadb.servicesystemctl daemon-reloadsystemctl restart mariadb.service#===Fwaas Lbaasv2 Vpnaas===yum install openstack-neutron-fwaas -yneutron-db-manage --subproject neutron-fwaas upgrade head#lbaasv2yum install openstack-neutron-lbaas -yneutron-db-manage --subproject neutron-lbaas upgrade head#vpnaasyum install openstack-neutron-vpnaas -yneutron-db-manage --subproject neutron-vpnaas upgrade head###8.Block Storage service##Discover compute#source /root/admin-openrc#openstack compute service list --service nova-compute#su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova##add image#openstack image create "cirros" --file cirros-0.4.0-x86_64-disk.img --disk-format qcow2 --container-format bare --publicecho -e "\033[45;37mOpenstack Train computer node install end !!!\033[0m"
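The script above repeats the same two-step pattern dozens of times: check whether a `[section]` header exists in an INI file, append it if missing, then `sed -i "/\[section]$/a..."` a key under it. A minimal sketch of collapsing that pattern into one idempotent helper is shown below. `ini_set` is a hypothetical name invented here, and it assumes GNU sed (as on CentOS); if the `crudini` or `openstack-utils` (`openstack-config --set`) packages are available, they provide the same behavior ready-made.

```shell
# ini_set FILE SECTION KEY VALUE -- idempotent INI edit (hypothetical helper,
# not part of the original script; assumes GNU sed, simple keys without regex chars)
ini_set() {
  local file=$1 section=$2 key=$3 value=$4
  # add the [section] header only when it is missing (the if/echo step above)
  grep -q "^\[$section\]" "$file" || echo "[$section]" >> "$file"
  # drop any earlier assignment of the key inside this section...
  sed -i "/^\[$section\]/,/^\[/{/^$key[[:space:]]*=/d}" "$file"
  # ...then insert the fresh value right under the header (GNU sed "a" command)
  sed -i "/^\[$section\]\$/a$key = $value" "$file"
}

conf=$(mktemp)
ini_set "$conf" keystone_authtoken username glance
ini_set "$conf" keystone_authtoken username glance2  # re-running replaces, no duplicate
cat "$conf"
```

Unlike the raw `sed -i "/\[section]$/a..."` calls, re-running this helper does not pile up duplicate lines, so the whole install script becomes safe to execute twice: the final file holds one `[keystone_authtoken]` header and a single `username = glance2` entry.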

September 7, 2021 · 13 min · jiezi