About Kubernetes: k8s Series - Scheduler

The basic job of kube-scheduler is to bind each Pod to the most suitable worker node according to various scheduling algorithms. The whole scheduling flow has two phases: Predicates and Priorities.
Predicates: the input is all nodes, the output is the nodes that satisfy the predicate conditions. kube-scheduler filters out the nodes that fail the predicate policies. For example, if a node has insufficient resources, or fails a condition such as "the Node's labels must match the Pod's selector", it cannot pass the predicate phase.
Priorities: the input is the nodes that passed the predicate phase. This phase scores and ranks them according to the priority policies and picks the node with the highest score. For example, a node with more spare resources and a lower load tends to rank higher.
Put plainly, scheduling answers two questions: 1. Who are the candidates? 2. Which of them fits best?
Details: https://www.cnblogs.com/kcxg/... In depth: https://www.infoq.cn/article/...
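Both the label/selector predicate and the resource predicate mentioned above come straight from the Pod spec. A minimal sketch (not from the original article; the disktype=ssd label and the resource figures are invented for illustration):

apiVersion: v1
kind: Pod
metadata:
  name: scheduler-demo
spec:
  nodeSelector:
    disktype: ssd        # predicate: only nodes carrying this label remain candidates
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        cpu: "500m"      # predicate: nodes without 500m allocatable CPU are filtered out
        memory: 256Mi

Nodes that survive both filters are then scored and ranked in the Priorities phase.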

August 6, 2022 · 1 min · jiezi

About Kubernetes: k8s Series - Cluster Sharing and Isolation with Namespaces

In k8s, the main job of a namespace is to carve out a relatively independent space; the isolation consists of two parts. Detailed introduction to namespaces: https://www.cnblogs.com/Ayana... Namespace examples: https://www.bilibili.com/vide... https://blog.csdn.net/xujiami... Resource (quota) examples: https://www.bilibili.com/vide... https://blog.csdn.net/xujiami...
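As a rough sketch of what the two parts usually mean in practice, namely a dedicated namespace plus a resource limit bound to it (the names and numbers below are invented, not taken from the linked examples):

apiVersion: v1
kind: Namespace
metadata:
  name: team-a
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a        # the quota only constrains workloads created in this namespace
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    pods: "20"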

August 5, 2022 · 1 min · jiezi

About Kubernetes: k8s Series - Kubernetes and CI/CD

Application deployment on k8s is an iterative, continuously repeating process, so it can be automated with tools. CI/CD is a method of delivering applications to customers frequently by introducing automation into the application development phase; its core concepts are continuous integration, continuous delivery and continuous deployment. Common tools include GitLab (a Git repository management tool), Maven (builds and manages projects), Jenkins (a Java-based continuous integration tool) and scripts (automation scripts written in various scripting languages). The conventional release steps of a project look like this; and with k8s: Hands-on practice: https://blog.csdn.net/xujiami... and https://www.bilibili.com/vide... Further reading: https://zhuanlan.zhihu.com/p/...
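A minimal sketch of what such an automated pipeline can look like in GitLab CI (this is not the pipeline from the linked practice; the registry address, image name and deployment name are placeholders):

stages:
  - build
  - deploy

build-image:
  stage: build
  image: docker:latest
  script:
    - docker build -t registry.example.com/demo/app:$CI_COMMIT_SHORT_SHA .
    - docker push registry.example.com/demo/app:$CI_COMMIT_SHORT_SHA

deploy:
  stage: deploy
  image: bitnami/kubectl:latest
  script:
    # roll the Deployment to the image that was just pushed
    - kubectl set image deployment/app app=registry.example.com/demo/app:$CI_COMMIT_SHORT_SHA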

August 5, 2022 · 1 min · jiezi

About Kubernetes: Apache APISIX Ingress v1.5-rc1 Released

Apache APISIX Ingress Controller v1.5-rc1 has been officially released. This release took about 7 months, with 144 commits from 36 contributors, 22 of whom are first-time contributors. Thanks to everyone for your contributions and support! Let's look at the important updates in APISIX Ingress v1.5.
All CRD API versions upgraded to v2. At the start of the APISIX Ingress project there were only a few CRDs, and each resource maintained its own API version. As new resources were introduced and features iterated, every custom resource ended up on a different API version, which increased the learning cost for users. So starting with v1.3 we put forward a proposal to unify the API version of all resources. After two release cycles of iteration, the v2 API version has now been formally introduced, while v2beta3 is marked deprecated and will be removed entirely in v1.7.
Basic support for the Gateway API. The Gateway API can be seen as the next generation of the Ingress definition, with much richer expressiveness. We have implemented support for most of its resources in APISIX Ingress (note: this feature is still experimental and disabled by default). To use the Gateway API in APISIX Ingress, pass the enable_gateway_api: true option in the controller's configuration file to turn it on. ...
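For a feel of what the Gateway API adds over classic Ingress, here is a sketch of an HTTPRoute as defined by the upstream Gateway API (v1beta1 era; the gateway, hostname and backend names are invented, and APISIX-specific details are omitted):

apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: demo-route
spec:
  parentRefs:
  - name: demo-gateway          # the Gateway this route attaches to
  hostnames:
  - "demo.example.com"
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /api
    backendRefs:
    - name: demo-service
      port: 80

With enable_gateway_api: true set in the controller configuration, APISIX Ingress can reconcile resources like this alongside ordinary Ingress objects.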

August 5, 2022 · 1 min · jiezi

About Kubernetes: k8s Series - Service Discovery

1. Access inside the cluster 2. Inside accessing outside 3. Outside accessing inside
1. For example, pod A needs to access pod B. 1.1 Because a pod's lifecycle is unstable and its IP can change at any time, you can only reach it through the service IP, which is relatively fixed; in addition, a DNS record is prepared for every service, mapping the IP to a domain name that is convenient to use. 1.2 A headless service is provided to list the complete set of pods.
2. For example, pod A accesses an external MySQL service. 2.1 Exactly as before the service was migrated onto k8s, access it directly by IP + port; using the IP from the example, 10.155.20.60:3306 is enough to reach the database. 2.2 In practice you just add one layer of wrapping: package the access method of 2.1 as an "out service" inside the cluster, and then it can be reached with DNS just like in-cluster access (see the sketch after this entry).
3. External clients accessing the inside of the cluster. 3.1 Every node in the cluster exposes an external port (NodePort); this approach is rarely used in production. 3.2 The most important difference between NodePort and hostPort is that hostPort applies to one container on a single host, while NodePort applies to the whole K8s cluster. 3.3 Ingress. https://zhuanlan.zhihu.com/p/...
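One common way to implement the wrapping described in 2.2 is a selector-less Service plus a manually maintained Endpoints object (a sketch using the example address 10.155.20.60:3306; an ExternalName Service is another option when the target has a DNS name):

apiVersion: v1
kind: Service
metadata:
  name: external-mysql
spec:
  ports:
  - port: 3306
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-mysql      # must match the Service name above
subsets:
- addresses:
  - ip: 10.155.20.60
  ports:
  - port: 3306

Pods can then reach the database at external-mysql:3306 through cluster DNS, exactly like any in-cluster service.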

August 5, 2022 · 1 min · jiezi

About Kubernetes: k8s Series - Docker Image Registry Harbor

The emergence of container technology, represented by Docker, has changed the traditional way software is delivered. Packaging the business and the environment it depends on into a Docker image removes the discrepancy between development and production environments and improves delivery efficiency. How to manage and distribute Docker images efficiently is a question many enterprises need to think about. Harbor is an enterprise-grade Registry server open-sourced by VMware for storing and distributing Docker images; it can be used to build an in-house Docker image registry. On top of Docker's open source Distribution project, it adds features enterprises need, such as image replication, vulnerability scanning and access control. Harbor's open source project: https://github.com/goharbor/h... The architecture diagram is from the official Harbor documentation. Detailed deployment reference: https://www.bilibili.com/vide... A deeper look at Docker images: https://www.zhihu.com/questio... An introduction to webhooks: https://zhuanlan.zhihu.com/p/...

August 5, 2022 · 1 min · jiezi

About Kubernetes: k8s Series - Cluster Setup (1): Comparing Setup Options

K8s cluster setup options fall roughly into three categories: 1. Community solutions. 2. kubeadm - the official setup tool; all components run inside containers and it is easy to install and deploy. 3. Binary - all components run as plain processes, which is easier to maintain.

August 4, 2022 · 1 min · jiezi

About Kubernetes: k8s Series - k8s Architecture (2): Authentication

Basic concepts. To understand authentication in k8s, we should first be clear about symmetric and asymmetric encryption in cryptography. In symmetric encryption both parties hold the same key; in asymmetric encryption the two parties hold a key pair, encrypting with the public key and decrypting with the private key.
A service in k8s is exposed to the outside, and services also need to communicate with each other, so security measures are necessary. Because asymmetric encryption is computationally expensive while symmetric encryption alone is not secure enough, services first use asymmetric encryption to exchange a key and then use that key for symmetric encryption - this is the well-known SSL/TLS protocol. A public key must be certified by a CA before use.
Authentication and authorization in k8s. Authentication - external access: for example kubectl accessing the ApiServer. The ApiServer is generally the standard entry point of a cluster; common components such as the scheduler, controller-manager and etcd all interface with the outside through the ApiServer. 1. Client certificate authentication (mutual TLS). 2. BearerToken. Internal access: for example a pod inside the cluster needs to access the ApiServer. 3. ServiceAccount: composed of a namespace, a token and a CA certificate, mounted into a directory inside the pod.
Authorization. K8s supports three authorization modes: ABAC, WebHook and RBAC. RBAC was introduced in Kubernetes 1.6 and is the one to focus on. RBAC (Role Based Access Control) has a three-layer structure: user, role and authority. Users can be internal or external; a ServiceAccount is an in-cluster user. Authority is the set of permissions a role holds, covering use of resources and the basic create/read/update/delete verbs. The role is the bridge between the two: it is tied to a user through a RoleBinding and lives in its own namespace. In addition, to make it convenient to grant permissions cluster-wide, ClusterRole was designed (see the RBAC sketch after this entry).
After authorization, requests still pass through admission control, which works somewhat like a filter. Admission controllers live in the API Server; before an object is persisted, they intercept requests to the API Server and are generally used for validation and authorization.
Further reading: SSL/TLS protocol https://zhuanlan.zhihu.com/p/... https://www.cnblogs.com/wqbin... RBAC and ServiceAccount https://zhuanlan.zhihu.com/p/... admission control reference https://jimmysong.io/kubernet...
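A minimal RBAC sketch that ties the three layers together, granting a ServiceAccount read access to Pods in one namespace (all names here are invented for illustration):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: demo
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: demo
subjects:
- kind: ServiceAccount      # the "user" side: an in-cluster identity
  name: app-sa
  namespace: demo
roleRef:
  kind: Role                # the "authority" side: the permissions being granted
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io

Swapping Role for ClusterRole (and RoleBinding for ClusterRoleBinding) extends the same pattern across the whole cluster.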

August 4, 2022 · 1 min · jiezi

About Kubernetes: Distributed Tracing with Jaeger - Practice of the Pig Microservice on Rainbond

With the popularity of microservice architectures, a single request from a client may involve multiple or even N services, which makes monitoring and troubleshooting across services much more complicated. An example: an endpoint of some business line is sometimes fast and sometimes slow when calling the server side; you then have to dig through the logs of each service and pull in the developers of each service to investigate together, which is time-consuming and exhausting. For ToB business you sometimes cannot even get the logs. Painful! So we need tools that help us understand system behavior and analyze performance problems, so that when failures occur we can locate and solve them quickly - that is APM (Application Performance Monitor). There are many popular open source APM tools, such as Zipkin, Skywalking, Pinpoint and Jaeger; this article focuses on Jaeger.
Jaeger is an open source distributed tracing system released by Uber's engineering team, used for monitoring and troubleshooting microservice-based distributed systems: distributed context propagation, transaction monitoring, root cause and service dependency analysis, performance/latency optimization, an OpenTracing-inspired data model, multiple storage backends (Cassandra, Elasticsearch, memory), system topology graphs, service performance monitoring (SPM), adaptive sampling.
Jaeger architecture:
- Jaeger Client: the Jaeger client SDK
- Jaeger Agent: collects data from the Client
- Jaeger Collector: collects data from Jaeger Agents, with pull/push modes
- DB Storage: the Collector needs a storage backend; the data it receives is stored in Elasticsearch or Cassandra
- Spark jobs: generate the data for the topology-graph UI
- Jaeger Query Service & UI: queries data from Storage and provides the API and UI
How to integrate it on Rainbond? 1. Integrate the OpenTelemetry Client: before v1.36, the Jaeger Client was a client library based on the OpenTracing API; used together with the Jaeger Agent, it sends spans to the Jaeger Collector. ...

August 4, 2022 · 2 min · jiezi

About Kubernetes: k8s Series - k8s Architecture (1): Concepts

The most ordinary unit in Docker is the container; a container is built from an image. The basic unit in k8s is the Pod. A Pod can contain one or more containers, and they share the network (the same IP). Each Pod has a pause container that links the other containers together - its role is similar to docker compose - and it is also involved in health checking. Above the Pod is the RS (ReplicaSet), which manages Pods automatically and controls the number of Pods. Above the RS is the Deployment, the controller responsible for managing service updates. Another important concept is the Service. Before introducing it we should first understand Labels. In k8s you can attach labels to other components such as deployments, pods and nodes. Once labels are attached, a Service finds the corresponding pods by label. A Service also exposes a ClusterIP through which clients can access the service. Course source: https://www.bilibili.com/vide... Deep dive into pause: https://cloud.tencent.com/dev... https://zhuanlan.zhihu.com/p/... Deep dive into RS and Deployment: https://zhuanlan.zhihu.com/p/... Service and ClusterIP: https://blog.csdn.net/jiangxi...
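A minimal sketch showing how these concepts connect: a Deployment manages the Pods (through a ReplicaSet it creates), and a Service selects those Pods by label (names and image are invented for illustration):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web          # the label the Service uses to find these Pods
    spec:
      containers:
      - name: nginx
        image: nginx:1.23
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web              # matches the Pod label above
  ports:
  - port: 80
    targetPort: 80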

August 4, 2022 · 1 min · jiezi

About Kubernetes: Enterprise Ops Practice - Using Aliyun Container Registry to Pull and Build Images from Overseas gcr/quay Repositories

Welcome to follow "WeiyiGeek" - taking you through security operations, application development and IoT learning every day! Your follows, likes, comments, favorites and coins help power every dream.
Table of contents: 0x00 Preface; 0x01 Hands-on practice.
0x00 Preface. When building a k8s cluster and its dependent components in mainland China, you often cannot download images from k8s.gcr.io or quay.io. How do we solve this? For example, deploying the nfs-subdir-external-provisioner manifests in a K8S cluster reports the error below, because k8s.gcr.io is unreachable from China and the image k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2 cannot be pulled:
Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
So how can we download it? The usual way is to use a k8s.gcr.io mirror, e.g. ["https://gcr.mirrors.ustc.edu.cn/google-containers/","https://registry.cn-hangzhou.aliyuncs.com/google_containers/"], but some images are not synchronized there and thus cannot be downloaded that way. You might also use an overseas machine and run a series of docker pull, docker tag, docker push operations to copy images from k8s.gcr.io or quay.io to a registry in China - but what about people who have no overseas machine? Are they out of options?
Answer: of course not. We can use a Dockerfile in a GitHub repository together with Alibaba Cloud Container Registry (https://www.aliyun.com/produc...) to build the image overseas, and then pull the resulting public or private image.
Original article: https://blog.weiyigeek.top/2022/6-1-663.html
0x01 Hands-on steps. 01. Log in to github.com and create a public repository (register first if needed). Here I created a repository called imagesbuild, dedicated to building images that cannot be downloaded from k8s.gcr.io or quay.io. Taking the nfs-subdir-external-provisioner image as an example, create a Dockerfile under the /sig-storage/nfs-subdir-external-provisioner directory with the content written by the tee command below.
git clone git@github.com:WeiyiGeek/imagesbuild.git
mkdir -vp imagesbuild/sig-storage/nfs-subdir-external-provisioner
tee imagesbuild/sig-storage/nfs-subdir-external-provisioner/Dockerfile <<'EOF'
FROM k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
LABEL MAINTAINER=master@weiyigeeek.top BUILDTYPE=Aliyun
EOF
git add . && git commit -m "nfs-subdir-external-provisioner" && git push ...

July 28, 2022 · 1 min · jiezi

About Kubernetes: Monitoring MySQL with mysqld-exporter

The MySQLD Exporter plugin is implemented on top of the standard mysqld exporter. Rainbond's built-in Prometheus monitoring system, rbd-monitor, collects the data from the Exporter and displays it on monitoring dashboards. Users can customize which key performance indicators to display, which makes it the natural choice for monitoring a MySQL database service.
Installing the Mysql-Exporter plugin: in the team view, click the Plugins tab on the left to enter the My Plugins page, and choose to install/create the plugin from the app store. Search for Mysql-exportor in the open source app store and click install to add the plugin to the current team. On the plugin page of an existing MySQL service component you can enable the MySQLD Exporter plugin. After enabling it, check the configuration and confirm that DATA_SOURCE_NAME (the MySQL connection information) is correct (see the sketch after this entry). Also make sure the timezone setting matches the monitored MySQL service component. The configuration in the figure uses the Asia/Shanghai timezone; the MySQL service component can declare the same timezone through the same environment variable. Once confirmed, update the MySQL service component as prompted and the metrics provided by the MySQLD Exporter will start being collected.
Managing monitoring points: clicking Manage Monitoring Points at the top right of the business monitoring dashboard lets you define monitoring point information, which describes where the metrics come from. The MySQLD Exporter plugin already ships with a set of monitoring-point configurations containing these required fields: configuration name (a custom name for this configuration), collection job name (custom), path (where the metrics are exposed, depending on how the Exporter is designed), port (the port the Exporter listens on, 9104 by default; the user needs to open port 9104 as an internal service on the main MySQL service), and collection interval (how often metrics are collected).
Viewing monitoring: the plugin comes with commonly used monitoring charts preconfigured; click one-click import and use the mysqld-exportor scheme to generate the charts. ...
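A sketch of what the connection and timezone settings mentioned above can look like as environment variables (the user, password and host are placeholders; the DSN format "user:password@(host:port)/" is the one used by the upstream mysqld_exporter):

env:
- name: DATA_SOURCE_NAME
  value: "exporter:exporter-password@(mysql-host:3306)/"   # MySQL connection information
- name: TZ
  value: "Asia/Shanghai"                                   # keep in sync with the MySQL component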

July 25, 2022 · 1 min · jiezi

About Kubernetes: Enterprise Ops Practice - Still Can't Deploy a Highly Available Kubernetes Cluster? Installing an HA k8s Cluster v1.23.7 with kubeadm

Follow the "WeiyiGeek" official account and mark it as a special follow - taking you through security operations, application development and IoT learning every day! Your follows, likes, comments, favorites and coins help power every dream.
Table of contents:
0x00 Preface
0x01 Environment preparation: host planning, software versions, network planning
0x02 Installation and deployment: 1. Prepare the base host environment; 2. Install the ipvsadm load-balancing tool and load kernel modules; 3. Install and configure HAProxy and Keepalived for high availability; 4. Install and configure the containerd.io container runtime; 5. Configure installation sources and prepare the cluster initialization configuration; 6. Install the K8S cluster with kubeadm; 7. Deploy and configure the Calico network plugin
0x03 Cluster add-ons: 1. Deploy an NFS-based provisioner for dynamic persistent volumes; 2. Install metrics-server to obtain resource metrics; 3. Install the native kubernetes-dashboard UI; 4. Install and use the K9S client tool; 5. Set up ingress for layer-7 service load balancing
Tip: if some images do not display fully, you can continue reading this article on my blog at https://blog.weiyigeek.top.
0x00 Preface. My blog and earlier articles explain how to set up Kubernetes clusters; as K8S and its components iterate, versions may differ from what you currently use. In the previous chapter we practiced installing an HA K8S cluster v1.23.6 from binaries (https://blog.weiyigeek.top/20...), so this chapter practices deploying an HA Kubernetes cluster v1.23.7 with kubeadm, again on Ubuntu 20.04 with the latest or stable versions of haproxy, keepalived, containerd, etcd, kubeadm, kubectl and related tools. Basic k8s knowledge is not repeated here; newcomers please read the blog posts (https://blog.weiyigeek.top/ta...) or the Bilibili column (https://www.bilibili.com/read...) in order.
Kubernetes in brief: Kubernetes (k8s) is a container orchestration engine open-sourced by Google in June 2014 and written in Go. It supports automated deployment, large-scale scaling, and management of containerized applications across multiple hosts on cloud platforms. Its goal is to make deploying containerized applications simple and efficient, providing resource scheduling, deployment management, service discovery, scaling, status monitoring, maintenance and more, aiming to be a platform for automated deployment, scaling and operation of application containers across host clusters. It supports a series of CNCF graduated projects, including containerd and Calico.
Related articles: 1. Deploying a v1.23.6 K8S cluster from binaries, part 1: https://mp.weixin.qq.com/s/sY... 2. Part 2: https://mp.weixin.qq.com/s/-k...
Full original text of this chapter: Still can't deploy a highly available Kubernetes cluster? Enterprise DevOps practice - installing an HA k8s cluster v1.23.7 with kubeadm - https://mp.weixin.qq.com/s/v_kO8o8mWOYc38kot86g6Q 21-kubernetes advanced: installing an HA k8s cluster with kubeadm
0x01 Environment preparation - host planning. Tip: Ubuntu 20.04 is used here; the system has been security-hardened and kernel-tuned to meet China's MLPS 2.0 requirements (SecOpsDev/Ubuntu-InitializeSecurity.sh at master · WeiyiGeek/SecOpsDev). If your Linux is not configured the same way, your environment may differ slightly from the author's. To harden Windows Server, Ubuntu or CentOS, use the hardening scripts at https://github.com/WeiyiGeek/... - stars welcome.
Host address / hostname / spec / role:
10.20.176.212  devtest-master-212  8C/16G  control-plane node
10.20.176.213  devtest-master-213  8C/16G  control-plane node
10.20.176.214  devtest-master-214  8C/32G  control-plane node
10.20.176.215  devtest-work-215    8C/16G  worker node
10.20.176.211  slbvip.k8s.devtest  -       virtual VIP (virtual NIC address)
Software versions: OS Ubuntu 20.04 LTS - 5.4.0-92-generic; high-availability software ...
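The excerpt cuts off before the kubeadm steps. As a rough sketch only (not taken from the full article; the endpoint port, CIDRs and version flag below are assumptions based on the planning table), initializing the first control-plane node of such an HA cluster with kubeadm typically looks like:

kubeadm init \
  --kubernetes-version v1.23.7 \
  --control-plane-endpoint "slbvip.k8s.devtest:16443" \   # the VIP fronted by HAProxy/Keepalived (port is an assumption)
  --upload-certs \
  --pod-network-cidr 10.128.0.0/16 \                      # placeholder; must match the Calico configuration
  --service-cidr 10.96.0.0/12

The remaining control-plane nodes then join with the join command and certificate key printed by this step.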

July 25, 2022 · 21 min · jiezi

About Kubernetes: Ops/CI/CD/K8S Study Notes

Introduction to Kubernetes (K8S) - course content: what Kubernetes is, when you need it, and its architecture. Three different ways to install a Kubernetes cluster: minikube, a cloud platform, and bare metal (3 servers). A demo project showing how to deploy a project to the cluster and how to expose service ports; how to deploy a stateful application such as a database and persist its data; using config files and secrets in the cluster; installing third-party applications quickly from the Helm app store; exposing services with Ingress. Goal: after the course, have a complete picture of Kubernetes and handle all kinds of cluster deployment tasks with ease.
What Kubernetes (K8S) is: an open source tool developed by Google that provides cluster deployment and management for containerized applications. The name Kubernetes comes from Greek, meaning "helmsman" or "pilot"; the abbreviation k8s comes from the eight letters between the k and the s. Google open-sourced the Kubernetes project in 2014. Main features: high availability - no downtime and automatic disaster recovery; gray (rolling) updates that do not disrupt the running business; one-click rollback to historical versions; convenient scaling (of applications and machines) with load balancing; a mature ecosystem.
Prerequisites: be familiar with basic Docker usage (if you don't know Docker yet, watch the "Docker quick start" video first) and with the Linux operating system.
Different deployment approaches. Traditional deployment: applications run directly on physical machines; resource allocation is hard to control, and when a bug appears one application may take most of the machine's resources so that others cannot run normally; there is no application isolation. Virtual machine deployment: multiple VMs run on one physical machine, each a complete independent system, with heavy performance overhead. Container deployment: all containers share the host's kernel; containers are lightweight virtual machines with low overhead, resource isolation, and CPU and memory allocated on demand.
When you need Kubernetes: when your application runs on a single machine, docker + docker-compose is enough, easy and convenient. When it needs to run on 3-4 machines, you can still configure each machine's environment separately plus a load balancer. When traffic keeps growing and the machines gradually increase to dozens, hundreds or thousands, every machine addition, software update and version rollback becomes very painful - no more relaxed slacking off, and life gets wasted on repetitive work with no technical value. That is where Kubernetes shines: it lets you easily manage clusters of millions of machines. "Talking and laughing, while masts and oars vanish into ash and smoke" - enjoy having everything under control, with that million-yuan salary no longer out of reach. Kubernetes gives you centralized management of cluster machines and applications; adding machines, upgrading and rolling back versions is a single command, with non-disruptive gray updates, ensuring high availability, high performance and easy scaling.
Kubernetes cluster architecture. master: the control plane; it does not need high performance and does not run workloads; usually one is enough, but you can run several master nodes to increase cluster availability. worker: worker nodes, virtual or physical machines where the workloads actually run; they need better hardware; there are usually many of them, and you can keep adding machines to grow the cluster; each worker node is managed by the master.
Important concepts. Pod: "pea pod", the smallest unit K8S schedules and manages; a Pod can contain one or more containers and has its own virtual IP; a worker node can host multiple pods, and the master node weighs the load and automatically schedules each pod onto a node. ...

July 24, 2022 · 8 min · jiezi

About Kubernetes: kubernetes (k8s) configmap.yaml Configuration

I. Introduction. A ConfigMap is an API object used to store non-confidential data in key-value pairs. Pods can consume it as environment variables, command-line arguments, or configuration files in a volume. A ConfigMap decouples your environment-specific configuration from your container images, making it easy to change application configuration. Official docs: https://kubernetes.io/zh-cn/d...
II. Configuration practice. There are two main steps: 1. reference the ConfigMap in the Deployment/Pod manifest; 2. write configmap.yaml.
2.1 Reference the ConfigMap in the Pod manifest:
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: mypod
    image: redis
    volumeMounts:                        # this is the part we use
    - name: config-volume                # named config-volume here
      mountPath: "/etc/config-volume"    # the mount path we configure
      readOnly: true
  volumes:                               # this is the part we use
  - name: config-volume                  # named config-volume here
    configMap:
      name: myconfigmap                  # metadata.name from configmap.yaml
2.2 Write configmap.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: myconfigmap                      # the name referenced above
data:
  ...
immutable: true
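The introduction also mentions consuming a ConfigMap as environment variables; the article only shows the volume form, so as a small supplementary sketch (not part of the original article), the same ConfigMap can be injected into the container's environment with envFrom:

spec:
  containers:
  - name: mypod
    image: redis
    envFrom:
    - configMapRef:
        name: myconfigmap      # every key in data: becomes an environment variable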

July 23, 2022 · 1 min · jiezi

About Kubernetes: Installing k8s on macOS

Preparation before installing: make sure Docker Desktop is installed and running locally.
Pull the k8s images. Clone the git repository locally:
git clone https://github.com/gotok8s/k8s-docker-desktop-for-mac.git
Enter the project directory and run:
./load_images.sh
Wait until all images have been pulled.
Deploy k8s: open the Docker Desktop settings page, tick the option on the Kubernetes settings tab, click the Apply & Restart button at the bottom right, and wait for k8s to finish deploying. Afterwards you can verify the deployment status:
kubectl cluster-info
kubectl get nodes
kubectl describe node
Install the k8s Dashboard application (recommended configuration):
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/aio/deploy/recommended.yaml
Get the login token. Start a proxy and set the proxy port to 8001:
kubectl proxy --port=8001
Open a new terminal window and request a token without parameters:
curl 'http://127.0.0.1:8001/api/v1/namespaces/kube-system/serviceaccounts/kubernetes-dashboard/token' -H "Content-Type:application/json" -X POST -d '{}'
Or request a token with parameters:
curl 'http://127.0.0.1:8001/api/v1/namespaces/kube-system/serviceaccounts/kubernetes-dashboard/token' -H "Content-Type:application/json" -X POST -d '{"kind":"TokenRequest","apiVersion":"authentication.k8s.io/v1","metadata":{"name":"kubernetes-dashboard","namespace":"kube-system"},"spec":{"audiences":["https://kubernetes.default.svc.cluster.local"],"expirationSeconds":7600}}'
Log in to the k8s Dashboard: copy the token returned in the previous step, open the following address in a browser and paste the token to log in:
http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/#/login

July 21, 2022 · 1 min · jiezi

About Kubernetes: Over 900,000 K8S Instances Discoverable and Exposed on the Public Internet, 14% Located in China

Translation: SEAL Security. Original title: Over 900,000 Kubernetes instances found exposed online. Original link: https://www.bleepingcomputer....
According to BleepingComputer, more than 900,000 misconfigured Kubernetes clusters have been found exposed on the internet, where they can be scanned by potentially malicious actors, and some are even vulnerable to data-exposing attacks. Kubernetes is an open source container orchestration engine widely adopted in the industry for hosting online services and managing containerized workloads through a unified API. Thanks to its scalability, flexibility in multi-cloud environments, portability, cost, and reduced application development and deployment time, it has been widely adopted by enterprises and has grown rapidly in recent years. However, if Kubernetes is misconfigured, remote attackers may exploit the misconfiguration to access internal resources and private assets that were never meant to be public. Furthermore, depending on the configuration, intruders can sometimes escalate their privileges inside containers, break isolation and pivot to host processes, giving them an initial foothold in the internal corporate network for further attacks.
Finding exposed Kubernetes. Cyble researchers ran an exercise using scanning tools and search queries similar to those used by malicious actors to find exposed Kubernetes instances on the web. The results showed about 900,000 Kubernetes servers, 65% (585,000) of them located in the US, 14% in China, 9% in Germany, and 6% each in the Netherlands and Ireland. Among the exposed servers, the most exposed TCP port was 443 with over a million instances, port 10250 counted 231,200, and port 6443 returned 84,400 results. It must be stressed that not all of these exposed clusters are exploitable, and even among the exploitable ones, the level of risk varies with the specific configuration.
High-risk cases. To assess how many exposed instances might be at serious risk, Cyble examined the error codes returned for unauthenticated requests to the Kubelet API. The vast majority of exposed instances return error code 403, meaning the unauthenticated request is forbidden, so they cannot be attacked this way. There is then a subset of roughly 5,000 instances that return error code 401, meaning the request is unauthorized. However, this response tells a potential attacker that the cluster is running, so they can try other attacks and exploits. Finally, a subset of 799 Kubernetes instances returns status code 200, meaning they are completely exposed to external attackers. In those cases, attackers can reach the nodes on the Kubernetes Dashboard without a password, access all Secrets, perform actions, and so on. Although the number of vulnerable Kubernetes servers is fairly small, it only takes one remotely exploitable vulnerability for many more devices to become susceptible to attack. To make sure your cluster is not among those 799 instances, or even among the 5,000 less exposed ones, refer to the NSA and CISA guidance on hardening Kubernetes security: https://www.bleepingcomputer....
Keeping an eye on the situation. Last month the Shadowserver Foundation released a report on exposed Kubernetes instances in which they found 380,000 unique IPs responding with HTTP code 200. Cyble told BleepingComputer that the big difference comes from the fact that they used open source scanners and simple queries that any threat actor could use, while Shadowserver scanned the whole IPv4 space and monitors additions daily. ...

July 20, 2022 · 1 min · jiezi

About Kubernetes: Technical Practice | Distributed Time-Based Lock

Background: the K8s client-go library ships with a leaderelection package, which makes it easy to implement a distributed time-based (lease) lock.
Usage scenario: take the native controller-manager component of K8s as an example. With three master machines, three controller-manager instances run by default, but only one does the work while the other two stand by. This behavior is implemented with the distributed time-based lock.
Lock configuration. All the relevant settings are shown in the figure above. The lock holder renews the lock's validity every retryPeriod to signal that it still holds the lock. Two parameters deserve special explanation:
1. leaseTimeout. An example: there is a room, and the rule is that after someone enters, the next person must wait at least one hour before entering. We can set leaseTimeout to one hour; whenever someone enters the room, the time written on the door is updated to the current time, and the next person may only enter if the time on the door is more than leaseTimeout older than now. The reason for this design is that in a distributed system you can only tell a program what to do while it is alive; once it fails, it is out of control. So that other healthy instances can take over normally when one fails, leaseTimeout is agreed on: once it is exceeded, the holder is simply declared failed and can be replaced.
2. renewDeadline. The rule above alone cannot prevent split brain, because a holder that fails to renew the lock within leaseTimeout is not necessarily dead - it may just be unable to update the lock for some other reason, or the program may be hung, and it may recover later. If it recovers after someone else has taken over, split brain occurs. To prevent this, the holder is also given a renewDeadline: if the holder cannot renew the lock within renewDeadline, it must forcefully release the lock and exit. Therefore renewDeadline must be smaller than leaseTimeout.
Leader workflow. The flow above is clear; below it is explained separately: try to acquire the lock and renew the lock.
Election sequence diagram. From the acquisition flow above, apart from the first creation of the lock, the key to the election is the observed time: observedTime.
Failure scenario for id1: id1's network fails and it cannot renew the lock. The sequence diagram shows why observing the time is necessary: every follower (candidate successor) must update its local observed time and must ensure that the lock has not changed at all during renewDeadline; otherwise a new election round is needed. Of course there is an extreme case: two followers notice at the same time that the lock has not changed and both try to grab it. This is where etcd's resourceVersion mechanism comes in: an update must carry the resourceVersion seen when reading, proving that the resource has not been updated by anything else in between. The principle is similar to SQL's select for update - locking the record at read time. In this situation etcd guarantees that the earlier update succeeds and the later one fails, so this extreme case cannot cause split brain.
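In current Kubernetes versions the lock that client-go's leaderelection package manipulates is typically a coordination.k8s.io Lease object, and you can see the parameters discussed above directly on it. A rough, abridged sketch of what inspecting the controller-manager's lease looks like (field values are invented):

# kubectl -n kube-system get lease kube-controller-manager -o yaml
apiVersion: coordination.k8s.io/v1
kind: Lease
metadata:
  name: kube-controller-manager
  namespace: kube-system
spec:
  holderIdentity: master-1_0f6d...            # which instance currently holds the lock
  leaseDurationSeconds: 15                    # the leaseTimeout discussed above
  renewTime: "2022-07-15T08:00:00.000000Z"    # refreshed every retryPeriod by the holder
  leaseTransitions: 3                         # how many times leadership has changed hands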

July 15, 2022 · 1 min · jiezi

About Kubernetes: Binary Installation of Kubernetes (k8s) v1.24.3 with an IPv4/IPv6 Dual Stack

二进制装置Kubernetes(k8s) v1.24.3 IPv4/IPv6双栈Kubernetes 开源不易,帮忙点个star,谢谢了 介绍kubernetes(k8s)二进制高可用装置部署,反对IPv4+IPv6双栈。 我应用IPV6的目标是在公网进行拜访,所以我配置了IPV6动态地址。 若您没有IPV6环境,或者不想应用IPv6,不对主机进行配置IPv6地址即可。 不配置IPV6,不影响后续,不过集群仍旧是反对IPv6的。为前期留有扩大可能性。 若不要IPv6 ,不给网卡配置IPv6即可,不要对IPv6相干配置删除或操作,否则会出问题。 https://github.com/cby-chen/K... 手动我的项目地址:https://github.com/cby-chen/K... 脚本我的项目地址:https://github.com/cby-chen/B... 强烈建议在Github上查看文档。Github出问题会更新文档,并且后续尽可能第一工夫更新新版本文档。 1.21.13 和 1.22.10 和 1.23.3 和 1.23.4 和 1.23.5 和 1.23.6 和 1.23.7 和 1.24.0 和 1.24.1 和 1.24.2 和 1.24.3 文档以及安装包已生成。 1.环境主机名称IP地址阐明软件Master01192.168.1.61master节点kube-apiserver、kube-controller-manager、kube-scheduler、etcd、kubelet、kube-proxy、nfs-clientMaster02192.168.1.62master节点kube-apiserver、kube-controller-manager、kube-scheduler、etcd、kubelet、kube-proxy、nfs-clientMaster03192.168.1.63master节点kube-apiserver、kube-controller-manager、kube-scheduler、etcd、kubelet、kube-proxy、nfs-clientNode01192.168.1.64node节点kubelet、kube-proxy、nfs-clientNode02192.168.1.65node节点kubelet、kube-proxy、nfs-clientNode03192.168.1.66node节点kubelet、kube-proxy、nfs-clientNode04192.168.1.67node节点kubelet、kube-proxy、nfs-clientNode05192.168.1.68node节点kubelet、kube-proxy、nfs-clientLb01192.168.1.70Lb01节点haproxy、keepalivedLb02192.168.1.80Lb02节点haproxy、keepalived 192.168.1.69VIP 软件版本kernel5.18.0-1.el8CentOS 8v8 或者 v7kube-apiserver、kube-controller-manager、kube-scheduler、kubelet、kube-proxyv1.24.3etcdv3.5.4containerdv1.6.6cfsslv1.6.1cniv1.1.1crictlv1.24.2haproxyv1.8.27keepalivedv2.1.5网段 物理主机:192.168.1.0/24 service:10.96.0.0/12 pod:172.16.0.0/12 倡议k8s集群与etcd集群离开装置 安装包曾经整顿好:https://github.com/cby-chen/K... 1.1.k8s根底零碎环境配置1.2.配置IPssh root@192.168.1.100 "nmcli con mod ens18 ipv4.addresses 192.168.1.61/24; nmcli con mod ens18 ipv4.gateway 10.0.0.1; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns "8.8.8.8"; nmcli con up ens18"ssh root@192.168.1.106 "nmcli con mod ens18 ipv4.addresses 192.168.1.62/24; nmcli con mod ens18 ipv4.gateway 10.0.0.1; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns "8.8.8.8"; nmcli con up ens18"ssh root@192.168.1.110 "nmcli con mod ens18 ipv4.addresses 192.168.1.63/24; nmcli con mod ens18 ipv4.gateway 10.0.0.1; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns "8.8.8.8"; nmcli con up ens18"ssh root@192.168.1.114 "nmcli con mod ens18 ipv4.addresses 192.168.1.64/24; nmcli con mod ens18 ipv4.gateway 10.0.0.1; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns "8.8.8.8"; nmcli con up ens18"ssh root@192.168.1.115 "nmcli con mod ens18 ipv4.addresses 192.168.1.65/24; nmcli con mod ens18 ipv4.gateway 10.0.0.1; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns "8.8.8.8"; nmcli con up ens18"ssh root@192.168.1.116 "nmcli con mod ens18 ipv4.addresses 192.168.1.66/24; nmcli con mod ens18 ipv4.gateway 10.0.0.1; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns "8.8.8.8"; nmcli con up ens18"ssh root@192.168.1.117 "nmcli con mod ens18 ipv4.addresses 192.168.1.67/24; nmcli con mod ens18 ipv4.gateway 10.0.0.1; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns "8.8.8.8"; nmcli con up ens18"ssh root@192.168.1.118 "nmcli con mod ens18 ipv4.addresses 192.168.1.68/24; nmcli con mod ens18 ipv4.gateway 10.0.0.1; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns "8.8.8.8"; nmcli con up ens18"ssh root@192.168.1.119 "nmcli con mod ens18 ipv4.addresses 192.168.1.70/24; nmcli con mod ens18 ipv4.gateway 10.0.0.1; nmcli con mod ens18 ipv4.method manual; nmcli con mod 
ens18 ipv4.dns "8.8.8.8"; nmcli con up ens18"ssh root@192.168.1.120 "nmcli con mod ens18 ipv4.addresses 192.168.1.80/24; nmcli con mod ens18 ipv4.gateway 10.0.0.1; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns "8.8.8.8"; nmcli con up ens18"ssh root@192.168.1.61 "nmcli con mod ens18 ipv6.addresses 2408:8207:78cc:5cc1:181c::10; nmcli con mod ens18 ipv6.gateway fe80::2e2:69ff:fe3f:b198; nmcli con mod ens18 ipv6.method manual; nmcli con mod ens18 ipv6.dns "2001:4860:4860::8888"; nmcli con up ens18"ssh root@192.168.1.62 "nmcli con mod ens18 ipv6.addresses 2408:8207:78cc:5cc1:181c::20; nmcli con mod ens18 ipv6.gateway fe80::2e2:69ff:fe3f:b198; nmcli con mod ens18 ipv6.method manual; nmcli con mod ens18 ipv6.dns "2001:4860:4860::8888"; nmcli con up ens18"ssh root@192.168.1.63 "nmcli con mod ens18 ipv6.addresses 2408:8207:78cc:5cc1:181c::30; nmcli con mod ens18 ipv6.gateway fe80::2e2:69ff:fe3f:b198; nmcli con mod ens18 ipv6.method manual; nmcli con mod ens18 ipv6.dns "2001:4860:4860::8888"; nmcli con up ens18"ssh root@192.168.1.64 "nmcli con mod ens18 ipv6.addresses 2408:8207:78cc:5cc1:181c::40; nmcli con mod ens18 ipv6.gateway fe80::2e2:69ff:fe3f:b198; nmcli con mod ens18 ipv6.method manual; nmcli con mod ens18 ipv6.dns "2001:4860:4860::8888"; nmcli con up ens18"ssh root@192.168.1.65 "nmcli con mod ens18 ipv6.addresses 2408:8207:78cc:5cc1:181c::50; nmcli con mod ens18 ipv6.gateway fe80::2e2:69ff:fe3f:b198; nmcli con mod ens18 ipv6.method manual; nmcli con mod ens18 ipv6.dns "2001:4860:4860::8888"; nmcli con up ens18"ssh root@192.168.1.66 "nmcli con mod ens18 ipv6.addresses 2408:8207:78cc:5cc1:181c::60; nmcli con mod ens18 ipv6.gateway fe80::2e2:69ff:fe3f:b198; nmcli con mod ens18 ipv6.method manual; nmcli con mod ens18 ipv6.dns "2001:4860:4860::8888"; nmcli con up ens18"ssh root@192.168.1.67 "nmcli con mod ens18 ipv6.addresses 2408:8207:78cc:5cc1:181c::70; nmcli con mod ens18 ipv6.gateway fe80::2e2:69ff:fe3f:b198; nmcli con mod ens18 ipv6.method manual; nmcli con mod ens18 ipv6.dns "2001:4860:4860::8888"; nmcli con up ens18"ssh root@192.168.1.68 "nmcli con mod ens18 ipv6.addresses 2408:8207:78cc:5cc1:181c::80; nmcli con mod ens18 ipv6.gateway fe80::2e2:69ff:fe3f:b198; nmcli con mod ens18 ipv6.method manual; nmcli con mod ens18 ipv6.dns "2001:4860:4860::8888"; nmcli con up ens18"ssh root@192.168.1.70 "nmcli con mod ens18 ipv6.addresses 2408:8207:78cc:5cc1:181c::90; nmcli con mod ens18 ipv6.gateway fe80::2e2:69ff:fe3f:b198; nmcli con mod ens18 ipv6.method manual; nmcli con mod ens18 ipv6.dns "2001:4860:4860::8888"; nmcli con up ens18"ssh root@192.168.1.80 "nmcli con mod ens18 ipv6.addresses 2408:8207:78cc:5cc1:181c::100; nmcli con mod ens18 ipv6.gateway fe80::2e2:69ff:fe3f:b198; nmcli con mod ens18 ipv6.method manual; nmcli con mod ens18 ipv6.dns "2001:4860:4860::8888"; nmcli con up ens18"1.3.设置主机名hostnamectl set-hostname k8s-master01hostnamectl set-hostname k8s-master02hostnamectl set-hostname k8s-master03hostnamectl set-hostname k8s-node01hostnamectl set-hostname k8s-node02hostnamectl set-hostname k8s-node03hostnamectl set-hostname k8s-node04hostnamectl set-hostname k8s-node05hostnamectl set-hostname lb01hostnamectl set-hostname lb021.4.配置yum源# 对于 CentOS 7sudo sed -e 's|^mirrorlist=|#mirrorlist=|g' \ -e 's|^#baseurl=http://mirror.centos.org|baseurl=https://mirrors.tuna.tsinghua.edu.cn|g' \ -i.bak \ /etc/yum.repos.d/CentOS-*.repo# 对于 CentOS 8sudo sed -e 's|^mirrorlist=|#mirrorlist=|g' \ -e 
's|^#baseurl=http://mirror.centos.org/$contentdir|baseurl=https://mirrors.tuna.tsinghua.edu.cn/centos|g' \ -i.bak \ /etc/yum.repos.d/CentOS-*.repo# 对于公有仓库sed -e 's|^mirrorlist=|#mirrorlist=|g' -e 's|^#baseurl=http://mirror.centos.org/\$contentdir|baseurl=http://192.168.1.123/centos|g' -i.bak /etc/yum.repos.d/CentOS-*.repo1.5.装置一些必备工具yum -y install wget jq psmisc vim net-tools nfs-utils telnet yum-utils device-mapper-persistent-data lvm2 git network-scripts tar curl -y1.6.选择性下载须要工具1.下载kubernetes1.24.+的二进制包github二进制包下载地址:https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.24.mdwget https://dl.k8s.io/v1.24.2/kubernetes-server-linux-amd64.tar.gz2.下载etcdctl二进制包github二进制包下载地址:https://github.com/etcd-io/etcd/releaseswget https://github.com/etcd-io/etcd/releases/download/v3.5.4/etcd-v3.5.4-linux-amd64.tar.gz3.containerd二进制包下载github下载地址:https://github.com/containerd/containerd/releases4.containerd下载时下载带cni插件的二进制包。wget https://github.com/containerd/containerd/releases/download/v1.6.6/cri-containerd-cni-1.6.6-linux-amd64.tar.gz5.下载cfssl二进制包github二进制包下载地址:https://github.com/cloudflare/cfssl/releaseswget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl_1.6.1_linux_amd64wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssljson_1.6.1_linux_amd64wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl-certinfo_1.6.1_linux_amd646.cni插件下载github下载地址:https://github.com/containernetworking/plugins/releaseswget https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz7.crictl客户端二进制下载github下载:https://github.com/kubernetes-sigs/cri-tools/releaseswget https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.24.2/crictl-v1.24.2-linux-amd64.tar.gz1.7.敞开防火墙systemctl disable --now firewalld1.8.敞开SELinuxsetenforce 0sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config1.9.敞开替换分区sed -ri 's/.*swap.*/#&/' /etc/fstabswapoff -a && sysctl -w vm.swappiness=0cat /etc/fstab# /dev/mapper/centos-swap swap swap defaults 0 01.10.网络配置(俩种形式二选一)# 形式一# systemctl disable --now NetworkManager# systemctl start network && systemctl enable network# 形式二cat > /etc/NetworkManager/conf.d/calico.conf << EOF [keyfile]unmanaged-devices=interface-name:cali*;interface-name:tunl*EOFsystemctl restart NetworkManager1.11.进行工夫同步 (lb除外)# 服务端yum install chrony -ycat > /etc/chrony.conf << EOF pool ntp.aliyun.com iburstdriftfile /var/lib/chrony/driftmakestep 1.0 3rtcsyncallow 10.0.0.0/24local stratum 10keyfile /etc/chrony.keysleapsectz right/UTClogdir /var/log/chronyEOFsystemctl restart chronyd ; systemctl enable chronyd# 客户端yum install chrony -ycat > /etc/chrony.conf << EOF pool 192.168.1.61 iburstdriftfile /var/lib/chrony/driftmakestep 1.0 3rtcsynckeyfile /etc/chrony.keysleapsectz right/UTClogdir /var/log/chronyEOFsystemctl restart chronyd ; systemctl enable chronyd#应用客户端进行验证chronyc sources -v1.12.配置ulimitulimit -SHn 65535cat >> /etc/security/limits.conf <<EOF* soft nofile 655360* hard nofile 131072* soft nproc 655350* hard nproc 655350* seft memlock unlimited* hard memlock unlimiteddEOF1.13.配置免密登录yum install -y sshpassssh-keygen -f /root/.ssh/id_rsa -P ''export IP="192.168.1.61 192.168.1.62 192.168.1.63 192.168.1.64 192.168.1.65 192.168.1.66 192.168.1.67 192.168.1.68 192.168.1.70 192.168.1.80"export SSHPASS=123123for HOST in $IP;do sshpass -e ssh-copy-id -o StrictHostKeyChecking=no $HOSTdone1.14.增加启用源 (lb除外)# 为 RHEL-8或 CentOS-8配置源yum install https://www.elrepo.org/elrepo-release-8.el8.elrepo.noarch.rpm -y sed -i 
"s@mirrorlist@#mirrorlist@g" /etc/yum.repos.d/elrepo.repo sed -i "s@elrepo.org/linux@mirrors.tuna.tsinghua.edu.cn/elrepo@g" /etc/yum.repos.d/elrepo.repo # 为 RHEL-7 SL-7 或 CentOS-7 装置 ELRepo yum install https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm -y sed -i "s@mirrorlist@#mirrorlist@g" /etc/yum.repos.d/elrepo.repo sed -i "s@elrepo.org/linux@mirrors.tuna.tsinghua.edu.cn/elrepo@g" /etc/yum.repos.d/elrepo.repo # 查看可用安装包yum --disablerepo="*" --enablerepo="elrepo-kernel" list available1.15.降级内核至4.18版本以上 (lb除外)# 装置最新的内核# 我这里抉择的是稳定版kernel-ml 如需更新长期保护版本kernel-lt yum --enablerepo=elrepo-kernel install kernel-ml# 查看已装置那些内核rpm -qa | grep kernelkernel-core-4.18.0-358.el8.x86_64kernel-tools-4.18.0-358.el8.x86_64kernel-ml-core-5.16.7-1.el8.elrepo.x86_64kernel-ml-5.16.7-1.el8.elrepo.x86_64kernel-modules-4.18.0-358.el8.x86_64kernel-4.18.0-358.el8.x86_64kernel-tools-libs-4.18.0-358.el8.x86_64kernel-ml-modules-5.16.7-1.el8.elrepo.x86_64# 查看默认内核grubby --default-kernel/boot/vmlinuz-5.16.7-1.el8.elrepo.x86_64# 若不是最新的应用命令设置grubby --set-default /boot/vmlinuz-「您的内核版本」.x86_64# 重启失效reboot# v8 整合命令为:yum install https://www.elrepo.org/elrepo-release-8.el8.elrepo.noarch.rpm -y ; sed -i "s@mirrorlist@#mirrorlist@g" /etc/yum.repos.d/elrepo.repo ; sed -i "s@elrepo.org/linux@mirrors.tuna.tsinghua.edu.cn/elrepo@g" /etc/yum.repos.d/elrepo.repo ; yum --disablerepo="*" --enablerepo="elrepo-kernel" list available -y ; yum --enablerepo=elrepo-kernel install kernel-ml -y ; grubby --default-kernel ; reboot # v7 整合命令为:yum install https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm -y ; sed -i "s@mirrorlist@#mirrorlist@g" /etc/yum.repos.d/elrepo.repo ; sed -i "s@elrepo.org/linux@mirrors.tuna.tsinghua.edu.cn/elrepo@g" /etc/yum.repos.d/elrepo.repo ; yum --disablerepo="*" --enablerepo="elrepo-kernel" list available -y ; yum --enablerepo=elrepo-kernel install kernel-ml -y ; grubby --set-default $(ls /boot/vmlinuz-* | grep elrepo) ; grubby --default-kernel ; reboot 1.16.装置ipvsadm (lb除外)yum install ipvsadm ipset sysstat conntrack libseccomp -ycat >> /etc/modules-load.d/ipvs.conf <<EOF ip_vsip_vs_rrip_vs_wrrip_vs_shnf_conntrackip_tablesip_setxt_setipt_setipt_rpfilteript_REJECTipipEOFsystemctl restart systemd-modules-load.servicelsmod | grep -e ip_vs -e nf_conntrackip_vs_sh 16384 0ip_vs_wrr 16384 0ip_vs_rr 16384 0ip_vs 180224 6 ip_vs_rr,ip_vs_sh,ip_vs_wrrnf_conntrack 176128 1 ip_vsnf_defrag_ipv6 24576 2 nf_conntrack,ip_vsnf_defrag_ipv4 16384 1 nf_conntracklibcrc32c 16384 3 nf_conntrack,xfs,ip_vs1.17.批改内核参数 (lb除外)cat <<EOF > /etc/sysctl.d/k8s.confnet.ipv4.ip_forward = 1net.bridge.bridge-nf-call-iptables = 1fs.may_detach_mounts = 1vm.overcommit_memory=1vm.panic_on_oom=0fs.inotify.max_user_watches=89100fs.file-max=52706963fs.nr_open=52706963net.netfilter.nf_conntrack_max=2310720net.ipv4.tcp_keepalive_time = 600net.ipv4.tcp_keepalive_probes = 3net.ipv4.tcp_keepalive_intvl =15net.ipv4.tcp_max_tw_buckets = 36000net.ipv4.tcp_tw_reuse = 1net.ipv4.tcp_max_orphans = 327680net.ipv4.tcp_orphan_retries = 3net.ipv4.tcp_syncookies = 1net.ipv4.tcp_max_syn_backlog = 16384net.ipv4.ip_conntrack_max = 65536net.ipv4.tcp_max_syn_backlog = 16384net.ipv4.tcp_timestamps = 0net.core.somaxconn = 16384net.ipv6.conf.all.disable_ipv6 = 0net.ipv6.conf.default.disable_ipv6 = 0net.ipv6.conf.lo.disable_ipv6 = 0net.ipv6.conf.all.forwarding = 1EOFsysctl --system1.18.所有节点配置hosts本地解析cat > /etc/hosts <<EOF127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4::1 localhost localhost.localdomain localhost6 
localhost6.localdomain62408:8207:78cc:5cc1:181c::10 k8s-master012408:8207:78cc:5cc1:181c::20 k8s-master022408:8207:78cc:5cc1:181c::30 k8s-master032408:8207:78cc:5cc1:181c::40 k8s-node012408:8207:78cc:5cc1:181c::50 k8s-node022408:8207:78cc:5cc1:181c::60 k8s-node032408:8207:78cc:5cc1:181c::70 k8s-node042408:8207:78cc:5cc1:181c::80 k8s-node052408:8207:78cc:5cc1:181c::90 lb012408:8207:78cc:5cc1:181c::100 lb02192.168.1.61 k8s-master01192.168.1.62 k8s-master02192.168.1.63 k8s-master03192.168.1.64 k8s-node01192.168.1.65 k8s-node02192.168.1.66 k8s-node03192.168.1.67 k8s-node04192.168.1.68 k8s-node05192.168.1.70 lb01192.168.1.80 lb02192.168.1.69 lb-vipEOF2.k8s根本组件装置2.1.所有k8s节点装置Containerd作为Runtime# wget https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz#创立cni插件所需目录mkdir -p /etc/cni/net.d /opt/cni/bin #解压cni二进制包tar xf cni-plugins-linux-amd64-v1.1.1.tgz -C /opt/cni/bin/# wget https://github.com/containerd/containerd/releases/download/v1.6.6/cri-containerd-cni-1.6.6-linux-amd64.tar.gz#解压tar -xzf cri-containerd-cni-1.6.6-linux-amd64.tar.gz -C /#创立服务启动文件cat > /etc/systemd/system/containerd.service <<EOF[Unit]Description=containerd container runtimeDocumentation=https://containerd.ioAfter=network.target local-fs.target[Service]ExecStartPre=-/sbin/modprobe overlayExecStart=/usr/local/bin/containerdType=notifyDelegate=yesKillMode=processRestart=alwaysRestartSec=5LimitNPROC=infinityLimitCORE=infinityLimitNOFILE=infinityTasksMax=infinityOOMScoreAdjust=-999[Install]WantedBy=multi-user.targetEOF2.1.1配置Containerd所需的模块cat <<EOF | sudo tee /etc/modules-load.d/containerd.confoverlaybr_netfilterEOF2.1.2加载模块systemctl restart systemd-modules-load.service2.1.3配置Containerd所需的内核cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.confnet.bridge.bridge-nf-call-iptables = 1net.ipv4.ip_forward = 1net.bridge.bridge-nf-call-ip6tables = 1EOF# 加载内核sysctl --system2.1.4创立Containerd的配置文件# 创立默认配置文件mkdir -p /etc/containerdcontainerd config default | tee /etc/containerd/config.toml# 批改Containerd的配置文件sed -i "s#SystemdCgroup\ \=\ false#SystemdCgroup\ \=\ true#g" /etc/containerd/config.tomlcat /etc/containerd/config.toml | grep SystemdCgroupsed -i "s#k8s.gcr.io#registry.cn-hangzhou.aliyuncs.com/chenby#g" /etc/containerd/config.tomlcat /etc/containerd/config.toml | grep sandbox_image2.1.5启动并设置为开机启动systemctl daemon-reloadsystemctl enable --now containerd2.1.6配置crictl客户端连贯的运行时地位# wget https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.24.2/crictl-v1.24.2-linux-amd64.tar.gz#解压tar xf crictl-v1.24.2-linux-amd64.tar.gz -C /usr/bin/#生成配置文件cat > /etc/crictl.yaml <<EOFruntime-endpoint: unix:///run/containerd/containerd.sockimage-endpoint: unix:///run/containerd/containerd.socktimeout: 10debug: falseEOF#测试systemctl restart containerdcrictl info2.2.k8s与etcd下载及装置(仅在master01操作)2.2.1解压k8s安装包# 下载安装包# wget https://dl.k8s.io/v1.24.2/kubernetes-server-linux-amd64.tar.gz# wget https://github.com/etcd-io/etcd/releases/download/v3.5.4/etcd-v3.5.4-linux-amd64.tar.gz# 解压k8s安装文件cd cbytar -xf kubernetes-server-linux-amd64.tar.gz --strip-components=3 -C /usr/local/bin kubernetes/server/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy}# 解压etcd安装文件tar -xf etcd*.tar.gz && mv etcd-*/etcd /usr/local/bin/ && mv etcd-*/etcdctl /usr/local/bin/# 查看/usr/local/bin下内容ls /usr/local/bin/containerd containerd-shim-runc-v1 containerd-stress critest ctr etcdctl kube-controller-manager kubelet kube-scheduler containerd-shim containerd-shim-runc-v2 crictl ctd-decoder etcd kube-apiserver kubectl 
kube-proxy2.2.2查看版本[root@k8s-master01 ~]# kubelet --versionKubernetes v1.24.3[root@k8s-master01 ~]# etcdctl versionetcdctl version: 3.5.4API version: 3.5[root@k8s-master01 ~]# 2.2.3将组件发送至其余k8s节点Master='k8s-master02 k8s-master03'Work='k8s-node01 k8s-node02 k8s-node03 k8s-node04 k8s-node05'for NODE in $Master; do echo $NODE; scp /usr/local/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy} $NODE:/usr/local/bin/; scp /usr/local/bin/etcd* $NODE:/usr/local/bin/; donefor NODE in $Work; do scp /usr/local/bin/kube{let,-proxy} $NODE:/usr/local/bin/ ; donemkdir -p /opt/cni/bin2.3创立证书相干文件mkdir pkicd pkicat > admin-csr.json << EOF { "CN": "admin", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "system:masters", "OU": "Kubernetes-manual" } ]}EOFcat > ca-config.json << EOF { "signing": { "default": { "expiry": "876000h" }, "profiles": { "kubernetes": { "usages": [ "signing", "key encipherment", "server auth", "client auth" ], "expiry": "876000h" } } }}EOFcat > etcd-ca-csr.json << EOF { "CN": "etcd", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "etcd", "OU": "Etcd Security" } ], "ca": { "expiry": "876000h" }}EOFcat > front-proxy-ca-csr.json << EOF { "CN": "kubernetes", "key": { "algo": "rsa", "size": 2048 }, "ca": { "expiry": "876000h" }}EOFcat > kubelet-csr.json << EOF { "CN": "system:node:\$NODE", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "L": "Beijing", "ST": "Beijing", "O": "system:nodes", "OU": "Kubernetes-manual" } ]}EOFcat > manager-csr.json << EOF { "CN": "system:kube-controller-manager", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "system:kube-controller-manager", "OU": "Kubernetes-manual" } ]}EOFcat > apiserver-csr.json << EOF { "CN": "kube-apiserver", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "Kubernetes", "OU": "Kubernetes-manual" } ]}EOFcat > ca-csr.json << EOF { "CN": "kubernetes", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "Kubernetes", "OU": "Kubernetes-manual" } ], "ca": { "expiry": "876000h" }}EOFcat > etcd-csr.json << EOF { "CN": "etcd", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "etcd", "OU": "Etcd Security" } ]}EOFcat > front-proxy-client-csr.json << EOF { "CN": "front-proxy-client", "key": { "algo": "rsa", "size": 2048 }}EOFcat > kube-proxy-csr.json << EOF { "CN": "system:kube-proxy", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "system:kube-proxy", "OU": "Kubernetes-manual" } ]}EOFcat > scheduler-csr.json << EOF { "CN": "system:kube-scheduler", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "system:kube-scheduler", "OU": "Kubernetes-manual" } ]}EOFcd ..mkdir bootstrapcd bootstrapcat > bootstrap.secret.yaml << EOF apiVersion: v1kind: Secretmetadata: name: bootstrap-token-c8ad9c namespace: kube-systemtype: bootstrap.kubernetes.io/tokenstringData: description: "The default bootstrap token generated by 'kubelet '." 
token-id: c8ad9c token-secret: 2e4d610cf3e7426e usage-bootstrap-authentication: "true" usage-bootstrap-signing: "true" auth-extra-groups: system:bootstrappers:default-node-token,system:bootstrappers:worker,system:bootstrappers:ingress ---apiVersion: rbac.authorization.k8s.io/v1kind: ClusterRoleBindingmetadata: name: kubelet-bootstraproleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:node-bootstrappersubjects:- apiGroup: rbac.authorization.k8s.io kind: Group name: system:bootstrappers:default-node-token---apiVersion: rbac.authorization.k8s.io/v1kind: ClusterRoleBindingmetadata: name: node-autoapprove-bootstraproleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:certificates.k8s.io:certificatesigningrequests:nodeclientsubjects:- apiGroup: rbac.authorization.k8s.io kind: Group name: system:bootstrappers:default-node-token---apiVersion: rbac.authorization.k8s.io/v1kind: ClusterRoleBindingmetadata: name: node-autoapprove-certificate-rotationroleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclientsubjects:- apiGroup: rbac.authorization.k8s.io kind: Group name: system:nodes---apiVersion: rbac.authorization.k8s.io/v1kind: ClusterRolemetadata: annotations: rbac.authorization.kubernetes.io/autoupdate: "true" labels: kubernetes.io/bootstrapping: rbac-defaults name: system:kube-apiserver-to-kubeletrules: - apiGroups: - "" resources: - nodes/proxy - nodes/stats - nodes/log - nodes/spec - nodes/metrics verbs: - "*"---apiVersion: rbac.authorization.k8s.io/v1kind: ClusterRoleBindingmetadata: name: system:kube-apiserver namespace: ""roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:kube-apiserver-to-kubeletsubjects: - apiGroup: rbac.authorization.k8s.io kind: User name: kube-apiserverEOFcd ..mkdir corednscd corednscat > coredns.yaml << EOF apiVersion: v1kind: ServiceAccountmetadata: name: coredns namespace: kube-system---apiVersion: rbac.authorization.k8s.io/v1kind: ClusterRolemetadata: labels: kubernetes.io/bootstrapping: rbac-defaults name: system:corednsrules: - apiGroups: - "" resources: - endpoints - services - pods - namespaces verbs: - list - watch - apiGroups: - discovery.k8s.io resources: - endpointslices verbs: - list - watch---apiVersion: rbac.authorization.k8s.io/v1kind: ClusterRoleBindingmetadata: annotations: rbac.authorization.kubernetes.io/autoupdate: "true" labels: kubernetes.io/bootstrapping: rbac-defaults name: system:corednsroleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:corednssubjects:- kind: ServiceAccount name: coredns namespace: kube-system---apiVersion: v1kind: ConfigMapmetadata: name: coredns namespace: kube-systemdata: Corefile: | .:53 { errors health { lameduck 5s } ready kubernetes cluster.local in-addr.arpa ip6.arpa { fallthrough in-addr.arpa ip6.arpa } prometheus :9153 forward . /etc/resolv.conf { max_concurrent 1000 } cache 30 loop reload loadbalance }---apiVersion: apps/v1kind: Deploymentmetadata: name: coredns namespace: kube-system labels: k8s-app: kube-dns kubernetes.io/name: "CoreDNS"spec: # replicas: not specified here: # 1. Default is 1. # 2. Will be tuned in real time if DNS horizontal auto-scaling is turned on. 
strategy: type: RollingUpdate rollingUpdate: maxUnavailable: 1 selector: matchLabels: k8s-app: kube-dns template: metadata: labels: k8s-app: kube-dns spec: priorityClassName: system-cluster-critical serviceAccountName: coredns tolerations: - key: "CriticalAddonsOnly" operator: "Exists" nodeSelector: kubernetes.io/os: linux affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - weight: 100 podAffinityTerm: labelSelector: matchExpressions: - key: k8s-app operator: In values: ["kube-dns"] topologyKey: kubernetes.io/hostname containers: - name: coredns image: registry.cn-beijing.aliyuncs.com/dotbalo/coredns:1.8.6 imagePullPolicy: IfNotPresent resources: limits: memory: 170Mi requests: cpu: 100m memory: 70Mi args: [ "-conf", "/etc/coredns/Corefile" ] volumeMounts: - name: config-volume mountPath: /etc/coredns readOnly: true ports: - containerPort: 53 name: dns protocol: UDP - containerPort: 53 name: dns-tcp protocol: TCP - containerPort: 9153 name: metrics protocol: TCP securityContext: allowPrivilegeEscalation: false capabilities: add: - NET_BIND_SERVICE drop: - all readOnlyRootFilesystem: true livenessProbe: httpGet: path: /health port: 8080 scheme: HTTP initialDelaySeconds: 60 timeoutSeconds: 5 successThreshold: 1 failureThreshold: 5 readinessProbe: httpGet: path: /ready port: 8181 scheme: HTTP dnsPolicy: Default volumes: - name: config-volume configMap: name: coredns items: - key: Corefile path: Corefile---apiVersion: v1kind: Servicemetadata: name: kube-dns namespace: kube-system annotations: prometheus.io/port: "9153" prometheus.io/scrape: "true" labels: k8s-app: kube-dns kubernetes.io/cluster-service: "true" kubernetes.io/name: "CoreDNS"spec: selector: k8s-app: kube-dns clusterIP: 10.96.0.10 ports: - name: dns port: 53 protocol: UDP - name: dns-tcp port: 53 protocol: TCP - name: metrics port: 9153 protocol: TCPEOFcd ..mkdir metrics-servercd metrics-servercat > metrics-server.yaml << EOF apiVersion: v1kind: ServiceAccountmetadata: labels: k8s-app: metrics-server name: metrics-server namespace: kube-system---apiVersion: rbac.authorization.k8s.io/v1kind: ClusterRolemetadata: labels: k8s-app: metrics-server rbac.authorization.k8s.io/aggregate-to-admin: "true" rbac.authorization.k8s.io/aggregate-to-edit: "true" rbac.authorization.k8s.io/aggregate-to-view: "true" name: system:aggregated-metrics-readerrules:- apiGroups: - metrics.k8s.io resources: - pods - nodes verbs: - get - list - watch---apiVersion: rbac.authorization.k8s.io/v1kind: ClusterRolemetadata: labels: k8s-app: metrics-server name: system:metrics-serverrules:- apiGroups: - "" resources: - pods - nodes - nodes/stats - namespaces - configmaps verbs: - get - list - watch---apiVersion: rbac.authorization.k8s.io/v1kind: RoleBindingmetadata: labels: k8s-app: metrics-server name: metrics-server-auth-reader namespace: kube-systemroleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: extension-apiserver-authentication-readersubjects:- kind: ServiceAccount name: metrics-server namespace: kube-system---apiVersion: rbac.authorization.k8s.io/v1kind: ClusterRoleBindingmetadata: labels: k8s-app: metrics-server name: metrics-server:system:auth-delegatorroleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:auth-delegatorsubjects:- kind: ServiceAccount name: metrics-server namespace: kube-system---apiVersion: rbac.authorization.k8s.io/v1kind: ClusterRoleBindingmetadata: labels: k8s-app: metrics-server name: system:metrics-serverroleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: 
system:metrics-serversubjects:- kind: ServiceAccount name: metrics-server namespace: kube-system---apiVersion: v1kind: Servicemetadata: labels: k8s-app: metrics-server name: metrics-server namespace: kube-systemspec: ports: - name: https port: 443 protocol: TCP targetPort: https selector: k8s-app: metrics-server---apiVersion: apps/v1kind: Deploymentmetadata: labels: k8s-app: metrics-server name: metrics-server namespace: kube-systemspec: selector: matchLabels: k8s-app: metrics-server strategy: rollingUpdate: maxUnavailable: 0 template: metadata: labels: k8s-app: metrics-server spec: containers: - args: - --cert-dir=/tmp - --secure-port=4443 - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname - --kubelet-use-node-status-port - --metric-resolution=15s - --kubelet-insecure-tls - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem # change to front-proxy-ca.crt for kubeadm - --requestheader-username-headers=X-Remote-User - --requestheader-group-headers=X-Remote-Group - --requestheader-extra-headers-prefix=X-Remote-Extra- image: registry.cn-beijing.aliyuncs.com/dotbalo/metrics-server:0.5.0 imagePullPolicy: IfNotPresent livenessProbe: failureThreshold: 3 httpGet: path: /livez port: https scheme: HTTPS periodSeconds: 10 name: metrics-server ports: - containerPort: 4443 name: https protocol: TCP readinessProbe: failureThreshold: 3 httpGet: path: /readyz port: https scheme: HTTPS initialDelaySeconds: 20 periodSeconds: 10 resources: requests: cpu: 100m memory: 200Mi securityContext: readOnlyRootFilesystem: true runAsNonRoot: true runAsUser: 1000 volumeMounts: - mountPath: /tmp name: tmp-dir - name: ca-ssl mountPath: /etc/kubernetes/pki nodeSelector: kubernetes.io/os: linux priorityClassName: system-cluster-critical serviceAccountName: metrics-server volumes: - emptyDir: {} name: tmp-dir - name: ca-ssl hostPath: path: /etc/kubernetes/pki---apiVersion: apiregistration.k8s.io/v1kind: APIServicemetadata: labels: k8s-app: metrics-server name: v1beta1.metrics.k8s.iospec: group: metrics.k8s.io groupPriorityMinimum: 100 insecureSkipTLSVerify: true service: name: metrics-server namespace: kube-system version: v1beta1 versionPriority: 100EOF3.相干证书生成# master01节点下载证书生成工具# wget "https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl_1.6.1_linux_amd64" -O /usr/local/bin/cfssl# wget "https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssljson_1.6.1_linux_amd64" -O /usr/local/bin/cfssljson# 软件包内有cp cfssl_1.6.1_linux_amd64 /usr/local/bin/cfsslcp cfssljson_1.6.1_linux_amd64 /usr/local/bin/cfssljsonchmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson3.1.生成etcd证书特地阐明除外,以下操作在所有master节点操作 ...

July 14, 2022 · 23 min · jiezi

About Kubernetes: Kubernetes (k8s) Binary High-Availability Installation

kubernetes (k8s) binary high-availability installation: https://github.com/cby-chen/Kubernetes - open source is not easy, please give it a star, thank you!
If GitHub is hard to reach, you can use the domestic Gitee mirror: https://gitee.com/cby-inc/Kub...
Common issues: the installation may hit a kubelet error where the --node-labels field is not recognized, caused as follows: replace --node-labels=node.kubernetes.io/node='' with --node-labels=node.kubernetes.io/node= - just delete the ''.
Note: make sure the hostnames and IP addresses in the hosts file correspond, and at step 7.2 of the document do not forget to run the kubectl create -f bootstrap.secret.yaml command.
Introduction: binary high-availability installation and deployment of kubernetes (k8s), with IPv4+IPv6 dual-stack support. I use IPv6 for access over the public internet, so I configured static IPv6 addresses. If you have no IPv6 environment, or don't want IPv6, simply don't configure IPv6 addresses on the hosts. Not configuring IPv6 does not affect the rest; the cluster still supports IPv6, leaving room for later expansion. If you don't want IPv6, just don't give the NICs IPv6 addresses; do not delete or modify the IPv6-related configuration, otherwise problems will occur. It is strongly recommended to read the documentation on GitHub; when problems appear the GitHub docs get updated, and docs for new versions are updated there first whenever possible.
Current document versions: 1.21.13, 1.22.10, 1.23.3, 1.23.4, 1.23.5, 1.23.6, 1.23.7, 1.24.0, 1.24.1, 1.24.2, 1.24.3 ... continuously updated.
Access: https://github.com/cby-chen/K...
Manual project: https://github.com/cby-chen/K... Script project: https://github.com/cby-chen/B...
Binary installation documents per version:
1.23: v1.23.3-CentOS-binary-install, v1.23.4-CentOS-binary-install, v1.23.5-CentOS-binary-install, v1.23.6-CentOS-binary-install
1.24: v1.24.0-CentOS-binary-install-IPv6-IPv4.md, v1.24.1-CentOS-binary-install-IPv6-IPv4.md, v1.24.2-CentOS-binary-install-IPv6-IPv4.md, v1.24.3-CentOS-binary-install-IPv6-IPv4.md
Three-master / two-worker versions: v1.21.13-CentOS-binary-install-IPv6-IPv4-Three-Masters-Two-Slaves.md, v1.22.10-CentOS-binary-install-IPv6-IPv4-Three-Masters-Two-Slaves.md, v1.23.7-CentOS-binary-install-IPv6-IPv4-Three-Masters-Two-Slaves.md, v1.24.0-CentOS-binary-install-IPv6-IPv4-Three-Masters-Two-Slaves.md, v1.24.1-CentOS-binary-install-IPv6-IPv4-Three-Masters-Two-Slaves.md, v1.24.1-Ubuntu-binary-install-IPv6-IPv4-Three-Masters-Two-Slaves.md
Fixing overly broad kube-proxy certificate permissions: kube-proxy_permissions.md
Initializing an IPv4/IPv6 cluster with kubeadm: kubeadm-install-IPV6-IPV4.md
Enabling IPv6 on an IPv4 cluster (disabling IPv6 works the other way round): Enable-implement-IPv4-IPv6.md
Installation packages (faster downloads): my own cloud drive: https://pan.oiox.cn/s/PetV (faster); 123 drive: https://www.123pan.com/s/Z8Ar... ...

July 14, 2022 · 1 min · jiezi

About Kubernetes: Extending Kubernetes - Kubectl Plugins

Introduction: kubectl is the essential kubernetes administration/operations tool. kubectl is extremely powerful; for common usage see kubectl --help or this article. This article first introduces a few kubectl tricks you may not know, then spends most of its space on kubectl plugins.
kubectl tricks:
- Set up auto-completion: kubectl completion zsh
- Inspect a resource SPEC (ever found yourself able to check a SPEC only by reading the API docs or the source?): kubectl explain [--recursive]
- Set aliases for frequently used commands, for example the ones I use: kns="kubectl -n kube-system", kna="kubectl --all-namespaces=true", kcc="kubectl config use-context", kgy="kubectl get -o yaml", or simply use the aliases generated by this project, which produces 800+ aliases from a set of rules.
kubectl plugin: kubectl supports a simple plugin mechanism that lets you invoke another binary through kubectl to implement kubernetes-related functionality (in fact there is no restriction at all on what the binary does). This mechanism currently passes no information between kubectl and the plugin, and it has only two requirements: the plugin must be an executable file, and the executable must be named kubectl-$plugin_name.
krew: installing locally is simple - just move the executable to e.g. /usr/local/bin and name it kubectl-$plugin_name. But how do you share your plugins and obtain plugins installed by others? kubectl provides a tool called krew (itself a plugin) for that:
Available Commands:
  help        Help about any command
  info        Show information about a kubectl plugin
  install     Install kubectl plugins
  list        List installed kubectl plugins
  search      Discover kubectl plugins
  uninstall   Uninstall plugins
  update      Update the local copy of the plugin index
  upgrade     Upgrade installed plugins to newer versions
  version     Show krew version and diagnostics
To search for plugins use kubectl krew search, but the descriptions there are brief; a better way is to browse the index page and the corresponding GitHub repositories for details.
➜ kubectl krew search
NAME           DESCRIPTION                                         INSTALLED
access-matrix  Show an RBAC access matrix for server resources     no
advise-psp     Suggests PodSecurityPolicies for cluster.           no
auth-proxy     Authentication proxy to a pod or service            no
bulk-action    Do bulk actions on Kubernetes resources.            no
ca-cert        Print the PEM CA certificate of the current clu...  no
capture        Triggers a Sysdig capture to troubleshoot the r...  no
...
To install a plugin use kubectl krew install:
➜ kubectl krew install custom-cols
Updated the local copy of plugin index.
Installing plugin: custom-cols
Installed plugin: custom-cols
| Use this plugin:
|   kubectl custom-cols
| Documentation:
|   https://github.com/webofmars/...
| Caveats:
| | The list of templates is for now limited and can be retrieved with the --help option.
| | Please feel free to submit any PR upstream (see github repo) to add more.
WARNING: You installed a plugin from the krew-index plugin repository. These plugins are not audited for security by the Krew maintainers. Run them at your own risk.
Recommended plugins:
change-ns - switches the namespace; after switching, the namespace is written into the kubeconfig, so later operations no longer need --namespace. Note that once it is set, the default namespace for subsequent commands is this value: if your yaml does not specify a namespace, resources may no longer be created in the default namespace you expected.
➜ kubectl change-ns kube-system
namespace changed to "kube-system"
cssh - ssh onto kubernetes nodes; it automatically extracts the external IP from the node information and connects with tmux to attempt an ssh login. ...

July 14, 2022 · 6 min · jiezi

About Kubernetes: K8S Notes - Hands-on with kubectl Log Verbosity and Debugging

kubectl -v or --v controls the log level. -v is followed by a number indicating the verbosity. The specific levels and their uses are as follows:
--v=0  Generally useful for information that should always be visible to an operator.
--v=1  A reasonable default log level if you don't want verbosity.
--v=2  Useful steady-state information about the service and important log messages that may correlate to significant changes in the system. This is the recommended default log level for most systems.
--v=3  Extended information about changes to system state.
--v=4  Debug-level verbosity.
--v=5  Trace-level verbosity.
--v=6  Display requested resources.
--v=7  Display HTTP request headers.
--v=8  Display HTTP request contents.
--v=9  Display HTTP request contents without truncation.
Let's look at the actual output and compare the different log levels, using kubectl get pod as an example:
kubectl get pod -A --v=0
[root@k8s-master auth]# kubectl get pod -A --v=0
NAMESPACE     NAME                                         READY   STATUS    RESTARTS        AGE
default       my-nginx-86f598f68b-f7v8r                    1/1     Running   2 (2d2h ago)    20d
default       my-nginx-86f598f68b-jbstc                    1/1     Running   1 (2d2h ago)    8d
kube-system   coredns-6d8c4cb4d-7zxqg                      1/1     Running   4 (2d2h ago)    24d
kube-system   coredns-6d8c4cb4d-g7pkr                      1/1     Running   5 (2d2h ago)    34d
kube-system   dashboard-metrics-scraper-799d786dbf-mqgzn   1/1     Running   4 (2d2h ago)    24d
kube-system   etcd-k8s-master                              1/1     Running   11 (2d2h ago)   144d
kube-system   kube-apiserver-k8s-master                    1/1     Running   12 (2d2h ago)   144d
kube-system   kube-controller-manager-k8s-master           1/1     Running   33 (23h ago)    144d
kube-system   kube-flannel-ds-km8hx                        1/1     Running   4 (2d2h ago)    34d
kube-system   kube-flannel-ds-mnz8h                        1/1     Running   5 (2d2h ago)    34d
kube-system   kube-proxy-57h47                             1/1     Running   12 (2d2h ago)   144d
kube-system   kube-proxy-q55pv                             1/1     Running   5 (2d2h ago)    144d
kube-system   kube-proxy-qs57s                             1/1     Running   5 (2d2h ago)    144d
kube-system   kube-scheduler-k8s-master                    1/1     Running   34 (23h ago)    144d
kube-system   kubernetes-dashboard-56d4dc85cb-wp4ds        1/1     Running   4 (2d2h ago)    24d
kubectl get pod -A --v=1 ...

July 12, 2022 · 16 min · jiezi

About Kubernetes: Kubernetes + GitLab CI in Practice

1. Background. With microservices now mainstream, integrating GitLab CI with kubernetes is an indispensable basic operation. In earlier lessons we systematically practiced front-end and back-end projects as well as mixed physical/K8s deployments; in this lesson we learn how GitLab CI publishes applications into K8s. We know that running gitlab-runner on a fixed server carries risk: if the server running the pipeline goes down, the release cannot continue; even worse, if the common runner fails, multiple release jobs are affected. In a microservice architecture, immutable infrastructure and the self-contained environment of containers make releases simpler and faster: you no longer need to worry about tailoring the runner environment per project, and jobs are triggered dynamically - a Pod is spun up at any time to run a Job and destroyed when it finishes. This not only saves resources through dynamic execution, it also removes the problem of concurrent builds across multiple projects and jobs. In this lesson, let's enjoy the smooth release experience K8s + GitLab CI brings us.
2. Architecture overview. Taking building a Java project and deploying it to an Alibaba Cloud Container Service for Kubernetes cluster as an example, this article explains how to run GitLab Runner on the Alibaba Cloud Kubernetes service with GitLab CI, configure a Kubernetes-type executor, and execute a pipeline.
2.1 GitLab CI flow diagram. 2.2 Flow in detail. The figure above shows a simple flow of GitLab CI deploying into K8s. As with the CI integration discussed before, the project only needs a .gitlab-ci.yml file. The difference when integrating with kubernetes is that our gitlab-runner runs inside K8s as a Pod and drives the execution of the stages in subsequent pipelines. As you can see, when a pipeline has multiple stages, each stage is executed by a separate Pod, and the image that Pod uses is defined in the image field of each stage in .gitlab-ci.yml. The flow for releasing a Java project:
- A developer or project maintainer opens a merge request to a specific branch; the branch contains a CI file, so CI starts.
- The CI job is dispatched by the gitlab-runner that is registered with the gitlab server and runs inside the K8s cluster:
  - The first stage is package: the Java project is packaged with a maven image, producing a war artifact into a cache directory;
  - A docker-image stage builds an image from the cached artifact according to the project's Dockerfile, then logs in to the image registry and pushes it;
  - Here we keep the deployment files inside the project, reflecting the GitOps idea of managing configuration together with the project. The image address in deployment.yaml is updated to the image just built and pushed, then applied to k8s, and the image built from the Java project runs in the K8s cluster, completing the whole release.
Things to note in this flow: you can add stages your business needs, such as unit tests or code scanning; for deployment you can package the whole project as a Helm chart, replace the image in it, and deploy the whole application with Helm; since each stage uses a different image, artifacts such as war/jar packages must be passed between stages through external storage used as a cache.
3. Advantages. From the GitLab CI flow above we can see that running gitlab runner in a K8s cluster, with each Job started in a separate Pod, makes full use of K8s:
- High availability: when a node fails, Kubernetes automatically creates a new GitLab-Runner container and mounts the same Runner configuration, keeping the service highly available.
- Elastic scaling: jobs are triggered on demand and resources are used sensibly; each time a script job runs, Gitlab-Runner automatically creates one or more new temporary Runners to run the Job.
- Maximum resource utilization: Pods are created dynamically to run Jobs and resources are released automatically; Kubernetes schedules temporary Runners onto idle nodes based on per-node resource usage, avoiding jobs queuing on a node whose utilization is already high.
- Good scalability: when the Kubernetes cluster is badly short of resources and temporary Runners queue up, you can simply add a Kubernetes Node to the cluster to scale out.
If your business already runs on K8s, GitLab CI fits your scenario perfectly: you only need to customize the stages you need in gitlab-ci.yml and, following GitOps, keep the configuration in the project, maintained and managed together with it. This gives an end-to-end CI workflow, makes operations traceable through git, and improves efficiency for agile development, release and deployment.
4. Hands-on. Having understood the GitLab CI and Kubernetes integration and its advantages above, let's go through the flow in more detail in practice.
4.1 Prerequisites: a GitLab server, deployed either on a physical server or inside a K8s cluster; a K8s cluster, which can be a public-cloud container engine such as Alibaba ACK, Tencent TKE or Huawei CCE - these approaches all apply; because installing gitlab-runner by hand is fairly involved, the example installs it with helm (helm v2.14.3); if you prefer, you can write the resource manifests yourself.
4.1.1 Record the registration info: log in to the GitLab server and note the gitlab URL and registration token; the gitlab-runner deployed into K8s needs this information in its configuration - the Pod running in K8s uses it to register with the gitlab server.
4.1.2 Get gitlab-runner: since writing the manifests for a standalone gitlab-runner in K8s is difficult and error-prone, we use the official chart via helm, modifying only the fields we care about. First add the gitlab-runner helm repo on the K8s cluster, then download the chart locally, extract it and edit the values.yml file.
Add the repo and fetch the chart:
[root@master common-service]# helm repo add gitlab https://charts.gitlab.io
"gitlab" has been added to your repositories
[root@master common-service]# helm repo update
Hang tight while we grab the latest from your chart repositories...
...Skip local chart repository
...Successfully got an update from the "aliyun" chart repository
...Successfully got an update from the "apphub" chart repository
...Successfully got an update from the "gitlab" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete.
[root@master common-service]# helm search gitlab-runner
NAME                  CHART VERSION  APP VERSION  DESCRIPTION
gitlab/gitlab-runner  0.14.0         12.8.0       GitLab Runner ...

July 8, 2022 · 2 min · jiezi

关于kubernetes:K8S-笔记-kubectl-命令自动补全

kubectl 作为 Kubernetes 的命令行工具(CLI),是 Kubernetes 用户日常应用和管理员日常治理必须把握的工具。kubectl 提供了大量的子命令,用于 Kubernetes 集群的治理和各种性能的实现。 kubectl 提供了如下帮忙命令:

kubectl -h 查看子命令列表
kubectl options 查看全局选项
kubectl <command> --help 查看子命令的帮忙
kubectl [command] [PARAMS] -o=<format> 设置输入格局(如 json、yaml、jsonpath 等)
kubectl explain [RESOURCE] 查看资源的定义

以上办法尽管具体,但不够快捷。 本文提供了 kubectl 命令主动补全的配置办法,能够帮忙你更加疾速地获取本人想要执行的命令。具体方法如下:

装置 bash-completion:
yum install -y bash-completion

执行 source 命令:
source /usr/share/bash-completion/bash_completion

如果想让零碎中的所有用户都能领有命令补全的性能,则执行如下命令:
kubectl completion bash | sudo tee /etc/bash_completion.d/kubectl > /dev/null

如果只须要以后用户领有命令主动补全性能,则执行如下命令:
echo 'source <(kubectl completion bash)' >> ~/.bashrc
source ~/.bashrc

验证主动补全的成果(双击 Tab 键):

[root@k8s-master ~]# kubectl
alpha          attach        completion   debug      edit      help        patch          rollout   top
annotate       auth          config       delete     exec      kustomize   plugin         run       uncordon
api-resources  autoscale     cordon       describe   explain   label       port-forward   scale     version
api-versions   certificate   cp           diff       expose    logs        proxy          set       wait
apply          cluster-info  create       drain      get       options     replace        taint
[root@k8s-master ~]#
[root@k8s-master ~]# kubectl create
clusterrole         cronjob      job                   priorityclass   rolebinding   serviceaccount
clusterrolebinding  deployment   namespace             quota           secret
configmap           ingress      poddisruptionbudget   role            service
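
如果习惯把 kubectl 缩写为 k,可以在上述配置之后再加两行,让别名同样获得自动补全(以下以 bash 为例;zsh 用户可改用 kubectl completion zsh):

echo 'alias k=kubectl' >> ~/.bashrc
echo 'complete -o default -F __start_kubectl k' >> ~/.bashrc
source ~/.bashrc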

July 5, 2022 · 1 min · jiezi

关于kubernetes:扩展你的KUBECTL功能

随着 Kubernetes 成为支流的利用容器编排平台,其命令行客户端 kubectl 也成为了咱们日常部署利用,保护集群最罕用的工具。 kubectl 本身提供了弱小的内置自命令来满足咱们对集群的操作,例如 get 获取集群内的资源对象,proxy 创立代理之类的,除了内置的这些自命令,kubectl 还提供了可扩大的能力,容许咱们装置本人编写或者社区提供的插件来加强咱们应用 kubectl 的生产力。 这里将给大家介绍如何在装置 kubectl 扩大插件,以及几款我在日常工作中罕用到的社区提供的插件。 在装置和应用 kubectl 插件的之前,请确保以及装置和配置好 kubectl 命令行工具和 git 工具。 krew首先介绍的第一款扩大插件就是 krew - k8s特地兴趣小组开发的一款用于装置和治理 kubectl 扩大插件的插件。 代码: https://github.com/kubernetes... 装置 krew (在macOS/Linux上): 在终端执行(Bash或者Zsh)执行 ( set -x; cd "$(mktemp -d)" && OS="$(uname | tr '[:upper:]' '[:lower:]')" && ARCH="$(uname -m | sed -e 's/x86_64/amd64/' -e 's/\(arm\)\(64\)\?.*/\1\2/' -e 's/aarch64$/arm64/')" && KREW="krew-${OS}_${ARCH}" && curl -fsSLO "https://github.com/kubernetes-sigs/krew/releases/latest/download/${KREW}.tar.gz" && tar zxvf "${KREW}.tar.gz" && ./"${KREW}" install krew)将 $HOME/.krew/bin 退出到 PATH 环境变量,更新你的 .bashrc 或者 .zshrc 文件,增加上面一行 ...
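
装好 krew 并按上文把 $HOME/.krew/bin 加入 PATH 之后,就可以用它安装和管理其他插件了。以下以社区常用的 ctx、ns 两个插件为例(插件名可先用 kubectl krew search 确认):

kubectl krew update              # 更新插件索引
kubectl krew install ctx ns      # 安装切换上下文 / 命名空间的插件
kubectl ctx                      # 列出并切换集群上下文
kubectl ns                       # 列出并切换命名空间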

July 4, 2022 · 6 min · jiezi

关于kubernetes:K8S-生态周报-Ingress-NGINX-项目暂停接收新功能将专注于稳定性提升

「K8S 生态周报」内容次要蕴含我所接触到的 K8S 生态相干的每周值得举荐的一些信息。欢送订阅专栏「k8s生态」。题外话大家好,我是张晋涛。 本次我将这个局部放在结尾。聊聊最近的一些状况。 「K8S 生态周报」暂停了 2 个多月的工夫,期间也有小伙伴来催更,感激大家的继续关注!!! 次要有两方面的起因,一是我近期的确比较忙,另一方面是我进行了一些思考和总结,分享给大家。 「K8S 生态周报」从 2019 年的 3 月开始,到当初曾经是第四年了,我也始终在思考,它能为我,还有关注我的读者小伙伴们带来什么。 对我而言,这是一个总结归档,分享反馈的过程,在此期间我也成长了很多。 我比拟开心的事件是,相比于其他人/其余社区发的日报,周报等,「K8S 生态周报」并不单纯的是在搬运链接,或者搬运 ChangeLog,在每期的内容中,除去资讯自身外,我也会减少我的一些集体认识,还有我所理解到的一些其余内容,包含一些背景常识等。此外,还会包含一些代码的剖析/性能的实际和比照等。能够说「K8S 生态周报」是更有技术性的内容。 基于以上的一些剖析和集体的一些思考,我决定后续「K8S 生态周报」中将退出更多我集体的思考的了解,在提供这些有价值的资讯的同时,与小伙伴们减少更多的交换和沟通。 Ingress NGINX 我的项目暂停接管新性能将专一于稳定性晋升相熟我的小伙伴可能晓得,我是 Kubernetes Ingress NGINX 我的项目的 maintainer 。 通过咱们开发团队的长时间探讨,咱们发现 Kubernetes Ingress NGINX 我的项目自 2016 年到当初曾经走过了 6 年工夫,在这 6 年的工夫里,在 GitHub 上达到了 13K star,同时也有 800+ 位 Contributor 参加奉献此我的项目,同时也收到了 4000+ 的 Issue 反馈,以及 4000+ 的 PR 。 在这个过程中,Ingress NGINX 我的项目的性能失去了极大的丰盛,但作为一个软件,不可避免的也会有各种 bug,破绽等存在。目前对于此我的项目来说,大家会在须要某些性能的时候疾速的去实现它(感激大家奉献的 PR),然而当呈现 bug 或破绽的时候,却很少有人来修改它。(在开源我的项目中,这是一个广泛状况,修改 bug 或破绽,相比于减少新性能,须要对我的项目本身更加的相熟) 这种状况实际上为维护者们减少了很多累赘,咱们须要把工夫放在解决 issue,增加和 review 新性能的 PR,以及进行 bug 和破绽修改,以及思考新性能是否可能会带来一些连锁反应等。 ...

July 4, 2022 · 2 min · jiezi

关于kubernetes:合集-行业解决方案如何搭建高性能的数据加速与数据编排平台-Alluxio

在2022年过来的半年工夫里,Alluxio一共做过30局面向客户、用户、粉丝、关注者的直播分享。 这30场分享中,咱们每1期都会精心布局、定向邀请嘉宾,其中有来自一线大厂的实战者,有来自Alluxio的嘉宾。内容涵盖【金融】【互联网&科技】【电信】【电商】【出行】【人工智能】等热门行业。 30场直播中咱们播种了很多反馈,但最多的还是征询哪里能够【看回放】,明天小编就给大家精选了这份【回放资源清单】(按播放排名精选),此处值得一键珍藏! 戳顶部企业名称查看往期课程回放 【金融行业】| 兴业银行 【互联网&科技】| 腾讯 | Bilibili | Bilibili | Bilibili | 网易 | Momo | Kyligence 【电信】| 中国联通 【电商】| Shopee | 唯品会 【出行行业】| 文远知行 | Uber 【人工智能】| 云知声 【Alluxio嘉宾分享】| Alluxio | Alluxio | Alluxio

June 30, 2022 · 1 min · jiezi

关于kubernetes:k8s常用命令

KUBECONFIG=config kubectl get pod -n kube-system -w
命令阐明:前提条件是机器上须要装置kubectl,config指的是k8s配置文件

curl -H "Authorization: Bearer $token" https://127.0.0.1:8080/api/v1/namespaces/default/pods
命令阐明:curl获取k8s的pod列表
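
上述 curl 命令中的 $token 需要事先取得。在 Kubernetes 1.24 及之后的版本,可以直接为某个 ServiceAccount 签发临时 token,以下是一个示意(以 default 命名空间的 default ServiceAccount 为例;该账号需要有列出 Pod 的 RBAC 权限,否则会返回 403,API Server 地址与端口请按实际环境调整):

token=$(kubectl create token default)
curl -k -H "Authorization: Bearer $token" https://<apiserver地址>:6443/api/v1/namespaces/default/pods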

June 30, 2022 · 1 min · jiezi

关于kubernetes:KubeSpace最简流水线之发布

KubeSpace是一个开源的DevOps以及Kubernetes多集群治理平台。 Github:https://github.com/kubespace/... DevOps Kubernetes多集群治理平台-KubeSpace(零)KubeSpace之利用治理(一)KubeSpace之利用商店(二)KubeSpace最简流水线之构建(三)KubeSpace最简流水线之部署(四)KubeSpace最简流水线之公布(五)筹备Git代码仓库当初咱们有一个很简略的golang http服务,代码托管在Github。 本地启动golang服务,端口为8000: go run main.go申请 /v2/current_time 接口返回以后工夫: curl http://127.0.0.1:8000/v2/current_timeHELLO, current time: 2022-05-09 21:49:37利用在KubeSpace中创立一个「生产环境」的工作空间,绑定「local」集群中的「prod」命名空间。 并将「测试环境-1」中的go-app利用克隆到「生产环境」。 克隆之后,编辑go-app利用,将service中的NodePort端口改为「30088」。 在「生产环境」中装置go-app利用,装置后go-app利用以后的镜像为「registry.cn-hangzhou.aliyuncs.com/librrary/go-app:1652103773」。 此时,go-app利用运行在「生产环境」中,且服务失常。 curl http://10.240.163.1:30088/v2/current_timeHELLO, current time: 2022-05-10 13:29:21骨干流水线在KubeSpace平台中有一个go-app的代码空间以及骨干流水线。具体可参考[KubeSpace最简流水线之构建]()。 公布编辑骨干流水线在go-app代码流水线空间中,对骨干流水线进行编辑。 在骨干流水线中,代码库源默认触发分支为「master」,且默认有两个阶段「构建代码镜像」以及「公布」。 对「构建代码镜像」中的工作进行批改。 减少阶段「生产环境部署」,并在该阶段下减少「部署go-app」工作。 其中,工作插件抉择「利用部署」,工作空间抉择「生产环境」,利用抉择「go-app」。 确定之后,点击右上角「保留」对骨干流水线进行保留。 执行流水线骨干流水线编辑实现,进入到骨干流水线的构建页面。 点击「构建」按钮,输出「master」分支之后,会主动开始执行骨干流水线。 等「构建代码镜像」执行实现之后,在「公布」阶段会暂停执行,后续阶段须要人工触发执行。 点击「公布」阶段中的「执行」按钮,会要求输出本次公布的版本号,默认第一次公布版本号为「1.0.0」。 咱们默认以「1.0.0」做为本次公布的版本号,点击「执行」按钮,会持续开始后续的阶段执行。 期待1分钟左右,工作会执行胜利。 查看公布工作的日志,咱们能够看到会对以后代码commit id 「83f1fea」打标签,并对构建进去的镜像打「1.0.0」的标签,并推送镜像到仓库。 公布实现之后,会主动给代码仓库打上公布的版本号。 同时,会主动将镜像部署到「生产环境」中的go-app利用。 检查一下go-app运行是否失常。 curl http://10.244.0.145:8000/v2/current_timeHELLO, current time: 2022-05-10 14:27:57OK,出工! 交换沟通如果您在应用过程中,有任何问题、倡议或性能需要,欢送随时跟咱们交换或提交issue。 能够在官网扫描QQ二维码,退出咱们的QQ交换群。

June 25, 2022 · 1 min · jiezi

关于kubernetes:KubeSpace最简流水线之部署

KubeSpace是一个开源的DevOps以及Kubernetes多集群治理平台。 Github:https://github.com/kubespace/... DevOps Kubernetes多集群治理平台-KubeSpace(零)KubeSpace之利用治理(一)KubeSpace之利用商店(二)KubeSpace最简流水线之构建(三)KubeSpace最简流水线之部署(四)KubeSpace最简流水线之公布(五)筹备Git代码仓库当初咱们有一个很简略的golang http服务,代码托管在Github。 本地启动golang服务,端口为8000: go run main.go申请 /v2/current_time 接口返回以后工夫: curl http://127.0.0.1:8000/v2/current_timeHello, current time: 2022-05-09 21:49:37利用在KubeSpace平台中的「测试环境-1」工作空间中创立一个go-app利用。具体可参考KubeSpace之利用治理。 分支流水线在KubeSpace平台中创立一个go-app的代码空间以及分支流水线。具体可参考KubeSpace最简流水线之构建。 利用主动部署编辑分支流水线在go-app代码流水线空间中,对分支流水线进行编辑。 在流水线中,点击最左边的「+」,新增一个「部署利用」阶段。 在「部署利用」阶段中新增「部署go-app」工作,其中工作插件抉择「利用部署」插件。 在「利用部署」插件中,工作空间抉择「测试环境-1」,利用抉择「go-app」,以及「是否部署」默认选中。 阶段工作增加实现后,需点击右上角「保留」按钮,对分支流水线进行保留。 执行流水线在分支流水线构建列表,咱们能够看到最近一次构建的历史记录,以及构建的代码提交信息。 如上,能够看到最近一次的代码提交id是「5eb807b」。 咱们当初对代码又有了一个新的提交,提交id是「83f1fea」。 在新的提交中,咱们将 /v2/current_time 接口返回的「Hello」批改为「HELLO」。 curl http://127.0.0.1:8000/v2/current_timeHELLO, current time: 2022-05-09 21:49:37当初对分支流水线执行构建,点击「构建」,并输出「master」分支,确定之后,会开始主动执行分支流水线。 如上,能够看到最新的构建代码提交id是「83f1fea」,正是咱们最新的提交。而且流水线中也多了一个「部署利用」的阶段。 期待1分钟左右,流水线会主动执行实现。 查看部署利用的工作日志,能够看到会主动将「构建代码镜像」产出的镜像「registry.cn-hangzhou.aliyuncs.com/librrary/go-app:1652103773」更新到「go-app」利用中,并主动进行降级部署。 进入「测试环境-1」工作空间的利用中,查看go-app的利用详情。 如上,能够看到go-app的利用镜像曾经更新为「registry.cn-hangzhou.aliyuncs.com/librrary/go-app:1652103773」。 咱们拜访利用的 /v2/current_time 接口,看是否更新胜利: curl http://10.244.0.141:8000/v2/current_timeHELLO, current time: 2022-05-09 13:59:17就是如此简略! OK,出工! 交换沟通如果您在应用过程中,有任何问题、倡议或性能需要,欢送随时跟咱们交换或提交issue。 能够在官网扫描QQ二维码,退出咱们的QQ交换群。

June 25, 2022 · 1 min · jiezi

关于kubernetes:KubeSpace最简流水线之构建

KubeSpace是一个开源的DevOps以及Kubernetes多集群治理平台。 Github:https://github.com/kubespace/... DevOps Kubernetes多集群治理平台-KubeSpace(零)KubeSpace之利用治理(一)KubeSpace之利用商店(二)KubeSpace最简流水线之构建(三)KubeSpace最简流水线之部署(四)KubeSpace最简流水线之公布(五)筹备Git代码仓库当初咱们有一个很简略的golang http服务,代码托管在Github。 本地启动golang服务,端口为8000: go run main.go申请http://127.0.0.1:8000/current...返回以后工夫: curl http://127.0.0.1:8000/current_timeHello, current time: 2022-05-08 11:11:33.632898 +0800 CST m=+18.871691849Git私钥个别拜访git的私钥在本地文件~/.ssh/id_rsa中。若不是该文件,请提前准备好。 代码流水线空间增加密钥在KubeSpace中的「平台配置」-「密钥治理」中,点击「+ 创立密钥」,增加筹备好的git私钥。 增加镜像仓库在KubeSpace中的「平台配置」-「镜像仓库」中,点击「+ 增加仓库」,增加镜像仓库,输出用户明码。 这里,我增加了一个「registry.cn-hangzhou.aliyuncs.com」阿里云的镜像仓库。如果是docker hub,则增加「docker.io」即可。 创立代码空间在KubeSpace流水线中,点击「+ 创立空间」。 如上,抉择代码类型为「GIT」,输出git仓库地址,并抉择刚刚增加的密钥。 点击「确定」,创立代码流水线空间之后,默认会创立两条流水线:分支流水线、骨干流水线。在两条流水线中都默认包含一个「构建代码镜像」的阶段工作,然而在骨干流水线中会多一个「公布」的阶段工作。 构建代码镜像编辑分支流水线在流水线列表,点击分支流水线的「编辑」按钮,来配置流水线的阶段工作。 如上,次要包含「根本信息」与「阶段工作」两个局部。 因为以后go-app的代码库只有一个master分支,所以在阶段工作中的「代码库源」中,须要配置触发分支为所有。默认分支流水线是排除master分支的。 而后,须要批改「构建代码镜像」的工作,点击「构建代码镜像」上方的圆圈进行配置。 如上,首先对代码进行编译配置,若不须要编译,则将「编译」勾销即可。 若须要编译,则抉择编译镜像,KubeSpace默认会自带一些各个语言如golang、node等镜像,如不满足,能够在「资源管理」中进行增加。 而后编译形式分为脚本文件与自定义脚本。脚本文件是在代码库中的编译脚本,需指定绝对目录;自定义脚本则在下方的编译脚本中输出编译相干命令即可。 编译实现之后,会对编译后的代码库进行镜像构建。首先须要抉择要推送的镜像仓库,这里抉择咱们刚刚增加的「registry.cn-hangzhou.aliyuncs.com」镜像仓库。 之后输出构建镜像的Dockerfile以及镜像名称。留神:镜像名称不须要填写标签,在构建镜像时会主动增加动静标签。 执行流水线对分支流水线编辑实现之后,就能够对执行构建流水线了。 如上,点击「构建」按钮。 构建分支输出「master」分支,点击「确定」之后,开始执行分支流水线配置的工作。 点击「#1」,能够查看以后执行工作的日志以及阶段信息。 如上,「构建代码镜像」工作执行实现之后,会构建出go-app代码镜像,并推送到镜像仓库。 手动降级利用在流水线构建出代码镜像之后,如上咱们构建出go-app的镜像「registry.cn-hangzhou.aliyuncs.com/librrary/go-app:1652021791」。能够在工作空间的利用中,手动对go-app的利用进行镜像降级。 如上,咱们将go-app利用的标签降级为流水线构建进去的镜像标签。 点击go-app,能够查看利用的降级过程。 能够看到,以后利用的镜像曾经替换为最新的镜像标签,且有新的Pod实例正在创立。 是不是很简略! OK,出工! 交换沟通如果您在应用过程中,有任何问题、倡议或性能需要,欢送随时跟咱们交换或提交issue。 能够在官网扫描QQ二维码,退出咱们的QQ交换群。

June 25, 2022 · 1 min · jiezi

关于kubernetes:KubeSpace之应用商店

KubeSpace是一个开源的DevOps以及Kubernetes多集群治理平台。 Github:https://github.com/kubespace/... DevOps Kubernetes多集群治理平台-KubeSpace(零)KubeSpace之利用治理(一)KubeSpace之利用商店(二)KubeSpace最简流水线之构建(三)KubeSpace最简流水线之部署(四)KubeSpace最简流水线之公布(五)介绍KubeSpace平台的利用商店内置了丰盛的中间件(如mysql、redis等)以及集群组件。来疾速反对您的业务部署。 利用商店中的利用底层是通过Helm Chart来实现的。当然,若内置的不满足需要,能够导入自定义利用。 点击每个利用,能够查看该利用的版本列表,并下载对应版本的chart。 导入利用在咱们的「测试环境-1」工作空间中,须要部署一套nginx来负载用户的流量。咱们能够导入利用商店中的nginx来疾速部署。 进入「测试环境-1」工作空间,左侧导航栏点击「利用」,在列表页,点击「导入利用」按钮。 在弹出框中,抉择「nginx」利用以及版本。 导入之后,须要进行装置,对nginx点击「装置」按钮后,能够批改nginx利用的helm charts中「values」配置。其中能够批改比方正本数、镜像、资源限额、是否开启ingress等配置。 装置之后,期待「nginx」利用运行失常, 进入「nginx」资源详情页面,点击Pod终端。 拜访该Pod IP,端口为8080。 是不是狠不便!! 公布到利用商店当初咱们有一个go-app的利用,想让所有人都能够不便的装置部署,那么咱们能够将其公布到利用商店。这样,其他人想用的话,能够间接导入对应的工作空间,疾速部署到其环境。 在利用列表页,点击go-app的「更多操作」中的「公布」按钮。 如上,会默认将以后利用的最新版本或运行版本公布到利用商店。 公布之后,会在利用商店看到刚刚公布的go-app利用。 导入自定义利用当初咱们有一个曾经在应用的helm chart,KubeSpace反对导入本人的helm chart到利用商店。 在利用商店中,点击「+ 导入利用」,在弹出框中,首先须要上传helm chart。 留神:需上传helm chart的tgz文件。 上传chart tgz文件之后,会解析chart中的利用名称、版本以及形容等。 如上,导入了「testapp」这个helm chart,输出该chart的版本阐明之后,间接导入即可。 后续咱们就能够间接在工作空间中导入部署「testapp」了。 OK,出工! 交换沟通如果您在应用过程中,有任何问题、倡议或性能需要,欢送随时跟咱们交换或提交issue。 能够在官网扫描QQ二维码,退出咱们的QQ交换群。
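
上文要求上传 helm chart 的 tgz 文件。如果手头只有 chart 目录,可以先用 helm 自带的命令打包,以文中的 testapp 为例:

helm lint ./testapp        # 打包前先做一次静态检查
helm package ./testapp     # 在当前目录生成 testapp-<版本号>.tgz,版本号取自 Chart.yaml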

June 25, 2022 · 1 min · jiezi

关于kubernetes:KubeSpace之应用管理

KubeSpace是一个开源的DevOps以及Kubernetes多集群治理平台。 Github:https://github.com/kubespace/... DevOps Kubernetes多集群治理平台-KubeSpace(零)KubeSpace之利用治理(一)KubeSpace之利用商店(二)KubeSpace最简流水线之构建(三)KubeSpace最简流水线之部署(四)KubeSpace最简流水线之公布(五) 筹备Golang服务当初咱们有一个很简略的golang http服务,代码托管在github。 本地启动golang服务,端口为8000: go run main.go申请http://127.0.0.1:8000/current...返回以后工夫: curl http://127.0.0.1:8000/current_timeHello, current time: 2022-05-08 11:11:33.632898 +0800 CST m=+18.871691849构建镜像编译golang代码: CGO_ENABLED=0 LD_FLAGS=-s GOOS=linux go build -o go-app在代码库中有Dockerfile,构建服务镜像,并推送到镜像仓库 docker build -t registry.cn-hangzhou.aliyuncs.com/librrary/go-app:v1 .docker push registry.cn-hangzhou.aliyuncs.com/librrary/go-app:v1在装有docker的服务器运行镜像 docker run -it --rm -p 8000:8000 registry.cn-hangzhou.aliyuncs.com/librrary/go-app:v1创立利用在KubeSpace平台创立利用非常简单。 创立一个测试环境首先,创立一个工作空间「测试环境-1」,工作空间绑定一个K8s集群中的命名空间,通过命名空间来隔离不同环境的资源。如上,「测试环境-1」绑定 local 集群中的 test-1 命名空间。 创立go-app利用进入到「测试环境-1」工作空间,进入「利用治理」,点击「创立利用」按钮。 创立利用分为「根本信息」跟「利用配置」两局部。 在根本信息输出如下: 在利用配置中只须要输出镜像即可: 其它配置还包含: 容器组:能够增加多个容器,容器中包含镜像、启动命令、资源配额等配置;存储:增加内部存储到工作负载,并挂载到容器中,包含PVC、HostPath、EmptyDir、ConfigMap、Secret、NFS、GlusterFS等;网络:配置DNS策略,是否应用宿主机网络、PID,以及自定义域名等;调度:工作负载的调度策略,包含指定节点标签、污点容忍、节点亲和性以及Pod亲和反亲和等;平安:能够对工作负载中的容器限度用户运行以及sysctl配置等。在右上角点击「保留」按钮,会保留以后利用配置为一个新的版本:至此,一个go-app的利用很快创立实现。 装置利用go-app利用创立胜利之后,在利用列表能够进行装置。 点击「装置」之后,还能够抉择「利用版本」以及对「利用镜像」、「镜像标签」进行批改。 装置之后,在列表页点击「利用名称」,能够查看利用的资源详情、容器日志、进入容器Shell等。 能够看到,go-app的Pod曾经运行失常,拜访Pod IP,看服务是否失常: curl http://10.244.0.131:8000/current_timeHello, current time: 2022-05-08 04:23:22.316105117 +0000 UTC m=+219.645432028对外拜访当初,咱们曾经有一个在K8s集群内运行的go-app服务了。然而集群内部拜访不到,能够通过以下两种形式: ...

June 25, 2022 · 1 min · jiezi

关于kubernetes:DevOps-Kubernetes多集群管理平台KubeSpace

KubeSpace是一个开源的DevOps以及Kubernetes多集群治理平台。 Github:https://github.com/kubespace/... DevOps Kubernetes多集群治理平台-KubeSpace(零)KubeSpace之利用治理(一)KubeSpace之利用商店(二)KubeSpace最简流水线之构建(三)KubeSpace最简流水线之部署(四)KubeSpace最简流水线之公布(五)介绍KubeSpace能够兼容不同云厂商的Kubernetes集群,极大的不便了集群的管理工作。KubeSpace平台以后包含如下性能: 集群治理:Kubernetes集群原生资源的治理;工作空间:以环境(测试、生产等)以及利用为视角的工作空间治理;流水线:通过多种工作插件反对CICD,疾速公布代码并部署到不同的工作空间;利用商店:内置丰盛的中间件(mysql、redis等),以及反对导入公布自定义利用;平台配置:密钥、镜像仓库治理,以及不同模块的权限治理。装置通过helm装置kubespace,执行如下命令: helm repo add kubespace https://kubespace.cn/chartshelm install kubespace -n kubespace kubespace/kubespace --create-namespace装置之后,查看所有Pod是否运行失常: kubectl get pods -n kubespace -owide -w当所有Pod运行失常后,通过如下命令查看浏览器拜访地址: export NODE_PORT=$(kubectl get -n kubespace -o jsonpath="{.spec.ports[0].nodePort}" services kubespace)export NODE_IP=$(kubectl get nodes -o jsonpath="{.items[0].status.addresses[0].address}")echo http://$NODE_IP:$NODE_PORT降级通过helm降级kubespace,执行如下命令: helm repo updatehelm upgrade -n kubespace kubespace kubespace/kubespace应用阐明1. 首次登录在KubeSpace第一次登录时,会要求输出admin超级管理员的明码,而后以admin帐号登录。 2. 导入集群首次登录之后,默认会将以后集群增加到平台。 您还能够增加其它集群到平台,点击「增加集群」,输出集群名称,集群增加之后,会提醒将Kubernetes集群导入连贯到KubeSpace平台。 在Kubernetes集群中应用上述的kubectl命令部署agent服务,将集群连贯导入到KubeSpace平台。 期待几分钟后,查看agent服务是否启动。 kubectl get pods -n kubespace 能够看到agent服务的pod曾经是Running状态,在KubeSpace平台能够看到集群状态为Connect。 3. 集群治理将Kubernetes集群胜利连贯导入到KubeSpace平台之后,就能够对立治理集群中的资源了。 4. 工作空间在工作空间,能够创立多个环境,绑定不同集群的namespace,来隔离利用以及资源。 在每个空间中,能够创立利用或导入利用商店中的利用,并进行装置/降级。 5. 利用商店KubeSpace平台内置了丰盛的中间件,能够疾速导入到工作空间,并装置应用。同时也能够导入/公布本人的利用到利用商店。 ...

June 24, 2022 · 1 min · jiezi

关于kubernetes:详解kubernetes备份恢复利器-Velero-深入了解Carina系列第三期

Carina 是由博云主导并发动的云原生本地存储我的项目(GitHub 地址为:https://github.com/carina-io/...),目前曾经进入 CNCF 全景图。 Carina 能够为云原生环境中的有状态利用提供高性能、免运维的本地存储解决方案,具备存储卷生命周期治理、LVM/RAW盘供给、智能调度、RAID治理、主动分层等能力,旨在为云原生有状态服务提供极低提早、免运维、懂数据库的数据存储系统。Carina 作为博云容器云平台的组件之一,曾经在多个金融机构的生产环境中稳固运行多年。 传统的数据备份计划次要有两种, 一种是利用存储数据的服务端实现基于快照的备份,另一种是在每台指标服务器上部署专有备份 agent 并指定备份数据目录,定期把数据复制到内部存储上。这两种形式的备份机制绝对固化,在云原生时代无奈适应容器化后的弹性、池化等部署场景。 以云原生存储插件 Carina 为例,数据库等数据敏感场景中每个数据库集群包含多个计算实例,实例可能在集群内任意漂移并实现主动故障复原。传统数据备份形式在数据库集群疾速扩缩容、跨节点漂移等场景下无奈主动追随计算实例迁徙从而导致数据备份生效,因而一款贴合 k8s 容器场景的备份工具就非常重要。 Kubernetes备份复原利器:veleroVelero 是一款云原生时代的劫难复原和迁徙工具,采纳 Go 语言编写,并在 github 上进行了开源,开源地址为:https://github.com/vmware-tan...。Velero 源于西班牙语,意思为帆船,十分合乎 Kubernetes 社区的命名格调。 利用 velero 用户能够平安的备份、复原和迁徙 Kubernetes 集群资源和长久卷。它的基本原理就是将集群的数据,例如集群资源和长久化数据卷备份到对象存储中,在复原的时候将数据从对象存储中拉取下来。除了灾备之外它还能做资源移转,反对把容器利用从一个集群迁徙到另一个集群,这也是 velero 一个十分胜利的应用场景。 Velero 次要包含两个外围组件,别离为服务端和客户端。服务端运行在具体的 Kubernetes 集群中,客户端是运行在本地的命令行工具,只有配置好 kubectl 及 kubeconfig 即可应用,非常简单。 Velero 基于其实现的 kubernetes 资源备份能力,能够轻松实现 Kubernetes 集群的数据备份和复原、复制 kubernetes 集群资源到其余 kubernetes 集群或者疾速复制生产环境到测试环境等性能。 在资源备份方面,velero 反对将数据备份到泛滥的云存储中,例如AWS S3或S3兼容的存储系统、Azure Blob、Google Cloud存储、Aliyun OSS等。与备份整个 kubernetes 的数据存储引擎 etcd 相比,velero 的管制更加细化,能够对 Kubernetes 集群内对象级别进行备份,还能够通过对 Type、Namespace、Label 等对象进行分类备份或者复原。 Velero工作流程以外围的数据备份为例,当执行velero backup create my-backup时: ...
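
原文在介绍 velero backup create 的执行流程处被截断。从使用者角度看,一次典型的备份与恢复大致只需要几条命令,以下仅作示意(命名空间与备份名为示例,具体参数以所装 velero 版本的帮助信息为准):

velero backup create my-backup --include-namespaces default   # 只备份 default 命名空间
velero backup get                                              # 查看备份状态
velero restore create --from-backup my-backup                  # 从该备份恢复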

June 24, 2022 · 5 min · jiezi

关于kubernetes:clientgo-gin的简单整合九Create

背景:实现了后面一些简略list-watch的demo,这里开始进一步实现crud的基本操作,就从create开始了。这里从create namespace deployment pod service作一个简略的利用列举 create namespace对于namespace后面做过list的利用:client-go list namespace,/src/service/Namespace.go文件如下: package serviceimport ( "context" "github.com/gin-gonic/gin" . "k8s-demo1/src/lib" metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" "time")type Time struct { time.Time `protobuf:"-"`}type Namespace struct { Name string CreateTime Time `json:"CreateTime"` Status string Labels map[string]string}func ListNamespace(g *gin.Context) { ns, err := K8sClient.CoreV1().Namespaces().List(context.Background(), metav1.ListOptions{}) if err != nil { g.Error(err) return } ret := make([]*Namespace, 0) for _, item := range ns.Items { ret = append(ret, &Namespace{ Name: item.Name, CreateTime: Time(item.CreationTimestamp), Status: string(item.Status.Phase), Labels: item.Labels, }) } g.JSON(200, ret) return}创立一个namespace当初要创立一个create 创立命名空间的办法!最终如下: ...

June 21, 2022 · 6 min · jiezi

关于kubernetes:Nocalhost-让云原生时代的开发更高效

01 痛点在咱们团队将产品的部署状态迁徙到kubernetes上之后,在研发过程中,开发和联调代码的过程十分的苦楚,决定寻找在k8s环境下的云原生时代的买通本地和开发环境的解决方案。 回顾一下,过来的开发过程,咱们的过程一共有2种模式,即k8s无关过程和依赖了k8s的过程。 02 剖析- k8s无关过程 - 因为我的项目依赖了oracle连贯库,装置oracle驱动,在团队中既有window,unbuntu,和mac的状况下,团队成员的本地开发环境的搭建非常复杂过程依赖的环境变量十分多,如果开发环境的ip换掉,配置环境变量十分麻烦过程内依赖了一些二进制bin文件,格局是ELF的,这些文件在windows和mac的环境上是无奈执行的- 依赖了k8s的过程 - 须要复制开发环境上kubeconfig文件到本地,一个人如果同时开发多个环境,切换kubeconfig文件十分繁琐本地开发无奈将开发环境上的流量导到本地,须要打包替换环境上镜像来实现联调03 现状因而日常的开发流程便是: 配置环境变量和kubeconfig应用postman等其余工具进行本地开发提交代码、跑 CI 流程,出镜像替换环境上镜像自测显然,为了保障在本地开发的代码,在环境上实在运行的成果完全一致,带来了很大的工作效率问题,本地编码2分钟,上环境自测5分钟 04 需要因而咱们急需一种便捷的,学习成本低的,基于配置的,一劳永逸的,团队对立的解决方案来买通本地和开发环境,节俭本地开发环境的搭建和配置的工夫,节俭团队成员联调的工夫,进步团队效率 05 计划选型Telepresence参考 https://docs.microsoft.com/zh... 介绍 Telepresence是一款为Kubernetes微服务框架提供疾速本地化开发性能的开源软件。Telepresence在Kubernetes集群中运行的Pod中部署双向网络代理,该Pod将Kubernetes环境(如TCP连贯,环境变量,卷)中的数据代理到本地过程。本地过程通明地笼罩其网络,以便DNS调用和TCP连贯通过代理路由到近程Kubernetes集群,可能获取 个性 基于在本地计算机应用docker,fuse/sshfs 和流量转发技术,来将本地和近程之间的流量和文件系统买通,以在本地应用ide进行调试本地拜访远端的服务, 跨namespace本地服务能够齐全拜访近程群集中的其余服务;本地服务能够齐全拜访Kubernetes的环境变量,Secrets和ConfigMapK8S中运行的近程服务也能够齐全拜访本地服务通过在本地应用docker来运行本地代码,并和近程k8s服务通信,本支行承当了相似kubelet的角色缺点 不反对ide插件不能将本地代码同步到远端进行调试没有做隔离,进入调试模式后,服务的可用性取决于本地服务只能应用命令操作,操作不能配置化,操作繁琐Bridge to kubernetes参考 https://docs.microsoft.com/zh... 介绍 Bridge to Kubernetes 可重定向已连贯的 Kubernetes 群集与开发计算机之间的流量。Kubernetes 群集中的本地代码和服务能够像在同一 Kubernetes 群集中一样进行通信。 Bridge to Kubernetes插件只可用于Visual Studio和VS Code,反对其余ide的打算还在官网的RoadMap上没有开始 工作原理 提醒你在群集上配置要替换的服务,在开发计算机上配置用于代码的端口,并将代码的启动工作配置为一次性操作。将群集上 pod 中的容器替换为近程代理容器,它会将流量重定向到开发计算机。在开发计算机上运行 kubectl port-forward,将流量从开发计算机转发到群集中运行的近程代理。应用近程代理从群集收集环境信息。此环境信息包含环境变量、可见服务、卷装载和秘密装载。在 Visual Studio 中设置环境,以便开发计算机上的服务能够拜访雷同变量,就像它在该群集上运行一样。更新 主机文件 ,将群集上的服务映射到开发计算机上的本地 IP 地址。 这些主机 文件条目容许开发计算机上运行的代码向群集中运行的其余服务申请。若要更新 主机文件 ,Bridge to Kubernetes计算机上须要管理员拜访权限。开始在开发计算机上运行和调试代码。如有必要,Bridge to Kubernetes进行以后应用这些端口的服务或过程,以开释开发计算机上所需的端口。个性 买通本地和近程k8s环境的流量将环境信息包含环境变量、service、volume,secret,configmap克隆到本地通过主动批改/etc/hosts文件,将集群上的服务映射到开发计算机上的本地 IP 地址缺点 Bridge to Kubernetes 具备以下限度: 要使 Bridge to Kubernetes 胜利连贯,一个 pod 只能有一个容器在该 pod 中运行,然而事实中有很多多容器的pod。目前,Bridge to Kubernetes pod 必须是 Linux 容器。Windows容器。如果k8s环境是集群环境,若要更新 /etc/hosts 文件  ,Bridge to Kubernetes计算机上须要管理员拜访权限Nocalhost参考 ...
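
以 Telepresence 为例,其 2.x 版本把上述"打通本地与集群"的能力收敛成了几条命令,以下仅作示意(文中评估的可能是更早的版本;my-svc 为占位的工作负载名,具体参数以官方文档为准):

telepresence connect                              # 打通本地与集群网络,本地可直接访问集群内服务
telepresence list                                 # 查看当前命名空间下可拦截的工作负载
telepresence intercept my-svc --port 8080:80      # 把 my-svc 的流量转发到本地 8080 端口
telepresence leave my-svc                         # 结束拦截,流量恢复到集群内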

June 21, 2022 · 1 min · jiezi

关于kubernetes:技术分享-kubernetes-pod-简介

作者:沈亚军 爱可生研发团队成员,负责公司 DMP 产品的后端开发,喜好太广,三天三夜都说不完,低调低调... 本文起源:原创投稿 *爱可生开源社区出品,原创内容未经受权不得随便应用,转载请分割小编并注明起源。 pod 是什么Pod 是一组相互合作的容器,是咱们能够在 Kubernetes 中创立和治理的最小可部署单元。同一个 pod 内的容器共享网络和存储,并且作为一个整体被寻址和调度。当咱们在 Kubernetes 中创立一个 pod 会创立 pod 内的所有容器,并且将容器的所有资源都被调配到一个节点上。 为什么须要 pod思考以下问题,为什么不间接在 kubernetes 部署容器?为什么须要把多个容器视作一个整体?为什么不应用同一个容器内运行多个过程的计划? 当一个利用蕴含多个过程且通过IPC形式通信,须要运行在同一台主机。如果部署在 kubernetes 环境过程须要运行在容器内,所以可能思考计划之一是把多个过程运行在同一个容器内以实现相似在同一个主机的部署模式。然而 container 的设计是每个容器运行一个独自的过程,除非过程自身会创立多个子过程,当然如果你抉择在同一个容器内运行多个没有分割的过程的话,那么须要本人来治理其余过程,包含每个过程的生命周期(重启挂掉的过程)、日志的切割等。如果多个过程都在规范输入和规范谬误输入上输入日志,就会导致日志的凌乱,因而 docker 和 kubernetes 心愿咱们在一个容器内只运行一个过程。 排除在同一个容器内运行多个过程的计划后,咱们须要一个更高层级的组织构造实现把多个容器绑定在一起组成一个单元,这就是 pod 概念的由来,Pod 带来的益处: Pod 做为一个能够独立运行的服务单元,简化了利用部署的难度,以更高的抽象层次为利用部署管提供了极大的不便。Pod 做为最小的利用实例能够独立运行,因而能够不便的进行部署、程度扩大和膨胀、不便进行调度治理与资源的调配。Pod 中的容器共享雷同的数据和网络地址空间,Pod 之间也进行了对立的资源管理与调配。pause 容器因为容器之间是应用 Linux Namespace 和 cgroups 隔开的,所以 pod 的实现须要解决怎么去突破这个隔离。为了实现同 pod 的容器能够共享局部资源,引入了 pause 容器。 pause 容器的镜像十分小,运行着一个非常简单的过程。它简直不执行任何性能,启动后就永远把本人阻塞住。每个 Kubernetes Pod 都蕴含一个 pause 容器, pause 容器是 pod 内实现 namespace 共享的根底。 在 linux 环境下运行一个过程,该过程会继承父过程所有的namespace,同时也能够应用unsharing形式创立新的namespace。如下应用unshare形式运行 shell 并创立新的 PID、UTS、IPC 和 mount 命名空间。 ...
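
原文的 unshare 示例在此处被截断。按上文"运行 shell 并创建新的 PID、UTS、IPC 和 mount 命名空间"的描述,一种可能的写法如下(仅为示意,并非原文命令):

sudo unshare --pid --uts --ipc --mount --fork /bin/bash
# 在新 shell 中执行 echo $$ 会得到 1,说明该进程已运行在新的 PID 命名空间里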

June 21, 2022 · 3 min · jiezi

关于kubernetes:在Kubernetesk8s中部署-jenkins

在Kubernetes(k8s)中部署 jenkins YAML配置文件因为jenkins须要长久化存储,通过nfs动静供应pvc存储卷。 能够参考我之前的文档:https://cloud.tencent.com/dev... vim jenkins-deploy.yamlcat jenkins-deploy.yaml###############应用 storageClass 创立 pvc ###################---apiVersion: v1kind: PersistentVolumeClaimmetadata: name: jenkins-data-pvc namespace: defaultspec: accessModes: - ReadWriteMany resources: requests: storage: 1Gi###############创立一个ServiceAccount 名称为:jenkins-admin###################---apiVersion: v1kind: ServiceAccountmetadata: name: jenkins-admin namespace: default labels: name: jenkins###############绑定账户jenkins-admin 为集群管理员角色,为了管制权限倡议绑定自定义角色###################---kind: ClusterRoleBindingapiVersion: rbac.authorization.k8s.io/v1metadata: name: jenkins-admin labels: name: jenkinssubjects: - kind: ServiceAccount name: jenkins-admin namespace: defaultroleRef: kind: ClusterRole # cluster-admin 是 k8s 集群中默认的管理员角色 name: cluster-admin apiGroup: rbac.authorization.k8s.io############### 在 default 命名空间创立 deployment ###################---apiVersion: apps/v1kind: Deploymentmetadata: name: jenkins namespace: defaultspec: replicas: 1 selector: matchLabels: app: jenkins template: metadata: labels: app: jenkins spec: terminationGracePeriodSeconds: 10 # 留神:k8s 1.21.x 中 serviceAccount 改名为 serviceAccountName # 这里填写下面创立的 serviceAccount 的 name serviceAccount: jenkins-admin containers: - name: jenkins image: jenkins/jenkins:latest imagePullPolicy: IfNotPresent env: - name: JAVA_OPTS value: -Duser.timezone=Asia/Shanghai ports: - containerPort: 8080 name: web protocol: TCP - containerPort: 50000 name: agent protocol: TCP resources: limits: cpu: 1000m memory: 1Gi requests: cpu: 500m memory: 512Mi livenessProbe: httpGet: path: /login port: 8080 initialDelaySeconds: 60 timeoutSeconds: 5 failureThreshold: 12 readinessProbe: httpGet: path: /login port: 8080 initialDelaySeconds: 60 timeoutSeconds: 5 failureThreshold: 12 volumeMounts: - name: jenkinshome mountPath: /var/jenkins_home volumes: - name: jenkinshome persistentVolumeClaim: claimName: jenkins-data-pvc############### 在 default 命名空间创立 service ###################---apiVersion: v1kind: Servicemetadata: name: jenkins namespace: default labels: app: jenkinsspec: selector: app: jenkins type: ClusterIP ports: - name: web port: 8080 targetPort: 8080---apiVersion: v1kind: Servicemetadata: name: jenkins-agent namespace: default labels: app: jenkinsspec: selector: app: jenkins type: ClusterIP ports: - name: agent port: 50000 targetPort: 50000执行部署kubectl apply -f jenkins-deploy.yamlpersistentvolumeclaim/jenkins-data-pvc createdserviceaccount/jenkins-admin createdclusterrolebinding.rbac.authorization.k8s.io/jenkins-admin createddeployment.apps/jenkins createdservice/jenkins createdservice/jenkins-agent created拜访测试# 查看svckubectl get svc | grep jenkinsNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGEjenkins ClusterIP 10.99.124.103 <none> 8080/TCP 3m7sjenkins-agent ClusterIP 10.98.21.139 <none> 50000/TCP 3m6s# 批改为NodePortkubectl edit svc jenkinstype: NodePort# 查看批改后的svc端口kubectl get svc | grep jenkinsjenkins NodePort 10.99.124.103 <none> 8080:31613/TCP 4m24sjenkins-agent ClusterIP 10.98.21.139 <none> 50000/TCP 4m23s查看明码# 查看pod名称kubectl get pod -n default | grep jenkinsjenkins-7db75dbcb9-76l7l 1/1 Running 0 5m11s# 查看默认明码kubectl exec jenkins-7db75dbcb9-76l7l -- cat /var/jenkins_home/secrets/initialAdminPassworda9b2d13bc4c9453f93bb83e43a780f7c对于 ...
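
上文是用 kubectl edit 手工把 Service 改成 NodePort 的。如果不想进入编辑器,也可以用一条 patch 命令完成同样的修改,或者直接用端口转发临时访问(两种方式二选一):

kubectl -n default patch svc jenkins -p '{"spec":{"type":"NodePort"}}'
# 或者:本地临时访问,无需暴露 NodePort
kubectl -n default port-forward svc/jenkins 8080:8080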

June 21, 2022 · 2 min · jiezi

关于kubernetes:改变默认存储类

1 扭转默认 StorageClass
本文展现了如何扭转默认的 Storage Class,它用于为没有非凡需要的 PersistentVolumeClaims 配置 volumes.

1.1 为什么要扭转默认存储类?
取决于装置模式,你的 Kubernetes 集群可能和一个被标记为默认的已有 StorageClass 一起部署。 这个默认的 StorageClass 当前将被用于动静的为没有特定存储类需要的 PersistentVolumeClaims 配置存储。更多细节请查看 PersistentVolumeClaim 文档。 事后装置的默认 StorageClass 可能不能很好的适应你冀望的工作负载;例如,它配置的存储可能太过低廉。 如果是这样的话,你能够扭转默认 StorageClass,或者齐全禁用它以避免动静配置存储。 删除默认 StorageClass 可能行不通,因为它可能会被你集群中的扩大管理器主动重建。 请查阅你的装置文档中对于扩大管理器的细节,以及如何禁用单个扩大。

1.2 扭转默认存储类
列出集群中的 storageclasses:

[shutang@lona-001 mysql-cluster]$ kubectl get sc
NAME                            PROVISIONER          RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
longhorn                        driver.longhorn.io   Delete          Immediate           true                   7h15m
managed-nfs-storage (default)   fuseim.pri/ifs       Retain          Immediate           false                  27d

标记默认 StorageClass 为非默认
默认 StorageClass 的注解 storageclass.kubernetes.io/is-default-class 设置为 true。 注解的其它任意值或者缺省值将被解释为 false。要标记一个 StorageClass 为非默认的,你须要扭转它的值为 false:

[shutang@lona-001 mysql-cluster]$ kubectl patch storageclass managed-nfs-storage -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
storageclass.storage.k8s.io/managed-nfs-storage patched
[shutang@lona-001 mysql-cluster]$ kubectl get sc
NAME                  PROVISIONER          RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
longhorn              driver.longhorn.io   Delete          Immediate           true                   7h26m
managed-nfs-storage   fuseim.pri/ifs       Retain          Immediate           false                  27d
[shutang@lona-001 mysql-cluster]$

标记一个 longhorn 为默认的存储类 ...
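
文中最后一步("标记 longhorn 为默认的存储类")被截断。参照前面取消默认的做法,把注解置为 true 即可,大致如下(以官方文档为准):

kubectl patch storageclass longhorn -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
kubectl get sc    # 确认 longhorn 名称后出现 (default) 标记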

June 21, 2022 · 1 min · jiezi

关于kubernetes:二进制安装Kubernetesk8s-v1242-IPv4IPv6双栈

二进制装置Kubernetes(k8s) v1.24.2 IPv4/IPv6双栈Kubernetes 开源不易,帮忙点个star,谢谢了 介绍kubernetes二进制装置 强烈建议在Github上查看文档。Github出问题会更新文档,并且后续尽可能第一工夫更新新版本文档 1.21.13 和 1.22.10 和 1.23.3 和 1.23.4 和 1.23.5 和 1.23.6 和 1.23.7 和 1.24.0 和1.24.1 和1.24.2 文档以及安装包已生成。 我应用IPV6的目标是在公网进行拜访,所以我配置了IPV6动态地址。 若您没有IPV6环境,或者不想应用IPv6,不对主机进行配置IPv6地址即可。 不配置IPV6,不影响后续,不过集群仍旧是反对IPv6的。为前期留有扩大可能性。 https://github.com/cby-chen/K... 手动我的项目地址:https://github.com/cby-chen/K... 脚本我的项目地址:https://github.com/cby-chen/B... kubernetes 1.24 变动较大,具体见:https://kubernetes.io/zh/blog... 1.环境主机名称IP地址阐明软件Master0110.0.0.61master节点kube-apiserver、kube-controller-manager、kube-scheduler、etcd、kubelet、kube-proxy、nfs-clientMaster0210.0.0.62master节点kube-apiserver、kube-controller-manager、kube-scheduler、etcd、kubelet、kube-proxy、nfs-clientMaster0310.0.0.63master节点kube-apiserver、kube-controller-manager、kube-scheduler、etcd、kubelet、kube-proxy、nfs-clientNode0110.0.0.64node节点kubelet、kube-proxy、nfs-clientNode0210.0.0.65node节点kubelet、kube-proxy、nfs-clientNode0310.0.0.66node节点kubelet、kube-proxy、nfs-clientNode0410.0.0.67node节点kubelet、kube-proxy、nfs-clientNode0510.0.0.68node节点kubelet、kube-proxy、nfs-clientLb0110.0.0.70Lb01节点haproxy、keepalivedLb0210.0.0.80Lb02节点haproxy、keepalived 10.0.0.69VIP 软件版本kernel5.18.0-1.el8CentOS 8v8 或者 v7kube-apiserver、kube-controller-manager、kube-scheduler、kubelet、kube-proxyv1.24.2etcdv3.5.4containerdv1.6.6cfsslv1.6.1cniv1.1.1crictlv1.24.2haproxyv1.8.27keepalivedv2.1.5网段 物理主机:10.0.0.0/24 service:10.96.0.0/12 pod:172.16.0.0/12 倡议k8s集群与etcd集群离开装置 安装包曾经整顿好:https://github.com/cby-chen/K... 1.1.k8s根底零碎环境配置1.2.配置IPssh root@10.1.1.100 "nmcli con mod ens160 ipv4.addresses 10.0.0.61/24; nmcli con mod ens160 ipv4.gateway 10.0.0.1; nmcli con mod ens160 ipv4.method manual; nmcli con mod ens160 ipv4.dns "8.8.8.8"; nmcli con up ens160"ssh root@10.1.1.106 "nmcli con mod ens160 ipv4.addresses 10.0.0.62/24; nmcli con mod ens160 ipv4.gateway 10.0.0.1; nmcli con mod ens160 ipv4.method manual; nmcli con mod ens160 ipv4.dns "8.8.8.8"; nmcli con up ens160"ssh root@10.1.1.110 "nmcli con mod ens160 ipv4.addresses 10.0.0.63/24; nmcli con mod ens160 ipv4.gateway 10.0.0.1; nmcli con mod ens160 ipv4.method manual; nmcli con mod ens160 ipv4.dns "8.8.8.8"; nmcli con up ens160"ssh root@10.1.1.114 "nmcli con mod ens160 ipv4.addresses 10.0.0.64/24; nmcli con mod ens160 ipv4.gateway 10.0.0.1; nmcli con mod ens160 ipv4.method manual; nmcli con mod ens160 ipv4.dns "8.8.8.8"; nmcli con up ens160"ssh root@10.1.1.115 "nmcli con mod ens160 ipv4.addresses 10.0.0.65/24; nmcli con mod ens160 ipv4.gateway 10.0.0.1; nmcli con mod ens160 ipv4.method manual; nmcli con mod ens160 ipv4.dns "8.8.8.8"; nmcli con up ens160"ssh root@10.1.1.116 "nmcli con mod ens160 ipv4.addresses 10.0.0.66/24; nmcli con mod ens160 ipv4.gateway 10.0.0.1; nmcli con mod ens160 ipv4.method manual; nmcli con mod ens160 ipv4.dns "8.8.8.8"; nmcli con up ens160"ssh root@10.1.1.117 "nmcli con mod ens160 ipv4.addresses 10.0.0.67/24; nmcli con mod ens160 ipv4.gateway 10.0.0.1; nmcli con mod ens160 ipv4.method manual; nmcli con mod ens160 ipv4.dns "8.8.8.8"; nmcli con up ens160"ssh root@10.1.1.118 "nmcli con mod ens160 ipv4.addresses 10.0.0.68/24; nmcli con mod ens160 ipv4.gateway 10.0.0.1; nmcli con mod ens160 ipv4.method manual; nmcli con mod ens160 ipv4.dns "8.8.8.8"; nmcli con up ens160"ssh root@10.1.1.119 "nmcli con mod ens160 ipv4.addresses 10.0.0.70/24; nmcli con mod ens160 ipv4.gateway 10.0.0.1; nmcli con mod ens160 ipv4.method manual; nmcli con mod ens160 ipv4.dns "8.8.8.8"; nmcli con up ens160"ssh root@10.1.1.120 "nmcli con 
mod ens160 ipv4.addresses 10.0.0.80/24; nmcli con mod ens160 ipv4.gateway 10.0.0.1; nmcli con mod ens160 ipv4.method manual; nmcli con mod ens160 ipv4.dns "8.8.8.8"; nmcli con up ens160"ssh root@10.0.0.61 "nmcli con mod ens160 ipv6.addresses 2408:8207:78cc:5cc1:181c::10; nmcli con mod ens160 ipv6.gateway fe80::2e2:69ff:fe3f:b198; nmcli con mod ens160 ipv6.method manual; nmcli con mod ens160 ipv6.dns "2001:4860:4860::8888"; nmcli con up ens160"ssh root@10.0.0.62 "nmcli con mod ens160 ipv6.addresses 2408:8207:78cc:5cc1:181c::20; nmcli con mod ens160 ipv6.gateway fe80::2e2:69ff:fe3f:b198; nmcli con mod ens160 ipv6.method manual; nmcli con mod ens160 ipv6.dns "2001:4860:4860::8888"; nmcli con up ens160"ssh root@10.0.0.63 "nmcli con mod ens160 ipv6.addresses 2408:8207:78cc:5cc1:181c::30; nmcli con mod ens160 ipv6.gateway fe80::2e2:69ff:fe3f:b198; nmcli con mod ens160 ipv6.method manual; nmcli con mod ens160 ipv6.dns "2001:4860:4860::8888"; nmcli con up ens160"ssh root@10.0.0.64 "nmcli con mod ens160 ipv6.addresses 2408:8207:78cc:5cc1:181c::40; nmcli con mod ens160 ipv6.gateway fe80::2e2:69ff:fe3f:b198; nmcli con mod ens160 ipv6.method manual; nmcli con mod ens160 ipv6.dns "2001:4860:4860::8888"; nmcli con up ens160"ssh root@10.0.0.65 "nmcli con mod ens160 ipv6.addresses 2408:8207:78cc:5cc1:181c::50; nmcli con mod ens160 ipv6.gateway fe80::2e2:69ff:fe3f:b198; nmcli con mod ens160 ipv6.method manual; nmcli con mod ens160 ipv6.dns "2001:4860:4860::8888"; nmcli con up ens160"ssh root@10.0.0.66 "nmcli con mod ens160 ipv6.addresses 2408:8207:78cc:5cc1:181c::60; nmcli con mod ens160 ipv6.gateway fe80::2e2:69ff:fe3f:b198; nmcli con mod ens160 ipv6.method manual; nmcli con mod ens160 ipv6.dns "2001:4860:4860::8888"; nmcli con up ens160"ssh root@10.0.0.67 "nmcli con mod ens160 ipv6.addresses 2408:8207:78cc:5cc1:181c::70; nmcli con mod ens160 ipv6.gateway fe80::2e2:69ff:fe3f:b198; nmcli con mod ens160 ipv6.method manual; nmcli con mod ens160 ipv6.dns "2001:4860:4860::8888"; nmcli con up ens160"ssh root@10.0.0.68 "nmcli con mod ens160 ipv6.addresses 2408:8207:78cc:5cc1:181c::80; nmcli con mod ens160 ipv6.gateway fe80::2e2:69ff:fe3f:b198; nmcli con mod ens160 ipv6.method manual; nmcli con mod ens160 ipv6.dns "2001:4860:4860::8888"; nmcli con up ens160"ssh root@10.0.0.70 "nmcli con mod ens160 ipv6.addresses 2408:8207:78cc:5cc1:181c::90; nmcli con mod ens160 ipv6.gateway fe80::2e2:69ff:fe3f:b198; nmcli con mod ens160 ipv6.method manual; nmcli con mod ens160 ipv6.dns "2001:4860:4860::8888"; nmcli con up ens160"ssh root@10.0.0.80 "nmcli con mod ens160 ipv6.addresses 2408:8207:78cc:5cc1:181c::100; nmcli con mod ens160 ipv6.gateway fe80::2e2:69ff:fe3f:b198; nmcli con mod ens160 ipv6.method manual; nmcli con mod ens160 ipv6.dns "2001:4860:4860::8888"; nmcli con up ens160"1.3.设置主机名hostnamectl set-hostname k8s-master01hostnamectl set-hostname k8s-master02hostnamectl set-hostname k8s-master03hostnamectl set-hostname k8s-node01hostnamectl set-hostname k8s-node02hostnamectl set-hostname k8s-node03hostnamectl set-hostname k8s-node04hostnamectl set-hostname k8s-node05hostnamectl set-hostname lb01hostnamectl set-hostname lb021.4.配置yum源# 对于 CentOS 7sudo sed -e 's|^mirrorlist=|#mirrorlist=|g' \ -e 's|^#baseurl=http://mirror.centos.org|baseurl=https://mirrors.tuna.tsinghua.edu.cn|g' \ -i.bak \ /etc/yum.repos.d/CentOS-*.repo# 对于 CentOS 8sudo sed -e 's|^mirrorlist=|#mirrorlist=|g' \ -e 's|^#baseurl=http://mirror.centos.org/$contentdir|baseurl=https://mirrors.tuna.tsinghua.edu.cn/centos|g' \ -i.bak \ 
/etc/yum.repos.d/CentOS-*.repo# 对于公有仓库sed -e 's|^mirrorlist=|#mirrorlist=|g' -e 's|^#baseurl=http://mirror.centos.org/\$contentdir|baseurl=http://10.0.0.123/centos|g' -i.bak /etc/yum.repos.d/CentOS-*.repo1.5.装置一些必备工具yum -y install wget jq psmisc vim net-tools nfs-utils telnet yum-utils device-mapper-persistent-data lvm2 git network-scripts tar curl -y1.6.选择性下载须要工具1.下载kubernetes1.24.+的二进制包github二进制包下载地址:https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.24.mdwget https://dl.k8s.io/v1.24.2/kubernetes-server-linux-amd64.tar.gz2.下载etcdctl二进制包github二进制包下载地址:https://github.com/etcd-io/etcd/releaseswget https://github.com/etcd-io/etcd/releases/download/v3.5.4/etcd-v3.5.4-linux-amd64.tar.gz3.docker-ce二进制包下载地址二进制包下载地址:https://download.docker.com/linux/static/stable/x86_64/这里须要下载20.10.+版本wget https://download.docker.com/linux/static/stable/x86_64/docker-20.10.14.tgz4.containerd二进制包下载github下载地址:https://github.com/containerd/containerd/releasescontainerd下载时下载带cni插件的二进制包。wget https://github.com/containerd/containerd/releases/download/v1.6.6/cri-containerd-cni-1.6.6-linux-amd64.tar.gz5.下载cfssl二进制包github二进制包下载地址:https://github.com/cloudflare/cfssl/releaseswget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl_1.6.1_linux_amd64wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssljson_1.6.1_linux_amd64wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl-certinfo_1.6.1_linux_amd646.cni插件下载github下载地址:https://github.com/containernetworking/plugins/releaseswget https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz7.crictl客户端二进制下载github下载:https://github.com/kubernetes-sigs/cri-tools/releaseswget https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.24.2/crictl-v1.24.2-linux-amd64.tar.gz1.7.敞开防火墙systemctl disable --now firewalld1.8.敞开SELinuxsetenforce 0sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config1.9.敞开替换分区sed -ri 's/.*swap.*/#&/' /etc/fstabswapoff -a && sysctl -w vm.swappiness=0cat /etc/fstab# /dev/mapper/centos-swap swap swap defaults 0 01.10.网络配置(俩种形式二选一)# 形式一# systemctl disable --now NetworkManager# systemctl start network && systemctl enable network# 形式二cat > /etc/NetworkManager/conf.d/calico.conf << EOF [keyfile]unmanaged-devices=interface-name:cali*;interface-name:tunl*EOFsystemctl restart NetworkManager1.11.进行工夫同步 (lb除外)# 服务端yum install chrony -ycat > /etc/chrony.conf << EOF pool ntp.aliyun.com iburstdriftfile /var/lib/chrony/driftmakestep 1.0 3rtcsyncallow 10.0.0.0/24local stratum 10keyfile /etc/chrony.keysleapsectz right/UTClogdir /var/log/chronyEOFsystemctl restart chronydsystemctl enable chronyd# 客户端yum install chrony -ycat > /etc/chrony.conf << EOF pool 10.0.0.61 iburstdriftfile /var/lib/chrony/driftmakestep 1.0 3rtcsynckeyfile /etc/chrony.keysleapsectz right/UTClogdir /var/log/chronyEOFcat /etc/chrony.conf | grep -v "^#" | grep -v "^$"pool 10.0.0.61 iburstdriftfile /var/lib/chrony/driftmakestep 1.0 3rtcsynckeyfile /etc/chrony.keysleapsectz right/UTClogdir /var/log/chronysystemctl restart chronyd ; systemctl enable chronyd# 客户端装置一条命令yum install chrony -y ; sed -i "s#2.centos.pool.ntp.org#10.0.0.61#g" /etc/chrony.conf ; systemctl restart chronyd ; systemctl enable chronyd#应用客户端进行验证chronyc sources -v1.12.配置ulimitulimit -SHn 65535cat >> /etc/security/limits.conf <<EOF* soft nofile 655360* hard nofile 131072* soft nproc 655350* hard nproc 655350* seft memlock unlimited* hard memlock unlimiteddEOF1.13.配置免密登录yum install -y sshpassssh-keygen -f 
/root/.ssh/id_rsa -P ''export IP="10.0.0.61 10.0.0.62 10.0.0.63 10.0.0.64 10.0.0.65 10.0.0.66 10.0.0.67 10.0.0.68 10.0.0.70 10.0.0.80"export SSHPASS=123123for HOST in $IP;do sshpass -e ssh-copy-id -o StrictHostKeyChecking=no $HOSTdone1.14.增加启用源 (lb除外)# 为 RHEL-8或 CentOS-8配置源yum install https://www.elrepo.org/elrepo-release-8.el8.elrepo.noarch.rpm# 为 RHEL-7 SL-7 或 CentOS-7 装置 ELRepo yum install https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm# 查看可用安装包yum --disablerepo="*" --enablerepo="elrepo-kernel" list available1.15.降级内核至4.18版本以上 (lb除外)# 装置最新的内核# 我这里抉择的是稳定版kernel-ml 如需更新长期保护版本kernel-lt yum --enablerepo=elrepo-kernel install kernel-ml# 查看已装置那些内核rpm -qa | grep kernelkernel-core-4.18.0-358.el8.x86_64kernel-tools-4.18.0-358.el8.x86_64kernel-ml-core-5.16.7-1.el8.elrepo.x86_64kernel-ml-5.16.7-1.el8.elrepo.x86_64kernel-modules-4.18.0-358.el8.x86_64kernel-4.18.0-358.el8.x86_64kernel-tools-libs-4.18.0-358.el8.x86_64kernel-ml-modules-5.16.7-1.el8.elrepo.x86_64# 查看默认内核grubby --default-kernel/boot/vmlinuz-5.16.7-1.el8.elrepo.x86_64# 若不是最新的应用命令设置grubby --set-default /boot/vmlinuz-「您的内核版本」.x86_64# 重启失效reboot# v8 整合命令为:yum install https://www.elrepo.org/elrepo-release-8.el8.elrepo.noarch.rpm -y ; yum --disablerepo="*" --enablerepo="elrepo-kernel" list available -y ; yum --enablerepo=elrepo-kernel install kernel-ml -y ; grubby --default-kernel ; reboot# v7 整合命令为:yum install https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm -y ; yum --disablerepo="*" --enablerepo="elrepo-kernel" list available -y ; yum --enablerepo=elrepo-kernel install kernel-ml -y ; grubby --set-default \$(ls /boot/vmlinuz-* | grep elrepo) ; grubby --default-kernel1.16.装置ipvsadm (lb除外)yum install ipvsadm ipset sysstat conntrack libseccomp -ycat >> /etc/modules-load.d/ipvs.conf <<EOF ip_vsip_vs_rrip_vs_wrrip_vs_shnf_conntrackip_tablesip_setxt_setipt_setipt_rpfilteript_REJECTipipEOFsystemctl restart systemd-modules-load.servicelsmod | grep -e ip_vs -e nf_conntrackip_vs_sh 16384 0ip_vs_wrr 16384 0ip_vs_rr 16384 0ip_vs 180224 6 ip_vs_rr,ip_vs_sh,ip_vs_wrrnf_conntrack 176128 1 ip_vsnf_defrag_ipv6 24576 2 nf_conntrack,ip_vsnf_defrag_ipv4 16384 1 nf_conntracklibcrc32c 16384 3 nf_conntrack,xfs,ip_vs1.17.批改内核参数 (lb除外)cat <<EOF > /etc/sysctl.d/k8s.confnet.ipv4.ip_forward = 1net.bridge.bridge-nf-call-iptables = 1fs.may_detach_mounts = 1vm.overcommit_memory=1vm.panic_on_oom=0fs.inotify.max_user_watches=89100fs.file-max=52706963fs.nr_open=52706963net.netfilter.nf_conntrack_max=2310720net.ipv4.tcp_keepalive_time = 600net.ipv4.tcp_keepalive_probes = 3net.ipv4.tcp_keepalive_intvl =15net.ipv4.tcp_max_tw_buckets = 36000net.ipv4.tcp_tw_reuse = 1net.ipv4.tcp_max_orphans = 327680net.ipv4.tcp_orphan_retries = 3net.ipv4.tcp_syncookies = 1net.ipv4.tcp_max_syn_backlog = 16384net.ipv4.ip_conntrack_max = 65536net.ipv4.tcp_max_syn_backlog = 16384net.ipv4.tcp_timestamps = 0net.core.somaxconn = 16384net.ipv6.conf.all.disable_ipv6 = 0net.ipv6.conf.default.disable_ipv6 = 0net.ipv6.conf.lo.disable_ipv6 = 0net.ipv6.conf.all.forwarding = 0EOFsysctl --system1.18.所有节点配置hosts本地解析cat > /etc/hosts <<EOF127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4::1 localhost localhost.localdomain localhost6 localhost6.localdomain62408:8207:78cc:5cc1:181c::10 k8s-master012408:8207:78cc:5cc1:181c::20 k8s-master022408:8207:78cc:5cc1:181c::30 k8s-master032408:8207:78cc:5cc1:181c::40 k8s-node012408:8207:78cc:5cc1:181c::50 k8s-node022408:8207:78cc:5cc1:181c::60 k8s-node032408:8207:78cc:5cc1:181c::70 k8s-node042408:8207:78cc:5cc1:181c::80 
k8s-node052408:8207:78cc:5cc1:181c::90 lb012408:8207:78cc:5cc1:181c::100 lb0210.0.0.61 k8s-master0110.0.0.62 k8s-master0210.0.0.63 k8s-master0310.0.0.64 k8s-node0110.0.0.65 k8s-node0210.0.0.66 k8s-node0310.0.0.67 k8s-node0410.0.0.68 k8s-node0510.0.0.70 lb0110.0.0.80 lb0210.0.0.69 lb-vipEOF2.k8s根本组件装置2.1.所有k8s节点装置Containerd作为Runtimewget https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz#创立cni插件所需目录mkdir -p /etc/cni/net.d /opt/cni/bin #解压cni二进制包tar xf cni-plugins-linux-amd64-v1.1.1.tgz -C /opt/cni/bin/wget https://github.com/containerd/containerd/releases/download/v1.6.6/cri-containerd-cni-1.6.6-linux-amd64.tar.gz#解压tar -C / -xzf cri-containerd-cni-1.6.6-linux-amd64.tar.gz#创立服务启动文件cat > /etc/systemd/system/containerd.service <<EOF[Unit]Description=containerd container runtimeDocumentation=https://containerd.ioAfter=network.target local-fs.target[Service]ExecStartPre=-/sbin/modprobe overlayExecStart=/usr/local/bin/containerdType=notifyDelegate=yesKillMode=processRestart=alwaysRestartSec=5LimitNPROC=infinityLimitCORE=infinityLimitNOFILE=infinityTasksMax=infinityOOMScoreAdjust=-999[Install]WantedBy=multi-user.targetEOF2.1.1配置Containerd所需的模块cat <<EOF | sudo tee /etc/modules-load.d/containerd.confoverlaybr_netfilterEOF2.1.2加载模块systemctl restart systemd-modules-load.service2.1.3配置Containerd所需的内核cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.confnet.bridge.bridge-nf-call-iptables = 1net.ipv4.ip_forward = 1net.bridge.bridge-nf-call-ip6tables = 1EOF# 加载内核sysctl --system2.1.4创立Containerd的配置文件mkdir -p /etc/containerdcontainerd config default | tee /etc/containerd/config.toml批改Containerd的配置文件sed -i "s#SystemdCgroup\ \=\ false#SystemdCgroup\ \=\ true#g" /etc/containerd/config.tomlcat /etc/containerd/config.toml | grep SystemdCgroupsed -i "s#k8s.gcr.io#registry.cn-hangzhou.aliyuncs.com/chenby#g" /etc/containerd/config.tomlcat /etc/containerd/config.toml | grep sandbox_image# 找到containerd.runtimes.runc.options,在其下退出SystemdCgroup = true[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options] SystemdCgroup = true [plugins."io.containerd.grpc.v1.cri".cni]# 将sandbox_image默认地址改为合乎版本地址 sandbox_image = "registry.cn-hangzhou.aliyuncs.com/chenby/pause:3.6"2.1.5启动并设置为开机启动systemctl daemon-reloadsystemctl enable --now containerd2.1.6配置crictl客户端连贯的运行时地位wget https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.24.2/crictl-v1.24.2-linux-amd64.tar.gz#解压tar xf crictl-v1.24.2-linux-amd64.tar.gz -C /usr/bin/#生成配置文件cat > /etc/crictl.yaml <<EOFruntime-endpoint: unix:///run/containerd/containerd.sockimage-endpoint: unix:///run/containerd/containerd.socktimeout: 10debug: falseEOF#测试systemctl restart containerdcrictl info2.2.k8s与etcd下载及装置(仅在master01操作)2.2.1解压k8s安装包# 下载安装包wget https://dl.k8s.io/v1.24.2/kubernetes-server-linux-amd64.tar.gzwget https://github.com/etcd-io/etcd/releases/download/v3.5.4/etcd-v3.5.4-linux-amd64.tar.gz# 解压k8s安装文件cd cbytar -xf kubernetes-server-linux-amd64.tar.gz --strip-components=3 -C /usr/local/bin kubernetes/server/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy}# 解压etcd安装文件tar -xf etcd-v3.5.4-linux-amd64.tar.gz --strip-components=1 -C /usr/local/bin etcd-v3.5.4-linux-amd64/etcd{,ctl}# 查看/usr/local/bin下内容ls /usr/local/bin/containerd containerd-shim-runc-v1 containerd-stress critest ctr etcdctl kube-controller-manager kubelet kube-scheduler containerd-shim containerd-shim-runc-v2 crictl ctd-decoder etcd kube-apiserver kubectl kube-proxy2.2.2查看版本[root@k8s-master01 ~]# kubelet --versionKubernetes 
v1.24.2[root@k8s-master01 ~]# etcdctl versionetcdctl version: 3.5.4API version: 3.5[root@k8s-master01 ~]# 2.2.3将组件发送至其余k8s节点Master='k8s-master02 k8s-master03'Work='k8s-node01 k8s-node02 k8s-node03 k8s-node04 k8s-node05'for NODE in $Master; do echo $NODE; scp /usr/local/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy} $NODE:/usr/local/bin/; scp /usr/local/bin/etcd* $NODE:/usr/local/bin/; donefor NODE in $Work; do scp /usr/local/bin/kube{let,-proxy} $NODE:/usr/local/bin/ ; donemkdir -p /opt/cni/bin2.3创立证书相干文件mkdir pkicd pkicat > admin-csr.json << EOF { "CN": "admin", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "system:masters", "OU": "Kubernetes-manual" } ]}EOFcat > ca-config.json << EOF { "signing": { "default": { "expiry": "876000h" }, "profiles": { "kubernetes": { "usages": [ "signing", "key encipherment", "server auth", "client auth" ], "expiry": "876000h" } } }}EOFcat > etcd-ca-csr.json << EOF { "CN": "etcd", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "etcd", "OU": "Etcd Security" } ], "ca": { "expiry": "876000h" }}EOFcat > front-proxy-ca-csr.json << EOF { "CN": "kubernetes", "key": { "algo": "rsa", "size": 2048 }, "ca": { "expiry": "876000h" }}EOFcat > kubelet-csr.json << EOF { "CN": "system:node:\$NODE", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "L": "Beijing", "ST": "Beijing", "O": "system:nodes", "OU": "Kubernetes-manual" } ]}EOFcat > manager-csr.json << EOF { "CN": "system:kube-controller-manager", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "system:kube-controller-manager", "OU": "Kubernetes-manual" } ]}EOFcat > apiserver-csr.json << EOF { "CN": "kube-apiserver", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "Kubernetes", "OU": "Kubernetes-manual" } ]}EOFcat > ca-csr.json << EOF { "CN": "kubernetes", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "Kubernetes", "OU": "Kubernetes-manual" } ], "ca": { "expiry": "876000h" }}EOFcat > etcd-csr.json << EOF { "CN": "etcd", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "etcd", "OU": "Etcd Security" } ]}EOFcat > front-proxy-client-csr.json << EOF { "CN": "front-proxy-client", "key": { "algo": "rsa", "size": 2048 }}EOFcat > kube-proxy-csr.json << EOF { "CN": "system:kube-proxy", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "system:kube-proxy", "OU": "Kubernetes-manual" } ]}EOFcat > scheduler-csr.json << EOF { "CN": "system:kube-scheduler", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "system:kube-scheduler", "OU": "Kubernetes-manual" } ]}EOFcd ..mkdir bootstrapcd bootstrapcat > bootstrap.secret.yaml << EOF apiVersion: v1kind: Secretmetadata: name: bootstrap-token-c8ad9c namespace: kube-systemtype: bootstrap.kubernetes.io/tokenstringData: description: "The default bootstrap token generated by 'kubelet '." 
token-id: c8ad9c token-secret: 2e4d610cf3e7426e usage-bootstrap-authentication: "true" usage-bootstrap-signing: "true" auth-extra-groups: system:bootstrappers:default-node-token,system:bootstrappers:worker,system:bootstrappers:ingress ---apiVersion: rbac.authorization.k8s.io/v1kind: ClusterRoleBindingmetadata: name: kubelet-bootstraproleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:node-bootstrappersubjects:- apiGroup: rbac.authorization.k8s.io kind: Group name: system:bootstrappers:default-node-token---apiVersion: rbac.authorization.k8s.io/v1kind: ClusterRoleBindingmetadata: name: node-autoapprove-bootstraproleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:certificates.k8s.io:certificatesigningrequests:nodeclientsubjects:- apiGroup: rbac.authorization.k8s.io kind: Group name: system:bootstrappers:default-node-token---apiVersion: rbac.authorization.k8s.io/v1kind: ClusterRoleBindingmetadata: name: node-autoapprove-certificate-rotationroleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclientsubjects:- apiGroup: rbac.authorization.k8s.io kind: Group name: system:nodes---apiVersion: rbac.authorization.k8s.io/v1kind: ClusterRolemetadata: annotations: rbac.authorization.kubernetes.io/autoupdate: "true" labels: kubernetes.io/bootstrapping: rbac-defaults name: system:kube-apiserver-to-kubeletrules: - apiGroups: - "" resources: - nodes/proxy - nodes/stats - nodes/log - nodes/spec - nodes/metrics verbs: - "*"---apiVersion: rbac.authorization.k8s.io/v1kind: ClusterRoleBindingmetadata: name: system:kube-apiserver namespace: ""roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:kube-apiserver-to-kubeletsubjects: - apiGroup: rbac.authorization.k8s.io kind: User name: kube-apiserverEOFcd ..mkdir corednscd corednscat > coredns.yaml << EOF apiVersion: v1kind: ServiceAccountmetadata: name: coredns namespace: kube-system---apiVersion: rbac.authorization.k8s.io/v1kind: ClusterRolemetadata: labels: kubernetes.io/bootstrapping: rbac-defaults name: system:corednsrules: - apiGroups: - "" resources: - endpoints - services - pods - namespaces verbs: - list - watch - apiGroups: - discovery.k8s.io resources: - endpointslices verbs: - list - watch---apiVersion: rbac.authorization.k8s.io/v1kind: ClusterRoleBindingmetadata: annotations: rbac.authorization.kubernetes.io/autoupdate: "true" labels: kubernetes.io/bootstrapping: rbac-defaults name: system:corednsroleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:corednssubjects:- kind: ServiceAccount name: coredns namespace: kube-system---apiVersion: v1kind: ConfigMapmetadata: name: coredns namespace: kube-systemdata: Corefile: | .:53 { errors health { lameduck 5s } ready kubernetes cluster.local in-addr.arpa ip6.arpa { fallthrough in-addr.arpa ip6.arpa } prometheus :9153 forward . /etc/resolv.conf { max_concurrent 1000 } cache 30 loop reload loadbalance }---apiVersion: apps/v1kind: Deploymentmetadata: name: coredns namespace: kube-system labels: k8s-app: kube-dns kubernetes.io/name: "CoreDNS"spec: # replicas: not specified here: # 1. Default is 1. # 2. Will be tuned in real time if DNS horizontal auto-scaling is turned on. 
strategy: type: RollingUpdate rollingUpdate: maxUnavailable: 1 selector: matchLabels: k8s-app: kube-dns template: metadata: labels: k8s-app: kube-dns spec: priorityClassName: system-cluster-critical serviceAccountName: coredns tolerations: - key: "CriticalAddonsOnly" operator: "Exists" nodeSelector: kubernetes.io/os: linux affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - weight: 100 podAffinityTerm: labelSelector: matchExpressions: - key: k8s-app operator: In values: ["kube-dns"] topologyKey: kubernetes.io/hostname containers: - name: coredns image: registry.cn-beijing.aliyuncs.com/dotbalo/coredns:1.8.6 imagePullPolicy: IfNotPresent resources: limits: memory: 170Mi requests: cpu: 100m memory: 70Mi args: [ "-conf", "/etc/coredns/Corefile" ] volumeMounts: - name: config-volume mountPath: /etc/coredns readOnly: true ports: - containerPort: 53 name: dns protocol: UDP - containerPort: 53 name: dns-tcp protocol: TCP - containerPort: 9153 name: metrics protocol: TCP securityContext: allowPrivilegeEscalation: false capabilities: add: - NET_BIND_SERVICE drop: - all readOnlyRootFilesystem: true livenessProbe: httpGet: path: /health port: 8080 scheme: HTTP initialDelaySeconds: 60 timeoutSeconds: 5 successThreshold: 1 failureThreshold: 5 readinessProbe: httpGet: path: /ready port: 8181 scheme: HTTP dnsPolicy: Default volumes: - name: config-volume configMap: name: coredns items: - key: Corefile path: Corefile---apiVersion: v1kind: Servicemetadata: name: kube-dns namespace: kube-system annotations: prometheus.io/port: "9153" prometheus.io/scrape: "true" labels: k8s-app: kube-dns kubernetes.io/cluster-service: "true" kubernetes.io/name: "CoreDNS"spec: selector: k8s-app: kube-dns clusterIP: 10.96.0.10 ports: - name: dns port: 53 protocol: UDP - name: dns-tcp port: 53 protocol: TCP - name: metrics port: 9153 protocol: TCPEOFcd ..mkdir metrics-servercd metrics-servercat > metrics-server.yaml << EOF apiVersion: v1kind: ServiceAccountmetadata: labels: k8s-app: metrics-server name: metrics-server namespace: kube-system---apiVersion: rbac.authorization.k8s.io/v1kind: ClusterRolemetadata: labels: k8s-app: metrics-server rbac.authorization.k8s.io/aggregate-to-admin: "true" rbac.authorization.k8s.io/aggregate-to-edit: "true" rbac.authorization.k8s.io/aggregate-to-view: "true" name: system:aggregated-metrics-readerrules:- apiGroups: - metrics.k8s.io resources: - pods - nodes verbs: - get - list - watch---apiVersion: rbac.authorization.k8s.io/v1kind: ClusterRolemetadata: labels: k8s-app: metrics-server name: system:metrics-serverrules:- apiGroups: - "" resources: - pods - nodes - nodes/stats - namespaces - configmaps verbs: - get - list - watch---apiVersion: rbac.authorization.k8s.io/v1kind: RoleBindingmetadata: labels: k8s-app: metrics-server name: metrics-server-auth-reader namespace: kube-systemroleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: extension-apiserver-authentication-readersubjects:- kind: ServiceAccount name: metrics-server namespace: kube-system---apiVersion: rbac.authorization.k8s.io/v1kind: ClusterRoleBindingmetadata: labels: k8s-app: metrics-server name: metrics-server:system:auth-delegatorroleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:auth-delegatorsubjects:- kind: ServiceAccount name: metrics-server namespace: kube-system---apiVersion: rbac.authorization.k8s.io/v1kind: ClusterRoleBindingmetadata: labels: k8s-app: metrics-server name: system:metrics-serverroleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: 
system:metrics-serversubjects:- kind: ServiceAccount name: metrics-server namespace: kube-system---apiVersion: v1kind: Servicemetadata: labels: k8s-app: metrics-server name: metrics-server namespace: kube-systemspec: ports: - name: https port: 443 protocol: TCP targetPort: https selector: k8s-app: metrics-server---apiVersion: apps/v1kind: Deploymentmetadata: labels: k8s-app: metrics-server name: metrics-server namespace: kube-systemspec: selector: matchLabels: k8s-app: metrics-server strategy: rollingUpdate: maxUnavailable: 0 template: metadata: labels: k8s-app: metrics-server spec: containers: - args: - --cert-dir=/tmp - --secure-port=4443 - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname - --kubelet-use-node-status-port - --metric-resolution=15s - --kubelet-insecure-tls - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem # change to front-proxy-ca.crt for kubeadm - --requestheader-username-headers=X-Remote-User - --requestheader-group-headers=X-Remote-Group - --requestheader-extra-headers-prefix=X-Remote-Extra- image: registry.cn-beijing.aliyuncs.com/dotbalo/metrics-server:0.5.0 imagePullPolicy: IfNotPresent livenessProbe: failureThreshold: 3 httpGet: path: /livez port: https scheme: HTTPS periodSeconds: 10 name: metrics-server ports: - containerPort: 4443 name: https protocol: TCP readinessProbe: failureThreshold: 3 httpGet: path: /readyz port: https scheme: HTTPS initialDelaySeconds: 20 periodSeconds: 10 resources: requests: cpu: 100m memory: 200Mi securityContext: readOnlyRootFilesystem: true runAsNonRoot: true runAsUser: 1000 volumeMounts: - mountPath: /tmp name: tmp-dir - name: ca-ssl mountPath: /etc/kubernetes/pki nodeSelector: kubernetes.io/os: linux priorityClassName: system-cluster-critical serviceAccountName: metrics-server volumes: - emptyDir: {} name: tmp-dir - name: ca-ssl hostPath: path: /etc/kubernetes/pki---apiVersion: apiregistration.k8s.io/v1kind: APIServicemetadata: labels: k8s-app: metrics-server name: v1beta1.metrics.k8s.iospec: group: metrics.k8s.io groupPriorityMinimum: 100 insecureSkipTLSVerify: true service: name: metrics-server namespace: kube-system version: v1beta1 versionPriority: 100EOF3.相干证书生成# master01节点下载证书生成工具# wget "https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl_1.6.1_linux_amd64" -O /usr/local/bin/cfssl# wget "https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssljson_1.6.1_linux_amd64" -O /usr/local/bin/cfssljson# 软件包内有cp cfssl_1.6.1_linux_amd64 /usr/local/bin/cfsslcp cfssljson_1.6.1_linux_amd64 /usr/local/bin/cfssljsonchmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson3.1.生成etcd证书特地阐明除外,以下操作在所有master节点操作 ...

June 20, 2022 · 23 min · jiezi

关于kubernetes:K8S-笔记-解决首次登录-K8S-dashboard-的告警

部署好 K8S dashboard 之后,首次登录,通常会在右上角告诉面板中呈现很多告警: configmaps is forbidden: User "system:serviceaccount:kube-system:kubernetes-dashboard" cannot list resource "configmaps" in API group "" in the namespace "default" persistentvolumeclaims is forbidden: User "system:serviceaccount:kube-system:kubernetes-dashboard" cannot list resource "persistentvolumeclaims" in API group "" in the namespace "default" secrets is forbidden: User "system:serviceaccount:kube-system:kubernetes-dashboard" cannot list resource "secrets" in API group "" in the namespace "default" services is forbidden: User "system:serviceaccount:kube-system:kubernetes-dashboard" cannot list resource "services" in API group "" in the namespace "default" ingresses.extensions is forbidden: User "system:serviceaccount:kube-system:kubernetes-dashboard" cannot list resource "ingresses" in API group "extensions" in the namespace "default" daemonsets.apps is forbidden: User "system:serviceaccount:kube-system:kubernetes-dashboard" cannot list resource "daemonsets" in API group "apps" in the namespace "default" pods is forbidden: User "system:serviceaccount:kube-system:kubernetes-dashboard" cannot list resource "pods" in API group "" in the namespace "default" events is forbidden: User "system:serviceaccount:kube-system:kubernetes-dashboard" cannot list resource "events" in API group "" in the namespace "default" deployments.apps is forbidden: User "system:serviceaccount:kube-system:kubernetes-dashboard" cannot list resource "deployments" in API group "apps" in the namespace "default" replicasets.apps is forbidden: User "system:serviceaccount:kube-system:kubernetes-dashboard" cannot list resource "replicasets" in API group "apps" in the namespace "default" jobs.batch is forbidden: User "system:serviceaccount:kube-system:kubernetes-dashboard" cannot list resource "jobs" in API group "batch" in the namespace "default" cronjobs.batch is forbidden: User "system:serviceaccount:kube-system:kubernetes-dashboard" cannot list resource "cronjobs" in API group "batch" in the namespace "default" replicationcontrollers is forbidden: User "system:serviceaccount:kube-system:kubernetes-dashboard" cannot list resource "replicationcontrollers" in API group "" in the namespace "default" statefulsets.apps is forbidden: User "system:serviceaccount:kube-system:kubernetes-dashboard" cannot list resource "statefulsets" in API group "apps" in the namespace "default" ...
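
这些告警的含义是 kube-system 命名空间下的 kubernetes-dashboard ServiceAccount 缺少列出相应资源的 RBAC 权限。一种常见的快速处理方式是给它绑定更高的角色,以下仅作示意(直接绑定 cluster-admin 只适合测试环境,生产环境建议按需创建最小权限的 ClusterRole;绑定名称为示例,可自行命名):

kubectl create clusterrolebinding kubernetes-dashboard-admin \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:kubernetes-dashboard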

June 17, 2022 · 3 min · jiezi

About kubernetes: K8S notes - creating and using an Nginx ConfigMap

ConfigMap 简介ConfigMap 是 k8s 中的一种 API 对象,用于镜像和配置文件解耦(对标非 k8s 环境,咱们常常用配置管理核心解耦代码和配置,其实是一个意思),这样镜像就具备了可移植性和可复用性。Pods 能够将其用作环境变量、命令行参数或者存储卷中的配置文件。在生产环境中,它作为环境变量配置的应用十分常见。 跟它相似的,还有另一个 API 对象 Secret 。 二者的区别是,前者用于存储不敏感和非保密性数据。例如 ip 和端口。后者用于存储敏感和保密性数据,例如用户名和明码,秘钥,等等,应用 base64 编码贮存。 对于 configmap 的更多内容能够看看官网:https://kubernetes.io/zh-cn/d... 应用 ConfigMap 的限度条件ConfigMap 要在 Pod 启动前创立好。因为它是要被 Pod 应用的嘛。只有当 ConfigMap 和 Pod 处于同一个 NameSpace 时 Pod 才能够援用它。当 Pod 对 ConfigMap 进行挂载(VolumeMount)操作时,在容器外部只能挂载为目录,不能挂载为文件。当挂载曾经存在的目录时,且目录内含有其它文件,ConfigMap 会将其笼罩掉。实操本次操作,最后的 yaml 配置如下,总共蕴含三个局部: ConfigMapDeploymentService也能够将这三个局部拆分到 3 个 yaml 文件中别离执行。 apiVersion: v1kind: ConfigMapmetadata: name: nginx-confdata: nginx.conf: | user nginx; worker_processes 2; error_log /var/log/nginx/error.log; events { worker_connections 1024; } http { include mime.types; #sendfile on; keepalive_timeout 1800; log_format main 'remote_addr:$remote_addr ' 'time_local:$time_local ' 'method:$request_method ' 'uri:$request_uri ' 'host:$host ' 'status:$status ' 'bytes_sent:$body_bytes_sent ' 'referer:$http_referer ' 'useragent:$http_user_agent ' 'forwardedfor:$http_x_forwarded_for ' 'request_time:$request_time'; access_log /var/log/nginx/access.log main; server { listen 80; server_name localhost; location / { root html; index index.html index.htm; } error_page 500 502 503 504 /50x.html; } include /etc/nginx/conf.d/*.conf; } virtualhost.conf: | upstream app { server localhost:8080; keepalive 1024; } server { listen 80 default_server; root /usr/local/app; access_log /var/log/nginx/app.access_log main; error_log /var/log/nginx/app.error_log; location / { proxy_pass http://app/; proxy_http_version 1.1; } }---apiVersion: apps/v1kind: Deploymentmetadata: name: my-demo-nginxspec: replicas: 1 selector: matchLabels: app: my-demo-nginx template: metadata: labels: app: my-demo-nginx spec: containers: - name: my-demo-nginx imagePullPolicy: IfNotPresent ports: - containerPort: 80 volumeMounts: - mountPath: /etc/nginx/nginx.conf # mount nginx-conf volumn to /etc/nginx #readOnly: true #name: nginx-conf #name: my-demo-nginx name: nginx subPath: nginx.conf - mountPath: /var/log/nginx name: log volumes: - name: nginx configMap: name: nginx-conf # place ConfigMap `nginx-conf` on /etc/nginx items: - key: nginx.conf path: nginx.conf - key: virtualhost.conf path: conf.d/virtualhost.conf # dig directory - name: log emptyDir: {}---apiVersion: v1kind: Servicemetadata: name: nginx-service #定义service名称为nginx-service labels: app: nginx-service #为service打上app标签spec: type: NodePort #应用NodePort形式开明,在每个Node上调配一个端口作为内部拜访入口 #type: LoadBalancer #工作在特定的Cloud Provider上,例如Google Cloud,AWS,OpenStack #type: ClusterIP #默认,调配一个集群外部能够拜访的虚构IP(VIP) ports: - port: 8000 #port是k8s集群外部拜访service的端口,即通过clusterIP: port能够拜访到某个service targetPort: 80 #targetPort是pod的端口,从port和nodePort来的流量通过kube-proxy流入到后端pod的targetPort上,最初进入容器 nodePort: 32500 #nodePort是内部拜访k8s集群中service的端口,通过nodeIP: nodePort能够从内部拜访到某个service selector: app: my-nginx执行该 yaml 文件,遇到了问题:本模板&此实操中 Deployment 的配置,它的 spec.template.spec.containers.volumeMounts.name 的值应用 nginx 能力胜利,如果是 my-demo-nginx 则报错如下: ...
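The excerpt is cut off before the error text it promises. Independent of that, once the three manifests above are applied, the ConfigMap wiring can be checked from the command line; this is a sketch that assumes the names used in the YAML (nginx-conf, my-demo-nginx, the NodePort 32500) and the default namespace:

```bash
# Confirm the ConfigMap carries both keys: nginx.conf and virtualhost.conf
kubectl get configmap nginx-conf -o yaml

# subPath mounts a single file, so /etc/nginx/nginx.conf should be a file, not a directory
kubectl exec deploy/my-demo-nginx -- ls -l /etc/nginx/nginx.conf
kubectl exec deploy/my-demo-nginx -- nginx -t

# Reach the NodePort declared in the Service; <node-ip> is any node address (placeholder)
curl -I http://<node-ip>:32500/
```

Note that, as quoted, the Service selects app: my-nginx while the Deployment's pods are labeled app: my-demo-nginx, so the NodePort check will only return a response once those labels are brought into line.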

June 17, 2022 · 6 min · jiezi

About kubernetes: installing Calico networking in BGP mode with IPv4 and IPv6 support on Kubernetes (k8s)

kubernetes (k8s): installing Calico networking in BGP mode with IPv4 and IPv6 support

BGP is one of the core decentralized, autonomous routing protocols of the Internet. It achieves reachability between autonomous systems (AS) by maintaining IP routing tables, or "prefix" tables, and belongs to the family of vector routing protocols. However, not every network can run BGP, and Calico's control-plane design requires the physical network to be a layer-2 network, so that all vRouters are directly reachable and a route cannot use a physical device as its next hop. To also support layer-3 networks, Calico offers an IP-in-IP overlay model, which transports data in overlay fashion. The IPIP header is very small and the encapsulation is built into the kernel, so in theory it is slightly faster than VxLAN, but weaker in terms of security. The default configuration of Calico 3.x uses the IPIP transport scheme rather than BGP.

Calico's system architecture is shown in the figure (image not included in this excerpt).

Calico mainly consists of Felix, etcd, the BGP client and the BGP Route Reflector:

- Felix, the Calico agent, runs on every node that hosts workloads and is mainly responsible for programming routes and ACLs to keep endpoints reachable;
- etcd, a distributed key-value store, is mainly responsible for the consistency of network metadata, i.e. the accuracy of Calico's network state;
- BGP Client (BIRD) is mainly responsible for distributing the routes that Felix writes into the kernel to the rest of the Calico network, keeping workload-to-workload traffic flowing;
- BGP Route Reflector (BIRD) is used in large deployments: it drops the full node-to-node mesh and distributes routes centrally through one or more route reflectors;
- calico/calico-ipam is mainly used as the CNI plugin for Kubernetes.

Configure NetworkManager so that it does not interfere with Calico:

[root@k8s-master01 ~]# vim /etc/NetworkManager/conf.d/calico.conf
[root@k8s-master01 ~]# cat /etc/NetworkManager/conf.d/calico.conf
[keyfile]
unmanaged-devices=interface-name:cali*;interface-name:tunl*
[root@k8s-master01 ~]#

Download the latest official Calico manifest:

[root@k8s-master01 ~]# curl https://projectcalico.docs.tigera.io/manifests/calico-typha.yaml -o calico.yaml
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  228k  100  228k    0     0  83974      0  0:00:02  0:00:02 --:--:-- 83974
[root@k8s-master01 ~]#

Modify the Calico configuration to support IPv6:

[root@k8s-master01 ~]# cp calico.yaml calico-ipv6.yaml
[root@k8s-master01 ~]# vim calico-ipv6.yaml

# in the calico-config ConfigMap
    "ipam": {
        "type": "calico-ipam",
        "assign_ipv4": "true",
        "assign_ipv6": "true"
    },

    - name: CLUSTER_TYPE
      value: "k8s,bgp"
    - name: IP
      value: "autodetect"
    - name: IP6
      value: "autodetect"
    - name: CALICO_IPV4POOL_CIDR
      value: "172.16.0.0/16"
    - name: CALICO_IPV6POOL_CIDR
      value: "fc00::/48"
    - name: FELIX_IPV6SUPPORT
      value: "true"

Modify the Calico configuration to support IPv4:

[root@k8s-master01 ~]# grep "IPV4POOL_CIDR" calico.yaml -A 1
    - name: CALICO_IPV4POOL_CIDR
      value: "172.16.0.0/12"
[root@k8s-master01 ~]# kubectl apply -f calico.yaml

Check the pods:

[root@k8s-master01 ~]# kubectl get pod -A -w
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-56cdb7c587-l8h5k   1/1     Running   0          66s
kube-system   calico-node-b2mpq                          1/1     Running   0          66s
kube-system   calico-node-jlk89                          1/1     Running   0          66s
kube-system   calico-node-nqdc4                          1/1     Running   0          66s
kube-system   calico-node-pjrcn                          1/1     Running   0          66s
kube-system   calico-node-w4gfm                          1/1     Running   0          66s
kube-system   calico-typha-6775694657-vk4ds              1/1     Running   0          66s

About  ...
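Once the calico pods are Running, it is worth confirming that workloads really receive addresses from both pools. A hedged verification sketch: the test pod name (nettest) is made up, and calicoctl is a separate download that the excerpt does not cover.

```bash
# Both pools should exist: the IPv4 pool and, for the dual-stack manifest, fc00::/48
kubectl get ippools.crd.projectcalico.org

# A throwaway pod should report one IPv4 and one IPv6 address in status.podIPs
kubectl run nettest --image=busybox --restart=Never -- sleep 3600
kubectl get pod nettest -o jsonpath='{.status.podIPs}{"\n"}'
kubectl delete pod nettest

# With calicoctl installed on a node, check that BGP sessions are Established
calicoctl node status
```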

June 16, 2022 · 1 min · jiezi

About kubernetes: installing Kubernetes (k8s) v1.21.13 from binaries, IPv4/IPv6 dual stack

二进制装置Kubernetes(k8s) v1.21.13 IPv4/IPv6双栈Kubernetes 开源不易,帮忙点个star,谢谢了 介绍kubernetes二进制装置 后续尽可能第一工夫更新新版本文档 1.21.13 和 1.22.10 和 1.23.3 和 1.23.4 和 1.23.5 和 1.23.6 和 1.23.7 和 1.24.0 和1.24.1 文档以及安装包已生成。 我应用IPV6的目标是在公网进行拜访,所以我配置了IPV6动态地址。 若您没有IPV6环境,或者不想应用IPv6,不对主机进行配置IPv6地址即可。 不配置IPV6,不影响后续,不过集群仍旧是反对IPv6的。为前期留有扩大可能性。 我的项目地址:https://github.com/cby-chen/K... 每个初始版本会打上releases,安装包在releases页面 https://github.com/cby-chen/K... (下载更快)我本人的网盘:https://pan.oiox.cn/s/PetV 1.环境主机名称IP地址阐明软件Master0110.0.0.81master节点kube-apiserver、kube-controller-manager、kube-scheduler、etcd、kubelet、kube-proxy、nfs-client、haproxy、keepalivedMaster0210.0.0.82master节点kube-apiserver、kube-controller-manager、kube-scheduler、etcd、kubelet、kube-proxy、nfs-client、haproxy、keepalivedMaster0310.0.0.83master节点kube-apiserver、kube-controller-manager、kube-scheduler、etcd、kubelet、kube-proxy、nfs-client、haproxy、keepalivedNode0110.0.0.84node节点kubelet、kube-proxy、nfs-clientNode0210.0.0.85node节点kubelet、kube-proxy、nfs-client 10.0.0.89VIP 软件版本内核4.18.0-373.el8.x86_64CentOS 8v8 或者 v7kube-apiserver、kube-controller-manager、kube-scheduler、kubelet、kube-proxyv1.21.13etcdv3.5.4containerdv1.6.6cfsslv1.6.1cniv1.1.1crictlv1.23.0haproxyv1.8.27keepalivedv2.1.5网段 物理主机:192.168.1.0/24 service:10.96.0.0/12 pod:172.16.0.0/12 如果有条件倡议k8s集群与etcd集群离开装置 1.1.k8s根底零碎环境配置1.2.配置IPssh root@10.1.1.112 "nmcli con mod ens160 ipv4.addresses 10.0.0.81/8; nmcli con mod ens160 ipv4.gateway 10.0.0.1; nmcli con mod ens160 ipv4.method manual; nmcli con mod ens160 ipv4.dns "8.8.8.8"; nmcli con up ens160"ssh root@10.1.1.102 "nmcli con mod ens160 ipv4.addresses 10.0.0.82/8; nmcli con mod ens160 ipv4.gateway 10.0.0.1; nmcli con mod ens160 ipv4.method manual; nmcli con mod ens160 ipv4.dns "8.8.8.8"; nmcli con up ens160"ssh root@10.1.1.104 "nmcli con mod ens160 ipv4.addresses 10.0.0.83/8; nmcli con mod ens160 ipv4.gateway 10.0.0.1; nmcli con mod ens160 ipv4.method manual; nmcli con mod ens160 ipv4.dns "8.8.8.8"; nmcli con up ens160"ssh root@10.1.1.105 "nmcli con mod ens160 ipv4.addresses 10.0.0.84/8; nmcli con mod ens160 ipv4.gateway 10.0.0.1; nmcli con mod ens160 ipv4.method manual; nmcli con mod ens160 ipv4.dns "8.8.8.8"; nmcli con up ens160"ssh root@10.1.1.106 "nmcli con mod ens160 ipv4.addresses 10.0.0.85/8; nmcli con mod ens160 ipv4.gateway 10.0.0.1; nmcli con mod ens160 ipv4.method manual; nmcli con mod ens160 ipv4.dns "8.8.8.8"; nmcli con up ens160"ssh root@10.0.0.81 "nmcli con mod ens160 ipv6.addresses 2408:8207:78ca:9fa1:181c::10; nmcli con mod ens160 ipv6.gateway fe80::2e2:69ff:fe3f:b198; nmcli con mod ens160 ipv6.method manual; nmcli con mod ens160 ipv6.dns "2001:4860:4860::8888"; nmcli con up ens160"ssh root@10.0.0.82 "nmcli con mod ens160 ipv6.addresses 2408:8207:78ca:9fa1:181c::20; nmcli con mod ens160 ipv6.gateway fe80::2e2:69ff:fe3f:b198; nmcli con mod ens160 ipv6.method manual; nmcli con mod ens160 ipv6.dns "2001:4860:4860::8888"; nmcli con up ens160"ssh root@10.0.0.83 "nmcli con mod ens160 ipv6.addresses 2408:8207:78ca:9fa1:181c::30; nmcli con mod ens160 ipv6.gateway fe80::2e2:69ff:fe3f:b198; nmcli con mod ens160 ipv6.method manual; nmcli con mod ens160 ipv6.dns "2001:4860:4860::8888"; nmcli con up ens160"ssh root@10.0.0.84 "nmcli con mod ens160 ipv6.addresses 2408:8207:78ca:9fa1:181c::40; nmcli con mod ens160 ipv6.gateway fe80::2e2:69ff:fe3f:b198; nmcli con mod ens160 ipv6.method manual; nmcli con mod ens160 ipv6.dns "2001:4860:4860::8888"; nmcli con up ens160"ssh root@10.0.0.85 "nmcli con mod ens160 ipv6.addresses 2408:8207:78ca:9fa1:181c::50; nmcli con mod ens160 ipv6.gateway 
fe80::2e2:69ff:fe3f:b198; nmcli con mod ens160 ipv6.method manual; nmcli con mod ens160 ipv6.dns "2001:4860:4860::8888"; nmcli con up ens160"1.3.设置主机名hostnamectl set-hostname k8s-master01hostnamectl set-hostname k8s-master02hostnamectl set-hostname k8s-master03hostnamectl set-hostname k8s-node01hostnamectl set-hostname k8s-node021.4.配置yum源# 对于 CentOS 7sudo sed -e 's|^mirrorlist=|#mirrorlist=|g' \ -e 's|^#baseurl=http://mirror.centos.org|baseurl=https://mirrors.tuna.tsinghua.edu.cn|g' \ -i.bak \ /etc/yum.repos.d/CentOS-*.repo# 对于 CentOS 8sudo sed -e 's|^mirrorlist=|#mirrorlist=|g' \ -e 's|^#baseurl=http://mirror.centos.org/$contentdir|baseurl=https://mirrors.tuna.tsinghua.edu.cn/centos|g' \ -i.bak \ /etc/yum.repos.d/CentOS-*.reposed -e 's|^mirrorlist=|#mirrorlist=|g' -e 's|^#baseurl=http://mirror.centos.org/\$contentdir|baseurl=http://10.0.0.123/centos|g' -i.bak /etc/yum.repos.d/CentOS-*.repo1.5.装置一些必备工具yum -y install wget jq psmisc vim net-tools nfs-utils telnet yum-utils device-mapper-persistent-data lvm2 git network-scripts tar curl -y1.6.选择性下载须要工具1.下载kubernetes1.21.+的二进制包github二进制包下载地址:https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.21.mdwget https://dl.k8s.io/v1.21.13/kubernetes-server-linux-amd64.tar.gz2.下载etcdctl二进制包github二进制包下载地址:https://github.com/etcd-io/etcd/releaseswget https://github.com/etcd-io/etcd/releases/download/v3.5.4/etcd-v3.5.4-linux-amd64.tar.gz3.containerd二进制包下载github下载地址:https://github.com/containerd/containerd/releasescontainerd下载时下载带cni插件的二进制包。wget https://github.com/containerd/containerd/releases/download/v1.6.6/cri-containerd-cni-1.6.6-linux-amd64.tar.gz4.下载cfssl二进制包github二进制包下载地址:https://github.com/cloudflare/cfssl/releaseswget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl_1.6.1_linux_amd64wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssljson_1.6.1_linux_amd64wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl-certinfo_1.6.1_linux_amd645.cni插件下载github下载地址:https://github.com/containernetworking/plugins/releaseswget https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz6.crictl客户端二进制下载github下载:https://github.com/kubernetes-sigs/cri-tools/releaseswget https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.23.0/crictl-v1.23.0-linux-amd64.tar.gz1.7.敞开防火墙systemctl disable --now firewalld1.8.敞开SELinuxsetenforce 0sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config1.9.敞开替换分区sed -ri 's/.*swap.*/#&/' /etc/fstabswapoff -a && sysctl -w vm.swappiness=0cat /etc/fstab# /dev/mapper/centos-swap swap swap defaults 0 01.10.敞开NetworkManager 并启用 network (lb除外)systemctl disable --now NetworkManagersystemctl start network && systemctl enable network1.11.进行工夫同步服务端yum install chrony -ycat > /etc/chrony.conf << EOF pool ntp.aliyun.com iburstdriftfile /var/lib/chrony/driftmakestep 1.0 3rtcsyncallow 10.0.0.8/8local stratum 10keyfile /etc/chrony.keysleapsectz right/UTClogdir /var/log/chronyEOFsystemctl restart chronydsystemctl enable chronyd客户端yum install chrony -ycat > /etc/chrony.conf << EOF pool 10.0.0.81 iburstdriftfile /var/lib/chrony/driftmakestep 1.0 3rtcsynckeyfile /etc/chrony.keysleapsectz right/UTClogdir /var/log/chronyEOFsystemctl restart chronyd ; systemctl enable chronyd应用客户端进行验证chronyc sources -v1.12.配置ulimitulimit -SHn 65535cat >> /etc/security/limits.conf <<EOF* soft nofile 655360* hard nofile 131072* soft nproc 655350* hard nproc 655350* seft memlock unlimited* hard memlock unlimiteddEOF1.13.配置免密登录yum 
install -y sshpassssh-keygen -f /root/.ssh/id_rsa -P ''export IP="10.0.0.81 10.0.0.82 10.0.0.83 10.0.0.84 10.0.0.85"export SSHPASS=123123for HOST in $IP;do sshpass -e ssh-copy-id -o StrictHostKeyChecking=no $HOSTdone1.14.增加启用源为 RHEL-8或 CentOS-8配置源yum install https://www.elrepo.org/elrepo-release-8.el8.elrepo.noarch.rpm为 RHEL-7 SL-7 或 CentOS-7 装置 ELRepo yum install https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm查看可用安装包yum --disablerepo="*" --enablerepo="elrepo-kernel" list available1.15.降级内核至4.18版本以上装置最新的内核# 我这里抉择的是稳定版kernel-ml 如需更新长期保护版本kernel-lt yum --enablerepo=elrepo-kernel install kernel-ml查看已装置那些内核rpm -qa | grep kernelkernel-core-4.18.0-358.el8.x86_64kernel-tools-4.18.0-358.el8.x86_64kernel-ml-core-5.16.7-1.el8.elrepo.x86_64kernel-ml-5.16.7-1.el8.elrepo.x86_64kernel-modules-4.18.0-358.el8.x86_64kernel-4.18.0-358.el8.x86_64kernel-tools-libs-4.18.0-358.el8.x86_64kernel-ml-modules-5.16.7-1.el8.elrepo.x86_64查看默认内核grubby --default-kernel/boot/vmlinuz-5.16.7-1.el8.elrepo.x86_64若不是最新的应用命令设置grubby --set-default /boot/vmlinuz-「您的内核版本」.x86_64重启失效rebootv8 整合命令为:yum install https://www.elrepo.org/elrepo-release-8.el8.elrepo.noarch.rpm -y ; yum --disablerepo="*" --enablerepo="elrepo-kernel" list available -y ; yum --enablerepo=elrepo-kernel install kernel-ml -y ; grubby --default-kernel ; rebootv7 整合命令为:yum install https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm -y ; yum --disablerepo="*" --enablerepo="elrepo-kernel" list available -y ; yum --enablerepo=elrepo-kernel install kernel-ml -y ; grubby --set-default \$(ls /boot/vmlinuz-* | grep elrepo) ; grubby --default-kernel1.16.装置ipvsadmyum install ipvsadm ipset sysstat conntrack libseccomp -ycat >> /etc/modules-load.d/ipvs.conf <<EOF ip_vsip_vs_rrip_vs_wrrip_vs_shnf_conntrackip_tablesip_setxt_setipt_setipt_rpfilteript_REJECTipipEOFsystemctl restart systemd-modules-load.servicelsmod | grep -e ip_vs -e nf_conntrackip_vs_sh 16384 0ip_vs_wrr 16384 0ip_vs_rr 16384 0ip_vs 180224 6 ip_vs_rr,ip_vs_sh,ip_vs_wrrnf_conntrack 176128 1 ip_vsnf_defrag_ipv6 24576 2 nf_conntrack,ip_vsnf_defrag_ipv4 16384 1 nf_conntracklibcrc32c 16384 3 nf_conntrack,xfs,ip_vs1.17.批改内核参数cat <<EOF > /etc/sysctl.d/k8s.confnet.ipv4.ip_forward = 1net.bridge.bridge-nf-call-iptables = 1fs.may_detach_mounts = 1vm.overcommit_memory=1vm.panic_on_oom=0fs.inotify.max_user_watches=89100fs.file-max=52706963fs.nr_open=52706963net.netfilter.nf_conntrack_max=2310720net.ipv4.tcp_keepalive_time = 600net.ipv4.tcp_keepalive_probes = 3net.ipv4.tcp_keepalive_intvl =15net.ipv4.tcp_max_tw_buckets = 36000net.ipv4.tcp_tw_reuse = 1net.ipv4.tcp_max_orphans = 327680net.ipv4.tcp_orphan_retries = 3net.ipv4.tcp_syncookies = 1net.ipv4.tcp_max_syn_backlog = 16384net.ipv4.ip_conntrack_max = 65536net.ipv4.tcp_max_syn_backlog = 16384net.ipv4.tcp_timestamps = 0net.core.somaxconn = 16384net.ipv6.conf.all.disable_ipv6 = 0net.ipv6.conf.default.disable_ipv6 = 0net.ipv6.conf.lo.disable_ipv6 = 0net.ipv6.conf.all.forwarding = 0EOFsysctl --system1.18.所有节点配置hosts本地解析cat > /etc/hosts <<EOF127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4::1 localhost localhost.localdomain localhost6 localhost6.localdomain62408:8207:78ca:9fa1:181c::10 k8s-master012408:8207:78ca:9fa1:181c::20 k8s-master022408:8207:78ca:9fa1:181c::30 k8s-master032408:8207:78ca:9fa1:181c::40 k8s-node012408:8207:78ca:9fa1:181c::50 k8s-node0210.0.0.81 k8s-master0110.0.0.82 k8s-master0210.0.0.83 k8s-master0310.0.0.84 k8s-node0110.0.0.85 k8s-node0210.0.0.89 lb-vipEOF2.k8s根本组件装置2.1.所有k8s节点装置Containerd作为Runtimewget 
https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz#创立cni插件所需目录mkdir -p /etc/cni/net.d /opt/cni/bin #解压cni二进制包tar xf cni-plugins-linux-amd64-v1.1.1.tgz -C /opt/cni/bin/wget https://github.com/containerd/containerd/releases/download/v1.6.6/cri-containerd-cni-1.6.6-linux-amd64.tar.gz#解压tar -C / -xzf cri-containerd-cni-1.6.6-linux-amd64.tar.gz#创立服务启动文件cat > /etc/systemd/system/containerd.service <<EOF[Unit]Description=containerd container runtimeDocumentation=https://containerd.ioAfter=network.target local-fs.target[Service]ExecStartPre=-/sbin/modprobe overlayExecStart=/usr/local/bin/containerdType=notifyDelegate=yesKillMode=processRestart=alwaysRestartSec=5LimitNPROC=infinityLimitCORE=infinityLimitNOFILE=infinityTasksMax=infinityOOMScoreAdjust=-999[Install]WantedBy=multi-user.targetEOF2.1.1配置Containerd所需的模块cat <<EOF | sudo tee /etc/modules-load.d/containerd.confoverlaybr_netfilterEOF2.1.2加载模块systemctl restart systemd-modules-load.service2.1.3配置Containerd所需的内核cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.confnet.bridge.bridge-nf-call-iptables = 1net.ipv4.ip_forward = 1net.bridge.bridge-nf-call-ip6tables = 1EOF# 加载内核sysctl --system2.1.4创立Containerd的配置文件mkdir -p /etc/containerdcontainerd config default | tee /etc/containerd/config.toml批改Containerd的配置文件sed -i "s#SystemdCgroup\ \=\ false#SystemdCgroup\ \=\ true#g" /etc/containerd/config.tomlcat /etc/containerd/config.toml | grep SystemdCgroupsed -i "s#k8s.gcr.io#registry.cn-hangzhou.aliyuncs.com/chenby#g" /etc/containerd/config.tomlcat /etc/containerd/config.toml | grep sandbox_image# 找到containerd.runtimes.runc.options,在其下退出SystemdCgroup = true[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options] SystemdCgroup = true [plugins."io.containerd.grpc.v1.cri".cni]# 将sandbox_image默认地址改为合乎版本地址 sandbox_image = "registry.cn-hangzhou.aliyuncs.com/chenby/pause:3.6"2.1.5启动并设置为开机启动systemctl daemon-reloadsystemctl enable --now containerd2.1.6配置crictl客户端连贯的运行时地位wget https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.23.0/crictl-v1.23.0-linux-amd64.tar.gz#解压tar xf crictl-v1.23.0-linux-amd64.tar.gz -C /usr/bin/#生成配置文件cat > /etc/crictl.yaml <<EOFruntime-endpoint: unix:///run/containerd/containerd.sockimage-endpoint: unix:///run/containerd/containerd.socktimeout: 10debug: falseEOF#测试systemctl restart containerdcrictl info2.2.k8s与etcd下载及装置(仅在master01操作)2.2.1解压k8s安装包解压k8s安装文件tar -xf kubernetes-server-linux-amd64.tar.gz --strip-components=3 -C /usr/local/bin kubernetes/server/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy}解压etcd安装文件tar -xf etcd-v3.5.4-linux-amd64.tar.gz --strip-components=1 -C /usr/local/bin etcd-v3.5.4-linux-amd64/etcd{,ctl}# 查看/usr/local/bin下内容ls /usr/local/bin/etcd etcdctl kube-apiserver kube-controller-manager kubectl kubelet kube-proxy kube-scheduler2.2.2查看版本[root@localhost ~]# kubelet --versionKubernetes v1.21.13[root@localhost ~]# etcdctl versionetcdctl version: 3.5.4API version: 3.5[root@localhost ~]# 2.2.3将组件发送至其余k8s节点Master='k8s-master02 k8s-master03'Work='k8s-node01 k8s-node02'for NODE in $Master; do echo $NODE; scp /usr/local/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy} $NODE:/usr/local/bin/; scp /usr/local/bin/etcd* $NODE:/usr/local/bin/; donefor NODE in $Work; do scp /usr/local/bin/kube{let,-proxy} $NODE:/usr/local/bin/ ; done2.3创立证书相干文件mkdir pkicd pkicat > admin-csr.json << EOF { "CN": "admin", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": 
"system:masters", "OU": "Kubernetes-manual" } ]}EOFcat > ca-config.json << EOF { "signing": { "default": { "expiry": "876000h" }, "profiles": { "kubernetes": { "usages": [ "signing", "key encipherment", "server auth", "client auth" ], "expiry": "876000h" } } }}EOFcat > etcd-ca-csr.json << EOF { "CN": "etcd", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "etcd", "OU": "Etcd Security" } ], "ca": { "expiry": "876000h" }}EOFcat > front-proxy-ca-csr.json << EOF { "CN": "kubernetes", "key": { "algo": "rsa", "size": 2048 }, "ca": { "expiry": "876000h" }}EOFcat > kubelet-csr.json << EOF { "CN": "system:node:\$NODE", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "L": "Beijing", "ST": "Beijing", "O": "system:nodes", "OU": "Kubernetes-manual" } ]}EOFcat > manager-csr.json << EOF { "CN": "system:kube-controller-manager", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "system:kube-controller-manager", "OU": "Kubernetes-manual" } ]}EOFcat > apiserver-csr.json << EOF { "CN": "kube-apiserver", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "Kubernetes", "OU": "Kubernetes-manual" } ]}EOFcat > ca-csr.json << EOF { "CN": "kubernetes", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "Kubernetes", "OU": "Kubernetes-manual" } ], "ca": { "expiry": "876000h" }}EOFcat > etcd-csr.json << EOF { "CN": "etcd", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "etcd", "OU": "Etcd Security" } ]}EOFcat > front-proxy-client-csr.json << EOF { "CN": "front-proxy-client", "key": { "algo": "rsa", "size": 2048 }}EOFcat > kube-proxy-csr.json << EOF { "CN": "system:kube-proxy", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "system:kube-proxy", "OU": "Kubernetes-manual" } ]}EOFcat > scheduler-csr.json << EOF { "CN": "system:kube-scheduler", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "system:kube-scheduler", "OU": "Kubernetes-manual" } ]}EOFcd ..mkdir bootstrapcd bootstrapcat > bootstrap.secret.yaml << EOF apiVersion: v1kind: Secretmetadata: name: bootstrap-token-c8ad9c namespace: kube-systemtype: bootstrap.kubernetes.io/tokenstringData: description: "The default bootstrap token generated by 'kubelet '." 
token-id: c8ad9c token-secret: 2e4d610cf3e7426e usage-bootstrap-authentication: "true" usage-bootstrap-signing: "true" auth-extra-groups: system:bootstrappers:default-node-token,system:bootstrappers:worker,system:bootstrappers:ingress ---apiVersion: rbac.authorization.k8s.io/v1kind: ClusterRoleBindingmetadata: name: kubelet-bootstraproleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:node-bootstrappersubjects:- apiGroup: rbac.authorization.k8s.io kind: Group name: system:bootstrappers:default-node-token---apiVersion: rbac.authorization.k8s.io/v1kind: ClusterRoleBindingmetadata: name: node-autoapprove-bootstraproleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:certificates.k8s.io:certificatesigningrequests:nodeclientsubjects:- apiGroup: rbac.authorization.k8s.io kind: Group name: system:bootstrappers:default-node-token---apiVersion: rbac.authorization.k8s.io/v1kind: ClusterRoleBindingmetadata: name: node-autoapprove-certificate-rotationroleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclientsubjects:- apiGroup: rbac.authorization.k8s.io kind: Group name: system:nodes---apiVersion: rbac.authorization.k8s.io/v1kind: ClusterRolemetadata: annotations: rbac.authorization.kubernetes.io/autoupdate: "true" labels: kubernetes.io/bootstrapping: rbac-defaults name: system:kube-apiserver-to-kubeletrules: - apiGroups: - "" resources: - nodes/proxy - nodes/stats - nodes/log - nodes/spec - nodes/metrics verbs: - "*"---apiVersion: rbac.authorization.k8s.io/v1kind: ClusterRoleBindingmetadata: name: system:kube-apiserver namespace: ""roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:kube-apiserver-to-kubeletsubjects: - apiGroup: rbac.authorization.k8s.io kind: User name: kube-apiserverEOFcd ..mkdir corednscd corednscat > coredns.yaml << EOF apiVersion: v1kind: ServiceAccountmetadata: name: coredns namespace: kube-system---apiVersion: rbac.authorization.k8s.io/v1kind: ClusterRolemetadata: labels: kubernetes.io/bootstrapping: rbac-defaults name: system:corednsrules: - apiGroups: - "" resources: - endpoints - services - pods - namespaces verbs: - list - watch - apiGroups: - discovery.k8s.io resources: - endpointslices verbs: - list - watch---apiVersion: rbac.authorization.k8s.io/v1kind: ClusterRoleBindingmetadata: annotations: rbac.authorization.kubernetes.io/autoupdate: "true" labels: kubernetes.io/bootstrapping: rbac-defaults name: system:corednsroleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:corednssubjects:- kind: ServiceAccount name: coredns namespace: kube-system---apiVersion: v1kind: ConfigMapmetadata: name: coredns namespace: kube-systemdata: Corefile: | .:53 { errors health { lameduck 5s } ready kubernetes cluster.local in-addr.arpa ip6.arpa { fallthrough in-addr.arpa ip6.arpa } prometheus :9153 forward . /etc/resolv.conf { max_concurrent 1000 } cache 30 loop reload loadbalance }---apiVersion: apps/v1kind: Deploymentmetadata: name: coredns namespace: kube-system labels: k8s-app: kube-dns kubernetes.io/name: "CoreDNS"spec: # replicas: not specified here: # 1. Default is 1. # 2. Will be tuned in real time if DNS horizontal auto-scaling is turned on. 
strategy: type: RollingUpdate rollingUpdate: maxUnavailable: 1 selector: matchLabels: k8s-app: kube-dns template: metadata: labels: k8s-app: kube-dns spec: priorityClassName: system-cluster-critical serviceAccountName: coredns tolerations: - key: "CriticalAddonsOnly" operator: "Exists" nodeSelector: kubernetes.io/os: linux affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - weight: 100 podAffinityTerm: labelSelector: matchExpressions: - key: k8s-app operator: In values: ["kube-dns"] topologyKey: kubernetes.io/hostname containers: - name: coredns image: registry.cn-beijing.aliyuncs.com/dotbalo/coredns:1.8.6 imagePullPolicy: IfNotPresent resources: limits: memory: 170Mi requests: cpu: 100m memory: 70Mi args: [ "-conf", "/etc/coredns/Corefile" ] volumeMounts: - name: config-volume mountPath: /etc/coredns readOnly: true ports: - containerPort: 53 name: dns protocol: UDP - containerPort: 53 name: dns-tcp protocol: TCP - containerPort: 9153 name: metrics protocol: TCP securityContext: allowPrivilegeEscalation: false capabilities: add: - NET_BIND_SERVICE drop: - all readOnlyRootFilesystem: true livenessProbe: httpGet: path: /health port: 8080 scheme: HTTP initialDelaySeconds: 60 timeoutSeconds: 5 successThreshold: 1 failureThreshold: 5 readinessProbe: httpGet: path: /ready port: 8181 scheme: HTTP dnsPolicy: Default volumes: - name: config-volume configMap: name: coredns items: - key: Corefile path: Corefile---apiVersion: v1kind: Servicemetadata: name: kube-dns namespace: kube-system annotations: prometheus.io/port: "9153" prometheus.io/scrape: "true" labels: k8s-app: kube-dns kubernetes.io/cluster-service: "true" kubernetes.io/name: "CoreDNS"spec: selector: k8s-app: kube-dns clusterIP: 10.96.0.10 ports: - name: dns port: 53 protocol: UDP - name: dns-tcp port: 53 protocol: TCP - name: metrics port: 9153 protocol: TCPEOFcd ..mkdir metrics-servercd metrics-servercat > metrics-server.yaml << EOF apiVersion: v1kind: ServiceAccountmetadata: labels: k8s-app: metrics-server name: metrics-server namespace: kube-system---apiVersion: rbac.authorization.k8s.io/v1kind: ClusterRolemetadata: labels: k8s-app: metrics-server rbac.authorization.k8s.io/aggregate-to-admin: "true" rbac.authorization.k8s.io/aggregate-to-edit: "true" rbac.authorization.k8s.io/aggregate-to-view: "true" name: system:aggregated-metrics-readerrules:- apiGroups: - metrics.k8s.io resources: - pods - nodes verbs: - get - list - watch---apiVersion: rbac.authorization.k8s.io/v1kind: ClusterRolemetadata: labels: k8s-app: metrics-server name: system:metrics-serverrules:- apiGroups: - "" resources: - pods - nodes - nodes/stats - namespaces - configmaps verbs: - get - list - watch---apiVersion: rbac.authorization.k8s.io/v1kind: RoleBindingmetadata: labels: k8s-app: metrics-server name: metrics-server-auth-reader namespace: kube-systemroleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: extension-apiserver-authentication-readersubjects:- kind: ServiceAccount name: metrics-server namespace: kube-system---apiVersion: rbac.authorization.k8s.io/v1kind: ClusterRoleBindingmetadata: labels: k8s-app: metrics-server name: metrics-server:system:auth-delegatorroleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:auth-delegatorsubjects:- kind: ServiceAccount name: metrics-server namespace: kube-system---apiVersion: rbac.authorization.k8s.io/v1kind: ClusterRoleBindingmetadata: labels: k8s-app: metrics-server name: system:metrics-serverroleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: 
system:metrics-serversubjects:- kind: ServiceAccount name: metrics-server namespace: kube-system---apiVersion: v1kind: Servicemetadata: labels: k8s-app: metrics-server name: metrics-server namespace: kube-systemspec: ports: - name: https port: 443 protocol: TCP targetPort: https selector: k8s-app: metrics-server---apiVersion: apps/v1kind: Deploymentmetadata: labels: k8s-app: metrics-server name: metrics-server namespace: kube-systemspec: selector: matchLabels: k8s-app: metrics-server strategy: rollingUpdate: maxUnavailable: 0 template: metadata: labels: k8s-app: metrics-server spec: containers: - args: - --cert-dir=/tmp - --secure-port=4443 - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname - --kubelet-use-node-status-port - --metric-resolution=15s - --kubelet-insecure-tls - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem # change to front-proxy-ca.crt for kubeadm - --requestheader-username-headers=X-Remote-User - --requestheader-group-headers=X-Remote-Group - --requestheader-extra-headers-prefix=X-Remote-Extra- image: registry.cn-beijing.aliyuncs.com/dotbalo/metrics-server:0.5.0 imagePullPolicy: IfNotPresent livenessProbe: failureThreshold: 3 httpGet: path: /livez port: https scheme: HTTPS periodSeconds: 10 name: metrics-server ports: - containerPort: 4443 name: https protocol: TCP readinessProbe: failureThreshold: 3 httpGet: path: /readyz port: https scheme: HTTPS initialDelaySeconds: 20 periodSeconds: 10 resources: requests: cpu: 100m memory: 200Mi securityContext: readOnlyRootFilesystem: true runAsNonRoot: true runAsUser: 1000 volumeMounts: - mountPath: /tmp name: tmp-dir - name: ca-ssl mountPath: /etc/kubernetes/pki nodeSelector: kubernetes.io/os: linux priorityClassName: system-cluster-critical serviceAccountName: metrics-server volumes: - emptyDir: {} name: tmp-dir - name: ca-ssl hostPath: path: /etc/kubernetes/pki---apiVersion: apiregistration.k8s.io/v1kind: APIServicemetadata: labels: k8s-app: metrics-server name: v1beta1.metrics.k8s.iospec: group: metrics.k8s.io groupPriorityMinimum: 100 insecureSkipTLSVerify: true service: name: metrics-server namespace: kube-system version: v1beta1 versionPriority: 100EOF3.相干证书生成master01节点下载证书生成工具wget "https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl_1.6.1_linux_amd64" -O /usr/local/bin/cfsslwget "https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssljson_1.6.1_linux_amd64" -O /usr/local/bin/cfssljsonchmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson3.1.生成etcd证书特地阐明除外,以下操作在所有master节点操作 ...
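Section 2.3 only writes the bootstrap manifest to disk; the token it defines (c8ad9c.2e4d610cf3e7426e) is what the kubelets later use for TLS bootstrapping. A hedged sketch of how such a token is usually wired into a bootstrap kubeconfig; the file path, CA path and API server address/port below are typical values and assumptions, not taken from this excerpt:

```bash
# Token pieces come from bootstrap.secret.yaml above: <token-id>.<token-secret>
TOKEN=c8ad9c.2e4d610cf3e7426e
APISERVER=https://10.0.0.89:8443   # VIP in front of the three kube-apiservers; the port is an assumption

kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/pki/ca.pem \
  --embed-certs=true --server=${APISERVER} \
  --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig

kubectl config set-credentials tls-bootstrap-token-user \
  --token=${TOKEN} \
  --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig

kubectl config set-context tls-bootstrap-token-user@kubernetes \
  --cluster=kubernetes --user=tls-bootstrap-token-user \
  --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig

kubectl config use-context tls-bootstrap-token-user@kubernetes \
  --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
```

The ClusterRoleBindings in bootstrap.secret.yaml are what allow a kubelet presenting this token to submit and auto-approve its CSR, so the Secret must be applied before any node tries to join.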

June 15, 2022 · 27 min · jiezi

About kubernetes: installing Kubernetes (k8s) v1.22.10 from binaries, IPv4/IPv6 dual stack

二进制装置Kubernetes(k8s) v1.22.10 IPv4/IPv6双栈Kubernetes 开源不易,帮忙点个star,谢谢了 介绍kubernetes二进制装置 后续尽可能第一工夫更新新版本文档 1.22.10 和 1.23.3 和 1.23.4 和 1.23.5 和 1.23.6 和 1.23.7 和 1.24.0 和1.24.1 文档以及安装包已生成。 我应用IPV6的目标是在公网进行拜访,所以我配置了IPV6动态地址。 若您没有IPV6环境,或者不想应用IPv6,不对主机进行配置IPv6地址即可。 不配置IPV6,不影响后续,不过集群仍旧是反对IPv6的。为前期留有扩大可能性。 我的项目地址:https://github.com/cby-chen/K... 每个初始版本会打上releases,安装包在releases页面 https://github.com/cby-chen/K... (下载更快)我本人的网盘:https://pan.oiox.cn/s/PetV 1.环境主机名称IP地址阐明软件Master0110.0.0.81master节点kube-apiserver、kube-controller-manager、kube-scheduler、etcd、kubelet、kube-proxy、nfs-client、haproxy、keepalivedMaster0210.0.0.82master节点kube-apiserver、kube-controller-manager、kube-scheduler、etcd、kubelet、kube-proxy、nfs-client、haproxy、keepalivedMaster0310.0.0.83master节点kube-apiserver、kube-controller-manager、kube-scheduler、etcd、kubelet、kube-proxy、nfs-client、haproxy、keepalivedNode0110.0.0.84node节点kubelet、kube-proxy、nfs-clientNode0210.0.0.85node节点kubelet、kube-proxy、nfs-client 10.0.0.89VIP 软件版本内核4.18.0-373.el8.x86_64CentOS 8v8 或者 v7kube-apiserver、kube-controller-manager、kube-scheduler、kubelet、kube-proxyv1.22.10etcdv3.5.4containerdv1.6.6cfsslv1.6.1cniv1.1.1crictlv1.23.0haproxyv1.8.27keepalivedv2.1.5网段 物理主机:192.168.1.0/24 service:10.96.0.0/12 pod:172.16.0.0/12 如果有条件倡议k8s集群与etcd集群离开装置 1.1.k8s根底零碎环境配置1.2.配置IPssh root@10.1.1.112 "nmcli con mod ens160 ipv4.addresses 10.0.0.81/8; nmcli con mod ens160 ipv4.gateway 10.0.0.1; nmcli con mod ens160 ipv4.method manual; nmcli con mod ens160 ipv4.dns "8.8.8.8"; nmcli con up ens160"ssh root@10.1.1.102 "nmcli con mod ens160 ipv4.addresses 10.0.0.82/8; nmcli con mod ens160 ipv4.gateway 10.0.0.1; nmcli con mod ens160 ipv4.method manual; nmcli con mod ens160 ipv4.dns "8.8.8.8"; nmcli con up ens160"ssh root@10.1.1.104 "nmcli con mod ens160 ipv4.addresses 10.0.0.83/8; nmcli con mod ens160 ipv4.gateway 10.0.0.1; nmcli con mod ens160 ipv4.method manual; nmcli con mod ens160 ipv4.dns "8.8.8.8"; nmcli con up ens160"ssh root@10.1.1.105 "nmcli con mod ens160 ipv4.addresses 10.0.0.84/8; nmcli con mod ens160 ipv4.gateway 10.0.0.1; nmcli con mod ens160 ipv4.method manual; nmcli con mod ens160 ipv4.dns "8.8.8.8"; nmcli con up ens160"ssh root@10.1.1.106 "nmcli con mod ens160 ipv4.addresses 10.0.0.85/8; nmcli con mod ens160 ipv4.gateway 10.0.0.1; nmcli con mod ens160 ipv4.method manual; nmcli con mod ens160 ipv4.dns "8.8.8.8"; nmcli con up ens160"ssh root@10.0.0.81 "nmcli con mod ens160 ipv6.addresses 2408:8207:78ca:9fa1:181c::10; nmcli con mod ens160 ipv6.gateway fe80::2e2:69ff:fe3f:b198; nmcli con mod ens160 ipv6.method manual; nmcli con mod ens160 ipv6.dns "2001:4860:4860::8888"; nmcli con up ens160"ssh root@10.0.0.82 "nmcli con mod ens160 ipv6.addresses 2408:8207:78ca:9fa1:181c::20; nmcli con mod ens160 ipv6.gateway fe80::2e2:69ff:fe3f:b198; nmcli con mod ens160 ipv6.method manual; nmcli con mod ens160 ipv6.dns "2001:4860:4860::8888"; nmcli con up ens160"ssh root@10.0.0.83 "nmcli con mod ens160 ipv6.addresses 2408:8207:78ca:9fa1:181c::30; nmcli con mod ens160 ipv6.gateway fe80::2e2:69ff:fe3f:b198; nmcli con mod ens160 ipv6.method manual; nmcli con mod ens160 ipv6.dns "2001:4860:4860::8888"; nmcli con up ens160"ssh root@10.0.0.84 "nmcli con mod ens160 ipv6.addresses 2408:8207:78ca:9fa1:181c::40; nmcli con mod ens160 ipv6.gateway fe80::2e2:69ff:fe3f:b198; nmcli con mod ens160 ipv6.method manual; nmcli con mod ens160 ipv6.dns "2001:4860:4860::8888"; nmcli con up ens160"ssh root@10.0.0.85 "nmcli con mod ens160 ipv6.addresses 2408:8207:78ca:9fa1:181c::50; nmcli con mod ens160 ipv6.gateway 
fe80::2e2:69ff:fe3f:b198; nmcli con mod ens160 ipv6.method manual; nmcli con mod ens160 ipv6.dns "2001:4860:4860::8888"; nmcli con up ens160"1.3.设置主机名hostnamectl set-hostname k8s-master01hostnamectl set-hostname k8s-master02hostnamectl set-hostname k8s-master03hostnamectl set-hostname k8s-node01hostnamectl set-hostname k8s-node021.4.配置yum源# 对于 CentOS 7sudo sed -e 's|^mirrorlist=|#mirrorlist=|g' \ -e 's|^#baseurl=http://mirror.centos.org|baseurl=https://mirrors.tuna.tsinghua.edu.cn|g' \ -i.bak \ /etc/yum.repos.d/CentOS-*.repo# 对于 CentOS 8sudo sed -e 's|^mirrorlist=|#mirrorlist=|g' \ -e 's|^#baseurl=http://mirror.centos.org/$contentdir|baseurl=https://mirrors.tuna.tsinghua.edu.cn/centos|g' \ -i.bak \ /etc/yum.repos.d/CentOS-*.reposed -e 's|^mirrorlist=|#mirrorlist=|g' -e 's|^#baseurl=http://mirror.centos.org/\$contentdir|baseurl=http://10.0.0.123/centos|g' -i.bak /etc/yum.repos.d/CentOS-*.repo1.5.装置一些必备工具yum -y install wget jq psmisc vim net-tools nfs-utils telnet yum-utils device-mapper-persistent-data lvm2 git network-scripts tar curl -y1.6.选择性下载须要工具1.下载kubernetes1.22.+的二进制包github二进制包下载地址:https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.22.mdwget https://dl.k8s.io/v1.22.10/kubernetes-server-linux-amd64.tar.gz2.下载etcdctl二进制包github二进制包下载地址:https://github.com/etcd-io/etcd/releaseswget https://github.com/etcd-io/etcd/releases/download/v3.5.4/etcd-v3.5.4-linux-amd64.tar.gz3.containerd二进制包下载github下载地址:https://github.com/containerd/containerd/releasescontainerd下载时下载带cni插件的二进制包。wget https://github.com/containerd/containerd/releases/download/v1.6.6/cri-containerd-cni-1.6.6-linux-amd64.tar.gz4.下载cfssl二进制包github二进制包下载地址:https://github.com/cloudflare/cfssl/releaseswget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl_1.6.1_linux_amd64wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssljson_1.6.1_linux_amd64wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl-certinfo_1.6.1_linux_amd645.cni插件下载github下载地址:https://github.com/containernetworking/plugins/releaseswget https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz6.crictl客户端二进制下载github下载:https://github.com/kubernetes-sigs/cri-tools/releaseswget https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.23.0/crictl-v1.23.0-linux-amd64.tar.gz1.7.敞开防火墙systemctl disable --now firewalld1.8.敞开SELinuxsetenforce 0sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config1.9.敞开替换分区sed -ri 's/.*swap.*/#&/' /etc/fstabswapoff -a && sysctl -w vm.swappiness=0cat /etc/fstab# /dev/mapper/centos-swap swap swap defaults 0 01.10.敞开NetworkManager 并启用 network (lb除外)systemctl disable --now NetworkManagersystemctl start network && systemctl enable network1.11.进行工夫同步服务端yum install chrony -ycat > /etc/chrony.conf << EOF pool ntp.aliyun.com iburstdriftfile /var/lib/chrony/driftmakestep 1.0 3rtcsyncallow 10.0.0.8/8local stratum 10keyfile /etc/chrony.keysleapsectz right/UTClogdir /var/log/chronyEOFsystemctl restart chronydsystemctl enable chronyd客户端yum install chrony -ycat > /etc/chrony.conf << EOF pool 10.0.0.81 iburstdriftfile /var/lib/chrony/driftmakestep 1.0 3rtcsynckeyfile /etc/chrony.keysleapsectz right/UTClogdir /var/log/chronyEOFsystemctl restart chronyd ; systemctl enable chronyd应用客户端进行验证chronyc sources -v1.12.配置ulimitulimit -SHn 65535cat >> /etc/security/limits.conf <<EOF* soft nofile 655360* hard nofile 131072* soft nproc 655350* hard nproc 655350* seft memlock unlimited* hard memlock unlimiteddEOF1.13.配置免密登录yum 
install -y sshpassssh-keygen -f /root/.ssh/id_rsa -P ''export IP="10.0.0.81 10.0.0.82 10.0.0.83 10.0.0.84 10.0.0.85"export SSHPASS=123123for HOST in $IP;do sshpass -e ssh-copy-id -o StrictHostKeyChecking=no $HOSTdone1.14.增加启用源为 RHEL-8或 CentOS-8配置源yum install https://www.elrepo.org/elrepo-release-8.el8.elrepo.noarch.rpm为 RHEL-7 SL-7 或 CentOS-7 装置 ELRepo yum install https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm查看可用安装包yum --disablerepo="*" --enablerepo="elrepo-kernel" list available1.15.降级内核至4.18版本以上装置最新的内核# 我这里抉择的是稳定版kernel-ml 如需更新长期保护版本kernel-lt yum --enablerepo=elrepo-kernel install kernel-ml查看已装置那些内核rpm -qa | grep kernelkernel-core-4.18.0-358.el8.x86_64kernel-tools-4.18.0-358.el8.x86_64kernel-ml-core-5.16.7-1.el8.elrepo.x86_64kernel-ml-5.16.7-1.el8.elrepo.x86_64kernel-modules-4.18.0-358.el8.x86_64kernel-4.18.0-358.el8.x86_64kernel-tools-libs-4.18.0-358.el8.x86_64kernel-ml-modules-5.16.7-1.el8.elrepo.x86_64查看默认内核grubby --default-kernel/boot/vmlinuz-5.16.7-1.el8.elrepo.x86_64若不是最新的应用命令设置grubby --set-default /boot/vmlinuz-「您的内核版本」.x86_64重启失效rebootv8 整合命令为:yum install https://www.elrepo.org/elrepo-release-8.el8.elrepo.noarch.rpm -y ; yum --disablerepo="*" --enablerepo="elrepo-kernel" list available -y ; yum --enablerepo=elrepo-kernel install kernel-ml -y ; grubby --default-kernel ; rebootv7 整合命令为:yum install https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm -y ; yum --disablerepo="*" --enablerepo="elrepo-kernel" list available -y ; yum --enablerepo=elrepo-kernel install kernel-ml -y ; grubby --set-default \$(ls /boot/vmlinuz-* | grep elrepo) ; grubby --default-kernel1.16.装置ipvsadmyum install ipvsadm ipset sysstat conntrack libseccomp -ycat >> /etc/modules-load.d/ipvs.conf <<EOF ip_vsip_vs_rrip_vs_wrrip_vs_shnf_conntrackip_tablesip_setxt_setipt_setipt_rpfilteript_REJECTipipEOFsystemctl restart systemd-modules-load.servicelsmod | grep -e ip_vs -e nf_conntrackip_vs_sh 16384 0ip_vs_wrr 16384 0ip_vs_rr 16384 0ip_vs 180224 6 ip_vs_rr,ip_vs_sh,ip_vs_wrrnf_conntrack 176128 1 ip_vsnf_defrag_ipv6 24576 2 nf_conntrack,ip_vsnf_defrag_ipv4 16384 1 nf_conntracklibcrc32c 16384 3 nf_conntrack,xfs,ip_vs1.17.批改内核参数cat <<EOF > /etc/sysctl.d/k8s.confnet.ipv4.ip_forward = 1net.bridge.bridge-nf-call-iptables = 1fs.may_detach_mounts = 1vm.overcommit_memory=1vm.panic_on_oom=0fs.inotify.max_user_watches=89100fs.file-max=52706963fs.nr_open=52706963net.netfilter.nf_conntrack_max=2310720net.ipv4.tcp_keepalive_time = 600net.ipv4.tcp_keepalive_probes = 3net.ipv4.tcp_keepalive_intvl =15net.ipv4.tcp_max_tw_buckets = 36000net.ipv4.tcp_tw_reuse = 1net.ipv4.tcp_max_orphans = 327680net.ipv4.tcp_orphan_retries = 3net.ipv4.tcp_syncookies = 1net.ipv4.tcp_max_syn_backlog = 16384net.ipv4.ip_conntrack_max = 65536net.ipv4.tcp_max_syn_backlog = 16384net.ipv4.tcp_timestamps = 0net.core.somaxconn = 16384net.ipv6.conf.all.disable_ipv6 = 0net.ipv6.conf.default.disable_ipv6 = 0net.ipv6.conf.lo.disable_ipv6 = 0net.ipv6.conf.all.forwarding = 0EOFsysctl --system1.18.所有节点配置hosts本地解析cat > /etc/hosts <<EOF127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4::1 localhost localhost.localdomain localhost6 localhost6.localdomain62408:8207:78ca:9fa1:181c::10 k8s-master012408:8207:78ca:9fa1:181c::20 k8s-master022408:8207:78ca:9fa1:181c::30 k8s-master032408:8207:78ca:9fa1:181c::40 k8s-node012408:8207:78ca:9fa1:181c::50 k8s-node0210.0.0.81 k8s-master0110.0.0.82 k8s-master0210.0.0.83 k8s-master0310.0.0.84 k8s-node0110.0.0.85 k8s-node0210.0.0.89 lb-vipEOF2.k8s根本组件装置2.1.所有k8s节点装置Containerd作为Runtimewget 
https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz#创立cni插件所需目录mkdir -p /etc/cni/net.d /opt/cni/bin #解压cni二进制包tar xf cni-plugins-linux-amd64-v1.1.1.tgz -C /opt/cni/bin/wget https://github.com/containerd/containerd/releases/download/v1.6.6/cri-containerd-cni-1.6.6-linux-amd64.tar.gz#解压tar -C / -xzf cri-containerd-cni-1.6.6-linux-amd64.tar.gz#创立服务启动文件cat > /etc/systemd/system/containerd.service <<EOF[Unit]Description=containerd container runtimeDocumentation=https://containerd.ioAfter=network.target local-fs.target[Service]ExecStartPre=-/sbin/modprobe overlayExecStart=/usr/local/bin/containerdType=notifyDelegate=yesKillMode=processRestart=alwaysRestartSec=5LimitNPROC=infinityLimitCORE=infinityLimitNOFILE=infinityTasksMax=infinityOOMScoreAdjust=-999[Install]WantedBy=multi-user.targetEOF2.1.1配置Containerd所需的模块cat <<EOF | sudo tee /etc/modules-load.d/containerd.confoverlaybr_netfilterEOF2.1.2加载模块systemctl restart systemd-modules-load.service2.1.3配置Containerd所需的内核cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.confnet.bridge.bridge-nf-call-iptables = 1net.ipv4.ip_forward = 1net.bridge.bridge-nf-call-ip6tables = 1EOF# 加载内核sysctl --system2.1.4创立Containerd的配置文件mkdir -p /etc/containerdcontainerd config default | tee /etc/containerd/config.toml批改Containerd的配置文件sed -i "s#SystemdCgroup\ \=\ false#SystemdCgroup\ \=\ true#g" /etc/containerd/config.tomlcat /etc/containerd/config.toml | grep SystemdCgroupsed -i "s#k8s.gcr.io#registry.cn-hangzhou.aliyuncs.com/chenby#g" /etc/containerd/config.tomlcat /etc/containerd/config.toml | grep sandbox_image# 找到containerd.runtimes.runc.options,在其下退出SystemdCgroup = true[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options] SystemdCgroup = true [plugins."io.containerd.grpc.v1.cri".cni]# 将sandbox_image默认地址改为合乎版本地址 sandbox_image = "registry.cn-hangzhou.aliyuncs.com/chenby/pause:3.6"2.1.5启动并设置为开机启动systemctl daemon-reloadsystemctl enable --now containerd2.1.6配置crictl客户端连贯的运行时地位wget https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.23.0/crictl-v1.23.0-linux-amd64.tar.gz#解压tar xf crictl-v1.23.0-linux-amd64.tar.gz -C /usr/bin/#生成配置文件cat > /etc/crictl.yaml <<EOFruntime-endpoint: unix:///run/containerd/containerd.sockimage-endpoint: unix:///run/containerd/containerd.socktimeout: 10debug: falseEOF#测试systemctl restart containerdcrictl info2.2.k8s与etcd下载及装置(仅在master01操作)2.2.1解压k8s安装包解压k8s安装文件tar -xf kubernetes-server-linux-amd64.tar.gz --strip-components=3 -C /usr/local/bin kubernetes/server/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy}解压etcd安装文件tar -xf etcd-v3.5.4-linux-amd64.tar.gz --strip-components=1 -C /usr/local/bin etcd-v3.5.4-linux-amd64/etcd{,ctl}# 查看/usr/local/bin下内容ls /usr/local/bin/etcd etcdctl kube-apiserver kube-controller-manager kubectl kubelet kube-proxy kube-scheduler2.2.2查看版本[root@localhost ~]# kubelet --versionKubernetes v1.22.10[root@localhost ~]# etcdctl versionetcdctl version: 3.5.4API version: 3.5[root@localhost ~]# 2.2.3将组件发送至其余k8s节点Master='k8s-master02 k8s-master03'Work='k8s-node01 k8s-node02'for NODE in $Master; do echo $NODE; scp /usr/local/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy} $NODE:/usr/local/bin/; scp /usr/local/bin/etcd* $NODE:/usr/local/bin/; donefor NODE in $Work; do scp /usr/local/bin/kube{let,-proxy} $NODE:/usr/local/bin/ ; done2.3创立证书相干文件mkdir pkicd pkicat > admin-csr.json << EOF { "CN": "admin", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": 
"system:masters", "OU": "Kubernetes-manual" } ]}EOFcat > ca-config.json << EOF { "signing": { "default": { "expiry": "876000h" }, "profiles": { "kubernetes": { "usages": [ "signing", "key encipherment", "server auth", "client auth" ], "expiry": "876000h" } } }}EOFcat > etcd-ca-csr.json << EOF { "CN": "etcd", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "etcd", "OU": "Etcd Security" } ], "ca": { "expiry": "876000h" }}EOFcat > front-proxy-ca-csr.json << EOF { "CN": "kubernetes", "key": { "algo": "rsa", "size": 2048 }, "ca": { "expiry": "876000h" }}EOFcat > kubelet-csr.json << EOF { "CN": "system:node:\$NODE", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "L": "Beijing", "ST": "Beijing", "O": "system:nodes", "OU": "Kubernetes-manual" } ]}EOFcat > manager-csr.json << EOF { "CN": "system:kube-controller-manager", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "system:kube-controller-manager", "OU": "Kubernetes-manual" } ]}EOFcat > apiserver-csr.json << EOF { "CN": "kube-apiserver", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "Kubernetes", "OU": "Kubernetes-manual" } ]}EOFcat > ca-csr.json << EOF { "CN": "kubernetes", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "Kubernetes", "OU": "Kubernetes-manual" } ], "ca": { "expiry": "876000h" }}EOFcat > etcd-csr.json << EOF { "CN": "etcd", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "etcd", "OU": "Etcd Security" } ]}EOFcat > front-proxy-client-csr.json << EOF { "CN": "front-proxy-client", "key": { "algo": "rsa", "size": 2048 }}EOFcat > kube-proxy-csr.json << EOF { "CN": "system:kube-proxy", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "system:kube-proxy", "OU": "Kubernetes-manual" } ]}EOFcat > scheduler-csr.json << EOF { "CN": "system:kube-scheduler", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "system:kube-scheduler", "OU": "Kubernetes-manual" } ]}EOFcd ..mkdir bootstrapcd bootstrapcat > bootstrap.secret.yaml << EOF apiVersion: v1kind: Secretmetadata: name: bootstrap-token-c8ad9c namespace: kube-systemtype: bootstrap.kubernetes.io/tokenstringData: description: "The default bootstrap token generated by 'kubelet '." 
token-id: c8ad9c token-secret: 2e4d610cf3e7426e usage-bootstrap-authentication: "true" usage-bootstrap-signing: "true" auth-extra-groups: system:bootstrappers:default-node-token,system:bootstrappers:worker,system:bootstrappers:ingress ---apiVersion: rbac.authorization.k8s.io/v1kind: ClusterRoleBindingmetadata: name: kubelet-bootstraproleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:node-bootstrappersubjects:- apiGroup: rbac.authorization.k8s.io kind: Group name: system:bootstrappers:default-node-token---apiVersion: rbac.authorization.k8s.io/v1kind: ClusterRoleBindingmetadata: name: node-autoapprove-bootstraproleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:certificates.k8s.io:certificatesigningrequests:nodeclientsubjects:- apiGroup: rbac.authorization.k8s.io kind: Group name: system:bootstrappers:default-node-token---apiVersion: rbac.authorization.k8s.io/v1kind: ClusterRoleBindingmetadata: name: node-autoapprove-certificate-rotationroleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclientsubjects:- apiGroup: rbac.authorization.k8s.io kind: Group name: system:nodes---apiVersion: rbac.authorization.k8s.io/v1kind: ClusterRolemetadata: annotations: rbac.authorization.kubernetes.io/autoupdate: "true" labels: kubernetes.io/bootstrapping: rbac-defaults name: system:kube-apiserver-to-kubeletrules: - apiGroups: - "" resources: - nodes/proxy - nodes/stats - nodes/log - nodes/spec - nodes/metrics verbs: - "*"---apiVersion: rbac.authorization.k8s.io/v1kind: ClusterRoleBindingmetadata: name: system:kube-apiserver namespace: ""roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:kube-apiserver-to-kubeletsubjects: - apiGroup: rbac.authorization.k8s.io kind: User name: kube-apiserverEOFcd ..mkdir corednscd corednscat > coredns.yaml << EOF apiVersion: v1kind: ServiceAccountmetadata: name: coredns namespace: kube-system---apiVersion: rbac.authorization.k8s.io/v1kind: ClusterRolemetadata: labels: kubernetes.io/bootstrapping: rbac-defaults name: system:corednsrules: - apiGroups: - "" resources: - endpoints - services - pods - namespaces verbs: - list - watch - apiGroups: - discovery.k8s.io resources: - endpointslices verbs: - list - watch---apiVersion: rbac.authorization.k8s.io/v1kind: ClusterRoleBindingmetadata: annotations: rbac.authorization.kubernetes.io/autoupdate: "true" labels: kubernetes.io/bootstrapping: rbac-defaults name: system:corednsroleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:corednssubjects:- kind: ServiceAccount name: coredns namespace: kube-system---apiVersion: v1kind: ConfigMapmetadata: name: coredns namespace: kube-systemdata: Corefile: | .:53 { errors health { lameduck 5s } ready kubernetes cluster.local in-addr.arpa ip6.arpa { fallthrough in-addr.arpa ip6.arpa } prometheus :9153 forward . /etc/resolv.conf { max_concurrent 1000 } cache 30 loop reload loadbalance }---apiVersion: apps/v1kind: Deploymentmetadata: name: coredns namespace: kube-system labels: k8s-app: kube-dns kubernetes.io/name: "CoreDNS"spec: # replicas: not specified here: # 1. Default is 1. # 2. Will be tuned in real time if DNS horizontal auto-scaling is turned on. 
strategy: type: RollingUpdate rollingUpdate: maxUnavailable: 1 selector: matchLabels: k8s-app: kube-dns template: metadata: labels: k8s-app: kube-dns spec: priorityClassName: system-cluster-critical serviceAccountName: coredns tolerations: - key: "CriticalAddonsOnly" operator: "Exists" nodeSelector: kubernetes.io/os: linux affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - weight: 100 podAffinityTerm: labelSelector: matchExpressions: - key: k8s-app operator: In values: ["kube-dns"] topologyKey: kubernetes.io/hostname containers: - name: coredns image: registry.cn-beijing.aliyuncs.com/dotbalo/coredns:1.8.6 imagePullPolicy: IfNotPresent resources: limits: memory: 170Mi requests: cpu: 100m memory: 70Mi args: [ "-conf", "/etc/coredns/Corefile" ] volumeMounts: - name: config-volume mountPath: /etc/coredns readOnly: true ports: - containerPort: 53 name: dns protocol: UDP - containerPort: 53 name: dns-tcp protocol: TCP - containerPort: 9153 name: metrics protocol: TCP securityContext: allowPrivilegeEscalation: false capabilities: add: - NET_BIND_SERVICE drop: - all readOnlyRootFilesystem: true livenessProbe: httpGet: path: /health port: 8080 scheme: HTTP initialDelaySeconds: 60 timeoutSeconds: 5 successThreshold: 1 failureThreshold: 5 readinessProbe: httpGet: path: /ready port: 8181 scheme: HTTP dnsPolicy: Default volumes: - name: config-volume configMap: name: coredns items: - key: Corefile path: Corefile---apiVersion: v1kind: Servicemetadata: name: kube-dns namespace: kube-system annotations: prometheus.io/port: "9153" prometheus.io/scrape: "true" labels: k8s-app: kube-dns kubernetes.io/cluster-service: "true" kubernetes.io/name: "CoreDNS"spec: selector: k8s-app: kube-dns clusterIP: 10.96.0.10 ports: - name: dns port: 53 protocol: UDP - name: dns-tcp port: 53 protocol: TCP - name: metrics port: 9153 protocol: TCPEOFcd ..mkdir metrics-servercd metrics-servercat > metrics-server.yaml << EOF apiVersion: v1kind: ServiceAccountmetadata: labels: k8s-app: metrics-server name: metrics-server namespace: kube-system---apiVersion: rbac.authorization.k8s.io/v1kind: ClusterRolemetadata: labels: k8s-app: metrics-server rbac.authorization.k8s.io/aggregate-to-admin: "true" rbac.authorization.k8s.io/aggregate-to-edit: "true" rbac.authorization.k8s.io/aggregate-to-view: "true" name: system:aggregated-metrics-readerrules:- apiGroups: - metrics.k8s.io resources: - pods - nodes verbs: - get - list - watch---apiVersion: rbac.authorization.k8s.io/v1kind: ClusterRolemetadata: labels: k8s-app: metrics-server name: system:metrics-serverrules:- apiGroups: - "" resources: - pods - nodes - nodes/stats - namespaces - configmaps verbs: - get - list - watch---apiVersion: rbac.authorization.k8s.io/v1kind: RoleBindingmetadata: labels: k8s-app: metrics-server name: metrics-server-auth-reader namespace: kube-systemroleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: extension-apiserver-authentication-readersubjects:- kind: ServiceAccount name: metrics-server namespace: kube-system---apiVersion: rbac.authorization.k8s.io/v1kind: ClusterRoleBindingmetadata: labels: k8s-app: metrics-server name: metrics-server:system:auth-delegatorroleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:auth-delegatorsubjects:- kind: ServiceAccount name: metrics-server namespace: kube-system---apiVersion: rbac.authorization.k8s.io/v1kind: ClusterRoleBindingmetadata: labels: k8s-app: metrics-server name: system:metrics-serverroleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: 
system:metrics-serversubjects:- kind: ServiceAccount name: metrics-server namespace: kube-system---apiVersion: v1kind: Servicemetadata: labels: k8s-app: metrics-server name: metrics-server namespace: kube-systemspec: ports: - name: https port: 443 protocol: TCP targetPort: https selector: k8s-app: metrics-server---apiVersion: apps/v1kind: Deploymentmetadata: labels: k8s-app: metrics-server name: metrics-server namespace: kube-systemspec: selector: matchLabels: k8s-app: metrics-server strategy: rollingUpdate: maxUnavailable: 0 template: metadata: labels: k8s-app: metrics-server spec: containers: - args: - --cert-dir=/tmp - --secure-port=4443 - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname - --kubelet-use-node-status-port - --metric-resolution=15s - --kubelet-insecure-tls - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem # change to front-proxy-ca.crt for kubeadm - --requestheader-username-headers=X-Remote-User - --requestheader-group-headers=X-Remote-Group - --requestheader-extra-headers-prefix=X-Remote-Extra- image: registry.cn-beijing.aliyuncs.com/dotbalo/metrics-server:0.5.0 imagePullPolicy: IfNotPresent livenessProbe: failureThreshold: 3 httpGet: path: /livez port: https scheme: HTTPS periodSeconds: 10 name: metrics-server ports: - containerPort: 4443 name: https protocol: TCP readinessProbe: failureThreshold: 3 httpGet: path: /readyz port: https scheme: HTTPS initialDelaySeconds: 20 periodSeconds: 10 resources: requests: cpu: 100m memory: 200Mi securityContext: readOnlyRootFilesystem: true runAsNonRoot: true runAsUser: 1000 volumeMounts: - mountPath: /tmp name: tmp-dir - name: ca-ssl mountPath: /etc/kubernetes/pki nodeSelector: kubernetes.io/os: linux priorityClassName: system-cluster-critical serviceAccountName: metrics-server volumes: - emptyDir: {} name: tmp-dir - name: ca-ssl hostPath: path: /etc/kubernetes/pki---apiVersion: apiregistration.k8s.io/v1kind: APIServicemetadata: labels: k8s-app: metrics-server name: v1beta1.metrics.k8s.iospec: group: metrics.k8s.io groupPriorityMinimum: 100 insecureSkipTLSVerify: true service: name: metrics-server namespace: kube-system version: v1beta1 versionPriority: 100EOF3.相干证书生成master01节点下载证书生成工具wget "https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl_1.6.1_linux_amd64" -O /usr/local/bin/cfsslwget "https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssljson_1.6.1_linux_amd64" -O /usr/local/bin/cfssljsonchmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson3.1.生成etcd证书特地阐明除外,以下操作在所有master节点操作 ...
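Before moving on to the certificates it is easy to lose track of which of the host-preparation steps (sections 1.7 through 2.1) actually took effect on every node. A small check script, added here as a convenience; the checks and expected results are my reading of those sections, not part of the original article:

```bash
#!/usr/bin/env bash
# Re-check the host prerequisites configured in sections 1.7 - 2.1
set -u

echo "== firewalld / SELinux / swap =="
systemctl is-active firewalld        # expected: inactive
getenforce                           # expected: Disabled (or Permissive until reboot)
swapon --show                        # expected: no output

echo "== time sync (chrony) =="
chronyc sources -v

echo "== ipvs / conntrack modules =="
lsmod | grep -e ip_vs -e nf_conntrack

echo "== key sysctl values =="
sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables net.ipv6.conf.all.forwarding

echo "== container runtime =="
systemctl is-active containerd
crictl info >/dev/null 2>&1 && echo "crictl can reach containerd"
```

Running it on each master and worker before generating certificates catches the usual misses (swap still on, br_netfilter not loaded, containerd not enabled) while they are still cheap to fix.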

June 15, 2022 · 27 min · jiezi

关于kubernetes:二进制安装Kubernetesk8s-v1237-IPv4IPv6双栈

二进制装置Kubernetes(k8s) v1.23.7 IPv4/IPv6双栈Kubernetes 开源不易,帮忙点个star,谢谢了 介绍kubernetes二进制装置 后续尽可能第一工夫更新新版本文档 1.23.3 和 1.23.4 和 1.23.5 和 1.23.6 和 1.23.7 和 1.24.0 和1.24.1 文档以及安装包已生成。 我应用IPV6的目标是在公网进行拜访,所以我配置了IPV6动态地址。 若您没有IPV6环境,或者不想应用IPv6,不对主机进行配置IPv6地址即可。 不配置IPV6,不影响后续,不过集群仍旧是反对IPv6的。为前期留有扩大可能性。 我的项目地址:https://github.com/cby-chen/K... 每个初始版本会打上releases,安装包在releases页面 https://github.com/cby-chen/K... (下载更快)我本人的网盘:https://pan.oiox.cn/s/PetV 1.环境主机名称IP地址阐明软件Master0110.0.0.81master节点kube-apiserver、kube-controller-manager、kube-scheduler、etcd、kubelet、kube-proxy、nfs-client、haproxy、keepalivedMaster0210.0.0.82master节点kube-apiserver、kube-controller-manager、kube-scheduler、etcd、kubelet、kube-proxy、nfs-client、haproxy、keepalivedMaster0310.0.0.83master节点kube-apiserver、kube-controller-manager、kube-scheduler、etcd、kubelet、kube-proxy、nfs-client、haproxy、keepalivedNode0110.0.0.84node节点kubelet、kube-proxy、nfs-clientNode0210.0.0.85node节点kubelet、kube-proxy、nfs-client 10.0.0.89VIP 软件版本内核4.18.0-373.el8.x86_64CentOS 8v8 或者 v7kube-apiserver、kube-controller-manager、kube-scheduler、kubelet、kube-proxyv1.23.7etcdv3.5.4containerdv1.6.6cfsslv1.6.1cniv1.1.1crictlv1.23.0haproxyv1.8.27keepalivedv2.1.5网段 物理主机:192.168.1.0/24 service:10.96.0.0/12 pod:172.16.0.0/12 如果有条件倡议k8s集群与etcd集群离开装置 1.1.k8s根底零碎环境配置1.2.配置IPssh root@10.1.1.112 "nmcli con mod ens160 ipv4.addresses 10.0.0.81/8; nmcli con mod ens160 ipv4.gateway 10.0.0.1; nmcli con mod ens160 ipv4.method manual; nmcli con mod ens160 ipv4.dns "8.8.8.8"; nmcli con up ens160"ssh root@10.1.1.102 "nmcli con mod ens160 ipv4.addresses 10.0.0.82/8; nmcli con mod ens160 ipv4.gateway 10.0.0.1; nmcli con mod ens160 ipv4.method manual; nmcli con mod ens160 ipv4.dns "8.8.8.8"; nmcli con up ens160"ssh root@10.1.1.104 "nmcli con mod ens160 ipv4.addresses 10.0.0.83/8; nmcli con mod ens160 ipv4.gateway 10.0.0.1; nmcli con mod ens160 ipv4.method manual; nmcli con mod ens160 ipv4.dns "8.8.8.8"; nmcli con up ens160"ssh root@10.1.1.105 "nmcli con mod ens160 ipv4.addresses 10.0.0.84/8; nmcli con mod ens160 ipv4.gateway 10.0.0.1; nmcli con mod ens160 ipv4.method manual; nmcli con mod ens160 ipv4.dns "8.8.8.8"; nmcli con up ens160"ssh root@10.1.1.106 "nmcli con mod ens160 ipv4.addresses 10.0.0.85/8; nmcli con mod ens160 ipv4.gateway 10.0.0.1; nmcli con mod ens160 ipv4.method manual; nmcli con mod ens160 ipv4.dns "8.8.8.8"; nmcli con up ens160"ssh root@10.0.0.81 "nmcli con mod ens160 ipv6.addresses 2408:8207:78ca:9fa1:181c::10; nmcli con mod ens160 ipv6.gateway fe80::2e2:69ff:fe3f:b198; nmcli con mod ens160 ipv6.method manual; nmcli con mod ens160 ipv6.dns "2001:4860:4860::8888"; nmcli con up ens160"ssh root@10.0.0.82 "nmcli con mod ens160 ipv6.addresses 2408:8207:78ca:9fa1:181c::20; nmcli con mod ens160 ipv6.gateway fe80::2e2:69ff:fe3f:b198; nmcli con mod ens160 ipv6.method manual; nmcli con mod ens160 ipv6.dns "2001:4860:4860::8888"; nmcli con up ens160"ssh root@10.0.0.83 "nmcli con mod ens160 ipv6.addresses 2408:8207:78ca:9fa1:181c::30; nmcli con mod ens160 ipv6.gateway fe80::2e2:69ff:fe3f:b198; nmcli con mod ens160 ipv6.method manual; nmcli con mod ens160 ipv6.dns "2001:4860:4860::8888"; nmcli con up ens160"ssh root@10.0.0.84 "nmcli con mod ens160 ipv6.addresses 2408:8207:78ca:9fa1:181c::40; nmcli con mod ens160 ipv6.gateway fe80::2e2:69ff:fe3f:b198; nmcli con mod ens160 ipv6.method manual; nmcli con mod ens160 ipv6.dns "2001:4860:4860::8888"; nmcli con up ens160"ssh root@10.0.0.85 "nmcli con mod ens160 ipv6.addresses 2408:8207:78ca:9fa1:181c::50; nmcli con mod ens160 ipv6.gateway fe80::2e2:69ff:fe3f:b198; 
nmcli con mod ens160 ipv6.method manual; nmcli con mod ens160 ipv6.dns "2001:4860:4860::8888"; nmcli con up ens160"1.3.设置主机名hostnamectl set-hostname k8s-master01hostnamectl set-hostname k8s-master02hostnamectl set-hostname k8s-master03hostnamectl set-hostname k8s-node01hostnamectl set-hostname k8s-node021.4.配置yum源# 对于 CentOS 7sudo sed -e 's|^mirrorlist=|#mirrorlist=|g' \ -e 's|^#baseurl=http://mirror.centos.org|baseurl=https://mirrors.tuna.tsinghua.edu.cn|g' \ -i.bak \ /etc/yum.repos.d/CentOS-*.repo# 对于 CentOS 8sudo sed -e 's|^mirrorlist=|#mirrorlist=|g' \ -e 's|^#baseurl=http://mirror.centos.org/$contentdir|baseurl=https://mirrors.tuna.tsinghua.edu.cn/centos|g' \ -i.bak \ /etc/yum.repos.d/CentOS-*.reposed -e 's|^mirrorlist=|#mirrorlist=|g' -e 's|^#baseurl=http://mirror.centos.org/\$contentdir|baseurl=http://10.0.0.123/centos|g' -i.bak /etc/yum.repos.d/CentOS-*.repo1.5.装置一些必备工具yum -y install wget jq psmisc vim net-tools nfs-utils telnet yum-utils device-mapper-persistent-data lvm2 git network-scripts tar curl -y1.6.选择性下载须要工具1.下载kubernetes1.23.+的二进制包github二进制包下载地址:https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.23.mdwget https://dl.k8s.io/v1.23.7/kubernetes-server-linux-amd64.tar.gz2.下载etcdctl二进制包github二进制包下载地址:https://github.com/etcd-io/etcd/releaseswget https://github.com/etcd-io/etcd/releases/download/v3.5.4/etcd-v3.5.4-linux-amd64.tar.gz3.containerd二进制包下载github下载地址:https://github.com/containerd/containerd/releasescontainerd下载时下载带cni插件的二进制包。wget https://github.com/containerd/containerd/releases/download/v1.6.6/cri-containerd-cni-1.6.6-linux-amd64.tar.gz4.下载cfssl二进制包github二进制包下载地址:https://github.com/cloudflare/cfssl/releaseswget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl_1.6.1_linux_amd64wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssljson_1.6.1_linux_amd64wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl-certinfo_1.6.1_linux_amd645.cni插件下载github下载地址:https://github.com/containernetworking/plugins/releaseswget https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz6.crictl客户端二进制下载github下载:https://github.com/kubernetes-sigs/cri-tools/releaseswget https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.23.0/crictl-v1.23.0-linux-amd64.tar.gz1.7.敞开防火墙systemctl disable --now firewalld1.8.敞开SELinuxsetenforce 0sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config1.9.敞开替换分区sed -ri 's/.*swap.*/#&/' /etc/fstabswapoff -a && sysctl -w vm.swappiness=0cat /etc/fstab# /dev/mapper/centos-swap swap swap defaults 0 01.10.敞开NetworkManager 并启用 network (lb除外)systemctl disable --now NetworkManagersystemctl start network && systemctl enable network1.11.进行工夫同步服务端yum install chrony -ycat > /etc/chrony.conf << EOF pool ntp.aliyun.com iburstdriftfile /var/lib/chrony/driftmakestep 1.0 3rtcsyncallow 10.0.0.8/8local stratum 10keyfile /etc/chrony.keysleapsectz right/UTClogdir /var/log/chronyEOFsystemctl restart chronydsystemctl enable chronyd客户端yum install chrony -ycat > /etc/chrony.conf << EOF pool 10.0.0.81 iburstdriftfile /var/lib/chrony/driftmakestep 1.0 3rtcsynckeyfile /etc/chrony.keysleapsectz right/UTClogdir /var/log/chronyEOFsystemctl restart chronyd ; systemctl enable chronyd应用客户端进行验证chronyc sources -v1.12.配置ulimitulimit -SHn 65535cat >> /etc/security/limits.conf <<EOF* soft nofile 655360* hard nofile 131072* soft nproc 655350* hard nproc 655350* seft memlock unlimited* hard memlock unlimiteddEOF1.13.配置免密登录yum install -y sshpassssh-keygen -f 
/root/.ssh/id_rsa -P ''export IP="10.0.0.81 10.0.0.82 10.0.0.83 10.0.0.84 10.0.0.85"export SSHPASS=123123for HOST in $IP;do sshpass -e ssh-copy-id -o StrictHostKeyChecking=no $HOSTdone1.14.增加启用源为 RHEL-8或 CentOS-8配置源yum install https://www.elrepo.org/elrepo-release-8.el8.elrepo.noarch.rpm为 RHEL-7 SL-7 或 CentOS-7 装置 ELRepo yum install https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm查看可用安装包yum --disablerepo="*" --enablerepo="elrepo-kernel" list available1.15.降级内核至4.18版本以上装置最新的内核# 我这里抉择的是稳定版kernel-ml 如需更新长期保护版本kernel-lt yum --enablerepo=elrepo-kernel install kernel-ml查看已装置那些内核rpm -qa | grep kernelkernel-core-4.18.0-358.el8.x86_64kernel-tools-4.18.0-358.el8.x86_64kernel-ml-core-5.16.7-1.el8.elrepo.x86_64kernel-ml-5.16.7-1.el8.elrepo.x86_64kernel-modules-4.18.0-358.el8.x86_64kernel-4.18.0-358.el8.x86_64kernel-tools-libs-4.18.0-358.el8.x86_64kernel-ml-modules-5.16.7-1.el8.elrepo.x86_64查看默认内核grubby --default-kernel/boot/vmlinuz-5.16.7-1.el8.elrepo.x86_64若不是最新的应用命令设置grubby --set-default /boot/vmlinuz-「您的内核版本」.x86_64重启失效rebootv8 整合命令为:yum install https://www.elrepo.org/elrepo-release-8.el8.elrepo.noarch.rpm -y ; yum --disablerepo="*" --enablerepo="elrepo-kernel" list available -y ; yum --enablerepo=elrepo-kernel install kernel-ml -y ; grubby --default-kernel ; rebootv7 整合命令为:yum install https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm -y ; yum --disablerepo="*" --enablerepo="elrepo-kernel" list available -y ; yum --enablerepo=elrepo-kernel install kernel-ml -y ; grubby --set-default \$(ls /boot/vmlinuz-* | grep elrepo) ; grubby --default-kernel1.16.装置ipvsadmyum install ipvsadm ipset sysstat conntrack libseccomp -ycat >> /etc/modules-load.d/ipvs.conf <<EOF ip_vsip_vs_rrip_vs_wrrip_vs_shnf_conntrackip_tablesip_setxt_setipt_setipt_rpfilteript_REJECTipipEOFsystemctl restart systemd-modules-load.servicelsmod | grep -e ip_vs -e nf_conntrackip_vs_sh 16384 0ip_vs_wrr 16384 0ip_vs_rr 16384 0ip_vs 180224 6 ip_vs_rr,ip_vs_sh,ip_vs_wrrnf_conntrack 176128 1 ip_vsnf_defrag_ipv6 24576 2 nf_conntrack,ip_vsnf_defrag_ipv4 16384 1 nf_conntracklibcrc32c 16384 3 nf_conntrack,xfs,ip_vs1.17.批改内核参数cat <<EOF > /etc/sysctl.d/k8s.confnet.ipv4.ip_forward = 1net.bridge.bridge-nf-call-iptables = 1fs.may_detach_mounts = 1vm.overcommit_memory=1vm.panic_on_oom=0fs.inotify.max_user_watches=89100fs.file-max=52706963fs.nr_open=52706963net.netfilter.nf_conntrack_max=2310720net.ipv4.tcp_keepalive_time = 600net.ipv4.tcp_keepalive_probes = 3net.ipv4.tcp_keepalive_intvl =15net.ipv4.tcp_max_tw_buckets = 36000net.ipv4.tcp_tw_reuse = 1net.ipv4.tcp_max_orphans = 327680net.ipv4.tcp_orphan_retries = 3net.ipv4.tcp_syncookies = 1net.ipv4.tcp_max_syn_backlog = 16384net.ipv4.ip_conntrack_max = 65536net.ipv4.tcp_max_syn_backlog = 16384net.ipv4.tcp_timestamps = 0net.core.somaxconn = 16384net.ipv6.conf.all.disable_ipv6 = 0net.ipv6.conf.default.disable_ipv6 = 0net.ipv6.conf.lo.disable_ipv6 = 0net.ipv6.conf.all.forwarding = 0EOFsysctl --system1.18.所有节点配置hosts本地解析cat > /etc/hosts <<EOF127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4::1 localhost localhost.localdomain localhost6 localhost6.localdomain62408:8207:78ca:9fa1:181c::10 k8s-master012408:8207:78ca:9fa1:181c::20 k8s-master022408:8207:78ca:9fa1:181c::30 k8s-master032408:8207:78ca:9fa1:181c::40 k8s-node012408:8207:78ca:9fa1:181c::50 k8s-node0210.0.0.81 k8s-master0110.0.0.82 k8s-master0210.0.0.83 k8s-master0310.0.0.84 k8s-node0110.0.0.85 k8s-node0210.0.0.89 lb-vipEOF2.k8s根本组件装置2.1.所有k8s节点装置Containerd作为Runtimewget 
https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz#创立cni插件所需目录mkdir -p /etc/cni/net.d /opt/cni/bin #解压cni二进制包tar xf cni-plugins-linux-amd64-v1.1.1.tgz -C /opt/cni/bin/wget https://github.com/containerd/containerd/releases/download/v1.6.6/cri-containerd-cni-1.6.6-linux-amd64.tar.gz#解压tar -C / -xzf cri-containerd-cni-1.6.6-linux-amd64.tar.gz#创立服务启动文件cat > /etc/systemd/system/containerd.service <<EOF[Unit]Description=containerd container runtimeDocumentation=https://containerd.ioAfter=network.target local-fs.target[Service]ExecStartPre=-/sbin/modprobe overlayExecStart=/usr/local/bin/containerdType=notifyDelegate=yesKillMode=processRestart=alwaysRestartSec=5LimitNPROC=infinityLimitCORE=infinityLimitNOFILE=infinityTasksMax=infinityOOMScoreAdjust=-999[Install]WantedBy=multi-user.targetEOF2.1.1配置Containerd所需的模块cat <<EOF | sudo tee /etc/modules-load.d/containerd.confoverlaybr_netfilterEOF2.1.2加载模块systemctl restart systemd-modules-load.service2.1.3配置Containerd所需的内核cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.confnet.bridge.bridge-nf-call-iptables = 1net.ipv4.ip_forward = 1net.bridge.bridge-nf-call-ip6tables = 1EOF# 加载内核sysctl --system2.1.4创立Containerd的配置文件mkdir -p /etc/containerdcontainerd config default | tee /etc/containerd/config.toml批改Containerd的配置文件sed -i "s#SystemdCgroup\ \=\ false#SystemdCgroup\ \=\ true#g" /etc/containerd/config.tomlcat /etc/containerd/config.toml | grep SystemdCgroupsed -i "s#k8s.gcr.io#registry.cn-hangzhou.aliyuncs.com/chenby#g" /etc/containerd/config.tomlcat /etc/containerd/config.toml | grep sandbox_image# 找到containerd.runtimes.runc.options,在其下退出SystemdCgroup = true[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options] SystemdCgroup = true [plugins."io.containerd.grpc.v1.cri".cni]# 将sandbox_image默认地址改为合乎版本地址 sandbox_image = "registry.cn-hangzhou.aliyuncs.com/chenby/pause:3.6"2.1.5启动并设置为开机启动systemctl daemon-reloadsystemctl enable --now containerd2.1.6配置crictl客户端连贯的运行时地位wget https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.23.0/crictl-v1.23.0-linux-amd64.tar.gz#解压tar xf crictl-v1.23.0-linux-amd64.tar.gz -C /usr/bin/#生成配置文件cat > /etc/crictl.yaml <<EOFruntime-endpoint: unix:///run/containerd/containerd.sockimage-endpoint: unix:///run/containerd/containerd.socktimeout: 10debug: falseEOF#测试systemctl restart containerdcrictl info2.2.k8s与etcd下载及装置(仅在master01操作)2.2.1解压k8s安装包解压k8s安装文件tar -xf kubernetes-server-linux-amd64.tar.gz --strip-components=3 -C /usr/local/bin kubernetes/server/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy}解压etcd安装文件tar -xf etcd-v3.5.4-linux-amd64.tar.gz --strip-components=1 -C /usr/local/bin etcd-v3.5.4-linux-amd64/etcd{,ctl}# 查看/usr/local/bin下内容ls /usr/local/bin/etcd etcdctl kube-apiserver kube-controller-manager kubectl kubelet kube-proxy kube-scheduler2.2.2查看版本[root@k8s-master01 ~]# kubelet --versionKubernetes v1.23.7[root@k8s-master01 ~]# etcdctl versionetcdctl version: 3.5.4API version: 3.5[root@k8s-master01 ~]# 2.2.3将组件发送至其余k8s节点Master='k8s-master02 k8s-master03'Work='k8s-node01 k8s-node02'for NODE in $Master; do echo $NODE; scp /usr/local/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy} $NODE:/usr/local/bin/; scp /usr/local/bin/etcd* $NODE:/usr/local/bin/; donefor NODE in $Work; do scp /usr/local/bin/kube{let,-proxy} $NODE:/usr/local/bin/ ; done2.3创立证书相干文件mkdir pkicd pkicat > admin-csr.json << EOF { "CN": "admin", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": 
"system:masters", "OU": "Kubernetes-manual" } ]}EOFcat > ca-config.json << EOF { "signing": { "default": { "expiry": "876000h" }, "profiles": { "kubernetes": { "usages": [ "signing", "key encipherment", "server auth", "client auth" ], "expiry": "876000h" } } }}EOFcat > etcd-ca-csr.json << EOF { "CN": "etcd", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "etcd", "OU": "Etcd Security" } ], "ca": { "expiry": "876000h" }}EOFcat > front-proxy-ca-csr.json << EOF { "CN": "kubernetes", "key": { "algo": "rsa", "size": 2048 }, "ca": { "expiry": "876000h" }}EOFcat > kubelet-csr.json << EOF { "CN": "system:node:\$NODE", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "L": "Beijing", "ST": "Beijing", "O": "system:nodes", "OU": "Kubernetes-manual" } ]}EOFcat > manager-csr.json << EOF { "CN": "system:kube-controller-manager", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "system:kube-controller-manager", "OU": "Kubernetes-manual" } ]}EOFcat > apiserver-csr.json << EOF { "CN": "kube-apiserver", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "Kubernetes", "OU": "Kubernetes-manual" } ]}EOFcat > ca-csr.json << EOF { "CN": "kubernetes", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "Kubernetes", "OU": "Kubernetes-manual" } ], "ca": { "expiry": "876000h" }}EOFcat > etcd-csr.json << EOF { "CN": "etcd", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "etcd", "OU": "Etcd Security" } ]}EOFcat > front-proxy-client-csr.json << EOF { "CN": "front-proxy-client", "key": { "algo": "rsa", "size": 2048 }}EOFcat > kube-proxy-csr.json << EOF { "CN": "system:kube-proxy", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "system:kube-proxy", "OU": "Kubernetes-manual" } ]}EOFcat > scheduler-csr.json << EOF { "CN": "system:kube-scheduler", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "system:kube-scheduler", "OU": "Kubernetes-manual" } ]}EOFcd ..mkdir bootstrapcd bootstrapcat > bootstrap.secret.yaml << EOF apiVersion: v1kind: Secretmetadata: name: bootstrap-token-c8ad9c namespace: kube-systemtype: bootstrap.kubernetes.io/tokenstringData: description: "The default bootstrap token generated by 'kubelet '." 
token-id: c8ad9c token-secret: 2e4d610cf3e7426e usage-bootstrap-authentication: "true" usage-bootstrap-signing: "true" auth-extra-groups: system:bootstrappers:default-node-token,system:bootstrappers:worker,system:bootstrappers:ingress ---apiVersion: rbac.authorization.k8s.io/v1kind: ClusterRoleBindingmetadata: name: kubelet-bootstraproleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:node-bootstrappersubjects:- apiGroup: rbac.authorization.k8s.io kind: Group name: system:bootstrappers:default-node-token---apiVersion: rbac.authorization.k8s.io/v1kind: ClusterRoleBindingmetadata: name: node-autoapprove-bootstraproleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:certificates.k8s.io:certificatesigningrequests:nodeclientsubjects:- apiGroup: rbac.authorization.k8s.io kind: Group name: system:bootstrappers:default-node-token---apiVersion: rbac.authorization.k8s.io/v1kind: ClusterRoleBindingmetadata: name: node-autoapprove-certificate-rotationroleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclientsubjects:- apiGroup: rbac.authorization.k8s.io kind: Group name: system:nodes---apiVersion: rbac.authorization.k8s.io/v1kind: ClusterRolemetadata: annotations: rbac.authorization.kubernetes.io/autoupdate: "true" labels: kubernetes.io/bootstrapping: rbac-defaults name: system:kube-apiserver-to-kubeletrules: - apiGroups: - "" resources: - nodes/proxy - nodes/stats - nodes/log - nodes/spec - nodes/metrics verbs: - "*"---apiVersion: rbac.authorization.k8s.io/v1kind: ClusterRoleBindingmetadata: name: system:kube-apiserver namespace: ""roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:kube-apiserver-to-kubeletsubjects: - apiGroup: rbac.authorization.k8s.io kind: User name: kube-apiserverEOFcd ..mkdir corednscd corednscat > coredns.yaml << EOF apiVersion: v1kind: ServiceAccountmetadata: name: coredns namespace: kube-system---apiVersion: rbac.authorization.k8s.io/v1kind: ClusterRolemetadata: labels: kubernetes.io/bootstrapping: rbac-defaults name: system:corednsrules: - apiGroups: - "" resources: - endpoints - services - pods - namespaces verbs: - list - watch - apiGroups: - discovery.k8s.io resources: - endpointslices verbs: - list - watch---apiVersion: rbac.authorization.k8s.io/v1kind: ClusterRoleBindingmetadata: annotations: rbac.authorization.kubernetes.io/autoupdate: "true" labels: kubernetes.io/bootstrapping: rbac-defaults name: system:corednsroleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:corednssubjects:- kind: ServiceAccount name: coredns namespace: kube-system---apiVersion: v1kind: ConfigMapmetadata: name: coredns namespace: kube-systemdata: Corefile: | .:53 { errors health { lameduck 5s } ready kubernetes cluster.local in-addr.arpa ip6.arpa { fallthrough in-addr.arpa ip6.arpa } prometheus :9153 forward . /etc/resolv.conf { max_concurrent 1000 } cache 30 loop reload loadbalance }---apiVersion: apps/v1kind: Deploymentmetadata: name: coredns namespace: kube-system labels: k8s-app: kube-dns kubernetes.io/name: "CoreDNS"spec: # replicas: not specified here: # 1. Default is 1. # 2. Will be tuned in real time if DNS horizontal auto-scaling is turned on. 
strategy: type: RollingUpdate rollingUpdate: maxUnavailable: 1 selector: matchLabels: k8s-app: kube-dns template: metadata: labels: k8s-app: kube-dns spec: priorityClassName: system-cluster-critical serviceAccountName: coredns tolerations: - key: "CriticalAddonsOnly" operator: "Exists" nodeSelector: kubernetes.io/os: linux affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - weight: 100 podAffinityTerm: labelSelector: matchExpressions: - key: k8s-app operator: In values: ["kube-dns"] topologyKey: kubernetes.io/hostname containers: - name: coredns image: registry.cn-beijing.aliyuncs.com/dotbalo/coredns:1.8.6 imagePullPolicy: IfNotPresent resources: limits: memory: 170Mi requests: cpu: 100m memory: 70Mi args: [ "-conf", "/etc/coredns/Corefile" ] volumeMounts: - name: config-volume mountPath: /etc/coredns readOnly: true ports: - containerPort: 53 name: dns protocol: UDP - containerPort: 53 name: dns-tcp protocol: TCP - containerPort: 9153 name: metrics protocol: TCP securityContext: allowPrivilegeEscalation: false capabilities: add: - NET_BIND_SERVICE drop: - all readOnlyRootFilesystem: true livenessProbe: httpGet: path: /health port: 8080 scheme: HTTP initialDelaySeconds: 60 timeoutSeconds: 5 successThreshold: 1 failureThreshold: 5 readinessProbe: httpGet: path: /ready port: 8181 scheme: HTTP dnsPolicy: Default volumes: - name: config-volume configMap: name: coredns items: - key: Corefile path: Corefile---apiVersion: v1kind: Servicemetadata: name: kube-dns namespace: kube-system annotations: prometheus.io/port: "9153" prometheus.io/scrape: "true" labels: k8s-app: kube-dns kubernetes.io/cluster-service: "true" kubernetes.io/name: "CoreDNS"spec: selector: k8s-app: kube-dns clusterIP: 10.96.0.10 ports: - name: dns port: 53 protocol: UDP - name: dns-tcp port: 53 protocol: TCP - name: metrics port: 9153 protocol: TCPEOFcd ..mkdir metrics-servercd metrics-servercat > metrics-server.yaml << EOF apiVersion: v1kind: ServiceAccountmetadata: labels: k8s-app: metrics-server name: metrics-server namespace: kube-system---apiVersion: rbac.authorization.k8s.io/v1kind: ClusterRolemetadata: labels: k8s-app: metrics-server rbac.authorization.k8s.io/aggregate-to-admin: "true" rbac.authorization.k8s.io/aggregate-to-edit: "true" rbac.authorization.k8s.io/aggregate-to-view: "true" name: system:aggregated-metrics-readerrules:- apiGroups: - metrics.k8s.io resources: - pods - nodes verbs: - get - list - watch---apiVersion: rbac.authorization.k8s.io/v1kind: ClusterRolemetadata: labels: k8s-app: metrics-server name: system:metrics-serverrules:- apiGroups: - "" resources: - pods - nodes - nodes/stats - namespaces - configmaps verbs: - get - list - watch---apiVersion: rbac.authorization.k8s.io/v1kind: RoleBindingmetadata: labels: k8s-app: metrics-server name: metrics-server-auth-reader namespace: kube-systemroleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: extension-apiserver-authentication-readersubjects:- kind: ServiceAccount name: metrics-server namespace: kube-system---apiVersion: rbac.authorization.k8s.io/v1kind: ClusterRoleBindingmetadata: labels: k8s-app: metrics-server name: metrics-server:system:auth-delegatorroleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:auth-delegatorsubjects:- kind: ServiceAccount name: metrics-server namespace: kube-system---apiVersion: rbac.authorization.k8s.io/v1kind: ClusterRoleBindingmetadata: labels: k8s-app: metrics-server name: system:metrics-serverroleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: 
system:metrics-serversubjects:- kind: ServiceAccount name: metrics-server namespace: kube-system---apiVersion: v1kind: Servicemetadata: labels: k8s-app: metrics-server name: metrics-server namespace: kube-systemspec: ports: - name: https port: 443 protocol: TCP targetPort: https selector: k8s-app: metrics-server---apiVersion: apps/v1kind: Deploymentmetadata: labels: k8s-app: metrics-server name: metrics-server namespace: kube-systemspec: selector: matchLabels: k8s-app: metrics-server strategy: rollingUpdate: maxUnavailable: 0 template: metadata: labels: k8s-app: metrics-server spec: containers: - args: - --cert-dir=/tmp - --secure-port=4443 - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname - --kubelet-use-node-status-port - --metric-resolution=15s - --kubelet-insecure-tls - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem # change to front-proxy-ca.crt for kubeadm - --requestheader-username-headers=X-Remote-User - --requestheader-group-headers=X-Remote-Group - --requestheader-extra-headers-prefix=X-Remote-Extra- image: registry.cn-beijing.aliyuncs.com/dotbalo/metrics-server:0.5.0 imagePullPolicy: IfNotPresent livenessProbe: failureThreshold: 3 httpGet: path: /livez port: https scheme: HTTPS periodSeconds: 10 name: metrics-server ports: - containerPort: 4443 name: https protocol: TCP readinessProbe: failureThreshold: 3 httpGet: path: /readyz port: https scheme: HTTPS initialDelaySeconds: 20 periodSeconds: 10 resources: requests: cpu: 100m memory: 200Mi securityContext: readOnlyRootFilesystem: true runAsNonRoot: true runAsUser: 1000 volumeMounts: - mountPath: /tmp name: tmp-dir - name: ca-ssl mountPath: /etc/kubernetes/pki nodeSelector: kubernetes.io/os: linux priorityClassName: system-cluster-critical serviceAccountName: metrics-server volumes: - emptyDir: {} name: tmp-dir - name: ca-ssl hostPath: path: /etc/kubernetes/pki---apiVersion: apiregistration.k8s.io/v1kind: APIServicemetadata: labels: k8s-app: metrics-server name: v1beta1.metrics.k8s.iospec: group: metrics.k8s.io groupPriorityMinimum: 100 insecureSkipTLSVerify: true service: name: metrics-server namespace: kube-system version: v1beta1 versionPriority: 100EOF3.相干证书生成master01节点下载证书生成工具wget "https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl_1.6.1_linux_amd64" -O /usr/local/bin/cfsslwget "https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssljson_1.6.1_linux_amd64" -O /usr/local/bin/cfssljsonchmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson3.1.生成etcd证书特地阐明除外,以下操作在所有master节点操作 ...

June 15, 2022 · 27 min · jiezi

关于kubernetes:二进制安装Kubernetesk8s-v1241-IPv4IPv6双栈-Ubuntu版本

二进制装置Kubernetes(k8s) v1.24.1 IPv4/IPv6双栈 --- Ubuntu版本Kubernetes 开源不易,帮忙点个star,谢谢了 介绍kubernetes二进制装置 后续尽可能第一工夫更新新版本文档,更新后内容在GitHub。 本文是应用的是Ubuntu作为基底,其余文档请在GitHub上查看。 1.23.3 和 1.23.4 和 1.23.5 和 1.23.6 和 1.24.0 和1.24.1 文档以及安装包已生成。 我应用IPV6的目标是在公网进行拜访,所以我配置了IPV6动态地址。 若您没有IPV6环境,或者不想应用IPv6,不对主机进行配置IPv6地址即可。 不配置IPV6,不影响后续,不过集群仍旧是反对IPv6的。为前期留有扩大可能性。 https://github.com/cby-chen/K... 手动我的项目地址:https://github.com/cby-chen/K... 脚本我的项目地址:https://github.com/cby-chen/B... kubernetes 1.24 变动较大,具体见:https://kubernetes.io/zh/blog... 1.环境主机名称IP地址阐明软件Master01192.168.1.11master节点kube-apiserver、kube-controller-manager、kube-scheduler、etcd、kubelet、kube-proxy、nfs-client、haproxy、keepalivedMaster02192.168.1.12master节点kube-apiserver、kube-controller-manager、kube-scheduler、etcd、kubelet、kube-proxy、nfs-client、haproxy、keepalivedMaster03192.168.1.13master节点kube-apiserver、kube-controller-manager、kube-scheduler、etcd、kubelet、kube-proxy、nfs-client、haproxy、keepalivedNode01192.168.1.14node节点kubelet、kube-proxy、nfs-clientNode02192.168.1.15node节点kubelet、kube-proxy、nfs-client 192.168.1.19VIP 软件版本kernel5.4.0-86Ubuntu2004 及以上kube-apiserver、kube-controller-manager、kube-scheduler、kubelet、kube-proxyv1.24.1etcdv3.5.4containerdv1.5.11cfsslv1.6.1cniv1.1.1crictlv1.24.2haproxyv1.8.27keepalivedv2.1.5网段 物理主机:10.0.0.0/24 service:10.96.0.0/12 pod:172.16.0.0/12 倡议k8s集群与etcd集群离开装置 安装包曾经整顿好:https://github.com/cby-chen/K... 1.1.k8s根底零碎环境配置1.2.配置IProot@hello:~# vim /etc/netplan/00-installer-config.yaml root@hello:~# root@hello:~# cat /etc/netplan/00-installer-config.yaml# This is the network config written by 'subiquity'network: ethernets: ens18: addresses: - 192.168.1.11/24 gateway4: 192.168.1.1 nameservers: addresses: [8.8.8.8] version: 2root@hello:~# root@hello:~# netplan apply root@hello:~# 1.3.设置主机名hostnamectl set-hostname k8s-master01hostnamectl set-hostname k8s-master02hostnamectl set-hostname k8s-master03hostnamectl set-hostname k8s-node01hostnamectl set-hostname k8s-node021.4.配置apt源sudo sed -i 's/archive.ubuntu.com/mirrors.ustc.edu.cn/g' /etc/apt/sources.list1.5.装置一些必备工具apt install wget jq psmisc vim net-tools nfs-kernel-server telnet lvm2 git tar curl -y1.6.选择性下载须要工具1.下载kubernetes1.24.+的二进制包github二进制包下载地址:https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.24.mdwget https://dl.k8s.io/v1.24.1/kubernetes-server-linux-amd64.tar.gz2.下载etcdctl二进制包github二进制包下载地址:https://github.com/etcd-io/etcd/releaseswget https://github.com/etcd-io/etcd/releases/download/v3.5.4/etcd-v3.5.4-linux-amd64.tar.gz3.docker-ce二进制包下载地址二进制包下载地址:https://download.docker.com/linux/static/stable/x86_64/这里须要下载20.10.+版本wget https://download.docker.com/linux/static/stable/x86_64/docker-20.10.14.tgz4.containerd二进制包下载github下载地址:https://github.com/containerd/containerd/releasescontainerd下载时下载带cni插件的二进制包。wget https://github.com/containerd/containerd/releases/download/v1.6.4/cri-containerd-cni-1.6.4-linux-amd64.tar.gz5.下载cfssl二进制包github二进制包下载地址:https://github.com/cloudflare/cfssl/releaseswget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl_1.6.1_linux_amd64wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssljson_1.6.1_linux_amd64wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl-certinfo_1.6.1_linux_amd646.cni插件下载github下载地址:https://github.com/containernetworking/plugins/releaseswget https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz7.crictl客户端二进制下载github下载:https://github.com/kubernetes-sigs/cri-tools/releaseswget 
https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.24.2/crictl-v1.24.2-linux-amd64.tar.gz1.7.敞开防火墙systemctl disable --now ufw1.8.敞开替换分区sed -ri 's/.*swap.*/#&/' /etc/fstabswapoff -a && sysctl -w vm.swappiness=0cat /etc/fstab# /dev/mapper/centos-swap swap swap defaults 0 01.9.进行工夫同步 (lb除外)# 服务端apt install chrony -ycat > /etc/chrony/chrony.conf << EOF pool ntp.aliyun.com iburstdriftfile /var/lib/chrony/driftmakestep 1.0 3rtcsyncallow 192.168.1.0/24local stratum 10keyfile /etc/chrony.keysleapsectz right/UTClogdir /var/log/chronyEOFsystemctl restart chronydsystemctl enable chronyd# 客户端apt install chrony -yvim /etc/chrony/chrony.confcat /etc/chrony/chrony.conf | grep -v "^#" | grep -v "^$"pool 192.168.1.11 iburstdriftfile /var/lib/chrony/driftmakestep 1.0 3rtcsynckeyfile /etc/chrony.keysleapsectz right/UTClogdir /var/log/chronysystemctl restart chronyd ; systemctl enable chronyd# 客户端装置一条命令yum install chrony -y ; sed -i "s#2.centos.pool.ntp.org#192.168.1.11#g" /etc/chrony/chrony.conf ; systemctl restart chronyd ; systemctl enable chronyd#应用客户端进行验证chronyc sources -v1.10.配置ulimitulimit -SHn 65535cat >> /etc/security/limits.conf <<EOF* soft nofile 655360* hard nofile 131072* soft nproc 655350* hard nproc 655350* seft memlock unlimited* hard memlock unlimiteddEOF1.11.配置免密登录apt install -y sshpassssh-keygen -f /root/.ssh/id_rsa -P ''export IP="192.168.1.11 192.168.1.12 192.168.1.13 192.168.1.14 192.168.1.15"export SSHPASS=123123for HOST in $IP;do sshpass -e ssh-copy-id -o StrictHostKeyChecking=no $HOSTdone1.12.装置ipvsadm (lb除外)apt install ipvsadm ipset sysstat conntrack -ycat >> /etc/modules-load.d/ipvs.conf <<EOF ip_vsip_vs_rrip_vs_wrrip_vs_shnf_conntrackip_tablesip_setxt_setipt_setipt_rpfilteript_REJECTipipEOFsystemctl restart systemd-modules-load.servicelsmod | grep -e ip_vs -e nf_conntrackip_vs_sh 16384 0ip_vs_wrr 16384 0ip_vs_rr 16384 0ip_vs 155648 6 ip_vs_rr,ip_vs_sh,ip_vs_wrrnf_conntrack 139264 1 ip_vsnf_defrag_ipv6 24576 2 nf_conntrack,ip_vsnf_defrag_ipv4 16384 1 nf_conntracklibcrc32c 16384 4 nf_conntrack,btrfs,raid456,ip_vs1.13.批改内核参数 (lb除外)cat <<EOF > /etc/sysctl.d/k8s.confnet.ipv4.ip_forward = 1net.bridge.bridge-nf-call-iptables = 1fs.may_detach_mounts = 1vm.overcommit_memory=1vm.panic_on_oom=0fs.inotify.max_user_watches=89100fs.file-max=52706963fs.nr_open=52706963net.netfilter.nf_conntrack_max=2310720net.ipv4.tcp_keepalive_time = 600net.ipv4.tcp_keepalive_probes = 3net.ipv4.tcp_keepalive_intvl =15net.ipv4.tcp_max_tw_buckets = 36000net.ipv4.tcp_tw_reuse = 1net.ipv4.tcp_max_orphans = 327680net.ipv4.tcp_orphan_retries = 3net.ipv4.tcp_syncookies = 1net.ipv4.tcp_max_syn_backlog = 16384net.ipv4.ip_conntrack_max = 65536net.ipv4.tcp_max_syn_backlog = 16384net.ipv4.tcp_timestamps = 0net.core.somaxconn = 16384net.ipv6.conf.all.disable_ipv6 = 0net.ipv6.conf.default.disable_ipv6 = 0net.ipv6.conf.lo.disable_ipv6 = 0net.ipv6.conf.all.forwarding = 0EOFsysctl --system1.14.所有节点配置hosts本地解析cat > /etc/hosts <<EOF127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4::1 localhost localhost.localdomain localhost6 localhost6.localdomain6192.168.1.11 k8s-master01192.168.1.12 k8s-master02192.168.1.13 k8s-master03192.168.1.14 k8s-node01192.168.1.15 k8s-node02192.168.1.19 lb-vipEOF2.k8s根本组件装置2.1.所有k8s节点装置Containerd作为Runtimewget https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz#创立cni插件所需目录mkdir -p /etc/cni/net.d /opt/cni/bin #解压cni二进制包tar xf cni-plugins-linux-amd64-v1.1.1.tgz -C /opt/cni/bin/wget 
https://github.com/containerd/containerd/releases/download/v1.6.4/cri-containerd-cni-1.6.4-linux-amd64.tar.gz#解压tar -C / -xzf cri-containerd-cni-1.6.4-linux-amd64.tar.gz#创立服务启动文件cat > /etc/systemd/system/containerd.service <<EOF[Unit]Description=containerd container runtimeDocumentation=https://containerd.ioAfter=network.target local-fs.target[Service]ExecStartPre=-/sbin/modprobe overlayExecStart=/usr/local/bin/containerdType=notifyDelegate=yesKillMode=processRestart=alwaysRestartSec=5LimitNPROC=infinityLimitCORE=infinityLimitNOFILE=infinityTasksMax=infinityOOMScoreAdjust=-999[Install]WantedBy=multi-user.targetEOF2.1.1配置Containerd所需的模块cat <<EOF | sudo tee /etc/modules-load.d/containerd.confoverlaybr_netfilterEOF2.1.2加载模块systemctl restart systemd-modules-load.service2.1.3配置Containerd所需的内核cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.confnet.bridge.bridge-nf-call-iptables = 1net.ipv4.ip_forward = 1net.bridge.bridge-nf-call-ip6tables = 1EOF# 加载内核sysctl --system2.1.4创立Containerd的配置文件mkdir -p /etc/containerdcontainerd config default | tee /etc/containerd/config.toml批改Containerd的配置文件sed -i "s#SystemdCgroup\ \=\ false#SystemdCgroup\ \=\ true#g" /etc/containerd/config.tomlcat /etc/containerd/config.toml | grep SystemdCgroupsed -i "s#k8s.gcr.io#registry.cn-hangzhou.aliyuncs.com/chenby#g" /etc/containerd/config.tomlcat /etc/containerd/config.toml | grep sandbox_image# 找到containerd.runtimes.runc.options,在其下退出SystemdCgroup = true[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options] SystemdCgroup = true [plugins."io.containerd.grpc.v1.cri".cni]# 将sandbox_image默认地址改为合乎版本地址 sandbox_image = "registry.cn-hangzhou.aliyuncs.com/chenby/pause:3.6"2.1.5启动并设置为开机启动systemctl daemon-reloadsystemctl enable --now containerd2.1.6配置crictl客户端连贯的运行时地位wget https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.24.2/crictl-v1.24.2-linux-amd64.tar.gz#解压tar xf crictl-v1.24.2-linux-amd64.tar.gz -C /usr/bin/#生成配置文件cat > /etc/crictl.yaml <<EOFruntime-endpoint: unix:///run/containerd/containerd.sockimage-endpoint: unix:///run/containerd/containerd.socktimeout: 10debug: falseEOF#测试systemctl restart containerdcrictl info2.2.k8s与etcd下载及装置(仅在master01操作)2.2.1解压k8s安装包# 下载安装包wget https://dl.k8s.io/v1.24.1/kubernetes-server-linux-amd64.tar.gzwget https://github.com/etcd-io/etcd/releases/download/v3.5.4/etcd-v3.5.4-linux-amd64.tar.gz# 解压k8s安装文件cd cbytar -xf kubernetes-server-linux-amd64.tar.gz --strip-components=3 -C /usr/local/bin kubernetes/server/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy}# 解压etcd安装文件tar -xf etcd-v3.5.4-linux-amd64.tar.gz --strip-components=1 -C /usr/local/bin etcd-v3.5.4-linux-amd64/etcd{,ctl}# 查看/usr/local/bin下内容ls /usr/local/bin/etcd etcdctl kube-apiserver kube-controller-manager kubectl kubelet kube-proxy kube-scheduler2.2.2查看版本[root@k8s-master01 ~]# kubelet --versionKubernetes v1.24.1[root@k8s-master01 ~]# etcdctl versionetcdctl version: 3.5.4API version: 3.5[root@k8s-master01 ~]# 2.2.3将组件发送至其余k8s节点Master='k8s-master02 k8s-master03'Work='k8s-node01 k8s-node02'for NODE in $Master; do echo $NODE; scp /usr/local/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy} $NODE:/usr/local/bin/; scp /usr/local/bin/etcd* $NODE:/usr/local/bin/; donefor NODE in $Work; do scp /usr/local/bin/kube{let,-proxy} $NODE:/usr/local/bin/ ; donemkdir -p /opt/cni/bin2.3创立证书相干文件mkdir pkicd pkicat > admin-csr.json << EOF { "CN": "admin", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "system:masters", "OU": 
"Kubernetes-manual" } ]}EOFcat > ca-config.json << EOF { "signing": { "default": { "expiry": "876000h" }, "profiles": { "kubernetes": { "usages": [ "signing", "key encipherment", "server auth", "client auth" ], "expiry": "876000h" } } }}EOFcat > etcd-ca-csr.json << EOF { "CN": "etcd", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "etcd", "OU": "Etcd Security" } ], "ca": { "expiry": "876000h" }}EOFcat > front-proxy-ca-csr.json << EOF { "CN": "kubernetes", "key": { "algo": "rsa", "size": 2048 }, "ca": { "expiry": "876000h" }}EOFcat > kubelet-csr.json << EOF { "CN": "system:node:\$NODE", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "L": "Beijing", "ST": "Beijing", "O": "system:nodes", "OU": "Kubernetes-manual" } ]}EOFcat > manager-csr.json << EOF { "CN": "system:kube-controller-manager", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "system:kube-controller-manager", "OU": "Kubernetes-manual" } ]}EOFcat > apiserver-csr.json << EOF { "CN": "kube-apiserver", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "Kubernetes", "OU": "Kubernetes-manual" } ]}EOFcat > ca-csr.json << EOF { "CN": "kubernetes", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "Kubernetes", "OU": "Kubernetes-manual" } ], "ca": { "expiry": "876000h" }}EOFcat > etcd-csr.json << EOF { "CN": "etcd", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "etcd", "OU": "Etcd Security" } ]}EOFcat > front-proxy-client-csr.json << EOF { "CN": "front-proxy-client", "key": { "algo": "rsa", "size": 2048 }}EOFcat > kube-proxy-csr.json << EOF { "CN": "system:kube-proxy", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "system:kube-proxy", "OU": "Kubernetes-manual" } ]}EOFcat > scheduler-csr.json << EOF { "CN": "system:kube-scheduler", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "system:kube-scheduler", "OU": "Kubernetes-manual" } ]}EOFcd ..mkdir bootstrapcd bootstrapcat > bootstrap.secret.yaml << EOF apiVersion: v1kind: Secretmetadata: name: bootstrap-token-c8ad9c namespace: kube-systemtype: bootstrap.kubernetes.io/tokenstringData: description: "The default bootstrap token generated by 'kubelet '." 
token-id: c8ad9c token-secret: 2e4d610cf3e7426e usage-bootstrap-authentication: "true" usage-bootstrap-signing: "true" auth-extra-groups: system:bootstrappers:default-node-token,system:bootstrappers:worker,system:bootstrappers:ingress ---apiVersion: rbac.authorization.k8s.io/v1kind: ClusterRoleBindingmetadata: name: kubelet-bootstraproleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:node-bootstrappersubjects:- apiGroup: rbac.authorization.k8s.io kind: Group name: system:bootstrappers:default-node-token---apiVersion: rbac.authorization.k8s.io/v1kind: ClusterRoleBindingmetadata: name: node-autoapprove-bootstraproleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:certificates.k8s.io:certificatesigningrequests:nodeclientsubjects:- apiGroup: rbac.authorization.k8s.io kind: Group name: system:bootstrappers:default-node-token---apiVersion: rbac.authorization.k8s.io/v1kind: ClusterRoleBindingmetadata: name: node-autoapprove-certificate-rotationroleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclientsubjects:- apiGroup: rbac.authorization.k8s.io kind: Group name: system:nodes---apiVersion: rbac.authorization.k8s.io/v1kind: ClusterRolemetadata: annotations: rbac.authorization.kubernetes.io/autoupdate: "true" labels: kubernetes.io/bootstrapping: rbac-defaults name: system:kube-apiserver-to-kubeletrules: - apiGroups: - "" resources: - nodes/proxy - nodes/stats - nodes/log - nodes/spec - nodes/metrics verbs: - "*"---apiVersion: rbac.authorization.k8s.io/v1kind: ClusterRoleBindingmetadata: name: system:kube-apiserver namespace: ""roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:kube-apiserver-to-kubeletsubjects: - apiGroup: rbac.authorization.k8s.io kind: User name: kube-apiserverEOFcd ..mkdir corednscd corednscat > coredns.yaml << EOF apiVersion: v1kind: ServiceAccountmetadata: name: coredns namespace: kube-system---apiVersion: rbac.authorization.k8s.io/v1kind: ClusterRolemetadata: labels: kubernetes.io/bootstrapping: rbac-defaults name: system:corednsrules: - apiGroups: - "" resources: - endpoints - services - pods - namespaces verbs: - list - watch - apiGroups: - discovery.k8s.io resources: - endpointslices verbs: - list - watch---apiVersion: rbac.authorization.k8s.io/v1kind: ClusterRoleBindingmetadata: annotations: rbac.authorization.kubernetes.io/autoupdate: "true" labels: kubernetes.io/bootstrapping: rbac-defaults name: system:corednsroleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:corednssubjects:- kind: ServiceAccount name: coredns namespace: kube-system---apiVersion: v1kind: ConfigMapmetadata: name: coredns namespace: kube-systemdata: Corefile: | .:53 { errors health { lameduck 5s } ready kubernetes cluster.local in-addr.arpa ip6.arpa { fallthrough in-addr.arpa ip6.arpa } prometheus :9153 forward . /etc/resolv.conf { max_concurrent 1000 } cache 30 loop reload loadbalance }---apiVersion: apps/v1kind: Deploymentmetadata: name: coredns namespace: kube-system labels: k8s-app: kube-dns kubernetes.io/name: "CoreDNS"spec: # replicas: not specified here: # 1. Default is 1. # 2. Will be tuned in real time if DNS horizontal auto-scaling is turned on. 
strategy: type: RollingUpdate rollingUpdate: maxUnavailable: 1 selector: matchLabels: k8s-app: kube-dns template: metadata: labels: k8s-app: kube-dns spec: priorityClassName: system-cluster-critical serviceAccountName: coredns tolerations: - key: "CriticalAddonsOnly" operator: "Exists" nodeSelector: kubernetes.io/os: linux affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - weight: 100 podAffinityTerm: labelSelector: matchExpressions: - key: k8s-app operator: In values: ["kube-dns"] topologyKey: kubernetes.io/hostname containers: - name: coredns image: registry.cn-beijing.aliyuncs.com/dotbalo/coredns:1.8.6 imagePullPolicy: IfNotPresent resources: limits: memory: 170Mi requests: cpu: 100m memory: 70Mi args: [ "-conf", "/etc/coredns/Corefile" ] volumeMounts: - name: config-volume mountPath: /etc/coredns readOnly: true ports: - containerPort: 53 name: dns protocol: UDP - containerPort: 53 name: dns-tcp protocol: TCP - containerPort: 9153 name: metrics protocol: TCP securityContext: allowPrivilegeEscalation: false capabilities: add: - NET_BIND_SERVICE drop: - all readOnlyRootFilesystem: true livenessProbe: httpGet: path: /health port: 8080 scheme: HTTP initialDelaySeconds: 60 timeoutSeconds: 5 successThreshold: 1 failureThreshold: 5 readinessProbe: httpGet: path: /ready port: 8181 scheme: HTTP dnsPolicy: Default volumes: - name: config-volume configMap: name: coredns items: - key: Corefile path: Corefile---apiVersion: v1kind: Servicemetadata: name: kube-dns namespace: kube-system annotations: prometheus.io/port: "9153" prometheus.io/scrape: "true" labels: k8s-app: kube-dns kubernetes.io/cluster-service: "true" kubernetes.io/name: "CoreDNS"spec: selector: k8s-app: kube-dns clusterIP: 10.96.0.10 ports: - name: dns port: 53 protocol: UDP - name: dns-tcp port: 53 protocol: TCP - name: metrics port: 9153 protocol: TCPEOFcd ..mkdir metrics-servercd metrics-servercat > metrics-server.yaml << EOF apiVersion: v1kind: ServiceAccountmetadata: labels: k8s-app: metrics-server name: metrics-server namespace: kube-system---apiVersion: rbac.authorization.k8s.io/v1kind: ClusterRolemetadata: labels: k8s-app: metrics-server rbac.authorization.k8s.io/aggregate-to-admin: "true" rbac.authorization.k8s.io/aggregate-to-edit: "true" rbac.authorization.k8s.io/aggregate-to-view: "true" name: system:aggregated-metrics-readerrules:- apiGroups: - metrics.k8s.io resources: - pods - nodes verbs: - get - list - watch---apiVersion: rbac.authorization.k8s.io/v1kind: ClusterRolemetadata: labels: k8s-app: metrics-server name: system:metrics-serverrules:- apiGroups: - "" resources: - pods - nodes - nodes/stats - namespaces - configmaps verbs: - get - list - watch---apiVersion: rbac.authorization.k8s.io/v1kind: RoleBindingmetadata: labels: k8s-app: metrics-server name: metrics-server-auth-reader namespace: kube-systemroleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: extension-apiserver-authentication-readersubjects:- kind: ServiceAccount name: metrics-server namespace: kube-system---apiVersion: rbac.authorization.k8s.io/v1kind: ClusterRoleBindingmetadata: labels: k8s-app: metrics-server name: metrics-server:system:auth-delegatorroleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:auth-delegatorsubjects:- kind: ServiceAccount name: metrics-server namespace: kube-system---apiVersion: rbac.authorization.k8s.io/v1kind: ClusterRoleBindingmetadata: labels: k8s-app: metrics-server name: system:metrics-serverroleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: 
system:metrics-serversubjects:- kind: ServiceAccount name: metrics-server namespace: kube-system---apiVersion: v1kind: Servicemetadata: labels: k8s-app: metrics-server name: metrics-server namespace: kube-systemspec: ports: - name: https port: 443 protocol: TCP targetPort: https selector: k8s-app: metrics-server---apiVersion: apps/v1kind: Deploymentmetadata: labels: k8s-app: metrics-server name: metrics-server namespace: kube-systemspec: selector: matchLabels: k8s-app: metrics-server strategy: rollingUpdate: maxUnavailable: 0 template: metadata: labels: k8s-app: metrics-server spec: containers: - args: - --cert-dir=/tmp - --secure-port=4443 - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname - --kubelet-use-node-status-port - --metric-resolution=15s - --kubelet-insecure-tls - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem # change to front-proxy-ca.crt for kubeadm - --requestheader-username-headers=X-Remote-User - --requestheader-group-headers=X-Remote-Group - --requestheader-extra-headers-prefix=X-Remote-Extra- image: registry.cn-beijing.aliyuncs.com/dotbalo/metrics-server:0.5.0 imagePullPolicy: IfNotPresent livenessProbe: failureThreshold: 3 httpGet: path: /livez port: https scheme: HTTPS periodSeconds: 10 name: metrics-server ports: - containerPort: 4443 name: https protocol: TCP readinessProbe: failureThreshold: 3 httpGet: path: /readyz port: https scheme: HTTPS initialDelaySeconds: 20 periodSeconds: 10 resources: requests: cpu: 100m memory: 200Mi securityContext: readOnlyRootFilesystem: true runAsNonRoot: true runAsUser: 1000 volumeMounts: - mountPath: /tmp name: tmp-dir - name: ca-ssl mountPath: /etc/kubernetes/pki nodeSelector: kubernetes.io/os: linux priorityClassName: system-cluster-critical serviceAccountName: metrics-server volumes: - emptyDir: {} name: tmp-dir - name: ca-ssl hostPath: path: /etc/kubernetes/pki---apiVersion: apiregistration.k8s.io/v1kind: APIServicemetadata: labels: k8s-app: metrics-server name: v1beta1.metrics.k8s.iospec: group: metrics.k8s.io groupPriorityMinimum: 100 insecureSkipTLSVerify: true service: name: metrics-server namespace: kube-system version: v1beta1 versionPriority: 100EOF3.相干证书生成master01节点下载证书生成工具wget "https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl_1.6.1_linux_amd64" -O /usr/local/bin/cfsslwget "https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssljson_1.6.1_linux_amd64" -O /usr/local/bin/cfssljsonchmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson3.1.生成etcd证书特地阐明除外,以下操作在所有master节点操作 ...

June 15, 2022 · 26 min · jiezi

关于kubernetes:2022云上开发的新纪元

社交网络上风波再起最近,K8s 圈出名网红 Kelsey Hightower 发的一条推特再度引爆全网。他示意,本地资源的限度和内部依赖的简单,使得近程开发趋势升温。 这推尽管话短,但力量可不小。一下子各路大V上百条跟推,大家疯狂宣泄对本地开发的不满。比方: 独一无二,来自 Temporal 的 KOL @swyx 也发表了一篇文章示意“近程开发是大势所趋”。 为什么各路大V都开始热捧近程开发 (Remote Development)?上面咱们来仔细分析其中原因。 开发者工具的云化之路软件开发周期经常被划分为两个流程: 内循环 (inner loop) 和外循环 (outer loop)。内循环包含编码、测试、构建等。而外循环则涵盖了从代码提交到线上公布的所有步骤。 在过来,内循环阶段的开发者工具根本都是本地的。这是因为用户无法忍受网络的高提早,心愿能疾速失去反馈。 然而,这个边界逐步被突破,越来越多的开发者工具被云化。因为开发者发现他的大部分工夫不是花在写代码上,而是去寻找解决问题的办法上。如果明天一个云服务可能帮忙他更快地解决问题,那么这 100ms 的提早就不再是问题。举几个例子: Github Copilot 通过 AI 算法主动预测提供给用户代码补全提醒。它有多厉害?有了它,开发者甚至连 StackOverflow 都能够不必上!Sourcegraph 提供了搜寻代码的云服务。许多开发者用了它之后都说比本地搜寻还快。Cloud Shell 是各个云厂商提供的蕴含全套开发工具 (如 awscli) 终端环境,帮忙用户疾速上手应用云资源。云上开发的新体验在整个内循环阶段,最简单的莫过于配置开发环境。这是因为开发环境往往有泛滥依赖难以治理,以及根底组件配置极其简单。而且当初考究麻利开发、开源合作等,很多开发者都是第一次接触我的项目,不懂如何配置环境。最初,哪怕用户在本地配置起来开发环境,它跟云上的生产环境差距还是很大,最初上线不免遇到新的问题,导致上线失败。 为了晋升开发者效率,越来越多的公司抉择将开发环境搬到云上: 大公司:像 Google、FB、Etsy、Tesla、Shopify 等巨头为工程师按需在云上拉起开发环境。他们的工程师间接在云上实现编码、调试、构建、测试、公布全流程。中小企业:越来越多的中小企业购买像 Github Codespace、Gitpod、StackBlitz、Okteto 等公司的服务来治理近程开发环境。基于开源自研:像 Nocalhost 这样的我的项目提供了基于容器的云原生近程开发体验。不少企业基于 Nocalhost 搭建近程开发环境治理平台。 提供近程开发环境,让开发者不必操心如何配置环境、可能间接上手开发,能力无效进步开发者效率。这个在许多公司里失去了验证,也是一个逐步被越来越多人认可的趋势。 明天,一个好的云上开发体验应该长这样: One-click deploy:开发环境都是预约好的,能够被一键拉起。用户只须要抉择编程框架和所依赖的后端服务 (如 MySQL、Redis、Prometheus) 就能够拉起开发环境来应用了。Cattle, not pet: Dev environments should be cattle, not pet. 每一个环境都将是可代码化的、可复制的、不可更改的基础设施。On-dema: 通过 Branching 等贴近开发者应用习惯的形式来按需拉起开发环境 (如下图),在分支合并到骨干后主动删除。Integrated experience: 基于 VSCode、Jetbrains 等搭建更贴合开发者应用场景的 IDE,能够分享开发环境、一键为问题代码创立 issue、图形化调配流量到不同环境等。 ...

June 14, 2022 · 1 min · jiezi

关于kubernetes:vivo大规模-Kubernetes-集群自动化运维实践

作者:vivo 互联网服务器团队-Zhang Rong

一、背景
随着vivo业务迁徙到K8s的增长,咱们须要将K8s部署到多个数据中心。如何高效、牢靠地在数据中心治理多个大规模的K8s集群是咱们面临的要害挑战。kubernetes的节点须要对OS、Docker、etcd、K8s、CNI和网络插件进行装置和配置,保护这些依赖关系繁琐又容易出错。
以前集群的部署和扩缩容次要通过ansible编排工作,黑屏化操作、配置集群的inventory和vars执行ansible playbook。集群运维的次要艰难点如下:
- 须要人工黑屏化集群运维操作,存在操作失误和集群配置差别。
- 部署脚本工具没有具体的版本控制,不利于集群的降级和配置变更。
- 部署脚本上线须要破费大量的工夫验证,没有具体的测试用例和CI验证。
- ansible工作没有拆分为模块化装置,应该化整为零。具体到K8s、etcd、addons等角色的模块化治理,能够独自执行ansible工作。
- 次要是通过二进制部署,须要本人保护一套集群管理体系。部署流程繁琐,效率较低。
- 组件的参数治理比拟凌乱,通过命令行指定参数。K8s的组件最多有100以上的参数配置,每个大版本的迭代都在变动。
本文将分享咱们开发的Kubernetes-Operator,采纳K8s的申明式API设计,能够让集群管理员和Kubernetes-Operator的CR资源进行交互,以简化、升高工作风险性。只须要一个集群管理员就能够保护成千上万个K8s节点。

二、集群部署实际
2.1 集群部署介绍
次要基于ansible定义的OS、Docker、etcd、k8s和addons等集群部署工作。
次要流程如下:
- Bootstrap OS
- Preinstall step
- Install Docker
- Install etcd
- Install Kubernetes Master
- Install Kubernetes node
- Configure network plugin
- Install Addons
- Postinstall setup
下面看到的是集群一键部署的要害流程。当在多个数据中心部署完K8s集群后,比方集群组件的安全漏洞、新性能的上线、组件的降级等对线上集群进行变更时,须要小心谨慎地去解决。咱们做到了化整为零,对单个模块去解决,防止全量地去执行ansible脚本,减少保护的难度。针对如Docker、etcd、K8s、network-plugin和addons的模块化治理和运维,需提供独自的ansible脚本入口,更加精密的运维操作,笼罩到集群大部分的生命周期治理。同时kubernetes-operator的api设计的时候能够不便抉择对应操作yml去执行操作。
集群部署优化操作如下:
(1)K8s的组件参数治理通过ComponentConfig[1]提供的API去标识配置文件。
- 【可维护性】当组件参数超过50个以上时配置变得难以治理。
- 【可降级性】对于降级,版本化配置的参数更容易治理,因为社区一个大版本的参数没有变动。
- 【可编程性】能够对组件(JSON/YAML)对象的模板进行修补。如果你启用动静kubelet配置选项,批改参数会主动失效,不须要重启服务。
- 【可配置性】许多类型的配置不能示意为key-value模式。
(2)打算切换到kubeadm部署
- 应用kubeadm对K8s集群的生命周期治理,缩小本身保护集群的老本。
- 应用kubeadm的证书治理,如证书上传到secret里缩小证书在主机拷贝的工夫耗费和从新生成证书性能等。
- 应用kubeadm的kubeconfig生成admin kubeconfig文件。
- kubeadm其它性能如image治理、配置核心upload-config、主动给管制节点打标签和污点等。
- 装置coredns和kube-proxy addons。
(3)ansible应用标准
- 应用ansible自带模块解决部署逻辑。
- 防止应用hostvars。
- 防止应用delegate_to。
- 启用 --limit 模式。
- 等等。
2.2 CI 矩阵测试
部署进去的集群,须要进行大量的场景测试和模仿,保障线上环境变更的可靠性和稳定性。
CI矩阵局部测试案例如下。
(1)语法测试:
- ansible-lint
- shellcheck
- yamllint
- syntax-check
- pep8
(2)集群部署测试:
- 部署集群
- 扩缩容管制节点、计算节点、etcd
- 降级集群
- etcd、Docker、K8s和addons参数变更等
(3)性能和功能测试:
- 查看kube-apiserver是否失常工作
- 查看节点之间网络是否失常
- 查看计算节点是否失常
- K8s e2e测试
- K8s conformance 测试
- 其余测试
这里利用了GitLab、gitlab-runner[2]、ansible和kubevirt[3]等开源软件构建了CI流程。
具体的部署步骤如下:
- 在K8s集群部署gitlab-runner,并对接GitLab仓库。
- 在K8s集群部署Containerized-Data-Importer (CDI)[4]组件,用于创立pvc存储虚拟机的映像文件。
- 在K8s集群部署kubevirt,用于创立虚拟机。
- 在代码仓库编写gitlab-ci.yaml[5],布局集群测试矩阵。
如上图所示,当开发人员在GitLab提交PR时会触发一系列操作。这里次要展现了创立虚拟机和集群部署。其实在咱们的集群还部署了语法检查和性能测试gitlab-runner,通过这些gitlab-runner创立CI的job去执行CI流程。
具体CI流程如下:
- 开发人员提交PR。
- 触发CI主动进行ansible语法查看。
- 执行ansible脚本去创立namespace、pvc和kubevirt的虚拟机模板,最终虚拟机在K8s上运行。这里次要用到ansible的K8s模块[6]去治理这些资源的创立和销毁。
- 调用ansible脚本去部署K8s集群。
- 集群部署完进行性能验证和性能测试等。
- 销毁kubevirt、pvc等资源,即删除虚拟机,开释资源。
如上图所示,当开发人员提交多个PR时,会在K8s集群中创立多个job,每个job都会执行上述的CI测试,相互不会产生影响。这种形式次要应用kubevirt的能力,实现了K8s on K8s的架构。
kubevirt次要能力如下:
- 提供规范的K8s API,通过ansible的K8s模块就能够治理这些资源的生命周期。
- 复用了K8s的调度能力,对资源进行了管控。
- 复用了K8s的网络能力,以namespace隔离,每个集群网络相互不影响。

三、Kubernetes-Operator 实际
3.1 Operator 介绍
Operator是一种用于特定利用的控制器,能够扩大 K8s API的性能,来代表K8s的用户创立、配置和治理简单利用的实例。它基于K8s的资源和控制器概念构建,又涵盖了特定畛域或利用自身的常识,用于实现其所治理的利用生命周期的自动化。
总结 Operator性能如下:
- kubernetes controller
- 部署或者治理一个利用,如数据库、etcd等
- 用户自定义的利用生命周期治理:部署、降级、扩缩容、备份、自我修复等等
3.2 Kubernetes-Operator CR 介绍
kubernetes-operator应用了很多自定义的CR资源和控制器,这里简略地介绍其性能和作用。
【ClusterDeployment】: 管理员配置的惟一的CR,其中MachineSet、Machine和Cluster是它的子资源或者关联资源。ClusterDeployment是所有的配置参数入口,定义了如etcd、K8s、lb、集群版本、网络和addons等所有配置。
【MachineSet】:集群角色的汇合,包含管制节点、计算节点和etcd的配置和执行状态。 ...
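为了更直观地说明 ClusterDeployment 与 MachineSet 之间的关系,下面给出一个用 Go 写的 CRD 类型定义草图。注意:这只是按上文描述推测的示意,字段名(如 ClusterVersion、MachineSets、Role 等)均为假设,并非 vivo 内部的真实定义:

```go
package v1alpha1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// ClusterDeployment 是集群管理员交互的唯一入口 CR,
// 集中声明 etcd、K8s、lb、集群版本、网络和 addons 等配置。
type ClusterDeployment struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   ClusterDeploymentSpec   `json:"spec,omitempty"`
	Status ClusterDeploymentStatus `json:"status,omitempty"`
}

// ClusterDeploymentSpec 中的字段名仅为示意。
type ClusterDeploymentSpec struct {
	ClusterVersion string            `json:"clusterVersion"`          // K8s 集群版本
	EtcdConfig     map[string]string `json:"etcdConfig,omitempty"`    // etcd 相关配置
	LoadBalancer   string            `json:"loadBalancer,omitempty"`  // lb 地址
	NetworkPlugin  string            `json:"networkPlugin,omitempty"` // 网络插件
	Addons         []string          `json:"addons,omitempty"`        // coredns、metrics-server 等
	MachineSets    []MachineSetSpec  `json:"machineSets"`             // 各角色节点集合
}

// MachineSetSpec 描述某一集群角色(控制节点、计算节点或 etcd)的节点集合。
type MachineSetSpec struct {
	Role     string   `json:"role"`     // master / node / etcd
	Replicas int32    `json:"replicas"` // 该角色的节点数量
	Machines []string `json:"machines"` // 节点 IP 或主机名列表
}

// ClusterDeploymentStatus 记录集群和各角色的执行状态。
type ClusterDeploymentStatus struct {
	Phase      string             `json:"phase,omitempty"`
	Conditions []metav1.Condition `json:"conditions,omitempty"`
}
```

有了这样的类型定义,管理员只需提交一份 ClusterDeployment 的 YAML,控制器就可以在调谐(Reconcile)时对比期望状态与实际状态,进而决定执行哪一个模块化的 ansible 任务。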

June 14, 2022 · 1 min · jiezi

关于kubernetes:阿里云K8S组件Cloud-Controller-Manager升级问题排查

前言
最近把阿里云k8s组件Cloud Controller Manager从v2.1.0降级到v2.3.0,发现不是特地顺利,把解决过程记录下来,避免前面再呈现同样的问题。

操作
点击降级,而后发现前置查看报错,如下所示:
而后,在事件核心中也打印:
DryRun: Error syncing load balancer [lb-bp1erkwv3fcdyobqd7x3k]: Message: loadbalancer lb-bp1erkwv3fcdyobqd7x3k listener 80 should be updated, VGroupId rsp-bp1up5x12mwt6 should be changed to rsp-bp1tsakxo59ww;
DryRun: Error syncing load balancer [lb-bp1erkwv3fcdyobqd7x3k]: Message: loadbalancer lb-bp1erkwv3fcdyobqd7x3k listener 443 should be updated, VGroupId rsp-bp1cuciusq2zf should be changed to rsp-bp11d0mmv0cma;
发现跟负载平衡有关系,而后查看SLB,只有把VGroupId rsp-bp1up5x12mwt6 设置到 rsp-bp1tsakxo59ww,把rsp-bp1cuciusq2zf设置到rsp-bp11d0mmv0cma即可,如下所示:
依照事件核心的提醒,咱们只有把80和443对应的虚构服务器组转移一下就好。

转移虚构服务器组
1、点击批改80或443监听配置
2、下一步
3、指定服务器组
4、间断点击下一步,即可实现
这样就实现了,你再点击降级Cloud Controller Manager就没问题了。

总结
1、下面的4个虚构服务器组都是系统生成的
2、降级完k8s之后又变回去了,又要再执行一次,感觉好麻烦,于是我把残余的两个,就是下面图中的第1、2删除,前面再察看有没有问题。

援用

June 13, 2022 · 1 min · jiezi

关于kubernetes:在-Kubernetes-集群上部署-VSCode

在 Kubernetes 集群上部署 VSCodeVisual Studio CodeVisual Studio Code 是一个轻量级但功能强大的源代码编辑器,可在您的桌面上运行,实用于 Windows、macOS 和 Linux。它内置了对 JavaScript、TypeScript 和 Node.js 的反对,并为其余语言(如 C++、C#、Java、Python、PHP、Go)和运行时(如 .NET 和 Unity)提供了丰盛的扩大生态系统. 开发工具来说云端 IDE 也逐步受到大家器重,Visual Studio Code 有官网web版本,因为拜访不太稳固能够借助Code-Server部署在本地环境。 官网地址:https://vscode.dev/ 传统形式装置# 装置curl -fsSL https://code-server.dev/install.sh | sh# 查看配置cat .config/code-server/config.yaml bind-addr: 0.0.0.0:8080auth: passwordpassword: c5d4b8deec690d04e81ef0d5cert: falsedocker形式装置# 启用容器mkdir -p ~/.configdocker run -d --name code-server \-p 8080:8080 \-v "$HOME/.config:/home/coder/.config" \-v "$PWD:/home/coder/project" \-u "$(id -u):$(id -g)" \-e "DOCKER_USER=$USER" \codercom/code-server:latest # 查看明码docker exec -it code-server cat ~/.config/code-server/config.yamlbind-addr: 127.0.0.1:8080auth: passwordpassword: cca029c905426a228d46d3eacert: falsekubernetes形式装置apiVersion: v1kind: Namespacemetadata: name: code-server---apiVersion: v1kind: Servicemetadata: name: code-server namespace: code-serverspec: type: NodePort selector: app: code-server ports: - port: 80 targetPort: 8080---apiVersion: apps/v1kind: Deploymentmetadata: name: code-server namespace: code-server labels: app: code-serverspec: replicas: 3 strategy: rollingUpdate: maxSurge: 3 maxUnavailable: 3 type: RollingUpdate selector: matchLabels: app: code-server template: metadata: labels: app: code-server spec: containers: - name: code-server image: codercom/code-server imagePullPolicy: IfNotPresent env: - name: PASSWORD value: "123123" resources: limits: memory: "512Mi" cpu: "4096m" ports: - containerPort: 8080kubernetes形式验证测试kubectl get svc -n code-server NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGEcode-server NodePort 10.97.52.132 <none> 80:31274/TCP 2d21hcurl -I 192.168.1.61:31274HTTP/1.1 302 FoundLocation: ./loginVary: Accept, Accept-EncodingContent-Type: text/plain; charset=utf-8Content-Length: 29Date: Mon, 13 Jun 2022 01:11:16 GMTConnection: keep-aliveKeep-Alive: timeout=5对于 ...

June 13, 2022 · 1 min · jiezi

关于kubernetes:kubernetes-设置-Master-可调度与不可调度

kubernetes 设置 Master 可调度与不可调度语法kubectl taint node [node] key=value[effect] [effect] 可取值: [ NoSchedule | PreferNoSchedule | NoExecute ] NoSchedule: 肯定不能被调度 PreferNoSchedule: 尽量不要调度 NoExecute: 不仅不会调度, 还会驱赶Node上已有的Pod 勾销污点勾销污点[root@k8s-master01 ~]# kubectl taint node k8s-master node-role.kubernetes.io/master-设置污点# 设置为肯定不能被调度[root@k8s-master01 ~]# kubectl taint node k8s-master01 node-role.kubernetes.io/master="":NoSchedulenode/k8s-master01 tainted[root@k8s-master01 ~]# kubectl taint node k8s-master02 node-role.kubernetes.io/master="":NoSchedulenode/k8s-master02 tainted[root@k8s-master01 ~]# kubectl taint node k8s-master03 node-role.kubernetes.io/master="":NoSchedulenode/k8s-master03 tainted[root@k8s-master01 ~]# # 查看污点[root@k8s-master01 ~]# kubectl describe node | grep TaTaints: node-role.kubernetes.io/master:NoScheduleTaints: node-role.kubernetes.io/master:NoScheduleTaints: node-role.kubernetes.io/master:NoScheduleTaints: <none>Taints: <none>Taints: <none>Taints: <none>Taints: <none>[root@k8s-master01 ~]# 查看验证# 查看曾经调度到maser上的pod没有被驱赶[root@k8s-master01 ~]# kubectl get pod -o wideNAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATESdefault hostname-test-cby-58d85dccdb-7zgjj 1/1 Running 1 (2d1h ago) 19d 172.25.244.195 k8s-master01 <none> <none>default hostname-test-cby-58d85dccdb-8t7zv 1/1 Running 1 (2d1h ago) 19d 172.25.244.196 k8s-master01 <none> <none>default hostname-test-cby-58d85dccdb-9bqsq 1/1 Running 1 (2d1h ago) 19d 172.25.92.74 k8s-master02 <none> <none>default hostname-test-cby-58d85dccdb-jj2ml 1/1 Running 1 (2d1h ago) 19d 172.17.125.3 k8s-node01 <none> <none>default hostname-test-cby-58d85dccdb-k96zl 1/1 Running 1 (2d1h ago) 19d 172.18.195.3 k8s-master03 <none> <none>default hostname-test-cby-58d85dccdb-lng8b 1/1 Running 1 (2d1h ago) 19d 172.29.115.131 k8s-node04 <none> <none>default hostname-test-cby-58d85dccdb-lsrbg 1/1 Running 1 (2d1h ago) 19d 172.25.214.195 k8s-node03 <none> <none>default hostname-test-cby-58d85dccdb-mlv24 1/1 Running 1 (2d1h ago) 19d 172.17.54.131 k8s-node05 <none> <none>default hostname-test-cby-58d85dccdb-p5vc8 1/1 Running 1 (2d1h ago) 19d 172.27.14.195 k8s-node02 <none> <none>default hostname-test-cby-58d85dccdb-z6ptf 1/1 Running 1 (2d1h ago) 19d 172.25.214.196 k8s-node03 <none> <none>[root@k8s-master01 ~]# 设置污点# 设置为不仅不会调度, 还会驱赶Node上已有的Pod[root@k8s-master01 ~]# kubectl taint node k8s-master03 node-role.kubernetes.io/master="":NoExecutenode/k8s-master03 tainted[root@k8s-master01 ~]# kubectl taint node k8s-master02 node-role.kubernetes.io/master="":NoExecutenode/k8s-master02 tainted[root@k8s-master01 ~]# kubectl taint node k8s-master01 node-role.kubernetes.io/master="":NoExecutenode/k8s-master01 tainted# 查看污点[root@k8s-master01 ~]# kubectl describe node | grep TaTaints: node-role.kubernetes.io/master:NoExecuteTaints: node-role.kubernetes.io/master:NoExecuteTaints: node-role.kubernetes.io/master:NoExecuteTaints: <none>Taints: <none>Taints: <none>Taints: <none>Taints: <none>[root@k8s-master01 ~]# 查看验证# 查看曾经调度到master节点的pod已进行驱赶[root@k8s-master01 ~]# kubectl get pod -A -o wideNAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATESdefault mysql-0 2/2 Running 0 34m 172.27.14.206 k8s-node02 <none> <none>default mysql-1 2/2 Running 0 34m 172.17.125.11 k8s-node01 <none> <none>default mysql-2 2/2 Terminating 0 34m 172.18.195.10 k8s-master03 <none> <none>[root@k8s-master01 ~]# 
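除了 kubectl taint 命令行方式,也可以用 client-go 以编程方式给 Master 节点打上 NoSchedule 污点。下面是一个示意程序,kubeconfig 路径和节点名 k8s-master01~03 均按上文环境假设,请按实际情况调整:

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// 读取本地 kubeconfig,路径按实际环境调整
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	taint := corev1.Taint{
		Key:    "node-role.kubernetes.io/master",
		Effect: corev1.TaintEffectNoSchedule, // 对应命令行中的 :NoSchedule
	}

	for _, name := range []string{"k8s-master01", "k8s-master02", "k8s-master03"} {
		node, err := clientset.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			panic(err)
		}

		// 避免重复追加同一个污点
		exists := false
		for _, t := range node.Spec.Taints {
			if t.Key == taint.Key && t.Effect == taint.Effect {
				exists = true
				break
			}
		}
		if exists {
			fmt.Printf("node %s already tainted\n", name)
			continue
		}

		node.Spec.Taints = append(node.Spec.Taints, taint)
		if _, err := clientset.CoreV1().Nodes().Update(context.TODO(), node, metav1.UpdateOptions{}); err != nil {
			panic(err)
		}
		fmt.Printf("node %s tainted\n", name)
	}
}
```

把 Effect 换成 corev1.TaintEffectNoExecute 即可对应上文中驱逐已有 Pod 的场景;取消污点则是从 node.Spec.Taints 中删掉对应项后再调用 Update。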

June 10, 2022 · 2 min · jiezi

关于kubernetes:kubeapiserver-调度器核心实现

随着 k8s 的发展,调度器的实现也在变化,本文将从 1.23 版本源码角度解析 k8s 调度器的核心实现。

调度器总览
整个调度过程由 kubernetes/pkg/scheduler/scheduler.go#L421 的 func (sched *Scheduler) scheduleOne(ctx context.Context) 实现。这个函数有两百多行,可以分为四个部分:
获取待调度 Pod 对象:通过 sched.NextPod() 从优先级队列中获取一个优先级最高的待调度 Pod 资源对象,该过程是阻塞模式的,当优先级队列中不存在任何 Pod 资源对象时,sched.config.NextPod 函数处于等待状态。
调度阶段:通过 sched.Algorithm.Schedule(schedulingCycleCtx, sched.Extenders, fwk, state, pod) 调度函数执行预选调度算法和优选调度算法,为 Pod 资源对象选择一个合适的节点。
抢占阶段:当高优先级的 Pod 资源对象没有找到合适的节点时,调度器会通过 sched.preempt 函数尝试抢占低优先级的 Pod 资源对象的节点。
绑定阶段:当调度器为 Pod 资源对象选择了一个合适的节点时,通过 sched.bind 函数将合适的节点与 Pod 资源对象绑定在一起。

调度过程

进入过滤阶段前的节点数量计算
在初始化调度器的时候,kube-scheduler 会对节点数量进行优化,如下图所示(图中为相关源码路径)。其中红框是调度器的一个性能优化:通过 PercentageOfNodesToScore 机制,在集群节点数量很多的时候,只加载指定百分比的节点,这样在大集群中可以显著优化调度性能;这个百分比数值可以调整,默认为 50,即加载一半的节点。具体的节点数量由一个不复杂的计算过程得出:其中,minFeasibleNodesToFind 为预设的参与预选的最小可用节点数,当前的值为 100。见上图 172 行,当集群节点数量小于该值,或 percentageOfNodesToScore 百分比大于等于 100 时,直接返回所有节点。当大于 100 个节点的时候,使用了一个公式 adaptivePercentage = basePercentageOfNodesToScore - numAllNodes/125,翻译一下就是:自适应百分比数 = 默认百分比数 - 所有节点数/125。见 178 行,默认百分比为 50,假设有 1000 个节点,那么自适应百分比数 = 50 - 1000/125 = 42;180 和 181 行则指定了一个百分比下限 minFeasibleNodesPercentageToFind,当前的值为 5,即前面算出来的百分比如果小于 5,则取下限 5。按照这个机制,参与过滤的节点数 = 1000 × 42% = 420 个。当这个节点数小于 minFeasibleNodesToFind 的时候,则返回 minFeasibleNodesToFind。因此,1000 个节点的集群最终参与预选的是 420 个;同理可以计算,5000 个节点的集群,参与预选的是 5000 × (50 - 5000/125)% = 500 个。可以看到,尽管节点数量从 1000 增加到了 5000,但参与预选的只从 420 增加到了 500。

过滤阶段
通过 PercentageOfNodesToScore 得到参与预选调度的节点数量之后,scheduler 会通过 podInfo := sched.NextPod() 从调度队列中获取 pod 信息;然后进入 Schedule,这是一个定义了 schedule 的接口,k8s 实现了一个 genericScheduler,如果要自定义自己的调度器,实现该接口,然后在 deployment 中指定使用该调度器即可。

type ScheduleAlgorithm interface {
    Schedule(context.Context, []framework.Extender, framework.Framework, *framework.CycleState, *v1.Pod) (scheduleResult ScheduleResult, err error)
}

进入 genericScheduler 后,首先就进入预选阶段 findNodesThatFitPod,或者称为过滤阶段,此阶段会获得过滤之后可用的所有节点,供下一阶段使用,即 feasibleNodes。findNodesThatFitPod 共包含以下四部分:
fwk.RunPreFilterPlugins:运行过滤前的处理插件。RunPreFilterPlugins 负责运行一组框架已配置的 PreFilter 插件。如果任何插件返回除 Success 之外的任何内容,它将设置返回的 *Status::code 为 non-success,则调度周期终止。
g.evaluateNominatedNode:将某个节点单独执行过滤。如果 Pod 指定了在某个 Node 上运行,这个节点很可能是唯一适合 Pod 的候选节点,那么会在过滤所有节点之前先检查该 Node,具体条件为:len(pod.Status.NominatedNodeName) > 0 && feature.DefaultFeatureGate.Enabled(features.PreferNominatedNode),这个机制也叫“提名节点”。
g.findNodesThatPassFilters:将所有节点进行预选过滤。这个函数会创建一个可用 node 的列表 feasibleNodes := make([]*v1.Node, numNodesToFind),然后通过 checkNode 遍历 node,检查 node 是否符合运行 Pod 的条件,即运行所有的预选调度算法(如下所示),如果符合则加入 feasibleNodes 列表。

for _, pl := range f.filterPlugins {
    pluginStatus := f.runFilterPlugin(ctx, pl, state, pod, nodeInfo)
    if !pluginStatus.IsSuccess() {
        if !pluginStatus.IsUnschedulable() {
            // Filter plugins are not supposed to return any status other than
            // Success or Unschedulable.
            errStatus := framework.AsStatus(fmt.Errorf("running %q filter plugin: %w", pl.Name(), pluginStatus.AsError())).WithFailedPlugin(pl.Name())
            return map[string]*framework.Status{pl.Name(): errStatus}
        }
        pluginStatus.SetFailedPlugin(pl.Name())
        statuses[pl.Name()] = pluginStatus
        if !f.runAllFilters {
            // Exit early if we don't need to run all filters.
            return statuses
        }
    }
}

findNodesThatPassExtenders:将上一步通过预选的 Node 再通过扩展过滤器过滤一遍。这个其实是 k8s 留给用户的自定义过滤器。它遍历所有的 extender 来确定是否关心对应的资源,如果关心就会调用 Filter 接口进行远程调用 feasibleList, failedMap, failedAndUnresolvableMap, err := extender.Filter(pod, feasibleNodes),并将筛选结果传递给下一个 extender,逐步缩小筛选集合。远程调用是一个 http 的实现,如下图。
至此,预选阶段结束。整个预选过程逻辑上很自然:预处理 -> 过滤 -> 用户自定义过滤 -> 结束。在预处理阶段(PreFilterPlugin),官方主要定义了:
InterPodAffinity: 实现 Pod 之间的亲和性和反亲和性,InterPodAffinity 实现了 PreFilterExtensions,因为抢占调度的 Pod 可能与当前的 Pod 具有亲和性或者反亲和性;
NodePorts: 检查 Pod 请求的端口在 Node 是否可用,NodePorts 未实现 PreFilterExtensions;
NodeResourcesFit: 检查 Node 是否拥有 Pod 请求的所有资源,NodeResourcesFit 未实现 PreFilterExtensions;
PodTopologySpread: 实现 Pod 拓扑分布;
ServiceAffinity: 检查属于某个服务(Service)的 Pod 与配置的标签所定义的 Node 集合是否适配,这个插件还支持将属于某个服务的 Pod 分散到各个 Node,ServiceAffinity 实现了 PreFilterExtensions 接口;
VolumeBinding: 检查 Node 是否有请求的卷,是否可以绑定请求的卷,VolumeBinding 未实现 PreFilterExtensions 接口;
过滤插件在早期版本叫做预选算法,但较新的版本已经删除了 /pkg/scheduler/algorithm 这个包,因为用“过滤”更贴切一点。在这个目录下可以找到所有的插件实现,基本上通过名字就知道是做什么的,不赘述,如:
InterPodAffinity: 实现 Pod 之间的亲和性和反亲和性;
NodeAffinity: 实现了 Node 选择器和节点亲和性;
NodeLabel: 根据配置的标签过滤 Node;
NodeName: 检查 Pod 指定的 Node 名称与当前 Node 是否匹配;
NodePorts: 检查 Pod 请求的端口在 Node 是否可用;
... ...
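percentageOfNodesToScore 这个比例也可以通过调度器配置文件显式调整。下面是一个示意性的 KubeSchedulerConfiguration 片段(非原文内容,apiVersion 与 kubeconfig 路径为假设,请按集群实际版本调整):
apiVersion: kubescheduler.config.k8s.io/v1beta2
kind: KubeSchedulerConfiguration
# 将参与打分的节点比例调整为 30;设置为 0 表示使用默认的自适应计算
percentageOfNodesToScore: 30
clientConnection:
  kubeconfig: /etc/kubernetes/scheduler.conf   # 假设的 kubeconfig 路径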

June 9, 2022 · 2 min · jiezi

关于kubernetes:使用-KubeKey-搭建-KubernetesKubeSphere-环境的心路累历程

今天要干嘛?今天我要给 KubeKey 挑个刺!

身为一个 KubeSphere Community Member,至今为止我竟然没有用过 KubeKey,是不是很过分?说出去都感觉没脸在 KubeSphere 社区立足啊!

想当年开始玩 KubeSphere 时,每走一步我都感觉“不和谐”。虽说 KubeSphere 早已经有了足够的知名度和大量的企业用户,但是我总能挑出“刺”,天天给 KubeSphere 社区提意见建议……没错,最终他们“受不了”了,决定邀请我加入 KubeSphere 社区,成为一名光荣的 Member!

现在我自己也搞开源社区了。自从开始管理 DevStream 开源社区后,我基本就没有精力参与 KubeSphere 社区了。哎,一颗躁动的心,一双不安分的手!我得做点啥,但是开发我是没精力参与了,要不,发挥一下我的“臭毛病”:晚期的强迫症和极致的细节洞察力,去挑挑刺吧!

没错!我决定试用一下 KubeKey!一方面把这些“刺”反馈给 KubeSphere 社区,帮助他们进一步完善 KubeKey 的使用体验;另一方面在这个过程中熟悉 KubeKey 的用法,看下能不能找到 DevStream 和 KubeSphere 协作的点,比如用 DevStream 简化 KubeSphere 的安装部署和配置过程。

在哪里干?KubeSphere 社区给了我一个开发机,一台 Linux vm,酷!我就在这个 Linux vm 上“干”它吧!

从哪里开始干?这还不简单,README 文档呀!

快速开干!咱们在文档里找 Quick Start,没错,有,大致长这样:(此处原文为 Quick Start 的代码/截图,未能正常渲染)

开炮!看到这个日志,是不是看着特别像“no errors, no warnings”,一派祥和,歌舞升平,马上可以用 kubectl 命令看下崭新的 Kubernetes 集群了!(不要和我杠单节点 k8s 环境是不是集群,官方称之为“单节点集群”) ...
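原文未渲染出来的 Quick Start,大致对应 KubeKey README 中的单节点安装命令。下面是一个示意写法(非原文内容,下载地址与版本号按官方 README 的常见形式假设,请以官方文档为准):
# 下载 kk 可执行文件(版本号仅为示例)
curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.1 sh -
chmod +x kk
# 创建单节点集群,同时安装 Kubernetes 与 KubeSphere(版本号仅为示例)
./kk create cluster --with-kubernetes v1.23.7 --with-kubesphere v3.3.0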

June 9, 2022 · 2 min · jiezi

关于kubernetes:kubernetes-启用-PHP-Nginx-网页环境

kubernetes 启用 PHP + Nginx 网页环境传统装置形式进行装置步骤较多,应用kubernetes能够实现疾速启用环境,在测试或者线上都能够做到疾速 启用 编写 yaml 文件[root@k8s-master01 ~]# vim PHP-Nginx-Deployment-ConfMap-Service.yaml[root@k8s-master01 ~]# cat PHP-Nginx-Deployment-ConfMap-Service.yamlkind: Service # 对象类型apiVersion: v1 # api 版本metadata: # 元数据 name: php-fpm-nginx #Service 服务名spec: type: NodePort # 类型为nodeport selector: #标签选择器 app: php-fpm-nginx ports: #端口信息 - port: 80 # 容器端口80 protocol: TCP #tcp类型 targetPort: 80 # Service 将 nginx 容器的 80 端口裸露进去---kind: ConfigMap # 对象类型apiVersion: v1 # api 版本metadata: # 元数据 name: nginx-config # 对象名称data: # key-value 数据汇合 nginx.conf: | # 将 nginx config 配置写入 ConfigMap 中,经典的 php-fpm 代理设置,这里就不再多说了 user nginx; worker_processes auto; error_log /var/log/nginx/error.log notice; pid /var/run/nginx.pid; events { worker_connections 1024; } http { include /etc/nginx/mime.types; default_type application/octet-stream; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log /var/log/nginx/access.log main; sendfile on; keepalive_timeout 65; server { listen 80 default_server; listen [::]:80 default_server; root /var/www/html; index index.php; server_name _; if (-f $request_filename/index.html) { rewrite (.*) $1/index.html break; } if (-f $request_filename/index.php) { rewrite (.*) $1/index.php; } if (!-f $request_filename) { rewrite (.*) /index.php; } location / { try_files $uri $uri/ =404; } location ~ \.php$ { include fastcgi_params; fastcgi_param REQUEST_METHOD $request_method; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_pass 127.0.0.1:9000; } } include /etc/nginx/conf.d/*.conf; }---kind: Deployment # 对象类型apiVersion: apps/v1 # api 版本metadata: # 元数据 name: php-fpm-nginx # Deployment 对象名称spec: # Deployment 对象规约 selector: # 选择器 matchLabels: # 标签匹配 app: php-fpm-nginx replicas: 3 # 正本数量 template: # 模版 metadata: # Pod 对象的元数据 labels: # Pod 对象的标签 app: php-fpm-nginx spec: # Pod 对象规约 containers: # 这里设置了两个容器 - name: php-fpm # 第一个容器名称 image: php:7.4.29-fpm # 容器镜像 imagePullPolicy: IfNotPresent #镜像拉取策略 livenessProbe: # 存活探测 initialDelaySeconds: 5 # 容器启动后要期待多少秒后才启动存活和就绪探测器 periodSeconds: 10 # 每多少秒执行一次存活探测 tcpSocket: # 监测tcp端口 port: 9000 #监测端口 readinessProbe: # 就绪探测 initialDelaySeconds: 5 # 容器启动后要期待多少秒后才启动存活和就绪探测器 periodSeconds: 10 # 每多少秒执行一次存活探测 tcpSocket: # 监测tcp端口 port: 9000 #监测端口 resources: # 资源束缚 requests: # 最小限度 memory: "64Mi" # 内存最新64M cpu: "250m" # CPU最大应用0.25核 limits: # 最大限度 memory: "128Mi" # 内存最新128M cpu: "500m" # CPU最大应用0.5核 ports: - containerPort: 9000 # php-fpm 端口 volumeMounts: # 挂载数据卷 - mountPath: /var/www/html # 挂载两个容器共享的 volume name: nginx-www lifecycle: # 生命周期 postStart: # 当容器处于 postStart 阶段时,执行一下命令 exec: command: ["/bin/sh", "-c", "echo startup..."] # 将 /app/index.php 复制到挂载的 volume preStop: exec: command: - sh - '-c' - sleep 5 && kill -SIGQUIT 1 # 优雅退出 - name: nginx # 第二个容器名称 image: nginx # 容器镜像 imagePullPolicy: IfNotPresent livenessProbe: # 存活探测 initialDelaySeconds: 5 # 容器启动后要期待多少秒后才启动存活和就绪探测器 periodSeconds: 10 # 每多少秒执行一次存活探测 httpGet: # 以httpGet形式进行探测 path: / # 探测门路 port: 80 # 探测端口 readinessProbe: # 就绪探测 initialDelaySeconds: 5 # 容器启动后要期待多少秒后才启动存活和就绪探测器 periodSeconds: 10 # 每多少秒执行一次存活探测 httpGet: # 以httpGet形式进行探测 path: / # 探测门路 port: 80 # 探测端口 resources: # 资源束缚 requests: # 最小限度 memory: "64Mi" # 内存最新64M cpu: "250m" # CPU最大应用0.25核 limits: # 最大限度 memory: "128Mi" # 内存最新128M cpu: "500m" # CPU最大应用0.5核 ports: - containerPort: 80 # nginx 端口 volumeMounts: # nginx 容器挂载了两个 volume,一个是与 php-fpm 
容器共享的 volume,另外一个是配置了 nginx.conf 的 volume - mountPath: /var/www/html # 挂载两个容器共享的 volume name: nginx-www - mountPath: /etc/nginx/nginx.conf # 挂载配置了 nginx.conf 的 volume subPath: nginx.conf name: nginx-config lifecycle: preStop: exec: command: - sh - '-c' - sleep 5 && /usr/sbin/nginx -s quit # 优雅退出 volumes: - name: nginx-www # 网站文件通过nfs挂载 nfs: path: /html/ server: 192.168.1.123 - name: nginx-config configMap: # configMap name: nginx-config部署网站# 下载网站代码wget https://typecho.org/downloads/1.1-17.10.30-release.tar.gz# 解压源码包tar xvf 1.1-17.10.30-release.tar.gz#挪动到当前目录下mv build/* .#设置权限chmod 777 -R *创立资源kubectl apply -f PHP-Nginx-Deployment-ConfMap-Service.yaml测试环境kubectl get pod -l app=php-fpm-nginxNAME READY STATUS RESTARTS AGEphp-fpm-nginx-8b4bfb457-24bpd 2/2 Running 1 (6m34s ago) 16mphp-fpm-nginx-8b4bfb457-fvqd6 2/2 Running 2 (5m39s ago) 16mphp-fpm-nginx-8b4bfb457-kmzsc 2/2 Running 1 (6m34s ago) 16mkubectl get configmaps | grep nginxNAME DATA AGEnginx-config 1 17mkubectl get svc | grep nginxphp-fpm-nginx NodePort 10.98.66.104 <none> 80:31937/TCP 16m ...
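除了 NodePort,也可以为站点配置 Ingress,通过域名访问。下面是一个示意清单(非原文内容,域名与 IngressClass 均为假设,Service 名称沿用上文的 php-fpm-nginx):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: php-fpm-nginx
spec:
  ingressClassName: nginx          # 假设集群中已部署 ingress-nginx
  rules:
  - host: blog.example.com         # 假设的域名
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: php-fpm-nginx
            port:
              number: 80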

June 9, 2022 · 3 min · jiezi

关于kubernetes:Kubernetes-部署-MySQL-高可用读写分离

Kubernetes 部署 MySQL 集群简介: 在有状态利用中,MySQL是咱们最常见也是最罕用的。本文咱们将实战部署一个一组多从的MySQL集群。 一、配置筹备configMapcat > mysql-configmap.yaml << EOF apiVersion: v1kind: ConfigMapmetadata: name: mysql labels: app: mysqldata: master.cnf: | # Apply this config only on the master. [mysqld] log-bin slave.cnf: | # Apply this config only on slaves. [mysqld] super-read-onlyEOFconfigMap能够将配置文件和镜像解耦开。下面的配置意思是,创立一个master.cnf文件配置内容为:log-bin,即开启bin-log日志,供主节点应用。创立一个slave.cnf文件配置内容为:super-read-only,设为该节点只读,供备用节点应用。 servicecat > mysql-services.yaml << EOF apiVersion: v1kind: Servicemetadata: name: mysql labels: app: mysqlspec: ports: - name: mysql port: 3306 clusterIP: None selector: app: mysql---# Client service for connecting to any MySQL instance for reads.# For writes, you must instead connect to the master: mysql-0.mysql.apiVersion: v1kind: Servicemetadata: name: mysql-read labels: app: mysqlspec: ports: - name: mysql port: 3306 selector: app: mysqlEOFStatefulSetapiVersion: apps/v1kind: StatefulSetmetadata: name: mysqlspec: selector: matchLabels: app: mysql serviceName: mysql replicas: 3 template: metadata: labels: app: mysql spec: # 设置初始化容器,进行一些筹备工作 initContainers: - name: init-mysql image: mysql:5.7 # 为每个MySQL节点配置service-id # 如果节点序号是0,则应用master的配置, 其余节点应用slave的配置 command: - bash - "-c" - | set -ex # 基于 Pod 序号生成 MySQL 服务器的 ID。 [[ `hostname` =~ -([0-9]+)$ ]] || exit 1 ordinal=${BASH_REMATCH[1]} echo [mysqld] > /mnt/conf.d/server-id.cnf # 增加偏移量以防止应用 server-id=0 这一保留值。 echo server-id=$((100 + $ordinal)) >> /mnt/conf.d/server-id.cnf # Copy appropriate conf.d files from config-map to emptyDir. # 将适合的 conf.d 文件从 config-map 复制到 emptyDir。 if [[ $ordinal -eq 0 ]]; then cp /mnt/config-map/master.cnf /mnt/conf.d/ else cp /mnt/config-map/slave.cnf /mnt/conf.d/ fi volumeMounts: - name: conf mountPath: /mnt/conf.d - name: config-map mountPath: /mnt/config-map - name: clone-mysql image: registry.cn-hangzhou.aliyuncs.com/chenby/xtrabackup:1.0 # 为除了节点序号为0的主节点外的其它节点,备份前一个节点的数据 command: - bash - "-c" - | set -ex # 如果已有数据,则跳过克隆。 [[ -d /var/lib/mysql/mysql ]] && exit 0 # 跳过主实例(序号索引 0)的克隆。 [[ `hostname` =~ -([0-9]+)$ ]] || exit 1 ordinal=${BASH_REMATCH[1]} [[ $ordinal -eq 0 ]] && exit 0 # 从原来的对等节点克隆数据。 ncat --recv-only mysql-$(($ordinal-1)).mysql 3307 | xbstream -x -C /var/lib/mysql # 筹备备份。 xtrabackup --prepare --target-dir=/var/lib/mysql volumeMounts: - name: data mountPath: /var/lib/mysql subPath: mysql - name: conf mountPath: /etc/mysql/conf.d containers: - name: mysql image: mysql:5.7 # 设置反对免密登录 env: - name: MYSQL_ALLOW_EMPTY_PASSWORD value: "1" ports: - name: mysql containerPort: 3306 volumeMounts: - name: data mountPath: /var/lib/mysql subPath: mysql - name: conf mountPath: /etc/mysql/conf.d resources: # 设置启动pod须要的资源,官网文档上须要500m cpu,1Gi memory。 # 我本地测试的时候,会因为资源有余,报1 Insufficient cpu, 1 Insufficient memory谬误,所以我改小了点 requests: # m是千分之一的意思,100m示意须要0.1个cpu cpu: 1024m # Mi是兆的意思,须要100M 内存 memory: 1Gi livenessProbe: # 应用mysqladmin ping命令,对MySQL节点进行探活检测 # 在节点部署完30秒后开始,每10秒检测一次,超时工夫为5秒 exec: command: ["mysqladmin", "ping"] initialDelaySeconds: 30 periodSeconds: 10 timeoutSeconds: 5 readinessProbe: # 对节点服务可用性进行检测, 启动5秒后开始,每2秒检测一次,超时工夫1秒 exec: # 查看咱们是否能够通过 TCP 执行查问(skip-networking 是敞开的)。 command: ["mysql", "-h", "127.0.0.1", "-e", "SELECT 1"] initialDelaySeconds: 5 periodSeconds: 2 timeoutSeconds: 1 - name: xtrabackup image: registry.cn-hangzhou.aliyuncs.com/chenby/xtrabackup:1.0 ports: - name: xtrabackup containerPort: 3307 # 开始进行备份文件校验、解析和开始同步 command: - bash - "-c" - | set -ex cd /var/lib/mysql # 确定克隆数据的 binlog 地位(如果有的话)。 if [[ -f 
xtrabackup_slave_info && "x$(<xtrabackup_slave_info)" != "x" ]]; then # XtraBackup 曾经生成了局部的 “CHANGE MASTER TO” 查问 # 因为咱们从一个现有正本进行克隆。(须要删除开端的分号!) cat xtrabackup_slave_info | sed -E 's/;$//g' > change_master_to.sql.in # 在这里要疏忽 xtrabackup_binlog_info (它是没用的)。 rm -f xtrabackup_slave_info xtrabackup_binlog_info elif [[ -f xtrabackup_binlog_info ]]; then # 咱们间接从主实例进行克隆。解析 binlog 地位。 [[ `cat xtrabackup_binlog_info` =~ ^(.*?)[[:space:]]+(.*?)$ ]] || exit 1 rm -f xtrabackup_binlog_info xtrabackup_slave_info echo "CHANGE MASTER TO MASTER_LOG_FILE='${BASH_REMATCH[1]}',\ MASTER_LOG_POS=${BASH_REMATCH[2]}" > change_master_to.sql.in fi # 查看咱们是否须要通过启动复制来实现克隆。 if [[ -f change_master_to.sql.in ]]; then echo "Waiting for mysqld to be ready (accepting connections)" until mysql -h 127.0.0.1 -e "SELECT 1"; do sleep 1; done echo "Initializing replication from clone position" mysql -h 127.0.0.1 \ -e "$(<change_master_to.sql.in), \ MASTER_HOST='mysql-0.mysql', \ MASTER_USER='root', \ MASTER_PASSWORD='', \ MASTER_CONNECT_RETRY=10; \ START SLAVE;" || exit 1 # 如果容器重新启动,最多尝试一次。 mv change_master_to.sql.in change_master_to.sql.orig fi # 当对等点申请时,启动服务器发送备份。 exec ncat --listen --keep-open --send-only --max-conns=1 3307 -c \ "xtrabackup --backup --slave-info --stream=xbstream --host=127.0.0.1 --user=root" volumeMounts: - name: data mountPath: /var/lib/mysql subPath: mysql - name: conf mountPath: /etc/mysql/conf.d resources: requests: cpu: 100m memory: 100Mi volumes: - name: conf emptyDir: {} - name: config-map configMap: name: mysql # 设置PVC volumeClaimTemplates: - metadata: name: data annotations: # 配置PVC应用nfs动静供应 volume.beta.kubernetes.io/storage-class: nfs-storage spec: accessModes: ["ReadWriteOnce"] resources: requests: storage: 1Gi二、创立所需资源# 创立configMapkubectl apply -f mysql-configmap.yaml # 创立servicekubectl apply -f mysql-services.yaml # 创立statefulSetkubectl apply -f mysql-statefulset.yaml# 查看创立过程kubectl get pods --watchmysql-0 0/2 Pending 0 0smysql-0 0/2 Pending 0 0smysql-0 0/2 Init:0/2 0 0smysql-0 0/2 Init:0/2 0 1smysql-0 0/2 Init:1/2 0 2smysql-0 0/2 PodInitializing 0 3smysql-0 1/2 Running 0 4smysql-0 2/2 Running 0 8smysql-1 0/2 Pending 0 0smysql-1 0/2 Pending 0 0smysql-1 0/2 Init:0/2 0 0smysql-1 0/2 Init:0/2 0 1smysql-1 0/2 Init:1/2 0 1smysql-1 0/2 PodInitializing 0 2smysql-1 1/2 Running 0 3smysql-1 2/2 Running 0 8smysql-2 0/2 Pending 0 0smysql-2 0/2 Pending 0 0smysql-2 0/2 Init:0/2 0 0smysql-2 0/2 Init:0/2 0 1smysql-2 0/2 Init:1/2 0 2smysql-2 0/2 PodInitializing 0 3smysql-2 1/2 Running 0 4smysql-2 2/2 Running 0 9s三、测试主库进入pod进行操作# 进入到pod mysql-0中,进行测试kubectl exec -it mysql-0 bash# 用mysql-client链接mysql-0mysql -h mysql-0Welcome to the MySQL monitor. Commands end with ; or \g.Your MySQL connection id is 276Server version: 5.7.38-log MySQL Community Server (GPL)Copyright (c) 2000, 2022, Oracle and/or its affiliates.Oracle is a registered trademark of Oracle Corporation and/or itsaffiliates. Other names may be trademarks of their respectiveowners.Type 'help;' or '\h' for help. 
Type '\c' to clear the current input statement.mysql>创立库、表# 创立数据库testmysql> create database cby;Query OK, 1 row affected (0.00 sec)# 应用test库mysql> use cby;Database changed# 创立message表mysql> create table message (message varchar(50));Query OK, 0 rows affected (0.01 sec)# 查看message表构造mysql> show create table message;+---------+------------------------------------------------------------------------------------------------------+| Table | Create Table |+---------+------------------------------------------------------------------------------------------------------+| message | CREATE TABLE `message` ( `message` varchar(50) DEFAULT NULL) ENGINE=InnoDB DEFAULT CHARSET=latin1 |+---------+------------------------------------------------------------------------------------------------------+1 row in set (0.00 sec)插入数据# 插入mysql> insert into message value("hello chenby");Query OK, 1 row affected (0.00 sec)# 查看mysql> select * from message;+---------------+| message |+---------------+| hello chenby |+---------------+1 row in set (0.00 sec)四、测试备库连贯mysql-1 mysql -h mysql-1.mysqlWelcome to the MySQL monitor. Commands end with ; or \g.Your MySQL connection id is 362Server version: 5.7.38 MySQL Community Server (GPL)Copyright (c) 2000, 2022, Oracle and/or its affiliates.Oracle is a registered trademark of Oracle Corporation and/or itsaffiliates. Other names may be trademarks of their respectiveowners.Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.mysql> mysql> 查看库、表构造# 查看数据库列表mysql> show databases;+------------------------+| Database |+------------------------+| information_schema || cby || mysql || performance_schema || sys || test || xtrabackup_backupfiles |+------------------------+7 rows in set (0.01 sec)# 应用cby库mysql> use cby;Reading table information for completion of table and column namesYou can turn off this feature to get a quicker startup with -ADatabase changedmysql> # 查看表列表mysql> show tables;+---------------+| Tables_in_cby |+---------------+| message |+---------------+1 row in set (0.00 sec)# 查看message表构造mysql> show create table message;+---------+------------------------------------------------------------------------------------------------------+| Table | Create Table |+---------+------------------------------------------------------------------------------------------------------+| message | CREATE TABLE `message` ( `message` varchar(50) DEFAULT NULL) ENGINE=InnoDB DEFAULT CHARSET=latin1 |+---------+------------------------------------------------------------------------------------------------------+1 row in set (0.00 sec)mysql> # 查问数据mysql> select * from message;+---------------+| message |+---------------+| hello chenby |+---------------+1 row in set (0.00 sec)mysql> # 写入数据mysql> insert into message values("hello world");ERROR 1290 (HY000): The MySQL server is running with the --super-read-only option so it cannot execute this statementmysql> # 这是因为mysql-1是一个只读备库,无奈进行写操作。五、测试mysql-read服务循环中运行 SELECT @@server_id ...
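摘要在“循环运行 SELECT @@server_id”处被截断,其思路通常是起一个临时客户端循环查询 mysql-read,观察 server_id 在各副本之间轮换。下面是一个示意写法(非原文内容,参照官方 StatefulSet 教程的常见做法):
# 临时客户端,每秒查询一次 mysql-read,观察请求被分发到不同的 server_id
kubectl run mysql-client-loop --image=mysql:5.7 -i -t --rm --restart=Never -- \
  bash -ic "while sleep 1; do mysql -h mysql-read -e 'SELECT @@server_id,NOW()'; done"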

June 8, 2022 · 7 min · jiezi

关于kubernetes:使用-Nocalhost-开发-Rainbond-上的微服务应用

本文将介绍使用 Nocalhost 快速开发 Rainbond 上的微服务应用的开发流程以及实际操作步骤。

Nocalhost 可以直接在 Kubernetes 中开发应用,Rainbond 可以快速部署微服务项目,无需编写 Yaml,Nocalhost 结合 Rainbond 可以加速我们的微服务开发效率。

一. 简介
Nocalhost 是一款开源的基于 IDE 的云原生应用开发工具:
直接在 Kubernetes 集群中构建、测试和调试应用程序;
提供易于使用的 IDE 插件(支持 VS Code 和 JetBrains),即便在 Kubernetes 集群中进行开发和调试,Nocalhost 也能保持和本地开发一样的开发体验;
使用即时文件同步进行开发:即时将您的代码更改同步到远端容器,而无需重建镜像或重新启动容器。
Rainbond 是一款云原生应用管理平台:使用简单,不需要懂容器、Kubernetes 和底层复杂技术,支持管理多个 Kubernetes 集群,以及管理企业应用全生命周期。主要功能包括应用开发环境、应用市场、微服务架构、应用交付、应用运维、应用级多云管理等。

二. 本地 + Rainbond 开发微服务
以前我们在本地 + Rainbond 开发微服务时,要开发的模块运行在本地,其他模块运行在 Rainbond 上,我们通过 Rainbond 的网关与本地进行通信、联调。
这样会遇到一些问题:
多人协作开发联调困难;
本地环境差异化;
无法通过注册中心(Nacos)调用其他微服务;
远程 Debug 较难;
受限于本地资源。

三. 使用 Nocalhost + Rainbond 开发微服务
现在我们通过 Nocalhost + Rainbond 开发微服务时,所有服务都运行在 Rainbond 上,需要开发时,本地 VSCode 直连到 Rainbond 组件中,并将本地代码实时同步到 Rainbond 组件中。多人开发联调时,可通过 Rainbond 内置的 Service Mesh 进行服务之间的联调。 ...

June 6, 2022 · 2 min · jiezi

关于kubernetes:安装-Metrics-server

安装 Metrics server

Metrics Server 是 Kubernetes 内置自动伸缩管道的可扩展、高效的容器资源指标来源。Metrics Server 从 Kubelet 收集资源指标,并通过 Metrics API 在 Kubernetes apiserver 中公开它们,以供 Horizontal Pod Autoscaler 和 Vertical Pod Autoscaler 使用。Metrics API 也可以通过 kubectl top 访问,从而更容易调试自动伸缩管道。

单机版
# 下载单机版清单
wget https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
# 查看镜像地址
grep -rn image components.yaml
140:        image: k8s.gcr.io/metrics-server/metrics-server:v0.6.1
141:        imagePullPolicy: IfNotPresent
# 设置镜像地址为阿里云
sed -i "s#k8s.gcr.io/metrics-server#registry.cn-hangzhou.aliyuncs.com/chenby#g" components.yaml
# 查看镜像地址已更新
grep -rn image components.yaml
140:        image: registry.cn-hangzhou.aliyuncs.com/chenby/metrics-server:v0.6.1
141:        imagePullPolicy: IfNotPresent
# args 添加 tls 证书配置选项
vim components.yaml
# 添加 "- --kubelet-insecure-tls",例:
        args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --metric-resolution=15s
        - --kubelet-insecure-tls
        image: registry.cn-hangzhou.aliyuncs.com/chenby/metrics-server:v0.6.1
# 执行配置
kubectl apply -f components.yaml
serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created

高可用版本
# 下载高可用版本清单
wget https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/high-availability.yaml
# 查看镜像地址
grep -rn image high-availability.yaml
150:        image: k8s.gcr.io/metrics-server/metrics-server:v0.6.1
151:        imagePullPolicy: IfNotPresent
# 设置镜像地址为阿里云
sed -i "s#k8s.gcr.io/metrics-server#registry.cn-hangzhou.aliyuncs.com/chenby#g" high-availability.yaml
# 查看镜像地址已更新
grep -rn image high-availability.yaml
150:        image: registry.cn-hangzhou.aliyuncs.com/chenby/metrics-server:v0.6.1
151:        imagePullPolicy: IfNotPresent
# args 添加 tls 证书配置选项
vim high-availability.yaml
# 添加 "- --kubelet-insecure-tls",例:
        args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --metric-resolution=15s
        - --kubelet-insecure-tls
        image: registry.cn-hangzhou.aliyuncs.com/chenby/metrics-server:v0.6.1
# 执行配置
kubectl apply -f high-availability.yaml
serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
Warning: policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
poddisruptionbudget.policy/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created

验证
# 查看 metrics 资源
kubectl get pod -n kube-system | grep metrics
metrics-server-65fb95948b-2bcht   1/1     Running   0          32s
metrics-server-65fb95948b-vqp5s   1/1     Running   0          32s
# 查看 node 资源情况
kubectl top node
NAME           CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
k8s-master01   127m         1%     2439Mi          64%
k8s-node01     50m          0%     1825Mi          23%
k8s-node02     53m          0%     1264Mi          16%
# 查看 pod 资源情况
kubectl top pod
NAME                      CPU(cores)   MEMORY(bytes)
chenby-57479d5997-44926   0m           10Mi
chenby-57479d5997-tbpqc   0m           11Mi
chenby-57479d5997-w8cp2   0m           6Mi
...
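metrics-server 就绪后,就可以基于 CPU 指标为 Deployment 配置自动伸缩。下面是一个示意命令(非原文内容,以上文输出中出现的 chenby Deployment 为例,阈值与副本数均为假设):
# 当平均 CPU 使用率超过 80% 时自动扩容,副本数保持在 3 到 10 之间
kubectl autoscale deployment chenby --cpu-percent=80 --min=3 --max=10
# 查看 HPA 状态
kubectl get hpa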

June 3, 2022 · 1 min · jiezi

关于kubernetes:查看k8s中etcd集群的状态

1 下载安装 etcdctl 客户端命令工具

1.1 使用脚本下载与服务端相同版本的 etcdctl 软件包
[shutang@centos03.com etcd]$ pwd
/home/shutang/k8s/etcd
[shutang@centos03.com etcd]$ ls
download.sh
[shutang@centos03.com etcd]$ cat download.sh
#!/bin/bash
ETCD_VER=v3.4.3
ETCD_DIR=etcd-download
DOWNLOAD_URL=https://github.com/coreos/etcd/releases/download
# Download
mkdir ${ETCD_DIR}
cd ${ETCD_DIR}
wget ${DOWNLOAD_URL}/${ETCD_VER}/etcd-${ETCD_VER}-linux-amd64.tar.gz
tar -xzvf etcd-${ETCD_VER}-linux-amd64.tar.gz
# install
cd etcd-${ETCD_VER}-linux-amd64
cp etcdctl /usr/local/bin/

1.2 执行脚本,然后配置环境变量和别名
[shutang@centos03.com etcd]$ bash download.sh
# 创建该文件
[shutang@centos03.com profile.d]$ cat etcd.sh
export ETCDCTL_API=3
alias etcdctl='etcdctl --endpoints=https://centos01.com:2379,https://centos02.com:2379,https://centos03.com:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key'
[shutang@centos03.com profile.d]$ source etcd.sh
# 这里需要注意,要保证两个证书文件和一个私钥文件具备可读权限,以便普通用户可以使用 etcdctl 命令

2 普通用户可以使用 etcdctl 命令

2.1 查看 etcd 集群成员列表
[shutang@centos03.com profile.d]$ etcdctl member list
fw57bbcfbe9bc95, started, centos03.com, https://192.168.0.100:2380, https://192.168.0.100:2379, false
ds8968b39130b7a, started, centos02.com, https://192.168.0.101:2380, https://192.168.0.101:2379, false
fs663af9b5wfr85, started, centos01.com, https://192.168.0.102:2380, https://192.168.0.102:2379, false

2.2 查看 endpoints 状态 ...
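2.2 小节在摘要处被截断,常见做法是用下面两条命令查看各 endpoint 的健康状况与状态表(示意用法,非原文内容,别名沿用上文 etcd.sh 中的配置):
# 检查各 endpoint 健康状况
etcdctl endpoint health
# 以表格形式查看各 endpoint 的版本、DB 大小、leader 等状态
etcdctl endpoint status --write-out=table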

June 1, 2022 · 1 min · jiezi

关于kubernetes:二进制安装-Kubernetesk8s

二进制安装 Kubernetes(k8s)

Kubernetes 开源不易,帮忙点个 star,谢谢了。

介绍
kubernetes(k8s) 二进制安装,后续尽可能第一时间更新新版本文档。1.23.3、1.23.4、1.23.5、1.23.6、1.24.0 和 1.24.1 的文档以及安装包已生成。
若不使用 IPv6,不对主机配置 IPv6 地址即可,不影响后续,但是集群依旧是 IPv6 的。
(下载更快)我的网盘共享:https://pan.oiox.cn/s/PetV
手动项目地址:https://github.com/cby-chen/K...
脚本项目地址:https://github.com/cby-chen/B...\_installation\_of\_Kubernetes
kubernetes 1.24 变动较大,具体见:https://kubernetes.io/zh/blog...

文档
每个版本文档如下链接:
https://github.com/cby-chen/K...
https://github.com/cby-chen/K...
https://github.com/cby-chen/K...
https://github.com/cby-chen/K...
https://github.com/cby-chen/K...
https://github.com/cby-chen/K...
https://github.com/cby-chen/K...

安装包
(下载更快)我自己的网盘:https://pan.oiox.cn/s/PetV
每个初始版本会打上 releases,安装包在 releases 页面:https://github.com/cby-chen/K...
注意:1.23.3 版本当时没想到会后续更新,所以当时命名不太规范。
wget https://github.com/cby-chen/K...
wget https://github.com/cby-chen/K...
wget https://github.com/cby-chen/K...
wget https://github.com/cby-chen/K...
wget https://github.com/cby-chen/K...
wget https://github.com/cby-chen/K...

其他
建议在 Kubernetes 仓库查看文档,后续会陆续更新文档。
小陈网站:
https://blog.oiox.cn/
https://www.oiox.cn/
https://www.chenby.cn/
https://cby-chen.github.io/
关于小陈:https://www.oiox.cn/index.php...
文章主要发布于微信公众号:《Linux运维交流社区》

May 29, 2022 · 1 min · jiezi

关于kubernetes:二进制安装Kubernetesk8s-v1241-IPv4IPv6双栈

二进制装置Kubernetes(k8s) v1.24.1 IPv4/IPv6双栈Kubernetes 开源不易,帮忙点个star,谢谢了 介绍kubernetes二进制装置 后续尽可能第一工夫更新新版本文档 1.23.3 和 1.23.4 和 1.23.5 和 1.23.6 和 1.24.0 和1.24.1 文档以及安装包已生成。 若不应用IPv6,不对主机进行配置IPv6地址即可,不影响后续,然而集群仍旧是反对IPv6的。 https://github.com/cby-chen/K... 手动我的项目地址:https://github.com/cby-chen/K... 脚本我的项目地址:https://github.com/cby-chen/B... kubernetes 1.24 变动较大,具体见:https://kubernetes.io/zh/blog... 1.环境主机名称IP地址阐明软件Master0110.0.0.61master节点kube-apiserver、kube-controller-manager、kube-scheduler、etcd、kubelet、kube-proxy、nfs-clientMaster0210.0.0.62master节点kube-apiserver、kube-controller-manager、kube-scheduler、etcd、kubelet、kube-proxy、nfs-clientMaster0310.0.0.63master节点kube-apiserver、kube-controller-manager、kube-scheduler、etcd、kubelet、kube-proxy、nfs-clientNode0110.0.0.64node节点kubelet、kube-proxy、nfs-clientNode0210.0.0.65node节点kubelet、kube-proxy、nfs-clientNode0310.0.0.66node节点kubelet、kube-proxy、nfs-clientNode0410.0.0.67node节点kubelet、kube-proxy、nfs-clientNode0510.0.0.68node节点kubelet、kube-proxy、nfs-clientLb0110.0.0.70Lb01节点haproxy、keepalivedLb0210.0.0.80Lb02节点haproxy、keepalived 10.0.0.69VIP 软件版本kernel5.18.0-1.el8CentOS 8v8 或者 v7kube-apiserver、kube-controller-manager、kube-scheduler、kubelet、kube-proxyv1.24.1etcdv3.5.4containerdv1.5.11cfsslv1.6.1cniv1.1.1crictlv1.24.2haproxyv1.8.27keepalivedv2.1.5网段 物理主机:10.0.0.0/24 service:10.96.0.0/12 pod:172.16.0.0/12 倡议k8s集群与etcd集群离开装置 安装包曾经整顿好:https://github.com/cby-chen/K... 1.1.k8s根底零碎环境配置1.2.配置IPssh root@10.0.0.190 "nmcli con mod ens160 ipv4.addresses 10.0.0.61/24; nmcli con mod ens160 ipv4.gateway 10.0.0.1; nmcli con mod ens160 ipv4.method manual; nmcli con mod ens160 ipv4.dns "8.8.8.8"; nmcli con up ens160"ssh root@10.0.0.146 "nmcli con mod ens160 ipv4.addresses 10.0.0.62/24; nmcli con mod ens160 ipv4.gateway 10.0.0.1; nmcli con mod ens160 ipv4.method manual; nmcli con mod ens160 ipv4.dns "8.8.8.8"; nmcli con up ens160"ssh root@10.0.0.242 "nmcli con mod ens160 ipv4.addresses 10.0.0.63/24; nmcli con mod ens160 ipv4.gateway 10.0.0.1; nmcli con mod ens160 ipv4.method manual; nmcli con mod ens160 ipv4.dns "8.8.8.8"; nmcli con up ens160"ssh root@10.0.0.152 "nmcli con mod ens160 ipv4.addresses 10.0.0.64/24; nmcli con mod ens160 ipv4.gateway 10.0.0.1; nmcli con mod ens160 ipv4.method manual; nmcli con mod ens160 ipv4.dns "8.8.8.8"; nmcli con up ens160"ssh root@10.0.0.124 "nmcli con mod ens160 ipv4.addresses 10.0.0.65/24; nmcli con mod ens160 ipv4.gateway 10.0.0.1; nmcli con mod ens160 ipv4.method manual; nmcli con mod ens160 ipv4.dns "8.8.8.8"; nmcli con up ens160"ssh root@10.0.0.126 "nmcli con mod ens160 ipv4.addresses 10.0.0.66/24; nmcli con mod ens160 ipv4.gateway 10.0.0.1; nmcli con mod ens160 ipv4.method manual; nmcli con mod ens160 ipv4.dns "8.8.8.8"; nmcli con up ens160"ssh root@10.0.0.247 "nmcli con mod ens160 ipv4.addresses 10.0.0.67/24; nmcli con mod ens160 ipv4.gateway 10.0.0.1; nmcli con mod ens160 ipv4.method manual; nmcli con mod ens160 ipv4.dns "8.8.8.8"; nmcli con up ens160"ssh root@10.0.0.207 "nmcli con mod ens160 ipv4.addresses 10.0.0.68/24; nmcli con mod ens160 ipv4.gateway 10.0.0.1; nmcli con mod ens160 ipv4.method manual; nmcli con mod ens160 ipv4.dns "8.8.8.8"; nmcli con up ens160"ssh root@10.0.0.101 "nmcli con mod ens160 ipv4.addresses 10.0.0.70/24; nmcli con mod ens160 ipv4.gateway 10.0.0.1; nmcli con mod ens160 ipv4.method manual; nmcli con mod ens160 ipv4.dns "8.8.8.8"; nmcli con up ens160"ssh root@10.0.0.195 "nmcli con mod ens160 ipv4.addresses 10.0.0.80/24; nmcli con mod ens160 ipv4.gateway 10.0.0.1; nmcli con mod ens160 ipv4.method manual; nmcli con 
mod ens160 ipv4.dns "8.8.8.8"; nmcli con up ens160"ssh root@10.0.0.61 "nmcli con mod ens160 ipv6.addresses 2408:8207:78ca:9fa1::10; nmcli con mod ens160 ipv6.gateway 2408:8207:78ca:9fa1::1; nmcli con mod ens160 ipv6.method manual; nmcli con mod ens160 ipv6.dns "2001:4860:4860::8888"; nmcli con up ens160"ssh root@10.0.0.62 "nmcli con mod ens160 ipv6.addresses 2408:8207:78ca:9fa1::20; nmcli con mod ens160 ipv6.gateway 2408:8207:78ca:9fa1::1; nmcli con mod ens160 ipv6.method manual; nmcli con mod ens160 ipv6.dns "2001:4860:4860::8888"; nmcli con up ens160"ssh root@10.0.0.63 "nmcli con mod ens160 ipv6.addresses 2408:8207:78ca:9fa1::30; nmcli con mod ens160 ipv6.gateway 2408:8207:78ca:9fa1::1; nmcli con mod ens160 ipv6.method manual; nmcli con mod ens160 ipv6.dns "2001:4860:4860::8888"; nmcli con up ens160"ssh root@10.0.0.64 "nmcli con mod ens160 ipv6.addresses 2408:8207:78ca:9fa1::40; nmcli con mod ens160 ipv6.gateway 2408:8207:78ca:9fa1::1; nmcli con mod ens160 ipv6.method manual; nmcli con mod ens160 ipv6.dns "2001:4860:4860::8888"; nmcli con up ens160"ssh root@10.0.0.65 "nmcli con mod ens160 ipv6.addresses 2408:8207:78ca:9fa1::50; nmcli con mod ens160 ipv6.gateway 2408:8207:78ca:9fa1::1; nmcli con mod ens160 ipv6.method manual; nmcli con mod ens160 ipv6.dns "2001:4860:4860::8888"; nmcli con up ens160"ssh root@10.0.0.66 "nmcli con mod ens160 ipv6.addresses 2408:8207:78ca:9fa1::60; nmcli con mod ens160 ipv6.gateway 2408:8207:78ca:9fa1::1; nmcli con mod ens160 ipv6.method manual; nmcli con mod ens160 ipv6.dns "2001:4860:4860::8888"; nmcli con up ens160"ssh root@10.0.0.67 "nmcli con mod ens160 ipv6.addresses 2408:8207:78ca:9fa1::70; nmcli con mod ens160 ipv6.gateway 2408:8207:78ca:9fa1::1; nmcli con mod ens160 ipv6.method manual; nmcli con mod ens160 ipv6.dns "2001:4860:4860::8888"; nmcli con up ens160"ssh root@10.0.0.68 "nmcli con mod ens160 ipv6.addresses 2408:8207:78ca:9fa1::80; nmcli con mod ens160 ipv6.gateway 2408:8207:78ca:9fa1::1; nmcli con mod ens160 ipv6.method manual; nmcli con mod ens160 ipv6.dns "2001:4860:4860::8888"; nmcli con up ens160"ssh root@10.0.0.70 "nmcli con mod ens160 ipv6.addresses 2408:8207:78ca:9fa1::90; nmcli con mod ens160 ipv6.gateway 2408:8207:78ca:9fa1::1; nmcli con mod ens160 ipv6.method manual; nmcli con mod ens160 ipv6.dns "2001:4860:4860::8888"; nmcli con up ens160"ssh root@10.0.0.80 "nmcli con mod ens160 ipv6.addresses 2408:8207:78ca:9fa1::100; nmcli con mod ens160 ipv6.gateway 2408:8207:78ca:9fa1::1; nmcli con mod ens160 ipv6.method manual; nmcli con mod ens160 ipv6.dns "2001:4860:4860::8888"; nmcli con up ens160"1.3.设置主机名hostnamectl set-hostname k8s-master01hostnamectl set-hostname k8s-master02hostnamectl set-hostname k8s-master03hostnamectl set-hostname k8s-node01hostnamectl set-hostname k8s-node02hostnamectl set-hostname k8s-node03hostnamectl set-hostname k8s-node04hostnamectl set-hostname k8s-node05hostnamectl set-hostname lb01hostnamectl set-hostname lb021.4.配置yum源# 对于 CentOS 7sudo sed -e 's|^mirrorlist=|#mirrorlist=|g' \ -e 's|^#baseurl=http://mirror.centos.org|baseurl=https://mirrors.tuna.tsinghua.edu.cn|g' \ -i.bak \ /etc/yum.repos.d/CentOS-*.repo# 对于 CentOS 8sudo sed -e 's|^mirrorlist=|#mirrorlist=|g' \ -e 's|^#baseurl=http://mirror.centos.org/$contentdir|baseurl=https://mirrors.tuna.tsinghua.edu.cn/centos|g' \ -i.bak \ /etc/yum.repos.d/CentOS-*.repo# 对于公有仓库sed -e 's|^mirrorlist=|#mirrorlist=|g' -e 's|^#baseurl=http://mirror.centos.org/\$contentdir|baseurl=http://10.0.0.123/centos|g' -i.bak /etc/yum.repos.d/CentOS-*.repo1.5.装置一些必备工具yum -y install 
wget jq psmisc vim net-tools nfs-utils telnet yum-utils device-mapper-persistent-data lvm2 git network-scripts tar curl -y1.6.选择性下载须要工具1.下载kubernetes1.24.+的二进制包github二进制包下载地址:https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.24.mdwget https://dl.k8s.io/v1.24.1/kubernetes-server-linux-amd64.tar.gz2.下载etcdctl二进制包github二进制包下载地址:https://github.com/etcd-io/etcd/releaseswget https://github.com/etcd-io/etcd/releases/download/v3.5.4/etcd-v3.5.4-linux-amd64.tar.gz3.docker-ce二进制包下载地址二进制包下载地址:https://download.docker.com/linux/static/stable/x86_64/这里须要下载20.10.+版本wget https://download.docker.com/linux/static/stable/x86_64/docker-20.10.14.tgz4.containerd二进制包下载github下载地址:https://github.com/containerd/containerd/releasescontainerd下载时下载带cni插件的二进制包。wget https://github.com/containerd/containerd/releases/download/v1.6.4/cri-containerd-cni-1.6.4-linux-amd64.tar.gz5.下载cfssl二进制包github二进制包下载地址:https://github.com/cloudflare/cfssl/releaseswget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl_1.6.1_linux_amd64wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssljson_1.6.1_linux_amd64wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl-certinfo_1.6.1_linux_amd646.cni插件下载github下载地址:https://github.com/containernetworking/plugins/releaseswget https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz7.crictl客户端二进制下载github下载:https://github.com/kubernetes-sigs/cri-tools/releaseswget https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.24.2/crictl-v1.24.2-linux-amd64.tar.gz1.7.敞开防火墙systemctl disable --now firewalld1.8.敞开SELinuxsetenforce 0sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config1.9.敞开替换分区sed -ri 's/.*swap.*/#&/' /etc/fstabswapoff -a && sysctl -w vm.swappiness=0cat /etc/fstab# /dev/mapper/centos-swap swap swap defaults 0 01.10.敞开NetworkManager 并启用 network (lb除外)systemctl disable --now NetworkManagersystemctl start network && systemctl enable network1.11.进行工夫同步 (lb除外)# 服务端yum install chrony -ycat > /etc/chrony.conf << EOF pool ntp.aliyun.com iburstdriftfile /var/lib/chrony/driftmakestep 1.0 3rtcsyncallow 10.0.0.0/24local stratum 10keyfile /etc/chrony.keysleapsectz right/UTClogdir /var/log/chronyEOFsystemctl restart chronydsystemctl enable chronyd# 客户端yum install chrony -yvim /etc/chrony.confcat /etc/chrony.conf | grep -v "^#" | grep -v "^$"pool 10.0.0.61 iburstdriftfile /var/lib/chrony/driftmakestep 1.0 3rtcsynckeyfile /etc/chrony.keysleapsectz right/UTClogdir /var/log/chronysystemctl restart chronyd ; systemctl enable chronyd# 客户端装置一条命令yum install chrony -y ; sed -i "s#2.centos.pool.ntp.org#10.0.0.61#g" /etc/chrony.conf ; systemctl restart chronyd ; systemctl enable chronyd#应用客户端进行验证chronyc sources -v1.12.配置ulimitulimit -SHn 65535cat >> /etc/security/limits.conf <<EOF* soft nofile 655360* hard nofile 131072* soft nproc 655350* hard nproc 655350* seft memlock unlimited* hard memlock unlimiteddEOF1.13.配置免密登录yum install -y sshpassssh-keygen -f /root/.ssh/id_rsa -P ''export IP="10.0.0.61 10.0.0.62 10.0.0.63 10.0.0.64 10.0.0.65 10.0.0.66 10.0.0.67 10.0.0.68 10.0.0.70 10.0.0.60"export SSHPASS=123123for HOST in $IP;do sshpass -e ssh-copy-id -o StrictHostKeyChecking=no $HOSTdone1.14.增加启用源 (lb除外)# 为 RHEL-8或 CentOS-8配置源yum install https://www.elrepo.org/elrepo-release-8.el8.elrepo.noarch.rpm# 为 RHEL-7 SL-7 或 CentOS-7 装置 ELRepo yum install https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm# 查看可用安装包yum --disablerepo="*" --enablerepo="elrepo-kernel" list 
available1.15.降级内核至4.18版本以上 (lb除外)# 装置最新的内核# 我这里抉择的是稳定版kernel-ml 如需更新长期保护版本kernel-lt yum --enablerepo=elrepo-kernel install kernel-ml# 查看已装置那些内核rpm -qa | grep kernelkernel-core-4.18.0-358.el8.x86_64kernel-tools-4.18.0-358.el8.x86_64kernel-ml-core-5.16.7-1.el8.elrepo.x86_64kernel-ml-5.16.7-1.el8.elrepo.x86_64kernel-modules-4.18.0-358.el8.x86_64kernel-4.18.0-358.el8.x86_64kernel-tools-libs-4.18.0-358.el8.x86_64kernel-ml-modules-5.16.7-1.el8.elrepo.x86_64# 查看默认内核grubby --default-kernel/boot/vmlinuz-5.16.7-1.el8.elrepo.x86_64# 若不是最新的应用命令设置grubby --set-default /boot/vmlinuz-「您的内核版本」.x86_64# 重启失效reboot# v8 整合命令为:yum install https://www.elrepo.org/elrepo-release-8.el8.elrepo.noarch.rpm -y ; yum --disablerepo="*" --enablerepo="elrepo-kernel" list available -y ; yum --enablerepo=elrepo-kernel install kernel-ml -y ; grubby --default-kernel ; reboot# v7 整合命令为:yum install https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm -y ; yum --disablerepo="*" --enablerepo="elrepo-kernel" list available -y ; yum --enablerepo=elrepo-kernel install kernel-ml -y ; grubby --set-default \$(ls /boot/vmlinuz-* | grep elrepo) ; grubby --default-kernel1.16.装置ipvsadm (lb除外)yum install ipvsadm ipset sysstat conntrack libseccomp -ycat >> /etc/modules-load.d/ipvs.conf <<EOF ip_vsip_vs_rrip_vs_wrrip_vs_shnf_conntrackip_tablesip_setxt_setipt_setipt_rpfilteript_REJECTipipEOFsystemctl restart systemd-modules-load.servicelsmod | grep -e ip_vs -e nf_conntrackip_vs_sh 16384 0ip_vs_wrr 16384 0ip_vs_rr 16384 0ip_vs 180224 6 ip_vs_rr,ip_vs_sh,ip_vs_wrrnf_conntrack 176128 1 ip_vsnf_defrag_ipv6 24576 2 nf_conntrack,ip_vsnf_defrag_ipv4 16384 1 nf_conntracklibcrc32c 16384 3 nf_conntrack,xfs,ip_vs1.17.批改内核参数 (lb除外)cat <<EOF > /etc/sysctl.d/k8s.confnet.ipv4.ip_forward = 1net.bridge.bridge-nf-call-iptables = 1fs.may_detach_mounts = 1vm.overcommit_memory=1vm.panic_on_oom=0fs.inotify.max_user_watches=89100fs.file-max=52706963fs.nr_open=52706963net.netfilter.nf_conntrack_max=2310720net.ipv4.tcp_keepalive_time = 600net.ipv4.tcp_keepalive_probes = 3net.ipv4.tcp_keepalive_intvl =15net.ipv4.tcp_max_tw_buckets = 36000net.ipv4.tcp_tw_reuse = 1net.ipv4.tcp_max_orphans = 327680net.ipv4.tcp_orphan_retries = 3net.ipv4.tcp_syncookies = 1net.ipv4.tcp_max_syn_backlog = 16384net.ipv4.ip_conntrack_max = 65536net.ipv4.tcp_max_syn_backlog = 16384net.ipv4.tcp_timestamps = 0net.core.somaxconn = 16384net.ipv6.conf.all.disable_ipv6 = 0net.ipv6.conf.default.disable_ipv6 = 0net.ipv6.conf.lo.disable_ipv6 = 0net.ipv6.conf.all.forwarding = 0EOFsysctl --system1.18.所有节点配置hosts本地解析cat > /etc/hosts <<EOF127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4::1 localhost localhost.localdomain localhost6 localhost6.localdomain62408:8207:78ca:9fa1::10 k8s-master012408:8207:78ca:9fa1::20 k8s-master022408:8207:78ca:9fa1::30 k8s-master032408:8207:78ca:9fa1::40 k8s-node012408:8207:78ca:9fa1::50 k8s-node022408:8207:78ca:9fa1::60 k8s-node032408:8207:78ca:9fa1::70 k8s-node042408:8207:78ca:9fa1::80 k8s-node052408:8207:78ca:9fa1::90 lb012408:8207:78ca:9fa1::100 lb0210.0.0.61 k8s-master0110.0.0.62 k8s-master0210.0.0.63 k8s-master0310.0.0.64 k8s-node0110.0.0.65 k8s-node0210.0.0.66 k8s-node0310.0.0.67 k8s-node0410.0.0.68 k8s-node0510.0.0.70 lb0110.0.0.60 lb0210.0.0.69 lb-vipEOF2.k8s根本组件装置2.1.所有k8s节点装置Containerd作为Runtimewget https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz#创立cni插件所需目录mkdir -p /etc/cni/net.d /opt/cni/bin #解压cni二进制包tar xf cni-plugins-linux-amd64-v1.1.1.tgz -C /opt/cni/bin/wget 
https://github.com/containerd/containerd/releases/download/v1.6.4/cri-containerd-cni-1.6.4-linux-amd64.tar.gz#解压tar -C / -xzf cri-containerd-cni-1.6.4-linux-amd64.tar.gz#创立服务启动文件cat > /etc/systemd/system/containerd.service <<EOF[Unit]Description=containerd container runtimeDocumentation=https://containerd.ioAfter=network.target local-fs.target[Service]ExecStartPre=-/sbin/modprobe overlayExecStart=/usr/local/bin/containerdType=notifyDelegate=yesKillMode=processRestart=alwaysRestartSec=5LimitNPROC=infinityLimitCORE=infinityLimitNOFILE=infinityTasksMax=infinityOOMScoreAdjust=-999[Install]WantedBy=multi-user.targetEOF2.1.1配置Containerd所需的模块cat <<EOF | sudo tee /etc/modules-load.d/containerd.confoverlaybr_netfilterEOF2.1.2加载模块systemctl restart systemd-modules-load.service2.1.3配置Containerd所需的内核cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.confnet.bridge.bridge-nf-call-iptables = 1net.ipv4.ip_forward = 1net.bridge.bridge-nf-call-ip6tables = 1EOF# 加载内核sysctl --system2.1.4创立Containerd的配置文件mkdir -p /etc/containerdcontainerd config default | tee /etc/containerd/config.toml批改Containerd的配置文件sed -i "s#SystemdCgroup\ \=\ false#SystemdCgroup\ \=\ true#g" /etc/containerd/config.tomlcat /etc/containerd/config.toml | grep SystemdCgroupsed -i "s#k8s.gcr.io#registry.cn-hangzhou.aliyuncs.com/chenby#g" /etc/containerd/config.tomlcat /etc/containerd/config.toml | grep sandbox_image# 找到containerd.runtimes.runc.options,在其下退出SystemdCgroup = true[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options] SystemdCgroup = true [plugins."io.containerd.grpc.v1.cri".cni]# 将sandbox_image默认地址改为合乎版本地址 sandbox_image = "registry.cn-hangzhou.aliyuncs.com/chenby/pause:3.6"2.1.5启动并设置为开机启动systemctl daemon-reloadsystemctl enable --now containerd2.1.6配置crictl客户端连贯的运行时地位wget https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.24.2/crictl-v1.24.2-linux-amd64.tar.gz#解压tar xf crictl-v1.24.2-linux-amd64.tar.gz -C /usr/bin/#生成配置文件cat > /etc/crictl.yaml <<EOFruntime-endpoint: unix:///run/containerd/containerd.sockimage-endpoint: unix:///run/containerd/containerd.socktimeout: 10debug: falseEOF#测试systemctl restart containerdcrictl info2.2.k8s与etcd下载及装置(仅在master01操作)2.2.1解压k8s安装包# 下载安装包wget https://dl.k8s.io/v1.24.1/kubernetes-server-linux-amd64.tar.gzwget https://github.com/etcd-io/etcd/releases/download/v3.5.4/etcd-v3.5.4-linux-amd64.tar.gz# 解压k8s安装文件cd cbytar -xf kubernetes-server-linux-amd64.tar.gz --strip-components=3 -C /usr/local/bin kubernetes/server/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy}# 解压etcd安装文件tar -xf etcd-v3.5.4-linux-amd64.tar.gz --strip-components=1 -C /usr/local/bin etcd-v3.5.4-linux-amd64/etcd{,ctl}# 查看/usr/local/bin下内容ls /usr/local/bin/etcd etcdctl kube-apiserver kube-controller-manager kubectl kubelet kube-proxy kube-scheduler2.2.2查看版本[root@k8s-master01 ~]# kubelet --versionKubernetes v1.24.1[root@k8s-master01 ~]# etcdctl versionetcdctl version: 3.5.4API version: 3.5[root@k8s-master01 ~]# 2.2.3将组件发送至其余k8s节点Master='k8s-master02 k8s-master03'Work='k8s-node01 k8s-node02 k8s-node03 k8s-node04 k8s-node05'for NODE in $Master; do echo $NODE; scp /usr/local/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy} $NODE:/usr/local/bin/; scp /usr/local/bin/etcd* $NODE:/usr/local/bin/; donefor NODE in $Work; do scp /usr/local/bin/kube{let,-proxy} $NODE:/usr/local/bin/ ; donemkdir -p /opt/cni/bin2.3创立证书相干文件mkdir pkicd pkicat > admin-csr.json << EOF { "CN": "admin", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": 
"system:masters", "OU": "Kubernetes-manual" } ]}EOFcat > ca-config.json << EOF { "signing": { "default": { "expiry": "876000h" }, "profiles": { "kubernetes": { "usages": [ "signing", "key encipherment", "server auth", "client auth" ], "expiry": "876000h" } } }}EOFcat > etcd-ca-csr.json << EOF { "CN": "etcd", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "etcd", "OU": "Etcd Security" } ], "ca": { "expiry": "876000h" }}EOFcat > front-proxy-ca-csr.json << EOF { "CN": "kubernetes", "key": { "algo": "rsa", "size": 2048 }, "ca": { "expiry": "876000h" }}EOFcat > kubelet-csr.json << EOF { "CN": "system:node:$NODE", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "L": "Beijing", "ST": "Beijing", "O": "system:nodes", "OU": "Kubernetes-manual" } ]}EOFcat > manager-csr.json << EOF { "CN": "system:kube-controller-manager", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "system:kube-controller-manager", "OU": "Kubernetes-manual" } ]}EOFcat > apiserver-csr.json << EOF { "CN": "kube-apiserver", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "Kubernetes", "OU": "Kubernetes-manual" } ]}EOFcat > ca-csr.json << EOF { "CN": "kubernetes", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "Kubernetes", "OU": "Kubernetes-manual" } ], "ca": { "expiry": "876000h" }}EOFcat > etcd-csr.json << EOF { "CN": "etcd", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "etcd", "OU": "Etcd Security" } ]}EOFcat > front-proxy-client-csr.json << EOF { "CN": "front-proxy-client", "key": { "algo": "rsa", "size": 2048 }}EOFcat > kube-proxy-csr.json << EOF { "CN": "system:kube-proxy", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "system:kube-proxy", "OU": "Kubernetes-manual" } ]}EOFcat > scheduler-csr.json << EOF { "CN": "system:kube-scheduler", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "system:kube-scheduler", "OU": "Kubernetes-manual" } ]}EOFcd ..mkdir bootstrapcd bootstrapcat > bootstrap.secret.yaml << EOF apiVersion: v1kind: Secretmetadata: name: bootstrap-token-c8ad9c namespace: kube-systemtype: bootstrap.kubernetes.io/tokenstringData: description: "The default bootstrap token generated by 'kubelet '." 
token-id: c8ad9c token-secret: 2e4d610cf3e7426e usage-bootstrap-authentication: "true" usage-bootstrap-signing: "true" auth-extra-groups: system:bootstrappers:default-node-token,system:bootstrappers:worker,system:bootstrappers:ingress ---apiVersion: rbac.authorization.k8s.io/v1kind: ClusterRoleBindingmetadata: name: kubelet-bootstraproleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:node-bootstrappersubjects:- apiGroup: rbac.authorization.k8s.io kind: Group name: system:bootstrappers:default-node-token---apiVersion: rbac.authorization.k8s.io/v1kind: ClusterRoleBindingmetadata: name: node-autoapprove-bootstraproleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:certificates.k8s.io:certificatesigningrequests:nodeclientsubjects:- apiGroup: rbac.authorization.k8s.io kind: Group name: system:bootstrappers:default-node-token---apiVersion: rbac.authorization.k8s.io/v1kind: ClusterRoleBindingmetadata: name: node-autoapprove-certificate-rotationroleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclientsubjects:- apiGroup: rbac.authorization.k8s.io kind: Group name: system:nodes---apiVersion: rbac.authorization.k8s.io/v1kind: ClusterRolemetadata: annotations: rbac.authorization.kubernetes.io/autoupdate: "true" labels: kubernetes.io/bootstrapping: rbac-defaults name: system:kube-apiserver-to-kubeletrules: - apiGroups: - "" resources: - nodes/proxy - nodes/stats - nodes/log - nodes/spec - nodes/metrics verbs: - "*"---apiVersion: rbac.authorization.k8s.io/v1kind: ClusterRoleBindingmetadata: name: system:kube-apiserver namespace: ""roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:kube-apiserver-to-kubeletsubjects: - apiGroup: rbac.authorization.k8s.io kind: User name: kube-apiserverEOFcd ..mkdir corednscd corednscat > coredns.yaml << EOF apiVersion: v1kind: ServiceAccountmetadata: name: coredns namespace: kube-system---apiVersion: rbac.authorization.k8s.io/v1kind: ClusterRolemetadata: labels: kubernetes.io/bootstrapping: rbac-defaults name: system:corednsrules: - apiGroups: - "" resources: - endpoints - services - pods - namespaces verbs: - list - watch - apiGroups: - discovery.k8s.io resources: - endpointslices verbs: - list - watch---apiVersion: rbac.authorization.k8s.io/v1kind: ClusterRoleBindingmetadata: annotations: rbac.authorization.kubernetes.io/autoupdate: "true" labels: kubernetes.io/bootstrapping: rbac-defaults name: system:corednsroleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:corednssubjects:- kind: ServiceAccount name: coredns namespace: kube-system---apiVersion: v1kind: ConfigMapmetadata: name: coredns namespace: kube-systemdata: Corefile: | .:53 { errors health { lameduck 5s } ready kubernetes cluster.local in-addr.arpa ip6.arpa { fallthrough in-addr.arpa ip6.arpa } prometheus :9153 forward . /etc/resolv.conf { max_concurrent 1000 } cache 30 loop reload loadbalance }---apiVersion: apps/v1kind: Deploymentmetadata: name: coredns namespace: kube-system labels: k8s-app: kube-dns kubernetes.io/name: "CoreDNS"spec: # replicas: not specified here: # 1. Default is 1. # 2. Will be tuned in real time if DNS horizontal auto-scaling is turned on. 
strategy: type: RollingUpdate rollingUpdate: maxUnavailable: 1 selector: matchLabels: k8s-app: kube-dns template: metadata: labels: k8s-app: kube-dns spec: priorityClassName: system-cluster-critical serviceAccountName: coredns tolerations: - key: "CriticalAddonsOnly" operator: "Exists" nodeSelector: kubernetes.io/os: linux affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - weight: 100 podAffinityTerm: labelSelector: matchExpressions: - key: k8s-app operator: In values: ["kube-dns"] topologyKey: kubernetes.io/hostname containers: - name: coredns image: registry.cn-beijing.aliyuncs.com/dotbalo/coredns:1.8.6 imagePullPolicy: IfNotPresent resources: limits: memory: 170Mi requests: cpu: 100m memory: 70Mi args: [ "-conf", "/etc/coredns/Corefile" ] volumeMounts: - name: config-volume mountPath: /etc/coredns readOnly: true ports: - containerPort: 53 name: dns protocol: UDP - containerPort: 53 name: dns-tcp protocol: TCP - containerPort: 9153 name: metrics protocol: TCP securityContext: allowPrivilegeEscalation: false capabilities: add: - NET_BIND_SERVICE drop: - all readOnlyRootFilesystem: true livenessProbe: httpGet: path: /health port: 8080 scheme: HTTP initialDelaySeconds: 60 timeoutSeconds: 5 successThreshold: 1 failureThreshold: 5 readinessProbe: httpGet: path: /ready port: 8181 scheme: HTTP dnsPolicy: Default volumes: - name: config-volume configMap: name: coredns items: - key: Corefile path: Corefile---apiVersion: v1kind: Servicemetadata: name: kube-dns namespace: kube-system annotations: prometheus.io/port: "9153" prometheus.io/scrape: "true" labels: k8s-app: kube-dns kubernetes.io/cluster-service: "true" kubernetes.io/name: "CoreDNS"spec: selector: k8s-app: kube-dns clusterIP: 10.96.0.10 ports: - name: dns port: 53 protocol: UDP - name: dns-tcp port: 53 protocol: TCP - name: metrics port: 9153 protocol: TCPEOFcd ..mkdir metrics-servercd metrics-servercat > metrics-server.yaml << EOF apiVersion: v1kind: ServiceAccountmetadata: labels: k8s-app: metrics-server name: metrics-server namespace: kube-system---apiVersion: rbac.authorization.k8s.io/v1kind: ClusterRolemetadata: labels: k8s-app: metrics-server rbac.authorization.k8s.io/aggregate-to-admin: "true" rbac.authorization.k8s.io/aggregate-to-edit: "true" rbac.authorization.k8s.io/aggregate-to-view: "true" name: system:aggregated-metrics-readerrules:- apiGroups: - metrics.k8s.io resources: - pods - nodes verbs: - get - list - watch---apiVersion: rbac.authorization.k8s.io/v1kind: ClusterRolemetadata: labels: k8s-app: metrics-server name: system:metrics-serverrules:- apiGroups: - "" resources: - pods - nodes - nodes/stats - namespaces - configmaps verbs: - get - list - watch---apiVersion: rbac.authorization.k8s.io/v1kind: RoleBindingmetadata: labels: k8s-app: metrics-server name: metrics-server-auth-reader namespace: kube-systemroleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: extension-apiserver-authentication-readersubjects:- kind: ServiceAccount name: metrics-server namespace: kube-system---apiVersion: rbac.authorization.k8s.io/v1kind: ClusterRoleBindingmetadata: labels: k8s-app: metrics-server name: metrics-server:system:auth-delegatorroleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:auth-delegatorsubjects:- kind: ServiceAccount name: metrics-server namespace: kube-system---apiVersion: rbac.authorization.k8s.io/v1kind: ClusterRoleBindingmetadata: labels: k8s-app: metrics-server name: system:metrics-serverroleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: 
system:metrics-serversubjects:- kind: ServiceAccount name: metrics-server namespace: kube-system---apiVersion: v1kind: Servicemetadata: labels: k8s-app: metrics-server name: metrics-server namespace: kube-systemspec: ports: - name: https port: 443 protocol: TCP targetPort: https selector: k8s-app: metrics-server---apiVersion: apps/v1kind: Deploymentmetadata: labels: k8s-app: metrics-server name: metrics-server namespace: kube-systemspec: selector: matchLabels: k8s-app: metrics-server strategy: rollingUpdate: maxUnavailable: 0 template: metadata: labels: k8s-app: metrics-server spec: containers: - args: - --cert-dir=/tmp - --secure-port=4443 - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname - --kubelet-use-node-status-port - --metric-resolution=15s - --kubelet-insecure-tls - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem # change to front-proxy-ca.crt for kubeadm - --requestheader-username-headers=X-Remote-User - --requestheader-group-headers=X-Remote-Group - --requestheader-extra-headers-prefix=X-Remote-Extra- image: registry.cn-beijing.aliyuncs.com/dotbalo/metrics-server:0.5.0 imagePullPolicy: IfNotPresent livenessProbe: failureThreshold: 3 httpGet: path: /livez port: https scheme: HTTPS periodSeconds: 10 name: metrics-server ports: - containerPort: 4443 name: https protocol: TCP readinessProbe: failureThreshold: 3 httpGet: path: /readyz port: https scheme: HTTPS initialDelaySeconds: 20 periodSeconds: 10 resources: requests: cpu: 100m memory: 200Mi securityContext: readOnlyRootFilesystem: true runAsNonRoot: true runAsUser: 1000 volumeMounts: - mountPath: /tmp name: tmp-dir - name: ca-ssl mountPath: /etc/kubernetes/pki nodeSelector: kubernetes.io/os: linux priorityClassName: system-cluster-critical serviceAccountName: metrics-server volumes: - emptyDir: {} name: tmp-dir - name: ca-ssl hostPath: path: /etc/kubernetes/pki---apiVersion: apiregistration.k8s.io/v1kind: APIServicemetadata: labels: k8s-app: metrics-server name: v1beta1.metrics.k8s.iospec: group: metrics.k8s.io groupPriorityMinimum: 100 insecureSkipTLSVerify: true service: name: metrics-server namespace: kube-system version: v1beta1 versionPriority: 100EOF3.相干证书生成master01节点下载证书生成工具wget "https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl_1.6.1_linux_amd64" -O /usr/local/bin/cfsslwget "https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssljson_1.6.1_linux_amd64" -O /usr/local/bin/cfssljsonchmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson3.1.生成etcd证书特地阐明除外,以下操作在所有master节点操作 ...
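3.1 小节在此处被截断。按上文生成的 etcd-ca-csr.json 与 etcd-csr.json,etcd 证书通常是用 cfssl 签发的,下面是一个示意写法(非原文内容,输出路径与 -hostname 列表为假设,需按实际 etcd 节点的主机名和 IP 调整):
mkdir -p /etc/etcd/ssl
# 生成 etcd 的 CA 证书
cfssl gencert -initca etcd-ca-csr.json | cfssljson -bare /etc/etcd/ssl/etcd-ca
# 用该 CA 为 etcd 节点签发证书
cfssl gencert \
  -ca=/etc/etcd/ssl/etcd-ca.pem \
  -ca-key=/etc/etcd/ssl/etcd-ca-key.pem \
  -config=ca-config.json \
  -hostname=127.0.0.1,k8s-master01,k8s-master02,k8s-master03,10.0.0.61,10.0.0.62,10.0.0.63 \
  -profile=kubernetes \
  etcd-csr.json | cfssljson -bare /etc/etcd/ssl/etcd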

May 29, 2022 · 29 min · jiezi

关于kubernetes:修复kubeproxy证书权限过大问题

修复kube-proxy证书权限过大问题
之前 kube-proxy 服务用的都是 admin 集群证书,造成权限过大、不安全,后续该问题将在文档中修复,请关注 https://github.com/cby-chen/K...

创建生成证书的配置文件
具体见:https://github.com/cby-chen/Kubernetes#23%E5%88%9B%E5%BB%BA%E8%AF%81%E4%B9%A6%E7%9B%B8%E5%85%B3%E6%96%87%E4%BB%B6

cat > ca-config.json << EOF 
{
  "signing": {
    "default": {
      "expiry": "876000h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ],
        "expiry": "876000h"
      }
    }
  }
}
EOF

cat > kube-proxy-csr.json << EOF 
{
  "CN": "system:kube-proxy",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "system:kube-proxy",
      "OU": "Kubernetes-manual"
    }
  ]
}
EOF

生成 kube-proxy 证书和私钥(由集群 CA 签发)
cfssl gencert \
  -ca=/etc/kubernetes/pki/ca.pem \
  -ca-key=/etc/kubernetes/pki/ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  kube-proxy-csr.json | cfssljson -bare /etc/kubernetes/pki/kube-proxy

ll /etc/kubernetes/pki/kube-proxy*
-rw-r--r-- 1 root root 1045 May 26 10:21 /etc/kubernetes/pki/kube-proxy.csr
-rw------- 1 root root 1675 May 26 10:21 /etc/kubernetes/pki/kube-proxy-key.pem
-rw-r--r-- 1 root root 1464 May 26 10:21 /etc/kubernetes/pki/kube-proxy.pem

设置集群参数和客户端认证参数时 --embed-certs 都为 true,这会将 certificate-authority、client-certificate 和 client-key 指向的证书文件内容写入到生成的 kube-proxy.kubeconfig 文件中; ...
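作为参考,生成 kube-proxy.kubeconfig 的命令大致如下(示意写法,其中 apiserver 地址 https://10.0.0.89:8443 为假设值,请按实际集群的负载均衡地址修改):

kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/pki/ca.pem \
  --embed-certs=true \
  --server=https://10.0.0.89:8443 \
  --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig

kubectl config set-credentials system:kube-proxy \
  --client-certificate=/etc/kubernetes/pki/kube-proxy.pem \
  --client-key=/etc/kubernetes/pki/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig

kubectl config set-context system:kube-proxy@kubernetes \
  --cluster=kubernetes \
  --user=system:kube-proxy \
  --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig

kubectl config use-context system:kube-proxy@kubernetes \
  --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig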

May 26, 2022 · 2 min · jiezi

关于kubernetes:大咖说|Kubernetes自动伸缩实现方式深度讲解

本篇文章将从三个方面探讨如何利用 K8S 实现自定义指标的自动伸缩。第一方面主要了解自动伸缩的原理以及其重要性;第二方面主要介绍如何通过 K8S 实现自动化伸缩能力;第三方面实战演示如何使用自定义指标的方式来实现自动伸缩能力。

原视频链接:大咖实战|Kubernetes自动伸缩实现指南分享

作者:马若飞。大型跨国互联网公司首席工程师。AWS Container Hero,《Istio实战指南》作者,极客时间《Service Mesh实战》专栏作者。中国最大的服务网格社区 ServiceMesher.com 管理委员会核心成员,云原生社区 VIP,Istio.io contributor。研究方向为微服务、Service Mesh、云原生技术。

自动伸缩能力的重要性
从可扩展性说起
自动伸缩能力是设计软件时必备的一个非常重要的质量属性,通常用英文 Scalability 这个词来代表。著名论文《The Art of Scalability》中对伸缩性做了非常详细的描述,并在里边提出了一个业界非常重要且知名的理论——扩展性立方体。扩展性立方体理论认为当应用软件需要进行扩展时,通常有三个维度,即如下图所示。

第一个维度 X 轴:X 轴是最基本的一个伸缩能力,属于水平复制,即 replicate 能力。换句话说,就是复制服务然后负载均衡,这也是最简单、最基础的扩展。

第二个维度 Y 轴:指的是功能性的扩展。即在 Y 轴上可以看到,随着一个应用的演进以及不断地开发,该应用在功能上也是可以扩展的。如针对 X 轴扩展产生的问题,需要将大型服务进行拆解,把分割后的工作职责和数据分配至多个实体,这也是微服务理论诞生的基础。

第三个维度 Z 轴:主要指的是数据分区。即应用在数据量急剧增长的情况下,客户希望通过数据分区的方式来使得应用在持久层的维度进行扩展。通常,最简单的扩展方式就是所谓的拆库拆表,或者叫分库分表,英文叫 Sharding。即像水平扩展一样,将数据以水平的方式或者垂直的方式进行 Sharding。一般来说,水平方式是指将原来一个数据库拆分成多个,每一个库的表结构均相同,但承载的存储数据不同;垂直方式是指将不同的表散列到不同的 DB 上。

什么是自动伸缩(Auto-scaling)
自动伸缩通常将其翻译成 Auto-scaling。伸缩指的是在不同的维度上可以有复制与扩展,自动伸缩顾名思义就是可以将上述扩展以自动化的方式实现。自动伸缩是一种自动扩展计算资源的云计算技术。在手动方式部署阶段,Auto-scaling 的能力尚未被开发出来,随着 PaaS 平台的不断演进,自动扩展的需求也越来越明显,故特别强调自动伸缩是一种云计算技术。

自动伸缩的重要性
更好的容错性(Fault-tolerance):及时、快速应对负载压力
更好的可用性(High Availability):高可用的本质:冗余
更好的成本管理(Cost saving):按需付费
云原生应用的必备能力
缺点:难以识别非正常流量(DDoS 攻击)

常见的自动伸缩实现
云提供商的基本服务
Serverless
Kubernetes HPA

HPA工作原理及基本用法
Kubernetes 里的自动伸缩实现——HPA
HPA 与 RC、Deployment、Pod 的关系如下图所示:
HPA 通过 Scale sub-resource 接口,对 RC 和 Deployment 的 replicas 进行控制。HPA 最终对 Pod 副本数的控制,终归还是通过 RC 和 Deployment 控制器实现。

HPA工作原理
HPA 具体的工作原理如下图所示:

基于默认资源指标实现的HPA
默认指标:CPU、内存
检查间隔:默认 15s ...
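这里补充一个基于默认 CPU 指标的 HPA 清单示例(示意写法,其中 Deployment 名称 php-apache、副本数与阈值均为假设值,仅演示 autoscaling/v2 的基本用法):

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: php-apache
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: php-apache
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50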

May 26, 2022 · 1 min · jiezi

关于kubernetes:创建用户认证授权的-kubeconfig-文件

创建用户认证授权的 kubeconfig 文件
当我们安装好集群后,如果想要把 kubectl 命令交给用户使用,就不得不对用户的身份进行认证,并对其权限做出限制。下面以创建一个 cby 用户并将其绑定到 cby 和 chenby 的 namespace 为例说明。

创建生成证书的配置文件
具体见:https://github.com/cby-chen/Kubernetes#23%E5%88%9B%E5%BB%BA%E8%AF%81%E4%B9%A6%E7%9B%B8%E5%85%B3%E6%96%87%E4%BB%B6

cat > ca-config.json << EOF 
{
  "signing": {
    "default": {
      "expiry": "876000h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ],
        "expiry": "876000h"
      }
    }
  }
}
EOF

cat > cby-csr.json << EOF 
{
  "CN": "cby",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "system:masters",
      "OU": "Kubernetes-manual"
    }
  ]
}
EOF

生成 cby 用户证书和私钥(由集群 CA 签发)
cfssl gencert \
  -ca=/etc/kubernetes/pki/ca.pem \
  -ca-key=/etc/kubernetes/pki/ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  cby-csr.json | cfssljson -bare /etc/kubernetes/pki/cby

ll /etc/kubernetes/pki/cby*
-rw-r--r-- 1 root root 1021 May 25 17:36 /etc/kubernetes/pki/cby.csr
-rw------- 1 root root 1679 May 25 17:36 /etc/kubernetes/pki/cby-key.pem
-rw-r--r-- 1 root root 1440 May 25 17:36 /etc/kubernetes/pki/cby.pem

创建 kubeconfig 文件
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/pki/ca.pem \
  --embed-certs=true \
  --server=https://10.0.0.89:8443 \
  --kubeconfig=/etc/kubernetes/cby.kubeconfig

kubectl config set-credentials cby \
  --client-certificate=/etc/kubernetes/pki/cby.pem \
  --client-key=/etc/kubernetes/pki/cby-key.pem \
  --embed-certs=true \
  --kubeconfig=/etc/kubernetes/cby.kubeconfig

kubectl config set-context cby@kubernetes \
  --cluster=kubernetes \
  --user=cby \
  --kubeconfig=/etc/kubernetes/cby.kubeconfig

kubectl config use-context cby@kubernetes --kubeconfig=/etc/kubernetes/cby.kubeconfig

添加用户并为其配置 kubeconfig
useradd cby
su - cby
mkdir .kube/
exit 
cp /etc/kubernetes/cby.kubeconfig /home/cby/.kube/config
chown cby.cby /home/cby/.kube/config

RoleBinding
需要使用 RBAC 创建角色绑定,以将该用户的行为限制在某个或某几个 namespace 范围内 ...
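下面给出一个把 cby 用户限制在 cby 命名空间内的 Role/RoleBinding 参考示例(示意写法,权限规则请按需收紧,chenby 命名空间同理再建一份;另外注意示例证书里 O 为 system:masters 时用户实际已具备集群管理员权限,生产环境应换成普通组):

kubectl create namespace cby

cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: cby-admin
  namespace: cby
rules:
- apiGroups: ["", "apps", "batch"]
  resources: ["*"]
  verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cby-admin-binding
  namespace: cby
subjects:
- kind: User
  name: cby
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: cby-admin
  apiGroup: rbac.authorization.k8s.io
EOF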

May 26, 2022 · 2 min · jiezi

关于kubernetes:强制删除kubernetes中的处于Terminating状态的资源

1 强制删除 kubernetes 中的 pod 资源
# pod 处于 Terminating 状态时,先查看该 pod 是否有对应的 deployment 资源和 replicaset 资源,如果有,先删除这两种资源
kubectl delete deployment <deploy-name> -n namespace
kubectl delete rs <rs-name> -n namespace

# 再去删除 pod
kubectl delete pod <pod-name> --grace-period=0 --force

# 如果执行这些命令后 pod 仍处于 `Unknown` 或者 `Terminating` 状态,请使用下面的命令从集群中删除 pod
kubectl patch pod <pod> -p '{"metadata":{"finalizers":null}}' -n namespace

2 强制删除 namespace
当我们执行 kubectl delete ns 想删除不再使用的 namespace,或者想重建某个 namespace 下的资源时,发现 namespace 处于 Terminating 状态
2.1 获取该 namespace 的 json 信息
kubectl get ns <namespace-name> -o json > tmp.json
2.2 使用 vi 编辑该文件,去掉 spec 字段的内容(即其中的 finalizers)
2.3 另外打开一个终端窗口,执行 kubectl proxy 指令 ...
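接着 2.3 的思路,通常还需要通过本地代理调用 namespace 的 finalize 接口提交修改后的 json(示意命令,<namespace-name> 为占位符,kubectl proxy 默认监听 8001 端口):

# 保持 kubectl proxy 运行,在另一个终端执行
curl -k -H "Content-Type: application/json" \
  -X PUT --data-binary @tmp.json \
  http://127.0.0.1:8001/api/v1/namespaces/<namespace-name>/finalize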

May 25, 2022 · 1 min · jiezi

关于kubernetes:关于-ServiceAccounts-及其-Secrets-的重大变化

对于 ServiceAccounts 及其 Secrets 的重大变动kubernetes v1.24.0 更新之后进行创立 ServiceAccount 不会主动生成 Secret 须要对其手动创立 创立 ServiceAccountcat<<EOF | kubectl apply -f -apiVersion: v1kind: ServiceAccountmetadata: name: cby namespace: defaultEOF查看 ServiceAccountroot@cby:~# kubectl get serviceaccounts cbyNAME SECRETS AGEcby 0 9s查看 ServiceAccount 具体具体,没有对 Token 进行创立root@cby:~# kubectl describe serviceaccounts cbyName: cbyNamespace: defaultLabels: <none>Annotations: <none>Image pull secrets: <none>Mountable secrets: <none>Tokens: <none>Events: <none>root@cby:~# root@cby:~# kubectl get secretsNo resources found in default namespace.root@cby:~#创立 Secret 资源并与 ServiceAccount 关联cat<<EOF | kubectl apply -f -apiVersion: v1kind: Secrettype: kubernetes.io/service-account-tokenmetadata: name: cby annotations: kubernetes.io/service-account.name: "cby"EOF再次查看 ServiceAccount 已对 Secret 关联root@cby:~# kubectl describe serviceaccounts cbyName: cbyNamespace: defaultLabels: <none>Annotations: <none>Image pull secrets: <none>Mountable secrets: <none>Tokens: cbyEvents: <none>root@cby:~# 查看 Secret 具体具体root@cby:~# kubectl get secrets cby NAME TYPE DATA AGEcby kubernetes.io/service-account-token 3 35sroot@cby:~# root@cby:~# kubectl describe secrets cby Name: cbyNamespace: defaultLabels: <none>Annotations: kubernetes.io/service-account.name: cby kubernetes.io/service-account.uid: c6629b84-1c08-483d-9a12-c2930ac0a2feType: kubernetes.io/service-account-tokenData====ca.crt: 1363 bytesnamespace: 7 bytestoken: eyJhbGciOiJSUzI1NiIsImtpZCI6IjRwMk02VU9leXU3N3lraUN6UVQ4R3I3Smw3eFhYdEVMX1Z2aTFjU2luSVEifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImNieSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJjYnkiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJjNjYyOWI4NC0xYzA4LTQ4M2QtOWExMi1jMjkzMGFjMGEyZmUiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6ZGVmYXVsdDpjYnkifQ.r0nHVPO-QY-1p0fwKx0p0AfkiCGpTZ8vGzE8ioDtih5cAP1ew3ABnrj01EqeIEn8vhz29i0NHtZfh5XtYttqjU6o_b1IGFtkW5uIwlxYX2gtmm9njsL2NM7YM6lM0BDfQXvYrpKUuWLQUR-8i79h-GH9WFydmEwnthdxit7uSMJIZuyZP0X0ebxWUg1GGHsqNPy514zXEyvTZh8vs4fVl5ROJbKzFuSuQ1TntXMDncHSf8DSJ7iHUZ0pD757ysHvFKH9l6IbGrt8GUvxWxjMvnNjclLozKgfLXQEOVei39VrPU5DtsPp9DU8C04Gn4TWFW_WsyEWM14lGsQEGD-2QAroot@cby:~# 删除 ServiceAccount 随之 Secret 一并主动删除root@cby:~# kubectl delete serviceaccounts cby serviceaccount "cby" deletedroot@cby:~#root@cby:~# kubectl get serviceaccountsroot@cby:~# kubectl get secrethttps://www.oiox.cn/https://www.chenby.cn/https://cby-chen.github.io/https://blog.csdn.net/qq\_33921750https://my.oschina.net/u/3981543https://www.zhihu.com/people/...https://segmentfault.com/u/hp...https://juejin.cn/user/331578...https://cloud.tencent.com/dev...https://www.jianshu.com/u/0f8...https://www.toutiao.com/c/use...CSDN、GitHub、知乎、开源中国、思否、掘金、简书、腾讯云、今日头条、集体博客、全网可搜《小陈运维》文章次要公布于微信公众号:《Linux运维交换社区》
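补充一点:除了像上文这样手动创建长期 Secret,v1.24 起也可以用 TokenRequest 的方式直接为 ServiceAccount 签发短期 token(示意命令,时长按需调整):

# 为 ServiceAccount cby 签发一个有时效的 token,可用 --duration 指定有效期
kubectl create token cby --duration=24h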

May 25, 2022 · 1 min · jiezi

关于kubernetes:kubernetes-k8s-v1240-安装dashboard面板

kubernetes (k8s) v1.24.0 装置dashboard面板介绍v1.24.0 应用之前的装置形式,在装置过程中会有一些异样,此文档已修复已知问题。 下载所需配置root@k8s-master01:~# wget https://raw.githubusercontent.com/cby-chen/Kubernetes/main/yaml/dashboard.yamlroot@k8s-master01:~# root@k8s-master01:~# kubectl apply -f dashboard.yamlnamespace/kubernetes-dashboard unchangedserviceaccount/kubernetes-dashboard unchangedservice/kubernetes-dashboard configuredsecret/kubernetes-dashboard-certs unchangedsecret/kubernetes-dashboard-csrf configuredWarning: resource secrets/kubernetes-dashboard-key-holder is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.secret/kubernetes-dashboard-key-holder configuredconfigmap/kubernetes-dashboard-settings unchangedrole.rbac.authorization.k8s.io/kubernetes-dashboard unchangedclusterrole.rbac.authorization.k8s.io/kubernetes-dashboard unchangedrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard unchangedclusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard unchangeddeployment.apps/kubernetes-dashboard configuredservice/dashboard-metrics-scraper unchangeddeployment.apps/dashboard-metrics-scraper unchangedroot@k8s-master01:~# wget https://raw.githubusercontent.com/cby-chen/Kubernetes/main/yaml/dashboard-user.yamlroot@k8s-master01:~# kubectl  apply -f dashboard-user.yamlserviceaccount/admin-user createdclusterrolebinding.rbac.authorization.k8s.io/admin-user createdroot@k8s-master01:~# 批改为nodePortroot@k8s-master01:~# kubectl edit svc kubernetes-dashboard -n kubernetes-dashboardservice/kubernetes-dashboard editedroot@k8s-master01:~# kubectl get svc kubernetes-dashboard -n kubernetes-dashboardNAME                   TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)         AGEkubernetes-dashboard   NodePort   10.96.221.8   <none>        443:32721/TCP   74sroot@k8s-master01:~#创立tokenroot@k8s-master01:~# kubectl -n kubernetes-dashboard create token admin-usereyJhbGciOiJSUzI1NiIsImtpZCI6IlV6b3NRbDRiTll4VEl1a1VGbU53M2Y2X044Wjdfa21mQ0dfYk5BWktHRjAifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNjUyNzYzMjUzLCJpYXQiOjE2NTI3NTk2NTMsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJhZG1pbi11c2VyIiwidWlkIjoiNDYxYjc4MDItNTgzMS00MTNmLTg2M2ItODdlZWVkOTI3MTdiIn19LCJuYmYiOjE2NTI3NTk2NTMsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDphZG1pbi11c2VyIn0.nFF729zlDxz4Ed3fcVk5BE8Akc6jod6akf2rksVGJHmfurY7NO1nHP4EekrMx1FRa2JfoPOHTdxcWDVaQAymDC4vgP5aW5RCEOURUY6YdTQUxleRiX-Bgp3eNRHNOcPvdedGm0w7M7gnZqCwy4tsgyiXkIM7zZpvCqdCA1vGJxf_UIck4R8Izua5NSacnG25miIvAmxNzOAEHDD_jDIDHnPVi3iVZzrjBkDwG6spYx_yJbbLy1XbJCYMMH44X4ajuQulV_NS-aiIHj_-PbxfrBRAJCVTZ8L3zD14BraeAAHFqSoiLXohmYHLLjshtraVu4XcvehJDfnRMi8Y4b6sqAhttps://192.168.1.31:32721/https://www.oiox.cn/https://www.chenby.cn/https://cby-chen.github.io/https://blog.csdn.net/qq\_33921750https://my.oschina.net/u/3981543https://www.zhihu.com/people/...https://segmentfault.com/u/hp...https://juejin.cn/user/331578...https://cloud.tencent.com/dev...https://www.jianshu.com/u/0f8...https://www.toutiao.com/c/use... CSDN、GitHub、知乎、开源中国、思否、掘金、简书、腾讯云、今日头条、集体博客、全网可搜《小陈运维》 文章次要公布于微信公众号:《Linux运维交换社区》
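上文 create token 生成的是有时效的 token,如果希望获得长期可用的 token,可以参考如下示例为 admin-user 创建一个 service-account-token 类型的 Secret(示意写法):

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
type: kubernetes.io/service-account-token
metadata:
  name: admin-user-token
  namespace: kubernetes-dashboard
  annotations:
    kubernetes.io/service-account.name: "admin-user"
EOF

# 读取长期 token
kubectl -n kubernetes-dashboard get secret admin-user-token -o jsonpath='{.data.token}' | base64 -d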

May 17, 2022 · 1 min · jiezi

关于kubernetes:clientgo-gin的简单整合三list列表相关再进阶关于Pods

背景:紧接 client-go gin 的简单整合二(list 列表相关进一步操作),namespace、deployment、service 都 list 列表展示了,总感觉还少点什么?比如显示集群中所有运行的 pod 列表?根据 namespace 显示 pod 列表?按照 deployment 名称查询所包含的 pod?总而言之,这一部分就围绕着 pod 列表的展示展开了!

client-go gin的简单整合三(list列表相关再进阶)
1. 展示命名空间的pod相关信息
先确认一下需要获取的信息:kubectl get pods -o wide

[root@zhangpeng ~]# kubectl get pods -o wide
NAME                     READY   STATUS    RESTARTS   AGE     IP          NODE                       NOMINATED NODE   READINESS GATES
nginx-7b5d9df6b8-dsx8j   1/1     Running   0          5d19h   10.31.0.4   cn-beijing.172.25.84.228   <none>           <none>

name status restarts ip node 这几个肯定是要搞上的,输出一下 pod 的 yaml 看还有什么要输出的:

[root@zhangpeng ~]# kubectl get pods nginx-7b5d9df6b8-dsx8j -o yaml

createtime labels image 也添加一下!基本 copy 了一下 Namespace.go 里面的 func ListNamespace 过来:

src/service/Pod.go

package service

import (
	"context"

	"github.com/gin-gonic/gin"
	. "k8s-demo1/src/lib"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

type Pod struct {
	Name       string
	Namespace  string
	Status     string
	Images     string
	NodeName   string
	CreateTime string
	Labels     map[string]string
}

func ListallPod(g *gin.Context) {
	ns := g.Query("ns")
	pods, err := K8sClient.CoreV1().Pods(ns).List(context.Background(), metav1.ListOptions{})
	if err != nil {
		g.Error(err)
	}
	ret := make([]*Pod, 0)
	for _, item := range pods.Items {
		ret = append(ret, &Pod{
			Namespace:  item.Namespace,
			Name:       item.Name,
			Status:     string(item.Status.Phase),
			Labels:     item.Labels,
			NodeName:   item.Spec.NodeName,
			Images:     item.Spec.Containers[0].Image,
			CreateTime: item.CreationTimestamp.Format("2006-01-02 15:04:05"),
		})
	}
	g.JSON(200, ret)
	return
}

Status 取了 Phase 的值应该是没有问题的吧?Images 跟 deployment 取值一样。本来准备搞上 restart 的次数……但是 kube-system 下 pod 有异常输出就先忽略了!

main.go ...
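针对"按照 deployment 名称查询所包含的 pod"这个问题,下面给出一个思路示例(示意写法,沿用上文假设的 K8sClient 与 Pod 结构体,另外需要额外引入 k8s.io/apimachinery/pkg/labels 包):

// 按 Deployment 名称查询其包含的 Pod:先取 Deployment 的 selector,再用 label selector 过滤 Pod
func ListPodByDeployment(g *gin.Context) {
	ns := g.Query("ns")
	name := g.Query("deployment")
	deployment, err := K8sClient.AppsV1().Deployments(ns).Get(context.Background(), name, metav1.GetOptions{})
	if err != nil {
		g.JSON(500, gin.H{"error": err.Error()})
		return
	}
	// 将 Deployment 的 matchLabels 转成 label selector 字符串,例如 app=nginx
	selector := labels.SelectorFromSet(labels.Set(deployment.Spec.Selector.MatchLabels)).String()
	pods, err := K8sClient.CoreV1().Pods(ns).List(context.Background(), metav1.ListOptions{LabelSelector: selector})
	if err != nil {
		g.JSON(500, gin.H{"error": err.Error()})
		return
	}
	ret := make([]*Pod, 0)
	for _, item := range pods.Items {
		ret = append(ret, &Pod{
			Namespace:  item.Namespace,
			Name:       item.Name,
			Status:     string(item.Status.Phase),
			Labels:     item.Labels,
			NodeName:   item.Spec.NodeName,
			Images:     item.Spec.Containers[0].Image,
			CreateTime: item.CreationTimestamp.Format("2006-01-02 15:04:05"),
		})
	}
	g.JSON(200, ret)
}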

May 12, 2022 · 3 min · jiezi

关于kubernetes:使用kubeadm初始化IPV4IPV6集群

应用kubeadm初始化IPV4/IPV6集群 图片 CentOS 配置YUM源cat <<EOF > /etc/yum.repos.d/kubernetes.repo[kubernetes]name=kubernetesbaseurl=https://mirrors.ustc.edu.cn/kubernetes/yum/repos/kubernetes-el7-$basearchenabled=1EOFsetenforce 0yum install -y kubelet kubeadm kubectl# 如装置老版本# yum install kubelet-1.16.9-0 kubeadm-1.16.9-0 kubectl-1.16.9-0systemctl enable kubelet && systemctl start kubelet# 将 SELinux 设置为 permissive 模式(相当于将其禁用)sudo setenforce 0sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/configsudo systemctl enable --now kubeletUbuntu 配置APT源curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -cat <<EOF >/etc/apt/sources.list.d/kubernetes.listdeb https://mirrors.ustc.edu.cn/kubernetes/apt kubernetes-xenial mainEOFapt-get updateapt-get install -y kubelet kubeadm kubectl# 如装置老版本# apt install kubelet=1.23.6-00 kubeadm=1.23.6-00 kubectl=1.23.6-00配置containerdwget https://github.com/containerd/containerd/releases/download/v1.6.4/cri-containerd-cni-1.6.4-linux-amd64.tar.gz#解压tar -C / -xzf cri-containerd-cni-1.6.4-linux-amd64.tar.gz#创立服务启动文件cat > /etc/systemd/system/containerd.service <<EOF[Unit]Description=containerd container runtimeDocumentation=https://containerd.ioAfter=network.target local-fs.target[Service]ExecStartPre=-/sbin/modprobe overlayExecStart=/usr/local/bin/containerdType=notifyDelegate=yesKillMode=processRestart=alwaysRestartSec=5LimitNPROC=infinityLimitCORE=infinityLimitNOFILE=infinityTasksMax=infinityOOMScoreAdjust=-999[Install]WantedBy=multi-user.targetEOFmkdir -p /etc/containerdcontainerd config default | tee /etc/containerd/config.tomlsed -i "s#k8s.gcr.io#registry.cn-hangzhou.aliyuncs.com/chenby#g" /etc/containerd/config.tomlsystemctl daemon-reloadsystemctl enable --now containerd配置根底环境cat <<EOF | sudo tee /etc/modules-load.d/k8s.confbr_netfilterEOFcat <<EOF | sudo tee /etc/sysctl.d/k8s.confnet.ipv4.ip_forward = 1net.bridge.bridge-nf-call-iptables = 1fs.may_detach_mounts = 1vm.overcommit_memory=1vm.panic_on_oom=0fs.inotify.max_user_watches=89100fs.file-max=52706963fs.nr_open=52706963net.netfilter.nf_conntrack_max=2310720net.ipv4.tcp_keepalive_time = 600net.ipv4.tcp_keepalive_probes = 3net.ipv4.tcp_keepalive_intvl =15net.ipv4.tcp_max_tw_buckets = 36000net.ipv4.tcp_tw_reuse = 1net.ipv4.tcp_max_orphans = 327680net.ipv4.tcp_orphan_retries = 3net.ipv4.tcp_syncookies = 1net.ipv4.tcp_max_syn_backlog = 16384net.ipv4.ip_conntrack_max = 65536net.ipv4.tcp_max_syn_backlog = 16384net.ipv4.tcp_timestamps = 0net.core.somaxconn = 16384net.ipv6.conf.all.disable_ipv6 = 0net.ipv6.conf.default.disable_ipv6 = 0net.ipv6.conf.lo.disable_ipv6 = 0net.ipv6.conf.all.forwarding = 1EOFsudo sysctl --systemhostnamectl set-hostname k8s-master01hostnamectl set-hostname k8s-node01hostnamectl set-hostname k8s-node02sed -ri 's/.*swap.*/#&/' /etc/fstabswapoff -a && sysctl -w vm.swappiness=0cat /etc/fstabhostnamectl set-hostname k8s-master01hostnamectl set-hostname k8s-node01hostnamectl set-hostname k8s-node02cat > /etc/hosts <<EOF127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4::1         localhost localhost.localdomain localhost6 localhost6.localdomain62408:8207:78ce:7561::21 k8s-master012408:8207:78ce:7561::22 k8s-node012408:8207:78ce:7561::23 k8s-node0210.0.0.21 k8s-master0110.0.0.22 k8s-node0110.0.0.23 k8s-node02EOF初始化装置root@k8s-master01:~# kubeadm config images list --image-repository 
registry.cn-hangzhou.aliyuncs.com/chenbyregistry.cn-hangzhou.aliyuncs.com/chenby/kube-apiserver:v1.24.0registry.cn-hangzhou.aliyuncs.com/chenby/kube-controller-manager:v1.24.0registry.cn-hangzhou.aliyuncs.com/chenby/kube-scheduler:v1.24.0registry.cn-hangzhou.aliyuncs.com/chenby/kube-proxy:v1.24.0registry.cn-hangzhou.aliyuncs.com/chenby/pause:3.7registry.cn-hangzhou.aliyuncs.com/chenby/etcd:3.5.3-0registry.cn-hangzhou.aliyuncs.com/chenby/coredns:v1.8.6root@k8s-master01:~# vim kubeadm.yaml root@k8s-master01:~# cat kubeadm.yamlapiVersion: kubeadm.k8s.io/v1beta3kind: InitConfigurationlocalAPIEndpoint:  advertiseAddress: "2408:8207:78ce:7561::21"  bindPort: 6443nodeRegistration:  taints:  - effect: PreferNoSchedule    key: node-role.kubernetes.io/master---apiVersion: kubeadm.k8s.io/v1beta3kind: ClusterConfigurationkubernetesVersion: v1.24.0imageRepository: registry.cn-hangzhou.aliyuncs.com/chenbynetworking:  podSubnet: 172.16.0.0/12,fc00::/48  serviceSubnet: 10.96.0.0/12,fd00::/108root@k8s-master01:~#root@k8s-master01:~# root@k8s-master01:~# kubeadm init --config=kubeadm.yaml [init] Using Kubernetes version: v1.24.0[preflight] Running pre-flight checks[preflight] Pulling images required for setting up a Kubernetes cluster[preflight] This might take a minute or two, depending on the speed of your internet connection[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'[certs] Using certificateDir folder "/etc/kubernetes/pki"[certs] Generating "ca" certificate and key[certs] Generating "apiserver" certificate and key[certs] apiserver serving cert is signed for DNS names [k8s-master01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.0.21][certs] Generating "apiserver-kubelet-client" certificate and key[certs] Generating "front-proxy-ca" certificate and key[certs] Generating "front-proxy-client" certificate and key[certs] Generating "etcd/ca" certificate and key[certs] Generating "etcd/server" certificate and key[certs] etcd/server serving cert is signed for DNS names [k8s-master01 localhost] and IPs [10.0.0.21 127.0.0.1 ::1][certs] Generating "etcd/peer" certificate and key[certs] etcd/peer serving cert is signed for DNS names [k8s-master01 localhost] and IPs [10.0.0.21 127.0.0.1 ::1][certs] Generating "etcd/healthcheck-client" certificate and key[certs] Generating "apiserver-etcd-client" certificate and key[certs] Generating "sa" key and public key[kubeconfig] Using kubeconfig folder "/etc/kubernetes"[kubeconfig] Writing "admin.conf" kubeconfig file[kubeconfig] Writing "kubelet.conf" kubeconfig file[kubeconfig] Writing "controller-manager.conf" kubeconfig file[kubeconfig] Writing "scheduler.conf" kubeconfig file[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"[kubelet-start] Starting the kubelet[control-plane] Using manifest folder "/etc/kubernetes/manifests"[control-plane] Creating static Pod manifest for "kube-apiserver"[control-plane] Creating static Pod manifest for "kube-controller-manager"[control-plane] Creating static Pod manifest for "kube-scheduler"[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". 
This can take up to 4m0s[apiclient] All control plane components are healthy after 6.504341 seconds[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster[upload-certs] Skipping phase. Please see --upload-certs[mark-control-plane] Marking the node k8s-master01 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers][mark-control-plane] Marking the node k8s-master01 as control-plane by adding the taints [node-role.kubernetes.io/master:PreferNoSchedule][bootstrap-token] Using token: lnodkp.3n8i4m33sqwg39w2[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key[addons] Applied essential addon: CoreDNS[addons] Applied essential addon: kube-proxyYour Kubernetes control-plane has initialized successfully!To start using your cluster, you need to run the following as a regular user:  mkdir -p $HOME/.kube  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config  sudo chown $(id -u):$(id -g) $HOME/.kube/configAlternatively, if you are the root user, you can run:  export KUBECONFIG=/etc/kubernetes/admin.confYou should now deploy a pod network to the cluster.Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:  https://kubernetes.io/docs/concepts/cluster-administration/addons/Then you can join any number of worker nodes by running the following on each as root:kubeadm join 10.0.0.21:6443 --token lnodkp.3n8i4m33sqwg39w2 \    --discovery-token-ca-cert-hash sha256:0ed7e18ea2b49bb599bc45e72f764bbe034ef1dce47729f2722467c167754da8 root@k8s-master01:~# root@k8s-master01:~#   mkdir -p $HOME/.kuberoot@k8s-master01:~#   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/configroot@k8s-master01:~#   sudo chown $(id -u):$(id -g) $HOME/.kube/configroot@k8s-master01:~# root@k8s-node01:~# kubeadm join 10.0.0.21:6443 --token qf3z22.qwtqieutbkik6dy4 \> --discovery-token-ca-cert-hash sha256:2ade8c834a41cc1960993a600c89fa4bb86e3594f82e09bcd42633d4defbda0d[preflight] Running pre-flight checks[preflight] Reading configuration from the cluster...[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"[kubelet-start] Starting the kubelet[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...This node has joined the cluster:* Certificate signing request was sent to apiserver and a response was received.* The Kubelet was informed of the new secure connection details.Run 'kubectl get nodes' on the control-plane 
to see this node join the cluster.root@k8s-node01:~# root@k8s-node02:~# kubeadm join 10.0.0.21:6443 --token qf3z22.qwtqieutbkik6dy4 \> --discovery-token-ca-cert-hash sha256:2ade8c834a41cc1960993a600c89fa4bb86e3594f82e09bcd42633d4defbda0d[preflight] Running pre-flight checks[preflight] Reading configuration from the cluster...[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"[kubelet-start] Starting the kubelet[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...This node has joined the cluster:* Certificate signing request was sent to apiserver and a response was received.* The Kubelet was informed of the new secure connection details.Run 'kubectl get nodes' on the control-plane to see this node join the cluster.root@k8s-node02:~# 查看集群root@k8s-master01:~# kubectl  get nodeNAME           STATUS   ROLES           AGE    VERSIONk8s-master01   Ready    control-plane   111s   v1.24.0k8s-node01     Ready    <none>          82s    v1.24.0k8s-node02     Ready    <none>          92s    v1.24.0root@k8s-master01:~# root@k8s-master01:~# root@k8s-master01:~# kubectl  get pod -ANAMESPACE     NAME                                   READY   STATUS    RESTARTS   AGEkube-system   coredns-bc77466fc-jxkpv                1/1     Running   0          83skube-system   coredns-bc77466fc-nrc9l                1/1     Running   0          83skube-system   etcd-k8s-master01                      1/1     Running   0          87skube-system   kube-apiserver-k8s-master01            1/1     Running   0          89skube-system   kube-controller-manager-k8s-master01   1/1     Running   0          87skube-system   kube-proxy-2lgrn                       1/1     Running   0          83skube-system   kube-proxy-69p9r                       1/1     Running   0          47skube-system   kube-proxy-g58m2                       1/1     Running   0          42skube-system   kube-scheduler-k8s-master01            1/1     Running   0          87sroot@k8s-master01:~# 配置calicowget https://raw.githubusercontent.com/cby-chen/Kubernetes/main/yaml/calico-ipv6.yaml# vim calico-ipv6.yaml# calico-config ConfigMap处    "ipam": {        "type": "calico-ipam",        "assign_ipv4": "true",        "assign_ipv6": "true"    },    - name: IP      value: "autodetect"    - name: IP6      value: "autodetect"    - name: CALICO_IPV4POOL_CIDR      value: "172.16.0.0/16"    - name: CALICO_IPV6POOL_CIDR      value: "fc00::/48"    - name: FELIX_IPV6SUPPORT      value: "true"kubectl  apply -f calico-ipv6.yaml 测试IPV6root@k8s-master01:~# cat cby.yaml apiVersion: apps/v1kind: Deploymentmetadata:  name: chenbyspec:  replicas: 3  selector:    matchLabels:      app: chenby  template:    metadata:      labels:        app: chenby    spec:      containers:      - name: chenby        image: nginx        resources:          limits:            memory: "128Mi"            cpu: "500m"        ports:        - containerPort: 80---apiVersion: v1kind: Servicemetadata:  name: chenbyspec:  ipFamilyPolicy: PreferDualStack  ipFamilies:  - IPv6  - IPv4  type: NodePort  selector:    app: chenby  ports:  - port: 80    targetPort: 80kubectl  apply -f cby.yaml root@k8s-master01:~# kubectl  get pod NAME                      READY   STATUS    RESTARTS   AGEchenby-57479d5997-6pfzg   1/1     Running   0          
6mchenby-57479d5997-jjwpk   1/1     Running   0          6mchenby-57479d5997-pzrkc   1/1     Running   0          6mroot@k8s-master01:~# kubectl  get svcNAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGEchenby       NodePort    fd00::f816   <none>        80:30265/TCP   6m7skubernetes   ClusterIP   10.96.0.1    <none>        443/TCP        168mroot@k8s-master01:~# curl -I http://[2408:8207:78ce:7561::21]:30265/HTTP/1.1 200 OKServer: nginx/1.21.6Date: Wed, 11 May 2022 07:01:43 GMTContent-Type: text/htmlContent-Length: 615Last-Modified: Tue, 25 Jan 2022 15:03:52 GMTConnection: keep-aliveETag: "61f01158-267"Accept-Ranges: bytesroot@k8s-master01:~# curl -I http://10.0.0.21:30265/HTTP/1.1 200 OKServer: nginx/1.21.6Date: Wed, 11 May 2022 07:01:54 GMTContent-Type: text/htmlContent-Length: 615Last-Modified: Tue, 25 Jan 2022 15:03:52 GMTConnection: keep-aliveETag: "61f01158-267"Accept-Ranges: byteshttps://www.oiox.cn/ https://www.chenby.cn/ https://blog.oiox.cn/ https://cby-chen.github.io/ https://blog.csdn.net/qq\_33921750 https://my.oschina.net/u/3981543 https://www.zhihu.com/people/... https://segmentfault.com/u/hp... https://juejin.cn/user/331578... https://cloud.tencent.com/dev... https://www.jianshu.com/u/0f8... https://www.toutiao.com/c/use... CSDN、GitHub、知乎、开源中国、思否、掘金、简书、腾讯云、今日头条、集体博客、全网可搜《小陈运维》 文章次要公布于微信公众号:《Linux运维交换社区》 本文应用 文章同步助手 同步
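补充两条验证双栈地址分配情况的命令(示意写法,Pod 名称以实际为准):

# 查看 Pod 是否同时分配了 IPv4/IPv6 地址
kubectl get pod chenby-57479d5997-6pfzg -o jsonpath='{.status.podIPs}{"\n"}'

# 查看 Service 的地址族与 ClusterIP
kubectl get svc chenby -o jsonpath='{.spec.ipFamilies}{"\n"}{.spec.clusterIPs}{"\n"}'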

May 12, 2022 · 1 min · jiezi

关于kubernetes:部署kubernetes官网博客

部署kubernetes官网博客拜访 https://kubernetes.io/ 有些时候不问题,部署离线内网应用官网以及博客, 各位尝鲜能够拜访 https://doc.oiox.cn/ 装置dockerroot@cby:~# curl -sSL https://get.daocloud.io/docker | sh# Executing docker install script, commit: 0221adedb4bcde0f3d18bddda023544fc56c29d1+ sh -c apt-get update -qq >/dev/null+ sh -c DEBIAN_FRONTEND=noninteractive apt-get install -y -qq apt-transport-https ca-certificates curl >/dev/null+ sh -c curl -fsSL "https://download.docker.com/linux/ubuntu/gpg" | gpg --dearmor --yes -o /usr/share/keyrings/docker-archive-keyring.gpg+ sh -c chmod a+r /usr/share/keyrings/docker-archive-keyring.gpg+ sh -c echo "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu focal stable" > /etc/apt/sources.list.d/docker.list+ sh -c apt-get update -qq >/dev/null+ sh -c DEBIAN_FRONTEND=noninteractive apt-get install -y -qq --no-install-recommends docker-ce docker-ce-cli docker-compose-plugin docker-scan-plugin >/dev/null+ version_gte 20.10+ [ -z  ]+ return 0+ sh -c DEBIAN_FRONTEND=noninteractive apt-get install -y -qq docker-ce-rootless-extras >/dev/null+ sh -c docker versionClient: Docker Engine - Community Version:           20.10.15 API version:       1.41 Go version:        go1.17.9 Git commit:        fd82621 Built:             Thu May  5 13:19:23 2022 OS/Arch:           linux/amd64 Context:           default Experimental:      trueServer: Docker Engine - Community Engine:  Version:          20.10.15  API version:      1.41 (minimum version 1.12)  Go version:       go1.17.9  Git commit:       4433bf6  Built:            Thu May  5 13:17:28 2022  OS/Arch:          linux/amd64  Experimental:     false containerd:  Version:          1.6.4  GitCommit:        212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 runc:  Version:          1.1.1  GitCommit:        v1.1.1-0-g52de29d docker-init:  Version:          0.19.0  GitCommit:        de40ad0================================================================================To run Docker as a non-privileged user, consider setting up theDocker daemon in rootless mode for your user:    dockerd-rootless-setuptool.sh installVisit https://docs.docker.com/go/rootless/ to learn about rootless mode.To run the Docker daemon as a fully privileged service, but granting non-rootusers access, refer to https://docs.docker.com/go/daemon-access/WARNING: Access to the remote API on a privileged Docker daemon is equivalent         to root access on the host. 
Refer to the 'Docker daemon attack surface'         documentation for details: https://docs.docker.com/go/attack-surface/================================================================================root@cby:~# 克隆库root@cby:~# git clone https://github.com/kubernetes/website.gitCloning into 'website'...remote: Enumerating objects: 269472, done.remote: Counting objects: 100% (354/354), done.remote: Compressing objects: 100% (240/240), done.remote: Total 269472 (delta 201), reused 221 (delta 112), pack-reused 269118Receiving objects: 100% (269472/269472), 334.98 MiB | 1.92 MiB/s, done.Resolving deltas: 100% (190520/190520), done.Updating files: 100% (7124/7124), done.root@cby:~# cd websiteroot@cby:~/website# 装置依赖root@cby:~/website# git submodule update --init --recursive --depth 1Submodule 'api-ref-generator' (https://github.com/kubernetes-sigs/reference-docs) registered for path 'api-ref-generator'Submodule 'themes/docsy' (https://github.com/google/docsy.git) registered for path 'themes/docsy'Cloning into '/root/website/api-ref-generator'...Cloning into '/root/website/themes/docsy'...remote: Total 0 (delta 0), reused 0 (delta 0), pack-reused 0remote: Enumerating objects: 104, done.remote: Counting objects: 100% (104/104), done.remote: Compressing objects: 100% (53/53), done.remote: Total 61 (delta 34), reused 23 (delta 6), pack-reused 0Unpacking objects: 100% (61/61), 103.64 KiB | 252.00 KiB/s, done.From https://github.com/kubernetes-sigs/reference-docs * branch            55bce686224caba37f93e1e1eb53c0c9fc104ed4 -> FETCH_HEADSubmodule path 'api-ref-generator': checked out '55bce686224caba37f93e1e1eb53c0c9fc104ed4'Submodule 'themes/docsy' (https://github.com/google/docsy.git) registered for path 'api-ref-generator/themes/docsy'Cloning into '/root/website/api-ref-generator/themes/docsy'...remote: Total 0 (delta 0), reused 0 (delta 0), pack-reused 0remote: Enumerating objects: 251, done.remote: Counting objects: 100% (251/251), done.remote: Compressing objects: 100% (119/119), done.remote: Total 130 (delta 82), reused 34 (delta 3), pack-reused 0Receiving objects: 100% (130/130), 43.96 KiB | 308.00 KiB/s, done.Resolving deltas: 100% (82/82), completed with 77 local objects.From https://github.com/google/docsy * branch            6b30513dc837c5937de351f2fb2e4fedb04365c4 -> FETCH_HEADSubmodule path 'api-ref-generator/themes/docsy': checked out '6b30513dc837c5937de351f2fb2e4fedb04365c4'Submodule 'assets/vendor/Font-Awesome' (https://github.com/FortAwesome/Font-Awesome.git) registered for path 'api-ref-generator/themes/docsy/assets/vendor/Font-Awesome'Submodule 'assets/vendor/bootstrap' (https://github.com/twbs/bootstrap.git) registered for path 'api-ref-generator/themes/docsy/assets/vendor/bootstrap'Cloning into '/root/website/api-ref-generator/themes/docsy/assets/vendor/Font-Awesome'...Cloning into '/root/website/api-ref-generator/themes/docsy/assets/vendor/bootstrap'...remote: Total 0 (delta 0), reused 0 (delta 0), pack-reused 0remote: Enumerating objects: 8924, done.remote: Counting objects: 100% (8921/8921), done.remote: Compressing objects: 100% (2868/2868), done.remote: Total 4847 (delta 3027), reused 2286 (delta 1978), pack-reused 0Receiving objects: 100% (4847/4847), 5.77 MiB | 4.38 MiB/s, done.Resolving deltas: 100% (3027/3027), completed with 884 local objects.From https://github.com/FortAwesome/Font-Awesome * branch            fcec2d1b01ff069ac10500ac42e4478d20d21f4c -> FETCH_HEADSubmodule path 'api-ref-generator/themes/docsy/assets/vendor/Font-Awesome': checked out 
'fcec2d1b01ff069ac10500ac42e4478d20d21f4c'remote: Total 0 (delta 0), reused 0 (delta 0), pack-reused 0remote: Enumerating objects: 701, done.remote: Counting objects: 100% (701/701), done.remote: Compressing objects: 100% (511/511), done.remote: Total 528 (delta 115), reused 186 (delta 13), pack-reused 0Receiving objects: 100% (528/528), 2.01 MiB | 5.52 MiB/s, done.Resolving deltas: 100% (115/115), completed with 73 local objects.From https://github.com/twbs/bootstrap * branch            a716fb03f965dc0846df479e14388b1b4b93d7ce -> FETCH_HEADSubmodule path 'api-ref-generator/themes/docsy/assets/vendor/bootstrap': checked out 'a716fb03f965dc0846df479e14388b1b4b93d7ce'remote: Total 0 (delta 0), reused 0 (delta 0), pack-reused 0remote: Enumerating objects: 76, done.remote: Counting objects: 100% (76/76), done.remote: Compressing objects: 100% (37/37), done.remote: Total 39 (delta 30), reused 6 (delta 0), pack-reused 0Unpacking objects: 100% (39/39), 4.48 KiB | 654.00 KiB/s, done.From https://github.com/google/docsy * branch            1c77bb24483946f11c13f882f836a940b55ad019 -> FETCH_HEADSubmodule path 'themes/docsy': checked out '1c77bb24483946f11c13f882f836a940b55ad019'Submodule 'assets/vendor/Font-Awesome' (https://github.com/FortAwesome/Font-Awesome.git) registered for path 'themes/docsy/assets/vendor/Font-Awesome'Submodule 'assets/vendor/bootstrap' (https://github.com/twbs/bootstrap.git) registered for path 'themes/docsy/assets/vendor/bootstrap'Cloning into '/root/website/themes/docsy/assets/vendor/Font-Awesome'...Cloning into '/root/website/themes/docsy/assets/vendor/bootstrap'...remote: Total 0 (delta 0), reused 0 (delta 0), pack-reused 0remote: Enumerating objects: 8925, done.remote: Counting objects: 100% (8922/8922), done.remote: Compressing objects: 100% (2801/2801), done.remote: Total 4848 (delta 3031), reused 2433 (delta 2046), pack-reused 0Receiving objects: 100% (4848/4848), 5.65 MiB | 4.21 MiB/s, done.Resolving deltas: 100% (3031/3031), completed with 855 local objects.From https://github.com/FortAwesome/Font-Awesome * branch            7d3d774145ac38663f6d1effc6def0334b68ab7e -> FETCH_HEADSubmodule path 'themes/docsy/assets/vendor/Font-Awesome': checked out '7d3d774145ac38663f6d1effc6def0334b68ab7e'remote: Total 0 (delta 0), reused 0 (delta 0), pack-reused 0remote: Enumerating objects: 770, done.remote: Counting objects: 100% (770/770), done.remote: Compressing objects: 100% (497/497), done.remote: Total 524 (delta 161), reused 183 (delta 19), pack-reused 0Receiving objects: 100% (524/524), 2.01 MiB | 2.53 MiB/s, done.Resolving deltas: 100% (161/161), completed with 122 local objects.From https://github.com/twbs/bootstrap * branch            043a03c95a2ad6738f85b65e53b9dbdfb03b8d10 -> FETCH_HEADSubmodule path 'themes/docsy/assets/vendor/bootstrap': checked out '043a03c95a2ad6738f85b65e53b9dbdfb03b8d10'root@cby:~/website# 构建镜像root@cby:~/website# make container-imagedocker build . 
\    --network=host \    --tag gcr.io/k8s-staging-sig-docs/k8s-website-hugo:v0.87.0-c8ffb2b5979c \    --build-arg HUGO_VERSION=0.87.0Sending build context to Docker daemon  4.096kBStep 1/12 : FROM golang:1.16-alpine1.16-alpine: Pulling from library/golang59bf1c3509f3: Pull complete 666ba61612fd: Pull complete 8ed8ca486205: Pull complete ca4bf87e467a: Pull complete 0435e0963794: Pull complete Digest: sha256:5616dca835fa90ef13a843824ba58394dad356b7d56198fb7c93cbe76d7d67feStatus: Downloaded newer image for golang:1.16-alpine ---> 7642119cd161Step 2/12 : LABEL maintainer="Luc Perkins <lperkins@linuxfoundation.org>" ---> Running in f6a8d1fa0c42Removing intermediate container f6a8d1fa0c42 ---> 291fd45ae748Step 3/12 : RUN apk add --no-cache     curl     gcc     g++     musl-dev     build-base     libc6-compat ---> Running in 209e30a852d3fetch https://dl-cdn.alpinelinux.org/alpine/v3.15/main/x86_64/APKINDEX.tar.gzfetch https://dl-cdn.alpinelinux.org/alpine/v3.15/community/x86_64/APKINDEX.tar.gz(1/25) Installing libgcc (10.3.1_git20211027-r0)(2/25) Installing libstdc++ (10.3.1_git20211027-r0)(3/25) Installing binutils (2.37-r3)(4/25) Installing libmagic (5.41-r0)(5/25) Installing file (5.41-r0)(6/25) Installing libgomp (10.3.1_git20211027-r0)(7/25) Installing libatomic (10.3.1_git20211027-r0)(8/25) Installing libgphobos (10.3.1_git20211027-r0)(9/25) Installing gmp (6.2.1-r1)(10/25) Installing isl22 (0.22-r0)(11/25) Installing mpfr4 (4.1.0-r0)(12/25) Installing mpc1 (1.2.1-r0)(13/25) Installing gcc (10.3.1_git20211027-r0)(14/25) Installing musl-dev (1.2.2-r7)(15/25) Installing libc-dev (0.7.2-r3)(16/25) Installing g++ (10.3.1_git20211027-r0)(17/25) Installing make (4.3-r0)(18/25) Installing fortify-headers (1.1-r1)(19/25) Installing patch (2.7.6-r7)(20/25) Installing build-base (0.5-r2)(21/25) Installing brotli-libs (1.0.9-r5)(22/25) Installing nghttp2-libs (1.46.0-r0)(23/25) Installing libcurl (7.80.0-r1)(24/25) Installing curl (7.80.0-r1)(25/25) Installing libc6-compat (1.2.2-r7)Executing busybox-1.34.1-r3.triggerOK: 198 MiB in 40 packagesRemoving intermediate container 209e30a852d3 ---> 83dfeba4ff34Step 4/12 : ARG HUGO_VERSION ---> Running in fdbe162165c2Removing intermediate container fdbe162165c2 ---> d6219e970f50Step 5/12 : RUN mkdir $HOME/src &&     cd $HOME/src &&     curl -L https://github.com/gohugoio/hugo/archive/refs/tags/v${HUGO_VERSION}.tar.gz | tar -xz &&     cd "hugo-${HUGO_VERSION}" &&     go install --tags extended ---> Running in fe0b26ed3841  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current                                 Dload  Upload   Total   Spent    Left  Speed  0     0    0     0    0     0      0      0 --:--:--  0:00:01 --:--:--     0100 35.2M    0 35.2M    0     0  2216k      0 --:--:--  0:00:16 --:--:-- 3037kgo: downloading github.com/alecthomas/chroma v0.9.2go: downloading github.com/bep/debounce v1.2.0go: downloading github.com/fsnotify/fsnotify v1.4.9go: downloading github.com/pkg/errors v0.9.1go: downloading github.com/spf13/afero v1.6.0go: downloading github.com/spf13/cobra v1.2.1go: downloading github.com/spf13/fsync v0.9.0go: downloading github.com/spf13/jwalterweatherman v1.1.0go: downloading github.com/spf13/pflag v1.0.5go: downloading golang.org/x/sync v0.0.0-20210220032951-036812b2e83cgo: downloading github.com/pelletier/go-toml v1.9.3go: downloading github.com/spf13/cast v1.4.0go: downloading github.com/PuerkitoBio/purell v1.1.1go: downloading github.com/gobwas/glob v0.2.3go: downloading github.com/mattn/go-isatty v0.0.13go: 
downloading github.com/mitchellh/mapstructure v1.4.1go: downloading github.com/aws/aws-sdk-go v1.40.8go: downloading github.com/dustin/go-humanize v1.0.0go: downloading gocloud.dev v0.20.0go: downloading github.com/pelletier/go-toml/v2 v2.0.0-beta.3.0.20210727221244-fa0796069526go: downloading golang.org/x/text v0.3.6go: downloading google.golang.org/api v0.51.0go: downloading github.com/jdkato/prose v1.2.1go: downloading github.com/kyokomi/emoji/v2 v2.2.8go: downloading github.com/mitchellh/hashstructure v1.1.0go: downloading github.com/olekukonko/tablewriter v0.0.5go: downloading github.com/armon/go-radix v1.0.0go: downloading github.com/gohugoio/locales v0.14.0go: downloading github.com/gohugoio/localescompressed v0.14.0go: downloading github.com/gorilla/websocket v1.4.2go: downloading github.com/rogpeppe/go-internal v1.8.0go: downloading gopkg.in/yaml.v2 v2.4.0go: downloading github.com/niklasfasching/go-org v1.5.0go: downloading github.com/bep/gitmap v1.1.2go: downloading github.com/gobuffalo/flect v0.2.3go: downloading golang.org/x/sys v0.0.0-20210630005230-0f9fa26af87cgo: downloading github.com/cpuguy83/go-md2man/v2 v2.0.0go: downloading github.com/cli/safeexec v1.0.0go: downloading github.com/dlclark/regexp2 v1.4.0go: downloading github.com/BurntSushi/locker v0.0.0-20171006230638-a6e239ea1c69go: downloading github.com/disintegration/gift v1.2.1go: downloading golang.org/x/image v0.0.0-20210220032944-ac19c3e999fbgo: downloading github.com/PuerkitoBio/urlesc v0.0.0-20170810143723-de5bf2ad4578go: downloading golang.org/x/net v0.0.0-20210614182718-04defd469f4ego: downloading go.opencensus.io v0.23.0go: downloading golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1go: downloading github.com/Azure/azure-pipeline-go v0.2.2go: downloading github.com/Azure/azure-storage-blob-go v0.9.0go: downloading github.com/google/uuid v1.1.2go: downloading github.com/google/wire v0.4.0go: downloading cloud.google.com/go v0.87.0go: downloading github.com/googleapis/gax-go v2.0.2+incompatiblego: downloading github.com/googleapis/gax-go/v2 v2.0.5go: downloading cloud.google.com/go/storage v1.10.0go: downloading golang.org/x/oauth2 v0.0.0-20210628180205-a41e5a781914go: downloading google.golang.org/genproto v0.0.0-20210716133855-ce7ef5c701eago: downloading github.com/mattn/go-runewidth v0.0.9go: downloading github.com/bep/tmc v0.5.1go: downloading github.com/rwcarlsen/goexif v0.0.0-20190401172101-9e8deecbddbdgo: downloading github.com/gohugoio/go-i18n/v2 v2.1.3-0.20210430103248-4c28c89f8013go: downloading github.com/russross/blackfriday v1.5.3-0.20200218234912-41c5fccfd6f6go: downloading github.com/bep/gowebp v0.1.0go: downloading github.com/muesli/smartcrop v0.3.0go: downloading google.golang.org/grpc v1.39.0go: downloading github.com/mattn/go-ieproxy v0.0.1go: downloading github.com/russross/blackfriday/v2 v2.0.1go: downloading google.golang.org/protobuf v1.27.1go: downloading github.com/danwakefield/fnmatch v0.0.0-20160403171240-cbb64ac3d964go: downloading github.com/yuin/goldmark v1.4.0go: downloading github.com/yuin/goldmark-highlighting v0.0.0-20200307114337-60d527fdb691go: downloading github.com/miekg/mmark v1.3.6go: downloading github.com/tdewolff/minify/v2 v2.9.21go: downloading github.com/sanity-io/litter v1.5.1go: downloading github.com/getkin/kin-openapi v0.68.0go: downloading github.com/ghodss/yaml v1.0.0go: downloading github.com/golang/groupcache v0.0.0-20200121045136-8c9f03a8e57ego: downloading github.com/shurcooL/sanitized_anchor_name v1.0.0go: downloading 
github.com/jmespath/go-jmespath v0.4.0go: downloading github.com/BurntSushi/toml v0.3.1go: downloading github.com/evanw/esbuild v0.12.17go: downloading github.com/tdewolff/parse/v2 v2.5.19go: downloading github.com/bep/godartsass v0.12.0go: downloading github.com/bep/golibsass v1.0.0go: downloading github.com/golang/protobuf v1.5.2go: downloading github.com/google/go-cmp v0.5.6go: downloading github.com/go-openapi/jsonpointer v0.19.5go: downloading github.com/go-openapi/swag v0.19.5go: downloading github.com/mailru/easyjson v0.0.0-20190626092158-b2ccc519800eRemoving intermediate container fe0b26ed3841 ---> 034cde1adc00Step 6/12 : FROM golang:1.16-alpine ---> 7642119cd161Step 7/12 : RUN apk add --no-cache     runuser     git     openssh-client     rsync     npm &&     npm install -D autoprefixer postcss-cli ---> Running in 2af5902e5287fetch https://dl-cdn.alpinelinux.org/alpine/v3.15/main/x86_64/APKINDEX.tar.gzfetch https://dl-cdn.alpinelinux.org/alpine/v3.15/community/x86_64/APKINDEX.tar.gz(1/27) Installing brotli-libs (1.0.9-r5)(2/27) Installing nghttp2-libs (1.46.0-r0)(3/27) Installing libcurl (7.80.0-r1)(4/27) Installing expat (2.4.7-r0)(5/27) Installing pcre2 (10.39-r0)(6/27) Installing git (2.34.2-r0)(7/27) Installing c-ares (1.18.1-r0)(8/27) Installing libgcc (10.3.1_git20211027-r0)(9/27) Installing libstdc++ (10.3.1_git20211027-r0)(10/27) Installing icu-libs (69.1-r1)(11/27) Installing libuv (1.42.0-r0)(12/27) Installing nodejs-current (17.9.0-r0)(13/27) Installing npm (8.1.3-r0)(14/27) Installing openssh-keygen (8.8_p1-r1)(15/27) Installing ncurses-terminfo-base (6.3_p20211120-r0)(16/27) Installing ncurses-libs (6.3_p20211120-r0)(17/27) Installing libedit (20210910.3.1-r0)(18/27) Installing openssh-client-common (8.8_p1-r1)(19/27) Installing openssh-client-default (8.8_p1-r1)(20/27) Installing libacl (2.2.53-r0)(21/27) Installing lz4-libs (1.9.3-r1)(22/27) Installing popt (1.18-r0)(23/27) Installing zstd-libs (1.5.0-r0)(24/27) Installing rsync (3.2.3-r5)(25/27) Installing libeconf (0.4.2-r0)(26/27) Installing linux-pam (1.5.2-r0)(27/27) Installing runuser (2.37.4-r0)Executing busybox-1.34.1-r3.triggerOK: 106 MiB in 42 packagesadded 73 packages, and audited 74 packages in 15s17 packages are looking for funding  run `npm fund` for detailsfound 0 vulnerabilitiesRemoving intermediate container 2af5902e5287 ---> 620ef2580a98Step 8/12 : RUN mkdir -p /var/hugo &&     addgroup -Sg 1000 hugo &&     adduser -Sg hugo -u 1000 -h /var/hugo hugo &&     chown -R hugo: /var/hugo &&     runuser -u hugo -- git config --global --add safe.directory /src ---> Running in dc169979de70Removing intermediate container dc169979de70 ---> 1006a4277115Step 9/12 : COPY --from=0 /go/bin/hugo /usr/local/bin/hugo ---> 9bd8581cf0c3Step 10/12 : WORKDIR /src ---> Running in 89fb367fe208Removing intermediate container 89fb367fe208 ---> b299d26f87a7Step 11/12 : USER hugo:hugo ---> Running in 353a5aec3b6eRemoving intermediate container 353a5aec3b6e ---> ec88a8ce29a5Step 12/12 : EXPOSE 1313 ---> Running in 2649b06d597fRemoving intermediate container 2649b06d597f ---> 20b483234fdeSuccessfully built 20b483234fdeSuccessfully tagged gcr.io/k8s-staging-sig-docs/k8s-website-hugo:v0.87.0-c8ffb2b5979croot@cby:~/website# 构建容器root@cby:~/website# make container-servedocker run --rm --interactive --tty --volume /root/website:/src --cap-drop=ALL --cap-add=AUDIT_WRITE --read-only --mount type=tmpfs,destination=/tmp,tmpfs-mode=01777 -p 1313:1313 gcr.io/k8s-staging-sig-docs/k8s-website-hugo:v0.87.0-c8ffb2b5979c hugo server --buildFuture 
--environment development --bind 0.0.0.0 --destination /tmp/hugo --cleanDestinationDirStart building sites … hugo v0.87.0+extended linux/amd64 BuildDate=unknown----                   |  EN  |  ZH  | KO  | JA  | FR  | IT  | DE  | ES  | PT-BR | ID  | RU  | VI  | PL  | UK   -------------------+------+------+-----+-----+-----+-----+-----+-----+-------+-----+-----+-----+-----+------  Pages            | 1453 | 1015 | 539 | 450 | 338 |  71 | 164 | 292 |   186 | 335 | 155 |  77 |  69 |  92    Paginator pages  |   43 |    9 |   0 |   0 |   0 |   0 |   0 |   0 |     0 |   0 |   0 |   0 |   0 |   0    Non-page files   |  509 |  386 | 200 | 266 |  73 |  20 |  17 |  33 |    30 | 105 |  24 |   8 |   6 |  20    Static files     |  838 |  838 | 838 | 838 | 838 | 838 | 838 | 838 |   838 | 838 | 838 | 838 | 838 | 838    Processed images |    1 |    1 |   0 |   0 |   0 |   0 |   0 |   0 |     0 |   0 |   0 |   0 |   0 |   0    Aliases          |    8 |    2 |   3 |   1 |   0 |   1 |   0 |   0 |     1 |   1 |   1 |   0 |   0 |   0    Sitemaps         |    2 |    1 |   1 |   1 |   1 |   1 |   1 |   1 |     1 |   1 |   1 |   1 |   1 |   1    Cleaned          |    0 |    0 |   0 |   0 |   0 |   0 |   0 |   0 |     0 |   0 |   0 |   0 |   0 |   0  Built in 15926 msWatching for changes in /src/{archetypes,assets,content,data,i18n,layouts,package.json,postcss.config.js,static,themes}Watching for config changes in /src/config.toml, /src/themes/docsy/config.toml, /src/go.modEnvironment: "development"Serving pages from /tmp/hugoRunning in Fast Render Mode. For full rebuilds on change: hugo server --disableFastRenderWeb Server is available at http://localhost:1313/ (bind address 0.0.0.0)Press Ctrl+C to stop后盾启动root@cby:~# docker imagesREPOSITORY                                     TAG                    IMAGE ID       CREATED         SIZEgcr.io/k8s-staging-sig-docs/k8s-website-hugo   v0.87.0-c8ffb2b5979c   20b483234fde   4 minutes ago   501MB<none>                                         <none>                 034cde1adc00   4 minutes ago   1.8GBgolang                                         1.16-alpine            7642119cd161   2 months ago    302MBroot@cby:~#root@cby:~/website# docker run --rm --interactive -d --volume /root/website:/src --cap-drop=ALL --cap-add=AUDIT_WRITE --read-only --mount type=tmpfs,destination=/tmp,tmpfs-mode=01777 -p 1313:1313 gcr.io/k8s-staging-sig-docs/k8s-website-hugo:v0.87.0-c8ffb2b5979c hugo server --buildFuture --environment development --bind 0.0.0.0 --destination /tmp/hugo --cleanDestinationDirdocker run --rm --interactive -d --volume /root/website:/src --cap-drop=ALL --cap-add=AUDIT_WRITE --read-only --mount type=tmpfs,destination=/tmp,tmpfs-mode=01777 -p 1313:1313 gcr.io/k8s-staging-sig-docs/k8s-website-hugo:v0.87.0-c8ffb2b5979c hugo server --buildFuture --environment development --bind 0.0.0.0 --destination /tmp/hugo --cleanDestinationDirroot@cby:~/website# docker psCONTAINER ID   IMAGE                                                               COMMAND                  CREATED         STATUS         PORTS                                       NAMES06f34ad73c67   gcr.io/k8s-staging-sig-docs/k8s-website-hugo:v0.87.0-c8ffb2b5979c   "hugo server --build…"   5 seconds ago   Up 4 seconds   0.0.0.0:1313->1313/tcp, :::1313->1313/tcp   nervous_kilbyroot@cby:~/website# 更新文档root@hello:~/website# git pullremote: Enumerating objects: 187, done.remote: Counting objects: 100% (181/181), done.remote: Compressing objects: 100% (112/112), done.remote: Total 187 (delta 107), reused 126 (delta 69), 
pack-reused 6Receiving objects: 100% (187/187), 154.37 KiB | 403.00 KiB/s, done.Resolving deltas: 100% (107/107), completed with 35 local objects.From https://github.com/kubernetes/website   f559e15074..07e1929b49  main          -> origin/main   8c980f042b..68e621e794  dev-1.24-ko.1 -> origin/dev-1.24-ko.1Updating f559e15074..07e1929b49Fast-forward content/en/docs/concepts/cluster-administration/manage-deployment.md                             |   2 +- content/en/docs/concepts/containers/runtime-class.md                                             |   2 +- content/en/docs/concepts/workloads/pods/init-containers.md                                       |   1 - content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs_generate-csr.md            |   3 --- content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_preflight.md          |   3 --- content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_upload-certs.md       |   3 --- content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-join.md |   3 --- content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_token.md                         |   3 --- content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_token_create.md                  |   1 - content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_token_delete.md                  |   3 --- content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_version.md                       |   3 --- content/en/docs/setup/production-environment/windows/intro-windows-in-kubernetes.md              |   2 +- content/en/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes.md                         |   2 +- content/en/docs/tasks/configure-pod-container/configure-pod-initialization.md                    |   1 - content/en/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume.md              |   2 +- content/pt-br/blog/_posts/2022-02-17-updated-dockershim-faq.md                                   |   2 +- content/zh/docs/concepts/architecture/nodes.md                                                   | 134 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++--------------- content/zh/docs/concepts/cluster-administration/system-logs.md                                   | 117 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++--------------------------------------- content/zh/docs/concepts/containers/runtime-class.md                                             |  62 +++++++++++++++++++++----------------------------------------- content/zh/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins.md                | 111 +++++++++++++++++++++------------------------------------------------------------------------------------------ content/zh/docs/concepts/overview/kubernetes-api.md                                              |  71 +++++++++++++++++++++++++++++++++++++++++++++++++---------------------- content/zh/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs_generate-csr.md            |  26 ++++++++++++++++++++------ content/zh/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_preflight.md          |  28 +++++++++++++++++++++++++--- content/zh/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_upload-certs.md       |  30 +++++++++++++++++++++++++++++- 
content/zh/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-join.md |  20 +++++++++++++++++++- content/zh/docs/reference/setup-tools/kubeadm/generated/kubeadm_token.md                         |  24 +++++++++++++++++++++++- content/zh/docs/reference/setup-tools/kubeadm/generated/kubeadm_token_create.md                  |  51 ++++++++++++++++++++++++++++++++++++++++++++++++++- content/zh/docs/reference/setup-tools/kubeadm/generated/kubeadm_token_delete.md                  |  24 +++++++++++++++++++++++- content/zh/docs/reference/setup-tools/kubeadm/generated/kubeadm_version.md                       |  24 +++++++++++++++++++++++- static/_redirects                                                                                |  48 +++++++++++++++++++++++++++++++++--------------- 30 files changed, 539 insertions(+), 267 deletions(-)root@hello:~/website#  https://www.oiox.cn/ https://www.chenby.cn/ https://blog.oiox.cn/ https://cby-chen.github.io/ https://blog.csdn.net/qq\_33921750 https://my.oschina.net/u/3981543 https://www.zhihu.com/people/... https://segmentfault.com/u/hp... https://juejin.cn/user/331578... https://cloud.tencent.com/dev... https://www.jianshu.com/u/0f8... https://www.toutiao.com/c/use... CSDN、GitHub、知乎、开源中国、思否、掘金、简书、腾讯云、今日头条、集体博客、全网可搜《小陈运维》 文章次要公布于微信公众号:《Linux运维交换社区》

May 11, 2022 · 1 min · jiezi

关于kubernetes:博云Kubernetes-近年影响最大版本发布这几点值得关注

近几年影响最大版本来袭2022 年 5 月 3 日,Kubernetes 1.24 正式公布。这个版本的公布能够说是“捷足先登”和“万众瞩目”,因为此次公布对 Kubernetes 社区会带来深远影响。 在 1.24 版本中,共有 46 项性能加强,其中 13 个进入了稳定期,14 个是改良现有的性能,13 个是全新的性能,此外还有六个被废除的性能。置信很多理解 Kubernetes 的同学曾经晓得了,其中最重要的就是 Kubernetes 社区正式移除了对于 Docker 的反对,在经验了一年多、几个大版本的过渡期之后,这一天还是到来了。除了这个重大变更以外,其余的新的性能加强也值得咱们关注,因为篇幅的限度,在这里就不一一列举了,笔者先带大家理解几个方面,顺便也讲讲博云容器云平台在这些畛域的工作。 陈腐上架1. Docker 的离去Docker 和 Kubernetes 相互陪伴,走过了很久,当初是别离的时刻了。 不久前,Kubernetes 社区公布了一个纪录片,讲述了 Kubernetes 的诞生和倒退,其中提到了 Kubernetes 和 Docker 的瓜葛,这两个我的项目有时帮忙,经常竞争,总是单干。然而令人唏嘘的是,随着 Kubernetes 成长为容器编排的事实标准,容器运行时也百花齐放,Kubernetes 社区迫切需要一套对立的规范治理多种容器运行时,因而 CRI 应运而生,而 docker shim 垫片的形式也必将逐步退出历史舞台。 相比于历史,大家可能更关怀将来怎么办,只管 Kubernetes 官网发了几篇博文向用户阐明即便废除 Docker 也不必恐慌,然而很多用户可能还是不释怀。其实 Kubernetes 社区之所以抉择当初这个工夫点放弃 Docker,很大的起因是社区中的代替计划根本曾经成熟。 在运行时层面,containerd、cri-o 初露锋芒;在客户端命令行工具中 podman,nerdctl 都是十分优良的工具。以 podman 工具为例,在应用办法上,它与 Docker 非常相似。 没错,能够将 podman 间接当作 Docker 应用,这极大缩小了开发者的迁徙老本。 ...

May 11, 2022 · 2 min · jiezi

关于kubernetes:Rainbond结合NeuVector实践容器安全管理

前言Rainbond 是一个云原生利用治理平台,应用简略,不须要懂容器、Kubernetes和底层简单技术,反对治理多个Kubernetes集群,和治理企业应用全生命周期。然而随着云原生时代的一点点提高,层出不穷的网络容器安全事件的呈现,也是让大家对于容器平安,网络安全的重要性,有了更进一步的想法,Rainbond 为了保障用户在应用的过程中不呈现相似的容器安全事件,特地适配整合了 NeuVector。 NeuVector 是业界首个端到端的开源容器平安平台,为容器化工作负载提供企业级零信赖平安的解决方案。NeuVector 能够提供实时深刻的容器网络可视化、东西向容器网络监控、被动隔离和爱护、容器主机平安以及容器外部平安,容器治理平台无缝集成并且实现利用级容器平安的自动化,实用于各种云环境、跨云或者本地部署等容器生产环境。 本文次要表述,基于 Rainbond 装置部署 NeuVector 容器平安平台的步骤,以及配合 Rainbond 实现生产环境中的最佳实际。 部署 NeuVectorNeuVector 有多种部署装置模式,为了更加简化装置,选用 helm 的模式进行装置,Rainbond 也是反对 helm 商店的模式,只须要在利用市场,增加一个新的商店,把 helm商店的URL 填写上即可。 筹备工作创立团队 NeuVector 通常是装置在 neuvector 命名空间外面的,而在 Rainbond ,团队的概念则是对应 kubernetes 里命名空间,所以通过 helm 装置的时候,首先须要创立进去对应的团队,团队的英文名对应的则是该团队在集群中的命名空间,此处填写 neuvector,抉择对应集群即可。 <img src="https://static.goodrain.com/wechat/neuvector/1.png" style="zoom: 50%;" /> 对接 helm 商店 Rainbond反对基于helm间接部署利用,所以接下来对接 neuvector 官网helm仓库,后续基于Helm商店部署 neuvector 即可, 在利用市场页面,点击增加商店,抉择helm商店,输出相干信息即可实现对接。 helm 商店地址:https://neuvector.github.io/n... 装置在 helm 仓库找到 core 点击装置到 neuvector 团队里即可 批改默认的 key 以及 value values 配置项: 键值registrydocker.iotag5.0.0-preview.1controller.image.repositoryneuvector/controller.previewenforcer.image.repositoryneuvector/enforcer.previewmanager.image.repositoryneuvector/manager.previewcve.scanner.image.repositoryneuvector/scanner.previewcve.updater.image.repositoryneuvector/updater.previewmanager.svc.typeClusterIP装置实现当前,确认 pod 的状态为 Running ...
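补充一个小技巧:上表中的键值除了在界面里逐项修改,也可以整理成一份 values.yaml,在安装时一次性传入。下面是按上表整理出来的一个示意写法(仅为草图,键的层级以 neuvector/core Chart 实际的 values 结构为准):

```yaml
# values.yaml —— 按上表整理的示意写法,层级以 neuvector/core Chart 实际定义为准
registry: docker.io
tag: 5.0.0-preview.1
controller:
  image:
    repository: neuvector/controller.preview
enforcer:
  image:
    repository: neuvector/enforcer.preview
manager:
  image:
    repository: neuvector/manager.preview
  svc:
    type: ClusterIP
cve:
  scanner:
    image:
      repository: neuvector/scanner.preview
  updater:
    image:
      repository: neuvector/updater.preview
```

如果是直接用 helm 命令安装,大致等价于 `helm install neuvector <core-chart> -n neuvector -f values.yaml`(命令仅作示意),效果与在界面里逐项修改相同。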

May 10, 2022 · 1 min · jiezi

关于kubernetes:如何进行架构设计-深度揭秘阿里云-Serverless-Kubernetes2

文丨陈晓宇,阿里云技术专家 在上一篇《故事,从Docker讲起|深度揭秘阿里云Serverless Kubernetes(1)》的文章中,咱们介绍了 Serverless Kubernetes 的演进历史,在这一篇咱们将进入阿里云 Serverless Kubernetes 外部,从架构层面看一下阿里云是如何实现 Serverless Kubernetes 的。 整体架构Serverless Kubernetes 设计的初衷是为了提供一套免运维的云上托管 Kubernetes。所以,咱们不仅要解决 Kubernetes Master(etcd、kube-apisever、kube-controller-manager)的托管,而且还须要实现 Pod 的云上托管,这样用户只须要提交 Yaml 便能够启动服务,不再须要保护计算节点。基于此,咱们将整个 Serverless Kubernetes 架构做了如下设计: 整个架构分为三层:Kubernetes Master 和虚构 Kubelet、ECI 后盾服务以及 ECI Agent。 最上层是一个云上托管的 Kubernetes Master 和一个虚构 Kubelet(Virtual Kubelet)。Virtual Kubelet 和规范的 Kubelet 相似,只不过在启动 Pod 的时候不再是调用本地的 CRI 启动容器,而是通过 HTTP 的形式调用 ECI OpenAPI 启动 ECI 实例,每个 ECI 就是一个 Pod。Virtual Kubelet 设计的初衷次要是为了贴合 k8s 原生架构:在 k8s 中,Pod 是由 Kubelet 拉起并且定时同步状态。 中间层是 ECI 后盾服务,负责资源配置和调度。如用户配置日志采集,ECI 后盾会去 SLS(阿里云日志服务)创立日志采集配置,如果用户通过 PVC 为 Pod 挂载云盘,ECI 后盾服务会创立云盘并将云盘挂载到 ECI 上。另外,ECI 后盾还负责资源调度,抉择适合的物理机节点启动 ECI,具体启动形式是通过部署在每个节点上的 proxy 实现。 ...
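站在使用者的角度,Virtual Kubelet 在集群里表现为一个"虚拟节点",要把 Pod 调度上去,常见做法是用 nodeSelector 选中它,并容忍它默认携带的污点。下面是一个极简的示意清单,其中 label 与 taint 的取值沿用社区 Virtual Kubelet 的常见约定,并非阿里云 ECI 的确切配置,实际取值请以所用产品文档为准:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-on-virtual-node          # 示例名称
spec:
  containers:
  - name: nginx
    image: nginx
  nodeSelector:
    type: virtual-kubelet              # 假设:虚拟节点上的常见 label
  tolerations:
  - key: virtual-kubelet.io/provider   # 假设:社区 Virtual Kubelet 的默认污点 key
    operator: Exists
    effect: NoSchedule
```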

May 10, 2022 · 2 min · jiezi

关于kubernetes:如何进行容器镜像加速-深度揭秘阿里云-Serverless-Kubernetes3

容器相比虚拟机最突出的特点之一便是轻量化和疾速启动。相比虚拟机动辄十几个 G 的镜像,容器镜像只蕴含利用以及利用所需的依赖库,所以能够做到几百 M 甚至更少。但即便如此,几十秒的镜像拉取还是在劫难逃,如果镜像更大,则消耗工夫更长。 咱们团队(阿里云弹性容器 ECI)剖析了 3000 个不同业务 Pod 的启动工夫,具体如下图。能够看出,在 Pod 启动过程中,镜像拉取局部耗时最长。大部分 Pod 须要 30s 能力将镜像拉下来,这极大减少了容器的启动工夫。 如果是大规模启动,这个问题则会变得更蹩脚。镜像仓库会因为镜像并发拉取导致带宽打满或者服务端压力过大而间接解体。 咱们屡次遇到过这个问题。因为一个服务的正本数达到 1000+ ,在迅速扩容 1000+ 多个实例的时候,很多 Pod 都处于 Pending 状态,期待镜像拉取。 尽管 kubernetes 在调度的时候曾经反对镜像的亲和性,但只针对老镜像,如果并发启动的新镜像的话,还是须要从镜像仓库外面拉取。上面提供几种罕用的解决思路。 办法一:多镜像仓库多镜像仓库可能很好升高单个仓库的压力,在并发拉取镜像的时候,能够通过域名解析负载平衡的办法,将镜像仓库地址映射到不同的镜像仓库,从而升高单个仓库的压力。 不过,这里有个技术挑战点:镜像仓库之间的镜像同步。 为了确保 Docker 客户端无论从哪个仓库都能够获取到最新的镜像,须要保障镜像曾经胜利复制到了每个镜像仓库。开源的镜像仓库 Harbor 曾经反对镜像复制性能,能够帮忙咱们将镜像散发到不同的仓库中。 办法二:P2P 镜像散发多镜像仓库尽管可能缓解单个仓库的压力,但依然不能完全避免单个仓库被打爆的问题,而且多个仓库的运维老本也比拟高。相比而论 P2P 的计划则更加优雅。 说起 P2P 大家可能都不生疏,咱们罕用的迅雷下载就是应用了 P2P 的原理,还有最近比拟火的区块链技术底层也是基于 P2P 技术。 P2P 镜像散发的原理比较简单。首先将镜像分成很多的“块(block)”,如果某个 Docker 客户端拉取了这个块,那么其余的 Docker 客户端就能够从这个客户端拉数据,从而防止所有的申请都打到镜像仓库。Dragonfly 是阿里开源的 P2P 散发工具。原理如下图所示: 其中的 SuperNode 是大脑,负责存储 “块”和客户端的关系,客户端第一次申请会被打到 SuperNode 节点,而后 SuperNode 回源去镜像仓库拉取数据转发给客户端,并且会记录这些块和客户端的对应关系。后续其余客户端申请这些块的时候,SuperNode 会通知客户端应该去方才胜利拉取的节点上获取数据,从而升高 registry 的负载。上面是咱们生产环境并发拉取 Tensorflow 镜像的实测的数据: ...

May 9, 2022 · 2 min · jiezi

关于kubernetes:使用kubeadm快速启用一个集群

应用kubeadm疾速启用一个集群 CentOS 配置YUM源cat <<EOF > /etc/yum.repos.d/kubernetes.repo[kubernetes]name=kubernetesbaseurl=https://mirrors.ustc.edu.cn/kubernetes/yum/repos/kubernetes-el7-$basearchenabled=1EOFsetenforce 0yum install -y kubelet kubeadm kubectlsystemctl enable kubelet && systemctl start kubelet# 将 SELinux 设置为 permissive 模式(相当于将其禁用)sudo setenforce 0sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/configsudo systemctl enable --now kubeletUbuntu 配置APT源curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -cat <<EOF >/etc/apt/sources.list.d/kubernetes.listdeb https://mirrors.ustc.edu.cn/kubernetes/apt kubernetes-xenial mainEOFapt-get updateapt-get install -y kubelet kubeadm kubectl配置containerdwget https://github.com/containerd/containerd/releases/download/v1.6.4/cri-containerd-cni-1.6.4-linux-amd64.tar.gz#解压tar -C / -xzf cri-containerd-cni-1.6.4-linux-amd64.tar.gz#创立服务启动文件cat > /etc/systemd/system/containerd.service <<EOF[Unit]Description=containerd container runtimeDocumentation=https://containerd.ioAfter=network.target local-fs.target[Service]ExecStartPre=-/sbin/modprobe overlayExecStart=/usr/local/bin/containerdType=notifyDelegate=yesKillMode=processRestart=alwaysRestartSec=5LimitNPROC=infinityLimitCORE=infinityLimitNOFILE=infinityTasksMax=infinityOOMScoreAdjust=-999[Install]WantedBy=multi-user.targetEOFmkdir -p /etc/containerdcontainerd config default | tee /etc/containerd/config.tomlsystemctl daemon-reloadsystemctl enable --now containerd配置根底环境cat <<EOF | sudo tee /etc/modules-load.d/k8s.confbr_netfilterEOFcat <<EOF | sudo tee /etc/sysctl.d/k8s.confnet.bridge.bridge-nf-call-ip6tables = 1net.bridge.bridge-nf-call-iptables = 1EOFsudo sysctl --systemecho 1 > /proc/sys/net/ipv4/ip_forwardhostnamectl set-hostname k8s-master01hostnamectl set-hostname k8s-node01hostnamectl set-hostname k8s-node02sed -ri 's/.*swap.*/#&/' /etc/fstabswapoff -a && sysctl -w vm.swappiness=0cat /etc/fstabhostnamectl set-hostname k8s-master01hostnamectl set-hostname k8s-node01hostnamectl set-hostname k8s-node02cat > /etc/hosts <<EOF127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4::1         localhost localhost.localdomain localhost6 localhost6.localdomain6192.168.1.31 k8s-master01192.168.1.32 k8s-node01192.168.1.33 k8s-node01EOF初始化装置root@k8s-master01:~# kubeadm config imagesmissing subcommand; "images" is not meant to be run on its ownTo see the stack trace of this error execute with --v=5 or higherroot@k8s-master01:~# kubeadm config images listk8s.gcr.io/kube-apiserver:v1.24.0k8s.gcr.io/kube-controller-manager:v1.24.0k8s.gcr.io/kube-scheduler:v1.24.0k8s.gcr.io/kube-proxy:v1.24.0k8s.gcr.io/pause:3.7k8s.gcr.io/etcd:3.5.3-0k8s.gcr.io/coredns/coredns:v1.8.6root@k8s-master01:~# root@k8s-master01:~# root@k8s-master01:~# root@k8s-master01:~# kubeadm init  --image-repository registry.cn-hangzhou.aliyuncs.com/chenby[init] Using Kubernetes version: v1.24.0[preflight] Running pre-flight checks[preflight] Pulling images required for setting up a Kubernetes cluster[preflight] This might take a minute or two, depending on the speed of your internet connection[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'[certs] Using certificateDir folder "/etc/kubernetes/pki"[certs] Generating "ca" certificate and key[certs] Generating "apiserver" certificate and key[certs] apiserver serving cert is signed for DNS names [k8s-master01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] 
and IPs [10.96.0.1 192.168.1.31][certs] Generating "apiserver-kubelet-client" certificate and key[certs] Generating "front-proxy-ca" certificate and key[certs] Generating "front-proxy-client" certificate and key[certs] Generating "etcd/ca" certificate and key[certs] Generating "etcd/server" certificate and key[certs] etcd/server serving cert is signed for DNS names [k8s-master01 localhost] and IPs [192.168.1.31 127.0.0.1 ::1][certs] Generating "etcd/peer" certificate and key[certs] etcd/peer serving cert is signed for DNS names [k8s-master01 localhost] and IPs [192.168.1.31 127.0.0.1 ::1][certs] Generating "etcd/healthcheck-client" certificate and key[certs] Generating "apiserver-etcd-client" certificate and key[certs] Generating "sa" key and public key[kubeconfig] Using kubeconfig folder "/etc/kubernetes"[kubeconfig] Writing "admin.conf" kubeconfig file[kubeconfig] Writing "kubelet.conf" kubeconfig file[kubeconfig] Writing "controller-manager.conf" kubeconfig file[kubeconfig] Writing "scheduler.conf" kubeconfig file[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"[kubelet-start] Starting the kubelet[control-plane] Using manifest folder "/etc/kubernetes/manifests"[control-plane] Creating static Pod manifest for "kube-apiserver"[control-plane] Creating static Pod manifest for "kube-controller-manager"[control-plane] Creating static Pod manifest for "kube-scheduler"[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s[apiclient] All control plane components are healthy after 9.502219 seconds[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster[upload-certs] Skipping phase. 
Please see --upload-certs[mark-control-plane] Marking the node k8s-master01 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers][mark-control-plane] Marking the node k8s-master01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule node-role.kubernetes.io/control-plane:NoSchedule][bootstrap-token] Using token: nsiavq.637f6t76cbtwckq9[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key[addons] Applied essential addon: CoreDNS[addons] Applied essential addon: kube-proxyYour Kubernetes control-plane has initialized successfully!To start using your cluster, you need to run the following as a regular user:  mkdir -p $HOME/.kube  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config  sudo chown $(id -u):$(id -g) $HOME/.kube/configAlternatively, if you are the root user, you can run:  export KUBECONFIG=/etc/kubernetes/admin.confYou should now deploy a pod network to the cluster.Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:  https://kubernetes.io/docs/concepts/cluster-administration/addons/Then you can join any number of worker nodes by running the following on each as root:kubeadm join 192.168.1.31:6443 --token nsiavq.637f6t76cbtwckq9 \        --discovery-token-ca-cert-hash sha256:963b47c1d46199eb28c2813c893fcd201cfaa32cfdfd521f6bc78a70c13878c4 root@k8s-master01:~# root@k8s-master01:~#   mkdir -p $HOME/.kuberoot@k8s-master01:~#   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/configroot@k8s-master01:~#   sudo chown $(id -u):$(id -g) $HOME/.kube/configroot@k8s-master01:~# root@k8s-node01:~# kubeadm join 192.168.1.31:6443 --token nsiavq.637f6t76cbtwckq9 \>         --discovery-token-ca-cert-hash sha256:963b47c1d46199eb28c2813c893fcd201cfaa32cfdfd521f6bc78a70c13878c4[preflight] Running pre-flight checks[preflight] Reading configuration from the cluster...[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"[kubelet-start] Starting the kubelet[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...This node has joined the cluster:* Certificate signing request was sent to apiserver and a response was received.* The Kubelet was informed of the new secure connection details.Run 'kubectl get nodes' on the control-plane to see this node join the cluster.root@k8s-node01:~# root@k8s-node02:~# kubeadm join 192.168.1.31:6443 --token nsiavq.637f6t76cbtwckq9 \>         --discovery-token-ca-cert-hash sha256:963b47c1d46199eb28c2813c893fcd201cfaa32cfdfd521f6bc78a70c13878c4[preflight] Running pre-flight checks[preflight] 
Reading configuration from the cluster...[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"[kubelet-start] Starting the kubelet[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...This node has joined the cluster:* Certificate signing request was sent to apiserver and a response was received.* The Kubelet was informed of the new secure connection details.Run 'kubectl get nodes' on the control-plane to see this node join the cluster.root@k8s-node02:~# 验证root@k8s-master01:~# kubectl  get nodeNAME           STATUS   ROLES           AGE   VERSIONk8s-master01   Ready    control-plane   86s   v1.24.0k8s-node01     Ready    <none>          42s   v1.24.0k8s-node02     Ready    <none>          37s   v1.24.0root@k8s-master01:~# root@k8s-master01:~# root@k8s-master01:~# kubectl  get pod -ANAMESPACE     NAME                                   READY   STATUS    RESTARTS   AGEkube-system   coredns-bc77466fc-jxkpv                1/1     Running   0          83skube-system   coredns-bc77466fc-nrc9l                1/1     Running   0          83skube-system   etcd-k8s-master01                      1/1     Running   0          87skube-system   kube-apiserver-k8s-master01            1/1     Running   0          89skube-system   kube-controller-manager-k8s-master01   1/1     Running   0          87skube-system   kube-proxy-2lgrn                       1/1     Running   0          83skube-system   kube-proxy-69p9r                       1/1     Running   0          47skube-system   kube-proxy-g58m2                       1/1     Running   0          42skube-system   kube-scheduler-k8s-master01            1/1     Running   0          87sroot@k8s-master01:~# https://www.oiox.cn/ https://www.chenby.cn/ https://blog.oiox.cn/ https://cby-chen.github.io/ https://blog.csdn.net/qq\_33921750 https://my.oschina.net/u/3981543 https://www.zhihu.com/people/... https://segmentfault.com/u/hp... https://juejin.cn/user/331578... https://cloud.tencent.com/dev... https://www.jianshu.com/u/0f8... https://www.toutiao.com/c/use... CSDN、GitHub、知乎、开源中国、思否、掘金、简书、腾讯云、今日头条、集体博客、全网可搜《小陈运维》 文章次要公布于微信公众号:《Linux运维交换社区》
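补充一点:上文的 kubeadm init 直接用了命令行参数(例如 --image-repository)。当参数变多时,也可以把它们放进一份配置文件,再用 `kubeadm init --config kubeadm.yaml` 初始化。下面是一份大致等价的示意配置,其中版本号与网段取值是假设,请按实际环境调整:

```yaml
# kubeadm.yaml —— 与上文 kubeadm init 参数大致等价的示意写法
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.24.0
imageRepository: registry.cn-hangzhou.aliyuncs.com/chenby
networking:
  serviceSubnet: 10.96.0.0/12      # 假设值,按需调整
  podSubnet: 10.244.0.0/16         # 假设值,按需调整
---
# kubelet 与上文 containerd 保持一致,使用 systemd cgroup 驱动
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
```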

May 6, 2022 · 1 min · jiezi

关于kubernetes:clientgo-gin的简单整合二list列表相关进一步操作

背景上一步实现了client-go gin的简略整合一(list列表相干操作),实现了简略的namespace deployment service的name的输入!当初我想输入更多的内容,也过后深刻一下kubernetes这些根底! 1. client-go gin的简略整合二(list列表相干进一步操作)1. 从namespace开始[root@zhangpeng ~]# kubectl get ns -o wide首先我想输入namespace的STATUS状态和AGE!以develop为例看一下还有什么想输入的信息 [root@zhangpeng ~]# kubectl get ns develop -o yamlcreationTimestamp labels status状态在这里也是能够体现的!入手吧src/service/Namespace.go package serviceimport ( "context" "github.com/gin-gonic/gin" . "k8s-demo1/src/lib" metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" "time")type Time struct { time.Time `protobuf:"-"`}type Namespace struct { Name string CreateTime Time `json:"CreateTime"` Status string Labels map[string]string}func ListNamespace(g *gin.Context) { ns, err := K8sClient.CoreV1().Namespaces().List(context.Background(), metav1.ListOptions{}) if err != nil { g.Error(err) return } ret := make([]*Namespace, 0) for _, item := range ns.Items { ret = append(ret, &Namespace{ Name: item.Name, CreateTime: Time(item.CreationTimestamp), Status: string(item.Status.Phase), Labels: item.Labels, }) } g.JSON(200, ret) return}注:毕竟老手不太会解决数据,就做了如下解决,先能展现出本人先要的数据。前面再作深刻的学习!同理status然而我这里偷懒了......间接搞了一个string。短期来看应该没有什么问题吧?同理labels map[string]string运行main.go,main.go仍然是原来的没有进行其余批改如下: ...

May 6, 2022 · 3 min · jiezi

关于kubernetes:clientgo连接kubernetes集群delete相关操作

背景紧跟client-go连贯kubernetes集群-connect and list,client-go连贯kubernetes集群-create相干操作与client-go连贯kubernetes集群-update相干操作。当初操作一下删除deployment 与namespace。当然了也想看一下操作集群crud的操作都有哪些动作! client-go连贯kubernetes集群-delete相干操作删除deploymentmain.go package mainimport ( "context" "flag" "fmt" metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" "k8s.io/client-go/kubernetes" "k8s.io/client-go/tools/clientcmd" "k8s.io/client-go/util/homedir" "path/filepath")func main() { var kubeconfig *string if home := homedir.HomeDir(); home != "" { kubeconfig = flag.String("kubeconfig", filepath.Join(home, ".kube", "config"), "(optional) absolute path to the kubeconfig file") } else { kubeconfig = flag.String("kubeconfig", "", "absolute path to the kubeconfig file") } flag.Parse() config, err := clientcmd.BuildConfigFromFlags("", *kubeconfig) if err != nil { panic(err.Error()) } // create the clientset clientset, err := kubernetes.NewForConfig(config) if err != nil { panic(err.Error()) } DeploymentName := "nginx" if err = clientset.AppsV1().Deployments("zhangpeng").Delete(context.TODO(), DeploymentName, metav1.DeleteOptions{}); err != nil { fmt.Println(err) return }}kubectl get deploymnt -n zhangpeng ...

May 4, 2022 · 2 min · jiezi

关于kubernetes:clientgo连接kubernetes集群update相关操作

背景:紧接client-go连贯kubernetes集群-connect and list,client-go连贯kubernetes集群-create相干操作。实例都是拿namespace 和deployment两个为代表进行开展延长的(集体环境中deployment还是具备代表性的),后面创立了namespace deployment,失常的流程下一步就是批改namespace and deployment 了! client-go连贯kubernetes集群-update相干操作1. namespace的update参照create先看一眼&corev1.Namespace metav1.ObjectMeta中都有哪些配置能够批改,metav1.ObjectMeta{}填充一下所有字段:Name还是默认的zhangpeng namespace了,我增加一个labels?main.go package mainimport ( "context" "flag" "fmt" corev1 "k8s.io/api/core/v1" metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" "k8s.io/client-go/kubernetes" "k8s.io/client-go/tools/clientcmd" "k8s.io/client-go/util/homedir" "path/filepath")func main() { var kubeconfig *string if home := homedir.HomeDir(); home != "" { kubeconfig = flag.String("kubeconfig", filepath.Join(home, ".kube", "config"), "(optional) absolute path to the kubeconfig file") } else { kubeconfig = flag.String("kubeconfig", "", "absolute path to the kubeconfig file") } flag.Parse() config, err := clientcmd.BuildConfigFromFlags("", *kubeconfig) if err != nil { panic(err.Error()) } // create the clientset clientset, err := kubernetes.NewForConfig(config) if err != nil { panic(err.Error()) } namespace := &corev1.Namespace{ ObjectMeta: metav1.ObjectMeta{ Name: "zhangpeng", GenerateName: "", Namespace: "", SelfLink: "", UID: "", ResourceVersion: "", Generation: 0, CreationTimestamp: metav1.Time{}, DeletionTimestamp: nil, DeletionGracePeriodSeconds: nil, Labels: map[string]string{ "dev": "test", }, Annotations: nil, OwnerReferences: nil, Finalizers: nil, ClusterName: "", ManagedFields: nil, }, } result, _ := clientset.CoreV1().Namespaces().Update(context.TODO(), namespace, metav1.UpdateOptions{}) fmt.Println(result)}运行main.go登录某云后盾确认生成label!这里正好看到了被迫配额与限度?刚巧最近在看文章的时候看到一个这样的例子:基于client-go操作namespace资源配额设计 ...

May 4, 2022 · 3 min · jiezi

关于kubernetes:clientgo连接kubernetes集群create

背景client-go连贯kubernetes集群-connect and list。都是查看获取list列表的。当初想用client-go创立利用该如何操作呢? client-go连贯kubernetes集群-create创立一个namespace:clientset.CoreV1().Namespaces().Create package mainimport ( "context" "flag" "fmt" v1 "k8s.io/api/core/v1" metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" "k8s.io/client-go/kubernetes" "k8s.io/client-go/tools/clientcmd" "k8s.io/client-go/util/homedir" "path/filepath")func main() { var kubeconfig *string if home := homedir.HomeDir(); home != "" { kubeconfig = flag.String("kubeconfig", filepath.Join(home, ".kube", "config"), "(optional) absolute path to the kubeconfig file") } else { kubeconfig = flag.String("kubeconfig", "", "absolute path to the kubeconfig file") } flag.Parse() config, err := clientcmd.BuildConfigFromFlags("", *kubeconfig) if err != nil { panic(err.Error()) } // create the clientset clientset, err := kubernetes.NewForConfig(config) if err != nil { panic(err.Error()) } namespace := &v1.Namespace{ ObjectMeta: metav1.ObjectMeta{ Name: "zhangpeng", }, } result, err := clientset.CoreV1().Namespaces().Create(context.TODO(), namespace, metav1.CreateOptions{}) if err != nil { fmt.Println(err) } else { fmt.Printf("Create ns %s SuccessFul !", result.ObjectMeta.Name) list, _ := clientset.CoreV1().Namespaces().List(context.Background(), metav1.ListOptions{}) for _, item := range list.Items { fmt.Println(item.Name) } } //fmt.Println(clientset.ServerVersion()) //list, _ := clientset.CoreV1().Namespaces().List(context.Background(), metav1.ListOptions{}) //for _, item := range list.Items { // fmt.Println(item.Name) // //} //fmt.Println("pod list in develop") //list1, _ := clientset.CoreV1().Pods("develop").List(context.Background(), metav1.ListOptions{}) //for _, item := range list1.Items { // fmt.Println(item.Name) // //} clientset.AppsV1()}嗯打印在一起了 也能够加个换行符? ...

May 4, 2022 · 3 min · jiezi

关于kubernetes:kubevirtVirtualMachineInstanceReplicaSetvmis扩缩容弹性伸缩

@TOC 概述/了解VirtualMachineInstanceReplicaSet(vmis)确保指定数量的 VirtualMachineInstance(vmi) 正本在任何时候都在运行。咱们能够这样了解,vmis就是kubernetes(k8s)外面的控制器(DeployMent,ReplicaSet)治理咱们pod的正本数,实现扩缩容、回滚等。也能够借助HorizontalPodAutoscaler(hpa)实现弹性伸缩。这里咱们就说vmis控制器,在这里的vmis控制器,治理咱们vmi虚拟机实例的正本数,也能够实现扩缩容,借助hpa实现弹性伸缩。所有咱们的yaml文件写法原理都相似。 应用场景当须要许多雷同的虚拟机,并且不关怀在虚拟机终止后任何磁盘状态时。 创立vmis编写vmis的yaml文件 [root@master vm]# cat vmis.yamlapiVersion: kubevirt.io/v1alpha3kind: VirtualMachineInstanceReplicaSetmetadata: name: testreplicasetspec: replicas: 2 selector: matchLabels: myvmi: myvmi # 保持一致,抉择 template: metadata: labels: myvmi: myvmi # 保持一致,匹配 spec: domain: devices: disks: - name: containerdisk disk: bus: virtio resources: requests: memory: 1024M volumes: - name: containerdisk containerDisk: image: centos7 imagePullPolicy: IfNotPresent应用kubectl命令创立vmis [root@master vm]# kubectl apply -f vmis.yamlvirtualmachineinstancereplicaset.kubevirt.io/testreplicaset created查看运行状态 [root@master vm]# kubectl get vmisNAME AGE PHASE IP NODENAME READYtestreplicaset6vm9s 42s Running 10.244.0.139 master Falsetestreplicaset8dshm 22s Scheduling Falsetestreplicasetbqxnb 22s Scheduling False[root@master vm]# kubectl get vmisNAME AGE PHASE IP NODENAME READYtestreplicaset8dshm 46s Running 10.244.0.141 master Falsetestreplicasetbqxnb 46s Running 10.244.0.140 master False[root@master vm]# kubectl get podNAME READY STATUS RESTARTS AGEvirt-launcher-testreplicaset8dshm-nz7x2 2/2 Running 0 69svirt-launcher-testreplicasetbqxnb-ljp2f 2/2 Running 0 70sdescribe 查看详细信息[root@master vm]# kubectl describe vmirs testreplicasetName: testreplicasetNamespace: defaultLabels: <none>Annotations: kubevirt.io/latest-observed-api-version: v1 kubevirt.io/storage-observed-api-version: v1alpha3API Version: kubevirt.io/v1Kind: VirtualMachineInstanceReplicaSetMetadata: Creation Timestamp: 2022-05-02T13:50:05Z Generation: 2 Managed Fields: API Version: kubevirt.io/v1alpha3 Fields Type: FieldsV1 fieldsV1: f:metadata: f:annotations: f:kubevirt.io/latest-observed-api-version: f:kubevirt.io/storage-observed-api-version: f:spec: f:template: f:metadata: f:creationTimestamp: f:status: .: f:labelSelector: f:replicas: Manager: Go-http-client Operation: Update Time: 2022-05-02T13:50:05Z API Version: kubevirt.io/v1alpha3 Fields Type: FieldsV1 fieldsV1: f:metadata: f:annotations: ... ... .: f:memory: f:volumes: Manager: kubectl Operation: Update Time: 2022-05-02T13:50:05Z Resource Version: 267261 Self Link: /apis/kubevirt.io/v1/namespaces/default/virtualmachineinstancereplicasets/testreplicaset UID: 96d17d12-17b5-4df7-940a-fac7c6b820d2Spec: Replicas: 2 Selector: Match Labels: Myvmi: myvmi Template: Metadata: Creation Timestamp: <nil> Labels: Myvmi: myvmi Spec: Domain: Devices: Disks: Disk: Bus: virtio Name: containerdisk Resources: Requests: Memory: 1024M Volumes: Container Disk: Image: kubevirt/cirros-container-disk-demo Image Pull Policy: IfNotPresent Name: containerdiskStatus: Label Selector: myvmi=myvmi Replicas: 2Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulCreate 5m21s virtualmachinereplicaset-controller Started the virtual machine by creating the new virtual machine instance testreplicaseth6zsl Normal SuccessfulCreate 5m21s virtualmachinereplicaset-controller Started the virtual machine by creating the new virtual machine instance testreplicasetw75s4扩缩容查看vmis ...
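上文提到 vmis 可以借助 HorizontalPodAutoscaler(hpa)实现弹性伸缩。由于 VirtualMachineInstanceReplicaSet 暴露了 scale 子资源,可以直接作为 HPA 的 scaleTargetRef。下面是一个示意清单(阈值、副本数均为假设,并且需要集群里已有 metrics-server 等指标来源):

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: testreplicaset-hpa         # 示例名称
spec:
  scaleTargetRef:
    apiVersion: kubevirt.io/v1
    kind: VirtualMachineInstanceReplicaSet
    name: testreplicaset           # 对应上文创建的 vmis
  minReplicas: 2
  maxReplicas: 5
  targetCPUUtilizationPercentage: 80   # 假设的 CPU 阈值
```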

May 2, 2022 · 2 min · jiezi

关于kubernetes:k8s-服务拓扑设置定向流量有哪些常见场景

k8s 服务拓扑设置定向流量有哪些常见场景?面试官:"说说你对Kubernetes 网络模型的了解?设计这种网络模型有什么益处?"面试官:"Kubernetes 网络次要解决哪四方面的问题?"面试官:"k8s利用服务拓扑的流量路由策略来设置定向流量,拓扑键的匹配规定是什么?"面试官:"日常应用拓扑键有什么束缚?"面试官:"简略说说服务拓扑设置定向流量有哪些常见场景?试举例说明?" 囧么肥事-胡言乱语 说说你对Kubernetes 网络模型的了解?设计这种网络模型有什么益处?集群网络系统是 Kubernetes 的外围局部,Kubernetes 的主旨就是在利用之间共享机器。 要解决的问题? 通常来说,共享机器须要两个利用之间不能应用雷同的端口,然而在多个利用开发者之间 去大规模地协调端口是件很艰难的事件,尤其是还要让用户裸露在他们管制范畴之外的集群级别的问题上。 动态分配端口也会给零碎带来很多复杂度 - 每个利用都须要设置一个端口的参数, 而 API 服务器还须要晓得如何将动静端口数值插入到配置模块中,服务也须要晓得如何找到对方等等。 针对这些问题,Kubernetes设计了一种洁净的、向后兼容的网络模型,即 IP-per-pod 模型。怎么说? 首先,Kubernetes 强制规定了一种简略粗犷的网络模型设计根底准则:每个 Pod 都领有一个独立的 IP 地址。 Kubernetes 强制要求所有网络设施都满足以下根本要求(从而排除了无意隔离网络的策略): 节点上的 Pod 能够不通过 NAT 和其余任何节点上的 Pod 通信节点上的代理(比方:零碎守护过程、kubelet)能够和节点上的所有 Pod 通信另外,对于反对在主机网络中运行 Pod 的平台(比方:Linux): 运行在节点主机网络里的 Pod 能够不通过 NAT 和所有节点上的 Pod 通信须要留神的是:在K8S集群中,IP地址调配是以Pod对象为单位,而非容器,同一Pod内的所有容器共享同一网络名称空间。 Kubernetes 的 IP 地址存在于 Pod 范畴内 - 容器共享它们的网络命名空间 - 包含它们的 IP 地址和 MAC 地址 这意味着什么? 意味着 k8s 假设所有 Pod 都在一个能够间接连通的、扁平的网络空间中也就是说,无论容器运行在集群中的哪个节点,所有容器之间都能通过一个扁平的网络立体进行通信。不论它们是否运行在同一个 Node节点 (宿主机) 上,都要求它们能够间接通过对方的 IP 进行拜访。每一个 Pod 都有它本人的IP地址, 益处就是你不须要再显式地在 Pod 之间创立链接, 简直不须要解决容器端口到主机端口之间的映射。 ...
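回到标题里"用服务拓扑做定向流量"的问题:在启用了 ServiceTopology 特性门控的版本中(该特性自 1.21 起被废弃,后由拓扑感知路由取代),可以在 Service 上用 topologyKeys 声明流量的优先匹配顺序。下面是一个示意写法,名称与标签均为假设:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-svc                      # 示例名称
spec:
  selector:
    app: my-app                     # 示例标签
  ports:
  - port: 80
    targetPort: 80
  topologyKeys:                     # 自上而下逐个匹配,命中即止
  - "kubernetes.io/hostname"        # 优先同节点
  - "topology.kubernetes.io/zone"   # 其次同可用区
  - "*"                             # 兜底:任意端点,只能放在最后一项
```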

May 2, 2022 · 2 min · jiezi

关于kubernetes:clientgo连接kubernetes集群

背景:kubernetes的根本利用的算是能入门了。然而基于各种客户端操作kubernetes集群还是没有深刻玩过,最近一段时间入门了一下goland,就拿client-go深刻体验一下kubernetes集群的基本操作,当然了最初能更深刻一下跟gin框架联合了就好了......算是练手入门对于client-go参照githubhttps://github.com/kubernetes/client-go。请留神版本与kubernetes的版本兼容性对应关系:https://github.com/kubernetes/client-go#versioning。我这里装置的最新的1.23.6版本(连贯的集群其实是1.22的阿里云的ack集群。只进行简略的操作,没有什么太大问题) 试验环境阿里云ack1.22.3开发环境Goland2022.1 上手client-go连贯kubernetes集群创立我的项目k8s-demo1 go get装置依赖创立好目录构造如下:接下来应该是装置client-go的依赖了,参照client-go官网文档:https://github.com/kubernetes/client-go/blob/master/INSTALL.md。当然了我这里就依照最新版本了 go get k8s.io/client-go@v0.23.6留神:因为之前装置过,下载很是快了..... api官网文档https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.23/ 对于client-go连贯kubernetes集群的几种客户端参照:csdn博客https://xinchen.blog.csdn.net/article/details/113753087 client-go实战之二:RESTClientclient-go实战之三:Clientsetclient-go实战之四:dynamicClientclient-go实战之五:DiscoveryClient我这里就应用Clientset了! clientset创立kubernetes客户端并验证version下载集群配置文件登录阿里云ack集群治理页面下载config配置文件保留到开发机器C:\Users\zhangpeng.kube下:注:当然了很多自建的集群填写的都是内网的形式,能够通过代理或者其余形式连贯集群 第一个例子打印一下kubernetes集群versionpackage mainimport ( "flag" "fmt" "k8s.io/client-go/kubernetes" "k8s.io/client-go/tools/clientcmd" "k8s.io/client-go/util/homedir" "path/filepath")func main() { var kubeconfig *string if home := homedir.HomeDir(); home != "" { kubeconfig = flag.String("kubeconfig", filepath.Join(home, ".kube", "config"), "(optional) absolute path to the kubeconfig file") } else { kubeconfig = flag.String("kubeconfig", "", "absolute path to the kubeconfig file") } flag.Parse() config, err := clientcmd.BuildConfigFromFlags("", *kubeconfig) if err != nil { panic(err.Error()) } // create the clientset clientset, err := kubernetes.NewForConfig(config) if err != nil { panic(err.Error()) } fmt.Println(clientset.ServerVersion())}go run main.go的时候报错了貌似少了依赖包。依照提醒依照了一下 ...

May 1, 2022 · 1 min · jiezi

关于kubernetes:Kubernetesk8s实现IPv4IPv6网络双栈

背景现在IPv4IP地址曾经应用结束,将来寰球会以IPv6地址为核心,会大力发展IPv6网络环境,因为IPv6能够实现给任何一个设施调配到公网IP,所以资源是十分丰盛的。 配置hosts[root@k8s-master01 ~]# vim /etc/hosts[root@k8s-master01 ~]# cat /etc/hosts127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4::1         localhost localhost.localdomain localhost6 localhost6.localdomain62408:8207:78ce:7561::10 k8s-master012408:8207:78ce:7561::20 k8s-master022408:8207:78ce:7561::30 k8s-master032408:8207:78ce:7561::40 k8s-node012408:8207:78ce:7561::50 k8s-node022408:8207:78ce:7561::60 k8s-node032408:8207:78ce:7561::70 k8s-node042408:8207:78ce:7561::80 k8s-node0510.0.0.81 k8s-master0110.0.0.82 k8s-master0210.0.0.83 k8s-master0310.0.0.84 k8s-node0110.0.0.85 k8s-node0210.0.0.86 k8s-node0310.0.0.87 k8s-node0410.0.0.88 k8s-node0510.0.0.80 lb0110.0.0.90 lb0210.0.0.99 lb-vip[root@k8s-master01 ~]# 配置ipv6地址[root@k8s-master01 ~]# vim /etc/sysconfig/network-scripts/ifcfg-ens160 [root@k8s-master01 ~]# cat /etc/sysconfig/network-scripts/ifcfg-ens160TYPE=EthernetPROXY_METHOD=noneBROWSER_ONLY=noBOOTPROTO=noneDEFROUTE=yesIPV4_FAILURE_FATAL=noIPV6INIT=yesIPV6_AUTOCONF=noIPV6ADDR=2408:8207:78ce:7561::10/64IPV6_DEFAULTGW=2408:8207:78ce:7561::1IPV6_DEFROUTE=yesIPV6_FAILURE_FATAL=noNAME=ens160UUID=56ca7c8c-21c6-484f-acbd-349111b3ddb5DEVICE=ens160ONBOOT=yesIPADDR=10.0.0.81PREFIX=24GATEWAY=10.0.0.1DNS1=8.8.8.8DNS2=2408:8000:1010:1::8[root@k8s-master01 ~]# 留神:每一台主机都须要配置为动态IPv6地址!若不进行配置,在内核中开启IPv6数据包转发性能后会呈现IPv6异样。 sysctl参数启用ipv6[root@k8s-master01 ~]# vim /etc/sysctl.d/k8s.conf[root@k8s-master01 ~]# cat /etc/sysctl.d/k8s.confnet.ipv4.ip_forward = 1net.bridge.bridge-nf-call-iptables = 1fs.may_detach_mounts = 1vm.overcommit_memory=1vm.panic_on_oom=0fs.inotify.max_user_watches=89100fs.file-max=52706963fs.nr_open=52706963net.netfilter.nf_conntrack_max=2310720net.ipv4.tcp_keepalive_time = 600net.ipv4.tcp_keepalive_probes = 3net.ipv4.tcp_keepalive_intvl =15net.ipv4.tcp_max_tw_buckets = 36000net.ipv4.tcp_tw_reuse = 1net.ipv4.tcp_max_orphans = 327680net.ipv4.tcp_orphan_retries = 3net.ipv4.tcp_syncookies = 1net.ipv4.tcp_max_syn_backlog = 16384net.ipv4.ip_conntrack_max = 65536net.ipv4.tcp_max_syn_backlog = 16384net.ipv4.tcp_timestamps = 0net.core.somaxconn = 16384net.ipv6.conf.all.disable_ipv6 = 0net.ipv6.conf.default.disable_ipv6 = 0net.ipv6.conf.lo.disable_ipv6 = 0net.ipv6.conf.all.forwarding = 1[root@k8s-master01 ~]# [root@k8s-master01 ~]# reboot测试拜访公网IPv6[root@k8s-master01 ~]# ping www.chenby.cn -6PING www.chenby.cn(2408:871a:5100:119:1d:: (2408:871a:5100:119:1d::)) 56 data bytes64 bytes from 2408:871a:5100:119:1d:: (2408:871a:5100:119:1d::): icmp_seq=1 ttl=53 time=10.6 ms64 bytes from 2408:871a:5100:119:1d:: (2408:871a:5100:119:1d::): icmp_seq=2 ttl=53 time=9.94 ms^C--- www.chenby.cn ping statistics ---2 packets transmitted, 2 received, 0% packet loss, time 1002msrtt min/avg/max/mdev = 9.937/10.269/10.602/0.347 ms[root@k8s-master01 ~]# 批改kube-apiserver如下配置--service-cluster-ip-range=10.96.0.0/12,fd00::/108  --feature-gates=IPv6DualStack=true [root@k8s-master01 ~]# vim /usr/lib/systemd/system/kube-apiserver.service[root@k8s-master01 ~]# cat /usr/lib/systemd/system/kube-apiserver.service[Unit]Description=Kubernetes API ServerDocumentation=https://github.com/kubernetes/kubernetesAfter=network.target[Service]ExecStart=/usr/local/bin/kube-apiserver \      --v=2  \      --logtostderr=true  \      --allow-privileged=true  \      --bind-address=0.0.0.0  \      --secure-port=6443  \      --insecure-port=0  \      --advertise-address=192.168.1.81 \      
--service-cluster-ip-range=10.96.0.0/12,fd00::/108  \      --feature-gates=IPv6DualStack=true \      --service-node-port-range=30000-32767  \      --etcd-servers=https://192.168.1.81:2379,https://192.168.1.82:2379,https://192.168.1.83:2379 \      --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem  \      --etcd-certfile=/etc/etcd/ssl/etcd.pem  \      --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem  \      --client-ca-file=/etc/kubernetes/pki/ca.pem  \      --tls-cert-file=/etc/kubernetes/pki/apiserver.pem  \      --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem  \      --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem  \      --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem  \      --service-account-key-file=/etc/kubernetes/pki/sa.pub  \      --service-account-signing-key-file=/etc/kubernetes/pki/sa.key  \      --service-account-issuer=https://kubernetes.default.svc.cluster.local \      --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname  \      --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota  \      --authorization-mode=Node,RBAC  \      --enable-bootstrap-token-auth=true  \      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem  \      --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem  \      --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem  \      --requestheader-allowed-names=aggregator  \      --requestheader-group-headers=X-Remote-Group  \      --requestheader-extra-headers-prefix=X-Remote-Extra-  \      --requestheader-username-headers=X-Remote-User \      --enable-aggregator-routing=true      # --token-auth-file=/etc/kubernetes/token.csvRestart=on-failureRestartSec=10sLimitNOFILE=65535[Install]WantedBy=multi-user.target批改kube-controller-manager如下配置--feature-gates=IPv6DualStack=true--service-cluster-ip-range=10.96.0.0/12,fd00::/108--cluster-cidr=172.16.0.0/12,fc00::/48--node-cidr-mask-size-ipv4=24--node-cidr-mask-size-ipv6=64[root@k8s-master01 ~]# vim /usr/lib/systemd/system/kube-controller-manager.service[root@k8s-master01 ~]# cat /usr/lib/systemd/system/kube-controller-manager.service[Unit]Description=Kubernetes Controller ManagerDocumentation=https://github.com/kubernetes/kubernetesAfter=network.target[Service]ExecStart=/usr/local/bin/kube-controller-manager \      --v=2 \      --logtostderr=true \      --address=127.0.0.1 \      --root-ca-file=/etc/kubernetes/pki/ca.pem \      --cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem \      --cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem \      --service-account-private-key-file=/etc/kubernetes/pki/sa.key \      --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig \      --leader-elect=true \      --use-service-account-credentials=true \      --node-monitor-grace-period=40s \      --node-monitor-period=5s \      --pod-eviction-timeout=2m0s \      --controllers=*,bootstrapsigner,tokencleaner \      --allocate-node-cidrs=true \      --feature-gates=IPv6DualStack=true \      --service-cluster-ip-range=10.96.0.0/12,fd00::/108 \      --cluster-cidr=172.16.0.0/12,fc00::/48 \      --node-cidr-mask-size-ipv4=24 \      --node-cidr-mask-size-ipv6=64 \      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \      --node-cidr-mask-size=24Restart=alwaysRestartSec=10s[Install]WantedBy=multi-user.target批改kubelet如下配置--feature-gates=IPv6DualStack=true[root@k8s-master01 ~]# vim 
/usr/lib/systemd/system/kubelet.service[root@k8s-master01 ~]# cat /usr/lib/systemd/system/kubelet.service[Unit]Description=Kubernetes KubeletDocumentation=https://github.com/kubernetes/kubernetesAfter=docker.serviceRequires=docker.service[Service]ExecStart=/usr/local/bin/kubelet \    --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig  \    --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \    --config=/etc/kubernetes/kubelet-conf.yml \    --network-plugin=cni  \    --cni-conf-dir=/etc/cni/net.d  \    --cni-bin-dir=/opt/cni/bin  \    --container-runtime=remote  \    --runtime-request-timeout=15m  \    --container-runtime-endpoint=unix:///run/containerd/containerd.sock  \    --cgroup-driver=systemd \    --node-labels=node.kubernetes.io/node='' \    --feature-gates=IPv6DualStack=trueRestart=alwaysStartLimitInterval=0RestartSec=10[Install]WantedBy=multi-user.target批改kube-apiserver如下配置#批改如下配置clusterCIDR: 172.16.0.0/12,fc00::/48 [root@k8s-master01 ~]# vim /etc/kubernetes/kube-proxy.yaml[root@k8s-master01 ~]# cat /etc/kubernetes/kube-proxy.yamlapiVersion: kubeproxy.config.k8s.io/v1alpha1bindAddress: 0.0.0.0clientConnection:  acceptContentTypes: ""  burst: 10  contentType: application/vnd.kubernetes.protobuf  kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig  qps: 5clusterCIDR: 172.16.0.0/12,fc00::/48 configSyncPeriod: 15m0sconntrack:  max: null  maxPerCore: 32768  min: 131072  tcpCloseWaitTimeout: 1h0m0s  tcpEstablishedTimeout: 24h0m0senableProfiling: falsehealthzBindAddress: 0.0.0.0:10256hostnameOverride: ""iptables:  masqueradeAll: false  masqueradeBit: 14  minSyncPeriod: 0s  syncPeriod: 30sipvs:  masqueradeAll: true  minSyncPeriod: 5s  scheduler: "rr"  syncPeriod: 30skind: KubeProxyConfigurationmetricsBindAddress: 127.0.0.1:10249mode: "ipvs"nodePortAddresses: nulloomScoreAdj: -999portRange: ""udpIdleTimeout: 250ms[root@k8s-master01 ~]# 批改calico如下配置# vim calico.yaml# calico-config ConfigMap处    "ipam": {        "type": "calico-ipam",        "assign_ipv4": "true",        "assign_ipv6": "true"    },    - name: IP      value: "autodetect"    - name: IP6      value: "autodetect"    - name: CALICO_IPV4POOL_CIDR      value: "172.16.0.0/16"    - name: CALICO_IPV6POOL_CIDR      value: "fc00::/48"    - name: FELIX_IPV6SUPPORT      value: "true"# kubectl apply -f calico.yaml测试#部署利用[root@k8s-master01 ~]# cat cby.yaml apiVersion: apps/v1kind: Deploymentmetadata:  name: chenbyspec:  replicas: 3  selector:    matchLabels:      app: chenby  template:    metadata:      labels:        app: chenby    spec:      containers:      - name: chenby        image: nginx        resources:          limits:            memory: "128Mi"            cpu: "500m"        ports:        - containerPort: 80---apiVersion: v1kind: Servicemetadata:  name: chenbyspec:  ipFamilyPolicy: PreferDualStack  ipFamilies:  - IPv6  - IPv4  type: NodePort  selector:    app: chenby  ports:  - port: 80    targetPort: 80[root@k8s-master01 ~]# kubectl  apply -f cby.yaml#查看端口[root@k8s-master01 ~]# kubectl  get svcNAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGEchenby       NodePort    fd00::d80a   <none>        80:31535/TCP   54skubernetes   ClusterIP   10.96.0.1    <none>        443/TCP        22h[root@k8s-master01 ~]# #应用内网拜访[root@k8s-master01 ~]# curl -I http://[fd00::d80a]HTTP/1.1 200 OKServer: nginx/1.21.6Date: Fri, 29 Apr 2022 07:29:28 GMTContent-Type: text/htmlContent-Length: 615Last-Modified: Tue, 25 Jan 2022 15:03:52 GMTConnection: keep-aliveETag: "61f01158-267"Accept-Ranges: bytes[root@k8s-master01 ~]# 
#应用公网拜访[root@k8s-master01 ~]# curl -I http://[2408:8207:78ce:7561::10]:31535HTTP/1.1 200 OKServer: nginx/1.21.6Date: Fri, 29 Apr 2022 07:25:16 GMTContent-Type: text/htmlContent-Length: 615Last-Modified: Tue, 25 Jan 2022 15:03:52 GMTConnection: keep-aliveETag: "61f01158-267"Accept-Ranges: bytes[root@k8s-master01 ~]# [root@k8s-master01 ~]# curl -I http://10.0.0.81:31535HTTP/1.1 200 OKServer: nginx/1.21.6Date: Fri, 29 Apr 2022 07:26:16 GMTContent-Type: text/htmlContent-Length: 615Last-Modified: Tue, 25 Jan 2022 15:03:52 GMTConnection: keep-aliveETag: "61f01158-267"Accept-Ranges: bytes[root@k8s-master01 ~]#  https://www.oiox.cn/ https://www.chenby.cn/ https://blog.oiox.cn/ https://cby-chen.github.io/ https://blog.csdn.net/qq\_33921750 https://my.oschina.net/u/3981543 https://www.zhihu.com/people/... https://segmentfault.com/u/hp... https://juejin.cn/user/331578... https://cloud.tencent.com/dev... https://www.jianshu.com/u/0f8... https://www.toutiao.com/c/use... CSDN、GitHub、知乎、开源中国、思否、掘金、简书、腾讯云、今日头条、集体博客、全网可搜《小陈运维》 文章次要公布于微信公众号:《Linux运维交换社区》

April 29, 2022 · 1 min · jiezi

关于kubernetes:TTL-机制排毒线上k8s的Job已经通过API-增加了Job的TTL-时长且成功响应为什么系统还是清理了Job

TTL 机制排毒,线上k8s的Job曾经通过API 减少了Job的TTL 时长,且胜利响应,为什么零碎还是清理了Job?面试官:"已实现 Job 的 TTL 机制理解嘛?简略说说TTL存在的工夫偏差问题?"面试官:"能简略形容一下什么是TTL-after-finished 控制器嘛?"面试官:"我明明曾经通过API 减少了Job的TTL 时长,且失去了胜利的响应,为什么零碎还是清理了Job?"面试官:"如何更加精确的跟踪 Job 实现状况?理解 Finalizer 追踪 Job嘛?"面试官:"说说什么场景下CronJob 无奈被调度?" 囧么肥事-胡言乱语 已实现 Job 的 TTL 机制理解嘛?简略说说TTL存在的工夫偏差问题?实现的 Job 通常不须要持续留存在零碎中。在零碎中始终保留它们会给 API 服务器带来额定的压力。 实际上主动清理实现的 Job有两种惯例形式: 1、更高级别的控制器治理2、已实现 Job 的 TTL 机制 更高级别的控制器治理 如果 Job 由某种更高级别的控制器来治理,例如CronJobs, 则 Job 能够被 CronJob 基于特定的依据容量裁定的清理策略清理掉。 已实现 Job 的 TTL 机制 主动清理已实现 Job (状态为 Complete 或 Failed)的另一种形式是应用由TTL-after-finished控制器所提供 的 TTL 机制。 通过设置 Job 的 .spec.ttlSecondsAfterFinished 字段,能够让该控制器清理掉 已完结的资源。 留神点一:TTL 控制器清理 Job 时,会级联式地删除 Job 对象。 换言之,它会删除所有依赖的对象,包含 Pod 及 Job 自身。 ...
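对应到清单上,ttlSecondsAfterFinished 的用法大致如下。这是一个示意例子,100 秒的取值是假设的:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: pi-with-ttl                # 示例名称
spec:
  ttlSecondsAfterFinished: 100     # Job 结束(Complete 或 Failed)100 秒后被 TTL 控制器级联清理
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
```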

April 29, 2022 · 2 min · jiezi

关于kubernetes:k8s集群Job-Pod-容器可能因为多种原因失效想要更加稳定的使用Job负载有哪些需要注意的地方

k8s集群Job Pod 容器可能因为多种起因生效,想要更加稳固的应用Job负载,有哪些须要留神的中央?面试官:“计数性Job默认实现模式是什么?Indexed模式如何公布自定义索引呢?”面试官:“k8s的Job Pod 中的容器可能因为多种不同起因生效,想要更加稳固的应用Job负载,有哪些能够留神的中央?“面试官:“为什么k8s倡议在调试 Job 时将 `restartPolicy` 设置为 "Never"?”面试官:“Job 终止与清理理解嘛?Pod重试次数还未 达到 `backoffLimit` 所设的限度,为什么忽然被终止了?猜想起因?“ 囧么肥事-胡言乱语 计数性Job默认实现模式是什么?Indexed模式如何公布自定义索引呢?计数性Job默认实现模式是无索引模式NonIndexed。 实际上,带有 确定实现计数 的 Job,即 .spec.completions 不为 null 的 Job, 都能够在其 .spec.completionMode 中设置实现模式:NonIndexed(默认)和Indexed两种。 先看默认模式NonIndexed,无索引模式 1、每个Job实现事件都是独立无关且同质的2、胜利实现的Pod个数达到.spec.completions值时认为Job曾经实现3、当.spec.completions取值null时,Job被隐式解决为NonIndexed再看Indexed,索引模式 1、Job 的 Pod 会调配对应的实现索引2、索引取值为 0 到.spec.completions-13、当每个索引都对应一个实现的 Pod 时,Job 被认为是已实现的4、同一索引值可能被调配给多个Pod,然而只有一个会被记入实现计数对于索引模式来说,我下发10个索引,我不关注10个索引别离由多少个Pod去实现,我只关注10个索引工作是否按需实现即可。 Indexed模式下,索引有三种获取形式: 第一种:基于注解,Pod 索引在注解 batch.kubernetes.io/job-completion-index中出现,具体示意为一个十进制值字符串。第二种:基于主机名,作为 Pod 主机名的一部分,遵循模式 $(job-name)-$(index)。 当你同时应用带索引的 Job(Indexed Job)与服务(Service), Job 中的 Pods 能够通过 DNS 应用确切的主机名相互寻址。第三种:基于环境变量,对于容器化的工作,在环境变量 JOB_COMPLETION_INDEX 中体现。Indexed模式如何公布自定义索引呢? 下面提到了三种获取索引的形式:注解,主机名,环境变量。 Downward API 机制有两种形式能够把将 Pod 和 Container 字段信息出现给 Pod 中运行的容器: 环境变量卷文件你应用 Job 控制器为所有容器设置的内置 JOB_COMPLETION_INDEX 环境变量。 Init 容器将索引映射到一个动态值,并将其写入一个文件,该文件通过 emptyDir 卷与运行 worker 的容器共享。 ...
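结合上文的三种索引获取方式,下面给出一个 Indexed Job 的示意清单:容器里既可以直接读内置的 JOB_COMPLETION_INDEX 环境变量,也可以用 Downward API 把注解 batch.kubernetes.io/job-completion-index 映射成自定义环境变量(镜像与命令仅作示意):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: indexed-demo               # 示例名称
spec:
  completions: 5
  parallelism: 3
  completionMode: Indexed          # 索引模式,Pod 会被分配 0..4 的完成索引
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: busybox
        command: ["sh", "-c", "echo index=$JOB_COMPLETION_INDEX my=$MY_INDEX"]
        env:
        - name: MY_INDEX           # 通过 Downward API 读取注解里的完成索引
          valueFrom:
            fieldRef:
              fieldPath: metadata.annotations['batch.kubernetes.io/job-completion-index']
```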

April 28, 2022 · 2 min · jiezi

关于kubernetes:使用Kubernetes快速启用一个静态页面

应用Kubernetes疾速启用一个动态页面将html动态页面搁置在nfs目录下,通过Deployment启动时挂在到nginx页面目录即可 查看yaml内容root@hello:~# cat cby.yamlapiVersion: apps/v1kind: Deploymentmetadata:  name: chenbyspec:  replicas: 3  selector:    matchLabels:      app: chenby  template:    metadata:      labels:        app: chenby    spec:      containers:      - name: chenby        image: nginx        resources:          limits:            memory: "128Mi"            cpu: "500m"        ports:        - containerPort: 80        volumeMounts:        - name: cby-nfs          mountPath: /usr/share/nginx/html/      volumes:      - name: cby-nfs        nfs:          server: 192.168.1.123          path: /cby-3/nfs/html---apiVersion: v1kind: Servicemetadata:  name: chenbyspec:  type: NodePort  selector:    app: chenby  ports:  - port: 80    targetPort: 80查看验证root@hello:~# kubectl  get deployments.apps  chenby  -o wideNAME     READY   UP-TO-DATE   AVAILABLE   AGE     CONTAINERS   IMAGES   SELECTORchenby   3/3     3            3           4m44s   chenby       nginx    app=chenbyroot@hello:~# root@hello:~# kubectl  get pod -o wide | grep chenbychenby-77b57649c7-qv2ps                  1/1     Running   0          5m2s   172.17.125.19    k8s-node01     <none>           <none>chenby-77b57649c7-rx98c                  1/1     Running   0          5m2s   172.25.214.207   k8s-node03     <none>           <none>chenby-77b57649c7-tx2dz                  1/1     Running   0          5m2s   172.25.244.209   k8s-master01   <none>           <none>root@hello:~# kubectl  get svc -o wide | grep chenbychenby                NodePort    10.109.222.0    <none>        80:30971/TCP   5m8s   app=chenbyroot@hello:~# 演示 在线体验: https://www.oiox.cn/ https://www.chenby.cn/ https://blog.oiox.cn/ https://www.oiox.cn/ https://www.chenby.cn/ https://blog.oiox.cn/ https://cby-chen.github.io/ https://blog.csdn.net/qq_3392... https://my.oschina.net/u/3981543 https://www.zhihu.com/people/... https://segmentfault.com/u/hp... https://juejin.cn/user/331578... https://cloud.tencent.com/dev... https://www.jianshu.com/u/0f8... https://www.toutiao.com/c/use... CSDN、GitHub、知乎、开源中国、思否、掘金、简书、腾讯云、今日头条、集体博客、全网可搜《小陈运维》 文章次要公布于微信公众号:《Linux运维交换社区》

April 27, 2022 · 1 min · jiezi

关于kubernetes:k8s集群线上某些特殊情况强制删除-StatefulSet-的-Pod-隐患考虑

k8s集群线上某些非凡状况强制删除 StatefulSet 的 Pod 隐患思考?考点之什么状况下,须要强制删除 StatefulSet 的 Pod?考点之如果 StatefulSet 操作不当可能会引发什么很重大的结果?考点之如果遇到Pod 长时间处于 'Terminating' 或者 'Unknown' 状态状况,有什么平安一些的解决伎俩吗? 囧么肥事-胡言乱语 线上某些非凡状况下可能须要强制删除 StatefulSet 的 Pod?什么状况下,须要强制删除 StatefulSet 的 Pod? 失常状况下 StatefulSet 惯例场景下,不须要强制删除 StatefulSet 治理的 Pod。 StatefulSet 控制器会负责创立、 扩缩和删除 StatefulSet 治理的 Pods。 它尝试确保指定数量的从序数 0 到 N-1 的 Pod 处于沉闷状态并准备就绪。 StatefulSet 遵循At Most One(最多一个)规定,确保在任何时候,集群中最多只有一个具备给定标识的 Pod。 非凡状况下 所谓非凡状况下必须进行强制删除,SS感知到当某个节点不可达时,不会引发主动删除 Pod。在无法访问的节点上运行的 Pod 在超时 后会进入'Terminating' 或者 'Unknown' 状态,另外当用户尝试体面地删除无法访问的节点上的 Pod 时 Pod 也可能会进入这些状态。 如果你发现 StatefulSet 的某些 Pod 长时间处于 'Terminating' 或者 'Unknown' 状态 ...
k8s集群线上某些特殊情况强制删除 StatefulSet 的 Pod 隐患考虑?考点之什么情况下,需要强制删除 StatefulSet 的 Pod?考点之如果 StatefulSet 操作不当可能会引发什么很严重的后果?考点之如果遇到 Pod 长时间处于 'Terminating' 或者 'Unknown' 状态的情况,有什么安全一些的处理手段吗? 囧么肥事-胡言乱语 线上某些特殊情况下可能需要强制删除 StatefulSet 的 Pod?什么情况下,需要强制删除 StatefulSet 的 Pod? 正常情况下 StatefulSet 常规场景下,不需要强制删除 StatefulSet 管理的 Pod。 StatefulSet 控制器会负责创建、扩缩和删除 StatefulSet 管理的 Pod。 它尝试确保指定数量的、序号从 0 到 N-1 的 Pod 处于活跃状态并准备就绪。 StatefulSet 遵循 At Most One(最多一个)规则,确保在任何时候,集群中最多只有一个具备给定标识的 Pod。 特殊情况下 所谓必须进行强制删除的特殊情况是指:当某个节点不可达时,StatefulSet 不会自动删除该节点上的 Pod。在无法访问的节点上运行的 Pod 在超时后会进入 'Terminating' 或者 'Unknown' 状态;另外,当用户尝试体面地删除无法访问的节点上的 Pod 时,Pod 也可能会进入这些状态。 如果你发现 StatefulSet 的某些 Pod 长时间处于 'Terminating' 或者 'Unknown' 状态 ...

April 26, 2022 · 1 min · jiezi

关于kubernetes:驯服-Kubernetes网易数帆云原生运维体系建设之路

本文系作者GOPS寰球运维大会演讲内容,由高效运维社区整顿。 本次主题次要会包含两个方面,首先面对云原生技术的疾速倒退和落地,传统运维体系应该怎么去构建及过程中遇到的冲击和挑战,会有一个简略的剖析。 其次,在面对不同的挑战时咱们做了哪些事件,我会依据外部实际来做一些分享,心愿能给大家一些参考。 对于运维来说,其实就是效率、稳定性、老本。其实,不论是稳定性,还是晋升运维效率都是为了钱:效率晋升了能缩小人力老本,稳定性保障/监控报警做得好,能够少故障少赔钱。对于运维来说,当然还有一个十分重要的是平安,不过明天咱们不会怎么讲平安。 在正式开始之前,我先简略介绍一下网易这边的技术状况。网易外部的各个BU之间的技术栈往往是有很大差别的,比方游戏、音乐、严选等,基本上能够认为是齐全不同行业的,都有本人的特色以及行业背景,这导致在网易建设大一统的平台不太事实,所以咱们更关注一些更轻微的点。 1. 运维的新挑战新技术栈网易数帆部门大略从 Kubernetes 1.0 公布时就开始接触容器了,大规模应用容器是在18年前后。业务应用容器后面临的首个挑战是全新的技术栈,这个时候运维体系该如何布局?相干的技术选型,包含网络/存储计划的抉择,集群规模容量布局,往往因为后面的短视造成前面的很多窘破。 容器化的场景下,对于 Kubernetes 的应用上,咱们没有对业务部门做任何限度,业务部门能够任意调用 Kubernetes API。因为应用模式多样,导致运维保障面临更多挑战。很多因为业务的误用导致的问题,一样须要运维保障来兜底。 晚期基础设施(包含Docker/内核/Kubernetes)始终Bug一直,比方,Docker 18年前很多的经典Bug,咱们都遇到过。不过这两年比拟新的版本问题曾经少了很多了。 咱们部门应用的支流操作系统发行版是debian,可能跟在座各位同仁绝大部分应用centos的不太一样。Debian发行版的益处是内核以及软件版本都绝对较新。所有遇到的内核问题,也是须要咱们本人去解决修复。 另外,容器化刚开始的时候,毕竟是新的技术栈,想招到匹配岗位的人才较艰难,人力老本比拟高。 技术惯性技术惯性大家比拟能了解,很多公司都有本人的传统的运维平台。从传统的运维平台到转变应用 Kubernetes 来做公布治理,从思维上,操作形式上,实现形式上多个方面,咱们发现两头有很多鸿沟,弥合鸿沟也是很苦楚的事件。 这样就导致开发人员的一个认知,原本虚拟机用得好好的,你们搞什么容器,当初出问题了吧,反正赖容器就对了。 一句话,传统的运维开发方式对云原生没有做好筹备。 知识库知识库的问题,首先云原生落地过程中,以后状态很多知识库还不够欠缺,有时候遇到问题去搜寻的话,还是会抓瞎的。 咱们团队的人员因为解决了大量的实际问题,经验丰富,咱们也输入了不少文档。 然而咱们发现业务方真正遇到问题的时候,压根不翻文档,而是间接甩给咱们。当然这外面的起因,可能是因为不足云原生相干技术背景有余而看不懂,或者就是一个意识问题。总的来说,知识库的传承老本比拟高,一些问题的预案和效率是极低的。 咱们认为推动云原生容器化落地过程中运维在这一块目前面临比拟大的挑战。 组织与人员架构在传统开发场景下最上层是开发、测试,两头会有业务的架构团队、利用运维、零碎运维、平台开发,上面是整个IDC的基础设施保障,整个架构层次分明。 但如果某个公司正在做云原生容器化落地,你会发现中间层成为了一团浆糊,多多少少大家工作都有一些穿插。如果一个平台开发不晓得业务方应用容器的姿态,可能会呈现业务方会用各种奇怪的玩法而导致出问题,最初还是运维运维来兜底。 问题的起因在于每个工程师的定位产生了扭转,比方 SA 以前只治理机器,当初会波及到容器的网络和存储,须要去理解 Kubernetes 的工作原理,这样一来很多人都必须去学习 Kubernetes。 容量治理对于容量,有几个方面。一个是业务方不合理申请资源,另一个是业务方也无奈预知状况,比方忽然来了一个促销之类的流动,容量需要增长。 运维 Kubernetes 还一个比拟要害的容量问题是,管制组件的资源耗费的容量评估经常被疏忽。客户往往部署了 Kubernetes 集群并配置了报警,而后后续就不停地加节点。忽然某一天产生了事变崩掉了,找运维问怎么回事,后果可能发现就是管控面容量有余了。 这里展现一个截图,因为 Kubernetes APIserver 重启了一下,内存在极短时间减少了百分之二十多,重启的那一刹那会有大量的申请进来,导致内存耗费得比拟厉害。这种状况下,个别都会配置有阈值报警。当然如果你不解决这种问题,一旦触发了,接下来可能会呈现雪崩效应,再也拉不起来了,直到你增大资源容量为止。 接下来简略从方才我说的运维提效、稳定性保障、老本上咱们做的实际。 2. 
运维提效首先咱们集群应用了中心化的托管,当然并不是所有部门都是咱们管的,咱们只管跟咱们关系比拟亲密的集群。整个权限认证体系,间接走我外部的员工认证零碎,做到了对立认证,权限还是走RBAC,受权到集体。其次是因为咱们大量的人力在帮客户做排障,不同人和不同部门一遍遍找过去,不违心看文档和你做的事件,你兜底就能够了,咱们团队始终是超载的状态。因而,咱们要把一些常见的排障诊断过程做成自动化。最初,针对监控数据这一块,监控数据的存储没有间接应用开源零碎,而是应用外部实现的TSDB,来把监控数据对立存下来,这样能够更好对数据进行生产。 上面说下自动化诊断运维,方才后面的两位老师也都分享过相似的内容。相似的,咱们也是有知识库和流水线执行。很多公司做流水线的时候是做了本人外部一个平台,和其余的外部零碎进行对接,这样一来可能解决了本人的问题,然而通用性并不高。咱们在网易外部面临什么问题呢?咱们还按那种形式去做他人不会用你的货色,因为你是一个平台。别的部门要和它的做适配用起来比拟苦楚。咱们更多想通用一些计划,Kubernetes场景下有CRD的反对,把运维诊断、性能排查等各种货色形象成CRD的形式去做。 咱们把一个运维操作形象成一个原子运维操作Operation,把一个机器设置为不可调度,判断是不是合乎某个已知Bug场景等。多个Operation的编排会形成一个运维流水线OperationSet。针对诊断上下文,咱们做了个Diagnosis的形象。 诊断流水线的触发形式能够有更多种。首先用户能够本人手动创立一个Diagnosis执行。 咱们外部也应用泡泡(网易外部的IM)聊天机器人,来实现Chatops,通过与机器人聊天来触发相干的流程。对于聊天机器人,咱们不想去做比较复杂的常识了解,咱们更多的是很间接的。给聊天机器人发绝对结构化的语句,通知他你帮忙我看一下什么问题就能够了。因为咱们公司整个认证体系是一块的,泡泡机器人也会通过对立的认证体系,能够很轻易找到你的权限,防止你做一些超过权限的事件。通过这种ChatOps你能够触发流水线的执行。 还有一个更大的流水线触发源,就是监控报警的触发。比如说业务的某个利用,容器应用的CPU/内存占用达到了阈值之后,能够主动触发一次拿堆栈的信息,做内存的dump,而后把对应的对战信息,以及dump的内存文件上传到对象存储外面去。接下来的流程中,是能够把这些dump进去的数据拿进去进行剖析的。 当然有一些像中间件也会有这样一些状况,他们往往要做稳定性保障,如果我的中间件实例呈现了某种状况,应该执行什么操作?相似于这样的逻辑咱们也能够把它编排起来,这样咱们能够让其余的operater来去创立这种咱们新的Diagnosis的Oparater,通过这种形式把这个货色实现起来。 简略来说咱们整个场景就是Kubernetes下的一套利用,就是用apiserver承受相干的CRD,而后用Operator做执行,大略就是这么一个货色。 这块咱们心愿这个货色后续在外部把它做成一个平台,咱们心愿这个货色更泛化来看,就是通过一个事件触发一个流程,做一些运维操作、运维诊断,传统遗留下来的脚本都能够残缺继承下来。详见:KubeDiag 框架技术解析 因为Kubernetes是规范的API,如果说你是基于Kubernetes的场景,那咱们的一些教训可能是对你们有用的,很多货色景是共通的。比方,大家可能都遇到过内核版本在4.19之前的,memcg的回收解决是有问题的,会导致大量的泄露,包含像Docker的晚期版本也会有大量的容器删除不掉的问题。 咱们都有一系列的workaround的伎俩,这样的伎俩咱们能够把它做得十分的智能化,比如说咱们报警监测到一个Pod退出超过15分钟,还没有被干死,那咱们可能就触发一次诊断,来看是不是已知的Bug,如果是的话,咱们就通过一些伎俩把它主动复原掉。这种类型的诊断是通用的。 在传统的场景下,可能不同的公司,运维人员登陆机器的形式都不一样,因而传统的场景下咱们没有方法做到通用。然而Kubernetes的场景下咱们能够做到通用,因为Kubernetes RBAC能够做权限管制,咱们整体的有daemonset的形式去对你的过程做一操作,去帮你收集很多货色是能够做到的。 还有比拟头疼的,像很多做AI、大数据相干的,次要是AI训练,他们有C/CPP代码,会呈现coredump,coredump会带来几个问题,会导致本地的磁盘过后使用率会很高,会影响同节点上其余的业务。这个时候咱们能够通过一些办法,做到全局对立的本地不落盘的coredump采集。还有像发数据包、打火焰图等等相似这种,很多时候像性能诊断,还有一些惯例的软件Bug workaround是十分通用,这个都是底层的能力了。 ...

April 25, 2022 · 1 min · jiezi

关于kubernetes:如何在云原生混部场景下利用资源配额高效分配集群资源

简介:因为混部是一个简单的技术及运维体系,包含 K8s 调度、OS 隔离、可观测性等等各种技术,之前的一篇文章《历经 7 年双 11 实战,阿里巴巴是如何定义云原生混部调度优先级及服务质量的?》,次要聚焦在调度优先级和服务质量模型上,明天咱们来关注一下资源配额多租相干的内容。 引言在阿里团体,离线混部技术从 2014 年开始,经验了七年的双十一测验,外部已实现大规模落地推广,每年为阿里团体节俭数十亿的资源老本,整体资源利用率为 70%左右,达到业界领先水平。这两年,咱们开始把团体内的混部技术通过产品化的形式输入给业界,通过插件化的形式无缝装置在规范原生的 K8s 集群上,配合混部管控和运维能力,晋升集群的资源利用率和产品的综合用户体验。 因为混部是一个简单的技术及运维体系,包含 K8s 调度、OS 隔离、可观测性等等各种技术,之前的一篇文章《历经 7 年双 11 实战,阿里巴巴是如何定义云原生混部调度优先级及服务质量的?》,次要聚焦在调度优先级和服务质量模型上,明天咱们来关注一下资源配额多租相干的内容。 资源配额概述首先想提一个问题,在设计上,既然 K8s 的调度器曾经能够在没有资源的状况下,让 pod 处于 pending 状态,那为什么,还须要有一个资源配额(Resource Quota)的设计? 咱们在学习一个零碎时,岂但要学习设计自身,还须要思考为什么这个设计是必须的?如果把这个设计从零碎中砍掉,会造成什么结果?因为在一个零碎中减少任何一项功能设计,都会造成好几项边际效应(Side Effect),包含应用这个零碎的人的心智累赘,零碎的安全性、高可用性,性能,都须要纳入思考。所以,性能不是越多越好。越是优良的零碎,提供的性能反而是越少越好。例如 C 语言只有 32 个关键字,而用户能够通过自定义组合这些根底能力,实现本人想要的任何需要。 回到原问题,一个集群的资源肯定是无限的,无论是物理机上的 CPU、内存、磁盘,还有一些别的资源例如 GPU 卡这些。光靠调度,是否能解决这个问题呢?如果这个集群只有一个用户,那么这个问题其实还是能忍耐的,例如看到 pod pending了,那就不创立新的 pod 了;如果新的 pod 比拟重要,这个用户能够删掉旧的 pod,而后再创立新的。然而,实在的集群是被多个用户或者说团队同时应用的,当 A 团队资源不够了,再去等 B 团队的人决策什么利用能够腾挪出空间,在这个时候,跨团队的交换效率是十分低下的。所以在调度前,咱们就须要再减少一个环节。如下图所示: 在这个环节内,引入了资源配额和租户这 2 个概念。租户,是进行资源配额调配的团队单位。配额,则是多个租户在应用无限的集群资源时,相互在当时达成的一个共识。当时是一个十分重要的关键词,也就是说不能等到 pod 到了调度时、运行时,再去通知创建者这个 pod 因为配额有余而创立不进去,而是须要在创立 pod 之前,就给各个团队一个对资源的心理预期,每年初在配置资源配额时,给 A 团队或者 B 团队定一个往年能够应用的配额总量,这样当 A 团队配额用完时,A 团队外部能够先进行资源优先级排序,把不重要的 pod 删除掉,如果还不够,那就再和 B 团队磋商,是否能够从 B 团队的配额划分一些配额过去。这样的话,就无需任何状况下都要进行点对点的低效率沟通。A 团队和 B 团队在年初的时候就须要对本人的业务的资源用量,做一个大略的估算,也就是资源估算。 ...
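落到 Kubernetes 的实现上,给某个租户(团队)"事先"划定额度,最直接的载体就是在对应命名空间里创建一个 ResourceQuota。下面是一个示意清单,命名空间名与数值均为假设,仅用来说明"预算事先给定"的形态:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a                # 假设:团队 A 对应的命名空间
spec:
  hard:
    requests.cpu: "100"            # 命名空间内所有 Pod 的 CPU request 总量上限
    requests.memory: 200Gi
    limits.cpu: "200"
    limits.memory: 400Gi
    pods: "500"                    # 可同时存在的 Pod 数上限
```

配额用尽后,新建 Pod 会在准入(admission)阶段被直接拒绝,而不是进到调度环节再 Pending,这正对应上文"在调度之前再加一个环节"的设计。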

April 25, 2022 · 1 min · jiezi

关于kubernetes:k8s集群StatefulSets的Pod优雅调度问题思考

k8s集群StatefulSets的Pod优雅调度问题思考?考点之你能解释一下为什么k8s的 StatefulSets 须要VolumeClaimTemplate嘛?考点之简略形容一下StatefulSets 对Pod的编排调度过程?考点之针对线上StatefulSet 的Pod缩容故障无奈失常缩容的状况,你能灰度剖析一下嘛?考点之聊聊什么是StatefulSet的分区滚动更新吧?什么场景须要应用分区更新?考点之StatefulSet提供优雅稳固的存储,然而线上告警StatefulSet Pod从新调度后数据失落? 囧么肥事-胡言乱语 你能解释一下为什么k8s的 StatefulSets 须要VolumeClaimTemplate嘛?对于k8s集群来说有状态的正本集都会用到长久存储。 Deployment中的Pod template里定义的存储卷,是基于模板配置调度,所有正本集共用一个存储卷,数据是雷同的。 StatefulSet职责是治理有状态利用,所以它治理的每个Pod都要自已的专有存储卷,它的存储卷就不能再用Pod模板来创立。 所以 StatefulSets 须要一种新形式来为管辖的Pod调配存储卷。 就这样VolumeClaimTemplate来了,k8s 给 StatefulSets 设置了VolumeClaimTemplate,也就是卷申请模板。 说了为什么须要它,那么VCT到底是什么呢? VolumeClaimTemplate:基于动态或动静地PV供应形式为Pod资源提供专有且固定的存储,它会为每个Pod都生成不同的PVC,并且绑定PV,实现每个Pod都有本人独立专用的存储卷。 简略形容一下StatefulSets 对Pod的编排调度过程?StatefulSets 提供了有序且优雅的部署和扩缩保障。 SS是如何优雅部署和扩缩的呢? 对于蕴含 N 个 正本的 StatefulSet 当部署 Pod 时,它们是顺次创立的,程序为 `0..N-1`。当删除 Pod 时,它们是逆序终止的,程序为 `N-1..0`。在将缩放操作利用到 Pod 之前,它后面的所有 Pod 必须是 Running 和 Ready 状态。在 Pod 终止之前,所有的继任者必须齐全敞开创立或扩容过程,以Nginx举例 定义正本数replicas=3SS会创立3个Pod调配有序序号ng-0, ng-1, ng-2SS严格执行部署或调度程序,按序部署ng-0 开始部署...ng-0 进入Running 和 Ready 状态SS 检测 ng-0 部署状态确定ng-0,合乎Running 和 Ready 状态ng-1 开始部署ng-1 进入Running 和 Ready 状态SS 检测 ng-0 和 ng-1 部署状态确定ng-0 和 ng-1 都合乎Running 和 Ready 状态才会执行 ng-2 部署假如此时 ng-0 产生故障那么ng-2 会阻塞,期待 ng-0 重新部署实现ng-2 开始部署ng-2 进入Running 和 Ready 状态相似,StatefulSet 进行缩容跟扩容整体规定是一样的,只不过缩容时,终止程序和创立程序相同。 ...
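把上文两点(每个 Pod 专属存储、有序编号)落到清单上,大致是下面这种写法:StatefulSet 通过 serviceName 关联 Headless Service,并用 volumeClaimTemplates 为 ng-0、ng-1、ng-2 各生成一个独立的 PVC。其中 StorageClass 名称等取值是假设的:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ng
spec:
  serviceName: ng-headless         # 假设已存在的 Headless Service
  replicas: 3
  selector:
    matchLabels:
      app: ng
  template:
    metadata:
      labels:
        app: ng
    spec:
      containers:
      - name: nginx
        image: nginx
        volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:            # 为每个 Pod 生成独立 PVC:data-ng-0、data-ng-1、data-ng-2
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: standard   # 假设的 StorageClass 名称
      resources:
        requests:
          storage: 1Gi
```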

April 25, 2022 · 2 min · jiezi

关于kubernetes:摆脱-AI-生产小作坊如何基于-Kubernetes-构建云原生-AI-平台

简介:本文将介绍和梳理咱们对云原生 AI 这个新畛域的思考和定位,介绍云原生 AI 套件产品的外围场景、架构和次要能力。 作者:张凯 前言云原生(Cloud Native)[1]是云计算畛域过来 5 年倒退最快、关注度最高的方向之一。CNCF(Cloud Native Computing Foundation,云原生计算基金会)2021年度调查报告[2]显示,寰球曾经有超过 680 万的云原生技术开发者。同一期间,人工智能 AI 畛域也在“深度学习算法+GPU 大算力+海量数据”的推动下继续蓬勃发展。乏味的是,云原生技术和 AI,尤其是深度学习,呈现了很多关联。 大量 AI 算法工程师都在应用云原生容器技术调试、运行深度学习 AI 工作。很多企业的 AI 利用和 AI 零碎,都构建在容器集群上。为了帮忙用户更容易、更高效地在基于容器环境构建 AI 零碎,进步生产 AI 利用的能力,2021 年阿里云容器服务 ACK 推出了云原生 AI 套件产品。本文将介绍和梳理咱们对云原生 AI 这个新畛域的思考和定位,介绍云原生 AI 套件产品的外围场景、架构和次要能力。 AI 与云原生极简史回顾 AI 的倒退历史,咱们会发现这早已不是一个新的畛域。从 1956 年达特茅斯学术研讨会上被首次定义,到 2006 年 Geoffery Hinton 提出了“深度信念网络”(Deep Believe Network),AI 已历经 3 次倒退浪潮。尤其是近 10 年,在以深度学习(Deep Learning)为外围算法、以 GPU 为代表的大算力为根底,叠加海量生产数据积攒的推动下,AI 技术获得了令人瞩目的停顿。与前两次不同,这一次 AI 在机器视觉、语音辨认、自然语言了解等技术上实现冲破,并在商业、医疗、教育、工业、平安、交通等十分多行业胜利落地,甚至还催生了主动驾驶、AIoT 等新畛域。 然而,随同 AI 技术的突飞猛进和广泛应用,很多企业和机构也发现想要保障“算法+算力+数据”的飞轮高效运行,规模化生产出有商业落地价值的 AI 能力,其实并不容易。低廉的算力投入和运维老本,低下的 AI 服务生产效率,以及不足可解释性和通用性的 AI 算法,都成为横亘在 AI 用户背后的重重门槛。 ...

April 24, 2022 · 5 min · jiezi

关于kubernetes:kubernetesk8s-安装-Prometheus-Grafana

kubernetes(k8s) 装置 Prometheus + Grafana组件阐明MetricServer:是kubernetes集群资源应用状况的聚合器,收集数据给kubernetes集群内应用,如 kubectl,hpa,scheduler等。 PrometheusOperator:是一个零碎监测和警报工具箱,用来存储监控数据。 NodeExporter:用于各node的要害度量指标状态数据。 KubeStateMetrics:收集kubernetes集群内资源对象数 据,制订告警规定。 Prometheus:采纳pull形式收集apiserver,scheduler,controller-manager,kubelet组件数 据,通过http协定传输。 Grafana:是可视化数据统计和监控平台。 克隆代码root@hello:~#   git clone -b release-0.10 https://github.com/prometheus-operator/kube-prometheus.gitCloning into 'kube-prometheus'...remote: Enumerating objects: 16026, done.remote: Counting objects: 100% (2639/2639), done.remote: Compressing objects: 100% (165/165), done.remote: Total 16026 (delta 2524), reused 2485 (delta 2470), pack-reused 13387Receiving objects: 100% (16026/16026), 7.81 MiB | 2.67 MiB/s, done.Resolving deltas: 100% (10333/10333), done.root@hello:~# 进入目录批改镜像地址若拜访Google畅通无阻即可无需批改,跳过即可 root@hello:~# cd kube-prometheus/manifestsroot@hello:~/kube-prometheus/manifests# sed -i "s#quay.io/prometheus/#registry.cn-hangzhou.aliyuncs.com/chenby/#g" *.yamlroot@hello:~/kube-prometheus/manifests# sed -i "s#quay.io/brancz/#registry.cn-hangzhou.aliyuncs.com/chenby/#g" *.yamlroot@hello:~/kube-prometheus/manifests# sed -i "s#k8s.gcr.io/prometheus-adapter/#registry.cn-hangzhou.aliyuncs.com/chenby/#g" *.yamlroot@hello:~/kube-prometheus/manifests# sed -i "s#quay.io/prometheus-operator/#registry.cn-hangzhou.aliyuncs.com/chenby/#g" *.yamlroot@hello:~/kube-prometheus/manifests# sed -i "s#k8s.gcr.io/kube-state-metrics/#registry.cn-hangzhou.aliyuncs.com/chenby/#g" *.yamlroot@hello:~/kube-prometheus/manifests# 批改svc为NodePortroot@hello:~/kube-prometheus/manifests# sed -i  "/ports:/i\  type: NodePort" grafana-service.yamlroot@hello:~/kube-prometheus/manifests# sed -i  "/targetPort: http/i\    nodePort: 31100" grafana-service.yamlroot@hello:~/kube-prometheus/manifests# cat grafana-service.yamlapiVersion: v1kind: Servicemetadata:  labels:    app.kubernetes.io/component: grafana    app.kubernetes.io/name: grafana    app.kubernetes.io/part-of: kube-prometheus    app.kubernetes.io/version: 8.3.3  name: grafana  namespace: monitoringspec:  type: NodePort  ports:  - name: http    port: 3000    nodePort: 31100    targetPort: http  selector:    app.kubernetes.io/component: grafana    app.kubernetes.io/name: grafana    app.kubernetes.io/part-of: kube-prometheusroot@hello:~/kube-prometheus/manifests# sed -i  "/ports:/i\  type: NodePort" prometheus-service.yamlroot@hello:~/kube-prometheus/manifests# sed -i  "/targetPort: web/i\    nodePort: 31200" prometheus-service.yamlroot@hello:~/kube-prometheus/manifests# sed -i  "/targetPort: reloader-web/i\    nodePort: 31300" prometheus-service.yamlroot@hello:~/kube-prometheus/manifests# cat prometheus-service.yamlapiVersion: v1kind: Servicemetadata:  labels:    app.kubernetes.io/component: prometheus    app.kubernetes.io/instance: k8s    app.kubernetes.io/name: prometheus    app.kubernetes.io/part-of: kube-prometheus    app.kubernetes.io/version: 2.32.1  name: prometheus-k8s  namespace: monitoringspec:  type: NodePort  ports:  - name: web    port: 9090    nodePort: 31200    targetPort: web  - name: reloader-web    port: 8080    nodePort: 31300    targetPort: reloader-web  selector:    app.kubernetes.io/component: prometheus    app.kubernetes.io/instance: k8s    app.kubernetes.io/name: prometheus    app.kubernetes.io/part-of: kube-prometheus  sessionAffinity: ClientIProot@hello:~/kube-prometheus/manifests# sed -i  "/ports:/i\  type: NodePort" alertmanager-service.yaml root@hello:~/kube-prometheus/manifests# sed -i 
 "/targetPort: web/i\    nodePort: 31400" alertmanager-service.yaml root@hello:~/kube-prometheus/manifests# sed -i  "/targetPort: reloader-web/i\    nodePort: 31500" alertmanager-service.yaml root@hello:~/kube-prometheus/manifests# cat alertmanager-service.yaml apiVersion: v1kind: Servicemetadata:  labels:    app.kubernetes.io/component: alert-router    app.kubernetes.io/instance: main    app.kubernetes.io/name: alertmanager    app.kubernetes.io/part-of: kube-prometheus    app.kubernetes.io/version: 0.23.0  name: alertmanager-main  namespace: monitoringspec:  type: NodePort  ports:  - name: web    port: 9093    nodePort: 31400    targetPort: web  - name: reloader-web    port: 8080    nodePort: 31500    targetPort: reloader-web  selector:    app.kubernetes.io/component: alert-router    app.kubernetes.io/instance: main    app.kubernetes.io/name: alertmanager    app.kubernetes.io/part-of: kube-prometheus  sessionAffinity: ClientIP执行部署root@hello:~# root@hello:~#  kubectl create -f /root/kube-prometheus/manifests/setupcustomresourcedefinition.apiextensions.k8s.io/alertmanagerconfigs.monitoring.coreos.com createdcustomresourcedefinition.apiextensions.k8s.io/alertmanagers.monitoring.coreos.com createdcustomresourcedefinition.apiextensions.k8s.io/podmonitors.monitoring.coreos.com createdcustomresourcedefinition.apiextensions.k8s.io/probes.monitoring.coreos.com createdcustomresourcedefinition.apiextensions.k8s.io/prometheuses.monitoring.coreos.com createdcustomresourcedefinition.apiextensions.k8s.io/prometheusrules.monitoring.coreos.com createdcustomresourcedefinition.apiextensions.k8s.io/servicemonitors.monitoring.coreos.com createdcustomresourcedefinition.apiextensions.k8s.io/thanosrulers.monitoring.coreos.com creatednamespace/monitoring createdroot@hello:~# root@hello:~# root@hello:~# root@hello:~#  kubectl create -f /root/kube-prometheus/manifests/alertmanager.monitoring.coreos.com/main creatednetworkpolicy.networking.k8s.io/alertmanager-main createdpoddisruptionbudget.policy/alertmanager-main createdprometheusrule.monitoring.coreos.com/alertmanager-main-rules createdsecret/alertmanager-main createdservice/alertmanager-main createdserviceaccount/alertmanager-main createdservicemonitor.monitoring.coreos.com/alertmanager-main createdclusterrole.rbac.authorization.k8s.io/blackbox-exporter createdclusterrolebinding.rbac.authorization.k8s.io/blackbox-exporter createdconfigmap/blackbox-exporter-configuration createddeployment.apps/blackbox-exporter creatednetworkpolicy.networking.k8s.io/blackbox-exporter createdservice/blackbox-exporter createdserviceaccount/blackbox-exporter createdservicemonitor.monitoring.coreos.com/blackbox-exporter createdsecret/grafana-config createdsecret/grafana-datasources createdconfigmap/grafana-dashboard-alertmanager-overview createdconfigmap/grafana-dashboard-apiserver createdconfigmap/grafana-dashboard-cluster-total createdconfigmap/grafana-dashboard-controller-manager createdconfigmap/grafana-dashboard-grafana-overview createdconfigmap/grafana-dashboard-k8s-resources-cluster createdconfigmap/grafana-dashboard-k8s-resources-namespace createdconfigmap/grafana-dashboard-k8s-resources-node createdconfigmap/grafana-dashboard-k8s-resources-pod createdconfigmap/grafana-dashboard-k8s-resources-workload createdconfigmap/grafana-dashboard-k8s-resources-workloads-namespace createdconfigmap/grafana-dashboard-kubelet createdconfigmap/grafana-dashboard-namespace-by-pod createdconfigmap/grafana-dashboard-namespace-by-workload createdconfigmap/grafana-dashboard-node-cluster-rsrc-use 
createdconfigmap/grafana-dashboard-node-rsrc-use createdconfigmap/grafana-dashboard-nodes createdconfigmap/grafana-dashboard-persistentvolumesusage createdconfigmap/grafana-dashboard-pod-total createdconfigmap/grafana-dashboard-prometheus-remote-write createdconfigmap/grafana-dashboard-prometheus createdconfigmap/grafana-dashboard-proxy createdconfigmap/grafana-dashboard-scheduler createdconfigmap/grafana-dashboard-workload-total createdconfigmap/grafana-dashboards createddeployment.apps/grafana creatednetworkpolicy.networking.k8s.io/grafana createdprometheusrule.monitoring.coreos.com/grafana-rules createdservice/grafana createdserviceaccount/grafana createdservicemonitor.monitoring.coreos.com/grafana createdprometheusrule.monitoring.coreos.com/kube-prometheus-rules createdclusterrole.rbac.authorization.k8s.io/kube-state-metrics createdclusterrolebinding.rbac.authorization.k8s.io/kube-state-metrics createddeployment.apps/kube-state-metrics creatednetworkpolicy.networking.k8s.io/kube-state-metrics createdprometheusrule.monitoring.coreos.com/kube-state-metrics-rules createdservice/kube-state-metrics createdserviceaccount/kube-state-metrics createdservicemonitor.monitoring.coreos.com/kube-state-metrics createdprometheusrule.monitoring.coreos.com/kubernetes-monitoring-rules createdservicemonitor.monitoring.coreos.com/kube-apiserver createdservicemonitor.monitoring.coreos.com/coredns createdservicemonitor.monitoring.coreos.com/kube-controller-manager createdservicemonitor.monitoring.coreos.com/kube-scheduler createdservicemonitor.monitoring.coreos.com/kubelet createdclusterrole.rbac.authorization.k8s.io/node-exporter createdclusterrolebinding.rbac.authorization.k8s.io/node-exporter createddaemonset.apps/node-exporter creatednetworkpolicy.networking.k8s.io/node-exporter createdprometheusrule.monitoring.coreos.com/node-exporter-rules createdservice/node-exporter createdserviceaccount/node-exporter createdservicemonitor.monitoring.coreos.com/node-exporter createdclusterrole.rbac.authorization.k8s.io/prometheus-k8s createdclusterrolebinding.rbac.authorization.k8s.io/prometheus-k8s creatednetworkpolicy.networking.k8s.io/prometheus-k8s createdpoddisruptionbudget.policy/prometheus-k8s createdprometheus.monitoring.coreos.com/k8s createdprometheusrule.monitoring.coreos.com/prometheus-k8s-prometheus-rules createdrolebinding.rbac.authorization.k8s.io/prometheus-k8s-config createdrolebinding.rbac.authorization.k8s.io/prometheus-k8s createdrolebinding.rbac.authorization.k8s.io/prometheus-k8s createdrolebinding.rbac.authorization.k8s.io/prometheus-k8s createdrole.rbac.authorization.k8s.io/prometheus-k8s-config createdrole.rbac.authorization.k8s.io/prometheus-k8s createdrole.rbac.authorization.k8s.io/prometheus-k8s createdrole.rbac.authorization.k8s.io/prometheus-k8s createdservice/prometheus-k8s createdserviceaccount/prometheus-k8s createdservicemonitor.monitoring.coreos.com/prometheus-k8s createdapiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io createdclusterrole.rbac.authorization.k8s.io/prometheus-adapter createdclusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader createdclusterrolebinding.rbac.authorization.k8s.io/prometheus-adapter createdclusterrolebinding.rbac.authorization.k8s.io/resource-metrics:system:auth-delegator createdclusterrole.rbac.authorization.k8s.io/resource-metrics-server-resources createdconfigmap/adapter-config createddeployment.apps/prometheus-adapter creatednetworkpolicy.networking.k8s.io/prometheus-adapter 
createdpoddisruptionbudget.policy/prometheus-adapter createdrolebinding.rbac.authorization.k8s.io/resource-metrics-auth-reader createdservice/prometheus-adapter createdserviceaccount/prometheus-adapter createdservicemonitor.monitoring.coreos.com/prometheus-adapter createdclusterrole.rbac.authorization.k8s.io/prometheus-operator createdclusterrolebinding.rbac.authorization.k8s.io/prometheus-operator createddeployment.apps/prometheus-operator creatednetworkpolicy.networking.k8s.io/prometheus-operator createdprometheusrule.monitoring.coreos.com/prometheus-operator-rules createdservice/prometheus-operator createdserviceaccount/prometheus-operator createdservicemonitor.monitoring.coreos.com/prometheus-operator createdroot@hello:~# root@hello:~# 查看验证root@hello:~# kubectl  get pod -n monitoring NAME                                   READY   STATUS    RESTARTS   AGEalertmanager-main-0                    2/2     Running   0          69salertmanager-main-1                    2/2     Running   0          69salertmanager-main-2                    2/2     Running   0          69sblackbox-exporter-6c559c5c66-kw6vd     3/3     Running   0          83sgrafana-7fd69887fb-jmpmp               1/1     Running   0          81skube-state-metrics-867b64476b-h84g4    3/3     Running   0          81snode-exporter-576bm                    2/2     Running   0          80snode-exporter-94gn9                    2/2     Running   0          80snode-exporter-cbjqk                    2/2     Running   0          80snode-exporter-mhlh7                    2/2     Running   0          80snode-exporter-pdc6k                    2/2     Running   0          80snode-exporter-pqqds                    2/2     Running   0          80snode-exporter-s9cz4                    2/2     Running   0          80snode-exporter-tdlnt                    2/2     Running   0          81sprometheus-adapter-8f88b5b45-rrsh4     1/1     Running   0          78sprometheus-adapter-8f88b5b45-wh6pf     1/1     Running   0          78sprometheus-k8s-0                       2/2     Running   0          68sprometheus-k8s-1                       2/2     Running   0          68sprometheus-operator-7f9d9c77f8-h5gkt   2/2     Running   0          78sroot@hello:~# root@hello:~# kubectl  get svc -n monitoring NAME                    TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                         AGEalertmanager-main       NodePort    10.103.47.160    <none>        9093:31400/TCP,8080:31500/TCP   92salertmanager-operated   ClusterIP   None             <none>        9093/TCP,9094/TCP,9094/UDP      78sblackbox-exporter       ClusterIP   10.102.108.160   <none>        9115/TCP,19115/TCP              92sgrafana                 NodePort    10.106.2.21      <none>        3000:31100/TCP                  90skube-state-metrics      ClusterIP   None             <none>        8443/TCP,9443/TCP               90snode-exporter           ClusterIP   None             <none>        9100/TCP                        90sprometheus-adapter      ClusterIP   10.108.65.108    <none>        443/TCP                         87sprometheus-k8s          NodePort    10.100.227.174   <none>        9090:31200/TCP,8080:31300/TCP   88sprometheus-operated     ClusterIP   None             <none>        9090/TCP                        77sprometheus-operator     ClusterIP   None             <none>        8443/TCP                        87sroot@hello:~# http://192.168.1.81:31400/http://192.168.1.81:31200/http://192.168.1.81:31100/一条命令执行cd /root ; git clone -b release-0.10 
https://github.com/prometheus-operator/kube-prometheus.git ;cd kube-prometheus/manifests ;sed -i "s#quay.io/prometheus/#registry.cn-hangzhou.aliyuncs.com/chenby/#g" *.yaml ;sed -i "s#quay.io/brancz/#registry.cn-hangzhou.aliyuncs.com/chenby/#g" *.yaml ;sed -i "s#k8s.gcr.io/prometheus-adapter/#registry.cn-hangzhou.aliyuncs.com/chenby/#g" *.yaml ;sed -i "s#quay.io/prometheus-operator/#registry.cn-hangzhou.aliyuncs.com/chenby/#g" *.yaml ;sed -i "s#k8s.gcr.io/kube-state-metrics/#registry.cn-hangzhou.aliyuncs.com/chenby/#g" *.yaml ;sed -i  "/ports:/i\  type: NodePort" grafana-service.yaml ;sed -i  "/targetPort: http/i\    nodePort: 31100" grafana-service.yaml ;sed -i  "/ports:/i\  type: NodePort" prometheus-service.yaml ;sed -i  "/targetPort: web/i\    nodePort: 31200" prometheus-service.yaml ;sed -i  "/targetPort: reloader-web/i\    nodePort: 31300" prometheus-service.yaml ;sed -i  "/ports:/i\  type: NodePort" alertmanager-service.yaml  ;sed -i  "/targetPort: web/i\    nodePort: 31400" alertmanager-service.yaml  ;sed -i  "/targetPort: reloader-web/i\    nodePort: 31500" alertmanager-service.yaml  ;kubectl create -f /root/kube-prometheus/manifests/setup ;kubectl create -f /root/kube-prometheus/manifests/ ; sleep 30 ; kubectl  get pod -n monitoring  ; kubectl  get svc -n monitoring  ;https://www.oiox.cn/ https://www.chenby.cn/ https://cby-chen.github.io/ https://blog.csdn.net/qq\_33921750 https://my.oschina.net/u/3981543 https://www.zhihu.com/people/... https://segmentfault.com/u/hp... https://juejin.cn/user/331578... https://cloud.tencent.com/dev... https://www.jianshu.com/u/0f8... https://www.toutiao.com/c/use... CSDN、GitHub、知乎、开源中国、思否、掘金、简书、腾讯云、今日头条、集体博客、全网可搜《小陈运维》 文章次要公布于微信公众号:《Linux运维交换社区》
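Once the stack above is running, the Prometheus Operator discovers scrape targets through ServiceMonitor objects rather than a hand-edited prometheus.yml. The following is only a minimal sketch, not part of the original deployment: it assumes you have a Service labeled app: my-app in the default namespace whose port named http-metrics serves /metrics, and that your Prometheus custom resource's serviceMonitorSelector / namespaceSelector allow it to be picked up (the kube-prometheus defaults generally do).

# Hypothetical example: scrape a Service named "my-app"; adjust names and labels to your workload
kubectl apply -f - <<'EOF'
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-app
  namespace: monitoring
spec:
  selector:
    matchLabels:
      app: my-app          # must match the labels on the target Service
  namespaceSelector:
    matchNames:
      - default
  endpoints:
    - port: http-metrics   # the *name* of the Service port, not the number
      interval: 30s
EOF

After applying, the new target should appear under Status -> Targets on the Prometheus NodePort UI exposed above.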

April 24, 2022 · 1 min · jiezi

关于kubernetes:二进制安装Kubernetesk8s-v1236

二进制装置Kubernetes(k8s) v1.23.6背景kubernetes二进制装置 1.23.3 和 1.23.4 和 1.23.5 和 1.23.6 文档以及安装包已生成。 后续尽可能第一工夫更新新版本文档 https://github.com/cby-chen/K... 脚本我的项目地址:https://github.com/cby-chen/B... 手动我的项目地址:https://github.com/cby-chen/K... 1.环境主机名称IP地址阐明软件Master01192.168.1.81master节点kube-apiserver、kube-controller-manager、kube-scheduler、etcd、kubelet、kube-proxy、nfs-clientMaster02192.168.1.82master节点kube-apiserver、kube-controller-manager、kube-scheduler、etcd、kubelet、kube-proxy、nfs-clientMaster03192.168.1.83master节点kube-apiserver、kube-controller-manager、kube-scheduler、etcd、kubelet、kube-proxy、nfs-clientNode01192.168.1.84node节点kubelet、kube-proxy、nfs-clientNode02192.168.1.85node节点kubelet、kube-proxy、nfs-clientNode03192.168.1.86node节点kubelet、kube-proxy、nfs-clientNode04192.168.1.87node节点kubelet、kube-proxy、nfs-clientNode05192.168.1.88node节点kubelet、kube-proxy、nfs-clientLb01192.168.1.80Lb01节点haproxy、keepalivedLb02192.168.1.90Lb02节点haproxy、keepalived 192.168.1.89VIP 软件版本内核4.18.0-373.el8.x86_64CentOS 8v8 或者 v7kube-apiserver、kube-controller-manager、kube-scheduler、kubelet、kube-proxyv1.23.6etcdv3.5.3docker-cev20.10.14containerdv1.5.11cfsslv1.6.1cniv1.1.1crictlv1.23.0haproxyv1.8.27keepalivedv2.1.5网段 物理主机:192.168.1.0/24 service:10.96.0.0/12 pod:172.16.0.0/12 如果有条件倡议k8s集群与etcd集群离开装置 1.1.k8s根底零碎环境配置1.2.配置IPssh root@192.168.1.161 "nmcli con mod ens18 ipv4.addresses 192.168.1.81/24; nmcli con mod ens18 ipv4.gateway 192.168.1.99; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns "8.8.8.8"; nmcli con up ens18"ssh root@192.168.1.167 "nmcli con mod ens18 ipv4.addresses 192.168.1.82/24; nmcli con mod ens18 ipv4.gateway 192.168.1.99; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns "8.8.8.8"; nmcli con up ens18"ssh root@192.168.1.137 "nmcli con mod ens18 ipv4.addresses 192.168.1.83/24; nmcli con mod ens18 ipv4.gateway 192.168.1.99; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns "8.8.8.8"; nmcli con up ens18"ssh root@192.168.1.152 "nmcli con mod ens18 ipv4.addresses 192.168.1.84/24; nmcli con mod ens18 ipv4.gateway 192.168.1.99; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns "8.8.8.8"; nmcli con up ens18"ssh root@192.168.1.198 "nmcli con mod ens18 ipv4.addresses 192.168.1.85/24; nmcli con mod ens18 ipv4.gateway 192.168.1.99; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns "8.8.8.8"; nmcli con up ens18"ssh root@192.168.1.166 "nmcli con mod ens18 ipv4.addresses 192.168.1.86/24; nmcli con mod ens18 ipv4.gateway 192.168.1.99; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns "8.8.8.8"; nmcli con up ens18"ssh root@192.168.1.171 "nmcli con mod ens18 ipv4.addresses 192.168.1.87/24; nmcli con mod ens18 ipv4.gateway 192.168.1.99; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns "8.8.8.8"; nmcli con up ens18"ssh root@192.168.1.159 "nmcli con mod ens18 ipv4.addresses 192.168.1.88/24; nmcli con mod ens18 ipv4.gateway 192.168.1.99; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns "8.8.8.8"; nmcli con up ens18"ssh root@192.168.1.122 "nmcli con mod ens18 ipv4.addresses 192.168.1.80/24; nmcli con mod ens18 ipv4.gateway 192.168.1.99; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns "8.8.8.8"; nmcli con up ens18"ssh root@192.168.1.125 "nmcli con mod ens18 ipv4.addresses 192.168.1.90/24; nmcli con mod ens18 ipv4.gateway 192.168.1.99; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns "8.8.8.8"; nmcli con up ens18"1.3.设置主机名hostnamectl set-hostname 
k8s-master01hostnamectl set-hostname k8s-master02hostnamectl set-hostname k8s-master03hostnamectl set-hostname k8s-node01hostnamectl set-hostname k8s-node02hostnamectl set-hostname k8s-node03hostnamectl set-hostname k8s-node04hostnamectl set-hostname k8s-node05hostnamectl set-hostname lb01hostnamectl set-hostname lb021.4.配置yum源# 对于 CentOS 7sudo sed -e 's|^mirrorlist=|#mirrorlist=|g' \ -e 's|^#baseurl=http://mirror.centos.org|baseurl=https://mirrors.tuna.tsinghua.edu.cn|g' \ -i.bak \ /etc/yum.repos.d/CentOS-*.repo# 对于 CentOS 8sudo sed -e 's|^mirrorlist=|#mirrorlist=|g' \ -e 's|^#baseurl=http://mirror.centos.org/$contentdir|baseurl=https://mirrors.tuna.tsinghua.edu.cn/centos|g' \ -i.bak \ /etc/yum.repos.d/CentOS-*.reposed -e 's|^mirrorlist=|#mirrorlist=|g' -e 's|^#baseurl=http://mirror.centos.org/\$contentdir|baseurl=http://192.168.1.123/centos|g' -i.bak /etc/yum.repos.d/CentOS-*.repo1.5.装置一些必备工具yum -y install wget jq psmisc vim net-tools nfs-utils telnet yum-utils device-mapper-persistent-data lvm2 git network-scripts tar curl -y1.6.装置docker工具 (lb除外)yum install -y yum-utils device-mapper-persistent-data lvm2wget -O /etc/yum.repos.d/docker-ce.repo https://download.docker.com/linux/centos/docker-ce.reposudo sed -i 's+download.docker.com+mirrors.tuna.tsinghua.edu.cn/docker-ce+' /etc/yum.repos.d/docker-ce.repoyum makecacheyum -y install docker-cesystemctl enable --now docker1.7.敞开防火墙systemctl disable --now firewalld1.8.敞开SELinuxsetenforce 0sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config1.9.敞开替换分区sed -ri 's/.*swap.*/#&/' /etc/fstabswapoff -a && sysctl -w vm.swappiness=0cat /etc/fstab# /dev/mapper/centos-swap swap swap defaults 0 01.10.敞开NetworkManager 并启用 network (lb除外)systemctl disable --now NetworkManagersystemctl start network && systemctl enable network1.11.进行工夫同步 (lb除外)服务端yum install chrony -ycat > /etc/chrony.conf << EOF pool ntp.aliyun.com iburstdriftfile /var/lib/chrony/driftmakestep 1.0 3rtcsyncallow 192.168.1.0/24local stratum 10keyfile /etc/chrony.keysleapsectz right/UTClogdir /var/log/chronyEOFsystemctl restart chronydsystemctl enable chronyd客户端yum install chrony -yvim /etc/chrony.confcat /etc/chrony.conf | grep -v "^#" | grep -v "^$"pool 192.168.1.81 iburstdriftfile /var/lib/chrony/driftmakestep 1.0 3rtcsynckeyfile /etc/chrony.keysleapsectz right/UTClogdir /var/log/chronysystemctl restart chronyd ; systemctl enable chronydyum install chrony -y ; sed -i "s#2.centos.pool.ntp.org#192.168.1.81#g" /etc/chrony.conf ; systemctl restart chronyd ; systemctl enable chronyd应用客户端进行验证chronyc sources -v1.12.配置ulimitulimit -SHn 65535cat >> /etc/security/limits.conf <<EOF* soft nofile 655360* hard nofile 131072* soft nproc 655350* hard nproc 655350* seft memlock unlimited* hard memlock unlimiteddEOF1.13.配置免密登录yum install -y sshpassssh-keygen -f /root/.ssh/id_rsa -P ''export IP="192.168.1.81 192.168.1.82 192.168.1.83 192.168.1.84 192.168.1.85 192.168.1.86 192.168.1.87 192.168.1.88 192.168.1.80 192.168.1.90"export SSHPASS=123123for HOST in $IP;do sshpass -e ssh-copy-id -o StrictHostKeyChecking=no $HOSTdone1.14.增加启用源 (lb除外)为 RHEL-8或 CentOS-8配置源yum install https://www.elrepo.org/elrepo-release-8.el8.elrepo.noarch.rpm为 RHEL-7 SL-7 或 CentOS-7 装置 ELRepo yum install https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm查看可用安装包yum --disablerepo="*" --enablerepo="elrepo-kernel" list available1.15.降级内核至4.18版本以上 (lb除外)装置最新的内核# 我这里抉择的是稳定版kernel-ml 如需更新长期保护版本kernel-lt yum --enablerepo=elrepo-kernel install kernel-ml查看已装置那些内核rpm -qa | grep 
kernelkernel-core-4.18.0-358.el8.x86_64kernel-tools-4.18.0-358.el8.x86_64kernel-ml-core-5.16.7-1.el8.elrepo.x86_64kernel-ml-5.16.7-1.el8.elrepo.x86_64kernel-modules-4.18.0-358.el8.x86_64kernel-4.18.0-358.el8.x86_64kernel-tools-libs-4.18.0-358.el8.x86_64kernel-ml-modules-5.16.7-1.el8.elrepo.x86_64查看默认内核grubby --default-kernel/boot/vmlinuz-5.16.7-1.el8.elrepo.x86_64若不是最新的应用命令设置grubby --set-default /boot/vmlinuz-「您的内核版本」.x86_64重启失效rebootv8 整合命令为:yum install https://www.elrepo.org/elrepo-release-8.el8.elrepo.noarch.rpm -y ; yum --disablerepo="*" --enablerepo="elrepo-kernel" list available -y ; yum --enablerepo=elrepo-kernel install kernel-ml -y ; grubby --default-kernel ; rebootv7 整合命令为:yum install https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm -y ; yum --disablerepo="*" --enablerepo="elrepo-kernel" list available -y ; yum --enablerepo=elrepo-kernel install kernel-ml -y ; grubby --set-default \$(ls /boot/vmlinuz-* | grep elrepo) ; grubby --default-kernel1.16.装置ipvsadm (lb除外)yum install ipvsadm ipset sysstat conntrack libseccomp -ycat >> /etc/modules-load.d/ipvs.conf <<EOF ip_vsip_vs_rrip_vs_wrrip_vs_shnf_conntrackip_tablesip_setxt_setipt_setipt_rpfilteript_REJECTipipEOFsystemctl restart systemd-modules-load.servicelsmod | grep -e ip_vs -e nf_conntrackip_vs_sh 16384 0ip_vs_wrr 16384 0ip_vs_rr 16384 0ip_vs 180224 6 ip_vs_rr,ip_vs_sh,ip_vs_wrrnf_conntrack 176128 1 ip_vsnf_defrag_ipv6 24576 2 nf_conntrack,ip_vsnf_defrag_ipv4 16384 1 nf_conntracklibcrc32c 16384 3 nf_conntrack,xfs,ip_vs1.17.批改内核参数 (lb除外)cat <<EOF > /etc/sysctl.d/k8s.confnet.ipv4.ip_forward = 1net.bridge.bridge-nf-call-iptables = 1fs.may_detach_mounts = 1vm.overcommit_memory=1vm.panic_on_oom=0fs.inotify.max_user_watches=89100fs.file-max=52706963fs.nr_open=52706963net.netfilter.nf_conntrack_max=2310720net.ipv4.tcp_keepalive_time = 600net.ipv4.tcp_keepalive_probes = 3net.ipv4.tcp_keepalive_intvl =15net.ipv4.tcp_max_tw_buckets = 36000net.ipv4.tcp_tw_reuse = 1net.ipv4.tcp_max_orphans = 327680net.ipv4.tcp_orphan_retries = 3net.ipv4.tcp_syncookies = 1net.ipv4.tcp_max_syn_backlog = 16384net.ipv4.ip_conntrack_max = 65536net.ipv4.tcp_max_syn_backlog = 16384net.ipv4.tcp_timestamps = 0net.core.somaxconn = 16384EOFsysctl --system1.18.所有节点配置hosts本地解析cat > /etc/hosts <<EOF127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4::1 localhost localhost.localdomain localhost6 localhost6.localdomain6192.168.1.81 k8s-master01192.168.1.82 k8s-master02192.168.1.83 k8s-master03192.168.1.84 k8s-node01192.168.1.85 k8s-node02192.168.1.86 k8s-node03192.168.1.87 k8s-node04192.168.1.88 k8s-node05192.168.1.80 lb01192.168.1.90 lb02192.168.1.89 lb-vipEOF2.k8s根本组件装置2.1.所有k8s节点装置Containerd作为Runtimeyum install containerd -y2.1.1配置Containerd所需的模块cat <<EOF | sudo tee /etc/modules-load.d/containerd.confoverlaybr_netfilterEOF2.1.2加载模块systemctl restart systemd-modules-load.service2.1.3配置Containerd所需的内核cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.confnet.bridge.bridge-nf-call-iptables = 1net.ipv4.ip_forward = 1net.bridge.bridge-nf-call-ip6tables = 1EOF# 加载内核sysctl --system2.1.4创立Containerd的配置文件mkdir -p /etc/containerdcontainerd config default | tee /etc/containerd/config.toml批改Containerd的配置文件sed -i "s#SystemdCgroup\ \=\ false#SystemdCgroup\ \=\ true#g" /etc/containerd/config.tomlcat /etc/containerd/config.toml | grep SystemdCgroup# 找到containerd.runtimes.runc.options,在其下退出SystemdCgroup = true[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options] SystemdCgroup = true [plugins."io.containerd.grpc.v1.cri".cni]# 
将sandbox_image默认地址改为合乎版本地址 sandbox_image = "registry.cn-hangzhou.aliyuncs.com/chenby/pause:3.6"2.1.5启动并设置为开机启动systemctl daemon-reloadsystemctl enable --now containerd2.1.6配置crictl客户端连贯的运行时地位cat > /etc/crictl.yaml <<EOFruntime-endpoint: unix:///run/containerd/containerd.sockimage-endpoint: unix:///run/containerd/containerd.socktimeout: 10debug: falseEOFsystemctl restart containerd2.2.k8s与etcd下载及装置(仅在master01操作)2.2.1下载k8s安装包(你用哪个下哪个)1.下载kubernetes1.23.+的二进制包github二进制包下载地址:https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.23.mdwget https://dl.k8s.io/v1.23.6/kubernetes-server-linux-amd64.tar.gz2.下载etcdctl二进制包github二进制包下载地址:https://github.com/etcd-io/etcd/releaseswget https://github.com/etcd-io/etcd/releases/download/v3.5.3/etcd-v3.5.3-linux-amd64.tar.gz3.docker-ce二进制包下载地址二进制包下载地址:https://download.docker.com/linux/static/stable/x86_64/这里须要下载20.10.+版本wget https://download.docker.com/linux/static/stable/x86_64/docker-20.10.14.tgz4.containerd二进制包下载github下载地址:https://github.com/containerd/containerd/releasescontainerd下载时下载带cni插件的二进制包。wget https://github.com/containerd/containerd/releases/download/v1.6.2/cri-containerd-cni-1.6.2-linux-amd64.tar.gz5.下载cfssl二进制包github二进制包下载地址:https://github.com/cloudflare/cfssl/releaseswget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl_1.6.1_linux_amd64wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssljson_1.6.1_linux_amd64wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl-certinfo_1.6.1_linux_amd646.cni插件下载github下载地址:https://github.com/containernetworking/plugins/releaseswget https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz7.crictl客户端二进制下载github下载:https://github.com/kubernetes-sigs/cri-tools/releaseswget https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.23.0/crictl-v1.23.0-linux-amd64.tar.gz解压k8s安装文件tar -xf kubernetes-server-linux-amd64.tar.gz --strip-components=3 -C /usr/local/bin kubernetes/server/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy}解压etcd安装文件tar -xf etcd-v3.5.3-linux-amd64.tar.gz --strip-components=1 -C /usr/local/bin etcd-v3.5.3-linux-amd64/etcd{,ctl}# 查看/usr/local/bin下内容ls /usr/local/bin/etcd etcdctl kube-apiserver kube-controller-manager kubectl kubelet kube-proxy kube-scheduler曾经整顿好的:wget https://github.com/cby-chen/Kubernetes/releases/download/v1.23.6/kubernetes-v1.23.6.tar2.2.2查看版本[root@k8s-master01 ~]# kubelet --versionKubernetes v1.23.6[root@k8s-master01 ~]# etcdctl versionetcdctl version: 3.5.3API version: 3.5[root@k8s-master01 ~]# 2.2.3将组件发送至其余k8s节点Master='k8s-master02 k8s-master03'Work='k8s-node01 k8s-node02 k8s-node03 k8s-node04 k8s-node05'for NODE in $Master; do echo $NODE; scp /usr/local/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy} $NODE:/usr/local/bin/; scp /usr/local/bin/etcd* $NODE:/usr/local/bin/; donefor NODE in $Work; do scp /usr/local/bin/kube{let,-proxy} $NODE:/usr/local/bin/ ; done2.2.4克隆证书相干文件git clone https://github.com/cby-chen/Kubernetes.git2.2.5所有k8s节点创立目录mkdir -p /opt/cni/bin3.相干证书生成master01节点下载证书生成工具wget "https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl_1.6.1_linux_amd64" -O /usr/local/bin/cfsslwget "https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssljson_1.6.1_linux_amd64" -O /usr/local/bin/cfssljsonchmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson3.1.生成etcd证书特地阐明除外,以下操作在所有master节点操作 ...
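The certificate section above is truncated, so as a rough sketch of the usual first step with cfssl (generating a self-signed etcd CA), assuming file and directory names of your own choosing rather than the exact ones used in the author's repository:

mkdir -p /etc/etcd/ssl && cd /etc/etcd/ssl
# Hypothetical CSR for a self-signed etcd CA; the names/expiry values are placeholders
cat > etcd-ca-csr.json <<'EOF'
{
  "CN": "etcd",
  "key": { "algo": "rsa", "size": 2048 },
  "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "etcd", "OU": "Etcd Security" } ],
  "ca": { "expiry": "876000h" }
}
EOF
# -initca creates the CA key pair; cfssljson writes etcd-ca.pem and etcd-ca-key.pem
cfssl gencert -initca etcd-ca-csr.json | cfssljson -bare etcd-ca
ls etcd-ca*.pem

The server/peer/client certificates that follow are then signed against this CA with cfssl gencert -ca=etcd-ca.pem -ca-key=etcd-ca-key.pem plus a profile from a ca-config.json.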

April 21, 2022 · 22 min · jiezi

关于kubernetes:在k8skubernetes上安装-ingress-V113

介绍Ingress 公开了从集群内部到集群内服务的 HTTP 和 HTTPS 路由。流量路由由 Ingress 资源上定义的规定管制。 上面是一个将所有流量都发送到同一 Service 的简略 Ingress 示例: 写入配置文件,并执行[root@hello ~/yaml]# vim deploy.yaml[root@hello ~/yaml]#[root@hello ~/yaml]#[root@hello ~/yaml]# cat deploy.yamlapiVersion: v1kind: Namespacemetadata:  name: ingress-nginx  labels:    app.kubernetes.io/name: ingress-nginx    app.kubernetes.io/instance: ingress-nginx---# Source: ingress-nginx/templates/controller-serviceaccount.yamlapiVersion: v1kind: ServiceAccountmetadata:  labels:    helm.sh/chart: ingress-nginx-4.0.10    app.kubernetes.io/name: ingress-nginx    app.kubernetes.io/instance: ingress-nginx    app.kubernetes.io/version: 1.1.0    app.kubernetes.io/managed-by: Helm    app.kubernetes.io/component: controller  name: ingress-nginx  namespace: ingress-nginxautomountServiceAccountToken: true---# Source: ingress-nginx/templates/controller-configmap.yamlapiVersion: v1kind: ConfigMapmetadata:  labels:    helm.sh/chart: ingress-nginx-4.0.10    app.kubernetes.io/name: ingress-nginx    app.kubernetes.io/instance: ingress-nginx    app.kubernetes.io/version: 1.1.0    app.kubernetes.io/managed-by: Helm    app.kubernetes.io/component: controller  name: ingress-nginx-controller  namespace: ingress-nginxdata:  allow-snippet-annotations: 'true'---# Source: ingress-nginx/templates/clusterrole.yamlapiVersion: rbac.authorization.k8s.io/v1kind: ClusterRolemetadata:  labels:    helm.sh/chart: ingress-nginx-4.0.10    app.kubernetes.io/name: ingress-nginx    app.kubernetes.io/instance: ingress-nginx    app.kubernetes.io/version: 1.1.0    app.kubernetes.io/managed-by: Helm  name: ingress-nginxrules:  - apiGroups:      - ''    resources:      - configmaps      - endpoints      - nodes      - pods      - secrets      - namespaces    verbs:      - list      - watch  - apiGroups:      - ''    resources:      - nodes    verbs:      - get  - apiGroups:      - ''    resources:      - services    verbs:      - get      - list      - watch  - apiGroups:      - networking.k8s.io    resources:      - ingresses    verbs:      - get      - list      - watch  - apiGroups:      - ''    resources:      - events    verbs:      - create      - patch  - apiGroups:      - networking.k8s.io    resources:      - ingresses/status    verbs:      - update  - apiGroups:      - networking.k8s.io    resources:      - ingressclasses    verbs:      - get      - list      - watch---# Source: ingress-nginx/templates/clusterrolebinding.yamlapiVersion: rbac.authorization.k8s.io/v1kind: ClusterRoleBindingmetadata:  labels:    helm.sh/chart: ingress-nginx-4.0.10    app.kubernetes.io/name: ingress-nginx    app.kubernetes.io/instance: ingress-nginx    app.kubernetes.io/version: 1.1.0    app.kubernetes.io/managed-by: Helm  name: ingress-nginxroleRef:  apiGroup: rbac.authorization.k8s.io  kind: ClusterRole  name: ingress-nginxsubjects:  - kind: ServiceAccount    name: ingress-nginx    namespace: ingress-nginx---# Source: ingress-nginx/templates/controller-role.yamlapiVersion: rbac.authorization.k8s.io/v1kind: Rolemetadata:  labels:    helm.sh/chart: ingress-nginx-4.0.10    app.kubernetes.io/name: ingress-nginx    app.kubernetes.io/instance: ingress-nginx    app.kubernetes.io/version: 1.1.0    app.kubernetes.io/managed-by: Helm    app.kubernetes.io/component: controller  name: ingress-nginx  namespace: ingress-nginxrules:  - apiGroups:      - ''    resources:      - namespaces    verbs:      - get  - apiGroups:      - ''    resources:      - configmaps      - pods      - secrets      - endpoints    verbs:      - get      - 
list      - watch  - apiGroups:      - ''    resources:      - services    verbs:      - get      - list      - watch  - apiGroups:      - networking.k8s.io    resources:      - ingresses    verbs:      - get      - list      - watch  - apiGroups:      - networking.k8s.io    resources:      - ingresses/status    verbs:      - update  - apiGroups:      - networking.k8s.io    resources:      - ingressclasses    verbs:      - get      - list      - watch  - apiGroups:      - ''    resources:      - configmaps    resourceNames:      - ingress-controller-leader    verbs:      - get      - update  - apiGroups:      - ''    resources:      - configmaps    verbs:      - create  - apiGroups:      - ''    resources:      - events    verbs:      - create      - patch---# Source: ingress-nginx/templates/controller-rolebinding.yamlapiVersion: rbac.authorization.k8s.io/v1kind: RoleBindingmetadata:  labels:    helm.sh/chart: ingress-nginx-4.0.10    app.kubernetes.io/name: ingress-nginx    app.kubernetes.io/instance: ingress-nginx    app.kubernetes.io/version: 1.1.0    app.kubernetes.io/managed-by: Helm    app.kubernetes.io/component: controller  name: ingress-nginx  namespace: ingress-nginxroleRef:  apiGroup: rbac.authorization.k8s.io  kind: Role  name: ingress-nginxsubjects:  - kind: ServiceAccount    name: ingress-nginx    namespace: ingress-nginx---# Source: ingress-nginx/templates/controller-service-webhook.yamlapiVersion: v1kind: Servicemetadata:  labels:    helm.sh/chart: ingress-nginx-4.0.10    app.kubernetes.io/name: ingress-nginx    app.kubernetes.io/instance: ingress-nginx    app.kubernetes.io/version: 1.1.0    app.kubernetes.io/managed-by: Helm    app.kubernetes.io/component: controller  name: ingress-nginx-controller-admission  namespace: ingress-nginxspec:  type: ClusterIP  ports:    - name: https-webhook      port: 443      targetPort: webhook      appProtocol: https  selector:    app.kubernetes.io/name: ingress-nginx    app.kubernetes.io/instance: ingress-nginx    app.kubernetes.io/component: controller---# Source: ingress-nginx/templates/controller-service.yamlapiVersion: v1kind: Servicemetadata:  annotations:  labels:    helm.sh/chart: ingress-nginx-4.0.10    app.kubernetes.io/name: ingress-nginx    app.kubernetes.io/instance: ingress-nginx    app.kubernetes.io/version: 1.1.0    app.kubernetes.io/managed-by: Helm    app.kubernetes.io/component: controller  name: ingress-nginx-controller  namespace: ingress-nginxspec:  type: NodePort  externalTrafficPolicy: Local  ipFamilyPolicy: SingleStack  ipFamilies:    - IPv4  ports:    - name: http      port: 80      protocol: TCP      targetPort: http      appProtocol: http    - name: https      port: 443      protocol: TCP      targetPort: https      appProtocol: https  selector:    app.kubernetes.io/name: ingress-nginx    app.kubernetes.io/instance: ingress-nginx    app.kubernetes.io/component: controller---# Source: ingress-nginx/templates/controller-deployment.yamlapiVersion: apps/v1kind: Deploymentmetadata:  labels:    helm.sh/chart: ingress-nginx-4.0.10    app.kubernetes.io/name: ingress-nginx    app.kubernetes.io/instance: ingress-nginx    app.kubernetes.io/version: 1.1.0    app.kubernetes.io/managed-by: Helm    app.kubernetes.io/component: controller  name: ingress-nginx-controller  namespace: ingress-nginxspec:  selector:    matchLabels:      app.kubernetes.io/name: ingress-nginx      app.kubernetes.io/instance: ingress-nginx      app.kubernetes.io/component: controller  revisionHistoryLimit: 10  minReadySeconds: 0  template:    metadata:  
    labels:        app.kubernetes.io/name: ingress-nginx        app.kubernetes.io/instance: ingress-nginx        app.kubernetes.io/component: controller    spec:      dnsPolicy: ClusterFirst      containers:        - name: controller          image: registry.cn-hangzhou.aliyuncs.com/chenby/controller:v1.1.3           imagePullPolicy: IfNotPresent          lifecycle:            preStop:              exec:                command:                  - /wait-shutdown          args:            - /nginx-ingress-controller            - --election-id=ingress-controller-leader            - --controller-class=k8s.io/ingress-nginx            - --configmap=$(POD_NAMESPACE)/ingress-nginx-controller            - --validating-webhook=:8443            - --validating-webhook-certificate=/usr/local/certificates/cert            - --validating-webhook-key=/usr/local/certificates/key          securityContext:            capabilities:              drop:                - ALL              add:                - NET_BIND_SERVICE            runAsUser: 101            allowPrivilegeEscalation: true          env:            - name: POD_NAME              valueFrom:                fieldRef:                  fieldPath: metadata.name            - name: POD_NAMESPACE              valueFrom:                fieldRef:                  fieldPath: metadata.namespace            - name: LD_PRELOAD              value: /usr/local/lib/libmimalloc.so          livenessProbe:            failureThreshold: 5            httpGet:              path: /healthz              port: 10254              scheme: HTTP            initialDelaySeconds: 10            periodSeconds: 10            successThreshold: 1            timeoutSeconds: 1          readinessProbe:            failureThreshold: 3            httpGet:              path: /healthz              port: 10254              scheme: HTTP            initialDelaySeconds: 10            periodSeconds: 10            successThreshold: 1            timeoutSeconds: 1          ports:            - name: http              containerPort: 80              protocol: TCP            - name: https              containerPort: 443              protocol: TCP            - name: webhook              containerPort: 8443              protocol: TCP          volumeMounts:            - name: webhook-cert              mountPath: /usr/local/certificates/              readOnly: true          resources:            requests:              cpu: 100m              memory: 90Mi      nodeSelector:        kubernetes.io/os: linux      serviceAccountName: ingress-nginx      terminationGracePeriodSeconds: 300      volumes:        - name: webhook-cert          secret:            secretName: ingress-nginx-admission---# Source: ingress-nginx/templates/controller-ingressclass.yaml# We don't support namespaced ingressClass yet# So a ClusterRole and a ClusterRoleBinding is requiredapiVersion: networking.k8s.io/v1kind: IngressClassmetadata:  labels:    helm.sh/chart: ingress-nginx-4.0.10    app.kubernetes.io/name: ingress-nginx    app.kubernetes.io/instance: ingress-nginx    app.kubernetes.io/version: 1.1.0    app.kubernetes.io/managed-by: Helm    app.kubernetes.io/component: controller  name: nginx  namespace: ingress-nginxspec:  controller: k8s.io/ingress-nginx---# Source: ingress-nginx/templates/admission-webhooks/validating-webhook.yaml# before changing this value, check the required kubernetes version# https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#prerequisitesapiVersion: 
admissionregistration.k8s.io/v1kind: ValidatingWebhookConfigurationmetadata:  labels:    helm.sh/chart: ingress-nginx-4.0.10    app.kubernetes.io/name: ingress-nginx    app.kubernetes.io/instance: ingress-nginx    app.kubernetes.io/version: 1.1.0    app.kubernetes.io/managed-by: Helm    app.kubernetes.io/component: admission-webhook  name: ingress-nginx-admissionwebhooks:  - name: validate.nginx.ingress.kubernetes.io    matchPolicy: Equivalent    rules:      - apiGroups:          - networking.k8s.io        apiVersions:          - v1        operations:          - CREATE          - UPDATE        resources:          - ingresses    failurePolicy: Fail    sideEffects: None    admissionReviewVersions:      - v1    clientConfig:      service:        namespace: ingress-nginx        name: ingress-nginx-controller-admission        path: /networking/v1/ingresses---# Source: ingress-nginx/templates/admission-webhooks/job-patch/serviceaccount.yamlapiVersion: v1kind: ServiceAccountmetadata:  name: ingress-nginx-admission  namespace: ingress-nginx  annotations:    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded  labels:    helm.sh/chart: ingress-nginx-4.0.10    app.kubernetes.io/name: ingress-nginx    app.kubernetes.io/instance: ingress-nginx    app.kubernetes.io/version: 1.1.0    app.kubernetes.io/managed-by: Helm    app.kubernetes.io/component: admission-webhook---# Source: ingress-nginx/templates/admission-webhooks/job-patch/clusterrole.yamlapiVersion: rbac.authorization.k8s.io/v1kind: ClusterRolemetadata:  name: ingress-nginx-admission  annotations:    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded  labels:    helm.sh/chart: ingress-nginx-4.0.10    app.kubernetes.io/name: ingress-nginx    app.kubernetes.io/instance: ingress-nginx    app.kubernetes.io/version: 1.1.0    app.kubernetes.io/managed-by: Helm    app.kubernetes.io/component: admission-webhookrules:  - apiGroups:      - admissionregistration.k8s.io    resources:      - validatingwebhookconfigurations    verbs:      - get      - update---# Source: ingress-nginx/templates/admission-webhooks/job-patch/clusterrolebinding.yamlapiVersion: rbac.authorization.k8s.io/v1kind: ClusterRoleBindingmetadata:  name: ingress-nginx-admission  annotations:    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded  labels:    helm.sh/chart: ingress-nginx-4.0.10    app.kubernetes.io/name: ingress-nginx    app.kubernetes.io/instance: ingress-nginx    app.kubernetes.io/version: 1.1.0    app.kubernetes.io/managed-by: Helm    app.kubernetes.io/component: admission-webhookroleRef:  apiGroup: rbac.authorization.k8s.io  kind: ClusterRole  name: ingress-nginx-admissionsubjects:  - kind: ServiceAccount    name: ingress-nginx-admission    namespace: ingress-nginx---# Source: ingress-nginx/templates/admission-webhooks/job-patch/role.yamlapiVersion: rbac.authorization.k8s.io/v1kind: Rolemetadata:  name: ingress-nginx-admission  namespace: ingress-nginx  annotations:    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded  labels:    helm.sh/chart: ingress-nginx-4.0.10    app.kubernetes.io/name: ingress-nginx    app.kubernetes.io/instance: ingress-nginx    app.kubernetes.io/version: 1.1.0    app.kubernetes.io/managed-by: Helm    app.kubernetes.io/component: 
admission-webhookrules:  - apiGroups:      - ''    resources:      - secrets    verbs:      - get      - create---# Source: ingress-nginx/templates/admission-webhooks/job-patch/rolebinding.yamlapiVersion: rbac.authorization.k8s.io/v1kind: RoleBindingmetadata:  name: ingress-nginx-admission  namespace: ingress-nginx  annotations:    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded  labels:    helm.sh/chart: ingress-nginx-4.0.10    app.kubernetes.io/name: ingress-nginx    app.kubernetes.io/instance: ingress-nginx    app.kubernetes.io/version: 1.1.0    app.kubernetes.io/managed-by: Helm    app.kubernetes.io/component: admission-webhookroleRef:  apiGroup: rbac.authorization.k8s.io  kind: Role  name: ingress-nginx-admissionsubjects:  - kind: ServiceAccount    name: ingress-nginx-admission    namespace: ingress-nginx---# Source: ingress-nginx/templates/admission-webhooks/job-patch/job-createSecret.yamlapiVersion: batch/v1kind: Jobmetadata:  name: ingress-nginx-admission-create  namespace: ingress-nginx  annotations:    helm.sh/hook: pre-install,pre-upgrade    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded  labels:    helm.sh/chart: ingress-nginx-4.0.10    app.kubernetes.io/name: ingress-nginx    app.kubernetes.io/instance: ingress-nginx    app.kubernetes.io/version: 1.1.0    app.kubernetes.io/managed-by: Helm    app.kubernetes.io/component: admission-webhookspec:  template:    metadata:      name: ingress-nginx-admission-create      labels:        helm.sh/chart: ingress-nginx-4.0.10        app.kubernetes.io/name: ingress-nginx        app.kubernetes.io/instance: ingress-nginx        app.kubernetes.io/version: 1.1.0        app.kubernetes.io/managed-by: Helm        app.kubernetes.io/component: admission-webhook    spec:      containers:        - name: create          image: registry.cn-hangzhou.aliyuncs.com/chenby/kube-webhook-certgen:v1.1.1           imagePullPolicy: IfNotPresent          args:            - create            - --host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.$(POD_NAMESPACE).svc            - --namespace=$(POD_NAMESPACE)            - --secret-name=ingress-nginx-admission          env:            - name: POD_NAMESPACE              valueFrom:                fieldRef:                  fieldPath: metadata.namespace          securityContext:            allowPrivilegeEscalation: false      restartPolicy: OnFailure      serviceAccountName: ingress-nginx-admission      nodeSelector:        kubernetes.io/os: linux      securityContext:        runAsNonRoot: true        runAsUser: 2000---# Source: ingress-nginx/templates/admission-webhooks/job-patch/job-patchWebhook.yamlapiVersion: batch/v1kind: Jobmetadata:  name: ingress-nginx-admission-patch  namespace: ingress-nginx  annotations:    helm.sh/hook: post-install,post-upgrade    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded  labels:    helm.sh/chart: ingress-nginx-4.0.10    app.kubernetes.io/name: ingress-nginx    app.kubernetes.io/instance: ingress-nginx    app.kubernetes.io/version: 1.1.0    app.kubernetes.io/managed-by: Helm    app.kubernetes.io/component: admission-webhookspec:  template:    metadata:      name: ingress-nginx-admission-patch      labels:        helm.sh/chart: ingress-nginx-4.0.10        app.kubernetes.io/name: ingress-nginx        app.kubernetes.io/instance: ingress-nginx        app.kubernetes.io/version: 1.1.0        app.kubernetes.io/managed-by: Helm        
app.kubernetes.io/component: admission-webhook    spec:      containers:        - name: patch          image: registry.cn-hangzhou.aliyuncs.com/chenby/kube-webhook-certgen:v1.1.1           imagePullPolicy: IfNotPresent          args:            - patch            - --webhook-name=ingress-nginx-admission            - --namespace=$(POD_NAMESPACE)            - --patch-mutating=false            - --secret-name=ingress-nginx-admission            - --patch-failure-policy=Fail          env:            - name: POD_NAMESPACE              valueFrom:                fieldRef:                  fieldPath: metadata.namespace          securityContext:            allowPrivilegeEscalation: false      restartPolicy: OnFailure      serviceAccountName: ingress-nginx-admission      nodeSelector:        kubernetes.io/os: linux      securityContext:        runAsNonRoot: true        runAsUser: 2000[root@hello ~/yaml]#启用后端,写入配置文件执行[root@hello ~/yaml]# vim backend.yaml[root@hello ~/yaml]# cat backend.yamlapiVersion: apps/v1kind: Deploymentmetadata:  name: default-http-backend  labels:    app.kubernetes.io/name: default-http-backend  namespace: kube-systemspec:  replicas: 1  selector:    matchLabels:      app.kubernetes.io/name: default-http-backend  template:    metadata:      labels:        app.kubernetes.io/name: default-http-backend    spec:      terminationGracePeriodSeconds: 60      containers:      - name: default-http-backend        image: registry.cn-hangzhou.aliyuncs.com/chenby/defaultbackend-amd64:1.5         livenessProbe:          httpGet:            path: /healthz            port: 8080            scheme: HTTP          initialDelaySeconds: 30          timeoutSeconds: 5        ports:        - containerPort: 8080        resources:          limits:            cpu: 10m            memory: 20Mi          requests:            cpu: 10m            memory: 20Mi---apiVersion: v1kind: Servicemetadata:  name: default-http-backend  namespace: kube-system  labels:    app.kubernetes.io/name: default-http-backendspec:  ports:  - port: 80    targetPort: 8080  selector:    app.kubernetes.io/name: default-http-backend[root@hello ~/yaml]#装置测试利用[root@hello ~/yaml]# vim ingress-demo-app.yaml[root@hello ~/yaml]#[root@hello ~/yaml]# cat ingress-demo-app.yamlapiVersion: apps/v1kind: Deploymentmetadata:  name: hello-serverspec:  replicas: 2  selector:    matchLabels:      app: hello-server  template:    metadata:      labels:        app: hello-server    spec:      containers:      - name: hello-server        image: registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/hello-server        ports:        - containerPort: 9000---apiVersion: apps/v1kind: Deploymentmetadata:  labels:    app: nginx-demo  name: nginx-demospec:  replicas: 2  selector:    matchLabels:      app: nginx-demo  template:    metadata:      labels:        app: nginx-demo    spec:      containers:      - image: nginx        name: nginx---apiVersion: v1kind: Servicemetadata:  labels:    app: nginx-demo  name: nginx-demospec:  selector:    app: nginx-demo  ports:  - port: 8000    protocol: TCP    targetPort: 80---apiVersion: v1kind: Servicemetadata:  labels:    app: hello-server  name: hello-serverspec:  selector:    app: hello-server  ports:  - port: 8000    protocol: TCP    targetPort: 9000---apiVersion: networking.k8s.io/v1kind: Ingress  metadata:  name: ingress-host-barspec:  ingressClassName: nginx  rules:  - host: "hello.chenby.cn"    http:      paths:      - pathType: Prefix        path: "/"        backend:          service:            name: hello-server            
port:              number: 8000  - host: "demo.chenby.cn"    http:      paths:      - pathType: Prefix        path: "/nginx"          backend:          service:            name: nginx-demo            port:              number: 8000[root@hello ~/yaml]#[root@hello ~/yaml]# kubectl  get ingressNAME               CLASS    HOSTS                            ADDRESS        PORTS   AGEingress-demo-app   <none>   app.demo.com                     192.168.1.11   80      20mingress-host-bar   nginx    hello.chenby.cn,demo.chenby.cn   192.168.1.11   80      2m17s[root@hello ~/yaml]#执行部署root@hello:~# kubectl  apply -f deploy.yaml namespace/ingress-nginx createdserviceaccount/ingress-nginx createdconfigmap/ingress-nginx-controller createdclusterrole.rbac.authorization.k8s.io/ingress-nginx createdclusterrolebinding.rbac.authorization.k8s.io/ingress-nginx createdrole.rbac.authorization.k8s.io/ingress-nginx createdrolebinding.rbac.authorization.k8s.io/ingress-nginx createdservice/ingress-nginx-controller-admission createdservice/ingress-nginx-controller createddeployment.apps/ingress-nginx-controller createdingressclass.networking.k8s.io/nginx createdvalidatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission createdserviceaccount/ingress-nginx-admission createdclusterrole.rbac.authorization.k8s.io/ingress-nginx-admission createdclusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission createdrole.rbac.authorization.k8s.io/ingress-nginx-admission createdrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission createdjob.batch/ingress-nginx-admission-create createdjob.batch/ingress-nginx-admission-patch createdroot@hello:~# root@hello:~# kubectl  apply -f backend.yaml deployment.apps/default-http-backend createdservice/default-http-backend createdroot@hello:~# root@hello:~# kubectl  apply -f ingress-demo-app.yaml deployment.apps/hello-server createddeployment.apps/nginx-demo createdservice/nginx-demo createdservice/hello-server createdingress.networking.k8s.io/ingress-host-bar createdroot@hello:~# 过滤查看ingress端口[root@hello ~/yaml]# kubectl  get svc -A | grep ingressdefault         ingress-demo-app                     ClusterIP   10.68.231.41    <none>        80/TCP                       51mingress-nginx   ingress-nginx-controller             NodePort    10.68.93.71     <none>        80:32746/TCP,443:30538/TCP   32mingress-nginx   ingress-nginx-controller-admission   ClusterIP   10.68.146.23    <none>        443/TCP                      32m[root@hello ~/yaml]#https://www.oiox.cn/ https://www.chenby.cn/ https://cby-chen.github.io/ https://blog.csdn.net/qq\_33921750 https://my.oschina.net/u/3981543 https://www.zhihu.com/people/... https://segmentfault.com/u/hp... https://juejin.cn/user/331578... https://cloud.tencent.com/dev... https://www.jianshu.com/u/0f8... https://www.toutiao.com/c/use... CSDN、GitHub、知乎、开源中国、思否、掘金、简书、腾讯云、今日头条、集体博客、全网可搜《小陈运维》 本文应用 文章同步助手 同步
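With the controller exposed as a NodePort (80:32746 in the output above) and the two host rules in place, you can verify the host-based routing from outside the cluster without editing /etc/hosts. The node IP 192.168.1.11 and port 32746 are taken from the example output; substitute your own values.

# Resolve the test hostnames to a node IP only for this request
curl --resolve hello.chenby.cn:32746:192.168.1.11 http://hello.chenby.cn:32746/
curl --resolve demo.chenby.cn:32746:192.168.1.11  http://demo.chenby.cn:32746/nginx

The first request should reach hello-server, the second the nginx-demo Service behind the /nginx prefix.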

April 21, 2022 · 1 min · jiezi

关于kubernetes:如何修改-Rancher-Server-的-IP-地址

About the author: 王海龙, Technical Community Manager at SUSE Rancher China, responsible for maintaining and running the Rancher China technical community. Eight years of experience in cloud computing, spanning the shift from OpenStack to Kubernetes, with extensive hands-on operations experience across Linux, KVM virtualization and Docker containers.

Note: this guide applies to Rancher v2.5 and earlier; it does not apply to v2.6. Be sure to take a backup before you start.

Preface

Every downstream user cluster managed by Rancher runs a cluster agent, which establishes a tunnel and connects through it to the corresponding cluster controller inside Rancher server.

The cluster agent, also known as cattle-cluster-agent, is the component running in the downstream user cluster. One of its key jobs is to report events, statistics, node information and health status between the downstream cluster and Rancher server over that tunnel to the cluster controller.

When the Rancher server IP changes and cattle-cluster-agent can no longer reach Rancher server through the tunnel, you will see logs like the following in the cattle-cluster-agent container of the downstream cluster:

time="2022-04-06T03:42:22Z" level=info msg="Connecting to wss://35.183.183.66/v3/connect with token jhh9rx4zmgkrw2mz8mkvsmlnnx6q5jllnqb8jnr2vdxcgglglqbdjz"
time="2022-04-06T03:42:22Z" level=info msg="Connecting to proxy" url="wss://35.183.183.66/v3/connect"
time="2022-04-06T03:42:32Z" level=error msg="Failed to connect to proxy. Empty dialer response" error="dial tcp 35.183.183.66:443: i/o timeout"
time="2022-04-06T03:42:32Z" level=error msg="Remotedialer proxy error" error="dial tcp 35.183.183.66:443: i/o timeout"

Here 35.183.183.66 is the old Rancher server IP.
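Before changing anything, it helps to confirm on the downstream cluster which server URL the agent is actually dialing. This is only a diagnostic sketch using plain kubectl; the resource names follow the usual Rancher 2.x layout (cattle-system namespace, cattle-cluster-agent Deployment), adjust if yours differ.

# CATTLE_SERVER holds the Rancher server URL the agent tries to reach
kubectl -n cattle-system get deployment cattle-cluster-agent \
  -o jsonpath='{.spec.template.spec.containers[0].env[?(@.name=="CATTLE_SERVER")].value}'; echo
# Watch the tunnel errors shown above in real time
kubectl -n cattle-system logs -f deployment/cattle-cluster-agent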

April 20, 2022 · 2 min · jiezi

关于kubernetes:在Kubernetes上安装Netdata的方法

介绍Netdata可用于监督kubernetes集群并显示无关集群的信息,包含节点内存使用率、CPU、网络等,简略的说,Netdata仪表板可让您全面理解Kubernetes集群,包含在每个节点上运行的服务和Pod。 装置HELMroot@hello:~# curl https://baltocdn.com/helm/signing.asc | sudo apt-key add -root@hello:~# sudo apt-get install apt-transport-https --yesroot@hello:~# echo "deb https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.listroot@hello:~# sudo apt-get updateroot@hello:~# sudo apt-get install helm增加源并装置root@hello:~# helm repo add netdata https://netdata.github.io/helmchart/"netdata" has been added to your repositoriesroot@hello:~# helm install netdata netdata/netdataW0420 09:20:51.993046 1306427 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+W0420 09:20:52.298158 1306427 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+NAME: netdataLAST DEPLOYED: Wed Apr 20 09:20:50 2022NAMESPACE: defaultSTATUS: deployedREVISION: 1TEST SUITE: NoneNOTES:1. netdata will be available on http://netdata.k8s.local/, on the exposed port of your ingress controllerIn a production environment, you  You can get that port via `kubectl get services`. e.g. in the following example, the http exposed port is 31737, the https one is 30069. The hostname netdata.k8s.local will need to be added to /etc/hosts, so that it resolves to the exposed IP. That IP depends on how your cluster is set up:         - When no load balancer is available (e.g. with minikube), you get the IP shown on `kubectl cluster-info`        - In a production environment, the command `kubectl get services` will show the IP under the EXTERNAL-IP columnThe port can be retrieved in both cases from `kubectl get services`NAME                                         TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGEexiled-tapir-nginx-ingress-controller        LoadBalancer   10.98.132.169    <pending>     80:31737/TCP,443:30069/TCP   11hroot@hello:~# helm listNAME    NAMESPACE       REVISION        UPDATED                                 STATUS          CHART           APP VERSIONnetdata default         1               2022-04-20 09:20:50.947921117 +0800 CST deployed        netdata-3.7.15  v1.33.1    查看PODroot@hello:~# kubectl  get pod NAME                                      READY   STATUS    RESTARTS      AGEnetdata-child-2h65n                       2/2     Running   0             77snetdata-child-dfv82                       2/2     Running   0             77snetdata-child-h6fw6                       2/2     Running   0             77snetdata-child-lc9fd                       2/2     Running   0             77snetdata-child-nh566                       2/2     Running   0             77snetdata-child-ns2p2                       2/2     Running   0             77snetdata-child-v74x5                       2/2     Running   0             77snetdata-child-xjlrv                       2/2     Running   0             77snetdata-parent-57bf6bf47d-vc6fq           1/1     Running   0             77s增加SVC使内部即可拜访root@hello:~# kubectl  get svcNAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)     AGEkubernetes   ClusterIP   10.96.0.1        <none>        443/TCP     18dnetdata      ClusterIP   10.102.160.106   <none>        19999/TCP   3m39sroot@hello:~# kubectl expose  deployment netdata-parent --type="NodePort" --port 19999service/netdata-parent exposedroot@hello:~# kubectl  get svcNAME             TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)           AGEkubernetes       
ClusterIP   10.96.0.1        <none>        443/TCP           18dnetdata          ClusterIP   10.102.160.106   <none>        19999/TCP         3m43snetdata-parent   NodePort    10.100.122.173   <none>        19999:30518/TCP   2sroot@hello:~# 通过http://<yourmaster-IP>:30518  拜访浏览器中的netdata仪表板 点击左侧能够查看具体每一台机器的信息 https://www.oiox.cn/ https://www.chenby.cn/ https://cby-chen.github.io/ https://blog.csdn.net/qq\_33921750 https://my.oschina.net/u/3981543 https://www.zhihu.com/people/... https://segmentfault.com/u/hp... https://juejin.cn/user/331578... https://cloud.tencent.com/dev... https://www.jianshu.com/u/0f8... https://www.toutiao.com/c/use... CSDN、GitHub、知乎、开源中国、思否、掘金、简书、腾讯云、今日头条、集体博客、全网可搜《小陈运维》
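If you would rather not expose a NodePort at all (for example on a test cluster), a plain port-forward to the netdata parent Service shown in the output above gives the same dashboard temporarily:

# Forward local port 19999 to the netdata Service, then open http://localhost:19999
kubectl port-forward svc/netdata 19999:19999

The forward lasts only as long as the command runs, which is often enough for an ad-hoc look at the cluster.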

April 20, 2022 · 1 min · jiezi

关于kubernetes:深度解密|基于-eBPF-的-Kubernetes-问题排查全景图发布

简介:通过 eBPF 无侵入地采集多语言、多网络协议的黄金指标/网络指标/Trace,通过关联 Kubernetes 对象、利用、云服务等各种上下文,同时在须要进一步下钻的时候提供专业化的监测工具(如火焰图),实现了 Kubernetes 环境下的一站式可观测性平台。 作者 | 李煌东 当 Kubernetes 成为云原生事实标准,可观测性挑战随之而来以后,云原生技术以容器技术为根底,通过规范可扩大的调度、网络、存储、容器运行时接口来提供基础设施。同时,通过规范可扩大的申明式资源和控制器来提供运维能力,两层标准化推动了开发与运维关注点拆散,各畛域进一步晋升规模化和专业化,达到老本、效率、稳定性的全面优化。 在这样的大技术背景下,越来越多的公司引入了云原生技术来开发、运维业务利用。正因为云原生技术带来了越发纷繁复杂的可能性,业务利用呈现了微服务泛滥、多语言开发、多通信协议的鲜明特征。同时,云原生技术自身将复杂度下移,给可观测性带来了更多挑战: 1、混沌的微服务架构,多语言和多网络协议混淆业务架构因为分工问题,容易呈现服务数量多,调用协定和关系非常复杂的景象,导致的常见问题包含: 无奈精确清晰理解、掌控全局的零碎运行架构;无法回答利用之间的连通性是否正确;多语言、多网络调用协定带来埋点老本呈线性增长,且反复埋点 ROI 低,开发个别将这类需要优先级升高,但可观测数据又不得不采集。2、下沉的基础设施能力屏蔽实现细节,问题定界越发艰难基础设施能力持续下沉,开发和运维关注点持续拆散,分层后彼此屏蔽了实现细节,数据方面不好关联了,呈现问题后不能迅速地定界问题呈现在哪一层。开发同学只关注利用是否失常工作,并不关怀底层基础设施细节,呈现问题后须要运维同学协同排查问题。运维同学在问题排查过程中,须要开发同学提供足够的上下游来推动排查,否则只拿到“某某利用提早高”这么抽象的表述,这很难有进一步后果。所以,开发同学和运维同学之间须要共同语言来进步沟通效率,Kubernetes 的 Label、Namespace 等概念非常适合用来构建上下文信息。 3、繁多监测零碎,造成监测界面不统一简单零碎带来的一个重大副作用就是监测零碎繁多。数据链路不关联、不对立,监测界面体验不统一。很多运维同学或者大多都有过这样的体验:定位问题时浏览器关上几十个窗口,在 Grafana、控制台、日志等各种工具之间来回切换,不仅十分耗时微小,且大脑能解决的信息无限,问题定位效率低下。如果有对立的可观测性界面,数据和信息失去无效地组织,缩小注意力扩散和页面切换,来进步问题定位效率,把宝贵时间投入到业务逻辑的构建下来。 解决思路与技术计划为了解决上述问题,咱们须要应用一种反对多语言,多通信协议的技术,并在产品层面尽可能笼罩软件栈端到端的可观测性需求,通过调研,咱们提出一种立足于容器界面和底层操作系统,向上关联利用性能监测的可观测性解决思路。 要采集容器、节点运行环境、利用、网络各个维度的数据挑战十分大,云原生社区针对不同需要给出了 cAdvisor、node exporter、kube-state-metics 等多种形式,但依然无奈满足全副需要。保护泛滥采集器的老本也不容小觑,引发的一个思考是是否有一种对利用无侵入的、反对动静扩大的数据采集计划?目前最好的答案是 eBPF。 1、「数据采集:eBPF 的超能力」 eBPF 相当于在内核中构建了一个执行引擎,通过内核调用将这段程序 attach 到某个内核事件上,实现监听内核事件。有了事件咱们就能进一步做协定推导,筛选出感兴趣的协定,对事件进一步解决后放到 ringbuffer 或者 eBPF 自带的数据结构 Map 中,供用户态过程读取。用户态过程读取这些数据后,进一步关联 Kubernetes 元数据后推送到存储端。这是整体处理过程。 eBPF 的超能力体现在能订阅各种内核事件,如文件读写、网络流量等,运行在 Kubernetes 中的容器或者 Pod 里的所有行为都是通过内核零碎调用来实现的,内核晓得机器上所有过程中产生的所有事件,所以内核简直是可观测性的最佳观测点,这也是咱们为什么抉择 eBPF 的起因。另一个在内核上做监测的益处是利用不须要变更,也不须要从新编译内核,做到了真正意义上的无侵入。当集群里有几十上百个利用的时候,无侵入的解决方案会帮上大忙。 但作为新技术,人们对 eBPF 也存在些许担心,比方安全性与探针性能。为了充分保证内核运行时的安全性,eBPF 代码进行了诸多限度,如最大堆栈空间以后为 512、最大指令数为 100 万。与此同时,针对性能担心,eBPF 探针管制在大概在 1%左右。其高性能次要体现在内核中解决数据,缩小数据在内核态和用户态之间的拷贝。简略说就是数据在内核里算好了再给用户过程,比方一个 Gauge 值,以往的做法是将原始数据拷贝到用户过程再计算。 2、可编程的执行引擎人造适宜可观测性可观测性工程通过帮忙用户更好的了解零碎外部状态来打消常识盲区和及时打消系统性危险。eBPF 在可观测性方面有何威力呢? 以利用异样为例,当发现利用有异样后,解决问题过程中发现短少利用层面可观测性,这时候通过埋点、测试、上线补充了利用可观测性,具体的问题失去了解决,但往往治标不治本,下一次别的中央有问题,又须要走同样的流程,另外多语言、多协定让埋点的老本更高。更好的做法是用无侵入形式去解决,以防止须要观测时没有数据。 eBPF 执行引擎可通过动静加载执行 eBPF 脚本来采集可观测性数据,举个具体例子,假如本来的 Kubernetes 零碎并没有做过程相干的监测,有一天发现了某个歹意过程(如挖矿程序)在疯狂地占用 CPU,这时候咱们会发现这类歹意的过程创立应该被监测起来,这时候咱们能够通过集成开源的过程事件检测库来是实现,但这往往须要打包、测试、公布这一整套流程,全副走完可能一个月就过来了。 ...
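To make the "subscribe to kernel events" idea concrete, the process-creation example above can be watched with a single bpftrace one-liner. This is only an illustration of the eBPF mechanism run directly on a node with bpftrace installed, not the product described in the article, and the args-> syntax may vary slightly between bpftrace versions.

# Print the argv of every execve() on the node, i.e. every new process being started
bpftrace -e 'tracepoint:syscalls:sys_enter_execve { join(args->argv); }'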

April 20, 2022 · 2 min · jiezi

关于kubernetes:探针配置失误线上容器应用异常死锁后kubernetes集群未及时响应自愈重启容器

Probes were misconfigured: after a production container application deadlocked, why did the Kubernetes cluster not respond and restart the container to self-heal?

Several production services fell into a dead loop, a large number of requests could no longer get through, and the deadlocked applications sat there for a long time without self-healing. Why did Kubernetes not detect that the application containers were faulty and restart them in time?

囧么肥事-胡言乱语

Be clear about why container probes are needed in the first place. One benefit of a Kubernetes cluster is that it can monitor the health of application containers and self-heal when necessary. Once a Pod is scheduled to a node, the kubelet on that node runs the Pod's containers. If the application has a bug that makes it crash every so often, Kubernetes restarts it automatically, so the application gains self-healing in Kubernetes even if it does nothing special itself.

By default, however, the kubelet only uses the container's running state as the health signal; it cannot observe the state of the application inside the container, for example a process that has hung. That leads to a service that can no longer serve requests and loses traffic. A health-check mechanism is therefore introduced to make sure the container is genuinely alive and healthy.

A Pod checks container health with two kinds of probes: LivenessProbe (liveness probe) and ReadinessProbe (readiness probe). There is also a probe that watches application startup: StartupProbe (startup probe).

livenessProbe: indicates whether the container is running. If the liveness probe fails, the kubelet kills the container, and the container's future is decided by its restart policy. If the container does not provide a liveness probe, the default state is Success.
readinessProbe: indicates whether the container is ready to serve requests. If the readiness probe fails, the endpoints controller removes the Pod's IP address from the endpoint lists of all Services that match the Pod. The readiness state before the initial delay defaults to Failure. If the container does not provide a readiness probe, the default state is Success.
startupProbe: indicates whether the application in the container has started. If a startup probe is provided, all other probes are disabled until it succeeds. If the startup probe fails, the kubelet kills the container and the container is restarted according to its restart policy. If the container does not provide a startup probe, the default state is Success.

How to choose the right probe for particular scenarios?

The kubelet uses the liveness probe to know when to restart a container. For example, a liveness probe can catch a deadlock (the application is running but cannot make further progress). Restarting the container in that situation helps keep the application available despite the bug.

The kubelet uses the readiness probe to know when a container is ready to start accepting traffic. A Pod is considered ready only when all of its containers are ready. One use of this signal is to control which Pods serve as backends for a Service: while a Pod is not ready, it is removed from the Service's load balancer.

The kubelet uses the startup probe to detect when the application container has started. When configured, it ensures the liveness and readiness checks begin only after startup has succeeded, so they do not interfere with the application starting up. This can be used for liveness detection of slow-starting containers, preventing them from being killed before they are up and running.

When should you use a liveness probe?

If the process in the container is able to crash on its own whenever it hits a problem or becomes unhealthy, a liveness probe is not strictly necessary; the kubelet will automatically take the repair action according to the Pod's restartPolicy. If you want the container to be killed and restarted when a probe fails, specify a liveness probe and set restartPolicy to "Always" or "OnFailure". ...
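As a minimal sketch of how the three probes described above fit together in a Pod spec: the image, paths, ports and timing values below are made-up illustrations, tune them to your own application.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: probe-demo
spec:
  containers:
  - name: app
    image: nginx
    ports:
    - containerPort: 80
    startupProbe:              # give a slow-starting app up to 30 x 10s before other probes kick in
      httpGet: { path: /, port: 80 }
      failureThreshold: 30
      periodSeconds: 10
    livenessProbe:             # restart the container if it stops answering (e.g. deadlock)
      httpGet: { path: /, port: 80 }
      periodSeconds: 10
      failureThreshold: 3
    readinessProbe:            # take the Pod out of Service endpoints while it cannot serve
      httpGet: { path: /, port: 80 }
      periodSeconds: 5
EOF

A deadlocked process that keeps the port open but stops responding will fail the liveness probe and be restarted, which is exactly the self-healing the incident above was missing.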

April 20, 2022 · 1 min · jiezi

关于kubernetes:k8s-由于资产无法清空导致-namespace-无法删除

Situation: when deleting the flux-system namespace, the namespace stays stuck in the Terminating state.

kubectl get namespace flux-system

Check the resources under flux-system: several pods have not been deleted and are stuck in Terminating.

kubectl -n flux-system get all

Force-delete all of the stuck pods:

kubectl -n flux-system delete pod kustomize-controller-5c84db559f-7pcfd --force --grace-period 0
kubectl -n flux-system delete pod notification-controller-78db94d87d-g4t8l --force --grace-period 0
kubectl -n flux-system delete pod source-controller-778dccd496-k6fmj --force --grace-period 0

Confirm the pods have been deleted, then check flux-system again: the namespace has now been removed.
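If force-deleting the pods is not enough, the namespace object itself may still be held by finalizers. A commonly used last resort is to clear the namespace finalizers through the finalize subresource; this sketch requires jq and should be used with care, since it skips normal cleanup of whatever those finalizers were guarding.

# Strip the finalizers from the namespace object and push it to the finalize endpoint
kubectl get namespace flux-system -o json \
  | jq 'del(.spec.finalizers)' \
  | kubectl replace --raw "/api/v1/namespaces/flux-system/finalize" -f -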

April 19, 2022 · 1 min · jiezi