Background:
This follows Part 1 of the Terraform series, which covered simply creating a Tencent Cloud CVM. The plan is to keep working around the CVM to get familiar with the basic workflow: expanding the system disk, attaching a data disk, account SSH keys, and binding a public IP. Work through the official docs and try it out!
Terraform Series Part 2: More CVM Operations on Tencent Cloud
1. Disk operations
Reference: https://registry.terraform.io/providers/tencentcloudstack/tencentcloud/latest/docs/resources/instance
The default system disk is 50 GB. Setting system_disk_size = 100 changes the system disk to 100 GB, and a data_disks block is added for a 50 GB data disk.
1. Modify the cvm.tf configuration file
[root@zhangpeng terraform]# cat cvm.tf
resource "tencentcloud_instance" "cvm_almalinux" { instance_name = "cvm-almalinux" availability_zone = "ap-beijing-2" image_id = "img-q95tlc25" instance_type = "S2.MEDIUM2" system_disk_type = "CLOUD_PREMIUM" system_disk_size = 100 hostname = "cvm-almalinux" data_disks { data_disk_type = "CLOUD_PREMIUM" data_disk_size = 50 encrypt = false } security_groups = [ "${tencentcloud_security_group.sg_bj.id}" ] vpc_id = "${tencentcloud_vpc.vpc_bj.id}" subnet_id = "${tencentcloud_subnet.subnet_bj_02.id}" internet_max_bandwidth_out = 10 count = 1}
While reading the docs I also noticed the hostname argument, so I added hostname as well. There is still no public IP at this point; one step at a time!
2. terraform plan
3. terraform apply
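These are the standard Terraform workflow commands, run from the directory that contains the .tf files; a minimal example:

terraform plan                   # preview what will be created or changed
terraform apply                  # apply the plan; type yes when prompted
# terraform apply -auto-approve  # skips the interactive confirmation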
4. Verify
Log in to the Tencent Cloud console, find the corresponding CVM, and check the system disk and data disk.
At first glance everything looks fine, but a closer look at the details shows that the system disk was replaced and the server password was re-initialized:
So: the system disk expansion and the data disk attachment are done. However, I could not confirm whether the system disk was expanded in place or simply swapped out for a new disk. I'll dig into that later.
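One quick way to check from inside the instance is to compare block devices and filesystems; a minimal sketch (assuming the data disk shows up as /dev/vdb, the actual device name depends on the image and driver):

lsblk              # both the 100G system disk and the 50G data disk should be listed
df -h /            # check whether the root filesystem already reflects the enlarged system disk
fdisk -l /dev/vdb  # inspect the raw data disk before formatting and mounting it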
2. Create a public IP and bind it to the CVM
For a public IP, the Tencent Cloud console shows a public IP option under Cloud Virtual Machine; judging from the URL it is called an EIP. Searching the provider docs for eip: https://registry.terraform.io/providers/tencentcloudstack/tencentcloud/latest/docs/resources/eip — but I did not see how to wire it into the instance there. Searching the web, I found that a public IP can be enabled with allocate_public_ip = true. Reference: http://www.panooo.com/Terraform_On_TencentCloud
Let's do it this way for now; later I'll look into how to create a new EIP and then bind it to the CVM. The concrete steps are as follows:
1. Modify cvm.tf
cat cvm.tf
resource "tencentcloud_instance" "cvm_almalinux" { instance_name = "cvm-almalinux" availability_zone = "ap-beijing-2" image_id = "img-q95tlc25" instance_type = "S2.MEDIUM2" system_disk_type = "CLOUD_PREMIUM" system_disk_size = 100 hostname = "cvm-almalinux" allocate_public_ip = true data_disks { data_disk_type = "CLOUD_PREMIUM" data_disk_size = 50 encrypt = false } security_groups = [ "${tencentcloud_security_group.sg_bj.id}" ] vpc_id = "${tencentcloud_vpc.vpc_bj.id}" subnet_id = "${tencentcloud_subnet.subnet_bj_02.id}" internet_max_bandwidth_out = 10 count = 1}
This adds allocate_public_ip = true. It also became clear that internet_max_bandwidth_out = 10 is the setting that caps the outbound bandwidth.
2. terraform plan
3. terraform apply
4. Verify
Logging into the console confirms the instance now has a public IP.
But... does this thing get recreated every single time? I received another SMS saying a server had been created and a password generated... See the analysis and tests below!
3. Conclusions about CVM rebuilds
After the steps above the CVM has a public IP. First, SSH into the server and take a look:
The system disk and data disk were created successfully, and the hostname was set correctly!
Now create a throwaway file, then change the cvm.tf configuration to figure out under which circumstances the CVM gets rebuilt.
touch zhangpeng.txt
1. Change the public IP outbound bandwidth and see what happens
Change internet_max_bandwidth_out = 10 to internet_max_bandwidth_out = 15
cat cvm.tf
resource "tencentcloud_instance" "cvm_almalinux" { instance_name = "cvm-almalinux" availability_zone = "ap-beijing-2" image_id = "img-q95tlc25" instance_type = "S2.MEDIUM2" system_disk_type = "CLOUD_PREMIUM" system_disk_size = 100 hostname = "cvm-almalinux" allocate_public_ip = true data_disks { data_disk_type = "CLOUD_PREMIUM" data_disk_size = 50 encrypt = false } security_groups = [ "${tencentcloud_security_group.sg_bj.id}" ] vpc_id = "${tencentcloud_vpc.vpc_bj.id}" subnet_id = "${tencentcloud_subnet.subnet_bj_02.id}" internet_max_bandwidth_out = 15 count = 1}
As before: terraform plan and terraform apply
No CVM rebuild notification was received, the original password still works, and logging into the server shows zhangpeng.txt is still there. So changing the bandwidth setting does not trigger a CVM rebuild.
2. Change the system disk and data disk sizes
Both tests are covered together here. First, change the data disk size:
Change data_disk_size = 50 to data_disk_size = 100
cat cvm.tf
resource "tencentcloud_instance" "cvm_almalinux" { instance_name = "cvm-almalinux" availability_zone = "ap-beijing-2" image_id = "img-q95tlc25" instance_type = "S2.MEDIUM2" system_disk_type = "CLOUD_PREMIUM" system_disk_size = 100 hostname = "cvm-almalinux" allocate_public_ip = true data_disks { data_disk_type = "CLOUD_PREMIUM" data_disk_size = 100 encrypt = false } security_groups = [ "${tencentcloud_security_group.sg_bj.id}" ] vpc_id = "${tencentcloud_vpc.vpc_bj.id}" subnet_id = "${tencentcloud_subnet.subnet_bj_02.id}" internet_max_bandwidth_out = 15 count = 1}
As before: terraform plan and terraform apply
The instance was not rebuilt. The data disk was resized successfully and zhangpeng.txt is still there.
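Keep in mind that resizing the cloud disk does not grow the filesystem inside the OS by itself; a minimal sketch of the follow-up steps (assuming the data disk is /dev/vdb with an ext4 filesystem created directly on the device, no partition table):

lsblk /dev/vdb        # confirm the device now reports the new 100G size
resize2fs /dev/vdb    # grow the ext4 filesystem to fill the resized disk
# if the disk is partitioned, grow the partition first, e.g.:
# growpart /dev/vdb 1 && resize2fs /dev/vdb1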
Next, try changing the system disk:
Change system_disk_size = 100 to system_disk_size = 150
cat cvm.tf
resource "tencentcloud_instance" "cvm_almalinux" { instance_name = "cvm-almalinux" availability_zone = "ap-beijing-2" image_id = "img-q95tlc25" instance_type = "S2.MEDIUM2" system_disk_type = "CLOUD_PREMIUM" system_disk_size = 150 hostname = "cvm-almalinux" allocate_public_ip = true data_disks { data_disk_type = "CLOUD_PREMIUM" data_disk_size = 100 encrypt = false } security_groups = [ "${tencentcloud_security_group.sg_bj.id}" ] vpc_id = "${tencentcloud_vpc.vpc_bj.id}" subnet_id = "${tencentcloud_subnet.subnet_bj_02.id}" internet_max_bandwidth_out = 15 count = 1}
As before: terraform plan and terraform apply
Still no CVM rebuild. Why? Every change so far only resized something that already exists in the configuration; nothing was added or removed. So let's try adding another data disk!
3. Add a new data disk
cat cvm.tf
resource "tencentcloud_instance" "cvm_almalinux" { instance_name = "cvm-almalinux" availability_zone = "ap-beijing-2" image_id = "img-q95tlc25" instance_type = "S2.MEDIUM2" system_disk_type = "CLOUD_PREMIUM" system_disk_size = 150 hostname = "cvm-almalinux" allocate_public_ip = true data_disks { data_disk_type = "CLOUD_PREMIUM" data_disk_size = 100 encrypt = false } data_disks { data_disk_type = "CLOUD_PREMIUM" data_disk_size = 50 encrypt = false } security_groups = [ "${tencentcloud_security_group.sg_bj.id}" ] vpc_id = "${tencentcloud_vpc.vpc_bj.id}" subnet_id = "${tencentcloud_subnet.subnet_bj_02.id}" internet_max_bandwidth_out = 15 count = 1}
As before: terraform plan and terraform apply
From the plan output it looks like anything marked as replaced will be rebuilt......
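For reference, terraform plan flags such a change roughly like this (an illustrative sketch of the usual plan markers, not the exact output of this provider):

  # tencentcloud_instance.cvm_almalinux[0] must be replaced
-/+ resource "tencentcloud_instance" "cvm_almalinux" {
      ...
    }

Plan: 1 to add, 0 to change, 1 to destroy.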
4. Conclusion
It seems that adding or removing a configuration block triggers a rebuild? I asked Zeyang whether there is a way to avoid this, and it turns out I had misunderstood: declaring these things inside cvm.tf essentially counts as changing the instance's initialization. It is better to create data disks, load balancers, and so on as separate resources and then attach them to the corresponding CVM.
4. Points worth emphasizing
1. terraform destroy
This is also a good opportunity to try deleting the whole setup and then recreating it:
terraform destroy
2. Recreate the vpc, subnet, route, and cvm separately
Keep the other configuration files (vpc, subnet, route) unchanged and modify cvm.tf as follows:
cat cvm.tf
resource "tencentcloud_instance" "cvm_almalinux" { instance_name = "cvm-almalinux" availability_zone = "ap-beijing-2" image_id = "img-q95tlc25" instance_type = "S2.MEDIUM2" system_disk_type = "CLOUD_PREMIUM" system_disk_size = 50 hostname = "cvm-almalinux" security_groups = [ "${tencentcloud_security_group.sg_bj.id}" ] lifecycle { create_before_destroy = false } vpc_id = "${tencentcloud_vpc.vpc_bj.id}" subnet_id = "${tencentcloud_subnet.subnet_bj_02.id}"}
3. terraform plan and terraform apply
4. Add a separate EIP and bind it
1. Create an EIP (public IP)
Reference: https://registry.terraform.io/providers/tencentcloudstack/tencentcloud/latest/docs/resources/eip
[root@zhangpeng terraform]# cat eip.tf
resource "tencentcloud_eip" "cvm_almalinux_eip" { name = "cvm_almalinux_eip" internet_max_bandwidth_out = 10 internet_service_provider = "BGP" type = "EIP" internet_charge_type = "TRAFFIC_POSTPAID_BY_HOUR"}
2. Bind the EIP to the CVM
Reference: https://registry.terraform.io/providers/tencentcloudstack/tencentcloud/latest/docs/resources/eip_association
[root@zhangpeng terraform]# cat eip_association.tf
resource "tencentcloud_eip_association" "cvm_almalinux_association" { eip_id = "${tencentcloud_eip.cvm_almalinux_eip.id}" instance_id = "${tencentcloud_instance.cvm_almalinux.id}"}
3. terraform plan and terraform apply
I'll skip the screenshots here; let's go straight to the result!
Not sure why the bandwidth shows as 0 here, though?
Then try SSHing into the server:
[root@zhangpeng terraform]# ssh root@xxx.xxx.xxx.xxx
kex_exchange_identification: Connection closed by remote host
[root@zhangpeng terraform]# ssh root@xxx.xxx.xxx.xxx
ssh: connect to host root@xxx.xxx.xxx.xxx port 22: Connection timed out
[root@zhangpeng terraform]# ssh root@xxx.xxx.xxx.xxx
As expected, the bandwidth setting did not take effect!
But I thought I had set it correctly... For now, set the bandwidth manually, just to verify that a separately created EIP bound to the CVM actually works.
Try SSH again:
Login succeeded and nothing was rebuilt... That said, this also shows that the allocate_public_ip = true approach is the simpler one!
5. Also try creating a data disk separately and attaching it to the CVM
1. Create the data disk
Reference: https://registry.terraform.io/providers/tencentcloudstack/tencentcloud/latest/docs/resources/cbs_storage
[root@zhangpeng terraform]# cat cbs.tf
resource "tencentcloud_cbs_storage" "cvm_almalinux_storage" { storage_name = "cvm_almalinux" storage_type = "CLOUD_PREMIUM" storage_size = 100 availability_zone = "ap-beijing-2" project_id = 0 encrypt = false tags = { abc = "tf" }}
2. Attach the data disk to the CVM
Reference: https://registry.terraform.io/providers/tencentcloudstack/tencentcloud/latest/docs/resources/cbs_storage_attachment
[root@zhangpeng terraform]# cat cbs_attachment.tf
resource "tencentcloud_cbs_storage_attachment" "cvm_almalinux_attachment" { storage_id = "${tencentcloud_cbs_storage.cvm_almalinux_storage.id}" instance_id = "${tencentcloud_instance.cvm_almalinux.id}"}
3. terraform plan and terraform apply
Good, the server was not rebuilt... Log into the server and check that the data disk is there.
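To actually use the newly attached disk it still has to be formatted and mounted inside the OS; a minimal sketch (assuming it shows up as /dev/vdc, the device name depends on how many disks are already attached):

lsblk                                   # find the new 100G device, e.g. /dev/vdc
mkfs.ext4 /dev/vdc                      # create a filesystem (destroys any data on the disk)
mkdir -p /data && mount /dev/vdc /data  # mount it
echo '/dev/vdc /data ext4 defaults 0 0' >> /etc/fstab   # optional: mount on boot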
5. Going further: bind an ssh-key and log in to the server with it
Having learned from the earlier failures, this time the plan is to create a key pair as a separate resource and then bind it to the CVM.
Reference: https://registry.terraform.io/providers/tencentcloudstack/tencentcloud/latest/docs/resources/key_pair
1. Create the key_pair
resource "tencentcloud_key_pair" "ssh-key" { key_name = "ssh-key" public_key = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAAgQDjd8fTnp7Dcuj4mLaQxf9Zs/ORgUL9fQxRCNKkPgP1paTy1I513maMX126i36Lxxl3+FUB52oVbo/FgwlIfX8hyCnv8MCxqnuSDozf1CD0/wRYHcTWAtgHQHBPCC2nJtod6cVC3kB18KeV4U7zsxmwFeBIxojMOOmcOBuh7+trRw=="}
Note: I actually used the id_rsa.pub from my local environment; the key above is just the example from the official docs.
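Instead of pasting the key inline, the same resource could read the public key from a local file using Terraform's built-in file() function (a small sketch; the path is just an example):

resource "tencentcloud_key_pair" "ssh_key" {
  key_name   = "ssh-key"
  # assumption: the public key exists at this path on the machine running terraform
  public_key = "${file("/root/.ssh/id_rsa.pub")}"
}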
2. Add the key_pair to the CVM configuration
Add the key_name argument.
[root@zhangpeng terraform]# cat cvm.tf
resource "tencentcloud_instance" "cvm_almalinux" { instance_name = "cvm-almalinux" availability_zone = "ap-beijing-2" image_id = "img-q95tlc25" instance_type = "S2.MEDIUM2" system_disk_type = "CLOUD_PREMIUM" system_disk_size = 50 hostname = "cvm-almalinux" security_groups = [ "${tencentcloud_security_group.sg_bj.id}" ] lifecycle { create_before_destroy = false } key_name= "${tencentcloud_key_pair.ssh_key.id}" vpc_id = "${tencentcloud_vpc.vpc_bj.id}" subnet_id = "${tencentcloud_subnet.subnet_bj_02.id}"}
3. terraform plan and terraform apply
Verify via SSH: since the ssh-key belongs to my local zhangpeng user, the login attempt as root failed; after switching to the zhangpeng user, passwordless SSH login succeeded!
The CVM was not rebuilt either... Initial goal achieved!
Aside:
To sum up:
- For the public IP, setting allocate_public_ip = true when creating the CVM is still the more convenient option
- For adding data disks, or for binding an extra public IP, create the component as a separate resource and then use the corresponding attachment/association resource to bind it to the CVM.
- Binding an ssh-key does not rebuild the server
Next steps:
- How to organize the configuration files more elegantly?
- Use Terraform to install software on the CVM and manage the CVM
- Try managing other services with Terraform
Note: please forgive any typos... the Chinese input method on Rocky is brutal... I also tested expanding the disks; I'm not writing up the detailed steps here.