Background:

I deployed an application in a Kubernetes cluster and ran load tests against it. JMeter was pushing roughly 300 requests per second (about 18,000 requests per minute as collected in Elasticsearch). The logs showed nginx error log entries:


However, my CPU and memory were nowhere near saturated. Searching around, I found a blog post describing an environment essentially the same as mine, with php-fpm also listening on a unix socket:

See: http://www.bubuko.com/infodetail-3600189.html

Fixing the problem:

Modify net.core.somaxconn

Exec into my nginx-php container and check:

bash-5.0# cat /proc/sys/net/core/somaxconn
128


Then pick a random worker node and check the host's somaxconn:

root@ap-shanghai-k8s-node-1:~# cat /proc/sys/net/core/somaxconn
32768

Note: this is a TKE cluster; all parameters are the defaults and nothing has been modified.
Next, let's modify the application's manifest. The original:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: paper-miniprogram
spec:
  replicas: 1
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  selector:
    matchLabels:
      app: paper-miniprogram
  template:
    metadata:
      labels:
        app: paper-miniprogram
    spec:
      containers:
        - name: paper-miniprogram
          image: ccr.ccs.tencentyun.com/xxxx/paper-miniprogram:{data}
          ports:
            - containerPort: 80
          resources:
            requests:
              memory: "1024M"
              cpu: "1000m"
            limits:
              memory: "1024M"
              cpu: "1000m"
      imagePullSecrets:
        - name: tencent
---
apiVersion: v1
kind: Service
metadata:
  name: paper-miniprogram
  labels:
    app: paper-miniprogram
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: paper-miniprogram

Modified as follows, adding an initContainers section:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: paper-miniprogram
spec:
  replicas: 1
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  selector:
    matchLabels:
      app: paper-miniprogram
  template:
    metadata:
      labels:
        app: paper-miniprogram
    spec:
      containers:
        - name: paper-miniprogram
          image: ccr.ccs.tencentyun.com/xxxx/paper-miniprogram:{data}
          ports:
            - containerPort: 80
          resources:
            requests:
              memory: "1024M"
              cpu: "1000m"
            limits:
              memory: "1024M"
              cpu: "1000m"
      initContainers:
        - name: setsysctl
          image: busybox
          imagePullPolicy: Always
          command:
            - sh
            - -c
            - echo 1000 > /proc/sys/net/core/somaxconn
          securityContext:
            privileged: true
      imagePullSecrets:
        - name: tencent
---
apiVersion: v1
kind: Service
metadata:
  name: paper-miniprogram
  labels:
    app: paper-miniprogram
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: paper-miniprogram
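To verify the init container took effect, exec into the running pod and re-read the value (a quick check; the pod name below is a placeholder):

kubectl get pods -l app=paper-miniprogram
kubectl exec <paper-miniprogram-pod> -- cat /proc/sys/net/core/somaxconn
# expected: 1000

This works because net.core.somaxconn is scoped to the network namespace, which all containers in the pod share, so the value written by the init container persists for the application container.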

Modifying the php-fpm listen.backlog parameter

First, look at the kernel's backlog-related parameters, e.g. net.ipv4.tcp_max_syn_backlog:

cat /proc/sys/net/core/netdev_max_backlog
# OR
sysctl -a | grep backlog
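As a cross-check, the backlog actually in effect on a listening socket can be read with ss: for listening sockets, the Send-Q column reports the configured backlog (assuming ss is available in the container; the unix-socket path where php-fpm listens is my assumption):

ss -ltn   # TCP listeners: Send-Q = configured backlog
ss -lxn   # unix domain sockets, e.g. a php-fpm socket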


Then check the listen.backlog setting in the php-fpm config:

It's 511, so let's leave it at 511 for now and not change it. Besides, if I changed it, wouldn't I also need privileged mode to bump net.ipv4.tcp_max_syn_backlog inside the container?
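For reference, if I did raise it later, the change lives in the pool configuration. A minimal sketch, assuming the common www.conf location (511 is php-fpm's default):

; /usr/local/etc/php-fpm.d/www.conf (path varies by image)
[www]
listen = /var/run/php-fpm.sock
; pending-connection queue for the listen socket;
; the kernel silently caps this at net.core.somaxconn
listen.backlog = 1024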

The official docs on sysctl

The Kubernetes documentation explains how to use sysctls: https://kubernetes.io/zh/docs/tasks/administer-cluster/sysctl-cluster/
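Per that page, the cleaner alternative to a privileged init container is setting the sysctl through the pod's securityContext. The catch: net.core.somaxconn is namespaced but not on the "safe" list, so each node's kubelet must allowlist it first. A sketch of the documented mechanism (the pod name and value are placeholders):

# kubelet on every target node must be started with:
#   --allowed-unsafe-sysctls=net.core.somaxconn
apiVersion: v1
kind: Pod
metadata:
  name: somaxconn-demo   # hypothetical name, for illustration
spec:
  securityContext:
    sysctls:
      - name: net.core.somaxconn
        value: "1024"
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "cat /proc/sys/net/core/somaxconn && sleep 3600"]

The same securityContext block works inside a Deployment's pod template, so the privileged init container above could be dropped entirely.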

That said, the privileged init container approach has its after-effects:

Personally, I feel privileged mode introduces security risks, and I'd rather not enable it on pods.

Approaches I consider better:

  1. The Grafana dashboards show the pod's resource utilization still isn't that high, so tune the resource requests/limits sensibly.
  2. Enable HPA for horizontal autoscaling (a sketch follows this list).
  3. All in all, I'd rather keep the default net.core.somaxconn=128 and rely on a higher replica count to absorb the load; that also fits the container way of thinking.
  4. The key point: the common belief that adding more resources alone raises the concurrent load capacity is wrong; the parameters deserve tuning as well.
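A minimal HPA sketch for this workload (my own illustration: it assumes metrics-server is installed and a cluster recent enough to serve autoscaling/v2; the replica bounds and CPU target are placeholders):

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: paper-miniprogram
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: paper-miniprogram
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70

CPU-utilization scaling only works when the Deployment sets CPU requests, which the manifest above already does.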

About php-fpm: unix socket vs TCP


See this Zhihu post: https://zhuanlan.zhihu.com/p/83958307
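The gist of that comparison: a unix socket skips the loopback TCP stack and is marginally faster, while TCP tends to hold up more gracefully under high concurrency. Switching between the two is a matching change on both sides; a sketch, with paths and the pool name depending on the image:

; php-fpm pool config
; listen = /var/run/php-fpm.sock   ; unix socket
listen = 127.0.0.1:9000            ; TCP loopback

# corresponding nginx location block
location ~ \.php$ {
    # fastcgi_pass unix:/var/run/php-fpm.sock;
    fastcgi_pass 127.0.0.1:9000;
    include fastcgi_params;
}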

Some configurations for reference:

https://github.com/gaoxt/blog/issues/9
https://blog.csdn.net/pcyph/article/details/46513521