Author: 李镇伟

Playwright: dragging elements and collecting sets of elements

Background

I use these two features all the time, yet there is very little copy-paste-ready code for them online, so this article records mine.

Dragging an element down by 100px

Drag demo

Explanation: first locate the element to drag, move the mouse to the element's center point, then press the mouse button down, move the mouse, and release it.

src_elem = page.locator("xpath=<element locator>")
box = src_elem.bounding_box(timeout=60000)
page.mouse.move(box["x"] + box["width"] / 2, box["y"] + box["height"] / 2)
page.mouse.down()
page.mouse.move(box["x"] + box["width"] / 2, box["y"] + box["height"] / 2 + 100)
page.mouse.up()

Dragging an element onto another element

This approach is also useful, e.g. for dropping a button onto a drawing board.

src_elem = page.locator("xpath=<source element>")
src_elem.drag_to(page.locator("xpath=<target element>"))

Getting a batch of elements and screenshotting each one

Our page shows a batch of images in a waterfall layout, all sharing the same CSS class. We need to screenshot every element, scrolling the mouse wheel down a little after each screenshot.

import time

elements = page.query_selector_all(".search-result__item")
number = 1
for item in elements:
    page.mouse.wheel(0, 100)
    item.screenshot(path="image/" + str(number) + ".png")
    number = number + 1
    time.sleep(1)
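
query_selector_all returns a static list of element handles; newer Playwright versions can do the same job with the locator API. A minimal sketch under that assumption, using the same selector as above:

import time

items = page.locator(".search-result__item")
for number in range(items.count()):
    page.mouse.wheel(0, 100)
    # nth() picks the element at the given index
    items.nth(number).screenshot(path="image/" + str(number + 1) + ".png")
    time.sleep(1)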

Analyzing nginx logs with a Loki + Promtail + Grafana stack

0. Prerequisites

1. You have read this article: https://www.yinyubo.com/2022/03/14/nginx%E5%8A%A8%E6%80%81%E5%A2%9E%E5%8A%A0%E6%A8%A1%E5%9D%97ngx_http_geoip2_module

and geoip2 is already installed in nginx

2. docker and docker-compose are installed on the machine

1. Adjust nginx's access log format

Edit /etc/nginx/nginx.conf, using the following as a reference

...
load_module modules/ngx_http_geoip2_module.so;
...
http {
    include       /etc/nginx/mime.types;
    geoip2 /home/lzw/GeoLite2-Country_20220222/GeoLite2-Country.mmdb {
        auto_reload 5m;
        $geoip2_metadata_country_build metadata build_epoch;
        $geoip2_data_country_code default=CN source=$remote_addr country iso_code;
        $geoip2_data_country_name country names en;
    }
    geoip2 /home/lzw/GeoLite2-City_20220222/GeoLite2-City.mmdb {
        $geoip2_data_city_name default=Nanjing city names en;
 
    }
    vhost_traffic_status_zone;
    vhost_traffic_status_filter_by_set_key $geoip2_data_country_code country::*;
    log_format json_analytics escape=json '{'
                            '"msec": "$msec", ' # request unixtime in seconds with a milliseconds resolution
                            '"connection": "$connection", ' # connection serial number
                            '"connection_requests": "$connection_requests", ' # number of requests made in connection
                    '"pid": "$pid", ' # process pid
                    '"request_id": "$request_id", ' # the unique request id
                    '"request_length": "$request_length", ' # request length (including headers and body)
                    '"remote_addr": "$remote_addr", ' # client IP
                    '"remote_user": "$remote_user", ' # client HTTP username
                    '"remote_port": "$remote_port", ' # client port
                    '"time_local": "$time_local", '
                    '"time_iso8601": "$time_iso8601", ' # local time in the ISO 8601 standard format
                    '"request": "$request", ' # full path no arguments if the request
                    '"request_uri": "$request_uri", ' # full path and arguments if the request
                    '"args": "$args", ' # args
                    '"status": "$status", ' # response status code
                    '"body_bytes_sent": "$body_bytes_sent", ' # the number of body bytes exclude headers sent to a client
                    '"bytes_sent": "$bytes_sent", ' # the number of bytes sent to a client
                    '"http_referer": "$http_referer", ' # HTTP referer
                    '"http_user_agent": "$http_user_agent", ' # user agent
                    '"http_x_forwarded_for": "$http_x_forwarded_for", ' # http_x_forwarded_for
                    '"http_host": "$http_host", ' # the request Host: header
                    '"server_name": "$server_name", ' # the name of the vhost serving the request
                    '"request_time": "$request_time", ' # request processing time in seconds with msec resolution
                    '"upstream": "$upstream_addr", ' # upstream backend server for proxied requests
                    '"upstream_connect_time": "$upstream_connect_time", ' # upstream handshake time incl. TLS
                    '"upstream_header_time": "$upstream_header_time", ' # time spent receiving upstream headers
                    '"upstream_response_time": "$upstream_response_time", ' # time spend receiving upstream body
                    '"upstream_response_length": "$upstream_response_length", ' # upstream response length
                    '"upstream_cache_status": "$upstream_cache_status", ' # cache HIT/MISS where applicable
                    '"ssl_protocol": "$ssl_protocol", ' # TLS protocol
                    '"ssl_cipher": "$ssl_cipher", ' # TLS cipher
                    '"scheme": "$scheme", ' # http or https
                    '"request_method": "$request_method", ' # request method
                    '"server_protocol": "$server_protocol", ' # request protocol, like HTTP/1.1 or HTTP/2.0
                    '"pipe": "$pipe", ' # "p" if request was pipelined, "." otherwise
                    '"gzip_ratio": "$gzip_ratio", '
                    '"http_cf_ray": "$http_cf_ray",'
                    '"geoip_country_code": "$geoip2_data_country_code"'
                    '}';
    access_log   /var/log/nginx/json_access.log json_analytics;

2. Install Loki + Promtail + Grafana

1. Write the docker-compose file with the following content

version: "3"
 
networks:
  loki:
 
services:
  loki:
    image: grafana/loki:2.4.1
    ports:
      - "3100:3100"
    volumes:
      - /home/lzw/loki/loki-conf:/etc/loki
    command: -config.file=/etc/loki/local-config.yaml
    networks:
      - loki
 
  promtail:
    image: grafana/promtail:2.4.1
    volumes:
      - /home/lzw/loki/promtail-conf:/etc/promtail
      - /var/log/nginx:/var/log/nginx
    command: -config.file=/etc/promtail/config.yml
    networks:
      - loki
 
  grafana:
    image: grafana/grafana:latest
    volumes:
      - /home/lzw/loki/grafana:/var/lib/grafana
    ports:
      - "3000:3000"
    networks:
      - loki

2. Write the loki-conf/local-config.yaml configuration file

auth_enabled: false
 
server:
  http_listen_port: 3100
 
common:
  path_prefix: /loki
  storage:
    filesystem:
      chunks_directory: /loki/chunks
      rules_directory: /loki/rules
  replication_factor: 1
  ring:
    instance_addr: 127.0.0.1
    kvstore:
      store: inmemory
 
schema_config:
  configs:
    - from: 2020-10-24
      store: boltdb-shipper
      object_store: filesystem
      schema: v11
      index:
        prefix: index_
        period: 24h
 
ruler:
  alertmanager_url: http://localhost:9093

3. Write the promtail-conf/config.yml file

server:
  http_listen_port: 9080
  grpc_listen_port: 0
 
positions:
  filename: /tmp/positions.yaml
 
clients:
  - url: http://loki:3100/loki/api/v1/push
 
scrape_configs:
- job_name: nginx
  static_configs:
  - targets:
      - localhost
    labels:
      job: nginx
      agent: promtail
      __path__: /var/log/nginx/json_access.log
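
The dashboard used later parses the JSON at query time, so no ingest pipeline is strictly needed. But if you want a low-cardinality field such as status promoted to a label at ingest, promtail's pipeline_stages can do that; a sketch, appended under the nginx job above (avoid high-cardinality fields like remote_addr):

  pipeline_stages:
  - json:
      expressions:
        status: status
  - labels:
      status: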

4. Prepare the grafana mount directory

# start a throwaway grafana container, copy its data directory out, then remove it
docker run -d -p 3002:3000 --name=grafana2 grafana/grafana:latest
docker cp grafana2:/var/lib/grafana /home/lzw/loki/.
docker rm -f grafana2
# 472 is the UID the grafana container runs as
sudo chown -R 472 /home/lzw/loki/grafana

5. Run docker-compose

docker-compose -f docker-compose.yaml up -d

Once everything is up, Grafana is reachable on port 3000 and Loki on port 3100; the nginx log files are mounted into promtail via a volume.

3. Configure dashboards in Grafana

1. Configure a Loki data source pointing at http://loki:3100

2. Import the official dashboard template: https://grafana.com/grafana/dashboards/12559

3. After importing, the dashboard should look similar to the screenshot below; a few sample LogQL queries for sanity-checking follow.
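
Before relying on the dashboard, you can sanity-check the data in Grafana's Explore view with a few LogQL queries; a sketch, assuming the job label from the promtail config above.

All nginx access lines, parsed as JSON:

{job="nginx"} | json

Only 4xx/5xx responses:

{job="nginx"} | json | status=~"4..|5.."

Request rate per country over five minutes:

sum by (geoip_country_code) (rate({job="nginx"} | json [5m]))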

nginx: dynamically adding the ngx_http_geoip2_module

0. Prerequisites:

1. You have read my other article on dynamically adding the nginx-module-vts module, and nginx plus its source code are already installed on the server. https://www.yinyubo.com/2022/03/14/apt%e6%96%b9%e5%bc%8f%e5%ae%89%e8%a3%85nginx%e4%bb%a5%e5%8f%8a%e5%8a%a8%e6%80%81%e5%a2%9e%e5%8a%a0%e6%a8%a1%e5%9d%97nginx-module-vts/

2. You have downloaded the GeoLite2-Country.mmdb file from the GeoLite2 website (registration is required to download)

3. GitHub repositories worth consulting:

https://github.com/leev/ngx_http_geoip2_module

https://github.com/maxmind/libmaxminddb

1. Install libmaxminddb


sudo add-apt-repository ppa:maxmind/ppa
sudo apt update
sudo apt install libmaxminddb0 libmaxminddb-dev mmdb-bin

2. Download the ngx_http_geoip2_module source and compile it as a dynamic module


# download the ngx_http_geoip2_module source
git clone https://github.com/leev/ngx_http_geoip2_module.git
# cd into the nginx source directory, e.g.
cd nginx-1.20.2/
# build the dynamic module
./configure --with-compat --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --add-dynamic-module=../ngx_http_geoip2_module --with-stream
make modules
sudo cp objs/ngx_http_geoip2_module.so /etc/nginx/modules/.

3. Reference geoip2 in the nginx configuration

As an example, I'll add geoip2 region resolution alongside nginx-module-vts; edit nginx.conf:

...
load_module modules/ngx_http_vhost_traffic_status_module.so;
load_module modules/ngx_http_geoip2_module.so;
...
http {
    ...
    geoip2 /home/lzw/GeoLite2-Country_20220222/GeoLite2-Country.mmdb {
        auto_reload 5m;
        $geoip2_metadata_country_build metadata build_epoch;
        $geoip2_data_country_code default=CN source=$remote_addr country iso_code;
        $geoip2_data_country_name country names en;
    }
    geoip2 /home/lzw/GeoLite2-City_20220222/GeoLite2-City.mmdb {
        $geoip2_data_city_name default=Nanjing city names en;
        $geoip2_data_latitude location latitude;
        $geoip2_data_longitude location longitude;
        $geoip2_data_postalcode postal code;
    }
    default_type  application/octet-stream;
    vhost_traffic_status_zone;
    vhost_traffic_status_filter_by_set_key $geoip2_data_country_code country::*;
    vhost_traffic_status_filter_by_set_key $geoip2_data_city_name city::*;
    vhost_traffic_status_filter_by_set_key "$geoip2_data_latitude,$geoip2_data_longitude" latlong::*;
    vhost_traffic_status_filter_by_set_key $geoip2_data_longitude longitude::*;
    vhost_traffic_status_filter_by_set_key $geoip2_data_latitude latitude::*;
    vhost_traffic_status_filter_by_set_key $geoip2_data_postalcode postal::*;
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';
 
    access_log  /var/log/nginx/access.log  main;
    ...
}

In every conf file being monitored, add vhost_traffic_status_filter_by_set_key entries; for example, below I add six filters


server {
    listen       5320;
    server_name  localhost;    
    vhost_traffic_status_filter_by_set_key $geoip2_data_country_code country::$server_name;
    vhost_traffic_status_filter_by_set_key $geoip2_data_city_name city::$server_name;
    vhost_traffic_status_filter_by_set_key "$geoip2_data_latitude,$geoip2_data_longitude" latlong::$server_name;
    vhost_traffic_status_filter_by_set_key $geoip2_data_longitude longitude::$server_name;
    vhost_traffic_status_filter_by_set_key $geoip2_data_latitude latitude::$server_name;
    vhost_traffic_status_filter_by_set_key $geoip2_data_postalcode postal::$server_name;  
 
    location /status {
        vhost_traffic_status_display;
        vhost_traffic_status_display_format html;
        root   /usr/share/nginx/html;
        index  index.html index.htm;
    }
}
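
To sanity-check that the geoip2 variables resolve, one option is to echo one of them back in a response header. A minimal sketch, not part of the original config; the /whoami location and port 5321 are made up for illustration:

server {
    listen       5321;
    server_name  localhost;
    location /whoami {
        add_header X-Country-Code $geoip2_data_country_code always;
        return 204;
    }
}

curl -i http://<server-ip>:5321/whoami should then show the X-Country-Code header.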

Installing nginx via apt and dynamically adding the nginx-module-vts module

0. Background

Many people first install the stable nginx via apt. When they later need a third-party module such as geoip or nginx-module-vts, they can add it dynamically using the approach in this article.

1. Install nginx (skip if already installed)

# install the required tools
sudo apt install curl gnupg2 ca-certificates lsb-release ubuntu-keyring

# import the official nginx signing key
curl https://nginx.org/keys/nginx_signing.key | gpg --dearmor \
    | sudo tee /usr/share/keyrings/nginx-archive-keyring.gpg >/dev/null

# verify that the downloaded file contains the correct key
gpg --dry-run --quiet --import --import-options import-show /usr/share/keyrings/nginx-archive-keyring.gpg

# set up the stable nginx repository
echo "deb [signed-by=/usr/share/keyrings/nginx-archive-keyring.gpg] \
http://nginx.org/packages/ubuntu `lsb_release -cs` nginx" \
    | sudo tee /etc/apt/sources.list.d/nginx.list

# install nginx via apt
sudo apt update
sudo apt install nginx

2. Confirm the installed nginx version

Run nginx -V and check the output


nginx version: nginx/1.20.2
built by gcc 9.3.0 (Ubuntu 9.3.0-10ubuntu2)
built with OpenSSL 1.1.1f  31 Mar 2020
TLS SNI support enabled
configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --modules-path=/usr/lib/nginx/modules --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-compat --with-file-aio --with-threads --with-http_addition_module --with-http_auth_request_module --with-http_dav_module --with-http_flv_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_mp4_module --with-http_random_index_module --with-http_realip_module --with-http_secure_link_module --with-http_slice_module --with-http_ssl_module --with-http_stub_status_module --with-http_sub_module --with-http_v2_module --with-mail --with-mail_ssl_module --with-stream --with-stream_realip_module --with-stream_ssl_module --with-stream_ssl_preread_module --with-cc-opt='-g -O2 -fdebug-prefix-map=/data/builder/debuild/nginx-1.20.2/debian/debuild-base/nginx-1.20.2=. -fstack-protector-strong -Wformat -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -fPIC' --with-ld-opt='-Wl,-Bsymbolic-functions -Wl,-z,relro -Wl,-z,now -Wl,--as-needed -pie'

The output shows nginx's configure prefix and binary path, which are reused in the dynamic build later:

--prefix=/etc/nginx --sbin-path=/usr/sbin/nginx

3. Download the nginx source code

Since our nginx was installed via apt, there is no source code locally, and compiling a dynamic module requires the source, so we download it here.

Go to http://nginx.org/en/download.html and download the tar.gz matching the apt-installed version; here that is 1.20.2

Extract the nginx source:

tar -zxvf nginx-1.20.2.tar.gz

4. Download the third-party module nginx-module-vts

git clone https://github.com/vozlt/nginx-module-vts.git

After downloading, the two directories should be siblings, e.g.:

.
|---nginx-1.20.2
|---nginx-module-vts

5. Compile

Install the libraries needed for the build

sudo apt install g++ gcc libpcre3 libpcre3-dev zlib1g-dev openssl libssl-dev make

Enter the nginx-1.20.2 directory

cd nginx-1.20.2

Run the configure script. After it finishes, make modules generates ngx_http_vhost_traffic_status_module.so under objs; copy that file to /etc/nginx/modules/

./configure --with-compat --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --add-dynamic-module=../nginx-module-vts
make modules
sudo cp objs/ngx_http_vhost_traffic_status_module.so /etc/nginx/modules/.

6. Load the third-party module nginx-module-vts in nginx

Edit the nginx.conf file.

Add load_module modules/ngx_http_vhost_traffic_status_module.so; at the top of the file.

Add vhost_traffic_status_zone; inside the http block.

cd /etc/nginx/
sudo nano nginx.conf
#  file content below
...
load_module modules/ngx_http_vhost_traffic_status_module.so;
...
http {
    vhost_traffic_status_zone;
}

Add a monitor.conf under /etc/nginx/conf.d; the key lines are vhost_traffic_status_display; and vhost_traffic_status_display_format html;. For reference:

server {
    listen       5320;
    server_name  localhost;
 
    #access_log  /var/log/nginx/host.access.log  main;
 
    location /status {
        vhost_traffic_status_display;
        vhost_traffic_status_display_format html;
        root   /usr/share/nginx/html;
        index  index.html index.htm;
    }
}

When done, run sudo systemctl restart nginx to restart nginx

Visit the port from monitor.conf, http://<server-ip>:5320/status, to see the traffic information
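
nginx-module-vts also serves the same data in machine-readable form under the display location's /format/json sub-path, which is handy for scripting; for the server above:

curl http://<server-ip>:5320/status/format/json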

A one-article introduction to K8S secondary development

Background

Tutorials for K8S and its surrounding tools are everywhere, e.g. how to use HELM or droneCI, but very few articles cover building on top of K8S itself. This article uses Python and Vue to do exactly that: a simple page that queries k8s pod and node information.


Result

The frontend page calls the Python backend API to query the cluster's current node status and application status


Topics involved

Topic: what it is used for

- python-sanic: provides the API for the frontend
- python-kubernetes: accesses k8s to fetch pod and node resource information
- nodejs-vue: the frontend framework
- nodejs-element-UI: UI components; the icon and table components are used
- k8s-helm: the app ultimately runs inside K8S, so a helm chart is needed, including rbac, svc and deployment files
- docker: building the frontend and backend images

User story


Backend Python code walkthrough

main.py, the main entry point

#main.py
from kubernetes import client, config
from sanic import Sanic
from sanic.response import json

from cors import add_cors_headers
from options import setup_options

# a sanic app must have a name
app = Sanic("backend")
# for local debugging, copy the kubeconfig to this machine's ~/.kube/config and use load_kube_config;
# inside a K8S cluster, use load_incluster_config
# config.load_kube_config()
config.load_incluster_config()


def check_node_status(receiver):
    '''
    Check whether each node condition matches expectations:
    1 if it does, 0 otherwise.
    '''
    # expected values
    expect = {"NetworkUnavailable": "False",
              "MemoryPressure": "False",
              "DiskPressure": "False",
              "PIDPressure": "False",
              "Ready": "True"
              }
    result_dict = {}
    for (key, value) in receiver.items():
        # compare the value reported by k8s with the expected value
        if expect[key] == value:
            result_dict[key] = 1
        else:
            result_dict[key] = 0
    return result_dict


@app.route("/api/node")
async def node(request):
    result = []
    v1 = client.CoreV1Api()
    node_rest = v1.list_node_with_http_info()
    for i in node_rest[0].items:
        computer_ip = i.status.addresses[0].address
        computer_name = i.status.addresses[1].address
        # 先获得节点的IP和名字
        info = {"computer_ip": computer_ip, "computer_name": computer_name}
        status_json = {}
        # 节点有多个状态,把所有状态查出来,存入json里
        # 这里有一个flannel插件的坑,及时节点关机了,NetworkUnavailable查出来还是False
        for node_condition in i.status.conditions:
            status_json[node_condition.type] = node_condition.status
        check_dict = check_node_status(status_json)
        # 把节点的状态加入节点信息json里
        info.update(check_dict)
        # 把每一个节点的查询结果加入list里,返回给前端
        result.append(info)
    return json(result)


@app.route("/api/pod")
async def pod(request):
    '''
    接口名是pod,其实是检查所有的deployment,statefulset,daemonset的副本状态
    通过这些状态判断当前的程序是否正常工作
    '''
    pod_list = []
    apis_api = client.AppsV1Api()
    # 检查deployment信息
    resp = apis_api.list_deployment_for_all_namespaces()
    for i in resp.items:
        pod_name = i.metadata.name
        pod_namespace = i.metadata.namespace
        pod_unavailable_replicas = i.status.unavailable_replicas
        # 不可用副本状态为None表示没有不可用的副本,程序正常
        if pod_unavailable_replicas == None:
            pod_status = 1
        else:
            pod_status = 0
        pod_json = {"pod_namespace": pod_namespace, "pod_name": pod_name, "pod_status": pod_status}
        pod_list.append(pod_json)
    # 检查stateful_set信息
    resp_stateful = apis_api.list_stateful_set_for_all_namespaces()
    for i in resp_stateful.items:
        pod_name = i.metadata.name
        pod_namespace = i.metadata.namespace
        # 正常工作的副本数量,等于期望的副本数量时,表明程序是可用的
        if i.status.ready_replicas == i.status.replicas:
            pod_status = 1
        else:
            pod_status = 0
        pod_json = {"pod_namespace": pod_namespace, "pod_name": pod_name, "pod_status": pod_status}
        pod_list.append(pod_json)
    # 检查daemonset信息
    resp_daemonset = apis_api.list_daemon_set_for_all_namespaces()
    for i in resp_daemonset.items:
        pod_name = i.metadata.name
        pod_namespace = i.metadata.namespace
        # 不可用副本状态为None表示没有不可用的副本,程序正常
        if i.status.number_unavailable == None:
            pod_status = 1
        else:
            pod_status = 0
        pod_json = {"pod_namespace": pod_namespace, "pod_name": pod_name, "pod_status": pod_status}
        pod_list.append(pod_json)
    return json(pod_list)


# Add OPTIONS handlers to any route that is missing it
app.register_listener(setup_options, "before_server_start")

# Fill in CORS headers
app.register_middleware(add_cors_headers, "response")
if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)
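
For a local smoke test you can flip the two config lines near the top (use load_kube_config instead of load_incluster_config) and run the service directly; a sketch, assuming a reachable cluster in ~/.kube/config:

python3 main.py
# in another shell:
curl http://127.0.0.1:8000/api/node
curl http://127.0.0.1:8000/api/pod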

cors.py handles CORS, mainly for convenient local debugging. It is not needed when deployed to K8S with my helm chart, because nginx reverse-proxies the backend onto the same origin

#cors.py
from typing import Iterable


def _add_cors_headers(response, methods: Iterable[str]) -> None:
    '''
    To keep testing simple, I set Access-Control-Allow-Origin to *.
    This is unnecessary when the image is deployed with my helm chart,
    because nginx puts the backend and frontend on the same origin.
    '''
    allow_methods = list(set(methods))
    if "OPTIONS" not in allow_methods:
        allow_methods.append("OPTIONS")
    headers = {
        "Access-Control-Allow-Methods": ",".join(allow_methods),
        "Access-Control-Allow-Origin": "*",
        "Access-Control-Allow-Credentials": "true",
        "Access-Control-Allow-Headers": (
            "origin, content-type, accept, "
            "authorization, x-xsrf-token, x-request-id"
        ),
    }
    response.headers.extend(headers)


def add_cors_headers(request, response):
    if request.method != "OPTIONS":
        methods = [method for method in request.route.methods]
        _add_cors_headers(response, methods)

options.py, used together with cors.py above

# options.py
from collections import defaultdict
from typing import Dict, FrozenSet

from sanic import Sanic, response
from sanic.router import Route

from cors import _add_cors_headers


def _compile_routes_needing_options(
        routes: Dict[str, Route]
) -> Dict[str, FrozenSet]:
    needs_options = defaultdict(list)
    # This is 21.12 and later. You will need to change this for older versions.
    for route in routes.values():
        if "OPTIONS" not in route.methods:
            needs_options[route.uri].extend(route.methods)

    return {
        uri: frozenset(methods) for uri, methods in dict(needs_options).items()
    }


def _options_wrapper(handler, methods):
    def wrapped_handler(request, *args, **kwargs):
        nonlocal methods
        return handler(request, methods)

    return wrapped_handler


async def options_handler(request, methods) -> response.HTTPResponse:
    resp = response.empty()
    _add_cors_headers(resp, methods)
    return resp


def setup_options(app: Sanic, _):
    app.router.reset()
    needs_options = _compile_routes_needing_options(app.router.routes_all)
    for uri, methods in needs_options.items():
        app.add_route(
            _options_wrapper(options_handler, methods),
            uri,
            methods=["OPTIONS"],
        )
    app.router.finalize()

requirements.txt pins the Python packages used

aiofiles==0.8.0
cachetools==4.2.4
certifi==2021.10.8
charset-normalizer==2.0.10
google-auth==2.3.3
httptools==0.3.0
idna==3.3
Jinja2==3.0.3
kubernetes==21.7.0
MarkupSafe==2.0.1
multidict==5.2.0
oauthlib==3.1.1
pyasn1==0.4.8
pyasn1-modules==0.2.8
python-dateutil==2.8.2
PyYAML==6.0
requests==2.27.1
requests-oauthlib==1.3.0
rsa==4.8
sanic==21.12.1
sanic-ext==21.12.3
sanic-routing==0.7.2
six==1.16.0
urllib3==1.26.8
websocket-client==1.2.3
websockets==10.1

Dockerfile: build the backend image with docker build -t k8s-backend .

FROM python:3.9
ADD . .
RUN pip install -r /requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple
WORKDIR .
CMD ["python3","main.py"]

Frontend code walkthrough

Essentially just two files: App.vue and main.ts

Installing nodejs and vue is omitted here; create a Vue 3 project with the commands below

# vue installation omitted; create a vue project and choose the typescript preset
vue create k8s-frontend
# install the vue3 version of element-ui
npm install element-plus --save
# install axios via npm, used to call the backend
npm i axios -S

main.ts, the entry point

import { createApp } from 'vue'
import ElementPlus from 'element-plus'
import 'element-plus/dist/index.css'
import App from './App.vue'
const app = createApp(App)

app.use(ElementPlus)

app.mount('#app')

App.vue; to keep the demo simple, all the functionality lives in this single vue file

<template>

  <h2>服务器信息</h2>
  <el-table 
    :data="tableData" style="width: 100%">
    <el-table-column prop="computer_ip" label="IP地址" width="180" />
    <el-table-column prop="computer_name" label="服务器名字" width="180" />
    <el-icon><check /></el-icon>
    <el-table-column label="网络插件" width="100">
      <template #default="scope">
        <el-icon :size="20">
          <check class="check" v-if="scope.row.NetworkUnavailable ==1" />
          <close class="close" v-else />
        </el-icon>
      </template>
    </el-table-column>
    <el-table-column label="内存压力" width="100">
      <template #default="scope">
        <el-icon :size="20">
          <check class="check" v-if="scope.row.MemoryPressure ==1" />
          <close class="close" v-else />
        </el-icon>
      </template>
    </el-table-column>
    <el-table-column label="硬盘压力" width="100">
      <template #default="scope">
        <el-icon :size="20">
          <check class="check" v-if="scope.row.DiskPressure ==1" />
          <close class="close" v-else />
        </el-icon>
      </template>
    </el-table-column>
    <el-table-column label="进程压力" width="100">
      <template #default="scope">
        <el-icon :size="20">
          <check class="check" v-if="scope.row.PIDPressure ==1" />
          <close class="close" v-else />
        </el-icon>
      </template>
    </el-table-column>
    <el-table-column label="K3S状态" width="100">
      <template #default="scope">
        <el-icon :size="20">
          <check class="check" v-if="scope.row.Ready ==1" />
          <close class="close" v-else />
        </el-icon>
      </template>
    </el-table-column>
  </el-table>
  <el-divider></el-divider>
  <h2>应用程序信息</h2>
  <el-table
    :data="podData"
    style="width: 100%"
    :default-sort="{ prop: 'pod_status', order: 'ascending' }"
  >
    <el-table-column prop="pod_namespace" sortable  label="命名空间" width="180" />
    <el-table-column prop="pod_name" sortable label="应用名字" width="180" />
    <el-icon><check /></el-icon>
    <el-table-column prop="pod_status" label="是否正常" sortable width="100">
      <template #default="scope">
        <el-icon :size="20">
          <check class="check" v-if="scope.row.pod_status ==1" />
          <close class="close" v-else />
        </el-icon>
      </template>
    </el-table-column>
  </el-table>
  <el-divider></el-divider>
</template>

<script lang="ts" >
import { Options, Vue } from 'vue-class-component';
import { Check, Close } from '@element-plus/icons-vue';
import axios from 'axios'

@Options({
    // Vue component options are configured here
    components: {
        Check,
        Close
    },
    data() {
        return {
          podData: [],
          tableData: [],
        }
    },
    mounted() {
      this.pod();
      this.show();
    },
    methods: {
        say(){
          console.log("say");
        },
        pod(){
          const path = "http://127.0.0.1:8000/api/pod";
          // for local debugging; on the server a relative path is used instead
          // const path = "http://127.0.0.1:8000/node";
          // be sure to use an arrow function so that `this` resolves correctly,
          // otherwise you'll get an error like "id not found"
          axios.get(path).then((response) => {
            this.podData = response.data;
          });
        },
        show() {
        const path = "http://127.0.0.1:8000/api/node";
        // for local debugging; on the server a relative path is used instead
        // const path = "http://127.0.0.1:8000/node";
        // be sure to use an arrow function so that `this` resolves correctly,
        // otherwise you'll get an error like "id not found"
        axios.get(path).then((response) => {
          this.tableData = response.data;
        });
      },
    }
})
export default class App extends Vue {
}
</script>

<style>
#app {
  font-family: Avenir, Helvetica, Arial, sans-serif;
  -webkit-font-smoothing: antialiased;
  -moz-osx-font-smoothing: grayscale;
  text-align: left;
  color: #2c3e50;
  margin-top: 60px;
}
</style>

Dockerfile used to build the frontend image: docker build -t k8s-frontend .

FROM  node:14-alpine3.12 AS build

LABEL maintainer="sunj@sfere-elec.com"

ADD . /build/

RUN set -eux \
    && yarn config set registry https://mirrors.huaweicloud.com/repository/npm/ \
    && yarn config set sass_binary_site https://mirrors.huaweicloud.com/node-sass \
    && yarn config set python_mirror https://mirrors.huaweicloud.com/python \
    && yarn global add yrm \
    && yrm add sfere http://repo.sfere.local:8081/repository/npm-group/ \
    && yrm use sfere \
    && cd /build \
    && yarn install \
    && yarn build

FROM nginx:1.21.5-alpine
LABEL zhenwei.li "zhenwei.li@sfere-elec.com"
COPY --from=build /build/dist/ /usr/share/nginx/html
# expose port 80
EXPOSE 80

Helm chart walkthrough

deployment.yaml puts the two docker images in the same deployment

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: {{ .Release.Name }}
  name: {{ .Release.Name }}
spec:
  replicas: 1
  revisionHistoryLimit: 5
  selector:
    matchLabels:
      app: {{ .Release.Name }}
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}
    spec:
      containers:
        - image: k8s-check-backend
          imagePullPolicy: Always
          name: server-check-backend
          resources: {}
        - image: k8s-check-frontend
          imagePullPolicy: Always
          name: server-check-frontend
          resources: {}
          volumeMounts:
          - name: nginx-conf
            mountPath: /etc/nginx/conf.d/default.conf
            subPath: default.conf
      restartPolicy: Always
      volumes:
        - name: nginx-conf
          configMap:
            name: {{ .Release.Name }}
            items:
            - key: default.conf
              path: default.conf
      serviceAccountName: {{ .Release.Name }}

service.yaml exposes the frontend via NodePort for easy testing

apiVersion: v1
kind: Service
metadata:
  labels:
    app: {{ .Release.Name }}
  name: {{ .Release.Name }}
spec:
  type: NodePort
  ports:
    - name: web
      port: 80
      targetPort: 80
      nodePort: 32666
  selector:
    app: {{ .Release.Name }}

configmap.yaml: the nginx config file that reverse-proxies the backend

apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}
data:
  default.conf: |
    # nginx config for this project, lzw
    server {
        listen       80;
        server_name  _;
        gzip on;
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   /usr/share/nginx/html;
        }

        location / {
            root /usr/share/nginx/html;
            index index.html index.htm;

            if (!-e $request_filename){
                    rewrite ^/.* /index.html last;
            }
        }
        location /api {
            proxy_pass          http://localhost:8000;
            proxy_http_version 1.1;
            proxy_set_header    X-Real-IP           $remote_addr;
            proxy_set_header    X-Forwarded-For     $proxy_add_x_forwarded_for;
        }

        error_page   500 502 503 504  /50x.html;
    }

rbac.yaml: the program needs access to k8s resources; without rbac configured, calls to the K8S API fail with 403

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    app.kubernetes.io/name: {{ .Release.Name }}
  name: {{ .Release.Name }}
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: {{ .Release.Name }}
subjects:
- kind: ServiceAccount
  name: {{ .Release.Name }}
  namespace: {{ .Release.Namespace }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    app.kubernetes.io/name: {{ .Release.Name }}
  name: {{ .Release.Name }}
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  - secrets
  - nodes
  - pods
  - services
  - resourcequotas
  - replicationcontrollers
  - limitranges
  - persistentvolumeclaims
  - persistentvolumes
  - namespaces
  - endpoints
  verbs:
  - list
  - watch
- apiGroups:
  - extensions
  resources:
  - daemonsets
  - deployments
  - replicasets
  - ingresses
  verbs:
  - list
  - watch
- apiGroups:
  - apps
  resources:
  - statefulsets
  - daemonsets
  - deployments
  - replicasets
  verbs:
  - list
  - watch
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    app.kubernetes.io/name: {{ .Release.Name }}
  name: {{ .Release.Name }}
  namespace: {{ .Release.Namespace }}

Installing the app

helm install k8s-check helm/k8s-server-check

Once installed, access it at http://masterip:32666

Adding a progress bar when downloading files from minio via the Python SDK

Background

Minio is a solid object-storage tool for on-premise environments and fits well into CI/CD pipelines. Keeping large files in git LFS is awkward, so we store the intermediate files needed for deployment in minio instead; fetching them via the SDK is very convenient.

Minio's upload call fput_object takes a progress argument, but the download call fget_object does not, so we adapt get_object slightly ourselves

Libraries involved

https://github.com/verigak/progress

provides the progress bar

pip install progress
pip install minio

Code

from minio import Minio
from progress.bar import Bar


def get_object_with_progress(client, bucket_name, object_name):
    try:
        data = client.get_object(bucket_name, object_name)
        total_length = int(data.headers.get('content-length'))
        bar = Bar(object_name, max=total_length / 1024 / 1024, fill='*', check_tty=False,
                  suffix='%(percent).1f%% - %(eta_td)s')
        with open('./' + object_name, 'wb') as file_data:
            for d in data.stream(1024 * 1024):
                bar.next(1)
                file_data.write(d)
        bar.finish()
    except Exception as err:
        print(err)


if __name__ == '__main__':
    client = Minio(
        "play.min.io",
        access_key="Q3AM3UQ867SPQQA43P2F",
        secret_key="zuf+tfteSlswRu7BJ86wekitnifILbZam1KYY3TG",
    )

    bucket_name = "download"
    object_name = "eiop-timescaledb-offline.zip"
    get_object_with_progress(client, bucket_name, object_name)

Result

Hooking Argo CD up to LDAP or gitea authentication

Background

By default, argocd accounts are added by editing argocd-cm, after which you still have to set each password with the argocd CLI. That is clearly tedious, so for convenience we hook it up to LDAP authentication or gitea's oauth2 authentication.

Here we focus on LDAP, because gitea does not pass "group" information to dex, whereas LDAP can return group information

Keywords: argocd ldap dex

The story, in one diagram

As the diagram above shows, the setup takes effect mainly through two config maps: argocd-cm and argocd-rbac-cm

Below we go through how to write the configuration files. Installing gitea and LDAP is not covered here; argocd's installation, briefly:

kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

LDAP configuration

Write an ldap-patch-dex.yaml

Note one trap: the DN attribute must be written in upper case ("DN") to work; the official docs don't mention this

apiVersion: v1
data:
  dex.config: |
    connectors:
    - type: ldap
      name: 统一账户中心
      id: ldap
      config:
        # LDAP server address
        host: ${LDAP_ADDRESS}:${LDAP_PORT}
        insecureNoSSL: true
        insecureSkipVerify: true
        # Variable name stores ldap bindDN in argocd-secret
        bindDN: "$dex.ldap.bindDN"
        # Variable name stores ldap bind password in argocd-secret
        bindPW: "$dex.ldap.bindPW"
        usernamePrompt: 用户名
        # LDAP user search attributes
        userSearch:
          baseDN: "ou=XXXX,dc=XXX,dc=com"
          filter: "(objectClass=person)"
          username: uid
          idAttr: uid
          emailAttr: mail
          nameAttr: cn
        # LDAP group search attributes
        groupSearch:
          baseDN: "dc=XXX,dc=com"
          filter: "(objectClass=groupOfUniqueNames)"
          userAttr: DN
          groupAttr: uniqueMember
          nameAttr: cn
Apply the patch:

kubectl -n argocd patch configmaps argocd-cm --patch "$(cat ldap-patch-dex.yaml)"

For the bindPW and bindDN above, we put a read-only account into the secret, like so:

kubectl -n argocd patch secrets argocd-secret --patch "{\"data\":{\"dex.ldap.bindPW\":\"$(echo my-password | base64 -w 0)\"}}"

kubectl -n argocd patch secrets argocd-secret --patch "{\"data\":{\"dex.ldap.bindDN\":\"$(echo CN=ldapuser,OU=Service Accounts,OU=Resource,DC=mydomain,DC=local | base64 -w 0)\"}}"

Set group permissions (only LDAP provides groups; gitea login cannot)

Edit the argocd-rbac-cm file; as an example, the "administrators" group is given the admin role here

kubectl edit configmaps -n argocd argocd-rbac-cm

apiVersion: v1
data:
  policy.csv: |
    g, administrators, role:admin
  policy.default: role:readonly

After editing, argocd and dex need to be restarted

kubectl delete pod -n argocd argocd-dex-server-7857b96dbb-s596m
kubectl delete pod -n argocd argocd-server-559f498454-fl5d2
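
The pod names above are deployment-generated hashes; an equivalent that avoids looking them up (assuming the default deployment names) is:

kubectl -n argocd rollout restart deployment argocd-dex-server argocd-server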

Demo



Not recommended: gitea oauth2 authentication

I don't recommend gitea oauth2 login, because there is no way to map "groups": every user who logs in this way gets the policy.default permissions. Perhaps this will change someday, but at the time of writing there was no way to obtain groups.

1. In gitea, enter the redirect URI to create an oauth2 application and obtain the clientID and clientSecret.

Note: argocd's redirect URI always has the fixed suffix /api/dex/callback

2. Create a gitea-patch-dex.yaml with the following content

apiVersion: v1
data:
  accounts.drone: apiKey,login
  dex.config: |-
    connectors:
    - type: gitea
      name: Gitea
      id: gitea
      config:
        baseURL: https://<gitea-domain>
        redirectURI: https://<argocd-domain>/api/dex/callback
        clientID: <clientID from step 1>
        clientSecret: <clientSecret from step 1>

3. Apply the config file and restart dex

kubectl -n argocd patch configmaps argocd-cm --patch "$(cat ldap-patch-dex.yaml)"

kubectl delete pod -n argocd argocd-dex-server-7857b96dbb-s596m


Quickly deploying LDAP with docker-compose

Background

Developers use many tools, such as GIT, SonarQube, minio and rancher, and it is clearly wrong for each of them to carry its own account system and permissions. As users, we want one account that logs into every tool, much like one WeChat account plays all of Tencent's games. As administrators, we want frontend, backend and test permissions kept separate, and a change made in one place to propagate to every tool. So let's try a quick LDAP deployment.

Prerequisites

An ubuntu system with docker and docker-compose installed

Architecture diagram

docker-compose.yml contents

Create a docker-compose.yml with the following content and start it with docker-compose up -d

version: '3'
 
services:
    ldap-service:
        image: osixia/openldap:1.5.0
        container_name: ldap-service
        restart: always
        hostname: ldap.zhenwei.local
        environment:
            - LDAP_ORGANISATION=zhenwei.li.Co.,Ltd.
            - LDAP_DOMAIN=<domain>.com
            - LDAP_ADMIN_PASSWORD=<admin-password>
            - LDAP_READONLY_USER=true
            - LDAP_READONLY_USER_USERNAME=lzwread
            - LDAP_READONLY_USER_PASSWORD=<readonly-password>
            - LDAP_CONFIG_PASSWORD=<readonly-password>
            - LDAP_TLS_VERIFY_CLIENT=never
        networks:
            server:
        ports:
          - "389:389"
          - "636:636"
        volumes:
            - /home/zhenwei/ldap/database:/var/lib/ldap
            - /home/zhenwei/ldap/config:/etc/ldap/slapd.d
    ldap-backup:
        image: osixia/openldap-backup:1.5.0
        container_name: ldap-backup
        restart: always
        environment:
            - LDAP_ORGANISATION=zhenwei.li.Co.,Ltd.
            - LDAP_BACKUP_CONFIG_CRON_EXP="0 2 * * *"
            - LDAP_DOMAIN=<domain>.com
            - LDAP_ADMIN_PASSWORD=<admin-password>
            - LDAP_READONLY_USER=true
            - LDAP_READONLY_USER_USERNAME=lzwread
            - LDAP_READONLY_USER_PASSWORD=<readonly-password>
            - LDAP_CONFIG_PASSWORD=<readonly-password>
        volumes:
            - /home/zhenwei/ldap/database:/var/lib/ldap
            - /home/zhenwei/ldap/config:/etc/ldap/slapd.d
            - /home/zhenwei/ldap/backup:/data/backup
        networks:
            server:
    phpldap-service:
        image: osixia/phpldapadmin:0.9.0
        container_name: phpldap-service
        restart: always
        environment:
            - PHPLDAPADMIN_LDAP_HOSTS=10.80.3.249
            - PHPLDAPADMIN_HTTPS=false
        networks:
          server:
        ports:
          - "3081:80"
        volumes:
            - /home/zhenwei/ldap/phpadmin-data:/var/www/phpldapadmin
        depends_on:
            - ldap-service
 
    ldap-ltb:
        image: accenture/adop-ldap-ltb:0.1.0
        container_name: ldap-ltb
        restart: always
        networks:
          server:
        ports:
          - "8095:80"
        environment:
            - LDAP_LTB_URL=ldap://ldap-service:389
            - LDAP_LTB_BS=dc=zhenwei.li,dc=com
            - LDAP_LTB_PWD=<admin-password>
            - LDAP_LTB_DN=cn=admin,dc=zhenwei.li,dc=com
        depends_on:
            - ldap-service
        volumes:
            - /home/zhenwei/ldap/ltb-config:/usr/share/self-service-password/conf
networks:
  server:
#    external: true
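
Once the stack is up, you can verify that the directory answers with ldapsearch (from the ldap-utils package); a sketch using the placeholders above:

ldapsearch -x -H ldap://localhost:389 \
    -D "cn=admin,dc=<domain>,dc=com" -w '<admin-password>' \
    -b "dc=<domain>,dc=com" "(objectClass=*)"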

An electron + droneCI + minio pipeline

Background

Our electron app is finished, and we want every tagged push to automatically build a deb file and upload it to minio, so that ops can grab the deb and deploy it to ubuntu machines. Our existing stack already includes droneCI, minio and python, hence this design. Installing and configuring vault, ldap, minio and harbor is omitted; those are covered in other articles on this site.


Architecture diagram

Explanation:

1. The frontend developer pushes electron code to the git server

2. The git server notifies drone-server via a webhook that an event occurred. This article only tests triggering on tag publication; many other trigger types can be configured

3. On being notified, drone-server starts a job container containing nodejs and python in the k8s cluster where drone-runner lives

4. The job container packages a deb file via the electron-forge make command

5. The job container uploads the deb file to minio via minio's python sdk


Writing the drone plugin

To achieve the goal above, the first step is to write a drone plugin

The plugin image is based on a Node.js 16 Debian image, with the tools in the list below preinstalled. Note: I use the Huawei mirror, and as of December 9, 2021 it only carried electron up to version 16.0.2, so pin the version explicitly.

Tools preinstalled in the image:

- rpm
- python3-pip
- python3
- fakeroot
- electron@v16.0.2
- electron-prebuilt-compile
- electron-forge
- dpkg
- the minio python sdk

The plugin consists of three files: main.py, Dockerfile and requirements.txt, described in detail below

main.py

The code first reads its environment variables, then replaces the version field in package.json with the git tag. It runs yarn install and yarn make, finds the files to upload via the environment variables, and uploads them to minio with the Python SDK. Full code below

#main.py
import json
import os
import subprocess

from minio import Minio
from minio.error import S3Error

endpoint = "minio.sfere.local"
access_key = "bababa"
secret_key = "bababa"
bucket = "electronjs"
folder_path = "/drone/src/out/make/deb/x64"
suffix = "deb"
tag = "0.0.0"


def find_file_by_suffix(target_dir, target_suffix="deb"):
    find_res = []
    target_suffix_dot = "." + target_suffix
    walk_generator = os.walk(target_dir)
    for root_path, dirs, files in walk_generator:
        if len(files) < 1:
            continue
        for file in files:
            file_name, suffix_name = os.path.splitext(file)
            if suffix_name == target_suffix_dot:
                find_res.append(os.path.join(root_path, file))
    return find_res


def get_environment():
    global endpoint, access_key, secret_key, bucket, suffix, tag

    if "PLUGIN_ENDPOINT" in os.environ:
        endpoint = os.environ["PLUGIN_ENDPOINT"]
    if "PLUGIN_ACCESS_KEY" in os.environ:
        access_key = os.environ["PLUGIN_ACCESS_KEY"]
    if "PLUGIN_SECRET_KEY" in os.environ:
        secret_key = os.environ["PLUGIN_SECRET_KEY"]
    if "PLUGIN_BUCKET" in os.environ:
        bucket = os.environ["PLUGIN_BUCKET"]
    if "PLUGIN_SUFFIX" in os.environ:
        suffix = os.environ["PLUGIN_SUFFIX"]
    if "PLUGIN_TAG" in os.environ:
        tag = os.environ["PLUGIN_TAG"]


def yarn_make():
    with open('./package.json', 'r', encoding='utf8') as fp:
        json_data = json.load(fp)
    json_data['version'] = tag
    with open('./package.json', 'w', encoding='utf8') as fp:
        json.dump(json_data, fp, ensure_ascii=False, indent=2)
    print('package version replaced with ' + tag)
    print(subprocess.run("yarn install", shell=True))
    print(subprocess.run("yarn make", shell=True))


def upload_file():
    file_list = find_file_by_suffix(folder_path, suffix)
    # create the minio client; secure=False because we use plain http
    client = Minio(
        endpoint=endpoint,
        access_key=access_key,
        secure=False,
        secret_key=secret_key,
    )

    # create the bucket if it does not exist yet
    found = client.bucket_exists(bucket)
    if not found:
        client.make_bucket(bucket)
    else:
        print("Bucket 'electronjs' already exists")

    # upload the files to the bucket
    for file in file_list:
        name = os.path.basename(file)
        client.fput_object(
            bucket, name, file,
        )
        print(
            "'" + file + "' is successfully uploaded as "
                         "object '" + name + "' to bucket '" + bucket + "'."
        )


if __name__ == "__main__":
    get_environment()
    yarn_make()
    try:
        upload_file()
    except S3Error as exc:
        print("error occurred.", exc)

Dockerfile

Start from a Node 16 Debian image, install the tools listed earlier from domestic mirrors, then make our Python program the entrypoint. When finished, build it with docker build -t drone-electron-minio-plugin:0.1.0 . and push the image to the private registry

FROM node:16-buster
RUN npm config set registry https://mirrors.huaweicloud.com/repository/npm/ \
    && npm config set disturl https://mirrors.huaweicloud.com/nodejs \
    && npm config set sass_binary_site https://mirrors.huaweicloud.com/node-sass \
    && npm config set phantomjs_cdnurl https://mirrors.huaweicloud.com/phantomjs \
    && npm config set chromedriver_cdnurl https://mirrors.huaweicloud.com/chromedriver \
    && npm config set operadriver_cdnurl https://mirrors.huaweicloud.com/operadriver \
    && npm config set electron_mirror https://mirrors.huaweicloud.com/electron/ \
    && npm config set python_mirror https://mirrors.huaweicloud.com/python \
    && npm config set canvas_binary_host_mirror https://npm.taobao.org/mirrors/node-canvas-prebuilt/ \
    && npm install -g npm@8.2.0 \
    && yarn config set registry https://mirrors.huaweicloud.com/repository/npm/ \
    && yarn config set disturl https://mirrors.huaweicloud.com/nodejs \
    && yarn config set sass_binary_site https://mirrors.huaweicloud.com/node-sass \
    && yarn config set phantomjs_cdnurl https://mirrors.huaweicloud.com/phantomjs \
    && yarn config set chromedriver_cdnurl https://mirrors.huaweicloud.com/chromedriver \
    && yarn config set operadriver_cdnurl https://mirrors.huaweicloud.com/operadriver \
    && yarn config set electron_mirror https://mirrors.huaweicloud.com/electron/ \
    && yarn config set python_mirror https://mirrors.huaweicloud.com/python \
    && yarn config set canvas_binary_host_mirror https://npm.taobao.org/mirrors/node-canvas-prebuilt/ \
    && yarn global add electron@v16.0.2 electron-forge electron-prebuilt-compile\
    && sed -i "s@http://ftp.debian.org@https://repo.huaweicloud.com@g" /etc/apt/sources.list \
    && sed -i "s@http://security.debian.org@https://repo.huaweicloud.com@g" /etc/apt/sources.list \
    && sed -i "s@http://deb.debian.org@https://repo.huaweicloud.com@g" /etc/apt/sources.list \
    && apt update \
    && apt install -y fakeroot dpkg rpm python3 python3-pip
ADD . .
RUN pip3 install -r ./requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple
# drone mounts the cloned repository at /drone/src, so make that the working directory
WORKDIR /drone/src
ENTRYPOINT ["python3", "/main.py"]

requirements.txt

minio==7.1.2

The electron repository

Add a .drone.yml file to the electron repository and make some small changes to package.json

package.json

.drone.yml

The droneCI pipeline file, using the drone plugin image built in the previous section; a sketch of it follows below
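
The original pipeline file is not reproduced here, so the following is a minimal sketch of what it could look like; the registry name and secret names are assumptions. In drone, each settings key is passed to the plugin container as a PLUGIN_* environment variable, which is exactly what the plugin's main.py reads:

kind: pipeline
type: kubernetes
name: electron-build

steps:
- name: build-and-upload
  image: <registry>/drone-electron-minio-plugin:0.1.0
  settings:
    endpoint: minio.sfere.local
    bucket: electronjs
    suffix: deb
    tag: ${DRONE_TAG}
    access_key:
      from_secret: minio_access_key
    secret_key:
      from_secret: minio_secret_key

trigger:
  event:
  - tag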


Pipeline demo

Manual steps

Automated steps

MinIO distributed bare-metal installation (illustrated)

Background & architecture

Single-node minio cannot add nodes or use the versioning feature, so we moved to distributed minio. It can be deployed via docker, kubernetes or bare metal; here we use bare metal, with the architecture shown below

1. Preparation

Four ubuntu 18 machines, identical in OS, CPU, memory and disk size. The disk given to minio must be formatted as XFS and mounted at /mnt/disk. Four domain names are configured, in order:

minio1.sfere.local  minio2.sfere.local minio3.sfere.local minio4.sfere.local

Editor's note: this differs slightly from the official docs; here each server mounts a single disk, whereas the docs mount four disks per server.

One server with nginx installed, with the domain minio.sfere.local

Editor's note: if you have no DNS, add the five entries to the hosts file of each of the five machines, and to your test machine's hosts file as well; a sketch follows.
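
A minimal /etc/hosts sketch; the IP addresses are placeholders for your own:

192.168.1.11 minio1.sfere.local
192.168.1.12 minio2.sfere.local
192.168.1.13 minio3.sfere.local
192.168.1.14 minio4.sfere.local
192.168.1.10 minio.sfere.local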


2. Install minio (perform the same steps on all four machines)

1. From the official download page https://dl.min.io/server/minio/release/linux-amd64/ download the latest deb file.

For example, I downloaded https://dl.min.io/server/minio/release/linux-amd64/minio_20211124231933.0.0_amd64.deb

2. Copy the file to the four servers and install it with dpkg, e.g. sudo dpkg -i minio_20211124231933.0.0_amd64.deb

3. sudo vi /etc/systemd/system/minio.service and comment out ProtectProc=invisible; this directive requires kernel 5.8+, which our ubuntu 18 system does not support

4. Add the minio-user user and group. Note: this differs slightly from the official docs, which at the time had a typo spelling minio-user as miniouser

sudo groupadd -r minio-user
sudo useradd -M -r -g minio-user minio-user
sudo chown minio-user:minio-user /mnt/disk

5. Create the environment file

sudo nano /etc/default/minio

# Set the hosts and volumes MinIO uses at startup
# The command uses MinIO expansion notation {x...y} to denote a
# sequential series.
#
# The following example covers four MinIO hosts
# with 4 drives each at the specified hostname and drive locations.
 
MINIO_VOLUMES="http://minio{1...4}.sfere.local/mnt/disk/minio"
 
# Set all MinIO server options
#
# The following explicitly sets the MinIO Console listen address to
# port 9001 on all network interfaces. The default behavior is dynamic
# port selection.
 
MINIO_OPTS="--console-address :9001"
 
# Set the root username. This user has unrestricted permissions to
# perform S3 and administrative API operations on any resource in the
# deployment.
#
# Defer to your organizations requirements for superadmin user name.
 
MINIO_ROOT_USER=minioadmin
 
# Set the root password
#
# Use a long, random, unique string that meets your organizations
# requirements for passwords.
 
MINIO_ROOT_PASSWORD=sfere!lzw!2021
 
# Set to the URL of the load balancer for the MinIO deployment
# This value *must* match across all MinIO servers. If you do
# not have a load balancer, set this value to to any *one* of the
# MinIO hosts in the deployment as a temporary measure.
# address of the nginx load balancer
MINIO_SERVER_URL="http://minio.sfere.local"
 
MINIO_IDENTITY_LDAP_TLS_SKIP_VERIFY=on
MINIO_IDENTITY_LDAP_SERVER_INSECURE=on
MINIO_IDENTITY_LDAP_STS_EXPIRY=24h
MINIO_IDENTITY_LDAP_SERVER_ADDR=${LDAP_SERVER_ADDR}
MINIO_IDENTITY_LDAP_LOOKUP_BIND_DN=${LDAP_READONLY_BIND_DN}
MINIO_IDENTITY_LDAP_LOOKUP_BIND_PASSWORD=${LDAP_READONLY_BIND_PASSWORD}
MINIO_IDENTITY_LDAP_USER_DN_SEARCH_BASE_DN=${LDAP_USER_SEARCH_BASE_DN}
MINIO_IDENTITY_LDAP_USER_DN_SEARCH_FILTER=(&(objectClass=inetOrgPerson)(uid=%s))
MINIO_IDENTITY_LDAP_GROUP_SEARCH_BASE_DN=${LDAP_GROUP_SEARCH_BASE_DN}
MINIO_IDENTITY_LDAP_GROUP_SEARCH_FILTER=(&(objectclass=groupOfUniqueNames))

6. Start the minio service and check that it is running

sudo systemctl start minio.service
sudo systemctl status minio.service
journalctl -f -u minio.service

nginx configuration

Add a minio.conf under /etc/nginx/conf.d

upstream minio {
    server minio1.sfere.local:9000;
    server minio2.sfere.local:9000;
    server minio3.sfere.local:9000;
    server minio4.sfere.local:9000;
}
 
upstream console {
    ip_hash;
    server minio1.sfere.local:9001;
    server minio2.sfere.local:9001;
    server minio3.sfere.local:9001;
    server minio4.sfere.local:9001;
}
 
server {
        listen       80;
        listen  [::]:80;
        server_name  minio.sfere.local;
 
        # To allow special characters in headers
        ignore_invalid_headers off;
        # Allow any size file to be uploaded.
        # Set to a value such as 1000m; to restrict file size to a specific value
        client_max_body_size 0;
        # To disable buffering
        proxy_buffering off;
 
        location / {
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
 
            proxy_connect_timeout 300;
            # Default is HTTP/1, keepalive is only enabled in HTTP/1.1
            proxy_http_version 1.1;
            proxy_set_header Connection "";
            chunked_transfer_encoding off;
 
            proxy_pass http://minio;
        }
}
server {
        listen       9001;
        listen  [::]:9001;
        server_name  minio.sfere.local;
 
        # To allow special characters in headers
        ignore_invalid_headers off;
        # Allow any size file to be uploaded.
        # Set to a value such as 1000m; to restrict file size to a specific value
        client_max_body_size 0;
        # To disable buffering
        proxy_buffering off;
 
        location / {
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header X-NginX-Proxy true;
 
            # This is necessary to pass the correct IP to be hashed
            real_ip_header X-Real-IP;
 
            proxy_connect_timeout 300;
 
            # To support websocket
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
 
            chunked_transfer_encoding off;
 
            proxy_pass http://console;
        }
}

Use the mc client to grant LDAP superadmin and regular-user permissions

docker run --rm -it --entrypoint=/bin/sh minio/mc
 
mc config host add minio http://minio.sfere.local minioadmin 'sfere!lzw!2021' --api S3v4
  
mc admin policy list minio
  
mc admin policy set minio consoleAdmin user=cn=李镇伟,ou=test-department,ou=NJ-Dev,ou=SFERE-RD,dc=sfere-elec,dc=com
mc admin policy set minio readwrite group=cn=jira-software-users,dc=sfere-elec,dc=com
mc admin policy set minio consoleAdmin group=cn=超级用户,dc=sfere-elec,dc=com

Accessing the UI

Visiting http://minio.sfere.local/ redirects automatically to http://minio.sfere.local:9001/login

References

https://docs.min.io/minio/baremetal/installation/deploy-minio-distributed.html
