
A simple example of connecting fluentd to Elasticsearch

Background

I recently wanted to learn how Elasticsearch and fluentd work together. fluentd is much lighter on resources than Logstash, which is what prompted this article.

Quick Elasticsearch installation (using ECK)

Reference

https://www.elastic.co/guide/en/cloud-on-k8s/1.8/k8s-deploy-eck.html

First, install the ECK operator:

kubectl create -f https://download.elastic.co/downloads/eck/1.8.0/crds.yaml
kubectl apply -f https://download.elastic.co/downloads/eck/1.8.0/operator.yaml

Once the commands finish, you can follow the operator logs with:

kubectl -n elastic-system logs -f statefulset.apps/elastic-operator

Install Elasticsearch:

cat <<EOF | kubectl apply -f -
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 7.15.2
  nodeSets:
  - name: default
    count: 1
    config:
      node.store.allow_mmap: false
EOF

After the installation completes, retrieve the Elasticsearch password with the command below; the default user is elastic:

PASSWORD=$(kubectl get secret quickstart-es-elastic-user -o go-template='{{.data.elastic | base64decode}}')

Once the cluster is deployed, you can use port-forward to expose the Elasticsearch port locally for testing:

kubectl port-forward service/quickstart-es-http 9200
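
With the port-forward running, a quick way to confirm that Elasticsearch is up and the credentials work is to query https://localhost:9200 (ECK serves HTTPS with a self-signed certificate by default). Below is a minimal sketch in Python, assuming the requests library is installed and that the password retrieved above has been exported as the PASSWORD environment variable:

import os
import requests

# ECK's quickstart-es-http service uses a self-signed certificate,
# so certificate verification is disabled for this quick check.
password = os.environ["PASSWORD"]
resp = requests.get(
    "https://localhost:9200",
    auth=("elastic", password),
    verify=False,
)
resp.raise_for_status()
print(resp.json()["version"]["number"])  # should print 7.15.2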

Next, install Kibana:

cat <<EOF | kubectl apply -f -
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: quickstart
spec:
  version: 7.15.2
  count: 1
  elasticsearchRef:
    name: quickstart
EOF

You can likewise use port-forward to expose the Kibana port locally for testing; open https://localhost:5601 in a browser and log in as the elastic user:

kubectl port-forward service/quickstart-kb-http 5601
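
For a scripted check of Kibana through the port-forward, a minimal sketch along the same lines (again assuming requests and the PASSWORD environment variable; the quickstart-kb-http service also uses a self-signed certificate):

import os
import requests

password = os.environ["PASSWORD"]
resp = requests.get(
    "https://localhost:5601/api/status",
    auth=("elastic", password),
    verify=False,  # self-signed certificate
)
resp.raise_for_status()
print(resp.json()["status"]["overall"])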

Installing fluentd

Write a fluentd.yaml manifest and apply it with kubectl apply -f fluentd.yaml (the kube-logging namespace it references must exist first, e.g. kubectl create namespace kube-logging). The content is below; be sure to replace the password. One caveat: ECK enables TLS on the quickstart-es-http service by default, so unless you have disabled TLS on the Elasticsearch resource you will likely need FLUENT_ELASTICSEARCH_SCHEME set to "https" (and FLUENT_ELASTICSEARCH_SSL_VERIFY set to "false" to accept the self-signed certificate):

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluentd
  namespace: kube-logging
  labels:
    app: fluentd
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: fluentd
  labels:
    app: fluentd
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - namespaces
  verbs:
  - get
  - list
  - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: fluentd
roleRef:
  kind: ClusterRole
  name: fluentd
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: fluentd
  namespace: kube-logging
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: kube-logging
  labels:
    app: fluentd
spec:
  selector:
    matchLabels:
      app: fluentd
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      serviceAccount: fluentd
      serviceAccountName: fluentd
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: fluentd
        image: fluent/fluentd-kubernetes-daemonset:v1.4.2-debian-elasticsearch-1.1
        env:
          - name:  FLUENT_ELASTICSEARCH_HOST
            value: "quickstart-es-http.default.svc.cluster.local"
          - name:  FLUENT_ELASTICSEARCH_PORT
            value: "9200"
          - name: FLUENT_ELASTICSEARCH_SCHEME
            value: "http"
          - name: FLUENT_ELASTICSEARCH_USER
            value: "elastic"
          - name: FLUENT_ELASTICSEARCH_PASSWORD
            value: "我是密码!注意替换"
          - name: FLUENT_ELASTICSEARCH_SSL_VERSION
            value: "TLSv1_2"
          - name: FLUENTD_SYSTEMD_CONF
            value: disable
          - name: FLUENT_UID
            value: "0"
        resources:
          limits:
            memory: 512Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers

Deploy a test application that generates logs (you can delete it when finished); make sure the target namespace exists first:

kubectl -n logging apply -f - <<"EOF"
apiVersion: apps/v1
kind: Deployment
metadata:
 name: log-generator
spec:
 selector:
   matchLabels:
     app.kubernetes.io/name: log-generator
 replicas: 1
 template:
   metadata:
     labels:
       app.kubernetes.io/name: log-generator
   spec:
     containers:
     - name: nginx
       image: banzaicloud/log-generator:0.3.2
EOF
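
Once the log generator has been running for a minute or two, you can verify that fluentd is shipping logs into Elasticsearch (with the earlier port-forward still active). A minimal sketch, again assuming requests, the PASSWORD environment variable, and the image's default logstash-* index naming:

import os
import requests

password = os.environ["PASSWORD"]
# The fluentd-kubernetes-daemonset image writes logstash-YYYY.MM.DD indices
# by default; list them via the _cat/indices API.
resp = requests.get(
    "https://localhost:9200/_cat/indices/logstash-*?format=json",
    auth=("elastic", password),
    verify=False,  # self-signed certificate
)
resp.raise_for_status()
for index in resp.json():
    print(index["index"], index["docs.count"])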

Adding an index pattern in Kibana and viewing the logs

In Kibana, open Stack Management -> Index Patterns and create an index pattern such as logstash-* (the fluentd-kubernetes-daemonset image writes logstash-prefixed indices by default), choose @timestamp as the time field, and then browse the collected logs in Discover.

References

https://www.digitalocean.com/community/tutorials/how-to-set-up-an-elasticsearch-fluentd-and-kibana-efk-logging-stack-on-kubernetes
https://docs.fluentd.org/output/elasticsearch
https://github.com/fluent/fluentd-kubernetes-daemonset
https://medium.com/kubernetes-tutorials/cluster-level-logging-in-kubernetes-with-fluentd-e59aa2b6093a

Installing TDengine, testing the Python client, and connecting it to DBeaver

Introduction

I've recently been looking at the TDengine database and thinking about how to combine it with our edge clusters. This article is structured as follows:

Server: Ubuntu 18, with TDengine installed from the deb package, host IP 192.168.0.13, using the default username and password

Client: the Python client running in a container, which can run on another machine or inside a K8s cluster

GUI tool: add the JDBC driver to DBeaver and use TDengine from there


Installing the TDengine server

Per the documentation's recommendation, bare-metal Linux installation is currently the supported route and Docker installation is not recommended, so we install on bare metal here.

1. Download the installation package from https://www.taosdata.com/assets-download/TDengine-server-2.2.0.5-Linux-x64.deb

2. Copy the file to the 192.168.0.13 server and install it with sudo dpkg -i TDengine-server-2.2.0.5-Linux-x64.deb

3. Start the TDengine service with sudo systemctl start taosd

4. Following the commands at https://www.taosdata.com/cn/getting-started/#Quick%20Start, you can create a database and table for testing (see the sketch below)
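
As a quick test for step 4, the snippet below creates a small database and table in the style of the quick-start guide. This is a minimal sketch only: it assumes the TDengine Python connector (whose installation is shown in the next section) is available locally and that the default root/taosdata credentials are in use.

import taos  # TDengine Python connector, installed in the next section

conn = taos.connect(host="192.168.0.13", user="root", password="taosdata")
cursor = conn.cursor()
cursor.execute("create database if not exists db")
cursor.execute("use db")
# a simple demo table, similar to the one in the quick-start documentation
cursor.execute("create table if not exists tb (ts timestamp, temperature int, humidity float)")
cursor.execute("insert into tb values (now, 23, 0.65)")
cursor.execute("select * from tb")
for row in cursor:
    print(row)
conn.close()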


Python client container

Take a Python Docker container as an example. One odd thing here is that every TDengine connection method requires a path to taos.cfg and a separately installed native client program.

Create a Dockerfile, a test script test.py, and a requirements.txt file (if you don't use the faker library, requirements.txt isn't needed). After writing them, build and run the image, e.g. with docker build -t taos-test . and docker run --rm taos-test.

The Dockerfile content is as follows:

FROM python:3.9.7
# Copy the build context (test.py, requirements.txt) into the image
ADD . .
# Download and install the TDengine native client, which the Python connector requires
RUN wget https://www.taosdata.com/assets-download/TDengine-client-2.2.0.5-Linux-x64.tar.gz
RUN tar -xzf TDengine-client-2.2.0.5-Linux-x64.tar.gz
WORKDIR /TDengine-client-2.2.0.5
RUN ./install_client.sh
WORKDIR /
# The Python connector lives in the TDengine source repository
RUN git clone --depth 1 https://github.com/taosdata/TDengine.git
RUN pip install -r requirements.txt  -i https://pypi.tuna.tsinghua.edu.cn/simple
RUN pip install ./TDengine/src/connector/python -i https://pypi.tuna.tsinghua.edu.cn/simple
CMD ["python","test.py"]

The main test script, test.py:

import datetime
import time

import taos
from faker import Faker

fake = Faker()

# The Linux client installed in the container keeps taos.cfg under /etc/taos
# (the original used a Windows path, which does not exist inside the container).
conn = taos.connect(host="192.168.0.13", user="root", password="taosdata", config="/etc/taos")
c1 = conn.cursor()

# Create the database (only needed once)
# c1.execute('create database db')
c1.execute('use db')
print("start time")
print(time.strftime('%Y-%m-%d %H:%M:%S', time.localtime(time.time())))

# Create the table (only needed once)
# c1.execute('create table tb (ts timestamp, temperature int, humidity float)')
# Single-row insert example:
start_time = datetime.datetime(2018, 11, 1)
# affected_rows = c1.execute("insert into tb_ts_kv_t values (\'%s\', 'DEVICE', '1eb3c1d1d5db250860b6553ce011990', 'DCT', '1623200', True, '123', '456', 3.0)" %start_time)

# Insert generated rows one at a time (one INSERT statement per iteration)
time_interval = datetime.timedelta(seconds=1)
for irow in range(1, 1000):
    sqlcmd = ['insert into tb_ts_kv_t values']
    start_time += time_interval
    sqlcmd.append("('" + str(start_time) + "', 'DEVICE', '"+fake.company()+"', '"+fake.name()+"', '" + str(
        irow) + "', True, '"+fake.word()+"', '456', " + str(irow * 1.2) + ")")
    exestring = ' '.join(sqlcmd)
    # sqlcmd.append('(\'%s\', %d, %f)' %(start_time, irow, irow*1.2))
    affected_rows = c1.execute(exestring)
print("end time")
print(time.strftime('%Y-%m-%d %H:%M:%S', time.localtime(time.time())))

# Read everything back
c1.execute('select * from tb_ts_kv_t')
for data in c1:
    print("ts=%s, col4=%s, last_col=%s" % (data[0], data[4], data[-1]))

The requirements.txt file:

Faker==9.3.1

Connecting the DBeaver GUI tool

1. Click Database -> Driver Manager, then click New

2. Configure it with the following settings:

Driver Name: Tdengine
Driver Type: Generic
Class Name: com.taosdata.jdbc.TSDBDriver
Default Port: 6030

3. Click Add Artifact and enter the following:

Group ID: com.taosdata.jdbc
Artifact ID: taos-jdbcdriver
Version: RELEASE

4. Click the Find Class button next to Driver Class, select com.taosdata.jdbc.TSDBDriver, and click OK

5. Create a new TDengine connection with the following parameters:

JDBC URL: jdbc:TAOS://192.168.0.13/db
Username: root
Password: taosdata
