
ELK Stack: A Log Collection Case Study

Due to lab-environment constraints, filebeat and logstash are deployed on tomcat-server-nodeX; redis and the
logstash instance that writes to the ES cluster are deployed on redis-server; HAProxy and Keepalived are
deployed on tomcat-server-nodeX; and Kibana is deployed on the ES cluster hosts.

Environment:

Hostname             Cores  RAM  IP               Services
es-server-node1      2      4G   192.168.100.142  Elasticsearch, Kibana, Head, Cerebro
es-server-node2      2      4G   192.168.100.144  Elasticsearch, Kibana
es-server-node3      2      4G   192.168.100.146  Elasticsearch, Kibana
tomcat-server-node1  2      2G   192.168.100.150  logstash, filebeat, haproxy, tomcat
tomcat-server-node2  2      2G   192.168.100.152  logstash, filebeat, haproxy, nginx, tomcat
redis-server         2      2G   192.168.100.154  redis, logstash, MySQL

1. Basic Environment

1.1 ES and logstash require a Java environment

Prerequisites: disable the firewall and SELinux, and synchronize time.

ES installation notes

For the dependencies and notes for each ES version, see the official documentation:
ES installation notes

Logstash installation notes

For the dependencies and notes for each Logstash version, see the official documentation:
Logstash installation notes

1.2 Install the base build tools needed by HAProxy, redis, and nginx

On Ubuntu:

apt -y purge ufw lxd lxd-client lxcfs liblxc-common
apt -y install iproute2 ntpdate tcpdump telnet traceroute nfs-kernel-server nfs-common lrzsz tree openssl libssl-dev libpcre3 libpcre3-dev zlib1g-dev  gcc make openssh-server  iotop unzip zip

On CentOS 7:

yum -y install vim-enhanced tcpdump lrzsz tree telnet bash-completion net-tools wget bzip2 lsof tmux man-pages zip unzip nfs-utils gcc make gcc-c++ glibc glibc-devel pcre pcre-devel openssl  openssl-devel systemd-devel zlib-devel

2. Configure filebeat to write data to redis through logstash

Log flow path:
tomcat-server:filebeat --> tomcat-server:logstash --> redis-server:redis
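The two hops above can be sketched as a pair of queue operations. The following is a minimal Python simulation of the idea only (the function names and the in-memory deque standing in for the Redis list are hypothetical, not the real Beats or Redis protocol):

```python
import json
from collections import deque

redis_list = deque()  # stands in for the Redis list (e.g. "syslog_150")

def filebeat_ship(line, source_name):
    # filebeat wraps each raw log line and tags it via fields.name
    return {"message": line, "fields": {"name": source_name}}

def logstash_push_to_redis(event):
    # the first logstash serializes the event to JSON and pushes it onto the list
    redis_list.append(json.dumps(event))

def logstash_drain_to_es():
    # the second logstash pops events off the list and would index them into ES
    batch = []
    while redis_list:
        batch.append(json.loads(redis_list.popleft()))
    return batch

logstash_push_to_redis(filebeat_ship("Mar 22 13:52:00 node1 sshd: test", "syslog_from_filebeat_150"))
es_batch = logstash_drain_to_es()
```

The point of the Redis hop is exactly this decoupling: the producer and consumer never talk to each other directly, so either side can restart without losing the other.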

2.1 filebeat configuration

2.1.1 tomcat-server-node1

filebeat: logstash-output doc

root@tomcat-server-node1:~# ip addr show eth0 | grep "inet "
    inet 192.168.100.150/24 brd 192.168.100.255 scope global eth0
root@tomcat-server-node1:~# vim /etc/filebeat/filebeat.yml
...
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/syslog
  document_type: system-log
  exclude_lines: ['^DBG']
  fields:
    name: syslog_from_filebeat_150  # used by logstash to tell log sources apart, so events can go to different outputs or different indices (when the output is elasticsearch)
output.logstash:
  hosts: ["192.168.100.150:5044"]

2.1.2 tomcat-server-node2

root@tomcat-server-node2:~# ip addr show eth0 | grep "inet "
    inet 192.168.100.152/24 brd 192.168.100.255 scope global eth0
root@tomcat-server-node2:~# vim /etc/filebeat/filebeat.yml
...
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/syslog
  document_type: system-log
  exclude_lines: ['^DBG']
  fields:
    name: syslog_from_filebeat_152
output.logstash:
  hosts: ["192.168.100.152:5044"]
...

2.2 logstash configuration

logstash filebeat input plugin
logstash redis output plugin

2.2.1 tomcat-server-node1

root@tomcat-server-node1:/etc/logstash/conf.d# cat syslog_from_filebeat.conf
input {
  beats {
    host => "192.168.100.150"
    port => "5044"
  }
}
output {
  redis {
    host => "192.168.100.154"
    port => "6379"
    db   => "1"
    key  => "syslog_150"
    data_type => "list"
    password  => "stevenux"
  }
}

Restart logstash:

~# systemctl restart logstash

2.2.2 tomcat-server-node2

root@tomcat-server-node2:/etc/logstash/conf.d# cat syslog_from_filebeat.conf
input {
  beats {
    host => "192.168.100.152"
    port => "5044"
  }
}
output {
  redis {
    host => "192.168.100.154"
    port => "6379"
    db   => "1"
    key  => "syslog_152"
    data_type => "list"
    password  => "stevenux"
  }
}

Restart logstash:

~# systemctl restart logstash

2.3 redis configuration

2.3.1 Disable RDB and AOF persistence

root@redis-server:/etc/logstash/conf.d# cat /usr/local/redis/redis.conf
...
bind 0.0.0.0
port 6379
daemonize yes
supervised no
pidfile /var/run/redis_6379.pid
loglevel notice
databases 16
# requirepass foobared  # set a password
#save 900 1  # RDB disabled
#save 300 10
#save 60 10000
#dbfilename dump.rdb
appendonly no  # AOF disabled
...

2.3.2 Set a password

After redis is up, set the password with redis-cli; note that it does not survive a redis restart:

127.0.0.1:6379> CONFIG SET requirepass 'stevenux'

Start redis:

root@redis-server:/etc/logstash/conf.d# /usr/local/redis/src/redis-server &

2.4 Check the logstash and filebeat logs and the redis data

2.4.1 redis data

root@redis-server:~# redis-cli
127.0.0.1:6379> AUTH stevenux
OK
127.0.0.1:6379> SELECT 1
OK
127.0.0.1:6379[1]> KEYS *
1) "syslog_152"
2) "syslog_150"
127.0.0.1:6379[1]> LLEN syslog_152
(integer) 3760
127.0.0.1:6379[1]> LLEN syslog_150
(integer) 4125
127.0.0.1:6379[1]> LPOP syslog_152
"{\"agent\":{\"hostname\":\"tomcat-server-node2\",\"id\":\"93f937e9-e692-4434-8b83-7562f95ef976\",\"type\":\"filebeat\",\"version\":\"7.6.1\",\"ephemeral_id\":\"5cf51f37-15b3-44b7-bd01-293f0290774b\"},\"host\":{\"name\":\"tomcat-server-node2\",\"hostname\":\"tomcat-server-node2\",\"id\":\"e96c1092201442a4aeb7f67c5c417605\",\"architecture\":\"x86_64\",\"containerized\":false,\"os\":{\"codename\":\"bionic\",\"name\":\"Ubuntu\",\"platform\":\"ubuntu\",\"kernel\":\"4.15.0-55-generic\",\"family\":\"debian\",\"version\":\"18.04.3 LTS (Bionic Beaver)\"}},\"input\":{\"type\":\"log\"},\"@timestamp\":\"2020-03-22T05:52:07.572Z\",\"ecs\":{\"version\":\"1.4.0\"},\"tags\":[\"beats_input_codec_plain_applied\"],\"log\":{\"offset\":21760382,\"file\":{\"path\":\"/var/log/syslog\"}},\"message\":\"Mar 22 13:52:00 tomcat-server-node2 filebeat[1136]: 2020-03-22T13:52:00.256+0800#011ERROR#011pipeline/output.go:100#011Failed to connect to backoff(async(tcp://192.168.100.152:5044)): dial tcp 192.168.100.152:5044: connect: connection refused\",\"@version\":\"1\",\"fields\":{\"name\":\"syslog_from_filebeat_152\"}}"
127.0.0.1:6379[1]> LPOP syslog_150
"{\"@timestamp\":\"2020-03-22T05:46:08.122Z\",\"tags\":[\"beats_input_codec_plain_applied\"],\"fields\":{\"name\":\"syslog_from_filebeat_150\"},\"@version\":\"1\",\"agent\":{\"hostname\":\"tomcat-server-node1\",\"id\":\"93f937e9-e692-4434-8b83-7562f95ef976\",\"version\":\"7.6.1\",\"type\":\"filebeat\",\"ephemeral_id\":\"a03ec121-e70b-4039-a696-3e7ccefcb510\"},\"host\":{\"name\":\"tomcat-server-node1\",\"os\":{\"name\":\"Ubuntu\",\"platform\":\"ubuntu\",\"family\":\"debian\",\"kernel\":\"4.15.0-55-generic\",\"version\":\"18.04.3 LTS (Bionic Beaver)\",\"codename\":\"bionic\"},\"containerized\":false,\"architecture\":\"x86_64\",\"id\":\"e96c1092201442a4aeb7f67c5c417605\",\"hostname\":\"tomcat-server-node1\"},\"input\":{\"type\":\"log\"},\"ecs\":{\"version\":\"1.4.0\"},\"message\":\"Mar 22 13:11:57 tomcat-server-node1 kernel: [    0.000000]   2 disabled\",\"log\":{\"offset\":21567440,\"file\":{\"path\":\"/var/log/syslog\"}}}"
127.0.0.1:6379[1]>
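Each list element popped above is one JSON-serialized event, and the routing tag added in filebeat survives under fields.name. A minimal sketch of unpacking one (the payload here is a hypothetical abbreviation of the full document shown above):

```python
import json

# abbreviated version of an LPOP'ed event (shortened sample, not the full payload)
raw = ('{"fields": {"name": "syslog_from_filebeat_152"}, '
       '"host": {"name": "tomcat-server-node2"}, '
       '"message": "Mar 22 13:52:00 tomcat-server-node2 filebeat[1136]: ..."}')

event = json.loads(raw)
source = event["fields"]["name"]  # the tag set in filebeat.yml; drives index routing later
host = event["host"]["name"]
```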

2.4.2 logstash logs

root@tomcat-server-node1:/etc/logstash/conf.d# tail /var/log/logstash/logstash-plain.log  -n66
...
[2020-03-22T13:49:33,428][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"7.6.1"}
[2020-03-22T13:49:35,044][INFO ][org.reflections.Reflections] Reflections took 35 ms to scan 1 urls, producing 20 keys and 40 values
[2020-03-22T13:49:35,573][WARN ][org.logstash.instrument.metrics.gauge.LazyDelegatingGauge][main] A gauge metric of an unknown type (org.jruby.RubyArray) has been create for key: cluster_uuids. This may result in invalid serialization.  It is recommended to log an issue to the responsible developer/development team.
[2020-03-22T13:49:35,603][INFO ][logstash.javapipeline    ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>250, "pipeline.sources"=>["/etc/logstash/conf.d/syslog_from_filebeat.conf"], :thread=>"#<Thread:0xacd988b run>"}
[2020-03-22T13:49:36,337][INFO ][logstash.inputs.beats    ][main] Beats inputs: Starting input listener {:address=>"192.168.100.150:5044"}
[2020-03-22T13:49:36,354][INFO ][logstash.javapipeline    ][main] Pipeline started {"pipeline.id"=>"main"}
[2020-03-22T13:49:36,426][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2020-03-22T13:49:36,482][INFO ][org.logstash.beats.Server][main] Starting server on port: 5044
[2020-03-22T13:49:36,728][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}

2.4.3 filebeat logs

root@tomcat-server-node1:/etc/logstash/conf.d# tail /var/log/syslog  -n2
Mar 22 13:53:35 tomcat-server-node1 filebeat[1162]: 2020-03-22T13:53:35.082+0800#011INFO#011[monitoring]#011log/log.go:145#011Non-zero metrics in the last 30s#011{"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":990,"time":{"ms":6}},"total":{"ticks":1440,"time":{"ms":6},"value":1440},"user":{"ticks":450}},"handles":{"limit":{"hard":4096,"soft":1024},"open":11},"info":{"ephemeral_id":"a03ec121-e70b-4039-a696-3e7ccefcb510","uptime":{"ms":450355}},"memstats":{"gc_next":14313856,"memory_alloc":11135512,"memory_total":142802472},"runtime":{"goroutines":29}},"filebeat":{"events":{"added":1,"done":1},"harvester":{"files":{"cf3638b8-1dfe-4b35-acd1-d3ec67a6780e":{"last_event_published_time":"2020-03-22T13:53:07.351Z","last_event_timestamp":"2020-03-22T13:53:07.351Z","read_offset":1259,"size":1259}},"open_files":1,"running":1}},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"acked":1,"batches":1,"total":1},"read":{"bytes":6},"write":{"bytes":1134}},"pipeline":{"clients":1,"events":{"active":0,"published":1,"total":1},"queue":{"acked":1}}},"registrar":{"states":{"current":2,"update":1},"writes":{"success":1,"total":1}},"system":{"load":{"1":0.03,"15":0.41,"5":0.56,"norm":{"1":0.015,"15":0.205,"5":0.28}}}}}}
Mar 22 13:54:05 tomcat-server-node1 filebeat[1162]: 2020-03-22T13:54:05.082+0800#011INFO#011[monitoring]#011log/log.go:145#011Non-zero metrics in the last 30s#011{"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":990,"time":{"ms":4}},"total":{"ticks":1450,"time":{"ms":9},"value":1450},"user":{"ticks":460,"time":{"ms":5}}},"handles":{"limit":{"hard":4096,"soft":1024},"open":11},"info":{"ephemeral_id":"a03ec121-e70b-4039-a696-3e7ccefcb510","uptime":{"ms":480354}},"memstats":{"gc_next":14313856,"memory_alloc":12851080,"memory_total":144518040},"runtime":{"goroutines":29}},"filebeat":{"events":{"added":1,"done":1},"harvester":{"files":{"cf3638b8-1dfe-4b35-acd1-d3ec67a6780e":{"last_event_published_time":"2020-03-22T13:53:42.357Z","last_event_timestamp":"2020-03-22T13:53:42.357Z","read_offset":1245,"size":1245}},"open_files":1,"running":1}},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"acked":1,"batches":1,"total":1},"read":{"bytes":6},"write":{"bytes":1130}},"pipeline":{"clients":1,"events":{"active":0,"published":1,"total":1},"queue":{"acked":1}}},"registrar":{"states":{"current":2,"update":1},"writes":{"success":1,"total":1}},"system":{"load":{"1":0.02,"15":0.39,"5":0.5,"norm":{"1":0.01,"15":0.195,"5":0.25}}}}}}
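In the lines above, syslog has escaped filebeat's tab separators as "#011", and the trailing part of each monitoring line is a JSON metrics object. A small sketch of undoing the escaping and pulling one counter out (the metrics payload here is a trimmed-down sample):

```python
import json

# syslog escapes the tab character as "#011"; undo that and take the trailing
# JSON metrics object that filebeat logs every 30 seconds
line = ('2020-03-22T13:53:35.082+0800#011INFO#011[monitoring]#011'
        'log/log.go:145#011Non-zero metrics in the last 30s#011'
        '{"monitoring": {"metrics": {"libbeat": {"output": {"events": {"acked": 1}}}}}}')

parts = line.replace("#011", "\t").split("\t")
level = parts[1]
metrics = json.loads(parts[-1])
acked = metrics["monitoring"]["metrics"]["libbeat"]["output"]["events"]["acked"]
```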

3. Configure logstash to pull data from redis and write it to the ES cluster

Log data flow:
redis-server:redis --> redis-server:logstash --> es-server-nodeX:elasticsearch

3.1 logstash configuration

root@redis-server:/etc/logstash/conf.d# vim syslog_redis_to_es.conf
root@redis-server:/etc/logstash/conf.d# cat syslog_redis_to_es.conf
input {
  redis {
    host => "192.168.100.154"
    port => "6379"
    data_type => "list"
    db => "1"
    key => "syslog_150"
    password => "stevenux"
  }
  redis {
    host => "192.168.100.154"
    port => "6379"
    data_type => "list"
    db => "1"
    key => "syslog_152"
    password => "stevenux"
  }
}
output {
  if [fields][name] == "syslog_from_filebeat_150" {   # this value was defined in the filebeat config
    elasticsearch {
      hosts => ["192.168.100.144:9200"]
      index => "syslog_from_filebeat_150-%{+YYYY.MM.dd}"
    }
  }
  if [fields][name] == "syslog_from_filebeat_152" {
    elasticsearch {
      hosts => ["192.168.100.144:9200"]
      index => "syslog_from_filebeat_152-%{+YYYY.MM.dd}"
    }
  }
}
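The conditional output picks an index per source and appends the event date via %{+YYYY.MM.dd} (logstash takes the date from @timestamp). A rough Python equivalent of that naming rule, with a hypothetical helper name:

```python
from datetime import date

def index_for(event, day):
    # mirrors the conditional output above: fields.name picks the index base,
    # and the date suffix corresponds to logstash's %{+YYYY.MM.dd}
    name = event["fields"]["name"]
    return "%s-%s" % (name, day.strftime("%Y.%m.%d"))

idx = index_for({"fields": {"name": "syslog_from_filebeat_150"}}, date(2020, 3, 22))
```

One index per source per day keeps retention simple: old days can be dropped index by index.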

Test the syntax:

root@redis-server:/etc/logstash/conf.d# pwd
/etc/logstash/conf.d
root@redis-server:/etc/logstash/conf.d# /usr/share/logstash/bin/logstash -f syslog_redis_to_es.conf -t

Start logstash:

root@redis-server:/etc/logstash/conf.d# systemctl restart logstash

Check whether the redis data has been consumed:

127.0.0.1:6379[1]> KEYS *
1) "syslog_152"
2) "syslog_150"
127.0.0.1:6379[1]> llen syslog_152
(integer) 3804
127.0.0.1:6379[1]> llen syslog_152
(integer) 3804
127.0.0.1:6379[1]> KEYS *
1) "syslog_152"
2) "syslog_150"
127.0.0.1:6379[1]> llen syslog_152
(integer) 3805
127.0.0.1:6379[1]> llen syslog_152
(integer) 3808
127.0.0.1:6379[1]> KEYS *
1) "syslog_152"
2) "syslog_150"
127.0.0.1:6379[1]> KEYS *
1) "syslog_152"
2) "syslog_150"
127.0.0.1:6379[1]> KEYS *
1) "syslog_152"
2) "syslog_150"
127.0.0.1:6379[1]> KEYS *
(empty list or set)   # gone
127.0.0.1:6379[1]> KEYS *
(empty list or set)
127.0.0.1:6379[1]> KEYS *
(empty list or set)

4. Writing logs to MySQL

Writing to a database persists important fields (status codes, client IPs, client browser
versions, and so on) for later use, such as monthly statistics.

4.1 Install MySQL

Install with apt:

root@redis-server:~# apt install mysql-server
root@redis-server:~# mysql
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.7.29-0ubuntu0.18.04.1 (Ubuntu)
...
mysql> ALTER USER user() IDENTIFIED BY 'stevenux';  # change the root password
Query OK, 0 rows affected (0.00 sec)
mysql> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.00 sec)
mysql>

4.2 Create the logstash user and grant privileges

4.2.1 Create the database and grant access

Grant the logstash user access so that data can be stored in the newly created log_data database:

...
mysql> CREATE DATABASE log_data CHARACTER SET utf8 COLLATE utf8_bin;
Query OK, 1 row affected (0.00 sec)
mysql> GRANT ALL PRIVILEGES ON log_data.* TO logstash@"%" IDENTIFIED BY 'stevenux';
Query OK, 0 rows affected, 1 warning (0.00 sec)
mysql> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.00 sec)
mysql>

4.2.2 Test the logstash user's database connection

root@redis-server:~# mysql -ulogstash -p
Enter password: (stevenux)
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 5
Server version: 5.7.29-0ubuntu0.18.04.1 (Ubuntu)
...
mysql> SHOW DATABASES;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| log_data           |
+--------------------+
2 rows in set (0.00 sec)
mysql>

4.3 Configure logstash to connect to the database

logstash connects to MySQL through MySQL's official JDBC driver, MySQL Connector/J.
JDBC (Java Database Connectivity) is a Java API for executing SQL statements; it
provides uniform access to many relational databases and consists of a set of
classes and interfaces written in Java.
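The key idea behind JDBC (and database APIs in general) is the parameterized statement: SQL with '?' placeholders that get bound positionally. A rough stand-in using Python's stdlib sqlite3 (only an analogy; the actual setup below uses Connector/J against MySQL):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE log (host TEXT, message TEXT)")

# '?' placeholders are filled positionally, in the same spirit as the
# logstash-output-jdbc "statement" array configured later in this section
conn.execute("INSERT INTO log (host, message) VALUES (?, ?)",
             ("tomcat-server-node1", "hello"))
rows = conn.execute("SELECT host FROM log").fetchall()
```

Placeholders also sidestep SQL injection, which matters when the values come straight out of log lines.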

4.3.1 Download the driver jar

root@redis-server:/usr/local/src# pwd
/usr/local/src
root@redis-server:/usr/local/src# wget https://downloads.mysql.com/archives/get/p/3/file/mysql-connector-java-5.1.42.zip

4.3.2 Install the jar for logstash

# unzip the package
root@redis-server:/usr/local/src# unzip mysql-connector-java-5.1.42.zip
root@redis-server:/usr/local/src# ll mysql-connector-java-5.1.42
total 1468
drwxr-xr-x 4 root root   4096 Apr 17  2017 ./
drwxr-xr-x 5 root root   4096 Mar 22 15:05 ../
-rw-r--r-- 1 root root  91463 Apr 17  2017 build.xml
-rw-r--r-- 1 root root 244278 Apr 17  2017 CHANGES
-rw-r--r-- 1 root root  18122 Apr 17  2017 COPYING
drwxr-xr-x 2 root root   4096 Apr 17  2017 docs/
-rw-r--r-- 1 root root 996444 Apr 17  2017 mysql-connector-java-5.1.42-bin.jar
-rw-r--r-- 1 root root  61407 Apr 17  2017 README
-rw-r--r-- 1 root root  63658 Apr 17  2017 README.txt
drwxr-xr-x 8 root root   4096 Apr 17  2017 src/

# create the directory that logstash expects
root@redis-server:/usr/local/src# mkdir -pv /usr/share/logstash/vendor/jar/jdbc
mkdir: created directory '/usr/share/logstash/vendor/jar'
mkdir: created directory '/usr/share/logstash/vendor/jar/jdbc'

# copy the jar over
root@redis-server:/usr/local/src# cp mysql-connector-java-5.1.42/mysql-connector-java-5.1.42-bin.jar /usr/share/logstash/vendor/jar/jdbc/

# fix the ownership
root@redis-server:/usr/local/src# chown logstash.logstash /usr/share/logstash/vendor/jar -R
root@redis-server:/usr/local/src# ll /usr/share/logstash/vendor/jar/
total 12
drwxr-xr-x 3 logstash logstash 4096 Mar 22 15:07 ./
drwxrwxr-x 5 logstash logstash 4096 Mar 22 15:07 ../
drwxr-xr-x 2 logstash logstash 4096 Mar 22 15:09 jdbc/
root@redis-server:/usr/local/src# ll /usr/share/logstash/vendor/jar/jdbc/
total 984
drwxr-xr-x 2 logstash logstash   4096 Mar 22 15:09 ./
drwxr-xr-x 3 logstash logstash   4096 Mar 22 15:07 ../
-rw-r--r-- 1 logstash logstash 996444 Mar 22 15:09 mysql-connector-java-5.1.42-bin.jar

4.4 Configure the logstash output plugin

4.4.1 Configure a gem source

logstash's SQL output plugin is logstash-output-jdbc, which is written in shell
and ruby, so it needs ruby's package manager gem and a gem source.

Foreign gem sources are slow and unreliable to reach from inside China, and installs
often fail, so for a while many people used Taobao's gem source https://ruby.taobao.org/.
It still works but is no longer maintained; see its official notice.

root@redis-server:/usr/local/src# snap install ruby
root@redis-server:/usr/local/src# apt install gem

# change the source
root@redis-server:/usr/local/src# gem sources --add https://gems.ruby-china.com/ --remove https://rubygems.org/
https://gems.ruby-china.com/ added to sources
https://rubygems.org/ removed from sources

# check the sources
root@redis-server:/usr/local/src# gem sources -l
*** CURRENT SOURCES ***

https://gems.ruby-china.com/   # confirm ruby-china is the only source

4.4.2 Install and configure the logstash-output-jdbc plugin

This plugin lets logstash write its data to a SQL database over JDBC. Install it with
logstash's bundled /usr/share/logstash/bin/logstash-plugin tool.

# list the installed plugins
root@redis-server:~# /usr/share/logstash/bin/logstash-plugin list
...
logstash-codec-avro
logstash-codec-cef
logstash-codec-collectd
logstash-codec-dots
logstash-codec-edn
logstash-codec-edn_lines
logstash-codec-es_bulk
logstash-codec-fluent
logstash-codec-graphite
logstash-codec-json
...

# install logstash-output-jdbc
root@redis-server:~# /usr/share/logstash/bin/logstash-plugin install logstash-output-jdbc
...
Validating logstash-output-jdbc
Installing logstash-output-jdbc
Installation successful  # done

4.5 Create the table structure in the database in advance

Collect the clientip, status, AgentVersion, method, and access-time fields from
the tomcat access log. The time column uses the system time.

root@redis-server:~# mysql -ulogstash -p
Enter password:
...
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| log_data           |
+--------------------+
2 rows in set (0.00 sec)
mysql> SELECT CURRENT_TIMESTAMP;
+---------------------+
| CURRENT_TIMESTAMP   |
+---------------------+
| 2020-03-22 16:36:33 |
+---------------------+
1 row in set (0.00 sec)
mysql> USE log_data;
Database changed
mysql> CREATE TABLE tom_log (host varchar(128), status int(32), clientip varchar(50), AgentVersion varchar(512), time timestamp default current_timestamp);
Query OK, 0 rows affected (0.02 sec)
mysql> show tables;
+--------------------+
| Tables_in_log_data |
+--------------------+
| tom_log            |
+--------------------+
1 row in set (0.00 sec)
mysql> desc tom_log;
+--------------+--------------+------+-----+-------------------+-------+
| Field        | Type         | Null | Key | Default           | Extra |
+--------------+--------------+------+-----+-------------------+-------+
| host         | varchar(128) | YES  |     | NULL              |       |
| status       | int(32)      | YES  |     | NULL              |       |
| clientip     | varchar(50)  | YES  |     | NULL              |       |
| AgentVersion | varchar(512) | YES  |     | NULL              |       |
| time         | timestamp    | NO   |     | CURRENT_TIMESTAMP |       |  # time column
+--------------+--------------+------+-----+-------------------+-------+
5 rows in set (0.02 sec)
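The event fields must be bound in exactly the order the INSERT statement lists its columns; the time column is left to its DEFAULT CURRENT_TIMESTAMP. A small sketch of that mapping (the function name is hypothetical, for illustration only):

```python
# order must match the column list of the INSERT used later:
# INSERT INTO tom_log (host, status, clientip, AgentVersion, ...)
COLUMNS = ("host", "status", "clientip", "AgentVersion")

def to_params(event):
    # pulls the values a JDBC-style output would bind positionally;
    # missing fields become NULL (None)
    return tuple(event.get(c) for c in COLUMNS)

params = to_params({"host": "tomcat-server-node2", "status": 200,
                    "clientip": "192.168.100.146", "AgentVersion": "curl/7.58.0"})
```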

4.6 Switch from syslog to collecting the tomcat access log

4.6.1 Make sure the tomcat log format is JSON

root@tomcat-server-node1:~# cat /usr/local/tomcat/conf/server.xml
...
<Server>
...
  <Service>
  ...
    <Engine>
    ...
      <Host>
      ...
        <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
               prefix="tomcat_access_log" suffix=".log"
               pattern="{&quot;clientip&quot;:&quot;%h&quot;,&quot;ClientUser&quot;:&quot;%l&quot;,&quot;authenticated&quot;:&quot;%u&quot;,&quot;AccessTime&quot;:&quot;%t&quot;,&quot;method&quot;:&quot;%r&quot;,&quot;status&quot;:&quot;%s&quot;,&quot;SendBytes&quot;:&quot;%b&quot;,&quot;Query?string&quot;:&quot;%q&quot;,&quot;partner&quot;:&quot;%{Referer}i&quot;,&quot;AgentVersion&quot;:&quot;%{User-Agent}i&quot;}" />
      </Host>
    </Engine>
  </Service>
</Server>
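With that Valve pattern, each access-log line is a JSON object whose values are all strings. A quick sketch of parsing one line (the line below is a constructed sample with plausible values, not copied from a real log):

```python
import json

# a sample line in the shape produced by the AccessLogValve pattern above
line = ('{"clientip":"192.168.100.146","ClientUser":"-","authenticated":"-",'
        '"AccessTime":"[22/Mar/2020:21:05:42 +0800]","method":"GET / HTTP/1.1",'
        '"status":"200","SendBytes":"11156","Query?string":"","partner":"-",'
        '"AgentVersion":"curl/7.58.0"}')

entry = json.loads(line)
status = int(entry["status"])  # the valve emits status as a string; cast before storing as int(32)
```

Keeping the valve output as strict JSON is what lets logstash (or anything else downstream) pick fields out without grok patterns.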

4.6.2 filebeat configuration

tomcat-server-node1

root@tomcat-server-node1:~# cat /etc/filebeat/filebeat.yml
...
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/syslog
  document_type: system-log
  exclude_lines: ['^DBG']
  #include_lines: ['^ERR', '^WARN']
  fields:
    name: syslog_from_filebeat_150  # custom field
- type: log
  enabled: true
  paths:
    - /usr/local/tomcat/logs/tomcat_access_log.2020-03-22.log
  document_type: tomcat-log
  exclude_lines: ['^DBG']
  fields:
    name: tom_from_filebeat_150

# logstash output
output.logstash:
  hosts: ["192.168.100.150:5044"]

tomcat-server-node2

root@tomcat-server-node2:~# cat /etc/filebeat/filebeat.yml
...
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/syslog
  document_type: system-log
  exclude_lines: ['^DBG']
  fields:
    name: syslog_from_filebeat_152  # custom field
- type: log
  enabled: true
  paths:
    - /usr/local/tomcat/logs/tomcat_access_log.2020-03-22.log
  document_type: tomcat-log
  exclude_lines: ['^DBG']
  fields:
    name: tom_from_filebeat_152

# logstash output
output.logstash:
  hosts: ["192.168.100.152:5044"]

4.6.3 logstash configuration

tomcat-server-node1

root@tomcat-server-node1:~# cat /etc/logstash/conf.d/tom_from_filebeat.conf
input {
  beats {
    host => "192.168.100.150"
    port => "5044"
  }
}
output {
  if [fields][name] == "tom_from_filebeat_150" {
    redis {
      host => "192.168.100.154"
      port => "6379"
      db   => "1"
      key  => "tomlog_150"
      data_type => "list"
      password  => "stevenux"
    }
  }
}

tomcat-server-node2

root@tomcat-server-node2:~# cat /etc/logstash/conf.d/tomlog_from_filebeat.conf
input {
  beats {
    host => "192.168.100.152"
    port => "5044"
  }
}
output {
  if [fields][name] == "tom_from_filebeat_152" {
    redis {
      host => "192.168.100.154"
      port => "6379"
      db   => "1"
      key  => "tomlog_152"
      data_type => "list"
      password  => "stevenux"
    }
  }
}

4.6.4 Check the redis data

root@redis-server:/etc/logstash/conf.d# redis-cli
127.0.0.1:6379> auth stevenux
OK
127.0.0.1:6379> select 1
OK
127.0.0.1:6379[1]> KEYS *
(empty list or set)
127.0.0.1:6379[1]> KEYS *
(empty list or set)
127.0.0.1:6379[1]> KEYS *
(empty list or set)
127.0.0.1:6379[1]> KEYS *
1) "tomlog_150"
127.0.0.1:6379[1]> KEYS *
1) "tomlog_150"
127.0.0.1:6379[1]> KEYS *
1) "tomlog_150"
127.0.0.1:6379[1]> KEYS *
1) "tomlog_150"
127.0.0.1:6379[1]> KEYS *
1) "tomlog_150"
.....
127.0.0.1:6379[1]>
127.0.0.1:6379[1]>
127.0.0.1:6379[1]>
127.0.0.1:6379[1]> KEYS *
1) "tomlog_150"
2) "tomlog_152"
127.0.0.1:6379[1]>

4.7 Configure logstash output to MySQL

Log data flow:
redis-server:redis --> redis-server:logstash --> redis-server:MySQL

An example logstash configuration for this plugin:

input {
  stdin { }
}
output {
  jdbc {
    driver_class => "com.mysql.jdbc.Driver"
    connection_string => "jdbc:mysql://HOSTNAME/DATABASE?user=USER&password=PASSWORD"
    statement => [ "INSERT INTO log (host, timestamp, message) VALUES(?, CAST(? AS timestamp), ?)", "host", "@timestamp", "message" ]
  }
}

Fixing a database connection error:

~# mysql -ulogstash -h192.168.100.154 -pstevenux
mysql: [Warning] Using a password on the command line interface can be insecure.
ERROR 2003 (HY000): Can't connect to MySQL server on '192.168.100.154' (111)

# if the error above appears, change the bind address so MySQL listens on all addresses
~# vim /etc/mysql/mysql.conf.d/mysqld.cnf
...
bind-address            = 0.0.0.0
...

# restart
root@redis-server:/etc/logstash/conf.d# systemctl restart mysql

# try connecting again
root@redis-server:/etc/logstash/conf.d# mysql -ulogstash -h192.168.100.154 -p
Enter password:
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| log_data           |
+--------------------+
2 rows in set (0.01 sec)

Make sure the plugin is installed in logstash:

root@redis-server:/etc/logstash/conf.d# /usr/share/logstash/bin/logstash-plugin list | grep jdbc
...
logstash-integration-jdbc
 ├── logstash-input-jdbc
 ├── logstash-filter-jdbc_streaming
 └── logstash-filter-jdbc_static
logstash-output-jdbc

root@redis-server:~# ll /usr/share/logstash/vendor/jar/jdbc/mysql-connector-java-5.1.42-bin.jar
-rw-r--r-- 1 logstash logstash 996444 Mar 22 15:09 /usr/share/logstash/vendor/jar/jdbc/mysql-connector-java-5.1.42-bin.jar

Configure logstash:

root@redis-server:~# cat /etc/logstash/conf.d/tomlog_from_redis_to_mysql.conf
input {
  redis {
    host => "192.168.100.154"
    port => "6379"
    db   => "1"
    data_type => "list"
    key       => "tomlog_150"
    password  => "stevenux"
  }
  redis {
    host => "192.168.100.154"
    port => "6379"
    db   => "1"
    data_type => "list"
    key       => "tomlog_152"
    password  => "stevenux"
  }
}
output {
  jdbc {
    driver_class => "com.mysql.jdbc.Driver"
    connection_string => "jdbc:mysql://192.168.100.154/log_data?user=logstash&password=stevenux"
    statement => [ "INSERT INTO tom_log (host, status, clientip, AgentVersion, time) VALUES(?, ?, ?, ?, ?)", "host", "status", "clientip", "AgentVersion", "time" ]
  }
}

Check the syntax:

root@redis-server:~# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/tomlog_from_redis_to_mysql.conf -t
...
[INFO ] 2020-03-22 17:13:34.157 [LogStash::Runner] Reflections - Reflections took 41 ms to scan 1 urls, producing 20 keys and 40 values
Configuration OK
[INFO ] 2020-03-22 17:13:34.593 [LogStash::Runner] runner - Using config.test_and_exit mode. Config Validation Result: OK. Exiting Logstash

4.8 Testing

4.8.1 Send some requests

root@es-server-node2:~# curl 192.168.100.152:8080
root@es-server-node2:~# curl 192.168.100.152:8080/not_exists
root@es-server-node3:~# curl 192.168.100.152:8080
root@es-server-node3:~# curl 192.168.100.152:8080/not_exists

4.8.2 Check the data

root@redis-server:/etc/logstash/conf.d# mysql -ulogstash -h192.168.100.154 -p
Enter password:
...
mysql> SHOW DATABASES;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| log_data           |
+--------------------+
2 rows in set (0.00 sec)
mysql> USE log_data;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
Database changed
mysql> SHOW TABLES;
+--------------------+
| Tables_in_log_data |
+--------------------+
| tom_log            |
+--------------------+
1 row in set (0.00 sec)
mysql> SELECT COUNT(*) FROM tom_log;
+----------+
| COUNT(*) |
+----------+
|     2072 |
+----------+
1 row in set (0.00 sec)
mysql> SELECT * FROM tom_log ORDER BY time DESC LIMIT 5 \G
*************************** 1. row ***************************
        host: tomcat-server-node2
      status: 200
    clientip: 192.168.100.146
AgentVersion: curl/7.58.0
        time: 2020-03-22 21:05:42
*************************** 2. row ***************************
        host: tomcat-server-node2
      status: 200
    clientip: 192.168.100.152
AgentVersion: curl/7.58.0
        time: 2020-03-22 21:05:42
*************************** 3. row ***************************
        host: tomcat-server-node2
      status: 200
    clientip: 192.168.100.150
AgentVersion: curl/7.58.0
        time: 2020-03-22 21:05:42
*************************** 4. row ***************************
        host: tomcat-server-node2
      status: 200
    clientip: 192.168.100.144
AgentVersion: curl/7.58.0
        time: 2020-03-22 21:05:42
*************************** 5. row ***************************
        host: tomcat-server-node2
      status: 200
    clientip: 192.168.100.150
AgentVersion: curl/7.58.0
        time: 2020-03-22 21:05:42
5 rows in set (0.01 sec)

4.8.3 Logs of the logstash instance writing to MySQL

root@redis-server:/etc/logstash/conf.d# tail /var/log/logstash/logstash-plain.log -n12
[2020-03-22T20:48:54,555][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"7.6.1"}
[2020-03-22T20:48:56,366][INFO ][org.reflections.Reflections] Reflections took 33 ms to scan 1 urls, producing 20 keys and 40 values
[2020-03-22T20:48:56,825][INFO ][logstash.outputs.jdbc    ][main] JDBC - Starting up
[2020-03-22T20:48:56,892][INFO ][com.zaxxer.hikari.HikariDataSource][main] HikariPool-1 - Starting...
[2020-03-22T20:48:57,242][INFO ][com.zaxxer.hikari.HikariDataSource][main] HikariPool-1 - Start completed.
[2020-03-22T20:48:57,326][WARN ][org.logstash.instrument.metrics.gauge.LazyDelegatingGauge][main] A gauge metric of an unknown type (org.jruby.RubyArray) has been create for key: cluster_uuids. This may result in invalid serialization.  It is recommended to log an issue to the responsible developer/development team.
[2020-03-22T20:48:57,331][INFO ][logstash.javapipeline    ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>250, "pipeline.sources"=>["/etc/logstash/conf.d/tomlog_from_redis_to_mysql.conf"], :thread=>"#<Thread:0x752a3c2 run>"}
[2020-03-22T20:48:58,173][INFO ][logstash.inputs.redis    ][main] Registering Redis {:identity=>"redis://<password>@192.168.100.154:6379/1 list:tomlog_150"}
[2020-03-22T20:48:58,178][INFO ][logstash.inputs.redis    ][main] Registering Redis {:identity=>"redis://<password>@192.168.100.154:6379/1 list:tomlog_152"}
[2020-03-22T20:48:58,193][INFO ][logstash.javapipeline    ][main] Pipeline started {"pipeline.id"=>"main"}
[2020-03-22T20:48:58,313][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2020-03-22T20:48:58,813][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}

5. Proxying Kibana with HAProxy

5.1 Deploy Kibana

5.1.1 es-server-node1

# install
root@es-server-node1:/usr/local/src# dpkg -i kibana-7.6.1-amd64.deb
Selecting previously unselected package kibana.
(Reading database ... 85899 files and directories currently installed.)
Preparing to unpack kibana-7.6.1-amd64.deb ...
Unpacking kibana (7.6.1) ...
Setting up kibana (7.6.1) ...
Processing triggers for ureadahead (0.100.0-21) ...
Processing triggers for systemd (237-3ubuntu10.24) ...

# configuration file
root@es-server-node1:/usr/local/src# grep "^[a-Z]" /etc/kibana/kibana.yml
server.port: 5601
server.host: "0.0.0.0"
server.name: "kibana-demo-node1"
elasticsearch.hosts: ["http://192.168.100.144:9200"]

5.1.2 es-server-node2

# install
root@es-server-node1:/usr/local/src# dpkg -i kibana-7.6.1-amd64.deb
Selecting previously unselected package kibana.
(Reading database ... 85899 files and directories currently installed.)
Preparing to unpack kibana-7.6.1-amd64.deb ...
Unpacking kibana (7.6.1) ...
Setting up kibana (7.6.1) ...
Processing triggers for ureadahead (0.100.0-21) ...
Processing triggers for systemd (237-3ubuntu10.24) ...

# configuration file
root@es-server-node1:/usr/local/src# grep "^[a-Z]" /etc/kibana/kibana.yml
server.port: 5601
server.host: "0.0.0.0"
server.name: "kibana-demo-node2"
elasticsearch.hosts: ["http://192.168.100.144:9200"]

5.1.3 es-server-node3

Distribute the config file of the previously installed kibana to node1 and node2:

root@es-server-node3:~# scp  /etc/kibana/kibana.yml  192.168.100.142:/etc/kibana/
root@es-server-node3:~# scp  /etc/kibana/kibana.yml  192.168.100.144:/etc/kibana/

5.2 HAProxy and Keepalived configuration

5.2.1 Keepalived configuration

tomcat-server-node1

~# apt install  keepalived -y
~# vim /etc/keepalived/keepalived.conf
root@tomcat-server-node1:~# cat /etc/keepalived/keepalived.conf
global_defs {
   notification_email {
     root@localhost
   }
   notification_email_from keepalived@localhost
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id ha1.example.com
   vrrp_skip_check_adv_addr
   vrrp_strict
   vrrp_garp_interval 0
   vrrp_gna_interval 0
   vrrp_mcast_group4 224.0.0.18
   #vrrp_iptables
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 80
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass stevenux
    }
    virtual_ipaddress {
        192.168.100.200 dev eth0 label eth0:0
    }
}
root@tomcat-server-node1:~# systemctl start keepalived
root@tomcat-server-node1:~# ip addr show eth0 | grep inet
    inet 192.168.100.150/24 brd 192.168.100.255 scope global eth0
    inet 192.168.100.200/32 scope global eth0:0
    inet6 fe80::20c:29ff:fe64:9fdf/64 scope link

tomcat-server-node2

~# apt install  keepalived -y
~# vim /etc/keepalived/keepalived.conf
root@tomcat-server-node2:/etc/logstash/conf.d# cat /etc/keepalived/keepalived.conf
global_defs {
   notification_email {
     root@localhost
   }
   notification_email_from keepalived@localhost
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id ha1.example.com
   vrrp_skip_check_adv_addr
   vrrp_strict
   vrrp_garp_interval 0
   vrrp_gna_interval 0
   vrrp_mcast_group4 224.0.0.18
   #vrrp_iptables
}
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 80
    priority 80
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass stevenux
    }
    virtual_ipaddress {
        192.168.100.200 dev eth0 label eth0:0
    }
}
root@tomcat-server-node2:~# systemctl start keepalived
root@tomcat-server-node2:/etc/logstash/conf.d# ip addr show eth0 | grep eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    inet 192.168.100.152/24 brd 192.168.100.255 scope global eth0
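As the output shows, node2 does not hold the VIP: with the same virtual_router_id, the node with the higher priority (100 vs 80) wins the VRRP election, and the VIP moves only if that node fails. A toy model of the election (function and field names are made up for illustration):

```python
def vip_holder(nodes):
    # highest-priority live node holds the VIP; None if nobody is alive
    alive = [n for n in nodes if n["alive"]]
    return max(alive, key=lambda n: n["priority"])["name"] if alive else None

nodes = [{"name": "tomcat-server-node1", "priority": 100, "alive": True},
         {"name": "tomcat-server-node2", "priority": 80,  "alive": True}]
holder = vip_holder(nodes)

nodes[0]["alive"] = False  # simulate the MASTER going down
failover = vip_holder(nodes)
```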

5.2.2 HAProxy configuration

tomcat-server-node1

root@tomcat-server-node1:~# cat /etc/haproxy/haproxy.cfg
global
    log /dev/log local0
    log /dev/log local1 notice
    chroot /var/lib/haproxy
    stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners
    stats timeout 30s
    user haproxy
    group haproxy
    daemon
    log 127.0.0.1 local6 info
    ca-base /etc/ssl/certs
    crt-base /etc/ssl/private
    ssl-default-bind-ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:RSA+AESGCM:RSA+AES:!aNULL:!MD5:!DSS
    ssl-default-bind-options no-sslv3

defaults
    log     global
    mode    http
    option  httplog
    option  dontlognull
    timeout connect 5000
    timeout client  50000
    timeout server  50000
    errorfile 400 /etc/haproxy/errors/400.http
    errorfile 403 /etc/haproxy/errors/403.http
    errorfile 408 /etc/haproxy/errors/408.http
    errorfile 500 /etc/haproxy/errors/500.http
    errorfile 502 /etc/haproxy/errors/502.http
    errorfile 503 /etc/haproxy/errors/503.http
    errorfile 504 /etc/haproxy/errors/504.http

listen stats
    mode http
    bind 0.0.0.0:9999
    stats enable
    log global
    stats uri     /haproxy-status
    stats auth    haadmin:stevenux

listen elasticsearch_cluster
    mode http
    balance roundrobin
    bind 192.168.100.200:80
    server 192.168.100.142 192.168.100.142:5601 check inter 3s fall 3 rise 5
    server 192.168.100.144 192.168.100.144:5601 check inter 3s fall 3 rise 5
    server 192.168.100.146 192.168.100.146:5601 check inter 3s fall 3 rise 5

root@tomcat-server-node1:~# systemctl restart haproxy

tomcat-server-node2

root@tomcat-server-node2:~# cat /etc/haproxy/haproxy.cfg
global
    log /dev/log    local0
    log /dev/log    local1 notice
    chroot /var/lib/haproxy
    stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners
    stats timeout 30s
    user haproxy
    group haproxy
    daemon
    log 127.0.0.1 local6 info
    ca-base /etc/ssl/certs
    crt-base /etc/ssl/private
    ssl-default-bind-ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:RSA+AESGCM:RSA+AES:!aNULL:!MD5:!DSS
    ssl-default-bind-options no-sslv3

defaults
    log     global
    mode    http
    option  httplog
    option  dontlognull
    timeout connect 5000
    timeout client  50000
    timeout server  50000
    errorfile 400 /etc/haproxy/errors/400.http
    errorfile 403 /etc/haproxy/errors/403.http
    errorfile 408 /etc/haproxy/errors/408.http
    errorfile 500 /etc/haproxy/errors/500.http
    errorfile 502 /etc/haproxy/errors/502.http
    errorfile 503 /etc/haproxy/errors/503.http
    errorfile 504 /etc/haproxy/errors/504.http

listen stats
    mode http
    bind 0.0.0.0:9999
    stats enable
    log global
    stats uri     /haproxy-status
    stats auth    haadmin:stevenux

listen elasticsearch_cluster
    mode http
    balance roundrobin
    bind 192.168.100.200:80
    server 192.168.100.142 192.168.100.142:5601 check inter 3s fall 3 rise 5
    server 192.168.100.144 192.168.100.144:5601 check inter 3s fall 3 rise 5
    server 192.168.100.146 192.168.100.146:5601 check inter 3s fall 3 rise 5

root@tomcat-server-node2:~# systemctl restart haproxy.service

6. Proxying Kibana with nginx

Use nginx as a reverse proxy in front of Kibana and add login authentication, so that unauthorized visitors cannot freely access the Kibana pages.

6.1 nginx Configuration

root@tomcat-server-node1:~# cat /apps/nginx/conf/nginx.conf
worker_processes  1;

events {
    worker_connections  1024;
}

http {
    include       mime.types;
    default_type  application/octet-stream;

    log_format  access_json  '{"@timestamp":"$time_iso8601",'
                             '"host":"$server_addr",'
                             '"clientip":"$remote_addr",'
                             '"size":$body_bytes_sent,'
                             '"responsetime":$request_time,'
                             '"upstreamtime":"$upstream_response_time",'
                             '"upstreamhost":"$upstream_addr",'
                             '"http_host":"$host",'
                             '"url":"$uri",'
                             '"domain":"$host",'
                             '"xff":"$http_x_forwarded_for",'
                             '"referer":"$http_referer",'
                             '"status":"$status"}';
    access_log  logs/access.log  access_json;

    sendfile        on;
    keepalive_timeout  65;

    upstream kibana_server {
        server  192.168.100.142:5601 weight=1 max_fails=3 fail_timeout=60;
    }

    server {
        listen 80;
        server_name 192.168.100.150;

        location / {
            proxy_pass http://kibana_server;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection 'upgrade';
            proxy_set_header Host $host;
            proxy_cache_bypass $http_upgrade;
        }
    }
}

root@tomcat-server-node1:~# nginx -t
nginx: the configuration file /apps/nginx/conf/nginx.conf syntax is ok
nginx: configuration file /apps/nginx/conf/nginx.conf test is successful
root@tomcat-server-node1:~# nginx -s reload
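Because `access_log` uses the `access_json` format defined above, every access-log line is a single JSON object that logstash (or any JSON parser) can consume without a grok pattern. A quick check against a sample line (the field values here are invented for illustration):

```python
import json

# A sample line in the access_json format defined above; the
# values are made up for illustration.
line = ('{"@timestamp":"2020-04-01T12:00:00+08:00","host":"192.168.100.150",'
        '"clientip":"192.168.100.1","size":612,"responsetime":0.002,'
        '"upstreamtime":"0.002","upstreamhost":"192.168.100.142:5601",'
        '"http_host":"192.168.100.150","url":"/app/kibana",'
        '"domain":"192.168.100.150","xff":"-","referer":"-","status":"200"}')

entry = json.loads(line)
# Each field is now directly addressable, ready for filtering or indexing.
print(entry["clientip"], entry["status"])
```

Note that `size` and `responsetime` are emitted unquoted in the log_format, so they parse as numbers rather than strings.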

6.3 Login Authentication Configuration

6.3.1 Create the Authentication File

# CentOS
~# yum install httpd-tools

# Ubuntu
root@tomcat-server-node1:~# apt install apache2-utils

root@tomcat-server-node1:~# htpasswd -bc /apps/nginx/conf/htpasswd.users stevenux stevenux
Adding password for user stevenux
root@tomcat-server-node1:~# cat /apps/nginx/conf/htpasswd.users
stevenux:$apr1$xYOszdHs$b2GX4zCBNv6tuNj427WoT1
root@tomcat-server-node1:~# htpasswd -b  /apps/nginx/conf/htpasswd.users jack stevenux
Adding password for user jack
root@tomcat-server-node1:~# cat /apps/nginx/conf/htpasswd.users
stevenux:$apr1$xYOszdHs$b2GX4zCBNv6tuNj427WoT1
jack:$apr1$hfqwuymq$J4G86iNOyjUA08yMPhVU8.
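Each line htpasswd writes has the form `user:$apr1$salt$hash`, where `$apr1$` marks Apache's MD5-based password scheme, followed by the random salt and the digest. A small sketch splitting one of the entries above into its fields:

```python
# Split an htpasswd entry (copied from the file above) into its parts.
entry = "stevenux:$apr1$xYOszdHs$b2GX4zCBNv6tuNj427WoT1"

user, pwfield = entry.split(":", 1)
_, scheme, salt, digest = pwfield.split("$")

print(user)    # stevenux
print(scheme)  # apr1 (Apache MD5-based scheme)
print(salt)    # xYOszdHs
```

Because the salt is random, running `htpasswd` twice with the same password yields different hashes; nginx re-hashes the submitted password with the stored salt when it checks a login.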

6.3.2 nginx Configuration

root@tomcat-server-node1:~# vim /apps/nginx/conf/nginx.conf
...
server {
    listen 80;
    server_name 192.168.100.150;
    auth_basic "Restricted Access";                          # add these two lines
    auth_basic_user_file /apps/nginx/conf/htpasswd.users;
    location / {
        proxy_pass http://kibana_server;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
...

root@tomcat-server-node1:~# nginx -t
nginx: the configuration file /apps/nginx/conf/nginx.conf syntax is ok
nginx: configuration file /apps/nginx/conf/nginx.conf test is successful
root@tomcat-server-node1:~# nginx -s reload
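With `auth_basic` enabled, nginx returns 401 until the client supplies an `Authorization: Basic <base64(user:password)>` header, which browsers build from the login prompt. A sketch of how a client constructs that header (credentials taken from the htpasswd example above; no request is actually sent):

```python
import base64

# Build the HTTP Basic auth header that nginx's auth_basic verifies
# against /apps/nginx/conf/htpasswd.users.
user, password = "stevenux", "stevenux"
token = base64.b64encode(f"{user}:{password}".encode()).decode()
auth_header = f"Basic {token}"

print(auth_header)
```

Base64 is an encoding, not encryption, so in production this setup should sit behind HTTPS; the `ca-base`/`crt-base` and `ssl-default-bind-*` settings in the HAProxy config above exist for the same reason.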
