BlueXIII's Blog

Passionate about technology, always learning


Official Site & Download

https://github.com/ginuerzh/gost

Jump Server

  • Use 10.80.8.237
  • Reuse Nginx's port 8080

Server-side Gost

./gost -L "ws://172.17.0.1:31314?path=/gost"

Server-side Nginx Forwarding Configuration

location /gost {
    proxy_redirect off;
    proxy_pass http://172.17.0.1:31314;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}

Local Docker EasyConnect

docker run --name ecweifang --device /dev/net/tun --cap-add NET_ADMIN -itd -p 127.0.0.1:7011:1080 -e EC_VER=7.6.3 -e CLI_OPTS="-d https://xx.xx.xx.xx -u username -p password" hagb/docker-easyconnect:cli

Local Gost

./gost -L=:7013 -F="ws://10.80.8.237:8080?path=/gost"

Local Proxifier Configuration

  • gost process + 10.80.8.237 subnet -> 127.0.0.1:7011
  • any process + 10.80.8.* subnet -> 127.0.0.1:7013

Official Site & Download

Jump Server

  • Use 10.80.8.237
  • Reuse Nginx's port 8080
  • Method: vmess over WebSocket

V2ray Server Setup

  • v2ray install path: /opt/v2ray
  • nginx config file path: /dubhe/nginx/conf

Configure the server-side config.json (vi config.json):

{
  "log": {
    "access": "/opt/v2ray/access.log",
    "error": "/opt/v2ray/error.log",
    "loglevel": "debug"
  },
  "inbounds": [{
    "port": 31313,
    "listen": "172.17.0.1",
    "protocol": "vmess",
    "settings": {
      "clients": [{
        "id": "7255a312-be4b-9315-65c9-123412341234",
        "alterId": 64
      }]
    },
    "streamSettings": {
      "network": "ws",
      "wsSettings": {
        "path": "/ray/"
      }
    }
  }],
  "outbounds": [{
    "protocol": "freedom",
    "settings": {}
  }]
}

Start v2ray:

./v2ray --config config.json

Configure Nginx's default.conf:

location /ray {
    proxy_redirect off;
    proxy_pass http://172.17.0.1:31313;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}

Restart Nginx:

docker restart nginx

Start Local Docker EasyConnect

docker run --name ecweifang --device /dev/net/tun --cap-add NET_ADMIN -itd -p 127.0.0.1:7011:1080 -e EC_VER=7.6.3 -e CLI_OPTS="-d https://xx.xx.xx.xx -u username -p password" hagb/docker-easyconnect:cli

Occupies port 7011.

Local Proxifier Configuration

  • v2ray process + 10.80.8.237 subnet -> 127.0.0.1:7011
  • any process + 10.80.8.* subnet -> 127.0.0.1:7012

V2ray Client Configuration

vi weifang.json

{
  "inbounds": [
    {
      "listen": "127.0.0.1",
      "port": 7012,
      "protocol": "socks",
      "tag": "socksinbound",
      "settings": {
        "auth": "noauth",
        "udp": false,
        "ip": "127.0.0.1"
      }
    }
  ],
  "outbounds": [
    {
      "protocol": "vmess",
      "settings": {
        "vnext": [
          {
            "address": "10.80.8.237",
            "users": [
              {
                "id": "7255a312-be4b-9315-65c9-123412341234",
                "alterId": 64,
                "security": "auto",
                "level": 0
              }
            ],
            "port": 8080
          }
        ]
      },
      "streamSettings": {
        "wsSettings": {
          "path": "/ray/",
          "headers": {}
        },
        "tlsSettings": {
          "allowInsecure": false,
          "alpn": ["http/1.1"],
          "serverName": "",
          "allowInsecureCiphers": false
        },
        "security": "none",
        "network": "ws"
      }
    }
  ]
}

Start the local client:

./v2ray --config weifang.json

Occupies port 7012.

Official Site

https://docs.rancher.cn/docs/k3s/quick-start/_index

Online Installation

# server
curl -sfL http://rancher-mirror.cnrancher.com/k3s/k3s-install.sh | INSTALL_K3S_MIRROR=cn sh -
cat /var/lib/rancher/k3s/server/node-token

# node
curl -sfL http://rancher-mirror.cnrancher.com/k3s/k3s-install.sh | INSTALL_K3S_MIRROR=cn K3S_URL=https://10.193.2.8:6443 K3S_TOKEN=K103d30bcbca9e48b6c642e0f92af0d656e465527b5929345537ae086b7fa8ea2e4::server:9bc179e5cfb5b8c09eac21c9ce145a58 sh -

vi /etc/rancher/k3s/k3s.yaml

Registry Installation

yum install -y yum-utils
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install docker-ce docker-ce-cli containerd.io
service docker start
# simple registry (ephemeral storage)
docker run -d -p 5000:5000 --restart=always --name registry registry:2
# alternatively, with persistent storage on the host
docker run -d -p 5000:5000 -v /data/registry:/var/lib/registry registry:2

http://10.193.2.11:5000

Harbor Offline Installation

https://cloud.tencent.com/developer/article/1763874

References

Testing

cd /Users/bluexiii/Documents/code/evayinfo/agric/agric_pf
php artisan gatewayworker start

cd /Users/bluexiii/Documents/code/evayinfo/agric/agric_pf/storage/logs
tail -f laravel.log

cd /Users/bluexiii/Documents/code/playground/python-playground/socket_test
python full_duplex_client.py

Packet Assembly

Start and End Markers

Start marker: 1 byte, eb
End marker: 1 byte, d7

Control Field

Function code: 6 bits, 01 -> 000001
Length: 10 bits, 4+34=38 -> 0000100110

Binary: 00000100 00100110
Hex: 04 26
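
Putting the two fields together is plain bit concatenation; a minimal Python sketch of the arithmetic above (values taken from the login example, not code from the post):

# control field = 6-bit function code followed by 10-bit length
func_code = 0b000001            # function code 01
length = 4 + 34                 # sequence fields (4 bytes) + payload (34 bytes) = 38
control = (func_code << 10) | length
print(f"{control:016b}  {control:04x}")   # 0000010000100110  0426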

Send/Receive Sequence Numbers

Send sequence: 2 bytes, 00 01
Receive sequence: 2 bytes, 00 01

Response Packets

EB 08 0C 00 01 00 01 8A EC AA 56 3C 82 9C 34 D7
EB 30 08 00 00 00 00 61 D3 DB FC D7 (time: 61d3dd97 = 1641274775 = 2022-01-04 13:39:35)
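
The second reply carries a 4-byte big-endian Unix timestamp right before the end marker; a small sketch to decode it (frame layout as described above, assuming the standard struct/datetime modules):

import struct
from datetime import datetime

def parse_time_reply(frame_hex):
    frame = bytes.fromhex(frame_hex)
    # eb | control (2B) | send seq (2B) | recv seq (2B) | timestamp (4B) | d7
    (ts,) = struct.unpack(">I", frame[7:11])
    return ts, datetime.fromtimestamp(ts)    # local time; 13:39:35 in UTC+8

print(parse_time_reply("eb30080000000061d3dd97d7"))
# (1641274775, datetime.datetime(2022, 1, 4, 13, 39, 35))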

Test Packets

Login

Hardware version (2 bytes): 1 -> 00 01
Firmware version (4 bytes): 20 -> 00 00 00 14
Unique serial number (8 bytes): 8A EC AA 56 3C 82 9C 34
CCID (20 bytes): 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41

eb 04 26 00 01 00 01 00 01 00 00 00 14 8a ec aa 56 3c 82 9c 34 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 d7
eb080c000100018aecaa563c829c34d7
eb30080000000061d406a6d7
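
As a cross-check, the login frame above can be reproduced in a few lines of Python; this is a sketch rather than code from the post, with the field values listed above:

import struct

def build_frame(func_code, send_seq, recv_seq, payload):
    # control field = 6-bit function code | 10-bit length (sequence fields + payload)
    control = (func_code << 10) | (4 + len(payload))
    return b"\xeb" + struct.pack(">HHH", control, send_seq, recv_seq) + payload + b"\xd7"

# login payload: hw version (2B) + fw version (4B) + serial (8B) + CCID (20B)
payload = (struct.pack(">H", 1)
           + struct.pack(">I", 20)
           + bytes.fromhex("8aecaa563c829c34")
           + b"A" * 20)                      # CCID: 20 x 0x41

print(build_frame(0x01, 1, 1, payload).hex(" "))
# eb 04 26 00 01 00 01 00 01 00 00 00 14 8a ec aa 56 3c 82 9c 34 41 ... 41 d7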

Heartbeat

Function code: 6 bits, 04 -> 001000
Length: 10 bits, 4+0=4 -> 0000000100
Combined: 0010000000000100 -> 20 04

eb 20 04 00 01 00 01 d7

Reboot

Function code: 6 bits, 08 -> 000100
Length: 10 bits, 4+0=4 -> 0000000100
Combined: 0001000000000100 -> 10 04

eb 10 04 00 01 00 01 d7

LORA Sensor Data Upload (32 / 0x20)

Sensor ID (4 bytes): 45 6F 00 22
Timestamp (4 bytes): 61 d3 dd 97
Number of data entries (1 byte): 02
Measurement 1 ID (2 bytes): 00 01
Measurement 1 value (4 bytes): 00 00 00 01
Measurement 2 ID (2 bytes): 00 02
Measurement 2 value (4 bytes): 00 00 00 02

Function code: 6 bits, 32 -> 100000
Length: 10 bits, 4+21=25 -> 0000011001
Combined: 1000000000011001 -> 80 19

eb 80 19 00 01 00 01 45 6f 00 22 61 d3 dd 97 02 00 01 00 00 00 01 00 02 00 00 00 02 d7
eb 10 04 00 01 00 01 d7
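
The upload frame can be assembled the same way; a short sketch (field layout and sample values as listed above, not code from the post):

import struct

def build_lora_upload(sensor_id, timestamp, readings, send_seq=1, recv_seq=1):
    # payload: sensor ID (4B) + timestamp (4B) + entry count (1B) + [meas ID (2B) + value (4B)] per entry
    payload = struct.pack(">IIB", sensor_id, timestamp, len(readings))
    for meas_id, value in readings:
        payload += struct.pack(">HI", meas_id, value)
    control = (32 << 10) | (4 + len(payload))    # function code 32, 10-bit length 25
    return b"\xeb" + struct.pack(">HHH", control, send_seq, recv_seq) + payload + b"\xd7"

print(build_lora_upload(0x456F0022, 0x61D3DD97, [(1, 1), (2, 2)]).hex(" "))
# eb 80 19 00 01 00 01 45 6f 00 22 61 d3 dd 97 02 00 01 00 00 00 01 00 02 00 00 00 02 d7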

mongodb

influxdb

IoT protocol

IoT platform

Setuptools

Official Site

Tutorials

https://www.escapelife.site/posts/fc616494.html
https://pycharm.iswbm.com/
https://zhuanlan.zhihu.com/p/276461821

Build an egg Package

python setup.py bdist_egg
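
bdist_egg (and the install commands below) assume a setup.py at the project root; a minimal sketch with placeholder metadata, not the actual project file:

# setup.py -- minimal sketch; name, version and dependencies are placeholders
from setuptools import setup, find_packages

setup(
    name="example-project",
    version="0.1.0",
    packages=find_packages(),
    install_requires=[],          # runtime dependencies go here
)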

Installation

python setup.py install
python setup.py develop

pyinstaller

Official Site

Tutorials

References

scrapyd

Installation

pip install scrapyd

vi /etc/scrapyd/scrapyd.conf
[scrapyd]
eggs_dir = /data/scrapyd/eggs
logs_dir = /data/scrapyd/logs
items_dir = /data/scrapyd/items
jobs_to_keep = 100
dbs_dir = /data/scrapyd/dbs
max_proc = 0
max_proc_per_cpu = 10
finished_to_keep = 100
poll_interval = 5.0
bind_address = 0.0.0.0
http_port = 6800
debug = off
runner = scrapyd.runner
application = scrapyd.app.application
launcher = scrapyd.launcher.Launcher
webroot = scrapyd.website.Root

[services]
schedule.json = scrapyd.webservice.Schedule
cancel.json = scrapyd.webservice.Cancel
addversion.json = scrapyd.webservice.AddVersion
listprojects.json = scrapyd.webservice.ListProjects
listversions.json = scrapyd.webservice.ListVersions
listspiders.json = scrapyd.webservice.ListSpiders
delproject.json = scrapyd.webservice.DeleteProject
delversion.json = scrapyd.webservice.DeleteVersion
listjobs.json = scrapyd.webservice.ListJobs
daemonstatus.json = scrapyd.webservice.DaemonStatus


nohup scrapyd>scrapyd.log 2>&1 &

open http://10.211.55.101:6800

# install dependencies on the scrapyd server ahead of time
pip install -r requirements.txt

HTTP API

curl http://10.211.55.101:6800/daemonstatus.json
curl http://10.211.55.101:6800/addversion.json -F project=hydrabot -F version=1.0.0 -F egg=@hydrabot.egg
curl http://10.211.55.101:6800/schedule.json -d project=hydrabot -d spider=teacher -d task_id=1 -d entry_id=3070
curl http://10.211.55.101:6800/cancel.json -d project=hydrabot -d job=6487ec79947edab326d6db28a2d86S11e8247444

curl http://10.211.55.101:6800/listprojects.json
curl http://10.211.55.101:6800/listversions.json?project=hydrabot
curl http://10.211.55.101:6800/listspiders.json?project=hydrabot
curl http://10.211.55.101:6800/listjobs.json?project=hydrabot

curl http://10.211.55.101:6800/delversion.json -d project=hydrabot -d version=1.0.0
curl http://10.211.55.101:6800/delproject.json -d project=hydrabot
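
The same endpoints can be driven from Python; a minimal sketch using requests (host, project, and spider names follow the curl examples above):

import requests

SCRAPYD = "http://10.211.55.101:6800"

# daemon health check
print(requests.get(f"{SCRAPYD}/daemonstatus.json").json())

# schedule a crawl; extra form fields are passed to the spider as arguments
resp = requests.post(f"{SCRAPYD}/schedule.json",
                     data={"project": "hydrabot", "spider": "teacher",
                           "task_id": 1, "entry_id": 3070})
print(resp.json())   # contains the jobid used by cancel.json / listjobs.json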

scrapyd-client

Local Installation

pip3 install scrapyd-deploy
pip3 install scrapyd-client

scrapyd-deploy -h

Configure scrapy.cfg

[settings]
default = settings.default

[deploy:node5]
url = http://10.194.99.5:6800/
project = hydrabot

[deploy:node6]
url = http://10.194.99.6:6800/
project = hydrabot

[deploy:node7]
url = http://10.194.99.7:6800/
project = hydrabot

[deploy:node8]
url = http://10.194.99.8:6800/
project = hydrabot

CLI

# deploy to a single node
scrapyd-deploy node8

# deploy to all nodes
scrapyd-deploy -a

# list projects
scrapyd-client -t http://10.194.99.8:6800 projects

# schedule spiders
scrapyd-client -t http://10.194.99.8:6800 schedule -p hydrabot \istic

# list spiders
scrapyd-client -t http://10.194.99.8:6800 spiders -p hydrabot

Official Site

Disk

vi /etc/fstab
UUID=c51eb23b-195c-4061-92a9-3fad812cc12f /data ext4 defaults,nodelalloc,noatime 0 2
mkdir /data
mount -a
mount -t ext4

Swap

echo "vm.swappiness = 0">> /etc/sysctl.conf
swapoff -a && swapon -a
sysctl -p

vi /etc/fstab # comment out the swap line
swapoff -a

free # verify

ntp

yum install ntp ntpdate -y
systemctl start ntpd.service
systemctl enable ntpd.service
ntpstat

OS Optimization

cat /sys/kernel/mm/transparent_hugepage/enabled


cat /sys/block/vdc/queue/scheduler
cat /sys/block/vdd/queue/scheduler

udevadm info --name=/dev/vdc | grep ID_SERIAL
udevadm info --name=/dev/vdd | grep ID_SERIAL

cpupower frequency-info --policy

-----------------

tuned-adm list
mkdir /etc/tuned/balanced-tidb-optimal/
vi /etc/tuned/balanced-tidb-optimal/tuned.conf
[main]
include=balanced

[cpu]
governor=performance

[vm]
transparent_hugepages=never

[disk]
devices_udev_regex=(ID_SERIAL=vos-5ef7r1pv)|(ID_SERIAL=vos-od1sacxs)
elevator=noop

tuned-adm profile balanced-tidb-optimal



echo "fs.file-max = 1000000">> /etc/sysctl.conf
echo "net.core.somaxconn = 32768">> /etc/sysctl.conf
echo "net.ipv4.tcp_tw_recycle = 0">> /etc/sysctl.conf
echo "net.ipv4.tcp_syncookies = 0">> /etc/sysctl.conf
echo "vm.overcommit_memory = 1">> /etc/sysctl.conf
sysctl -p

cat << EOF >>/etc/security/limits.conf
tidb soft nofile 1000000
tidb hard nofile 1000000
tidb soft stack 32768
tidb hard stack 32768
EOF

# disable transparent huge pages
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag

# install numactl
yum install numactl -y

# install irqbalance
yum install irqbalance -y
systemctl enable irqbalance && systemctl start irqbalance

SSH Mutual Trust

ssh-copy-id root@10.193.50.17
ssh-copy-id root@10.193.50.18
ssh-copy-id root@10.193.50.19

TiUP

curl --proto '=https' --tlsv1.2 -sSf https://tiup-mirrors.pingcap.com/install.sh | sh
which tiup
tiup cluster
tiup update --self && tiup update cluster
tiup --binary cluster
tiup cluster template > topology.yaml
vi topology.yaml

global:
  user: "tidb"
  ssh_port: 22
  deploy_dir: "/data/tidb-deploy"
  data_dir: "/data/tidb-data"

pd_servers:
  - host: 10.193.50.17
  - host: 10.193.50.18
  - host: 10.193.50.19

tidb_servers:
  - host: 10.193.50.17
  - host: 10.193.50.18
  - host: 10.193.50.19

tikv_servers:
  - host: 10.193.50.17
  - host: 10.193.50.18
  - host: 10.193.50.19

monitoring_servers:
  - host: 10.193.50.17

grafana_servers:
  - host: 10.193.50.17

alertmanager_servers:
  - host: 10.193.50.17

tiup cluster check ./topology.yaml --user root
tiup cluster check ./topology.yaml --apply --user root

tiup cluster deploy tidb-eip v5.3.0 ./topology.yaml --user root

HAProxy

yum install haproxy -y

vi /etc/haproxy/haproxy.cfg

global                                     # global settings
    log 127.0.0.1 local2                   # global syslog server(s); at most two can be defined
    chroot /var/lib/haproxy                # change the working directory and drop privileges for better security
    pidfile /var/run/haproxy.pid           # write the HAProxy PID to this file
    maxconn 4096                           # maximum concurrent connections per HAProxy process, same as the "-n" command-line option
    # nbthread 48                          # maximum number of threads; capped by the number of CPUs
    user haproxy                           # same as the UID parameter
    group haproxy                          # same as the GID parameter; a dedicated group is recommended
    daemon                                 # run HAProxy as a background daemon, same as "-D"; disable with "-db" if needed
    stats socket /var/lib/haproxy/stats    # location of the stats socket

defaults                                   # default settings
    log global                             # inherit the logging settings from the global section
    retries 2                              # maximum connection attempts to an upstream server before it is considered unavailable
    timeout connect 2s                     # connection timeout between HAProxy and a backend server; can be short on a LAN
    timeout client 30000s                  # timeout for inactive client connections after data transfer finishes
    timeout server 30000s                  # timeout for inactive server-side connections

listen admin_stats                         # combined frontend/backend for the stats page; name it as you like
    bind 0.0.0.0:8080                      # listen port
    mode http                              # run the stats page in http mode
    option httplog                         # enable HTTP request logging
    maxconn 10                             # maximum concurrent connections
    stats refresh 30s                      # refresh the stats page every 30 seconds
    stats uri /haproxy                     # URL of the stats page
    stats realm HAProxy                    # prompt text for the stats page
    stats auth admin:pingcap123            # stats page user and password; multiple users can be configured
    stats hide-version                     # hide the HAProxy version on the stats page
    stats admin if TRUE                    # allow enabling/disabling backend servers manually (HAProxy 1.4.9+)

listen tidb-cluster                        # database load balancing
    bind 0.0.0.0:3390                      # floating IP and listen port
    mode tcp                               # HAProxy works at layer 4 (TCP)
    balance leastconn                      # prefer the server with the fewest connections; recommended for long sessions (LDAP, SQL, TSE) rather than short ones like HTTP; the algorithm is dynamic and adjusts weights for slow-starting servers
    server tidb-1 10.193.50.17:4000 check inter 2000 rise 2 fall 3   # check port 4000 every 2000 ms; 2 successes mark the server up, 3 failures mark it down
    server tidb-2 10.193.50.18:4000 check inter 2000 rise 2 fall 3
    server tidb-3 10.193.50.19:4000 check inter 2000 rise 2 fall 3


haproxy -f /etc/haproxy/haproxy.cfg
systemctl enable haproxy
systemctl start haproxy

Start

tiup cluster list
tiup cluster display tidb-eip
tiup cluster start tidb-eip

Verify

PD

http://10.193.50.17:2379/dashboard

Grafana

http://10.193.50.17:3000

MySQL

mysql -uroot -p -h10.193.50.17 -P3390
select tidb_version()\G

Official Site

Simple Configuration

vi mysql_sync.json

{
  "job": {
    "setting": {
      "speed": {
        "channel": 1
      }
    },
    "content": [
      {
        "reader": {
          "name": "mysqlreader",
          "parameter": {
            "username": "root",
            "password": "password",
            "connection": [
              {
                "querySql": ["select * from info_site where id < 300;"],
                "jdbcUrl": ["jdbc:mysql://10.194.99.2:3306/hydrabot_info_prd"]
              }
            ]
          }
        },
        "writer": {
          "name": "mysqlwriter",
          "parameter": {
            "writeMode": "insert",
            "username": "root",
            "password": "password",
            "column": ["*"],
            "session": ["set session sql_mode='ANSI'"],
            "preSql": ["delete from test"],
            "connection": [
              {
                "jdbcUrl": "jdbc:mysql://10.194.98.8:3306/hydrabot_info_prd?useUnicode=true&characterEncoding=gbk",
                "table": ["test"]
              }
            ]
          }
        }
      }
    ]
  }
}