I have 3 servers in 3 different datacenters, with their public IPs stored in a .env file:
TZ=UTC
USER=root
PASSWORD=passroot
SERVER1=154.13.92.23
SERVER2=154.13.92.24
SERVER3=154.13.92.25
The Docker Compose file for running distributed NebulaGraph is:
version: '3.9'
services:
  metad:
    image: vesoft/nebula-metad:v3.5.0
    environment:
      USER: "${USER}"
      PASSWORD: "${PASSWORD}"
      TZ: "${TZ}"
    command:
      - --meta_server_addrs=${SERVER1}:9559,${SERVER2}:9559,${SERVER3}:9559
      - --local_ip=metad
      - --ws_ip=metad
      - --port=9559
      - --ws_http_port=19559
      - --data_path=/data/meta
      - --log_dir=/logs
      - --v=0
      - --minloglevel=0
    ports:
      - 0.0.0.0:9559:9559
      - 0.0.0.0:19559:19559
      - 0.0.0.0:19560:19560
    healthcheck:
      test: ["CMD", "curl", "-sf", "http://metad:19559/status"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 20s
    volumes:
      - ./data/meta:/data/meta
      - ./logs/meta:/logs

  storaged:
    image: vesoft/nebula-storaged:v3.5.0
    environment:
      USER: "${USER}"
      PASSWORD: "${PASSWORD}"
      TZ: "${TZ}"
    command:
      - --meta_server_addrs=${SERVER1}:9559,${SERVER2}:9559,${SERVER3}:9559
      - --local_ip=storaged
      - --ws_ip=storaged
      - --port=9779
      - --ws_http_port=19779
      - --data_path=/data/storage
      - --log_dir=/logs
      - --v=0
      - --minloglevel=0
    depends_on:
      - metad
    ports:
      - 0.0.0.0:9779:9779
      - 0.0.0.0:19779:19779
      - 0.0.0.0:19780:19780
    healthcheck:
      test: ["CMD", "curl", "-sf", "http://storaged:19779/status"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 20s
    volumes:
      - ./data/storage:/data/storage
      - ./logs/storage:/logs

  graphd:
    image: vesoft/nebula-graphd:v3.5.0
    environment:
      USER: "${USER}"
      PASSWORD: "${PASSWORD}"
      TZ: "${TZ}"
    command:
      - --meta_server_addrs=${SERVER1}:9559,${SERVER2}:9559,${SERVER3}:9559
      - --port=9669
      - --local_ip=graphd
      - --ws_ip=graphd
      - --ws_http_port=19669
      - --log_dir=/logs
      - --v=0
      - --minloglevel=0
      - --enable_authorize=true
    depends_on:
      - storaged
    ports:
      - 0.0.0.0:5758:9669
      - 0.0.0.0:19669:19669
      - 0.0.0.0:19670:19670
    healthcheck:
      test: ["CMD", "curl", "-sf", "http://graphd:19669/status"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 20s
    volumes:
      - ./logs/graph:/logs

networks:
  nebula-net:
    external: true
but it fails with this error:
Log file created at: 2023/07/26 15:02:42
Running on machine: a3bbe51bcb1c
Running duration (h:mm:ss): 0:00:00
Log line format: [IWEF]yyyymmdd hh:mm:ss.uuuuuu threadid file:line] msg
E20230726 15:02:42.507052 1 FileUtils.cpp:377] Failed to read the directory "/data/meta/nebula" (2): No such file or directory
Any suggestions on how to run NebulaGraph distributed across different bare-metal servers?
For multi-server containerized deployments, it's recommended to leverage the NebulaGraph K8s Operator.
If you prefer to stay with Docker, use Swarm rather than plain Compose; refer to both the master branch and the swarm branch here.
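As a rough sketch of the Swarm route (assuming SERVER1 acts as the manager, and with the join token coming from the output of the init command):

# on SERVER1 (the manager)
docker swarm init --advertise-addr 154.13.92.23
# on SERVER2 and SERVER3, using the token printed by the init command
docker swarm join --token <token> 154.13.92.23:2377
# back on the manager, deploy the stack from the compose file
docker stack deploy -c docker-compose.yml nebula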
Regarding the meta data folder error: could you double-check whether the directory is created automatically? With Docker Compose it should be (with containerd I noticed it isn't, but you can create it yourself once), so the error should be harmless, as it is only reported on the initial run.
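If it isn't created for you, a one-off like the following (assuming the compose file sits in the current directory, matching the bind mounts above) is enough:

mkdir -p ./data/meta ./data/storage ./logs/meta ./logs/storage ./logs/graph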
From your YAML, I can see the healthcheck lines are incorrect: they call the endpoint via a service name that only resolves inside a single container network, e.g. test: ["CMD", "curl", "-sf", "http://graphd:19669/status"], so all services will show as offline in this setup.
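One option, as a sketch that keeps curl as the probe like your current checks, is to hit the local ws_http_port inside each container instead, e.g. for graphd (use 19559 for metad and 19779 for storaged):

healthcheck:
  test: ["CMD", "curl", "-sf", "http://127.0.0.1:19669/status"]
  interval: 30s
  timeout: 10s
  retries: 3
  start_period: 20s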
The final thing to note: on the initial run, storaged has to be activated from the NebulaGraph Console by calling ADD HOSTS with the exact server IPs and ports from the configuration.
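With the IPs from your .env and the storaged port above, that would look roughly like this (the -addr/-port/-u/-p values are illustrative; note your compose maps graphd's 9669 to host port 5758):

# connect to one of the graphd instances
nebula-console -addr 154.13.92.23 -port 5758 -u root -p <your password>
# inside the console, register the storaged hosts with their public IPs and ports
ADD HOSTS 154.13.92.23:9779, 154.13.92.24:9779, 154.13.92.25:9779;
# verify they show up as ONLINE
SHOW HOSTS;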
Hope this helps, and welcome to the NebulaGraph community!