amazon-s3 · self-hosting · seaweedfs

SeaweedFS: configure a newly mounted volume server to serve the same existing bucket?


I have configured a SeaweedFS cluster with 1 master, 1 volume server (SSD disk), 1 filer, and 1 S3 server (1 cluster, 4 servers). The volume server's disk is now nearly full, so I added a second volume server (HDD disk) and mounted it to the master with the following command: weed volume -dir="/some/data/dir2" -mserver="<master_host>:9333" -port=8081. On the old volume server I configured a bucket, and our clients access it via the S3 API. I therefore need the second volume server to serve the same bucket as the old one, but I can't find any instructions on how to do this.
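For reference, a sketch of how the second volume server might be started so the master can place data on it. The directory, host, and port values are the placeholders from the question; the -max and -disk flags are assumptions (check weed volume -h for your version, since the -disk tag is only used if you later pin collections or buckets to a disk type):

```shell
# Start the 2nd volume server on the HDD machine and register it with the master.
# -dir     : where this server stores its volume files
# -mserver : master address (placeholder from the question)
# -max     : max number of volumes this server may hold (assumed value)
# -disk    : optional disk-type tag, e.g. "hdd" (assumed; see weed volume -h)
weed volume \
  -dir="/some/data/dir2" \
  -mserver="<master_host>:9333" \
  -port=8081 \
  -max=100 \
  -disk=hdd
```

Once registered, the master treats the new server as additional writable capacity; no per-bucket configuration is needed on the volume server itself.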

  1. Can the new volume server be configured to use the same bucket as the old one?
  2. If so, does anything change in the S3 API? When a new object is pushed to this bucket, will it be stored on the old volume server or the new one?
  3. I have some very old documents (not accessed in 1–2 years) that I want to move to the new volume server (HDD disk) instead of the old one (SSD disk) to free up space on the SSD. Can I move them between the two volume servers without changing the bucket? If so, how?

Please help


Solution

    1. Yes.
    2. No change to the S3 API. New objects are stored randomly across the old and new volume servers, as long as the volumes are writable.
    3. Rebalance the volumes. See the help in weed shell.
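A sketch of the rebalancing step, assuming weed shell reads commands from stdin (the master address is the placeholder from the question, and the exact volume.balance flags may vary by SeaweedFS version; run help volume.balance inside the shell to confirm):

```shell
# Preview the balancing plan first (dry run, no data is moved):
echo "volume.balance" | weed shell -master=<master_host>:9333

# Apply it: lock the cluster, rebalance, then unlock.
# -force actually executes the moves instead of just printing the plan.
echo "lock; volume.balance -force; unlock" | weed shell -master=<master_host>:9333
```

Rebalancing redistributes existing volumes across all volume servers, so older data migrates to the new HDD server without any change to the bucket or to client-facing S3 paths.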