Advanced S3 Usage with SeaweedFS


SeaweedFS is already deployed in production with docker compose. Only a single S3 port is exposed, bound to 127.0.0.1:8333, with Caddy in front as an HTTPS reverse proxy, so the setup is reasonably secure.

Now some more advanced requirements have come up:

1. S3 pre-signed URLs

The application used to run on AWS and followed the S3 best practice of using pre-signed URLs for uploads and downloads. SeaweedFS supports this fully, so the change is essentially seamless.

Here is a verification script:

import boto3
from botocore.client import Config

# Configure the S3 client to point to your SeaweedFS S3 gateway
s3_client = boto3.client(
    's3',
    endpoint_url='https://s3.rendoumi.com',  # Replace with your SeaweedFS S3 gateway address
    aws_access_key_id='aaaaaaaa',
    aws_secret_access_key='bbbbbbb',
    config=Config(signature_version='s3v4')
)

bucket_name = 'myfiles'
object_key = 'your-object-key'
expiration_seconds = 3600  # URL valid for 1 hour

# Generate a pre-signed URL for uploading (PUT)
try:
    upload_url = s3_client.generate_presigned_url(
        'put_object',
        Params={'Bucket': bucket_name, 'Key': object_key, 'ContentType': 'application/octet-stream'},
        ExpiresIn=expiration_seconds
    )
    print(f"Pre-signed URL for upload: {upload_url}")
except Exception as e:
    print(f"Error generating upload URL: {e}")

# Generate a pre-signed URL for downloading (GET)
try:
    download_url = s3_client.generate_presigned_url(
        'get_object',
        Params={'Bucket': bucket_name, 'Key': object_key},
        ExpiresIn=expiration_seconds
    )
    print(f"Pre-signed URL for download: {download_url}")
except Exception as e:
    print(f"Error generating download URL: {e}")

Both a temporary upload URL and a download URL are generated, so this is fully supported.

[Screenshot: the generated pre-signed upload and download URLs]
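To confirm the URLs actually work end to end, they can be exercised with plain HTTP. Below is a minimal sketch using the requests library; the payload is made up, and upload_url / download_url are the variables produced by the script above:

import requests

# Upload through the pre-signed PUT URL; the Content-Type must match what was signed.
payload = b"hello from seaweedfs"
resp = requests.put(upload_url, data=payload,
                    headers={"Content-Type": "application/octet-stream"})
print("upload:", resp.status_code)

# Download through the pre-signed GET URL; no credentials are needed on this side.
resp = requests.get(download_url)
print("download:", resp.status_code, resp.content[:20])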

2. S3 domain proxying

An AWS S3 bucket comes with a default domain (bucket.ap-southeast-1…), and you can also put CloudFront in front of it to serve the bucket as a static-website CDN.

For the migration there are two approaches. The first: the CDN vendor directly supports an S3 bucket as the origin. That case is easy, just configure it.

The second: the vendor has no such feature, and the only option is to expose the SeaweedFS filer and let the CDN vendor fetch from origin over plain HTTP.

The setup is simple as well: modify docker-compose.yaml to open port 8888.

services:
  seaweedfs-s3:
    image: chrislusf/seaweedfs
    container_name: seaweedfs-s3
    volumes:
      - ./data:/data
      - ./config/config.json:/seaweedfs/config.json
    ports:
      - "127.0.0.1:8333:8333"
      - "127.0.0.1:8888:8888"
    entrypoint: /bin/sh -c
    command: |
      "echo 'Starting SeaweedFS S3 server' && \
      weed server -dir=/data -volume.max=100 -s3 -s3.config /seaweedfs/config.json"
    restart: unless-stopped
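Once the container is up, a quick probe from the host (both ports are bound to 127.0.0.1, so it has to run locally) confirms that the S3 gateway and the filer are listening. A minimal sketch:

import requests

def probe(name, url):
    # Any HTTP response (even a 403 from the S3 gateway) means the service is listening.
    try:
        resp = requests.get(url, timeout=3)
        print(f"{name}: reachable (HTTP {resp.status_code})")
    except requests.RequestException as e:
        print(f"{name}: not reachable ({e})")

probe("S3 gateway (8333)", "http://127.0.0.1:8333/")
probe("filer (8888)", "http://127.0.0.1:8888/")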

Then reverse-proxy it with Caddy. The Caddyfile looks like this:

s3.rendoumi.com {
  reverse_proxy 127.0.0.1:8333
}

mybucket.s3.rendoumi.com {
    rewrite * /buckets/mybucket{uri}
    reverse_proxy http://127.0.0.1:8888
}

One thing to note here: the filer port 8888 that SeaweedFS exposes can see all buckets under /buckets. Since we only want to expose mybucket, the URL has to be rewritten as above.
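A quick way to verify the rewrite is to fetch the same object through the proxied domain and directly from the filer, then compare the results. A minimal sketch, assuming a readable object under the hypothetical key test/hello.txt already exists in mybucket:

import requests

key = "test/hello.txt"  # hypothetical object key, only for this check

# Through Caddy: mybucket.s3.rendoumi.com/<key> is rewritten to /buckets/mybucket/<key>
via_proxy = requests.get(f"https://mybucket.s3.rendoumi.com/{key}")

# Directly against the filer (run on the host, since 8888 is bound to 127.0.0.1)
via_filer = requests.get(f"http://127.0.0.1:8888/buckets/mybucket/{key}")

print(via_proxy.status_code, via_filer.status_code)
print("same content:", via_proxy.content == via_filer.content)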

3. S3 CORS

It turns out the application also relies on S3 CORS, which is more of a hassle; fortunately SeaweedFS supports that too!

Link: https://github.com/seaweedfs/seaweedfs/wiki/S3-CORS

We only want to restrict CORS on the mybucket bucket, so first prepare a cors.xml file:

<CORSConfiguration>
  <CORSRule>
    <AllowedOrigin>https://www.rendoumi.com</AllowedOrigin>
    <AllowedOrigin>https://mybucket.rendoumi.com</AllowedOrigin>
    <AllowedOrigin>https://*.rendoumi.com</AllowedOrigin>
    <AllowedOrigin>https://mybucket.fcdn.net</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedMethod>POST</AllowedMethod>
    <AllowedMethod>PUT</AllowedMethod>
    <AllowedMethod>HEAD</AllowedMethod>
    <AllowedMethod>DELETE</AllowedMethod>
    <AllowedHeader>*</AllowedHeader>
    <ExposeHeader>ETag</ExposeHeader>
    <ExposeHeader>Access-Control-Allow-Origin</ExposeHeader>
    <MaxAgeSeconds>3600</MaxAgeSeconds>
  </CORSRule>
</CORSConfiguration>

This part is genuinely frustrating: MinIO's mc only accepts this XML format, not JSON (or perhaps I simply never found the JSON way to configure it).

Then push it in with mc:

mc cors set mys3/mybucket /data/weedfs/cors.xml

mc cors get mys3/mybucket
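To check that the rule is actually served, a browser-style preflight can be simulated against the S3 endpoint. A minimal sketch; the object key is a placeholder, and the Origin must be one of the AllowedOrigin entries above:

import requests

# Simulate the CORS preflight a browser would send before a cross-origin PUT.
resp = requests.options(
    "https://s3.rendoumi.com/mybucket/some-object-key",  # placeholder key
    headers={
        "Origin": "https://www.rendoumi.com",
        "Access-Control-Request-Method": "PUT",
        "Access-Control-Request-Headers": "content-type",
    },
)
print(resp.status_code)
print("Access-Control-Allow-Origin:", resp.headers.get("Access-Control-Allow-Origin"))
print("Access-Control-Allow-Methods:", resp.headers.get("Access-Control-Allow-Methods"))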

And that's it.

