CatOi

So the "national team" is doing cloud drives now?

The Chinese Academy of Sciences is giving away 20 GB of free storage, with S3 / WebDAV / WeChat upload support

中国科技云 (CSTCloud), run by the Chinese Academy of Sciences, has launched a service called 数据胶囊 (Data Capsule). Real-name-verified users get 20 GB of storage for free. You can use it directly in the browser, connect to it over S3 or WebDAV, and, after binding WeChat, upload files straight from a WeChat chat.
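
Most of this post walks through the S3 route, but the WebDAV route also works with rclone. Here is a minimal sketch of a WebDAV remote for rclone.conf; every value below is a placeholder, since the real endpoint URL and account come from the 客户端访问管理 (client access management) panel covered in Step 1, and the password must first be scrambled with rclone obscure:

[cstcloud-dav]
type = webdav
# Placeholder: copy the real WebDAV endpoint from the client access panel
url = https://<webdav-endpoint-from-the-panel>
vendor = other
user = <your-account>
# Paste the output of: rclone obscure <your-password>
pass = <obscured-password>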

How to register

Just open the official site in your browser to register. QQ and WeChat are supported, along with a pile of other third-party logins, even GitHub and ORCID.

The sign-up itself needs no walkthrough; it's just filling out a form.

Real-name verification unlocks the 20 GB

A fresh account only gets 1 GB. To unlock 20 GB you must verify your identity through the 国家网络身份认证 (National Online Identity Authentication) app: download the app, bind your ID card, and register a 网号 (network ID).

How to use it?

Step 1: Get the S3 connection parameters

Log in to the CSTCloud 数据胶囊 console, go to 我的数据 (My Data), select the dataset you want to mount as a local drive, and click the 客户端访问管理 (client access management) button.

On the client access management page you can see the connection details. This is where you create the AccessKey ID and AccessKey Secret used to connect to the cloud dataset, and where you read off the Endpoint, Bucket, signature version, and other parameters that rclone will need later.

Click 新增 AccessKey (Add AccessKey) to create a set of access credentials. You can create several sets for different environments and delete a set to revoke its access when needed. Renaming the Bucket is also supported.

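Incidentally, once rclone is installed (Step 2 below), a freshly created key pair can be smoke-tested without writing any config, using rclone's connection-string syntax. The quotes matter on Windows, where cmd treats commas as argument separators; the two key values below are placeholders for your own:

rclone lsd ":s3,provider=Other,endpoint=s3.cstcloud.cn,access_key_id=YOUR_ID,secret_access_key=YOUR_SECRET:"

If your bucket shows up in the listing, the credentials are good.
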
Step 2: Download rclone and configure the connection

First, download and install rclone; the official build is recommended and is available at https://rclone.org/downloads/.

Download the build for your operating system. This post uses Windows as the example: extract the archive, open a terminal, and configure the remote from the command line.
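
Once extracted, a quick check confirms the binary runs (path per this post's example layout):

G:\rclone1.73.2>rclone version

This prints rclone's version and the OS/arch it was built for. With that working, configure the remote using either method below.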

Method 1: cd into the rclone directory and run the config wizard

# rclone on this machine is extracted to G:\rclone1.73.2
C:\Users\admin>g:

# Change into the rclone directory, e.g.:
G:\>cd G:\rclone1.73.2

# Start the interactive configuration.
G:\rclone1.73.2>rclone config
# This notice means rclone found no rclone.conf yet, so no remotes are configured.
2026/03/10 12:36:32 NOTICE: Config file "C:\\Users\\admin\\AppData\\Roaming\\rclone\\rclone.conf" not found - using defaults
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n

Enter name for new remote.
name> cstcloud

# Choose the storage type: s3
Option Storage.
Type of storage to configure.
Choose a number from below, or type in your own value.
 1 / 1Fichier
   \ (fichier)
 2 / Akamai NetStorage
   \ (netstorage)
 3 / Alias for an existing remote
   \ (alias)
 4 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, BizflyCloud, Ceph, ChinaMobile, Cloudflare, Cubbit, DigitalOcean, Dreamhost, Exaba, FileLu, FlashBlade, GCS, Hetzner, HuaweiOBS, IBMCOS, IDrive, Intercolo, IONOS, Leviia, Liara, Linode, LyveCloud, Magalu, Mega, Minio, Netease, Outscale, OVHcloud, Petabox, Qiniu, Rabata, RackCorp, Rclone, Scaleway, SeaweedFS, Selectel, Servercore, SpectraLogic, StackPath, Storj, Synology, TencentCOS, Wasabi, Zata, Other
   \ (s3)
 5 / Backblaze B2
   \ (b2)
 6 / Better checksums for other remotes
   \ (hasher)
 7 / Box
   \ (box)
 8 / Cache a remote
   \ (cache)
 9 / Citrix Sharefile
   \ (sharefile)
10 / Cloudinary
   \ (cloudinary)
11 / Combine several remotes into one
   \ (combine)
12 / Compress a remote
   \ (compress)
13 / DOI datasets
   \ (doi)
14 / Drime
   \ (drime)
15 / Dropbox
   \ (dropbox)
16 / Encrypt/Decrypt a remote
   \ (crypt)
17 / Enterprise File Fabric
   \ (filefabric)
18 / FTP
   \ (ftp)
19 / FileLu Cloud Storage
   \ (filelu)
20 / Filen
   \ (filen)
21 / Files.com
   \ (filescom)
22 / Gofile
   \ (gofile)
23 / Google Cloud Storage (this is not Google Drive)
   \ (google cloud storage)
24 / Google Drive
   \ (drive)
25 / Google Photos
   \ (google photos)
26 / HTTP
   \ (http)
27 / Hadoop distributed file system
   \ (hdfs)
28 / HiDrive
   \ (hidrive)
29 / ImageKit.io
   \ (imagekit)
30 / In memory object storage system.
   \ (memory)
31 / Internet Archive
   \ (internetarchive)
32 / Internxt Drive
   \ (internxt)
33 / Jottacloud
   \ (jottacloud)
34 / Koofr, Digi Storage and other Koofr-compatible storage providers
   \ (koofr)
35 / Linkbox
   \ (linkbox)
36 / Local Disk
   \ (local)
37 / Mail.ru Cloud
   \ (mailru)
38 / Mega
   \ (mega)
39 / Microsoft Azure Blob Storage
   \ (azureblob)
40 / Microsoft Azure Files
   \ (azurefiles)
41 / Microsoft OneDrive
   \ (onedrive)
42 / OpenDrive
   \ (opendrive)
43 / OpenStack Swift (Rackspace Cloud Files, Blomp Cloud Storage, Memset Memstore, OVH)
   \ (swift)
44 / Oracle Cloud Infrastructure Object Storage
   \ (oracleobjectstorage)
45 / Pcloud
   \ (pcloud)
46 / PikPak
   \ (pikpak)
47 / Pixeldrain Filesystem
   \ (pixeldrain)
48 / Proton Drive
   \ (protondrive)
49 / Put.io
   \ (putio)
50 / QingCloud Object Storage
   \ (qingstor)
51 / Quatrix by Maytech
   \ (quatrix)
52 / Read archives
   \ (archive)
53 / SMB / CIFS
   \ (smb)
54 / SSH/SFTP
   \ (sftp)
55 / Shade FS
   \ (shade)
56 / Sia Decentralized Cloud
   \ (sia)
57 / Storj Decentralized Cloud Storage
   \ (storj)
58 / Sugarsync
   \ (sugarsync)
59 / Transparently chunk/split large files
   \ (chunker)
60 / Uloz.to
   \ (ulozto)
61 / Union merges the contents of several upstream fs
   \ (union)
62 / WebDAV
   \ (webdav)
63 / Yandex Disk
   \ (yandex)
64 / Zoho
   \ (zoho)
65 / iCloud Drive
   \ (iclouddrive)
66 / premiumize.me
   \ (premiumizeme)
67 / seafile
   \ (seafile)
Storage> s3

# CSTCloud is an S3-compatible service, so choose Other.
Option provider.
Choose your S3 provider.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
 1 / Amazon Web Services (AWS) S3
   \ (AWS)
 2 / Alibaba Cloud Object Storage System (OSS) formerly Aliyun
   \ (Alibaba)
 3 / Arvan Cloud Object Storage (AOS)
   \ (ArvanCloud)
 4 / Bizfly Cloud Simple Storage
   \ (BizflyCloud)
 5 / Ceph Object Storage
   \ (Ceph)
 6 / China Mobile Ecloud Elastic Object Storage (EOS)
   \ (ChinaMobile)
 7 / Cloudflare R2 Storage
   \ (Cloudflare)
 8 / Cubbit DS3 Object Storage
   \ (Cubbit)
 9 / DigitalOcean Spaces
   \ (DigitalOcean)
10 / Dreamhost DreamObjects
   \ (Dreamhost)
11 / Exaba Object Storage
   \ (Exaba)
12 / FileLu S5 (S3-Compatible Object Storage)
   \ (FileLu)
13 / Pure Storage FlashBlade Object Storage
   \ (FlashBlade)
14 / Google Cloud Storage
   \ (GCS)
15 / Hetzner Object Storage
   \ (Hetzner)
16 / Huawei Object Storage Service
   \ (HuaweiOBS)
17 / IBM COS S3
   \ (IBMCOS)
18 / IDrive e2
   \ (IDrive)
19 / Intercolo Object Storage
   \ (Intercolo)
20 / IONOS Cloud
   \ (IONOS)
21 / Leviia Object Storage
   \ (Leviia)
22 / Liara Object Storage
   \ (Liara)
23 / Linode Object Storage
   \ (Linode)
24 / Seagate Lyve Cloud
   \ (LyveCloud)
25 / Magalu Object Storage
   \ (Magalu)
26 / MEGA S4 Object Storage
   \ (Mega)
27 / Minio Object Storage
   \ (Minio)
28 / Netease Object Storage (NOS)
   \ (Netease)
29 / OUTSCALE Object Storage (OOS)
   \ (Outscale)
30 / OVHcloud Object Storage
   \ (OVHcloud)
31 / Petabox Object Storage
   \ (Petabox)
32 / Qiniu Object Storage (Kodo)
   \ (Qiniu)
33 / Rabata Cloud Storage
   \ (Rabata)
34 / RackCorp Object Storage
   \ (RackCorp)
35 / Rclone S3 Server
   \ (Rclone)
36 / Scaleway Object Storage
   \ (Scaleway)
37 / SeaweedFS S3
   \ (SeaweedFS)
38 / Selectel Object Storage
   \ (Selectel)
39 / Servercore Object Storage
   \ (Servercore)
40 / Spectra Logic Black Pearl
   \ (SpectraLogic)
41 / StackPath Object Storage
   \ (StackPath)
42 / Storj (S3 Compatible Gateway)
   \ (Storj)
43 / Synology C2 Object Storage
   \ (Synology)
44 / Tencent Cloud Object Storage (COS)
   \ (TencentCOS)
45 / Wasabi Object Storage
   \ (Wasabi)
46 / Zata (S3 compatible Gateway)
   \ (Zata)
47 / Any other S3 compatible provider
   \ (Other)
provider> Other

# Whether to pull credentials from environment variables / cloud-server IAM at runtime.
Option env_auth.
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own boolean value (true or false).
Press Enter for the default (false).
 1 / Enter AWS credentials in the next step.
   \ (false)
 2 / Get AWS credentials from the environment (env vars or IAM).
   \ (true)
# We are mounting the CSTCloud Data Capsule from a local Windows PC, so enter the keys manually:
env_auth> 1

Option access_key_id.
AWS Access Key ID.
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
# (paste your own AccessKey ID here)
access_key_id> AccessKey ID

Option secret_access_key.
AWS Secret Access Key (password).
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
# (paste your own AccessKey Secret here)
secret_access_key> AccessKey Secret

Option region.
Region to connect to.
Leave blank if you are using an S3 clone and you don't have a region.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
   / Use this if unsure.
 1 | Will use v4 signatures and an empty region.
   \ ()
   / Use this only if v4 signatures don't work.
 2 | E.g. pre Jewel/v10 CEPH.
   \ (other-v2-signature)
    # The Data Capsule is S3-compatible storage and does not use AWS regions, so choose 1 (empty region).
region> 1

Option endpoint.
Endpoint for S3 API.
Required when using an S3 clone.
Enter a value. Press Enter to leave empty.
# Access point / Endpoint
endpoint> s3.cstcloud.cn

# This option only matters when creating buckets; just press Enter to continue.
Option location_constraint.
Location constraint - must be set to match the Region.
Leave blank if not sure. Used when creating buckets only.
Enter a value. Press Enter to leave empty.
location_constraint>

Option acl.
Canned ACL used when creating buckets and storing or copying objects.
This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Note that this ACL is applied when server-side copying objects as S3
doesn't copy the ACL from the source but rather writes a fresh one.
If the acl is an empty string then no X-Amz-Acl: header is added and
the default (private) will be used.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
   / Owner gets FULL_CONTROL.
 1 | No one else has access rights (default).
   \ (private)
   / Owner gets FULL_CONTROL.
 2 | The AllUsers group gets READ access.
   \ (public-read)
   / Owner gets FULL_CONTROL.
 3 | The AllUsers group gets READ and WRITE access.
   | Granting this on a bucket is generally not recommended.
   \ (public-read-write)
   / Owner gets FULL_CONTROL.
 4 | The AuthenticatedUsers group gets READ access.
   \ (authenticated-read)
   / Object owner gets FULL_CONTROL.
 5 | Bucket owner gets READ access.
   | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
   \ (bucket-owner-read)
   / Both the object owner and the bucket owner get FULL_CONTROL over the object.
 6 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
   \ (bucket-owner-full-control)
   # The ACL (Access Control List) controls object permissions; private is what we want.
acl> private

Edit advanced config?
y) Yes
n) No (default)
y/n> y

# From here on, just hammer Enter to accept every default (answering n above would skip these advanced options entirely).

Option bucket_acl.
Canned ACL used when creating buckets.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Note that this ACL is applied when only when creating buckets.  If it
isn't set then "acl" is used instead.
If the "acl" and "bucket_acl" are empty strings then no X-Amz-Acl:
header is added and the default (private) will be used.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
   / Owner gets FULL_CONTROL.
 1 | No one else has access rights (default).
   \ (private)
   / Owner gets FULL_CONTROL.
 2 | The AllUsers group gets READ access.
   \ (public-read)
   / Owner gets FULL_CONTROL.
 3 | The AllUsers group gets READ and WRITE access.
   | Granting this on a bucket is generally not recommended.
   \ (public-read-write)
   / Owner gets FULL_CONTROL.
 4 | The AuthenticatedUsers group gets READ access.
   \ (authenticated-read)
bucket_acl>

Option upload_cutoff.
Cutoff for switching to chunked upload.
Any files larger than this will be uploaded in chunks of chunk_size.
The minimum is 0 and the maximum is 5 GiB.
Enter a size with suffix K,M,G,T. Press Enter for the default (200Mi).
upload_cutoff>

Option chunk_size.
Chunk size to use for uploading.
When uploading files larger than upload_cutoff or files with unknown
size (e.g. from "rclone rcat" or uploaded with "rclone mount" or google
photos or google docs) they will be uploaded as multipart uploads
using this chunk size.
Note that "--s3-upload-concurrency" chunks of this size are buffered
in memory per transfer.
If you are transferring large files over high-speed links and you have
enough memory, then increasing this will speed up the transfers.
Rclone will automatically increase the chunk size when uploading a
large file of known size to stay below the 10,000 chunks limit.
Files of unknown size are uploaded with the configured
chunk_size. Since the default chunk size is 5 MiB and there can be at
most 10,000 chunks, this means that by default the maximum size of
a file you can stream upload is 48 GiB.  If you wish to stream upload
larger files then you will need to increase chunk_size.
Increasing the chunk size decreases the accuracy of the progress
statistics displayed with "-P" flag. Rclone treats chunk as sent when
it's buffered by the AWS SDK, when in fact it may still be uploading.
A bigger chunk size means a bigger AWS SDK buffer and progress
reporting more deviating from the truth.
Enter a size with suffix K,M,G,T. Press Enter for the default (5Mi).
chunk_size>

Option max_upload_parts.
Maximum number of parts in a multipart upload.
This option defines the maximum number of multipart chunks to use
when doing a multipart upload.
This can be useful if a service does not support the AWS S3
specification of 10,000 chunks.
Rclone will automatically increase the chunk size when uploading a
large file of a known size to stay below this number of chunks limit.
Enter a signed integer. Press Enter for the default (10000).
max_upload_parts>

Option copy_cutoff.
Cutoff for switching to multipart copy.
Any files larger than this that need to be server-side copied will be
copied in chunks of this size.
The minimum is 0 and the maximum is 5 GiB.
Enter a size with suffix K,M,G,T. Press Enter for the default (4.656Gi).
copy_cutoff>

Option disable_checksum.
Don't store MD5 checksum with object metadata.
Normally rclone will calculate the MD5 checksum of the input before
uploading it so it can add it to metadata on the object. This is great
for data integrity checking but can cause long delays for large files
to start uploading.
Enter a boolean value (true or false). Press Enter for the default (false).
disable_checksum> false

Option shared_credentials_file.
Path to the shared credentials file.
If env_auth = true then rclone can use a shared credentials file.
If this variable is empty rclone will look for the
"AWS_SHARED_CREDENTIALS_FILE" env variable. If the env value is empty
it will default to the current user's home directory.
    Linux/OSX: "$HOME/.aws/credentials"
    Windows:   "%USERPROFILE%\.aws\credentials"
Enter a value. Press Enter to leave empty.
shared_credentials_file>

Option profile.
Profile to use in the shared credentials file.
If env_auth = true then rclone can use a shared credentials file. This
variable controls which profile is used in that file.
If empty it will default to the environment variable "AWS_PROFILE" or
"default" if that environment variable is also not set.
Enter a value. Press Enter to leave empty.
profile>

Option session_token.
An AWS session token.
Enter a value. Press Enter to leave empty.
session_token>

Option role_arn.
ARN of the IAM role to assume.

Leave blank if not using assume role.
Enter a value. Press Enter to leave empty.
role_arn>

Option role_session_name.
Session name for assumed role.

If empty, a session name will be generated automatically.
Enter a value. Press Enter to leave empty.
role_session_name>

Option role_session_duration.
Session duration for assumed role.

If empty, the default session duration will be used.
Enter a value. Press Enter to leave empty.
role_session_duration>

Option role_external_id.
External ID for assumed role.

Leave blank if not using an external ID.
Enter a value. Press Enter to leave empty.
role_external_id>

Option upload_concurrency.
Concurrency for multipart uploads and copies.
This is the number of chunks of the same file that are uploaded
concurrently for multipart uploads and copies.
If you are uploading small numbers of large files over high-speed links
and these uploads do not fully utilize your bandwidth, then increasing
this may help to speed up the transfers.
Enter a signed integer. Press Enter for the default (4).
upload_concurrency>

Option force_path_style.
If true use path style access if false use virtual hosted style.
If this is true (the default) then rclone will use path style access,
if false then rclone will use virtual path style. See [the AWS S3
docs](https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingBucket.html#access-bucket-intro)
for more info.
Some providers (e.g. AWS, Aliyun OSS, Netease COS, or Tencent COS) require this set to
false - rclone will do this automatically based on the provider
setting.
Note that if your bucket isn't a valid DNS name, i.e. has '.' or '_' in,
you'll need to set this to true.
Enter a boolean value (true or false). Press Enter for the default (true).
force_path_style>

Option v2_auth.
If true use v2 authentication.
If this is false (the default) then rclone will use v4 authentication.
If it is set then rclone will use v2 authentication.
Use this only if v4 signatures don't work, e.g. pre Jewel/v10 CEPH.
Enter a boolean value (true or false). Press Enter for the default (false).
v2_auth>

Option use_dual_stack.
If true use AWS S3 dual-stack endpoint (IPv6 support).
See [AWS Docs on Dualstack Endpoints](https://docs.aws.amazon.com/AmazonS3/latest/userguide/dual-stack-endpoints.html)
Enter a boolean value (true or false). Press Enter for the default (false).
use_dual_stack>

Option use_arn_region.
If true, enables arn region support for the service.
Enter a boolean value (true or false). Press Enter for the default (false).
use_arn_region>

Option list_chunk.
Size of listing chunk (response list for each ListObject S3 request).
This option is also known as "MaxKeys", "max-items", or "page-size" from the AWS S3 specification.
Most services truncate the response list to 1000 objects even if requested more than that.
In AWS S3 this is a global maximum and cannot be changed, see [AWS S3](https://docs.aws.amazon.com/cli/latest/reference/s3/ls.html).
In Ceph, this can be increased with the "rgw list buckets max chunk" option.
Enter a signed integer. Press Enter for the default (1000).
list_chunk>

Option list_version.
Version of ListObjects to use: 1,2 or 0 for auto.
When S3 originally launched it only provided the ListObjects call to
enumerate objects in a bucket.
However in May 2016 the ListObjectsV2 call was introduced. This is
much higher performance and should be used if at all possible.
If set to the default, 0, rclone will guess according to the provider
set which list objects method to call. If it guesses wrong, then it
may be set manually here.
Enter a signed integer. Press Enter for the default (0).
list_version>

Option list_url_encode.
Whether to url encode listings: true/false/unset
Some providers support URL encoding listings and where this is
available this is more reliable when using control characters in file
names. If this is set to unset (the default) then rclone will choose
according to the provider setting what to apply, but you can override
rclone's choice here.
Enter a value of type Tristate. Press Enter for the default (unset).
list_url_encode>

Option no_check_bucket.
If set, don't attempt to check the bucket exists or create it.
This can be useful when trying to minimise the number of transactions
rclone does if you know the bucket exists already.
It can also be needed if the user you are using does not have bucket
creation permissions. Before v1.52.0 this would have passed silently
due to a bug.
Enter a boolean value (true or false). Press Enter for the default (false).
no_check_bucket>

Option no_head.
If set, don't HEAD uploaded objects to check integrity.
This can be useful when trying to minimise the number of transactions
rclone does.
Setting it means that if rclone receives a 200 OK message after
uploading an object with PUT then it will assume that it got uploaded
properly.
In particular it will assume:
- the metadata, including modtime, storage class and content type was as uploaded
- the size was as uploaded
It reads the following items from the response for a single part PUT:
- the MD5SUM
- The uploaded date
For multipart uploads these items aren't read.
If an source object of unknown length is uploaded then rclone **will** do a
HEAD request.
Setting this flag increases the chance for undetected upload failures,
in particular an incorrect size, so it isn't recommended for normal
operation. In practice the chance of an undetected upload failure is
very small even with this flag.
Enter a boolean value (true or false). Press Enter for the default (false).
no_head>

Option no_head_object.
If set, do not do HEAD before GET when getting objects.
Enter a boolean value (true or false). Press Enter for the default (false).
no_head_object>

Option encoding.
The encoding for the backend.
See the [encoding section in the overview](/overview/#encoding) for more info.
Enter a value of type Encoding. Press Enter for the default (Slash,InvalidUtf8,Dot).
encoding>

Option disable_http2.
Disable usage of http2 for S3 backends.
There is currently an unsolved issue with the s3 (specifically minio) backend
and HTTP/2.  HTTP/2 is enabled by default for the s3 backend but can be
disabled here.  When the issue is solved this flag will be removed.
See: https://github.com/rclone/rclone/issues/4673, https://github.com/rclone/rclone/issues/3631
Enter a boolean value (true or false). Press Enter for the default (false).
disable_http2>

Option download_url.
Custom endpoint for downloads.
This is usually set to a CloudFront CDN URL as AWS S3 offers
cheaper egress for data downloaded through the CloudFront network.
Enter a value. Press Enter to leave empty.
download_url>

Option directory_markers.
Upload an empty object with a trailing slash when a new directory is created
Empty folders are unsupported for bucket based remotes, this option creates an empty
object ending with "/", to persist the folder.
Enter a boolean value (true or false). Press Enter for the default (false).
directory_markers>

Option use_multipart_etag.
Whether to use ETag in multipart uploads for verification
This should be true, false or left unset to use the default for the provider.
Enter a value of type Tristate. Press Enter for the default (unset).
use_multipart_etag>

Option use_unsigned_payload.
Whether to use an unsigned payload in PutObject
Rclone has to avoid the AWS SDK seeking the body when calling
PutObject. The AWS provider can add checksums in the trailer to avoid
seeking but other providers can't.
This should be true, false or left unset to use the default for the provider.
Enter a value of type Tristate. Press Enter for the default (unset).
use_unsigned_payload>

Option use_presigned_request.
Whether to use a presigned request or PutObject for single part uploads
If this is false rclone will use PutObject from the AWS SDK to upload
an object.
Versions of rclone < 1.59 use presigned requests to upload a single
part object and setting this flag to true will re-enable that
functionality. This shouldn't be necessary except in exceptional
circumstances or for testing.
Enter a boolean value (true or false). Press Enter for the default (false).
use_presigned_request>

Option use_data_integrity_protections.
If true use AWS S3 data integrity protections.
See [AWS Docs on Data Integrity Protections](https://docs.aws.amazon.com/sdkref/latest/guide/feature-dataintegrity.html)
Enter a value of type Tristate. Press Enter for the default (unset).
use_data_integrity_protections>

Option versions.
Include old versions in directory listings.
Enter a boolean value (true or false). Press Enter for the default (false).
versions>

Option version_at.
Show file versions as they were at the specified time.
The parameter should be a date, "2006-01-02", datetime "2006-01-02
15:04:05" or a duration for that long ago, eg "100d" or "1h".
Note that when using this no file write operations are permitted,
so you can't upload files or delete them.
See [the time option docs](/docs/#time-options) for valid formats.
Enter a value of type Time. Press Enter for the default (off).
version_at>

Option version_deleted.
Show deleted file markers when using versions.
This shows deleted file markers in the listing when using versions. These will appear
as 0 size files. The only operation which can be performed on them is deletion.
Deleting a delete marker will reveal the previous version.
Deleted files will always show with a timestamp.
Enter a boolean value (true or false). Press Enter for the default (false).
version_deleted>

Option decompress.
If set this will decompress gzip encoded objects.
It is possible to upload objects to S3 with "Content-Encoding: gzip"
set. Normally rclone will download these files as compressed objects.
If this flag is set then rclone will decompress these files with
"Content-Encoding: gzip" as they are received. This means that rclone
can't check the size and hash but the file contents will be decompressed.
Enter a boolean value (true or false). Press Enter for the default (false).
decompress>

Option might_gzip.
Set this if the backend might gzip objects.
Normally providers will not alter objects when they are downloaded. If
an object was not uploaded with `Content-Encoding: gzip` then it won't
be set on download.
However some providers may gzip objects even if they weren't uploaded
with `Content-Encoding: gzip` (eg Cloudflare).
A symptom of this would be receiving errors like
    ERROR corrupted on transfer: sizes differ NNN vs MMM
If you set this flag and rclone downloads an object with
Content-Encoding: gzip set and chunked transfer encoding, then rclone
will decompress the object on the fly.
If this is set to unset (the default) then rclone will choose
according to the provider setting what to apply, but you can override
rclone's choice here.
Enter a value of type Tristate. Press Enter for the default (unset).
might_gzip>

Option use_accept_encoding_gzip.
Whether to send `Accept-Encoding: gzip` header.
By default, rclone will append `Accept-Encoding: gzip` to the request to download
compressed objects whenever possible.
However some providers such as Google Cloud Storage may alter the HTTP headers, breaking
the signature of the request.
A symptom of this would be receiving errors like
        SignatureDoesNotMatch: The request signature we calculated does not match the signature you provided.
In this case, you might want to try disabling this option.
Enter a value of type Tristate. Press Enter for the default (unset).
use_accept_encoding_gzip>

Option no_system_metadata.
Suppress setting and reading of system metadata
Enter a boolean value (true or false). Press Enter for the default (false).
no_system_metadata>

Option use_already_exists.
Set if rclone should report BucketAlreadyExists errors on bucket creation.
At some point during the evolution of the s3 protocol, AWS started
returning an `AlreadyOwnedByYou` error when attempting to create a
bucket that the user already owned, rather than a
`BucketAlreadyExists` error.
Unfortunately exactly what has been implemented by s3 clones is a
little inconsistent, some return `AlreadyOwnedByYou`, some return
`BucketAlreadyExists` and some return no error at all.
This is important to rclone because it ensures the bucket exists by
creating it on quite a lot of operations (unless
`--s3-no-check-bucket` is used).
If rclone knows the provider can return `AlreadyOwnedByYou` or returns
no error then it can report `BucketAlreadyExists` errors when the user
attempts to create a bucket not owned by them. Otherwise rclone
ignores the `BucketAlreadyExists` error which can lead to confusion.
This should be automatically set correctly for all providers rclone
knows about - please make a bug report if not.
Enter a value of type Tristate. Press Enter for the default (unset).
use_already_exists>

Option use_multipart_uploads.
Set if rclone should use multipart uploads.
You can change this if you want to disable the use of multipart uploads.
This shouldn't be necessary in normal operation.
This should be automatically set correctly for all providers rclone
knows about - please make a bug report if not.
Enter a value of type Tristate. Press Enter for the default (unset).
use_multipart_uploads>

Option use_x_id.
Set if rclone should add x-id URL parameters.
You can change this if you want to disable the AWS SDK from
adding x-id URL parameters.
This shouldn't be necessary in normal operation.
This should be automatically set correctly for all providers rclone
knows about - please make a bug report if not.
Enter a value of type Tristate. Press Enter for the default (unset).
use_x_id>

Option sign_accept_encoding.
Set if rclone should include Accept-Encoding as part of the signature.
You can change this if you want to stop rclone including
Accept-Encoding as part of the signature.
This shouldn't be necessary in normal operation.
This should be automatically set correctly for all providers rclone
knows about - please make a bug report if not.
Enter a value of type Tristate. Press Enter for the default (unset).
sign_accept_encoding>

Option sdk_log_mode.
Set to debug the SDK
This can be set to a comma separated list of the following functions:
- `Signing`
- `Retries`
- `Request`
- `RequestWithBody`
- `Response`
- `ResponseWithBody`
- `DeprecatedUsage`
- `RequestEventMessage`
- `ResponseEventMessage`
Use `Off` to disable and `All` to set all log levels. You will need to
use `-vv` to see the debug level logs.
Enter a value of type Bits. Press Enter for the default (Off).
sdk_log_mode>

Option description.
Description of the remote.
Enter a value. Press Enter to leave empty.
description>


# Review the configuration
G:\rclone1.73.2>rclone config
Current remotes:

Name                 Type
====                 ====
cstcloud             s3

e) Edit existing remote
n) New remote
d) Delete remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
e/n/d/r/c/s/q> q

G:\rclone1.73.2>rclone config show cstcloud
[cstcloud]
type = s3
provider = Other
access_key_id = xxx
secret_access_key = xxx
endpoint = s3.cstcloud.cn
acl = private
force_path_style = true

Method 2: Write the config file by hand at the following path, with these contents:

C:\Users\admin\AppData\Roaming\rclone\rclone.conf

[cstcloud]
type = s3
provider = Other
access_key_id = xxx
secret_access_key = xxx
endpoint = s3.cstcloud.cn
acl = private
force_path_style = true
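
Whichever method you used, a couple of quick commands confirm the remote works end to end. A sketch: orange is the bucket name used in the mount script later in this post, so substitute your own bucket and paths:

# List the buckets this key pair can see
rclone lsd cstcloud:

# List objects inside a bucket
rclone ls cstcloud:orange

# Round-trip test: upload a single file with progress output
rclone copy C:\Users\admin\Desktop\test.txt cstcloud:orange/test -P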

Step 3: Install WinFsp

1️⃣ What is WinFsp?

  • Short for Windows File System Proxy
  • The Windows equivalent of FUSE on Linux
  • Its job: let rclone mount cloud storage as a local disk

Without WinFsp, the mount fails with:

cgofuse: cannot find winfsp

2️⃣ How to install

  1. Open the official site: https://winfsp.dev/rel/
  2. Download the latest stable MSI installer
  3. Double-click to install; the default options are fine
  4. After installation, a reboot is recommended (so the system fully registers WinFsp)

Once WinFsp is installed, rclone can mount the capsule as a drive.
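
As an optional sanity check that the installation registered, you can query the service, assuming WinFsp's default service name (WinFsp.Launcher):

sc query WinFsp.Launcher

If the service is listed, rclone's mount should be able to find WinFsp.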

Step 4: Mount the cloud storage

rclone mount cstcloud:<bucket-name> Z: --vfs-cache-mode full
  • If the command keeps running in the terminal with no errors → the mount succeeded
  • Open This PC → drive Z: → and use your cloud files like an ordinary disk

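A live drive letter is not the only way to use the remote. For one-off transfers or scheduled backups, rclone's one-shot commands work over the same configuration; two sketches, again assuming the orange bucket:

# Mirror a local folder up to the capsule (deletes remote files removed locally)
rclone sync D:\backup cstcloud:orange/backup -P

# Pull a remote folder down to disk
rclone copy cstcloud:orange/papers D:\papers -P
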
One-click mount script: 一键挂载脚本.bat

@echo off
chcp 65001
REM ------------------------------
REM CSTCloud Data Capsule one-click mount script
REM ------------------------------

REM Path to rclone
set RCLONE_PATH=G:\rclone1.73.2\rclone.exe

REM Drive letter to mount on
set DRIVE_LETTER=Z:

REM Cache directory
set CACHE_DIR=G:\rclone1.73.2\rclone-cache

REM Check whether the drive is already mounted
>nul 2>&1 fsutil fsinfo volumeinfo %DRIVE_LETTER%
if %errorlevel% == 0 (
    echo ⚠ %DRIVE_LETTER% is already mounted, nothing to do
    pause
    exit /b
)

REM Create the cache directory if it does not exist
if not exist "%CACHE_DIR%" (
    mkdir "%CACHE_DIR%"
    echo Cache directory created: %CACHE_DIR%
) else (
    echo Cache directory already exists: %CACHE_DIR%
)

REM Announce the mount
echo Mounting the CSTCloud Data Capsule on %DRIVE_LETTER% ...
echo ⚠ Keep this window open, or %DRIVE_LETTER% will be unmounted

REM Mount command (orange is this author's bucket; replace with your own)
"%RCLONE_PATH%" mount cstcloud:orange %DRIVE_LETTER% ^
 --vfs-cache-mode full ^
 --dir-cache-time 72h ^
 --poll-interval 10s ^
 --cache-dir "%CACHE_DIR%" ^
 --links

REM Keep the window open so the log stays visible
pause
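
To bring the drive back automatically at every login, one option is registering the script with Task Scheduler. A sketch, assuming the script above is saved as G:\rclone1.73.2\mount.bat; remove the trailing pause first so the task does not sit waiting on a keypress:

schtasks /Create /TN "CSTCloud Mount" /TR "G:\rclone1.73.2\mount.bat" /SC ONLOGON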

Thanks for reading this far! I hope the post was useful. Feel free to share your thoughts and suggestions so we can keep exploring interesting things together.

Tags: 中科院 · cloud storage · local mount

Enjoyed the article? If you think it was good, feel free to reward me!