Shared Storage

Info

Shared Storage is only available on SOCA version 24.11.0 and newer.

Shared Storage filesystems are automatically mounted on every new node deployed in your SOCA environment, across all SOCA Linux node types (controller, hpc_nodes, login_nodes and dcv_nodes), regardless of user or project name.

The Shared Storage filesystem configuration is centrally managed under the following SocaConfig tree hierarchy: /configuration/FileSystems. For example:

{
    "apps": {
        "provider": "efs",
        "mount_path": "/apps",
        "mount_target": "fs-0065357f6e9b7c8a6",
        "mount_options": "nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport",
        "on_mount_failure": "exit",
        "enabled": "true"
    },
    "data": {
        "provider": "fsx_ontap",
        "mount_path": "/data", 
        "mount_target": "fsvol-0d86fc91559fef332",
        "mount_options": "defaults,noatime,_netdev",
        "on_mount_failure": "exit",
        "enabled": "true"
    }
}
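
If you need to inspect the raw configuration behind this tree, the following is a minimal sketch that assumes SocaConfig is backed by AWS SSM Parameter Store and that your cluster parameters live under a /soca/<cluster-id>/ prefix (both assumptions; verify the exact prefix used by your environment):

# Assumption: SocaConfig values are stored in SSM Parameter Store under /soca/<cluster-id>/
# Replace <cluster-id> with your actual SOCA cluster identifier
aws ssm get-parameters-by-path \
  --path "/soca/<cluster-id>/configuration/FileSystems" \
  --recursive \
  --query "Parameters[].{Name:Name,Value:Value}" \
  --output table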

Each filesystem must have a unique name (e.g. apps, data, mycustomfs) and the following parameters:

  • provider (Required): efs, nfs, fsx_lustre, fsx_ontap, fsx_openzfs or s3. The filesystem provider.
  • mount_path (Required): Any string starting with /. The Unix mount location (e.g. /my_mount_path).
  • mount_target (Required): See below for accepted values based on the provider. The actual filesystem target.
  • on_mount_failure: exit or ignore (defaults to ignore). Behavior if the filesystem cannot be mounted.
  • mount_options: See below for default values. Options specified at mount time.
  • enabled: true or false (defaults to true). Whether the filesystem will be mounted automatically by SOCA.

Important

/apps and /data are reserved mount_path values for SOCA.
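
As an illustration, a hypothetical entry named scratch that sets every parameter listed above could look like this (the NFS endpoint is a placeholder):

{
    "scratch": {
        "provider": "nfs",
        "mount_path": "/scratch",
        "mount_target": "nfsserver.example.com:/export/scratch",
        "mount_options": "nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport",
        "on_mount_failure": "ignore",
        "enabled": "true"
    }
}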

Filesystem Targets

mount_target expects a different value depending on the provider:

EFS

mount_target for EFS (Amazon Elastic File System) must be a valid EFS FileSystem ID. It starts with fs- (e.g. fs-abcdef123).

NFS

mount_target for NFS must be a valid NFS endpoint (IP address or DNS name), optionally followed by an export path (e.g. 10.1.1.1:/nfs or mydns.com:/nfs).

FSx for Lustre

mount_target for FSx for Lustre must be a valid FileSystem ID. It starts with fs- (e.g. fs-abcdef123).

FSx for NetApp ONTAP

mount_target for FSx for NetApp ONTAP must be a valid Volume ID. It starts with fsvol- (e.g. fsvol-abcdef123).

FSx for OpenZFS

mount_target for FSx for OpenZFS must be a valid FileSystem ID. It starts with fs- (e.g. fs-abcdef123).

S3

mount_target for S3 (mounted via Mountpoint for Amazon S3) is a bucket name (e.g. mybucket).
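
To recap, an illustrative /configuration/FileSystems fragment showing the expected mount_target format for each provider could look like this (all IDs, hostnames and bucket names are placeholders):

{
    "efs_example":     { "provider": "efs",         "mount_path": "/efs",      "mount_target": "fs-0123456789abcdef0" },
    "nfs_example":     { "provider": "nfs",         "mount_path": "/nfs",      "mount_target": "10.1.1.1:/nfs" },
    "lustre_example":  { "provider": "fsx_lustre",  "mount_path": "/lustre",   "mount_target": "fs-0123456789abcdef0" },
    "ontap_example":   { "provider": "fsx_ontap",   "mount_path": "/ontap",    "mount_target": "fsvol-0123456789abcdef0" },
    "openzfs_example": { "provider": "fsx_openzfs", "mount_path": "/openzfs",  "mount_target": "fs-0123456789abcdef0" },
    "s3_example":      { "provider": "s3",          "mount_path": "/s3bucket", "mount_target": "mybucket" }
}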

Default Mount Options

If mount_options is not set, your filesystem will use the following default values:

  • efs: nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport
  • nfs: nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport
  • fsx_lustre: defaults,noatime,flock,_netdev
  • fsx_ontap: defaults,noatime,_netdev
  • fsx_openzfs: nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport
  • s3: --read-only
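
Once a node has finished bootstrapping, you can check which options were actually applied with standard Linux tooling. For example, for a filesystem mounted on /apps:

# Show the effective mount options for /apps
findmnt /apps

# Alternative using mount
mount | grep " /apps "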

Mount Failure Behavior

Set on_mount_failure to:

  • ignore (default) if you want to continue the bootstrap process even when the filesystem cannot be mounted.
  • exit if you want to abort the bootstrap sequence when the filesystem cannot be mounted. This will prevent the node from booting up correctly, so only use this setting for critical filesystem(s) that must be mounted (see the example below).
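
For example, a hypothetical configuration could mark a business-critical filesystem with exit and a best-effort one with ignore (all targets below are placeholders):

{
    "tools": {
        "provider": "efs",
        "mount_path": "/tools",
        "mount_target": "fs-0123456789abcdef0",
        "on_mount_failure": "exit"
    },
    "scratch_cache": {
        "provider": "nfs",
        "mount_path": "/scratch_cache",
        "mount_target": "10.1.1.1:/scratch",
        "on_mount_failure": "ignore"
    }
}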

Get all Filesystems

Run socactl filesystems get to list all filesystems associated with your environment.

Tip

Use --key to narrow the output down to a specific filesystem's unique name.

socactl filesystems get --output json --key apps
{
    "apps": {
        "enabled": "true",
        "mount_options": "nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport",
        "mount_path": "/apps",
        "mount_target": "fs-0065357f6e9b7c8a6",
        "on_mount_failure": "exit",
        "provider": "efs"
    }
} 

Create a new filesystem

Run socactl filesystem set to create a new filesystem.

Example:

socactl filesystem set --name "myfilesystem" \
  --provider "efs" \
  --mount_path "/unix_path" \
  --mount_target "fs-abcde123.."
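
You can then confirm the new entry was recorded by querying it back with the command documented above:

socactl filesystems get --output json --key myfilesystem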

Mount Filesystem directly via Jinja2 wrappers

Alternatively, you can directly import the various storage wrappers located at https://github.com/awslabs/scale-out-computing-on-aws/tree/main/source/soca/cluster_node_bootstrap/templates/linux/shared_storage and apply your own logic in your scripts.

Example:

Mount your NFS server directly in cluster_node_bootstrap/compute_node/04_setup_user_customization.sh.j2:

# Include mount_nfs 
{% include "templates/linux/shared_storage/mount_standalone_nfs.sh.j2" %}

# Optional, create future mount path if it does not already exist
mkdir -p /mypath

# Mount your NFS Filesystem to /mypath
mount_nfs --mount-target="10.1.1.1:/" --mount-path="/mypath"

Repeat the same logic with the other templates (EFS, FSxN, FSxL, ...) under shared_storage, as sketched below.
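
As an illustration, a sketch for EFS could look like the snippet below. The template file name (mount_efs.sh.j2), the wrapper function name (mount_efs) and its flags are assumptions modeled on the NFS example above, so verify the exact names in the shared_storage folder of the repository.

# Assumption: the EFS wrapper follows the same naming pattern as the standalone NFS one
{% include "templates/linux/shared_storage/mount_efs.sh.j2" %}

# Optional, create the mount path if it does not already exist
mkdir -p /my_efs

# Mount the EFS filesystem (fs-xxxxxxxx is a placeholder ID)
mount_efs --mount-target="fs-xxxxxxxx" --mount-path="/my_efs"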