
LSF

Preview

IBM LSF support on SOCA was added in 25.11.0 and is considered experimental. Not all SOCA features have been migrated to LSF yet, so running full production workloads on it is not recommended at this time. Refer to the Feature Matrix page to see which features are currently supported.

Upload LSF installer to S3

First, upload your LSF installer (the tested version is 10.1.0.14), entitlement file, and fix pack to S3. We recommend using the same bucket you plan to use for the SOCA installation, as all s3: permissions will already be configured by the SOCA installer:

aws s3 ls s3://<my_bucket>/lsf/
2025-09-23 13:06:48  648207331 lsf10.1_linux2.6-glibc2.3-x86_64-601547.tar.Z
2025-09-23 13:06:48  482596419 lsf10.1_linux2.6-glibc2.3-x86_64.tar.Z
2025-09-23 13:06:48  381091271 lsf10.1_lnx312-lib217-armv8-601547.tar.Z
2025-09-23 13:06:48  297835935 lsf10.1_lnx312-lib217-armv8.tar.Z
2025-09-23 13:06:48  211570759 lsf10.1_lsfinstall_linux_x86_64.tar.Z
2025-09-23 13:07:53   19033437 lsf10.1_no_jre_lsfinstall.tar.Z
2025-09-23 17:29:52         83 lsf_std_entitlement.dat
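Assuming the installers sit in a local ./lsf/ directory, the upload could look like the sketch below (the bucket name is a placeholder; reuse the bucket from your SOCA install):

```shell
# Upload the installers, fix packs, and entitlement file to the S3 prefix.
# <my_bucket> is a placeholder; replace it with your SOCA bucket.
aws s3 cp ./lsf/ s3://<my_bucket>/lsf/ --recursive

# Confirm everything landed as expected
aws s3 ls s3://<my_bucket>/lsf/
```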

Note

You may use a different bucket if necessary. Just ensure that the Controller, ComputeNode, and LoginNode IAM roles have the correct S3 permissions before installing SOCA.

Warning

LSF integration on SOCA requires JSON support. Ensure your LSF version is at least 10.1.0.14 by running lsid:

# Wrong version (10.1.0.0)
lsid
IBM Spectrum LSF Standard 10.1.0.0, Jul 08 2016
.....

# Correct version (10.1.0.14)
lsid
IBM Spectrum LSF Standard 10.1.0.14, April 20 2023

Also verify that you have uploaded the correct fix pack (10.1.0.14 is the tested version) to your S3 bucket.
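The version check can also be scripted. The sketch below simulates the lsid output shown above and compares it against the minimum supported release using sort -V; on a real cluster you would capture the output of lsid instead of hard-coding it:

```shell
# Simulated lsid output; on a real cluster use: lsid_out="$(lsid | head -n1)"
lsid_out="IBM Spectrum LSF Standard 10.1.0.14, April 20 2023"

# Extract the four-part version string (e.g. 10.1.0.14)
ver="$(echo "$lsid_out" | grep -oE '([0-9]+\.){3}[0-9]+')"
min="10.1.0.14"

# sort -V orders version strings; if $min sorts first (or equal), we're good
if [ "$(printf '%s\n%s\n' "$min" "$ver" | sort -V | head -n1)" = "$min" ]; then
  echo "OK: LSF $ver meets the $min minimum"
else
  echo "Too old: LSF $ver is below the $min minimum"
fi
```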

Configure SOCA installer to deploy LSF

To enable LSF on SOCA, first open default_config.yml.

Add lsf to the scheduler.scheduler_engine section:

  scheduler:
    scheduler_engine: 
      # - "openpbs" # Production ready - Stable & Tested
      - "lsf" # Preview - Development, not stable, not fully tested and not suitable for production.
      # - "slurm" # Preview - Development, not stable, not fully tested and not suitable for production.

Note

You can keep openpbs / slurm if you want to run a dual-scheduler setup, or you can choose to use only LSF.

Then, navigate to the system.scheduler.lsf section below and update the following parameters:

lsf:
    # Install path. We recommend not changing this path;
    # if you do, make sure to update the relevant cluster_analytics / log_backup paths as well
    # Note: there is no sysconfigdir_path for LSF, everything is self-contained within install_prefix_path
    # Note: $SOCA_CLUSTER_ID will be automatically replaced by the SOCA Cluster Name specified at install time
    install_prefix_path: "/opt/soca/$SOCA_CLUSTER_ID/schedulers/default/lsf"

    # LSF version, used to derive the default LSF folder name
    # Supported version:
    # - 10.1 Service Pack 14 (10.1.0.14)
    # Note: this should match the major.minor version (e.g. if 10.1.0.14, use 10.1).
    # This can be verified in the LSF installer name (e.g. lsf10.1_lsfinstall_linux_x86_64) or in an existing install (/opt/lsf/<version>)
    version: "10.1"

    # Required - Path to your S3 folder containing the LSF installer, e.g. s3://mybucket/lsf/ (IMPORTANT: must end with a /).
    # Must contain all required install files, plus the entitlement file and fix packs.
    # IMPORTANT: Must use the same bucket as the one specified at install time
    lsf_installer_s3_uri: "s3://<my_bucket>/lsf/"

    # Required - Name of the installer file, e.g. lsf10.1_lsfinstall_linux_x86_64.tar.Z
    lsf_installer_file_name: "lsf10.1_lsfinstall_linux_x86_64.tar.Z"

    # Required - Name of the entitlement file stored inside lsf_installer_s3_uri: e.g: lsf_std_entitlement.dat
    lsf_entitlement_file_name: "lsf_std_entitlement.dat"

    # Fix Pack (both x86/aarch64) file name
    lsf_fixpack_x86_file_name: "lsf10.1_linux2.6-glibc2.3-x86_64-601547.tar.Z" # Name of your x86_64 fixpack installer: e.g: lsf10.1_lnx418-lib228-x86_64.tar.Z
    lsf_fixpack_aarch64_file_name: "lsf10.1_lnx312-lib217-armv8-601547.tar.Z" # Name of your aarch64 LSF fixpack installer: e.g: lsf10.1_lnx418-lib228-aarch64.tar.Z

You're all set; you can now continue with a regular SOCA installation. While your cluster is provisioning, we suggest reviewing the SOCA LSF bootstrap scripts to become familiar with the automation happening behind the scenes.
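Before launching the install, it can help to confirm that every file referenced in the lsf config section actually exists under lsf_installer_s3_uri. A minimal pre-flight sketch (the bucket is a placeholder; the file names mirror the example config above):

```shell
# Check each configured file exists in the S3 prefix before installing.
# <my_bucket> is a placeholder; substitute your SOCA bucket.
S3_URI="s3://<my_bucket>/lsf/"
for f in lsf10.1_lsfinstall_linux_x86_64.tar.Z \
         lsf_std_entitlement.dat \
         lsf10.1_linux2.6-glibc2.3-x86_64-601547.tar.Z \
         lsf10.1_lnx312-lib217-armv8-601547.tar.Z; do
  aws s3 ls "${S3_URI}${f}" > /dev/null || echo "MISSING: ${f}"
done
```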

Interact with your LSF cluster

Warning

Node management is a preview feature.

To enable it, edit cluster_manager/orchestrator/jobs_dispatcher.sh and cluster_manager/orchestrator/nodes_manager.sh and change export SOCA_PREVIEW_NEXTGEN_SCHEDULER="false" to export SOCA_PREVIEW_NEXTGEN_SCHEDULER="true".

Nodes won't be provisioned if you use the legacy scheduler.
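The flag flip can be done with a one-line sed. The real targets are the two orchestrator scripts named above; the sketch below runs the same expression against a temporary file so it can be verified safely first:

```shell
# Demonstrate the sed expression on a temp file standing in for
# jobs_dispatcher.sh / nodes_manager.sh.
f="$(mktemp)"
echo 'export SOCA_PREVIEW_NEXTGEN_SCHEDULER="false"' > "$f"

# In-place edit: flip the preview flag from "false" to "true"
sed -i 's/SOCA_PREVIEW_NEXTGEN_SCHEDULER="false"/SOCA_PREVIEW_NEXTGEN_SCHEDULER="true"/' "$f"

flag="$(cat "$f")"
echo "$flag"
rm -f "$f"
```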

Load your LSF shell

Once your system is up and running, load your LSF shell by running soca_lsf_<scheduler_identifier> on your machine, where <scheduler_identifier> is a unique identifier automatically generated by SOCA.

This unique identifier is helpful if you plan to host multiple LSF environments on your SOCA controller or if you intend to access an on-premises LSF system through SOCA. You can install as many LSF environments as you want on your SOCA environment.

# By default, LSF commands are not available on your shell
[socaadmin@ip-202-0-107-9 ~]$ bhosts
bash: bhosts: command not found

# Load your LSF using soca_lsf_<identifier> command
[socaadmin@ip-202-0-107-9 ~]$ soca_lsf_lsf-default-soca-lsfdemo
========= SOCA =========
>> LSF environment loaded, you can now run commands such as bsub/bjobs/lshosts etc ...
>> LSF is installed under: /opt/soca/soca-lsfdemo/schedulers/default/lsf
>> Add /bin/soca_lsf to your .bashrc / .bash_profile to automatically run this script
>> Type exit to close this shell
========================

# You can now run all your regular LSF commands
(soca_lsf) socaadmin@ip-202-0-107-9:~# bhosts
HOST_NAME          STATUS       JL/U    MAX  NJOBS    RUN  SSUSP  USUSP    RSV
ip-202-0-107-9.us closed          -      1      0      0      0      0      0

Submit a job

CLI

You can submit an LSF job using the bsub command:

(soca_lsf) socaadmin@ip-202-0-107-9:~# bsub -Jd "instance_type=m6i.4xlarge scratch_size=50" sleep 3600
Job <1> is submitted to default queue <normal>.

Pass the SOCA Job Resource Parameters via -Jd (Job Description).

Web Interface

You can submit your LSF jobs using the SOCA Web Interface:

HTTP API

You can submit your LSF job using the SOCA HTTP REST API.
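A request might look like the hedged sketch below. The endpoint path, header names, and payload shape are assumptions for illustration only; confirm them against your deployment's API documentation before use:

```shell
# Hypothetical job submission via the SOCA HTTP API.
# <soca_endpoint> and <api_token> are placeholders; the /api/scheduler/job
# path and X-SOCA-* headers are assumptions to verify against your docs.
curl -X POST "https://<soca_endpoint>/api/scheduler/job" \
  -H "X-SOCA-USER: socaadmin" \
  -H "X-SOCA-TOKEN: <api_token>" \
  -H "Content-Type: application/json" \
  -d '{"payload": "<base64_encoded_job_script>"}'
```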

View jobs using Web Interface

You can view and control your LSF HPC jobs using the "My Job Queue" page on the SOCA Web Interface: