
Advanced configuration

One of the design goals of MMS 1.0 is ease of use. The default settings of MMS should be sufficient for most use cases. This document describes advanced configurations that allow users to customize MMS's behavior in depth.

Environment variables

Users can set environment variables to change MMS behavior. The following is a list of variables that users can set for MMS:

Note: environment variables have higher priority than command line parameters or config.properties, and will override other property values.

Command line parameters

Users can pass the following parameters when starting MMS; these parameters override the default MMS behavior:

See Running the Model Server for details.

config.properties file

MMS uses a config.properties file to store configurations. MMS uses the following order to locate this config.properties file:

  1. If the MMS_CONFIG_FILE environment variable is set, MMS loads the configuration from the file it points to.
  2. If the --mms-config parameter is passed to multi-model-server, MMS loads the configuration from that parameter.
  3. If there is a config.properties file in the folder where the user starts multi-model-server, MMS loads the config.properties file from the current working directory.
  4. If none of the above is specified, MMS loads a built-in configuration with default values.
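For example (the configuration path below is hypothetical), option 1 can be exercised before launching multi-model-server:

```shell
# Point MMS at an explicit configuration file; MMS_CONFIG_FILE takes
# priority over --mms-config and the working-directory lookup.
export MMS_CONFIG_FILE=/opt/mms/config.properties
echo "$MMS_CONFIG_FILE"  # → /opt/mms/config.properties
```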

Note: the Docker image that MMS provides has slightly different default values.

Customize JVM options

To restrict the MMS frontend memory footprint, certain JVM options are set via the vmargs property in the config.properties file.

Users can adjust these JVM options to fit their memory requirements if needed.
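As an illustration (the values are hypothetical and should be tuned per deployment), a vmargs setting might look like:

```properties
# JVM options applied to the MMS frontend; adjust heap size and GC
# settings to fit the memory available on the host
vmargs=-Xmx128m -XX:+UseG1GC -XX:MaxMetaspaceSize=32M -XX:+ExitOnOutOfMemoryError
```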

Load models at startup

Users can configure MMS to load models at startup. MMS can load models from the model_store or from HTTP(S) URLs.
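For example (the paths and model names below are illustrative):

```properties
# directory containing model archives to serve
model_store=/models
# models to load at startup: archive names or HTTP(s) URLs
load_models=squeezenet.mar,https://example.com/models/noop.mar
```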

Note: the model_store and load_models properties can be overridden by command line parameters.

Configure MMS listening port

MMS doesn't support authentication natively. To avoid unauthorized access, MMS only allows localhost access by default. The inference API listens on port 8080 and accepts HTTP requests. The management API listens on port 8081 and accepts HTTP requests. See Enable SSL for configuring HTTPS.

Here are a couple of examples:

# bind inference API to all network interfaces with SSL enabled
inference_address=https://0.0.0.0:8443

# bind inference API to private network interfaces
inference_address=https://172.16.1.10:8080

Enable SSL

For users who want to enable HTTPS, change the inference_address or management_address protocol from http to https, for example: inference_address=https://127.0.0.1. This will make MMS listen on localhost port 443 and accept HTTPS requests.

Users must also provide a certificate and private keys to enable SSL. MMS supports two ways to configure SSL:

  1. Use keystore
    • keystore: keystore file location. If there are multiple private key entries in the keystore, the first one will be picked.
    • keystore_pass: keystore password. The key password (if applicable) MUST be the same as the keystore password.
    • keystore_type: type of keystore, default: PKCS12
  2. Use private-key/certificate files
    • private_key_file: private key file location; supports both PKCS8 and OpenSSL private keys.
    • certificate_file: X509 certificate chain file location.

Self-signed certificate example

This is a quick example of enabling SSL with a self-signed certificate.

  1. Use java keytool to create a keystore
    keytool -genkey -keyalg RSA -alias mms -keystore keystore.p12 -storepass changeit -storetype PKCS12 -validity 3600 -keysize 2048 -dname "CN=www.MY_MMS.com, OU=Cloud Service, O=model server, L=Palo Alto, ST=California, C=US"
    

Configure the following properties in config.properties:

inference_address=https://127.0.0.1:8443
management_address=https://127.0.0.1:8444
keystore=keystore.p12
keystore_pass=changeit
keystore_type=PKCS12
  2. Use OpenSSL to create a private key and certificate, for example (the file names are illustrative):
    openssl req -x509 -newkey rsa:4096 -keyout mykey.key -out mycert.pem -days 365 -nodes
    

Configure the following properties in config.properties:

inference_address=https://127.0.0.1:8443
management_address=https://127.0.0.1:8444
private_key_file=mykey.key
certificate_file=mycert.pem

Configure Cross-Origin Resource Sharing (CORS)

CORS is a mechanism that uses additional HTTP headers to tell a browser to let a web application running at one origin (domain) have permission to access selected resources from a server at a different origin.

CORS is disabled by default. Configure following properties in config.properties file to enable CORS:

# cors_allowed_origin is required to enable CORS, use '*' or your domain name 
cors_allowed_origin=https://yourdomain.com
# required if you want to use preflight request 
cors_allowed_methods=GET, POST, PUT, OPTIONS
# required if the request has an Access-Control-Request-Headers header 
cors_allowed_headers=X-Custom-Header

Preloading a model

The model server gives users an option to take advantage of fork() semantics, i.e., copy-on-write, on Linux-based systems. To load a model before spinning up the model workers, use the preload_model option. When this option is set, the model server loads the model just before scaling the first model worker. All other workers share the same instance of the loaded model. This way, only the memory locations in the loaded model that are touched are copied over to the individual model workers' process memory space.

preload_model=true

Prefer direct buffer

The configuration parameter prefer_direct_buffer controls whether the model server uses direct memory specified by -XX:MaxDirectMemorySize. This parameter is for the model server only and doesn't affect other packages' usage of direct memory buffers. Default: false

prefer_direct_buffer=true

Restrict backend worker to access environment variable

Environment variables may contain sensitive information like AWS credentials. Backend workers execute arbitrary custom code from models, which may pose a security risk. MMS provides a blacklist_env_vars property that allows users to restrict which environment variables can be accessed by backend workers.
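As a sketch, assuming blacklist_env_vars accepts a regular expression over variable names (confirm the exact syntax against the MMS release in use), AWS credential variables could be hidden from workers with:

```properties
# environment variables matching this pattern are not visible to backend workers
blacklist_env_vars=AWS_.*
```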

Limit GPU usage

By default, MMS uses all available GPUs for inference. Use number_of_gpu to limit GPU usage.
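For example, to restrict MMS to a single GPU:

```properties
# use at most 1 GPU even if more are available
number_of_gpu=1
```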

Other properties

Most of these properties are designed for performance tuning. Adjusting them will impact scalability and throughput.
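A few commonly tuned properties are sketched below (names follow typical MMS releases; verify them against the version in use):

```properties
# number of backend workers created per model at load time
default_workers_per_model=4
# number of inference jobs that can be queued per model
job_queue_size=100
# maximum allowed size of a request payload, in bytes
max_request_size=6553500
```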

config.properties Example

See config.properties for docker