litani
Litani allows you to build up a graph of commands to execute in dependency order. You might find it more convenient than other meta-build systems for the following reasons:
LITANI WORKFLOW
The process of defining and running a Litani build proceeds as follows:
You may then periodically reload the HTML dashboard while the build is running, and use the run.json file during or after the build to gain insight into the build outcome.
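This workflow can be sketched from Python; the project name and jobs below are purely illustrative:

```python
import subprocess

def workflow_commands(project, jobs):
    """Return the three-phase litani invocation sequence:
    init, one add-job per job, then run-build."""
    cmds = [["litani", "init", "--project-name", project]]
    for pipeline, command in jobs:
        cmds.append([
            "litani", "add-job",
            "--pipeline-name", pipeline,
            "--ci-stage", "build",
            "--command", command,
        ])
    cmds.append(["litani", "run-build"])
    return cmds

def run_workflow(project, jobs):
    # Execute the sequence; litani must be on PATH.
    for cmd in workflow_commands(project, jobs):
        subprocess.run(cmd, check=True)
```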
MANUAL PAGES -- TABLE OF CONTENTS
The following man pages are available with local installations of Litani (try man litani-add-job), or online at https://awslabs.github.io/aws-build-accumulator/:
Chapter 1: Executable Commands
litani acquire-html-dir: print the path to a locked HTML report directory
litani add-job: add a single command with its dependencies
litani dump-run: print the current run as a JSON document to stdout
litani get-jobs: print a list of jobs to be run
litani init: set up a new Litani run
litani print-html-dir: print the path to a continually-updated report directory
litani release-html-dir: unlock a previously-locked HTML report directory
litani run-build: start a Litani run
litani set-jobs: set the list of jobs to be run
litani transform-jobs: print a list of jobs to be modified before running
Chapter 5: File Formats & Conventions
litani outcome-table.json: user-provided table to decide the outcome of a job
litani run.json: description of a single Litani run
COPYING
Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. Litani's source code is licensed under the Apache License 2.0, and the manual pages are licensed under CC-BY-SA 4.0. See the LICENSE files for details.
SEE ALSO
litani transform-jobs
NAME
litani transform-jobs - Print a list of jobs to be modified before running
DESCRIPTION
This program allows clients to add to, delete from, or modify a list of Litani jobs before running them. The program prints out all the jobs that have been added to the current run so far, and then expects to read a new list of jobs on stdin. This new list of jobs will be 'saved' and run upon invocation of litani-run-build(1).
This program prints the list of jobs that have been added so far in JSON format. It also expects to read a list of jobs as a JSON list. Each job in the list follows the same schema as the elements of the ["jobs"]["wrapper_arguments"] key in the litani run.json file; see litani-run.json(5) for the schema. Most users will run this program before running litani run-build, when none of the jobs have started; in that case each job matches the schema of an unstarted job, meaning the complete key will be false and there will be no start_time key.
If this program prints a job, and the JSON list written to stdin contains a job with the same job id, then Litani will overwrite the original job with the one given on stdin. The 'job id' is the value of ["job_id"] in the job dictionary, as specified in litani-run.json(5).
If this program prints a job, but a job with the same job id does not exist in the list of jobs written to this program's stdin, then that job will be deleted.
If this program reads a job on stdin whose job id did not exist in the list of jobs printed on stdout, then that job will be added to the list of jobs to be executed. However, it is highly recommended to add jobs using litani-add-job(1) rather than adding a new dict to the JSON list. See CAVEATS below.
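The overwrite and delete semantics can be sketched as pure transformations over the JSON job list read from transform-jobs; the tag and prefix below are illustrative:

```python
# Sketch of the semantics described above, as pure functions over the
# JSON job list: omitting a job deletes it; re-emitting a job with the
# same "job_id" but different fields overwrites it.
def delete_jobs_with_tag(jobs, tag):
    """Writing back this filtered list deletes the matching jobs."""
    return [job for job in jobs if tag not in (job.get("tags") or [])]

def prefix_commands(jobs, prefix):
    """Writing back this list overwrites each job's command,
    because the job ids are preserved."""
    return [dict(job, command=prefix + job["command"]) for job in jobs]
```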
MOTIVATION & BEST PRACTICES
This program allows users to modify Litani-based builds that they do not control. Litani-based builds typically use a so-called 'run script' to make the following Litani invocations:

1. litani init, to set up a new run;
2. litani add-job, once for each job in the build;
3. litani run-build, to execute all of the added jobs.
If you cannot modify the run script, you will not have a chance to add custom jobs before running them. However, if the run script includes a way to allow users to invoke litani transform-jobs after step 2 but before step 3, then users who cannot modify the run script can nevertheless add their own jobs.
Here are a couple of suggestions for how to write a run script such that downstream users can extend your build with additional jobs.
Add a --job-transformer flag to specify a custom job-transformer program
Suppose that you have a run script called run.py that performs the three steps above. Consider adding a flag (we suggest --job-transformer J) to the script. When users pass this flag to run.py, it runs J as a subprocess after adding all the jobs, just before running litani run-build. Users can supply their own J, which would run litani transform-jobs as a subprocess and modify jobs as needed.
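A minimal sketch of such a run script, assuming the flag name suggested above (run.py, its project name, and its jobs are hypothetical):

```python
import argparse
import subprocess

def parse_args(argv):
    parser = argparse.ArgumentParser()
    # J is a user-supplied program that itself runs
    # `litani transform-jobs` and edits the job list.
    parser.add_argument("--job-transformer", metavar="J")
    return parser.parse_args(argv)

def main(argv):
    args = parse_args(argv)
    subprocess.run(["litani", "init", "--project-name", "example"], check=True)
    # ... all the litani add-job invocations go here ...
    if args.job_transformer:
        # Run J after the jobs are added, just before run-build.
        subprocess.run([args.job_transformer], check=True)
    subprocess.run(["litani", "run-build"], check=True)
```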
Add a --no-standalone flag to suppress `litani init' and `litani run-build'
Suppose that you have various run scripts called run-1.py, run-2.py, ... that each perform the three steps above, but which each add different jobs to the build. Suppose you then add a flag (we suggest --no-standalone) to each of those scripts, which has the effect of preventing them from running litani init and litani run-build. You can then combine the jobs from all of those scripts by running them as follows:
$ litani init                   # run this manually in your shell
$
$ ./run-1.py --no-standalone    # add the jobs emitted by run-1.py
$                               # without re-initializing the build
$                               # or running it yet
$
$ ./run-2.py --no-standalone    # same for run-2.py. You could also
$                               # run these programs in parallel
$                               # because it is safe to run parallel
$                               # invocations of litani add-job
...
$ ./transform-all-jobs.sh       # This script invokes `litani
$                               # transform-jobs` as a subprocess
$                               # and modifies the jobs
$
$ litani run-build              # run all jobs added from all scripts
This allows users to 'compose' Litani-based builds that they do not have control over, and also modify them before running.
MULTIPROCESS SAFETY
It is safe to run this program in parallel with invocations of litani-add-job(1), but this usually doesn't make sense; typically you want to ensure that your jobs have been added before transforming them. Therefore, wait until all invocations of litani add-job have terminated before running this program. However, see also CAVEATS below for when this does make sense.
It is safe to run several invocations of this program at the same time: this program always writes jobs out atomically, so an invocation will never read a file that has been half-written by another. However, the result of running multiple invocations of this program will be unpredictable if the transformation is not idempotent, since one invocation may transform a job that has already been transformed.
OPTIONS
This program does not accept any command-line options. It does expect to read a list of new jobs on stdin, and will block until stdin is closed. To print the entire run without modifying any jobs, use litani-dump-run(1) instead.
CAVEATS
It is possible to add new jobs using this command, by appending a dict to the output of this command and writing the resulting list back to stdin. However, this is not recommended, because it bypasses the error-checking and enforcement of internal invariants that litani-add-job(1) performs. Thus, it is recommended to use litani-add-job(1) to add new jobs, either while litani transform-jobs runs or after it terminates.
EXAMPLES
This section contains code snippets illustrating how this command can be used.
Running litani transform-jobs
The following Python fragment illustrates how to invoke litani transform-jobs from a Python script, read the list of jobs, and write them back to transform-jobs.
proc = subprocess.Popen(
    [litani, "-v", "transform-jobs"], stdin=subprocess.PIPE,
    stdout=subprocess.PIPE, text=True, bufsize=0)
jobs = json.loads(proc.stdout.read())
# jobs is now a list of dicts, which can be processed here
print(json.dumps(jobs), file=proc.stdin)
proc.stdin.close()
proc.wait()
Adding a new root node
Sometimes, you want to add a job that runs before all others. That is, you start off with a build graph that looks like this:
foo.exe --> foo.out
\
\ bar.exe --> bar.out --> final.exe --> final.out
/
/ baz.exe --> baz.out
And you want to print `echo Starting` before any job starts to run:
foo.exe --> foo.out
________^ \
/ \ echo 'Start' --> fake.out --> bar.exe --> bar.out --> final.exe --> final.out
\________ /
v /
baz.exe --> baz.out
The way to accomplish this is:

1. use litani transform-jobs to add a new 'fake' dependency (fake.out in the diagram above) to the inputs of every job that previously had no inputs;
2. use litani-add-job(1) to add a job that runs the echo command and declares that dependency with --phony-outputs.

These steps can be done in either order. Here is a code example:
proc = subprocess.Popen(
    [litani, "transform-jobs"], stdin=subprocess.PIPE,
    stdout=subprocess.PIPE, text=True, bufsize=0)
jobs = json.loads(proc.stdout.read())

# A file name that the new root job will output, and which the old
# root nodes will depend on as an input. This is not a real file that
# the new root command will write, so we'll pass it to --phony-outputs
# rather than --outputs.
dependency_node = str(uuid.uuid4())

for job in jobs:
    if not job["inputs"]:
        job["inputs"] = [dependency_node]

print(json.dumps(jobs), file=proc.stdin)
proc.stdin.close()
proc.wait()

# Now, add the new root job. (It's also fine to do this before or during the
# transformation, but remember to skip the new job when iterating!)
subprocess.run([
    litani, "add-job",
    "--pipeline", "qux",
    "--ci-stage", "build",
    "--command", ">&2 echo Starting && sleep 2",
    "--phony-outputs", dependency_node,
], check=True)
SEE ALSO
litani add-job
NAME
litani add-job - Add a single command with its dependencies
SYNOPSIS
litani add-job --command C --pipeline-name P --ci-stage <build|test|report>
DESCRIPTION
This program adds a job to an existing Litani run. The run must have been previously created with litani-init(1); you may add as many jobs as you need before running them with litani-run-build(1).
A job is a command that is part of a dependency graph. Litani runs the command if any of the job's inputs are out-of-date; once the inputs have been brought up-to-date, Litani runs the job, thereby bringing the job's outputs up-to-date.
By repeatedly running litani add-job, you build up a dependency graph where jobs are connected by their inputs and outputs. Running litani run-build makes Litani run all of the jobs in the dependency graph that are needed to bring all inputs and outputs up-to-date.
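For example, two jobs can be linked through a file: one job that writes data.json, and one that reads it (the file, pipeline, and command names below are illustrative):

```python
import subprocess

def add_job_command(command, inputs=(), outputs=()):
    """Build one `litani add-job` invocation; litani infers the
    dependency graph from the --inputs/--outputs file names."""
    cmd = ["litani", "add-job",
           "--pipeline-name", "example",
           "--ci-stage", "build",
           "--command", command]
    if inputs:
        cmd += ["--inputs", *inputs]
    if outputs:
        cmd += ["--outputs", *outputs]
    return cmd

def add_chain():
    # gen.py runs first: its output is the second job's input.
    for cmd in (
            add_job_command("python3 gen.py", outputs=["data.json"]),
            add_job_command("python3 check.py", inputs=["data.json"])):
        subprocess.run(cmd, check=True)
```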
Many of the flags to this command give you platform-portable control on various aspects of the program: timeouts, dealing with return codes, output streams, and more.
MULTIPROCESS SAFETY
It is safe to run multiple invocations of litani add-job in parallel. Having a configure script that can run multiple invocations of litani add-job from a thread pool (or similar) is recommended to decrease configure times.
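A configure-script sketch using a thread pool; the runner parameter is an assumption of this sketch, there only so the function can be exercised without a litani installation:

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

def add_all_jobs(commands, runner=subprocess.run, workers=8):
    """Add one litani job per command, from a thread pool. This is
    safe because parallel `litani add-job` invocations are supported."""
    def add(command):
        runner(["litani", "add-job",
                "--pipeline-name", "example",
                "--ci-stage", "build",
                "--command", command],
               check=True)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # list() forces the map, so errors in workers propagate here.
        list(pool.map(add, commands))
```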
OPTIONS
--command CMD
--inputs F [F ...]
--outputs F [F ...]
--phony-outputs [OUT ...]
This is useful when you want to create a dependency ordering between two jobs, but the first job does not write any output files that the second job can depend on. To achieve this, you could pass --outputs phony-file when adding the first job, and --inputs phony-file when adding the second job. However, Litani will print a warning if the first job exits without writing a file called phony-file. To suppress the warning, instead pass --phony-outputs phony-file when adding the first job. Doing this obviates the need to use touchfiles for dependency ordering, which is how this must be done when using a traditional build system like make(1).
--description D
--tags TAG [TAG ...]
--pipeline-name P
--ci-stage <build|test|report>
--cwd DIR
--ok-returns RC [RC ...]
--ignore-returns RC [RC ...]
--timeout N
--timeout-ok
--timeout-ignore
--outcome-table F
--interleave-stdout-stderr
--stdout-file F
--stderr-file F
--pool P
--profile-memory
The memory usage will also be included on a graph on the HTML dashboard if this job's tags include stats-group:; see the TAGS section below for more details.
--profile-memory-interval N
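The dependency-ordering trick described under --phony-outputs above can be sketched as two add-job invocations; the commands, pipeline name, and the marker name phony-file are illustrative:

```python
import subprocess

# First job: advertises phony-file without ever writing it.
first = ["litani", "add-job",
         "--pipeline-name", "example", "--ci-stage", "build",
         "--command", "make deploy",
         "--phony-outputs", "phony-file"]

# Second job: depends on phony-file, so it runs after the first.
second = ["litani", "add-job",
          "--pipeline-name", "example", "--ci-stage", "test",
          "--command", "make smoke-test",
          "--inputs", "phony-file"]

def add_ordered_jobs():
    for cmd in (first, second):
        subprocess.run(cmd, check=True)
```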
TAGS
Litani jobs can be 'tagged' by providing one or more strings to the --tags flag. These are meant for users to add arbitrary metadata to jobs; there is no defined format for tags, and Litani does not attempt to parse them.
There are some pre-defined tags that affect how Litani renders the HTML report:
ENVIRONMENT VARIABLES
LITANI_JOB_ID
litani get-jobs
NAME
litani get-jobs - Print a list of jobs to be run
SYNOPSIS
litani get-jobs
DESCRIPTION
This program prints the list of jobs that have been added so far in JSON format. Each job in the list will follow the same schema as the elements of the ["jobs"]["wrapper_arguments"] key in the litani run.json file; see litani-run.json(5) for the schema.
MOTIVATION & BEST PRACTICES
This program can be used in conjunction with litani set-jobs to achieve the same functionality as litani transform-jobs. A user can read the current jobs, make changes to them, and then set the new list of jobs for the litani run.
This command may be run at any point after running litani init -- either before calling litani run-build, during the run, or after run-build has terminated.
Compared to transform-jobs, this command is useful if you just want to read the jobs and not write them, but can be used along with set-jobs to achieve a similar end result.
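The get-jobs/set-jobs round trip can be sketched in Python; prepend_echo is the same transformation as the jq example in litani-set-jobs(1):

```python
import json
import subprocess

def prepend_echo(jobs):
    """The transformation itself is ordinary list-of-dicts work."""
    return [dict(job, command="echo " + job["command"]) for job in jobs]

def transform_run():
    # Read the current jobs, transform them, write them back.
    out = subprocess.run(["litani", "get-jobs"],
                         capture_output=True, text=True, check=True)
    jobs = prepend_echo(json.loads(out.stdout))
    subprocess.run(
        ["litani", "set-jobs", "--from-string", json.dumps(jobs)],
        check=True)
```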
MULTIPROCESS SAFETY
This command is not safe to run concurrently with set-jobs, because set-jobs may delete a JSON file while this command reads it. This command is safe to run concurrently with add-job and run-build, because new job files are written atomically.
OPTIONS
-f, --out-file F
SEE ALSO
litani acquire-html-dir
NAME
litani acquire-html-dir - Print the path to a locked HTML report directory
SYNOPSIS
litani acquire-html-dir
DESCRIPTION
This program attempts to acquire a lock on the most recently-written Litani HTML report, and prints the path to that locked directory.
This command should be used to get a path to an HTML report that is guaranteed not to be deleted or modified by other processes that also use this locking API. If you instead want the path to an HTML report directory that Litani will continuously refresh with updated progress, use litani-print-html-dir(1).
If this command returns successfully, subsequent invocations of this command will either hang, time out, or print the path to a different report directory, until litani-release-html-dir(1) is run. This program should thus be used by cooperating processes that wish to have exclusive access to an HTML report. Well-behaved processes should only modify HTML report directories after this command has successfully returned, and should run litani-release-html-dir(1) after finishing their modifications.
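The cooperative protocol can be sketched as a context manager; the run parameter is an assumption of this sketch, there only so the logic can be exercised without litani installed:

```python
import subprocess
from contextlib import contextmanager

@contextmanager
def locked_html_dir(timeout=60, run=subprocess.run):
    """Acquire the newest HTML report directory, yield its path,
    and always release it afterwards."""
    out = run(["litani", "acquire-html-dir", "--timeout", str(timeout)],
              capture_output=True, text=True)
    if out.returncode != 0:
        raise RuntimeError("could not acquire an HTML report directory")
    html_dir = out.stdout.strip()
    try:
        yield html_dir
    finally:
        run(["litani", "release-html-dir", "--dir", html_dir], check=True)
```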
MULTIPROCESS SAFETY
It is safe to run this program concurrently with any other Litani command.
OPTIONS
-t/--timeout N
OUTPUT & RETURN CODE
This program prints out the path to an HTML report directory and returns 0 if it was able to acquire a lock on that directory. This program returns 2 if there was a command line error. Otherwise, this program prints nothing and returns 1.
SEE ALSO
litani print-html-dir
NAME
litani print-html-dir - Print the path to a continually-updated report directory
SYNOPSIS
litani print-html-dir
DESCRIPTION
This program prints the path to a Litani HTML report directory that will be continuously updated while litani-run-build(1) is concurrently running.
This command is intended to be used by clients who wish to read the most up-to-date report directory without modifying it. litani-run-build(1) will update the report directory periodically while it is running. Thus, this command can be used to print a path to be viewed in a web browser, for example.
litani-run-build(1) refreshes the report directory atomically; that is, the entire HTML report (including the pipeline subpages) is updated all at once. Clients can continue to read the new report data using the same path that this command originally printed. This is implemented by atomically moving a symbolic link from the old to the new report directory.
Clients should not resolve the path that this command prints out before accessing it. This command prints out the path to a symbolic link, and access to report data should be through the symbolic link only. In particular, litani-run-build(1) will occasionally garbage-collect old HTML report directories, but the path that this command prints out will always point to a current (not garbage-collected) report.
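The technique can be sketched as follows; litani's actual implementation may differ, and this only illustrates the atomic symlink swap:

```python
import os

def publish_report(new_report_dir, advertised_link):
    """Point advertised_link at new_report_dir atomically: readers of
    the link never observe a half-updated report."""
    temp_link = advertised_link + ".tmp"
    # Build the new symlink under a temporary name...
    if os.path.lexists(temp_link):
        os.remove(temp_link)
    os.symlink(new_report_dir, temp_link)
    # ...then atomically move it over the advertised path.
    os.replace(temp_link, advertised_link)
```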
MULTIPROCESS SAFETY
It is safe to run this program concurrently with any other Litani command.
OUTPUT & RETURN CODE
This program prints out the path to an HTML report directory and returns 0.
SEE ALSO
litani release-html-dir
NAME
litani release-html-dir - Unlock a previously-locked HTML report directory
SYNOPSIS
litani release-html-dir
DESCRIPTION
This program releases a lock on a Litani HTML report directory that had previously been locked through an invocation of litani-acquire-html-dir(1). This allows other processes to attempt to acquire the directory.
After this command returns, it becomes possible for a single subsequent (or currently-running) invocation of litani-acquire-html-dir(1) to terminate successfully.
It is an error to run this command if you have not previously acquired the specified report directory using litani-acquire-html-dir(1). Litani does not check that the locking API is being used correctly, so doing this may cause a race where more than one process can modify a single report directory concurrently.
It is important to release a directory after acquiring it and completing any modifications, so that a concurrent run of litani-run-build(1) can acquire stale HTML report directories for garbage collection.
MULTIPROCESS SAFETY
It is safe to run this program concurrently with any other Litani command.
OPTIONS
-d/--dir D
RETURN CODE
This program returns 0.
SEE ALSO
litani init
NAME
litani init - Set up a new Litani run
SYNOPSIS
litani init --project-name NAME
DESCRIPTION
This program creates a new Litani run, deleting any previously-added jobs, and allows you to add new jobs with litani add-job prior to running them with litani run-build.
When you run this program, Litani will write a file called .litani_cache_dir to the current directory, which will contain the path to Litani's output directory. Subsequent litani runs will read this file to determine where to find the output directory. Litani will search the following locations for the file:
This allows you to run litani init in the 'root' of a project, and subsequently run litani add-job further down in the project.
MULTIPROCESS SAFETY
It is safe but pointless to run litani init more than once in parallel. One of the invocations will 'win', and subsequent invocations of litani add-job will add jobs to a single run.
OPTIONS
--project-name NAME
--stages NAME [NAME ...]
--pools NAME:DEPTH [NAME:DEPTH ...]
--output-directory DIR
--output-prefix DIR
--output-symlink DIR
A useful pattern is to run this program as
litani init --output-prefix output --output-symlink output/latest
--no-print-out-dir
ENVIRONMENT VARIABLES
LITANI_RUN_ID
litani set-jobs
NAME
litani set-jobs - Set the list of jobs to be run
SYNOPSIS
litani set-jobs
DESCRIPTION
This program reads a list of jobs as a JSON list. Each job in the list should be a dict whose keys are the flags to litani-add-job(1). This program replaces all previously-added jobs with the jobs in the user-provided list.
MOTIVATION & BEST PRACTICES
This program can be used in conjunction with litani get-jobs to achieve the same functionality as litani transform-jobs. A user can read the current jobs, make changes to them, and then set the new list of jobs for the litani run.
This command can be used along with litani get-jobs to achieve a similar end result to litani transform-jobs, but instead of having access to the stdin and stdout of a subprocess this command uses input and output files, which may be more natural for use in a Unix pipeline.
EXAMPLES
Below is an example that demonstrates how to combine litani get-jobs and litani set-jobs to prepend echo to each command that has been added to a Litani run, using either a shell pipeline or a sequence of commands:
# Pipeline
litani get-jobs |
    jq '[.[] | (.command = "echo " + .command)]' |
    litani set-jobs

# Sequence
litani get-jobs --out-file /tmp/original.json
jq '[.[] | (.command = "echo " + .command)]' \
    < /tmp/original.json > /tmp/transformed.json
litani set-jobs --from-file /tmp/transformed.json
OPTIONS
-f, --from-file F
-s, --from-string S
SEE ALSO
litani dump-run
NAME
litani dump-run - Print the current run as a JSON document to stdout
SYNOPSIS
litani dump-run
DESCRIPTION
This program prints the run.json file for a Litani run to stdout or to the specified output file. The JSON file's schema is documented in litani-run.json(5).
This program is intended to be used while an invocation of litani-run-build(1) is running. You can access the run.json file in the run's output directory both during and after termination of the run-build process.
This program may be run as a Litani job, and it may be run as a subprocess from a command that is part of a Litani job. This allows commands to 'introspect' on a Litani run so far. It is guaranteed that if a single Litani job runs litani dump-run, all of the reverse dependencies of that job will have the "complete" key set to true in the printed run.json file. That is, any job that runs litani dump-run will always see the most up-to-date state of the build with respect to its own reverse dependencies.
MULTIPROCESS SAFETY
It is safe to run multiple invocations of litani dump-run in parallel. However, if you do so, the printed result may be slightly out-of-date.
If you run litani dump-run from a Litani job, Litani will attempt to ensure that the printed run is up-to-date with respect to that job. In particular, Litani will try to re-load the run until all the reverse dependencies have their "complete" field set to true.
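A sketch of a job introspecting the run; the pipelines/ci_stages/jobs nesting follows litani-run.json(5), and a null document means the run could not be loaded:

```python
import json
import subprocess

def load_current_run(retries=5):
    """Return the run.json document as a dict, or None if litani
    could not load the run within the retry budget."""
    out = subprocess.run(
        ["litani", "dump-run", "--retries", str(retries)],
        capture_output=True, text=True, check=True)
    return json.loads(out.stdout)

def completed_jobs(run):
    """Collect every job whose "complete" key is true."""
    if run is None:
        return []
    return [job
            for pipeline in run["pipelines"]
            for stage in pipeline["ci_stages"]
            for job in stage["jobs"]
            if job.get("complete")]
```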
OPTIONS
-r N, --retries N
-o F, --out-file F
OUTPUT
A JSON document printed to stdout. This will be either null (if this program was not able to load the run after a certain number of tries), or a document whose schema matches the one in litani-run.json(5).
RETURN CODE
Zero unless the program crashed.
litani run-build
NAME
litani run-build - Start a Litani run
SYNOPSIS
litani run-build
DESCRIPTION
This program executes a Litani run. The run must have been created using litani init and had jobs added to it using litani add-job.
MULTIPROCESS SAFETY
It is NOT SAFE to run multiple invocations of litani run-build in parallel.
OPTIONS
-n, --dry-run
-j N, --parallel N
-o F, --out-file F
--fail-on-pipeline-failure
--no-pipeline-dep-graph
-p P [P ...], --pipelines P [P ...]
-s S, --ci-stage S
RETURN CODE
This program will return 1 if it terminates abnormally. On normal termination:
litani outcome-table.json
NAME
litani outcome-table.json - user-provided table to decide the outcome of a job
DESCRIPTION
By default, a Litani job's outcome is set to success if its command returns 0, and to fail on any other return code or if the command times out. This can be customized to some extent using the --...-ok and --...-ignore flags of litani-add-job(1). For more advanced use cases, users can pass a JSON file called an 'outcome table' to the --outcome-table flag of litani-add-job(1). This makes it possible, for example, to map a return code of 0 to an outcome other than success, which is useful for negative tests. This manual describes the schema for the JSON file that users should pass to litani-add-job(1).
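For instance, here is a sketch of a table for a negative test, where the command is expected to fail with return code 1. The "type" values are an assumption of this sketch, matching the three rule kinds this page describes (return code, timeout, wildcard); check the key names against the schema below:

```python
import json

# Assumed rule kinds: a return-code match, then a catch-all wildcard.
outcome_table = {
    "outcomes": [
        {"type": "return-code", "value": 1, "action": "success",
         "comment": "negative test: the command must fail with code 1"},
        {"type": "wildcard", "action": "fail",
         "comment": "anything else (including code 0) is a failure"},
    ],
}

with open("negative-test-outcomes.json", "w") as handle:
    json.dump(outcome_table, handle, indent=2)
# then: litani add-job --outcome-table negative-test-outcomes.json ...
```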
SUMMARY
Schema for user-provided outcome table
{
    "outcomes": [{
        "value": int ,
        "action": outcome_ (),
        Optional("comment"): str ,
    }, {
        "action": outcome_ (),
        Optional("comment"): str ,
    }, {
        "action": outcome_ (),
        Optional("comment"): str ,
    }],
}
DETAILED DOCUMENTATION
Schema for user-provided outcome table
{
A description of the outcome table as a whole.
"outcomes": [{
The outcome of the job will be the first item in this list that matches.
If the return code of the job matches the value of value, the outcome will be set to the value of action. The value of the optional comment key can contain a human-readable explanation for this outcome.
"value": int ,
"action": outcome_ (),
Optional("comment"): str ,
}, {
If this job timed out, the outcome will be set to the value of action. The value of the optional comment key can contain a human-readable explanation for this outcome.
"action": outcome_ (),
Optional("comment"): str ,
}, {
The "wildcard" action type matches any job and sets its outcome to the value of action. It is recommended to place a wildcard action as the last element of the list of outcomes to catch all jobs that were not matched by a previous rule.
"action": outcome_ (),
Optional("comment"): str ,
}],
}
SEE ALSO
litani-add-job(1)
litani run.json
NAME
litani run.json - description of a single Litani run
DESCRIPTION
Litani writes a run.json file to its cache directory when the user invokes litani-run-build(1). This file contains everything needed to generate the HTML dashboard, with the notable exception of any artefacts that the jobs might emit. (However, Litani does capture jobs' stdout and stderr; any output will be available in run.json.) This manual page describes run.json's format.
outcome_, status_, and single_job_schema_ are sub-schemata that are referenced multiple times within this schema, and so are defined separately below. All timestamps are in ISO-8601.
SUMMARY
Schema for entire run.json file
{
"run_id": str ,
"project": str ,
"stages": [str ],
"pools": {Optional(str): int },
"start_time": time_str_ ,
"version": str ,
"version_major": int ,
"version_minor": int ,
"version_patch": int ,
"release_candidate": bool ,
Optional("end_time"): time_str_ ,
"status": status_ (),
"aux": dict ,
"parallelism": Any ({
    Optional("trace"): [{
        "time": time_str_ ,
        "finished": int ,
        "running": int ,
        "total": int ,
    }],
Optional("max_parallelism"): int ,
Optional("n_proc"): Any(None, int ),
}),
"pipelines": [{
"name": str ,
"status": status_ (),
"ci_stages": [{
"name": str ,
"status": outcome_ (),
"progress": All(int, Range(min=0, max=100 )),
"jobs": [Any ({
"complete": False ,
"duration_str": None ,
"wrapper_arguments": single_job_schema_ (),
}, {
"complete": False ,
"start_time": time_str_ ,
"duration_str": None ,
"wrapper_arguments": single_job_schema_ (),
}, {
"complete": True ,
"outcome": outcome_ (),
"end_time": time_str_ ,
"start_time": time_str_ ,
"timeout_reached": bool ,
"command_return_code": int ,
"wrapper_return_code": int ,
"stderr": Any([str], None ),
"stdout": Any([str], None ),
"duration_str": Any(str, None ),
"wrapper_arguments": single_job_schema_ (),
"loaded_outcome_dict": Any(dict, None ),
"memory_trace": {
"rss": int ,
"vsz": int ,
"human_readable_rss": str ,
"human_readable_vsz": str ,
},
Optional("trace"): [{
"vsz": int ,
"time": time_str_ ,
}],
"latest_symlink": Any(str, None ),
}
Schema for 'wrapper_arguments' key
{
"job_id": str ,
"command": str ,
"ci_stage": str ,
"verbose": bool ,
"timeout_ok": bool ,
"pipeline_name": str ,
"very_verbose": bool ,
"timeout_ignore": bool ,
"profile_memory": bool ,
"profile_memory_interval": int ,
"cwd": Any(str, None ),
"interleave_stdout_stderr": bool ,
"pool": Any(str, None ),
"tags": Any([str], None ),
"timeout": Any(int, None ),
"inputs": Any([str], None ),
"outputs": Any([str], None ),
"description": Any(str, None ),
"status_file": Any(str, None ),
"stderr_file": Any(str, None ),
"stdout_file": Any(str, None ),
"ok_returns": Any([str], None ),
"outcome_table": Any(str, None ),
"phony_outputs": Any([str], None ),
"ignore_returns": Any([str], None ),
"subcommand": Any("exec", "add-job"),
}
Schema for a pipeline or run status
Any("success", "fail", "in_progress")
Schema for a job or ci_stage outcome
Any("success", "fail", "fail_ignored")
DETAILED DOCUMENTATION
Schema for entire run.json file
{
"run_id": str ,
A globally-unique ID for the run.
"project": str ,
A name for the project that this run is part of. This name is used by the HTML report generator and can be used to group related sets of runs, but is otherwise not used by litani.
"stages": [str ],
The CI stages that each job can be a member of. Stage names can be provided through the --stages flag of litani-init(1). Default stages "build", "test" and "report" are used if the flag is not used.
"pools": {Optional(str): int },
A mapping from pool names to the depth of the pool. Jobs can be a member of zero or one pool. The depth of a pool that a set of jobs belong to limits the number of those jobs that litani will run in parallel.
"start_time": time_str_ ,
The time at which the run started.
"version": str ,
The version string of the Litani binary that ran this run.
"version_major": int ,
Litani's major version number.
"version_minor": int ,
Litani's minor version number.
"version_patch": int ,
Litani's patch version number.
"release_candidate": bool ,
false if this version of Litani is a tagged release.
Optional("end_time"): time_str_ ,
The time at which the run ended. This key will only exist if status is not equal to "in_progress".
"status": status_ (),
The state of this run, see the status schema below.
"aux": dict ,
A free-form dict that users can add custom information into. There are no constraints on the format of this dict, but it is recommended that users add their information to a sub-dict with a key that indicates its function. For example, to add information pertaining to a CI run, users might add a key called "continuous_integration_data" whose value is a sub-dict containing all required fields.
"parallelism": Any ({
This dict contains information about the parallelism level of the jobs that litani runs. This is to measure whether the run is using as many processor cores as possible over the duration of the run.
Optional("trace"): [{
A list of samples of the run's concurrency level.
"time": time_str_ ,
The time at which the sample was taken.
"finished": int ,
How many jobs have finished
"running": int ,
How many jobs are running
"total": int ,
The total number of jobs
}],
Optional("max_parallelism"): int ,
The maximum parallelism attained over the run
Optional("n_proc"): Any(None, int ),
The number of processors detected on this machine
}),
"pipelines": [{
Each pipeline contains ci_stages which contain jobs.
"name": str ,
The pipeline name. The set of pipeline names are all the names passed to the --pipeline-name flag of litani-add-job(1).
"status": status_ (),
The pipeline's state, see the status schema below.
"ci_stages": [{
Each ci_stage contains a list of jobs.
"complete": bool ,
Whether all the jobs in this stage are complete.
"name": str ,
The stage's name. This is any of the stages of the project.
"status": outcome_ (),
The stage's state, see the outcome schema below.
"progress": All(int, Range(min=0, max=100 )),
"jobs": [Any ({
The list of all the jobs in this ci_stage in this pipeline. There are three different forms the value of this key can take.
If complete is false and no start_time key exists, then this job has not yet started.
"complete": False ,
"duration_str": None ,
"wrapper_arguments": single_job_schema_ (),
The arguments passed to this job, see the single_job_schema schema below.
}, {
If complete is false but the start_time key exists, then the job has started running but has not yet finished.
"complete": False ,
"start_time": time_str_ ,
The time at which the job started running.
"duration_str": None ,
"wrapper_arguments": single_job_schema_ (),
The arguments passed to this job, see the single_job_schema schema below.
}, {
How long the job ran for.
"complete": True ,
If complete is true, then the job has terminated.
"outcome": outcome_ (),
The job's outcome, see the outcome schema below.
"end_time": time_str_ ,
The time at which the job completed.
"start_time": time_str_ ,
The time at which the job started running.
"timeout_reached": bool ,
Whether the job reached its timeout limit.
"command_return_code": int ,
The command's return code.
"wrapper_return_code": int ,
"stderr": Any([str], None ),
A list of strings that the command printed to its stderr.
"stdout": Any([str], None ),
A list of strings that the command printed to its stdout.
"duration_str": Any(str, None ),
A human-readable duration of this job (HH:MM:SS).
"wrapper_arguments": single_job_schema_ (),
The arguments passed to this job, see the single_job_schema schema below.
"loaded_outcome_dict": Any(dict, None ),
If wrapper_arguments["outcome_table"] is not null, the value of this key will be the deserialized data loaded from the outcome table file.
"memory_trace": {
If profile_memory was set to true in the wrapper arguments for this job, this dict will contain samples of the command's memory usage.
Optional("peak"): {
The command's peak memory usage.
"rss": int ,
Peak resident set
"vsz": int ,
Peak virtual memory size
"human_readable_rss": str ,
Peak resident set
"human_readable_vsz": str ,
Peak virtual memory size
},
Optional("trace"): [{
A list of samples of memory usage.
"rss": int ,
Resident set
"vsz": int ,
Virtual memory
"time": time_str_ ,
The time at which the sample was taken.
}],
},
})],
}],
"latest_symlink": Any(str, None ),
The symbolic link to the report advertised to users.
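The three job forms above can be distinguished mechanically when post-processing a run.json file. The following Python sketch (the helper names are illustrative, not part of Litani) classifies a deserialized job entry and tallies the jobs of a ci_stage entry:

```python
def job_state(job):
    """Classify a run.json job entry using the three schema forms above:
    not yet started, started but still running, or terminated."""
    if job["complete"]:
        return "complete"
    if "start_time" in job:
        return "running"
    return "not started"


def stage_summary(stage):
    """Summarize a ci_stage entry as (name, progress, per-state job counts)."""
    counts = {"complete": 0, "running": 0, "not started": 0}
    for job in stage["jobs"]:
        counts[job_state(job)] += 1
    return stage["name"], stage["progress"], counts
```

A script would typically obtain these stage dicts by calling json.load() on the run.json file during or after a build.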
Schema for 'wrapper_arguments' key
The wrapper_arguments key to run.json maps to the following dict. None of the values in this dict change at any point during the run; they are mostly the same as the flags passed to litani-add-job(1) for this job. {
"job_id": str ,
A globally-unique ID for this job.
"command": str ,
The command that litani will execute in a subshell.
"ci_stage": str ,
The name of the 'stage' that this job will execute in, used for organizing the HTML dashboard.
"verbose": bool ,
"timeout_ok": bool ,
If true, then if this job times out, the outcome will be set to 'success'.
"pipeline_name": str ,
The name of the 'pipeline' that this job will execute in, used for organizing the HTML dashboard.
"very_verbose": bool ,
"timeout_ignore": bool ,
If true, then if this job times out, the outcome will be set to 'fail_ignored'.
"profile_memory": bool ,
If true, then litani will regularly sample the memory usage of this job's command while it runs. Samples are stored in the job's memory_trace.
"profile_memory_interval": int ,
How frequently (in seconds) litani will profile the command's memory use, if profile_memory is true.
"cwd": Any(str, None ),
The directory that litani will run the command in.
"interleave_stdout_stderr": bool ,
Whether the command's stderr will be sent to the stdout stream. If true, the job's stderr key will be None and the stdout key will contain lines from both the command's stdout and stderr.
"pool": Any(str, None ),
The pool that this job will execute in; if not null, then it must be a key in the pools dict of the overall run.
"tags": Any([str], None ),
A list of user-specified tags. Litani mostly doesn't interpret these, although the HTML dashboard generator does use some of them. Tags are intended to help users find particular jobs for data analysis and can contain arbitrary data.
"timeout": Any(int, None ),
The number of seconds that Litani will allow the job to run for before sending SIGTERM followed by SIGKILL (see signal(3)).
"inputs": Any([str], None ),
The list of files that should be made up-to-date before the job will run
"outputs": Any([str], None ),
The list of files that this job will make up-to-date after it completes
"description": Any(str, None ),
A human-readable description of this job
"status_file": Any(str, None ),
"stderr_file": Any(str, None ),
A file to redirect stderr to, as well as buffering it internally
"stdout_file": Any(str, None ),
A file to redirect stdout to, as well as buffering it internally
"ok_returns": Any([str], None ),
A list of return codes. If the command exits with any of these return codes (or 0), then the outcome will be set to 'success'.
"outcome_table": Any(str, None ),
A file to load an outcome table from.
"phony_outputs": Any([str], None ),
A list of outputs that Litani will not warn about if they were not created by the job.
"ignore_returns": Any([str], None ),
A list of return codes. If the command exits with any of these return codes (or 0), then the outcome will be set to 'fail_ignored'.
"subcommand": Any("exec", "add-job "),
Schema for a pipeline or run status
Pipelines and runs have a "status". The status is "in_progress" while some of the jobs are incomplete, and either "success" or "fail" once all jobs complete.
Any("success", "fail", "in_progress")
Schema for a job or ci_stage outcome
Jobs and ci_stages have an "outcome" (though, confusingly, the key is "status" for ci_stages). "fail_ignored" means that the job failed but the user specified that the job's dependencies should run anyway. If a pipeline contains a job whose outcome is "fail_ignored", then the status of the pipeline will be "fail" after all of its jobs complete.
Any("success", "fail", "fail_ignored")