litani

Litani allows you to build up a graph of commands to execute in dependency order. You might find it more convenient than other meta-build systems for the following reasons:

• Litani maintains a JSON-formatted record of the current run, which can be printed at any time by running litani dump-run. This record includes various metadata about each of the commands in the build, including their return codes, run times, stdout and stderr, and more. See litani-run.json(5) for the schema.

• Litani generates (and constantly updates) an HTML dashboard from this JSON file while running your build. The dashboard is convenient for watching the progress of your build and investigating failures afterward.

• For builds where some of the commands might fail, Litani provides fine-grained control over its behaviour. For each job, you can decide whether to proceed with dependent jobs upon failure, independently of whether the build as a whole is considered to have passed.

• Litani is invoked entirely as a command-line tool, rather than using a domain-specific language like make(1) or cmake(1). This means that you can create and run your Litani build by shelling out from a script, or even by invoking Litani from a Makefile. This allows you to replicate the build of an existing Makefile while gaining the HTML dashboard and other Litani features.

The process of defining and running a Litani build proceeds as follows:

1. Run litani init, usually in the root of your source tree. This touches a file (similar to git-init(1)) so that you can run subsequent Litani commands anywhere else in your source tree.
2. Run litani add-job once for every command you wish to add to the build graph. litani-add-job(1) accepts a variety of options for controlling how the command is executed.
3. Run litani run-build to execute all jobs in dependency order. litani-run-build(1) accumulates all the jobs that you added and generates a Ninja build, which Litani subsequently runs with ninja(1).
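As a sketch, these three steps can be driven from a script. Everything here is illustrative: the project name, the gcc commands, and the file names are assumptions, and the litani binary must be on your PATH to actually run the commands.

```python
# Argv vectors for a minimal two-job build: compile foo.c, then link it.
# The flags used are documented in litani-init(1) and litani-add-job(1).
def litani_commands():
    return [
        ["litani", "init", "--project-name", "demo"],
        ["litani", "add-job",
         "--command", "gcc -c foo.c -o foo.o",
         "--outputs", "foo.o",
         "--pipeline-name", "foo", "--ci-stage", "build"],
        ["litani", "add-job",
         "--command", "gcc foo.o -o foo",
         "--inputs", "foo.o",
         "--outputs", "foo",
         "--pipeline-name", "foo", "--ci-stage", "build"],
        ["litani", "run-build"],
    ]

# To actually execute the build (requires Litani to be installed):
#     import subprocess
#     for argv in litani_commands():
#         subprocess.run(argv, check=True)
```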

You may then periodically reload the HTML dashboard while the build is running, and use the run.json file during or after the build to gain insight into the build outcome.

The following man pages are available with local installations of Litani (try man litani-add-job), or online at https://awslabs.github.io/aws-build-accumulator/:

litani acquire-html-dir: print the path to a locked HTML report directory
litani add-job: add a single command with its dependencies
litani dump-run: print the current run as a JSON document to stdout
litani get-jobs: print a list of jobs to be run
litani init: set up a new Litani run
litani print-html-dir: print the path to a continually-updated report directory
litani release-html-dir: unlock a previously-locked HTML report directory
litani run-build: start a Litani run
litani set-jobs: set the list of jobs to be run
litani transform-jobs: print a list of jobs to be modified before running

litani outcome-table.json: user-provided table to decide the outcome of a job
litani run.json: description of a single Litani run

Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. Litani's source code is licensed under the Apache License 2.0, and the manual pages are licensed under CC-BY-SA 4.0. See the LICENSE files for details.

ninja(1), which Litani uses to actually run the build: https://ninja-build.org

litani transform-jobs

litani transform-jobs - Print a list of jobs to be modified before running

This program allows clients to add to, delete from, or modify a list of Litani jobs before running them. The program prints out all the jobs that have been added to the current run so far, and then expects to read a new list of jobs on stdin. This new list of jobs will be 'saved' and run upon invocation of litani-run-build(1).

This program prints the list of jobs that have been added so far in JSON format, and expects to read a new list of jobs back as a JSON list. Each job in the list follows the same schema as the elements of the ["jobs"]["wrapper_arguments"] key in the litani run.json file; see litani-run.json(5) for the schema. Most users will run this program before running litani run-build, at which point none of the jobs will have started; in that case each job matches the schema of an unstarted job, meaning the complete key will be false and there will be no start_time key.

If this program prints a job, and the JSON list written to stdin contains a job with the same job id, then Litani will overwrite the original job with the one given on stdin. The 'job id' is the value of ["job_id"] in the job dictionary, as specified in litani-run.json(5).

If this program prints a job, but a job with the same job id does not exist in the list of jobs written to this program's stdin, then that job will be deleted.

If this program reads a job on stdin whose job id did not exist in the list of jobs printed on stdout, then that job will be added to the list of jobs to be executed. However, it is highly recommended to add jobs using litani-add-job(1) rather than adding a new dict to the JSON list. See CAVEATS below.
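The three rules above can be summarized as a set comparison keyed on job id. The following sketch (with made-up helper and field names beyond job_id, which is real) classifies the effect of a transformation:

```python
# Classify the effect of a transform-jobs invocation, per job id:
# printed = the jobs transform-jobs wrote to stdout,
# written = the jobs the client wrote back on stdin.
def classify(printed, written):
    old = {j["job_id"] for j in printed}
    new = {j["job_id"] for j in written}
    return {
        "overwritten": sorted(old & new),  # same id on stdin: replaced
        "deleted": sorted(old - new),      # id absent from stdin: removed
        "added": sorted(new - old),        # new id on stdin: appended
    }
```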

This program allows users to modify Litani-based builds that they do not control. Litani-based builds typically use a so-called 'run script' to make the following Litani invocations:

1. litani init, once
2. litani add-job, for each job to be added
3. litani run-build, once

If you cannot modify the run script, you will not have a chance to add custom jobs before running them. However, if the run script includes a way to allow users to invoke litani transform-jobs after step 2 but before step 3, then users who cannot modify the run script can nevertheless add their own jobs.

Here are a couple of suggestions for how to write a run script such that downstream users can extend your build with additional jobs.

Suppose that you have a run script called run.py that performs the three steps above. Consider adding a flag (we suggest --job-transformer J) to the script. When users pass this flag to run.py, it runs J as a subprocess after adding all the jobs but just before running litani run-build. Users can supply their own J, which would run litani transform-jobs as a subprocess and modify jobs as needed.

Suppose that you have various run scripts called run-1.py, run-2.py, ... that each perform the three steps above, but which each add different jobs to the build. Suppose you then add a flag (we suggest --no-standalone) to each of those scripts, which has the effect of preventing them from running litani init and litani run-build. You can then combine the jobs from all of those scripts by running them as follows:

$ litani init                     # run this manually in your shell
$
$ ./run-1.py --no-standalone      # add the jobs emitted by run-1.py
$                                 # without re-initializing the build
$                                 # or running it yet
$
$ ./run-2.py --no-standalone      # same for run-2.py. You could also
$                                 # run these programs in parallel
$                                 # because it is safe to run parallel
$                                 # invocations of litani add-job
...
$ ./transform-all-jobs.sh         # This script invokes `litani
$                                 # transform-jobs` as a subprocess
$                                 # and modifies the jobs
$
$ litani run-build                # run all jobs added from all scripts

This allows users to 'compose' Litani-based builds that they do not have control over, and also modify them before running.

It is safe to run this program in parallel with invocations of litani-add-job(1), but this usually doesn't make sense; typically you want to ensure that your jobs have been added before transforming them. Therefore, wait until all invocations of litani add-job have terminated before running this program. However, see also CAVEATS below for when this does make sense.

It is safe to run several invocations of this program at the same time: this program always writes jobs out atomically, so an invocation will never read a file that has been half-written by another. However, the result of running multiple invocations of this program will be unpredictable if the transformation is not idempotent, since one invocation may transform a job that has already been transformed.
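An idempotent transformation is one that yields the same jobs whether it is applied once or several times. For example, a transformation that prefixes each command with nice(1) can be made idempotent by checking for the prefix first (a sketch; the command field name is as specified in litani-run.json(5)):

```python
# Idempotent transform: prefix the command with "nice " exactly once.
# Applying this twice yields the same result as applying it once, so
# concurrent transform-jobs invocations can only re-apply a no-op.
def make_nice(job):
    if not job["command"].startswith("nice "):
        job["command"] = "nice " + job["command"]
    return job
```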

This program does not accept any command-line options. It does expect to read a list of new jobs on stdin, and will block until stdin is closed. To print the entire run without modifying any jobs, use litani-dump-run(1) instead.

It is possible to add new jobs using this command, by appending a dict to the output of this command and writing the resulting list back to stdin. However, this is not recommended, because this avoids the error-checking and enforcement of internal invariants that litani-add-job(1) performs. Thus, it is recommended to use litani-add-job(1) to add new jobs, either while you run litani transform-jobs, or after transform-jobs terminates.

This section contains code snippets illustrating how this command can be used.

The following Python fragment illustrates how to invoke litani transform-jobs from a Python script, read the list of jobs, and then write them back to transform-jobs.

	proc = subprocess.Popen(
	    [litani, "-v", "transform-jobs"], stdin=subprocess.PIPE,
	    stdout=subprocess.PIPE, text=True, bufsize=0)

	jobs = json.loads(proc.stdout.read())

	# jobs is now a list of dicts, which can be processed here

	print(json.dumps(jobs), file=proc.stdin)
	proc.stdin.close()
	proc.wait()

Sometimes, you want to add a job that runs before all others. That is, you start off with a build graph that looks like this:

	foo.exe --> foo.out
	                    \
	bar.exe --> bar.out --> final.exe --> final.out
	                    /
	baz.exe --> baz.out

And you want a job that runs `echo Starting` to execute before any other job starts:


	                              _________> foo.exe --> foo.out
	                             /                              \
	echo 'Start' --> fake.out --+--> bar.exe --> bar.out --------> final.exe --> final.out
	                             \                              /
	                              \________> baz.exe --> baz.out

The way to accomplish this is:

• Add a new job that emits an output file. The output doesn't have to be a real file; you can use uuidgen(1) to get a fresh name. You would add this new job using litani-add-job(1).
• Use litani-transform-jobs(1) to modify the foo, bar, and baz jobs so that their "inputs" key includes the file that the new job outputs.

These steps can be done in either order. Here is a code example:

	proc = subprocess.Popen(
	    [litani, "transform-jobs"], stdin=subprocess.PIPE,
	    stdout=subprocess.PIPE, text=True, bufsize=0)

	jobs = json.loads(proc.stdout.read())

	# A file name that the new root job will output, and which the old
	# root nodes will depend on as an input. This is not a real file that
	# the new root command will write, so we'll pass it to --phony-outputs
	# rather than --outputs.
	dependency_node = str(uuid.uuid4())

	for job in jobs:
	    if not job["inputs"]:
	        job["inputs"] = [dependency_node]

	print(json.dumps(jobs), file=proc.stdin)
	proc.stdin.close()
	proc.wait()

	# Now, add the new root job. (It's also fine to do this before or
	# during the transformation, but remember to skip the new job when
	# iterating!)
	subprocess.run([
	    litani, "add-job",
	    "--pipeline", "qux",
	    "--ci-stage", "build",
	    "--command", ">&2 echo Starting && sleep 2",
	    "--phony-outputs", dependency_node,
	], check=True)

uuidgen(1)

litani add-job

litani add-job - Add a single command with its dependencies

litani add-job --command C --pipeline-name P --ci-stage <build|test|report>

[--inputs F [F ...]] [--outputs F [F ...]] [--phony-outputs [F ...]] [--description D] [--tags T [T ...]] [--cwd D] [--ok-returns RC [RC ...]] [--ignore-returns RC [RC ...]] [--timeout N] [--timeout-ok] [--timeout-ignore] [--interleave-stdout-stderr] [--outcome-table F.json] [--stdout-file F] [--stderr-file F] [--pool P] [--profile-memory] [--profile-memory-interval N]

This program adds a job to an existing Litani run. The run must have been previously created with litani-init(1); you may add as many jobs as you need before running them with litani-run-build(1).

A job is a command that is part of a dependency graph. Litani runs the command if any of the job's inputs were out-of-date; once the inputs have been made up-to-date, Litani runs the job, which thereby makes the job's outputs up-to-date.

By repeatedly running litani add-job, you build up a dependency graph where jobs are connected by their inputs and outputs. Running litani run-build makes Litani run all of the jobs in the dependency graph that are needed to bring all inputs and outputs up-to-date.

Many of the flags to this command give you platform-portable control on various aspects of the program: timeouts, dealing with return codes, output streams, and more.

It is safe to run multiple invocations of litani add-job in parallel. Having a configure script that can run multiple invocations of litani add-job from a thread pool (or similar) is recommended to decrease configure times.

--command CMD

The command that Litani will execute to emit the output files (if any) from the input files, once the input files are up-to-date. Litani invokes CMD through a subshell, so wildcards and shell expansions are supported and special characters must be escaped.

--inputs F [F ...]

A list of inputs that this job depends on. Litani interprets each F as a file:
1. If F starts with @, then treat the remainder of the file name as a JSON file containing a list of files, which in turn are to be handled as specified in the following items.
2. If every F exists and has an older timestamp than all of this job's outputs, then Litani will not run this job.
3. If some of the F are newer than any of this job's outputs, then those outputs are called 'out-of-date'. In this case, Litani will run all of the jobs whose outputs include every F that is out of date before running this job.
4. If item 3 applies, but there is no job whose outputs include the out-of-date files, then the build will fail.
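The timestamp comparison behind these rules can be sketched as follows. This is an illustration of the staleness logic, not Litani's implementation, and it assumes non-empty input and output lists:

```python
import os

def outputs_out_of_date(inputs, outputs):
    """Return True if the job must run: some output is missing, or
    some input is newer than some output. The job is skipped only
    when every file exists and every input is older than every output."""
    if not all(os.path.exists(f) for f in inputs + outputs):
        return True
    newest_input = max(os.path.getmtime(f) for f in inputs)
    oldest_output = min(os.path.getmtime(f) for f in outputs)
    return newest_input > oldest_output
```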

--outputs F [F ...]

A list of outputs that this job emits. Litani interprets each F as a file (or a JSON file if prefixed with @, as described for --inputs above), and expects that the command will write a file with that name upon completion. If a job J has F as an output, but does not actually write a file called F, then J will run unconditionally because F will always be considered out of date. Furthermore, all jobs whose inputs include F, and all their descendants, will also always run.

--phony-outputs [OUT ...]

Do not print a warning if this job has not written the named OUT files by the time it finishes running. If you do not specify any OUT files, Litani will not warn when any output specified to the --outputs flag does not exist when the job has finished running.

This is useful when you want to create a dependency ordering between two jobs, but the first job does not write any output files that the second job can depend on. To achieve this, you could pass --outputs phony-file when adding the first job, and --inputs phony-file when adding the second job. However, Litani will print a warning if the first job exits without writing a file called phony-file. To suppress the warning, instead pass --phony-outputs phony-file when adding the first job. Doing this obviates the need to use touchfiles for dependency ordering, which is how this must be done when using a traditional build system like make(1).
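The ordering trick above can be scripted. In this sketch the commands, pipeline name, and stage are illustrative; uuid4 stands in for uuidgen(1), and the flags are those documented in litani-add-job(1):

```python
import uuid

def ordered_jobs():
    """Build argv for two add-job calls linked only by a phony file."""
    phony = str(uuid.uuid4())  # fresh name; no real file is written
    first = ["litani", "add-job",
             "--command", "echo first",
             "--phony-outputs", phony,  # suppresses the missing-file warning
             "--pipeline-name", "p", "--ci-stage", "build"]
    second = ["litani", "add-job",
              "--command", "echo second",
              "--inputs", phony,        # forces this job to run after `first`
              "--pipeline-name", "p", "--ci-stage", "build"]
    return first, second
```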

--description D

A human-readable description for this job. This flag is optional but highly recommended, as it makes the HTML dashboard much easier to navigate.

--tags TAG [TAG ...]

A list of tags for this job. Litani does not interpret tags (although the HTML dashboard generator does use some of them). Each tag can be in whatever format you prefer, e.g. a plain string, key-value pair, or even arbitrary JSON. A job's list of tags is included in its run.json and is intended to help with analyzing run data. See TAGS below for a list of tags that affect the display of the HTML report.

--pipeline-name P

The 'pipeline' that this job is part of. A pipeline is a subgraph of the overall build, representing a single 'end-to-end' set of related jobs. A job's pipeline does not affect how it is scheduled to run, but is used for grouping related jobs in the HTML dashboard.

--ci-stage <build|test|report>

The 'CI stage' that this job is part of. A CI stage is a subgraph of the overall build, representing a set of jobs that should complete before Litani moves onto the next stage. A job's CI stage does not affect how it is scheduled to run, but it is used for grouping related jobs in the HTML dashboard.

--cwd DIR

The directory that this job should execute in.

--ok-returns RC [RC ...]

Set the job outcome to success if the command exits with a return code of RC. By default, a job is only considered successful if it returns with 0. You can also use the --outcome-table option for fine-grained control over job outcomes.

--ignore-returns RC [RC ...]

Set the job outcome to fail_ignore if the command exits with a return code of RC. This means that jobs that depend on this one will still run as if this job had passed, but the pipeline that contains this job will fail after it completes. This is useful when you want to generate a report even when a command fails: you make the report job depend on the command's job, and use --ignore-returns to ensure that the report job runs even if the command fails. You can also use the --outcome-table option for fine-grained control over job outcomes.
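Taken together, the default rule plus --ok-returns and --ignore-returns amount to the following mapping from return code to outcome (a sketch of the documented behavior; --outcome-table can override all of it):

```python
def job_outcome(rc, ok_returns=(), ignore_returns=()):
    """Map a command's return code to a job outcome under the
    default rules plus --ok-returns / --ignore-returns."""
    if rc == 0 or rc in ok_returns:
        return "success"      # dependents run; pipeline can pass
    if rc in ignore_returns:
        return "fail_ignore"  # dependents run; pipeline fails
    return "fail"             # dependents do not run; pipeline fails
```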

--timeout N

How many seconds this job should be allowed to run for. If the timeout is reached, then the command is killed with SIGTERM followed by SIGKILL (see signal(3)) and the timeout_reached key of the job's record is set to true. By default, the job also fails if the timeout is reached, though this behavior can be modified using --timeout-ok, --timeout-ignore, and --outcome-table.

--timeout-ok

Set the job outcome to success if it reaches the timeout specified in the --timeout flag.

--timeout-ignore

Set the job outcome to fail_ignore if it reaches the timeout specified in the --timeout flag. This means that jobs that depend on this one will still run as if this job had passed, but the pipeline that contains this job will fail after it completes.

--outcome-table F

Use the JSON-formatted outcome table F to determine the outcome (success, fail, fail-ignored) of this job. Using outcome tables gives finer-grained control over outcomes than using the --...-ok and --...-ignore flags: in particular, it is possible to specify an action other than success if the underlying command returns 0. This can be used, for example, to define negative tests. The schema for JSON outcome tables is specified in litani-outcome-table.json(5).

--interleave-stdout-stderr

Use a single pipe for the job's stdout and stderr, similar to redirecting stderr to stdout in a shell with 2>&1. The job's stdout list in the run.json file will contain lines of output from both stdout and stderr, while the value of stderr will be null.

--stdout-file F

Redirect the command's stdout to F. Litani will still retain a copy of the output in the stdout field of the run.json file. This flag is a useful alternative to using shell redirection (>).

--stderr-file F

Redirect the command's stderr to F. Litani will still retain a copy of the output in the stderr field of the run.json file. This flag is a useful alternative to using shell redirection (2>).

--pool P

Place this job in the pool named P. This pool must have been declared using the --pools flag of litani init. If pool P has a depth of D, that means that a maximum of D jobs whose pool is P will be running in parallel (though there may be other jobs in other pools that are also running at the same time).

--profile-memory

Turn on memory profiling for this job. The memory used by the command will be recorded in the memory_trace field of run.json.

The memory usage will also be included on a graph on the HTML dashboard if this job's tags include stats-group:; see the TAGS section below for more details.

--profile-memory-interval N

Profile the memory usage of this job every N seconds. This has no effect unless --profile-memory is also passed.

Litani jobs can be 'tagged' by providing one or more strings to the --tags flag. These are meant for users to add arbitrary metadata to jobs; there is no defined format for tags, and Litani does not attempt to parse them.

There are some pre-defined tags that affect how Litani renders the HTML report:

literal-stdout: Litani will inline this job's stdout verbatim into the job's div on the pipeline detail page. This has the effect of treating the stdout as HTML rather than as plain text, allowing users to e.g. display arbitrary HTML or SVG output in the job div.
front-page-text: Litani will inline this job's stdout verbatim onto the front page of the dashboard, in its own section. Inlining verbatim means that Litani will not escape XML constructs, so this allows users to display custom HTML and SVG output on the front page.
stats-group:G: Litani will generate box-and-whiskers diagrams for jobs whose tag begins with stats-group:. Litani considers each value of G to be a different group. For each group, Litani will render:
• A box-and-whiskers diagram depicting the runtime of every job in that group
• A box-and-whiskers diagram depicting the peak memory use of every job in that group, if --profile-memory was also passed to those jobs.

LITANI_JOB_ID

Litani passes the job's unique ID to the command through this environment variable. The unique id is the job_id field in the litani-run.json(5) representation.
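A command that is part of a job can therefore discover its own ID, for example (a minimal sketch; the helper name is made up):

```python
import os

def current_job_id():
    """Return this job's unique ID as set by Litani, or None when the
    command is not running under a Litani job."""
    return os.environ.get("LITANI_JOB_ID")
```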

litani get-jobs

litani get-jobs - Print a list of jobs to be run

litani get-jobs

[-f/--out-file F]

This program prints the list of jobs that have been added so far in JSON format. Each job in the list will follow the same schema as the elements of the ["jobs"]["wrapper_arguments"] key in the litani run.json file; see litani-run.json(5) for the schema.

This program can be used in conjunction with litani set-jobs to achieve the same functionality as litani transform-jobs. A user can read the current jobs, make changes to them, and then set the new list of jobs for the litani run.

This command may be run at any point after running litani init -- either before calling litani run-build, during the run, or after run-build has terminated.

Compared to transform-jobs, this command is useful if you just want to read the jobs and not write them, but can be used along with set-jobs to achieve a similar end result.

This command is not safe to run concurrently with set-jobs, because set-jobs might delete a JSON file while this command reads it. This command is safe to run concurrently with add-job and run-build because new job files get written atomically.

-f, --out-file F

Write the list of jobs to a JSON file F. If no path is provided, Litani will write the list of jobs to stdout.

litani set-jobs(1)

litani acquire-html-dir

litani acquire-html-dir - Print the path to a locked HTML report directory

litani acquire-html-dir

-t/--timeout N

This program attempts to acquire a lock on the most recently-written Litani HTML report, and prints the path to that locked directory.

This command should be used to get a path to an HTML report that is guaranteed to not be deleted or modified by other processes that also use this locking API. If you instead want the path to an HTML report directory that Litani will continuously refresh with updated progress, use litani-print-html-dir(1).

If this command returns successfully, subsequent invocations of this command will either hang, time out, or print the path to a different report directory. That continues until litani-release-html-dir(1) is run. This program should thus be used by cooperating processes that wish to have exclusive access to an HTML report. Well-behaved processes should only modify HTML report directories after this command has successfully returned, and should run litani-release-html-dir(1) after finishing their modifications.

It is safe to run this program concurrently with any other Litani command.

-t/--timeout N

Terminate with a return code of 1 if unable to acquire an HTML report directory after N seconds. If N is 0, this program will continue trying to acquire a directory forever. It is recommended to set a non-zero timeout because this program might livelock if, for example, a concurrent invocation of litani run-build crashes in the middle of writing a report directory.

This program prints out the path to an HTML report directory and returns 0 if it was able to acquire a lock on that directory. This program returns 2 if there was a command line error. Otherwise, this program prints nothing and returns 1.

litani-print-html-dir(1)
litani-release-html-dir(1)

litani print-html-dir

litani print-html-dir - Print the path to a continually-updated report directory

litani print-html-dir

This program prints the path to a Litani HTML report directory that will be continuously updated while litani-run-build(1) is concurrently running.

This command is intended to be used by clients who wish to read the most up-to-date report directory without modifying it. litani-run-build(1) will update the report directory periodically while it is running. Thus, this command can be used to print a path to be viewed in a web browser, for example.

litani-run-build(1) refreshes the report directory atomically: the entire HTML report (including the pipeline subpages) is updated all at once. Clients can continue to read the new report data using the same path that this command originally printed. This is implemented by atomically repointing a symbolic link from the old report directory to the new one.

Clients should not resolve the path that this command prints out before accessing it. This command prints out the path to a symbolic link, and access to report data should be through the symbolic link only. In particular, litani-run-build(1) will occasionally garbage-collect old HTML report directories, but the path that this command prints out will always point to a current (not garbage-collected) report.
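The atomic symlink swap described above can be sketched as follows. This is an illustrative technique, not Litani's actual code: build the new link under a temporary name, then rename it over the old one, so readers always see a complete report.

```python
import os

def swap_symlink(link, target):
    """Atomically repoint `link` at `target`: create the new symlink
    under a temporary name, then rename it over the old link."""
    tmp = link + ".tmp"
    if os.path.lexists(tmp):
        os.remove(tmp)
    os.symlink(target, tmp)
    os.replace(tmp, link)  # rename(2) is atomic on POSIX filesystems
```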

It is safe to run this program concurrently with any other Litani command.

This program prints out the path to an HTML report directory and returns 0.

litani-acquire-html-dir(1)

litani release-html-dir

litani release-html-dir - Unlock a previously-locked HTML report directory

litani release-html-dir

-d/--dir D

This program releases a lock on a Litani HTML report directory that had previously been locked through an invocation of litani-acquire-html-dir(1). This allows other processes to attempt to acquire the directory.

After this command returns, it becomes possible for a single subsequent (or currently-running) invocation of litani-acquire-html-dir(1) to terminate successfully.

It is an error to run this command if you have not previously acquired the specified report directory using litani-acquire-html-dir(1). Litani does not check that the locking API is being used correctly, so doing this may cause a race where more than one process can modify a single report directory concurrently.

It is important to release a directory after acquiring it and completing any modifications, so that a concurrent run of litani-run-build(1) can acquire stale HTML report directories for garbage collection.

It is safe to run this program concurrently with any other Litani command.

-d/--dir D

The path to a Litani HTML report directory that had previously been printed out from an invocation of litani-acquire-html-dir(1).

This program returns 0.

litani-acquire-html-dir(1)

litani init

litani init - Set up a new Litani run

litani init --project-name NAME

[--stages NAME [NAME ...]] [--pools NAME:DEPTH [NAME:DEPTH ...]] [--output-directory DIR] [--output-prefix DIR] [--output-symlink DIR] [--no-print-out-dir]

This program creates a new Litani run. This deletes previously-added jobs and allows you to add new jobs with litani add-job, prior to running them with litani run-build.

When you run this program, Litani will write a file called .litani_cache_dir to the current directory, which will contain the path to Litani's output directory. Subsequent litani runs will read this file to determine where to find the output directory. Litani will search the following locations for the file:

• The current directory;
• the parent directory, and every ancestor directory up to the filesystem root;
• every child directory, and every descendant down to the filesystem leaves.

This allows you to run litani init in the 'root' of a project, and subsequently run litani add-job further down in the project.
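The upward half of that search can be sketched as follows (an illustration only; the real search also descends into child directories):

```python
import os

def find_cache_file(start):
    """Walk from `start` up to the filesystem root looking for
    .litani_cache_dir; return its path, or None if not found."""
    d = os.path.abspath(start)
    while True:
        candidate = os.path.join(d, ".litani_cache_dir")
        if os.path.exists(candidate):
            return candidate
        parent = os.path.dirname(d)
        if parent == d:  # reached the filesystem root
            return None
        d = parent
```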

It is safe but pointless to run litani init more than once in parallel. One of the invocations will 'win', and subsequent invocations of litani add-job will add jobs to a single run.

--project-name NAME

The code project that this run is associated with. The project name is printed at the top of the HTML report. The project name is also included in the run.json file to help find all the runs for a particular project.

--stages NAME [NAME ...]

The stages that this run is composed of. If this flag is given, at least one stage must be provided. If this flag is not specified, the default stages build, test and report are used.

--pools NAME:DEPTH [NAME:DEPTH ...]

The job pools that you will subsequently add jobs into using the --pool flag of litani add-job. Each argument to this flag 'declares' the named pool with a particular depth, where the depth is an integer greater than zero. If you declare a pool called P with a depth of D, then a maximum of D jobs that have been added to the pool called P will run in parallel.
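Each NAME:DEPTH argument can be parsed as in the following sketch, which enforces the stated constraint that the depth is an integer greater than zero (the helper name is made up):

```python
def parse_pool(spec):
    """Parse a NAME:DEPTH pool declaration into (name, depth)."""
    name, sep, depth_str = spec.rpartition(":")
    if not sep or not name:
        raise ValueError(f"malformed pool declaration: {spec!r}")
    depth = int(depth_str)  # raises ValueError if DEPTH is not an integer
    if depth < 1:
        raise ValueError("pool depth must be an integer greater than zero")
    return name, depth
```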

--output-directory DIR

Litani will write all of its output files for this run to DIR. DIR must not already exist. Use this flag when you want exact control over where the output files are written; however, --output-prefix combined with --output-symlink can often be more convenient.

--output-prefix DIR

Litani will write all of its output files for this run to a fresh directory under DIR; the fresh directory's name will be a UUID. DIR may already exist.

--output-symlink DIR

Litani will create a symbolic link from DIR to the output directory. DIR will be overwritten if it already exists.

A useful pattern is to run this program as

	litani init --output-prefix output --output-symlink output/latest

This way, all Litani runs are stored under the output directory, and the output/latest directory always points to the most recent run. It is then easy to keep a web browser pointed to output/latest/html/index.html, which will always resolve to the most recent dashboard.

--no-print-out-dir

Do not print the resolved path to the output directory. This is useful for clients that have their own method for printing the output directory's location.

LITANI_RUN_ID

If this environment variable is set, Litani will use its value as the ID for the run. It is expected to be a unique ID, for example generated by uuidgen(1). This can be used, for example, by a CI system to set the UUID of the Litani run to be the same as the UUID for the CI run.

litani set-jobs

litani set-jobs - Set the list of jobs to be run

litani set-jobs

[-f/--from-file F] [-s/--from-string S]

This program reads a list of jobs as a JSON list. Each job in the list should be a dict whose keys are the flags to litani-add-job(1). This program will replace all previously-added jobs with the jobs in the user-provided list.

This program can be used in conjunction with litani get-jobs to achieve the same functionality as litani transform-jobs. A user can read the current jobs, make changes to them, and then set the new list of jobs for the litani run.

This command can be used along with litani get-jobs to achieve a similar end result to litani transform-jobs, but instead of having access to the stdin and stdout of a subprocess this command uses input and output files, which may be more natural for use in a Unix pipeline.

Below is an example that demonstrates how to combine litani get-jobs and litani set-jobs to prepend echo to each command that has been added to a Litani run, using either a pipeline or a sequence of commands:

# Pipeline
litani get-jobs | jq '[.[] | (.command = "echo " + .command)]' | litani set-jobs
# Sequence
litani get-jobs --out-file /tmp/original.json
jq '[.[] | (.command = "echo " + .command)]' < /tmp/original.json > /tmp/transformed.json
litani set-jobs --from-file /tmp/transformed.json

-f, --from-file F

Read the list of jobs to be set from a JSON file at path F. Litani will read from stdin if neither a JSON file nor a string is provided.

-s, --from-string S

Read the list of jobs to be set from the JSON string S.

litani-get-jobs(1)

litani dump-run

litani dump-run - Print the current run as a JSON document to stdout

litani dump-run

[-r/--retries N] [-o/--out-file F]

This program prints the run.json file for a Litani run to stdout or to the specified output file. The JSON file's schema is documented in litani-run.json(5).

This program is intended to be used while an invocation of litani-run-build(1) is running. You can access the run.json file in the run's output directory both during and after termination of the run-build process.

This program may be run as a Litani job, or as a subprocess of a command that is part of a Litani job. This allows commands to 'introspect' on the Litani run so far. It is guaranteed that if a single Litani job runs litani dump-run, all of the reverse dependencies of that job will have their "complete" key set to true in the printed run.json file. That is, any job that runs litani dump-run will always see the most up-to-date state of the build with respect to its own reverse dependencies.

It is safe to run multiple invocations of litani dump-run in parallel. However, if you do so, the printed result may be slightly out-of-date.

If you run litani dump-run from a Litani job, Litani will attempt to ensure that the printed run is up-to-date with respect to that job. In particular, Litani will try to re-load the run until all the reverse dependencies have their "complete" field set to true.

-r N, --retries N

How many times to retry loading the run file if the litani run-build process has not yet emitted it. litani dump-run will retry loading the run once every second, up to the limit given in this flag. If N is 0, this program will retry indefinitely; this is not recommended.

-o F, --out-file F

Print the run to F instead of stdout.

A JSON document printed to stdout. This will be either null (if this program was not able to load the run after a certain number of tries), or a document whose schema matches the one in litani-run.json(5).

Zero unless the program crashed.
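Because the printed document is either null or a full run, clients should guard against an unloadable run before inspecting it. A minimal sketch in Python (the invocation of litani dump-run itself is assumed; here its output is parsed from a string, and the sample document is invented):

```python
import json

def summarize_run(dump_run_output: str):
    """Parse the output of `litani dump-run` and return a short status
    summary, or None if the run could not be loaded (output was `null`)."""
    run = json.loads(dump_run_output)
    if run is None:  # dump-run exhausted its retries
        return None
    # The top-level "status" key is "success", "fail", or "in_progress".
    return {
        "run_id": run["run_id"],
        "status": run["status"],
        "n_pipelines": len(run["pipelines"]),
    }

# Example with a pared-down run document:
doc = '{"run_id": "abc", "status": "in_progress", "pipelines": []}'
print(summarize_run(doc))     # {'run_id': 'abc', 'status': 'in_progress', 'n_pipelines': 0}
print(summarize_run("null"))  # None
```

In practice the string would come from something like subprocess.run(["litani", "dump-run"], capture_output=True).stdout.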

litani run-build

litani run-build - Start a Litani run

litani run-build

[-n/--dry-run] [-j/--parallel N] [-o/--out-file F] [--fail-on-pipeline-failure] [--no-pipeline-dep-graph] [-p/--pipelines P [P ...]] [-s/--ci-stage S]

This program executes a Litani run. The run must have been created using litani init and had jobs added to it using litani add-job.

It is NOT SAFE to run multiple invocations of litani run-build in parallel.

-n, --dry-run

Passes the -n flag directly to ninja(1). This makes ninja parse and run the build graph in dry-run mode, acting as if all the commands succeeded.

-j N, --parallel N

Run N jobs in parallel. This passes the -j flag directly to ninja(1). Ninja's default is calculated based on the number of cores in the system; run ninja -h to see the default. N=0 means do not restrict parallelism.

-o F, --out-file F

Periodically write the run.json file to F.

--fail-on-pipeline-failure

Return 0 only if all pipelines were successful. See RETURN CODE below.

--no-pipeline-dep-graph

Do not render dependency graphs for each pipeline onto the HTML individual pipeline pages. Pipeline graphs will also not be rendered if Graphviz is not installed.

-p P [P ...], --pipelines P [P ...]

Only run jobs that are part of the specified pipelines.

-s S, --ci-stage S

Only run jobs that are part of the specified CI stage. Currently S must be one of build, test, or report.

This program will return 1 if it terminates abnormally. On normal termination:

• If the --fail-on-pipeline-failure flag was passed, this program will return 0 if all pipelines were successful and 10 otherwise.
• Otherwise, this program will always return 0 upon normal termination, regardless of the pipelines' success or failure.

litani outcome-table.json

litani outcome-table.json - user-provided table to decide the outcome of a job

By default, a Litani job's outcome is set to success if its command returns 0, and to fail on any other return code or if the command times out. This can be customized to some extent using the --...-ok and --...-ignore flags of litani-add-job(1). For more advanced use cases, users can pass a JSON file called an 'outcome table' to the --outcome-table flag of litani-add-job(1). This allows users, for example, to assign an outcome other than success to a return code of 0, which is useful for negative tests. This manual describes the schema for the JSON file that users should pass to litani-add-job(1).
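As a concrete illustration, the following outcome table (file name and comments hypothetical) could be passed to --outcome-table to invert the usual interpretation of return codes for a negative test: a return code of 0 becomes a failure, 1 becomes success, and anything else (including a timeout) fails.

```json
{
  "comment": "Outcome table for a negative test",
  "outcomes": [
    {
      "type": "return-code",
      "value": 0,
      "action": "fail",
      "comment": "The tool accepted an input it should have rejected"
    },
    {
      "type": "return-code",
      "value": 1,
      "action": "success",
      "comment": "The tool correctly rejected the input"
    },
    {
      "type": "timeout",
      "action": "fail"
    },
    {
      "type": "wildcard",
      "action": "fail"
    }
  ]
}
```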

Schema for user-provided outcome table

{
    Optional("comment"): str,
    "outcomes": [
        Any({
            "type": "return-code",
            "value": int,
            "action": outcome_(),
            Optional("comment"): str,
        }, {
            "type": "timeout",
            "action": outcome_(),
            Optional("comment"): str,
        }, {
            "type": "wildcard",
            "action": outcome_(),
            Optional("comment"): str,
        })]
}

Schema for user-provided outcome table

{

Optional("comment"): str,

A description of the outcome table as a whole.

"outcomes": [

The outcome of the job will be the first item in this list that matches.

Any({

"type": "return-code",

If the return code of the job matches the value of value, the outcome will be set to the value of action. The value of the optional comment key can contain a human-readable explanation for this outcome.

"value": int,

"action": outcome_(),

Optional("comment"): str,

}, {

"type": "timeout",

If this job timed out, the outcome will be set to the value of action. The value of the optional comment key can contain a human-readable explanation for this outcome.

"action": outcome_(),

Optional("comment"): str,

}, {

"type": "wildcard",

The "wildcard" action type matches any job and sets its outcome to the value of action. It is recommended to place a wildcard action as the last element of the list of outcomes to catch all jobs that were not matched by a previous rule.

"action": outcome_(),

Optional("comment"): str,

})]

}
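The first-match rule above can be made concrete with a small sketch. The function below is a hypothetical re-implementation for illustration, not Litani's own code; it applies a parsed outcome table to a job's result and returns the action of the first matching entry:

```python
def evaluate_outcome(table, return_code, timed_out):
    """Return the action of the first matching entry in an outcome table,
    or None if no entry matches (hypothetical helper, not Litani's code)."""
    for entry in table["outcomes"]:
        if entry["type"] == "return-code" and entry["value"] == return_code:
            return entry["action"]
        if entry["type"] == "timeout" and timed_out:
            return entry["action"]
        if entry["type"] == "wildcard":
            return entry["action"]
    return None

# A negative-test style table: 0 fails, 1 succeeds, anything else is ignored.
table = {
    "outcomes": [
        {"type": "return-code", "value": 0, "action": "fail"},
        {"type": "return-code", "value": 1, "action": "success"},
        {"type": "wildcard", "action": "fail_ignored"},
    ]
}
print(evaluate_outcome(table, 0, False))  # fail
print(evaluate_outcome(table, 1, False))  # success
print(evaluate_outcome(table, 2, False))  # fail_ignored
```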

litani-add-job(1)

litani run.json

litani run.json - description of a single Litani run

Litani writes a run.json file to its cache directory when the user invokes litani-run-build(1). This file contains everything needed to generate the HTML dashboard, with the notable exception of any artefacts that the jobs might emit. (However, Litani does capture jobs' stdout and stderr; any output will be available in run.json.) This manual page describes run.json's format.

outcome_, status_, and single_job_schema_ are sub-schemata that are referenced multiple times within this schema, and so are defined separately below. All timestamps are in ISO-8601.

Schema for entire run.json file

{
    "run_id": str,
    "project": str,
    "stages": [str],
    "pools": {Optional(str): int},
    "start_time": time_str_,
    "version": str,
    "version_major": int,
    "version_minor": int,
    "version_patch": int,
    "release_candidate": bool,
    Optional("end_time"): time_str_,
    "status": status_(),
    "aux": dict,
    "parallelism": Any({
        Optional("trace"): [{
            "time": ms_time_str_,
            "finished": int,
            "running": int,
            "total": int,
        }],
        Optional("max_parallelism"): int,
        Optional("n_proc"): Any(None, int),
    }),
    "pipelines": [{
        "url": str,
        "name": str,
        "status": status_(),
        "ci_stages": [{
            "url": str,
            "complete": bool,
            "name": str,
            "status": outcome_(),
            "progress": All(int, Range(min=0, max=100)),
            "jobs": [Any({
                "complete": False,
                "duration_str": None,
                "wrapper_arguments": single_job_schema_(),
            }, {
                "complete": False,
                "start_time": time_str_,
                "duration_str": None,
                "wrapper_arguments": single_job_schema_(),
            }, {
                "duration": int,
                "complete": True,
                "outcome": outcome_(),
                "end_time": time_str_,
                "start_time": time_str_,
                "timeout_reached": bool,
                "command_return_code": int,
                "wrapper_return_code": int,
                "stderr": Any([str], None),
                "stdout": Any([str], None),
                "duration_str": Any(str, None),
                "wrapper_arguments": single_job_schema_(),
                "loaded_outcome_dict": Any(dict, None),
                "memory_trace": {
                    Optional("peak"): {
                        "rss": int,
                        "vsz": int,
                        "human_readable_rss": str,
                        "human_readable_vsz": str,
                    },
                    Optional("trace"): [{
                        "rss": int,
                        "vsz": int,
                        "time": time_str_,
                    }],
                },
            })],
        }],
    }],
    "latest_symlink": Any(str, None),
}

Schema for 'wrapper_arguments' key

{
    "job_id": str,
    "command": str,
    "ci_stage": str,
    "verbose": bool,
    "timeout_ok": bool,
    "pipeline_name": str,
    "very_verbose": bool,
    "timeout_ignore": bool,
    "profile_memory": bool,
    "profile_memory_interval": int,
    "cwd": Any(str, None),
    "interleave_stdout_stderr": bool,
    "pool": Any(str, None),
    "tags": Any([str], None),
    "timeout": Any(int, None),
    "inputs": Any([str], None),
    "outputs": Any([str], None),
    "description": Any(str, None),
    "status_file": Any(str, None),
    "stderr_file": Any(str, None),
    "stdout_file": Any(str, None),
    "ok_returns": Any([str], None),
    "outcome_table": Any(str, None),
    "phony_outputs": Any([str], None),
    "ignore_returns": Any([str], None),
    "subcommand": Any("exec", "add-job"),
}

Schema for a pipeline or run status

Any("success", "fail", "in_progress")

Schema for a job or ci_stage outcome

Any("success", "fail", "fail_ignored")

Schema for entire run.json file

{

"run_id": str ,

A globally-unique ID for the run.

"project": str ,

A name for the project that this run is part of. This name is used by the HTML report generator and can be used to group related sets of runs, but is otherwise not used by litani.

"stages": [str ],

The CI stages that each job can be a member of. Stage names can be provided through the --stages flag of litani-init(1); the default stages "build", "test", and "report" are used if that flag is not passed.

"pools": {Optional(str): int },

A mapping from pool names to the depth of the pool. Jobs can be a member of zero or one pool. The depth of a pool that a set of jobs belong to limits the number of those jobs that litani will run in parallel.

"start_time": time_str_ ,

The time at which the run started.

"version": str ,

The version string of the Litani binary that ran this run.

"version_major": int ,

Litani's major version number.

"version_minor": int ,

Litani's minor version number.

"version_patch": int ,

Litani's patch version number.

"release_candidate": bool ,

false if this version of Litani is a tagged release.

Optional("end_time"): time_str_ ,

The time at which the run ended. This key will only exist if status is not equal to "in_progress".

"status": status_ (),

The state of this run, see the status schema below.

"aux": dict ,

A free-form dict that users can add custom information into. There are no constraints on the format of this dict, but it is recommended that users add their information to a sub-dict with a key that indicates its function. For example, to add information pertaining to a CI run, users might add a key called "continuous_integration_data" whose value is a sub-dict containing all required fields.

"parallelism": Any ({

This dict contains information about the parallelism level of the jobs that litani runs. It can be used to measure whether the run used as many processor cores as possible over its duration.

Optional("trace"): [{

A list of samples of the run's concurrency level.

"time": ms_time_str_ ,

The time at which the sample was taken.

"finished": int ,

How many jobs have finished

"running": int ,

How many jobs are running

"total": int ,

The total number of jobs

}],

Optional("max_parallelism"): int ,

The maximum parallelism attained over the run

Optional("n_proc"): Any(None, int ),

The number of processors detected on this machine

}),

"pipelines": [{

Each pipeline contains ci_stages which contain jobs.

"url": str ,

"name": str ,

The pipeline name. The set of pipeline names is all the names passed to the --pipeline-name flag of litani-add-job(1).

"status": status_ (),

The pipeline's state, see the status schema below.

"ci_stages": [{

Each ci_stage contains a list of jobs.

"url": str , "complete": bool ,

Whether all the jobs in this stage are complete.

"name": str ,

The stage's name. This is any of the stages of the project.

"status": outcome_ (),

The stage's state, see the outcome schema below.

"progress": All(int, Range(min=0, max=100 )),

"jobs": [Any ({

The list of all the jobs in this ci_stage in this pipeline. Each element of the list takes one of three different forms.

"complete": False ,

If complete is false and no start_time key exists, then this job has not yet started.

"duration_str": None ,

"wrapper_arguments": single_job_schema_ (),

The arguments passed to this job, see the single_job_schema schema below.

}, {

"complete": False ,

If complete is false but the start_time key exists, then the job has started running but has not yet finished.

"start_time": time_str_ ,

The time at which the job started running.

"duration_str": None ,

"wrapper_arguments": single_job_schema_ (),

The arguments passed to this job, see the single_job_schema schema below.

}, {

"duration": int ,

How long the job ran for.

"complete": True ,

If complete is true, then the job has terminated.

"outcome": outcome_ (),

The job's outcome, see the outcome schema below.

"end_time": time_str_ ,

The time at which the job completed.

"start_time": time_str_ ,

The time at which the job started running.

"timeout_reached": bool ,

Whether the job reached its timeout limit.

"command_return_code": int ,

The command's return code.

"wrapper_return_code": int ,

"stderr": Any([str], None ),

A list of strings that the command printed to its stderr.

"stdout": Any([str], None ),

A list of strings that the command printed to its stdout.

"duration_str": Any(str, None ),

A human-readable duration of this job (HH:MM:SS).

"wrapper_arguments": single_job_schema_ (),

The arguments passed to this job, see the single_job_schema schema below.

"loaded_outcome_dict": Any(dict, None ),

If wrapper_arguments["outcome_table"] is not null, the value of this key will be the deserialized data loaded from the outcome table file.

"memory_trace": {

If profile_memory was set to true in the wrapper arguments for this job, this dict will contain samples of the command's memory usage.

Optional("peak"): {

The command's peak memory usage.

"rss": int ,

Peak resident set

"vsz": int ,

Peak virtual memory size

"human_readable_rss": str ,

Peak resident set

"human_readable_vsz": str ,

Peak virtual memory size

},

Optional("trace"): [{

A list of samples of memory usage.

"rss": int ,

Resident set

"vsz": int ,

Virtual memory

"time": time_str_ ,

The time at which the sample was taken

}],

},
})],
}],
}],

"latest_symlink": Any(str, None ),

The symbolic link to the report advertised to users

}
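For example, the parallelism trace can be post-processed to check how well a run saturated the machine. A sketch under the schema above (the field names follow the schema; the sample data is invented):

```python
def peak_running(parallelism):
    """Return the highest number of concurrently running jobs observed
    in a run's "parallelism" dict, or 0 if no samples were taken."""
    trace = parallelism.get("trace", [])
    return max((sample["running"] for sample in trace), default=0)

# Invented sample of a run's "parallelism" dict:
parallelism = {
    "n_proc": 8,
    "trace": [
        {"time": "2024-01-01T00:00:00.000", "finished": 0, "running": 3, "total": 10},
        {"time": "2024-01-01T00:00:01.000", "finished": 2, "running": 6, "total": 10},
        {"time": "2024-01-01T00:00:02.000", "finished": 9, "running": 1, "total": 10},
    ],
}
print(peak_running(parallelism))  # 6
```

Comparing the result against n_proc gives a rough idea of whether the build graph exposed enough parallelism for the machine.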

Schema for 'wrapper_arguments' key

The wrapper_arguments key of run.json maps to the following dict. None of the values in this dict change at any point during the run; they are mostly the same as the flags passed to litani-add-job(1) for this job.

{

"job_id": str ,

A globally-unique ID for this job.

"command": str ,

The command that litani will execute in a subshell.

"ci_stage": str ,

The name of the 'stage' that this job will execute in, used for organizing the HTML dashboard.

"verbose": bool ,

"timeout_ok": bool ,

If true, then if this job times out, the outcome will be set to 'success'.

"pipeline_name": str ,

The name of the 'pipeline' that this job will execute in, used for organizing the HTML dashboard.

"very_verbose": bool ,

"timeout_ignore": bool ,

If true, then if this job times out, the outcome will be set to 'fail_ignored'.

"profile_memory": bool ,

If true, then litani will regularly sample the memory usage of this job's command while it runs. Samples are stored in the job's memory_trace.

"profile_memory_interval": int ,

How frequently (in seconds) litani will profile the command's memory use, if profile_memory is true.

"cwd": Any(str, None ),

The directory that litani will run the command in.

"interleave_stdout_stderr": bool ,

Whether the command's stderr will be sent to the stdout stream. If true, the job's stderr key will be None and the stdout key will contain lines from both the command's stdout and stderr.

"pool": Any(str, None ),

The pool that this job will execute in; if not null, then it must be a key in the pools dict of the overall run.

"tags": Any([str], None ),

A list of user-specified tags. Litani mostly doesn't interpret these, although the HTML dashboard generator does use some of them. Tags are intended to help users find particular jobs for data analysis and can contain arbitrary data.

"timeout": Any(int, None ),

The number of seconds that Litani will allow the job to run for before sending SIGTERM followed by SIGKILL (see signal(3)).

"inputs": Any([str], None ),

The list of files that should be made up-to-date before the job will run

"outputs": Any([str], None ),

The list of files that this job will make up-to-date after it completes

"description": Any(str, None ),

A human-readable description of this job

"status_file": Any(str, None ),

"stderr_file": Any(str, None ),

A file to redirect stderr to, as well as buffering it internally

"stdout_file": Any(str, None ),

A file to redirect stdout to, as well as buffering it internally

"ok_returns": Any([str], None ),

A list of return codes. If the command exits with any of these return codes (or 0), then the outcome will be set to 'success'.

"outcome_table": Any(str, None ),

A file to load an outcome table from.

"phony_outputs": Any([str], None ),

A list of outputs that Litani will not warn about if they were not created by the job.

"ignore_returns": Any([str], None ),

A list of return codes. If the command exits with any of these return codes (or 0), then the outcome will be set to 'fail_ignored'.

"subcommand": Any("exec", "add-job "),

}

Schema for a pipeline or run status

pipelines and runs have a "status". The status is "in_progress" when some of the jobs are incomplete and either "success" or "fail" once all jobs complete.

Any("success", "fail", "in_progress")

Schema for a job or ci_stage outcome

Jobs and ci_stages have an "outcome" (though, confusingly, the key is "status" for ci_stages). "fail_ignored" means that the job failed but the user specified that the job's dependencies should run anyway. If a pipeline contains a job whose outcome is "fail_ignored", then the status of the pipeline will be "fail" after all of its jobs complete.

Any("success", "fail", "fail_ignored")

Litani Documentation