# Candelabra

This folder contains the source code for Candelabra as a Cargo workspace. First, set up the dependencies as detailed in the root of the repository.

## Building

Building is done with Cargo as normal: `cargo build`. This places the executable at `./target/debug/candelabra-cli`.

This step is not necessary if you are using the testing VM; in that case, replace `cargo run` with `candelabra-cli` in all commands below.
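As a sketch, a local build from this folder might look like the following (the binary path is taken from the text above):

```shell
# Build the workspace (debug profile) and confirm the binary was produced.
cargo build
ls ./target/debug/candelabra-cli
```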

## Creating cost models

To build and view a cost model, first pick an implementation to look at:

- `primrose_library::VecMap`
- `primrose_library::VecSet`
- `std::vec::Vec`
- `std::collections::BTreeSet`
- `std::collections::BTreeMap`
- `primrose_library::SortedVecSet`
- `std::collections::LinkedList`
- `primrose_library::SortedVecMap`
- `primrose_library::SortedVec`
- `std::collections::HashMap`
- `std::collections::HashSet`

To view the cost model for a single implementation, run `just cost-model <impl>`.

Alternatively, run `just cost-models` to view models for all implementations. This clears the cache before running.
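For example, picking one implementation from the list above (the choice here is arbitrary):

```shell
# Build and view the cost model for a single implementation.
just cost-model std::vec::Vec

# Or build and view models for every implementation (clears the cache first).
just cost-models
```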

Cost models are also saved to `target/candelabra/benchmark_results` as JSON files. To analyse your built cost models, copy them to `../analysis/current/candelabra/benchmark_results` and see the README in `../analysis/`.
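The copy step can be sketched as follows, run from this folder after building cost models; the paths are taken directly from the text:

```shell
# Copy built cost-model JSON files into the analysis directory layout.
mkdir -p ../analysis/current/candelabra/benchmark_results
cp target/candelabra/benchmark_results/*.json ../analysis/current/candelabra/benchmark_results/
```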

## Profiling applications

To profile an application in the `tests/` directory and display the results, run `just profile <project>`. Profiling info is also saved to `target/candelabra/profiler_info/` as JSON.
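For instance, for a hypothetical test project named `example-project` under `tests/` (the name is illustrative, not one confirmed by this README):

```shell
# Profile one test project and display the results; JSON is also written
# under target/candelabra/profiler_info/.
just profile example-project
```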

## Selecting containers

To print the estimated cost of using each implementation in a project, run `just select <project>`. Alternatively, run `just selections` to run selection for all test projects.

You can add `--compare` to either of these commands to also benchmark the project with every assignment of implementations and print the results.
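Putting the two together, again with a hypothetical project name:

```shell
# Print estimated costs for one project, then benchmark every concrete
# assignment of implementations for comparison (slower).
just select example-project --compare

# Run selection across all test projects.
just selections
```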

## Running the full test suite

To run everything we did, from scratch:

```shell
$ just cost-models  # approx 10m
$ just selections --compare 2>&1 | tee ../analysis/current/log  # approx 1hr 30m
```