author     Aria <me@aria.rip>  2023-10-26 21:45:38 +0100
committer  Aria <me@aria.rip>  2023-10-26 21:45:38 +0100
commit     a9205019388a43cdd490e782e5dd30bc10f46d26 (patch)
tree       db5f18f45f7335d2c4a1b813fd8c4eb93f9dceae /Tasks.org
parent     599d0c50843ef446378f9db70641e4bf6d0c10fe (diff)
chore: update tasks
Diffstat (limited to 'Tasks.org')
-rw-r--r--  Tasks.org  53
1 file changed, 43 insertions, 10 deletions
@@ -25,7 +25,7 @@
 - We could extend this to suggest different approaches if there is a spread of max n.
 - If time allows, could attempt to create a 'wrapper type' that switches between collections as n changes, using rules decided by something similar to the above algorithm.
 
-* TODO Integrating with Primrose
+* DONE Integrating with Primrose
 
 We want to be able to run primrose on some file, and get a list of candidates out.
 We also need to know where we can find the implementation types, for use in building cost models later.
@@ -39,33 +39,66 @@ We can list every implementation available, its module path, the traits it imple
 We don't need a list of operations right now, as we'll just make a separate implementation of cost model building for each trait, and that will give us it.
 
+This is provided by ~primrose::LibSpec~, and defined in the ~lib_specs~ module.
+
 ** DONE Interface for candidate generation
 
 We should be able to get a list of candidates for a given file, which links up with the implementation library correctly.
 
+This is provided by ~primrose::ContainerSelector~, and defined in the ~selector~ module.
+
 ** DONE Interface for code generation
 
 Given we know what type to use, we should be able to generate the real code that should replace some input file.
 
-** DOING Proof of concept type thing
+This is provided by ~primrose::ContainerSelector~, and defined in the ~codegen~ module.
 
-To make sure this is alright, we can try re-creating the primrose CLI, but a bit nicer and with more info.
-
-* BLOCKED Semantic profiler
+** DONE Proof of concept type thing
 
-We need to be able to pick some random candidate type, wrap it with profiling stuff, and run user benchmarks to get data.
-
-Ideally, we could use information about the cargo project the file resides in to get benchmarks.
+We can get all the candidates for all files in a given project, or even workspace!
 
-We also need to figure out which operations we want to bother counting, and how we can get an 'allocation context'/callstack.
+This is provided by ~Project~ in ~candelabra_cli::project~.
 
-* BLOCKED Building cost models for types
+* Building cost models for types
 
 We need to be able to build cost models for any given type and operation.
 This will probably involve some codegen.
 
 There should also be caching of the outputs.
 
+** Generic trait benchmarkers
+
+Given some type implementing a given trait, we should be able to get performance information at various Ns.
+
+*** DONE Container trait benchmarker
+*** TODO Indexable trait benchmarker
+*** TODO Stack trait benchmarker
+
+** TODO Generate benchmarks for arbitrary type
+
+Given a type that implements some subset of our traits, we should be able to benchmark it.
+We could either use the information from the library specs to generate some code and benchmark it, or we could just have benchmarking code for all of our types in primrose-library or somewhere else (easier).
+
+** TODO Build cost model from benchmark
+
+Fit polynomials for each operation from the benchmarking data.
+
+** TODO Integrate with CLI
+
+The CLI should get cost models for all of the candidate types, and for now just print them out.
+
+** TODO Caching and loading outputs
+
+We should cache the output cost models per machine, ideally with a key of library file modification time.
+
+* BLOCKED Semantic profiler
+
+We need to be able to pick some random candidate type, wrap it with profiling stuff, and run user benchmarks to get data.
+
+Ideally, we could use information about the cargo project the file resides in to get benchmarks.
+
+We also need to figure out which operations we want to bother counting, and how we can get an 'allocation context'/callstack.
+
 * BLOCKED Integration
 
 We create the last bits of the whole pipeline:
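As an illustration of the "generic trait benchmarkers" item in the diff above (timing a container type at various Ns), here is a minimal sketch. The ~Container~ trait shape, the ~Observation~ struct, and the ~benchmark_container~ helper are assumptions made up for this example, not the actual primrose-library or candelabra API.

#+begin_src rust
use std::hint::black_box;
use std::time::{Duration, Instant};

// Assumed minimal container trait, made up for this sketch; the real
// primrose-library traits will differ.
pub trait Container<T>: Default {
    fn insert(&mut self, value: T);
    fn contains(&self, value: &T) -> bool;
}

/// One data point for a cost model: container size and mean cost per operation.
pub struct Observation {
    pub n: usize,
    pub mean_insert: Duration,
    pub mean_contains: Duration,
}

/// Time `insert` and `contains` on a container type at each requested size.
pub fn benchmark_container<C: Container<u64>>(sizes: &[usize]) -> Vec<Observation> {
    sizes
        .iter()
        .map(|&n| {
            let mut c = C::default();

            // Grow the container to size n, timing every insert.
            let start = Instant::now();
            for i in 0..n as u64 {
                c.insert(black_box(i));
            }
            let mean_insert = start.elapsed() / n.max(1) as u32;

            // Look every element up again; black_box stops the calls
            // from being optimised away.
            let start = Instant::now();
            for i in 0..n as u64 {
                black_box(c.contains(&i));
            }
            let mean_contains = start.elapsed() / n.max(1) as u32;

            Observation { n, mean_insert, mean_contains }
        })
        .collect()
}
#+end_src

Running this over each candidate implementation at a handful of sizes yields the (n, cost) points that the cost-model fitting step needs.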
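For "Build cost model from benchmark", one option is an ordinary least-squares polynomial fit over those (n, cost) points. The sketch below solves the normal equations with naive Gaussian elimination and uses no external crates; ~fit_polynomial~ and the example data are made up for illustration and say nothing about how candelabra will actually build its cost models.

#+begin_src rust
/// Fit a polynomial of degree `degree` to (x, y) points by ordinary least squares,
/// returning coefficients [c0, c1, ...] so that cost(x) ≈ c0 + c1*x + c2*x^2 + ...
/// Assumes there are at least degree + 1 distinct points.
fn fit_polynomial(points: &[(f64, f64)], degree: usize) -> Vec<f64> {
    let m = degree + 1;

    // Normal equations A * coeffs = b, with A[j][k] = Σ x^(j+k) and b[j] = Σ y * x^j.
    let mut a = vec![vec![0.0; m]; m];
    let mut b = vec![0.0; m];
    for &(x, y) in points {
        for j in 0..m {
            b[j] += y * x.powi(j as i32);
            for k in 0..m {
                a[j][k] += x.powi((j + k) as i32);
            }
        }
    }

    // Gaussian elimination with partial pivoting.
    for col in 0..m {
        let pivot = (col..m)
            .max_by(|&i, &j| a[i][col].abs().partial_cmp(&a[j][col].abs()).unwrap())
            .unwrap();
        a.swap(col, pivot);
        b.swap(col, pivot);
        for row in (col + 1)..m {
            let factor = a[row][col] / a[col][col];
            for k in col..m {
                a[row][k] -= factor * a[col][k];
            }
            b[row] -= factor * b[col];
        }
    }

    // Back-substitution to recover the coefficients.
    let mut coeffs = vec![0.0; m];
    for row in (0..m).rev() {
        let mut acc = b[row];
        for k in (row + 1)..m {
            acc -= a[row][k] * coeffs[k];
        }
        coeffs[row] = acc / a[row][row];
    }
    coeffs
}

fn main() {
    // Pretend benchmark data: (container size, mean cost in ns), roughly linear.
    let observations = [(10.0, 12.0), (100.0, 105.0), (1000.0, 1010.0), (10000.0, 10100.0)];
    let model = fit_polynomial(&observations, 1);
    println!("cost(n) ≈ {:.3} + {:.3}·n", model[0], model[1]);
}
#+end_src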