Alice Brenon / outillage: repository graph, branch "ugly"
Branches (4): main (default, protected), ml-training, the-amazing-obj-refactoring, ugly
Move the applicative parser to detect first words into the textometry directory
Added a requirements equivalent of the python- part of the manifest (without versions)
Add the script used to compare pairs while building the Parallel subcorpus
Haskell libs moved
Convert the linearization script to take TSV as input and handle the structure itself, making it simpler to call
Simplify compute-profile again to simply read from stdin, dropping the optparse-applicative + ReadMonad combo
Update format for profiles to allow typing the event observed
Adapt compute-profile to latest changes in Conllu.Tree*
Refine the notion of size in the Tree representation of Conllu annotations as used in measures.hs from 922f5f4
Remove commented-out line committed by mistake
Add a python data-analysis tool to extract statistical metrics from the raw measurements
More comprehensive measure script taking both the sign and lexical-unit levels into account
Fix Tree indexer script + add 2 textometry scripts
Keep haskell codebase up to date with lib ghc-geode
Describe how to represent a UD token as a tabular data structure to allow outputting matched words in a TSV
Add a script to extract classifier nouns
Add the ability to set the resolution from the command line in the profile visualisation script
Improve scripts for profile computation: searching from (serialized) indexed trees, and using ranges to compact occurrences found
Oh yeah we serialize things now
Add types to represent indexed sentences and documents, effectively absorbing part of the work that was done during the search
Add a script to serialize syntax trees computed from the .conllu files, copying part of the Conllu.Type types in the process to get a Generic instance for Serialization and to slightly improve structure by indexing features by name with a Map
Fix bug in resampling due to floating-point computation errors
Add a default TSV reader in GEODE module
Add a script to visualise the profiles computed
Lift some constraint on the monad and use that to pass the config around as a Reader in the new profile extractor script
Add a script to extract the repartition of verbs with the imperative mood within a corpus
Fix inclusion path in haskell script
Export fromKey converter as well from GEODE
Fix bug in code reporting collisions between two annotations
Moving python lib out of scripts directory
Add a script to split files on lines with a certain pattern
Move haskell libs into a subdirectory to make space for python library modules
Add a script to retrieve a Simple train set from a Multi one
Add a script to generate reports and draw a confusion matrix
Get rid of deprecation warning with AdamW optimiser
A tiny script used to add the manually annotated label to a bunch of texts being prepared in a JSONL (to be used as input for prodigy)
Implement splitters for Simple and Multi workflows
Add write capacities to the JSONL module
Add predictor for MultiBERT model, improve the trainer for the same model and expose the predicted score in the model's output
Add script to split train and test coming from prodigy
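The commit above that serializes syntax trees computed from the .conllu files (deriving a Generic instance for serialization and indexing features by name with a Map) points to a standard Haskell pattern. The sketch below is a hypothetical illustration of that pattern using the cereal library; the type, field names and module are invented for the example and are not the actual Conllu.Tree definitions.

{-# LANGUAGE DeriveGeneric #-}
-- Hypothetical sketch: an indexed, serializable token type whose features
-- are indexed by name, in the spirit of the serialization commits above.
module IndexedToken where

import Data.Map (Map)
import Data.Serialize (Serialize)
import GHC.Generics (Generic)

data IndexedToken = IndexedToken
  { tokenId :: Int               -- position of the token within its sentence
  , form    :: String            -- surface form
  , lemma   :: String
  , upos    :: String            -- universal POS tag
  , feats   :: Map String String -- morphological features indexed by name
  } deriving (Show, Generic)

-- The empty instance relies on cereal's Generic-based defaults, giving
-- binary (de)serialization without hand-written put/get code.
instance Serialize IndexedToken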
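The commit that passes the config around as a Reader in the profile extractor suggests the usual ReaderT pattern. Below is a minimal sketch of that shape; the Config fields and values are assumptions made for illustration, not the project's actual code.

-- Hypothetical sketch of threading a configuration through a script with
-- ReaderT, in the spirit of the profile-extractor commit above.
import Control.Monad.IO.Class (liftIO)
import Control.Monad.Trans.Reader (ReaderT, asks, runReaderT)

data Config = Config
  { resolution :: Int      -- e.g. sampling resolution for the profiles
  , inputPath  :: FilePath -- where the serialized indexed trees live
  }

type App = ReaderT Config IO

computeProfile :: App ()
computeProfile = do
  res  <- asks resolution
  path <- asks inputPath
  liftIO . putStrLn $ "profiling " ++ path ++ " at resolution " ++ show res

main :: IO ()
main = runReaderT computeProfile (Config 100 "corpus/indexed-trees.bin")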