A SciUnit library for data-driven testing of basal ganglia models, employed for testing via the HBP Validation Framework. The test takes as input a BluePyOpt optimization output containing a hall_of_fame.json file, which specifies a collection of parameter sets. The validation test then evaluates the model for all (or selected) parameter sets against various eFEL features.
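The scoring step can be illustrated with a stdlib-only sketch: load the parameter sets from a hall_of_fame.json file and score each against reference feature statistics. The JSON layout, feature name, and the stand-in for the simulation are all hypothetical, not the actual BasalUnit or eFEL API.

```python
import json

def z_score(observed, ref_mean, ref_std):
    """Absolute Z-score of an observed feature value against reference data."""
    return abs(observed - ref_mean) / ref_std

# A hall_of_fame.json holds a list of parameter sets; here we fake one inline.
hall_of_fame = json.loads('[{"gbar_na": 0.12}, {"gbar_na": 0.10}]')

# Illustrative reference statistics for one eFEL-style feature.
reference = {"AP_amplitude": {"mean": 80.0, "std": 5.0}}

scores = []
for params in hall_of_fame:
    # In the real test, the model would be simulated with these parameters
    # and eFEL features extracted from the voltage trace; this is a stand-in.
    simulated_ap_amplitude = 75.0 + 50.0 * params["gbar_na"]
    scores.append(z_score(simulated_ap_amplitude,
                          reference["AP_amplitude"]["mean"],
                          reference["AP_amplitude"]["std"]))

print(scores)  # one score per parameter set
```

Each parameter set thus receives its own score, which is what allows the test to rank the hall-of-fame candidates.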
The pipeline ingests data from multiple measurement types of spatially organized neuronal activity, such as ECoG or calcium imaging recordings, and returns statistical measures that quantify the dynamic wave-like activity patterns found in the data. Individual parts of the Snakemake-based pipeline are fully configurable, and the composition of Cobrawap elements can be adapted to various datasets thanks to a modular design of self-contained sequential stages, each composed of multiple atomic blocks.
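The stage/block composition idea can be sketched with plain functions: each stage applies its configured blocks in sequence, and blocks can be swapped or reordered per dataset. The block names below are hypothetical, not actual Cobrawap blocks.

```python
def detrend(signal):
    """Atomic block: remove the mean from the signal."""
    mean = sum(signal) / len(signal)
    return [s - mean for s in signal]

def threshold_crossings(signal, level=0.0):
    """Atomic block: mark samples above a threshold (a crude wave-front proxy)."""
    return [s > level for s in signal]

def run_stage(blocks, data):
    """A stage is a self-contained sequence of atomic blocks applied in order."""
    for block in blocks:
        data = block(data)
    return data

# Stage composition is configurable: swap or reorder blocks for a new dataset.
stage_processing = [detrend, threshold_crossings]
events = run_stage(stage_processing, [1.0, 3.0, 2.0, 0.0])
print(events)
```

Chaining such stages end to end gives the full pipeline, with each stage's output serving as the next stage's input.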
The EBRAINS Collaboratory offers researchers and developers a secure environment to work with others. You control the level of collaboration by sharing your projects with specific users, teams, or the entire Internet. Many researchers already share their work; several services, tools, datasets, and other resources are publicly available, and many more are available for registered users.
To better understand the relationships between interindividual variability in brain regions’ connectivity and behavioural phenotypes, we are developing a connectivity-based psychometric prediction (CBPP) framework. As a preliminary step towards this region-wise machine learning approach, we performed an extensive assessment of the general CBPP framework based on whole-brain connectivity information. Because a systematic evaluation of different parameters was lacking in previous literature, we evaluated several approaches pertaining to the different steps of a CBPP study. We hence tested 72 different approach combinations (3 types of preprocessing x 4 parcellation granularities x 2 connectivity methods x 3 regression methods = 72 combinations) in a cohort of over 900 healthy adults across 98 psychometric variables. Overall, our extensive evaluation combined with an innovative region-wise machine learning approach offers a framework that optimises both prediction performance and neurobiological validity (and hence interpretability) for studying brain-behaviour relationships.
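The 72 evaluated combinations are simply the Cartesian product of the four parameter categories, which can be enumerated directly. The category labels below are placeholders; the study's exact method names may differ.

```python
from itertools import product

preprocessing = ["minimal", "variant B", "variant C"]        # 3 types (placeholder labels)
granularities = [100, 200, 300, 400]                         # 4 parcellation granularities
connectivity = ["full correlation", "partial correlation"]   # 2 connectivity methods
regression = ["linear", "ridge", "elastic net"]              # 3 regression methods

combinations = list(product(preprocessing, granularities, connectivity, regression))
print(len(combinations))  # 3 * 4 * 2 * 3 = 72
```

Each tuple in `combinations` corresponds to one full CBPP pipeline configuration to be evaluated on the cohort.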
The test takes as input a BluePyOpt optimization output file. The validation test then evaluates the model for all parameter sets against various eFEL features. Note that the reference data used are those stored within the model itself, so this test can be considered a quantification of the model's goodness of fit. The results are registered in the HBP Validation Framework app.
The Python library Electrophysiology Analysis Toolkit (Elephant) provides tools for analysing neuronal activity data, such as spike trains, local field potentials and intracellular data. In addition to providing a platform for sharing analysis code from different laboratories, Elephant provides a consistent and homogeneous framework for data analysis built on a modular foundation. The underlying data model is the Neo library, which captures a wide range of neuronal data types and supports dozens of file formats and network simulation tools. A common data description, such as the one Neo provides, is essential for developing interoperable analysis workflows.
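A toy, stdlib-only version of one such analysis shows what this kind of tooling computes. Elephant's own statistics functions operate on Neo SpikeTrain objects; this sketch uses a plain list of spike times in seconds and is not the Elephant API.

```python
def mean_firing_rate(spike_times, t_start, t_stop):
    """Number of spikes per second within the window [t_start, t_stop]."""
    n = sum(1 for t in spike_times if t_start <= t <= t_stop)
    return n / (t_stop - t_start)

# Five spikes over a 2-second window -> 2.5 Hz.
rate = mean_firing_rate([0.1, 0.5, 0.9, 1.4, 1.8], t_start=0.0, t_stop=2.0)
print(rate)  # 2.5
```

Building many such measures on one shared data model (Neo) is what makes Elephant's analyses composable across laboratories and file formats.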
Frites allows the characterisation of task-related cognitive brain networks. Neural correlates of cognitive functions can be extracted both at the single brain area (or channel) level and at the network level. The toolbox includes time-resolved directed (e.g., Granger causality) and undirected (e.g., mutual information) functional connectivity metrics. In addition, it includes cluster-based and permutation-based statistical methods for single-subject and group-level inference.
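To make the undirected measure concrete, here is a stdlib computation of mutual information between two discrete variables. Frites itself estimates MI on continuous neural data with more sophisticated estimators; this sketch only shows what the quantity captures.

```python
from collections import Counter
from math import log2

def mutual_information(x, y):
    """I(X;Y) = sum over (a,b) of p(a,b) * log2(p(a,b) / (p(a) * p(b)))."""
    n = len(x)
    pxy = Counter(zip(x, y))
    px = Counter(x)
    py = Counter(y)
    return sum((c / n) * log2((c / n) / ((px[a] / n) * (py[b] / n)))
               for (a, b), c in pxy.items())

# Two perfectly coupled binary signals share exactly 1 bit of information.
mi = mutual_information([0, 1, 0, 1], [0, 1, 0, 1])
print(mi)  # 1.0
```

Independent signals would instead yield an MI near zero, which is why MI serves as an undirected functional connectivity measure between recording sites.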
Among fiber tracking methods, spin glass tractography approaches offer an efficient framework for global optimization when inferring structural brain connectivity from diffusion MRI HARDI or HYDI datasets. In addition, spin-glass-based global tractography allows further regularization potentials to be added to better constrain the energy landscape using anatomical or microstructural priors, thus helping to discard false positives. The proposed global tractography tools compute, from any diffusion MRI dataset, a dense tractogram of virtual white matter fibers under several constraints: a bending energy ensuring low fiber curvature and robust inference of fibers in regions containing several fiber populations (kissings, crossings, splittings); anatomical priors (such as the pial surface, to drive the endings of fibers); and microstructural priors (such as the intra-axonal volume fraction or the orientation dispersion of fibers, to allow sharp turns of fibers when connecting to the cortical ribbon).
A Python package for working with the Human Brain Project Model Validation Framework.
This package contains validation tests for models of the hippocampus, based on the SciUnit framework and the NeuronUnit package. As in SciUnit, four main classes are implemented in HippoUnit: the Test class, the Model class, the Capability class and the Score class. The tests of HippoUnit automatically run simulations on single-cell models that mimic the electrophysiological protocol from which the target experimental data were derived. The behavior of the model is then evaluated and quantitatively compared to the experimental data using various feature-based error functions. Current tests cover somatic behavior as well as signal propagation and integration in apical dendrites of hippocampal CA1 pyramidal cell models.
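The four-class pattern can be sketched in plain Python. Class and method names below mirror the pattern, not HippoUnit's actual API, and the model's response is a stand-in for what a real simulation would produce.

```python
class ProducesSomaticResponse:
    """Capability: the interface a model must implement to be testable."""
    def get_response_amplitude(self):
        raise NotImplementedError

class ToyCA1Model(ProducesSomaticResponse):
    """Model: a stand-in for a simulated CA1 pyramidal cell."""
    def get_response_amplitude(self):
        return 78.0  # mV; would come from a simulation in practice

class Score:
    """Score: wraps the quantitative result of a comparison."""
    def __init__(self, value):
        self.value = value

class SomaticFeatureTest:
    """Test: compares the model's prediction to experimental data."""
    def __init__(self, observation):
        self.observation = observation  # e.g. {"mean": ..., "std": ...}

    def judge(self, model):
        prediction = model.get_response_amplitude()
        z = abs(prediction - self.observation["mean"]) / self.observation["std"]
        return Score(z)

score = SomaticFeatureTest({"mean": 80.0, "std": 4.0}).judge(ToyCA1Model())
print(score.value)  # |78 - 80| / 4 = 0.5
```

The capability class is the key design choice: a test declares what it needs from a model, and any model implementing that interface can be judged without the test knowing how the model is simulated.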
MorphoUnit is a SciUnit library for data-driven testing of neuronal morphologies, employed for testing via the HBP Validation Framework.
The NetworkUnit module builds upon the formalized validation scheme of the SciUnit package, which enables the validation of models against experimental data (or other models) via tests. A test is matched to the model by capabilities and quantitatively evaluated by a score.
NSuite is a framework for maintaining and running benchmarks and validation tests for multi-compartment neural network simulations on HPC systems. NSuite automates the process of building simulation engines and running benchmarks and validation tests, and is specifically designed for easy deployment on HPC systems in testing workflows such as benchmark-driven development or continuous integration. Three needs motivated its development: a definitive resource for comparing the performance and correctness of simulation engines on HPC systems; verification of the performance and correctness of individual simulation engines as they change over time; and testing that changes to an HPC system do not cause performance or correctness regressions in simulation engines. The framework currently supports the simulation engines Arbor, NEURON, and CoreNeuron, and allows other simulation engines to be added.