
Tools

3DSpineS

Dendritic spines of pyramidal neurons are the targets of most excitatory synapses in the cerebral cortex, and their morphology appears to be functionally critical. Characterizing this morphology is therefore necessary to link structural and functional spine data and to make them more interpretable. We used a large database of more than 7,000 individually 3D-reconstructed dendritic spines from human cortical pyramidal neurons, first transforming each spine into a set of 54 quantitative features that characterize its geometry mathematically. The resulting data set is grouped into spine clusters using a probabilistic model based on finite Gaussian mixtures. We uncover six groups of spines whose discriminative characteristics are identified with machine learning methods as a set of rules. The clustering model allows us to simulate realistic spines from human pyramidal neurons and to suggest new hypotheses about the functional organization of these cells.
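
The clustering step relies on finite Gaussian mixture models. The sketch below is not the tool's own code; it assumes scikit-learn and uses a random placeholder feature matrix to illustrate the general technique of fitting a Gaussian mixture to 54-dimensional spine features and selecting the number of clusters by BIC.

```python
# Minimal sketch (not 3DSpineS itself): Gaussian mixture clustering of spine features,
# with the number of clusters chosen by BIC. The feature matrix is a placeholder.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
spine_features = rng.normal(size=(7000, 54))  # stand-in for the 54 morphological features per spine

best_model, best_bic = None, np.inf
for n_components in range(2, 11):
    gmm = GaussianMixture(n_components=n_components, covariance_type="full", random_state=0)
    gmm.fit(spine_features)
    bic = gmm.bic(spine_features)
    if bic < best_bic:
        best_model, best_bic = gmm, bic

labels = best_model.predict(spine_features)  # cluster label for each spine
print(best_model.n_components, np.bincount(labels))
```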

Data analysis and visualisation · Data

Android app for multimodal data acquisition from wearables

An app for acquiring and storing data from multiple sensors. It can currently be used with the following devices: Empatica E4, tablet/smartphone built-in sensors, and MetaMotion R. To improve reliability, a bipartite structure has been implemented: the Main Activity acts as an interface between the user and the main service, which constitutes the principal actor. The latter performs scans, handles the user's requests to connect remote devices, manages any unexpected disconnections that may happen, and receives the data from the wireless sensors.

Data

Bids Manager & Pipeline

Manually driven processes for storing data can lead to human errors, which cannot be tolerated in the context of clinical data sets. Bids Manager offers a secure system to import and structure patients' clinical data sets according to the Brain Imaging Data Structure (BIDS). BIDS is an initiative aiming to establish a common standard describing data and its organization on disk for both neuroimaging and electrophysiological data. Bids Manager is a software tool for clinicians and researchers with a user-friendly interface.
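
The sketch below is not part of Bids Manager itself; it assumes the pybids library is installed and uses a placeholder path, simply to show how a data set organized according to the BIDS standard can be queried programmatically once it has been structured.

```python
# Illustrative only: querying a BIDS-structured data set with pybids.
# The path "/data/my_bids_dataset" is a placeholder.
from bids import BIDSLayout

layout = BIDSLayout("/data/my_bids_dataset")
print(layout.get_subjects())  # subjects found in the data set

# Retrieve the functional runs as file names.
bold_files = layout.get(suffix="bold", extension=".nii.gz", return_type="filename")
print(bold_files[:3])
```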

Share data · Data

Collaboratory

The EBRAINS Collaboratory offers researchers and developers a secure environment to work with others. You control the level of collaboration by sharing your projects with specific users, with teams, or with the entire Internet. Many researchers are sharing their work already; several services, tools, datasets, and other resources are publicly available, and many more are available for registered users.

Data · Brain atlases · Modelling and simulation · Validation and inference

Data Catalogue

PostgreSQL relational database schema for tracking the provenance of data across all software components and files involved in the data transformation process. This project provides a Docker container, including Alembic and a Python model of the Data Catalog schema, to set up and migrate this schema in a target database.
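
The actual schema is not reproduced here. The sketch below is a hypothetical, much simplified SQLAlchemy model showing how a provenance link between a file and the software component that produced it could be expressed; the table and column names are illustrative assumptions, not the real Data Catalog schema.

```python
# Hypothetical, simplified provenance model (not the actual Data Catalog schema).
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, relationship

Base = declarative_base()

class SoftwareComponent(Base):
    __tablename__ = "software_component"
    id = Column(Integer, primary_key=True)
    name = Column(String, nullable=False)
    version = Column(String)

class DataFile(Base):
    __tablename__ = "data_file"
    id = Column(Integer, primary_key=True)
    path = Column(String, nullable=False)
    # Provenance link: the component that produced this file.
    produced_by_id = Column(Integer, ForeignKey("software_component.id"))
    produced_by = relationship("SoftwareComponent")

# Create the tables in a local SQLite database for demonstration.
engine = create_engine("sqlite:///provenance_demo.db")
Base.metadata.create_all(engine)
```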

Data

Demics

Demics is a library for the Python programming language that adds support for distributed computational operations on very large, multi-dimensional arrays and matrices. Such operations include inference with deep learning models from the TensorFlow and PyTorch libraries. The software focuses on the segmentation and detection of objects in particularly large image data.

Modelling and simulation · Data

EBRAINS Knowledge Graph

We built the EBRAINS Knowledge Graph (KG) to help you find and share the data you need to make your next discovery. We also built it to connect you to the software and hardware tools which will help you analyse the data you have and the data you find. The EBRAINS Knowledge Graph supports rich terminologies, ontologies and controlled vocabularies. The system is built by design to support iterative elaborations of common standards and supports these by probabilistic suggestion and review systems. The EBRAINS Knowledge Graph is a multi-modal metadata store which brings together information from different fields of brain research. At its core, a graph database tracks the linkage between experimental data and neuroscientific data science, supporting more extensive data reuse and complex computational research than would be possible otherwise.

Data · Share data

EBRAINS Metadata Wizard

In this form, you can describe key aspects of your dataset so that other researchers will be able to find, reuse and cite your work. You can use the navigation bar above to explore the different sections and categories of metadata collected through this form.

Data · Share data

fairgraph

fairgraph is a Python library for working with metadata in the HBP/EBRAINS Knowledge Graph, with a particular focus on data reuse, although it is also useful in metadata registration/curation. The basic idea of the library is to represent metadata nodes from the Knowledge Graph as Python objects. Communication with the Knowledge Graph service is through a client object, for which an access token associated with an EBRAINS account is needed.
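
A minimal usage sketch, assuming a recent fairgraph release and a valid EBRAINS access token; the exact module and class names can differ between fairgraph versions, so treat them as assumptions rather than the definitive API.

```python
# Minimal sketch of connecting to the Knowledge Graph with fairgraph.
# The token string is a placeholder; DatasetVersion is one example metadata type.
from fairgraph import KGClient
import fairgraph.openminds.core as omcore

client = KGClient(token="YOUR_EBRAINS_ACCESS_TOKEN")

# List a few dataset versions; each result is a Python object wrapping a KG metadata node.
datasets = omcore.DatasetVersion.list(client, size=5)
for ds in datasets:
    print(ds)
```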

Data

fmralign

This is meant to be a lightweight Python library that handles functional alignment tasks. It is compatible with, and inspired by, Nilearn. Alternative implementations of these ideas can be found in the pymvpa or brainiak packages.
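
A minimal sketch of the kind of workflow the library supports, assuming the PairwiseAlignment estimator and the parameters shown behave as written; these names and the file paths are assumptions here, so consult the fmralign documentation for the exact API.

```python
# Illustrative functional alignment between two subjects with fmralign.
# Image paths and parameter values are placeholders.
from fmralign.pairwise_alignment import PairwiseAlignment

# Learn a piecewise transformation mapping source-subject activity onto the target subject.
alignment = PairwiseAlignment(alignment_method="scaled_orthogonal", n_pieces=150)
alignment.fit("sub-01_task_bold.nii.gz", "sub-02_task_bold.nii.gz")

# Apply the learned transformation to new data from the source subject.
aligned_img = alignment.transform("sub-01_new_task_bold.nii.gz")
```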

Data

fMRIPrep

Preprocessing of functional MRI (fMRI) involves numerous steps to clean and standardize the data before statistical analysis. Generally, researchers create ad hoc preprocessing workflows for each dataset, building upon a large inventory of available tools. The complexity of these workflows has snowballed with rapid advances in acquisition and processing. fMRIPrep is an analysis-agnostic tool that addresses the challenge of robust and reproducible preprocessing for task-based and resting-state fMRI data. fMRIPrep automatically adapts a best-in-breed workflow to the idiosyncrasies of virtually any dataset, ensuring high-quality preprocessing without manual intervention, and robustly produces high-quality results on diverse fMRI data. Additionally, fMRIPrep introduces less uncontrolled spatial smoothness than commonly used preprocessing tools. fMRIPrep equips neuroscientists with an easy-to-use and transparent preprocessing workflow, which can help ensure the validity of inference and the interpretability of results.

The workflow is based on Nipype and encompasses a large set of tools from well-known neuroimaging packages, including [FSL](https://fsl.fmrib.ox.ac.uk/fsl/fslwiki/), ANTs, FreeSurfer, AFNI, and Nilearn. This pipeline was designed to provide the best software implementation for each stage of preprocessing, and will be updated as newer and better neuroimaging software becomes available. fMRIPrep performs basic preprocessing steps (coregistration, normalization, unwarping, noise component extraction, segmentation, skull stripping, etc.), providing outputs that can be easily submitted to a variety of group-level analyses, including task-based or resting-state fMRI, graph theory measures, and surface- or volume-based statistics.

fMRIPrep allows you to easily do the following:

- Take fMRI data from unprocessed (only reconstructed) to ready for analysis.
- Implement tools from different software packages.
- Achieve optimal data processing quality by using the best tools available.
- Generate preprocessing-assessment reports, with which the user can easily identify problems.
- Receive verbose output concerning the stage of preprocessing for each subject, including meaningful errors.
- Automate and parallelize processing steps, which provides a significant speed-up over typical linear, manual processing.
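
fMRIPrep follows the BIDS-Apps command-line convention. The sketch below, which assumes fMRIPrep is installed and uses placeholder paths and a placeholder participant label, invokes it from Python; the same arguments can equally be passed directly on the command line.

```python
# Illustrative fMRIPrep invocation through Python's subprocess module.
# Paths and the participant label are placeholders.
import subprocess

subprocess.run(
    [
        "fmriprep",
        "/data/bids_dataset",   # BIDS-formatted input directory
        "/data/derivatives",    # output directory for preprocessed data
        "participant",          # analysis level (BIDS-Apps convention)
        "--participant-label", "01",
        "--output-spaces", "MNI152NLin2009cAsym",
    ],
    check=True,
)
```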

Data

Ginkgo/GlobalTractography

Among fiber-tracking methods, spin-glass tractography approaches provide an efficient framework for global optimization of the inference of structural brain connectivity from diffusion MRI HARDI or HYDI datasets. In addition, spin-glass based global tractography makes it possible to add further regularization potentials that better constrain the energy landscape using anatomical or microstructural priors, and thus help discard false positives. The proposed global tractography tools allow computing, from any diffusion MRI dataset, a dense tractogram of virtual white matter fibers under several constraints: a bending energy ensuring low curvature of fibers and robust inference of fibers in regions containing several fiber populations (kissings, crossings, splittings); anatomical priors (such as the pial surface, to drive the termination of fibers); and microstructural priors (such as the intra-axonal volume fraction or the orientation dispersion of fibers, to allow sharp turns of fibers when connecting to the cortical ribbon).
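
The description amounts to minimizing a global energy over candidate fiber configurations. A schematic form of such an energy, written here only to illustrate how the bending term and the priors combine (the actual functional used by Ginkgo may differ), is:

```latex
% Schematic global tractography energy (illustrative, not Ginkgo's exact functional)
E(\mathcal{F}) = E_{\text{data}}(\mathcal{F})
  + \lambda_{\text{bend}}\, E_{\text{bend}}(\mathcal{F})
  + \lambda_{\text{anat}}\, E_{\text{anat}}(\mathcal{F})
  + \lambda_{\text{micro}}\, E_{\text{micro}}(\mathcal{F})
```

Here F denotes the set of candidate fibers (the spin-glass configuration), E_data measures agreement with the diffusion signal, E_bend penalizes high curvature, and the lambda weights balance the anatomical and microstructural prior terms.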

Data · Validation and inference

ibc-public

This Python package gives the pipeline used to process the MRI data obtained in the Individual Brain Charting Project. More info on the data can be found at IBC public protocols and the IBC webpage. The latest collection of raw data is available on OpenNeuro, data accession no. 002685. The latest collection of unthresholded statistical maps can be found on NeuroVault, collection id 6618.

## Install

Under the main working directory of this repository on your computer, run the following command in a command prompt:

```
pip install -e .
```

## Example usage

One can import the entire package with `import ibc_public` or use specific parts of the package:

```python
from ibc_public import utils_data
utils_data.make_surf_db(derivatives="/path/to/ibc/derivatives", mesh="fsaverage5")
```

## Details

These scripts make it possible to preprocess the data:

* run topup distortion correction
* run motion correction
* run coregistration of the fMRI scans to the individual T1 image
* run spatial normalization of the data
* run a general linear model to obtain brain activity maps for the main contrasts of the experiment

## Core scripts

The core scripts are in the `scripts` folder:

- `pipeline.py` launches the full analysis on fMRI data (pre-processing + GLM)
- `glm_only.py` launches GLM analyses on the data
- `surface_based_analyses` launches surface extraction and registration with Freesurfer; it also projects fMRI data to the surface
- `surface_glm_analysis.py` runs GLM analyses on the surface
- `dmri_preprocessing` (WIP) is for diffusion data. It relies on dipy.
- `anatomical mapping` (WIP) yields T1w, T2w and MWF surrogates from anatomical acquisitions.
- `script_retino.py` yields some post-processing for retinotopic acquisitions (derivation of retinotopic representations from fMRI maps)

## Dependencies

Dependencies are:

* FSL (topup)
* SPM12 for preprocessing
* Freesurfer for surface-based analysis
* Nipype to call SPM12 functions
* Pypreprocess to generate preprocessing reports
* Nilearn for various functions
* Nistats to run general linear models

The scripts have been used with the following versions of software and environment:

* Python 3.5
* Ubuntu 16.04
* Nipype v0.14.0
* Pypreprocess v0.0.1.dev
* FSL v5.0.9
* SPM12 rev 7219
* Nilearn v0.4.0
* Nistats v0.0.1.a

## Future work

- More high-level analysis scripts
- Scripts for additional datasets not yet available
- Scripts for surface-based analysis

## Contributions

Please feel free to report any issues and propose improvements on GitHub.

Data

Jupyter Lab

An extensible environment for interactive and reproducible computing, based on the Jupyter Notebook and Architecture. Currently ready for users. JupyterLab is the next-generation user interface for Project Jupyter offering all the familiar building blocks of the classic Jupyter Notebook (notebook, terminal, text editor, file browser, rich outputs, etc.) in a flexible and powerful user interface. JupyterLab will eventually replace the classic Jupyter Notebook. JupyterLab can be extended using npm packages that use our public APIs. To find JupyterLab extensions, search for the npm keyword jupyterlab-extension or the GitHub topic jupyterlab-extension. To learn more about extensions, see the user documentation. The current JupyterLab releases are suitable for general usage, and the extension APIs will continue to evolve for JupyterLab extension developers.

Data

KnowledgeSpace

KnowledgeSpace (KS) is a community-based encyclopedia for neuroscience that links brain research concepts to the data, models, and literature that support them. The KS framework combines the general descriptions of neuroscience concepts found in Wikipedia with their more detailed descriptions from InterLex, links the latest PubMed citations with the descriptions of neuroscience concepts, and provides users with access to the data and models linked to neuroscience research concepts found in some of the world's leading neuroscience repositories. Further, it serves as a framework where large-scale neuroscience projects can expose their data to the neuroscience community at large. KnowledgeSpace is a joint development between the Human Brain Project (HBP), the International Neuroinformatics Coordinating Facility (INCF), and the Neuroscience Information Framework (NIF).

Data

LePetitPrince

Given a sequence of stimuli, fMRI data from subjects exposed to this sequence, and one or several models making quantitative predictions for each stimulus in the sequence, the code allows you to build a flexible analysis pipeline combining functions that generate R² or r maps. Function types can be, for example:

- data compression methods (already coded, or that you can add)
- data transformation methods (standardization, convolution with a kernel, or whatever your heart desires...)
- splitting strategies
- encoding models
- any task that you might find useful (and that you are willing to code)

For example, the pipeline programmed in main.py fits stimulus representations of several language models to fMRI data obtained from participants listening to an audiobook (in 9 runs). R² values are computed from a nested cross-validated Ridge regression. The pipeline runs on (1 subject, 1 model) tuples, to facilitate distributing the code on a cluster.
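
A minimal sketch of the nested cross-validated Ridge regression idea, using scikit-learn rather than the project's own code; the array shapes and variable names are placeholders.

```python
# Illustrative nested cross-validated Ridge encoding model (not the project's code).
# X: stimulus features from a language model; Y: fMRI voxel time courses (placeholders).
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.metrics import r2_score
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
X = rng.normal(size=(900, 300))   # time points x language-model features
Y = rng.normal(size=(900, 500))   # time points x voxels

outer_cv = KFold(n_splits=9)      # e.g. one outer fold per run
r2_per_fold = []
for train_idx, test_idx in outer_cv.split(X):
    # Inner loop: RidgeCV picks the regularization strength on the training folds.
    model = RidgeCV(alphas=np.logspace(-1, 4, 10))
    model.fit(X[train_idx], Y[train_idx])
    pred = model.predict(X[test_idx])
    r2_per_fold.append(r2_score(Y[test_idx], pred, multioutput="raw_values"))

r2_map = np.mean(r2_per_fold, axis=0)  # per-voxel R², averaged over outer folds
```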

Data

Medical Informatics Platform

The Medical Informatics Platform (MIP) is designed to help clinicians, clinical scientists, and clinical data scientists aiming to adopt advanced analytics for clinical research. Users can explore harmonized medical data extracted from pre-processed neuroimaging, neurophysiological and medical records and research cohort datasets without transferring original clinical data.

MIP · Data

MIP Microservice Infrastructure

Platform for rapidly deploying globally distributed services. It supports clustering, security, monitoring and more out of the box. It is based on Cisco's Mantl cloud project (https://github.com/CiscoCloud/mantl). The aim of this project is to support the deployment of many services in the MIP and provide an environment for big data software such as Apache Spark.

Data
