Tools

DC Explorer

DC Explorer focuses on statistical analysis of data subsets. It provides a treemap visualization to facilitate subset definition: treemapping visualizes the filtering operations that define each subset by grouping the data into compartments, color-coding each item, and sorting items by value. Once the subsets have been defined, different statistical tests are performed automatically to analyze the relationships between the selected subsets.
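DC Explorer's internal tests are not documented here; as an illustration of the kind of comparison it automates, a minimal sketch of one common two-subset statistic (Welch's t, implemented directly in NumPy; the subset values are invented):

```python
import numpy as np

def welch_t(a, b):
    """Welch's t statistic for two independent samples with unequal variances."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    va, vb = a.var(ddof=1) / a.size, b.var(ddof=1) / b.size
    return (a.mean() - b.mean()) / np.sqrt(va + vb)

# Two hypothetical subsets selected via the treemap
subset_a = np.array([5.1, 4.9, 5.4, 5.0, 5.2])
subset_b = np.array([6.0, 6.2, 5.9, 6.1, 6.3])
t = welch_t(subset_a, subset_b)  # strongly negative: subset_a sits well below subset_b
```

A large |t| flags a difference between the selected subsets worth inspecting further.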

Data analysis and visualisation

Demics

Demics is a library for the Python programming language, adding support for distributed computational operations for very large, multi-dimensional arrays and matrices. Such operations include Deep Learning inference models from the libraries TensorFlow and PyTorch. The software focuses on the segmentation and detection of objects in particularly large image data.
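Demics' actual API is not shown here; the core idea of distributing an operation over a very large image is tiling it into overlapping blocks, processing each independently, and stitching the results. A minimal NumPy sketch (tile sizes, overlap, and the toy "segmentation" by thresholding are all illustrative):

```python
import numpy as np

def tile_indices(length, tile, overlap):
    """Return (start, stop) pairs covering `length` with the given overlap."""
    step = tile - overlap
    starts = range(0, max(length - overlap, 1), step)
    return [(s, min(s + tile, length)) for s in starts]

# Hypothetical "segmentation": threshold each tile independently.
# For a pointwise operation the tiled result equals the global one;
# for CNN inference the overlap mitigates tile-border effects.
image = np.random.default_rng(0).random((100, 100))
mask = np.zeros_like(image, dtype=bool)
for r0, r1 in tile_indices(100, 40, 8):
    for c0, c1 in tile_indices(100, 40, 8):
        mask[r0:r1, c0:c1] |= image[r0:r1, c0:c1] > 0.5
```

In a distributed setting each (tile, model) pair would be dispatched to a worker; only the stitching step needs the full-size output array.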

Modelling and simulation, Data

deNEST

deNEST is a Python library for specifying networks and running simulations using the NEST simulator. deNEST allows the user to concisely specify large-scale networks and simulations in hierarchically organized declarative parameter files. From these parameter files, a network is instantiated in NEST (layers of neurons and stimulation devices, their connections, and recorder devices), and a simulation is run in sequential steps ("sessions"), during which the network parameters can be modified and the network can be stimulated, recorded, etc. Some advantages of the declarative approach:

- Parameters and code are separated.
- Simulations are easier to reason about, reuse, and modify.
- Parameters are more readable and succinct.
- Parameter files can be easily version controlled, and diffs are smaller and more interpretable.
- Clean separation between the specification of the "network" (the simulated neuronal system) and the "simulation" (structured stimulation and recording of the network), which facilitates running different experiments using the same network.
- Parameter exploration is more easily automated.
- The complexity of interacting with NEST is hidden, which makes some tricky operations (such as connecting a weight_recorder) easy.
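deNEST's actual parameter-file schema is not reproduced here; the key mechanism behind hierarchical declarative parameters is recursive merging, where a child level overrides only the leaves it names. A minimal sketch (the parameter names and values are invented, not deNEST's schema):

```python
def merge(base, override):
    """Recursively merge nested parameter dicts; leaves in `override` win."""
    out = dict(base)
    for key, val in override.items():
        if isinstance(val, dict) and isinstance(out.get(key), dict):
            out[key] = merge(out[key], val)
        else:
            out[key] = val
    return out

# Hypothetical hierarchy: defaults shared by all layers,
# one layer overriding a single leaf parameter
defaults = {"neuron": {"model": "iaf_psc_alpha",
                       "params": {"C_m": 250.0, "tau_m": 10.0}}}
layer_l4 = {"neuron": {"params": {"tau_m": 20.0}}}
resolved = merge(defaults, layer_l4)  # tau_m overridden, C_m and model inherited
```

Because only the overridden leaf appears in the child file, diffs stay small and the intent of each override is explicit.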

Modelling and simulation

DF Analysis

Distance-Fluctuation (DF) Analysis is a Python tool that analyzes the results of an MD simulation using Distance Fluctuation (DF) matrices, based on the Coordination Propensity (CP) hypothesis. Specifically, low CP values, corresponding to low pair-distance fluctuations, highlight groups of residues that move in a mechanically coordinated way. The script can analyze an MD trajectory and identify the coordinated motions between residues. It can then filter the output matrix by distance to identify long-range coordinated motions.
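The tool's own implementation is not shown here; the underlying quantity, the variance of each pair distance over trajectory frames, can be sketched in a few lines of NumPy (the synthetic trajectory and the "wobbly" residue are illustrative):

```python
import numpy as np

def distance_fluctuation(traj):
    """traj: (n_frames, n_residues, 3) coordinates.
    Returns the matrix of pair-distance variances <(d_ij - <d_ij>)^2>."""
    diff = traj[:, :, None, :] - traj[:, None, :, :]   # (F, N, N, 3)
    dist = np.linalg.norm(diff, axis=-1)               # (F, N, N)
    return dist.var(axis=0)

rng = np.random.default_rng(1)
n_frames, n_res = 50, 6
base = rng.random((n_res, 3)) * 10                     # static structure
traj = base + 0.1 * rng.standard_normal((n_frames, n_res, 3))
traj[:, 5, :] += 2.0 * rng.standard_normal((n_frames, 3))  # residue 5 fluctuates more
cp = distance_fluctuation(traj)
```

Low entries of `cp` mark residue pairs whose separation barely fluctuates, i.e. candidates for mechanically coordinated motion; filtering by mean distance then isolates the long-range pairs.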

Modelling and simulation, Molecular and subcellular simulation

dopp

dopp is a library for simulating rate-based neuron models with conductance-based synapses in feedforward architectures. It supports synaptic plasticity. Both models with instantaneous membrane response and dynamic models are available. Dynamic models are integrated with Runge-Kutta 2/3 to increase stability for larger timesteps. For theoretical details see https://arxiv.org/abs/2104.13238.
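dopp's own model equations are in the linked paper; as an illustration of the integration scheme mentioned above, a minimal Runge-Kutta 2 (Heun) step applied to a generic rate-unit membrane equation with one conductance-based input (all parameter values and the step-input drive are invented, not dopp's API):

```python
def heun_step(f, u, t, dt):
    """One Runge-Kutta 2 (Heun) step for du/dt = f(u, t)."""
    k1 = f(u, t)
    k2 = f(u + dt * k1, t + dt)
    return u + 0.5 * dt * (k1 + k2)

# Hypothetical single unit: leak plus one excitatory conductance
g_L, E_L, E_E, C = 1.0, -70.0, 0.0, 1.0

def g_E(t):                       # toy presynaptic drive: step at t = 10
    return 0.5 if t >= 10.0 else 0.0

def dudt(u, t):
    return (g_L * (E_L - u) + g_E(t) * (E_E - u)) / C

dt, u, trace = 0.1, E_L, []
for step in range(400):
    u = heun_step(dudt, u, step * dt, dt)
    trace.append(u)
# After the step input, u relaxes to (g_L*E_L + g_E*E_E)/(g_L + g_E)
```

Compared with forward Euler, the second evaluation `k2` lets the conductance-weighted leak be integrated stably at coarser `dt`, which is the motivation stated above.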

Modelling and simulation

EBRAINS curation request form

In this form, you will be asked to provide some information about yourself and your dataset, model, or software. This should take around 10 minutes. Please note that all information provided can be adjusted later.

Share data

EBRAINS Knowledge Graph

We built the EBRAINS Knowledge Graph (KG) to help you find and share the data you need to make your next discovery, and to connect you to the software and hardware tools that will help you analyse the data you have and the data you find. The EBRAINS Knowledge Graph supports rich terminologies, ontologies and controlled vocabularies. The system is designed to support iterative elaboration of common standards, aided by probabilistic suggestion and review systems. The EBRAINS Knowledge Graph is a multi-modal metadata store which brings together information from different fields of brain research. At its core, a graph database tracks the linkage between experimental data and neuroscientific data science, supporting more extensive data reuse and more complex computational research than would otherwise be possible.

Data, Share data

EBRAINS Metadata Wizard

In this form, you can describe key aspects of your dataset so that other researchers will be able to find, reuse and cite your work. You can use the navigation bar above to explore the different sections and categories of metadata collected through this form.

Data, Share data

EBRAINS Model Catalog

The EBRAINS Model Catalog contains information about models developed and/or used within the EBRAINS research infrastructure. It allows you to find information about models and results obtained using those models, showcasing how those models have been validated against experimental findings.

Find data

eFEL

The Electrophys Feature Extraction Library (eFEL) allows neuroscientists to automatically extract features from time series data recorded from neurons (both in vitro and in silico). Examples are the action potential width and amplitude in voltage traces recorded during whole-cell patch clamp experiments. The user of the library provides a set of traces and selects the features to be calculated; the library then extracts the requested features and returns the values to the user. The core of the library is written in C++, and a Python wrapper is included. At the moment we provide a way to automatically compile and install the library as a Python module.
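eFEL's own API is not reproduced here; to make the workflow concrete, a minimal NumPy sketch in the spirit of eFEL's spike-count and AP-amplitude features, applied to a synthetic voltage trace (the threshold-based detection and the Gaussian "spikes" are illustrative, not eFEL's algorithm):

```python
import numpy as np

def spike_features(t, v, threshold=-20.0):
    """Detect threshold crossings; return spike count and per-spike
    amplitudes above threshold (a toy analogue of eFEL-style features)."""
    above = v >= threshold
    onsets = np.where(~above[:-1] & above[1:])[0] + 1
    offsets = np.where(above[:-1] & ~above[1:])[0] + 1
    amps = [v[s:e].max() - threshold for s, e in zip(onsets, offsets)]
    return len(onsets), amps

# Synthetic trace: resting at -70 mV with two brief depolarizations to +40 mV
t = np.linspace(0.0, 100.0, 1001)
v = np.full_like(t, -70.0)
for center in (30.0, 60.0):
    v += 110.0 * np.exp(-((t - center) ** 2) / 2.0)
count, amps = spike_features(t, v)
```

With eFEL itself, the user would instead pass the trace plus stimulus window and request named features such as `Spikecount` or `AP_amplitude`.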

Data analysis and visualisation

eFELunit

This test takes as input a BluePyOpt-optimized output file. The validation test then evaluates the model, for all parameter sets, against various eFEL features. It should be noted that the reference data used is that stored within the model, so this test can be considered a quantification of the goodness of fit of the model. The results are registered in the HBP Validation Framework app.

Validation and inference

Elephant

The Python library Electrophysiology Analysis Toolkit (Elephant) provides tools for analysing neuronal activity data, such as spike trains, local field potentials and intracellular data. In addition to providing a platform for sharing analysis codes from different laboratories, Elephant provides a consistent and homogeneous framework for data analysis built on a modular foundation. The underlying data model is the Neo library. This framework easily captures a wide range of neuronal data types and methods, including dozens of file formats and network simulation tools. A common data description, as the Neo library provides, is essential for developing interoperable analysis workflows.
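Elephant's functions operate on Neo objects (e.g. `neo.SpikeTrain`), which are not reproduced here; as an illustration of two of the simplest statistics it provides, inter-spike intervals and their coefficient of variation can be sketched with plain NumPy (spike times are invented):

```python
import numpy as np

def isi(spike_times):
    """Inter-spike intervals from a sorted list of spike times."""
    return np.diff(np.sort(np.asarray(spike_times, float)))

def cv(intervals):
    """Coefficient of variation of the ISIs (≈1.0 for a Poisson process,
    smaller for regular firing)."""
    intervals = np.asarray(intervals, float)
    return intervals.std() / intervals.mean()

spikes = [0.1, 0.3, 0.6, 1.0, 1.5]        # seconds; a steadily slowing train
intervals = isi(spikes)                    # [0.2, 0.3, 0.4, 0.5]
rate = len(spikes) / 1.5                   # mean rate over the recording window
```

In Elephant the equivalent calls work on `neo.SpikeTrain` objects carrying units and `t_stop`, which is what makes the same code portable across file formats and simulators.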

Modelling and simulation, Data analysis and visualisation, Validation and inference

ExploreASL

ExploreASL is a pipeline and toolbox for image processing and statistics of arterial spin labeling perfusion MR images. It is designed as a multi-OS, open source, collaborative framework that facilitates cross-pollination between image processing method developers and clinical investigators. The software provides a complete head-to-tail approach that runs fully automatically, encompassing all necessary tasks from data import and structural segmentation, registration and normalization, up to CBF quantification. In addition, the software package includes quality control (QC) procedures as well as region-of-interest (ROI) and voxel-wise analyses of the extracted data. To date, ExploreASL has been used for processing ~10000 ASL datasets from all major MRI vendors and ASL sequences, and a variety of patient populations, representing ~30 studies. The ultimate goal of ExploreASL is to combine data from multiple studies to identify disease-related perfusion patterns that may prove crucial in using ASL as a diagnostic tool, and to enhance our understanding of the interplay of perfusion and structural changes in neurodegenerative pathophysiology. Additionally, this (semi-)automatic pipeline allows us to minimize manual intervention, which increases the reproducibility of studies.
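ExploreASL's exact quantification code is not shown here; the CBF quantification step it performs is conventionally based on the single-compartment PCASL model from the ASL consensus literature, which can be sketched as follows (parameter defaults are typical textbook values, and the input signals are invented):

```python
import math

def cbf_pcasl(delta_m, m0, pld=1.8, tau=1.8, t1b=1.65, alpha=0.85, lam=0.9):
    """Single-compartment PCASL quantification (consensus-style model).
    delta_m: label-control difference signal; m0: proton-density signal;
    pld: post-labeling delay [s]; tau: label duration [s]; t1b: blood T1 [s];
    alpha: labeling efficiency; lam: blood-brain partition coefficient.
    Returns CBF in mL/100g/min."""
    num = 6000.0 * lam * delta_m * math.exp(pld / t1b)
    den = 2.0 * alpha * t1b * m0 * (1.0 - math.exp(-tau / t1b))
    return num / den

# A perfusion-weighted signal of ~0.9% of M0 yields a gray-matter-like CBF
cbf = cbf_pcasl(delta_m=0.009, m0=1.0)
```

In the full pipeline this per-voxel formula is applied after registration and normalization, and the resulting CBF maps feed the ROI and voxel-wise statistics.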

Data analysis and visualisation

Extrae

Extrae is a dynamic instrumentation package to trace programs compiled and run with the shared memory model (like OpenMP and pthreads), the message passing (MPI) programming model, or both programming models (different MPI processes using OpenMP or pthreads within each MPI process). Instrumentation of CUDA, CUPTI, OpenCL, Python multiprocessing, Java threads and relevant system, dynamic memory and I/O calls is also supported. Extrae generates trace files that can be visualized with Paraver. Extrae is currently available on different architectures and operating systems, including: GNU/Linux (x86, x86_64, ARM, POWER), IBM AIX, SGI Altix, Open Solaris, FreeBSD, Android, Fujitsu FX10/100, Cray XT, IBM Blue Gene, Intel Xeon Phi, GPUs and FPGAs. The combined use of Extrae and Paraver offers enormous analysis potential, both qualitative and quantitative. With these tools the actual performance bottlenecks of parallel applications can be identified. The microscopic view of program behavior that the tools provide is very useful for optimizing parallel program performance.

Modelling and simulation

Extra-P

Extra-P is an automatic performance-modeling tool that supports the user in the identification of scalability bugs. A scalability bug is a part of the program whose scaling behavior is unintentionally poor, that is, much worse than expected. Extra-P uses measurements of various performance metrics at different processor configurations as input to represent the performance of code regions (including their calling context) as a function of the number of processes. All it takes to search for scalability issues even in full-blown codes is to run a manageable number of small-scale performance experiments, launch Extra-P, and compare the asymptotic or extrapolated performance of the worst instances to the expectations. Besides the number of processes, it is also possible to consider other parameters such as the input problem size. Extra-P generates not only a list of potential scalability bugs but also human-readable models for all available performance metrics, such as floating-point operations or bytes sent by MPI calls, that can be further analyzed and compared to identify the root causes of scalability issues.
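Extra-P's actual model search is more general than this; to illustrate the idea of fitting a human-readable scaling model to a few small-scale measurements and then extrapolating, a simplified sketch that tries candidate exponents for t(p) = c + a·p^b (the candidate set and synthetic timings are invented):

```python
import numpy as np

def fit_scaling(p, t, exponents=(0.5, 1.0, 1.5, 2.0)):
    """Fit t ≈ c + a * p**b by trying candidate exponents b and solving
    the remaining linear least-squares problem; returns (c, a, b)."""
    best = None
    for b in exponents:
        A = np.column_stack([np.ones(p.size), p.astype(float) ** b])
        coef, *_ = np.linalg.lstsq(A, t, rcond=None)
        err = ((A @ coef - t) ** 2).sum()
        if best is None or err < best[0]:
            best = (err, coef[0], coef[1], b)
    return best[1], best[2], best[3]

# Synthetic small-scale measurements following t = 2 + 0.01 * p**1.5
p = np.array([4, 8, 16, 32, 64])
t = 2.0 + 0.01 * p ** 1.5
c, a, b = fit_scaling(p, t)
```

Evaluating the fitted model at a much larger p and comparing it with the expected complexity is exactly the "compare extrapolated performance to expectations" step described above.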

Modelling and simulation

FAConstructor

The fiber architecture constructor (FAConstructor) allows simple and effective creation of fiber models based on mathematical functions or the manual input of data points. Models are visualized during creation and can be interacted with by translating them in 3-dimensional space.
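FAConstructor's interface is graphical; as a sketch of the two operations described above, sampling a fiber from a mathematical function and translating it in 3-D space can be expressed in a few lines of NumPy (the helix and the shift vector are illustrative):

```python
import numpy as np

def fiber_from_function(fn, t0, t1, n=100):
    """Sample a 3-D parametric curve fn(t) -> (x, y, z) into an (n, 3) array."""
    t = np.linspace(t0, t1, n)
    return np.stack([np.asarray(fn(ti), dtype=float) for ti in t])

# Hypothetical helical fiber, then translated along z as in an interactive editor
helix = fiber_from_function(lambda t: (np.cos(t), np.sin(t), 0.1 * t),
                            0.0, 4 * np.pi)
shifted = helix + np.array([0.0, 0.0, 5.0])   # rigid translation of the model
```

Manually entered data points would simply replace the sampled array; the translation step is the same either way.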

Modelling and simulation
