Tools

Cube

Cube, which is used as the performance report explorer for Scalasca and Score-P, is a generic tool for displaying a multi-dimensional performance space consisting of the dimensions (i) performance metric, (ii) call path, and (iii) system resource. Each dimension can be represented as a tree, whose non-leaf nodes can be collapsed or expanded to achieve the desired level of granularity. In addition, Cube can display multi-dimensional Cartesian process topologies. The Cube 4.x series report explorer and the associated Cube4 data format are provided for Cube files produced with the Score-P performance instrumentation and measurement infrastructure or the Scalasca 2.x trace analyzer (and other compatible tools). For backwards compatibility, however, Cube 4.x can also read and display Cube 3.x data.

Modelling and simulation

CxSystem2

CxSystem2 is a simulation framework for cortical networks that runs on personal computers. It is implemented in Python on top of the popular Brian2 simulator and runs on Linux, Windows and macOS. A web-based version is also available via the Human Brain Project Brain Simulation Platform (BSP). CxSystem2 embraces the main goal of Brian, minimizing development time, by providing the user with a simplified interface. While many simple models can be written in pure Brian code, more complex models can become hard to manage due to the large number of biological details. Two interfaces are currently provided for constructing networks: a browser-based interface (locally or via the BSP) and a file-based interface (JSON or CSV). Before incorporating neuron models into a network, the user can explore their behavior using the Neurodynlib submodule. Spike output and the 3D structure of network simulations can be visualized with ViSimpl, a visualization tool developed by the GMRV Lab.
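
The "pure Brian code" referred to above looks like the following minimal sketch of a leaky integrate-and-fire population in Brian2; the equation and parameter values are illustrative, and this is not CxSystem2's own interface:

    # Minimal Brian2 example of the kind of model CxSystem2 wraps behind
    # its browser- and file-based interfaces. Values are illustrative.
    from brian2 import NeuronGroup, SpikeMonitor, run, ms

    eqs = 'dv/dt = (1.1 - v) / (10*ms) : 1'   # dimensionless membrane variable
    group = NeuronGroup(10, eqs, threshold='v > 1', reset='v = 0',
                        method='exact')
    spikes = SpikeMonitor(group)
    run(100*ms)
    print(f'{spikes.num_spikes} spikes recorded')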

Modelling and simulation

Demics

Demics is a library for the Python programming language that adds support for distributed computation on very large, multi-dimensional arrays and matrices, including inference with deep learning models from the TensorFlow and PyTorch libraries. The software focuses on the segmentation and detection of objects in particularly large image data.
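
As a rough sketch of the tile-wise pattern such a library addresses (a hypothetical illustration, not Demics' actual API or its distribution across workers), an inference function can be applied to a large 2D array in chunks:

    # Hypothetical sketch of tile-wise processing of a very large image;
    # Demics' real API and worker distribution are not shown here.
    import numpy as np

    def process_tiled(image, infer, tile=512):
        """Apply `infer` (e.g. a segmentation model) tile by tile."""
        out = np.zeros_like(image, dtype=np.float32)
        for y in range(0, image.shape[0], tile):
            for x in range(0, image.shape[1], tile):
                patch = image[y:y + tile, x:x + tile]
                out[y:y + tile, x:x + tile] = infer(patch)
        return out

    # Usage with a stand-in "model" (a threshold instead of a real network):
    big_image = np.random.rand(2048, 2048).astype(np.float32)
    mask = process_tiled(big_image, lambda p: (p > 0.5).astype(np.float32))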

Modelling and simulation, Data

deNEST

deNEST is a Python library for specifying networks and running simulations using the NEST simulator. deNEST allows the user to concisely specify large-scale networks and simulations in hierarchically-organized declarative parameter files. From these parameter files, a network is instantiated in NEST (layers of neurons and stimulation devices, their connections, and recorder devices), and a simulation is run in sequential steps ("sessions"), during which the network parameters can be modified and the network can be stimulated, recorded, etc. Some advantages of the declarative approach:
- Parameters and code are separated.
- Simulations are easier to reason about, reuse, and modify.
- Parameters are more readable and succinct.
- Parameter files can be easily version controlled, and diffs are smaller and more interpretable.
- There is a clean separation between the specification of the "network" (the simulated neuronal system) and the "simulation" (structured stimulation and recording of the network), which facilitates running different experiments using the same network.
- Parameter exploration is more easily automated.
- The complexity of interacting with NEST is hidden, which makes some tricky operations (such as connecting a weight_recorder) easy.
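
To make the hierarchical idea concrete, here is a schematic parameter tree in the spirit of deNEST's declarative files; the keys and structure are illustrative, not deNEST's exact schema:

    # Schematic of hierarchical declarative parameters: children inherit and
    # override ancestor values. Keys are illustrative, not deNEST's schema.
    network_tree = {
        'params': {'neuron_model': 'iaf_psc_alpha', 'C_m': 250.0},
        'children': {
            'excitatory_layer': {'params': {'n_neurons': 800}},
            'inhibitory_layer': {'params': {'n_neurons': 200, 'C_m': 150.0}},
        },
    }

    def resolve(tree, inherited=None, path=''):
        """Flatten the tree into {path: params}, merging inherited values."""
        merged = {**(inherited or {}), **tree.get('params', {})}
        children = tree.get('children', {})
        if not children:
            return {path: merged}
        flat = {}
        for name, child in children.items():
            flat.update(resolve(child, merged, f'{path}/{name}'))
        return flat

    for layer, params in resolve(network_tree).items():
        print(layer, params)   # each layer sees its own plus inherited values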

Modelling and simulation

DF Analysis

Distance-Fluctuation (DF) Analysis is a Python tool that performs analysis of the results of an MD simulation using Distance Fluctuation (DF) matrices, based on the Coordination Propensity (CP) hypothesis. Specifically, low CP values, corresponding to low pair-distance fluctuations, highlight groups of residues that move in a mechanically coordinated way. The script can analyze an MD trajectory and identify the coordinated motions between residues. It can then filter the output matrix by distance to identify long-range coordinated motions.
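
The underlying quantity is straightforward: for each residue pair (i, j), the distance fluctuation is the variance of the pairwise distance over the trajectory, DF_ij = <d_ij^2> - <d_ij>^2. A minimal NumPy sketch (illustrative, not the tool's own code):

    # Distance-fluctuation matrix from trajectory coordinates.
    # coords: (n_frames, n_residues, 3), e.g. C-alpha positions per frame.
    # Illustrative sketch, not the DF Analysis tool's own implementation.
    import numpy as np

    def distance_fluctuation(coords):
        diff = coords[:, :, None, :] - coords[:, None, :, :]
        d = np.linalg.norm(diff, axis=-1)     # (n_frames, n_res, n_res)
        return d.var(axis=0)                  # <d^2> - <d>^2 for each pair

    rng = np.random.default_rng(0)
    traj = rng.normal(size=(100, 20, 3))      # toy stand-in for an MD trajectory
    df = distance_fluctuation(traj)

    iu = np.triu_indices_from(df, k=1)        # off-diagonal residue pairs
    threshold = np.percentile(df[iu], 5)      # most coordinated 5% of pairs
    coordinated = [(i, j) for i, j in zip(*iu) if df[i, j] < threshold]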

Modelling and simulation, Molecular and subcellular simulation

dopp

dopp is a library for the simulation of rate-based neuron models with conductance-based synapses in feedforward architectures. It supports synaptic plasticity. Both models with an instantaneous membrane response and dynamic models are available; the dynamic models are integrated with a Runge-Kutta 2/3 scheme to increase stability for larger timesteps. For theoretical details, see https://arxiv.org/abs/2104.13238.
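
A minimal sketch of the model class described: a rate-driven unit with a conductance-based excitatory synapse, integrated with a midpoint (second-order Runge-Kutta) step standing in for the library's Runge-Kutta 2/3 scheme. Names and values are illustrative, not dopp's API; see the linked paper for the actual formulation:

    # Rate-driven unit with a conductance-based excitatory synapse, integrated
    # with a midpoint (RK2) step. Illustrative only; not dopp's API
    # (see arXiv:2104.13238 for the actual formulation).
    E_L, E_E, g_L, tau = -70.0, 0.0, 0.1, 10.0   # toy leak/excitatory parameters

    def dudt(u, g_E):
        """Membrane dynamics: leak plus excitatory conductance."""
        return (g_L * (E_L - u) + g_E * (E_E - u)) / tau

    def rk2_step(u, g_E, dt):
        """Midpoint Runge-Kutta: stabler than Euler for larger timesteps."""
        k1 = dudt(u, g_E)
        k2 = dudt(u + 0.5 * dt * k1, g_E)
        return u + dt * k2

    u, dt = E_L, 0.5
    for step in range(200):
        g_E = 0.05 if step > 50 else 0.0         # presynaptic drive switches on
        u = rk2_step(u, g_E, dt)
    print(f'final membrane potential: {u:.2f} mV')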

Modelling and simulation

Elephant

The Python library Electrophysiology Analysis Toolkit (Elephant) provides tools for analysing neuronal activity data, such as spike trains, local field potentials and intracellular data. In addition to providing a platform for sharing analysis code from different laboratories, Elephant provides a consistent and homogeneous framework for data analysis built on a modular foundation. The underlying data model is the Neo library, which captures a wide range of neuronal data types and supports dozens of file formats and network simulation tools. A common data description, such as Neo provides, is essential for developing interoperable analysis workflows.
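
A typical minimal workflow generates a spike train as a neo object and computes statistics on it. Function names follow Elephant's documented spike_train_generation and statistics modules; newer releases favour the StationaryPoissonProcess class:

    # Generate a Poisson spike train (a neo.SpikeTrain) and analyse it.
    # Requires elephant, which pulls in neo and quantities.
    import quantities as pq
    from elephant.spike_train_generation import homogeneous_poisson_process
    from elephant.statistics import mean_firing_rate, isi, cv

    # Newer Elephant versions provide StationaryPoissonProcess instead.
    spiketrain = homogeneous_poisson_process(
        rate=10.0 * pq.Hz, t_start=0.0 * pq.s, t_stop=10.0 * pq.s)

    print('mean rate:', mean_firing_rate(spiketrain))   # ~10 Hz
    print('CV of ISIs:', cv(isi(spiketrain)))           # ~1 for a Poisson process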

Modelling and simulation, Data analysis and visualisation, Validation and inference

Extrae

Extrae is a dynamic instrumentation package for tracing programs compiled and run with the shared-memory programming model (such as OpenMP and pthreads), the message-passing (MPI) programming model, or both (different MPI processes using OpenMP or pthreads within each MPI process). Instrumentation of CUDA, CUPTI, OpenCL, Python multiprocessing, Java threads, and relevant system, dynamic memory and I/O calls is also supported. Extrae generates trace files that can be visualized with Paraver. Extrae is currently available on different architectures and operating systems, including GNU/Linux (x86, x86_64, ARM, POWER), IBM AIX, SGI Altix, OpenSolaris, FreeBSD, Android, Fujitsu FX10/100, Cray XT, IBM Blue Gene, Intel Xeon Phi, GPUs and FPGAs. The combined use of Extrae and Paraver offers enormous analysis potential, both qualitative and quantitative: with these tools, the actual performance bottlenecks of parallel applications can be identified. The microscopic view of program behavior that the tools provide is very useful for optimizing parallel program performance.

Modelling and simulation

Extra-P

Extra-P is an automatic performance-modeling tool that supports the user in the identification of scalability bugs. A scalability bug is a part of the program whose scaling behavior is unintentionally poor, that is, much worse than expected. Extra-P uses measurements of various performance metrics at different processor configurations as input to represent the performance of code regions (including their calling context) as a function of the number of processes. All it takes to search for scalability issues, even in full-blown codes, is to run a manageable number of small-scale performance experiments, launch Extra-P, and compare the asymptotic or extrapolated performance of the worst instances to the expectations. Besides the number of processes, other parameters such as the input problem size can also be considered. Extra-P generates not only a list of potential scalability bugs but also human-readable models for all available performance metrics, such as floating-point operations or bytes sent by MPI calls, which can be further analyzed and compared to identify the root causes of scalability issues.
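
Conceptually, the human-readable models are built from terms like p^i * log2(p)^j fitted to the measurements. The following simplified sketch illustrates that idea with a least-squares search over candidate exponents; it is not Extra-P's actual modelling algorithm:

    # Simplified illustration of fitting a human-readable performance model
    # t(p) ~ c0 + c1 * p^i * log2(p)^j to measurements. Not Extra-P's
    # actual algorithm.
    import itertools
    import numpy as np

    procs = np.array([2, 4, 8, 16, 32, 64], dtype=float)
    times = 3.0 + 0.02 * procs * np.log2(procs)      # synthetic measurements

    best = None
    for i, j in itertools.product([0, 0.5, 1, 1.5, 2], [0, 1, 2]):
        term = procs**i * np.log2(procs)**j
        A = np.column_stack([np.ones_like(procs), term])
        coef = np.linalg.lstsq(A, times, rcond=None)[0]
        err = np.sum((A @ coef - times)**2)
        if best is None or err < best[0]:
            best = (err, i, j, coef)

    _, i, j, (c0, c1) = best
    print(f'model: t(p) = {c0:.3f} + {c1:.4f} * p^{i} * log2(p)^{j}')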

Modelling and simulation

FAConstructor

The Fiber Architecture Constructor (FAConstructor) allows simple and effective creation of fiber models based on mathematical functions or the manual input of data points. Models are visualized during creation and can be interacted with by translating them in three-dimensional space.
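
As a generic illustration of defining a fiber from a mathematical function and translating it in 3D space (not FAConstructor's file format or API):

    # Generic sketch: sample a parametric 3D curve as a fiber centreline
    # and translate it in space. Not FAConstructor's API or file format.
    import numpy as np

    t = np.linspace(0.0, 4.0 * np.pi, 200)
    helix = np.column_stack([np.cos(t), np.sin(t), 0.1 * t])  # helical fiber
    translated = helix + np.array([5.0, 0.0, 2.0])            # move it in 3D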

Modelling and simulation

Factorisation-based Image Labelling

Rationale: The approach assumes that images segmented into GM, WM and background have been aligned, so it does not require the additional complexity of a convolutional approach. Segmented images are used to make the approach less dependent on particular image contrasts, so it generalises better to a wider variety of brain scans. The approach assumes that there are only a relatively small number of labelled images, but many unlabelled ones. It therefore uses a semi-supervised learning approach, with an underlying Bayesian generative model that has relatively few weights to learn.

Model: The approach is patch-based. For each patch, a set of basis functions models both the (categorical) image to label and the corresponding (categorical) label map. A common set of latent variables controls the two sets of basis functions, and the results are passed through a softmax so that the model encodes the means of a multinoulli distribution (Böhning, 1992; Khan et al, 2010). Continuity over patches is achieved by modelling the probability of the latent variables within each patch conditional on the values of the latent variables in the six adjacent patches, which is a type of conditional random field (Zhang et al, 2015; Brudfors et al, 2019). This model (with Wishart priors) gives the prior mean and covariance of a Gaussian prior over the latent variables of each patch. Patches are updated using an iterative red-black checkerboard scheme.

Labelling: After training, labelling a new image is relatively fast because optimising the latent variables can be formulated within a scheme similar to a recurrent ResNet (He et al, 2016).
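
A toy version of the per-patch generative step described under "Model": shared latent variables drive two sets of basis functions whose outputs pass through a softmax to give multinoulli means. Shapes and names are illustrative only:

    # Toy per-patch decoder: shared latents z drive two sets of basis
    # functions (image and label map); a softmax yields multinoulli means.
    # Shapes and names are illustrative, not the tool's implementation.
    import numpy as np

    def softmax(a, axis=-1):
        e = np.exp(a - a.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)

    n_vox, n_img_classes, n_lab_classes, n_latent = 64, 3, 5, 8
    rng = np.random.default_rng(0)
    W_img = rng.normal(size=(n_vox, n_img_classes, n_latent))  # image bases
    W_lab = rng.normal(size=(n_vox, n_lab_classes, n_latent))  # label bases

    z = rng.normal(size=n_latent)       # one patch's latent variables
    mu_img = softmax(W_img @ z)         # (n_vox, n_img_classes) class means
    mu_lab = softmax(W_lab @ z)         # (n_vox, n_lab_classes) label means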

Modelling and simulation

Feature Extraction Graphical User Interface

The Feature Extraction Graphical User Interface (GUI) is a web application that allows users to extract an ensemble of electrophysiological properties from voltage traces recorded upon electrical stimulation of neuronal cells. The main outcome of the application is the generation of two files, features.json and protocol.json, which can be used for subsequent model parameter optimizations.

Modelling and simulation

Ginkgo/MEDUSA

Ginkgo/MEDUSA (Microstructure Environment Designer Using Sphere Atoms) is an HPC-compatible simulation tool that allows all-in-one simulation of brain tissue microstructure and its diffusion MRI signal, relying on three simulation features:
- simulation of realistic geometries representing the cell membranes populating brain gray and white matter, to create virtual tissues using a generative approach called MEDUSA;
- simulation of the diffusion process of water molecules within tissues, using a Monte-Carlo approach;
- simulation of the attenuation of the diffusion MRI signal for any tuning of a diffusion-weighted MRI pulse sequence (Pulsed Gradient Spin Echo, Oscillating Gradient Spin Echo, Arbitrary Gradient Spin Echo, ...).
The Ginkgo/MEDUSA tool is dedicated to the development of computational models of brain tissue microstructure, in order to go beyond existing analytical models, which are known to be limited in their ability to accurately represent the complexity of brain cellular environments.
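
A stripped-down illustration of the second and third features: Monte-Carlo walkers diffusing in free water and the resulting Pulsed Gradient Spin Echo attenuation in the narrow-pulse limit, where E is approximately exp(-bD). This toy model deliberately omits the membrane geometries that MEDUSA adds:

    # Toy Monte-Carlo diffusion + PGSE attenuation in the narrow-pulse limit,
    # where E = |<exp(i * gamma * delta * g * dx)>| ~ exp(-b * D).
    # Free water only; Ginkgo/MEDUSA adds realistic membrane geometries.
    import numpy as np

    D = 2.0e-9                         # water diffusivity, m^2/s
    Delta, delta = 20e-3, 1e-3         # diffusion time / gradient duration, s
    gamma, g = 2.675e8, 0.3            # gyromagnetic ratio (rad/s/T), gradient (T/m)

    rng = np.random.default_rng(0)
    n_walkers, n_steps = 20_000, 200
    dt = Delta / n_steps
    # Brownian steps along the gradient axis: each ~ N(0, 2*D*dt).
    steps = rng.normal(scale=np.sqrt(2 * D * dt), size=(n_steps, n_walkers))
    dx = steps.sum(axis=0)             # net displacement during Delta

    phase = gamma * delta * g * dx
    E = np.abs(np.exp(1j * phase).mean())
    b = (gamma * g * delta)**2 * (Delta - delta / 3)
    print(f'simulated E = {E:.3f}, analytic exp(-bD) = {np.exp(-b * D):.3f}')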

Modelling and simulation

Hodgkin Huxley Neuron Builder

The Hodgkin-Huxley Neuron Builder implements a Use Case of the Brain Simulation Platform. It allows the user to interactively step through the entire cell model building pipeline. The workflow consists of three steps: 1) electrophysiological feature extraction from voltage traces; 2) model parameter optimization; 3) in silico experiments using the optimized model cell. The user is provided with a friendly interface for interacting with both the HBP Collaboratory storage and High Performance Computing (HPC) resources. The application has been built in a flexible way that allows the user to enter the workflow at any desired step, either interacting with HBP resources or uploading their own files.

Modelling and simulation

Human Brain Project HPC Status Monitor

The HPC Status Monitor allows users to check the status of the HPC systems available for job submission from the HBP Collaboratory, as well as the remaining quotas reserved to the user on each of them. In order to run a job on the HPC systems, an HBP user needs to be mapped onto, and be part of, at least one project on those systems. Users without access or an allocation on any system can still submit jobs (with limited quotas) through the available service accounts.

Modelling and simulation

Interactive Workflows for Cellular Level Modeling

Work through a number of pipelines for single-cell model optimization of different brain region cells, run in silico experiments on individual neurons, small circuits and entire brain regions, and perform ad hoc analysis of electrophysiological data, synaptic event fitting, and morphology analysis and visualization.

Modelling and simulation, Cellular level simulation
