Tools

Neo

Neo is a Python package for working with electrophysiology data, together with support for reading a wide range of neurophysiology file formats, including Spike2, NeuroExplorer, AlphaOmega, Axon, Blackrock, Plexon and Tdt, and support for writing to a subset of these formats plus non-proprietary formats including HDF5.

The goal of Neo is to improve interoperability between Python tools for analyzing, visualizing and generating electrophysiology data by providing a common, shared object model. In order to be as lightweight a dependency as possible, Neo is deliberately limited to the representation of data, with no functions for data analysis or visualization. Neo is used by a number of other software tools, including SpykeViewer (data analysis and visualization), Elephant (data analysis), the G-node suite (databasing), PyNN (simulations), tridesclous (spike sorting) and ephyviewer (data visualization). OpenElectrophy (data analysis and visualization) uses an older version of Neo.

Neo implements a hierarchical data model well adapted to intracellular and extracellular electrophysiology and EEG data, with support for multi-electrodes (for example tetrodes). Neo's data objects build on the quantities package, which in turn builds on NumPy by adding support for physical dimensions. Thus Neo objects behave just like normal NumPy arrays, but with additional metadata, checks for dimensional consistency and automatic unit conversion. A project with similar aims, but for neuroimaging file formats, is NiBabel.
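As a minimal sketch of what that object model looks like in practice (using Neo's AnalogSignal class with made-up data; the channel name is purely illustrative):

```python
import numpy as np
import quantities as pq
from neo import AnalogSignal

# An AnalogSignal is essentially a NumPy array carrying units
# and timing metadata alongside the raw samples.
signal = AnalogSignal(np.random.randn(1000), units='mV',
                      sampling_rate=1 * pq.kHz,
                      name='example channel')

print(signal.shape)            # (1000, 1): samples x channels
print(signal.sampling_period)  # 1.0 1/kHz
print(signal.t_stop)           # 1000 samples at 1 kHz -> 1.0 s

# Unit conversion is automatic; incompatible units raise an error.
in_microvolts = signal.rescale('uV')
```

Standard NumPy operations (slicing, arithmetic, reductions) work on such objects, with the units and metadata carried through.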

Data

OTF2

The Open Trace Format Version 2 (OTF2) is a highly scalable, memory-efficient event trace data format plus a support library. It is the standard trace format for Scalasca, Vampir, and Tau, and is open for other tools. OTF2 is the common successor format to the Open Trace Format (OTF) and the Epilog trace format. It preserves the essential features as well as most record types of both, and introduces new features such as support for multiple read/write substrates, in-place time stamp manipulation, and on-the-fly token translation. In particular, it avoids copying during the unification of parallel event streams.
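As a hedged sketch of the support library in use (assuming the Python bindings that ship with recent OTF2 releases; the archive path is hypothetical), reading back a trace looks roughly like this:

```python
import otf2

# Open an existing archive via its anchor file (hypothetical path)
# and iterate over the recorded events.
with otf2.reader.open('trace/traces.otf2') as trace:
    # trace.events merges the per-location event streams on the fly,
    # rather than copying them into a single unified buffer.
    for location, event in trace.events:
        print(type(event).__name__, 'at', event.time, 'on', location)
```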

Data

rgsl_odeiv2

An R package that solves a series of initial value problems in C via the GNU Scientific Library's ODE solvers. The C code calls functions from the gsl_odeiv2 module to solve the problem or problems. The goal is to offload as much work as possible to the C code and keep the overhead minimal. That is why this package expects to solve a set of problems, rather than a single one, for the same model file (varying in, e.g., initial conditions or parameters). The package contains three interface functions; they accept different ways of defining a set of problems, each with its own advantages and drawbacks. The interface functions are described in the following sections. The ODE has to exist as a shared library (.so) file, currently expected in the working directory (see ?setwd and ?getwd). We make some assumptions about the contents of the shared library file: here we assume that the solutions serve some scientific purpose and that the lab experiments come with observables, measurable values that depend on the system's state (but are not the full state vector). We call the part of the model that calculates the observables ${ModelName}_func() (vfgen also calls them the Functions of the model).

Data

SCAIView-NEURO

SCAIView-NEURO is a semantic search engine built specifically for translational neurodegeneration research. It supports literature mining in PubMed abstracts and PubMed Central full-text publications.

Data

SLURM plugin for the co-allocation of compute and data resources

Using the Job Submit Plugin API of the Slurm Workload Manager, this plugin is intended for use in a multi-tiered storage cluster. Considering two storage tiers, called low-performance storage (lps) and high-performance storage (hps), the plugin allows for the co-allocation of compute and data resources by passing each job's storage requirements individually.

Data

The Jupyter Notebook

Jupyter notebook is a language-agnostic HTML notebook application for Project Jupyter. In 2015, Jupyter notebook was released as part of The Big Split™ of the IPython codebase. IPython 3 was the last major monolithic release containing both language-agnostic code, such as the IPython notebook, and language-specific code, such as the IPython kernel for Python. As computing spans many languages, Project Jupyter will continue to develop the language-agnostic Jupyter notebook in this repo and, with the help of the community, develop language-specific kernels, which are found in their own discrete repos.

Data

Vishnu

DC Explorer, Pyramidal Explorer and Clint Explorer are the core of an application suite designed to help scientists explore their data. Vishnu is a communication framework that allows these applications to exchange information and cooperate in real time. It provides a single access point to the three applications and manages a database with the users’ data sets.

Data

ZetaStitcher

ZetaStitcher was designed to stitch the large volumetric datasets that are produced, for example, by light-sheet microscopy when imaging large samples (such as a whole mouse brain) at high resolution. The tool computes the optimal alignment of adjacent tiles by evaluating the cross-correlation of their overlapping areas at selected stack depths. This ensures high throughput, since a large dataset need not be processed in its entirety. The cross-correlation is computed efficiently by means of the FFT. The software is fully written in Python and exposes an Application Programming Interface (API) that can be used to perform queries on the stitched dataset for further processing.
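The alignment step itself is a standard technique; the following is a minimal plain-NumPy illustration of FFT-based cross-correlation between two overlapping tiles (not ZetaStitcher's actual API; the function and its arguments are hypothetical):

```python
import numpy as np

def fft_xcorr_shift(tile_a, tile_b):
    """Estimate the integer (dy, dx) displacement between two
    overlapping 2-D tiles via FFT-based cross-correlation."""
    # Correlation theorem: corr(a, b) = IFFT(FFT(a) * conj(FFT(b)))
    spectrum = np.fft.fft2(tile_a) * np.conj(np.fft.fft2(tile_b))
    corr = np.fft.ifft2(spectrum).real
    # The peak of the correlation surface marks the displacement.
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Indices past the midpoint correspond to negative shifts.
    if dy > tile_a.shape[0] // 2:
        dy -= tile_a.shape[0]
    if dx > tile_a.shape[1] // 2:
        dx -= tile_a.shape[1]
    return dy, dx
```

ZetaStitcher restricts this computation to the overlapping regions of adjacent tiles, at a few selected stack depths, which is what keeps the throughput high on very large volumes.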

Data
