Tools

Surf Ice

Surf Ice is a tool for surface rendering of the cortex with overlays to illustrate tractography, network connections, anatomical atlases and statistical maps. While there are many alternatives, Surf Ice is easy to use and uses advanced shaders to generate stunning images. It supports many popular mesh formats [3ds, ac3d, BrainVoyager (srf), ctm, Collada (dae), dfs, dxf, FreeSurfer (Asc, Srf, Curv, gcs, Pial, W), GIfTI (gii), gts, lwo, ms3d, mz3, nv, obj, off, ply, stl, vtk], connectome formats (edge/node) and tractography formats [bfloat, pdb, tck, trk, vtk]. Surf Ice uses three stages to draw your image. The first two stages are computed in 3D and each creates both an image and a depth buffer. The first stage draws all the items, while the second stage omits the background anatomical image. The final stage uses the 2D outputs of the prior stages: the depth map from the first stage is used to estimate screen-space ambient occlusion (SSAO), and the difference between the depth maps from the two stages allows the software to infer how far the overlays lie behind the background. The SSAO and depth images are composited with the images from the first two stages to generate the final image.
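The compositing idea can be sketched in a few lines of NumPy. This is an illustration of screen-space occlusion and depth-based compositing in general, not Surf Ice's actual shader code; the array names, the box-sampled occlusion estimate, and the blending weights are assumptions made for the example.

    import numpy as np

    def ssao_estimate(depth, radius=4, strength=1.0):
        """Crude screen-space ambient occlusion: darken pixels whose
        sampled neighbours are closer to the camera (smaller depth)."""
        occlusion = np.zeros_like(depth)
        samples = 0
        for dy in range(-radius, radius + 1, radius):
            for dx in range(-radius, radius + 1, radius):
                if dx == 0 and dy == 0:
                    continue
                neighbour = np.roll(np.roll(depth, dy, axis=0), dx, axis=1)
                # Positive where the neighbour occludes the current pixel.
                occlusion += np.clip(depth - neighbour, 0.0, 1.0)
                samples += 1
        return np.clip(1.0 - strength * occlusion / samples, 0.0, 1.0)

    def composite(full_pass, overlay_pass, full_depth, overlay_depth):
        """Darken the full render by the SSAO term and blend in overlays
        that lie behind the background (larger depth than the full pass)."""
        ao = ssao_estimate(full_depth)[..., None]                 # stage 1 depth
        behind = np.clip(overlay_depth - full_depth, 0.0, 1.0)[..., None]
        shaded = full_pass * ao
        return shaded * (1.0 - 0.5 * behind) + overlay_pass * 0.5 * behind

    # Toy 2x2 RGB buffers and depth maps standing in for the render passes.
    rgb = np.ones((2, 2, 3)) * 0.8
    depth_full = np.array([[0.2, 0.3], [0.4, 0.9]])
    depth_overlay = np.array([[0.2, 0.3], [0.4, 0.5]])
    print(composite(rgb, rgb, depth_full, depth_overlay).shape)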

Data analysis and visualisation

Tide

Tide (Tiled Interactive Display Environment) is a distributed application that can run on multiple machines to power display walls or projection systems of any size. Its user interface is designed to offer an intuitive experience on touch walls, and it works just as well on installations without touch capability through its web interface, which can be used from any web browser. Tide helps users with: presenting and collaborating on a variety of media such as high-resolution images, movies and PDFs; sharing multiple desktop or laptop screens using the DesktopStreamer application; sketching new ideas by drawing on a whiteboard and browsing websites; interacting with content streamed from remote sources such as high-performance visualisation machines through the Deflect protocol (in particular, all Equalizer-based applications as well as the Brayns ray-tracing engine have built-in support); and viewing high-resolution, immersive stereo 3D streams on compatible hardware.

Data analysis and visualisation

Vaa3D

Vaa3D is a handy, fast, and versatile 3D/4D/5D Image Visualization and Analysis System for Bioimages and Surface Objects. It also provides many unique functions that you may not find in other software. It is open source, and supports a very simple yet powerful plugin interface, so it can be extended and enhanced easily. Vaa3D is cross-platform (Mac, Linux, and Windows). The software suite is powerful for visualizing large- or massive-scale (giga-voxel and even tera-voxel) 3D image stacks and various surface data. Vaa3D is also a container of powerful modules for 3D image analysis (cell segmentation, neuron tracing, brain registration, annotation, quantitative measurement and statistics, etc.) and data management. This makes Vaa3D suitable for various bioimage informatics applications, and a nice platform on which to develop new 3D image analysis algorithms for high-throughput processing. In short, Vaa3D streamlines the workflow of visualization-assisted analysis. Vaa3D can render 5D (spatio-temporal) data directly in 3D volume-rendering mode; it supports convenient and interactive local and global 3D views at different scales; and it comes with a number of plugins and toolboxes. Importantly, you can write your own plugins to take advantage of the Vaa3D platform, possibly within minutes.
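Besides the interactive GUI and plugin interface, Vaa3D can be driven in batch mode, which is handy for high-throughput processing. The sketch below wraps such an invocation from Python; the flag meanings (-x plugin, -f plugin function, -i input, -o output) are assumptions written from memory of Vaa3D's batch mode, and the plugin, function and file names are placeholders, so check them against the Vaa3D documentation before use.

    import subprocess

    # Hypothetical batch invocation of a Vaa3D plugin; plugin name,
    # function name and file paths are placeholders, and the flag
    # meanings are assumptions to be verified against the Vaa3D docs.
    cmd = [
        "vaa3d",
        "-x", "example_plugin",      # placeholder plugin name
        "-f", "example_function",    # placeholder plugin function
        "-i", "input_stack.v3draw",  # placeholder input volume
        "-o", "result.v3draw",       # placeholder output file
    ]
    subprocess.run(cmd, check=True)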

Data analysis and visualisation

VIOLA

VIOLA (Visualization Of Layer Activity) is a lightweight, open-source, web-based, and platform-independent application that combines and adapts modern interactive visualization paradigms, such as coordinated multiple views, for massively parallel neurophysiological data. It gives insight into spatially resolved time series such as simulation results of neural networks with 2D geometry. With the multiple coordinated views, an explorative and qualitative assessment of the spatiotemporal features of neuronal activity can be performed ahead of a detailed quantitative analysis of specific aspects of the data.
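The kind of input VIOLA targets can be mimicked with a small NumPy sketch: spike times from neurons placed on a 2D sheet are binned into a (time, x, y) array, i.e. a spatially resolved time series of the sort the coordinated views display. The network size, sheet dimensions and bin widths below are arbitrary assumptions for illustration, not VIOLA's own data format.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy network: 1000 neurons scattered on a 4x4 mm sheet, Poisson spikes.
    n_neurons, t_sim = 1000, 1.0                              # 1 s of activity
    positions = rng.uniform(0.0, 4.0, size=(n_neurons, 2))    # mm
    spike_counts = rng.poisson(8, size=n_neurons)
    spike_times = [np.sort(rng.uniform(0.0, t_sim, c)) for c in spike_counts]

    # Bin activity into a (time, x, y) array: a spatially resolved time series.
    t_bins = np.linspace(0.0, t_sim, 101)                     # 10 ms bins
    xy_bins = np.linspace(0.0, 4.0, 21)                       # 0.2 mm bins
    activity = np.zeros((len(t_bins) - 1, len(xy_bins) - 1, len(xy_bins) - 1))

    for (x, y), times in zip(positions, spike_times):
        ix = np.searchsorted(xy_bins, x) - 1
        iy = np.searchsorted(xy_bins, y) - 1
        counts, _ = np.histogram(times, bins=t_bins)
        activity[:, ix, iy] += counts

    print(activity.shape)   # (100, 20, 20): frames of a 2D activity movie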

Data analysis and visualisation

ViSimpl

ViSimpl consists of two components: SimPart and StackViz. SimPart is a three-dimensional visualizer for spatio-temporal data that allows spatio-temporal analysis of the simulation data using particle-based rendering. StackViz illustrates how the electrophysiological variables evolve over time and provides a temporal representation of the data at different aggregation levels. Together they allow users to visually discriminate the activity of different groups of neurons and provide detailed information about individual neurons of interest. The components share synchronized playback control of the simulation being analyzed and work together as linked views, although they are loosely coupled and can also be used independently. They are ready to be used with BlueConfig datasets, among other file formats such as specific HDF5 and CSV files. ViSimpl can be coupled with NeuroScheme to add functionality such as navigating through the underlying structure of the data using symbolic representations and different levels of abstraction.
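The idea of viewing the same activity at several aggregation levels, as StackViz does, can be illustrated with a short NumPy sketch: identical per-neuron traces are kept raw, summed per group, and summed over the whole population. The group sizes and signal shapes are invented for the example; this is not ViSimpl code.

    import numpy as np

    rng = np.random.default_rng(1)

    # Per-neuron activity traces: 90 neurons, 500 time steps (toy data).
    traces = rng.poisson(0.2, size=(90, 500)).astype(float)

    # Three groups of 30 neurons each (e.g. different cell populations).
    groups = {"A": slice(0, 30), "B": slice(30, 60), "C": slice(60, 90)}

    # Aggregation level 1: individual neurons.
    individual = traces

    # Aggregation level 2: one stacked trace per group.
    per_group = {name: traces[idx].sum(axis=0) for name, idx in groups.items()}

    # Aggregation level 3: whole-population activity over time.
    population = traces.sum(axis=0)

    print(individual.shape, {k: v.shape for k, v in per_group.items()},
          population.shape)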

Modelling and simulation, Cellular level simulation, Data analysis and visualisation

Viziphant

Viziphant is a Python module for easy visualization of Neo objects and Elephant results. It provides a high-level API to easily generate plots and interactive visualizations of neuroscientific data and analysis results. This API follows and extends the structure used in Elephant to ensure intuitive usage for scientists who are used to Elephant.
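A minimal usage sketch, assuming the viziphant.rasterplot module and standard Neo spike trains: the spike times are random placeholders, and the exact function signature and plotting options should be checked against the Viziphant documentation.

    import numpy as np
    import quantities as pq
    import neo
    import matplotlib.pyplot as plt
    from viziphant.rasterplot import rasterplot

    # Build a few Neo spike trains with random spike times (placeholder data).
    rng = np.random.default_rng(0)
    spiketrains = [
        neo.SpikeTrain(np.sort(rng.uniform(0, 10, 20)) * pq.s, t_stop=10 * pq.s)
        for _ in range(10)
    ]

    # High-level plotting call on the list of spike trains; see the Viziphant
    # docs for grouping, labelling and styling options.
    rasterplot(spiketrains)
    plt.show()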

Data analysis and visualisation

VTK

VTK is an open-source software system for image processing, 3D graphics, volume rendering and visualization. VTK includes many advanced algorithms (e.g., surface reconstruction, implicit modelling, decimation) and rendering techniques (e.g., hardware-accelerated volume rendering, level-of-detail control). VTK is used by academics for teaching and research; by government research institutions such as Los Alamos National Lab in the US or CINECA in Italy; and by many commercial firms who use VTK to build or extend products. VTK originated with the textbook "The Visualization Toolkit, an Object-Oriented Approach to 3D Graphics", originally published by Prentice Hall and now published by Kitware, Inc. (Third Edition, ISBN 1-930934-07-6). Since its initial release in 1994, VTK has grown to a world-wide user base in the commercial, academic, and research communities.
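As an example of the kind of pipeline VTK exposes, the short Python sketch below builds a sphere mesh, decimates it, and renders the result using VTK's standard Python bindings; the specific source object and parameter values are just illustrative choices.

    import vtk

    # Source: a finely tessellated sphere standing in for a real surface mesh.
    sphere = vtk.vtkSphereSource()
    sphere.SetThetaResolution(64)
    sphere.SetPhiResolution(64)

    # Decimation filter: reduce the triangle count by roughly half.
    decimate = vtk.vtkDecimatePro()
    decimate.SetInputConnection(sphere.GetOutputPort())
    decimate.SetTargetReduction(0.5)
    decimate.PreserveTopologyOn()

    # Standard mapper/actor/renderer chain to display the decimated mesh.
    mapper = vtk.vtkPolyDataMapper()
    mapper.SetInputConnection(decimate.GetOutputPort())
    actor = vtk.vtkActor()
    actor.SetMapper(mapper)

    renderer = vtk.vtkRenderer()
    renderer.AddActor(actor)
    window = vtk.vtkRenderWindow()
    window.AddRenderer(renderer)
    interactor = vtk.vtkRenderWindowInteractor()
    interactor.SetRenderWindow(window)

    window.Render()
    interactor.Start()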

Data analysis and visualisation

VTK-m

One of the biggest recent changes in high-performance computing is the increasing use of accelerators. Accelerators contain processing cores that are individually weaker than a typical CPU core, but these cores are replicated and grouped such that their aggregate execution provides a very high computation rate at much lower power. Current and future CPU processors also require much more explicit parallelism: each successive version of the hardware packs more cores into each processor, and technologies like hyperthreading and vector operations require even more parallel processing to leverage each core's full potential. VTK-m is a toolkit of scientific visualization algorithms for emerging processor architectures. VTK-m supports the fine-grained concurrency that data analysis and visualization algorithms need in order to drive extreme-scale computing, by providing abstract models for data and execution that can be applied to a variety of algorithms across many different processor architectures.
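The execution model can be illustrated, by analogy only, with a fine-grained data-parallel map in NumPy: each output element depends on exactly one input element, so the same operation could be scheduled across CPU threads, vector lanes, or accelerator cores. This is a conceptual Python sketch, not the VTK-m C++ worklet API, and the field sizes are arbitrary.

    import numpy as np

    # One million 3D vectors standing in for a field defined on a mesh.
    rng = np.random.default_rng(0)
    vectors = rng.standard_normal((1_000_000, 3))

    # Fine-grained data-parallel map: each magnitude depends only on its
    # own input vector, so every element could be computed independently
    # by a thread, vector lane, or accelerator core.
    magnitudes = np.sqrt((vectors ** 2).sum(axis=1))

    # A follow-up reduction, another common pattern in visualization pipelines.
    print(magnitudes.max())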

Data analysis and visualisation

Woken

Woken provides a web service and an API to execute on-demand data analytics and machine learning algorithms encapsulated in Docker containers. Algorithms and their runtimes are fetched from the Algorithm Repository, a Docker registry containing approved and compatible algorithms and their runtimes. Woken provides the algorithms with data loaded from a database, monitors the execution of the algorithm on one machine of a cluster, then collects the result formatted as a PFA (Portable Format for Analytics) document and returns a response to the client. Woken tracks provenance information and runs cross-validation of the models produced by the machine learning algorithms.
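Interaction with such a service would look roughly like the Python sketch below. The host, port, endpoint path and payload fields are hypothetical placeholders, not Woken's documented API; the real request format and algorithm names must be taken from the Woken documentation and the Algorithm Repository.

    import requests

    # Hypothetical request to a Woken-like web service: the endpoint path
    # and payload fields are placeholders, not the documented Woken API.
    payload = {
        "algorithm": "linear-regression",   # placeholder algorithm name
        "variables": ["target_score"],      # placeholder dependent variable
        "covariables": ["age", "sex"],      # placeholder predictors
        "dataset": "example_dataset",       # placeholder dataset identifier
    }

    response = requests.post("http://localhost:8087/mining/job", json=payload)
    response.raise_for_status()

    # The result would come back as a PFA (Portable Format for Analytics)
    # document describing the fitted model.
    print(response.json())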

Data analysis and visualisation
