Neo IO

Preamble

The Neo io module aims to provide an exhaustive way of loading and saving several widely used data formats in electrophysiology. The more these heterogeneous formats are supported, the easier it becomes to manipulate them as Neo objects in a similar way. Therefore the IO set of classes proposes a simple and flexible IO API that fits many format specifications. It is not only file-oriented: it can also read/write objects from a database.

neo.io can be seen as a pure-Python and open-source Neuroshare replacement.

At the moment, there are 3 families of IO modules:
  1. for reading closed manufacturers’ formats (Spike2, Plexon, AlphaOmega, BlackRock, Axon, ...)
  2. for reading(/writing) formats from open source tools (KlustaKwik, Elan, WinEdr, WinWcp, PyNN, ...)
  3. for reading/writing in neutral formats (HDF5, .mat, ...) but with the Neo structure inside (NeoHDF5, NeoMatlab, ...)

A typical use case combines family 1 for reading and family 3 for writing: converting your datasets to a more standard format when you want to share or collaborate.
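
For example, a minimal conversion sketch (the filenames are hypothetical) using the Plexon reader and the Neo MATLAB writer described below:

>>> from neo.io import PlexonIO, NeoMatlabIO
>>> reader = PlexonIO(filename='mydata.plx')
>>> blocks = reader.read()                   # always returns a list of Block objects
>>> writer = NeoMatlabIO(filename='mydata.mat')
>>> writer.write_block(blocks[0])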

Introduction

There is an intrinsic structure in the different Neo objects, which can be seen as a hierarchy with cross-links. See Neo core. The highest-level object is the Block object, the top-level container able to encapsulate all the others.

A Block therefore has a list of Segment objects, which can, in some file formats, be accessed individually. Depending on the file format, i.e. whether or not it is streamable, the whole Block may need to be loaded, but sometimes particular Segment objects can be accessed individually. Within a Segment, the same hierarchical organisation applies. A Segment embeds several objects, such as SpikeTrain, AnalogSignal, AnalogSignalArray, EpochArray and EventArray (basically, all the different Neo objects).

Depending on the file format, these objects can sometimes be loaded separately, without the need to load the whole file. Where possible, an IO therefore provides distinct methods that allow loading only the particular objects that may be present in the file. The basic idea of each IO file format is to have, as much as possible, read/write methods for the individual encapsulated objects, and otherwise to provide a read/write method that returns the object at the highest level of the hierarchy (by default, a Block or a Segment).

The neo.io API is a balance between full flexibility for the user (all read_XXX() methods are enabled) and simple, clean and understandable code for the developer (few read_XXX() methods are enabled). This means that not all IOs offer the full flexibility for partial reading of data files.

One format = one class

The basic syntax is as follows. If you want to load a file format that is implemented in a generic MyFormatIO class:

>>> from neo.io import MyFormatIO
>>> reader = MyFormatIO(filename="myfile.dat")

You can replace MyFormatIO by any implemented class; see the List of implemented formats below.

Modes

An IO can be based on a single file, a directory, a database or fake data. This is described by the mode attribute of the IO class.

>>> from neo.io import MyFormatIO
>>> print MyFormatIO.mode
'file'

For file mode the filename keyword argument is necessary. For directory mode the dirname keyword argument is necessary.

Example:
>>> from neo import io
>>> reader = io.PlexonIO(filename='File_plexon_1.plx')
>>> reader = io.TdtIO(dirname='aep_05')

Supported objects/readable objects

To know what types of object are supported by a given IO interface:

>>> MyFormatIO.supported_objects
[Segment, AnalogSignal, SpikeTrain, Event, Spike]

Supported objects does not mean objects that you can read directly. For instance, many formats support AnalogSignal but do not allow it to be loaded directly; rather, to access the AnalogSignal objects, you must read a Segment:

>>> seg = reader.read_segment()
>>> print(seg.analogsignals)
>>> print(seg.analogsignals[0])

To get a list of directly readable objects:

>>> MyFormatIO.readable_objects
[Segment]

The first element of the previous list is the highest level for reading the file. This means that the IO has a read_segment() method:

>>> seg = reader.read_segment()
>>> type(seg)
neo.core.Segment

All IOs have a read() method that returns a list of Block objects (representing the whole content of the file):

>>> bl = reader.read()
>>> type(bl[0].segments[0])
neo.core.Segment

Lazy and cascade options

In some cases you may not want to load everything in memory because it could be too big. For this scenario, two options are available:

  • lazy=True/False. With lazy=True all arrays will have a size of zero, but all the metadata will be loaded. A lazy_shape attribute is added to all objects that inherit from Quantity or numpy.ndarray (AnalogSignal, AnalogSignalArray, SpikeTrain) and to objects that have array-like attributes (EpochArray, EventArray). In these cases, lazy_shape is a tuple giving the shape the data would have with lazy=False.
  • cascade=True/False. With cascade=False only one object is read (one_to_many and many_to_many relationships are not followed).

By default (if they are not specified), lazy=False and cascade=True, i.e. all data is loaded.

Example cascade:

>>> seg = reader.read_segment(cascade=True)
>>> print(len(seg.analogsignals))  # this is N
>>> seg = reader.read_segment(cascade=False)
>>> print(len(seg.analogsignals))  # this is zero

Example lazy:

>>> seg = reader.read_segment(lazy=False)
>>> print(seg.analogsignals[0].shape)  # this is N
>>> seg = reader.read_segment(lazy=True)
>>> print(seg.analogsignals[0].shape)  # this is zero, the AnalogSignal is empty
>>> print(seg.analogsignals[0].lazy_shape)  # this is N

Some IOs support advanced forms of lazy loading, cascading or both (these features are currently limited to the HDF5 IO, which supports both forms).

  • For lazy loading, these IOs have a load_lazy_object() method that takes a single parameter: a data object previously loaded by the same IO in lazy mode. It returns the fully loaded object, without links to container objects (Segment etc.). Continuing the lazy example above:

    >>> lazy_sig = seg.analogsignals[0]  # Empty signal
    >>> full_sig = reader.load_lazy_object(lazy_sig)
    >>> print(lazy_sig.lazy_shape, full_sig.shape)  # Identical
    >>> print(lazy_sig.segment)  # Has the link to the object "seg"
    >>> print(full_sig.segment)  # Does not have the link: None
    
  • For lazy cascading, IOs have a load_lazy_cascade() method. This method is not called directly when interacting with the IO, but its presence can be used to check if an IO supports lazy cascading. To use lazy cascading, the cascade parameter is set to 'lazy':

    >>> block = reader.read(cascade='lazy')
    

    You do not have to do anything else; lazy cascading is now active for the object you just loaded. You can interact with the object in the same way as if it had been loaded with cascade=True. However, child objects are only loaded at the moment they are actually accessed:

    >>> print(block.recordingchannelgroups[0].name)  # The first RecordingChannelGroup is loaded
    >>> print(block.segments[0].analogsignals[1])  # The first Segment and its second AnalogSignal are loaded
    

    Once an object has been loaded with lazy cascading, it stays in memory:

    >>> print(block.segments[0].analogsignals[0])  # The first Segment is already in memory, its first AnalogSignal is loaded
    

Details of API

The neo.io API is designed to be simple and intuitive:
  • each file format has an IO class (for example for Spike2 files you have a Spike2IO class).
  • each IO class inherits from the BaseIO class.
  • each IO class can read or write directly one or several Neo objects (for example Segment, Block, ...): see the readable_objects and writable_objects attributes of the IO class (illustrated in the sketch after this list).
  • each IO class supports part of the neo.core hierarchy, though not necessarily all of it (see supported_objects).
  • each IO class has a read() method that returns a list of Block objects. If the IO only supports Segment reading, the list will contain one block with all segments from the file.
  • each IO class that supports writing has a write() method that takes as a parameter a list of blocks, a single block or a single segment, depending on the IO’s writable_objects.
  • each IO is able to do a lazy load: all metadata (e.g. sampling_rate) are read, but not the actual numerical data; a lazy_shape attribute is added to provide information on the real size.
  • each IO is able to do a cascade load: if True (default) all child objects are loaded, otherwise only the top level object is loaded.
  • each IO is able to save and load all required attributes (metadata) of the objects it supports.
  • each IO can freely add user-defined or manufacturer-defined metadata to the annotations attribute of an object.
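
The following sketch illustrates these conventions with the hypothetical MyFormatIO class used earlier (any class from the list below can be substituted):

>>> from neo.io import MyFormatIO
>>> print(MyFormatIO.supported_objects)   # everything the IO understands
>>> print(MyFormatIO.readable_objects)    # what can be read directly
>>> print(MyFormatIO.writable_objects)    # what can be written directly
>>> reader = MyFormatIO(filename="myfile.dat")
>>> blocks = reader.read(lazy=False, cascade=True)   # always a list of Block objects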

List of implemented formats

neo.io provides classes for reading and/or writing electrophysiological data files.

Note that if the package dependency is not satisfied for one IO, it does not raise an error but only a warning.

neo.io.iolist provides a list of successfully imported IO classes.
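
For example, to check which IO classes could actually be imported on your system:

>>> import neo.io
>>> print(neo.io.iolist)   # only the IO classes whose dependencies were satisfied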

Classes:

class neo.io.AlphaOmegaIO(filename=None)

Class for reading data from Alpha Omega .map files (experimental)

This class is an experimental reader with important limitations. See the source code for details of these limitations. The code of this reader is of alpha quality and has received very limited testing.

Usage:
>>> from neo import io
>>> r = io.AlphaOmegaIO(filename='File_AlphaOmega_1.map')
>>> blck = r.read_block(lazy=False, cascade=True)
>>> print blck.segments[0].analogsignals
class neo.io.AsciiSignalIO(filename=None)

Class for reading signals in a generic ASCII format. Each column represents one signal, and all signals share the same sampling rate. The sampling rate must be known externally, or the first column may hold the time vector.

Usage:
>>> from neo import io
>>> r = io.AsciiSignalIO(filename='File_asciisignal_2.txt')
>>> seg = r.read_segment(lazy=False, cascade=True)
>>> print seg.analogsignals
[<AnalogSignal(array([ 39.0625    ,   0.        ,   0.        , ..., -26.85546875 ...
class neo.io.AsciiSpikeTrainIO(filename=None)

Class for reading/writing SpikeTrain objects in a text file. Each SpikeTrain is one line.

Usage:
>>> from neo import io
>>> r = io.AsciiSpikeTrainIO(filename='File_ascii_spiketrain_1.txt')
>>> seg = r.read_segment(lazy=False, cascade=True)
>>> print seg.spiketrains
[<SpikeTrain(array([ 3.89981604,  4.73258781,  0.608428  ,  4.60246277,  1.23805797,
...
class neo.io.AxonIO(filename=None)

Class for reading .abf (Axon Binary Format) files.

Usage:
>>> from neo import io
>>> r = io.AxonIO(filename='File_axon_1.abf')
>>> bl = r.read_block(lazy=False, cascade=True)
>>> print bl.segments
[<neo.core.segment.Segment object at 0x105516fd0>]
>>> print bl.segments[0].analogsignals
[<AnalogSignal(array([ 2.18811035,  2.19726562,  2.21252441, ...,  1.33056641,
        1.3458252 ,  1.3671875 ], dtype=float32) * pA, [0.0 s, 191.2832 s], sampling rate: 10000.0 Hz)>]
>>> print bl.segments[0].eventarrays
[]
class neo.io.BlackrockIO(filename, full_range=array(8192.0) * mV)

Class for reading/writing data in BlackRock Neuroshare ns5 files.
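
A minimal usage sketch (the filename is hypothetical), relying on the generic read() method that every IO provides:
>>> from neo import io
>>> r = io.BlackrockIO(filename='myfile.ns5')
>>> blocks = r.read()
>>> print(blocks[0].segments)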

class neo.io.BrainVisionIO(filename=None)

Class for reading/writing data from BrainVision products (BrainAmp, Brain Analyser, ...).

Usage:
>>> from neo import io
>>> r = io.BrainVisionIO(filename='File_brainvision_1.eeg')
>>> seg = r.read_segment(lazy=False, cascade=True)
class neo.io.BrainwareDamIO(filename=None)

Class for reading Brainware raw data files with the extension ‘.dam’.

The read_block method returns the first Block of the file. It will automatically close the file after reading. The read method is the same as read_block.

Note:

The file format does not contain a sampling rate. The sampling rate is set to 1 Hz, but this is arbitrary. If you have a corresponding .src or .f32 file, you can get the sampling rate from that. It may also be possible to infer it from the attributes, such as “sweep length”, if present.
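
If the true sampling rate is known from elsewhere, one possible fix (a sketch; the rate used here is purely illustrative) is to overwrite the placeholder 1 Hz rate after reading:

>>> import quantities as pq
>>> from neo.io.brainwaredamio import BrainwareDamIO
>>> damfile = BrainwareDamIO(filename='multi_500ms_mulitrep_ch1.dam')
>>> blk = damfile.read_block()
>>> for seg in blk.segments:
...     for sig in seg.analogsignals:
...         sig.sampling_rate = 24414.0625 * pq.Hz   # hypothetical true rate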

Usage:
>>> from neo.io.brainwaredamio import BrainwareDamIO
>>> damfile = BrainwareDamIO(filename='multi_500ms_mulitrep_ch1.dam')
>>> blk1 = damfile.read()
>>> blk2 = damfile.read_block()
>>> print blk1.segments
>>> print blk1.segments[0].analogsignals
>>> print blk1.units
>>> print blk1.units[0].name
>>> print blk2
>>> print blk2[0].segments
class neo.io.BrainwareF32IO(filename=None)

Class for reading Brainware Spike ReCord files with the extension ‘.f32’

The read_block method returns the first Block of the file. It will automatically close the file after reading. The read method is the same as read_block.

The read_all_blocks method automatically reads all Blocks. It will automatically close the file after reading.

The read_next_block method will return one Block each time it is called. It will automatically close the file and reset to the first Block after reading the last block. Call the close method to close the file and reset this method back to the first Block.

The isopen property tells whether the file is currently open and reading or closed.

Note 1:
There is always only one RecordingChannelGroup. BrainWare stores the equivalent of RecordingChannelGroups in separate files.
Usage:
>>> from neo.io.brainwaref32io import BrainwareF32IO
>>> f32file = BrainwareF32IO(filename='multi_500ms_mulitrep_ch1.f32')
>>> blk1 = f32file.read()
>>> blk2 = f32file.read_block()
>>> print blk1.segments
>>> print blk1.segments[0].spiketrains
>>> print blk1.units
>>> print blk1.units[0].name
>>> print blk2
>>> print blk2[0].segments
class neo.io.BrainwareSrcIO(filename=None)

Class for reading Brainware Spike ReCord files with the extension ‘.src’

The read_block method returns the first Block of the file. It will automatically close the file after reading. The read method is the same as read_block.

The read_all_blocks method automatically reads all Blocks. It will automatically close the file after reading.

The read_next_block method will return one Block each time it is called. It will automatically close the file and reset to the first Block after reading the last block. Call the close method to close the file and reset this method back to the first Block.

The isopen property tells whether the file is currently open and reading or closed.

Note 1:
The first Unit in each RecordingChannelGroup is always UnassignedSpikes, which has a SpikeTrain for each Segment containing all the spikes not assigned to any Unit in that Segment.
Note 2:
The first Segment in each Block is always Comments, which stores all comments as Event objects. The Event times are the timestamps of the comments, expressed as the number of days since December 30th, 1899, while the timestamp attribute holds the same value as a Python datetime.
Note 3:
The parameters from the BrainWare table for each condition are stored in the Segment annotations. If there are multiple repetitions of a condition, each repetition is stored as a separate Segment.
Note 4:
There is always only one RecordingChannelGroup. BrainWare stores the equivalent of RecordingChannelGroups in separate files.
Usage:
>>> from neo.io.brainwaresrcio import BrainwareSrcIO
>>> srcfile = BrainwareSrcIO(filename='multi_500ms_mulitrep_ch1.src')
>>> blk1 = srcfile.read()
>>> blk2 = srcfile.read_block()
>>> blks = srcfile.read_all_blocks()
>>> print blk1.segments
>>> print blk1.segments[0].spiketrains
>>> print blk1.units
>>> print blk1.units[0].name
>>> print blk2
>>> print blk2[0].segments
>>> print blks
>>> print blks[0].segments
class neo.io.ElanIO(filename=None)

Class for reading/writing data from Elan.

Usage:
>>> from neo import io
>>> r = io.ElanIO(filename='File_elan_1.eeg')
>>> seg = r.read_segment(lazy=False, cascade=True)
>>> print seg.analogsignals   
[<AnalogSignal(array([ 89.21203613,  88.83666992,  87.21008301, ...,  64.56298828,
    67.94128418,  68.44177246], dtype=float32) * pA, [0.0 s, 101.5808 s], sampling rate: 10000.0 Hz)>]
>>> print seg.spiketrains     
[]
>>> print seg.eventarrays     
[]
class neo.io.ElphyIO(filename=None)

Class for reading from and writing to an Elphy file.

It enables reading of: Block, Segment, RecordingChannel, RecordingChannelGroup, EventArray and SpikeTrain.

Usage:
>>> from neo import io
>>> r = io.ElphyIO(filename='ElphyExample.DAT')
>>> bl = r.read_block(lazy=False, cascade=True)
>>> print(bl.segments[0].analogsignals)
>>> print(bl.segments[0].spiketrains)
>>> print(bl.segments[0].eventarrays)
>>> anasig = r.read_analogsignal(lazy=False, cascade=False)
>>> print(anasig._data_description)
>>> bl = Block()
>>> # create segments and their contents, and append them to bl
>>> r.write_block(bl)
class neo.io.KlustaKwikIO(filename, sampling_rate=30000.0)

Reading and writing from KlustaKwik-format files.
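
A minimal usage sketch (the base filename and sampling rate are hypothetical), relying on the generic read() method:
>>> from neo import io
>>> r = io.KlustaKwikIO(filename='my_experiment', sampling_rate=30000.)
>>> blocks = r.read()
>>> print(blocks[0].segments[0].spiketrains)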

class neo.io.MicromedIO(filename=None)

Class for reading data from Micromed files (.TRC).

Usage:
>>> from neo import io
>>> r = io.MicromedIO(filename='File_micromed_1.TRC')
>>> seg = r.read_segment(lazy=False, cascade=True)
>>> print seg.analogsignals              
[<AnalogSignal(array([ -1.77246094e+02,  -2.24707031e+02,  -2.66015625e+02,
...
class neo.io.NeoHdf5IO(filename=None, **kwargs)

The IO Manager is the core I/O class for HDF5 / Neo. It handles the connection with the HDF5 file and uses PyTables for data operations. Use this class to load, insert or delete Neo objects in an HDF5 file.
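
A minimal round-trip sketch (the filename and the block bl are hypothetical), using the generic write() and read() methods described above:
>>> from neo import io
>>> writer = io.NeoHdf5IO(filename='mydata.h5')
>>> writer.write(bl)          # bl is an existing Block
>>> reader = io.NeoHdf5IO(filename='mydata.h5')
>>> blocks = reader.read()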

class neo.io.NeoMatlabIO(filename=None)

Class for reading/writing Neo objects in MATLAB format (.mat) versions 5 to 7.2.

This module is a bridge for MATLAB users who want to adopt the Neo object representation. The nomenclature is the same but using Matlab structs and cell arrays. With this module MATLAB users can use neo.io to read a format and convert it to .mat.

Rules of conversion:
  • Neo classes are converted to MATLAB structs. e.g., a Block is a struct with attributes “name”, “file_datetime”, ...
  • Neo one_to_many relationships become cell arrays in MATLAB. e.g., seg.analogsignals[2] in Python Neo will be seg.analogsignals{3} in MATLAB.
  • Quantity attributes are represented by 2 fields in MATLAB. e.g., anasig.t_start = 1.5 * s in Python will be anasig.t_start = 1.5 and anasig.t_start_units = 's' in MATLAB.
  • classes that inherit from Quantity (AnalogSignal, SpikeTrain, ...) in Python will have 2 fields (array and units) in the MATLAB struct. e.g., AnalogSignal([1., 2., 3.], 'V') in Python will be anasig.array = [1. 2. 3.] and anasig.units = 'V' in MATLAB.

1 - Scenario 1: create data in MATLAB and read them in Python

This MATLAB code generates a block:

block = struct();
block.segments = { };
block.name = 'my block with matlab';
for s = 1:3
    seg = struct();
    seg.name = strcat('segment ',num2str(s));
    seg.analogsignals = { };
    for a = 1:5
        anasig = struct();
        anasig.array = rand(100,1);
        anasig.units = 'mV';
        anasig.t_start = 0;
        anasig.t_start_units = 's';
        anasig.sampling_rate = 100;
        anasig.sampling_rate_units = 'Hz';
        seg.analogsignals{a} = anasig;
    end
    seg.spiketrains = { };
    for t = 1:7
        sptr = struct();
        sptr.array = rand(30,1)*10;
        sptr.units = 'ms';
        sptr.t_start = 0;
        sptr.t_start_units = 'ms';
        sptr.t_stop = 10;
        sptr.t_stop_units = 'ms';
        seg.spiketrains{t} = sptr;
    end

    block.segments{s} = seg;
end
save 'myblock.mat' block -V7

This code reads it in Python:

import neo
r = neo.io.NeoMatlabIO(filename='myblock.mat')
bl = r.read_block()
print bl.segments[1].analogsignals[2]
print bl.segments[1].spiketrains[4]

2 - Scenario 2: create data in Python and read them in MATLAB

This Python code generates the same block as in the previous scenario:

import neo
import quantities as pq
from scipy import rand

bl = neo.Block(name='my block with neo')
for s in range(3):
    seg = neo.Segment(name='segment' + str(s))
    bl.segments.append(seg)
    for a in range(5):
        anasig = neo.AnalogSignal(rand(100), units='mV', t_start=0*pq.s, sampling_rate=100*pq.Hz)
        seg.analogsignals.append(anasig)
    for t in range(7):
        sptr = neo.SpikeTrain(rand(30)*10., units='ms', t_start=0*pq.ms, t_stop=10*pq.ms)
        seg.spiketrains.append(sptr)

w = neo.io.NeoMatlabIO(filename='myblock.mat')
w.write_block(bl)

This MATLAB code reads it:

load 'myblock.mat'
block.name
block.segments{2}.analogsignals{3}.array
block.segments{2}.analogsignals{3}.units
block.segments{2}.analogsignals{3}.t_start
block.segments{2}.analogsignals{3}.t_start_units

3 - Scenario 3: conversion

This Python code converts a Spike2 file to MATLAB:

from neo import Block
from neo.io import Spike2IO, NeoMatlabIO

r = Spike2IO(filename='myspike2file.smr')
w = NeoMatlabIO(filename='convertedfile.mat')
seg = r.read_segment()
bl = Block(name='a block')
bl.segments.append(seg)
w.write_block(bl)
class neo.io.NeuroExplorerIO(filename=None)

Class for reading NeuroExplorer files (.nex).

Usage:
>>> from neo import io
>>> r = io.NeuroExplorerIO(filename='File_neuroexplorer_1.nex')
>>> seg = r.read_segment(lazy=False, cascade=True)
>>> print seg.analogsignals   
[<AnalogSignal(array([ 39.0625    ,   0.        ,   0.        , ..., -26.85546875, ...
>>> print seg.spiketrains     
[<SpikeTrain(array([  2.29499992e-02,   6.79249987e-02,   1.13399997e-01, ...
>>> print seg.eventarrays     
[<EventArray: @21.1967754364 s, @21.2993755341 s, @21.350725174 s, @21.5048999786 s, ...
>>> print seg.epocharrays     
[<neo.core.epocharray.EpochArray object at 0x10561ba90>, <neo.core.epocharray.EpochArray object at 0x10561bad0>]
class neo.io.NeuroScopeIO(filename=None)
class neo.io.NeuroshareIO(filename='', dllname='')

Class for reading files through the Neuroshare API. The user needs the DLL corresponding to the file format to be available.

Usage:
>>> from neo import io
>>> r = io.NeuroshareIO(filename='a_file', dllname=the_name_of_dll)
>>> seg = r.read_segment(lazy=False, cascade=True, import_neuroshare_segment=True)
>>> print seg.analogsignals        
[<AnalogSignal(array([ -1.77246094e+02,  -2.24707031e+02,  -2.66015625e+02,
...
>>> print seg.spiketrains
[]
>>> print seg.eventarrays
[<EventArray: 1@1.12890625 s, 1@2.02734375 s, 1@3.82421875 s>]
Note:

neuroshare.ns_ENTITY_EVENT entities are converted to neo.EventArray.
neuroshare.ns_ENTITY_ANALOG entities are converted to neo.AnalogSignal.
neuroshare.ns_ENTITY_NEURALEVENT entities are converted to neo.SpikeTrain.

neuroshare.ns_ENTITY_SEGMENT entities are something between a series of small AnalogSignal objects and a SpikeTrain with associated waveforms. They are arbitrarily converted to SpikeTrain.
class neo.io.PickleIO(filename=None, **kargs)
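
PickleIO reads/writes Neo objects using Python's pickle module. A minimal round-trip sketch (the filename and the block bl are hypothetical), relying on the generic read() and write() methods described above:
>>> from neo import io
>>> w = io.PickleIO(filename='mydata.pkl')
>>> w.write(bl)               # bl is an existing Block
>>> r = io.PickleIO(filename='mydata.pkl')
>>> blocks = r.read()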
class neo.io.PlexonIO(filename=None)

Class for reading Plexon files (.plx).

Usage:
>>> from neo import io
>>> r = io.PlexonIO(filename='File_plexon_1.plx')
>>> seg = r.read_segment(lazy=False, cascade=True)
>>> print seg.analogsignals
[]
>>> print seg.spiketrains  
[<SpikeTrain(array([  2.75000000e-02,   5.68250000e-02,   8.52500000e-02, ...,
...
>>> print seg.eventarrays
[]
class neo.io.PyNNNumpyIO(filename=None, **kargs)

Reads/writes data from/to PyNN NumpyBinaryFile format

class neo.io.PyNNTextIO(filename=None, **kargs)

Reads/writes data from/to PyNN StandardTextFile format
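
A minimal usage sketch for either PyNN IO (the filename is hypothetical), relying on the generic read() method:
>>> from neo import io
>>> r = io.PyNNTextIO(filename='my_pynn_recording.txt')
>>> blocks = r.read()
>>> print(blocks[0].segments)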

class neo.io.RawBinarySignalIO(filename=None)

Class for reading/writing data in a raw binary interleaved compact file.

Usage:
>>> from neo import io
>>> r = io.RawBinarySignalIO(filename='File_ascii_signal_2.txt')
>>> seg = r.read_segment(lazy=False, cascade=True)
>>> print seg.analogsignals  
...
class neo.io.TdtIO(dirname=None)

Class for reading data from the Tucker Davis TTank format.

Usage:
>>> from neo import io
>>> r = io.TdtIO(dirname='aep_05')
>>> bl = r.read_block(lazy=False, cascade=True)
>>> print bl.segments
[<neo.core.segment.Segment object at 0x1060a4d10>]
>>> print bl.segments[0].analogsignals
[<AnalogSignal(array([ 2.18811035,  2.19726562,  2.21252441, ...,  1.33056641,
        1.3458252 ,  1.3671875 ], dtype=float32) * pA, [0.0 s, 191.2832 s], sampling rate: 10000.0 Hz)>]
>>> print bl.segments[0].eventarrays
[]
class neo.io.WinEdrIO(filename=None)

Class for reading data from WinEDR.

Usage:
>>> from neo import io
>>> r = io.WinEdrIO(filename='File_WinEDR_1.EDR')
>>> seg = r.read_segment(lazy=False, cascade=True)
>>> print seg.analogsignals
[<AnalogSignal(array([ 89.21203613,  88.83666992,  87.21008301, ...,  64.56298828,
        67.94128418,  68.44177246], dtype=float32) * pA, [0.0 s, 101.5808 s], sampling rate: 10000.0 Hz)>]
class neo.io.WinWcpIO(filename=None)

Class for reading from a WinWCP file.

Usage:
>>> from neo import io
>>> r = io.WinWcpIO(filename='File_winwcp_1.wcp')
>>> bl = r.read_block(lazy=False, cascade=True)
>>> print bl.segments   
[<neo.core.segment.Segment object at 0x1057bd350>, <neo.core.segment.Segment object at 0x1057bd2d0>,
...
>>> print bl.segments[0].analogsignals
[<AnalogSignal(array([-2438.73388672, -2428.96801758, -2425.61083984, ..., -2695.39453125,
...

If you want to develop your own IO

See the IO developers’ guide for information on how to implement a new IO.