loopbio blog

2018 In Review

Written on Monday January 07, 2019

2018 was a great year for loopbio! We continued to improve our products, particularly Loopy image processing and tracking software, and Motif video recording software, and our Virtual Reality systems, to meet the needs of quantitative biologists around the world.

We are especially happy with the improvements and success of Loopy, which left beta testing in January 2018 and grew in popularity rapidly - over the course of 2018 Loopy was used:

  • To help 149 scientists with their research
  • To process 10577 videos (totalling 14.9TB of data)
  • To track or analyze 3484 videos using deep learning and 2D or 3D tracking
  • To annotate 2368 video segments
  • To score 791 experiments

Let's review the year in order and highlight some of the moments we are most proud of.

Loopy Is Born

At the turn of the year we launched Loopy to the general public, making deep learning and 3D tracking available to all scientists without having to write or maintain their own software. Loopy can be used immediately by signing up at https://app.loopb.io.

Loopy Grows Up

After Loopy's release, we didn't stop adding features. We began by adding scoring and behavioral coding to Loopy, including social scoring.

We taught all our products - Loopy, Motif, and realtime 3D tracking - to get along with one another, demonstrating a world first at the FENS conference: realtime marker-free pose tracking of mice (and other animals).

We added even more plotting and data analysis capabilities to Loopy - making it a complete and integrated tool for analysis. Now you can create your own tracking solution - including training your own deep learning based animal detector and tracker - upload and process videos, and plot and analyze data, without leaving your web browser.

We polished our complete suite of tools for 3D tracking, calibration and analysis. Use your deep learning animal or pose detector for 3D reconstruction with ease!
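To illustrate the kind of computation behind multi-camera 3D reconstruction, here is a classic direct linear transform (DLT) triangulation of one point from two calibrated views. This is a textbook sketch under simplified assumptions (identity intrinsics, toy camera poses), not Loopy's actual implementation:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Triangulate one 3D point from two views via the direct linear transform.

    P1, P2 are 3x4 camera projection matrices; x1, x2 are (u, v) image points.
    """
    # each view contributes two rows of the homogeneous system A @ X = 0
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # the solution is the right singular vector with the smallest singular value
    X = np.linalg.svd(A)[2][-1]
    return X[:3] / X[3]

# two toy cameras: identity intrinsics, second camera shifted along x
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
point = np.array([0.5, 0.2, 2.0])
x1 = (point[0] / point[2], point[1] / point[2])        # (0.25, 0.1)
x2 = ((point[0] - 1) / point[2], point[1] / point[2])  # (-0.25, 0.1)
print(triangulate(P1, P2, x1, x2))  # ≈ [0.5, 0.2, 2.0]
```

In practice the projection matrices come from camera calibration, and the image points from your deep learning animal or pose detector.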

Motif - the Versatile Video Recording Platform

2018 featured a number of cool uses and examples of Motif - our recording software which can be used standalone, or combined across multiple computers to build complex and automated experimental assays.

Here's Motif for long-term activity recording.

Motif for microscope-like resolution and high framerate recording from multi-well plates.

Motif for wide-area high framerate recording with realtime compression.

Motif for synchronized multi-camera 3D recording - collecting data for analysis in Loopy.

Virtual Reality

We developed and delivered an innovative virtual reality system for flies.

We also delivered two FishVR systems to customers.


2018 was a great year for loopbio!
We wish everyone a happy new year and look forward to 2019!

A Multi-Camera High Throughput Worm Assay

Written on Wednesday October 17, 2018

We were approached by Andre Brown to create a custom imaging setup for high-throughput worm screening. Dr. Brown and his team are searching for novel neuroactive compounds using the nematode C. elegans. He explained:

"To screen a large number of drugs, it helps to image many worms in multiwell plates. Normal plate scanning microscopes don't help because we want to see behaviour changes that require looking at each worm for minutes so we have to image the worms in parallel. At the same time, we need enough resolution to extract a detailed behavioural fingerprint using Tierpsy Tracker. A six-camera array provides the pixel density and frame rate we need and opens the door to phenotypic screens for complex behaviours at an unprecedented scale."

The Solution

To meet the requirements we designed a solution using 6x 12 Megapixel cameras at 25 frames per second. To save on storage space, all video is compressed in real-time, at exceptional quality, before being saved to disk. The flexibility of Motif allows synchronized recording from all cameras, controlled from a single web-based user interface.
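To get a sense of why real-time compression is essential at this scale, here is a back-of-the-envelope estimate of the raw data rate; the 8-bit monochrome pixel format is an assumption for the sake of the calculation:

```python
# Rough data-rate estimate for the 6-camera worm-imaging rig
# (8-bit monochrome pixels assumed; actual sensor settings may differ)
CAMERAS = 6
MEGAPIXELS = 12      # pixels per frame, in millions
FPS = 25
BYTES_PER_PIXEL = 1  # 8-bit mono (assumption)

raw_bytes_per_second = CAMERAS * MEGAPIXELS * 1e6 * FPS * BYTES_PER_PIXEL
raw_gb_per_second = raw_bytes_per_second / 1e9
raw_tb_per_hour = raw_bytes_per_second * 3600 / 1e12

print(f"raw: {raw_gb_per_second:.1f} GB/s, {raw_tb_per_hour:.2f} TB/hour")
```

At roughly 1.8 GB/s of raw pixels, even a long screening session would overwhelm storage without compressing in real time before writing to disk.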

For an impression of the images possible with such a system, check out the interactive (mouse over / touch for zoom controls) viewer below.

Unfortunately, the generation of the visualisation above introduced some artifacts not present in the original video.

Interested in Motif?

Motif is the first video and camera recording system designed for the experiments of modern scientists. It supports single and multiple synchronized camera scenarios, remote operation, high framerate and unlimited duration recording. It is always updated and has no single-user or other usage limitations.

If you are interested in a Motif system, please contact us for a quote or to see how Motif can solve your video recording needs.

Loopbio Joins NVIDIA Inception Program

Written on Thursday October 04, 2018

We are happy to announce that loopbio gmbh has been accepted into the NVIDIA Inception Program. The program is designed to nurture dedicated and exceptional startups who are revolutionizing industries with AI and data science. The Inception Program provides direct access to NVIDIA's latest technology, deep learning expertise, and a global network of partners and customers.

Loopbio was the first company to bring easy to use deep learning based video analysis and tracking solutions to the quantitative and behavioural biology research fields. Our revolutionary loopy product was launched in 2017 and allowed AI tracking and analysis of animal behaviour, using only your web browser and without writing any code. Loopy has been improved ever since with the addition of state of the art AI algorithms for pose and 3D tracking, and image and behavioural classification.


Unlike other AI platforms, loopy does not stop at just model training. It provides comprehensive tools for performing quantitative analysis on processed video to let users get high quality scientific data faster.

About Loopbio

Loopbio was founded in 2016 to bring cutting edge technology to behavioural biology. The company is based in Vienna and provides integrated solutions for high-speed single- and multiple-camera video recording, video analysis and tracking, and virtual reality.

Motif Version 4.5 - New Features

Written on Tuesday August 28, 2018

Following the last release we have continued to add features and improvements to our Motif software. This post includes a short overview of some highlights, while a full list of changes is provided on our website.


Each and every Motif system receives automatic updates without any extra charge.

Increased Support for Environmental Sensors

We have greatly improved our support for environmental sensors from Phidgets. This means that you can simply connect any one of their 'VINT' series sensors to a Motif system and take sensor recordings automatically, at several different sample rates. Phidgets offer an enormous variety of sensors which can be used to measure various parameters of your experiment while recording video, including:


A selection of compatible sensors

Because we record to our extensible and open imgstore format, all sensor recordings are immediately associated with both the time and frame number of the video being recorded.
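The idea behind this association can be sketched in a few lines of Python. This is a simplified illustration of matching a sensor timestamp to the nearest recorded frame, not the actual imgstore format or API:

```python
import bisect

def nearest_frame(frame_times, sensor_time):
    """Return the index of the recorded frame closest in time to a sensor reading.

    frame_times must be sorted ascending (frame timestamps in seconds).
    """
    i = bisect.bisect_left(frame_times, sensor_time)
    if i == 0:
        return 0
    if i == len(frame_times):
        return len(frame_times) - 1
    # pick whichever neighbouring frame is closer in time
    before, after = frame_times[i - 1], frame_times[i]
    return i if (after - sensor_time) < (sensor_time - before) else i - 1

# frames recorded at 25 fps -> one frame every 40 ms
frame_times = [n * 0.04 for n in range(100)]
# a sensor reading taken at t = 1.234 s lands closest to frame 31
print(nearest_frame(frame_times, 1.234))  # → 31
```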

Controlling Outputs

In addition to measuring environmental sensors, we added the ability to switch on or off supported Phidgets outputs, relays and motors. Using our open API you can now, for example, perform tasks like the following:

  • control experimental stimuli at regular or scheduled times
  • control physical devices such as motors, servos or actuators
  • switch on/off LEDs, lights or other stimuli
  • schedule tasks for before or after recording has been completed, such as automated feeding or cleaning procedures
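The scheduling side of such tasks can be sketched with Python's standard sched module. The toggle_led function below is a hypothetical stand-in for an actual Motif/Phidgets API call, used purely to illustrate the timing logic:

```python
import sched
import time

events = []  # record of switch states, for illustration

def toggle_led(state):
    """Hypothetical stand-in for switching a Phidgets digital output via the API."""
    events.append(state)

scheduler = sched.scheduler(time.monotonic, time.sleep)

# switch a stimulus on, then off again shortly afterwards (delays in seconds)
scheduler.enter(0.01, 1, toggle_led, argument=(True,))
scheduler.enter(0.02, 1, toggle_led, argument=(False,))
scheduler.run()  # blocks until all scheduled events have fired

print(events)  # → [True, False]
```

For real experiments the delays would of course be minutes or hours, and the scheduled actions would call into the Motif API rather than append to a list.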

Improved Integration with Loopy

Following on from the above, if you have a recording with associated environmental data, it will be immediately visible in Loopy after the imgstore has been uploaded or imported.


Graphing and export of environmental data associated with a recording is displayed in Loopy

Automatic Import

If you are running an on-site version of Loopy, your Motif and Loopy systems can be configured to allow automatic import of recordings after the completion of your experiment. This feature is especially advantageous when both systems are integrated with your IT infrastructure because all video and experimental data is automatically added to your shared and backed-up network storage without risk of deletion or loss.


Simply enter your Loopy username and your recording will be automatically backed up and imported


Further descriptions of the powerful integrations between Motif and Loopy will be the subject of a future blog post.

New Alignment Visualization

The last Motif release added a number of image feedback augmentations to help with setting up your cameras and experimental assays. This release adds a new visualization designed to help with alignment of samples inside the experimental apparatus, or to help align multiple cameras in a multi-camera setup.


The center of the image is indicated, in addition to concentric bands reflecting the aspect ratio of the camera sensor.
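As an illustration of how such aspect-preserving concentric bands could be computed - our own sketch, not Motif's actual implementation:

```python
def alignment_bands(width, height, n_bands=3):
    """Return concentric centred rectangles (x, y, w, h) that keep the sensor aspect ratio."""
    bands = []
    for i in range(1, n_bands + 1):
        scale = i / (n_bands + 1)  # e.g. 1/4, 2/4, 3/4 of the full frame
        w, h = round(width * scale), round(height * scale)
        x, y = (width - w) // 2, (height - h) // 2
        bands.append((x, y, w, h))
    return bands

# a 1920x1080 (16:9) sensor keeps its 16:9 ratio in every band
print(alignment_bands(1920, 1080))
```

Because every band is scaled from the full sensor dimensions, an object that fills one band is framed the same way as the full image, which is what makes the bands useful for aligning samples or matching multiple cameras.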


OpenCV Conda Packages

Written on Friday May 25, 2018

At loopbio we maintain some linux packages for use with the conda package manager. These can replace the original packages present in the community-driven conda-forge channel, while retaining full compatibility with the rest of the packages in the conda-forge stack. They include some useful modifications that make them more suited to us, but that we find difficult to submit "upstream" for inclusion in the respective official packages.

Why might our packages be useful to you?

At the time of writing this note, we are actively maintaining three packages:

  • ffmpeg (GPL and LGPL builds)
  • libjpeg-turbo
  • opencv

We have written a getting started with Conda guide here. If you are already familiar with conda then replacing your conda-forge packages with ours is a breeze. Using your command line:

# Before getting our conda packages, get a conda-forge based environment.
# For example, use conda-forge by default for all your environments.
conda config --add channels conda-forge

# install and pin ffmpeg GPL (including libx264)...
conda install 'loopbio::ffmpeg=*=*gpl*'

# ...or install and pin ffmpeg LGPL (without libx264)
conda install 'loopbio::ffmpeg=*=*lgpl*'

# install and pin libjpeg-turbo
# note, this is not needed for opencv to use libjpeg-turbo
conda install 'loopbio::libjpeg-turbo=1.5.90=noclob_prefixed_gcc48_*'

# install and pin opencv
conda install 'loopbio::opencv=3.4.3=*h6df427c*'

If you use these packages and find any problem, please let us know using each package's issue tracker.

Example: controlling ffmpeg number of threads when used through OpenCV VideoCapture

We have added an environment variable OPENCV_FFMPEG_THREAD_COUNT that controls ffmpeg's thread_count, and a capture read-only property cv2.CAP_PROP_THREAD_COUNT that can be queried to get the number of threads used by a VideoCapture object. The reason why an environment variable is needed and the property is read only is that the number of threads is a property that needs to be set early in ffmpeg's lifecycle and should not really be modified once the video reader is open. Note that threading support actually depends on the codec used to encode the video (some codecs might, for example, ignore setting thread_count). At the moment we do not support changing the threading strategy type (usually one of slice or frame).

The following are a few functions that help control the number of threads used by ffmpeg when decoding a video via opencv VideoCapture objects.

  """OpenCV utils."""
  import contextlib
  import logging
  import os

  import cv2

  _log = logging.getLogger(__package__)


  @contextlib.contextmanager
  def cv2_num_threads(num_threads):
      """Context manager to temporarily change the number of threads used by opencv."""
      old_num_threads = cv2.getNumThreads()
      cv2.setNumThreads(num_threads)
      try:
          yield
      finally:
          cv2.setNumThreads(old_num_threads)


  # A sentinel object used to request not to change the current value of an envvar
  USE_CURRENT_VALUE = object()


  @contextlib.contextmanager
  def envvar(name, value=USE_CURRENT_VALUE):
      """
      Context manager to temporarily change the value of an environment variable for the current process.

      Remember that some envvars only affect the process at startup (e.g. LD_LIBRARY_PATH).

      Parameters
      ----------
      name : string
        The name of the environment value to modify.

      value : None, `cv2utils.USE_CURRENT_VALUE` or object; default `USE_CURRENT_VALUE`
        If `cv2utils.USE_CURRENT_VALUE`, the environment variable value is not modified whatsoever.
        If None, the environment variable value is temporarily removed, if it exists.
        Else, str(value) will be temporarily set as the value for the environment variable

      Examples
      --------
      When a variable is not already set...
      >>> name = 'AN_ENVIRONMENT_VARIABLE'
      >>> with envvar(name, None):
      ...     print(os.environ.get(name))
      None
      >>> with envvar(name, USE_CURRENT_VALUE):
      ...     print(os.environ.get(name))
      None
      >>> with envvar(name, 42):
      ...     print(os.environ.get(name))
      42
      >>> print(os.environ.get(name))
      None

      When a variable is already set...
      >>> os.environ[name] = 'a_default_value'
      >>> with envvar(name, USE_CURRENT_VALUE):
      ...     print(os.environ.get(name))
      a_default_value
      >>> with envvar(name, None):
      ...     print(os.environ.get(name))
      None
      >>> print(os.environ.get(name))
      a_default_value
      >>> with envvar(name, 42):
      ...     print(os.environ.get(name))
      42
      >>> print(os.environ.get(name))
      a_default_value
      """
      if value is USE_CURRENT_VALUE:
          yield
      elif name not in os.environ:
          if value is not None:
              os.environ[name] = str(value)
              try:
                  yield
              finally:
                  del os.environ[name]
          else:
              yield
      else:
          old_value = os.environ[name]
          if value is not None:
              os.environ[name] = str(value)
          else:
              del os.environ[name]
          try:
              yield
          finally:
              os.environ[name] = old_value


  def ffmpeg_thread_count(thread_count=USE_CURRENT_VALUE):
      """
      Context manager to temporarily change the number of threads requested by cv2.VideoCapture.

      This works manipulating global state, so this function is not thread safe. Take care
      if you instantiate capture objects with different thread_count concurrently.

      The actual behavior depends on the codec. Some codecs will honor thread_count,
      while others will not. You can always call `video_capture_thread_count(cap)`
      to check whether the concrete codec used does one thing or the other.

      Note that as of 2018/03, we only support changing the number of threads for decoding
      (i.e. VideoCapture, but not VideoWriter).

      Parameters
      ----------
      thread_count : int or None or `cv2utils.USE_CURRENT_VALUE`, default `USE_CURRENT_VALUE`

        * if None, then no change on the default behavior of opencv will happen;
          on opencv 3.4.1 and linux, this means "the number of logical cores as reported
          by sysconf(SC_NPROCESSORS_ONLN)" - which is a pretty aggressive setting in terms
          of resource consumption, especially in multiprocess applications,
          and might even be problematic if running with capped resources,
          like in a cgroups/container, under taskset or numactl.

        * if an integer, set capture decoders to the specified number of threads;
          usually 0 means "auto", that is, let ffmpeg decide

        * if `cv2utils.USE_CURRENT_VALUE`, the current value of the environment
          variable OPENCV_FFMPEG_THREAD_COUNT is used (if undefined, then the default
          value given by opencv is used)
      """
      return envvar(name='OPENCV_FFMPEG_THREAD_COUNT', value=thread_count)


  def cv2_supports_thread_count():
      """Returns True iff opencv has been built with support to expose ffmpeg thread_count."""
      return hasattr(cv2, 'CAP_PROP_THREAD_COUNT')


  def video_capture_thread_count(cap):
      """
      Returns the number of threads used by a VideoCapture as reported by opencv.
      Returns None if the opencv build does not support this feature.
      """
      try:
          # noinspection PyUnresolvedReferences
          return cap.get(cv2.CAP_PROP_THREAD_COUNT)
      except AttributeError:
          return None


  def open_video_capture(path,
                         num_threads=USE_CURRENT_VALUE,
                         fail_if_unsupported_num_threads=False,
                         backend=cv2.CAP_FFMPEG):
      """
      Returns a VideoCapture object for the specified path.

      Parameters
      ----------
      path : string
        The path to a video source (file or device)

      num_threads : None, int or `cv2utils.USE_CURRENT_VALUE`, default `USE_CURRENT_VALUE`
        The number of threads used for decoding.
        If None, the opencv default is used (number of logical cores in the system).
        If an int, the number of threads to use. Usually 0 means "auto", 1 "single-threaded"
        (but it might depend on the codec).

      fail_if_unsupported_num_threads : bool, default False
        If False, a warning is issued if num_threads is not None and setting the
        number of threads is unsupported either by opencv or by the used codec.

        If True, a ValueError is raised in any of these two cases.

      backend : cv2 backend or None, default cv2.CAP_FFMPEG
        If provided, it will be used as the preferred backend for the opencv VideoCapture
      """
      if num_threads is not None and not cv2_supports_thread_count():
          message = ('OpenCV does not support setting the number of threads to %r; '
                     'use loopbio build' % num_threads)
          if fail_if_unsupported_num_threads:
              raise ValueError(message)
          else:
              _log.warning(message)

      with ffmpeg_thread_count(num_threads):
          if backend is not None:
              cap = cv2.VideoCapture(path, backend)
          else:
              cap = cv2.VideoCapture(path)

      if cap is None or not cap.isOpened():
          raise IOError("OpenCV unable to open %s" % path)

      if num_threads is USE_CURRENT_VALUE:
          try:
              num_threads = float(os.environ['OPENCV_FFMPEG_THREAD_COUNT'])
          except (KeyError, TypeError):
              num_threads = None
      if num_threads is not None and num_threads != video_capture_thread_count(cap):
          message = 'OpenCV num_threads for decoder setting to %r ignored for %s' % (num_threads, path)
          if fail_if_unsupported_num_threads:
              raise ValueError(message)
          else:
              _log.warning(message)

      return cap

Given these functions, you can open and read a capture like this:

  # open a capture using a single decoding thread
  # (replace the path with your own video file)
  cap = open_video_capture('path/to/video.mp4', num_threads=1)
  if not cap.isOpened():
      raise Exception('Something is wrong and the capture is not open')
  retval, image = cap.read()

We hope other people find these packages useful.

Getting Started With Conda

Written on Friday May 04, 2018

Here at loopbio gmbh we use and recommend the Python programming language. For image processing our primary choice is Python + OpenCV.

Customers often approach us and ask what stack we use and how we set up our environments. The short answer is: we use conda and have our own packages for OpenCV and FFmpeg.


In the following post, we will briefly explain how easy it is to set up a Conda environment for image processing using miniconda and our packages for OpenCV and a matched FFmpeg version on Linux (Ubuntu). If you are not familiar with Conda: it is a package manager widely used in science, data analysis and machine learning, and it is fairly easy and convenient to use.

If you are more interested in why we are using OpenCV, FFmpeg and Conda and what performance benefits you can expect from our packages please check out our other posts.

Install Miniconda

  1. Download the appropriate 3.X installer
  2. In your Terminal window, run: bash Miniconda3-latest-Linux-x86_64.sh
  3. Follow the prompts on the installer screens. If you are unsure about any setting, accept the defaults. You can change them later. To make the changes take effect, close and then re-open your Terminal window.
  4. Test your installation (a list of packages should be printed): conda list

More information is provided here

Setting up the environment

  # Before getting our conda packages, get a conda-forge based environment.
  # For example, use conda-forge by default for all your environments.
  conda config --add channels conda-forge

  # Create a new conda environment
  conda create -n loopbio

  # Source that environment
  source activate loopbio

  # install FFmpeg
  # install and pin ffmpeg GPL (including libx264)...
  conda install 'loopbio::ffmpeg=*=*gpl*'

  # ...or install and pin ffmpeg LGPL (without libx264)
  conda install 'loopbio::ffmpeg=*=*lgpl*'


  # install and pin opencv
  conda install 'loopbio::opencv=3.4.1'

Reading a video file

  # Make sure that the loopbio environment is activated
  source activate loopbio

  # Start Python
  python

In Python

  import cv2
  cap = cv2.VideoCapture('Downloads/small.mp4')
  ret, frame = cap.read()
  print(frame)