BioCV Motion Capture Dataset

Measuring how a person moves (for instance, when walking naturally, practising a sport, or recovering from injury) is a valuable tool for researchers, medical practitioners and animators. The measurements can help identify injury risks, optimise athlete performance, diagnose the source of aches and pains, monitor the progress of a patient's recovery from injury, or help an animator turn an actor into a superhero. Motion is often measured by dressing a person in special clothing, fixing reflective markers to their body, and observing them through specialised camera systems. This can be invasive, awkward, limiting and time-consuming to set up, but the marker tracks can be highly accurate and very quick to capture. To make the measurements easier to obtain, researchers have been trying to remove the need for markers by training computers to better detect and identify parts and points on the human body, creating "markerless" motion capture systems. The BioCV dataset provides both video imagery and traditional motion capture measurements so that the performance of these markerless systems can be compared with the traditional approach, establishing what level of accuracy is achievable and, in turn, which applications they are suited to.

The BioCV dataset consists of synchronised multi-camera video, marker tracks from optical motion capture, and forceplate data for 15 participants performing controlled and repeated motions. It also includes photogrammetry scans (images and point cloud reconstructions) of each participant. The dataset was created to evaluate the performance of computer vision-based markerless motion capture systems with respect to marker-based systems.

Keywords:
Synchronised video, motion capture, forceplate data, biomechanics validation
Subjects:
Information and communication technologies
Medical and health interface

Cite this dataset as:
Evans, M., Needham, L., Wade, L., Parsons, M., Colyer, S., McGuigan, P., Bilzon, J., Cosker, D., 2024. BioCV Motion Capture Dataset. Bath: University of Bath Research Data Archive. Available from: https://doi.org/10.15125/BATH-01258.


Access on request: The data are provided freely, but to adhere to our ethical commitments we must ensure, as far as possible, that users of the dataset understand the terms under which participants granted permission: the data may be used in the context of biomechanics and machine vision research, must not be shared outside the dataset itself, and must not be used for any purpose which could be considered demeaning, or cause embarrassment or harm to any participant. We ask that users explicitly agree to suitable terms before downloading. The 15 .tar archive files are typically 15 GB each.

Creators

Murray Evans
University of Bath

Laurie Needham
University of Bath

Logan Wade
University of Bath

Martin Parsons
University of Bath

Steffi Colyer
University of Bath

Polly McGuigan
University of Bath

James Bilzon
University of Bath

Darren Cosker
University of Bath

Contributors

University of Bath
Rights Holder

Coverage

Collection date(s):

From 2 February 2022 to 18 March 2022

Documentation

Data collection method:

The BioCV dataset provides observations of 15 human participants as they perform a set of movements typical of biomechanical assessment. The observations come from 3 modalities:

- 200 fps HD colour video from a 9-camera machine vision system
- marker tracks from a 200 Hz optical motion capture system synchronised to the video system
- 1000 Hz analogue signals from forceplates embedded in the floor

Each of the 15 participants was also scanned using a 64-camera photogrammetry system. The scan photos and a rudimentary point cloud are also provided.

A 200 fps 9-camera machine vision system (JAI SP-5000C-CXP2 / Silicon Software MicroEnable 5) and a 15-camera optical motion capture system (Qualisys Oqus) were set up in a circle around a set of in-floor forceplates (Kistler 9287CA). The machine vision system generated a synchronisation signal which was used to control image capture of both the machine vision and motion capture cameras. The forceplates were recorded in sync with the motion capture system, and a further auxiliary set of timing lights was used to ensure frame synchronisation between the systems.

The machine vision system was calibrated using observations of a calibration board in conjunction with standard bundle adjustment, while the motion capture system was calibrated as per manufacturer specifications. The machine vision calibration was then spatially aligned to the motion capture system using observations of a single marker moving through the scene.

The 15 participants are healthy volunteers, and each provided informed consent to have their data shared in this dataset. The 8 females and 7 males performed a range of motion trials consistent with common biomechanical assessments, including counter-movement jumps, walking, and running. Each movement was repeated numerous times with the participant wearing a full set of 42 motion capture markers while observed by the 3 sensor systems. A further, smaller set of trials was conducted without markers, so that these trials can be used by practitioners who wish to use the marker-free video data to augment the training of any relevant algorithms.

Motion capture data are provided as marker tracks in the .c3d file format, which can be read using (for example) the EZC3D software library (C++, Python and other interfaces); see the sketch at the end of this section. Two versions of each .c3d file are provided. The first is the "raw" data as exported by Qualisys Track Manager; the second, "markers", file has been processed using Visual 3D. For processing, the raw tracks were low-pass filtered (Butterworth 4th order, 12 Hz cut-off) and a 6DoF model was computed, with joint centres placed at the midpoint (50%) between the medial and lateral markers for all joints except the hip joint centre, which was computed using the regression equations reported by Bell (1989).

Each participant was also scanned by a 64-camera photogrammetry system, and the resulting images are included with the dataset. The images have been processed using the Agisoft PhotoScan software to automatically calibrate the cameras and reconstruct 3D point clouds of the participant's surface. This should allow the fitting of volumetric models, which may be relevant for markerless motion capture techniques that can benefit from person-specific body models.
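As an illustration, marker tracks can be loaded in a few lines of Python. This is a minimal sketch assuming the ezc3d Python bindings (pip install ezc3d); the file path and marker labels shown are illustrative, not actual names from the archive.

```python
import ezc3d

c3d = ezc3d.c3d("P01/walk_01/markers.c3d")  # illustrative path

# Point data has shape (4, n_markers, n_frames); rows 0-2 are x, y, z.
points = c3d["data"]["points"]
labels = c3d["parameters"]["POINT"]["LABELS"]["value"]
rate = c3d["parameters"]["POINT"]["RATE"]["value"][0]  # 200 Hz for this dataset

print(f"{len(labels)} markers, {points.shape[2]} frames at {rate} Hz")

# A joint centre as the midpoint of a medial/lateral marker pair
# (hypothetical label names):
medial = points[0:3, labels.index("LKneeMed"), :]
lateral = points[0:3, labels.index("LKneeLat"), :]
knee_centre = 0.5 * (medial + lateral)
```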

Technical details and requirements:

Machine vision system:
- 9 JAI SP-5000C-CXP2 cameras
- 3 Silicon Software MicroEnable 5 frame grabbers
- Silicon Software OptoTrigger 5 I/O board for externalising sync and trigger signals
- custom recording software
- custom calibration software
- configured to record at 1920x1280 resolution and 200 fps

Motion capture system:
- 15-camera Qualisys Oqus system
- 4 Kistler 9287CA force plates
- Qualisys Track Manager v2019.3
- Visual 3D (v6, C-Motion Inc)

Photogrammetry system:
- 50 Canon EOS 1300D cameras
- 14 Canon EOS 2000D cameras
- triggering system
- Agisoft PhotoScan software

Additional information:

The data have been divided into 15 .tar archives, one for each participant. Each archive contains:

- camera calibration files for the machine vision system for that participant
- directories for each of the participant's movement trials, each of which contains:
  - 9 video files, one for each camera (H.265-encoded MPEG-4 files using yuv444p)
  - 1 "raw.c3d" unprocessed motion capture file
  - 1 "markers.c3d" Visual 3D processed motion capture file
  - 1 "led.c3d" file containing observations of the auxiliary synchronisation light system

The video files were created using the FFmpeg library and can be viewed using FFmpeg's tools (e.g. ffplay) or various other video players, such as VLC (the yuv444p colour format may not be accepted by some video players). Computer vision practitioners should have no difficulty using the videos through OpenCV with the FFmpeg backend, in either Python or C++, on Linux and Windows.

Calibration files have a simple text format providing the image dimensions, the 3x3 intrinsic matrix K, the 4x4 extrinsic transformation L (which transforms a point from world to camera coordinates), and distortion parameters k0 to k4 compatible with OpenCV distortion models. The format of the file is as follows:

```
<width> <height>
K00 K01 K02
K10 K11 K12
K20 K21 K22
L00 L01 L02 L03
L10 L11 L12 L13
L20 L21 L22 L23
L30 L31 L32 L33
k0 k1 k2 k3 k4
```
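The following minimal Python sketch parses such a calibration file and undistorts a video frame read through OpenCV's FFmpeg backend. It assumes the whitespace-separated layout above; the file names are illustrative, not actual archive paths.

```python
import cv2
import numpy as np

def load_calibration(path):
    """Parse a calibration file into (width, height), K, L and distortion."""
    with open(path) as f:
        vals = [float(t) for t in f.read().split()]
    width, height = int(vals[0]), int(vals[1])
    K = np.array(vals[2:11]).reshape(3, 3)    # 3x3 intrinsic matrix
    L = np.array(vals[11:27]).reshape(4, 4)   # 4x4 world-to-camera transform
    dist = np.array(vals[27:32])              # k0..k4, OpenCV-compatible
    return (width, height), K, L, dist

(width, height), K, L, dist = load_calibration("P01/cam00.calib")  # illustrative

# Read the first frame of the matching video and undistort it:
cap = cv2.VideoCapture("P01/walk_01/cam00.mp4")  # illustrative
ok, frame = cap.read()
if ok:
    undistorted = cv2.undistort(frame, K, dist)
```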

Documentation Files

README.md
text/markdown (9kB)
All Rights Reserved

Dataset documentation. See LICENSE.md for licence.

LICENSE.md
text/markdown (1kB)
All Rights Reserved

Licensing information

Funders

Centre for the Analysis of Motion, Entertainment Research and Applications (CAMERA) - 2.0
EP/T022523/1

Centre for the Analysis of Motion, Entertainment Research and Applications (CAMERA)
EP/M023281/1

Publication details

Publication date: 27 September 2024
by: University of Bath

Version: 1

DOI: https://doi.org/10.15125/BATH-01258

URL for this record: https://researchdata.bath.ac.uk/id/eprint/1258

Contact information

Please contact the Research Data Service in the first instance for all matters concerning this item.

Contact person: Murray Evans

Departments:

Faculty of Science
Computer Science

Research Centres & Institutes
Centre for the Analysis of Motion, Entertainment Research & Applications