The BioCV dataset provides observations of 15 human participants performing a set of movements typical of biomechanical assessment. The observations come from 3 modalities:

- 200 fps HD colour video from a 9-camera machine vision system
- marker tracks from a 200 Hz optical motion capture system synchronised to the video system
- 1000 Hz analogue signals from force plates embedded in the floor

Each of the 15 participants was also scanned using a 64-camera photogrammetry system. The scan photos and a rudimentary point cloud are also provided.

A 200 fps 9-camera (JAI SP-5000C-CXP2 / Silicon Software MicroEnable 5) machine vision system and a 15-camera optical motion capture system (Qualisys Oqus) were set up in a circle around a set of in-floor force plates (Kistler 9287CA). The machine vision system generated a synchronisation signal that controlled image capture for both the machine vision and motion capture cameras. The force plates were recorded in sync with the motion capture system, and a further auxiliary set of timing lights was used to ensure frame synchronisation of the systems.

The machine vision system was calibrated using observations of a calibration board in conjunction with standard bundle adjustment, while the motion capture system was calibrated per the manufacturer's specifications. The machine vision calibration was then spatially aligned to the motion capture system using observations of a single marker moving through the scene.

The 15 participants are healthy volunteers, and each provided informed consent to have their data shared in this dataset. The 8 female and 7 male participants performed a range of motion trials consistent with common biomechanical assessments, including counter-movement jumps, walking, and running. Each movement was repeated numerous times with the participant wearing a full set of 42 motion capture markers while observed by the 3 sensor systems.
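Because the three streams are frame-synchronised but run at different rates, one 200 fps video frame corresponds to one 200 Hz motion capture frame and to five 1000 Hz force plate samples. A minimal sketch of that index relationship, assuming zero-offset alignment (the helper names are illustrative, not part of any released tooling):

```python
# Sketch of the sample-index relationship between the three synchronised
# streams: 200 fps video, 200 Hz motion capture, 1000 Hz force plates.
# These helper names are illustrative, not part of the dataset tooling,
# and assume the streams start on a shared zero-offset clock.

VIDEO_FPS = 200       # machine vision frame rate
MOCAP_HZ = 200        # optical motion capture rate (1:1 with video)
FORCEPLATE_HZ = 1000  # analogue force plate sampling rate

def mocap_frame_for_video_frame(video_frame: int) -> int:
    """Video and mocap capture are frame-synchronised, so the mapping is 1:1."""
    return video_frame

def forceplate_samples_for_video_frame(video_frame: int) -> range:
    """Analogue sample indices covering the period of one video frame."""
    ratio = FORCEPLATE_HZ // VIDEO_FPS  # 5 analogue samples per frame
    return range(video_frame * ratio, (video_frame + 1) * ratio)

# Video frame 10 lines up with mocap frame 10 and analogue samples 50..54.
print(mocap_frame_for_video_frame(10))               # 10
print(list(forceplate_samples_for_video_frame(10)))  # [50, 51, 52, 53, 54]
```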
A further, smaller set of trials was conducted without markers, so that practitioners who wish to make use of the marker-free video data can use these trials to augment the training of relevant algorithms.

Motion capture data are provided as marker tracks in the .c3d file format, which can be read using (for example) the EZC3D software library (C++, Python, and other interfaces). Two versions of each .c3d file are provided: the first is the "raw" data as exported by Qualisys Track Manager, and the second "markers" file has been processed using Visual3D. For processing, the raw tracks were low-pass filtered (Butterworth 4th order, 12 Hz cut-off) and a 6DoF model was computed, with joint centres computed as the midpoint between the medial and lateral markers for all joints except the hip joint centre, which was computed using the regression equations reported by Bell (1989).

Each participant was also scanned by a 64-camera photogrammetry system, and the resulting images are included with the dataset. The images have been processed using the Agisoft PhotoScan software to automatically calibrate the cameras and reconstruct 3D point clouds of the surface of the participant. This should allow the fitting of volumetric models, which may be relevant for markerless motion capture techniques that can benefit from person-specific body models.
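The filtering and joint-centre steps described above can be approximated on raw marker tracks with SciPy: a zero-phase 4th-order Butterworth low-pass at 12 Hz, then a midpoint between medial and lateral markers. A sketch on synthetic 200 Hz data, not the dataset's Visual3D pipeline itself; in practice the tracks would be loaded from the raw .c3d files (e.g. via ezc3d's Python interface):

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 200.0      # marker track sampling rate (Hz)
CUTOFF = 12.0   # low-pass cut-off (Hz), as used for the "markers" files

def lowpass(track: np.ndarray) -> np.ndarray:
    """Zero-phase 4th-order Butterworth low-pass, applied along the frame axis."""
    b, a = butter(4, CUTOFF / (FS / 2.0))  # cut-off normalised by Nyquist
    return filtfilt(b, a, track, axis=0)

def joint_centre(medial: np.ndarray, lateral: np.ndarray) -> np.ndarray:
    """Joint centre as the midpoint of the medial and lateral markers."""
    return 0.5 * (medial + lateral)

# Synthetic noisy knee-marker tracks, shape (n_frames, 3), in mm.
# Real tracks would come from the raw .c3d files instead.
t = np.arange(0, 2.0, 1.0 / FS)
medial = np.stack([np.sin(2 * np.pi * t)] * 3, axis=1) * 100.0
lateral = medial + np.array([80.0, 0.0, 0.0])  # ~80 mm apart mediolaterally
noisy = medial + np.random.default_rng(0).normal(0.0, 1.0, medial.shape)

smoothed = lowpass(noisy)
knee_centre = joint_centre(smoothed, lateral)
print(knee_centre.shape)  # (400, 3)
```

Note that `filtfilt` runs the filter forwards and backwards, matching the zero-lag behaviour expected of biomechanics filtering pipelines.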