Lab Info


Equipment


PhaseSpace Impulse X2E

We have recently been equipped with a new PhaseSpace Impulse X2E motion capture system with active LEDs and have established our own motion capture laboratory at the RISE centre of excellence, in collaboration with the Graphics and Virtual Reality Lab, University of Cyprus.

Our PhaseSpace Impulse X2E system consists of 24 cameras that capture 3D motion using modulated LEDs. Each camera contains a pair of linear scanner arrays operating at high frequency, each of which can capture the position of any number of bright spots of light generated by the LEDs. The system offers a fast capture rate (up to 960 Hz) and allows individual markers to be identified by combining information from several frames, since each marker carries its own unique modulation.
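
As a rough illustration of this identification principle, the sketch below matches the observed per-frame brightness of a detected blob against a small codebook of binary blink patterns; the codes, threshold, and marker names are hypothetical and do not reflect PhaseSpace's actual protocol.

```python
# Hypothetical sketch: each active LED blinks with a unique binary code over a
# short window of frames, and the observed brightness sequence of a detected
# blob is matched against the known codebook to recover the marker identity.
import numpy as np

# Illustrative 8-frame codes assigned to three markers (not real PhaseSpace codes).
CODEBOOK = {
    "marker_0": np.array([1, 0, 1, 1, 0, 0, 1, 0]),
    "marker_1": np.array([0, 1, 1, 0, 1, 0, 0, 1]),
    "marker_2": np.array([1, 1, 0, 0, 0, 1, 1, 0]),
}

def identify_marker(brightness_sequence, threshold=0.5):
    """Match an observed per-frame brightness sequence to a marker ID."""
    observed = (np.asarray(brightness_sequence) > threshold).astype(int)
    for marker_id, code in CODEBOOK.items():
        if np.array_equal(observed, code):
            return marker_id
    return None  # unknown or corrupted code

# Example: a blob whose brightness over 8 consecutive frames matches marker_1.
print(identify_marker([0.1, 0.9, 0.8, 0.2, 0.95, 0.05, 0.1, 0.9]))  # marker_1
```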

Our system is able to capture 3D motion data over time for multiple characters (up to 5 characters simultaneously), maintaining correct human proportions and the naturalness of the action, so that the captured motion remains realistic. The system also offers mobility, enabling motion capture sessions to be performed at dance schools and studios. Our system is capable of acquiring, in parallel and synchronized, motion capture data (in .c3d format), RGB video data (in .mp4 and .mov formats) using the PhaseSpace Rev5 vision cameras (120 fps at 4 megapixels and 240 fps at HD resolution), RGB-depth data using three Microsoft Azure Kinect consoles, and audio data (in .mp3 and .wav formats).
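
For downstream analysis, the exported .c3d files can be loaded with standard tools; the sketch below assumes the open-source ezc3d Python package and uses a hypothetical file name.

```python
# Minimal sketch of loading an exported .c3d recording for analysis,
# assuming the ezc3d package is installed (pip install ezc3d).
import ezc3d

c = ezc3d.c3d("capture_session.c3d")          # hypothetical file name
points = c["data"]["points"]                  # shape: (4, n_markers, n_frames)
labels = c["parameters"]["POINT"]["LABELS"]["value"]
rate = c["parameters"]["POINT"]["RATE"]["value"][0]

print(f"{points.shape[1]} markers, {points.shape[2]} frames at {rate} Hz")
# x/y/z of the first marker in the first frame (row 3 holds the homogeneous 1).
print(labels[0], points[0:3, 0, 0])
```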

The Graphics and Virtual Reality Lab is further equipped with:

  • A 12-camera OptiTrack passive motion capture system (Flex 3 cameras, Motive software),
  • A 3-wall immersive virtual reality set-up,
  • Microsoft Azure Kinect consoles (3 consoles),
  • Microsoft HoloLens 2 (1 console),
  • A 3D head-mounted display,
  • A haptic data glove,
  • Access to a 3D scanner,
  • 3D software applications,
  • Game development software.


OptiTrack

The OptiTrack system is a twelve-camera passive motion capture system (OptiTrack Flex 3 cameras, 640x480 resolution) capable of capturing the 3D motion of articulated subjects. The system uses markers coated with a retroreflective material that reflect light generated near each camera's lens. The cameras operate at high frequency (up to 100 Hz), and each can capture the position of any number of bright spots produced by the reflective markers. The positions of the markers in 3D space are estimated using triangulation, provided that at least three cameras have a direct view of the reflected light, as sketched below. The markers are manually labelled and cleaned up in post-processing. The capture volume of the studio where this system is installed is 3 m x 3 m. The system exports data in .c3d and .bvh formats.
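
The triangulation step mentioned above can be illustrated with a standard linear (DLT) solve; the camera matrices and observations below are placeholder values rather than our studio's actual calibration.

```python
# Linear (DLT) triangulation: given 3x4 projection matrices of the cameras
# that see a marker and the marker's 2D image coordinates in each view, the
# 3D position is the least-squares solution of a homogeneous linear system.
import numpy as np

def triangulate(projections, pixels):
    """projections: list of 3x4 camera matrices; pixels: list of (u, v)."""
    rows = []
    for P, (u, v) in zip(projections, pixels):
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.stack(rows)
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]                       # null-space vector = homogeneous 3D point
    return X[:3] / X[3]              # de-homogenise to metric coordinates

# Toy example with three placeholder cameras offset along x and y.
P0 = np.hstack([np.eye(3), np.zeros((3, 1))])
P1 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
P2 = np.hstack([np.eye(3), np.array([[0.0], [-1.0], [0.0]])])
X_true = np.array([0.5, 0.2, 3.0, 1.0])
obs = [(P @ X_true)[:2] / (P @ X_true)[2] for P in (P0, P1, P2)]
print(triangulate([P0, P1, P2], obs))   # recovers approximately [0.5, 0.2, 3.0]
```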


Microsoft Azure Kinect

The Azure Kinect DK integrates a Microsoft-designed 1-megapixel Time-of-Flight (ToF) depth camera using the image sensor presented at ISSCC 2018. The depth camera supports several modes (NFOV and WFOV) at different resolutions (up to 1024x1024) and capture rates (5-30 Hz). The device exports 3D key points.
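
As a rough illustration of how a depth pixel relates to a 3D point, the sketch below applies the standard pinhole back-projection; the intrinsic parameters are placeholders rather than the device's calibrated values, which are normally obtained from the sensor via the SDK.

```python
# Standard pinhole back-projection: turn a depth pixel into a 3D point in the
# camera frame. Intrinsics below are placeholder values, not Azure Kinect
# calibration data.
import numpy as np

FX, FY = 505.0, 505.0      # focal lengths in pixels (placeholder)
CX, CY = 512.0, 512.0      # principal point for a 1024x1024 image (placeholder)

def depth_pixel_to_point(u, v, depth_m):
    """Back-project pixel (u, v) with depth in metres to camera coordinates."""
    x = (u - CX) * depth_m / FX
    y = (v - CY) * depth_m / FY
    return np.array([x, y, depth_m])

print(depth_pixel_to_point(700.0, 400.0, 2.5))  # a point about 2.5 m in front of the camera
```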


Xsens

The Xsens system (MTw Awinda), used with the standard Xsens sensor placement, is based on miniature MEMS inertial sensor technology. It consists of six MTw Awinda wireless 3DoF motion trackers that communicate with a server to reconstruct the joint angles of an articulated subject. The system exports data in .c3d, .bvh, and .mvn formats.
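
The basic idea behind reconstructing joint angles from segment orientations can be sketched as the relative rotation between a parent and a child segment; the quaternions below are illustrative placeholders rather than real MTw Awinda output.

```python
# Sketch of IMU-based joint angle estimation: each tracker reports its
# segment orientation, and the joint angle is the relative rotation between
# the parent and child segments, expressed here as Euler angles.
import numpy as np
from scipy.spatial.transform import Rotation as R

def joint_angles_deg(parent_quat, child_quat):
    """Relative rotation of the child w.r.t. the parent, as XYZ Euler angles."""
    r_parent = R.from_quat(parent_quat)   # quaternions in (x, y, z, w) order
    r_child = R.from_quat(child_quat)
    r_joint = r_parent.inv() * r_child
    return r_joint.as_euler("xyz", degrees=True)

# Example: the "child" segment (e.g. forearm) is flexed 30 degrees about x
# relative to the "parent" segment (e.g. upper arm).
upper_arm = R.from_euler("z", 10, degrees=True).as_quat()
forearm = (R.from_euler("z", 10, degrees=True) * R.from_euler("x", 30, degrees=True)).as_quat()
print(joint_angles_deg(upper_arm, forearm))   # approximately [30, 0, 0]
```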