Advanced VR Cybersickness Reduction Through Real-Time Phase Processing
Welcome to our graduation project, which addresses one of the most significant challenges in virtual reality: cybersickness. By leveraging phase-based motion processing, we have developed a real-time solution that reduces motion perception in VR environments, creating a more comfortable and immersive experience for users.
Virtual reality cybersickness affects millions of users worldwide, limiting the adoption and enjoyment of VR technologies. Our solution uses phase-based image processing to generate counter-motion images that trick the brain into perceiving reduced motion, significantly decreasing cybersickness symptoms.
Implemented shader-based processing in Unity for low-latency, per-frame application
Utilized FFT-based phase analysis in YIQ color space
Created a working VR system that demonstrably reduces cybersickness
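To make the YIQ-space phase analysis above concrete, here is a minimal numpy-only sketch. The conversion matrix is the standard NTSC RGB-to-YIQ transform; `rgb_to_yiq` and `phase_of_luminance` are illustrative helpers, not the project's actual shader code:

```python
import numpy as np

# Standard NTSC RGB -> YIQ conversion matrix. Phase analysis is run on
# the luminance (Y) channel, mirroring the pipeline described above.
RGB_TO_YIQ = np.array([
    [0.299,  0.587,  0.114],
    [0.596, -0.274, -0.322],
    [0.211, -0.523,  0.312],
])

def rgb_to_yiq(img):
    """Convert an HxWx3 RGB image (floats in [0, 1]) to YIQ."""
    return img @ RGB_TO_YIQ.T

def phase_of_luminance(img_rgb):
    """Return the per-frequency FFT phase of the luminance channel."""
    y = rgb_to_yiq(img_rgb)[..., 0]
    return np.angle(np.fft.fft2(y))

# Tiny demo: the phase difference between a frame and a shifted copy
# encodes the motion between them.
frame = np.random.rand(8, 8, 3)
shifted = np.roll(frame, 1, axis=1)  # simulate 1-pixel horizontal motion
dphase = phase_of_luminance(shifted) - phase_of_luminance(frame)
```

In the real pipeline this phase signal would be bandpass-filtered before manipulation; the sketch only shows where the phase comes from.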
Cybersickness in virtual reality environments is a persistent problem that affects user comfort and limits VR adoption. Traditional approaches often involve hardware modifications or basic software adjustments that don't address the fundamental perceptual issues causing motion sickness.
Inspired by the groundbreaking work in phase-based motion magnification by Wadhwa et al., we developed an innovative approach that processes visual information in real-time to reduce perceived motion. Instead of magnifying subtle motions as in the original research, we apply inverse processing to create counter-motions that reduce the brain's perception of movement.
Our approach is grounded in the principle that phase variations in complex-valued image representations correspond directly to motion. By analyzing and manipulating these phase variations, we can influence motion perception without traditional optical flow computation.
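This principle can be illustrated with a simplified global (non-pyramid) sketch: treat the per-frequency phase change between consecutive frames as the motion signal, and scale it down before resynthesis. Phase-based motion magnification would use a gain greater than one; the inverse application shrinks it. The function name and the attenuation factor `alpha` are hypothetical, and this stand-in omits the steerable-pyramid decomposition used in the actual system:

```python
import numpy as np

def attenuate_motion(prev, curr, alpha=0.5):
    """Shrink the per-frequency phase change between two grayscale frames.

    alpha=1 reproduces curr unchanged; alpha=0 freezes motion at prev.
    Magnification (as in Wadhwa et al.) would instead use alpha > 1.
    """
    F_prev = np.fft.fft2(prev)
    F_curr = np.fft.fft2(curr)
    # Wrapped phase change per frequency, in (-pi, pi].
    dphi = np.angle(F_curr * np.conj(F_prev))
    # Resynthesize with an attenuated phase change, keeping the
    # current frame's spectral magnitudes.
    F_out = np.abs(F_curr) * np.exp(1j * (np.angle(F_prev) + alpha * dphi))
    return np.real(np.fft.ifft2(F_out))
```

For a purely translated frame, `alpha=0` recovers the previous frame exactly, which is the sense in which phase manipulation substitutes for explicit optical flow computation.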
Objective: Validate the theoretical approach using modified existing algorithms
Objective: Achieve real-time performance suitable for VR applications
To objectively measure the efficacy of our motion suppression algorithms, we conducted a comprehensive dense optical flow analysis. This technique computes motion vectors for each pixel between consecutive frames, allowing us to quantify the total magnitude of visual motion presented to the user. We compared our two primary approaches—a direct FFT method with bandpass filtering (Non-Pyramid) and a Steerable Pyramid method—against the unprocessed baseline footage. The results, summarized in Table 1, demonstrate a profound reduction in motion across all tested configurations.
Table 1: Comparative Optical Flow Analysis of Motion Suppression Techniques
The primary finding is that all tested methods were highly effective, achieving a motion magnitude reduction between 70% and 73%. This is a substantial decrease in optical flow, providing strong quantitative evidence that the core phase-manipulation technique successfully suppresses the visual motion signals responsible for the sensory conflict that causes cybersickness.
Numerically, the 4-level Steerable Pyramid approach (frequency band 0.191-0.45) achieved the highest raw motion reduction, at 73.3%. However, as discussed next, this marginal numerical edge came at a significant cost to visual quality and performance stability. The non-pyramid methods delivered a robust, stable reduction of ~71%, making them highly viable candidates.
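The evaluation above used dense per-pixel flow (typically computed with OpenCV's Farneback method). As a self-contained illustration of quantifying frame-to-frame motion, here is a numpy-only global phase-correlation estimator; it recovers a single dominant translation per frame pair rather than a dense field, and `mean_motion_magnitude` is a hypothetical helper, not the project's evaluation code:

```python
import numpy as np

def mean_motion_magnitude(frames):
    """Average per-frame motion via global phase correlation.

    A lightweight stand-in for dense optical flow: for each consecutive
    pair of grayscale frames, the normalized cross-power spectrum peaks
    at the dominant translation between them.
    """
    mags = []
    for a, b in zip(frames, frames[1:]):
        Fa, Fb = np.fft.fft2(a), np.fft.fft2(b)
        cross = Fa * np.conj(Fb)
        corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12))
        peak = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
        # Interpret the peak location as a signed circular shift.
        dy = peak[0] if peak[0] <= a.shape[0] // 2 else peak[0] - a.shape[0]
        dx = peak[1] if peak[1] <= a.shape[1] // 2 else peak[1] - a.shape[1]
        mags.append(np.hypot(dy, dx))
    return float(np.mean(mags))
```

The reduction percentages in Table 1 follow from such magnitudes as `100 * (1 - processed / baseline)`.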
Supervisor
Student ID: 2200356034
Student ID: 2210356146
Our work builds upon the phase-based motion magnification research of Wadhwa et al. at MIT. Their approach to revealing imperceptible motions through phase manipulation inspired us to explore the inverse application: reducing perceived motion to minimize cybersickness.