Phase-Based Motion Manipulation

Advanced VR Cybersickness Reduction Through Real-Time Phase Processing

Welcome to our innovative graduation project that addresses one of the most significant challenges in virtual reality: cybersickness. By leveraging advanced phase-based motion processing techniques, we have developed a real-time solution that reduces motion perception in VR environments, creating a more comfortable and immersive experience for users.

Project Overview

Virtual reality cybersickness affects millions of users worldwide, limiting the adoption and enjoyment of VR technologies. Our solution uses phase-based image processing to generate counter-motion images that trick the brain into perceiving reduced motion, significantly decreasing cybersickness symptoms.

Key Achievements

Real-time Performance

Implemented GPU shader-based processing in Unity so the effect runs within the VR frame budget without adding perceptible latency

Advanced Signal Processing

Utilized FFT-based phase analysis in YIQ color space

Practical Application

Created a working VR system that measurably reduces visual motion, the primary driver of cybersickness

The Challenge

Cybersickness in virtual reality environments is a persistent problem that affects user comfort and limits VR adoption. Traditional approaches often involve hardware modifications or basic software adjustments that don't address the fundamental perceptual issues causing motion sickness.

Our Solution

Inspired by the groundbreaking work in phase-based motion magnification by Wadhwa et al., we developed an innovative approach that processes visual information in real-time to reduce perceived motion. Instead of magnifying subtle motions as in the original research, we apply inverse processing to create counter-motions that reduce the brain's perception of movement.

Methodology & Technical Approach

Theoretical Foundation

Our approach is grounded in the principle that phase variations in complex-valued image representations correspond directly to motion. By analyzing and manipulating these phase variations, we can influence motion perception without traditional optical flow computation.
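This link between phase and motion can be checked numerically: translating a signal multiplies each Fourier coefficient by a linear phase ramp, leaving magnitudes untouched. A small NumPy sketch (illustrative only, not part of the project code):

```python
import numpy as np

# Shifting a signal by d samples changes each Fourier coefficient's
# phase by -2*pi*k*d/N while leaving its magnitude unchanged -- the
# property that lets phase edits stand in for motion edits.
N, d = 64, 3
rng = np.random.default_rng(0)
x = rng.standard_normal(N)
shifted = np.roll(x, d)  # shifted[n] = x[n - d]

F_x, F_s = np.fft.fft(x), np.fft.fft(shifted)
measured = np.angle(F_s * np.conj(F_x))   # phase delta per frequency bin
k = np.arange(N)
expected = -2 * np.pi * k * d / N         # linear phase ramp
# Compare modulo 2*pi; the residual is pure floating-point round-off.
wrap_err = np.angle(np.exp(1j * (measured - expected)))
print(np.max(np.abs(wrap_err)))
```

The measured per-bin phase deltas match the predicted ramp to machine precision, while the magnitude spectra of the two frames are identical.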

Processing Pipeline

1. Image Acquisition
  • Capture the current and previous frames from the Unity VR environment
  • Convert from RGB to YIQ color space for luminance-focused processing
  • Isolate the luma (Y) channel for phase analysis

2. Frequency Domain Analysis
  • Apply the Fast Fourier Transform (FFT) to the luma channels
  • Extract phase information from the complex frequency representations
  • Calculate the phase delta: Δφ = φ_previous − φ_current

3. Phase Manipulation
  • Apply band-pass filtering to discard irrelevant frequency components
  • Apply controlled attenuation (or magnification) to the target phases
  • Maintain spatial coherence while reducing perceived motion

4. Reconstruction
  • Reconstruct the processed frames with the inverse FFT
  • Convert back to RGB color space
  • Output to the VR display system in real time
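A minimal single-scale NumPy sketch of the four pipeline steps (the actual system runs as HLSL shaders in Unity; the band limits `lo`/`hi` and the attenuation factor `alpha` below are illustrative placeholders, not the project's tuned values):

```python
import numpy as np

# NTSC RGB -> YIQ matrix; only the luma (Y) channel is phase-processed.
RGB2YIQ = np.array([[0.299,  0.587,  0.114],
                    [0.596, -0.274, -0.322],
                    [0.211, -0.523,  0.312]])
YIQ2RGB = np.linalg.inv(RGB2YIQ)

def suppress_motion(prev_rgb, curr_rgb, alpha=0.5, lo=0.05, hi=0.45):
    """Attenuate frame-to-frame phase change within a frequency band.

    alpha in [0, 1) pulls in-band phases part of the way back toward
    the previous frame (alpha > 1 would magnify motion instead, as in
    Wadhwa et al.).
    """
    prev_y = (prev_rgb @ RGB2YIQ.T)[..., 0]
    curr_yiq = curr_rgb @ RGB2YIQ.T

    F_prev = np.fft.fft2(prev_y)
    F_curr = np.fft.fft2(curr_yiq[..., 0])

    # Phase delta as defined in the pipeline: phi_previous - phi_current.
    dphi = np.angle(F_prev * np.conj(F_curr))

    # Band-pass mask on normalized radial frequency.
    fy = np.fft.fftfreq(F_curr.shape[0])[:, None]
    fx = np.fft.fftfreq(F_curr.shape[1])[None, :]
    r = np.hypot(fy, fx)
    band = (r >= lo) & (r <= hi)

    # Rotate in-band phases partway back toward the previous frame.
    F_out = np.where(band, F_curr * np.exp(1j * (1 - alpha) * dphi), F_curr)

    out_yiq = curr_yiq.copy()
    out_yiq[..., 0] = np.fft.ifft2(F_out).real
    return np.clip(out_yiq @ YIQ2RGB.T, 0.0, 1.0)
```

With two identical frames the phase delta is zero everywhere, so the output reproduces the input; increasing `alpha` toward 1 likewise passes the current frame through unchanged.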

Implementation Details

Development Phases

Phase 1: Proof of Concept

Objective: Validate the theoretical approach using modified existing algorithms

Approach:
  • Modified code from established phase-based motion magnification implementations
  • Processed Unity-captured frames through external processing pipeline
  • Implemented basic frame-to-frame phase analysis
Challenges:
  • Significant processing latency (>10,000 ms)
  • Memory bottlenecks in data transfer
  • Insufficient frame rate for real-time VR applications

Phase 2: Real-Time Implementation (GPU-based)

Objective: Achieve real-time performance suitable for VR applications

Technical Specifications:
  • Platform: Unity 3D with custom HLSL shaders
  • Processing Target: 30+ FPS for smooth VR experience
  • Latency Requirement: <20ms total processing time
  • Resolution Support: Up to 2160x1200 per eye

Before & After Comparison

[Interactive before/after comparison sliders; drag the handle in each viewer to compare the unprocessed and processed footage.]

Comparison 1: Unprocessed vs. non-pyramid (0.5 lower bound)

Comparison 2: Unprocessed vs. non-pyramid (0.6 lower bound)

Comparison 3: Unprocessed vs. steerable pyramid (level 3, 191)

Comparison 4: Unprocessed vs. steerable pyramid (level 3, 170)

Comparison 5: Unprocessed vs. steerable pyramid (level 2, 170)

Results & Discussion

To objectively measure the efficacy of our motion suppression algorithms, we conducted a comprehensive dense optical flow analysis. This technique computes motion vectors for each pixel between consecutive frames, allowing us to quantify the total magnitude of visual motion presented to the user. We compared our two primary approaches—a direct FFT method with bandpass filtering (Non-Pyramid) and a Steerable Pyramid method—against the unprocessed baseline footage. The results, summarized in Table 1, demonstrate a profound reduction in motion across all tested configurations.
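The study's evaluation used dense per-pixel optical flow. As a compact, self-contained illustration of how inter-frame motion can be quantified in the frequency domain (not the project's actual evaluation code), global phase correlation recovers a translation from the normalized cross-power spectrum:

```python
import numpy as np

def phase_correlation_shift(prev, curr):
    """Estimate the global (dy, dx) translation from prev to curr."""
    # Normalized cross-power spectrum; its inverse FFT peaks at the shift.
    R = np.fft.fft2(curr) * np.conj(np.fft.fft2(prev))
    R /= np.abs(R) + 1e-12
    corr = np.fft.ifft2(R).real
    peak = np.array(np.unravel_index(np.argmax(corr), corr.shape), float)
    # Map wrapped peak indices to signed shifts.
    for i, n in enumerate(corr.shape):
        if peak[i] > n // 2:
            peak[i] -= n
    return peak
```

For a frame and a circularly shifted copy of itself, the estimator returns the shift exactly; dense methods such as Farnebäck's extend this idea to a per-pixel motion field, whose summed magnitude is the metric reported in Table 1.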

Table 1: Comparative Optical Flow Analysis of Motion Suppression Techniques

Discussion

The primary finding is that all tested methods were highly effective, achieving a motion magnitude reduction between 70% and 73%. This is a substantial decrease in optical flow, providing strong quantitative evidence that the core phase-manipulation technique successfully suppresses the visual motion signals responsible for the sensory conflict that causes cybersickness.
Numerically, the 4-level steerable pyramid configuration (0.191–0.45 frequency band) achieved the highest raw motion reduction at 73.3%. However, as discussed next, this marginal numerical edge came at a significant cost in visual quality and performance stability. The non-pyramid methods delivered a robust, stable reduction of roughly 71%, making them highly viable candidates.
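The quoted reduction figures follow directly from summed flow magnitudes (the magnitudes below are hypothetical, chosen only to reproduce the reported percentage):

```python
def motion_reduction_pct(baseline, processed):
    # Percentage drop in summed optical-flow magnitude vs. the baseline.
    return 100.0 * (baseline - processed) / baseline

# Hypothetical totals: a processed flow magnitude of 26.7 against a
# baseline of 100.0 corresponds to the reported 73.3% reduction.
print(round(motion_reduction_pct(100.0, 26.7), 1))  # 73.3
```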

Our Team

Assoc. Prof. Dr. Ufuk Çelikcan

Supervisor

Kenan Gökdeniz Acet

Student ID: 2200356034

Emre Can Şahin

Student ID: 2210356146

Research Inspiration

Our work builds upon the groundbreaking research in phase-based motion magnification by Wadhwa et al. from MIT. Their innovative approach to revealing imperceptible motions through phase manipulation inspired us to explore the inverse application: reducing perceived motion to minimize cybersickness.

Downloads & Resources