PhysMaster: Mastering Physical Representation for Video Generation via Reinforcement Learning
We propose PhysMaster, which captures physical knowledge as a representation for guiding video generation models to enhance their physics-awareness.

Physical Representation Injection: Building on the image-to-video task, we devise PhysEncoder to encode physical knowledge from the input image as an extra condition injected into the video generation process (a conditioning sketch follows this list).
Representation Learning by RLHF: PhysEncoder leverages generative feedback from the video generation model to optimize its physical representation with Direct Preference Optimization (DPO) in an end-to-end manner.
Training Paradigm: A three-stage training pipeline improves the physics-awareness of PhysEncoder, and thus of the video generation model, and generalizes effectively to diverse physical scenarios governed by different physical principles.
Generic Solution: PhysMaster, which learns physical knowledge via representation learning, can act as a generic solution for physics-aware video generation and has potential for broader applications.
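
To make the injection mechanism concrete, below is a minimal PyTorch sketch of how physical tokens produced by PhysEncoder could be appended to the conditioning sequence of a video DiT. All names, layers, token counts, and dimensions here are illustrative assumptions, not the actual PhysEncoder architecture.

import torch
import torch.nn as nn

class PhysEncoder(nn.Module):
    # Hypothetical encoder: maps the input image to a short sequence of
    # "physical tokens" in the DiT's hidden dimension.
    def __init__(self, backbone_dim=1024, dit_dim=1536, num_phys_tokens=16):
        super().__init__()
        # Stand-in patch embed; a real design would reuse a pretrained
        # vision backbone (e.g., a ViT feature extractor) here.
        self.backbone = nn.Conv2d(3, backbone_dim, kernel_size=16, stride=16)
        self.pool = nn.AdaptiveAvgPool1d(num_phys_tokens)
        self.proj = nn.Linear(backbone_dim, dit_dim)

    def forward(self, image):                    # image: (B, 3, H, W)
        feat = self.backbone(image).flatten(2)   # (B, C, N_patches)
        feat = self.pool(feat).transpose(1, 2)   # (B, num_phys_tokens, C)
        return self.proj(feat)                   # (B, num_phys_tokens, dit_dim)

def build_condition(text_tokens, phys_tokens):
    # Append physical tokens to the text-condition sequence so the DiT's
    # attention layers can attend to them as an extra condition.
    return torch.cat([text_tokens, phys_tokens], dim=1)

Whether the tokens are appended to the text sequence (as here) or fed through dedicated cross-attention is a design choice of the underlying DiT; the point is that PhysEncoder's output enters generation as an extra condition.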

Abstract

Video generation models nowadays are capable of generating visually realistic videos, but they often fail to adhere to physical laws, which limits their ability to generate physically plausible videos and to serve as "world models". To address this issue, we propose PhysMaster, which captures physical knowledge as a representation for guiding video generation models to enhance their physics-awareness. Specifically, PhysMaster is based on the image-to-video task, where the model is expected to predict physically plausible dynamics from the input image. Since the input image provides physical priors such as the relative positions and potential interactions of objects in the scene, we devise PhysEncoder to encode this physical information as an extra condition that injects physical knowledge into the video generation process. The lack of proper supervision on the model's physical performance beyond mere appearance motivates us to apply reinforcement learning with human feedback to physical representation learning: we leverage feedback from the generation model to optimize the physical representation with Direct Preference Optimization (DPO) in an end-to-end manner. PhysMaster provides a feasible solution for improving the physics-awareness of PhysEncoder, and thus of video generation, demonstrating its ability on a simple proxy task and its generalizability to wide-ranging physical scenarios. This implies that PhysMaster, which unifies solutions for various physical processes via representation learning in the reinforcement learning paradigm, can act as a generic, plug-in solution for physics-aware video generation and broader applications.



Three-stage Training Pipeline

We propose a three-stage training pipeline for PhysMaster that enables physical representation learning in PhysEncoder by leveraging generative feedback from the video generation model. The core idea is to formulate DPO for PhysEncoder E_p with a reward signal derived from the generated videos of the pretrained DiT model v_θ, thereby helping it learn physical knowledge.
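
To make this formulation concrete, the sketch below shows how a Diffusion-DPO-style objective could back-propagate preference signal into E_p while the DiT v_θ stays frozen, with a frozen pre-DPO snapshot of E_p serving as the reference policy. This is a minimal sketch under stated assumptions: the rectified-flow noising, the β value, and all function names are illustrative rather than the paper's exact recipe.

import torch
import torch.nn.functional as F

def add_noise(x0, noise, t):
    # Rectified-flow interpolation x_t = (1 - t) * x0 + t * noise, t in [0, 1].
    t = t.view(-1, *([1] * (x0.dim() - 1)))
    return (1.0 - t) * x0 + t * noise

def phys_dpo_loss(dit, enc, enc_ref, image, x_w, x_l, t, noise, beta=500.0):
    # dit:  frozen pretrained video DiT v_theta, called as dit(x_t, t, cond).
    # enc:  trainable PhysEncoder E_p; enc_ref: frozen pre-DPO snapshot of E_p
    #       acting as the reference policy.
    # x_w / x_l: latents of the preferred / dispreferred videos for the image.
    cond = enc(image)                       # gradients flow into E_p only
    with torch.no_grad():
        cond_ref = enc_ref(image)

    def err(cond_tokens, x0):
        # Per-sample flow-matching error of the frozen DiT under a condition.
        x_t = add_noise(x0, noise, t)
        v_pred = dit(x_t, t, cond_tokens)
        v_target = noise - x0
        return ((v_pred - v_target) ** 2).mean(dim=tuple(range(1, x0.dim())))

    e_w, e_l = err(cond, x_w), err(cond, x_l)
    with torch.no_grad():
        e_w_ref, e_l_ref = err(cond_ref, x_w), err(cond_ref, x_l)

    # Preference logit: the winner's error should drop (relative to the
    # reference) more than the loser's; log-sigmoid loss as in DPO.
    logits = -beta * ((e_w - e_w_ref) - (e_l - e_l_ref))
    return -F.logsigmoid(logits).mean()

During training, only enc's parameters would go to the optimizer; the preference pairs (x_w, x_l) come from ranking videos sampled from v_θ itself, which is what makes the feedback "generative".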

Figure 1: Training pipeline of PhysMaster.

Results of Proxy Task

Our work aims to provide a scalable and generalizable methodology for learning physics from targeted data. To demonstrate the effectiveness of PhysMaster, we start by defining a proxy task ("free-fall") governed by simple physical principles and construct domain-specific data for preliminary validation. We compare the physical accuracy of our model with existing works and ablate different training techniques for PhysEncoder.

[Figure 2 video panels: PhysGen | PISA | Ours]

Figure 2: Qualitative comparison with PhysGen and PISA, which are specialized for rigid-body motion, demonstrates our model's advantage in shape consistency and trajectory accuracy on "free-fall".
[Figure 3 video panels: Ours | Base, two examples]

Figure 3: Qualitative ablation of models at different training stages on the real-world test set of "free-fall". Our three-stage training improves the model's ability to preserve objects' rigidity and comply with physical laws (e.g., gravitational acceleration and collision) over the base model.

Results of General Open-world Scenarios

[Figure 4 video panels: CogVideoX-5B | WISA | Wan2.1-I2V-14B | Ours]

Figure 4: Qualitative comparison with baseline video generation models on rigid-body scenarios.
[Figure 5 video panels: CogVideoX-5B | WISA | Wan2.1-I2V-14B | Ours]

Figure 5: Qualitative comparison with baseline video generation models on fluid scenarios.
[Figure 6 video panels: Ours (Stage III) | Ours (Stage I), two examples]

Figure 6: Qualitative ablation of models at different training stages on fluid scenarios. DPO following Stage I improves the physical coherence of the model in Stage III.
[Figure 7 video panels: Ours (Stage III) | Ours (Stage I), two examples]

Figure 7: Qualitative ablation of models at different training stages on rigid-body scenarios. DPO following Stage I improves the physical coherence of the model in Stage III.

Conclusion

We propose PhysMaster, which learns a physical representation from the input image to guide an I2V model toward generating physically plausible videos. We optimize the physical encoder, PhysEncoder, via DPO based on generative feedback from a pretrained video generation model, on both a proxy task and general open-world scenarios. This is shown to enhance the model's physical accuracy and to generalize across various physical processes by injecting physical knowledge into generation, demonstrating PhysMaster's potential as a generic solution for physics-aware video generation and broader applications.

BibTeX

@article{Ji2025physmaster,
  title={{PhysMaster: Mastering Physical Representation for Video Generation via Reinforcement Learning}},
  author={Ji, Sihui and Chen, Xi and Tao, Xin and Wan, Pengfei and Zhao, Hengshuang},
  journal={arXiv preprint arXiv:2510.13809},
  year={2025}
}