What is a noise schedule?
A diffusion model is trained by progressively destroying data and learning to restore it. The noise schedule is the plan for how fast that destruction happens, and how fast the restoration runs in reverse.
Definition
A noise schedule is the mathematical function that defines how much noise is added to a training sample at each timestep during the forward diffusion process, and how that noise is removed during the reverse (generation) process.
During training, the schedule determines the noise level at each of T timesteps. At t=0 the sample is clean. At t=T the sample is pure noise. The noise schedule defines the trajectory between these two states. At inference, the reverse schedule defines how the model steps from t=T back to t=0.
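What makes the schedule concrete is that it lets training jump directly from a clean sample to any timestep in closed form. A minimal sketch in NumPy, assuming the DDPM-style variance-preserving parameterization; the `noisy_sample` helper is illustrative, not from any particular library:

```python
import numpy as np

# Sketch of the DDPM-style forward process, assuming the variance-preserving
# parameterization: x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps.
T = 1000
betas = np.linspace(1e-4, 0.02, T)       # per-step noise variances (the schedule)
alphas_bar = np.cumprod(1.0 - betas)     # cumulative signal retained at each t

def noisy_sample(x0, t, rng=np.random.default_rng(0)):
    """Jump directly from a clean sample x0 to its noised version at timestep t."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1.0 - alphas_bar[t]) * eps

x0 = np.ones(4)                  # toy "clean" sample
print(noisy_sample(x0, 10))      # near t=0: mostly signal
print(noisy_sample(x0, T - 1))   # near t=T: almost entirely noise
```

Because `alphas_bar` decreases monotonically from nearly 1 to nearly 0, the same array describes the whole trajectory from clean sample to pure noise.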
Why the noise schedule matters
The shape of the noise schedule has significant consequences for both training efficiency and generation quality.
A schedule that adds noise too quickly in the early timesteps leaves most timesteps at or near pure noise, so the model spends most of its training capacity learning to denoise from almost nothing. That is less useful than learning the late refinement steps, where most visual detail is determined.
A schedule that adds noise too slowly means the model never sees enough corrupted data to learn robust denoising at high noise levels.
The ideal schedule exposes the model to a useful distribution of noise levels throughout training, producing a model that can handle the full range from coarse structure to fine detail.
Types of noise schedules
Linear schedule: Noise variance increases linearly from t=0 to t=T. Used in the original DDPM formulation (Ho et al., 2020). Simple but suboptimal: images become nearly indistinguishable from pure noise early in the schedule, wasting training capacity on timesteps that carry little learnable signal.
Cosine schedule: Noise follows a cosine curve, adding noise more slowly at the start and end of the schedule and faster in the middle. Introduced by Nichol and Dhariwal (2021). Produces significantly better results than linear for image generation.
Sigmoid schedule: A similar S-shaped curve to cosine but with a different parameterization. Used in some more recent models.
EDM schedule: Karras et al. (2022), in "Elucidating the Design Space of Diffusion-Based Generative Models", proposed a principled approach to noise schedule and sampler design, enabling higher-quality generation in fewer inference steps.
Flow matching schedules: In flow matching models like LTX-2, the schedule defines not a noise level but a position along the interpolation path between noise and data. The key property is that flow matching naturally produces straighter paths, making the "schedule" effectively more efficient than traditional diffusion schedules.
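The difference between the linear and cosine curves is easy to see numerically. A sketch, assuming the original DDPM linear beta range (1e-4 to 0.02) and the Nichol & Dhariwal cosine formula with offset s = 0.008:

```python
import numpy as np

T = 1000
t = np.arange(T + 1)

# Linear schedule: alpha_bar is the cumulative product of (1 - beta_t).
betas = np.linspace(1e-4, 0.02, T)
alpha_bar_linear = np.concatenate([[1.0], np.cumprod(1.0 - betas)])

# Cosine schedule (Nichol & Dhariwal, 2021): alpha_bar follows a squared
# cosine, with a small offset s to avoid singular behavior near t=0.
s = 0.008
f = np.cos((t / T + s) / (1 + s) * np.pi / 2) ** 2
alpha_bar_cosine = f / f[0]

# Signal remaining halfway through the schedule:
print(alpha_bar_linear[T // 2])
print(alpha_bar_cosine[T // 2])
```

Halfway through, the linear curve has already destroyed most of the signal (alpha_bar around 0.08), while the cosine curve still retains roughly half (around 0.5). This is the concrete sense in which the linear schedule wastes capacity on near-pure-noise timesteps.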
Noise schedule at inference
At inference time, the noise schedule determines the sampling trajectory: the sequence of timesteps the model steps through from noise to clean output.
The number of inference steps you choose determines how many points on this trajectory are sampled. More steps produce finer-grained denoising and generally higher-quality output, at the cost of more compute. Fewer steps are faster but can miss detail.
Advanced samplers (DDIM, DPM-Solver, DPM-Solver++) can traverse the same schedule in far fewer steps than naive sampling by taking larger, more accurate steps. These are the samplers you typically want in production.
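One way these samplers take fewer steps is by visiting only a subsequence of the trained timesteps. A sketch of that selection logic; the `sampling_timesteps` name is hypothetical, and even spacing is just one common strategy:

```python
import numpy as np

def sampling_timesteps(num_train_steps, num_inference_steps):
    """Pick an evenly spaced subsequence of the training schedule, high to low.

    This is the timestep-selection idea behind DDIM-style samplers: the model
    was trained on num_train_steps noise levels, but inference visits only a
    small, evenly spaced subset of them.
    """
    stride = num_train_steps // num_inference_steps
    return np.arange(num_train_steps - 1, -1, -stride)[:num_inference_steps]

# 10 timesteps from t=999 down toward t=0 instead of all 1000:
print(sampling_timesteps(1000, 10))
```

The model itself is unchanged; only the trajectory through the schedule gets coarser, which is why the same checkpoint can be sampled at very different step counts.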
How the noise schedule relates to the sampling steps parameter
The sampling steps parameter you set in a generation API is the number of discrete steps the reverse schedule takes. In a standard DDPM, you might need 1000 steps for high quality. DDIM reduces this to 20-50. Flow matching models like LTX-2 can produce strong results in significantly fewer steps because the trajectories are straighter.
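A toy illustration of why straighter paths need fewer steps: if the path from noise to data is exactly straight, a plain Euler sampler recovers the target in very few steps. Here `velocity_fn` stands in for the learned model, and the exact velocity used below is an idealized assumption, not anything a real model outputs:

```python
import numpy as np

def euler_sample(velocity_fn, shape, num_steps, rng=np.random.default_rng(0)):
    """Integrate from pure noise at t=1 to data at t=0 with plain Euler steps."""
    x = rng.standard_normal(shape)              # start at pure noise (t = 1)
    ts = np.linspace(1.0, 0.0, num_steps + 1)
    for t0, t1 in zip(ts[:-1], ts[1:]):
        x = x + (t1 - t0) * velocity_fn(x, t0)  # one Euler step along the path
    return x

# On the straight interpolation path x_t = (1-t)*target + t*noise, the exact
# velocity at (x, t) is (x - target) / t, so Euler integration is error-free
# and even 4 steps land exactly on the target.
target = np.array([3.0, -1.0])
v = lambda x, t: (x - target) / t
print(euler_sample(v, target.shape, num_steps=4))
```

Real learned velocity fields are only approximately straight, so real models still need more than a handful of steps; the point is that the straighter the path, the less the step count matters.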
This is directly relevant to generation cost and speed: fewer steps means faster generation and lower API cost.
The LTX-2.3 prompt guide covers practical step count recommendations for different quality and speed tradeoffs. The specifics of LTX-2.3's schedule are documented in the technical release notes.