STUDY OBJECTIVES: The brain appears to use internal models to interact successfully with its environment via active predictions of future events. Both internal models and the predictions derived from them are based on previous experience. However, it remains unclear how previously encoded information is maintained to support this function, especially in the visual domain. In the present study, we hypothesized that sleep consolidates newly encoded spatio-temporal regularities to improve subsequent predictions.

METHODS: We tested this hypothesis using a novel sequence-learning paradigm designed to dissociate perceptual from motor learning. We recorded behavioral performance and high-density electroencephalography (EEG) in male human participants during initial training and during testing two days later, following an experimental night of sleep (n = 16, with high-density EEG also recorded during the night) or wakefulness (n = 17).

RESULTS: Sleep-dependent behavioral improvements correlated with sleep-spindle activity specifically over occipital cortices. Moreover, event-related potential (ERP) responses indicated a shift of attention away from predictable toward unpredictable sequences after sleep, consistent with enhanced automaticity in the processing of predictable sequences.

CONCLUSIONS: These findings suggest a sleep-dependent improvement in the prediction of visual sequences, likely related to visual cortex reactivation during sleep spindles. Because the controls in our experiments did not fully exclude oculomotor contributions, future studies will need to determine the extent to which these effects depend on purely perceptual versus oculomotor sequence learning.