In 3D echocardiography (3D echo), image orientation varies with the position and direction of the transducer during examination. As a result, when reviewing images the user must first identify anatomical landmarks to establish the image orientation, a potentially challenging and time-consuming task. We automated this initial step by training a deep residual neural network (ResNet) to predict the rotation required to re-orient an image to the standard apical four-chamber view. Three data pre-processing strategies were explored: 2D, 2.5D and 3D. Three loss function strategies were investigated: classification of discrete integer angles, regression with mean absolute angle error loss, and regression with geodesic loss. We then integrated the model into a virtual reality application and aligned the re-oriented 3D echo images with a standard anatomical heart model. The most accurate deep learning strategy, 2.5D classification of discrete integer angles, achieved a mean absolute angle error of 9.0° on the test set. This work demonstrates the potential of artificial intelligence to support visualisation and interaction in virtual reality.
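To make the third loss strategy concrete, the sketch below shows one common way a geodesic loss between predicted and ground-truth rotations could be implemented in PyTorch, using the angle of the relative rotation between two 3×3 rotation matrices. This is a minimal illustration under assumed tensor shapes and function names; it is not the authors' published code.

```python
import torch

def geodesic_loss(R_pred, R_true, eps=1e-7):
    """Mean geodesic distance (in radians) between batches of 3x3 rotation matrices.

    The geodesic distance between rotations R1 and R2 is
    arccos((trace(R1^T R2) - 1) / 2), i.e. the angle of the relative rotation.
    Inputs are assumed to have shape (batch, 3, 3).
    """
    # Relative rotation between prediction and ground truth
    R_rel = torch.matmul(R_pred.transpose(1, 2), R_true)
    # Trace of each 3x3 matrix in the batch
    trace = R_rel.diagonal(dim1=1, dim2=2).sum(-1)
    # Clamp to the valid arccos domain to avoid NaNs from numerical error
    cos_angle = torch.clamp((trace - 1.0) / 2.0, -1.0 + eps, 1.0 - eps)
    return torch.acos(cos_angle).mean()
```

In this formulation the loss is zero only when the predicted and target orientations coincide, and it penalises errors by the true rotational misalignment rather than by differences in individual angle components.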