The image plane is perpendicular to the view direction, and, assuming orthographic projection, the rays are parallel to the view direction. This geometry is attractive because parallel rays greatly simplify computation; for this reason, most Direct Volume Rendering algorithms implement orthographic viewing. Perspective viewing, in contrast, creates problems because of ray divergence. Parallel projection also does not mislead the viewer with data warped by the perspective transformation. Furthermore, most data explored in volume visualization do not benefit from the foreshortening cue, because the eye-point is distant compared to the size of the imaged volume; depth cues can instead be provided by lighting or animation.
For a given ray direction, two non-trivial tasks must be performed. First, the voxels intersected by each ray have to be identified. Then, a value has to be computed from the classified volume at each sample location along the ray.
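As a minimal sketch of these two tasks, the loop below steps along one ray at unit intervals and delegates value lookup to a supplied sampling function (the function names and signature are illustrative, not from the source):

```python
def cast_ray(origin, direction, n_samples, sample_fn):
    """Sample one ray at unit intervals through the volume.

    origin, direction: 3-tuples; direction is assumed to be a unit vector,
    so consecutive samples are one voxel width apart.
    sample_fn(x, y, z): a hypothetical callback returning the classified
    value at an arbitrary point, e.g. the nearest voxel's value or an
    interpolated value.
    """
    values = []
    for t in range(n_samples):
        # Point t units along the ray from its origin.
        p = tuple(o + t * d for o, d in zip(origin, direction))
        values.append(sample_fn(*p))
    return values
```

The sampled values would then be composed (e.g. front-to-back or back-to-front) to produce the pixel that the ray belongs to.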
It is straightforward to ray-cast along planes parallel to the faces of a volume when the sampling grid is cubic or rectangular, which is most often the case. Rays sampled at unit intervals then have sample points that fall at the centers of adjacent voxels, so a simple average of the values being composed at the voxel gridpoints is sufficient for the calculation. Back-to-front or front-to-back traversals along each axis direction then give front, back, top, bottom and side views.
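For the axis-aligned case, such a traversal can be sketched as back-to-front compositing of whole slices with the standard over operator. This is a minimal illustration assuming the volume has already been classified into per-voxel colors and opacities (array names are hypothetical):

```python
import numpy as np

def composite_axis_aligned(colors, alphas):
    """Back-to-front composite a classified volume along the z axis.

    colors: (nx, ny, nz) array of pre-classified voxel intensities
    alphas: (nx, ny, nz) array of voxel opacities in [0, 1]
    Returns an (nx, ny) image, one ray per (x, y) gridpoint.
    """
    image = np.zeros(colors.shape[:2])
    # Walk slices from back (largest z) to front, applying the over operator:
    # new = voxel_color * voxel_alpha + (1 - voxel_alpha) * accumulated
    for z in reversed(range(colors.shape[2])):
        image = colors[:, :, z] * alphas[:, :, z] \
            + (1.0 - alphas[:, :, z]) * image
    return image
```

Because the rays coincide with voxel columns, no interpolation is needed here; each ray simply reads one gridpoint per slice.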
However, projecting the volume onto a general image plane is a more complicated process, since the plane is not perpendicular to any volume axis. To examine a volume from an arbitrary orientation, resampling and anti-aliasing must be performed. Parallel rays are still sampled at unit intervals, but for an arbitrary ray direction the sample points fall irregularly with respect to the voxel positions. Sample values therefore have to be computed from the voxel gridpoint values by trilinear interpolation, with weights assigned according to the position of the sample point within the voxel. This scheme best takes into account the distances from the sample point to the surrounding gridpoints.
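Trilinear interpolation of a sample point from its eight surrounding gridpoints can be sketched as follows; this is an illustrative helper (not from the source), assuming the sample point lies strictly inside the volume so all eight neighbours exist:

```python
import numpy as np

def trilinear_sample(volume, x, y, z):
    """Sample a scalar volume at a non-grid point by trilinear interpolation.

    Assumes (x, y, z) lies strictly inside the grid, so the 2x2x2
    neighbourhood of gridpoints around the sample point is available.
    """
    x0, y0, z0 = int(np.floor(x)), int(np.floor(y)), int(np.floor(z))
    # Fractional offsets of the sample point inside its voxel; these are
    # the interpolation weights toward the upper gridpoints.
    dx, dy, dz = x - x0, y - y0, z - z0
    # The eight surrounding gridpoint values.
    c = volume[x0:x0 + 2, y0:y0 + 2, z0:z0 + 2]
    # Interpolate along x, then y, then z.
    cx = c[0] * (1 - dx) + c[1] * dx
    cxy = cx[0] * (1 - dy) + cx[1] * dy
    return cxy[0] * (1 - dz) + cxy[1] * dz
```

Each gridpoint's influence falls off linearly with its distance from the sample point along each axis, which is exactly the weighting described above.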
Another approach is to pre-transform the data into the desired orientation, which simplifies the geometry of the subsequent ray casting, since traversals are done along the rows and columns of the transformed volume. A fast algorithm exists that uses special-purpose hardware to perform the transformation as a sequence of shears (Wol90), with resampling done during each shear. Since every voxel within a single shear is shifted by the same constant amount, the resampling reduces to a simple bilinear interpolation.
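The key simplification can be illustrated in 2D: shearing a slice moves every sample in a given row by the same constant offset, so resampling each row needs only a linear blend of two neighbouring samples (the full 3D shear combines two such passes into a bilinear interpolation). The sketch below is an illustrative software version, not the hardware algorithm of (Wol90); samples shifted in from outside the slice are assumed to be zero:

```python
import numpy as np

def shear_x(slice2d, s):
    """Shear a 2D slice along x: row y is shifted by the constant amount s*y.

    Because the shift is constant within a row, each output pixel is a
    linear interpolation between just two neighbouring input samples.
    """
    ny, nx = slice2d.shape
    out = np.zeros_like(slice2d, dtype=float)
    for y in range(ny):
        shift = s * y
        i = int(np.floor(shift))
        f = shift - i                      # fractional part: blend weight
        for x in range(nx):
            x0, x1 = x - i, x - i - 1      # source samples straddling the shift
            v0 = slice2d[y, x0] if 0 <= x0 < nx else 0.0
            v1 = slice2d[y, x1] if 0 <= x1 < nx else 0.0
            out[y, x] = (1 - f) * v0 + f * v1
    return out
```

With s = 0 the slice is returned unchanged, and with integer shifts no blending occurs at all, which is why shear-based resampling is so cheap compared with full trilinear resampling along arbitrary rays.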