Rendering in AMIDE is accomplished using the VolPack volume rendering library. This library is portable and provides true volume rendering (as opposed to the surface rendering used by many other libraries and hardware accelerators).
To start a rendering window, select the "View->Rendering" menu item. A small dialog window will pop up, allowing you to select which objects you'd like rendered, along with some additional options. The first option, "Set values greater than max threshold to zero", lets you strip high-valued voxels out of the rendering process. In general you won't want this, but it can be useful if your data set contains high-valued areas that obscure what you'd like to see. The second option, "Accelerate Rendering", tells VolPack to use a faster method for doing the volume rendering. You will generally want to use this option, as it gives a significant performance improvement (around 10-fold). It does, however, require around 3-fold as much memory as the non-accelerated option, so if you're running out of memory, try rendering without the acceleration. The third option, "Initial opacity functions only density dependent", makes the initial gradient opacity function not contribute to the rendering. This is useful for data sets (e.g. PET) where one is more interested in an accurate view of the data than in a view where gradients in the data set are highlighted.
After hitting "Execute", the program will reslice the data sets and ROI's into a data structure that the VolPack library can handle, and then perform some initial rendering calculations. For data sets, the interpolation type specified for the data set will be used. This whole process will take some time, so be patient. Please also note that, when converting the data set, the data is scaled between the current minimum and maximum threshold, with all data above the current maximum threshold set to the maximum threshold value (or zero, if specified), and all data below the current minimum threshold set to the minimum threshold value. This scaling can be relative to the data set's "Global" maximum and minimum, to the "Per Frame" maximum and minimum, or to maximum and minimum values "Interpolated Between Frames". "Per Slice" scaling does not make sense in the context of volume rendering and is interpreted as "Global" scaling.
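The clamp-and-scale conversion described above can be sketched as follows. This is an illustrative helper only (the function name and signature are hypothetical, not AMIDE's actual code), assuming the data is scaled into a 0-1 range for the renderer:

```python
import numpy as np

def scale_for_rendering(data, t_min, t_max, zero_above_max=False):
    """Clamp data to [t_min, t_max] and scale it to [0, 1], mimicking the
    threshold conversion described above (hypothetical helper)."""
    out = np.clip(data, t_min, t_max).astype(float)
    if zero_above_max:
        # The "Set values greater than max threshold to zero" option
        out[data > t_max] = t_min
    return (out - t_min) / (t_max - t_min)

voxels = np.array([-5.0, 0.0, 50.0, 100.0, 150.0])
print(scale_for_rendering(voxels, 0.0, 100.0))
# values below the minimum clamp to 0, above the maximum clamp to 1
```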
When all this is completed, the rendering window should pop-up. Its use is described below.
The result of the rendering process is presented on the canvas in the center of the window. This canvas accepts user input to change the orientation of the rendering. Button 1 allows rotating about the x and y axes, and button 2 allows rotating about the z axis.
You should notice two slider-type widgets, one above the rendered image and one on the right side. Each is labeled with the axis around which the rendering will be spun when it is changed. Additionally, you should notice a dial widget (labeled 'z'). The dial rotates the rendering about the z axis (which comes out of the plane of the display). Note that the effect of rotations is cumulative.
This will pop up a dialog with a panel for each object being rendered. The available options are described below:
This setting determines whether the rendering returns an image which looks more analogous to an x-ray (the "opacity" setting), or returns an image which looks more like a surface (the "grayscale" setting). The "grayscale" setting does this by specifying a light source, material properties, and using depth cueing.
The color table of each rendered object can be changed here.
This is the most confusing part of rendering, so hang on here. The classification functions map the value in each voxel to how much that voxel should be represented in the final rendered image. On the x axis are the possible values of the different voxels. On the y axis is the opacity that will be given to a voxel based on its value.
Both classification functions have several buttons on the right side of their graphs. The top button allows the classification function to be drawn as a spline. The second button allows the classification function to be drawn as a series of straight lines. Finally, the last button resets the classification function to a straight line.
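The straight-line drawing mode corresponds to piecewise-linear interpolation between the control points on the graph. A minimal sketch of such a classification function (illustrative only, not VolPack's actual API):

```python
def ramp_opacity(value, points):
    """Piecewise-linear classification function: map a voxel value to an
    opacity in [0, 1] by interpolating between (value, opacity) control
    points, like the straight-line mode of the graphs described above.
    (Illustrative sketch; not VolPack's actual interface.)"""
    xs, ys = zip(*points)
    if value <= xs[0]:
        return ys[0]
    if value >= xs[-1]:
        return ys[-1]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if x0 <= value <= x1:
            return y0 + (y1 - y0) * (value - x0) / (x1 - x0)

# Fully transparent below 0.2, ramping up to fully opaque at 0.8:
pts = [(0.0, 0.0), (0.2, 0.0), (0.8, 1.0), (1.0, 1.0)]
print(ramp_opacity(0.5, pts))  # approximately 0.5
```

A ramp like this suppresses background voxels (low values stay transparent) while letting high-valued structures dominate the rendered image.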
There are two classification functions:
Density Dependent: This function determines how opaque each voxel will be based on its current value. In a sense, this is analogous to an x-ray, where the amount of the x-rays absorbed in a structure is related to the density of that structure.
Gradient Dependent: Instead of relating the density of a voxel to its opacity, this function relates the gradient of a voxel (how much the value changes between this voxel and its neighbors) to its opacity. This has the effect of giving added weight to surfaces.
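The "gradient" here is the per-voxel rate of change with respect to its neighbors, which can be approximated with central differences. A sketch of that quantity (VolPack computes this internally; the helper below is only illustrative):

```python
import numpy as np

def gradient_magnitude(volume):
    """Per-voxel gradient magnitude via central differences: the quantity
    that a gradient-dependent classification function maps to opacity.
    (Illustrative sketch; VolPack computes this internally.)"""
    gx, gy, gz = np.gradient(volume.astype(float))
    return np.sqrt(gx**2 + gy**2 + gz**2)

# A small volume with a sharp boundary: the gradient peaks at the surface,
# so a gradient-dependent opacity function highlights that surface.
vol = np.zeros((5, 5, 5))
vol[2:, :, :] = 1.0
grad = gradient_magnitude(vol)
print(grad[0, 2, 2], grad[2, 2, 2])  # interior is 0, boundary is > 0
```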
You can choose between generating a single rendered image (monoscopic), or a stereoscopic image pair. A stereoscopic image pair is a pair of images that have been generated at slightly different angles. When viewed correctly, these two images can be interpreted by the viewer's eyes as a single image containing depth information.
This menu item allows you to export the rendered image to an external image file. The saved format is JPEG.
This causes the movie generation dialog box to pop up. This dialog box is further described below: the section called “Rendering Movie Dialog”.
This causes the rendering parameters dialog box to pop up. This dialog box is described below: the section called “Rendering Parameters Dialog”.
With this drop-down menu, the user can choose between rendering speed and rendering quality. To increase speed, voxels with values either close to zero or close to unity can be counted as completely translucent or completely opaque, respectively. The highest quality setting doesn't use this approximation at all; the lowest quality setting applies it aggressively.
These parameters are used for controlling the results when the "stereoscopic" option has been chosen.
This is the angle offset (in degrees) between a pair of rendered images. Increasing this number will generally give a greater sensation of depth in the image pair. A reasonable value for this parameter is between 2 and 5 degrees. Note that this parameter will be saved between different sessions of the program (not currently done on MS Windows).
Ideally, this should be (roughly) the distance between the two rendered images, and corresponds to the distance between the user's eyes. It is impossible for a person to resolve a stereoscopic pair if the images are farther apart than the person's eyes, since human eyes cannot move independently. While this parameter is specified in millimeters, the actual distance between the pair of images displayed on the monitor depends on the setup of the computer. If the monitor information reported by the operating system is incorrect (usually the case), the "eye width" parameter will not be in true millimeters. Note that this parameter will be saved between different sessions of the program (not currently done on MS Windows).
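The millimeter-to-pixel conversion that makes this parameter monitor-dependent can be sketched as below. The helper and its parameters are hypothetical; the point is that the on-screen offset is only as accurate as the monitor geometry the OS reports:

```python
def eye_width_pixels(eye_width_mm, screen_width_px, screen_width_mm):
    """Convert an "eye width" in millimeters to an on-screen pixel offset.
    The result is only as accurate as the monitor width reported by the
    operating system (hypothetical helper, for illustration only)."""
    return eye_width_mm * screen_width_px / screen_width_mm

# 65 mm eye separation on a monitor reported as 1920 px / 520 mm wide:
print(round(eye_width_pixels(65, 1920, 520)))  # 240
```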
These parameters are only used if the "grayscale" output type has been chosen.
Specifies whether or not depth cueing is used. Depth cueing adds a "fog" that causes more distant voxels to appear less bright.
This is the transparency of the fog at the front of the data set. If this number is greater than 1.0, voxels toward the front of the data set will be brightened; if it is less than 1.0, they will be darkened.
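One common model for this kind of fog is exponential attenuation with depth, scaled by the front transparency factor. The sketch below illustrates the idea under that assumption; it is not necessarily VolPack's exact formula:

```python
import math

def depth_cue(brightness, depth, front_factor, fog_density=1.0):
    """Apply depth cueing: scale a voxel's brightness by the front
    transparency factor, attenuated exponentially with depth (0.0 is the
    front of the data set, 1.0 the back). One common fog model, assumed
    here for illustration; not necessarily VolPack's exact formula."""
    return brightness * front_factor * math.exp(-fog_density * depth)

print(depth_cue(1.0, 0.0, 1.2))        # front voxel brightened: 1.2
print(depth_cue(1.0, 1.0, 1.2) < 1.0)  # deep voxel dimmed: True
```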
How many frames should be in the MPEG1 movie. The MPEG1 movies generated will be set to run at 30 frames/second, so the default of 300 frames will give a ten second movie.
This setting determines how many times the data set will be rotated around the given axis over the course of the movie. The rotation for each frame is done in x->y->z order (rotate on x first, then y, then z).
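The per-frame rotation step implied by this setting is simple arithmetic; for example, one full rotation over the default 300 frames is 1.2 degrees per frame (the helper name below is made up for illustration):

```python
def per_frame_rotation(total_rotations, num_frames):
    """Degrees of rotation applied per movie frame on a given axis
    (hypothetical helper illustrating the setting described above)."""
    return 360.0 * total_rotations / num_frames

print(per_frame_rotation(1, 300))  # 1.2
```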
This option allows a rendered movie to be made over a time period, which is useful for dynamic data sets.
Note that every time a frame boundary in the data set is crossed, the rendering process must reslice and load in a new frame of data. This makes creating a rendered movie over time significantly slower than a movie with just rotations.
Picking "over time" will allow entry of a start and end time for which the data from the data sets should be drawn. With the "over time" option, each second is given equal waiting in terms of how many images from that time period are generated for the output movie.
Picking "over frames" allows entry of a start and end frame (note that this really only makes sense with a single data set). The advantage of "over frames", is that each frame is weighted equally in terms of how many images are generated for the output movie, so for data sets were the dynamics of interest correspond closely to the dynamics of the data set framing sequence, "over frames" may give a more appealing result.
The "over frames smoothed" option is almost the same as "over frames", except that data will be interpolated between frames. This makes for a smoother movie (no jumps) but takes much longer as nearly every movie frame has to be reloaded.