"Normally" they are intended to be processed as part of the forward transformation of stored pixel values in the grayscale rendering pipeline.
BUT, in the XA/XRF case (IODs that include the X-Ray Image Module), if Pixel Intensity Relationship (0028,1040) is present with a value of LOG or DISP, then the Modality LUT is NOT applied to the stored pixel values before windowing; rather it is intended to be used to map pixels backward to linear to X-Ray intensity (for some king of analysis or processing).
See PS 3.3 C.8.7.1.1.2, and also PS 3.4 N.2.1.2.
In the Enhanced XA/XRF objects, this bizarre behavior has been separated out into a Pixel Intensity Relationship LUT, rather than abusing the Modality LUT.
By the way, in general, it can be difficult to decide whether or not to apply the conceptual Modality LUT step before windowing, even if it is specified by Rescale Slope/Intercept values rather than an actual LUT. For example, in MR images to which Philips has added the rescale values, these should not be applied before their window values; likewise in PET images, especially those with GML Units and rescale values to SUV (small decimal numbers), the window values are historically usually in stored pixel values rather than SUVs. Making the correct decision may require comparing the range of possible rescaled output values (across the domain of possible input stored pixel values) with the specific window values, to see if the latter "make sense".
In Presentation States, the full pipeline is specified explicitly, but historically, in the images, this is a mess, apart from CT.
In more recent image objects, the Real World Value Mapping Sequence is used for most of the use cases that previously required a Modality LUT step to report physical units, with the intention of separating the rendering pipeline from the value extraction pipeline. Note also that the window values reported in the user interface can be in physical units, but converted into stored pixel value (identity Modality LUT) "units" for persistence, if appropriate. In some case, the newer objects fix Rescale Slope and Intercept to 1 and 0 respectively to prevent implementers getting creative by adding them, but sometimes there are legitimate reasons to (such as to take advantage of widespread support for them in reporting ROI values, where as there is little support in viewers for the Real World Value Mapping Sequence yet).
Reply from D. Clunie:
The Modality LUTs are pretty screwy.
"Normally" they are intended to be processed as part of the forward
transformation of stored pixel values in the grayscale rendering
pipeline.
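For illustration only, a minimal sketch of that forward order,
assuming numpy and the simple Rescale Slope/Intercept form of the
Modality LUT (the helper names are made up, not from any toolkit):

    import numpy as np

    def apply_modality_lut(stored, slope=1.0, intercept=0.0):
        # Rescale Slope/Intercept form of the Modality LUT step.
        return stored * slope + intercept

    def apply_voi_window(values, center, width):
        # Linear VOI windowing of the (already rescaled) values
        # to an 8-bit display range.
        scaled = (values - (center - 0.5)) / (width - 1.0) + 0.5
        return np.clip(scaled, 0.0, 1.0) * 255.0

    # "Normal" order: Modality LUT first, then windowing, e.g. CT
    # stored values rescaled to Hounsfield units, then a soft
    # tissue window of center 40, width 400.
    stored = np.array([0, 1024, 1064, 4095], dtype=np.int16)
    display = apply_voi_window(apply_modality_lut(stored, 1.0, -1024.0),
                               center=40, width=400)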
BUT, in the XA/XRF case (IODs that include the X-Ray Image Module),
if Pixel Intensity Relationship (0028,1040) is present with a
value of LOG or DISP, then the Modality LUT is NOT applied to the
stored pixel values before windowing; rather it is intended to
be used to map pixel values backward to be linear with X-Ray
intensity (for some kind of analysis or processing).
See PS 3.3 C.8.7.1.1.2, and also PS 3.4 N.2.1.2.
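For illustration only, a sketch of that decision, assuming pydicom
(the helper and file names are made up, not a pydicom API):

    from pydicom import dcmread

    def modality_lut_applies_before_windowing(ds):
        # For IODs with the X-Ray Image Module, LOG or DISP in
        # Pixel Intensity Relationship (0028,1040) means the
        # Modality LUT is not part of the forward rendering
        # pipeline; it maps backward toward linear X-Ray
        # intensity instead.
        relationship = getattr(ds, "PixelIntensityRelationship", None)
        return relationship not in ("LOG", "DISP")

    ds = dcmread("xa_image.dcm")  # hypothetical file name
    if modality_lut_applies_before_windowing(ds):
        pass  # rescale or apply the Modality LUT, then window
    else:
        # Window the stored pixel values directly; the Modality LUT
        # is only for mapping back to linear X-Ray intensity.
        pass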
In the Enhanced XA/XRF objects, this bizarre behavior has been
separated out into a Pixel Intensity Relationship LUT, rather
than abusing the Modality LUT.
By the way, in general, it can be difficult to decide whether or
not to apply the conceptual Modality LUT step before windowing,
even if it is specified by Rescale Slope/Intercept values rather
than an actual LUT. For example, in MR images to which Philips
has added the rescale values, these should not be applied before
applying their window values (which are relative to the stored
pixel values); likewise in PET images, especially those with
Units of GML and rescale values that map to SUV (small decimal
numbers), the window values have historically been specified in
stored pixel values rather than SUVs. Making the correct decision
may require comparing
the range of possible rescaled output values (across the domain of
possible input stored pixel values) with the specific window values,
to see if the latter "make sense".
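For illustration only, that sanity check might be sketched like
this, assuming numpy (the function name and the factor-of-two
tolerance are arbitrary):

    import numpy as np

    def window_makes_sense_as_rescaled(stored, slope, intercept,
                                       center, width):
        # Range of possible rescaled output values across the
        # stored pixel values actually present.
        out_lo = stored.min() * slope + intercept
        out_hi = stored.max() * slope + intercept
        out_span = out_hi - out_lo
        win_lo = center - width / 2.0
        win_hi = center + width / 2.0
        # The window should overlap, and be no wider than a couple
        # of times, the rescaled output range.
        return (width <= 2.0 * out_span
                and win_hi >= out_lo
                and win_lo <= out_hi)

    # PET-like example: rescaled range of roughly 0-10 SUV, but
    # window values in the thousands, so the window is taken to be
    # in stored pixel values.
    stored = np.array([0, 32767], dtype=np.int32)
    window_makes_sense_as_rescaled(stored, 0.0003, 0.0, 4000, 8000)  # False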
In Presentation States, the full pipeline is specified explicitly,
but historically, in the images, this is a mess, apart from CT.
In more recent image objects, the Real World Value Mapping Sequence
is used for most of the use cases that previously required a
Modality LUT step to report physical units, with the intention
of separating the rendering pipeline from the value extraction
pipeline. Note also that the window values reported in the user
interface can be in physical units but then be converted into
stored pixel value (identity Modality LUT) "units" for persistence,
if appropriate. In some cases, the newer objects fix Rescale Slope
and Rescale Intercept to 1 and 0 respectively to prevent
implementers from getting creative by adding them, but sometimes
there are legitimate reasons to do so (such as to take advantage
of widespread support for them in reporting ROI values, whereas
there is little support in viewers for the Real World Value
Mapping Sequence yet).
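For illustration only, a sketch of value extraction through the
Real World Value Mapping Sequence, assuming pydicom, the linear
slope/intercept form of the mapping, and (for brevity) a sequence
at the top level of the dataset rather than inside the functional
groups of an Enhanced object; the helper and file names are made
up:

    from pydicom import dcmread

    def real_world_value(ds, stored_value):
        # Value extraction, independent of whatever the grayscale
        # rendering pipeline does with the same stored values.
        for item in getattr(ds, "RealWorldValueMappingSequence", []):
            first = item.RealWorldValueFirstValueMapped
            last = item.RealWorldValueLastValueMapped
            if first <= stored_value <= last:
                value = (stored_value * item.RealWorldValueSlope
                         + item.RealWorldValueIntercept)
                units = item.MeasurementUnitsCodeSequence[0].CodeMeaning
                return value, units
        return None, None

    ds = dcmread("image_with_rwvm.dcm")  # hypothetical file name
    value, units = real_world_value(ds, 1234)  # arbitrary stored value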