185 Cards in this Set
Film-based imaging has been the workhorse of radiology ever since the discovery of |
x-rays in 1895 |
|
However, what has resolved film's shortcomings? |
digital image processing |
|
Steps in the production of film-based images |
1. Patient is exposed to a predetermined amount of radiation needed to provide diagnostic image quality 2. A latent image is formed on the film, which is subsequently processed by chemicals in a processor to render the image visible 3. The processed image is then ready for viewing by the radiologist, who makes the diagnosis |
|
Limitations of film based imaging |
may result in poor image quality |
|
Too dark IF initial radiation exposure is too |
high - the film is overexposed, the processed image appears too dark, and the radiologist can't make a diagnosis from the image |
|
low - the film is underexposed, the image is too light and cannot be used by the radiologist |
|
As a radiation detector, film screen cannot show differences in tissue contrast that are |
less than 10% |
|
Limitations: Optical range and contrast are |
fixed and limited |
|
Limitations: Requires manual handling for |
archiving and retrieval |
|
Increased radiation exposure results from |
repeated images |
|
Film is not ideal for performing the 3 basic functions of radiography |
1. detection 2. image display 3. image archiving |
|
What property of radiography is the highest of all imaging modalities? |
spatial resolution *main reason radiography has played a significant role in imaging patients throughout the years |
|
Generic digital imaging system: |
1. Data acquisition 2. Image processing 3. Image display/storage/archiving 4. Image communication |
|
Data acquisition refers to |
a systematic method of collecting data from the patient |
|
Data acquisition components |
1. xray tube 2. digital image detector |
|
Data acquisition is the measurement of the |
linear attenuation coefficient of the x-ray beam by digital image detectors |
|
The detectors produce an electronic signal that is converted by |
the analog to digital converter in preparation for processing by the computer |
|
The output signal from the detectors is an |
electrical signal - an analog signal that varies continuously in time |
|
Because a digital computer is used, |
analog signal must be converted into a digital signal (discrete units) for processing by a digital computer |
|
This conversion is performed by an |
ADC - analog-to-digital converter |
|
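As an aside, the ADC's job on these cards can be sketched in Python. This is a minimal, illustrative sketch (the sine wave, sample count, and 256 levels are assumed values, not from the cards): sample a continuous signal at discrete instants, then quantize each sample to an integer level.

```python
import numpy as np

def analog_signal(t):
    """A continuous sine wave standing in for the detector's analog output."""
    return np.sin(2 * np.pi * t)

def adc(signal, t_end=1.0, n_samples=8, n_levels=256):
    """Sample `signal` n_samples times over [0, t_end), then quantize
    each sample to one of n_levels discrete integer values."""
    t = np.arange(n_samples) / n_samples * t_end           # sampling instants
    samples = signal(t)                                    # sampling (still analog values)
    # quantization: map the [-1, 1] range onto integers 0 .. n_levels - 1
    return np.round((samples + 1) / 2 * (n_levels - 1)).astype(int)

digital = adc(analog_signal)
print(digital)  # discrete integers - no longer a continuous function
```

The output is a short list of integers: the discrete units a digital computer can process.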
For projection digital radiography and CT, the data are the |
electron density of tissues, which is related to the linear attenuation coefficient |
|
It is attenuation data that are |
collected for these imaging modalities |
|
Image processing is |
performed by a computer using the binary number system |
|
Image processing uses digital information acquired by the |
ADC during the data acquisition phase and processes it into a format usable for diagnosis |
|
The ADC sends the digital data for |
digital image processing by a digital computer |
|
Image processing takes an |
input digital image and processes it to produce an output digital image by using a binary # system |
|
The binary system operates with base |
2, using only the digits 0 and 1 |
|
These two digits are referred to as |
binary digits or bits - bits not continuous, they are discrete |
|
Computers operate with binary #s |
0 and 1 |
|
These operations can be used to |
1. reduce noise in the output image 2. enhance the sharpness of the input image 3. change the contrast of the input image |
|
Image Display/Storage/Communication |
uses a DAC to convert the processed image into a viewable image on the computer monitor |
|
The output of computer processing, the output digital image, must be |
first converted into an analog signal before it can be displayed on a monitor for viewing by the observer |
|
Information is stored and archived on |
magnetic data carriers (magnetic tapes/disks) and laser optical disks (for retrospective viewing and manipulation) |
|
Information can be sent electronically via |
computer networks to the PACS |
|
History of digital image processing dates back to |
early 1960s, when NASA was developing its lunar and planetary exploration program |
|
Digital image processing is a multidisciplinary subject that includes |
1. physics 2. math 3. engineering 4. computer science |
|
Image Formation and Representation |
1. analog signal 2. digital signal |
|
Analog signal |
1. example- sine wave or a continuous function 2. made up of a comprehensive gray scale |
|
Digital signal |
1. discrete function 2. represented by numbers that can be processed by a computer |
|
Castleman's theory: images are |
a set covering all subjects |
|
Within the set of images there are other subsets |
1. visible images 2. optical images 3. nonvisible physical images 4. mathematical images |
|
Visible images |
paintings, drawings, photographs |
|
Optical images |
holograms |
|
Nonvisible physical images |
temperature, pressure, elevation maps |
|
Mathematical images |
continuous and discrete functions |
|
Castleman noted that |
only the digital images can be processed by the computer |
|
Analog images are |
continuous images, e.g., a black and white photograph of a chest x-ray, because it represents a continuous distribution of light intensity as a function of position on the radiograph |
|
Digital images are |
numerical representations or images of objects |
|
Formation requires a |
digital computer |
|
Data must be in a |
digital format |
|
ADC is crucial in |
converting continuous (analog) signals to digital |
|
Analog processing |
both the input image and output image are analog |
|
Digital processing |
both the input image and output image are discrete |
|
Process |
a series of actions or operations leading to a desired result |
|
Digital image processing |
subjecting numerical representations of objects to a series of operations in order to obtain a desired result |
|
In image processing, it is necessary to convert |
an input image into an output image |
|
In cases where an analog image must be converted into digital data for input to the computer, a |
digitization system is required |
|
CT is based on a |
reconstruction process whereby a digital image is changed into a visible physical image |
|
Image Domains - images can be represented in two domains on the basis of |
how they are acquired |
|
The two image domains? |
1. Spatial location domain 2. Spatial frequency domain |
|
Spatial location domain |
1. images viewed by humans 2. radiography and CT acquire info in spatial location domain |
|
Spatial frequency domain |
MRI acquires info in the spatial frequency domain |
|
Small structures within an object (patient) produce |
high frequencies that represent the detail in the image |
|
Large structures produce |
low frequencies and represent contrast info in the image |
|
Digital image processing can transform |
one image domain into another image domain |
|
Fourier transformation |
mathematical calculation performed by a computer - mathematically rigorous |
|
The fourier transformation converts |
image data from the spatial location domain to the spatial frequency domain, or vice versa |
|
The major reason for doing this is to |
facilitate image processing that can enhance or suppress certain features of the image |
|
The fourier transform converts a function in the |
time domain to a function in frequency domain |
|
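The Fourier-transform cards above can be sketched with NumPy's FFT (the signal here is an assumed toy example, not from the cards): a ripple repeating 8 times across the field shows up as a peak at frequency 8, and the inverse transform recovers the original data exactly.

```python
import numpy as np

n = 64
x = np.arange(n)
signal = np.sin(2 * np.pi * 8 * x / n)    # spatial-domain data: 8 cycles

spectrum = np.fft.fft(signal)             # spatial -> frequency domain
recovered = np.fft.ifft(spectrum).real    # frequency -> spatial domain

peak = int(np.argmax(np.abs(spectrum[: n // 2])))
print(peak)  # 8: the ripple's spatial frequency
```

The round trip being lossless is what lets processing happen in whichever domain is more convenient.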
Fundamental parameters of a digital image's structure |
1. matrix 2. pixels 3. voxels 4. bit depth |
|
A digital image is made up of a 2D array of # called a |
matrix |
|
The matrix consists of |
columns (M) and rows (N) that define small square regions called picture elements or pixels |
|
The size of the image can be described as |
MxNxk bits |
|
When M=N |
the image is square |
|
Matrix size is also sometimes referred to as |
FOV |
|
Generally, the diagnostic digital images are |
rectangular in shape |
|
The operator selects the matrix size by choosing the |
FOV |
|
As matrix size increases, images require |
more processing time, more storage space, and take longer to transmit to remote locations |
|
Pixels |
make up the matrix, generally square |
|
Each pixel contains |
a # (discrete value) that represents a brightness level or tissue characteristic |
|
In CT, these numbers are related to the |
1. atomic number 2. mass density of the imaged tissues |
|
Pixel size= |
FOV/matrix size |
|
The larger the matrix, |
the smaller the pixel size (for the same FOV) and the better the resolution (spatial resolution) |
|
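The pixel size = FOV / matrix size relation on the card above, sketched in Python (the 250 mm FOV is an assumed example value):

```python
# Pixel size shrinks as the matrix grows, for a fixed field of view.
def pixel_size(fov_mm, matrix):
    return fov_mm / matrix

fov = 250.0  # assumed example FOV in mm
for matrix in (256, 512, 1024):
    print(matrix, round(pixel_size(fov, matrix), 3), "mm")
# larger matrix, same FOV -> smaller pixels -> better spatial resolution
```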
Voxel |
contraction for volume element |
|
Voxel - pixels that are representing |
information contained in a volume of tissue |
|
Such volume is referred to as a |
voxel |
|
Voxel info is converted into |
numerical values contained in the pixels and these # are assigned brightness levels |
|
The higher # represent |
high signal intensity from the detectors - shaded white (bright) |
|
The lower # represent |
low signal intensity - shaded dark (black) |
|
Bit depth |
number of bits per pixel |
|
Bit depth is represented by |
"k bits" in the formula M x N x k bits |
|
Number of gray levels = |
2^k - each pixel can display 2^k gray levels (densities) |
|
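The bit-depth arithmetic above in Python (the 512 x 512 x 12-bit image is an assumed example):

```python
# Bit depth k gives 2**k gray levels per pixel; total image size is M x N x k bits.
def gray_levels(k):
    return 2 ** k

def image_size_bits(m, n, k):
    return m * n * k

print(gray_levels(8))                 # 256 shades for an 8-bit pixel
print(gray_levels(12))                # 4096 shades for a 12-bit pixel
print(image_size_bits(512, 512, 12))  # bits for an example 512x512, 12-bit image
```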
Matrix size has an effect on the |
detail or spatial resolution of the image |
|
What can affect the spatial resolution and density resolution of an image? |
1. matrix size 2. pixel size 3. bit depth |
|
Larger matrix |
smaller pixel size, improved spatial resolution |
|
FOV decreases |
smaller pixel size, improved spatial resolution |
|
Increase bit depth |
increase contrast resolution |
|
Image digitization - primary objective |
to convert an analog image into numerical data for processing by a computer |
|
Image digitization consists of 3 distinct steps |
1. scanning 2. sampling 3. quantization |
|
1st step |
scanning |
|
Scanning |
the picture is divided into small regions (pixels) arranged in rows and columns (the matrix) |
|
The matrix allows |
identification of each pixel by providing an address for that pixel |
|
Increase the # of pixels in the image matrix and the |
image becomes more recognizable and facilitates better perception of image detail |
|
Each small region of the picture is a |
picture element (pixel) |
|
Scanning results in a |
grid characterized by rows and columns |
|
Size of the grid depends on the |
# of pixels on each side of the grid |
|
2nd step |
sampling |
|
Sampling |
the brightness of each pixel in the entire image is measured |
|
A small spot of light is projected onto the transparency and the transmitted light is |
detected by a photomultiplier tube, which outputs an electrical (analog) signal |
|
The output of the photomultiplier tube is an |
electrical (analog) signal |
|
final step |
Quantization |
|
Quantization |
electrical signal obtained from sampling is assigned an integer (0 or a positive/negative #) proportional to the strength of that signal |
|
The result is each pixel being assigned a |
gray level ranging from 0 to 255, placed on a rectangular grid |
|
Number 0 representing |
black |
|
Number 255 representing |
white |
|
Numbers 1-254 representing a shade |
of gray |
|
The gray scale is based on the |
number of gray levels |
|
The result of quantization |
1. a digital image 2. an array of # representing the analog image that was scanned, sampled, quantized |
|
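The three digitization steps (scanning, sampling, quantization) can be sketched on a toy "analog image" (the brightness function and 8x8 matrix are assumed for illustration):

```python
import numpy as np

def brightness(x, y):
    """Continuous analog brightness as a function of position (toy example)."""
    return (np.sin(x) + np.cos(y) + 2) / 4   # values in [0, 1]

n = 8                                          # scanning: divide into an 8x8 matrix
ys, xs = np.mgrid[0:n, 0:n] * (2 * np.pi / n)  # pixel addresses (rows, columns)
samples = brightness(xs, ys)                   # sampling: measure each pixel's brightness
digital = np.round(samples * 255).astype(int)  # quantization: assign integers 0-255

print(digital.shape)  # (8, 8) - an array of #s representing the analog image
```

The result matches the card above: a digital image is an array of numbers from an image that was scanned, sampled, and quantized.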
Analog to digital conversion |
responsible for converting analog signals to digital information |
|
2 important characteristics of the ADC |
1. speed 2. accuracy |
|
Speed |
the time taken to digitize the analog signal; inversely proportional to accuracy - the greater the accuracy, the longer the digitization process |
|
Accuracy |
the more samples taken, the more accurate the representation of the digital image |
|
Too few samples will result in |
aliasing artifacts |
|
Accuracy refers to the |
sampling of the signal |
|
Aliasing artifacts appear as |
Moire pattern on the image |
|
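A small numerical illustration of aliasing from too few samples (the 9-cycle signal and 10 samples are assumed values): the under-sampled high frequency becomes indistinguishable from a low frequency, which is how Moire-like patterns arise.

```python
import numpy as np

n = np.arange(10)                      # only 10 samples - too few for 9 cycles
fast = np.sin(2 * np.pi * 9 * n / 10)  # under-sampled high-frequency signal
slow = np.sin(2 * np.pi * 1 * n / 10)  # the low-frequency alias it collapses onto

print(np.allclose(fast, -slow))        # True: the sample sets are identical
```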
Why digitize images? |
1. image enhancement 2. image restoration 3. image analysis 4. image compression 5. image synthesis |
|
Image enhancement |
the purpose is to generate an image that is more pleasing to the observer |
|
Image restoration |
the purpose is to improve the quality of the images that have distortions/degradations |
|
Image analysis |
allows measurements and statistics to be performed |
|
Image compression |
the purpose is to reduce the size of the image to decrease transmission time and reduce storage space |
|
Image synthesis |
create images from other images or non-image data |
|
Image processing techniques are based on three types of operations |
1. point operations 2. local operations 3. global operations |
|
Point operations |
1. gray level mapping 2. histogram modification |
|
Local operations |
1. area processes/group processes 2. spatial frequency filtering |
|
Global operations |
fourier transform - entire input image is used to compute the value of the pixel into the output image; uses filtering in the frequency domain rather than space domain |
|
Alternate image processing technique |
Geometric operations |
|
Geometric operations |
changes the position (spatial position or orientation) of the pixel |
|
Geometric operations result in |
the scaling and sizing of the images and image rotation/translation |
|
Gray level mapping |
uses a LUT (lookup table), which plots the output gray levels against the input gray levels |
|
Gray level mapping changes the |
brightness of the image |
|
Gray level mapping results in the |
enhancement of the display image |
|
Gray level mapping results in a |
modification of the histogram of the pixel values |
|
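Gray-level mapping with a LUT, sketched in Python (the brightening LUT and toy 2x2 image are assumed examples): each input gray level indexes the table, and the table supplies the output gray level, changing the displayed brightness.

```python
import numpy as np

# LUT: output gray level for each of the 256 input levels.
# This one brightens by adding 50, clipped to the 0-255 range.
lut = np.clip(np.arange(256) + 50, 0, 255)

image = np.array([[0, 10], [100, 250]])  # toy input image
brighter = lut[image]                    # apply the mapping pixel by pixel

print(brighter)  # [[ 50  60] [150 255]]
```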
Histogram |
a graph of the pixels plotted as a function of the gray level |
|
Histogram created by |
observing the image matrix and creating a table of the # of pixels with a specific intensity value |
|
Histogram - plotting a graph of the |
# of pixels versus the gray levels |
|
A histogram indicates the overall |
brightness and contrast of an image |
|
Histogram modification |
technique of modifying the histogram causing the brightness and contrast of the image to be modified |
|
Wide histogram results in |
high contrast |
|
Narrow histogram results in |
low contrast |
|
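The histogram cards above in NumPy (the two pixel-value sets are assumed toy data): a wide spread of gray levels means high contrast, a narrow cluster means low contrast.

```python
import numpy as np

wide = np.array([0, 60, 120, 180, 240, 255])       # levels span the full range
narrow = np.array([118, 120, 121, 122, 124, 125])  # levels cluster together

# histogram: # of pixels at each gray level (256 bins, 0-255)
hist_wide, _ = np.histogram(wide, bins=256, range=(0, 256))
hist_narrow, _ = np.histogram(narrow, bins=256, range=(0, 256))

print(wide.max() - wide.min())      # 255: wide histogram -> high contrast
print(narrow.max() - narrow.min())  # 7: narrow histogram -> low contrast
```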
An image with low-range pixel values appears |
dark |
|
An image with high-range pixel values appears |
bright |
|
Local operations |
image processing operation in which the output image pixel value is determined from a small area of pixels around the corresponding input pixel |
|
Example of local operations |
spatial frequency filtering |
|
Spatial frequency filtering |
1. high spatial frequencies 2. low spatial frequencies |
|
High spatial frequency |
brightness of an image changes rapidly with distance in the horizontal/vertical direction |
|
An image with smaller pixels has |
higher frequency info than an image with larger pixels |
|
Low spatial frequency |
brightness changes slowly or at a constant rate |
|
Spatial location filtering: Convolution |
the value of the output pixel depends on a group of pixels in the input image that surround the input pixel of interest - pixel P5 |
|
Convolution is a general purpose algorithm that is a technique of |
filtering in the space domain |
|
The new value is a |
weighted average |
|
Convolution kernel |
each pixel in the kernel is a weighting factor or convolution coefficient; a typical kernel size is a 3x3 matrix |
|
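A minimal sketch of convolution as described above (the 3x3 averaging kernel and 5x5 toy image are assumed): each output pixel is a weighted average of the 3x3 neighborhood around the corresponding input pixel.

```python
import numpy as np

kernel = np.ones((3, 3)) / 9.0  # every neighbor weighted equally (smoothing)

def convolve3x3(image, kernel):
    """Apply a 3x3 kernel to the interior pixels of `image`."""
    out = image.astype(float).copy()
    for i in range(1, image.shape[0] - 1):
        for j in range(1, image.shape[1] - 1):
            region = image[i - 1:i + 2, j - 1:j + 2]  # neighborhood around the pixel
            out[i, j] = np.sum(region * kernel)       # weighted average
    return out

image = np.zeros((5, 5))
image[2, 2] = 90.0                 # a single bright pixel (toy input)
smoothed = convolve3x3(image, kernel)
print(smoothed[2, 2])              # 10.0: the brightness is spread over neighbors
```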
Spatial frequency filtering |
1. high pass filtering 2. low pass filtering 3. unsharp (blurred) masking |
|
High pass filtering is known as |
edge enhancement or sharpness |
|
High pass filtering is intended to |
sharpen an input image in the spatial domain that appears blurred |
|
Low pass filtering is used for the |
goal of image smoothing |
|
Smoothing is intended to |
reduce noise in the displayed brightness levels of pixels; however, image detail is compromised |
|
Unsharp (blurred) masking |
uses the blurred image produced from the low pass filtering process and subtracts it from the original image to produce a sharp image |
|
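Unsharp masking as described on the card above, sketched in Python (the 3x3 averaging blur and toy edge image are assumed examples): blur the image (low-pass), subtract the blur from the original to isolate detail, and add that detail back.

```python
import numpy as np

def blur(image):
    """Simple low-pass filter: average each interior pixel's 3x3 neighborhood."""
    out = image.astype(float).copy()
    for i in range(1, image.shape[0] - 1):
        for j in range(1, image.shape[1] - 1):
            out[i, j] = image[i - 1:i + 2, j - 1:j + 2].mean()
    return out

def unsharp_mask(image, amount=1.0):
    blurred = blur(image)           # low-pass (smoothed) version
    detail = image - blurred        # high-frequency detail (the "mask")
    return image + amount * detail  # original plus boosted detail

step = np.zeros((5, 5))
step[:, 3:] = 100.0                 # a vertical edge (toy image)
sharp = unsharp_mask(step)
print(sharp[2, 3] > step[2, 3])     # True: overshoot at the edge reads as sharper
```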
A CT image exam consists of |
two images per exam |
|
Image compression |
the use of software and hardware techniques to reduce information by removing unnecessary data |
|
Image compression allows |
remaining information to be encoded, stored or transmitted in an archive or storage media such as a tape or disc |
|
Upon decompression, the information is |
decoded, and the image is filled in with a representation of the data that was removed during compression |
|
Types of image compression |
1. Lossless 2. Lossy |
|
Lossless compression |
1. reversible 2. no info loss in compressed image data 3. does not involve the process of quantization |
|
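One simple lossless scheme (run-length encoding, used here purely as an illustration, not named on the cards): repeated pixel values are stored as (value, count) pairs, and decoding recovers exactly the original data with no information loss.

```python
def rle_encode(pixels):
    """Collapse runs of identical values into [value, count] pairs."""
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            runs[-1][1] += 1
        else:
            runs.append([p, 1])
    return runs

def rle_decode(runs):
    """Expand [value, count] pairs back into the original pixel list."""
    out = []
    for value, count in runs:
        out.extend([value] * count)
    return out

row = [0, 0, 0, 255, 255, 0, 0, 0, 0]   # toy image row
encoded = rle_encode(row)
print(encoded)                          # [[0, 3], [255, 2], [0, 4]]
print(rle_decode(encoded) == row)       # True: fully reversible
```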
Lossy compression |
1. irreversible 2. provides high compression ratios 3. currently not used by radiologists due to possibility of misdiagnosis |
|
Lossy compression involves 3 steps |
1. image transformation 2. quantization 3. encoding |
|
Image synthesis overview |
MRI, CT, 3D imaging in radiology, virtual reality imaging in radiology |
|
Virtual reality |
a branch of computer science that immerses the users in a computer generated environment and allows them to interact with 3D scenes- virtual endoscopy |
|
In CT - the slice of the patient is divided into small regions (voxels) because |
the dimension of depth (slice thickness) is added to the pixel |
|
Image processing hardware - basic image processing system consists of several interconnected components |
1. data acquisition device 2. digitizer 3. image memory 4. DAC 5. internal image processor 6. host computer |
|
Data acquisition device |
the video camera; in CT this is the x-ray tube, detectors, and detector electronics |
|
Digitizer |
analog signal converted into digital form by the digitizer, or ADC |
|
Image memory |
the digitized image is held in storage for further processing; size of the memory depends on the image |
|
DAC |
digital images held in memory can be displayed on a TV monitor; monitors work with analog signals, so the DAC converts the digital data to analog signals |
|
Internal image processor |
responsible for high speed processing of the input digital data |
|
Host computer |
primary component capable of performing several functions; plays significant role in applications |