  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
441

Splitting Frames Based on Hypothesis Testing for Patient Motion Compensation in SPECT

MA, LINNA 30 August 2006 (has links)
Patient motion is a significant cause of artifacts in SPECT imaging. It is important to be able to detect when a patient undergoing SPECT imaging is stationary and when significant motion has occurred, in order to selectively apply motion compensation. In our system, optical cameras observe reflective markers on the patient. Subsequent image processing determines the marker positions relative to the SPECT system, from which patient motion is calculated. We use this information to decide how to aggregate detected gamma rays (events) into projection images (frames) for tomographic reconstruction. For the most part, patients are stationary, and all events acquired at a single detector angle are treated as a single frame. When a patient moves, it becomes necessary to split a frame into subframes during each of which the patient is stationary. This thesis presents a method for splitting frames based on hypothesis testing. Two competing hypotheses and a corresponding probability model are designed, and the decision to split a frame is based on Bayesian recursive estimation of the likelihood function. The estimation procedure lends itself to an efficient iterative implementation. We show that the frame-splitting algorithm performs well at a representative signal-to-noise ratio, and several simulated motion cases are presented to verify its performance. This work is expected to improve the accuracy of motion compensation in clinical diagnoses.
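The abstract does not spell out the test itself; as an illustration of the general kind of sequential hypothesis test involved, the sketch below applies a CUSUM/SPRT-style cumulative log-likelihood-ratio test to a stream of one-dimensional marker positions. The Gaussian noise model, the shift size `delta`, and the threshold are all assumptions for illustration, not the thesis's actual design:

```python
def detect_motion(positions, mu0, delta, sigma, threshold=5.0):
    """Sequential log-likelihood-ratio test for a mean shift (CUSUM-style).

    H0: samples ~ N(mu0, sigma^2) (patient stationary).
    H1: samples ~ N(mu0 + delta, sigma^2) (patient has moved by delta).
    Returns the sample index at which the cumulative log-likelihood ratio
    first exceeds `threshold` (i.e. where a frame split would be proposed),
    or None if the stationary hypothesis is never rejected.
    """
    llr = 0.0
    for i, x in enumerate(positions):
        # Per-sample log-likelihood ratio of H1 vs. H0 for Gaussian noise.
        llr += (delta * (x - mu0) - 0.5 * delta**2) / sigma**2
        llr = max(llr, 0.0)  # CUSUM reset: never drift below zero under H0
        if llr > threshold:
            return i         # evidence of motion: split the frame here
    return None
```

With stationary samples the ratio stays pinned at zero, so a single clearly shifted sample is enough to trigger a split; in practice the threshold trades detection delay against false splits.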
442

A Fringe Projection System for Measurement of Condensing Fluid Films in Reduced Gravity

Tulsiani, Deepti 04 January 2006 (has links)
The thesis describes the design of a fringe projection system to study the dynamics of condensation, with potential application in a reduced-gravity environment. The underlying concept is that an optical system imaging the condensation layer can resolve perturbations in the condensing films, enabling extraction of quantitative data from the images. By acquiring a sequence of images of the deformed fringe pattern, the change in surface topology can be observed over time, giving greater understanding of condensation dynamics in reduced gravity.
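One standard way fringe projection systems turn deformed fringe images into surface topology is phase-stepped fringe analysis; whether this particular thesis uses phase stepping or a single-shot method is not stated in the abstract, so the following is only a generic sketch of the principle:

```python
import numpy as np

def phase_from_steps(i0, i1, i2, i3):
    """Recover the wrapped fringe phase from four pi/2-stepped images.

    With I_k = A + B*cos(phi + k*pi/2), the background A and modulation B
    cancel out, leaving phi = atan2(I3 - I1, I0 - I2), wrapped to (-pi, pi].
    The phase map is proportional to the film's surface height deviation.
    """
    return np.arctan2(i3 - i1, i0 - i2)

# Synthetic check: a tilted film produces a linear phase ramp.
x = np.linspace(0.0, 1.0, 64)
phi = 0.8 * np.pi * x - 0.4 * np.pi          # true phase, within (-pi, pi)
steps = [5.0 + 2.0 * np.cos(phi + k * np.pi / 2) for k in range(4)]
recovered = phase_from_steps(*steps)
```

For phases exceeding the (-pi, pi] range a separate unwrapping step would be needed before converting phase to height.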
443

Compressive sensing for microwave and millimeter-wave array imaging

Cheng, Qiao January 2018 (has links)
Compressive Sensing (CS) is a recently proposed signal processing technique that has already found many applications in microwave and millimeter-wave imaging. CS theory guarantees that sparse or compressible signals can be recovered from far fewer measurements than traditionally thought necessary. This property coincides with the goal of personnel surveillance imaging, whose priority is to reduce the scanning time as much as possible. Therefore, this thesis investigates the implementation of CS techniques in personnel surveillance imaging systems with different array configurations. The first key contribution is a comparative study of CS methods in a switched-array imaging system. Specific attention has been paid to situations where the array element spacing does not satisfy the Nyquist criterion due to physical limitations. CS methods are divided into the Fourier-transform-based CS (FT-CS) method, which relies on conventional FT, and the direct CS (D-CS) method, which directly utilizes classic CS formulations. The performance of the two CS methods is compared with the conventional FT method in terms of resolution, computational complexity, robustness to noise, and under-sampling. In particular, the resolving power of the two CS methods is studied under various circumstances. Both numerical and experimental results demonstrate the superiority of the CS methods. The FT-CS and D-CS methods are complementary techniques that can be used together for optimized efficiency and image reconstruction. The second contribution is a novel 3-D compressive phased-array imaging algorithm based on a more general forward model that takes antenna factors into consideration. Imaging results in both range and cross-range dimensions show better performance than the conventional FT method. Furthermore, suggestions on how to design the sensing configurations for better CS reconstruction results are provided based on coherence analysis.
This work further considers near-field imaging, with a near-field focusing technique integrated into the CS framework. Simulation results show better robustness against noise and interfering targets from the background. The third contribution presents the effects of array configurations on the performance of the D-CS method. Compressive MIMO array imaging is first derived and demonstrated with a cross-shaped MIMO array. The switched array, MIMO array, and phased array are then investigated together under the compressive imaging framework. All three methods have similar resolution due to the same effective aperture. As an alternative scheme to the switched array, the MIMO array is able to achieve comparable performance with far fewer antenna elements. While all three array configurations are capable of imaging with sub-Nyquist element spacing, the phased array is more sensitive to this element-spacing factor. Nevertheless, the phased array configuration achieves the best robustness against noise, at the cost of higher computational complexity. The final contribution is the design of a novel low-cost beam-steering imaging system using a flat Luneburg lens. The idea is to use a switched array at the focal plane of the Luneburg lens to control the beam steering. By sequentially exciting each element, the lens forms directive beams to scan the region of interest. The adoption of CS for image reconstruction enables high resolution as well as data under-sampling. Numerical simulations based on mechanically scanned data are conducted to verify the proposed imaging system.
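The "direct CS" formulation mentioned above amounts to solving an l1-regularized least-squares problem for a sparse scene from under-sampled measurements. As a generic illustration (not the thesis's actual forward model, which includes array geometry and antenna factors), a minimal iterative shrinkage-thresholding (ISTA) solver on a synthetic sensing matrix:

```python
import numpy as np

def ista(A, y, lam=0.05, n_iter=1000):
    """ISTA for min_x (1/2)*||Ax - y||^2 + lam*||x||_1.

    A plain, unoptimized sketch of sparse recovery from under-sampled
    measurements, in the spirit of a D-CS reconstruction.
    """
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - (A.T @ (A @ x - y)) / L    # gradient step on the data term
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft-threshold
    return x

# Synthetic scene: 3 point scatterers, recovered from 48 of 128 measurements.
rng = np.random.default_rng(0)
n, m = 128, 48
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[[10, 40, 90]] = [1.0, -0.7, 0.5]
x_hat = ista(A, A @ x_true, lam=0.01)
```

A real array-imaging forward model would replace the random `A` with the measured or modeled steering matrix, but the recovery machinery is the same.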
444

An interactive digital image processing system

Fawcett, George January 1975 (has links)
Thesis. 1975. B.S.--Massachusetts Institute of Technology. Dept. of Electrical Engineering and Computer Science. / Includes bibliographical references. / by George Fawcett, Jr. / B.S.
445

A three-dimensional computer display

Berlin, Edwin P January 1979 (has links)
Thesis (B.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1979. / MICROFICHE COPY AVAILABLE IN ARCHIVES AND ENGINEERING. / Bibliography: leaf 79. / by Edwin P. Berlin, Jr. / B.S.
446

A microprocessor implementation of an image enhancement/transmission system

Gallington, Raleigh Cedric January 1981 (has links)
Thesis (Elec.E)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1981. / MICROFICHE COPY AVAILABLE IN ARCHIVES AND ENGINEERING. / Bibliography: leaves 147-148. / by Raleigh Cedric Gallington. / Elec.E
447

2D and 3D high-speed multispectral optical imaging systems for in-vivo biomedical research

Bouchard, Matthew Bryan January 2014 (has links)
Functional optical imaging encompasses the use of optical imaging techniques to study living biological systems in their native environments. Optical imaging techniques are well suited for functional imaging because they are minimally invasive, use non-ionizing radiation, and derive contrast from a wide range of biological molecules. Modern transgenic labeling techniques, active and inactive exogenous agents, and intrinsic sources of contrast provide specific and dynamic markers of in-vivo processes at subcellular resolution. A central challenge in building functional optical imaging systems is to acquire data at high enough spatial and temporal resolutions to resolve the in-vivo process(es) under study. This challenge is particularly acute within neuroscience, where considerable effort has focused on studying the structural and functional relationships within complete neurovascular units in the living brain. Many existing functional optical techniques are limited in meeting this challenge by their imaging geometries, light source(s), and/or hardware implementations. In this thesis we describe the design, construction, and application of novel 2D and 3D optical imaging systems to address this central challenge, with a specific focus on functional neuroimaging applications. The 2D system is an ultra-fast, multispectral, wide-field imaging system capable of imaging 7.5 times faster than existing technologies. Its camera-first design allows for the fastest possible image acquisition rates because it is not limited by the synchronization challenges that have hindered previous multispectral systems. We present the development of this system from a benchtop instrument to a portable, low-cost, modular, open-source, laptop-based instrument. The constructed systems can acquire multispectral images at >75 frames per second with image resolutions up to 512 x 512 pixels.
This increased speed means that spectral analysis more accurately reflects the instantaneous state of tissues and allows for significantly improved tracking of moving objects. We describe three quantitative applications of these systems to in-vivo research and clinical studies of cortical imaging and calcium signaling in stem cells. The design and source code of the portable system were released to the greater scientific community to help make high-speed multispectral imaging accessible to a larger number of dynamic imaging applications, and to foster further development of the software package. The second system we developed is an entirely new high-speed 3D fluorescence microscopy platform called Laser-Scanning Intersecting Plane Tomography (L-SIPT). L-SIPT uses a novel combination of light-sheet illumination and off-axis detection to provide en-face 3D imaging of samples. L-SIPT allows samples to move freely in their native environments, enabling a range of experiments not possible with previous 3D optical imaging techniques. The constructed system is capable of acquiring 3D images at rates >20 volumes per second (VPS) with volume resolutions of 1400 x 50 x 150 pixels, a more than 200-fold increase over conventional laser-scanning microscopes. Spatial resolution is set by the choice of telescope design. We developed custom opto-mechanical components, computer ray-tracing models to guide system design and to characterize the technique's fundamental resolution limits, and phantoms and biological samples to refine the system's performance capabilities. We describe initial applications of the system to imaging freely moving transgenic Drosophila melanogaster larvae, 3D calcium signaling and hemodynamics in transgenic and exogenously labeled rodent cortex in-vivo, and 3D calcium signaling in acute transgenic rodent cortical brain slices in-vitro.
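Wide-field multispectral systems of this kind typically convert per-wavelength intensity changes into chromophore concentrations by linear spectral unmixing under a Beer-Lambert model. The sketch below illustrates the idea with made-up extinction coefficients and a unit baseline intensity; real systems use tabulated spectra (e.g. for oxy- and deoxyhemoglobin) and pathlength corrections:

```python
import numpy as np

def unmix(intensities, extinction):
    """Least-squares spectral unmixing under a simple Beer-Lambert model.

    extinction: (n_wavelengths, n_chromophores) matrix of extinction
    coefficients (illustrative numbers here, not tabulated values).
    Solves extinction @ c = -log(I / I0) for concentrations c, with a
    baseline I0 = 1 assumed for simplicity.
    """
    attenuation = -np.log(intensities)
    c, *_ = np.linalg.lstsq(extinction, attenuation, rcond=None)
    return c

# Illustrative two-chromophore example measured at three wavelengths.
E = np.array([[2.0, 0.5],
              [1.0, 1.5],
              [0.4, 2.2]])
c_true = np.array([0.3, 0.2])
measured = np.exp(-E @ c_true)      # forward model: intensities at 3 bands
c_est = unmix(measured, E)
```

With more wavelengths than chromophores the least-squares solve also averages down measurement noise, which is one reason multispectral (rather than two-band) acquisition pays off.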
448

Multi-scale Representations for Classification of Protein Crystal Images and Multi-Modal Registration of the Lung

Po, Ming Jack January 2015 (has links)
In recent years, multi-resolution techniques have become increasingly popular in the image processing community. New techniques have been developed, with applications ranging from edge detection, texture recognition, and image registration to multi-resolution features for image classification. The central focus of this two-part thesis is the multi-resolution analysis of images. In the first part, we used multi-resolution approaches to help with the classification of a set of protein crystal images. In the second, similar approaches were used to help register a set of 3D image volumes whose registration would otherwise be computationally prohibitive without leveraging multi-resolution techniques. Specifically, the first part of this work proposes a classification framework, developed in collaboration with the NorthEast Structural Genomics Consortium (NESG), to assist in the automated screening of protein crystal images. Several groups have previously proposed automated algorithms to expedite such analysis. However, none of the classifiers described in the literature are sufficiently accurate or fast enough to be practical in a structural genomics production pipeline. The second part of this work proposes a 3D image registration algorithm to register regions of emphysema, as quantified by densitometry on lung CT, with MR lung volumes. The ability to register quantitatively determined regions of emphysema with perfusion MRI will allow further exploration of the pathophysiology of Chronic Obstructive Pulmonary Disease (COPD). The registration method involves the registration of CT volumes at different levels of inspiration (total lung capacity to functional residual capacity [FRC]), followed by another registration between FRC-CT and FRC-MR volume pairs.
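The computational savings of multi-resolution registration come from estimating alignment on a coarse pyramid level and only refining it at finer levels. As a generic 2D illustration (the thesis itself registers 3D volumes with a more sophisticated transform model), a coarse-to-fine integer-translation estimator over a block-averaged image pyramid:

```python
import numpy as np

def downsample(img):
    """One pyramid level: 2x2 block averaging (a crude Gaussian pyramid)."""
    h, w = (img.shape[0] // 2) * 2, (img.shape[1] // 2) * 2
    img = img[:h, :w]
    return (img[0::2, 0::2] + img[1::2, 0::2]
            + img[0::2, 1::2] + img[1::2, 1::2]) / 4.0

def register_translation(fixed, moving, levels=3, search=2):
    """Coarse-to-fine integer translation estimate between two images.

    The shift found at each coarse level is doubled and refined by an
    exhaustive SSD search over a small +/-search window at the next finer
    level, so the full-resolution search stays cheap.
    """
    pyr_f, pyr_m = [fixed], [moving]
    for _ in range(levels - 1):
        pyr_f.append(downsample(pyr_f[-1]))
        pyr_m.append(downsample(pyr_m[-1]))
    dy = dx = 0
    for f, m in zip(reversed(pyr_f), reversed(pyr_m)):
        dy, dx = 2 * dy, 2 * dx            # propagate shift to finer level
        best = None
        for ddy in range(-search, search + 1):
            for ddx in range(-search, search + 1):
                shifted = np.roll(m, (dy + ddy, dx + ddx), axis=(0, 1))
                err = float(np.sum((f - shifted) ** 2))
                if best is None or err < best[0]:
                    best = (err, dy + ddy, dx + ddx)
        _, dy, dx = best
    return dy, dx

# Periodic test pattern shifted by a known amount.
y, x = np.mgrid[0:64, 0:64]
fixed = (np.sin(2 * np.pi * 3 * x / 64) + np.cos(2 * np.pi * 2 * y / 64)
         + 0.5 * np.sin(2 * np.pi * (x + 2 * y) / 64))
moving = np.roll(fixed, (4, -8), axis=(0, 1))
shift = register_translation(fixed, moving)
```

A full-resolution exhaustive search over the same total displacement range would cost orders of magnitude more SSD evaluations, which is exactly the motivation for the pyramid.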
449

Non-invasive and cost-effective quantification of Positron Emission Tomography data

Mikhno, Arthur January 2015 (has links)
Molecular imaging of the human body is beginning to revolutionize drug development, drug delivery targeting, prognostics and diagnostics, and patient screening for clinical trials. The primary clinical tool of molecular imaging is Positron Emission Tomography (PET), which uses radioactively tagged probes (radioligands) for the in vivo quantification of blood flow, metabolism, protein distribution, gene expression, and drug target occupancy. While many radioligands are used in human research, only a few have been adopted for clinical use. A major obstacle to translating these tools from bench to bedside is that PET images acquired using complex radioligands cannot be properly interpreted or quantified without arterial blood sampling during the scan. Arterial blood sampling is an invasive, risky, costly, time-consuming, and uncomfortable procedure that deters subjects' participation and requires the presence of highly specialized medical staff and laboratories to run blood analysis. Many approaches have been developed over the years to reduce the number of blood samples for certain classes of radioligands, yet the ultimate goal of zero blood samples has remained elusive. In this dissertation we break this proverbial blood barrier and present, for the first time, a non-invasive PET quantification framework. To accomplish this, we introduce novel image processing, modeling, and tomographic reconstruction tools. First, we developed a dedicated pharmacokinetic modeling, machine learning, and optimization framework based on the fusion of Electronic Health Records (EHR) data with dynamic PET brain imaging information. EHR data are used to infer individualized metabolism and clearance rates of the radioligand from the body. This is combined with simultaneous estimation on multiple distinct regions of the PET image. A substantial part of this effort involved curating, and then mining, an extensive database of PET, EHR, and arterial blood sampling data.
Second, we outline a new tomographic reconstruction and resolution modeling approach that takes into account the scanner point spread function in order to improve the resolution of existing PET datasets. This technique allows visualization and quantification of structures smaller than previously possible. Recovery of signal from blood vessels and integration with the non-invasive framework is demonstrated. We also show the general applicability of this technique for visualization and signal recovery from the raphe, a sub-resolution cluster of nuclei in the brain that was previously not detectable with standard techniques. Our framework generalizes to all classes of radioligands, independent of their kinetics and distribution within the body. The work presented in this thesis will allow the PET scientific and clinical community to advance towards the ultimate goal of making PET cost-effective and to enable new clinical use cases.
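The pharmacokinetic modeling at the heart of PET quantification is usually a compartment model driven by the arterial input function (the quantity the thesis's framework infers non-invasively). As a minimal, generic illustration (not the dissertation's actual model), a one-tissue compartment model can be fitted by plain linear least squares using its integrated form, with a simulated input function standing in for arterial samples:

```python
import numpy as np

def fit_one_tissue(t, cp, ct):
    """Fit K1, k2 of a one-tissue compartment model by linear least squares.

    Integrating dC_T/dt = K1*Cp(t) - k2*C_T(t) gives
    C_T(t) = K1*int(Cp) - k2*int(C_T), which is linear in (K1, k2), so no
    iterative optimizer is needed. `cp` is the plasma input function that
    arterial sampling normally provides (simulated here).
    """
    dt = np.diff(t, prepend=t[0])
    int_cp = np.cumsum(cp * dt)
    int_ct = np.cumsum(ct * dt)
    X = np.column_stack([int_cp, -int_ct])
    k1, k2 = np.linalg.lstsq(X, ct, rcond=None)[0]
    return k1, k2

# Simulate a tissue curve from a gamma-variate input, then recover K1, k2.
t = np.linspace(0.0, 60.0, 6000)                 # minutes
cp = t * np.exp(-t / 4.0)                        # illustrative input function
K1_true, k2_true = 0.3, 0.1
ct = np.zeros_like(t)
for i in range(len(t) - 1):
    step = t[i + 1] - t[i]
    ct[i + 1] = ct[i] + step * (K1_true * cp[i] - k2_true * ct[i])
k1, k2 = fit_one_tissue(t, cp, ct)
```

Real radioligands often require two-tissue models and noise-robust estimators, but the same input-function dependence is what makes eliminating arterial sampling so valuable.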
450

Through the Forest of Speckles: Robust Spectroscopy of Extremely Faint Companions of Nearby Stars

Veicht, Aaron Michael January 2016 (has links)
The discovery and characterization of exoplanetary systems is an exciting new field. At just over two decades old, it has already fundamentally reshaped our knowledge of planet and solar system formation. We now know that there is a vast diversity of planetary systems, in highly varied, even bizarre, configurations. Known planetary bodies span all masses, from objects less massive and smaller than Earth to objects as large as the smallest stars or brown dwarfs. They exhibit orbital periods from a few hours to millennia, orbits from nearly perfectly circular to highly elliptical, and compositions from fluffy gas giants to dense rocky worlds, from purely metallic worlds to water worlds. Exoplanets come in all sizes, compositions, and varieties. These discoveries have fundamentally changed the way we approach planetary science. With such a great diversity of exoplanets, we now seek to extend our knowledge to include an understanding of their individual compositions. We wish to understand the climates of these exoplanets and to resolve the differences between, for example, Earth-like and Venus-like planets. To facilitate these discoveries, several methods of exoplanet detection and characterization have been developed. Among them are indirect methods, which infer the existence of exoplanets from their influence on their host star, and direct methods, which detect the light from the exoplanets themselves. Direct detection allows not only a determination of the existence of the object, but also a determination of its composition and climate through measurement of its atmosphere's chemical composition. Using purely high-contrast direct imaging methods, coarse spectra can now be measured for exoplanets with a relative brightness 10⁻⁴-10⁻⁵ that of the host star. Below this contrast level, the companion is at the same level of brightness as the noise caused by optical defects and wavefront errors in the observed light, called speckles.
In this thesis, I demonstrate the use and optimization of a novel technique, S4_Spectrum, to model and remove speckle noise from directly imaged systems. S4_Spectrum is capable of removing 99% of the speckle noise, allowing the detection and spectral characterization of exoplanets as faint as 10⁻⁶-10⁻⁷ times the brightness of their host stars. This represents a two-order-of-magnitude gain in sensitivity. I present the design of one of these high-contrast systems, Project 1640, as well as its data collection method, including the data pipeline and analysis techniques. I also describe the S4_Spectrum technique in detail, as implemented in Project 1640, and present its operation and optimization. Additionally, I present the application of this new tool to obtain spectral characterizations of several objects found in the Project 1640 survey.
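S4_Spectrum itself is not reproduced here; as an illustration of the broad family of speckle-modeling approaches it belongs to, the sketch below performs PCA-based (KLIP-style) speckle subtraction on an entirely synthetic frame stack, where quasi-static speckles dominate the leading principal components and a faint, frame-dependent point source survives the subtraction:

```python
import numpy as np

def pca_speckle_subtract(frames, n_modes=2):
    """PCA (KLIP-style) speckle suppression on a stack of flattened frames.

    frames: (n_frames, n_pixels). Each mean-subtracted frame is projected
    onto the leading principal pixel-space modes of the stack, and that
    reconstruction (the speckle model) is subtracted; a faint signal that
    varies between frames largely survives.
    """
    centered = frames - frames.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    modes = vt[:n_modes]                        # leading pixel-space modes
    return centered - centered @ modes.T @ modes

# Toy stack: rank-2 quasi-static speckle pattern plus one faint "companion".
rng = np.random.default_rng(1)
speckle_basis = rng.standard_normal((2, 400))   # two fixed speckle patterns
amplitudes = rng.standard_normal((20, 2))       # per-frame speckle strengths
frames = amplitudes @ speckle_basis
frames[7, 123] += 0.5                           # faint point source in frame 7
residual = pca_speckle_subtract(frames, n_modes=2)
```

After subtraction, the dominant residual sits at the companion's frame and pixel while the speckle pattern is removed; real pipelines additionally exploit the wavelength scaling of speckles, which is where spectral techniques like S4_Spectrum gain their leverage.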
