
Model Based Diagnosis of the Intake Manifold Pressure on a Diesel Engine / Modellbaserad laddtrycksdiagnos för en dieselmotor

Bergström, Christoffer, Höckerdal, Gunnar January 2009 (has links)
Stronger environmental awareness as well as current and future legislation increase the demands on diagnosis and supervision of any vehicle with a combustion engine. This particularly concerns heavy-duty trucks, which typically combine long driving distances with large engines. Model-based diagnosis is an often-used method in these applications, since it does not require any hardware redundancy. Undesired changes in the intake manifold pressure can cause increased emissions. In this thesis a diagnosis system for supervision of the intake manifold pressure is constructed and evaluated. The diagnosis system is based on a Mean Value Engine Model (MVEM) of the intake manifold pressure in a diesel engine with Exhaust Gas Recirculation (EGR) and Variable Geometry Turbine (VGT). The observer-based residual generator compares the measured intake manifold pressure with an observer-based estimate of this pressure. The generated residual is then post-processed in a CUSUM-algorithm-based diagnosis test. When constructing the diagnosis system, robustness is an important aspect. To achieve a robust system design, four different observer approaches are evaluated: extended Kalman filter, high-gain, sliding mode, and an adaptation of the open model. The conclusion of this evaluation is that a sliding mode approach is the best alternative for a robust diagnosis system in this application. The CUSUM algorithm in the diagnosis test further improves the properties of the diagnosis system.
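As an illustration of the post-processing step, the sketch below implements a basic one-sided CUSUM test on a residual signal in Python. The drift and threshold values, the restart-after-alarm behaviour, and the synthetic residuals are illustrative assumptions, not the thesis's actual tuning.

```python
import numpy as np

def cusum(residual, drift=0.5, threshold=5.0):
    """One-sided CUSUM test: accumulate residual magnitude above a
    drift term and raise an alarm when the sum exceeds a threshold.
    Returns the cumulative-sum sequence and the alarm indices."""
    g = np.zeros(len(residual))
    alarms = []
    for k in range(1, len(residual)):
        g[k] = max(0.0, g[k - 1] + abs(residual[k]) - drift)
        if g[k] > threshold:
            alarms.append(k)
            g[k] = 0.0  # restart the test after each alarm
    return g, alarms

# Fault-free residual: small zero-mean noise, no alarms expected.
rng = np.random.default_rng(0)
r_ok = 0.1 * rng.standard_normal(200)
_, alarms_ok = cusum(r_ok)

# Faulty residual: a bias appears halfway through the sequence.
r_fault = r_ok.copy()
r_fault[100:] += 2.0
_, alarms_fault = cusum(r_fault)
```

The drift term suppresses small fault-free deviations, while a sustained bias accumulates and triggers alarms shortly after the fault occurs.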

Advanced Techniques for Image Quality Assessment of Modern X-ray Computed Tomography Systems

Solomon, Justin Bennion January 2016 (has links)
X-ray computed tomography (CT) imaging constitutes one of the most widely used diagnostic tools in radiology today, with nearly 85 million CT examinations performed in the U.S. in 2011. CT imparts a relatively high radiation dose to the patient compared to other x-ray imaging modalities; as a result of this fact, coupled with its popularity, CT is currently the single largest source of medical radiation exposure to the U.S. population. For this reason, there is a critical need to optimize CT examinations such that the dose is minimized while the quality of the CT images is not degraded. This optimization can be difficult to achieve due to the relationship between dose and image quality. All else being equal, reducing the dose degrades image quality and can impact the diagnostic value of the CT examination.

A recent push from the medical and scientific community towards lower doses has spawned new dose reduction technologies such as automatic exposure control (i.e., tube current modulation) and iterative reconstruction algorithms. In theory, these technologies could allow scanning at reduced doses while maintaining image quality at an acceptable level. Therefore, there is a scientific need to establish the dose reduction potential of these new technologies in an objective and rigorous manner. Establishing these dose reduction potentials requires precise and clinically relevant metrics of CT image quality, as well as practical and efficient methodologies to measure such metrics on real CT systems. The currently established methodologies for assessing CT image quality are not appropriate for modern CT scanners that have implemented the aforementioned dose reduction technologies.

Thus, the purpose of this doctoral project was to develop, assess, and implement new phantoms, image quality metrics, analysis techniques, and modeling tools appropriate for image quality assessment of modern clinical CT systems. The project developed image quality assessment methods in the context of three distinct paradigms: (a) uniform phantoms, (b) textured phantoms, and (c) clinical images.

The work in this dissertation used the "task-based" definition of image quality. That is, image quality was broadly defined as the effectiveness with which an image can be used for its intended task. Under this definition, any assessment of image quality requires three components: (1) a well-defined imaging task (e.g., detection of subtle lesions), (2) an "observer" to perform the task (e.g., a radiologist or a detection algorithm), and (3) a way to measure the observer's performance in completing the task at hand (e.g., detection sensitivity/specificity).

First, this task-based image quality paradigm was implemented using a novel multi-sized phantom platform (with uniform background) developed specifically to assess modern CT systems (Mercury Phantom, v3.0, Duke University). A comprehensive evaluation was performed on a state-of-the-art CT system (SOMATOM Definition Force, Siemens Healthcare) in terms of noise, resolution, and detectability as a function of patient size, dose, tube energy (i.e., kVp), automatic exposure control, and reconstruction algorithm (i.e., Filtered Back-Projection, FBP, vs. Advanced Modeled Iterative Reconstruction, ADMIRE). A mathematical observer model (i.e., a computer detection algorithm) was implemented and used as the basis of image quality comparisons.
It was found that image quality increased with increasing dose and decreasing phantom size. The CT system exhibited nonlinear noise and resolution properties, especially at very low doses, large phantom sizes, and for low-contrast objects. Objective image quality metrics generally increased with increasing dose and ADMIRE strength, and with decreasing phantom size. The ADMIRE algorithm could offer comparable image quality at reduced doses or improved image quality at the same dose (an increase in detectability index by up to 163% depending on iterative strength). The use of automatic exposure control resulted in more consistent image quality with changing phantom size.

Based on those results, the dose reduction potential of ADMIRE was further assessed specifically for the task of detecting small (<=6 mm) low-contrast (<=20 HU) lesions. A new low-contrast detectability phantom (with uniform background) was designed and fabricated using a multi-material 3D printer. The phantom was imaged at multiple dose levels and images were reconstructed with FBP and ADMIRE. Human perception experiments were performed to measure the detection accuracy from FBP and ADMIRE images. It was found that ADMIRE had equivalent performance to FBP at 56% less dose.

Using the same image data as the previous study, a number of different mathematical observer models were implemented to assess which models would yield image quality metrics that best correlated with human detection performance. The models included naïve simple metrics of image quality such as the contrast-to-noise ratio (CNR) and more sophisticated observer models such as the non-prewhitening matched filter observer model family and the channelized Hotelling observer model family. It was found that the non-prewhitening matched filter observers and the channelized Hotelling observers both correlated strongly with human performance. Conversely, CNR was found not to correlate strongly with human performance, especially when comparing different reconstruction algorithms.

The uniform background phantoms used in the previous studies provided a good first-order approximation of image quality. However, due to their simplicity and the complexity of iterative reconstruction algorithms, it is possible that such phantoms are not fully adequate to assess the clinical impact of iterative algorithms, because patient images obviously do not have smooth uniform backgrounds. To test this hypothesis, two textured phantoms (classified as gross texture and fine texture) and a uniform phantom of similar size were built and imaged on a SOMATOM Flash scanner (Siemens Healthcare). Images were reconstructed using FBP and Sinogram Affirmed Iterative Reconstruction (SAFIRE). Using an image subtraction technique, quantum noise was measured in all images of each phantom. It was found that for FBP, the noise was independent of the background (textured vs. uniform). However, for SAFIRE, noise increased by up to 44% in the textured phantoms compared to the uniform phantom. As a result, the noise reduction from SAFIRE was found to be up to 66% in the uniform phantom but as low as 29% in the textured phantoms. Based on this result, it is clear that further investigation was needed to understand the impact that background texture has on image quality when iterative reconstruction algorithms are used.

To further investigate this phenomenon with more realistic textures, two anthropomorphic textured phantoms were designed to mimic lung vasculature and fatty soft tissue texture. The phantoms (along with a corresponding uniform phantom) were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Scans were repeated a total of 50 times in order to obtain ensemble statistics of the noise.
A novel method of estimating the noise power spectrum (NPS) from irregularly shaped ROIs was developed. It was found that SAFIRE images had highly locally non-stationary noise patterns, with pixels near edges having higher noise than pixels in more uniform regions. Compared to FBP, SAFIRE images had 60% less noise on average in uniform regions; for edge pixels, noise was between 20% higher and 40% lower. The noise texture (i.e., NPS) was also highly dependent on the background texture for SAFIRE. Therefore, it was concluded that quantum noise properties in uniform phantoms are not representative of those in patients for iterative reconstruction algorithms, and texture should be considered when assessing image quality of iterative algorithms.

To move beyond just assessing noise properties in textured phantoms towards assessing detectability, a series of new phantoms were designed specifically to measure low-contrast detectability in the presence of background texture. The textures used were optimized to match the texture in the liver regions of actual patient CT images using a genetic algorithm. The so-called "Clustered Lumpy Background" texture synthesis framework was used to generate the modeled texture. Three textured phantoms and a corresponding uniform phantom were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Images were reconstructed with FBP and SAFIRE and analyzed using a multi-slice channelized Hotelling observer to measure detectability and the dose reduction potential of SAFIRE based on the uniform and textured phantoms. It was found that, at the same dose, the improvement in detectability from SAFIRE (compared to FBP) was higher when measured in a uniform phantom than in textured phantoms.

The final trajectory of this project aimed at developing methods to mathematically model lesions as a means to help assess image quality directly from patient images. The mathematical modeling framework is first presented. The models describe a lesion's morphology in terms of size, shape, contrast, and edge profile as an analytical equation. The models can be voxelized and inserted into patient images to create so-called "hybrid" images. These hybrid images can then be used to assess detectability or estimability, with the advantage that the ground truth of the lesion morphology and location is known exactly. Based on this framework, a series of liver lesions, lung nodules, and kidney stones were modeled based on images of real lesions. The lesion models were virtually inserted into patient images to create a database of hybrid images to go along with the original database of real lesion images. ROI images from each database were assessed by radiologists in a blinded fashion to determine the realism of the hybrid images. It was found that the radiologists could not readily distinguish between real and virtual lesion images (area under the ROC curve was 0.55). This study provided evidence that the proposed mathematical lesion modeling framework could produce reasonably realistic lesion images.

Based on that result, two studies were conducted to demonstrate the utility of the lesion models. The first study used the modeling framework as a measurement tool to determine how dose and reconstruction algorithm affected the quantitative analysis of liver lesions, lung nodules, and renal stones in terms of their size, shape, attenuation, edge profile, and texture features. The same database of real lesion images used in the previous study was used for this study. That database contained images of the same patients at two dose levels (50% and 100%) along with three reconstruction algorithms from a GE 750HD CT system (GE Healthcare): FBP, Adaptive Statistical Iterative Reconstruction (ASiR), and Model-Based Iterative Reconstruction (MBIR).
A total of 23 quantitative features were extracted from the lesions under each condition. It was found that both dose and reconstruction algorithm had a statistically significant effect on the feature measurements. In particular, radiation dose affected five, three, and four of the 23 features (related to lesion size, conspicuity, and pixel-value distribution) for liver lesions, lung nodules, and renal stones, respectively. MBIR significantly affected 9, 11, and 15 of the 23 features (including size, attenuation, and texture features) for liver lesions, lung nodules, and renal stones, respectively. Lesion texture was not significantly affected by radiation dose.

The second study demonstrating the utility of the lesion modeling framework focused on assessing the detectability of very low-contrast liver lesions in abdominal imaging. Specifically, detectability was assessed as a function of dose and reconstruction algorithm. As part of a parallel clinical trial, images from 21 patients were collected at 6 dose levels per patient on a SOMATOM Flash scanner. Subtle liver lesion models (contrast = -15 HU) were inserted into the raw projection data from the patient scans. The projections were then reconstructed with FBP and SAFIRE (strength 5). Lesion-less images were also reconstructed. Noise, contrast, CNR, and the detectability index of an observer model (non-prewhitening matched filter) were assessed. It was found that SAFIRE reduced noise by 52%, reduced contrast by 12%, increased CNR by 87%, and increased detectability index by 65% compared to FBP. Further, a 2AFC human perception experiment was performed to assess the dose reduction potential of SAFIRE, which was found to be 22% compared to the standard-of-care dose.

In conclusion, this dissertation provides to the scientific community a series of new methodologies, phantoms, analysis techniques, and modeling tools that can be used to rigorously assess image quality from modern CT systems. Specifically, methods to properly evaluate iterative reconstruction have been developed and are expected to aid in the safe clinical implementation of dose reduction technologies. / Dissertation
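As a rough illustration of the task-based metrics used throughout this dissertation, the Python sketch below estimates a non-prewhitening matched-filter detectability index from two stacks of simulated 1-D "images". The lesion profile, white-noise model, and sample sizes are invented for the example; they merely stand in for the phantom ROIs described above.

```python
import numpy as np

def npw_dprime(present, absent):
    """Non-prewhitening matched-filter detectability index estimated
    from two image stacks (n_images x n_pixels): the template is the
    mean difference image, and d' is the template-response separation
    divided by the pooled response standard deviation."""
    w = present.mean(axis=0) - absent.mean(axis=0)   # NPW template
    t_p, t_a = present @ w, absent @ w               # scalar responses
    pooled = 0.5 * (t_p.var(ddof=1) + t_a.var(ddof=1))
    return (t_p.mean() - t_a.mean()) / np.sqrt(pooled)

rng = np.random.default_rng(1)
n, npix = 500, 64
lesion = np.zeros(npix)
lesion[28:36] = 1.0                                  # crude 1-D "lesion"
absent = rng.standard_normal((n, npix))              # noise-only ROIs
present = lesion + rng.standard_normal((n, npix))    # lesion + noise
dprime = npw_dprime(present, absent)
```

Higher d' corresponds to an easier detection task; comparing d' across dose levels or reconstruction algorithms is the basic pattern behind the comparisons reported above.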

Structural Diagnosis Implementation of Dymola Models using Matlab Fault Diagnosis Toolbox

Lannerhed, Petter January 2017 (has links)
Models are of great interest in many fields of engineering, as they enable prediction of a system's behaviour given an initial mode of the system. However, in the field of model-based diagnosis the models are used in a reverse manner, as they are combined with observations of the system's behaviour in order to estimate the system mode. This thesis describes computation of diagnostic systems based on models implemented in Dymola, a program that uses the language Modelica. The Dymola models are translated to Matlab, where an application called the Fault Diagnosis Toolbox (FDT) is applied. The FDT has functionality for pinpointing minimal structurally overdetermined sets of equations, MSOs, which is developed further in this thesis. It is shown that the implemented algorithm has exponential time complexity with regard to the level to which the system is overdetermined, also known as the degree of redundancy. The MSOs are used to generate residuals, which are functions that are equal to zero given that the system is fault-free. Residual generation in Dymola is added to the original methods of the FDT, and the results of the Dymola methods are compared to the original FDT methods when given identical data. Based on these tests it is concluded that adding the Dymola methods to the FDT results in higher accuracy, as well as a new way to compute optimal observer gain. The FDT methods are applied to two models: one based on a system of JAS 39 Gripen, the Secondary Environmental Control System (SECS), and a simpler Two Tank System. It is validated that the computational properties of the developed methods in Dymola and Matlab differ, and that there are therefore benefits to adding the Dymola implementations to the current FDT methods.
Furthermore, the investigation of the potential isolability based on the current sensor setup in SECS shows that full isolability is achievable by adding two mass flow sensors, and that the isolability is not limited by causality constraints. One of the found MSOs is solvable in Dymola when given data from a fault-free simulation. However, if the simulation is not fault-free, the same MSO results in a singular equation system. By utilizing MSOs that had no reaction to any modelled faults, certain non-monitored faults are isolated from the monitored ones, and the risk of false alarms is therefore reduced. Some residuals are generated as observers, and a new method for constructing observers is found during the thesis by using Lannerhed's theorem in combination with Pontryagin's Minimum Principle. This method enables evaluation of observer-based residuals in Dymola without selecting a specific operating point, as well as evaluation of observers based on high-index Differential Algebraic Equations (DAEs). The method also results in completely different behaviour of the estimation error compared to the method already implemented in the FDT. For example, one of the new observer implementations achieves both an estimation error that converges faster towards zero when no faults are present in the monitored system, and a sharper reaction to implemented faults.
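A minimal sketch of the structural-analysis idea behind MSOs: given a binary equation-to-unknown incidence structure, the degree of redundancy of an equation set is its size minus its maximum matching with the unknowns it involves, and an MSO is a redundancy-1 set whose proper subsets all have redundancy 0. The brute-force search below (exponential, consistent with the complexity noted above) and the toy model are purely illustrative; the actual FDT uses far more efficient algorithms.

```python
import itertools

def matching_size(eqs, inc):
    """Maximum bipartite matching between equations and the unknowns
    they contain (inc[e] is the set of unknowns in equation e)."""
    match = {}                              # unknown -> equation
    def augment(e, seen):
        for v in inc[e]:
            if v not in seen:
                seen.add(v)
                if v not in match or augment(match[v], seen):
                    match[v] = e
                    return True
        return False
    return sum(augment(e, set()) for e in eqs)

def redundancy(eqs, inc):
    """Degree of redundancy: number of equations minus the maximum
    matching with the unknowns they involve."""
    return len(eqs) - matching_size(eqs, inc)

def find_msos(inc):
    """Brute-force MSO search: redundancy-1 equation sets whose proper
    subsets all have redundancy 0. Exponential in the model size."""
    msos = []
    for r in range(1, len(inc) + 1):
        for cand in itertools.combinations(range(len(inc)), r):
            if redundancy(cand, inc) == 1 and all(
                    redundancy(sub, inc) == 0
                    for sub in itertools.combinations(cand, r - 1)):
                msos.append(set(cand))
    return msos

# Toy structural model: e0 and e1 both measure unknown x0,
# e2 couples x0 and x1, e3 measures x1.
inc = [{0}, {0}, {0, 1}, {1}]
msos = find_msos(inc)
```

Each MSO found this way can, in principle, be turned into a residual generator: eliminate the unknowns using all but one equation and use the remaining equation as a fault-free consistency check.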

Interobserver Reliability in the Diagnosis of Pulpal and Periradicular Disease

Mellin, Todd Peter 01 January 2005 (has links)
The purpose of this study was to evaluate the interobserver reliability of endodontists in diagnosing the presence or absence of pulpal and/or periradicular disease. The study used 47 patients presenting to the VCU School of Dentistry for screening appointments as a test population, under the rules and regulations of the VCU IRB. The patients were examined separately by two endodontists using a thorough patient history, clinical exam, and radiographs. Each examiner then answered the question of whether the patient had pulpal and/or periradicular disease, and the answers were compared. The data were analyzed using the Kappa statistic, and the standard error was determined to test for statistical significance. Observers agreed 88% of the time, with a Kappa of 0.74. This was determined to represent bona fide reliability, with p<.0001. The results indicate that agreement among endodontists is very good when patients are evaluated for pulpal and/or periradicular disease.
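The Kappa statistic used in the analysis corrects observed agreement for agreement expected by chance. A minimal sketch in Python (the example ratings are invented for illustration and do not reproduce the study's data):

```python
def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two raters: (observed agreement - chance
    agreement) / (1 - chance agreement), where chance agreement comes
    from each rater's marginal category frequencies."""
    n = len(ratings_a)
    assert n == len(ratings_b)
    p_obs = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    cats = set(ratings_a) | set(ratings_b)
    p_chance = sum((ratings_a.count(c) / n) * (ratings_b.count(c) / n)
                   for c in cats)
    return (p_obs - p_chance) / (1 - p_chance)

# Invented example: two raters, 50 patients, 45 agreements.
rater_a = ["disease"] * 25 + ["healthy"] * 25
rater_b = ["disease"] * 22 + ["healthy"] * 3 \
        + ["disease"] * 2 + ["healthy"] * 23
kappa = cohens_kappa(rater_a, rater_b)
```

Here 90% raw agreement with balanced marginals yields kappa = 0.8, illustrating how kappa discounts the 50% agreement two random raters would reach by chance.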

Design of a practical model-observer-based image quality assessment method for x-ray computed tomography imaging systems

Tseng, Hsin-Wu, Fan, Jiahua, Kupinski, Matthew A. 28 July 2016 (has links)
The use of a channelization mechanism on model observers not only makes mimicking human visual behavior possible, but also reduces the amount of image data needed to estimate the model observer parameters. The channelized Hotelling observer (CHO) and channelized scanning linear observer (CSLO) have recently been used to assess CT image quality for detection tasks and combined detection/estimation tasks, respectively. Although the use of channels substantially reduces the amount of data required to compute image quality, the number of scans required for CT imaging is still not practical for routine use. It is our desire to further reduce the number of scans required to make CHO or CSLO an image quality tool for routine and frequent system validations and evaluations. This work explores different data-reduction schemes and designs an approach that requires only a few CT scans. Three different kinds of approaches are included in this study: a conventional CHO/CSLO technique with a large sample size, a conventional CHO/CSLO technique with fewer samples, and an approach that we will show requires fewer samples to mimic conventional performance with a large sample size. The mean value and standard deviation of areas under ROC/EROC curve were estimated using the well-validated shuffle approach. The results indicate that an 80% data reduction can be achieved without loss of accuracy. This substantial data reduction is a step toward a practical tool for routine-task-based QA/QC CT system assessment. (C) 2016 Society of Photo-Optical Instrumentation Engineers (SPIE)
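A simplified sketch of a channelized Hotelling observer in Python: image stacks are projected onto a small set of channels, a Hotelling template is built in channel space, and an AUC is derived from the detectability index under a Gaussian assumption. The Gaussian channel profiles, lesion, noise model, and sample sizes are illustrative assumptions, not the CT data or channel sets used in this study.

```python
import numpy as np
from math import erf

def cho_auc(present, absent, channels):
    """Channelized Hotelling observer: project images onto channels,
    build the Hotelling template in channel space, and convert the
    detectability index d' into an AUC (Gaussian assumption)."""
    v_p = present @ channels          # channel outputs, signal present
    v_a = absent @ channels           # channel outputs, signal absent
    s = v_p.mean(axis=0) - v_a.mean(axis=0)
    k = 0.5 * (np.cov(v_p, rowvar=False) + np.cov(v_a, rowvar=False))
    w = np.linalg.solve(k, s)         # Hotelling template
    t_p, t_a = v_p @ w, v_a @ w
    d = (t_p.mean() - t_a.mean()) / np.sqrt(
        0.5 * (t_p.var(ddof=1) + t_a.var(ddof=1)))
    return d, 0.5 * (1.0 + erf(d / 2.0))   # AUC = Phi(d' / sqrt(2))

rng = np.random.default_rng(1)
npix, n = 64, 400
x = np.arange(npix)
# Illustrative channels: Gaussian profiles of increasing width,
# centered on the (assumed known) lesion location.
channels = np.stack(
    [np.exp(-0.5 * ((x - 32) / w) ** 2) for w in (2, 4, 8, 16)], axis=1)
lesion = 0.8 * np.exp(-0.5 * ((x - 32) / 3) ** 2)
absent = rng.standard_normal((n, npix))
present = lesion + rng.standard_normal((n, npix))
d, auc = cho_auc(present, absent, channels)
```

The data-reduction benefit described above comes from this channelization: only a handful of channel-space statistics must be estimated, rather than a full pixel-space covariance.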

Reconstruction du signal ou de l'état basé sur un espace de mesure de dimension réduite / Signal state Reconstruction from Lower-dimensional Measurements

Yu, Lei 20 November 2011 (has links)
Le 21ème siècle est le siècle de l'explosion informatique : des milliards de données sont produites, collectées et stockées dans notre vie quotidienne. Les façons de collecter les ensembles de données sont multiples, mais toujours en essayant d'optimiser le critère qui consiste à avoir le maximum d'information dans le minimum de données numériques. Il est préférable de collecter directement l'information, car les informations, étant contraintes, sont dans un espace plus faible que celui où évoluent les données (signaux ou états). Cette méthode est donc appelée « la collecte de l'information », et conceptuellement peut être résumée dans les trois étapes suivantes : (1) la modélisation, qui consiste à condenser l'information pertinente pour les signaux dans un sous-espace plus petit ; (2) l'acquisition, qui consiste à collecter et préserver l'information dans un espace inférieur à la dimension des données ; et (3) la restauration, qui consiste à reconstituer l'information dans son espace d'origine. En suivant cette pensée, les principales contributions de cette thèse, concernant les observateurs et le « Compressive Sensing » (CS) basé sur des modèles bayésiens, peuvent être unies dans le cadre de la collecte de l'information : les principaux problèmes concernés par ces deux applications peuvent, de façon analogue, être scindés en les trois étapes susmentionnées. Dans la première partie de la thèse, le problème réside dans le domaine des systèmes dynamiques, où l'objectif est de retrouver l'état du système à partir de la mesure de la sortie. Il nous faut donc déterminer si les états du système sont récupérables à partir des mesures de la sortie et de la connaissance partielle ou totale du modèle dynamique : c'est le problème de l'observabilité. Ensuite, transposer notre problème dans une représentation plus appropriée : c'est l'écriture sous forme normale ; et en récupérer l'information : c'est la phase de synthèse d'observateur.
Plus précisément, dans cette partie, nous avons considéré une classe de systèmes à commutation haute fréquence, allant jusqu'au phénomène de Zénon. Pour ces deux types de commutation, les transitions de l'état discret sont considérées trop élevées pour être mesurées. Toutefois, la valeur moyenne obtenue par filtrage des transitions peut être acquise, ce qui donne une connaissance partielle des états discrets. Ici, avec ces seules informations partielles, nous avons discuté de l'observabilité par les approches de géométrie différentielle et algébrique, et des observateurs ont été proposés par la suite. Dans la deuxième partie de cette thèse, nous avons abordé de la même manière le thème du CS, qui est une alternative efficace à l'acquisition abondante de données faiblement informatives pour ensuite les compresser. Le CS se propose de collecter l'information directement de façon compressée ; ici, les points clés sont la modélisation du signal en fonction des connaissances a priori dont on dispose, la construction d'une matrice de mesure satisfaisant la « restricted isometry property », et finalement la restauration des signaux originaux clairsemés en utilisant des algorithmes de régularisation parcimonieuse et d'inversion linéaire. Plus précisément, dans cette seconde partie, en considérant les propriétés du CS liées à la modélisation, la capture et la restauration, il est proposé : (1) d'exploiter les séquences chaotiques pour construire la matrice de mesure, appelée matrice chaotique de mesure ; (2) de considérer des types de modèles de signaux clairsemés et de reconstruire le modèle du signal à partir des structures sous-jacentes des modèles clairsemés ; et (3) de proposer trois algorithmes non paramétriques pour la méthode bayésienne hiérarchique.
Dans cette dernière partie, des résultats expérimentaux prouvent d'une part que la matrice chaotique de mesure a des propriétés semblables aux matrices aléatoires sous-gaussiennes, et d'autre part que des informations supplémentaires sur les structures sous-jacentes clairsemées améliorent grandement les performances de reconstruction du signal et sa robustesse vis-à-vis du bruit. / This is the era of information explosion: billions of data are produced, collected and then stored in our daily life. The manners of collecting data sets are various, but always follow the criterion of the less data, the more information. Thus the most favourable way is to directly measure the information, which commonly resides in a lower-dimensional space than its carrier, namely the data (signals or states). This method is thus called information measuring, and conceptually can be described in a framework with the following three steps: (1) modeling, to condense the information relevant to signals to a small subspace; (2) measuring, to preserve the information in a lower-dimensional measurement space; and (3) restoring, to reconstruct signals from the lower-dimensional measurements. In this vein, the main contributions of this thesis, namely observer design and model-based Bayesian compressive sensing, can be well unified in the framework of information measuring: the main problems of both applications can be decomposed into the above three aspects. In the first part, the problem resides in the domain of control systems, where the objective of observer design lies in observability, to determine whether the system states are recoverable, and in observation of the system states from the lower-dimensional measurements (commonly but not restrictively). Specifically, we considered a class of switched systems with high switching frequency, or even with Zeno phenomenon, where the transitions of the discrete state are too fast to be captured.
However, the averaged value obtained through filtering the transitions can be easily sensed as partial knowledge. Consequently, with only this partial knowledge, we discussed the observability from the differential geometric approach and the algebraic approach respectively, and the corresponding observers are designed as well. In the second part, we switched to the topic of compressive sensing, which aims at sampling sparse signals directly in a compressed manner; the central fundamentals reside in modeling the signal according to available priors, constructing a sensing matrix satisfying the so-called restricted isometry property, and restoring the original sparse signals using sparsity-regularized linear inversion algorithms. Considering the properties of CS related to modeling, measuring and restoring, we propose to (1) exploit chaotic sequences to construct the sensing matrix (or measuring operator), which is called the chaotic sensing matrix, (2) further consider the sparsity model and then rebuild the signal model to account for structures underlying the sparsity patterns, and (3) propose three non-parametric algorithms through the hierarchical Bayesian method. The experimental results prove that the chaotic sensing matrix has similar properties to a sub-Gaussian random matrix, and that the additional consideration of structures underlying the sparsity patterns largely improves the performance of reconstruction and its robustness.
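The chaotic-sensing-matrix idea can be sketched as follows: a logistic-map orbit, centered, fills the measurement matrix, and a sparse signal is then recovered from the compressed measurements. Orthogonal matching pursuit stands in here for the thesis's hierarchical Bayesian algorithms, and all dimensions and parameters are illustrative assumptions.

```python
import numpy as np

def chaotic_sensing_matrix(m, n, x0=0.3, mu=4.0, skip=1000):
    """Sensing matrix filled from a logistic-map orbit
    x_{k+1} = mu * x_k * (1 - x_k); centering the (deterministic)
    sequence yields entries that behave much like i.i.d. noise."""
    x = x0
    for _ in range(skip):                  # discard the transient
        x = mu * x * (1 - x)
    vals = np.empty(m * n)
    for i in range(m * n):
        x = mu * x * (1 - x)
        vals[i] = x
    a = (vals - 0.5).reshape(m, n)
    return a / np.linalg.norm(a, axis=0)   # unit-norm columns

def omp(a, y, k):
    """Orthogonal matching pursuit: greedily add the column most
    correlated with the residual, then least-squares refit."""
    support, r = [], y.copy()
    coef = np.zeros(0)
    for _ in range(k):
        support.append(int(np.argmax(np.abs(a.T @ r))))
        coef, *_ = np.linalg.lstsq(a[:, support], y, rcond=None)
        r = y - a[:, support] @ coef
    x = np.zeros(a.shape[1])
    x[support] = coef
    return x

m, n, k = 48, 128, 3
a = chaotic_sensing_matrix(m, n)
x_true = np.zeros(n)
x_true[[5, 70, 111]] = [1.0, -2.0, 1.5]    # sparse test signal
x_hat = omp(a, a @ x_true, k)
```

Although the matrix is fully deterministic (reproducible from the seed x0 alone), its decorrelated entries let greedy sparse recovery behave much as it would with a random sub-Gaussian matrix.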

Sensorless field oriented control of brushless permanent magnet synchronous motors

Mevey, James Robert January 1900 (has links)
Master of Science / Department of Electrical and Computer Engineering / James E. DeVault / Working with the subject of sensorless motor control requires an understanding of several topical areas; this report presents an understanding that was gained during this research work. The fundamentals of electric motors (particularly brushless motors) are developed from first principles and the basic models are discussed. The theory of sinusoidal synchronous motors is reviewed (phasor analysis of the single phase equivalent circuit). The concept of a complex space vector is introduced and developed using a working knowledge of the sinusoidal synchronous motor. This leads to the presentation of the space vector model of the permanent magnet synchronous motor, in both the stationary and rotor reference frames. An overview of the operation of three-phase voltage source inverters is given, followed by an explanation of space vector modulation and its relationship to regular sinusoidal pulse width modulation. Torque control of the permanent magnet synchronous machine is reviewed in several reference frames and then rotor-flux-field-oriented-control is explained. Finally, some schemes for sensorless operation are discussed.
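The reference-frame transformations at the core of field-oriented control can be sketched compactly. Below, an amplitude-invariant Clarke transform maps three phase currents to the stationary alpha-beta frame, and a Park transform rotates them into the rotor d-q frame; for balanced sinusoidal currents locked to the rotor angle, the d-q values come out constant. The numerical example is illustrative only.

```python
import math

def clarke(ia, ib, ic):
    """Amplitude-invariant Clarke transform: three phase currents to
    the stationary alpha-beta frame."""
    alpha = (2.0 / 3.0) * (ia - 0.5 * ib - 0.5 * ic)
    beta = (2.0 / 3.0) * (math.sqrt(3.0) / 2.0) * (ib - ic)
    return alpha, beta

def park(alpha, beta, theta):
    """Park transform: rotate the stationary alpha-beta vector into
    the rotor d-q frame using the electrical rotor angle theta."""
    d = alpha * math.cos(theta) + beta * math.sin(theta)
    q = -alpha * math.sin(theta) + beta * math.cos(theta)
    return d, q

# Balanced sinusoidal currents aligned with the rotor angle should
# map to a constant d-q vector (here d = 1, q = 0).
theta = 1.2
ia = math.cos(theta)
ib = math.cos(theta - 2.0 * math.pi / 3.0)
ic = math.cos(theta + 2.0 * math.pi / 3.0)
alpha, beta = clarke(ia, ib, ic)
d, q = park(alpha, beta, theta)
```

This constancy in the rotating frame is what makes torque control tractable, and it is also why sensorless schemes must estimate theta: without the rotor angle, the Park rotation cannot be applied.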

Assessing and Optimizing Pinhole SPECT Imaging Systems for Detection Tasks

Gross, Kevin Anthony January 2006 (has links)
The subject of this dissertation is the assessment and optimization of image quality for multiple-pinhole, multiple-camera SPECT systems. These systems collect gamma-ray photons emitted from an object using pinhole apertures. Conventional measures of image quality, such as the signal-to-noise ratio or the modulation transfer function, do not predict how well a system's images can be used to perform a relevant task. This dissertation takes the stance that the ultimate measure of image quality is how well images produced by a system can be used to perform a task. Furthermore, we recognize that image quality is inherently a statistical concept that must be assessed as the average task performance across a large ensemble of images. The tasks considered in this dissertation are detection tasks. Namely, we consider detecting a known three-dimensional signal embedded in a three-dimensional stochastic object using the Bayesian ideal observer. Out of all possible observers (human or otherwise), the ideal observer sets the absolute upper bound for detection task performance by using all possible information in the image data. By employing a stochastic object model we can account for the effects of object variability, which has a large effect on observer performance. An imaging system whose hardware has been optimized for ideal observer detection task performance is one that maximally transfers detection-task-relevant information to the image data. The theory and simulation of image quality, detection tasks, and gamma-ray imaging are presented. Assessments of ideal observer detection task performance are used to optimize imaging hardware for SPECT systems as well as to rank different imaging system designs.
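For the special case of a known signal in Gaussian noise, the ideal observer reduces to the prewhitening matched filter, whose detectability obeys d'^2 = s^T K^{-1} s. The sketch below, with an invented 1-D signal and AR(1)-style covariance, compares it to a non-prewhitening observer, which by the Cauchy-Schwarz inequality can only do worse.

```python
import numpy as np

# A known 1-D "signal" profile and an AR(1)-style correlated noise
# covariance (both invented for illustration).
npix = 32
x = np.arange(npix)
s = np.exp(-0.5 * ((x - 16) / 2.5) ** 2)        # known signal profile
k = 0.5 ** np.abs(x[:, None] - x[None, :])      # noise covariance K

# Ideal (prewhitening matched filter) observer: d'^2 = s^T K^{-1} s.
d_ideal = np.sqrt(s @ np.linalg.solve(k, s))

# Non-prewhitening matched filter ignores the noise correlation:
# d'_npw = (s^T s) / sqrt(s^T K s), never above the ideal value.
d_npw = (s @ s) / np.sqrt(s @ k @ s)
```

The gap between d_npw and d_ideal quantifies the information lost by ignoring noise correlations, which is precisely why ideal-observer performance serves as the hardware-optimization figure of merit above.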

The Actor-Observer Effect and Perceptions of Agency: The Options of Obedience and Pro-social Behavior

Downs, Samuel David 06 June 2012 (has links)
The actor-observer effect suggests that actors attribute behavior to the situation while observers attribute it to the actor's disposition. This effect has come under scrutiny because of an alternative perspective that accounts for anomalous findings. This alternative, called the contextual perspective, suggests that actors and observers foreground different aspects of the context because of their relationship with the context, and has roots in Gestalt psychology and phenomenology. I manipulated a researcher's prompt and the presence of a distressed confederate as the context for attributions, and hypothesized that actors and observers would differ in attributions to choice, situation, and disposition because of the presence of the distressed confederate. Actors were presented with either a distressed or non-distressed confederate and either a prompt to leave, a prompt to stay, or no prompt. For example, some actors experienced a distressed confederate and were asked to leave, while others experienced a non-distressed confederate and were asked to stay. Actors then decided either to stay and help the confederate or to leave. Observers watched one of ten videos, each showing one actor condition in which the actor either stayed or left (five actor conditions by two options: stay or leave). Actors' and observers' choice, situational, and dispositional attributions were analyzed using factorial MANOVAs. Actors and observers foregrounded the distressed confederate when making attributions to choice, situation, and disposition. Furthermore, observers' attributions to choice were also influenced by the actor's behavior. These findings support the contextual perspective, since context does influence actors' and observers' attributions.

Image-based visual servoing of a quadrotor using model predictive control

Sheng, Huaiyuan 19 December 2019 (has links)
With numerous distinct advantages, quadrotors have found a wide range of applications, such as structural inspection, traffic control, search and rescue, and agricultural surveillance. To better serve applications in cluttered environments, quadrotors are further equipped with vision sensors to enhance their state sensing and environment perception capabilities. Moreover, visual information can also be used to guide the motion control of the quadrotor; this is referred to as visual servoing of the quadrotor. In this thesis, we identify the challenging problems arising in visual servoing of the quadrotor and propose effective control strategies to address them. The control objective considered in this thesis is to regulate the relative pose of the quadrotor to a ground target using a limited number of sensors, e.g., a monocular camera and an inertial measurement unit. The camera is attached underneath the center of the quadrotor, facing down. The ground target is a planar object consisting of multiple points. The image features are selected as image moments defined in a "virtual image plane". These image features yield image kinematics that are independent of the tilt motion of the quadrotor. This independence enables the separation of the high-level visual servoing controller design from the low-level attitude tracking control. A high-gain observer-based model predictive control (MPC) scheme is proposed in this thesis to address the image-based visual servoing of the quadrotor. The high-gain observer estimates the linear velocity of the quadrotor, which is part of the system state but, due to the limited number of sensors on board, is not directly measurable; the observer delivers these estimates to the model predictive controller. The model predictive controller, in turn, generates the desired thrust force and yaw rate to regulate the pose of the quadrotor relative to the ground target. By using the MPC controller, the tilt motion of the quadrotor can be effectively bounded so that the scene of the ground target is maintained in the field of view of the camera; this requirement is referred to as the visibility constraint, and satisfying it is a prerequisite for visual servoing of the quadrotor. Simulation and experimental studies are performed to verify the effectiveness of the proposed control strategies. Moreover, image processing algorithms are developed to extract the image features from the captured images, as required by the experimental implementation. / Graduate / 2020-12-11
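The velocity-estimation idea in this abstract can be sketched on a single hypothetical translational axis: a high-gain observer reconstructs the unmeasured velocity from position measurements alone. This is a minimal illustration under assumed conditions (double-integrator dynamics, known acceleration input, forward-Euler integration, hypothetical gains), not the thesis's full image-feature observer.

```python
import math

dt, eps = 1e-4, 0.02    # step size; small eps means high observer gain
a1, a2 = 2.0, 1.0       # chosen so s^2 + a1*s + a2 is Hurwitz
p, v = 0.0, 1.0         # true position and (unmeasured) velocity
ph, vh = 0.0, 0.0       # observer estimates; velocity starts unknown

t = 0.0
for _ in range(int(2.0 / dt)):      # simulate 2 seconds
    u = math.sin(t)                 # known acceleration input (thrust)
    e = p - ph                      # innovation from the measured position
    ph += dt * (vh + (a1 / eps) * e)
    vh += dt * (u + (a2 / eps**2) * e)
    p, v = p + dt * v, v + dt * u   # true double-integrator dynamics
    t += dt
# vh now tracks the true velocity v closely despite never measuring it.
```

Driving eps toward zero speeds up convergence of the estimation error but amplifies measurement noise and the initial "peaking" transient, which is one reason the estimates are fed into a constrained MPC rather than a bare state-feedback law.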
