21

Fault estimation algorithms : design and verification

Su, Jinya. January 2016
The research in this thesis is motivated by the observation that modern systems are becoming increasingly complex and safety-critical as requirements on system smartness and autonomy grow; as a result, health monitoring systems need to be developed to meet the requirements on system safety and reliability. The state-of-the-art approaches to monitoring system status are model-based Fault Diagnosis (FD) systems, which can fuse the advantages of physical system modelling and sensor characteristics. A number of model-based FD approaches have been proposed. The conventional residual-based approaches, which monitor system output estimation errors, may however have certain limitations, such as complex diagnosis logic for fault isolation, low sensitivity to system faults and high computational load. More importantly, little attention has been paid to the problem of fault diagnosis system verification, which addresses the question of under what conditions (i.e., levels of uncertainty) a fault diagnosis system is valid. To this end, this thesis investigates the design and verification of fault diagnosis algorithms. It first highlights the differences between two popular FD approaches (residual based and fault estimation based) through a case study. On this basis, a set of uncertainty estimation algorithms is proposed to generate fault estimates according to different specifications, after interpreting the FD problem as an uncertainty estimation problem. FD algorithm verification and threshold selection are then investigated, considering that there are always mismatches between the real plant and the mathematical model used for FD observer design. Reachability analysis is employed to evaluate the effect of uncertainties and faults, so that the conditions under which an FD algorithm is valid can be quantitatively verified. First, the fault estimation algorithms proposed in this thesis extend the existing approaches by pooling the available prior information, so that performance is enhanced, and at the same time relax the existence conditions and reduce the computational load by exploiting a reduced-order observer structure. Second, the proposed framework for fault diagnosis system verification bridges the gap between academia and industry, since the conditions under which a given FD algorithm is effective can be verified, and different FD algorithms can be compared and selected for different application scenarios. It should be highlighted that although the algorithm design and verification are developed for fault diagnosis systems, they can also be applied to other systems, such as disturbance rejection control systems, among many others.
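As a point of reference for the residual-based approach contrasted above (this is not taken from the thesis itself), the following minimal sketch shows a Luenberger observer whose output-estimation error (the residual) is thresholded to flag a fault; the plant matrices, fault size, noise level, and threshold are invented for illustration only.

```python
import numpy as np

# Illustrative discrete-time plant x_{k+1} = A x_k + B u_k + F f_k, y_k = C x_k + noise.
# All matrices, the fault size and the threshold are assumptions made for this sketch.
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[0.0], [0.1]])
F = np.array([[0.0], [0.1]])      # fault entry (actuator channel)
C = np.array([[1.0, 0.0]])
L = np.array([[0.5], [0.3]])      # observer gain; A - L C has eigenvalues 0.7 and 0.5

def run_fd(n_steps=300, fault_at=150, fault_size=1.0, threshold=0.05):
    x = np.zeros((2, 1))
    x_hat = np.zeros((2, 1))
    alarms = []
    for k in range(n_steps):
        u = np.array([[1.0]])
        f = np.array([[fault_size]]) if k >= fault_at else np.zeros((1, 1))
        y = C @ x + 0.01 * np.random.randn(1, 1)   # noisy measurement
        r = y - C @ x_hat                          # residual: output estimation error
        alarms.append(abs(r[0, 0]) > threshold)    # fault flagged when residual exceeds threshold
        x = A @ x + B @ u + F @ f                  # plant update
        x_hat = A @ x_hat + B @ u + L @ r          # Luenberger observer update
    return alarms

alarms = run_fd()
print("first alarm at step:", alarms.index(True) if any(alarms) else "none")
```

The sketch also illustrates the limitation the abstract mentions: the residual only signals that a fault occurred, whereas a fault estimation approach would additionally reconstruct its magnitude.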
22

Sensorless field oriented control of brushless permanent magnet synchronous motors

Mevey, James Robert. January 1900
Master of Science / Department of Electrical and Computer Engineering / James E. DeVault / Working with the subject of sensorless motor control requires an understanding of several topical areas; this report presents the understanding gained during this research work. The fundamentals of electric motors (particularly brushless motors) are developed from first principles and the basic models are discussed. The theory of sinusoidal synchronous motors is reviewed (phasor analysis of the single-phase equivalent circuit). The concept of a complex space vector is introduced and developed using a working knowledge of the sinusoidal synchronous motor. This leads to the presentation of the space vector model of the permanent magnet synchronous motor, in both the stationary and rotor reference frames. An overview of the operation of three-phase voltage source inverters is given, followed by an explanation of space vector modulation and its relationship to regular sinusoidal pulse width modulation. Torque control of the permanent magnet synchronous machine is reviewed in several reference frames and then rotor-flux field-oriented control is explained. Finally, some schemes for sensorless operation are discussed.
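As a companion to the reference-frame discussion above (not taken from the report itself), the sketch below shows the amplitude-invariant Clarke and Park transformations that map three-phase quantities into the stationary alpha-beta frame and then into the rotating d-q frame used for field-oriented control; the current values are arbitrary.

```python
import numpy as np

def clarke(a, b, c):
    """Amplitude-invariant Clarke transform: three phase quantities -> stationary alpha-beta frame."""
    alpha = (2.0 / 3.0) * (a - 0.5 * b - 0.5 * c)
    beta = (2.0 / 3.0) * (np.sqrt(3) / 2.0) * (b - c)
    return alpha, beta

def park(alpha, beta, theta):
    """Rotate the stationary-frame vector into the rotating (d-q) frame at electrical angle theta."""
    d = alpha * np.cos(theta) + beta * np.sin(theta)
    q = -alpha * np.sin(theta) + beta * np.cos(theta)
    return d, q

# Balanced sinusoidal phase currents, peak 10 A, sampled at a few electrical angles
for th in np.linspace(0, 2 * np.pi, 6, endpoint=False):
    ia = 10 * np.cos(th)
    ib = 10 * np.cos(th - 2 * np.pi / 3)
    ic = 10 * np.cos(th + 2 * np.pi / 3)
    d, q = park(*clarke(ia, ib, ic), th)
    print(f"theta={th:4.2f}  id={d:6.2f}  iq={q:6.2f}")   # constant id=10, iq=0 in the rotating frame
```

The constant d-q values for a balanced sinusoidal set are what make controller design in the rotor reference frame convenient.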
23

DIFFERENCES IN DIMENSIONS OF CHILDHOOD FUNCTIONING IN CHILDREN OF PRETERM VERSUS FULL TERM BIRTH STATUS

Turner, Tameika Shenay. 01 January 2006
As medical advances are made in the area of neonatology, more and more premature babies are surviving at younger gestational ages and lower birth weights. Growth in the survival rates of preterm infants leads to questions regarding the long-term developmental trajectory of these children. The current study sought to expand on research regarding dimensions of childhood functioning and to apply it to the problem of prematurity by (a) utilizing a new instrument, the Merrill-Palmer-Revised edition, (b) including children of preterm and full term birth statuses from as young as 2 months of age, and (c) collecting data from parental and clinician reports. In addition to attempting to clarify the relationship between birth status and childhood dysfunction, this study also sought to augment the existing literature by exploring the correlation between parental report and clinician observation of childhood dysfunction. The results of this study did not support the hypothesis that children of preterm birth demonstrate more problems in functioning than their full term peers. Although there were more significant differences between preterm and full term children in the older cohort group, those differences did not consistently reflect dysfunction in the preterm children. Additionally, this study considered dimensions of dysfunction as measured by parental report and clinician observations. Notably, a lack of agreement between parent and clinician observations emerged for the young age cohort group. However, the high level of agreement for the older children suggests that parental and clinician perspectives converge for older children. Contrary to the hypothesis, birth status, gender, ethnicity, and SES did not collectively form a specific risk index for dysfunction. However, these factors did interact with each other to predict functioning on several scales. In fact, there were no significant main effects; instead, predictors of dysfunction were interactions of variables such as birth status, age, gender, and ethnicity. This general finding illustrates the importance of taking into consideration all aspects of the child's situation when making an assessment of functioning.
24

Bioprocess Software Sensors Development Facing Modelling and Model Uncertainties / Développement de Capteurs Logiciels pour les Bioprocédés face aux incertitudes de modélisation et de modèle

Hulhoven, Xavier. 07 December 2006
The exponential development of biotechnology has led to a quasi-unlimited number of potential products, ranging from biopolymers to vaccines. Cell culture has therefore evolved from simply growing cells outside their natural environment to using them to produce molecules that they do not naturally produce. This rapid development could not continue without new control and supervision tools as well as a good understanding of the process. This requirement, however, calls for a larger diversity and better accessibility of process measurements. In this framework, software sensors show numerous potentialities. The objective of a software sensor is indeed to provide an estimation of the system state variables, particularly those which are not obtained through in situ hardware sensors or through laborious and expensive analyses; this is achieved by combining a model of the system with some physical measurements within a state observation algorithm. In this context, this work attempts to take into account both the increasing complexity and diversity of bioprocesses and the time scale of process development, and it favours systematic modelling methodology, flexibility and speed of development. In the field of state observation, an important modelling constraint is the one induced by the selection of the states to estimate and of the available measurements. Another important constraint is the model quality. The central axis of this work is to provide solutions that reduce the weight of these constraints on software sensor development. For this purpose, we propose four solutions to four main questions that may arise. The first two concern modelling uncertainties. 1. "How to develop a software sensor using measurements easily available on a pilot-scale bioreactor?" The proposed solution is a static software sensor using an artificial neural network. Following this modelling methodology, we developed static software sensors for the biomass and ethanol concentrations in a pilot-scale S. cerevisiae cell culture, using the measurements of titrating base quantity, agitation rate and CO₂ concentration in the exhaust gas. 2. "How to obtain a reaction scheme and a kinetic model to develop a dynamic observation model?" The proposed solution combines three elements: a systematic methodology to generate, identify and select the possible reaction schemes, a general kinetic model, and a systematic identification procedure whose last step is particularly dedicated to the identification of observation models. Combining these methodologies allowed us to develop a software sensor for the concentration of an allergen produced by an animal cell culture, using the discrete measurements of glucose, glutamine and ammonium concentrations (which are also estimated in continuous time by the software sensor). The two other questions deal with kinetic model uncertainty. 3. "How to correct kinetic model parameters while keeping the system observability?" We consider the possibility of correcting some model parameters during process observation. We propose an adaptive observer based on the theory of the most likely initial conditions observer and exploiting the information from the asymptotic observer. This algorithm allows joint estimation of the state and of some kinetic model parameters. 4. "How to avoid a state observer selection that requires a priori knowledge of the model quality?" Answering this question led us to the development of hybrid state observers. The general principle of a hybrid observer is to automatically evaluate the model quality and to select the appropriate state observer.
In this work we focus on kinetic model quality and propose hybrid observers that evolve between the state estimation provided by an exponential observer (free tuning of the convergence rate but sensitivity to model errors) and the one provided by an asymptotic observer (no kinetic model requirement but a convergence rate depending on the dilution rate). Two strategies are investigated in order to evaluate the model quality and to drive the evolution of the state estimation. Each of them has been validated on two simulated cultures (microbial and animal cells) and one real industrial culture (B. subtilis). ∙ In the first strategy, the hybrid observer is based on the determination of a parameter that drives the state estimation from the one obtained with an exponential observer, when the model is of good quality, to the one provided by an asymptotic observer, when a kinetic model error is detected. This driving parameter is evaluated either with an a priori defined function or jointly with the identification of the initial conditions in a most likely initial conditions observer. ∙ In the second strategy, the hybrid observer is based on a statistical test that compares the state estimations provided by the exponential and asymptotic observers; when an inconsistency between the two is detected, the estimate provided by the exponential observer is corrected accordingly.
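The weighting idea behind the first hybrid-observer strategy can be illustrated with a toy sketch; the driving function below is an assumption made for illustration only and is neither the thesis's a priori function nor its statistical test.

```python
import numpy as np

def hybrid_blend(x_exp, x_asym, residual, noise_std, gain=2.0):
    """
    Blend exponential-observer and asymptotic-observer estimates.
    While the output residual of the exponential observer stays within what
    measurement noise alone would explain, trust it (w -> 1); when the
    residual grows (a symptom of kinetic-model error), slide toward the
    model-free asymptotic estimate (w -> 0).
    """
    w = np.exp(-gain * (residual / noise_std) ** 2)   # illustrative driving function (assumption)
    return w * x_exp + (1.0 - w) * x_asym, w

# Toy usage: biomass estimates from the two observers and a growing residual
for r in [0.0, 0.5, 1.0, 2.0]:
    x, w = hybrid_blend(x_exp=5.2, x_asym=4.8, residual=r, noise_std=1.0)
    print(f"residual={r:3.1f}  weight={w:4.2f}  blended estimate={x:5.2f}")
```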
25

Assessing and Optimizing Pinhole SPECT Imaging Systems for Detection Tasks

Gross, Kevin Anthony. January 2006
The subject of this dissertation is the assessment and optimization of image quality for multiple-pinhole, multiple-camera SPECT systems. These systems collect gamma-ray photons emitted from an object using pinhole apertures. Conventional measures of image quality, such as the signal-to-noise ratio or the modulation transfer function, do not predict how well a system's images can be used to perform a relevant task. This dissertation takes the stance that the ultimate measure of image quality is how well the images produced by a system can be used to perform a task. Furthermore, we recognize that image quality is inherently a statistical concept that must be assessed as the average task performance across a large ensemble of images. The tasks considered in this dissertation are detection tasks; namely, we consider detecting a known three-dimensional signal embedded in a three-dimensional stochastic object using the Bayesian ideal observer. Out of all possible observers (human or otherwise), the ideal observer sets the absolute upper bound on detection task performance by using all possible information in the image data. By employing a stochastic object model we can account for the effects of object variability, which has a large effect on observer performance. An imaging system whose hardware has been optimized for ideal-observer detection task performance is an imaging system that maximally transfers detection-task-relevant information to the image data. The theory and simulation of image quality, detection tasks, and gamma-ray imaging are presented. Assessments of ideal-observer detection task performance are used to optimize imaging hardware for SPECT systems as well as to rank different imaging system designs.
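For a concrete (and much simplified) view of how detection-task performance can be scored, the sketch below computes the detectability of a known signal in Gaussian noise, where the Bayesian ideal observer reduces to the prewhitening matched filter with SNR^2 = s^T K^{-1} s; the signal and noise statistics are synthetic, and the dissertation's stochastic object models are far richer than this Gaussian assumption.

```python
import numpy as np

def hotelling_detectability(signal, cov):
    """
    Detectability (SNR) of a known signal in zero-mean Gaussian noise with
    covariance `cov`: SNR^2 = s^T K^{-1} s. Under Gaussian statistics this
    coincides with the Bayesian ideal observer's performance.
    """
    w = np.linalg.solve(cov, signal)       # prewhitening matched filter template
    return float(np.sqrt(signal @ w))

# Toy example: a 5-pixel signal in correlated background noise
rng = np.random.default_rng(0)
signal = np.array([0.0, 0.5, 1.0, 0.5, 0.0])
raw = rng.normal(size=(10000, 6))
noise_samples = raw[:, :5] + 0.5 * raw[:, 1:]   # overlapping sums -> correlated neighbouring pixels
cov = np.cov(noise_samples, rowvar=False)       # sample covariance of the background
print("SNR =", round(hotelling_detectability(signal, cov), 2))
```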
26

Advanced Techniques for Image Quality Assessment of Modern X-ray Computed Tomography Systems

Solomon, Justin Bennion. January 2016
X-ray computed tomography (CT) imaging constitutes one of the most widely used diagnostic tools in radiology today, with nearly 85 million CT examinations performed in the U.S. in 2011. CT imparts a relatively high radiation dose to the patient compared with other x-ray imaging modalities; as a result of this fact, coupled with its popularity, CT is currently the single largest source of medical radiation exposure to the U.S. population. For this reason, there is a critical need to optimize CT examinations such that the dose is minimized while the quality of the CT images is not degraded. This optimization can be difficult to achieve due to the relationship between dose and image quality: all else being equal, reducing the dose degrades image quality and can impact the diagnostic value of the CT examination.

A recent push from the medical and scientific community towards using lower doses has spawned new dose reduction technologies such as automatic exposure control (i.e., tube current modulation) and iterative reconstruction algorithms. In theory, these technologies could allow scanning at reduced doses while maintaining the image quality of the exam at an acceptable level. Therefore, there is a scientific need to establish the dose reduction potential of these new technologies in an objective and rigorous manner. Establishing these dose reduction potentials requires precise and clinically relevant metrics of CT image quality, as well as practical and efficient methodologies to measure such metrics on real CT systems. The currently established methodologies for assessing CT image quality are not appropriate for modern CT scanners that have implemented the aforementioned dose reduction technologies.

Thus, the purpose of this doctoral project was to develop, assess, and implement new phantoms, image quality metrics, analysis techniques, and modeling tools that are appropriate for image quality assessment of modern clinical CT systems. The project developed image quality assessment methods in the context of three distinct paradigms: (a) uniform phantoms, (b) textured phantoms, and (c) clinical images.

The work in this dissertation used the "task-based" definition of image quality: image quality was broadly defined as the effectiveness with which an image can be used for its intended task. Under this definition, any assessment of image quality requires three components: (1) a well-defined imaging task (e.g., detection of subtle lesions), (2) an "observer" to perform the task (e.g., a radiologist or a detection algorithm), and (3) a way to measure the observer's performance in completing the task at hand (e.g., detection sensitivity/specificity).

First, this task-based image quality paradigm was implemented using a novel multi-sized phantom platform (with uniform background) developed specifically to assess modern CT systems (Mercury Phantom, v3.0, Duke University). A comprehensive evaluation was performed on a state-of-the-art CT system (SOMATOM Definition Force, Siemens Healthcare) in terms of noise, resolution, and detectability as a function of patient size, dose, tube energy (i.e., kVp), automatic exposure control, and reconstruction algorithm (Filtered Back-Projection, FBP, vs. Advanced Modeled Iterative Reconstruction, ADMIRE). A mathematical observer model (i.e., a computer detection algorithm) was implemented and used as the basis of image quality comparisons. It was found that image quality increased with increasing dose and decreasing phantom size. The CT system exhibited nonlinear noise and resolution properties, especially at very low doses, large phantom sizes, and for low-contrast objects. Objective image quality metrics generally increased with increasing dose and ADMIRE strength, and with decreasing phantom size. The ADMIRE algorithm could offer comparable image quality at reduced doses or improved image quality at the same dose (an increase in detectability index of up to 163% depending on iterative strength). The use of automatic exposure control resulted in more consistent image quality with changing phantom size.

Based on those results, the dose reduction potential of ADMIRE was further assessed specifically for the task of detecting small (<=6 mm) low-contrast (<=20 HU) lesions. A new low-contrast detectability phantom (with uniform background) was designed and fabricated using a multi-material 3D printer. The phantom was imaged at multiple dose levels and images were reconstructed with FBP and ADMIRE. Human perception experiments were performed to measure the detection accuracy from FBP and ADMIRE images. It was found that ADMIRE had equivalent performance to FBP at 56% less dose.

Using the same image data as the previous study, a number of different mathematical observer models were implemented to assess which models would result in image quality metrics that best correlated with human detection performance. The models included naïve simple metrics of image quality such as the contrast-to-noise ratio (CNR) and more sophisticated observer models such as the non-prewhitening matched filter observer model family and the channelized Hotelling observer model family. It was found that the non-prewhitening matched filter observers and the channelized Hotelling observers both correlated strongly with human performance. Conversely, CNR was found not to correlate strongly with human performance, especially when comparing different reconstruction algorithms.

The uniform background phantoms used in the previous studies provided a good first-order approximation of image quality. However, due to their simplicity and the complexity of iterative reconstruction algorithms, it is possible that such phantoms are not fully adequate to assess the clinical impact of iterative algorithms, because patient images obviously do not have smooth uniform backgrounds. To test this hypothesis, two textured phantoms (classified as gross texture and fine texture) and a uniform phantom of similar size were built and imaged on a SOMATOM Flash scanner (Siemens Healthcare). Images were reconstructed using FBP and Sinogram Affirmed Iterative Reconstruction (SAFIRE). Using an image subtraction technique, quantum noise was measured in all images of each phantom. It was found that in FBP, the noise was independent of the background (textured vs. uniform). However, for SAFIRE, noise increased by up to 44% in the textured phantoms compared to the uniform phantom. As a result, the noise reduction from SAFIRE was found to be up to 66% in the uniform phantom but as low as 29% in the textured phantoms. Based on this result, it is clear that further investigation was needed to understand the impact that background texture has on image quality when iterative reconstruction algorithms are used.

To further investigate this phenomenon with more realistic textures, two anthropomorphic textured phantoms were designed to mimic lung vasculature and fatty soft-tissue texture. The phantoms (along with a corresponding uniform phantom) were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Scans were repeated a total of 50 times in order to obtain ensemble statistics of the noise. A novel method of estimating the noise power spectrum (NPS) from irregularly shaped ROIs was developed. It was found that SAFIRE images had highly locally non-stationary noise patterns, with pixels near edges having higher noise than pixels in more uniform regions. Compared to FBP, SAFIRE images had 60% less noise on average in uniform regions; for edge pixels, noise was between 20% higher and 40% lower. The noise texture (i.e., NPS) was also highly dependent on the background texture for SAFIRE. Therefore, it was concluded that quantum noise properties in uniform phantoms are not representative of those in patients for iterative reconstruction algorithms, and texture should be considered when assessing image quality of iterative algorithms.

To move beyond just assessing noise properties in textured phantoms towards assessing detectability, a series of new phantoms were designed specifically to measure low-contrast detectability in the presence of background texture. The textures used were optimized to match the texture in the liver regions of actual patient CT images using a genetic algorithm. The so-called "Clustered Lumpy Background" texture synthesis framework was used to generate the modeled texture. Three textured phantoms and a corresponding uniform phantom were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Images were reconstructed with FBP and SAFIRE and analyzed using a multi-slice channelized Hotelling observer to measure detectability and the dose reduction potential of SAFIRE based on the uniform and textured phantoms. It was found that at the same dose, the improvement in detectability from SAFIRE (compared to FBP) was higher when measured in a uniform phantom than in textured phantoms.

The final trajectory of this project aimed at developing methods to mathematically model lesions, as a means to help assess image quality directly from patient images. The mathematical modeling framework is first presented. The models describe a lesion's morphology in terms of size, shape, contrast, and edge profile as an analytical equation. The models can be voxelized and inserted into patient images to create so-called "hybrid" images. These hybrid images can then be used to assess detectability or estimability, with the advantage that the ground truth of the lesion morphology and location is known exactly. Based on this framework, a series of liver lesions, lung nodules, and kidney stones were modeled based on images of real lesions. The lesion models were virtually inserted into patient images to create a database of hybrid images to go along with the original database of real lesion images. ROI images from each database were assessed by radiologists in a blinded fashion to determine the realism of the hybrid images. It was found that the radiologists could not readily distinguish between real and virtual lesion images (area under the ROC curve was 0.55). This study provided evidence that the proposed mathematical lesion modeling framework could produce reasonably realistic lesion images.

Based on that result, two studies were conducted to demonstrate the utility of the lesion models. The first study used the modeling framework as a measurement tool to determine how dose and reconstruction algorithm affected the quantitative analysis of liver lesions, lung nodules, and renal stones in terms of their size, shape, attenuation, edge profile, and texture features. The same database of real lesion images used in the previous study was used for this study. That database contained images of the same patients at two dose levels (50% and 100%) along with three reconstruction algorithms from a GE 750HD CT system (GE Healthcare): FBP, Adaptive Statistical Iterative Reconstruction (ASiR), and Model-Based Iterative Reconstruction (MBIR). A total of 23 quantitative features were extracted from the lesions under each condition. It was found that both dose and reconstruction algorithm had a statistically significant effect on the feature measurements. In particular, radiation dose affected five, three, and four of the 23 features (related to lesion size, conspicuity, and pixel-value distribution) for liver lesions, lung nodules, and renal stones, respectively. MBIR significantly affected 9, 11, and 15 of the 23 features (including size, attenuation, and texture features) for liver lesions, lung nodules, and renal stones, respectively. Lesion texture was not significantly affected by radiation dose.

The second study demonstrating the utility of the lesion modeling framework focused on assessing the detectability of very low-contrast liver lesions in abdominal imaging. Specifically, detectability was assessed as a function of dose and reconstruction algorithm. As part of a parallel clinical trial, images from 21 patients were collected at 6 dose levels per patient on a SOMATOM Flash scanner. Subtle liver lesion models (contrast = -15 HU) were inserted into the raw projection data from the patient scans. The projections were then reconstructed with FBP and SAFIRE (strength 5). Lesion-less images were also reconstructed. Noise, contrast, CNR, and the detectability index of an observer model (non-prewhitening matched filter) were assessed. It was found that SAFIRE reduced noise by 52%, reduced contrast by 12%, increased CNR by 87%, and increased detectability index by 65% compared to FBP. Further, a 2AFC human perception experiment was performed to assess the dose reduction potential of SAFIRE, which was found to be 22% compared to the standard-of-care dose.

In conclusion, this dissertation provides to the scientific community a series of new methodologies, phantoms, analysis techniques, and modeling tools that can be used to rigorously assess image quality from modern CT systems. Specifically, methods to properly evaluate iterative reconstruction have been developed and are expected to aid in the safe clinical implementation of dose reduction technologies.
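As a rough illustration of two of the measurements mentioned above, the sketch below shows a subtraction-based noise estimate from repeated scans and a simple square-ROI ensemble noise power spectrum; the arrays are synthetic, the normalization follows one common convention, and the dissertation's method for irregularly shaped ROIs is more involved than this version.

```python
import numpy as np

def subtraction_noise(img1, img2):
    """
    Quantum-noise estimate from two repeated scans of the same phantom:
    subtracting the images removes the (identical) deterministic background,
    and dividing by sqrt(2) compensates for the doubled noise variance.
    """
    diff = (img1.astype(float) - img2.astype(float)) / np.sqrt(2.0)
    return diff.std()

def roi_nps(noise_rois, pixel_size_mm):
    """2-D noise power spectrum from an ensemble of square noise-only ROIs."""
    rois = np.stack([r - r.mean() for r in noise_rois])        # detrend each ROI
    dft = np.fft.fftshift(np.fft.fft2(rois), axes=(-2, -1))
    n = rois.shape[-1]
    return (np.abs(dft) ** 2).mean(axis=0) * (pixel_size_mm ** 2) / (n * n)

# Toy usage with synthetic white-noise "scans" and ROIs
rng = np.random.default_rng(1)
scan_a, scan_b = rng.normal(0, 10, (2, 128, 128))
print("noise sigma ~", round(subtraction_noise(scan_a, scan_b), 2))
rois = [rng.normal(0, 10, (32, 32)) for _ in range(20)]
nps = roi_nps(rois, pixel_size_mm=0.5)
print("mean NPS (white noise ~ sigma^2 * pixel area):", round(nps.mean(), 1))
```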
27

Design of a practical model-observer-based image quality assessment method for x-ray computed tomography imaging systems

Tseng, Hsin-Wu; Fan, Jiahua; Kupinski, Matthew A. 28 July 2016
The use of a channelization mechanism in model observers not only makes mimicking human visual behavior possible, but also reduces the amount of image data needed to estimate the model observer parameters. The channelized Hotelling observer (CHO) and channelized scanning linear observer (CSLO) have recently been used to assess CT image quality for detection tasks and combined detection/estimation tasks, respectively. Although the use of channels substantially reduces the amount of data required to compute image quality, the number of scans required for CT imaging is still not practical for routine use. It is our desire to further reduce the number of scans required to make CHO or CSLO an image quality tool for routine and frequent system validations and evaluations. This work explores different data-reduction schemes and designs an approach that requires only a few CT scans. Three different kinds of approaches are included in this study: a conventional CHO/CSLO technique with a large sample size, a conventional CHO/CSLO technique with fewer samples, and an approach that we will show requires fewer samples to mimic conventional performance with a large sample size. The mean value and standard deviation of the areas under the ROC/EROC curves were estimated using the well-validated shuffle approach. The results indicate that an 80% data reduction can be achieved without loss of accuracy. This substantial data reduction is a step toward a practical tool for routine task-based QA/QC CT system assessment. (C) 2016 Society of Photo-Optical Instrumentation Engineers (SPIE)
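To make the channelization idea concrete, here is a minimal channelized Hotelling observer sketch scored by the area under the ROC curve; random channels and synthetic images stand in for the Gabor or Laguerre-Gauss channels and CT data used in practice, and none of the paper's data-reduction schemes are reproduced here.

```python
import numpy as np

def cho_auc(signal_imgs, noise_imgs, channels):
    """
    Channelized Hotelling observer: project images onto a small set of
    channels, build the Hotelling template in channel space, and score the
    detection task by the area under the ROC curve of the test statistic.
    """
    vs = signal_imgs.reshape(len(signal_imgs), -1) @ channels   # channel outputs, signal present
    vn = noise_imgs.reshape(len(noise_imgs), -1) @ channels     # channel outputs, signal absent
    dv = vs.mean(0) - vn.mean(0)
    s_cov = 0.5 * (np.cov(vs, rowvar=False) + np.cov(vn, rowvar=False))
    w = np.linalg.solve(s_cov, dv)                              # Hotelling template
    ts, tn = vs @ w, vn @ w
    return (ts[:, None] > tn[None, :]).mean()                   # Mann-Whitney estimate of AUC

# Toy usage: 8 random "channels" on 16x16 images with a faint square signal
rng = np.random.default_rng(2)
channels = rng.normal(size=(256, 8))
sig = np.zeros((16, 16)); sig[6:10, 6:10] = 2.0
noise_only = rng.normal(size=(200, 16, 16))
with_signal = rng.normal(size=(200, 16, 16)) + sig
print("AUC =", round(cho_auc(with_signal, noise_only, channels), 3))  # well above 0.5 for this signal
```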
28

Interobserver Reliability in the Diagnosis of Pulpal and Periradicular Disease

Mellin, Todd Peter. 01 January 2005
The purpose of this study was to evaluate the interobserver reliability of endodontists in diagnosing the presence or absence of pulpal and/or periradicular disease. The study used 47 patients presenting to the VCU School of Dentistry for screening appointments as a test population, under the rules and regulations of the VCU IRB. The patients were examined separately by two endodontists using a thorough patient history, a clinical exam, and radiographs. Each examiner then answered the question of whether the patient had pulpal and/or periradicular disease, and the answers were compared. The data were analyzed using the Kappa statistic, and the standard error was determined to test for statistical significance. The observers agreed 88% of the time, with a Kappa of 0.74. This was determined to represent bona fide reliability, with p < .0001. The results indicate that agreement among endodontists is very good when patients are evaluated for pulpal and/or periradicular disease.
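For readers unfamiliar with the statistic, the sketch below computes Cohen's kappa from a hypothetical 2x2 agreement table; the abstract reports only the summary figures, so the counts here are invented, chosen so the result lands near the reported 88% agreement and kappa of 0.74.

```python
import numpy as np

def cohens_kappa(table):
    """Cohen's kappa for a 2x2 inter-rater agreement table."""
    table = np.asarray(table, dtype=float)
    n = table.sum()
    p_o = np.trace(table) / n                           # observed agreement
    p_e = (table.sum(0) / n) @ (table.sum(1) / n)       # chance agreement from the marginals
    return p_o, (p_o - p_e) / (1.0 - p_e)

# Hypothetical split of the 47 screened patients:
# rows = examiner 1, columns = examiner 2, order = [disease, no disease]
table = [[28, 3],
         [3, 13]]
p_o, kappa = cohens_kappa(table)
print(f"observed agreement = {p_o:.2f}, kappa = {kappa:.2f}")   # ~0.87 and ~0.72 for these counts
```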
29

Commande sans capteur mécanique de la machine asynchrone pour la variation de vitesse industrielle / Sensorless induction machine control for industrial speed variation

Solvar, Sébastien. 21 December 2012
The induction machine has major advantages over other types of machines (DC, synchronous, ...): its robustness and its low manufacturing and maintenance costs are the main reasons for its success. These advantages, however, were long offset by the complexity of controlling it. Nowadays many manufacturers offer speed drives for the induction machine that provide both the control flexibility and the quality of electromagnetic conversion previously obtained naturally with DC and synchronous machines. For several years now, manufacturers have been facing a new problem: removing the mechanical sensor from the speed-regulation loop of the induction machine. The work of this thesis, carried out under a CIFRE agreement between the company GS Maintenance and the ECS-Lab EA 3649 laboratory, was directed towards the realisation of the control system of an industrial drive dedicated to induction machines without a mechanical sensor. From this point of view, the first objective of this work is the design of techniques for determining the mechanical quantities (speed) of the induction machine using only electrical measurements. These techniques, used to replace the information given by mechanical sensors, are sometimes called software sensors. Particular attention is paid to sensorless operation of the induction machine at very low speed. The second objective is to illustrate the technological interest of an observer based on the second-order sliding mode technique, with the aim of integrating it into the control system of an industrial speed drive.
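As a generic illustration of the second-order sliding mode idea mentioned above (not the machine-specific flux and speed observer developed in the thesis), the sketch below implements a super-twisting differentiator that recovers the derivative of a measured signal without a model of that signal; the gains and test signal are arbitrary.

```python
import numpy as np

def super_twisting_diff(y, dt, lam1=10.0, lam2=50.0):
    """
    Super-twisting (second-order sliding mode) differentiator: z0 tracks the
    measured signal y and z1 converges to its time derivative despite small
    bounded noise, using only the sign and square root of the tracking error.
    """
    z0, z1 = y[0], 0.0
    z1_hist = []
    for yk in y:
        e = z0 - yk
        z0 += dt * (z1 - lam1 * np.sqrt(abs(e)) * np.sign(e))
        z1 += dt * (-lam2 * np.sign(e))
        z1_hist.append(z1)
    return np.array(z1_hist)

# Toy usage: recover the "speed" (derivative) of a noisy sinusoidal position signal
dt = 1e-3
t = np.arange(0, 2, dt)
position = np.sin(2 * np.pi * t) + 0.001 * np.random.randn(t.size)
speed_est = super_twisting_diff(position, dt)
print("true peak speed ~", round(2 * np.pi, 2), " estimated ~", round(speed_est[500:].max(), 2))
```

In a sensorless drive the same robustness property is exploited on the machine's electrical model rather than on a measured position, which is what the thesis develops.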
30

CONSUMER EMBARRASSMENT – A META-ANALYTIC REVIEW AND EXPERIMENTAL EXAMINATION

Ziegler, Alexander H. 01 January 2019
This dissertation consists of two essays that discuss the influence of embarrassment on consumers. In the first essay, I examine consumers’ coping responses to embarrassment in a meta-analytic review. In essay two, I utilize an experimental approach to investigate the impact of embarrassing encounters on unrelated consumers who merely observe the situation. In the first essay, the meta-analysis is guided by findings in the literature that demonstrate embarrassment can both promote and detract from consumer well-being. However, despite being investigated for decades, little is known about how consumers cope with embarrassing situations, and when and why consumers respond in positive and negative ways. The meta-analysis draws on the transactional framework of appraisals and coping to analyze the extant literature, construing positive responses as problem-focused coping, and negative responses as emotion-focused coping. I examine both situational and trait factor moderators to explain variance in these divergent outcomes and to resolve competing findings. A meta-analysis of 93 independent samples (N = 24,051) revealed that embarrassment leads to both problem-focused coping (r = 0.21), which can promote consumer well-being, and emotion-focused coping (r = 0.23), which can detract from consumer well-being. The relationship between embarrassment and emotion-focused coping was particularly strong in emotionally intense situations that were out of a transgressor’s control, for female consumers, and for consumers with an individualist orientation. The relationship between embarrassment and problem-focused coping was particularly strong in emotionally intense situations for male and young consumers. The second essay investigates the influence of embarrassing situations on neutral observers of the situation. The extant literature suggests that a consumer who commits a social transgression will experience embarrassment if real or imagined others are present to witness the transgression. However, the parallel embarrassment experienced, in turn, by those observers lacks a theoretical account, since observers have committed no transgression and are not the subject of appraisal by others. I label this phenomenon observer embarrassment, and introduce perspective taking as the underlying process that leads to observer embarrassment. Across six studies, I use physiological, behavioral, and self-report measures to validate the presence of observer embarrassment, as well as the underlying perspective-taking mechanism. Specifically, the results demonstrate that observers are more likely to experience embarrassment when they imagine themselves as the transgressor (versus experience empathy for the transgressor), something more likely to occur when the observer and actor share a common identity. Thus, observer embarrassment is not an empathetic response to witnessing a social transgression, but rather an experience parallel to personal embarrassment of others.
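As a minimal illustration of how correlations such as those reported above can be pooled across studies (a simple fixed-effect version; the essay's analysis of 93 samples is presumably more elaborate, for example random-effects with moderators), consider:

```python
import numpy as np

def pooled_correlation(rs, ns):
    """
    Fixed-effect pooling of correlation coefficients via Fisher's z transform:
    each study's r is converted to z, weighted by (n - 3), averaged, and
    transformed back to the correlation scale.
    """
    rs, ns = np.asarray(rs, float), np.asarray(ns, float)
    z = np.arctanh(rs)            # Fisher's z
    w = ns - 3.0                  # inverse-variance weights
    return np.tanh((w * z).sum() / w.sum())

# Hypothetical mini-example with three studies (not the 93 samples in the essay)
print(round(pooled_correlation([0.15, 0.25, 0.30], [120, 350, 80]), 3))
```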
