  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Anisotropic Quadrilateral Mesh Optimization

Ferguson, Joseph Timothy Charles 12 August 2016 (has links)
In order to determine the validity and quality of meshes, mesh optimization methods have been formulated around quality measures. The basic idea of mesh optimization is to relocate the vertices to obtain a valid mesh (untangling), to improve the mesh quality (smoothing), or both. We present a new algebraic way of calculating quality measures on quadrilateral meshes, based on triangular meshes in 2D, as well as new optimization methods for simultaneous untangling and smoothing of severely deformed meshes. An innovative anisotropic diffusion method is introduced to account for inner-boundary deformation movements in 2D quadrilateral meshes.
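As an illustration of the triangle-based idea, the sketch below scores a quadrilateral by the signed mean-ratio quality of its four corner triangles. The specific metric and the min-combination are assumptions for illustration, not the thesis's exact measure; a negative score flags an inverted (tangled) element.

```python
import numpy as np

def triangle_quality(a, b, c):
    """Signed mean-ratio quality: 1 for an equilateral triangle,
    0 when degenerate, negative when the triangle is inverted."""
    area = 0.5 * ((b[0] - a[0]) * (c[1] - a[1])
                  - (b[1] - a[1]) * (c[0] - a[0]))   # signed area
    lensq = (np.dot(b - a, b - a) + np.dot(c - b, c - b)
             + np.dot(a - c, a - c))                 # sum of squared edges
    return 4.0 * np.sqrt(3.0) * area / lensq if lensq > 0 else 0.0

def quad_quality(quad):
    """Score a quadrilateral (4 vertices, counter-clockwise) by the worst
    of the four corner triangles formed by consecutive vertex triples."""
    v = [np.asarray(p, dtype=float) for p in quad]
    return min(triangle_quality(v[i], v[(i + 1) % 4], v[(i + 2) % 4])
               for i in range(4))
```

Under this triangle metric a perfect square scores √3/2 rather than 1; relocating vertices to raise the minimum score above zero is the untangling step, and increasing it further is smoothing.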
2

Task-based Robotic Grasp Planning

Lin, Yun 13 November 2014 (has links)
A grasp should be selected intelligently to fulfill different stability properties and manipulative requirements. Currently, most grasping approaches consider only pick-and-place tasks without any physical interaction with other objects or the environment, which are common in industrial settings with limited uncertainty. When robots move into our daily-living environment and perform a broad range of tasks in an unstructured environment, all sorts of physical interactions will occur, resulting in random interactive wrenches (forces and torques) on the tool. In addition, for a tool to perform a required task, certain motions need to occur. We call this the "functional tool motion," which represents the innate function of the tool and the nature of the task. Grasping with a robotic hand gives flexibility in "mounting" the tool onto the robotic arm: a different grasp connects the tool to the arm with a different hand posture, and inverse kinematics then yields a different joint motion of the arm to achieve the same functional tool motion. Thus, the grasp and the functional tool motion together determine the manipulator's motion, as well as the effort needed to achieve it. We therefore propose two objectives that a grasp should serve: it should maintain a firm grip and withstand the interactive wrenches on the tool during the task, and it should enable the manipulator to carry out the task with as little motion effort as possible; we then search for a grasp that optimizes both objectives. For this purpose, two grasp criteria are presented to evaluate a grasp: the task wrench coverage criterion and the task motion effort criterion. These two criteria are used as objective functions in the search for the optimal grasp during grasp planning.
To reduce the computational complexity of searching the high-dimensional robotic hand configuration space, we propose a novel grasp synthesis approach that integrates two human grasp strategies, grasp type and thumb placement (position and direction), into grasp planning. Grasping strategies abstracted from humans should meet two important criteria: they should reflect the demonstrator's intention, and they should be general enough to be used by various robotic hand models. Different abstractions of a human grasp constrain grasp synthesis and narrow down the solutions of grasp generation to different degrees. If too strict a constraint is imposed, such as defining all contact points of the fingers on the object, the strategy loses flexibility and is rarely achievable for a robotic hand with a different kinematic model. Thus, the choice of grasp strategies should balance the learned constraints against the flexibility required to accommodate the differences between a human hand and a robotic hand. The human strategies of grasp type and thumb placement strike such a balance while conveying important human intent to robotic grasping. The proposed approach has been thoroughly evaluated both in simulation and on a real robotic system, for multiple objects that would be encountered in daily living.
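A minimal sketch of how the two criteria could be combined into a single objective for ranking candidate grasps. The function name, the magnitude-threshold notion of wrench coverage, and the squared-displacement notion of motion effort are illustrative assumptions, not the thesis's exact formulation.

```python
import numpy as np

def grasp_score(task_wrenches, resistible_wrench, joint_trajectory, w=0.5):
    """Rank a candidate grasp by two competing terms.

    Task wrench coverage: fraction of sampled task wrenches (rows of a
    6-D force/torque array) whose magnitude the grasp can resist.
    Task motion effort: sum of squared joint displacements along the arm
    trajectory that realizes the functional tool motion for this grasp.
    """
    mags = np.linalg.norm(task_wrenches, axis=1)
    coverage = np.mean(mags <= resistible_wrench)       # in [0, 1]
    effort = np.sum(np.diff(joint_trajectory, axis=0) ** 2)
    return w * coverage - (1.0 - w) * effort            # higher is better
```

Grasp planning would then search the hand-configuration space, narrowed by grasp type and thumb placement, for the grasp maximizing this score.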
3

An Algorithm for Image Quality Assessment

Ivkovic, Goran 10 July 2003 (has links)
Image quality measures are used to optimize image processing algorithms and to evaluate their performance. The only reliable way to assess image quality is subjective evaluation by human observers, where the mean value of their scores is used as the quality measure; this is known as the mean opinion score (MOS). In addition to this measure there are various objective (quantitative) measures. The most widely used are mean squared error (MSE), peak signal-to-noise ratio (PSNR) and signal-to-noise ratio (SNR). Since these simple measures do not always produce results that agree with subjective evaluation, many other quality measures have been proposed. They are mostly modifications of MSE that try to take into account properties of the human visual system (HVS) such as the nonlinear character of brightness perception, the contrast sensitivity function (CSF) and texture masking. In these approaches the quality measure is computed as the MSE of input image intensities, or of frequency-domain coefficients obtained after some transform (DFT, DCT, etc.), weighted by coefficients that account for the mentioned properties of the HVS. These measures have some advantages over MSE, but their ability to predict image quality is still limited. A different approach is presented here. The proposed quality measure uses a simple model of the HVS with one user-defined parameter whose value depends on the reference image. It is based on the average value of locally computed correlation coefficients, which takes into account the structural similarity between the original and distorted images, something that cannot be measured by MSE or any kind of weighted MSE. The proposed measure also differentiates between random and signal-dependent distortion, because the two have different effects on a human observer. This is achieved by computing the average correlation coefficient between the reference image and the error image.
Performance of the proposed quality measure is illustrated by examples involving images with different types of degradation.
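A sketch of the block-correlation idea under stated assumptions: non-overlapping 8×8 blocks and plain Pearson correlation, without the thesis's HVS model or its user-defined parameter.

```python
import numpy as np

def local_correlation_quality(ref, dist, block=8):
    """Average of correlation coefficients computed on non-overlapping
    blocks of the reference and distorted images (2-D arrays of equal
    shape). Returns a value near 1 for a faithful reproduction."""
    h, w = ref.shape
    coeffs = []
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            r = ref[i:i + block, j:j + block].ravel().astype(float)
            d = dist[i:i + block, j:j + block].ravel().astype(float)
            if r.std() == 0 or d.std() == 0:
                # Flat block: correlation undefined; score exact matches as 1.
                coeffs.append(1.0 if np.allclose(r, d) else 0.0)
            else:
                coeffs.append(np.corrcoef(r, d)[0, 1])
    return float(np.mean(coeffs))
```

Structural distortions decorrelate blocks and drive the average down, while a contrast-inverted image scores near -1, behavior that a plain MSE cannot distinguish from random noise of equal energy.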
4

An algorithm for image quality assessment [electronic resource] / by Goran Ivkovic.

Ivkovic, Goran. January 2003 (has links)
Title from PDF of title page. / Document formatted into pages; contains 82 pages. / Thesis (M.S.E.E.)--University of South Florida, 2003. / Includes bibliographical references. / Text (Electronic thesis) in PDF format. / System requirements: World Wide Web browser and PDF reader. / Mode of access: World Wide Web.
5

Analyse automatique de la circulation automobile par vidéosurveillance routière / Automatic traffic analysis in video sequences

Intawong, Kannikar 27 September 2017 (has links)
This thesis is set in the context of video-based traffic analysis. In several big cities, hundreds of cameras produce very large amounts of data, impossible to handle without automatic processing. Our main goal is to help human operators by automatically analyzing video data. To help traffic controllers make decisions, it is important to know the traffic status in real time (the number of vehicles and their speed on each lane segment), but also to have traffic statistics available over the day, week, season or year. Cameras have long been deployed for traffic and other monitoring purposes because they provide a rich source of information for human comprehension. Video analysis can now add value to these cameras by automatically extracting relevant information, so computer vision and video analysis are becoming more and more important for intelligent transport systems (ITSs). One of the issues addressed in this thesis is automatic vehicle counting. In order to be useful, a video surveillance system must be fully automatic and capable of providing, in real time, information concerning the behavior of the objects in the scene. We can obtain this information by detecting and tracking moving objects in videos, a widely studied field. However, most automatic video analysis systems have difficulty handling particular situations. Today, many challenges remain to be solved, such as occlusions between objects, long stops of an object in the scene, luminosity changes, etc., which lead to incomplete trajectories for the moving objects detected in the scene. 
In the processing chain we propose, we have concentrated on the automatic extraction of global statistics in road video surveillance scenes. Our workflow consists of the following steps. First, we evaluated different methods for video segmentation and the detection of moving objects.
We chose a segmentation method based on a parametric version of the Mixture of Gaussians, applied to a hierarchy of blocks, which is currently considered one of the best methods for the detection of moving objects. We proposed a new methodology for choosing the optimal parameter values of an algorithm that improves object segmentation using morphological operations. We examined the different criteria for evaluating segmentation quality, which result from a compromise between good detection of moving objects and a low number of false detections, caused for example by illumination changes, reflections or acquisition noise. Second, we perform an object classification based on Fourier descriptors, and use these descriptors to eliminate pedestrians and other objects and retain only vehicles. Third, we use a motion model and a descriptor based on dominant colors to track the extracted objects. Because of the difficulties mentioned above, we obtain incomplete trajectories, which would give incorrect counting information if exploited directly. We therefore propose to aggregate the partial data of the incomplete trajectories and to build global information about vehicle circulation in the scene. Our approach detects entry and exit points in the image sequences. We tested our algorithms on private data from the traffic control center in Chiang Mai City, Thailand, as well as on public video data from MIT. On this last dataset, we compared the performance of our algorithms with previously published articles using the same data. In several situations, we illustrate the improvements made by our method in terms of the location of entry/exit zones and in terms of vehicle counting.
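To illustrate the classification step, here is a generic Fourier-descriptor computation for a closed contour. The normalization shown (drop the DC term, take magnitudes, divide by the first harmonic) is a common textbook choice and an assumption about the thesis's exact variant.

```python
import numpy as np

def fourier_descriptors(contour, n_desc=8):
    """Fourier descriptors of a closed contour given as an (N, 2) array of
    boundary points, made invariant to translation, scale, rotation and
    starting point."""
    z = contour[:, 0] + 1j * contour[:, 1]   # boundary as a complex signal
    spectrum = np.fft.fft(z)
    spectrum[0] = 0.0                        # drop DC term: translation invariance
    mags = np.abs(spectrum)                  # magnitudes: rotation/start-point invariance
    mags /= mags[1]                          # first-harmonic scaling: scale invariance
    return mags[1:n_desc + 1]
```

A simple classifier can then compare these descriptor vectors, for example by Euclidean distance to class prototypes, to separate vehicle silhouettes from pedestrians.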
6

Improving the Quality and Safety of Drug Use in Hospitalized Elderly : Assessing the Effects of Clinical Pharmacist Interventions and Identifying Patients at Risk of Drug-related Morbidity and Mortality

Alassaad, Anna January 2014 (has links)
Older people admitted to hospital are at high risk of rehospitalization and medication errors. We have demonstrated, in a randomized controlled trial, that a clinical pharmacist intervention reduces the incidence of revisits to hospital for patients aged 80 years or older admitted to an acute internal medicine ward. The aims of this thesis were to further study the effects of the intervention and to investigate possibilities for targeting the intervention by identifying predictors of treatment response or adverse health outcomes. The effect of the pharmacist intervention on the appropriateness of prescribing was assessed using three validated tools. This study showed that the quality of prescribing improved for patients in the intervention group but not for those in the control group. However, no association between the appropriateness of prescribing at discharge and revisits to hospital was observed. Subgroup analyses explored whether the clinical pharmacist intervention was equally effective in preventing emergency department visits in patients with few or many prescribed drugs and in those with different levels of inappropriate prescribing on admission. The intervention appeared to be most effective in patients taking fewer drugs, but the treatment effect was not altered by the appropriateness of prescribing. The most relevant risk factors for rehospitalization and mortality were identified for the same study population, and a risk-estimation score (the 80+ score) was constructed and internally validated. Seven variables were selected. Impaired renal function, pulmonary disease, malignant disease, living in a nursing home, being prescribed an opioid and being prescribed a drug for peptic ulcer or gastroesophageal reflux disease were associated with an increased risk, while being prescribed an antidepressant drug (tricyclic antidepressants not included) was associated with a lower risk. These variables made up the components of the 80+ score.
Pending external validation, this score has potential to aid identification of high-risk patients. The last study investigated the occurrence of prescription errors when patients with multi-dose dispensed (MDD) drugs were discharged from hospital. Twenty-five percent of the MDD orders contained at least one medication prescription error. Almost half of the errors were of moderate or major severity, with potential to cause increased health-care utilization.
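As a sketch of how such a score could be applied at admission, the function below adds one point per risk factor and subtracts one for the protective factor. The equal unit weights and the variable names are purely assumptions for illustration; the published 80+ score's actual weighting is not reproduced here.

```python
def eighty_plus_score(patient):
    """Additive sketch over the seven variables named above; `patient` is a
    dict of booleans. Unit weights are illustrative, not the validated ones."""
    risk_factors = [
        "impaired_renal_function",
        "pulmonary_disease",
        "malignant_disease",
        "nursing_home_resident",
        "opioid_prescribed",
        "ulcer_or_reflux_drug_prescribed",
    ]
    score = sum(1 for f in risk_factors if patient.get(f, False))
    if patient.get("antidepressant_prescribed", False):  # protective factor
        score -= 1
    return score
```

Patients above a chosen cutoff would then be flagged as high-risk candidates for the pharmacist intervention.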
7

Assessing And Modeling Quality Measures for Healthcare Systems

Li, Nien-Chen 06 November 2021 (has links)
Background: Shifting the healthcare payment system from a volume-based to a value-based model has been a significant effort to improve the quality of care and reduce healthcare costs in the US. In 2018, Massachusetts Medicaid launched Accountable Care Organizations (ACOs) as part of this effort. Constructing, assessing, and risk-adjusting quality measures are integral parts of the reform process. Methods: Using data from the MassHealth Data Warehouse (2016-2019), we assessed the loss of community tenure (CTloss) as a potential quality measure for patients with bipolar disorder, schizophrenia, or other psychotic disorders (BSP). We evaluated various statistical models for predicting CTloss using deviance, the Akaike information criterion, the Vuong test, squared correlation, and observed-vs-expected (O/E) ratios. We also used logistic regression to investigate risk factors for medication nonadherence, another quality measure for patients with bipolar disorder (BD). Results: Mean CTloss was 12.1 (±31.0 SD) days in the study population; it varied greatly across ACOs. For risk-adjustment modeling, we recommended the zero-inflated Poisson or the doubly augmented beta model. The O/E ratio ranged from 0.4 to 1.2, suggesting variation in quality after adjusting for differences in the characteristics of the patients each ACO served, as reflected in E. Almost half (47.7%) of BD patients were nonadherent to second-generation antipsychotics. Patient demographics, medical and mental comorbidities, receipt of institutional services such as those from the Department of Mental Health, homelessness, and neighborhood socioeconomic stress all affected medication nonadherence. Conclusions: Valid quality measures are essential to value-based payment. Heterogeneity among patients implies the need for risk adjustment. The choice of model type is driven by the non-standard distribution of CTloss.
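The zero-inflated Poisson recommended above handles the many patients with no lost days by mixing a structural-zero component with a Poisson count. The sketch below uses illustrative parameters (in the study they would be predicted from patient characteristics for risk adjustment) to show the probability mass function and the expected value used to form O/E ratios.

```python
import math

def zip_pmf(k, lam, pi):
    """P(Y = k) under a zero-inflated Poisson: with probability pi the
    outcome is a structural zero, otherwise Y ~ Poisson(lam)."""
    poisson = math.exp(-lam) * lam ** k / math.factorial(k)
    return pi * (k == 0) + (1.0 - pi) * poisson

def zip_mean(lam, pi):
    """Expected count, e.g. expected days of community tenure lost."""
    return (1.0 - pi) * lam

def oe_ratio(observed_mean, lam, pi):
    """Observed-vs-expected ratio for one ACO; values far from 1 suggest
    quality differences remaining after risk adjustment."""
    return observed_mean / zip_mean(lam, pi)
```

Note how the extra mass at zero, pi plus (1 - pi)e^(-lam), lets the model fit a CTloss distribution far more zero-heavy than a plain Poisson with the same mean.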
