11

Automated Complexity-Sensitive Image Fusion

Jackson, Brian Patrick January 2014 (has links)
No description available.
12

Comparison of ego-involvement and service quality measures in predicting leisure participation in consumer service settings

Murray, Duncan January 2005 (has links)
This thesis investigates ego-involvement, a measure centred on the concepts of customer satisfaction and service quality assessment, and asks whether it has the potential to be a better predictor of leisure participation and leisure satisfaction than the service quality measures that currently dominate leisure service assessment.
14

Algorithmes automatiques pour la fouille visuelle de données et la visualisation de règles d’association : application aux données aéronautiques / Automatic algorithms for visual data mining and association rules visualization : application to aeronautical data

Bothorel, Gwenael 18 November 2014 (has links)
In the past few years, data production has exploded in many areas, such as social networks and online commerce. This recent phenomenon is reinforced by the widespread use of connected devices, whose use has become almost permanent. The aeronautical field is no exception: its growing need for data, driven by the evolution of air traffic management systems and by events, has led to a new awareness of the importance of data and of new ways of handling its storage, availability and exploitation. Hosting capacity has been adapted and is not the major challenge; the challenge lies in processing the information and extracting knowledge from it. Within Visual Analytics, an emerging discipline born in the aftermath of the September 2001 attacks, this extraction combines algorithmic and visual approaches in order to benefit simultaneously from human flexibility, creativity and knowledge, and from the computing power of machines. This thesis focuses on realizing that combination while keeping the human in a central, decision-making role. On the one hand, the user's visual exploration of the data drives the generation of association rules, which establish relationships within the data. On the other hand, these rules are exploited by automatically configuring the visualization of the data they concern, in order to highlight it. To achieve this, the bidirectional process between data and rules was formalized and then illustrated, using recent air traffic recordings, on the Videam platform that we developed. Videam integrates several HMI and algorithmic components in a modular, extensible environment, allowing interactive exploration of both the data and the association rules while leaving the user in overall control of the process, notably by parameterizing and steering the algorithms.
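
The abstract does not detail Videam's internal data model, so as a purely illustrative aside: the rules being visualized are standard association rules, which carry two basic metrics, support and confidence. A minimal Python sketch follows; the transactions and aeronautical item names are invented.

    # Transactions and item names are invented for illustration;
    # Videam's actual data model is not described in the abstract.
    transactions = [
        {"departure_delay", "bad_weather"},
        {"departure_delay", "bad_weather", "reroute"},
        {"reroute"},
        {"departure_delay", "bad_weather"},
    ]

    def support(itemset, transactions):
        """Fraction of transactions containing every item in `itemset`."""
        return sum(itemset <= t for t in transactions) / len(transactions)

    def confidence(antecedent, consequent, transactions):
        """Estimated P(consequent | antecedent)."""
        return (support(antecedent | consequent, transactions)
                / support(antecedent, transactions))

    rule = ({"bad_weather"}, {"departure_delay"})      # "bad_weather => departure_delay"
    print(support(rule[0] | rule[1], transactions))    # 0.75
    print(confidence(*rule, transactions))             # 1.0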
15

PCA and CVA biplots : a study of their underlying theory and quality measures

Brand, Hilmarie 03 1900 (has links)
Thesis (MComm)--Stellenbosch University, 2013. / The main topics of study in this thesis are the Principal Component Analysis (PCA) and Canonical Variate Analysis (CVA) biplots, with the primary focus falling on the quality measures associated with these biplots. A detailed study of the different routes along which PCA and CVA can be derived precedes the study of the PCA and CVA biplots. Different perspectives on PCA and CVA highlight different aspects of the underlying theory and so contribute to a more solid understanding of these biplots and their interpretation. PCA is studied via the routes followed by Pearson (1901) and Hotelling (1933); CVA is studied from the perspectives of Linear Discriminant Analysis, Canonical Correlation Analysis, and the two-step approach introduced in Gower et al. (2011). The close relationship between CVA and Multivariate Analysis of Variance (MANOVA) also receives attention. An explanation of the construction of the PCA biplot follows the study of PCA, and thereafter an in-depth investigation of the quality measures of the PCA biplot and the relationships between them, with specific attention to the effect of standardisation on the PCA biplot and its quality measures. Following the study of CVA is an explanation of the construction of the weighted CVA biplot as well as of two different unweighted CVA biplots based on the two-step approach to CVA. Specific attention is given to how accounting for group sizes in the construction of the CVA biplot affects the representation of the group structure underlying a data set: larger groups tend to be better separated from other groups in the weighted CVA biplot than in the corresponding unweighted CVA biplots, while smaller groups tend to be separated to a greater extent from other groups in the unweighted CVA biplots than in the weighted one. A detailed investigation of previously defined quality measures of the CVA biplot follows; the accuracy with which the group centroids of larger groups are approximated is usually higher in the weighted CVA biplot than in the corresponding unweighted CVA biplots. Three new quality measures that assess the accuracy of the Pythagorean distances in the CVA biplot are also defined, assessing respectively the accuracy of the Pythagorean distances between the group centroids, between the individual samples, and between the individual samples and the group centroids.
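
For readers unfamiliar with biplots, here is a minimal sketch, not taken from the thesis, of how two-dimensional PCA biplot coordinates arise from the singular value decomposition, together with the most familiar overall quality measure, the proportion of variance the biplot displays. The data is random and purely illustrative.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(30, 4))          # 30 samples, 4 variables (random)
    Xc = X - X.mean(axis=0)               # centre columns (standardise too, if desired)

    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    r = 2                                 # biplot dimension
    Z = U[:, :r] * s[:r]                  # sample (row) coordinates
    A = Vt[:r].T                          # variable (column) axis directions

    # One overall quality measure: proportion of variance shown in the biplot.
    quality = (s[:r] ** 2).sum() / (s ** 2).sum()
    print(f"variance displayed by the 2-D PCA biplot: {quality:.2%}")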
16

Vigilancia de la infección nosocomial en un Servicio de Medicina Intensiva mediante la aplicación de un Ciclo de Garantia de Calidad. Nosocomial infection surveillance in the intensive care unit through measures designed for quality assurance.

Gil Rueda, Bernardo 11 July 2003 (has links)
BACKGROUND: To analyze the effect of implementing a quality assurance cycle on rates of nosocomial infection (NI) in a level II intensive care unit (ICU). METHOD: A two-year prospective cohort study of 568 patients, divided into Group A (n = 281, observational control cohort) and Group B (n = 287, experimental cohort), in which improvement measures were implemented (administration of oral sucralfate, correct surgical antibiotic prophylaxis, and strict aseptic measures). We compared the rates of ventilator-associated pneumonia (VAP) and of urethral catheter, central venous catheter and surgical wound infections, as well as length of stay and ICU mortality, in both groups. RESULTS: After applying the quality improvement cycle by meeting the quality criteria, we obtained a significant reduction in the incidence rates of all monitored infections. We found no difference in overall ICU mortality between the two groups, although we did among patients who developed an NI. Patients with VAP showed a non-significant reduction in mortality. The subgroup of patients receiving sucralfate showed a decrease in the frequency of NI and in related mortality. However, the degree of non-compliance with the improvement protocol was high (Pareto diagram). CONCLUSIONS: The establishment of a surveillance system and the implementation of improvement measures reduced both the incidence and the mortality of NI, but not overall ICU mortality.
17

Quality Measures of Halftoned Images (A Review)

Axelson, Per-Erik January 2003 (has links)
This study is a thesis for the Master of Science degree in Media Technology and Engineering at the Department of Science and Technology, Linköping University. It was carried out from November 2002 to May 2003. Objective image quality measures play an important role in various image processing applications; this paper focuses on quality measures applied to halftoned images. Digital halftoning is the process of generating a pattern of binary pixels that creates the illusion of a continuous-tone image. Algorithms built on this technique produce results of very different quality and characteristics, and to evaluate and improve their performance it is important to have robust and reliable image quality measures. This literature survey gives a general description of digital halftoning and of halftone image quality methods.
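
As a concrete illustration of the two ingredients the survey relates, a halftoning algorithm and an objective quality measure, here is a minimal sketch, not taken from the thesis: Floyd-Steinberg error diffusion applied to a synthetic ramp, scored by PSNR after a crude low-pass blur standing in for the eye's averaging.

    import numpy as np

    def floyd_steinberg(img):
        """Binarize a grayscale image in [0, 1] by error diffusion."""
        f = img.astype(float).copy()
        h, w = f.shape
        for y in range(h):
            for x in range(w):
                old = f[y, x]
                new = 1.0 if old >= 0.5 else 0.0
                f[y, x] = new
                err = old - new
                if x + 1 < w:
                    f[y, x + 1] += err * 7 / 16
                if y + 1 < h and x > 0:
                    f[y + 1, x - 1] += err * 3 / 16
                if y + 1 < h:
                    f[y + 1, x] += err * 5 / 16
                if y + 1 < h and x + 1 < w:
                    f[y + 1, x + 1] += err * 1 / 16
        return f

    def box_blur(img, k=3):
        """Crude low-pass filter: k-by-k mean via edge padding and shifts."""
        p = k // 2
        padded = np.pad(img, p, mode="edge")
        h, w = img.shape
        return sum(padded[dy:dy + h, dx:dx + w]
                   for dy in range(k) for dx in range(k)) / k ** 2

    def psnr(a, b):
        """Peak signal-to-noise ratio for intensities in [0, 1]."""
        mse = np.mean((a - b) ** 2)
        return 10 * np.log10(1.0 / mse)

    gradient = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))  # continuous-tone ramp
    halftone = floyd_steinberg(gradient)
    print(f"PSNR after blur: {psnr(box_blur(gradient), box_blur(halftone)):.1f} dB")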
18

Fouille et classement d'ensembles fermés dans des données transactionnelles de grande échelle / Mining and ranking closed itemsets from large-scale transactional datasets

Kirchgessner, Martin 26 September 2016 (has links)
The recent increase in data volumes raises new challenges for itemset mining algorithms. In this thesis we focus on transactional datasets (collections of item sets, for example supermarket receipts) containing at least a million transactions over hundreds of thousands of items. Such datasets usually follow a "long tail" distribution: a few items are very frequent, while most appear rarely. These distributions are often truncated by existing itemset mining algorithms, whose results concern only a very small portion of the available items (usually the most frequent ones), so existing methods fail to provide concise, relevant insights on large datasets. We therefore introduce a new semantics that is more intuitive for the analyst: browsing associations per item, for any item, at most a hundred at a time. To address this item-coverage challenge, our first contribution is the item-centric mining problem: computing, for each item in the dataset, the k most frequent closed itemsets containing that item. We present TopPI, an algorithm that solves this problem, and show that it efficiently computes interesting results on our datasets, outperforming naive solutions and emulations based on existing algorithms in both run time and result completeness. We also show, and empirically validate, how TopPI can be parallelized on multi-core machines and on Hadoop clusters in order to speed up computation on large-scale datasets. Our second contribution is CAPA, a framework for studying which existing measures of association-rule quality are most appropriate for ranking results, whether obtained from TopPI or from jLCM, our implementation of a state-of-the-art frequent closed itemset mining algorithm (LCM). Our quantitative study shows that the 39 quality measures we compare can be grouped into 5 families, based on the similarity of the rankings they produce. We also involved marketing experts in a qualitative study to determine which of the 5 families highlights the associations most relevant to their domain. Our close collaboration with Intermarché, one of our industrial partners in the Datalyse project, allows us to present extensive experiments on real, nation-wide supermarket data, together with a complete analytics workflow addressing this use case. We also experiment on Web data; thanks to the genericity of transactional datasets, our contributions apply to various other fields. Altogether, our contributions allow analysts to discover relevant associations in modern datasets, and pave the way for reactive, large-scale discovery of item associations, whether over highly dynamic data or in interactive exploration systems.
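
To make the item-centric problem definition concrete, here is a deliberately naive sketch on toy data with invented item names. TopPI itself restricts results to closed itemsets and is vastly more efficient; this brute-force enumeration only conveys the semantics.

    from collections import Counter
    from itertools import combinations

    # Toy transactions; item names are invented.
    transactions = [{"bread", "milk"}, {"bread", "butter"},
                    {"bread", "milk", "butter"}, {"milk"}]

    def item_centric_topk(transactions, k=2, max_size=3):
        """For each item, return its k most frequent itemsets (naively)."""
        counts = Counter()
        for t in transactions:
            for size in range(1, min(len(t), max_size) + 1):
                for itemset in combinations(sorted(t), size):
                    counts[itemset] += 1
        result = {}
        for item in set().union(*transactions):
            containing = [(s, c) for s, c in counts.items() if item in s]
            result[item] = sorted(containing, key=lambda sc: -sc[1])[:k]
        return result

    for item, top in sorted(item_centric_topk(transactions).items()):
        print(item, top)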
20

Using Healthcare Data to Inform Health Policy: Quantifying Cardiovascular Disease Risk and Assessing 30-Day Readmission Measures

Fouayzi, Hassan 21 May 2019 (has links)
Health policy makers are struggling to manage health care and spending. To identify strategies for improving health quality and reducing health spending, policy makers need to first understand health risks and outcomes. Despite lacking some desirable clinical detail, existing health care databases, such as national health surveys and claims and enrollment data for insured populations, are often rich in information relating patient characteristics to health risks and outcomes. They typically encompass more inclusive populations than can feasibly be achieved with new data collection and are valuable resources for informing health policy. This dissertation illustrates how the Medicare Current Beneficiary Survey (MCBS) and MassHealth data can be used to develop models that provide useful estimates of risks and health quality measures. It provides insights into: 1) the benefits of a proxy for the Framingham cardiovascular disease (CVD) risk score, relying only on variables available in the MCBS, for targeting health interventions to policy-relevant subgroups, such as elderly Medicare beneficiaries, based on their risk of developing CVD; 2) the importance of setting appropriate risk-adjusted quality of care standards for accountable care organizations (ACOs) based on the characteristics of their enrolled members; and 3) the outsized effect of high-frequency hospital users on readmission measures and possibly other quality measures. This work develops tools that can be used to identify and support care of vulnerable patients, both improving their health outcomes and reducing spending, an important step on the road to health equity.
