11

A Pre-enactment Model For Measuring Process Quality

Guceglioglu, A. Selcuk 01 June 2006
Most process measurement studies concern time- and cost-based models. Although quality is the other conventional aspect, the literature offers no widely used models for measuring process quality. To provide complementary information about quality, this thesis develops a process quality measurement model, surveying studies of process characteristics. Moreover, exploiting the similarities between processes and software, studies in software quality are investigated. In light of this research, a model is built on the basis of the ISO/IEC 9126 Software Product Quality Model: some quality attributes are redefined according to process characteristics, and new attributes unique to processes are added. A case study is performed and its results are discussed from the perspectives of applicability, understandability and suitability.
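As a rough illustration of how an attribute-based model of this kind can yield a single quality figure, the sketch below scores a process as a weighted average over a flat attribute set. The attribute names and weights are hypothetical, not the thesis's actual model:

```python
# Illustrative sketch: scoring a process against a flat set of quality
# attributes in the spirit of an ISO/IEC 9126-derived model. The attribute
# names and weights are hypothetical, not taken from the thesis.

ATTRIBUTE_WEIGHTS = {
    "suitability": 0.3,
    "understandability": 0.2,
    "fault_tolerance": 0.2,
    "time_behaviour": 0.15,
    "changeability": 0.15,
}

def process_quality_score(ratings: dict) -> float:
    """Weighted average of per-attribute ratings on a 0..1 scale."""
    return sum(ATTRIBUTE_WEIGHTS[a] * ratings[a] for a in ATTRIBUTE_WEIGHTS)

# Example: ratings a reviewer might assign before enactment.
ratings = {
    "suitability": 0.8,
    "understandability": 0.6,
    "fault_tolerance": 0.5,
    "time_behaviour": 0.7,
    "changeability": 0.9,
}
print(f"pre-enactment quality score: {process_quality_score(ratings):.2f}")
```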
12

Secure and Privacy-Aware Data Collection in Wireless Sensor Networks

Rodhe, Ioana January 2012
A wireless sensor network is a collection of numerous sensors distributed over an area of interest to collect and process data from the environment. One particular threat in wireless sensor networks is node compromise attacks, that is, attacks where the adversary gets physical access to a node and to the programs and keying material stored on it. Only authorized queries should be allowed in the network, and the integrity and confidentiality of the data being collected should be protected. We propose a layered key distribution scheme together with two protocols for query authentication and confidential data aggregation. The layered key distribution is more robust to node and communication failures than a predefined tree structure. The protocols are secure under the assumption that fewer than n sensor nodes are compromised, where n is a design parameter that allows us to trade off security against overhead. When more than n sensor nodes are compromised, our simulations show that the attacker can only introduce unauthorized queries into a limited part of the network and can only get access to a small part of the data that is aggregated in the network. For the data collection protocol, we also contribute strategies that reduce the energy consumption of an integrity-preserving in-network aggregation scheme below that of a non-aggregation scheme. Our improvements reduce node congestion by a factor of three and the total communication load by 30%. Location privacy of users carrying mobile devices is another aspect considered in this thesis. For a mobile sink that collects data from the network, we propose a data collection strategy that requires no information about the location and movement pattern of the sink, and we show that it is possible to provide data collection services while protecting the location privacy of the sink. When mobile phones with built-in sensors are used as sensor nodes, location information about where the data has been sensed can be used to trace users and infer other personal information about them, such as state of health or personal preferences. Therefore, location privacy preserving mechanisms have been proposed to provide location privacy to the users. We investigate how a location privacy preserving mechanism influences the quality of the collected data and consider strategies to reconstruct the data distribution without compromising location privacy. / WISENET
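The energy argument for in-network aggregation can be made concrete with a toy message count. The sketch below compares raw forwarding against aggregation in a complete binary collection tree; it illustrates the general idea only, not the thesis's integrity- and confidentiality-preserving protocol:

```python
# Toy comparison of communication load with and without in-network
# aggregation in a complete binary collection tree of depth d, with the
# sink at the root. Illustration of the general idea only.

def messages_without_aggregation(depth: int) -> int:
    # Every reading is forwarded unaggregated to the sink, so a node at
    # depth k costs k transmissions; sum over all 2**k nodes per level.
    return sum(k * 2**k for k in range(1, depth + 1))

def messages_with_aggregation(depth: int) -> int:
    # Each node sends exactly one (aggregated) message to its parent.
    return sum(2**k for k in range(1, depth + 1))

for d in (4, 8):
    raw, agg = messages_without_aggregation(d), messages_with_aggregation(d)
    print(f"depth {d}: {raw} raw vs {agg} aggregated messages "
          f"({raw / agg:.1f}x reduction)")
```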
13

Spatial characterization of pollution sources: an analysis of in-stream water quality data from the Potomac Headwaters of West Virginia

MacQueen, A. Andrew. January 2005
Thesis (M.S.)--West Virginia University, 2005 / Title from document title page. Document formatted into pages; contains vi, 113 p. : ill. (some col.), maps (some col.). Includes abstract. Includes bibliographical references (p. 107-113).
14

Assessment of the quality of the coding assigned to diagnoses in hospitalizations at a small hospital in the Vale do Paraíba

Aretha de Fatima do Amaral Santos 17 December 2014
Introduction: Information quality is a complex subject with many intricacies, since it deals with an extremely large number of variables; organizations must therefore invest in management information systems, implement organizational change processes and enable people to adapt to new realities. Truthful and accurate health records are needed to underpin epidemiological studies, health policy development, and prevention and health promotion strategies; incorrect data cause losses from the planning through the delivery of care. Method: A longitudinal, qualitative, retrospective study examining the quality of the codes assigned to 1,194 diagnoses in medical, surgical and ICU records of a small hospital in the Vale do Paraíba. Results: Of the 1,194 records consulted, 465 diagnoses (38.94 per cent) had no code assigned. After correction of the database by a trained coder, 511 of the 729 codes (70.10 per cent) were appropriate, 68 diagnoses (9.33 per cent) carried a completely different code from the correct one, 82 codes (11.24 per cent) had only an incorrect subcategory, and 68 (9.33 per cent) matched only at the chapter level. Conclusion: With respect to diagnostic coding, information quality shows a concordance rate of 70.10 per cent and a discordance rate of 29.90 per cent. Flaws in the administrative workflow, operational teams unprepared to enter information properly into the hospital's integrated system, and the absence of a professional coder trained for the task are the weaknesses found; they may have contributed to the high rate of missing codes, which altered some of the disease groups reported as principal and their order in the epidemiological rankings.
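As a quick check, the reported rates follow directly from the counts given in the abstract (matching up to rounding):

```python
# Reproduce the percentages reported in the abstract from the raw counts.
total_records = 1194
uncoded = 465                      # diagnoses with no code assigned
coded = total_records - uncoded    # 729 coded diagnoses

counts = {
    "appropriate": 511,
    "completely different code": 68,
    "incorrect subcategory only": 82,
    "correct chapter only": 68,
}

print(f"uncoded: {uncoded / total_records:.2%}")   # 38.94%
for label, n in counts.items():
    # 70.10%, 9.33%, 11.25% (abstract reports 11.24), 9.33%
    print(f"{label}: {n / coded:.2%}")
print(f"discordance: {(coded - counts['appropriate']) / coded:.2%}")  # 29.90%
```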
15

Artificial Intelligence applied to marketing: Impact of the use of Cognitive Chatbots on customer satisfaction in the banking sector

Villón Cabrera, Nicole 09 July 2020
Nowadays we live in a digitized world where more and more companies try to provide better services to differentiate themselves from one another. Chatbots let companies answer customer queries and offer a different kind of service. This research analyzes the use of chatbots in the Peruvian banking sector and how it impacts customer satisfaction, also observing the effects of service quality, information quality and ease of use. Data were collected from 250 respondents. The results offer banks insight into how to strengthen customer satisfaction through chatbots. / Research paper
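A survey of this shape is often analyzed by regressing satisfaction on the three predictors. The sketch below does so on synthetic data; the modeling choice and the rating scale are assumptions, not the thesis's stated method:

```python
# Hypothetical analysis sketch: regress customer satisfaction on service
# quality, information quality and ease of use via ordinary least squares.
# The data here are synthetic; the thesis's actual model may differ.
import numpy as np

rng = np.random.default_rng(0)
n = 250                                   # respondents, as in the abstract
service_q = rng.integers(1, 8, n)         # 7-point ratings (assumed scale)
info_q = rng.integers(1, 8, n)
ease = rng.integers(1, 8, n)
satisfaction = (0.4 * service_q + 0.3 * info_q + 0.2 * ease
                + rng.normal(0, 0.5, n))  # synthetic ground truth

X = np.column_stack([np.ones(n), service_q, info_q, ease])
coef, *_ = np.linalg.lstsq(X, satisfaction, rcond=None)
for name, b in zip(["intercept", "service quality",
                    "information quality", "ease of use"], coef):
    print(f"{name}: {b:+.3f}")
```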
16

Complementary, Alternative, and Integrative Medicine, Natural Health Products, and Medical Cannabis: Patient Preference and Prevalence of Use, Quality of Patient Health Information, and Safety and Effectiveness Concerns

Ng, Jeremy Yongwen January 2021
This thesis comprises three separate studies, each relating to one of the aforementioned therapy types: complementary, alternative, and integrative medicine (CAIM), natural health products (NHPs), and medical cannabis. Parallels can be drawn across these therapy types, including patient preference and prevalence of use, quality of patient health information, and safety and effectiveness concerns; knowledge of these parallels both informed the development of the three studies and emerged across their findings. Chapter 1 provides a comprehensive introduction to these parallels in the context of CAIM, NHPs, and medical cannabis. Chapter 2 comprises a cross-sectional survey determining NHP use disclosure to primary care physicians among patients attending a Canadian naturopathic clinic. Chapter 3 comprises a qualitative interview study identifying attitudes towards medical cannabis among family physicians practicing in Ontario, Canada. Chapter 4 comprises a sentiment analysis of Twitter data to understand how CAIM is mentioned during the COVID-19 pandemic. Lastly, chapter 5 concludes the thesis: it summarizes the most important findings, addresses study strengths and limitations, and discusses future directions for this work. / Thesis / Doctor of Philosophy (PhD)
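Chapter 4's sentiment analysis can be illustrated with NLTK's VADER analyzer; the example texts below are invented, and the thesis's actual tooling is not specified here:

```python
# Minimal sentiment-scoring sketch in the spirit of chapter 4's Twitter
# analysis, using NLTK's VADER analyzer. Example texts are invented.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
sia = SentimentIntensityAnalyzer()

tweets = [
    "Herbal tea really helped my recovery, feeling great!",
    "So much misinformation about miracle cures during the pandemic.",
]
for t in tweets:
    scores = sia.polarity_scores(t)   # neg/neu/pos plus compound score
    print(f"{scores['compound']:+.2f}  {t}")
```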
17

Best management practices and information quality performance in information technology projects under the moderating effect of constraints: a survey of the Brazilian experience

Rosa, Thatiane de Oliveira 13 November 2015
Purpose – This study assesses the influence of best management practices on information quality performance in Information Technology (IT) projects under conditions of constraint, based on the Brazilian experience. Methodology – The research rests on a conceptual model composed of independent, moderating and dependent variables: best project management practices, criteria for evaluating information quality, and information quality performance. To verify the conceptual model, the research was first grounded in the specialized literature, in three stages: 1 – fundamentals of IT project management; 2 – survey of best practices in project management; 3 – identification of criteria for evaluating information quality and of the perspectives of information quality performance. In a second step, the best project management practices identified in stage 2 were grouped using the statistical technique of cluster analysis. Experts were then consulted to confirm the variables of the conceptual model and to indicate the main effects (influences) of the best practices on information quality performance, conditioned on the moderating variables, i.e., the criteria for evaluating information quality. Using technical and scientific criteria, 303 experts with knowledge and experience of the object under investigation were selected, with backgrounds in several knowledge areas (Information Technology, Information Systems, Information Quality, Administration, among others), all focused on Information Technology. Data were collected through a Likert-type questionnaire from 1 (lowest intensity) to 5 (highest intensity), with some open questions. To reduce subjectivity in the results, statistical techniques were applied: Duncan's test to compare means and Spearman's correlation to analyze the influence investigated. Research limitations – The study is restricted to the Brazilian experience; replications in other countries are recommended. Originality/value – The study addresses a gap in the literature on the influence of best practices on information quality performance in IT projects, especially under constraints. Implications for management practice – The study is expected to support managers' decision processes in IT projects in dynamic and contingent contexts, and to add value to business ventures.
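Two of the statistical steps named above, cluster analysis and Spearman correlation, can be sketched on synthetic Likert data as follows (Duncan's multiple range test is omitted for lack of a SciPy equivalent):

```python
# Sketch of two statistical steps named in the abstract, on synthetic
# Likert-scale data: hierarchical clustering to group practices, and
# Spearman correlation between practice adoption and information quality
# performance. Data are synthetic; the thesis's procedure may differ.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
ratings = rng.integers(1, 6, size=(303, 8))   # 303 experts x 8 practices, 1..5

# Group practices by similarity of their rating profiles (cluster analysis).
Z = linkage(ratings.T.astype(float), method="ward")
groups = fcluster(Z, t=3, criterion="maxclust")
print("practice clusters:", groups)

# Influence of one practice on a (synthetic) performance rating.
performance = np.clip(ratings[:, 0] + rng.integers(-1, 2, 303), 1, 5)
rho, p = spearmanr(ratings[:, 0], performance)
print(f"Spearman rho = {rho:.2f}, p = {p:.3g}")
```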
18

Trust and uncertainty in distributed environments: application to the management of data and of the quality of data sources in M2M (Machine to Machine) systems

Ravi, Mondi 19 January 2016
Trust and uncertainty are two important aspects of many distributed systems. For example, multiple sources may be available for the same type of information, raising the problems of selecting the source that produces the most certain information and of resolving incoherence amongst the available information. Managing trust and uncertainty together is a complex problem, and this thesis develops a solution to it. Trust and uncertainty are intrinsically related: trust primarily concerns sources of information, while uncertainty is a characteristic of the information itself. In the absence of trust and uncertainty measures, a system generally suffers from problems like incoherence and uncertainty. We hypothesize that sources with higher trust levels produce more certain information than those with lower trust values, and we use the trust measures of the information sources to quantify uncertainty in the information and thereby infer high-level conclusions with greater certainty. A general trend in modern distributed systems is to embed reasoning capabilities in end devices to make them smart and autonomous. We model these end devices as agents of a multi-agent system. The major sources of beliefs for such agents are external information sources with varying trust levels; moreover, the incoming information and the resulting beliefs carry a degree of uncertainty. The agents therefore face the twofold problem of managing trust in their sources and of handling uncertainty in the information. We illustrate this with three application domains: (i) the intelligent community, (ii) smart-city garbage collection, and (iii) FIWARE (European project no. 285248 on the Future Internet, which motivated this research). Our solution models the devices (or entities) of these domains as intelligent agents comprising a trust management module, an inference engine and a belief revision system. We show that this set of components helps agents manage trust in other sources, quantify uncertainty in the information, and use both to infer more certain high-level conclusions. We finally assess our approach using simulated and real data from the different application domains.
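The core hypothesis, that source trust can quantify the certainty of a conclusion, admits a very small illustration. The trust-weighted vote below is an assumed aggregation rule, not the thesis's belief-revision machinery:

```python
# Minimal sketch of the hypothesis stated above: weight reports by the
# trust of their source to quantify the certainty of a conclusion. The
# aggregation rule (trust-weighted vote) is an illustrative assumption.

def conclusion_certainty(reports: list[tuple[str, bool]],
                         trust: dict[str, float]) -> float:
    """Trust-weighted fraction of sources asserting the conclusion."""
    total = sum(trust[src] for src, _ in reports)
    support = sum(trust[src] for src, claim in reports if claim)
    return support / total

trust = {"sensor_a": 0.9, "sensor_b": 0.6, "sensor_c": 0.2}
reports = [("sensor_a", True), ("sensor_b", True), ("sensor_c", False)]
print(f"certainty: {conclusion_certainty(reports, trust):.2f}")  # 0.88
```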
19

Creation and evaluation of information resources in medicine

Janda, Aleš January 2010
The field of generating and evaluating medical information is a broad one. During my doctoral studies I focused on medical information mainly through the principles of evidence-based medicine (EBM), an approach that applies the best currently available medical data to the treatment of patients. The EBM methodology can be divided into two basic branches: an afferent and an efferent one. The afferent part focuses on the development of information resources, whereas the efferent one promotes the optimal use of these resources and the critical application of the retrieved facts. For practical reasons (to limit the extent of the text), the thesis concentrates on only one field of my activity. The text deals mainly with the afferent part of EBM, namely the formation of a qualitative and quantitative synthesis of knowledge through the compilation of a systematic review and meta-analysis. The compilation of systematic reviews of prognostic markers, a field not adequately covered in the Czech literature, is discussed in detail. The practical outcome documented in the thesis is a description of the systematic review of immunohistochemical prognostic markers in intracranial ependymomas that was created and published by our research team. Compilation of this systematic review and meta-analysis was one of the...
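The quantitative-synthesis step of a meta-analysis can be sketched with standard fixed-effect, inverse-variance pooling of log hazard ratios; the study values below are invented, not the thesis's data:

```python
# Fixed-effect inverse-variance meta-analysis of log hazard ratios.
# Study values below are invented for illustration only.
import math

# (log hazard ratio, standard error) per study
studies = [(0.55, 0.30), (0.20, 0.25), (0.40, 0.20)]

weights = [1 / se**2 for _, se in studies]
pooled = sum(w * lhr for (lhr, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

hr = math.exp(pooled)
ci = (math.exp(pooled - 1.96 * pooled_se), math.exp(pooled + 1.96 * pooled_se))
print(f"pooled HR = {hr:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```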
20

Metadata Quality Assurance for Audiobooks: An explorative case study on how to measure, identify and solve metadata quality issues

Carlsson, Patrik January 2023
Metadata is essential to how (digital) archives, collections and databases operate. It is the backbone for organising different types of content, making them discoverable, and preserving digital records' authenticity, integrity and meaning over time. For that reason, it is also important to iteratively assess whether the metadata is of high quality. Despite its importance, there is an acknowledged lack of research verifying whether existing assessment frameworks and methodologies actually work and, if so, how well, especially in fields outside libraries. This thesis therefore conducted an exploratory case study, applying existing frameworks in a new context by evaluating the metadata quality of audiobooks. The Information Continuum Model was used to capture the metadata quality needs of customers/end users who search for and listen to audiobooks. Using a mixed-methods approach, the results showed that the frameworks can indeed be generalised and adapted to a new context. Although the frameworks helped measure, identify and find potential solutions to the problems, they could be better adjusted to the context, and more metrics and information could be added. A generalised method for assessing metadata quality is thus possible, but it needs improvement, and it must be used by people who understand the data and the processes, to reach its full potential.
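One common metadata quality metric, completeness, is easy to sketch: the fraction of required fields populated per record. The field names below are hypothetical, not the schema used in the thesis:

```python
# Illustrative sketch of one common metadata quality metric, completeness:
# the fraction of required fields populated per audiobook record. The
# field names are hypothetical, not the thesis's actual schema.

REQUIRED_FIELDS = ["title", "author", "narrator", "language",
                   "duration", "isbn"]

def completeness(record: dict) -> float:
    filled = sum(1 for f in REQUIRED_FIELDS if record.get(f))
    return filled / len(REQUIRED_FIELDS)

record = {"title": "Example Audiobook", "author": "A. Author",
          "narrator": "", "language": "en", "duration": "10:43:00"}
print(f"completeness: {completeness(record):.0%}")  # 4 of 6 fields -> 67%
```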
