221

Assessment of Single Crystal X-ray Diffraction Data Quality

Krause, Lennard 02 March 2017 (has links)
No description available.
222

Factors affecting antiretroviral therapy patients' data quality at Princess Marina Hospital pharmacy in Botswana

Tesema, Hana Tsegaye 04 June 2015 (has links)
AIM: This study explored the factors influencing antiretroviral therapy patients' data quality at Princess Marina Hospital Pharmacy in Botswana. METHODS: A phenomenological approach was adopted; specifically, an Interpretative Phenomenological Analysis qualitative design was used. Data were collected through semi-structured interviews with 18 conveniently selected pharmacy staff members and analysed using Smith's (2005) Interpretative Phenomenological Analysis framework. RESULTS: Five thematic categories emerged from the analysis: "data capturing: an extra task", "knowledge and experience of IPMS", "training and education", "mentoring and supervision", and "data quality: impact on patients' care". The findings have implications for practice, training and research. CONCLUSION: Pharmacy staff had limited knowledge of IPMS and of its use in data capturing; these limitations affect the quality of the data captured. / Health Studies / M. A. (Health Studies)
223

Konsekvenser vid införandet av ett affärssystem : En jämförelseanalys av nutida och framtida läge / Consequences of introducing an ERP system: a comparative analysis of the current and future state

Olofsson, Simon, Åkerlund, Agnes January 2018 (has links)
The study identifies the consequences of implementing an ERP system at a large production company that previously managed information in several unintegrated systems. This is investigated through the lenses of Lean Administration, information security, and information and data quality. The study was conducted at the marketing department of SCA Östrand; SCA is currently investing 7.8 billion Swedish kronor in the Östrand pulp mill, intends to lead the world market, and aims to produce 1 million tonnes of pulp per year with the new mill. As production increases, more data and information must be handled, and the support processes must be improved and developed. Process flows were mapped with the process-mapping notation BPMN and analyzed in a current state and a future state: in the current state several unintegrated systems are used, while in the future state many parts are integrated and automated. Three main processes were identified and analyzed according to the three frameworks above. The analysis shows that after the implementation, information security improves according to the CIA triad, information and data quality improve, and the working method is leaner. To calculate relative efficiency, Data Envelopment Analysis (DEA) was used with the CCR and ARI models. The DEA-CCR and DEA-ARI results show that the current-state process flow is between 48.3% and 57.1% efficient compared to the future state, which is 100% efficient in the models.
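The relative efficiency figures quoted above come from DEA envelopment models, each solved as one linear program per decision-making unit (DMU). Below is a minimal sketch of the input-oriented CCR model in Python, assuming hypothetical single-input/single-output data for the two process states; the thesis' actual inputs, outputs and the ARI weight restrictions are not reproduced here.

```python
# Input-oriented DEA-CCR envelopment model: minimize theta subject to the
# composite reference unit dominating DMU j0. Illustrative sketch only.
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, j0):
    """Efficiency of DMU j0. X: (m inputs x n DMUs), Y: (s outputs x n DMUs)."""
    m, n = X.shape
    s, _ = Y.shape
    # Decision variables: [theta, lambda_1, ..., lambda_n]; minimize theta.
    c = np.zeros(n + 1)
    c[0] = 1.0
    # Inputs:  sum_j lambda_j * x_ij - theta * x_i,j0 <= 0
    A_in = np.hstack([-X[:, [j0]], X])
    # Outputs: -sum_j lambda_j * y_rj <= -y_r,j0
    A_out = np.hstack([np.zeros((s, 1)), -Y])
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.concatenate([np.zeros(m), -Y[:, j0]])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
    return res.x[0]

# Two DMUs standing in for the current and future process states (toy numbers).
X = np.array([[4.0, 2.0]])   # one input, e.g. handling time per order
Y = np.array([[1.0, 1.0]])   # one output, e.g. orders processed
print(ccr_efficiency(X, Y, 0))  # current state: 0.5 (50% efficient)
print(ccr_efficiency(X, Y, 1))  # future state: 1.0 (fully efficient)
```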
224

Discovering data quality rules in a master data management context / Fouille de règles de qualité de données dans un contexte de gestion de données de référence

Diallo, Thierno Mahamoudou 17 July 2013 (has links)
Dirty data continues to be an important issue for companies. The Data Warehousing Institute [Eckerson, 2002], [Rockwell, 2012] estimated that poor data costs US businesses $611 billion annually and that erroneously priced data in retail databases costs US customers $2.5 billion each year. Data quality is becoming more and more critical, and the database community pays particular attention to the subject: a variety of integrity constraints, such as Conditional Functional Dependencies (CFDs), have been studied for data cleaning. Repair techniques based on these constraints are precise at catching inconsistencies but limited in how to correct the data exactly. Master data offers a new alternative for data cleaning thanks to its quality. With the growing importance of Master Data Management (MDM), a new class of data quality rule known as Editing Rules (ERs) tells how to fix errors, pointing out which attributes are wrong and what values they should take; the intuition is to correct dirty data using high-quality master data. However, finding data quality rules is an expensive process that involves intensive manual effort, and it remains unrealistic to rely on human designers.
In this thesis, we develop pattern mining techniques for discovering ERs from existing source relations with respect to master relations. In this setting, we propose a new semantics of ERs that takes advantage of both source and master data. Thanks to this satisfaction-based semantics, the discovery problem for ERs turns out to be strongly related to the discovery of both CFDs and one-to-one correspondences between source and target attributes. We first attack the problem of discovering CFDs, concentrating on the class of constant CFDs, known to be very expressive for detecting inconsistencies, and we extend well-known concepts introduced for traditional functional dependencies to solve their discovery problem. Secondly, we propose a method based on inclusion dependencies to extract one-to-one correspondences from source to master attributes before automatically building ERs. Finally, we propose some heuristics for applying ERs to clean data. We have implemented and evaluated our techniques on both real-life and synthetic databases; the experiments show the feasibility, scalability and robustness of our proposal.
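To make the two rule classes concrete, here is a small illustrative sketch of a constant CFD and the editing rule built from it, over a hypothetical customer relation; the attribute names, values and repair logic are assumptions for illustration, not examples taken from the thesis.

```python
# Constant CFD: tuples with country_code '44' and area_code '131' must have
# city 'Edinburgh'. The editing rule additionally says how to repair a
# violating tuple using the matching high-quality master record.
source = [
    {"country_code": "44", "area_code": "131", "city": "London"},    # violation
    {"country_code": "44", "area_code": "131", "city": "Edinburgh"},
]
master = {("44", "131"): "Edinburgh"}  # reference (master) data

def violates_constant_cfd(t):
    # CFD ([country_code, area_code] -> city, ('44', '131' || 'Edinburgh'))
    key = (t["country_code"], t["area_code"])
    return key in master and t["city"] != master[key]

for t in source:
    if violates_constant_cfd(t):
        # The editing rule points at the wrong attribute (city) and
        # overwrites it with the master value.
        t["city"] = master[(t["country_code"], t["area_code"])]

print(source)  # both tuples now carry city 'Edinburgh'
```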
225

Análise e avaliação do controle de qualidade de dados hospitalares na região de Ribeirão Preto / Analysis and evaluation of the quality control of hospital data in the Ribeirão Preto region.

Vinci, André Luiz Teixeira 08 April 2015 (has links)
Introduction: Data quality is of utmost importance nowadays due to the increasing use of information systems, especially in healthcare. The Regional Health Care Observatory (ORAH) is considered a reference in gathering, processing and maintaining the quality of hospital data, thanks to its extensive database of information derived from the hospital discharge sheets of public, mixed and private hospitals. A systematic verification of those data is performed to improve their quality, preventing incompleteness and inconsistencies at the end of processing. Aim: To establish an overall picture of the quality of the data on hospital discharges that occurred in 2012 for each partner hospital of the ORAH in the Ribeirão Preto region, and to analyze and identify the quality gained or lost during the gathering and processing stages. Methods: Analysis of the information flow within the hospitals in partnership with the ORAH, together with analysis of the quality of the data stored by the ORAH after processing, through the creation of completeness and consistency indicators; assessment of data quality at each stage of the ORAH's internal verification protocol through specific quality indicators; and, finally, evaluation of the agreement between the information in a sample of discharge sheets recorded by the ORAH and the patients' medical records, by measuring the sensitivity, specificity and accuracy of the sample. Results: An overall picture focused on patient data production and level of informatization was developed for the hospitals, complementing the analysis of the ORAH's data quality. This analysis found mean rates of 99.6% completeness and 99.5% consistency, and a completion percentage above 99.2% for all fields of the discharge sheet. The quality indicator built from comparisons of the completeness and consistency dimensions across the ORAH's data processing steps made it possible to verify that information quality is maintained by the validation and consistency protocols in use by the ORAH staff. However, assessing the volatility of field values between steps confirmed and quantified the occurrence of changes in the fields. The agreement between the data in the discharge sheets and the patient health records was demonstrated by the high sensitivity (99.0%; 95% CI 98.8%-99.2%), specificity (97.9%; 95% CI 97.5%-98.2%) and accuracy (96.3%; 95% CI 96.0%-96.6%) found in the sample. Conclusion: These analyses demonstrated the quality of the information provided by the ORAH, established a comprehensive methodology for analyzing that quality, and identified problems to be addressed to further improve the quality of the information in the hospital discharge sheet and in the ORAH database as a whole.
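For reference, the three agreement measures reported above are the standard confusion-matrix ratios, where TP, TN, FP and FN count the discharge-sheet fields that agree or disagree with the patient record:

```latex
\mathrm{sensitivity} = \frac{TP}{TP + FN}, \qquad
\mathrm{specificity} = \frac{TN}{TN + FP}, \qquad
\mathrm{accuracy} = \frac{TP + TN}{TP + TN + FP + FN}
```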
226

Multimodales zerebrales Monitoring bei schweren Schädel-Hirn-Trauma / Multimodal cerebral monitoring in severe traumatic brain injury

Kiening, Karl Ludwig 06 January 2004 (has links)
The aim of our clinical and experimental studies was to evaluate two new monitoring parameters - brain tissue PO2 (PtiO2) of cerebral white matter, and online intracranial compliance (cICC) - in patients with severe traumatic brain injury, using a computerized multimodal cerebral monitoring system. By comparing PtiO2 with jugular venous oxygen saturation, we established a hypoxic PtiO2 threshold of 8.5 mmHg. Moreover, we demonstrated that with intact cerebral autoregulation, PtiO2 stays well above the hypoxic threshold as long as cerebral perfusion pressure (CPP) remains above 60 mmHg. Forced or moderate hyperventilation, however, carries an individual risk of reducing PtiO2 below the hypoxic threshold despite an adequate CPP. PtiO2 monitoring is therefore particularly indicated when hyperventilation therapy is necessary to control pathologically increased intracranial pressure (ICP). PtiO2 values need critical interpretation if catheters are placed close to contusions: there, PtiO2 is significantly reduced, presumably due to low pericontusional blood flow, and such measurements are not representative of global cerebral oxygenation. For cICC monitoring, a pathological threshold was likewise established (0.5 ml/mmHg). However, interpretation of cICC data is complicated by the stepwise reduction of intracranial compliance with increasing age, and cICC data quality falls significantly short of established monitoring parameters, so routine use of the device is currently not advisable. Based on our results, the PtiO2 method has become internationally established as a tool for long-term monitoring of cerebral oxygenation; the cICC system, by contrast, needs extensive alterations to sufficiently anticipate pathological ICP rises.
228

Desenvolvimento de um método tentativo para a melhoria da acuracidade de dados de um sistema de programação da produção – um estudo de caso em uma empresa do setor de alimentos cárneos / Development of a tentative method for improving the data accuracy of a production scheduling system: a case study at a company in the meat food sector

Rücker, Eduardo Scherer 27 February 2009 (has links)
This study develops a tentative method for improving the data accuracy of a specific production scheduling system for the meat industry. The proposal was based on the project implementing the tool at Empresa Alfa, which produces chicken, turkey and pork-based foods. The research method was a case study, through which the influence of data accuracy on the information generated by the system during the project was reported and analyzed. The proposed method was built on the theoretical framework on production scheduling, data quality and information quality; on the author's perceptions from participating in the project to which the case study was applied; and on the contributions of experts on the topic of the work. From this, the tentative method was structured into processes and subprocesses, a hierarchy that enabled the objectives of each process regarding data accuracy to be carried out in stages (subprocesses). At the end of the study, the author c
229

Principy a možnosti fungování BI v malých a středních podnicích / Principles of function and the possibility of BI in small and medium enterprises

Tříska, Aleš January 2011 (has links)
Business intelligence is an effort to better understand company processes and the business context in which a company operates. The main goal of this thesis is to characterize the principles and possibilities of implementing and applying Business intelligence in a small or medium enterprise. This goal is achieved by describing the general needs of a small or medium enterprise that intends to use business intelligence, including an evaluation and general description of information technologies in small and medium enterprises, as well as of what data are used for managerial decisions and how those decisions are made. To implement BI in a small or medium enterprise, it is necessary to understand how Business intelligence works and on what principles. The applied part of this thesis describes the actual procedures in preparing a BI implementation, its impact on the company, and an evaluation of the meaningfulness of the solution.
230

Aplicação de princípios de qualidade de dados durante o desenvolvimento de um sistema computacional médico para a cirurgia coloproctológica / Application of data quality principles in the development of a computational medical system for coloproctology surgery

Jung, Wilson 25 April 2012 (has links)
Nowadays, many fields of human knowledge use computer systems to support the management of the data that underpin decision making. Data quality (DQ) is a key property whose absence can undermine the usefulness of information and of the processes that use it. The literature reports several cases of DQ problems with impact in many areas, resulting in economic and social losses; DQ research therefore studies the causes of data problems and proposes assessment methods and processes to help assure quality. In healthcare, data are an important element used as the basis for applying medical treatments and procedures to patients, and thus require a high level of quality. These data are also used in research and in applications of computational knowledge discovery methods such as data mining. The goal of this work is therefore to study the application of principles that help assure DQ during the development of medical software. This motivated a case study in coloproctology, in which a prototype data management system for coloproctology surgery was developed in partnership with the Coloproctology Service of FCM - UNICAMP. Interaction with domain experts was a key factor during development, enabling adequate modeling of the data structures that make up the system. A module to monitor specific data problems was also incorporated into the prototype, supporting both the proper entry of information and the control of patient records with DQ problems. The prototype was evaluated by collaborators from the computing and healthcare fields who, after using the system, answered a qualitative DQ assessment form. The assessment results indicated that the prototype suits the activities it is intended for, guided the revision of specific functionalities, and can support the evolution of the proposed software and future related work.
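The monitoring module described above maps naturally onto simple field-level checks. Below is a minimal sketch in Python of completeness and consistency checks applied to a patient record at entry time; the field names and rules are illustrative assumptions, not the prototype's actual schema.

```python
# Flag DQ problems in a surgery record: missing required fields
# (completeness) and impossible date combinations (consistency).
from datetime import date

REQUIRED = ["patient_id", "birth_date", "surgery_date", "procedure_code"]

def dq_problems(record: dict) -> list[str]:
    problems = []
    # Completeness: every required field must be present and non-empty.
    for field in REQUIRED:
        if not record.get(field):
            problems.append(f"missing: {field}")
    # Consistency: the surgery cannot precede the patient's birth.
    birth, surgery = record.get("birth_date"), record.get("surgery_date")
    if birth and surgery and surgery < birth:
        problems.append("inconsistent: surgery_date before birth_date")
    return problems

record = {"patient_id": "P-001", "birth_date": date(1980, 5, 2),
          "surgery_date": date(2011, 9, 14), "procedure_code": ""}
print(dq_problems(record))  # ['missing: procedure_code']
```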
