291
Riktlinjer för att förbättra datakvaliteten hos data warehouse system / Guiding principles to improve data quality in data warehouse systems. Carlswärd, Martin. January 2008.
Data warehouse systems emerged during the 1990s and have been implemented in many organizations. An organization's source systems can be integrated with a data warehouse system to create one version of reality and to produce reports that serve as a basis for decisions. "One version of reality" means a common picture of how the organization's daily work is carried out, which constitutes the underlying information for the analyses produced by the data warehouse system. It is therefore essential that the reports hold a data quality the organization considers satisfactory, which in turn means that the data quality of the data warehouse system must be sufficiently high. If the data quality of the decision basis is deficient, the organization will not make optimal decisions; it may instead make decisions it otherwise would not have made.

Improving the data quality of the data warehouse system is therefore central to the organization. The quality philosophy Total Quality Management (TQM) supports this work because it makes it possible to take a holistic view of quality. A holistic perspective is needed because deficient data quality is caused not only by factors within the data warehouse system itself but also by other factors. The quality-improving measures an organization needs to take vary, since they depend on how the organization works, even though some more general measures are common to most organizations.

What is communicated from the data warehouse system, for example in reports, must be regarded by the organization's actors as understandable and trustworthy, because the decision bases must be understandable and trustworthy to the receiver of the information. If, for example, the reports contain junk characters, it becomes difficult for the receiver to regard the information as trustworthy and understandable. If the quality of the communicated message improves, that is, if communication quality improves, the data quality of the data warehouse system will ultimately also improve. The thesis develops guiding principles for improving data quality in data warehouse systems by means of communication quality and TQM. The purpose of the guidelines is to improve data quality by improving the quality of what is communicated within the company's data warehouse system.

The concrete measures are situation-dependent. One example is to give the receiver the possibility to find out who the sender of the information content of a report is, so that the receiver can question and verify the communicative act with the sender, who in turn can defend the message; this increases the trustworthiness of the communicative act. Another example is to introduce input controls in the source systems to prevent actors from entering junk characters that end up in the data warehouse system, which improves the receiver's understanding of what is communicated.
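As a concrete illustration of the second measure, here is a minimal sketch of a source-system input control that rejects junk characters before records are loaded into the warehouse; the field names, the whitelist of allowed characters and the example records are assumptions made for this sketch, not taken from the thesis.

```python
import re

# Hypothetical whitelist: letters (including Swedish å/ä/ö), digits, spaces and basic punctuation.
VALID_PATTERN = re.compile(r"[0-9A-Za-zÅÄÖåäö .,\-'/]+")

def validate_record(record: dict) -> list[str]:
    """Return the names of fields whose values contain junk characters."""
    return [field for field, value in record.items()
            if not VALID_PATTERN.fullmatch(str(value).strip())]

# Example: the second record would be rejected before reaching the data warehouse.
ok = {"customer_name": "Martin Carlswärd", "city": "Linköping"}
bad = {"customer_name": "M@rtin####", "city": "Link\x00ping"}
print(validate_record(ok))   # []
print(validate_record(bad))  # ['customer_name', 'city']
```

In practice such a check would sit in the source system's entry forms or in the ETL layer, so that rejected values are corrected by the sender instead of being propagated into the reports.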
292
Observation of a Higgs boson and measurement of its mass in the diphoton decay channel with the ATLAS detector at the LHC. Lorenzo Martinez, Narei. 10 September 2013.
The Standard Model of particle physics predicts the existence of a massive scalar boson, usually referred to as the Higgs boson, arising from the spontaneous symmetry breaking mechanism needed to generate the masses of the particles. The Higgs boson, whose mass is theoretically undetermined, has been searched for experimentally for half a century by various experiments. This is the case of the ATLAS experiment at the LHC, which started taking data from high-energy collisions in 2010. One of the most important decay channels in the LHC environment is the diphoton channel, because the final state can be completely reconstructed with high precision. The photon energy response is a key point in this analysis, as the signal would appear as a narrow resonance over a large background. In this thesis, a detailed study of the photon energy response using the ATLAS electromagnetic calorimeter has been performed. This study has provided a better understanding of the photon energy resolution and scale, enabling an improvement of the sensitivity of the diphoton analysis as well as a precise determination of the systematic uncertainties on the peak position. The diphoton decay channel had a prominent role in the discovery, in July 2012, of a new particle compatible with the Standard Model Higgs boson by the ATLAS and CMS experiments. Using this channel and the improved understanding of the photon energy response, a measurement of the mass of this particle is presented in this thesis, with the data collected in 2011 and 2012 at center-of-mass energies of 7 TeV and 8 TeV. A mass of 126.8 ± 0.2 (stat) ± 0.7 (syst) GeV/c² is found. The calibration of the photon energy measurement with the calorimeter is the source of the largest systematic uncertainty on this measurement. Strategies to reduce this systematic error are discussed.
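As a side note on the quoted result, a quick illustrative calculation of the total uncertainty, assuming the statistical and systematic components are independent and combined in quadrature (the usual convention, not stated explicitly in the abstract):

```python
from math import hypot

stat, syst = 0.2, 0.7  # GeV/c², as quoted above
print(f"total uncertainty ≈ {hypot(stat, syst):.2f} GeV/c²")  # ≈ 0.73 GeV/c²
```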
293
La démographie des centenaires québécois : validation des âges au décès, mesure de la mortalité et composante familiale de la longévité. Beaudry-Godin, Mélissa. 06 1900.
The recent rise in the number of centenarians in low-mortality countries has led to a multiplication of studies on longevity, and more specifically on its determinants and repercussions. Some researchers try to identify genes that could be responsible for extreme longevity; others study the social, economic and political impact of population aging and the rise in life expectancy, or question the existence of a biological limit to the human life span. In this thesis, we first study the demographic situation of Quebec centenarians since the beginning of the 20th century using aggregate data (census data, vital statistics, and population estimates). Second, we evaluate the quality of Quebec data at the oldest ages using a nominative list of the deaths of centenarians belonging to the 1870-1894 birth cohorts, with particular attention to mortality trajectories beyond age 100. Finally, we analyze the survival of the siblings and parents of a sample of semi-supercentenarians (aged 105 and over) born between 1890 and 1900 in order to assess the familial component of longevity.

The thesis consists of three articles. In the first, we study the evolution of the centenarian population of Quebec since the 1920s. Using demographic indicators such as the centenarian ratio, survival probabilities and the mean maximal age at death, we highlight the remarkable progress achieved in survival at advanced ages. We also decompose the factors responsible for the increase in the number of centenarians in Quebec; among them, the increase in the probability of surviving from age 80 to 100 stands out as the main determinant.

The second article deals with the validation of the ages at death of French-Canadian, Catholic centenarians of the 1870-1894 birth cohorts who were born and died in Quebec. The validation results confirm that Quebec data at the highest ages are of excellent quality, so the mortality trajectories of centenarians based on the raw death records are representative of the true trends. The evolution of the death probabilities beyond age 100 shows a deceleration of mortality: for both men and women, the death probabilities plateau at around 45%.

Finally, in the third article, we examine the familial component of longevity. We compare the survival of the siblings and parents of semi-supercentenarians who died between 1995 and 2004 with that of their respective birth cohorts. The survival differences between the siblings and parents of the semi-supercentenarians under observation and their "control" cohorts are statistically significant at the 0.01% level. Moreover, the siblings, fathers and mothers of semi-supercentenarians are between 1.7 (sisters) and 3 times (mothers) more likely than members of their corresponding birth cohorts to reach age 90. These findings leave little doubt that longevity clusters within certain families.
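To make the comparison in the third article concrete, here is a small sketch of how a relative chance of reaching age 90 and its significance could be computed for one group; the counts are invented for illustration and are not the thesis data.

```python
from scipy.stats import chi2_contingency

# Hypothetical counts, for illustration only.
# Rows: reached age 90 / died before 90; columns: siblings of semi-supercentenarians / birth-cohort controls.
table = [[70, 180],
         [130, 820]]

p_siblings = table[0][0] / (table[0][0] + table[1][0])   # 70 / 200
p_controls = table[0][1] / (table[0][1] + table[1][1])   # 180 / 1000
chi2, p_value, _, _ = chi2_contingency(table)

print(f"relative chance of reaching 90: {p_siblings / p_controls:.2f}")  # ≈ 1.94
print(f"chi-square p-value: {p_value:.2e}")
```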
294
Approches bio-informatiques appliquées aux technologies émergentes en génomique. Lemieux Perreault, Louis-Philippe. 02 1900.
Genetic studies, such as linkage and association studies, have contributed greatly to a better understanding of the etiology of several diseases. Nonetheless, despite the tens of thousands of genetic studies performed to date on hundreds of diseases and other traits, a large part of their heritability remains unexplained. The last decade has seen unprecedented progress in genomics. For example, high-density comparative genomic hybridization microarrays have demonstrated the existence of large-scale copy number variations and polymorphisms, which are now detectable using DNA microarrays or high-throughput sequencing. In addition, high-throughput sequencing has shown that the majority of variations in an individual's exome are rare or unique to that individual. This has led to the design of a new type of DNA microarray, enriched for rare variants, with which several thousand rare variants can be genotyped quickly and inexpensively for a large set of individuals at once.

In this context, the general objective of this thesis is the development of new methodological approaches and high-performance bioinformatics tools for the detection, at high quality standards, of copy number variations and rare single nucleotide variations in genetic studies. In the long term, this should allow more of the missing heritability of complex traits to be accounted for, contributing to the advancement of knowledge of their etiology.

We have developed an algorithm for the partitioning of copy number polymorphisms, making it feasible to use these structural variations in genetic linkage studies with family data. We have also conducted an extensive exploratory study, in collaboration with the Wellcome Trust Centre for Human Genetics of the University of Oxford, to characterize the problems associated with studies of rare copy number variations in unrelated individuals. We have then carried out a thorough comparison of the performance of genotyping algorithms when used with a new DNA microarray composed of a majority of very rare genetic variants. Finally, we have implemented a bioinformatics tool for the fast and efficient filtering of genetic data, which yields higher-quality data and better reproducibility of results while reducing the chance of spurious associations.
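As an illustration of the kind of filtering such a tool performs, here is a minimal sketch that drops variants failing common quality-control thresholds; the column names, thresholds and example values are assumptions for this sketch, not the thesis's actual implementation.

```python
import pandas as pd

# Hypothetical per-variant summary table; thresholds are common defaults, not the thesis's.
variants = pd.DataFrame({
    "variant_id": ["rs1", "rs2", "rs3", "rs4"],
    "call_rate":  [0.999, 0.92, 0.995, 0.998],
    "maf":        [0.25, 0.18, 0.0004, 0.03],
    "hwe_p":      [0.45, 0.30, 0.81, 1e-9],
})

def qc_filter(df, min_call_rate=0.98, min_maf=0.001, min_hwe_p=1e-6):
    """Keep variants passing call-rate, minor-allele-frequency and Hardy-Weinberg filters."""
    keep = (
        (df["call_rate"] >= min_call_rate)
        & (df["maf"] >= min_maf)
        & (df["hwe_p"] >= min_hwe_p)
    )
    return df[keep]

print(qc_filter(variants)["variant_id"].tolist())  # ['rs1'] -- rs2, rs3, rs4 each fail one filter
```

Filtering on call rate, allele frequency and Hardy-Weinberg equilibrium before association testing is one standard way to improve reproducibility and limit spurious associations.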
295
A Importância da qualidade da informação na predição de valores genéticos para características de crescimento em bovinos da raça nelore. Pereira, Cristiane de Fátima. 01 September 2014.
The objective of this study was to evaluate the influence of information quality on the prediction of breeding values for growth traits in Nellore cattle. Data on Nellore cattle from four farms participating in the Nellore Brasil program of the Brazilian National Association of Breeders and Researchers (ANCP), corresponding to the years 2012 and 2013, were used. The selected farms are certified by the ANCP for the quality of their zootechnical information, having received the association's Global G1 seal of approval. Field data such as the management lot (management group) were used as the assessment criterion, and growth traits related to standardized body weight at 120 (W120), 210 (W210), 365 (W365) and 450 (W450) days of age were evaluated under different scenarios of information quality: inclusion of all management-lot information; random inclusion of management-lot information for 90%, 70%, 50% and 30% of the records; and absence of management-lot information for the animals with phenotypic records. When the different scenarios of management-lot information were considered in the genetic evaluation of the growth traits, the structure of the files used for genetic evaluation changed after the statistical treatment of the data. The contemporary groups changed in the number of animals as a consequence of the poor quality of the information. Changes were observed in all (co)variance components and genetic parameters compared with those obtained in the reference scenario. For the growth traits, heritability estimates and additive genetic variances increased as the amount of management-lot information decreased. Because of the changes in contemporary groups in the absence of management-lot information, the genetic analyses produced biased results, with overestimated breeding values that can mislead the choice of animals. Regarding the averages of predicted breeding values, for the maternal additive genetic effects on body weight at 120 and 210 days of age the loss of information quality had a negative effect, reducing the average breeding values of the evaluated herds.

When Spearman rank correlations were computed between the breeding values obtained from the full-information reference scenario (REF) and those obtained under the different scenarios of inclusion of management-lot information, within three accuracy classes of breeding values, the ranking of the animals changed for the growth traits evaluated. / Mestre em Ciências Veterinárias
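A small sketch of the rank comparison used in the last step, assuming two vectors of predicted breeding values for the same animals under the reference scenario and a reduced-information scenario; the values are invented for illustration.

```python
from scipy.stats import spearmanr

# Hypothetical predicted breeding values (kg) for the same five animals under two scenarios.
ebv_reference = [12.4, 8.1, 15.3, 5.7, 10.2]   # all management-lot information included
ebv_reduced   = [11.0, 9.5, 16.1, 4.9, 12.8]   # e.g. only 30% of lot information retained

rho, p_value = spearmanr(ebv_reference, ebv_reduced)
print(f"Spearman rank correlation: {rho:.2f} (p = {p_value:.3f})")
# A value well below 1 indicates re-ranking of animals, i.e. selection decisions would change.
```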
296
Évaluation et requêtage de données multisources : une approche guidée par la préférence et la qualité des données : application aux campagnes marketing B2B dans les bases de données de prospection / A novel quality-based, preference-driven data evaluation and brokering approach in multisource environments: application to marketing prospection databases. Ben Hassine, Soumaya. 10 October 2014.
In Business-to-Business (B-to-B) marketing campaigns, achieving "the highest volume of sales at the lowest cost" and the best return on investment (ROI) is a significant challenge. ROI performance depends on a set of subjective and objective factors such as dialogue strategy, invested budget, marketing technology and organisation, and above all data and, particularly, data quality. However, data issues in marketing databases are overwhelming, leading to insufficient knowledge of the targets that handicaps B-to-B salespersons when interacting with prospects. B-to-B prospection data is indeed mainly structured as a set of independent, heterogeneous, separate and sometimes overlapping files that form a messy multisource prospect-selection environment. Data quality thus appears as a crucial issue when dealing with prospection databases. Beyond data quality, the ROI metric mainly depends on campaign costs; given the vagueness of (direct and indirect) cost definitions, we limit our focus to price considerations.

Price and quality thus define the fundamental constraints data marketers consider when designing a marketing campaign file, as they typically look for the "best-qualified selection at the lowest price". This goal is not always reachable, however, and compromises often have to be made. Such compromises must first be modelled and formalized, and then deployed for multisource selection. In this thesis, we propose a preference-driven selection approach for multisource environments that aims at 1) modelling and quantifying decision makers' preferences and 2) defining and optimizing a selection routine based on these preferences. Concretely, we first model the data marketer's quality preferences by appraising multisource data with robust evaluation criteria (quality dimensions) that are rigorously summarized into a global quality score; the preferential aggregation of these criteria relies on the Choquet integral. In a second step, based on this global quality score and the data price, a preference-based selection algorithm returns "the best-qualified records bearing the lowest possible price". The optimisation is carried out by BrokerACO, a prototype based on the ant colony optimization (ACO) heuristic, whose Pareto-optimality is ensured by the preference aggregation function defined in the second contribution; its effectiveness is shown by analysing test marketing campaigns run on real prospection data.
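For readers unfamiliar with the Choquet integral used to aggregate the quality dimensions, here is a minimal sketch of its discrete form; the two quality dimensions, their scores and the capacity values are invented for illustration, whereas in the thesis the capacity is elicited from the decision makers' preferences.

```python
def choquet_integral(scores: dict, capacity) -> float:
    """Discrete Choquet integral of `scores` with respect to the set function `capacity`.

    `scores` maps each quality dimension to a value in [0, 1];
    `capacity(frozenset)` must be monotone, with capacity(all criteria) == 1.
    """
    items = sorted(scores.items(), key=lambda kv: kv[1])   # criteria sorted by ascending score
    total, previous = 0.0, 0.0
    for i, (_, value) in enumerate(items):
        coalition = frozenset(c for c, _ in items[i:])      # criteria scoring at least `value`
        total += (value - previous) * capacity(coalition)
        previous = value
    return total

# Hypothetical capacity over two quality dimensions.
def capacity(coalition):
    weights = {frozenset(): 0.0,
               frozenset({"completeness"}): 0.5,
               frozenset({"freshness"}): 0.2,
               frozenset({"completeness", "freshness"}): 1.0}
    return weights[coalition]

print(choquet_integral({"completeness": 0.9, "freshness": 0.6}, capacity))  # 0.6*1.0 + 0.3*0.5 = 0.75
```

Unlike a weighted mean, the capacity can encode interactions between criteria, for instance requiring both dimensions to score well before the global score becomes high.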
297
Estudo dos fatores influenciadores da intenção de uso da informação dos sistemas de Business Intelligence em empresas brasileiras / Study of factors that impact use intention of Business Intelligence systems in Brazilian companies. Claudinei de Paula Santos. 21 August 2014.
As this century ends, the globalization of markets and its effects on patterns of economic, political, social and organizational behavior have become increasingly important, creating a scenario in which competitiveness emerges as an imperative. A characteristic of modern enterprises is their rising level of automation, with technologies providing access to large amounts of data. Data warehouse (DW) technologies have served as repositories for these data, and advances in extraction, transformation and loading (ETL) applications have increased the speed of data collection. Much has recently been discussed about this by-product of business processes: data, seen as a potential source of information capable of helping institutions guarantee their survival in their industry.

In this context, Business Intelligence Systems (BIS), whose function is to process data and deliver actionable information, i.e. information that can be used for a specific decision, have been recognized by executives as important for the continuity of their businesses; for years, worldwide surveys conducted by Gartner have reported BIS as the technology these professionals most desire, and Business Intelligence applications have dominated the technology priority lists of many CIOs. Despite this favorable scenario, the Gartner Group reports a high level of underutilization of these systems, which leads us to ask why a system that companies consider important and desirable fails to meet users' expectations. This work therefore studies the influence of the dimensions critical success factors (CSF) and expected benefits (BE) on the dimension intention to use (USO) the information provided by BIS, examining the effect of the variables of each dimension on USO. A conceptual model relating these dimensions was established with reference to other academic works, their variables and their research results. A quantitative survey was conducted and analysed with the Partial Least Squares (PLS) statistical technique, using data obtained from BIS users in different areas of companies from different sectors. With PLS it was possible to obtain indicators for the variables of the dimensions and to establish the structural model based on trust.
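As a toy illustration of the PLS step, the sketch below fits a PLS regression on simulated Likert-style responses; it uses scikit-learn's PLSRegression purely for illustration, whereas a structural model such as the one in the thesis is normally estimated with dedicated PLS path-modelling software, and all data here are simulated.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)

# Hypothetical survey data: 50 respondents, 6 indicator items for the CSF/BE dimensions (1-5 Likert).
X = rng.integers(1, 6, size=(50, 6)).astype(float)
# Hypothetical "intention to use" score, loosely driven by the first two indicators plus noise.
y = 0.6 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(0, 0.5, 50)

pls = PLSRegression(n_components=2)
pls.fit(X, y)
print("explained variance (R^2):", round(pls.score(X, y), 2))
print("indicator weights on component 1:", np.round(pls.x_weights_[:, 0], 2))
```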
298
Agila Business Intelligence System : Kritiska framgångsfaktorer / Agile Business Intelligence Systems : Critical Success Factors. Yoo, Sam; Naef, Petter. January 2014.
An Agile Business Intelligence System (ABIS) is a relatively new and complex type of information system, characterized by shortened development times, for example through more self-service in the analytical systems, in order to meet the need to analyze a business environment that is changing at an ever faster pace. Since ABIS is a new and relatively uncharted area, there is a need to explore it. IT investments are too often unprofitable, and there is an interest in showing what contributes to a successful implementation of an ABIS and in what way. The purpose of this case study was to identify highly ranked and shared factors among the critical success factors established by previous research on ABIS, to describe how these contributed to a successful implementation, and to examine differences and/or similarities in how these factors act from the customer and the supplier perspective. Earlier research on critical success factors for Business Intelligence systems was used as the basis for the study; in particular, the model developed in 2010 by Yeoh and Koronios served as the starting point for listing the potential factors to be considered. The study was conducted as a case study with the help of a company that delivers both consulting services and ABIS.

A Delphi panel was used to shortlist the success factors, which were then studied in more detail through semi-structured interviews on how they contributed to a successful implementation of ABIS, from both a client and a supplier perspective. The two factors that ranked high and were shared by all respondents were: a clear vision and well-established business case, and data quality and data integrity. The customer perspective was the guiding one, and the supplier's role was to understand the customer's perspective properly in order to implement ABIS successfully. The vision and business case were important for linking the implementation of ABIS to the client's objectives. Data quality and data integrity was the most significant factor in terms of the resources allocated to it within an ABIS implementation project.
299
Qualité des données dans le système d'information sanitaire de routine et facteurs associés au Bénin : place de l'engagement au travail / Data quality in the routine health information system and related factors: work engagement position. Glele Ahanhanzo, Yolaine. 22 October 2014.
Data quality is a key issue in health information systems, given their importance for decision making. This research has a twofold objective: (i) to measure data quality in the routine health information system (RHIS) in Benin, and (ii) to identify the factors associated with this data quality, determining the place of work engagement within these interactions. The ultimate aim is to provide operational tools and avenues for reflection, both for public health practice and for research, towards improving data quality.

In the first-line health centres of the Atlantique and Littoral departments in southern Benin, we carried out six studies to meet the research objectives. Studies 1 and 2, based respectively on the lot quality assurance sampling and capture-recapture methodologies, measure data quality. Studies 3 and 4, both cross-sectional, analyse the work engagement of the health workers responsible for the RHIS at the operational level. Studies 5 and 6, cross-sectional and prospective respectively, identify the factors associated with data quality.

These analyses show that: • / Doctorat en Sciences de la santé publique
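As background for study 2, here is a minimal sketch of the two-source capture-recapture logic used to estimate how complete routine records are; the counts are invented, and the Chapman-corrected Lincoln-Petersen estimator shown is one common choice, not necessarily the one used in the thesis.

```python
def chapman_estimate(n1: int, n2: int, m: int) -> float:
    """Chapman estimator of the total number of events from two overlapping sources.

    n1, n2: events found in source 1 (e.g. routine registers) and source 2 (e.g. a survey);
    m: events found in both sources.
    """
    return (n1 + 1) * (n2 + 1) / (m + 1) - 1

# Hypothetical counts for one health centre and one month.
n1, n2, m = 120, 90, 75
total = chapman_estimate(n1, n2, m)
print(f"estimated true total: {total:.0f}")                       # ≈ 144
print(f"completeness of the routine register: {n1 / total:.0%}")  # ≈ 83%
```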
300
Dopad procesní a datové integrace na efektivitu reportingu / Impact of the process and data integration on reporting efficiency. Sys, Bohuslav. January 2013.
Nowadays, when the difference between failure and success is the amount of information available, the exponential growth of information on the web leads to a rising need to track data quality. This trend is not only global; it affects individuals and companies in particular. Compared with the past, companies produce larger amounts of data that are at the same time more complex, all in order to get a better picture of the real world. This leads to the main problem: we not only need to gather the data, we also have to present them in such a way that they serve the purpose for which they were gathered. The purpose of this thesis is therefore to focus on the processes that follow data gathering: data quality and transformation processes. The first part of the thesis defines the basic concepts and issues, followed by the methods necessary for acquiring the requested data at the expected quality, including the required infrastructure. The second part defines a real-life example and uses the knowledge from the first part to design a usable solution and deploy it. In conclusion, the design is evaluated against the results obtained from its real-life use.
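As a small sketch of the kind of data-quality tracking discussed here, the snippet below computes completeness and validity rates for a reporting table; the table, column names and validation rule are assumptions for illustration.

```python
import pandas as pd

# Hypothetical extract of a reporting table.
orders = pd.DataFrame({
    "order_id": [1, 2, 3, 4],
    "customer_email": ["a@example.com", None, "not-an-email", "d@example.com"],
    "amount": [120.0, 80.5, None, 42.0],
})

completeness = orders.notna().mean()                      # share of non-missing values per column
valid_email = orders["customer_email"].str.contains(r"^[^@\s]+@[^@\s]+\.[^@\s]+$", na=False)

print("completeness per column:\n", completeness.round(2))
print("valid e-mail rate:", round(valid_email.mean(), 2))  # 0.5 -- one missing, one malformed
```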