  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
131

Modelos multiníveis aplicados ao estudo da mortalidade infantil no Rio Grande do Sul, Brasil, de 1994 a 2004 / Multilevel models applied to the study of infant mortality in Rio Grande do Sul, Brazil, from 1994 to 2004

Zanini, Roselaine Ruviaro January 2007 (has links)
CONTEXT: The infant mortality coefficient (IMC), which expresses the risk of a live-born child dying before completing one year of life, is considered one of the most efficient indicators of social, economic and ethical development, and monitoring it allows inferences to be made about a population's quality of life. In Rio Grande do Sul this coefficient has shown a decreasing trend and has remained below the national average. Nevertheless, a broader understanding of the determinants of infant mortality can contribute to the design of specific health policies and programs. Numerous risk factors are cited in the literature, and most of them are reported in studies that ignore the hierarchical structure of the data. Children living in a given region, however, may share similar characteristics when compared with children living in other regions, so classical analysis techniques, which assume independence between observations, can produce biased estimates. OBJECTIVES: The objective of this study was to use data from information systems to analyze the evolution and determinants of infant mortality and its components in Rio Grande do Sul from 1994 to 2004, and to identify the factors associated with neonatal mortality in 2003, considering individual and contextual characteristics. METHODS: To analyze the evolution, a longitudinal ecological study was carried out using repeated measures and multilevel linear regression, with microregions at level 2 and time at level 1. To identify the determinants of neonatal death, a retrospective cohort was built by linking births registered from 1 January 2003 to 3 December 2003 to the neonatal deaths arising from those births; the associated factors were estimated and compared using classical and multilevel logistic regression models. RESULTS: The infant mortality rate fell from 19.2 to 15.2 per thousand live births, and the main causes of infant death in the last five years were perinatal conditions (54.10%). Approximately 47% of the variation in mortality rates occurred at the microregion level; a 10% increase in Family Health Program coverage was associated with a reduction of 1‰ in infant mortality, and a 10% increase in the poverty rate was associated with a reduction of 2.1‰ in infant deaths. Positive associations were also found with the proportion of low birth weight and the rate of hospital beds per population, and negative associations with the proportion of cesarean deliveries and the rate of hospitals. In the classical model, the variables associated with neonatal death were low birth weight, 1- and 5-minute Apgar scores below 8, presence of congenital anomaly, cesarean delivery, prematurity and previous fetal loss. In the multilevel model, previous fetal loss did not remain significant, but the inclusion of the contextual variable indicated that 15% of the variation in neonatal mortality can be explained by the variability in poverty rates across microregions. CONCLUSIONS: This study showed the predominance of individual factors in infant and neonatal mortality, but it also demonstrated that multilevel analysis can identify contextual effects, enabling public actions targeted at vulnerable groups.
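
This record describes a two-level design with microregions at level 2 and yearly observations at level 1. As a rough illustration only, the sketch below fits a random-intercept multilevel linear regression on synthetic data with statsmodels; the column names (imr, psf_coverage, poverty_rate) and all numbers are assumptions, not the thesis data.

```python
# Minimal sketch (not the thesis code) of a two-level random-intercept model:
# yearly infant-mortality observations (level 1) nested in microregions (level 2).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
regions, years = 20, 11                     # e.g. 1994-2004
region_effect = rng.normal(0, 2, regions)   # level-2 heterogeneity
rows = []
for r in range(regions):
    for t in range(years):
        psf = rng.uniform(0, 100)           # hypothetical Family Health Program coverage (%)
        poverty = rng.uniform(10, 60)       # hypothetical poverty rate (%)
        imr = 20 - 0.3 * t - 0.01 * psf + region_effect[r] + rng.normal(0, 1)
        rows.append({"microregion": r, "year": t, "psf_coverage": psf,
                     "poverty_rate": poverty, "imr": imr})
df = pd.DataFrame(rows)

# Random-intercept multilevel linear regression.
fit = smf.mixedlm("imr ~ year + psf_coverage + poverty_rate",
                  data=df, groups=df["microregion"]).fit()
print(fit.summary())

# Intraclass correlation: share of variance attributable to microregions
# (the abstract reports roughly 47% at this level).
icc = fit.cov_re.iloc[0, 0] / (fit.cov_re.iloc[0, 0] + fit.scale)
print(f"ICC = {icc:.2f}")
```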
132

User privacy in collaborative filtering systems / Protection de la vie privée des utilisateurs de systèmes de filtrage collaboratif

Rault, Antoine 23 June 2016 (has links)
Recommendation systems try to infer their users' interests in order to suggest items relevant to them. These systems thus offer a valuable service to users in that they automatically filter out non-relevant information, which avoids the now-common problem of information overload. This is why recommendation systems are popular, if not pervasive, in domains such as the World Wide Web. However, an individual's interests are personal and private data, such as one's political or religious orientation. Recommendation systems therefore gather private data, and their widespread use calls for privacy-preserving mechanisms. In this thesis, we study the privacy of users' interests in the family of recommendation systems known as Collaborative Filtering (CF). Our first contribution is Hide & Share, a novel privacy-preserving similarity mechanism for the decentralized computation of K-Nearest-Neighbor (KNN) graphs. It is a lightweight mechanism designed for decentralized (a.k.a. peer-to-peer) user-based CF systems, which rely on KNN graphs to provide recommendations. Our second contribution also applies to user-based CF systems, though it is independent of their architecture. This contribution is two-fold: first, we evaluate the impact of an active Sybil attack on the privacy of a target user's profile of interests; second, we propose a counter-measure. This counter-measure is 2-step, a novel similarity metric that combines good precision, which in turn allows for good recommendations, with high resilience to the Sybil attack in question.
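
For readers unfamiliar with the setting, the sketch below shows the baseline computation that user-based CF systems rely on: a K-Nearest-Neighbor graph over user profiles followed by neighborhood-based recommendation. It is a generic toy illustration, not the Hide & Share or 2-step mechanisms proposed in the thesis.

```python
# Generic user-based collaborative filtering over a KNN graph (toy example).
from collections import Counter

import numpy as np

def knn_graph(profiles: np.ndarray, k: int) -> np.ndarray:
    """Return, for each user, the indices of its k most similar users (cosine)."""
    norms = np.linalg.norm(profiles, axis=1, keepdims=True)
    unit = profiles / np.maximum(norms, 1e-12)
    sim = unit @ unit.T
    np.fill_diagonal(sim, -np.inf)          # a user is not its own neighbor
    return np.argsort(-sim, axis=1)[:, :k]  # top-k neighbors per row

def recommend(user: int, profiles: np.ndarray, neighbors: np.ndarray, n: int = 5):
    """Suggest the n items most frequently liked by the user's neighbors."""
    seen = set(np.nonzero(profiles[user])[0])
    votes = Counter()
    for v in neighbors[user]:
        votes.update(i for i in np.nonzero(profiles[v])[0] if i not in seen)
    return [item for item, _ in votes.most_common(n)]

# Toy binary user-item matrix (rows: users, columns: items).
ratings = np.array([[1, 1, 0, 0], [1, 1, 1, 0], [0, 0, 1, 1]])
print(recommend(0, ratings, knn_graph(ratings, k=1)))  # -> [2]
```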
133

Impact of ICT reliability and situation awareness on power system blackouts

Panteli, Mathaios January 2013 (has links)
Recent major electrical disturbances highlight the extent to which modern societies depend on a reliable power infrastructure, and the impact of these undesirable events on the economy and society. Numerous blackout models have been developed over the last decades that effectively capture the cascade mechanism leading to a partial or complete blackout. These models usually consider only the state of the electrical part of the system and investigate how failures or limitations in this system affect the probability and severity of a blackout. However, an analysis of the major disturbances that occurred during the last decade, such as the North America blackout of 2003 and the UCTE system disturbance of 2006, shows that failures or inadequacies in the Information and Communication Technology (ICT) infrastructure, as well as human errors, had a significant impact on most of these blackouts. The aim of this thesis is to evaluate the contribution of these non-electrical events to the risk of power system blackouts. As the nature of these events is probabilistic rather than deterministic, different probabilistic techniques have been developed to evaluate their impact on power system reliability and operation. In particular, a method based on Monte Carlo simulation is proposed to assess the impact of an ICT failure on the operators' situation awareness and consequently on their performance during an emergency. This thesis also describes a generic framework using Markov modeling for quantifying the impact of insufficient situation awareness on the probability of cascading electrical outages leading to a blackout. A procedure based on Markov modeling and fault tree analysis is also proposed for assessing the impact of ICT failures and human errors on the reliable operation of fast automatic protection actions, which are used to provide protection against fast-spreading electrical incidents. The impact of undesirable interactions and the uncoordinated operation of these protection schemes on power system reliability is also assessed. The simulation results of these probabilistic methods show that deterioration in the state of the ICT infrastructure and human errors significantly affect the probability and severity of power system blackouts. The conclusion of the work undertaken in this research is that failures in all components of the power system, and not just the "heavy electrical" ones, must be considered when assessing the reliability of the electrical supply.
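
As a rough, hypothetical illustration of the kind of probabilistic question this thesis studies, the toy Monte Carlo simulation below estimates how the probability of a cascading blackout grows when operators are more likely to miss an intervention (for instance because an ICT failure has degraded their situation awareness). All probabilities and thresholds are invented for illustration and do not come from the thesis.

```python
# Toy Monte Carlo estimate of cascade risk as a function of operator miss rate.
import random

def simulate_blackout_prob(p_operator_miss: float,
                           p_line_trip: float = 0.3,
                           lines: int = 10,
                           blackout_threshold: int = 5,
                           trials: int = 100_000) -> float:
    blackouts = 0
    for _ in range(trials):
        failed = 1  # initiating outage
        for _ in range(lines - 1):
            # The cascade continues if the next overloaded line trips and the
            # operator fails to intervene in time.
            if random.random() < p_line_trip and random.random() < p_operator_miss:
                failed += 1
            else:
                break
        if failed >= blackout_threshold:
            blackouts += 1
    return blackouts / trials

for p_miss in (0.2, 0.5, 0.8):   # degraded situation awareness -> higher miss rate
    print(p_miss, simulate_blackout_prob(p_miss))
```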
134

Posouzení informačního systému firmy a návrh změn / Information System Assessment and Proposal for ICT Modification

Novák, Lukáš January 2012 (has links)
This diploma thesis deals with the assessment of information systems and enterprise informatics in a real business environment. Building on a systematically organized theoretical foundation, the analytical part is divided into two areas of examination: the first is devoted to an analysis of the company itself, and the second to its business informatics. The primary objective of the thesis is, on the basis of the strategic analysis of the company and its enterprise informatics, to propose measures for improving the current state of the information systems and information technology in the selected company.
135

Quality of Service and Predictability in DBMS

Sattler, Kai-Uwe, Lehner, Wolfgang 03 May 2022 (has links)
DBMS are a ubiquitous building block of the software stack in many complex applications. Middleware technologies, application servers and mapping approaches hide the core database technology, just as power, networking infrastructure and operating system services are hidden. Furthermore, many enterprise-critical applications demand a certain degree of quality of service (QoS) or guarantees, e.g. with respect to response time, transaction throughput and latency, but also completeness or, more generally, quality of results. Examples of such applications are billing systems in telecommunications, where each telephone call has to be monitored and registered in a database; e-commerce applications, where orders have to be accepted even under heavy load and customer waiting times should not exceed a few seconds; ERP systems processing a large number of transactions in parallel; or systems for processing streaming or sensor data in real time, e.g. in process automation or traffic control. As part of a complex, multi-level software stack, database systems have to share or contribute to these QoS requirements, which means that guarantees have to be given by the DBMS too, and that the processing of database requests must be predictable. Today's mainstream DBMS typically follow a best-effort approach: requests are processed as fast as possible without any guarantees, and the optimization goal of query optimizers and tuning approaches is to minimize resource consumption rather than to fulfill given service-level agreements. However, motivated by the situation described above, there is an emerging need for database services that provide guarantees or simply behave in a predictable manner and, at the same time, interact with other components of the software stack in order to fulfill the overall requirements. This need is also driven by the paradigm of service-oriented architectures widely discussed in industry. Currently, it is addressed only by very specialized solutions. Nevertheless, database researchers have developed several techniques contributing to the goal of QoS-aware database systems. The purpose of the tutorial is to introduce database researchers and practitioners to the scope, the challenges and the available techniques related to the problem of predictability and QoS agreements in DBMS.
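
As a purely illustrative sketch (not part of the tutorial), the snippet below contrasts best-effort execution with an SLA-aware admission decision: a query is admitted only if its predicted response time, given the work already queued, still satisfies the agreed latency bound. The class, field names and cost figures are assumptions.

```python
# Hypothetical SLA-aware admission check for incoming database requests.
from dataclasses import dataclass

@dataclass
class Query:
    name: str
    predicted_cost_s: float   # estimated execution time
    sla_latency_s: float      # agreed maximum response time

def admit(queue: list[Query], candidate: Query) -> bool:
    """Admit only if waiting time plus execution time stays within the SLA."""
    waiting = sum(q.predicted_cost_s for q in queue)
    return waiting + candidate.predicted_cost_s <= candidate.sla_latency_s

queue: list[Query] = [Query("billing-insert", 0.05, 1.0)]
order = Query("checkout-order", 0.3, 2.0)
report = Query("monthly-report", 30.0, 5.0)
print(admit(queue, order))   # True: fits within its latency bound
print(admit(queue, report))  # False: would violate the agreed bound
```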
136

Model vrednovanja mogućnosti uvodjenja kooperativnih otvorenih interorganizacionih informacionih sistema / Evaluation model for implementation possibilities of cooperative open interorganizational information systems

Jošanov, Borislav 26 January 2001 (has links)
This doctoral dissertation describes interorganizational information systems and the conditions for their introduction. An evaluation model for information systems is defined from the perspective of their integration into cooperative, open interorganizational information systems. The model was tested on a sample of 50 selected organizations, and conclusions of the conducted research are formulated.
137

The Dresden Database Systems Group

Lehner, Wolfgang 13 June 2023 (has links)
The Dresden Database Systems Group focuses on the advancement of data management techniques from a system-level as well as an information management perspective. With more than 15 PhD students, the research group is involved in a variety of larger research projects, ranging from activities to exploit modern hardware for scalable storage engines to advancing statistical methods for large-scale time series management. The group is visible at an international level and is actively involved in cooperation with national and regional research partners.
138

Reference Framework for Distributed Repositories / Towards an Open Repository Environment / Referenz-Architektur für eine dezentrale Repositorien-Infrastruktur

Aschenbrenner, Andreas 25 November 2009 (has links)
No description available.
139

Implementation of a fuzzy rule-based decision support system for the immunohistochemical diagnosis of small B-cell lymphomas

Arthur, Gerald L., Gong, Yang January 2009 (has links)
Thesis (M.S.)--University of Missouri-Columbia, 2009. / The entire dissertation/thesis text is included in the research.pdf file; the official abstract appears in the short.pdf file (which also appears in research.pdf); a non-technical general description, or public abstract, appears in the public.pdf file. Thesis advisor: Yang Gong. "May 2009." Includes bibliographical references.
140

Posouzení informačního systému firmy a návrh změn / Information System Assessment and Proposal for ICT Modification

Sommer, Marek January 2014 (has links)
This diploma thesis analyses the use of information systems in the company Měšťanský pivovar Havlíčkův Brod a.s. The theoretical part introduces the topics of corporate informatics and information systems and describes the methods and procedures of analysis used in the thesis. The analyses performed in the following chapter reveal drawbacks that are crucial for designing subsequent changes to the system. These proposed changes may serve the company's management as a solid basis for improving the current information system.
