21

Um modelo para avaliação de sistemas de informação do SUS de abrangência nacional / A model for evaluating information systems of SUS with national scope

Morais, Rinaldo Macedo de 27 June 2014 (has links)
Brazil's Unified Health System (SUS) management processes are supported by a set of nationwide information systems with functionality for the epidemiological, outpatient, hospital, and administrative areas. Studies in the literature describe weaknesses in these applications, such as lack of integration between systems and databases, fragmentation of information, low coverage, uncertainty about the reliability of the data held, and poor support for managers in planning and decision making. In this context, a gap was identified in the research and discussion of the quality and evaluation of these applications. This work proposes a model for evaluating quality characteristics of health information systems that can be applied to the systems of SUS. The process of researching, analyzing, and classifying the evaluation indicators for the model is described. The indicators were obtained through searches of bibliographic databases and were classified according to the quality attributes of the ISO/IEC 25010 standard, adopted as the quality model in this study. As a result, 57 indicators were identified and mapped, covering all quality characteristics of the model. Evaluation of the indicators was operationalized through questionnaires and software inspection procedures oriented to the various stakeholders involved: health professionals, health managers, information technology professionals, and users of the health system. The applicability of the proposed model was assessed for the Mortality Information System (SIM). The feasibility of the inspection procedures was analyzed by the application's maintainers, and SIM was evaluated in a process involving health professionals from 18 municipalities in the state of São Paulo and 10 health managers working at the local, regional, and state levels. This work can serve as an additional reference for studies of quality evaluation processes for health software, and can assist in standardizing plans for evaluating and monitoring the quality of public health systems and data in Brazil and in software improvement projects.
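The thesis presents questionnaires and inspection procedures, not code; as a hedged sketch of how questionnaire-derived indicator scores could be rolled up into ISO/IEC 25010 characteristics, the Python below uses invented indicator names, mappings, and scores. Only the characteristic labels come from the standard; nothing else is from the thesis.

```python
# Hedged sketch: aggregating stakeholder questionnaire scores into
# ISO/IEC 25010 quality characteristics. The characteristic names come
# from the standard; the indicators, mappings, and scores are invented
# for illustration and are NOT the thesis's actual 57 indicators.
from collections import defaultdict

# Each indicator maps to one ISO/IEC 25010 characteristic and carries
# a score in [0, 1] averaged from stakeholder questionnaires.
indicators = [
    ("data completeness",       "functional suitability", 0.82),
    ("response time",           "performance efficiency", 0.61),
    ("DATASUS data interchange","compatibility",          0.47),
    ("form error messages",     "usability",              0.73),
    ("audit trail coverage",    "security",               0.55),
]

def characteristic_scores(indicators):
    """Average the indicator scores for each quality characteristic."""
    groups = defaultdict(list)
    for _, characteristic, score in indicators:
        groups[characteristic].append(score)
    return {c: sum(s) / len(s) for c, s in groups.items()}

if __name__ == "__main__":
    for characteristic, score in characteristic_scores(indicators).items():
        print(f"{characteristic:25s} {score:.2f}")
```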
22

Joint TCP congestion control and wireless-link scheduling for mobile Internet applications

Unknown Date (has links)
The Transmission Control Protocol (TCP) is one of the core protocols of the Internet protocol suite, used by major Internet applications such as the World Wide Web, email, remote administration, and file transfer. TCP implements scalable and distributed end-to-end congestion control algorithms to share network resources among competing users. TCP was originally designed primarily for wired networks, and it has performed remarkably well as the Internet scaled up by six orders of magnitude in the past decade. However, many studies have shown that unmodified standard TCP performs poorly in networks with large bandwidth-delay products and/or lossy wireless links. In this thesis, we analyze the problems TCP exhibits in the wireless communication environment, and develop joint TCP congestion control and wireless-link scheduling schemes for mobile applications. ... Unlike existing solutions, the proposed schemes can be implemented asynchronously, without message passing among network nodes; they are therefore readily deployable on current infrastructure. Moreover, global convergence/stability of the proposed schemes to the optimal equilibrium is established using the Lyapunov method in the network fluid model. Simulation results are provided to evaluate the proposed schemes in practical networks. / by Zhaoquan Li. / Thesis (Ph.D.)--Florida Atlantic University, 2013. / Includes bibliography. / Mode of access: World Wide Web. / System requirements: Adobe Reader.
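The abstract does not spell out the underlying optimization model; joint congestion-control and scheduling designs with Lyapunov-based stability arguments are typically derived from a network utility maximization (NUM) problem of the following form, shown here only as background and not as the thesis's exact formulation:

```latex
% Background sketch (an assumption, not the thesis's exact model): the
% network utility maximization problem from which joint TCP congestion
% control and wireless-link scheduling schemes are typically derived.
\[
\begin{aligned}
\max_{x_s \ge 0} \quad & \sum_{s \in S} U_s(x_s)
  && \text{total utility of source rates } x_s \\
\text{s.t.} \quad & \sum_{s:\,\ell \in r(s)} x_s \le c_\ell(\sigma)
  \quad \forall \ell \in L
  && \text{link loads within capacities under schedule } \sigma \\
& \sigma \in \Sigma
  && \text{schedule drawn from the feasible set } \Sigma
\end{aligned}
\]
```

With logarithmic utilities, the optimal rates are proportionally fair; decomposing the problem is what typically splits it into a TCP-like rate update at the sources and a scheduling rule at the wireless links.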
23

Transportation policies and quality of life: an analysis of the socioeconomic effects of implementing Ramp Metering, High Occupancy Vehicle (HOV) Lanes and High Occupancy Toll (HOT) Lanes within an urban transportation network

Jefferson, Katherine D. January 2008 (has links)
Thesis (Ph.D.)--George Mason University, 2008. / Vita: p. 199. Thesis director: Roger Stough. Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Public Policy. Title from PDF t.p. (viewed Mar. 16, 2009). Includes bibliographical references (p. 182-198). Also issued in print.
25

Supporting data quality assessment in eScience: a provenance-based approach / Apoio à avaliação da qualidade de dados em eScience: uma abordagem baseada em proveniência

Gonzales Malaverri, Joana Esther, 1981- 05 June 2013 (has links)
Advisor: Claudia Maria Bauzer Medeiros / Doctoral thesis - Universidade Estadual de Campinas, Instituto de Computação / Abstract: Data quality is a recurrent concern in all scientific domains. Experiments analyze and manipulate several kinds of datasets, and generate data to be (re)used by other experiments. The basis for obtaining good scientific results is highly associated with the degree of quality of such datasets. However, the data involved in the experiments are manipulated by a wide range of users, with distinct research interests, using their own vocabularies, work methodologies, models, and sampling needs. Given this scenario, a challenge in computer science is to come up with solutions that help scientists assess the quality of their data. Different efforts have been proposed addressing the estimation of quality. Some of these efforts point out that data provenance attributes should be used to evaluate quality. However, most of these initiatives address the evaluation of a specific quality attribute, frequently focusing on atomic data values, thereby reducing their applicability. Taking this scenario into account, there is a need for new solutions that scientists can adopt to assess how good their data are. In this PhD research, we present an approach to this problem based on the notion of data provenance. Unlike other similar approaches, our proposal combines quality attributes specified within a context by specialists with metadata on the provenance of a dataset. The main contributions of this work are: (i) the specification of a framework that takes advantage of data provenance to derive quality information; (ii) a methodology associated with this framework that outlines the procedures to support the assessment of quality; (iii) the proposal of two different provenance models to capture provenance information, for fixed and extensible scenarios; and (iv) validation of items (i) through (iii), discussed via case studies in agriculture and biodiversity. / Doctorate in Computer Science
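The abstract describes the framework only at a high level; as a hedged sketch of the general idea of combining expert-weighted quality attributes with provenance metadata, the Python below invents all field names, weights, and scoring rules (none of them come from the thesis):

```python
# Hedged sketch of provenance-based quality scoring: combine quality
# attributes weighted by domain specialists with provenance metadata of a
# dataset. Attribute names, weights, provenance fields, and scoring rules
# are illustrative assumptions, not the framework's actual model.
from datetime import datetime, timezone

provenance = {
    "agent_reputation": 0.9,        # who produced the data (0..1)
    "instrument_calibrated": True,  # how the data were produced
    "created": datetime(2012, 3, 1, tzinfo=timezone.utc),
}

# Context-specific attributes and weights chosen by specialists.
weights = {"reliability": 0.5, "timeliness": 0.3, "accuracy": 0.2}

def quality_score(prov, now=None):
    """Weighted combination of quality dimensions derived from provenance."""
    now = now or datetime.now(timezone.utc)
    age_years = (now - prov["created"]).days / 365.25
    dims = {
        "reliability": prov["agent_reputation"],
        "timeliness": max(0.0, 1.0 - age_years / 10),  # decays over 10 years
        "accuracy": 1.0 if prov["instrument_calibrated"] else 0.4,
    }
    return sum(weights[d] * v for d, v in dims.items())

print(f"quality score: {quality_score(provenance):.2f}")
```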
26

Srovnání (a historická podmíněnost) výstupů ze strojových překladačů / Comparing Machine Translation Output (and the Way it Changes over Time)

Kyselová, Soňa January 2018 (has links)
This diploma thesis focuses on machine translation (MT), which has been studied for a relatively long time in linguistics (and later also in translation studies) and which in recent years has also come to the attention of the broader public. The thesis aims to explore the quality of machine translation output and the way it changes over time. The theoretical part first deals with machine translation in general (basic definitions, a brief history, and approaches to machine translation), then describes online machine translation systems and evaluation methods. Finally, this part provides a methodological model for the empirical part. Using a set of texts translated with MT, the empirical part checks how online machine translation systems deal with different text-types and whether the quality of MT output improves over time. To this end, an analysis of text-type, semantics, lexicology, stylistics, and pragmatics is carried out, together with a rating of the general usability of each translation. The final part of the thesis compares the results of the analysis, draws conclusions, and states the general tendencies that emerged from the empirical part.
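The thesis evaluates MT output through manual linguistic analysis; automatic metrics such as BLEU are the usual complement when comparing systems over time. A minimal sketch with NLTK, using invented example sentences:

```python
# Minimal BLEU example with NLTK, shown as the common automatic complement
# to the manual MT evaluation the thesis performs. Sentences are invented.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = [["the", "translation", "quality", "improves", "over", "time"]]
hypothesis_2016 = ["translation", "quality", "improve", "over", "time"]
hypothesis_2018 = ["the", "translation", "quality", "improves", "over", "time"]

smooth = SmoothingFunction().method1  # avoids zero scores on short sentences
for year, hyp in [("2016", hypothesis_2016), ("2018", hypothesis_2018)]:
    score = sentence_bleu(reference, hyp, smoothing_function=smooth)
    print(year, round(score, 3))
```

Tracking such scores for the same source text translated by the same online system in different years would mirror, automatically, the over-time comparison the thesis performs by hand.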
27

A Grounded Theory of Information Quality in Web Archives

Reyes, Brenda 08 1900 (has links)
Web archiving is the practice of preserving websites as a historical record. It is a technologically challenging endeavor whose goal is the creation of a high-quality archived website that looks and behaves exactly like the original website. Despite the importance of the notion of quality, comprehensive definitions of Information Quality (IQ) in a web archive have yet to be developed. Currently, the field has no single, comprehensive theory that describes what makes an archived website high- or low-quality. Furthermore, most of the research that has been conducted on web archives has been system-centered rather than user-centered, leading to a dearth of information on how humans perceive web archives. This dissertation seeks to remedy this problem by presenting a user-centered grounded theory of IQ for web archives. It answers two research questions: 1) What is the definition of information quality (IQ) for web archives? and 2) How can IQ in a web archive be measured? The theory presented is grounded in data obtained from users of the Internet Archive's Archive-It system, the largest web-archiving subscription service in the United States. Also presented are mathematical definitions for each dimension of IQ, which can then be applied to measure the quality of a web archive.
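The dissertation's actual mathematical definitions are not reproduced in the abstract; purely as an illustrative assumption, one plausible way to quantify a single IQ dimension (completeness of resource capture) might look like this:

```python
# Hedged sketch: one way an IQ dimension for an archived page could be
# quantified, completeness as the fraction of required resources that were
# successfully captured. This formula is an illustrative assumption, not
# the dissertation's actual definition.
def completeness(required_urls, archived_urls):
    """Fraction of a page's required resources present in the archive."""
    required = set(required_urls)
    if not required:
        return 1.0
    return len(required & set(archived_urls)) / len(required)

page_resources = ["a.css", "b.js", "logo.png", "hero.jpg"]
captured = ["a.css", "logo.png", "hero.jpg"]
print(f"completeness: {completeness(page_resources, captured):.2f}")  # 0.75
```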
28

Vyhodnocení provozu tlakové kanalizace obce Štěpánovice / Assessment of operation of the pressure sewerage system of the municipality Štěpánovice

Laksar, Luboš January 2012 (has links)
This thesis describes pressure sewerage systems, focusing primarily on an assessment of the operation of the pressure sewerage system of the municipality of Štěpánovice, where the network pressure, wastewater flow, wastewater quality, and system failure rate were evaluated.
29

Using unsupervised machine learning for fault identification in virtual machines

Schneider, C. January 2015 (has links)
Self-healing systems promise operating cost reductions in large-scale computing environments through the automated detection of, and recovery from, faults. However, at present there appears to be little known empirical evidence comparing the different approaches, or demonstrations that such implementations reduce costs. This thesis compares previous and current self-healing approaches before demonstrating a new, unsupervised approach that combines artificial neural networks with performance tests to perform fault identification in an automated fashion, i.e. the correct and accurate determination of which computer features are associated with a given performance test failure. Several key contributions are made in the course of this research, including an analysis of the different types of self-healing approaches based on their contextual use, a baseline for future comparisons between self-healing frameworks that use artificial neural networks, and a successful, automated fault identification in cloud infrastructure, more specifically in virtual machines. This approach uses three established machine learning techniques: Naïve Bayes, Baum-Welch, and Contrastive Divergence Learning. The latter demonstrates minimisation of human interaction beyond previous implementations by producing a list, in decreasing order of likelihood, of potential root causes (i.e. fault hypotheses), which brings the state of the art one step closer toward fully self-healing systems. This thesis also examines the impact that different types of faults have on their identification. This helps in understanding the validity of the data being presented and how the field is progressing, whilst examining the differences in identification between emulated thread crashes and errant user changes, a contribution believed to be unique to this research. Lastly, future research avenues and conclusions in automated fault identification are described, along with lessons learned throughout this endeavour. This includes the progression of artificial neural networks, how learning algorithms are being developed and understood, and possibilities for automatically generating feature locality data.
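The thesis's pipeline combines Naïve Bayes, Baum-Welch, and Contrastive Divergence Learning; the sketch below only illustrates its key output format, a list of fault hypotheses in decreasing order of likelihood, using a plain Gaussian Naïve Bayes over synthetic VM metrics (all metric names and fault classes are invented):

```python
# Hedged sketch: ranking fault hypotheses for a VM by likelihood with a
# plain Gaussian Naive Bayes over synthetic metrics. This only illustrates
# the ranked-hypothesis output format, not the thesis's actual pipeline.
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
faults = ["healthy", "thread_crash", "errant_user_change"]
# Synthetic training data: [cpu_util, mem_util, io_wait] per fault class.
means = {"healthy": [0.3, 0.4, 0.05],
         "thread_crash": [0.05, 0.4, 0.02],
         "errant_user_change": [0.7, 0.8, 0.3]}
X = np.vstack([rng.normal(means[f], 0.05, size=(50, 3)) for f in faults])
y = np.repeat(faults, 50)

model = GaussianNB().fit(X, y)

observation = [[0.08, 0.42, 0.03]]  # metrics when a performance test failed
ranked = sorted(zip(model.classes_, model.predict_proba(observation)[0]),
                key=lambda pair: -pair[1])
for hypothesis, prob in ranked:  # decreasing order of likelihood
    print(f"{hypothesis:20s} {prob:.3f}")
```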
30

Proposta de simulação computacional para avaliação de sistemas de imagem radiológica pelo método das funções de transferência. / A computer simulation proposal for radiographic systems evaluation by the transfer functions method.

Schiabel, Homero 12 June 1992 (has links)
Based on the conventional evaluation of radiographic imaging systems by the transfer function method, this thesis shows that slit images must be acquired at several field orientations for the analysis to be meaningful for non-isotropic systems. This follows from the non-linear variation among the MTFs obtained in directions between 0° and 90° relative to the X-ray tube axis. This finding, however, poses a serious practical problem, since it increases the complexity of a method that, although considered the most accurate by most researchers, has been used only by very well-equipped laboratories. To address this, the thesis proposes a new computer simulation method that calculates the LSF and the MTF due to the focal spot, dispensing with the complex experimental apparatus conventionally required and thereby making evaluation by transfer functions accessible to any radiology unit. Finally, the work also investigates the physical meaning of the variations among the MTFs and presents a formal study of the field characteristic and lateral magnification concepts.
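The relation the simulation builds on is standard: the MTF is the normalized magnitude of the Fourier transform of the line spread function. A minimal numpy sketch, with a synthetic Gaussian LSF standing in for a simulated focal-spot LSF:

```python
# Standard LSF -> MTF relation: the MTF is the normalized magnitude of the
# Fourier transform of the line spread function. The Gaussian LSF below is
# synthetic, standing in for a simulated focal-spot LSF.
import numpy as np

dx = 0.01                      # sampling interval across the slit image (mm)
x = np.arange(-5, 5, dx)       # position (mm)
sigma = 0.15                   # assumed focal-spot blur (mm)
lsf = np.exp(-x**2 / (2 * sigma**2))

mtf = np.abs(np.fft.rfft(lsf))
mtf /= mtf[0]                               # normalize so that MTF(0) = 1
freqs = np.fft.rfftfreq(x.size, d=dx)       # spatial frequency (cycles/mm)

# Report the frequency where the MTF drops to 10%, a common resolution limit.
f10 = freqs[np.argmax(mtf < 0.1)]
print(f"MTF falls below 0.10 at ~{f10:.1f} cycles/mm")
```

Repeating this for LSFs measured (or simulated) at several field orientations is exactly what exposes the anisotropy the thesis describes.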
