71

[en] THE E SCORE MODEL FOR THE PREDICTION OF BANKRUPTCY OF INTERNET COMPANIES / [pt] O MODELO E SCORE DE PREVISÃO DE FALÊNCIAS PARA EMPRESAS DE INTERNET

ORLANDO MANSUR T S A PEREIRA 10 March 2003 (has links)
[pt] O objetivo desta pesquisa é propor um modelo estatístico que possa estimar a probabilidade de ocorrência de falências ou concordatas em empresas de Internet. Após as recentes e drásticas perdas de capital em investimentos nas empresas desta nova indústria, instituições financeiras, pessoas físicas e todos os investidores desejam conhecer a real situação financeira das empresas denominadas pontocom. Esta pesquisa selecionou, por amostragem de conveniência, empresas norte-americanas que pediram falência ou concordata nas Cortes Norte-Americanas de Falências entre 1999 e 2001, e empresas que não o fizeram, que possuem ações listadas em bolsa e operam no e-commerce, isto é, que vendem seus produtos ou serviços através da Internet. Utilizou, ainda, as demonstrações financeiras destas empresas para identificar, por intermédio de um teste T de amostras independentes, as variáveis mais significantes na discriminação dos dois grupos de empresas observados na amostra: o de empresas falidas e o de não-falidas. Analisadas as distribuições estatísticas das variáveis, o modelo de regressão logística demonstra ser o mais apropriado à pesquisa, por não possuir a premissa de normalidade multivariada. A conclusão final da pesquisa é a proposição de um modelo estatístico que indica a probabilidade de uma empresa de Internet falir ou não, com índice R2 de Nagelkerke de 0,887, percentual máximo de acerto na classificação de 97,4 por cento, e que utiliza variáveis ainda não utilizadas em pesquisas anteriores similares. / [en] The objective of this research is to propose a statistical model that can estimate the probability of bankruptcy for Internet companies. After the recent and drastic losses of investment capital in companies in this new sector of the economy, financial institutions, individuals and all investors wish to know the real financial position of the so-called dotcom companies. This research selected, by convenience sampling, American companies that filed a petition under the United States Bankruptcy Code between 1999 and 2001, and companies that did not, all of which list their shares on stock markets and operate in e-commerce, i.e. sell their products or services through the Internet. The financial statements of these companies were used to identify, by means of an independent-samples t-test, the most significant variables for discriminating the two groups observed in the sample: bankrupt and non-bankrupt companies. After analyzing the statistical distributions of the variables, a logistic regression model proved to be the most appropriate for the research, since it does not rely on the multivariate normality assumption. The research concludes by proposing a statistical model that indicates the probability of an Internet company going bankrupt, with a Nagelkerke R squared of 0.887 and an overall correct classification rate of 97.4 percent, using several variables not previously included in similar financial distress prediction models.
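To make the kind of model described here concrete, the sketch below fits a logistic regression on simulated financial ratios and computes Nagelkerke's pseudo-R² and the overall classification rate. The variables and data are hypothetical placeholders, not the thesis's actual sample or ratios.

```python
# Minimal sketch: logistic bankruptcy model evaluated with Nagelkerke's R^2.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200
# Hypothetical financial ratios (e.g. liquidity, leverage, cash-burn rate).
X = rng.normal(size=(n, 3))
# Simulated bankrupt (1) / non-bankrupt (0) labels driven by the first ratio.
y = (X[:, 0] + 0.5 * rng.normal(size=n) > 0).astype(int)

# Note: sklearn applies mild L2 regularization by default; fine for a sketch.
model = LogisticRegression().fit(X, y)
p = model.predict_proba(X)[:, 1]

def log_likelihood(y, p):
    return np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

ll_full = log_likelihood(y, p)
ll_null = log_likelihood(y, np.full(n, y.mean()))      # intercept-only model

# Cox & Snell R^2, then Nagelkerke's rescaling onto a 0-1 range.
r2_cs = 1 - np.exp((2.0 / n) * (ll_null - ll_full))
r2_nagelkerke = r2_cs / (1 - np.exp((2.0 / n) * ll_null))

accuracy = np.mean((p >= 0.5) == y)                    # overall classification rate
print(f"Nagelkerke R^2 = {r2_nagelkerke:.3f}, accuracy = {accuracy:.1%}")
```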
72

Estudo experimental de lubrificação e resfriamento no torneamento duro em componentes de grande porte / Experimental study of cooling lubrication methods in hard turning of large components

Guilherme Carlos Alves 12 June 2017 (has links)
Este trabalho teve como objetivo o estudo experimental de lubrificação e resfriamento no torneamento duro de uma superfície funcional em componentes de grande porte, com aplicação no chão de fábrica. Foram comparadas as técnicas de mínima quantidade de lubrificante e resfriamento abundante, utilizando o método de resposta de superfície. O planejamento tridimensional utilizado foi de dois níveis, adicionado de pontos axiais, ponto central e de réplica. Foram variados a velocidade periférica da peça, o avanço da ferramenta de usinagem e a profundidade de usinagem. Após a execução do experimento, a integridade da superfície foi analisada pela medição da tensão residual, da espessura de possíveis modificações microestruturais e pela medição dos parâmetros estatísticos Sa, Sq e Sz. As medidas foram então ajustadas em modelos matemáticos, otimizadas e por fim comparadas. A análise dos resultados sinalizou que os modelos ajustados para ambos os métodos foram capazes de explicar satisfatoriamente o comportamento das variáveis de resposta. Ainda, a partir da função desirability, foi possível estimar valores ótimos com qualidade equivalente entre os métodos. Tanto para o emprego de mínimas quantidades quanto para o resfriamento abundante foram registradas tensões circunferenciais altamente compressivas. Nas condições ótimas, quando empregadas mínimas quantidades de lubrificante, a superfície apresentou valores de tensões residuais 37% maiores em comparação ao obtido quando empregado o resfriamento abundante. Ambos os métodos produziram tanto superfícies livres de modificações microestruturais significativas como também superfícies com modificações microestruturais significativas. Porém, quando detectadas, as modificações se mostraram muito reduzidas, com espessura de até 2,35 µm. Nas condições ótimas, quando empregadas mínimas quantidades de lubrificante, a espessura da modificação foi 74% menor em comparação à obtida quando empregado o resfriamento abundante. Os parâmetros estatísticos sugeriram alguma vantagem da aplicação de mínimas quantidades de lubrificante. Nas condições ótimas, a aplicação MQL apresentou melhor rugosidade Sa, em 47%, e Sz, em 11%. Porém, o desvio-padrão Sq da superfície apresentou valor 12% maior que o do resfriamento abundante. / This work aimed to conduct an experimental study of cooling and lubrication in the hard turning of a functional surface on large components, carried out in a shop-floor setting. The performance of minimum quantity lubrication (MQL) and abundant cooling was compared through response surface methodology. A three-dimensional, two-level experiment was designed, augmented with axial points, a center point and one center replicate. The input variables were the peripheral velocity of the workpiece, the feed of the cutting tool and the depth of cut. After machining, surface integrity was analyzed through the circumferential residual stress, the thickness of possible microstructurally modified layers and statistical parameters such as Sa, Sq and Sz. The measurements were then fitted to mathematical models, optimized and compared. The analysis of the results indicated that the fitted models for both methods were able to explain the behavior of the response variables satisfactorily. The desirability function also made it possible to predict optimal values with equivalent quality between the methods. Both minimal quantities and abundant cooling produced highly compressive circumferential residual stresses. Under the optimal conditions, MQL presented residual stresses 37% lower than abundant cooling. Both methods produced surfaces free of significant altered layers as well as surfaces containing a significant altered layer. However, when detected, the altered layer was very thin, with thickness up to 2.35 µm. Under the optimal conditions, the MQL altered layer was 74% thinner than that of abundant cooling. The statistical parameters indicated some advantage for MQL. Under the optimal conditions, minimal quantities presented Sa roughness 47% better and Sz 11% better, although the standard deviation Sq of the surface was 12% higher than with abundant cooling.
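As a rough illustration of the response-surface workflow described above, the sketch below fits a second-order polynomial to a simulated roughness response in three coded cutting factors and scores candidate settings with a smaller-is-better desirability function. The runs, bounds and coefficients are hypothetical, not the thesis's measurements.

```python
# Sketch: quadratic response surface + desirability optimization over a grid.
import numpy as np

rng = np.random.default_rng(1)
# Coded factor levels of a small hypothetical design: [speed, feed, depth of cut].
X = rng.uniform(-1, 1, size=(20, 3))
sa = 0.4 + 0.2 * X[:, 1] + 0.1 * X[:, 1] ** 2 + 0.05 * rng.normal(size=20)

def quadratic_terms(X):
    """Full second-order model: intercept, linear, squared and interaction terms."""
    x1, x2, x3 = X.T
    return np.column_stack([np.ones(len(X)), x1, x2, x3,
                            x1**2, x2**2, x3**2, x1*x2, x1*x3, x2*x3])

beta, *_ = np.linalg.lstsq(quadratic_terms(X), sa, rcond=None)

def desirability(y, low=0.2, high=0.8):
    """Smaller-is-better desirability: 1 at/below `low`, 0 at/above `high`."""
    return np.clip((high - y) / (high - low), 0.0, 1.0)

# Grid search over the coded region for the most desirable setting.
grid = np.array(np.meshgrid(*[np.linspace(-1, 1, 21)] * 3)).reshape(3, -1).T
pred = quadratic_terms(grid) @ beta
best = grid[np.argmax(desirability(pred))]
print("most desirable coded setting [speed, feed, depth]:", best)
```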
73

Statistical Modeling of High-Dimensional Nonlinear Systems: A Projection Pursuit Solution

Swinson, Michael D. 28 November 2005 (has links)
Despite recent advances in statistics, artificial neural network theory, and machine learning, nonlinear function estimation in high-dimensional space remains a nontrivial problem. As the response surface becomes more complicated and the dimensions of the input data increase, the dreaded "curse of dimensionality" takes hold, rendering the best of function approximation methods ineffective. This thesis takes a novel approach to solving the high-dimensional function estimation problem. In this work, we propose and develop two distinct parametric projection pursuit learning networks with wide-ranging applicability. Included in this work is a discussion of the choice of basis functions used as well as a description of the optimization schemes utilized to find the parameters that enable each network to best approximate a response surface. The essence of these new modeling methodologies is to approximate functions via the superposition of a series of piecewise one-dimensional models that are fit to specific directions, called projection directions. The key to the effectiveness of each model lies in its ability to find efficient projections for reducing the dimensionality of the input space to best fit an underlying response surface. Moreover, each method is capable of effectively selecting appropriate projections from the input data in the presence of relatively high levels of noise. This is accomplished by rigorously examining the theoretical conditions for approximating each solution space and taking full advantage of the principles of optimization to construct a pair of algorithms, each capable of effectively modeling high-dimensional nonlinear response surfaces to a higher degree of accuracy than previously possible.
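A bare-bones illustration of the projection pursuit idea this abstract describes: repeatedly find a direction w, fit a one-dimensional ridge function to the projection X·w, and subtract its contribution from the residual. The cubic ridge functions and toy data below are simplifying assumptions, not the parametric networks developed in the thesis.

```python
# Sketch: stagewise projection pursuit with polynomial ridge functions.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 10))                      # high-dimensional inputs
y = np.sin(X[:, 0] + 0.5 * X[:, 1]) + 0.1 * rng.normal(size=300)

def fit_ridge(w, X, r, degree=3):
    """Fit a polynomial ridge function g(X @ w) to the residual r."""
    w = w / np.linalg.norm(w)
    z = X @ w
    coeffs = np.polyfit(z, r, degree)
    return w, coeffs, np.polyval(coeffs, z)

def ridge_loss(w, X, r):
    _, _, fit = fit_ridge(w, X, r)
    return np.mean((r - fit) ** 2)

residual, terms = y.copy(), []
for _ in range(3):                                  # three projection terms
    w0 = rng.normal(size=X.shape[1])
    res = minimize(ridge_loss, w0, args=(X, residual), method="Nelder-Mead",
                   options={"maxiter": 2000})
    w, coeffs, fit = fit_ridge(res.x, X, residual)
    terms.append((w, coeffs))
    residual = residual - fit

print("residual variance after 3 terms:", residual.var() / y.var())
```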
74

Statistical Models for Environmental and Health Sciences

Xu, Yong 01 January 2011 (has links)
Statistical analysis and modeling are useful for understanding the behavior of different phenomena. In this study we focus on two areas of application: global warming and cancer research. Global warming is one of the major environmental challenges people face nowadays, and cancer is one of the major health problems that people need to solve. For global warming, we are interested in two major attributable variables: carbon dioxide (CO2) and atmospheric temperature. We model carbon dioxide in the atmosphere with a system of differential equations, developing one differential equation for each of six attributable variables that constitute CO2 in the atmosphere together with a differential system for CO2 in the atmosphere. We use real historical data on the subject phenomenon to develop the analytical form of the equations, and we evaluate the quality of the developed model through a retrofitting process. Having such an analytical system, we can obtain good estimates of the rate of change of CO2 in the atmosphere, individually and cumulatively, as a function of time for near and far target times. Such information is quite useful in strategic planning of the subject matter. We also develop a statistical model that takes into consideration all the attributable variables that have been identified and their corresponding response, the amount of CO2 in the atmosphere in the continental United States. The development of this statistical model, which includes interactions and higher-order terms in addition to individual contributions to CO2 in the atmosphere, is part of the present study. The proposed model has been statistically evaluated and produces accurate predictions for a given set of the attributable variables. Furthermore, we rank the attributable variables with respect to their significant contribution to CO2 in the atmosphere. For cancer research, the object of the study is to probabilistically evaluate commonly used methods of survival analysis for medical patients. Our study includes evaluation of parametric, semi-parametric and nonparametric probability survival models. We evaluate the popular Kaplan-Meier (KM), Cox proportional hazards (Cox PH) and kernel density (KD) models using both Monte Carlo simulation and actual breast cancer data. The first part of the evaluation is based on how these methods measure up to parametric analysis, and the second part uses actual cancer data. As expected, parametric survival analysis, when applicable, gives the best results, followed by the less commonly used nonparametric kernel density approach, in both the simulation and the actual-data evaluations. We also develop a statistical model for breast cancer tumor size prediction for United States patients based on real uncensored data. When breast cancer tumor sizes are simulated, most of the time they are randomly generated; we want to construct a statistical model that generates tumor sizes as close as possible to real patients' data given other related information. We accomplish this objective by developing a high-quality statistical model that identifies the significant attributable variables and interactions, and we rank these contributing entities according to their percentage contribution to breast cancer tumor growth. The proposed statistical model can also be used to conduct response surface analysis to identify the restrictions on the significant attributable variables and their interactions needed to minimize the size of the breast tumor. We utilize the power law process, also known as the non-homogeneous Poisson process or Weibull process, to evaluate the effectiveness of a given treatment for Stage I and II ductal breast cancer patients, using the shape parameter of the intensity function to evaluate a treatment's behavior with respect to its effectiveness. Finally, we develop a differential equation that characterizes the behavior of the tumor as a function of time; its solution, once plotted, identifies the rate of change of tumor size as a function of age. The structure of the differential equation consists of the attributable variables and interactions significant to the growth of breast cancer tumors. Having developed the differential equation and its solution, we proceed to validate the quality and usefulness of the proposed equation.
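The nonparametric survival estimate referenced above can be illustrated with a short Kaplan-Meier product-limit computation from (time, event) pairs; the simulated survival times below stand in for patient data, which of course is not shipped with the abstract.

```python
# Sketch: Kaplan-Meier survival curve from simulated, partly censored data.
import numpy as np

rng = np.random.default_rng(3)
times = rng.exponential(scale=24.0, size=100)       # months until event/censoring
events = rng.random(100) < 0.7                      # True = event observed, False = censored

def kaplan_meier(times, events):
    """Return (event time, survival probability just after it) pairs."""
    order = np.argsort(times)
    times, events = times[order], events[order]
    n_at_risk = len(times)
    survival, curve = 1.0, []
    for t, d in zip(times, events):
        if d:                                       # an observed event: step down
            survival *= 1.0 - 1.0 / n_at_risk
            curve.append((t, survival))
        n_at_risk -= 1                              # event or censoring leaves the risk set
    return curve

curve = kaplan_meier(times, events)
print("median survival estimate:",
      next((t for t, s in curve if s <= 0.5), None))
```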
75

Cilindrinių kevalų statistinis modeliavimas ir analizė / Cylindrical shells statistical modeling and analysis

Klova, Egidijus 03 June 2005 (has links)
The software developed in this work makes it possible to construct a laminated composite cylindrical shell from chosen parameters (reinforcement angle, number of layers), to simulate the reliability of the structure, and to optimise it with respect to chosen statistical parameters. The shell parameters can be treated as random variables, which are modelled with the Monte Carlo method. The tool allows the reliability of the structure to be evaluated: under the assumption that the distribution of stress in the shell is known, the structure can be optimised by minimising its mass. To display the distribution of the limit state of the structure in a scatter chart, the stress value has to be modelled at every Monte Carlo step. The factors that influence shell stress and structural stability are computed, and this sequence of operations keeps the reliability of the model evaluation under control. The parameters with the greatest influence on the stress in the shell are analysed, and the fluctuation of the minimal shell mass is studied as a function of the dispersion of the parameters and of the probability of exceeding the structural stress limit.
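A schematic Monte Carlo reliability loop in the spirit of this abstract: treat shell parameters as random variables, evaluate a limit-state function for each sample, and estimate the probability of failure. The closed-form "stress" expression below is a placeholder, not the laminated shell model built in the thesis.

```python
# Sketch: Monte Carlo estimate of a failure probability from random parameters.
import numpy as np

rng = np.random.default_rng(4)
n_samples = 100_000

# Random parameters: wall thickness [mm], applied load [kN], material strength [MPa].
thickness = rng.normal(2.0, 0.1, n_samples)
load = rng.normal(50.0, 5.0, n_samples)
strength = rng.normal(400.0, 30.0, n_samples)

# Placeholder limit state: stress grows with load and shrinks with thickness.
stress = 12.0 * load / thickness
failed = stress > strength

p_failure = failed.mean()
std_err = np.sqrt(p_failure * (1 - p_failure) / n_samples)
print(f"estimated failure probability: {p_failure:.4f} +/- {std_err:.4f}")
```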
76

A General System for Supervised Biomedical Image Segmentation

Chen, Cheng 15 March 2013 (has links)
Image segmentation is important, with applications to several problems in biology and medicine. While extensively researched, current segmentation methods generally perform adequately in the applications for which they were designed, but often require extensive modifications or calibrations before being used in a different application. We describe a system that, with few modifications, can be used in a variety of image segmentation problems. The system is based on a supervised learning strategy that utilizes intensity neighborhoods to assign each pixel in a test image its correct class based on training data. In summary, we make several contributions: (1) a general framework for such a system is proposed, in which rotations and variations of intensity neighborhoods across scales are modeled, and a multi-scale classification framework is used to segment unknown images; (2) a fast algorithm for training data selection and pixel classification is presented, in which a majority-voting-based criterion is proposed for selecting a small subset from the raw training set; when combined with a 1-nearest-neighbor (1-NN) classifier, this algorithm provides decent classification accuracy within reasonable computational complexity; (3) a general deformable model for optimization of the segmented regions is proposed, which takes the decision values from the preceding pixel classification process as input and optimizes the segmented regions in a partial differential equation (PDE) framework. We show that the performance of this system in several different biomedical applications, such as tissue segmentation in magnetic resonance and histopathology microscopy images as well as nuclei segmentation in fluorescence microscopy images, is similar to or better than that of several algorithms specifically designed for each of these applications. In addition, we describe another general segmentation system for biomedical applications where a strong prior on shape is available (e.g. cells, nuclei). The idea is based on template matching and supervised learning, and we show examples of segmenting cells and nuclei from microscopy images. The method uses examples selected by a user to build a statistical model that captures the texture and shape variations of the nuclear structures in a given data set to be segmented. Segmentation of subsequent, unlabeled images is then performed by finding the model instance that best matches (in the normalized cross-correlation sense) the local neighborhood in the input image. We demonstrate the application of our method to segmenting cells and nuclei from a variety of imaging modalities, and quantitatively compare our results to several other methods. Quantitative results using both simulated and real image data show that, while certain methods may work well for certain imaging modalities, our software is able to obtain high accuracy across the several imaging modalities studied. The results also demonstrate that, relative to several existing methods, the proposed template-based method is more robust: it better handles variations in illumination and in texture across imaging modalities, provides smoother and more accurate segmentation borders, and better handles cluttered cells and nuclei.
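A condensed sketch of the supervised pixel-classification step described above: intensity neighborhoods (patches) around labeled pixels serve as feature vectors, a subset of them forms the reduced training set, and a 1-nearest-neighbor classifier labels the pixels of a new image. The random-subset selection stands in for the thesis's majority-voting criterion, and the images here are synthetic.

```python
# Sketch: patch features + 1-NN pixel classification on a synthetic image.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(5)

def patches(image, labels=None, size=5):
    """Extract size x size intensity neighborhoods (and labels) for interior pixels."""
    h = size // 2
    feats, targs = [], []
    for i in range(h, image.shape[0] - h):
        for j in range(h, image.shape[1] - h):
            feats.append(image[i - h:i + h + 1, j - h:j + h + 1].ravel())
            if labels is not None:
                targs.append(labels[i, j])
    return np.array(feats), (np.array(targs) if labels is not None else None)

# Synthetic training image: bright foreground square on dark background.
train_img = rng.normal(0.2, 0.05, (64, 64))
train_lab = np.zeros((64, 64), dtype=int)
train_lab[20:44, 20:44] = 1
train_img[train_lab == 1] += 0.5

X, y = patches(train_img, train_lab)
subset = rng.choice(len(X), size=500, replace=False)   # reduced training set
clf = KNeighborsClassifier(n_neighbors=1).fit(X[subset], y[subset])

test_img = rng.normal(0.2, 0.05, (64, 64))
test_img[10:30, 30:50] += 0.5
X_test, _ = patches(test_img)
pred = clf.predict(X_test).reshape(60, 60)             # interior pixels only
print("foreground fraction predicted:", pred.mean())
```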
77

Análise, imputação de dados e interfaces computacionais em estudos de séries temporais epidemiológicas / Analysis, data imputation and computer interfaces in time-series epidemiologic studies

Washington Leite Junger 01 April 2008 (has links)
A poluição do ar é um problema de saúde pública nas grandes áreas urbanas e seus efeitos são frequentemente observados na morbidade e mortalidade por doenças respiratórias e cardiovasculares, câncer de pulmão, diminuição da função respiratória, absenteísmo escolar e problemas relacionados com a gravidez. Estudos também sugerem que os grupos mais suscetíveis são as crianças e os idosos. Esta tese apresenta estudos sobre o efeito da poluição do ar na saúde na cidade do Rio de Janeiro e aborda aspectos metodológicos sobre a análise de dados e imputação de dados faltantes em séries temporais epidemiológicas. A análise de séries temporais foi usada para estimar o efeito da poluição do ar na mortalidade de pessoas idosas por câncer de pulmão com dados dos anos 2000 e 2001. Este estudo teve como objetivo avaliar se a poluição do ar está associada com antecipação de óbitos de pessoas que já fazem parte de uma população de risco. Outro estudo foi realizado para avaliar o efeito da poluição do ar no baixo peso ao nascer de nascimentos a termo; o desenho deste estudo foi o de corte transversal, usando os dados disponíveis no ano de 2002. Em ambos os estudos foram estimados efeitos moderados da poluição do ar. Aspectos metodológicos dos estudos epidemiológicos da poluição do ar na saúde também são abordados na tese. Um método para imputação de dados faltantes é proposto e implementado numa biblioteca para o aplicativo R. A metodologia de imputação é avaliada e comparada, por meio de técnicas de simulação, com outros métodos frequentemente usados para imputação de séries temporais de concentrações de poluentes atmosféricos. O método proposto apresentou desempenho superior aos tradicionalmente utilizados. Também é realizada uma breve revisão da metodologia usada nos estudos de séries temporais sobre os efeitos da poluição do ar na saúde. Os tópicos abordados na revisão estão implementados numa biblioteca para a análise de dados de séries temporais epidemiológicas no aplicativo estatístico R. O uso da biblioteca é exemplificado com dados de internações hospitalares de crianças por doenças respiratórias no Rio de Janeiro. Os estudos de cunho metodológico foram desenvolvidos no âmbito do estudo multicêntrico para avaliação dos efeitos da poluição do ar na América Latina, o Projeto ESCALA. / Air pollution is a public health problem in major urban areas, and its effects are frequently observed in morbidity and mortality from respiratory and cardiovascular causes, lung cancer, decreased respiratory function, school absenteeism and adverse pregnancy outcomes. This thesis presents studies on the effects of air pollution on health in the city of Rio de Janeiro and tackles methodological issues in data analysis and missing data imputation for epidemiologic time series. Daily time series were used to estimate the effect of air pollution on deaths among the elderly due to lung cancer during 2000 and 2001; the purpose of this study was to evaluate whether air pollution is associated with premature deaths of people who already belong to a risk population. Another study was conducted to assess the relationship between air pollution and low birth weight of singleton full-term babies, using a cross-sectional design on data available for the year 2002. Moderate effects of air pollution were estimated in both studies. Methodological aspects of epidemiologic studies on air pollution are also addressed. A missing data imputation method is presented and implemented as a library for the statistical package R. The imputation methodology is evaluated and compared, through simulation techniques, to other methods often used for data imputation in time series of air pollutant concentrations; the proposed method showed better performance than those traditionally used. A brief review of the methodology used in time series studies of the effects of air pollution on health is also presented, and the issues covered in the review are implemented as a library for the analysis of epidemiologic time series in R. The use of the library is illustrated with the analysis of data on hospital admissions of children due to respiratory causes in the city of Rio de Janeiro. The methodological studies were carried out under the umbrella of the multi-city study to assess the effects of air pollution on health in Latin America, the ESCALA Project.
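A toy version of the simulation-based comparison mentioned above: knock holes into a pollutant-like series, fill them with two common imputation strategies, and compare errors against the values that were removed. The series is synthetic and the methods (linear interpolation vs. overall mean) are generic baselines, not the method implemented in the thesis's R library.

```python
# Sketch: simulation study comparing two simple time-series imputation methods.
import numpy as np
import pandas as pd

rng = np.random.default_rng(6)
n = 365
# Synthetic daily concentrations with a seasonal cycle plus noise.
truth = 40 + 10 * np.sin(2 * np.pi * np.arange(n) / 365) + rng.normal(0, 3, n)

series = pd.Series(truth)
# Remove 40 interior days at random so interpolation has anchors on both sides.
missing = rng.choice(np.arange(1, n - 1), size=40, replace=False)
series.iloc[missing] = np.nan

def rmse(filled):
    return np.sqrt(np.mean((filled.iloc[missing] - truth[missing]) ** 2))

print("linear interpolation RMSE:", round(rmse(series.interpolate()), 2))
print("mean imputation RMSE:     ", round(rmse(series.fillna(series.mean())), 2))
```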
78

Optimisation du test de production de circuits analogiques et RF par des techniques de modélisation statistique / Optimisation of the production test of analog and RF circuit using statistical modeling techniques

Akkouche, Nourredine 09 September 2011 (has links)
La part due au test dans le coût de conception et de fabrication des circuits intégrés ne cesse de croître, d'où la nécessité d'optimiser cette étape devenue incontournable. Dans cette thèse, de nouvelles méthodes d'ordonnancement et de réduction du nombre de tests à effectuer sont proposées. La solution est un ordre des tests permettant de détecter au plus tôt les circuits défectueux, qui pourra aussi être utilisé pour éliminer les tests redondants. Ces méthodes de test sont basées sur la modélisation statistique du circuit sous test. Cette modélisation inclut plusieurs modèles paramétriques et non paramétriques permettant de s'adapter à tous les types de circuits. Une fois le modèle validé, les méthodes de test proposées génèrent un grand échantillon contenant des circuits défectueux. Ces derniers permettent une meilleure estimation des métriques de test, en particulier le taux de défauts. Sur la base de cette erreur, un ordonnancement des tests est construit en maximisant la détection des circuits défectueux au plus tôt. Avec peu de tests, la méthode de sélection et d'évaluation est utilisée pour obtenir l'ordre optimal des tests. Toutefois, avec des circuits contenant un grand nombre de tests, des heuristiques comme la méthode de décomposition, les algorithmes génétiques ou les méthodes de recherche flottante sont utilisées pour approcher la solution optimale. / The share of test in the cost of designing and manufacturing integrated circuits continues to grow, hence the need to optimize this now unavoidable step. In this thesis, new methods for scheduling tests and reducing their number are proposed. The solution is an ordering of the tests that allows faulty circuits to be detected as early as possible, and that can also be used to eliminate redundant tests. These test methods are based on statistical modeling of the circuit under test. The modeling includes several parametric and non-parametric models in order to adapt to all types of circuits. Once the model is validated, the proposed test methods generate a large sample containing defective circuits, which allows a better estimation of the test metrics, particularly the defect level. Based on this error, a test schedule is constructed by maximizing the early detection of faulty circuits. With few tests, the branch-and-bound method is used to obtain the optimal test order; however, for circuits with a large number of tests, heuristics such as the decomposition method, genetic algorithms or floating search methods are used to approach the optimal solution.
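A compact sketch of the test-ordering idea: given a matrix recording which simulated defective circuits each test detects, greedily pick the test that catches the most not-yet-detected circuits, so faulty parts are rejected as early as possible. The detection matrix is random here; in the thesis it would come from the statistical model of the circuit under test, and the greedy rule is only a stand-in for the exact and heuristic schedulers described above.

```python
# Sketch: greedy test ordering that maximizes early detection of faulty circuits.
import numpy as np

rng = np.random.default_rng(7)
n_tests, n_faulty = 12, 500
# detects[t, c] is True if test t fails on (i.e. detects) faulty circuit c.
detects = rng.random((n_tests, n_faulty)) < rng.uniform(0.05, 0.4, (n_tests, 1))

order, caught, cumulative = [], np.zeros(n_faulty, dtype=bool), []
remaining = list(range(n_tests))
while remaining:
    # Pick the test with the largest marginal detection of uncaught circuits.
    gains = [np.sum(detects[t] & ~caught) for t in remaining]
    best = remaining.pop(int(np.argmax(gains)))
    order.append(best)
    caught |= detects[best]
    cumulative.append(caught.mean())

print("greedy test order:", order)
print("defect coverage after first three tests:",
      [round(c, 3) for c in cumulative[:3]])
```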
79

Procedimentos e modelos para previsão de vendas e determinação de quotas na indústria calçadista: proposta e estudo de caso / Procedures and models for sales forecasting and quota determination in the footwear industry: proposal and case study

Mantovani, Almir 18 February 2011 (has links)
Frequent changes in consumer behavior and the complexity and competitiveness of the market demand constant innovation in products, technologies, management strategies and management systems, including the systems that integrate the company with its customers or suppliers as well as the systems that integrate or coordinate business processes. In this scenario, understanding and managing the company's salespeople and/or sales representatives becomes an essential factor for the company to remain competitive in the market. In this light, the process of determining and allocating sales quotas appears as an integral and important part of the sales management process, allowing, among other things, the evaluation of salesperson performance against the goals set by the company. The purpose of this thesis was to establish a set of procedures to forecast sales appropriately in the footwear industry and, from there, to build a model to determine sales quotas that meets the specifics of the sector and integrates two important functions of the company: sales and production. Semi-structured interviews with sales and production planning and control professionals were conducted to support the development of the proposal, which was evaluated by means of a case study in a footwear company in the city of Franca (SP). It was concluded that demand management in the footwear industry is tied to quotas that should (directly or indirectly) limit sales according to the available production capacity, and that the models considered in this study fit the situation studied well. / As frequentes mudanças no comportamento do consumidor e as características de complexidade e competição que envolvem o mercado demandam inovações constantes nos produtos, tecnologias, estratégias de gestão e sistemas de gestão, entre eles os sistemas que fazem a integração entre a empresa e seus clientes ou fornecedores, bem como os sistemas que integram ou coordenam processos da empresa. Neste cenário, a compreensão e administração dos vendedores e/ou representantes de vendas da empresa revelam-se um fator essencial para a empresa manter-se competitiva no mercado. Sob esta ótica, o processo de determinação e alocação de quotas de vendas aparece como parte integrante e importante do processo de gestão de vendas, o que permite, entre outras coisas, a avaliação do desempenho do vendedor em direção aos objetivos traçados pela empresa. A proposta desta tese foi estabelecer um conjunto de procedimentos para prever, de forma adequada, as vendas na indústria de calçados e, a partir daí, construir um modelo para determinar quotas de vendas que atenda às especificidades do setor e integre duas importantes funções da empresa: Vendas e Produção. Para subsidiar a elaboração da proposta foram feitas entrevistas semiestruturadas com profissionais de vendas e de planejamento e controle da produção. A avaliação da proposta deu-se por meio de um estudo de caso em uma empresa de calçados da cidade de Franca (SP). Concluiu-se que a gestão de demanda na indústria calçadista está atrelada às quotas que devem limitar (direta ou indiretamente) as vendas em função da capacidade de produção disponível, e os modelos propostos neste estudo atendem bem à situação estudada.
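A simplified sketch of the two linked steps the thesis proposes: forecast next-period sales (here with single exponential smoothing, one of many possible forecasting models) and split the forecast into per-salesperson quotas, capped by available production capacity. The history, shares and capacity figure below are made-up numbers, not data from the case study.

```python
# Sketch: capacity-capped sales forecast and proportional quota allocation.
import numpy as np

monthly_sales = np.array([980, 1010, 1100, 1050, 1200, 1180,
                          1250, 1300, 1280, 1350, 1400, 1420])  # pairs of shoes

def exponential_smoothing_forecast(series, alpha=0.3):
    """One-step-ahead forecast with simple exponential smoothing."""
    level = series[0]
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

forecast = exponential_smoothing_forecast(monthly_sales)
capacity = 1300                                     # units the factory can deliver
plan = min(forecast, capacity)                      # quotas cannot exceed capacity

# Allocate quotas in proportion to each representative's historical share.
historical_share = np.array([0.40, 0.35, 0.25])     # three sales representatives
quotas = plan * historical_share
print(f"forecast: {forecast:.0f}, planned (capacity-capped): {plan:.0f}")
print("quotas per representative:", np.round(quotas, 0))
```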
