81

Factores que se asocian con el bajo peso del recién nacido / Factors associated with low birth weight in newborns

Corasma Uñurucu, Vilma Yovanna January 2002 (has links)
Low birth weight is considered a general indicator of health in developing countries, hence the importance of identifying the factors that most influence low weight at birth. We use logistic regression analysis to classify newborns into two groups: low birth weight and normal birth weight. The study is based on all patients attended at the Instituto Materno Perinatal from January to June of the current year. This work is not limited to estimating the parameters of the model; it also includes validation of the assumptions, assessment of the model's goodness of fit, residual analysis, detection of influential observations and, finally, evaluation of the predictive capacity of the proposed model.
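As an illustration of the modelling described in this abstract, here is a minimal logistic regression classification sketch (Python/scikit-learn) on synthetic data; the predictors shown (gestational age, maternal age, prenatal visits) are hypothetical stand-ins, not the covariates actually used in the thesis:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Hypothetical predictors; the thesis's actual covariates are not listed in the abstract.
rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.normal(38, 2, n),     # gestational age (weeks)
    rng.normal(27, 6, n),     # maternal age (years)
    rng.poisson(6, n),        # number of prenatal visits
])
# Synthetic outcome: 1 = low birth weight (< 2500 g), 0 = normal weight.
logit = -0.9 * (X[:, 0] - 38) - 0.15 * (X[:, 2] - 6) - 1.5
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Predictive capacity on held-out data, analogous to the evaluation step in the abstract.
print(classification_report(y_test, model.predict(X_test)))
```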
82

Optimisation of a Diagnostic Test for a Truck Engine / Optimering av ett diagnostest för en lastbilsmotor

Haraldsson, Petter January 2002 (has links)
Diagnostic systems are becoming increasingly important within the field of vehicle systems, largely because new rules and regulations force manufacturers of heavy-duty trucks to monitor the emission process of their engines throughout the lifetime of the truck. To do this, a diagnostic system has to be implemented which continuously monitors the process and checks that the emission thresholds set by the government are not exceeded. There is also a demand that this system be reliable, i.e. that it produces neither false alarms nor missed detections. One way of building such a system is to use a model-based diagnosis system in which thresholds are set to decide whether the system is faulty or not. There are several difficulties involved in this. Firstly, there is no way of knowing whether the logged signals are corrupt or not, because faults in these signals are exactly what should be detected. Secondly, because of the strict demand on reliability, the thresholds have to be set where there is a very low probability of observing values during fault-free driving. In this thesis a methodology is proposed for setting thresholds in a diagnosis system for an experimental test engine at Scania. Measurement data has been logged over 20 hours of effective driving with two individuals of the same engine. It is shown that the result is improved significantly by using this method and that the thresholds can be set so that smaller faults in the system can be reliably detected.
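A minimal sketch of the threshold-setting idea summarized above: choose the threshold as a high empirical quantile of fault-free residuals logged during driving, so that the false-alarm probability stays very low while small faults still push the residual over the limit. The data and the quantile rule are illustrative assumptions, not Scania's logged data or the thesis's exact procedure:

```python
import numpy as np

def set_threshold(residuals: np.ndarray, false_alarm_prob: float = 1e-4) -> float:
    """Pick a threshold that fault-free residuals exceed only with a small
    probability, estimated from logged driving data (empirical quantile)."""
    return float(np.quantile(np.abs(residuals), 1.0 - false_alarm_prob))

# Hypothetical fault-free residuals logged over many hours of driving.
rng = np.random.default_rng(1)
nominal = rng.normal(0.0, 0.05, size=200_000)
threshold = set_threshold(nominal)

# A small sensor bias shows up as residuals shifted away from zero.
faulty = nominal[:5_000] + 0.4
alarm_rate = float(np.mean(np.abs(faulty) > threshold))
print(f"threshold={threshold:.3f}, detection rate under fault={alarm_rate:.2%}")
```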
83

A Framework for Participatory Sensing Systems

Mendez Chaves, Diego 01 January 2012 (has links)
Participatory sensing (PS) systems are an emerging sensing paradigm based on the cooperative participation of cellular users. Due to the spatio-temporal granularity that a PS system can provide, it is now possible to detect and analyze events that occur at different scales, at a low cost. While PS systems present interesting characteristics, they also create new problems. Since the measuring devices are cheap and in the hands of the users, PS systems face several design challenges related to the poor accuracy and high failure rate of the sensors, the possibility of malicious users tampering with the data, the violation of the privacy of the users, methods to encourage the participation of the users, and the effective visualization of the data. This dissertation presents four main contributions to address some of these challenges. The first is a framework to guide the design and implementation of PS applications considering all these aspects. The framework consists of five modules: sample size determination, data collection, data verification, data visualization, and density map generation. The remaining contributions map one-to-one to three of the modules of this framework: data verification, data visualization, and density maps. Data verification, in the context of PS, is the process of detecting and removing spatial outliers to properly reconstruct the variables of interest. A new algorithm for spatial outlier detection and removal is proposed, implemented, and tested. This hybrid neighborhood-aware algorithm considers the uneven spatial density of the users, the number of malicious users, the level of conspiracy, and inaccurate or malfunctioning sensors. The experimental results show that the proposed algorithm performs as well as the best estimator while reducing the execution time considerably. The problem of data visualization in the context of PS applications is also of special interest. The characteristics of a typical PS application imply the generation of multivariate time-space series with many gaps in time and space. Considering this, a new method is presented based on the kriging technique along with Principal Component Analysis and Independent Component Analysis. Additionally, a new technique to interpolate data in time and space is proposed, which is more appropriate for PS systems. The results indicate that the accuracy of the estimates improves with the amount of data, i.e., one variable, multiple variables, and space and time data. Also, the results clearly show the advantage of a PS system compared with a traditional measuring system in terms of the precision and spatial resolution of the information provided to the users. One key challenge in PS systems is determining the locations and the number of users from which to obtain samples, so that the variables of interest can be accurately represented with a low number of participants. To address this challenge, the use of density maps is proposed, a technique based on the current estimates of the variable. The density maps are then utilized by the incentive mechanism in order to encourage the participation of those users indicated in the map. The experimental results show how the density maps greatly improve the quality of the estimations while maintaining a stable and low total number of users in the system.
P-Sense, a PS system to monitor pollution levels, has been implemented and tested, and is used as a validation example for all the contributions presented here. P-Sense integrates gas and environmental sensors with a cell phone, in order to monitor air quality levels.
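The data verification module above centres on spatial outlier detection. The following is a generic neighbourhood-based sketch of that idea, flagging a reading that deviates strongly from the median of its nearest neighbours; it is not the dissertation's hybrid neighborhood-aware algorithm, and the pollution readings are synthetic:

```python
import numpy as np
from scipy.spatial import cKDTree

def spatial_outliers(coords, values, k_neighbors=8, z_cut=3.5):
    """Generic neighbourhood-based spatial outlier flagging: compare each
    reading against the median of its k nearest neighbours, scaled by the
    median absolute deviation (MAD)."""
    tree = cKDTree(coords)
    flags = np.zeros(len(values), dtype=bool)
    for i, (p, v) in enumerate(zip(coords, values)):
        _, idx = tree.query(p, k=k_neighbors + 1)      # includes the point itself
        neigh = values[idx[1:]]
        med = np.median(neigh)
        mad = max(np.median(np.abs(neigh - med)), 1e-9)
        flags[i] = abs(v - med) / (1.4826 * mad) > z_cut
    return flags

# Hypothetical pollution readings from participating phones, with a few tampered values.
rng = np.random.default_rng(2)
coords = rng.uniform(0, 10, size=(300, 2))
values = 50 + coords[:, 0] * 2 + rng.normal(0, 1, 300)
values[:5] += 40                                       # injected malicious readings
print(np.flatnonzero(spatial_outliers(coords, values))[:10])
```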
84

Identifikation av icke-representativa svar i frågeundersökningar genom detektion av multivariata avvikare / Identification of non-representative survey responses through detection of multivariate outliers

Galvenius, Hugo January 2014 (has links)
To United Minds, large-scale surveys are an important offering to clients, not least the public opinion poll Väljarbarometern. A risk associated with surveys is satisficing – sub-optimal response behaviour that impairs the possibility of correctly describing the sampled population through the results. The purpose of this study is to identify, through the use of multivariate outlier detection methods, those observations assumed to be non-representative of the population. The possibility of categorizing responses generated through satisficing as outliers is investigated. With regard to the character of the Väljarbarometern dataset, three existing algorithms are adapted to detect these outliers. Also, a number of randomly generated observations are added to the data, all of which are correctly labelled as outliers by the algorithms. The resulting anomaly scores generated by each algorithm are compared, with the conclusion that the Otey algorithm is the most effective for the purpose, above all since it takes into account correlation between variables. A plausible cut-off value for outliers and the separation between non-representative and representative outliers are discussed. The resulting recommendation is to handle observations labelled as outliers through respondent follow-up or, if that is not possible, through downweighting inversely proportional to the anomaly scores.
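The closing recommendation, downweighting flagged respondents inversely to their anomaly scores, can be sketched as follows; the base weights, scores, and cutoff are hypothetical, not values from Väljarbarometern:

```python
import numpy as np

def downweight(base_weights, anomaly_scores, cutoff):
    """Scale survey weights of likely non-representative respondents
    inversely to their anomaly score; respondents below the cutoff keep
    their original weight."""
    w = np.asarray(base_weights, dtype=float).copy()
    s = np.asarray(anomaly_scores, dtype=float)
    flagged = s > cutoff
    w[flagged] *= cutoff / s[flagged]     # weight shrinks as the score grows
    return w

base = np.ones(6)
scores = np.array([0.2, 0.5, 1.1, 2.0, 4.0, 8.0])
print(downweight(base, scores, cutoff=1.0))
# -> approximately [1. 1. 0.909 0.5 0.25 0.125]
```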
86

Robust second-order least squares estimation for linear regression models

Chen, Xin 10 November 2010 (has links)
The second-order least-squares estimator (SLSE), which was proposed by Wang (2003), is asymptotically more efficient than the least-squares estimator (LSE) if the third moment of the error distribution is nonzero. However, it is not robust against outliers. In this paper we propose two robust second-order least-squares estimators (RSLSE) for linear regression models, RSLSE-I and RSLSE-II, where RSLSE-I is robust against X-outliers and RSLSE-II is robust against X-outliers and Y-outliers. The basic idea is to choose proper weight matrices, which give a zero weight to an outlier. The RSLSEs are asymptotically normally distributed and are highly efficient with a high breakdown point. Moreover, we compare the RSLSEs with the LSE, the SLSE and the robust MM-estimator through simulation studies and real data examples. The results show that they perform very well and are competitive with other robust regression estimators.
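The weighting idea quoted in the abstract, weight matrices that give zero weight to outliers, can be illustrated for the X-outlier case with a robust Mahalanobis distance; this sketch only shows such 0/1 weighting in a plain weighted least-squares fit, and is not Wang's SLSE or the proposed RSLSE:

```python
import numpy as np
from scipy.stats import chi2
from sklearn.covariance import MinCovDet

rng = np.random.default_rng(3)
n, p = 200, 2
X = rng.normal(size=(n, p))
X[:5] += 8                                   # a few X-outliers (bad leverage points)
y = 1.0 + X @ np.array([2.0, -1.0]) + rng.normal(scale=0.5, size=n)

# Robust squared distances from a Minimum Covariance Determinant fit; points beyond
# the chi-square cutoff receive zero weight, the rest full weight.
d2 = MinCovDet(random_state=0).fit(X).mahalanobis(X)
w = (d2 < chi2.ppf(0.975, df=p)).astype(float)

# Weighted least squares with the 0/1 weights (design matrix with intercept).
A = np.column_stack([np.ones(n), X])
beta = np.linalg.lstsq(A * w[:, None] ** 0.5, y * w ** 0.5, rcond=None)[0]
print(beta)   # close to the true coefficients (1, 2, -1) despite the outliers
```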
87

Robust principal component analysis biplots

Wedlake, Ryan Stuart 03 1900 (has links)
Thesis (MSc (Mathematical Statistics))--University of Stellenbosch, 2008. / In this study several procedures for finding robust principal components (RPCs) for low and high dimensional data sets are investigated in parallel with robust principal component analysis (RPCA) biplots. These RPCA biplots will be used for the simultaneous visualisation of the observations and variables in the subspace spanned by the RPCs. Chapter 1 contains: a brief overview of the difficulties that are encountered when graphically investigating patterns and relationships in multidimensional data and why PCA can be used to circumvent these difficulties; the objectives of this study; a summary of the work done in order to meet these objectives; and certain results in matrix algebra that are needed throughout this study. In Chapter 2 the derivation of the classic sample principal components (SPCs) is first discussed in detail since they are the 'building blocks' of classic principal component analysis (CPCA) biplots. Secondly, the traditional CPCA biplot of Gabriel (1971) is reviewed. Thirdly, modifications to this biplot using the new philosophy of Gower & Hand (1996) are given attention. Reasons why this modified biplot has several advantages over the traditional biplot – some of which are aesthetic in nature – are given. Lastly, changes that can be made to the Gower & Hand (1996) PCA biplot to optimally visualise the correlations between the variables are discussed. Because the SPCs determine the position of the observations as well as the orientation of the arrows (traditional biplot) or axes (Gower and Hand biplot) in the PCA biplot subspace, it is useful to give estimates of the standard errors of the SPCs together with the biplot display as an indication of the stability of the biplot. In Chapter 3, a computer-intensive statistical technique called the Bootstrap, which is used to calculate the standard errors of the SPCs without making underlying distributional assumptions, is firstly discussed. Secondly, the influence of outliers on Bootstrap results is investigated. Lastly, a robust form of the Bootstrap is briefly discussed for calculating standard error estimates that remain stable with or without the presence of outliers in the sample. In Chapter 4, reasons why a PC analysis should be made robust in the presence of outliers are firstly discussed. Secondly, different types of outliers are discussed. Thirdly, a method for identifying influential observations and a method for identifying outlying observations are investigated. Lastly, different methods for constructing robust estimates of location and dispersion for the observations receive attention. These robust estimates are used in numerical procedures that calculate RPCs. In Chapter 5, an overview of some of the procedures that are used to calculate RPCs for lower and higher dimensional data sets is firstly given. Secondly, two numerical procedures that can be used to calculate RPCs for lower dimensional data sets are discussed and compared in detail. Details and examples of robust versions of the Gower & Hand (1996) PCA biplot that can be constructed using these RPCs are also provided. In Chapter 6, five numerical procedures for calculating RPCs for higher dimensional data sets are discussed in detail. Once RPCs have been obtained by using these methods, they are used to construct robust versions of the PCA biplot of Gower & Hand (1996). Details and examples of these robust PCA biplots are also provided.
An extensive software library has been developed so that the biplot methodology discussed in this study can be used in practice. The functions in this library are given in an appendix at the end of this study. This software library is used on data sets from various fields so that the merit of the theory developed in this study can be visually appraised.
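For reference, a classic (non-robust) PCA biplot of the kind reviewed in Chapter 2, with observations as points and variables as arrows, can be drawn in a few lines; the robust, axis-calibrated Gower & Hand (1996) biplots developed in the thesis go well beyond this sketch, and the iris data used here is only a placeholder:

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

data = load_iris()
Z = StandardScaler().fit_transform(data.data)
pca = PCA(n_components=2).fit(Z)
scores = pca.transform(Z)                        # observation coordinates
loadings = pca.components_.T                     # variable directions

fig, ax = plt.subplots()
ax.scatter(scores[:, 0], scores[:, 1], s=10, alpha=0.6)
for vec, name in zip(loadings, data.feature_names):
    ax.arrow(0, 0, 3 * vec[0], 3 * vec[1], head_width=0.08, color="red")
    ax.annotate(name, (3.2 * vec[0], 3.2 * vec[1]), color="red")
ax.set_xlabel("PC1")
ax.set_ylabel("PC2")
plt.show()
```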
88

Análise de "outliers" para o controle do risco de evasão tributária do ICMS / Outlier analysis for controlling the risk of ICMS tax evasion

Bittencourt Neto, Sérgio Augusto Pará 03 July 2018 (has links)
Dissertação (mestrado)—Universidade de Brasília, Instituto de Ciências Exatas, Departamento de Ciência da Computação, 2018. / This dissertation presents the combined application of selected statistical models and data-mining methods to the analysis of outliers in the data of the Electronic Fiscal Notes and the Electronic Fiscal Book, supporting the investigation of new types of ICMS tax evasion. Three approaches are combined: 1. the mathematical programming method of Data Envelopment Analysis (DEA), to single out companies with relatively inefficient tax-collection performance within an economic segment and to select suspect taxpayers for investigation; 2. time-series analysis of the fiscal data used in the calculation of the ICMS tax (graphical comparison of actual values and the corresponding book entries, boxplots, decomposition into trend and seasonal components, and Holt-Winters exponential smoothing), capable of detecting anomalous time periods (outliers); and 3. descriptive statistics (frequency distributions), probabilistic checks (Chebyshev's inequality and the Newcomb-Benford law) and K-means clustering applied to the tax information of the selected taxpayers, to identify book entries and fiscal documents under suspicion. A computational tool written in R (R Studio platform) is developed to extract data from the Federal District Revenue database (ORACLE), process the extracted information by applying the designated models and methods, and present the results in analytical panels that facilitate and optimize audit work. The identification of anomalous circumstances through a systematic treatment of the data thus gives greater efficiency to the planning of tax audits.
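One of the probabilistic screens listed above, the Newcomb-Benford law, compares the first-digit distribution of declared amounts with the expected log10(1 + 1/d) frequencies. A small illustration follows; the amounts are synthetic and the snippet is in Python, whereas the dissertation's tool is written in R:

```python
import numpy as np

def first_digit_counts(amounts):
    """Count leading digits 1-9 of positive amounts."""
    digits = np.array([int(str(abs(a)).lstrip("0.")[0]) for a in amounts if a != 0])
    return np.array([(digits == d).sum() for d in range(1, 10)])

# Hypothetical invoice amounts; real values would be read from the tax database.
rng = np.random.default_rng(4)
amounts = np.round(np.exp(rng.uniform(np.log(10), np.log(100_000), 5_000)), 2)

observed = first_digit_counts(amounts)
expected = np.log10(1 + 1 / np.arange(1, 10)) * observed.sum()

# Pearson chi-square statistic: large values flag taxpayers whose declared
# amounts deviate from the Benford distribution and deserve a closer audit.
chi2_stat = ((observed - expected) ** 2 / expected).sum()
print(observed, chi2_stat)
```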
89

Proposta de um novo método para o planejamento de redes geodésicas / Proposal of a new method for the design of geodetic networks

Klein, Ivandro January 2014 (has links)
The aim of this work is to develop and propose a new method for the design of geodetic networks. The design (planning or pre-analysis) of a geodetic network consists of planning (or optimizing) the network so that it meets pre-established quality criteria according to the project objectives, such as accuracy, reliability and cost. In the method proposed here, the criteria to be considered in the planning stage are the minimum acceptable levels of reliability and homogeneity of the observations; the positional accuracy of the points, considering both the effects of precision and the (possible) effects of bias, according to a given confidence level; the maximum allowable number of undetected outliers; and the minimum power of the test of the Data Snooping (DS) procedure in the n-dimensional scenario, i.e., considering all observations (individually tested). According to the classifications found in the literature, the method proposed here is a combined design, solved by means of a trial-and-error approach, and presents some new aspects in its planning criteria.
To demonstrate its practical application, a numerical example of the design of a GNSS (Global Navigation Satellite System) network is presented and described. The results obtained after processing the data of the GNSS network agreed with the values estimated in the design stage, i.e., the method proposed here showed satisfactory performance in practice. Moreover, it was also investigated how the pre-established criteria, the geometry/configuration of the geodetic network, and the initial precision/correlation of the observations may influence the results obtained in the planning stage, following the method proposed here. In these experiments, among other findings, it was found that all the design criteria of the proposed method are intrinsically related: for example, a low redundancy leads to a relatively higher value for the precision component and, consequently, to a relatively lower value for the bias component (keeping the final accuracy constant), which also leads to a significantly lower minimum power of the test in the one-dimensional and n-dimensional scenarios.
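The reliability criteria discussed above are closely tied to the redundancy numbers of the observations. A toy sketch of how they are computed from a design matrix A and weight matrix P is given below, using an invented levelling network rather than the GNSS example of the thesis:

```python
import numpy as np

# Toy levelling network: 3 unknown heights, 5 observed height differences
# (two of them tied to fixed benchmarks).
A = np.array([[ 1,  0,  0],
              [-1,  1,  0],
              [ 0, -1,  1],
              [ 0,  0, -1],
              [ 1,  0, -1]], dtype=float)
P = np.diag([4.0, 4.0, 1.0, 1.0, 2.0])           # weights = 1 / sigma^2

# Redundancy numbers r_i = diag(R), with R = I - A (A^T P A)^{-1} A^T P.
N_inv = np.linalg.inv(A.T @ P @ A)
R = np.eye(len(A)) - A @ N_inv @ A.T @ P
r = np.diag(R)
print(r.round(3), "sum =", r.sum().round(3))     # the sum equals the total redundancy n - u
```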
90

Estudo do balanço de umidade por meio de modelos regionais para o clima do passado e do futuro sobre a América do Sul / Moisture budget study using regional models for past and future climates over South America

Coutinho, Maytê Duarte Leal 18 June 2015 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) / In the context of climate change over South America (SA), it has been observed that the combination of high temperatures with more rainfall, or of high temperatures with less rainfall, causes different impacts such as extreme precipitation events, conditions favourable to wildfires, and droughts. As a result, these regions face a growing threat of local or widespread water shortage. The availability of water in Brazil thus depends largely on the climate and its variations on several time scales. The main objective of this research is to study the moisture budget using the regional climate models (RCM) of the Regional Climate Change Assessments for La Plata Basin project (CLARIS-LPB), and to combine these RCM through two statistical techniques in an attempt to improve the prediction over three areas of SA: the Amazon (AMZ), Northeast Brazil (NEB) and the La Plata Basin (LPB), for the past (1961-1990) and future (2071-2100) climates. Moisture transport over SA was investigated through the vertically integrated moisture flux. The main results show that the mean water-vapour fluxes in the tropical regions (AMZ and NEB) are larger across the eastern and northern borders, indicating that the contributions of the North Atlantic and South Atlantic trade winds are equally important for the moisture inflow during JJA and DJF. This pattern was observed in all models and climates. Comparing the climates, the moisture flux convergence in the past climate was smaller than in the future in different regions and seasons. Similarly, most of the models simulate reduced precipitation in the future climate over the tropical regions (AMZ and NEB) and an increase over the LPB region. The second phase of this research combined the RCM in order to predict precipitation more accurately, through principal-component multiple regression (C_RCP) and a convex combination (C_EQM), and then analysed and compared the resulting ensembles. The results indicate that the C_RCP combination better represents the observed precipitation in both climates: besides producing values close to those observed, it achieves correlation coefficients of moderate to strong magnitude in almost every month, across the different climates and regions, as well as a lower dispersion of the data (RMSE). In the evaluation of the combination techniques, the percent bias (PBIAS) showed that C_RCP generally attained values of low magnitude (PBIAS = 0%), indicating a "very good" performance in the past climate, and PBIAS between -4% and 3% for the future climate over the study regions, while C_EQM showed a "good" performance over AMZ and a "good to satisfactory" performance over NEB and LPB in the past climate. A significant advantage of the combined methods is their ability to capture extreme events (outliers) in the study regions: in general, C_EQM captures more wet extremes, while C_RCP captures more dry extremes, in both climates and in the three regions studied. The results show that these techniques have promising potential for operational use in weather and climate prediction centres.
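The ensemble step described above, combining regional model outputs through principal-component regression against observations, can be sketched as follows; the series are synthetic stand-ins for the CLARIS-LPB model outputs, and this plain PCR pipeline is only an approximation of the C_RCP technique used in the thesis:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

# Synthetic stand-ins: monthly precipitation from several RCM and the observations.
rng = np.random.default_rng(5)
months, n_models = 360, 6
truth = 120 + 60 * np.sin(2 * np.pi * np.arange(months) / 12)
models = truth[:, None] + rng.normal(0, 25, (months, n_models)) + rng.normal(0, 15, n_models)

# Principal-component regression: regress the observations on the leading PCs of the
# model outputs, then use the fitted combination as the ensemble prediction.
pcr = make_pipeline(PCA(n_components=3), LinearRegression()).fit(models, truth)
ensemble = pcr.predict(models)

print("RMSE single model :", np.sqrt(np.mean((models[:, 0] - truth) ** 2)).round(1))
print("RMSE PC-regression:", np.sqrt(np.mean((ensemble - truth) ** 2)).round(1))
```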
