  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Variabilidade espacial do solo em sistema plantio direto estabilizado / Spatial variability of soil in stabilized direct planting system

Sara de Jesus Duarte 10 April 2015
The homogeneity of soil under no-tillage is a debated question: some authors have claimed that soil homogeneity increases over time, while others have found it to decrease. The hypothesis of this work is that, in a consolidated no-tillage system, the physical-hydric soil attributes and the vegetative development of soybean show spatial correlation and dependence, with collocated cokriging being the interpolator that best represents these correlations. The objective was to evaluate the spatial variability of soil physical attributes and soybean vegetative development in a no-tillage system adopted for more than 19 years. The study was carried out at the school farm of the Universidade Estadual de Ponta Grossa, Paraná. The area was cultivated with soybean on a slope with a maximum gradient of approximately 10% in the downhill direction. Physical and hydric soil attributes were evaluated: soil bulk density (Ds), texture (sand and clay content) and saturated hydraulic conductivity (Kfs). Plant attributes were also evaluated: plant height, reproductive stage and stand. For these assessments, a 10 x 10 m grid was laid out and measurements were taken at each grid point. Data were analysed geostatistically using the GEOSTAT software package for all variables showing spatial dependence. Kriging maps were produced and, for all correlated variables, cokriging and collocated cokriging maps as well. Map accuracy was assessed by the smallest variance values and the root mean square error (RMSE). Spatial dependence was found in the study area, with slope being one factor responsible for the variation; the other factor can be attributed to the uniform management adopted in the area. Kfs was directly and positively correlated with sand content and negatively with clay content. The attribute that positively influenced plant development was Kfs, while bulk density (Ds) influenced it negatively. Among the estimation methods, collocated cokriging produced the map most representative of the real conditions for most of the variables studied. Only for the clay x sand correlation was there no gain from collocated cokriging, so ordinary cokriging was the most suitable there.
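The kriging workflow this abstract describes can be sketched in a few lines. This is a minimal ordinary-kriging illustration, not the GEOSTAT package used in the thesis; the sample coordinates, attribute values, and covariance parameters (sill, range, nugget) are all invented for the example, and the leave-one-out RMSE stands in for the map-accuracy measure mentioned above.

```python
import numpy as np

# Hypothetical soil samples: (x, y) coordinates in metres and a synthetic
# attribute standing in for, e.g., saturated hydraulic conductivity (Kfs).
rng = np.random.default_rng(42)
pts = rng.uniform(0.0, 100.0, size=(30, 2))
vals = np.sin(pts[:, 0] / 20.0) + 0.1 * rng.standard_normal(30)

def exp_cov(h, sill=1.0, corr_range=30.0, nugget=0.05):
    """Exponential covariance model (assumed, not fitted, variogram parameters)."""
    return np.where(h == 0, nugget + sill, sill * np.exp(-h / corr_range))

def ordinary_krige(pts, vals, x0):
    """Ordinary kriging estimate and kriging variance at location x0."""
    n = len(pts)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = exp_cov(d)
    A[n, n] = 0.0                      # Lagrange-multiplier row/column
    b = np.ones(n + 1)
    b[:n] = exp_cov(np.linalg.norm(pts - x0, axis=1))
    w = np.linalg.solve(A, b)          # weights plus Lagrange multiplier
    return w[:n] @ vals, float(exp_cov(np.array(0.0)) - w @ b)

def loo_rmse(pts, vals):
    """Leave-one-out RMSE: one way to score map accuracy, as in the abstract."""
    errs = []
    for i in range(len(pts)):
        mask = np.arange(len(pts)) != i
        est, _ = ordinary_krige(pts[mask], vals[mask], pts[i])
        errs.append(est - vals[i])
    return float(np.sqrt(np.mean(np.square(errs))))
```

Comparing the RMSE of kriging, cokriging, and collocated cokriging maps built this way is the kind of accuracy contest the study ran; only the univariate kriging step is sketched here.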
12

Multifraktální povaha finančních trhů a její vztah k tržní efektivnosti / Multifractal nature of financial markets and its relationship to market efficiency

Jeřábek, Jakub January 2009
The thesis examines the relationship between persistence in financial market returns and market efficiency. It interprets the efficient markets hypothesis and presents various time series models for the analysis of financial markets. The concept of long memory is presented in detail, and the two main types of long-memory estimators are analysed: time-domain and frequency-domain methods. A Monte Carlo study is used to compare these methods, and selected estimators are then applied to real-world data: exchange rate and stock market series. There is no evidence of long memory in the returns, but the stock market volatilities show clear signs of persistence.
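The time-domain side of such a comparison can be illustrated with the classical rescaled-range (R/S) estimator of the Hurst exponent, where H ≈ 0.5 indicates no long memory and H > 0.5 indicates persistence. This is a sketch of one textbook estimator, not the thesis's full estimator set:

```python
import numpy as np

def rescaled_range(x):
    """R/S statistic of one segment: range of the mean-adjusted cumulative
    sum divided by the segment's standard deviation."""
    y = np.cumsum(x - x.mean())
    return (y.max() - y.min()) / x.std(ddof=1)

def hurst_rs(x, min_chunk=16):
    """Estimate the Hurst exponent H as the slope of log E[R/S] vs log n,
    averaging R/S over non-overlapping segments at each segment length."""
    n = len(x)
    sizes, rs_vals = [], []
    size = min_chunk
    while size <= n // 2:
        chunks = [x[i:i + size] for i in range(0, n - size + 1, size)]
        rs_vals.append(np.mean([rescaled_range(c) for c in chunks]))
        sizes.append(size)
        size *= 2
    slope, _ = np.polyfit(np.log(sizes), np.log(rs_vals), 1)
    return float(slope)
```

Note that the classical R/S statistic is biased upward for short segments, which is one reason the thesis compares several time-domain and frequency-domain estimators by Monte Carlo rather than relying on a single one.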
13

Impact of Design Features for Cross-Classified Logistic Models When the Cross-Classification Structure Is Ignored

Ren, Weijia 16 December 2011
No description available.
14

Confirmatory factor analysis with ordinal data : effects of model misspecification and indicator nonnormality on two weighted least squares estimators

Vaughan, Phillip Wingate 22 October 2009
Full weighted least squares (full WLS) and robust weighted least squares (robust WLS) are currently the two primary estimation methods designed for structural equation modeling with ordinal observed variables. These methods assume that continuous latent variables were coarsely categorized by the measurement process to yield the observed ordinal variables, and that the model proposed by the researcher pertains to these latent variables rather than to their ordinal manifestations. Previous research has strongly suggested that robust WLS is superior to full WLS when models are correctly specified. Given the realities of applied research, it was critical to examine these methods with misspecified models. This Monte Carlo simulation study examined the performance of full and robust WLS for two-factor, eight-indicator confirmatory factor analytic models that were either correctly specified, overspecified, or misspecified in one of two ways. Seven conditions of five-category indicator distribution shape at four sample sizes were simulated. These design factors were completely crossed for a total of 224 cells. Previous findings of the relative superiority of robust WLS with correctly specified models were replicated, and robust WLS was also found to perform better than full WLS given overspecification or misspecification. Robust WLS parameter estimates were usually more accurate for correct and overspecified models, especially at the smaller sample sizes. In the face of misspecification, full WLS better approximated the correct loading values whereas robust estimates better approximated the correct factor correlation. Robust WLS chi-square values discriminated between correct and misspecified models much better than full WLS values at the two smaller sample sizes. For all four model specifications, robust parameter estimates usually showed lower variability and robust standard errors usually showed lower bias. 
These findings suggest that robust WLS should likely remain the estimator of choice for applied researchers. Additionally, highly leptokurtic distributions should be avoided when possible. It should also be noted that robust WLS performance was arguably adequate at the sample size of 100 when the indicators were not highly leptokurtic.
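The data-generating idea both estimators share, a continuous latent response coarsely categorized into a five-category ordinal indicator, can be illustrated directly. The loading, thresholds, and sample size below are invented illustration values, not the study's design:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 2000
loading = 0.7                                   # assumed factor loading
eta = rng.standard_normal(n)                    # latent common factor
# Latent continuous response with unit variance:
y_star = loading * eta + np.sqrt(1 - loading**2) * rng.standard_normal(n)
thresholds = np.array([-1.5, -0.5, 0.5, 1.5])   # assumed category cut points
y = np.digitize(y_star, thresholds)             # observed 5-category indicator

# Categorization attenuates the observed Pearson correlation relative to the
# latent one -- which is why WLS methods start from polychoric correlations
# instead of treating the ordinal codes as continuous scores.
r_latent = np.corrcoef(eta, y_star)[0, 1]
r_ordinal = np.corrcoef(eta, y)[0, 1]
```

The "highly leptokurtic" conditions in the study correspond to threshold placements that pile most observations into one or two extreme categories; shifting `thresholds` off-center in this sketch reproduces that qualitatively.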
15

Modelos INAR e RCINAR, estimação e aplicação / INAR and RCINAR models, estimation and application

Lima, Tiago de Almeida Cerqueira 07 May 2013
In this work we first present a model for a stationary sequence of integer-valued random variables (a counting process), the integer-valued autoregressive process of order p (INAR(p)). We then present an extension of this process, the random coefficient integer-valued autoregressive process (RCINAR(p)). For both models we present their properties as well as different methods for estimating their parameters. Simulation results and a comparison of the estimators are reported. Finally, the models are applied to two real data sets: the monthly number of companies filing for bankruptcy, and the monthly number of enquiries at a credit bureau.
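The p = 1 case of the model described here replaces the multiplication of an ordinary AR(1) recursion with binomial thinning. A sketch with invented parameters (α = 0.5, Poisson(2) innovations), including the simple Yule-Walker-type moment estimates:

```python
import numpy as np

def simulate_inar1(alpha, lam, n, seed=0):
    """INAR(1): X_t = alpha ∘ X_{t-1} + eps_t, with eps_t ~ Poisson(lam) and
    'alpha ∘ X' the binomial thinning operator Binomial(X, alpha): each of
    the X_{t-1} counts independently survives with probability alpha."""
    rng = np.random.default_rng(seed)
    x = np.empty(n, dtype=np.int64)
    x[0] = rng.poisson(lam / (1.0 - alpha))     # start near the stationary mean
    for t in range(1, n):
        x[t] = rng.binomial(x[t - 1], alpha) + rng.poisson(lam)
    return x

x = simulate_inar1(alpha=0.5, lam=2.0, n=20000)

# Moment (Yule-Walker-type) estimates: the lag-1 autocorrelation estimates
# alpha, and the stationary mean lam / (1 - alpha) recovers lam.
alpha_hat = np.corrcoef(x[:-1], x[1:])[0, 1]
lam_hat = x.mean() * (1.0 - alpha_hat)
```

Moment estimation is only one of the methods compared in the thesis (alongside, e.g., likelihood-based approaches); it is shown here because it follows in two lines from the simulated series.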
16

Parameter estimation methods based on binary observations - Application to Micro-Electromechanical Systems (MEMS) / Estimation des paramètres d'un système à partir de données fortement quantifiées, application aux MEMS

Jafaridinani, Kian 09 July 2012
As the characteristic dimensions of electronic systems scale down to the micro- or nano-world, their performance remains highly sensitive to external factors. Dispersion is usually caused by variation in the micro-fabrication process and in operating conditions such as temperature, humidity and pressure. It is therefore essential to co-integrate self-test or self-adjustment routines in these microdevices. Most existing system parameter estimation methods rely on high-resolution digital measurements of the system output; implementing them requires long design times and large silicon areas, which increases the cost of the micro-fabricated devices. Parameter estimation from binary observations offers an alternative identification approach, requiring only a 1-bit Analog-to-Digital Converter (ADC) and a 1-bit Digital-to-Analog Converter (DAC). In this thesis, we propose a novel recursive identification method for the problem of system parameter estimation from binary observations. An online identification algorithm with low storage requirements and small computational complexity is derived. We prove the asymptotic convergence of this method under certain assumptions, and show by Monte Carlo simulations that these assumptions do not necessarily have to hold in practice for the method to perform well. Furthermore, we present the first experimental application of this method, dedicated to the self-test of integrated micro-electro-mechanical systems (MEMS). The proposed online Built-In Self-Test method is very amenable to integration for the self-testing of systems relying on resistive sensors and actuators, because it requires little memory and only a 1-bit ADC and a 1-bit DAC, which can easily be implemented in a small silicon area with minimal energy consumption.
18

Mobile Services Based Traffic Modeling

Strengbom, Kristoffer January 2015
Traditionally, communication systems have been dominated by voice applications. Today, with the emergence of smartphones, focus has shifted towards packet-switched networks. The Internet provides a wide variety of services such as video streaming, web browsing and e-mail, and IP traffic models are needed in all stages of product development, from early research to system tests. In this thesis, we propose a multi-level model of IP traffic in which the user behavior and the actual IP traffic generated by different services are treated as two independent random processes. The model is based on observations of IP packet header logs from live networks, so it can be updated to reflect the ever-changing service and end-user equipment usage. The work can thus be divided into two parts. The first part is concerned with modeling the traffic from different services. A subscriber is interested in enjoying the services provided on the Internet, and traffic modeling should reflect the characteristics of these services. An underlying assumption is that different services generate their own characteristic patterns of data. The FFT is used to analyze the packet traces. We show that the traces contain strong periodicities and that some services are more or less deterministic. For some services this strong frequency content is due to the characteristics of the cellular network, and for others it is actually a programmed behavior of the service. The periodicities indicate that there are strong correlations between individual packets or bursts of packets. The second part is concerned with the user behavior, i.e. how users access the different services over time. We propose a model based on a Markov renewal process and estimate the model parameters. In order to evaluate the model we compare it to two simpler models, using the model's ability to predict future observations as the selection criterion. 
We show that the proposed Markov renewal model is the best of the three models in this sense. The model selection framework can be used to evaluate future models.
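A Markov renewal process of the kind proposed for user behavior pairs a jump chain over service states with a random holding time drawn at each visit. A minimal simulation sketch; the service names, transition matrix, holding-time distribution, and mean durations are all invented for illustration:

```python
import numpy as np

services = ["web", "video", "email"]        # hypothetical service states
P = np.array([[0.0, 0.6, 0.4],              # assumed jump-chain transitions
              [0.5, 0.0, 0.5],              # (zero diagonal: each jump
              [0.7, 0.3, 0.0]])             #  switches to another service)
mean_hold = np.array([5.0, 30.0, 2.0])      # assumed mean holding times (s)

def simulate_markov_renewal(n_jumps, seed=0):
    """Markov renewal process: a sequence of (service, holding time) pairs.
    Holding times are exponential here purely for illustration; the model
    class allows a different duration law per state."""
    rng = np.random.default_rng(seed)
    state, path = 0, []
    for _ in range(n_jumps):
        path.append((services[state], rng.exponential(mean_hold[state])))
        state = rng.choice(len(services), p=P[state])
    return path

session = simulate_markov_renewal(1000)
```

Fitting such a model amounts to estimating `P` from observed transition counts and the holding-time parameters per state, which is the estimation step the abstract describes before the predictive model comparison.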
19

L'estimation de l'économie souterraine : les apports d'une modélisation floue / The estimate of the underground economy : the contributions of the fuzzy modeling

Tahmasebi, Mostafa 28 May 2015
Much attention has been paid in recent years to the study of the underground economy in many developed and developing countries. The consequences and policy implications of this ambiguous part of the economy have raised concerns among economists and governments, leading to various proposed measures and estimation methods. It is not easy, however, to estimate the size and trend of the underground economy accurately and precisely because of its hidden nature. Nevertheless, some techniques have been used by economists to estimate its size directly or indirectly. In this thesis, we focus on the underground economy as a universal phenomenon whose unique and inescapable manifestation takes the form of both legal and illegal activities. The main purpose of the thesis is to propose fuzzy methods to measure its size. We first construct a conceptual framework that allows us to study the specific features of the underground economy. The common estimation methods for the underground economy, together with their weaknesses and strengths, are then reviewed. The basic conditions for applying fuzzy concepts and the initial conditions of the underground economy are investigated to see whether they match. Since fuzzy modeling allows rapid modeling even with imprecise and incomplete data, lets us model non-linear functions of arbitrary complexity, and achieves simplicity and flexibility, we were encouraged to apply it. Three fuzzy methods are proposed to estimate the underground economy on the basis of initial investigations over the period 1985-2010: fuzzy modeling using the mean and standard deviation, fuzzy modeling using fuzzy clustering, and a Multiple Indicators and Multiple Causes (structural equation) model with fuzzy data. Finally, the size of the underground economy is measured for France, Germany, Italy, the USA and Canada, and the results of these fuzzy methods are compared with those of other conventional methods. It can be claimed that the methods proposed in this work are qualitatively comparable to the common methods used to estimate the underground economy.
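Of the three approaches named, the fuzzy-clustering one is the easiest to sketch: fuzzy c-means assigns every observation a graded membership in each cluster instead of a hard label. A minimal implementation; the two-dimensional synthetic data stand in for country-year indicator observations and are invented for the example:

```python
import numpy as np

def fuzzy_cmeans(X, c=2, m=2.0, n_iter=100, seed=0):
    """Minimal fuzzy c-means: returns cluster centers and a membership
    matrix U whose rows sum to 1, with fuzzifier m > 1 controlling how
    soft the assignments are."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))   # random initial memberships
    for _ in range(n_iter):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]   # weighted means
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        d = np.maximum(d, 1e-12)                       # guard division by zero
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)       # membership update
    return centers, U

# Two invented, well-separated groups of observations:
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0.0, 0.3, (50, 2)), rng.normal(3.0, 0.3, (50, 2))])
centers, U = fuzzy_cmeans(X)
```

The graded memberships are what make the method attractive for the imprecise, incomplete data the abstract emphasizes: an economy that is only partly "underground" need not be forced into a crisp category.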
20

Evaluation of the Catchment Parameter (CAPA) and Midgley and Pitman (MIPI) empirical design flood estimation methods

Smal, Ruan 12 1900
Thesis (MScEng)--Stellenbosch University, 2012. / The devastating effects floods have at both the social and economic level make effective flood risk management an essential part of rural and urban development. A major part of effective flood risk management is the application of reliable design flood estimation methods. Research over the years has illustrated that current design flood estimation methods, as a norm, show large discrepancies, which can mainly be attributed to the fact that these methods are outdated (Smithers, 2007). The research presented here focused on the evaluation and updating of the Midgley and Pitman (MIPI) and Catchment Parameter (CAPA or McPherson) empirical design flood estimation methods. The evaluation compared the design floods estimated by each method with more reliable probabilistic design floods derived from historical flow records. Flow gauging stations were selected as drainage data points based on the availability of flow data and catchment characteristics; a selection criterion was developed, resulting in 53 gauging stations. The Log Normal (LN) and Log Pearson Type III (LP III) distributions were used to derive the probabilistic floods for each gauging station. The gauging stations were used to delineate catchments and to quantify catchment characteristics using Geographic Information Systems (GIS) software and its associated applications. The two methods were approximated by means of derived formulae rather than evaluated and updated from first principles; this was done because of constraints of both time and access to the relevant literature. The formulae were derived by plotting method inputs and results as graphs, fitting a trendline through the points, and deriving the formula that best describes the trendline. The derived formulae and the catchment characteristics were then used to estimate the design floods for each method. A comparison was then made between the design flood results of the two methods and the probabilistic design floods. The results of these comparisons were used to derive correction factors which could potentially increase the reliability of the two methods. The effectiveness of any updating would be the degree to which the reliability of a method could be increased. It was shown that the correction factors did decrease the difference between the assumed, more reliable probabilistic design floods and the methods' estimates. However, the increase in reliability achieved through the recommended correction factors is questionable, owing to factors such as the reliability of the flow data and the methods that had to be used to derive the correction factors.
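The probabilistic benchmark used in the study, fitting a distribution to an annual-maximum flow record and reading off the quantile for a return period T, can be sketched for the Log Normal case with the standard library alone. The 40-year record below is synthetic, not one of the 53 gauging stations:

```python
import math
import random
from statistics import NormalDist, fmean, stdev

def ln_design_flood(annual_maxima, T):
    """Log Normal design flood for return period T years:
    Q_T = exp(mu + z * sigma), with mu, sigma fitted to the log-flows and
    z the standard normal quantile at non-exceedance probability 1 - 1/T."""
    logs = [math.log(q) for q in annual_maxima]
    mu, sigma = fmean(logs), stdev(logs)
    z = NormalDist().inv_cdf(1.0 - 1.0 / T)
    return math.exp(mu + z * sigma)

# Synthetic 40-year annual-maximum record (m^3/s), roughly lognormal:
random.seed(0)
record = [math.exp(random.gauss(5.0, 0.5)) for _ in range(40)]
q10 = ln_design_flood(record, 10)    # 10-year design flood
q100 = ln_design_flood(record, 100)  # 100-year design flood
```

The Log Pearson Type III fit used alongside LN in the study follows the same pattern but adds a skew coefficient to the frequency factor; it is omitted here for brevity.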
