Estimação não-paramétrica e semi-paramétrica de fronteiras de produção / Nonparametric and semiparametric estimation of production frontiers

Torrent, Hudson da Silva January 2010 (has links)
There exists a large and growing literature on the specification and estimation of production frontiers and, therefore, of the efficiency of production units. In this thesis we focus on deterministic production frontier models, which are based on the assumption that all observed data lie in the technological set. Among the existing statistical models and estimators for deterministic frontiers, a promising approach is that of Martins-Filho and Yao (2007). They propose an estimation procedure that consists of three stages. Their estimator is fairly easy to implement, as it involves standard nonparametric procedures. In addition, it has a number of desirable characteristics vis-à-vis traditional deterministic frontier estimators such as DEA and FDH. In this thesis we propose three papers that improve the model proposed in Martins-Filho and Yao (2007). In the first paper we improve their estimation procedure by adopting a variant of the local exponential smoothing proposed in Ziegelmann (2002). Our estimator is shown to be consistent and asymptotically normal. In addition, due to local exponential smoothing, potentially negative estimates of the conditional variance function, which may hinder the use of Martins-Filho and Yao's estimator, are avoided. In the second paper we propose a novel method for estimating production frontiers in only two stages. We show that the second stage of Martins-Filho and Yao's procedure, as well as that of our first paper, can be eliminated; in both cases, estimation of the same frontier model otherwise requires three stages, differing only in the version of the second stage. We study asymptotic properties, showing consistency and asymptotic normality of our proposed estimator under standard assumptions. In the third paper we propose a semiparametric variation of the frontier model studied in the second paper. We rewrite that model so that the production frontier and the efficiency of production units can be estimated in a multiple-input context without suffering the curse of dimensionality. Our approach places the model within the framework of additive models, based on assumptions about the way inputs combine in production. In particular, we consider the cases of additive and multiplicative inputs, which are widely considered in economic theory and applications. Monte Carlo studies are performed in all papers to shed light on the finite-sample properties of the proposed estimators. Furthermore, a real-data study is carried out in all papers, in which we rank the efficiency of a sample of US law enforcement agencies using US crime data.
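To make the three-stage idea concrete, the following Python sketch runs the logic on simulated data. It is an illustration, not the thesis's estimator: the smoothing of log squared residuals merely stands in for Ziegelmann's local exponential variance estimator (both guarantee a positive variance estimate), the frontier shift uses a crude residual envelope in place of the authors' third-stage correction, and all bandwidths and parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def kernel_smooth(x0, x, t, h):
    """Nadaraya-Watson estimate of E[t | X = x0] with a Gaussian kernel."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    return np.dot(w, t) / w.sum()

# Simulated data: frontier f(x) = sqrt(x); output lies below it (u >= 0).
n = 300
x = rng.uniform(1.0, 4.0, n)
u = rng.exponential(0.3, n)                  # one-sided inefficiency term
y = np.sqrt(x) * np.exp(-u)

h = 0.4                                      # bandwidth; cross-validate in practice

# Stage 1: conditional mean of output given input.
m_hat = np.array([kernel_smooth(xi, x, y, h) for xi in x])

# Stage 2: conditional variance through an exponential form, so the
# estimate exp(.) is positive by construction (the role played by the
# local exponential estimator in the thesis).
log_r2 = np.log((y - m_hat) ** 2 + 1e-12)
sigma2 = np.exp(np.array([kernel_smooth(xi, x, log_r2, h) for xi in x]))

# Stage 3: shift the mean upward so it envelops the data (the frontier).
std_resid = (y - m_hat) / np.sqrt(sigma2)
frontier = m_hat + np.sqrt(sigma2) * std_resid.max()

efficiency = y / frontier                    # Farrell-type output efficiency
print(efficiency.min().round(3), efficiency.max().round(3))
```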

[en] COMBINING TO SUCCEED: A NOVEL STRATEGY TO IMPROVE FORECASTS FROM EXPONENTIAL SMOOTHING MODELS / [pt] COMBINANDO PARA TER SUCESSO: UMA NOVA ESTRATÉGIA PARA MELHORAR AS PREVISÕES DE MODELOS DE AMORTECIMENTO EXPONENCIAL

TIAGO MENDES DANTAS 04 February 2019 (has links)
[en] This thesis is set in the context of time series forecasting. Although many approaches have been developed, simple methods such as exponential smoothing usually produce extremely competitive results, often surpassing approaches with a higher level of complexity. Seminal papers in time series forecasting showed that the combination of forecasts has the potential to dramatically reduce the forecast error. Specifically, the combination of forecasts generated by exponential smoothing has been explored in recent papers. Although this can be done in many ways, a specific method called Bagged.BLD.MBB.ETS uses a technique called Bootstrap Aggregating (Bagging) in combination with exponential smoothing methods to generate forecasts, and has been shown to generate more accurate monthly forecasts than all the analyzed benchmarks. The approach was considered the state of the art in the use of Bagging and exponential smoothing until the development of the results obtained in this thesis. This thesis initially validates Bagged.BLD.MBB.ETS on a data set relevant from the point of view of a real application, thus expanding the fields of application of the methodology. Subsequently, relevant drivers of error reduction are identified, and a new methodology combining Bagging, exponential smoothing and clustering is proposed to treat the covariance effect, not previously identified in the method's literature. The proposed approach was tested on different types of time series from three forecasting competitions (M3, CIF 2016 and M4), as well as on simulated data. The empirical results point to a substantial reduction in variance and forecast error.
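A minimal sketch of the bagging mechanism is given below. It is a heavily simplified stand-in for Bagged.BLD.MBB.ETS: the Box-Cox transform, STL decomposition and full ETS model selection of the actual method are replaced by a crude moving-average detrending and simple exponential smoothing, and all settings (block length, ensemble size, smoothing constant) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def ses_forecast(y, alpha=0.3, horizon=12):
    """Simple exponential smoothing: l_t = a*y_t + (1-a)*l_{t-1};
    the h-step-ahead forecast is the last level (flat forecast function)."""
    level = y[0]
    for obs in y[1:]:
        level = alpha * obs + (1 - alpha) * level
    return np.full(horizon, level)

def moving_block_bootstrap(resid, block_len, rng):
    """Resample residuals in contiguous blocks to preserve autocorrelation."""
    n = len(resid)
    n_blocks = int(np.ceil(n / block_len))
    starts = rng.integers(0, n - block_len + 1, n_blocks)
    return np.concatenate([resid[s:s + block_len] for s in starts])[:n]

# Toy monthly series: trend + seasonality + noise.
t = np.arange(120)
y = 10 + 0.05 * t + 2 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.5, 120)

# Crude "decomposition": remainder = series minus a centred moving average.
trend = np.convolve(y, np.ones(13) / 13, mode="same")
remainder = y - trend

B, H = 50, 12
forecasts = np.empty((B, H))
for b in range(B):
    y_boot = trend + moving_block_bootstrap(remainder, block_len=24, rng=rng)
    forecasts[b] = ses_forecast(y_boot, alpha=0.3, horizon=H)

bagged = forecasts.mean(axis=0)   # aggregate the bootstrap ensemble
print(bagged.round(2))
```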

Distribuição exponencial generalizada: uma análise bayesiana aplicada a dados de câncer / Generalized exponential distribution: a Bayesian analysis applied to cancer data

Juliana Boleta 19 December 2012 (has links)
Survival analysis methods have been extensively used by health researchers. In this work we propose the use of a recently studied survival model based on the generalized exponential distribution. This distribution is studied in all respects: for complete and censored data, in the presence of covariates, and considering its extension to a multivariate model derived from a copula function. To exemplify the use of these models, we consider real cancer lifetime data (acute myeloid leukemia and gastric cancer) in the presence of censoring and covariates. The gastric cancer data have two survival responses, one for the overall survival time of the patient and another for the event-free survival time, that is, multivariate data associated with each patient; these were used for the multivariate model. A comparative study with standard lifetime distributions, such as the Weibull and gamma distributions, is also presented. For the Bayesian analysis we assume different prior distributions for the parameters of the model. To simulate samples from the joint posterior distribution of interest, we use standard MCMC (Markov Chain Monte Carlo) methods and the WinBUGS software.
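The generalized exponential distribution has cdf F(x) = (1 − e^(−λx))^α for x > 0, so a random-walk Metropolis sampler for its two parameters is short to write. The sketch below covers only complete (uncensored) data with assumed vague Gamma(0.01, 0.01) priors, in the spirit of WinBUGS defaults; the thesis additionally handles censoring, covariates and the copula extension.

```python
import numpy as np

rng = np.random.default_rng(2)

def ge_logpdf(x, alpha, lam):
    """log-density of the generalized exponential distribution:
    f(x) = alpha*lam*exp(-lam*x)*(1 - exp(-lam*x))**(alpha - 1), x > 0."""
    return (np.log(alpha) + np.log(lam) - lam * x
            + (alpha - 1) * np.log1p(-np.exp(-lam * x)))

def log_post(theta, x):
    alpha, lam = theta
    if alpha <= 0 or lam <= 0:
        return -np.inf
    # Gamma(0.01, 0.01) priors on both parameters (up to a constant).
    log_prior = ((0.01 - 1) * np.log(alpha) - 0.01 * alpha
                 + (0.01 - 1) * np.log(lam) - 0.01 * lam)
    return ge_logpdf(x, alpha, lam).sum() + log_prior

x = rng.gamma(2.0, 1.5, 200)           # stand-in for lifetime data

theta = np.array([1.0, 1.0])
cur = log_post(theta, x)
samples = []
for _ in range(20000):                  # random-walk Metropolis
    prop = theta + rng.normal(0, 0.05, 2)
    lp = log_post(prop, x)
    if np.log(rng.uniform()) < lp - cur:
        theta, cur = prop, lp
    samples.append(theta.copy())

post = np.array(samples[5000:])         # discard burn-in
print(post.mean(axis=0))                # posterior means of (alpha, lambda)
```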

Planejamento probabilístico sensível a risco com ILAO* e função utilidade exponencial / Probabilistic risk-sensitive planning with ILAO* and exponential utility function

Freitas, Elthon Manhas de 18 October 2018 (has links)
Markov Decision Processes (MDPs) have been used to solve sequential decision-making problems. In some problems, dealing with environmental risk in order to obtain a reliable result is more important than maximizing the expected average return. MDPs that deal with this type of problem are called Risk-Sensitive Markov Decision Processes (RSMDPs). Among the several variations of RSMDPs are approaches based on exponential utility, which use a risk factor to model the agent's risk attitude, either risk-prone or risk-averse. The algorithms in the literature for solving this type of RSMDP are inefficient compared to other MDP algorithms. In this project, a solution is presented that can be used on larger problems, both by performing computations only on the states relevant to reaching a set of goal states from an initial state, and by allowing the processing of numbers whose exponents are too large for current computational environments. The experiments show that (i) the proposed algorithm is more efficient than state-of-the-art algorithms for RSMDPs; and (ii) the LogSumExp technique solves the problem of working with very large exponents in RSMDPs.
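The LogSumExp device mentioned in point (ii) is easy to isolate. The sketch below applies it inside a plain exponential-utility value iteration on a hypothetical three-state problem; it is not ILAO* (which would restrict backups to states reachable from the initial state), and the transition and cost numbers are made up.

```python
import numpy as np

def logsumexp(v, w):
    """Stable log(sum(w * exp(v))): shift by the max so exp never overflows."""
    m = np.max(v)
    return m + np.log(np.sum(w * np.exp(v - m)))

def backup(v, P, cost, gamma):
    """Exponential-utility Bellman backup,
    V(s) = min_a (1/gamma) * log E[exp(gamma * (c(s,a) + V(s')))],
    computed in log space so large exponents never materialise."""
    n_a, n_s = cost.shape
    out = np.empty(n_s)
    for s in range(n_s):
        q = [logsumexp(gamma * (cost[a, s] + v), P[a, s]) / gamma
             for a in range(n_a)]
        out[s] = min(q)
    return out

# Hypothetical 3-state problem; state 2 is an absorbing zero-cost goal.
P = np.array([[[0.6, 0.2, 0.2], [0.1, 0.5, 0.4], [0.0, 0.0, 1.0]],
              [[0.3, 0.3, 0.4], [0.2, 0.2, 0.6], [0.0, 0.0, 1.0]]])
cost = np.array([[1.0, 1.0, 0.0],
                 [2.0, 1.5, 0.0]])

v = np.zeros(3)
for _ in range(200):
    v = backup(v, P, cost, gamma=0.1)   # gamma > 0 encodes risk aversion
print(v.round(3))                        # risk-adjusted costs-to-go
```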

Modelagem de dados contínuos censurados, inflacionados de zeros / Modeling censored, zero-inflated continuous data

Janeiro, Vanderly 16 July 2010 (has links)
Much equipment used to quantify substances, such as toxins in foods, is unable to measure low amounts. In cases where the substance exists but in an amount below a small fixed value ξ, the equipment usually indicates that the substance is not present, producing values equal to zero that are not necessarily true zeros. In cases where the quantity is between ξ and a known threshold value τ, it detects the presence of the substance but is unable to measure the amount. Amounts above the threshold value τ are measured continuously, giving rise to a continuous random variable X whose domain can be written as the union of the intervals [0, ξ), [ξ, τ] and (τ, ∞); an excess of zero values is common. In this work, models are proposed that can discriminate the probability of true zeros, such as a two-component mixture model with one component degenerate at zero and the other continuous, where the exponential, Weibull and gamma distributions are considered. For each model, its characteristics were examined, procedures for estimating its parameters were proposed, and its fit was evaluated by simulation. Finally, the methodology was illustrated by modeling measurements of contamination with aflatoxin B1 in corn grains from three sub-samples of a batch of corn, analyzed at the Laboratory of Mycotoxins of the Department of Agribusiness, Food and Nutrition, ESALQ/USP. In most cases, the simulations indicated that the proposed methods are efficient for estimating the model parameters, particularly the parameter δ and the expected value E(Y). The modeling of the aflatoxin measurements, in turn, showed that the proposed models are appropriate for the real data, with the Weibull mixture model fitting the data best.
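A sketch of the likelihood behind such a model, for the exponential component only and with known limits ξ and τ, is given below; the variable names and simulation settings are assumptions, and the Weibull and gamma variants follow the same pattern with a different density.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import expon

XI, TAU = 0.5, 2.0          # detection and quantification limits (known)

def neg_loglik(params, y):
    """Mixture: with prob delta the true value is zero; otherwise Y ~ Exp(rate).
    Readings below XI are reported as 0, readings in [XI, TAU] are
    interval-censored, and readings above TAU are observed exactly."""
    delta, rate = params
    if not (0 < delta < 1) or rate <= 0:
        return np.inf
    F = lambda t: expon.cdf(t, scale=1.0 / rate)
    zero = y == 0.0
    mid = (y > 0.0) & (y <= TAU)           # coded as interval-censored
    exact = y > TAU
    ll = (np.sum(zero) * np.log(delta + (1 - delta) * F(XI))
          + np.sum(mid) * np.log((1 - delta) * (F(TAU) - F(XI)))
          + np.sum(np.log((1 - delta)
                          * expon.pdf(y[exact], scale=1.0 / rate))))
    return -ll

# Simulate: 30% true zeros, exponential amounts otherwise; then censor.
rng = np.random.default_rng(4)
true = np.where(rng.uniform(size=500) < 0.3, 0.0, rng.exponential(1.0, 500))
y = np.where(true < XI, 0.0, np.where(true <= TAU, TAU, true))

fit = minimize(neg_loglik, x0=[0.5, 0.5], args=(y,), method="Nelder-Mead")
print(fit.x)   # estimates of (delta, rate)
```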

Generalizing Multistage Partition Procedures for Two-parameter Exponential Populations

Wang, Rui 06 August 2018 (has links)
ANOVA is a classic tool for multiple comparisons and has been widely used in numerous disciplines due to its simplicity and convenience. The ANOVA procedure is designed to test whether a number of populations are all equal, and is typically followed by multiple comparison tests to rank the populations. However, the ANOVA procedure cannot guarantee that the probability of selecting the best population exceeds a desired prespecified level. This shortcoming was overcome by researchers in the early 1950s by designing experiments with the goal of selecting the best population. In this dissertation, a single-stage procedure is introduced to partition k treatments into "good" and "bad" groups with respect to a control population, assuming some key parameters are known. Next, the proposed partition procedure is generalized to the case where the parameters are unknown, and a purely sequential procedure and a two-stage procedure are derived. Theoretical asymptotic properties, such as first-order and second-order properties, of the proposed procedures are derived to document their efficiency. These theoretical properties are then studied via Monte Carlo simulations to document the performance of the procedures for small and moderate sample sizes.
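As a toy illustration of what a single-stage partition procedure does, the Monte Carlo sketch below classifies five hypothetical two-parameter exponential populations as good or bad relative to a known control location and estimates the correct-partition probability. All values are illustrative; the dissertation's sequential and two-stage procedures instead determine the sample size adaptively when key parameters are unknown.

```python
import numpy as np

rng = np.random.default_rng(5)

def loc_mle(x):
    """For X ~ loc + Exp(scale), the sample minimum is the MLE of loc."""
    return x.min()

true_locs = [0.0, 0.2, 0.5, 0.8, 1.0]   # unknown in practice
control, delta = 0.5, 0.15              # indifference-zone half-width
n, reps = 50, 2000                      # common sample size, MC replications

correct = 0
for _ in range(reps):
    good, bad = set(), set()
    for i, loc in enumerate(true_locs):
        est = loc_mle(loc + rng.exponential(1.0, n))
        (good if est > control else bad).add(i)
    # A partition is "correct" if every clearly good / clearly bad
    # population lands on the right side; the indifference zone is free.
    ok = all(i in good for i, l in enumerate(true_locs) if l >= control + delta)
    ok = ok and all(i in bad for i, l in enumerate(true_locs) if l <= control - delta)
    correct += ok
print(correct / reps)                   # correct-partition probability
```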

High voltage transient protection for automotive

Lindholm, Viktor January 2019 (has links)
Automotive electronics need to handle various situations that can occur on the power line, such as high-voltage transients. ISO 16750 and ISO 7637 describe the pulses and tests a system needs to withstand. This report compares three protection circuits, built for low-power devices, that can output +5 V and +12 V. The circuits use different protection techniques: one uses TVS diodes, another uses a voltage regulator IC with built-in protection, and the last uses P-channel MOSFETs. The circuits are compared on protection, price and leakage current. The most relevant transients to test a system against are determined to be pulse 1, pulse 2a and load dump. A pulse generator consisting of a pulse-shaping network and a common-drain amplifier is used to create the test pulses. The results show that all the circuits could protect against pulse 2a and load dump. However, all the circuits failed against pulse 1 due to an undersized diode for negative-voltage protection. The leakage current did not exceed 4 µA for two of the circuits over the temperature interval -40 °C to +100 °C, but all the circuits started to exhibit high leakage current when the temperature reached +150 °C. The prices of the circuits did not differ much; all cost below 3 US dollars per circuit at a volume of 10,000 units. The conclusion is that all the circuits could protect against pulse 1, pulse 2a and load dump if the correct diode is used for negative-voltage protection. The protection based on P-channel MOSFETs should be the best choice for low-power devices due to its low leakage current and potential for low cost; the disadvantage is the complexity and number of components required. The TVS diodes should be used if low complexity and a low component count are preferred; the disadvantages are that TVS diodes get hot if a load dump is applied and that the interval between stand-off voltage and maximum clamping voltage is quite wide. The study also shows that there are cheaper solutions than using TVS diodes.

Option pricing under the double exponential jump-diffusion model by using the Laplace transform: Application to the Nordic market

Nadratowska, Natalia Beata, Prochna, Damian January 2010 (has links)
In this thesis the double exponential jump-diffusion model is considered and the Laplace transform is used as a method for pricing both plain vanilla and path-dependent options. The evolution of the underlying stock price is assumed to follow a double exponential jump-diffusion model. To invert the Laplace transform, the Euler algorithm is used. The thesis includes the programme code for European options and an application to real data. The results show how the Kou model performs on the NASDAQ OMX Stockholm market in the case of the SEB stock.
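As a rough cross-check of what such a pricer should return, the sketch below prices a European call under the Kou dynamics by plain Monte Carlo rather than by the Laplace-transform/Euler-inversion route the thesis implements; all parameter values are illustrative assumptions, not the SEB calibration.

```python
import numpy as np

rng = np.random.default_rng(6)

# Kou model: log-jumps are upward Exp(eta1) with prob p, downward Exp(eta2)
# with prob 1 - p. All values below are illustrative.
S0, K, r, sigma, T = 100.0, 100.0, 0.03, 0.2, 1.0
lam, p, eta1, eta2 = 1.0, 0.4, 10.0, 5.0

# E[V - 1] for V = exp(jump), needed for the risk-neutral drift correction.
zeta = p * eta1 / (eta1 - 1.0) + (1.0 - p) * eta2 / (eta2 + 1.0) - 1.0

n_paths = 50_000
n_jumps = rng.poisson(lam * T, n_paths)
jump_sum = np.zeros(n_paths)
for i in range(n_paths):                 # sum of double-exponential jump sizes
    u = rng.uniform(size=n_jumps[i])
    sizes = np.where(u < p,
                     rng.exponential(1.0 / eta1, n_jumps[i]),
                     -rng.exponential(1.0 / eta2, n_jumps[i]))
    jump_sum[i] = sizes.sum()

z = rng.normal(size=n_paths)
log_ST = (np.log(S0) + (r - 0.5 * sigma**2 - lam * zeta) * T
          + sigma * np.sqrt(T) * z + jump_sum)
call = np.exp(-r * T) * np.maximum(np.exp(log_ST) - K, 0.0).mean()
print(round(call, 3))   # Monte Carlo European call price
```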

Numerical studies of transition in wall-bounded flows

Levin, Ori January 2005 (has links)
Disturbances introduced in wall-bounded flows can grow and lead to transition from laminar to turbulent flow. In order to reduce losses or enhance mixing in energy systems, a fundamental understanding of the flow stability and transition mechanism is important. In the present thesis, the stability, transition mechanism and early turbulent evolution of wall-bounded flows are studied. The stability is investigated by means of linear stability equations and the transition mechanism and turbulence are studied using direct numerical simulations. Three base flows are considered, the Falkner-Skan boundary layer, boundary layers subjected to wall suction and the Blasius wall jet. The stability with respect to the exponential growth of waves and the algebraic growth of optimal streaks is studied for the Falkner-Skan boundary layer. For the algebraic growth, the optimal initial location, where the optimal disturbance is introduced in the boundary layer, is found to move downstream with decreased pressure gradient. A unified transition prediction method incorporating the influences of pressure gradient and free-stream turbulence is suggested. The algebraic growth of streaks in boundary layers subjected to wall suction is calculated. It is found that the spatial analysis gives larger optimal growth than temporal theory. Furthermore, it is found that the optimal growth is larger if the suction begins a distance downstream of the leading edge. Thresholds for transition of periodic and localized disturbances as well as the spreading of turbulent spots in the asymptotic suction boundary layer are investigated for Reynolds number Re=500, 800 and 1200 based on the displacement thickness and the free-stream velocity. It is found that the threshold amplitude scales like Re^-1.05 for transition initiated by streamwise vortices and random noise, like Re^-1.3 for oblique transition and like Re^-1.5 for the localized disturbance. The turbulent spot is found to take a bullet-shaped form that becomes more distinct and increases its spreading rate for higher Reynolds number. The Blasius wall jet is matched to the measured flow in an experimental wall-jet facility. Both the linear and nonlinear regime of introduced waves and streaks are investigated and compared to measurements. It is demonstrated that the streaks play an important role in the breakdown process where they suppress pairing and enhance breakdown to turbulence. Furthermore, statistics from the early turbulent regime are analyzed and reveal a reasonable self-similar behavior, which is most pronounced with inner scaling in the near-wall region.
