101

Essays on Experimental Economics

Daniel John Woods (11038146) 22 July 2021 (has links)
This thesis contains three chapters, each of which covers a different topic in experimental economics.

The first chapter investigates power and power analysis in economics experiments. Power is the probability of detecting an effect when a true effect exists, which is an important but under-considered concept in empirical research. Power analysis is the process of selecting the number of observations in order to avoid issues with low power. However, it is often not clear ex ante what the required parameters for a power analysis, such as the effect size and standard deviation, should be. This chapter considers the use of Quantal Choice/Response (QR) simulations for ex-ante power analysis, as they map related data sets into predictions for novel environments. QR simulations can also guide optimal design decisions, both ex ante and ex post for conceptual replication studies. The chapter demonstrates QR simulations on a wide variety of applications related to power analysis and experimental design.

The second chapter considers a question of interest to computer scientists and to information technology and security professionals: how do people distribute defenses over a directed network attack graph in which they must defend a critical node? Decision-makers are often subject to behavioral biases that cause them to make sub-optimal defense decisions. Non-linear probability weighting is one bias that may lead to sub-optimal decision-making in this environment. An experimental test provides support for this conjecture, as well as for other empirically important biases such as naive diversification and preferences over the spatial timing of the revelation of an overall successful defense.

The third chapter analyzes how individuals resolve an exploration versus exploitation trade-off in a laboratory experiment. The experiment implements the single-agent exponential bandit model and finds that subjects respond in the predicted direction to changes in the prior belief, safe action, and discount factor. However, subjects also typically explore less than predicted. A structural model that incorporates risk preferences, base rate neglect/conservatism, and non-linear probability weighting explains the empirical findings well.
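To illustrate the Quantal Response simulation approach to power analysis described above, here is a minimal sketch in Python. All numbers are hypothetical placeholders (the payoff pairs and the logit precision parameter `lam_hat` are not taken from the thesis): the idea is simply to map a precision parameter estimated on related data into simulated choice frequencies for a new design, and then read off the detection rate for candidate sample sizes.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def logit_choice_prob(payoff_a, payoff_b, lam):
    """Quantal (logit) response: probability of choosing option A."""
    return 1.0 / (1.0 + np.exp(-lam * (payoff_a - payoff_b)))

def simulated_power(n_per_arm, lam, payoffs_control, payoffs_treatment,
                    n_sims=2000, alpha=0.05):
    """Fraction of simulated experiments that detect a treatment effect."""
    p_c = logit_choice_prob(*payoffs_control, lam)
    p_t = logit_choice_prob(*payoffs_treatment, lam)
    rejections = 0
    for _ in range(n_sims):
        x_c = rng.binomial(n_per_arm, p_c)
        x_t = rng.binomial(n_per_arm, p_t)
        # two-sample proportion z-test on simulated choice frequencies
        p_pool = (x_c + x_t) / (2 * n_per_arm)
        se = np.sqrt(2 * p_pool * (1 - p_pool) / n_per_arm)
        if se > 0:
            z = (x_t / n_per_arm - x_c / n_per_arm) / se
            if 2 * (1 - stats.norm.cdf(abs(z))) < alpha:
                rejections += 1
    return rejections / n_sims

# hypothetical payoffs (A, B) in control and treatment, and a lambda
# assumed to have been calibrated on a related data set
lam_hat = 0.15
for n in (30, 60, 120, 240):
    pw = simulated_power(n, lam_hat, payoffs_control=(10, 8),
                         payoffs_treatment=(10, 5))
    print(f"n per arm = {n:4d}  simulated power = {pw:.2f}")
```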
102

Market Risk: Exponential Weighting in the Value-at-Risk Calculation

Broll, Udo, Förster, Andreas, Siebe, Wilfried 03 September 2020 (has links)
When measuring market risk, credit institutions and Alternative Investment Fund Managers may deviate from equally weighting historical data in their Value-at-Risk calculation and instead use an exponential time series weighting. Exponential weighting in the Value-at-Risk calculation is very popular because it takes changes in market volatility into account immediately, so the VaR adapts quickly. In less volatile market phases, this leads to a reduction in VaR and thus to lower own funds requirements for credit institutions. However, with exponential weighting a high volatility in the past is quickly forgotten and the VaR may be underestimated. To prevent this, credit institutions and Alternative Investment Fund Managers are not completely free to choose a weighting (decay) factor. This article describes the legal requirements and deals with the calculation of the permissible weighting factor. As an example, we use the exchange rate between the Euro and the Polish zloty to estimate the Value-at-Risk. We show the calculation of the weighting factor with two different approaches. This article also discusses exceptions to the general legal requirements.
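As a concrete illustration of the weighting scheme discussed above, the following sketch computes a historical-simulation VaR with exponential observation weights. The decay factor of 0.94 and the simulated return series are assumptions for illustration only; the article's point is precisely that the permissible decay factor is constrained by regulatory requirements rather than freely chosen.

```python
import numpy as np

def exponentially_weighted_var(returns, decay=0.94, confidence=0.99):
    """Historical-simulation VaR with exponential observation weights.

    returns    : array of historical returns, oldest first
    decay      : weighting (decay) factor lambda in (0, 1)
    confidence : VaR confidence level, e.g. 0.99
    """
    returns = np.asarray(returns, dtype=float)
    n = len(returns)
    # the newest observation gets the largest weight; weights sum to 1
    ages = np.arange(n - 1, -1, -1)          # age 0 = newest
    weights = (1 - decay) * decay ** ages
    weights /= weights.sum()

    # sort losses from worst to best and accumulate weights until the
    # tail probability (1 - confidence) is reached
    losses = -returns
    order = np.argsort(losses)[::-1]
    cum = np.cumsum(weights[order])
    idx = np.searchsorted(cum, 1 - confidence)
    return losses[order][idx]

# illustrative use with simulated EUR/PLN-style daily returns
rng = np.random.default_rng(1)
rets = rng.normal(0.0, 0.004, size=500)
print("VaR(99%), near-equal weights:", exponentially_weighted_var(rets, decay=1 - 1e-9))
print("VaR(99%), lambda = 0.94    :", exponentially_weighted_var(rets, decay=0.94))
```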
103

Analogy-based software project effort estimation. Contributions to projects similarity measurement, attribute selection and attribute weighting algorithms for analogy-based effort estimation.

Azzeh, Mohammad Y.A. January 2010 (has links)
Software effort estimation by analogy is a viable alternative to other estimation techniques, and in many cases researchers have found that it outperforms other estimation methods in terms of accuracy and practitioners' acceptance. However, the overall performance of analogy-based estimation depends on two major factors: the similarity measure and attribute selection and weighting. Current similarity measures, such as nearest-neighbour techniques, have been criticized for inadequacies related to attribute relevancy, noise and uncertainty, in addition to the problem of using categorical attributes. This research focuses on improving the efficiency and flexibility of analogy-based estimation to overcome these inadequacies. In particular, this thesis proposes two new approaches to model and handle uncertainty in the similarity measurement method and, most importantly, to reflect the structure of the dataset in similarity measurement using fuzzy modelling based on the Fuzzy C-means algorithm. The first proposed approach, the Fuzzy Grey Relational Analysis method, combines Fuzzy set theory and Grey Relational Analysis to improve local and global similarity measures and to tolerate the imprecision associated with using different data types (continuous and categorical). The second proposed approach uses Fuzzy numbers and their concepts to develop a practical yet efficient approach to support analogy-based systems, especially at the early phase of software development. Specifically, we propose a new similarity measure and adaptation technique based on Fuzzy numbers. We also propose a new attribute subset selection algorithm and attribute weighting technique based on the hypothesis of analogy-based estimation that projects that are similar in terms of attribute values are also similar in terms of effort values, using row-wise Kendall rank correlation between the similarity matrix based on project effort values and the similarity matrix based on project attribute values. A literature review of related software engineering studies revealed that existing attribute selection techniques (such as brute-force and heuristic algorithms) are restricted to the choice of performance indicators (such as the Mean Magnitude of Relative Error and the Prediction Performance Indicator) and are computationally far more intensive. The proposed algorithms provide a sound statistical basis and justification for their procedures. The performance of the proposed approaches has been evaluated using real industrial datasets. Results and conclusions from a series of comparative studies with the conventional estimation-by-analogy approach using the available datasets are presented. The studies also statistically investigated the significant differences between predictions generated by our approaches and those generated by the most popular techniques, such as conventional analogy estimation, neural networks and stepwise regression. The results and conclusions indicate that the two proposed approaches have the potential to deliver comparable, if not better, accuracy than the compared techniques. They also show that Grey Relational Analysis tolerates the uncertainty associated with using different data types. As well as the original contributions within the thesis, a number of directions for further research are presented. Most chapters of this thesis have been disseminated in international journals and highly refereed conference proceedings. / Applied Science University, Jordan.
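A minimal sketch of plain analogy-based estimation with Grey Relational Analysis as the similarity measure may help fix ideas. It shows the standard grey relational grade and a k-nearest-analogue prediction; the thesis's Fuzzy Grey Relational Analysis method and its Kendall-correlation-based attribute weighting go beyond this, and all numbers below are hypothetical.

```python
import numpy as np

def grey_relational_grades(target, projects, zeta=0.5, weights=None):
    """Grey relational grade of each historical project w.r.t. the target.

    target   : 1-D array of the new project's (normalized) attribute values
    projects : 2-D array, one row per historical project
    zeta     : distinguishing coefficient, conventionally 0.5
    """
    projects = np.asarray(projects, dtype=float)
    delta = np.abs(projects - target)              # attribute-wise deviations
    d_min, d_max = delta.min(), delta.max()
    coeff = (d_min + zeta * d_max) / (delta + zeta * d_max)
    if weights is None:
        weights = np.full(projects.shape[1], 1.0 / projects.shape[1])
    return coeff @ weights                         # one grade per project

def analogy_estimate(target, projects, efforts, k=3, zeta=0.5):
    """Effort prediction as the grade-weighted mean of the k closest analogues."""
    grades = grey_relational_grades(target, projects, zeta)
    top = np.argsort(grades)[::-1][:k]
    return np.average(np.asarray(efforts)[top], weights=grades[top])

# toy, already min-max normalized attribute values (hypothetical)
hist = np.array([[0.2, 0.1, 0.8],
                 [0.3, 0.2, 0.7],
                 [0.9, 0.8, 0.1],
                 [0.5, 0.4, 0.5]])
eff = [120.0, 150.0, 900.0, 400.0]
new = np.array([0.25, 0.15, 0.75])
print("estimated effort:", analogy_estimate(new, hist, eff, k=2))
```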
104

Hierarchical Autoassociative Polynomial Network for Deep Learning of Complex Manifolds

Aspiras, Theus Herrera January 2015 (has links)
No description available.
105

Arabic language processing for text classification: contributions to Arabic root extraction techniques, building an Arabic corpus, and to Arabic text classification techniques

Al-Nashashibi, May Yacoub Adib January 2012 (has links)
The impact and dynamics of Internet-based resources for Arabic-speaking users are increasing in significance, depth and breadth at a higher pace than ever, and thus require updated mechanisms for the computational processing of Arabic texts. Arabic is a complex language and as such requires in-depth investigation for the analysis and improvement of available automatic processing techniques, such as root extraction methods or text classification techniques, and for developing text collections that are already labeled, whether with single or multiple labels. This thesis proposes new ideas and methods to improve available automatic processing techniques for Arabic texts. Any automatic processing technique requires data in order to be used, critically reviewed and assessed, and here an attempt to develop a labeled Arabic corpus is also proposed. This thesis is composed of three parts: 1) Arabic corpus development; 2) proposing, improving and implementing root extraction techniques; and 3) proposing and investigating the effect of different pre-processing methods on single-label text classification methods for Arabic. The thesis first develops an Arabic corpus that is prepared to be used here for testing root extraction methods as well as single-label text classification techniques. It also enhances a rule-based root extraction method by handling irregular cases (which appear in about 34% of texts). It proposes and implements two expanded algorithms as well as an adjustment to a weight-based method, incorporates the irregular-case handling algorithm into all of them, and compares the performance of these proposed methods with the original ones. The thesis thus develops a root extraction system that handles foreign Arabized words by constructing a list of about 7,000 foreign words. The technique with the best accuracy in extracting the correct stem and root for the respective words in texts, an enhanced rule-based method, is used in the third part of the thesis. The thesis finally proposes and implements a variant term frequency–inverse document frequency weighting method, and investigates the effect of using different choices of features in document representation (words, stems or roots, as well as these choices extended with their respective phrases) on single-label text classification performance. Forty-seven classifiers are applied to all proposed representations and their performances compared. One challenge for researchers in Arabic text processing is that the root extraction techniques reported in the literature are either not accessible or take a long time to reproduce, while a labeled benchmark Arabic text corpus is not fully available online. In addition, few machine learning techniques have so far been investigated on Arabic with the usual pre-processing steps before classification. Such challenges are addressed in this thesis by developing a new labeled Arabic text corpus for extended applications of computational techniques. The results show that proposing and implementing an algorithm that handles irregular words in Arabic did improve the performance of all implemented root extraction techniques. The performance of the algorithm that handles such irregular cases is evaluated in terms of accuracy improvement and execution time. Its efficiency is investigated for different document lengths and is empirically found to be linear in time for document lengths less than about 8,000.
The rule-based technique improves the most among the implemented root extraction methods when the irregular-case handling algorithm is included. The thesis validates that choosing roots or stems instead of words in document representations indeed improves single-label classification performance significantly for most of the classifiers used. However, extending such representations with their respective phrases shows no significant improvement in single-label text classification performance. Many classifiers, such as the ripple-down rule classifier, had not yet been tested on Arabic. The comparison of the classifiers' performances concludes that the Bayesian network classifier is significantly the best in terms of accuracy, training time, and root mean square error values for all proposed and implemented representations.
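The abstract mentions a variant term frequency–inverse document frequency weighting without giving its formula, so the sketch below shows only the baseline TF-IDF scheme applied to root-level tokens; the transliterated roots and toy documents are hypothetical.

```python
import math
from collections import Counter

def tf_idf(documents):
    """Standard TF-IDF weights for a list of tokenized documents.

    Each document is a list of tokens (words, stems or roots). Returns one
    dict of term -> weight per document.
    """
    n_docs = len(documents)
    doc_freq = Counter()
    for doc in documents:
        doc_freq.update(set(doc))

    weighted = []
    for doc in documents:
        counts = Counter(doc)
        total = len(doc)
        weights = {
            term: (count / total) * math.log(n_docs / doc_freq[term])
            for term, count in counts.items()
        }
        weighted.append(weights)
    return weighted

# toy documents represented by (transliterated) roots rather than surface words
docs = [
    ["ktb", "drs", "ktb"],      # hypothetical root-level representation
    ["drs", "qra"],
    ["ktb", "qra", "qra"],
]
for i, w in enumerate(tf_idf(docs)):
    print(i, {t: round(v, 3) for t, v in w.items()})
```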
106

Synthèse acoustico-visuelle de la parole par sélection d'unités bimodales / Acoustic-Visual Speech Synthesis by Bimodal Unit Selection

Musti, Utpala 21 February 2013 (has links)
This work deals with audio-visual speech synthesis. In the vast literature available in this area, many approaches deal with it by dividing it into two synthesis problems: acoustic speech synthesis and the generation of the corresponding facial animation. This, however, does not guarantee perfectly synchronous and coherent audio-visual speech. To overcome this drawback implicitly, we propose a different approach to acoustic-visual speech synthesis based on the selection of naturally synchronous bimodal units. The synthesis is based on the classical unit-selection paradigm. The main idea behind this synthesis technique is to keep the natural association between the acoustic and visual modalities intact. We describe the audio-visual corpus acquisition technique and the database preparation for our system. We present an overview of our system and detail the various aspects of bimodal unit selection that need to be optimized for good synthesis. The main focus of this work is to synthesize the speech dynamics well rather than a comprehensive talking head. We describe the visual target features that we designed. We subsequently present an algorithm for target feature weighting. This algorithm performs target feature weighting and redundant feature elimination iteratively, based on the comparison of a target-cost-based ranking with a distance calculated from the acoustic and visual speech signals of units in the corpus. Finally, we present the perceptual and subjective evaluation of the final synthesis system. The results show that we have achieved the goal of synthesizing the speech dynamics reasonably well.
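For readers unfamiliar with the unit-selection paradigm, the following sketch shows a generic weighted target cost plus join cost search by dynamic programming. It is not the thesis's system: the visual target features and the iterative weighting algorithm described above are specific to the thesis, and the feature vectors, weights and candidate units below are synthetic.

```python
import numpy as np

def select_units(targets, candidates, target_w, join_w=1.0):
    """Viterbi search over candidate bimodal units.

    targets    : (T, d) array of target feature vectors, one per position
    candidates : list of length T; candidates[t] is an (n_t, d) array of the
                 feature vectors of the units available at position t
    target_w   : (d,) array of target-cost feature weights
    join_w     : scalar weight on the concatenation (join) cost
    """
    T = len(targets)
    # target cost: weighted L1 distance between unit and target features
    tcost = [np.abs(c - targets[t]) @ target_w for t, c in enumerate(candidates)]

    best = [tcost[0]]
    back = [None]
    for t in range(1, T):
        # join cost: discontinuity between consecutive units' features
        jc = join_w * np.linalg.norm(
            candidates[t][None, :, :] - candidates[t - 1][:, None, :], axis=2)
        total = best[t - 1][:, None] + jc + tcost[t][None, :]
        back.append(total.argmin(axis=0))
        best.append(total.min(axis=0))

    # backtrack the cheapest path through the unit lattice
    path = [int(best[-1].argmin())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]

# tiny synthetic example: 3 target positions, 2-D features, random candidates
rng = np.random.default_rng(2)
targets = rng.normal(size=(3, 2))
cands = [rng.normal(size=(4, 2)) for _ in range(3)]
print(select_units(targets, cands, target_w=np.array([1.0, 0.5])))
```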
107

O dano econômico do aquecimento global: uma revisão da metodologia de cálculo e dos parâmetros e procedimentos fundamentais que afetam a sua estimação / The economic damage of global warming: a review of the calculation methodology and of the key parameters and procedures that affect its estimation

Santos, Edi Carlos Martins dos 05 May 2009 (has links)
The economic damages caused by global warming are usually expressed as a percentage loss of gross domestic product, as a share of world output, or as an absolute amount in monetary units. Generally, not much attention is given to the estimation methodology or to the calculation process itself. A more detailed analysis of this process (damage valuation) shows that many factors can increase or decrease the estimated economic damage: externalities that are not included in the calculation, simplifications, extrapolations, per capita income corrections (in the form of equity weighting), and the choice of discount rates. The aim of this work is to review this estimation methodology, pointing out the approach, the sequence, and the key parameters and procedures that influence the total economic damage attributed to global warming.
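Two of the factors listed above, discounting and equity weighting, can be illustrated with a toy calculation. The damage path, discount rates, incomes and elasticity are hypothetical and are not drawn from the dissertation; the sketch only shows why these choices move the estimated damage so strongly.

```python
def discounted_damage(damages, discount_rate):
    """Present value of a stream of annual damages (currency units per year)."""
    return sum(d / (1 + discount_rate) ** t for t, d in enumerate(damages))

def equity_weight(region_income, world_average_income, eta=1.0):
    """Equity weight for a region: marginal-utility ratio with elasticity eta."""
    return (world_average_income / region_income) ** eta

# hypothetical numbers: the same physical damage path valued two ways
damage_path = [10.0] * 50                      # constant damage for 50 years
print("PV at 1% discount:", round(discounted_damage(damage_path, 0.01), 1))
print("PV at 5% discount:", round(discounted_damage(damage_path, 0.05), 1))

# the same 10-unit loss weighs more in a poor region than in a rich one
print("weight, poor region:", equity_weight(2_000, 10_000))
print("weight, rich region:", equity_weight(40_000, 10_000))
```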
108

Filtragem adaptativa de baixa complexidade computacional. / Low-complexity adaptive filtering.

Almeida Neto, Fernando Gonçalves de 20 February 2015 (has links)
In this text, low-cost adaptive filtering techniques are proposed for widely-linear processing and beamforming applications. New reduced-complexity versions of widely-linear adaptive filters are proposed for complex and quaternion processing. The low-cost techniques avoid redundant second-order statistics in the autocorrelation matrix, which is achieved by replacing the original widely-linear data vector with a real vector carrying the same information. Using this approach, many complex-complex (or quaternion-quaternion) operations are replaced by less costly real-complex (or real-quaternion) computations in the algorithms. An analysis in the mean and in the variance is performed for quaternion-based techniques, suitable for any quaternion least-mean-squares (LMS) algorithm. The fastest-converging widely-linear quaternion LMS algorithm with real-valued input is obtained. For complex-valued processing, a low-cost and stable version of the widely-linear recursive least-squares (RLS) algorithm is also developed. The widely-linear RLS technique is modified to apply the dichotomous coordinate descent (DCD) method, which leads to an algorithm with computational complexity linear in the data vector length N (as opposed to the original widely-linear technique, whose complexity is quadratic in N). New complex-valued techniques based on the adaptive re-weighting homotopy algorithm are developed for beamforming. The algorithms are applied to sensor arrays in which the number of interfering sources is smaller than the number of sensors, so that the autocorrelation matrix is ill-conditioned. DCD iterations are applied to further reduce the computational complexity.
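The core cost-reduction idea, replacing the augmented complex data vector by a real vector with the same information, can be sketched as a complex LMS filter driven by the real-valued reorganized input. This is only an illustration of the principle on synthetic data; the thesis's specific algorithms, the quaternion case and the RLS-DCD version are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)

def wl_lms_real_input(x, d, mu=0.05):
    """Complex LMS acting on the real-valued reorganized data vector.

    For widely-linear estimation, w^H x + g^H conj(x) can be rewritten as a
    single complex filter applied to r = [Re(x); Im(x)], so every weight
    update uses real-complex products instead of complex-complex ones.
    """
    n_taps = x.shape[1]
    v = np.zeros(2 * n_taps, dtype=complex)
    errors = []
    for xn, dn in zip(x, d):
        r = np.concatenate([xn.real, xn.imag])   # real-valued input vector
        y = np.vdot(v, r)                        # v^H r (r is real)
        e = dn - y
        v = v + mu * r * np.conj(e)              # real-complex update
        errors.append(abs(e) ** 2)
    return v, np.array(errors)

# synthetic widely-linear system: d = w^H x + g^H conj(x) + noise
N, taps = 2000, 4
x = rng.normal(size=(N, taps)) + 1j * rng.normal(size=(N, taps))
w_true = rng.normal(size=taps) + 1j * rng.normal(size=taps)
g_true = rng.normal(size=taps) + 1j * rng.normal(size=taps)
d = x @ np.conj(w_true) + np.conj(x) @ np.conj(g_true) \
    + 0.01 * (rng.normal(size=N) + 1j * rng.normal(size=N))
v, err = wl_lms_real_input(x, d, mu=0.01)
print("steady-state MSE ≈", err[-200:].mean())
```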
110

Controle H-infinito não linear e a equação de Hamilton-Jacobi-Isaacs. / Nonlinear H-infinity control and the Hamilton-Jacobi-Isaacs equation.

Ferreira, Henrique Cezar 10 December 2008 (has links)
The purpose of this thesis is to investigate practical aspects that facilitate the application of nonlinear H-infinity theory in control systems design. Firstly, it is shown that dynamic weighting functions can be used to improve the performance and robustness of the nonlinear H-infinity controller, as in the design of H-infinity controllers for linear plants. The biggest bottleneck to the practical application of nonlinear H-infinity control theory has been the difficulty in solving the Hamilton-Jacobi-Isaacs equations associated with the design of a state feedback and an output injection gain. There is no systematic numerical approach for solving these first-order, nonlinear partial differential equations, which reduce to Riccati equations in the linear context. In this work, successive approximation and Galerkin approximation methods are combined to derive an algorithm that produces an output injection gain. Designs of nonlinear H-infinity controllers obtained by the well-established Taylor approximation and by the proposed Galerkin approximation method, applied to a magnetic levitation system, are presented for comparison purposes.
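The successive-approximation strategy turns the Hamilton-Jacobi-Isaacs equation into a sequence of linear partial differential equations, each of which can be solved by Galerkin projection onto a finite basis. The sketch below shows that projection step for a single-state example with a polynomial basis; the dynamics and cost are hypothetical, and the full iteration, the output-injection design and the magnetic levitation application are not reproduced.

```python
import numpy as np

def integral(y, x):
    """Trapezoidal rule, kept explicit to avoid version-specific helpers."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def phi(x, i):
    return x ** (2 * i)                  # even polynomial basis, V(0) = 0

def dphi(x, i):
    return 2 * i * x ** (2 * i - 1)

def galerkin_solve(f, l, m=3, n_grid=2001):
    """Galerkin solution of the linear equation V'(x) f(x) + l(x) = 0 on [-1, 1].

    Approximates V(x) = sum_i c_i phi_i(x) and forces the residual to be
    orthogonal to every basis function.
    """
    x = np.linspace(-1.0, 1.0, n_grid)
    A = np.zeros((m, m))
    b = np.zeros(m)
    for j in range(1, m + 1):
        for i in range(1, m + 1):
            A[j - 1, i - 1] = integral(phi(x, j) * dphi(x, i) * f(x), x)
        b[j - 1] = -integral(phi(x, j) * l(x), x)
    return np.linalg.solve(A, b)

# hypothetical scalar example: f(x) = -x, running cost l(x) = x^2, whose
# exact solution is V(x) = x^2 / 2, i.e. coefficients [0.5, 0, 0]
c = galerkin_solve(lambda x: -x, lambda x: x ** 2)
print("Galerkin coefficients:", np.round(c, 4))
```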
