131 |
Macroscopic and Microscopic surface features of Hydrogenated silicon thin films
Pepenene, Refuoe Donald, January 2018
Magister Scientiae - MSc (Physics) / An increasing energy demand and growing environmental concerns regarding the use of fossil fuels in South Africa have led to the challenge of exploring cheap, alternative sources of energy. The generation of electricity from photovoltaic (PV) devices such as solar cells is currently seen as a viable alternative source of clean energy. As such, crystalline, amorphous and nanocrystalline silicon thin films are expected to play increasingly important roles as economically viable materials for PV development. Despite the growing interest shown in these materials, challenges such as the partial understanding of standardized measurement protocols, and of the relationship between structure and optoelectronic properties, still need to be overcome.
|
132 |
Multi-Antenna Communication Receivers Using Metaheuristics and Machine Learning Algorithms
Nagaraja, Srinidhi, January 2013
In this thesis, our focus is on low-complexity, high-performance detection algorithms for multi-antenna communication receivers. A key contribution in this thesis is the demonstration that efficient algorithms from metaheuristics and machine learning can be gainfully adapted for signal detection in multi-antenna communication receivers. We first investigate a popular metaheuristic known as the reactive tabu search (RTS), a combinatorial optimization technique, to decode the transmitted signals in large-dimensional communication systems. A basic version of the RTS algorithm is shown to achieve near-optimal performance for 4-QAM in large dimensions. We then propose a method to obtain a lower bound on the BER performance of the optimal detector. This lower bound is tight at moderate to high SNRs and is useful in situations where the performance of the optimal detector is needed for comparison but cannot be obtained due to very high computational complexity. To improve the performance of the basic RTS algorithm for higher-order modulations, we propose variants of the basic RTS algorithm using layering and multiple explorations. These variants are shown to achieve near-optimal performance in higher-order QAM as well.
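To illustrate the flavor of tabu-search detection, here is a minimal sketch of a basic tabu search on ±1 symbols; it omits the reactive tabu-list adaptation that defines RTS, and the channel size, iteration count and tabu length are assumed values for illustration, not the thesis's configuration.

```python
import numpy as np

def tabu_detect(H, y, iters=50, tabu_len=4):
    """Basic tabu search for detecting a vector of +/-1 symbols from y = Hx + n.
    Neighborhood = all single-symbol flips; a short tabu list blocks recently
    flipped positions, with aspiration (a tabu move is allowed if it beats the
    best cost found so far)."""
    nt = H.shape[1]
    x = np.where(np.linalg.pinv(H) @ y >= 0, 1.0, -1.0)   # zero-forcing start
    best, best_cost = x.copy(), np.linalg.norm(y - H @ x) ** 2
    tabu = []
    for _ in range(iters):
        costs = [np.linalg.norm(y - H @ np.where(np.arange(nt) == i, -x, x)) ** 2
                 for i in range(nt)]
        for i in np.argsort(costs):                        # best admissible move
            if i not in tabu or costs[i] < best_cost:
                x[i] = -x[i]
                tabu = (tabu + [i])[-tabu_len:]
                break
        cost = np.linalg.norm(y - H @ x) ** 2
        if cost < best_cost:
            best, best_cost = x.copy(), cost
    return best

rng = np.random.default_rng(0)
H = rng.standard_normal((32, 32))
x_true = rng.choice([-1.0, 1.0], 32)
y = H @ x_true + 0.1 * rng.standard_normal(32)
print(int(np.sum(tabu_detect(H, y) != x_true)), "symbol errors")
```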
Next, we propose a new receiver called the linear regression of minimum mean square error (MMSE) residual receiver (referred to as the LRR receiver). The proposed LRR receiver improves the MMSE receiver by learning a linear regression model for the error of the MMSE receiver. The LRR receiver uses pilot data to estimate the channel, and then uses locally generated training data (not transmitted over the channel) to find the linear regression parameters. The LRR receiver performs well and is suitable for applications where the channel remains constant for a long period (slow-fading channels). Finally, we propose a receiver that uses a committee of linear receivers, whose parameters are estimated from training data using a variant of the AdaBoost algorithm, a celebrated supervised classification algorithm in machine learning. We call our receiver the boosted MMSE (B-MMSE) receiver. We demonstrate that the performance and complexity of the proposed B-MMSE receiver are quite attractive for multi-antenna communication receivers.
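To make the LRR idea concrete, the following NumPy sketch (not the thesis's implementation) runs an MMSE filter on 4-QAM over a random MIMO channel, fits a least-squares linear map from the MMSE outputs to their residual errors using locally generated training data, and applies that map as a correction at run time. The antenna counts, noise power and training size are assumed values.

```python
import numpy as np

rng = np.random.default_rng(0)
nt, nr, sigma2, ntrain = 4, 8, 0.1, 1000    # antennas, noise power, training size

H = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
W = np.linalg.solve(H.conj().T @ H + sigma2 * np.eye(nt), H.conj().T)  # MMSE filter

def qam4(n):
    """Unit-energy 4-QAM symbol vectors."""
    return (rng.choice([-1.0, 1.0], (nt, n)) + 1j * rng.choice([-1.0, 1.0], (nt, n))) / np.sqrt(2)

def noise(n):
    return np.sqrt(sigma2 / 2) * (rng.standard_normal((nr, n)) + 1j * rng.standard_normal((nr, n)))

# Locally generated training data (never sent over the channel), as in the LRR idea:
Xtr = qam4(ntrain)
Ztr = W @ (H @ Xtr + noise(ntrain))          # MMSE outputs on the training data
A = (Xtr - Ztr) @ np.linalg.pinv(Ztr)        # least-squares map: output -> residual error

# Run time: correct the MMSE output with the learned residual model
x = qam4(1)
z = W @ (H @ x + noise(1))
x_lrr = z + A @ z
print("MMSE error:", float(np.abs(x - z).mean()), "LRR error:", float(np.abs(x - x_lrr).mean()))
```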
|
133 |
Critical assessment of predicted interactions at atomic resolution
Mendez Giraldez, Raul, 21 September 2007
Molecular biology has allowed the characterization and manipulation of the molecules of life in the wet lab, and the structures of those macromolecules are being continuously elucidated. During the last decades of the past century, there was increasing interest in studying how genes are organized in different organisms ('genomes') and how those genes are expressed as proteins to achieve their functions. Currently the sequences of many genes across several genomes have been determined. In parallel, the efforts to determine the structures of the proteins coded by those genes go on. However, it is experimentally much harder to obtain the structure of a protein than its sequence. For this reason, the number of protein structures available in databases is an order of magnitude or so lower than the number of protein sequences. Furthermore, in order to understand how living organisms work at the molecular level, we need information about the interactions of those proteins. Elucidating the structure of protein macromolecular assemblies is more difficult still. To that end, the use of computers to predict the structure of these complexes has gained interest over the last decades.

The main subject of this thesis is the evaluation of currently available computational methods to predict protein-protein interactions and build an atomic model of the complex. The core of the thesis is the evaluation protocol I developed at the Service de Conformation des Macromolécules Biologiques et de Bioinformatique, Université Libre de Bruxelles, and its computer implementation. This method has been used extensively to evaluate the results of blind protein-protein interaction prediction in the context of the world-wide experiment CAPRI, which have been thoroughly reviewed in several publications [1-3]. In this experiment the structure of a protein complex ('the target') had to be modeled starting from the coordinates of the isolated molecules, prior to the release of the structure of the complex (this is commonly referred to as 'docking').

The assessment protocol lets us compute parameters to rank docking models according to their quality, into four categories: 'Highly Accurate', 'Medium Accurate', 'Acceptable' and 'Incorrect'. The efficiency of our evaluation and ranking is clearly shown, even for borderline cases between categories. The correlation of the ranking parameters is analyzed further. In the same section where the evaluation protocol is presented, the ranking participants give to their predictions is also studied, since good solutions are often not easily recognized among the pool of computer-generated decoys.

An overview of the CAPRI results is given per target structure and per participant, with regard to the computational method used and the difficulty of the complex. CAPRI also includes a new ongoing experiment on scoring previously and anonymously generated models from other participants (the 'Scoring' experiment). Its promising results are analyzed as well, with respect to the original CAPRI experiment. The Scoring experiment was a step towards the use of combined methods to predict the structure of protein-protein complexes.
We discuss here its possible application to predicting the structure of protein complexes, based on a clustering study of the different results.

In the last chapter of the thesis, I present the preliminary results of an ongoing study on the conformational changes in protein structures upon complexation, as those rearrangements pose serious limitations to current computational methods for predicting the structure of protein complexes. Protein structures are classified according to the magnitude of their conformational rearrangement, and the involvement of interfaces and of particular secondary-structure elements is discussed. At the end of the chapter, some guidelines and future work are proposed to complete the survey. / Doctorat en Sciences
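For readers unfamiliar with the assessment, the sketch below shows how such a four-way ranking can be expressed in code. The threshold values follow the spirit of the published CAPRI criteria, but treat the exact numbers as illustrative assumptions rather than the thesis's exact protocol.

```python
def capri_quality(f_nat, lrmsd, irmsd):
    """Rank one docking model into CAPRI-style quality categories.

    f_nat : fraction of native residue-residue contacts reproduced by the model
    lrmsd : ligand backbone RMSD (in Angstrom) after superposing the receptors
    irmsd : backbone RMSD over the interface residues (in Angstrom)
    """
    if f_nat >= 0.5 and (lrmsd <= 1.0 or irmsd <= 1.0):
        return "Highly Accurate"
    if f_nat >= 0.3 and (lrmsd <= 5.0 or irmsd <= 2.0):
        return "Medium Accurate"
    if f_nat >= 0.1 and (lrmsd <= 10.0 or irmsd <= 4.0):
        return "Acceptable"
    return "Incorrect"

print(capri_quality(f_nat=0.42, lrmsd=3.1, irmsd=1.6))   # -> Medium Accurate
```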
|
134 |
[en] ADVANCED TRANSMIT PROCESSING FOR MIMO DOWNLINK CHANNELS WITH 1-BIT QUANTIZATION AND OVERSAMPLING AT THE RECEIVERS / [pt] PROCESSAMENTO AVANÇADO DE TRANSMISSÃO PARA CANAIS DE DOWNLINK MIMO COM QUANTIZAÇÃO DE 1 BIT E SOBREAMOSTRAGEM NOS RECEPTORES
10 September 2020
[pt] IoT refere-se a um sistema de dispositivos de computação inter-relacionados que visa transferir dados através de uma rede sem exigir interação humano-humano ou humano-computador. Esses sistemas de comunicação modernos exigem restrições de baixo consumo de energia e baixa complexidade no receptor. Nesse sentido, o conversor analógico-digital representa um gargalo para o desenvolvimento das aplicações dessas novas tecnologias, pois apresenta alto consumo de energia devido à sua alta resolução. A pesquisa realizada em relação aos conversores analógico-digitais com quantização grosseira mostrou que esses dispositivos são promissores para o projeto de futuros sistemas de comunicação. Para equilibrar a perda de informações devido à quantização grosseira, a resolução no tempo é aumentada através da superamostragem. Esta tese considera um sistema com quantização de 1 bit e superamostragem no receptor com um canal de downlink MIMO multiusuário com banda limitada e apresenta, como principal contribuição, a nova modulação de cruzamento de zeros, que implica que a informação é transmitida nos instantes de tempo dos cruzamentos de zero. Este método é usado para a pré-codificação temporal através da otimização do design da forma de onda para dois pré-codificadores diferentes: a maximização temporal da distância mínima até o limiar de decisão com forçamento a zero espacial, e a pré-codificação MMSE espaço-temporal. Os resultados da simulação mostram que a abordagem de cruzamento de zeros proposta supera o estado da arte em termos da taxa de erro de bits para os dois pré-codificadores estudados. Além disso, essa nova modulação reduz a complexidade computacional, permite dispositivos de complexidade muito baixa e economiza recursos de banda em comparação com o método mais avançado. Análises adicionais mostram que a abordagem do cruzamento de zeros é benéfica em comparação com o método mais avançado em termos de maior distância mínima até o limiar de decisão e menor MSE para sistemas com limitações de banda. Além disso, foi desenvolvido um esquema de mapeamento de bits para modulação de cruzamento por zero, semelhante à codificação de Gray, para reduzir ainda mais a taxa de erro de bits. / [en] The IoT refers to a system of interrelated computing devices which aims to transfer data over a network without requiring human-to-human or human-to-computer interaction. These modern communication systems demand low energy consumption and low complexity at the receiver. In this sense, the analog-to-digital converter represents a bottleneck for the development of applications of these new technologies, since it has a high energy consumption due to its high resolution. Research concerning analog-to-digital converters with coarse quantization has shown that such devices are promising for the design of future communication systems. To balance the loss of information due to the coarse quantization, the resolution in time is increased through oversampling. This thesis considers a system with 1-bit quantization and oversampling at the receiver with a bandlimited multiuser MIMO downlink channel and introduces, as its main contribution, a novel zero-crossing modulation, which implies that the information is conveyed in the time instants of the zero-crossings. This method is used for temporal precoding through waveform design optimization for two different precoders: the temporal maximization of the minimum distance to the decision threshold with spatial zero-forcing, and space-time MMSE precoding. The simulation results show that the proposed zero-crossing approach outperforms the state of the art in terms of the bit error rate for both precoders studied. In addition, this novel modulation reduces computational complexity, allows very-low-complexity devices and saves band resources in comparison to the state-of-the-art method. Additional analyses show that the zero-crossing approach is beneficial in comparison to the state-of-the-art method in terms of a greater minimum distance to the decision threshold and a lower MSE for systems with band limitations. Moreover, a bit-mapping scheme was devised for zero-crossing modulation, similar to Gray coding, to further reduce the bit error rate.
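As a toy illustration of why oversampling compensates for 1-bit quantization, the sketch below (assuming BPSK-like ±1 symbols, an oversampling factor of 4, and additive Gaussian noise) shows that after the sign nonlinearity, the zero crossings and a majority vote within each symbol interval still recover the data. It illustrates only the underlying intuition, not the zero-crossing precoder of the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)
M = 4                                          # oversampling factor per symbol
symbols = 2.0 * rng.integers(0, 2, 16) - 1.0   # 16 random +/-1 symbols

# Transmit at M samples per symbol, add noise, then apply the 1-bit ADC
y = np.repeat(symbols, M) + 0.3 * rng.standard_normal(16 * M)
r = np.sign(y)                                 # only the sign survives quantization

# The zero crossings of the 1-bit stream carry the transition/timing information
zc = np.flatnonzero(np.diff(r) != 0)
print("zero-crossing sample indices:", zc)

# Crude detection: majority vote over the M one-bit samples of each symbol
det = np.where(r.reshape(-1, M).sum(axis=1) >= 0, 1.0, -1.0)
print("symbol errors:", int(np.sum(det != symbols)))
```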
|
135 |
Extrakce a modifikace vlastností číslicových zvukových signálů v dynamické rovině / Digital Audio Signal Feature Extraction and Modification in Dynamic Plane
Kramoliš, Ondřej, January 2010
This thesis deals with basic methods for measuring the root mean square and peak value of a digital audio signal, with algorithms for measuring audio programme loudness and true-peak audio level according to Recommendation ITU-R BS.1770-1, and with digital systems for controlling signal dynamic range. It presents the results achieved for root mean square and peak value measurement, and the results of implementing a dynamics processor with a general piecewise-linear, non-decreasing static curve together with the algorithms of Recommendation ITU-R BS.1770-1.
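A minimal sketch of the two basic meters discussed here, assuming a full-scale amplitude of 1.0: RMS and sample-peak levels of a test tone. BS.1770-1 loudness additionally applies K-weighting and gating, and its true-peak meter requires upsampling to catch inter-sample peaks; both are omitted for brevity.

```python
import numpy as np

def rms_dbfs(x):
    """Root-mean-square level in dB relative to full scale (full scale = 1.0)."""
    return 20.0 * np.log10(np.sqrt(np.mean(x ** 2)) + 1e-12)

def peak_dbfs(x):
    """Sample-peak level in dBFS (a true-peak meter would first upsample,
    e.g. 4x, to catch inter-sample peaks as in BS.1770)."""
    return 20.0 * np.log10(np.max(np.abs(x)) + 1e-12)

fs = 48000
t = np.arange(fs) / fs
x = 0.5 * np.sin(2 * np.pi * 997.0 * t)        # a -6 dBFS sine test tone
print(f"RMS: {rms_dbfs(x):.2f} dBFS, peak: {peak_dbfs(x):.2f} dBFS")
```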
|
136 |
Essays in dynamic panel data models and labor supply
Nayihouba, Kolobadia Ada, 08 1900
Cette thèse est organisée en trois chapitres. Les deux premiers proposent une approche régularisée pour l'estimation du modèle de données de panel dynamique : l'estimateur GMM et l'estimateur LIML. Le dernier chapitre de la thèse est une application de la méthode de régularisation à l'estimation des élasticités de l'offre de travail en utilisant des modèles de pseudo-données de panel.

Dans un modèle de panel dynamique, le nombre de conditions de moments augmente rapidement avec la dimension temporelle du panel, conduisant à une matrice de covariance des instruments de grande dimension. L'inversion d'une telle matrice pour calculer l'estimateur affecte négativement les propriétés de l'estimateur en échantillon fini. Comme solution à ce problème, nous proposons une approche par la régularisation qui consiste à utiliser une inverse généralisée de la matrice de covariance au lieu de son inverse classique. Trois techniques de régularisation sont utilisées : celle des composantes principales, celle de Tikhonov, qui est basée sur la régression Ridge (aussi appelée Bayesian shrinkage), et enfin celle de Landweber-Fridman, qui est une méthode itérative. Toutes ces techniques introduisent un paramètre de régularisation qui est similaire au paramètre de lissage dans les régressions non paramétriques. Les propriétés en échantillon fini de l'estimateur régularisé dépendent de ce paramètre, qui doit être sélectionné parmi plusieurs valeurs potentielles.

Dans le premier chapitre (co-écrit avec Marine Carrasco), nous proposons l'estimateur GMM régularisé du modèle de panel dynamique. Sous l'hypothèse que le nombre d'individus et de périodes du panel tendent vers l'infini, nous montrons que nos estimateurs sont convergents et asymptotiquement normaux. Nous dérivons une méthode empirique de sélection du paramètre de régularisation basée sur une expansion de second ordre de l'erreur quadratique moyenne et nous démontrons l'optimalité de cette procédure de sélection. Les simulations montrent que la régularisation améliore les propriétés de l'estimateur GMM classique. Comme application empirique, nous avons analysé l'effet du développement financier sur la croissance économique.

Dans le deuxième chapitre (co-écrit avec Marine Carrasco), nous nous intéressons à l'estimateur LIML régularisé du modèle de données de panel dynamique. L'estimateur LIML est connu pour avoir de meilleures propriétés en échantillon fini que l'estimateur GMM, mais son utilisation devient problématique lorsque la dimension temporelle du panel devient large. Nous dérivons les propriétés asymptotiques de l'estimateur LIML régularisé sous l'hypothèse que le nombre d'individus et de périodes du panel tendent vers l'infini. Une procédure empirique de sélection du paramètre de régularisation est aussi proposée. Les bonnes performances de l'estimateur régularisé par rapport au LIML classique (non régularisé), au GMM classique ainsi qu'au GMM régularisé sont confirmées par des simulations.

Dans le dernier chapitre, je considère l'estimation des élasticités d'offre de travail des hommes canadiens. L'hétérogénéité inobservée ainsi que les erreurs de mesure sur les salaires et les revenus sont connues pour engendrer de l'endogénéité quand on estime les modèles d'offre de travail. Une solution fréquente à ce problème d'endogénéité consiste à regrouper les données sur la base des caractéristiques observables et à effectuer les moindres carrés pondérés sur les moyennes des groupes. Il a été démontré que cet estimateur est équivalent à l'estimateur des variables instrumentales sur les données individuelles avec les indicatrices de groupe comme instruments. Donc, en présence d'un grand nombre de groupes, cet estimateur souffre d'un biais en échantillon fini similaire à celui de l'estimateur des variables instrumentales quand le nombre d'instruments est élevé. Profitant de cette correspondance entre l'estimateur sur les données groupées et l'estimateur des variables instrumentales sur les données individuelles, nous proposons une approche régularisée de l'estimation du modèle. Cette approche conduit à des élasticités substantiellement différentes de celles qu'on obtient en utilisant l'estimateur sur données groupées. / This thesis is organized in three chapters. The first two chapters propose a regularization approach to the estimation of two estimators of the dynamic panel data model: the Generalized Method of Moments (GMM) estimator and the Limited Information Maximum Likelihood (LIML) estimator. The last chapter of the thesis is an application of regularization to the estimation of labor supply elasticities using pseudo-panel data models.

In a dynamic panel data model, the number of moment conditions increases rapidly with the time dimension, resulting in a large-dimensional covariance matrix of the instruments. Inverting this large matrix to compute the estimator leads to poor finite-sample properties. To address this issue, we propose a regularization approach to the estimation of such models in which a generalized inverse of the covariance matrix of the instruments is used instead of its usual inverse. Three regularization schemes are used: principal components, Tikhonov, which is based on Ridge regression (also called Bayesian shrinkage), and finally Landweber-Fridman, which is an iterative method. All these methods involve a regularization parameter which is similar to the smoothing parameter in nonparametric regressions. The finite-sample properties of the regularized estimator depend on this parameter, which needs to be selected among many potential values.

In the first chapter (co-authored with Marine Carrasco), we propose the regularized GMM estimator of the dynamic panel data model. Under double asymptotics, we show that our regularized estimators are consistent and asymptotically normal provided that the regularization parameter goes to zero slower than the sample size goes to infinity. We derive a data-driven selection of the regularization parameter based on an approximation of the higher-order mean square error and show its optimality. The simulations confirm that regularization improves the properties of the usual GMM estimator. As an empirical application, we investigate the effect of financial development on economic growth.

In the second chapter (co-authored with Marine Carrasco), we propose the regularized LIML estimator of the dynamic panel data model. The LIML estimator is known to have better small-sample properties than the GMM estimator, but its implementation becomes problematic when the time dimension of the panel becomes large. We derive the asymptotic properties of the regularized LIML under double asymptotics. A data-driven procedure to select the regularization parameter is proposed. The good performance of the regularized LIML estimator relative to the usual (not regularized) LIML estimator, the usual GMM estimator and the regularized GMM estimator is confirmed by the simulations.

In the last chapter, I consider the estimation of the labor supply elasticities of Canadian men through a regularization approach. Unobserved heterogeneity and measurement errors on wage and income variables are known to cause endogeneity issues in the estimation of labor supply models. A popular solution to the endogeneity issue is to group data into categories based on observable characteristics and compute weighted least squares at the group level. This grouping estimator has been shown to be equivalent to the instrumental variables (IV) estimator on the individual-level data using group dummies as instruments. Hence, in the presence of a large number of groups, the grouping estimator exhibits a finite-sample bias similar to that of the IV estimator in the presence of many instruments. I take advantage of the correspondence between grouping estimators and the IV estimator to propose a regularization approach to the estimation of the model. Using this approach leads to wage elasticities that are substantially different from those obtained through grouping estimators.
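A minimal sketch of the Tikhonov scheme applied to an ill-conditioned instrument covariance matrix; the dimensions and candidate values of the regularization parameter below are assumed for illustration, whereas the thesis selects the parameter with an MSE-based data-driven criterion.

```python
import numpy as np

def tikhonov_ginv(K, alpha):
    """Tikhonov-regularized generalized inverse: (K + alpha*I)^{-1}, used in
    place of K^{-1} when the instrument covariance K is large and singular."""
    n = K.shape[0]
    return np.linalg.solve(K + alpha * np.eye(n), np.eye(n))

rng = np.random.default_rng(2)
Z = rng.standard_normal((100, 150))     # more instruments than observations,
K = Z.T @ Z / 100                       # so the covariance matrix is singular

print("condition number:", np.linalg.cond(K))
for alpha in (1e-4, 1e-2, 1e-1):        # candidate regularization parameters
    Kinv = tikhonov_ginv(K, alpha)
    print(alpha, np.trace(K @ Kinv))    # effective degrees of freedom shrink with alpha
```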
|
137 |
Régression non-paramétrique pour variables fonctionnelles / Non parametric regression for functional data
Elamine, Abdallah Bacar, 23 March 2010
Cette thèse se décompose en quatre parties auxquelles s'ajoute une présentation. Dans un premier temps, on expose les outils mathématiques essentiels à la compréhension des chapitres suivants. Dans un deuxième temps, on s'intéresse à la régression non paramétrique locale pour des données fonctionnelles appartenant à un espace de Hilbert. On propose, tout d'abord, un estimateur de l'opérateur de régression. La construction de cet estimateur est liée à la résolution d'un problème inverse linéaire. On établit des bornes de l'erreur quadratique moyenne (EQM) de l'estimateur de l'opérateur de régression en utilisant une décomposition classique. Cette EQM dépend de la fonction de petite boule de probabilité du régresseur, au sujet de laquelle des hypothèses de type Gamma-variation sont posées. Dans le chapitre suivant, on reprend le travail élaboré dans le chapitre précédent en se plaçant dans le cadre de données fonctionnelles appartenant à un espace semi-normé. On établit des bornes de l'EQM de l'estimateur de l'opérateur de régression. Cette EQM peut être vue comme une fonction de la fonction de petite boule de probabilité. Dans le dernier chapitre, on s'intéresse à l'estimation de la fonction auxiliaire associée à la fonction de petite boule de probabilité. D'abord, on propose un estimateur de cette fonction auxiliaire. Ensuite, on établit la convergence en moyenne quadratique et la normalité asymptotique de cet estimateur. Enfin, par des simulations, on étudie le comportement de cet estimateur au voisinage de zéro. / This thesis is divided into four sections, together with a presentation. In the first section, we present the mathematical tools essential for the comprehension of the following sections. In the second section, we address the problem of local nonparametric regression with functional inputs. First, we propose an estimator of the unknown regression function. The construction of this estimator is related to the resolution of a linear inverse problem. Using a classical decomposition method, we establish a bound for the mean square error (MSE). This bound depends on the small ball probability of the regressor, which is assumed to belong to the class of Gamma-varying functions. In the third section, we extend the work of the preceding section to data belonging to an infinite-dimensional semi-normed space. We establish bounds for the MSE of the regression operator. This MSE can be seen as a function of the small ball probability function. In the last section, we turn to the estimation of the auxiliary function associated with the small ball probability. We propose an estimator of this auxiliary function, then establish its convergence in mean square and its asymptotic normality. Finally, through simulations, we study the behaviour of this estimator in a neighborhood of zero.
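As a rough sketch of kernel regression with functional covariates: the estimator below assumes a Gaussian kernel and an L2 semi-metric between discretized curves, whereas the thesis works with general small-ball probabilities and semi-norms rather than this specific choice. The simulated curves and response are hypothetical.

```python
import numpy as np

def nw_functional(X_train, y_train, x_new, h):
    """Nadaraya-Watson estimator for a functional covariate: each row of
    X_train is a discretized curve; distances are L2 norms between curves."""
    d = np.linalg.norm(X_train - x_new, axis=1)   # semi-metric between curves
    w = np.exp(-0.5 * (d / h) ** 2)               # Gaussian kernel on distances
    return np.sum(w * y_train) / (np.sum(w) + 1e-12)

rng = np.random.default_rng(3)
t = np.linspace(0.0, 1.0, 50)
A = rng.standard_normal((100, 1))
X = A * np.sin(2 * np.pi * t)                     # random curves a_i * sin(2*pi*t)
y = A[:, 0] ** 2 + 0.05 * rng.standard_normal(100)  # scalar response
print(nw_functional(X, y, X[0], h=0.5), y[0])
```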
|
138 |
Massively Parallel Fast Fourier Transforms and Particle-Mesh Methods: Massiv parallele schnelle Fourier-Transformationen und Teilchen-Gitter-Methoden
Pippig, Michael, 13 October 2015
The present thesis provides a modularized view on the structure of fast numerical methods for computing Coulomb interactions between charged particles in three-dimensional space. This common structure is expressed in terms of three self-contained algorithmic frameworks built on top of one another, namely the fast Fourier transform (FFT), the nonequispaced fast Fourier transform (NFFT) and NFFT-based particle-mesh methods (P²NFFT). For each of these frameworks, algorithmic enhancements and parallel implementations are presented with special emphasis on scalability up to hundreds of thousands of parallel processes.

In the context of the FFT, massively parallel algorithms are composed from hardware-adaptive low-level modules provided by the FFTW software library. The new algorithmic NFFT concepts include pruned NFFT, interlacing, analytic differentiation, and optimized deconvolution in Fourier space with respect to a mean square aliasing error. Enabled by these generalized concepts, the NFFT is shown to provide unified access to particle-mesh methods. In particular, mixed-periodic boundary conditions are handled in a consistent way and interlacing can be incorporated more efficiently. Heuristic approaches for parameter tuning are presented on the basis of thorough error estimates. / Die vorliegende Dissertation beschreibt einen modularisierten Blick auf die Struktur schneller numerischer Methoden für die Berechnung der Coulomb-Wechselwirkungen zwischen Ladungen im dreidimensionalen Raum. Die gemeinsame Struktur ist geprägt durch drei selbstständige und aufeinander aufbauende Algorithmen, nämlich der schnellen Fourier-Transformation (FFT), der nicht äquidistanten schnellen Fourier-Transformation (NFFT) und der NFFT-basierten Teilchen-Gitter-Methode (P²NFFT). Für jeden dieser Algorithmen werden Verbesserungen und parallele Implementierungen vorgestellt, mit besonderem Augenmerk auf massiv paralleler Skalierbarkeit.

Im Kontext der FFT werden parallele Algorithmen aus den hardware-adaptiven Modulen der FFTW-Softwarebibliothek zusammengesetzt. Die neuen NFFT-Konzepte beinhalten abgeschnittene NFFT, Versatz, analytische Differentiation und optimierte Entfaltung im Fourier-Raum bezüglich des mittleren quadratischen Aliasfehlers. Mit Hilfe dieser Verallgemeinerungen bietet die NFFT einen vereinheitlichten Zugang zu Teilchen-Gitter-Methoden. Insbesondere gemischt periodische Randbedingungen werden einheitlich behandelt und Versatz wird effizienter umgesetzt. Heuristiken für die Parameterwahl werden auf Basis sorgfältiger Fehlerabschätzungen angegeben.
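A minimal serial sketch of the particle-mesh idea, assuming nearest-grid-point charge assignment and a plain FFT Poisson solve on a periodic box; P²NFFT replaces these pieces with NFFT-based interpolation, Ewald splitting and massively parallel FFTs. Grid size, box length and charges are assumed toy values.

```python
import numpy as np

n, L = 32, 1.0                                # grid points per axis, box length
rng = np.random.default_rng(4)
pos = rng.random((10, 3)) * L                 # 10 random particle positions
q = rng.choice([-1.0, 1.0], 10)               # unit charges

rho = np.zeros((n, n, n))
idx = (pos / L * n).astype(int) % n           # nearest-grid-point charge assignment
for (i, j, k), qi in zip(idx, q):
    rho[i, j, k] += qi

kfreq = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
kx, ky, kz = np.meshgrid(kfreq, kfreq, kfreq, indexing="ij")
k2 = kx**2 + ky**2 + kz**2
k2[0, 0, 0] = 1.0                             # avoid division by zero at k = 0

rho_hat = np.fft.fftn(rho)
phi_hat = 4 * np.pi * rho_hat / k2            # solves -Laplacian(phi) = 4*pi*rho
phi_hat[0, 0, 0] = 0.0                        # drop the mean (neutralizing background)
phi = np.real(np.fft.ifftn(phi_hat))
print(phi[tuple(idx[0])])                     # potential at the first particle's cell
```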
|
139 |
Real-time Adaptive Cancellation of Satellite Interference in Radio Astronomy
Poulsen, Andrew Joseph, 17 July 2003
Radio astronomy is the science of observing the heavens at radio frequencies, from a few kHz to approximately 300 GHz. In recent years, radio astronomy has faced a growing interference problem as radio frequency (RF) bandwidth has become an increasingly scarce commodity. A programmable real-time DSP least-mean-square interference canceller was developed and demonstrated as a successful method of excising satellite downlink signals at both an experimental platform at BYU and the Green Bank Telescope at the National Radio Astronomy Observatory in West Virginia. A performance analysis of this cancellation system in the radio astronomy radio frequency interference (RFI) mitigation regime constitutes the main contribution of this thesis. The real-time BYU test platform consists of small radio telescopes, low-noise RF receivers, and a state-of-the-art DSP platform; this programmable real-time radio astronomy RFI mitigation tool is the first of its kind. Basic tools needed for radio astronomy observations and for the analysis and implementation of interference mitigation algorithms were also implemented on the DSP platform, including a power spectral density estimator, a beamformer, and an array signal correlator.
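A minimal sketch of the least-mean-square canceller idea: the primary channel sees the astronomy signal plus interference, while a reference antenna sees a correlated copy of the interference alone, and LMS adapts filter weights to subtract it. The synthetic sinusoidal interferer, reference-channel model, step size and filter length are all assumptions; the thesis's real-time DSP implementation is considerably more involved.

```python
import numpy as np

rng = np.random.default_rng(5)
N, taps, mu = 20000, 8, 0.01
soi = 0.1 * rng.standard_normal(N)             # weak, noise-like signal of interest
rfi = np.sin(2 * np.pi * 0.05 * np.arange(N))  # strong narrowband interference
primary = soi + rfi
ref = np.roll(rfi, 3) + 0.01 * rng.standard_normal(N)  # delayed, noisy reference copy

w = np.zeros(taps)
out = np.zeros(N)
for n in range(taps, N):
    x = ref[n - taps:n][::-1]                  # reference tap vector (newest first)
    e = primary[n] - w @ x                     # canceller output = residual
    w += mu * e * x                            # LMS weight update
    out[n] = e

print("power before:", np.var(primary), "after:", np.var(out[N // 2:]))
```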
|
140 |
Relative or Discounted Cash Flow Valuation on the Fifty Largest US-Based Corporations on Nasdaq: Which of these valuation methods provides the most accurate valuation forecast?
Öhrner, Marcus, Öhman, Otto, January 2023
The topic of this Bachelor Thesis is which of these valuation methods provides the most accurate valuation forecast. Assuming that the year is 2020, the goal of this thesis is to forecast the future stock prices of the fifty largest US-based companies on the Nasdaq stock exchange for 2021 and 2022, using a quantitative method and ten years of historical data. We determine which valuation method provides the most accurate stock price on a non-sector-specific sample by comparing predicted prices to actual stock prices and discussing the results. There are several ways to value a company; the ones used in this thesis are the discounted cash flow (DCF) method, the price-to-earnings (P/E) ratio method (an equity multiple), and enterprise value to earnings before interest, taxes, depreciation, and amortization (EV/EBITDA, a firm multiple). Our results show that when reviewing the valuations of multiple companies in different sectors, the relative valuation methods, EV/EBITDA in particular, provide better predictions than the discounted cash flow method. This thesis provides the reader with a comprehensive overview of these valuation methods and their effectiveness in providing valuation forecasts. The result is beneficial for policymakers, investors, and financial analysts when forecasting future stock prices.
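A stylized sketch of the two families of methods compared; all inputs below are hypothetical, whereas the thesis estimates growth rates, discount rates and peer multiples from the ten years of historical data described above.

```python
def dcf_value(fcf, growth, wacc, terminal_growth, years=5):
    """Discounted cash flow: project free cash flows, discount them,
    and add a Gordon-growth terminal value."""
    value = 0.0
    cash = fcf
    for t in range(1, years + 1):
        cash *= 1 + growth
        value += cash / (1 + wacc) ** t
    terminal = cash * (1 + terminal_growth) / (wacc - terminal_growth)
    return value + terminal / (1 + wacc) ** years

def pe_value(eps, peer_pe):
    """Relative valuation: apply a peer-group P/E multiple to forecast earnings."""
    return eps * peer_pe

enterprise = dcf_value(fcf=5_000.0, growth=0.08, wacc=0.09, terminal_growth=0.02)
print(f"DCF enterprise value: {enterprise:,.0f}")
print(f"P/E implied price: {pe_value(eps=6.10, peer_pe=24.0):.2f}")
```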
|