  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
111

Performances of different estimation methods for generalized linear mixed models.

Biswas, Keya 08 May 2015 (has links)
Generalized linear mixed models (GLMMs) have become extremely popular in recent years. The main computational problem in parameter estimation for GLMMs is that, in contrast to linear mixed models, closed-form analytical expressions for the likelihood are not available. To overcome this problem, several approaches have been proposed in the literature. For this study we used one quasi-likelihood approach, penalized quasi-likelihood (PQL), and two integral approximations: the Laplace approximation and adaptive Gauss-Hermite quadrature (AGHQ). Our primary objective was to measure the performance of each estimation method. AGHQ is more accurate than the Laplace approximation, but slower; the question is therefore when the Laplace approximation is adequate, and when AGHQ provides a significantly more accurate result. We ran two simulations using PQL, Laplace and AGHQ with different numbers of quadrature points, varying the random-effect standard deviation (θ) and the number of replications per cluster. The performance of the three methods was measured by root mean square error (RMSE) and bias. Based on the simulated data, we found that both for small values of θ with few replications and for large values of θ with many replications, the RMSE of the PQL method is much higher than that of the Laplace and AGHQ approximations. For intermediate values of θ, ranging from 0.63 to 3.98, regardless of the number of replications per cluster, Laplace and AGHQ gave similar estimates. When both the number of replications and θ were small, increasing the number of quadrature points increased the RMSE, indicating that the Laplace approximation performs better than AGHQ in that regime.
When the random-effect standard deviation is large, e.g. θ = 10, and the number of replications is small, the Laplace RMSE is larger than that of AGHQ, and increasing the number of quadrature points decreases the RMSE; this indicates that AGHQ performs better in this situation. The difference in RMSE between PQL and Laplace is approximately 12%, and between AGHQ and Laplace approximately 10%. In addition, we tested the relative performance and accuracy of two R packages (lme4, glmmML) and SAS (PROC GLIMMIX) on real data. Our results suggest that all of them perform well in terms of accuracy, precision and convergence rate. In most cases glmmML was much faster than lme4 and SAS; the only exception was the Contraception data, where the computation times of the two R packages were exactly the same. The difference in computation time between the two platforms decreases as the number of quadrature points increases. / Thesis / Master of Science (MSc)
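The trade-off described above between the Laplace approximation and Gauss-Hermite quadrature can be sketched numerically. The following is a minimal illustration, not the thesis code: it approximates the marginal likelihood of a single cluster of Bernoulli responses with a random intercept b ~ N(0, θ²), using ordinary (non-adaptive, mode-uncentered) Gauss-Hermite quadrature and a Laplace approximation. The data `y`, θ = 1 and β₀ = 0 are arbitrary choices.

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss

def cluster_loglik(b, y, beta0=0.0):
    """Bernoulli log-likelihood of one cluster for an array of intercepts b."""
    eta = beta0 + np.asarray(b, float)[:, None]
    p = 1.0 / (1.0 + np.exp(-eta))
    return np.sum(y * np.log(p) + (1 - y) * np.log(1 - p), axis=1)

def marginal_ghq(y, theta, n_pts=20):
    """Gauss-Hermite approximation of the integral of L(b) * N(b; 0, theta^2) db."""
    x, w = hermgauss(n_pts)                       # nodes/weights for exp(-x^2) weight
    vals = np.exp(cluster_loglik(np.sqrt(2.0) * theta * x, y))
    return np.sum(w * vals) / np.sqrt(np.pi)

def marginal_laplace(y, theta):
    """Laplace approximation: Gaussian expansion around the posterior mode."""
    grid = np.linspace(-5.0 * theta, 5.0 * theta, 20001)
    lp = cluster_loglik(grid, y) - 0.5 * grid**2 / theta**2
    i = int(np.argmax(lp))
    h = grid[1] - grid[0]
    curv = -(lp[i - 1] - 2.0 * lp[i] + lp[i + 1]) / h**2   # -(log posterior)'' at mode
    return np.exp(lp[i]) * np.sqrt(2.0 * np.pi / curv) / (theta * np.sqrt(2.0 * np.pi))

y = np.array([1, 1, 0, 1, 1])   # one small cluster, arbitrary data
theta = 1.0

# Fine trapezoid rule as a reference value
bb = np.linspace(-8.0, 8.0, 200001)
dens = np.exp(cluster_loglik(bb, y) - 0.5 * bb**2 / theta**2) / (theta * np.sqrt(2.0 * np.pi))
truth = np.sum((dens[:-1] + dens[1:]) / 2.0) * (bb[1] - bb[0])
```

With 20 quadrature points, the quadrature value agrees with the reference to several digits while the Laplace value is off by a few percent, consistent with the abstract's point that the cheaper approximation is adequate only in some regimes.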
112

Efficient Reliability Estimation Approach for Analysis and Optimization of Composite Structures

Singh, Mukti Nath 13 December 2002 (has links)
The efficient evaluation of the reliability index is of considerable importance in the assessment of component reliability and in reliability-based structural optimization. In this thesis, structural reliability analysis is performed using random-sampling techniques, such as traditional Monte Carlo simulation, and analytical techniques, such as the first-order reliability method. The feasibility of Gauss quadrature points as a means of targeted sampling of the design space and of generating accurate first- and second-order response surface models of failure functions is examined. Parametric uncertainty is considered by probabilistic modeling of the design parameters. Various alternative approaches for estimating the component reliability index are examined with application to two structural problems: ply failure in a multidirectional composite laminate and axial buckling of a composite circular cylinder. A probabilistic sensitivity analysis is performed to measure the influence of each random variable on the estimated reliability index. The advantages and disadvantages of each approach are discussed, and the approach considered most efficient in terms of accuracy and computational requirements is identified. Furthermore, the most efficient approach is applied in reliability-based structural optimization of a composite circular cylinder with ply-failure and axial-buckling constraints. The optimization problem is solved using sequential quadratic programming based on sequential local response surface approximations of the failure functions. Optimization results are presented for different geometric properties, laminate configurations, and coefficients of variation.
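As a toy illustration of the sampling side of this comparison (not the thesis's composite examples), the sketch below estimates a failure probability by plain Monte Carlo for a linear limit state g = R − S with Gaussian resistance R and load S, then converts it to a reliability index β = −Φ⁻¹(P_f). All distribution parameters are made up; the linear Gaussian case is chosen because β is known exactly and equals what a first-order method would return.

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(0)

# Hypothetical Gaussian resistance R and load S; limit state g = R - S
mu_R, sd_R = 10.0, 1.5
mu_S, sd_S = 6.0, 1.0

N = 200_000
R = rng.normal(mu_R, sd_R, N)
S = rng.normal(mu_S, sd_S, N)
pf_hat = np.mean(R - S < 0.0)              # Monte Carlo failure probability
beta_hat = -NormalDist().inv_cdf(pf_hat)   # generalized reliability index

# Exact value for this linear Gaussian limit state
beta_exact = (mu_R - mu_S) / np.hypot(sd_R, sd_S)
```

The gap between `beta_hat` and `beta_exact` shrinks as N grows; for small failure probabilities, plain Monte Carlo needs very large N, which is exactly why response-surface and first-order methods are attractive.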
113

Discontinuous Galerkin Finite Element Methods for Shallow Water Flow: Developing a Computational Infrastructure for Mixed Element Meshes

Maggi, Ashley L. 22 July 2011 (has links)
No description available.
114

The Realization of Narrow Band-Pass Characteristics Using Sampled Data Filters

Benthin, Louis 04 1900 (has links)
Pages 42, 63, 69, 71-72, 77, 87, 90, 93-94, and 97 had titles that were cut off in the scanning process; the administrator uploading this file re-wrote them at the bottom of each page. / This thesis presents the results of an investigation of an alternative technique for the realization of narrow band-pass filters. The technique uses N parallel-connected RC time-varying networks. The performance of the 3-channel sampled-data filter is compared with that of a filter using quadrature modulation, with respect to overall system performance. Excellent agreement between theoretical and experimental results is obtained for the band-pass characteristics. Design criteria are also presented for approaching the ideal operation of an N-path sampled-data filter. / Thesis / Master of Engineering (ME)
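The N-path idea in this abstract can be sketched in a few lines of simulation. The following is a hedged illustration, not the thesis's circuit: three ideal RC branches are commutated at a switching frequency f0, which translates the RC low-pass response into a narrow band-pass centered at f0. All component values and frequencies are arbitrary.

```python
import numpy as np

def n_path_filter(x, t, f0, n_paths=3, rc=0.05):
    """Ideal N-path filter: n_paths commutated RC branches switched at rate f0."""
    dt = t[1] - t[0]
    v = np.zeros(n_paths)                       # one capacitor voltage per path
    y = np.empty_like(x)
    for i in range(len(t)):
        k = int(t[i] * f0 * n_paths) % n_paths  # path connected during this slot
        v[k] += (x[i] - v[k]) * dt / rc         # RC charging while connected
        y[i] = v[k]
    return y

f0 = 1000.0                        # switching (center) frequency, arbitrary
fs = 60_000.0                      # simulation sample rate
t = np.arange(0.0, 1.0, 1.0 / fs)
rms = lambda s: np.sqrt(np.mean(s ** 2))
tail = slice(int(0.7 * len(t)), None)           # skip the charging transient

rms_on = rms(n_path_filter(np.sin(2 * np.pi * f0 * t), t, f0)[tail])
rms_off = rms(n_path_filter(np.sin(2 * np.pi * 0.7 * f0 * t), t, f0)[tail])
```

A tone at f0 passes nearly unattenuated while a tone 30% away is strongly suppressed: the band-pass behaviour the abstract describes, with bandwidth set by the RC time constant rather than by high-Q components at f0.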
115

State-space LQG self-tuning control of flexible structures

Ho, Fusheng 04 May 2006 (has links)
This dissertation presents a self-tuning regulator (STR) design method developed from a state-space linear quadratic Gaussian (LQG) control strategy for rejecting a disturbance in a flexible structure in the face of model uncertainty. The parameters to be tuned are treated as additional state variables and are estimated recursively together with the system state needed for feedback, and the feedback gains are designed in the LQ framework based upon the estimated model parameters. Two problems concerning the uncertainty of model parameters are addressed. First, we consider uncertainty in the system matrix of the state-space model. The self-tuning regulator is implemented on a computer and the control law is obtained from a discrete-time model; however, only selected continuous-time parameters with physical meaning, to which the controller is highly sensitive, are tuned. The problem is formulated as a nonlinear filtering problem such that both the estimated state and the unknown parameters can be obtained by an extended Kalman filter. The capability of this design method is demonstrated experimentally by applying it to the rejection of a disturbance in a simply supported plate. The second problem considered is that the location where the disturbance enters the system is unknown, which corresponds to an unknown disturbance-influence matrix. Under the assumption that the system matrix is known and the disturbance can be measured, this is formulated as a linear filtering problem with an approximate discrete-time design model. Similarly, the estimated state for feedback and the unknown parameters are identified simultaneously and recursively, and the feedback gains are calculated approximately by recursively solving the discrete-time control Riccati equation. The effectiveness of the controller is shown by applying it to a simply supported plate when the location of the disturbance is assumed unknown.
Since implementing LQG self-tuning controllers for vibration control systems requires significant real-time computation, methods that can reduce the computing load are examined. In addition, the possibility of extending the self-tuning to disturbance model parameters is explored. / Ph. D.
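The recursive solution of the discrete-time control Riccati equation mentioned above can be sketched as follows. This is a generic value-iteration sketch under assumed dynamics (an Euler-discretized, lightly damped flexural mode with made-up weights), not the dissertation's plate model:

```python
import numpy as np

def dlqr_gain(A, B, Q, R, iters=500):
    """Steady-state LQ gain via backward recursion of the discrete Riccati equation."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # gain for current P
        P = Q + A.T @ P @ (A - B @ K)                      # Riccati update
        P = (P + P.T) / 2.0                                # keep P numerically symmetric
    return K, P

# Euler-discretized lightly damped mode (hypothetical parameters)
dt = 0.01
wn, zeta = 2.0 * np.pi * 5.0, 0.005
A = np.eye(2) + dt * np.array([[0.0, 1.0], [-wn**2, -2.0 * zeta * wn]])
B = dt * np.array([[0.0], [1.0]])
Q = np.diag([1.0, 0.01])
R = np.array([[1e-4]])

K, P = dlqr_gain(A, B, Q, R)
```

The crude Euler discretization is actually unstable for this lightly damped mode, yet the closed loop A − BK is stable, which illustrates why the gain is recomputed from the (estimated) model rather than fixed in advance.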
116

Design and Analysis of a Low-Power Low-Voltage Quadrature LO Generation Circuit for Wireless Applications

Wang, Shen 25 September 2012 (has links)
The competitive market of wireless communication devices demands low-power, low-cost RF solutions. A quadrature local oscillator (LO) is an essential building block of most transceivers. As CMOS technology scales deeper into the nanometer regime, the design of a low-power low-voltage quadrature LO still poses a challenge for RF designers. This dissertation investigates a new quadrature LO topology featuring a transformer-based voltage-controlled oscillator (VCO) stacked with a divide-by-two for low-power low-voltage wireless applications. The transformer-based VCO core adopts the Armstrong VCO configuration to mitigate the small voltage headroom and the noise coupling. The LO operating conditions, including the start-up condition, the oscillation frequency, the voltage swing and the current consumption, are derived from a linearized small-signal model. Both linear time-invariant (LTI) and linear time-variant (LTV) models are utilized to analyze the phase noise of the proposed LO. The results indicate that the quality factor of the primary coil and the mutual inductance between the primary and the secondary coils play an important role in the trade-off between power and noise. Guidelines for determining the parameters of the transformer are developed. The proposed LO was fabricated in 65 nm CMOS technology and its die size is about 0.28 mm². The measurement results show that the LO can work at a 1 V supply voltage, and its operation is robust to process and temperature variations. In high-linearity mode, the LO typically consumes about 2.6 mW of power, and the measured phase noise is -140.3 dBc/Hz at 10 MHz offset frequency. The LO frequency is tunable from 1.35 GHz to 1.75 GHz through a combination of a varactor and an 8-bit switched-capacitor bank. The proposed LO compares favorably to previously reported LOs in terms of the figure of merit (FoM).
More importantly, high start-up gain, low power consumption and low-voltage operation are achieved simultaneously in the proposed topology, though at the cost of higher design complexity. The contributions of this work can be summarized as 1) proposal of a new quadrature LO topology suitable for low-power low-voltage wireless applications, 2) an in-depth circuit analysis and design-method development, 3) implementation of a fully integrated LO in 65 nm CMOS technology for GPS applications, and 4) demonstration of high performance through measurement results. Possible future improvements include optimization of the transformer and refinement of the circuit-analysis method. / Ph. D.
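For reference, the oscillator figure of merit mentioned above is conventionally computed from the phase noise, offset frequency, oscillation frequency, and power. The sketch below plugs in the numbers quoted in this abstract, assuming a 1.55 GHz mid-band frequency (an assumption, since the FoM varies across the 1.35-1.75 GHz tuning range):

```python
import math

def vco_fom(pn_dbc_hz, f0_hz, offset_hz, p_mw):
    """Conventional oscillator figure of merit, in dB."""
    return -pn_dbc_hz + 20.0 * math.log10(f0_hz / offset_hz) - 10.0 * math.log10(p_mw)

# Values quoted in the abstract; 1.55 GHz mid-band is assumed
fom = vco_fom(pn_dbc_hz=-140.3, f0_hz=1.55e9, offset_hz=10e6, p_mw=2.6)
# roughly 180 dB
```

The three terms normalize noise performance for carrier frequency, measurement offset, and power draw, which is what makes the FoM usable for comparing LOs across publications.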
117

[en] TRANSMISSION AND RECEPTION OF DATA IN EHF / [pt] TRANSMISSÃO E RECEPÇÃO DE DADOS EM EHF

ANDY ALVAREZ ARELLANO 30 November 2017 (has links)
[en] In recent years, the frequency bands used in wireless communications have begun to saturate due to the growth of traffic and of the number of users; it is therefore necessary to study frequency bands not yet used for communications, such as the millimeter and sub-millimeter bands. Data transmission in the EHF (millimeter-wave) band is a possible solution for transmitting large amounts of information at high data rates, relieving the currently used frequency bands. In this work, data transmission at frequencies of 100, 200, 300 and 400 GHz is studied, using Quadrature Phase-Shift Keying (QPSK) modulation with an architecture based on the beating of two lasers, whose frequencies are combined in a beam splitter so that the current resulting from the sum of the electric fields of the two lasers is converted into a high-frequency signal by a photoconductive antenna. The beating of the two lasers, with different wavelengths and the same power, when interacting with the photoconductive antenna produces a frequency on the order of gigahertz. In the experiment, two types of receiver diodes were used, one broadband (less than 4 GHz) and one narrowband (less than 1 MHz). The two antennas were tested at different distances and with different carrier frequencies to determine which had the best performance in the EHF band for data transmission.
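Independently of the optical front end, the QPSK modulation used here can be sketched in baseband. The following is a generic illustrative model (Gray-mapped symbols over an AWGN channel at an arbitrary Eb/N0), not the experimental chain:

```python
import numpy as np

rng = np.random.default_rng(1)
n_sym = 10_000
bits = rng.integers(0, 2, 2 * n_sym)

# Gray-mapped QPSK: one bit on the I axis, one on the Q axis, symbol energy Es = 1
sym = ((1 - 2 * bits[0::2]) + 1j * (1 - 2 * bits[1::2])) / np.sqrt(2.0)

ebn0 = 10.0 ** (4.0 / 10.0)            # Eb/N0 = 4 dB, arbitrary
sigma = np.sqrt(1.0 / (4.0 * ebn0))    # per-dimension noise std (2 bits/symbol)
rx = sym + sigma * (rng.standard_normal(n_sym) + 1j * rng.standard_normal(n_sym))

# Symbol-by-symbol detection: the signs of I and Q recover the two bits
bits_hat = np.empty_like(bits)
bits_hat[0::2] = (rx.real < 0).astype(bits.dtype)
bits_hat[1::2] = (rx.imag < 0).astype(bits.dtype)
ber = np.mean(bits_hat != bits)
```

At 4 dB the measured bit error rate lands near the theoretical Q(√(2·Eb/N0)) ≈ 1.3%; the attraction of QPSK, as in this work, is doubling the bit rate of BPSK in the same bandwidth at the same per-bit error performance.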
118

Méthodes de Monte Carlo stratifiées pour l'intégration numérique et la simulation numériques / Stratified Monte Carlo methods for numerical integration and simulation

Fakhereddine, Rana 26 September 2013 (has links)
Monte Carlo (MC) methods are numerical methods that use random numbers to solve, on computers, problems from applied science and engineering. A quantity is estimated from repeated evaluations using N values; the error of the method is approximated through the variance of the estimator. In the present work, we analyze variance reduction methods and test their efficiency for numerical integration and for solving differential and integral equations. First, we present stratified MC methods and the Latin Hypercube Sampling (LHS) technique. Among stratification strategies, we focus on the simple approach (MCS): the unit hypercube I^s := [0,1)^s is divided into N subcubes of equal measure, and one random point is chosen in each subcube. We analyze the variance of the method for the problem of numerical quadrature. The case of estimating the measure of a subset of I^s is treated in particular detail. The variance of the MCS method can be bounded by O(1/N^{1+1/s}). The results of numerical experiments in dimensions 2, 3, and 4 show that these upper bounds are tight. We next propose a hybrid method between MCS and LHS that combines the properties of both techniques, with one random point in each subcube and with the projections of the points on each coordinate axis evenly distributed: one projection in each of the N subintervals that uniformly divide the unit interval I := [0,1). We call this technique Sudoku Sampling (SS). Conducting the same analysis as before, we show that the variance of the SS method is also bounded by O(1/N^{1+1/s}); the order of the bound is validated by numerical experiments in dimensions 2, 3, and 4. Next, we present an approach to the random walk method using the variance reduction techniques analyzed above. We propose an algorithm for solving the diffusion equation with a constant or spatially varying diffusion coefficient. Particles are sampled from the initial distribution and subjected to a Gaussian move in each time step. The particles are reordered according to their positions in every step, and the random numbers that produce the displacements are replaced by the stratified points used above. The improvement brought by this technique is evaluated in numerical experiments. An analogous approach is finally used for the numerical solution of the coagulation equation, which models the evolution of the sizes of particles that may agglomerate. The particles are first sampled from the initial size distribution. A time step is fixed and, in every step and for each particle, a coalescence partner is chosen and a random number decides whether coalescence occurs. If the particles are ordered by increasing size in every time step and the random numbers are replaced by stratified points, a variance reduction is observed relative to the usual MC algorithm.
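The simple stratification (MCS) described above is easy to reproduce for the subset-measure problem in dimension s = 2. This sketch (with arbitrary grid size and repetition count) compares its RMSE with crude MC at equal sample size N = m² for estimating the measure of the quarter disk, whose exact value is π/4:

```python
import numpy as np

rng = np.random.default_rng(0)
truth = np.pi / 4.0               # measure of the quarter disk in [0,1)^2

def plain_mc(n):
    """Crude MC estimate of the quarter-disk measure from n uniform points."""
    p = rng.random((n, 2))
    return np.mean(p[:, 0] ** 2 + p[:, 1] ** 2 < 1.0)

def stratified_mc(m):
    """MCS: one uniform point in each of the m*m equal subcubes of [0,1)^2."""
    i, j = np.meshgrid(np.arange(m), np.arange(m), indexing="ij")
    u = rng.random((m, m, 2))
    x = (i + u[..., 0]) / m
    y = (j + u[..., 1]) / m
    return np.mean(x ** 2 + y ** 2 < 1.0)

m, reps = 32, 200                 # N = m*m = 1024 points per estimate
err_mc = np.array([plain_mc(m * m) - truth for _ in range(reps)])
err_mcs = np.array([stratified_mc(m) - truth for _ in range(reps)])
rmse_mc = np.sqrt(np.mean(err_mc ** 2))
rmse_mcs = np.sqrt(np.mean(err_mcs ** 2))
```

Only the subcubes cut by the disk boundary contribute variance under MCS, which is where the O(1/N^{1+1/s}) bound comes from; the stratified RMSE is accordingly several times below the crude MC one at the same N.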
119

O problema da quadratura do círculo: uma abordagem histórica sob a perspectiva atual / The problem of squaring the circle: a historical approach from a present-day perspective

Santana, Erivaldo Ribeiro 30 April 2015 (has links)
CAPES - Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / This work traces the history of attempts to solve the quadrature of the circle, discusses their influence and contributions to the development of mathematics up to the present day, and encourages the use of dynamic geometry. We present a possible explanation of how geometry arose, along with a brief study of the number π, followed by a presentation of the GeoGebra software, the tool we used to build the figures and the implementations in this work. We use the equivalence of areas, based on Euclid's Elements, to solve an initial problem: constructing a quadrilateral equivalent to a given pentagon, which requires the demonstration of several propositions. We use the square to relate its area to those of the other polygonal figures through the method of "quadrature", carrying out the quadrature of the rectangle, the triangle, the pentagon, and the convex polygon with n sides. We use the Pythagorean theorem to add areas of squares, with brief comments on its use. This method was later also applied in attempts to square curvilinear figures such as the circle, which eventually gave rise to the problem of the quadrature of the circle. To present this problem, we show the geometric construction and the demonstration of two methods for obtaining the quadrature of the circle, with their respective results and comparisons. We then define constructible, algebraic and transcendental numbers, which allows us to arrive at a classification of the number π and its relation to the problem of the quadrature of the circle, reaching the answer to our problem. In defining the geometric mean, we show how to obtain some quadratures using this mean in the proposed activities. In other words, this work aims to present the problem of the quadrature of the circle, to investigate the methods developed by mathematicians throughout history to solve it, and finally to establish what answer those methods point to.
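The quadrature of the rectangle via the geometric mean, mentioned at the end of this abstract, reduces to one line of computation: the square with side √(ab) has the same area as the a × b rectangle (classically, √(ab) is constructed with compass and straightedge as the altitude of a right triangle inscribed in a semicircle). A minimal numeric check:

```python
import math

def rectangle_quadrature_side(a, b):
    """Side of the square equal in area to an a-by-b rectangle: the geometric mean."""
    return math.sqrt(a * b)

s = rectangle_quadrature_side(2.0, 8.0)   # a 2-by-8 rectangle squares to side 4
```

No such finite construction exists for the circle: squaring a unit circle would require constructing √π, which Lindemann's 1882 proof of the transcendence of π rules out, and that is the answer the thesis arrives at.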
120

Température effective d'un système hors équilibre : fluctuations thermiques d'un microlevier soumis à un flux de chaleur / Effective temperature of an out-of-equilibrium system: thermal fluctuations of a strongly heated cantilever

Geitner, Mickaël 23 October 2015 (has links)
Using a home-made quadrature phase differential interferometer, we measure the thermal fluctuations of the deflection of a micro-cantilever. From these measurements it is possible to infer various mechanical properties of the cantilever, such as its stiffness, resonance frequencies, and quality factors. In such a system, the maximal measurement precision is limited by the shot noise of the photodiodes. To increase the signal-to-noise ratio, we raise the light intensity of the measurement laser, lowering the background noise of the thermal fluctuation spectra.
Doing so, however, shifts the cantilever resonance frequencies to lower values. The first part of this thesis aims at understanding this phenomenon: we associate the frequency shift with heating of the cantilever by the interferometer laser and with the resulting heat flux along the cantilever, and we develop a model linking this effect to the temperature at the free end of the cantilever, assuming a linear temperature profile. The second part aims at measuring the effective temperature of a cantilever using an extension of the fluctuation-dissipation theorem. We show that the fluctuations of this out-of-equilibrium system are lower than those expected from the temperature profile, and we seek to identify the origin of this fluctuation deficit. In the last part, we estimate the temperature profiles of cantilevers while varying their geometric parameters, their absorption coefficient, and the position of the laser heating the cantilever.
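The link between measured thermal fluctuations and mechanical properties rests, at equilibrium, on equipartition: ½k⟨z²⟩ = ½k_BT for the fundamental mode, so the stiffness follows from the deflection variance. A minimal sketch with a made-up stiffness and synthetic Gaussian deflection data (the real measurement uses the full fluctuation spectrum):

```python
import numpy as np

kB = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0           # K, equilibrium temperature
k_true = 0.01       # N/m, assumed cantilever stiffness

# Equipartition for the fundamental mode: <z^2> = kB * T / k
var_z = kB * T / k_true
rng = np.random.default_rng(0)
z = rng.normal(0.0, np.sqrt(var_z), 1_000_000)   # synthetic deflection record

k_est = kB * T / np.var(z)   # stiffness recovered from the fluctuations
```

At 300 K a 0.01 N/m lever fluctuates by about 0.6 nm rms. The thesis's point is that under a strong heat flux the system is out of equilibrium, so this simple relation no longer holds as-is and an effective temperature must be introduced.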
