131

Limited Dependent Variable Correlated Random Coefficient Panel Data Models

Liang, Zhongwen August 2012 (has links)
In this dissertation, I consider linear and binary response correlated random coefficient (CRC) panel data models, as well as a truncated CRC panel data model, all of which are frequently used in economic analysis. I focus on the nonparametric identification and estimation of panel data models with unobserved heterogeneity that is captured by random coefficients, allowing these coefficients to be correlated with the regressors. For linear CRC models, I give identification conditions for the average slopes under a general nonparametric correlation between regressors and random coefficients, and I construct a √n-consistent estimator for the average slopes via varying coefficient regression. Identification of binary response panel data models with unobserved heterogeneity is difficult. I base identification and estimation on the framework of the model with a special regressor, a major approach proposed by Lewbel (1998, 2000) to solve heterogeneity and endogeneity problems in binary response models. With the additional information provided by the special regressor, a binary response CRC model can be transformed into a linear moment relation. I construct a semiparametric estimator for the average slopes and derive its √n asymptotic normality. For the truncated CRC panel data model, I obtain identification and estimation results based on the special regressor method used in Khan and Lewbel (2007), construct a √n-consistent estimator for the population mean of the random coefficient, and derive its asymptotic distribution. Simulations show the finite-sample advantage of my estimators. Finally, I use a linear CRC panel data model to reexamine the returns to job training. The results show that accounting for the correlation matters: the estimated return to training is seven times the estimate obtained when the correlation between covariates and random coefficients is ignored, and on average the rate of return to job training is 3.16% per 60 hours of training.
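To make the bias at stake concrete, here is a minimal simulation sketch, not the thesis's nonparametric estimator: a linear panel whose random slope is correlated with the regressor's individual mean. Pooled OLS that ignores the correlation is biased for the average slope, while a simple varying-coefficient regression on the individual mean (a Mundlak-style device) recovers it. The data-generating process and all names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, T = 5000, 4                                   # panels, periods

# Skewed individual mean of x; the random slope b_i is correlated with it.
xbar = rng.exponential(1.0, n) - 1.0             # E[xbar] = 0, skewed
b = 1.0 + 0.5 * xbar + rng.normal(0, 0.2, n)     # random coefficient, E[b] = 1
x = xbar[:, None] + rng.normal(0, 1, (n, T))     # regressor tied to xbar
y = b[:, None] * x + rng.normal(0, 0.1, (n, T))

# Pooled OLS ignores the x-b correlation and is biased for E[b].
naive = (x * y).sum() / (x * x).sum()

# Varying-coefficient fix: let the slope depend on the individual mean,
# y = b(xbar) * x + noise, then average the fitted slopes over individuals.
X = np.column_stack([x.ravel(), (x * xbar[:, None]).ravel()])
coef, *_ = np.linalg.lstsq(X, y.ravel(), rcond=None)
avg_slope = coef[0] + coef[1] * xbar.mean()

print(f"true E[b] = 1.00, pooled OLS = {naive:.3f}, corrected = {avg_slope:.3f}")
```

Under this design the pooled estimate converges to roughly 1.5, while the corrected estimate recovers the true average slope of 1.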
132

Fermions in two dimensions and exactly solvable models

de Woul, Jonas January 2011 (has links)
This Ph.D. thesis in mathematical physics concerns systems of interacting fermions with strong correlations. For these systems the physical properties can only be described in terms of the collective behavior of the fermions. Moreover, they are often characterized by a close competition between fermion localization and delocalization, which can result in complex and exotic physical phenomena. Strongly correlated fermion systems are usually modelled by many-body Hamiltonians in which the kinetic and interaction energies are of the same order of magnitude. This makes them challenging to study, as conventional computational methods, like mean-field or perturbation theory, often give unreliable results. Of particular interest are Hubbard-type models, which provide minimal descriptions of strongly correlated fermions. The research in this thesis focuses on such models defined on two-dimensional square lattices. One motivation is the so-called high-Tc problem of the cuprate superconductors. A main hypothesis is that there exists an underlying Fermi surface with nearly flat parts, i.e. regions where the surface is straight. It is shown that a particular continuum limit of the lattice system leads to an effective model amenable to computations. This limit is partial in that it only involves fermion degrees of freedom near the flat parts. The result is an effective quantum field theory that is analyzed using constructive bosonization methods. Various exactly solvable models of interacting fermions in two spatial dimensions are also derived and studied. / QC 20111207
133

Design of an Inverse Photoemission Spectrometer for the Study of Strongly Correlated Materials

McMahon, Christopher January 2012 (has links)
The design and construction of a state-of-the-art ultra-high-vacuum spectrometer for angle-resolved inverse photoemission spectroscopy is presented. Detailed descriptions of its most important components are included, especially the Geiger-Müller ultraviolet photodetectors. By building on recent developments in the literature, we expect our spectrometer to achieve resolution comparable to or better than that of other prominent groups, and in general to be one of the foremost apparatuses for studying the momentum dependence of the unoccupied states in strongly correlated materials. Summaries of the theory of angle-resolved inverse photoemission spectroscopy and the basics of ultra-high-vacuum science are also included.
134

Why be normal? : single crystal growth and X-ray spectroscopy reveal the startlingly unremarkable electronic structure of Tl-2201

Peets, Darren (has links)
High-quality platelet single crystals of Tl₂Ba₂CuO₆±δ (Tl-2201) have been grown using a novel time-varying encapsulation scheme, minimizing the thallium oxide loss that has plagued other attempts and reducing cation substitution. This encapsulation scheme allows the melt to be decanted from the crystals, a step previously impossible, and the remaining cation substitution is homogenized by a high-temperature anneal. Oxygen annealing schemes were developed to produce sharp superconducting transitions from 5 to 85 K without damaging the crystals. The crystals' high homogeneity and high degree of crystalline perfection are further evidenced by narrow rocking curves; by both metrics the crystals are comparable to YSZ-grown YBa₂Cu₃O₆₊δ. Electron probe microanalysis (EPMA) determined the crystals' composition to be Tl₁.₉₂₀₍₂₎Ba₁.₉₆₍₂₎Cu₁.₀₈₀₍₂₎O₆₊δ; X-ray diffraction found the composition of a Tc = 75 K crystal to be Tl₁.₉₁₄₍₁₄₎Ba₂Cu₁.₀₈₆₍₁₄₎O₆.₀₇₍₅₎, in excellent agreement. X-ray refinement of the crystal structure found the crystals orthorhombic at most dopings, with a structure in general agreement with previous powder data. That cation-substituted Tl-2201 can be orthorhombic, that orthorhombic crystals can be prepared, and that these superconduct are all new results. X-ray diffraction also found evidence of an as-yet-unidentified commensurate superlattice modulation. The Tl-2201 crystals' electronic structure was studied by X-ray absorption and emission spectroscopies (XAS/XES). The Zhang-Rice singlet band gains less intensity on overdoping than expected, suggesting a breakdown of the Zhang-Rice singlet approximation, and one thallium oxide band does not disperse as expected. The spectra correspond very closely with LDA band structure calculations and do not exhibit the upper Hubbard bands, arising from strong correlations, seen in other cuprates. The spectra are noteworthy for their simplicity, unprecedented in the high-Tc cuprates. The startling degree to which the electronic structure can be explained bodes well for future research in the cuprates. The overdoped cuprates, and Tl-2201 in particular, may offer a unique opportunity for understanding in an otherwise highly confusing family of materials.
135

Random Walk With Absorbing Barriers Modeled by Telegraph Equation With Absorbing Boundaries

Fan, Rong 01 August 2018 (has links)
The movements of organisms are usually modeled as random walks of particles. Under certain technical assumptions, these movements are described by diffusion equations. However, empirical data often show that the movements are not simple random walks but correlated random walks, which are described by telegraph equations. This thesis considers telegraph equations with and without bias, corresponding to correlated random walks with and without bias. Analytical solutions to the equations with absorbing boundary conditions are obtained, along with their mean first passage times. Numerical simulations of the corresponding correlated random walks are also performed; they approximate the analytical solutions very well, and the mean first passage times from the two approaches are highly consistent. This suggests that telegraph equations are a good model for organisms whose movement pattern is a correlated random walk. Furthermore, exploiting this consistency, the parameters of a telegraph equation can be estimated from the mean first passage time, which in turn can be estimated from experimental observation, giving biologists an easy way to obtain parameter values. Finally, this thesis analyzes the velocity distribution and the correlations of movement steps of amoebas, leaving the fitting of the movement data to telegraph equations as future work.
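As an illustration of the setup described above, the following sketch simulates an unbiased correlated (persistent) random walk between two absorbing barriers and compares its mean first passage time with the diffusion approximation x0(L - x0)/(2D), D = v²/(2λ), which the telegraph process approaches when λL/v is large. Parameters are illustrative assumptions, not values from the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)

def mfpt_crw(x0, L, v=1.0, lam=2.0, dt=1e-3, n_walkers=5000):
    """Mean first passage time of an unbiased correlated random walk:
    constant speed v, direction reversing at Poisson rate lam,
    absorbing barriers at 0 and L."""
    x = np.full(n_walkers, x0, dtype=float)
    s = rng.choice([-1.0, 1.0], n_walkers)        # current direction
    t = np.zeros(n_walkers)
    alive = np.ones(n_walkers, dtype=bool)
    while alive.any():
        x[alive] += s[alive] * v * dt
        t[alive] += dt
        flip = (rng.random(n_walkers) < lam * dt) & alive
        s[flip] *= -1.0                           # direction reversal
        alive &= (x > 0) & (x < L)
    return t.mean()

x0, L, v, lam = 2.0, 4.0, 1.0, 2.0
D = v**2 / (2 * lam)          # diffusion limit of the telegraph process
print("simulated MFPT          :", mfpt_crw(x0, L, v, lam))
print("diffusion approximation :", x0 * (L - x0) / (2 * D))
```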
136

A 1.2V 25MSPS Pipelined ADC Using Split CLS with Op-amp Sharing

January 2012 (has links)
abstract: As technology feature sizes shrink, achieving high gain becomes very difficult in deep sub-micron technologies. As supply voltages drop, cascodes are difficult to implement, and cascaded amplifiers are needed to achieve sufficient gain with the required output swing. This sets a fundamental limit on the SNR and hence on the maximum resolution an ADC can achieve. With the RSD algorithm and range overlap, the sub-ADC can tolerate large comparator offsets, leaving the linearity and accuracy requirements to the DAC and the residue gain stage. Typically, the multiplying DAC requires a high-gain, wide-bandwidth op-amp, whose design becomes challenging in deep sub-micron technologies. This work presents a 12-bit, 25 MSPS, 1.2 V pipelined ADC using the split-CLS technique in the IBM 130 nm 8HP process, using only CMOS devices, for Large Hadron Collider (LHC) applications. The CLS technique relaxes the op-amp gain requirement and improves the signal-to-noise ratio, with rail-to-rail output swing and no increase in power or input sampling capacitance. An op-amp sharing technique is incorporated with the split-CLS technique, further decreasing the number of op-amps and hence the power. The entire pipelined converter is implemented as six 2.5-bit RSD stages, which reduces the latency associated with the pipelined architecture, one of the main requirements for the LHC along with low power. Two different OTAs have been designed for use in the split-CLS technique. Bootstrapped switches and pass-gate switches are used in the circuit, along with a low-power, kickback-compensated dynamic comparator. / Dissertation/Thesis / M.S. Electrical Engineering 2012
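A behavioral sketch of the comparator-offset tolerance mentioned above (an illustration of the generic 2.5-bit RSD technique, not the thesis design): six stages of gain 4 with codes -3..+3 leave a redundancy margin of VREF/8 in the sub-ADC thresholds, so sizable offsets leave the reconstructed 12-bit output unchanged. All parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
VREF = 1.0                                # normalized; input range [-VREF, VREF]

def stage_25bit(v, offsets):
    """One 2.5-bit RSD stage: 6 comparators nominally at +/-(1,3,5)/8 * VREF
    (each shifted by its own offset), code d in {-3..+3}, residue 4v - d*VREF."""
    thr = np.array([-5, -3, -1, 1, 3, 5]) / 8.0 * VREF + offsets
    d = np.sum(v[:, None] > thr[None, :], axis=1) - 3
    return d, 4.0 * v - d * VREF

def convert(v, n_stages=6, offset_range=0.0):
    """Behavioral 12-bit pipeline: six 2.5-bit RSD stages combined digitally."""
    est, r = np.zeros_like(v), v.copy()
    for i in range(1, n_stages + 1):
        offs = rng.uniform(-offset_range, offset_range, 6)  # comparator offsets
        d, r = stage_25bit(r, offs)
        est += d * VREF / 4.0**i              # each stage weighted by 4^-i
    return est

v = rng.uniform(-0.99, 0.99, 100_000)
for off in (0.0, 0.10 * VREF):                # 0.10 < VREF/8 = 0.125 margin
    err = np.abs(convert(v, offset_range=off) - v).max()
    print(f"offset range +/-{off:.2f}: max error = {err:.2e}"
          f" (1 LSB at 12 bits = {2 * VREF / 2**12:.2e})")
```

Both runs report the same sub-LSB error bound: as long as the offsets stay within the VREF/8 redundancy margin, the wrong comparator decision is absorbed by the residue staying in range.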
137

Triple Sampling: An Application to a 14b 10 MS/s Cyclic Converter

January 2012 (has links)
abstract: Semiconductor device scaling has kept up with Moore's law for the past decades and they have been scaling by a factor of half every one and half years. Every new generation of device technology opens up new opportunities and challenges and especially so for analog design. High speed and low gain is characteristic of these processes and hence a tradeoff that can enable to get back gain by trading speed is crucial. This thesis proposes a solution that increases the speed of sampling of a circuit by a factor of three while reducing the specifications on analog blocks and keeping the power nearly constant. The techniques are based on the switched capacitor technique called Correlated Level Shifting. A triple channel Cyclic ADC has been implemented, with each channel working at a sampling frequency of 3.33MS/s and a resolution of 14 bits. The specifications are compared with that based on a traditional architecture to show the superiority of the proposed technique. / Dissertation/Thesis / Ph.D. Electrical Engineering 2012
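A back-of-the-envelope sketch of why CLS relaxes amplifier specifications: to first order, the two-phase CLS operation re-settles the first-phase error, squaring the static loop-gain error term from roughly 1/(1+Aβ) to (1/(1+Aβ))². The gain-of-4 stage and 14-bit target below are illustrative assumptions, not the thesis's design values.

```python
import math

def settling_error(A, beta, cls=False):
    """Static gain error of a switched-capacitor stage with DC gain A and
    feedback factor beta; CLS re-settles the first-phase error, squaring it."""
    eps = 1.0 / (1.0 + A * beta)
    return eps**2 if cls else eps

A, beta = 100.0, 0.25                  # 40 dB amplifier in a gain-of-4 stage
print("error, conventional:", settling_error(A, beta))        # ~3.8e-2
print("error, with CLS    :", settling_error(A, beta, True))  # ~1.5e-3

# DC gain needed to keep the static error below 1 LSB at 14 bits:
lsb = 2.0**-14
A_conv = (1.0 / lsb - 1.0) / beta               # conventional: eps < 2^-14
A_cls = (math.sqrt(1.0 / lsb) - 1.0) / beta     # CLS: eps^2 < 2^-14
print(f"required gain: conventional ~{20 * math.log10(A_conv):.0f} dB,"
      f" with CLS ~{20 * math.log10(A_cls):.0f} dB")
```

Under these assumptions a roughly 54 dB amplifier with CLS matches the static accuracy that would otherwise require about 96 dB, which is the sense in which the technique trades speed (a second settling phase) for gain.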
138

Investigation of exotic correlated states of matter in low dimension

Soni, Medha 16 September 2016 (has links)
Quantum statistics is an important aspect of quantum mechanics, laying down the rules that identify different classes of particles. In this thesis we study two projects: one surveys models of Fibonacci anyons, the other fermions in optical lattices. We analyse the physics of mobile non-Abelian anyons beyond one dimension by constructing the simplest possible model of two-dimensional itinerant interacting anyons, in close analogy to fermionic systems and inspired by previous anyonic studies. In particular, we ask whether spin-charge separation, well known in one dimension, survives in the ladder model for non-Abelian anyons. In the course of this study, we found a novel effective model that possibly hosts a gapped topological state. For fermions in one-dimensional optical lattices, we survey the effects of non-adiabatic lattice loading on four different target states and propose protocols to minimise the heating of quantum gases. Ultracold atoms in optical lattices are an ideal realization for studying strongly correlated systems subject to a periodic potential. Evaporative cooling of a trapped atomic cloud, i.e. without the optical lattice potential, has proven to be a very effective process: current protocols achieve temperatures as low as T/TF ≈ 0.08 (for fermions), which cannot be reached in the presence of the optical lattice. We aim to understand whether defects caused by a poor distribution of particles during lattice loading prevent the atoms from cooling to the desired temperature. We devise improved ramp-up schemes in which one or more parameters of the system are changed dynamically in order to reduce the density of defects created.
139

Efficiency of the Group Control Chart for the mean versus the traditional Shewhart chart in processes with correlated streams

Max Brandao de Oliveira 25 February 2013 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) / The use of Shewhart charts to monitor processes whose products come from several production streams (parallel processes) should be viewed with caution, because samples may be built from items belonging to different populations. In constructing these charts, different sources of process variation should not be mixed, since doing so can lead to wrong conclusions and reduce the power of the chart to detect assignable causes. One solution is to use a separate control chart for each stream, the traditional Shewhart chart (GCS), but this makes control difficult and bureaucratic. An alternative is the Group Control Chart (GG), which monitors multiple streams with a single chart. The presence of a correlation structure in the production process, affecting both types of charts, can compromise the analysis by violating the assumption of independent samples. The specialized literature is lacking in studies of this nature. Given this scenario, the objective of this work is to develop a simulation study, using the software R (R Development Core Team, 2011), of the Group Control Chart in terms of its performance and efficiency as an alternative to the Shewhart model in parallel processes with correlated streams. The study analyzes shifts in the process mean and variance, individually and jointly. The work also contextualizes the importance of Statistical Process Control (SPC) within production logistics and its theoretical and practical contribution to SPC within the proposed objective. Results indicate that the efficiency of the GG relative to the traditional chart increases as the correlation grows. Moreover, for small disturbances and 3 streams, the group chart is up to 55% slower than the traditional Shewhart chart in detecting a joint shift in the process mean and variance. For 10 streams, the GG outperforms the GCS by about 36% at correlation 0.5, giving evidence that, for a large number of streams (k ≥ 10), the GG is better than the GCS in the presence of correlation between the streams.
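As a flavor of the kind of simulation involved (an illustration only, not the thesis's GG/GCS comparison), the sketch below estimates the in-control average run length of a chart that plots only the extreme of k correlated streams each period: as the correlation ρ grows, the streams move together and the false-alarm behavior changes, which is why limits calibrated for independent streams mislead. The common-factor construction and all thresholds are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def in_control_arl(k=3, rho=0.5, L=3.0, n_runs=2000, max_t=20000):
    """In-control average run length of a chart plotting only the extreme of
    k streams per period (group-chart style) against +/- L sigma limits.
    A shared common factor gives the streams pairwise correlation rho."""
    run_lengths = []
    for _ in range(n_runs):
        for t in range(1, max_t + 1):
            z = np.sqrt(rho) * rng.normal() + np.sqrt(1 - rho) * rng.normal(size=k)
            if np.abs(z).max() > L:        # extreme stream breaches the limits
                run_lengths.append(t)
                break
        else:
            run_lengths.append(max_t)
    return np.mean(run_lengths)

for rho in (0.0, 0.5, 0.9):
    print(f"rho = {rho}: in-control ARL ~ {in_control_arl(rho=rho):.0f}")
```

With independent streams the scheme false-alarms more often (shorter ARL) than with strongly correlated streams, so the same limits deliver different protection depending on ρ.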
140

Centralized and distributed address correlated network coding protocols / Optimization and application of network coding in the architecture of future wireless networks

Abdul-Nabi, Samih 28 September 2015 (has links)
Network coding (NC) is a technique in which transmitted data is encoded and decoded by the nodes of the network in order to enhance throughput and reduce delays. Using algebraic operations, encoding at a node combines several packets into one message and decoding restores these packets. NC requires fewer transmissions to deliver all the data, at the cost of more processing at the nodes. NC can be applied at any of the ISO layers; here the focus is on the network layer. In this work, we introduce novelties to the NC paradigm with the intent of building easy-to-implement NC protocols that improve bandwidth usage, enhance QoS, and reduce the impact of packet loss in lossy networks. Several challenges are addressed concerning the coding and decoding processes and the related mechanisms used to deliver packets between end nodes. Notably, questions such as the life cycle of packets in a coding environment, the cardinality of coded messages, the byte overhead of transmissions, and buffering durations are analyzed, supported by theorems, and verified through simulations. In studying the packet loss problem, new theorems describing the behavior of the network under loss are proposed, along with novel mechanisms to overcome this loss. In the first part of the thesis, an overview of NC is given, starting from the seminal work of Ahlswede et al. NC techniques are then detailed, with a focus on linear and binary NC, and illustrated with examples drawn from different scenarios to clarify the advantages and disadvantages of each technique. In the second part, a new address correlated NC (ACNC) protocol is presented, together with two approaches that use it: a centralized approach, where decoding is performed at end nodes, and a distributed approach, where every node in the network participates in the decoding process. Centralized decoding is elaborated by first presenting its decision models and the detailed decoding procedure at end nodes. The cardinality of received coded messages and the buffering requirements at end nodes are investigated, and the concepts of aging and maturity are introduced. The distributed decoding approach is presented as a way to reduce the load on end nodes by distributing the decoding process and the buffering requirements over intermediate nodes. Loss and recovery in NC are examined for both approaches. For centralized decoding, two mechanisms that limit the impact of loss are presented; to this end, the concepts of closures and covering sets are introduced, and covering-set discovery is performed on undecodable messages to find the optimal set of packets to request from the sender so that all received packets can be decoded. For distributed decoding, a new hop-by-hop reliability mechanism is proposed that exploits the NC itself and recovers lost packets without requiring an acknowledgement mechanism.
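For readers new to the binary NC this thesis builds on, here is the canonical two-way relay example (a generic illustration, not the ACNC protocol itself): the relay broadcasts the XOR of the two packets instead of forwarding each separately, and each end node decodes using the packet it already holds, saving one transmission.

```python
def xor_bytes(p: bytes, q: bytes) -> bytes:
    """XOR two equal-length packets byte by byte (binary network coding)."""
    return bytes(a ^ b for a, b in zip(p, q))

pkt_a = b"hello from A"            # held by node A, destined for node B
pkt_b = b"hello from B"            # held by node B, destined for node A

coded = xor_bytes(pkt_a, pkt_b)    # relay broadcasts ONE coded packet A^B
at_b = xor_bytes(coded, pkt_b)     # B decodes with the packet it already has
at_a = xor_bytes(coded, pkt_a)     # A decodes likewise
assert at_b == pkt_a and at_a == pkt_b
print("decoded at B:", at_b, "| decoded at A:", at_a)
```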
