1

DESIGN OF THE TRANSCONDUCTANCE AMPLIFIER FOR FREQUENCY DOMAIN SAMPLING RECEIVER

Chen, Xi 16 January 2010 (has links)
In this work, the circuit implementation of the front-end of a Frequency Domain (FD) sampling receiver is presented. Targeting two different applications, two transconductance amplifiers are designed. A highly linear transconductance amplifier with 25 dBm IIP3 is proposed to form the high-resolution, high-sampling-rate FD receiver; the whole system achieves an overall sampling rate of 2 GS/s with 10-bit resolution. A second, low-noise transconductance amplifier exploiting noise cancelling is designed to build the FD wireless communication receiver, which is an excellent candidate for Software Defined Radio (SDR) and Cognitive Radio (CR). The proposed noise-cancelling scheme suppresses both thermal noise and flicker noise at the front-end, improving the system Noise Figure (NF) by 3.28 dB. Both transconductance amplifiers are simulated and fabricated in TI 45 nm CMOS technology.
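As a quick orientation on the dB quantities quoted in the abstract (not part of the thesis itself), a minimal Python sketch converting them to linear scale:

```python
import math  # standard library only

def dbm_to_watts(p_dbm: float) -> float:
    """Convert a power level in dBm to watts."""
    return 10 ** ((p_dbm - 30) / 10)

def db_to_ratio(x_db: float) -> float:
    """Convert a quantity in dB to a linear power ratio."""
    return 10 ** (x_db / 10)

# An IIP3 of 25 dBm corresponds to roughly 0.316 W.
print(f"IIP3: {dbm_to_watts(25.0):.3f} W")

# A 3.28 dB noise-figure improvement means the noise factor F
# drops by a factor of about 2.13 (since NF = 10*log10(F)).
print(f"Noise-factor reduction: {db_to_ratio(3.28):.2f}x")
```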
2

Developing a basis for characterizing precision of estimates produced from non-probability samples on continuous domains

Cooper, Cynthia 20 February 2006 (has links)
Graduation date: 2006 / This research addresses sample-process variance estimation on continuous domains, and for non-probability samples in particular. The motivation is a scenario in which a program has collected non-probability samples and there is interest in characterizing how much an extrapolation to the domain would vary given similarly arranged collections of observations. The research does not address the risk of bias; a key assumption is that the observations could represent the response on the domain of interest, which excludes hot-spot monitoring programs. The research is presented as a collection of three manuscripts. The first (to be published in Environmetrics (2006)) reviews and compares model- and design-based approaches to sampling and estimation on continuous domains and promotes a model-assisted sample-process variance estimator. The next two manuscripts are companion papers. With the objective of quantifying the uncertainty of an estimator based on a non-probability sample, the proposed approach is first to characterize a class of sets of locations that are similarly arranged to the collection of locations in the non-probability sample, and then to predict the variability of an estimate over that class of sets using the covariance structure indicated by the non-probability sample (assuming that covariance structure is indicative of the covariance structure on the study region). The first companion paper discusses characterizing classes of similarly arranged sets via the specification of a metric density; goodness-of-fit tests are demonstrated on several types of patterns (dispersed, random and clustered) and on a non-probability collection of locations surveyed by the Oregon Department of Fish & Wildlife on the Alsea River basin in Oregon. The second companion paper addresses predicting the variability of an estimate over sets in a class of sets, using a Monte Carlo process on a simulated response with appropriate covariance structure.
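The Monte Carlo idea in the second companion paper can be illustrated with a minimal sketch, assuming an exponential covariance on a one-dimensional domain and using simple random subsets of locations as a stand-in for the metric-density characterization of "similarly arranged" sets; both assumptions are illustrative and not taken from the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)

# Dense grid standing in for a continuous 1-D domain.
grid = np.linspace(0.0, 1.0, 400)

# Assumed exponential covariance; sill and range are illustrative only.
def exp_cov(s, t, sill=1.0, rng_par=0.15):
    return sill * np.exp(-np.abs(s[:, None] - t[None, :]) / rng_par)

# One simulated response with that covariance structure.
C = exp_cov(grid, grid)
L = np.linalg.cholesky(C + 1e-10 * np.eye(len(grid)))
response = L @ rng.standard_normal(len(grid))

# Monte Carlo over sets of n locations.  (Here: simple random subsets;
# the papers instead characterize "similarly arranged" sets via a
# metric density, which this sketch does not implement.)
n, reps = 30, 2000
estimates = np.array([
    response[rng.choice(len(grid), size=n, replace=False)].mean()
    for _ in range(reps)
])

print(f"Predicted sample-process variance of the mean: {estimates.var(ddof=1):.4f}")
```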
3

Essays zu methodischen Herausforderungen im Large-Scale Assessment

Robitzsch, Alexander 21 January 2016 (has links)
Several methodological challenges accompany the growing use of empirical large-scale student assessments such as PISA and TIMSS, in which item response theory (IRT) models are essential for scaling student abilities. This thesis investigates the consequences of model violations in unidimensional IRT models (especially the Rasch model), focusing on four methodological challenges. First, position and context effects imply that, contrary to a unidimensional IRT model, item difficulties depend on an item's position in the test booklet and on the composition of the booklet, and student abilities may vary over the course of a test. Second, administering items within testlets causes local dependencies, and it is unclear whether and how these dependencies should be taken into account in scaling. Third, item difficulties can vary across school classes due to different opportunities to learn. Fourth, the number of omitted items is generally non-negligible, particularly in low-stakes tests. The thesis argues that estimates of item difficulties, student abilities and reliabilities need not be biased despite such model violations. It further argues that the choice of an IRT model often cannot, and should not, be made solely on psychometric grounds; the same holds for the question of how omitted items should be scored, where only validity considerations can provide guidance.
Model violations in IRT models can be placed conceptually within the approach of domain sampling (item sampling; generalizability theory), in which the existence of latent variables need not be posited. The thesis shows that statistical uncertainty in modelling competencies arises not only from the sampling of persons, but also from the sampling of items and from the choice of statistical models.
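To make the domain-sampling point concrete, here is a small illustrative sketch (not from the thesis; all parameters are hypothetical). It simulates responses under the Rasch model, P(X = 1 | theta, b) = exp(theta - b) / (1 + exp(theta - b)), and shows that resampling items from an item domain by itself induces variability in the mean score, independently of the person sample:

```python
import numpy as np

rng = np.random.default_rng(1)

def rasch_prob(theta, b):
    """Rasch model: P(X=1 | theta, b) = exp(theta - b) / (1 + exp(theta - b))."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

# Hypothetical item domain (difficulties) and a fixed sample of persons (abilities).
item_domain = rng.normal(0.0, 1.0, size=500)
persons = rng.normal(0.0, 1.0, size=200)

# Draw repeated item samples from the domain and score the same persons;
# the spread of the mean score across item samples is uncertainty coming
# from item sampling alone, not from the person sample.
means = []
for _ in range(1000):
    b = rng.choice(item_domain, size=25, replace=False)
    p = rasch_prob(persons[:, None], b[None, :])
    x = rng.binomial(1, p)
    means.append(x.mean())

print(f"SD of mean score across item samples: {np.std(means, ddof=1):.4f}")
```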
