451

High Speed On-Chip Measurement Circuit / Inbyggd krets för höghastighetsmätning på chip

Stridfelt, Arvid January 2005 (has links)
This master's thesis describes a design exploration of a circuit capable of measuring high-speed signals without adding significant capacitive load to the measured node. It is designed in a 0.13 µm CMOS process with a supply voltage of 1.2 V. The circuit is a master-slave track-and-hold architecture combined with a capacitive voltage divider and an NMOS source follower as input buffer to protect the measured node and increase the input voltage range. The thesis presents the implementation process and the theory needed to understand the design decisions and considerations made throughout the design. The results are based on transistor-level simulations performed in Cadence Spectre. They show that it is possible to observe the analog behaviour of a high-speed signal by down-converting it to a lower frequency that can be brought off-chip. The trade-off between the capacitive load added to the measured node and the input bandwidth of the measurement circuit is also presented.
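One way to picture the down-conversion idea is equivalent-time subsampling: clocking a track-and-hold slightly off an integer sub-multiple of the signal frequency produces a slow alias that traces the original waveform. The sketch below is a generic illustration with made-up frequencies, not the circuit parameters from the thesis:

```python
import numpy as np

# Hypothetical values chosen only for illustration; the thesis does not specify them.
f_sig = 2.0e9            # high-speed on-chip signal frequency (2 GHz)
f_clk = 99.95e6          # track-and-hold clock, slightly detuned from f_sig / 20

# Sampling a periodic signal with a clock slightly offset from an integer
# sub-multiple of its frequency yields a slow replica whose shape follows
# the original waveform but can be driven off-chip at low speed.
n = np.arange(2000)                      # sample indices
t = n / f_clk                            # sampling instants
samples = np.sin(2 * np.pi * f_sig * t)  # ideal track-and-hold output (the slow replica)

# Frequency of the down-converted (alias) replica seen at the output:
f_alias = abs(f_sig - round(f_sig / f_clk) * f_clk)
print(f"replica frequency ≈ {f_alias / 1e6:.2f} MHz")
print(f"samples per replica period ≈ {f_clk / f_alias:.0f}")
```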
453

Sampling Algorithms for Evolving Datasets

Gemulla, Rainer 24 October 2008 (has links) (PDF)
Perhaps the most flexible synopsis of a database is a uniform random sample of the data; such samples are widely used to speed up the processing of analytic queries and data-mining tasks, to enhance query optimization, and to facilitate information integration. Most of the existing work on database sampling focuses on how to create or exploit a random sample of a static database, that is, a database that does not change over time. The assumption of a static database, however, severely limits the applicability of these techniques in practice, where data is often not static but continuously evolving. In order to maintain the statistical validity of the sample, any changes to the database have to be appropriately reflected in the sample. In this thesis, we study efficient methods for incrementally maintaining a uniform random sample of the items in a dataset in the presence of an arbitrary sequence of insertions, updates, and deletions. We consider instances of the maintenance problem that arise when sampling from an evolving set, from an evolving multiset, from the distinct items in an evolving multiset, or from a sliding window over a data stream. Our algorithms completely avoid any accesses to the base data and can be several orders of magnitude faster than algorithms that do rely on such expensive accesses. The improved efficiency of our algorithms comes at virtually no cost: the resulting samples are provably uniform and only a small amount of auxiliary information is associated with the sample. We show that the auxiliary information not only facilitates efficient maintenance, but it can also be exploited to derive unbiased, low-variance estimators for counts, sums, averages, and the number of distinct items in the underlying dataset. In addition to sample maintenance, we discuss methods that greatly improve the flexibility of random sampling from a system's point of view. More specifically, we initiate the study of algorithms that resize a random sample upwards or downwards. Our resizing algorithms can be exploited to dynamically control the size of the sample when the dataset grows or shrinks; they facilitate resource management and help to avoid under- or oversized samples. Furthermore, in large-scale databases with data being distributed across several remote locations, it is usually infeasible to reconstruct the entire dataset for the purpose of sampling. To address this problem, we provide efficient algorithms that directly combine the local samples maintained at each location into a sample of the global dataset. We also consider a more general problem, where the global dataset is defined as an arbitrary set or multiset expression involving the local datasets, and provide efficient solutions based on hashing.
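For background, classic reservoir sampling maintains a uniform sample under insertions only; the sketch below illustrates that baseline (it is not one of the maintenance algorithms developed in the thesis, which also handle updates and deletions without accessing the base data):

```python
import random

class ReservoirSample:
    """Classic reservoir sampling (Algorithm R) for an insert-only stream:
    keeps a uniform random sample of at most k of the items seen so far."""

    def __init__(self, k):
        self.k = k          # target sample size
        self.n = 0          # items seen so far
        self.sample = []

    def insert(self, item):
        self.n += 1
        if len(self.sample) < self.k:
            self.sample.append(item)
        else:
            # Replace a random slot with probability k / n, which keeps every
            # item seen so far in the sample with equal probability.
            j = random.randrange(self.n)
            if j < self.k:
                self.sample[j] = item

rs = ReservoirSample(k=100)
for x in range(10_000):
    rs.insert(x)
print(len(rs.sample), rs.sample[:5])
```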
455

Effet de l'échantillonnage non proportionnel de cas et de témoins sur une méthode de vraisemblance maximale pour l'estimation de la position d'une mutation sous sélection / Effect of non-proportional sampling of cases and controls on a maximum-likelihood method for estimating the position of a mutation under selection

Villandré, Luc January 2008 (has links)
Thesis digitized by the Division de la gestion de documents et des archives of the Université de Montréal.
456

Pesquisas sob amostragem informativa utilizando o FBST / Surveys under informative sampling using the FBST

Azerêdo, Daniel Mendes 28 May 2013 (has links)
Pfeffermann, Krieger e Rinott (1998) apresentaram uma metodologia para modelar processos de amostragem que pode ser utilizada para avaliar se este processo de amostragem é informativo. Neste cenário, as probabilidades de seleção da amostra são aproximadas por uma função polinomial dependendo das variáveis resposta e concomitantes. Nesta abordagem, nossa principal proposta é investigar a aplicação do teste de significância FBST (Full Bayesian Significance Test), apresentado por Pereira e Stern (1999), como uma ferramenta para testar a ignorabilidade amostral, isto é, para avaliar uma relação de significância entre as probabilidades de seleção da amostra e a variável resposta. A performance desta modelagem estatística é testada com alguns experimentos computacionais. / Pfeffermann, Krieger and Rinott (1998) introduced a framework for modeling sampling processes that can be used to assess if a sampling process is informative. In this setting, sample selection probabilities are approximated by a polynomial function depending on outcome and auxiliary variables. Within this framework, our main purpose is to investigate the application of the Full Bayesian Significance Test (FBST), introduced by Pereira and Stern (1999), as a tool for testing sampling ignorability, that is, to detect a significant relation between the sample selection probabilities and the outcome variable. The performance of this statistical modelling framework is tested with some simulation experiments.
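For readers unfamiliar with the FBST, here is a minimal sketch of the e-value computation in a simple conjugate binomial model; this is a generic illustration only, not the sampling-ignorability model used in the dissertation:

```python
import numpy as np
from scipy import stats

# Generic FBST e-value for H0: theta = theta0 in a binomial model with a
# Beta(1, 1) prior. The data values below are made up for illustration.
y, n, theta0 = 7, 20, 0.5
post = stats.beta(1 + y, 1 + n - y)           # conjugate posterior

draws = post.rvs(size=200_000, random_state=0)
dens_at_draws = post.pdf(draws)
dens_at_h0 = post.pdf(theta0)

# Tangential set: parameter values with higher posterior density than H0.
prob_tangential = np.mean(dens_at_draws > dens_at_h0)
e_value = 1.0 - prob_tangential               # evidence in favour of H0
print(f"FBST e-value for H0: theta = {theta0}: {e_value:.3f}")
```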
457

Bayesian Predictive Inference Under Informative Sampling and Transformation

Shen, Gang 29 April 2004 (has links)
We have considered the problem in which a biased sample is selected from a finite population, and this finite population itself is a random sample from an infinitely large population, called the superpopulation. The parameters of the superpopulation and the finite population are of interest. There is some information about the selection mechanism in that the selection probabilities are linearly related to the measurements. This is typical of establishment surveys, where the selection probabilities are taken to be proportional to the previous year's characteristics. When all the selection probabilities are known, as in our problem, inference about the finite population can be made, but inference about the distribution is not so clear. For continuous measurements, one might assume that the values are normally distributed, but as a practical issue normality can be tenuous. In such a situation a transformation to normality may be useful, but this transformation will destroy the linearity between the selection probabilities and the values. The purpose of this work is to address this issue. In this light we have constructed two models, an ignorable selection model and a nonignorable selection model. We use the Gibbs sampler and the sampling importance resampling algorithm to fit the nonignorable selection model. We have emphasized estimation of the finite population parameters, although within this framework other quantities can be estimated easily. We have found that our nonignorable selection model can correct the bias due to unequal selection probabilities, and it provides improved precision over the estimates from the ignorable selection model. In addition, we have described the case in which all the selection probabilities are unknown. This is useful because many agencies (e.g., government) tend to hide these selection probabilities when public-use data are constructed. Also, we have given an extensive theoretical discussion of Poisson sampling, the underlying sampling scheme in our models, which is especially useful in the case in which the selection probabilities are unknown.
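As a minimal sketch of the Poisson sampling scheme mentioned above, the following draws a sample with inclusion probabilities proportional to a size measure and forms the Horvitz-Thompson estimate of a total; the size values and target sample size are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative Poisson sampling: each unit is included independently with its
# own inclusion probability, here proportional to a size measure x
# (e.g., last year's value), capped at 1. Values are made up for illustration.
x = rng.lognormal(mean=0.0, sigma=1.0, size=1000)    # size measure
target_n = 100                                       # expected sample size
pi = np.minimum(1.0, target_n * x / x.sum())         # inclusion probabilities

include = rng.random(x.size) < pi                    # independent Bernoulli draws
sample_idx = np.flatnonzero(include)

# Horvitz-Thompson estimator of the population total of x under Poisson sampling.
ht_total = np.sum(x[sample_idx] / pi[sample_idx])
print(len(sample_idx), round(x.sum(), 1), round(ht_total, 1))
```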
458

Development and application of a new passive sampling device: the lipid-free tube (LFT) sampler

Quarles, Lucas W. 29 September 2009 (has links)
Contaminants can exist in a wide range of states in aqueous environments, especially in surface waters. They can be freely dissolved or associated with dissolved or particulate organic matter, depending on their chemical and physical characteristics. The freely dissolved fraction represents the most bioavailable fraction to an organism. These freely dissolved contaminants can cross biomembranes, potentially exerting toxic effects. Passive sampling devices (PSDs) have been developed to aid in sampling many of these contaminants by distinguishing between the freely dissolved and bound fractions of a contaminant. A new PSD, the Lipid-Free Tube (LFT) sampler, was developed in response to some of the shortcomings of other current PSDs that sample hydrophobic organic contaminants (HOCs). The device and laboratory methods were originally modeled after a widely utilized PSD, the semipermeable membrane device (SPMD), and then improved upon. The effectiveness, efficiency, and sensitivity of not only the PSD itself, but also the laboratory methods, were investigated. One requirement during LFT development was to ensure LFTs could be coupled with biological analyses without deleterious results. In an embryonic zebrafish developmental toxicity assay, embryos exposed to un-fortified LFT extracts did not show significant adverse biological response as compared to controls. LFT technology also lends itself to easy application in monitoring pesticides at remote sampling sites. LFTs were utilized during a series of training exchanges between Oregon State University and the Centre de Recherches en Ecotoxicologie pour le Sahel (CERES)/LOCUSTOX laboratory in Dakar, Senegal that sought to build "in country" analytical capacity. The application of LFTs as biological surrogates for predicting potential human health risk endpoints, such as those in a public health assessment, was also investigated. LFT mass and accumulated contaminant masses were used directly, representing the amount of contaminants an organism would be exposed to through partitioning, assuming steady state without metabolism. These exposure concentrations allow for calculating potential health risks in a human health risk model. The LFT proves to be a robust tool not only for assessing bioavailable water concentrations of HOCs, but also for potentially providing many insights into the toxicological significance of aquatic contaminants and mixtures. / Graduation date: 2010
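To illustrate how a passive sampler's accumulated mass is converted to a freely dissolved water concentration, here is a minimal sketch using the standard kinetic and equilibrium passive-sampler relations; the sampling rate, partition coefficient, and masses below are hypothetical and are not the LFT calibration values from the thesis:

```python
# Generic passive-sampler back-calculation of a freely dissolved water
# concentration. In the kinetic (linear-uptake) regime the time-weighted
# average concentration is Cw = N / (Rs * t); at equilibrium it is
# Cw = N / (Ksw * Vs). All numbers are hypothetical.
N_accumulated_ng = 250.0     # analyte mass found in the sampler (ng)
Rs_L_per_day = 4.0           # sampling rate for this analyte (L/day)
t_days = 28.0                # deployment time (days)
Ksw = 10 ** 5.2              # sampler-water partition coefficient (L/L)
Vs_L = 0.005                 # sampler sorbent volume (L)

Cw_kinetic = N_accumulated_ng / (Rs_L_per_day * t_days)   # ng/L, TWA estimate
Cw_equilibrium = N_accumulated_ng / (Ksw * Vs_L)          # ng/L, equilibrium estimate

print(f"TWA (kinetic) estimate:  {Cw_kinetic:.2f} ng/L")
print(f"Equilibrium estimate:    {Cw_equilibrium:.4f} ng/L")
```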
459

Ammonia sampling using Ogawa passive samplers [electronic resource] / by Paul Tate.

Tate, Paul. January 2002 (has links)
Thesis (M.S.)--University of South Florida, 2002. / Includes bibliographical references. / The original thesis was submitted in HTML and can be accessed at http://www.lib.usf.edu/EDT-db/theses/available/etd-10262001-162331/unrestricted/default.htm / ABSTRACT: The purposes of this research were to determine the efficacy of using the Ogawa passive sampling device (PSD) to measure ammonia and to identify significant ammonia sources adjacent to Hillsborough and Tampa Bay. Ninety-four samplers were deployed over a 180 km² area for two weeks in October 2001. Within the sampled area were suburbs, an urban center, major highways, port activities, fertilizer manufacturing, wastewater treatment, coal-combustion power plants, warehousing, and dairy farming. The sampled locations were arranged in a triangular grid pattern spaced 1.5 km apart, designed to locate circular hot spots with a minimum radius of 0.75 km. The minimum, maximum, mean, and median ammonia concentrations were 0.06, 15, 2.0, and 1.5 mg/m³, respectively, and the estimated precision was 16%. Hot spots identified from kriged concentration data coincided with inventoried ammonia sources. / ABSTRACT: The relative bias and precision of the PSD, based on collocation with an annular denuder system, were ±30% and 20%.
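Collocation statistics like those quoted above can be computed directly from paired measurements. A minimal sketch, assuming hypothetical collocated PSD and denuder readings (the values below are made up, not data from the thesis):

```python
import numpy as np

# Hypothetical collocated measurements: passive sampler (PSD) vs. annular
# denuder reference at the same sites, in the same concentration units.
psd = np.array([1.8, 2.4, 0.9, 3.1, 1.2, 2.0])
ref = np.array([1.5, 1.9, 0.8, 2.4, 1.0, 1.6])

# Relative bias: mean relative difference of the PSD against the reference.
rel_bias = np.mean((psd - ref) / ref)

# Precision: relative standard deviation of the PSD/reference ratios.
ratios = psd / ref
precision = np.std(ratios, ddof=1) / np.mean(ratios)

print(f"relative bias   ≈ {100 * rel_bias:.0f}%")
print(f"precision (RSD) ≈ {100 * precision:.0f}%")
```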
460

Processor design-space exploration through fast simulation.

Khan, Taj Muhammad 12 May 2011 (has links) (PDF)
Simulation is a vital tool used by architects to develop new architectures. However, because of the complexity of modern architectures and the length of recent benchmarks, detailed simulation of programs can take extremely long times. This impedes the exploration of the processor design space, which architects need to do to find the optimal configuration of processor parameters. Sampling is one technique which reduces the simulation time without adversely affecting the accuracy of the results. Yet most sampling techniques either ignore the warm-up issue or require significant development effort on the part of the user. In this thesis we tackle the problem of reconciling state-of-the-art warm-up techniques and the latest sampling mechanisms with the triple objective of keeping user effort to a minimum, achieving good accuracy, and being agnostic to software and hardware changes. We show that both representative and statistical sampling techniques can be adapted to use warm-up mechanisms that accommodate the underlying architecture's warm-up requirements on the fly. We present experimental results which show accuracy and speed comparable to the latest research. We also leverage statistical calculations to provide an estimate of the robustness of the final results.
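As a minimal sketch of the statistical-sampling idea, the following estimates a mean CPI and a confidence interval from a random subset of simulation intervals; the interval data are synthetic and the procedure is a generic illustration, not the thesis's mechanism:

```python
import math
import random

# Estimate a program's mean CPI from a small random sample of intervals
# instead of simulating all of them in detail. Per-interval CPIs are synthetic.
random.seed(0)
all_intervals = [1.0 + 0.4 * math.sin(i / 50) + random.gauss(0, 0.05)
                 for i in range(100_000)]          # "true" per-interval CPI

sample = random.sample(all_intervals, k=500)       # detailed-simulate only these
n = len(sample)
mean = sum(sample) / n
var = sum((x - mean) ** 2 for x in sample) / (n - 1)
ci_half = 1.96 * math.sqrt(var / n)                # 95% confidence half-width

true_mean = sum(all_intervals) / len(all_intervals)
print(f"estimated CPI = {mean:.3f} ± {ci_half:.3f} (true {true_mean:.3f})")
```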
