541

Quantum algorithms for searching, resampling, and hidden shift problems

Ozols, Maris January 2012 (has links)
This thesis is on quantum algorithms. It has three main themes: (1) quantum walk based search algorithms, (2) quantum rejection sampling, and (3) the Boolean function hidden shift problem. The first two parts deal with generic techniques for constructing quantum algorithms, and the last part is on quantum algorithms for a specific algebraic problem.

In the first part of this thesis we show how certain types of random walk search algorithms can be transformed into quantum algorithms that search quadratically faster. More formally, given a random walk on a graph with an unknown set of marked vertices, we construct a quantum walk that finds a marked vertex in a number of steps that is quadratically smaller than the hitting time of the random walk. The main idea of our approach is to interpolate between a random walk that does not stop when a marked vertex is found and one that does. The quantum equivalent of this procedure drives the initial superposition over all vertices to a superposition over marked vertices. We present an adiabatic as well as a circuit version of our algorithm, and apply it to the spatial search problem on the 2D grid.

In the second part we study a quantum version of the problem of resampling one probability distribution to another. More formally, given query access to a black box that produces a coherent superposition of unknown quantum states with given amplitudes, the problem is to prepare a coherent superposition of the same states with different specified amplitudes. Our main result is a tight characterization of the number of queries needed for this transformation. By exploiting the symmetries of the problem, we prove a lower bound using a hybrid argument and semidefinite programming. For the matching upper bound we construct a quantum algorithm that generalizes the rejection sampling method first formalized by von Neumann in 1951. We describe quantum algorithms for the linear equations problem and quantum Metropolis sampling as applications of quantum rejection sampling.

In the third part we consider a hidden shift problem for Boolean functions: given oracle access to f(x+s), where f(x) is a known Boolean function, determine the hidden shift s. We construct quantum algorithms for this problem using the "pretty good measurement" and quantum rejection sampling. Both algorithms use the Fourier transform, and their complexity can be expressed in terms of the Fourier spectrum of f (in particular, in the second case it relates to "water-filling" of the spectrum). We also construct algorithms for variations of this problem where the task is to verify a given shift or extract only a single bit of information about it.
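The classical procedure that the second part's quantum algorithm generalizes can be sketched in a few lines. This is a hypothetical illustration of von Neumann's classical rejection sampling only, not of the quantum algorithm or its query bounds; the function names and the triangular-distribution example are invented for the sketch.

```python
import random

def rejection_sample(target_pdf, draw_proposal, proposal_pdf, m, n):
    """Classical von Neumann rejection sampling: resample draws from a
    proposal distribution into draws from a target distribution,
    assuming target_pdf(x) <= m * proposal_pdf(x) for all x."""
    samples = []
    while len(samples) < n:
        x = draw_proposal()
        # accept x with probability target_pdf(x) / (m * proposal_pdf(x))
        if random.random() < target_pdf(x) / (m * proposal_pdf(x)):
            samples.append(x)
    return samples

# Example: resample the uniform distribution on [0, 1] into the
# triangular density p(x) = 2x, using envelope constant m = 2.
draws = rejection_sample(lambda x: 2.0 * x, random.random, lambda x: 1.0, 2.0, 1000)
```

The expected acceptance rate is 1/m; in the quantum setting a ratio of amplitudes plays the analogous role in the query complexity.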
542

Periodic sampling of stationary time series

January 1950 (has links)
John P. Costas. / "May 16, 1950." / Bibliography: p. 7. / Army Signal Corps Contract No. W36-039-sc-32037 Project No. 102B Dept. of the Army Project No. 3-99-10-022
543

Importance Sampling for Reinforcement Learning with Multiple Objectives

Shelton, Christian Robert 01 August 2001 (has links)
This thesis considers three complications that arise from applying reinforcement learning to a real-world application. In the process of using reinforcement learning to build an adaptive electronic market-maker, we find the sparsity of data, the partial observability of the domain, and the multiple objectives of the agent to cause serious problems for existing reinforcement learning algorithms. We employ importance sampling (likelihood ratios) to achieve good performance in partially observable Markov decision processes when data are scarce. Our importance sampling estimator requires no knowledge about the environment and places few restrictions on the method of collecting data. It can be used efficiently with reactive controllers, finite-state controllers, or policies with function approximation. We present theoretical analyses of the estimator and incorporate it into a reinforcement learning algorithm. Additionally, this method provides a complete return surface which can be used to balance multiple objectives dynamically. We demonstrate the need for multiple goals in a variety of applications and natural solutions based on our sampling method. The thesis concludes with example results from applying our algorithm to the domain of automated electronic market-making.
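The likelihood-ratio idea behind such an estimator can be sketched classically. This is a minimal off-policy Monte Carlo sketch, not Shelton's estimator itself; the episode format and the policy functions are assumptions made for illustration.

```python
def importance_sampling_estimate(episodes, target_prob, behavior_prob):
    """Estimate the expected return under a target policy from episodes
    collected under a different behavior policy.

    Each episode is (actions, total_return). The episode's return is
    reweighted by the likelihood ratio of its action sequence under
    the two policies, so no model of the environment is needed."""
    total = 0.0
    for actions, episode_return in episodes:
        weight = 1.0
        for a in actions:
            weight *= target_prob(a) / behavior_prob(a)
        total += weight * episode_return
    return total / len(episodes)
```

Evaluating a different candidate policy only changes `target_prob`, so one batch of data yields a whole return surface over policies, which is what makes balancing multiple objectives against fixed data possible.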
544

Sequential Optimal Recovery: A Paradigm for Active Learning

Niyogi, Partha 12 May 1995 (has links)
In most classical frameworks for learning from examples, it is assumed that examples are randomly drawn and presented to the learner. In this paper, we consider the possibility of a more active learner who is allowed to choose his/her own examples. Our investigations are carried out in a function approximation setting. In particular, using arguments from optimal recovery (Micchelli and Rivlin, 1976), we develop an adaptive sampling strategy (equivalent to adaptive approximation) for arbitrary approximation schemes. We provide a general formulation of the problem and show how it can be regarded as sequential optimal recovery. We demonstrate the application of this general formulation to two special cases of functions on the real line: (1) monotonically increasing functions and (2) functions with bounded derivative. An extensive investigation of the sample complexity of approximating these functions is conducted, yielding both theoretical and empirical results on test functions. Our theoretical results (stated in PAC-style), along with the simulations, demonstrate the superiority of our active scheme over both passive learning and classical optimal recovery. The analysis of active function approximation is conducted in a worst-case setting, in contrast with other Bayesian paradigms obtained from optimal design (Mackay, 1992).
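A toy version of active sampling for the bounded-derivative case might look as follows. This is a hypothetical sketch assuming a known Lipschitz bound, not the paper's sequential optimal recovery formulation: at each step it queries the midpoint of the interval whose worst-case interpolation error is largest.

```python
def active_sample(f, a, b, lipschitz, n):
    """Adaptively choose n extra sample points for a function with
    |f'| <= lipschitz on [a, b]: always query the midpoint of the
    interval with the largest worst-case interpolation error."""
    xs = [a, b]
    ys = [f(a), f(b)]
    for _ in range(n):
        # worst-case midpoint error on [xs[i], xs[i+1]] given the bound
        errs = [(lipschitz * (xs[i + 1] - xs[i]) - abs(ys[i + 1] - ys[i])) / 2
                for i in range(len(xs) - 1)]
        i = max(range(len(errs)), key=errs.__getitem__)
        mid = (xs[i] + xs[i + 1]) / 2
        xs.insert(i + 1, mid)
        ys.insert(i + 1, f(mid))
    return xs, ys
```

On a test function like f(x) = |x - 0.3|, the queries concentrate near the kink at 0.3, which is the intuition behind active sampling beating passive (uniform) sampling in the worst case.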
545

SOIL TEST INFORMATION IN COTTON PRODUCTION: ADOPTION, USE, AND VALUE IN POTASSIUM MANAGEMENT

Harper, David Caldwell 01 May 2011 (has links)
Soil sampling can help producers gain more accurate knowledge about soil nutrient properties and field-level characteristics. This information aids in the placement and timing of fertilizer application. Optimal input application may lower variable costs, increase economic returns, and moderate off-site environmental impacts of farming. Yet producer decisions to incorporate soil information into management practices, and perceptions about the value of soil test information over time, depend on a wide range of economic, social, and producer characteristics. Studies examining the value of soil information for optimal nutrient management may help inform producers considering adopting these technologies about the potential benefits of soil testing. This thesis provides two studies examining (1) the factors associated with the adoption of precision soil sampling and the length of time this information is perceived useful by cotton producers, and (2) the value of soil test information with regard to optimal potassium fertilizer management in cotton production over multiple growing seasons. Perceptions about the usefulness of soil test information over time depend on a variety of factors directly or indirectly related to input management. In the first study, the adoption and frequency of soil testing are examined as a function of off-farm, farm business, information source, and operator characteristics using a Poisson hurdle regression model. In an analysis of survey data from cotton farmers in 12 Southern states, the length of time producers perceived soil test information to be useful was influenced by farmer experience, land tenure, and the use of other information gathering technologies such as Greenseeker® and electrical conductivity sensing. In the second study, optimal potassium (K) management with information about fertilizer carryover was analyzed using a dynamic programming model.

Monte Carlo simulation results suggest the value of the information site-specific technologies provide about residual fertilizer carryover of K is greatest when a producer is able to identify the magnitude of soil carryover capacity and incorporate this information to manage K. The information obtained from this research may provide insight for cotton producers, agribusiness firms, and agricultural service providers about the perception and potential benefits of soil sampling information to manage inputs in cotton production.
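The carryover dynamics at the core of such a dynamic programming model can be sketched simply. The state transition below and all parameter values are illustrative assumptions for exposition, not the thesis's estimated model.

```python
def simulate_k_carryover(initial_k, applications, carryover_rate, uptake):
    """Track plant-available soil potassium across seasons, assuming a
    fixed fraction of fertilizer left unused by the crop carries over
    to the next season (the rest is lost or fixed in the soil)."""
    k = initial_k
    available_by_season = []
    for applied in applications:
        available = k + applied            # soil stock plus this year's K
        available_by_season.append(available)
        k = carryover_rate * max(available - uptake, 0.0)
    return available_by_season
```

In a dynamic programming formulation, the producer's state would be the soil K stock `k`, the decision each season would be `applied`, and soil test information enters by reducing uncertainty about `k` and `carryover_rate`.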
546

Multiple comparisons using multiple imputation under a two-way mixed effects interaction model

Kosler, Joseph Stephen, January 2006 (has links)
Thesis (Ph. D.)--Ohio State University, 2006. / Title from first page of PDF file. Includes bibliographical references (p. 233-237).
547

Methodology to quantify leaks in aerosol sampling system components

Vijayaraghavan, Vishnu Karthik 15 November 2004 (has links)
Filter holders and continuous air monitors (CAMs) are used extensively in the nuclear industry. It is important to minimize leakage in these devices, and in recognition of this consideration, a limit on leakage for sampling systems is specified in ANSI/HPS N13.1-1999; however, the protocol given in the standard is only germane to measurement of significant leakage, e.g., several percent of the sampling flow rate. In the present study, a technique for quantifying leakage was developed and that approach was used to measure the sealing integrity of a CAM and two kinds of filter holders. The methodology involves use of sulfur hexafluoride as a tracer gas, with the device being tested operated under dynamic flow conditions. The leak rates in these devices were determined in the pressure range from 2.49 kPa (10 in. H2O) vacuum to 2.49 kPa (10 in. H2O) pressure at a typical flow rate of 56.6 L/min (2 cfm). For the two filter holders, the leak rates were less than 0.007% of the nominal flow rate. The leak rate in the CAM was less than 0.2% of the nominal flow rate. These values are well within the limit prescribed in the ANSI standard, which is 5% of the nominal flow rate. Therefore, the limit listed in the ANSI standard should be reconsidered, as lower values can be achieved, and the methodology presented herein can be used to quantify lower leakage values in sample collectors and analyzers. A theoretical analysis was also done to determine the nature of flow through the leaks and the amount of flow contributed by the different possible mechanisms of flow through leaks.
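The tracer-gas approach reduces, in its simplest form, to a mass balance: any tracer found in the sample stream must have entered through leaks. The function below is a hedged sketch of that general principle only, not the study's exact protocol; the variable names and the example numbers are invented for illustration.

```python
def inleakage_rate(q_sample, c_measured, c_tracer):
    """Estimate in-leakage into a sampling device from a tracer-gas
    mass balance: the device is surrounded by tracer at concentration
    c_tracer, the sample stream flows at q_sample, and c_measured is
    the tracer concentration detected in the sample stream."""
    q_leak = q_sample * c_measured / c_tracer
    percent_of_flow = 100.0 * q_leak / q_sample
    return q_leak, percent_of_flow
```

For example, at a 56.6 L/min sample flow, a measured tracer fraction of 0.002 against a surrounding concentration of 1.0 implies an in-leakage of about 0.11 L/min, i.e., 0.2% of the nominal flow, which is the resolution scale the CAM measurement above requires.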
548

Damage assessment and sampling of the rice stink bug, Oebalus pugnax (Fabricius) (Hemiptera: Pentatomidae), in rice, Oryza sativa L., in Texas

Vargas, Luis Espino 02 June 2009 (has links)
Field and greenhouse experiments were conducted from 2003 to 2006 at the Texas A&M University Agricultural Research and Extension Center at Beaumont and commercial rice fields throughout the Texas Rice Belt with the objectives of characterizing the nature of rice stink bug, Oebalus pugnax (Fabricius), damage to rice, Oryza sativa L., and developing visual sampling methods and sequential sampling plans.

During 2005 and 2006, in greenhouse and field experiments, rice plants were caged and infested with adult or nymph O. pugnax during the heading, milk or soft dough stage. No differences were found in the weight of rough, brown or milled rice infested with O. pugnax during different stages. More peck was found in grain from plants infested during milk and soft dough than heading. Adult O. pugnax caused more peck than nymphs in all stages. In field experiments conducted during 2005 and 2006, single rice panicles were caged at the onset of heading and infested with one male or female O. pugnax for 48 h during the heading, milk, soft or hard dough stage. No differences were found in the weight of rough rice per panicle infested with O. pugnax. No differences were detected in percentage peck caused by male and female O. pugnax. Infestation during all stages of panicle development produced significantly more peck than no infestation. In greenhouse experiments conducted during 2004 and 2005, rice plants at the pre-heading, heading, milk and soft dough stages were caged together and infested with male and female O. pugnax. Insects were observed for a period of five days and their preferences recorded. More insects were observed on milk and soft dough than on pre-heading or heading plants.

Commercial rice fields throughout the Texas Rice Belt were sampled during 2003 and 2004 and visual sampling methods were compared to the sweep net method of sampling. Analysis of covariance showed that one sweep of the "long stick" or two sweeps of the "sweep stick" compared favorably to 10 sweep net sweeps. Analyses revealed that visual sampling using the long stick is more cost-reliable than sweep net sampling for O. pugnax in Texas rice fields.
549

Using Genetic Algorithms to Optimize Bathymetric Surveys for Hydrodynamic Model Input

Manian, Dinesh December 2009 (has links)
The first part of this thesis studies the effect of the specified bathymetric resolution and idealized bathymetric form parameters on the output from the wave and hydrodynamic modules of Delft-3D. The thesis then describes an optimization approach that reduces the bathymetric sampling required as input to a numerical forecast model, by exploiting the model's sensitivity to that input. A genetic algorithm is developed to gradually evolve the survey path of a ship, AUV, or other measurement platform toward an optimum, using the effect of the corresponding measured bathymetry on the model output as the fitness metric. Starting from an initial simulated set of random or heuristic sampling paths over the given bathymetry, subject to constraints such as a limited track length, the algorithm arrives at the path that provides the best possible input to the model under those constraints. Suitability is tested by comparing the model results obtained using these simulated observations with the results obtained using the best available bathymetry. Two test study areas were considered, and the algorithm was found to consistently converge to a sampling pattern that best captured the bathymetric variability critical to the model prediction.
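The evolutionary loop described above can be sketched generically. This is a minimal GA skeleton under invented operators (truncation selection, one-point crossover, uniform mutation); the real fitness evaluation would run the hydrodynamic model on bathymetry sampled along the candidate path and compare against a reference run, which is stubbed here as a black-box callable.

```python
import random

def evolve_path(fitness, n_waypoints, pop_size=30, generations=40,
                mutation_rate=0.1, seed=0):
    """Minimal genetic algorithm: each individual is a list of
    normalized waypoint coordinates in [0, 1); fitness is a black-box
    score (higher is better) standing in for model-output agreement."""
    rng = random.Random(seed)
    pop = [[rng.random() for _ in range(n_waypoints)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        parents = scored[:pop_size // 2]            # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_waypoints)     # one-point crossover
            child = a[:cut] + b[cut:]
            for i in range(n_waypoints):            # uniform mutation
                if rng.random() < mutation_rate:
                    child[i] = rng.random()
            children.append(child)
        pop = parents + children                    # elitist replacement
    return max(pop, key=fitness)
```

Because the parents survive each generation, the best fitness never decreases; constraints such as a track-length limit would enter either as a repair step on each child or as a penalty term inside the fitness function.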
550

Prüfungsurteile mit Dollar Unit Sampling: ein Vergleich von Fehlerschätzmethoden für Zwecke der Wirtschaftsprüfung: Praxis, Theorie, Simulation [Audit opinions with dollar unit sampling: a comparison of error estimation methods for auditing purposes: practice, theory, simulation]

Newiak, Monique January 2009 (has links)
No description available.
