631 |
Quantum algorithms for searching, resampling, and hidden shift problems. Ozols, Maris. 06 November 2014 (has links)
This thesis is on quantum algorithms. It has three main themes:
(1) quantum walk based search algorithms,
(2) quantum rejection sampling, and
(3) the Boolean function hidden shift problem.
The first two parts deal with generic techniques for constructing quantum algorithms, and the last part is on quantum algorithms for a specific algebraic problem.
In the first part of this thesis we show how certain types of random walk search algorithms can be transformed into quantum algorithms that search quadratically faster. More formally, given a random walk on a graph with an unknown set of marked vertices, we construct a quantum walk that finds a marked vertex in a number of steps that is quadratically smaller than the hitting time of the random walk. The main idea of our approach is to interpolate the random walk from one that does not stop when a marked vertex is found to one that stops. The quantum equivalent of this procedure drives the initial superposition over all vertices to a superposition over marked vertices. We present an adiabatic as well as a circuit version of our algorithm, and apply it to the spatial search problem on the 2D grid.
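As a minimal illustration of the interpolation idea (a classical sketch only, not the quantum algorithm), the following Python snippet builds a random walk P on a cycle, its absorbing counterpart P_abs in which the marked vertex stops the walk, and the interpolated family P(s) = (1 - s)P + sP_abs; it then computes the classical hitting time from the stationary distribution, the quantity the quantum walk improves quadratically. The 64-vertex cycle and the single marked vertex are illustrative choices, not the thesis's setting.

```python
import numpy as np

n = 64                       # illustrative cycle size
marked = {0}                 # one marked vertex, chosen arbitrarily

# Simple random walk on a cycle.
P = np.zeros((n, n))
for v in range(n):
    P[v, (v - 1) % n] = 0.5
    P[v, (v + 1) % n] = 0.5

# Absorbing version: the walk stops once it reaches a marked vertex.
P_abs = P.copy()
for m in marked:
    P_abs[m, :] = 0.0
    P_abs[m, m] = 1.0

def interpolated_walk(s):
    """Classical analogue of the interpolation: P(s) = (1 - s) P + s P_abs."""
    return (1.0 - s) * P + s * P_abs

assert np.allclose(interpolated_walk(0.5).sum(axis=1), 1.0)  # still stochastic

# Hitting time of the marked set from the uniform stationary distribution:
# solve (I - P_UU) h = 1 on the unmarked vertices, then average h under pi.
unmarked = [v for v in range(n) if v not in marked]
P_UU = P[np.ix_(unmarked, unmarked)]
h = np.linalg.solve(np.eye(len(unmarked)) - P_UU, np.ones(len(unmarked)))
hitting_time = h.sum() / n

print(f"classical hitting time: {hitting_time:.1f} steps")
print(f"quadratic-speedup target: ~{np.sqrt(hitting_time):.1f} steps")
```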
In the second part we study a quantum version of the problem of resampling one probability distribution to another. More formally, given query access to a black box that produces a coherent superposition of unknown quantum states with given amplitudes, the problem is to prepare a coherent superposition of the same states with different specified amplitudes. Our main result is a tight characterization of the number of queries needed for this transformation. By utilizing the symmetries of the problem, we prove a lower bound using a hybrid argument and semidefinite programming. For the matching upper bound we construct a quantum algorithm that generalizes the rejection sampling method first formalized by von Neumann in 1951. We describe quantum algorithms for the linear equations problem and quantum Metropolis sampling as applications of quantum rejection sampling.
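The classical procedure that this work generalizes is easy to state in code. The sketch below (plain classical rejection sampling, with two hypothetical four-outcome distributions) accepts a draw x from the proposal p with probability q(x)/(M p(x)), so accepted draws follow q at an expected cost of about M proposals per sample; in the quantum setting the distributions become amplitudes and, roughly speaking, amplitude amplification reduces this cost quadratically.

```python
import numpy as np

rng = np.random.default_rng(0)

p = np.array([0.4, 0.3, 0.2, 0.1])   # proposal (hypothetical initial amplitudes squared)
q = np.array([0.1, 0.2, 0.3, 0.4])   # target distribution (hypothetical)
M = np.max(q / p)                    # M >= q(x)/p(x) for all x

def rejection_sample(n_draws):
    """Draw from q using samples from p (von Neumann's 1951 method)."""
    out = []
    while len(out) < n_draws:
        x = rng.choice(len(p), p=p)
        if rng.random() < q[x] / (M * p[x]):   # accept with prob q/(M p)
            out.append(x)
    return np.array(out)

s = rejection_sample(50_000)
print("empirical:", np.bincount(s, minlength=len(q)) / len(s))
print("target:   ", q)
print("expected proposals per accepted sample:", M)
```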
In the third part we consider a hidden shift problem for Boolean functions: given oracle access to f(x+s), where f(x) is a known Boolean function, determine the hidden shift s. We construct quantum algorithms for this problem using the "pretty good measurement" and quantum rejection sampling. Both algorithms use the Fourier transform and their complexity can be expressed in terms of the Fourier spectrum of f (in particular, in the second case it relates to "water-filling" of the spectrum). We also construct algorithms for variations of this problem where the task is to verify a given shift or extract only a single bit of information about it.
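The Fourier-domain fact that both algorithms exploit is that shifting f by s multiplies each Walsh-Hadamard coefficient by (-1)^{w.s}, so every coefficient of the shifted oracle carries one parity of s. The brute-force check below verifies this identity on a small example; the 4-bit quadratic function and the particular shift are hypothetical choices for illustration.

```python
import itertools
import numpy as np

n = 4
s = (1, 0, 1, 1)                       # hypothetical hidden shift

def f(x):                              # example Boolean function on 4 bits
    return (x[0] & x[1]) ^ (x[2] & x[3])

def fourier(func):
    """Walsh-Hadamard spectrum of (-1)^func(x) over {0,1}^n."""
    coeffs = {}
    for w in itertools.product((0, 1), repeat=n):
        total = 0
        for x in itertools.product((0, 1), repeat=n):
            dot = sum(wi & xi for wi, xi in zip(w, x)) % 2
            total += (-1) ** (dot ^ func(x))
        coeffs[w] = total / 2 ** n
    return coeffs

g = lambda x: f(tuple(xi ^ si for xi, si in zip(x, s)))   # shifted oracle f(x + s)

fhat, ghat = fourier(f), fourier(g)
for w in fhat:
    dot_ws = sum(wi & si for wi, si in zip(w, s)) % 2
    assert np.isclose(ghat[w], (-1) ** dot_ws * fhat[w])  # ghat(w) = (-1)^{w.s} fhat(w)
print("verified: the shift appears as the phase (-1)^(w.s) on every coefficient")
```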
|
632 |
Advancing Bioaccumulation Modeling and Water Sampling of Ionogenic Organic Chemicals. Cao, Xiaoshu. 24 June 2014 (has links)
Although many commercial chemicals can dissociate, the study of the biological and environmental fate of ionogenic organic chemicals (IOCs) is still in its infancy. Uptake of the veterinary drug diclofenac in vultures and cattle was successfully simulated with a newly developed physiologically-based pharmacokinetic model for IOCs, lending credence to diclofenac’s proposed role in South Asian vulture population declines. Proteins and phospholipids, rather than total lipids, were found to control the tissue distribution of diclofenac.
A method was developed to simultaneously extract neutral and acidic pesticides and benzotriazoles from water samples, with recoveries ranging from 70% to 100%. This method was applied to samples from a laboratory calibration experiment of the Polar Organic Chemical Integrative Sampler. The sampler showed higher uptake of neutral and acidic pesticides when filled with the triphasic sorbent admixture and the OASIS MAS sorbent, respectively. While either sorbent can also be applied to methylated benzotriazoles, neither is capable of quantitatively sampling all three compound groups.
|
633 |
Compressed Sensing for 3D Laser Radar / Compressed Sensing för 3D Laserradar. Fall, Erik. January 2014 (has links)
High-resolution 3D images are of great interest in military operations, where the data can be used to classify and identify targets. The Swedish Defence Research Agency (FOI) is interested in the latest research and technologies in this area. A drawback of conventional 3D laser systems is their lack of high resolution at long measurement ranges. One technique for high-resolution long-range laser radar is based on time-correlated single photon counting (TCSPC). By repeatedly sending out short laser pulses and measuring the time of flight (TOF) of single reflected photons, extremely accurate range measurements can be made. A drawback of this method is that it is hard to build single-photon detectors with many pixels and high temporal resolution, so a single detector is used. Scanning an entire scene with one detector is very time consuming; instead, as this thesis explores, the entire scene can be captured with fewer measurements than the number of pixels. To do this, a technique called compressed sensing (CS) is introduced. CS exploits the fact that signals are normally compressible and can be represented sparsely in some basis. CS also imposes different requirements on the sampling than the classical Shannon-Nyquist sampling theorem. With a digital micromirror device (DMD), linear combinations of the scene can be reflected onto the single-photon detector, producing scalar intensity values as measurements. This means that fewer DMD patterns than the number of pixels suffice to reconstruct the entire 3D scene. In this thesis, a computer model of the laser system is used to evaluate different CS reconstruction methods under different scenarios for the laser system and the scene. The results show how many measurements are required to reconstruct scenes properly and how the DMD patterns affect the results. CS proves to enable a large reduction, 85-95%, in the required measurements compared to a pixel-by-pixel scanning system. Total variation minimization proves to be the best choice of reconstruction method. /
High-resolution 3D images are of great interest in military operations, where data can be used for the classification and identification of targets. The Swedish Defence Research Agency (FOI) has a strong interest in investigating the latest techniques in this area. A major problem with conventional 3D laser systems is that they lack high resolution at long measurement distances. One technique with high range resolution is time-correlated single-photon counting, which can count individual photons with extremely good accuracy. Such a system illuminates a scene with laser light, then measures the reflection time of single photons and can thereby measure range. The problem with this method is detecting many pixels when only one detector can be used. Scanning an entire scene with one detector takes a very long time; instead, this thesis is about making fewer measurements than the number of pixels while still reconstructing the entire 3D scene. To achieve this, a technique called compressed sensing (CS) is used. CS exploits the fact that measurement data is normally compressible, and it departs from the traditional Shannon-Nyquist requirement on sampling. With a digital micromirror device (DMD), linear combinations of the scene can be mirrored onto the single-photon detector, and with fewer DMD patterns than the number of pixels the entire 3D scene can be reconstructed. Using a purpose-built laser model, different CS reconstruction methods and different scenarios for the laser system are evaluated. The work shows that the basis representation determines how many measurements are needed and how different constructions of the DMD patterns affect the result. CS is shown to enable 85-95% fewer measurements than the number of pixels when imaging entire 3D scenes. Total variation minimization is shown to be the best choice of reconstruction method.
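To make the sampling-reduction claim concrete, the sketch below recovers a sparse signal from far fewer random linear measurements than unknowns using plain l1 minimisation (iterative soft thresholding). The dimensions, sparsity level, Gaussian measurement matrix, and solver are illustrative assumptions; the thesis itself uses DMD patterns and total variation minimisation on 3D scenes.

```python
import numpy as np

rng = np.random.default_rng(1)

n, m, k = 256, 64, 8                       # "pixels", measurements (25%), nonzeros
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

A = rng.standard_normal((m, n)) / np.sqrt(m)   # random measurement "patterns"
y = A @ x_true                                 # scalar detector readings (noise-free)

# ISTA for min_x 0.5 * ||A x - y||^2 + lam * ||x||_1
lam = 1e-3
step = 1.0 / np.linalg.norm(A, 2) ** 2         # 1 / Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(5000):
    x = x - step * (A.T @ (A @ x - y))                        # gradient step
    x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)  # soft threshold

print("relative reconstruction error:",
      np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```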
|
634 |
Data spacing and uncertainty. Wilde, Brandon Jesse. 11 1900 (has links)
Modeling spatial variables involves uncertainty. Uncertainty is affected by the degree to which a spatial variable has been sampled: decreased spacing between samples leads to decreased uncertainty. The reduction in uncertainty due to increased sampling is dependent on the properties of the variable being modeled. A densely sampled erratic variable may have a level of uncertainty similar to a sparsely sampled continuous variable. A simulation based approach is developed to quantify the relationship between uncertainty and data spacing. Reference realizations are simulated and sampled at different spacings. The samples are used to condition additional realizations from which uncertainty is quantified. A number of factors complicate the relationship between uncertainty and data spacing including the proportional effect, nonstationary variogram, classification threshold, number of realizations, data quality and modeling scale. A case study of the relationship between uncertainty and data density for bitumen thickness data from northern Alberta is presented. / Mining Engineering
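A minimal numerical sketch of this relationship, assuming a 1-D Gaussian random field with an exponential covariance in place of full geostatistical realizations: it computes the conditional (kriging) variance left after sampling at several spacings, rather than simulating conditional realizations as the thesis does, and the covariance range and spacings are illustrative choices.

```python
import numpy as np

def cov(a, b, corr_range=10.0):
    """Exponential covariance between location vectors a and b (illustrative model)."""
    return np.exp(-np.abs(a[:, None] - b[None, :]) / corr_range)

grid = np.linspace(0.0, 100.0, 201)              # prediction locations
for spacing in (50.0, 25.0, 10.0, 5.0):
    data_x = np.arange(0.0, 100.0 + 1e-9, spacing)
    C_dd = cov(data_x, data_x) + 1e-8 * np.eye(len(data_x))
    C_gd = cov(grid, data_x)
    C_gg = cov(grid, grid)
    # Conditional covariance given the data: C_gg - C_gd C_dd^{-1} C_dg
    cond = C_gg - C_gd @ np.linalg.solve(C_dd, C_gd.T)
    print(f"data spacing {spacing:5.1f} -> mean remaining variance "
          f"{np.mean(np.diag(cond)):.3f}")
```

The printed variances shrink as the spacing decreases, which is the spacing-versus-uncertainty relationship described above.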
|
635 |
Application of Discrete Choice Models to Samples Based on Complex Endogenous Sampling / 複雑な内生抽出法に基づく標本への離散選択モデルの適用. KITAMURA, Ryuichi; SAKAI, Hiroshi; YAMAMOTO, Toshiyuki. 01 1900 (has links)
No description available.
|
636 |
Characteristics of Organized Structures Formed in a Jet by the Edge-Tone Phenomenon (1st Report: Discussion Based on the Reynolds Stress and Turbulence Production Terms) / エッジトーン現象によって噴流中に形成された組織構造の特徴 (第1報, レイノルズ応力と乱れの生成項からの考察). KAWAI, Yuta; TSUJI, Yoshiyuki; KUKITA, Yutaka. 04 1900 (has links)
No description available.
|
637 |
Emotional Reactions to Music: Prevalence and Contributing Factors. Liljeström, Simon. January 2011 (has links)
People value music mainly for its ability to induce emotions. Yet little is known about these experiences. The aim of this thesis was thus to investigate the nature and prevalence of emotional reactions to music, and what factors in the listener, the music, and the situation might contribute to such reactions. Study I explored the prevalence of musical emotions and possible factors influencing such experiences through a questionnaire sent out to a random and nationally representative sample. The results indicated that a majority of the respondents frequently reacted emotionally to music, and that their reactions included both basic and complex emotions. Prevalence correlated with personality, gender, age, and music education. Study II was designed to obtain a representative sample of situations in which music induced emotions in listeners. The results showed that emotional reactions to music occurred in 24% of all episodes, and that the prevalence of specific emotions varied depending on the situation (e.g., whether other people were present). However, causal inferences could not be drawn from Studies I and II, so it was considered important to test predictions in a more controlled setting. Study III showed experimentally that listeners experienced more intense emotions (a) to self-chosen music than to randomly selected music and (b) when listening with a close friend or partner than when listening alone. Moreover, Openness to experience correlated with emotion intensity. All three factors were linked to positive emotions. Overall, the thesis shows that (a) musical emotions are relatively common, (b) music can induce a variety of emotions, and (c) there are several features in the listener, the music, and the situation that may influence emotional reactions to music.
|
638 |
Advances in Cross-Entropy Methods. Thomas Taimre. Unknown Date (has links)
The cross-entropy method is an established technique for solving difficult estimation, simulation, and optimisation problems. The method has its origins in an adaptive importance sampling procedure for rare-event estimation published by R. Y. Rubinstein in 1997. In that publication, the adaptive procedure produces a parametric probability density function whose parameters minimise the variance of the associated likelihood ratio estimator. This variance minimisation can also be viewed as minimising a measure of divergence to the minimum-variance importance sampling density over all members of the parametric family in question. Soon thereafter it was realised that the same adaptive importance sampling procedure could be used to solve combinatorial optimisation problems by viewing the set of solutions to the optimisation problem as a rare-event. This realisation led to the debut of the cross-entropy method in 1999, where it was introduced as a modification to the existing adaptive importance sampling procedure, with a different choice of directed divergence measure, in particular, the Kullback-Leibler cross-entropy. The contributions of this thesis are threefold. Firstly, in a review capacity, it provides an up-to-date consolidation of material on the cross-entropy method and its generalisations, as well as a collation of background material on importance sampling and Monte Carlo methods. The reviews are elucidated with original commentary and examples. Secondly, two new major applications of the cross-entropy methodology to optimisation problems are presented, advancing the boundary of knowledge on cross-entropy in the applied arena. Thirdly, two contributions to the methodological front are (a) an original extension of the generalised cross-entropy framework which enables one to construct state- and time-dependent importance sampling algorithms, and (b) a new algorithm for counting solutions to difficult binary-encoded problems.
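As a concrete illustration of the adaptive importance sampling procedure described above, the sketch below runs the standard multilevel cross-entropy algorithm on the simplest textbook rare-event problem: estimating P(X >= gamma) for X exponential with mean u, where the exact answer exp(-gamma/u) is available for comparison. The threshold, sample size, and elite fraction are illustrative choices, not values from the thesis.

```python
import numpy as np

rng = np.random.default_rng(2)

u, gamma = 1.0, 20.0            # nominal mean of X ~ Exp(u); rare event {X >= gamma}
N, rho = 10_000, 0.1            # samples per iteration, elite fraction

def lik_ratio(x, v):
    """Likelihood ratio f(x; u) / f(x; v) for exponential densities with means u, v."""
    return (v / u) * np.exp(-x * (1.0 / u - 1.0 / v))

v, gamma_t = u, -np.inf
while gamma_t < gamma:
    x = rng.exponential(v, N)
    gamma_t = min(np.quantile(x, 1.0 - rho), gamma)   # intermediate level
    elite = x >= gamma_t
    w = lik_ratio(x[elite], v)
    v = np.sum(w * x[elite]) / np.sum(w)              # analytic CE update for Exp(v)

x = rng.exponential(v, N)                             # final importance sampling run
estimate = np.mean(lik_ratio(x, v) * (x >= gamma))
print(f"CE estimate: {estimate:.3e}   exact: {np.exp(-gamma / u):.3e}")
```

In this example the adapted mean v typically ends up near gamma + u, the cross-entropy-optimal reference parameter within this parametric family.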
|
640 |
Material Culture and Behaviour in Pleistocene Sahul: Examining the Archaeological Representation of Pleistocene Behavioural Modernity in Sahul. Michelle Langley. Unknown Date (has links)
Sahul, the combined landmass of Australia and New Guinea, provides a record of behavioural modernity extending over at least the last 50,000 years. Colonised solely by anatomically and behaviourally modern humans, this continent provides an alternative record in the investigation of behavioural modernity to the extensively studied Middle Stone Age African and Upper Palaeolithic Eurasian archaeological records. In the past, the archaeological record of behavioural modernity in Sahul has been described as simple, sparse and essentially different from the records of Africa and Eurasia. These differences have been attributed to either low population densities during the Pleistocene or the loss of behavioural ‘traits’ on the journey from Africa to Sahul. While a number of studies have been undertaken, no single researcher has attempted to investigate the role of taphonomy and sampling in the representation of behavioural modernity in the archaeological record, despite Sahul being characterised by extreme environments, highly variable climates, and, archaeologically, by typically small excavations. This study compiles the most complete record yet attempted of chronology, evidence for behavioural modernity, and excavation details for 223 Pleistocene sites. It is also the most extensive dataset assembled for the examination of the issue of behavioural modernity on a single landmass. Site spatial and temporal distribution, site characteristics, excavations, absolute dating, preservation and sample size are examined to determine how the behavioural complexity of a modern human population is characterised on this isolated southern continent and the impact of taphonomy and archaeological sampling on that representation. Results demonstrate that preservation and sampling play a significant role in structuring the spatial and temporal representation of behavioural modernity in the archaeological record of Pleistocene Sahul. Contrary to previous findings, the evidence for behavioural modernity in Sahul is found to resemble the archaeological records of the African Middle Stone Age and Eurasian Upper Palaeolithic in terms of behaviour and artefact diversity. In terms of global narratives, these results also indicate that current understandings of behavioural modernity are incomplete and may misrepresent levels of behavioural complexity in early periods in some regions.
|