161 |
Intracavity terahertz optical parametric oscillators. Walsh, David A. January 2011
This thesis describes the design and implementation of several novel, nanosecond-pulsed, intracavity optical parametric oscillators for the generation of terahertz radiation. The application of the intracavity approach in the context of terahertz optical parametric oscillators was demonstrated for the first time, reducing the required pump wave energy by an order of magnitude. The terahertz wave was tunable from under 1 THz up to 3 THz with a free-running linewidth of ~50 GHz and pulse energies up to ~20 nJ (pulses were a few nanoseconds in duration). The terahertz beam profile was Gaussian and could be focussed down to 2.3 times the diffraction-limited spot size (M² values of 2.3 and 6.7 in the components of the beam parallel and perpendicular to the silicon prism array output coupler, respectively).

Developments of this intracavity source with regard to the linewidth are also reported. Implementation of etalons in the optical (laser and OPO) cavities was shown to be a promising technique that brings the terahertz linewidth down below 1 GHz (close to the transform limit of nanosecond pulses) while retaining the tuning range and beam characteristics of the free-running system. Close to Fourier-transform-limited pulses were obtained (<100 MHz linewidth) via an injection seeding technique, although with significantly increased system complexity. A deleterious effect caused by the mode beating of a multimode host laser was also discovered, in that sidebands were induced on the seeded downconverted wave; this has wider implications in the field of intracavity OPOs.

Finally, quasi-phasematching techniques implementing periodically poled lithium niobate were investigated as a way to lower the downconversion threshold energy requirement (by collinear propagation of the optical waves), and also to extract the terahertz wave rapidly from the lithium niobate crystal, which is highly absorbing in the terahertz region. The existence of two phasematching solutions arising from the bidirectionality of the grating vector was identified as a serious design constraint in the context of an OPO, where either solution can build up from noise photons; the oscillator therefore prefers the solution with the lowest walkoff of the downconverted waves, possibly resulting in unextractable terahertz radiation. Quasi-phasematching with an orthogonal grating vector (with identical but opposite phasematching solutions) was demonstrated, and cascaded downconversion processes were observed and characterised. These cascaded processes are permitted by the collinearity of the optical waves and may allow efficiency improvements by overcoming the quantum defect limit. This research has resulted in four peer-reviewed papers in respected journals, and the intracavity terahertz OPO has been licensed to a company that has commercialised the technology (M Squared Lasers, Glasgow).
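As a quick sanity check on the linewidth figures quoted above, here is a minimal sketch (not part of the thesis) computing the transform-limited linewidth of a Gaussian pulse; the specific pulse durations are assumptions for illustration:

```python
import numpy as np

# Time-bandwidth product for a transform-limited Gaussian pulse:
# delta_nu * delta_t >= 0.441 (both FWHM).
TBP_GAUSSIAN = 0.441

def transform_limited_linewidth(pulse_fwhm_s: float) -> float:
    """Minimum (transform-limited) linewidth in Hz for a Gaussian pulse."""
    return TBP_GAUSSIAN / pulse_fwhm_s

# "A few nanoseconds", as quoted in the abstract (values assumed here).
for t in (2e-9, 4e-9, 8e-9):
    print(f"{t*1e9:.0f} ns pulse -> {transform_limited_linewidth(t)/1e6:.0f} MHz")
# ~4 ns gives ~110 MHz, consistent with the '<100 MHz, close to the
# transform limit' figure for the injection-seeded system.
```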
|
162 |
On testing for the Cox model using resampling methods. Fang, Jing (方婧). January 2007
Published or final version / Abstract / Statistics and Actuarial Science / Master of Philosophy
|
163 |
Simulation of wave propagation in terrain using the FMM code Nero2D. Haydar, Adel; Akeab, Imad. January 2010
In this report we describe simulations of the surface current density on a PEC cylinder and of the field diffracted by a line source above a finite PEC ground plane, used to verify the Nero2D program. The results are compared with the exact solutions and show acceptable errors. A terrain model for a communication link is also studied, and we simulate wave propagation over terrain with irregular shapes and different materials. The Nero2D program is based on the fast multipole method (FMM) to reduce computation time and memory use. Gaussian sources are also studied to make the terrain model more realistic.
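For reference, the exact solution mentioned above for TM-polarized plane-wave scattering from a PEC cylinder is a classical eigenfunction series (see e.g. Balanis); the sketch below is our illustration, not taken from the report. The radius and amplitude are assumed, and the current is evaluated only up to its constant prefactor (angular shape versus phi):

```python
import numpy as np
from scipy.special import hankel2

def pec_cylinder_surface_current(ka: float, phi: np.ndarray,
                                 e0: float = 1.0, n_max: int = 40) -> np.ndarray:
    """Surface current J_z on a PEC cylinder under TM^z plane-wave incidence,
    from the standard eigenfunction expansion
        J_z(phi) ~ sum_n j^{-n} e^{j n phi} / H2_n(ka),
    returned up to the constant prefactor (shape vs. phi only)."""
    n = np.arange(-n_max, n_max + 1)
    terms = (1j ** (-n))[None, :] * np.exp(1j * np.outer(phi, n))
    return e0 * np.sum(terms / hankel2(n, ka)[None, :], axis=1)

phi = np.linspace(0, 2 * np.pi, 361)
j_z = pec_cylinder_surface_current(ka=2 * np.pi, phi=phi)  # radius = one wavelength
print(abs(j_z).max(), abs(j_z).min())  # current peaks on the lit side
```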
|
164 |
Microwave power deposition in bounded and inhomogeneous lossy media. Lumori, Mikaya Lasuba Delesuk. January 1988
We present Bessel function and Gaussian beam models for a study of microwave power deposition in bounded and inhomogeneous lossy media. The aim is to develop methods that can accurately simulate practical results commonly found in electromagnetic hyperthermia treatment, a noninvasive therapeutic method. The Bessel function method has a closed-form solution and can be used to compute accurate results for electromagnetic fields emanating from applicators with cosinusoidal aperture fields. The Gaussian beam method, on the other hand, is approximate, but it can simplify boundary value problems and compute fields in three dimensions with extremely low CPU time (less than 30 s). Although the Gaussian beam method is derived from geometrical optics theory, it performs very well outside the usual domain of geometrical optics, which stipulates aperture dimension/λ ≥ 5 in the design of microwave systems. This condition does not constrain the Gaussian beam method, for which a limit of aperture dimension/λ ≥ 0.9 is shown to be possible, a very important result for the design and application of microwave systems. Experimental verifications of the two theoretical models are an integral part of the presentation and show the viability of the methods.
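As a minimal illustration of power deposition in a lossy medium (our sketch, not from the dissertation): a uniform plane wave, the on-axis limiting case of the beam models above, deposits power that decays as exp(-2*alpha*z). The tissue-like parameters below are assumed values, not measured ones:

```python
import numpy as np

eps0, mu0 = 8.854e-12, 4e-7 * np.pi
f = 915e6                      # hyperthermia ISM frequency (assumption)
eps_r, sigma = 51.0, 1.3       # muscle-like permittivity / conductivity (assumed)

omega = 2 * np.pi * f
eps = eps_r * eps0
# Attenuation constant of a uniform plane wave in a lossy dielectric:
# alpha = omega * sqrt( (mu*eps/2) * (sqrt(1 + (sigma/(omega*eps))^2) - 1) )
alpha = omega * np.sqrt(mu0 * eps / 2.0 *
                        (np.sqrt(1.0 + (sigma / (omega * eps)) ** 2) - 1.0))

z = np.linspace(0, 0.05, 6)            # depth in metres
deposition = np.exp(-2 * alpha * z)    # power ~ |E|^2 ~ exp(-2*alpha*z)
for zi, d in zip(z, deposition):
    print(f"z = {zi*100:4.1f} cm : relative deposited power {d:.3f}")
```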
|
165 |
A Bayesian hierarchical nonhomogeneous hidden Markov model for multisite streamflow reconstructions. Bracken, C.; Rajagopalan, B.; Woodhouse, C. 10 1900
In many complex water supply systems, the next generation of water resources planning models will require simultaneous probabilistic streamflow inputs at multiple locations on an interconnected network. To make use of the valuable multicentury records provided by tree-ring data, reconstruction models must be able to produce appropriate multisite inputs. Existing streamflow reconstruction models typically focus on one site at a time, not addressing intersite dependencies and potentially misrepresenting uncertainty. To this end, we develop a model for multisite streamflow reconstruction with the ability to capture intersite correlations. The proposed model is a hierarchical Bayesian nonhomogeneous hidden Markov model (NHMM). An NHMM is fit to contemporary streamflow at each location using lognormal component distributions. Leading principal components of tree rings are used as covariates to model nonstationary transition probabilities and the parameters of the lognormal component distributions. Spatial dependence between sites is captured with a Gaussian elliptical copula. Parameters of the model are estimated in a fully Bayesian framework, so that marginal posterior distributions of all the parameters are obtained. The model is applied to reconstruct flows at 20 sites in the Upper Colorado River Basin (UCRB) from 1473 to 1906. Many previous reconstructions are available for this basin, making it ideal for testing this new method. The results show some improvements over regression-based methods in terms of validation statistics. Key advantages of the Bayesian NHMM over traditional approaches are a dynamic representation of uncertainty and the ability to make long multisite simulations that capture at-site statistics and spatial correlations between sites.
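A minimal sketch of the Gaussian copula step described above: correlated standard normals are mapped through the normal CDF to uniforms, which are then inverted through each site's lognormal marginal. The three-site correlation matrix and lognormal parameters are illustrative assumptions, not values from the paper:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Illustrative 3-site setup (values assumed).
corr = np.array([[1.0, 0.8, 0.6],
                 [0.8, 1.0, 0.7],
                 [0.6, 0.7, 1.0]])   # Gaussian copula correlation
mu    = np.array([5.0, 4.5, 4.8])    # lognormal log-means per site
sigma = np.array([0.4, 0.5, 0.3])    # lognormal log-sds per site

def sample_multisite_flows(n: int) -> np.ndarray:
    # 1) draw correlated standard normals (the elliptical copula)
    z = rng.multivariate_normal(np.zeros(3), corr, size=n)
    # 2) map to uniforms through the normal CDF
    u = stats.norm.cdf(z)
    # 3) invert each site's lognormal marginal
    return stats.lognorm.ppf(u, s=sigma, scale=np.exp(mu))

flows = sample_multisite_flows(10_000)
print(np.corrcoef(np.log(flows), rowvar=False).round(2))  # ~ copula corr
```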
|
166 |
Planning and exploring under uncertainty. Murphy, Elizabeth M. January 2010
Scalable autonomy requires a robot to be able to recognize and contend with the uncertainty in its knowledge of the world stemming from its noisy sensors and actuators. The regions it chooses to explore, and the paths it takes to get there, must take this uncertainty into account. In this thesis we outline probabilistic approaches to represent that world, to construct plans over it, and to determine which part of it to explore next.

We present a new technique to create probabilistic cost maps from overhead imagery, taking into account the uncertainty in terrain classification and allowing for spatial variation in terrain cost. A probabilistic cost function combines the output of a multi-class classifier and a spatial probabilistic regressor to produce a probability density function over terrain for each grid cell in the map. The resultant cost map facilitates the discovery of not only the shortest path between points on the map, but also a distribution of likely paths between the points.

These cost maps are used in a path planning technique which allows the user to trade off the risk of returning a suboptimal path for substantial increases in search speed. We precompute a probability distribution which approximates the true distance between any grid cell in the map and the goal cell. This distribution underpins a number of A* search heuristics we present, which can characterize and bound the risk we are prepared to take in gaining search efficiency while sacrificing optimal path length. Empirically, we report efficiency increases in excess of 70% over standard heuristic search methods.

Finally, we present a global approach to the problem of robotic exploration, utilizing a hybrid of a topological data structure and an underlying metric mapping process. A 'Gap Navigation Tree' is used to motivate global target selection, and occluded regions of the environment ('gaps') are tracked probabilistically using the metric map. In pursuing these gaps we are provided with goals to feed to the path planning process en route to a complete exploration of the environment. The combination of these three techniques represents a framework to facilitate robust exploration in a priori unknown environments.
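The risk-bounded heuristics in the thesis are built on a precomputed distance distribution; as a simpler stand-in that exhibits the same trade-off of path optimality for search speed, here is a weighted A* sketch on a toy grid (the grid, costs and inflation factor are assumptions for illustration, not the thesis's method):

```python
import heapq

def weighted_astar(grid, start, goal, w=1.5):
    """A* with an inflated heuristic: f = g + w*h.  For w >= 1 the returned
    path costs at most w times the optimal cost; larger w usually expands
    far fewer nodes (speed traded against optimality)."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan
    open_heap = [(w * h(start), 0.0, start)]
    best_g = {start: 0.0}
    while open_heap:
        _, g, cur = heapq.heappop(open_heap)
        if cur == goal:
            return g
        if g > best_g.get(cur, float("inf")):
            continue  # stale heap entry
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] >= 0:
                ng = g + grid[nr][nc]        # cost of entering the cell
                if ng < best_g.get((nr, nc), float("inf")):
                    best_g[(nr, nc)] = ng
                    heapq.heappush(open_heap, (ng + w * h((nr, nc)), ng, (nr, nc)))
    return None

# Toy terrain cost map: -1 marks an obstacle.
terrain = [[1, 1, 1, 1],
           [1, -1, -1, 1],
           [1, 1, 1, 1]]
print(weighted_astar(terrain, (0, 0), (2, 3)))  # cost of a near-optimal path
```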
|
167 |
Strain Field Modelling using Gaussian Processes. Jidling, Carl. January 2017
This report deals with the reconstruction of strain fields within deformed materials. The method relies upon data generated from Bragg edge measurements, in which information is gained from neutron beams sent through the sample. The reconstruction is made by modelling the strain field as a Gaussian process, assigned a covariance structure customized by incorporating the so-called equilibrium constraints. By making use of an approximation scheme well suited to the problem, the complexity of the computations has been significantly reduced. The results from numerical simulations indicate better performance compared to previous work in this area.
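For orientation, a minimal Gaussian process regression sketch of the kind underlying such a reconstruction (a plain squared-exponential covariance on synthetic 1-D data; the report's customized covariance encoding the equilibrium constraints is not attempted here):

```python
import numpy as np

def rbf_kernel(x1, x2, length=0.3, var=1.0):
    d = x1[:, None] - x2[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

# Noisy 1-D "measurements" standing in for Bragg-edge strain data (synthetic).
rng = np.random.default_rng(1)
x_obs = rng.uniform(0, 1, 15)
y_obs = np.sin(4 * x_obs) + 0.05 * rng.standard_normal(15)

# Standard GP posterior mean/variance on a prediction grid.
x_new = np.linspace(0, 1, 100)
noise = 0.05 ** 2
K = rbf_kernel(x_obs, x_obs) + noise * np.eye(len(x_obs))
K_s = rbf_kernel(x_new, x_obs)
mean = K_s @ np.linalg.solve(K, y_obs)
var = rbf_kernel(x_new, x_new).diagonal() - np.einsum(
    "ij,ji->i", K_s, np.linalg.solve(K, K_s.T))
print(mean[:3], var[:3])
```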
|
168 |
Sensor Planning for Bayesian Nonparametric Target Modeling. Wei, Hongchuan. January 2016
Bayesian nonparametric models, such as the Gaussian process and the Dirichlet process, have been extensively applied to target kinematics modeling in various applications including environmental monitoring, traffic planning, endangered species tracking, dynamic scene analysis, autonomous robot navigation, and human motion modeling. As shown by these successful applications, Bayesian nonparametric models are able to adjust their complexity adaptively from data as necessary, and are resistant to overfitting or underfitting. However, most existing works assume that the sensor measurements used to learn the Bayesian nonparametric target kinematics models are obtained a priori, or that the target kinematics can be measured by the sensor at any given time throughout the task. Little work has been done on controlling a sensor with bounded field of view to obtain measurements of mobile targets that are most informative for reducing the uncertainty of the Bayesian nonparametric models. To present the systematic sensor planning approach to learning Bayesian nonparametric models, the Gaussian process target kinematics model is introduced first; it is capable of describing time-invariant spatial phenomena, such as ocean currents, temperature distributions and wind velocity fields. The Dirichlet process-Gaussian process target kinematics model is subsequently discussed for modeling mixtures of mobile targets, such as pedestrian motion patterns.

Novel information-theoretic functions are developed for these Bayesian nonparametric target kinematics models to represent the expected utility of measurements as a function of sensor control inputs and random environmental variables. A Gaussian process expected Kullback-Leibler (KL) divergence is developed as the expectation of the KL divergence between the current (prior) and posterior Gaussian process target kinematics models with respect to the future measurements. This approach is then extended to develop a new information value function that can be used to estimate target kinematics described by a Dirichlet process-Gaussian process mixture model. A theorem is proposed that shows the novel information-theoretic functions are bounded. Based on this theorem, efficient estimators of the new information-theoretic functions are designed, which are proved to be unbiased, with the variance of the resultant approximation error decreasing linearly as the number of samples increases. The computational complexity of optimizing the novel information-theoretic functions under sensor dynamics constraints is studied and proved to be NP-hard. A cumulative lower bound is then proposed to reduce the computational complexity to polynomial time.

Three sensor planning algorithms are developed according to the assumptions on the target kinematics and the sensor dynamics. For problems where the control space of the sensor is discrete, a greedy algorithm is proposed. The efficiency of the greedy algorithm is demonstrated by a numerical experiment with data on ocean currents obtained by moored buoys. A sweep line algorithm is developed for applications where the sensor control space is continuous and unconstrained. Synthetic simulations as well as physical experiments with ground robots and a surveillance camera are conducted to evaluate the performance of the sweep line algorithm. Moreover, a lexicographic algorithm is designed based on the cumulative lower bound of the novel information-theoretic functions, for the scenario where the sensor dynamics are constrained. Numerical experiments with real data collected from indoor pedestrians by a commercial pan-tilt camera are performed to examine the lexicographic algorithm. Results from both the numerical simulations and the physical experiments show that the three sensor planning algorithms proposed in this dissertation, based on the novel information-theoretic functions, are superior at learning the target kinematics with little or no prior knowledge.
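Restricted to a finite set of test points, the prior and posterior Gaussian processes above are multivariate normals, for which the KL divergence has a closed form; a sketch of that finite-dimensional term, with assumed means and covariances:

```python
import numpy as np

def kl_gaussians(mu0, cov0, mu1, cov1):
    """KL( N(mu0, cov0) || N(mu1, cov1) ) for k-dimensional Gaussians:
    0.5 * [ tr(cov1^-1 cov0) + (mu1-mu0)^T cov1^-1 (mu1-mu0)
            - k + ln(det cov1 / det cov0) ]."""
    k = len(mu0)
    cov1_inv = np.linalg.inv(cov1)
    diff = mu1 - mu0
    _, logdet0 = np.linalg.slogdet(cov0)
    _, logdet1 = np.linalg.slogdet(cov1)
    return 0.5 * (np.trace(cov1_inv @ cov0) + diff @ cov1_inv @ diff
                  - k + logdet1 - logdet0)

# Assumed prior/posterior marginals of a GP at 3 test points.
mu_prior, cov_prior = np.zeros(3), np.eye(3)
mu_post = np.array([0.5, 0.2, -0.1])
cov_post = 0.3 * np.eye(3) + 0.05  # posterior tighter than the prior
print(kl_gaussians(mu_post, cov_post, mu_prior, cov_prior))
```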
|
169 |
Gaussian processes for temporal and spatial pattern analysis in the MISR satellite land-surface data. Cuthbertson, Adrian John. 31 July 2014
A dissertation submitted to the Faculty of Science, University of the Witwatersrand, Johannesburg, in fulfilment of the requirements for the degree of Master of Science. Johannesburg, 30th May 2014.

The Multi-Angle Imaging SpectroRadiometer (MISR) is an Earth observation instrument operated by NASA on its Terra satellite. The instrument is unique in imaging the Earth's surface from nine cameras at different angles. An extended system, MISR-HR, has been developed by the Joint Research Centre of the European Commission (JRC) and NASA, which derives many values describing the interaction between solar energy, the atmosphere and different surface characteristics. It also generates estimates of data at the native resolution of the instrument for 24 of the 36 camera bands for which on-board averaging has taken place prior to downloading of the data. MISR-HR data potentially yields high-value information in agriculture, forestry, environmental studies, land management and other fields. The MISR-HR system and the data for the African continent have also been provided by NASA and the JRC to the South African National Space Agency (SANSA).

Generally, satellite remote sensing of the Earth's surface is characterised by irregularity in the time series of data due to atmospheric, environmental and other effects. Time-series methods, in particular for vegetation phenology applications, exist for estimating missing data values, filling gaps and discerning periodic structure in the data. Recent evaluations of the methods established a sound set of requirements that such methods should satisfy. Existing methods mostly meet the requirements, but the choice of method would largely depend on the analysis goals and on the nature of the underlying processes. An alternative method for time series exists in Gaussian processes, a long-established statistical method, but not previously a common method for satellite remote-sensing time series.

This dissertation asserts that Gaussian process regression could also meet the aforementioned set of time-series requirements, and further provide the benefits of a consistent framework rooted in Bayesian statistical methods. To assess this assertion, a data case study has been conducted for data provided by SANSA for the Kruger National Park in South Africa. The requirements have been posed as research questions and answered in the affirmative by analysing twelve years of historical data for seven sites differing in vegetation types, in and bordering the Park. A further contribution is made in that the data study was conducted using Gaussian process software developed specifically for this project in the modern open language Julia. This software will be released in due course as open source.
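A toy version of the gap-filling use case described above (the dissertation's own software is written in Julia; this Python sketch, with a periodic covariance and synthetic "seasonal" data, is only an assumed stand-in for a vegetation-index series with missing observations):

```python
import numpy as np

def periodic_kernel(x1, x2, period=1.0, length=0.5, var=1.0):
    d = np.abs(x1[:, None] - x2[None, :])
    return var * np.exp(-2.0 * np.sin(np.pi * d / period) ** 2 / length ** 2)

# Synthetic seasonal signal with a one-year gap (all values assumed).
t = np.arange(0, 6, 1 / 12.0)                       # 6 "years", monthly samples
y = np.sin(2 * np.pi * t) + 0.1 * np.random.default_rng(2).standard_normal(len(t))
keep = (t < 2.5) | (t > 3.5)                        # remove one year of data
t_obs, y_obs = t[keep], y[keep]

# GP posterior mean over the full timeline, including inside the gap.
K = periodic_kernel(t_obs, t_obs) + 0.1 ** 2 * np.eye(len(t_obs))
mean = periodic_kernel(t, t_obs) @ np.linalg.solve(K, y_obs)
print(float(np.abs(mean[~keep] - np.sin(2 * np.pi * t[~keep])).max()))
# small error in the gap: the periodic covariance carries the seasonal cycle
```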
|
170 |
A study of the prediction performance and multivariate extensions of the horseshoe estimator. Yunfan Li. 14 May 2019
The horseshoe prior has been shown to successfully handle high-dimensional sparse estimation problems. It both adapts to sparsity efficiently and provides nearly unbiased estimates for large signals. In addition, efficient sampling algorithms have been developed and successfully applied to a vast array of high-dimensional sparse estimation problems. In this dissertation, we investigate the prediction performance of the horseshoe prior in sparse regression, and extend the horseshoe prior to two multivariate settings.

We begin with a study of the finite-sample prediction performance of shrinkage regression methods, where the risk can be unbiasedly estimated using Stein's approach. We show that the horseshoe prior achieves an improved prediction risk over global shrinkage rules, by using a component-specific local shrinkage term that is learned from the data under a heavy-tailed prior, in combination with a global term providing shrinkage towards zero. We demonstrate improved prediction performance in a simulation study and in a pharmacogenomics data set, confirming our theoretical findings.

We then extend the horseshoe prior to handle two high-dimensional multivariate problems. First, we develop a new estimator of the inverse covariance matrix for high-dimensional multivariate normal data. The proposed graphical horseshoe estimator has attractive properties compared to other popular estimators. The most prominent benefit is that when the true inverse covariance matrix is sparse, the graphical horseshoe estimator provides estimates with small information divergence from the sampling model. The posterior mean under the graphical horseshoe prior can also be almost unbiased under certain conditions. In addition to these theoretical results, we provide a full Gibbs sampler for implementation. The graphical horseshoe estimator compares favorably to existing techniques in simulations and in a human gene network data analysis.

In our second setting, we apply the horseshoe prior to the joint estimation of regression coefficients and the inverse covariance matrix in normal models. The computational challenge in this problem is due to the dimensionality of the parameter space, which routinely exceeds the sample size. We show that the advantages of the horseshoe prior in estimating a mean vector, or an inverse covariance matrix, separately are also present when addressing both simultaneously. We propose a full Bayesian treatment, with a sampling algorithm that is linear in the number of predictors. Extensive performance comparisons are provided with both frequentist and Bayesian alternatives, and both estimation and prediction performance are verified on a genomic data set.
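For intuition, a minimal sketch of draws from the horseshoe prior itself, exhibiting the two properties cited above: a spike at zero that shrinks noise, and heavy tails that leave large signals nearly unshrunk (dimensions and thresholds are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)

def horseshoe_prior_draws(p: int, n: int) -> np.ndarray:
    """Draws of p coefficients under the horseshoe prior:
        beta_j | lambda_j, tau ~ N(0, (lambda_j * tau)^2),
        lambda_j ~ C+(0, 1)  (local shrinkage, heavy-tailed),
        tau      ~ C+(0, 1)  (global shrinkage towards zero)."""
    tau = np.abs(rng.standard_cauchy(size=(n, 1)))      # global scale
    lam = np.abs(rng.standard_cauchy(size=(n, p)))      # local scales
    return rng.standard_normal((n, p)) * lam * tau

draws = horseshoe_prior_draws(p=5, n=100_000).ravel()
# Mass piled near zero AND appreciable mass far out in the tails.
print(np.mean(np.abs(draws) < 0.05), np.mean(np.abs(draws) > 10))
```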
|