  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
491

Robust Signal Detection in Non-Gaussian Noise Using Threshold System and Bistable System

Guo, Gencheng Unknown Date
No description available.
492

Unitary Integrations for Unified MIMO Capacity and Performance Analysis

Ghaderipoor, Alireza Unknown Date
No description available.
493

Modeling of the dispersion of radionuclides around a nuclear power station

Dinoko, Tshepo Samuel January 2009 (has links)
Nuclear reactors release small amounts of radioactivity during their normal operations. The most common method of calculating the dose to the public that results from such releases uses Gaussian plume models. We are investigating these methods using CAP88-PC, a computer code developed for the Environmental Protection Agency (EPA) in the USA that calculates the concentration of radionuclides released from a stack using the Pasquill stability classification; a buoyant or momentum-driven component is also included. The uptake of the released radionuclides by plants, animals and humans, directly and indirectly, is then calculated to obtain the doses to the public. This method is well established but is known to suffer from many approximations, and in many cases does not give answers accurate to better than 50%. More accurate, though much more computer-intensive, methods have been developed to calculate the movement of gases using fluid dynamic models. Such a model, using the code FLUENT, can handle complex terrain and will also be investigated in this work. This work is a preliminary study to compare the results of the traditional Gaussian plume model and a fluid dynamic model for a simplified case. The results indicate that computational fluid dynamics calculations give qualitatively similar results, with the possibility of including many more effects than the simple Gaussian plume model.
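The Gaussian plume formula that underlies codes such as CAP88-PC is standard; below is a minimal Python sketch of it, not taken from the thesis, ignoring plume rise, depletion and radioactive decay, and with illustrative dispersion parameters rather than values looked up from a Pasquill stability class.

```python
import numpy as np

def gaussian_plume(Q, u, y, z, H, sigma_y, sigma_z):
    """Steady-state Gaussian plume concentration (Bq/m^3).

    Q: release rate (Bq/s); u: wind speed (m/s);
    y, z: crosswind and vertical coordinates (m);
    H: effective stack height (m);
    sigma_y, sigma_z: dispersion parameters (m) evaluated at the
    downwind distance of interest (normally from a Pasquill
    stability-class correlation). Includes the usual
    ground-reflection image term.
    """
    lateral = np.exp(-y**2 / (2 * sigma_y**2))
    vertical = (np.exp(-(z - H)**2 / (2 * sigma_z**2))
                + np.exp(-(z + H)**2 / (2 * sigma_z**2)))
    return Q / (2 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

# Ground-level centreline concentration roughly 1 km downwind of a
# 50 m stack releasing 1e9 Bq/s in a 5 m/s wind (sigmas illustrative).
c = gaussian_plume(Q=1e9, u=5.0, y=0.0, z=0.0,
                   H=50.0, sigma_y=70.0, sigma_z=32.0)
print(f"{c:.3e} Bq/m^3")
```

In practice `sigma_y` and `sigma_z` are functions of downwind distance and stability class, and CAP88-PC layers the dose pathways (plant, animal and human uptake) on top of concentrations like this one.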
494

Multi-task learning with Gaussian processes

Chai, Kian Ming January 2010 (has links)
Multi-task learning refers to learning multiple tasks simultaneously, in order to avoid tabula rasa learning and to share information between similar tasks during learning. We consider a multi-task Gaussian process regression model that learns related functions by inducing correlations between tasks directly. Using this model as a reference for three other multi-task models, we provide a broad unifying view of multi-task learning. This is possible because, unlike the other models, the multi-task Gaussian process model encodes task relatedness explicitly.

Each multi-task learning model generally assumes that learning multiple tasks together is beneficial. We analyze how and the extent to which multi-task learning helps improve the generalization of supervised learning. Our analysis is conducted for the average-case on the multi-task Gaussian process model, and we concentrate mainly on the case of two tasks, called the primary task and the secondary task. The main parameters are the degree of relatedness ρ between the two tasks, and π_S, the fraction of the total training observations from the secondary task. Among other results, we show that asymmetric multi-task learning, where the secondary task is to help the learning of the primary task, can decrease a lower bound on the average generalization error by a factor of up to ρ²π_S. When there are no observations for the primary task, there is also an intrinsic limit to which observations for the secondary task can help the primary task. For symmetric multi-task learning, where the two tasks are to help each other to learn, we find the learning to be characterized by the term π_S(1 − π_S)(1 − ρ²). As far as we are aware, our analysis contributes to an understanding of multi-task learning that is orthogonal to the existing PAC-based results on multi-task learning.

For more than two tasks, we provide an understanding of the multi-task Gaussian process model through structures in the predictive means and variances given certain configurations of training observations. These results generalize existing ones in the geostatistics literature, and may have practical applications in that domain.

We evaluate the multi-task Gaussian process model on the inverse dynamics problem for a robot manipulator. The inverse dynamics problem is to compute the torques needed at the joints to drive the manipulator along a given trajectory, and there are advantages to learning this function for adaptive control. A robot manipulator will often need to be controlled while holding different loads in its end effector, giving rise to a multi-context or multi-load learning problem, and we treat predicting the inverse dynamics for a context/load as a task. We view the learning of the inverse dynamics as a function approximation problem and place Gaussian process priors over the space of functions. We first show that this is effective for learning the inverse dynamics for a single context. Then, by placing independent Gaussian process priors over the latent functions of the inverse dynamics, we obtain a multi-task Gaussian process prior for handling multiple loads, where the inter-context similarity depends on the underlying inertial parameters of the manipulator. Experiments demonstrate that this multi-task formulation is effective in sharing information among the various loads, and generally improves performance over either learning only on single contexts or pooling the data over all contexts. In addition to the experimental results, one of the contributions of this study is showing that the multi-task Gaussian process model follows naturally from the physics of the inverse dynamics.
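A common way to encode task relatedness explicitly, as the multi-task Gaussian process model described above does, is to multiply a task covariance matrix by an input kernel (an intrinsic coregionalization form). The following numpy sketch illustrates that idea under those assumptions; the RBF kernel, toy data and ρ value are invented for illustration and are not the thesis's experiments.

```python
import numpy as np

def rbf(X1, X2, ell=1.0):
    """Squared-exponential kernel over scalar inputs."""
    d2 = (X1[:, None] - X2[None, :]) ** 2
    return np.exp(-0.5 * d2 / ell**2)

# Training inputs for the primary (t=0) and secondary (t=1) tasks.
X = np.array([0.1, 0.4, 0.7, 0.2, 0.5, 0.9])
t = np.array([0, 0, 0, 1, 1, 1])
y = np.sin(2 * np.pi * X) + np.where(t == 1, 0.3, 0.0)

rho = 0.8                                  # inter-task correlation
Kf = np.array([[1.0, rho], [rho, 1.0]])    # task covariance matrix
noise = 1e-2

# Joint covariance: task similarity multiplies input similarity,
# so secondary-task observations inform the primary task via rho.
K = Kf[t][:, t] * rbf(X, X) + noise * np.eye(len(X))

# Predictive mean for the primary task (t* = 0) at new inputs.
Xs = np.linspace(0, 1, 5)
Ks = Kf[np.zeros(len(Xs), dtype=int)][:, t] * rbf(Xs, X)
mean = Ks @ np.linalg.solve(K, y)
print(mean)
```

Setting rho to 0 decouples the tasks and recovers independent single-task GP regression, which is one way to see how the model interpolates between pooled and isolated learning.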
495

Delay estimation in computer networks

Johnson, Nicholas Alexander January 2010 (has links)
Computer networks are becoming increasingly large and complex, more so with the recent penetration of the internet into all walks of life. It is essential to be able to monitor and analyse networks in a timely and efficient manner, to extract important metrics and measurements, and to do so in a way which does not unduly disturb or affect the performance of the network under test. Network tomography is one possible method to accomplish these aims. Drawing upon the principles of statistical inference, it is often possible to determine the statistical properties of either the links or the paths of the network, whichever is desired, by measuring at the most convenient points, thus reducing the effort required. In particular, bottleneck-link detection methods, in which estimates of the delay distributions on network links are inferred from measurements made at end-points on network paths, are examined as a means to determine which links of the network are experiencing the highest delay.

Initially, two published methods, one based upon a single Gaussian distribution and the other based upon the method-of-moments, are examined by comparing their performance using three metrics: robustness to scaling, bottleneck detection accuracy and computational complexity. Whilst there are many published algorithms, there is little literature in which said algorithms are objectively compared. In this thesis, two network topologies are considered, each with three configurations, in order to determine performance in six scenarios.

Two new estimation methods are then introduced, both based on Gaussian mixture models, which are believed to offer an advantage over existing methods in certain scenarios. Computationally, a mixture model algorithm is much more complex than a simple parametric algorithm, but the flexibility in modelling an arbitrary distribution is vastly increased; better model accuracy potentially leads to more accurate estimation and detection of the bottleneck. The concept of increasing flexibility is again considered by using a Pearson type-1 distribution as an alternative to the single Gaussian distribution. This increases the flexibility but with a reduced complexity when compared with mixture model approaches, which necessitate the use of iterative approximation methods. A hybrid approach is also considered, where the method-of-moments is combined with the Pearson type-1 method in order to circumvent problems with the output stage of the former. This algorithm has a higher variance than the method-of-moments, but its output stage is more convenient for manipulation. Also considered is a new approach to detection algorithms which is not dependent on any a priori parameter selection and makes use of the Kullback-Leibler divergence. The results show that it accomplishes its aim but is not robust enough to replace the current algorithms.

Delay estimation is then cast in a different role, as an integral part of an algorithm to correlate input and output streams in an anonymising network such as The Onion Router (Tor). Tor is used in an attempt to conceal network traffic from observation. Breaking the encryption protocols used is not possible without significant effort, but by correlating the unencrypted input and output streams from the Tor network, it is possible to provide a degree of certainty about the ownership of traffic streams. The delay model is essential as the network is treated as providing a pseudo-random delay to each packet; having an accurate model allows the algorithm to better correlate the streams.
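As one concrete illustration of the mixture-model ingredient, a two-component Gaussian mixture can be fitted to delay samples with a few lines of EM. The sketch below uses synthetic delays and is purely illustrative, not the estimator from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic path delays (ms): most packets see a small queueing delay,
# while a minority are held up at a congested (bottleneck) link.
delays = np.concatenate([rng.normal(10.0, 1.0, 800),
                         rng.normal(25.0, 3.0, 200)])

def gauss_pdf(x, mu, var):
    return np.exp(-0.5 * (x - mu)**2 / var) / np.sqrt(2 * np.pi * var)

# EM for a K-component univariate Gaussian mixture.
K = 2
w = np.full(K, 1.0 / K)
mu = np.quantile(delays, [0.25, 0.75])
var = np.full(K, delays.var())
for _ in range(100):
    # E-step: responsibility of each component for each sample.
    r = w * gauss_pdf(delays[:, None], mu, var)
    r /= r.sum(axis=1, keepdims=True)
    # M-step: re-estimate weights, means and variances.
    n = r.sum(axis=0)
    w, mu = n / len(delays), (r * delays[:, None]).sum(axis=0) / n
    var = (r * (delays[:, None] - mu)**2).sum(axis=0) / n

print("weights:", w, "means:", mu)  # slower, heavier-tailed mode ~ bottleneck
```

The iterative E/M loop is exactly the extra computational cost the abstract mentions relative to a simple parametric (single-Gaussian or method-of-moments) fit.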
496

LACTONE-CARBOXYLATE INTERCONVERSION AS A DETERMINANT OF THE CLEARANCE AND ORAL BIOAVAILABILITY OF THE LIPOPHILIC CAMPTOTHECIN ANALOG AR-67

Adane, Eyob Debebe 01 January 2010 (has links)
The third-generation camptothecin analog AR-67 is undergoing early-phase clinical trials as a chemotherapeutic agent. Like all camptothecins, it undergoes pH-dependent reversible hydrolysis between the lipophilic lactone and the hydrophilic carboxylate forms. The physicochemical differences between the lactone and carboxylate could potentially give rise to differences in transport across and/or entry into cells. In vitro studies indicated reduced intracellular accumulation and/or apical-to-basolateral transport of AR-67 lactone in P-gp- and/or BCRP-overexpressing MDCKII cells, and increased cellular uptake of the carboxylate in OATP1B1- and OATP1B3-overexpressing HeLa-pIRESneo cells. Pharmacokinetic studies were conducted in rats to study the disposition and oral bioavailability of the lactone and carboxylate and to evaluate the extent of their interaction with uptake and efflux transporters. A pharmacokinetic model accounting for interconversion in the plasma was developed and its performance evaluated through simulations and in vivo transporter inhibition studies using GF120918 and rifampin. The model predicted well the scenarios likely to be encountered clinically from pharmacogenetic differences in transporter proteins, drug-drug interactions and alterations in organ function. Oral bioavailability studies showed similar bioavailability following lactone and carboxylate administration and indicated the significant role ABC transporters play in limiting oral bioavailability.
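Plasma lactone-carboxylate interconversion of this kind is typically written as a pair of coupled first-order differential equations. The sketch below illustrates that generic structure only; all rate constants are invented placeholders, not the fitted AR-67 parameters, and the dissertation's full model (transporters, oral absorption) is not reproduced.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Interconversion between lactone (L) and carboxylate (C) in plasma,
# each species with its own elimination. All rate constants (1/h)
# are illustrative placeholders.
k_lc, k_cl = 0.8, 0.3   # lactone -> carboxylate and back
ke_l, ke_c = 0.5, 0.2   # elimination of each species

def rates(t, y):
    L, C = y
    dL = -(k_lc + ke_l) * L + k_cl * C
    dC = k_lc * L - (k_cl + ke_c) * C
    return [dL, dC]

# IV bolus of lactone: all drug starts in the lactone pool.
sol = solve_ivp(rates, (0, 12), [100.0, 0.0], dense_output=True)
t = np.linspace(0, 12, 5)
L, C = sol.sol(t)
print(np.c_[t, L, C])  # lactone falls as carboxylate forms, then both clear
```

Fitting such a model to concentration-time data for both species is what lets interconversion clearances be separated from true elimination clearances.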
497

Crystalline order and topological charges on capillary bridges

Schmid, Verena, Voigt, Axel 30 July 2014 (has links)
We numerically investigate crystalline order on capillary bridges with negative Gaussian curvature. In agreement with the experimental results in [W. Irvine et al., "Pleats in crystals on curved surfaces", Nature, 2010, 468, 947], we observe, for decreasing integrated Gaussian curvature, a sequence of transitions from no defects to isolated dislocations, pleats, scars and isolated sevenfold disclinations. We especially focus on the dependency of topological charge on the integrated Gaussian curvature, for which we observe, again in agreement with the experimental results, no net disclination charge for an integrated curvature down to −10, and an approximately linear behavior from there on until the disclinations match the integrated curvature of −12. In contrast to previous studies, in which ground states for each geometry are searched for, we show here that the experimental results, which are likely to be in a metastable state, can best be reproduced by mimicking the experimental settings and continuously changing the geometry; the obtained configurations are only low-energy local minima. The results are computed using a phase field crystal approach on catenoid-like surfaces and are highly sensitive to the initialization.
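In such triangulated packings, the topological charge of a site with coordination number c is conventionally q = 6 − c, so fivefold disclinations carry charge +1 and sevenfold ones −1. A toy sketch of that net-charge bookkeeping, with hypothetical coordination data rather than output of a phase field crystal simulation:

```python
import numpy as np

# Hypothetical coordination numbers of interior lattice sites on a
# negative-curvature patch: mostly sixfold, a few sevenfold
# disclinations (q = -1) and fivefold ones (q = +1).
coordination = np.array([6] * 90 + [7] * 8 + [5] * 2)

charge = 6 - coordination  # per-site topological charge
print("net topological charge:", charge.sum())  # -6 for this data

# Gauss-Bonnet-type arguments make the net charge track the integrated
# Gaussian curvature of the patch; in the experiment above there is no
# net charge until about -10, then roughly linear behavior down to -12.
```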
498

Spatiotemporal Gene Networks from ISH Images

Puniyani, Kriti 01 September 2013 (has links)
As large-scale techniques for studying and measuring gene expression have been developed, automatically inferring gene interaction networks from expression data has emerged as a popular technique to advance our understanding of cellular systems. Accurate prediction of gene interactions, especially in multicellular organisms such as Drosophila or humans, requires temporal and spatial analysis of gene expression, which is not easily obtainable from microarray data. New image-based techniques using in-situ hybridization (ISH) have recently been developed to allow large-scale spatiotemporal profiling of whole-body mRNA expression. However, analysis of such data for discovering new gene interactions still remains an open challenge. This thesis studies the question of predicting gene interaction networks from ISH data in three parts. First, we present SPEX2, a computer vision pipeline to extract informative features from ISH data. Next, we present an algorithm, GINI, for learning spatial gene interaction networks from embryonic ISH images at a single time step. GINI combines multi-instance kernels with recent work in learning sparse undirected graphical models to predict interactions between genes. Finally, we propose NP-MuScL (nonparanormal multi-source learning) to estimate a gene interaction network that is consistent with multiple sources of data, having the same underlying relationships between the nodes. NP-MuScL casts the network estimation problem as estimating the structure of a sparse undirected graphical model. We use the semiparametric Gaussian copula to model the distribution of the different data sources, with the different copulas sharing the same covariance matrix, and show how to estimate such a model in the high-dimensional scenario. We apply our algorithms to more than 100,000 Drosophila embryonic ISH images from the Berkeley Drosophila Genome Project. Each of the six time steps in Drosophila embryonic development is treated as a separate data source. With spatial gene interactions predicted via GINI, and temporal predictions combined via NP-MuScL, we are finally able to predict spatiotemporal gene networks from these images.
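Two generic ingredients of a nonparanormal network estimator of this flavour are a rank-based Gaussianizing transform followed by sparse inverse-covariance estimation. The sketch below shows a single-source version on toy data using scikit-learn's graphical lasso; NP-MuScL itself couples several sources through a shared covariance matrix, which is not reproduced here.

```python
import numpy as np
from scipy.stats import norm, rankdata
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(1)
X = rng.lognormal(size=(200, 10))   # toy non-Gaussian "expression" data

# Nonparanormal transform: push each variable's ranks through the
# standard normal quantile function so the margins become Gaussian.
n = X.shape[0]
Z = norm.ppf(rankdata(X, axis=0) / (n + 1))

# Sparse inverse covariance: zero entries encode conditional
# independence, i.e. absent edges in the inferred interaction network.
model = GraphicalLasso(alpha=0.1).fit(Z)
edges = (np.abs(model.precision_) > 1e-6) & ~np.eye(10, dtype=bool)
print("edges:", int(edges.sum()) // 2)
```

Raising `alpha` increases the sparsity penalty and prunes more edges, which is the usual knob for trading recall against false-positive interactions.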
499

Adjusting for Selection Bias Using Gaussian Process Models

Du, Meng 18 July 2014 (has links)
This thesis develops techniques for adjusting for selection bias using Gaussian process models. Selection bias is a key issue both in sample surveys and in observational studies for causal inference. Despite recently emerged techniques for dealing with selection bias in high-dimensional or complex situations, the use of Gaussian process models, and Bayesian hierarchical models in general, has not been explored. Three approaches are developed for using Gaussian process models to estimate the population mean of a response variable with a binary selection mechanism. The first approach models only the response, with the selection probability being ignored. The second approach incorporates the selection probability when modeling the response using dependent Gaussian process priors. The third approach uses the selection probability as an additional covariate when modeling the response. The third approach requires knowledge of the selection probability, while the second approach can be used even when the selection probability is not available. In addition to these Gaussian process approaches, a new version of the Horvitz-Thompson estimator is also developed, which follows the conditionality principle and relates to importance sampling for Monte Carlo simulations. Simulation studies and the analysis of an example due to Kang and Schafer show that the Gaussian process approaches that consider the selection probability are able not only to correct selection bias effectively, but also to control the sampling errors well, and therefore can often provide more efficient estimates than the tested methods that are not based on Gaussian process models, in both simple and complex situations. Even the Gaussian process approach that ignores the selection probability often, though not always, performs well when some selection bias is present. These results demonstrate the strength of Gaussian process models in dealing with selection bias, especially in high-dimensional or complex situations. They also demonstrate that Gaussian process models can be implemented effectively enough that their benefits can be realized in practice, contrary to the common belief that highly flexible models are too complex to use practically for dealing with selection bias.
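For reference, the classical Horvitz-Thompson estimator (of which the thesis develops a new version) weights each sampled response by its inverse selection probability. A minimal sketch on synthetic, deliberately biased data; the thesis's conditionality-respecting variant is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 10_000
y = rng.normal(50, 10, N)                  # population responses
pi = 1 / (1 + np.exp(-(y - 50) / 10))      # selection favours large y
pi = 0.02 + 0.28 * pi                      # keep probabilities in (0.02, 0.3)
sampled = rng.random(N) < pi               # biased binary selection

naive = y[sampled].mean()                  # ignores the selection bias
ht = (y[sampled] / pi[sampled]).sum() / N  # Horvitz-Thompson mean estimate
print(f"true {y.mean():.2f}  naive {naive:.2f}  HT {ht:.2f}")
```

The naive sample mean is pulled upward because high responses are over-selected, while the inverse-probability weighting corrects for this on average, at the cost of higher variance when some probabilities are small.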
500

INFORMATION THEORETIC CRITERIA FOR IMAGE QUALITY ASSESSMENT BASED ON NATURAL SCENE STATISTICS

Zhang, Di January 2006 (has links)
Measurement of visual quality is crucial for various image and video processing applications.

The goal of objective image quality assessment is to introduce a computational quality metric that can predict image or video quality. Many methods have been proposed in the past decades. Traditionally, measurements convert the spatial data into some other feature domain, such as the Fourier domain, and compute the similarity, such as mean square distance or Minkowski distance, between the test data and the reference or perfect data; however, only limited success has been achieved. None of the complicated metrics show any great advantage over other existing metrics.

The common idea shared among many proposed objective quality metrics is that human visual error sensitivities vary across spatial and temporal frequency and directional channels. In this thesis, image quality assessment is approached by proposing a novel framework that computes the information lost in each channel, rather than the similarities used in previous methods. Based on natural scene statistics and several image models, an information-theoretic framework is designed to compute the perceptual information contained in images and evaluate image quality in the form of entropy.

The thesis is organized as follows. Chapter I gives a general introduction to previous work in this research area and a brief description of the human visual system. In Chapter II, statistical models for natural scenes are reviewed. Chapter III proposes the core ideas about the computation of the perceptual information contained in images. In Chapter IV, information-theoretic criteria for image quality assessment are defined. Chapter V presents the simulation results in detail. In the last chapter, future directions and improvements of this research are discussed.
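For a Gaussian model of channel coefficients, the differential entropy has the closed form h = ½ log(2πeσ²), which is the kind of per-channel quantity an entropy-based framework evaluates. A toy sketch with synthetic coefficients and a simple additive-noise distortion; this illustrates the general idea only, not the thesis's specific criteria.

```python
import numpy as np

def gaussian_entropy(coeffs):
    """Differential entropy (nats) of coefficients under a
    zero-mean Gaussian model: 0.5 * log(2*pi*e*var)."""
    var = np.var(coeffs)
    return 0.5 * np.log(2 * np.pi * np.e * var)

rng = np.random.default_rng(3)
ref = rng.normal(0, 4.0, 10_000)               # reference band coefficients
distorted = ref + rng.normal(0, 1.0, 10_000)   # additive-noise distortion

# Information the distorted band preserves about the reference under
# jointly Gaussian assumptions: I = 0.5 * log(1 + SNR).
snr = np.var(ref) / 1.0**2
mutual_info = 0.5 * np.log(1 + snr)
print(gaussian_entropy(ref), gaussian_entropy(distorted), mutual_info)
```

Stronger distortion lowers the mutual information between reference and test bands, and summing such per-channel losses is one way an information-theoretic metric can score overall quality.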
