491

Neutral and Adaptive Processes Shaping Genetic Variation in Spruce Species

Stocks, Michael January 2013 (has links)
Population genetic analyses can provide information about both neutral and selective evolutionary processes shaping genetic variation. In this thesis, extensive population genetic methods were used to make inferences about genetic drift and selection in spruce species. In paper I we studied four species from the Qinghai-Tibetan Plateau (QTP): Picea likiangensis, P. purpurea, P. wilsonii and P. schrenkiana. Large differences in estimates of genetic diversity and Ne were observed between the more geographically restricted species, P. schrenkiana, and the other, more widely distributed species. Furthermore, P. purpurea appears to be a hybrid between P. likiangensis and P. wilsonii. In paper II we used Approximate Bayesian Computation (ABC) to show that the data support a drastic reduction of Ne in Taiwan spruce around 300-500 kya, in line with evidence from the pollen record. The split from P. wilsonii was dated to 4-8 mya, around the time that Taiwan was formed. These analyses relied on a small sample size, and so in paper III we investigated the impact of small datasets on the power to distinguish between models in ABC. We found that when genetic diversity is low there is little power to distinguish even between simple coalescent models, and that such analyses can guide the number of samples and loci required. In paper IV we studied the relative importance of genetic drift and selection in four spruce species with differing Ne: P. abies, P. glauca, P. jezoensis and P. breweriana. P. breweriana, which has a low Ne, exhibits a low fraction of adaptive substitutions, while P. abies has a high Ne and a high fraction of adaptive substitutions. The other two species, however, do not fit this pattern, suggesting that other factors are more important. In paper V we find that several SNPs correlate with both a key adaptive trait (budset) and latitude. The expression of one in particular (PoFTL2) correlates with budset and was previously identified in P. abies. These studies have helped characterise the importance of different population genetic processes in shaping genetic variation in spruce species and have laid solid groundwork for future studies of spruce.
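As a concrete illustration of the ABC model choice used in papers II and III, the following sketch runs rejection-based ABC to compare two toy demographic models; the "simulators" and parameter values are made-up stand-ins, not the coalescent models or spruce data used in the thesis.

```python
# Toy sketch of ABC model choice.  The two "simulators" below are
# deliberately simple stand-ins, not coalescent models.
import numpy as np

rng = np.random.default_rng(0)
N_LOCI = 10          # assumed number of loci in the (small) dataset
THETA = 2.0          # assumed diversity parameter per locus

def simulate_constant(theta, n_loci):
    # Summary statistic: mean number of segregating sites per locus,
    # drawn from a Poisson with mean theta (toy stand-in).
    return rng.poisson(theta, size=n_loci).mean()

def simulate_bottleneck(theta, n_loci):
    # Bottleneck model: diversity reduced by an assumed factor of 0.3.
    return rng.poisson(0.3 * theta, size=n_loci).mean()

# "Observed" summary statistic (here simulated under the bottleneck model).
s_obs = simulate_bottleneck(THETA, N_LOCI)

# ABC rejection: simulate from each model with equal prior probability,
# keep draws whose summary lands within a tolerance of the observation.
n_sims, tol, kept = 100_000, 0.25, []
for _ in range(n_sims):
    model = rng.integers(2)        # 0 = constant size, 1 = bottleneck
    sim = (simulate_constant if model == 0 else simulate_bottleneck)(THETA, N_LOCI)
    if abs(sim - s_obs) <= tol:
        kept.append(model)

kept = np.array(kept)
print(f"accepted draws: {kept.size}")
print(f"posterior P(bottleneck | data) ~ {kept.mean():.2f}")
```

Shrinking the number of loci or the diversity parameter makes the two summary distributions overlap and pulls the posterior back towards 0.5, which is the low-power regime paper III investigates.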
492

ICT and economic growth : a dynamic non-parametric approach

Wang, Bin January 2010 (has links)
One important issue for policymakers is how to improve output and/or productivity growth associated with information and communication technology (ICT) adoption: total factor productivity (TFP) growth related to ICT appeared in the US in the 1990s but not in the UK (Jorgenson and Stiroh, 2000; Oliner and Sichel, 2000). The general agreement is that ICT can raise output and/or productivity growth via an increase in productivity growth in the ICT-producing sectors due to rapid technological progress, through capital deepening driven by high levels of investment in ICT equipment, and via increases in efficiency in ICT-using sectors that successfully adopt this new technology through ICT spillover effects (David, 1990). Due to the small size of ICT-producing industries and the relatively low level of ICT investment in the UK (Colecchia and Schreyer, 2001; Daveri, 2002; Vijselaar and Albers, 2002), the utilization of ICT spillover effects was crucial to improving output and/or productivity growth for the UK. However, while most previous studies concluded that ICT spillover effects existed in the US, they reported mixed results as to whether such effects existed in the UK (Schreyer, 2000; Basu et al., 2003; Inklaar et al., 2005; Jorgenson et al., 2005). The objective of this thesis is to contribute to the existing literature by investigating the existence of ICT spillover effects in the US and the UK and exploring the reasons for the differences between them. This thesis argues that the mixed findings in previous studies are due to the neglect of general-purpose technology (GPT) theory and to weaknesses in methodology. Thus, the first step is to build a new framework for measuring ICT spillover effects that addresses the problems in the existing studies. The main gap left by ignoring GPT theory is the lack of guidance on a proxy for co-invention related to ICT investments and on the length of the lag. The new framework closes this gap by using efficiency as a proxy for co-invention and capturing the length of the lag through years with negative returns on ICT capital. The methodology employed in previous studies was inappropriate mainly because of the small sample sizes used in ICT studies, the two-stage approach used to explore the effect of environmental variables on efficiency, and the linear and concavity assumptions on the frontiers that do not take account of ICT as a GPT. The new framework uses Bayesian techniques, a one-stage approach and non-parametric frontiers to avoid these three drawbacks. In addition, the new framework introduces the persistent level of inefficiency, using a first-order autoregressive (AR(1)) structure of the inefficiency itself, as one of the factors that influence ICT spillover effects. In order to model this framework, which takes into account non-parametric frontiers for capturing negative returns on ICT capital, an AR(1) structure of inefficiency, the small sample size and the factors that influence ICT spillover effects, this thesis has developed two non-parametric dynamic stochastic frontier analysis (SFA) models with an AR(1) structure and performed the analysis via Bayesian inference. The first model was a semi-parametric dynamic stochastic frontier with a time-variant non-parametric frontier at the basic level and a time-invariant linear function for technical inefficiency at the higher level. The second model relaxed the time-invariant linear functional form for technical inefficiency at the higher level.
The results of the new framework showed strong ICT spillover effects in the US, with a lag of about 6-8 years during 1982-83 to 1988-89, and relatively weaker ICT spillover effects in the UK. This is consistent with the UK still being in the process of organizational adjustment up to 2000 because of a longer lag; hence the lack of TFP growth in the UK in the 1990s. Regarding the different ICT spillover effects between the US and the UK, the results from the new framework suggested that the differing persistent levels of inefficiency between the two countries were important, in addition to the different levels of ICT investment noted in previous studies (Inklaar, O'Mahony and Timmer, 2003). JEL Classifications: C51, E13, O30, O33
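To make the persistent-inefficiency idea concrete, here is a minimal simulation of a stochastic frontier whose inefficiency term follows an AR(1) process; the Cobb-Douglas-style frontier and all parameter values are hypothetical, and this is not the thesis's Bayesian non-parametric estimator.

```python
# Minimal simulation of a stochastic frontier with AR(1) inefficiency,
# illustrating how inefficiency shocks persist over time.
import numpy as np

rng = np.random.default_rng(1)
T = 30                       # years
rho = 0.8                    # persistence of inefficiency (AR(1) coefficient)
sigma_eta, sigma_v = 0.05, 0.02

x = np.linspace(1.0, 2.0, T)          # log input (e.g. capital per worker)
frontier = 0.3 + 0.6 * x              # log frontier output (assumed form)

u = np.zeros(T)                       # inefficiency (kept non-negative)
for t in range(1, T):
    u[t] = max(0.0, rho * u[t - 1] + rng.normal(0.0, sigma_eta) + 0.01)

v = rng.normal(0.0, sigma_v, size=T)  # idiosyncratic noise
y = frontier - u + v                  # observed log output

efficiency = np.exp(-u)               # technical efficiency in (0, 1]
print("mean efficiency:", efficiency.mean().round(3))
print("efficiency in last 5 years:", efficiency[-5:].round(3))
```

With rho close to one, a country that falls behind the frontier stays behind for many years, which is the mechanism the thesis points to when comparing the UK with the US.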
493

Developing integrated data fusion algorithms for a portable cargo screening detection system

Ayodeji, Akiwowo January 2012 (has links)
Towards a one-size-fits-all solution to cocaine detection at borders, this thesis proposes a systematic cocaine detection methodology that can use the raw data output from a fibre optic sensor to produce a set of unique features whose decisions can be combined to give a reliable output. This multidisciplinary research makes use of real data sourced from a cocaine-analyte-detecting fibre optic sensor developed by one of the collaborators, City University London. The research advocates a two-step approach. In the first step, the raw sensor data are collected and stored; level-one fusion, i.e. analysis, pre-processing and feature extraction, is performed at this stage. In the second step, using experimentally pre-determined thresholds, each feature decides on the detection of cocaine or otherwise with a corresponding posterior probability. High-level sensor fusion is then performed on this output locally to combine these decisions and their probabilities at time intervals. The output from every time interval is stored in the database and used as prior data for the next time interval. The final output is a decision on the detection of cocaine. The key contributions of this thesis include investigating the use of data fusion techniques as a solution for overcoming the challenges in the real-time detection of cocaine using fibre optic sensor technology, together with an innovative user interface design. A generalizable sensor fusion architecture is suggested and implemented using Bayesian and Dempster-Shafer techniques. The results from the implemented experiments show great promise for this architecture, especially in overcoming sensor limitations. A 5-fold cross-validation system using a 12-13-1 neural network was used to validate the feature selection process. This validation step yielded true positive and false alarm rates of 89.5% and 10.5% respectively, with a correlation coefficient of 0.8. Using the Bayesian technique it is possible to achieve 100% detection, whilst the Dempster-Shafer technique achieves 95% detection using the same features as inputs to the data fusion system.
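As an illustration of the high-level fusion step, the sketch below applies Dempster's rule of combination to two feature-level decisions over the frame {cocaine, clean}; the mass assignments are invented for illustration and are not the thesis's experimental values.

```python
# Dempster's rule of combination for two feature-level decisions over the
# frame {cocaine, clean}.  Mass assignments are illustrative only.
from itertools import product

def combine(m1, m2):
    """Combine two mass functions whose keys are frozensets of hypotheses."""
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb
    # Normalise by the non-conflicting mass (Dempster's rule).
    return {k: v / (1.0 - conflict) for k, v in combined.items()}, conflict

COCAINE, CLEAN = frozenset({"cocaine"}), frozenset({"clean"})
EITHER = COCAINE | CLEAN   # ignorance: mass on the whole frame

# Two features voting, each leaving some mass on "don't know".
feature_1 = {COCAINE: 0.7, CLEAN: 0.1, EITHER: 0.2}
feature_2 = {COCAINE: 0.6, CLEAN: 0.2, EITHER: 0.2}

fused, conflict = combine(feature_1, feature_2)
print("conflict mass:", round(conflict, 3))
for hypothesis, mass in fused.items():
    print(set(hypothesis), round(mass, 3))
```

A Bayesian alternative would instead multiply the features' likelihood ratios into posterior odds over the same two hypotheses, without an explicit "ignorance" mass.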
494

Bayesian model of the dynamics of motion integration in smooth pursuit and plaid perception

Dimova, Kameliya January 2010 (has links)
In this thesis, a model of motion integration is described which is based on a recursive Bayesian estimation process. The model displays dynamic behaviour qualitatively similar to the dynamics of the motion integration process observed experimentally in smooth eye pursuit and plaid perception. Computer simulations of the model applied to smooth pursuit eye movements confirm the psychophysical data in both humans and monkeys, and the physiological data in monkeys. The temporal dynamics of motion integration are demonstrated together with their dependence on contrast, stimulus size and added noise. A new theoretical approach to explaining plaid perception has been developed, based both on the application of the model and on a novel geometrical analysis of the plaid's pattern. It is shown that the results from simulating the model are consistent with the psychophysical data on plaid motion. Furthermore, by formulating the model as an approximate version of a Kalman filter algorithm, it is shown that the model can be put into a neurally plausible, distributed recurrent form which coarsely corresponds to the recurrent circuitry of visual cortical areas V1 and MT. The model thus provides further support for the notion that the motion integration process is based on a form of Bayesian estimation, as has been suggested by many psychophysical studies, and moreover suggests that the observed dynamic properties of this process are the result of the recursive nature of the motion estimation.
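A minimal sketch of the recursive Bayesian (Kalman-style) estimation the abstract refers to: a scalar filter estimates target velocity from noisy local motion measurements whose noise variance grows as contrast falls, reproducing the slower integration at low contrast. All numbers are illustrative and this is not the thesis's model.

```python
# Minimal 1-D recursive Bayesian (Kalman) estimate of target velocity from
# noisy local motion measurements.  Measurement noise shrinks with contrast,
# so low-contrast stimuli integrate more slowly, qualitatively matching the
# dependence described in the abstract.  All numbers are illustrative.
import numpy as np

def track_velocity(true_v=10.0, contrast=0.5, steps=50, seed=0):
    rng = np.random.default_rng(seed)
    r = (1.0 / contrast) ** 2      # measurement variance grows at low contrast
    q = 0.01                       # small process noise (velocity drifts little)
    v_hat, p = 0.0, 100.0          # broad initial belief about velocity
    estimates = []
    for _ in range(steps):
        # Predict: random-walk model for the underlying velocity.
        p += q
        # Update with a noisy velocity measurement.
        z = true_v + rng.normal(0.0, np.sqrt(r))
        k = p / (p + r)            # Kalman gain
        v_hat += k * (z - v_hat)
        p *= (1.0 - k)
        estimates.append(v_hat)
    return np.array(estimates)

for c in (1.0, 0.2):
    est = track_velocity(contrast=c)
    print(f"contrast {c}: estimate after 5 steps = {est[4]:.1f}, "
          f"after 50 steps = {est[-1]:.1f}")
```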
495

Nonlinear design of geophysical surveys and processing strategies

Guest, Thomas January 2010 (has links)
The principal aim of all scientific experiments is to infer knowledge about a set of parameters of interest through the process of data collection and analysis. In the geosciences, large sums of money are spent on the data analysis stage but much less attention is focussed on the data collection stage. Statistical experimental design (SED), a mature field of statistics, uses mathematically rigorous methods to optimise the data collection stage so as to maximise the amount of information recorded about the parameters of interest. The uptake of SED methods in geophysics has been limited because the majority of SED research is based on linear and linearised theories, whereas most geophysical methods are highly nonlinear, so the developed methods are not robust. Nonlinear SED methods are computationally demanding, and the methods that do exist to date either restrict the designs to be very simplistic or are computationally infeasible, and therefore cannot be used in an industrial setting. In this thesis, I first show that it is possible to design industry-scale experiments for highly nonlinear problems within a computationally tractable time frame. Using an entropy-based method constructed on a Bayesian framework, I introduce an iteratively-constructive method that reduces the computational demand by introducing one new datum at a time to the design. The method reduces the multidimensional design space to a single-dimensional space at each iteration by fixing the experimental setup of the previous iteration. Both a synthetic experiment using a highly nonlinear parameter-data relationship and a seismic amplitude versus offset (AVO) experiment are used to illustrate that the results produced by the iteratively-constructive method closely match the results of a global design method at a fraction of the computational cost. This new method thus extends the class of iterative design methods to nonlinear problems, and makes fully nonlinear design methods applicable to higher-dimensional, industrial-scale problems. Using the new iteratively-constructive method, I show how optimal trace profiles for processing amplitude versus angle (AVA) surveys that account for all prior petrophysical information about the target reservoir can be generated using fully nonlinear methods. I examine how the optimal selections change as our prior knowledge of the rock parameters and reservoir fluid content changes, and assess which of the prior parameters has the largest effect on the selected traces. The results show that optimal profiles are far more sensitive to prior information about reservoir porosity than to information about saturating fluid properties. By applying ray tracing methods, the AVA results can be used to design optimal processing profiles from seismic datasets for multiple targets, each with different prior model uncertainties. Although the iteratively-constructive method can be used to design the data collection stage, it is used here to select optimal data subsets post-survey. Using a nonlinear Bayesian SED method, I show how industrial-scale amplitude versus offset (AVO) data collection surveys can be constructed to maximise the information content contained in AVO crossplots, the principal source of petrophysical information from seismic surveys.
The results show that the optimal design is highly dependent on the model parameters when a low number of receivers is used, but that a single optimal design exists for the complete range of parameters once the number of receivers is increased above a threshold value. However, when acquisition and processing costs are considered I find that, in the case of AVO experiments, a design with constant spatial receiver separation is close to optimal. This explains why regularly-spaced, 2D seismic surveys have performed so well historically, not only from the point of view of noise attenuation and imaging, in which homogeneous data coverage confers distinct advantages, but also in providing data to constrain subsurface petrophysical information. Finally, I discuss the implications of the new methods developed and assess which areas of geophysics would benefit from applying SED methods during the design stage.
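The following sketch illustrates the iteratively-constructive idea on a toy problem: at each step the candidate datum giving the largest expected information gain (prior entropy minus expected posterior entropy) is added to the fixed design. The forward function, prior grid and noise level are invented stand-ins, not the AVO/AVA physics used in the thesis.

```python
# Sketch of greedy (iteratively-constructive) Bayesian experimental design:
# at each step the candidate datum with the largest expected information
# gain is added to the fixed design.
import numpy as np

rng = np.random.default_rng(0)
theta_grid = np.linspace(0.0, 1.0, 101)        # discretised parameter prior
prior = np.full_like(theta_grid, 1.0 / theta_grid.size)
sigma = 0.05                                    # data noise std (assumed)

def forward(theta, x):
    return np.sin(3.0 * np.pi * theta * x)      # highly nonlinear toy model

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def expected_post_entropy(design, n_mc=300):
    """Monte Carlo estimate of E_y[H(p(theta | y, design))]."""
    total = 0.0
    for _ in range(n_mc):
        theta_true = rng.choice(theta_grid, p=prior)
        y = forward(theta_true, design) + rng.normal(0.0, sigma, design.size)
        # Posterior on the grid for this simulated dataset.
        loglik = -0.5 * np.sum(
            (y - forward(theta_grid[:, None], design)) ** 2, axis=1) / sigma**2
        post = prior * np.exp(loglik - loglik.max())
        total += entropy(post / post.sum())
    return total / n_mc

candidates = list(np.linspace(0.0, 1.0, 21))    # possible measurement positions
design = np.array([])
for step in range(3):
    gains = [entropy(prior) - expected_post_entropy(np.append(design, x))
             for x in candidates]
    best = candidates.pop(int(np.argmax(gains)))
    design = np.append(design, best)
    print(f"step {step + 1}: add x = {best:.2f}, design = {np.round(design, 2)}")
```

Because earlier choices are frozen, each iteration only searches over a single new coordinate, which is what keeps the method tractable at industrial scale.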
496

Variational inference for Gaussian-jump processes with application in gene regulation

Ocone, Andrea January 2013 (has links)
In the last decades, the explosion of data from quantitative techniques has revolutionised our understanding of biological processes. In this scenario, advanced statistical methods and algorithms are becoming fundamental to deciphering the dynamics of biochemical mechanisms such as those involved in the regulation of gene expression. Here we develop mechanistic models and approximate inference techniques to reverse engineer the dynamics of gene regulation from mRNA and/or protein time series data. We start from an existing variational framework for statistical inference in transcriptional networks. The framework is based on a continuous-time description of the mRNA dynamics in terms of stochastic differential equations, which are governed by latent switching variables representing the on/off activity of regulating transcription factors. The main contributions of this work are the following. We sped up the variational inference algorithm by developing a method to compute an approximate posterior distribution over the latent variables using a constrained optimisation algorithm. In addition to the computational benefits, this method enabled the extension to statistical inference in networks with a combinatorial model of regulation. A limitation of this framework is that inference is possible only in transcriptional networks with a single-layer architecture (where a single transcription factor, or a pair of transcription factors, directly regulates an arbitrary number of target genes). The second main contribution of this work is the extension of the inference framework to hierarchical structures, such as the feed-forward loop. In the last contribution we define a general structure for transcription-translation networks. This work is important because it provides a general statistical framework to model complex dynamics in gene regulatory networks. The framework is modular and scalable to realistically large systems with general architecture, thus representing a valuable alternative to traditional differential equation models. All models are embedded in a Bayesian framework; inference is performed using a variational approach and compared to exact inference where possible. We apply the models to the study of different biological systems, from metabolism in E. coli to the circadian clock in the picoalga O. tauri.
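A minimal simulation of the model class described above: mRNA follows a stochastic differential equation whose production term is gated by a latent on/off (telegraph) transcription-factor state. The rate constants are hypothetical.

```python
# Euler-Maruyama simulation of an mRNA SDE gated by a latent on/off
# (telegraph) transcription-factor state.  All rate constants are
# hypothetical.
import numpy as np

rng = np.random.default_rng(0)
dt, T = 0.01, 20.0
n = int(T / dt)

k_on, k_off = 0.3, 0.5        # switching rates of the latent promoter state
b, A = 0.2, 2.0               # basal and activated transcription rates
lam, sigma = 0.8, 0.05        # mRNA degradation rate and diffusion term

mu = 0                        # latent on/off state
m = np.zeros(n)               # mRNA concentration
states = np.zeros(n, dtype=int)

for t in range(1, n):
    # Latent state switches with probability rate*dt per step.
    if mu == 0 and rng.random() < k_on * dt:
        mu = 1
    elif mu == 1 and rng.random() < k_off * dt:
        mu = 0
    states[t] = mu
    # Euler-Maruyama step for dm = (b + A*mu - lam*m) dt + sigma dW.
    drift = b + A * mu - lam * m[t - 1]
    m[t] = m[t - 1] + drift * dt + sigma * np.sqrt(dt) * rng.normal()

print("fraction of time promoter is on:", states.mean().round(2))
print("mean mRNA level:", m.mean().round(2))
```

Inference runs in the opposite direction: given noisy observations of the mRNA trajectory, the variational scheme approximates the posterior over the switching path and the kinetic parameters.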
497

Reference object choice in spatial language : machine and human models

Barclay, Michael John January 2010 (has links)
The thesis underpinning this study is as follows: it is possible to build machine models that are indistinguishable from the mental models used by humans to generate language describing their environment. That is to say, the machine model should perform in such a way that a human listener could not discern whether a description of a scene was generated by a human or by the machine model. Many linguistic processes are used to generate even simple scene descriptions, and developing machine models of all of them is beyond the scope of this study. The goal of this study is, therefore, to model a sufficient part of the scene description process, operating in a sufficiently realistic environment, so that the likelihood of being able to build machine models of the remaining processes, operating in the real world, can be established. The relatively under-researched process of reference object selection is chosen as the focus of this study. A reference object is, for instance, the 'table' in the phrase "The flowers are on the table". This study demonstrates that the reference selection process is of similar complexity to others involved in generating scene descriptions, which include assigning prepositions, selecting reference frames and disambiguating objects (usually termed 'generating referring expressions'). The secondary thesis of this study is therefore: it is possible to build a machine model that is indistinguishable from the mental models used by humans in selecting reference objects. Most of the practical work in the study is aimed at establishing this. An environment sufficiently close to the real world for the machine models to operate on is developed as part of this study. It consists of a series of 3-dimensional scenes containing multiple objects that are recognisable to humans and 'readable' by the machine models. The rationale for this approach is discussed. The performance of human subjects in describing this environment is evaluated, and measures by which the human performance can be compared to the performance of the machine models are discussed. The machine models used in the study are variants on Bayesian networks. A new approach to learning the structure of a subset of Bayesian networks is presented. Simple existing Bayesian classifiers such as naive or tree-augmented naive networks did not perform sufficiently well. A significant result of this study is that useful machine models for reference object choice are of such complexity that a machine learning approach is required; earlier proposals based on sums of weighted factors or similar constructions will not produce satisfactory models. Two differently derived sets of variables are used and compared in this study. Firstly, variables derived from the basic geometry of the scene and the properties of objects are used. Models built from these variables match the choice of reference of a group of humans some 73% of the time, compared with 90% for the median human subject. Secondly, variables derived from 'ray casting' the scene are used. Ray-cast variables performed much worse than anticipated, suggesting that humans use object knowledge as well as immediate perception in the reference choice task. Models combining geometric and ray-cast variables match the choice of reference of the group of humans some 76% of the time. Although neither of these machine models is likely to be indistinguishable from a human, the reference choices are rarely, if ever, entirely ridiculous.
A secondary goal of the study is to contribute to the understanding of the process by which humans select reference objects. Several statistically significant results concerning the necessary complexity of the human models and the nature of the variables within them are established. Problems that remain with both the representation of the near-real-world environment and the Bayesian models and variables used within them are detailed. While these problems cast some doubt on the results, it is argued that solving them is possible and would, on balance, lead to improved performance of the machine models. This further supports the assertion that machine models producing reference choices indistinguishable from those of humans are possible.
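For concreteness, here is the kind of simple Bayesian classifier the study starts from (and finds insufficient): a hand-rolled Gaussian naive Bayes scoring candidate reference objects from two assumed geometric features, trained on made-up data.

```python
# Gaussian naive Bayes baseline for reference-object choice, scored over
# two assumed geometric features (distance to the located object, object
# size).  Training data are made up; the thesis finds such simple
# classifiers insufficient, so this is only the baseline idea.
import numpy as np

# Training rows are [distance, size]; label 1 means the object was chosen
# as the reference in a (fictional) scene description.
X = np.array([[0.2, 1.5], [0.3, 2.0], [0.4, 1.0], [0.5, 1.2],
              [1.5, 0.3], [2.0, 0.5], [1.8, 0.2], [2.5, 0.4]])
y = np.array([1, 1, 1, 1, 0, 0, 0, 0])

def fit_gaussian_nb(X, y):
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        params[c] = (Xc.mean(axis=0), Xc.var(axis=0) + 1e-6, len(Xc) / len(y))
    return params

def class_log_score(x, mean, var, prior):
    loglik = -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mean) ** 2 / var)
    return loglik + np.log(prior)

params = fit_gaussian_nb(X, y)

# Candidate reference objects in a new scene: (name, [distance, size]).
candidates = [("table", [0.3, 1.8]), ("cup", [0.25, 0.1]), ("sofa", [2.2, 2.5])]
# Log posterior odds of "chosen as reference" versus "not chosen".
odds = {name: class_log_score(np.array(f), *params[1])
              - class_log_score(np.array(f), *params[0])
        for name, f in candidates}
print({k: round(v, 1) for k, v in odds.items()})
print("predicted reference:", max(odds, key=odds.get))
```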
498

Enhanced positioning in harsh environments / Förbättrad positionering i svåra miljöer

Glans, Fredrik January 2013 (has links)
Today's heavy-duty vehicles are equipped with safety and comfort systems, e.g. ABS and ESP, which totally or partly take over the vehicle in certain risk situations. As these systems become more and more autonomous, more robust positioning is needed. Under the right conditions the GPS system provides precise and robust positioning. However, in harsh environments, e.g. dense urban areas and dense forests, the GPS signals may be affected by multipath, meaning that the signals are reflected on their way from the satellites to the receiver. This can cause large errors in the positioning and thus have devastating effects for autonomous systems. This thesis evaluates different methods for enhancing a low-cost GPS in harsh environments, with a focus on mitigating multipath. Four methods are considered: a regular unscented Kalman filter, probabilistic multipath mitigation, an unscented Kalman filter with vehicle sensor input, and probabilistic multipath mitigation with vehicle sensor input. The algorithms are tested and validated on real data from both dense forest areas and dense urban areas. The results show that the positioning is enhanced, in particular when integrating the vehicle sensors, compared to a low-cost GPS.
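A one-dimensional sketch of why vehicle sensor input helps: wheel-speed odometry drives the prediction step and GPS fixes with implausibly large innovations (a crude multipath test) are downweighted. The noise levels and gate are hypothetical, and the thesis uses an unscented Kalman filter rather than this simple linear one.

```python
# 1-D position filter: wheel-speed odometry drives the prediction step and
# GPS updates with an implausibly large innovation (a crude multipath test)
# are downweighted.  All noise levels are hypothetical.
import numpy as np

rng = np.random.default_rng(2)
dt, n = 1.0, 60
true_speed = 15.0                                      # m/s, assumed constant
true_pos = np.cumsum(np.full(n, true_speed * dt))

wheel_speed = true_speed + rng.normal(0, 0.3, n)       # odometry input
gps = true_pos + rng.normal(0, 3.0, n)
gps[20:30] += 40.0                                     # multipath bias episode

x, p = 0.0, 100.0           # position estimate and its variance
q, r, gate = 1.0, 9.0, 3.0  # process var, nominal GPS var, gate (std devs)
errors = []
for t in range(n):
    # Predict with the wheel-speed "control input".
    x += wheel_speed[t] * dt
    p += q
    # GPS update, with innovation gating against multipath.
    innov = gps[t] - x
    s = p + r
    r_eff = r if abs(innov) < gate * np.sqrt(s) else r * 100.0
    k = p / (p + r_eff)
    x += k * innov
    p *= (1.0 - k)
    errors.append(abs(x - true_pos[t]))

print("mean abs error (m):", round(float(np.mean(errors)), 2))
print("max abs error during multipath (m):", round(max(errors[20:30]), 2))
```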
499

A Bayesian method to improve sampling in weapons testing

Floropoulos, Theodore C. 12 1900 (has links)
Approved for public release; distribution is unlimited.
This thesis describes a Bayesian method to determine the number of samples needed to estimate a proportion or probability with 95% confidence when prior bounds are placed on that proportion. It uses the Uniform[a,b] distribution as the prior, and develops a computer program and tables to find the sample size. Tables and examples are also given to compare these results with other approaches for finding sample size. The improvement that can be obtained with this method is fewer samples, and consequently lower cost in weapons testing, to meet a desired confidence level for a proportion or probability.
http://archive.org/details/bayesianmethodto00flor
Lieutenant Commander, Hellenic Navy
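The abstract does not state the exact decision rule, so the sketch below uses one plausible reading: with a Uniform[a,b] prior on the proportion, find the smallest sample size for which the central 95% posterior credible interval (computed on a grid) is narrower than a target half-width. The bounds, assumed hit rate and target width are illustrative.

```python
# Grid-based sketch of Bayesian sample-size determination with a
# Uniform[a, b] prior on a proportion p.  One plausible reading of the
# method; the exact decision rule in the thesis is not stated in the
# abstract, and all numbers are illustrative.
import numpy as np

def credible_halfwidth(n, k, a, b, grid_size=2001):
    p = np.linspace(a, b, grid_size)
    log_post = k * np.log(p) + (n - k) * np.log(1.0 - p)   # flat prior on [a,b]
    post = np.exp(log_post - log_post.max())
    post /= post.sum()
    cdf = np.cumsum(post)
    lo = p[np.searchsorted(cdf, 0.025)]
    hi = p[np.searchsorted(cdf, 0.975)]
    return (hi - lo) / 2.0

a, b = 0.6, 0.9          # prior bounds on the proportion (assumed)
p_guess = 0.75           # assumed hit probability used to pick k
target = 0.05            # required half-width of the 95% interval

n = 5
while credible_halfwidth(n, round(n * p_guess), a, b) > target:
    n += 5
print("samples needed:", n)
print("half-width at that n:",
      round(credible_halfwidth(n, round(n * p_guess), a, b), 3))
```

Tightening the prior bounds [a, b] shrinks the posterior before any data arrive, which is why the method can get away with fewer test shots than an unbounded prior would require.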
500

Making Sense of the Noise: Statistical Analysis of Environmental DNA Sampling for Invasive Asian Carp Monitoring Near the Great Lakes

Song, Jeffery W. 01 May 2017 (has links)
Sensitive and accurate detection methods are critical for monitoring and managing the spread of aquatic invasive species, such as invasive Silver Carp (SC; Hypophthalmichthys molitrix) and Bighead Carp (BH; Hypophthalmichthys nobilis) near the Great Lakes. A new detection tool called environmental DNA (eDNA) sampling, the collection and screening of water samples for the presence of the target species' DNA, promises improved detection sensitivity compared to conventional surveillance methods. However, the application of eDNA sampling for invasive species management has been challenging due to the potential for false positives, i.e. detecting a species' eDNA in the absence of live organisms. In this dissertation, I study the sources of error and uncertainty in eDNA sampling and develop statistical tools to show how eDNA sampling should be utilized for monitoring and managing invasive SC and BH in the United States. In chapter 2, I investigate the environmental and hydrologic variables, e.g. reverse flow, that may contribute to positive eDNA sampling results upstream of the electric fish dispersal barrier in the Chicago Area Waterway System (CAWS), where live SC are not expected to be present. I used a beta-binomial regression model, which showed that reverse flow volume across the barrier has a statistically significant positive relationship with the probability of SC eDNA detection upstream of the barrier from 2009 to 2012, while other covariates, such as water temperature, season and chlorophyll concentration, do not. This is a potential alternative explanation for why SC eDNA has been detected upstream of the barrier but intact SC have not. In chapter 3, I develop and parameterize a statistical model to evaluate how changes made to the US Fish and Wildlife Service (USFWS) eDNA sampling protocols for invasive BH and SC monitoring from 2013 to 2015 have influenced their sensitivity. The model shows that changes to the protocol have caused the sensitivity to fluctuate. Overall, when assuming that eDNA is randomly distributed, the sensitivity of the current protocol is higher for BH eDNA detection and similar for SC eDNA detection compared to the original protocol used from 2009 to 2012. When assuming that eDNA is clumped, the sensitivity of the current protocol is slightly higher for BH eDNA detection but worse for SC eDNA detection. In chapter 4, I apply the model developed in chapter 3 to estimate the BH and SC eDNA concentration distributions in two pools of the Illinois River where BH and SC are considered to be present, one pool where they are absent, and upstream of the electric barrier in the CAWS, given eDNA sampling data and knowledge of the eDNA sampling protocol used in 2014. The results show that the estimated mean eDNA concentrations in the Illinois River are highest in the invaded pools (La Grange; Marseilles) and lower in the uninvaded pool (Brandon Road). The estimated eDNA concentrations in the CAWS are much lower than the concentrations in the Marseilles pool, which indicates that the few eDNA detections in the CAWS (3% of samples positive for SC and 0.4% positive for BH) do not signal the presence of live BH or SC. The model shows that >50% of samples positive for BH or SC eDNA would be needed to infer Asian carp (AC) presence in the CAWS, i.e., estimated concentrations similar to those found in the Marseilles pool.
Finally, in chapter 5, I develop a decision tree model to evaluate the value of the information that monitoring provides for making decisions about BH and SC prevention strategies near the Great Lakes. The optimal prevention strategy depends on prior beliefs about the expected damage of an AC invasion, the probability of invasion, and whether or not BH and SC have already invaded the Great Lakes (which is informed by monitoring). Given no monitoring, the optimal strategy is to stay with the status quo of operating electric barriers in the CAWS for low probabilities of invasion and low expected invasion costs. However, if the probability of invasion is greater than 30% and the cost of invasion is greater than $100 million a year, the optimal strategy changes to installing an additional barrier in the Brandon Road pool. Greater risk aversion (i.e., aversion to monetary losses) causes less prevention (e.g., the status quo instead of additional barriers) to be preferred. Given monitoring, the model shows that monitoring provides value for making this decision only if the monitoring tool has perfect specificity (false positive rate = 0%).
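A small expected-cost calculation in the spirit of the chapter 5 decision tree, including the value of perfect monitoring information; the dollar figures, probabilities and risk reductions are hypothetical placeholders, not the dissertation's estimates.

```python
# Expected-cost sketch in the spirit of the chapter-5 decision tree.
# Dollar figures, probabilities and risk reductions are hypothetical
# placeholders, not the dissertation's estimates.

def expected_cost(strategy_cost, p_invasion, damage, risk_reduction):
    """Annualised strategy cost plus expected residual invasion damage."""
    return strategy_cost + p_invasion * (1.0 - risk_reduction) * damage

p_invasion = 0.3                      # prior probability carp establish
damage = 150e6                        # annual damage if they do ($)
strategies = {
    "status quo (electric barriers)": dict(strategy_cost=15e6, risk_reduction=0.7),
    "additional Brandon Road barrier": dict(strategy_cost=40e6, risk_reduction=0.95),
}

print("Without monitoring:")
for name, s in strategies.items():
    c = expected_cost(p_invasion=p_invasion, damage=damage, **s)
    print(f"  {name}: ${c / 1e6:.1f}M per year")

# Value of perfect monitoring information: decide after learning whether
# invasion is underway, choosing the cheaper action in each branch.
cost_if_invading = min(expected_cost(p_invasion=1.0, damage=damage, **s)
                       for s in strategies.values())
cost_if_not = min(expected_cost(p_invasion=0.0, damage=damage, **s)
                  for s in strategies.values())
with_info = p_invasion * cost_if_invading + (1 - p_invasion) * cost_if_not
without_info = min(expected_cost(p_invasion=p_invasion, damage=damage, **s)
                   for s in strategies.values())
print(f"Expected cost with perfect information: ${with_info / 1e6:.1f}M")
print(f"Expected value of perfect information: ${(without_info - with_info) / 1e6:.1f}M")
```

With imperfect specificity, false positives would sometimes trigger the costlier action when no invasion is underway, eroding this value, which reflects the dissertation's point about requiring a zero false positive rate.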
