761

Biologically Inspired Cognitive Radio Engine Model Utilizing Distributed Genetic Algorithms for Secure and Robust Wireless Communications and Networking

Rieser, Christian James 22 October 2004 (has links)
This research focuses on developing a cognitive radio that could operate reliably in unforeseen communications environments like those faced by the disaster and emergency response communities. Cognitive radios may also offer the potential to open up secondary or complementary spectrum markets, effectively easing the perceived spectrum crunch while providing new competitive wireless services to the consumer. A structure and process for embedding cognition in a radio is presented, including discussion of how the mechanism was derived from the human learning process and mapped to a mathematical formalism called the BioCR. Results from the implementation and testing of the model in a hardware test bed and simulation test bench are presented, with a focus on rapidly deployable disaster communications. Research contributions include developing a biologically inspired model of cognition in a radio architecture, proposing that genetic algorithm operations could be used to realize this model, developing an algorithmic framework to realize the cognition mechanism, developing a cognitive radio simulation toolset for evaluating the behavior of the cognitive engine, and using this toolset to analyze the cognitive engine's performance in different operational scenarios. Specifically, this research proposes and details how the chaotic meta-knowledge search, optimization, and machine learning properties of distributed genetic algorithm operations could be used to map this model to a computable mathematical framework in conjunction with dynamic multi-stage distributed memories. The system formalism is contrasted with existing cognitive radio approaches, including traditionally brittle artificial intelligence approaches. The cognitive engine architecture and algorithmic framework are developed and introduced, including the Wireless Channel Genetic Algorithm (WCGA), Wireless System Genetic Algorithm (WSGA), and Cognitive System Monitor (CSM).
Experimental results show that the cognitive engine finds the best tradeoff between a host radio's operational parameters in changing wireless conditions, while the baseline adaptive controller only increases or decreases its data rate based on a threshold, often wasting usable bandwidth or expending excess power due to its inability to learn. Limitations of this approach include some situations where the engine did not respond properly due to sensitivity in algorithm parameters, exhibiting ghosting of answers and bouncing back and forth between solutions. Future research could be pursued to probe the limits of the engine's operation and investigate opportunities for improvement, including how best to configure the genetic algorithms and engine mathematics to avoid engine solution errors. Future research also could include extending the cognitive engine to a cognitive radio network and investigating implications for secure communications. / Ph. D.
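The parameter-tradeoff search that the WSGA performs can be sketched with a toy genetic algorithm. Everything below — the (power, rate) gene encoding, the fitness function, and the GA settings — is an illustrative assumption, not the BioCR's actual formulation:

```python
import random

random.seed(0)

# Hypothetical sketch: evolve (transmit_power, data_rate) genes toward a
# fitness that rewards usable throughput but penalizes wasted power, echoing
# the tradeoff the cognitive engine searches for in changing channels.

def fitness(genes, channel_snr):
    power, rate = genes
    # The chosen rate is only usable if the power supports it at this SNR.
    usable = rate if power * channel_snr >= rate else 0.0
    return usable - 0.1 * power  # reward throughput, penalize excess power

def evolve(channel_snr, pop_size=30, generations=40):
    pop = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda g: fitness(g, channel_snr), reverse=True)
        parents = pop[: pop_size // 2]          # elitist selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            child = ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)   # crossover
            if random.random() < 0.2:                        # mutation
                child = (child[0] + random.gauss(0, 0.5),
                         child[1] + random.gauss(0, 0.5))
            children.append(child)
        pop = parents + children
    return max(pop, key=lambda g: fitness(g, channel_snr))

best = evolve(channel_snr=1.5)
```

A threshold-based baseline, by contrast, would only step the rate up or down; the GA searches the joint (power, rate) space directly.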
762

Relational Outlier Detection: Techniques and Applications

Lu, Yen-Cheng 10 June 2021 (has links)
Nowadays, outlier detection has attracted growing interest. Unlike typical outlier detection problems, relational outlier detection focuses on detecting abnormal patterns in datasets that contain relational implications within each data point. Furthermore, unlike traditional outlier detection, which focuses only on numerical data, modern outlier detection models must be able to handle data of various types and structures. Detecting relational outliers should consider (1) Dependencies among different data types, (2) Data types that are not continuous or do not have ordinal characteristics, such as binary, categorical or multi-label, and (3) Special structures in the data. This thesis focuses on the development of relational outlier detection methods and real-world applications in datasets that contain non-numerical, mixed-type, and special structure data in three tasks, namely (1) outlier detection in mixed-type data, (2) categorical outlier detection in music genre data, and (3) outlier detection in categorized time series data. For the first task, existing solutions for mixed-type data mostly focus on computational efficiency, and their strategies are mostly heuristic-driven, lacking a statistical foundation. The proposed contributions of our work include: (1) Constructing a novel unsupervised framework based on a robust generalized linear model (GLM), (2) Developing a model that is capable of capturing large variances of outliers and dependencies among mixed-type observations, and designing an approach for approximating the analytically intractable Bayesian inference, and (3) Conducting extensive experiments to validate effectiveness and efficiency. For the second task, we extended and applied the modeling strategy to a real-world problem.
The existing solutions to the specific task are mostly supervised, and the traditional outlier detection methods only focus on detecting outliers by the data distributions, ignoring the input-output relation between the genres and the extracted features. The proposed contributions of our work for this task include: (1) Proposing an unsupervised outlier detection framework for music genre data, (2) Extending the GLM based model in the first task to handle categorical responses and developing an approach to approximate the analytically intractable Bayesian inference, and (3) Conducting experiments to demonstrate that the proposed method outperforms the benchmark methods. For the third task, we focused on improving the outlier detection performance in the second task by proposing a novel framework and expanded the research scope to general categorized time-series data. Existing studies have suggested a large number of methods for automatic time series classification. However, there is a lack of research focusing on detecting outliers from manually categorized time series. The proposed contributions of our work for this task include: (1) Proposing a novel semi-supervised robust outlier detection framework for categorized time-series datasets, (2) Further extending the new framework to an active learning system that takes user insights into account, and (3) Conducting a comprehensive set of experiments to demonstrate the performance of the proposed method in real-world applications. / Doctor of Philosophy / In recent years, outlier detection has been one of the most important topics in the data mining and machine learning research domain. Unlike typical outlier detection problems, relational outlier detection focuses on detecting abnormal patterns in datasets that contain relational implications within each data point. 
Detecting relational outliers should consider (1) Dependencies among different data types, (2) Data types that are not continuous or do not have ordinal characteristics, such as binary, categorical or multi-label, and (3) Special structures in the data. This thesis focuses on the development of relational outlier detection methods and real-world applications in datasets that contain non-numerical, mixed-type, and special structure data in three tasks, namely (1) outlier detection in mixed-type data, (2) categorical outlier detection in music genre data, and (3) outlier detection in categorized time series data. The first task aims at constructing a novel unsupervised framework, developing a model that is capable of capturing the normal pattern and the effects, and designing an approach for model fitting. In the second task, we further extended and applied the modeling strategy to a real-world problem in the music technology domain. For the third task, we expanded the research scope from the previous task to general categorized time-series data, and focused on improving the outlier detection performance by proposing a novel semi-supervised framework.
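The core intuition behind mixed-type outlier detection — score each record by how surprising its numeric and categorical parts are together — can be shown with a much cruder stand-in than the thesis's robust Bayesian GLM. The records, field names, and scoring rule below are all invented for illustration:

```python
import statistics

# Toy mixed-type records: one numeric field, one categorical field.
records = [
    {"income": 50, "segment": "A"},
    {"income": 52, "segment": "A"},
    {"income": 49, "segment": "B"},
    {"income": 51, "segment": "A"},
    {"income": 200, "segment": "C"},   # outlying in both fields at once
]

def outlier_scores(records, num_key, cat_key):
    values = [r[num_key] for r in records]
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values) or 1.0
    counts = {}
    for r in records:
        counts[r[cat_key]] = counts.get(r[cat_key], 0) + 1
    n = len(records)
    scores = []
    for r in records:
        numeric = abs(r[num_key] - med) / mad        # robust z-score
        categorical = 1.0 - counts[r[cat_key]] / n   # rarity of the label
        scores.append(numeric + categorical)
    return scores

scores = outlier_scores(records, "income", "segment")
```

The GLM-based model in the thesis additionally captures dependencies *between* the two fields, which this additive score ignores.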
763

Development and Use of a Spatially Accurate Polynomial Chaos Method for Aerospace Applications

Schaefer, John Anthony 24 January 2023 (has links)
Uncertainty is prevalent throughout the design, analysis, and optimization of aerospace products. When scientific computing is used to support these tasks, sources of uncertainty may include the freestream flight conditions of a vehicle, physical modeling parameters, geometric fidelity, numerical error, and model-form uncertainty, among others. Moreover, while some uncertainties may be treated as probabilistic, aleatory sources, other uncertainties are non-probabilistic and epistemic due to a lack of knowledge, and cannot be rigorously treated using classical statistics or Bayesian approaches. An additional complication for propagating uncertainty is that many aerospace scientific computing tools may be computationally expensive; for example, a single high-fidelity computational fluid dynamics solution may require several days or even weeks to complete. It is therefore necessary to employ uncertainty propagation strategies that require as few solutions as possible. The Non-Intrusive Polynomial Chaos (NIPC) method has grown in popularity in recent decades due to its ability to propagate both aleatory and epistemic parametric sources of uncertainty in a computationally efficient manner. While traditional Monte Carlo methods might require thousands to millions of function evaluations to achieve statistical convergence, NIPC typically requires tens to hundreds for problems with similar numbers of uncertain dimensions. Despite this efficiency, NIPC is limited in one important aspect: it can only propagate uncertainty at a particular point in a design space or flight envelope. For optimization or aerodynamic database problems that require uncertainty estimates at many more than one point, the use of NIPC quickly becomes computationally intractable. This dissertation introduces a new method entitled Spatially Accurate Polynomial Chaos (SAPC) that extends the original NIPC approach for the spatial regression of aleatory and epistemic parametric sources of uncertainty. 
Throughout the dissertation, the SAPC method is applied to various aerospace problems of interest. These include the regression of aerodynamic force and moment uncertainties throughout the flight envelope of a commercial aircraft, the design under uncertainty of a two-stream propulsive mixer device, and the robust design of a low-boom supersonic demonstrator aircraft. Collectively the results suggest that SAPC may be useful for a large variety of engineering applications. / Doctor of Philosophy / Uncertainty is prevalent throughout the design, analysis, and optimization of aerospace products. When scientific computer simulations are used to support these tasks, sources of uncertainty may include the speed of an aerospace vehicle, the direction of the wind, physical modeling constants or assumptions, and the vehicle shape, among others. As a result of these sources of uncertainty, assessments of vehicle performance are also uncertain. For example, if the speed of a vehicle is not known precisely, then computer simulations will predict a lift force which is also imprecisely known. A challenge when assessing the uncertainty in aerospace vehicle performance is that the computer simulations which predict performance may take a long time to run, even on state-of-the-art supercomputers. Traditional statistical methods may require thousands or millions of simulations for the prediction of uncertainty, which does not fit within the computational budget of most aerospace analyses. A newer method called Non-Intrusive Polynomial Chaos (NIPC) is more efficient, typically requiring only tens to hundreds of simulations; however, NIPC only provides uncertainty estimates at a single point in an aircraft flight envelope or design condition. In this dissertation, a new method called Spatially Accurate Polynomial Chaos (SAPC) is developed.
The SAPC method combines desirable features of NIPC with regression methods for an efficient estimation of uncertainty throughout a vehicle flight envelope or design space. Throughout the dissertation, the SAPC method is applied to various aerospace problems of interest. These include the regression of aerodynamic force and moment uncertainties throughout the flight envelope of a commercial aircraft, the design under uncertainty of a two-stream propulsive mixer device, and the robust design of a low-boom supersonic demonstrator aircraft. Collectively the results suggest that SAPC may be useful for a large variety of engineering applications.
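The single-point NIPC idea that SAPC builds on can be sketched in one dimension: project the response onto probabilists' Hermite polynomials by Gauss quadrature, then read the mean and variance off the coefficients. The toy response `f` stands in for an expensive CFD solver; this is the textbook Hermite chaos construction, not the dissertation's SAPC regression:

```python
import numpy as np
from numpy.polynomial import hermite_e as He

def f(x):
    return x ** 2  # stand-in for an expensive simulation of a Gaussian input

order = 4
x, w = He.hermegauss(order + 1)      # nodes/weights for weight exp(-x^2/2)
w = w / np.sqrt(2 * np.pi)           # normalize to the standard normal pdf

# Spectral coefficients c_k = E[f(X) He_k(X)] / k!  (E[He_k^2] = k!)
coeffs = []
fact = 1.0
for k in range(order + 1):
    if k > 0:
        fact *= k
    Hk = He.hermeval(x, [0] * k + [1])   # He_k evaluated at the nodes
    coeffs.append(np.sum(w * f(x) * Hk) / fact)

mean = coeffs[0]                     # PCE mean is the zeroth coefficient
fact = 1.0
variance = 0.0
for k in range(1, order + 1):
    fact *= k
    variance += coeffs[k] ** 2 * fact    # variance from higher modes
```

Only five "simulations" (the quadrature nodes) are needed here, versus thousands of Monte Carlo samples for comparable accuracy — the efficiency argument made above.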
764

Fuzzy Control for an Unmanned Helicopter

Kadmiry, Bourhane January 2002 (has links)
The overall objective of the Wallenberg Laboratory for Information Technology and Autonomous Systems (WITAS) at Linköping University is the development of an intelligent command and control system, containing vision sensors, which supports the operation of an unmanned air vehicle (UAV) in both semi- and full-autonomy modes. One of the UAV platforms of choice is the APID-MK3 unmanned helicopter, by Scandicraft Systems AB. The intended operational environment is over widely varying geographical terrain with traffic networks and vehicle interaction of variable complexity, speed, and density. The present version of the APID-MK3 is capable of autonomous take-off, landing, and hovering, as well as autonomously executing pre-defined point-to-point flight, the latter at low speed. This is enough for performing missions like site mapping, surveillance, and communications, but for the above-mentioned operational environment higher speeds are desired. In this context, the goal of this thesis is to explore the possibilities for achieving stable ‘‘aggressive’’ manoeuvrability at high speeds, and to test a variety of control solutions in the APID-MK3 simulation environment. The objective of achieving ‘‘aggressive’’ manoeuvrability concerns the design of attitude/velocity/position controllers which act on much larger ranges of the body attitude angles, by utilizing the full range of the rotor attitude angles. In this context, a flight controller should achieve tracking of curvilinear trajectories at relatively high speeds, in a manner robust with respect to external disturbances. Take-off and landing are not considered here since the APID-MK3 already has dedicated control modules that realize these flight modes. With this goal in mind, we present the design of two different types of flight controllers: a fuzzy controller and a controller based on a gradient descent method.
Common to both are model-based design, the use of nonlinear control approaches, and an inner- and outer-loop control scheme. The performance of these controllers is tested in simulation using the nonlinear model of the APID-MK3. / Report code: LiU-Tek-Lic-2002:11. The format of the electronic version of this thesis differs slightly from the printed one, due mainly to font compatibility; the figures and body of the thesis remain unchanged.
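The fuzzy-control idea can be illustrated with a minimal single-input rule base: triangular membership functions fuzzify a tracking error, and a weighted average of the rule outputs defuzzifies the control action. The memberships, rule base, and signal ranges below are invented for illustration and bear no relation to the thesis's actual APID-MK3 controllers:

```python
# Toy one-axis fuzzy controller: map a pitch-angle error to a control action.

def tri(x, a, b, c):
    """Triangular membership with peak at b and support [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_pitch_controller(error):
    # Rule base: negative error -> corrective action +1, near-zero -> hold,
    # positive error -> corrective action -1.
    rules = [
        (tri(error, -2.0, -1.0, 0.0), +1.0),
        (tri(error, -1.0,  0.0, 1.0),  0.0),
        (tri(error,  0.0,  1.0, 2.0), -1.0),
    ]
    num = sum(mu * action for mu, action in rules)
    den = sum(mu for mu, _ in rules)
    return num / den if den else 0.0   # weighted-average defuzzification
```

Overlapping memberships make the output vary smoothly with the error, which is what lets fuzzy controllers interpolate between discrete control rules.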
765

Compositional synthesis via convex optimization of assume-guarantee contracts

Ghasemi, Kasra 17 January 2023 (has links)
Ensuring constraint satisfaction in large-scale systems with hard constraints is vital in many safety-critical systems. The challenge is to design controllers that are efficiently synthesized offline, easily implementable online, and provide formal correctness guarantees. We take a divide-and-conquer approach to design controllers for reachability and infinite-time/finite-time constraint satisfaction control problems given large-scale interconnected linear systems with polyhedral constraints on states, controls, and disturbances. Such systems are made of small subsystems with coupled dynamics. Our goals are to design controllers that are i) fully compositional and ii) decentralized, such that online implementation requires only local state information. We treat the couplings among the subsystems as additional disturbances and use assume-guarantee (AG) contracts to characterize these disturbance sets. For each subsystem, we design and implement a robust controller locally, subject to its own constraints and contracts. Our main contribution is a method to derive the contracts via a novel parameterization, and a corresponding potential function that characterizes the distance to the correct composition of controllers and contracts, under which all contracts hold. We show that the potential function is convex in the contract parameters. This enables the subsystems to negotiate the contracts with the gradient information from the dual of their local synthesis optimization problems in a distributed way, facilitating compositional control synthesis that scales to large systems. We then incorporate Signal Temporal Logic (STL) specifications into our formulation. We develop a decentralized control method for a network of perturbed linear systems with dynamical couplings subject to STL specifications. We first transform the STL requirements into set containment problems, then we develop controllers to solve these problems.
The set containment requirements and parameterized contracts are added to the subsystems’ constraints. We introduce a centralized optimization problem to derive the contracts, reachability tubes, and decentralized closed-loop control laws. We show that, when the STL formula is separable with respect to the subsystems, the centralized optimization problem can be solved in a distributed way, which scales to large systems. We present formal theoretical guarantees on robustness of STL satisfaction. We present numerical examples, including scalability studies on systems with tens of thousands of dimensions, and case studies on applying our method to a distributed Model Predictive Control (MPC) problem in a power system. / 2024-01-16T00:00:00Z
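The assume-guarantee negotiation can be caricatured with two coupled scalar subsystems: each treats its neighbour's state as a bounded disturbance (the assumption) and tightens its own invariant bound (the guarantee) until the pair of contracts is consistent. The scalar dynamics and fixed-point iteration below are a deliberately tiny stand-in for the thesis's convex parameterization and dual-gradient negotiation:

```python
# Two coupled subsystems x_i+ = a*x_i + b*x_j + d, |d| <= w.
# Contract: if |x_j| <= r_j (assumption), then |x_i| <= r_i (guarantee).
a, b, w = 0.5, 0.2, 0.1

def tighten(r_other):
    # Smallest invariant bound given the neighbour's contract:
    # a*r + b*r_other + w <= r  =>  r = (b*r_other + w) / (1 - a)
    return (b * r_other + w) / (1 - a)

r1 = r2 = 10.0                # start from loose contracts
for _ in range(100):          # negotiate until the contracts stabilize
    r1, r2 = tighten(r2), tighten(r1)
```

The iteration is a contraction here (factor b/(1-a) = 0.4), so the contracts converge to the tightest mutually consistent pair; the thesis's potential function plays the analogous role of measuring the distance to such a consistent composition.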
766

Robust optimization for portfolio risk : a revisit of worst-case risk management procedures after Basel III award.

Özün, Alper January 2012 (has links)
The main purpose of this thesis is to develop methodological and practical improvements on robust portfolio optimization procedures. Firstly, the thesis discusses the drawbacks of classical mean-variance optimization models, and examines robust portfolio optimization procedures with CVaR and worst-case CVaR risk models by providing a clear presentation of the derivation of robust optimization models from a basic VaR model. For practical purposes, the thesis introduces an open source software interface called “RobustRisk”, which is developed for producing empirical evidence for the robust portfolio optimization models. The software, which performs Monte Carlo simulation and out-of-sample performance analysis for the portfolio optimization, is introduced using hypothetical portfolio data from selected emerging markets. In addition, the performance of robust portfolio optimization procedures is discussed by providing empirical evidence from advanced markets in the crisis period. Empirical results show that robust optimization with the worst-case CVaR model outperforms the nominal CVaR model in the crisis period. The empirical results encourage us to construct a forward-looking stress test procedure based on robust portfolio optimization under regime switches. For this purpose, a Markov chain process is embedded into the robust optimization procedure in order to stress the regime transition matrix. In addition, asset returns, volatilities, the correlation matrix and the covariance matrix can be stressed under pre-defined scenario expectations. An application is provided with a hypothetical portfolio representing an internationally diversified portfolio. The CVaR efficient frontier and corresponding optimized portfolio weights are achieved under regime switch scenarios. The research suggests that stressed-CVaR optimization provides a robust and forward-looking stress test procedure to comply with the regulatory requirements stated in Basel II and CRD regulations.
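The risk measure at the heart of these optimization models is easy to state on scenario data: CVaR at level alpha is the average of the worst (1 − alpha) fraction of losses, which is why it always sits at or beyond VaR. The scenario losses below are made up; this is only the underlying measure, not the RobustRisk tool or the thesis's optimization models:

```python
def cvar(losses, alpha=0.95):
    """Average of the worst (1 - alpha) fraction of scenario losses."""
    tail = sorted(losses, reverse=True)            # worst losses first
    k = max(1, int(round(len(losses) * (1 - alpha))))
    return sum(tail[:k]) / k

# Ten hypothetical scenario losses (positive = loss).
losses = [-0.02, 0.01, 0.03, -0.01, 0.08, 0.15, 0.00, -0.03, 0.05, 0.02]
tail_risk = cvar(losses, alpha=0.90)   # mean of the worst 10% of scenarios
```

In a worst-case CVaR model, this quantity is additionally maximized over an uncertainty set of return distributions before being minimized over portfolio weights.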
767

Recombinant Adenovirus Vaccines, A Comprehensive Investigation of T Cell Immunity / T Cell Biology of Recombinant Adenovirus Vaccines

Millar, James 07 1900 (has links)
Vaccination is arguably the most effective tool at our disposal to prevent the morbidity and mortality associated with infectious disease. However, there are currently several infectious diseases, notably HIV, malaria and tuberculosis, for which we do not possess effective vaccines. Further complicating matters, traditional methods to construct vaccines for these diseases have been unsuccessful. Advances in our understanding of adaptive immunity have demonstrated that vaccines for these diseases likely rely upon potent T cell immunity to be effective. Recombinant adenovirus (rAd) vectors have shown great promise as vaccination platforms since they are easily constructed, stable, well-tolerated and elicit robust T cell responses. The robust activity of rAd vectors based on the human serotype 5 virus (rHuAd5) in murine and simian models merits further investigation as a prototypic T cell vaccine. To this end, we have undertaken a comprehensive evaluation of T cell immunity following rAd vaccination. Our previous observations determined that the CD8+ T cell response produced by rHuAd5 vaccines displayed a prolonged effector phase that was associated with long-lived antigen presentation. We have further investigated the mechanisms underlying the maintenance of this memory population. Our results have revealed that the memory phenotype is not due to continual recruitment of naive CD8+ T cells. Rather, the sustained effector phenotype appears to depend upon prolonged expression of the antigen-encoding transgene from the rHuAd5 vector. Interestingly, transgene expression was only required for 60 days, after which point the memory population stabilized. Further investigation of the relationship between antigen structure and the CD8+ T cell response revealed that antigens which traffic through the ER produce a CD8+ T cell response that expands more rapidly and displays a more pronounced contraction phase than antigens which are produced within the cytosol.
While the exact mechanism underlying this phenomenon is not known, we suspect that pathways related to ER stress may be involved. Despite the more dramatic contraction phase associated with antigens that traffic through the ER, the memory phenotype was unchanged. Interestingly, the CD4+ T cell response was not influenced by antigen structure and displays a sharp contraction phase regardless of whether the antigen traffics through the ER or is produced in the cytosol. We further investigated the relationship between CD4+ T cell help and CD8+ T cell immunity produced by rHuAd5. Based on the partially-exhausted phenotype of the CD8+ T cells produced by rHuAd5 (diminished TNF-α production and little IL-2 production), we suspected that inadequate CD4+ T cell help may have been responsible. However, removal of CD4+ T cells did not further impair the CD8+ T cell response produced by rHuAd5. Rather, a lack of CD4+ T cell help only impacted the magnitude of the primary CD8+ T cell response generated by rHuAd5; the functionality of the CD8+ T cell population, including the ability to proliferate following secondary stimulation, was not affected by the absence of CD4+ T cells. Thus, although CD8+ T cell expansion following immunization with rHuAd5 is dependent upon the availability of CD4+ T cell help, the memory functions of the CD8+ T cell population appear to be independent of CD4+ T cell help. Finally, we compared the magnitude of the CD8+ T cell response produced by rHuAd5 and recombinant vaccinia virus (rVV). Our results demonstrated that the functionality of the early T cell response produced by both vectors was identical. However, the primary transgene-specific CD8+ T cell responses produced by rHuAd5 were significantly larger than those produced by rVV, because the vector-specific responses were negligible in the case of rAd but very strong following rVV inoculation.
This research has contributed to our understanding of T cell immunity following rAd immunization and will assist in the construction and implementation of future vaccines. / Thesis / Doctor of Philosophy (PhD)
768

Robust Models for Accommodating Outliers in Random Effects Meta Analysis: A Simulation Study and Empirical Study

Stacey, Melanie January 2016 (has links)
In traditional meta-analysis, a random-effects model is used to deal with heterogeneity and the random effect is assumed to be normally distributed. However, this can be problematic in the presence of outliers. One solution involves using a heavy-tailed distribution for the random effect to more adequately model the excess variation due to the outliers. Failure to consider an alternative approach to the standard in the presence of unusual or outlying points can lead to inaccurate inference. A heavy-tailed distribution is favoured because it has the ability to down-weight outlying studies appropriately, so the removal of a study need not be considered. In this thesis, the performance of the t-distribution and a finite mixture model are assessed as alternatives to the normal distribution through a comprehensive simulation study. The parameters varied are the average mean of the non-outlier studies, the number of studies, the proportion of outliers, the heterogeneity and the outlier shift distance from the average mean. The performance of the distributions is measured using bias, mean squared error, coverage probability, coverage width, Type I error and power. The methods are also compared through an empirical study of meta-analyses from The Cochrane Library (2008). The simulation showed that the performance of the alternative distributions is better than that of the normal distribution for a number of scenarios, particularly for extreme outliers and high heterogeneity. Generally, the mixture model performed quite well. The empirical study reveals that both alternative distributions are able to reduce the influence of the outlying studies on the overall mean estimate and thus produce more conservative p-values than the normal distribution. It is recommended that a practitioner consider the use of an alternative random-effects distribution in the presence of outliers because it is more likely to provide robust results. / Thesis / Master of Science (MSc)
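The standard normal random-effects baseline that the heavy-tailed alternatives are compared against can be sketched with the DerSimonian-Laird estimator: an outlying study both inflates the between-study variance and drags the pooled mean. The effect sizes and variances below are invented for illustration:

```python
# DerSimonian-Laird random-effects pooling (the normal baseline the thesis
# compares against), on made-up study effects with one outlier.

def dersimonian_laird(effects, variances):
    k = len(effects)
    w = [1.0 / v for v in variances]
    sw = sum(w)
    ybar = sum(wi * yi for wi, yi in zip(w, effects)) / sw
    q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, effects))
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - (k - 1)) / c)          # between-study variance
    wstar = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(wstar, effects)) / sum(wstar)
    return pooled, tau2

effects = [0.10, 0.12, 0.08, 0.11, 0.90]   # last study is an outlier
variances = [0.01] * 5
pooled, tau2 = dersimonian_laird(effects, variances)
```

With the outlier included the pooled estimate is pulled far from the 0.10 cluster; a t-distributed random effect would instead down-weight that study automatically.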
769

An Efficient Implementation of a Robust Clustering Algorithm

Blostein, Martin January 2016 (has links)
Clustering and classification are fundamental problems in statistics and machine learning, with a broad range of applications. A common approach is the Gaussian mixture model, which assumes that each cluster or class arises from a distinct Gaussian distribution. This thesis studies a robust, high-dimensional extension of the Gaussian mixture model that automatically detects outliers and noise, and a computationally efficient implementation thereof. The contaminated Gaussian distribution is a robust elliptical distribution that allows for automatic detection of "bad points", and is used to robustify the usual factor analysis model. In turn, the mixtures of contaminated Gaussian factor analyzers (MCGFA) algorithm allows high-dimensional, robust clustering, classification and detection of bad points. A family of MCGFA models is created through the introduction of different constraints on the covariance structure. A new, efficient implementation of the algorithm is presented, along with an account of its development. The fast implementation permits thorough testing of the MCGFA algorithm, and its performance is compared to two natural competitors: parsimonious Gaussian mixture models (PGMM) and mixtures of modified t factor analyzers (MMtFA). The algorithms are tested systematically on simulated and real data. / Thesis / Master of Science (MSc)
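The contaminated Gaussian idea behind MCGFA is compact enough to sketch in one dimension: each point is "good" with probability alpha, or "bad" with the same mean but an inflated variance, and the posterior responsibility identifies the bad points automatically. The parameter values below are illustrative, and the real model is multivariate with factor-analytic covariance:

```python
import math

def normal_pdf(x, mu, var):
    return math.exp(-((x - mu) ** 2) / (2 * var)) / math.sqrt(2 * math.pi * var)

def prob_good(x, mu=0.0, var=1.0, alpha=0.9, eta=25.0):
    """Posterior probability that x is a 'good' point under a 1-D
    contaminated Gaussian: alpha*N(mu, var) + (1-alpha)*N(mu, eta*var)."""
    good = alpha * normal_pdf(x, mu, var)
    bad = (1 - alpha) * normal_pdf(x, mu, eta * var)
    return good / (good + bad)
```

Points near the mean get responsibility near one; distant points are absorbed by the wide "bad" component rather than distorting the cluster's parameters, which is the robustness mechanism described above.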
770

Econometric Analysis of Firm-level Production Data

Kealey, John 11 1900 (has links)
In this dissertation, I explore a variety of methods for the econometric analysis of firm-level production data. Three distinct approaches are considered, namely i) proxy variable methods of controlling for unobservable productivity, ii) data envelopment techniques for estimating the boundary of a production set, and iii) stochastic frontier methods for estimating the productive inefficiency of firms. Much of the focus is on semiparametric and nonparametric estimators that allow for a highly flexible specification of the function that relates input combinations to output quantities. / Thesis / Doctor of Philosophy (PhD)
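The data envelopment intuition in approach (ii) can be sketched with a free-disposal-hull-style dominance check: a firm is off the estimated boundary if some other firm produces at least as much output with no more input. The firm data are invented, and the dissertation's estimators are considerably more refined than this pure dominance test:

```python
# Toy production data: firm -> (input, output).
firms = {
    "A": (2.0, 4.0),
    "B": (3.0, 5.0),
    "C": (4.0, 4.5),   # dominated by B: more input, less output
    "D": (5.0, 7.0),
}

def efficient_firms(firms):
    """Firms on the free-disposal-hull boundary (undominated firms)."""
    eff = []
    for name, (x, y) in firms.items():
        dominated = any(
            other != name and ox <= x and oy >= y and (ox, oy) != (x, y)
            for other, (ox, oy) in firms.items()
        )
        if not dominated:
            eff.append(name)
    return eff
```

Stochastic frontier methods, approach (iii), instead treat the distance to the boundary as a one-sided error term to be estimated rather than a deterministic dominance relation.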
