161

Stochastic approach for active and reactive power management in distribution networks

Zubo, Rana H.A., Mokryani, Geev, Rajamani, Haile S., Abd-Alhameed, Raed, Hu, Yim Fun 02 1900 (has links)
Yes / In this paper, a stochastic method is proposed to assess the amount of active and reactive power that can be injected into or absorbed from the grid within a distribution market environment. The impact of wind power penetration on the active and reactive distribution locational marginal prices is also investigated. Market-based active and reactive optimal power flow is used to maximize social welfare while considering uncertainties related to wind speed and load demand. The uncertainties are modeled with a scenario-based approach. The proposed model is examined on the 16-bus UK generic distribution system. / Supported by the Higher Education Ministry of the Iraqi government.
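A minimal sketch of the scenario-based idea behind this kind of stochastic assessment: uncertainty in wind output and load is represented by a small set of weighted scenarios, and a dispatch decision is scored by its probability-weighted (expected) social welfare. All numbers, the linear bid/offer prices, and the single-bus simplification are illustrative assumptions, not the paper's actual market-based optimal power flow model.

```python
import numpy as np

# Illustrative scenario set: each scenario pairs a wind-output level and a load
# level with a probability (values are made up for this sketch).
scenarios = [
    {"prob": 0.2, "wind_mw": 5.0, "load_mw": 12.0},
    {"prob": 0.5, "wind_mw": 8.0, "load_mw": 15.0},
    {"prob": 0.3, "wind_mw": 12.0, "load_mw": 18.0},
]

def social_welfare(dispatch_mw, scenario, demand_bid=80.0, gen_offer=45.0):
    """Consumer benefit minus generation cost for one scenario (linear bids/offers)."""
    served = min(scenario["load_mw"], dispatch_mw + scenario["wind_mw"])
    conventional = max(served - scenario["wind_mw"], 0.0)
    return demand_bid * served - gen_offer * conventional

def expected_welfare(dispatch_mw):
    """Probability-weighted welfare over the scenario set."""
    return sum(s["prob"] * social_welfare(dispatch_mw, s) for s in scenarios)

# Crude grid search over the conventional dispatch decision (a stand-in for the
# market-based optimal power flow solved in the paper).
grid = np.linspace(0.0, 20.0, 201)
best = max(grid, key=expected_welfare)
print(f"dispatch {best:.2f} MW, expected welfare {expected_welfare(best):.1f}")
```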
162

A Bilevel Approach to Resource Allocation for Utility-Based Request-Response Systems

Sundwall, Tanner Jack 08 May 2024 (has links) (PDF)
We present a novel bilevel programming formulation that aims to solve a resource allocation problem for request-response systems. Our formulation is motivated by potential inefficiencies in the allocation of computational resources to incoming user requests in such systems. In our experience, systems often operate with a surplus of resources despite potentially incurring unjustifiable cost. Our work attempts to optimize the tradeoff between the financial cost of resources and the opportunity cost of unfulfilled user demand. Our bilevel formulation consists of an upper problem which has a constraint value appearing in the lower problem. We derive efficient methods for finding global solutions to the upper problem in two settings: first with logarithmic utility functions, and then with a particular type of sigmoidal utility function. A solution to the model we describe (1) determines the optimal number of total resources to allocate and (2) determines the optimal distribution of such resources across the set of user requests.
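For the logarithmic-utility setting mentioned above, the allocation subproblem has a classical closed form: maximizing a weighted sum of logarithms under a total-resource budget yields a proportional split. The sketch below illustrates that textbook result only; the weights, budget, and request classes are hypothetical, and the thesis's bilevel structure (choosing the budget itself) is not reproduced.

```python
def log_utility_allocation(weights, budget):
    """Maximize sum_i w_i * log(x_i) subject to sum_i x_i <= budget, x_i >= 0.
    The first-order conditions give the proportional split x_i = budget * w_i / sum(w)."""
    total = sum(weights)
    return [budget * w / total for w in weights]

# Example: three request classes competing for 100 units of compute.
print(log_utility_allocation([3.0, 1.0, 1.0], 100.0))  # [60.0, 20.0, 20.0]
```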
163

Studies in the Algorithmic Pricing of Information Goods and Services

Chhabra, Meenal 11 March 2014 (has links)
This thesis contributes to the algorithmic pricing literature by proposing and analyzing techniques for automatically pricing digital and information goods in order to maximize profit in different settings. We also consider the effect on social welfare when agents use these pricing algorithms. The digital goods considered in this thesis are electronic commodities that have zero marginal cost and unlimited supply, e.g., iTunes apps. An information good, on the other hand, is an entity that bridges the knowledge gap about a product between the consumer and the seller when the consumer cannot accurately assess the utility of owning that product; e.g., Carfax provides vehicle history reports that a potential buyer can use to learn about a vehicle. With the emergence of e-commerce, customers are increasingly price sensitive and search for the best opportunities anywhere. It is almost impossible to adjust prices manually under rapidly changing demand and competition. Moreover, online shopping platforms enable sellers to change prices easily and quickly, as opposed to updating price labels in brick-and-mortar stores, so sellers can also experiment with different prices to maximize their revenue. E-marketplaces have therefore created a need for sophisticated, practical pricing algorithms, a need that has evoked interest in algorithmic pricing in the computer science, economics, and operations research communities. In this thesis, we seek solutions to the following two algorithmic pricing problems. (1) In the first problem, a seller launches a new digital good (with unlimited supply and zero marginal cost) but is unaware of its demand in a posted-price setting (i.e., the seller quotes a price to a buyer, and the buyer decides whether to purchase based on her willingness to pay); we ask how the seller should set prices in order to maximize her infinite-horizon discounted revenue. This is a classic problem of learning while earning. We propose several algorithms for this problem and demonstrate their effectiveness through rigorous empirical tests on both synthetic datasets and real-world datasets from auctions at eBay and Yahoo! and from ratings of jokes on Jester, an online joke recommender system. We also show that under certain conditions the myopic Bayesian strategy is Bayes-optimal. Moreover, this strategy has finite regret (independent of time), which means that it also learns very fast. (2) The second problem is based on search markets: a consumer searches for a product sequentially (i.e., she examines possible options one by one and, upon observing each, decides whether to buy). However, merely observing a good, although partially informative, does not typically provide the potential purchaser with the complete information set necessary to execute her buying decision. This lack of perfect information about the good creates a market for intermediaries (we refer to them as experts) who can conduct research on behalf of the buyer and sell her this information about the good. The consumer can pay these intermediaries to learn more about the good, which can help her make a better decision about whether to buy it. For this setting, we study various pricing schemes for these information intermediaries in a search-based environment (e.g., selling a package of k reports instead of a single report, or offering a subscription-based service).
We show how subsidies can be an effective tool for a market designer to increase social welfare. We also model the quality level of the experts and study competition dynamics by computing equilibrium strategies for the searcher and two experts of different qualities. Surprisingly, an improvement in the quality of the higher-quality expert (holding everything else constant) can sometimes be Pareto-improving: not only does that expert's profit increase, but so do the other expert's profit and the searcher's utility. / Ph. D.
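As a loose illustration of the "learning while earning" posted-price problem, the sketch below runs Thompson sampling over a discrete price grid with Beta priors on the purchase probability at each price. This is a generic strategy used only for exposition, not the myopic Bayesian policy analyzed in the thesis, and the demand curve and price grid are invented.

```python
import random

prices = [1.0, 2.0, 3.0, 4.0]           # candidate posted prices
alpha = [1.0] * len(prices)             # Beta prior successes (purchases)
beta = [1.0] * len(prices)              # Beta prior failures (rejections)

def true_accept_prob(p):                # hidden demand curve, unknown to the seller
    return max(0.0, 1.0 - 0.22 * p)

revenue = 0.0
for t in range(10_000):
    # Thompson sampling: sample an acceptance probability per price, then quote
    # the price with the highest sampled expected revenue.
    sampled = [p * random.betavariate(a, b) for p, a, b in zip(prices, alpha, beta)]
    i = max(range(len(prices)), key=lambda k: sampled[k])
    sold = random.random() < true_accept_prob(prices[i])
    if sold:
        alpha[i] += 1
        revenue += prices[i]
    else:
        beta[i] += 1

print(f"revenue after 10k buyers: {revenue:.0f}")
```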
164

Three Essays on Price Analysis of Summer Flounder and China's Soybean Imports

Chen, Wei 07 August 2009 (has links)
This dissertation contains three papers from two projects. The first two papers (Chapter Two and Chapter Three) come from a project entitled “Managing Flounder Openings for Maximum Revenue.” The objectives of this project are to (1) estimate the monthly dockside price of summer flounder and identify seasonality in this price; and (2) set up a mathematical programming model that maximizes landing revenue by allocating the federal government quota on summer flounder across twelve months. In the first paper (Chapter Two), various forms of inverse demand equations are used to estimate the dockside price of summer flounder. These models are evaluated based on their out-of-sample forecasting performance, and a structural functional form is selected. In the second paper (Chapter Three), the selected price equation for summer flounder is applied in a revenue maximization model with both the federal government quota constraint and monthly biological constraints. The model is solved using the CONOPT solver in GAMS 21.5. The scenario results indicate that the industry should move landing effort from the October–February period to the March–August period. Compared with historical data, this approach could have increased summer flounder landing revenue by $44.73 million over 1991 to 2005. The third paper (Chapter Four) investigates how China's soybean import prices and domestic prices of soybeans and soybean products affect China's soybean imports. Since 2000, soybeans have been the leading U.S. agricultural export among bulk commodities, and China is the largest importer of U.S. soybeans. For China's soybean crushing industry, imported soybeans are inputs rather than final products and are used to produce soybean meal and oil. A differential production model, derived from a two-stage profit maximization model in producer theory, is adopted in this research. Estimates are used to calculate conditional and unconditional price elasticities for China's soybean imports from its major source countries: the United States, Argentina, and Brazil. In addition, the Divisia index and unconditional output price elasticities are obtained for China's soybean imports. Estimation results support the hypothesis that China's soybean imports are driven by its domestic demand for soybean meal rather than soybean oil. This implies that U.S. agribusinesses should pay attention to the dominant role of China's demand for soybean meal and animal feed. U.S. agribusinesses can also use the results of this research to evaluate how China's soybean imports from different source countries will change when either international market prices or China's domestic market prices change. / Ph. D.
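A hedged sketch of the quota-allocation structure in the second paper: with an assumed linear inverse demand for each month, monthly landings are chosen to maximize annual revenue subject to the federal quota. The intercepts, slopes, and quota below are placeholders, and the biological constraints and the estimated structural price equation from the dissertation are omitted.

```python
import numpy as np
from scipy.optimize import minimize

months = 12
a = np.array([3.2, 3.0, 2.6, 2.4, 2.3, 2.2, 2.3, 2.5, 2.8, 3.0, 3.1, 3.3])  # assumed intercepts ($/lb)
b = np.full(months, 0.0008)   # assumed slopes of the monthly inverse demand curves
quota = 11_000.0              # annual quota (thousand lb), illustrative

def neg_revenue(q):
    price = a - b * q                     # monthly dockside price from inverse demand
    return -np.sum(price * q)

cons = [{"type": "ineq", "fun": lambda q: quota - np.sum(q)}]   # stay within the quota
bounds = [(0.0, None)] * months
res = minimize(neg_revenue, x0=np.full(months, quota / months),
               bounds=bounds, constraints=cons)
print(np.round(res.x, 1), -res.fun)       # monthly landings and total revenue
```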
165

Network Anomaly Detection with Incomplete Audit Data

Patcha, Animesh 04 October 2006 (has links)
With the ever-increasing deployment and usage of gigabit networks, traditional anomaly-based network intrusion detection systems have not scaled accordingly. Most, if not all, deployed systems assume the availability of complete and clean data for the purpose of intrusion detection. We contend that this assumption is not valid. Factors such as noise in the audit data, mobility of the nodes, and the large amount of data generated by the network make it difficult to build a normal traffic profile of the network for the purpose of anomaly detection. From this perspective, the leitmotif of the research effort described in this dissertation is the design of a novel intrusion detection system that can detect intrusions with high accuracy even when complete audit data is not available. In this dissertation, we take a holistic approach to anomaly detection to address the threats posed by network-based denial-of-service attacks by proposing improvements in every step of the intrusion detection process. At the data collection phase, we have implemented an adaptive sampling scheme that intelligently samples incoming network data to reduce the volume of traffic sampled while maintaining the intrinsic characteristics of the network traffic. A Bloom-filter-based fast flow aggregation scheme is employed at the data pre-processing stage to further reduce the response time of the anomaly detection scheme. Lastly, this dissertation also proposes an expectation-maximization-based anomaly detection scheme that uses the sampled audit data to detect intrusions in the incoming network traffic. / Ph. D.
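The expectation-maximization element can be illustrated with a small sketch: fit a Gaussian mixture (via EM) to features of normal traffic and flag low-likelihood observations as anomalous. This is a generic stand-in, not the dissertation's full scheme with adaptive sampling and Bloom-filter flow aggregation; the features, data, and threshold are invented.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Toy "normal traffic" features, e.g. [packets/s, mean packet size] (illustrative).
normal = rng.normal(loc=[500, 800], scale=[50, 120], size=(2000, 2))
attack = rng.normal(loc=[5000, 60], scale=[300, 10], size=(20, 2))   # flood-like burst

profile = GaussianMixture(n_components=2, random_state=0).fit(normal)  # EM fit
threshold = np.percentile(profile.score_samples(normal), 1)            # 1% tail of normal data

test = np.vstack([normal[:5], attack[:5]])
flags = profile.score_samples(test) < threshold   # True -> flagged as anomalous
print(flags)
```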
166

Robotic Search Planning In Large Environments with Limited Computational Resources and Unreliable Communications

Biggs, Benjamin Adams 24 February 2023 (has links)
This work is inspired by robotic search applications where a robot or team of robots is equipped with sensors and tasked to autonomously acquire as much information as possible from a region of interest. To accomplish this task, robots must plan paths through the region of interest that maximize the effectiveness of the sensors they carry. Receding horizon path planning is a popular approach to addressing the computationally expensive task of planning long paths because it allows robotic agents with limited computational resources to iteratively construct a long path by solving for an optimal short path, traversing a portion of the short path, and repeating the process until a receding horizon path of the desired length has been constructed. However, receding horizon paths do not retain the optimality properties of the short paths from which they are constructed and may perform quite poorly in the context of achieving the robotic search objective. The primary contributions of this work address the worst-case performance of receding horizon paths by developing methods of using terminal rewards in the construction of receding horizon paths. We prove that the proposed methods of constructing receding horizon paths provide theoretical worst-case performance guarantees. Our result can be interpreted as ensuring that the receding horizon path performs no worse in expectation than a given sub-optimal search path. This result is especially practical for subsea applications where, due to the use of side-scan sonar in search applications, search paths typically consist of parallel straight lines. Thus, for subsea search applications, our approach ensures that expected performance is no worse than the usual subsea search path, and it might be much better. The methods proposed in this work provide desirable lower-bound guarantees for a single robot as well as teams of robots. Significantly, we demonstrate that existing planning algorithms may be easily adapted to use our proposed methods. We present our theoretical guarantees in the context of subsea search applications and demonstrate the utility of our proposed methods through simulation experiments and field trials using real autonomous underwater vehicles (AUVs). We show that our worst-case guarantees may be achieved despite non-idealities such as sub-optimal short paths used to construct the longer receding horizon path and unreliable communication in multi-agent planning. In addition to theoretical guarantees, an important contribution of this work is to describe the specific implementation solutions needed to integrate these ideas for real-time operation on AUVs. / Doctor of Philosophy /
However, receding horizon paths do not retain the optimality properties of the short paths from which they are constructed and may perform quite poorly in the context of achieving the robotic search objective. The primary contributions of this work address the worst-case performance of receding horizon paths by developing methods of using terminal rewards in the construction of receding horizon paths. The methods proposed in this work provide desirable lower-bound guarantees for a single robot as well as teams of robots. We present our theoretical guarantees in the context of subsea search applications and demonstrate the utility of our proposed methods through simulation experiments and field trials using real autonomous underwater vehicles (AUVs). In addition to theoretical guarantees, an important contribution of this work is to describe the specific implementation solutions needed to integrate these ideas for real-time operation on AUVs.
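A skeleton of the receding horizon loop with a terminal reward, the mechanism the worst-case guarantee rests on: each candidate short path is scored by its own reward plus the value of a given fallback path (for example, a lawnmower survey) from its endpoint. The planner, reward functions, and toy 1-D usage below are placeholders, not the dissertation's actual planner.

```python
def receding_horizon(start, horizon, total_length, short_plans, reward, terminal_reward):
    """Build a long path by repeatedly solving short-horizon problems.

    short_plans(state, horizon) -> iterable of candidate short paths from `state`
    reward(path)                -> information gathered along `path`
    terminal_reward(end_state)  -> value of the fallback (e.g., lawnmower) path
                                   that could be followed from `end_state`
    """
    path, state = [], start
    while len(path) < total_length:
        best = max(
            short_plans(state, horizon),
            key=lambda p: reward(p) + terminal_reward(p[-1]),  # terminal term drives the bound
        )
        executed = best[: max(1, horizon // 2)]   # traverse only part of the short path
        path.extend(executed)
        state = executed[-1]
    return path

# Toy usage on a 1-D grid: candidate short paths are runs of left/right steps,
# the reward favors cells far from the origin, and the terminal reward is zero.
plans = lambda s, h: [[s + d * i for i in range(1, h + 1)] for d in (-1, 1)]
path = receding_horizon(0, horizon=4, total_length=12,
                        short_plans=plans,
                        reward=lambda p: sum(abs(c) for c in p),
                        terminal_reward=lambda s: 0)
print(path)  # marches outward: [-1, -2, ..., -12]
```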
167

Bayesian Integration and Modeling for Next-generation Sequencing Data Analysis

Chen, Xi 01 July 2016 (has links)
Computational biology currently faces challenges in a big data world with thousands of data samples across multiple disease types including cancer. The challenging problem is how to extract biologically meaningful information from large-scale genomic data. Next-generation sequencing (NGS) can now produce high-quality data at the DNA and RNA levels. However, in cells there exist many non-specific (background) signals that affect the detection accuracy of true (foreground) signals. In this dissertation work, under a Bayesian framework, we aim to develop and apply approaches to learn the distribution of genomic signals in each type of NGS data for reliable identification of specific foreground signals. We propose a novel Bayesian approach (ChIP-BIT) to reliably detect transcription factor (TF) binding sites (TFBSs) within promoter or enhancer regions by jointly analyzing the sample and input ChIP-seq data for one specific TF. Specifically, a Gaussian mixture model is used to capture both binding and background signals in the sample data, and background signals are modeled by a local Gaussian distribution that is accurately estimated from the input data. An Expectation-Maximization algorithm is used to learn the model parameters according to the distributions of binding signal intensity and binding locations. Extensive simulation studies and experimental validation both demonstrate that ChIP-BIT has significantly improved TFBS detection performance over conventional methods, particularly for weak binding signals. To infer cis-regulatory modules (CRMs) of multiple TFs, we propose a Bayesian integration approach, namely BICORN, to integrate ChIP-seq and RNA-seq data of the same tissue. Each TFBS identified from ChIP-seq data can be either a functional binding event mediating target gene transcription or a non-functional binding event. The functional bindings of a set of TFs usually work together as a CRM to regulate the transcription processes of a group of genes. We develop a Gibbs sampling approach to learn the distribution of CRMs (a joint distribution of multiple TFs) based on their functional bindings and target gene expression. The robustness of BICORN has been validated on simulated regulatory network and gene expression data with respect to different noise settings. BICORN is further applied to breast cancer MCF-7 ChIP-seq and RNA-seq data to identify CRMs functional in promoter or enhancer regions. In tumor cells, the normal regulatory mechanism may be interrupted by genome mutations, especially those somatic mutations that uniquely occur in tumor cells. Focusing on a specific type of genome mutation, structural variation (SV), we develop a novel pattern-based probabilistic approach, namely PSSV, to identify somatic SVs from whole genome sequencing (WGS) data. PSSV features a mixture model with hidden states representing different mutation patterns; PSSV can thus differentiate heterozygous and homozygous SVs in each sample, enabling the identification of those somatic SVs with a heterozygous status in the normal sample and a homozygous status in the tumor sample. Simulation studies demonstrate that PSSV outperforms existing tools. PSSV has been successfully applied to breast cancer patient WGS data for identifying somatic SVs of key factors associated with breast cancer development. In this dissertation research, we demonstrate the advantage of the proposed distributional learning-based approaches over conventional methods for NGS data analysis.
Distributional learning is a very powerful approach to gain biological insights from high quality NGS data. Successful applications of the proposed Bayesian methods to breast cancer NGS data shed light on underlying molecular mechanisms of breast cancer, enabling biologists or clinicians to identify major cancer drivers and develop new therapeutics for cancer treatment. / Ph. D.
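As a simplified stand-in for the foreground/background separation in ChIP-BIT, the sketch below fits a two-component 1-D Gaussian mixture by EM and calls a site "bound" when the high-mean component is the more probable explanation. The simulated intensities, and the omission of the input-derived local background and the binding-location model, are assumptions of this illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
# Simulated log read intensities: background around 2.0, binding sites around 5.0.
signal = np.concatenate([rng.normal(2.0, 0.6, 900), rng.normal(5.0, 0.8, 100)])

# Two-component 1-D Gaussian mixture fitted with EM.
pi, mu, sd = np.array([0.5, 0.5]), np.array([1.0, 6.0]), np.array([1.0, 1.0])
for _ in range(100):
    # E-step: posterior responsibility of each component for each site.
    dens = pi * np.exp(-0.5 * ((signal[:, None] - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: update mixture weights, means, and standard deviations.
    nk = resp.sum(axis=0)
    pi = nk / len(signal)
    mu = (resp * signal[:, None]).sum(axis=0) / nk
    sd = np.sqrt((resp * (signal[:, None] - mu) ** 2).sum(axis=0) / nk)

binding = resp[:, np.argmax(mu)] > 0.5    # call a site "bound" if the high-mean
print(mu.round(2), binding.sum())         # component is the more likely explanation
```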
168

Enhancements in Markovian Dynamics

Ali Akbar Soltan, Reza 12 April 2012 (has links)
Many common statistical techniques for modeling multidimensional dynamic data sets can be seen as variants of one (or multiple) underlying linear/nonlinear model(s). These statistical techniques fall into two broad categories of supervised and unsupervised learning. The emphasis of this dissertation is on unsupervised learning under multiple generative models. For linear models, this has been achieved by collective observations and derivations made by previous authors during the last few decades. Factor analysis, polynomial chaos expansion, principal component analysis, Gaussian mixture clustering, vector quantization, and Kalman filter models can all be unified as variations of unsupervised learning under a single basic linear generative model. Hidden Markov modeling (HMM), however, is categorized as unsupervised learning under multiple linear/nonlinear generative models. This dissertation is primarily focused on hidden Markov models (HMMs). In the first half of this dissertation we study enhancements to the theory of hidden Markov modeling. These include three branches: 1) a robust as well as closed-form parameter estimation solution to the expectation maximization (EM) process of HMMs for the case of elliptically symmetrical densities; 2) a two-step HMM, with a combined state sequence via an extended Viterbi algorithm for smoother state estimation; and 3) a duration-dependent HMM, for estimating the expected residency frequency of each state. The second half of the dissertation then studies three novel applications of these methods: 1) applications of Markov switching models to bifurcation theory in nonlinear dynamics; 2) a game theory application of HMMs, based on the fundamental theory of card counting and an example on the game of Baccarat; and 3) trust modeling and the estimation of trustworthiness metrics in cyber security systems via Markov switching models. With the duration-dependent HMM, we achieved a better estimate of the expected duration of stay in each regime. With the robust, closed-form solution to the EM algorithm, we achieved robustness against outliers in the training data set as well as higher computational efficiency in the maximization step of the EM algorithm. By means of the two-step HMM we achieved smoother probability estimation with higher likelihood than the standard HMM. / Ph. D.
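Since the two-step HMM extends the Viterbi recursion, a compact sketch of the standard algorithm may help; the transition, emission, and initial probabilities below are illustrative, and the dissertation's combined-state extension is not shown.

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Most likely hidden-state path for a discrete HMM (log-space recursion)."""
    T, N = len(obs), len(pi)
    logd = np.log(pi) + np.log(B[:, obs[0]])            # initial state scores
    back = np.zeros((T, N), dtype=int)
    for t in range(1, T):
        scores = logd[:, None] + np.log(A)              # scores[i, j]: best via i -> j
        back[t] = scores.argmax(axis=0)
        logd = scores.max(axis=0) + np.log(B[:, obs[t]])
    path = [int(logd.argmax())]
    for t in range(T - 1, 0, -1):                       # trace back the best predecessors
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Two regimes, three observable symbols (numbers are made up for the sketch).
pi = np.array([0.6, 0.4])
A = np.array([[0.9, 0.1], [0.2, 0.8]])
B = np.array([[0.7, 0.2, 0.1], [0.1, 0.3, 0.6]])
print(viterbi([0, 0, 1, 2, 2], pi, A, B))
```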
169

Parsimonious, Risk-Aware, and Resilient Multi-Robot Coordination

Zhou, Lifeng 28 May 2020 (has links)
In this dissertation, we study multi-robot coordination in the context of multi-target tracking. Specifically, we are interested in the coordination achieved by means of submodular function optimization. Submodularity encodes the diminishing returns property that arises in multi-robot coordination. For example, the marginal gain of assigning an additional robot to track the same target diminishes as the number of robots assigned increases. The advantage of formulating coordination problems as submodular optimization is that a simple, greedy algorithm is guaranteed to give a good performance. However, often this comes at the expense of unrealistic models and assumptions. For example, the standard formulation does not take into account the fact that robots may fail, either randomly or due to adversarial attacks. When operating in uncertain conditions, we typically seek to optimize the expected performance. However, this does not give any flexibility for a user to seek conservative or aggressive behaviors from the team of robots. Furthermore, most coordination algorithms force robots to communicate at each time step, even though they may not need to. Our goal in this dissertation is to overcome these limitations by devising coordination algorithms that are parsimonious in communication, allow a user to manage the risk of the robot performance, and are resilient to worst-case robot failures and attacks. In the first part of this dissertation, we focus on designing parsimonious communication strategies for target tracking. Specifically, we investigate the problem of determining when to communicate and who to communicate with. When the robots use range sensors, the tracking performance is a function of the relative positions of the robots and the targets. We propose a self-triggered communication strategy in which a robot communicates its own position with its neighbors only when a certain set of conditions are violated. We prove that this strategy converges to the optimal robot positions for tracking a single target and in practice, reduces the number of communication messages by 30%. When tracking multiple targets, we can reduce the communication by forming subsets of robots and assigning one subset to track a target. We investigate a number of measures for tracking quality based on the observability matrix and show which ones are submodular and which ones are not. For non-submodular measures, we show a greedy algorithm gives a 1/(n+1) approximation, if we restrict the subset to n robots. In optimizing submodular functions, a common assumption is that the function value is deterministic, which may not hold in practice. For example, the sensor performance may depend on environmental conditions which are not known exactly. In the second part of the dissertation, we design an algorithm for stochastic submodular optimization. The standard formulation for stochastic optimization optimizes the expected performance. However, the expectation is a risk-neutral measure. Instead, we optimize the Conditional Value-at-Risk (CVaR), which allows the user the flexibility of choosing a risk level. We present an algorithm, based on the greedy algorithm, and prove that its performance has bounded suboptimality and improves with running time. We also present an online version of the algorithm to adapt to real-time scenarios. In the third part of this dissertation, we focus on scenarios where a set of robots may fail naturally or due to adversarial attacks. 
Our objective is to track as many targets as possible, a submodular measure, assuming worst-case robot failures. We present both centralized and distributed resilient tracking algorithms to cope with centralized and distributed communication settings. We prove these algorithms give a constant-factor approximation of the optimum in polynomial running time. / Doctor of Philosophy / Today, robotics and autonomous systems have been increasingly used in various areas such as manufacturing, military, agriculture, medical sciences, and environmental monitoring. However, most of these systems are fragile and vulnerable to adversarial attacks and uncertain environmental conditions. In most cases, even if a part of the system fails, the entire system performance can be significantly undermined. As robots start to coexist with humans, we need algorithms that can be trusted under real-world, not just ideal, conditions. Thus, this dissertation focuses on enabling security, trustworthiness, and long-term autonomy in robotics and autonomous systems. In particular, we devise coordination algorithms that are resilient to attacks, trustworthy in the face of uncertain conditions, and allow the long-term operation of multi-robot systems. We evaluate our algorithms through extensive simulations and proof-of-concept experiments. Generally speaking, multi-robot systems form the "physical" layer of Cyber-Physical Systems (CPS), the Internet of Things (IoT), and smart cities. Thus, our research can find applications in the areas of connected and autonomous vehicles, intelligent transportation, communications and sensor networks, and environmental monitoring in smart cities.
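The greedy rule that underpins these submodular formulations is short enough to sketch: repeatedly pick the candidate robot placement with the largest marginal gain in targets covered, which carries the usual (1 - 1/e) guarantee for monotone submodular objectives under a cardinality constraint. The placements and targets below are toy data, and the resilient and risk-aware variants from the dissertation are not implemented.

```python
def greedy_coverage(candidates, budget):
    """Greedy maximization of a coverage (submodular) objective.

    candidates: dict mapping a robot/viewpoint id to the set of targets it covers.
    budget:     how many robots/viewpoints may be selected.
    """
    chosen, covered = [], set()
    for _ in range(budget):
        best = max(candidates, key=lambda c: len(candidates[c] - covered))
        if not candidates[best] - covered:      # no remaining marginal gain
            break
        chosen.append(best)
        covered |= candidates[best]
    return chosen, covered

# Toy instance: which targets each candidate robot placement can track.
placements = {"A": {1, 2, 3}, "B": {3, 4}, "C": {4, 5, 6}, "D": {1, 6}}
print(greedy_coverage(placements, budget=2))    # 'A' then 'C' covers {1, ..., 6}
```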
170

Optimal Sum-Rate of Multi-Band MIMO Interference Channel

Dhillon, Harpreet Singh 02 September 2010 (has links)
While the channel capacity of an isolated noise-limited wireless link is well understood, the same is not true for interference-limited wireless links that coexist in the same area and occupy the same frequency band(s). The performance of these wireless systems is coupled due to the mutual interference. One such wireless scenario is modeled as a network of simultaneously communicating node pairs and is generally referred to as an interference channel (IC). The problem of characterizing the capacity of an IC is one of the most interesting and long-standing open problems in information theory. A popular way of characterizing the capacity of an IC is to maximize the achievable sum-rate by treating interference as Gaussian noise, which is considered optimal in low-interference scenarios. While the sum-rate of the single-band SISO IC is relatively well understood, it is not so when the users have multiple bands and multiple antennas for transmission. Therefore, the study of the optimal sum-rate of the multi-band MIMO IC is the main goal of this thesis. The sum-rate maximization problem for these ICs is formulated and is shown to be quite similar to the one already known for single-band MIMO ICs. This problem is reduced to the problem of finding the optimal fraction of power to be transmitted over each spatial channel in each frequency band. The underlying optimization problem, being non-linear and non-convex, is difficult to solve analytically or by employing local optimization techniques. Therefore, we develop a global optimization algorithm by extending the Reformulation-Linearization Technique (RLT) based Branch and Bound (BB) strategy to find the provably optimal solution to this problem. We further show that the spatial and spectral channels are surprisingly similar in a multi-band multi-antenna IC from a sum-rate maximization perspective. This result is especially interesting because of the dissimilarity in the way the spatial and frequency channels affect the perceived interference. As a part of this study, we also develop some rules of thumb regarding optimal power allocation strategies in multi-band MIMO ICs in various interference regimes. Due to the recent popularity of Interference Alignment (IA) as a means of approaching capacity in an IC (in the high-interference regime), we also compare the sum-rates achievable by our technique to those achievable by IA. The results indicate that the proposed power control technique performs better than IA in the low and intermediate interference regimes. Interestingly, the performance of the power control technique improves further relative to IA with an increase in the number of orthogonal spatial or frequency channels. / Master of Science
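In the noise-limited special case (no cross-link interference), the per-channel power allocation that the thesis optimizes reduces to classical water-filling over the parallel spatial/spectral channels; a hedged sketch follows. The channel gains and power budget are illustrative, and the interference terms that motivate the RLT-based branch-and-bound algorithm are deliberately dropped.

```python
import numpy as np

def water_filling(gains, total_power, noise=1.0):
    """Allocate power across parallel channels to maximize sum log2(1 + g*p/noise)."""
    inv = noise / np.asarray(gains, dtype=float)     # "floor" height of each channel
    # Bisection on the water level mu: per-channel power is max(mu - floor, 0).
    lo, hi = inv.min(), inv.max() + total_power
    for _ in range(100):
        mu = 0.5 * (lo + hi)
        if np.maximum(mu - inv, 0.0).sum() > total_power:
            hi = mu
        else:
            lo = mu
    p = np.maximum(lo - inv, 0.0)
    return p, np.log2(1.0 + np.asarray(gains) * p / noise).sum()

powers, rate = water_filling(gains=[2.0, 1.0, 0.25], total_power=3.0)
print(powers.round(3), round(rate, 3))   # weak channel gets no power
```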
