491 |
A case study of disjunctive programming: Determining optimal motion trajectories for a vehicle by mixed-integer optimization. Jagstedt, Oskar; Vitell, Elias. January 2023.
This report considers an application of mixed-integer disjunctive programming (MIDP) in which a theoretical robot can jump from one point to another and the number of jumps is to be minimized. The robot is only able to jump to the north, south, east and west. Furthermore, the robot should also be able to navigate and jump around or across any potential obstacles on the way. The algorithm for solving this problem is set to terminate when the robot has reached a set of end coordinates. The goal of this report is to find a method for solving this problem and to investigate the time complexity of such a method. The problem is converted to big-M representation and solved numerically. Gurobi is the optimization solver used in this thesis. The model created and implemented with Gurobi yielded optimal solutions to problems of this form with varying complexity. For most of the cases tested, the time complexity appeared to be linear, but this is likely due to presolving performed by Gurobi before running the optimization. Further tests are needed to determine the time complexity of Gurobi's optimization algorithm for this specific type of problem.
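As an illustration of the big-M reformulation described above, the following is a minimal sketch in gurobipy, not the authors' actual model: the horizon, grid bounds, unit-length jumps, start and goal coordinates, and the single rectangular obstacle are all assumptions made for the example. The disjunction "the robot is outside the obstacle" is encoded with big-M constraints and one binary variable per obstacle side.

```python
# Hedged sketch of a big-M MIDP of the kind described above; all data are assumed.
import gurobipy as gp
from gurobipy import GRB

T = 20                               # assumed maximum number of steps
start, goal = (0, 0), (8, 6)         # assumed start and goal coordinates
xmin, xmax, ymin, ymax = 3, 5, 2, 4  # assumed rectangular obstacle
M = 100                              # big-M constant, larger than the coordinate range

m = gp.Model("jump_robot")
x = m.addVars(T + 1, lb=0, ub=10, vtype=GRB.INTEGER, name="x")
y = m.addVars(T + 1, lb=0, ub=10, vtype=GRB.INTEGER, name="y")
move = m.addVars(T, 5, vtype=GRB.BINARY, name="move")       # N, S, E, W, stay
side = m.addVars(T + 1, 4, vtype=GRB.BINARY, name="side")   # active obstacle side

m.addConstr(x[0] == start[0]); m.addConstr(y[0] == start[1])
m.addConstr(x[T] == goal[0]);  m.addConstr(y[T] == goal[1])

steps = [(0, 1), (0, -1), (1, 0), (-1, 0), (0, 0)]           # unit jumps plus "stay"
for t in range(T):
    m.addConstr(move.sum(t, "*") == 1)                       # exactly one action per step
    m.addConstr(x[t + 1] == x[t] + gp.quicksum(s[0] * move[t, k] for k, s in enumerate(steps)))
    m.addConstr(y[t + 1] == y[t] + gp.quicksum(s[1] * move[t, k] for k, s in enumerate(steps)))

# Big-M form of the disjunction "stay outside the obstacle":
# at every step at least one separating half-plane must be enforced.
for t in range(T + 1):
    m.addConstr(x[t] <= xmin - 1 + M * (1 - side[t, 0]))
    m.addConstr(x[t] >= xmax + 1 - M * (1 - side[t, 1]))
    m.addConstr(y[t] <= ymin - 1 + M * (1 - side[t, 2]))
    m.addConstr(y[t] >= ymax + 1 - M * (1 - side[t, 3]))
    m.addConstr(side.sum(t, "*") >= 1)

# Minimize the number of actual jumps (non-"stay" moves).
m.setObjective(gp.quicksum(move[t, k] for t in range(T) for k in range(4)), GRB.MINIMIZE)
m.optimize()
```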
|
492 |
Risk-Averse Bi-Level Stochastic Network Interdiction Model for Cyber-Security Risk Management. Bhuiyan, Tanveer Hossain. 10 August 2018.
This research presents a bi-level stochastic network interdiction model on an attack graph to enable a risk-averse, resource-constrained cyber network defender to optimally deploy security countermeasures to protect against attackers having an uncertain budget. This risk-averse conditional-value-at-risk model minimizes a weighted sum of the expected maximum loss over all scenarios and the expected maximum loss from the most damaging attack scenarios. We develop an exact algorithm to solve our model as well as several acceleration techniques to improve the computational efficiency. Computational experiments demonstrate that the application of all the acceleration techniques reduces the average computation time of the basic algorithm by 71% for 100-node graphs. Using metrics called mean-risk value of stochastic solution and value of risk-aversion, numerical results suggest that our stochastic risk-averse model significantly outperforms deterministic and risk-neutral models when 1) the distribution of the attacker budget is heavy right-tailed and 2) the defender is highly risk-averse.
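For orientation, a schematic form of such a weighted mean-risk objective is sketched below. The notation is assumed for this summary rather than taken from the dissertation: x is the defender's countermeasure decision, L(x, ω) is the maximum loss the attacker can achieve in scenario ω, λ is a risk-aversion weight, and CVaR is written in the standard Rockafellar-Uryasev form.

```latex
\min_{x \in X} \; (1-\lambda)\,\mathbb{E}_{\omega}\big[L(x,\omega)\big]
  \;+\; \lambda\,\mathrm{CVaR}_{\alpha}\big[L(x,\omega)\big],
\qquad
\mathrm{CVaR}_{\alpha}[L] \;=\; \min_{\eta \in \mathbb{R}} \;
  \eta + \frac{1}{1-\alpha}\,\mathbb{E}\big[(L-\eta)_{+}\big].
```

Setting λ = 0 recovers a risk-neutral model, while larger λ and α put more weight on the most damaging attack scenarios.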
|
493 |
An Optimized Resource Allocation Approach to Identify and Mitigate Supply Chain Risks using Fault Tree Analysis. Sherwin, Michael D. 10 August 2018.
Low volume high value (LVHV) supply chains such as airline manufacturing, power plant construction, and shipbuilding are especially susceptible to risks. These industries are characterized by long lead times and a limited number of suppliers that have both the technical know-how and manufacturing capabilities to deliver the requisite goods and services. Disruptions within the supply chain are common and can cause significant and costly delays. Although supply chain risk management and supply chain reliability are topics that have been studied extensively, most research in these areas focuses on high volume supply chains and few studies proactively identify risks. In this research, we develop methodologies to proactively and quantitatively identify and mitigate supply chain risks within LVHV supply chains. First, we propose a framework to model the supply chain system using fault-tree analysis based on the bill of material of the product being sourced. Next, we put forward a set of mathematical optimization models to proactively identify, mitigate, and resource at-risk suppliers in an LVHV supply chain with consideration for a firm's budgetary constraints. Lastly, we propose a machine learning methodology to quantify the risk of an individual procurement using multiple logistic regression and industry-available data, which can be used as the primary input to the fault tree when analyzing overall supply chain system risk. Altogether, the novel approaches proposed within this dissertation provide a set of tools for industry practitioners to predict supply chain risks, optimally choose which risks to mitigate, and make better informed decisions with respect to supplier selection and risk mitigation while avoiding costly delays due to disruptions in LVHV supply chains.
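As a rough illustration of the last step, the sketch below scores the delivery risk of individual procurements with multiple logistic regression; the fitted probabilities could then serve as basic-event probabilities in a fault tree. The file name, feature names, and label are hypothetical and are not taken from the dissertation.

```python
# Hedged sketch: risk scoring of procurements with multiple logistic regression.
import pandas as pd
from sklearn.linear_model import LogisticRegression

df = pd.read_csv("procurements.csv")            # hypothetical historical procurement data
features = ["lead_time_days", "supplier_on_time_rate", "order_value", "single_source"]
X, y = df[features], df["late_delivery"]        # label: 1 if the item arrived late

model = LogisticRegression(max_iter=1000).fit(X, y)
df["risk_probability"] = model.predict_proba(X)[:, 1]   # per-procurement risk estimate
print(df.sort_values("risk_probability", ascending=False).head())
```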
|
494 |
Transformation of Directed Acyclic Graphs into Kubernetes Deployments with Optimized Latency. Almgren, Robert; Lidekrans, Robin. January 2022.
In telecommunications, there is currently a lot of work being done to migrate to the cloud, and a lot of specialized hardware is being exchanged for virtualized solutions. One important part of telecommunication networks that is yet to be moved to the cloud is known as the base-band unit, which sits between the antennas and the core network. The base-band unit has very strict latency requirements, making it unsuitable for out-of-the-box cloud solutions. Ericsson is therefore investigating if cloud solutions can be customized in such a way that base-band unit functionality can be virtualized as well. One such customization is to describe the functionality of a base-band unit using a directed acyclic graph (DAG), and deploy it to a cloud environment using Kubernetes. This thesis sets out to take applications represented using a DAG and deploy them using Kubernetes in such a way that the network latency is reduced when compared to the deployment generated by the default Kubernetes scheduler. The problem of placing the applications onto the available hardware resources was formulated as an integer linear programming problem. The problem was then implemented using Pyomo and solved with the open-source solver GLPK to obtain an optimized placement. This placement was then used to generate a configuration file that could be used to deploy the applications using Kubernetes. A mock application was developed in order to evaluate the optimized placement. The evaluation carried out in this thesis shows that the optimized placement obtained from the solution could improve the average round-trip latency of applications represented using a DAG by up to 30% when compared to the default Kubernetes scheduler.
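A minimal sketch of this kind of placement ILP is given below, written in Pyomo and solved with GLPK as in the thesis. The application DAG, node latencies, and capacities are invented for the example, and the quadratic "both endpoints placed" term is linearized with an auxiliary binary variable; the actual model in the thesis may differ.

```python
# Hedged sketch: place DAG vertices onto cluster nodes to minimize total edge latency.
import pyomo.environ as pyo

apps = ["A", "B", "C"]                       # DAG vertices (assumed)
edges = [("A", "B"), ("B", "C")]             # DAG edges (assumed)
nodes = ["n1", "n2"]                         # cluster nodes (assumed)
lat = {("n1", "n1"): 0, ("n1", "n2"): 5,     # pairwise node latency (assumed units)
       ("n2", "n1"): 5, ("n2", "n2"): 0}
cap = {"n1": 2, "n2": 2}                     # pods per node (assumed)

m = pyo.ConcreteModel()
m.x = pyo.Var(apps, nodes, domain=pyo.Binary)            # app placed on node
m.y = pyo.Var(edges, nodes, nodes, domain=pyo.Binary)    # both edge endpoints placed

m.assign = pyo.Constraint(apps, rule=lambda m, a: sum(m.x[a, n] for n in nodes) == 1)
m.capacity = pyo.Constraint(nodes, rule=lambda m, n: sum(m.x[a, n] for a in apps) <= cap[n])

# Linearization: y[(a,b),u,v] must be 1 whenever a is placed on u and b on v.
def link(m, a, b, u, v):
    return m.y[(a, b), u, v] >= m.x[a, u] + m.x[b, v] - 1
m.link = pyo.Constraint(edges, nodes, nodes, rule=link)

m.obj = pyo.Objective(
    expr=sum(lat[u, v] * m.y[e, u, v] for e in edges for u in nodes for v in nodes),
    sense=pyo.minimize)

pyo.SolverFactory("glpk").solve(m)
placement = {a: n for a in apps for n in nodes if pyo.value(m.x[a, n]) > 0.5}
print(placement)
```

The resulting placement can then be translated into node-affinity rules or node selectors in a Kubernetes deployment manifest, which is the role of the configuration file mentioned above.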
|
495 |
Modeling the Head Effect in Hydropower River Systems using MILP and BLP Approaches. Larsson, Lina; Lindberg, Mikaela. January 2022.
With a fast-growing electricity demand and a larger proportion of intermittent energy sources follows a greater need for flexible and balancing sources of electricity, such as hydropower. Planning of hydropower production is considered a difficult problem to solve due to several nonlinearities, combinatorial properties and the fact that it is a large-scale system with spatial-temporal coupling. Optimization approaches are used for solving such problems, and a common simplification is to disregard the effect of head variation on the power output. This thesis presents two methods for modeling the head dependency in optimization models for hydropower river systems, the Triangulation method and the Bilinear method. The Triangulation method implements a three-dimensional interpolation technique called triangulation, using a MILP formulation. This is a commonly used method found in the literature. The Bilinear method is a novel approach that applies a piecewise bilinear approximation of the power production function, resulting in a BLP problem. In addition, a strategy is provided for selecting which hydropower stations to model as head dependent. The performance of the methods was evaluated on authentic test cases from Lule River and compared to results obtained by Vattenfall's current model without head dependency. The Triangulation method and the Bilinear method give higher accuracy, and are therefore considered more realistic, than the current model. Further, the results indicate that it is sufficient to include head dependence for a subset of stations since the error is significantly reduced. Mid- to long-term scenarios were solved with high accuracy when a subset of the stations was modeled as head dependent. Overall, the Bilinear method had a significantly shorter computational time than the Triangulation method.
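For context, the nonlinearity being approximated can be summarized with the generic hydropower production relation below; the notation is assumed here and is not taken from the thesis.

```latex
P_{i,t} \;\approx\; \eta_i\,\rho\,g\,Q_{i,t}\,H_{i,t},
\qquad
H_{i,t} \;=\; h^{\mathrm{fb}}_i\!\big(V_{i,t}\big) \;-\; h^{\mathrm{tw}}_i\!\big(Q_{i,t}+S_{i,t}\big),
```

where Q, S, and V denote discharge, spillage, and reservoir volume, and the head H is the difference between the forebay and tailwater levels. Neglecting the head effect amounts to fixing H; the Triangulation method instead interpolates the resulting power surface over a triangulated grid within a MILP, while the Bilinear method applies a piecewise bilinear approximation of the production function, giving a BLP.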
|
496 |
Design of a Mapping Algorithm for Delay Sensitive Virtual Networks. Ivaturi, Karthikeswar. 01 January 2012.
In this era of constant evolution of the Internet, Network Virtualization is a powerful platform for the existence of heterogeneous and customized networks on a shared infrastructure. Virtual network embedding is a pivotal step for network virtualization and also enables the usage of virtual network mapping techniques. The existing state-of-the-art mapping techniques address the issues relating to bandwidth, processing capacity and location constraints very effectively. However, due to the advancement of real-time and delay-sensitive applications on the Internet, there is a need to address the issue of delay in virtual network mapping techniques. Since none of the existing state-of-the-art mapping algorithms address this issue, in this thesis we address it using VHub-Delay and other mapping algorithms. Based on this study and its observations, we designed a new mapping technique that can address the issue of delay, and its effectiveness is validated by extensive simulations.
|
497 |
Virtual Network Mapping with Traffic Matrices. Wang, Cong. 01 January 2011.
Nowadays, Network Virtualization provides a new perspective for running multiple, relatively independent applications on the same physical network (the substrate network) with shared substrate resources. This approach is especially useful for researchers and investigators because it lowers the barrier to entry into the networking field. Within network virtualization, the Virtual Network Mapping (VNM) problem is one of the most important aspects for investigation. Over years of research, several efficient algorithms have been proposed to solve the VNM problem; however, most current mapping algorithms assume that the virtual network request topology is known or given by customers. In this thesis, a new VNM formulation based on traffic matrices is proposed. Using existing VNM benchmarks, we evaluate the mapping performance based on various metrics and, by comparing the new traffic-matrix-based VNM algorithm with existing ones, we discuss its advantages and shortcomings and propose optimizations to the new algorithm.
|
498 |
Modeling and Inference for Multivariate Time Series, with Applications to Integer-Valued Processes and Nonstationary Extreme Data. Guerrero, Matheus B. 04 1900.
This dissertation proposes new statistical methods for modeling and inference for two specific types of time series: integer-valued data and multivariate nonstationary extreme data. We rely on the class of integer-valued autoregressive (INAR) processes for the former, proposing a novel, flexible and elegant way of modeling count phenomena. As for the latter, we are interested in the human brain and its multi-channel electroencephalogram (EEG) recordings, a natural source of extreme events. Thus, we develop new extreme value theory methods for analyzing such data, whether in modeling the conditional extremal dependence for brain connectivity or clustering extreme brain communities of EEG channels. Regarding integer-valued time series, INAR processes are generally defined by specifying the thinning operator and either the innovations or the marginal distributions. The major limitations of such processes include difficulties deriving the marginal properties and justifying the choice of the thinning operator. To overcome these drawbacks, this dissertation proposes a novel approach for building an INAR model that offers the flexibility to prespecify both marginal and innovation distributions. Thus, the thinning operator is no longer subjectively selected but is rather a direct consequence of the marginal and innovation distributions specified by the modeler. Novel INAR processes are introduced following this perspective; these processes include a model with geometric marginal and innovation distributions (Geo-INAR) and models with bounded innovations. We explore the Geo-INAR model, which is a natural alternative to the classical Poisson INAR model. The Geo-INAR process has interesting stochastic properties, such as an MA($\infty$) representation, time reversibility, and closed forms for the $h$-th-order transition probabilities, which enable a natural framework to perform coherent forecasting. On the front of multivariate nonstationary extreme data, the focus lies on multi-channel epilepsy data. Epilepsy is a chronic neurological disorder affecting more than 50 million people globally. An epileptic seizure acts like a temporary shock to the neuronal system, disrupting normal electrical activity in the brain. Epilepsy is frequently diagnosed with EEGs. Current statistical approaches for analyzing EEGs use spectral and coherence analysis, which do not focus on extreme behavior in EEGs (such as bursts in amplitude), neglecting that neuronal oscillations exhibit non-Gaussian heavy-tailed probability distributions. To overcome this limitation, this dissertation proposes new approaches to characterize brain connectivity based on extremal features of EEG signals. Two extreme-valued methods to study alterations in the brain network are proposed. One method is Conex-Connect, a pioneering approach linking the extreme amplitudes of a reference EEG channel with the other channels in the brain network. The other method is Club Exco, which clusters multi-channel EEG data based on a spherical $k$-means procedure applied to the "pseudo-angles" derived from extreme amplitudes of EEG signals. Both methods provide new insights into how the brain network organizes itself during an extreme event, such as an epileptic seizure, in contrast to a baseline state.
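For reference, the classical INAR(1) recursion referred to above is, in standard notation (not specific to this dissertation):

```latex
X_t \;=\; \alpha \circ X_{t-1} \;+\; \epsilon_t,
\qquad
\alpha \circ X_{t-1} \;=\; \sum_{i=1}^{X_{t-1}} B_i, \quad
B_i \overset{\text{iid}}{\sim} \mathrm{Bernoulli}(\alpha),
```

with $\{\epsilon_t\}$ an i.i.d. count-valued innovation sequence independent of the thinning. In the classical construction the thinning operator (here binomial thinning) and either the innovation or the marginal law are chosen first; the approach proposed in the dissertation reverses this, prespecifying both the marginal and the innovation distributions so that the thinning operator follows from them.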
|
499 |
Optimization Approaches for Open-Locating Dominating Sets. Sweigart, Daniel Blair. 01 January 2019.
An Open Locating-Dominating Set (OLD set) is a subset of vertices in a graph such that every vertex in the graph has a neighbor in the OLD set and every vertex has a unique set of neighbors in the OLD set. This can also represent where sensors, capable of detecting an event occurrence at an adjacent vertex, could be placed such that one could always identify the location of an event by the specific vertices that indicated an event occurred in their neighborhood. By the open neighborhood construct, which differentiates OLD sets from identifying codes, a vertex is not able to report if it is the location of the event. This construct provides robustness over identifying codes and opens new applications such as disease carrier and dark actor identification in networks. This work explores various aspects of OLD sets, beginning with an Integer Linear Program for quickly identifying the optimal OLD set on a graph. As many graphs do not admit OLD sets, or there may be times when the total size of the set is limited by an external factor, a concept called maximum covering OLD sets is developed and explored. The coverage radius of the sensors is then expanded in a presentation of Mixed-Weight OLD sets, where sensors can cover more than just adjacent vertices. Finally, an application is presented to optimally monitor criminal and terrorist networks using OLD sets and related concepts to identify the optimal set of surveillance targets.
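A standard ILP formulation of the minimum OLD-set problem, of the kind such work typically starts from, can be written as follows (notation assumed: s_v = 1 if vertex v is in the OLD set, N(v) is the open neighborhood of v, and △ denotes symmetric difference):

```latex
\min \sum_{v \in V} s_v
\quad\text{s.t.}\quad
\sum_{u \in N(v)} s_u \;\ge\; 1 \;\;\; \forall v \in V,
\qquad
\sum_{u \in N(v)\,\triangle\,N(w)} s_u \;\ge\; 1 \;\;\; \forall v \ne w \in V,
\qquad
s_v \in \{0,1\}.
```

The first family of constraints enforces domination (every vertex hears at least one sensor) and the second enforces separation (no two vertices are heard by exactly the same set of sensors); the maximum covering and mixed-weight variants discussed above modify this base formulation.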
|
500 |
A Swarm of Salesman: Algorithmic Approaches to Multiagent Modeling. Amlie-Wolf, Alexandre. 11 July 2013.
No description available.
|