161 |
Robust Large Margin Approaches for Machine Learning in Adversarial Settings
Torkamani, MohamadAli 21 November 2016 (has links)
Machine learning algorithms are designed to learn from data and to use data to perform predictions and analyses. Many agencies now use machine learning algorithms to deliver services and to perform tasks that used to be done by humans, including making high-stakes decisions. Making the right decision relies strongly on the correctness of the input data. This fact gives criminals a tempting incentive to deceive machine learning algorithms by manipulating the data fed to them. Yet traditional machine learning algorithms are not designed to be safe when confronted with unexpected inputs.
In this dissertation, we address the problem of adversarial machine learning; i.e., our goal is to build safe machine learning algorithms that are robust in the presence of noisy or adversarially manipulated data.
Many complex questions to which a machine learning system must respond have complex answers. Such outputs can have internal structure, with exponentially many possible values. Adversarial machine learning becomes even more challenging when the output to be predicted has a complex structure itself. A significant focus of this dissertation is therefore adversarial machine learning for predicting structured outputs.
We first develop a new algorithm that reliably performs collective classification: it jointly assigns labels to the nodes of graph data. The algorithm is robust to malicious changes that an adversary can make in the properties of the different nodes of the graph. The learning method is highly efficient and is formulated as a convex quadratic program. Empirical evaluations confirm that this technique not only secures the prediction algorithm in the presence of an adversary, but also generalizes better to future inputs, even when no adversary is present.
While our robust collective classification method is efficient, it is not applicable to generic structured prediction problems. We therefore next investigate parameter learning for robust structured prediction models. This method constructs regularization functions based on the limitations the adversary faces in altering the feature space of the structured prediction algorithm. The proposed regularization techniques secure the algorithm against adversarial data changes at little additional computational cost. In this dissertation, we prove that robustness to adversarial manipulation of the data is equivalent to a form of regularization for large-margin structured prediction, and vice versa, which confirms earlier results for simpler problems.
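The robustness-equals-regularization equivalence can be illustrated on the simplest large-margin case. The sketch below is an illustration, not the dissertation's structured-prediction formulation: it checks numerically that the worst-case hinge loss under an L-infinity-bounded feature perturbation equals the ordinary hinge loss plus an L1 penalty on the weights.

```python
import numpy as np

def hinge(w, x, y):
    # Ordinary hinge loss for a linear scorer.
    return max(0.0, 1.0 - y * np.dot(w, x))

def robust_hinge_closed_form(w, x, y, eps):
    # Worst-case hinge loss under any perturbation ||delta||_inf <= eps:
    # adversarial robustness shows up as an L1 penalty on the weights.
    return max(0.0, 1.0 - y * np.dot(w, x) + eps * np.abs(w).sum())

def robust_hinge_brute_force(w, x, y, eps):
    # The maximizing perturbation sits at a corner of the L-inf ball,
    # so enumerating all sign patterns in {-eps, +eps}^d is exact.
    d = len(x)
    worst = 0.0
    for bits in range(2 ** d):
        delta = np.array([eps if (bits >> i) & 1 else -eps for i in range(d)])
        worst = max(worst, hinge(w, x + delta, y))
    return worst

rng = np.random.default_rng(0)
w, x = rng.normal(size=4), rng.normal(size=4)
print(np.isclose(robust_hinge_closed_form(w, x, 1.0, 0.1),
                 robust_hinge_brute_force(w, x, 1.0, 0.1)))  # True
```

The same reduction, with feature-count constraints on the adversary, is what yields the structured-prediction regularizers discussed above.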
In practice, an ordinary adversary typically either lacks the computational power to design the optimal attack or lacks sufficient information about the learner's model to do so. It therefore often applies many random changes to the input in the hope of making a breakthrough. This implies that if we minimize the expected loss function under adversarial noise, we obtain robustness against such mediocre adversaries. Dropout training resembles this noise-injection scenario. Dropout was initially proposed as a regularization technique for neural networks; the procedure is simple: at each training iteration, randomly selected features are set to zero. We derive a regularization method for large-margin parameter learning based on dropout. Our method computes the expected loss function over all possible dropout values, resulting in a simple objective function that is efficient to optimize. We extend dropout regularization to non-linear kernels in several directions: we define dropout for the input space, the feature space, and individual input dimensions, and we introduce methods for approximate marginalization over the feature space, even when it is infinite-dimensional.
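The marginalization idea can be sketched on the squared loss, where the closed form is classical (the dissertation works with large-margin losses, so this is only an analogy): the expected loss under independent feature dropout splits into a loss on the scaled mean score plus a variance-shaped regularizer, which the code verifies by enumerating every mask.

```python
import numpy as np
from itertools import product

def expected_sq_loss_closed_form(w, x, y, keep):
    # E over masks m_i ~ Bernoulli(keep) of (y - w.(m*x))^2:
    # a loss on the scaled mean score plus a variance regularizer.
    mean_score = keep * np.dot(w, x)
    var_score = keep * (1.0 - keep) * np.sum((w * x) ** 2)
    return (y - mean_score) ** 2 + var_score

def expected_sq_loss_enumerated(w, x, y, keep):
    # Exact marginalization by enumerating all 2^d dropout masks.
    total = 0.0
    for mask in product([0.0, 1.0], repeat=len(x)):
        m = np.array(mask)
        prob = np.prod(np.where(m == 1.0, keep, 1.0 - keep))
        total += prob * (y - np.dot(w, m * x)) ** 2
    return total

w, x = np.array([1.0, -2.0]), np.array([0.5, 1.0])
print(np.isclose(expected_sq_loss_closed_form(w, x, 1.0, 0.8),
                 expected_sq_loss_enumerated(w, x, 1.0, 0.8)))  # True
```

The closed form makes the enumeration unnecessary, which is exactly why marginalized dropout is cheap to optimize.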
Empirical evaluations show that our techniques consistently outperform the baselines on different datasets.
|
162 |
Solution Methods for Multi-Objective Robust Combinatorial Optimization
Thom, Lisa 19 April 2018 (has links)
No description available.
|
163 |
Identification and Quantification of Important Voids and Pockets in Proteins
Raghavendra, G S January 2013 (has links) (PDF)
Many methods for analyzing both the physical and chemical behavior of proteins require information about their structure and stability. Various other parameters, such as energy functions, solvation, hydrophobic/hydrophilic effects, surface areas, and volumes, also play an important part in such analyses, and the contribution of cavities to these parameters is significant. Existing methods to compute and measure cavities are limited by the inherent inaccuracies in data acquired through X-ray crystallography and by uncertainties in the computed radii of atoms. We present a topological framework that enables robust computation and visualization of these structures. Given a fixed set of atoms, voids and pockets are represented as subsets of the weighted Delaunay triangulation of atom centers. A novel notion of (ε,π)-stable voids helps identify voids that remain stable even after perturbing the atom radii by a small value. An efficient method is described to compute these stable voids for a given input pair of values (ε,π). We also provide an implementation to visualize and explore (ε,π)-stable voids and to calculate properties such as the volumes and surface areas of the proteins and of their cavities.
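A crude 2D stand-in for the stability idea, assuming SciPy is available: the thesis works with weighted 3D Delaunay triangulations of atom centers, whereas here plain triangles and a probe-radius test with a perturbation margin substitute for the full (ε,π) machinery.

```python
import numpy as np
from scipy.spatial import Delaunay

def circumradius(pts):
    # Circumradius of a triangle given as a (3, 2) array, R = abc / (4A).
    a = np.linalg.norm(pts[1] - pts[0])
    b = np.linalg.norm(pts[2] - pts[1])
    c = np.linalg.norm(pts[0] - pts[2])
    s = 0.5 * (a + b + c)
    area = np.sqrt(max(s * (s - a) * (s - b) * (s - c), 0.0))
    return a * b * c / (4.0 * area)

def stable_void_simplices(points, probe, eps):
    # A simplex is a candidate void cell if a probe disk fits inside its
    # circumcircle, and "stable" if it still fits after shrinking the
    # allowance by eps -- a toy analogue of perturbing atom radii.
    tri = Delaunay(points)
    return [tuple(s) for s in tri.simplices
            if circumradius(points[s]) > probe + eps]

pts = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.1], [0.0, 1.0]])
voids = stable_void_simplices(pts, probe=0.5, eps=0.1)
```

In the real framework the filtration is over weighted (power-distance) cells rather than circumradii, but the shrink-by-eps stability test carries the same intuition.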
|
164 |
A dynamic heuristics approach for proactive production scheduling under robustness targets
Zahid, Taiba 19 June 2017 (has links) (PDF)
Over the past decades, research in operations management has focused on optimization strategies, in particular meta-heuristics, for the complex combinatorial problem of resource-constrained scheduling. In simple terms, this problem is NP-hard: its search space is so large that enumeration is computationally intractable, so exploring for optimal solutions requires techniques other than random search. Meta-heuristic algorithms start from one or more initial solutions and search the space for optima using deterministic data, and therefore retrieve a valid optimum only under the specified input conditions. These input conditions define a solution space for a theoretical world in which everything goes according to plan.
But change is inherent to the real world; one is faced with risks and uncertainties in everyday life. The present study explores solution methodologies in the face of uncertainties. The contributions of this thesis are two-fold.
As mentioned earlier, optimization strategies for exploring large solution spaces have been vigorously investigated in the past decade. Although it is well established that the improvement and performance of optimization strategies correlate strongly with the initial solutions, the literature in this area is not exhaustive and mostly focuses on the development of meta-heuristic algorithms such as genetic algorithms and particle swarm optimization. Initial solutions are typically developed through simulation-based strategies built on greedy rules and event-based simulation. However, the available commercial software is mostly modeled as a black box, provides little information about its internal processing, requires special architectures, and may disregard resource constraints. The present study discusses the multi-mode resource-constrained scheduling problem and proposes a simulation-based framework that provides a multi-pass heuristic method. The extended multi-mode formulation imitates the production floor more faithfully, in that a task can be performed by multiple resources with different qualifications. The performance of the proposed framework was analyzed using benchmark instances, and the behavior of different project networks and their characteristics was evaluated within the framework. In addition, the open framework helps determine particular characteristics of tasks in order to analyze and forecast their behavior in case of disruptions.
Traditional risk analysis techniques suggest slack-based measures for determining the efficiency of baseline schedules. The framework is further developed into a test bench for determining non-regular performance measures, termed robustness indicators, which correlate with delays in the multi-mode problem. Such measures can be used to assess the effectiveness of baseline schedules and to forecast their behavior under uncertainty. The outputs of these tests feed a modified objective function that combines makespan and robustness indicators into a bi-objective performance measure for testing the efficiency of the proposed heuristics. Furthermore, since these measures indicate the behavior of tasks under disruptions, they are used to determine the shape factors and buffer sizes for the development of a stochastic model. The analysis of project outcomes, performed through Monte Carlo simulations, supports the argument for partial buffer sizing when modeling activity duration estimates, rather than the extreme buffer approaches proposed via PERT-beta estimates.
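A minimal sketch of the serial schedule-generation scheme that underlies such multi-pass heuristics, restricted to a single renewable resource and a single mode (the thesis's multi-mode, multi-qualification setting is considerably richer; activity data and priority rules here are illustrative):

```python
def serial_sgs(durations, preds, demand, capacity, priority):
    # Serial schedule generation: take activities in priority order and
    # place each at the earliest precedence- and resource-feasible start.
    horizon = sum(durations.values())
    usage = [0] * (horizon + 1)           # resource units in use per period
    start, finish, done = {}, {}, set()
    order = sorted(durations, key=lambda a: priority[a])
    while len(done) < len(durations):
        for a in order:
            if a in done or any(p not in done for p in preds[a]):
                continue
            t = max((finish[p] for p in preds[a]), default=0)
            while any(usage[u] + demand[a] > capacity
                      for u in range(t, t + durations[a])):
                t += 1                     # shift right until resources fit
            for u in range(t, t + durations[a]):
                usage[u] += demand[a]
            start[a], finish[a] = t, t + durations[a]
            done.add(a)
            break                          # rescan in priority order
    return start, max(finish.values())

def multi_pass(durations, preds, demand, capacity, rules):
    # Multi-pass heuristic: run the SGS once per priority rule, keep the best.
    return min(serial_sgs(durations, preds, demand, capacity, r)[1]
               for r in rules)
```

Robustness indicators such as those studied above would then be computed on the resulting baselines, e.g. by re-simulating each schedule under sampled activity-duration disruptions.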
|
165 |
Distributionally robust unsupervised domain adaptation and its applications in 2D and 3D image analysis
Wang, Yibin 08 August 2023 (has links) (PDF)
Obtaining ground-truth label information from real-world data, along with uncertainty quantification, can be challenging or even infeasible. In the absence of labeled data for a given task, unsupervised domain adaptation (UDA) techniques have achieved notable success by learning transferable knowledge from labeled source-domain data and adapting it to unlabeled target-domain data, yet uncertainty remains a major concern under domain shift. Distributionally robust learning (DRL) is emerging as a high-potential technique for building reliable learning systems that are robust to distribution shifts. In this research, a distributionally robust unsupervised domain adaptation (DRUDA) method is proposed to enhance the generalization ability of machine learning models under input-space perturbations. The DRL-based UDA learning scheme is formulated as a min-max optimization problem that optimizes worst-case perturbations of the training source data. Our Wasserstein distributionally robust framework can reduce the shifts in the joint distributions across domains. The proposed DRUDA method has been tested on various benchmark datasets. In addition, a gradient-mapping-guided explainable network (GMGENet) is proposed to analyze 3D medical images for extracapsular extension (ECE) identification. DRUDA-enhanced GMGENet is evaluated, and experimental results demonstrate that the proposed DRUDA successfully improves transfer performance on target domains for the 3D image analysis task. This research enhances the understanding of distributionally robust optimization in domain adaptation and is expected to advance current unsupervised machine learning techniques.
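The min-max training scheme can be sketched as follows, with logistic loss and a quadratic transport penalty standing in for the Wasserstein constraint (all names and hyperparameters here are illustrative, not the paper's): an inner loop ascends on input perturbations to find the worst case, and an outer loop descends on the model weights against that worst case.

```python
import numpy as np

def grad_loss_w(w, X, y):
    # Gradient of the mean logistic loss with respect to the weights.
    s = 1.0 / (1.0 + np.exp(-y * (X @ w)))
    return -(X * (y * (1.0 - s))[:, None]).mean(axis=0)

def worst_case_X(w, X, y, gamma, steps=20, lr=0.1):
    # Inner maximization: perturb inputs to raise the loss, with a
    # quadratic penalty gamma * ||X' - X||^2 keeping X' close to X
    # (a Lagrangian surrogate for a Wasserstein ball).
    Xp = X.copy()
    for _ in range(steps):
        s = 1.0 / (1.0 + np.exp(-y * (Xp @ w)))
        g = -(w[None, :] * (y * (1.0 - s))[:, None]) - 2.0 * gamma * (Xp - X)
        Xp += lr * g
    return Xp

def dro_train(X, y, gamma=5.0, epochs=50, lr=0.5):
    # Outer minimization against the adversarially perturbed source data.
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        Xp = worst_case_X(w, X, y, gamma)
        w -= lr * grad_loss_w(w, Xp, y)
    return w
```

Larger gamma shrinks the allowed perturbation and recovers ordinary training; smaller gamma trades nominal accuracy for robustness to distribution shift.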
|
166 |
An Optimization-Based Framework for Designing Robust Cam-Based Constant-Force Compliant Mechanisms
Meaders, John Christian 11 June 2008 (has links) (PDF)
Constant-force mechanisms are mechanical devices that provide a near-constant output force over a prescribed deflection range. This thesis develops various optimization-based methods for designing robust constant-force mechanisms. The mechanisms that are the focus of this research comprise a cam and a compliant spring fixed at one end while making contact with the cam at the other end. This configuration has proven to be an innovative solution in several applications because of its simplicity in manufacturing and operation. In this work, several methods are introduced to design these mechanisms and to reduce their sensitivity to manufacturing uncertainties and frictional effects. The mechanism's sensitivity to these factors is critical in small-scale applications, where manufacturing variations can be large relative to the overall dimensions and frictional forces can be large relative to the output force. The methods are demonstrated on a small electrical contact on the order of millimeters in size. The method identifies a design whose output force is 98.20% constant over its operational deflection range. When this design is analyzed using a Monte Carlo simulation, the standard deviation in constant-force performance is 0.76%. Compared to a benchmark design from earlier research, this represents a 34% increase in constant-force performance and a reduction in the standard deviation of performance from 1.68%. When this optimal design is further evaluated to reduce frictional effects, a design is identified that shows a 36% reduction in frictional energy loss at the cost of 18.63% in constant-force performance.
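The Monte Carlo robustness evaluation can be sketched generically. The force profile below is a made-up surrogate, not the thesis's cam/spring model; only the methodology carries over: perturb the toleranced parameters, recompute the percent-constant metric, and report its spread.

```python
import numpy as np

def force_profile(k, r, x):
    # Illustrative surrogate for a cam-loaded spring: a linear spring k
    # deflected through a cam term of amplitude r (not the thesis's model).
    return k * (x + r * np.sin(np.pi * x))

def constancy(forces):
    # Percent-constant metric: 100% means a perfectly flat force curve.
    return 100.0 * (1.0 - (forces.max() - forces.min()) / forces.mean())

def monte_carlo_constancy(k, r, tol, n=2000, seed=1):
    # Sample manufacturing variation as relative Gaussian tolerances on
    # the design parameters and measure the spread of the constancy metric.
    rng = np.random.default_rng(seed)
    xs = np.linspace(0.2, 1.0, 50)       # operational deflection range
    vals = np.array([
        constancy(force_profile(k * (1 + rng.normal(0, tol)),
                                r * (1 + rng.normal(0, tol)), xs))
        for _ in range(n)])
    return vals.mean(), vals.std()
```

Note that the constancy metric is scale-invariant, so in this surrogate only the cam-shape parameter r (not the spring stiffness k) drives the spread; a robust-design optimizer would maximize the mean while penalizing the standard deviation.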
|
167 |
Robust Model Predictive Control for Marine Vessels
Andre do Nascimento, Allan January 2018 (has links)
This master thesis studies the design and implementation of a robust MPC controller for marine vessels on different tasks. A tube-based MPC is designed based on system linearization around the target point, guaranteeing local input-to-state stability of the respective linearized version of the original nonlinear system. The method is then applied to three different tasks: dynamic positioning, for which recursive feasibility of the nominal MPC is also guaranteed; speed-heading control; and trajectory tracking with the line-of-sight algorithm. Numerical simulations then demonstrate the technique's effectiveness.
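A toy version of the tube idea on a discrete double integrator, with an LQR gain obtained by Riccati iteration serving both as a stand-in for the nominal optimizer and as the ancillary feedback (the thesis's nonlinear vessel models and constrained MPC are far more involved; all numbers here are illustrative):

```python
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # double integrator, dt = 0.1
B = np.array([[0.0], [0.1]])

def dlqr(A, B, Q, R, iters=500):
    # Fixed-point Riccati iteration for the discrete-time LQR gain.
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

K = dlqr(A, B, np.eye(2), np.array([[1.0]]))

def simulate(x0, steps=300, dist=0.01, seed=0):
    # Tube-MPC flavour: a disturbance-free nominal state z is driven to the
    # target, while the ancillary law u = v - K (x - z) keeps the true,
    # disturbed state x inside a tube around the nominal trajectory.
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    z = x.copy()
    for _ in range(steps):
        v = -(K @ z)                      # nominal input
        u = v - K @ (x - z)               # ancillary correction
        z = A @ z + B @ v                 # nominal, undisturbed update
        w = rng.uniform(-dist, dist, size=2)
        x = A @ x + B @ u + w             # true plant with disturbance
    return x, z

x_final, z_final = simulate([1.0, 0.0])
```

The nominal state converges to the origin exactly, while the true state settles into a bounded neighborhood whose size scales with the disturbance bound.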
|
168 |
Robust and fuzzy logic approaches to the control of uncertain systems with applications
Zhou, Jun January 1992 (has links)
No description available.
|
169 |
Evacuation Distributed Feedback Control and Abstraction
Wadoo, Sabiha Amin 01 May 2007 (has links)
In this dissertation, we develop feedback control strategies that can be used for evacuating people. Pedestrian models are based on macroscopic or microscopic behavior. We use the macroscopic modeling approach, in which pedestrians are treated in an aggregate way and detailed individual interactions are overlooked. The models representing evacuation dynamics are based on the laws of conservation of mass and momentum and are described by nonlinear hyperbolic partial differential equations. As such, the system is distributed in nature.
We address the design of feedback control for these models in a distributed setting, where the problem of control and stability is formulated directly in the framework of partial differential equations. The control goal is to design feedback controllers that regulate the movement of people during evacuation and avoid jams and shocks. We design feedback controllers for both diffusion and advection, where the density of people diffuses as well as moves in a specified direction over time. To achieve this goal we initially assume that the control variables are unbounded; since unbounded controls are practically impossible, we then modify the controllers to take the effect of control saturation into account. We also discuss feedback control for these models in the presence of uncertainties, where the goal is to design controllers that minimize the effect of uncertainties on the movement of people during evacuation. The control design technique adopted in all these cases is feedback linearization, including backstepping for the higher-order two-equation models, Lyapunov redesign for uncertain models, and robust backstepping for two-equation uncertain models.
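The saturated-control case can be illustrated on a one-equation conservation-law model: a bounded feedback speed command, proportional to the mass still inside, drives the density out through one boundary. This is a numerical sketch with made-up gains, not the dissertation's controllers.

```python
import numpy as np

def evacuate(rho0, dx=0.1, dt=0.05, steps=400, k=2.0, vmax=1.5):
    # Saturated density feedback for rho_t + (v rho)_x = 0: the commanded
    # walking speed grows with the remaining occupancy but is clipped at
    # vmax (the control-saturation case discussed above). First-order
    # upwind scheme; people exit through the right boundary.
    rho = np.array(rho0, dtype=float)
    masses = []
    for _ in range(steps):
        m = rho.sum() * dx                 # total mass still inside
        masses.append(m)
        v = min(vmax, k * m)               # bounded feedback law
        assert v * dt / dx <= 1.0          # CFL stability condition
        flux = v * rho
        rho[1:] -= (dt / dx) * (flux[1:] - flux[:-1])
        rho[0] -= (dt / dx) * flux[0]      # no inflow at the left boundary
        # mass leaves the domain through flux at the right boundary
    return rho, masses

rho_final, masses = evacuate(np.ones(20))
```

Under the CFL condition the scheme keeps the density non-negative, and total occupancy decreases monotonically, which is the qualitative behavior the feedback design aims for.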
The work also addresses abstraction of the evacuation system, that is, obtaining models with fewer partial differential equations than the original. Feedback control design for a higher-level two-equation model is more difficult than for a lower-order one-equation model. It is therefore desirable to perform the control design for a simpler abstracted model and then transform the design back to the original model. / Ph. D.
|
170 |
Robust controller for delays and packet dropout avoidance in solar-power wireless network
Al-Azzawi, Waleed January 2013 (has links)
Solar Wireless Networked Control Systems (SWNCS) are a class of distributed control systems in which sensors, actuators, and controllers are interconnected via a wireless communication network. This setup has the benefits of low cost, flexibility, low weight, no wiring, and simple system diagnosis and maintenance. However, it also unavoidably introduces wireless network time delays and packet dropout into the design procedure. Solar power offers a clean energy source and enables the system to operate over long periods. SWNCS also offers a multi-service infrastructure solution for both developed and developing countries. The system provides wirelessly controlled lighting, a wireless communications network (Wi-Fi/WiMAX), CCTV surveillance, and wireless sensors for weather measurement, all powered by solar energy.
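The dropout issue can be illustrated with a minimal simulation: an unstable scalar plant whose control packets are lost with some probability, mitigated by holding the last received input at the actuator (a common baseline strategy; all parameters are illustrative, not from the thesis).

```python
import numpy as np

def run_loop(a=1.1, k=0.6, p_drop=0.1, steps=300, seed=0):
    # Unstable scalar plant x+ = a x + u controlled over a lossy network.
    # When a control packet is dropped, the actuator holds the last
    # received input, a simple mitigation for delay/dropout effects.
    rng = np.random.default_rng(seed)
    x, u_held = 1.0, 0.0
    trace = []
    for _ in range(steps):
        if rng.random() > p_drop:          # control packet delivered
            u_held = -k * x                # fresh state feedback
        x = a * x + u_held                 # plant update with (possibly stale) input
        trace.append(x)
    return np.array(trace)

trace = run_loop()
```

With a low dropout rate the loop still contracts on average; as p_drop grows, long runs of stale inputs overcorrect and degrade stability, which is exactly why dropout-aware robust controllers like those developed in this thesis are needed.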
|