371 |
Investigating the empirical relationship between oceanic properties observable by satellite and the oceanic pCO₂ / Van der Walt, Marizelle. January 2011 (has links)
In this dissertation, the aim is to investigate the empirical relationship between the partial pressure of CO2 (pCO2) and other ocean variables in the Southern Ocean, using only a small percentage of the available data.
CO2 is one of the main greenhouse gases that contribute to global warming and climate change. The concentration of anthropogenic CO2 in the atmosphere, however, would have been much higher had some of it not been absorbed by oceanic and terrestrial sinks. The oceans both absorb CO2 from and release it to the atmosphere, and large regions of the Southern Ocean are expected to be a CO2 sink. However, measurements of CO2 concentration in the Southern Ocean are sparse, so accurate values for the sinks and sources cannot be determined. The sparse observations also make it difficult to develop accurate oceanic and ocean-atmosphere models of this part of the ocean.
In this dissertation, classical techniques are investigated to determine the empirical relationship between pCO2 and other oceanic variables using in situ measurements. Additionally, sampling techniques are investigated for making a judicious selection of a small percentage of the available data points from which an accurate empirical relationship can be developed.
Data from the SANAE49 cruise stretching between Antarctica and Cape Town are used in this dissertation.
The complete data set contains 6103 data points. The maximum pCO2 value in this stretch
is 436.0 μatm, the minimum is 251.2 μatm and the mean is 360.2 μatm. An empirical relationship is
investigated between pCO2 and the variables Temperature (T), chlorophyll-a concentration (Chl),
Mixed Layer Depth (MLD) and latitude (Lat). The methods are repeated with latitude alternately included and excluded as a variable. D-optimal sampling is used to select a small percentage of the available data for determining the empirical relationship. Least squares optimization is one method used to determine the empirical relationship. For 200 D-optimally sampled points, the pCO2 prediction with the fourth-order equation yields a Root Mean Square (RMS) error of 15.39 μatm (on the estimation of pCO2) with latitude excluded as a variable and an RMS error of 8.797 μatm with latitude included. Radial basis function (RBF) interpolation is another method used to determine the empirical relationship between the variables. RBF interpolation with 200 D-optimally sampled points yields an RMS error of 9.617 μatm with latitude excluded and an RMS error of 6.716 μatm with latitude included. Optimal scaling is applied to the variables in the RBF interpolation, yielding, for 200 D-optimally sampled points, an RMS error of 9.012 μatm with latitude excluded and an RMS error of 4.065 μatm with latitude included. / Thesis (MSc (Applied Mathematics))--North-West University, Potchefstroom Campus, 2012
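For illustration, a minimal sketch of the RBF-interpolation approach described above, using SciPy on synthetic stand-in data (the SANAE49 measurements are not reproduced here); the scaling weights, kernel choice, and plain random subsampling in place of D-optimal design are all assumptions:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)

# Synthetic stand-ins for the cruise data: columns are T, Chl, MLD, Lat.
X = rng.uniform([0.0, 0.05, 20.0, -70.0], [20.0, 1.5, 200.0, -35.0], size=(6103, 4))
pco2 = 360.0 + 2.5 * X[:, 0] - 30.0 * X[:, 1] + rng.normal(0.0, 5.0, 6103)

# Stand-in for optimal scaling: weight each variable before interpolation
# (the dissertation optimizes these weights; the values here are assumed).
scale = np.array([1.0, 50.0, 0.1, 1.0])

# Select a small subset of points. The dissertation uses 200 D-optimally
# sampled points; plain random sampling is a simple stand-in here.
idx = rng.choice(len(X), size=200, replace=False)

rbf = RBFInterpolator(X[idx] * scale, pco2[idx], kernel="thin_plate_spline")
rms = np.sqrt(np.mean((rbf(X * scale) - pco2) ** 2))
print(f"RMS error over all {len(X)} points: {rms:.3f} uatm")
```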
|
372 |
Bridge Management System with Integrated Life Cycle Cost Optimization / Elbehairy, Hatem. January 2007 (has links)
In recent years, infrastructure renewal has been a focus of attention in North America and around the world. Municipal and federal authorities are increasingly recognizing the need for life cycle cost analysis of infrastructure projects in order to facilitate proper prioritization and budgeting of maintenance operations. Several reports have highlighted the need to increase budgets with the goal of overcoming the backlog in maintaining infrastructure facilities. This situation is apparent in the case of bridge networks, which are considered vital links in the road network infrastructure. Because of harsh environments and increasing traffic volumes, bridges are deteriorating rapidly, rendering the task of managing this important asset a complex endeavour. While several bridge management systems (BMS) have been developed at the commercial and research level, they still have serious drawbacks, particularly in integrating bridge-level and network-level decisions, and handling extremely large optimization problems.
To overcome these problems, this study presents an innovative bridge management framework that considers both network-level and bridge-level decisions. The initial formulation of the proposed framework was limited to bridge deck management. The model has unique aspects: a deterioration model that uses optimized Markov chain matrices, a life cycle cost analysis that considers different repair strategies along the planning horizon, and a system that considers constraints such as budget limits and desirable improvement in network condition. To optimize repair decisions for large networks that mathematical programming techniques are incapable of handling, four state-of-the-art evolutionary algorithms are used: genetic algorithms, shuffled frog leaping, particle swarm, and ant colony optimization. These algorithms have been used to experiment on different problem sizes and formulations in order to determine the best optimization setup for further developments.
Based on the experiments using the framework for the bridge deck, an expanded framework is presented that considers multiple bridge elements (ME-BMS) in a much larger formulation that can include thousands of bridges. Experiments were carried out to examine the framework's performance on different numbers of bridges so that system parameters could be set to minimize the degradation in system performance as the number of bridges grows. The practicality of the ME-BMS was enhanced by the incorporation of two additional models: a user cost model that estimates the benefits gained in terms of user cost after the repair decisions are implemented, and a work zone user cost model that minimizes user cost in work zones by deciding the optimal work zone strategy (nighttime shifts, weekend shifts, or continuous closure) along with the traffic control plan that best suits the bridge configuration. To verify the ability of the developed ME-BMS to optimize repair decisions at both the network and project levels, a case study obtained from a transportation municipality was employed. Comparisons between the decisions provided by the ME-BMS and the municipality's decision-making policy indicated that the ME-BMS has great potential for optimizing repair decisions for bridge networks and for structuring the planning of transportation system maintenance, leading to cost savings and more efficient sustainability of the transportation infrastructure.
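To make the network-level optimization concrete, here is a minimal sketch of a genetic algorithm selecting repair actions under a budget constraint; the condition gains, costs, and penalty weight are toy assumptions, not the models from the thesis:

```python
import random

random.seed(0)
N_BRIDGES = 50                       # toy network size
ACTIONS = [0, 1, 2]                  # 0 = do nothing, 1 = repair, 2 = replace
COST = {0: 0.0, 1: 5.0, 2: 20.0}     # assumed unit costs
GAIN = {0: -1.0, 1: 2.0, 2: 5.0}     # assumed condition change per action
BUDGET = 120.0

def fitness(plan):
    """Network condition improvement, penalized for exceeding the budget."""
    cond = sum(GAIN[a] for a in plan)
    cost = sum(COST[a] for a in plan)
    return cond - 10.0 * max(0.0, cost - BUDGET)

pop = [[random.choice(ACTIONS) for _ in range(N_BRIDGES)] for _ in range(100)]
for _ in range(200):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:50]                             # elitist survivor selection
    children = []
    while len(children) < 50:
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, N_BRIDGES)       # one-point crossover
        child = a[:cut] + b[cut:]
        child[random.randrange(N_BRIDGES)] = random.choice(ACTIONS)  # mutation
        children.append(child)
    pop = parents + children

best = max(pop, key=fitness)
print("fitness:", fitness(best), "cost:", sum(COST[a] for a in best))
```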
|
373 |
Isometry Registration Among Deformable Objects, A Quantum Optimization with Genetic Operator / Hadavi, Hamid. 04 July 2013 (has links)
Non-rigid shapes are generally known as objects whose three-dimensional geometry may be deformed by internal and/or external forces. Deformable shapes are all around us, ranging from protein molecules, to natural objects such as the trees in the forest or the fruits in our gardens, and even human bodies. Two deformable shapes may be related by isometry, which means their intrinsic geometries are preserved even though their extrinsic geometries are dissimilar. An important problem in the analysis of deformable shapes is to identify the three-dimensional correspondence between two isometric shapes, given that the two shapes may deviate from isometry through intrinsic distortions. A major challenge is that non-rigid shapes have large degrees of freedom in how they deform. Nevertheless, irrespective of how they are deformed, they may be aligned such that the geodesic distances between corresponding pairs of points on the two shapes are nearly equal. Such an alignment may be expressed by a permutation matrix (a matrix with binary entries) that matches every paired geodesic distance between the two shapes. The alignment involves searching the space of all possible mappings (that is, all permutations) to locate the one that minimizes the deviation from isometry. A brute-force search to locate the correspondence is not computationally feasible. This thesis introduces a novel approach created to locate such correspondences, in spite of the large solution space that encompasses all possible mappings and the presence of intrinsic distortion.
In order to find correspondences between two shapes, the first step is to create a suitable descriptor that accurately describes the deformable shapes. To this end, we developed deformation-invariant metric descriptors. A descriptor consists of the pairwise geodesic distances among an arbitrary number of discrete points that represent the topology of the non-rigid shape. Our descriptor provides an isometry-invariant representation of the shape irrespective of its circumstantial deformation. Two isometry-invariant descriptors, representing two candidate deformable shapes, are the input parameters to our optimization algorithm. We then proceed to locate the permutation matrix that aligns the two descriptors while minimizing the deviation from isometry.
Once we have developed such a descriptor, we turn our attention to finding correspondences between non-rigid shapes. In this study, we investigate the use of both classical and quantum particle swarm optimization (PSO) algorithms for this task. To explore the merits of the PSO variants, integer optimization on high-dimensional test functions was performed, and the results and analysis suggest that quantum PSO is a more effective optimization method than its classical counterpart. Further, a scheme is proposed to structure the solution space, composed of permutation matrices, in lexicographic ordering. The search in the solution space is accordingly simplified to integer optimization: finding the integer rank of the targeted permutation matrix. Empirical results suggest that this scheme improves the scalability of quantum PSO across large solution spaces. Yet quantum PSO's global search capability requires assistance in order to manoeuvre more effectively through the local extrema prevalent in large solution spaces. A mutation-based genetic algorithm (GA) is employed to augment the search diversity of quantum PSO when/if the swarm stagnates among the local extrema. The mutation-based GA instantly disengages the optimization engine from the local extrema in order to redirect the optimization effort onto trajectories that steer toward the global extremum, the targeted permutation matrix.
Our resultant optimization algorithm combines quantum PSO and a mutation-based GA. Empirical results show that the optimization method presented is scalable and efficient on standard hardware across different solution space sizes. The performance of the optimization method, in simulations and on various near-isometric shapes, is discussed. In all cases investigated, the method successfully identified the correspondence among non-rigid deformable shapes related by isometry.
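A minimal sketch of the lexicographic-ordering scheme described above: each permutation (hence each permutation matrix) is identified with an integer rank via the standard Lehmer code, so the search over matchings reduces to integer optimization over ranks. This illustrates the idea only, not the thesis's implementation:

```python
from math import factorial

def unrank(r, n):
    """Permutation of range(n) with lexicographic rank r (Lehmer code)."""
    items, perm = list(range(n)), []
    for i in range(n - 1, -1, -1):
        f = factorial(i)
        perm.append(items.pop(r // f))   # pick the (r // i!)-th remaining item
        r %= f
    return perm

def rank(perm):
    """Inverse of unrank: the lexicographic rank of a permutation."""
    n, r, items = len(perm), 0, sorted(perm)
    for i, p in enumerate(perm):
        r += items.index(p) * factorial(n - 1 - i)
        items.remove(p)
    return r

p = unrank(1000, 7)
assert rank(p) == 1000    # round-trip check
print(p)                  # [1, 3, 2, 5, 6, 0, 4]
```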
|
374 |
A Proactive Risk-Aware Robotic Sensor Network for Critical Infrastructure Protection / McCausland, Jamieson. 17 December 2013 (has links)
In this thesis a Proactive Risk-Aware Robotic Sensor Network (RSN) is proposed for the application of Critical Infrastructure Protection (CIP). Each robotic member of the RSN is granted a perception of risk by means of a Risk Management Framework (RMF). A fuzzy-risk model is used to extract distress-based and potential intrusion-based risk features for CIP. Detected high-risk events invoke a fuzzy-auction Multi-Robot Task Allocation (MRTA) algorithm to create a response group for each detected risk. Through Evolutionary Multi-Objective (EMO) optimization, a Pareto set of optimal robot configurations for a response group is generated using the Non-Dominated Sorting Genetic Algorithm II (NSGA-II). The optimization objectives are to maximize sensor coverage of essential spatial regions and to minimize the energy exerted by the response group. A set of non-dominated solutions is produced by the EMO optimization, from which a decision maker selects a single response. The RSN response group then re-organizes based on the specifications of the selected response.
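The two-objective trade-off can be illustrated with a minimal sketch that extracts the non-dominated (Pareto) front from candidate response-group scores; the random scores below are toy stand-ins for NSGA-II's actual evaluation of coverage and energy:

```python
import random

random.seed(1)
# (coverage_deficit, energy) pairs for 100 hypothetical response groups,
# both objectives to be minimized.
candidates = [(random.random(), random.random()) for _ in range(100)]

def dominates(a, b):
    """True if a is at least as good as b everywhere and strictly better once."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

pareto = [c for c in candidates
          if not any(dominates(other, c) for other in candidates if other != c)]

# The decision maker would pick one configuration from this front.
for deficit, energy in sorted(pareto):
    print(f"coverage deficit {deficit:.3f}  energy {energy:.3f}")
```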
|
375 |
Design optimization of a microelectromechanical electric field sensor using genetic algorithms / Roy, Mark. 24 September 2012 (has links)
This thesis studies the application of a multi-objective niched Pareto genetic algorithm to the design optimization of an electric field mill sensor. The original sensor requires resonant operation. The objective of the presented algorithm is to optimize the geometry so as to eliminate the need for resonant operation, which can be difficult to maintain in an unpredictable, changing environment. The algorithm evaluates each design using finite element simulations. A population of sensor designs is evolved towards an optimal Pareto frontier of solutions. Several candidate solutions are selected that offer superior displacement, frequency, and stress concentrations. These designs were modified for fabrication using the PolyMUMPs fabrication process but failed to operate due to the process. In order to fabricate the sensors in-house with a silicon-on-glass process, an anodic bonding apparatus has been designed, built, and tested.
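For illustration, a minimal sketch of fitness sharing, one common mechanism behind a "niched" Pareto GA: individuals in crowded regions of objective space have their fitness discounted so the population spreads along the front. The sharing radius and scores are assumed toy values, not drawn from the thesis:

```python
import math

def shared_fitness(raw, scores, i, sigma_share=0.1):
    """Discount individual i's raw fitness by its niche count."""
    niche = 0.0
    for s in scores:
        d = math.dist(scores[i], s)
        if d < sigma_share:
            niche += 1.0 - d / sigma_share   # triangular sharing kernel
    return raw / niche                       # niche >= 1 (self-distance is 0)

# Two crowded designs and one isolated one: the isolated design keeps more
# of its fitness, preserving diversity along the Pareto front.
scores = [(0.10, 0.90), (0.12, 0.88), (0.90, 0.10)]
for i in range(len(scores)):
    print(i, round(shared_fitness(1.0, scores, i), 3))
```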
|
376 |
Hermes: A Targeted Fuzz Testing Framework / Shortt, Caleb James. 12 March 2015 (has links)
The use of security assurance cases (security cases) to provide evidence-based
assurance of security properties in software is a young field in Software Engineering.
A security case uses evidence to argue that a particular claim is true. For example,
the highest-level claim may be that a given system is sufficiently secure, and it would
include sub-claims that break that general claim down into more granular, tangible
items, such as evidence or other claims. Random negative testing (fuzz testing) is
used as evidence to support security cases and the assurance they provide. Many
current approaches apply fuzz testing to a target system for a given amount of time
due to resource constraints. This may leave entire sections of code untouched [60].
These results may be used as evidence in a security case but their quality varies
based on controllable variables, such as time, and uncontrollable variables, such as
the random paths chosen by the fuzz testing engine.
This thesis presents Hermes, a proof-of-concept fuzz testing framework that provides improved evidence for security cases by automatically targeting problem sections in software and selectively fuzz-testing them in a repeatable and timely manner. In our experiments, Hermes produced results with target code coverage comparable to a full, exhaustive fuzz-test run while significantly reducing the execution time associated with an exhaustive fuzz test. These results provide a targeted piece of evidence for security cases which can be audited and refined for further assurance.
Hermes' design allows it to be easily attached to continuous integration frameworks
where it can be executed in addition to other frameworks in a given test suite. / Graduate / 0984 / cshortt@uvic.ca
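As an illustration of targeted fuzzing in this spirit, the following minimal sketch mutates inputs and keeps those that make progress toward a designated hard-to-reach branch; the toy target, mutation operator, and corpus policy are assumptions, not Hermes' actual design:

```python
import random

def target(data: bytes) -> str:
    """Toy system under test; returns which branch was exercised."""
    if len(data) > 3 and data[0] == 0x42:
        if data[1] ^ data[2] == 0xFF:
            return "deep"        # the hard-to-reach section being targeted
        return "shallow"
    return "reject"

def mutate(data: bytes) -> bytes:
    """Flip one byte and occasionally grow the input."""
    b = bytearray(data)
    b[random.randrange(len(b))] = random.randrange(256)
    if random.random() < 0.3:
        b.append(random.randrange(256))
    return bytes(b)

random.seed(0)
corpus = [b"\x42\x00\x00\x00"]       # seed input near the targeted section
hits = {"deep": 0, "shallow": 0, "reject": 0}
for _ in range(10_000):
    child = mutate(random.choice(corpus))
    branch = target(child)
    hits[branch] += 1
    if branch != "reject" and child not in corpus:
        corpus.append(child)         # keep inputs that make progress
print(hits)
```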
|
377 |
Novel Semi-Active Suspension with Tunable Stiffness and Damping Characteristics / Wong, Adrian Louis Kuo-Tian. January 2012 (has links)
For the past several decades there have been many attempts to improve suspension performance, owing to its importance within vehicle dynamics. The suspension system's main functions are to connect the chassis to the ground and to isolate the chassis from the ground. To improve upon these two functions, a large amount of effort is focused on the two elements that form the building blocks of the suspension system: stiffness and damping. With the advent of new technologies such as variable dampers, powerful microprocessors, and sensors, suspension performance can be enhanced beyond the traditional capabilities of a passive suspension system. Recently, Yin et al. [1, 2] developed a novel dual-chamber pneumatic spring that provides tunable stiffness characteristics, a rarity compared to the sea of tunable dampers. The purpose of this thesis is to develop a controller that exploits the novel pneumatic spring's functionality together with a tunable damper to improve vehicle dynamic performance.
Since the pneumatic spring is a slow-acting element (i.e. low bandwidth), the typical control logic for semi-active suspension systems is not practical for this framework. Most semi-active controllers assume the use of fast-acting (i.e. high bandwidth) variable dampers within the suspension design. In this case, a lookup table controller is used to manage the stiffness and damping properties over a wide range of operating conditions.
To determine the optimum stiffness and damping properties, optimization is employed. Four objective functions are used to quantify vehicle performance: ride comfort, rattle space (i.e. suspension deflection), handling (i.e. tire deflection), and undamped sprung mass natural frequency. The goal is to minimize the first three objectives while maximizing the last, to avoid motion sickness from 1 Hz downward. These goals cannot all be attained simultaneously, however, necessitating compromises between them. Using the optimization strength of genetic algorithms, a Pareto optimal set can be generated to determine the compromises between the normalized objective functions. Using a trade-off study, the stiffness and damping properties can be selected from the Pareto optimal set to suit each operating condition of the control logic.
When implementing the lookup table controller, a practical method is needed to recognize the road profile, since it cannot be determined directly; the unsprung mass RMS acceleration and the suspension state are utilized for this purpose. To alleviate the inherent flip-flopping drawback of lookup table controllers, a temporal deadband is employed, as sketched below.
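A minimal sketch of such a lookup table controller with a temporal deadband follows; the roughness thresholds, table entries, and dwell time are illustrative assumptions rather than the thesis's tuned values:

```python
# Table: assumed road-roughness class -> (stiffness setting, damping setting).
TABLE = {"smooth": (0.8, 0.3), "medium": (1.0, 0.5), "rough": (1.3, 0.8)}
DWELL = 0.5   # temporal deadband: minimum time (s) between state switches

def classify(rms_accel):
    """Map unsprung-mass RMS acceleration to an assumed roughness class."""
    if rms_accel < 0.5:
        return "smooth"
    return "medium" if rms_accel < 1.5 else "rough"

class LookupController:
    def __init__(self):
        self.state, self.last_switch = "medium", -DWELL

    def update(self, t, rms_accel):
        target = classify(rms_accel)
        # A switch is honoured only after the deadband has elapsed, which
        # suppresses flip-flopping near a classification boundary.
        if target != self.state and t - self.last_switch >= DWELL:
            self.state, self.last_switch = target, t
        return TABLE[self.state]   # (stiffness setting, damping setting)

ctrl = LookupController()
for t, a in [(0.0, 0.4), (0.1, 1.6), (0.2, 1.6), (0.7, 1.6)]:
    print(t, ctrl.update(t, a))    # the switch at t=0.1 is held off until t=0.7
```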
Results from the semi-active suspension with tunable stiffness and damping show that vehicle performance, depending on road roughness and vehicle speed, can improve by up to 18% over passive suspension systems. Since the controller does not constantly adjust the damping properties, cost and reliability may improve relative to traditional semi-active suspension systems. The flip-flopping drawback of lookup table controllers has been reduced through the use of a temporal deadband; however, further enhancement is required to eliminate flip-flopping within the control logic. Looking forward, the novel semi-active suspension has great potential to improve vehicle dynamic performance, especially for heavy vehicles with large sprung mass variation; to increase robustness, the following should be considered: better road profile recognition, the elimination of flip-flopping between suspension states, and the use of a state-equation model of the pneumatic spring within the vehicle model for optimization and evaluation.
|
378 |
Implied Volatility Function - Genetic Algorithm Approach / 沈昱昌. Unknown Date (has links)
This thesis applies genetic algorithms to S&P 500 index options: a genetic-algorithm model is used to estimate the implied volatility of the options and thereby their fair value. The results are compared with the implied-volatility estimates obtained in the previous literature from the Jump-Diffusion Model, the Stochastic Volatility Model, and the Local Volatility Model, refining the estimation of implied volatility in the original BS model. In this thesis, the option volatility estimated by the genetic algorithm achieves a mean error of 0.052, better than the mean error of 0.308 obtained with the Jump-Diffusion, Stochastic Volatility, and Local Volatility Models; genetic algorithms can therefore indeed be applied to estimating option volatility. / In this paper a different approach to the BS Model is proposed: by using genetic algorithms, a non-parametric procedure captures the volatility smile, and its stability is assessed. Applying genetic algorithms to this important issue in option pricing illustrates the strengths of our approach. Volatility forecasting is an appropriate task in which to highlight the characteristics of genetic algorithms, as it is an important problem with well-accepted benchmark solutions, namely the models from the literature mentioned above. Genetic algorithms have the ability to detect patterns in volatility that depend on both time and the stock price. In addition, the stability of the genetic algorithm approach is assessed. We evaluate the stability of the new approach by examining how well it predicts future option prices: we estimate the volatility function based on the cross-section of reported option prices in one week, and then examine the deviations from theoretical prices one week later.
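A minimal sketch of the core idea, estimating an implied volatility by evolving sigma until the Black-Scholes price matches an observed option price; the market inputs and GA settings are toy assumptions, not the thesis's calibration:

```python
import math, random

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return S * N(d1) - K * math.exp(-r * T) * N(d2)

# Assumed toy market inputs: spot, strike, maturity, rate, observed price.
S, K, T, r, market_price = 100.0, 100.0, 0.5, 0.02, 6.50

def error(sigma):
    return (bs_call(S, K, T, r, sigma) - market_price) ** 2

random.seed(0)
pop = [random.uniform(0.01, 1.0) for _ in range(40)]
for _ in range(100):
    pop.sort(key=error)
    parents = pop[:20]                             # keep the best half
    children = []
    while len(children) < 20:
        a, b = random.sample(parents, 2)
        child = 0.5 * (a + b)                      # blend crossover
        child += random.gauss(0.0, 0.02)           # Gaussian mutation
        children.append(min(max(child, 0.001), 2.0))
    pop = parents + children
print(f"implied vol ~ {min(pop, key=error):.4f}")
```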
|
379 |
A Study of Crossover Methods in Applying Genetic Algorithms to the Automated Design of Bending Processes [曲げ工程の自動設計に対する遺伝的アルゴリズム適用における交叉法に関する研究] / MORI, Toshihiko; HIROTA, Kenji; MIYAWAKI, Mai; HIRAMITSU, Shinji. 06 1900 (has links)
No description available.
|
380 |
Application of Genetic Algorithms to the Automated Design of Bending Processes [曲げ工程の自動設計に対する遺伝的アルゴリズムの適用] / MORI, Toshihiko; HIROTA, Kenji; MIYAWAKI, Mai; HIRAMITSU, Shinji. 07 1900 (has links)
No description available.
|