11. A Hopfield-Tank Neural Network Approach to Solving the Mobile Agent Planning Problem. Wang, Jin-Fu, 27 June 2006.
Mobile agent planning (MAP) is increasingly viewed as an important technique for information retrieval systems to provide location-aware services at minimum cost in mobile computing environments. Although the Hopfield-Tank neural network has been applied to the traveling salesperson problem, little attention has been paid to the time constraints on resource validity when optimizing the cost of a mobile agent. We therefore hypothesized that a Hopfield-Tank neural network can be used to solve the MAP problem. To test this hypothesis, we modify the Hopfield-Tank neural network and design a new energy function that not only copes with the dynamic temporal features of the computing environment, in particular server performance and network latency when scheduling mobile agents, but also satisfies location-based constraints, such as requiring that the start and end nodes of the routing sequence be the home site of the traveling mobile agent. In addition, the energy function is reformulated as a Lyapunov function to guarantee convergence to a stable state and the existence of a valid solution. The connection weights between neurons and the activation function of the state variables in the dynamic network are devised to search for valid solutions. Moreover, an objective function is derived to estimate the completion time of the valid solutions and predict the optimal routing path. A simulation study was conducted to evaluate the proposed model and algorithm for different time variables and various coefficient values of the energy function. The experimental results quantitatively demonstrate the computational power and speed of the proposed model, which rapidly produces solutions very close to the minimum cost of the location-based, time-constrained distributed MAP problem. The spatio-temporal technique proposed in this work is an innovative approach that provides knowledge applicable to improving the effectiveness of solving optimization problems.
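The core mechanism, gradient descent on a Hopfield-Tank energy function over a node-by-position neuron matrix, can be sketched as below. This is an illustrative reconstruction of the classical Hopfield-Tank formulation for tour problems, not the thesis's MAP energy function; the coefficients A, B, C, D, the gain, and the step size are assumed values.

```python
import numpy as np

def hopfield_tank_route(dist, A=500.0, B=500.0, C=200.0, D=100.0,
                        steps=500, dt=1e-5, seed=0):
    """Illustrative Hopfield-Tank dynamics for a tour-style routing problem.

    V[x, i] ~ degree to which node x occupies position i in the route.
    The energy penalizes a node in two positions (A), two nodes in one
    position (B), the wrong total number of active neurons (C), and
    long routes (D); u follows gradient descent on that energy.
    """
    n = dist.shape[0]
    rng = np.random.default_rng(seed)
    u = 0.02 * (rng.random((n, n)) - 0.5)          # internal potentials
    for _ in range(steps):
        V = 0.5 * (1.0 + np.tanh(u / 0.02))        # sigmoid activation
        rows = V.sum(axis=1, keepdims=True) - V    # same node, other positions
        cols = V.sum(axis=0, keepdims=True) - V    # same position, other nodes
        glob = V.sum() - n                         # neuron-count term
        neigh = np.roll(V, -1, axis=1) + np.roll(V, 1, axis=1)
        du = -u - A * rows - B * cols - C * glob - D * (dist @ neigh)
        u += dt * du
    V = 0.5 * (1.0 + np.tanh(u / 0.02))
    order = np.argmax(V, axis=0)                   # decode one node per position
    return V, order
```

In the thesis's setting the D term would be replaced by time-dependent costs (server performance, network latency) and the home-site constraint would pin the first and last positions.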

12. Fast and accurate estimation of large-scale phylogenetic alignments and trees. Liu, Kevin Jensen, 6 July 2011.
Phylogenetics is the study of evolutionary relationships. Phylogenetic trees and alignments play important roles in a wide range of biological research, including reconstruction of the Tree of Life (the evolutionary history of all organisms on Earth) and the development of vaccines and antibiotics. Today's phylogenetic studies seek to reconstruct trees and alignments on a greater number and variety of organisms than ever before, primarily due to exponential growth in affordable sequencing and computing power. The importance of phylogenetic trees and alignments motivates the need for methods that reconstruct them accurately and efficiently on large-scale datasets.

Traditionally, phylogenetic studies proceed in two phases: first, an alignment is produced from biomolecular sequences of differing lengths, and, second, a tree is produced using the alignment. My dissertation presents the first empirical performance study of leading two-phase methods on datasets with up to hundreds of thousands of sequences. Relatively accurate alignments and trees were obtained using methods with high computational requirements on datasets with a few hundred sequences, but as datasets grew past 1,000 sequences and up to tens of thousands of sequences, the set of methods capable of analyzing a dataset diminished, and only the methods with the lowest computational requirements and lowest accuracy remained.

Alternatively, methods have been developed to estimate phylogenetic alignments and trees simultaneously. Methods based on treelength optimization, the most widely used approach for simultaneous estimation, have not previously been shown to return more accurate trees and alignments than two-phase approaches. I demonstrate that treelength optimization under a particular class of optimization criteria is a promising means for inferring accurate trees and alignments. The other methods for simultaneous estimation are not known to support analyses of datasets with a few hundred sequences due to their high computational requirements.

The main contribution of my dissertation is SATe, the first fast and accurate method for simultaneous estimation of alignments and trees on datasets with up to several thousand nucleotide sequences. SATe improves upon the alignment and topological accuracy of all existing methods, especially on the most difficult-to-align datasets, while retaining reasonable computational requirements.
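For intuition on what treelength scoring evaluates: for a fixed tree and a fixed alignment, treelength under unit substitution costs reduces to the parsimony score, which Fitch's algorithm computes site by site. The sketch below is illustrative only; the treelength criteria used in simultaneous estimation also charge for gaps and are optimized jointly over trees and alignments.

```python
def fitch_site(node, states):
    """One site of Fitch's algorithm on a rooted binary tree.

    A tree is a nested 2-tuple of subtrees; a leaf is a taxon name.
    Returns (candidate state set, substitution count) for this site.
    """
    if isinstance(node, str):                      # leaf
        return {states[node]}, 0
    left_set, left_cost = fitch_site(node[0], states)
    right_set, right_cost = fitch_site(node[1], states)
    both = left_set & right_set
    if both:                                       # children agree: no change
        return both, left_cost + right_cost
    return left_set | right_set, left_cost + right_cost + 1

def treelength(tree, seqs):
    """Parsimony score of a fixed tree over fixed aligned sequences."""
    n = len(next(iter(seqs.values())))
    return sum(fitch_site(tree, {k: s[i] for k, s in seqs.items()})[1]
               for i in range(n))
```

For example, the tree (("a","b"),("c","d")) with aligned sequences a=AA, b=AC, c=GA, d=GC has treelength 3: one change at the first site (A vs. G) and two at the second.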

13. A Study on Analysis of Design Variables in Pareto Solutions for Conceptual Design Optimization Problem of Hybrid Rocket Engine. Furuhashi, Takeshi; Yoshikawa, Tomohiro; Kudo, Fumiya, 2011.
2011 IEEE Congress on Evolutionary Computation (CEC). June 5-8, 2011, Ritz-Carlton, New Orleans, LA, USA

14. Discrete particle swarm optimization algorithms for orienteering and team orienteering problems. Muthuswamy, Shanthi, 2009.
Thesis (Ph.D.), State University of New York at Binghamton, Thomas J. Watson School of Engineering and Applied Science, Department of Systems Science and Industrial Engineering, 2009.

15. Fastfood nutriční problém / The Fastfood Diet Problem. Hyblerová, Zuzana, January 2009.
This thesis is based on the theory of mathematical programming, in particular the fastfood diet problem. The aim is to find an optimal menu that satisfies nutritional limits for various groups of fastfood customers at minimal overall price. The model is applied to foods and beverages offered by selected fastfood chains in the Czech Republic.
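The diet problem described above is a linear/integer program: minimize total price subject to nutrient lower bounds. A tiny brute-force sketch makes the structure concrete; the item names, prices, and nutrient figures are invented for illustration, and real instances would use an LP/IP solver rather than enumeration.

```python
from itertools import product

def cheapest_menu(items, limits, max_qty=3):
    """Brute-force diet problem: choose integer quantities of each item
    so every nutrient lower bound is met at minimum total price.

    items:  {name: (price, {nutrient: amount per unit})}
    limits: {nutrient: minimum required}
    Returns (best price, {name: quantity}) or None if infeasible.
    """
    names = list(items)
    best = None
    for qty in product(range(max_qty + 1), repeat=len(names)):
        totals = {n: 0.0 for n in limits}
        price = 0.0
        for name, q in zip(names, qty):
            unit_price, nutrients = items[name]
            price += q * unit_price
            for n in limits:
                totals[n] += q * nutrients.get(n, 0.0)
        if all(totals[n] >= limits[n] for n in limits):
            if best is None or price < best[0]:
                best = (price, dict(zip(names, qty)))
    return best
```

With a hypothetical menu of a burger (price 2, protein 20, energy 500) and a salad (price 3, protein 5, energy 200) and limits of 25 protein and 700 energy, the cheapest feasible menu is two burgers at total price 4.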

16. WSN Routing Schedule Based on Energy-aware Adaptation. Peng, Tingqing, January 2020.
To address the uneven load distribution and energy consumption among nodes in a multi-hop wireless sensor network, this research formulates routing scheduling as a multi-objective optimization problem (MOP) and proposes RDSEGA, an energy-aware routing optimization scheme based on multi-objective optimization. To avoid the search-space explosion caused by a growing number of nodes, Yen's k-shortest-paths (KSP) algorithm is applied to prune the search space, and the candidate paths selected after pruning are re-encoded based on priority. An improved strengthened elitist genetic algorithm is then adopted to obtain the routing scheme with the best energy efficiency for the entire network. To address route discontinuity during path crossover and mutation, a new crossover and mutation method is proposed that operates on gene fragments joined at adjacent or shared nodes, maximizing the effectiveness of the evolutionary result. The experimental results show that the scheme reduces the energy consumption of nodes in the network, balances the load between nodes, and prolongs the working time of the network by nearly 40% after optimization. This brings convenience to practical applications, especially those in which replacing nodes is inconvenient.

17. Design optimization of multi-ply soft armor targets based on failure modes under projectile normal impact. Zherui Guo, 29 April 2020.
At the ballistic limit velocity of a soft armor target pack, the impact response has been shown to be decoupled in the thickness direction, with the initial few plies behaving in an inelastic fashion via off-axis failure modes such as transverse shear or diametral compression. Past the initial few layers, the remaining plies dissipate energy via membrane-like responses, which involve only in-plane tensile failure modes of the constituent fibers. Since these initial plies contribute to energy absorption only via inelastic kinetic energy transfer, previous studies have shown that they may be replaced with another material with other desirable properties, such as lower manufacturing cost or stab resistance.

However, the methodology for determining these parameters is still largely empirical. Armor panels are typically impacted and the shot outcomes subsequently evaluated in order to obtain a quantitative ballistic performance for the panel. Additionally, the ballistic performance is usually determined with respect to a particular projectile. Several models have been proposed to provide an efficient method of predicting the ballistic limit, but results are sometimes difficult to translate across different projectile-target pairs.

The main research direction of the first volume, soft armor impact failure modes and design optimization, is of immediate relevance to this dissertation. We start with an examination of the different types of failure modes that impact on fibrous armors may yield. Building on these concepts, we then take a deeper look into how different impact parameters cause different failure modes, and we end with a discussion of how an armor panel may be designed around these failure modes. Although some rudimentary analytical and modeling efforts have been put forth, the current work places heavy emphasis on experimental techniques and observations, as is the nature of the work typically produced by our research group.

18. Optimalizátor rozvrhu zkoušek na FIT / Optimizer for Exam Scheduling at the FIT. Paulík, Miroslav, January 2015.
This paper describes automated examination scheduling for the Faculty of Information Technology of Brno University of Technology. It specifies a list of restrictions that must be satisfied and classifies these restrictions according to their influence on the quality of the final examination schedule. There are two types of restrictions: soft and hard. The task is to find a solution that satisfies all hard constraints and violates as few soft constraints as possible, using the techniques described in this paper.
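The hard/soft split described above can be sketched as a tiny exhaustive search: hard constraints filter out infeasible schedules, and among the feasible ones the schedule with the fewest soft-constraint violations wins. The exam names, slots, and conflict pairs are invented for illustration; real timetables need heuristic or integer-programming solvers.

```python
from itertools import product

def schedule_exams(exams, slots, conflicts, prefer_gap=1):
    """Assign one slot per exam.

    Hard constraint: exams sharing students (a conflict pair) must not
    share a slot; such schedules are discarded outright.
    Soft constraint: conflicting exams within `prefer_gap` slots of each
    other incur a penalty; the schedule minimizing total penalty wins.
    Returns (penalty, {exam: slot}) or None if no feasible schedule.
    """
    best = None
    for combo in product(slots, repeat=len(exams)):
        assign = dict(zip(exams, combo))
        if any(assign[a] == assign[b] for a, b in conflicts):
            continue                               # hard constraint broken
        penalty = sum(1 for a, b in conflicts
                      if abs(assign[a] - assign[b]) <= prefer_gap)
        if best is None or penalty < best[0]:
            best = (penalty, assign)
    return best
```

For three exams in three slots where A conflicts with B and B with C, a zero-penalty schedule exists: B sits two slots away from both A and C, while A and C (no shared students) may share a slot.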

19. Security Countermeasure Selection as a Constraint Solving Problem. Kathem, Aya, January 2021.
Network systems often contain vulnerabilities that remain unmitigated for various reasons, such as the lack of a patch or a limited budget. Adversaries can exploit these vulnerabilities through different strategies, using them to gain capabilities that enable them to reach their target goal. This thesis aims to find the most effective defense strategy that can defend against all discovered or known attack scenarios, in an attempt to secure the system's critical assets. Threat modeling is a well-known technique for finding and assessing vulnerabilities and threats in a system. Attack graphs are one of the common models used to illustrate and analyze attack scenarios: they provide a logical overview of how an attacker can combine multiple vulnerabilities to reach a specific part of the system. This project utilizes attack graphs, taking advantage of the causal relationships among their elements to formulate a constraint solving problem, and performs a number of analyses to define constraints and objectives for selecting the most appropriate actions to be taken by the defender. This is achieved by addressing the security and organizational requirements for a given budget. The results show that the selected combination of countermeasures restricts all attack paths present in the logical attack graph. The countermeasures are distributed over the most critical parts of the system and reduce the potential harm of several vulnerabilities rather than providing high protection for a few. This approach aids in finding the most relevant way to protect the system's assets given the available budget.
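The selection task described above, cutting every attack path within a budget, can be sketched as a tiny exhaustive search over countermeasure subsets. The path steps, countermeasure names, and costs below are invented for illustration; realistic attack graphs require an actual constraint solver rather than enumeration.

```python
from itertools import combinations

def select_countermeasures(attack_paths, measures, budget):
    """Pick the cheapest within-budget set of countermeasures that cuts
    every known attack path.

    attack_paths: list of paths, each a list of attack steps
    measures: {name: (cost, {steps this measure blocks})}
    A path is cut if at least one of its steps is blocked.
    Returns (total cost, {selected names}) or None if infeasible.
    """
    names = list(measures)
    best = None
    for r in range(len(names) + 1):
        for combo in combinations(names, r):
            cost = sum(measures[m][0] for m in combo)
            if cost > budget:
                continue
            blocked = set()
            for m in combo:
                blocked |= measures[m][1]
            # constraint: every attack path contains a blocked step
            if all(blocked & set(path) for path in attack_paths):
                if best is None or cost < best[0]:
                    best = (cost, set(combo))
    return best
```

On a toy graph with paths [vpn, db], [web, db], [web, mail] and measures patch_db (cost 5, blocks db), patch_web (cost 3, blocks web), mfa_vpn (cost 2, blocks vpn), the cheapest cut is {patch_web, mfa_vpn} at cost 5: web must be blocked because no measure covers mail, after which mfa_vpn is the cheapest way to cut the remaining path.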

20. Optimizing the Advanced Metering Infrastructure Architecture in Smart Grid. Chasempour, Alireza, 1 May 2016.
Advanced Metering Infrastructure (AMI) is one of the most important components of the smart grid (SG): it aggregates data from smart meters (SMs) and sends the collected data to the utility center (UC) to be analyzed and stored. In the traditional centralized AMI architecture, a single meter data management system processes all gathered information in the UC; as the number of SMs and their data rates increase, this architecture does not scale and cannot satisfy SG requirements such as delay and reliability. Since scalability is one of the most important characteristics of an AMI architecture in the SG, we investigated the scalability of different AMI architectures and proposed a scalable hybrid AMI architecture. We introduced three performance metrics, formulated each AMI architecture in terms of these metrics, and used a genetic-based algorithm to minimize them for the proposed architecture. We simulated different AMI architectures for five demographic regions, and the results show that the proposed hybrid AMI architecture outperforms centralized and decentralized AMI architectures and offers good load and geographic scalability.
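To give a feel for the genetic-search step, here is a toy elitist GA that assigns smart meters to aggregation points so that the heaviest aggregator load, a crude stand-in for the delay and scalability metrics, is minimized. This is an illustrative sketch only, not the thesis's algorithm or metrics; population size, elite fraction, and mutation rate are assumed values.

```python
import random

def ga_assign(n_meters, n_aggs, pop_size=30, gens=60, seed=1):
    """Toy GA: chromosome = aggregator index per meter; fitness = max
    aggregator load (lower is better). Elitism keeps the best 20%."""
    rng = random.Random(seed)

    def fitness(chrom):
        loads = [0] * n_aggs
        for a in chrom:
            loads[a] += 1
        return max(loads)

    pop = [[rng.randrange(n_aggs) for _ in range(n_meters)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        elite = pop[: pop_size // 5]               # elitism
        children = list(elite)
        while len(children) < pop_size:
            p1, p2 = rng.sample(elite, 2)
            cut = rng.randrange(1, n_meters)
            child = p1[:cut] + p2[cut:]            # one-point crossover
            if rng.random() < 0.3:                 # mutation
                child[rng.randrange(n_meters)] = rng.randrange(n_aggs)
            children.append(child)
        pop = children
    return min(pop, key=fitness)
```

For 12 meters and 3 aggregators the balanced optimum is a maximum load of 4; even this toy search reliably gets close to it.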