141 |
Modified Niched Pareto Multi-objective Genetic Algorithm for Construction Scheduling Optimization
Kim, Kyungki (August 2011)
This research proposes a Genetic Algorithm-based decision support model that provides decision makers with a quantitative basis for multi-criteria decision making related to construction scheduling. In an attempt to overcome the drawbacks of similar efforts, the proposed multi-objective optimization model provides insight into construction scheduling problems. In order to generate optimal solutions in terms of three important criteria, namely project duration, cost, and variation in resource use, a new data structure is proposed to define a solution to the problem, and a general Niched Pareto Genetic Algorithm (NPGA) is modified to facilitate the optimization procedure.
The main features of the proposed Multi-Objective Genetic Algorithm (MOGA) are:
- A fitness sharing technique that maintains diversity of solutions.
- A non-dominated sorting method that assigns a rank to each individual solution in the population, supporting the tournament selection process.
- An external archive that prevents the loss of optimal or near-optimal solutions to the random effects of genetic operators.
- A space normalization method that avoids scaling deficiencies.
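The dominance relation and rank assignment behind non-dominated sorting can be illustrated with a minimal sketch; the objective tuples below are invented examples, with all three objectives (duration, cost, resource variation) minimized:

```python
def dominates(a, b):
    """True if solution a dominates b: no worse in every objective, strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated_ranks(population):
    """Assign rank 0 to the Pareto front, rank 1 to the next front, and so on."""
    ranks = {}
    remaining = set(range(len(population)))
    rank = 0
    while remaining:
        # The current front: solutions not dominated by any other remaining solution.
        front = {i for i in remaining
                 if not any(dominates(population[j], population[i])
                            for j in remaining if j != i)}
        for i in front:
            ranks[i] = rank
        remaining -= front
        rank += 1
    return ranks

# Each tuple is (duration, cost, resource_variation) for one candidate schedule.
pop = [(10, 100, 5), (12, 90, 4), (11, 110, 6), (10, 100, 5)]
ranks = nondominated_ranks(pop)
```

In tournament selection, a lower rank wins; fitness sharing then breaks ties within a front by penalizing crowded regions.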
The developed optimization model was applied to two case studies. The results indicate that the new approach obtains a wider range of solutions than previous models: a greater area of the decision space is explored, and tradeoffs among all the objectives are found. In addition, various resource use options are found and visualized. Most importantly, simultaneous optimization of all objectives provides better insight into what each option can achieve.
A limitation of this research is that schedules are created under the assumption of unlimited resource availability. In real-world situations such schedules are often infeasible, since resources are commonly constrained and not readily available. A discussion of future research is therefore provided on the data structures that would need to be developed to perform such scheduling under resource constraints.
|
142 |
Achieving predictable timing and fairness through cooperative polling
Sinha, Anirban
Time-sensitive applications that are also CPU-intensive, such as video games, video playback, and visually rich desktops, are increasingly common. These applications run on commodity operating systems targeted at diverse hardware, and hence cannot assume that sufficient CPU time is always available. Increasingly, these applications are designed to be adaptive. When executing multiple such applications, the operating system must not only provide good timeliness but also (optionally) coordinate their adaptations so that the applications can deliver uniform fidelity.
In this work, we present a starvation-free, fair process scheduling algorithm that provides predictable, low-latency execution without the use of reservations and helps adaptive time-sensitive tasks achieve consistent quality through cooperation. We combine an event-driven application model called cooperative polling with a fair-share scheduler. Cooperative polling shares timing or priority information across applications via the kernel, providing good timeliness, while the fair-share scheduler provides fairness and full utilization.
Our experiments show that cooperative polling leverages the inherent efficiency advantages of voluntary context switching versus involuntary pre-emption. In CPU saturated conditions, we show that the scheduling responsiveness of cooperative polling is five times better than a well-tuned fair-share scheduler, and orders of magnitude better than the best-effort scheduler used in the mainstream Linux kernel.
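The core idea can be illustrated with a toy simulation: each task voluntarily yields and declares the deadline of its next event, and the scheduler dispatches the earliest deadline among tasks still within their fair share of CPU slices. This is a hypothetical sketch, not the authors' kernel implementation; the task names, periods, and fair-share slack rule are invented.

```python
def schedule(tasks, slices):
    """tasks maps a task name to its event period (in slices); run `slices` slices."""
    used = {name: 0 for name in tasks}           # CPU slices consumed so far
    deadline = dict(tasks)                       # next declared event deadline
    order = []
    for t in range(slices):
        fair = (t + 1) / len(tasks) + 1          # fair share plus some slack
        # Only tasks within their fair share compete; fall back to all tasks.
        eligible = [n for n in tasks if used[n] <= fair] or list(tasks)
        # Earliest-deadline-first among the eligible tasks (name breaks ties).
        name = min(eligible, key=lambda n: (deadline[n], n))
        order.append(name)
        used[name] += 1
        deadline[name] += tasks[name]            # the task yields and re-arms its timer
    return order

dispatch = schedule({"video": 2, "ui": 3}, 4)
```

The voluntary yield is what distinguishes this from preemptive EDF: the scheduler only learns deadlines because cooperative tasks report them.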
|
143 |
Codebook design for distributed relay beamforming system
Zheng, Min (1 April 2012)
In an FDD amplify-and-forward distributed relay network, codebook techniques are used to feed back quantized CSI at limited cost. First, this thesis focuses on phase-only and with-power-control codebook design methods under individual relay power constraints. Phase-only codebooks can be generated off-line with the Grassmannian beamforming criterion. Because the optimal beamforming vector is not uniformly distributed in the vector space, Lloyd's algorithm is proposed for with-power-control codebook design. To reduce search complexity, a suboptimal method for the codebook update stage of Lloyd's algorithm is proposed; its performance is compared to that of a global search, which provides the optimal solution but incurs high computational complexity. Second, this thesis investigates the performance difference between phase-only and with-power-control codebooks. It is found that the power control gain is closely related to the relay locations: when the relays are close to the source node, the gain from power control is negligible, and phase-only codebooks become a viable choice for feedback owing to their simple implementation and off-line computation. Finally, the codebook design problem is extended to the total relay power constraint case, and Lloyd's algorithm with a primary eigenvector method is proposed to design a suboptimal codebook.
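A generic Lloyd-style codebook loop can be sketched as follows: assign training vectors to the nearest codeword by correlation magnitude, then update each codeword as the principal eigenvector of its cluster's correlation matrix. This is an illustrative reconstruction, not the thesis's exact design; the training set and codebook size are invented.

```python
import numpy as np

def lloyd_codebook(vectors, size, iters=20, seed=0):
    """Generalized Lloyd iteration over unit-norm complex beamforming vectors."""
    rng = np.random.default_rng(seed)
    dim = vectors.shape[1]
    # Initialize codewords as random unit-norm complex vectors.
    code = rng.standard_normal((size, dim)) + 1j * rng.standard_normal((size, dim))
    code /= np.linalg.norm(code, axis=1, keepdims=True)
    for _ in range(iters):
        # Assignment step: nearest codeword by correlation magnitude |v^H c|.
        labels = np.abs(vectors.conj() @ code.T).argmax(axis=1)
        # Update step: principal eigenvector of each cluster's correlation matrix.
        for k in range(size):
            cluster = vectors[labels == k]
            if len(cluster) == 0:
                continue                          # keep the old codeword
            R = cluster.conj().T @ cluster
            _, eigvecs = np.linalg.eigh(R)        # eigenvalues in ascending order
            code[k] = eigvecs[:, -1]              # largest-eigenvalue eigenvector
    return code

rng = np.random.default_rng(1)
train = rng.standard_normal((200, 4)) + 1j * rng.standard_normal((200, 4))
train /= np.linalg.norm(train, axis=1, keepdims=True)
codebook = lloyd_codebook(train, size=8)
```

The eigenvector update is the same primary-eigenvector idea the abstract mentions for the total-power-constraint case; the thesis's suboptimal update replaces part of this step to cut search cost.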
|
144 |
Investigation in the application of complex algorithms to recurrent generalized neural networks for modeling dynamic systems
Yackulic, Richard Matthew Charles (4 April 2011)
<p>Neural networks are mathematical formulations that can be "trained" to perform certain functions. The particular application of these networks of interest in this thesis is to "model" a physical system using only input-output information. The physical system and the neural network are subjected to the same inputs, and the neural network is then trained to produce an output that matches the physical system for any input. The neural network model so created is essentially a "black box" representation of the physical system. This approach has been used at the University of Saskatchewan to model a load sensing pump (a component used to create a constant flow rate independent of variations in pressure downstream of the pump). These studies have shown the versatility of neural networks for modeling dynamic and non-linear systems; however, they also indicated challenges associated with the morphology of neural networks and the algorithms used to train them. These challenges were the motivation for this particular research.</p>
<p>Within the Fluid Power Research group at the University of Saskatchewan, a "global" objective of research in the area of load sensing pumps has been to apply dynamic neural networks (DNN) in the modeling of load sensing systems. To fulfill this objective, a recurrent generalized neural network (RGNN) morphology along with a non-gradient training approach called the complex algorithm (CA) were chosen to train a load sensing pump neural network model. However, preliminary studies indicated that the combination of recurrent generalized neural networks and complex training proved ineffective for even second-order single-input single-output (SISO) systems when the initial synaptic weights of the neural network were chosen at random.</p>
<p>Because of these initial findings, the focus of this research and its objectives shifted towards understanding the capabilities and limitations of recurrent generalized neural networks and non-gradient training (specifically the complex algorithm). To do so, a second-order transfer function was considered, from which an approximate recurrent generalized neural network representation was obtained. The network was tested under a variety of initial weight intervals and numbers of weights being optimized. A definite trend was noted: as the initial values of the synaptic weights were set closer to the "exact" values calculated for the system, the robustness of the network and the chance of finding an acceptable solution increased. Two types of training signals were used in the study, step-response and frequency-based training; when the two were compared, step-response training was found to produce a more generalized network.</p>
<p>Another objective of this study was to compare the CA to a proven non-gradient training method; the method chosen was genetic algorithm (GA) training. For the purposes of these studies, two modifications were made to the GA found in the literature. The most significant change was the assurance that the error would never increase during the training of RGNNs using the GA. This, however, led to a collapse of the population around a specific point and limited the GA's ability to obtain an accurate RGNN.</p>
<p>The research produced four conclusions. First, the robustness of training RGNNs using the CA depends on the initial population of weights. Second, when using GAs, a specific algorithm must be chosen that allows the calculation of new population weights to move freely while still ensuring a stable output from the RGNN. Third, when the GA used was compared to the CA, the CA produced more generalized RGNNs. Fourth, based on the results of training RGNNs with the CA and GA on step-response and frequency-based training data sets, networks trained on step responses are more generalized in the majority of cases.</p>
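The complex algorithm (Box's constrained simplex method) at the heart of the study can be sketched as follows, with a toy quadratic error surface standing in for the RGNN training error. The reflection factor, bounds, and retreat rule are illustrative assumptions, not the thesis's settings.

```python
import random

def complex_search(f, bounds, iters=300, alpha=1.3, seed=0):
    """Minimize f over a box by reflecting the worst point through the centroid."""
    rng = random.Random(seed)
    dim = len(bounds)
    # Initial "complex": 2*dim random points inside the bounds.
    pts = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(2 * dim)]
    for _ in range(iters):
        pts.sort(key=f)
        worst = pts[-1]
        centroid = [sum(p[i] for p in pts[:-1]) / (len(pts) - 1)
                    for i in range(dim)]
        # Over-reflect the worst point through the centroid of the rest.
        new = [c + alpha * (c - w) for c, w in zip(centroid, worst)]
        new = [min(max(x, lo), hi) for x, (lo, hi) in zip(new, bounds)]
        # If no improvement, retreat halfway toward the centroid (a few times).
        tries = 0
        while f(new) >= f(worst) and tries < 20:
            new = [(x + c) / 2 for x, c in zip(new, centroid)]
            tries += 1
        pts[-1] = new
    return min(pts, key=f)

# Toy error surface with its minimum at weights (1, -2).
err = lambda w: (w[0] - 1) ** 2 + (w[1] + 2) ** 2
best = complex_search(err, [(-5, 5), (-5, 5)])
```

Note the connection to the thesis's finding: the search only ever moves points, so where the initial complex lands (the initial weight population) largely decides whether it converges to an acceptable solution.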
|
145 |
Algorithm for Handoff in VDL mode 4
Andersson, Rickard (January 2010)
VDL mode 4 is a digital data link operating in the VHF band; its main use is in the aviation industry. VDL4 can, for example, provide positioning data and speed information for aircraft or vehicles equipped with a VDL4 transponder. A connection between the ground system and the airborne system is called a point-to-point connection, which can be used for various applications. This data link needs to be transferred between ground stations during flight in order to maintain the connection, a process called handoff. The handoff process must be quick enough not to drop the link, while at the same time a low rate of handoffs is desirable. The data link is regarded as a scarce resource, and link management data for handoff is considered overhead. This thesis studies how to make the handoff procedure optimal with respect to these aspects. Previous research on handoff algorithms and models of the VHF channel is reviewed, and standardized parameters and procedures in VDL4 are explored in order to find an optimal solution for the handoff procedure. Based on this analysis, an algorithm is suggested that uses an adaptive hysteresis incorporating the signal quality and positioning data provided in VDL4. Standardized parameters that could be useful in the handoff procedure are also commented on, since the VDL4 standards are still under development.
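The adaptive-hysteresis idea can be sketched in a few lines: hand off only when the candidate station's signal quality beats the serving station's by a margin, and let that margin shrink as the aircraft's position data shows it moving away from the serving station. The margin model, constants, and distance units below are invented for illustration, not the thesis's tuned parameters.

```python
def hysteresis_margin(dist_serving, base=6.0, shrink=0.05, floor=1.0):
    """Margin in dB that decreases with distance from the serving station."""
    return max(floor, base - shrink * dist_serving)

def decide_handoff(q_serving, q_candidate, dist_serving):
    """Hand off only if the candidate beats the serving signal by the margin."""
    return q_candidate > q_serving + hysteresis_margin(dist_serving)
```

A large margin near the serving station suppresses ping-pong handoffs; a small margin far away lets the link move before it is dropped.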
|
146 |
Evaluation of Red Colour Segmentation Algorithms in Traffic Signs Detection
Feng, Sitao (January 2010)
Colour segmentation is the most commonly used method in road sign detection. Road signs contain several basic colours, such as red, yellow, blue, and white, depending on the country. The objective of this thesis is to evaluate four colour segmentation algorithms: the Dynamic Threshold Algorithm, a modification of de la Escalera's Algorithm, the Fuzzy Colour Segmentation Algorithm, and the Shadow and Highlight Invariant Algorithm. Processing time and segmentation success rate are used as the criteria for comparing the performance of the four algorithms, with red selected as the target colour for the comparison. All the test images were selected randomly, by category, from the Traffic Signs Database of Dalarna University [1]; these road sign images were taken with a digital camera mounted in a moving car in Sweden. Experiments show that the Fuzzy Colour Segmentation Algorithm and the Shadow and Highlight Invariant Algorithm are more accurate and stable in detecting the red colour of road signs, and the method could also be used in research analyzing other colours. For an evaluation of the four algorithms on yellow, the reader is referred to the Master's thesis of Yumei Liu.
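A generic colour segmentation step can be sketched with normalized-RGB thresholding; this is an illustration of the general technique, not one of the four evaluated algorithms, and the 0.4/0.3 thresholds are invented.

```python
def is_red(r, g, b, r_min=0.4, gb_max=0.3):
    """Classify a pixel as red using its normalized colour components."""
    total = r + g + b
    if total == 0:
        return False
    return r / total >= r_min and g / total <= gb_max and b / total <= gb_max

def segment(image):
    """image: 2-D grid of (r, g, b) tuples -> binary mask of red pixels."""
    return [[1 if is_red(*px) else 0 for px in row] for row in image]

# A 2x2 toy image: red, grey, dark red, blue.
img = [[(200, 30, 30), (90, 90, 90)],
       [(180, 60, 50), (30, 30, 200)]]
mask = segment(img)
```

Normalizing by the pixel's total intensity gives some robustness to brightness changes, which is the same concern the shadow-and-highlight-invariant approach addresses more thoroughly.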
|
148 |
Atomic structure and mechanical properties of BC2N
Huang, Zhi-Quan (6 July 2010)
Structural motifs for BC2N superlattices were identified in a systematic search based on a greedy algorithm. Using a tree data structure, we retrieved the seven structural models for the c-BC2N 1x1x1 lattice identified previously by Sun et al. [Phys. Rev. B 64, 094108 (2001)]. Furthermore, the atomic structures with the maximum number of C-C bonds for c-BC2N 2x2x2, 3x3x3, and 4x4x4 superlattices were found by applying the greedy algorithm to the tree data structure; this structural motif has not been proposed previously in the literature. Superlattices with up to 512 atoms were considered. The atoms in these superlattices take a diamond-like structural form, and the C atoms, as well as the B and N atoms, separately form octahedral motifs. The octahedral structure consisting of C is bounded by {111} facets, and each facet is interfaced to a neighboring octahedral structure consisting of B and N atoms. The electronic and mechanical properties of the newly identified low-energy structures were analyzed.
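The greedy idea of maximizing C-C bonds can be illustrated on a toy graph: repeatedly add the site whose selection creates the most new C-C bonds, breaking ties by coordination number. The ring "lattice" and site counts below are invented stand-ins for the diamond-like superlattices, not the thesis's tree-search implementation.

```python
def greedy_carbon_sites(adj, n_carbon):
    """Greedily choose n_carbon sites of graph adj to maximize C-C bonds."""
    carbons = set()
    for _ in range(n_carbon):
        # Pick the unassigned site that gains the most bonds to chosen carbons.
        best = max((s for s in adj if s not in carbons),
                   key=lambda s: (sum(1 for t in adj[s] if t in carbons),
                                  len(adj[s])))
        carbons.add(best)
    return carbons

def cc_bonds(adj, carbons):
    """Count bonds whose both endpoints are carbon (each edge counted once)."""
    return sum(1 for s in carbons for t in adj[s] if t in carbons) // 2

# A ring of 8 sites standing in for a periodic lattice; half become carbon.
ring = {i: [(i - 1) % 8, (i + 1) % 8] for i in range(8)}
sites = greedy_carbon_sites(ring, 4)
```

Even on this toy graph the greedy rule clusters the carbon sites into one contiguous block, mirroring the segregation into C-rich and BN-rich regions described above.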
|
149 |
Circuit Design of Maximum a Posteriori Algorithm for Turbo Code Decoder
Kao, Chih-wei (30 July 2010)
none
|
150 |
Botnet Detection Based on Ant Colony
Li, Yu-Yun (14 September 2012)
Botnets are among the biggest threats on the network today. Botmasters inject bot code into normal computers, turning them into bots under the botmaster's control. Every bot connects to a botnet coordinator called the command and control (C&C) server; the C&C delivers commands to the bots, supervises their state, and keeps them alive. When the C&C relays commands from the botmaster, the bots must do whatever the botmaster wants, such as launching DDoS attacks, sending spam, and stealing private information from victims. If we can detect where the C&C is, we can prevent such network attacks.
Ant Colony Optimization (ACO) studies artificial systems that take inspiration from the behavior of real ant colonies and are used to solve discrete optimization problems. When ants walk along a path, they leave pheromone on it, and more pheromone attracts more ants. Quick convergence and heuristic search, two main characteristics of ant algorithms, are adopted in the proposed approach to finding the C&C node.
Based on the features of the connections between the C&C and its bots, the ants select nodes according to these features in order to detect the location of the C&C and take down the botnet.
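The ACO-based search can be sketched on a toy host graph: a node's attractiveness combines its pheromone level with a heuristic score derived from its connection features, and ants deposit pheromone on the nodes they visit. The network, the single "bot-likeness" score per node, and all parameters below are invented for illustration.

```python
import random

def aco_find_cc(neighbors, score, ants=50, steps=4, rho=0.1, seed=0):
    """Return the node accumulating the most pheromone over many ant walks."""
    rng = random.Random(seed)
    tau = {n: 1.0 for n in neighbors}             # pheromone per node
    for _ in range(ants):
        node = rng.choice(list(neighbors))        # each ant starts anywhere
        for _ in range(steps):
            cand = neighbors[node]
            # Move probability combines pheromone and the heuristic score.
            weights = [tau[c] * score[c] for c in cand]
            node = rng.choices(cand, weights=weights)[0]
            tau[node] += 1.0                      # deposit pheromone
        for n in tau:                             # pheromone evaporation
            tau[n] *= 1 - rho
    return max(tau, key=tau.get)

# Tiny hypothetical network where node "cc" has the most bot-like traffic.
net = {"a": ["b", "cc"], "b": ["a", "cc"],
       "cc": ["a", "b", "c"], "c": ["cc", "b"]}
feat = {"a": 0.1, "b": 0.2, "cc": 0.9, "c": 0.1}
suspect = aco_find_cc(net, feat)
```

The positive feedback (pheromone attracting more ants) is what gives the quick convergence the abstract mentions: once the suspicious node is visited often, later ants concentrate there.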
|