91 |
Analysis of the Probabilistic Algorithms for Solving Subset Sum Problem / Lin, Shin-Hong, 11 August 2005
In general, the subset sum problem is strongly believed to be computationally difficult to solve. In 1983, however, Lagarias and Odlyzko proposed a probabilistic algorithm that solves subset sum problems of sufficiently low density in polynomial time. In 1991, Coster et al. improved the Lagarias-Odlyzko algorithm and solved subset sum problems of higher density. Both algorithms reduce the subset sum problem to finding shortest non-zero vectors in special lattices.
In this thesis, we first propose a new viewpoint for defining the problems that can be solved by these two algorithms, and we show that the improved algorithm is not always better than the Lagarias-Odlyzko algorithm. We then verify this observation experimentally. Finally, we find that the Lagarias-Odlyzko algorithm can solve high-density subset sum problems if the weight of the solution is higher than 0.7733n or lower than 0.2267n, even when the density is close to 1.
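As a brief illustration of the lattice reduction mentioned in this abstract (the notation below is ours, not taken from the thesis), the Lagarias-Odlyzko construction maps an instance $a_1,\dots,a_n$ with target $s$ to the lattice spanned by the rows of

    B \;=\; \begin{pmatrix}
      1 & 0 & \cdots & 0 & N a_1 \\
      0 & 1 & \cdots & 0 & N a_2 \\
      \vdots & & \ddots & & \vdots \\
      0 & 0 & \cdots & 1 & N a_n \\
      0 & 0 & \cdots & 0 & N s
    \end{pmatrix},
    \qquad
    d \;=\; \frac{n}{\log_2 \max_i a_i},

where $N > \sqrt{n}$ is a scaling constant and $d$ is the density. A solution $x \in \{0,1\}^n$ with $\sum_i x_i a_i = s$ yields the short lattice vector $\sum_i x_i b_i - b_{n+1} = (x_1,\dots,x_n,0)$, which lattice basis reduction can hope to recover when the density is sufficiently small.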
|
92 |
A hybrid genetic algorithm for automatic test data generation / Wang, Hsiu-Chi, 13 July 2006
Automatic test data generation is a hot topic in recent software testing research. Various techniques have been proposed with different emphases, and most of the methods are based on Genetic Algorithms (GA). However, whether GA is the best metaheuristic method for this problem remains unclear. In this paper, we choose another approach that arms a GA with an intensive local searcher (the so-called Memetic Algorithm (MA) in recent terminology). The idea of incorporating a local searcher is based on observations from many real-world programs. The results turn out to outperform many other known metaheuristic methods. We argue the need for local search in software testing in the discussion of the paper.
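A minimal sketch of the memetic idea described above, a GA whose offspring pass through an intensive local search, is given below; the fitness callback, parameter names, and numeric settings are our own assumptions and not the thesis's implementation.

    import random

    def memetic_search(fitness, dim, pop_size=30, generations=100,
                       bounds=(-100.0, 100.0), local_steps=20):
        # Generic memetic-algorithm loop: a genetic algorithm whose offspring
        # are refined by a simple hill-climbing local search.  `fitness` is an
        # assumed caller-supplied function to be maximised (e.g. a coverage-
        # oriented score for a candidate test input).
        lo, hi = bounds
        pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]

        def local_search(ind):
            # Hill climbing: keep Gaussian perturbations that improve fitness.
            best, best_f = ind[:], fitness(ind)
            for _ in range(local_steps):
                cand = [x + random.gauss(0.0, 1.0) for x in best]
                f = fitness(cand)
                if f > best_f:
                    best, best_f = cand, f
            return best

        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)          # elitist selection
            parents = pop[: pop_size // 2]
            children = []
            while len(parents) + len(children) < pop_size:
                a, b = random.sample(parents, 2)
                cut = random.randrange(1, dim) if dim > 1 else 0
                child = a[:cut] + b[cut:]                # one-point crossover
                if random.random() < 0.1:                # occasional mutation
                    child[random.randrange(dim)] += random.gauss(0.0, 5.0)
                children.append(local_search(child))     # intensive local refinement
            pop = parents + children
        return max(pop, key=fitness)

For example, memetic_search(lambda v: -abs(v[0] - 42.0), dim=1) drives an input toward 42; in test data generation the fitness would instead be a coverage measure such as a branch distance.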
|
93 |
DSP-Based Handwritten Number and Pattern Recognition System / Hsu, Chia-Hung, 09 July 2003
This thesis illustrates the development of two DSP-based systems: a "Handwritten Number Recognition System" and a "Pattern Recognition System." The handwritten number recognition system consists of three sub-systems and a recognition algorithm: an Image Acquisition System, an Image Preprocessing System, an Image Segmentation System, and a Binary Pattern Match Algorithm. The pattern recognition system likewise consists of three sub-systems and a recognition algorithm: an Image Acquisition System, an Image Preprocessing System, an Image Segmentation System, and a Visual Dynamic Time Warping Algorithm. The experimental results show that both DSP-based image recognition systems meet expectations and achieve good recognition accuracy and efficiency.
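As a hedged illustration of the dynamic time warping component (a textbook formulation, not the thesis's visual DTW variant), the core recurrence can be coded as:

    def dtw_distance(a, b, dist=lambda x, y: abs(x - y)):
        # Classic dynamic-time-warping distance between two feature sequences.
        n, m = len(a), len(b)
        INF = float("inf")
        D = [[INF] * (m + 1) for _ in range(n + 1)]
        D[0][0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = dist(a[i - 1], b[j - 1])
                # Extend the cheapest of the three admissible warping steps.
                D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
        return D[n][m]

    # Example: dtw_distance([1, 2, 3], [1, 2, 2, 3]) == 0.0, since the repeated
    # sample in the second sequence can be absorbed by the warping path.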
|
94 |
A Comparison Study between Weighted and Fuzzy Algorithm on Organizational Performance Assessment / Chun-Teh Wu, Jerry, 08 July 2008
Nowadays, the administrative environment is changing rapidly. The International Institute for Management Development (IMD) uses "government efficiency" and "business efficiency" as major factors and criteria in computing the world competitiveness rankings of its World Competitiveness Yearbook. Both governments and enterprises are therefore forced to innovate and need to strengthen organizational performance, promote work efficiency, and improve their strategic management systems.
This research applies the organizational performance theme to public institutions as well as to the operations of private enterprises, using the Balanced Scorecard concept to measure the operational activities of an organization. To verify the performance of organizational management, the questionnaire uses Likert-scale criteria and adopts a fuzzy membership function transformation to obtain the measurement results.
The purposes of this research are: 1) to design a performance measurement scale with content validity based on the Balanced Scorecard; 2) to compare the weighted and fuzzy algorithms; and 3) to verify the worth of the fuzzy algorithm.
The results of this research are: 1) the fuzzy algorithm has better validity than the weighted algorithm; 2) the Balanced Scorecard helps improve organizational performance; and 3) the customer perspective is the first priority among the performance measures of the Balanced Scorecard.
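A small sketch of the fuzzy membership transformation the abstract refers to; the triangular functions and Likert breakpoints below are illustrative assumptions, not the thesis's actual scale.

    def triangular(x, a, b, c):
        # Triangular fuzzy membership function with support [a, c] and peak at b.
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    def fuzzify_likert(score):
        # Map a 5-point Likert response to degrees of "low", "medium" and
        # "high" agreement (breakpoints are hypothetical).
        return {
            "low":    triangular(score, 0, 1, 3),
            "medium": triangular(score, 1, 3, 5),
            "high":   triangular(score, 3, 5, 6),
        }

    # Example: fuzzify_likert(4) -> {"low": 0.0, "medium": 0.5, "high": 0.5},
    # so a single response contributes partially to two linguistic categories
    # instead of a single crisp weighted score.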
|
95 |
A Hybrid Algorithm for the Longest Common Subsequence of Multiple Sequences / Weng, Hsiang-yi, 19 August 2009
The k-LCS problem is to find the longest common subsequence (LCS) of k input sequences. It becomes difficult when the number of input sequences is large. In the past, researchers focused on finding the LCS of two sequences (2-LCS); however, no good algorithm is known so far for finding the optimal solution of k-LCS. To solve the k-LCS problem, in this thesis we first propose a mixed algorithm that combines a heuristic algorithm, a genetic algorithm (GA), and an ant colony optimization (ACO) algorithm. We then propose an enhanced ACO (EACO) algorithm, composed of the heuristic algorithm and a matching pair algorithm (MPA). In our experiments, we compare our algorithms with the expansion algorithm, the best-next-for-maximal-available-symbol algorithm, GA, and ACO. The experimental results on several sets of DNA and protein sequences show that our EACO algorithm outperforms the other algorithms in the lengths of the solutions found.
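For reference, the standard dynamic-programming solution for two sequences (2-LCS), on which multi-sequence heuristics such as the ones above build, is sketched below; this is textbook material, not code from the thesis.

    def lcs_pair(s, t):
        # Length of the longest common subsequence of two sequences via the
        # classic O(len(s) * len(t)) dynamic program.
        n, m = len(s), len(t)
        dp = [[0] * (m + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                if s[i - 1] == t[j - 1]:
                    dp[i][j] = dp[i - 1][j - 1] + 1      # extend a common symbol
                else:
                    dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
        return dp[n][m]

    # Example: lcs_pair("AGGTAB", "GXTXAYB") == 4, corresponding to "GTAB".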
|
96 |
Approximation Algorithms for (S,T)-Connectivity Problems / Laekhanukit, Bundit, 27 July 2010
We study a directed network design problem called the $k$-$(S,T)$-connectivity problem; we design and analyze approximation algorithms and give hardness results. For each positive integer $k$, the minimum cost $k$-vertex connected spanning subgraph problem is a special case of the $k$-$(S,T)$-connectivity problem. We defer precise statements of the problem and of our results to the introduction.
For $k=1$, we call the problem the $(S,T)$-connectivity problem. We study three variants of the problem: the standard $(S,T)$-connectivity problem, the relaxed $(S,T)$-connectivity problem, and the unrestricted $(S,T)$-connectivity problem. We give hardness results for these three variants. We design a $2$-approximation algorithm for the standard $(S,T)$-connectivity problem. We design tight approximation algorithms for the relaxed $(S,T)$-connectivity problem and one of its special cases.
For any $k$, we give an $O(\log k\log n)$-approximation algorithm, where $n$ denotes the number of vertices. The approximation guarantee almost matches the best approximation guarantee known for the minimum cost $k$-vertex connected spanning subgraph problem, which is $O(\log k\log\frac{n}{n-k})$, due to Nutov in 2009.
|
97 |
Improved Interflow and Infiltration Algorithms for Distributed Hydrological Models / Liu, Guoxiang, January 2010
The shallow subsurface controls the partitioning of available energy between sensible and latent heat of the land surface, and the partitioning of available water among evaporation, infiltration, and runoff. It is a key component of both the hydrometeorological system and the terrestrial water cycle. A critical part of any hydrological or hydrometeorological forecast model is therefore the algorithms used to represent the shallow soil processes, which include infiltration, evaporation, runoff, and interflow. For climate models, coupled algorithms called “Land Surface Schemes” (LSSs) are developed to represent the lower boundary conditions that deal with the land-to-atmosphere energy and moisture fluxes. Similar algorithms are implemented in regional watershed models and day-to-day operational water resources forecasting models. It is the primary objective of this thesis to provide improved methods for simulating coupled land surface processes, which can be used as components of LSSs or within existing operational hydrology models. These new methods address a number of specific issues inadequately handled by current models, including the presence of shallow boundary conditions, heterogeneity in infiltration, and infiltration and interflow coupling processes.
The main objective of the proposed research is to provide a consistent, physically-based approach for simulating near-surface soil moisture processes, so as to complete the parameterization of the interflow/infiltration algorithm in the hydrology-land-surface scheme MESH. The work mainly focuses on the investigation and development of more physically-based infiltration and interflow algorithms. The hope is to determine appropriate relationships between internal state variables (specifically bulk soil moisture) and system boundary fluxes, while simultaneously reducing the number of nonphysical or unknown model parameters. Fewer parameters lead to reduced calibration requirements for distributed hydrological models and consequently accelerate the transfer of such models to engineering practice.
Multiple approaches were taken to provide improved relationships between infiltration and lateral drainage fluxes and storage. These algorithms were tested against a specialized Richards' equation for sloping soils and through Monte Carlo simulations. The tests demonstrated reasonable accuracy and an improved representation of the hydrological processes.
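For context, the one-dimensional Richards equation governing unsaturated flow, of which the thesis uses a specialized sloping-soil variant, is commonly written as

    \frac{\partial \theta}{\partial t}
    \;=\;
    \frac{\partial}{\partial z}\!\left[ K(\theta)\left( \frac{\partial \psi}{\partial z} + 1 \right) \right],

where $\theta$ is the volumetric water content, $\psi$ the pressure head, $K(\theta)$ the unsaturated hydraulic conductivity, and $z$ the vertical coordinate (positive upward). This is the standard textbook form, shown here only to anchor the symbols; the thesis's sloping-soil formulation and boundary conditions are not reproduced.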
|
98 |
Problems on Geometric Graphs with Applications to Wireless Networks / Nunez Rodriguez, Yurai, 26 November 2009
It is hard to imagine the modern world without wireless communication. Wireless networks are, however, as challenging as they are useful. Because of their complexity, wireless networks have introduced a myriad of new problems into the field of algorithms. To tackle the new computational challenges, specific models have been devised to suit wireless networks. Most remarkably, wireless networks can be modelled as geometric graphs. This allows us to address problems in wireless networks using tools from the fields of graph theory, graph algorithms, and computational geometry. Our results have applications to routing, coverage detection, data collection, and fault recovery, among other issues in this area.
To be more specific, we investigate three major problems: a geometric approach to fault recovery; the distributed computation of Voronoi diagrams; and finding Hamiltonian cycles in hexagonal networks. Our geometric approach to fault recovery has been shown experimentally to be superior to an existing combinatorial approach. The distributed algorithm we propose for computing Voronoi diagrams of networks is the first non-trivial algorithm that has been proved to perform this task correctly; its efficiency has been demonstrated through simulations. Regarding the Hamiltonian cycle problem of hexagonal networks, we have advanced this topic by providing conditions for the existence of such a cycle, in addition to new insight into its complexity for the "solid" hexagonal grid case. Although we present satisfying solutions to several of the addressed problems, plenty is left to be done. In this regard, we identify a set of open problems that will be the subject of research for years to come. / Thesis (Ph.D., Computing) -- Queen's University, 2009
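As a small illustration of the geometric-graph abstraction mentioned above (our own sketch, not code from the thesis), a wireless network is often modelled as a unit-disk graph in which two nodes are adjacent exactly when they lie within transmission range of each other:

    import math

    def unit_disk_graph(points, radius):
        # Unit-disk model of a wireless network: nodes i and j are connected
        # when their Euclidean distance is at most the transmission radius.
        # `points` is a list of (x, y) node coordinates.
        edges = []
        for i in range(len(points)):
            for j in range(i + 1, len(points)):
                if math.dist(points[i], points[j]) <= radius:
                    edges.append((i, j))
        return edges

    # Example: unit_disk_graph([(0, 0), (1, 0), (5, 5)], radius=1.5) -> [(0, 1)],
    # since only the first two nodes are within range of each other.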
|
99 |
The synthesis of artificial neural networks using single string evolutionary techniques / MacLeod, Christopher, January 1999
The research presented in this thesis is concerned with optimising the structure of Artificial Neural Networks. These techniques are based on computer modelling of biological evolution or foetal development. They are known as Evolutionary, Genetic or Embryological methods. Specifically, Embryological techniques are used to grow Artificial Neural Network topologies. The Embryological Algorithm is an alternative to the popular Genetic Algorithm, which is widely used to achieve similar results. The algorithm grows in the sense that the network structure is added to incrementally and thus changes from a simple form to a more complex form. This is unlike the Genetic Algorithm, which causes the structure of the network to evolve in an unstructured or random way. The thesis outlines the following original work: The operation of the Embryological Algorithm is described and compared with the Genetic Algorithm. The results of an exhaustive literature search in the subject area are reported. The growth strategies which may be used to evolve Artificial Neural Network structure are listed. These growth strategies are integrated into an algorithm for network growth. Experimental results obtained from using such a system are described and there is a discussion of the applications of the approach. Consideration is given to the advantages and disadvantages of this technique and suggestions are made for future work in the area. A new learning algorithm based on Taguchi methods is also described. The report concludes that the method of incremental growth is a useful and powerful technique for defining neural network structures and is more efficient than its alternatives. Recommendations are also made with regard to the types of network to which this approach is best suited. Finally, the report contains a discussion of two important aspects of Genetic or Evolutionary techniques related to the above. These are Modular networks (and their synthesis) and the functionality of the network itself.
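A minimal sketch of the incremental-growth idea described above; the evaluate callback and the stopping rule are illustrative assumptions standing in for the thesis's actual growth strategies.

    def grow_topology(evaluate, max_hidden=32):
        # Incremental ("embryological") growth sketch: start from a minimal
        # topology and add hidden units one at a time, keeping an addition
        # only while it improves the score.  `evaluate(n_hidden)` is an
        # assumed callback that trains a network with n_hidden hidden units
        # and returns its validation error.
        hidden = 1
        best_err = evaluate(hidden)
        while hidden < max_hidden:
            err = evaluate(hidden + 1)
            if err >= best_err:
                break                      # growth stopped paying off; stop adding structure
            hidden, best_err = hidden + 1, err
        return hidden, best_err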
|
100 |
Theory and Applications of Weighted Least Squares Surface Matching for Accurate Spatial Data Registration / Pâquet, Robert Jean Marc, January 2004
This thesis discusses matching of 3D surfaces, in particular, their registration in a common coordinate system. This differs from object recognition in the sense that the surfaces are generally close to registration, sometimes so close that the mismatch cannot be detected on visual inspection. The surface matching algorithm, based on least squares theory, is therefore an estimation of the matching parameters, sometimes very small, which provides the most statistically accurate registration. High redundancy is achieved with the algorithm, as each point of one surface can potentially participate in the formation of an observation equation for the least squares adjustment. The algorithm minimises the separation between the surfaces. The surfaces are defined by sets of points represented by their Cartesian coordinates in 3D space, without restrictions on the mode of sampling used in the capture of the data. The registration is executed without control points. Modern non-thematic sampling methods, for instance airborne laser scanning, can benefit from such an algorithm. Other applications include processes where permanent control markers cannot be used, for example, medical applications or coastal erosion. Surface matching has been used previously by a small number of people. The particular interest of this thesis, however, has been to test the accuracy and other characteristics of the matching, especially when weighting is used with the surface separations. This thesis presents and compares several weighting techniques including one technique based on the covariance function. In addition, a statistical method to model matching accuracy as a function of the density of the control surface is formulated. The method is useful to ascertain the interpolation component of the matching error. The remaining component of the error can be deducted and analysed according to the project under consideration. Examples of projects might be filtering in data fusion assessment, or volume displacement in landslide analysis. The theory is developed using artificial data. This helps to isolate and analyse in turn the various characteristics of the surface matching. The thesis is then illustrated with examples involving real data sampled in Newcastle, NSW, Australia, using methods such as ALS, photogrammetry and GPS. / PhD Doctorate
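In standard Gauss-Markov notation (ours, not necessarily the thesis's), each iteration of such a weighted least squares surface matching estimates the transformation-parameter corrections as

    \hat{\mathbf{x}}
    \;=\;
    \left(A^{\mathsf T} W A\right)^{-1} A^{\mathsf T} W\,\mathbf{d}
    \;=\;
    \arg\min_{\mathbf{x}} \; (\mathbf{d}-A\mathbf{x})^{\mathsf T} W (\mathbf{d}-A\mathbf{x}),

where $\mathbf{d}$ collects the current separations between the surfaces at the sample points, $A$ is the design matrix of partial derivatives of those separations with respect to the transformation parameters, and $W = \operatorname{diag}(w_i)$ holds the weights whose choice (for instance, one based on a covariance function) is what the thesis compares. The estimate is applied iteratively because the matching problem is nonlinear.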
|