131

Fractal tilings in Euclidean space.

January 2008 (has links)
Liu, Xin. Thesis (M.Phil.)--Chinese University of Hong Kong, 2008. Includes bibliographical references (leaves 61-63). Abstracts in English and Chinese.
Contents: Abstract (p.1); Acknowledgments (p.4); Chapter 0, Introduction (p.5); Chapter 1, Basics of self-affine tiles (p.8): 1.1 Self-affine sets (p.8), 1.2 Self-affine tiles (p.11), 1.3 Structure of tiling sets (p.15), 1.4 Integral self-affine tiles (p.20), 1.4.1 Lebesgue measure of integral self-affine tiles (p.22), 1.4.2 Classification of digit sets (p.24); Chapter 2, Connectedness of self-affine tiles (p.28): 2.1 Connectedness (p.28), 2.2 Disk-likeness (p.33); Chapter 3, Tiles with rotations and reflections (p.39): 3.1 Tiles (p.39), 3.2 Integral tiles (p.54); Bibliography (p.60).
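For orientation, not quoted from the thesis: the basic object in this area is usually defined as follows (a standard textbook definition of a self-affine tile).

```latex
% Standard definition (context added here, not taken from the thesis).
% Let $A \in M_n(\mathbb{Z})$ be expanding (every eigenvalue exceeds 1 in
% modulus) and let $\mathcal{D} \subset \mathbb{Z}^n$ be a digit set with
% $|\mathcal{D}| = |\det A|$. The attractor $T$ is the unique compact set
% satisfying
\[
  A(T) = \bigcup_{d \in \mathcal{D}} (T + d),
\]
% and $T$ is called a self-affine tile when it has positive Lebesgue
% measure; it then tiles $\mathbb{R}^n$ by translates along a tiling set.
```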
132

Object localization using deformable templates

Spiller, Jonathan Michael 12 March 2008 (has links)
Object localization refers to the detection, matching and segmentation of objects in images. The localization model presented in this work relies on deformable templates to match objects based on shape alone. The shape structure is captured by a prototype template consisting of hand-drawn edges and contours representing the object to be localized. A multistage, multiresolution algorithm is utilized to reduce the computational intensity of the search. The first stage reduces the physical search space dimensions using correlation to determine the regions of interest where a match is likely to occur. The second stage finds approximate matches between the template and target image at progressively finer resolutions, by attracting the template to salient image features using Edge Potential Fields. The third stage entails the use of evolutionary optimization to determine control point placement for a Local Weighted Mean warp, which deforms the template to fit the object boundaries. Results are presented for a number of applications, showing the successful localization of various objects. The algorithm's invariance to rotation, scale, translation and moderate shape variation of the target objects is clearly illustrated.
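As a rough illustration of the Edge Potential Field idea used in the second stage (a minimal sketch under assumed conventions, not the thesis implementation):

```python
# Hedged sketch (not the thesis code): an edge potential field built from a
# binary edge map. Edges attract the template; the field decays with
# distance from the nearest edge pixel.
import numpy as np
from scipy.ndimage import distance_transform_edt

def edge_potential_field(edges: np.ndarray, decay: float = 10.0) -> np.ndarray:
    """edges: boolean array, True at edge pixels. Returns a field in (0, 1],
    maximal on edges and falling off with Euclidean distance."""
    # Distance from every pixel to its nearest edge pixel.
    dist = distance_transform_edt(~edges)
    return np.exp(-dist / decay)

# Toy usage: a single vertical edge.
edges = np.zeros((64, 64), dtype=bool)
edges[:, 32] = True
field = edge_potential_field(edges)
assert field[:, 32].min() == 1.0  # the potential is maximal on the edge itself
```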
133

Different scales and integration of data in reservoir simulation

Hartanto, Lina January 2004 (has links)
The term upscaling, and the determination of pseudo curves or effective parameters used on a coarse-scale simulation grid, relate to the complex and extensive problems associated with reservoir studies. The primary strategy focuses on having a good physical and practical understanding of the particular processes in question, and an appreciation of reservoir model sensitivities, so that the reservoir simulation models can be built optimally. By concentrating on modelling and upscaling the gas injection process for Enhanced Oil Recovery (EOR), which includes Interfacial Tension (IFT) and the miscibility effect, a new, effective and efficient upscaling algorithm will be investigated and determined using several upscaled parameters. The sensitivities of these coarse-scale parameters (i.e. porosity, absolute and relative permeability, and capillary pressure) will also be studied through history matching of an existing field.
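For readers new to upscaling, the simplest single-phase case (a textbook illustration, not the algorithm investigated in the thesis) replaces a stack of fine-scale layers with one effective permeability:

```python
# Hedged illustration (not the thesis algorithm): the classical single-phase
# permeability averages. For flow along layers the effective permeability is
# the arithmetic mean; for flow across layers it is the harmonic mean.
import numpy as np

def upscale_layered(perms: np.ndarray) -> tuple[float, float]:
    """perms: fine-scale permeabilities of equally thick layers (e.g. in mD)."""
    k_parallel = perms.mean()                     # flow parallel to layering
    k_series = len(perms) / np.sum(1.0 / perms)   # flow perpendicular to layering
    return k_parallel, k_series

fine_scale = np.array([100.0, 10.0, 500.0, 50.0])  # mD
k_par, k_ser = upscale_layered(fine_scale)
print(f"parallel: {k_par:.1f} mD, series: {k_ser:.1f} mD")  # series <= parallel
```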
134

Best-first Decision Tree Learning

Shi, Haijian January 2007 (has links)
In best-first top-down induction of decision trees, the best split is added at each step (e.g. the split that maximally reduces the Gini index), in contrast to the standard depth-first expansion of a tree. The fully grown tree is the same; only the order in which it is built differs. The objective of this project is to investigate whether an appropriate tree size can be determined on practical datasets by combining best-first decision tree growth with cross-validation-based selection of the number of expansions that are performed. Pre-pruning, post-pruning, and CART-pruning can all be implemented in this framework and compared.
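A minimal sketch of the best-first growth mechanism (assumed structure, not the thesis implementation): candidate splits sit in a priority queue ordered by impurity reduction, and the single best leaf is expanded at each step.

```python
# Hedged sketch of best-first tree growth. Split search is a brute-force
# scan over unique thresholds, kept deliberately simple.
import heapq
import numpy as np

def gini(y):
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p * p)

def best_split(X, y):
    """Return (impurity_reduction, feature, threshold) for the best split."""
    best = (0.0, None, None)
    for f in range(X.shape[1]):
        for t in np.unique(X[:, f])[:-1]:
            left, right = y[X[:, f] <= t], y[X[:, f] > t]
            gain = gini(y) - (len(left) * gini(left) + len(right) * gini(right)) / len(y)
            if gain > best[0]:
                best = (gain, f, t)
    return best

def best_first_expansions(X, y, n_expansions):
    """Expand the n_expansions best leaves; return the chosen splits in order."""
    heap, splits, counter = [], [], 0
    gain, f, t = best_split(X, y)
    heapq.heappush(heap, (-gain, counter, f, t, X, y))  # max-heap via negation
    while heap and len(splits) < n_expansions:
        neg_gain, _, f, t, Xn, yn = heapq.heappop(heap)
        if f is None or neg_gain == 0:
            continue  # leaf is pure or unsplittable
        splits.append((f, t, -neg_gain))
        for mask in (Xn[:, f] <= t, Xn[:, f] > t):
            Xc, yc = Xn[mask], yn[mask]
            if len(yc) > 1:
                g, fc, tc = best_split(Xc, yc)
                counter += 1
                heapq.heappush(heap, (-g, counter, fc, tc, Xc, yc))
    return splits

X = np.array([[1.0, 0.0], [2.0, 1.0], [3.0, 0.0], [4.0, 1.0]])
y = np.array([0, 0, 1, 1])
print(best_first_expansions(X, y, 2))  # only one useful split exists here
```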
135

Algorithm for Handoff in VDL mode 4

Andersson, Rickard January 2010 (has links)
VDL mode 4 is a digital data link operating in the VHF band, used mainly in the aviation industry. VDL4 can, for example, provide positioning data and speed information for aircraft or vehicles equipped with a VDL4 transponder. A connection between the ground system and the airborne system is called a point-to-point connection, which can be used for various applications. This data link needs to be transferred between ground stations during flight in order to maintain the connection; this transfer is called handoff.

The handoff process needs to be quick enough not to drop the link, while at the same time a low rate of handoffs is desirable. The data link is regarded as a narrow resource, and link management data for handoff is considered overhead.

This thesis studies how to make the handoff procedure optimal with respect to these aspects. Previous research on handoff algorithms and models of the VHF channel is reviewed, and standardized parameters and procedures in VDL4 are explored in order to find an optimal solution for the handoff procedure in VDL4.

The analysis concludes with a suggested algorithm based on an adaptive hysteresis that uses the signal quality and positioning data provided in VDL4. Standardized parameters that could be useful in the handoff procedure are commented on, since the VDL4 standards are still under development.
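A toy sketch of an adaptive-hysteresis handoff decision of the kind suggested (the parameter names and the distance-based adaptation rule below are assumptions for illustration, not the thesis algorithm):

```python
# Hedged sketch: a candidate ground station must beat the current one by a
# hysteresis margin; the margin shrinks as the aircraft moves away from its
# current station, so handoffs trigger earlier when the link is likely to
# degrade, while nearby ping-pong handoffs are suppressed.
def should_handoff(curr_quality_db: float, cand_quality_db: float,
                   dist_to_curr_km: float, base_hysteresis_db: float = 6.0,
                   shrink_db_per_km: float = 0.05) -> bool:
    margin = max(0.0, base_hysteresis_db - shrink_db_per_km * dist_to_curr_km)
    return cand_quality_db > curr_quality_db + margin

# Near the current station a candidate must be clearly better...
assert not should_handoff(-80.0, -76.0, dist_to_curr_km=10.0)
# ...while far away a slight advantage already triggers the handoff.
assert should_handoff(-80.0, -79.0, dist_to_curr_km=120.0)
```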
136

Novel cost allocation framework for natural gas processes: methodology and application to plant economic optimization

Jang, Won-Hyouk 30 September 2004 (has links)
Natural gas plants can have multiple owners for raw natural gas streams and processing facilities as well as for multiple products. Therefore, a proper cost allocation method is necessary for taxation of the profits from natural gas and crude oil, as well as for cost sharing among gas producers. However, cost allocation methods most often used in accounting, such as the sales value method and the physical units method, may produce unacceptable or even illogical results when applied to natural gas processes. Wright and Hall (1998) proposed a new approach called the design benefit method (DBM), based upon engineering principles, and Wright et al. (2001) illustrated the potential of the DBM for reliable cost allocation by applying it to a natural gas process. In the present research, a rigorous modeling technique for the DBM has been developed based upon a Taylor series approximation. We have also investigated a cost allocation framework that determines the virtual flows, models the equipment, and evaluates cost allocation for applying the DBM to other scenarios, particularly those found in the petroleum and gas industries. By implementing these individual procedures on a computer, the proposed framework can easily be developed into a software package, and its application can be extended to large-scale processes. To implement the proposed cost allocation framework, we have investigated an optimization methodology specifically geared toward economic optimization problems encountered in natural gas plants. The optimization framework can provide co-producers who share raw natural gas streams and processing plants not only with optimal operating conditions but also with valuable information for evaluating their contracts; this information can serve as a reasonable basis for negotiating new contracts. For the optimization framework, we have developed a genetic-quadratic search algorithm (GQSA), consisting of a general genetic algorithm and a quadratic search, which is a suitable technique for solving optimization problems including process flowsheet optimization. The GQSA inherits the advantages of both genetic algorithms and quadratic search techniques, and it can find the global optimum with high probability for discontinuous as well as non-convex optimization problems much faster than general genetic algorithms.
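A minimal sketch of the hybrid idea behind a genetic-quadratic search (structure and names are assumptions; the actual GQSA is specified in the thesis): a plain GA explores globally, then the best individual is polished by fitting a parabola through three points along each coordinate.

```python
# Hedged sketch of a "genetic + quadratic" hybrid, minimizing f.
import random

def quadratic_refine(f, x, step=0.1):
    """One pass of coordinate-wise quadratic interpolation around x."""
    x = list(x)
    for i in range(len(x)):
        a, b, c = x[i] - step, x[i], x[i] + step
        fa, fb, fc = (f(x[:i] + [v] + x[i+1:]) for v in (a, b, c))
        denom = fa - 2 * fb + fc
        if denom > 1e-12:  # positive curvature: the parabola has a minimum
            x[i] = b + step * (fa - fc) / (2 * denom)
    return x

def ga_quadratic_search(f, dim, pop_size=30, gens=50, bounds=(-5.0, 5.0)):
    lo, hi = bounds
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=f)
        parents = pop[:pop_size // 2]            # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            p, q = random.sample(parents, 2)
            # blend crossover plus Gaussian mutation
            children.append([(a + b) / 2 + random.gauss(0, 0.1)
                             for a, b in zip(p, q)])
        pop = parents + children
    best = min(pop, key=f)
    return quadratic_refine(f, best)             # local quadratic polish

sphere = lambda x: sum(v * v for v in x)
print(ga_quadratic_search(sphere, dim=3))        # values near 0 expected
```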
137

Sim-paramecium Evolution Algorithm based on Enhanced Livability and Competition

Sie, Kun-Sian 16 August 2007 (has links)
This thesis proposes an algorithm to enhance the convergence speed of the genetic algorithm by modifying the function flow of a simple GA. Three additional operators (asexual reproduction, competition, and livability) are added before the survival operation. After adding these three operators to the genetic algorithm, the convergence speed increases. Experiments indicate that simulations with the proposed algorithm achieve a 47% improvement in convergence speed on the traveling salesman problem, and a 10% improvement on the graph coloring problem. Moreover, since these operators are additions to the original GA, the algorithm can be further improved by enhancing the standard operators, such as selection, crossover, and mutation.
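One plausible reading of where the three operators slot into a simple GA's generation loop (a hedged sketch; the operators' exact definitions are the thesis's own):

```python
# Hedged sketch: asexual reproduction clones and mutates good individuals,
# competition pits random pairs against each other, and a livability test
# culls the weakest, all before the usual survival selection.
import random

def evolve(population, fitness, mutate, pop_size):
    # Standard GA steps (selection/crossover/mutation) are assumed to have
    # already produced `population`; below are the added operators.
    scored = sorted(population, key=fitness, reverse=True)

    # 1. Asexual reproduction: the elite clone themselves with mutation.
    clones = [mutate(ind) for ind in scored[: pop_size // 10]]
    pool = scored + clones

    # 2. Competition: random pairs fight; the loser is discarded.
    random.shuffle(pool)
    survivors = [max(pool[i], pool[i + 1], key=fitness)
                 for i in range(0, len(pool) - 1, 2)]
    if len(pool) % 2:
        survivors.append(pool[-1])  # an unpaired individual survives by default

    # 3. Livability: individuals far below average fitness are removed.
    avg = sum(map(fitness, survivors)) / len(survivors)
    livable = [ind for ind in survivors if fitness(ind) >= 0.5 * avg]

    # Survival operation: keep the best pop_size of what remains.
    return sorted(livable, key=fitness, reverse=True)[:pop_size]

# Toy usage: maximize the number of 1s in a bit vector.
pop = [[random.randint(0, 1) for _ in range(8)] for _ in range(20)]
flip = lambda ind: [b ^ (random.random() < 0.1) for b in ind]
next_gen = evolve(pop, fitness=sum, mutate=flip, pop_size=20)
```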
138

Table Driven Algorithm for Joint Sparse Form

Chen, Bing-hong 25 August 2007 (has links)
In cryptography, computing a^x b^y mod n is one of the most important and most time-consuming calculations. The problem can be solved by the classical binary method, and later research builds on this basis to increase computational efficiency. Binary signed-digit recoding, the Sparse Form, the DJM recoding method, and the Joint Sparse Form can be used to decrease the number of multiplications by aligning non-zero digits into the same columns. Another approach is to pre-compute and store partial results, decreasing the number of computations via bit shifting. The Joint Sparse Form recoding method is not a table-driven algorithm for converting source representations into joint sparse form. In this thesis, we first propose a table-driven algorithm for the joint sparse form that simplifies the recoding concept. The algorithm can be expressed as a finite state machine describing the recoding procedure. Using this finite state machine, we show that the average joint Hamming weight of the joint sparse form is 0.5n as n approaches infinity. Finally, using a similar method, we show that the average joint Hamming weights of the SS1 and DS1 methods are 0.469n and 0.438n, respectively.
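For reference, the standard (non-table-driven) joint sparse form recoding due to Solinas, which the proposed table-driven algorithm reproduces; this is the known baseline algorithm, not the thesis's new method:

```python
def jsf(k0: int, k1: int):
    """Joint sparse form of non-negative (k0, k1): two digit lists, least
    significant digit first, digits in {-1, 0, 1}."""
    d0 = d1 = 0
    u0s, u1s = [], []
    while k0 + d0 > 0 or k1 + d1 > 0:
        l0, l1 = k0 + d0, k1 + d1
        if l0 % 2 == 0:
            u0 = 0
        else:
            u0 = 1 if l0 % 4 == 1 else -1
            if l0 % 8 in (3, 5) and l1 % 4 == 2:
                u0 = -u0  # adjust so the next column can be jointly zero
        if l1 % 2 == 0:
            u1 = 0
        else:
            u1 = 1 if l1 % 4 == 1 else -1
            if l1 % 8 in (3, 5) and l0 % 4 == 2:
                u1 = -u1
        if 2 * d0 == 1 + u0:
            d0 = 1 - d0
        if 2 * d1 == 1 + u1:
            d1 = 1 - d1
        k0 //= 2
        k1 //= 2
        u0s.append(u0)
        u1s.append(u1)
    return u0s, u1s

u0s, u1s = jsf(53, 102)
weight = sum(1 for a, b in zip(u0s, u1s) if a or b)  # joint Hamming weight
# Reconstructing from signed digits recovers the inputs.
assert sum(d * (1 << i) for i, d in enumerate(u0s)) == 53
assert sum(d * (1 << i) for i, d in enumerate(u1s)) == 102
```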
139

Algorithms for the Traffic Light Setting Problem on the Graph Model

Chen, Shiuan-wen 28 August 2007 (has links)
As the number of vehicles increases rapidly, traffic congestion has become a serious problem in cities. Over the past years, a considerable number of studies have addressed traffic light setting. The traffic light setting problem is to investigate how to set the given traffic lights such that the total waiting time of vehicles on the roads is minimized. In this thesis, we use a graph model to represent the traffic network; on this model, some characteristics of the setting problem can be presented and analyzed. We first devise a branch and bound algorithm for obtaining the optimal solution of the traffic light setting problem. In addition, the genetic algorithm (GA), particle swarm optimization (PSO) and the ant colony optimization (ACO) algorithm are adopted to obtain near-optimal solutions. We then extend the model with the assumption that each vehicle can change its direction. By comparing the results of these algorithms, we study their impact on the traffic light setting problem. In our experiments, we also transform the map of Kaohsiung city into our graph model and test each algorithm on this graph.
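A toy version of the objective such algorithms minimize (the model below is an assumption for illustration, not the thesis's graph formulation):

```python
# Hedged sketch: each intersection's light has a phase offset; a vehicle
# arriving at time t waits until the next green window. A GA/PSO/ACO
# individual is simply the vector of offsets.
def waiting_time(arrival: float, offset: float, green: float, cycle: float) -> float:
    t = (arrival - offset) % cycle
    return 0.0 if t < green else cycle - t  # wait until the cycle wraps to green

def total_waiting(offsets, arrivals, green=30.0, cycle=60.0):
    """offsets[i]: phase of intersection i; arrivals: (intersection, time) pairs."""
    return sum(waiting_time(t, offsets[i], green, cycle) for i, t in arrivals)

arrivals = [(0, 12.0), (1, 45.0), (0, 71.0)]
print(total_waiting([0.0, 15.0], arrivals))  # the fitness an optimizer minimizes
```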
140

Modified Niched Pareto Multi-objective Genetic Algorithm for Construction Scheduling Optimization

Kim, Kyungki August 2011 (has links)
This research proposes a Genetic Algorithm based decision support model that provides decision makers with a quantitative basis for multi-criteria decision making in construction scheduling. In an attempt to overcome the drawbacks of similar efforts, the proposed multi-objective optimization model provides insight into construction scheduling problems. In order to generate optimal solutions in terms of three important criteria (project duration, cost, and variation in resource use), a new data structure is proposed to define a solution to the problem, and a general Niched Pareto Genetic Algorithm (NPGA) is modified to facilitate the optimization procedure. The main features of the proposed Multi-Objective Genetic Algorithm (MOGA) are:
- a fitness sharing technique that maintains diversity of solutions;
- a non-dominated sorting method that assigns a rank to each individual in the population to support tournament selection;
- an external archive that prevents loss of optimal or near-optimal solutions to the random effects of the genetic operators;
- a space normalization method that avoids scaling deficiencies.
The developed optimization model was applied to two case studies. The results indicate that a wider range of solutions can be obtained with the new approach than with previous models: a greater area of the decision space is considered, and tradeoffs between all the objectives are found. In addition, various resource use options are found and visualized. Most importantly, the simultaneous optimization model provides better insight into what is obtainable by each option. A limitation of this research is that schedules are created under the assumption of unlimited resource availability; such schedules are often infeasible in real-world situations, given that resources are commonly constrained and not readily available. As such, a discussion is provided regarding future research on the data structures needed to perform scheduling under resource constraints.
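For illustration, the generic Pareto machinery behind NPGA-style ranking (a standard technique, not the thesis code):

```python
# Hedged sketch: Pareto dominance and a simple non-dominated ranking over
# (duration, cost, resource-variation) objective vectors, all minimized.
def dominates(a, b):
    """a dominates b if a is no worse in every objective and better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_rank(points):
    """Rank 0 is the Pareto front; rank k is the front after removing ranks < k."""
    ranks, remaining, rank = {}, set(range(len(points))), 0
    while remaining:
        front = {i for i in remaining
                 if not any(dominates(points[j], points[i]) for j in remaining)}
        for i in front:
            ranks[i] = rank
        remaining -= front
        rank += 1
    return ranks

schedules = [(120, 1.0e6, 0.30), (100, 1.2e6, 0.25), (100, 1.0e6, 0.40)]
print(non_dominated_rank(schedules))  # all three are mutually non-dominated here
```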
