171 |
Surface reconstruction from three dimensional range data. Myers, Andrew. January 2005 (has links)
This thesis looks at the problem of reconstructing a single surface representation from multiple range images acquired from a terrestrial laser scanner. A solution to this problem is important to industries such as mining, where accurate spatial measurement is required for mapping and volumetric calculations. Laser scanners for 3D measurement are now commercially available, and software for deriving useful information from the data these devices generate is essential. A reconstruction technique based on an implicit surface representation of the range images and a polygonisation algorithm called marching triangles has been implemented in software and its performance investigated. This work improves upon existing techniques in that it takes into account the particular differences of terrestrial range data as compared with data from small-scale laser scanners. The implementation is robust with respect to noisy data and environments and requires minimal user input. A new approach to 3D spatial indexing is also developed to allow rapid evaluation of the true closest point to a surface, which is the basis of the signed distance function implicit surface representation. A new technique for locating step discontinuities in the range image is presented, which caters for the varying sampling densities of terrestrial range images. The algorithm is demonstrated using representative range images acquired for surface erosion monitoring and for underground mine surveying. The results indicate that this reconstruction technique represents an improvement over current techniques for this type of range data. / http://proxy.library.adelaide.edu.au/login?url=http://library.adelaide.edu.au/cgi-bin/Pwebrecon.cgi?BBID=1169106 / Thesis (Ph.D.) -- University of Adelaide, School of Computer Science, 2005
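To illustrate the signed-distance-function representation described in this abstract, here is a minimal Python sketch that evaluates an implicit surface from oriented range samples. It uses SciPy's k-d tree as a stand-in for the thesis's custom 3D spatial index; the class name and data layout are illustrative assumptions, not taken from the thesis.

    import numpy as np
    from scipy.spatial import cKDTree

    class SignedDistanceField:
        """Implicit surface from oriented range samples:
        f(x) = distance to the closest sample, signed by that sample's
        normal (positive on the outward side, negative inside)."""

        def __init__(self, points, normals):
            self.points = np.asarray(points, dtype=float)
            self.normals = np.asarray(normals, dtype=float)
            self.tree = cKDTree(self.points)  # spatial index, built once

        def __call__(self, x):
            dist, idx = self.tree.query(x)    # true closest sample point
            outward = np.dot(np.asarray(x) - self.points[idx],
                             self.normals[idx])
            return dist if outward >= 0 else -dist

A polygoniser such as marching triangles then tracks the zero level set of f, evaluating it repeatedly near the advancing mesh front, which is why a fast closest-point query is critical.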
|
173 |
Polynomial invariants of the Euclidean group action on multiple screws : a thesis submitted to the Victoria University of Wellington in fulfilment of the requirements for the degree of Master of Science in Mathematics / Crook, Deborah. January 2009 (has links)
Thesis (M.Sc.)--Victoria University of Wellington, 2009. / Includes bibliographical references.
|
174 |
[en] NESTING OF GENERAL PLANE FIGURES / [pt] ENCAIXE GERAL DE FIGURAS PLANAS. Dias, Altamir. 28 June 2012 (has links)
[pt] The increasingly widespread use of heuristic methods has contributed to the automation and optimisation of numerous complex industrial processes.
One process that has benefited is garment cutting in the clothing industry, where patterns must be nested so as to minimise fabric waste.
This work aims to contribute to the general problem of nesting irregular plane figures, seeking to solve it through heuristic rules implemented in a computational algorithm.
As its main point, the work presents a systematic, tree-structured construction of nesting alternatives, facilitating the search for a high-yield nesting solution among the practically infinite possibilities.
The nesting algorithm is made viable through two pattern-placement techniques that prevent overlap; the advantages of the two techniques are combined to the algorithm's benefit.
The conclusions discuss the difficulties encountered and outline new directions for investigation. / [en] The increasing use of heuristic methods has advanced the frontier of application of optimization and automation techniques in complex industrial processes.
One emerging application of these methods is the pattern nesting process in the garment industry. The aim is to nest the patterns in such a way as to minimize the waste of fabric.
The present work aims to contribute to the optimal nesting of general plane figures. The methods discussed employ heuristic rules implemented through computational algorithms.
The focal point of the work is a methodology for obtaining a sequence of partial and complete nestings from which the best one can be selected. The computational algorithm embodies two distinct methods for the placement of the figures on the nesting plane while avoiding superposition. Both methods are used in such a way that the resulting algorithm profits from their advantages.
Present difficulties and future trends are outlined in the conclusions.
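As a concrete illustration of collision-free placement, the Python sketch below implements a classic grid-based bottom-left heuristic using Shapely. It is a simple baseline for the nesting problem, not the tree-of-alternatives algorithm developed in this thesis, and the sheet dimensions and step size are illustrative.

    from shapely.affinity import translate
    from shapely.geometry import Polygon

    def bottom_left_nest(pieces, sheet_width, sheet_height, step=1.0):
        """Place each piece at the lowest, then leftmost, grid position
        on the sheet where it does not overlap any placed piece."""
        placed = []
        for piece in pieces:
            minx, miny, maxx, maxy = piece.bounds
            w, h = maxx - minx, maxy - miny
            y, done = 0.0, False
            while not done and y + h <= sheet_height:
                x = 0.0
                while not done and x + w <= sheet_width:
                    candidate = translate(piece, xoff=x - minx, yoff=y - miny)
                    # touching edges is allowed; only real overlap is rejected
                    if all(candidate.intersection(p).area < 1e-9 for p in placed):
                        placed.append(candidate)
                        done = True
                    x += step
                y += step
        return placed

    # Example: nest two triangles and a square on a 10 x 10 sheet.
    pieces = [Polygon([(0, 0), (4, 0), (0, 3)]),
              Polygon([(0, 0), (3, 0), (3, 3), (0, 3)]),
              Polygon([(0, 0), (4, 0), (4, 2)])]
    layout = bottom_left_nest(pieces, 10, 10)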
|
175 |
Algorithmic sovereignty. Roio, Denis. January 2018 (has links)
This thesis describes a practice-based research journey across various projects dealing with the design of algorithms, to highlight the governance implications of the design choices made in them. The research provides answers and documents methodologies to address the urgent need for more awareness of decisions made by algorithms about the social and economic context in which we live. Algorithms constitute a foundational basis across different fields of study: policy making, governance, art and technology. The ability to understand what is inscribed in such algorithms, what the consequences of their execution are, and what agency is left for the living world is crucial. Yet there is a lack of interdisciplinary and practice-based literature, while specialised treatises are too narrow to relate to the broader context in which algorithms are enacted. This thesis advances the awareness of algorithms and related aspects of sovereignty through a series of projects documented as participatory action research. One of the projects described, Devuan, led to the realisation of a new, worldwide-renowned operating system. Another project, "sup", consists of a minimalist approach to mission-critical software and literate programming to enhance the security and reliability of applications. Another project, D-CENT, consisted of a three-year path of cutting-edge research funded by the EU Commission on the emerging dynamics of participatory democracy connected to the technologies adopted by citizen organizations. My original contribution to knowledge lies in the role that the research underpinning these projects plays in gaining a better understanding of the sociopolitical aspects connected to the design and management of algorithms. It suggests that we can improve the design and regulation of future public, private and common spaces, which are increasingly governed by algorithms, by understanding not only the economic and legal implications but also the connections between design choices and the sociopolitical context of their development and execution.
|
176 |
Ambulance Optimization Allocation. Nasiri, Faranak. 01 August 2014 (has links)
The facility location problem refers to placing facilities (mostly vehicles) in appropriate locations to yield the best coverage with respect to other important factors specific to the problem. For instance, for a fire station some of the important factors are traffic time, the distribution of stations, time of service and so on. Furthermore, budget limitations, time constraints and the great importance of the operation make optimal allocation very complex. In the past few years, several studies in this area have aimed to help managers by designing effective algorithms for allocating facilities in the best way possible. Most early proposed models focused on static and deterministic methods. In static models, once a facility is assigned to a location, it is never relocated. Although these methods can be used in some simple settings, many real-world factors make a static model of limited application: demands may change over time, or facilities may be dropped or added. In such cases a more flexible model is desirable, so dynamic models have been proposed, in which facilities can be located and relocated as the situation requires. More recently, dynamic models have become more popular, but many aspects of facility allocation problems remain challenging and require more complex solutions. The facility location problem becomes significantly more relevant when it involves hospitals and emergency responders, where even one second of improvement in response time matters. For this reason, we selected the ambulance allocation problem as a case study for this problem domain. Much research has been done on ambulance allocation; we review some of these models and their advantages and disadvantages. One of the best models in this area was introduced by Rajagopalan. In this work, his model is analyzed and its major drawback is addressed by applying some modifications to its methodology. A genetic algorithm is utilized in this study as a heuristic method to solve the allocation model.
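To make the heuristic concrete, here is a minimal Python sketch of a genetic algorithm for a maximal-covering allocation, in the spirit of the approach the abstract describes. The encoding, operators and parameters are illustrative assumptions, not the thesis's (or Rajagopalan's) exact model; it assumes p < n candidate sites.

    import random

    def ga_allocate(cover, demand, p, pop_size=60, generations=200, mut=0.1):
        """Choose p of n candidate sites to maximise covered demand.
        cover[i][j] is True if site j covers demand point i;
        demand[i] is the weight of demand point i."""
        n = len(cover[0])

        def fitness(sites):
            return sum(w for i, w in enumerate(demand)
                       if any(cover[i][j] for j in sites))

        def crossover(a, b):
            pool = list(set(a) | set(b))      # inherit sites from both parents
            return tuple(sorted(random.sample(pool, p)))

        def mutate(sites):
            if random.random() < mut:         # swap one site for an unused one
                sites = list(sites)
                sites[random.randrange(p)] = random.choice(
                    [j for j in range(n) if j not in sites])
                return tuple(sorted(sites))
            return sites

        pop = [tuple(sorted(random.sample(range(n), p)))
               for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)
            elite = pop[:pop_size // 2]       # keep the fitter half
            pop = elite + [mutate(crossover(*random.sample(elite, 2)))
                           for _ in range(pop_size - len(elite))]
        return max(pop, key=fitness)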
|
177 |
Maximum Clique Search in Circulant k-Hypergraphs. Plant, Lachlan. 23 November 2018 (has links)
The search for max-cliques in graphs is a well-established NP-complete problem in graph theory and algorithm design, with many algorithms designed to exploit the internal structure of specific types of graphs. We study the extension of the max-clique search problem from graphs to hypergraphs with constant edge size k, and adapt existing graph algorithms to work on k-hypergraphs. In particular, we are interested in the generalization of circulant graphs to circulant k-hypergraphs, and provide a definition of this type of hypergraph. We design and implement a new algorithm to perform max-clique searches on circulant k-hypergraphs. This algorithm combines ideas from a Russian doll algorithm for max-cliques in graphs (Ostergard 2002) with an algorithm based on necklaces for a class of circulant k-hypergraphs (Tzanakis, Moura, Stevens and Panario 2016).
We examine the performance of our new algorithm against a set of adapted algorithms (backtracking and Russian doll search for general k-hypergraphs, and necklace-based search for circulant k-hypergraphs) in a set of benchmarking experiments across various densities and edge sizes. This study reveals that the new algorithm outperforms the others when the edge density of the hypergraph is high, and that the pure necklace-based algorithm is best at low densities. Finally, we use our new algorithm to perform an exhaustive search, yielding covering arrays, on circulant 4-hypergraphs constructed from linear feedback shift register sequences over finite fields of order q. The search is completed for 2 <= q <= 5, which settles the open case q = 5 left by Tzanakis et al.
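For reference, here is a minimal Python sketch of the Russian doll idea on ordinary graphs (the k = 2 case that the thesis generalises): subproblems over vertex suffixes are solved from smallest to largest, and each solved bound c[i] prunes the next search. The adjacency-set representation and variable names are illustrative, not taken from the thesis.

    def max_clique(adj):
        """Russian doll max-clique search in the style of Ostergard (2002).
        adj[v] is the set of neighbours of vertex v, for vertices 0..n-1."""
        n = len(adj)
        c = [0] * n       # c[i] = max clique size using only vertices i..n-1
        best = 0
        found = False

        def expand(cand, size):
            nonlocal best, found
            if not cand:
                if size > best:
                    best, found = size, True   # improvement: unwind to the top
                return
            while cand and not found:
                if size + len(cand) <= best:   # too few candidates left
                    return
                v = min(cand)
                if size + c[v] <= best:        # Russian doll pruning bound
                    return
                cand = cand - {v}
                expand(cand & adj[v], size + 1)

        for i in range(n - 1, -1, -1):         # smallest doll first
            found = False
            expand(adj[i] & set(range(i + 1, n)), 1)
            c[i] = best
        return best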
|
178 |
Techniques of design optimisation for algorithms implemented in software. Hopson, Benjamin Thomas Ken. January 2016 (links)
The overarching objective of this thesis was to develop tools for parallelising, optimising, and implementing algorithms on parallel architectures, in particular General Purpose Graphics Processors (GPGPUs). Two projects were chosen from different application areas in which GPGPUs are used: a defence application involving image compression, and a modelling application in bioinformatics (computational immunology). Each project had its own specific objectives, as well as supporting the overall research goal. The defence / image compression project was carried out in collaboration with the Jet Propulsion Laboratories. The specific questions were: to what extent an algorithm designed for bit-serial hardware implementation of the lossless compression of hyperspectral images on board unmanned aerial vehicles (UAVs) could be parallelised; whether GPGPUs could be used to implement that algorithm; and whether a software implementation, with or without GPGPU acceleration, could match the throughput of a dedicated hardware (FPGA) implementation. The dependencies within the algorithm were analysed, and the algorithm parallelised. The algorithm was implemented in software for GPGPU and optimised. During the optimisation process, profiling revealed less than optimal device utilisation, but no further optimisations resulted in an improvement in speed: the design had hit a local maximum of performance. Analysis of the arithmetic intensity and data flow exposed flaws in kernel occupancy, the standard metric used for GPU optimisation. Redesigning the implementation with revised criteria (fused kernels, lower occupancy, and greater data locality) led to a new implementation with 10x higher throughput. GPGPUs were shown to be viable for on-board implementation of the CCSDS lossless hyperspectral image compression algorithm, exceeding the performance of the hardware reference implementation and providing sufficient throughput for the next generation of image sensors as well. The second project was carried out in collaboration with biologists at the University of Arizona and involved modelling a complex biological system: VDJ recombination, involved in the formation of T-cell receptors (TCRs). Generation of immune receptors (T-cell receptors and antibodies) by VDJ recombination is an enormously complex process, which can theoretically synthesize greater than 10^18 variants. Although originally thought to be random, the underlying mechanisms clearly have a non-random nature that preferentially creates a small subset of immune receptors in many individuals. Understanding this bias is a longstanding problem in the field of immunology. Modelling the process of VDJ recombination to determine the number of ways each immune receptor can be synthesized, previously thought to be untenable, is a key first step in determining how this special population is made. The computational tools developed in this thesis have allowed immunologists for the first time to comprehensively test and invalidate a longstanding theory (convergent recombination) of how this special population is created, while generating the data needed to develop novel hypotheses.
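The abstract's point that occupancy alone can mislead is often made with the roofline model: attainable throughput is capped by arithmetic intensity times memory bandwidth until the compute peak is reached. A minimal Python sketch follows, with hypothetical hardware numbers rather than figures from the thesis:

    def roofline_bound(flops, bytes_moved, peak_flops, peak_bw):
        """Attainable throughput (FLOP/s) under the roofline model.
        Arithmetic intensity I = flops / bytes_moved (FLOP per byte);
        a kernel is memory-bound while I < peak_flops / peak_bw."""
        intensity = flops / bytes_moved
        return min(peak_flops, intensity * peak_bw)

    # Hypothetical kernel: 2 FLOPs per 8 bytes streamed, on a GPU with
    # 4 TFLOP/s peak compute and 200 GB/s memory bandwidth.
    bound = roofline_bound(flops=2, bytes_moved=8,
                           peak_flops=4e12, peak_bw=200e9)
    print(f"{bound / 1e9:.0f} GFLOP/s")  # 50 GFLOP/s: memory-bound, so fusing
                                         # kernels to cut traffic (raising I)
                                         # beats raising occupancy

On such a kernel, maximising occupancy cannot lift performance past the bandwidth ceiling, which is consistent with the fused-kernel, higher-locality redesign described above.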
|
179 |
Quality-of-service-based approach for dimensioning and optimisation of mobile cellular networks. Kourtis, Stamatis. January 2002 (has links)
Next generation high performance systems are being standardised assuming a generic service delivery paradigm capable of supporting a diversity of circuit and, importantly, packet services. However, this flexibility comes at a cost: increased complexity of dimensioning, planning, optimisation and QoS provisioning with respect to previous-generation single-service mobile systems. Accurate system dimensioning is of fundamental importance, and this thesis explores this requirement at two levels. Firstly, it departs from the common assumption of static users and examines the impact of mobile users on system capacity. Secondly, it examines the impact of voice and web browsing services on system dimensioning. Regardless of the accuracy of dimensioning and planning, load imbalances occur for different reasons, resulting in small-scale congestion events in the system. A load equalisation scheme is proposed which utilises the overlapping areas between neighbouring cells in order to eliminate the load imbalances. Essentially, coverage overlapping is needed in order to achieve ubiquitous coverage, hence to eliminate coverage holes. However, excessive overlapping results in capacity loss in interference-limited systems, which is virtually the case with all modern systems. Radio coverage optimisation is needed, but today this is performed on a cell-by-cell basis, producing sub-optimal results. This thesis proposes an advanced coverage optimisation algorithm which takes into consideration all cells within the considered area simultaneously. For the operators (and also for the proposed coverage optimisation algorithm) it is imperative to have accurate path loss predictions. However, contemporary planning tools come with certain limitations, and often time-consuming and expensive measurement campaigns are organised. This thesis builds on the assumption that mobile systems will be able to locate the position of mobile terminals, and subsequently proposes an automated process for the estimation of the radio coverage of the network. Lastly, the assumption regarding the positioning capabilities of mobile systems is further exploited in order to enhance the QoS guarantees to mobile users. Thus, various algorithms are examined which perform handovers towards base stations that maximise the survivability of the handed-over calls.
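As an illustration of the load-equalisation idea (a toy sketch in Python, not the thesis's actual scheme), users in coverage-overlap areas can be handed over to less utilised neighbouring cells; the data layout below is an assumption for the example.

    def equalise_load(load, capacity, overlap_users):
        """One greedy pass of load equalisation via handovers.
        load[c] / capacity[c] is the utilisation of cell c;
        overlap_users is a list of (serving_cell, alternative_cells)
        pairs, one per user sitting in a coverage-overlap area."""
        util = lambda c: load[c] / capacity[c]
        handovers = []
        for user, (serving, alternatives) in enumerate(overlap_users):
            target = min(alternatives, key=util)
            # hand over only if it strictly lowers the pair's peak utilisation
            if util(target) + 1 / capacity[target] < util(serving):
                load[serving] -= 1
                load[target] += 1
                handovers.append((user, serving, target))
        return handovers

    # Example: cell 0 congested (18/20), cell 1 lightly loaded (5/20);
    # two overlap users served by cell 0 are also covered by cell 1.
    load, capacity = [18, 5], [20, 20]
    print(equalise_load(load, capacity, [(0, [1]), (0, [1])]))
    print(load)  # [16, 7] after both handovers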
|
180 |
Impact study of length in detecting algorithmically generated domains. Ahluwalia, Aashna. 30 April 2018 (links)
The domain generation algorithm (DGA) is a popular detection-evasion technique used by many sophisticated malware families. Since DGA domains are randomly generated, they tend to exhibit properties that differ from legitimate domain names. It is observed that the shorter DGA domains used in emerging malware are more difficult to detect, in contrast to regular DGA domains, which are unusually long. While length was considered a contributing feature in earlier approaches, there has not been a systematic focus on how to leverage its impact on DGA domain detection accuracy. Through our study, we present a new detection model based on semantic and information-theoretic features. The research applies the concept of a domain length threshold to detect DGA domains regardless of their length. The experimental evaluation of the proposed approach, using public datasets, yields a detection rate (DR) of 98.96% and a false positive rate (FPR) of 2.1% when using the random forest classification technique. / Graduate
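A minimal Python sketch of length- and entropy-based DGA detection with a random forest, using scikit-learn; the feature set and the tiny hand-made sample are illustrative stand-ins, not the thesis's exact features or datasets.

    import math
    from collections import Counter
    from sklearn.ensemble import RandomForestClassifier

    def features(domain):
        """Length and information-theoretic features of a domain label."""
        counts = Counter(domain)
        probs = [c / len(domain) for c in counts.values()]
        entropy = -sum(p * math.log2(p) for p in probs)  # Shannon entropy
        vowel_ratio = sum(domain.count(v) for v in "aeiou") / len(domain)
        return [len(domain), entropy, vowel_ratio]

    # Toy labelled sample: 0 = legitimate, 1 = DGA-generated.
    domains = ["google", "wikipedia", "netflix",
               "xjw3qpzkd", "qz7fk2rmv", "p0o9i8u7y6"]
    labels = [0, 0, 0, 1, 1, 1]

    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit([features(d) for d in domains], labels)
    print(clf.predict([features("kq2x9vzjw")]))  # expected: flagged as DGA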
|