171

Towards the on-line development of visual interactive simulation models

Withers, Stephen John January 1981 (has links)
Reviews were undertaken of previous work on visual interactive simulation and on the interface between humans and computers, the latter considering the physical and psychological aspects of the subject. Two simulation projects carried out in association with Rolls-Royce Aero Engines and the British Steel Corporation are described in detail. As a result of these projects and the review of previous studies, a major weakness in the technology of visual interactive simulation was identified: while the visual representation aids validation, verification, and experimentation, no facilities are provided to assist the analyst in the task of model construction. Simulation program generators are of proven use for non-interactive models, but a visual model requires a graphically oriented approach. The main section describes the design and implementation of a substantial extension to the simulation software developed at Warwick. This allows the design and development of displays to be carried out 'on-line', while preserving the one-to-one correspondence between simulation entities and their visual representation. It is suggested that this has the potential to significantly reduce the elapsed time taken to develop visual simulation models, while increasing the involvement of the user (or sponsor) in the modelling process, especially when 'pre-defined' entity types are used to minimise the amount of model-specific coding required. Finally, potential routes for the further development of visual interactive simulation are discussed, including the implementation of a 'simulation language' interpreter within the existing software. This would result in a system which is fully interactive, easing model development as well as experimentation.
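
The one-to-one correspondence between simulation entities and their on-screen representation is the architectural idea this abstract emphasises. The sketch below is a minimal illustration of that idea; all class and method names are invented for the example and are not taken from the Warwick software.

```python
# Illustrative sketch only: invented names conveying the one-to-one
# entity/display correspondence; not the Warwick simulation software.

class DisplayIcon:
    """Visual representation of one simulation entity."""
    def __init__(self, x, y, glyph):
        self.x, self.y, self.glyph = x, y, glyph

    def move(self, x, y):
        self.x, self.y = x, y  # a real system would redraw here


class Entity:
    """A 'pre-defined' entity type: simulation state plus its own icon."""
    def __init__(self, name, x=0, y=0, glyph="*"):
        self.name = name
        self.icon = DisplayIcon(x, y, glyph)  # one-to-one correspondence

    def relocate(self, x, y):
        # Changing simulation state updates the display automatically,
        # so displays can be developed 'on-line' alongside the model.
        self.icon.move(x, y)


machine = Entity("lathe-1", x=10, y=5, glyph="M")
machine.relocate(12, 5)
```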
172

An object oriented approach to automating the product specification concept in the automotive industry

Markomichalis, Panayiotis S. January 1991 (has links)
The research in this thesis is focused on the control of rapid automotive product-specification changes, which are due to multiple and unexpected factors, e.g. legal requirements, technological improvements and climatic conditions. Automotive companies use the Product Specification Concept, which consists of a multidisciplinary theory using Boolean logic as the applications environment and a team of auditors (people who check the validity of such a theory) to control the complexity of the changes in their products. Although the specification data are stored electronically in databases, the core of such business depends on the knowledge and experience of people within the automotive companies and still generally operates manually. Thus, human characteristics have an effect upon the business (e.g. the inability of people to work with codes and many different data at once; people tend to forget, or they lack proper training and skills), which makes it less efficient and consequently more costly. In this thesis, possible ways of computerising such an environment (specifically, Rover's Auditing function and Product Specification Concept) are investigated. The characteristics of the problem domain indicate the need to use knowledge-based reasoning and Object Oriented Programming. A system, ROOVESP (Rover's Object Oriented VEhicle Specification), was developed as the "vehicle" to explore the area, and it proved that knowledge and experience can be automatically acquired from the existing data and procedures. When these are coded into rules, computer intelligence can contribute to this traditionally human-oriented environment and fully automate both the Auditing area and the Product Specification Concept in Rover. The techniques adopted proved applicable to other similar areas.
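
The Product Specification Concept described above amounts to checking vehicle specifications against Boolean validity rules. As a minimal illustration (the feature codes and rules below are invented, not Rover's actual data), such an automated audit might look like:

```python
# Illustrative sketch only: invented feature codes and rules showing how
# Boolean specification rules might be checked automatically.

# A vehicle specification is a set of feature codes.
spec = {"LHD", "AIRCON", "DIESEL"}

# Each rule maps a name to a predicate over the specification.
rules = {
    "aircon needs uprated alternator": lambda s: "AIRCON" not in s or "ALT90A" in s,
    "diesel excludes sports exhaust":  lambda s: not ({"DIESEL", "SPORT_EXH"} <= s),
}

def audit(spec, rules):
    """Return the names of all violated rules, as a human auditor would."""
    return [name for name, ok in rules.items() if not ok(spec)]

print(audit(spec, rules))  # ['aircon needs uprated alternator']
```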
173

Parallel architectures for image analysis

Francis, Nicholas David January 1991 (has links)
This thesis is concerned with the problem of designing an architecture specifically for the application of image analysis and object recognition. Image analysis is a complex subject area that remains only partially defined and only partially solved, which makes the task of designing an architecture aimed at efficiently implementing image analysis and recognition algorithms a difficult one. Within this work a massively parallel heterogeneous architecture, the Warwick Pyramid Machine, is described. This architecture consists of SIMD, MIMD and MSIMD modes of parallelism, each directed at a different part of the problem. The performance of this architecture is analysed with respect to many tasks drawn from very different areas of the image analysis problem. These tasks include an efficient straight-line extraction algorithm and a robust and novel geometric model-based recognition system. The straight-line extraction method is based on the local extraction of line segments using a Hough-style algorithm, followed by careful global matching and merging. The recognition system avoids quantising the pose space, hence overcoming many of the problems inherent in this class of methods, and includes an analytical verification stage. Results and detailed implementations of both of these tasks are given.
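
As a rough illustration of the Hough-style line extraction mentioned above, the sketch below accumulates votes in a (rho, theta) parameter space; it is sequential and generic, not the parallel Warwick Pyramid Machine implementation.

```python
# Illustrative sketch only: minimal (rho, theta) Hough accumulation for
# straight-line extraction, shown sequentially.

import math

def hough_lines(points, width, height, n_theta=180):
    """Accumulate votes for lines rho = x*cos(theta) + y*sin(theta)."""
    max_rho = int(math.hypot(width, height))
    acc = [[0] * n_theta for _ in range(2 * max_rho)]
    for x, y in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = int(x * math.cos(theta) + y * math.sin(theta)) + max_rho
            acc[rho][t] += 1
    # Return the (votes, rho_index, theta_index) cells with the most votes.
    peaks = sorted(((acc[r][t], r, t) for r in range(2 * max_rho)
                    for t in range(n_theta)), reverse=True)
    return peaks[:3]

edge_points = [(i, i) for i in range(20)]  # points along a diagonal edge
print(hough_lines(edge_points, 20, 20))
```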
174

On the implementation of P-RAM algorithms on feasible SIMD computers

Ziani, Ridha January 1992 (has links)
The P-RAM model of computation has proved to be a very useful theoretical model for exploiting and extracting inherent parallelism in problems and thus for designing parallel algorithms. It is therefore important to examine whether results obtained for this model can be translated onto machines considered to be more realistic in the face of current technological constraints. In this thesis, we show how many techniques and algorithms designed for the P-RAM can be implemented on the feasible SIMD class of computers. The first investigation concerns classes of problems solvable on the P-RAM model using the recursive techniques of compression, tree contraction and 'divide and conquer'. For such problems, specific methods are emphasised to achieve efficient implementations on some SIMD architectures. Problems such as list ranking and polynomial and expression evaluation are shown to have efficient solutions on the 2-dimensional mesh-connected computer. The balanced binary tree technique is widely employed to solve many problems in the P-RAM model. By proposing an implicit embedding of the binary tree of size n on a (√n × √n) mesh-connected computer (contrary to the usual H-tree approach, which requires a mesh of size ≈ (2√n × 2√n)), we show that many of the problems solvable using this technique can be implemented efficiently on this architecture. Two efficient O(√n) algorithms for solving the bracket matching problem are presented. Consequently, the problems of expression evaluation (where the expression is given in array form), evaluating algebraic expressions with a carrier of constant bounded size, and parsing expressions of both bracket and input-driven languages are all shown to have efficient solutions on the 2-dimensional mesh-connected computer. Turning to non-tree-structured computations, we show that the Eulerian tour problem for a given graph with m edges and maximum vertex degree d can be solved in O(d√n) parallel time on the 2-dimensional mesh-connected computer. A way to increase processor utilisation on the 2-dimensional mesh-connected computer is also presented. The method suggested consists of pipelining sets of iteratively solvable problems, each of which at each step of its execution uses only a fraction of the available PEs.
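
For readers unfamiliar with the bracket matching problem mentioned above, the sketch below defines it via a simple sequential stack-based solution; the thesis's contribution is solving it in O(√n) parallel time on the mesh, which this sketch does not attempt.

```python
# Illustrative sketch only: sequential bracket matching with a stack,
# defining the problem the thesis solves in parallel on the mesh.

def match_brackets(s):
    """Return a list where match[i] is the index of the bracket pairing s[i]."""
    match, stack = [None] * len(s), []
    for i, c in enumerate(s):
        if c == '(':
            stack.append(i)
        else:                       # c == ')'
            j = stack.pop()         # assumes a well-formed bracket sequence
            match[i], match[j] = j, i
    return match

print(match_brackets("(()())"))     # [5, 2, 1, 4, 3, 0]
```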
175

Towards efficient error detection in large-scale HPC systems

Gurumdimma, Nentawe Y. January 2016 (has links)
The need for computer systems to be reliable has become increasingly important as users' dependence on their accurate functioning grows. The failure of these systems can be very costly in terms of time and money. Much as system designers try to design fault-free systems, it is practically impossible to achieve this, as many different factors can affect them. To achieve system reliability, fault-tolerance methods are usually deployed; these methods help the system to produce acceptable results even in the presence of faults. Root cause analysis, a dependability method in which the causes of failures are diagnosed for the purpose of correction or prevention of future occurrence, is less efficient: it is reactive and cannot prevent the first failure from occurring. For this reason, methods with predictive capabilities are preferred; failure prediction methods are employed to predict potential failures so that preventive measures can be applied. Most predictive methods have been supervised, requiring accurate knowledge of the system's failures, errors and faults. However, with changing system components and system updates, supervised methods become ineffective. Error detection methods allow error patterns to be detected early so that preventive methods can be applied. Performing this detection in an unsupervised way can be more effective, as changes or updates to the system affect such a solution less. In this thesis, we introduce an unsupervised approach to detecting error patterns in a system using its data. More specifically, the thesis investigates the use of both event logs and resource utilization data to detect error patterns. It addresses both the spatial and temporal aspects of achieving system dependability. The proposed unsupervised error detection method has been applied to real data from two different production systems. The results are positive, showing an average detection F-measure of about 75%.
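
As a generic illustration of unsupervised detection and of the F-measure the abstract reports: the stand-in detector below, which flags time windows containing rare log-message templates, is an invented example with an arbitrary threshold, not the thesis's method.

```python
# Illustrative sketch only: a generic stand-in for unsupervised error
# detection over log templates, plus the standard F-measure computation.

from collections import Counter

def detect(windows, rarity=0.15):
    """Flag windows containing templates rarer than `rarity` overall."""
    counts = Counter(t for w in windows for t in w)
    total = sum(counts.values())
    rare = {t for t, c in counts.items() if c / total < rarity}
    return [bool(set(w) & rare) for w in windows]

def f_measure(pred, truth):
    tp = sum(p and t for p, t in zip(pred, truth))
    fp = sum(p and not t for p, t in zip(pred, truth))
    fn = sum(t and not p for p, t in zip(pred, truth))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

windows = [["boot", "ok"], ["boot", "ok"], ["ok", "ecc_err"], ["ok", "ok"]]
truth = [False, False, True, False]
pred = detect(windows)
print(pred, f_measure(pred, truth))  # [False, False, True, False] 1.0
```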
176

Approximation algorithms for packing and buffering problems

Matsakis, Nicolaos January 2015 (has links)
This thesis studies online and offline approximation algorithms for packing and buffering problems. In the second chapter of this thesis, we study the problem of packing linear programs online. In this problem, the online algorithm may only increase the values of the variables of the linear program, and its goal is to maximize the value of the objective function. The online algorithm initially has full knowledge of all parameters of the linear program except for the right-hand sides of the constraints, which are gradually revealed to it by the adversary. This online problem was introduced by Ochel et al. [2012]. Our contribution (Englert et al. [2014]) is to provide improved upper bounds on the competitiveness of both deterministic and randomized online algorithms for this problem, as well as an optimal deterministic online algorithm for the special case of linear programs involving two variables. In the third chapter we study the offline COLORFUL BIN PACKING problem. This problem is a variant of the BIN PACKING problem in which each item is associated with a color, with the additional restriction that two items packed consecutively into the same bin cannot share the same color. The COLORFUL BIN PACKING problem has been studied mainly from an online perspective and was introduced as a generalization of the BLACK AND WHITE BIN PACKING problem (Balogh et al. [2012]), i.e., the special case of this problem for two colors. We provide (joint work with Matthias Englert) a 2-approximate algorithm for the COLORFUL BIN PACKING problem. In the fourth chapter we study the Longest Queue Drop (LQD) online algorithm for shared-memory switches with three and two output ports. The Longest Queue Drop algorithm is a well-known online algorithm used to direct the packet flow of shared-memory switches. According to LQD, when the buffer of the switch becomes full, a packet is preempted from the longest queue in the buffer to free buffer space for the newly arriving packet, which is accepted. We show (Matsakis [2016], to appear) that the Longest Queue Drop algorithm is (3/2)-competitive for three-port switches, improving the previously best upper bound of 5/3 (Kobayashi et al. [2007]). Additionally, we show that this algorithm is exactly (4/3)-competitive for two-port switches, correcting a previously published result claiming a tight upper bound of (4M-4)/(3M-2) < 4/3, where M ∈ Z+ denotes the buffer size.
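
The LQD policy itself is simple to state, and a minimal simulation is sketched below; the port names, buffer size, and the simplification of always preempting from the longest queue before accepting are assumptions made for the example.

```python
# Illustrative sketch only: a minimal simulation of the Longest Queue Drop
# policy for a shared-memory switch, as described in the abstract.

from collections import deque

class LQDSwitch:
    def __init__(self, ports, capacity):
        self.queues = {p: deque() for p in ports}
        self.capacity = capacity  # total shared buffer size M

    def arrive(self, packet, port):
        if sum(len(q) for q in self.queues.values()) == self.capacity:
            # Buffer full: preempt from the longest queue to make room.
            longest = max(self.queues.values(), key=len)
            longest.pop()
        self.queues[port].append(packet)  # the arriving packet is accepted

switch = LQDSwitch(ports=["a", "b"], capacity=3)
for pkt, port in [(1, "a"), (2, "a"), (3, "a"), (4, "b")]:
    switch.arrive(pkt, port)
print({p: list(q) for p, q in switch.queues.items()})  # {'a': [1, 2], 'b': [4]}
```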
177

Computational aspects of lattice theory

Buckle, John Francis January 1989 (has links)
The use of computers to produce a user-friendly, safe environment is an important area of research in computer science. This dissertation investigates how computers can be used to create an interactive environment for lattice theory. The dissertation is divided into three parts: chapters two and three discuss mathematical aspects of lattice theory, chapter four describes methods of representing and displaying distributive lattices, and chapters five, six and seven describe a definitive-based environment for lattice theory. Chapter two investigates lattice congruences and pre-orders and demonstrates that any lattice congruence or pre-order can be determined by sets of join-irreducibles. By this correspondence it is shown that lattice operations in a quotient lattice can be calculated by set operations on the join-irreducibles that determine the congruence. This alternative characterisation is used in chapter three to obtain closed forms for all replacements of the form "h can replace g when computing an element f", and hence extends the results of Beynon and Dunne to general lattices. Chapter four investigates methods of representing and displaying distributive lattices. Techniques for generating Hasse diagrams of distributive lattices are discussed, and two methods for performing calculations on free distributive lattices, with their respective advantages, are given. Chapters five and six compare procedural- and functional-based notations with computer environments based on definitive notations for creating an interactive environment for studying set theory. Chapter seven introduces a definitive-based language called Pecan for creating an interactive environment for lattice theory. The results of chapters two and three are applied so that quotients, congruences and homomorphic images of lattices can be calculated efficiently.
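
As a concrete illustration of join-irreducibles, the notion chapter two builds on, the sketch below computes them for a small invented lattice given its join table; the lattice and its representation are made up for the example.

```python
# Illustrative sketch only: join-irreducibles of the diamond lattice M3
# (bottom 0, atoms a, b, c, top 1), given its join table.

from itertools import product

elements = ["0", "a", "b", "c", "1"]
join = {(x, x): x for x in elements}
join.update({("0", x): x for x in elements})
join.update({(x, "0"): x for x in elements})
join.update({(x, "1"): "1" for x in elements})
join.update({("1", x): "1" for x in elements})
join.update({(x, y): "1" for x, y in product("abc", repeat=2) if x != y})

def join_irreducibles(elements, join, bottom="0"):
    """x is join-irreducible if x != bottom and x = y v z implies x in {y, z}."""
    return [x for x in elements if x != bottom and
            all(x in (y, z) for y, z in product(elements, repeat=2)
                if join[(y, z)] == x)]

print(join_irreducibles(elements, join))  # ['a', 'b', 'c']
```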
178

Phase relationships in stereoscopic computation

Langley, Keith January 1990 (has links)
We apply the notion that phase differences can be used to interpret disparity between a pair of stereoscopic images. Indeed, phase relationships can also be used to obtain orientation and probabilistic measures from both edges and corners, as well as the directional instantaneous frequency of an image field. The method of phase differences is shown to be equivalent to a Newton-Raphson root finding iteration through the resolutions of band-pass filtering. The method does, however, suffer from stability problems, and in particular stationary phase. The stability problems associated with this technique are implicitly derived from the mechanism used to interpret disparity, which in general requires an assumption of linear phase and the local instantaneous frequency. We present two techniques. Firstly, we use the centre frequency of the applied band-pass filter to interpret disparity. This interpretation, however, suffers heavily from phase error and requires considerable damping prior to convergence. Secondly, we use the derivative of phase to obtain the instantaneous frequency from an image, which is then used to improve the disparity estimate. The second measure is implicitly sensitive to regions that exhibit stationary phase. We prove that stationary phase is a form of aliasing. To maintain stability with this technique, it is essential to smooth the disparity signal at each resolution of filtering. These ideas are extended into 2-D, where it is possible to extract both vertical and horizontal disparities. Unfortunately, extension into 2-D also introduces a similar form of the motion aperture problem. The best image regions for disambiguating both horizontal and vertical disparities lie in the presence of corners. Fortunately, we introduce a measure for identifying orthogonal image signals based upon the same filters that we use to interpret disparity. We find that in the presence of dominant edge energy, there is an error in horizontal disparity interpretation that varies as a cosine function. This error can be reduced by iteration or by resolving the horizontal component of the disparity signal. These ideas are also applied towards the computation of deformation, which is related to the magnitude and direction of surface slant. This is a natural application of the ideas presented in this thesis.
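
The basic phase-difference mechanism analysed here can be illustrated in one dimension: disparity is recovered as the phase difference of band-pass (Gabor) filter responses divided by frequency. The filter parameters and test signal below are invented for the example.

```python
# Illustrative sketch only: 1-D disparity from the phase difference of
# complex Gabor responses, using the filter's centre frequency.

import numpy as np

def gabor_response(signal, omega, sigma=8.0):
    """Convolve with a complex Gabor filter of centre frequency omega."""
    x = np.arange(-4 * sigma, 4 * sigma + 1)
    kernel = np.exp(-x**2 / (2 * sigma**2)) * np.exp(1j * omega * x)
    return np.convolve(signal, kernel, mode="same")

omega = 2 * np.pi / 16                    # centre frequency of the band-pass filter
t = np.arange(256)
left = np.cos(2 * np.pi * t / 16)
right = np.cos(2 * np.pi * (t - 3) / 16)  # true disparity: 3 pixels

rl, rr = gabor_response(left, omega), gabor_response(right, omega)
i = 128                                   # estimate at the image centre
phase_diff = np.angle(rl[i] * np.conj(rr[i]))
disparity = phase_diff / omega            # divide by (instantaneous) frequency
print(round(float(disparity), 2))         # ~ 3.0, the true shift
```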
179

Expected length of longest common subsequences

Dancík, Vladimír January 1994 (has links)
A longest common subsequence of two sequences is a sequence that is a subsequence of both the given sequences and has the largest possible length. It is known that the expected length of a longest common subsequence is proportional to the length of the given sequences. The proportion, denoted by γk, depends on the alphabet size k, and the exact value of this proportion is not known even for a binary alphabet. To obtain lower bounds for the constants γk, finite state machines computing a common subsequence of the inputs are built. Analysing the behaviour of the machines for random inputs yields lower bounds for the constants γk. The analysis of the machines is based on the theory of Markov chains. An algorithm for the automated production of lower bounds is described. To obtain upper bounds for the constants γk, collations (pairs of sequences with a marked common subsequence) are defined. Upper bounds for the number of collations of 'small size' can easily be transformed into upper bounds for the constants γk. Combinatorial analysis is used to bound the number of collations. The methods used for producing bounds on the expected length of a common subsequence of two sequences are also applied to other problems, namely a longest common subsequence of several sequences, a shortest common supersequence and maximal adaptability.
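
A quick numerical illustration of the constant γk for k = 2 (not the finite-state-machine bounding technique of the thesis) is to run the standard LCS dynamic program on long random binary strings:

```python
# Illustrative sketch only: Monte Carlo estimate of gamma_2 via the
# classic O(n^2) dynamic program for LCS length.

import random

def lcs_length(a, b):
    prev = [0] * (len(b) + 1)
    for x in a:
        curr = [0]
        for j, y in enumerate(b, 1):
            curr.append(prev[j - 1] + 1 if x == y else max(prev[j], curr[j - 1]))
        prev = curr
    return prev[-1]

random.seed(0)
n, trials, k = 500, 20, 2
total = sum(lcs_length([random.randrange(k) for _ in range(n)],
                       [random.randrange(k) for _ in range(n)])
            for _ in range(trials))
print(total / (trials * n))   # roughly 0.8; gamma_2 is believed to be near 0.81
```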
180

Performance-oriented service management in clouds

Chen, Chao January 2016 (has links)
Cloud computing has made it convenient for many IT-related and traditional industries to use feature-rich services to process complex requests. Various services are deployed in the cloud, and they interact with each other to deliver the required results. How to effectively manage these services, the number of which is ever increasing, within the cloud has unavoidably become a critical issue for both tenants and service providers. In this thesis, we develop novel resource-provisioning frameworks to determine the resources provisioned for interactive services. Next, we propose algorithms for mapping Virtual Machines (VMs) to Physical Machines (PMs) under different constraints, aiming to achieve the desired Quality-of-Service (QoS) while optimizing the provision of both computing resources and communication bandwidth. Finally, job scheduling may itself become a performance bottleneck in such a large-scale cloud. To address this issue, distributed job scheduling frameworks have been proposed in the literature. However, distributed job scheduling may cause resource conflicts among the distributed job schedulers, because individual schedulers make their scheduling decisions independently. In this thesis, we investigate methods for reducing such resource conflicts. We apply game-theoretical methodology to capture the behaviour of the distributed schedulers in the cloud. The frameworks and methods developed in this thesis have been evaluated with a simulated workload, a large-scale workload trace and a real cloud testbed.
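
As a generic baseline for the VM-to-PM mapping problem described above (not the thesis's algorithm), a first-fit-decreasing placement under a single CPU-capacity constraint might look like:

```python
# Illustrative sketch only: first-fit-decreasing VM-to-PM placement under
# one CPU-capacity constraint; a generic baseline, not the thesis's method.

def place_vms(vm_cpu_demands, pm_cpu_capacity):
    """Return a list of PMs, each a list of the VM demands packed onto it."""
    pms = []            # each entry: [remaining_capacity, [vm demands]]
    for demand in sorted(vm_cpu_demands, reverse=True):
        for pm in pms:
            if pm[0] >= demand:          # first PM with enough spare CPU
                pm[0] -= demand
                pm[1].append(demand)
                break
        else:                            # no PM fits: open a new one
            pms.append([pm_cpu_capacity - demand, [demand]])
    return [vms for _, vms in pms]

print(place_vms([4, 8, 1, 4, 2, 1], pm_cpu_capacity=10))
# [[8, 2], [4, 4, 1, 1]]
```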
