  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.

From tool to instrument : an experiential analysis of interacting with information visualization

Alsaud, Sara Faisal Bander, January 2008
Information Visualizations (InfoVis) are tools that represent huge amounts of abstract data visually on a computer screen. These tools are not reaching users because the constituents of good InfoVis design are still unknown. In this thesis I argue that, owing to the subjectivity of the knowledge-gaining process, good design is design that delivers positive experiences. Hence, what constitutes a positive experience is the focus of this research. The application domain chosen was the Academic Literature Domain (ALD). ALD InfoVis tools exist; however, they do not cater for users' requirements or interface usability, both of which are crucial for a better experience. As a result, an ALD InfoVis tool was created following a User Centred Design (UCD) approach, starting with requirements and ending with usability. The requirements were first generated from a qualitative study, from which it became clear that researchers equate authors with their publications and position them in terms of the ideas they portray. Based on this, the tool was designed and implemented. The tool's usability was then evaluated through a set of low-level and high-level tasks. Low-level tasks target the visual syntax, whereas high-level tasks tap into the generated semantics. The latter allowed for subjective reasoning and interaction, and were therefore used as the basis of the experiential study. The experiential study captured users' experiences through a Grounded Theory (GT) analysis. This study resulted in a base theory of InfoVis interaction that fits well within the instrumental genesis theoretical framework, which argues for the design of instruments rather than tools, where instruments are mental appropriations of tools. The theoretical approach applied by this research has value across InfoVis, even though it was not tailored for evaluation.

Stochastic Bayesian estimation using efficient particle filters for vehicle tracking applications

Kravaritis, Giorgos, January 2006
The central focus of our work is on designing particle filters that use their particles more efficiently by seeding them in state space areas with greater significance and/or by varying their number. We begin by introducing the auxiliary local linearization particle filter (ALLPF), whose importance sampling density brings together the auxiliary sequential importance resampling technique and the local linearization particle filter (LLPF). A simulation study assesses its suitability for tracking manoeuvring targets. We next incorporate the prediction mechanism of the LLPF within a multi-target algorithm. The proposed particle filter (A-MLLPF) performs simultaneously the functions of measurement-to-track assignment and particle prediction while employing an adaptive number of prediction particles. Compared to the equivalent standard multi-target particle filter, we show that the A-MLLPF performs better both in terms of tracking accuracy and measurement association. The remainder of the thesis is devoted to vehicle tracking which exploits information from the road map. We first focus on the variable-structure multiple model particle filter (VSMMPF) from the literature and enhance it with a varying-particle scheme that adaptively uses fewer particles when the vehicle travels on the road. Simulation results show that the proposed variation achieves similar performance with a significant decrease in particle usage. We then incorporate a gating and a joint probabilistic data association logic into the VSMMPF and use the resulting algorithm (MGTPF) to track multiple vehicles. Simulations demonstrate the suitability of the MGTPF in the multiple-vehicle environment and quantify the performance improvement compared to a standard particle filter with an analogous association logic. Returning to the single-vehicle tracking problem, we lastly introduce the variable mass particle filter (VMPF). The VMPF uses a varying number of particles, which it allocates efficiently to its propagation modes according to the modes' likelihood and difficulty. To compensate for the resulting statistical irregularities, it assigns to the particles appropriate masses which scale their weights. Other novel features of the proposed algorithm include an on-road propagation mechanism which uses just one particle and a technique for dealing with random road departure angles. Simulation results demonstrate the improved efficiency of the VMPF, since it generally requires fewer particles than the VSMMPF to achieve better estimation accuracy.
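All of the filters above build on the bootstrap (sampling importance resampling) particle filter as a baseline. The sketch below shows that baseline for a hypothetical one-dimensional random-walk target observed in Gaussian noise; the motion and measurement models, noise levels and particle count are illustrative assumptions, not the thesis's algorithms:

```python
import math
import random

def bootstrap_pf(observations, n_particles=500, seed=0):
    # Minimal bootstrap (SIR) particle filter for a 1-D random-walk state
    # observed in unit-variance Gaussian noise. Returns the filtered state
    # estimate after each observation.
    rng = random.Random(seed)
    particles = [rng.gauss(0.0, 1.0) for _ in range(n_particles)]
    estimates = []
    for y in observations:
        # Predict: propagate each particle through the motion model
        particles = [x + rng.gauss(0.0, 0.5) for x in particles]
        # Update: weight by the measurement likelihood N(y; x, 1)
        weights = [math.exp(-0.5 * (y - x) ** 2) for x in particles]
        total = sum(weights)
        weights = [w / total for w in weights]
        estimates.append(sum(w * x for w, x in zip(weights, particles)))
        # Resample: multinomial resampling concentrates particles in
        # high-likelihood regions of the state space
        particles = rng.choices(particles, weights=weights, k=n_particles)
    return estimates
```

The more efficient variants proposed in the thesis change where particles are seeded (the proposal density) and make the particle count adaptive, rather than fixed as here.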

Structured matrix methods for a polynomial root solver using approximate greatest common divisor computations and approximate polynomial factorisations

Lao, Xinyuan, January 2011
This thesis discusses the use of structure-preserving matrix methods for the numerical approximation of all the zeros of a univariate polynomial in the presence of noise. In particular, a robust polynomial root solver is developed for the calculation of multiple roots and their multiplicities, such that knowledge of the noise level is not required. The root solver involves repeated approximate greatest common divisor computations and polynomial divisions, both of which are ill-posed computations. A detailed description of the implementation of this root solver is presented as the main work of this thesis. Moreover, the root solver, implemented in MATLAB using 32-bit floating point arithmetic, solves non-trivial polynomials to a high degree of accuracy in numerical examples.
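The link between repeated GCD computations and root multiplicities can be illustrated in exact arithmetic. The sketch below uses plain rational-coefficient polynomial division and the Euclidean algorithm in place of the thesis's structured-matrix approximate GCD methods (which are what make the idea robust to noise); the function names are illustrative:

```python
from fractions import Fraction

def norm(p):
    # Drop leading zero coefficients (coefficients are highest degree first)
    while len(p) > 1 and p[0] == 0:
        p = p[1:]
    return p

def pdiv(num, den):
    # Polynomial long division over the rationals: returns (quotient, remainder)
    num = [Fraction(c) for c in num]
    den = [Fraction(c) for c in den]
    q = []
    while len(num) >= len(den):
        f = num[0] / den[0]
        q.append(f)
        pad = den + [Fraction(0)] * (len(num) - len(den))
        num = [a - f * b for a, b in zip(num, pad)][1:]
    return (q or [Fraction(0)]), (num or [Fraction(0)])

def pgcd(a, b):
    # Euclidean algorithm; the result is made monic
    a, b = norm([Fraction(c) for c in a]), norm([Fraction(c) for c in b])
    while not (len(b) == 1 and b[0] == 0):
        _, r = pdiv(a, b)
        a, b = b, norm(r)
    return [c / a[0] for c in a]

def pderiv(p):
    n = len(p) - 1
    return norm([c * (n - i) for i, c in enumerate(p[:-1])] or [Fraction(0)])

def multiplicity_profile(p):
    # Chain p0 = p, p_{k+1} = gcd(p_k, p_k'): each GCD strips one level of
    # multiplicity, so the degree drops reveal how many distinct roots
    # have each multiplicity.
    chain = [norm([Fraction(c) for c in p])]
    while len(chain[-1]) > 1:
        chain.append(pgcd(chain[-1], pderiv(chain[-1])))
    degs = [len(c) - 1 for c in chain]
    drops = [degs[i] - degs[i + 1] for i in range(len(degs) - 1)]
    # drops[k] = number of distinct roots with multiplicity > k
    return [drops[k] - (drops[k + 1] if k + 1 < len(drops) else 0)
            for k in range(len(drops))]
```

For p(x) = (x-1)^3 (x+2)^2 the chain degrees fall 5, 3, 1, 0, revealing one double and one triple root. In floating point with noisy coefficients the exact GCD degenerates to a constant, which is precisely why the thesis replaces it with approximate GCD computations.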

Reengineering software to three-tier applications and services

Matos, Carlos Manuel Pinto de, January 2012
Driven by the needs of a very demanding world, new technology arises as a way to solve problems found in practice. In the context of software, this occurs in the form of new programming paradigms, new application design methodologies, new tool support and new architectural patterns. Newly developed systems can take advantage of recent advances and choose from a state-of-the-art portfolio of techniques, drawing on an understanding built over the years and learning from past experiences, good and bad. However, existing software was built in a completely different context. Software engineering advances at a very fast pace, and applications quickly come to be seen as legacy for a number of reasons, including difficulties in adapting to business needs, lack of integration capabilities with other systems, or general maintenance issues. There are various approaches to addressing these problems, depending on the requirements or major concerns: the solution can be either rewriting the applications from scratch or evolving the existing systems. This thesis presents a methodology for systematically evolving existing applications into more modern architectures, including proposed implementations for several classes of modernisation, with particular emphasis on reengineering towards tiered architectures and service-oriented architectures. The methodology is based on a combination of source code pattern detection guiding the extraction of structural graph models, rule-based transformations of these models, and the generation and execution of code-level refactoring scripts to effect the actual changes to the software. This dissertation presents the process, methodology, and tool support. Additionally, the proposed techniques are evaluated in case studies, in order to draw conclusions regarding applicability, scalability, and overall benefits, both in terms of computational and human effort.
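The middle step of that pipeline, rule-based transformation of an extracted structural model, can be caricatured in a few lines. The graph encoding, the rule format and the "calls_db" relation below are hypothetical illustrations, not the dissertation's actual metamodel or rule language:

```python
def apply_rules(graph, rules):
    # A structural model is a set of (source, relation, target) triples;
    # each rule is a (relation-to-match, rewrite-function) pair. Matching
    # triples are rewritten, the rest pass through unchanged.
    out = set()
    for (src, rel, tgt) in graph:
        for match, rewrite in rules:
            if rel == match:
                src, rel, tgt = rewrite(src, rel, tgt)
        out.add((src, rel, tgt))
    return out

# Hypothetical modernisation rule: classes that call the database directly
# are retargeted through a new data-access tier component.
rules = [("calls_db", lambda s, r, t: (s, "calls", "DataAccess"))]
```

In the dissertation's approach, a transformation like this would then drive generated refactoring scripts that apply the corresponding change at code level.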

Local-search and hybrid evolutionary algorithms for Pareto optimization

Knowles, Joshua D., January 2002
No description available.

The applicability of hardware design strategies to improve software application performance in multi-core architectures

Quintal, Luis Fernando Curi, January 2014
Multi-core architectures have become the main trend in microprocessor design over the last decade as a way to deliver increasing computing power. Thanks to transistor miniaturisation, this trend is leading towards ever more processing cores on the same die, with tens to hundreds of cores integrated in a system in the near future. Moreover, multi-core architectures have become pervasive in computing devices at all levels, from smartphones and tablets to multi-user servers and supercomputers. Multi-core architectures pose a challenge for software application development, since sequential applications can no longer benefit from new multi-core generations. Alternative programming models, tools and algorithms are needed in order to efficiently exploit the computing power inherent in multi-core architectures. This thesis presents map-merge, a parallel processing method for formulating parallel versions of software applications that classify and process large input data arrays. The use and advantages of map-merge are illustrated with a parallel formulation of a generic bucket sort algorithm, which gives a peak speedup gain of 9 when executed on a dual 6-core system, and with a parallel formulation of a branch predictor simulator, which gives a peak speedup gain of 7 on the same multi-core system. In addition, two methods based on bit-representation and bit-manipulation strategies are presented: bit-slice, an alternative algorithm for ranking the elements of a set, and bit-index, an alternative algorithm for sorting a set of integers. The bit-slice method is implemented as an application that computes the median of a set of integers; this implementation outperformed median-calculation versions based on two sorting algorithms, quicksort and counting sort, by up to 6 times. On the other hand, the bit-index method is implemented as a sorting algorithm for permutations of a set of integers.
The approach also outperformed the quicksort and counting sort implementations, with peak speedup gains of 10 and 6 respectively. These methods are inspired by traditional hardware design strategies.
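As an illustration of the bit-index idea, a set of distinct non-negative integers can be sorted by setting one bit per value in a single mask and then reading the set bits back in order. This is a guess at the spirit of the method from the abstract, not the thesis's implementation:

```python
def bit_index_sort(values):
    # Bit-index sketch: for DISTINCT non-negative integers, set bit v for
    # each value v in one big mask, then scan the mask from the least
    # significant bit upward; the set bits come out in ascending order.
    mask = 0
    for v in values:
        mask |= 1 << v
    out, i = [], 0
    while mask:
        if mask & 1:
            out.append(i)
        mask >>= 1
        i += 1
    return out
```

Like counting sort, this trades comparisons for indexed memory (here, bit) operations, which is what makes it amenable to hardware-inspired bit-manipulation strategies.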

On the synthesis of choreographies

Lange, Julien, January 2013
Theories based on session types stand out as effective methodologies to specify and verify properties of distributed systems. A key result in the area shows the suitability of choreography languages and session types as a basis for a choreography-driven methodology for distributed software development. The methodology it advocates is as follows: a team of programmers designs a global view of the interactions to be implemented (i.e., a choreography), then the choreography is projected onto each role. Finally, each program implementing one or more roles in the choreography is validated against its corresponding projection(s). This is an ideal methodology, but it may not always be possible to design one set of choreographies that will drive the overall development of a distributed system. Indeed, software needs maintenance, specifications may evolve (sometimes during development), and issues may arise during the implementation phase. Therefore, there is a need for an alternative approach whereby a choreography can be inferred from local behavioural specifications (i.e., session types). We tackle the problem of synthesising choreographies from local behavioural specifications by introducing a type system which assigns, if possible, a choreography to a set of session types. We demonstrate the importance of obtaining a choreography from local specifications through two applications. Firstly, we give three algorithms and a methodology to help software designers refine a choreography into a global assertion, i.e., a choreography decorated with logical formulae specifying senders' obligations and receivers' requirements. Secondly, we introduce a formal model for distributed systems where each participant advertises its requirements and obligations as behavioural contracts (in the form of session types), and where multiparty sessions are started when a set of contracts allows a choreography to be synthesised.
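Projection, the step the choreography-driven methodology relies on, can be sketched concretely: a choreography is a sequence of interactions, and each role sees only its own sends and receives. The encoding below is an illustrative toy, not the session-type formalism used in the thesis:

```python
def project(choreography, role):
    # Project a global choreography, a list of (sender, receiver, label)
    # interactions, onto one role's local view: "partner!label" for a send,
    # "partner?label" for a receive. Interactions not involving the role
    # disappear from its projection.
    local = []
    for sender, receiver, label in choreography:
        if role == sender:
            local.append(f"{receiver}!{label}")
        elif role == receiver:
            local.append(f"{sender}?{label}")
    return local

# A hypothetical three-party choreography
g = [("Buyer", "Seller", "order"),
     ("Seller", "Buyer", "invoice"),
     ("Buyer", "Bank", "pay")]
```

Synthesis, the thesis's direction, is the inverse problem: given local specifications like these projections, recover (if possible) a global choreography whose projections they are.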

Performance studies of file system design choices for two concurrent processing paradigms

Lee, Yong-Woo, January 1996
This thesis studies file access performance in distributed file systems and in shared memory systems in a comparative manner. Three major changes in computing practice, namely the growth trends in computer communication speed, in computing power and in transaction size, influence the file access performance of the two computing paradigms. This study investigates their effect on file access performance comparatively in the two system paradigms using validated virtual performance models. It investigates the file access performance of various design alternatives, such as multiple CPUs, multiple disks, multiple networks, multiple file servers, enhanced concurrency, caching and local processing, and discusses various file system design issues comparatively in the two system paradigms in terms of file access performance. The theoretical limit of file access performance is investigated in many cases. The effect of workload characteristics, such as workload pattern, workload fluctuation and transaction size, on file access performance is quantitatively evaluated in the two system paradigms. This study proposes the virtual server concept for performance modelling based on queuing network theory and presents virtual server models for the two system paradigms. The models used are found to predict the file access performance of real systems very precisely. A parameterization methodology is proposed to obtain the performance parameters and their values. A workload characterization methodology consisting of a six-step procedure is proposed, and six realistic and representative artificial workloads were obtained. Simulation is used as the main methodology, with the analytic approach as an auxiliary method, to solve the performance models in this research. The simulation results are compared with the analytic solutions case by case, confirming that the two agree exactly.
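Cross-checking simulation against analytic queueing results can be illustrated with the simplest case, an M/M/1 queue, whose mean response time is 1/(mu - lambda). The simulator below is a generic sketch, not the thesis's virtual server models:

```python
import random

def mm1_sim(lam, mu, n=200000, seed=1):
    # Event-driven simulation of an M/M/1 queue: Poisson arrivals at rate
    # lam, exponential service at rate mu, one FIFO server. Returns the
    # mean time a request spends in the system (queueing + service).
    rng = random.Random(seed)
    t_arrive = 0.0
    t_free = 0.0          # time at which the server next becomes free
    total = 0.0
    for _ in range(n):
        t_arrive += rng.expovariate(lam)
        start = max(t_arrive, t_free)          # wait if the server is busy
        t_free = start + rng.expovariate(mu)   # departure time
        total += t_free - t_arrive
    return total / n
```

For lam = 0.5 and mu = 1.0 the analytic mean response time is 1/(1.0 - 0.5) = 2, and a long simulation run should agree closely, mirroring the case-by-case validation described above.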

Modelling environments for large scale process system problems

Mitchell, David Riach, January 2000
This thesis presents a novel modelling environment for large scale process systems problems. Traditional modelling environments attempt to provide maximal functionality within a fixed modelling language. The intention of such systems is to provide the user with a complete package that requires no further development or coding on their part. This approach limits the user to the functionality provided within the package, but requires little or no programming experience from the user. The environment presented here instead provides sufficient capability for the user to describe the model in terms of a variable set and a set of methods with which to manipulate the variables. Many of these methods describe equations, but methods are not restricted to representing equations: they can act as agents, linking the modelling environment to external systems such as physical property databanks and non-JFMS format models. Separating the description of the model from its processing allows the complexities to be dealt with in a full programming language (external functions are written in Fortran90 or C). The behaviour of the system is tailored by the user, the modelling environment existing solely to store the model structure and provide the interface layer between the external systems.
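The structure described, a variable set plus unrestricted methods, some of which may act as agents to external systems, might be sketched as follows; the class and method names are invented for illustration and do not reflect the thesis's Fortran90/C implementation:

```python
class Model:
    # A model is just a variable set plus named methods. The environment
    # stores structure and dispatches; the behaviour lives entirely in the
    # user-supplied functions, which may encode an equation residual or
    # call out to an external system such as a property databank.
    def __init__(self):
        self.vars = {}
        self.methods = {}

    def method(self, name, fn):
        self.methods[name] = fn     # fn: vars-dict -> result

    def call(self, name):
        return self.methods[name](self.vars)

m = Model()
m.vars.update(flow_in=3.0, flow_out=2.5, accumulation=0.5)
# A method representing a mass-balance residual (zero when satisfied)
m.method("mass_balance",
         lambda v: v["flow_in"] - v["flow_out"] - v["accumulation"])
```

The point of the separation is that nothing in `Model` restricts what a method does, which is how non-equation agents fit into the same structure.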

Branching transactions : a transaction model for parallel database systems

Burger, Albert G., January 1996
In order to exploit parallel computers, database management systems must achieve a high level of concurrency when executing transactions. In a high contention environment, however, concurrency is severely limited by transaction blocking, and the utilisation of parallel hardware resources, e.g. multiple CPUs, can be low. In this dissertation, a new transaction model, Branching Transactions, is proposed. Under branching transactions, more than one possible path of execution of a transaction is followed in parallel, which allows us to avoid unnecessary transaction blockings and restarts. This approach uses additional hardware resources, mainly CPU time that would otherwise sit idle due to data contention, to improve transaction response time and throughput. A new transaction model has implications for many transaction processing algorithms, in particular concurrency control. A family of locking algorithms, based on multi-version two-phase locking, has been developed for branching transactions, including an algorithm which can dynamically switch between branching and non-branching modes. The issues of deadlock handling and recovery are also considered. The correctness of all new concurrency control algorithms is proved by extending traditional serializability theory so that it can cope with the notion of a branching transaction. Architectural descriptions of branching transaction systems for shared-memory parallel databases and hybrid shared-disk/shared-memory systems are discussed. In particular, the problem of cache coherence is addressed. The performance of branching transactions in a shared-memory parallel database system has been investigated using discrete-event simulation. One field which may potentially benefit greatly from branching transactions is that of so-called "real-time" database systems, in which transactions have execution deadlines. A new real-time concurrency control algorithm based on branching transactions is introduced.
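The core intuition can be shown in toy form: where a conventional transaction would block on a write-locked item, a branching transaction speculatively follows both possible values and later discards the branch invalidated by the writer's outcome. This sketch is illustrative only; the dissertation develops the full locking, deadlock and recovery machinery around it:

```python
def branching_read(body, committed_val, uncommitted_val, writer_commits):
    # Instead of blocking until the writer finishes, fork two branches of
    # the reading transaction: one continues with the committed version
    # (valid if the writer aborts), the other with the uncommitted version
    # (valid if the writer commits). Both run speculatively; only the
    # branch consistent with the writer's outcome is kept.
    branch_committed = body(committed_val)      # assumes writer aborts
    branch_speculative = body(uncommitted_val)  # assumes writer commits
    return branch_speculative if writer_commits else branch_committed
```

The extra work on the discarded branch is exactly the otherwise-idle CPU capacity the model spends to avoid blocking and restarts.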
