11

Smart object, not smart environment : cooperative augmentation of smart objects using projector-camera systems

Molyneaux, David January 2008 (has links)
Smart objects research explores embedding sensing and computing into everyday objects - augmenting objects to be a source of information on their identity, state, and context in the physical world. A major challenge for the design of smart objects is to preserve their original appearance, purpose and function. Consequently, many research projects have focussed on adding input capabilities to objects, while neglecting the requirement for an output capability which would provide a balanced interface. This thesis presents a new approach to adding output capability, in which smart objects cooperate with projector-camera systems. The concept of Cooperative Augmentation enables the knowledge required for visual detection, tracking and projection on smart objects to be embedded within the object itself. This allows projector-camera systems to provide generic display services, enabling spontaneous use by any smart object to achieve non-invasive, interactive projected displays on its surfaces. Smart objects cooperate to achieve this by describing their appearance directly to the projector-camera systems and by using embedded sensing to constrain the visual detection process. We investigate natural-appearance vision-based detection methods and perform an experimental study specifically analysing the increase in detection performance achieved with movement sensing in the target object. We find that detection performance increases significantly with sensing, indicating that the combination of different sensing modalities is important and that different objects require different appearance representations and detection methods. These studies inform the design and implementation of a system architecture which serves as the basis for three applications demonstrating visual detection, integration of sensing, projection, interaction with displays and knowledge updating. The displays achieved with Cooperative Augmentation allow any smart object to deliver visual feedback to users from implicit and explicit interaction with information represented or sensed by the physical object, supporting objects as both input and output media simultaneously. This contributes to the central vision of Ubiquitous Computing by enabling users to address tasks in physical space with direct manipulation and receive feedback on the objects themselves, where it belongs in the real world.
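To make the sensing-constrained detection idea concrete, the following minimal Python sketch gates an (expensive) visual detector on the object's own movement sensing. All names and thresholds are invented for illustration; this is not the architecture implemented in the thesis.

import math

class SmartObject:
    def __init__(self, appearance_model):
        self.appearance_model = appearance_model  # e.g. keypoint descriptors
        self.last_accel = (0.0, 0.0, 9.81)        # resting accelerometer reading

    def is_moving(self, accel, threshold=0.5):
        # Compare the magnitude of the acceleration change to a threshold.
        dx, dy, dz = (a - b for a, b in zip(accel, self.last_accel))
        self.last_accel = accel
        return math.sqrt(dx*dx + dy*dy + dz*dz) > threshold

def detect(camera_frame, obj, visual_detector, accel_reading):
    # Only search the frame when the object itself reports movement:
    # embedded sensing prunes the visual detection search.
    if not obj.is_moving(accel_reading):
        return None  # keep the previous pose estimate
    return visual_detector(camera_frame, obj.appearance_model)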
12

Extracting place semantics from geo-folksonomies

Elgindy, Ehab January 2013 (has links)
Massive interest in geo-referencing of personal resources is evident on the web. People are collaboratively digitising maps and building place knowledge resources that document personal use and experiences in geographic places. Understanding and discovering these place semantics can potentially lead to the development of a different type of place gazetteer that holds not only standard information on place names and geographic location, but also activities practised by people in a place and vernacular views of place characteristics. The main contributions of this research are as follows. A novel framework is proposed for the analysis of geo-folksonomies and the automatic discovery of place-related semantics. The framework is based on a model of geographic place that extends the definition of place used in traditional gazetteers and geospatial ontologies to include the notion of place affordance. A method of clustering place resources to overcome the inaccuracy and redundancy inherent in the geo-folksonomy structure is developed and evaluated. Reference ontologies are created and used in a tag resolution stage to discover place-related concepts of interest. Folksonomy analysis techniques are then used to create a place ontology and its component type and activity ontologies. The resulting concept ontologies are compared with an expert ontology of place types and activities and evaluated through a user questionnaire. To demonstrate the utility of the proposed framework, an application is developed to illustrate the possible enrichment of the search experience by exposing the derived semantics to users of web mapping applications. Finally, the value of the discovered place semantics is also demonstrated by proposing two semantics-based similarity approaches: user similarity and place similarity. The validity of the approaches was confirmed by the results of an experiment conducted on a realistic folksonomy dataset.
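The two framework stages described above - clustering geo-tagged resources into places, then resolving tags against a reference ontology - might look roughly like the following Python sketch. The ontology, radius threshold and greedy clustering rule are illustrative assumptions, not the thesis's actual method.

from math import radians, sin, cos, asin, sqrt

ACTIVITY_ONTOLOGY = {"eat": "dining", "coffee": "dining",
                     "swim": "sport", "run": "sport"}

def haversine_m(p, q):
    # Great-circle distance in metres between two (lat, lon) points.
    lat1, lon1, lat2, lon2 = map(radians, (*p, *q))
    a = sin((lat2-lat1)/2)**2 + cos(lat1)*cos(lat2)*sin((lon2-lon1)/2)**2
    return 2 * 6371000 * asin(sqrt(a))

def cluster_places(resources, radius_m=50):
    # Greedy clustering: a resource joins the first cluster whose centroid
    # lies within radius_m, otherwise it starts a new cluster. This absorbs
    # positional inaccuracy and redundancy in the geo-folksonomy.
    clusters = []
    for pos, tags in resources:
        for c in clusters:
            if haversine_m(pos, c["centroid"]) < radius_m:
                c["tags"].update(tags)
                break
        else:
            clusters.append({"centroid": pos, "tags": set(tags)})
    return clusters

def place_activities(cluster):
    # Tag resolution: map raw folksonomy tags onto ontology concepts.
    return {ACTIVITY_ONTOLOGY[t] for t in cluster["tags"] if t in ACTIVITY_ONTOLOGY}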
13

Performance implications of using diverse redundancy for database replication

Stankovic, Vladimir January 2008 (has links)
Using diverse redundancy for database replication is the focus of this thesis. Traditionally, database replication solutions have been built on the fail-stop failure assumption, i.e. the belief that crashes cause the majority of failures. However, recent findings refute this common assumption, showing that many faults cause systematic non-crash failures. These findings demonstrate that existing, non-diverse database replication solutions, which use identical database server products, are ineffective fault-tolerance mechanisms. At the same time, the findings motivated the use of diverse redundancy (where different database server products are used) as a promising way of improving dependability. A fault-tolerant server built with diverse database servers could deliver improvements in availability and failure rates compared with the individual database servers or their replicated, non-diverse configurations. Besides the potential for improving dependability, one would like to evaluate the performance implications of using diverse redundancy in the context of database replication. This is the focal point of the research. The work performed to that end can be summarised as follows:
- We conducted a substantial performance evaluation of database replication using diverse redundancy, comparing its performance to that of various non-diverse configurations as well as non-replicated databases. The experiments revealed systematic differences in the behaviour of diverse servers and point to the potential for performance improvement when diverse servers are used. Under particular workloads, diverse servers performed better than both non-diverse and non-replicated configurations.
- We devised a middleware-based database replication protocol which provides dependability assurance and guarantees database consistency. It uses an eager, update-everywhere approach for replica control. Although we focus on the use of diverse database servers, the protocol can also be used with database servers from the same vendor. We provide the correctness criteria of the protocol. Different regimes of operation are defined, allowing the protocol to be dynamically optimised for either dependability or performance. Additionally, it can be used in conjunction with high-performance replication solutions.
- We developed an experimental test harness for performance evaluation of different database replication solutions. It enabled us to evaluate the performance of the diverse database replication protocol, e.g. by comparing it against known replication solutions. We show that, as expected, the improved dependability exhibited by our replication protocol carries a performance overhead. Nevertheless, when optimised for performance, our protocol shows good performance.
- To minimise the performance penalty introduced by replication, we propose a scheme whereby the database server processes are prioritised to deliver performance improvements in cases of low to modest resource utilisation by the database servers.
- We performed an uncertainty-explicit assessment of database server products. Using an integrated approach, in which both performance and reliability are considered, we rank different database server products to aid selection of the components for the fault-tolerant server built from diverse databases.
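As a rough illustration of the middleware idea (not the thesis's protocol), the sketch below executes each statement eagerly on two diverse replicas and compares their answers, so that a systematic non-crash failure in one server surfaces as a divergence. The connections are assumed to be DB-API compatible; divergence handling is deliberately simplified.

class DiverseReplicaMiddleware:
    def __init__(self, conn_a, conn_b):
        self.replicas = [conn_a, conn_b]   # e.g. two different vendors

    def execute(self, sql, params=()):
        results = []
        for conn in self.replicas:
            cur = conn.cursor()
            cur.execute(sql, params)
            # Collect rows for queries; statements return no result set.
            results.append(cur.fetchall() if cur.description else None)
        if results[0] != results[1]:
            # Divergent answers signal a non-crash failure in one replica.
            for conn in self.replicas:
                conn.rollback()
            raise RuntimeError("replica divergence detected")
        return results[0]

    def commit(self):
        # Eager, update-everywhere replication: both replicas commit
        # before control returns to the client.
        for conn in self.replicas:
            conn.commit()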
14

SVG 3D graphical presentation for Web-based applications

Lu, Jisheng January 2015 (has links)
Due to rapid developments in computer graphics and computer hardware, web-based applications are becoming more and more powerful, and the performance gap between web-based and desktop applications continues to narrow. The Internet and the WWW have been widely used for delivering, processing, and publishing 3D data, and there is increasing demand for more and easier access to 3D content on the web. The better the browser experience, the more potential revenue web-based content can generate for providers and others. The main focus of this thesis is the design, development and implementation of a new generic 3D modelling method based on Scalable Vector Graphics (SVG) for web-based applications. While the model is initialized using classical 3D graphics, the scene model is extended using SVG. A new algorithm to present 3D graphics with SVG is proposed. This includes the definition of a 3D scene in the framework; the integration of 3D objects, cameras, transformations, light models and textures in a 3D scene; and the rendering of 3D objects on the web page, allowing the end-user to interactively manipulate objects on the page. A new 3D graphics library for 3D geometric transformation and projection in the SVG GL is designed and developed. A set of primitives in the SVG GL, including the triangle, sphere, cylinder and cone, is designed and developed, as is a set of complex 3D models, including extrusion, revolution, Bezier surfaces, and point clouds. New Gouraud shading and Phong shading algorithms for the SVG GL are proposed, designed and developed; these algorithms can be used to generate smooth shading and create highlights for 3D models. New texture mapping algorithms for the SVG GL, oriented toward web-based 3D modelling applications, are proposed, designed and developed, covering different 3D objects such as the triangle, plane, sphere, cylinder and cone. This constitutes a unique and significant contribution to the discipline of web-based 3D modelling, as well as to the process of 3D model popularization.
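The core of rendering 3D content with 2D SVG primitives is projecting scene geometry into screen space and emitting SVG elements. The following self-contained Python sketch shows the principle with a simple pinhole projection and a single triangle; it is an illustrative reconstruction, not code from the SVG GL.

def project(point, d=500.0):
    # Pinhole projection onto the plane z = d; camera at the origin
    # looking down +z. Assumes z > 0 for all visible points.
    x, y, z = point
    return (d * x / z, d * y / z)

def triangle_to_svg(tri, fill="#6699cc", width=400, height=400):
    # Map projected coordinates to the SVG viewport (y axis flipped).
    pts = " ".join(f"{x + width/2:.1f},{height/2 - y:.1f}"
                   for x, y in map(project, tri))
    return f'<polygon points="{pts}" fill="{fill}" />'

if __name__ == "__main__":
    tri = [(0, 100, 600), (-100, -50, 700), (120, -60, 800)]
    svg = (f'<svg xmlns="http://www.w3.org/2000/svg" width="400" height="400">'
           f'{triangle_to_svg(tri)}</svg>')
    print(svg)  # save as an .svg file to view in a browser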
15

Active database behaviour : the REFLEX approach

Naqvi, Waseem Hadder January 1995 (has links)
Modern-day and new-generation applications have more demanding requirements than traditional database management systems (DBMS) are able to support. Two of these requirements, timely responses to changes of database state and application domain knowledge stored within the database, are embodied within active database technology. Currently, there are a number of research prototype active database systems throughout the world. In order for an organisation to use any such prototype system, it may have to forsake existing products and resources and embark on substantial reinvestment in new database products, associated resources and retraining. This approach would clearly be unfavourable, as it is expensive both in terms of time and money. A more suitable approach would be to allow active behaviour to be added onto existing systems. This scenario is addressed within this research. It investigates how active behaviour can best be added to existing DBMSs, so as to preserve the investment in an organisation's resources, by examining the following issues: (i) what form the knowledge model should take; (ii) whether rules and events should be modelled as first-class objects; (iii) how triggering events will be specified; and (iv) how the user will interact with the system. Various design decisions were taken and investigated through the implementation of a series of working prototypes on the ONTOS DBMS platform. The resultant REFLEX model was successfully ported and adapted to a second platform, POET. The porting process uncovered some interesting issues regarding preconceived ideas about the portability of open systems.
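The following minimal Python sketch illustrates the general idea of layering event-condition-action (ECA) rules onto an existing store without replacing the DBMS, with rules held as first-class objects - one of the design questions listed above. The class names and trigger scheme are invented for illustration and do not reproduce REFLEX.

class Rule:
    def __init__(self, event, condition, action):
        self.event, self.condition, self.action = event, condition, action

class ActiveLayer:
    def __init__(self, db):
        self.db, self.rules = db, []

    def on(self, event, condition, action):
        # Rules are ordinary objects that can be stored and inspected.
        self.rules.append(Rule(event, condition, action))

    def update(self, key, value):
        self.db[key] = value
        for r in self.rules:              # fire rules matching this event
            if r.event == ("update", key) and r.condition(self.db):
                r.action(self.db)

db = {}
layer = ActiveLayer(db)
layer.on(("update", "stock"),
         condition=lambda d: d["stock"] < 10,
         action=lambda d: print("reorder triggered"))
layer.update("stock", 5)   # prints: reorder triggered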
16

Reducing deadline miss rate for grid workloads running in virtual machines : a deadline-aware and adaptive approach

Khalid, Omer January 2011 (has links)
This thesis explores three major areas of research: the integration of virtualization into scientific grid infrastructures, the evaluation of virtualization overhead on the performance of HPC grid jobs, and the optimization of job execution times to increase throughput by reducing the job deadline miss rate. Integrating virtualization into the grid to deploy on-demand virtual machines for jobs, in a way that is transparent to end users and has minimum impact on the existing system, poses a significant challenge. This involves the creation of virtual machines, decompression of the operating system image, adapting the virtual environment to satisfy the software requirements of the job, constant updating of the job state once it is running without modifying the batch system or existing grid middleware, and finally bringing the host machine back to a consistent state. To facilitate this research, an existing, in-production pilot job framework was modified to deploy virtual machines on demand on the grid, using the virtualization administrative domain to handle all I/O and increase network throughput. This approach limits the change impact on the existing grid infrastructure while leveraging the execution and performance isolation capabilities of virtualization for job execution. This work led to an evaluation of the various scheduling strategies used by the Xen hypervisor, measuring the sensitivity of job performance to the amount of CPU and memory allocated under various configurations. However, virtualization overhead is also a critical factor in determining job execution times. Grid jobs have a diverse set of requirements for machine resources such as CPU, memory and network, and have inter-dependencies on other jobs in meeting their deadlines, since the input of one job can be the output of a previous job. A novel resource provisioning model was devised to decrease the impact of virtualization overhead on job execution. Finally, dynamic deadline-aware optimization algorithms were introduced, using exponential smoothing and rate limiting to predict job failure rates based on static and dynamic virtualization overhead. Statistical techniques were also integrated into the optimization algorithm to flag jobs that are at risk of missing their deadlines and to take preventive action to increase overall job throughput.
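A hedged sketch of the deadline-aware idea: exponentially smooth the observed virtualization overhead and flag jobs whose predicted completion would miss the deadline. The smoothing constant, risk rule and numbers below are illustrative assumptions only.

def smooth(prev, observed, alpha=0.3):
    # Exponential smoothing: blend the latest observation with the
    # running estimate.
    return alpha * observed + (1 - alpha) * prev

def at_risk(elapsed, remaining_work, overhead_est, deadline):
    # Predicted finish = time elapsed so far + remaining work inflated
    # by the current overhead estimate.
    predicted = elapsed + remaining_work * (1 + overhead_est)
    return predicted > deadline

overhead = 0.05                       # initial guess: 5% overhead
for observed in (0.08, 0.12, 0.10):   # measured per-interval overheads
    overhead = smooth(overhead, observed)

print(at_risk(elapsed=50.0, remaining_work=60.0,
              overhead_est=overhead, deadline=120.0))   # False here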
17

Model updating of modal parameters from experimental data and applications in aerospace

Keye, Stefan January 2003 (has links)
The research in this thesis is associated with different aspects of the experimental analysis of structural dynamic systems and the correction of the corresponding mathematical models using the results of experimental investigations as a reference. A comprehensive finite-element model updating software technology is assembled and various novel features are implemented. The software technology is integrated into an experimental test facility for structural dynamic identification and used in a number of real-life aerospace applications which illustrate the advantages of the new features. To improve the quality of the experimental reference data, a novel non-iterative method for the computation of optimised multi-point excitation force vectors for Phase Resonance Testing is introduced. The method is unique in that it is based entirely on experimental data, makes it possible to determine both the locations and the force components resulting in the highest phase purity, and enables prediction of the corresponding mode indicator. A minimisation criterion for the real-part response of the test structure with respect to the total response is utilised and, unlike with other methods, no further information such as a mass matrix from a finite-element model or assumptions on the structure's damping characteristics is required. Performance in comparison to existing methods is assessed in a numerical study using an analytical eleven-degrees-of-freedom model. Successful applications to a simple laboratory satellite structure and under realistic test conditions during the Ground Vibration Test on the European Space Agency's Polar Platform are described. Considerable improvements are achieved with respect to the phase purity of the identified mode shapes, compared to other methods or manual tuning strategies, as well as the time and effort involved during Ground Vibration Testing. Various aspects of the application of iterative model updating methods to aerospace-related test structures and live experimental data are discussed. A new iterative correction parameter selection technique that enables the creation of a physically correct updated analytical model, and a novel approach for the correction of structural components with viscous material properties, are proposed. A finite-element model of the GARTEUR SM-AG19 laboratory test structure is updated using experimental modal data from a Ground Vibration Test. In order to assess the accuracy and physical consistency of the updated model, a novel approach is applied in which only a fraction of the mode shapes and natural frequencies from the experimental database is used in the model correction process, and analytical and experimental modal data beyond the range utilised for updating are correlated. To evaluate the influence of experimental errors on the accuracy of finite-element model corrections, a numerical simulation procedure is developed. The effects of measurement uncertainties on the substructure correction factors, natural frequency deviations, and mode shape correlation are investigated using simulated experimental modal data. Various numerical models are generated to study the effects of modelling error magnitudes and locations. As a result, the correction parameter uncertainty increases with the magnitude of the experimental errors and decreases with the number of modes involved in the updating process. Frequency errors, however, since they are not averaged during updating, must be measured with adequately high precision.
Next, the updating procedure is applied to an authentic industrial aerospace structure. The finite-element model of the EC 135 helicopter is utilised, and a novel technique for the parameterisation of substructures with non-isotropic material properties is suggested. Experimental modal parameters are extracted from frequency responses recorded during a Shake Test on the EC 135-S01 prototype. In this test case, the correction process involves handling a high degree of modal and spatial incompleteness in the experimental reference data. Accordingly, new effective strategies are developed for the selection of updating parameters which are both physically significant and sufficiently sensitive with respect to the analytical modal parameters. Finally, possible advantages of model updating in association with a model-based method for the identification and localisation of structural damage are investigated. A new technique for identifying and locating delamination damage in carbon-fibre-reinforced polymers is introduced. The method is based on correlating damage-induced modal damping variations from an elasto-mechanic structure with the corresponding data from a numerical model in order to derive information on the damage location. Using a numerical model enables the location of damage in a three-dimensional structure from experimental data obtained with only a single response sensor. To acquire sufficiently accurate experimental data, a novel criterion for the determination of the most appropriate actuator and sensor positions and a polynomial curve-fitting technique are suggested. It is shown that, in order to achieve good location precision, the numerical model must retain a high degree of accuracy and physical consistency.
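As a worked illustration of the phase-purity criterion mentioned above: at a true normal mode excited by a monophase force, the response is in phase quadrature with the excitation, so the real part of the complex response should vanish relative to the total response. The indicator below is a plausible formulation for illustration, not the exact mode indicator from the thesis.

import numpy as np

def mode_indicator(responses):
    # responses: complex response amplitudes at the excitation frequency,
    # one per sensor. An indicator near 1 means high phase purity.
    r = np.asarray(responses, dtype=complex)
    return 1.0 - np.linalg.norm(r.real) ** 2 / np.linalg.norm(r) ** 2

# A nearly pure mode (responses almost purely imaginary) scores high:
print(mode_indicator([0.02 + 1.0j, -0.01 + 0.8j, 0.03 + 1.2j]))  # ~0.9995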
18

A programming system for end-user functional programming

Alam, Abu S. January 2015 (has links)
This research involves the construction of a programming system, HASKEU, to support end-user programming in a purely functional programming language. An end-user programmer is someone who may program a computer to get their job done, but has no interest in becoming a computer programmer. A purely functional programming language is one that does not require the expression of statement sequencing or variable updating. The end-user is offered two views of their functional program. The primary view is a visual one, in which the program is presented as a collection of boxes (representing processes) and lines (representing data flow). The secondary view is a textual one, in which the program is presented as a collection of written function definitions. It is expected that the end-user programmer will begin with the visual view, perhaps later moving on to the textual view. The task of the programming system is to ensure that the visual and textual views are kept consistent as the program is constructed. The foundation of the programming system is an implementation of the Model-View-Controller (MVC) design pattern as a reactive program using the elegant Functional Reactive Programming (FRP) framework. Human-Computer Interaction (HCI) principles and methods are considered in all design decisions. A usability study was conducted to assess the effectiveness of the new system.
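The view-consistency requirement can be pictured with a plain observer-style MVC sketch: both views subscribe to one model, so an edit through either view re-renders the other. This Python sketch is illustrative only - HASKEU itself realises MVC reactively with FRP in a functional language.

class Model:
    def __init__(self):
        self.definitions, self.views = {}, []

    def define(self, name, body):
        self.definitions[name] = body
        for view in self.views:          # push the change to every view
            view.refresh(self)

class TextualView:
    def refresh(self, model):
        for name, body in model.definitions.items():
            print(f"{name} = {body}")

class VisualView:
    def refresh(self, model):
        for name in model.definitions:
            print(f"[box: {name}] --lines--> output")

m = Model()
m.views = [TextualView(), VisualView()]
m.define("double", "\\x -> x + x")   # both views re-render consistently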
19

Computational fluid dynamics based diagnostics and optimal design of hydraulic capsule pipelines

Asim, Taimoor January 2013 (has links)
The scarcity of fossil fuels and the rapid escalation of energy prices around the world are affecting the efficiency of established modes of cargo transport within the transportation industry. Extensive research is being carried out on improving the efficiency of existing modes of cargo transport, as well as on developing alternative means of transporting goods. One such alternative is to use the energy contained within fluid flowing in pipelines to transfer goods from one place to another. Although the concept of using fluid pipelines for transportation has been in practice for more than a millennium, detailed knowledge of the flow behaviour in such pipelines is still a subject of active research. This is because most studies on transporting goods in pipelines are based on experimental measurements of global flow parameters, and only rough approximations of the local flow behaviour within these pipelines have been reported. With the emergence of sophisticated analytical tools and the use of high-performance computing facilities installed throughout the globe, it is now possible to simulate the flow conditions within these pipelines and gain a better understanding of the underlying flow phenomena. The present study focuses on the use of advanced modelling tools to simulate the flow within Hydraulic Capsule Pipelines (HCPs) in order to quantify the flow behaviour within such pipelines. Hydraulic Capsule Pipeline refers to the transport of goods in hollow containers, typically of spherical or cylindrical shape, termed capsules, carried along the pipeline by water. A novel modelling technique has been employed to carry out the investigations under various geometric and flow conditions within HCPs. Both qualitative and quantitative flow diagnostics have been carried out on the flow of both spherical and cylindrical capsules in a horizontal HCP for on-shore applications. Trains of capsules ranging from a single capsule to multiple capsules per unit length of the pipeline have been modelled for practical flow velocities within HCPs. It has been observed that the flow behaviour within an HCP depends on a number of fluid and geometric parameters, and that the pressure drop in such pipelines cannot be predicted from established methods. The development of a predictive tool for such applications is one of the aims achieved in this study. Furthermore, investigations have been conducted on vertical pipelines, which are very important for off-shore applications of HCPs. The energy requirements for vertical HCPs are significantly higher than for horizontal HCPs. It has been shown that a minimum average flow velocity is required to transport a capsule in a vertical HCP, depending on the geometric and physical properties of the capsules. The concentric propagation, along the pipe centreline, of heavy-density capsules in vertical HCPs marks a significant variation from horizontal HCPs transporting heavy-density capsules. Bends are an integral part of pipeline networks, and in designing any pipeline it is essential to consider their effects on the overall energy requirements. In order to accurately design both horizontal and vertical HCPs, an analysis of the flow behaviour and energy requirements for varying geometric configurations has been carried out.
A novel modelling technique has been incorporated to accurately predict the velocity, trajectory and orientation of capsules in pipe bends. Optimisation of HCPs plays a crucial role in the worldwide commercial acceptability of such pipelines. Based on the Least-Cost Principle, an optimisation methodology has been developed for single-stage HCPs for both on-shore and off-shore applications. The input to the optimisation model is the solid throughput required from the system, and the outputs are the optimal diameter of the HCP and the pumping requirements for the capsule transport system. The optimisation model presented in this study is both robust and user-friendly. A complete flow diagnostics and design methodology, including optimisation, for Hydraulic Capsule Pipelines is thus presented. The advanced computational techniques employed in this study have made it possible to map and analyse the flow structure within HCPs, and detailed analysis of even the smallest-scale flow variations has led to a better understanding of the flow behaviour.
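A least-cost optimisation of this kind can be sketched as a sweep over candidate diameters, trading capital cost against pumping cost for the required throughput. The cost functions and coefficients below are placeholders invented for illustration; the thesis's calibrated model differs.

def total_cost(diameter_m, throughput_kg_s):
    capital = 1200.0 * diameter_m ** 1.5          # pipe cost grows with size
    # Smaller pipes need higher velocity (hence pumping power) for the
    # same throughput; this stylised term captures that trade-off.
    pumping = 0.8 * throughput_kg_s / diameter_m ** 2
    return capital + pumping

def optimal_diameter(throughput_kg_s, candidates):
    # Pick the candidate diameter minimising total cost.
    return min(candidates, key=lambda d: total_cost(d, throughput_kg_s))

candidates = [0.1 + 0.05 * i for i in range(18)]   # 0.10 m .. 0.95 m
print(optimal_diameter(throughput_kg_s=50.0, candidates=candidates))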
20

Neural trust model for multi-agent systems

Lu, Gehao January 2011 (has links)
Introducing trust and reputation into multi-agent systems can significantly improve the quality and efficiency of the systems. Computational trust and reputation also create an environment of survival of the fittest, helping agents recognize and eliminate malevolent agents in the virtual society. The thesis redefines computational trust and analyzes its features from different aspects. A systematic model called the Neural Trust Model for Multi-agent Systems is proposed to support trust learning, trust estimation, reputation generation, and reputation propagation. In this model, the thesis adapts the traditional Self-Organizing Map (SOM) to create a SOM-based Trust Learning (STL) algorithm and a SOM-based Trust Estimation (STE) algorithm. The STL algorithm solves the problem of learning trust from agents' past interactions, and the STE algorithm solves the problem of estimating trustworthiness with the help of previously learned patterns. The thesis also proposes a multi-agent reputation mechanism for generating and propagating reputations. The mechanism exploits the patterns learned by the STL algorithm and generates the reputation of a specific agent. Three propagation methods are also designed as part of the mechanism to guide path selection for the reputation. For evaluation, the thesis designs and implements a test bed to evaluate the model in a simulated electronic commerce scenario. The proposed model is compared with a traditional arithmetic-based trust model, and also with itself in situations where there is no reputation mechanism. The results show that the model can significantly improve quality and efficiency in the test bed scenario. Some design considerations and the rationale behind the algorithms are also discussed in light of the results.
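A minimal self-organizing map in the spirit of the STL algorithm: vectors describing past interaction outcomes are clustered onto a small map whose nodes act as learned trust patterns, and estimation reduces to finding the best-matching node. Map size, features and data below are invented for illustration.

import random

random.seed(1)
MAP_W, DIM = 4, 3                      # 1-D map of 4 nodes, 3 features
weights = [[random.random() for _ in range(DIM)] for _ in range(MAP_W)]

def bmu(x):
    # Best-matching unit: the node whose weight vector is closest to x.
    return min(range(MAP_W),
               key=lambda i: sum((w - v) ** 2 for w, v in zip(weights[i], x)))

def train(samples, epochs=20, lr=0.5):
    for _ in range(epochs):
        for x in samples:
            i = bmu(x)
            for j in range(MAP_W):     # neighbourhood influence decays
                h = 1.0 if j == i else (0.5 if abs(j - i) == 1 else 0.0)
                weights[j] = [w + lr * h * (v - w)
                              for w, v in zip(weights[j], x)]

# Each vector: (delivery success, quality, timeliness) from past deals.
train([(1.0, 0.9, 0.8), (0.9, 1.0, 0.9), (0.1, 0.2, 0.0)])
print(bmu((0.95, 0.9, 0.85)))   # estimate trust via the nearest pattern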
