51

Extracting place semantics from geo-folksonomies

Elgindy, Ehab January 2013 (has links)
Massive interest in geo-referencing of personal resources is evident on the web. People are collaboratively digitising maps and building place knowledge resources that document personal use and experiences in geographic places. Understanding and discovering these place semantics can potentially lead to the development of a different type of place gazetteer that holds not only standard information of place names and geographic location, but also activities practised by people in a place and vernacular views of place characteristics. The main contributions of this research are as follows. A novel framework is proposed for the analysis of geo-folksonomies and the automatic discovery of place-related semantics. The framework is based on a model of geographic place that extends the definition of place as defined in traditional gazetteers and geospatial ontologies to include the notion of place affordance. A method of clustering place resources to overcome the inaccuracy and redundancy inherent in the geo-folksonomy structure is developed and evaluated. Reference ontologies are created and used in a tag resolution stage to discover place-related concepts of interest. Folksonomy analysis techniques are then used to create a place ontology and its component type and activity ontologies. The resulting concept ontologies are compared with an expert ontology of place types and activities and evaluated through a user questionnaire. To demonstrate the utility of the proposed framework, an application is developed to illustrate the possible enrichment of the search experience by exposing the derived semantics to users of web mapping applications. Finally, the value of using the discovered place semantics is also demonstrated by proposing two semantics-based similarity approaches: user similarity and place similarity. The validity of the approaches was confirmed by the results of an experiment conducted on a realistic folksonomy dataset.
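The abstract does not specify how place similarity is computed; a minimal sketch, assuming places are compared by the tags users attach to their geo-referenced resources (the tag aggregation and cosine measure here are illustrative assumptions, not the thesis's exact method):

```python
from collections import Counter
from math import sqrt

def tag_vector(resources):
    """Aggregate the tags attached to a place's resources into a frequency vector."""
    counts = Counter()
    for tags in resources:
        counts.update(tags)
    return counts

def cosine_similarity(a, b):
    """Cosine similarity between two sparse tag-frequency vectors."""
    shared = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in shared)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Two places described by the tags users attached to geo-referenced resources.
place_a = tag_vector([["cafe", "coffee", "wifi"], ["coffee", "breakfast"]])
place_b = tag_vector([["coffee", "cafe"], ["lunch", "wifi"]])
print(cosine_similarity(place_a, place_b))  # high value -> similar place affordances
```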
52

Infrastructure support for adaptive mobile applications

Friday, Adrian January 1996 (has links)
Recent growth in the number and quality of wireless network technologies has led to an increased interest in mobile computing. Furthermore, these technologies have now advanced sufficiently to allow 'advanced applications' to be engineered. Applications such as these are characterised by complex patterns of distribution and interaction, support for collaboration and multimedia data, and are typically required to operate over heterogeneous networks and end-systems. Given these operating requirements, it is the author's contention that advanced applications must adapt their behaviour in response to changes in their environment in order to operate effectively. Such applications are termed adaptive applications. This thesis investigates the support required by advanced applications to facilitate operation in heterogeneous networked environments. A set of generic techniques is presented that enables existing distributed systems platforms to provide support for adaptive applications. These techniques are based on the provision of a QoS framework and a supporting infrastructure comprising a new remote procedure call package and supporting services. The QoS framework centres on the ability to establish explicit bindings between objects. Explicit bindings enable application requirements to be specified and provide a handle through which applications can exert control and, more significantly, be informed of violations of the requested QoS. These QoS violations enable the applications to discover changes in their underlying environment and offer them the opportunity to adapt. The proposed architecture is validated through an implementation of the framework based on an existing distributed systems platform. The resulting architecture is used to underpin a novel collaborative mobile application aimed at supporting field workers within the utilities industry. The application in turn is used as a measure to gauge the effectiveness of the support provided by the platform. In addition, the design, implementation and evaluation of the application are used throughout the thesis to illustrate various aspects of platform support.
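A minimal sketch of the explicit-binding idea, assuming a hypothetical API (the class and method names are illustrative, not the platform's actual interface): the application states its QoS requirements when binding and registers a callback through which it is informed of violations, giving it the opportunity to adapt:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class QoS:
    """Requested quality-of-service parameters for a binding (illustrative)."""
    max_latency_ms: int
    min_bandwidth_kbps: int

class ExplicitBinding:
    """A binding between two objects that monitors its own QoS and notifies
    the application when the requested QoS is violated."""
    def __init__(self, target, qos: QoS, on_violation: Callable[[str], None]):
        self.target = target
        self.qos = qos
        self.on_violation = on_violation

    def report_measurement(self, latency_ms: int, bandwidth_kbps: int):
        # In a real platform this would be driven by the RPC layer's monitoring.
        if latency_ms > self.qos.max_latency_ms:
            self.on_violation(f"latency {latency_ms}ms exceeds {self.qos.max_latency_ms}ms")
        if bandwidth_kbps < self.qos.min_bandwidth_kbps:
            self.on_violation(f"bandwidth {bandwidth_kbps}kbps below {self.qos.min_bandwidth_kbps}kbps")

def adapt(reason: str):
    """Application-level adaptation hook, e.g. switch to a lower-fidelity codec."""
    print("adapting:", reason)

binding = ExplicitBinding(target="remote_service", qos=QoS(200, 64), on_violation=adapt)
binding.report_measurement(latency_ms=450, bandwidth_kbps=32)  # triggers adaptation
```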
53

Robust steganographic techniques for secure biometric-based remote authentication

Rashid, Rasber Dhahir January 2015 (has links)
Biometrics are widely accepted as the most reliable proof of identity, entitlement to services, and for crime-related forensics. Using biometrics for remote authentication is becoming an essential requirement for the development of the knowledge-based economy in the digital age. Ensuring the security and integrity of biometric data or templates is critical to the success of deployment, especially because once the data is compromised the whole authentication system is compromised, with serious consequences in terms of identity theft, fraud and loss of privacy. Protecting biometric data, whether stored in databases or transmitted over an open network channel, is a serious challenge, and cryptography may not be the answer. The main premise of this thesis is that digital steganography can provide alternative security solutions that can be exploited to deal with the biometric transmission problem. The main objective of the thesis is to design, develop and test steganographic tools to support remote biometric authentication. We focus on investigating the selection of biometric feature representations suitable for hiding in natural cover images and on designing steganography systems that are specific to hiding such biometric data rather than being general-purpose. The embedding schemes are expected to have high security characteristics, resistant to several types of steganalysis tools, and to maintain recognition accuracy post-embedding. We shall limit our investigations to embedding face biometrics, but the same challenges and approaches should help in developing similar embedding schemes for other biometrics. To achieve this, our investigations and proposals proceed in several directions, which are explained in the rest of this section. Reviewing the literature on the state of the art in steganography has revealed a rich source of theoretical work and creative approaches that have helped generate a variety of embedding schemes as well as steganalysis tools, but almost all focused on embedding random-looking secrets. The review greatly helped in identifying the main challenges in the field and the main criteria for success in terms of the difficult-to-reconcile requirements of embedding capacity, efficiency of embedding, robustness against steganalysis attacks, and stego image quality. On the biometrics front, the review revealed another rich source of different face biometric feature vectors. The review helped shape our primary objectives as: (1) identifying a binarised face feature vector with high discriminating power that is susceptible to embedding in images; (2) developing special-purpose content-based steganography schemes that can benefit from the well-defined structure of the face biometric data in the embedding procedure while preserving accuracy and without leaking information about the source biometric data; and (3) conducting sufficient sets of experiments to test the performance of the developed schemes, highlighting the advantages as well as the limitations, if any, of the developed system with regard to the above-mentioned criteria. We argue that the well-known LBP histogram face biometric scheme satisfies the desired properties, and we demonstrate that our new, more efficient wavelet-based version, called LBPH patterns, is much more compact and has improved accuracy. In fact, the wavelet-based schemes reduce the number of features to between 22% and 72% of the original LBP scheme, guaranteeing better invisibility post-embedding. We then develop two steganographic schemes.
The first, LSB-witness, is a general-purpose scheme that avoids changing the LSB plane, guaranteeing robustness against targeted steganalysis tools, and establishes the viability of using steganography for remote biometric-based recognition. However, it may modify the 2nd LSB of cover pixels as a witness for the presence of the secret bits in the 1st LSB, and thereby has some disadvantages with regard to stego image quality. Our search for a new scheme that exploits the structure of the secret face LBPH patterns for improved stego image quality led to the development of the first content-based steganography scheme. Embedding is guided by searching for similarities between the LBPH patterns and the structure of the cover image LSB bit-planes partitioned into 8-bit or 4-bit patterns. We shall demonstrate the benefits of the content-based embedding scheme in terms of improved stego image quality, greatly reduced payload, a reduced lower bound on optimal embedding efficiency, and robustness against all targeted steganalysis tools. Unfortunately, our scheme was not robust against the blind, or universal, SRM steganalysis tool. However, we demonstrated robustness against SRM at low payload when our scheme was modified by restricting embedding to edge and textured pixels. The low payload in this case is sufficient to embed a full set of secret face LBPH patterns. Our work opens new and exciting opportunities to build successful real applications of content-based steganography and presents plenty of research challenges.
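A minimal sketch of one plausible reading of the LSB-witness idea (an illustrative assumption, not necessarily the thesis's exact algorithm): the cover's 1st LSB plane is left untouched, and the 2nd LSB of each carrier pixel records whether that pixel's own LSB should be read as-is or inverted to yield the secret bit:

```python
import numpy as np

def embed_lsb_witness(cover: np.ndarray, bits):
    """Hide bits without touching the 1st LSB plane: each carrier pixel's 2nd LSB
    says whether its unchanged 1st LSB equals the secret bit (1) or not (0)."""
    stego = cover.copy().ravel()
    for i, bit in enumerate(bits):
        witness = 1 if (int(stego[i]) & 1) == bit else 0
        stego[i] = (int(stego[i]) & 0xFD) | (witness << 1)  # rewrite only bit 1
    return stego.reshape(cover.shape)

def extract_lsb_witness(stego: np.ndarray, n_bits):
    """Read the 1st LSB, inverting it wherever the witness bit is 0."""
    flat = stego.ravel()
    return [int(flat[i]) & 1 if (int(flat[i]) >> 1) & 1 else 1 - (int(flat[i]) & 1)
            for i in range(n_bits)]

cover = np.random.default_rng(0).integers(0, 256, size=(8, 8), dtype=np.uint8)
secret = [1, 0, 1, 1, 0, 0, 1, 0]
stego = embed_lsb_witness(cover, secret)
assert np.array_equal(cover & 1, stego & 1)        # 1st LSB plane is untouched
assert extract_lsb_witness(stego, len(secret)) == secret
```

Because only the 2nd LSB ever changes, a steganalysis tool targeting the 1st LSB plane sees a statistically clean image, while the ±2 changes to pixel values account for the stego-quality cost the abstract mentions.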
54

Many-objective genetic type-2 fuzzy logic based workforce optimisation strategies for large scale organisational design

Starkey, Andrew J. January 2018 (has links)
Workforce optimisation aims to maximise the productivity of a workforce and is a crucial practice for large organisations. The more effective these workforce optimisation strategies are, the better placed the organisation is to meet its objectives. Usually, the focus of workforce optimisation is scheduling, routing and planning. These strategies are particularly relevant to organisations with large mobile workforces, such as utility companies, and there has been much research focused on these areas. One aspect of workforce optimisation that gets overlooked is organisational design. Organisational design aims to maximise the potential utilisation of all resources while minimising costs; if done correctly, other systems (scheduling, routing and planning) will be more effective. This thesis looks at organisational design, from geographical structures and team structures to skilling and resource management. A many-objective optimisation system to tackle large-scale optimisation problems is presented. The system employs interval type-2 fuzzy logic to handle the uncertainties in real-world data, such as travel times and task completion times. The proposed system was developed with data from British Telecom (BT) and was deployed within the organisation. The techniques presented at the end of this thesis led to a significant improvement of 31.07% over the standard NSGA-II algorithm, with a p-value of 1.86 × 10⁻¹⁰. The system has delivered a 0.5% increase in productivity in BT, saving an estimated £1 million a year, and cut fuel consumption by 2.9%, resulting in an additional saving of over £200k a year. Due to the lower fuel consumption, carbon dioxide (CO2) emissions have been reduced by 2,500 metric tonnes. Furthermore, a report by the United Kingdom's (UK's) Department for Transport found that for every billion vehicle miles travelled, there were 15,409 serious injuries or deaths. The system saved an estimated 7.7 million miles, equating to preventing more than 115 serious casualties and fatalities.
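At the core of NSGA-II and its many-objective descendants is Pareto dominance. A minimal sketch of the dominance test and non-dominated filtering (the objective names are illustrative assumptions; the abstract does not list the thesis's actual objective set):

```python
def dominates(a, b):
    """True if solution a Pareto-dominates b (all objectives to be minimised):
    a is no worse on every objective and strictly better on at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(population):
    """Return the Pareto front of a population of objective vectors."""
    return [p for p in population
            if not any(dominates(q, p) for q in population if q is not p)]

# Illustrative objective vectors: (travel time, unmet demand, cost), all minimised.
candidates = [(3.0, 1.0, 5.0), (2.0, 2.0, 4.0), (4.0, 1.5, 6.0), (2.5, 0.5, 5.5)]
print(non_dominated(candidates))  # the third vector is dominated and drops out
```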
55

Tutoring systems based on user-interface dialogue specification

Martin, Frank A. January 1990 (has links)
This thesis shows how the appropriate specification of a user interface to an application software package can be used as the basis for constructing a tutorial for teaching the use of that interface. An economy can hence be made by sharing the specification between the application development and tutorial development stages. The major part of the user-interface specification which is utilised, the task classification structure, must be transformed from an operational to a pedagogic ordering. Heuristics are proposed to achieve this, although human expertise is required to apply them. The reported approach is best suited to domains with hierarchically-ordered command sets. A portable rule-based shell has been developed in Common Lisp which supports the delivery of tutorials for a range of software application package interfaces. The use of both the shell and tutorials for two such interfaces is reported. A computer-based authoring environment provides support for tutorial development. The shell allows the learner of a software interface to interact directly with the application software being learnt while remaining under tutorial control. The learner can always interrupt in order to request a tutorial on any topic, although advice may be offered against this in the light of the tutor's current knowledge of the learner; this advice can always be overridden. The key-stroke sequences of the tutorial designer and the learner interacting with the package are parsed against an application model based on the task classification structure. Diagnosis is effected by a differential modelling technique applied to the structures generated by the parsing processes. The approach reported here is suitable for an unsupported software interface learner and is named LIY ('Learn It Yourself'). It provides a promising method for augmenting a software engineering tool-kit with a new technique for producing tutorials for application software.
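A minimal sketch of the parse-and-difference idea, with a hypothetical task model (the key-stroke patterns and task names are illustrative, not LIY's actual structures): both the designer's and the learner's key-strokes are parsed against the task classification, and diagnosis falls out of the difference between the two parses:

```python
# Hypothetical fragment of a task classification structure: each task is
# recognised by the command key-strokes that accomplish it.
TASK_MODEL = {
    ("ctrl-o",): "open-file",
    ("ctrl-f", "enter"): "search",
    ("ctrl-s",): "save-file",
}

def parse_tasks(keystrokes):
    """Greedily match a key-stroke sequence against the task model,
    yielding recognised tasks and flagging unrecognised input."""
    tasks, i = [], 0
    while i < len(keystrokes):
        for pattern, task in TASK_MODEL.items():
            if tuple(keystrokes[i:i + len(pattern)]) == pattern:
                tasks.append(task)
                i += len(pattern)
                break
        else:
            tasks.append(f"unrecognised:{keystrokes[i]}")
            i += 1
    return tasks

def differential_model(expert_keys, learner_keys):
    """Diagnose by differencing the expert's and learner's parsed task streams."""
    expert, learner = parse_tasks(expert_keys), parse_tasks(learner_keys)
    return [(e, l) for e, l in zip(expert, learner) if e != l]

# The learner searched where the designer's solution saved the file.
print(differential_model(["ctrl-o", "ctrl-s"], ["ctrl-o", "ctrl-f", "enter"]))
```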
56

Strategies and tools for the exploitation of massively parallel computer systems

Evans, Emyr Wyn January 2000 (has links)
The aim of this thesis is to develop software and strategies for the exploitation of parallel computer hardware, in particular distributed memory systems, and to embed these strategies within a parallelisation tool so that they can be generated automatically. The parallelisation of four structured mesh codes using the Computer Aided Parallelisation Tools provided a good initial parallelisation of the codes. However, investigation revealed that simple optimisation of the communications within these codes provided an even better improvement in performance. The dominant factor within the communications was the data transfer time, with communication start-up latencies also significant. This was significant throughout the codes, but especially in sections of pipelined code where large amounts of communication were present. This thesis describes the development and testing of methods to increase the performance of these communications by overlapping them with unrelated calculation. This overlapping was applied to data-exchange communications as well as to pipelined communications. Its successful application by hand provided the motivation for these methods to be incorporated and automatically generated within the Computer Aided Parallelisation Tools. The methods were integrated within the tools as an additional stage of the parallelisation, which required a generic algorithm making use of many of the symbolic algebra tests and symbolic variable manipulation routines within the tools. The automatic generation of overlapped communications was applied to the four codes previously parallelised as well as to a further three codes, one of which was a real-world Computational Fluid Dynamics code. Methods for applying the automatic generation of overlapped communications to unstructured mesh codes are also discussed; these are similar to those applied to the structured mesh codes, and their automation is expected to proceed in a similar fashion.
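A minimal sketch of the overlap strategy, assuming mpi4py and a one-dimensional array exchange for illustration (the thesis targets mesh codes, but the pattern is the same): the halo exchange is started with non-blocking calls, the interior update that needs no remote data runs during the transfer, and only the boundary update waits for completion:

```python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
left, right = (rank - 1) % size, (rank + 1) % size

local = np.random.rand(1000)
halo = np.empty(1, dtype=local.dtype)

# Non-blocking communication: send our boundary value, receive the neighbour's.
reqs = [comm.Isend(local[-1:], dest=right), comm.Irecv(halo, source=left)]

# Overlap: the interior update needs no remote data, so it runs during the transfer.
interior = 0.5 * (local[:-2] + local[2:])

MPI.Request.Waitall(reqs)                 # transfer must complete before the edge
boundary = 0.5 * (halo[0] + local[1])     # boundary cell now uses the received halo
```

The saving comes from hiding the data transfer time (the dominant cost identified above) behind the interior calculation rather than eliminating it.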
57

Improving the regulatory acceptance and numerical performance of CFD based fire-modelling software

Grandison, Angus Joseph January 2003 (has links)
The research of this thesis was concerned with practical aspects of Computational Fluid Dynamics (CFD) based fire modelling software, specifically its application and performance. Initially, a novel CFD-based fire suppression model was developed (FIREDASS). The FIREDASS (FIRE Detection And Suppression Simulation) programme was concerned with the development of water misting systems as a possible replacement for the halon-based fire suppression systems currently used in aircraft cargo holds and ship engine rooms. A set of procedures was developed to test the applicability of CFD fire modelling software, and this methodology was demonstrated on three CFD products that can be used for fire modelling purposes. The proposed procedure involved two phases. Phase 1 allowed comparison between different computer codes without the bias of the user or of specialist features that may exist in one code and not another, by rigidly defining the case set-up. Phase 2 allowed the software developer to perform the test using the best modelling features available in the code to best represent the scenario being modelled. In this way it was hoped to demonstrate that, in addition to achieving a common minimum standard of performance, the software products were also capable of achieving improved agreement with the experimental or theoretical results. A significant conclusion drawn from this work is that an engineer using the basic capabilities of any of the products tested would be likely to draw the same conclusions from the results irrespective of which product was used. From a regulator's view this is an important result, as it suggests that the quality of the predictions produced is likely to be independent of the tool used, at least in situations where the basic capabilities of the software are used. Most previous work on parallel processing has focussed on the use of specialised proprietary hardware, generally based around the UNIX operating system. The majority of engineering firms that would benefit from the reduced timeframes offered by parallel processing rarely have access to such specialised systems. However, in recent years, with the increasing power of individual office PCs and the improved performance of Local Area Networks (LANs), it has come to the point where parallel processing can be usefully employed in a typical office environment where many such PCs may be connected to a LAN. Harnessing this power for fire modelling has great promise. Modern low-cost supercomputers are now typically constructed from commodity PC motherboards connected via a dedicated high-speed network, yet virtually no work has been published on using office-based PCs connected via a LAN in a parallel manner on real applications. The SMARTFIRE fire field model was modified to utilise multiple PCs on a typical office-based LAN. It was found that good speedup could be achieved on homogeneous PCs: for example, a problem composed of approximately 100,000 cells ran on a network of 12 PCs with a speedup of 9.3 over a single PC. A dynamic load balancing scheme was devised to allow the effective use of the software on heterogeneous PC networks. This scheme ensured that the impact of the parallel processing on other computer users was minimised, and likewise minimised the impact of other computer users on the parallel processing performed by the FSE.
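A minimal sketch of one simple dynamic load-balancing policy consistent with the description (the proportional rule here is an illustrative assumption, not necessarily SMARTFIRE's actual scheme): mesh cells are redistributed in proportion to the cell-update rate each PC achieved in the previous iteration, so a PC slowed by an interactive user automatically sheds work:

```python
def rebalance(total_cells, measured_rates):
    """Split the mesh across heterogeneous PCs in proportion to the cell-update
    rate (cells/second) each one achieved in the previous iteration."""
    total_rate = sum(measured_rates)
    shares = [int(total_cells * r / total_rate) for r in measured_rates]
    shares[0] += total_cells - sum(shares)   # give the rounding remainder to one PC
    return shares

# A loaded office PC (low measured rate) receives far fewer cells than idle ones.
print(rebalance(100_000, [520.0, 510.0, 130.0]))   # -> [44829, 43965, 11206]
```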
58

Columbus : a solution using metadata for integrating document management, project hosting and document control in the construction industry

Herrero, Juan Jose January 2003 (has links)
This thesis presents a solution for integrating document handling technologies within the construction industry, using metadata in a novel way and providing a working solution in the form of an application called Columbus. The research analyses in detail the problem of project collaboration, concentrating on the usage of document management, project hosting and document control systems as important enabling technologies. The creation, exchange and recording of information are addressed as key factors in a unified document handling solution. Metadata is exploited as a technology providing for effective open information exchange within and between project participants, and the technical issues relating to the use of metadata are addressed at length. The Columbus application is presented as a working solution to this problem. Columbus is currently used by over 20,000 organisations in 165 countries and has become a standard for information exchange. The main benefit of Columbus has been in getting other project participants to send metadata with their electronic documents and in dealing with project archival. This has worked very well on numerous projects, saving countless man-hours of data input time, document cataloguing and searching. The application is presented in detail from both commercial and technical perspectives and is shown to be an open solution which can be extended by third parties. The commercial success of Columbus is discussed by means of a number of reviews and case studies that cover its usage within the industry. In 2000, it was granted an Institution of Civil Engineers' Special Award in recognition of its contribution to the Latham and Egan initiatives for facilitating information exchange within the construction industry.
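To make the idea of sending metadata alongside electronic documents concrete, a minimal sketch of the kind of record involved (every field name here is hypothetical; the abstract does not describe Columbus's actual schema):

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DocumentMetadata:
    """A hypothetical metadata record exchanged alongside a construction-project
    document, enabling automatic cataloguing and search at the receiving end."""
    document_id: str
    title: str
    project: str
    originator: str
    revision: str
    status: str            # e.g. "For Approval", "For Construction"
    issued: date
    keywords: list = field(default_factory=list)

record = DocumentMetadata(
    document_id="DRG-0042", title="Foundation layout", project="Site A",
    originator="Acme Consulting", revision="C", status="For Approval",
    issued=date(2003, 5, 12), keywords=["structural", "foundations"],
)
```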
59

Applying case based reasoning and structural similarity for effective retrieval of expert knowledge from software designs

Wolf, Markus Adrian January 2012 (has links)
Due to the proliferation of object-oriented software development, UML software designs are ubiquitous. The creation of software designs already enjoys wide software support through CASE (Computer-Aided Software Engineering) tools. However, there has been limited application of computer reasoning to software designs in other areas. Yet there is expert knowledge embedded in software design artefacts which could be useful if it were successfully retrieved; thus, there is a need for automated support for expert knowledge retrieval from software design artefacts. A software design is an abstract representation of a software product and, in the case of a class diagram, contains information about its structure. It is therefore possible to extract knowledge about a software application from its design. For a human expert, an important aspect of a class diagram is the set of semantic tags associated with each composing element, as these provide a link to the concept each element represents. For implemented code, however, the semantic tags have no bearing. The focus of this research has been on the question of whether it is possible to retrieve knowledge from class diagrams in the absence of semantic information. This thesis formulates an approach which combines case-based reasoning with graph matching to retrieve knowledge from class diagrams using only structural information. The practical applicability of this research has been demonstrated in the areas of cost estimation and plagiarism detection. It was shown that by applying case-based reasoning and graph matching to measure similarity between class diagrams, it is possible to identify properties of an implementation not encoded within the actual diagram, such as the domain, programming language, quality and implementation cost. An approach for increasing users' confidence in automatic class diagram matching by providing explanations is also presented. The findings show that the technique applied here can contribute to industry and academia alike in obtaining solutions from class diagrams where semantic information is lacking. The approach presented here, as well as its evaluation, was automated through the development of the UMLSimilator software tool.
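A minimal sketch of purely structural matching between class diagrams, assuming networkx (graph edit distance stands in here for whatever matching algorithm the thesis actually uses): class names are discarded, and only the shape of the relationship graph is compared:

```python
import networkx as nx

def class_diagram_graph(n_classes, relations):
    """Represent a class diagram purely structurally: anonymous nodes for
    classes, typed edges for relationships; semantic tags are discarded."""
    g = nx.DiGraph()
    g.add_nodes_from(range(n_classes))
    for src, dst, kind in relations:   # kind: "assoc", "inherit", "aggregate"
        g.add_edge(src, dst, kind=kind)
    return g

# Two diagrams with different class names and wiring but the same shape.
g1 = class_diagram_graph(4, [(1, 0, "inherit"), (2, 0, "inherit"), (3, 1, "assoc")])
g2 = class_diagram_graph(4, [(1, 0, "inherit"), (2, 0, "inherit"), (3, 2, "assoc")])

# Graph edit distance as structural dissimilarity (0 = identical shape);
# edges only match when their relationship kinds agree.
dist = nx.graph_edit_distance(g1, g2, edge_match=lambda a, b: a["kind"] == b["kind"])
print(dist)   # 0.0 here: the diagrams are isomorphic despite different labels
```

Because node identity plays no part in the match, two diagrams produced by different authors (or plagiarised with renamed classes) can still be recognised as structurally similar.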
60

Designing and evaluating information spaces : a navigational perspective

McCall, Roderick January 2003 (has links)
Navigation in two- and three-dimensional electronic environments has become an important usability issue. Research into the use of hypertext systems would appear to suggest that people suffer from a variety of navigational problems in these environments; users also encounter problems in 3D environments and in applications software. Therefore, in order to enhance ease of use, both by preventing errors and by making interaction more pleasurable, the navigating-in-information-space approach to HCI has been adopted. The research presented in this thesis examines whether the study of real-world environments, in particular aspects of the built environment, urban planning and environmental psychology, is beneficial in the development of guidelines for interface design and evaluation. In doing so, the thesis examines three main research questions: (1) is there a transfer of design knowledge from real to electronic spaces? (2) can the concepts be provided as a series of useful guidelines? (3) are the guidelines useful for the design and evaluation of electronic spaces? Based upon the results of the two main studies contained within this thesis, it is argued that the navigational perspective is relevant to user interface design and evaluation, and that navigation in electronic spaces is comparable to, but not identical with, action within the real world. Moreover, the studies pointed to the validity of the core concepts when evaluating 2D and 3D spaces and designing 3D spaces. The thesis also points to the relevance of the overall design guidance in 2D and 3D environments and to the ability to make such information available through a software tool.
