51

Context aware Web-service monitoring

Contreas, Ricardo January 2013 (has links)
Monitoring the correct behaviour of a service-based system is a necessity and a key challenge in Service Oriented Computing. Several efforts have been directed towards the development of approaches dealing with the monitoring of service-based systems. However, these approaches are in general not suitable for dealing with modifications in service-based systems. Furthermore, existing monitoring approaches do not take into consideration the context of the users and how this context may affect the monitoring activity. Consequently, a holistic monitoring approach, capable of dealing with the dynamic nature of service-based systems and of taking into consideration the user context, would be highly desirable. In this thesis we present a monitor adaptation framework capable of dealing with changes in a service-based system and with the different types of users interacting with it. More specifically, the framework obtains the set of monitor rules necessary to verify the correct behaviour of a service-based system for a particular user. Moreover, the monitor rules verifying the behaviour of a service-based system relate to properties of the context types defined for a user. The main contributions of our work include the general characterisation of a user interacting with a service-based system and the generation of suitable monitor rules. The proposed framework can be applied to any service composition without the need for further modifications. Our work complements previous research carried out in the area of web service monitoring. More specifically, our work generates a set of suitable monitor rules - related to the user context - which are deployed in a run-time monitor component. Our framework has been tested and validated in several cases considering different scenarios.
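To make the idea concrete, the sketch below is an editorial illustration only (not the thesis's rule notation); the context types and thresholds are hypothetical. It shows how a monitor rule might be selected and parameterised by a user's context before being deployed.

```python
# Illustrative only: a monitoring rule parameterised by user context.
from dataclasses import dataclass

@dataclass
class UserContext:
    user_type: str   # e.g. "mobile" or "desktop" -- hypothetical context types
    location: str

# Hypothetical rule set: per-context response-time thresholds in seconds.
RESPONSE_TIME_RULES = {"mobile": 2.0, "desktop": 0.5}

def monitor_rules_for(ctx: UserContext):
    """Return the monitoring checks to deploy for this user's context."""
    limit = RESPONSE_TIME_RULES.get(ctx.user_type, 1.0)
    def check_response_time(event):
        # event describes one observed service interaction
        return event["response_time"] <= limit
    return [check_response_time]

# Example: verify one observed interaction for a mobile user.
rules = monitor_rules_for(UserContext(user_type="mobile", location="UK"))
print(all(rule({"response_time": 1.4}) for rule in rules))   # True
```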
52

The Agile Web Engineering (AWE) process

McDonald, Andrew Gregory January 2004 (has links)
During the late 1990s commerce and academia voiced major concerns about the problems with development processes for Web Engineering. These concerns primarily centred upon the perceived chaotic and 'ad-hoc' approach to developing Web-based applications in extremely short time-scales when compared to traditional software development. Based on personal experience, conducting a survey of current practice, and collecting supporting evidence from the literature, I proposed a set of seven criteria that need to be addressed by a successful Web engineering process: 1. Short development life-cycle times; 2. Delivery of bespoke solutions and different business models; 3. Multidisciplinary development teams; 4. Small development teams working in parallel on similar tasks; 5. Business analysis and evaluation with end-users; 6. Requirements capture and rigorous testing; 7. Maintenance (evolution) of Web-based applications. These seven criteria are discussed in detail and the relevance of each to Web engineering is justified. They are then used to provide a framework to assess the suitability of a representative sample of well-known software engineering processes for Web engineering. The software engineering processes assessed comprise: the Unified Software Development Process; Dynamic Systems Development Method; and eXtreme Programming. These seven criteria were also used to motivate the definition of the Agile Web Engineering (AWE) process. AWE is based on the principles given in the Agile Manifesto and is specifically designed to address the major issues in Web Engineering, listed above. A number of other processes for Web Engineering have been proposed and a sample of these is systematically compared against the criteria given above. The Web engineering processes assessed are: Collaborative Web Development; Crystal Orange Web; Extensions to the Rational Unified Process; and Web OPEN. In order to assess the practical application of AWE, two commercial pilot projects were carried out in a Fortune 500 financial service sector company. The first commercial pilot of AWE increased end-user task completion on a retail Internet banking application from 47% to 79%. The second commercial pilot of AWE, used by an Intranet development team, won the company's global technology prize for 'value add' for 2003. In order to assess the effect of AWE within the company, three surveys were carried out: an initial survey to establish current development practice within the company and two further surveys, one after each of the pilot projects. Despite the success of both pilots, AWE was not officially adopted by the company for Web-based projects. My surveys showed that this was primarily because there are significant cultural hurdles and organisational inertia to adopting different process approaches for different types of software development activity within the company. If other large companies, similar to the one discussed in this dissertation, are to adopt AWE, or other processes specific to Web engineering, then many will have to change their corporate goal of a one-size-fits-all process approach for all software technology projects.
53

Extracting place semantics from geo-folksonomies

Elgindy, Ehab January 2013 (has links)
Massive interest in geo-referencing of personal resources is evident on the web. People are collaboratively digitising maps and building place knowledge resources that document personal use and experiences in geographic places. Understanding and discovering these place semantics can potentially lead to the development of a different type of place gazetteer that holds not only standard information of place names and geographic location, but also activities practiced by people in a place and vernacular views of place characteristics. The main contributions of this research are as follows. A novel framework is proposed for the analysis of geo-folksonomies and the automatic discovery of place-related semantics. The framework is based on a model of geographic place that extends the definition of place as defined in traditional gazetteers and geospatial ontologies to include the notion of place affordance. A method of clustering place resources to overcome the inaccuracy and redundancy inherent in the geo-folksonomy structure is developed and evaluated. Reference ontologies are created and used in a tag resolution stage to discover place-related concepts of interest. Folksonomy analysis techniques are then used to create a place ontology and its component type and activity ontologies. The resulting concept ontologies are compared with an expert ontology of place types and activities and evaluated through a user questionnaire. To demonstrate the utility of the proposed framework, an application is developed to illustrate the possible enrichment of the search experience by exposing the derived semantics to users of web mapping applications. Finally, the value of using the discovered place semantics is also demonstrated by proposing two semantics-based similarity approaches: user similarity and place similarity. The validity of the approaches was confirmed by the results of an experiment conducted on a realistic folksonomy dataset.
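As a rough illustration of the place-similarity idea (an editorial sketch; the concept names and counts are invented, not taken from the thesis dataset), places can be represented by the activity concepts derived from their tags and compared by cosine similarity of those concept vectors.

```python
# Illustrative only: compare places by the activity concepts derived from tags.
from collections import Counter
from math import sqrt

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

park  = Counter({"walking": 12, "picnic": 5, "cycling": 3})
trail = Counter({"walking": 9, "cycling": 7, "birdwatching": 2})
cafe  = Counter({"coffee": 10, "meeting": 4})

print(round(cosine(park, trail), 3))   # relatively similar affordances
print(round(cosine(park, cafe), 3))    # 0.0 -- no shared activity concepts
```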
54

Infrastructure support for adaptive mobile applications

Friday, Adrian January 1996 (has links)
Recent growth in the number and quality of wireless network technologies has led to an increased interest in mobile computing. Furthermore, these technologies have now advanced sufficiently to allow 'advanced applications' to be engineered. Applications such as these are characterised by complex patterns of distribution and interaction, support for collaboration and multimedia data, and are typically required to operate over heterogeneous networks and end-systems. Given these operating requirements, it is the author's contention that advanced applications must adapt their behaviour in response to changes in their environment in order to operate effectively. Such applications are termed adaptive applications. This thesis investigates the support required by advanced applications to facilitate operation in heterogeneous networked environments. A set of generic techniques is presented that enables existing distributed systems platforms to provide support for adaptive applications. These techniques are based on the provision of a QoS framework and a supporting infrastructure comprising a new remote procedure call package and supporting services. The QoS framework centres on the ability to establish explicit bindings between objects. Explicit bindings enable application requirements to be specified and provide a handle through which applications can exert control and, more significantly, be informed of violations of the requested QoS. These QoS violations enable the applications to discover changes in their underlying environment and offer them the opportunity to adapt. The proposed architecture is validated through an implementation of the framework based on an existing distributed systems platform. The resulting architecture is used to underpin a novel collaborative mobile application aimed at supporting field workers within the utilities industry. The application in turn is used as a measure to gauge the effectiveness of the support provided by the platform. In addition, the design, implementation and evaluation of the application are used throughout the thesis to illustrate various aspects of platform support.
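The following sketch illustrates the explicit-binding idea in miniature; it is not the platform's actual API, and the class and method names are assumptions made for illustration. A binding carries the requested QoS and notifies the application when a monitored value violates it, giving the application a chance to adapt.

```python
# Illustrative only: an explicit binding with a QoS-violation callback.
from typing import Callable

class ExplicitBinding:
    def __init__(self, max_latency_ms: float,
                 on_violation: Callable[[float], None]):
        self.max_latency_ms = max_latency_ms   # requested QoS
        self.on_violation = on_violation       # application's adaptation hook

    def report_latency(self, observed_ms: float):
        """Called by the infrastructure with each observed latency."""
        if observed_ms > self.max_latency_ms:
            self.on_violation(observed_ms)

# The application adapts, e.g. by switching to a lower-fidelity media stream.
def adapt(observed_ms: float):
    print(f"QoS violated ({observed_ms} ms) -- switching to low-bandwidth mode")

binding = ExplicitBinding(max_latency_ms=200, on_violation=adapt)
binding.report_latency(150)   # within bounds, no callback
binding.report_latency(450)   # triggers adaptation
```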
55

Robust steganographic techniques for secure biometric-based remote authentication

Rashid, Rasber Dhahir January 2015 (has links)
Biometrics are widely accepted as the most reliable proof of identity, entitlement to services, and for crime-related forensics. Using biometrics for remote authentication is becoming an essential requirement for the development of the knowledge-based economy in the digital age. Ensuring the security and integrity of biometric data or templates is critical to the success of deployment, especially because once the data is compromised the whole authentication system is compromised, with serious consequences for identity theft, fraud and loss of privacy. Protecting biometric data, whether stored in databases or transmitted over an open network channel, is a serious challenge and cryptography may not be the answer. The main premise of this thesis is that digital steganography can provide alternative security solutions that can be exploited to deal with the biometric transmission problem. The main objective of the thesis is to design, develop and test steganographic tools to support remote biometric authentication. We focus on investigating the selection of biometric feature representations suitable for hiding in natural cover images and on designing steganography systems that are specific for hiding such biometric data rather than being general purpose. The embedding schemes are expected to have high security characteristics, resistant to several types of steganalysis tools, and to maintain the accuracy of recognition post embedding. We limit our investigations to embedding face biometrics, but the same challenges and approaches should help in developing similar embedding schemes for other biometrics. To achieve this, our investigations and proposals proceed in several directions, which are explained in the rest of this section. Reviewing the literature on the state of the art in steganography has revealed a rich source of theoretical work and creative approaches that have helped generate a variety of embedding schemes as well as steganalysis tools, but almost all focused on embedding random-looking secrets. The review greatly helped in identifying the main challenges in the field and the main criteria for success, in terms of difficult-to-reconcile requirements on embedding capacity, efficiency of embedding, robustness against steganalysis attacks, and stego image quality. On the biometrics front the review revealed another rich source of different face biometric feature vectors. The review helped shape our primary objectives as (1) identifying a binarised face feature vector with high discriminating power that is amenable to embedding in images, (2) developing special-purpose content-based steganography schemes that can benefit from the well-defined structure of the face biometric data in the embedding procedure while preserving accuracy without leaking information about the source biometric data, and (3) conducting sufficient sets of experiments to test the performance of the developed schemes, highlighting the advantages as well as the limitations, if any, of the developed system with regard to the above-mentioned criteria. We argue that the well-known LBP histogram face biometric scheme satisfies the desired properties, and we demonstrate that our new, more efficient wavelet-based versions, called LBPH patterns, are much more compact and have improved accuracy. In fact the wavelet-based schemes reduce the number of features by 22% to 72% compared with the original LBP scheme, guaranteeing better invisibility post embedding. We then develop two steganographic schemes.
The first, LSB-witness, is a general-purpose scheme that avoids changing the LSB plane, guaranteeing robustness against targeted steganalysis tools, and establishes the viability of using steganography for remote biometric-based recognition. However, it may modify the 2nd LSB of cover pixels as a witness for the presence of the secret bits in the 1st LSB, and thereby has some disadvantages with regard to stego image quality. Our search for a new scheme that exploits the structure of the secret face LBPH patterns for improved stego image quality led to the development of the first content-based steganography scheme. Embedding is guided by searching for similarities between the LBPH patterns and the structure of the cover image LSB bit-planes partitioned into 8-bit or 4-bit patterns. We demonstrate the benefits of the content-based embedding scheme in terms of improved stego image quality, greatly reduced payload, a reduced lower bound on optimal embedding efficiency, and robustness against all targeted steganalysis tools. Unfortunately, our scheme was not robust against the blind or universal SRM steganalysis tool. However, we demonstrated robustness against SRM at low payloads when the scheme was modified to restrict embedding to edge and textured pixels. The low payload in this case is sufficient to embed the full set of secret face LBPH patterns. Our work opens exciting new opportunities to build successful real applications of content-based steganography and presents plenty of research challenges.
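The sketch below shows one plausible reading of the LSB-witness idea described above: the first LSB plane of the cover is left untouched, and the second LSB of each used pixel records whether the cover's first LSB already equals the secret bit. It is an editorial illustration, not the thesis's exact algorithm.

```python
# Illustrative only: LSB-witness embedding that never alters the 1st LSB plane.
def embed_lsb_witness(pixels, secret_bits):
    stego = list(pixels)
    for i, bit in enumerate(secret_bits):
        match = (stego[i] & 1) == bit                       # cover LSB equals secret bit?
        stego[i] = (stego[i] & ~2) | (2 if match else 0)    # write witness into 2nd LSB
    return stego

def extract_lsb_witness(stego, n_bits):
    bits = []
    for i in range(n_bits):
        witness = (stego[i] >> 1) & 1
        lsb = stego[i] & 1
        bits.append(lsb if witness else 1 - lsb)   # witness says whether LSB is the bit
    return bits

cover = [120, 83, 77, 200, 54, 99]       # toy grey-level pixel values
secret = [1, 0, 1, 1, 0, 0]
stego = embed_lsb_witness(cover, secret)
assert extract_lsb_witness(stego, len(secret)) == secret
assert all((c & 1) == (s & 1) for c, s in zip(cover, stego))  # LSB plane unchanged
```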
56

Many-objective genetic type-2 fuzzy logic based workforce optimisation strategies for large scale organisational design

Starkey, Andrew J. January 2018 (has links)
Workforce optimisation aims to maximise the productivity of a workforce and is a crucial practice for large organisations. The more effective these workforce optimisation strategies are, the better placed the organisation is to meet its objectives. Usually, the focus of workforce optimisation is scheduling, routing and planning. These strategies are particularly relevant to organisations with large mobile workforces, such as utility companies, and there has been much research focused on these areas. One aspect of workforce optimisation that gets overlooked is organisational design. Organisational design aims to maximise the potential utilisation of all resources while minimising costs. If done correctly, other systems (scheduling, routing and planning) will be more effective. This thesis looks at organisational design, from geographical structures and team structures to skilling and resource management. A many-objective optimisation system to tackle large-scale optimisation problems is presented. The system employs interval type-2 fuzzy logic to handle the uncertainties in real-world data, such as travel times and task completion times. The proposed system was developed with data from British Telecom (BT) and was deployed within the organisation. The techniques presented at the end of this thesis led to a very significant improvement of 31.07% over the standard NSGA-II algorithm, with a p-value of 1.86 × 10^-10. The system has delivered an increase in productivity in BT of 0.5%, saving an estimated £1 million a year, and has cut fuel consumption by 2.9%, resulting in an additional saving of over £200k a year. Due to the lower fuel consumption, carbon dioxide (CO2) emissions have been reduced by 2,500 metric tonnes. Furthermore, a report by the United Kingdom's (UK's) Department of Transport found that for every billion vehicle miles travelled, there were 15,409 serious injuries or deaths. The system saved an estimated 7.7 million miles, equating to preventing more than 115 serious casualties and fatalities.
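As an illustration of how interval type-2 fuzzy logic can represent uncertain quantities such as travel times (the set and its parameters below are invented for this sketch, not BT's deployed model), a membership grade becomes an interval bounded by a lower and an upper membership function.

```python
# Illustrative only: an interval type-2 fuzzy set for "short travel time".
def tri(x, a, b, c):
    """Standard triangular membership function."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def short_travel_time(minutes):
    upper = tri(minutes, 0, 15, 40)        # upper membership function
    lower = 0.8 * tri(minutes, 0, 15, 30)  # lower MF, inside the footprint of uncertainty
    return (lower, upper)                  # interval membership grade

for t in (10, 20, 35):
    lo, hi = short_travel_time(t)
    print(f"{t} min -> membership in [{lo:.2f}, {hi:.2f}]")
```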
57

Tutoring systems based on user-interface dialogue specification

Martin, Frank A. January 1990 (has links)
This thesis shows how the appropriate specification of a user interface to an application software package can be used as the basis for constructing a tutorial for teaching the use of that interface. An economy can hence be made by sharing the specification between the application development and tutorial development stages. The major part of the user-interface specification which is utilised, the task classification structure, must be transformed from an operational to a pedagogic ordering. Heuristics are proposed to achieve this, although human expertise is required to apply them. The reported approach is best suited to domains with hierarchically ordered command sets. A portable rule-based shell has been developed in Common Lisp which supports the delivery of tutorials for a range of software application package interfaces. The use of both the shell and tutorials for two such interfaces is reported. A computer-based authoring environment provides support for tutorial development. The shell allows the learner of a software interface to interact directly with the application software being learnt while remaining under tutorial control. The learner can always interrupt in order to request a tutorial on any topic, although advice may be offered against this in the light of the tutor's current knowledge of the learner. This advice can always be overridden. The key-stroke sequences of the tutorial designer and the learner interacting with the package are parsed against an application model based on the task classification structure. Diagnosis is effected by a differential modelling technique applied to the structures generated by the parsing processes. The approach reported here is suitable for an unsupported software interface learner and is named LIY ('Learn It Yourself'). It provides a promising method for augmenting a software engineering tool-kit with a new technique for producing tutorials for application software.
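A toy sketch of the differential-modelling idea follows; the task and command names are hypothetical and real diagnosis would be considerably richer. The learner's parsed trace is compared against the trace expected from the application model, and the differences drive the diagnostic remarks.

```python
# Illustrative only: diagnosis from differences between expected and observed traces.
def diagnose(expected_steps, learner_steps):
    """Return diagnostic remarks derived from the differences between traces."""
    remarks = []
    for i, expected in enumerate(expected_steps):
        actual = learner_steps[i] if i < len(learner_steps) else None
        if actual is None:
            remarks.append(f"step {i + 1}: '{expected}' was never attempted")
        elif actual != expected:
            remarks.append(f"step {i + 1}: used '{actual}' where '{expected}' was expected")
    return remarks

expected = ["open-file", "select-region", "apply-style", "save-file"]
learner  = ["open-file", "apply-style", "save-file"]
for remark in diagnose(expected, learner):
    print(remark)
```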
58

Strategies and tools for the exploitation of massively parallel computer systems

Evans, Emyr Wyn January 2000 (has links)
The aim of this thesis is to develop software and strategies for the exploitation of parallel computer hardware, in particular distributed memory systems, and to embed these strategies within a parallelisation tool so that they can be generated automatically. The parallelisation of four structured mesh codes using the Computer Aided Parallelisation Tools provided a good initial parallelisation of the codes. However, investigation revealed that simple optimisation of the communications within these codes provided a further improvement in performance. The dominant factor within the communications was the data transfer time, with communication start-up latencies also being significant. This was significant throughout the codes, but especially in sections of pipelined code where large amounts of communication were present. This thesis describes the development and testing of the methods used to increase the performance of these communications by overlapping them with unrelated calculation. This method of overlapping the communications was applied to the exchange of data communications as well as to the pipelined communications. The successful application of these methods by hand provided the motivation for them to be incorporated and automatically generated within the Computer Aided Parallelisation Tools. These methods were integrated within the tools as an additional stage of the parallelisation. This required a generic algorithm that made use of many of the symbolic algebra tests and symbolic variable manipulation routines within the tools. The automatic generation of overlapped communications was applied to the four codes previously parallelised as well as to a further three codes, one of which was a real-world Computational Fluid Dynamics code. Methods to apply the automatic generation of overlapped communications to unstructured mesh codes are also discussed. These methods are similar to those applied to the structured mesh codes, and their automation is expected to follow a similar approach.
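The sketch below illustrates the general technique of overlapping halo-exchange communication with unrelated calculation. It uses mpi4py and a one-dimensional strip purely as a stand-in; it is an analogy for the idea, not the code generated by the tools described in the thesis.

```python
# Illustrative only: start non-blocking halo exchange, compute the interior
# while messages are in flight, then finish the boundary points.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
left, right = (rank - 1) % size, (rank + 1) % size

local = np.random.rand(1000)               # locally owned strip of the mesh
halo_lo = np.empty(1, dtype=np.float64)
halo_hi = np.empty(1, dtype=np.float64)

# 1. Start the non-blocking halo exchange.
reqs = [
    comm.Isend(local[:1],  dest=left,  tag=0),
    comm.Isend(local[-1:], dest=right, tag=1),
    comm.Irecv(halo_lo, source=left,  tag=1),
    comm.Irecv(halo_hi, source=right, tag=0),
]

# 2. Overlap: update interior points that do not depend on halo data.
interior = 0.5 * (local[:-2] + local[2:])

# 3. Complete the communication, then update the boundary points.
MPI.Request.Waitall(reqs)
new = np.empty_like(local)
new[1:-1] = interior
new[0]  = 0.5 * (halo_lo[0] + local[1])
new[-1] = 0.5 * (local[-2] + halo_hi[0])
```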
59

Improving the regulatory acceptance and numerical performance of CFD based fire-modelling software

Grandison, Angus Joseph January 2003 (has links)
The research in this thesis was concerned with practical aspects of Computational Fluid Dynamics (CFD) based fire modelling software, specifically its application and performance. Initially, a novel CFD-based fire suppression model was developed (FIREDASS). The FIREDASS (FIRE Detection And Suppression Simulation) programme was concerned with the development of water misting systems as a possible replacement for halon-based fire suppression systems currently used in aircraft cargo holds and ship engine rooms. A set of procedures was developed to test the applicability of CFD fire modelling software. This methodology was demonstrated on three CFD products that can be used for fire modelling purposes. The proposed procedure involved two phases. Phase 1 allowed comparison between different computer codes without the bias of the user or of specialist features that may exist in one code and not another, by rigidly defining the case set-up. Phase 2 allowed the software developer to perform the test using the best modelling features available in the code to best represent the scenario being modelled. In this way it was hoped to demonstrate that, in addition to achieving a common minimum standard of performance, the software products were also capable of achieving improved agreement with the experimental or theoretical results. A significant conclusion drawn from this work suggests that an engineer using the basic capabilities of any of the products tested would be likely to draw the same conclusions from the results irrespective of which product was used. From a regulator's view, this is an important result, as it suggests that the quality of the predictions produced is likely to be independent of the tool used - at least in situations where the basic capabilities of the software are used. The majority of this work has focussed on the use of specialised proprietary hardware, generally based around the UNIX operating system. The majority of engineering firms that would benefit from the reduced timeframes offered by parallel processing rarely have access to such specialised systems. However, in recent years, with the increasing power of individual office PCs and the improved performance of Local Area Networks (LANs), it has come to the point where parallel processing can be usefully utilised in a typical office environment where many such PCs may be connected to a LAN. Harnessing this power for fire modelling has great promise. Modern low-cost supercomputers are now typically constructed from commodity PC motherboards connected via a dedicated high-speed network. However, virtually no work has been published on using office-based PCs connected via a LAN in a parallel manner on real applications. The SMARTFIRE fire field model was modified to utilise multiple PCs on a typical office-based LAN. It was found that good speedup could be achieved on homogeneous PCs; for example, a problem composed of approximately 100,000 cells ran on a network of 12 PCs with a speedup of 9.3 over a single PC. A dynamic load balancing scheme was devised to allow the effective use of the software on heterogeneous PC networks. This scheme also ensured that the impact of the parallel processing on other computer users was minimised, and likewise minimised the impact of other computer users on the parallel processing performed by the FSE.
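The following sketch (an editorial illustration, not the SMARTFIRE implementation) shows the kind of dynamic load balancing described above: cells are redistributed in proportion to each PC's measured processing rate, so faster or less heavily loaded machines receive more work on the next iteration.

```python
# Illustrative only: redistribute mesh cells in proportion to measured processing rates.
def rebalance(total_cells, measured_times, current_cells):
    # processing rate of each PC = cells processed / wall-clock time taken
    rates = [c / t for c, t in zip(current_cells, measured_times)]
    total_rate = sum(rates)
    # assign cells proportionally to rate; last PC absorbs the rounding remainder
    shares = [int(total_cells * r / total_rate) for r in rates]
    shares[-1] += total_cells - sum(shares)
    return shares

# Example: 100,000 cells across 3 PCs; PC 3 is busy, so its previous slice took longer.
print(rebalance(100_000,
                measured_times=[10.0, 10.0, 20.0],
                current_cells=[33_000, 33_000, 34_000]))
```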
60

Columbus: a solution using metadata for integrating document management, project hosting and document control in the construction industry

Herrero, Juan Jose January 2003 (has links)
This thesis presents a solution for integrating document handling technologies within the construction industry, using metadata in a novel way, and provides a working implementation in the form of an application called Columbus. The research analyses in detail the problem of project collaboration. It concentrates on the usage of document management, project hosting and document control systems as important enabling technologies. The creation, exchange and recording of information are addressed as key factors for having a unified document handling solution. Metadata is exploited as a technology providing for effective open information exchange within and between project participants. The technical issues relating to the use of metadata are addressed at length. The Columbus application is presented as a working solution to this problem. Columbus is currently used by over 20,000 organisations in 165 countries and has become a standard for information exchange. The main benefit of Columbus has been in getting other project participants to send metadata with their electronic documents and in dealing with project archival. This has worked very well on numerous projects, saving countless man-hours of data input time, document cataloguing and searching. The application is presented in detail from both commercial and technical perspectives and is shown as an open solution, which can be extended by third parties. The commercial success of Columbus is discussed by means of a number of reviews and case studies that cover its usage within the industry. In 2000, it was granted an Institution of Civil Engineers' Special Award in recognition of its contribution to the Latham and Egan initiatives for facilitating information exchange within the construction industry.
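For illustration only, a document exchanged between project participants might carry a metadata record along the following lines; the field names are hypothetical and are not Columbus's actual schema.

```python
# Illustrative only: a document metadata record that travels with the document.
import json

document_metadata = {
    "document_id": "DRG-0042",
    "title": "Level 2 floor plan",
    "revision": "C",
    "status": "For construction",
    "originator": "Example Architects Ltd",
    "project": "Example Office Development",
    "issued_to": ["Main Contractor", "Structural Engineer"],
    "issue_date": "2003-05-14",
}

# Serialising the record lets it be exchanged and archived alongside the document itself.
print(json.dumps(document_metadata, indent=2))
```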
