11

Smart object, not smart environment : cooperative augmentation of smart objects using projector-camera systems

Molyneaux, David January 2008
Smart objects research explores embedding sensing and computing into everyday objects - augmenting objects to be a source of information on their identity, state, and context in the physical world. A major challenge for the design of smart objects is to preserve their original appearance, purpose and function. Consequently, many research projects have focussed on adding input capabilities to objects, while neglecting the requirement for an output capability that would provide a balanced interface. This thesis presents a new approach to adding output capability: smart objects cooperating with projector-camera systems. The concept of Cooperative Augmentation enables the knowledge required for visual detection, tracking and projection on smart objects to be embedded within the object itself. This allows projector-camera systems to provide generic display services that any smart object can use spontaneously to achieve non-invasive, interactive projected displays on its surfaces. Smart objects cooperate to achieve this by describing their appearance directly to the projector-camera systems and by using embedded sensing to constrain the visual detection process. We investigate natural-appearance vision-based detection methods and perform an experimental study specifically analysing the increase in detection performance achieved with movement sensing in the target object. We find that detection performance increases significantly with sensing, indicating that the combination of different sensing modalities is important, and that different objects require different appearance representations and detection methods. These studies inform the design and implementation of a system architecture which serves as the basis for three applications demonstrating visual detection, integration of sensing, projection, interaction with displays and knowledge updating. The displays achieved with Cooperative Augmentation allow any smart object to deliver visual feedback to users from implicit and explicit interaction with information represented or sensed by the physical object, supporting objects as both input and output media simultaneously. This contributes to the central vision of Ubiquitous Computing by enabling users to address tasks in physical space with direct manipulation and receive feedback on the objects themselves, where it belongs in the real world.
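As a rough illustration of the sensing-constrained detection described above, the sketch below filters visual detection candidates by whether the object's own movement sensor reports motion. All names, fields and thresholds are hypothetical, chosen for illustration rather than taken from the thesis.

```python
# Illustrative sketch: a projector-camera system pruning visual detection
# candidates using movement reported by the smart object itself.
# Names and thresholds are hypothetical, not taken from the thesis.

from dataclasses import dataclass

@dataclass
class Candidate:
    x: float          # image-plane position of a detected region
    y: float
    velocity: float   # apparent motion estimated from frame differencing
    score: float      # appearance-match confidence (0..1)

def constrain_by_sensing(candidates, object_reports_motion, motion_threshold=2.0):
    """Keep only candidates consistent with the object's own motion sensing.

    If the object's embedded sensor reports movement, discard static image
    regions; if it reports rest, discard moving ones. This captures the
    general idea of embedded sensing constraining the visual search.
    """
    if object_reports_motion:
        plausible = [c for c in candidates if c.velocity >= motion_threshold]
    else:
        plausible = [c for c in candidates if c.velocity < motion_threshold]
    return max(plausible, key=lambda c: c.score, default=None)

detections = [Candidate(10, 20, 0.1, 0.9), Candidate(40, 5, 3.2, 0.7)]
print(constrain_by_sensing(detections, object_reports_motion=True))
```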
12

An empirical investigation of issues relating to software immigrants

Hutton, Alistair James January 2008
This thesis focuses on the issue of people in software maintenance and, in particular, on software immigrants – developers who join maintenance teams to work with large unfamiliar software systems. By means of a structured literature review, this thesis identifies a lack of empirical literature in software maintenance in general and an even more distinct lack of papers examining the role of people in software maintenance. Whilst there is existing work examining what maintenance programmers do, the vast majority of it is from a managerial perspective, looking at the goals of maintenance programmers rather than their day-to-day activities. To help remedy this gap in the research, a series of interviews with maintenance programmers was undertaken across a variety of different companies. Four key results were identified: maintainers specialise; companies do not provide adequate system training; external sources of information about the system are not guaranteed to be available; and even when they are available they are not considered trustworthy. These results combine to form a very challenging picture for software immigrants. Software immigrants are maintainers who are new to working with a system, although they are not normally new to programming. Although there is literature on software immigrants and the activities they undertake, there is no comparative literature, that is, literature that examines and compares different ways for software immigrants to learn about the system they have to maintain. Furthermore, a common feature of software immigrants' learning patterns is the existence and use of mentors to impart system knowledge. However, as the interviews show, mentors are often not available, which makes examining alternative ways of building a software immigrant's level-of-understanding of the system they must maintain all the more important. As a result, the final piece of work in this thesis is the design, running and results of a controlled laboratory experiment comparing different, work-based approaches to developing a level-of-understanding of a system. Two approaches were compared: one where subjects actively worked on and altered the code, while a second group took a passive 'hands-off' approach. The end result showed no difference in the level-of-understanding gained between the subjects who performed the active task and those who performed the passive task. This means that there is no benefit to taking a hands-off approach to building a level-of-understanding of new code in the hostile environment identified from the literature and interviews, and software immigrants should start working with the code, fulfilling maintenance requests, as soon as possible.
13

An environment for protecting the privacy of e-shoppers

Galvez-Cruz, Dora Carmen January 2009
Privacy, an everyday topic with weekly media coverage of losses of personal records, faces its biggest risk during the uncontrolled, involuntary or inadvertent disclosure and collection of personal and sensitive information. Preserving one's privacy while e-shopping, especially when personalisation is involved, is a major challenge, and current initiatives only offer customers opt-out options. This research proposes a 'privacy-preserved' shopping environment (PPSE) which empowers customers to disclose information safely by facilitating a personalised e-shopping experience that protects their privacy. Evaluation delivered positive results which suggest that such a product would indeed have a market in a world where customers are increasingly concerned about their privacy.
14

Evaluation of information systems deployment in Libyan oil companies : towards an assessment framework

Akeel, Hosian January 2013
This research work provides an exploratory study of information systems deployment in two oil companies in Libya, one domestic and one foreign. It focuses on evaluation, review and assessment methodologies for information systems deployment in oil companies in Libya, and also takes into consideration related issues such as information systems strategies and the alignment of business strategy with information systems. The study begins with an overview of information systems deployment in Libyan-based oil companies. It then reviews Libya as a business environment and provides a literature review on information systems deployment, information systems strategies and existing assessment models for information systems deployment. A case study of each company is then presented. The research investigates information systems deployment along with associated business functions in the Libyan-based oil companies chosen as case studies. Detailed analysis of the information systems deployed in each company has been carried out, following a comprehensive information-gathering process from the case study companies. The analysis was done using existing scholarly models, including process mapping, system portfolio analysis, Nolan's model, Zuboff's model, the CPIT model and the McFarlan-Peppard model. Earl's model and Gottschalk's model have also been reviewed in the literature and used to provide insightful analysis of the information systems strategies of the case study companies. In the concluding section of this research work, a framework is established for the assessment of information systems deployment in similar business contexts, starting from the basis of process analysis and the information systems used, and considering the interfaces and linkages of the information systems and their suitability in similar business contexts. The developed framework builds on the foundation of the existing assessment models for information systems deployment; this newly developed framework is the contribution of this research work to knowledge. The framework is suited to assessing information systems deployment in oil companies in Libya and can be adapted to other oil companies in developing countries.
15

Information security based on temporal order and ergodic matrix

Zhou, Xiaoyi January 2012
This thesis proposes information security systems to aid network temporal-security applications, drawing on multivariate quadratic polynomial equations, image cryptography and image hiding. In the first chapter, general notions of temporal logic, multivariate quadratic (MQ) problems and image cryptography/hiding are introduced. In particular, explanations of the need for them and the research motivations are given, i.e., a formal characterisation of time-series, an alternative scheme for MQ systems, a hybrid-key based image encryption and authentication system, and a DWT-SVD (Discrete Wavelet Transform and Singular Value Decomposition) based image hiding system. This is followed by a literature review of temporal formalisms, ergodic matrices, cryptography and information hiding, and a demonstration of how these tools are applied in our research. The main part of this thesis concerns using ergodic matrices and temporal logic in cryptography and information hiding. Specifically: A formal characterisation of time-series is presented for both complete and incomplete situations, where a time-series is formalised as a triple (ts, R, Dur) denoting the temporal order of time-elements, the temporal relationships between time-elements and the temporal duration of each time-element, respectively. A cryptosystem based on MQ is proposed. The security of many recently proposed cryptosystems rests mainly on the difficulty of solving large MQ systems; yet apart from UOV schemes with proper parameter values, the basic types of these schemes can be broken without great difficulty, and there are shortcomings in some of the examined schemes. Therefore, a bisectional multivariate quadratic equation (BMQE) system over a finite field of order q is proposed. The BMQE system is analysed using Kipnis and Shamir's relinearisation and the fixing-variables method. It is shown that if the number of equations is greater than or equal to twice the number of variables, and q^n is large enough, the system is complicated enough to resist the existing attacking schemes. A hybrid-key and ergodic-matrix based image encryption/authentication scheme is also proposed. This is because traditional cryptosystems, such as RSA, DES, IDEA, SAFER and FEAL, are not ideal for image encryption: they are slow and do not effectively remove the correlations between adjacent pixels. A further reason is that chaos-based cryptosystems, which have been used extensively over the last two decades, rely almost exclusively on symmetric cryptography. The experimental results, statistical analysis and sensitivity-based tests confirm that, compared to existing chaos-based image cryptosystems, the proposed scheme provides a more secure way to encrypt and transmit images. However, a visibly encrypted image will easily arouse suspicion, so a hybrid digital watermarking scheme based on DWT-SVD and the ergodic matrix is introduced. Compared to other watermarking schemes, the proposed scheme shows significant improvements in both imperceptibility and robustness under various types of image-processing attacks, such as JPEG compression, median filtering, average filtering, histogram equalisation, rotation, cropping, Gaussian noise, speckle noise and salt-and-pepper noise. In general, the proposed method is a useful tool for ownership identification and copyright protection. Finally, two applications based on temporal issues are studied.
This is because, in real life, when two or more parties communicate they typically send a series of messages, or they may want to embed multiple watermarks for themselves. We therefore apply the formal characterisation of time-series to cryptography (especially encryption) and steganography (especially watermarking). Consequently, a scheme for temporally ordered image encryption and a temporally ordered dynamic multiple digital watermarking model are introduced.
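A minimal sketch of how the (ts, R, Dur) triple could be encoded as a data structure, assuming a vocabulary of named relations such as "meets" and "before"; the encoding and names are illustrative assumptions, not the thesis's formalism.

```python
# Illustrative encoding of a time-series as the triple (ts, R, Dur):
# ts  - the ordered time-elements,
# R   - the temporal relations between pairs of time-elements,
# Dur - the duration of each time-element (None when knowledge is incomplete).
# Field names and the relation vocabulary are assumptions for illustration.

from dataclasses import dataclass, field

@dataclass
class TimeSeries:
    ts: list                                  # ordered time-element labels
    R: dict = field(default_factory=dict)     # (a, b) -> relation name
    Dur: dict = field(default_factory=dict)   # element -> duration or None

series = TimeSeries(
    ts=["t1", "t2", "t3"],
    R={("t1", "t2"): "meets", ("t2", "t3"): "before"},
    Dur={"t1": 5.0, "t2": 3.0, "t3": None},   # incomplete duration knowledge
)

# A message sequence could then be encrypted in the order induced by ts,
# letting the receiver verify the temporal order of the series.
for element in series.ts:
    print(element, series.Dur.get(element))
```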
16

Empirical evidence that proves a serious game is an educationally effective tool for learning computer programming constructs at the computational thinking level

Kazimoglu, Cagin January 2013
Owing to their engaging and motivational nature, games, predominantly in young age groups, have been present in education since ancient times. More recently, computer video games have become widely used, particularly in secondary and tertiary education, as a method of enhancing the understanding of some subject areas (especially English language education, geography, history and health) and also as an aid to attracting and retaining students. Many academics have proposed approaches using video game-based learning (GBL) to impart theoretical and applied knowledge, especially in the Computer Science discipline. Despite several years of considerable effort, empirical evidence is still missing from the GBL literature, specifically evidence identifying what students learn from a serious game regarding programming constructs, and whether or not they acquire additional skills after being introduced to a GBL approach. Much of the existing work in this area explores the motivational aspect of video games and does not necessarily focus on what people can learn, or which cognitive skills they can acquire, that would support their learning in introductory computer programming. Hence, this research is concerned with the design, and the determination of the educational effectiveness, of a game model focused on developing computational thinking (CT) skills through the medium of learning introductory programming constructs. The research aims to design, develop and evaluate a serious game through a series of empirical studies in order to identify whether or not this serious game can be an educationally effective tool for learning computer programming at the CT level. The game model and its implementation were created for two main purposes: firstly, to develop a model that allows students to practise a series of cognitive abilities that characterise CT, regardless of their programming background; and secondly, to support the learning of applied knowledge in introductory programming by demonstrating how a limited number of key introductory programming constructs work, constructs which introductory programming students often find challenging and/or difficult to understand. In order to measure the impact of the serious game and its underlying game model, a pilot study and a series of rigorous empirical studies were designed. The pilot study was conducted as a freeform evaluation to obtain initial feedback on the game's usability. A group of students following Computer Science and related degree programmes, with diverse backgrounds and experience, participated in the pilot study and confirmed that they found the game enjoyable. The feedback also showed that the majority of students believed the game would help introductory programming students learn computational thinking skills. Having incorporated the feedback into a revised version of the game, a further series of rigorous studies was conducted, analysed and evaluated. In order to measure the effect of the game accurately, the findings of the studies were statistically analysed using parametric or non-parametric measures, depending on the distribution of the data gathered. Moreover, the correlations between how well students did in the game, the knowledge gain students felt, and the skills they felt they acquired after game-play were thoroughly investigated.
It was found that intrinsic motivation, attitude towards learning through game-play, students' perception of their programming knowledge, how well students visualise programming constructs and their problem-solving abilities were significantly enhanced after playing the game. The correlational analyses provided evidence that there is no strong, significant relationship between students' progress in the game and the computational thinking skills they felt they gained from it; it was concluded that students developed their computational thinking skills regardless of whether or not they reached the higher levels in the game. In addition, no strong, significant correlations were found between the key computer programming constructs and the computational thinking skills, which provides strong evidence that learning how introductory computer programming constructs work and developing computational thinking skills are not directly connected in the game environment. It was also found that students felt their conditional logic, algorithmic thinking and simulation abilities had developed significantly after playing the game. As a result, this research concludes that the designed serious game is an educationally effective tool for a) learning how key introductory computer programming constructs work and b) developing cognitive skills in computational thinking.
17

Towards a general temporal theory

Ma, Jixin January 1994
The research work presented herein addresses time representation and temporal reasoning in the domain of artificial intelligence. A general temporal theory, as an extension of Allen and Hayes's, Galton's and Vilain's theories, is proposed which treats both time intervals and time points on an equal footing; that is, both intervals and points are taken as primitive time elements in the theory. This means that neither do intervals have to be constructed out of points, nor do points have to be created as some limiting construction of intervals. This approach differs from that of Ladkin, of Van Beek, of Dechter, Meiri and Pearl, and of Maiocchi, which is either to construct intervals out of points, or to treat points and intervals separately. The theory is presented as a series of axioms characterising a single temporal relation, "meets", over time elements. The axiomatisation allows non-linear time structures such as branching time and parallel time, and additional axioms specifying the linearity and density of time are also presented. A formal characterisation of the open and closed nature of primitive intervals, which has been a problematic question of time representation in artificial intelligence, is provided in terms of the "meets" relation, and is shown to be consistent with the conventional definitions of open/closed intervals constructed out of points. It is also shown that this general theory is powerful enough to subsume some representative temporal theories, such as Allen and Hayes's interval-based theory, Bruce's and McDermott's point-based theories, and the interval-and-point based theories of Vilain and of Galton. A finite time network based on the theory is specifically addressed, where a consistency checker is provided in two different forms, for cases with and without duration reasoning respectively. Utilising the time axiomatisation, the syntax and semantics of a temporal logic for reasoning about propositions whose truth values are associated with particular intervals/points are explicitly defined. It is shown that the logic is more expressive than that of some existing systems, such as Allen's interval-based logic, the revised theory proposed by Galton, Shoham's point-based interval logic, and Haugh's MTA-based logic, and that the corresponding problems with these systems are satisfactorily solved. Finally, as an application of the temporal theory, a new architecture for a temporal database system is proposed which allows the expression of relative temporal knowledge of data transaction and data validity times. A general retrieval mechanism is presented for a database with a purely qualitative temporal component, allowing queries with temporal constraints in terms of any logical combination of Allen's temporal relations. To reduce the computational complexity of the consistency-checking algorithm when quantitative duration knowledge is added, a class of databases termed time-limited databases is introduced. This class allows absolute-time-stamped and relative time information in a form suitable for many practical applications, where qualitative temporal information is only occasionally needed and the efficient retrieval mechanisms for absolute-time-stamped databases may be adopted.
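The sketch below illustrates the idea of a single "meets" primitive by interpreting it in a simple point-coordinate model. The coordinate model is only a convenient illustration: in the theory itself, intervals and points are primitive time elements and are not constructed from coordinates.

```python
# Demonstration model: intervals as (start, end) pairs, with "meets" as the
# single primitive relation. Other relations are defined from "meets" in the
# spirit of the axiomatisation. Coordinates are for illustration only.

def meets(i, j):
    """i meets j: i ends exactly where j begins, with no gap and no overlap."""
    return i[1] == j[0]

def before(i, j):
    """i is before j when some element k fits between them,
    i.e. there exists k with meets(i, k) and meets(k, j)."""
    return i[1] < j[0]

a, b, c = (0, 3), (3, 5), (7, 9)
print(meets(a, b))   # True: a ends where b starts
print(before(a, c))  # True: there is room for an element between them
```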
18

Intelligent monitoring of business processes using case-based reasoning

Kapetanakis, Stylianos January 2012
The work in this thesis presents an approach towards the effective monitoring of business processes using Case-Based Reasoning (CBR). The rationale behind this research was that business processes constitute a fundamental concept of the modern world and there is a constantly emerging need for their efficient control: they can be represented efficiently, but not necessarily monitored and diagnosed effectively via an appropriate platform. Motivated by this observation, the research pursued the extent to which workflows can be monitored, diagnosed and explained efficiently. Workflows and their effective representation in terms of CBR were investigated, as well as how similarity measures among them could be established appropriately. The monitoring results and their subsequent explanation to users were examined, as was the question of an appropriate software architecture for monitoring workflow executions. Throughout this research, several sets of experiments were conducted using existing enterprise systems coordinated via a predefined workflow business process. Past data produced over several years were used for the conducted experiments. Based on these, the necessary knowledge repositories were built and then used to evaluate the suggested approach towards the effective monitoring and diagnosis of business processes. The produced results show the extent to which a business process can be monitored and diagnosed effectively, and provide hints on possible changes that would maximise the accuracy of the actual monitoring, diagnosis and explanation. Moreover, the presented approach can be generalised and expanded to enterprise systems that share the characteristics of a possible workflow representation and the presence of uncertainty. Further work motivated by this thesis could investigate how the acquired knowledge can be transferred across workflow systems and benefit large-scale multidimensional enterprises. Additionally, temporal uncertainty could be investigated further, in an attempt to address it while reasoning. Finally, the provenance of cases and their solutions could be explored further, identifying correlations with the process of reasoning.
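As an illustration of case-based retrieval over workflow executions, the sketch below compares executions as event sequences using a normalised edit distance. The similarity measure, event names and diagnoses are assumptions for illustration; the thesis's actual measures are not reproduced here.

```python
# Illustrative CBR retrieval for workflow monitoring: each case is a recorded
# workflow execution (a sequence of event names) plus its known diagnosis.
# Similarity here is a normalised Levenshtein distance over event sequences.

def edit_distance(a, b):
    """Classic dynamic-programming Levenshtein distance between sequences."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
    return d[m][n]

def similarity(a, b):
    return 1.0 - edit_distance(a, b) / max(len(a), len(b), 1)

case_base = [
    (["start", "approve", "ship", "close"], "normal"),
    (["start", "approve", "approve", "close"], "duplicate approval"),
]
query = ["start", "approve", "ship"]
best = max(case_base, key=lambda case: similarity(case[0], query))
print(best[1])  # diagnosis of the most similar past execution
```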
19

A new framework for supporting and managing multi-disciplinary system-simulation in a PLM environment

Mahler, Michael January 2014
In order to keep products and systems attractive to consumers, developers have to do what they can to meet growing customer requirements. These requirements could be direct demands of customers, but could also be the consequence of other influences such as globalisation, customer fragmentation, product portfolios, regulations and so on. In the manufacturing industry, most companies are able to meet these growing requirements with mechatronic products designed and developed in an interdisciplinary way, which demands collaboration between different disciplines. For example, generating a virtual prototype of a mechatronic, multi-disciplinary product or system, together with its simulation tools, could require the cooperation of multiple departments within a company or between business partners. In a simulation, a virtual prototype is used for testing a product or a system, and this virtual prototype-and-test approach can be used from the early stages of the development process to the end of the product or system lifecycle. Over the years, different approaches and systems for generating and testing virtual prototypes have been designed and developed, but these systems have not been properly integrated, although some efforts have been made with limited success. Therefore, there is a requirement to propose and develop new technologies, methods and methodologies for achieving this integration. In addition, the use of simulation tools requires special expertise for the generation of simulation models, and the formats of product prototypes and simulation data differ from system to system. This adds to the requirement for a guideline or framework for integrating multi- and inter-disciplinary product design, simulation software and data management across the entire product lifecycle. The main functionality and metadata structures of the new framework have been identified and optimised. The multi-disciplinary simulation data and their collection processes, and the existing PLM (product lifecycle management) software and its applications, have been analysed. In addition, the inter-disciplinary collaboration between a variety of simulation software packages has been analysed and evaluated. The new framework integrates the identified and optimised functionality and metadata structures to support and manage multi- and inter-disciplinary simulation in a PLM system environment. It is believed that this project has made six contributions to new knowledge: (1) the New Conceptual Framework to Enhance the Support and Management of Multi-Disciplinary System-Simulation, (2) the New System-Simulation Oriented and Process Oriented Data Handling Approach, (3) the Enhanced Traceability of System-Simulation to Sources and Represented Products and Functions, (4) the New System-Simulation Derivation Approach, (5) the New Approach for the Synchronisation of System Describing Structures and (6) the Enhanced System-Simulation Result Data Handling Approach. In addition, the new framework would bring significant benefits to each industry it is applied to.
These benefits are: (1) more effective re-use of individual simulation models in a system-simulation context, (2) effective pre-definition and preparation of individual simulation models, (3) easy and native review of system-simulation structures in relation to input sources, such as products and/or functions, (4) easy, authoring-software-independent updating of system-simulation structures, product structures and function structures, (5) effective, distributed and cohesive post-processing and interpretation of system-simulation results, (6) effective, easy and unique traceability of the data, which means cost reductions in documentation and data security, and (7) greater openness and flexibility in simulation software interactions with the data-holding system. Although the proposed conceptual framework has not been implemented (that would require vast resources), it can be expected that the seven benefits above will lead to significant advances in the simulation of new product design and development over the whole lifecycle, offering enormous practical value to the manufacturing industry. Due to time and resource constraints, as well as the effort that would be involved in implementing the proposed new framework, there are some limitations to this PhD thesis. Five areas have been identified where further work is needed to improve the quality of this project: (1) expansion to further industrial sectors and product design and development processes, (2) parameter-oriented system and production description in the new framework, (3) improved user-interface design of the new framework, (4) automatic generation of simulation processes and (5) enhancement of the individual simulation models.
20

Compiler architecture using a portable intermediate language

Reig Galilea, Fermín Javier January 2002
The back end of a compiler performs machine-dependent tasks and low-level optimisations that are laborious to implement and difficult to debug. In addition, in languages that require run-time services such as garbage collection, the back end must interface with the run-time system to provide those services. The net result is that building a compiler back end entails a high implementation cost. In this dissertation I describe reusable code generation infrastructure that enables the construction of a complete programming language implementation (compiler and run-time system) with reduced effort. The infrastructure consists of a portable intermediate language, a compiler for this language and a low-level run-time system. I provide an implementation of this system and show that it can support a variety of source programming languages, that it reduces the overall effort required to implement a programming language, that it can capture and retain information necessary to support run-time services and optimisations, and that it produces efficient code.
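As a toy illustration of the division of labour a portable intermediate language enables, the sketch below defines a minimal hypothetical instruction set and a trivial back end that executes it; any front end targeting such an IL would reuse the same back end. The instruction set is invented for illustration and is unrelated to the dissertation's actual IL.

```python
# Toy portable IL: a front end for any source language emits these
# instructions, and one shared back end (here a trivial stack-machine
# interpreter standing in for a code generator) executes them.

def run(program):
    stack = []
    for op, *args in program:
        if op == "push":
            stack.append(args[0])
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "print":
            print(stack.pop())

# What a front end might emit for "print(2 + 3)":
run([("push", 2), ("push", 3), ("add",), ("print",)])
```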
