About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Embedding expert systems in semi-formal domains : examining the boundaries of the knowledge base

Whitley, Edgar A. January 1990 (has links)
This thesis examines the use of expert systems in semi-formal domains. The research identifies the main problems with semi-formal domains and proposes and evaluates a number of different solutions to them. The thesis considers the traditional approach to developing expert systems, which sees domains as being formal, and notes that it continuously faces problems that result from informal features of the problem domain. To circumvent these difficulties, experience or other subjective qualities are often used, but they are not supported by the traditional approach to design. The thesis examines the formal approach and compares it with a semi-formal approach to designing expert systems, which is heavily influenced by the socio-technical view of information systems. From this basis it examines a number of problems that limit the construction and use of knowledge bases in semi-formal domains. These limitations arise from the nature of the problem being tackled, in particular problems of natural language communication and tacit knowledge, and also from the character of computer technology and the role it plays. The thesis explores the possible mismatch between a human user and the machine and models the various types of confusion that arise. The thesis describes a number of practical solutions to overcome the problems identified. These solutions are implemented in an expert system shell (PESYS), developed as part of the research. The resulting solutions, based on non-linear documents and other software tools that open up the reasoning of the system, support users of expert systems in examining the boundaries of the knowledge base to help them avoid and overcome any confusion that has arisen. In this way users are encouraged to use their own skills and experiences in conjunction with an expert system to successfully exploit this technology in semi-formal domains.
12

Flexible physical interfaces

Villar, Nicolas January 2007 (has links)
Human-computer interface devices are rigid, and afford little or no opportunity for end-user adaptation. This thesis proposes that valuable new interaction possibilities can be generated through the development of user interface hardware that is increasingly flexible, and allows end-users to physically shape, construct and modify physical interfaces for interactive systems. The work is centred around the development of a novel platform for flexible user interfaces (called VoodooIO) that allows end-users to compose and adapt physical control structures in a manner that is both versatile and simple to use. VoodooIO has two main physical elements: a pliable material (called the substrate), and a set of physical user interface controls, which can be arranged on the surface of the substrate. The substrate can be shaped, applied to existing surfaces, attached to objects and placed on walls and furniture to designate interface areas on which users can spatially lay out controls. From a technical perspective, the design of VoodooIO is based on a novel architecture for user interfaces as networks of controls, where each control is implemented as a network node with physical input and output capabilities. The architecture overcomes the inflexibility that is usually imposed by hard-wired circuitry in traditional interface devices by enabling individual control elements that can be connected to and disconnected from a shared network bus ad hoc. The architecture includes support for a wide and extensible range of control types, fast control identification and presence detection, and an application-level interface that abstracts from low-level implementation details and network management processes. The concrete contributions to the field of human-computer interaction include a motivation for the development of flexible physical interfaces, a fully working example of such a technology, and insights gathered from its application and study.
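The network-of-controls architecture described in this abstract lends itself to an event-driven interface at the application level. The sketch below is a hypothetical illustration of that idea, not the actual VoodooIO API: controls announce themselves on a shared bus, and the application reacts to attach, detach and value-change events. All names (Control, ControlNetwork, attach, report) are invented for the example.

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Control:
    node_id: str          # identity reported when the control joins the bus
    kind: str             # e.g. "button", "slider", "dial"
    value: float = 0.0    # last reported input state

class ControlNetwork:
    """Tracks controls as they are attached to and removed from the substrate."""
    def __init__(self) -> None:
        self.controls: Dict[str, Control] = {}
        self.on_added: Callable[[Control], None] = lambda c: None
        self.on_removed: Callable[[Control], None] = lambda c: None
        self.on_changed: Callable[[Control], None] = lambda c: None

    def attach(self, control: Control) -> None:
        # Presence detection: a new node has announced itself on the shared bus.
        self.controls[control.node_id] = control
        self.on_added(control)

    def detach(self, node_id: str) -> None:
        control = self.controls.pop(node_id, None)
        if control is not None:
            self.on_removed(control)

    def report(self, node_id: str, value: float) -> None:
        # Input event from a control, forwarded to the application layer.
        control = self.controls[node_id]
        control.value = value
        self.on_changed(control)

# A hypothetical application reacting to controls being added ad hoc.
net = ControlNetwork()
net.on_added = lambda c: print(f"control {c.node_id} ({c.kind}) attached")
net.on_changed = lambda c: print(f"{c.node_id} -> {c.value}")
net.attach(Control("slider-1", "slider"))
net.report("slider-1", 0.4)
```

The point of the design is that the application layer only ever sees add/remove/change events, so controls can be connected and disconnected ad hoc without rewiring the application.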
13

Smart object, not smart environment : cooperative augmentation of smart objects using projector-camera systems

Molyneaux, David January 2008 (has links)
Smart objects research explores embedding sensing and computing into everyday objects, augmenting objects to be a source of information on their identity, state, and context in the physical world. A major challenge for the design of smart objects is to preserve their original appearance, purpose and function. Consequently, many research projects have focussed on adding input capabilities to objects, while neglecting the requirement for an output capability which would provide a balanced interface. This thesis presents a new approach to adding output capability by having smart objects cooperate with projector-camera systems. The concept of Cooperative Augmentation enables the knowledge required for visual detection, tracking and projection on smart objects to be embedded within the object itself. This allows projector-camera systems to provide generic display services, enabling spontaneous use by any smart object to achieve non-invasive and interactive projected displays on their surfaces. Smart objects cooperate to achieve this by describing their appearance directly to the projector-camera systems and by using embedded sensing to constrain the visual detection process. We investigate natural-appearance vision-based detection methods and perform an experimental study specifically analysing the increase in detection performance achieved with movement sensing in the target object. We find that detection performance significantly increases with sensing, indicating that the combination of different sensing modalities is important, and that different objects require different appearance representations and detection methods. These studies inform the design and implementation of a system architecture which serves as the basis for three applications demonstrating the aspects of visual detection, integration of sensing, projection, interaction with displays and knowledge updating. The displays achieved with Cooperative Augmentation allow any smart object to deliver visual feedback to users from implicit and explicit interaction with information represented or sensed by the physical object, supporting objects as both input and output medium simultaneously. This contributes to the central vision of Ubiquitous Computing by enabling users to address tasks in physical space with direct manipulation and have feedback on the objects themselves, where it belongs in the real world.
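As a rough illustration of the cooperation described in this abstract (a sketch only, not the thesis system): a smart object publishes an appearance descriptor together with its current movement state, and the projector-camera system restricts visual matching to objects that report movement. The descriptor format, distance measure and threshold are all assumptions made for the example.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class ObjectDescription:
    object_id: str
    appearance: List[float]   # e.g. a colour histogram the object stores about itself
    is_moving: bool           # reading from the object's embedded movement sensor

def descriptor_distance(a: List[float], b: List[float]) -> float:
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def detect(frame_regions: Dict[str, List[float]],
           objects: List[ObjectDescription],
           threshold: float = 0.5) -> Dict[str, str]:
    """Match candidate image regions against objects that describe themselves."""
    matches = {}
    for obj in objects:
        if not obj.is_moving:
            continue  # embedded sensing constrains detection: skip static objects
        best_region, best_dist = None, threshold
        for region_id, descriptor in frame_regions.items():
            d = descriptor_distance(obj.appearance, descriptor)
            if d < best_dist:
                best_region, best_dist = region_id, d
        if best_region is not None:
            matches[obj.object_id] = best_region  # region to project the display onto
    return matches

regions = {"r1": [0.9, 0.1], "r2": [0.2, 0.8]}   # descriptors extracted from a camera frame
objs = [ObjectDescription("mug", [0.85, 0.15], is_moving=True)]
print(detect(regions, objs))                      # {'mug': 'r1'}
```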
14

An empirical investigation of issues relating to software immigrants

Hutton, Alistair James January 2008 (has links)
This thesis focuses on the issue of people in software maintenance and, in particular, on software immigrants – developers who are joining maintenance teams to work with large unfamiliar software systems. By means of a structured literature review this thesis identifies a lack of empirical literature in Software Maintenance in general and an even more distinct lack of papers examining the role of People in Software Maintenance. Whilst there is existing work examining what maintenance programmers do, the vast majority of it is from a managerial perspective, looking at the goals of maintenance programmers rather than their day-to-day activities. To help remedy this gap in the research, a series of interviews with maintenance programmers was undertaken across a variety of different companies. Four key results were identified: maintainers specialise; companies do not provide adequate system training; external sources of information about the system are not guaranteed to be available; and even when they are available they are not considered trustworthy. These results combine to form a very challenging picture for software immigrants. Software immigrants are maintainers who are new to working with a system, although they are not normally new to programming. Although there is literature on software immigrants and the activities they undertake, there is no comparative literature, that is, literature that examines and compares different ways for software immigrants to learn about the system they have to maintain. Furthermore, a common feature of software immigrants' learning patterns is the existence and use of mentors to impart system knowledge. However, as the interviews show, mentors are often not available, which makes examining alternative ways of building a software immigrant's level-of-understanding of the system they must maintain all the more important. As a result, the final piece of work in this thesis is the design, running and results of a controlled laboratory experiment comparing different, work-based approaches to developing a level-of-understanding about a system. Two approaches were compared: one where subjects actively worked with and altered the code, while a second group took a passive ‘hands-off’ approach. The end result showed no difference in the level-of-understanding gained between the subjects who performed the active task and those who performed the passive task. This means that there is no benefit to taking a hands-off approach to building a level-of-understanding about new code in the hostile environment identified from the literature and interviews, and that software immigrants should start working with the code, fulfilling maintenance requests, as soon as possible.
15

An environment for protecting the privacy of e-shoppers

Galvez-Cruz, Dora Carmen January 2009 (has links)
Privacy, an everyday topic with weekly media coverage of the loss of personal records, faces its biggest risk during the uncontrolled, involuntary or inadvertent disclosure and collection of personal and sensitive information. Preserving one's privacy while e-shopping, especially when personalisation is involved, is a big challenge. Current initiatives only offer customers opt-out options. This research proposes a 'privacy-preserved' shopping environment (PPSE), which empowers customers to disclose information safely by facilitating a personalised e-shopping experience that protects their privacy. Evaluation delivered positive results, which suggest that such a product would indeed have a market in a world where customers are increasingly concerned about their privacy.
16

Evaluation of information systems deployment in Libyan oil companies : towards an assessment framework

Akeel, Hosian January 2013 (has links)
This research work provides an explorative study of information systems deployment in two oil companies in Libya, one domestic and one foreign. It focuses on evaluation, review and assessment methodologies for information systems deployment in oil companies in Libya. It also takes into consideration related issues such as information systems strategies and strategic business alignment with information systems. The study begins with an overview of information systems deployment in Libyan-based oil companies. The study thereafter reviews Libya as a business environment and provides a literature review on information systems deployment, information systems strategies and existing assessment models for information systems deployment. A case study of each company is then presented. The research investigates information systems deployment along with associated business functions in the Libyan-based oil companies chosen as case studies. Detailed analysis of the information systems deployed in each company has been carried out, following a comprehensive information-gathering process at the case study companies. The analysis was done using existing scholarly models, which include process mapping, system portfolio analysis, Nolan's model, Zuboff's model, the CPIT model and the MacFarlan-Peppard model. Earl's model and Gottschalk's model have also been reviewed in the literature and used to provide insightful analysis of the information systems strategies of the case study companies. In the concluding section of this research work, a framework is established for the assessment of information systems deployment in similar business contexts, starting from the basis of process analysis and the information systems used, and considering the interfaces and linkages of the information systems and their suitability for the business context. The developed framework builds on the foundation of the existing assessment models for information systems deployment. This newly developed framework is the contribution of this research work to knowledge. The developed framework is suited to assessing information systems deployment in oil companies in Libya and can be adapted to other oil companies in developing countries.
17

Information security based on temporal order and ergodic matrix

Zhou, Xiaoyi January 2012 (has links)
This thesis proposes a number of information security schemes to support temporal security applications in networks, using multivariate quadratic polynomial equations, image cryptography and image hiding. In the first chapter, some general terms of temporal logic, multivariate quadratic (MQ) problems and image cryptography/hiding are introduced. In particular, explanations of the need for them and the research motivations are given, i.e., a formal characterization of time-series, an alternative scheme for MQ systems, a hybrid-key based image encryption and authentication system, and a DWT-SVD (Discrete Wavelet Transform and Singular Value Decomposition) based image hiding system. This is followed by a literature review of the temporal basis, the ergodic matrix, cryptography and information hiding. After these tools are introduced, it is shown how they can be applied in this research. The main part of this thesis is about using the ergodic matrix and temporal logic in cryptography and information hiding. Specifically, it can be described as follows. A formal characterization of time-series is presented for both complete and incomplete situations, where a time-series is formalized as a triple (ts, R, Dur) denoting the temporal order of time-elements, the temporal relationships between time-elements and the temporal duration of each time-element, respectively. A cryptosystem based on MQ is proposed. The security of many recently proposed cryptosystems is mainly based on the difficulty of solving large MQ systems. Apart from UOV schemes with proper parameter values, the basic types of these schemes can be broken without great difficulty. Moreover, some of the examined schemes have shortcomings. Therefore, a bisectional multivariate quadratic equation (BMQE) system over a finite field of degree q is proposed. The BMQE system is analysed by Kipnis and Shamir's relinearization and the fixing-variables method. It is shown that if the number of equations is larger than or equal to twice the number of variables, and q^n is large enough, the system is complicated enough to resist attacks from some existing attack schemes. A hybrid-key and ergodic-matrix based image encryption/authentication scheme is also proposed. This is because existing traditional cryptosystems, such as RSA, DES, IDEA, SAFER and FEAL, are not ideal for image encryption: they are slow and do not remove the correlations between adjacent pixels effectively. Another reason is that chaos-based cryptosystems, which have been used extensively over the last two decades, rely almost entirely on symmetric cryptography. The experimental results, statistical analysis and sensitivity-based tests confirm that, compared to existing chaos-based image cryptosystems, the proposed scheme provides a more secure way to encrypt and transmit images. However, a visibly encrypted image will easily arouse suspicion. Therefore, a hybrid digital watermarking scheme based on DWT-SVD and the ergodic matrix is introduced. Compared to other watermarking schemes, the proposed scheme shows significant improvements in both perceptibility and robustness under various types of image-processing attack, such as JPEG compression, median filtering, average filtering, histogram equalization, rotation, cropping, Gaussian noise, speckle noise and salt-and-pepper noise. In general, the proposed method is a useful tool for ownership identification and copyright protection. Finally, two applications based on temporal issues were studied.
This is because, in real life, when two or more parties communicate, they probably send a series of messages, or they may want to embed multiple watermarks for themselves. Therefore, we apply the formal characterization of time-series to cryptography (especially encryption) and steganography (especially watermarking). Consequently, a scheme for temporally ordered image encryption and a temporally ordered dynamic multiple digital watermarking model are introduced.
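A minimal sketch of the kind of DWT-SVD embedding step mentioned in the abstract above, assuming NumPy and PyWavelets; the ergodic-matrix and hybrid-key components of the proposed scheme are omitted, and the embedding strength alpha is illustrative only.

```python
import numpy as np
import pywt

def embed_watermark(cover: np.ndarray, mark: np.ndarray, alpha: float = 0.05) -> np.ndarray:
    # One-level 2-D DWT of the cover image; embed in the low-frequency (LL) band.
    LL, (LH, HL, HH) = pywt.dwt2(cover.astype(float), "haar")
    Uc, Sc, Vct = np.linalg.svd(LL)
    # Singular values of the watermark, padded/truncated to match the LL band.
    Sm = np.linalg.svd(mark.astype(float), compute_uv=False)[: len(Sc)]
    Sm = np.pad(Sm, (0, len(Sc) - len(Sm)))
    # Additive embedding into the singular values of the LL band.
    S_marked = Sc + alpha * Sm
    LL_marked = Uc @ np.diag(S_marked) @ Vct
    return pywt.idwt2((LL_marked, (LH, HL, HH)), "haar")

# Usage with random arrays standing in for a cover image and a watermark.
cover = np.random.rand(256, 256)
mark = np.random.rand(128, 128)
watermarked = embed_watermark(cover, mark)
```

Embedding in the singular values of the low-frequency sub-band is what gives this family of schemes its robustness to common image-processing operations, at the cost of careful tuning of alpha to keep the mark imperceptible.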
18

Empirical evidence that proves a serious game is an educationally effective tool for learning computer programming constructs at the computational thinking level

Kazimoglu, Cagin January 2013 (has links)
Owing to their ease of engagement and motivational nature, games, predominantly in young age groups, have been omnipresent in education since ancient times. More recently, computer video games have become widely used, particularly in secondary and tertiary education, as a method of enhancing the understanding of some subject areas (especially in English language education, geography, history and health) and as an aid to attracting and retaining students. Many academics have proposed a number of approaches using video game-based learning (GBL) to impart theoretical and applied knowledge, especially in the Computer Science discipline. Despite several years of considerable effort, empirical evidence is still missing from the GBL literature, specifically evidence that identifies what students learn from a serious game regarding programming constructs, and whether or not they acquire additional skills after they have been introduced to a GBL approach. Much of the existing work in this area explores the motivational aspect of video games and does not necessarily focus on what people can learn or which cognitive skills they can acquire that would be beneficial in supporting their learning of introductory computer programming. Hence, this research is concerned with the design, and determining the educational effectiveness, of a game model focused on the development of computational thinking (CT) skills through the medium of learning introductory programming constructs. The research is aimed at designing, developing and evaluating a serious game through a series of empirical studies in order to identify whether or not this serious game can be an educationally effective tool for learning computer programming at the CT level. The game model and its implementation are created to achieve two main purposes. Firstly, to develop a model that would allow students to practise a series of cognitive abilities that characterise CT, regardless of their programming background. Secondly, to support the learning of applied knowledge in introductory programming by demonstrating the workings of a limited number of key introductory computer programming constructs which introductory programming students often find challenging and/or difficult to understand. In order to measure the impact of the serious game and its underlying game model, a pilot study and a series of rigorous empirical studies were designed. The pilot study was conducted as a freeform evaluation to obtain initial feedback on the game’s usability. A group of students following Computer Science and related degree programmes, with diverse backgrounds and experience, participated in the pilot study and confirmed that they found the game enjoyable. The feedback obtained also showed that the majority of students believed the game would be beneficial in helping introductory programming students learn computational thinking skills. After the feedback had been incorporated into a revised version of the game, a further series of rigorous studies was conducted, analysed and evaluated. In order to accurately measure the effect of the game, the findings of the studies were statistically analysed using parametric or non-parametric measures, depending on the distribution of the data gathered. Moreover, the correlations between how well students did in the game, the knowledge gain students felt, and the skills they felt they acquired after their game-play were thoroughly investigated.
It was found that intrinsic motivation, attitude towards learning through game-play, students’ perception of their programming knowledge, how well students visualise programming constructs, and their problem-solving abilities were significantly enhanced after playing the game. The correlations in the studies provided evidence that there is no strong and significant relationship between the progress of students in the game and the computational thinking skills they felt they gained from it. It was concluded that students developed their computational thinking skills regardless of whether or not they reached the higher levels in the game. In addition to this, it was found that there are no strong and significant correlations between the key computer programming constructs and the computational thinking skills, which provides strong evidence that learning how introductory computer programming constructs work and developing computational thinking skills are not directly connected to each other in the game environment. It was also found that students felt that their conditional logic, algorithmic thinking and simulation abilities had significantly developed after playing the game. As a result, this research concludes that the designed serious game is an educationally effective tool for a) learning how key introductory computer programming constructs work and b) developing cognitive skills in computational thinking.
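As an illustration of the analysis strategy this abstract describes (a parametric or non-parametric test chosen according to the distribution of the data, plus correlation analysis), the following sketch uses SciPy with invented scores; it is not the study's actual analysis.

```python
import numpy as np
from scipy import stats

# Invented data: self-reported knowledge before/after play, plus game progress
# and perceived skill gain for the correlation analysis.
pre = np.array([4, 5, 6, 5, 7, 4, 6, 5, 6, 7], dtype=float)
post = np.array([6, 7, 7, 6, 8, 6, 7, 7, 8, 8], dtype=float)
game_progress = np.array([3, 8, 5, 2, 9, 4, 7, 6, 5, 8], dtype=float)
skill_gain = np.array([2, 2, 1, 1, 1, 2, 1, 2, 2, 1], dtype=float)

# Pick a parametric or non-parametric paired test based on normality of the differences.
_, shapiro_p = stats.shapiro(post - pre)
if shapiro_p > 0.05:
    _, p = stats.ttest_rel(pre, post)      # paired t-test (parametric)
else:
    _, p = stats.wilcoxon(pre, post)       # Wilcoxon signed-rank (non-parametric)

# Correlation between progress in the game and the skills students felt they gained.
rho, rho_p = stats.spearmanr(game_progress, skill_gain)
print(f"pre/post p = {p:.3f}; Spearman rho = {rho:.2f} (p = {rho_p:.3f})")
```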
19

Towards a general temporal theory

Ma, Jixin January 1994 (has links)
The research work presented herein addresses time representation and temporal reasoning in the domain of artificial intelligence. A general temporal theory, as an extension of Allen and Hayes', Galton's and Vilain's theories, is proposed which treats both time intervals and time points on an equal footing; that is, both intervals and points are taken as primitive time elements in the theory. This means that neither do intervals have to be constructed out of points, nor do points have to be created as some limiting construction of intervals. This approach is different from that of Ladkin, of Van Beek, of Dechter, Meiri and Pearl, and of Maiocchi, which is either to construct intervals out of points, or to treat points and intervals separately. The theory is presented in terms of a series of axioms which characterise a single temporal relation, "meets", over time elements. The axiomatisation allows non-linear time structures such as branching time and parallel time, and additional axioms specifying the linearity and density of time are specially presented. A formal characterisation of the open and closed nature of primitive intervals, which has been a problematic question of time representation in artificial intelligence, is provided in terms of the "meets" relation. It is shown to be consistent with the conventional definitions of open/closed intervals which are constructed out of points. It is also shown that this general theory is powerful enough to subsume some representative temporal theories, such as Allen and Hayes's interval-based theory, Bruce's and McDermott's point-based theories, and the interval- and point-based theories of Vilain and of Galton. A finite time network based on the theory is specially addressed, where a consistency checker in two different forms is provided for cases with, and without, duration reasoning, respectively. Utilising the time axiomatisation, the syntax and semantics of a temporal logic for reasoning about propositions whose truth values are associated with particular intervals/points are explicitly defined. It is shown that the logic is more expressive than that of some existing systems, such as Allen's interval-based logic, the revised theory proposed by Galton, Shoham's point-based interval logic, and Haugh's MTA-based logic; and the corresponding problems with these systems are satisfactorily solved. Finally, as an application of the temporal theory, a new architecture for a temporal database system which allows the expression of relative temporal knowledge of data transaction and data validity times is proposed. A general retrieval mechanism is presented for a database with a purely qualitative temporal component which allows queries with temporal constraints in terms of any logical combination of Allen's temporal relations. To reduce the computational complexity of the consistency checking algorithm when quantitative time duration knowledge is added, a class of databases, termed time-limited databases, is introduced. This class allows absolute-time-stamped and relative time information in a form which is suitable for many practical applications, where qualitative temporal information is only occasionally needed, and the efficient retrieval mechanisms for absolute-time-stamped databases may be adopted.
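A small sketch of the flavour of a finite time network built on the single primitive relation "meets": a derived "wholly precedes" relation is obtained by chaining meets facts, and a cycle signals an inconsistent network under a linear, acyclic reading of time. This is an illustration under those assumptions, not the thesis's own consistency checker.

```python
from collections import defaultdict
from typing import Dict, Set, Tuple

Meets = Set[Tuple[str, str]]   # meets(a, b): time element a is immediately followed by b

def closure(meets: Meets) -> Dict[str, Set[str]]:
    """Everything reachable from each element through one or more meets steps."""
    succ: Dict[str, Set[str]] = defaultdict(set)
    for a, b in meets:
        succ[a].add(b)
    reach = {a: set(bs) for a, bs in succ.items()}
    changed = True
    while changed:
        changed = False
        for a in reach:
            extra = set().union(*(reach.get(b, set()) for b in reach[a])) - reach[a]
            if extra:
                reach[a] |= extra
                changed = True
    return reach

def precedes(x: str, y: str, reach: Dict[str, Set[str]]) -> bool:
    # Derived relation: x wholly precedes y if y is reachable from x via meets.
    return y in reach.get(x, set())

def consistent(meets: Meets) -> bool:
    # Under a linear, acyclic reading of time, no element may precede itself.
    reach = closure(meets)
    return all(a not in reach.get(a, set()) for a, _ in meets)

facts: Meets = {("morning", "noon"), ("noon", "afternoon")}
r = closure(facts)
print(precedes("morning", "afternoon", r))               # True
print(consistent(facts))                                 # True
print(consistent(facts | {("afternoon", "morning")}))    # False: a temporal cycle
```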
20

Intelligent monitoring of business processes using case-based reasoning

Kapetanakis, Stylianos January 2012 (has links)
The work in this thesis presents an approach towards the effective monitoring of business processes using Case-Based Reasoning (CBR). The rationale behind this research was that business processes constitute a fundamental concept of the modern world and there is a constantly emerging need for their efficient control. They can be efficiently represented, but not necessarily monitored and diagnosed effectively, via an appropriate platform. Motivated by this observation, the research investigated the extent to which workflows can be efficiently monitored, diagnosed and explained. Workflows and their effective representation in terms of CBR were investigated, as well as how similarity measures among them could be established appropriately. The presentation of monitoring results and their explanation to users were also examined, along with the question of what software architecture is appropriate for monitoring workflow executions. Throughout the progress of this research, several sets of experiments were conducted using existing enterprise systems that are coordinated via a predefined workflow business process. Past data produced over several years were used for these experiments. Based on these data, the necessary knowledge repositories were built and then used to evaluate the suggested approach to the effective monitoring and diagnosis of business processes. The results show the extent to which a business process can be monitored and diagnosed effectively. They also provide hints on possible changes that would maximize the accuracy of the monitoring, diagnosis and explanation. Moreover, the presented approach can be generalised and extended to enterprise systems whose common characteristics are a possible workflow representation and the presence of uncertainty. Further work motivated by this thesis could investigate how knowledge acquisition can be transferred across workflow systems and be of benefit to large-scale multidimensional enterprises. Additionally, temporal uncertainty could be investigated further, in an attempt to address it during reasoning. Finally, the provenance of cases and their solutions could be explored further, identifying correlations with the reasoning process.
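A toy sketch of CBR retrieval and reuse for workflow monitoring, with invented cases and a simple sequence similarity; the thesis's actual similarity measures and knowledge repositories are richer than this.

```python
from dataclasses import dataclass
from difflib import SequenceMatcher
from typing import List

@dataclass
class Case:
    events: List[str]   # ordered events observed in a past workflow execution
    diagnosis: str      # outcome recorded for that execution, e.g. "ok", "stalled"

def similarity(a: List[str], b: List[str]) -> float:
    # Ratio of matching event subsequences, in [0, 1].
    return SequenceMatcher(None, a, b).ratio()

def retrieve(query: List[str], case_base: List[Case], k: int) -> List[Case]:
    return sorted(case_base, key=lambda c: similarity(query, c.events), reverse=True)[:k]

def diagnose(query: List[str], case_base: List[Case], k: int = 1) -> str:
    # Reuse step: majority diagnosis among the k most similar past executions.
    labels = [c.diagnosis for c in retrieve(query, case_base, k)]
    return max(set(labels), key=labels.count)

case_base = [
    Case(["submit", "review", "approve", "archive"], "ok"),
    Case(["submit", "review", "review", "review"], "stalled"),
    Case(["submit", "approve", "archive"], "ok"),
]
print(diagnose(["submit", "review", "review"], case_base))   # "stalled"
```

In a monitoring setting the retained cases and their recorded outcomes act as the knowledge repository, and new workflow executions are diagnosed by reusing the outcomes of their nearest past neighbours.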
