
Context-aware and adaptive usage control model

Almutairi, Abdulgader January 2013 (has links)
Information protection is a key issue for the acceptance and adoption of pervasive computing systems, where various portable devices such as smart phones, Personal Digital Assistants (PDAs) and laptop computers are used to share information and to access digital resources via wireless connection to the Internet. Because these devices are resource-constrained and highly mobile, changes in the environmental or device context can affect the security of the system a great deal. A proper security mechanism must be put in place which is able to cope with changing environmental and system context. The Usage CONtrol (UCON) model is the latest major enhancement of the traditional access control models; it enables mutability of subject and object attributes and continuity of control over the usage of resources. In UCON, the access permission decision is based on three factors: authorisations, obligations and conditions. While authorisations and obligations are requirements that must be fulfilled by the subject and the object, conditions are subject- and object-independent requirements that must be satisfied by the environment. As a consequence, access permission may be revoked (and the access stopped) as a result of changes in the environment, regardless of whether the authorisation and obligation requirements are met. This constitutes a major shortcoming of the UCON model in pervasive computing systems, which constantly strive to adapt to environmental changes so as to minimise disruptions to the user. We propose a Context-Aware and Adaptive Usage Control (CA-UCON) model which extends the traditional UCON model to enable adaptation to environmental changes, with the aim of preserving continuity of access. Indeed, when the authorisation and obligation requirements are fulfilled by the subject and the object but the condition requirements fail due to changes in the environmental or system context, the proposed CA-UCON model triggers specific actions in order to adapt to the new situation and ensure continuity of usage. We then propose an architecture for the CA-UCON model, presenting its various components. In this architecture, the adaptation decision is integrated with the usage decision; a comprehensive definition of each component is given, together with the functions it performs. We also propose a novel computational model of our CA-UCON architecture, formally specified as a finite state machine. It demonstrates how the subject's access request is handled in the CA-UCON model, including details of access revocation and of the actions undertaken due to context changes. The extension of the original UCON architecture can be understood from this model. The formal specification of CA-UCON is presented using the Calculus of Context-aware Ambients (CCA). This mathematical notation is considered suitable for modelling mobile and context-aware systems and has been preferred over alternatives for the following reasons: (i) mobility and context awareness are primitive constructs in CCA; (ii) a system's properties can be formally analysed; (iii) most importantly, CCA specifications are executable, allowing early validation of system properties and accelerated development of prototypes. For the evaluation of the CA-UCON model, a real-world case study of a ubiquitous learning (u-learning) system is selected. We propose a CA-UCON model for the u-learning system.
This model is then formalised in CCA and the resultant specification is executed and analysed using an execution environment of CCA. Finally, we investigate the enforcement approaches for CA-UCON model. We present the CA-UCON reference monitor architecture with its components. We then proceed to demonstrate three types of enforcement architectures of the CA-UCON model: centralised architecture, distributed architecture and hybrid architecture. These are discussed in detail, including the analysis of their merits and drawbacks.
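To give a concrete flavour of the finite-state computational model described above, the following is a minimal Python sketch of a usage-control session as a state machine. The states, events and transitions here are invented simplifications for illustration only; the actual CA-UCON model defines its own states and transitions in its formal CCA specification.

```python
# Illustrative sketch only: a usage-control session as a finite state machine.
# State and event names are hypothetical; they are not the CA-UCON states.

TRANSITIONS = {
    ("requesting", "permit"): "accessing",            # authorisations, obligations and conditions hold
    ("requesting", "deny"): "ended",
    ("accessing", "condition_fail"): "adapting",       # environmental or system context changed
    ("accessing", "end_usage"): "ended",
    ("adapting", "adaptation_success"): "accessing",   # adaptation preserves continuity of usage
    ("adapting", "adaptation_fail"): "revoked",        # fall back to revoking access, as in plain UCON
}

class UsageSession:
    def __init__(self):
        self.state = "requesting"

    def fire(self, event):
        nxt = TRANSITIONS.get((self.state, event))
        if nxt is None:
            raise ValueError(f"event {event!r} not allowed in state {self.state!r}")
        self.state = nxt
        return self.state

if __name__ == "__main__":
    session = UsageSession()
    for event in ["permit", "condition_fail", "adaptation_success", "end_usage"]:
        print(event, "->", session.fire(event))
```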

Practical improvements to the deformation method for point counting

Pancratz, Sebastian Friedrich January 2013 (has links)
In this thesis we investigate practical aspects related to point counting problems on algebraic varieties over finite fields. In particular, we present significant improvements to Lauder’s deformation method for smooth projective hypersurfaces, which allow this method to be successfully applied to previously intractable instances. Part I is dedicated to the deformation method, including a complete description of the algorithm but focussing on aspects for which we contribute original improvements. In Chapter 3 we describe the computation of the action of Frobenius on the rigid cohomology space associated to a diagonal hypersurface; in Chapter 4 we develop a method for fast computations in the de Rham cohomology spaces associated to the family, which allows us to compute the Gauss–Manin connection matrix. We conclude this part with a small selection of examples in Chapter 6. In Part II we present an improvement to Lauder’s fibration method. We manage to resolve the bottleneck in previous computations, formed by so-called polynomial radix conversions, by employing power series inverses and a more efficient implementation. Finally, Part III is dedicated to a comprehensive treatment of the arithmetic in unramified extensions of Qp, which is connected to the previous parts in that our computations rely on efficient implementations of p-adic arithmetic. We have made these routines available to others in FLINT as individual modules for p-adic arithmetic.
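The polynomial radix conversions mentioned in Part II are sped up with power series inverses; the standard tool for computing such an inverse is Newton iteration, which doubles the working precision at each step. The following is a small illustrative Python sketch of that iteration over the rationals — not the thesis's p-adic FLINT implementation — assuming the constant term of the series is invertible.

```python
# Newton iteration for the inverse of a truncated power series (illustrative
# only; the thesis works p-adically with FLINT, not over the rationals).
from fractions import Fraction

def series_mul(a, b, n):
    """Product of two power series given as coefficient lists, truncated to length n."""
    out = [Fraction(0)] * n
    for i, ai in enumerate(a[:n]):
        if ai:
            for j, bj in enumerate(b[: n - i]):
                out[i + j] += ai * bj
    return out

def series_inverse(f, n):
    """Inverse of f modulo x^n, assuming f[0] is invertible."""
    g = [Fraction(1) / Fraction(f[0])]          # correct to precision 1
    prec = 1
    while prec < n:
        prec = min(2 * prec, n)
        fg = series_mul(f, g, prec)             # f*g = 1 + O(x^{old precision})
        correction = [-c for c in fg]
        correction[0] += 2                      # 2 - f*g
        g = series_mul(g, correction, prec)     # g <- g*(2 - f*g)
    return g

if __name__ == "__main__":
    f = [Fraction(1), Fraction(-1)]             # the series 1 - x
    print(series_inverse(f, 6))                 # geometric series 1 + x + x^2 + ...
```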

Feature-based approach to bridge the information technology and business gap

Alazemi, Fayez January 2014 (has links)
The gap between business goals (problem domain), such as cost reduction, new business processes or increasing competitive advantage, and the supporting Information Technology infrastructure (solution domain), such as the ability to implement software solutions to achieve these goals, is complex and challenging to bridge. This gap emerges for many reasons; for instance, inefficient communication, misunderstanding of domain terminology, or external factors such as business change. As most business and software products can be described by a set of features, a promising solution is to link the problem and solution domains based on these features. Thus, the proposed approach aims to bridge the gap between the problem and solution domains by using a feature-based technique, in order to provide a quick and efficient means for understanding the relationships between IT solutions and business goals. The novelty of the proposed framework emanates from the three characteristics of the business-IT gap: the problem domain, the solution domain and the matching process. Besides the feature-based IT-business framework itself, further contributions are proposed: a feature extraction method and feature matching algorithms. The proposed approach proceeds in three phases. The first phase decomposes business needs and transforms them into a feature model (presented in UML diagrams); this is a top-to-middle process. The second phase is a reverse engineering process: program code is sliced into modules and transformed into feature-based models (again, in UML diagrams); this is a bottom-to-middle process. The third phase is a model-driven engineering process which uses model comparison techniques to match the UML feature models of the top-to-middle and bottom-to-middle phases. The approach presented in this research shows that features elicited from the business goals can be matched to features extracted from the IT side, and that it is feasible and able to provide a quick and efficient means for improving feature-based business-IT matching. Two case studies demonstrate that the feature-oriented view from the users' perspective can be matched to the feature-oriented view on the IT side. This matching can serve to remove ambiguities that may cause difficulties in system maintenance or system evolution, in particular when requirements change, which is to be expected whenever the business changes.
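As a toy illustration of the third (matching) phase, the sketch below pairs features from a business-side model with features recovered from code by simple name similarity. The thesis matches UML feature models with its own model comparison techniques; the feature names, the token-overlap measure and the threshold here are made-up stand-ins.

```python
# Toy illustration of feature matching by name similarity (Jaccard overlap of
# name tokens). Feature names, measure and threshold are hypothetical; the
# thesis compares UML feature models with dedicated matching algorithms.

def tokens(name):
    return set(name.lower().replace("_", " ").split())

def similarity(a, b):
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / len(ta | tb) if (ta | tb) else 0.0

def match_features(business_features, code_features, threshold=0.3):
    matches = []
    for bf in business_features:
        best = max(code_features, key=lambda cf: similarity(bf, cf), default=None)
        if best is not None and similarity(bf, best) >= threshold:
            matches.append((bf, best))
    return matches

if __name__ == "__main__":
    business = ["customer registration", "order tracking", "invoice generation"]
    code = ["register_customer", "track_order", "report_sales"]
    print(match_features(business, code))
```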

Stable marriage problem based adaptation for clone detection and service selection

Al Hakami, Hosam Hasan January 2015 (has links)
Current software engineering topics such as clone detection and service selection need to improve the capabilities of their detection and selection processes. Clone detection is the process of finding duplicated code throughout a system, for several purposes such as the removal of repeated portions as part of maintaining a legacy system. Service selection is the process of finding the appropriate web service which meets the consumer’s request. Both problems can be converted into a matching problem, and matching forms an essential part of software engineering activities. In this research, the well-known Stable Marriage Problem (SMP) and its variations are investigated to fulfil the purposes of matching processes in the software engineering area. We aim to provide a competitive matching algorithm that can help to detect cloned software accurately and ensure high scalability, precision and recall. We also aim to apply the matching algorithm to incoming requests and service profiles, treating the web service as an intelligent, independent object so that services can accept or decline requests (equal opportunity), rather than the current search-based state of service selection, in which the service cannot interact as an independent candidate. In order to meet these aims, the traditional SMP algorithm has been extended to achieve many-to-many cardinality. This adaptation is achieved by defining a selective strategy, which is the main engine of the new adaptations. Two adaptations, Dual-Proposed and Dual-Multi-Allocation, have been proposed for both the service selection and the clone detection processes. The proposed SMP-based approach shows very competitive results compared to existing software clone detection approaches, especially in identifying type 3 clones (copies with further modifications such as updated, added and deleted statements). It performs the detection process with relatively high precision and recall compared to the CloneDR tool and shows good scalability on a middle-sized program. For service selection, the proposed approach has several advantages, such as service protection and service quality. Services gain equal opportunity with respect to incoming requests; therefore intelligent service interaction is achieved, and both the stability and the satisfaction of the candidates are ensured. This dissertation makes several contributions: firstly, the new extended SMP algorithm, which introduces a selective strategy to accommodate many-to-many matching problems and improve overall performance; secondly, a new SMP-based clone detection approach that detects cloned software accurately and ensures high precision and recall; and finally, a new SMP-based service selection approach that allows equal opportunity between services and requests, leading to improved service protection and service quality. Case studies are carried out as experiments with the proposed approach, and show that the new adaptations can be applied effectively to the clone detection and service selection processes with several benefits (e.g. accuracy). It can be concluded that the match-based approach is feasible and promising in the software engineering domain.
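The adaptations described above start from the classical one-to-one Stable Marriage Problem. For background, the following is a minimal Python sketch of the textbook Gale–Shapley proposal algorithm; it shows only the one-to-one base case, not the many-to-many extension with the selective strategy developed in the thesis, and the example preference lists are invented.

```python
# Classical Gale-Shapley algorithm for the one-to-one stable marriage problem.
# The thesis extends SMP to many-to-many matching via a selective strategy;
# this sketch shows only the textbook starting point and assumes complete
# preference lists on both sides.

def stable_marriage(proposer_prefs, acceptor_prefs):
    """proposer_prefs / acceptor_prefs: dict mapping each id to an ordered
    preference list over the other side. Returns {proposer: acceptor}."""
    rank = {a: {p: i for i, p in enumerate(prefs)}
            for a, prefs in acceptor_prefs.items()}
    free = list(proposer_prefs)                # proposers not yet matched
    next_choice = {p: 0 for p in proposer_prefs}
    engaged = {}                               # acceptor -> proposer

    while free:
        p = free.pop()
        a = proposer_prefs[p][next_choice[p]]  # best acceptor not yet proposed to
        next_choice[p] += 1
        if a not in engaged:
            engaged[a] = p
        elif rank[a][p] < rank[a][engaged[a]]:
            free.append(engaged[a])            # current partner becomes free
            engaged[a] = p
        else:
            free.append(p)                     # rejected, will try next acceptor
    return {p: a for a, p in engaged.items()}

if __name__ == "__main__":
    proposers = {"c1": ["s1", "s2"], "c2": ["s1", "s2"]}   # e.g. requests / code fragments
    acceptors = {"s1": ["c2", "c1"], "s2": ["c1", "c2"]}   # e.g. services / clone candidates
    print(stable_marriage(proposers, acceptors))
```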

A multilayer framework for quality of context in context-aware systems

Al-Shargabi, Asma Abdulghani Qassem January 2015 (has links)
Context-aware systems use context information to decide what adaptation actions to perform in response to changes in their environment. Depending on the application, context information includes physical context (e.g. temperature and location), user context (e.g. user preferences and user activity), and ICT context (e.g. device capabilities and battery power). Sensors are the main means of capturing context. Unfortunately, sensed context data are commonly prone to imperfection due to the technical limitations of sensors, their availability, dysfunction, and the highly dynamic nature of the environment. Consequently, sensed context data might be imprecise, erroneous, conflicting, or simply missing. To limit the impact of context imperfection on the behaviour of a context-aware system, a notion of Quality of Context (QoC) is used to measure the quality of any information that is used as context information. Adaptation is performed only if the context data used in the decision-making has an appropriate quality level. This thesis develops a novel framework for QoC in context-aware systems, called MCFQoC (Multilayered-Context Framework for Quality of Context). The main innovative features of our framework, MCFQoC, include: (1) a new definition that generalises the notion of QoC to encompass sensed context as well as user-profiled context; (2) a novel multilayer context model that distinguishes between three context abstractions, in descending order: context situation, context object, and context element. A context element represents a single value; many context elements can be combined into a context object, and many context objects in turn form a context situation; (3) a novel model of QoC parameters which extends the existing parameters with a new quality parameter and explicitly distributes the quality parameters across the three layers of context abstraction; (4) a novel algorithm, RCCAR (Resolving Context Conflicts Using Association Rules), which has been developed to resolve conflicts in context data using the Association Rules (AR) technique; (5) a novel mechanism to define a QoC policy by assigning weights to QoC parameters using a multi-criteria decision-making technique called the Analytical Hierarchy Process (AHP); (6) and finally, a novel quality control algorithm called IPQP (Integrating Prediction with Quality of context Parameters for Context Quality Control) for handling context conflicts, missing context values, and erroneous context values. IPQP is an extension of RCCAR. Our framework, MCFQoC, has been implemented in MATLAB and evaluated using a case study of a flood forecast system. Results show that the framework is expressive and modular, thanks to the multilayer context model and to the notion of a QoC policy, which enables us to assign weights to QoC parameters depending on the quality requirements of each specific application. This flexibility makes it easy to apply our approach to a wider range of context-aware applications. As part of the MCFQoC framework, the IPQP algorithm has been successfully tested and evaluated for QoC control using a variety of scenarios. The RCCAR algorithm has been tested and evaluated both individually and as part of the MCFQoC framework, with significant performance in resolving context conflicts. In addition, RCCAR has achieved good results compared to traditional prediction methods such as moving average (MA), weighted moving average, exponential smoothing, double exponential smoothing, and autoregressive moving average (ARMA).
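As an illustration of the QoC-policy idea in point (5), the sketch below derives weights for a few QoC parameters from an AHP pairwise-comparison matrix (using the common column-normalisation approximation rather than the principal eigenvector) and then aggregates per-parameter quality values into a single score. The parameter names, comparison values and aggregation rule are made up for the example and are not the MCFQoC definitions.

```python
# Illustrative QoC policy: AHP-style weights from a pairwise-comparison matrix
# (column-normalisation approximation), then a weighted quality score.
# Parameter names and numbers are hypothetical.

def ahp_weights(matrix):
    n = len(matrix)
    col_sums = [sum(matrix[r][c] for r in range(n)) for c in range(n)]
    normalised = [[matrix[r][c] / col_sums[c] for c in range(n)] for r in range(n)]
    return [sum(row) / n for row in normalised]   # row averages approximate the weights

def qoc_score(values, weights):
    """Weighted aggregate of per-parameter quality values in [0, 1]."""
    return sum(v * w for v, w in zip(values, weights))

if __name__ == "__main__":
    # Pairwise comparisons for (accuracy, freshness, completeness): accuracy is
    # judged 3x as important as freshness and 5x as important as completeness.
    comparisons = [
        [1,     3,   5],
        [1 / 3, 1,   2],
        [1 / 5, 1 / 2, 1],
    ]
    w = ahp_weights(comparisons)
    print("weights:", [round(x, 3) for x in w])
    print("QoC score:", round(qoc_score([0.9, 0.6, 0.8], w), 3))
```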

Context-aware GPS integrity monitoring for Intelligent Transport Systems (ITS)

Binjammaz, Tareq January 2015 (has links)
The integrity of positioning systems has become an increasingly important requirement for location-based Intelligent Transport Systems (ITS). The navigation systems used in ITS, such as the Global Positioning System (GPS), cannot provide the high-quality positioning information required by most services, due to various types of errors from the GPS sensor, such as signal outages and atmospheric effects, all of which are difficult to measure, or from the map matching process. Consequently, an error in the positioning information or in the map matching process may lead to inaccurate determination of a vehicle’s location. Thus, integrity is required when measuring both the vehicle’s position and other related information such as speed, in order to locate the vehicle on the correct road segment and avoid errors. The integrity algorithm for the navigation system should include a guarantee that the system does not produce misleading or faulty information, as this may lead to significant errors in ITS services. Hence, to achieve the integrity requirement, a navigation system should have a robust mechanism to notify the user of any potential errors in the navigation information. The main aim of this research is to develop a robust and reliable mechanism to support the positioning requirements of ITS services. This is achieved by developing a high-integrity GPS monitoring algorithm that takes speed into consideration, based on the concept of context-awareness, which can be applied to real-time ITS services to adapt to changes in the integrity status of the navigation system. A context-aware architecture is designed to collect contextual information about the vehicle, including location, speed and heading, to reason about its integrity and to react based on the information acquired. In this research, three phases of integrity checks are developed: (i) positioning integrity, (ii) speed integrity, and (iii) map matching integrity. Each phase uses different techniques to examine the consistency of the GPS information. A receiver autonomous integrity monitoring (RAIM) algorithm is used to measure the quality of the GPS positioning data. GPS Doppler information is used to check the integrity of the vehicle’s speed, adding a new layer of integrity and improving the performance of the map matching process. The final phase of the integrity algorithm verifies the integrity of the map matching process; in this phase, fuzzy logic is used to measure the integrity level, which guarantees the validity and integrity of the map matching results. The algorithm is implemented successfully and examined using real field data. In addition, a true reference vehicle is used to determine the reliability and validity of the output. The results show that the new integrity algorithm has the capability to support various types of location-based ITS services.
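As a rough illustration of the speed-integrity phase, the sketch below checks a reported Doppler speed against the speed implied by two consecutive position fixes (haversine distance over elapsed time). The tolerance, fix values and function names are assumptions for the example and do not reproduce the thesis algorithm.

```python
# Illustrative speed consistency check: Doppler speed vs. speed implied by two
# consecutive GPS fixes. Threshold and values are hypothetical.
import math

EARTH_RADIUS_M = 6371000.0

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two latitude/longitude fixes."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def speed_consistent(fix1, fix2, doppler_speed_mps, tolerance_mps=2.0):
    """Return True if the Doppler speed agrees, within tolerance, with the
    speed implied by two consecutive position fixes (lat, lon, time_s)."""
    lat1, lon1, t1 = fix1
    lat2, lon2, t2 = fix2
    dt = t2 - t1
    if dt <= 0:
        return False  # cannot confirm consistency without a positive time step
    positional_speed = haversine_m(lat1, lon1, lat2, lon2) / dt
    return abs(positional_speed - doppler_speed_mps) <= tolerance_mps

if __name__ == "__main__":
    fix_a = (52.6293, -1.1398, 0.0)   # hypothetical fixes, 5 seconds apart
    fix_b = (52.6295, -1.1390, 5.0)
    print(speed_consistent(fix_a, fix_b, doppler_speed_mps=12.0))
```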

Features interaction detection and resolution in smart home systems using agent-based negotiation approach

Alghamdi, Ahmed Saeed January 2015 (has links)
Smart home systems (SHS) have become an increasingly important technology in modern life. Apart from safety, security, convenience and entertainment, they offer significant potential benefits for the elderly, the disabled and others who cannot live independently. Furthermore, smart homes are environmentally friendly. SHS functionality is based on perceiving residents’ needs and desires and then offering services accordingly. In order to be smart, homes have to be equipped with sensors, actuators and intelligent devices and appliances, as well as connectivity and control mechanisms. A typical SHS comprises heterogeneous services and appliances that are designed by many different developers and which may meet for the first time in the home network. The heterogeneous nature of these systems, in addition to the dynamic environment in which they are deployed, exposes them to undesirable interactions between services, known as Feature Interaction (FI). Another cause of FI is divergence between the policies, needs and desires of different residents. Approaches to FI detection and resolution should take these different types of interaction into account. Negotiation is an effective mechanism to address FI, as conflicting features can negotiate with each other to reach a compromise agreement. The ultimate goal of this study is to develop an Agent-Based Negotiation Approach (ABNA) to detect and resolve feature interaction in an SHS. A smart home architecture incorporating the components of the ABNA has been proposed. The backbone of the proposed approach is a hierarchy in which features are organised according to their importance in terms of their functional contribution to the overall service. Thus, features are categorised according to their priority, with those that are essential for the service to function having the highest priority. An agent model of the ABNA is proposed and comprehensive definitions of its components are presented. A computational model of the system has also been proposed, which is used to explain the behaviour of the different components when a proposal to perform a task is raised. To clarify the system requirements and to aid the design and implementation of its properties, a formal specification of the ABNA is presented using the mathematical notation of the Calculus of Context-aware Ambients (CCA). Then, in order to evaluate the approach, a case study is reported involving two services within the SHS: ventilation and air conditioning. For the purpose of evaluation, the execution environment of CCA is utilised to execute and analyse the ABNA.
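A minimal sketch of the priority-hierarchy idea underlying the ABNA is given below: when two features propose conflicting actions on the same device, the proposal from the more essential (higher-priority) feature prevails. The feature names, priorities and resolution rule are illustrative assumptions, not the ABNA negotiation protocol; the ventilation and air-conditioning services echo the case study mentioned above.

```python
# Illustrative conflict resolution by feature priority. Names, priority values
# and the rule itself are assumptions for the example, not the ABNA protocol.
from dataclasses import dataclass

@dataclass
class Proposal:
    feature: str
    device: str
    action: str
    priority: int          # lower number = more essential feature

def resolve(proposals):
    """Group proposals by device and keep the highest-priority one per device."""
    winners = {}
    for p in proposals:
        current = winners.get(p.device)
        if current is None or p.priority < current.priority:
            winners[p.device] = p
    return winners

if __name__ == "__main__":
    proposals = [
        Proposal("air_conditioning", "window", "close", priority=1),
        Proposal("ventilation", "window", "open", priority=2),
    ]
    for device, p in resolve(proposals).items():
        print(f"{device}: {p.action} (won by {p.feature})")
```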

Classroom simulation for trainee teachers using 3D virtual environments and simulated smartbot student behaviours

Alotaibi, Fahad Mazaed January 2014 (has links)
This thesis consists of an analysis of a classroom simulation using a Second Life (SL) experiment that aims to investigate the teaching impact of trainee teacher avatars on smartbots (virtual students) with respect to interaction, simulated behaviour, and observed teaching roles. The motivation for the classroom-based SL experiments is to enable the trainee teacher to acquire the necessary skills and experience to manage a real classroom environment through simulations of a real classroom. This type of training, which is almost a replica of the real-world experience, gives the trainee teacher enough confidence to become an expert teacher. In this classroom simulation, six trainee teachers evaluated the SL teaching experience by survey, using qualitative and quantitative methods that measured interaction, simulated behaviour, and safety. Additionally, six observers evaluated the trainee teachers' performance according to a set of teaching roles and role-play approaches. The experimental scenario was set up between smartbots, trainee teacher avatars, and observer avatars in the virtual classroom, where smartbots are intelligent agents managing SL bots, and where the groups are similar to one another but are under programming control.

Constraint specification by example in a meta-CASE tool

Qattous, Hazem Kathem January 2011 (has links)
Meta-CASE tools offer the ability to specialise and customise diagram-based software modelling editors. Constraints play a major role in these specialisation and customisation tasks. However, constraint definition is complicated. This thesis addresses the problem of constraint specification complexity in meta-CASE tools. Constraint Specification by Example (CSBE), a novel variant of Programming by Example, is proposed as a technique that can simplify and facilitate constraint specification in meta-CASE tools. In CSBE, a user presents visual examples of diagrams to the tool, which engages in a synergistic interaction with the user, based on system inference and additional user input, to arrive at the user’s intended constraint. A prototype meta-CASE tool incorporating CSBE has been developed and used to perform several empirical studies investigating the feasibility and potential advantages of CSBE. A first empirical study evaluated the performance of CSBE, in terms of effectiveness, efficiency and user satisfaction, compared to a typical form-filling technique. Results showed that users using CSBE correctly specified significantly more constraints and required less time to accomplish the task; users also reported higher satisfaction when using CSBE. A second empirical online study was conducted with the aim of discovering participants' preference for positive or negative natural-language polarity when expressing constraints. Results showed that subjects preferred positive constraint expression over negative expression. A third empirical study examined the effect of example polarity (negative vs. positive) on the performance of CSBE. A multi-polarity tool offering both positive and negative examples scored significantly higher correctness in a significantly shorter time, with significantly higher user satisfaction, compared to a tool offering only one example polarity. A fourth empirical study examined user-based addition of new example types and inference rules to the CSBE technique. Results demonstrated that users are able to add example types and that performance improves when they do so. Overall, CSBE has been shown to be feasible and to offer potential advantages compared to other commonly used constraint specification techniques.

Evaluation and improvement of semantically-enhanced tagging system

Alsharif, Majdah Hussain January 2013 (has links)
The Social Web or ‘Web 2.0’ is focused on interaction and collaboration between web site users. It is credited with the existence of tagging systems, amongst other things such as blogs and wikis. Tagging systems like YouTube and Flickr offer their users simplicity and freedom in creating and sharing their own content, and folksonomy is thus a very active research area in which many improvements are presented to overcome existing disadvantages such as lack of semantic meaning, ambiguity, and inconsistency. TE is a tagging system proposing solutions to the problems of multilingualism, lack of semantic meaning and shorthand writing (which is very common on the social web) through the aid of semantic and social resources. The current research presents an addition to the TE system in the form of an embedded stemming component, to provide a solution to the problem of different lexical forms. Prior to this, the TE system had to be explored thoroughly and its efficiency determined, in order to decide on the practicality of embedding any additional components as enhancements to the performance. Deciding on this involved analysing the algorithm's efficiency using an analytical approach to determine its time and space complexity. The TE system had a time growth rate of O(N²), which is polynomial, so the algorithm is considered efficient; nonetheless, recommended modifications such as batch SQL execution could improve this. Regarding space complexity, the number of tags per photo represents the problem size, which, if it grows, increases the required memory space linearly. Based on these findings, the TE system is re-implemented on Flickr instead of YouTube, because of a recent YouTube restriction; this is of greater benefit to a multi-language tagging system, since the language barrier is meaningless in this case. The re-implementation is achieved using ‘flickrj’ (a Java interface for the Flickr APIs). Next, the stemming component is added to perform tag normalisation prior to querying the ontologies. The component is embedded using a Java encoding of the Porter2 stemmer, which supports many languages including Italian. The impact of the stemming component on the performance of the TE system, in terms of the size of the index table and the number of retrieved results, is investigated in an experiment that showed a reduction of 48% in the size of the index table. This also means that search queries have fewer system tags to compare against the search keywords, which can speed up the search. Furthermore, the experiment ran similar search trials on two versions of the TE system, one without the stemming component and one with it, and found that the latter produced more results, provided that the search terms are valid words with valid stems. Embedding the stemming component in the new TE system has lessened the storage overhead of the generated system tags by reducing the size of the index table, which makes the system suited to many applications such as text classification, summarisation, email filtering, machine translation, etc.
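To illustrate the kind of tag normalisation the stemming component performs, here is a minimal Python sketch using NLTK's Snowball (Porter2) stemmer; the thesis embeds a Java Porter2 implementation instead, and the example tags are invented.

```python
# Minimal sketch of tag normalisation with a Porter2/Snowball stemmer before
# indexing, in the spirit of the TE stemming component. The thesis uses a Java
# Porter2 implementation; NLTK's SnowballStemmer is used here instead.
# Requires: pip install nltk

from nltk.stem.snowball import SnowballStemmer

def normalise_tags(tags, language="english"):
    stemmer = SnowballStemmer(language)
    # Map each raw tag to its stem; tags sharing a stem collapse to one index entry.
    return {tag: stemmer.stem(tag.lower()) for tag in tags}

if __name__ == "__main__":
    raw_tags = ["running", "runs", "runner", "connected", "connection"]
    stems = normalise_tags(raw_tags)
    print(stems)
    print("distinct index entries:", len(set(stems.values())))
```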
