
Using interaction data for improving the offline and online evaluation of search engines

Kharitonov, Evgeny January 2016 (has links)
This thesis investigates how web search evaluation can be improved using historical interaction data. Modern search engines combine offline and online evaluation approaches in a sequence of steps that a tested change needs to pass through to be accepted as an improvement and subsequently deployed. We refer to such a sequence of steps as an evaluation pipeline. In this thesis, we consider the evaluation pipeline to contain three sequential steps: an offline evaluation step, an online evaluation scheduling step, and an online evaluation step. We show that historical user interaction data can aid in improving the accuracy or efficiency of each step of the web search evaluation pipeline, and that, as a result of these improvements, the overall efficiency of the entire pipeline is increased. Firstly, we investigate how user interaction data can be used to build accurate offline evaluation methods for query auto-completion mechanisms. We propose a family of offline evaluation metrics for query auto-completion that represents the effort the user has to spend in order to submit their query. The parameters of our proposed metrics are trained against a set of user interactions recorded in the search engine’s query logs. From our experimental study, we observe that our proposed metrics are significantly more correlated with an online user satisfaction indicator than the metrics proposed in the existing literature. Hence, fewer changes will pass the offline evaluation step only to be rejected after the online evaluation step, which allows a higher efficiency of the entire evaluation pipeline to be achieved. Secondly, we state the problem of the optimised scheduling of online experiments. We tackle this problem by considering a greedy scheduler that prioritises the evaluation queue according to the predicted likelihood of success of a particular experiment. This predictor is trained on a set of online experiments and uses a diverse set of features to represent an online experiment. Our study demonstrates that a higher number of successful experiments per unit of time can be achieved by deploying such a scheduler on the second step of the evaluation pipeline. Consequently, we argue that the efficiency of the evaluation pipeline can be increased. Next, to improve the efficiency of the online evaluation step, we propose the Generalised Team Draft interleaving framework. Generalised Team Draft considers both the interleaving policy (how often a particular combination of results is shown) and click scoring (how important each click is) as parameters in a data-driven optimisation of the interleaving sensitivity. Further, Generalised Team Draft is applicable beyond domains with a list-based representation of results, e.g. in domains with a grid-based representation, such as image search. Our study, using datasets of interleaving experiments performed in both document and image search domains, demonstrates that Generalised Team Draft achieves the highest sensitivity. A higher sensitivity indicates that the interleaving experiments can be deployed for a shorter period of time or use a smaller sample of users. Importantly, Generalised Team Draft optimises the interleaving parameters w.r.t. historical interaction data recorded in the interleaving experiments. Finally, we propose to apply sequential testing methods to reduce the mean deployment time of the interleaving experiments, and we adapt two sequential tests to interleaving experimentation.
We demonstrate that a significant decrease in experiment duration can be achieved by using such sequential testing methods. The highest efficiency is achieved by the sequential tests that adjust their stopping thresholds using historical interaction data recorded in diagnostic experiments. Our further experimental study demonstrates that cumulative gains in online experimentation efficiency can be achieved by combining the interleaving sensitivity optimisation approaches, including Generalised Team Draft, with the sequential testing approaches. Overall, the central contributions of this thesis are the proposed approaches for improving the accuracy or efficiency of the steps of the evaluation pipeline: offline evaluation frameworks for query auto-completion, an approach for the optimised scheduling of online experiments, a general framework for efficient online interleaving evaluation, and a sequential testing approach for online search evaluation. The experiments in this thesis are based on massive real-life datasets obtained from Yandex, a leading commercial search engine, and demonstrate the potential of the proposed approaches to improve the efficiency of the evaluation pipeline.
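To make the interleaving step concrete, the sketch below implements plain Team Draft interleaving with an optional per-click weighting hook. It is an illustrative baseline under stated assumptions, not the Generalised Team Draft optimisation described in the thesis; the function names and the weighting scheme are hypothetical.

```python
import random

def team_draft_interleave(ranking_a, ranking_b, k=10):
    """Interleave two rankings with the standard Team Draft scheme.

    Returns the interleaved list and, per position, the team (0 = A, 1 = B)
    credited with that result.
    """
    rankings = [list(ranking_a), list(ranking_b)]
    interleaved, teams, count = [], [], [0, 0]
    while len(interleaved) < k:
        # The team that has contributed fewer results picks next; ties -> coin flip.
        if count[0] != count[1]:
            team = 0 if count[0] < count[1] else 1
        else:
            team = random.randint(0, 1)
        # Take that team's highest-ranked result not already shown.
        pool = [d for d in rankings[team] if d not in interleaved]
        if not pool:
            team = 1 - team
            pool = [d for d in rankings[team] if d not in interleaved]
            if not pool:
                break
        interleaved.append(pool[0])
        teams.append(team)
        count[team] += 1
    return interleaved, teams

def credit_clicks(teams, clicked_positions, weights=None):
    """Credit clicks to teams; the optional weights stand in for learned click scores."""
    weights = weights or [1.0] * len(clicked_positions)
    credit = [0.0, 0.0]
    for pos, w in zip(clicked_positions, weights):
        credit[teams[pos]] += w
    return credit  # credit[0] > credit[1] means ranker A won this impression
```

Aggregating such per-impression credits over many impressions yields the experiment outcome; Generalised Team Draft additionally learns the interleaving policy and the click weights from historical data to maximise sensitivity.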

Query routing in cooperative semi-structured peer-to-peer information retrieval networks

Alkhawaldeh, Rami Suleiman Ayed January 2016 (has links)
Conventional web search engines are centralised in that a single entity crawls and indexes the documents selected for future retrieval, and maintains the relevance models used to determine which documents are relevant to a given user query. As a result, these search engines suffer from several technical drawbacks, such as handling scale, timeliness and reliability, in addition to ethical concerns such as commercial manipulation and information censorship. Alleviating the need to rely entirely on a single entity, Peer-to-Peer (P2P) Information Retrieval (IR) has been proposed as a solution, as it distributes the functional components of a web search engine – from crawling and indexing documents to query processing – across the network of users (or peers) who use the search engine. This strategy for constructing an IR system poses several efficiency and effectiveness challenges which have been identified in past work. Accordingly, this thesis makes several contributions towards advancing the state of the art in P2P-IR effectiveness by improving the query processing and relevance scoring aspects of P2P web search. Federated search systems are a form of distributed information retrieval model that routes the user’s information need, formulated as a query, to distributed resources and merges the retrieved result lists into a final list. P2P-IR networks are one form of federated search in routing queries and merging results among participating peers. The query is propagated through disseminated nodes to reach the peers that are most likely to contain relevant documents, and the retrieved result lists are then merged at different points along the path from the relevant peers back to the query initiator (or customer). However, query routing is considered one of the major challenges and a critical part of P2P-IR networks, as relevant peers might be missed by low-quality peer selection during query routing, which inevitably leads to less effective retrieval results. This motivates this thesis to study and propose query routing techniques that improve retrieval quality in such networks. Cluster-based semi-structured P2P-IR networks exploit the cluster hypothesis to organise the peers into semantically similar clusters, where each such cluster is managed by super-peers. In this thesis, I construct three semi-structured P2P-IR models and examine their retrieval effectiveness. I also leverage the cluster centroids at the super-peer level, as content representations gathered from cooperative peers, to propose a query routing approach called the Inverted PeerCluster Index (IPI), which mimics the conventional inverted index of a centralised corpus to organise the statistics of peers’ terms. The results show competitive retrieval quality in comparison to baseline approaches. Furthermore, I study the applicability of conventional Information Retrieval models as peer selection approaches, where each peer can be considered as one large document made up of its documents. The experimental evaluation shows competitive and significant results, demonstrating that document retrieval methods are very effective for peer selection, which reinforces the analogy between documents and peers. Additionally, Learning to Rank (LtR) algorithms are exploited to build a learned classifier for peer ranking at the super-peer level. The experiments show significant results compared with state-of-the-art resource selection methods, and results competitive with corresponding classification-based approaches.
Finally, I propose reputation-based query routing approaches that exploit the idea, familiar from social community networks, of providing feedback on a specific item and managing it for future decision-making. The system monitors users’ behaviour when they click or download documents from the final ranked list as implicit feedback, and mines this information to build a reputation-based data structure. The data structure is used to score peers and then rank them for query routing. I conduct a set of experiments covering various scenarios, including noisy feedback information (i.e., positive feedback given on non-relevant documents), to examine the robustness of the reputation-based approaches. The empirical evaluation shows significant results on almost all measurement metrics, with an approximate improvement of more than 56% over baseline approaches. Thus, based on the results, if one were to choose a single technique, the reputation-based approaches are clearly the natural choice, and they can also be deployed on any P2P network.
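As a rough illustration of the reputation-based routing idea, the sketch below keeps a smoothed click/download ratio per peer and ranks candidate peers by it. The class, smoothing scheme and parameter values are assumptions for illustration and are not taken from the thesis.

```python
from collections import defaultdict

class ReputationRouter:
    """Minimal sketch of reputation-based peer ranking from implicit feedback."""

    def __init__(self, prior=1.0):
        self.positive = defaultdict(float)   # clicks/downloads credited to a peer
        self.total = defaultdict(float)      # results the peer contributed
        self.prior = prior                   # Laplace-style smoothing

    def record(self, peer_id, contributed, clicked):
        self.total[peer_id] += contributed
        self.positive[peer_id] += clicked

    def reputation(self, peer_id):
        # Smoothed fraction of a peer's contributed results that received feedback.
        return (self.positive[peer_id] + self.prior) / (self.total[peer_id] + 2 * self.prior)

    def route(self, candidate_peers, top_m=5):
        # Forward the query to the top-m peers by reputation.
        return sorted(candidate_peers, key=self.reputation, reverse=True)[:top_m]
```

In this simplified form, noisy feedback (clicks on non-relevant documents) inflates a peer's reputation only gradually, which is the robustness property the experiments in the thesis examine.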

Anonymizing large transaction data using MapReduce

Memon, Neelam January 2016 (has links)
Publishing transaction data is important to applications such as marketing research and biomedical studies. Privacy is a concern when publishing such data, since they often contain person-specific sensitive information. To address this problem, different data anonymization methods have been proposed. These methods have focused on protecting the associated individuals from different types of privacy leaks as well as preserving the utility of the original data. However, all these methods are sequential and designed to process data on a single machine, and hence do not scale to large datasets. Recently, MapReduce has emerged as a highly scalable platform for data-intensive applications. In this work, we consider how MapReduce may be used to provide scalability in large transaction data anonymization. More specifically, we consider how set-based generalization methods such as RBAT (Rule-Based Anonymization of Transaction data) may be parallelized using MapReduce. Set-based generalization methods have some desirable features for transaction anonymization, but their highly iterative nature makes parallelization challenging. RBAT is a good representative of such methods. We propose a method for transaction data partitioning and representation. We also present two MapReduce-based parallelizations of RBAT. Our methods ensure scalability when the number of transaction records and the domain of items are large. Our preliminary results show that a direct parallelization of RBAT by partitioning data alone can result in significant overhead, which can offset the gains from parallel processing. We propose MR-RBAT, which generalizes our direct parallel method and allows the parallelization overhead to be controlled. Our experimental results show that MR-RBAT can scale linearly to large datasets and to the available resources while retaining good data utility.
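The following single-process sketch shows the MapReduce pattern that such a parallelization relies on: mappers emit per-item statistics from their data partition and a reducer aggregates them. It is a simplified stand-in for one sub-task of set-based generalization (item support counting), with illustrative function names; it is not the MR-RBAT algorithm itself.

```python
from collections import defaultdict
from itertools import chain

def map_phase(partition):
    """Emit (item, 1) for every item occurrence in one partition of transactions."""
    for transaction in partition:
        for item in set(transaction):
            yield item, 1

def reduce_phase(pairs):
    """Sum the counts emitted for each item across all partitions."""
    support = defaultdict(int)
    for item, count in pairs:
        support[item] += count
    return dict(support)

if __name__ == "__main__":
    partitions = [
        [{"a", "b"}, {"a", "c"}],       # partition handled by mapper 1
        [{"b", "c"}, {"a", "b", "c"}],  # partition handled by mapper 2
    ]
    pairs = chain.from_iterable(map_phase(p) for p in partitions)
    print(reduce_phase(pairs))          # e.g. {'a': 3, 'b': 3, 'c': 3}
```

The challenge the thesis addresses is that RBAT repeats rounds of such global statistics gathering many times, so the per-round MapReduce overhead must be controlled for the parallelization to pay off.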

Analysis of bandwidth attacks in a BitTorrent swarm

Adamsky, Florian January 2016 (has links)
The beginning of the 21st century saw a widely publicized lawsuit against Napster. This was the first Peer-to-Peer software that allowed its users to search for and share digital music with other users. At the height of its popularity, Napster boasted 80 million registered users. This marked the beginning of a Peer-to-Peer paradigm and the end of older methods of distributing cultural possessions. But Napster was not entirely rooted in a Peer-to-Peer paradigm. Only the download of a file was based on Peer-to-Peer interactions; the search process was still based on a central server. It was thus easy to shut down Napster. Shortly after the shutdown, Bram Cohen developed a new Peer-to-Peer protocol called BitTorrent. The main principle behind BitTorrent is an incentive mechanism, called a choking algorithm, which rewards peers that share. Currently, BitTorrent is one of the most widely used protocols on the Internet. Therefore, it is important to investigate the security of this protocol. While significant progress has been made in understanding the BitTorrent choking mechanism, its security vulnerabilities have not yet been thoroughly investigated. This dissertation provides a security analysis of the BitTorrent Peer-to-Peer protocol at the application and transport layers. The dissertation begins with an experimental analysis of bandwidth attacks against different choking algorithms in the BitTorrent seed state. I reveal a simple exploit that allows malicious peers to receive a considerably higher download rate than contributing leechers, thereby causing a significant loss of efficiency for benign peers. I show the damage caused by the proposed attack in two different environments: a lab testbed comprising 32 peers and a global testbed, PlanetLab, with 300 peers. Our results show that three malicious peers can degrade the download rate by up to 414.99% for all peers. Combined with a Sybil attack with as many attackers as leechers, it is possible to degrade the download rate by more than 1000%. I propose a novel choking algorithm that is immune to bandwidth attacks, as well as a countermeasure against the revealed attack. This thesis also includes a security analysis of the transport layer. To make BitTorrent more Internet Service Provider friendly, BitTorrent Inc. invented the Micro Transport Protocol. It is based on the User Datagram Protocol with a novel congestion control algorithm called Low Extra Delay Background Transport. This protocol assumes that the receiver always provides correct feedback; otherwise, throughput deteriorates or data become corrupted. I show through experimental evaluation that a misbehaving Micro Transport Protocol receiver that is not interested in data integrity can increase the bandwidth of the sender by up to five times. This can cause a congestion collapse and steal a large share of a victim’s bandwidth. I present three attacks which increase bandwidth usage significantly. I have tested these attacks in real-world environments and demonstrate their severity both in terms of the number of packets and the total traffic generated. I also present a countermeasure for protecting against these attacks and evaluate the performance of this defensive strategy. In the last section, I demonstrate that the BitTorrent protocol family is vulnerable to Distributed Reflective Denial-of-Service attacks.
Specifically, I show that an attacker can exploit BitTorrent protocols (the Micro Transport Protocol, the Distributed Hash Table, Message Stream Encryption and BitTorrent Sync) to reflect and amplify traffic from BitTorrent peers to any target on the Internet. I validate the efficiency, robustness, and difficulty of defending against the exposed BitTorrent vulnerabilities in a Peer-to-Peer lab testbed. I further substantiate the lab results by crawling more than 2.1 million IP addresses over the Mainline Distributed Hash Table and analyzing more than 10,000 BitTorrent handshakes. The experiments suggest that an attacker is able to exploit BitTorrent peers to amplify traffic by a factor of 50 and, in the case of BitTorrent Sync, by a factor of 120. Additionally, I observe that the most popular BitTorrent clients are the most vulnerable ones.
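For context, the amplification factor used in such reflection measurements can be computed as reflected bytes per request byte. The sketch below uses made-up packet sizes purely for illustration; the figures are not measurements from the dissertation.

```python
def amplification_factor(request_bytes, response_bytes):
    """Bandwidth amplification factor: bytes reflected to the victim
    per byte sent by the attacker (a standard DRDoS metric)."""
    return sum(response_bytes) / request_bytes

# Illustrative numbers only -- not measurements from the dissertation.
spoofed_request = 116                        # bytes in one spoofed handshake
reflected = [1448, 1448, 1448, 980]          # bytes the reflector sends to the victim
print(f"BAF = {amplification_factor(spoofed_request, reflected):.1f}")
```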

On the probability of perfection of software-based systems

Zhao, Xingyu January 2016 (has links)
The probability of perfection has become of interest as its role in the reliability assessment of software-based systems has been recognized. It is not only important in its own right, but also in the reliability assessment of 1-out-of-2 diverse systems. By “perfection”, we mean that the software will never fail in a specific operating environment. If we assume that failures of a software system can occur if and only if it contains faults, then perfection means that the system is “fault-free”. Such perfection is possible for sufficiently simple software. Since perfection can never be certain, the interest lies in claims about the probability of perfection. In this thesis, two different probabilities of perfection – an objective parameter characterizing a population property, and a subjective confidence in the perfection of the specific software of interest – are first distinguished and discussed. Then a conservative Bayesian method is used to make claims about the probability of perfection from various types of evidence, i.e. failure-free testing evidence, process evidence and formal proof evidence. In addition, a “quasi-perfection” notion is introduced as a potentially useful approach to cover some shortcomings of the perfection models. A possible framework to incorporate the various models is discussed at the end. There are two general themes in this thesis: tackling the failure dependence issue in the reliability assessment of 1-out-of-2 diverse systems at both the aleatory and epistemic levels; and easing the well-known difficulty of specifying complete Bayesian priors by reasoning with only partial priors. Both are addressed at the price of conservatism. In summary, this thesis provides three parallel sets of (quasi-)perfection models which can be used individually as a conservative end-to-end argument reasoning from various types of evidence to the reliability of a software-based system. Although in some cases the models provide very conservative results, some ways of dealing with the excessive conservatism are proposed. In other cases, the very conservative results can serve as warnings or support to safety engineers and regulators in the face of claims based on reasoning that is less rigorous than the reasoning in this thesis.
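One well-known argument of this kind, due to Littlewood and Rushby, illustrates why a probability of (im)perfection is useful for 1-out-of-2 diverse systems. The bound below is a sketch with illustrative notation; the precise assumptions under which it holds are given in that literature, not here.

```latex
% Sketch of a conservative bound for a 1-out-of-2 system in which channel B
% is "possibly perfect" (notation illustrative):
\[
  P(\text{1oo2 system fails on a random demand})
  \;\le\;
  p_A \cdot P(\neg\,\mathrm{perfect}_B)
\]
% where $p_A$ is the probability of failure on demand of channel A, and
% $P(\neg\,\mathrm{perfect}_B)$ is the assessor's (epistemic) probability
% that channel B is not fault-free for the given operating environment.
```

Bounds of this shape replace the hard problem of estimating failure dependence between two unreliable channels with the problem of assessing one channel's probability of imperfection, which is the quantity this thesis reasons about conservatively.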

Cryptographic privacy-preserving enhancement method for investigative data acquisition

Kwecka, Zbigniew January 2011 (has links)
The current processes involved in the acquisition of investigative data from third parties, such as banks, Internet Service Providers (ISPs) and employers, by the public authorities can breach the rights of the individuals under investigation. This is mainly caused by the necessity to identify the records of interest, and thus the potential suspects, to the dataholders. Conversely, the public authorities often put pressure on legislators to provide more direct access to third-party data, mainly in order to improve turnaround times for enquiries and to limit the likelihood of compromising the investigations. This thesis presents a novel methodology for improving the privacy and the performance of the investigative data acquisition process. The thesis shows that it is possible to adapt Symmetric Private Information Retrieval (SPIR) protocols for use in the acquisition process, and that it is possible to dynamically adjust the balance between privacy and performance based on the notion of k-anonymity. In order to evaluate the findings, an Investigative Data Acquisition Platform (IDAP) is formalised as a cryptographic privacy-preserving enhancement to the current data acquisition process. SPIR protocols are often computationally intensive and are therefore generally unsuitable for retrieving records from large datasets, such as ISP databases containing network traffic data. This thesis shows that, despite the fact that many potential sources of investigative data exist, in most cases the data acquisition process can be treated as single-database SPIR. Thanks to this observation, the notion of k-anonymity, developed for privacy-preserving statistical data-mining protocols, can be applied to investigative scenarios and used to narrow down the number of records that need to be processed by a SPIR protocol. This novel approach makes the application of SPIR protocols to the retrieval of investigative data feasible. This thesis defines the dilution factor as a parameter that expresses the range of records used to hide the identity of a single suspect. Interestingly, the value of this parameter does not need to be large in order to protect privacy if the enquiries to a given dataholder are frequent. Therefore, IDAP is capable of retrieving a record of interest from a dataholder in a matter of seconds, while an ordinary SPIR protocol could take days to retrieve a record from a large dataset. This thesis introduces into the investigative scenario a semi-trusted third party: a watchdog organisation that could proxy the requests for investigative data from all public authorities. This party verifies the requests for data and hides the requesting party from the dataholder, which limits the dataholder’s ability to judge the nature of the enquiry. Moreover, the semi-trusted party would filter the SPIR responses from the dataholders by securely discarding the records unrelated to the enquiries. This would prevent the requesting party from using large computational power to decrypt the diluting records in the future, and would allow the watchdog organisation to verify retrieved data in court, should such a need arise. Therefore, this thesis demonstrates a new use for semi-trusted third parties in SPIR protocols.
Traditionally used to improve the complexity of SPIR protocols, such a party can potentially improve the perception of cryptographic trapdoor-based privacy-preserving information retrieval systems by introducing policy-based controls. The final contribution to knowledge of this thesis is the definition of a process for the privacy-preserving matching of records from different datasets based on multiple selection criteria. This allows for the retrieval of records based on parameters other than the identifier of the record of interest, and is thus capable of adding a degree of fuzzy matching to SPIR protocols, which traditionally require a perfect match between the request and the records being retrieved. This allows datasets to be searched based on circumstantial knowledge and suspect profiles, and thus extends the notion of SPIR to more complex scenarios. The constructed IDAP is therefore a platform for investigative data acquisition employing the Private Equi-join (PE) protocol – a commutative cryptography SPIR protocol. The thesis shows that the use of commutative cryptography in enquiries where multiple records need to be matched and then retrieved (m-out-of-n enquiries) is beneficial to computational performance. However, the above customisations can be applied to other SPIR protocols in order to make them suitable for the investigative data acquisition process. These customisations, together with the findings of the literature review and the analysis of the field presented in this thesis, contribute to knowledge and can improve privacy in investigative enquiries.
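To illustrate the commutative-cryptography ingredient behind PE-style private matching, the sketch below uses exponentiation modulo a prime, where blinding a value with two keys in either order yields the same result, so two parties can test equality without revealing the underlying identifier. The prime, key generation and identifier are illustrative assumptions, not the parameters used by IDAP.

```python
# Minimal sketch of a commutative cipher: E_k(m) = m^k mod p, so that
# E_a(E_b(m)) == E_b(E_a(m)).  The prime below is far too small for real use;
# a deployment would use a large safe prime and hashed identifiers.
from math import gcd
import secrets

P = 0xFFFFFFFFFFFFFFC5            # 2**64 - 59, a prime (illustrative size only)

def keygen():
    while True:
        k = secrets.randbelow(P - 2) + 2
        if gcd(k, P - 1) == 1:    # invertible exponent -> decryption is possible
            return k

def commute_encrypt(value, key):
    return pow(value, key, P)

# Two parties blind the same identifier with their own keys, in either order.
ka, kb = keygen(), keygen()
record_id = 123456789
once  = commute_encrypt(commute_encrypt(record_id, ka), kb)
twice = commute_encrypt(commute_encrypt(record_id, kb), ka)
assert once == twice              # equality testable without revealing the id
```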

Integrating measurement techniques in an object-oriented systems design process

Li-Thiao-Te, Philippe January 1999 (has links)
The theme of this thesis is the assessment of quality in class hierarchies. In particular, the notion of inheritance and the mechanism of redefinition are reviewed from a modelling perspective. It is shown that, in Object-Oriented languages, controversial uses of inheritance can be implemented and are the subject of debate, as they contradict the essence of inheritance. The discovery of an unexpected use of the method redefinition mechanism confirmed that potential design inconsistencies occur more often than expected in class hierarchies. To address such problems, design heuristics and measurement techniques are investigated as the main instruments for evaluating “goodness” or “badness” in class hierarchies. Their benefits are demonstrated within the design process. After the identification of an obscure use of the method redefinition mechanism, referred to as the multiple descendant redefinition (MDR) problem, a set of metrics based on the GQM/MEDEA [Bri&al94] model is proposed. To enable a measurement programme to take place within a design process, the necessary design considerations are detailed and the technical issues involved in the measurement process are presented. Both aspects form a methodological approach for class hierarchy assessment and especially concentrate on the use of the redefinition mechanism. As one of the main criticisms of measurement science is the lack of good design feedback, the analysis and interpretation phase of the metrics results is seen as a crucial phase for inferring meaningful conclusions. A novel data interpretation framework is proposed, which includes the use of various graphical data representations and detection techniques. The notion of redefinition profiles also suggested a more generic approach whereby a pattern profile can be found for a metric. The benefits of the data interpretation method for extracting meaningful design feedback from the metrics results are discussed. The implementation of a metrics collection tool enabled a set of experiments to be carried out on the Smalltalk class hierarchy. Surprisingly, the analysis of metrics results showed that method redefinition is heavily used compared to method extension. This suggested the existence of potential design inconsistencies in the class hierarchy and permitted the discovery of the MDR problem on many occasions. In addition, a set of experiments demonstrates the benefits of example graphical representations together with detection techniques such as alarmers. In the light of facilitating the interpretation phase, the need for additional supporting tools is highlighted. This thesis illustrates the potential benefits of integrating measurement techniques within an Object-Oriented design process. Given the identification of the MDR problem, it is believed that the redefinition metrics are strong and simple candidates for detecting complex design problems occurring within a class hierarchy. An integrated design assessment model is proposed which fits logically into an incremental design development process. Benefits and disadvantages of the approach are discussed together with future work.
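The distinction between method extension and method redefinition that the metrics target can be illustrated in a few lines. The classes below are hypothetical (and rendered in Python rather than Smalltalk); they are not taken from the hierarchy studied in the thesis.

```python
class Account:
    def withdraw(self, amount):
        print(f"debit {amount}")

class AuditedAccount(Account):
    def withdraw(self, amount):            # extension: refines, then reuses, the parent
        print("audit log entry")
        super().withdraw(amount)

class BlockedAccount(Account):
    def withdraw(self, amount):            # redefinition: replaces the parent outright
        raise RuntimeError("withdrawals are disabled")

# When many descendants redefine (rather than extend) the same inherited method,
# as in the multiple descendant redefinition (MDR) problem, the parent's behaviour
# is no longer a reliable description of its subclasses -- a candidate design
# inconsistency that redefinition metrics can flag.
AuditedAccount().withdraw(10)              # audit log entry / debit 10
```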

Network firewall dynamic performance evaluation and formalisation

Saliou, Lionel January 2009 (has links)
Computer network security is key to the daily operations of an organisation, its growth and its future. It is unrealistic for an organisation to devote all of its resources to computer network security, but equally an organisation must be able to determine whether its security policy is achievable and under which criteria. Yet it is often not possible for an organisation to define its security policy, especially one that fully complies with the laws of the land, to ensure its actual implementation on network devices, and finally to audit the overall system for compliance. This thesis argues that one of the obstacles to the complete realisation of such an Integrated Security Framework is the lack of deep understanding, in particular in terms of dynamic performance, of the network devices on which the security policy will be deployed. Thus, one novelty of this research is a Dynamic Evaluation Environment for Network Security that allows the identification of the strengths and weaknesses of networked security devices, such as network firewalls. In turn, it enables organisations to model the dynamic performance impact of security policies deployed on these devices, as well as to identify the benefit of various implementation choices or prioritisations. Hence, this novel evaluation environment allows the creation of instances of a network firewall dynamic performance model, and this modelling is part of the Integrated Security Framework, enabling it to highlight when particular security requirements cannot be met by the underlying systems, or how best to achieve the objectives. More importantly, perhaps, the evaluation environment enables organisations to comply with upcoming legislation that increases an organisation's legal cover and demands consistent and scientific evidence of fitness prior to security incidents. Dynamic evaluations produce a large amount of raw data, which often does not allow for a comprehensive analysis and interpretation of the results obtained. Along with this, it is necessary to relate the data collected to a dynamic firewall performance model. To overcome this, this research proposes a unique formalisation of the inputs and outputs of the proposed model, which in turn allows for performance analysis from multiple viewpoints, such as: increased security requirements in the form of larger rule-set sizes; the effects of changes to the underlying network equipment; or the complexity of filtering. These viewpoints are considered as evaluation scenarios and also have unique formalisations. Evaluations focused on two types of network firewalls, and key findings include the fact that the overhead of a strong security policy can be kept acceptable on embedded firewalls provided that outgoing filtering is used. Along with this, dynamic evaluation allows the identification of the additional performance impact of unoptimised configurations, and such findings complement work that focuses on the logical properties of network firewalls. These evaluations also demonstrate the need for scientific rigour, as the data show that the embedded and software network firewalls evaluated have different areas of strength and weakness. Indeed, it appears that software firewalls are not as affected as embedded firewalls by the complexity of filtering. On the other hand, the number of rules a software firewall enforces is its main performance factor, especially at high network speeds.
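As a toy illustration of one dynamic-evaluation scenario (filtering cost versus rule-set size), the self-contained sketch below times a simulated linear-scan firewall. The simulated device, rule format and numbers are assumptions for illustration only, not measurements of the embedded or software firewalls evaluated in the thesis.

```python
import time

class LinearScanFirewall:
    """Naive simulated firewall that scans its whole rule set per packet."""
    def __init__(self, rules):
        self.rules = rules

    def filter(self, packet):
        for rule in self.rules:                 # worst case: every rule is inspected
            if rule == packet:
                return "deny"
        return "permit"

def evaluate(rule_set_sizes, probes=2000):
    """Return mean per-packet filtering cost for each rule-set size."""
    results = {}
    for size in rule_set_sizes:
        fw = LinearScanFirewall([("tcp", 1024 + i) for i in range(size)])
        start = time.perf_counter()
        for _ in range(probes):
            fw.filter(("udp", 53))              # probe never matches -> full scan
        results[size] = (time.perf_counter() - start) / probes
    return results

if __name__ == "__main__":
    for size, cost in evaluate([10, 100, 1000]).items():
        print(f"{size:5d} rules -> {cost * 1e6:7.2f} us per packet")
```

A real evaluation environment replaces the simulated device with traffic generators driving actual firewalls, but the shape of the scenario (vary rule-set size, hold traffic constant, record latency/throughput) is the same.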

MARIAN: a hybrid, metric-driven, agent-based routing protocol for multihop ad-hoc networks

Migas, Nikos January 2008 (has links)
Recent advances in technology have provided the ground for highly dynamic, mobile, infrastructure-less networks, namely ad-hoc networks. Despite their enormous benefits, their full potential cannot be reached unless certain issues are resolved. These mainly involve routing, as the lack of an infrastructure imposes a heavy burden on mobile devices that must maintain location information and route data packets in a multi-hop fashion. Specifically, typical ad-hoc routing devices, such as Personal Digital Assistants (PDAs), are limited with respect to the throughput, lifetime, and performance that they may provide as routing elements. Thus, there is a need for metric-driven ad-hoc routing; that is, devices should be utilised for routing according to their fitness, as different device types vary significantly in terms of routing fitness. In addition, a concrete agent-based approach can provide a set of advantages over a non-agent-based one, including better design practice and automatic reconfigurability. This research work aims to investigate the applicability of stationary and mobile agent technology in multi-hop ad-hoc routing. Specifically, this research proposes a novel hybrid, metric-driven, agent-based routing protocol for multi-hop ad-hoc networks that will enhance current routing schemes. The novelties that are expected to be achieved include: maximum network performance, increased scalability, dynamic adaptation, Quality of Service (QoS), energy conservation, reconfigurability, and security. The underlying idea is based on the fact that stationary and mobile agents can be ideal candidates for such dynamic environments due to their advanced characteristics, and thus offer state-of-the-art support in terms of organising the otherwise disoriented network into an efficient and flexible hierarchical structure, classifying the routing fitness of participating devices, and therefore allowing intelligent routing decisions to be taken on that basis.
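A minimal sketch of the metric-driven idea is shown below: each candidate device is scored from a few routing-relevant metrics and the fittest devices are preferred as relays. The metric names, normalisation and weights are illustrative assumptions, not MARIAN's actual fitness classification.

```python
def routing_fitness(node, weights=(0.4, 0.3, 0.2, 0.1)):
    """Weighted score of a device's suitability as a routing element."""
    w_bw, w_energy, w_cpu, w_stability = weights
    return (w_bw * node["bandwidth_mbps"] / 54.0 +       # normalised to an assumed peak
            w_energy * node["battery_fraction"] +
            w_cpu * (1.0 - node["cpu_load"]) +
            w_stability * node["link_stability"])

candidates = [
    {"id": "laptop", "bandwidth_mbps": 54, "battery_fraction": 0.9,
     "cpu_load": 0.2, "link_stability": 0.8},
    {"id": "pda", "bandwidth_mbps": 11, "battery_fraction": 0.3,
     "cpu_load": 0.6, "link_stability": 0.5},
]
best_relays = sorted(candidates, key=routing_fitness, reverse=True)
print([n["id"] for n in best_relays])   # the laptop outranks the PDA as a relay
```

In an agent-based design, stationary or mobile agents would gather these metrics from participating devices and maintain the resulting hierarchy as the topology changes.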

An approach to cross-domain situation-based context management and highly adaptive services in pervasive environments

Jaroucheh, Zakwan January 2012 (has links)
The concept of context-awareness is widely used in mobile and pervasive computing to reduce explicit user input and customization through the increased use of implicit input. It is considered to be the cornerstone technique for developing pervasive computing applications that are flexible, adaptable, and capable of acting autonomously on behalf of the user. This requires the applications to take advantage of the context in order to infer the user's objective and the relevant environmental features. However, context-awareness introduces various software engineering challenges, such as the need to provide developers with middleware infrastructure to acquire the context information available in distributed domains, to reason about contextual situations that span one or more domains, and to provide tools that facilitate building context-aware adaptive services. The separation of concerns is a promising approach in the design of such applications, where the core logic is designed and implemented separately from the context handling and adaptation logic. In this respect, the aim of this dissertation is to introduce a unified approach for developing such applications, together with a software infrastructure for efficient context management, that addresses these software engineering challenges and facilitates the design and implementation tasks associated with such context-aware services. The approach is based around a set of new conceptual foundations, including a context modelling technique that describes context at different levels of abstraction, a domain-based context management middleware architecture, cross-domain contextual situation recognition, and a generative mechanism for context-aware service adaptation. A prototype tool has been built as an implementation of the proposed unified approach, and case studies have been conducted to illustrate and evaluate the approach in terms of its effectiveness and applicability in real-life application scenarios to provide users with personalized services.
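As a small illustration of the separation-of-concerns idea, the sketch below keeps the core delivery logic fixed while situation rules, recognised over context gathered from several domains, select the adaptation. The domain names, attributes and rules are hypothetical and not drawn from the dissertation's case studies.

```python
# Context collected from several domains (values are illustrative).
context = {
    "home":   {"location": "living_room", "noise_level": "low"},
    "office": {"calendar": "meeting_in_progress"},
    "device": {"battery": 0.15, "network": "3g"},
}

# Cross-domain situation rules, kept separate from the core service logic.
situations = {
    "in_meeting": lambda c: c["office"]["calendar"] == "meeting_in_progress",
    "low_resources": lambda c: c["device"]["battery"] < 0.2
                               or c["device"]["network"] != "wifi",
}

def recognise(ctx):
    return {name for name, rule in situations.items() if rule(ctx)}

def deliver_notification(message, active_situations):
    # Core logic is unchanged; only the delivery adaptation varies by situation.
    if "in_meeting" in active_situations:
        return f"queued: {message}"
    if "low_resources" in active_situations:
        return f"text-only: {message}"
    return f"rich notification: {message}"

print(deliver_notification("Build finished", recognise(context)))
```

Adding a new situation or adaptation only touches the rule table, which is the kind of decoupling the proposed middleware and generative adaptation mechanism aim to support at scale.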
