341

A User preference-based Matchmaking Approach for Services Discovery in B2B e-commerce Applications

Obwoge, Justus January 2009 (has links)
No description available.
342

Quality of service support for service discovery and selection in service oriented computing environment

Deora, Vikas January 2007 (has links)
Service oriented computing (SOC) represents a new generation of web architecture. Central to SOC is the notion of services, which are self-contained, self-describing, modular applications that can be published, located, and invoked across the Internet. The services represent capability, which can be anything from simple operations to complicated business processes. This new architecture offers great potential for e-commerce applications, where software agents can automatically find and select the services that best serve a consumer's interests. Many techniques have been proposed for discovery and selection of services, most of which have been constructed without a formal Quality of Service (QoS) model or much regard to understanding the needs of consumers. This thesis aims to provide QoS support for the entire SOC life cycle, namely to: (i) extend current approaches to service discovery so that service providers can advertise their services in a format that supports quality specifications, and service consumers can request services by stating required quality levels, (ii) support matchmaking between advertised and requested services based on functional as well as quality requirements, and (iii) perform QoS assessment to support consumers in service selection. Many techniques exist for performing QoS assessment, most of which are based on collecting quality ratings from the users of a service. This thesis argues that collecting quality ratings alone from the users is not sufficient for deriving a reliable and accurate quality measure for a service. This is because different users often have different expectations and judgements on the quality of a service, and their ratings tend to be closely related to these expectations, i.e., how their expectations are met. The thesis proposes a new model for QoS assessment based on user expectations, which collects expectations as well as ratings from the users of a service and then calculates the QoS using only the ratings that were judged on similar expectations.
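The expectation-aware assessment described above can be illustrated with a short sketch. The 0..1 rating scale, the numeric expectation values and the tolerance-based similarity test are illustrative assumptions, not the thesis's own definitions:

```python
from statistics import mean

def assess_qos(feedback, consumer_expectation, tolerance=0.1):
    """Estimate the QoS of a service from (expectation, rating) pairs,
    using only ratings given under expectations similar to the consumer's
    own expectation (both on a hypothetical normalised 0..1 scale)."""
    similar = [rating for expectation, rating in feedback
               if abs(expectation - consumer_expectation) <= tolerance]
    # Fall back to all ratings if no previous user had a comparable expectation.
    return mean(similar) if similar else mean(r for _, r in feedback)

# Two hypothetical user groups with different expectations rate the same service differently.
feedback = [(0.9, 0.4), (0.85, 0.5),   # demanding users, low ratings
            (0.3, 0.9), (0.35, 0.8)]   # modest expectations, high ratings
print(assess_qos(feedback, consumer_expectation=0.9))   # ~0.45
print(assess_qos(feedback, consumer_expectation=0.3))   # ~0.85
```

The point of the filter is that the same service yields very different quality estimates depending on the expectations of the consumer asking.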
343

Development of tangible acoustic interfaces for human computer interaction

Ji, Ze January 2007 (has links)
Tangible interfaces, such as keyboards, mice, touch pads, and touch screens, are widely used in human computer interaction. A common disadvantage with these devices is the presence of mechanical or electronic devices at the point of interaction with the interface. The aim of this work has been to investigate and develop new tangible interfaces that can be adapted to virtually any surface, by acquiring and studying the acoustic vibrations produced by the interaction of the user's finger on the surface. Various approaches have been investigated in this work, including the popular time difference of arrival (TDOA) method, time-frequency analysis of dispersive velocities, the time reversal method, and continuous object tracking. The received signal due to a tap at a source position can be considered the impulse response function of the wave propagation between the source and the receiver. With the time reversal theory, the signals induced by impacts from one position contain the unique and consistent information that forms its signature. A pattern matching method, named Location Template Matching (LTM), has been developed to identify the signature of the received signals from different individual positions. Various experiments have been performed for different purposes, such as consistency testing, acquisition configuration, and accuracy of recognition. Eventually, this can be used to implement HCI applications on arbitrary surfaces, including those of 3D objects and inhomogeneous materials. The resolution with the LTM method has been studied in different experiments, investigating factors such as optimal sensor configurations and the limitation of materials. On plates of the same material, the thickness is the essential determinant of resolution. With the knowledge of resolution for one material, a simple but faster search method becomes feasible to reduce the computation. Multiple simultaneous impacts are also recognisable in certain cases. The TDOA method has also been evaluated with two conventional approaches. Taking into account the dispersive properties of the vibration propagation in plates, time-frequency analysis, with continuous wavelet transformation, has been employed for the accurate localising of dispersive signals. In addition, a statistical estimation of maximum likelihood has been developed to improve the accuracy and reliability of acoustic localisation. A method to measure and verify the dispersive velocities has also been introduced. To enable the commonly required "drag & drop" function in the operation of graphical user interface (GUI) software, the tracking of a finger scratching on a surface needs to be implemented. To minimise the tracking error, a priori knowledge of previous measurements of source locations is needed to linearise the state model that enables prediction of the location of the contact point and the direction of movement. An adaptive Kalman filter has been used for this purpose.
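As a rough illustration of the LTM idea, comparing a received tap signal against stored per-position signatures, here is a minimal sketch. Normalised correlation is used as the similarity measure and all names and data are assumptions for illustration; the thesis's exact matching criterion may differ:

```python
import numpy as np

def ltm_locate(signal, templates):
    """Location Template Matching sketch: compare a received tap signal
    against stored per-position signature templates and return the best
    matching position."""
    def similarity(a, b):
        n = min(len(a), len(b))
        a = (a[:n] - a[:n].mean()) / (a[:n].std() + 1e-12)
        b = (b[:n] - b[:n].mean()) / (b[:n].std() + 1e-12)
        return float(np.dot(a, b) / n)   # normalised correlation at zero lag

    return max(templates, key=lambda pos: similarity(signal, templates[pos]))

# Hypothetical usage: signature templates recorded by tapping known positions.
rng = np.random.default_rng(0)
templates = {"A": rng.standard_normal(256), "B": rng.standard_normal(256)}
tap = templates["B"] + 0.2 * rng.standard_normal(256)   # noisy repeat of position B
print(ltm_locate(tap, templates))                        # -> "B"
```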
344

Context of processes : achieving thorough documentation in provenance systems through context awareness

Wooten, Ian January 2009 (has links)
To fully understand real world processes, having evidence which is as comprehensive as possible is essential. Comprehensive evidence enables the reviewer to have some confidence that they are aware of the nuances of a past scenario and can act appropriately upon them in the future. There are examples of this throughout everyday life: the outcome of a court case could be affected by the available evidence, or an antique could be considered more valuable if certain facts about its history are known. Similarly, in computer systems, evidence of processes allows users to make more informed decisions than if it were not captured. Where computer-based experimentation has enabled scientists to perform complicated experiments quickly and with ease, understanding the precise circumstances of the process which created a particular set of results is important. Significant recent research has sought to address the problem of understanding the provenance of a data item, that is, the process which led to that data item. Increasingly, these experiments are being performed using systems which are distributed, large scale and open. Comprehensive evidence in these environments is achieved when both documentation of the actions performed and the circumstances in which they occur are captured. Therefore, in order for a user to achieve confidence in results, we argue for the importance of documenting the context of a process. This thesis addresses the problem of how context may be suitably modeled, captured and queried to later answer questions concerning data origin. We begin by defining context as any information describing a scenario which has some bearing on a process's outcome. Based on a number of use cases from a Functional Magnetic Resonance Imaging (fMRI) workflow, we present a model for the representation of context. Our model treats each actor in a process as capable of progressing through a finite number of states as they perform actions. We show that each state can be encoded using a set of monitored variables from an actor's host. Each transition between states is therefore a series of variable changes, and this model is shown to be capable of measuring similarity of context when comparing multiple executions of the same process. It also allows us to consider future state changes for actors based on their past execution. We evaluate our approach through the use of our own context capture system, which allows common monitoring tools to be used as indicators of state change and records context transparently from stakeholders. Our experimental findings suggest that our approach is acceptable both in terms of performance (with an overhead of 4-8% against a non-context-capturing approach) and in use case satisfaction.
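A minimal sketch of that state-transition view of context follows. The monitored variable names and the Jaccard-based similarity measure are illustrative assumptions, not the thesis's own definitions:

```python
def transition_signature(states):
    """Encode an actor's execution as the sequence of variable changes
    between successive monitored states (each state is a mapping of
    monitored variable -> value read from the actor's host)."""
    changes = []
    for prev, curr in zip(states, states[1:]):
        changes.append(frozenset(v for v in curr if curr.get(v) != prev.get(v)))
    return changes

def context_similarity(run_a, run_b):
    """Compare two executions of the same process by averaging the Jaccard
    overlap of the variables that changed at each corresponding step."""
    pairs = list(zip(transition_signature(run_a), transition_signature(run_b)))
    if not pairs:
        return 1.0
    scores = [len(a & b) / len(a | b) if (a | b) else 1.0 for a, b in pairs]
    return sum(scores) / len(scores)

# Hypothetical monitored variables for two runs of the same workflow step.
run1 = [{"cpu": 10, "mem": 200}, {"cpu": 85, "mem": 950}, {"cpu": 12, "mem": 210}]
run2 = [{"cpu": 11, "mem": 190}, {"cpu": 80, "mem": 940}, {"cpu": 80, "mem": 230}]
print(context_similarity(run1, run2))   # 0.75: the runs differ in the final transition
```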
345

Equivalence semantics for concurrency : comparison and application

Galpin, Vashti C. January 1998 (has links)
Since the development of CCS and other process algebras, many extensions to these process algebras have been proposed to model different aspects of concurrent computation. It is important both theoretically and practically to understand the relationships between these process algebras and between the semantic equivalences that are defined for them. In this thesis, I investigate the comparison of semantic equivalences based on bisimulation which are defined for process algebras whose behaviours are described by structured operational semantics, and expressed as labelled transition systems. I first consider a hierarchy of bisimulations for extensions to CCS, using both existing and new results to describe the relationships between their equivalences with respect to pure CCS terms. I then consider a more general approach to comparison by investigating labelled transition systems with structured labels. I define bisimulation homomorphisms between labelled transition systems with different labels, and show how these can be used to compare equivalences. Next, I work in the meta-theory of process algebras and consider a new format that is an extension of the tyft/tyxt format for transition system specifications. This format treats labels syntactically instead of schematically, and hence I use a definition of bisimulation which requires equivalence between labels instead of exact matching. I show that standard results such as congruence and conservative extension hold for the new format. I then investigate how comparison of equivalences can be approached through the notion of extension to transition system specifications. This leads to the main results of this study which show how in a very general fashion the bisimulations defined for two different process algebras can be compared over a subset of terms of the process algebras. I also consider what implications the conditions which are required to obtain these results have for modelling process algebras, and show that these conditions do not impose significant limitations. Finally, I show how these results can be applied to existing process algebras. I model a number of process algebras with the extended format and derive new results from the meta-theory developed.
346

The parameterized complexity of degree constrained editing problems

Mathieson, Luke January 2009 (has links)
This thesis examines degree constrained editing problems within the framework of parameterized complexity. A degree constrained editing problem takes as input a graph and a set of constraints and asks whether the graph can be altered in at most k editing steps such that the degrees of the remaining vertices are within the given constraints. Parameterized complexity gives a framework for examining problems that are traditionally considered intractable and developing efficient exact algorithms for them, or showing that it is unlikely that they have such algorithms, by introducing an additional component to the input, the parameter, which gives additional information about the structure of the problem. If the problem has an algorithm that is exponential in the parameter, but polynomial, with constant degree, in the size of the input, then it is considered to be fixed-parameter tractable. Parameterized complexity also provides an intractability framework for identifying problems that are unlikely to have such an algorithm. Degree constrained editing problems provide natural parameterizations in terms of the total cost k of vertex deletions, edge deletions and edge additions allowed, and the upper bound r on the degree of the vertices remaining after editing. We define a class of degree constrained editing problems, WDCE, which generalises several well-known problems, such as Degree r Deletion, Cubic Subgraph, r-Regular Subgraph, f-Factor and General Factor. We show that in general if both k and r are part of the parameter, problems in the WDCE class are fixed-parameter tractable, and if parameterized by k or r alone, the problems are intractable in a parameterized sense. We further show cases of WDCE that have polynomial time kernelizations, and in particular when all the degree constraints are a single number and the editing operations include vertex deletion and edge deletion we show that there is a kernel with at most O(kr(k + r)) vertices. If we allow vertex deletion and edge addition, we show that despite remaining fixed-parameter tractable when parameterized by k and r together, the problems are unlikely to have polynomial-sized kernelizations, or polynomial time kernelizations of a certain form, under certain complexity-theoretic assumptions. We also examine a more general case where, given an input graph, the question is whether the graph can be made r-degenerate with at most k deletions. We show that in this case the problems are intractable, even when r is a constant.
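The flavour of a fixed-parameter algorithm for such problems can be sketched for the simplest case. The sketch below handles only vertex deletion with a uniform degree upper bound r; it is a bounded search tree illustration, not the WDCE algorithm itself, and all names are hypothetical:

```python
def degree_deletion(adj, k, r):
    """Can at most k vertices be deleted so that every remaining vertex has
    degree at most r?  If a vertex v has degree > r, then either v is deleted
    or at least one of any r + 1 of its neighbours is, giving branching factor
    r + 2 and depth k, i.e. a running time exponential only in the parameters."""
    over = next((v for v, nbrs in adj.items() if len(nbrs) > r), None)
    if over is None:
        return True            # every remaining vertex already satisfies the bound
    if k == 0:
        return False           # constraint violated but no deletions left
    for v in [over] + sorted(adj[over])[: r + 1]:
        reduced = {u: nbrs - {v} for u, nbrs in adj.items() if u != v}
        if degree_deletion(reduced, k - 1, r):
            return True
    return False

# Hypothetical example: a star with four leaves needs one deletion for r = 1.
star = {"c": {"a", "b", "d", "e"}, "a": {"c"}, "b": {"c"}, "d": {"c"}, "e": {"c"}}
print(degree_deletion(star, k=1, r=1))   # True (delete the centre vertex)
```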
347

The integrity of serial data highway systems

Cowan, D. January 1983 (has links)
The Admiralty Surface Weapons Establishment (ASWE) has developed a Local Area Network System. This thesis describes the development of a replacement for this LAN system, based around 16-bit microprocessor hosts, as opposed to the minicomputers currently used. This change gave a substantial reduction in size, and allowed the new system to be installed on a ship and tested under operational conditions. Analysis of the data collected during the tests gave performance information on the ASWE system. The performance of this LAN is compared to that of other leading types of LAN. The design of a portable network controller/monitor unit is presented, which may be manufactured as a standard controller for the ASWE Serial Highway.
348

Dynamic integration of evolving distributed databases using services

Weng, Bin January 2010 (has links)
This thesis investigates the integration of many separate existing heterogeneous and distributed databases which, due to organizational changes, must be merged and appear as one database. A solution to some database evolution problems is presented. It presents an Evolution Adaptive Service-Oriented Data Integration Architecture (EA-SODIA) to dynamically integrate heterogeneous and distributed source databases, aiming to minimize the maintenance cost caused by database evolution. An algorithm, named Relational Schema Mapping by Views (RSMV), is designed to integrate source databases that are exposed as services into a pre-designed global schema that resides in a data integrator service. Instead of producing hard-coded programs, views are built using relational algebra operations to eliminate the heterogeneities among the source databases. More importantly, the definitions of those views are represented and stored in the meta-database with some constraints to test their validity. Consequently, the method, called Evolution Detection, is able to identify in the meta-database the views affected by evolutions and then modify them automatically. An evaluation is presented using a case study. Firstly, it is shown that most types of heterogeneity defined in this thesis can be eliminated by RSMV, except semantic conflicts. Secondly, it shows that little manual modification of the system is required as long as the evolutions follow the rules; human intervention is required for only three types of database evolution, and in those cases some existing views are discarded. Thirdly, the computational cost of the automatic modification shows a slow linear growth in the number of source databases. Other characteristics addressed include EA-SODIA's scalability, domain independence, autonomy of source databases, and the potential to involve other data sources (e.g. XML). Finally, a descriptive comparison with other data integration approaches is presented. It shows that although other approaches may provide better query processing performance in some circumstances, the service-oriented architecture provides better autonomy, flexibility and capability for evolution.
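The idea of keeping view definitions as inspectable data, rather than hard-coded queries, can be sketched roughly as follows. This is not the RSMV algorithm itself, and every name, schema and key below is a hypothetical stand-in:

```python
# A view over the global schema is kept as a declarative definition, so an
# evolution-detection step can inspect which views reference a changed
# source relation and rebuild only those.
view_def = {
    "name": "global_customer",
    "sources": ["crm_service.customers", "billing_service.accounts"],
    "join_on": ("customer_id", "cust_id"),
    "project": ["customer_id", "name", "balance"],
}

def materialise(view, fetch):
    """Evaluate a view definition with relational-algebra style operations
    (join on the given keys, then projection) over rows fetched from the
    source database services."""
    left, right = (fetch(s) for s in view["sources"])
    lkey, rkey = view["join_on"]
    joined = [{**l, **r} for l in left for r in right if l[lkey] == r[rkey]]
    return [{col: row[col] for col in view["project"]} for row in joined]

# In-memory stand-ins for the two source services.
data = {
    "crm_service.customers": [{"customer_id": 1, "name": "Ada"}],
    "billing_service.accounts": [{"cust_id": 1, "balance": 42.0}],
}
print(materialise(view_def, data.get))   # [{'customer_id': 1, 'name': 'Ada', 'balance': 42.0}]
```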
349

Randomised load balancing

Nagel, Lars January 2011 (has links)
Due to the increased use of parallel processing in networks and multi-core architectures, it is important to have load balancing strategies that are highly efficient and adaptable to specific requirements. Randomised protocols in particular are useful in situations in which it is costly to gather and update information about the load distribution (e.g. in networks). For the mathematical analysis, randomised load balancing schemes are modelled by balls-into-bins games, where balls represent tasks and bins represent computers. If m balls are allocated to n bins and every ball chooses one bin at random, the gap between maximum and average load is known to grow with the number of balls m. Surprisingly, this is not the case in the multiple-choice process in which each ball chooses d > 1 bins and allocates itself to the least loaded of them. Berenbrink et al. proved that in this case the gap remains ln ln(n) / ln(d). This thesis analyses generalisations and variations of the multiple-choice process. For a scenario in which batches of balls are allocated in parallel, it is shown that the gap between maximum and average load is still independent of m. Furthermore, we look into a process in which only predetermined subsets of bins can be chosen by a ball. Assuming that the number and composition of the subsets can change with every ball, we examine under which circumstances the maximum load is one. Finally, we consider a generalisation of the basic process allowing the bins to have different capacities. By adapting the probabilities of the bins, it is shown how the load can be balanced over the bins according to their capacities.
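A minimal simulation of the single-choice versus multiple-choice process described above; the parameter values are illustrative:

```python
import random

def load_gap(n_balls, n_bins, d, seed=0):
    """Allocate n_balls into n_bins; each ball samples d bins uniformly at
    random and joins the least loaded of them (d = 1 is the single-choice
    process).  Returns the gap between maximum and average load."""
    rng = random.Random(seed)
    load = [0] * n_bins
    for _ in range(n_balls):
        chosen = min(rng.sample(range(n_bins), d), key=load.__getitem__)
        load[chosen] += 1
    return max(load) - n_balls / n_bins

# With d = 1 the gap grows with the number of balls m; with d = 2 it stays
# small, in line with the ln ln(n) / ln(d) bound.
for d in (1, 2):
    print(d, load_gap(n_balls=100_000, n_bins=100, d=d))
```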
350

Fighting Internet fraud : anti-phishing effectiveness for phishing websites detection

Alnajim, Abdullah M. January 2009 (has links)
Recently, the Internet has become a very important medium of communication. Many people go online and conduct a wide range of business. They can sell and buy goods, perform different banking activities and even participate in political and social elections by casting a vote online. The parties involved in any transaction never need to meet, and a buyer can sometimes be dealing with a fraudulent business that does not actually exist. So, security for conducting business online is vital and critical. All security-critical applications (e.g. online banking login pages) that are accessed using the Internet are at risk of fraud. A common risk comes from so-called Phishing websites, which have become a problem for online banking and e-commerce users. Phishing websites attempt to trick people into revealing their sensitive personal and security information in order for the fraudster to access their accounts. They use websites that look similar to those of legitimate organizations and exploit the end-user's lack of knowledge of web browser clues and security indicators. This thesis addresses the effectiveness of Phishing website detection. It reviews existing anti-Phishing approaches and then makes the following contributions. First of all, the research in this thesis evaluates the effectiveness of the most common current users' tips for detecting Phishing websites. A novel effectiveness criterion is proposed and used to examine every tip and rank it based on its effectiveness score, thus revealing the most effective tips for enabling users to detect Phishing attacks. The most effective tips can then be used by anti-Phishing training approaches. Secondly, this thesis proposes a novel Anti-Phishing Approach that uses Training Intervention for Phishing Websites' Detection (APTIPWD) and shows that it can be easily implemented. Thirdly, the effectiveness of the New Approach (APTIPWD) is evaluated using a set of user experiments showing that it is more effective in helping users distinguish between legitimate and Phishing websites than the Old Approach of sending anti-Phishing tips by email. The experiments also address the effects of technical ability and Phishing knowledge on Phishing website detection. The results of the investigation show that technical ability has no effect whereas Phishing knowledge has a positive effect on Phishing website detection. Thus, there is a need to ensure that, regardless of their technical ability level (expert or non-expert), the participants do not know about Phishing before they evaluate the effectiveness of a new anti-Phishing approach. This thesis then evaluates the anti-Phishing knowledge retention of the New Approach users and compares it with the knowledge retention of users who are sent anti-Phishing tips by email.
