1. Middleware to support accountability of business to business interactions
Mortimer, Derek John (January 2013)
Enabling technologies have driven standardisation efforts specifying B2B interactions between organisations, including the information to be exchanged and its associated business-level requirements. These interactions are encoded as conversations which organisations agree to and execute. It is pivotal to continued cooperation that the regulation of these interactions be supported; minimally, that all actions taken are accountable and that no participant who remains compliant is placed at a disadvantage. Technical protocols exist to support regulation (e.g., to provide fairness and accountability). However, such protocols incur expertise, infrastructure and integration requirements, possibly diverting an organisation's attention from fulfilling its obligations to the interactions in which it is involved. Guarantees provided by these protocols can be paired with functional properties, declaratively describing the support they provide. By encapsulating properties and protocols in intermediaries through which messages are routed, expertise, infrastructure and integration requirements can be lifted from interacting organisations while their interactions are transparently provided with additional support. Previous work focused on supporting individual issues without tackling concerns of asynchronicity, transparency and loose coupling. This thesis builds on previous work by designing generalised intermediary middleware capable of intercepting messages and transparently satisfying supportive properties. By enforcing loose coupling and transparency, all interactions may be provided with additional support without modification, independent of the higher-level (i.e., B2B) standards in use, and existing work may be expressed as instances of the proposed generalised design. This support will be provided at lower levels, justified by a survey of B2B and messaging standards. Proof-of-concept implementations will demonstrate the suitability of the approach. The work will demonstrate that providing transparent, decoupled support at lower levels of abstraction is useful and can be applied to domains beyond B2B and message-oriented interactions.
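For illustration, a minimal sketch of the intermediary idea the abstract describes: messages between organisations are routed through middleware that transparently applies supportive properties (here, an accountability log), so neither party has to change its own code. The class names, property interface and transport are hypothetical, not the thesis's actual design.

```python
# Hypothetical sketch: an intermediary routes B2B messages and applies
# supportive properties transparently. Names are illustrative only.
import json
import time

class Intermediary:
    def __init__(self, deliver, properties):
        self.deliver = deliver          # underlying transport callable
        self.properties = properties    # property handlers applied per message

    def send(self, sender, receiver, payload):
        msg = {"from": sender, "to": receiver,
               "payload": payload, "ts": time.time()}
        for prop in self.properties:    # each property sees every message
            prop(msg)
        return self.deliver(msg)

def accountability_log(msg):
    # Append-only record, so neither party can later deny the exchange.
    with open("b2b_audit.log", "a") as f:
        f.write(json.dumps(msg) + "\n")

mw = Intermediary(deliver=lambda m: True, properties=[accountability_log])
mw.send("OrgA", "OrgB", {"order": 42})
```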

2. Towards open services on the Web: a semantic approach
Maleshkova, Maria (January 2015)
The World Wide Web (WWW) has significantly evolved since it was first released as a publicly available service on the Internet, developing from a collection of a few interlinked static pages to a global ubiquitous platform for sharing, searching and browsing dynamic and customisable content, in a variety of different media formats. It plays a major role in the lives of individuals, as a source of information, knowledge and entertainment, as well as in the way business and communication are done within and between companies. This transformation was triggered by the ever-growing number of users and websites, and continues to be supported by current developments such as the Social Web, Linked Data, and Web APIs and Services, which together pave the way for the Web as a dynamic data environment. The work presented in this thesis aims to contribute to a more integrated Web, where services, data and Web content can be seamlessly combined and interlinked, without having to deal with the intricacies of the separate data sources or the specific technology implementations. The vision of Open Services on the Web aims to facilitate the unified use of Web APIs, Web Services and Linked Data sources, so that users can retrieve data without differentiating whether its source is a website, a Web API or even a mashup. However, before this can be achieved, a number of problems need to be addressed. In particular, the integrated and unified handling of services, and especially Web APIs, is very challenging because of the heterogeneous landscape of implementation approaches, underlying technologies and forms of documentation. In the context of Web APIs, the main limitations stem from the fact that documentation is currently provided directly in HTML, as part of a webpage, which, in contrast to XML for example, is not meant for automated machine processing of the service properties. This situation is aggravated by the fact that Web APIs are proliferating quite autonomously, without adhering to particular guidelines or specifications. The result is a wide variety of description forms and structures, accompanied by a range of diverse underlying technologies, forcing developers to interpret the documentation individually and to carry out complicated and tedious development work, producing custom solutions that are rarely reusable and offer very little support for interoperation. We contribute towards achieving the vision of Open Services on the Web by tackling some of these challenges and supporting the wider, integrated and more automated use of Web APIs. In particular, we present a thorough analysis of the current state of Web APIs, giving the results of two Web API surveys. We use the collected data to draw conclusions about current practices and trends, and common API characteristics. The results provide essential input for acquiring a real-world view on Web APIs, for identifying key service properties, for determining best practices, for pointing out difficulties and implementation challenges, and for deducing a baseline for the support that any solution approach needs to provide. The details gathered in this way are used to develop a shared formal model for describing, modelling and annotating Web APIs, which serves as the basis for decreasing the manual effort involved in completing common service tasks, and provides a unifying overlay on top of the heterogeneous API landscape. This shared model, the Core Service Model, captures all essential API characteristics, thus providing common ground for developing support solutions in the context of using Web APIs; it also enables a unified view over traditional Web services and APIs, facilitating their interoperable handling and enabling the reuse of existing Web service approaches and solutions.

3. Analyzing and pruning ensembles utilizing bias and variance theory
Zor, Cemre (January 2014)
Ensemble methods are widely preferred over single classifiers due to the advantages they offer in terms of accuracy, complexity and flexibility. In this doctoral study, the aim is to understand and analyze ensembles while offering new design and pruning techniques. Bias-variance frameworks have been used as the main means of analysis, and Error Correcting Output Coding (ECOC) has been studied as a case study within each chapter. ECOC is a powerful multiclass ensemble classification technique, in which multiple two-class base classifiers are trained using relabeled sets of the multiclass training data. The relabeling information is obtained from a preset code matrix. The main idea behind this procedure is to solve the original multiclass problem by combining the decision boundaries obtained from simpler two-class decompositions. While ECOC is one of the best solutions to multiclass problems, it is still suboptimal. In this thesis, we initially present two algorithms that iteratively update the ECOC framework to improve performance without the need for re-training. As a second step, in order to explain the underlying reasons behind the improved performance of ensembles and to give hints on their design, we use bias and variance analysis. The ECOC framework is theoretically analyzed using the Tumer and Ghosh (T&G) bias-variance model, and its performance is linked to that of its base classifiers. Accordingly, design hints for ECOC are proposed. Moreover, the definition of James is used in experiments to explain the reasoning behind the success of ECOC compared to single multiclass classifiers and bagging ensembles. Furthermore, for bias-variance analysis, we establish the missing links between some of the popular theories in the literature (those of Geman, T&G and James) by providing closed-form solutions. The final contribution of this thesis is on ensemble pruning. In order to increase efficiency and decrease computational and storage costs without sacrificing, and preferably while enhancing, the generalization performance, two novel pruning algorithms for bagging and ECOC ensembles are proposed. The proposed methods, which are shown to achieve results better than the state of the art, are theoretically and experimentally analyzed. The analysis is likewise grounded in bias and variance theory.
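As an illustration of the ECOC procedure the abstract describes, here is a minimal sketch: a preset {-1, +1} code matrix relabels the multiclass training data for each binary learner, and prediction decodes by nearest codeword. The choice of scikit-learn's LogisticRegression as the base classifier is arbitrary, and the thesis's iterative-update and pruning algorithms are not shown.

```python
# Minimal ECOC sketch: relabel via a preset code matrix, decode by
# Hamming distance to the class codewords.
import numpy as np
from sklearn.linear_model import LogisticRegression

def ecoc_train(X, y, code_matrix):
    """y: integer class labels; code_matrix[c, j] is the +/-1 label
    that class c receives for binary learner j."""
    learners = []
    for j in range(code_matrix.shape[1]):
        relabeled = code_matrix[y, j]      # two-class view of the data
        learners.append(LogisticRegression().fit(X, relabeled))
    return learners

def ecoc_predict(X, learners, code_matrix):
    # Each row of `outputs` is the codeword the ensemble predicts.
    outputs = np.column_stack([clf.predict(X) for clf in learners])
    # Hamming distance to every class codeword; pick the closest.
    dists = (outputs[:, None, :] != code_matrix[None, :, :]).sum(axis=2)
    return dists.argmin(axis=1)
```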

4. A partial syntactic analysis-based pre-processor for automatic indexing and retrieval of Chinese texts
Wu, Zimin (January 1992)
Automatic indexing is the automatic creation of a text surrogate, normally keywords or phrases, to represent the original text. In current English text retrieval systems, this process of content representation is accomplished by extracting words using spaces and punctuation marks as word delimiters. The same technique cannot easily be applied to Chinese texts, which contain no obvious word boundaries; they appear as a linear sequence of non-spaced or equally spaced ideographic characters, and the number of characters in words varies. The solution to the problem lies in morphological and syntactic analyses of Chinese morphemes, words and phrases. The idea is inspired by experiments on English computational morphology and its application to English text retrieval, mainly automatic compound and phrase indexing. These areas are particularly germane to Chinese because typographically there are no morph or phrase boundaries in either Chinese or English texts. The experiment is based on the hypothesis that words and phrases exceeding two Chinese characters can be characterised by a grammar that describes the concatenation behaviour of morphological and syntactic categories. This is examined using the following three procedures: (1) text segmentation - texts are divided into one- and two-character segments by searching a dictionary containing over 17,000 morphemes and words, which are tagged with morphological and syntactic categories; (2) category disambiguation - for the resulting morphemes and words tagged with more than one category, the correct one is selected based on context; (3) parsing - the segments are analysed using the grammar, which combines them into compound and complex words and phrases for indexing and retrieval. The utilities employed in the experiment include CCDOS (an extended version of MS-DOS providing a Chinese I/O system), Chinese WordStar for text input and Chinese dBASE III for dictionary construction. Source code is written in Turbo BASIC, including its database toolbox. Thirty texts are drawn randomly from newspapers to form the sample for the experiment. The results show that the partial syntactic analysis-based approach can extract keywords with a good degree of accuracy.
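A toy sketch of the segmentation step (procedure 1): greedy longest match against a tagged dictionary, with segments of at most two characters as in the thesis. The dictionary entries and tags are invented for illustration, and the disambiguation and parsing stages are omitted.

```python
# Toy dictionary-based segmentation: prefer the longest dictionary
# match (at most two characters), falling back to single characters.
DICTIONARY = {
    "中": {"noun"}, "国": {"noun"}, "中国": {"noun"},
    "人": {"noun"}, "民": {"noun"}, "人民": {"noun"},
}

def segment(text, max_len=2):
    segments, i = [], 0
    while i < len(text):
        for length in range(min(max_len, len(text) - i), 0, -1):
            candidate = text[i:i + length]
            if length == 1 or candidate in DICTIONARY:
                tags = DICTIONARY.get(candidate, set())
                segments.append((candidate, tags))  # segment + categories
                i += length
                break
    return segments

print(segment("中国人民"))  # [('中国', {'noun'}), ('人民', {'noun'})]
```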

5. Techniques for the development of time-constraint telemetric data processing system
Sidyakin, Ivan Mikhailovich (January 2006)
No description available.

6. Perceptually relevant browsing environments for large texture databases
Halley, Fraser (January 2012)
This thesis describes the development of a large database of texture stimuli, the production of a similarity matrix reflecting human judgements of similarity about the database, and the development of three browsing models that exploit structure in the perceptual information for navigation. Rigorous psychophysical comparison experiments are carried out, and the SOM (Self-Organising Map) is found to be the fastest of the three browsing models under examination. We investigate scalable methods of augmenting a similarity matrix using the SOM browsing environment to introduce previously unknown textures. Further psychophysical experiments reveal that our method produces a data organisation that is as fast to navigate as that derived from the perceptual grouping experiments.
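For context, a compact sketch of the Self-Organising Map training loop that underlies the fastest browsing model. The grid size, decay schedule and random data are placeholders; the thesis's perceptual similarity input and browsing interface are not reproduced.

```python
# Minimal SOM: each grid cell's weight vector comes to represent a
# cluster of similar feature vectors (e.g. texture descriptors).
import numpy as np

def train_som(data, grid=(10, 10), epochs=20, lr0=0.5, sigma0=3.0):
    rng = np.random.default_rng(0)
    h, w = grid
    weights = rng.random((h, w, data.shape[1]))
    coords = np.dstack(np.meshgrid(np.arange(h), np.arange(w),
                                   indexing="ij"))
    total, step = epochs * len(data), 0
    for _ in range(epochs):
        for x in data[rng.permutation(len(data))]:
            lr = lr0 * (1 - step / total)              # decaying rate
            sigma = sigma0 * (1 - step / total) + 1e-3 # shrinking radius
            # Best-matching unit: cell whose weights are closest to x.
            bmu = np.unravel_index(
                np.argmin(((weights - x) ** 2).sum(axis=2)), (h, w))
            # Pull the BMU and its grid neighbours towards x.
            d2 = ((coords - np.array(bmu)) ** 2).sum(axis=2)
            influence = np.exp(-d2 / (2 * sigma ** 2))[..., None]
            weights += lr * influence * (x - weights)
            step += 1
    return weights

features = np.random.rand(200, 8)   # stand-in 8-D texture descriptors
som = train_som(features)
```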

7. Configuration evaluation and optimisation of technical systems
Murdoch, Tim (January 1993)
No description available.

8. Bloom maps for big data
Talbot, David (January 2010)
The ability to retrieve a value given a key is fundamental in computer science. Unfortunately as the a priori set from which keys are drawn grows in size, any exact data structure must use more space per key. This motivates our interest in approximate data structures. We consider the problem of succinctly encoding a map to support queries with bounded error when the distribution over values is known. We give a lower bound on the space required per key in terms of the entropy of the distribution over values and the error rate and present a generalization of the Bloom filter, the Bloom map, that achieves the lower bound up to a small constant factor. We then develop static and on-line approximation schemes for frequency data that use constant space per key to store frequencies with bounded relative error when these follow a power law. Our on-line construction has constant expected update complexity per observation and requires only a single pass over a data set. Finally we present a simple framework for using a priori knowledge to reduce the error rate of an approximate data structure with one-sided error. We evaluate the data structures proposed here empirically and use them to construct randomized language models that significantly reduce the space requirements of a state-of-the-art statistical machine translation system.
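To illustrate the flavour of the construction (not the thesis's exact encoding or space bounds), here is a sketch of a Bloom-filter-style map in which more probable values use fewer hash functions, mirroring the entropy-based space bound, and lookups try values in order of decreasing prior probability.

```python
# Sketch of a Bloom-map-like structure: one shared bit array; each
# (key, value) pair sets ~ -log2 p(value) hashed bits, so frequent
# values cost less space. Lookups may err with small probability.
import hashlib
import math

class BloomMap:
    def __init__(self, n_bits, value_probs):
        self.bits = bytearray(n_bits)   # one byte per bit, for clarity
        self.n_bits = n_bits
        self.k = {v: max(1, math.ceil(-math.log2(p)))
                  for v, p in value_probs.items()}
        # Query candidates in order of decreasing prior probability.
        self.order = sorted(value_probs, key=value_probs.get, reverse=True)

    def _positions(self, key, value):
        for i in range(self.k[value]):
            h = hashlib.sha256(f"{key}/{value}/{i}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.n_bits

    def put(self, key, value):
        for pos in self._positions(key, value):
            self.bits[pos] = 1

    def get(self, key):
        for value in self.order:
            if all(self.bits[pos] for pos in self._positions(key, value)):
                return value    # may be wrong with small probability
        return None

bm = BloomMap(4096, {"the": 0.5, "of": 0.3, "cat": 0.2})
bm.put("doc1", "the")
print(bm.get("doc1"))           # "the" (with bounded error)
```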

9. Network-aware big data processing
Rupprecht, Lukas (January 2017)
The scale-out approach of modern data-parallel frameworks such as Apache Flink or Apache Spark has enabled them to deal with large amounts of data. These applications are often deployed in large-scale data centres with many resources. However, as deployments and data continue to grow, more network communication is incurred during a data processing query. At the same time, data centre networks (DCNs) are becoming increasingly more complex in terms of the physical network topology, the variety of applications that are sharing the network, and the different requirements of these applications on the network. The high complexity of DCNs combined with the increased traffic demands of applications has made the network a bottleneck for query performance. In this thesis, we explore ways of making data-parallel frameworks network-aware, i.e. we combine specific knowledge about the application and the physical network to reduce query completion times. We identify three main types of traffic that occur during query processing and add network-awareness to each of them to optimise network usage. 1) Traffic reduction for aggregatable traffic exploits the physical network topology and the associativity and commutativity of aggregation queries to reduce traffic as early as possible. In-network aggregation trees utilise existing networking hardware and the tree topology of DCNs to partially aggregate and thereby reduce data as it flows through the network. 2) Traffic balancing for non-aggregatable traffic monitors the network throughput of an application and uses knowledge about the query to optimise the overall network utilisation. By dynamically changing the destinations of parts of the transferred data, network hotspots, which can occur when many applications share the network, can be avoided. 3) Traffic elimination for storage traffic gives control over data placement to the application instead of the distributed storage system. This allows the application to optimise where data is stored across the cluster based on application properties and thereby eliminate unnecessary network traffic.
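As a sketch of point 1 (in-network partial aggregation), the following combines already-reduced partial aggregates at each node of a tree, which is safe because the aggregation here (a count) is associative and commutative. The topology and records are invented, and the thesis's data-centre network integration is not shown.

```python
# Partial aggregation up a tree: each node reduces its own records
# plus its children's partial results, so less data crosses the
# upper links of the topology.
from collections import Counter

def aggregate_tree(node):
    """node = (local_records, [child_nodes]); records are (key, count)."""
    records, children = node
    partial = Counter(dict(records))
    for child in children:
        # Merging reduced partials is valid because the aggregation
        # function is associative and commutative.
        partial.update(aggregate_tree(child))
    return partial

leaf1 = ([("a", 3), ("b", 1)], [])
leaf2 = ([("a", 2)], [])
root = ([], [leaf1, leaf2])
print(aggregate_tree(root))     # Counter({'a': 5, 'b': 1})
```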

10. Understanding and improving navigation within electronic documents
Alexander, Jason (January 2009)
Electronic documents form an integral part of the modern computer age—virtually all personal computers have the ability to create, store and display their content. A connection to the Internet provides users with an almost endless source of documents, be they web pages, word-processor files or emails. However, the entire contents of an electronic document are often too large to be usefully presented on a user's screen at a single point in time. This issue is usually overcome by placing the content inside a scrolling environment. The view onto the document is then modified by directly adjusting a scrollbar or by employing tools such as the mousewheel or paging keys. Applications may also provide methods for adjusting the document's zoom and page layout. The scrollbar has seen widespread adoption, becoming the default tool used to visualise large information spaces. Despite its extensive deployment, researchers have little knowledge of how this and related navigation tools are used in an everyday work environment. A characterisation of users' actions would allow designers to identify common behaviours and areas of inefficiency as they strive to improve navigation techniques. To fill this knowledge gap, this thesis aims to understand and improve navigation within desktop-based electronic documents. This goal is achieved using a five-step process. First, the literature is used to explore document navigation tasks and the tools currently available to support electronic document navigation. Second, a software tool called AppMonitor, which logs users' navigation actions, was developed. Third, AppMonitor was deployed in a longitudinal study to characterise document navigation actions in Microsoft Word and Adobe Reader. Fourth, to complement this study, two task-centric observations of electronic document navigation were performed, to probe the reasons for navigation tool selection. Finally, the Footprints Scrollbar was developed to improve one common aspect of navigation—within-document revisitation. To begin, two areas of current knowledge in this domain are reviewed: paper and electronic document navigation, and electronic document navigation tools. The literature review produced five categories of document navigation tasks: 'overviewing and browsing', 'reading', 'annotating and writing', 'searching' and 'revisitation'. In a similar fashion, electronic document navigation tools were reviewed and divided into eight categories: core navigation tools (those commonly found in today's navigation systems), input devices, scrollbar augmentations, content-aware navigation aids, visualisations that provide multiple document views, indirect manipulation techniques, zooming tools and revisitation tools. The literature lacked evidence of an understanding of how these current document navigation tools are used. To aid the gathering of empirical data on tool use, the AppMonitor tool was developed. It records user actions in unmodified Windows applications—specifically, for this research, Microsoft Word and Adobe Reader. It logs low-level interactions such as "left mouse button pressed" and "Ctrl-f pressed" as well as high-level 'logical' actions such as menu selections and scrollbar manipulations. It requires no user input to perform these tasks, allowing study participants to continue with their everyday work. To collect data to form a characterisation of document navigation actions, 14 participants installed AppMonitor on their computers for 120 days.
This study found that users primarily employ the mousewheel, scrollbar thumb and paging keys for navigation. Further, many advanced navigation tools that are lauded for their efficiency, including bookmarks and search tools, are rarely used. The longitudinal study provided valuable insights into the use of navigation tools. To understand the reasons behind this tool use, two task-centric observations of electronic document navigation were conducted. The first asked participants to perform a series of specific navigation tasks while AppMonitor logged their actions. The second was performed as a series of interactive sessions, where users performed a particular task and were then probed on their tool choice. These two studies found that many users are not aware of the advanced navigation tools that could significantly improve their navigation efficiency. Finally, the characterisations highlighted within-document revisitation as a commonly performed task, with the current tools that support this action rarely used. To address this problem, the analysis, design and evaluation of the Footprints Scrollbar is presented. It places marks inside the scrollbar trough and provides shortcuts to aid users in returning to previously visited locations. The Footprints Scrollbar was significantly faster than, and subjectively preferred over, a standard scrollbar for revisitation tasks. To summarise, this thesis contributes a literature review of document navigation and electronic document navigation tools; the design and implementation of AppMonitor, a tool to monitor user actions in unmodified Windows applications; a longitudinal study describing the navigation actions users perform; two task-centric studies examining why actions are performed; and the Footprints Scrollbar, a tool to aid within-document revisitation tasks.
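A hypothetical sketch of the revisitation model behind such a scrollbar: positions where the user dwells are remembered, merged when close together, and capped at a few recent marks that a widget could draw in the trough. All parameters and names are invented for illustration, not taken from the thesis.

```python
# Hypothetical within-document revisitation history for a
# Footprints-style scrollbar.
import time

class FootprintHistory:
    """Track document positions (0.0-1.0) the user dwells on."""
    def __init__(self, max_marks=5, dwell_s=1.0, merge_dist=0.05):
        self.max_marks = max_marks      # marks shown in the trough
        self.dwell_s = dwell_s          # dwell time that counts as a visit
        self.merge_dist = merge_dist    # nearby positions merge into one
        self.marks = []                 # most recent first
        self._pos, self._since = 0.0, time.monotonic()

    def on_scroll(self, pos):
        now = time.monotonic()
        if now - self._since >= self.dwell_s:
            self._record(self._pos)     # previous position was a visit
        self._pos, self._since = pos, now

    def _record(self, pos):
        self.marks = [m for m in self.marks
                      if abs(m - pos) > self.merge_dist]
        self.marks.insert(0, pos)
        del self.marks[self.max_marks:]  # keep only the newest marks
```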
