21. Hyperreconfigurable architectures and the partition into hypercontexts problem. Lange, Sebastian; Middendorf, Martin. 29 January 2019.
Dynamically reconfigurable architectures or systems are able to reconfigure their function and/or structure to suit the changing needs of a computation during run time. The increasing flexibility of modern dynamically reconfigurable systems improves their adaptability to computational needs but also makes fast reconfiguration difficult because of the large amount of reconfiguration information that has to be transferred. However, even when a computation uses this flexibility, it will not use it all the time. Therefore, we propose to make the potential for reconfiguration itself reconfigurable. Such architectures are called hyperreconfigurable. Different models of hyperreconfigurable architectures are proposed in this paper. We also study a fundamental problem that emerges on such architectures: determining, for a given computation, when and how the potential for reconfiguration should be changed during run time so that the reconfiguration overhead is minimal. It is shown that the general problem is NP-hard, but fast polynomial-time algorithms are given for special types of hyperreconfigurable architectures. We define two example hyperreconfigurable architectures and illustrate the introduced concepts for corresponding application problems.
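The flavour of the optimization problem can be illustrated with a small sketch. Assume, as a simplification not taken from the paper, that a hypercontext covers a contiguous run of reconfiguration steps, that switching to a new hypercontext costs a fixed amount, and that every step inside a hypercontext pays a per-step cost equal to the largest reconfiguration demand that hypercontext must support. Under these assumptions the minimal-overhead partition can be found with a quadratic dynamic program; this is only an illustration of the kind of polynomial-time algorithm involved, not the authors' actual method:

```python
def min_reconfiguration_cost(demands, switch_cost):
    """Partition a sequence of per-step reconfiguration demands into
    contiguous hypercontexts.  Each hypercontext pays a fixed switch
    cost plus, for every step it covers, a per-step cost equal to the
    largest demand inside it (the potential it must provide).
    Returns the minimal total cost via an O(n^2) dynamic program."""
    n = len(demands)
    best = [0.0] * (n + 1)          # best[i] = min cost for first i steps
    for i in range(1, n + 1):
        best[i] = float('inf')
        seg_max = 0
        for j in range(i, 0, -1):   # candidate hypercontext covers steps j..i
            seg_max = max(seg_max, demands[j - 1])
            cost = best[j - 1] + switch_cost + seg_max * (i - j + 1)
            best[i] = min(best[i], cost)
    return best[n]
```

For the demand sequence `[1, 1, 5, 1, 1]` with switch cost 3, the program isolates the expensive step in its own hypercontext rather than paying its high potential across the whole run.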
22. Compatibility of Shelah and Stupp's and of Muchnik's iteration with fragments of monadic second order logic. Kuske, Dietrich. 29 January 2019.
We investigate the relation between the theory of the iterations in the sense of Shelah-Stupp and of Muchnik, respectively, and the theory of the base structure for several logics. These logics are obtained by restricting set quantification in monadic second order logic to certain subsets such as, e.g., finite sets, chains, and finite unions of chains. We show that these theories of the Shelah-Stupp iteration can be reduced to corresponding theories of the base structure. This fails for Muchnik's iteration.
23. MOMA - A Mapping-based Object Matching System. Thor, Andreas; Rahm, Erhard. 01 February 2019.
Object matching or object consolidation is a crucial task for data integration and data cleaning. It addresses the problem of identifying object instances in data sources that refer to the same real-world entity. We propose a flexible framework called MOMA for mapping-based object matching. It allows the construction of match workflows combining the results of several matcher algorithms on both attribute values and contextual information. The output of a match task is an instance-level mapping that supports information fusion in P2P data integration systems and can be re-used for other match tasks. MOMA further utilizes semantic mappings of different cardinalities and provides merge and compose operators for mapping combination. We propose and evaluate several strategies both for object matching between different sources and for duplicate identification within a single data source.
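To make the merge and compose operators concrete, here is a minimal sketch. The representation (triples of source id, target id, similarity score) and the scoring rules (product of scores for composition, maximum for merging) are assumptions chosen for illustration, not MOMA's actual operator semantics:

```python
def compose(map_ab, map_bc):
    """Compose instance-level mappings A->B and B->C into A->C.
    Each mapping is a list of (source_id, target_id, similarity);
    the composed similarity is the product of the two scores
    (an assumed semantics for this sketch)."""
    result = {}
    for a, b1, s1 in map_ab:
        for b2, c, s2 in map_bc:
            if b1 == b2:
                key = (a, c)
                result[key] = max(result.get(key, 0.0), s1 * s2)
    return [(a, c, s) for (a, c), s in sorted(result.items())]

def merge(map1, map2):
    """Merge two mappings over the same source/target pair, keeping
    the higher similarity for correspondences present in both."""
    result = {}
    for a, b, s in map1 + map2:
        result[(a, b)] = max(result.get((a, b), 0.0), s)
    return [(a, b, s) for (a, b), s in sorted(result.items())]
```

Composition of this kind lets a mapping produced for one match task be re-used to bridge a third source without matching it from scratch.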
24. The Future of the Mainframe. Spruth, Wilhelm G. 01 February 2019.
No description available.
25. Business Process Modelling with Continuous Validation. Kühne, Stefan; Kern, Heiko; Gruhn, Volker; Laue, Ralf. 01 February 2019.
In this paper, we demonstrate the prototype of a modelling tool that applies graph-based rules for identifying problems in business process models. The advantages of our approach are twofold. Firstly, it is not necessary to compute the complete state space of the model in order to find errors. Secondly, our technique can be applied even to incomplete business process models. Thus, the modeller can be supported by direct feedback during model construction. This feedback not only reports problems but also identifies their causes and makes suggestions for improvements.
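The idea of purely local, state-space-free checks can be sketched as follows. The two rules below (implicit branching at a task, and a parallel join directly fed by an exclusive split) are illustrative stand-ins chosen for this sketch, not the tool's actual rule set:

```python
def find_problems(nodes, edges):
    """Apply simple local graph rules to a (possibly incomplete)
    process model.  `nodes` maps node id -> type ('task', 'xor_split',
    'and_join', 'start', 'end', ...); `edges` is a list of (src, dst)
    pairs.  No state space is computed: each rule only inspects a
    node and its immediate neighbours."""
    problems = []
    out = {n: [] for n in nodes}
    inc = {n: [] for n in nodes}
    for s, d in edges:
        out[s].append(d)
        inc[d].append(s)
    for n, kind in nodes.items():
        # Rule 1: a task should not branch implicitly.
        if kind == 'task' and len(out[n]) > 1:
            problems.append((n, 'task has multiple outgoing arcs; '
                                'use an explicit gateway'))
        # Rule 2: a parallel join fed directly by an exclusive split
        # waits for branches that may never fire -> potential deadlock.
        if kind == 'and_join' and any(nodes.get(p) == 'xor_split'
                                      for p in inc[n]):
            problems.append((n, 'parallel join on exclusive branch '
                                'may deadlock'))
    return problems
```

Because each rule is local, the checks remain applicable while the model is still being drawn, which is what enables continuous feedback during editing.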
26. TID Hash Joins. Marek, Robert; Rahm, Erhard. 04 February 2019.
TID hash joins are a simple and memory-efficient method for processing large join queries. They are based on standard hash join algorithms but store only TID/key pairs in the hash table instead of entire tuples. This typically reduces memory requirements by more than an order of magnitude, bringing substantial benefits. In particular, performance for joins on gigabyte-sized relations can be substantially improved by greatly reducing the amount of disk I/O. Furthermore, efficient processing of mixed multi-user workloads consisting of both join queries and OLTP transactions is supported. We present a detailed simulation study to analyze the performance of TID hash joins. In particular, we identify the conditions under which TID hash joins are most beneficial. Furthermore, we compare the TID hash join with adaptive hash join algorithms that have been proposed to deal with mixed workloads.
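A minimal sketch of the idea, with list positions standing in for tuple identifiers (TIDs). In a real system the final pass re-fetches matching build tuples from disk by TID, and keeping only (key, TID) pairs in memory during the build is where the order-of-magnitude saving comes from:

```python
def tid_hash_join(build_rel, probe_rel, build_key, probe_key):
    """TID hash join sketch: the hash table stores only (key, TID)
    pairs for the build relation, never whole tuples.  Matching build
    tuples are materialized by dereferencing TIDs in a final pass."""
    # Build phase: key -> list of TIDs (row positions stand in for TIDs).
    table = {}
    for tid, row in enumerate(build_rel):
        table.setdefault(row[build_key], []).append(tid)
    # Probe phase: collect (build TID, probe row) match pairs.
    matches = [(tid, prow) for prow in probe_rel
               for tid in table.get(prow[probe_key], [])]
    # Fetch phase: dereference TIDs to assemble the result tuples.
    return [{**build_rel[tid], **prow} for tid, prow in matches]
```

The trade-off the paper's simulation study examines is that the extra fetch phase adds I/O of its own, so the approach pays off mainly when the hash table would otherwise not fit in memory.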
27. Characterizing the query behavior in peer-to-peer file sharing systems. Klemm, Alexander; Lindemann, Christoph; Vernon, Mary K.; Waldhorst, Oliver P. 06 February 2019.
This paper characterizes the query behavior of peers in a peer-to-peer (P2P) file sharing system. In contrast to previous work, which provides various aggregate workload statistics, we characterize peer behavior in a form that can be used for constructing representative synthetic workloads for evaluating new P2P system designs. In particular, the analysis exposes heterogeneous behavior that occurs on different days, in different geographical regions (i.e., Asia, Europe, and North America), or during different periods of the day. The workload measures include the fraction of connected sessions that are passive (i.e., issue no queries), the duration of such sessions, and for each active session, the number of queries issued, time until first query, query interarrival time, time after last query, and distribution of query popularity. Moreover, the key correlations in these workload measures are captured in the form of conditional distributions, such that the correlations can be accurately reproduced in a synthetic workload. The characterization is based on trace data gathered in the Gnutella P2P system over a period of 40 days. To characterize system-independent user behavior, we eliminate queries that are specific to the Gnutella system software, such as re-queries that are automatically issued by some client implementations to improve system responsiveness.
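The way such a characterization drives synthetic workload generation can be sketched as follows. The fitted distributions themselves live in the paper, so they are passed in here as sampling functions; the session structure (a passive/active split, then timing drawn conditioned on activity) mirrors the conditional-distribution idea:

```python
import random

def sample_session(p_passive, dur_passive, n_queries_dist, interarrival):
    """Draw one synthetic peer session.  `p_passive` is the fraction
    of passive sessions; the remaining arguments are zero-argument
    sampling functions standing in for fitted distributions
    (session duration, query count, query interarrival time)."""
    if random.random() < p_passive:
        return {'passive': True, 'duration': dur_passive()}
    n = n_queries_dist()
    times, t = [], 0.0
    for _ in range(n):
        t += interarrival()      # timing drawn conditioned on activity
        times.append(t)
    return {'passive': False, 'queries': n, 'query_times': times}
```

Plugging in per-region or per-time-of-day distributions reproduces the heterogeneity the trace analysis exposes.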
28. Sprite learning and object category recognition using invariant features. Allan, Moray. January 2007.
This thesis explores the use of invariant features for learning sprites from image sequences, and for recognising object categories in images. A popular framework for the interpretation of image sequences is the layers or sprite model of e.g. Wang and Adelson (1994), Irani et al. (1994). Jojic and Frey (2001) provide a generative probabilistic model framework for this task, but their algorithm is slow as it needs to search over discretised transformations (e.g. translations, or affines) for each layer. We show that by using invariant features (e.g. Lowe’s SIFT features) and clustering their motions we can reduce or eliminate the search and thus learn the sprites much faster. The algorithm is demonstrated on example image sequences. We introduce the Generative Template of Features (GTF), a parts-based model for visual object category detection. The GTF consists of a number of parts, and for each part there is a corresponding spatial location distribution and a distribution over ‘visual words’ (clusters of invariant features). We evaluate the performance of the GTF model for object localisation as compared to other techniques, and show that such a relatively simple model can give state-of-the-art performance. We also discuss the connection of the GTF to Hough-transform-like methods for object localisation.
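The motion-clustering step can be illustrated with a translation-only sketch: displacement vectors of matched invariant features are greedily grouped, and each coherent cluster suggests one layer's motion, replacing an exhaustive search over discretised transformations. This is a simplification for illustration; the thesis handles richer transformation classes:

```python
def cluster_motions(matches, tol):
    """Greedily cluster the displacement vectors of matched features.
    `matches` is a list of ((x1, y1), (x2, y2)) point correspondences
    between two frames; vectors within `tol` of a cluster centroid
    (per coordinate) join it.  Returns one centroid per cluster,
    i.e. one candidate translation per layer/sprite."""
    clusters = []   # each entry: [centroid, member_vectors]
    for (x1, y1), (x2, y2) in matches:
        d = (x2 - x1, y2 - y1)
        for c in clusters:
            cx, cy = c[0]
            if abs(d[0] - cx) <= tol and abs(d[1] - cy) <= tol:
                c[1].append(d)
                n = len(c[1])
                c[0] = (sum(v[0] for v in c[1]) / n,   # update centroid
                        sum(v[1] for v in c[1]) / n)
                break
        else:
            clusters.append([d, [d]])
    return [tuple(c[0]) for c in clusters]
```

With reliable feature matches, the per-layer transformation falls out of the clustering directly, which is the source of the speed-up over search.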
29. Musical acts and musical agents: theory, implementation and practice. Murray-Rust, David. January 2008.
Musical Agents are an emerging technology, designed to provide a range of new musical opportunities to human musicians and composers. Current systems in this area lack certain features which are necessary for a high-quality musician; in particular, they lack the ability to structure their output in terms of a communicative dialogue, and to reason about the responses of their partners. In order to address these issues, this thesis develops Musical Act Theory (MAT). This is a novel theory which models musical interactions between agents, allowing a dialogue-oriented analysis of music, and an exploration of intention and communication in the context of musical performance. The work here can be separated into four main contributions: a specification for a Musical Middleware system, which can be implemented computationally, and allows distributed agents to collaborate on music in real time; a computational model of musical interaction, which allows musical agents to analyse the playing of others as part of a communicative process, and formalises the workings of the Musical Middleware system; MAMA, a musical agent system which embodies this theory, and which can function in a variety of Musical Middleware applications; and a pilot experiment which explores the use of MAMA and the utility of MAT under controlled conditions. It is found that the Musical Middleware architecture is computationally implementable, and allows for a system which can respond to both direct musical communication and extramusical inputs, including the use of a custom-built tangible interface. MAT is found to capture certain aspects of music which are of interest: an intuitive notion of performative actions in music, and an existing model of musical interaction. Finally, the fact that a number of different levels (theory, architecture and implementation) are tied together gives a coherent model which can be applied to many computational musical situations.
30. Active learning: an explicit treatment of unreliable parameters. Becker, Markus. January 2008.
Active learning reduces annotation costs for supervised learning by concentrating labelling efforts on the most informative data. Most active learning methods assume that the model structure is fixed in advance and focus upon improving parameters within that structure. However, this is not appropriate for natural language processing, where the model structure and associated parameters are determined using labelled data. Applying traditional active learning methods to natural language processing can therefore fail to produce the expected reductions in annotation cost. We show that one of the reasons for this problem is that active learning can only select examples which are already covered by the model. In this thesis, we better tailor active learning to the needs of natural language processing as follows. We formulate the Unreliable Parameter Principle: active learning should explicitly and additionally address unreliably trained model parameters in order to optimally reduce classification error. In order to do so, we should target both missing events and infrequent events. We demonstrate the effectiveness of such an approach for a range of natural language processing tasks: prepositional phrase attachment, sequence labelling, and syntactic parsing. For prepositional phrase attachment, the explicit selection of unknown prepositions significantly improves coverage and classification performance for all examined active learning methods. For sequence labelling, we introduce a novel active learning method which explicitly targets unreliable parameters by selecting sentences with many unknown words and a large number of unobserved transition probabilities. For parsing, targeting unparseable sentences significantly improves coverage and f-measure in active learning.
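The selection idea for unreliable parameters can be sketched as follows. Scoring sentences purely by their count of unknown words is a deliberate simplification of the thesis's methods, which also target unobserved transition probabilities and would normally combine this with an uncertainty term:

```python
def select_batch(unlabelled, known_vocab, batch_size):
    """Pick sentences for annotation by how many of their words are
    missing from the model's vocabulary -- a proxy for the number of
    missing (hence unreliably trained) parameters each sentence
    would supply.  `unlabelled` is a list of token lists."""
    def score(sentence):
        return sum(1 for w in sentence if w not in known_vocab)
    return sorted(unlabelled, key=score, reverse=True)[:batch_size]
```

In contrast to purely uncertainty-based selection, a score like this can prefer examples the current model does not cover at all, which is exactly the coverage gap the thesis identifies.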