11

Communication and memory scheduling in reconfigurable image processing systems

Heithecker, Sven January 2008 (has links)
Also: Braunschweig, Techn. Univ., dissertation, 2008
12

End-to-End mechanisms for rate-adaptive multicast streaming over the Internet

Rimac, Ivica. Unknown Date (has links)
Techn. University Darmstadt, dissertation, 2004.
13

Protection of data networks by enforcing congestion control on UDP flows

Hessler, Sven. January 2008 (has links)
Also: Darmstadt, Techn. University, dissertation, 2008.
14

Quality-of-Service-Aware Data Stream Processing

Schmidt, Sven 13 March 2007 (has links)
Data stream processing has gained increasing importance in both industry and academia in recent years. Consider the monitoring of industrial processes as an example: sensors gather large volumes of data within a short time, and storing and post-processing these data may be pointless or even impossible. On the one hand, only a small part of the monitored data is relevant, so to use storage capacity efficiently only a preselection of the data should be kept. On the other hand, the volume of incoming data may simply be too high to be stored in time, or, in other words, the technical effort required to store it in time would be out of scale. Processing data streams in the context of this thesis means applying database operations to the stream on the fly, without explicitly storing the data. The challenge lies in the limited amount of resources, while data streams are potentially infinite. Furthermore, data stream processing must be fast, and results have to be disseminated as soon as possible. This thesis focuses on the latter issue. The goal is to provide Quality of Service (QoS) for the data stream processing task. To this end, adequate QoS metrics such as maximum output delay or minimum result data rate are defined. A cost model is then presented that derives the required processing resources from the specified QoS, and on that basis the stream processing operations are scheduled. Depending on the required QoS and the available resources, the weight can be shifted among the individual resources and QoS metrics. Calculating and scheduling resources requires considerable expert knowledge about the characteristics of the stream operations and of the incoming data streams. Since this knowledge is often based on experience, the resource calculation and reservation must be revised from time to time, which leads to occasional interruptions of the continuous data stream processing, of the delivery of results, and thus of the negotiated Quality of Service. The proposed robustness concept supports the user and reduces the number of such interruptions by providing additional resources.
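A minimal sketch, not from the thesis, of how QoS metrics such as maximum output delay and minimum result data rate might be checked against a simple per-tuple cost model before a stream query is admitted; all class and parameter names here are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class QosSpec:
    max_output_delay: float   # seconds, upper bound on result latency
    min_result_rate: float    # tuples per second delivered to the consumer

@dataclass
class OperatorCost:
    cost_per_tuple: float     # CPU seconds needed to process one tuple

def admissible(spec: QosSpec, ops: list[OperatorCost],
               input_rate: float, cpu_share: float) -> bool:
    """Check whether a reserved CPU share can sustain the requested QoS.

    The query is admitted only if the pipeline keeps up with the input rate
    (otherwise queues grow and the delay bound is eventually violated) and
    the per-tuple service time stays below the delay budget.
    """
    total_cost = sum(op.cost_per_tuple for op in ops)
    service_rate = cpu_share / total_cost      # tuples/s the pipeline sustains
    per_tuple_delay = total_cost / cpu_share   # seconds spent per tuple
    return (service_rate >= max(input_rate, spec.min_result_rate)
            and per_tuple_delay <= spec.max_output_delay)

# Example: two operators, 1 kHz input, 40% of one CPU reserved
spec = QosSpec(max_output_delay=0.01, min_result_rate=500.0)
ops = [OperatorCost(1e-4), OperatorCost(2e-4)]
print(admissible(spec, ops, input_rate=1000.0, cpu_share=0.4))
```

Shifting weight among resources and metrics then amounts to tuning values such as cpu_share until the check passes for the QoS the user actually needs.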
15

Algorithms for streaming graphs

Zelke, Mariano 06 May 2009 (has links)
An algorithm solving a graph problem is usually expected to have fast random access to the input graph G and a working memory that is able to store G completely. These powerful assumptions are put into question by massive graphs that exceed common working memories and can only be stored on disks or even tapes, where random access is very time-consuming. To tackle massive graphs stored on external memory, Muthukrishnan proposed the semi-streaming model in 2003. It permits a working memory of restricted size and forbids random access to the input graph; instead, the input is assumed to be a stream of edges in arbitrary order. In this thesis we develop algorithms in the semi-streaming model for different graph problems. For the problems of testing graph connectivity and bipartiteness, and for the computation of a minimum spanning tree, we show how to obtain running times that are asymptotically optimal. For the problem of finding a maximum weighted matching, which is known to be intractable in the semi-streaming model, we present the best known approximation algorithm. Finally, we show that both the minimum and the maximum cut problem in a graph are intractable in the semi-streaming model and give semi-streaming algorithms that approximate the respective solutions in a randomized fashion.
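To make the model concrete, here is a small sketch, not taken from the thesis, of a semi-streaming connectivity test: the edge stream is read once, and only a union-find structure over the n vertices is kept, which fits the restricted working memory of the model.

```python
def connected_components(num_vertices: int, edge_stream) -> int:
    """Count connected components in a single pass over an edge stream.

    Memory is O(n) machine words for the union-find parents; the edges
    themselves are never stored, matching the semi-streaming model.
    """
    parent = list(range(num_vertices))

    def find(v: int) -> int:
        while parent[v] != v:
            parent[v] = parent[parent[v]]   # path halving
            v = parent[v]
        return v

    components = num_vertices
    for u, v in edge_stream:                # edges arrive in arbitrary order
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv                 # union the two components
            components -= 1
    return components

# Example: a path 0-1-2 plus an isolated vertex 3 -> 2 components
print(connected_components(4, iter([(0, 1), (1, 2)])))
```

The graph is connected exactly when the count ends at 1; the same single-pass structure underlies tests like bipartiteness and minimum spanning tree construction in this model.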
16

Representing Data Quality in Sensor Data Streaming Environments

Lehner, Wolfgang, Klein, Anja 20 May 2022 (has links)
Sensors in smart-item environments capture data about product conditions and usage to support business decisions as well as production automation processes. A challenging issue in this application area is the restricted quality of sensor data due to limited sensor precision and sensor failures. Moreover, data stream processing performed to meet resource constraints in streaming environments introduces additional noise and further decreases the data quality. To avoid wrong business decisions due to dirty data, quality characteristics have to be captured, processed, and provided to the respective business task. However, how to efficiently provide applications with information about data quality is still an open research problem. In this article, we address this problem by presenting a flexible model for the propagation and processing of data quality. A comprehensive analysis of common data stream processing operators and their impact on data quality enables sound data evaluation and reduces the risk of incorrect business decisions. Furthermore, we propose a data quality model control that adapts the data quality granularity to the interestingness of the data stream.
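A small sketch of the underlying idea, not the article's actual model: each tuple carries a value plus quality annotations, and a stream operator derives the quality of its output from the qualities of its inputs. The field names and the propagation rules below are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class QualifiedValue:
    value: float         # sensor reading
    accuracy: float      # absolute error bound of the reading
    completeness: float  # fraction of expected source tuples actually present

def windowed_average(window: list[QualifiedValue]) -> QualifiedValue:
    """Average a window of readings and propagate the quality annotations.

    For worst-case absolute error bounds, the error of the mean is at most
    the mean of the individual bounds; completeness is averaged as well.
    """
    n = len(window)
    value = sum(t.value for t in window) / n
    accuracy = sum(t.accuracy for t in window) / n
    completeness = sum(t.completeness for t in window) / n
    return QualifiedValue(value, accuracy, completeness)

window = [QualifiedValue(20.1, 0.5, 1.0),
          QualifiedValue(20.7, 0.5, 1.0),
          QualifiedValue(19.8, 0.5, 0.5)]  # one source delivered only half its tuples
print(windowed_average(window))
```

Propagating such annotations operator by operator lets the consuming business task judge how much trust to place in each result tuple.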
17

Deferred Maintenance of Disk-Based Random Samples

Gemulla, Rainer, Lehner, Wolfgang 12 January 2023 (has links)
Random sampling is a well-known technique for approximate processing of large datasets. We introduce a set of algorithms for incremental maintenance of large random samples on secondary storage. We show that the sample maintenance cost can be reduced by refreshing the sample in a deferred manner. We introduce a novel type of log file which follows the intuition that only a “sample” of the operations on the base data has to be considered to maintain a random sample in a statistically correct way. Additionally, we develop a deferred refresh algorithm which updates the sample using fast sequential disk access only and which does not require any main memory. We conducted an extensive set of experiments and found that our algorithms reduce maintenance cost by several orders of magnitude.
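A minimal sketch of the deferral idea, not the paper's algorithm: insertions into the base data are logged only with the probability at which they could enter a bounded reservoir sample, and the small log is later replayed against the sample in one sequential pass. The class and method names are assumptions for illustration.

```python
import random

class DeferredReservoir:
    """Bounded-size uniform random sample maintained via a deferred log.

    Coin flips happen at insert time (so they use the correct dataset size),
    but the sample itself is only touched when refresh() replays the log
    sequentially -- only the "sample" of operations that can affect the
    sample is ever logged.
    """

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.sample = []   # stands in for the on-disk sample
        self.log = []      # stands in for the sequential log file
        self.seen = 0      # size of the base data so far

    def insert(self, item) -> None:
        self.seen += 1
        if self.seen <= self.capacity:
            self.log.append(('fill', item))
        elif random.random() < self.capacity / self.seen:
            # Only this fraction of inserts can enter the sample,
            # so only these are written to the log.
            victim = random.randrange(self.capacity)
            self.log.append(('replace', victim, item))

    def refresh(self) -> None:
        """Apply the accumulated log to the sample in one sequential pass."""
        for entry in self.log:
            if entry[0] == 'fill':
                self.sample.append(entry[1])
            else:
                _, victim, item = entry
                self.sample[victim] = item
        self.log.clear()

r = DeferredReservoir(capacity=100)
for i in range(10_000):
    r.insert(i)
r.refresh()
print(len(r.sample), len(r.log))   # 100 items in the sample, empty log
```

Because the inclusion decisions are already made at insert time, replaying the log in order yields the same distribution as maintaining the reservoir online, while the on-disk sample is only written during the sequential refresh.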
