About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
61

Using Cloud Technologies to Optimize Data-Intensive Service Applications

Lehner, Wolfgang, Habich, Dirk, Richly, Sebastian, Assmann, Uwe 01 November 2022 (has links)
The role of data analytics is growing in several application domains as a way to cope with the large amounts of captured data. Data analytics are generally data-intensive processes whose efficient execution is a challenging task. Each process consists of a collection of related, structured activities in which huge data sets have to be exchanged between several loosely coupled services. Implementing such processes in a service-oriented environment offers some advantages, but the efficient realization of data flows is difficult. In this paper we therefore propose a novel SOA-aware approach with a special focus on the data flow. The tight interaction of new cloud technologies with SOA technologies enables us to optimize the execution of data-intensive service applications by reducing the data exchange tasks to a minimum. Fundamentally, our core concept for optimizing the data flows is found in data clouds. Moreover, we can exploit our approach to derive efficient process execution strategies with regard to different optimization objectives for the data flows.
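The data-cloud concept can be illustrated with a minimal sketch: services hand each other small references into a shared store instead of shipping the data sets themselves, reducing data exchange between loosely coupled services to a minimum. The `DataCloud` class and the service functions below are illustrative assumptions, not the authors' actual system.

```python
# Hypothetical sketch of the data-cloud idea: services exchange small
# references into a shared store instead of shipping the data sets
# themselves. The DataCloud class and service functions are illustrative,
# not the authors' actual API.

class DataCloud:
    """Stand-in for a cloud data store shared by all services."""
    def __init__(self):
        self._store = {}
        self._next_id = 0

    def put(self, data):
        ref = f"ref-{self._next_id}"          # a small handle, cheap to pass around
        self._next_id += 1
        self._store[ref] = data
        return ref

    def get(self, ref):
        return self._store[ref]

def filter_service(cloud, ref):
    """A loosely coupled service: reads via reference, writes a new reference."""
    data = cloud.get(ref)
    return cloud.put([x for x in data if x > 0])

def sum_service(cloud, ref):
    return sum(cloud.get(ref))

cloud = DataCloud()
ref = cloud.put([3, -1, 4, -1, 5])            # the large data set stays in the cloud
ref2 = filter_service(cloud, ref)             # only references cross service boundaries
total = sum_service(cloud, ref2)
```

Only the reference strings travel between the services; the data itself never leaves the store, which is the essence of reducing data exchange tasks.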
62

Exploiting big data in time series forecasting: A cross-sectional approach

Lehner, Wolfgang, Hartmann, Claudio, Hahmann, Martin, Rosenthal, Frank 12 January 2023 (has links)
Forecasting time series data is an integral component of management, planning and decision making. Following the Big Data trend, large amounts of time series data are available from many heterogeneous data sources in more and more application domains. The highly dynamic and often fluctuating character of these domains, in combination with the logistic problems of collecting such data from a variety of sources, imposes new challenges on forecasting. Traditional approaches rely heavily on extensive and complete historical data to build time series models and are thus no longer applicable if time series are short or, even more importantly, intermittent. In addition, large numbers of time series have to be forecast on different aggregation levels with preferably low latency, while forecast accuracy should remain high. This is almost impossible when keeping the traditional focus of creating one forecast model for each individual time series. In this paper we tackle these challenges by presenting a novel forecasting approach called cross-sectional forecasting. This method is especially designed for Big Data sets with a multitude of time series. Our approach breaks with existing concepts by creating only one model for a whole set of time series and by requiring only a fraction of the available data to provide accurate forecasts. By utilizing available data from all time series of a data set, missing values can be compensated for and accurate forecasting results can be calculated quickly on arbitrary aggregation levels.
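The core idea — one model pooled over a whole set of series — can be sketched in a few lines. The pooled lag-1 regression and the toy series below are illustrative assumptions, not the authors' actual model.

```python
# A toy sketch of the cross-sectional idea: fit ONE model on the pooled
# lag-1 pairs of ALL time series, then forecast every series with it --
# including short ones that could not support a model of their own.
# The pooled regression and the data are illustrative assumptions.

def fit_pooled_ar1(series_set):
    """Least-squares slope/intercept over lag-1 pairs pooled from all series."""
    xs, ys = [], []
    for s in series_set:
        xs.extend(s[:-1])                     # value at t
        ys.extend(s[1:])                      # value at t+1
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sxy / sxx
    return slope, my - slope * mx

series_set = [
    [1, 2, 3, 4, 5, 6],                       # long series
    [10, 11, 12],                             # short series: too little history alone
]
slope, intercept = fit_pooled_ar1(series_set)
# one shared model forecasts the next value of every series
forecasts = [slope * s[-1] + intercept for s in series_set]
```

The short series borrows strength from the pooled data: although it contributes only two lag-1 pairs, it still receives a forecast from the shared model.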
63

F2DB: The Flash-Forward Database System

Lehner, Wolfgang, Fischer, Ulrike, Rosenthal, Frank 29 November 2022 (has links)
Forecasts are important to decision-making and risk assessment in many domains. Since current database systems do not provide integrated support for forecasting, it is usually done outside the database system by specially trained experts using forecast models. However, integrating model-based forecasting as a first-class citizen inside a DBMS speeds up the forecasting process by avoiding exporting the data and by applying database-related optimizations like reusing created forecast models. It especially allows subsequent processing of forecast results inside the database. In this demo, we present our prototype F2DB based on PostgreSQL, which allows for transparent processing of forecast queries. Our system automatically takes care of model maintenance when the underlying dataset changes. In addition, we offer optimizations to save maintenance costs and increase accuracy by using derivation schemes for multidimensional data. Our approach reduces the required expert knowledge by enabling arbitrary users to apply forecasting in a declarative way.
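What transparent forecast-query processing could look like can be sketched as follows. The real F2DB extends PostgreSQL's query processor, so the `query` function and the trivial growth model here are purely illustrative assumptions.

```python
# Hypothetical sketch of transparent forecast-query processing in the
# spirit of F2DB; the real system extends PostgreSQL's query processor,
# and this interface and the trivial growth model are purely illustrative.

HISTORY = {1: 100.0, 2: 110.0, 3: 121.0}      # period -> stored value

def fit_model(history):
    """Toy forecast model: average period-over-period growth ratio."""
    periods = sorted(history)
    ratios = [history[b] / history[a] for a, b in zip(periods, periods[1:])]
    return sum(ratios) / len(ratios)

MODEL = fit_model(HISTORY)                     # created once, then reused by queries

def query(period):
    """Answer 'value at period' queries; forecasting stays transparent."""
    if period in HISTORY:                      # past: read stored data
        return HISTORY[period]
    last = max(HISTORY)                        # future: apply the maintained model
    return HISTORY[last] * MODEL ** (period - last)

past_value = query(2)                          # served from storage
forecast = query(5)                            # served from the model, no data export
```

The caller asks for a value the same way in both cases; whether the answer comes from stored data or from a maintained model is an internal decision, which is the declarative usage the abstract describes.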
64

Intuitive Visualisierung universitätsinterner Publikationsdaten zur Unterstützung von Entscheidungsprozessen / Intuitive visualization of university-internal publication data as support for decision processes

Bolte, Fabian 23 November 2016 (has links) (PDF)
This thesis uses the publication data of TU Chemnitz to visualize the development of cooperations between institutes and faculties over time. It shows that commonly used graph-based network analyses are ill-suited to extending these complex relationships by a temporal dimension. Instead, an application based on the streamgraph is presented which not only enables the comparison of the development of arbitrary combinations of institutes and faculties, but also gives specific information about the types of cooperation and their shift over time. To this end, two extensions to the streamgraph are proposed which increase the amount of information it conveys and thus enable it to meet the stated requirements.
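The layout computation behind a streamgraph (here the ThemeRiver variant with a symmetric baseline) can be sketched briefly; the cooperation counts of the three layer series are made up for illustration.

```python
# A sketch of the layout computation behind a streamgraph (ThemeRiver
# variant with a symmetric baseline), the visualization the thesis
# extends. The cooperation counts of the three layer series are made up.

def streamgraph_layout(layers):
    """Return (bottom, top) boundary lists for each stacked layer."""
    steps = len(layers[0])
    totals = [sum(layer[t] for layer in layers) for t in range(steps)]
    y = [-tot / 2 for tot in totals]          # center the stream vertically
    bounds = []
    for layer in layers:
        bottom = y[:]
        y = [y[t] + layer[t] for t in range(steps)]
        bounds.append((bottom, y[:]))
    return bounds

# yearly cooperation counts of three hypothetical institute pairs
layers = [[2, 4, 6], [1, 1, 2], [3, 5, 4]]
bounds = streamgraph_layout(layers)
```

Centering the stack on a symmetric baseline is what gives the streamgraph its river-like shape, in contrast to an ordinary stacked area chart with a flat zero baseline.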
65

Delphin 6 Output File Specification

Vogelsang, Stefan, Nicolai, Andreas 12 April 2016 (has links) (PDF)
This paper describes the file formats of the output data and geometry files generated by the Delphin program, a simulation model for hygrothermal transport in porous media. The output data format is suitable for any kind of simulation output generated by transient transport simulation models. Implementing support for the Delphin output format enables use of the advanced post-processing functionality provided by the Delphin post-processing tool and its dedicated physical analysis functionality.
66

Framework zur Innenraumpositionierung unter Verwendung freier, offener Innenraumkarten und Inertialsensorik / An Indoor Positioning Framework Using Free and Open Map Data and Inertial Sensors

Graichen, Thomas, Weichold, Steffen, Bilda, Sebastian 07 February 2017 (has links) (PDF)
This publication describes a method that enables infrastructure-free positioning inside buildings. Infrastructure-free here means the self-contained positioning of a system on the basis of its inertial sensors, without the use of additional solutions installed in the building, such as radio systems. Because the errors of such sensors grow over time in particular, the method incorporates indoor maps into the localization process. This map data allows invalid positions and movements, such as passing through walls, to be excluded and thus substantially improves the positioning accuracy.
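The map-constraint idea can be sketched in a few lines: dead-reckoning steps that the floor plan rules out, such as a step through a wall, are discarded. The grid map and step sequence below are invented for illustration; the framework's actual maps are free, open vector indoor maps.

```python
# A minimal sketch of the map-matching idea: an inertial dead-reckoning
# estimate is corrected by rejecting movements the indoor map rules out,
# e.g. steps through a wall. The grid map and steps are made up.

# 1 = wall, 0 = free space (a tiny hypothetical floor plan)
FLOOR = [
    [0, 0, 1, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 0],
]

def is_free(pos):
    r, c = pos
    return 0 <= r < len(FLOOR) and 0 <= c < len(FLOOR[0]) and FLOOR[r][c] == 0

def apply_steps(start, steps):
    """Dead reckoning with a map constraint: invalid steps are discarded."""
    pos = start
    for dr, dc in steps:
        cand = (pos[0] + dr, pos[1] + dc)
        if is_free(cand):                     # the map excludes wall crossings
            pos = cand
    return pos

# noisy step sequence: the second step would pass through the wall
final_pos = apply_steps((0, 0), [(0, 1), (0, 1), (1, 0), (1, 0), (0, 1)])
```

Without the map check the erroneous second step would place the estimate on the far side of the wall; with it, the drifting sensor estimate stays consistent with the building geometry.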
67

Multi-view point cloud fusion for LiDAR based cooperative environment detection

Jähn, Benjamin, Lindner, Philipp, Wanielik, Gerd 11 November 2015 (has links) (PDF)
A key component for automated driving is 360° environment detection. The recognition capabilities of modern sensors are always limited to their direct field of view. In urban areas a lot of objects occlude important areas of interest. The information captured by another sensor from another perspective could resolve such occluded situations. Furthermore, the capabilities to detect and classify various objects in the surroundings can be improved by taking multiple views into account. In order to combine the data of two sensors into one coordinate system, a rigid transformation matrix has to be derived. The accuracy of modern, e.g. satellite-based, relative pose estimation systems is not sufficient to guarantee a suitable alignment. Therefore, a registration-based approach is used in this work which aligns the captured environment data of two sensors from different positions. Thus their relative pose estimate obtained by traditional methods is improved and the data can be fused. To support this, we present an approach which utilizes the uncertainty information of modern tracking systems to determine the possible field of view of the other sensor. Furthermore, it is estimated which parts of the captured data are directly visible to both sensors, taking occlusion and shadowing effects into account. Afterwards a registration method based on the iterative closest point (ICP) algorithm is applied to that data in order to obtain an accurate alignment. The contribution of the presented approach to the achievable accuracy is shown with the help of ground truth data from a LiDAR simulation within a 3-D crossroad model. Results show that a two-dimensional position and heading estimate is sufficient to initialize a successful 3-D registration process. Furthermore, it is shown which initial spatial alignment is necessary to obtain suitable registration results.
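The ICP registration step at the core of the approach can be illustrated in 2-D; the full method additionally filters points by mutual visibility and works on 3-D LiDAR data. The grid point set and the initial misalignment below are assumptions for the sketch.

```python
import numpy as np

# A compact 2-D illustration of the iterative closest point (ICP) step
# underlying the registration; the full method additionally filters
# points by mutual visibility and works on 3-D LiDAR data. The grid
# point set and the initial misalignment are assumptions for the sketch.

def icp(src, dst, iterations=20):
    """Iteratively align src to dst; returns the transformed source points."""
    src = src.copy()
    for _ in range(iterations):
        # nearest-neighbour correspondences
        d = np.linalg.norm(src[:, None, :] - dst[None, :, :], axis=2)
        matched = dst[d.argmin(axis=1)]
        # best rigid transform for these pairs (Kabsch / SVD)
        cs, cm = src.mean(axis=0), matched.mean(axis=0)
        H = (src - cs).T @ (matched - cm)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:              # guard against a reflection
            Vt[-1] *= -1
            R = Vt.T @ U.T
        src = (src - cs) @ R.T + cm
    return src

dst = np.array([[i, j] for i in range(5) for j in range(5)], dtype=float)
theta = 0.1                                   # small initial misalignment (rad)
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
center = dst.mean(axis=0)
src = (dst - center) @ rot.T + center + np.array([0.1, -0.1])
aligned = icp(src, dst)
```

As the abstract's results suggest, a reasonable initial alignment matters: here the misalignment is small enough that nearest-neighbour matching finds the true correspondences, so the registration converges.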
68

Weißeritz-Info - ein internetgestütztes Informations- und Entscheidungsunterstützungssystem für das Flussgebiet der Weißeritz

Walz, Ulrich 28 February 2013 (has links) (PDF)
This article presents the information and decision support system "Weißeritz-Info", developed at the Leibniz-Institut für ökologische Raumentwicklung e.V. (IÖR), which serves to prepare and provide information on flood risk management for the catchment area of the Weißeritz. Its target groups are citizens and land users as well as decision makers in municipalities, public authorities, and associations. The WebGIS-based system was built for the initiative "Weißeritz-Regio", an alliance of 26 institutions that have cooperated on an informal basis since the end of 2003 to improve flood prevention in the river basin.
69

Delphin 6 Output File Specification

Vogelsang, Stefan, Nicolai, Andreas 29 June 2011 (has links) (PDF)
This paper describes the file formats of the output data and geometry files generated by the Delphin program, a simulation model for hygrothermal transport in porous media. The output data format is suitable for any kind of simulation output generated by transient transport simulation models. Implementing support for the Delphin output format enables use of the advanced post-processing functionality provided by the Delphin post-processing tool and its dedicated physical analysis functionality. The article also discusses the application programming interface of the DataIO library that can be used to read/write Delphin output data and geometry files conveniently and efficiently.
70

Round-trip engineering concept for hierarchical UML models in AUTOSAR-based safety projects

Pathni, Charu 09 November 2015 (has links) (PDF)
A product development process begins at a very abstract level with understanding the requirements. The data then needs to be passed on to the next phase of development; this happens after every stage until a product is finally made. This thesis deals specifically with the data exchange in the software development process. The problem lies in handling the data with respect to redundancy and versioning; moreover, once data is passed on to the next stage, there is no evident way to exchange it in the reverse direction. The results of this thesis discuss solutions to this problem by bringing all the data to the same level in terms of its format. With this concept in place, the data can be used according to the requirements. This research addresses data consistency and data verification for data that is used during development and merged from various sources. The formulated concept can be extended to a wide variety of applications within the development process: wherever the process involves the exchange of data, scalability and generalization are the foundational concepts it builds on.
