151

Efficient Generation of Standard Customer Reports for Airbag Simulation Results

Jayanthi, Sagar 02 November 2023 (has links)
Passive safety systems like airbags have significantly improved road safety. These occupant safety systems reduce the severity of injuries and save lives in the event of a road accident. The airbag system must be configured correctly to minimize the impact of a collision and protect the occupants. To configure the airbag, test crashes are performed and data is recorded. This data is used in simulations to determine appropriate parameters for airbag deployment. The airbag simulation results are stored in databases, and airbag application tools are used to handle the stored data. The simulation results must be extracted efficiently, the required computations need to be performed, and the results are then written to reports. RSDBnext (Result Database next generation) is an airbag application tool used to extract data from the database. The RSDBnext tool should be adapted to generate Standard Customer Reports, which are produced according to customer requirements. The existing methodology for generating Standard Customer Reports used Excel macros, which took a long time to generate the reports and was complex and unstable. Hence, a new methodology without macros was proposed. In the proposed method, an XML file and an XSLT stylesheet are used to generate the report in Excel using C# with Visual Studio. This approach reduces report generation time and overcomes the drawbacks of the previous approach. The results show that this report generation methodology is faster, easier, and more reliable.
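To illustrate the XML-plus-XSLT approach in general terms, the following minimal sketch applies a stylesheet to a small results file and writes a tabular report. It is written in Python with lxml purely for illustration; the thesis itself implements the transformation in C# with Visual Studio and targets Excel, and the file names, XML layout, and metric names below are assumptions, not the actual RSDBnext schema.

```python
# Minimal sketch: transform simulation results (XML) into a tabular report via XSLT.
# The XML layout, stylesheet, metric names, and output format are illustrative only.
from lxml import etree

SIMULATION_XML = b"""
<results>
  <case id="frontal_56kph"><metric name="HIC15">312</metric></case>
  <case id="oblique_32kph"><metric name="HIC15">198</metric></case>
</results>
"""

REPORT_XSLT = b"""
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/results">
    <table>
      <tr><th>Load case</th><th>Metric</th><th>Value</th></tr>
      <xsl:for-each select="case">
        <tr>
          <td><xsl:value-of select="@id"/></td>
          <td><xsl:value-of select="metric/@name"/></td>
          <td><xsl:value-of select="metric"/></td>
        </tr>
      </xsl:for-each>
    </table>
  </xsl:template>
</xsl:stylesheet>
"""

# Apply the stylesheet to the simulation results and write the report file.
transform = etree.XSLT(etree.fromstring(REPORT_XSLT))
report = transform(etree.fromstring(SIMULATION_XML))
with open("standard_customer_report.html", "wb") as fh:
    fh.write(etree.tostring(report, pretty_print=True))
```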
152

The Planning OLAP Model

Jaecksch, Bernhard, Lehner, Wolfgang 26 January 2023 (has links)
A wealth of multidimensional OLAP models has been suggested in the past, tackling various problems of modeling multidimensional data. However, all of these models focus on navigational and query operators for grouping, selection, and aggregation. We argue that planning functionality is, next to reporting and analysis, an important part of OLAP in many businesses and as such should be represented as part of a multidimensional model. Navigational operators are not sufficient for planning; instead, new factual data is created or existing data is changed. To our knowledge, we are the first to suggest a multidimensional model with support for planning. Because the main data entities of a typical multidimensional model are used by both planning and reporting, we concentrate on extending an existing model, adding a set of novel operators that support an extensive set of typical planning functions.
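As a rough illustration of what such planning functionality produces, the sketch below distributes a planned total top-down onto base cells in proportion to actual values, creating new fact data rather than merely querying existing data. The cube layout, operator name, and numbers are illustrative assumptions and do not reproduce the operators formally defined in the paper.

```python
# Minimal sketch of one typical planning function: top-down distribution of a
# planned total onto base cells proportionally to actual values.
actuals = {  # fact cells: (product, region) -> actual revenue of the last period
    ("bike", "north"): 120.0,
    ("bike", "south"): 80.0,
    ("car",  "north"): 300.0,
}

def distribute(target_total, reference):
    """Create new plan facts by splitting target_total in proportion to reference."""
    total = sum(reference.values())
    return {cell: target_total * value / total for cell, value in reference.items()}

# Planning creates new factual data instead of only navigating existing data.
plan = distribute(target_total=600.0, reference=actuals)
for cell, value in sorted(plan.items()):
    print(cell, round(value, 2))
```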
153

A Domain-Specific Language for Do-It-Yourself Analytical Mashups

Eberius, Julian, Thiele, Maik, Lehner, Wolfgang 26 January 2023 (has links)
The increasing amount and variety of data available on the web leads to new possibilities in end-user-focused data analysis. While the classic database technologies for data integration and analysis (ETL and BI) are too complex for the needs of end users, newer technologies like web mashups are not optimal for data analysis. To make productive use of the data available on the web, end users need easy ways to find, join, and visualize it. We propose a domain-specific language (DSL) for querying a repository of heterogeneous web data. In contrast to query languages such as SQL, this DSL describes the visualization of the queried data in addition to its selection, filtering, and aggregation. The resulting data mashup can be made interactive by leaving parts of the query variable. We also describe an abstraction layer above this DSL that uses a recommendation-driven natural language interface to reduce the difficulty of creating queries in this DSL.
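The sketch below conveys the general idea of such a query, in which selection, aggregation, and the desired visualization are stated together. It is a hypothetical Python rendering, not the paper's DSL syntax, and the dataset, field names, and chart type are invented for illustration.

```python
# Hypothetical sketch of a mashup query: one declarative description that selects,
# filters, aggregates, and also names how the result should be visualized.
web_table = [  # stand-in for a dataset found in a web data repository
    {"country": "DE", "year": 2010, "co2": 832},
    {"country": "DE", "year": 2011, "co2": 812},
    {"country": "FR", "year": 2010, "co2": 358},
]

query = {
    "filter": lambda row: row["country"] == "DE",   # selection
    "group_by": "country",                          # aggregation key
    "aggregate": ("co2", sum),                      # measure and function
    "visualize": "bar_chart",                       # visualization is part of the query
}

def run(query, rows):
    rows = [r for r in rows if query["filter"](r)]
    field, fn = query["aggregate"]
    groups = {}
    for r in rows:
        groups.setdefault(r[query["group_by"]], []).append(r[field])
    result = {key: fn(values) for key, values in groups.items()}
    return {"data": result, "render_as": query["visualize"]}

print(run(query, web_table))   # {'data': {'DE': 1644}, 'render_as': 'bar_chart'}
```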
154

Flexible Relational Data Model: A Common Ground for Schema-Flexible Database Systems

Voigt, Hannes, Lehner, Wolfgang 03 February 2023 (has links)
An increasing number of application fields represent dynamic and open discourses characterized by high mutability, variety, and pluralism in data. Data in dynamic and open discourses typically exhibits an irregular schema. Such data cannot be directly represented in the traditional relational data model, and mapping strategies that allow its representation increase development and maintenance costs. Likewise, NoSQL systems offer the required schema flexibility but introduce new costs by not being directly compatible with the relational systems that still dominate enterprise information systems. With the Flexible Relational Data Model (FRDM) we propose a third way. It allows the direct representation of data with irregular schemas and combines tuple-oriented data representation with relation-oriented data processing. Hence, FRDM is still relational, in contrast to other flexible data models currently in vogue. It can directly represent relational data and builds on the powerful, well-known, and proven set of relational operations for data retrieval and manipulation. In addition to FRDM, we present the flexible constraint framework FRDM-C, which explicitly allows restricting the flexibility of FRDM when and where needed. All this makes FRDM backward compatible with traditional relational applications and simplifies interoperability with existing purely relational databases.
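As a loose illustration of combining tuple-oriented representation with relation-oriented processing, the sketch below stores each tuple with only the attributes it actually has and still queries the collection with relational-style selection and projection. It is a plain-Python analogy over invented data, not the FRDM formalism or the FRDM-C constraint framework.

```python
# Minimal sketch: tuples carry their own (possibly irregular) attribute sets,
# while retrieval still uses relation-style operators.
people = [  # each tuple lists only the attributes it actually has
    {"name": "Ada",   "email": "ada@example.org", "twitter": "@ada"},
    {"name": "Grace", "phone": "+1-555-0100"},
    {"name": "Edgar"},
]

def select(rows, predicate):
    return [r for r in rows if predicate(r)]

def project(rows, attributes):
    # Missing attributes simply stay absent instead of forcing NULL-padded columns.
    return [{a: r[a] for a in attributes if a in r} for r in rows]

reachable = select(people, lambda r: "email" in r or "phone" in r)
print(project(reachable, ["name", "email", "phone"]))
```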
155

Partner datenverarbeitender Services

Wagner, Christoph 19 January 2015 (has links)
Diese Arbeit untersucht den Einfluss von Daten auf das Verhalten und die Korrektheit eines verteilten Systems. Ein verteiltes System besteht aus mehreren Services. Ein Service ist eine selbständige, plattformunabhängige Einheit, die anderen Services eine bestimmte Funktionalität über eine wohldefinierte Schnittstelle zur Verfügung stellt. In dieser Arbeit betrachten wir die Interaktion von jeweils genau zwei Services miteinander. Zwei Services, die erfolgreich miteinander zusammenarbeiten können, nennen wir Partner. Ein Service heißt bedienbar, wenn er mindestens einen Partner hat. Ziel der Arbeit ist es, zu untersuchen, wann zwei Services Partner sind, und für einen Service zu entscheiden, ob dieser bedienbar ist. Aufgrund der Daten kann der Zustandsraum eines Service sehr groß oder sogar unendlich groß werden. Wir untersuchen zwei Klassen von Services mit unendlich vielen Zuständen. Für diese Klassen stellen wir Algorithmen vor, welche zu einem gegebenen Service einen Partner synthetisieren, falls ein solcher existiert. Auf diese Weise entscheiden wir konstruktiv die Bedienbarkeit eines Service. Weiterhin stellen wir Transformationsregeln für Partner vor und untersuchen, wie viel Speicherplatz ein Partner eines Services mindestens benötigt. / This thesis studies the influence of data on the behavior and the correctness of a distributed system. A distributed system consists of several services. A service is a self-contained, platform-independent entity which provides a certain functionality to other services via a well-defined interface. In this thesis, we consider the interaction of exactly two services. Two services that can successfully cooperate with each other are called partners. We call a service controllable if it has at least one partner. The goal of this thesis is to study the conditions for which two services are partners and to decide whether a given service is controllable. Due to the data, the state space of a service may be very large or even infinite. We investigate two classes of services with infinitely many states. For these classes, we present algorithms that synthesize a partner of a service, if it exists. This allows us to decide the controllability of a service constructively. Furthermore, we present transformation rules for partners and investigate the minimum amount of memory that a partner of a service needs.
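A much simplified picture of the partner notion, sketched below for finite-state services without data: two services are treated as partners here if their synchronized product can never get stuck before both reach a final state. The service encoding, the message names, and this particular compatibility criterion are illustrative simplifications of the concepts studied in the thesis, which deals with classes of infinite-state, data-dependent services.

```python
# Simplified sketch: deadlock-freedom of the synchronized product of two
# finite-state services as a stand-in for the partner relation.
def partners(svc_a, svc_b):
    """Explore the synchronized product; return False if a deadlock is reachable."""
    def joint_steps(state_a, state_b):
        steps = []
        for msg_a, nxt_a in svc_a["transitions"].get(state_a, []):
            for msg_b, nxt_b in svc_b["transitions"].get(state_b, []):
                # "!m" (send) in one service synchronizes with "?m" (receive) in the other.
                if msg_a[0] != msg_b[0] and msg_a[1:] == msg_b[1:]:
                    steps.append((nxt_a, nxt_b))
        return steps

    start = (svc_a["initial"], svc_b["initial"])
    seen, stack = {start}, [start]
    while stack:
        a, b = stack.pop()
        steps = joint_steps(a, b)
        both_final = a in svc_a["finals"] and b in svc_b["finals"]
        if not steps and not both_final:
            return False            # reachable deadlock: not partners
        for nxt in steps:
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return True

# A tiny shop service and a matching client: the client is a partner of the shop.
shop = {"initial": 0, "finals": {2},
        "transitions": {0: [("?order", 1)], 1: [("!invoice", 2)]}}
client = {"initial": 0, "finals": {2},
          "transitions": {0: [("!order", 1)], 1: [("?invoice", 2)]}}
print(partners(shop, client))   # True
```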
156

Die Evaluation von Daten aus erster und zweiter Hand im naturwissenschaftlichen Unterricht

Pfeiler, Stephan 01 April 2019 (has links)
Im naturwissenschaftlichen Unterricht werden Daten z.B. in Erkenntnisgewinnungsprozessen eingesetzt und somit ist hier auch die Evaluation von Daten wichtig. Es können Daten aus erster und zweiter Hand unterschieden werden, wobei die Unterscheidung auf Basis des Autors und der Beteiligung an der Datenerhebung geschieht. Im Physikunterricht werden Schüler*innen mit Daten aus unterschiedlichen Quellen konfrontiert. Es wird angenommen, dass die Evaluation von Daten durch Schüler*innen als Glaubwürdigkeitsbewertung dieser Daten verstanden werden kann. Ergänzend zur Theorie wurde in einer Studie untersucht, welche Kriterien Schüler*innen bei der Evaluation unterschiedlicher Datensätze verwenden, die sich nur durch den Autor unterschieden. Dafür wurden 17 Interviews mit Schüler*innen durchgeführt (13-16 Jahre). Eine qualitative Inhaltsanalyse führte zu einem Codesystem mit vier Codes und diversen Subcodes. Die Codes bezogen sich auf die Themen Eigenschaften des Experiments, Eigenschaften von Autoren, Eigenschaften der Daten und Prüfen/Abgleichen. Unterschiede in der Verwendung der Kriterien für verschiedene Datentypen wurden in einer zweiten Studie überprüft. Dazu wurden 42 Interviews mit Schüler*innen (14-16 Jahre) durchgeführt. Alle Probanden erzeugten selbstständig physikalische Daten und wählten Hypothesen über den Ausgang des Experiments aus. Im Anschluss wurden sie mit einem von drei verschiedenen Datensätzen konfrontiert: ihren eigenen Daten, den Daten eines anderen Schülers oder Daten eines Lehrers. Das Codesystem war die Grundlage einer quantitativen Inhaltsanalyse dieser Interviews. Diese erlaubte es, Unterschiede zwischen den Versuchsgruppen zu finden. Es ergaben sich keine Unterschiede für das Hypothesenwechselverhalten, die Verwendung von Kriterien für die Glaubwürdigkeitsbewertung oder das Rating der Wichtigkeit der Codes zwischen den Versuchsgruppen. Folgerungen für den Unterricht und die Unterscheidung der Datentypen werden erläutert. / In science education, data is used, for example, in processes of knowledge acquisition, and the evaluation of data is therefore important. First-hand and second-hand data can be distinguished, whereby the distinction is based on the authorship and the involvement in the data acquisition. In physics education, students are regularly confronted with data from different sources. It is assumed that the evaluation of data by students can be understood as an evaluation of the data's credibility. To complement a theoretical model, an interview study was conducted to find out which criteria students use to evaluate the credibility of data sets that differ only in their author. 17 students (13-16 years) were interviewed. A qualitative content analysis yielded a system of four codes and several subcodes. These codes represented statements about properties of the experiment, properties of the author, properties of the data, and testing and comparing. A second study was conducted to test whether there are differences in the use of these criteria when students are confronted with different types of data. 42 interviews with students (14-16 years) were conducted. All subjects acquired a set of first-hand data in an experiment and were asked to choose between three hypotheses about the outcome. Afterwards they were confronted with one of three different sets of data: their own data, another student's data, or a teacher's data. The system of codes from the previous study was used as the basis for a quantitative content analysis of these interviews. This analysis made it possible to compare the experimental groups. No differences were found between the groups with respect to hypothesis changes, the use of criteria for evaluating credibility, or the rated importance of the codes. Implications for teaching and for the differentiation of data types are discussed.
157

Visual Analysis of High-Dimensional Point Clouds using Topological Abstraction

Oesterling, Patrick 17 May 2016 (has links) (PDF)
This thesis is about visualizing a kind of data that is trivial to process by computers but difficult to imagine by humans because nature does not allow for intuition with this type of information: high-dimensional data. Such data often result from representing observations of objects under various aspects or with different properties. In many applications, a typical, laborious task is to find related objects or to group those that are similar to each other. One classic solution for this task is to imagine the data as vectors in a Euclidean space with object variables as dimensions. Utilizing Euclidean distance as a measure of similarity, objects with similar properties and values form groups, so-called clusters, that are exposed by cluster analysis on the high-dimensional point cloud. Because similar vectors can be thought of as objects that are alike in terms of their attributes, the point cloud's structure and individual cluster properties, like their size or compactness, summarize data categories and their relative importance. The contribution of this thesis is a novel analysis approach for visual exploration of high-dimensional point clouds without suffering from structural occlusion. The work is based on implementing two key concepts: The first idea is to discard those geometric properties that cannot be preserved and, thus, lead to the typical artifacts. Topological concepts are used instead to shift the focus from a point-centered view on the data to a more structure-centered perspective. The advantage is that topology-driven clustering information can be extracted in the data's original domain and be preserved without loss in low dimensions. The second idea is to split the analysis into a topology-based global overview and a subsequent geometric local refinement. The occlusion-free overview enables the analyst to identify features and to link them to other visualizations that permit analysis of those properties not captured by the topological abstraction, e.g. cluster shape or value distributions in particular dimensions or subspaces. The advantage of separating structure from data point analysis is that restricting local analysis only to data subsets significantly reduces artifacts and the visual complexity of standard techniques. That is, the additional topological layer enables the analyst to identify structure that was hidden before and to focus on particular features by suppressing irrelevant points during local feature analysis. This thesis addresses the topology-based visual analysis of high-dimensional point clouds for both the time-invariant and the time-varying case. Time-invariant means that the points do not change in their number or positions, i.e. the analyst explores the clustering of a fixed and constant set of points. The extension to the time-varying case implies the analysis of a varying clustering, where clusters newly appear, merge or split, or vanish. Especially for high-dimensional data, both tracking, that is, relating features over time, and visualizing the changing structure are difficult problems to solve.
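A toy stand-in for the kind of topology-driven, structure-centered clustering described here: estimate a density for every point, let each point climb to its densest neighbor, and read off one cluster per density maximum. The neighborhood size, the synthetic data, and the density estimate below are assumptions chosen for illustration; the thesis's actual topological abstraction is not reproduced.

```python
# Simplified sketch: density-based mode seeking on a high-dimensional point cloud.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
points = np.vstack([rng.normal(0, 0.3, (100, 10)),      # two well-separated blobs in 10-D
                    rng.normal(3, 0.3, (100, 10))])

k = 10
tree = cKDTree(points)
dists, neighbors = tree.query(points, k=k + 1)           # the query includes each point itself
density = 1.0 / (dists[:, 1:].mean(axis=1) + 1e-12)      # inverse mean kNN distance

# Each point links to its densest neighbor (itself if it is a local density maximum).
parent = np.arange(len(points))
for i in range(len(points)):
    cand = neighbors[i]
    best = cand[np.argmax(density[cand])]
    if density[best] > density[i]:
        parent[i] = best

def root(i):
    while parent[i] != i:
        i = parent[i]
    return i

labels = np.array([root(i) for i in range(len(points))])
print("clusters found:", len(np.unique(labels)))          # ideally one per blob
```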
158

Monitoring Tools File Specification

Vogelsang, Stefan 22 March 2016 (has links) (PDF)
This paper describes the format of monitoring data files that are collected at external measuring sites and in laboratory experiments at the Institute for Building Climatology (IBK). The Monitoring Data Files are containers for storing time-series or event-driven data collected as input for transient heat and moisture transport simulations. Further applications are the documentation of real-world behaviour, laboratory experiments, and the collection of validation data sets for simulation results (whole building / energy consumption / HAM). The article also discusses the application interface to measurement data verification tools as well as data storage solutions that can be used to archive measurement data files conveniently and efficiently.
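The IBK file layout itself is not reproduced in this listing, so the sketch below only illustrates, with an invented CSV-like layout, the kind of check a verification tool for such time-series containers might perform: parsing timestamped samples and confirming that timestamps increase strictly and that values are numeric. Column names, the delimiter, and the checks are assumptions.

```python
# Hypothetical sketch of verifying a time-series container; it does not follow
# the actual IBK monitoring file specification.
import csv
import io
from datetime import datetime

SAMPLE = """timestamp;temperature_C
2016-03-01 00:00:00;20.4
2016-03-01 01:00:00;20.1
2016-03-01 02:00:00;19.8
"""

def verify(text):
    rows = list(csv.DictReader(io.StringIO(text), delimiter=";"))
    previous = None
    for row in rows:
        stamp = datetime.strptime(row["timestamp"], "%Y-%m-%d %H:%M:%S")
        float(row["temperature_C"])                 # value must be numeric
        if previous is not None and stamp <= previous:
            raise ValueError(f"timestamps not strictly increasing at {stamp}")
        previous = stamp
    return len(rows)

print(verify(SAMPLE), "samples verified")
```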
159

Towards Accurate and Efficient Cell Tracking During Fly Wing Development

Blasse, Corinna 05 December 2016 (has links) (PDF)
Understanding the development, organization, and function of tissues is a central goal in developmental biology. With modern time-lapse microscopy, it is now possible to image entire tissues during development and thereby localize subcellular proteins. A particularly productive area of research is the study of single-layer epithelial tissues, which can be simply described as a 2D manifold. For example, the apical band of cell adhesions in epithelial cell layers forms a 2D manifold within the tissue and provides a 2D outline of each cell. The Drosophila melanogaster wing has become an important model system, because its 2D cell organization has the potential to reveal mechanisms that create the final fly wing shape. Other examples include structures that naturally localize at the surface of the tissue, such as the ciliary components of planarians. Data from these time-lapse movies typically consist of mosaics of overlapping 3D stacks. This is necessary because the surface of interest exceeds the field of view of today's microscopes. To quantify cellular tissue dynamics, these mosaics need to be processed in three main steps: (a) extracting, correcting, and stitching individual stacks into a single, seamless 2D projection per time point, (b) obtaining cell characteristics at individual time points, and (c) determining cell dynamics over time. It is therefore necessary that the applied methods handle large amounts of data efficiently while still producing accurate results. This task is made especially difficult by the low signal-to-noise ratios that are typical in live-cell imaging. In this PhD thesis, I develop algorithms that cover all three processing tasks mentioned above and apply them in the analysis of polarity and tissue dynamics in large epithelial cell layers, namely the Drosophila wing and the planarian epithelium. First, I introduce an efficient pipeline that preprocesses raw image mosaics. This pipeline accurately extracts the stained surface of interest from each raw image stack and projects it onto a single 2D plane. It then corrects uneven illumination, aligns all mosaic planes, and adjusts brightness and contrast before finally stitching the processed images together. This preprocessing not only significantly reduces the data quantity but also simplifies downstream data analyses. Here, I apply this pipeline to datasets of the developing fly wing as well as a planarian epithelium. I additionally address the problem of determining cell polarities in chemically fixed samples of planarians. Here, I introduce a method that automatically estimates cell polarities by computing the orientation of rootlets in motile cilia. With this technique one can, for the first time, routinely measure and visualize how tissue polarities are established and maintained in entire planarian epithelia. Finally, I analyze cell migration patterns in the entire developing wing tissue in Drosophila. At each time point, cells are segmented using a progressive merging approach with merging criteria that take typical cell shape characteristics into account. The method enforces biologically relevant constraints to improve the quality of the resulting segmentations. For cases where full cell tracking is desired, I introduce a pipeline using a tracking-by-assignment approach. This allows me to link cells over time while considering critical events such as cell divisions or cell death.
This work presents a highly accurate large-scale cell tracking pipeline and opens up many avenues for further study, including several in-vivo perturbation experiments as well as biophysical modeling. The methods introduced in this thesis are examples of computational pipelines that catalyze biological insights by enabling the quantification of tissue-scale phenomena and dynamics. I provide not only detailed descriptions of the methods but also show how they perform on concrete biological research projects.
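A minimal sketch of the tracking-by-assignment idea: centroids of segmented cells in consecutive frames are linked by a minimum-cost assignment on pairwise distances, with a distance gate to reject implausible links. The real pipeline additionally models divisions, deaths, and cells entering or leaving the field of view; the coordinates and the threshold below are invented for illustration.

```python
# Simplified sketch of tracking-by-assignment between two consecutive frames.
import numpy as np
from scipy.optimize import linear_sum_assignment

cells_t0 = np.array([[10.0, 12.0], [40.0, 41.0], [70.0, 15.0]])   # centroids at t
cells_t1 = np.array([[11.5, 13.0], [39.0, 43.5], [71.0, 17.0]])   # centroids at t+1

# Pairwise Euclidean distances form the assignment cost matrix.
cost = np.linalg.norm(cells_t0[:, None, :] - cells_t1[None, :, :], axis=-1)

rows, cols = linear_sum_assignment(cost)
max_link_distance = 5.0                        # reject implausibly long links
links = [(int(i), int(j)) for i, j in zip(rows, cols) if cost[i, j] <= max_link_distance]
print("links between t and t+1:", links)       # [(0, 0), (1, 1), (2, 2)]
```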
160

Konzeption und Entwicklung einer E-Learning-Lektion zur Arbeit mit der Kartenherstellungssoftware OCAD

Goerlich, Franz 15 May 2012 (has links) (PDF)
This bachelor thesis creates a sample e-learning lesson on importing GPS data into the map production software OCAD. The theoretical part focuses primarily on user-generated data (Volunteered Geographic Information, VGI). After a short general introduction, the significance of cartography in the context of VGI is discussed. The second part covers didactics with a focus on e-learning: the Goal Based Scenario model and the Cognitive Apprenticeship model are briefly introduced, followed by a closer look at the GITTA project and the ECLASS structure it contains. The third theoretical part is devoted to content management systems (CMS), which are becoming increasingly important; the open-source CMS Joomla!, used to realize the lesson, is explained in more detail. The implementation part describes how the e-learning lesson was built with Joomla! using the ECLASS model. Before the procedure itself is described, the implementation part presents a rough overall concept, with corresponding explanations, for a complete e-learning application for OCAD. A summary and an outlook on the continuation of the e-learning application conclude the thesis.
