  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
141

Globale Collaboration im Kontext mit PLM

Muschiol, Michael, Schulte, Stefan 25 September 2017
From the introduction: "Driven by the demand for a local presence of globally operating companies towards their worldwide customers, as well as by globally distributed engineering, production and service, companies are increasingly required to position themselves globally. The relocation of engineering tasks to external suppliers and partners likewise calls for organisational and process-related measures, which must be flanked by corresponding IT support. So-called PLM system environments, which build on PDM systems, can be used to provide this support."
142

Big Data in Bicycle Traffic: A user-oriented guide to the use of smartphone-generated bicycle traffic data

Francke, Angela, Lißner, Sven January 2017
For cycling to be attractive, the infrastructure must be of high quality. Because recording cycling traffic locally requires considerable resources, the available data on cycling volumes have to date been patchy. At the moment, the most reliable and usable numbers seem to be those derived from permanently installed automatic cycling traffic counters, already used by many local authorities. One disadvantage of these is that the number of data collection points is generally far too low to cover the entirety of a city or other municipality in a way that achieves truly meaningful results. The effect of side roads on cycling traffic is therefore only incompletely assessed. Furthermore, there is usually no data at all on other parameters, such as waiting times, route choices and cyclists' speed. This gap might in future be filled by methods such as GPS route data, made possible by today's widespread use of smartphones and the relevant tracking apps. The project presented in this guide was supported by the BMVI [Federal Ministry of Transport and Digital Infrastructure] within the framework of its 2020 National Cycling Plan. The research project investigates the usability of user data generated by a smartphone app for bicycle traffic planning by local authorities. In summary, it can be stated that, taking into account the factors described in this guide, GPS data are usable for bicycle traffic planning within certain limitations. (The GPS data evaluated in this case were provided by Strava Inc.) It is already possible today to assess where, when and how cyclists move around across the entire network. The data generated by the smartphone app could be most useful to local authorities as a supplement to existing permanent traffic counters. However, a few aspects need to be considered when evaluating and interpreting the data, such as the rather fitness-oriented context of the routes surveyed in the examples examined.
Moreover, some of the data are still provided as database or GIS files, although online templates that are easier to use are being set up, and some can already be used in a basic initial form. This means that evaluation and interpretation still require specialist expertise as well as human resources. However, the need for these is expected to decrease with the further development of web interfaces and supporting evaluation templates. For this to work, developers need to collaborate with local authorities to work out which parameters are needed as well as the most suitable formats. The research project developed an approach for extrapolating cycling traffic volumes across the whole network from random samples of GPS data; the approach was also successfully verified in another municipality. Nevertheless, further research is required, as well as adaptation to the needs of different localities, and evidence for the usability of GPS data in practice still needs to be acquired in the near future. The cities of Dresden, Leipzig and Mainz could serve as examples here, as they have all already taken their first steps in using GPS data to plan for and support cycling. Such steps make sense in the light of the increasing digitisation of traffic and transport and the growing amount of data available as a result – despite the limitations on these data to date – so that administrative bodies can start early in building up the appropriate skills among their staff. In the long run, the use of GPS data would benefit bicycle traffic planning. In addition, the active involvement of cyclists opens up new possibilities in communication and citizen participation – even without requiring specialist knowledge. This guide delivers a practical introduction to the topic, giving a comprehensive overview of the opportunities, obstacles and potential offered by GPS data.
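The extrapolation idea the guide describes can be sketched roughly as follows. This is a hedged illustration only: the scaling-factor method, station names and all counts are invented assumptions, not the project's actual procedure.

```python
# Illustrative sketch: extrapolate network-wide cycling volumes from a GPS
# sample, assuming permanent counter stations provide ground-truth volumes.
# All station names and numbers are invented for illustration.

def extrapolation_factor(counter_volumes, gps_trip_counts):
    """Mean ratio of counted volume to GPS sample size at counter sites."""
    ratios = [counter_volumes[s] / gps_trip_counts[s]
              for s in counter_volumes if gps_trip_counts.get(s)]
    return sum(ratios) / len(ratios)

counters = {"station_a": 1200, "station_b": 800}            # daily counts (ground truth)
gps = {"station_a": 60, "station_b": 50, "side_road_x": 5}  # GPS trips observed

factor = extrapolation_factor(counters, gps)
# Estimate the volume on a link that has no counter:
estimate = gps["side_road_x"] * factor
```

In practice the ratio between app users and all cyclists varies by location and user group, which is exactly why the guide stresses verification against permanent counters.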
143

Automotive Diagnosis Data Aggregation, Management and Evaluation in Cloud based Environment

Zamani, Farshad 11 September 2018
Automotive diagnosis data are useful to automotive OEMs and third parties for analyzing vehicle performance and driving behavior. These data can be accessed and read in real time via the On-Board Diagnostics (OBD) system, which is located inside the vehicle and accessible through its socket. However, because the diagnosis data are produced in real time and not stored, analyzing them is not easy. Therefore, in this project, a program is developed on a Raspberry Pi that reads the diagnosis data and stores them in a cloud database. The cloud database handles data storage and keeps the data for further analysis and evaluation; being cloud-based, it is accessible from anywhere. In addition, to provide easy and meaningful access to the data, a web application is developed that visualizes the data by means of graphs, text, and maps.
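The pipeline described above can be sketched in miniature. Both the OBD reader and the cloud client below are stubs invented for illustration; on an actual Raspberry Pi one would use an OBD library and the cloud provider's SDK instead.

```python
# Minimal sketch of the described pipeline: periodically read OBD-II values
# and push them to a cloud database. The reader and cloud client are stubbed
# out so the sketch runs without hardware; they stand in for real components.
import json
import time

def read_obd_snapshot():
    # Stub standing in for a real OBD-II query (speed, RPM, etc.).
    return {"timestamp": time.time(), "speed_kmh": 52, "rpm": 1800}

class CloudStore:
    """Stand-in for a cloud database client; collects JSON documents."""
    def __init__(self):
        self.documents = []

    def insert(self, record):
        self.documents.append(json.dumps(record))

store = CloudStore()
for _ in range(3):          # in practice: an endless polling loop on the Pi
    store.insert(read_obd_snapshot())
```

Storing each reading as a timestamped JSON document keeps the ingestion side simple and leaves aggregation and visualization to the web application.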
144

Conceptual Factors and Fuzzy Data

Glodeanu, Cynthia Vera 20 December 2012
With the growing number of large data sets, the necessity of complexity reduction applies today more than ever before. Moreover, some data may also be vague or uncertain. Thus, whenever we have an instrument for data analysis, the questions of how to apply complexity reduction methods and how to treat fuzzy data arise rather naturally. In this thesis, we discuss these issues for the very successful data analysis tool Formal Concept Analysis. In fact, we propose different methods for complexity reduction based on qualitative analyses, and we elaborate on various methods for handling fuzzy data. These two topics split the thesis into two parts. Data reduction is mainly dealt with in the first part of the thesis, whereas we focus on fuzzy data in the second part. Although each chapter may be read almost on its own, each one builds on and uses results from its predecessors. The main crosslink between the chapters is given by the reduction methods and fuzzy data. In particular, we will also discuss complexity reduction methods for fuzzy data, combining the two issues that motivate this thesis.
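The core machinery of Formal Concept Analysis can be illustrated on a toy formal context. The context below is invented for illustration; the two derivation operators and the closure test are the standard textbook definitions, not material from the thesis itself.

```python
# Toy illustration of the two derivation operators of Formal Concept
# Analysis. The formal context (objects and attributes) is invented.

context = {               # object -> set of attributes it has
    "duck": {"flies", "swims"},
    "swan": {"flies", "swims"},
    "dog":  {"runs"},
}

def common_attributes(objects):
    """A' : all attributes shared by every object in the set."""
    sets = [context[o] for o in objects]
    return set.intersection(*sets) if sets else set()

def objects_having(attributes):
    """B' : all objects that have every attribute in the set."""
    return {o for o, attrs in context.items() if attributes <= attrs}

# (A, B) is a formal concept exactly when A' = B and B' = A:
A = {"duck", "swan"}
B = common_attributes(A)
is_concept = objects_having(B) == A
```

Fuzzy extensions replace the crisp incidence (an object has an attribute or not) with degrees of membership, which is where the thesis's second part begins.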
145

Untersuchungen in ternären chalkogenhaltigen Systemen Ag-Ga-Te und Sn-Sb-Se

Shen, Jun 19 February 2003
Owing to their semiconducting properties, chalcogen systems are gaining ever more importance and are the target of technical applications. Elucidating the phase equilibria of binary and ternary chalcogen systems is of particular interest for this branch of research. The determination of thermodynamic data and the experimental investigation of phase diagrams make an important contribution to elucidating these phase equilibria. Reactivity studies of intermetallic and chalcogen-containing compounds yield new insights into the reaction behavior of compounds that are otherwise difficult to access. Mechanical synthesis can be applied to achieve fast, phase-pure preparation. Another aspect of the synthesis work lies in the field of structural analysis of chalcogenidometalates: in hydrothermal synthesis, various organic templates are employed as reaction partners in order to discover new structures. Within this work, differential thermal analysis, X-ray diffraction, microanalytical methods, and microscopic microstructure investigations were carried out on the chalcogen-containing ternary systems silver-gallium-tellurium and tin-antimony-selenium.
146

Context Similarity for Retrieval-Based Imputation

Ahmadov, Ahmad, Thiele, Maik, Lehner, Wolfgang, Wrembel, Robert 30 June 2022
Completeness, as one of the four major dimensions of data quality, is a pervasive issue in modern databases. Although data imputation has been studied extensively in the literature, most of the research focuses on inference-based approaches. We propose to harness Web tables as an external data source to retrieve missing data effectively and efficiently while taking into account the inherent uncertainty and lack of veracity they contain. Existing approaches mostly rely on standard retrieval techniques and out-of-the-box matching methods, which result in very low precision, especially when dealing with numerical data. We therefore propose a novel data imputation approach that applies numerical context similarity measures, yielding a significant increase in the precision of the imputation procedure by ensuring that the imputed values are of the same domain and magnitude as the local values, thus resulting in an accurate imputation. We use the Dresden Web Table Corpus, which comprises more than 125 million web tables extracted from the Common Crawl, as our knowledge source. Comprehensive experimental results demonstrate that the proposed method clearly outperforms the default out-of-the-box retrieval approach.
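The intuition behind numerical context similarity can be sketched as follows. The scoring function and all values below are invented assumptions for illustration; the paper's actual similarity measures are more elaborate.

```python
# Hedged sketch of the idea: prefer a candidate value from external tables
# whose order of magnitude matches the local column, rather than the first
# retrieval hit. The scoring function and data are invented for illustration.
import math
import statistics

def magnitude_score(candidate, local_values):
    """Closeness of the candidate's order of magnitude to the local column's."""
    local_mag = statistics.mean(math.log10(abs(v)) for v in local_values)
    return -abs(math.log10(abs(candidate)) - local_mag)

# Local (incomplete) population column and candidates retrieved from the Web:
local_population_column = [83_000_000, 67_000_000, 10_000_000]
candidates = [357_000, 83_200_000, 83.2]   # e.g. area, population, "millions"

best = max(candidates, key=lambda c: magnitude_score(c, local_population_column))
```

Filtering by magnitude weeds out retrieval hits that match lexically but live on a completely different scale, which is the dominant failure mode for numerical attributes.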
147

A Stochastic Programming Method for OD Estimation Using LBSN Check-in Data

Lu, Qing-Long, Qurashi, Moeid, Antoniou, Constantinos 23 June 2023
Dynamic OD estimators based on traffic measurements inevitably encounter an indeterminateness problem in the posterior OD flows, as such systems structurally have more unknowns than constraints. To resolve this problem and take advantage of emerging urban mobility data, the paper proposes a dynamic OD estimator based on location-based social networking (LBSN) data, leveraging the two-stage stochastic programming framework, under the assumption that similar check-in patterns are generated by the same OD pattern. The search space of the OD flows is limited by integrating a batch of realizations/scenarios of the second-stage problem state (i.e. the check-in pattern) into the model. The two-stage stochastic programming model decomposes into a master problem and a set of subproblems (one per scenario) via the Benders decomposition algorithm, which are solved alternately. Preliminary results from experiments conducted with Foursquare data from Tokyo, Japan, show that the proposed OD estimator can effectively reproduce the check-in patterns and results in a good posterior OD estimate.
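The two-stage idea can be shown in a deliberately tiny form: choose a first-stage decision (an OD flow) that minimizes the expected mismatch against a batch of second-stage scenarios (check-in patterns). This one-dimensional toy is an invented illustration, not the paper's Benders-decomposed model.

```python
# Toy sketch of two-stage stochastic thinking: the first-stage OD flow is
# chosen to minimize the average squared mismatch over a batch of check-in
# scenarios. All numbers are invented; the real model works on OD matrices.

scenarios = [95, 102, 108]          # check-in totals observed in each scenario

def expected_cost(od_flow):
    """Second-stage cost averaged over the scenario batch."""
    return sum((od_flow - s) ** 2 for s in scenarios) / len(scenarios)

candidates = range(80, 121)         # brute-force search stands in for Benders
best_flow = min(candidates, key=expected_cost)
```

In the paper the brute-force search is replaced by Benders decomposition, which iterates between a master problem proposing OD flows and per-scenario subproblems returning cuts.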
148

Measuring exposure for cyclists and micro-mobility users

Fyhri, Aslak, Pokorny, Petr, Ellis, Ingunn Opheim, Weber, Christian 03 January 2023
Data about bicycle usage are an important input for several purposes. They are used to describe changes towards more sustainable transport and, in part, to say something about changes towards more active transport as opposed to passive modes of transport. Importantly, such data are used as the denominator when calculating crash risk for cyclists. In Norway, as in most countries, these data are captured in several ways today: partly by using data from the national travel behaviour survey, partly using figures from stationary or mobile bicycle counters, and partly using other methods such as manual counts. Technological development has provided several new opportunities to register such travel, in the form of more advanced stationary counters, advanced algorithms that interpret signal data, video recording solutions and app-based measurement systems. At the same time, developments in the transport sector also create new challenges. In just a few years, electric scooters have radically changed the traffic picture in Norwegian cities and towns. There is therefore a need for more knowledge about the different ways to measure bicycle and micro-mobility use, their strengths and weaknesses, and what strategies the authorities should adopt to be equipped for future changes in the transport field, as exemplified by the recent influx of e-scooters. The current paper aims to respond to these challenges by answering the following research questions:
• What are the relative strengths and weaknesses of different data sources for measuring cycling and micromobility use?
• How well do the different sources function to capture micromobility and to differentiate between traditional cycling and micromobility?
• How can the different data sources be used as input for calculating crash risk for various forms of soft mobility (i.e. cycling and micromobility)?
149

Anwendung computertomographischer Daten in Werkzeugen der Produktentwicklung

Hofmann, Dirk 27 June 2022
This thesis presents a process for the direct use of computed tomography (CT) data in product development. The basis is formed by the slice-image sequences generated after acquisition and mathematical reconstruction. The process consists of two independent environments, a CT environment and a CAD environment, interactively connected through a third building block for transfer and interpretation. The CT environment serves to initialize, visualize and manage the CT data. The CAD environment, as an established tool in product development, forms the systematic basis for modelling and validating the analytical three-dimensional model data. Via a bidirectional communication and interaction layer, it is possible, starting from the CAD system, to generate information from the CT data in a targeted, variable and user-specific way for mechanical design modelling processes.
Contents: List of abbreviations; List of symbols; 1 Introduction (1.1 Motivation; 1.2 Problem statement; 1.3 Objectives; 1.4 Structure of the thesis); 2 State of the art (2.1 CT data and their processing: general aspects, data structure and imaging properties, forms of visualization, forms of representation and data formats, segmentation methods, determination of characteristic object features; 2.2 Design methods and tools: overview, modelling based on discrete surfaces, surface reconstruction, direct and parametric modelling, modelling with image data; 2.3 Process-related and technical analysis); 3 Computed tomography data in the CAD environment (3.1 The concept: framework conditions, requirements on the process chain, object information from CT data, prerequisites in CAD systems; 3.2 Design of the overall process; 3.3 Data preparation: input information and forms of visualization, alignment and registration, limiting the region of interest; 3.4 The principle of data transfer and interpretation; 3.5 The communication and interaction layer: description of methods, supplementary section views, creating a free-form section, contour and geometry derivation, partially surface-based model creation); 4 Application and exemplary use (4.1 Technical realization; 4.2 Description of the system environment; 4.3 New design of an individual skull implant: anatomical fundamentals, problem statement and analysis, modelling; 4.4 Adaptation design of the gear train of a historical pocket watch: technical fundamentals, problem statement and analysis, modelling; 4.5 Evaluation); 5 Summary and outlook (5.1 Summary; 5.2 Outlook); Bibliography; List of figures; List of tables; Appendices
150

Leveraging Flexible Data Management with Graph Databases

Vasilyeva, Elena, Thiele, Maik, Bornhövd, Christof, Lehner, Wolfgang 01 September 2022
Integrating up-to-date information into databases from different heterogeneous data sources is still a time-consuming and mostly manual job that can only be accomplished by skilled experts. For this reason, enterprises often lack information regarding the current market situation, preventing the holistic view that is needed to conduct sound data analysis and market predictions. Ironically, the Web contains a huge and growing amount of valuable information from diverse organizations and data providers, such as the Linked Open Data cloud, common knowledge sources like Freebase, and social networks. One desirable usage scenario for this kind of data is its integration into a single database in order to apply data analytics. However, today's business intelligence tools show an evident lack of support for so-called situational or ad-hoc data integration. What we need is a system which 1) provides flexible storage of heterogeneous information of different degrees of structure in an ad-hoc manner, and 2) supports mass data operations suited for data analytics. In this paper, we provide our vision of such a system and describe an extension of the well-studied property graph model that allows one to 'integrate and analyze as you go' external data exposed in the RDF format in a seamless manner. The proposed integration approach extends the internal graph model with external data from the Linked Open Data cloud, which stores over 31 billion RDF triples (September 2011) from a variety of domains.
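The "integrate and analyze as you go" idea can be sketched with a minimal property graph that absorbs external RDF-style triples as edges. The data model, node names and triples below are invented for illustration and are not the paper's actual system.

```python
# Minimal sketch: fold external RDF-style triples into an internal property
# graph so they become queryable alongside internal data. All names invented.

graph = {"nodes": {}, "edges": []}

def add_node(node_id, **properties):
    graph["nodes"].setdefault(node_id, {}).update(properties)

def add_rdf_triple(subject, predicate, obj):
    """Integrate an external (subject, predicate, object) triple as an edge."""
    add_node(subject)          # create endpoints on the fly if unknown
    add_node(obj)
    graph["edges"].append((subject, predicate, obj))

add_node("acme", kind="company")                      # internal data
add_rdf_triple("acme", "headquarteredIn", "Berlin")   # external RDF fact

# Analytics over the combined graph:
berlin_companies = [s for s, p, o in graph["edges"]
                    if p == "headquarteredIn" and o == "Berlin"]
```

Creating endpoint nodes lazily is what makes the integration ad hoc: external entities need no upfront schema before they can participate in queries.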
