681 |
Komplementäre Datenbasiserzeugung für das maschinelle Lernen zur Qualitätsprognose beim Kaltringwalzen
Wang, Qinwen, Seitz, Johannes, Lafarge, Rémi, Kuhlenkötter, Bernd, Brosius, Alexander 28 November 2023 (has links)
Reducing scrap and unnecessary rework is a fundamental goal of the manufacturing industry. With the increasing availability of data and the developments in the field of artificial intelligence (AI) for industrial applications, the use of machine learning (ML) is also being explored in the area of radial-axial ring rolling (RARR). Applications here include, for example, process design and the prediction of ring quality [1]. However, the accuracy of these predictions is currently still limited by the quantity and quality of the data [2]. In order to apply supervised learning for the prediction of ring quality, an extensive database of good and scrap parts must be generated. One way to extend existing databases is to use process simulations to generate synthetic data. In the field of hot ring rolling, however, there is currently no fast simulation method with which a sufficiently large synthetic database of rolled parts with form or process errors can be generated. Research on transfer learning between different rolling mills and datasets has produced the novel idea of using cold ring rolling as the object of study [2]. In the following, it is investigated to what extent cold ring rolling, as a similar process, can be used for the future transfer of models and results to RARR. Compared to RARR, forming in cold ring rolling is achieved by only two radial rolls, and the process is carried out at room temperature. This simplified procedure makes it possible to develop a semi-analytical model that requires much less computation time than conventional FEM approaches at acceptable accuracy. In addition, the smaller ring size and the simpler rolling process allow extensive experimental rolling trials to be carried out to verify the quality of the synthetic data.
|
682 |
Complementary database generation for machine learning in quality prediction of cold ring rolling
Wang, Qinwen, Seitz, Johannes, Lafarge, Rémi, Kuhlenkötter, Bernd, Brosius, Alexander 28 November 2023 (has links)
Reducing scrap products and unnecessary rework has always been a goal of the manufacturing industry. With the increasing availability of data and the developments in the field of artificial intelligence (AI) for industrial applications, machine learning (ML) has been applied to radial-axial ring rolling (RARR) to predict product quality [1]. However, the accuracy of these predictions is currently still limited by the quantity and quality of the data [2]. In order to apply supervised learning to predict part quality and possible scrap parts, a large number of datasets must be logged for both good and scrap parts. One suitable way to increase the number of datasets is to utilize simulation strategies to generate synthetic datasets. However, in the hot ring rolling field there is no fast simulation method that can be used to generate a sufficiently large synthetic database of rolled parts with form or process errors. Research on transfer learning between different mills and datasets has offered a new idea of taking a cold ring rolling process as the object of study [2]. The following work investigates the extent to which cold ring rolling can be used, as a similar process, for the future transfer of models and results to radial-axial ring rolling. Compared to RARR, cold ring rolling is carried out at room temperature and involves purely radial forming instead of simultaneous forming in the radial and axial directions. The simpler forming mechanism makes it possible to build a semi-analytical model that requires much less computation time than conventional FEM approaches at acceptable accuracy. Furthermore, the smaller ring geometry, the simplified rolling process and the reduced energy consumption mean that in-house experiments can be conducted to verify the quality of the synthetic data based on confidence intervals.
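To make the intended workflow concrete, the following is a minimal sketch of training a supervised quality classifier on a synthetic database and checking it against a small set of experimentally rolled rings. The feature set, the randomly generated placeholder data and the random-forest model are assumptions for illustration only, not the setup used in the paper.

```python
# Illustrative sketch only: feature names, placeholder data and model choice
# are assumptions, not the authors' actual setup.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_rings(n):
    # Hypothetical process features per rolled ring (e.g. feed rate, roll
    # force, initial wall thickness, target diameter) as placeholders.
    X = rng.normal(size=(n, 4))
    # A ring is labelled scrap (1) when an imagined combination of the first
    # two features drifts too far; purely a stand-in for real quality labels.
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.3, size=n) > 0.8).astype(int)
    return X, y

# Large synthetic database from a fast semi-analytical model, plus a smaller
# set of experimentally rolled rings used only for validation.
X_syn, y_syn = make_rings(5000)
X_exp, y_exp = make_rings(200)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_syn, y_syn)  # train on the synthetic database
print("accuracy on experimental rings:",
      accuracy_score(y_exp, clf.predict(X_exp)))
```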
|
683 |
Keeping Track of Network Flows: An Inexpensive and Flexible Solution
Fedyukin, Alexander V. January 2005 (has links)
No description available.
|
684 |
Characteristics of a real-time digital terrain database Integrity Monitor for a Synthetic Vision System
Campbell, Jacob January 2001 (has links)
No description available.
|
685 |
Analysing, Designing, and Evaluating Database Schema Designs in Azure Data Explorer / Analys, design och utvärdering av databasscheman i Azure Data Explorer
Petersson, Linn, Ferlin, Angelica January 2024 (has links)
Today, data warehouses are used to store large amounts of data. This thesis investigates the impact of various database schema designs on query execution time within the cloud platform Azure Data Explorer. As Azure Data Explorer is a relatively new platform, limited research exists on designing database schemas for it. Furthermore, the design of the database schema has a direct impact on query execution times and should align with the use case of the data warehouse. This thesis conducts a requirements analysis, determines the use case, and designs three database schemas. The three database schemas are implemented and evaluated through a performance test. Schema 1 is designed to utilize results tables from stored functions, while schema 2 utilizes sub-functions divided by different departments or products to minimize the data accessed per query. Finally, schema 3 uses the results tables from the sub-functions found in schema 2. The results of the performance tests show that schema 3 achieves the best overall improvement in query execution time compared to the other designs and the original design. The findings emphasize the critical role of database schema design in query performance. Additionally, it is concluded that combining more than one approach to enhancing query performance increases the achievable performance.
|
686 |
Object oriented database management systems
Nassis, Antonios 11 1900 (has links)
Modern data-intensive applications, such as multimedia systems, require the ability to store and manipulate complex data. Classical Database Management Systems (DBMS), such as relational databases, cannot support these types of applications efficiently. This dissertation presents the salient features of Object Database Management Systems (ODBMS) and Persistent Programming Languages (PPL), which have been developed to address the data management needs of these demanding applications. An 'impedance mismatch' problem occurs in the traditional DBMS because the data and computational aspects of the application are implemented using two different systems: the query language and the programming language. PPLs provide facilities to cater for both persistent and transient data within the same language, hence avoiding the impedance mismatch problem. This dissertation presents a method of implementing a PPL by extending the language C++ with pre-compiled classes. The classes are first developed and then used to implement object persistence in two simple applications. / Computing / M. Sc. (Information Systems)
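As a rough illustration of the persistence idea only (the dissertation itself extends C++ with pre-compiled classes; Python's shelve module is used here merely to sketch the concept), the following shows persistent and transient objects handled within a single language, with no separate query language involved:

```python
# Conceptual sketch, not the dissertation's implementation: persistent and
# transient objects live in the same language, avoiding the impedance
# mismatch between a query language and a programming language.
import shelve

class MediaClip:
    """An arbitrary application object (a 'complex' multimedia item)."""
    def __init__(self, title, duration_s, tags):
        self.title = title
        self.duration_s = duration_s
        self.tags = tags

clip = MediaClip("intro", 12.5, ["opening", "draft"])  # transient object

# Persisting the object requires no translation into a separate data model:
# the same object is stored and later loaded directly.
with shelve.open("media_store") as store:
    store[clip.title] = clip

with shelve.open("media_store") as store:
    restored = store["intro"]
    print(restored.duration_s, restored.tags)
```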
|
687 |
Extracting a Relational Database Schema from a Document Database
Wheeler, Jared Thomas 01 January 2017 (has links)
As NoSQL databases become increasingly used, more methodologies emerge for migrating from relational databases to NoSQL databases. Meanwhile, there is a lack of methodologies that assist in migration in the opposite direction, from NoSQL to relational. As software is iterated upon, use cases may change. A system which was originally developed with a NoSQL database may accrue needs that require Atomicity, Consistency, Isolation, and Durability (ACID) features that NoSQL systems lack, such as consistency across nodes or consistency across re-used domain objects. Shifting requirements could result in the system being changed to use a relational database. While there are some tools available to transfer data between an existing document database and an existing relational database, there has been no work on automatically generating the relational database schema based upon the data already in the NoSQL system. Not taking the existing data into account can lead to inconsistencies during data migration. This thesis describes a methodology to automatically generate a relational database schema from the implicit schema of a document database. This thesis also includes details of how the methodology is implemented and what could be enhanced in future work.
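A minimal sketch of the general idea, not the thesis's actual methodology: derive a relational schema from the implicit schema of a small set of JSON-like documents by collecting field names and types and splitting embedded arrays of objects into child tables. The collection and field names below are invented for illustration.

```python
# Rough sketch under assumed data: infer table definitions from documents.
from collections import defaultdict

docs = [
    {"_id": 1, "name": "Ada", "email": "ada@example.com",
     "orders": [{"sku": "A1", "qty": 2}, {"sku": "B7", "qty": 1}]},
    {"_id": 2, "name": "Bob", "orders": []},
]

def infer_schema(collection, docs):
    tables = defaultdict(dict)  # table name -> {column: SQL type}
    sql_type = {int: "INTEGER", float: "REAL", str: "TEXT", bool: "BOOLEAN"}
    for doc in docs:
        for field, value in doc.items():
            if isinstance(value, list) and value and isinstance(value[0], dict):
                # Embedded array of objects -> child table with a foreign key.
                child = f"{collection}_{field}"
                tables[child][f"{collection}_id"] = "INTEGER REFERENCES " + collection
                for item in value:
                    for k, v in item.items():
                        tables[child][k] = sql_type.get(type(v), "TEXT")
            elif not isinstance(value, (list, dict)):
                # Scalar field -> column of the parent table.
                tables[collection][field] = sql_type.get(type(value), "TEXT")
    return dict(tables)

for table, cols in infer_schema("customer", docs).items():
    print(table, cols)
```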
|
688 |
Modelagem e implementação de banco de dados clínicos e moleculares de pacientes com câncer e seu uso para identificação de marcadores em câncer de pâncreas / Database design and implementation of clinical and molecular data of cancer patients and its application for biomarker discovery in pancreatic cancer
Bertoldi, Ester Risério Matos 20 October 2017 (has links)
Pancreatic ductal adenocarcinoma (PDAC) is a type of cancer that is difficult to diagnose early, and its treatment has not improved substantially over the last decade. Next-generation sequencing (NGS) technologies may contribute to the discovery of new diagnostic biomarkers for PDAC and to the development of personalised therapies. Databases are powerful tools for the integration, normalization and storage of large data volumes. The main objective of this study was the design and implementation of a relational database (CaRDIGAn - Cancer Relational Database for Integration and Genomic Analysis) that integrates publicly available data from NGS experiments on samples of different histopathological PDAC types with data generated by our group at IQ-USP, facilitating comparison between them. The functionality of CaRDIGAn was demonstrated by retrieving clinical and gene expression data of patients from lists of candidate genes, either associated with mutations in the KRAS oncogene or differentially expressed in tumours identified in RNAseq data generated by our group. The retrieved data were used for survival curve analysis, which led to the identification of 11 genes with prognostic potential in pancreatic cancer, illustrating the potential of the tool to facilitate the analysis, organization and prioritization of new candidate biomarkers for the molecular diagnosis of PDAC.
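The kind of integration described could be sketched, in a deliberately simplified form, as a small relational schema joining clinical follow-up with expression values for a candidate-gene list. The table and column names below are assumptions for illustration and do not reflect the actual CaRDIGAn schema.

```python
# Minimal sketch with an invented schema and toy data, not the real database.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE patient   (patient_id INTEGER PRIMARY KEY,
                        vital_status TEXT, survival_days INTEGER);
CREATE TABLE expression(patient_id INTEGER REFERENCES patient,
                        gene TEXT, tpm REAL);
""")
conn.executemany("INSERT INTO patient VALUES (?,?,?)",
                 [(1, "deceased", 310), (2, "alive", 820)])
conn.executemany("INSERT INTO expression VALUES (?,?,?)",
                 [(1, "KRAS", 85.2), (1, "TP53", 12.4),
                  (2, "KRAS", 40.1), (2, "TP53", 30.9)])

# Retrieve clinical follow-up plus expression for a candidate-gene list,
# i.e. the data needed to fit survival curves downstream.
candidates = ["KRAS", "TP53"]
placeholders = ",".join("?" * len(candidates))
rows = conn.execute(f"""
    SELECT p.patient_id, p.vital_status, p.survival_days, e.gene, e.tpm
    FROM patient p JOIN expression e USING (patient_id)
    WHERE e.gene IN ({placeholders})
    ORDER BY p.patient_id, e.gene
""", candidates).fetchall()
for row in rows:
    print(row)
```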
|
689 |
On fast and space-efficient database normalization: a dissertation presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Information Systems at Massey University, Palmerston North, New Zealand
Koehler, Henning January 2007 (has links)
A common approach in designing relational databases is to start with a relation schema, which is then decomposed into multiple subschemas. A good choice of subschemas can often be determined using integrity constraints defined on the schema. Two central questions arise in this context. The first is what decompositions should be called "good", i.e., what normal form should be used. The second is how to find a decomposition into the desired form. These questions have been the subject of intensive research since relational databases came to life. A large number of normal forms have been proposed, and methods for their computation given. However, some of the most popular proposals still have problems:
- algorithms for finding decompositions are inefficient
- dependency-preserving decompositions do not always exist
- decompositions need not be optimal w.r.t. redundancy/space/update anomalies
We address these issues in this work by:
- designing efficient algorithms for finding dependency-preserving decompositions
- proposing a new normal form which minimizes overall storage space.
This new normal form is then characterized syntactically and shown to extend existing normal forms.
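The standard building block behind such decomposition algorithms can be sketched as follows: the closure of an attribute set under a set of functional dependencies, which is used both to test candidate keys and, after projecting dependencies onto subschemas, to check dependency preservation. The schema and dependencies below are toy examples, not taken from the dissertation.

```python
# Sketch with toy data: attribute-set closure under functional dependencies.
def closure(attrs, fds):
    """Closure of attrs under fds, where fds is a list of (lhs, rhs) frozensets."""
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= result and not rhs <= result:
                result |= rhs
                changed = True
    return result

R = {"A", "B", "C", "D"}
fds = [(frozenset("A"), frozenset("B")),
       (frozenset("B"), frozenset("C")),
       (frozenset({"C", "D"}), frozenset("A"))]

print(closure({"A", "D"}, fds))  # {'A','B','C','D'} -> AD is a key of R

# A dependency X -> Y is trivially preserved by a decomposition if some
# subschema contains both X and Y; the full test projects the dependencies
# onto each subschema first.
decomposition = [{"A", "B"}, {"B", "C"}, {"A", "C", "D"}]
for lhs, rhs in fds:
    kept = any(lhs | rhs <= set(s) for s in decomposition)
    print(sorted(lhs), "->", sorted(rhs), "contained in a subschema:", kept)
```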
|