61

A study of transfer learning on data-driven motion synthesis frameworks / En studie av kunskapsöverföring på datadriven rörelse syntetiseringsramverk

Chen, Nuo January 2022 (has links)
Various studies have shown the potential and robustness of deep learning-based approaches for synthesising novel motions of 3D characters in virtual environments, such as video games and films. The models are trained with motion data that is bound to the respective character skeleton (rig). This imposes a limitation on the scalability and applicability of the models, since they can only learn motions from one particular rig (domain) and produce motions in that domain only. Transfer learning techniques can be used to overcome this issue and allow the models to better adapt to other domains with limited data. This work presents a study of three transfer learning techniques for the proposed Objective-driven motion generation model (OMG), a model for procedurally generating animations conditioned on positional and rotational objectives. Three transfer learning approaches for achieving rig-agnostic encoding (RAE) are proposed and experimented with, to improve the learning of the model on new domains with limited data: Feature encoding (FE), Feature clustering (FC), and Feature selection (FS). All three approaches demonstrate significant improvement in both the performance and the visual quality of the generated animations compared with the vanilla performance. The empirical results indicate that the FE and FC approaches yield better transfer quality than the FS approach. It is inconclusive which of the two performs better, but the FE approach is more computationally efficient, which makes it the more favourable choice for real-time applications.
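The core of rig-agnostic encoding is mapping rigs with different joint counts into one fixed-size feature space. The toy sketch below illustrates the feature-clustering flavour of that idea only; it is not the thesis's actual OMG model, and the even index-based clustering is an assumption made for brevity:

```python
# Illustrative sketch of rig-agnostic encoding by feature clustering:
# joints from rigs with different joint counts are grouped into a fixed
# number of clusters and mean-pooled, so every rig maps to a feature
# vector of the same size regardless of its skeleton.

def cluster_pool(joint_positions, n_clusters=4):
    """Map a variable-length list of (x, y, z) joint positions to a
    fixed-size vector by averaging within evenly split clusters."""
    n = len(joint_positions)
    out = []
    for c in range(n_clusters):
        lo = c * n // n_clusters
        hi = (c + 1) * n // n_clusters
        chunk = joint_positions[lo:hi] or joint_positions[lo:lo + 1]
        for axis in range(3):
            out.append(sum(p[axis] for p in chunk) / len(chunk))
    return out

# Two rigs with different joint counts yield equally sized encodings.
rig_a = [(float(i), 0.0, 0.0) for i in range(17)]   # 17-joint rig
rig_b = [(float(i), 0.0, 0.0) for i in range(31)]   # 31-joint rig
assert len(cluster_pool(rig_a)) == len(cluster_pool(rig_b)) == 12
```

A downstream motion model trained on such fixed-size encodings can, in principle, be fine-tuned on a new rig without architectural changes, which is the transfer-learning benefit the abstract describes.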
62

HIGH-THROUGHPUT CALCULATIONS AND EXPERIMENTATION FOR THE DISCOVERY OF REFRACTORY COMPLEX CONCENTRATED ALLOYS WITH HIGH HARDNESS

Austin M Hernandez (12468585) 27 April 2022 (has links)
Ni-based superalloys continue to serve as the industry standard in high-stress and highly corrosive/oxidizing environments, such as those present in a gas turbine engine, due to their excellent high-temperature strengths, thermal and microstructural stabilities, and oxidation and creep resistances. Gas turbine engines are essential components for energy generation and propulsion in the modern age. However, Ni-based superalloys are reaching their limits in the operating conditions of these engines due to their melting onset temperature of approximately 1300 °C. Therefore, a new class of materials must be formulated to surpass the capabilities of Ni-based superalloys, as increasing the operating temperature leads to increased efficiency and reductions in fuel consumption and greenhouse gas emissions. One of the proposed classes of materials is termed refractory complex concentrated alloys, or RCCAs, which consist of four or more refractory elements (in this study, selected from Ti, Zr, Hf, V, Nb, Ta, Cr, Mo, and W) in equimolar or near-equimolar proportions. So far, there have been highly promising results with these alloys, including far higher melting points than Ni-based superalloys and outstanding high-temperature strengths in non-oxidizing environments. However, improvements in room-temperature ductility and high-temperature oxidation resistance are still needed for RCCAs. Also, given the millions of possible alloy compositions spanning various combinations and concentrations of refractory elements, more efficient methods than serial experimental trials are needed for identifying RCCAs with desired properties. A coupled computational and experimental approach for exploring a wide range of alloy systems and compositions is crucial for accelerating the discovery of RCCAs that may be capable of replacing Ni-based superalloys.

In this thesis, the CALPHAD method was utilized to generate basic thermodynamic properties of approximately 67,000 Al-bearing RCCAs. The alloys were then down-selected on the basis of certain criteria, including solidus temperature, volume percent BCC phase, and aluminum activity. Machine learning models with physics-based descriptors were used to select several BCC-based alloys for fabrication and characterization, and an active learning loop was employed to aid rapid alloy discovery for high hardness and strength. This method resulted in rapid identification of 15 BCC-based, four-component, Al-bearing RCCAs exhibiting room-temperature Vickers hardness values 1% to 35% above previously reported alloys. This work exemplifies the advantages of utilizing Integrated Computational Materials Engineering- and Materials Genome Initiative-driven approaches for the discovery and design of new materials with attractive properties.
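The down-selection step described above is, at its core, a multi-criterion filter over candidate alloys. The sketch below illustrates that step only; the property values and threshold numbers are illustrative assumptions, not the thesis's actual CALPHAD output or screening criteria:

```python
# Hedged sketch of computational down-selection: candidate alloys (with
# made-up property values) are filtered by solidus temperature, volume
# percent BCC phase, and aluminum activity. Thresholds are illustrative.

def down_select(alloys, min_solidus_c=1600.0, min_bcc_pct=90.0,
                max_al_activity=0.1):
    """Keep only alloys satisfying all three screening criteria."""
    return [a for a in alloys
            if a["solidus_c"] >= min_solidus_c
            and a["bcc_pct"] >= min_bcc_pct
            and a["al_activity"] <= max_al_activity]

candidates = [
    {"name": "Ti-Zr-Nb-Al", "solidus_c": 1750.0, "bcc_pct": 95.0, "al_activity": 0.05},
    {"name": "Ti-V-Cr-Al",  "solidus_c": 1450.0, "bcc_pct": 98.0, "al_activity": 0.04},
    {"name": "Hf-Ta-Mo-Al", "solidus_c": 1900.0, "bcc_pct": 88.0, "al_activity": 0.02},
]
survivors = down_select(candidates)
assert [a["name"] for a in survivors] == ["Ti-Zr-Nb-Al"]
```

In an active learning loop, the surviving candidates would be ranked by a surrogate model, a few fabricated and measured, and the measurements fed back to refine the next round of ranking.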
63

Measuring the Technical and Process Benefits of Test Automation based on Machine Learning in an Embedded Device / Undersökning av teknik- och processorienterade fördelar med testautomation baserad på maskininlärning i ett inbyggt system

Olsson, Jakob January 2018 (has links)
Learning-based testing (LBT) is a testing paradigm that combines model-based testing with machine learning algorithms to automate the modeling of the SUT, test case generation, test case execution, and verdict construction. A tool that implements LBT, called LBTest, has been developed at the CSC school at KTH. LBTest utilizes machine learning algorithms together with off-the-shelf equivalence and model checkers, and models user requirements in propositional linear temporal logic. In this study, it is investigated whether LBT is suitable for testing a micro bus architecture within an embedded telecommunication device. Furthermore, ideas for further automating the testing process by designing a data model to automate user requirement generation are explored.
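Verdict construction in this paradigm amounts to checking observed SUT behavior against a temporal-logic requirement. The sketch below shows a finite-trace check of one such requirement, "every request is eventually acknowledged" (a finite-trace reading of G(req → F ack)); it is a minimal illustration of the idea, not LBTest's actual machinery, which learns a model of the SUT and runs a full model checker:

```python
# Minimal sketch of verdict construction: check a temporal requirement,
# "every 'req' is eventually followed by an 'ack'", against finite
# sequences of events observed from the system under test.

def check_req_eventually_ack(trace):
    """Return True iff no 'req' in the trace is left unacknowledged."""
    pending = False
    for event in trace:
        if event == "req":
            pending = True
        elif event == "ack":
            pending = False
    return not pending  # fail if the last request was never acknowledged

assert check_req_eventually_ack(["req", "work", "ack", "req", "ack"])
assert not check_req_eventually_ack(["req", "work"])
```

In a learning-based setup, traces like these are generated automatically from a learned model of the SUT, so failing traces double as concrete counterexample test cases.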
64

Machine Learning-Based Predictive Methods for Polyphase Motor Condition Monitoring

David Matthew LeClerc (13048125) 29 July 2022 (has links)
This paper explored the application of three machine learning models to predictive motor maintenance: Logistic Regression, Sequential Minimal Optimization (SMO), and Naïve Bayes. A comparative analysis illustrated that while each model achieved an accuracy greater than 95% in this study, the Logistic Regression model exhibited the most reliable operation.
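To make the model comparison concrete, the sketch below implements one of the three families, a Gaussian Naïve Bayes classifier, from scratch on synthetic "motor readings"; the data and feature names are invented for illustration and are not the paper's dataset or tooling:

```python
# Toy sketch: Gaussian Naive Bayes on synthetic two-feature motor
# readings (vibration amplitude, temperature) labeled healthy/faulty.

import math

def fit_gnb(X, y):
    """Estimate per-class priors and per-feature Gaussian parameters."""
    model = {}
    for label in set(y):
        rows = [x for x, lab in zip(X, y) if lab == label]
        stats = []
        for j in range(len(X[0])):
            col = [r[j] for r in rows]
            mu = sum(col) / len(col)
            var = sum((v - mu) ** 2 for v in col) / len(col) or 1e-9
            stats.append((mu, var))
        model[label] = (len(rows) / len(y), stats)
    return model

def predict_gnb(model, x):
    """Return the label with the highest log posterior."""
    best, best_lp = None, -math.inf
    for label, (prior, stats) in model.items():
        lp = math.log(prior)
        for v, (mu, var) in zip(x, stats):
            lp += -0.5 * math.log(2 * math.pi * var) - (v - mu) ** 2 / (2 * var)
        if lp > best_lp:
            best, best_lp = label, lp
    return best

# Synthetic readings: healthy motors run cool and smooth, faulty ones hot.
X = [[0.1, 30], [0.2, 32], [0.15, 31], [0.9, 70], [1.0, 75], [0.95, 72]]
y = ["healthy"] * 3 + ["faulty"] * 3
m = fit_gnb(X, y)
assert predict_gnb(m, [0.12, 31]) == "healthy"
assert predict_gnb(m, [0.97, 73]) == "faulty"
```

Logistic Regression and SMO-trained SVMs would be fit on the same feature matrix, which is what makes the paper's accuracy comparison straightforward.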
65

Automatic Burns Analysis Using Machine Learning

Abubakar, Aliyu January 2022 (has links)
Burn injuries are a significant global health concern, causing high mortality and morbidity rates. Clinical assessment is the current standard for diagnosing burn injuries, but it suffers from interobserver variability and is not suitable for intermediate burn depths. To address these challenges, this thesis proposed machine learning-based techniques to evaluate burn wounds. The study utilized image-based networks to analyze two medical image databases of burn injuries from Caucasian and Black-African cohorts. A deep learning-based model, called BurnsNet, was developed and used for real-time processing, achieving high accuracy in discriminating between different burn depths and pressure ulcer wounds. A multiracial data representation approach was also used to address data representation bias in burn analysis, with promising performance. The ML approach proved its objectivity and cost-effectiveness in assessing burn depths, providing an effective adjunct to clinical assessment. The study's findings suggest that machine learning-based techniques can reduce the workflow burden on burn surgeons and significantly reduce errors in burn diagnosis. They also highlight the potential of automation to improve burn care and enhance patients' quality of life. / Petroleum Technology Development Fund (PTDF); Gombe State University study fellowship
66

Arcabouço para análise de eventos em vídeos. / Framework for analyzing events in videos.

SILVA, Adson Diego Dionisio da. 07 May 2018 (has links)
Automatic recognition of relevant events in videos involving sets of actions or interactions between objects can add value to surveillance systems, smart city applications, monitoring of people with physical or mental disabilities, among others. However, designing a framework that can be adapted to diverse situations without requiring an expert in the involved technologies remains a challenge for the field. In this context, this work is based on the creation of a generic, rule-based framework for event detection in video. To create the rules, users form logical expressions in first-order logic (FOL) and relate the terms with Allen's interval algebra, adding a temporal context to the rules. Being a framework, it is extensible and may receive additional modules for performing new detections and inferences. An experimental evaluation was performed using test videos available on YouTube, involving a traffic scenario with red-light-running events, and videos obtained from a live camera on the Camerite website, containing events of cars parking. The focus of the work was not to create object detectors (e.g., for cars or people) better than the state of the art, but to propose and develop a generic, reusable framework that integrates different computer vision techniques. The accuracy of event detection was in the range of 83.82% to 90.08% with 95% confidence. Maximum accuracy (100%) was obtained when the object detectors were replaced with manually assigned labels, which indicated the effectiveness of the inference engine developed for the framework.
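The temporal side of such rules rests on Allen's interval algebra, which defines thirteen possible relations between two time intervals. The sketch below implements three of them and a toy event rule; the event names and intervals are invented for illustration and are not the dissertation's actual rule syntax:

```python
# Three of Allen's thirteen interval relations, used to give rules like
# "car_crossing DURING light_red" a temporal context. Intervals are
# (start, end) pairs with start < end.

def before(a, b):
    """a ends strictly before b starts."""
    return a[1] < b[0]

def during(a, b):
    """a lies strictly inside b."""
    return b[0] < a[0] and a[1] < b[1]

def overlaps(a, b):
    """a starts first and the two intervals partially overlap."""
    return a[0] < b[0] < a[1] < b[1]

# Toy red-light-running rule: the car crosses while the light is red.
light_red = (10.0, 40.0)
car_crossing = (15.0, 20.0)
assert during(car_crossing, light_red)      # the event rule fires
assert not before(car_crossing, light_red)
assert overlaps((5.0, 15.0), light_red)
```

Combining such predicates with FOL connectives lets a non-specialist express events like "a car enters the intersection during the red phase" without touching the underlying detectors.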
67

Leakage Conversion For Training Machine Learning Side Channel Attack Models Faster

Rohan Kumar Manna (8788244) 01 May 2020 (has links)
Recent improvements in the area of the Internet of Things (IoT) have led to extensive utilization of embedded devices and sensors. Hence, the need for the safety and security of these devices increases proportionately with their use. In the last two decades, the side-channel attack (SCA) has become a massive threat to interconnected embedded devices. Moreover, extensive research has led to the development of many different forms of SCA for extracting the secret key by utilizing various kinds of leakage information. Lately, machine learning (ML) based models have been more effective in breaking complex encryption systems than other types of SCA models. However, these ML or deep learning (DL) models require a lot of training data, which cannot be collected while attacking a device in a real-world situation. Thus, in this thesis, we try to solve this issue by proposing a new technique of leakage conversion, in which high signal-to-noise ratio (SNR) power traces are converted to low-SNR averaged electromagnetic traces. In addition, we show how artificial neural networks (ANNs) can learn various non-linear dependencies of features in leakage information, which cannot be captured by adaptive digital signal processing (DSP) algorithms. Initially, we successfully convert traces in the time interval of 80 to 200, as the cryptographic operations occur in that time frame. Next, we show the successful conversion of traces lying in any time frame, as well as traces with random key and plaintext values. Finally, to validate the leakage conversion technique and the generated traces, we successfully implement correlation electromagnetic analysis (CEMA) with an approximate minimum traces to disclosure (MTD) of 480.
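The essence of leakage conversion is learning a mapping from one leakage domain to another from paired traces, then applying it to unseen traces. The sketch below reduces this drastically: a single linear neuron fit by gradient descent on synthetic paired samples stands in for the thesis's ANN trained on measured power/EM traces, so the data, scale, and model capacity here are all illustrative assumptions:

```python
# Drastically simplified sketch of leakage conversion: learn a mapping
# from high-SNR power samples to low-SNR EM samples using paired traces,
# here with a single linear neuron trained by batch gradient descent.

def fit_linear(xs, ys, lr=0.05, epochs=2000):
    """Fit y ~ w*x + b by minimizing mean squared error."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        gw = gb = 0.0
        for x, t in zip(xs, ys):
            err = (w * x + b) - t
            gw += err * x
            gb += err
        w -= lr * gw / len(xs)
        b -= lr * gb / len(xs)
    return w, b

# Synthetic pairing: each EM sample is 0.5 * power sample - 1.
power = [1.0, 2.0, 3.0, 4.0, 5.0]
em = [0.5 * p - 1.0 for p in power]
w, b = fit_linear(power, em)
assert abs(w - 0.5) < 0.05 and abs(b + 1.0) < 0.1
```

A real conversion network would map whole trace windows (e.g., many samples at once) through non-linear layers, which is precisely what lets it capture the dependencies that adaptive DSP filters miss.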
68

ENABLING RIDE-SHARING IN ON-DEMAND AIR SERVICE OPERATIONS THROUGH REINFORCEMENT LEARNING

Apoorv Maheshwari (11564572) 22 November 2021 (has links)
The convergence of various technological and operational advancements has renewed interest in On-Demand Air Service (ODAS) as a viable mode of transportation. ODAS enables an end-user to be transported in an aircraft between their desired origin and destination at their preferred time without advance notice. Industry, academia, and government organizations are collaborating to create technology solutions suited for large-scale implementation of this mode of transportation. Market studies suggest that reducing the vehicle operating cost per passenger is one of the biggest enablers of this market. To enable ODAS, an operator controls a fleet of aircraft deployed across a set of nodes (e.g., airports, vertiports) to satisfy end-user transportation requests. There is a gap in the literature for a tractable, online methodology that can enable ride-sharing in on-demand operations while maintaining a publicly acceptable level of service (such as low waiting times). The need for an approach that not only supports a dynamic-stochastic formulation but can also handle uncertainty with unknowable properties drives me towards the field of Reinforcement Learning (RL). In this work, a novel two-layer hierarchical RL framework is proposed that can distribute a fleet of aircraft across a nodal network as well as perform real-time scheduling for an ODAS operator. The top layer of the framework, the Fleet Distributor, is modeled as a Partially Observable Markov Decision Process, whereas the lower layer, the Trip Request Manager, is modeled as a Semi-Markov Decision Process. The framework is successfully demonstrated and assessed through various studies for a hypothetical ODAS operator in the Chicago region. This approach provides a new way of solving fleet distribution and scheduling problems in aviation, and it bridges the gap between state-of-the-art RL advancements and node-based transportation network problems. Moreover, this work provides a non-proprietary approach to reasonably model ODAS operations that can be leveraged by researchers and policy makers.
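To give a flavor of the fleet-distribution layer, the toy below uses tabular Q-learning on a two-node relocation problem: a single aircraft learns whether to stay or relocate when one node sees more trip requests. This is far simpler than the thesis's two-layer POMDP/SMDP framework, and every number in it is an illustrative assumption:

```python
# Toy tabular Q-learning sketch of fleet distribution: one aircraft, two
# nodes, actions "stay" or "move"; node 1 yields higher expected revenue
# (more trip requests), so the learned policy should migrate toward it.

import random

random.seed(0)
REWARD = {0: 1.0, 1: 3.0}        # expected revenue of serving each node
Q = {(s, a): 0.0 for s in (0, 1) for a in ("stay", "move")}

def step(state, action):
    """Deterministic toy dynamics: 'move' switches nodes; reward follows."""
    nxt = state if action == "stay" else 1 - state
    return nxt, REWARD[nxt]

alpha, gamma, eps = 0.1, 0.9, 0.1   # learning rate, discount, exploration
state = 0
for _ in range(5000):
    action = (random.choice(["stay", "move"]) if random.random() < eps
              else max(("stay", "move"), key=lambda a: Q[(state, a)]))
    nxt, r = step(state, action)
    best_next = max(Q[(nxt, "stay")], Q[(nxt, "move")])
    Q[(state, action)] += alpha * (r + gamma * best_next - Q[(state, action)])
    state = nxt

# The learned policy relocates away from node 0 and stays at node 1.
assert Q[(0, "move")] > Q[(0, "stay")]
assert Q[(1, "stay")] > Q[(1, "move")]
```

The hierarchical framework in the thesis layers a scheduling policy on top of such distribution decisions and replaces the toy dynamics with stochastic, partially observed demand.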
69

AUTOMATING BIG VISUAL DATA COLLECTION AND ANALYTICS TOWARD LIFECYCLE MANAGEMENT OF ENGINEERING SYSTEMS

Jongseong Choi (9011111) 09 September 2022 (has links)
Images have become a ubiquitous and efficient data form for recording information. Use of this option for data capture has increased greatly due to the widespread availability of image sensors and sensor platforms (e.g., smartphones and drones), the simplicity of the approach for broad groups of users, and our pervasive access to the internet. Such data contains abundant visual information that can be exploited to automate asset assessment and management tasks that traditionally are conducted manually for engineering systems. Automation of data collection, extraction, and analytics is, however, key to realizing the use of these data for decision-making. Despite recent advances in computer vision and machine learning techniques for extracting information from images, automation of these real-world tasks has been limited thus far, partly due to the variety of the data and the fundamental challenges associated with each domain. Due to societal demands for access to and steady operation of our infrastructure systems, this class of systems represents an ideal application where automation can have high impact. Extensive human involvement is currently required for everyday procedures such as organizing, filtering, and ranking the data before executing analysis techniques, which consequently discourages engineers from even collecting large volumes of data. To break down these barriers, methods must be developed and validated to speed up the analysis and management of data over the lifecycle of infrastructure systems. In this dissertation, big visual data collection and analysis methods are developed with the goal of reducing the burden of manual procedures. The automated capabilities developed herein are focused on applications in lifecycle visual assessment and are intended to exploit large volumes of data collected periodically over time.
To demonstrate the methods, various classes of infrastructure commonly located in our communities are chosen for validation because they: (i) provide commodities and services essential to enable, sustain, or enhance our lives; and (ii) require lifecycle structural assessment as a high priority. Applications of infrastructure assessment are developed that exercise multiple big-visual-data techniques, such as region-of-interest extraction, orthophoto generation, image localization, object detection, and image organization using convolutional neural networks (CNNs), depending on the domain of lifecycle assessment needed for the target infrastructure. The research can, however, be adapted to many other applications where monitoring and maintenance are required over the lifecycle.
