631

Machine Learning Based Listener Classification and Authentication Using Frequency Following Responses to English Vowels for Biometric Applications

Borzou, Bijan 10 July 2023 (has links)
Auditory Evoked Potentials (AEPs) have recently gained attention as a biometric feature that may improve security and address reliability shortfalls of other commonly used biometric features. The objective of this thesis is to investigate the accuracy with which subjects can be automatically identified or authenticated with machine learning (ML) techniques using a type of AEP known as the speech-evoked frequency following response (FFR). The results show more accurate discrimination between FFRs from different subjects than has been reported in past studies. The accuracy improvement is pursued either through optimized hyperparameter tuning of the ML model or by extracting new features from FFRs and feeding them as inputs to the model. Finally, the accuracy of authenticating subjects using FFRs is investigated in a "sheep vs. wolves" scenario. The results of this work shed more light on the potential use of speech-evoked FFRs in biometric identification and authentication systems.
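As a hedged illustration of this kind of pipeline (not the thesis's actual method), the sketch below classifies subjects from synthetic FFR-like waveforms using spectral-magnitude features and a support vector machine; the signal model, feature choice, and classifier settings are assumptions made only for the example.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
fs = 16000                                   # sampling rate in Hz (assumed)
n_subjects, n_trials, n_samples = 10, 40, 4096

# Synthetic FFR-like responses: each subject gets slightly different
# harmonic amplitudes around a 100 Hz fundamental, plus noise.
t = np.arange(n_samples) / fs
X, y = [], []
for s in range(n_subjects):
    amps = 1.0 + 0.2 * rng.standard_normal(5)    # subject-specific harmonic profile
    for _ in range(n_trials):
        sig = sum(a * np.sin(2 * np.pi * 100 * (k + 1) * t)
                  for k, a in enumerate(amps))
        sig += 0.5 * rng.standard_normal(n_samples)
        # Feature vector: log-magnitude spectrum below 1 kHz.
        spec = np.abs(np.fft.rfft(sig))
        freqs = np.fft.rfftfreq(n_samples, 1 / fs)
        X.append(np.log1p(spec[freqs < 1000]))
        y.append(s)
X, y = np.array(X), np.array(y)

# Subject identification: multi-class SVM with cross-validated accuracy.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
print("CV identification accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```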
632

Evaluation under Real-world Distribution Shifts

Alhamoud, Kumail 07 1900 (has links)
Recent advancements in empirical and certified robustness have shown promising results in developing reliable and deployable Deep Neural Networks (DNNs). However, most evaluations of DNN robustness have focused on testing models on images from the same distribution they were trained on. In real-world scenarios, DNNs may encounter dynamic environments with significant distribution shifts. This thesis aims to investigate the interplay between empirical and certified adversarial robustness and domain generalization. We take the first step by training robust models on multiple domains and evaluating their accuracy and robustness on an unseen domain. Our findings reveal that: (1) both empirical and certified robustness exhibit generalization to unseen domains, and (2) the level of generalizability does not correlate strongly with the visual similarity of inputs, as measured by the Fréchet Inception Distance (FID) between source and target domains. Furthermore, we extend our study to a real-world medical application, where we demonstrate that adversarial augmentation significantly enhances robustness generalization while minimally affecting accuracy on clean data. This research sheds light on the importance of evaluating DNNs under real-world distribution shifts and highlights the potential of adversarial augmentation in improving robustness in practical applications.
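For reference, the Fréchet Inception Distance used above compares Gaussian fits to feature embeddings of two image sets. The sketch below computes that distance from two pre-extracted feature matrices; the Inception feature extraction itself is omitted, and the inputs here are random placeholders rather than real embeddings.

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(feats_a: np.ndarray, feats_b: np.ndarray) -> float:
    """Fréchet distance between Gaussians fitted to two feature sets.

    feats_a, feats_b: (n_samples, n_features) arrays, e.g. Inception
    activations of source- and target-domain images.
    """
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    covmean = sqrtm(cov_a @ cov_b)
    if np.iscomplexobj(covmean):      # drop numerical imaginary residue
        covmean = covmean.real
    diff = mu_a - mu_b
    return float(diff @ diff + np.trace(cov_a + cov_b - 2.0 * covmean))

# Placeholder features standing in for real Inception embeddings.
rng = np.random.default_rng(0)
src = rng.normal(0.0, 1.0, size=(500, 64))
tgt = rng.normal(0.3, 1.1, size=(500, 64))
print("FID (on placeholder features):", frechet_distance(src, tgt))
```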
633

Ein Ordnungsrahmen zur Modellierung von Qualitätsmerkmalen in Produktionsprozessen

Kuhn, Fabian, Gruczyk, Thomas, Kröhn, Michael 12 February 2024 (has links)
A standardized procedure makes it easier to apply AI models in practice, including for employees without advanced AI expertise, for example process experts who often approach the topic of artificial intelligence (AI) naively or skeptically (hereafter: AI laypersons). In the context of industrial manufacturing processes, a structured procedure is therefore important that enables even AI laypersons to apply artificial intelligence methods successfully to their process data. We outline a structuring framework for this type of problem, developed in the course of student theses and in collaboration between universities and ROBUR Automation. It establishes the relationship between the individual steps and thus provides an overview of the complex modeling that is accessible even to AI laypersons. As an abstract concept, the structuring framework is realized in a concrete software framework. In this contribution we focus on one building block of the framework: the representation of quality. Together with further building blocks, it forms a pipeline that is integrated into Mia, the data platform developed by ROBUR Automation.
634

Flexible Fertigungssysteme basierend auf zentralen, leistungsfähigen Steuerungen für integriertes maschinelles Lernen und Produkttransport

Wree, Christoph, Raßmann, Rando, Salazar Gesell, Gianmarco, Schönfeld, Tobias 13 February 2024 (has links)
The requirements placed on modern manufacturing systems are growing steadily. In addition to high throughput combined with high product quality, demands are also made regarding compactness, energy efficiency, extensibility, and cyber resilience. Current manufacturing systems are often based on a multitude of subsystems. This increases system complexity, and the many interfaces can lead to performance losses in data transmission. To meet the needs of flexible, highly integrated manufacturing systems, powerful controllers are therefore required that combine all functions centrally in a single controller. This contribution examines two machine demonstrators that use an industrial-PC-based machine controller and in which the product transport system is at the center of the plant. The advantages and disadvantages of a centralized versus a decentralized concept are compared and discussed. It is also demonstrated that a central controller allows synergies to be exploited efficiently, which can extend the life cycle of the manufacturing system. For example, convolutional neural networks are used on both systems to control the product flow through the plant. The inference time of the optimized models is less than 430 µs at a classification accuracy of 100%.
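As an illustrative, hedged sketch (not the demonstrators' actual models), the following measures the single-image inference latency of a small convolutional classifier of the kind that could steer a product stream; the architecture, input size, and class count are assumptions, and the real models' optimizations are not reproduced.

```python
import time
import torch
import torch.nn as nn

# A deliberately small CNN used only to demonstrate latency measurement.
class TinyCNN(nn.Module):
    def __init__(self, n_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(16 * 8 * 8, n_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = TinyCNN().eval()
x = torch.randn(1, 1, 32, 32)        # one 32x32 grayscale product image (assumed size)

with torch.no_grad():
    for _ in range(10):              # warm-up runs
        model(x)
    t0 = time.perf_counter()
    for _ in range(100):
        model(x)
    dt = (time.perf_counter() - t0) / 100
print(f"mean inference time: {dt * 1e6:.1f} µs per image")
```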
635

Towards Automating IP-Network Operations with Machine Learning from Raw Network Data

Mohammed, Ayse Rumeysa 19 January 2024 (has links)
The ever-increasing size and complexity of today's communication networks make it difficult for Network Operation Centers (NOCs) to perform manually operated tasks efficiently, such as network status detection, network fault localization, cost-aware traffic engineering, failure management, and network quality assurance. These tasks have traditionally been managed by expert technicians who decide when, where, and which actions to take based on specific network rules. Due to the complexity of the process, NOC actions are still performed manually; automating this process could be valuable to network providers and service operators. In this context, we developed an Artificial Intelligence based (AI-based) action recommendation engine (ARE) which, as its name suggests, recommends the best available operational expenditure aware (OPEX-aware) action, either with (Stateful ARE) or without (Stateless ARE) measuring the network state. Our experimental results show that Stateful ARE can recommend the suitable action with up to 99% accuracy. This high accuracy is largely due to the correct classification of the Normal state, which represents 64.5% of the dataset, and its corresponding Do Nothing action, which accounts for 68.3% of all actions. While Stateful ARE's overall accuracy is satisfactory, it could not achieve this performance on minority classes, and it suffered from performance degradation caused by the state classification process. Therefore, we introduced Stateless ARE, which recommends actions without measuring the network state. The initial results of Stateless ARE using a Feed Forward Neural Network (FFNN) did not exceed Stateful ARE's performance: the classification accuracy on the minority classes was still around 89% and 93%. It nevertheless outperformed the static network, indicating that it could be improved with further optimization techniques. Based on this insight, we adopted a state-of-the-art Transformer model as the Stateless ARE model. The Transformer significantly improved the accuracy on the minority classes, which other methodologies struggled to classify, to 97% and 99%. This result shows that the Transformer model can be an effective tool for improving the performance of action recommendation engines.
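To make the "stateless" formulation concrete, here is a hedged, minimal sketch of a feed-forward classifier that maps raw network measurements directly to recommended actions; the feature set, action labels, class proportions, and model choice are assumptions for illustration, not the thesis's implementation.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
actions = ["do_nothing", "reroute", "scale_capacity", "open_ticket"]   # assumed labels

# Synthetic, imbalanced "raw network data": utilisation, loss, latency, jitter.
n = 5000
y = rng.choice(len(actions), size=n, p=[0.68, 0.15, 0.10, 0.07])       # majority: do_nothing
X = rng.normal(size=(n, 4)) + y[:, None] * 0.8                         # class-dependent shift

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Stateless recommendation: measurements in, action out, no explicit state label.
clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te), target_names=actions))
```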
636

HYBRID AND DATA DRIVEN MODELS OF DISTILLATION TOWERS

Carlos Daniel, Rodriguez Sotelo January 2024 (has links)
This thesis presents advancements in the development of hybrid and data-driven models of distillation columns. First, it introduces a hybrid model structure that incorporates a novel multiplicative correction term for inferential monitoring. This model architecture outperforms previous hybrid structures, especially under extrapolation conditions, and can be adapted to different conditions. Second, it presents a methodology for selecting temperature measurements for inferential models. This methodology demonstrates that nonlinear columns can be effectively modeled with linear models requiring only two temperature measurements per section (previous works state that more are required) when the measurements are selected systematically. Finally, an iterative Real-Time Optimization (RTO) scheme based on an augmented inferential data-driven model is demonstrated. The accuracy of the model enables estimation of the plant's sensitivity matrix from the model without the need for additional plant measurements. The proposed RTO framework produces results similar to those achieved by optimizing rigorous tray-to-tray distillation models. / Thesis / Candidate in Philosophy
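The multiplicative-correction idea can be illustrated with a toy hybrid model: a simplified first-principles prediction is multiplied by a data-driven factor fitted to the observed ratio between plant data and the mechanistic prediction. This sketch is only a structural illustration under entirely assumed surrogate functions and synthetic data, not the thesis's distillation model.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Synthetic "plant" data: two section temperatures -> product impurity.
T = rng.uniform(340.0, 380.0, size=(300, 2))
y_true = 0.02 * np.exp(0.03 * (T[:, 0] - 350)) * (1 + 0.01 * (T[:, 1] - 360))
y_true *= 1 + 0.05 * rng.standard_normal(300)      # measurement noise

# Simplified mechanistic model (deliberately biased).
def mechanistic(T):
    return 0.02 * np.exp(0.025 * (T[:, 0] - 350))

# Hybrid structure: y_hat = mechanistic(T) * correction(T),
# with the correction fitted to the observed ratio y / mechanistic.
ratio = y_true / mechanistic(T)
corr = Ridge(alpha=1.0).fit(T, ratio)
y_hybrid = mechanistic(T) * corr.predict(T)

print("mechanistic-only RMSE:", np.sqrt(np.mean((mechanistic(T) - y_true) ** 2)))
print("hybrid RMSE:          ", np.sqrt(np.mean((y_hybrid - y_true) ** 2)))
```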
637

Using Machine Learning Techniques to Improve Operational Flash Flood Forecasting

Della Libera Zanchetta, Andre January 2022 (has links)
Compared with other types of floods, timely and accurate prediction of flash floods is particularly challenging due to the small spatiotemporal scales at which the hydrologic and hydraulic processes tend to develop, and to the short lead time between the causative event and the inundation scenario. With the continuously increasing availability of data and computational power, interest in applying machine learning techniques for hydrologic purposes in the context of operational forecasting has also been increasing. The primary goal of the research activities developed in the context of this thesis is to explore the use of emerging machine learning techniques for enhancing flash flood forecasting. The studies presented start with a review of the state of the art of documented forecasting systems suitable for flash floods, followed by an assessment of the potential of using multiple concurrent precipitation estimates for early prediction of high-discharge scenarios in a flashy catchment. Then, the problem of rapidly producing realistic high-resolution flood inundation maps is explored through the use of hybrid machine learning models based on Non-linear AutoRegressive with eXogenous inputs (NARX) and Self-Organizing Map (SOM) structures as surrogates of a 2D hydraulic model. In this context, the use of a k-fold ensemble is proposed and evaluated as an approach for estimating uncertainties related to surrogating a physics-based model. The results indicate that, in a small and flashy catchment, the abstract nature of data processing in machine learning models benefits from the presentation of multiple concurrent precipitation products when performing rainfall-runoff simulations, compared to the business-as-usual single-precipitation approach. It was also found that the hybrid NARX-SOM models, previously explored for slowly developing flood scenarios, deliver acceptable performance when surrogating high-resolution models of rapidly evolving inundation events, producing both deterministic and probabilistic inundation maps in which uncertainties are adequately estimated. / Thesis / Doctor of Science (PhD) / Flash floods are among the most hazardous and impactful environmental disasters faced by societies across the globe. The timely adoption of mitigation actions by decision makers and response teams is particularly challenging due to the rapid development of such events after (or even during) the occurrence of intense rainfall. The short time interval available to response teams imposes a constraint on the direct use of computationally demanding components in real-time forecasting chains, such as high-resolution 2D hydraulic models based on physical laws, which are capable of dynamically producing valuable flood inundation maps. This research explores the potential of using machine learning models to reproduce the behavior of hydraulic models designed to simulate the evolution of flood inundation maps in a configuration suitable for operational flash flood forecasting applications.
Contributions of this thesis include (1) a comprehensive literature review of recent advances and approaches adopted in operational flash flood forecasting systems, identifying and highlighting the main research gaps on this topic, (2) evidence that machine learning models have the potential to identify patterns in multiple quantitative precipitation estimates from different sources, enhancing the performance of rainfall-runoff estimation in urban catchments prone to flash floods, (3) the assessment that hybrid data-driven structures based on self-organizing maps (SOM) and nonlinear autoregressive models with exogenous inputs (NARX), originally proposed for large-scale and slow-developing flood scenarios, can be successfully applied to flashy catchments, and (4) the proposal of a k-fold ensemble as a technique to produce probabilistic flood inundation forecasts in which the uncertainty inherent to the surrogating step is represented.
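A hedged sketch of the k-fold ensemble idea described above: k surrogate models are trained on different folds of the data, and the spread of their predictions serves as an uncertainty estimate attributable to the surrogation step. The regressor and data here are generic placeholders, not the NARX-SOM surrogate or the thesis's catchment data.

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Placeholder training set: rainfall-derived inputs -> inundation depth at one cell.
X = rng.uniform(0, 1, size=(400, 6))
y = 2.0 * X[:, 0] + np.sin(4 * X[:, 1]) + 0.1 * rng.standard_normal(400)

# Train one surrogate per fold (each model sees k-1 folds of the data).
models = []
for train_idx, _ in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    models.append(GradientBoostingRegressor(random_state=0).fit(X[train_idx], y[train_idx]))

# Ensemble mean gives the deterministic map; ensemble spread gives the uncertainty.
X_new = rng.uniform(0, 1, size=(5, 6))
preds = np.stack([m.predict(X_new) for m in models])     # shape (k, n_points)
print("mean prediction:", preds.mean(axis=0))
print("ensemble std   :", preds.std(axis=0))
```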
638

An integrated neural network and optimization framework for the inverse design of optical devices

Chen, Yuyao 01 September 2022 (has links)
The inverse design of optical devices that exhibit desired functionalities as well as the solution of complex inverse problems are becoming essential research directions in modern optical engineering. Recent advancements in computation algorithms, machine learning architectures and optimization methods offer efficient means to deal with complex photonics problems with a large number of degrees of freedom. In this thesis, I present our work on developing an integrated framework for the inverse design of diffractive optical elements and nanophotonic media with tailored optical responses. In the first part of our work, we introduce the design of single-layer diffractive optical devices that extend conventional imaging functions to include dual-band multi-focal microlenses for multi-band imaging, modulated axilenses for ultracompact spectrometers, and hyperuniform phase plates for lensless imaging systems. We design these diffractive elements based on Rayleigh-Sommerfeld scalar diffraction simulations. We also fabricate them using scalable lithography and experimentally characterize their predicted diffraction and imaging performances. While we successfully validated our designs, we also identified the fundamental limitations and challenges of single-layer diffractive devices. In order to address these problems, in the second part of the work we introduce a novel and flexible approach for the inverse design of diffractive optical elements based on adaptive deep diffractive neural networks (a-D2NNs). In particular, we demonstrate two-layer dual-band multi-focal devices that exceed the efficiency limit of traditional single-layer devices and we leverage the powerful a-D2NN inverse design platform to engineer systems with targeted spectral lineshapes and focusing point-spread functions. Moreover, we apply a-D2NNs to the inverse design of ultracompact spectrometers and demonstrate nanometer-range spectral resolution for 100 micron-size devices that can be fabricated using conventional lithographic procedures. Finally, we apply the a-D2NNs approach to the design of hyperuniform scalar random fields that we have introduced as novel lensless imaging systems with modulated transfer functions that produce enhanced image quality compared to state-of-the-art phase plates based on the Perlin noise. We additionally show that a-D2NNs can be used to efficiently design different classes of hyperuniform random media that are currently being explored for a number of optical applications. In the third part of my thesis, we propose and develop a deep learning framework for solving inverse photonics problems by employing physics-informed neural networks (PINNs). We solve the non-local effective medium problem for finite-size metamaterials and address losses and radiation effects. Furthermore, we apply PINNs to solve the invisible cloaking inverse problem beyond the quasi-static limit. Finally, we develop a general PINN framework for inverse retrieval of optical parameters based on near-field data information. Based on our approach, we show the successful retrieval of the electric and magnetic optical parameters (i.e., non-local permittivity and permeability functions) of two-dimensional and three-dimensional scatterers in the presence of absorption losses. Additionally, we demonstrate the application of the inverse PINN design to the scanning near-field microscopy technique under localized excitation and in the presence of noise. 
In the last part of our work, we couple adjoint optimization methods with the rigorous multiple scattering theory of cylinder arrays (i.e., two-dimensional generalized Mie theory) for the inverse design of small-size photonic structures, called "photonic patches", that achieve different functionalities with optimal efficiencies. Specifically, we present the inverse design of photonic patches that angularly shape incoming radiation and that focus light intensity over Fresnel-zone distances (~10 μm) with engineered spectral lineshapes, enhanced local density of states, and enhanced resonance quality factors.
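As a hedged illustration related to the Rayleigh-Sommerfeld scalar diffraction simulations mentioned in the first part of the abstract, the sketch below propagates a scalar field with the angular-spectrum method, a standard numerical route for this class of problem. The grid, wavelength, aperture size, and propagation distance are arbitrary assumptions, and nothing here reproduces the thesis's design code.

```python
import numpy as np

def angular_spectrum_propagate(u0, wavelength, dx, z):
    """Propagate a scalar field u0 (n x n, sample spacing dx) a distance z."""
    n = u0.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    # Keep propagating components only; evanescent waves are suppressed.
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)
    return np.fft.ifft2(np.fft.fft2(u0) * H)

# Example: a circular aperture illuminated by a unit-amplitude plane wave.
n, dx, wavelength = 512, 1e-6, 633e-9                 # 1 µm sampling, HeNe wavelength
x = (np.arange(n) - n / 2) * dx
X, Y = np.meshgrid(x, x)
u0 = (X**2 + Y**2 < (50e-6) ** 2).astype(complex)     # 50 µm radius aperture

u_z = angular_spectrum_propagate(u0, wavelength, dx, z=1e-3)   # field 1 mm downstream
print("on-axis intensity:", abs(u_z[n // 2, n // 2]) ** 2)
```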
639

Use of Machine Learning for Outlier Detection in Healthy Human Brain Magnetic Resonance Imaging (MRI) Diffusion Tensor (DT) Datasets / Outlier Detection in Brain MRI Diffusion Datasets

MacPhee, Neil January 2022 (has links)
Machine learning (ML) and deep learning (DL) are powerful techniques that allow for the analysis and classification of large MRI datasets. With the growing accessibility of high-powered computing and large data storage, there has been an explosive interest in their use for assisting clinical analysis and interpretation. Though these methods can provide insights into the data that are not possible through human analysis alone, they require significantly large datasets for training, which can be difficult for anyone (researcher or clinician) to obtain on their own. The growing use of publicly available, multi-site databases helps solve this problem. Inadvertently, however, these databases can sometimes contain outliers or incorrectly labeled data, as subjects may have subclinical or underlying pathology unbeknownst to them or to those who collected the data. Due to the outlier sensitivity of ML and DL techniques, inclusion of such data can lead to poor classification rates and consequently low specificity and sensitivity. Thus, the focus of this work was to evaluate large brain MRI datasets, specifically diffusion tensor imaging (DTI), for the presence of anomalies and to validate and compare different methods of anomaly detection. A total of 1029 male and female subjects aged 22 to 35 were downloaded from a global imaging repository and divided into 6 cohorts depending on their age and sex. Care was taken to minimize variance due to hardware, so only data from a specific vendor (General Electric Healthcare) and MRI B0 field strength (i.e., 3 Tesla) were obtained. The raw DTI data (in this case, DICOM images) were first preprocessed into scalar metrics (fractional anisotropy (FA), radial diffusivity (RD), axial diffusivity (AD), and mean diffusivity (MD)) and warped to MNI152 T1 1 mm standardized space using the FMRIB Software Library (FSL). The data were then segmented into regions of interest (ROIs) using the JHU DTI-based white-matter atlas, and a mean was calculated for each ROI defined by that atlas. The ROI data were standardized, and a Z-score was calculated for each ROI over all subjects. Four different algorithms were used for anomaly detection: Z-score outlier detection, maximum likelihood estimator (MLE)- and minimum covariance determinant (MCD)-based Mahalanobis distance outlier detection, one-class support vector machine (OCSVM) outlier detection, and OCSVM novelty detection trained on MCD-based Mahalanobis distance data. The best outlier detector was found to be the MCD-based Mahalanobis distance, with the OCSVM novelty detector performing exceptionally well on the MCD-based Mahalanobis distance data. From the results of this study, it is clear that these global databases contain outliers within their healthy control datasets, further reinforcing the need to include outlier or novelty detection in the preprocessing pipeline for ML and DL studies. / Thesis / Master of Applied Science (MASc) / Artificial intelligence (AI) refers to the ability of a computer or robot to mimic human traits such as problem solving or learning. Recently there has been an explosive interest in its use for assisting clinical analysis. However, successful use of these methods requires a significantly large training set, which can often contain outliers or incorrectly labeled data. Due to the sensitivity of these techniques to outliers, this often leads to poor classification rates as well as low specificity and sensitivity.
The focus of this work was to evaluate different methods of outlier detection and investigate the presence of anomalies in large brain MRI datasets. The results of this study show that these large brain MRI datasets do contain anomalies and identify the method best suited to detecting them.
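A hedged sketch of the two best-performing detectors described above, applied to a placeholder ROI feature matrix: robust (MCD-based) Mahalanobis distances flag outliers, and a one-class SVM trained on the inliers' distances acts as a novelty detector. The data, threshold level, and SVM settings are illustrative assumptions, not the study's configuration.

```python
import numpy as np
from scipy.stats import chi2
from sklearn.covariance import MinCovDet
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

# Placeholder data: 200 subjects x 48 ROI means (e.g. mean FA per atlas region).
X = rng.normal(size=(200, 48))
X[:5] += 3.0                                      # inject a few synthetic outlier subjects

# Robust squared Mahalanobis distances via the Minimum Covariance Determinant estimator.
mcd = MinCovDet(random_state=0).fit(X)
d2 = mcd.mahalanobis(X)
threshold = chi2.ppf(0.975, df=X.shape[1])        # chi-square cutoff (assumed level)
outliers = np.where(d2 > threshold)[0]
print("MCD-flagged subjects:", outliers)

# One-class SVM novelty detector trained on the distance values of the inliers.
inlier_d = d2[d2 <= threshold].reshape(-1, 1)
ocsvm = OneClassSVM(nu=0.05, gamma="scale").fit(inlier_d)
print("novelty labels for flagged subjects:", ocsvm.predict(d2[outliers].reshape(-1, 1)))
```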
640

A Machine Learning Approach to Genome Assessment

Thrash, Charles Adam 09 August 2019 (has links)
A key use of high-throughput sequencing technology is the sequencing and assembly of full genome sequences. These genome assemblies are commonly assessed using statistics relating to the contiguity of the assembly. Measures of contiguity are not strongly correlated with the biological completeness or correctness of the assembly, and a commonly reported metric, N50, can be misleading. Over the past ten years, multiple research groups have rejected the overuse of N50 and sought to develop more informative metrics. This research seeks to create a ranking method that incorporates biologically relevant information about the genome, such as its completeness and correctness. Approximately eight hundred genomes were initially selected, and information about their completeness, contiguity, and correctness was gathered using publicly available tools. Using this information, the genomes were scored by subject matter experts. This rating system was then explored using supervised machine learning: a number of classifiers and regressors were tested with cross-validation. Two approaches were explored. First, a metric that describes the distance to the ideal genome was created as a way to incorporate human subject matter expert knowledge into the genome assembly assessment process. Second, random forest regression was found to be the supervised learning method with the highest scores. A model created by an optimized random forest regressor was saved, and a tool was created to load the saved model and rank genomes provided by the end user. Both approaches serve as ways to incorporate human subject matter expert knowledge into genome assembly assessment.
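A hedged sketch of the modelling step described above: a random forest regressor is cross-validated on assembly-quality features and then persisted for later ranking of new genomes. The feature definitions and expert scores below are synthetic placeholders, not the study's data or its exact tool.

```python
import joblib
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Placeholder per-assembly features: contiguity, completeness, correctness proxies
# (e.g. an N50-like statistic, completeness %, misassembly rate); values are synthetic.
X = np.column_stack([
    rng.lognormal(15, 1, 800),       # contiguity statistic
    rng.uniform(50, 100, 800),       # completeness (%)
    rng.uniform(0, 5, 800),          # misassembly rate proxy
])
y = 0.6 * X[:, 1] - 8.0 * X[:, 2] + rng.normal(0, 3, 800)   # stand-in expert scores

# Cross-validated regression, then fit on all data and persist the model.
rf = RandomForestRegressor(n_estimators=200, random_state=0)
print("CV R^2:", cross_val_score(rf, X, y, cv=5).mean())
rf.fit(X, y)
joblib.dump(rf, "genome_ranker.joblib")

# Later: load the saved model and rank user-supplied genomes by predicted score.
ranker = joblib.load("genome_ranker.joblib")
new_assemblies = X[:3]
print("ranked indices:", np.argsort(-ranker.predict(new_assemblies)))
```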
