641

A machine learning framework for prediction of Diagnostic Trouble Codes in automobiles

Kopuru, Mohan 01 May 2020 (has links)
Predictive maintenance is an important response to rising maintenance costs in industry. With the advent of intelligent computing and the availability of data, predictive maintenance is seen as a way to predict and prevent faults in many types of machines. This thesis presents a detailed methodology, based on a Convolutional Neural Network architecture, for predicting the occurrence of critical Diagnostic Trouble Codes (DTCs) observed in a vehicle, so that the necessary maintenance actions can be taken before a fault occurs in the automobile.
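The abstract gives no implementation details; as a minimal sketch of the kind of 1D convolutional classifier it describes, assuming windowed multichannel sensor data and a handful of critical DTC labels (all sizes below are illustrative, not values from the thesis):

```python
# Hypothetical sketch of a 1D-CNN DTC predictor; layer sizes, window
# length, and the number of DTC classes are illustrative assumptions.
import torch
import torch.nn as nn

class DTCPredictor(nn.Module):
    def __init__(self, n_sensors=8, n_dtc_classes=5):
        super().__init__()
        self.features = nn.Sequential(
            # Convolve over the time axis of multichannel sensor windows.
            nn.Conv1d(n_sensors, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # collapse the time dimension
        )
        # One logit per critical DTC.
        self.classifier = nn.Linear(64, n_dtc_classes)

    def forward(self, x):              # x: (batch, n_sensors, window)
        z = self.features(x).squeeze(-1)
        return self.classifier(z)      # raw logits

model = DTCPredictor()
logits = model(torch.randn(4, 8, 128))  # 4 sample sensor windows
probs = torch.sigmoid(logits)           # per-DTC occurrence probability
```

A sigmoid output per code would allow several DTCs to be flagged for the same window, which suits a maintenance-alert setting; whether the thesis uses this multi-label formulation is not stated in the abstract.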
642

An exploration of success factors in the healthcare supply chain

Tidwell, Matthew 07 August 2020 (has links)
This research builds on a 2009 survey of healthcare professionals that assessed their organizations' levels of supply chain maturity (SCM) and data standard readiness (DSR) on a scale of 1 to 5 [Smith, 2011]. With the survey data, Smith developed a 0-1 quadratic program to conserve the maximum amount of survey data while removing non-responses. This research uses that quadratic program, together with other machine learning algorithms and analysis methods, to investigate which factors contribute most to an organization's SCM and DSR levels. No specific factors were found; however, different levels of prediction accuracy were achieved across the five subsets and algorithms. The best SCM prediction accuracy, 50.84%, came from linear discriminant analysis on the Reduced subset, while the best DSR prediction accuracy, 45.00%, came from stepwise regression on the PCA subset. Most misclassifications found in this study were minor.
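For orientation, the linear discriminant analysis step reported above can be outlined with scikit-learn along the following lines; X_reduced and y_scm are placeholders for the thesis's Reduced survey subset and 1-5 maturity labels, which are not reproduced here:

```python
# Hedged sketch: classify SCM maturity (levels 1-5) from survey
# responses with linear discriminant analysis, as in the study design.
# X_reduced / y_scm stand in for the thesis's actual survey data.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X_reduced = rng.normal(size=(120, 10))   # placeholder survey features
y_scm = rng.integers(1, 6, size=120)     # placeholder maturity levels

lda = LinearDiscriminantAnalysis()
scores = cross_val_score(lda, X_reduced, y_scm, cv=5)
print(f"mean CV accuracy: {scores.mean():.2%}")  # cf. the 50.84% reported
```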
643

USING RANDOMNESS TO DEFEND AGAINST ADVERSARIAL EXAMPLES IN COMPUTER VISION

Huangyi Ge (14187059) 29 November 2022 (has links)
Computer vision applications such as image classification and object detection often suffer from adversarial examples: adding a small amount of noise to an input image can trick the model into misclassifying it. Over the years, many defense mechanisms have been proposed, and different researchers have made seemingly contradictory claims about their effectiveness. This dissertation first presents an analysis of possible adversarial models and proposes an evaluation framework for comparing more powerful and realistic adversary strategies. It then proposes two randomness-based defense mechanisms, Random Spiking (RS) and MoNet, to improve the robustness of image classifiers. Random Spiking generalizes dropout and introduces random noise into the training process in a controlled manner. MoNet combines secret randomness with Floyd-Steinberg dithering: input images are first processed with Floyd-Steinberg dithering to reduce their color depth, and the pixels are then encrypted using the AES block cipher under a secret, random key. Evaluations under the proposed framework suggest that RS and MoNet deliver better protection against adversarial examples than many existing schemes. Notably, MoNet significantly improves resilience against the transferability of adversarial examples, at the cost of a small drop in prediction accuracy. Furthermore, we extend MoNet to object detection networks and combine it with model ensemble strategies (Affirmative and WBF, weighted boxes fusion) and Test Time Augmentation (TTA); we call this strategy 3Mix. Evaluations found that 3Mix can significantly improve the mean average precision (mAP) on both benign inputs and adversarial examples. In addition, 3Mix is a lightweight approach to mitigating adversarial examples without training new models.
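A rough sketch of the MoNet input transform described above — Floyd-Steinberg dithering to reduce color depth, then AES encryption of the pixels under a secret, random key — might look as follows; the color depth, pixel ordering, cipher mode, and padding below are assumptions for illustration, not details from the dissertation:

```python
# Rough sketch of a MoNet-style input transform as described in the
# abstract; the exact color depth, pixel ordering, AES mode, and
# padding used in the dissertation are assumptions here.
import os
import numpy as np
from PIL import Image
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

SECRET_KEY = os.urandom(16)  # secret, random AES-128 key

def monet_transform(img: Image.Image, colors: int = 16) -> np.ndarray:
    # 1) Reduce color depth with Floyd-Steinberg dithering.
    dithered = img.convert("RGB").convert(
        "P", palette=Image.Palette.ADAPTIVE, colors=colors,
        dither=Image.Dither.FLOYDSTEINBERG,
    )
    pixels = np.asarray(dithered, dtype=np.uint8)

    # 2) Encrypt the flattened pixel bytes with AES under the secret key.
    flat = pixels.tobytes()
    pad = (-len(flat)) % 16                  # AES block size is 16 bytes
    enc = Cipher(algorithms.AES(SECRET_KEY), modes.ECB()).encryptor()
    ct = enc.update(flat + b"\x00" * pad) + enc.finalize()

    # Return encrypted bytes reshaped to the image grid for the classifier.
    return np.frombuffer(ct[: pixels.size], dtype=np.uint8).reshape(pixels.shape)

# transformed = monet_transform(Image.open("input.png"))
```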
644

Measuring Democracy: From Texts to Data

Marzagao, Thiago 26 December 2014 (has links)
No description available.
645

Semantic Analysis of Ladder Logic

Gad, Soumyashree Shrikant January 2017 (has links)
No description available.
646

Judgment Post-Stratification with Machine Learning Techniques: Adjusting for Missing Data in Surveys and Data Mining

Chen, Tian 02 October 2013 (has links)
No description available.
647

Detection of Malicious Applications in Android using Machine Learning

Baskaran, Balaji January 2016 (has links)
No description available.
648

Amino Acid Properties Provide Insight to a Protein’s Subcellular Location

Powell, Brian T. January 2016 (has links)
No description available.
649

Automatic Source Code Transformation To Pass Compiler Optimization

Kahla, Moustafa Mohamed 03 January 2024 (has links)
Loop vectorization is a powerful optimization technique that can significantly boost the runtime of loops. This optimization depends on functional equivalence between the original and optimized code versions, a requirement typically established through the compiler's static analysis. When this condition cannot be established, the compiler misses the optimization. Manually rewriting source code to recover a missed compiler optimization is time-consuming, given the multitude of potential code variations, and demands a high level of expertise, making it impractical in many scenarios. In this work, we propose a novel framework that takes code blocks the compiler failed to optimize and transforms them into code that passes the compiler optimization. We develop an algorithm to efficiently search for a code structure that passes the compiler optimization automatically (weakly verified through a correctness test). We focus on loop-vectorize optimization inside OpenMP directives, where the introduction of parallelism adds complexity to the compiler's vectorization task and is shown to hinder optimizations. Furthermore, we introduce a modified version of TSVC, a loop vectorization benchmark, in which all original loops execute within OpenMP directives. Our evaluation shows that our framework enables loop-vectorize optimizations that the compiler had failed to apply, yielding speedups of up to 340× in the optimized blocks. Furthermore, applying our tool to HPC benchmark applications, which are already built with optimization and performance in mind, demonstrates that our technique successfully enables further compiler optimization, accelerating the optimized blocks in 15 loops and the end-to-end execution time of the three applications by up to 1.58×. / Master of Science / Loop vectorization is a powerful technique for improving the performance of specific sections of computer programs known as loops. It executes instructions from different iterations of a loop simultaneously, providing a considerable speedup due to this parallelism. To apply this optimization, the code must meet certain conditions, which are usually checked by the compiler. Sometimes, however, the compiler cannot verify these conditions, and the optimization fails. Our research introduces a new approach to fix these issues automatically. Normally, fixing the code manually to meet these conditions is time-consuming and requires high expertise. To overcome this, we have developed a tool that can efficiently find ways to make the code satisfy the conditions needed for optimization. Our focus is on code that uses OpenMP directives to split a loop across multiple processor cores and run the parts simultaneously, where this added parallelism makes the code harder for the compiler to optimize. Our tests show that our approach successfully improves the speed of computer programs by enabling optimizations initially missed by the compiler. This results in significant speed improvements for specific parts of the code, sometimes up to 340 times faster. We have also applied our method to well-optimized computer programs, and it still made them run up to 1.58 times faster.
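The "weak verification through a correctness test" mentioned above can be illustrated independently of the C/OpenMP setting: run the original and transformed versions on random inputs and accept the rewrite only if the outputs agree. A minimal sketch (Python for illustration; the two kernels are stand-ins, whereas the thesis operates on real source code):

```python
# Illustrative sketch of the correctness test that "weakly verifies"
# a transformed loop: run both versions on random inputs and compare.
# The kernels below are stand-ins, not code from the thesis.
import numpy as np

def original_kernel(a, b):
    out = np.empty_like(a)
    for i in range(len(a)):          # loop form the compiler may fail on
        out[i] = a[i] * b[i] + 1.0
    return out

def transformed_kernel(a, b):
    return a * b + 1.0               # restructured, vectorizable form

def weakly_equivalent(f, g, trials=100, n=1024, tol=1e-12):
    rng = np.random.default_rng(0)
    for _ in range(trials):
        a, b = rng.normal(size=n), rng.normal(size=n)
        if not np.allclose(f(a, b), g(a, b), atol=tol):
            return False             # one mismatch rejects the rewrite
    return True                      # passed: accepted, but not proven

print(weakly_equivalent(original_kernel, transformed_kernel))  # True
```

Passing such a test does not prove functional equivalence, which is why the thesis calls the verification weak.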
650

Machine Learning applications in Hydrology

Zanoni, Maria Grazia 24 July 2023 (has links)
This work focuses on the use of Artificial Intelligence (AI), and in particular Machine Learning (ML), to tackle quality and quantity aspects of both surface water and groundwater. Traditionally, river water quality modelling and studies of contaminant transport in groundwater resort to the solution of physics-based (PB) equations, which aim to define a conceptual model of reality. The complexity of the processes involved, in some cases undisclosed or indiscernible, calls for careful parameterization by the modeler. For this reason, PB models can be limited by the complexity of the system, the availability of data, and the consequent need for simplifying assumptions. ML models, on the other hand, are data-driven and rely on algorithms to identify patterns in data. These techniques aim to extract a surrogate representation of reality by learning correlations present in the data. They can handle complex and non-linear relationships between variables and can be more flexible and adaptable to new environments. However, they are directly affected by the quality and quantity of the available data, requiring larger datasets than PB models. To explore the potential of these methods in addressing surface water and groundwater challenges, we experimented with different algorithms in three distinct applications. First, we compared two ML techniques for a catchment-scale water quality model; the better-performing one was then employed to fill gaps in environmental time series and to enhance the predictions of a PB model in the groundwater context.

In the first part of this work, a water quality model of the Adige River Basin is presented and discussed. For this purpose, a Random Forest (RF) and a Dense feed-forward Neural Network (DNN) were applied and compared to a standard linear regression (LR) approach, and an Importance Features Assessment (IFA) of the drivers was performed. The DNN proved more flexible and effective in detecting non-linear relationships than RF. LR performed at a satisfactory level, similar to RF and DNN, only when drivers linearly correlated with the observational variable were used, and only a limited fraction of the variability was explained; important drivers non-linearly related to the water quality variables of interest introduced a significant gain when the DNN was used. Among the variables investigated, water temperature and dissolved oxygen were modeled accurately using RF or DNN, and sufficient accuracy was obtained using the minimum information available, represented here by the Julian day of the measurements, which embodies the seasonality. The other variables showed a more balanced influence from the complete set of drivers, appreciable in the IFA procedure for DNN and RF, and a geogenic origin and anthropogenic disturbances were confirmed for the chemical contaminants. The proposed analysis, by means of ML algorithms and the IFA of the drivers, can be applied to predict the spatial and temporal variability of contaminant concentrations and physical parameters, and to identify the external forcings with the most relevant impacts on the dynamics of water quality variables.

The second part of the thesis investigates the use of the DNN algorithm to gap-fill time series of daily flow rate and daily water temperature from different sites downstream of the Careser glacier, in Pejo valley (northeastern Italy). An in-depth analysis of the streamflow response to the alterations of the glacier's hydrological regime was carried out through the reconstruction of the flow rate time series measured at a gauging station downstream of the glacier over the period 1976-2019. The water temperature time series were instead correlated with macro-invertebrate population statistics over the same period at four sites along the Careser stream, from the glacier to the reservoir immediately downstream of the Careser Baia gauging station. In a first step, the water temperature was modelled using only the Julian day and air temperature; subsequently, precipitation, the reconstructed flow rate, and evapotranspiration were introduced for a sensitivity analysis of the features. Using air temperature projections, the DNN model of the water temperature was also applied to simulate future scenarios up to 2050 under different emission pathways. The DNN proved to be a reliable tool for gap-filling the observational records, even for time series with many gaps. The reconstructed water temperatures allowed us to estimate the delay between the warming of air and water temperatures and the effect on invertebrate species in the glacier streams. The sensitivity analysis of the features was again key in quantifying the contributions of the available forcings, unveiling the combined effects of air temperature warming and declining flow rate on the water temperature increase. The in-depth analysis of the flow rate revealed, besides the dramatic reduction of streamflow, the anticipation of the summer peak and the negligible influence of precipitation on these alterations. Lastly, a framework for an ML-PB hybrid model of contaminant transport in groundwater is presented. In this procedure, the contaminant concentrations at several sampling locations are associated with physical parameters characterizing the aquifer. Through a synthetic case, a DNN model was employed to predict the physical parameters, and a simplified PB equation was used to project the concentration into the future. The analysis demonstrated the capability of the DNN to predict physical parameters by capitalizing on the information contained in the available concentration measurements.

The thesis is organized into 7 chapters. Chapter 1 gives a broad overview of Machine Learning, its applications in the water sciences, and the motivations and objectives of this research. Chapter 2 presents the basic concepts of Machine Learning, setting the stage for the subsequent developments in which ML is applied to surface and subsurface hydrology. Chapter 3 covers the Machine Learning and statistical algorithms employed for modeling in this research. Chapter 4 presents and discusses the Adige catchment case study. Chapter 5 presents the gap-filling procedure for the Careser case study, for both variables investigated. Chapter 6 presents the results of the hybrid Machine-Learning/Physics-Based groundwater model applied to synthetic data. Finally, Chapter 7 summarizes remarks and conclusions and outlines prospective work for these applications.
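As a rough illustration of the gap-filling step, a dense feed-forward network can be trained on days with measurements and then evaluated on the gaps; the sketch below uses only the Julian day and air temperature as drivers, mirroring the first modelling step, while the network size and the cyclic encoding of the day are illustrative assumptions:

```python
# Minimal sketch of DNN gap-filling for a water-temperature series using
# only Julian day and air temperature as drivers; the network size and
# the sine/cosine encoding of the Julian day are illustrative choices.
import numpy as np
from sklearn.neural_network import MLPRegressor

def fill_gaps(julian_day, air_temp, water_temp):
    """water_temp contains NaN where the gauge record has gaps."""
    # Encode the day of year cyclically so Dec 31 sits next to Jan 1.
    phase = 2 * np.pi * julian_day / 365.25
    X = np.column_stack([np.sin(phase), np.cos(phase), air_temp])

    observed = ~np.isnan(water_temp)
    model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                         random_state=0)
    model.fit(X[observed], water_temp[observed])   # train on measured days

    filled = water_temp.copy()
    filled[~observed] = model.predict(X[~observed])  # predict into the gaps
    return filled
```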
