91 | Solving adaptive multiple criteria problems by using artificial neural networks. Zhou, Yingqing (January 1992). No description available.
92 | The reconstruction of cloud-free remote sensing images: an artificial neural networks (ANN) approach. Xu, Siyao (20 July 2009). No description available.
93 | Artificial neural network training for semi-autonomous robotic surgery applications. Sneath, Evan B. (January 2014). No description available.
94 | Symbol grounding using neural networks. Horvitz, Richard P. (5 October 2012). No description available.
95 | Pattern-recognition scheduling. Yao, Xiaoqiang (January 1996). No description available.
96 | Combining genetic algorithms and artificial neural networks to select heterogeneous dispatching rules for a job shop system. Wilson, Daniel B. (January 1996). No description available.
97 | A hybrid algorithm to solve the traveling-salesman problem using operations research heuristics and artificial neural networks. Toure, Serge Eric (January 1996). No description available.
98 | Enhancing Privacy in Federated Learning: Mitigating Model Inversion Attacks through Selective Model Transmission and Algorithmic Improvements. Jonsson, Isak (January 2024).
This project investigates a sustainable way to construct and train machine learning models. Building effective models generally requires access to large amounts of data, but that data is often confidential and dispersed across different entities, and transmitting it all to a centralized computing location exposes it to security risks. Federated learning addresses this by training locally: instead of sending data to a central server, each party sends its locally trained AI model, and the local models are combined into a global model. In recent years, however, Model Inversion Attacks (MIA) have shown that training data can sometimes be extracted from trained models, which makes sending models instead of data a potential security risk in its own right. This project examines several Model Inversion Attack methodologies to better understand that risk. The studies examined demonstrated only limited extraction of training data and do not raise significant concerns at present, although future MIA research could change that. Sending only parts of each locally trained model to the global model neutralizes all of the examined attacks, but the results presented in this project show that building a usable federated learning model from partial model updates remains challenging. Achieving a good federated learning model required several adjustments to the algorithm, and those adjustments showed promising results for the future of federated learning.
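As a minimal sketch of the selective-transmission idea summarised above, the following Python snippet simulates federated averaging in which each client shares only a named subset of its layers with the aggregator. The toy model, the layer names, the number of clients, and the choice of which layers to share are illustrative assumptions, not the algorithm developed in the thesis.

```python
# Sketch of federated averaging with selective (partial) model transmission.
# Illustrative only: layer names, the shared/private split, and the aggregation
# rule are assumptions, not the thesis's actual algorithm.
import numpy as np

rng = np.random.default_rng(0)

def new_model():
    """A toy two-layer model represented as a dict of weight arrays."""
    return {
        "hidden": rng.normal(size=(4, 8)),
        "output": rng.normal(size=(8, 1)),
    }

def local_training(model, noise=0.01):
    """Stand-in for local training: perturb the weights slightly."""
    return {name: w + noise * rng.normal(size=w.shape) for name, w in model.items()}

# Only these layers are ever transmitted to the aggregator; the rest stay
# on-device, which is the part intended to blunt model inversion attacks.
SHARED_LAYERS = {"output"}

def client_update(global_model):
    local = local_training(dict(global_model))
    # Send only the shared subset of the locally trained model.
    return {name: w for name, w in local.items() if name in SHARED_LAYERS}

def aggregate(global_model, client_payloads):
    """Federated averaging restricted to the layers the clients actually sent."""
    updated = dict(global_model)
    for name in SHARED_LAYERS:
        updated[name] = np.mean([p[name] for p in client_payloads], axis=0)
    return updated

global_model = new_model()
for round_idx in range(3):
    payloads = [client_update(global_model) for _ in range(5)]  # 5 simulated clients
    global_model = aggregate(global_model, payloads)
    print(f"round {round_idx}: output-layer norm = {np.linalg.norm(global_model['output']):.3f}")
```

Keeping part of each model on-device is what "sending parts of the locally trained models" refers to above; the specific adjustments the thesis makes to keep the global model usable are not reproduced here.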
99 | Scalability Analysis of Synchronous Data-Parallel Artificial Neural Network (ANN) Learners. Sun, Chang (14 September 2018).
Artificial Neural Networks (ANNs) have been established as one of the most important algorithmic tools in the Machine Learning (ML) toolbox over the past few decades. ANNs' recent rise to widespread acceptance can be attributed to two developments: (1) the availability of large-scale training and testing datasets; and (2) the availability of new computer architectures for which ANN implementations are orders of magnitude more efficient. In this thesis, I present research on two aspects of the second development. First, I present a portable, open source implementation of ANNs in OpenCL and MPI. Second, I present performance and scaling models for ANN algorithms on state-of-the-art Graphics Processing Unit (GPU) based parallel compute clusters.
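The following is a minimal sketch of one synchronous data-parallel training step of the kind analysed in this thesis: every MPI rank computes a gradient on its own data shard, and the gradients are combined with an all-reduce before a shared update. It assumes mpi4py and a toy least-squares model; it is not the thesis's OpenCL/MPI implementation.

```python
# Sketch of a synchronous data-parallel training step: each rank computes a
# gradient on its own shard and the gradients are summed with MPI Allreduce.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, world = comm.Get_rank(), comm.Get_size()

rng = np.random.default_rng(rank)   # each rank draws a different data shard
w = np.zeros(10)                     # model parameters, identical on all ranks
lr = 0.1

for step in range(5):
    # Toy local gradient: least-squares gradient on this rank's mini-batch.
    X = rng.normal(size=(32, 10))
    y = rng.normal(size=32)
    local_grad = X.T @ (X @ w - y) / len(y)

    # Synchronous step: average gradients across all ranks before updating.
    global_grad = np.empty_like(local_grad)
    comm.Allreduce(local_grad, global_grad, op=MPI.SUM)
    w -= lr * global_grad / world

if rank == 0:
    print("parameter norm after 5 synchronous steps:", np.linalg.norm(w))
```

Run with, for example, `mpirun -n 4 python sync_step.py`; every rank ends with identical parameters because the update is applied only after the all-reduce completes, which is what makes the scheme synchronous.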
100 | Artificial neural networks modelling the prednisolone nanoprecipitation in microfluidic reactors. Ali, Hany S.M., Blagden, Nicholas, York, Peter, Amani, Amir, Brook, Toni (2009).
This study employs artificial neural networks (ANNs) to create a model that identifies relationships between variables affecting drug nanoprecipitation in microfluidic reactors. The input variables examined were the saturation level of prednisolone, solvent and antisolvent flow rates, microreactor inlet angle and internal diameter, while particle size was the single output. ANN software was used to analyse a dataset obtained by random selection of the variables. The developed model was then assessed against a separate set of validation data and showed good agreement with the observed results. The antisolvent flow rate was found to play the dominant role in determining final particle size.
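As an illustration of the kind of model described in this abstract, the sketch below fits a small feed-forward network that maps the five process variables to particle size. The data are synthetic and the variable ranges, architecture, and use of scikit-learn are assumptions; the study's own dataset and ANN software are not reproduced.

```python
# Sketch of an ANN regression model of the kind described above: five process
# variables in, particle size out. Data, ranges, and architecture are made up.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 200

# Randomly selected settings of the five input variables (ranges are illustrative).
X = np.column_stack([
    rng.uniform(1, 5, n),       # prednisolone saturation level
    rng.uniform(0.1, 2.0, n),   # solvent flow rate (mL/min)
    rng.uniform(0.5, 10.0, n),  # antisolvent flow rate (mL/min)
    rng.uniform(10, 90, n),     # microreactor inlet angle (degrees)
    rng.uniform(0.1, 1.0, n),   # internal diameter (mm)
])
# Synthetic particle size dominated by the antisolvent flow rate, plus noise.
y = 500.0 - 40.0 * X[:, 2] + 10.0 * X[:, 0] + rng.normal(0.0, 15.0, n)

X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

# Small feed-forward network: 5 inputs -> one hidden layer -> particle size.
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0),
)
model.fit(X_train, y_train)

# Assess the trained network on the held-out validation data.
print("validation R^2:", round(model.score(X_val, y_val), 3))
```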