31

HIGH SPEED IMAGING VIA ADVANCED MODELING

Soumendu Majee (10942896) 04 August 2021 (has links)
There is an increasing need to accurately image objects at high temporal resolution in order to analyze the underlying physical, chemical, or biological processes. In this thesis, we use advanced models that exploit the image structure and the measurement process to achieve an improved temporal resolution. The thesis is divided into three chapters, each corresponding to a different imaging application.

In the first chapter, we propose a novel method to localize neurons in fluorescence microscopy images. Accurate localization of neurons enables us to scan only the neuron locations instead of the full brain volume, and thus improves the temporal resolution of neuron activity monitoring. We formulate neuron localization as an inverse problem in which we reconstruct an image that encodes the locations of the neuron centers. The sparsity of the neuron centers serves as the prior model, while the forward model comprises shape models estimated from training data.

In the second chapter, we introduce Multi-Slice Fusion, a novel framework for incorporating advanced prior models into inverse problems spanning many dimensions, such as 4D computed tomography (CT) reconstruction. State-of-the-art 4D reconstruction methods use model-based iterative reconstruction (MBIR), whose quality depends critically on the prior model. Incorporating deep convolutional neural networks (CNNs) into the 4D reconstruction problem is difficult because of the computational cost and the lack of high-dimensional training data. Multi-Slice Fusion integrates the tomographic forward model with multiple low-dimensional CNN denoisers along different planes to produce a 4D regularized reconstruction. The improved regularization allows each time-frame to be reconstructed from fewer measurements, improving the temporal resolution of the reconstruction. Experimental results on sparse-view and limited-angle CT data demonstrate that Multi-Slice Fusion can substantially improve reconstruction quality relative to traditional methods while remaining practical to implement and train.

In the final chapter, we introduce CodEx, a synergistic combination of coded acquisition and non-convex Bayesian reconstruction for improving acquisition speed in computed tomography (CT). In an ideal "step-and-shoot" tomographic acquisition, the object is rotated to each desired angle and a view is taken. However, step-and-shoot acquisition is slow and can waste photons, so in practice the object typically rotates continuously, leading to blurred views and, in turn, reconstructions with severe motion artifacts. CodEx works by encoding the acquisition with a known binary code that the reconstruction algorithm then inverts. The CodEx reconstruction method uses the alternating direction method of multipliers (ADMM) to split the inverse problem into iterative deblurring and reconstruction sub-problems, making reconstruction practical. CodEx allows for fast data acquisition, leading to good temporal resolution in the reconstruction.
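The ADMM splitting mentioned above alternates between two sub-problem solves and a dual update. As a hedged illustration of that pattern only (not the actual CodEx deblurring/reconstruction solver), the sketch below applies the same splitting to a toy scalar problem, minimizing 0.5*(x - 3)^2 + |x|, whose minimizer is x = 2:

```python
# Toy ADMM sketch: minimize f(x) + g(z) subject to x = z, with
# f(x) = 0.5*(x - 3)**2 as a stand-in "data fit" term and
# g(z) = |z| as a stand-in "prior" term. In CodEx the two
# sub-problems are instead a deblurring step and a tomographic
# reconstruction step; only the splitting pattern is shown here.

def soft_threshold(v, t):
    """Proximal operator of t*|.| (the g sub-problem in closed form)."""
    if v > t:
        return v - t
    if v < -t:
        return v + t
    return 0.0

def admm_toy(rho=1.0, iters=200):
    x = z = u = 0.0
    for _ in range(iters):
        # x-update: argmin_x 0.5*(x-3)^2 + (rho/2)*(x - z + u)^2
        x = (3.0 + rho * (z - u)) / (1.0 + rho)
        # z-update: argmin_z |z| + (rho/2)*(x - z + u)^2
        z = soft_threshold(x + u, 1.0 / rho)
        # dual update on the scaled multiplier
        u += x - z
    return x

print(admm_toy())  # 2.0, the minimizer of 0.5*(x-3)^2 + |x|
```

The splitting makes each step cheap: the x-update is a quadratic solve and the z-update is a closed-form proximal operator, mirroring how ADMM keeps the deblurring and reconstruction sub-problems individually tractable.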
32

Applications of Deep Neural Networks in Computer-Aided Drug Design

Ahmadreza Ghanbarpour Ghouchani (10137641) 01 March 2021 (has links)
Deep neural networks (DNNs) have gained tremendous attention in recent years due to their outstanding performance in solving many problems in different fields of science and technology. The field is growing rapidly and attracting many researchers. The ability of DNNs to learn new concepts with minimal instruction facilitates applying current DNN-based methods to new problems. In this dissertation, three DNN-based methods are discussed, each tackling a different problem in the field of computer-aided drug design.

The first method addresses the prediction of hydration properties from 3D structures of proteins without requiring molecular dynamics simulations. Water plays a major role in protein-ligand interactions, and identifying the (de)solvation contributions of water molecules can assist drug design. Two model architectures are presented for predicting the hydration information of proteins. The performance of the methods is compared with conventional methods and with experimental data, and their applications in ligand optimization and pose prediction are shown.

The design of de novo molecules has always been of interest in drug discovery. The second method describes a generative model that learns to derive features from protein sequences to design de novo compounds. We show how the model can generate molecules similar to known ligands for targets the model has not seen before, and compare it with benchmark generative models.

Finally, it is demonstrated how DNNs can learn to predict secondary structure propensity values derived from NMR ensembles. Secondary structure propensities are important in identifying flexible regions in proteins. Protein flexibility plays a major role in drug-protein binding, and identifying such regions can assist in the development of methods for ligand binding prediction. The prediction performance of the method is shown for several proteins with two or more known secondary structure conformations.
33

Karst Database Implementation in Minnesota: Analysis of Sinkhole Distribution

Gao, Y., Alexander, E. C., Barnes, R. J. 01 May 2005 (has links)
This paper presents the overall sinkhole distributions and conducts hypothesis tests of sinkhole distributions and sinkhole formation using data stored in the Karst Feature Database (KFD) of Minnesota. Nearest neighbor analysis (NNA) was extended to include different orders of NNA, different scales of concentrated zones of sinkholes, and directions to the nearest sinkholes. The statistical results, along with the sinkhole density distribution, indicate that sinkholes tend to form in highly concentrated zones rather than as scattered individuals. The pattern changes from clustered to random to regular as the scale of the analysis decreases from 10-100 km² to 5-30 km² to 2-10 km². Two hypotheses may explain this phenomenon: (1) areas in the highly concentrated zones of sinkholes have similar geologic and topographical settings that favor sinkhole formation; (2) existing sinkholes change the hydraulic gradient in the surrounding area and increase the solution and erosional processes that eventually form more new sinkholes.
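First-order nearest neighbor analysis of this kind is commonly based on the Clark-Evans statistic, which compares the mean observed nearest-neighbor distance with the value expected for a random (Poisson) pattern of the same density. As a minimal hedged sketch (the KFD study extends NNA to higher orders, multiple scales, and directions, and real analyses also apply edge corrections, none of which are shown here):

```python
import math

def nearest_neighbor_ratio(points, area):
    """Clark-Evans ratio R for first-order nearest neighbor analysis.

    R = (mean observed NN distance) / (0.5 / sqrt(density)).
    R < 1 suggests clustering, R near 1 randomness, R > 1 regularity.
    Edge corrections are deliberately omitted in this sketch.
    """
    n = len(points)
    nn_dists = []
    for i, (xi, yi) in enumerate(points):
        d = min(math.hypot(xi - xj, yi - yj)
                for j, (xj, yj) in enumerate(points) if j != i)
        nn_dists.append(d)
    observed = sum(nn_dists) / n
    expected = 0.5 / math.sqrt(n / area)  # Poisson expectation
    return observed / expected

# Three tight hypothetical "sinkhole" clusters in a 10 km x 10 km window
clustered = [(0, 0), (0.1, 0), (0, 0.1),
             (5, 5), (5.1, 5), (5, 5.1),
             (9, 0), (9.1, 0), (9, 0.1)]
# A regular 5 x 5 grid with 1 km spacing in a 5 km x 5 km window
grid = [(x, y) for x in range(5) for y in range(5)]

print(nearest_neighbor_ratio(clustered, area=100.0))  # well below 1 (clustered)
print(nearest_neighbor_ratio(grid, area=25.0))        # 2.0 (regular)
```

Running the same statistic over windows of decreasing size is one way a pattern can shift from clustered to random to regular, as the paper reports for the Minnesota sinkholes.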
34

Co-designing Communication Middleware and Deep Learning Frameworks for High-Performance DNN Training on HPC Systems

Awan, Ammar Ahmad 10 September 2020 (has links)
No description available.
35

Permanganate Reaction Kinetics and Mechanisms and Machine Learning Application in Oxidative Water Treatment

Zhong, Shifa 21 June 2021 (has links)
No description available.
36

Probing Human Category Structures with Synthetic Photorealistic Stimuli

Chang Cheng, Jorge 08 September 2022 (has links)
No description available.
37

Study of evaluation metrics while predicting the yield of lettuce plants in indoor farms using machine learning models

Chedayan, Divya, Geo Fernandez, Harry January 2023 (has links)
A key challenge for maximizing the world's food supply is crop yield prediction. In this study, three machine learning models are used to predict the fresh weight (yield) of lettuce plants grown hydroponically in indoor vertical farms: a support vector regressor (SVR), a random forest regressor (RFR), and a deep neural network (DNN). Climate data, nutrient data, and plant growth data are passed as input to train the models to learn the growth pattern from the available features. The evaluation metrics studied are Mean Squared Error (MSE), Root Mean Squared Error (RMSE), Mean Absolute Error (MAE), R-squared, and adjusted R-squared. The results show that the random forest regressor with all features is the best model, achieving the lowest cross-validated MAE and a good cross-validated adjusted R-squared, indicating minimal prediction error. It is followed closely by the DNN model, with only minor differences in the resulting values. The SVR model performed poorly, with an error too large to be acceptable in this scenario. We also compared the evaluation metrics listed above and settled on cross-validated MAE and cross-validated adjusted R-squared: MAE is less sensitive to outliers, while adjusted R-squared captures how much of the variance in the target variable the predictors explain and adjusts the metric to guard against overfitting.
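The metrics named above are related by simple formulas; in particular, adjusted R-squared rescales R-squared by the number of predictors so that adding uninformative features is penalized. A minimal sketch of these metrics in pure Python, using hypothetical toy values rather than the thesis's lettuce data:

```python
import math

def mae(y, p):
    """Mean Absolute Error."""
    return sum(abs(a - b) for a, b in zip(y, p)) / len(y)

def mse(y, p):
    """Mean Squared Error."""
    return sum((a - b) ** 2 for a, b in zip(y, p)) / len(y)

def rmse(y, p):
    """Root Mean Squared Error."""
    return math.sqrt(mse(y, p))

def r2(y, p):
    """R-squared: 1 - SSE/SST."""
    mean_y = sum(y) / len(y)
    sse = sum((a - b) ** 2 for a, b in zip(y, p))
    sst = sum((a - mean_y) ** 2 for a in y)
    return 1.0 - sse / sst

def adjusted_r2(y, p, n_features):
    """Adjusted R-squared: penalizes the number of predictors."""
    n = len(y)
    return 1.0 - (1.0 - r2(y, p)) * (n - 1) / (n - n_features - 1)

# Hypothetical fresh-weight targets vs. predictions (illustrative only)
y_true = [1.0, 2.0, 3.0, 4.0]
y_pred = [1.1, 1.9, 3.2, 3.8]
print(round(mae(y_true, y_pred), 3))                         # 0.15
print(round(r2(y_true, y_pred), 3))                          # 0.98
print(round(adjusted_r2(y_true, y_pred, n_features=1), 3))   # 0.97
```

Note how adjusted R-squared (0.97) is below plain R-squared (0.98) even with a single predictor; with many predictors and few samples the gap widens, which is why the study reports the adjusted variant.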
38

EMONAS: Evolutionary Multi-objective Neuron Architecture Search of Deep Neural Network

Feng, Jiayi January 2023 (has links)
Customized Deep Neural Network (DNN) accelerators have become increasingly popular in applications ranging from autonomous driving and natural language processing to healthcare and finance. However, deploying DNNs directly on embedded peripherals under a real-time operating system (RTOS) is difficult because of the mismatch between the complexity of DNNs and the simplicity of embedded devices. As a result, DNN implementation on embedded devices requires customized accelerators with tailored hardware to manage the large number of computations, the latency, and the power consumption. Moreover, the computational capacity provided by powerful microprocessors or graphics processing units (GPUs) is necessary to unleash the full potential of DNNs, but such resources are often not available on embedded devices. In this thesis, we propose an innovative method to evaluate and improve the efficiency of DNN implementation within the constraints of resource-limited embedded devices. Evolutionary Multi-Objective Neuron Architecture Search-Binary One Optimization (EMONAS-BOO) optimizes both image classification accuracy and the novel Binary One Optimization (BOO) objective using multiple objective optimization (MOO) methods. EMONAS-BOO automates neural network search and training, and the diversity of the searched architectures is guaranteed by an evolutionary algorithm consisting of tournament selection, polynomial mutation, and point crossover. BOO evaluates the difficulty of implementing a DNN on resource-limited embedded peripherals, employing a binary format for the DNN weights. A deeper exploitation of BOO significantly improves not only computational efficiency but also memory storage and power dissipation. It is based on reducing the number of binary 1s in the weights that must be computed and stored: fewer binary 1s mean fewer arithmetic operations and thus simpler network structures. In addition, viewed from a digital-circuit waveform perspective, the embedded system interpreting the network registers more zero weights, which lowers the voltage transition frequency and in turn improves power efficiency. The proposed EMONAS employs the MOO method to optimize two objectives: image classification accuracy and BOO. This approach enables EMONAS to outperform manually constructed and randomly searched DNNs. Notably, 12 out of 100 distinct DNNs maintained their image classification accuracy while also exhibiting superior BOO performance. EMONAS also ensures automated search and training of DNNs, and it achieved significant reductions in key performance metrics: compared with random search, the evolutionary-searched BOO was lowered by up to 85.1%, parameter size by 85.3%, and FLOPs by 83.3%. These improvements were accomplished without sacrificing image classification accuracy, which increased by 8.0%. These results demonstrate that EMONAS is an excellent choice for optimizing novel objectives, and that greater multi-objective optimization performance can be obtained when computational resources are adequate.
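The core idea behind BOO, reducing the number of binary 1s stored and computed, can be sketched with a simple bit count. The exact objective is defined in the thesis; the version below is only an illustration, and the 8-bit two's-complement quantization of the weights is an assumption made here:

```python
def binary_ones(weights, bits=8):
    """Count the binary 1s across a list of integer-quantized weights.

    Illustrates the intuition behind BOO: fewer set bits in the stored
    weights mean fewer arithmetic operations and fewer voltage
    transitions. The thesis's actual objective may differ; 8-bit
    two's-complement quantization is an assumption of this sketch.
    """
    mask = (1 << bits) - 1
    return sum(bin(w & mask).count("1") for w in weights)

# Hypothetical quantized weight vectors: the zero-heavy network is
# cheaper by this measure, matching the power-efficiency argument.
dense = [-1, 127, 63, -64]   # -1 -> 0xFF (8 ones), 127 -> 7, 63 -> 6, -64 -> 2
sparse = [0, 1, 0, 2]        # 0 + 1 + 0 + 1 set bits
print(binary_ones(dense))    # 23
print(binary_ones(sparse))   # 2
```

An evolutionary search with this count as a second objective would then prefer, among architectures of equal accuracy, those whose quantized weights carry fewer set bits.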
39

Deep Learning for Compressive SAR Imaging with Train-Test Discrepancy

McCamey, Morgan R. 21 June 2021 (has links)
No description available.
40

AI on the Edge with CondenseNeXt: An Efficient Deep Neural Network for Devices with Constrained Computational Resources

Kalgaonkar, Priyank B. 08 1900 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / The research presented in this thesis proposes a neoteric variant of deep convolutional neural network architecture, CondenseNeXt, designed specifically for ARM-based embedded computing platforms with constrained computational resources. CondenseNeXt is an improved version of CondenseNet, the baseline architecture whose roots can be traced back to ResNet. CondenseNeXt replaces the group convolutions in CondenseNet with depthwise separable convolutions and introduces group-wise pruning, a model compression technique, to prune redundant and insignificant elements that are either irrelevant or do not affect the performance of the network upon removal. Cardinality, a new dimension in addition to the existing spatial dimensions, and a class-balanced focal loss function, a weighting factor inversely proportional to the number of samples, have been incorporated into the design of CondenseNeXt's algorithm to relieve the harsh effects of pruning. Furthermore, extensive analyses of this novel CNN architecture were performed on three benchmark image datasets, CIFAR-10, CIFAR-100, and ImageNet, by deploying the trained weights onto an ARM-based embedded computing platform, the NXP BlueBox 2.0, for real-time image classification. The outputs are observed in real time in RTMaps Remote Studio's console to verify the correctness of the predicted classes. CondenseNeXt achieves state-of-the-art image classification performance on the three benchmark datasets, with CIFAR-10 (4.79% top-1 error), CIFAR-100 (21.98% top-1 error), and ImageNet (7.91% single-model, single-crop top-5 error), and up to a 59.98% reduction in forward FLOPs compared to CondenseNet. CondenseNeXt can also achieve a final trained model size of 2.9 MB, at the cost of a 2.26% loss in accuracy. It thus performs image classification on ARM-based computing platforms with outstanding efficiency, without requiring CUDA-enabled GPU support.
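The savings from replacing standard (or group) convolutions with depthwise separable convolutions follow from a simple parameter count: a standard k x k convolution costs k*k*C_in*C_out weights, while a depthwise separable one costs k*k*C_in (one kernel per input channel) plus C_in*C_out (a 1x1 pointwise convolution). A back-of-envelope sketch, with layer sizes that are illustrative rather than CondenseNeXt's actual configuration:

```python
def standard_conv_params(k, c_in, c_out):
    # One k x k kernel for every (input channel, output channel) pair.
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    # One k x k depthwise kernel per input channel, then a 1x1
    # pointwise convolution mixing channels.
    return k * k * c_in + c_in * c_out

# Illustrative layer: 3x3 kernel, 64 input channels, 128 output channels
std = standard_conv_params(3, 64, 128)        # 73728
sep = depthwise_separable_params(3, 64, 128)  # 8768
print(std, sep, round(sep / std, 3))          # 73728 8768 0.119
```

For a 3x3 kernel the ratio approaches 1/9 + 1/C_out, i.e. roughly an order-of-magnitude reduction in weights and multiply-accumulates, which is the kind of saving that makes the architecture practical on constrained ARM platforms.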
