111

Artificial intelligence for segmentation of nuclei from transmitted images

Klintberg Sakal, Norah January 2020 (has links)
State-of-the-art fluorescent imaging research is strictly limited to eight fluorophore labels during the study of intracellular interactions among organelles. The number of excited fluorophore colors is restricted by overlap in the narrow spectra of visible wavelengths, and telling the overlapping signals apart requires considerable analysis effort. Significant overlap already occurs with more than four fluorophores, leaving researchers with a small number of labels and the hard decision of which cellular labels to prioritize. Beyond the physical limitations of fluorescent labeling, the labeling itself causes behavioral abnormalities due to sample perturbation. In addition, the labeling dye or dye-conjugated antibodies can cause phototoxicity and photobleaching, limiting the timescale of live-cell imaging. Nontoxic imaging modalities such as transmitted-light microscopy, including bright-field and phase-contrast methods, are available but come nowhere near the specificity achieved with fluorophore labeling. An approach that could increase the number of organelles studied simultaneously with fluorophore labels, while being as cost-effective and nontoxic as transmitted-light microscopy, would be an invaluable tool in the quest to enhance knowledge of cellular studies of organelles. Here we present a deep learning solution: convolutional neural networks built to predict the fluorophore labeling effect on the nucleus from a transmitted-light input. This solution frees a fluorescent channel for another marker and eliminates the process of labeling the nucleus with dye or dye-conjugated antibodies.
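The core operation such an image-to-image network stacks is the 2-D convolution of an input image with a learned kernel. As a purely illustrative sketch (plain Python, not the thesis's actual network), a minimal "valid"-mode 2-D convolution looks like this:

```python
def conv2d(image, kernel):
    """'Valid' 2-D convolution: slide the kernel over the image and take
    a weighted sum at each position. Deep networks stack many of these
    (with learned kernels) to map a transmitted-light image to a
    predicted nuclear-stain image."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [[sum(image[i + u][j + v] * kernel[u][v]
                 for u in range(kh) for v in range(kw))
             for j in range(out_w)]
            for i in range(out_h)]
```

In a real model the kernels are learned from paired transmitted-light and fluorescence images rather than fixed by hand.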
112

Detection of Humans in Video Streams Using Convolutional Neural Networks

Wang, Huijie January 2017 (has links)
This thesis focuses on human detection in video streams using Convolutional Neural Networks (CNNs). In recent years, CNNs have become common methods in various computer vision problems, and object detection is one popular application; their performance on detection has improved rapidly in both accuracy and speed. This thesis addresses a specific sub-domain of detection: human detection. The problem is made more challenging because the data are extracted from video streams captured by a head-mounted camera and therefore include difficult viewpoints and strong motion blur. Considering both accuracy and speed, we choose two models with typical structures, You Only Look Once (YOLO) and Single Shot MultiBox Detector (SSD), to examine how robustly the models perform on the human domain with motion blur, and how the differences between the structures influence the results. Several experiments are carried out in this thesis. With a better-designed structure, SSD outperforms YOLO in various aspects. This is further confirmed when YOLO and SSD300 are fine-tuned on human data in the Pascal VOC 2012 trainval dataset, showing the efficiency of SSD when trained with fewer classes. As for the motion blur problem, the experiments show that SSD300 learns blurred patterns well. The structure of SSD300 is further tested with regard to the design of the default boxes and its performance at different scales and locations. The results show that the SSD model has superior performance for online detection in video streams, and with a more customized structure it has the potential to achieve even better results.
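The default-box design that the SSD300 experiments probe assigns each feature map a scale by a simple linear rule, with per-box aspect ratios on top. A minimal sketch (plain Python; the s_min/s_max defaults follow the original SSD paper, and the aspect-ratio set here is illustrative):

```python
import math

def default_box_scales(num_feature_maps, s_min=0.2, s_max=0.9):
    """Scale assigned to each of the m feature maps: scales are spread
    linearly between s_min and s_max, so early (high-resolution) maps
    get small boxes and late maps get large ones."""
    m = num_feature_maps
    return [s_min + (s_max - s_min) * (k - 1) / (m - 1)
            for k in range(1, m + 1)]

def default_box_shapes(scale, aspect_ratios=(1.0, 2.0, 0.5)):
    """(width, height) of each default box for one feature map,
    relative to the input image size."""
    return [(scale * math.sqrt(a), scale / math.sqrt(a))
            for a in aspect_ratios]
```

Tuning these scales and aspect ratios to the sizes humans actually occupy in head-mounted footage is one way the thesis's "more customized structure" could improve results.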
113

Electricity Price Forecasting Using a Convolutional Neural Network

Winicki, Elliott 01 March 2020 (has links) (PDF)
Many methods have been used to forecast real-time electricity prices in various regions around the world. The problem is difficult because of market volatility affected by a wide range of exogenous variables, from weather to natural gas prices, and accurate price forecasting could help both suppliers and consumers plan effective business strategies. Statistical analysis with autoregressive moving average methods and computational intelligence approaches using artificial neural networks dominate the landscape. Given the rising popularity of convolutional neural networks (CNNs) for problems with large numbers of inputs, and their conspicuous absence from the current literature in this field, a CNN is applied to this time series forecasting problem and shows some promising results. This document fulfills both MSEE Master's Thesis and BSCPE Senior Project requirements.
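The building block that lets a CNN consume a price series is the 1-D convolution: each output is a weighted sum over a sliding window of recent values, with the weights learned during training. A minimal illustrative sketch (plain Python, fixed weights rather than learned ones):

```python
def conv1d(series, kernel):
    """Valid-mode 1-D convolution over a time series: slide the kernel
    along the series and take the dot product at each position. In a
    CNN forecaster the kernel weights are learned, and many filters
    are stacked with nonlinearities in between."""
    k = len(kernel)
    return [sum(series[i + j] * kernel[j] for j in range(k))
            for i in range(len(series) - k + 1)]

# Example: a 2-tap averaging kernel smooths short-term price noise.
prices = [30.0, 34.0, 31.0, 35.0]
smoothed = conv1d(prices, [0.5, 0.5])
```

Exogenous drivers such as weather or natural gas prices would enter a real model as additional input channels alongside the price series.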
114

Neural Network Based Diagnosis of Breast Cancer Using the Breakhis Dataset

Dalke, Ross E 01 June 2022 (has links) (PDF)
Breast cancer is the most common type of cancer in the world, and it is the second deadliest cancer for females. In the fight against breast cancer, early detection plays a large role in saving people's lives. In this work, an image classifier is designed to diagnose breast tumors as benign or malignant. The classifier is built with a neural network and trained on the BreakHis dataset. After creating the initial design, a variety of methods are used to try to improve the performance of the classifier, including preprocessing, increasing the number of training epochs, changing the network architecture, and data augmentation. Preprocessing includes changing image resolution and trying grayscale images rather than RGB. The tested network architectures include VGG16, ResNet50, and a custom structure. The final algorithm creates 50 classifier models and keeps the best one. Classifier designs are judged primarily on the classification accuracies of their best and median models, and also on how consistently they produce their highest-performing models. The final classifier design has a median accuracy of 93.62% and a best accuracy of 96.35%; of the 50 models generated, 46 performed with over 85% accuracy. The final design is compared to the work of two groups of researchers who created similar classifiers for the same dataset: it achieves similar performance to the classifier made by the first group and performs better than the classifier from the second. Finally, lessons learned and future steps are discussed.
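The "train 50 models, keep the best" selection loop, together with the best/median reporting the abstract describes, can be sketched as follows. The training routine here is a random-number placeholder (the real work trains a CNN on BreakHis); only the selection logic is the point:

```python
import random

def train_once(seed):
    """Placeholder for one full training run: returns a model handle and
    its validation accuracy. A real run would train a CNN with this
    random seed and evaluate it on held-out BreakHis images."""
    rng = random.Random(seed)
    accuracy = 0.85 + 0.12 * rng.random()   # stand-in accuracy
    return {"seed": seed}, accuracy

def best_of_n(n=50):
    """Train n models, report the best one plus the median accuracy,
    mirroring how the thesis judges each classifier design."""
    runs = [train_once(seed) for seed in range(n)]
    accuracies = sorted(acc for _, acc in runs)
    best_model, best_acc = max(runs, key=lambda r: r[1])
    median_acc = accuracies[n // 2]
    return best_model, best_acc, median_acc
```

Reporting the median alongside the best guards against a design that only occasionally produces a good model, which is exactly the consistency criterion the thesis applies.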
115

A multi-biometric iris recognition system based on a deep learning approach

Al-Waisy, Alaa S., Qahwaji, Rami S.R., Ipson, Stanley S., Al-Fahdawi, Shumoos, Nagem, Tarek A.M. 24 October 2017 (has links)
Yes / Multimodal biometric systems have been widely applied in many real-world applications due to their ability to deal with a number of significant limitations of unimodal biometric systems, including sensitivity to noise, population coverage, intra-class variability, non-universality, and vulnerability to spoofing. In this paper, an efficient real-time multimodal biometric system is proposed, based on building deep learning representations for images of both the right and left irises of a person and fusing the results with a ranking-level fusion method. The proposed system, called IrisConvNet, combines a Convolutional Neural Network (CNN) with a Softmax classifier to extract discriminative features from the input image (the localized iris region) without any domain knowledge, and then classifies it into one of N classes. In this work, a discriminative CNN training scheme based on a combination of the back-propagation algorithm and the mini-batch AdaGrad optimization method is proposed for weight updating and learning rate adaptation, respectively. In addition, other training strategies (e.g., the dropout method and data augmentation) are employed in order to evaluate different CNN architectures. The performance of the proposed system is tested on three public datasets collected under different conditions: the SDUMLA-HMT, CASIA-Iris-V3 Interval, and IITD iris databases. The results obtained outperform other state-of-the-art approaches (e.g., Wavelet transform, Scattering transform, Local Binary Pattern, and PCA), achieving a Rank-1 identification rate of 100% on all the employed databases and a recognition time of less than one second per person.
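The learning-rate adaptation the abstract attributes to AdaGrad works by accumulating each parameter's squared gradients and dividing the step size by their square root, so frequently updated weights take smaller steps. A minimal per-parameter sketch (plain Python; mini-batching and the rest of the training loop are omitted):

```python
import math

def adagrad_step(weights, grads, accum, lr=0.01, eps=1e-8):
    """One AdaGrad update. accum holds the running sum of squared
    gradients per parameter; the effective learning rate for each
    parameter is lr / (sqrt(accum) + eps)."""
    new_accum = [a + g * g for a, g in zip(accum, grads)]
    new_weights = [w - lr * g / (math.sqrt(a) + eps)
                   for w, g, a in zip(weights, grads, new_accum)]
    return new_weights, new_accum
```

In mini-batch training, `grads` would be the gradient averaged over a batch of iris images, and the accumulators persist across batches, which is what makes the learning rate adapt over time.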
116

Deep learning technology for predicting solar flares from Geostationary Operational Environmental Satellite (GOES) data

Nagem, Tarek A.M., Qahwaji, Rami S.R., Ipson, Stanley S., Wang, Z., Al-Waisy, Alaa S. January 2018 (has links)
Yes / Solar activity, particularly solar flares, can have significant detrimental effects on both space-borne and ground-based systems and industries, leading to subsequent impacts on our lives. As a consequence, there is much current interest in creating systems that can make accurate solar flare predictions. This paper aims to develop a novel framework to predict solar flares using the Geostationary Operational Environmental Satellite (GOES) X-ray flux 1-minute time series data. These data are fed to three integrated neural networks to deliver the predictions. The first neural network (NN) converts the GOES X-ray flux 1-minute data to Markov Transition Field (MTF) images. The second neural network uses an unsupervised feature learning algorithm to learn the MTF image features. The third neural network uses both the learned features and the MTF images, which are processed by a Deep Convolutional Neural Network to generate the flare predictions. To the best of our knowledge, this is the first flare prediction system based entirely on the analysis of pre-flare GOES X-ray flux data. The results are evaluated using several performance measurement criteria that are presented in this paper.
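The Markov Transition Field encoding turns a 1-D series into an image a CNN can consume: quantise the series into amplitude bins, estimate the first-order Markov transition matrix between bins, then fill an N-by-N image whose (i, j) entry is the transition probability between the bins of samples i and j. A minimal sketch (plain Python; the bin count here is an illustrative choice, not the paper's):

```python
def markov_transition_field(series, n_bins=4):
    """Build the MTF image of a time series: MTF[i][j] = W[b_i][b_j],
    where b_i is the amplitude bin of sample i and W is the row-
    normalised first-order transition matrix between bins."""
    lo, hi = min(series), max(series)
    width = (hi - lo) / n_bins or 1.0            # guard for a flat series
    bins = [min(int((x - lo) / width), n_bins - 1) for x in series]
    counts = [[0.0] * n_bins for _ in range(n_bins)]
    for a, b in zip(bins, bins[1:]):             # consecutive-sample transitions
        counts[a][b] += 1.0
    W = [[c / sum(row) if sum(row) else 0.0 for c in row] for row in counts]
    return [[W[bi][bj] for bj in bins] for bi in bins]
```

Applied to the GOES X-ray flux series, the resulting image exposes temporal transition structure in a spatial form, which is what lets the later convolutional stages treat flare prediction as image analysis.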
117

Quantum ReLU activation for Convolutional Neural Networks to improve diagnosis of Parkinson’s disease and COVID-19

Parisi, Luca, Neagu, Daniel, Ma, R., Campean, Felician 17 September 2021 (has links)
Yes / This study introduces a quantum-inspired computational paradigm to address an unresolved problem of Convolutional Neural Networks (CNNs) that use the Rectified Linear Unit (ReLU) activation function (AF): the 'dying ReLU'. This problem impacts accuracy and reliability in image classification tasks for critical applications, such as healthcare. The proposed approach builds on the classical ReLU and Leaky ReLU, applying the quantum principles of entanglement and superposition at a computational level to derive two novel AFs, the 'Quantum ReLU' (QReLU) and the 'modified-QReLU' (m-QReLU). The proposed AFs were validated when coupled with a CNN on seven image datasets in classification tasks involving the detection of COVID-19 and Parkinson's Disease (PD). The out-of-sample/test classification accuracy and reliability (precision, recall and F1-score) of the CNN were compared against those of the same classifier using nine classical AFs, including ReLU-based variations. Findings indicate higher accuracy and reliability for the CNN when using either QReLU or m-QReLU on five of the seven datasets evaluated. Whilst retaining the best classification accuracy and reliability for handwritten digit recognition on the MNIST dataset (ACC = 99%, F1-score = 99%), avoiding the 'dying ReLU' problem via the proposed quantum AFs improved recognition of PD-related patterns from spiral drawings, with the QReLU especially achieving the highest classification accuracy and reliability (ACC = 92%, F1-score = 93%). With this increased accuracy and reliability, QReLU and m-QReLU can therefore aid critical image classification tasks, such as diagnosis of COVID-19 and PD.
/ The authors declare that this was the result of a HEIF 2020 University of Bradford COVID-19 response-funded project ‘Quantum ReLU-based COVID-19 Detector: A Quantum Activation Function for Deep Learning to Improve Diagnostics and Prognostics of COVID-19 from Non-ionising Medical Imaging’. However, the funding source was not involved in conducting the study and/or preparing the article.
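The 'dying ReLU' problem the quantum AFs target is easy to see from the classical activations they build on: ReLU's gradient is exactly zero for negative inputs, so a unit pushed into that regime stops receiving updates, whereas Leaky ReLU keeps a small negative-side slope. A sketch of those two baselines (the exact QReLU and m-QReLU formulations are in the paper and are not reproduced here):

```python
# Classical ReLU: negative inputs give zero output AND zero gradient,
# so a unit stuck in the negative regime can no longer learn ("dies").
def relu(x):
    return max(0.0, x)

def relu_grad(x):
    return 1.0 if x > 0 else 0.0

# Leaky ReLU keeps a small slope alpha on the negative side, so the
# gradient never vanishes completely and the unit can recover.
def leaky_relu(x, alpha=0.01):
    return x if x > 0 else alpha * x

def leaky_relu_grad(x, alpha=0.01):
    return 1.0 if x > 0 else alpha
```

The proposed quantum AFs pursue the same goal, keeping useful gradient flow on the negative side, via a quantum-inspired formulation rather than a fixed leak slope.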
118

Robustness of Convolutional Neural Networks for Surgical Tool Classification in Laparoscopic Videos from Multiple Sources and of Multiple Types: A Systematic Evaluation

Tamer, Abdulbaki Alshirbaji, Jalal, Nour Aldeen, Docherty, Paul David, Neumuth, Thomas, Möller, Knut 27 March 2024 (has links)
Deep learning approaches have been explored for surgical tool classification in laparoscopic videos. Convolutional neural networks (CNN) are prominent among the proposed approaches. However, concerns about the robustness and generalisability of CNN approaches have been raised. This paper evaluates CNN generalisability across different procedures and in data from different surgical settings. Moreover, generalisation performance to new types of procedures is assessed and insights are provided into the effect of increasing the size and representativeness of training data on the generalisation capabilities of CNN. Five experiments were conducted using three datasets. The DenseNet-121 model showed high generalisation capability within the dataset, with a mean average precision of 93%. However, the model performance diminished on data from different surgical sites and across procedure types (27% and 38%, respectively). The generalisation performance of the CNN model was improved by increasing the quantity of training videos on data of the same procedure type (the best improvement was 27%). These results highlight the importance of evaluating the performance of CNN models on data from unseen sources in order to determine their real classification capabilities. While the analysed CNN model yielded reasonably robust performance on data from different subjects, it showed a moderate reduction in performance for different surgical settings.
119

3D Position Estimation using Deep Learning

Pedrazzini, Filippo January 2018 (has links)
The estimation of the 3D position of an object is one of the most important topics in the computer vision field. Since the final aim is to create automated solutions that can localize and detect objects from images, new high-performing models and algorithms are needed. Due to the lack of relevant information in single 2D images, approximating the 3D position is a complex problem. This thesis describes a method based on two deep learning models that tackle this task: the image net and the temporal net. The former is a deep convolutional neural network intended to extract meaningful features from the images, while the latter exploits temporal information to reach a more robust prediction. This solution reaches a better Mean Absolute Error than existing computer vision methods under different conditions and configurations. A new data-driven pipeline has been created to process 2D videos and extract the 3D information of an object. The same architecture can be generalized to different domains and applications.
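The Mean Absolute Error used to compare the methods averages the absolute component-wise deviation between predicted and ground-truth 3D positions. A minimal sketch (plain Python; treating each position as an (x, y, z) triple is an illustrative convention):

```python
def mean_absolute_error(predicted, actual):
    """MAE over paired 3D position estimates: average of |p - a| across
    every coordinate of every (x, y, z) pair."""
    errors = [abs(p - a)
              for pred, act in zip(predicted, actual)
              for p, a in zip(pred, act)]
    return sum(errors) / len(errors)
```

A lower MAE from the combined image-net/temporal-net pipeline than from per-frame baselines is exactly the improvement the thesis reports.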
120

Deep Transferable Intelligence for Wearable Big Data Pattern Detection

Gangadharan, Kiirthanaa 08 1900 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Biomechanical Big Data is of great significance to precision health applications, among which we take special interest in Physical Activity Detection (PAD). In this study, we have performed extensive research on deep learning-based PAD from biomechanical big data, focusing on the challenges raised by the need for real-time edge inference. First, considering the many places where motion sensors can be placed, we have thoroughly compared and analyzed how sensor location affects deep learning-based PAD performance. We have further compared the differences among six sensor channels (3-axis accelerometer and 3-axis gyroscope). Second, we have selected the optimal sensor and the optimal sensor channel, which not only provides sensor usage suggestions but also enables ultra-low-power application on the edge. Third, we have investigated innovative methods to minimize the training effort of the deep learning model by leveraging a transfer learning strategy. More specifically, we propose to pre-train a transferable deep learning model using data from other subjects and then fine-tune the model using limited data from the target user. In this way, we have found that, for the single-channel case, transfer learning can effectively increase the deep model performance even when the fine-tuning effort is very small. This research, demonstrated by comprehensive experimental evaluation, has shown the potential of ultra-low-power PAD with a minimized sensor stream and minimized training effort. / 2023-06-01
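The pre-train-then-fine-tune strategy can be sketched with a deliberately tiny stand-in model: fit parameters on plentiful source-subject data, then continue training briefly on a few target-user samples starting from the pre-trained parameters instead of from scratch. Everything below is illustrative (a 1-D least-squares model and made-up numbers, not the study's deep network or data):

```python
def sgd_fit(xs, ys, w=0.0, b=0.0, lr=0.01, epochs=200):
    """Least-squares fit of y ~ w*x + b by gradient descent; a stand-in
    for training the deep PAD model. Passing in w and b lets training
    resume from pre-trained parameters (the transfer-learning step)."""
    n = len(xs)
    for _ in range(epochs):
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w, b = w - lr * grad_w, b - lr * grad_b
    return w, b

# Pre-train on plentiful data from other subjects (hypothetical values)...
source_x, source_y = [0.0, 1.0, 2.0, 3.0], [0.1, 1.1, 2.1, 3.1]
# ...then fine-tune briefly on a few target-user samples, starting from
# the pre-trained parameters rather than from zero.
target_x, target_y = [0.0, 2.0], [0.3, 2.3]
w_pre, b_pre = sgd_fit(source_x, source_y)
w_ft, b_ft = sgd_fit(target_x, target_y, w=w_pre, b=b_pre, epochs=50)
```

The key property mirrored here is the study's finding: because the pre-trained parameters already capture the shared structure, only a small fine-tuning effort on target-user data is needed to improve target performance.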
