11

Object Recognition with Progressive Refinement for Collaborative Robots Task Allocation

Wu, Wenbo 18 December 2020 (has links)
With the rapid development of deep learning techniques, Convolutional Neural Networks (CNNs) have greatly benefited target object recognition, and several state-of-the-art object detectors achieve excellent recognition precision. When the detection results are applied to real-world collaborative robotics, however, the reliability and robustness of the target detection stage are essential to support efficient task allocation. In this work, collaborative robot task allocation rests on the assumption that each individual robotic agent possesses specialized capabilities to be matched with detected targets, which represent tasks to be performed in the surrounding environment and impose specific requirements. The goal is a specialized division of labor among the individual robots that best matches their capabilities with the requirements imposed by the tasks. To further improve task recognition with convolutional neural networks in the context of robotic task allocation, this thesis proposes an approach that progressively refines the target detection process by taking advantage of the fact that additional images can be collected by mobile cameras installed on the robotic vehicles. The proposed methodology combines a CNN-based object detection module with a refinement module. For the detection module, a two-stage object detector, Mask RCNN, with some adaptations to region proposal generation, and a one-stage object detector, YOLO, are experimentally investigated in the context considered. The generated recognition scores serve as input to the refinement module, where the current detection result is treated as a priori evidence to enhance the next detection of the same target, with the goal of iteratively improving the recognition scores. Both the Bayesian method and Dempster-Shafer theory are experimentally investigated for the data fusion involved in the refinement process. Experimental validation on indoor search-and-rescue (SAR) scenarios demonstrates the feasibility and reliability of the proposed progressive refinement framework, especially when the adapted Mask RCNN is combined with Dempster-Shafer data fusion.
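The fusion step in the refinement module can be illustrated with a short, self-contained sketch (not the thesis code): Bayesian fusion multiplies and renormalises successive recognition scores, while Dempster's rule combines mass functions defined over the singleton classes plus an explicit ignorance mass. The class count, scores and the 10% ignorance assignment below are invented for the example.

```python
import numpy as np

def bayesian_fusion(prior, likelihood):
    """Fuse the previous recognition scores (prior) with a new detection's
    class scores (likelihood) by elementwise product and renormalisation."""
    posterior = prior * likelihood
    return posterior / posterior.sum()

def dempster_fusion(m1, m2):
    """Combine two mass functions defined over singleton classes plus an
    'unknown' mass (last entry) using Dempster's rule of combination."""
    classes = len(m1) - 1                      # last entry = ignorance mass
    combined = np.zeros_like(m1)
    conflict = 0.0
    for i in range(classes):
        # agreement on class i, or one source being ignorant about it
        combined[i] = (m1[i] * m2[i]
                       + m1[i] * m2[-1]
                       + m1[-1] * m2[i])
        # disagreement between distinct singletons contributes to conflict
        conflict += m1[i] * (m2[:classes].sum() - m2[i])
    combined[-1] = m1[-1] * m2[-1]             # both sources ignorant
    return combined / (1.0 - conflict)

# Two successive recognition results for the same target (3 classes).
scores_t1 = np.array([0.6, 0.3, 0.1])
scores_t2 = np.array([0.7, 0.2, 0.1])
print(bayesian_fusion(scores_t1, scores_t2))

# The same scores recast as mass functions with 10% ignorance each.
m_t1 = np.array([0.54, 0.27, 0.09, 0.10])
m_t2 = np.array([0.63, 0.18, 0.09, 0.10])
print(dempster_fusion(m_t1, m_t2))
```

The ignorance mass is what distinguishes the Dempster-Shafer route from the Bayesian one: it lets a weak detection contribute little to the fused result instead of being forced into a full probability assignment.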
12

Investigation of real-time lightweight object detection models based on environmental parameters

Persson, Dennis January 2022 (has links)
As the world becomes increasingly digital, with most people carrying tablets, smartphones and smart objects, solving real-world computational problems on handheld devices is increasingly common. Detection and tracking of objects with a camera is being used in all kinds of fields, from self-driving cars and item sorting to X-ray analysis, as referenced in the Introduction. Object detection is computationally heavy, which is why a powerful computer is normally needed for it to run reasonably fast. Object detection with lightweight models is not as accurate as with heavyweight models, because a lightweight model trades accuracy for inference speed so that it can run on devices with limited computing power. As handheld devices become more powerful and people gain better access to object detection models that work on such devices, the ability to build small object detection systems at home or at work increases substantially. Knowing which factors have a large impact on object detection can help a user design or choose the right model. This study explores what impact distance, angle and light have on Inceptionv2 SSD, MobileNetv3 Large SSD and MobileNetv3 Small SSD trained on the COCO dataset. The results indicate that distance is the most dominant factor for the Inceptionv2 SSD model on the COCO dataset. The data for the MobileNetv3 SSD models suggest that angle might have the biggest impact on these models, but the data are too inconclusive to state this with certainty. Knowing which factors affect a given model's performance the most allows the user to make a more informed choice for their field of use.
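As a hedged illustration of the kind of lightweight detector studied here, the sketch below runs a COCO-pretrained MobileNetV3-Large SSDLite from torchvision on a single frame. The random frame and the 0.5 score threshold are placeholders, and the thesis's exact models, framework and evaluation protocol are not reproduced.

```python
import torch
from torchvision.models.detection import ssdlite320_mobilenet_v3_large

# COCO-pretrained lightweight detector (MobileNetV3-Large SSDLite); recent
# torchvision versions accept weights="DEFAULT".
model = ssdlite320_mobilenet_v3_large(weights="DEFAULT").eval()

# A random tensor stands in for a captured frame (3 x H x W, values in [0, 1]).
# In a study like the one above, the same scene would be re-captured while
# varying distance, viewing angle and illumination, and the scores compared.
frame = torch.rand(3, 480, 640)
with torch.no_grad():
    detections = model([frame])[0]

keep = detections["scores"] > 0.5            # fixed confidence threshold
for label, score in zip(detections["labels"][keep], detections["scores"][keep]):
    print(int(label), float(score))
```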
13

Investigation on how presentation attack detection can be used to increase security for face recognition as biometric identification : Improvements on traditional locking system

Öberg, Fredrik January 2021 (has links)
Biometric identification is already in use in society today: mobile phones rely on fingerprints as well as methods such as iris and face recognition. With the growth of technologies like computer vision, the Internet of Things and artificial intelligence, the use of face recognition as biometric identification on ordinary doors has become increasingly common. This thesis looks into the possibility of replacing regular door locks with face recognition, or supplementing the locks to increase security, using a pre-trained state-of-the-art face recognition method based on a convolutional neural network. An initial investigation concluded that network-based face recognition is highly vulnerable to presentation attacks. This study therefore investigates protection mechanisms against this form of attack by developing a presentation attack detection (PAD) stage and analyzing its performance. The results obtained from the proof of concept showed that local binary pattern histograms (LBPH) used as presentation attack detection could help the state-of-the-art face recognition block up to 88% of the attacks that the convolutional neural network accepted without the presentation attack detection. However, to replace traditional locks, more work must be done, both to raise the percentage of attacks blocked by the system and to cover more types of attack. Nevertheless, face recognition is a promising supplement to traditional door locks, enhancing their security by complementing the authorization with biometric authentication. The main contribution is showing that a simple, older method such as LBPH can help modern state-of-the-art face recognition detect presentation attacks, according to the test results. This study also adapted the PAD to be suitable for low-end edge devices, so that it can be deployed in environments where such modern solutions are used.
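A minimal sketch of an LBP-histogram-based presentation attack detection stage follows; it is not the thesis implementation, the random arrays merely stand in for real genuine/attack face crops, and the SVM is one of several classifiers that could sit on top of the histograms.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def lbp_histogram(gray, points=8, radius=1):
    """Uniform LBP codes of a grayscale face crop, pooled into a histogram."""
    codes = local_binary_pattern(gray, points, radius, method="uniform")
    n_bins = points + 2                      # uniform patterns + "other"
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins), density=True)
    return hist

# Placeholder data: rows are face crops, labels 1 = genuine, 0 = attack.
rng = np.random.default_rng(0)
faces = rng.random((40, 64, 64))             # stand-in for real face crops
labels = rng.integers(0, 2, size=40)

features = np.array([lbp_histogram(f) for f in faces])
pad_classifier = SVC(probability=True).fit(features, labels)

# At unlock time, the PAD score gates the face-recognition decision.
print(pad_classifier.predict_proba(features[:1]))
```

Because the histogram features are cheap to compute, a stage like this fits the low-end edge devices mentioned above, with the heavier CNN face recognition only run on frames that pass the PAD check.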
14

Deep neural networks for food waste analysis and classification : Subtraction-based methods for the case of data scarcity

Brunell, David January 2022 (has links)
Machine learning generally requires large amounts of data; however, data is often limited. On the whole, the amount of data needed grows with the complexity of the problem to be solved. Using transfer learning, data augmentation and problem reduction, acceptable performance can be achieved with limited data for a multitude of tasks. The goal of this master project is to develop an artificial neural network-based model for food waste analysis, an area in which large quantities of data are not yet readily available. Given two images, the algorithm is expected to identify what has changed between them, ignore the unchanged areas even though they might contain objects that could be classified, and finally classify the change. The approach chosen in this project was to reduce the problem the machine learning algorithm has to solve by subtracting the images before they are handled by the neural network. In theory this resolves both object localisation and the filtering of uninteresting objects, leaving only classification to the neural network. Such a procedure significantly simplifies the task to be solved by the neural network, which reduces the need for training data and keeps the process of gathering data relatively simple and fast. Several models were assessed and theories on adapting the neural network to this particular task were evaluated. A test accuracy of at best 78.9% was achieved with a limited dataset of about 1000 images covering 10 different classes. This performance was accomplished by a siamese neural network based on VGG19 using triplet loss, with training data in which subtraction served as the basis for creating ground-truth masks that were multiplied with the image containing the changed object.
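The core idea of localising the change by subtraction before classification can be sketched in a few lines; the frame sizes, threshold and synthetic "new object" below are illustrative, not the thesis's data or exact masking procedure.

```python
import numpy as np

def change_mask(before, after, threshold=0.1):
    """Locate the changed region by absolute per-pixel difference of two
    grayscale frames; everything below the threshold counts as unchanged."""
    diff = np.abs(after.astype(np.float32) - before.astype(np.float32))
    return (diff > threshold).astype(np.float32)

# Placeholder frames: an "empty tray" image and the same scene with new waste.
before = np.zeros((224, 224), dtype=np.float32)
after = before.copy()
after[80:140, 90:160] = 0.8                  # the newly added object

mask = change_mask(before, after)
isolated = after * mask                      # only the changed object survives
print(mask.sum(), isolated.max())            # region size and peak intensity
```

The masked crop is what a classifier (in the thesis, a VGG19-based siamese network trained with triplet loss) would then see, which is why the subtraction step removes the need for a learned localisation stage.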
15

Homography Estimation using Deep Learning for Registering All-22 Football Video Frames / Homografiuppskattning med deep learning för registrering av bildrutor från video av amerikansk fotboll

Fristedt, Hampus January 2017 (has links)
Homography estimation is a fundamental task in many computer vision applications, but many estimation techniques rely on complicated feature extraction pipelines. We extend research in direct homography estimation (i.e. without explicit feature extraction) by implementing a convolutional network capable of estimating homographies. Previous work in deep learning-based homography estimation calculates homographies between pairs of images, whereas our network takes a single image as input and registers it to a reference view for which no image data is available. The application of the work is registering frames from American football video to a top-down view of the field. Our model manages to register frames in a test set with an average corner error equivalent to less than 2 yards.
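The corner-error metric reported above can be computed as sketched below (in pixels; the conversion to yards depends on the field calibration and is omitted). This is an illustrative implementation, not the thesis code.

```python
import numpy as np

def mean_corner_error(h_pred, h_true, width, height):
    """Average distance between the frame corners mapped by the predicted
    and the reference homography, i.e. the corner-error evaluation metric."""
    corners = np.array([[0, 0, 1], [width, 0, 1],
                        [width, height, 1], [0, height, 1]], dtype=np.float64).T

    def project(h):
        p = h @ corners
        return (p[:2] / p[2]).T              # back from homogeneous coordinates

    return float(np.mean(np.linalg.norm(project(h_pred) - project(h_true), axis=1)))

# Identity vs. a small 3-pixel horizontal shift: the corner error is exactly 3.
h_identity = np.eye(3)
h_shifted = np.array([[1, 0, 3], [0, 1, 0], [0, 0, 1]], dtype=np.float64)
print(mean_corner_error(h_shifted, h_identity, width=1280, height=720))
```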
16

Wildfire Risk Assessment Using Convolutional Neural Networks and MODIS Climate Data

Nesbit, Sean F 01 June 2022 (has links) (PDF)
Wildfires burn millions of acres of land each year leading to the destruction of homes and wildland ecosystems while costing governments billions in funding. As climate change intensifies drought volatility across the Western United States, wildfires are likely to become increasingly severe. Wildfire risk assessment and hazard maps are currently employed by fire services, but can often be outdated. This paper introduces an image-based dataset using climate and wildfire data from NASA’s Moderate Resolution Imaging Spectroradiometer (MODIS). The dataset consists of 32 climate and topographical layers captured across 0.1 deg by 0.1 deg tiled regions in California and Nevada between 2015 and 2020, associated with whether the region later saw a wildfire incident. We trained a convolutional neural network (CNN) with the generated dataset to predict whether a region will see a wildfire incident given the climate data of that region. Convolutional neural networks are able to find spatial patterns in their multi-dimensional inputs, providing an additional layer of inference when compared to logistic regression (LR) or artificial neural network (ANN) models. To further understand feature importance, we performed an ablation study, concluding that vegetation products, fire history, water content, and evapotranspiration products resulted in increases in model performance, while land information products did not. While the novel convolutional neural network model did not show a large improvement over previous models, it retained the highest holistic measures such as area under the curve and average precision, indicating it is still a strong competitor to existing models. This introduction of the convolutional neural network approach expands the wealth of knowledge for the prediction of wildfire incidents and proves the usefulness of the novel, image-based dataset.
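For illustration only, a compact CNN over 32-channel climate/topography tiles might look like the following sketch; the tile resolution, layer sizes and the random input batch are assumptions, not the architecture or data used in the thesis.

```python
import torch
from torch import nn

class WildfireCNN(nn.Module):
    """Binary classifier over multi-channel climate tiles: does this region
    later see a wildfire incident? Layer sizes are illustrative."""
    def __init__(self, in_channels=32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(128, 1)   # probability of a later wildfire

    def forward(self, x):
        return torch.sigmoid(self.classifier(self.features(x).flatten(1)))

model = WildfireCNN()
tiles = torch.randn(4, 32, 32, 32)            # batch of 4 hypothetical tiles
print(model(tiles).shape)                     # torch.Size([4, 1])
```

The spatial convolutions are what give such a model its advantage over the logistic regression and plain ANN baselines mentioned above: neighbouring pixels of the vegetation, fire-history and water-content layers are combined before the final decision.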
17

Variational networks in magnetic resonance imaging - Application to spiral cardiac MRI and investigations on image quality / Variational Networks in der Magnetresonanztomographie - Anwendung auf spirale Herzbildgebung und Untersuchungen zur Bildqualität

Kleineisel, Jonas January 2024 (has links) (PDF)
Acceleration is a central aim of clinical and technical research in magnetic resonance imaging (MRI) today, with the potential to increase robustness, accessibility and patient comfort, reduce cost, and enable entirely new kinds of examinations. A key component in this endeavor is image reconstruction, as most modern approaches build on advanced signal and image processing. Here, deep learning (DL)-based methods have recently shown considerable potential, with numerous publications demonstrating benefits for MRI reconstruction. However, these methods often come at the cost of an increased risk of subtle yet critical errors. Therefore, the aim of this thesis is to advance DL-based MRI reconstruction while ensuring high quality and fidelity with the measured data. A network architecture specifically suited for this purpose is the variational network (VN). To investigate the benefits VNs can bring to non-Cartesian cardiac imaging, the first part presents an application of VNs specifically adapted to the reconstruction of accelerated spiral acquisitions. The proposed method is compared to a segmented exam, a U-Net and a compressed sensing (CS) model using qualitative and quantitative measures. While the U-Net performed poorly, the VN as well as the CS reconstruction showed good output quality. In functional cardiac imaging, the proposed real-time method with VN reconstruction substantially accelerates examinations over the gold standard, from over 10 minutes to just 1 minute. Clinical parameters agreed on average. In MRI reconstruction generally, the assessment of image quality is complex, in particular for modern non-linear methods. Therefore, advanced techniques for the precise evaluation of quality were subsequently demonstrated. With two distinct methods, the resolution and the amplification or suppression of noise are quantified locally in each pixel of a reconstruction. Using these, local maps of resolution and noise in parallel imaging (GRAPPA), CS, U-Net and VN reconstructions were determined for MR images of the brain. In the tested images, GRAPPA delivers uniform and ideal resolution but amplifies noise noticeably. The other methods adapt their behavior to the image structure: different levels of local blurring were observed at edges compared to homogeneous areas, and noise was suppressed except at edges. Overall, VNs were found to combine a number of advantageous properties, including a good trade-off between resolution and noise, fast reconstruction times, and high overall image quality and fidelity of the produced output. This network architecture therefore seems highly promising for MRI reconstruction.
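The unrolled structure of a variational network can be sketched as below for a toy Cartesian undersampled-Fourier forward model; the sampling mask, layer sizes, number of iterations and the absence of coil sensitivities and spiral trajectories are all simplifications relative to the method described above.

```python
import torch
from torch import nn

class VNCell(nn.Module):
    """One unrolled iteration: a weighted data-consistency step for an
    undersampled Fourier forward operator plus a small CNN regulariser
    (2 channels = real and imaginary part). Sizes are illustrative."""
    def __init__(self):
        super().__init__()
        self.step = nn.Parameter(torch.tensor(0.5))
        self.reg = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 2, 3, padding=1),
        )

    def forward(self, x, y, mask):
        k = torch.fft.fft2(torch.view_as_complex(x.permute(0, 2, 3, 1).contiguous()))
        grad = torch.fft.ifft2(mask * (k - y))                 # A^H(Ax - y)
        grad = torch.view_as_real(grad).permute(0, 3, 1, 2)
        return x - self.step * grad - self.reg(x)

cells = nn.ModuleList([VNCell() for _ in range(5)])             # 5 unrolled steps
mask = (torch.rand(1, 64, 64) > 0.7).float()                    # sampling pattern
y = mask * torch.fft.fft2(torch.randn(1, 64, 64, dtype=torch.complex64))
x = torch.view_as_real(torch.fft.ifft2(y)).permute(0, 3, 1, 2)  # zero-filled start
for cell in cells:
    x = cell(x, y, mask)
print(x.shape)
```

Keeping an explicit data-consistency term in every iteration is what ties the output to the measured k-space data and distinguishes this family of networks from a plain image-to-image U-Net.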
18

Digital Architecture for real-time face detection for deep video packet inspection systems

Bhattarai, Smrity January 2017 (has links)
No description available.
19

Hierarchical Auto-Associative Polynomial Convolutional Neural Networks

Martell, Patrick Keith January 2017 (has links)
No description available.
20

Refinement of Raman spectra from extreme background and noise interferences: Cancer diagnostics using Raman spectroscopy

Gebrekidan, Medhanie Tesfay 01 March 2022 (has links)
Raman spectroscopy is an optical measurement technique able to provide spectroscopic information that is molecule-specific and unique to the nature of the specimen under investigation. It is an invaluable analytical tool that finds application in several fields such as medicine and in situ chemical process monitoring. Owing to its high specificity and label-free operation, Raman spectroscopy has greatly impacted cancer diagnostics. However, retrieving and interpreting the Raman spectrum that contains the molecular information is challenging because of extreme background interference. I have developed various spectra-processing approaches to purify Raman spectra from noisy and heavily background-interfered raw spectra: in detail, a new noise reduction method based on vector casting and new deep neural networks for the efficient removal of noise and background. Several neural network models were trained on simulated spectra and then tested on experimentally measured spectra. The approaches proposed here were compared with state-of-the-art techniques using different signal-to-noise ratios, the standard deviation, and the structural similarity index metric. The methods perform well and are superior to what has been reported before, especially at small signal-to-noise ratios and for raw Raman spectra with extreme fluorescence interference. Furthermore, the deep neural network-based methods do not rely on any human intervention. The motivation behind this study is to make Raman spectroscopy, and especially shifted-excitation Raman difference spectroscopy (SERDS), an even better tool for process analytics and cancer diagnostics. The integration of the above-mentioned spectra-processing approaches into SERDS, combined with machine learning tools, enabled the differentiation between physiological mucosa, non-malignant lesions, and oral squamous cell carcinomas with an accuracy above the state of the art. The distinguishable features obtained in the purified Raman spectra are assignable to different chemical compositions of the respective tissues. The feasibility of a similar approach for breast tumors was also investigated: the purified Raman spectra of normal breast tissue, fibroadenoma, and invasive carcinoma were discriminable with respect to the spectral features of proteins, lipids, and nucleic acids. These findings suggest the potential of SERDS combined with machine learning techniques as a universal tool for cancer diagnostics.
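The background suppression that SERDS provides, before any neural-network refinement, can be illustrated with synthetic spectra as below; the peak positions, widths, excitation shift and noise level are invented numbers, and the reconstruction of the pure Raman spectrum from the difference is not shown.

```python
import numpy as np

# Toy SERDS illustration: two spectra taken at slightly shifted excitation
# wavelengths share the broad fluorescence background, while the Raman peaks
# shift with the excitation, so their difference suppresses the background.
wavenumber = np.linspace(400, 1800, 1400)

def raman_peak(center, width=8.0, height=1.0):
    return height * np.exp(-0.5 * ((wavenumber - center) / width) ** 2)

background = 50.0 * np.exp(-wavenumber / 900.0)         # broad fluorescence
shift = 10.0                                            # excitation shift in cm^-1

spectrum_1 = background + raman_peak(1004) + raman_peak(1450, height=0.6)
spectrum_2 = background + raman_peak(1004 + shift) + raman_peak(1450 + shift, height=0.6)
spectrum_1 += np.random.default_rng(0).normal(0, 0.05, wavenumber.size)
spectrum_2 += np.random.default_rng(1).normal(0, 0.05, wavenumber.size)

serds = spectrum_1 - spectrum_2                         # derivative-like difference
# The pure Raman spectrum would then be recovered from `serds`, e.g. by the
# neural networks described above; here only the suppression step is shown.
print(abs(serds).max(), abs(background).max())
```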
