  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
321

Numerical Modeling and Inverse Design of Complex Nanophotonic Systems

Baxter, Joshua Stuart Johannes 10 January 2024 (has links)
Nanophotonics is the study and technological application of the interaction of electromagnetic waves (light) with matter at the nanometer scale. Research in the field focuses on generating, detecting, and controlling light using nanoscale features such as nanoparticles, waveguides, resonators, and nanoantennas. Progress depends heavily on computational methods that simulate how light will interact with matter in specific situations; as nanophotonics advances, so must these computational techniques. In this thesis, I present my work on numerical studies in nanophotonics, organized into three categories: plasmonics, inverse design, and deep learning. In plasmonics, I develop methods for solving advanced material models (including nonlinearities) for small metallic and epsilon-near-zero features and validate them against other theoretical and experimental results. For inverse design, I introduce new methods for designing optical pulse shapes and metalenses for focusing high-harmonic generation. Finally, I use deep learning to model plasmonic colour generation from structured metal surfaces and to predict the multipolar responses of plasmonic nanoparticles.
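Inverse design of the kind described above is typically driven by gradient-based optimization of a figure of merit. As a toy sketch (not the thesis's method), the following optimizes a pulse shape so that its magnitude spectrum matches a target, using finite-difference gradient ascent; the figure of merit and all parameters are illustrative assumptions:

```python
import numpy as np

def figure_of_merit(pulse, target):
    # illustrative FOM: negative squared distance between the pulse's
    # magnitude spectrum and a target spectrum (to be maximized)
    spec = np.abs(np.fft.rfft(pulse))
    return -np.sum((spec - target) ** 2)

def inverse_design(target, pulse, steps=150, lr=0.01, h=1e-4):
    # finite-difference gradient ascent over the pulse samples
    pulse = pulse.copy()
    n = pulse.size
    for _ in range(steps):
        grad = np.zeros(n)
        for i in range(n):
            e = np.zeros(n)
            e[i] = h
            grad[i] = (figure_of_merit(pulse + e, target)
                       - figure_of_merit(pulse - e, target)) / (2 * h)
        pulse += lr * grad
    return pulse

rng = np.random.default_rng(0)
target = np.abs(np.fft.rfft(rng.standard_normal(32)))
init = 0.1 * rng.standard_normal(32)  # non-zero start so the gradient is non-zero
optimized = inverse_design(target, init)
```

In practice, adjoint methods replace the finite-difference loop, since they recover the full gradient from a single extra simulation rather than two per parameter.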
322

Multiscale Modeling with Meshfree Methods

Xu, Wentao January 2023 (has links)
Multiscale modeling has become an important tool in material mechanics because material behavior can exhibit different properties across length scales, and capturing these characteristics accurately is essential for predicting material behavior. Mesh-free methods have also been gaining attention in recent years due to their innate ability to handle complex geometries and large deformations. These methods provide greater flexibility and efficiency in modeling complex material behavior, especially for problems involving discontinuities such as fractures and cracks. Moreover, mesh-free methods extend naturally to multiple length and time scales, making them particularly suitable for multiscale modeling. This thesis focuses on two specific problems of multiscale modeling with mesh-free methods. The first is an atomistically informed constitutive model for the study of high-pressure-induced densification of silica glass. Molecular Dynamics (MD) simulations are carried out to study the atomistic-level response of fused silica under different pressure and strain-rate levels. Based on the data obtained from the MD simulations, a novel continuum-based multiplicative hyper-elasto-plasticity model that accounts for the anomalous densification behavior is developed and then parameterized using polynomial regression and deep learning techniques. To incorporate dynamic damage evolution, a plasticity-damage variable that controls the shrinkage of the yield surface is introduced and integrated into the elasto-plasticity model. The resulting coupled elasto-plasticity-damage model is reformulated as a non-ordinary state-based peridynamics (NOSB-PD) model for computational efficiency in impact simulations. The developed peridynamics (PD) model reproduces coarse-scale quantities of interest found in the MD simulations and can simulate at the component level.
Finally, the proposed atomistically informed multiplicative hyper-elasto-plasticity-damage model is validated against the limited available experimental results for hyper-velocity impacts of projectiles on silica glass targets. The second problem addressed in the thesis is an upscaling approach for multi-porosity media, analyzed using the so-called MultiSPH method, a sequential Smoothed Particle Hydrodynamics (SPH) solver across multiple scales. Multi-porosity media are common in natural and industrial materials, and their behavior is not easily captured with traditional numerical methods. The upscaling approach is demonstrated on a porous medium consisting of three scales: SPH simulations characterize the behavior of individual pores at the microscopic scale, and a homogenization technique upscales the response to the meso- and macroscopic levels. The accuracy of the MultiSPH approach is confirmed by comparison with analytical solutions for simple microstructures, as well as with detailed single-scale SPH simulations and experimental data for more complex microstructures.
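At the core of any SPH solver such as MultiSPH are a smoothing kernel and the density summation rho_i = sum_j m_j W(x_i - x_j, h). A minimal 1D sketch (illustrative, not the thesis's multi-scale solver) using the standard cubic spline kernel:

```python
import numpy as np

def cubic_spline_kernel(r, h):
    # standard 1D cubic spline SPH kernel with normalization sigma = 2/(3h)
    q = np.abs(r) / h
    sigma = 2.0 / (3.0 * h)
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
         np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
    return sigma * w

def sph_density(x, m, h):
    # density summation: rho_i = sum_j m_j * W(x_i - x_j, h)
    r = x[:, None] - x[None, :]
    return (m * cubic_spline_kernel(r, h)).sum(axis=1)

dx = 0.1
x = np.arange(0.0, 10.0 + dx / 2, dx)  # uniformly spaced particles
m = 1.0                                # equal particle masses
rho = sph_density(x, m, h=1.2 * dx)
```

For a uniform 1D particle arrangement the summed density in the interior should recover m/dx, which is a standard sanity check for the kernel normalization.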
323

Detection and Classification of Diabetic Retinopathy using Deep Learning Models

Olatunji, Aishat 01 May 2024 (has links) (PDF)
Healthcare analytics leverages extensive patient data for data-driven decision-making, enhancing patient care and outcomes. Diabetic Retinopathy (DR), a complication of diabetes affecting both type 1 and type 2 patients, stems from damage to the retina’s blood vessels. Ophthalmologists rely on retinal images for accurate DR diagnosis and severity assessment, and early detection is crucial for preserving vision and minimizing risk. Our research focuses on DR detection using deep learning techniques. We applied our proposed neural network and transfer learning models to a publicly available Kaggle dataset of patient retinal images, classifying each image into one of five DR stages. Python libraries such as TensorFlow supported data preprocessing, model development, and evaluation. Rigorous cross-validation and hyperparameter tuning optimized model accuracy, demonstrating the models’ effectiveness for early risk identification, personalized healthcare recommendations, and improved patient outcomes.
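A transfer-learning pipeline of this kind typically ends in a five-way softmax head trained with cross-entropy over the DR stages. As a self-contained sketch (plain NumPy on synthetic features standing in for image embeddings; not the thesis's TensorFlow models):

```python
import numpy as np

def softmax(z):
    # numerically stable row-wise softmax
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def train_softmax(X, y, n_classes=5, lr=0.1, steps=300):
    # multinomial logistic regression head, cross-entropy loss, batch gradient descent
    n, d = X.shape
    W = np.zeros((d, n_classes))
    Y = np.eye(n_classes)[y]  # one-hot labels
    for _ in range(steps):
        P = softmax(X @ W)
        W -= lr * X.T @ (P - Y) / n
    return W

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 8))       # hypothetical 8-dim embeddings
y = rng.integers(0, 5, 200)             # five DR stages
# shift each class mean so the toy data is separable (stand-in for real features)
X += np.eye(5)[y] @ rng.standard_normal((5, 8)) * 2
W = train_softmax(X, y)
acc = (softmax(X @ W).argmax(axis=1) == y).mean()
```

In a real transfer-learning setup, `X` would be the frozen backbone's embeddings and this head would be the trainable final layer.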
324

AUTOMATIC EXTRACTION OF COMPUTER SCIENCE CONCEPT PHRASES USING A HYBRID MACHINE LEARNING PARADIGM

S. M. Abrar Jahin (14300654) 31 May 2023 (has links)
<p> With the proliferation of computer science in modern society, the number of computer-science-related jobs is expanding quickly. Software engineer was ranked the best job for 2023 based on pay, stress level, opportunity for professional growth, and work-life balance, according to rankings compiled by various news outlets and publications. Computer science occupations are anticipated to be in high demand not just in 2023 but for the foreseeable future, so it is not surprising that the number of computer science students at universities is growing and will continue to grow. This enormous increase in student enrolment across the many subdisciplines of computer science has presented some distinct issues. If computer science is to be incorporated into the K-12 curriculum, it is vital that K-12 educators be competent in it; one of the biggest obstacles to this plan is that there are not enough trained computer science teachers. Numerous new fields and applications are continually being added to computer science, and schools find it difficult to recruit skilled computer science instructors for a variety of reasons, including low salaries. The most effective strategy for addressing this issue is therefore to draw on the K-12 teachers who are already in the schools, have a love for teaching, and consider teaching a vocation. If we want these teachers to grasp computer science topics quickly, we need to give them an accessible way to learn about computer science. To simplify and expedite that study, we must acquaint schoolteachers with the terminology associated with computer science concepts so they can identify what they need to learn according to their profile.
If we want to make it easier for schoolteachers to comprehend computer science concepts, it would be ideal to provide them with a tree of words and phrases from which they can determine where a phrase originated and which phrases are connected to it, so that the material can be learned effectively. To find good concept words or phrases, we must first identify concepts and then establish the connections or linkages among them. As computer science is a fast-developing field, its nomenclature is also expanding at a frenetic rate, so manually adding every concept and term to the knowledge graph would be a challenging endeavor. A straightforward solution is a system that automatically adds computer science domain terms to the knowledge graph. We have identified knowledge graph use cases for the school-teacher training program, which motivates the development of a knowledge graph, and we have analyzed those use cases and the knowledge graph’s ideal characteristics. We have designed a web-based system for adding, editing, and removing words from the knowledge graph. In addition, a term or phrase can be represented with its children list, parent list, and synonym list for enhanced comprehension. We have developed an automated system for extracting words and phrases that can extract computer science concept phrases from any supplied text, thereby enriching the knowledge graph. With this knowledge graph designed for use in teacher education, schoolteachers can teach K-12 students computer science topics effectively. </p>
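The term store described above — parent, children, and synonym lists per phrase — together with a naive concept-phrase extractor can be sketched as follows; the class layout and the longest-first matching are illustrative assumptions, not the thesis's implementation:

```python
import re

class KnowledgeGraph:
    # minimal node store: each term keeps parent, children, and synonym lists
    def __init__(self):
        self.nodes = {}

    def add(self, term, parent=None, synonyms=()):
        node = self.nodes.setdefault(term, {"parents": [], "children": [], "synonyms": []})
        node["synonyms"].extend(synonyms)
        if parent is not None:
            self.nodes.setdefault(parent, {"parents": [], "children": [], "synonyms": []})
            self.nodes[parent]["children"].append(term)
            node["parents"].append(parent)

def extract_concepts(text, vocabulary):
    # naive extractor: longest-first whole-word matching of known phrases
    found = []
    lowered = text.lower()
    for phrase in sorted(vocabulary, key=len, reverse=True):
        if re.search(r"\b" + re.escape(phrase) + r"\b", lowered):
            found.append(phrase)
    return found

kg = KnowledgeGraph()
kg.add("machine learning", parent="computer science")
kg.add("deep learning", parent="machine learning", synonyms=["neural networks"])
hits = extract_concepts("Deep learning is a branch of machine learning.", kg.nodes.keys())
```

A production extractor would use POS tagging or a trained phrase model rather than dictionary matching, but the graph structure (parents/children/synonyms) is the same.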
325

Song Popularity Prediction with Deep Learning : Investigating predictive power of low level audio features

Holst, Gustaf, Niia, Jan January 2023 (has links)
Today streaming services are the most popular way to consume music, and with this the field of Music Information Retrieval (MIR) has exploded. Tangy market is a music investment platform that wants to use MIR techniques to estimate the value of not-yet-released songs. In this thesis we collaborate with them to investigate how a song’s financial success can be predicted using machine learning models. Previous research has shown that well-known algorithms used for tasks such as image recognition and machine translation can also be used for audio analysis and prediction. Much prior work covers different aspects of audio analysis and prediction, but most of it concerns genre classification and hit song prediction; popularity prediction from audio is still quite new, and we contribute by researching whether low-level audio features can be used to predict streams. We use an existing dataset of more than 100 000 songs containing low-level features, which we extend with streaming information. We use the features in two forms, summarized and full; since the dataset contains only the summarized digital representations, we use Librosa to compute the full versions of the audio features. A previous study by Martín-Gutiérrez et al. [1] successfully used a combination of low-level and high-level audio features, as well as non-musical features such as the number of social media followers. The aim of this thesis is to explore five of the low-level features used in [1] in order to assess the predictive power these features have on their own. The five features we explore are Chromagram, Mel Spectrogram, Tonnetz, Spectral Contrast, and MFCC.
These features were selected specifically because they were used in [1], and we want to investigate to what extent these low-level features contribute to the final predictions made by that model. Our conclusion is that none of these features could be used for prediction with any accuracy, which indicates that high-level and external features are of more importance. However, Chromagram and Mel Spectrogram in their full forms show some potential and warrant further research.
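A Mel Spectrogram of the kind Librosa computes can be sketched from first principles: a windowed STFT followed by a triangular mel filterbank. This NumPy version is illustrative (the parameters and filterbank details are simplified assumptions, not Librosa's exact implementation):

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(sr, n_fft, n_mels):
    # triangular filters with centers evenly spaced on the mel scale
    mels = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mels) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(n_mels):
        l, c, r = bins[i], bins[i + 1], bins[i + 2]
        for k in range(l, c):
            fb[i, k] = (k - l) / max(c - l, 1)   # rising edge
        for k in range(c, r):
            fb[i, k] = (r - k) / max(r - c, 1)   # falling edge
    return fb

def mel_spectrogram(y, sr, n_fft=512, hop=256, n_mels=40):
    # power STFT with a Hann window, then projection onto the mel filters
    window = np.hanning(n_fft)
    frames = [y[i:i + n_fft] * window for i in range(0, len(y) - n_fft + 1, hop)]
    S = np.abs(np.fft.rfft(np.array(frames), axis=1)) ** 2
    return mel_filterbank(sr, n_fft, n_mels) @ S.T

sr = 8000
t = np.arange(sr) / sr
y = np.sin(2 * np.pi * 440.0 * t)  # one second of a 440 Hz tone
M = mel_spectrogram(y, sr)
```

For a pure 440 Hz tone the energy should concentrate in the mel band whose center is nearest 440 Hz, which is a quick way to validate the filterbank.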
326

Accuracy Considerations in Deep Learning Using Memristive Crossbar Arrays

Paudel, Bijay Raj 01 May 2023 (has links) (PDF)
Deep neural networks (DNNs) are receiving immense attention because of their ability to solve complex problems. However, running a DNN requires a very large number of computations, so dedicated hardware optimized for deep learning algorithms, known as neuromorphic architectures, is often utilized. This dissertation focuses on evaluating and enhancing the accuracy of these neuromorphic architectures considering component designs, process variations, and adversarial attacks. The first contribution (Chapter 2) proposes design enhancements in analog Memristive Crossbar Array (MCA)-based neuromorphic architectures to improve classification accuracy. It introduces an analog Winner-Take-All (WTA) architecture and an on-chip training architecture. The WTA stage ensures that the classification of the analog MCA is correct at the final selection level and that the highest probability is selected. In particular, this dissertation presents the design of a highly scalable and precise current-mode WTA circuit with digital address generation, based on current mirrors and comparators that use a cross-coupled latch structure. A post-silicon calibration circuit is also presented to handle process variations. On-chip training ensures consistency in classification accuracy among different all-analog MCA-based neuromorphic chips. Finally, an enhancement to the analog on-chip training architecture, implementing a Convolutional Neural Network (CNN) on the MCA, is presented along with software considerations to accelerate training. The second focus of the dissertation (Chapter 3) is producing correct classifications in the presence of malicious inputs known as adversarial attacks. This dissertation shows that MCA-based neuromorphic architectures ensure correct classification when the input is compromised using existing adversarial attack models.
Furthermore, it shows that adversarial robustness can be further improved by compression-based preprocessing steps that can be implemented on MCAs. It also evaluates the impact of the architecture in Chapter 2 under adversarial attacks, showing that adversarial attacks do not uniformly affect the classification accuracy of different MCA-based chips. Experimental evidence using a variety of datasets and attack models supports the impact of MCA-based neuromorphic architectures and compression-based preprocessing implemented on MCAs in mitigating adversarial attacks. It is also shown experimentally that on-chip training improves consistency in mitigating adversarial attacks among different chips. The final contribution (Chapter 4) of this dissertation introduces an enhancement of the method in Chapter 3: input preprocessing using compression followed by rescale and rearrange operations, implemented using MCAs. This approach further improves robustness against adversarial attacks. The rescale and rearrange operations are implemented using a DNN consisting of fully connected and convolutional layers. Experimental results show improved defense compared to similar input preprocessing techniques on MCAs.
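An analog MCA computes a matrix-vector product in one step: input voltages drive the rows, conductances sit at the crosspoints, and Ohm's law plus Kirchhoff's current law sum the column currents; a WTA stage then selects the largest current. A behavioral sketch (idealized, with a toy multiplicative-noise model standing in for process variation; not the dissertation's circuit):

```python
import numpy as np

def crossbar_mvm(G, v, sigma=0.0, rng=None):
    # ideal crossbar: column currents I_j = sum_i G[i, j] * v[i]
    # optional multiplicative noise models device/process variation
    if sigma > 0.0:
        rng = rng or np.random.default_rng(0)
        G = G * (1.0 + sigma * rng.standard_normal(G.shape))
    return G.T @ v

def winner_take_all(currents):
    # WTA stage: one-hot output selecting the line with the largest current
    out = np.zeros_like(currents)
    out[np.argmax(currents)] = 1.0
    return out

G = np.array([[0.2, 0.9, 0.1],
              [0.8, 0.1, 0.3],
              [0.1, 0.2, 0.7]])  # conductances (arbitrary units)
v = np.array([1.0, 0.0, 0.0])   # input voltages encode the input vector
winner = winner_take_all(crossbar_mvm(G, v, sigma=0.05))
```

The noisy run illustrates the point made above: small per-device variation usually leaves the winner unchanged when the current margins are large, which is why calibration targets the close cases.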
327

Detection and Localization of Root Damages in Underground Sewer Systems using Deep Neural Networks and Computer Vision Techniques

Muzi Zheng (14226701) 03 February 2023 (has links)
<p>The maintenance of a healthy sewer infrastructure is a major challenge due to root damage from nearby plants that grow through pipe cracks or loose joints, which may lead to serious pipe blockages and collapse. Traditional inspections based on video surveillance to identify and localize root damage within such complex sewer networks are inefficient, laborious, and error-prone. This study therefore aims to develop a robust and efficient approach to automatically detect root damage and localize its circumferential and longitudinal position in CCTV inspection videos by applying deep neural networks and computer vision techniques. From twenty inspection videos collected from various sources, keyframes were extracted from each video according to differences in the LUV color space with certain selections of local maxima. To recognize distance information from video subtitles, OCR models such as Tesseract and CRNN-CTC were implemented, reaching 90% recognition accuracy. A pre-trained segmentation model was applied to detect root damage but produced many false-positive predictions. By applying a well-tuned YOLOv3 model to the detection of pipe joints leveraging the Convex Hull Overlap (<em>CHO</em>) feature, we achieved a 20% improvement in the reliability and accuracy of damage identification. Moreover, an end-to-end deep learning pipeline involving the Triangle Similarity Theorem (<em>TST</em>) was designed to predict the longitudinal position of each identified root damage, with a prediction error of less than 1.0 foot. </p>
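The Triangle Similarity Theorem used for longitudinal localization reduces to d = (W · f) / w: a known object width W that appears w pixels wide under focal length f (in pixels) implies distance d. A minimal sketch with hypothetical calibration numbers (not values from the study):

```python
def focal_length_px(known_distance, known_width, width_in_pixels):
    # one-time calibration from a reference frame: f = (w_px * d) / W
    return (width_in_pixels * known_distance) / known_width

def distance_to_object(focal_px, known_width, width_in_pixels):
    # Triangle Similarity: d = (W * f) / w_px
    return (known_width * focal_px) / width_in_pixels

# hypothetical calibration: a 0.5 m pipe joint seen 100 px wide at 4.0 m
f = focal_length_px(4.0, 0.5, 100.0)
d = distance_to_object(f, 0.5, 200.0)  # the same joint now appears 200 px wide
```

Doubling the apparent width halves the estimated distance, which is the behavior the pipeline exploits when tracking joints between keyframes.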
329

Optical Medieval Music Recognition

Wick, Christoph January 2020 (has links) (PDF)
In recent years, great progress has been made in Artificial Intelligence (AI) due to Deep Learning, which has steadily yielded new state-of-the-art results, especially in image recognition tasks; in some areas, human performance has been reached or even exceeded. This development has already had an impact on Optical Music Recognition (OMR), as several novel methods relying on Deep Learning have succeeded in specific tasks. Musicologists are interested in large-scale musical analysis and in publishing digital transcriptions in collections that enable the development of tools for search and data retrieval. The application of OMR promises to simplify and thus speed up the transcription process by providing fully automatic or semi-automatic approaches. This thesis focuses on the automatic transcription of Medieval music, in particular square notation, which poses a challenging task due to complex layouts, highly varying handwritten notation, and degradation. However, since handwritten music notation is quite complex to read even for an experienced musicologist, it is to be expected that manual corrections will still be required to obtain the transcriptions, even with new OMR techniques. This thesis presents several new approaches and open-source software solutions providing state-of-the-art technology for layout analysis and Automatic Text Recognition (ATR) of early documents and for OMR of Medieval manuscripts. Fully Convolutional Networks (FCNs) are applied to segment historical manuscripts and early printed books, to detect staff lines, and to recognize neume notation. The ATR engine Calamari, which performs ATR of early prints as well as recognition of lyrics, is presented. Configurable CNN/LSTM network architectures trained with the segmentation-free CTC loss are applied to the sequential recognition of text and of monophonic music.
Finally, a syllable-to-neume assignment algorithm is presented, which represents the final step in obtaining a complete transcription of the music. The evaluations show that the performance of each algorithm depends strongly on the material at hand and the number of training instances. The presented staff line detection correctly identifies staff lines and staves with an $F_1$-score above $99.5\%$. The symbol recognition yields a diplomatic Symbol Accuracy Rate (dSAR), which counts the number of correct predictions in the symbol sequence normalized by its length, of above $90\%$. The ATR of lyrics achieved a Character Accuracy Rate (CAR), equivalently the number of correct predictions normalized by the sentence length, of above $93\%$ when trained on 771 lyric lines of Medieval manuscripts and of $99.89\%$ when trained on around 3.5 million lines of contemporary printed fonts. The assignment of syllables to their corresponding neumes reached $F_1$-scores of up to $99.2\%$. A direct comparison to previously published performances is difficult due to differing materials and metrics; however, estimations indicate that the values reported in this thesis exceed the state of the art for square notation. A further goal of this thesis is to enable musicologists without technical background to apply the developed algorithms in a complete workflow through a user-friendly and comfortable Graphical User Interface (GUI) that encapsulates the technical details. For this purpose, this thesis presents the web application OMMR4all. Its fully functional workflow includes the proposed state-of-the-art machine-learning algorithms and optionally allows for manual intervention at any stage to correct the output, preventing error propagation. To simplify the manual (post-)correction, OMMR4all provides an overlay editor that superimposes the annotations on a scan of the original manuscript so that errors can easily be spotted.
The workflow is designed to be iteratively improvable by training better models as soon as new Ground Truth (GT) is available.
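Metrics such as the dSAR and the character accuracy reported above share one shape: correct predictions estimated via edit distance, normalized by sequence length. A minimal sketch (illustrative strings, not the thesis's data):

```python
def levenshtein(a, b):
    # classic dynamic-programming edit distance between two sequences
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,           # deletion
                           cur[j - 1] + 1,        # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def accuracy_rate(prediction, ground_truth):
    # 1 - (edit distance / ground-truth length): the shape of both the dSAR
    # (over symbol sequences) and the character accuracy used for ATR
    return 1.0 - levenshtein(prediction, ground_truth) / len(ground_truth)

car = accuracy_rate("kyrie eleison", "kyrie eleyson")
```

The same function applied to sequences of music symbols instead of characters gives the dSAR.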
330

DEEP LEARNING FOR DETECTING AND CLASSIFYING THE GROWTH STAGES OF WEEDS ON FIELDS

Almalky, Abeer Matar 01 May 2023 (has links) (PDF)
Due to the current and anticipated massive increase in world population, expanding the agricultural cycle is necessary to accommodate the expected human demand. However, weed invasion, a detrimental factor for agricultural production and quality, is a challenge for such agricultural expansion. Controlling weeds on fields therefore requires an accurate, automatic, low-cost, environmentally friendly, real-time weed detection technique. Additionally, automating the detection, classification, and counting of weeds by growth stage is vital for applying appropriate weed control techniques. The literature review shows a gap in research efforts that automate the classification of weeds’ growth stages using DL models. Accordingly, in this thesis, a dataset of four growth stages of the weed Consolida regalis was collected using an unmanned aerial vehicle. In addition, we developed and trained one-stage and two-stage deep learning models: YOLOv5 and RetinaNet (with ResNet-101-FPN and ResNet-50-FPN backbones), and Faster R-CNN (with ResNet-101-DC5, ResNet-101-FPN, and ResNet-50-FPN backbones), respectively. Comparing the results of all trained models, we conclude that, on one hand, the YOLOv5-small model detects weeds and classifies their growth stages with the shortest inference time in real time, with the highest recall of 0.794, and counts weed instances across the four growth stages in real time at 0.033 milliseconds per frame. On the other hand, RetinaNet with the ResNet-101-FPN backbone shows accurate and precise results in the testing phase (average precision of 87.457). Although the YOLOv5-large model showed the highest precision in classifying almost all growth stages during training, it could not detect all objects in the test images.
Overall, RetinaNet with the ResNet-101-FPN backbone delivers accurate and highly precise results, while YOLOv5-small has the shortest inference time for detection and growth-stage classification. Farmers can use the resulting deep learning models to detect, classify, and count weeds by growth stage automatically, decreasing not only the required time and labor cost but also the use of chemicals to control weeds on fields.
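The recall and precision figures quoted for detectors such as YOLOv5 and RetinaNet come from matching predicted boxes to ground-truth boxes by Intersection-over-Union (IoU). A minimal sketch with a greedy matcher (illustrative; standard evaluation toolkits use confidence-ordered matching and sweep IoU thresholds):

```python
def iou(a, b):
    # boxes as (x1, y1, x2, y2); returns intersection area / union area
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def precision_recall(predictions, truths, thr=0.5):
    # greedy one-to-one matching: a prediction is a true positive if it
    # overlaps an unmatched ground-truth box with IoU >= thr
    matched, tp = set(), 0
    for p in predictions:
        for i, t in enumerate(truths):
            if i not in matched and iou(p, t) >= thr:
                matched.add(i)
                tp += 1
                break
    return tp / len(predictions), tp / len(truths)

preds = [(0, 0, 10, 10), (20, 20, 30, 30), (50, 50, 60, 60)]
gts = [(1, 1, 11, 11), (20, 20, 30, 30)]
p, r = precision_recall(preds, gts)
```

Counting instances per growth stage then amounts to running this matching per class and tallying the true positives.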
