31

A comparative evaluation of 3d and spatio-temporal deep learning techniques for crime classification and prediction

Matereke, Tawanda Lloyd, January 2021
Magister Scientiae - MSc / This thesis presents a comparative evaluation of 3D and spatio-temporal deep learning methods for crime classification and prediction using the Chicago crime dataset, which contains 7.29 million records collected from 2001 to 2020. Crime classification experiments are carried out using two 3D deep learning algorithms: the 3D Convolutional Neural Network (3D CNN) and the 3D Residual Network (3D ResNet). The classification models are evaluated using accuracy, F1 score, Area Under the Receiver Operating Characteristic curve (AUROC), and Area Under the Precision-Recall Curve (AUCPR). The effect of spatial grid resolution on the performance of the classification models is also evaluated during training, validation, and testing.
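As a rough illustration of the kind of model the abstract describes (not the thesis's actual architecture or data pipeline), the following sketch shows a minimal 3D CNN in PyTorch operating on spatio-temporal crime-count grids shaped (batch, channels, time, height, width); all layer sizes and the input shape are assumptions.

```python
# A minimal sketch of a 3D CNN classifier for spatio-temporal crime grids.
# Layer widths, kernel sizes, and input dimensions are illustrative only.
import torch
import torch.nn as nn

class Crime3DCNN(nn.Module):
    def __init__(self, n_classes: int, in_channels: int = 1):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),  # collapse time/height/width
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x).flatten(1)
        return self.classifier(h)

model = Crime3DCNN(n_classes=8)
logits = model(torch.randn(4, 1, 12, 32, 32))  # 12 time steps, 32x32 grid
print(logits.shape)  # torch.Size([4, 8])
```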
32

Low Complexity Hybrid Precoding and Combining for Millimeter Wave Systems

Alouzi, Mohamed, 27 April 2023
The evolution to 5G and its use cases is driven by data-intensive applications requiring higher data rates over wireless channels. This has led to research into massive multiple input multiple output (MIMO) techniques and the use of the millimeter wave (mm wave) band. Because of the higher path loss at mm wave frequencies and the poor scattering nature of the mm wave channel (fewer paths exist), this thesis first proposes the use of the sphere decoding (SD) algorithm and the semidefinite relaxation (SDR) detector to improve the performance of a uniform planar array (UPA) hybrid beamforming technique with large antenna arrays. The second contribution of this thesis is a low-complexity algorithm using gradient descent for hybrid precoding and combining design in mm wave systems. The thesis also presents a low-complexity algorithm for hybrid precoding and combining that uses momentum gradient descent and Newton's method, which makes the objective function converge faster than other iterative methods in the literature. The two proposed low-complexity algorithms for hybrid precoding and combining do not depend on the antenna array geometry, unlike the orthogonal matching pursuit (OMP) hybrid precoding/combining approach. Moreover, these algorithms allow hybrid precoders/combiners to achieve performance very close to that of the optimal unconstrained digital precoders and combiners within a small number of iterations. Simulation results verify that the proposed hybrid precoding/combining scheme using momentum gradient descent and Newton's method outperforms previous methods in the literature in terms of bit error rate (BER) and achievable spectral efficiency, with lower complexity. Finally, this thesis proposes and examines an iterative algorithm that directly converts hybrid precoding/combining for the full array (FA) architecture to the subarray (SA) architecture, called direct conversion of iterative hybrid precoding/combining from FA to SA (DCIFS). The proposed DCIFS design takes the matrix structure of the analog and baseband precoding and combining into account in its derivation. Moreover, it does not depend on the antenna array geometry, unlike other techniques such as the OMP hybrid precoding/combining approach, nor does it assume any other constraints. Simulation results show that, compared to its FA hybrid design counterpart, the proposed DCIFS hybrid design can provide near-optimal spectral efficiency while maintaining very low complexity, and better spectral efficiency than the conventional SA hybrid design with the same hardware complexity.
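The gradient-descent contribution can be illustrated with a hedged NumPy sketch: alternately solve for the digital precoder Fbb in closed form and take a projected gradient step on the analog precoder Frf, whose entries are constrained to unit modulus. The dimensions, step size, and iteration count below are illustrative assumptions, not values from the thesis.

```python
# A hedged sketch of gradient-descent hybrid precoding: approximate an
# optimal unconstrained precoder Fopt with the product Frf @ Fbb, where
# Frf has phase-only (unit-modulus) entries.
import numpy as np

rng = np.random.default_rng(0)
Nt, Nrf, Ns = 64, 4, 2            # antennas, RF chains, data streams
Fopt = rng.standard_normal((Nt, Ns)) + 1j * rng.standard_normal((Nt, Ns))
Frf = np.exp(1j * rng.uniform(0, 2 * np.pi, (Nt, Nrf)))  # phase-only entries

step = 1e-3
for _ in range(200):
    # Digital precoder: least-squares solution for the current Frf
    Fbb, *_ = np.linalg.lstsq(Frf, Fopt, rcond=None)
    # Gradient of ||Fopt - Frf Fbb||_F^2 with respect to Frf
    grad = -(Fopt - Frf @ Fbb) @ Fbb.conj().T
    Frf = Frf - step * grad
    Frf = np.exp(1j * np.angle(Frf))  # project back to unit modulus

Fbb, *_ = np.linalg.lstsq(Frf, Fopt, rcond=None)
print(np.linalg.norm(Fopt - Frf @ Fbb))  # residual approximation error
```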
33

Improving Neural Network Classification Training

Rimer, Michael Edwin, 05 September 2007 (PDF)
The following work presents a new set of general methods for improving neural network accuracy on classification tasks, grouped under the label of classification-based (CB) methods. The central theme of these approaches is to provide problem representations and error functions that improve classification accuracy more directly than conventional learning and error functions. The CB1 algorithm attempts to maximize classification accuracy by selectively backpropagating error only on misclassified training patterns. CB2 adds a sliding error threshold to the CB1 algorithm, interpolating between the behavior of CB1 and standard error backpropagation as training progresses in order to avoid prematurely saturated network weights. CB3 learns a confidence threshold for each combination of training pattern and output class, modeling an error function based on the network's performance as it trains in order to avoid local overfitting and premature weight saturation. PL1 is a point-wise local binning algorithm used to calibrate a learning model to output more accurate posterior probabilities; it improves the reliability of classification-based networks while retaining their higher classification accuracy. These approaches are demonstrated to be robust to a variety of learning parameter settings and to achieve better classification accuracy than standard approaches on a variety of applications, such as OCR and speech recognition.
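A minimal sketch of the CB1 idea, assuming a generic PyTorch classifier (a simplification of the full method): error is backpropagated only for training patterns the network currently misclassifies, so correctly classified patterns contribute no weight update.

```python
# CB1-style selective backpropagation: loss is computed only over the
# misclassified subset of each batch. Network and data are placeholders.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 3))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(64, 10)
y = torch.randint(0, 3, (64,))

for _ in range(10):
    logits = model(x)
    wrong = logits.argmax(dim=1) != y            # misclassified patterns
    if wrong.any():
        loss = loss_fn(logits[wrong], y[wrong])  # error only where wrong
        opt.zero_grad()
        loss.backward()
        opt.step()
```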
34

Exploring the Noise Resilience of Combined Sturges Algorithm

Agarwal, Akrita, January 2015
No description available.
35

Reinforcement Learning-based Human Operator Decision Support Agent for Highly Transient Industrial Processes

Jianqi Ruan (18066763), 03 March 2024
Most industrial processes are not fully automated. Although reference tracking can be handled by low-level controllers, initializing and adjusting the reference (setpoint) values are commonly tasks assigned to human operators. A major challenge that arises, though, is control policy variation among operators, which in turn results in inconsistencies in the final product. To guide operators toward better and more consistent performance, researchers have explored optimal control policies through different approaches. Although researchers use different approaches in different applications, an accurate process model is crucial to all of them. For a highly transient process (e.g., the startup of a manufacturing process), however, modeling can be challenging and inaccurate, and approaches that rely heavily on a process model may not work well. One example, which motivates this work, is process startup in a twin-roll steel strip casting process.

In this dissertation, I propose three offline reinforcement learning (RL) algorithms, which require the RL agent to learn a control policy from a fixed dataset pre-collected by human operators during operation of the twin-roll casting process. Compared to existing offline RL algorithms, the proposed algorithms focus on exploiting the best control policy used by human operators rather than exploring new control policies constrained by the existing ones. In addition, existing offline RL algorithms give insufficient consideration to the imbalanced dataset problem. In the second and third proposed algorithms, I leverage the idea of cost-sensitive learning to incentivize the RL agent to learn the most valuable control policy rather than the most common one represented in the dataset. Furthermore, since a process model is not available, I propose a performance metric that does not require a process model or simulator for agent testing. The third proposed algorithm is compared with benchmark offline RL algorithms and achieves better and more consistent performance.
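The cost-sensitive idea can be illustrated with a hedged sketch: when learning operator actions from a fixed offline dataset, weight each sample by a quality score for the run it came from, so rare but high-performing behavior is not drowned out by the most common behavior. The network, shapes, and quality scores below are placeholders, not the dissertation's actual design.

```python
# Cost-sensitive weighting applied to offline policy learning (sketched
# here as weighted regression from logged states to logged actions).
import torch
import torch.nn as nn

policy = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

states = torch.randn(256, 8)          # logged process measurements
actions = torch.randn(256, 1)         # logged setpoint adjustments
quality = torch.rand(256)             # per-sample performance score in [0, 1]
weights = quality / quality.mean()    # up-weight the most valuable samples

for _ in range(100):
    pred = policy(states)
    per_sample = ((pred - actions) ** 2).squeeze(1)
    loss = (weights * per_sample).mean()  # cost-sensitive loss
    opt.zero_grad()
    loss.backward()
    opt.step()
```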
36

Building occupancy analytics based on deep learning through the use of environmental sensor data

Zhang, Zheyu, 24 May 2023
Balancing indoor comfort and energy consumption is crucial to building energy efficiency. Occupancy information is a vital aspect of this process, as it determines energy demand. Although various sensors can gather occupancy information, environmental sensors stand out due to their low cost and privacy benefits. Machine learning algorithms play a critical role in estimating the relationship between occupancy levels and environmental data, and more complex models such as deep learning algorithms are necessary to improve performance. Long Short-Term Memory (LSTM) is a powerful deep learning algorithm that has been utilized in occupancy estimation, and attention mechanisms have recently emerged with improved performance. This study proposes a more effective model for occupancy level estimation by incorporating attention into the existing Long Short-Term Memory algorithm. The results show that the proposed model is more accurate than either algorithm alone and has the potential to be integrated into building energy control systems to conserve even more energy. / Master of Science / The motivation for energy conservation and sustainable development is rapidly increasing, and building energy consumption is a significant part of overall energy use. To make buildings more energy efficient, it is necessary to obtain information on the occupancy level of rooms in the building. Environmental sensors measure factors such as humidity and sound to determine occupancy information. However, the relationship between sensor readings and occupancy levels is complex, making it necessary to use machine learning algorithms to establish the connection. As a subfield of machine learning, deep learning is capable of processing complex data. This research aims to utilize advanced deep learning algorithms to estimate building occupancy levels based on environmental sensor data.
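A minimal sketch of an assumed architecture in this spirit (not the thesis's exact model): an LSTM encodes a window of environmental sensor readings, and a simple additive attention layer weights the hidden states before regressing to an occupancy level.

```python
# LSTM + attention for occupancy estimation. Feature count, hidden size,
# and window length are illustrative assumptions.
import torch
import torch.nn as nn

class OccupancyLSTMAttention(nn.Module):
    def __init__(self, n_features: int, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.score = nn.Linear(hidden, 1)   # attention score per time step
        self.out = nn.Linear(hidden, 1)     # occupancy estimate

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h, _ = self.lstm(x)                        # (batch, time, hidden)
        w = torch.softmax(self.score(h), dim=1)    # attention weights
        context = (w * h).sum(dim=1)               # weighted summary
        return self.out(context)

model = OccupancyLSTMAttention(n_features=5)
est = model(torch.randn(8, 30, 5))  # 30 time steps of 5 sensor channels
print(est.shape)  # torch.Size([8, 1])
```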
37

Study and design of an image analysis system for sperm DNA fragmentation using computational intelligence techniques

Αλμπάνη, Ελένη, 13 July 2010
Medical research has shown that male infertility is directly related to the presence of fragmented DNA in the nuclei of spermatozoa. The abnormalities in sperm concentration, motility, ejaculate volume, and morphology observed in a semen analysis have fragmented DNA as an underlying cause. The experimental embryology and histology laboratory of the Athens Medical School uses the TUNEL (deoxynucleotidyl transferase-mediated dUTP nick end labeling) assay to mark the ends of each DNA fragment with a color different from that used for the rest of the DNA. The result of processing the spermatozoa on a slide is a set of blue fluorescing spermatozoa, with red appearing in the nucleus wherever fragmented DNA is present. The larger the red area, the greater the degree of fragmentation, the more pathological the spermatozoon, and thus the less capable it is of fertilization. The TUNEL assay is followed by imaging the slide with a high-resolution, high-sensitivity camera suitable for fluorescence applications. The images are then processed with dedicated software, as proposed in "Automatic Analysis of TUNEL assay Microscope Images" by Kontaxakis et al. at the 2007 IEEE International Symposium on Signal Processing and Information Technology. This processing segments the images and classifies the depicted objects into three groups: (a) solitary spermatozoa, (b) overlapping spermatozoa, and (c) debris such as leukocytes or sperm fragments. For each solitary spermatozoon, the red and blue pixels are then counted, quantifying the extent of its DNA fragmentation. The aim of this thesis is the study, design, and implementation of a system that takes into account the image analysis data together with data known from the basic semen analysis, such as sperm concentration and motility, and, using computational intelligence techniques, is trained to automatically classify patients according to the overall degree of their DNA fragmentation. Finally, the system estimates a threshold, or a range of values, above which a patient is characterized as infertile. The ultimate goal is for this entire procedure to become a routine test for laboratories dealing with male infertility and artificial insemination, protecting couples from pointless artificial insemination attempts that are harmful to the woman's health.
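The pixel-count step described above can be sketched as follows, assuming an RGB crop containing one segmented solitary spermatozoon; the channel thresholds are illustrative assumptions, not the system's calibrated values.

```python
# Estimate per-cell DNA fragmentation as the fraction of red (fragmented)
# pixels among all stained pixels in a segmented spermatozoon crop.
import numpy as np

def fragmentation_fraction(rgb: np.ndarray) -> float:
    """rgb: (H, W, 3) uint8 crop containing one segmented spermatozoon."""
    r = rgb[..., 0].astype(float)
    b = rgb[..., 2].astype(float)
    red = (r > 100) & (r > 1.5 * b)    # reddish nucleus pixels (assumed rule)
    blue = (b > 100) & (b > 1.5 * r)   # blue-fluorescing cell pixels
    stained = red.sum() + blue.sum()
    return red.sum() / stained if stained else 0.0

cell = np.zeros((32, 32, 3), dtype=np.uint8)
cell[8:24, 8:24, 2] = 200    # synthetic blue cell body
cell[14:18, 14:18, 0] = 200  # synthetic red fragmented region
cell[14:18, 14:18, 2] = 0
print(fragmentation_fraction(cell))
```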
38

Implementing and Evaluating Automaton Learning Algorithms for a Software Testing Platform

Khosravi Bakhtiari, Mohsen, January 2015
The Software Reliability group at KTH-CSC has designed and built a novel test platform, LBTest, for black-box requirements testing of reactive and embedded software systems (e.g., web servers, automobile control units, etc.). The main concept of LBTest is to create a large number of test cases by combining an automaton learning algorithm with a model checking algorithm (NuSMV). The platform aims to support different learned automata, learning algorithms, and model checking algorithms, which can be combined to implement the paradigm of learning-based testing (LBT). This thesis project investigates an existing published algorithm for learning deterministic finite automata (DFA) known as Kearns' algorithm, and aims to determine how effective Kearns' algorithm is from a software testing perspective. Angluin's well-known L* DFA learning algorithm has a simple structure and implementation. Kearns' algorithm, by contrast, has a more complex structure and is harder to implement than the L* algorithm, but it is more efficient and faster. For this reason, the plan is to implement Kearns' algorithm [4], an advanced DFA learning algorithm, from a description in the literature (using Java). A methodology based on the master's thesis of Czerny [8] is used to compare Kearns' algorithm with Angluin's DFA learning algorithm. The comparisons between the Kearns and L* algorithms are based on the number of membership and equivalence queries needed, to investigate the difficulty of learning.
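The query-count comparison can be illustrated with a minimal sketch: wrap the target language in an oracle that counts the membership and (approximate) equivalence queries issued by any learner, whether L* or Kearns-style. The toy target below is an assumption for demonstration, and the thesis's implementation is in Java; the idea is shown here in Python for consistency with the other sketches.

```python
# An oracle wrapper that counts learner queries against a target language.
from typing import Callable, List, Optional

class QueryCounter:
    def __init__(self, accepts: Callable[[str], bool]):
        self._accepts = accepts
        self.membership = 0
        self.equivalence = 0

    def member(self, word: str) -> bool:
        self.membership += 1
        return self._accepts(word)

    def equivalent(self, hypothesis: Callable[[str], bool],
                   test_words: List[str]) -> Optional[str]:
        """Approximate equivalence check; returns a counterexample or None."""
        self.equivalence += 1
        for w in test_words:
            if hypothesis(w) != self._accepts(w):
                return w
        return None

# Toy target: words over {a, b} with an even number of a's
oracle = QueryCounter(lambda w: w.count("a") % 2 == 0)
print(oracle.member("abba"), oracle.equivalent(lambda w: True, ["a", "aa"]))
print(oracle.membership, oracle.equivalence)  # 1 1
```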
39

Credit Card Fraud Detection (Machine Learning Algorithms)

Westerlund, Fredrik, January 2017
Credit card fraud is a field in which perpetrators perform illegal actions that may affect other individuals or companies negatively. For instance, a criminal can steal credit card information from an account holder and then conduct fraudulent transactions. These activities are a potential contributory factor to how illegal organizations, such as terrorists and drug traffickers, support themselves financially. Within the machine learning area, several methods can detect fraudulent credit card transactions: supervised and unsupervised learning algorithms. This essay investigates the supervised approach, where two algorithms, the Hellinger Distance Decision Tree (HDDT) and Random Forest, are evaluated on a real-life dataset of 284,807 transactions. The main purpose is to develop a well-functioning model with a reasonable capacity to categorize transactions as fraudulent or legitimate. As the data is heavily unbalanced, reducing the false-positive rate is also an important part of research in this area. In conclusion, the evaluated algorithms produce fairly similar outcomes, with both models capable of distinguishing the classes from each other. However, the Random Forest approach performs better than HDDT on all measures of interest.
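As a hedged illustration of the Random Forest side of the comparison, the scikit-learn sketch below trains on synthetic data that merely mimics a heavy class imbalance (the real dataset has roughly 0.17% fraud). Class weighting is one common way to counter the imbalance; it is an assumption here rather than the essay's exact setup.

```python
# Random Forest on a heavily unbalanced binary task, with class weighting
# and a per-class precision/recall report.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=20_000, n_features=20,
                           weights=[0.998], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

clf = RandomForestClassifier(n_estimators=200,
                             class_weight="balanced",  # counter imbalance
                             random_state=0)
clf.fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te), digits=3))
```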
40

Parametric kernels for structured data analysis

Shin, Young-in, 04 May 2015
Structured representation of input physical patterns as a set of local features has been useful for a variety of robotics and human-computer interaction (HCI) applications, as it enables a stable understanding of variable inputs. However, this representation does not fit conventional machine learning algorithms and distance metrics, because they assume vector inputs, so learning from input patterns with variable structure is challenging. To address this problem, I propose a general and systematic method to design distance metrics between structured inputs that can be used in conventional learning algorithms. Based on the observation that the geometric distributions of local features over the physical patterns are stable across similar inputs, this is done by combining the local similarities with the conformity of the geometric relationship between local features. The resulting distance metrics, called "parametric kernels", are positive semi-definite and require almost linear time to compute. To demonstrate the general applicability and efficacy of this approach, I designed and applied parametric kernels to handwritten character recognition, on-line face recognition, and object detection from laser range finder sensor data. Parametric kernels achieve recognition rates competitive with state-of-the-art approaches on these tasks.
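A hedged sketch of the core idea: score two feature sets by summing, over all feature pairs, the product of an appearance similarity and a geometric-conformity term. Each factor below is a Gaussian (hence PSD) kernel, so the pairwise sum is PSD as well. Note that this naive double loop is quadratic, whereas the thesis reports an almost-linear-time computation, so treat it purely as an illustration of the combination, not the efficient algorithm.

```python
# An illustrative set kernel combining local appearance similarity with
# geometric layout conformity. Feature dimensions are assumptions.
import numpy as np

def local_sim(f1: np.ndarray, f2: np.ndarray, gamma: float = 1.0) -> float:
    return np.exp(-gamma * np.sum((f1 - f2) ** 2))  # Gaussian similarity

def set_kernel(A, B, gamma=1.0, beta=1.0):
    """A, B: lists of (position, descriptor) pairs for two input patterns."""
    total = 0.0
    for (pa, fa) in A:
        for (pb, fb) in B:
            geom = np.exp(-beta * np.sum((pa - pb) ** 2))  # layout conformity
            total += geom * local_sim(fa, fb, gamma)
    return total  # sum of products of PSD kernels, hence PSD

rng = np.random.default_rng(1)
A = [(rng.standard_normal(2), rng.standard_normal(4)) for _ in range(5)]
B = [(rng.standard_normal(2), rng.standard_normal(4)) for _ in range(5)]
print(set_kernel(A, B))
```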
