  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

Learning-based methods for resource allocation and interference management in energy-efficient small cell networks

Samarakoon, S. (Sumudu) 07 November 2017 (has links)
Abstract Resource allocation and interference management in wireless small cell networks have been areas of key research interest in the past few years. Although a large number of research studies have been carried out, the need for high capacity, reliability, and energy efficiency in the emerging fifth-generation (5G) networks warrants the development of methodologies focusing on ultra-dense and self-organizing small cell network (SCN) scenarios. In this regard, the prime motivation of this thesis is to propose an array of distributed methodologies to solve the problem of joint resource allocation and interference management in SCNs pertaining to different network architectures. The present dissertation proposes and investigates distributed control mechanisms for wireless SCNs in three main cases: a backhaul-aware interference management mechanism for the uplink of wireless SCNs, a dynamic cluster-based approach for maximizing the energy efficiency of dense wireless SCNs, and a joint power control and user scheduling mechanism for optimizing energy efficiency in ultra-dense SCNs. Optimizing SCNs, especially in the ultra-dense regime, is extremely challenging due to the severe coupling in interference and the dynamics of both queues and channel states. Moreover, due to the lack of inter-base-station/cluster communication, smart distributed learning mechanisms are required to autonomously choose optimal transmission strategies based on local information. To overcome these challenges, an array of distributed algorithms is developed by combining tools from machine learning, Lyapunov optimization, and mean-field theory. For each of the above proposals, extensive sets of simulations have been carried out to validate the performance of the proposed methods against conventional models that fail to account for the limitations due to network scale, queue and channel-state dynamics, backhaul heterogeneity and capacity constraints, and the lack of coordination between network elements. The results show that the proposed methods yield significant gains in terms of energy savings, rate improvements, and delay reductions compared to the conventional models studied in the existing literature.
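The thesis develops its algorithms formally; purely as a rough illustration of the Lyapunov-optimization ingredient mentioned in the abstract, the sketch below shows a generic drift-plus-penalty rule in which a transmitter trades queue backlog against energy cost each slot. The trade-off weight V, the rate model, and the traffic statistics are illustrative assumptions, not values or methods from the dissertation.

```python
import numpy as np

# Minimal drift-plus-penalty sketch (illustrative only, not the thesis algorithm).
# Each slot, a transmitter picks the power level that minimizes
#   V * power - queue_backlog * service_rate(power, channel),
# trading energy cost against queue stability.

rng = np.random.default_rng(0)
V = 10.0                              # energy/backlog trade-off weight (assumed)
powers = np.linspace(0.0, 1.0, 11)    # candidate power levels (assumed)
queue = 0.0

for t in range(1000):
    arrivals = rng.poisson(0.3)                 # assumed arrival process
    channel = rng.exponential(1.0)              # assumed fading gain
    rates = np.log2(1.0 + channel * powers)     # Shannon-style rate model
    objective = V * powers - queue * rates      # drift-plus-penalty objective
    p = powers[np.argmin(objective)]
    served = np.log2(1.0 + channel * p)
    queue = max(queue + arrivals - served, 0.0)

print(f"final backlog: {queue:.2f}")
```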
32

Computer aided identification of biological specimens using self-organizing maps

Dean, Eileen J 12 January 2011 (has links)
For scientific or socio-economic reasons it is often necessary or desirable that biological material be identified. Given that there are an estimated 10 million living organisms on Earth, the identification of biological material can be problematic, and the services of taxonomic specialists are often required. However, if such expertise is not readily available, it is necessary to attempt an identification using an alternative method. Some of these alternative methods are unsatisfactory or can lead to a wrong identification. One of the most common problems encountered when identifying specimens is that important diagnostic features are often not easily observed, or may even be completely absent. A number of techniques can be used to try to overcome this problem; one of these, the Self-Organizing Map (SOM), is particularly appealing because of its ability to handle missing data. This thesis explores the use of SOMs as a technique for the identification of indigenous trees of the Acacia species in KwaZulu-Natal, South Africa. The ability of the SOM technique to perform exploratory data analysis through data clustering is utilized and assessed, as is its usefulness for visualizing the results of the analysis of numerical, multivariate botanical data sets. The SOM's ability to investigate, discover and interpret relationships within these data sets is examined, and the technique's ability to identify tree species successfully is tested. These data sets are also tested using the C5 and CN2 classification techniques, and the results from both are compared with those obtained using a commercial SOM package. These results indicate that the application of the SOM to the problem of biological identification could provide the start of the long-awaited breakthrough in computerized identification that biologists have eagerly been seeking. / Dissertation (MSc)--University of Pretoria, 2011. / Computer Science / unrestricted
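As a rough, self-contained illustration of the SOM technique discussed above (the study itself used a commercial SOM package on botanical data), the sketch below trains a small map with NumPy and handles missing values by simply ignoring NaN components when matching and updating. The grid size, learning schedule, and toy data are assumptions.

```python
import numpy as np

# Minimal self-organizing map (SOM) sketch with naive missing-value handling.
# Illustrative only: grid size, learning schedule and toy data are assumed.

rng = np.random.default_rng(1)
n_features, grid = 6, (5, 5)
weights = rng.random((grid[0], grid[1], n_features))   # codebook vectors

# Toy multivariate data with some missing entries (NaN), as in real morphometric sets
data = rng.random((200, n_features))
data[rng.random(data.shape) < 0.1] = np.nan

def bmu(x, w):
    """Best-matching unit, ignoring missing (NaN) components of x."""
    mask = ~np.isnan(x)
    d = np.sum((w[..., mask] - x[mask]) ** 2, axis=-1)
    return np.unravel_index(np.argmin(d), d.shape)

coords = np.stack(np.meshgrid(np.arange(grid[0]), np.arange(grid[1]),
                              indexing="ij"), axis=-1)

for epoch in range(20):
    lr = 0.5 * (1 - epoch / 20)            # decaying learning rate
    sigma = 2.0 * (1 - epoch / 20) + 0.5   # decaying neighbourhood radius
    for x in data:
        bi, bj = bmu(x, weights)
        dist2 = np.sum((coords - np.array([bi, bj])) ** 2, axis=-1)
        h = np.exp(-dist2 / (2 * sigma ** 2))[..., None]   # neighbourhood function
        mask = ~np.isnan(x)
        weights[..., mask] += lr * h * (x[mask] - weights[..., mask])

print("trained SOM codebook shape:", weights.shape)
```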
33

Learning Decentralized Goal-Based Vector Quantization

Gupta, Piyush 05 1900 (has links) (PDF)
No description available.
34

Optimalizační algoritmy v logistických kombinatorických úlohách / Algorithms for Computerized Optimization of Logistic Combinatorial Problems

Bokiš, Daniel January 2015 (has links)
This thesis deals with optimization problems, with the main focus on the logistic Vehicle Routing Problem (VRP). The first part establishes the term optimization and presents the most important optimization problems. The next section deals with methods capable of solving these problems. It then explores how to apply those methods to the specific VRP, along with several enhancements of those algorithms. The thesis also introduces a learning method capable of using knowledge from previous solutions. At the end of the thesis, experiments are performed to tune the parameters of the algorithms used and to discuss the benefits of the suggested improvements.
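For readers unfamiliar with the VRP, the sketch below shows a minimal nearest-neighbour construction heuristic for a capacitated instance. It only illustrates the problem being optimized; it is not one of the metaheuristics or the learning method evaluated in the thesis, and the instance data and vehicle capacity are made up.

```python
import math
import random

# Minimal capacitated-VRP construction heuristic (nearest neighbour), for illustration.
random.seed(0)
depot = (0.0, 0.0)
customers = {i: (random.uniform(-10, 10), random.uniform(-10, 10)) for i in range(1, 16)}
demand = {i: random.randint(1, 4) for i in customers}
capacity = 10  # assumed vehicle capacity

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

routes, unvisited = [], set(customers)
while unvisited:
    load, pos, route = 0, depot, []
    while True:
        # nearest customer that still fits in the vehicle
        feasible = [c for c in unvisited if load + demand[c] <= capacity]
        if not feasible:
            break
        nxt = min(feasible, key=lambda c: dist(pos, customers[c]))
        route.append(nxt)
        load += demand[nxt]
        pos = customers[nxt]
        unvisited.remove(nxt)
    routes.append(route)

total = sum(
    dist(depot, customers[r[0]]) + dist(customers[r[-1]], depot)
    + sum(dist(customers[a], customers[b]) for a, b in zip(r, r[1:]))
    for r in routes
)
print(f"{len(routes)} routes, total length {total:.1f}")
```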
35

Algorithmes de machine learning adaptatifs pour flux de données sujets à des changements de concept / Adaptive machine learning algorithms for data streams subject to concept drifts

Loeffel, Pierre-Xavier 04 December 2017 (has links)
In this thesis, we investigate the problem of supervised classification on a data stream subject to concept drifts. In order to learn in this environment, we claim that a successful learning algorithm must combine several characteristics: it must be able to learn and adapt continuously, it should not make any assumption on the nature of the concept or on the expected type of drifts, and it should be allowed to abstain from prediction when necessary. On-line learning algorithms are the obvious choice for handling data streams; their update mechanism allows them to continuously refine the learned model by always making use of the latest data. The instance-based (IB) structure also has properties that make it extremely well suited to handling data streams with drifting concepts. IB algorithms make very few assumptions about the nature of the concept they are trying to learn, which grants them the flexibility to learn a wide range of concepts. Another strength is that storing some of the past observations in memory can provide valuable meta-information that the algorithm can use later. Furthermore, the IB structure allows the adaptation process to rely on hard evidence of obsolescence, so that adaptation to concept changes can happen without the need to explicitly detect the drifts. Finally, this thesis stresses the importance of allowing the learning algorithm to abstain from prediction in this framework: drifts can generate a lot of uncertainty and, at times, the algorithm might lack the necessary information to predict accurately.
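A minimal sketch of the instance-based, abstention-capable idea described above: a sliding-window k-NN classifier that refuses to predict when it has too little data or when its neighbours disagree. The window size, k, the agreement threshold, and the toy drifting stream are illustrative choices, not the algorithm proposed in the thesis.

```python
import numpy as np
from collections import deque

class WindowedKNN:
    """Sliding-window k-NN that can abstain (return None) when uncertain."""
    def __init__(self, k=5, window=200, min_agreement=0.8):
        self.k, self.min_agreement = k, min_agreement
        self.window = deque(maxlen=window)   # (x, y) pairs; old ones fall out

    def predict(self, x):
        if len(self.window) < self.k:
            return None                      # abstain: not enough evidence yet
        X = np.array([w[0] for w in self.window])
        y = np.array([w[1] for w in self.window])
        nearest = np.argsort(np.linalg.norm(X - x, axis=1))[: self.k]
        labels, counts = np.unique(y[nearest], return_counts=True)
        best = counts.argmax()
        if counts[best] / self.k < self.min_agreement:
            return None                      # abstain: neighbours disagree
        return labels[best]

    def update(self, x, y):
        self.window.append((x, y))

# Toy drifting stream: the decision boundary flips halfway through.
rng = np.random.default_rng(0)
clf, correct, abstained = WindowedKNN(), 0, 0
for t in range(1000):
    x = rng.normal(size=2)
    y = int(x[0] > 0) if t < 500 else int(x[0] < 0)   # abrupt concept drift
    pred = clf.predict(x)
    if pred is None:
        abstained += 1
    elif pred == y:
        correct += 1
    clf.update(x, y)
print(f"correct: {correct}, abstentions: {abstained}")
```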
36

Hierarchical Temporal Memory Cortical Learning Algorithm for Pattern Recognition on Multi-core Architectures

Price, Ryan William 01 January 2011 (has links)
Strongly inspired by an understanding of mammalian cortical structure and function, the Hierarchical Temporal Memory Cortical Learning Algorithm (HTM CLA) is a promising new approach to problems of recognition and inference in space and time. Only a subset of the theoretical framework of this algorithm has been studied, but it is already clear that more information is needed about the performance of HTM CLA on real data and about the associated computational costs. For the work presented here, a complete implementation of Numenta's current algorithm was written in C++. In validating the implementation, first- and higher-order sequence learning was briefly examined, as was algorithm behavior on simple pattern recognition with noisy data. A pattern recognition task was created using sequences of handwritten digits, and a performance analysis of the sequential implementation was carried out. The analysis indicates that the resulting rapid increase in computing load may limit algorithm scalability, which may, in turn, be an obstacle to widespread adoption of the algorithm. Two critical hotspots in the sequential code were identified, and a parallelized version was developed using OpenMP multi-threading. A scalability analysis of the parallel implementation was performed on a state-of-the-art multi-core computing platform. Modest speedup was readily achieved with straightforward parallelization. Parallelization on multi-core systems is an attractive choice for moderately sized applications, but significantly larger ones are likely to remain infeasible without more specialized hardware acceleration accompanied by optimizations to the algorithm.
37

Strategies for Discriminating Earthquakes Using a Repeating Signal Detector to Investigate Induced Seismicity in Eastern Ohio

Chiorini, Sutton 01 December 2019 (has links)
No description available.
38

SMART-LEARNING ENABLED AND THEORY-SUPPORTED OPTIMAL CONTROL

Sixiong You (14374326) 03 May 2023 (has links)
This work focuses on solving general optimal control problems with smart-learning-enabled and theory-supported optimal control (SET-OC) approaches. The proposed SET-OC includes two main directions. Firstly, following the basic idea of the direct method, a smart-learning-enabled iterative optimization algorithm (SEIOA) is proposed for solving discrete optimal control problems. Via discretization and reformulation, the optimal control problem is converted into a general quadratically constrained quadratic programming (QCQP) problem, and the SEIOA is applied to solving QCQPs. Specifically, a structure-exploiting decomposition scheme is first introduced to reduce the complexity of the original problem. Next, an iterative search, combined with an intersection-cutting plane, is developed to achieve global convergence. Furthermore, considering the implicit relationship between the algorithmic parameters and the convergence rate of the iterative search, deep learning is applied to design the algorithmic parameters from an appropriate amount of training data to improve the convergence properties. To demonstrate the effectiveness and improved computational performance of the proposed SEIOA, the developed algorithms have been applied to extensive real-world problems, including unmanned aerial vehicle path-planning problems and general QCQP problems. According to the theoretical analysis of global convergence and the simulation results, the efficiency, robustness, and improved convergence rate of the optimization framework are analyzed and verified against state-of-the-art optimization methods for solving general QCQP problems. Secondly, an onboard learning-based optimal control method (L-OCM) is proposed to solve optimal control problems. Supported by optimal control theory, the necessary conditions of optimality can be derived, which leads to two two-point boundary value problems (TPBVPs). Critical parameters are then identified to approximate the complete solutions of the TPBVPs. To find the implicit relationship between the initial states and these critical parameters, deep neural networks are constructed to learn the values of the critical parameters in real time, with training data obtained from offline solutions. To demonstrate the effectiveness and improved computational performance of the proposed L-OCM approaches, the developed algorithms have been applied to extensive real-world problems, including two-dimensional human-Mars entry, powered-descent, and landing guidance problems, as well as fuel-optimal powered descent guidance (PDG) problems. In addition, since there has been no thorough analysis of the properties of the optimal control profile for PDG under state constraints, a rigorous theoretical analysis of the fuel-optimal PDG problem with state constraints is also provided. According to the theoretical analysis and simulation results, the optimality, robustness, and real-time performance of the proposed L-OCM are analyzed and verified, indicating its potential for onboard implementation.
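To make the QCQP reformulation mentioned above concrete, the toy example below states and solves a two-variable QCQP (quadratic objective, one quadratic inequality constraint) with a generic SciPy solver. The matrices are arbitrary, and this stands in for neither the SEIOA nor its decomposition and cutting-plane machinery.

```python
import numpy as np
from scipy.optimize import minimize

# Tiny illustrative QCQP: minimize x^T Q x + c^T x  subject to  x^T A x <= 1.
# Solved here with a generic NLP solver; the thesis develops a specialised algorithm.

Q = np.array([[2.0, 0.5], [0.5, 1.0]])   # assumed objective Hessian
c = np.array([-1.0, -2.0])
A = np.array([[1.0, 0.0], [0.0, 3.0]])   # assumed constraint matrix

objective = lambda x: x @ Q @ x + c @ x
constraint = {"type": "ineq", "fun": lambda x: 1.0 - x @ A @ x}  # x^T A x <= 1

res = minimize(objective, x0=np.zeros(2), method="SLSQP", constraints=[constraint])
print("x* =", res.x, "f* =", res.fun)
```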
39

Разработка аналитического обеспечения технологии машинного обучения в деятельности страховой компании : магистерская диссертация / Development of analytical support for machine learning technology in the activities of an insurance company

Денисенко, Н. С., Denisenko, N. S. January 2022 (has links)
The dissertation studies the use of machine learning methods in the insurance industry. The possibilities of an architectural approach in developing a machine learning model are considered, and trends in the digital transformation of the insurance industry are analyzed. The effectiveness of using machine learning in insurance is evaluated. A complete architecture model of PJSC IC Rosgosstrakh is built, and an analytical machine learning model for the insurance company's tariffication is developed. Based on the process approach, all phases of the project to introduce the machine learning model into the activities of the insurance company are considered in detail. Finally, a simulation model for managing the project of developing and implementing the machine learning model in the company's activities is developed and implemented for various scenarios.
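The dissertation abstract does not state which model family it used; purely as an illustration of machine-learning-based tariffication, the sketch below fits a Poisson regression (a common claim-frequency baseline) to synthetic policy data with scikit-learn. The features, coefficients, and data-generating process are assumptions, not the model built in the dissertation.

```python
import numpy as np
from sklearn.linear_model import PoissonRegressor

# Illustrative claim-frequency model for tariffication (assumed setup, synthetic data).
rng = np.random.default_rng(0)
n = 5000
age = rng.uniform(18, 80, n)
car_power = rng.uniform(50, 300, n)
X = np.column_stack([age, car_power])

# Synthetic claim counts: younger drivers and stronger cars claim more often.
lam = np.exp(-2.0 - 0.02 * (age - 40) + 0.004 * (car_power - 100))
claims = rng.poisson(lam)

model = PoissonRegressor(alpha=1e-3, max_iter=1000).fit(X, claims)
print("coefficients:", model.coef_, "intercept:", model.intercept_)
print("expected claims for a 25-year-old with a 200-hp car:",
      model.predict([[25, 200]])[0])
```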
40

A Study on the Use of Unsupervised, Supervised, and Semi-supervised Modeling for Jamming Detection and Classification in Unmanned Aerial Vehicles

Margaux Camille Marie Catafort--Silva (18477354) 02 May 2024 (has links)
<p dir="ltr">In this work, first, unsupervised machine learning is proposed as a study for detecting and classifying jamming attacks targeting unmanned aerial vehicles (UAV) operating at a 2.4 GHz band. Three scenarios are developed with a dataset of samples extracted from meticulous experimental routines using various unsupervised learning algorithms, namely K-means, density-based spatial clustering of applications with noise (DBSCAN), agglomerative clustering (AGG) and Gaussian mixture model (GMM). These routines characterize attack scenarios entailing barrage (BA), single- tone (ST), successive-pulse (SP), and protocol-aware (PA) jamming in three different settings. In the first setting, all extracted features from the original dataset are used (i.e., nine in total). In the second setting, Spearman correlation is implemented to reduce the number of these features. In the third setting, principal component analysis (PCA) is utilized to reduce the dimensionality of the dataset to minimize complexity. The metrics used to compare the algorithms are homogeneity, completeness, v-measure, adjusted mutual information (AMI) and adjusted rank index (ARI). The optimum model scored 1.00, 0.949, 0.791, 0.722, and 0.791, respectively, allowing the detection and classification of these four jamming types with an acceptable degree of confidence.</p><p dir="ltr">Second, following a different study, supervised learning (i.e., random forest modeling) is developed to achieve a binary classification to ensure accurate clustering of samples into two distinct classes: clean and jamming. Following this supervised-based classification, two-class and three-class unsupervised learning is implemented considering three of the four jamming types: BA, ST, and SP. In this initial step, the four aforementioned algorithms are used. This newly developed study is intended to facilitate the visualization of the performance of each algorithm, for example, AGG performs a homogeneity of 1.0, a completeness of 0.950, a V-measure of 0.713, an ARI of 0.557 and an AMI of 0.713, and GMM generates 1, 0.771, 0.645, 0.536 and 0.644, respectively. Lastly, to improve the classification of this study, semi-supervised learning is adopted instead of unsupervised learning considering the same algorithms and dataset. In this case, GMM achieves results of 1, 0.688, 0.688, 0.786 and 0.688 whereas DBSCAN achieves 0, 0.036, 0.028, 0.018, 0.028 for homogeneity, completeness, V-measure, ARI and AMI respectively. Overall, this unsupervised learning is approached as a method for jamming classification, addressing the challenge of identifying newly introduced samples.</p>
