1171

Predikce časových řad pomocí statistických metod / Prediction of Time Series Using Statistical Methods

Beluský, Ondrej January 2011 (has links)
Many companies consider it essential to obtain forecasts of time series of uncertain variables that influence their decisions and actions. Marketing involves many decisions that depend on a reliable forecast. Forecasts are based, directly or indirectly, on information derived from historical data. These data may exhibit different patterns, such as a trend, a horizontal pattern, or a cyclical or seasonal pattern. Most methods work by recognizing these patterns, projecting them into the future, and thereby creating a forecast. Other approaches, such as neural networks, are black boxes that rely on learning.
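The pattern-based statistical methods this abstract refers to can be illustrated with two minimal sketches. The function names and the demand data below are invented for illustration and are not from the thesis:

```python
def moving_average_forecast(series, window):
    """Forecast the next value as the mean of the last `window` observations
    (suits a horizontal pattern with noise)."""
    recent = series[-window:]
    return sum(recent) / len(recent)

def exponential_smoothing(series, alpha):
    """Single exponential smoothing: recent observations weigh more.
    Returns the one-step-ahead forecast."""
    level = series[0]
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

demand = [100, 102, 101, 104, 103, 105, 107, 106]
print(moving_average_forecast(demand, 4))   # mean of the last four values
print(exponential_smoothing(demand, 0.5))
```

Trend and seasonal patterns are handled by extensions of the same idea (e.g. double and triple exponential smoothing), which add a trend term and seasonal indices to the level recursion.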
1172

Methodische Aspekte bei der Entwicklung mechanischer Simulationen zur Messung der Funktionalitäten eines Handballschuhs

Krumm, Dominik 17 March 2020 (has links)
Ziel der vorliegenden Arbeit war es, die methodischen Aspekte bei der Entwicklung mechanischer Simulationen zur Messung der Funktionalitäten von Handballschuhen systematisch zu untersuchen und aus den Ergebnissen allgemeingültige Aussagen zum Abstraktionsgrad abzuleiten. Die Untersuchungen der vier methodischen Aspekte Messgerät, Auswertemodell, Einfluss- und Eingangsgröße haben ergeben, dass insgesamt drei Aspekte einen Einfluss auf den Messwert hatten. Mit Ausnahme der Ergebnisse zum Aspekt Eingangsgröße besaßen die untersuchten methodischen Aspekte jeweils einen Einfluss auf den Messwert. Anhand der Ergebnisse konnte abgeleitet werden, dass der Abstraktionsgrad einen Einfluss auf die Messwerte besitzt. / The aim of the current work was to investigate systematically the methodological aspects used in the development of mechanical simulations, which are capable of measuring the functionalities of handball shoes, and to derive general conclusions about the proper degree of abstraction from the results. The investigations of the four methodological aspects, namely measuring instrument, evaluation model, influence quantity and input quantity, have shown that three aspects had an influence on the measurand. Except for the results on the aspect of input quantity, each of the examined methodological aspects had an influence on the measurand. Based on the results, it could be deduced that the degree of abstraction has an influence on the measurand.
1173

Neurala nätverk för självkörande fordon : Utforskande av olika tillvägagångssätt / Neural Networks for Autonomous Vehicles : An Exploration of Different Approaches

Hellner, Simon, Syvertsson, Henrik January 2021 (has links)
Artificiella neurala nätverk (ANN) har ett brett tillämpningsområde och blir allt relevantare på flera håll, inte minst för självkörande fordon. För att träna nätverken används meta-algoritmer. Nätverken kan styra fordonen med hjälp av olika typer av indata. I detta projekt har vi undersökt två meta-algoritmer: genetisk algoritm (GA) och gradient descent tillsammans med bakåtpropagering (GD & BP). Vi har även undersökt två typer av indata: avståndssensorer och linjedetektering. Vi redogör för teorin bakom de metoder vi har försökt implementera. Vi lyckades inte använda GD & BP för att träna nätverk att köra fordon, men vi redogör för hur vi försökte. I resultatdelen redovisar vi hur det med GA gick att träna ANN som använder avståndssensorer och linjedetektering som indata. Sammanfattningsvis lyckades vi implementera självkörande fordon med två olika typer av indata. / Artificial Neural Networks (ANN) have a broad area of application and are growing increasingly relevant, not least in the field of autonomous vehicles. Meta algorithms are used to train networks, which can control a vehicle using several kinds of input data. In this project we have looked at two meta algorithms: genetic algorithm (GA), and gradient descent with backpropagation (GD & BP). We have looked at two types of input to the ANN: distance sensors and line detection. We explain the theory behind the methods we have tried to implement. We did not succeed in using GD & BP to train ANNs to control vehicles, but we describe our attempts. We did, however, succeed in using GA to train ANNs using a combination of distance sensors and line detection as input. In summary we managed to train ANNs to control vehicles using two methods of input, and we encountered interesting problems along the way.
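The GA approach described above — evolving the weights of a fixed-topology network instead of training them with backpropagation — can be sketched as follows. The toy steering data, the tiny one-neuron network, and all parameter values are invented for illustration, not taken from the project:

```python
import math
import random

random.seed(1)

# toy data: (left_distance, right_distance) -> desired steering in [-1, 1]
DATA = [((1.0, 0.2), -0.8), ((0.2, 1.0), 0.8), ((0.5, 0.5), 0.0), ((0.9, 0.4), -0.5)]

def forward(weights, inputs):
    """One tanh neuron mapping two distance sensors to a steering command."""
    w1, w2, b = weights
    return math.tanh(w1 * inputs[0] + w2 * inputs[1] + b)

def fitness(weights):
    """Negative mean squared steering error (higher is better)."""
    return -sum((forward(weights, x) - y) ** 2 for x, y in DATA) / len(DATA)

def evolve(pop_size=30, generations=60, sigma=0.3):
    population = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 4]           # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            p = random.choice(parents)
            children.append([w + random.gauss(0, sigma) for w in p])  # mutation
        population = parents + children
    return max(population, key=fitness)

best = evolve()
print(round(-fitness(best), 4))  # final mean squared steering error
```

The same loop scales to real controllers by replacing `forward` with a full network and `DATA` with fitness evaluated in a driving simulation.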
1174

Design, Analysis, and Applications of Approximate Arithmetic Modules

Ullah, Salim 06 April 2022 (has links)
From the initial computing machines, Colossus of 1943 and ENIAC of 1945, to modern high-performance data centers and the Internet of Things (IoT), four design goals, i.e., high performance, energy efficiency, resource utilization, and ease of programmability, have remained a beacon of development for the computing industry. During this period, the computing industry has exploited the advantages of technology scaling and microarchitectural enhancements to achieve these goals. However, with the end of Dennard scaling, these techniques have diminishing energy and performance advantages. Therefore, it is necessary to explore alternative techniques for satisfying the computational and energy requirements of modern applications. Towards this end, one promising technique is analyzing and surrendering the strict notion of correctness in various layers of the computation stack. Most modern applications across the computing spectrum---from data centers to IoT devices---interact with and analyze real-world data and make decisions accordingly. These applications are broadly classified as Recognition, Mining, and Synthesis (RMS). Instead of producing a single golden answer, these applications produce several feasible answers. They possess an inherent error resilience to the inexactness of the processed data and the corresponding operations. Exploiting this inherent error resilience, the paradigm of Approximate Computing relaxes the strict notion of computational correctness to realize high-performance and energy-efficient systems with acceptable output quality. Prior work on circuit-level approximation has mainly focused on Application-Specific Integrated Circuits (ASICs). However, ASIC-based solutions suffer from long time-to-market and costly development cycles. These limitations of ASICs can be overcome by utilizing the reconfigurable nature of Field Programmable Gate Arrays (FPGAs).
However, due to architectural differences between ASICs and FPGAs, utilizing ASIC-based approximation techniques for FPGA-based systems does not yield proportional performance and energy gains. Therefore, to exploit the principles of approximate computing in FPGA-based hardware accelerators for error-resilient applications, FPGA-optimized approximation techniques are required. Further, most state-of-the-art approximate arithmetic operators do not have a generic approximation methodology to implement new approximate designs for an application's changing accuracy and performance requirements. These works also lack a methodology where a machine learning model can be used to correlate an approximate operator with its impact on the output quality of an application. This thesis addresses these research challenges by designing and exploring FPGA-optimized logic-based approximate arithmetic operators. As multiplication is one of the most computationally complex and frequently used arithmetic operations in modern applications, such as Artificial Neural Networks (ANNs), we have considered it for most of the proposed approximation techniques in this thesis. The primary focus of the work is to provide a framework for generating FPGA-optimized approximate arithmetic operators and efficient techniques to explore approximate operators for implementing hardware accelerators for error-resilient applications. Towards this end, we first present various designs of resource-optimized, high-performance, and energy-efficient accurate multipliers. Although modern FPGAs host high-performance DSP blocks to perform multiplication and other arithmetic operations, our analysis and results show that the orthogonal approach of having resource-efficient and high-performance multipliers is necessary for implementing high-performance accelerators.
Due to the differences in the type of data processed by various applications, the thesis presents individual designs for unsigned, signed, and constant multipliers. Compared to the multiplier IPs provided by the FPGA Synthesis tool, our proposed designs provide significant performance gains. We then explore the designed accurate multipliers and provide a library of approximate unsigned/signed multipliers. The proposed approximations target the reduction in the total utilized resources, critical path delay, and energy consumption of the multipliers. We have explored various statistical error metrics to characterize the approximation-induced accuracy degradation of the approximate multipliers. We have also utilized the designed multipliers in various error-resilient applications to evaluate their impact on applications' output quality and performance. Based on our analysis of the designed approximate multipliers, we identify the need for a framework to design application-specific approximate arithmetic operators. An application-specific approximate arithmetic operator intends to implement only the logic that can satisfy the application's overall output accuracy and performance constraints. Towards this end, we present a generic design methodology for implementing FPGA-based application-specific approximate arithmetic operators from their accurate implementations according to the applications' accuracy and performance requirements. In this regard, we utilize various machine learning models to identify feasible approximate arithmetic configurations for various applications. We also utilize different machine learning models and optimization techniques to efficiently explore the large design space of individual operators and their utilization in various applications. In this thesis, we have used the proposed methodology to design approximate adders and multipliers. 
This thesis also explores other layers of the computation stack (cross-layer) for possible approximations to satisfy an application's accuracy and performance requirements. Towards this end, we first present a low bit-width and highly accurate quantization scheme for pre-trained Deep Neural Networks (DNNs). The proposed quantization scheme does not require re-training (fine-tuning the parameters) after quantization. We also present a resource-efficient FPGA-based multiplier that utilizes our proposed quantization scheme. Finally, we present a framework to allow the intelligent exploration and highly accurate identification of the feasible design points in the large design space enabled by cross-layer approximations. The proposed framework utilizes a novel Polynomial Regression (PR)-based method to model approximate arithmetic operators. The PR-based representation enables machine learning models to better correlate an approximate operator's coefficients with their impact on an application's output quality.

Table of contents:
1. Introduction
   1.1 Inherent Error Resilience of Applications
   1.2 Approximate Computing Paradigm
      1.2.1 Software Layer Approximation
      1.2.2 Architecture Layer Approximation
      1.2.3 Circuit Layer Approximation
   1.3 Problem Statement
   1.4 Focus of the Thesis
   1.5 Key Contributions and Thesis Overview
2. Preliminaries
   2.1 Xilinx FPGA Slice Structure
   2.2 Multiplication Algorithms
      2.2.1 Baugh-Wooley’s Multiplication Algorithm
      2.2.2 Booth’s Multiplication Algorithm
      2.2.3 Sign Extension for Booth’s Multiplier
   2.3 Statistical Error Metrics
   2.4 Design Space Exploration and Optimization Techniques
      2.4.1 Genetic Algorithm
      2.4.2 Bayesian Optimization
   2.5 Artificial Neural Networks
3. Accurate Multipliers
   3.1 Introduction
   3.2 Related Work
   3.3 Unsigned Multiplier Architecture
   3.4 Motivation for Signed Multipliers
   3.5 Baugh-Wooley’s Multiplier
   3.6 Booth’s Algorithm-based Signed Multipliers
      3.6.1 Booth-Mult Design
      3.6.2 Booth-Opt Design
      3.6.3 Booth-Par Design
   3.7 Constant Multipliers
   3.8 Results and Discussion
      3.8.1 Experimental Setup and Tool Flow
      3.8.2 Performance comparison of the proposed accurate unsigned multiplier
      3.8.3 Performance comparison of the proposed accurate signed multiplier with the state-of-the-art accurate multipliers
      3.8.4 Performance comparison of the proposed constant multiplier with the state-of-the-art accurate multipliers
   3.9 Conclusion
4. Approximate Multipliers
   4.1 Introduction
   4.2 Related Work
   4.3 Unsigned Approximate Multipliers
      4.3.1 Approximate 4 × 4 Multiplier (Approx-1)
      4.3.2 Approximate 4 × 4 Multiplier (Approx-2)
      4.3.3 Approximate 4 × 4 Multiplier (Approx-3)
   4.4 Designing Higher Order Approximate Unsigned Multipliers
      4.4.1 Accurate Adders for Implementing 8 × 8 Approximate Multipliers from 4 × 4 Approximate Multipliers
      4.4.2 Approximate Adders for Implementing Higher-order Approximate Multipliers
   4.5 Approximate Signed Multipliers (Booth-Approx)
   4.6 Results and Discussion
      4.6.1 Experimental Setup and Tool Flow
      4.6.2 Evaluation of the Proposed Approximate Unsigned Multipliers
      4.6.3 Evaluation of the Proposed Approximate Signed Multiplier
   4.7 Conclusion
5. Designing Application-specific Approximate Operators
   5.1 Introduction
   5.2 Related Work
   5.3 Modeling Approximate Arithmetic Operators
      5.3.1 Accurate Multiplier Design
      5.3.2 Approximation Methodology
      5.3.3 Approximate Adders
   5.4 DSE for FPGA-based Approximate Operators Synthesis
      5.4.1 DSE using Bayesian Optimization
      5.4.2 MOEA-based Optimization
      5.4.3 Machine Learning Models for DSE
   5.5 Results and Discussion
      5.5.1 Experimental Setup and Tool Flow
      5.5.2 Accuracy-Performance Analysis of Approximate Adders
      5.5.3 Accuracy-Performance Analysis of Approximate Multipliers
      5.5.4 AppAxO MBO
      5.5.5 ML Modeling
      5.5.6 DSE using ML Models
      5.5.7 Proposed Approximate Operators
   5.6 Conclusion
6. Quantization of Pre-trained Deep Neural Networks
   6.1 Introduction
   6.2 Related Work
      6.2.1 Commonly Used Quantization Techniques
   6.3 Proposed Quantization Techniques
      6.3.1 L2L: Log_2_Lead Quantization
      6.3.2 ALigN: Adaptive Log_2_Lead Quantization
      6.3.3 Quantitative Analysis of the Proposed Quantization Schemes
      6.3.4 Proposed Quantization Technique-based Multiplier
   6.4 Results and Discussion
      6.4.1 Experimental Setup and Tool Flow
      6.4.2 Image Classification
      6.4.3 Semantic Segmentation
      6.4.4 Hardware Implementation Results
   6.5 Conclusion
7. A Framework for Cross-layer Approximations
   7.1 Introduction
   7.2 Related Work
   7.3 Error-analysis of approximate arithmetic units
      7.3.1 Application Independent Error-analysis of Approximate Multipliers
      7.3.2 Application Specific Error Analysis
   7.4 Accelerator Performance Estimation
   7.5 DSE Methodology
   7.6 Results and Discussion
      7.6.1 Experimental Setup and Tool Flow
      7.6.2 Behavioral Analysis
      7.6.3 Accelerator Performance Estimation
      7.6.4 DSE Performance
   7.7 Conclusion
8. Conclusions and Future Work
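The core trade-off of the thesis — giving up exact products to simplify the multiplier logic, then characterising the loss with a statistical error metric — can be illustrated with a deliberately crude software sketch. This truncation scheme is not one of the thesis's FPGA designs; it merely stands in for removing partial-product logic:

```python
def approx_mul(a, b, drop_bits=2):
    """Multiply after zeroing the `drop_bits` least-significant bits of each
    operand -- a crude stand-in for pruning partial-product logic."""
    mask = ~((1 << drop_bits) - 1)
    return (a & mask) * (b & mask)

def mean_relative_error(drop_bits, width=8):
    """One statistical error metric: average relative error over all
    width-bit unsigned operand pairs (exhaustive, so keep width small)."""
    total, count = 0.0, 0
    for a in range(1, 1 << width):
        for b in range(1, 1 << width):
            exact = a * b
            total += abs(exact - approx_mul(a, b, drop_bits)) / exact
            count += 1
    return total / count

print(approx_mul(7, 9, 2))   # 4 * 8 = 32 instead of the exact 63
print(round(mean_relative_error(2, width=6), 4))
```

In the thesis the corresponding decision is which LUT and carry-chain logic to omit, and the error metrics (alongside resource, delay, and energy figures) are what the design-space exploration and ML models consume.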
1175

Layout Analysis for Handwritten Documents. A Probabilistic Machine Learning Approach

Quirós Díaz, Lorenzo 21 March 2022 (has links)
[ES] El Análisis de la Estructura de Documentos (Document Layout Analysis), aplicado a documentos manuscritos, tiene como objetivo obtener automáticamente la estructura intrínseca de dichos documentos. Su desarrollo como campo de investigación se extiende desde los sistemas de segmentación de caracteres desarrollados a principios de la década de 1960 hasta los sistemas complejos desarrollados en la actualidad, donde el objetivo es analizar estructuras de alto nivel (líneas de texto, párrafos, tablas, etc.) y la relación que existe entre ellas. Esta tesis, en primer lugar, define el objetivo del Análisis de la Estructura de Documentos desde una perspectiva probabilística. A continuación, la complejidad del problema se reduce a un conjunto de subproblemas complementarios bien conocidos, de manera que pueda ser gestionado por medio de recursos informáticos modernos. Concretamente se abordan tres de los principales problemas del Análisis de la Estructura de Documentos siguiendo una formulación probabilística. Específicamente se aborda la Detección de Línea Base (Baseline Detection), la Segmentación de Regiones (Region Segmentation) y la Determinación del Orden de Lectura (Reading Order Determination). Uno de los principales aportes de esta tesis es la formalización de los problemas de Detección de Línea Base y Segmentación de Regiones bajo un marco probabilístico, donde ambos problemas pueden ser abordados por separado o de forma integrada por los modelos propuestos. Este último enfoque ha demostrado ser muy útil para procesar grandes colecciones de documentos con recursos informáticos limitados. Posteriormente se aborda el subproblema de la Determinación del Orden de Lectura, que es uno de los subproblemas más importantes, aunque subestimados, del Análisis de la Estructura de Documentos, ya que es el nexo que permite convertir los datos extraídos de los sistemas de Reconocimiento Automático de Texto (Automatic Text Recognition Systems) en información útil.
Por lo tanto, en esta tesis abordamos y formalizamos la Determinación del Orden de Lectura como un problema de clasificación probabilística por pares. Además, se proponen dos diferentes algoritmos de decodificación que reducen la complejidad computacional del problema. Por otra parte, se utilizan diferentes modelos estadísticos para representar la distribución de probabilidad sobre la estructura de los documentos. Estos modelos, basados en Redes Neuronales Artificiales (desde un simple Perceptrón Multicapa hasta complejas Redes Convolucionales y Redes de Propuesta de Regiones), se estiman a partir de datos de entrenamiento utilizando algoritmos de aprendizaje automático supervisados. Finalmente, todas las contribuciones se evalúan experimentalmente, no solo en referencias académicas estándar, sino también en colecciones de miles de imágenes. Se han considerado documentos de texto manuscritos y documentos musicales manuscritos, ya que en conjunto representan la mayoría de los documentos presentes en bibliotecas y archivos. Los resultados muestran que los métodos propuestos son muy precisos y versátiles en una amplia gama de documentos manuscritos. / [CA] L'Anàlisi de l'Estructura de Documents (Document Layout Analysis), aplicada a documents manuscrits, pretén automatitzar l'obtenció de l'estructura intrínseca d'un document. El seu desenvolupament com a camp d'investigació comprén des dels sistemes de segmentació de caràcters creats al principi dels anys 60 fins als complexos sistemes de hui dia que busquen analitzar estructures d'alt nivell (línies de text, paràgrafs, taules, etc) i les relacions entre elles. Aquesta tesi busca, primer de tot, definir el propòsit de l'anàlisi de l'estructura de documents des d'una perspectiva probabilística. Llavors, una vegada reduïda la complexitat del problema, es processa utilitzant recursos computacionals moderns, per a dividir-ho en un conjunt de subproblemes complementaris més coneguts.
Concretament, tres dels principals subproblemes de l'Anàlisi de l'Estructura de Documents s'adrecen seguint una formulació probabilística: Detecció de la Línia Base (Baseline Detection), Segmentació de Regions (Region Segmentation) i Determinació de l'Ordre de Lectura (Reading Order Determination). Una de les principals contribucions d'aquesta tesi és la formalització dels problemes de la Detecció de les Línies Base i dels de Segmentació de Regions en un entorn probabilístic, sent els dos problemes tractats per separat o integrats en conjunt pels models proposats. Aquesta última aproximació ha demostrat ser de molta utilitat per a la gestió de grans col·leccions de documents amb uns recursos computacionals limitats. Posteriorment s'ha adreçat el subproblema de la Determinació de l'Ordre de Lectura, sent un dels subproblemes més importants de l'Anàlisi d'Estructures de Documents, encara així subestimat, perquè és el nexe que permet transformar en informació d'utilitat l'extracció de dades dels sistemes de reconeixement automàtic de text. És per això que el fet de determinar l'ordre de lectura s'adreça i formalitza com un problema d'ordenació probabilística per parells. A més, es proposen dos algoritmes descodificadors diferents que redueixen la complexitat computacional del problema. Per altra banda s'utilitzen diferents models estadístics per representar la distribució probabilística sobre l'estructura dels documents. Aquests models, basats en xarxes neuronals artificials (des d'un simple perceptron multicapa fins a complexes xarxes convolucionals i de propostes de regió), s'estimen a partir de dades d'entrenament mitjançant algoritmes d'aprenentatge automàtic supervisats. Finalment, totes les contribucions s'avaluen experimentalment, no només en referents acadèmics estàndard, sinó també en col·leccions de milers d'imatges.
S'han considerat documents de text manuscrit i documents musicals manuscrits, ja que representen la majoria de documents presents a biblioteques i arxius. Els resultats mostren que els mètodes proposats són molt precisos i versàtils en una àmplia gamma de documents manuscrits. / [EN] Document Layout Analysis, applied to handwritten documents, aims to automatically obtain the intrinsic structure of a document. Its development as a research field spans from the character segmentation systems developed in the early 1960s to the complex systems designed nowadays, where the goal is to analyze high-level structures (lines of text, paragraphs, tables, etc) and the relationship between them. This thesis first defines the goal of Document Layout Analysis from a probabilistic perspective. Then, the complexity of the problem is reduced, to be handled by modern computing resources, into a set of well-known complementary subproblems. More precisely, three of the main subproblems of Document Layout Analysis are addressed following a probabilistic formulation, namely Baseline Detection, Region Segmentation and Reading Order Determination. One of the main contributions of this thesis is the formalization of Baseline Detection and Region Segmentation problems under a probabilistic framework, where both problems can be handled separately or in an integrated way by the proposed models. The latter approach is proven to be very useful to handle large document collections under restricted computing resources. Later, the Reading Order Determination subproblem is addressed. It is one of the most important, yet underestimated, subproblems of Document Layout Analysis, since it is the bridge that allows us to convert the data extracted from Automatic Text Recognition systems into useful information. Therefore, Reading Order Determination is addressed and formalized as a pairwise probabilistic sorting problem.
Moreover, we propose two different decoding algorithms that reduce the computational complexity of the problem. Furthermore, different statistical models are used to represent the probability distribution over the structure of the documents. These models, based on Artificial Neural Networks (from a simple Multilayer Perceptron to complex Convolutional and Region Proposal Networks), are estimated from training data using supervised Machine Learning algorithms. Finally, all the contributions are experimentally evaluated, not only on standard academic benchmarks but also in collections of thousands of images. We consider handwritten text documents and handwritten musical documents as they represent the majority of documents in libraries and archives. The results show that the proposed methods are very accurate and versatile in a very wide range of handwritten documents. / Quirós Díaz, L. (2022). Layout Analysis for Handwritten Documents. A Probabilistic Machine Learning Approach [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/181483 / TESIS
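The pairwise formulation above can be sketched concretely: given a matrix `P[i][j]` estimating the probability that region `i` is read before region `j` (hand-set below; the thesis learns it from data), a decoder turns pairwise evidence into a total order. The greedy score-and-sort decoder here is one simple choice for illustration, not necessarily one of the thesis's two algorithms:

```python
def decode_reading_order(P):
    """Order regions by the total probability mass of 'comes first' evidence."""
    n = len(P)
    scores = [sum(P[i][j] for j in range(n) if j != i) for i in range(n)]
    return sorted(range(n), key=lambda i: scores[i], reverse=True)

# three regions: title (0), left column (1), right column (2)
P = [
    [0.0, 0.9, 0.9],   # title almost surely precedes both columns
    [0.1, 0.0, 0.8],   # left column precedes right column
    [0.1, 0.2, 0.0],
]
print(decode_reading_order(P))  # [0, 1, 2]: title, left column, right column
```

Summing each row reduces the O(n!) space of orderings to a single O(n log n) sort, which is the kind of computational shortcut a pairwise decoder buys.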
1176

Evolving Complex Neuro-Controllers with Interactively Constrained Neuro-Evolution

Rempis, Christian Wilhelm 17 October 2012 (has links)
In the context of evolutionary robotics and neurorobotics, artificial neural networks, used as controllers for animats, are examined to identify principles of neuro-control, network organization, the interaction between body and control, and other such properties. Before such an examination can take place, suitable neuro-controllers have to be identified. A promising and widely used technique to search for such networks is the use of evolutionary algorithms specifically adapted for neural networks. These allow the search for neuro-controllers with various network topologies directly on physically grounded (simulated) animats. This neuro-evolution approach works well for small neuro-controllers and has led to interesting results. However, due to the exponentially increasing search space with respect to the number of involved neurons, this approach does not scale well with larger networks. This scaling problem makes it difficult to find non-trivial, larger networks that show interesting properties. In the context of this thesis, networks of this class are called mid-scale networks, having between 50 and 500 neurons. Searching for networks of this class involves very large search spaces, including all possible synaptic connections between the neurons, the bias terms of the neurons and (optionally) parameters of the neuron model, such as the transfer function, activation function or parameters of learning rules. In this domain, most evolutionary algorithms are not able to find suitable, non-trivial neuro-controllers in feasible time. To cope with this problem and to shift the frontier for evolvable network topologies a bit further, a novel evolutionary method has been developed in this thesis: the Interactively Constrained Neuro-Evolution method (ICONE). A way to approach the problem of increasing search spaces is the introduction of measures that reduce and restrict the search space back to a feasible domain.
With ICONE, this restriction is realized with a unified, extensible and highly adaptable concept: Instead of evolving networks freely, networks are evolved within specifically designed constraint masks that define mandatory properties of the evolving networks. These constraint masks are defined primarily using so-called functional constraints that actively modify a neural network to enforce adherence to all required limitations and assumptions. Consequently, independently of the mutations taking place during evolution, the constraint masks repair and readjust the networks so that constraint violations are not able to evolve. Such functional constraints can be very specific and can enforce various network properties, such as symmetries, structure reuse, connectivity patterns, connectivity density heuristics, synaptic pathways, local processing assemblies, and much more. Constraint masks therefore describe a narrow, user-defined subset of the parameter space -- based on domain knowledge and user experience -- that focuses the search on a smaller search space, leading to a higher success rate for the evolution. Due to the involved domain knowledge, such evolutions are strongly biased towards specific classes of networks, because only networks within the defined search space can evolve. This can also be used deliberately to lead the evolution towards specific solution approaches, allowing the experimenter not only to search for any upcoming solution, but also to confirm assumptions about possible solutions. This makes it easier to investigate specific neuro-control principles, because the experimenter can systematically search for networks implementing the desired principles, simply by using suitable constraints to enforce them. Constraint masks in ICONE are built up by functional constraints working on so-called neuro-modules.
These modules are used to structure the networks, to define the scope for constraints and to simplify the reuse of (evolved) neural structures. The concept of functional, constrained neuro-modules allows a simple and flexible way to construct constraint masks and to inherit constraints when neuro-modules are reused or shared. A final cornerstone of the ICONE method is the interactive control of the evolution process, which allows the adaptation of the evolution parameters and the constraint masks to guide evolution towards promising domains and to counteract undesired developments. Due to the constraint masks, this interactive guidance is more effective than the adaptation of the evolution parameters alone, so that the identification of promising search space regions becomes easier. This thesis describes the ICONE method in detail and shows several applications of the method and the involved features. The examples demonstrate that the method can be used effectively for problems in the domain of mid-scale networks. As an effect of the constraint masks and the resulting reduced complexity of the networks, the results are -- despite their size -- often easy to comprehend, well analyzable and easy to reuse. Another benefit of constraint masks is the ability to deliberately search for very specific network configurations, which allows the effective and systematic exploration of distinct variations for an evolution experiment, simply by changing the constraint masks over the course of multiple evolution runs. The ICONE method therefore is a promising novel evolution method to tackle the problem of evolving mid-scale networks, pushing the frontier of evolvable networks a bit further. This allows for novel evolution experiments in the domain of neurorobotics and evolutionary robotics and may possibly lead to new insights into neuro-dynamical principles of animat control.
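The repair-after-mutation idea behind functional constraints can be sketched in a few lines. This is an illustrative toy, not ICONE's implementation: a mirror-symmetry constraint on the weight matrix of a bilaterally symmetric animat, applied after every mutation so the violation cannot persist:

```python
def enforce_mirror_symmetry(weights):
    """Functional constraint as a repair operator: force w[i][j] == w[j][i]
    by averaging each mutated pair of weights."""
    n = len(weights)
    for i in range(n):
        for j in range(i + 1, n):
            avg = (weights[i][j] + weights[j][i]) / 2
            weights[i][j] = weights[j][i] = avg
    return weights

def mutate_then_repair(weights, deltas, constraints):
    """Apply free mutations, then let every constraint repair the network."""
    for (i, j), d in deltas.items():
        weights[i][j] += d
    for constraint in constraints:
        weights = constraint(weights)
    return weights

w = [[0.0, 0.5], [0.5, 0.0]]
w = mutate_then_repair(w, {(0, 1): 0.4}, [enforce_mirror_symmetry])
print(w)  # both off-diagonal weights are equal again (around 0.7)
```

Because the constraint runs after every mutation, the search effectively takes place only in the symmetric subspace, which is the search-space reduction the abstract describes.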
1177

Using Laser-Induced Breakdown Spectroscopy (LIBS) for Material Analysis

Pořízka, Pavel January 2014 (has links)
This doctoral thesis focuses on the development of an algorithm for processing data measured by a laser-induced breakdown spectroscopy (LIBS) instrument. A LIBS instrument equipped with this algorithm should then be able to classify samples and perform quantitative analysis of an analyte in situ and in real time. The entire experimental part of this work was carried out at the Federal Institute for Materials Research and Testing (BAM) in Berlin, Germany, where an elementary LIBS system was assembled. In parallel with the experimental work, a literature review was compiled to give a comprehensive overview of the chemometric methods used in the analysis of LIBS measurements. The use of chemometric methods for analysing LIBS data is generally recommended above all when samples with complex matrices are analysed. The development of the algorithm focused on the quantitative analysis and classification of igneous rocks based on LIBS measurements. The set of samples measured by LIBS consisted of certified reference materials and rock samples collected directly at copper deposits in Iran. The Iranian samples were sorted on site by an experienced geologist, and the copper content of the samples was measured at Clausthal University of Technology, Germany. The resulting calibration curves were strongly non-linear, even though they were also built from measurements of the reference samples. Each calibration curve could be decomposed into several partial curves, such that the dependence of the copper-line intensity on the copper content followed a different trend for each rock type. This splitting of the calibration curve is usually attributed to the so-called matrix effect, which strongly affects LIBS measurements. In other words, when the analyte content is determined in samples with different matrices, a calibration curve built from a single variable (the intensity of a selected analyte spectral line) is inaccurate. Moreover, normalising such calibration curves to the intensity of a matrix-element spectral line did not lead to any significant improvement in linearity.
It is generally impossible to select a spectral line of a single matrix element when samples with complex matrix compositions are analysed. Chemometric methods, namely principal component regression (PCR) and partial least squares regression (PLSR), were therefore used for multivariate quantitative analysis, i.e. using several variables/spectral lines of the analyte and of the matrix elements. It should be borne in mind that PCR and PLSR can compensate for the matrix effect only to a certain extent. Furthermore, the samples were successfully classified by principal component analysis (PCA) and Kohonen maps based on their matrix-element composition (referred to in the English literature as the "spectral fingerprint"). Based on theory and experimental measurements, an algorithm for the reliable classification and quantification of unknown samples was proposed. This study should contribute to the processing of data measured in situ by a stand-off LIBS instrument, which is currently being developed at the Brno University of Technology. That instrument will be indispensable for the quantification and classification of samples only if it is used together with chemometric methods and data libraries. For this purpose, part of the data libraries has already been measured and tested with a view to applying LIBS in the mining industry.
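The classification step described above, projecting many spectral channels onto a few principal components so that samples sharing a matrix cluster together, can be sketched on synthetic data. The spectra, rock types, and noise level below are illustrative assumptions, not the measured LIBS data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for LIBS spectra: each rock type ("matrix") has a
# characteristic mean spectrum; rows are shots, columns are wavelength channels.
def make_spectra(mean, n=20):
    return mean + 0.05 * rng.standard_normal((n, mean.size))

type_a = make_spectra(np.linspace(1.0, 2.0, 50))
type_b = make_spectra(np.linspace(2.0, 1.0, 50))
X = np.vstack([type_a, type_b])

# PCA via SVD on mean-centred data: the leading components capture the
# "spectral fingerprint" that separates the two matrices.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T          # project onto the first two components

# Shots of the same matrix cluster: the two class means sit on opposite
# sides of the origin along the first principal component.
pc1_a = scores[:20, 0].mean()
pc1_b = scores[20:, 0].mean()
print(pc1_a * pc1_b < 0)
```

A full pipeline would follow this projection with a classifier (or Kohonen map) and, per class, a PCR/PLSR model for the analyte content.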
1178

Deep Learning in the Web Browser for Wind Speed Forecasting using TensorFlow.js / Djupinlärning i Webbläsaren för Vindhastighetsprognoser med TensorFlow.js

Moazez Gharebagh, Sara January 2023 (has links)
Deep learning is a powerful and rapidly advancing technology that has shown promising results in the field of weather forecasting. Implementing and using deep learning models can, however, be challenging due to their complexity. One approach to potentially overcoming these challenges is to run deep learning models directly in the web browser. This approach offers several advantages, including accessibility, data privacy, and access to device sensors, and thus opens new possibilities for research and development in areas such as weather forecasting. In this thesis, two deep learning models that run in the web browser are implemented using JavaScript and TensorFlow.js to predict wind speed in the near future. Specifically, the application of Long Short-Term Memory and Gated Recurrent Units models is investigated. The results demonstrate that both models achieve similar performance and generate predictions that closely align with the expected patterns when the variations in the data are less significant. The best performing Long Short-Term Memory model achieved a mean squared error of 0.432, a root mean squared error of 0.657 and a mean absolute error of 0.459. The best performing Gated Recurrent Units model achieved a mean squared error of 0.435, a root mean squared error of 0.660 and a mean absolute error of 0.461.
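The three error measures used to compare the models can be computed as follows; the forecast and observation values below are illustrative, not the thesis data:

```python
import math

# Hypothetical one-step-ahead wind-speed forecasts (m/s) and observations.
observed  = [3.1, 4.0, 5.2, 4.8, 3.9, 4.4]
predicted = [3.4, 3.8, 5.0, 5.1, 4.2, 4.1]

errors = [p - o for p, o in zip(predicted, observed)]

mse  = sum(e * e for e in errors) / len(errors)   # mean squared error
rmse = math.sqrt(mse)                             # root mean squared error
mae  = sum(abs(e) for e in errors) / len(errors)  # mean absolute error

print(round(mse, 4), round(rmse, 4), round(mae, 4))
```

RMSE is in the same units as the wind speed itself, which is why it is often reported alongside MSE.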
1179

Detecting and Measuring Corruption and Inefficiency in Infrastructure Projects Using Machine Learning and Data Analytics

Seyedali Ghahari (11182092) 19 February 2022 (has links)
Corruption is a social evil that resonates far and deep in societies, eroding trust in governance, weakening the rule of law, impairing economic development, and exacerbating poverty, social tension, and inequality. It is a multidimensional and complex societal malady that occurs in various forms and contexts. As such, any effort to combat corruption must be accompanied by a thorough examination of the attributes that might play a key role in exacerbating or mitigating corrupt environments. This dissertation identifies a number of attributes that influence corruption, using machine learning techniques, neural network analysis, and time series causal analysis on aggregated data from 113 countries from 2007 to 2017. The results suggest that improvements in technological readiness, the human development index, and the e-governance index have the most profound impacts on corruption reduction. The dissertation discusses corruption at each phase of infrastructure systems development, along with the engineering ethics that serve as a foundation for corruption mitigation. It then applies novel analytical efficiency measurement methods to measure infrastructure inefficiencies and to rank infrastructure administrative jurisdictions at the state level. An efficiency frontier is developed using optimization, and the highest-performing jurisdictions are identified. The dissertation's framework could serve as a starting point for governmental and non-governmental oversight agencies to study the forms and contexts of corruption and inefficiency, and to propose effective methods for reducing their incidence. Moreover, the framework can help oversight agencies promote the overall accountability of infrastructure agencies by establishing a clearer connection between infrastructure investment and performance, and by carrying out comparative assessments of infrastructure performance across the jurisdictions under their oversight or supervision.
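The efficiency-frontier idea can be pictured in its simplest single-input, single-output form: the jurisdiction with the best output-to-input ratio defines the frontier, and all others are scored relative to it. The jurisdictions and figures below are hypothetical, and the dissertation's method is a full optimization-based frontier of which this is only a sketch:

```python
# Hypothetical jurisdictions: one aggregate input (spending, $M) and one
# aggregate output (a composite infrastructure-performance score).
jurisdictions = {
    "A": (120.0, 84.0),
    "B": (95.0, 76.0),
    "C": (150.0, 90.0),
    "D": (80.0, 72.0),
}

# Output per unit input, normalised so the best performer defines the
# frontier (efficiency 1.0), in the spirit of frontier-based ranking.
ratios = {j: out / inp for j, (inp, out) in jurisdictions.items()}
best = max(ratios.values())
efficiency = {j: r / best for j, r in ratios.items()}

ranking = sorted(efficiency, key=efficiency.get, reverse=True)
print(ranking)
```

With multiple inputs and outputs, the same idea generalises to solving one linear program per jurisdiction, which is where the optimization mentioned above comes in.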
1180

Renormalization group theory, scaling laws and deep learning

Haggi Mani, Parviz 08 1900 (has links)
The question of the possibility of intelligent machines is fundamentally intertwined with the machines' ability to reason. Or not. The developments of recent years point in a completely different direction: what we need are simple, generic but scalable algorithms that can keep learning on their own. This thesis is an attempt to find theoretical explanations for the findings of recent years, in which empirical evidence has been presented for phase transitions in neural networks, power-law behavior of various quantities, and even algorithmic universality, all of which are beautifully explained in the context of statistical physics, quantum field theory and statistical field theory, but not necessarily in the context of deep learning, where no complete theoretical framework is available. Inspired by these developments, and, as it turns out, with the overly ambitious goal of providing a solid theoretical explanation of the empirically observed power laws in neural networks, we set out to substantiate the claims that renormalization group theory may be the sought-after theory of deep learning that could explain the above, as well as what we call algorithmic universality.
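The empirically observed power laws mentioned above are typically identified by fitting a straight line in log-log space, since L(N) = c·N^α implies log L = log c + α·log N. A minimal sketch on synthetic data, where the exponent is an assumption chosen for illustration:

```python
import math

# Synthetic "scaling law" data: loss L(N) = c * N^alpha with alpha = -0.5,
# mimicking the power-law behavior reported empirically for neural networks.
alpha_true, c = -0.5, 10.0
sizes = [10, 100, 1000, 10000]
losses = [c * n ** alpha_true for n in sizes]

# A power law is a straight line in log-log space:
#   log L = log c + alpha * log N,
# so the exponent is recovered by ordinary least squares on the logs.
xs = [math.log(n) for n in sizes]
ys = [math.log(l) for l in losses]
m = len(xs)
xbar, ybar = sum(xs) / m, sum(ys) / m
alpha_hat = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
print(round(alpha_hat, 6))
```

On noiseless data the fit recovers the exponent exactly; on real training curves the same regression gives an estimate of the scaling exponent.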
