21

Are Particle-Based Methods the Future of Sampling in Joint Energy Models? A Deep Dive into SVGD and SGLD

Shah, Vedant Rajiv 19 August 2024 (has links)
This thesis investigates the integration of Stein Variational Gradient Descent (SVGD) with Joint Energy Models (JEMs), comparing its performance to Stochastic Gradient Langevin Dynamics (SGLD). We incorporated a generative loss term with an entropy component to enhance diversity and a smoothing factor to mitigate the numerical instability issues commonly associated with the energy function in energy-based models. Experiments on the CIFAR-10 dataset demonstrate that SGLD, particularly with Sharpness-Aware Minimization (SAM), outperforms SVGD in classification accuracy. However, SVGD without SAM, despite its lower classification accuracy, exhibits lower calibration error, underscoring its potential for developing the well-calibrated classifiers required in safety-critical applications. Our results emphasize the importance of adaptive tuning of the SVGD smoothing factor ($\alpha$) to balance generative and classification objectives. This thesis highlights the trade-offs between computational cost and performance, with SVGD demanding significant resources. Our findings stress the need for adaptive scaling and robust optimization techniques to enhance the stability and efficacy of JEMs. This thesis lays the groundwork for exploring more efficient and robust sampling techniques within the JEM framework, offering insights into the integration of SVGD with JEMs. / Master of Science / This thesis explores advanced techniques for improving machine learning models, with a focus on developing well-calibrated and robust classifiers. We concentrated on two methods, Stein Variational Gradient Descent (SVGD) and Stochastic Gradient Langevin Dynamics (SGLD), to evaluate their effectiveness in enhancing classification accuracy and reliability. Our research introduced a new mathematical approach to improve the stability and performance of Joint Energy Models (JEMs).
By leveraging the generative capabilities of SVGD, the model is guided to learn better data representations, which are crucial for robust classification. Using the CIFAR-10 image dataset, we confirmed prior research indicating that SGLD, particularly when combined with an optimization method called Sharpness-Aware Minimization (SAM), delivered the best results in terms of accuracy and stability. Notably, SVGD without SAM, despite yielding slightly lower classification accuracy, exhibited significantly lower calibration error, making it particularly valuable for safety-critical applications. However, SVGD required careful tuning of hyperparameters and substantial computational resources. This study lays the groundwork for future efforts to enhance the efficiency and reliability of these advanced sampling techniques, with the overarching goal of improving classifier calibration and robustness with JEMs.
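The particle update at the heart of SVGD can be illustrated with a minimal sketch (this is a generic textbook-style sketch, not the thesis code; the RBF kernel with median-bandwidth heuristic and the step size are illustrative choices): each particle is moved by a kernel-weighted average of the particles' score-function gradients (attraction toward high-density regions) plus a kernel-gradient term (repulsion that preserves diversity among particles):

```python
import numpy as np

def svgd_step(X, score, step=0.5):
    """One SVGD update for particles X of shape (n, d).

    score(X) must return grad log p(x) for each particle row.
    Uses an RBF kernel with the median bandwidth heuristic.
    """
    n = X.shape[0]
    diff = X[:, None, :] - X[None, :, :]          # (n, n, d) pairwise differences
    sq = np.sum(diff ** 2, axis=-1)               # pairwise squared distances
    h = np.median(sq) / np.log(n + 1) + 1e-8      # median bandwidth heuristic
    K = np.exp(-sq / h)                           # RBF kernel matrix
    # attraction: kernel-weighted scores; repulsion: summed kernel gradients
    repulsion = (-2.0 / h) * (diff * K[..., None]).sum(axis=0)
    phi = (K @ score(X) + repulsion) / n
    return X + step * phi
```

Against a standard-normal target (score(x) = -x), repeated updates transport the particles toward the mode while the repulsion term keeps them spread out; this deterministic transport view is what SVGD-based JEM sampling substitutes for SGLD's noisy gradient steps.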
22

A secure communication framework for wireless sensor networks

Uluagac, Arif Selcuk 14 June 2010 (has links)
Today, wireless sensor networks (WSNs) are no longer a nascent technology and future networks, especially Cyber-Physical Systems (CPS) will integrate more sensor-based systems into a variety of application scenarios. Typical application areas include medical, environmental, military, and commercial enterprises. Providing security to this diverse set of sensor-based applications is necessary for the healthy operations of the overall system because untrusted entities may target the proper functioning of applications and disturb the critical decision-making processes by injecting false information into the network. One way to address this issue is to employ en-route-filtering-based solutions utilizing keys generated by either static or dynamic key management schemes in the WSN literature. However, current schemes are complicated for resource-constrained sensors as they utilize many keys and more importantly as they transmit many keying messages in the network, which increases the energy consumption of WSNs that are already severely limited in the technical capabilities and resources (i.e., power, computational capacities, and memory) available to them. Nonetheless, further improvements without too much overhead are still possible by sharing a dynamically created cryptic credential. Building upon this idea, the purpose of this thesis is to introduce an efficient and secure communication framework for WSNs. Specifically, three protocols are suggested as contributions using virtual energies and local times onboard the sensors as dynamic cryptic credentials: (1) Virtual Energy-Based Encryption and Keying (VEBEK); (2) TIme-Based DynamiC Keying and En-Route Filtering (TICK); (3) Secure Source-Based Loose Time Synchronization (SOBAS) for WSNs.
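The shared dynamic-credential idea can be conveyed with a hedged sketch. This is an illustration of the concept only, not the actual VEBEK/TICK protocols: the `VirtualEnergyKeyer` class, the SHA-256 derivation, and the fixed per-packet energy cost are assumptions made for the sketch. The point is that two nodes tracking the same virtual-energy counter derive identical per-packet keys without transmitting any keying messages:

```python
import hashlib

class VirtualEnergyKeyer:
    """Derive per-packet keys from a synchronized virtual-energy counter.

    A sender and a forwarder initialised identically stay in key
    agreement for free: every transmission 'spends' virtual energy,
    so the key evolves packet by packet with zero keying traffic.
    """

    def __init__(self, initial_energy, tx_cost=2, secret=b"network-secret"):
        self.energy = initial_energy
        self.tx_cost = tx_cost        # assumed fixed virtual cost per packet
        self.secret = secret

    def next_key(self):
        key = hashlib.sha256(self.secret + str(self.energy).encode()).digest()
        self.energy -= self.tx_cost   # virtual energy spent on this packet
        return key
```

Two keyers seeded with the same initial energy produce identical key streams, while each successive packet key differs; a receiver that falls out of sync would have to search a small window of plausible energy values, which is the kind of bookkeeping the real protocols must handle.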
23

Structure and spectroscopy of bio- and nano-materials from first-principles simulations

Hua, Weijie January 2011 (has links)
This thesis is devoted to first-principles simulations of bio- and nano-materials, focusing on various soft x-ray spectra, ground-state energies, and structures of isolated large molecules, bulk materials, and small molecules in ambient solutions. K-edge near-edge x-ray absorption fine structure (NEXAFS) spectra, x-ray emission spectra, and resonant inelastic x-ray scattering spectra of DNA duplexes have been studied by means of theoretical calculations at the density functional theory level. By comparing a sequence of DNA duplexes of increasing length, we have found that the stacking of base pairs has very small influence on all kinds of spectra, and suggested that the spectra of a general DNA can be well reproduced by linear combinations of the spectra of its constituent base pairs weighted by their ratios. The NEXAFS spectra study has been extended to other realistic systems. We have used cluster models of increasing size to represent the infinite crystals of nucleobases and nucleosides, an infinite graphene sheet, as well as a short peptide in water solution. The equivalent core hole approximation has been extensively adopted, which provides efficient access to these large systems. We have investigated the influence of external perturbations on the nitrogen NEXAFS spectra of guanine, cytosine, and guanosine crystals, and clarified early discrepancies between experimental and calculated spectra. The effects of size, stacking, edges, and defects on the absorption spectra of graphene have been systematically analyzed, and the debate on the interpretation of a new spectral feature has been resolved. We have illustrated the influence of the water solvent on a blocked alanine molecule by using snapshots generated from molecular dynamics. A multi-scale computational study of four short peptides in a self-assembled cage is presented.
It is shown that the conformation of a peptide within the cage does not correspond to its lowest-energy conformation in vacuum, due to the Zn-O bond formed between the peptide and the cage and the confinement effect of the cage. Special emphasis has been paid to a linear-scaling method, the generalized energy-based fragmentation (GEBF) approach. We have derived the GEBF energy equation at the Hartree-Fock level with the Born approximation of the electrostatic potential. Numerical calculations for a model system demonstrate the accuracy of the GEBF equation and provide a starting point for further refinements. We have also presented an automatic and efficient implementation of the GEBF approach which is applicable to general large molecules. / QC 20110404
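The linear-scaling bookkeeping behind a GEBF-type fragmentation can be conveyed with a toy model (this illustrates only the signed-sum cancellation, not the quantum-chemical method itself; the nearest-neighbour chain, site energies `e`, and couplings `J` are invented for the sketch): the total energy of a chain is recovered exactly as a sum of overlapping dimer-fragment energies, with the shared interior monomers subtracted so nothing is double-counted:

```python
def subsystem_energy(e, J, members):
    """Exact energy of a subsystem: its site terms plus the
    nearest-neighbour interactions internal to it (toy model)."""
    E = sum(e[i] for i in members)
    E += sum(J.get((i, i + 1), 0.0) for i in members if i + 1 in members)
    return E

def gebf_estimate(e, J, n):
    """GEBF-style estimate: overlapping dimer fragments enter with
    coefficient +1, shared interior monomers with coefficient -1."""
    total = sum(subsystem_energy(e, J, {i, i + 1}) for i in range(n - 1))
    total -= sum(e[i] for i in range(1, n - 1))   # undo double counting
    return total
```

For a chain with only nearest-neighbour interactions the estimate reproduces the exact total energy; the real GEBF approach plays the same cancellation game with capped quantum-chemical fragments and an embedding electrostatic potential.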
24

Robot semantic place recognition based on deep belief networks and a direct use of tiny images

Hasasneh, Ahmad 23 November 2012 (has links) (PDF)
Usually, human beings are able to quickly distinguish between different places, solely from their visual appearance. This is due to the fact that they can organize their space as composed of discrete units. These units, called ''semantic places'', are characterized by their spatial extent and their functional unity. Such a semantic category can thus be used as contextual information which fosters object detection and recognition. Recent works in semantic place recognition seek to endow the robot with similar capabilities. Contrary to classical localization and mapping works, this problem is usually addressed as a supervised learning problem. The question of semantic place recognition in robotics - the ability to recognize the semantic category of the place to which a scene belongs - is therefore a major requirement for the future of autonomous robotics. An autonomous service robot must indeed be able to recognize the environment in which it lives and to easily learn the organization of this environment in order to operate and interact successfully. To achieve that goal, different methods have already been proposed, some based on the identification of objects as a prerequisite to the recognition of the scenes, and some based on a direct description of the scene characteristics. If we make the hypothesis that objects are more easily recognized when the scene in which they appear is identified, the second approach seems more suitable. It is however strongly dependent on the nature of the image descriptors used, usually derived empirically from general considerations on image coding. Compared with these many proposals, another approach to image coding, based on a more theoretical point of view, has emerged in the last few years.
Energy-based models of feature extraction, which minimize the energy of some function according to the quality of the reconstruction of the image, have led to the Restricted Boltzmann Machines (RBMs), able to code an image as the superposition of a limited number of features taken from a larger alphabet. It has also been shown that this process can be repeated in a deep architecture, leading to a sparse and efficient representation of the initial data in the feature space. A complex classification problem in the input space is thus transformed into an easier one in the feature space. This approach has been successfully applied to the identification of tiny images from the 80 million tiny images database of MIT. In the present work, we demonstrate that semantic place recognition can be achieved on the basis of tiny images instead of conventional Bag-of-Words (BoW) methods, using Deep Belief Networks (DBNs) for image coding. We show that, after appropriate coding, a softmax regression in the projection space is sufficient to achieve promising classification results. To our knowledge, this approach has not yet been investigated for scene recognition in autonomous robotics. We compare our methods with state-of-the-art algorithms using a standard database of robot localization. We study the influence of system parameters and compare different conditions on the same dataset. These experiments show that our proposed model, while being very simple, leads to state-of-the-art results on a semantic place recognition task.
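The RBM building block described above can be sketched in a few lines (a generic CD-1 sketch under the standard RBM energy E(v,h) = -v·W·h - b·v - c·h, not the thesis implementation; the tiny binary dataset and layer sizes in the usage below are invented): contrastive divergence compares hidden-unit statistics on the data against those on a one-step Gibbs reconstruction, nudging the weights so reconstructions improve:

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W, b, c, lr=0.1):
    """One CD-1 update on a batch of binary visibles v0 of shape (m, n_vis).

    Positive phase uses the data; negative phase uses a one-step
    mean-field Gibbs reconstruction. Updates W, b, c in place and
    returns the mean squared reconstruction error.
    """
    ph0 = sigmoid(v0 @ W + c)                        # p(h=1 | v0)
    h0 = (rng.random(ph0.shape) < ph0).astype(float) # sampled hiddens
    pv1 = sigmoid(h0 @ W.T + b)                      # reconstruction
    ph1 = sigmoid(pv1 @ W + c)                       # p(h=1 | reconstruction)
    W += lr * (v0.T @ ph0 - pv1.T @ ph1) / len(v0)
    b += lr * (v0 - pv1).mean(axis=0)
    c += lr * (ph0 - ph1).mean(axis=0)
    return ((v0 - pv1) ** 2).mean()
```

Stacking such layers greedily gives a Deep Belief Network of the kind used here; only the final softmax regression on the top-level code is trained with labels.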
25

Spectrum Sensing in Cognitive Radio Networks

Bokharaiee Najafee, Simin 07 1900 (has links)
Given the ever-growing demand for radio spectrum, cognitive radio has recently emerged as an attractive wireless communication technology. This dissertation is concerned with developing spectrum sensing algorithms for cognitive radio networks in which one or more cognitive radios (CRs) assist in detecting licensed primary bands occupied by one or more primary users. First, given that orthogonal frequency-division multiplexing (OFDM) is an important wideband transmission technique, detection of OFDM signals in low-signal-to-noise-ratio scenarios is studied. It is shown that the cyclic prefix correlation coefficient (CPCC)-based spectrum sensing algorithm, previously introduced as a simple and computationally efficient spectrum-sensing method for OFDM signals, is a special case of the constrained generalized likelihood ratio test (GLRT) in the absence of multipath. The performance of the CPCC-based algorithm degrades in a multipath scenario. However, by exploiting the inherent structure of OFDM signals and the multipath correlation in the GLRT framework, a simple and low-complexity algorithm called the multipath-based constrained-GLRT (MP-based C-GLRT) algorithm is obtained. Further performance improvement is achieved by combining the CPCC- and MP-based C-GLRT algorithms. A simple GLRT-based detection algorithm is also developed for unsynchronized OFDM signals. In the next part of the dissertation, a cognitive radio network model with multiple CRs is considered in order to investigate the benefit of collaboration and diversity in improving the overall sensing performance. Specifically, the problem of decision fusion for cooperative spectrum sensing is studied when fading channels are present between the CRs and the fusion center (FC). Noncoherent transmission schemes with on-off keying (OOK) and binary frequency-shift keying (BFSK) are employed to transmit the binary decisions to the FC.
The aim is to maximize the achievable secondary throughput of the CR network. Finally, in order to reduce the required transmission bandwidth in the reporting phase of the CRs in a cooperative sensing scheme, the last part of the dissertation examines nonorthogonal transmission of local decisions by means of on-off keying. A novel decoding-based fusion rule that combines the hard decisions linearly is proposed and analyzed.
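The CPCC statistic itself is simple enough to sketch (a hedged illustration of the general idea, not the dissertation's detector; block lengths and thresholds are illustrative): the cyclic prefix repeats the last samples of each OFDM symbol, so received samples one useful-symbol-length apart are correlated when a primary OFDM signal is present, and uncorrelated under pure noise:

```python
import numpy as np

def cpcc(r, n_sub, n_cp):
    """Cyclic-prefix correlation coefficient of received samples r.

    Correlates each symbol's first n_cp samples (the prefix) with the
    samples n_sub positions later (its copy at the symbol tail) and
    normalises by the average power of the two segments.
    """
    sym = n_sub + n_cp
    num = 0.0 + 0.0j
    den = 0.0
    for start in range(0, len(r) - sym + 1, sym):
        a = r[start:start + n_cp]             # cyclic prefix
        d = r[start + n_sub:start + sym]      # its copy at the symbol tail
        num += np.vdot(d, a)
        den += 0.5 * (np.vdot(a, a).real + np.vdot(d, d).real)
    return abs(num) / den
```

On a clean, synchronized OFDM burst the statistic is close to 1 and on white noise close to 0, which is the gap a threshold detector exploits; the dissertation shows this statistic is a special case of the constrained GLRT when no multipath is present.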
26

Towards a novel medical diagnosis system for clinical decision support system applications

Kanwal, Summrina January 2016 (has links)
Clinical diagnosis of chronic disease is a vital and challenging research problem which requires intensive clinical practice guidelines in order to ensure consistent and efficient patient care. Conventional medical diagnosis systems suffer from certain limitations, such as complex diagnosis processes, lack of expertise, lack of well-described procedures for conducting diagnoses, low computing skills, and so on. Automated clinical decision support systems (CDSSs) can help physicians and radiologists to overcome these challenges by combining the competency of radiologists and physicians with the capabilities of computers. CDSSs depend on many techniques from the fields of image acquisition, image processing, pattern recognition, machine learning, and optimization for medical data analysis to produce efficient diagnoses. In this dissertation, we discuss the current challenges in designing an efficient CDSS as well as a number of the latest techniques (while identifying best practices for each stage of the framework) to meet these challenges by finding informative patterns in the medical dataset, analysing them, and building a descriptive model of the object of interest, thus aiding in medical diagnosis. To meet these challenges, we propose an extension of the conventional clinical decision support system framework that incorporates artificial immune network (AIN) based hyper-parameter optimization as an integral part of it. We applied the conventional as well as the optimized CDSS to four case studies (most of them comprising medical images) for efficient medical diagnosis and compared the results. The first key contribution is the novel application of a local energy-based shape histogram (LESH) as the feature set for the recognition of abnormalities in mammograms. We investigated the implications of this technique for the mammogram datasets of the Mammographic Image Analysis Society and INbreast.
In the evaluation, regions of interest were extracted from the mammograms, their LESH features were calculated, and these were fed to support vector machine (SVM) and echo state network (ESN) classifiers. In addition, the impact of selecting a subset of LESH features based on classification performance was also observed and benchmarked against a state-of-the-art wavelet-based feature extraction method. The second key contribution is the application of the LESH technique to detect lung cancer. The JSRT Digital Image Database of chest radiographs was selected for experimentation. Prior to LESH feature extraction, we enhanced the radiograph images using a contrast limited adaptive histogram equalization (CLAHE) approach. Selected state-of-the-art cognitive machine learning classifiers, namely the extreme learning machine (ELM), SVM, and ESN, were then applied to the LESH-extracted features to enable the efficient diagnosis of the correct medical state (the existence of benign or malignant cancer) in the x-ray images. Comparative simulation results, evaluated using the classification accuracy performance measure, were further benchmarked against state-of-the-art wavelet-based features, and confirmed the distinct capability of our proposed framework for enhancing the diagnosis outcome. As the third contribution, this thesis presents a novel technique for detecting breast cancer in volumetric medical images based on a three-dimensional (3D) LESH model. It is a hybrid approach that combines the 3D LESH feature extraction technique with machine learning classifiers to detect breast cancer from MRI images. The proposed system applies CLAHE to the MRI images before extracting the 3D LESH features. Furthermore, a selected subset of features is fed to a machine learning classifier, namely the SVM, ELM, or ESN, to detect abnormalities and to distinguish between different stages of abnormality. The results indicate the high performance of the proposed system.
When compared with the wavelet-based feature extraction technique, statistical analysis testifies to the significance of our proposed algorithm. The fourth contribution is a novel application of the artificial immune network (AIN) for optimizing machine learning classification algorithms as part of a CDSS. We employed our proposed technique in conjunction with selected machine learning classifiers, namely the ELM, SVM, and ESN, and validated it using the benchmark medical datasets of PIMA India diabetes and BUPA liver disorders, two-dimensional (2D) medical images, namely MIAS, INbreast, and JSRT chest radiographs, as well as the three-dimensional TCGA-BRCA breast MRI dataset. The results were investigated using the classification accuracy measure and the learning time. We also compared our methodology with the benchmarked multi-objective genetic algorithm (ES)-based optimization technique. The results confirm the potential of the AIN-optimised CDSS.
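The flavour of a LESH-style descriptor can be sketched with a much-simplified stand-in. True LESH computes local energy via phase congruency over oriented filter banks; this sketch bins plain gradient energy by orientation over a grid of sub-regions, so it only illustrates the histogram layout, not the actual feature:

```python
import numpy as np

def lesh_like(img, grid=4, nbins=8):
    """Orientation histogram of local gradient energy per sub-region,
    concatenated and L1-normalised into one fixed-length feature vector."""
    gy, gx = np.gradient(img.astype(float))
    energy = gx ** 2 + gy ** 2
    orient = np.arctan2(gy, gx) % np.pi                  # orientation mod pi
    bins = np.minimum((orient / np.pi * nbins).astype(int), nbins - 1)
    H, W = img.shape
    feats = []
    for r in range(grid):
        for c in range(grid):
            sl = (slice(r * H // grid, (r + 1) * H // grid),
                  slice(c * W // grid, (c + 1) * W // grid))
            hist = np.bincount(bins[sl].ravel(),
                               weights=energy[sl].ravel(),
                               minlength=nbins)
            feats.append(hist)
    f = np.concatenate(feats)
    return f / (f.sum() + 1e-12)
```

A region of interest cropped from a mammogram would be passed through such a descriptor and the resulting fixed-length vector fed to a classifier such as an SVM, ELM, or ESN, optionally after selecting the most discriminative histogram bins.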
27

matlab scripts: mmc periodic signal model

Fehr, Hendrik 21 July 2021 (has links)
Calculate solutions of a dynamic MMC (modular multilevel converter) energy-based model when the system variables, i.e. the voltages and currents, are given as periodic signals. The signals are represented by a finite number of distinct frequency components. As a result, the arm energies and cell voltages are given in this signal domain and can easily be translated to the time domain as well. Included files: cplx_series.m, cplx_series_demo.m, energy_series.m, denergy_series.m, check_symmetry.m, transf2arm.m, LICENSE.GNU_AGPLv3, sconv2.m
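The signal-domain arithmetic can be illustrated with a hedged Python sketch (the MATLAB file names listed above, e.g. `sconv2.m`, suggest a spectral convolution of this kind, but the sketch is an assumption about the idea, not a port of the actual scripts): multiplying two periodic signals represented by sparse sets of complex Fourier coefficients convolves their spectra, i.e. coefficients combine at the sums of the harmonic frequencies:

```python
def spectral_product(a, b):
    """Multiply two periodic signals given as {harmonic: complex coefficient}
    maps; the product's spectrum is the convolution of the two spectra."""
    out = {}
    for fa, ca in a.items():
        for fb, cb in b.items():
            out[fa + fb] = out.get(fa + fb, 0.0) + ca * cb
    return out
```

For example, cos(wt) has coefficients {+1: 0.5, -1: 0.5}; squaring it yields {0: 0.5, +2: 0.25, -2: 0.25}, i.e. cos^2 = 1/2 + (1/2)cos(2wt), the DC-plus-double-frequency structure typical of MMC arm-power and arm-energy ripple.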
28

Charakterisierung des Deformations- und Versagensverhaltens von Elastomeren unter querdehnungsbehinderter Zugbelastung

Euchler, Eric 19 April 2021 (has links)
Das Deformations- und Versagensverhalten von Elastomeren wird maßgeblich von der rezepturspezifischen Zusammensetzung und den wirkenden Belastungsbedingungen beeinflusst. Untersuchungen zum Einfluss spezifischer Belastungsparameter, wie Deformationsgeschwindigkeit oder Belastungsszenario (statisch oder zyklisch, Zug oder Druck sowie Schub), auf das mechanische Verhalten von Elastomeren sind grundlegend für die technische Auslegung von Elastomerprodukten. Zur Beschreibung des Versagensverhaltens von Elastomeren unter zyklischer oder dynamischer Belastung sind bruchmechanische Konzepte in Industrie und Forschung bereits etabliert. Dabei basiert die Analyse des bruchmechanischen Verhaltens von Elastomeren meist auf makroskopischen Eigenschaften, obwohl (sub-) mikrostrukturelle Änderungen und Schädigungen erheblichen Einfluss haben werden. Ein spezifisches Phänomen mikrostruktureller Schädigung ist die Kavitation unter querdehnungsbehinderter Zugbelastung. Infolge geometrischer Zwangsbedingungen und einer dadurch eingeschränkten Kontrahierbarkeit kann sich bei Zugbelastung ein mehrachsiger Spannungszustand einstellen. Infolgedessen können sich Defekte, sogenannte Kavitäten, bilden. Diese Kavitäten wachsen bei zunehmender äußerer Belastung und bauen dadurch die Zwangsbedingungen sowie die inneren Spannungen ab. Das Wissen über den Kavitationsprozess bei Elastomeren ist grundlegend für das Verständnis des makroskopischen Versagensverhaltens, doch bislang nur eingeschränkt vorhanden. In dieser Arbeit wurden Methoden für in situ Experimente, wie Dilatometrie und Mikrotomographie, entwickelt und optimiert. Dadurch konnte die Kavitation in Elastomeren umfassend untersucht und der Schädigungsverlauf mit aussagekräftigen Kennwerten beschrieben werden. Verschiedene Einflussfaktoren, wie Netzwerkeigenschaften und Füllstoffverstärkung, wurden ebenso beleuchtet wie der Einfluss von geometrischen Zwangsbedingungen.
Für die Experimente wurden spezielle Prüfkörper verwendet, die durch ein ausgeprägtes Geometrieverhältnis gekennzeichnet sind. Sogenannte Pancake-Prüfkörper sind dünne scheibenförmige Zylinderproben, die zwischen steifen Probenhaltern verklebt werden. Sowohl an ungefüllten als auch rußverstärkten Elastomeren auf Basis von Styrol-Butadien-Kautschuk (SBR) konnte die Charakterisierung des Beginns der Kavitation, insbesondere dank hochauflösender Messtechnik, gelingen. Neben einem spannungsbasierten konnte auch ein energiebasiertes Kavitationskriterium definiert werden. In jedem Fall zeigten die Ergebnisse, dass die traditionellen Vorhersagen den werkstoffimmanenten Widerstand gegen Kavitation deutlich überschätzen. Entgegen der oft getroffenen Annahme, dass Kavitation ausschließlich infolge eines Grenzflächenversagens zwischen weicher Elastomermatrix und steifen Füllstoffpartikeln auftritt, konnte gezeigt werden, dass dieses Schädigungsphänomen auch bei ungefüllten Elastomeren auftreten kann. Des Weiteren wurde das Phänomen Kavitation mittels Kleinwinkel-Röntgenstreuung auch an gekerbten Flach-Prüfkörpern untersucht. Dabei konnten Kavitäten entlang der Rissfronten nachgewiesen werden. Im Zusammenhang von Kavitation und bruchmechanischem Verhalten konnte auch eine Korrelation zwischen Beginn der Kavitation und makroskopischer Rissinitiierung gefunden werden. Dies deutet zum einen darauf hin, dass die Kavitation durch bruchmechanische Vorgänge, wie Kettenbruch, bestimmt wird und zum anderen, dass die Kavitation das makroskopische Versagensverhalten beeinflusst. Weiterhin konnte sowohl mittels numerischer Berechnungen als auch anhand experimenteller Daten gezeigt werden, dass infolge geometrischer oder struktureller Zwangsbedingungen, entgegen der allgemeinen Annahme, für Elastomere nicht ausschließlich von inkompressiblem Deformationsverhalten ausgegangen werden sollte. 
Die vorgestellten experimentellen Methoden zur Charakterisierung der Kavitation in Elastomeren sind geeignet, um in weiteren Studien die Bestimmung z.B. von dynamisch-bruchmechanischen Eigenschaften unter Berücksichtigung mikrostruktureller Schädigung für verschiedene Elastomere zu untersuchen.:1 EINLEITUNG UND ZIELSTELLUNG 2 STAND DER FORSCHUNG ZUM DEFORMATIONS- UND VERSAGENSVERHALTEN VON ELASTOMEREN 2.1 GRUNDLAGEN ZUR KAUTSCHUKMISCHUNGSHERSTELLUNG UND -VERARBEITUNG 2.2 TYPISCHE MERKMALE DES PHYSIKALISCH-MECHANISCHEN EIGENSCHAFTSPROFILS VON ELASTOMEREN 2.3 CHARAKTERISIERUNG DES MECHANISCHEN UND BRUCHMECHANISCHEN VERHALTENS VON ELASTOMEREN 2.4 ANALYSE DES VERSAGENSVERHALTENS VON ELASTOMEREN INFOLGE QUERDEHNUNGSBEHINDERTER ZUGBELASTUNG 2.5 ABLEITUNG VON UNTERSUCHUNGSANSÄTZEN ZUR CHARAKTERISIERUNG UND BESCHREIBUNG DER KAVITATION IN ELASTOMEREN 3 VORBETRACHTUNGEN ZUM DEFORMATIONSVERHALTEN VON ELASTOMEREN 3.1 ALLGEMEINE GRUNDLAGEN 3.2 DEFORMATIONSVERHALTEN VON ELASTOMEREN UNTER KOMPLEXEN BELASTUNGSZUSTÄNDEN 3.3 FE-ANALYSE ZUR CHARAKTERISIERUNG DES DEFORMATIONSVERHALTENS VON PANCAKE-PRÜFKÖRPERN 4 EXPERIMENTELLES 4.1 WERKSTOFFE 4.2 PRÜFKÖRPER 4.3 KONVENTIONELLE CHARAKTERISIERUNG DER ELASTOMERE 4.4 OBERFLÄCHENANALYSE 4.5 IN SITU DILATOMETRIE AN PANCAKE-PRÜFKÖRPERN 4.6 IN SITU RÖNTGEN-MIKROTOMOGRAPHIE AN PANCAKE-PRÜFKÖRPERN 4.7 IN SITU KLEINWINKEL-RÖNTGENSTREUUNG AN GEKERBTEN FLACH-PRÜFKÖRPERN 4.8 ERMITTLUNG DES WERKSTOFFIMMANENTEN MAKROSKOPISCHEN WIDERSTANDS GEGEN RISSINITIIERUNG AN FLACH-PRÜFKÖRPERN 5 ERGEBNISSE UND DISKUSSION 5.1 PHYSIKALISCH-MECHANISCHE EIGENSCHAFTEN 5.2 DEFORMATIONS- UND VERSAGENSVERLAUF VON UNGEFÜLLTEN ELASTOMEREN UNTER QUERDEHNUNGSBEHINDERTER ZUGBELASTUNG 5.2.1 Typische Verlaufsform der Kavitation und grundlegende Erkenntnisse 5.2.2 Beginn der Kavitation – Besonderheiten bei kleinen Dehnungen 5.2.3 Ursprung der Kavitation – Nukleierung und Bildung von Kavitäten 5.2.4 Fortschreitende Kavitation – Besonderheiten bei hohen Dehnungen 5.3 EINFLUSS 
TYPISCHER MISCHUNGSBESTANDTEILE AUF DEN DEFORMATIONS- UND VERSAGENSVERLAUF UNTER QUERDEHNUNGSBEHINDERTER ZUGBELASTUNG 5.3.1 Unterschiedliche Netzwerkeigenschaften durch Variation von Schwefel- und ZnO-Anteilen 5.3.2 Einfluss des Verstärkungseffekts durch Variation des Rußanteils 5.4 EINFLUSS GEOMETRISCHER ZWANGSBEDINGUNGEN AUF DEN DEFORMATIONS- UND VERSAGENSVERLAUF UNTER QUERDEHNUNGSBEHINDERTER ZUGBELASTUNG 5.4.1 Variation des Geometriefaktors von Pancake-Prüfkörpern ungefüllter Elastomere 5.4.2 Ermittlung einer effektiven Querkontraktionszahl als Maß der Kompressibilität des Deformationsverhaltens 5.4.3 Kavitation in der Rissprozesszone gekerbter Flach-Prüfkörper 5.5 BEWERTUNG DER KRITERIEN ZUR CHARAKTERISIERUNG DES BEGINNS DER KAVITATION 5.5.1 Diskussion zur Bestimmung eines spannungsbasierten sowie eines energiebasierten Kavitationskriteriums 5.5.2 Vergleich des energiebasierten Kavitationskriteriums mit dem werkstoffimmanenten Widerstands gegen Rissinitiierung 6 ZUSAMMENFASSUNG 6.1 ÜBERBLICK ZU GEWONNENEN ERKENNTNISSEN 6.2 AUSBLICK 6.3 PRAKTISCHE RELEVANZ LITERATURVERZEICHNIS BILDVERZEICHNIS TABELLENVERZEICHNIS ANHANG PUBLIKATIONSLISTE / The deformation and failure behavior of rubbers is significantly influenced by the chemical composition and the loading conditions. Investigations on how loading parameters, such as strain rate or type of loading, e.g. quasi-static vs. cyclic or tension vs. compression, affect the mechanical behavior of rubbers are fundamental for designing elastomeric products. Some fracture mechanical concepts describing the failure behavior of rubbers are widely accepted in industrial and academic research. Although structural changes on the network scale may affect the mechanical properties of rubbers, the most common failure analyses are based on macroscopic approaches that do not consider microscopic damage. A specific phenomenon of (micro-) structural failure is cavitation due to strain constraints.
Under geometrical constraints, the lateral contraction is suppressed. As a result, stress triaxiality causes inhomogeneous, non-affine deformation, and internal defects, so-called cavities, appear. The formation and growth of cavities release stress and reduce the degree of constraint. Although cavitation in rubber has been studied for several decades, knowledge about the fundamental mechanisms triggering the cavitation process is still very limited. This study aimed to characterize and describe the cavitation process comprehensively using meaningful material parameters. To this end, several influencing factors, such as network properties and filler reinforcement, have been considered, and advanced experimental methods, such as dilatometry and microtomography, have been used for in situ investigations. Thin disk-shaped rubber samples were used to prepare pancake specimens, which are characterized by a high aspect ratio. As a result, the degree of stress triaxiality is high and the dominating hydrostatic tensile stress causes the initiation of cavitation. For unfilled and carbon-black-reinforced styrene-butadiene rubbers, the onset of cavitation was determined precisely thanks to highly sensitive data acquisition. Both a stress-based and an energy-based cavitation criterion were found, indicating that traditional approaches predicting internal failure indeed overestimate the material resistance against cavitation. Of special interest was the often suspected cavitation in unfilled rubbers, because cavitation in rubbers is mainly attributed to interfacial failure between the soft rubber matrix and rigid filler particles. Furthermore, cavitation in the process zone of notched planar specimens could be monitored by in situ X-ray scattering experiments, which showed that cavities appear in a region along the crack front. To understand the correlation between cavitation and macroscopic crack initiation, additional tests, i.e. intrinsic strength analysis, were carried out.
The results have shown that macroscopic failure is affected by micro-fracture, e.g. the growth of cavities, controlled by the breakage of polymer chains. Both numerical and experimental data indicate that, under strain constraints, rubbers do not exhibit incompressible deformation behavior. The presented experimental methods for characterizing cavitation are suitable for future studies investigating further aspects of cavitation, e.g. the behavior under dynamic loading, in rubbers or other rubber-like materials.
29

Mixed Signal Detection, Estimation, and Modulation Classification

Qu, Yang 18 December 2019 (has links)
No description available.
30

Training deep convolutional architectures for vision

Desjardins, Guillaume 08 1900 (has links)
Artificial vision tasks such as object recognition remain unsolved to this day. Learning algorithms such as Artificial Neural Networks (ANNs) represent a promising approach for learning features useful for these tasks. This optimization process is nevertheless difficult. Deep networks based on Restricted Boltzmann Machines (RBMs) have recently been proposed to guide the extraction of intermediate representations through an unsupervised learning algorithm. This thesis presents, through three articles, contributions to this field of research. The first article deals with the convolutional RBM. The use of local receptive fields, together with grouping hidden units into feature maps sharing the same parameters, considerably reduces the number of parameters to learn and yields local, translation-equivariant feature detectors. This leads to models with better likelihood compared to RBMs trained on image patches. The second article is motivated by recent discoveries in neuroscience. It analyzes the impact of quadratic units on visual classification tasks, as well as that of a new activation function. We observe that ANNs based on quadratic units using the softsign function give better generalization performance. The last article offers a critical view of popular RBM training algorithms. We show that the Contrastive Divergence (CD) algorithm and Persistent CD are not robust: both require a relatively flat energy surface for their negative chain to mix. Fast-weight PCD circumvents this problem by slightly perturbing the model; however, this generates noisy samples. The use of tempered chains in the negative phase is a robust way to address these problems and leads to better generative models. / High-level vision tasks such as generic object recognition remain out of reach for modern Artificial Intelligence systems. A promising approach involves learning algorithms, such as the Artificial Neural Network (ANN), which automatically learn to extract useful features for the task at hand. For ANNs, this represents a difficult optimization problem, however. Deep Belief Networks have thus been proposed as a way to guide the discovery of intermediate representations, through a greedy unsupervised training of stacked Restricted Boltzmann Machines (RBM). The articles presented herein represent contributions to this field of research. The first article introduces the convolutional RBM. By mimicking local receptive fields and tying the parameters of hidden units within the same feature map, we considerably reduce the number of parameters to learn and enforce local, shift-equivariant feature detectors. This translates to better likelihood scores, compared to RBMs trained on small image patches. In the second article, recent discoveries in neuroscience motivate an investigation into the impact of higher-order units on visual classification, along with the evaluation of a novel activation function. We show that ANNs with quadratic units using the softsign activation function achieve better generalization error across several tasks. Finally, the third article gives a critical look at recently proposed RBM training algorithms. We show that Contrastive Divergence (CD) and Persistent CD are brittle in that they require the energy landscape to be smooth in order for their negative chain to mix well. PCD with fast-weights addresses the issue by performing small model perturbations, but may result in spurious samples. We propose using simulated tempering to draw negative samples. This leads to better generative models and increased robustness to various hyperparameters.
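As a rough illustration of the CD and Persistent CD updates this abstract contrasts, the following NumPy sketch implements one training step for a Bernoulli RBM (all names, shapes, and hyperparameters are illustrative; the tempered-chain variant proposed in the thesis is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample(p):
    """Draw binary samples from element-wise Bernoulli probabilities."""
    return (rng.random(p.shape) < p).astype(float)

class RBM:
    def __init__(self, n_vis, n_hid):
        self.W = 0.01 * rng.standard_normal((n_vis, n_hid))
        self.b = np.zeros(n_vis)   # visible biases
        self.c = np.zeros(n_hid)   # hidden biases

    def h_given_v(self, v):
        return sigmoid(v @ self.W + self.c)

    def v_given_h(self, h):
        return sigmoid(h @ self.W.T + self.b)

    def cd_step(self, v0, lr=0.1, k=1, persistent=None):
        """One CD-k update; pass the previous chain state as `persistent`
        to get PCD-k, where the negative chain is never reset to the data."""
        ph0 = self.h_given_v(v0)                     # positive phase
        v = v0 if persistent is None else persistent
        for _ in range(k):                           # negative (Gibbs) phase
            h = sample(self.h_given_v(v))
            v = sample(self.v_given_h(h))
        phk = self.h_given_v(v)
        self.W += lr * (v0.T @ ph0 - v.T @ phk) / len(v0)
        self.b += lr * (v0 - v).mean(axis=0)
        self.c += lr * (ph0 - phk).mean(axis=0)
        return v  # chain state to feed back in as `persistent` for PCD
```

For CD-k the negative chain restarts from the data on every update; for PCD-k the returned state is fed back in as `persistent`, so mixing of that persistent chain (the brittleness discussed above) determines the quality of the negative samples.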
