51

Automatische Erkennung von Zuständen in Anthropomatiksystemen

Moldenhauer, Jörg January 2005 (has links)
Zugl.: Karlsruhe, Univ., Diss., 2005
52

Automatische Erkennung von Zuständen in Anthropomatiksystemen

Moldenhauer, Jörg. January 2006 (has links)
Universität, Diss., 2005--Karlsruhe.
53

Algorithmen der Bildanalyse und -synthese für große Bilder und Hologramme

Kienel, Enrico 27 November 2012 (has links)
This thesis deals with algorithms from the fields of image segmentation and data synthesis for the so-called hologram-printing principle. Motivated by an anatomically oriented research project, active contours are used for the semi-automatic segmentation of digitized histological sections. The particular challenge lies in developing approaches that adapt the method to very large images, which in this context can reach sizes of several hundred megapixels. Aiming for the greatest possible efficiency while restricting itself to consumer hardware, the thesis presents ideas that make active-contour-based segmentation feasible at such image sizes for the first time and that contribute to faster computation and reduced memory consumption. In addition, the method was extended by an intuitive tool that allows interactive local correction of the final contour, considerably increasing its practical usability. The second part of the thesis addresses a printing principle for producing holograms of virtual objects. The hologram printer, whose name deliberately recalls the operation of an inkjet printer, requires special discrete image data called elementary holograms. These carry the visual information of different viewing directions through a fixed geometric point on the hologram plane. A complete hologram composed of many elementary holograms produces a considerable data volume which, depending on the parameters, can quickly reach the terabyte range. Two independent algorithms for generating suitably prepared data, making intensive use of standard graphics hardware, are presented, compared with respect to their computational and memory complexity, and evaluated with regard to quality.
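
The abstract does not spell out the implementation, but the basic idea of evolving an active contour on a tile of a memory-mapped image can be sketched as follows (a minimal Python/scikit-image illustration; file name, image shape and tile coordinates are hypothetical, and the thesis' actual acceleration and memory-reduction schemes go well beyond this):

    import numpy as np
    from skimage import filters, segmentation

    # Memory-map the scan so only the accessed tile is paged into RAM;
    # shape and dtype are assumptions for illustration.
    scan = np.memmap("histology_slice.raw", dtype=np.uint8, mode="r",
                     shape=(60000, 80000))

    # Work on one tile around the structure of interest instead of the
    # full multi-hundred-megapixel image.
    tile = np.asarray(scan[20000:21024, 30000:31024], dtype=float) / 255.0

    # Circular initial contour, given as (row, col) coordinates.
    theta = np.linspace(0, 2 * np.pi, 400)
    init = np.column_stack([512 + 460 * np.sin(theta),
                            512 + 460 * np.cos(theta)])

    # Evolve a classic snake on the smoothed tile.
    snake = segmentation.active_contour(filters.gaussian(tile, sigma=3),
                                        init, alpha=0.015, beta=10.0,
                                        gamma=0.001)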
54

Radiomics risk modelling using machine learning algorithms for personalised radiation oncology

Leger, Stefan 18 June 2019 (has links)
One major objective in radiation oncology is the personalisation of cancer treatment. Implementing this concept requires the identification of biomarkers that precisely predict therapy outcome. Besides the molecular characterisation of tumours, a newer approach known as radiomics aims to characterise tumours using imaging data. In the context of this thesis, radiomics was established at OncoRay to improve the performance of imaging-based risk models. Two software frameworks were developed: one for image feature computation and one for risk model construction. A novel data-driven approach for the correction of intensity non-uniformity in magnetic resonance imaging data was developed to improve image quality prior to feature computation. Further, different feature selection methods and machine learning algorithms for time-to-event survival data were evaluated to identify suitable algorithms for radiomics risk modelling. An improved model performance could be demonstrated using computed tomography data acquired during the course of treatment. Subsequently, tumour sub-volumes were analysed, showing that the tumour rim contains more relevant prognostic information than the corresponding core. Incorporating such spatial diversity information is a promising way to improve the performance of risk models.
Contents:
1. Introduction
2. Theoretical background
  2.1. Basic physical principles of image modalities
    2.1.1. Computed tomography
    2.1.2. Magnetic resonance imaging
  2.2. Basic principles of survival analyses
    2.2.1. Semi-parametric survival models
    2.2.2. Full-parametric survival models
  2.3. Radiomics risk modelling
    2.3.1. Feature computation framework
    2.3.2. Risk modelling framework
  2.4. Performance assessments
  2.5. Feature selection methods and machine learning algorithms
    2.5.1. Feature selection methods
    2.5.2. Machine learning algorithms
3. A physical correction model for automatic correction of intensity non-uniformity in magnetic resonance imaging
  3.1. Intensity non-uniformity correction methods
  3.2. Physical correction model
    3.2.1. Correction strategy and model definition
    3.2.2. Model parameter constraints
  3.3. Experiments
    3.3.1. Phantom and simulated brain data set
    3.3.2. Clinical brain data set
    3.3.3. Abdominal data set
  3.4. Summary and discussion
4. Comparison of feature selection methods and machine learning algorithms for radiomics time-to-event survival models
  4.1. Motivation
  4.2. Patient cohort and experimental design
    4.2.1. Characteristics of patient cohort
    4.2.2. Experimental design
  4.3. Results of feature selection methods and machine learning algorithms evaluation
  4.4. Summary and discussion
5. Characterisation of tumour phenotype using computed tomography imaging during treatment
  5.1. Motivation
  5.2. Patient cohort and experimental design
    5.2.1. Characteristics of patient cohort
    5.2.2. Experimental design
  5.3. Results of computed tomography imaging during treatment
  5.4. Summary and discussion
6. Tumour phenotype characterisation using tumour sub-volumes
  6.1. Motivation
  6.2. Patient cohort and experimental design
    6.2.1. Characteristics of patient cohorts
    6.2.2. Experimental design
  6.3. Results of tumour sub-volumes evaluation
  6.4. Summary and discussion
7. Summary and further perspectives
8. Zusammenfassung
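
As a rough illustration of radiomics risk modelling on time-to-event data, the following sketch combines a simple univariate pre-selection with a Cox proportional hazards model (Python with pandas/lifelines; the CSV file and column names are hypothetical, and the thesis evaluates far more selection methods and learning algorithms than this):

    import pandas as pd
    from lifelines import CoxPHFitter
    from lifelines.utils import concordance_index

    # One row per patient: radiomics features plus follow-up time and
    # event indicator; file and column names are assumptions.
    df = pd.read_csv("radiomics_features.csv")

    # Univariate pre-selection: keep the features whose individual
    # concordance with outcome deviates most from chance (0.5).
    scores = {c: abs(concordance_index(df["time"], -df[c], df["event"]) - 0.5)
              for c in df.columns if c not in ("time", "event")}
    selected = sorted(scores, key=scores.get, reverse=True)[:10]

    # Multivariable Cox proportional hazards model on the selection.
    cph = CoxPHFitter(penalizer=0.1)
    cph.fit(df[selected + ["time", "event"]], duration_col="time",
            event_col="event")
    print(cph.concordance_index_)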
55

Assessing, monitoring and mapping forest resources in the Blue Nile Region of Sudan using an object-based image analysis approach

Mahmoud El-Abbas Mustafa, Mustafa 28 January 2015 (has links)
Following the hierarchical nature of forest resource management, the present work focuses on the natural forest cover at various levels of abstraction, i.e. the categorical land use/land cover (LU/LC) level and continuous empirical estimation at the local operational level. As no single sensor presently covers all the requirements of every level of forest resource assessment, multisource imagery (RapidEye, TERRA ASTER and LANDSAT TM), in addition to other data and knowledge, has been examined. To deal with this structure, an object-based image analysis (OBIA) approach has been assessed in the destabilized Blue Nile region of Sudan as a potential solution for gathering the information required for future forest planning and decision making. Moreover, the spatial heterogeneity as well as the rapid changes observed in the region motivate the search for more efficient, flexible and accurate methods to update the desired information. The OBIA approach is proposed as an alternative analysis framework that can mitigate the deficiencies associated with the pixel-based approach. In this sense, the study examines the popular pixel-based maximum likelihood classifier as an example of the behaviour of spectral classifiers towards the respective data and regional specifics. In contrast, the OBIA approach analyses remotely sensed data by incorporating expert analyst knowledge and complementary ancillary data in a way that simulates human image interpretation based on the real-world representation of the features. As the segment is the basic processing unit, various combinations of segmentation criteria were tested to separate similar spectral values into groups of relatively homogeneous pixels. At the categorical abstraction level, rules were developed and optimum features were extracted for each particular class. Two methods, a Rule Based (RB) classifier and a Nearest Neighbour (NN) classifier, were used to assign segmented objects to their corresponding classes. Moreover, the study attempts to answer the questions of whether OBIA is inherently more precise at fine spatial resolution than at coarser resolution, and how the pixel-based and OBIA approaches compare in relative accuracy as a function of spatial resolution. As anticipated, this work shows that the OBIA approach can be proposed as an advanced solution, particularly for high-resolution imagery, since accuracies improved at all scales applied compared with those of the pixel-based approach. Meanwhile, the results achieved by the two approaches are consistently high at the finer RapidEye spatial resolution, and significantly enhanced with OBIA. Since LU/LC change is rapid, the region is heterogeneous, and the data vary in acquisition date and source, post-classification change detection was implemented rather than radiometric transformation methods. Based on thematic LU/LC maps, a series of optimized algorithms was developed to depict the dynamics of LU/LC entities. Detailed "from-to" change information classes as well as change statistics were thus produced. Furthermore, the produced change maps were assessed, revealing consistently high accuracy. Aggregated to the community level, a social survey of household data provides a comprehensive perspective in addition to the EO data. Predetermined hot spots of degraded and successfully recovered areas were investigated.
The study therefore used a well-designed questionnaire to address the factors affecting land-cover dynamics and possible solutions based on the local community's perception. At the operational forest stand level, the rationale for incorporating these analyses is to derive semi-automatic OBIA metric estimates, from which forest attributes are acquired through automated segmentation at the level of delineated tree crowns or clusters of crowns. Correlation and regression analyses were applied to identify the relations between a wide range of spectral and textural metrics and the field-derived forest attributes. The results obtained within the OBIA framework reveal strong relationships and precise estimates. Furthermore, the best-fitted models were cross-validated with an independent set of field samples, which revealed a high degree of precision. An important question is how the spatial resolution and spectral range used affect the quality of the developed models; this is also discussed with respect to the different sensors examined. In conclusion, the study shows that OBIA is an efficient and accurate approach for gaining knowledge about land features, whether at the level of operational forest structural attributes or at the categorical LU/LC level. Moreover, the methodological framework offers a potential solution for obtaining precise facts and figures about change dynamics and its driving forces.
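
A minimal sketch of the OBIA idea, segment first and then classify image objects by their features, might look like this in Python with scikit-image and scikit-learn (the scene file, feature choice and placeholder training labels are assumptions; the thesis' rule-based classifier and ancillary data are not reproduced here):

    import numpy as np
    from skimage import io, segmentation, measure
    from sklearn.neighbors import KNeighborsClassifier

    # Multispectral scene; file name and band layout are assumptions.
    image = io.imread("rapideye_scene.tif").astype(float)

    # Segmentation: group similar spectral values into homogeneous objects.
    segments = segmentation.slic(image, n_segments=5000, compactness=10.0)

    # Per-object features: mean intensity plus simple shape descriptors.
    props = measure.regionprops(segments + 1, intensity_image=image[..., 0])
    feats = np.array([[p.mean_intensity, p.area, p.eccentricity]
                      for p in props])

    # Nearest-neighbour assignment of objects to LU/LC classes; the
    # training objects and labels below are placeholders, in practice
    # they come from reference data.
    train_idx = np.arange(0, len(feats), 50)
    train_labels = np.random.randint(0, 4, len(train_idx))
    clf = KNeighborsClassifier(n_neighbors=1).fit(feats[train_idx],
                                                  train_labels)
    lu_lc = clf.predict(feats)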
56

Towards Accurate and Efficient Cell Tracking During Fly Wing Development

Blasse, Corinna 23 September 2016 (has links)
Understanding the development, organization, and function of tissues is a central goal in developmental biology. With modern time-lapse microscopy, it is now possible to image entire tissues during development and thereby localize subcellular proteins. A particularly productive area of research is the study of single-layer epithelial tissues, which can be simply described as a 2D manifold. For example, the apical band of cell adhesions in epithelial cell layers forms a 2D manifold within the tissue and provides a 2D outline of each cell. The Drosophila melanogaster wing has become an important model system, because its 2D cell organization has the potential to reveal mechanisms that create the final fly wing shape. Other examples include structures that naturally localize at the surface of the tissue, such as the ciliary components of planarians. Data from these time-lapse movies typically consist of mosaics of overlapping 3D stacks. This is necessary because the surface of interest exceeds the field of view of today's microscopes. To quantify cellular tissue dynamics, these mosaics need to be processed in three main steps: (a) extracting, correcting, and stitching individual stacks into a single, seamless 2D projection per time point, (b) obtaining cell characteristics at individual time points, and (c) determining cell dynamics over time. It is therefore necessary that the applied methods handle large amounts of data efficiently while still producing accurate results. This task is made especially difficult by the low signal-to-noise ratios that are typical in live-cell imaging. In this PhD thesis, I develop algorithms that cover all three processing tasks mentioned above and apply them in the analysis of polarity and tissue dynamics in large epithelial cell layers, namely the Drosophila wing and the planarian epithelium. First, I introduce an efficient pipeline that preprocesses raw image mosaics. This pipeline accurately extracts the stained surface of interest from each raw image stack and projects it onto a single 2D plane. It then corrects uneven illumination, aligns all mosaic planes, and adjusts brightness and contrast before finally stitching the processed images together. This preprocessing not only significantly reduces the data quantity, but also simplifies downstream data analyses. Here, I apply this pipeline to datasets of the developing fly wing as well as a planarian epithelium. I additionally address the problem of determining cell polarities in chemically fixed samples of planarians. Here, I introduce a method that automatically estimates cell polarities by computing the orientation of rootlets in motile cilia. With this technique one can for the first time routinely measure and visualize how tissue polarities are established and maintained in entire planarian epithelia. Finally, I analyze cell migration patterns in the entire developing wing tissue in Drosophila. At each time point, cells are segmented using a progressive merging approach with merging criteria that take typical cell shape characteristics into account. The method enforces biologically relevant constraints to improve the quality of the resulting segmentations. For cases where full cell tracking is desired, I introduce a pipeline using a tracking-by-assignment approach. This allows me to link cells over time while considering critical events such as cell divisions or cell death.
This work presents a very accurate large-scale cell tracking pipeline and opens up many avenues for further study, including several in-vivo perturbation experiments as well as biophysical modeling. The methods introduced in this thesis are examples of computational pipelines that catalyze biological insights by enabling the quantification of tissue-scale phenomena and dynamics. I provide not only detailed descriptions of the methods, but also show how they perform on concrete biological research projects.
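
The tracking-by-assignment step can be illustrated with a minimal frame-to-frame linking sketch that solves a global assignment problem on centroid distances (Python/SciPy; the distance threshold and sample coordinates are hypothetical, and the thesis' pipeline additionally models events such as divisions and deaths explicitly rather than simply leaving cells unlinked):

    import numpy as np
    from scipy.optimize import linear_sum_assignment
    from scipy.spatial.distance import cdist

    def link_frames(centroids_t, centroids_t1, max_dist=20.0):
        """One-to-one linking of segmented cells between two frames by
        globally minimal total centroid displacement. Pairs further
        apart than max_dist are left unlinked (candidate appearance,
        loss, or division events)."""
        cost = cdist(centroids_t, centroids_t1)
        rows, cols = linear_sum_assignment(cost)
        return [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= max_dist]

    # Hypothetical centroids for two consecutive time points.
    a = np.array([[10.0, 12.0], [40.0, 41.0], [80.0, 15.0]])
    b = np.array([[11.0, 14.0], [42.0, 40.0]])
    print(link_frames(a, b))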
57

A Framework for example-based Synthesis of Materials for Physically Based Rendering

Rudolph, Carsten 14 February 2019 (has links)
In computer graphics, textures are used to create detail along geometric surfaces. They are less computationally expensive than geometry, but this efficiency is traded for greater memory demands, especially at large output resolutions. Research has shown that textures can be synthesized from low-resolution exemplars, reducing overall runtime memory cost and enabling applications such as remixing existing textures to create new, visually similar representations. In many modern applications, textures are not limited to simple images; rather, they represent geometric detail in various ways that describe how light interacts at a certain point on a surface. Physically Based Rendering (PBR) is a technique that employs complex lighting models to create effects like self-shadowing, realistic reflections or subsurface scattering. A set of multiple textures is used to describe what is called a material. In this thesis, example-based texture synthesis is extended to physical lighting models to create a physically based material synthesizer. The thesis introduces a framework capable of utilizing multiple texture maps to synthesize new representations from existing material exemplars. The framework is then tested with multiple exemplars from different texture categories to assess synthesis performance in terms of quality and computation time. The synthesizer works in uv space, making it possible to re-use the same exemplar material at runtime with different uv maps, reducing memory cost whilst increasing visual variety and minimizing repetition artifacts. The thesis shows that this can be done effectively, without introducing inconsistencies such as seams or discontinuities under dynamic lighting scenarios.
Contents:
1. Context and Motivation
2. Introduction
  2.1. Terminology: What is a Texture?
    2.1.1. Classifying Textures
    2.1.2. Characteristics and Appearance
    2.1.3. Advanced Analysis
  2.2. Texture Representation
    2.2.1. Is there a theoretical Limit for Texture Resolution?
  2.3. Texture Authoring
    2.3.1. Texture Generation from Photographs
    2.3.2. Computer-Aided Texture Generation
  2.4. Introduction to Physically Based Rendering
    2.4.1. Empirical Shading and Lighting Models
    2.4.2. The Bi-Directional Reflectance Distribution Function (BRDF)
    2.4.3. Typical Texture Representations for Physically Based Models
3. A brief History of Texture Synthesis
  3.1. Algorithm Categories and their Developments
    3.1.1. Pixel-based Texture Synthesis
    3.1.2. Patch-based Texture Synthesis
    3.1.3. Texture Optimization
    3.1.4. Neural Network Texture Synthesis
  3.2. The Purpose of example-based Texture Synthesis Algorithms
4. Framework Design
  4.1. Dividing Synthesis into subsequent Stages
  4.2. Analysis Stage
    4.2.1. Search Space
    4.2.2. Guidance Channel Extraction
  4.3. Synthesis Stage
    4.3.1. Synthesis by Neighborhood Matching
    4.3.2. Validation
5. Implementation
  5.1. Modules and Components
  5.2. Image Processing
    5.2.1. Image Representation
    5.2.2. Filters and Guidance Channel Extraction
    5.2.3. Search Space and Descriptors
    5.2.4. Neighborhood Search
  5.3. Implementing Synthesizers
    5.3.1. Unified Synthesis Interface
    5.3.2. Appearance Space Synthesis: A Hierarchical, Parallel, Per-Pixel Synthesizer
    5.3.3. (Near-) Regular Texture Synthesis
    5.3.4. Extended Appearance Space: A Physical Material Synthesizer
  5.4. Persistence
    5.4.1. Codecs
    5.4.2. Assets
  5.5. Command Line Sandbox
    5.5.1. Providing Texture Images and Material Dictionaries
6. Experiments and Results
  6.1. Test Setup
    6.1.1. Metrics
    6.1.2. Result Visualization
    6.1.3. Limitations and Conventions
  6.2. Experiment 1: Analysis Stage Performance
    6.2.1. Influence of Exemplar Resolution
    6.2.2. Influence of Exemplar Maps
  6.3. Experiment 2: Synthesis Performance
    6.3.1. Influence of Exemplar Resolution
    6.3.2. Influence of Exemplar Maps
    6.3.3. Influence of Sample Resolution
  6.4. Experiment 3: Synthesis Quality
    6.4.1. Influence of Per-Level Jitter
    6.4.2. Influence of Exemplar Maps and Map Weights
7. Discussion and Outlook
  7.1. Contributions
  7.2. Further Improvements and Research
    7.2.1. Performance Improvements
    7.2.2. Quality Improvements
    7.2.3. Methodology
    7.2.4. Further Problem Fields
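
A toy version of synthesis by neighborhood matching across multiple material maps, the principle behind matching in an extended appearance space, could be sketched as follows (Python with NumPy/scikit-learn; exemplar size, neighborhood radius and the random stand-in maps are assumptions, and a real synthesizer adds hierarchy, jitter and uv-space lookup):

    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    def build_descriptors(maps, radius=2):
        """Stack per-pixel neighborhoods from all material maps (albedo,
        normals, roughness, ...) into one descriptor per pixel, so that
        matching respects every channel of the material, not just colour."""
        h, w = maps[0].shape[:2]
        stacked = np.dstack(maps)  # all channels together
        pad = np.pad(stacked, ((radius, radius), (radius, radius), (0, 0)),
                     mode="wrap")
        desc = np.empty((h * w, (2 * radius + 1) ** 2 * stacked.shape[2]))
        for i in range(h):
            for j in range(w):
                patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
                desc[i * w + j] = patch.ravel()
        return desc

    # Hypothetical 64x64 exemplar with three maps standing in for a
    # real PBR material (e.g. albedo, height, roughness).
    rng = np.random.default_rng(0)
    maps = [rng.random((64, 64)) for _ in range(3)]
    index = NearestNeighbors(n_neighbors=1).fit(build_descriptors(maps))
    # During synthesis, each output pixel queries its current neighborhood:
    dist, best = index.kneighbors(build_descriptors(maps)[:5])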
58

Time-Resolved Quantification of Centrosomes by Automated Image Analysis Suggests Limiting Component to Set Centrosome Size in C. elegans Embryos

Jaensch, Steffen 12 February 2010 (has links)
The centrosome is a dynamic organelle found in all animal cells that serves as a microtubule organizing center during cell division. Most of the centrosome components have been identified by genetic screens over the last decade, but little is known about how these components interact with each other to form a functional centrosome. Towards a better understanding of the molecular organization of the centrosome, we investigated the mechanism that regulates the size of the centrosome in the early C. elegans embryo. For this, we monitored fluorescently labeled centrosomes in living embryos and developed a suite of image analysis algorithms to quantify the centrosomes in the resulting 3D time-lapse images. In particular, we developed a novel algorithm involving a two-stage linking process for tracking centrosomes, which is a multi-object tracking task. This fully automated analysis pipeline enabled us to acquire time-resolved data of centrosome growth in a large number of embryos and could detect subtle phenotypes that were missed by previous assays based on manual image analysis. In a first set of experiments, we quantified centrosome size over development in wild-type embryos and made three essential observations. First, centrosome volume scales proportionately with cell volume. Second, beginning at the 4-cell stage, when cells are small, centrosome size plateaus during the cell cycle. Third, the total centrosome volume the embryo gives rise to in any one cell stage is approximately constant. Based on our observations, we propose a ‘limiting component’ model in which centrosome size is limited by the amounts of maternally derived centrosome components. In a second set of experiments, we tested our hypothesis by varying cell size, centrosome number and microtubule-mediated pulling forces. We then manipulated the amounts of several centrosomal proteins and found that the conserved centriolar and pericentriolar material protein SPD-2 is one such component that determines centrosome size.
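
As an illustration of the quantification step, a minimal sketch for measuring centrosome volumes in a 3D stack might look like this (Python/SciPy; the smoothing, the mean-plus-k-sigma threshold heuristic and the voxel volume are assumptions, and the published pipeline is considerably more elaborate):

    import numpy as np
    from scipy import ndimage

    def centrosome_volumes(stack, voxel_volume, sigma=2.0):
        """Segment bright fluorescent foci in a 3D stack and return one
        volume per detected centrosome."""
        smoothed = ndimage.gaussian_filter(stack.astype(float), sigma)
        mask = smoothed > smoothed.mean() + 4.0 * smoothed.std()
        labels, n = ndimage.label(mask)
        sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
        return sizes * voxel_volume

    # Synthetic stack with two Gaussian foci as a stand-in for real data.
    z, y, x = np.indices((32, 64, 64))
    stack = (np.exp(-((z - 10) ** 2 + (y - 20) ** 2 + (x - 20) ** 2) / 8.0) +
             np.exp(-((z - 20) ** 2 + (y - 45) ** 2 + (x - 40) ** 2) / 8.0))
    print(centrosome_volumes(stack, voxel_volume=0.05))  # volume per voxel, assumed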
59

Quantitative räumliche Auswertung der Mikrostruktur eines in Beton eingebetteten Multifilamentgarns

Kang, Bong-Gu, Focke, Inga, Brameshuber, Wolfgang, Benning, Wilhelm 03 June 2009 (has links)
For a detailed description of the load-bearing behaviour of textile reinforcement in concrete, it is necessary to describe how the concrete matrix penetrates the highly heterogeneous yarn structure. To characterize the microstructure in cross-section, an image analysis method was developed that allows the bond situation of the individual filaments to be evaluated quantitatively. To obtain a spatial description of the bond situation, the strategy was to derive a three-dimensional structure from successive sectional images acquired by scanning electron microscopy. To this end, the experimental procedure was developed and an approach for matching the filaments between the individual cross-sections was devised.
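
One way to detect filament cross-sections in a single SEM slice, offered here only as a hedged sketch of the kind of image analysis described, is a circular Hough transform (Python/scikit-image; file name, radius range and peak parameters are assumptions):

    import numpy as np
    from skimage import feature, io, transform

    # SEM cross-section of the yarn; file name is an assumption.
    sem = io.imread("yarn_section.png", as_gray=True)

    # Edge map, then a circular Hough transform over a plausible range
    # of filament radii (in pixels; values assumed for illustration).
    edges = feature.canny(sem, sigma=2.0)
    radii = np.arange(6, 12)
    accum = transform.hough_circle(edges, radii)
    _, cx, cy, r = transform.hough_circle_peaks(accum, radii,
                                                total_num_peaks=400,
                                                min_xdistance=8,
                                                min_ydistance=8)

    # Each detected circle is one filament cross-section; matching the
    # centres between consecutive slices (e.g. with an assignment
    # solver) then yields the spatial yarn structure.
    print(len(cx), "filament cross-sections detected")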
60

Selbstauskünfte eines Bildwerks - Die Tafel des Jüngsten Gerichts in Weesenstein: Ein Nachtrag

Schellenberger, Simona 21 February 2020 (has links)
The article compares the Weesenstein "Last Judgement" panel with the epitaph for Barthel Lauterbach from Meissen Cathedral. How do the two works differ, and how can the evident influences be explained?
