11 |
Gaia DR1 compared to VLBI positions. Mignard, François; Klioner, Sergei. 02 June 2020.
Comparison of the Gaia DR1 auxiliary quasar solution to recent ground-based VLBI solutions for ICRF2 sources.
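Not part of the record, but as orientation for such catalogue comparisons, a minimal sketch of the basic step involved: computing the angular offset between a Gaia position and a VLBI (ICRF2) position from their right ascensions and declinations. The example coordinates (roughly those of the quasar 3C 273, offset by 1 mas in declination) and the plain-math implementation are illustrative assumptions, not the authors' pipeline.

```python
# Hypothetical sketch (not the authors' pipeline): angular offset between a
# Gaia DR1 position and a VLBI position of the same source, from RA/Dec.
import math

def angular_separation_mas(ra1_deg, dec1_deg, ra2_deg, dec2_deg):
    """Great-circle separation in milliarcseconds (Vincenty formula)."""
    ra1, dec1, ra2, dec2 = map(math.radians, (ra1_deg, dec1_deg, ra2_deg, dec2_deg))
    dra = ra2 - ra1
    num = math.hypot(math.cos(dec2) * math.sin(dra),
                     math.cos(dec1) * math.sin(dec2)
                     - math.sin(dec1) * math.cos(dec2) * math.cos(dra))
    den = math.sin(dec1) * math.sin(dec2) + math.cos(dec1) * math.cos(dec2) * math.cos(dra)
    return math.degrees(math.atan2(num, den)) * 3.6e6   # degrees -> mas

# Positions roughly at 3C 273, differing by 1 mas in declination:
print(f"{angular_separation_mas(187.27791, 2.05239, 187.27791, 2.05239 + 1/3.6e6):.3f} mas")
```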
|
12 |
Automated Pattern Detection and Generalization of Building Groups. Wang, Xiao. 09 October 2020.
This dissertation focuses on building group generalization based on the detection of building patterns. Generalization is an important research field in cartography; it is part of map production and the basis for deriving multiple representations. As one of the most important feature classes on a map, buildings occupy a large share of map space and typically have complex shapes and spatial distributions, which makes their generalization a long-standing, important, and challenging task.
For social, architectural, and geographical reasons, buildings are built according to certain rules that form distinct building patterns. These patterns are crucial structures that should be carefully considered during graphical representation and generalization. Although people perceive such patterns effortlessly, they are not explicitly described in building datasets. To better support the subsequent generalization process, it is therefore important to recognize building patterns automatically. The objective of this dissertation is to develop effective methods for detecting building patterns in building groups; based on the identified patterns, generalization methods are then proposed to carry out building generalization. The main contributions of the dissertation cover the following five aspects:
(1) The terminology and concept of building patterns are clearly explained, and a detailed and relatively complete typology of building patterns is proposed, summarizing previous research and extending it; (2) a stroke-mesh based method is developed to group buildings and detect different patterns within building groups; (3) through an analogy between line simplification and the typification of linear building groups, a typification method based on stroke simplification is developed for generalizing building groups with linear patterns; (4) a mesh-based typification method is developed for generalizing building groups with grid patterns; (5) a method for extracting hierarchical skeleton structures from discrete buildings is proposed; the extracted structures are regarded as representations of the global shape of the entire region and are used to control the generalization process.
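As a non-authoritative illustration of contribution (2), the sketch below shows the kind of proximity-graph grouping that a stroke-mesh approach builds on. It simplifies heavily and is not the dissertation's algorithm: buildings are reduced to centroid points, the proximity graph is a plain Delaunay triangulation, and refinement is a single edge-length threshold (the 40 m value is an assumption).

```python
# Minimal sketch of proximity-graph grouping of buildings (simplified
# assumptions: centroids instead of footprints, Delaunay graph, fixed
# edge-length threshold for refinement).
import numpy as np
from scipy.spatial import Delaunay

def building_groups(centroids, max_edge=40.0):
    """Group building centroids into clusters via a pruned Delaunay graph."""
    tri = Delaunay(centroids)
    n = len(centroids)
    adj = [set() for _ in range(n)]
    for simplex in tri.simplices:                    # collect triangulation edges
        for i in range(3):
            a, b = int(simplex[i]), int(simplex[(i + 1) % 3])
            if np.linalg.norm(centroids[a] - centroids[b]) <= max_edge:
                adj[a].add(b)
                adj[b].add(a)
    groups, seen = [], set()
    for start in range(n):                           # connected components = groups
        if start in seen:
            continue
        stack, comp = [start], []
        while stack:
            node = stack.pop()
            if node in seen:
                continue
            seen.add(node)
            comp.append(node)
            stack.extend(adj[node] - seen)
        groups.append(comp)
    return groups

pts = np.array([[0, 0], [20, 2], [40, 1], [60, 3],      # a roughly linear row
                [200, 200], [220, 198], [210, 220]])    # a separate cluster
print(building_groups(pts, max_edge=40.0))
```

In the dissertation itself, strokes and meshes derived from a refined proximity graph (Sections 3.5.1 and 3.5.2) take the place of this simple thresholding when recognizing collinear, curvilinear, and grid patterns.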
With the above methods, building patterns are detected from building groups and the generalization of the groups is carried out based on these patterns. In addition, the thesis discusses the drawbacks of the methods and outlines potential solutions.
Table of contents:
Abstract I
Kurzfassung III
Contents V
List of Figures IX
List of Tables XIII
List of Abbreviations XIV
Chapter 1 Introduction 1
1.1 Background and motivation 1
1.1.1 Cartographic generalization 1
1.1.2 Urban building and building patterns 1
1.1.3 Building generalization 3
1.1.4 Hierarchical property in geographical objects 3
1.2 Research objectives 4
1.3 Study area 5
1.4 Thesis structure 6
Chapter 2 State of the Art 8
2.1 Operators for building generalization 8
2.1.1 Selection 9
2.1.2 Aggregation 9
2.1.3 Simplification 10
2.1.4 Displacement 10
2.2 Researches of building grouping and pattern detection 11
2.2.1 Building grouping 11
2.2.2 Pattern detection 12
2.2.3 Problem analysis 14
2.3 Researches of building typification 14
2.3.1 Global typification 15
2.3.2 Local typification 15
2.3.3 Comparison analysis 16
2.3.4 Problem analysis 17
2.4 Summary 17
Chapter 3 Using stroke and mesh to recognize building group patterns 18
3.1 Abstract 19
3.2 Introduction 19
3.3 Literature review 20
3.4 Building pattern typology and study area 22
3.4.1 Building pattern typology 22
3.4.2 Study area 24
3.5 Methodology 25
3.5.1 Generating and refining proximity graph 25
3.5.2 Generating stroke and mesh 29
3.5.3 Building pattern recognition 31
3.6 Experiments 33
3.6.1 Data derivation and test framework 33
3.6.2 Pattern recognition results 35
3.6.3 Evaluation 39
3.7 Discussion 40
3.7.1 Adaptation of parameters 40
3.7.2 Ambiguity of building patterns 44
3.7.3 Advantage and Limitation 45
3.8 Conclusion 46
Chapter 4 A typification method for linear building groups based on stroke simplification 47
4.1 Abstract 48
4.2 Introduction 48
4.3 Detection of linear building groups 50
4.3.1 Stroke-based detection method 50
4.3.2 Distinguishing collinear and curvilinear patterns 53
4.4 Typification method 55
4.4.1 Analogy of building typification and line simplification 55
4.4.2 Stroke generation 56
4.4.3 Stroke simplification 57
4.5 Representation of newly typified buildings 60
4.6 Experiment 63
4.6.1 Linear building group detection 63
4.6.2 Typification results 65
4.7 Discussion 66
4.7.1 Comparison of reallocating remained nodes 66
4.7.2 Comparison with classic line simplification method 67
4.7.3 Advantage 69
4.7.4 Further improvement 71
4.8 Conclusion 71
Chapter 5 A mesh-based typification method for building groups with grid patterns 73
5.1 Abstract 74
5.2 Introduction 74
5.3 Related work 75
5.4 Methodology of mesh-based typification 78
5.4.1 Grid pattern classification 78
5.4.2 Mesh generation 79
5.4.3 Triangular mesh elimination 80
5.4.4 Number and positioning of typified buildings 82
5.4.5 Representation of typified buildings 83
5.4.6 Resizing newly typified buildings 85
5.5 Experiments 86
5.5.1 Data derivation 86
5.5.2 Typification results and evaluation 87
5.5.3 Comparison with official map 91
5.6 Discussion 92
5.6.1 Advantages 92
5.6.2 Further improvements 93
5.7 Conclusion 94
Chapter 6 Hierarchical extraction of skeleton structures from discrete buildings 95
6.1 Abstract 96
6.2 Introduction 96
6.3 Related work 97
6.4 Study area 99
6.5 Hierarchical extraction of skeleton structures 100
6.5.1 Proximity Graph Network (PGN) of buildings 100
6.5.2 Centrality analysis of proximity graph network 103
6.5.3 Hierarchical skeleton structures of buildings 108
6.6 Generalization application 111
6.7 Experiment and discussion 114
6.7.1 Data statement 114
6.7.2 Experimental results 115
6.7.3 Discussion 118
6.8 Conclusions 120
Chapter 7 Discussion 121
7.1 Revisiting the research problems 121
7.2 Evaluation of the presented methodology 123
7.2.1 Strengths 123
7.2.2 Limitations 125
Chapter 8 Conclusions 127
8.1 Main contributions 127
8.2 Outlook 128
8.3 Final thoughts 131
Bibliography 132
Acknowledgements 142
Publications 143
|
13 |
Im Schwerpunkt der Anschlusspunkte – Zur Genauigkeit geodätischer Koordinatentransformationen. Lehmann, Rüdiger. January 2010.
Eine in der Geodäsie bekannte Regel besagt, dass die Genauigkeit zu transformierender Neupunkte im Schwerpunkt der Anschlusspunkte am höchsten ist. Weniger bekannt ist, unter welchen Voraussetzungen dies generell gilt. Allgemein unbekannt ist bisher, auf welche Koordinatentransformationen man diese Regel ausdehnen kann. Wir zeigen dies auf und untersuchen einen Fall, in dem diese Regel nicht gilt. Es stellt sich heraus, dass der am genauesten transformierbare Neupunkt theoretisch sogar außerhalb der konvexen Hülle der Anschlusspunkte liegen kann. / A rule well known in geodesy states that the accuracy of points to be transformed is highest at the centre of gravity of the control points. Less well known is under which conditions this rule generally applies, and it has so far been unclear to which coordinate transformations the rule can be extended. We clarify this and investigate a case in which the rule does not hold. It turns out that the most accurately transformable new point can, in theory, even lie outside the convex hull of the control points.
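A minimal numerical sketch of the rule under discussion, assuming the common case of a 2D Helmert (four-parameter similarity) transformation estimated by least squares from control points with independent, equal-accuracy target coordinates; the control coordinates below are invented for illustration and the code is not taken from the article.

```python
# Sketch (assumed setup, not from the article): propagated accuracy of a
# transformed new point for a 2D Helmert transformation
#   x' = a*x - b*y + tx,   y' = b*x + a*y + ty,
# estimated by least squares from control points with equal-variance noise.
import numpy as np

control = np.array([[0.0, 0.0], [100.0, 10.0], [80.0, 90.0], [15.0, 70.0]])

def design(points):
    """Design matrix: two rows (x- and y-equation) per control point."""
    rows = []
    for x, y in points:
        rows.append([x, -y, 1.0, 0.0])
        rows.append([y,  x, 0.0, 1.0])
    return np.array(rows)

A = design(control)
cov_params = np.linalg.inv(A.T @ A)       # parameter covariance in units of sigma^2

def point_variance(x, y):
    """Trace of the covariance of the transformed point (units of sigma^2),
    considering only the uncertainty of the estimated parameters."""
    J = np.array([[x, -y, 1.0, 0.0],
                  [y,  x, 0.0, 1.0]])
    return np.trace(J @ cov_params @ J.T)

centroid = control.mean(axis=0)
for label, (x, y) in [("centroid", centroid),
                      ("first control point", control[0]),
                      ("far outside", (300.0, 300.0))]:
    print(f"{label:20s} variance factor = {point_variance(x, y):.5f}")
```

Running the sketch gives the smallest variance factor at the centroid of the control points, which is exactly the rule whose scope and exceptions the article examines.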
|
14 |
The Optical Outcoupling of Organic Light Emitting Diodes. Hill, Duncan. 23 June 2008.
OLEDs have seen strong growth in development in recent years; however, up to 80% of the emitted light may be lost within the OLED stack and the substrate layers. This thesis investigates the effects of the layer stack on the OLED properties and also studies a number of approaches to substrate structuring and treatment in order to couple more light out of the devices.
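As a back-of-the-envelope illustration of why losses of this magnitude are expected (assumed refractive index, not a result from the thesis), the classical escape-cone estimate for isotropic emission inside a high-index layer:

```python
# Rough estimate with assumed values: fraction of isotropically emitted light
# inside the escape cone of a high-index organic layer, which bounds the
# directly outcoupled light in the simple ray-optics picture.
import math

def escape_fraction(n_emitter: float, n_outside: float = 1.0) -> float:
    """Fraction of isotropic emission within the escape cone towards one face."""
    theta_c = math.asin(n_outside / n_emitter)       # critical angle
    return 0.5 * (1.0 - math.cos(theta_c))           # solid-angle fraction

n_org = 1.7                                          # typical organic layer (assumption)
one_face = escape_fraction(n_org)
with_mirror = 2.0 * one_face                         # reflective cathode folds the back half forward
print(f"single face: {one_face:.1%}, with back reflector: ~{with_mirror:.1%}")
# -> roughly 10% and 20%, consistent with the statement that up to ~80% of the
#    generated light can remain trapped in the stack and substrate.
```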
|
15 |
Zur Anatomie Schwarzer Löcher, das G-Boson, Dunkle Materie und Dunkle Energie: War beim Urknall einiges anders? Reichelt, Uwe J. M. 16 February 2022.
Going beyond previous statements, a path is presented that allows statements about the smallest possible black holes. It turns out that they exist (theoretically) and that they constitute a new stable elementary particle (called the G-boson in this work), which reveals connections to dark matter and makes dark energy necessary for explaining astronomical observations, independently of the existence already required for it by macro-quantum theory. This leads to a logical sequence of events at the Big Bang that places it in a somewhat different context than previously known. (A small numerical sketch of the Planck-scale quantities involved follows the table of contents below.)
Table of contents:
1. Abstract
2. Introduction
3. Preliminary considerations based on Planck units
4. The limiting force
5. The limiting force and black holes, the G-boson
6. Properties of the G-boson
7. Formation of the G-bosons, dark matter and energy
8. What does this mean for the Big Bang?
9. Astronomical findings
10. Summary
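A hypothetical numerical aside, not taken from the thesis: the Planck-unit quantities that any discussion of smallest possible black holes starts from, together with the frequently quoted maximum-force value c^4/(4G). Whether the thesis defines its "limiting force" this way is an assumption here.

```python
# Illustrative Planck-scale quantities (standard formulas, assumed relevance).
import math

G    = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c    = 2.998e8        # speed of light, m/s
hbar = 1.055e-34      # reduced Planck constant, J s

planck_mass   = math.sqrt(hbar * c / G)          # ~2.18e-8 kg
planck_length = math.sqrt(hbar * G / c**3)       # ~1.62e-35 m
planck_force  = c**4 / G                         # ~1.21e44 N
max_force     = c**4 / (4 * G)                   # conjectured upper bound on force

print(f"Planck mass   : {planck_mass:.3e} kg")
print(f"Planck length : {planck_length:.3e} m")
print(f"Planck force  : {planck_force:.3e} N")
print(f"c^4/(4G)      : {max_force:.3e} N")
```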
|
16 |
A High-Resolution Time-of-Flight Spectrometer for Fission Fragments and Ion Beams. Kosev, Krasimir. 31 July 2008.
1. A quantitative understanding of the nucleosynthesis process requires knowledge of the production rates, the masses, and the β-decay characteristics of exotic neutron-rich nuclei. Nuclear fission is a suitable method of producing such nuclei with masses from 60 to 150. Neutron-rich nuclei close to the r-process path can be produced via photo-fission at the Rossendorf superconducting linear accelerator of high brilliance and low emittance (ELBE) or by means of nuclear reactions at relativistic energies (for example at GSI). If the fission products are identified and their charge numbers are obtained, it will in principle be possible to investigate their structure by means of beta-gamma spectroscopy.
2. For the purpose of fission-fragment detection, a double time-of-flight (TOF) spectrometer has been developed. The key component of the TOF spectrometer is a TOF detector consisting of multichannel-plate (MCP) detectors with a position-sensitive readout, a foil for secondary electron (SE) production, and an electrostatic mirror. The fission fragments are detected by measuring the SEs impinging on the position-sensitive anode after emission from the foil, acceleration, and deflection by the electrostatic mirror.
3. In the first part of the work, special attention is paid to the relevant methods of building a spectrometer of this type. The functionality of the different detector components is proven in detail. A unique method for determining the SE foil thickness with α-particles is presented, and values for the mirror transmission and scattering are deduced. A dedicated SIMION 3D simulation showed that serpentine-like wires with a pitch distance of 1 mm can provide a transparency of more than 90% without significant impact on the time resolution.
4. Since the performance of the MCP detectors is crucial, optimised schemes for their high-voltage supplies have been implemented successfully. Further enhancement of the setup was achieved by introducing surface-mount device (SMD) elements for signal decoupling, positioned close to the detector surface; this avoids the signal deterioration caused by the additional capacitances and inductances of extra cable lengths. Because the MCP signal decoupling takes place by means of rings with a not well-defined impedance, impedance-matching problems arise, causing signal ringing and distortion. One approach to this problem was to build a special fast, wide-band transimpedance amplifier; with its circuit mounted close to the detectors, a significant reduction of the signal ringing was observed while the rise time of the detector signal was maintained. In order to process the MCP detector signals optimally, a new state-of-the-art constant-fraction discriminator (CFD) based on the amplitude and rise-time compensated (ARC) technique, with very low threshold capabilities and optimised walk properties, has been developed and incorporated into the setup.
5. In our first laboratory test measurements, conducted with an α-particle source, we demonstrated the ability of the setup to resolve pattern images placed directly in front of the MCP detector or reflected by the electrostatic mirror. The position resolution obtained in the second case is of the order of 2 mm. We showed that the detection efficiency of the system for ions like He is less than 30%, mainly because of the low number of electrons liberated from the SE foil. In a setup consisting of two mirror MCP detectors, we successfully observed the TOF spectrum of a mixed (226Ra, 222Rn, 210Po, 218Po, 214Po) α-source and found good agreement with a SRIM simulation.
6. Measurements performed at the FZ Dresden-Rossendorf 5 MV tandem accelerator enabled us to learn more about the response of the TOF detectors to various beams of heavy ions. The first in-beam experiments clearly showed that the setup consisting of two mirror detectors is capable of resolving different 35Cl beam charge states. In combination with the specially designed wide-band amplifier and dedicated CFDs based on the ARC technique, we achieved an in-beam time resolution of 170 ps per TOF detector. Measurements with ions of Z > 30 resulted in detection efficiencies greater than 90%. At foil accelerating potentials approximately two times larger than the mirror deflection voltage, most of the SEs gain enough energy to pass through the electrostatic mirror without being deflected towards the MCP surface; an abrupt drop of the efficiency curve is then observed, the "transparent" mode of the mirror.
7. Properties of electrons ejected from thin foils by heavy ions have also been investigated. From the MCP pulse-height-distribution spectra, the number of forward-emitted SEs ejected by the 35Cl beam was deduced. A method for obtaining the widths of the SE energy distributions from the drop of the efficiency curve for various ions has been proposed: assuming that the efficiency curve as a function of the accelerating voltage follows an error function, its standard deviation gives the standard deviation of the SE energy distribution. Another method, based on the TOF technique, for reconstructing the secondary-electron velocity and energy distribution was also developed. It was found that the resulting mean SE velocity closely approaches that of the beam ions, a phenomenon attributed to the so-called "convoy" electrons.
8. The position resolution obtained for beams such as 35Cl, 79Br and 107Ag at stable detection efficiency was better than 1.8 mm. It was demonstrated that the position resolution improves with increasing foil accelerating voltage, owing to the reduced SE angular spread. This mode of operation was favoured until the mirror "semi-transparency" regime was reached; increasing the accelerating potential further could worsen the position resolution, possibly because of a deterioration of the anode timing signals or defocusing effects arising from the field of the mirror wires at high accelerating voltages.
9. Test photo-fission experiments were performed at the bremsstrahlung facility of the ELBE accelerator. For the first time, a spectrometer of this kind was successfully employed for bremsstrahlung-induced photo-fission measurements. The setup consisted of two mirror detectors (first arm) and an 80 mm diameter MCP detector (second arm) with a 238U target positioned in between. TOF measurements with two bremsstrahlung end-point energies of 12.9 and 16.0 MeV were carried out, and a clear-cut separation of the TOF peaks of the medium-mass and heavy fission fragments was observed. These test runs did not aim at fragment-by-fragment mass resolution, which would require a more specific experiment with a much thinner fissile source (to minimise straggling of the fragments inside the target) and considerably better statistics. It was possible to estimate the photo-fission production rate for the two measurement cases and to compare the obtained results with data from other measurements.
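A rough sketch of what the quoted 170 ps per-detector timing resolution means for fragment velocity measurements; the flight path length and fragment energy below are assumptions chosen for illustration, not values from the thesis.

```python
# Rough sketch with assumed numbers (flight path and fragment energy are NOT
# from the thesis): converting the ~170 ps timing resolution per TOF detector
# into a relative velocity resolution for a typical fission fragment.
import math

AMU_KG = 1.660539e-27      # atomic mass unit in kg
MEV_J  = 1.602177e-13      # 1 MeV in joule

def tof_and_resolution(energy_mev_per_u, flight_path_m, sigma_t_s):
    """Non-relativistic flight time and relative velocity resolution.
    The velocity depends only on the energy per nucleon, not on the mass number."""
    v = math.sqrt(2.0 * energy_mev_per_u * MEV_J / AMU_KG)   # m/s
    t = flight_path_m / v
    return v, t, sigma_t_s / t          # sigma_v/v ~ sigma_t/t for a fixed path

# Assumptions: ~1 MeV/u fission fragment, 0.5 m flight path, and 170 ps per
# detector added in quadrature for the start and stop signals.
sigma_t = math.sqrt(2.0) * 170e-12
v, t, rel = tof_and_resolution(1.0, 0.5, sigma_t)
print(f"v = {v / 1e6:.2f} mm/ns, TOF = {t * 1e9:.1f} ns, sigma_v/v = {rel:.2%}")
```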
|
17 |
Planungskartographie Ländlicher Räume in Deutschland: Eine Analyse und Diskussion aktueller Bestimmungsfaktoren zur kartographischen Modellierung einer Raumkategorie. Chudy, Thomas. 11 June 2007.
Changes in society's understanding of planning, together with the increasingly digital and technological character of cartography, both demand and enable new map images in content and design, and reveal a trend whereby ever more map users will be able, and willing, to produce themselves the maps they consider best suited to their needs. This inevitably results in a changed professional profile of the cartographer, who becomes a data manager, GIS consultant, and professional producer of official planning documents. This thesis analyses and discusses the determining factors of planning maps, in particular planning maps for rural areas, a discussion that can currently draw on only a few recent publications. The methods of planning cartography developed out of urban-planning requirements and are not equally suited to meeting the requirements of all planning maps; the focus was therefore placed on the representation of the second category of spatial planning, the rural area. The spatial category in focus was subjected to a detailed clarification of terms, with the aim of identifying content relevant for cartographic representation. Starting from an account of the significance, development, functions, and system of planning maps in the Federal Republic of Germany, the analysis of requirements follows as a further thematic focus. The identified requirements for planning-cartographic products, as well as an assessment of the current position of planning maps in the spatial planning sciences, including their potential to meet future demands, were verified with an empirical component in the form of a map analysis. The results of the terminological clarification, the deficits revealed by comparing these requirements with the current visualizations of planning maps for rural areas, and the particularities of planning cartography are discussed. The considerations were implemented exemplarily in the project of the agricultural atlas for the federal state of Saxony-Anhalt. Current trends, such as the stronger integration of geostatistical methods as well as unconventional modern visualization methods such as the lenticular technique, are finally addressed in the "Outlook and Agenda".
|
18 |
Adaptively Refined Large-Eddy Simulations of Galaxy Clusters / Adaptiv verfeinerte Grobstruktursimulationen von Galaxienhaufen. Maier, Andreas. January 2008.
It is the aim of this work to develop, implement, and apply a new numerical scheme for modeling turbulent, multiphase astrophysical flows such as galaxy cluster cores and star forming regions. The method combines the capabilities of adaptive mesh refinement (AMR) and large-eddy simulations (LES) to capture localized features and to represent unresolved turbulence, respectively; it is referred to as Fluid mEchanics with Adaptively Refined Large-Eddy SimulationS, or FEARLESS. / Ziel dieser Arbeit war, ein neues numerisches Modell zu entwickeln, welches es ermöglicht, Grobstruktursimulationen auch mit adaptiven Gittercodes auszuführen, um Turbulenz über große Längenskalenbereiche konsistent zu simulieren.
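A generic illustration (not the FEARLESS implementation) of the coupling such a method has to manage: in an AMR-LES scheme the filter scale follows the local cell size, so a standard one-equation subgrid-scale closure such as nu_sgs = C_nu * Delta * sqrt(k_sgs) yields a different eddy viscosity on every refinement level. All numbers below are placeholders.

```python
# Generic one-equation SGS closure evaluated across AMR levels (illustrative
# values only; not the FEARLESS equations).
C_NU = 0.05          # model coefficient (assumed typical value)

def sgs_viscosity(k_sgs: float, cell_size: float) -> float:
    """Eddy viscosity from the SGS turbulence energy k_sgs and filter width."""
    return C_NU * cell_size * k_sgs ** 0.5

base_dx = 1.0e20     # coarse cell size in cm (arbitrary illustrative value)
k_sgs   = 1.0e12     # SGS turbulence energy per unit mass in cm^2/s^2 (arbitrary)

for level in range(4):                      # each AMR level halves the cell size
    dx = base_dx / 2 ** level
    print(f"level {level}: dx = {dx:.2e} cm, nu_sgs = {sgs_viscosity(k_sgs, dx):.2e} cm^2/s")
# Refinement shrinks the filter scale, shifting turbulent energy from the SGS
# model to resolved motions -- the consistency issue an AMR-LES scheme must handle.
```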
|
19 |
Simulating Star Formation and Turbulence in Models of Isolated Disk Galaxies / Simulation von Sternentstehung und Turbulenz in Modellen von isolierten Scheibengalaxien. Hupp, Markus. January 2008.
We model Milky Way-like isolated disk galaxies in high-resolution three-dimensional hydrodynamical simulations with the adaptive mesh refinement code Enzo. The model galaxies include a dark matter halo and a disk of gas and stars. We use a simple implementation of sink particles to measure and follow collapsing gas, and in some cases simulate star formation as well as stellar feedback. We investigate two largely different realizations of star formation. Firstly, we follow the classical approach of transforming cold, dense gas into stars with a fixed efficiency. Such simulations are known to overestimate star formation, and we observe this behavior as well. Secondly, we use our newly developed FEARLESS approach to combine hydrodynamical simulations with a semi-analytic model of unresolved turbulence and use this technique to determine the star formation rate dynamically. The simulations with subgrid-scale turbulence-regulated star formation point towards considerably smaller star formation efficiencies and hence more realistic overall star formation rates. More work is necessary to extend this method to account for the observed highly supersonic turbulence in molecular clouds and, ultimately, to use the turbulence-regulated algorithm to simulate observed star formation relations. / In dieser Arbeit beschäftigen wir uns mit der Modellierung und Durchführung von hoch aufgelösten dreidimensionalen Simulationen von isolierten Scheibengalaxien, vergleichbar unserer Milchstraße. Wir verwenden dazu den Simulations-Code Enzo, der die Methode der adaptiven Gitterverfeinerung benutzt um die örtliche und zeitliche Auflösung der Simulationen anzupassen. Unsere Galaxienmodelle beinhalten einen Dunkle Materie Halo sowie eine galaktische Scheibe aus Gas und Sternen. Regionen besonders hoher Gasdichte werden durch Teilchen ersetzt, die fortan die Eigenschaften des Gases beziehungsweise der darin entstehenden Sterne beschreiben. Wir untersuchen zwei grundlegend verschiedene Darstellungen von Sternentstehung. Die erste Methode beschreibt die Umwandlung dichten Gases einer Molekülwolke in Sterne mit konstanter Effektivität und führt wie in früheren Simulationen zu einer Überschätzung der Sternentstehungsrate. Die zweite Methode nutzt das von unserer Gruppe neu entwickelte FEARLESS Konzept, um hydrodynamische Simulationen mit analytischen-empirischen Modellen zu verbinden und bessere Aussagen über die in einer Simulation nicht explizit aufgelösten Bereiche treffen zu können. Besonderes Augenmerk gilt in dieser Arbeit dabei der in Molekülwolken beobachteten Turbulenz. Durch die Einbeziehung dieser nicht aufgelösten Effekte sind wir in der Lage eine realistischere Aussage über die Sternentstehungsrate zu treffen. Eine zukünftige Weiterentwicklung dieser von uns entwickelten und umgesetzten Technik kann in Zukunft dafür verwendet werden, die Qualität des durch Turbulenz regulierten Sternentstehungsmodells noch weiter zu steigern.
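A sketch of the "classical approach" with fixed efficiency mentioned in the abstract, written in its generic textbook form rather than Enzo's exact implementation; the efficiency and density threshold are assumed illustrative values.

```python
# Generic fixed-efficiency star formation recipe (illustrative, not Enzo's
# exact algorithm): gas above a density threshold forms stars at a fixed
# efficiency per free-fall time.
import math

G_CGS = 6.674e-8                     # gravitational constant, cm^3 g^-1 s^-2

def free_fall_time(rho):
    """Free-fall time t_ff = sqrt(3*pi / (32*G*rho)) for gas density rho [g/cm^3]."""
    return math.sqrt(3.0 * math.pi / (32.0 * G_CGS * rho))

def star_formation_rate_density(rho, eff=0.05, rho_thresh=1e-22):
    """d(rho_*)/dt = eff * rho / t_ff above the threshold, else zero.
    The efficiency and threshold values are illustrative assumptions."""
    if rho < rho_thresh:
        return 0.0
    return eff * rho / free_fall_time(rho)

rho = 1e-21                          # roughly a few hundred H atoms per cm^3
print(f"t_ff = {free_fall_time(rho) / 3.15e13:.2f} Myr, "
      f"SFR density = {star_formation_rate_density(rho):.2e} g cm^-3 s^-1")
```

With a fixed efficiency of this kind, every dense region forms stars at the same specific rate regardless of its internal turbulence, which is the behavior the turbulence-regulated FEARLESS recipe is meant to replace.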
|
20 |
Towards a reconstruction of the SUSY seesaw model / Zur Rekonstruktion des SUSY Seesaw Modells. Deppisch, Frank. January 2004.
In this work, we studied in great detail how the unknown parameters of the SUSY seesaw model can be determined from measurements of observables at or below collider energies, namely rare flavor-violating decays of leptons, slepton pair production processes at linear colliders, and slepton mass differences. This is a challenging task, as there is an intricate dependence of the observables on the unknown seesaw, light neutrino, and mSUGRA parameters. In order to separate these different influences, we first considered two classes of seesaw models, namely quasi-degenerate and strongly hierarchical right-handed neutrinos. As a generalisation, we presented a method that can be used to reconstruct the high-energy seesaw parameters, among them the heavy right-handed neutrino masses, from low-energy observables alone. / In dieser Arbeit wurde detailliert untersucht wie die unbekannten Parameter des supersymmetrischen Seesaw-Modells durch Messung von niederenergetischen Observablen (Lepton-Flavor verletzende seltene Zerfälle der Leptonen, Slepton-Paar-Produktion an Elektron-Positron Linearbeschleunigern und Sleptonmassen-Differenzen) bestimmt werden können. Wegen des komplizierten Zusammenhangs zwischen diesen Messgrößen und den Seesaw-, Neutrino-, und SUSY-Parametern stellt dies eine große Herausforderung dar. Um die verschiedenen Einflüsse zu trennen, wurden zuerst zwei Klassen von Seesaw-Modellen betrachtet, nämlich solche die durch (quasi-)entartete und stark hierarchische rechtshändige Neutrinomassen charakterisiert sind. Zur Verallgemeinerung wurde zum Abschluss eine allgemeine Methode präsentiert, mittels der die zugrunde liegenden Hochenergie-Parameter des Seesaw-Modells allein durch niederenergetische Observable rekonstruiert werden können.
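For orientation, a numeric sketch of the standard type-I seesaw relation underlying the model (a textbook estimate, not a result of the thesis); the Dirac and right-handed mass values are assumptions.

```python
# Illustrative type-I seesaw estimate (standard formula, assumed input values):
# a light neutrino mass scale m_nu ~ m_D^2 / M_R from a Dirac mass m_D at the
# electroweak scale and a heavy right-handed Majorana mass M_R.
m_D_gev = 100.0          # Dirac mass, of order the electroweak scale (assumption)
M_R_gev = 1.0e14         # heavy right-handed neutrino mass (assumption)

m_nu_gev = m_D_gev ** 2 / M_R_gev
print(f"m_nu ~ {m_nu_gev * 1e9:.2f} eV")   # 1 GeV = 1e9 eV  ->  ~0.1 eV
# A sub-eV light neutrino mass, consistent with oscillation data, points to M_R
# far above collider energies -- which is why reconstructing the seesaw
# parameters from low-energy observables alone is such a challenge.
```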
|