281 |
Untersuchungen zu einem mit Hedera helix 'Woerner' begrünten, hydroponischen Nutzwandsystem: Evaluierung ertrags- und pflanzenphysiologischer Parameter unter Berücksichtigung der klimatischen Einflüsse zur Modellierung eines intelligenten Wasser- und Nährlösungsmanagements [Investigations of a hydroponic productive-wall system planted with Hedera helix 'Woerner': evaluation of yield and plant-physiological parameters, accounting for climatic influences, for modelling an intelligent water and nutrient solution management]
Wolter, Adelheid (17 December 2015)
The subject of this research was a newly developed modular cassette system planted with Hedera helix 'Woerner', referred to in the following as a hydroponic productive wall (Nutzwand). Quality of life in urban agglomerations is declining as a result of increasing densification. Urban greenery improves quality of life and provides local improvements to climate and air quality. However, greenery competes for space with other building projects. Here the hydroponic productive wall promises high potential, since this wall-bound façade greening system is largely independent of the ground.
A study was carried out to quantify the system's performance potential. In addition to a detailed description of the cassette system and the experimental facility, which comprised one north-facing and one south-facing productive wall, a water balance and a derived irrigation schedule were established. The oxygen content of the substrate was investigated. The effect on the stand climate was likewise an essential criterion for describing the cassette system. For the performance assessment, growth analyses were conducted to describe plant production. For some cassette elements, a plant strengthening agent containing Bacillus subtilis was applied in the root zone with the aim of increasing growth. In a desiccation experiment in the greenhouse, the effect of different concentrations of the plant strengthening agent on the stress tolerance of Hedera helix 'Woerner' was examined. The water balance formed a separate focus, for which two approaches to irrigation control were pursued. The first modelled evapotranspiration from meteorological measurements and from measurements of plant transpiration in the stand. The second explored the possibilities of plant-based sensing, using an electronic leaf thickness sensor.
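Evapotranspiration modelling from meteorological data admits many formulations; one simple, temperature-based candidate (an illustrative assumption, not necessarily the model used in the thesis) is the Hargreaves equation:

```python
import math

def hargreaves_et0(t_mean, t_max, t_min, ra):
    """Reference evapotranspiration ET0 [mm/day] via the Hargreaves equation.

    t_mean, t_max, t_min: daily air temperatures [deg C]
    ra: extraterrestrial radiation expressed as evaporation equivalent [mm/day]
    """
    return 0.0023 * ra * (t_mean + 17.8) * math.sqrt(t_max - t_min)

# Example: a mild summer day
et0 = hargreaves_et0(t_mean=20.0, t_max=26.0, t_min=14.0, ra=12.0)
```

An irrigation controller would then scale ET0 by an empirically fitted coefficient for the planted wall to obtain the daily water demand per cassette.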
The findings of the dissertation are intended to show how well suited a hydroponic productive wall is to practical use and whether it can locally counteract the problem of declining quality of life in urban areas.
I Table of contents
II List of figures
III List of tables
IV List of formulas
V List of abbreviations
1 Introduction
2 Problem statement
3 Objectives
4 Cassette system
4.1 Materials and methods
4.1.1 Experimental facility
4.1.2 Oxygen content in the substrate
4.1.3 Microclimatic influences
4.2 Results
4.2.1 Experimental facility
4.2.2 Oxygen content in the substrate
4.2.3 Microclimatic influences
4.3 Discussion
4.3.1 Experimental facility
4.3.2 Oxygen content in the substrate
4.3.3 Microclimatic influences
4.4 Summary
5 Growth
5.1 Materials and methods
5.1.1 Growth analysis
5.1.2 Drought stress experiment with Bacillus subtilis FZB24® liquid
5.2 Results
5.2.1 Growth analyses
5.2.2 Drought stress experiment with Bacillus subtilis FZB24® liquid
5.3 Discussion
5.3.1 Growth analyses
5.3.2 Drought stress experiment with Bacillus subtilis FZB24® liquid
5.4 Summary
6 Water balance
6.1 Materials and methods
6.1.1 Transpiration of individual plants
6.1.2 Evapotranspiration of the plant stand
6.2 Results
6.2.1 Transpiration of individual plants
6.2.2 Evapotranspiration of the plant stand
6.3 Discussion
6.3.1 Transpiration of individual plants
6.3.2 Evapotranspiration of the plant stand
6.4 Excursus: leaf thickness measurement
6.5 Summary
7 Conclusions
8 Appendix: figures
9 Appendix: tables
10 Bibliography
282 |
Nonequilibrium and semiclassical dynamics in topological phases of quantum matter
Roy, Sthitadhi (05 November 2018)
The discovery of topological phases of quantum matter has brought about a new paradigm in the understanding of rich and exotic phases that fall outside the conventional classification of phases via Landau's theory of broken symmetries. The thesis addresses various aspects of nonequilibrium and semiclassical dynamics in systems hosting such topological phases. While the study of nonequilibrium closed quantum systems is an exciting field in itself, it has gained particular importance in the context of topological systems. Much of this has been fuelled by the immense progress in the experimental realisation of such topological systems with ultracold atoms in optical lattices. As measurements of real-time responses are natural in such experiments, they have served as ideal platforms to study the nonequilibrium responses of topological systems.
The studies presented in this thesis can be brought under the umbrella of two broad questions: first, how nonequilibrium dynamics can be used to characterize topological phases or locate topological critical points; and second, what new topological phases can be realized out of equilibrium.
Generally, non-trivial topology of a system manifests itself via quantised responses at the edges of a system or via appropriate non-local string order parameters which are rather difficult to measure in experiments. Local measurements in the bulk are more conducive to experiments. We address this question by showing that within a non-equilibrium setup obtained via a quantum quench, local bulk observables can show sharp signatures of topological quantum criticality via a non-analyticity in parameter space at the critical point. Although via non-local basis transformations, topological phase transitions can often be mapped onto conventional phase transitions, a remarkable aspect of this result is that within the non-equilibrium setup, the local bulk observables can locate the critical point in the natural basis where the phase transition is topological and not described by a local order parameter.
The next question that the thesis explores is how nonequilibrium and semiclassical dynamics, more precisely wavepacket dynamics, can be used to probe topological phases with an emphasis on Chern insulators in two dimensions. Chern insulators are essentially similar to quantum Hall systems except that they show quantised Hall responses in the absence of external magnetic fields due to intrinsically broken time-reversal symmetry. The Hall conductance in these systems is related to an integer-valued topological invariant characterising the energy bands, known as the Chern number, which is the net flux of Berry curvature through the entire two-dimensional Brillouin zone. The Berry curvature modifies the semiclassical equations of motion describing the dynamics of a wavepacket. Hence, the real-time motion of a wavepacket is used to map out the Berry curvature and thence the topology of the band. Complementary to these bulk responses, spatially local quenches in Chern insulators are also shown as probes for the presence or absence of chiral edge modes.
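As a concrete illustration of how band topology is extracted numerically, the Chern number of a two-band Chern insulator (here the Qi-Wu-Zhang model, chosen as a generic example, not necessarily the model used in the thesis) can be computed from the Berry flux through each plaquette of a discretized Brillouin zone, via the Fukui-Hatsugai-Suzuki link-variable method:

```python
import numpy as np

def h_qwz(kx, ky, m):
    """Two-band Qi-Wu-Zhang Bloch Hamiltonian h(k) = d(k) . sigma."""
    dx, dy = np.sin(kx), np.sin(ky)
    dz = m + np.cos(kx) + np.cos(ky)
    return np.array([[dz, dx - 1j * dy],
                     [dx + 1j * dy, -dz]])

def chern_number(m, n_k=30):
    """Chern number of the lower band on an n_k x n_k Brillouin-zone grid."""
    ks = 2.0 * np.pi * np.arange(n_k) / n_k
    u = np.empty((n_k, n_k, 2), dtype=complex)
    for i, kx in enumerate(ks):
        for j, ky in enumerate(ks):
            _, vecs = np.linalg.eigh(h_qwz(kx, ky, m))
            u[i, j] = vecs[:, 0]          # lower-band eigenvector
    total_flux = 0.0
    for i in range(n_k):
        for j in range(n_k):
            i2, j2 = (i + 1) % n_k, (j + 1) % n_k
            # gauge-invariant Berry flux through one plaquette
            prod = (np.vdot(u[i, j], u[i2, j]) * np.vdot(u[i2, j], u[i2, j2]) *
                    np.vdot(u[i2, j2], u[i, j2]) * np.vdot(u[i, j2], u[i, j]))
            total_flux += np.angle(prod)
    return int(round(total_flux / (2.0 * np.pi)))
```

The plaquette product is gauge invariant, so no phase fixing of the eigenvectors is needed; the model is topological (|C| = 1) for 0 < |m| < 2 and trivial for |m| > 2.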
The idea of semiclassical equations of motion can be extended to the case of a three-dimensional Weyl semimetal. Weyl semimetals are a new class of gapless topological systems in three dimensions, whose elementary fermionic excitations are described by the Weyl equation. Since in cold atom experiments magnetic fields are realized synthetically via phases in complex hoppings, exploring the Hofstadter limit is a natural scenario. When the magnetic field penetrating a two-dimensional system becomes so large that the associated magnetic length becomes comparable to the lattice spacing, the energy spectrum of the system is described by a fractal known as the Hofstadter butterfly. We introduce the Weyl butterfly, a set of fractals which describes the spectrum of a Weyl semimetal subjected to a magnetic field, and we characterize the fractal set of Weyl nodes in the spectrum using wavepacket dynamics to reveal their chirality and location. Moreover, we show that the chiral anomaly, a hallmark of the topological Weyl semimetal, does not remain proportional to the magnetic field at large fields, but rather inherits a fractal structure of linear regimes as a function of the external field.
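The Hofstadter spectrum mentioned above can be sketched with the Harper (magnetic Bloch) Hamiltonian at rational flux p/q per plaquette; the following minimal version is a standard textbook construction, not code from the thesis, and returns the q magnetic subband energies at a given Bloch momentum:

```python
import numpy as np

def harper_spectrum(p, q, kx=0.0, ky=0.0):
    """Eigenvalues of the q x q Harper Hamiltonian at flux p/q per plaquette."""
    H = np.zeros((q, q), dtype=complex)
    for n in range(q):
        # Landau-gauge on-site term from the y-direction hopping
        H[n, n] = 2.0 * np.cos(ky + 2.0 * np.pi * n * p / q)
    for n in range(q - 1):
        H[n, n + 1] = 1.0
        H[n + 1, n] = 1.0
    # magnetic Bloch boundary term closing the q-site magnetic unit cell
    H[q - 1, 0] += np.exp(1j * q * kx)
    H[0, q - 1] += np.exp(-1j * q * kx)
    return np.linalg.eigvalsh(H)

# sweeping the flux p/q and collecting all subbands traces out the butterfly
butterfly = {p: harper_spectrum(p, 7) for p in range(1, 7)}
```

Plotting the collected eigenvalues against p/q for many denominators q reproduces the familiar butterfly; the Weyl butterfly of the thesis generalizes this construction to three dimensions.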
Finally, the thesis addresses the question of novel nonequilibrium topological phases of matter. In the context of the phase structure of nonequilibrium systems, periodically driven systems, also known as Floquet systems, have received a lot of attention. Moreover, the role of disorder has been shown to be rather crucial, as such Floquet systems generically heat up to featureless infinite-temperature states. Also, in the context of topological systems like Chern insulators, disorder is expected to play an interesting role, given its importance in localising the bulk cyclotron orbits in an integer quantum Hall system. With this motivation, the phase diagram of a disordered Chern insulator with a Floquet drive is explored in the thesis. In the model considered there are indeed topological Floquet edge modes which are exclusive to Floquet systems, for instance edge modes in gaps of the quasienergy spectrum around ±π. There are also disorder-induced topological transitions between different Floquet topological phases, driven by a mechanism shown to be of levitation-annihilation type.
283 |
Adaptive Evolution of Long Non-Coding RNAs
Walter Costa, Maria Beatriz (07 December 2018)
The chimpanzee is the closest living relative of modern humans. Although the phenotypic differences between these two species are striking, the difference in their genomic sequences is surprisingly small. Species-specific changes and positive selection have mostly been found in proteins, but ncRNAs are also involved, including the largely uncharacterized class of long ncRNAs (lncRNAs). A notable example is the Human Accelerated Region 1 (HAR1), the region in the human genome with the highest number of human-specific substitutions: 18 in 118 nucleotides. HAR1 is located in a pair of overlapping lncRNAs that are expressed during a crucial period of brain development. Importantly, structural rather than sequence constraints govern the evolution of many ncRNAs. Different methods have been developed for detecting negative selection in ncRNA structures, but none thus far for positive selection.
This motivated us to develop a novel method: the SSS-test (Selection on the Secondary Structure test). It identifies positive selection through an excess of structure-changing mutations. This is done using reports from RNAsnp, a tool that quantifies the structural effect of SNPs on RNA structures, and by applying multiple testing correction to the observations to generate selection scores. Insertions and deletions (indels) are dealt with separately using rank statistics and a background model. The scores for SNPs and indels are combined into a final selection score for each input sequence, indicating the type of selection. We benchmarked the SSS-test with biological and synthetic datasets, obtaining coherent signals. We then applied it to a lncRNA database and obtained a set of 110 human lncRNAs as candidates for having evolved under adaptive evolution in humans.
Although lncRNAs have poor sequence conservation, they have conserved splice sites, which provide ideal guides for orthology annotation. To provide an alternative method for assigning orthology for lncRNAs, we developed the 'buildOrthologs' tool. It uses as input a map of ortholog splice sites created by the SpliceMap tool and applies a greedy algorithm to reconstruct valid ortholog transcripts. We applied this novel approach to create a well-curated catalog of lncRNA orthologs for primate species.
Finally, to understand the structural evolution of ncRNAs in full detail, we added a temporal aspect to the analysis: what was the order of the mutations of a structure since its origin? This is a combinatorial problem, in which the exact mutations between ancestral and extant sequences must be put in order. For this, we developed the 'mutationOrder' tool using dynamic programming. It calculates every possible order of mutations and assigns probabilities to every path. We applied this novel tool to HAR1 as a case study and found that the co-optimal paths, which are equally likely to have occurred, share qualitatively comparable features. In general, they lead to a stabilization of the human structure relative to the ancestral one. We propose that this stabilization was caused by adaptive evolution.
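The path-enumeration idea behind such a tool can be sketched as follows; the step weight (a Metropolis-like factor on a toy "energy") and the mutation encoding are illustrative assumptions, not the actual scoring model of 'mutationOrder':

```python
import itertools
import math

def apply_mutation(seq, mut):
    pos, base = mut
    return seq[:pos] + base + seq[pos + 1:]

def energy(seq):
    # toy stand-in for an RNA folding energy: GC-rich is treated as more stable
    return -(seq.count("G") + seq.count("C"))

def path_probabilities(ancestral, mutations, beta=1.0):
    """Probability of each order in which `mutations` may have occurred."""
    weights = {}
    for order in itertools.permutations(mutations):
        w, seq = 1.0, ancestral
        for mut in order:
            nxt = apply_mutation(seq, mut)
            # Metropolis-like step weight: destabilising steps are penalised
            w *= min(1.0, math.exp(-beta * (energy(nxt) - energy(seq))))
            seq = nxt
        weights[order] = w
    total = sum(weights.values())
    return {order: w / total for order, w in weights.items()}

probs = path_probabilities("AAUA", [(0, "G"), (1, "C"), (3, "G")])
```

Full enumeration is exponential in the number of mutations; the dynamic-programming formulation mentioned in the abstract avoids this by sharing the weights of common intermediate sequences.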
With the new methods we developed and our analysis of primate databases, we gained new knowledge about adaptive evolution of human lncRNAs.
284 |
Large Deviations for Brownian Intersection Measures
Mukherjee, Chiranjib (27 July 2011)
We consider p independent Brownian motions in ℝ^d. We assume that p ≥ 2 and p(d-2) < d. Let ℓ_t denote the intersection measure of the p paths by time t, i.e., the random measure on ℝ^d that assigns to any measurable set A ⊂ ℝ^d the amount of intersection local time the motions spend in A by time t. Earlier results of Chen derived the logarithmic asymptotics of the upper tails of the total mass ℓ_t(ℝ^d) as t → ∞. In this thesis, we derive a large-deviation principle for the normalised intersection measure t^{-p} ℓ_t on the set of positive measures on some open bounded set B ⊂ ℝ^d as t → ∞ before exiting B. The rate function is explicit and gives rigorous meaning, in this asymptotic regime, to the understanding that the intersection measure is the pointwise product of the densities of the normalised occupation time measures of the p motions. Our proof makes the classical Donsker-Varadhan principle for the latter applicable to the intersection measure.
A second version of our principle is proved for the motions observed until the individual exit times from B, conditional on a large total mass in some compact set U ⊂ B. This extends earlier studies on the intersection measure by König and Mörters.
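Schematically, and hedging on the precise normalisations and conditions (the exact statement is in the thesis), a large-deviation principle with the "product of densities" interpretation would read:

```latex
% Informal sketch only; conditions, domains and normalisations are indicative.
\mathbb{P}\bigl(t^{-p}\ell_t \approx \mu\bigr) \;\approx\; \exp\bigl(-t\, I(\mu)\bigr),
\qquad t \to \infty,
```
with a rate function built from the Donsker-Varadhan (Dirichlet) energies of the p motions,
```latex
I(\mu) \;=\; \inf\Bigl\{\,\sum_{i=1}^{p} \tfrac12 \|\nabla \psi_i\|_2^2 \;:\;
\frac{\mathrm{d}\mu}{\mathrm{d}x} \;=\; \prod_{i=1}^{p} \psi_i^2,\quad
\|\psi_i\|_2 = 1 \Bigr\},
```
so that each ψ_i² plays the role of the density of one normalised occupation time measure.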
285 |
Perturbations in Boolean Networks
Ghanbarnejad, Fakhteh (14 September 2012)
Boolean networks are coarse-grained models of the regulatory dynamics that controls the survival and proliferation of a living cell. The dynamics is time- and state-discrete. This Boolean abstraction assumes that small differences in concentration levels are irrelevant and that the binary distinction between a low and a high concentration of each biomolecule is sufficient to capture the dynamics. In this work, we briefly introduce gene regulatory models; with the advent of system-specific Boolean models, new conceptual questions and analytical and numerical challenges arise. In particular, the response of the system to external intervention presents a novel area of research.
Thus we first investigate how to quantify a node's individual impact on the dynamics in more detail than by averaging over all eligible perturbations. Since each node represents a specific biochemical entity, it is itself the subject of our interest. The predicted dynamical impacts of nodes may then be compared to empirical data from biological experiments.
We then develop a hybrid model that incorporates both continuous and discrete random Boolean networks to compare the response of the dynamics to small perturbations as well as to flip perturbations in different regimes. We show that in highly sensitive Boolean ensembles, chaotic behaviour disappears under small continuous fluctuations, in contrast to flips.
Finally, we discuss the role of distributed delays in stabilizing Boolean dynamics against noise. These studies are expected to trigger additional experiments and to lead to improved models of gene regulatory dynamics.
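A minimal random Boolean network with a flip perturbation, in the spirit of the damage-spreading analyses above (the model details here are generic Kauffman-style choices, not the specific ensembles of the thesis):

```python
import itertools
import random

def random_boolean_network(n, k, seed=0):
    """Kauffman-style network: each node reads k random inputs through a random table."""
    rng = random.Random(seed)
    inputs = [rng.sample(range(n), k) for _ in range(n)]
    tables = [{bits: rng.randint(0, 1) for bits in itertools.product((0, 1), repeat=k)}
              for _ in range(n)]
    return inputs, tables

def step(state, inputs, tables):
    """Synchronous update of all nodes."""
    return tuple(tables[i][tuple(state[j] for j in inputs[i])]
                 for i in range(len(state)))

def damage(state, flip_node, steps, inputs, tables):
    """Hamming distance between the unperturbed and the flipped trajectory."""
    perturbed = list(state)
    perturbed[flip_node] ^= 1
    a, b = tuple(state), tuple(perturbed)
    for _ in range(steps):
        a, b = step(a, inputs, tables), step(b, inputs, tables)
    return sum(x != y for x, y in zip(a, b))

inputs, tables = random_boolean_network(n=20, k=2, seed=1)
state = tuple(random.Random(2).randint(0, 1) for _ in range(20))
d = damage(state, flip_node=0, steps=10, inputs=inputs, tables=tables)
```

Averaging such damage over many initial states and networks distinguishes the ordered regime (damage dies out) from the chaotic one (damage spreads), which is the classic flip-perturbation diagnostic referenced above.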
286 |
False Discovery Rates, Higher Criticism and Related Methods in High-Dimensional Multiple Testing
Klaus, Bernd (09 January 2013)
The technical advancements in genomics, functional magnetic resonance imaging and other areas of scientific research seen in the last two decades have led to a burst of interest in multiple testing procedures. A driving factor for innovations in the field of multiple testing has been the problem of large-scale simultaneous testing. There, the goal is to uncover lower-dimensional signals from high-dimensional data. Mathematically speaking, this means that the dimension d is usually in the thousands while the sample size n is relatively small (at most around 100, often due to cost constraints), a characteristic commonly abbreviated as d >> n.
In my thesis I look at several multiple testing problems and corresponding procedures from a false discovery rate (FDR) perspective, a methodology originally introduced in a seminal paper by Benjamini and Hochberg (1995). FDR analysis starts by fitting a two-component mixture model to the observed test statistics. This mixture consists of a null model density and an alternative component density from which the interesting cases are assumed to be drawn.
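The two-component mixture translates directly into the local false discovery rate fdr(z) = π0 f0(z) / f(z); a sketch with a standard-normal null and a shifted-normal alternative (the densities and parameter values are illustrative, not the estimates used in the thesis):

```python
import math

def dnorm(z, mu=0.0, sd=1.0):
    """Normal density."""
    return math.exp(-0.5 * ((z - mu) / sd) ** 2) / (sd * math.sqrt(2.0 * math.pi))

def local_fdr(z, pi0=0.9, mu_alt=3.0):
    """fdr(z) = pi0 * f0(z) / f(z) for the mixture f = pi0*f0 + (1 - pi0)*f1."""
    f0 = dnorm(z)
    f = pi0 * f0 + (1.0 - pi0) * dnorm(z, mu=mu_alt)
    return min(1.0, pi0 * f0 / f)
```

Near z = 0 the fdr is close to 1 (the observation is almost surely null), while deep in the alternative tail it drops toward 0; in practice π0, f0 and f must be estimated from the data, which is precisely what the estimation procedures below address.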
In the thesis I propose a new approach, called log-FDR, to the estimation of false discovery rates. Specifically, a new approach to truncated maximum likelihood estimation yields accurate null model estimates. This is complemented by constrained maximum likelihood estimation for the alternative density using log-concave density estimation.
A recent competitor to the FDR is the method of Higher Criticism. It has been strongly advocated in the context of variable selection in classification, which is deeply linked to multiple comparisons. Hence, I also look at variable selection in class prediction, which can be viewed as a special signal identification problem. Both FDR methods and Higher Criticism can be highly useful for signal identification. This is discussed in the context of variable selection in linear discriminant analysis (LDA), a popular classification method.
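The Higher Criticism statistic of Donoho and Jin compares the ordered p-values against the uniform distribution; a direct implementation of the standard definition (the α0 cutoff of 0.5 is a common convention):

```python
import math

def higher_criticism(pvals, alpha0=0.5):
    """HC = max over the smallest alpha0*n p-values of the standardised exceedance."""
    n = len(pvals)
    hc = float("-inf")
    for i, p in enumerate(sorted(pvals), start=1):
        if i > alpha0 * n:
            break
        p = min(max(p, 1e-12), 1.0 - 1e-12)  # guard the denominator
        hc = max(hc, math.sqrt(n) * (i / n - p) / math.sqrt(p * (1.0 - p)))
    return hc
```

Sparse strong signals inflate HC far beyond its value under the global null, which is what makes it attractive for threshold selection in sparse classification problems.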
FDR methods are not only useful for multiple testing situations in the strict sense, they are also applicable to related problems. I look at several kinds of applications of FDR in linear classification. I present and extend statistical techniques related to effect size estimation using false discovery rates and show how to use these for variable selection. The resulting fdr-effect method proposed for effect size estimation is shown to work as well as competing approaches while being conceptually simple and computationally inexpensive. Additionally, I apply the fdr-effect method to variable selection by minimizing the misclassification rate and show that it works very well and leads to compact and interpretable feature sets.
287 |
The Leray-Serre spectral sequence in Morse homology on Hilbert manifolds and in Floer homology on cotangent bundles
Schneider, Matti (30 January 2013)
The Leray-Serre spectral sequence is a fundamental tool for studying the singular homology of a fibration E → B with typical fiber F. It expresses H_*(E) in terms of H_*(B) and H_*(F). One of the classic examples of a fibration is the free loop space fibration, where the typical fiber is the based loop space ΩB.
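In standard notation (with hypotheses on the fibration and coefficients omitted here), the homology Leray-Serre spectral sequence takes the form:

```latex
% Homology Leray-Serre spectral sequence of a fibration F -> E -> B.
E^{2}_{p,q} \;\cong\; H_{p}\bigl(B;\, H_{q}(F)\bigr)
\;\Longrightarrow\; H_{p+q}(E).
```

For the free loop space fibration, F is the based loop space and E the free loop space of B.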
The first part of this thesis constructs the Leray-Serre spectral sequence in Morse homology on Hilbert manifolds under certain natural conditions, valid for instance for the free loop space fibration if the base is a closed manifold. We extend the approach of Hutchings, which is restricted to closed manifolds. The spectral sequence might provide answers to questions involving closed geodesics, in particular to spectral invariants for the geodesic energy functional. Furthermore we discuss another example, the free loop space of a compact G-principal bundle P → Q, where G is a connected compact Lie group. Here we encounter an additional difficulty, namely that the base manifold of the fiber bundle is infinite-dimensional. Furthermore, as H_*(ΛP) = HF_*(T*P) and H_*(ΛQ) = HF_*(T*Q), where Λ denotes the free loop space and HF_* Floer homology for periodic orbits, the spectral sequence for P → Q might provide a stepping stone towards a similar spectral sequence defined in purely Floer-theoretic terms, possibly even for more general symplectic quotients.
Hutchings’ approach to the Leray-Serre spectral sequence in Morse homology couples a fiberwise negative gradient flow with a lifted negative gradient flow on the base. We study the Morse homology of a vector field that is not of gradient type. The central issue to be resolved in the Hilbert manifold setting is the compactness of the involved moduli spaces. We overcome this difficulty by utilizing the special structure of the vector field. Compactness up to breaking of the corresponding moduli spaces is proved with the help of Gronwall-type estimates. Furthermore, we point out and close gaps in the standard literature; see Section 1.4 for an overview.
In the second part of this thesis we introduce a Lagrangian Floer homology on cotangent bundles with varying Lagrangian boundary conditions. The corresponding complex allows us to obtain the Leray-Serre spectral sequence in Floer homology on the cotangent bundle of a closed manifold Q for Hamiltonians quadratic in the fiber directions. This corresponds to the free loop space fibration of a closed manifold from the first part. We expect applications to spectral invariants for the Hamiltonian action functional.
The main idea is to study pairs of Morse trajectories on Q and Floer strips on T*Q which are non-trivially coupled by moving Lagrangian boundary conditions. Again, compactness of the moduli spaces involved forms the central issue. A modification of the compactness proof of Abbondandolo-Schwarz along the lines of the Morse-theoretic argument from the first part of the thesis can be utilized.
288 |
Unterstützung von Integrationsdienstleistungen durch abstrakte Integrationsmuster [Supporting integration services with abstract integration patterns]
Pero, Martin (19 February 2013)
Integration is an ongoing task in business information systems. The use of different human and machine task carriers leads to recurring integration problems, which are primarily solved by external service providers. The central problem addressed in this work is that solution knowledge discussed in the research literature exists in the form of patterns but does not find its way into practice. To investigate this problem, a qualitative empirical study was conducted which, for the first time in the German-speaking area, analyses cause-effect relationships and decision mechanisms in integration projects. The qualitative survey shows that too little attention has so far been paid to the service character of integration, and that integration patterns are not used because the level of abstraction of the solution knowledge preserved in them does not match the level of abstraction of the problems. The thesis therefore first defines a service model of integration, grounded in the empirical study. Then, based on a property-based definition of integration patterns, a basic set of patterns is extracted from the literature and further abstracted. Classification and generalisation serve as the abstraction principles. Abstract integration patterns can then enter a service model as resources.
For the classification, an extensible and flexible classification method was chosen: faceted classification. It allows further facets to be added at any time. The assignment of a pattern only has to be disjoint within a single facet; the pattern may additionally be assigned to other facets. The facets used stem from both the problem and the solution domain, and each facet is based on a comprehensive analysis. The classification forms the starting point for renewed generalisation: patterns with similar or identical facet values are collected and examined for a common underlying concept. This generalisation was carried out exemplarily for two groups of patterns, identifying the two abstract integration patterns "additional access point" and "mediator".
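A faceted classification of integration patterns can be sketched as a simple lookup structure; the facet names and values below are invented for illustration and are not the facets derived in the thesis:

```python
# Hypothetical facets and values -- purely illustrative.
PATTERNS = {
    "Additional access point": {"problem": "missing interface", "topology": "point-to-point"},
    "Mediator":                {"problem": "format mismatch",   "topology": "hub"},
    "Message router":          {"problem": "addressing",        "topology": "hub"},
}

def patterns_with(facet, value):
    """All patterns whose given facet carries the given value."""
    return sorted(name for name, facets in PATTERNS.items()
                  if facets.get(facet) == value)
```

Because assignment only has to be disjoint within one facet, the same pattern can legitimately turn up in queries over several different facets, which is exactly what makes the scheme extensible.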
The developed concepts fed into an extensive evaluation, carried out using a concrete service in the area of e-procurement integration. The independence of the assessment was ensured by the fact that neither the service provider nor the customer had participated in the preceding empirical study. The proposed solution was implemented in a laboratory environment, in which the complete integration scenario was realistically reproduced on the basis of a virtualisation environment. In addition to instances of the customer systems with identical version and patch levels, data from production systems were used. The integration tools were likewise set up in the laboratory environment.
Abstract integration patterns improve service delivery. On the customer side, they lead to improvements in the integration and enterprise architecture and open up further potential for improvement. For the service provider, beyond a changed service model, they above all make it possible to turn one-off offers into a configurable standard service offering. In addition, improved resource utilisation (above all of human resources) can be demonstrated on the basis of the changed service model.
The thesis thus develops an approach that remedies the empirically documented abstraction problems and improves the applicability of existing solution knowledge. At the same time, the service model makes the underlying cause-effect mechanisms and decision contexts easier to capture, to explain and, above all, to plan.
289 |
Numerische Behandlung zeitabhängiger akustischer Streuung im Außen- und Freiraum [Numerical treatment of time-dependent acoustic scattering in exterior domains and in free space]
Gruhne, Volker (17 April 2013)
Linear hyperbolic partial differential equations in homogeneous media, for example the wave equation describing the propagation and scattering of acoustic waves, can be formulated in the time domain using boundary integral equations. In the first main part of this thesis we present an efficient way to implement numerical approximations of such equations when Huygens' principle does not hold.
We use the convolution quadrature method for the time discretization and a Galerkin boundary element method for the spatial discretization. The convolution quadrature method entails a discrete convolution of the quadrature weights with the boundary density. When Huygens' principle holds, the weights converge exponentially to zero for sufficiently large indices. In the opposite case, that is, in even space dimensions or in the presence of damping effects, the weights do not vanish, which complicates an efficient numerical treatment.
In the first main part of this thesis we show that the kernels of the convolution weights approximate, in a certain sense, the time-domain fundamental solution, and that this remains true when both are differentiated with respect to the spatial variables. Since the fundamental solution is moreover smooth for sufficiently large times, for instance after the wave front has passed, we conclude the same for the convolution weights, which can therefore be interpolated with high accuracy using few interpolation points. Furthermore, we point out that, to save additional memory, particularly in long-time experiments, the fast convolution algorithm developed by Schädle et al. can be employed. We discuss an efficient implementation of the problem and show results of a numerical long-time experiment.
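In symbols, and stated only schematically (this is the generic Lubich-type construction; the precise operators and normalisations are in the thesis), convolution quadrature generates its weights from the Laplace-domain operator K and the generating function δ(ζ) of the underlying multistep method:

```latex
% Schematic form of convolution quadrature; details indicative only.
K\!\left(\frac{\delta(\zeta)}{\Delta t}\right)
\;=\; \sum_{n \ge 0} \omega_n(\Delta t)\, \zeta^{n},
\qquad
\int_0^{t_n} k(t_n - s)\,\varphi(s)\,\mathrm{d}s
\;\approx\; \sum_{j=0}^{n} \omega_{n-j}(\Delta t)\,\varphi(t_j).
```

The discrete convolution on the right is exactly the product of weights and boundary density referred to above; the smoothness of the weight kernels is what makes their interpolation with few points feasible.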
The second main part of this thesis deals with transmission problems for the wave equation in free space. Such problems are usually treated by splitting the free space, if necessary by introducing an artificial boundary, into an unbounded exterior domain and a bounded interior domain, with the aim of concentrating any material inhomogeneities or nonlinearities entirely in the interior. We present a solution strategy that allows the resulting subproblems to be treated as independently of one another as possible. The subproblems are coupled via transmission conditions prescribed on their common boundary.
We discuss a coupling scheme that employs different discretizations for the interior and the exterior domain. In particular, we use an explicit scheme in the interior, in contrast to the exterior, where we use a convolution quadrature method based on a multistep method. The coupling follows the strategy of Johnson and Nédélec, which employs the direct boundary integral method and leads to a nonsymmetric system. We analyse the stability and convergence of the discrete problem and underline the applicability of the coupling algorithm with numerical experiments.
290 |
Unsupervised Natural Language Processing for Knowledge Extraction from Domain-specific Textual Resources
Hänig, Christian (17 April 2013)
This thesis aims to develop a Relation Extraction algorithm to extract knowledge from automotive data. While most approaches to Relation Extraction are evaluated only on newspaper data dealing with general relations from the business world, their applicability to other data sets is not well studied.
Part I of this thesis deals with theoretical foundations of Information Extraction algorithms. Text mining cannot be seen as the simple application of data mining methods to textual data. Instead, sophisticated methods have to be employed to accurately extract knowledge from text which then can be mined using statistical methods from the field of data mining. Information Extraction itself can be divided into two subtasks: Entity Detection and Relation Extraction. The detection of entities is very domain-dependent due to terminology, abbreviations and general language use within the given domain. Thus, this task has to be solved for each domain employing thesauri or another type of lexicon. Supervised approaches to Named Entity Recognition will not achieve reasonable results unless they have been trained for the given type of data.
The task of Relation Extraction can basically be approached by pattern-based and kernel-based algorithms. The latter achieve state-of-the-art results on newspaper data and point out the importance of linguistic features. In order to analyze relations contained in textual data, syntactic features like part-of-speech tags and syntactic parses are essential. Chapter 4 presents machine learning approaches and linguistic foundations essential for the syntactic annotation of textual data and for Relation Extraction. Chapter 6 analyzes the performance of state-of-the-art algorithms for POS tagging, syntactic parsing and Relation Extraction on automotive data. The findings are: supervised methods trained on newspaper corpora do not achieve accurate results when applied to automotive data. This has several reasons. Besides low-quality text, the nature of automotive relations poses the main challenge. Automotive relation types of interest (e.g. component – symptom) are rather arbitrary compared to well-studied relation types like is-a or is-head-of. In order to achieve acceptable results, algorithms have to be trained directly on this kind of data. As the manual annotation of data for each language and data type is too costly and inflexible, unsupervised methods are the ones to rely on.
Part II deals with the development of dedicated algorithms for all three essential tasks. Unsupervised POS tagging (Chapter 7) is a well-studied task, and algorithms achieving accurate tagging exist. However, none of them disambiguate high-frequency words; only out-of-lexicon words are disambiguated. Most high-frequency words bear syntactic information, and thus it is very important to differentiate between their different functions. Domain languages in particular contain ambiguous, high-frequency words bearing semantic information (e.g. pump). In order to improve POS tagging, an algorithm for disambiguation is developed and used to enhance an existing state-of-the-art tagger. This approach is based on context clustering, which is used to detect a word type's different syntactic functions. Evaluation shows that tagging accuracy is raised significantly.
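The context-clustering idea can be sketched with position-tagged neighbour vectors and a greedy single-link grouping; the similarity measure, window size and threshold are illustrative choices, not the configuration of the tagger described here:

```python
import math
from collections import Counter

def context_vector(tokens, i, window=2):
    """Sparse vector of position-tagged neighbours of the occurrence at index i."""
    ctx = Counter()
    for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
        if j != i:
            ctx[(j - i, tokens[j])] += 1
    return ctx

def cosine(a, b):
    num = sum(v * b[k] for k, v in a.items() if k in b)
    den = (math.sqrt(sum(v * v for v in a.values())) *
           math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def cluster_contexts(vectors, threshold=0.2):
    """Greedy single-link clustering: each cluster is one putative syntactic use."""
    clusters = []
    for v in vectors:
        for cluster in clusters:
            if any(cosine(v, member) >= threshold for member in cluster):
                cluster.append(v)
                break
        else:
            clusters.append([v])
    return clusters

sentences = [["the", "pump", "was", "broken"],
             ["the", "pump", "was", "leaking"],
             ["we", "pump", "the", "water"]]
vectors = [context_vector(s, s.index("pump")) for s in sentences]
clusters = cluster_contexts(vectors)
```

Here the two noun-like occurrences of "pump" share their left determiner and right verb and fall into one cluster, while the verb-like occurrence forms its own; each cluster can then be tagged as a distinct syntactic function of the word type.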
An approach to unsupervised syntactic parsing (Chapter 8) is developed to satisfy the requirements of Relation Extraction. These requirements include high-precision results on nominal and prepositional phrases, as they contain the entities relevant for Relation Extraction. Furthermore, accurate shallow parsing is more desirable than deep binary parsing, as it facilitates Relation Extraction more. Endocentric and exocentric constructions can be distinguished, which improves proper phrase labeling. unsuParse is based on preferred positions of word types within phrases to detect phrase candidates. Iterating the detection of simple phrases successively induces deeper structures. The proposed algorithm fulfills all demanded criteria and achieves competitive results on standard evaluation setups.
Syntactic Relation Extraction (Chapter 9) is an approach exploiting syntactic statistics and text characteristics to extract relations between previously annotated entities. The approach is based on entity distributions given in a corpus and thus provides a possibility to extend text mining processes to new data in an unsupervised manner. Evaluation on two different languages and two different text types of the automotive domain shows that it achieves accurate results on repair order data. Results are less accurate on internet data, but the task of sentiment analysis and extraction of the opinion target can be mastered. Thus, the incorporation of internet data is possible and important, as it provides useful insight into the customer's thoughts.
To conclude, this thesis presents a complete unsupervised workflow for Relation Extraction – except for the highly domain-dependent Entity Detection task – improving performance of each of the involved subtasks compared to state-of-the-art approaches. Furthermore, this work applies Natural Language Processing methods and Relation Extraction approaches to real world data unveiling challenges that do not occur in high quality newspaper corpora.