1

Ein numerisches Modell zur lokalen Nebelvorhersage. Teil 1: Parametrisierte Mikrophysik und Strahlung [A numerical model for local fog forecasting. Part 1: Parameterized microphysics and radiation]

Trautmann, Thomas, Bott, Andreas 03 January 2017 (has links) (PDF)
Die Modellkomponenten für parametrisierte Wolkenphysik, Strahlung und Sichtweitenbestimmung im Nebelvorhersagemodell PAFOG, das kürzlich in Zusammenarbeit mit dem Deutschen Wetterdienst als lokales Vorhersagesystem entwickelt wurde und für die Kurzfristprognose eingesetzt werden kann, werden vorgestellt. Die Modellphilosophie orientiert sich an einer mathematisch-physikalisch fundierten Beschreibung der beteiligten meteorologischen Prozesse, deren Einzelheiten in dieser Arbeit diskutiert werden. / This paper presents the model components for parameterized cloud physics, radiation and visibility determination as implemented in the local forecast model PAFOG. PAFOG has recently been developed in cooperation with the German Weather Service (DWD) and can be employed for short-range forecasts of radiation fog and visibility. The philosophy of the model strongly emphasizes a mathematically and physically based formulation of the involved meteorological processes, the details of which are discussed in this paper.
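For illustration, visibility can be diagnosed from the predicted cloud liquid water content with an empirical extinction relation and the Koschmieder formula, as sketched below. This is a hedged example using the widely cited Kunkel (1984) parameterization; the abstract does not specify PAFOG's actual visibility scheme, which may differ.

```python
# Hypothetical sketch of a visibility diagnostic from liquid water content (LWC).
# PAFOG's actual parameterization is not given in the abstract; the Kunkel (1984)
# extinction relation and the Koschmieder formula are used here only as illustration.

import math

def extinction_from_lwc(lwc_g_m3: float) -> float:
    """Empirical extinction coefficient (1/km) from liquid water content (g/m^3)."""
    return 144.7 * lwc_g_m3 ** 0.88  # Kunkel (1984)-type relation

def visibility_km(lwc_g_m3: float, contrast_threshold: float = 0.02) -> float:
    """Koschmieder visibility (km) for a given liquid water content."""
    beta = extinction_from_lwc(lwc_g_m3)
    if beta <= 0.0:
        return float("inf")
    return -math.log(contrast_threshold) / beta  # -ln(0.02) ~ 3.912

if __name__ == "__main__":
    for lwc in (0.01, 0.05, 0.2):  # g/m^3, typical fog values
        print(f"LWC={lwc:4.2f} g/m^3 -> visibility ~ {visibility_km(lwc):.2f} km")
```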
2

Ein numerisches Modell zur lokalen Nebelvorhersage. Teil 2: Behandlung von Erdboden und Vegetation [A numerical model for local fog forecasting. Part 2: Treatment of soil and vegetation]

Trautmann, Thomas, Bott, Andreas 03 January 2017 (has links) (PDF)
Die im Nebelvorhersagemodell PAFOG enthaltenen Modellkomponenten für parametrisierte Wolkenphysik, Strahlung und Sichtweitenbestimmung wurden durch Module zur Beschreibung der Interaktion mit dem Boden und der Vegetation ergänzt. Das auf diese Weise komplettierte Modellsystem PAFOG-V kann dazu verwendet werden, das lokale Auftreten von Strahlungsnebel und niedriger stratiformer Bewölkung vorherzusagen. / The paper presents an extension of the model components for parameterized cloud physics, radiation and visibility determination as implemented in the local forecast model PAFOG to include the interaction with the soil and the vegetation. The resulting forecast system PAFOG-V can be used to predict local occurrences of radiation fog and low-level stratiform clouds.
3

Regionalization of an event based Nash cascade model for flood predictions in ungauged basins

Patil, Sachin Ramesh, January 2008 (has links)
Also published as a dissertation, University of Stuttgart, 2008.
4

Interannual and interdecadal oscillations in hydrological variables: sources and modeling of the persistence in the Elbe River Basin

Markovic, Danijela. Unknown Date (has links)
Dissertation, University of Kassel, 2006.
5

XTREND: A computer program for estimating trends in the occurrence rate of extreme weather and climate events

Mudelsee, Manfred 05 January 2017 (has links) (PDF)
XTREND consists of the following methodological parts: time interval extraction (Part 1) to analyse different parts of a time series; extreme event detection (Part 2) with robust smoothing; magnitude classification (Part 3) by hand; occurrence rate estimation (Part 4) with kernel functions; and bootstrap simulations (Part 5) to estimate confidence bands around the occurrence rate. You work interactively with XTREND (parameter adjustment, calculation, graphics) to acquire more intuition for your data. Although the computing time is usually acceptable (less than a few minutes) for typical data sizes (fewer than, say, 1000 points) on modern machines, parameters should be adjusted carefully to avoid spurious results or, conversely, excessively long computing times. This report helps you to achieve that. It explains the statistical concepts used, but generally with little detail; you should consult the given references (which include some textbooks) for a deeper understanding.
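A minimal sketch of the kernel occurrence-rate estimation (Part 4) and the bootstrap confidence band (Part 5) might look as follows. Kernel shape, bandwidth selection and boundary treatment in XTREND itself may differ; this is a generic illustration, not the program's actual code.

```python
# Hedged sketch: Gaussian-kernel occurrence-rate estimate for extreme-event dates,
# with a percentile bootstrap confidence band. Bandwidth and kernel are illustrative.

import numpy as np

def occurrence_rate(event_times, eval_times, bandwidth):
    """Gaussian-kernel estimate of the occurrence rate (events per time unit)."""
    t = np.asarray(eval_times, dtype=float)[:, None]
    T = np.asarray(event_times, dtype=float)[None, :]
    kernels = np.exp(-0.5 * ((t - T) / bandwidth) ** 2) / (bandwidth * np.sqrt(2 * np.pi))
    return kernels.sum(axis=1)

def bootstrap_band(event_times, eval_times, bandwidth, n_boot=2000, alpha=0.1, seed=0):
    """Percentile confidence band from bootstrap resampling of the event times."""
    rng = np.random.default_rng(seed)
    events = np.asarray(event_times, dtype=float)
    rates = np.empty((n_boot, len(eval_times)))
    for b in range(n_boot):
        resample = rng.choice(events, size=len(events), replace=True)
        rates[b] = occurrence_rate(resample, eval_times, bandwidth)
    lower = np.percentile(rates, 100 * alpha / 2, axis=0)
    upper = np.percentile(rates, 100 * (1 - alpha / 2), axis=0)
    return lower, upper

# Example: 30 synthetic extreme-event dates between 1900 and 2000
rng = np.random.default_rng(1)
events = np.sort(rng.uniform(1900, 2000, size=30))
grid = np.linspace(1900, 2000, 101)
rate = occurrence_rate(events, grid, bandwidth=10.0)
lo, hi = bootstrap_band(events, grid, bandwidth=10.0)
```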
6

Improving statistical seismicity models

Bach, Christoph January 2013 (has links)
Several mechanisms are proposed to be part of the earthquake triggering process, including static stress interactions and dynamic stress transfer. Significant differences between these mechanisms are expected particularly in the spatial distribution of aftershocks. However, testing the different hypotheses is challenging because it requires the consideration of the large uncertainties involved in stress calculations as well as the appropriate treatment of secondary aftershock triggering, which is related to stress changes induced by smaller pre- and aftershocks. In order to evaluate the forecast capability of different mechanisms, I take the effect of smaller-magnitude earthquakes into account by using the epidemic type aftershock sequence (ETAS) model, in which the spatial probability distribution of direct aftershocks, if available, is correlated to alternative source information and mechanisms. Surface shaking, rupture geometry, and slip distributions are tested. As an approximation of the shaking level, ShakeMaps are used, which are available in near real-time after a mainshock and thus could be used for first-order forecasts of the spatial aftershock distribution. Alternatively, the use of empirical decay laws related to minimum fault distance is tested, as well as Coulomb stress change calculations based on published and random slip models. For comparison, the likelihood values of the different model combinations are analyzed in the case of several well-known aftershock sequences (1992 Landers, 1999 Hector Mine, 2004 Parkfield). The tests show that the fault geometry is the most valuable information for improving aftershock forecasts. Furthermore, they reveal that static stress maps can additionally improve the forecasts of off-fault aftershock locations, while the integration of ground shaking data could not improve the results significantly. In the second part of this work, I focus on a procedure to test the information content of inverted slip models. This makes it possible to quantify the information gain when this kind of data is included in aftershock forecasts. For this purpose, the ETAS model based on static stress changes, which is introduced in part one, is applied. The forecast ability of the models is systematically tested for several earthquake sequences and compared to models using random slip distributions. The influence of subfault resolution and of segment strike and dip is tested. Some of the tested slip models perform very well; in those cases almost no random slip models are found to perform better. In contrast, for some of the published slip models, almost all random slip models perform better than the published model. Choosing a different subfault resolution hardly influences the result, as long as the general slip pattern is still reproducible. Different strike and dip values, however, strongly influence the results, depending on the standard deviation used when randomly selecting the strike and dip values. / Verschiedene Mechanismen werden für das Triggern von Erdbeben verantwortlich gemacht, darunter statische Spannungsänderungen und dynamischer Spannungstransfer. Deutliche Unterschiede zwischen diesen Mechanismen werden insbesondere in der räumlichen Nachbebenverteilung erwartet. Es ist allerdings schwierig, diese Hypothesen zu überprüfen, da die großen Unsicherheiten der Spannungsberechnungen berücksichtigt werden müssen, ebenso wie das durch lokale sekundäre Spannungsänderungen hervorgerufene Initiieren von sekundären Nachbeben. 
Um die Vorhersagekraft verschiedener Mechanismen zu beurteilen, habe ich die Effekte von Erdbeben kleiner Magnitude durch Benutzen des "epidemic type aftershock sequence" (ETAS) Modells berücksichtigt. Dabei habe ich die Verteilung direkter Nachbeben, wenn verfügbar, mit alternativen Herdinformationen korreliert. Bodenbewegung, Bruchgeometrie und Slipmodelle werden getestet. Als Approximation der Bodenbewegung werden ShakeMaps benutzt. Diese sind nach großen Erdbeben nahezu in Echtzeit verfügbar und können daher für vorläufige Vorhersagen der räumlichen Nachbebenverteilung benutzt werden. Alternativ können empirische Beziehungen als Funktion der minimalen Distanz zur Herdfläche benutzt werden oder Coulomb-Spannungsänderungen basierend auf publizierten oder zufälligen Slipmodellen. Zum Vergleich werden die Likelihood-Werte der Hybridmodelle im Falle mehrerer bekannter Nachbebensequenzen analysiert (1992 Landers, 1999 Hector Mine, 2004 Parkfield). Die Tests zeigen, dass die Herdgeometrie die wichtigste Zusatzinformation zur Verbesserung der Nachbebenvorhersage ist. Des Weiteren können statische Spannungsänderungen besonders die Vorhersage von Nachbeben in größerer Entfernung zur Bruchfläche verbessern, wohingegen die Einbeziehung von Bodenbewegungskarten die Ergebnisse nicht wesentlich verbessern konnte. Im zweiten Teil meiner Arbeit führe ich ein neues Verfahren zur Untersuchung des Informationsgehaltes von invertierten Slipmodellen ein. Dies ermöglicht die Quantifizierung des Informationsgewinns, der durch Einbeziehung dieser Daten in Nachbebenvorhersagen entsteht. Hierbei wird das im ersten Teil eingeführte erweiterte ETAS-Modell benutzt, welches statische Spannungsänderungen zur Vorhersage der räumlichen Nachbebenverteilung benutzt. Die Vorhersagekraft der Modelle wird systematisch anhand mehrerer Erdbebensequenzen untersucht und mit Modellen basierend auf zufälligen Slipverteilungen verglichen. Der Einfluss der Veränderung der Auflösung der Slipmodelle sowie der Streich- und Fallwinkel der Herdsegmente wird untersucht. Einige der betrachteten Slipmodelle korrelieren sehr gut; in diesen Fällen werden kaum zufällige Slipmodelle gefunden, welche die Nachbebenverteilung besser erklären. Dahingegen korrelieren bei einigen Beispielen nahezu alle zufälligen Slipmodelle besser als das publizierte Modell. Das Verändern der Auflösung der Bewegungsmodelle hat kaum Einfluss auf die Ergebnisse, solange die allgemeinen Slipmuster noch reproduzierbar sind, d.h. ein bis zwei größere Slipmaxima pro Segment. Dahingegen beeinflusst eine zufallsbasierte Änderung der Streich- und Fallwinkel der Segmente die Resultate stark, je nachdem welche Standardabweichung gewählt wurde.
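For orientation, a minimal sketch of an ETAS-type conditional intensity is given below, assuming a constant background rate, an Omori-type temporal decay and a generic isotropic spatial kernel. The parameter values are purely illustrative, and the thesis's central modification (replacing the spatial term with ShakeMap-, fault-distance- or Coulomb-stress-based information) is not reproduced.

```python
# Hedged sketch of an ETAS-type conditional intensity. Parameters and the isotropic
# spatial kernel are illustrative only and do not represent the thesis's model variants.

import numpy as np

def etas_intensity(t, x, y, catalog, mu=1e-4, K=0.01, alpha=1.0, c=0.01, p=1.1,
                   d=1.0, q=1.5, m0=3.0):
    """Conditional intensity lambda(t, x, y) given past events.

    catalog: array of rows (t_i, x_i, y_i, m_i) with t_i < t (times in days, coords in km).
    """
    rate = mu  # constant background rate (per unit time and area)
    for t_i, x_i, y_i, m_i in catalog:
        if t_i >= t:
            continue
        productivity = K * np.exp(alpha * (m_i - m0))      # magnitude-dependent productivity
        omori = (t - t_i + c) ** (-p)                      # Omori-type temporal decay
        r2 = (x - x_i) ** 2 + (y - y_i) ** 2
        spatial = (q - 1) / (np.pi * d ** 2) * (1 + r2 / d ** 2) ** (-q)  # isotropic kernel
        rate += productivity * omori * spatial
    return rate

# Example: intensity one day after an M6 mainshock at the origin
catalog = np.array([[0.0, 0.0, 0.0, 6.0]])
print(etas_intensity(t=1.0, x=2.0, y=0.0, catalog=catalog))
```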
7

Extreme Weather: Mitigation Enhancement by Better Forecasts or by Better Knowledge on Event Frequencies?

Tetzlaff, G. 27 July 2017 (has links)
The quality of forecasts can be measured with a wide variety of indices and formulae. All of these approaches rely basically on the relation between the numbers of correct forecasts, wrong forecasts, false alarms and rejected cases. In the case of extreme events, damage is the major topic. All extreme events are, by definition, more or less rare. In many applications the frequency of an extreme event is taken to be one event per 100 years; depending on the application, other event frequencies are in use. The mitigation of damage mainly relies on rules for the design of structures such as buildings. In principle, their proper application would allow damage to occur only if a meteorological event exceeds a certain predefined threshold value. In practice, the threshold proves to be more of a soft shoulder, and damage is already observed for events somewhat smaller than the nominal damage threshold of the extreme weather case. No matter what its exact definition, each threshold value is connected to an event frequency. This event frequency is hard to obtain, particularly in the vicinity of the extreme-event threshold, because it has to be derived from data that are scarce by definition, however long the observation time series are. Therefore, these threshold values are subject to a certain inaccuracy. In addition, the low frequencies show some variability with time. Recently, climate change has supported the idea that the occurrence frequency of extreme values will also change, namely increase, in the future. Calculating the forecast quality from the basic data leads to two formulations of forecast quality, both based on the same principles. The fraction formulation is, correctly, free from any absolute damage level; it is sufficient to find one reference value. Going to the cumulative formulation clarifies the role of the frequency of occurrence. The two equations make it possible to compare the effects of long-term changes and inaccuracies in the frequency of occurrence of extreme events with the effects of improvements in weather prediction.
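As the abstract notes, such quality indices are built from the four entries of a 2x2 verification table: hits, false alarms, misses and correct rejections. The short sketch below computes a few standard scores from such a table; it is only a generic illustration, and the paper's own fraction and cumulative formulations are not reproduced here.

```python
# Hedged sketch: standard forecast verification scores from a 2x2 contingency table.
# These are textbook definitions, not the specific formulations developed in the paper.

def forecast_scores(hits: int, false_alarms: int, misses: int, correct_rejections: int):
    a, b, c, d = hits, false_alarms, misses, correct_rejections
    pod = a / (a + c)                      # probability of detection (hit rate)
    far = b / (a + b)                      # false alarm ratio
    csi = a / (a + b + c)                  # threat score / critical success index
    hss = 2 * (a * d - b * c) / ((a + c) * (c + d) + (a + b) * (b + d))  # Heidke skill score
    return {"POD": pod, "FAR": far, "CSI": csi, "HSS": hss}

# Example: a rare-event verification sample (extreme events are rare by definition,
# so correct rejections dominate the table)
print(forecast_scores(hits=8, false_alarms=5, misses=4, correct_rejections=983))
```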
8

The long and the short of computational ncRNA prediction

Rose, Dominic 12 November 2010 (has links) (PDF)
Non-coding RNAs (ncRNAs) are transcripts that function directly as RNA molecules without ever being translated into protein. The transcriptional output of eukaryotic cells is diverse, pervasive, and multi-layered. It consists of spliced as well as unspliced transcripts of both protein-coding messenger RNAs and functional ncRNAs. However, it also contains degradable non-functional by-products and artefacts - certainly a reason why ncRNAs have long been wrongly dismissed as transcriptional noise. Today, RNA-controlled regulatory processes are broadly recognized for a variety of ncRNA classes. The thermoresponsive ROSE ncRNA (repression of heat shock gene expression) is only one example of a regulatory ncRNA acting at the post-transcriptional level via conformational changes of its secondary structure. Bioinformatics helps to identify novel ncRNAs in the bulk of genomic and transcriptomic sequence data, which are produced at ever-increasing rates. However, ncRNA annotation is unfortunately not part of generic genome annotation pipelines; dedicated computational searches for particular ncRNAs are veritable research projects in their own right. Despite best efforts, ncRNAs across the animal phylogeny remain to a large extent uncharted territory. This thesis describes a comprehensive collection of exploratory bioinformatic field studies designed to predict ncRNA genes de novo in a series of computational screens and in a multitude of newly sequenced genomes. Non-coding RNAs can be divided into subclasses (families) according to characteristic functional, structural, or compositional similarities. A simple but suitable and frequently applied criterion for classifying RNA species is length. Accordingly, the thesis is structured into two parts: we present a series of pilot studies investigating (1) the short and (2) the long ncRNA repertoire of several model species by means of state-of-the-art bioinformatic techniques. In the first part of the thesis, we focus on the detection of short ncRNAs exhibiting thermodynamically stable and evolutionarily conserved secondary structures. We provide evidence for the presence of short structured ncRNAs in a variety of different species, ranging from bacteria to insects and higher eukaryotes. In particular, we highlight drawbacks and opportunities of RNAz-based ncRNA prediction in several hitherto scarcely investigated scenarios, for example ncRNA prediction in the light of whole-genome duplications. A recent microarray study provides experimental evidence for our approach: differential expression of at least one-sixth of our drosophilid RNAz predictions has been reported. Beyond the means of RNAz, we moreover manually compile sophisticated annotation of short ncRNAs in schistosomes. Accumulating knowledge about the genetic material of these parasites, which infect millions of humans worldwide, is of utmost scientific interest. Since the performance of any comparative genomics approach is limited by the quality of its input alignments, we introduce a novel lightweight and performant genome-wide alignment approach: NcDNAlign. Although the tool is optimized for speed rather than sensitivity and requires only a minor fraction of CPU time compared to existing programs, we demonstrate that it is essentially as sensitive and specific as competing approaches when applied to genome-wide ncRNA gene finding and to the analysis of ultra-conserved regions. 
By design, however, prediction approaches that search for regions with an excess of mutations that maintain secondary structure motifs will miss ncRNAs that are unstructured or whose structure is not well conserved in evolution. In the second part of the thesis, we therefore move beyond secondary structure prediction and, based on splice-site detection, develop novel strategies specifically designed to identify long ncRNAs in genomic sequences - arguably the open problem in current RNA research. We perform splice-site-anchored gene finding in drosophilid, nematode, and vertebrate genomes and, at least for a subset of the obtained candidate genes, provide experimental evidence for expression and for the existence of novel spliced transcripts, confirming our approach. In summary, we found evidence for a large number of previously undescribed RNAs, which consolidates the idea of non-coding RNAs as an abundant class of regulatorily active transcripts. Certainly, ncRNA prediction is a complex task. This thesis, however, offers rational guidance on how to unveil the RNA complement of newly sequenced genomes. Since our results have already given rise to subsequent computational as well as experimental studies, we believe we have lastingly stimulated the field of RNA research and contributed to an enriched view of the subject.
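As a toy illustration of the splice-site-anchored idea, the sketch below nominates candidate introns by pairing canonical GT donor and AG acceptor dinucleotides within a plausible length range. The thesis's actual pipeline relies on proper splice-site models and comparative evidence; this fragment only conveys the anchoring concept, and the example sequence and length bounds are made up.

```python
# Hedged toy sketch: nominate candidate introns by pairing canonical GT...AG dinucleotides.
# Real splice-site-anchored gene finding uses trained splice-site models and conservation.

import re

def candidate_introns(seq: str, min_len: int = 60, max_len: int = 20000):
    """Return (start, end) pairs where seq[start:start+2] == 'GT' and seq[end-2:end] == 'AG'."""
    seq = seq.upper()
    donors = [m.start() for m in re.finditer("GT", seq)]
    acceptors = [m.end() for m in re.finditer("AG", seq)]
    pairs = []
    for d in donors:
        for a in acceptors:
            if min_len <= a - d <= max_len:
                pairs.append((d, a))
    return pairs

# Made-up example sequence with one plausible intron-like stretch
example = "CCAGGTAAGT" + "T" * 70 + "TTTCTTTCAGGCC"
print(candidate_introns(example))
```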
9

The long and the short of computational ncRNA prediction

Rose, Dominic 11 March 2010 (has links)
10

Prediction with Mixture Models

Haider, Peter January 2013 (has links)
Learning a model for the relationship between the attributes and the annotated labels of data examples serves two purposes. Firstly, it enables the prediction of the label for examples without annotation. Secondly, the parameters of the model can provide useful insights into the structure of the data. If the data has an inherent partitioned structure, it is natural to mirror this structure in the model. Such mixture models predict by combining the individual predictions generated by the mixture components, which correspond to the partitions in the data. Often the partitioned structure is latent and has to be inferred when learning the mixture model. Directly evaluating the accuracy of the inferred partition structure is, in many cases, impossible because the ground truth cannot be obtained for comparison. However, it can be assessed indirectly by measuring the prediction accuracy of the mixture model that arises from it. This thesis addresses the interplay between improving predictive accuracy by uncovering latent cluster structure in data and validating the estimated structure by measuring the accuracy of the resulting predictive model. In the application of filtering unsolicited emails, the emails in the training set are latently clustered into advertisement campaigns. Uncovering this latent structure allows filtering of future emails with very low false-positive rates. In order to model the cluster structure, a Bayesian clustering model for dependent binary features is developed in this thesis. Knowing the clustering of emails into campaigns can also aid in uncovering which emails have been sent on behalf of the same network of compromised hosts, a so-called botnet. This association of emails to networks is another layer of latent clustering. Uncovering this latent structure allows service providers to further increase the accuracy of email filtering and to defend effectively against distributed denial-of-service attacks. To this end, a discriminative clustering model is derived in this thesis that is based on the graph of observed emails. The partitionings inferred using this model are evaluated through their capacity to predict the campaigns of new emails. Furthermore, when classifying the content of emails, statistical information about the sending server can be valuable. Learning a model that is able to make use of this information requires training data that include server statistics. In order to also use training data where the server statistics are missing, a model is developed that is a mixture over potentially all substitutions thereof. Another application is to predict the navigation behavior of the users of a website. Here, there is no a priori partitioning of the users into clusters, but in order to understand different usage scenarios and design different layouts for them, imposing a partitioning is necessary. The presented approach simultaneously optimizes the discriminative as well as the predictive power of the clusters. Each model is evaluated on real-world data and compared to baseline methods. The results show that explicitly modeling the assumptions about the latent cluster structure leads to improved predictions compared to the baselines. It is beneficial to incorporate a small number of hyperparameters that can be tuned to yield the best predictions in cases where the prediction accuracy cannot be optimized directly. 
/ Das Lernen eines Modells für den Zusammenhang zwischen den Eingabeattributen und annotierten Zielattributen von Dateninstanzen dient zwei Zwecken. Einerseits ermöglicht es die Vorhersage des Zielattributs für Instanzen ohne Annotation. Andererseits können die Parameter des Modells nützliche Einsichten in die Struktur der Daten liefern. Wenn die Daten eine inhärente Partitionsstruktur besitzen, ist es natürlich, diese Struktur im Modell widerzuspiegeln. Solche Mischmodelle generieren Vorhersagen, indem sie die individuellen Vorhersagen der Mischkomponenten, welche mit den Partitionen der Daten korrespondieren, kombinieren. Oft ist die Partitionsstruktur latent und muss beim Lernen des Mischmodells mitinferiert werden. Eine direkte Evaluierung der Genauigkeit der inferierten Partitionsstruktur ist in vielen Fällen unmöglich, weil keine wahren Referenzdaten zum Vergleich herangezogen werden können. Jedoch kann man sie indirekt einschätzen, indem man die Vorhersagegenauigkeit des darauf basierenden Mischmodells misst. Diese Arbeit beschäftigt sich mit dem Zusammenspiel zwischen der Verbesserung der Vorhersagegenauigkeit durch das Aufdecken latenter Partitionierungen in Daten und der Bewertung der geschätzten Struktur durch das Messen der Genauigkeit des resultierenden Vorhersagemodells. Bei der Anwendung des Filterns unerwünschter E-Mails sind die E-Mails in der Trainingsmenge latent in Werbekampagnen partitioniert. Das Aufdecken dieser latenten Struktur erlaubt das Filtern zukünftiger E-Mails mit sehr niedrigen Falsch-Positiv-Raten. In dieser Arbeit wird ein Bayes'sches Partitionierungsmodell entwickelt, um diese Partitionierungsstruktur zu modellieren. Das Wissen über die Partitionierung von E-Mails in Kampagnen hilft auch dabei herauszufinden, welche E-Mails auf Veranlassen desselben Netzes von infiltrierten Rechnern, sogenannten Botnetzen, verschickt wurden. Dies ist eine weitere Schicht latenter Partitionierung. Diese latente Struktur aufzudecken erlaubt es, die Genauigkeit von E-Mail-Filtern zu erhöhen und sich effektiv gegen verteilte Denial-of-Service-Angriffe zu verteidigen. Zu diesem Zweck wird in dieser Arbeit ein diskriminatives Partitionierungsmodell hergeleitet, welches auf dem Graphen der beobachteten E-Mails basiert. Die mit diesem Modell inferierten Partitionierungen werden anhand ihrer Leistungsfähigkeit bei der Vorhersage der Kampagnen neuer E-Mails evaluiert. Weiterhin kann bei der Klassifikation des Inhalts einer E-Mail statistische Information über den sendenden Server wertvoll sein. Ein Modell zu lernen, das diese Informationen nutzen kann, erfordert Trainingsdaten, die Serverstatistiken enthalten. Um zusätzlich Trainingsdaten benutzen zu können, bei denen die Serverstatistiken fehlen, wird ein Modell entwickelt, das eine Mischung über potentiell alle Einsetzungen davon ist. Eine weitere Anwendung ist die Vorhersage des Navigationsverhaltens von Benutzern einer Webseite. Hier gibt es nicht a priori eine Partitionierung der Benutzer. Jedoch ist es notwendig, eine Partitionierung zu erzeugen, um verschiedene Nutzungsszenarien zu verstehen und verschiedene Layouts dafür zu entwerfen. Der vorgestellte Ansatz optimiert gleichzeitig die Fähigkeiten des Modells, sowohl die beste Partition zu bestimmen als auch mittels dieser Partition Vorhersagen über das Verhalten zu generieren. Jedes Modell wird auf realen Daten evaluiert und mit Referenzmethoden verglichen. 
Die Ergebnisse zeigen, dass das explizite Modellieren der Annahmen über die latente Partitionierungsstruktur zu verbesserten Vorhersagen führt. In den Fällen, in denen die Vorhersagegenauigkeit nicht direkt optimiert werden kann, erweist sich die Hinzunahme einer kleinen Anzahl von übergeordneten, direkt einstellbaren Parametern als nützlich.
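As a rough illustration of prediction with a mixture model, the sketch below fits a mixture of independent-Bernoulli components to binary feature vectors with EM and predicts the component ("campaign") of a new example. The thesis's model is Bayesian and handles dependent binary features; this simplified stand-in only shows how component-wise predictions are combined via posterior responsibilities.

```python
# Hedged sketch: EM for an independent-Bernoulli mixture on binary data, plus prediction
# of the component of a new example. A simplification of the dependent-feature model
# developed in the thesis, for illustration only.

import numpy as np

def fit_bernoulli_mixture(X, k, n_iter=100, seed=0):
    """EM for a mixture of k independent-Bernoulli components on binary data X (n x d)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    weights = np.full(k, 1.0 / k)
    probs = rng.uniform(0.25, 0.75, size=(k, d))     # component feature probabilities
    for _ in range(n_iter):
        # E-step: responsibilities, computed in log-space for numerical stability
        log_lik = (X @ np.log(probs).T + (1 - X) @ np.log(1 - probs).T + np.log(weights))
        log_lik -= log_lik.max(axis=1, keepdims=True)
        resp = np.exp(log_lik)
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: update mixture weights and Bernoulli parameters
        nk = resp.sum(axis=0)
        weights = np.clip(nk / n, 1e-12, None)
        probs = np.clip((resp.T @ X) / nk[:, None], 1e-6, 1 - 1e-6)
    return weights, probs, resp

def predict_cluster(x, weights, probs):
    """Posterior over components for a new binary example x (the mixture's prediction)."""
    log_post = x @ np.log(probs).T + (1 - x) @ np.log(1 - probs).T + np.log(weights)
    post = np.exp(log_post - log_post.max())
    return post / post.sum()

# Example: two synthetic "campaigns" with different feature signatures
rng = np.random.default_rng(1)
X = np.vstack([rng.binomial(1, 0.8, (50, 10)), rng.binomial(1, 0.2, (50, 10))]).astype(float)
w, p, _ = fit_bernoulli_mixture(X, k=2)
print(predict_cluster(X[0], w, p))
```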
