31

Calcul de probabilités d'événements rares liés aux maxima en horizon fini de processus stochastiques / Calculation of probabilities of rare events related to the finite-horizon maxima of stochastic processes

Shao, Jun 12 December 2016 (has links)
Initiated within the framework of an ANR project (the MODNAT project) on the stochastic modeling of natural hazards and the probabilistic quantification of their dynamic effects on mechanical and structural systems, this thesis addresses the calculation of probabilities of rare events related to the maxima of stochastic processes over a finite time horizon, subject to the following four constraints: (1) the set of processes considered must contain the four main categories encountered in random dynamics, namely stationary Gaussian, non-stationary Gaussian, stationary non-Gaussian and non-stationary non-Gaussian processes; (2) these processes may be described by their distributions, be functions of processes described by their distributions, be solutions of stochastic differential equations, or even be solutions of stochastic differential inclusions; (3) the events in question are crossings of very high thresholds by the finite-horizon maxima of the processes considered, and these events are extremely rare, hence of very small probability (on the order of 10^-4 to 10^-8), owing to the high threshold values; and finally (4) Monte Carlo approaches to this type of calculation are ruled out, being far too time-consuming given the preceding constraints. To solve such a problem, whose field of interest extends well beyond probabilistic mechanics and structural reliability (it arises in every scientific domain connected with extreme value statistics, such as financial mathematics and economics), an innovative method is proposed, whose central idea emerged from the analysis of a large-scale statistical study carried out within the MODNAT project.

This study, which analyzed the extreme-value behavior of the elements of a large set of processes, revealed two germ functions that depend explicitly on the target probability (the first directly, the second indirectly via an auxiliary conditional probability that is itself a function of the target probability) and that possess remarkable regularity properties recurring across all processes in the database. The method is built on the joint exploitation of these properties and of a "low-level approximation, high-level extrapolation" principle. Two versions of the method are first proposed, distinguished by the choice of germ function, which in each case is approximated by a polynomial. A third version is also developed, based on the formalism of the second but using a Pareto-survival-function approximation for the germ function. The numerous numerical results presented attest to the remarkable effectiveness of the first two versions and show that they are of comparable accuracy. The third version, slightly less efficient than the first two, has the advantage of establishing a direct link with extreme value theory. In each of its three versions, the proposed method is a clear improvement over current methods dedicated to this type of problem, and its structure offers the further advantage of remaining operational in an industrial context.
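The "low-level approximation, high-level extrapolation" idea can be illustrated with a toy sketch. This is not the thesis's actual method: the discrete Gaussian process, the choice of thresholds, and the quadratic-in-u surrogate for the germ function are all illustrative assumptions. The point is only the structure: estimate exceedance probabilities cheaply where the event is still frequent, fit a smooth function, then extrapolate to thresholds where direct Monte Carlo would be prohibitive.

```python
import numpy as np

rng = np.random.default_rng(0)

def p_exceed(u, n_steps=50, n_sim=40_000):
    """Plain Monte Carlo estimate of P(finite-horizon max of a discrete
    Gaussian process exceeds u) -- only feasible at moderate thresholds."""
    paths = rng.standard_normal((n_sim, n_steps))
    return np.mean(paths.max(axis=1) > u)

# Low-level step: estimate exceedance probabilities where the event is
# still frequent enough for plain Monte Carlo.
u_low = np.array([2.5, 3.0, 3.5, 4.0])
p_low = np.array([p_exceed(u) for u in u_low])

# Germ-function surrogate: fit log10 p(u) with a quadratic in u
# (a stand-in for the polynomial approximation described in the abstract).
coef = np.polyfit(u_low, np.log10(p_low), deg=2)

# High-level step: extrapolate to a threshold far beyond the sampled range,
# where direct Monte Carlo would need on the order of 1/p simulations.
u_high = 5.0
p_extrapolated = 10 ** np.polyval(coef, u_high)
print(f"extrapolated P(max > {u_high}) ~ {p_extrapolated:.1e}")
```

The extrapolated probability lands several orders of magnitude below anything the low-level simulations could resolve directly, which is exactly the regime (10^-4 to 10^-8) the thesis targets.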
32

Dynamiska förmågor som en betydande faktor i riskhantering : SMEs omställningsförmåga under en pandemi / Dynamic capabilities as a contributing factor in risk management : Adaptabilities of SMEs during a pandemic

Löfstedt, Hanna, Karlsson, Amelia January 2021 (has links)
Background: The Covid-19 pandemic struck the world at the beginning of 2020 and has had a major effect on the market. Countries have implemented extensive restrictions and, to some extent, lockdowns, which has led to serious problems for SMEs, one of the most vulnerable types of business. Managing the risks that arise during a crisis is of great importance for companies, not least SMEs, which make up 99 percent of the EU market. Discussions regarding dynamic capabilities, and the opportunity to use them to adapt to a changed market, are therefore highly topical. Purpose: The study aims to create an understanding of how risk management and dynamic capabilities within companies can be combined to deal with crises. It further aims to analyse whether a crisis or major event leads to organizational learning and change. Method: The study was conducted as a qualitative multi-case study in which we investigated five case companies through semi-structured interviews. All respondents held senior positions within their companies and therefore had a good overview of the management and strategy of the business. Both the case companies and their respondents were selected purposively as a heterogeneous sample. Conclusion: The results show a clear absence of risk management in SMEs, largely due to scarcity of resources and lack of time. All responding companies highlight that they have managed well thanks to quick decision-making processes, flexibility and closeness within the organization, which can be categorized as dynamic capabilities. Equally important are the knowledge and experience the companies have gathered during the crisis, and above all the awareness it has created within them.
33

Utilizing self-similar stochastic processes to model rare events in finance

Wesselhöfft, Niels 24 February 2021 (has links)
In statistics and mathematics, the Normal distribution is the dominant stochastic term underlying the majority of models. We show that the corresponding diffusion, Brownian Motion, fails to account for three crucial empirical features of financial data: heavy tails, long memory and scaling laws. A self-similar process able to capture long-memory behavior is Fractional Brownian Motion, which can have a non-Gaussian limit under convolution of its increments. The increments of Fractional Brownian Motion exhibit long memory through a single parameter H, the Hurst exponent. For Fractional Brownian Motion this scaling (Hurst) exponent would be constant across moments of different orders, making the process unifractal. Empirically, however, we observe varying Hölder exponents, a continuum of Hurst exponents, which implies multifractal behavior. We explain this multifractal behavior through the changing alpha-stable indices of the alpha-stable distributions across sampling frequencies, applying filters for seasonality and time dependence (long memory) at different sampling frequencies, starting from one-minute high-frequency data. Using a filter for long memory, we show that the low-frequency (weekly) process, with the time-dependence component removed, can be governed by an alpha-stable motion. Under the alpha-stable motion we propose a semiparametric method, the Frequency Rescaling Methodology (FRM), which rescales the filtered high-frequency data to the lower sampling frequency. Weekly data generated by the FRM have heavier tails than the original weekly process. We show that a subset of the whole data set suffices for the FRM to obtain a better forecast, in terms of risk, for the whole data set. In particular, the FRM would have been able to account for the tail events of the 2008 financial crisis.
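The self-similarity the abstract builds on can be probed with a standard Hurst-exponent estimator. The sketch below uses the aggregated-variance method, one common estimator (not necessarily the one used in the thesis): for increments with Hurst exponent H, the variance of block means of size m scales as m^(2H-2), so a log-log regression recovers H. White noise has no long memory, so the estimate should come out near H = 0.5.

```python
import numpy as np

def hurst_aggvar(x, block_sizes=(1, 2, 4, 8, 16, 32)):
    """Estimate the Hurst exponent H of an increment series x via the
    aggregated-variance method: Var(block means of size m) ~ m^(2H-2)."""
    x = np.asarray(x, dtype=float)
    ms, variances = [], []
    for m in block_sizes:
        n_blocks = len(x) // m
        blocks = x[: n_blocks * m].reshape(n_blocks, m).mean(axis=1)
        ms.append(m)
        variances.append(blocks.var())
    # Log-log regression: slope = 2H - 2.
    slope, _ = np.polyfit(np.log(ms), np.log(variances), 1)
    return 1.0 + slope / 2.0

rng = np.random.default_rng(1)
white = rng.standard_normal(100_000)  # no long memory, so H should be ~0.5
h = hurst_aggvar(white)
print(f"H(white noise) ~ {h:.2f}")
```

Long-memory increments (fractional Gaussian noise with H > 0.5) would push the estimate above 0.5, and, as the abstract notes, an H that varies across moment orders is the signature of multifractality rather than a single self-similar scaling law.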
34

Étude théorique de l’extinction de fluorescence des protéines fluorescentes : champ de forces, mécanisme moléculaire et modèle cinétique / A theoretical study of the fluorescence quenching in fluorescent proteins : force field, molecular mechanism and kinetic model

Jonasson, Gabriella 18 July 2012 (has links)
Fluorescent proteins, like GFP (green fluorescent protein), are efficient sensors for a variety of physico-chemical properties and are extensively used as markers in live-cell imaging. These proteins have been widely studied, both experimentally and theoretically, over the last decade. The current understanding of the protein's role in regulating radiative emission is essentially qualitative: the protein appears to enable fluorescence by blocking the processes that deactivate it; these deactivation processes are very fast and efficient (on the picosecond time scale) when the chromophore is isolated, and they are identified as torsions around the central bonds of the chromophore (tau and phi). The fluorescence lifetimes of a protein are very sensitive to mutations near or far from the chromophore and to changes in pH or temperature, which suggests that the dynamics of the chromophore is controlled by several parameters that have not yet been fully identified or related to one another. A study of the dynamics of the protein would shed light on the mechanisms responsible for these photophysical phenomena, for which a purely structural analysis is not sufficient.

From a theoretical point of view, such a study faces three difficulties: the size of the system (more than 30,000 atoms including a water box), the time scale of the photophysical phenomena considered (tens of nanoseconds) and the strong coupling of the torsions tau and phi in the excited state of the chromophore. These three factors rule out existing dynamics methods: quantum dynamics (AIMD), mixed classical-quantum dynamics (QM/MD) and classical molecular dynamics (MD). We overcame this problem by modeling the torsional potential energy surface of the chromophore in the first excited state through high-precision quantum calculations; by interpolating the computed energies with an analytical expression in the torsion angles tau and phi, accurate enough to reproduce barriers on the order of 1 kcal/mol; and finally by implementing this analytical expression in a parallelized version of the MD program AMBER. A second theoretical difficulty concerns the simulation and statistical analysis of rare events on the nanosecond time scale without knowing the reaction path in advance, i.e. the deformations of the protein and the chromophore leading to geometries that favor internal conversion. Thanks to these developments and the simulations they enabled, we have modeled, for the first time, non-radiative deactivation by internal conversion on the nanosecond time scale in three different fluorescent proteins. The analysis of the classical molecular dynamics gives a quantitative evaluation of the fluorescence quenching lifetimes, in agreement with experimental data, and has allowed us to identify the concerted molecular movements of the protein and the chromophore that lead to this quenching.

From these results emerges a more complete picture of the mechanism that releases or triggers the torsion of the chromophore: it can be a single specific movement of the protein occurring on the nanosecond time scale, or several more frequent specific movements (breakage of hydrogen bonds, rotation of side chains, dynamics of water clusters) that coincide only on the nanosecond time scale. These specific movements are not energetically costly, but the need for them to coincide creates a delay of a few nanoseconds, whereas in vacuo the torsion occurs within a few picoseconds. In the proteins studied (GFP, YFP and Padron), we have identified the main mechanisms and the amino acids involved in this chromophore-protein interplay.
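The fitting strategy described, high-level quantum energies on a (tau, phi) grid interpolated by an analytical expression accurate to about 1 kcal/mol, reduces to a linear least-squares problem when the analytical form is linear in its coefficients. The sketch below uses a truncated two-dimensional cosine series and a synthetic surface; both are illustrative assumptions, not the thesis's actual functional form or data.

```python
import numpy as np

# Synthetic stand-in for excited-state energies on a (tau, phi) grid
# (the real surface in the thesis comes from ab initio calculations).
tau, phi = np.meshgrid(np.linspace(0, 2*np.pi, 13), np.linspace(0, 2*np.pi, 13))
e_ref = (2.0 * (1 - np.cos(2*tau)) + 1.5 * (1 - np.cos(2*phi))
         + 0.5 * np.cos(tau) * np.cos(phi))  # energies in kcal/mol

# Analytical form: a truncated 2D cosine series. It is linear in its
# coefficients, so fitting it is ordinary least squares.
basis = np.stack([np.ones_like(tau), np.cos(2*tau), np.cos(2*phi),
                  np.cos(tau) * np.cos(phi)], axis=-1).reshape(-1, 4)
coef, *_ = np.linalg.lstsq(basis, e_ref.ravel(), rcond=None)

e_fit = (basis @ coef).reshape(e_ref.shape)
rmse = np.sqrt(np.mean((e_fit - e_ref) ** 2))
print(f"fit RMSE = {rmse:.2e} kcal/mol")
```

Once such an expression reproduces the grid within the target accuracy, it can be evaluated cheaply at every MD step, which is what makes nanosecond-scale classical dynamics on an excited-state surface tractable.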
35

Improving Efficiency of Prevention in Telemedicine / Zlepšování učinnosti prevence v telemedicíně

Nálevka, Petr January 2010 (has links)
This thesis employs data-mining techniques and modern information and communication technology to develop methods that may improve the efficiency of prevention-oriented telemedical programs. In particular, it uses the ITAREPS program as a case study and demonstrates that an extension of the program based on the proposed methods may significantly improve its efficiency. ITAREPS itself is a state-of-the-art telemedical program in operation since 2006. It has been deployed in 8 countries around the world, and in the Czech Republic alone it has helped prevent schizophrenic relapse in over 400 participating patients. The outcomes of this thesis are widely applicable not just to schizophrenic patients but also to other psychotic or non-psychotic diseases that follow a relapsing course and satisfy certain preconditions defined in this thesis. Two main areas of improvement are proposed. First, the thesis studies various temporal data-mining methods to improve the efficiency of relapse prediction based on diagnostic data history. Second, the latest telecommunication technologies are used to improve the quality of the gathered diagnostic data directly at the source.
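Temporal data-mining for relapse prediction can be sketched in miniature: extract windowed features (level and trend) from a patient's recent questionnaire scores and feed them to a classifier. Everything below is a hypothetical illustration, the score scale, window length, synthetic histories and the plain logistic regression are assumptions, not ITAREPS's actual data or model.

```python
import numpy as np

rng = np.random.default_rng(2)

def window_features(scores, w=4):
    """Temporal features from the last w questionnaire scores:
    level (mean) and trend (least-squares slope)."""
    recent = np.asarray(scores[-w:], dtype=float)
    slope = np.polyfit(np.arange(w), recent, 1)[0]
    return np.array([recent.mean(), slope])

def make_history(relapse):
    """Synthetic 10-week score history; pre-relapse patients drift upward."""
    base = rng.normal(3, 0.5, size=10)
    if relapse:
        base[-4:] += np.linspace(1, 4, 4)  # worsening trend before relapse
    return base

X = np.array([window_features(make_history(r)) for r in (0,) * 200 + (1,) * 200])
y = np.array([0] * 200 + [1] * 200)

# Minimal logistic regression trained by gradient descent (numpy only).
Xb = np.column_stack([np.ones(len(X)), X])
w = np.zeros(3)
for _ in range(2000):
    p = 1 / (1 + np.exp(-np.clip(Xb @ w, -30, 30)))
    w -= 0.1 * Xb.T @ (p - y) / len(y)

pred = 1 / (1 + np.exp(-np.clip(Xb @ w, -30, 30))) > 0.5
acc = np.mean(pred == y)
print(f"training accuracy ~ {acc:.2f}")
```

The design point is that the trend feature captures temporal information a single snapshot score would miss, which is the essential argument for temporal methods over static ones in relapse prediction.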
36

Decisions from experience: Time delays, complexity and illusions of control

Lejarraga, Tomás 09 July 2009 (has links)
This thesis includes three chapters that study different aspects of the distinction between decisions from description and decisions from experience. Chapter 1 studies choice when decision makers have information both from description and from experience. The results suggest that experience is disregarded in the face of description. Individual differences in rational ability are also explored: participants with higher rational ability draw larger samples than participants with lower rational ability. Chapter 2 examines situations in which information from experience is a better source than information from description. Complex scenarios and delayed judgmental tasks favor experience over description as a source of information. There were no individual differences due to numerical or rational abilities, but additional evidence was found relating higher rational ability to larger samples. Finally, chapter 3 explores how the illusion of control interacts with the source of information in a lottery task.
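The sampling paradigm behind decisions from experience is easy to simulate. In the sketch below (the payoffs and decision rule are illustrative assumptions, not the thesis's experimental design), a decision maker samples a risky option and picks it over a sure payoff only if its sample mean looks better. Small samples often miss the rare payoff, so the risky option is chosen less often despite its higher expected value, which is why sample size matters so much in this literature.

```python
import numpy as np

rng = np.random.default_rng(3)

def choose_from_experience(n_samples):
    """Sample the risky option (32 with p=0.1, else 0) n_samples times and
    pick it over a sure payoff of 3 iff its sample mean looks better."""
    draws = rng.choice([32, 0], p=[0.1, 0.9], size=n_samples)
    return draws.mean() > 3  # True = risky option chosen

rates = {}
for n in (5, 50):
    rates[n] = np.mean([choose_from_experience(n) for _ in range(5000)])
    print(f"sample size {n:2d}: risky option chosen {rates[n]:.0%} of the time")
```

The risky option's expected value (3.2) exceeds the sure 3, yet with five samples the rare outcome is frequently never experienced at all; larger samples, like those drawn by the higher-rational-ability participants in the study, bring choice closer to the description-based prediction.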
37

Informed statistical modelling of habitat suitability for rare and threatened species

O'Leary, Rebecca A. January 2008 (has links)
In this thesis a number of statistical methods have been developed and applied to habitat suitability modelling for rare and threatened species. Data available on these species are typically limited, so developing models from them can be problematic and may produce prediction biases. To address these problems this thesis has three aims. The first aim is to develop and implement frequentist and Bayesian statistical modelling approaches for these types of data. The second aim is to develop and implement expert elicitation methods. The third aim is to apply these novel approaches to habitat suitability modelling for Australian rare and threatened species case studies. The first aim is fulfilled by investigating two innovative approaches for habitat suitability modelling, together with a sensitivity analysis of the second approach to priors. The first approach is a new multilevel framework developed to model the species distribution at multiple scales and identify excess zeros (absences outside the species range); a statistical modelling approach has not previously been applied to the identification of excess zeros. The second approach is an extension and application of Bayesian classification trees to modelling the habitat suitability of a threatened species, the first 'real' application of this approach in ecology. Lastly, a sensitivity analysis of the priors in Bayesian classification trees is carried out for a real case study; such a sensitivity analysis has not previously been examined.

To address the second aim, expert elicitation methods are developed, extended and compared. In particular, one elicitation approach is extended from previous research, three elicitation methods are compared, and one new elicitation approach is proposed. These approaches are illustrated for habitat suitability modelling of a rare species, with the opinions of one or two experts elicited. The first approach uses a simple questionnaire in which expert opinion is elicited on whether increasing values of a covariate increase, decrease or do not substantively affect the response. This approach is extended to express the information as a mixture of three normally distributed prior distributions, which are then combined with available presence/absence data in a logistic regression. It is one of the first elicitation approaches in the habitat suitability modelling literature that is appropriate for experts with limited statistical knowledge and that can be used with single or multiple experts. Three relatively new approaches to eliciting expert knowledge in a form suitable for Bayesian logistic regression are compared, one of which is the questionnaire approach; the comparison covers the advantages and disadvantages of the three methods, the results of the elicitations, and the prior and posterior distributions. An expert elicitation approach is also developed for classification trees, in which the size and structure of the tree are elicited; numerous elicitation approaches have been proposed for logistic regression, but none had been suggested for classification trees.

The last aim is addressed in all chapters, since the statistical approaches proposed and extended in this thesis are applied to real case studies. Two case studies are examined. The first is the rare native Australian thistle Stemmacantha australis, for which the dataset contains a large number of absences distributed over the majority of Queensland and a small number of presence sites confined to South-East Queensland; this case study motivated the multilevel modelling framework. The second case study is the threatened Australian brush-tailed rock-wallaby (Petrogale penicillata), to which the application and sensitivity analysis of Bayesian classification trees and all the expert elicitation approaches investigated in this thesis are applied. This work has several implications for the conservation and management of rare and threatened species. The novel statistical approaches addressing the first aim extend existing methods, or propose new ones, for the identification of current and potential habitat, and we demonstrate that each method achieves better model predictions than standard techniques. The elicitation approaches addressing the second aim ensure that expert knowledge in various forms can be harnessed for habitat modelling, a particular benefit for rare and threatened species, which typically have limited data. Throughout, innovations in statistical methodology are both motivated and illustrated via habitat modelling for two rare and threatened species: the native thistle Stemmacantha australis and the brush-tailed rock-wallaby Petrogale penicillata.
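The questionnaire-based elicitation described, turning an expert's directional opinion ("increases", "decreases", "no substantive effect") into a mixture of three normal priors on a regression coefficient, can be sketched as below. The component means, standard deviations and the 0.8 confidence weight are illustrative assumptions, not the thesis's elicited values.

```python
import numpy as np

rng = np.random.default_rng(4)

# Three candidate priors for a logistic-regression coefficient, one per
# possible expert opinion about the covariate's effect on the response.
components = {"decrease": (-2.0, 1.0), "none": (0.0, 0.5), "increase": (2.0, 1.0)}

def elicited_prior_samples(opinion, n=100_000, confidence=0.8):
    """Mixture-of-normals prior: most weight on the expert's stated
    direction, the rest split over the alternatives so the prior stays
    open to the data contradicting the expert."""
    weights = {k: (confidence if k == opinion else (1 - confidence) / 2)
               for k in components}
    draws = [rng.normal(mu, sd, size=int(round(weights[k] * n)))
             for k, (mu, sd) in components.items()]
    return np.concatenate(draws)

beta = elicited_prior_samples("increase")
p_pos = np.mean(beta > 0)
print(f"P(beta > 0) under the elicited prior ~ {p_pos:.2f}")
```

A prior like this is then combined with the presence/absence data in a Bayesian logistic regression; because it only asks for a direction and a rough confidence, it suits experts with little statistical training, which is exactly the audience the thesis targets.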
