1

D- and Ds-optimal Designs for Estimation of Parameters in Bivariate Copula Models

Liu, Hua-Kun 27 July 2007 (has links)
For current status data, the failure time of interest is not observed exactly; each observation consists only of a monitoring time and an indicator of whether the failure occurred before or after that time. The choice of monitoring times therefore determines how much information the data can carry. In this work, optimal designs for the monitoring times that maximize this information are investigated in a bivariate copula model (Clayton). The D-optimal criterion is used to choose the monitoring times C_i (i = 1, ..., n), which are then used to estimate the unknown parameters simultaneously by maximizing the corresponding likelihood function. Ds-optimal designs for estimating the association parameter in the copula model are also discussed. Simulation studies are presented to compare the performance of estimation based on the monitoring times C*_D and C*_Ds.
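As a hedged illustration of the design criterion (for the simplest univariate case, not the bivariate Clayton model of the thesis), the sketch below finds a D-optimal monitoring time for current status data with an exponential failure time; all values and function names are illustrative:

```python
import math

def fisher_info(c, lam=1.0):
    """Fisher information for the rate lam of an exponential failure time
    observed in current-status form at monitoring time c: we only see
    whether T <= c, so p = 1 - exp(-lam*c) and
    I(c) = (dp/dlam)^2 / (p * (1 - p))."""
    p = 1.0 - math.exp(-lam * c)
    dp = c * math.exp(-lam * c)          # derivative of p w.r.t. lam
    return dp * dp / (p * (1.0 - p))

def d_optimal_time(lam=1.0, grid=None):
    """Grid search for the monitoring time maximizing the information
    (for a scalar parameter, D-optimality reduces to maximizing I)."""
    grid = grid or [0.01 * k for k in range(1, 1001)]
    return max(grid, key=lambda c: fisher_info(c, lam))

c_star = d_optimal_time(lam=1.0)
```

Monitoring too early or too late is uninformative (almost all subjects are on the same side of c); the information-maximizing time sits in between, which is the intuition behind choosing the C_i by an optimality criterion.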
2

Developing a methodology for monitoring personal exposure to particulate matter in a variety of microenvironments

Steinle, Susanne January 2014 (has links)
Adverse health effects from exposure to air pollution, although at present only partly understood, are a global challenge and of widespread concern. Quantifying human exposure to air pollutants is challenging, as ambient concentrations of air pollutants at potentially harmful levels are ubiquitous and subject to high spatial and temporal variability. At the same time, individuals have their very own unique activity patterns. Exposure thus results from intertwined environmental and human systems, which adds complexity to the assessment process. It is essential to develop a deeper understanding of individual exposure pathways and situations occurring in people’s everyday lives. This is especially important for exposure and health impact assessment, which provide the basis for public health advice and policy development. This thesis describes the development and application of a personal monitoring method to assess exposure to fine particulate matter in a variety of microenvironments. Tools and methods applied are tested with respect to feasibility, intrusiveness, performance and potential for future applications. The development of the method focuses on application in everyday environments and situations, in an attempt to capture as much of the total exposure as possible across a complete set of microenvironments. Seventeen volunteers took part in the pilot study, collected data and provided feedback on the methodology and tools applied. The low-cost particle counter showed good agreement with reference instruments when evaluated in two different environments. Based on this assessment, functions to derive particle mass concentration from the original particle number counts were defined. The application of the devices and tools received positive feedback from the volunteers. Limitations are mainly related to the non-weatherproof design of the particle counter.
The collection of time-activity patterns with GPS and time-activity diaries is challenging and requires careful processing. Resulting personal exposure profiles highlight the influence of individual activities and contextual factors. Highest concentrations were measured in indoor environments where people also spent the majority of time. Differences between transport modes as well as between urban and rural areas were identified.
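The count-to-mass conversion described above can be sketched as a simple least-squares calibration of the low-cost counter against a co-located reference instrument; the numbers below are invented for illustration and are not the thesis's data:

```python
# Hypothetical co-located calibration data: low-cost counter particle
# number counts alongside reference PM2.5 mass concentrations (ug/m^3).
counts = [120, 340, 560, 810, 1050, 1490]
ref_mass = [4.1, 11.8, 19.5, 28.0, 36.2, 51.7]

# Ordinary least-squares fit of mass = slope * count + intercept.
n = len(counts)
mean_x = sum(counts) / n
mean_y = sum(ref_mass) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(counts, ref_mass)) \
        / sum((x - mean_x) ** 2 for x in counts)
intercept = mean_y - slope * mean_x

def to_mass(count):
    """Convert a raw particle number count to an estimated mass concentration."""
    return slope * count + intercept
```

In practice such a calibration would be derived separately per environment (the thesis assessed two), since particle size distribution and composition affect the count-to-mass relationship.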
3

Robust boosting via convex optimization

Rätsch, Gunnar January 2001 (has links)
In this work we consider statistical learning problems. A learning machine aims to extract information from a set of training examples such that it is able to predict the associated label on unseen examples. We consider the case where the resulting classification or regression rule is a combination of simple rules, also called base hypotheses. The so-called boosting algorithms iteratively find a weighted linear combination of base hypotheses that predict well on unseen data. We address the following issues:

o The statistical learning theory framework for analyzing boosting methods. We study learning-theoretic guarantees on the prediction performance on unseen examples. Recently, large margin classification techniques emerged as a practical result of the theory of generalization, in particular boosting and support vector machines. A large margin implies good generalization performance. Hence, we analyze how large the margins in boosting are and derive an improved algorithm that efficiently generates the maximum margin solution.

o How can boosting methods be related to mathematical optimization techniques? To analyze the properties of the resulting classification or regression rule, it is of high importance to understand whether and under which conditions boosting converges. We show that boosting can be used to solve large-scale constrained optimization problems whose solutions are well characterizable. To show this, we relate boosting methods to methods known from convex optimization, and derive convergence guarantees for a quite general family of boosting algorithms.

o How can boosting be made robust to measurement errors and outliers in the data? One problem of current boosting techniques is their sensitivity to noise in the training sample. To make boosting robust, we transfer the soft-margin idea from support vector learning to boosting. This leads to theoretically well-motivated, regularized algorithms that exhibit high noise robustness.

o How can boosting be extended to regression problems? Boosting methods were originally designed for classification problems. To extend the boosting idea to regression, we use the previous convergence results and relations to semi-infinite programming to design boosting-like (leveraging) algorithms for regression. We show that these algorithms have desirable theoretical and practical properties.

o Can boosting techniques be useful in practice? The presented theoretical results are accompanied by simulation results, either to illustrate properties of the proposed algorithms or to show that they work well in practice. We report on successful applications in chaotic time series analysis, a non-intrusive power monitoring system, and a drug discovery process.

Note: The author received the Michelson Prize of the Faculty of Mathematics and Natural Sciences of the University of Potsdam for the best dissertation of 2001/2002.
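The iterative reweighting scheme summarized above can be sketched with a minimal AdaBoost-style loop over decision stumps (a generic textbook variant, not one of the thesis's specific algorithms; the data and thresholds are illustrative):

```python
import math

# Toy 1-D sample: inputs and +/-1 labels (illustrative only).
X = [0.5, 1.5, 2.5, 3.5, 4.5, 5.5]
y = [1, 1, -1, -1, 1, 1]

def stump(theta, s):
    """Base hypothesis: returns s below the threshold theta, -s above it."""
    return lambda x: s if x < theta else -s

candidates = [stump(t, s) for t in range(7) for s in (1, -1)]

w = [1.0 / len(X)] * len(X)   # example weights
ensemble = []                 # list of (alpha, base hypothesis)

def weighted_error(h):
    return sum(wi for wi, xi, yi in zip(w, X, y) if h(xi) != yi)

def predict(x):
    """Sign of the weighted vote of the hypotheses chosen so far."""
    return 1 if sum(a * h(x) for a, h in ensemble) >= 0 else -1

for _ in range(10):
    h = min(candidates, key=weighted_error)   # best current base hypothesis
    err = weighted_error(h)
    if err == 0 or err >= 0.5:
        break
    alpha = 0.5 * math.log((1 - err) / err)   # its vote weight
    ensemble.append((alpha, h))
    # Reweight: misclassified examples gain weight, correct ones lose it.
    w = [wi * math.exp(-alpha * yi * h(xi)) for wi, xi, yi in zip(w, X, y)]
    z = sum(w)
    w = [wi / z for wi in w]
    if all(predict(xi) == yi for xi, yi in zip(X, y)):
        break
```

No single stump can fit this non-monotone label pattern, but the weighted combination does, which is exactly the point the abstract makes about combining weak base hypotheses. The sensitivity to noisy labels that motivates the soft-margin variants is also visible here: a mislabeled example would keep gaining weight round after round.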
4

Upgrading the Control and Monitoring system for the TOFOR neutron time-of-flight spectrometer at JET

Valldor-Blücher, Johan January 2013 (has links)
This report describes the development and testing of the upgraded Control and Monitoring (C&Mu) system for the TOFOR neutron spectrometer. TOFOR currently performs plasma diagnostics for the JET experimental fusion reactor. The purpose of the C&Mu system is to enable monitoring of the amplitude-dependent time delays of TOFOR. To perform this monitoring function, the C&Mu system must comprise a pulsed light source with variable intensity and a reference time signal. In this work, a reference time signal has been derived from a laser fitted with a motorized polarizer. This was accomplished by installing a photomultiplier tube and a beamsplitter cube: the beamsplitter cube splits the laser light into two parts and directs one part into the photomultiplier tube, which converts the light into an electrical reference time signal. A control program has been developed for the motorized polarizer, enabling the user to vary the light intensity over the interval from 0% to 100%. Performance tests found a time resolution of about 0.1 ns and a time stability of about 0.12 ns over 27 hours. The system is therefore more than adequate to monitor variations in TOFOR's time delays of several nanoseconds over a full JET day. The C&Mu system is ready to be installed on TOFOR.
5

A 576 m long creep and shrinkage specimen – long-term deformation of a semi-integral concrete bridge with a massive solid cross-section

Herbers, Max, Wenner, Marc, Marx, Steffen 26 February 2024 (has links)
For creep and shrinkage investigations, relatively small cylindrical specimens are generally exposed to constant climatic conditions, and the mainly empirical prediction models derived from them are used for the calculation of large engineering structures with massive cross-sections. In this paper, the expected values of the material models according to fib Model Code 2010 and Eurocode 2 are compared with monitoring data acquired over a period of more than 12 years during structural health monitoring of a large viaduct. In addition to the measured continuous increase in the viscous deformations, seasonal fluctuations due to climatic influences could also be detected. The numerical calculations show that the material models differ significantly in the magnitude and time course of the predicted viscous concrete deformations. In comparison with the monitoring data, good agreement was achieved when using the material models according to Eurocode 2, whereas the models of the fib Model Code 2010 underestimated the deformations of the massive bridge girder.
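The time course of creep deformation compared here can be sketched with the Eurocode-2-style development function; note that phi0 and beta_h below are placeholder values, not ones derived from the actual bridge's humidity, member size, or concrete strength via the full Annex B procedure:

```python
def creep_coefficient(t, t0=28.0, phi0=2.0, beta_h=800.0):
    """Creep coefficient phi(t, t0) = phi0 * beta_c(t, t0) with the
    Eurocode-2-style development function
    beta_c = ((t - t0) / (beta_h + t - t0)) ** 0.3.
    phi0 (notional creep coefficient) and beta_h here are placeholder
    values; in EN 1992-1-1 they depend on relative humidity, notional
    member size and concrete strength. Times are in days."""
    dt = t - t0
    if dt <= 0:
        return 0.0
    return phi0 * (dt / (beta_h + dt)) ** 0.3

# Creep develops quickly at first, then levels off toward phi0:
history = [(t, creep_coefficient(t)) for t in (90, 365, 3650, 36500)]
```

The slow asymptotic approach to phi0 is why multi-year monitoring data, like the 12-year record discussed above, are so valuable for checking these models: large beta_h (a massive cross-section) stretches the development over decades.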
6

Genesis, conservation and deformation of ice-rich mountain permafrost: Driving factors, mapping and geodetic monitoring

Kenner, Robert 29 January 2018 (has links)
This thesis analyses ice-rich mountain permafrost with regard to its genesis, distribution, deformation and interaction with other environmental factors. The processes influencing ground ice formation in ice-rich and ice-poor mountain permafrost are highlighted. Factors influencing the presence of ice-rich permafrost are identified and their individual or combined effect on frozen ground is determined. Based on these findings, a new permafrost distribution map of Switzerland was created, which specifies permafrost temperature and ice contents and considers rock glacier creep paths. The deformation of rock glaciers is investigated with newly developed monitoring systems and concepts. This enables a better understanding of the processes leading to rock glacier acceleration at different time scales.
