1

Some Properties of Empirical Risk Minimization over Donsker Classes

Caponnetto, Andrea; Rakhlin, Alexander 17 May 2005
We study properties of algorithms which minimize (or almost minimize) empirical error over a Donsker class of functions. We show that the L2-diameter of the set of almost-minimizers converges to zero in probability. Therefore, as the number of samples grows, it becomes unlikely that adding a point (or a number of points) to the training set will result in a large jump (in L2 distance) to a new hypothesis. We also show that under some conditions the expected errors of the almost-minimizers become close at a rate faster than n^{-1/2}.
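A toy simulation makes the first claim concrete. The sketch below is our own illustration, not the paper's construction: ERM over the Donsker class of threshold functions f_t(x) = 1[x ≥ t] on [0, 1], where the 10% label-noise rate, the finite threshold grid, and the slack ε_n = n^{-1/2} are all illustrative assumptions. The L2-diameter of the set of ε_n-almost-minimizers shrinks as n grows.

```python
import numpy as np

rng = np.random.default_rng(0)
thresholds = np.linspace(0.0, 1.0, 201)   # finite grid standing in for the class

for n in [100, 1_000, 10_000]:
    x = rng.uniform(0.0, 1.0, n)
    y = (x >= 0.5).astype(float)              # labels generated by f_{0.5}
    flip = rng.uniform(0.0, 1.0, n) < 0.1     # 10% label noise
    y[flip] = 1.0 - y[flip]
    preds = x[:, None] >= thresholds          # shape (n, len(thresholds))
    risks = np.mean(preds != y[:, None], axis=0)   # empirical 0-1 risk per t
    eps_n = 1.0 / np.sqrt(n)                  # almost-minimizer slack
    almost = thresholds[risks <= risks.min() + eps_n]
    # Under X ~ Uniform(0, 1), the L2(P) distance between f_{t1} and f_{t2}
    # is sqrt(|t2 - t1|), so the diameter of the almost-minimizer set is:
    diam = np.sqrt(almost.max() - almost.min())
    print(f"n={n:6d}  L2-diameter of almost-minimizers ≈ {diam:.3f}")
```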
2

Biosafety: valuing life, health, and the environment

Rodrigues, Alexandra Rocha January 2010
Master's thesis – Universidade Federal do Rio Grande, Programa de Pós-Graduação em Educação em Ciências: Química da Vida e Saúde, Instituto de Educação, 2010. / Biosafety can be defined as a set of actions aimed at preventing, minimizing, or eliminating risks inherent to research, production, teaching, technological development, and service activities, with the goal of protecting human health, preserving the environment, and ensuring the quality of results. Biosafety in university teaching laboratories often occupies a secondary place, yet it deserves closer, more investigative attention in order to minimize potential risks to health and the environment. This dissertation analyzes, quantitatively and qualitatively, the state of biosafety practices in the teaching and research laboratories of the Federal University of Rio Grande (FURG). A questionnaire was administered to education technicians, undergraduate students, graduate students, and professors. It was divided into analysis groups covering topics such as biosafety concepts; chemical, physical, and biological risks; ergonomic risks; good laboratory practices; individual and collective protective equipment; and the handling, control, and disposal of chemical and biological waste. The results concerning physical, chemical, and biological risks reaffirm the need to correct nonconformities found in the laboratories and to reevaluate biosafety practices. A notable feature of this research is its interventive character, which can yield associated benefits such as an educational manual of biosafety norms for teaching and research activities in FURG's laboratories. Other possibilities include creating a continuing-education program in biosafety and fostering environmental (ethical) responsibility.
3

The Use of Probabilistic Risk Functions and Linear Penalty Functions for Hospital Evacuation Planning

Soeffker, Ninja 20 November 2014
In Bish et al. (2014), two approaches for generating hospital evacuation transportation plans were proposed: minimizing the overall risk and minimizing the evacuation duration. The resulting evacuation plans differ in overall risk and duration, but also in the evacuation order of patients with different characteristics, the filling of hospital beds, and the assignment of patients to the various vehicle types. Because duration minimization is computationally expensive, this thesis searches for manipulations of the risk functions in the risk minimization approach such that the resulting evacuation plans approach the minimal duration without imposing rules for assigning patients to vehicle types. It is possible to create risk functions such that the resulting plans have shorter durations than with the basic risk functions, but the overall risk increases and other properties of the plans change. Furthermore, this thesis introduces a new objective function that minimizes an overall penalty function, where penalties are incurred for each time interval in which a patient is at the evacuating hospital or being transported; patient characteristics are captured through different weights in the penalty function. For the given problem instance, penalty factors can be chosen such that the overall risk is close to the minimal risk, or such that the duration decreases. The approach is simple, with run times comparable to the risk minimization approach for the given problem instance. / Master of Science
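The penalty objective can be written as a weighted linear function of the time intervals each patient spends waiting or in transport. The sketch below illustrates that idea only; the patient fields, weight values, and plan encoding are our assumptions, not the thesis's model.

```python
from dataclasses import dataclass

@dataclass
class PatientSchedule:
    acuity_weight: float       # larger weight = more vulnerable patient
    wait_intervals: int        # intervals spent at the evacuating hospital
    transport_intervals: int   # intervals spent on a vehicle

def total_penalty(plan: list[PatientSchedule],
                  wait_factor: float = 1.0,
                  transport_factor: float = 1.5) -> float:
    """Overall penalty of an evacuation plan under a linear penalty model."""
    return sum(p.acuity_weight * (wait_factor * p.wait_intervals
                                  + transport_factor * p.transport_intervals)
               for p in plan)

# Toy comparison: plan_a evacuates the high-acuity patient first.
plan_a = [PatientSchedule(3.0, 1, 2), PatientSchedule(1.0, 4, 1)]
plan_b = [PatientSchedule(3.0, 4, 2), PatientSchedule(1.0, 1, 1)]
print(total_penalty(plan_a), total_penalty(plan_b))   # 17.5 vs. 23.5
```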
4

Quadratic Criteria for Optimal Martingale Measures in Incomplete Markets

McWalter, Thomas Andrew 22 February 2007
Student Number: 8804388Y - MSc Dissertation - School of Computational and Applied Mathematics - Faculty of Science / This dissertation considers the pricing and hedging of contingent claims in a general semimartingale market. Initially the focus is on a complete market, where it is possible to price uniquely and hedge perfectly. In this context the two fundamental theorems of asset pricing are explored. The market is then extended to incorporate risk that cannot be hedged fully, thereby making it incomplete. Using quadratic cost criteria, optimal hedging approaches are investigated, leading to derivations of the minimal martingale measure and the variance-optimal martingale measure. These quadratic approaches are then applied to the problem of minimizing the basis risk that arises when an option on a non-traded asset is hedged with a correlated asset. Closed-form solutions based on the Black-Scholes equation are derived, and the numerical results compare encouragingly with those from a utility maximization approach.
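The dissertation derives closed-form, Black-Scholes-based solutions; as a purely numerical complement, the sketch below (our illustration, with made-up parameters) shows the quadratic idea in its simplest one-step form: when an option on a non-traded asset V is hedged with a correlated traded asset S, the variance-minimizing position is h* = Cov(dP, dS)/Var(dS), and a residual variance fraction of roughly 1 − ρ² remains as unhedgeable basis risk.

```python
import numpy as np

rng = np.random.default_rng(1)
n, dt = 200_000, 1 / 252
rho, sigma_v, sigma_s = 0.8, 0.30, 0.25
v0, s0 = 100.0, 50.0

z1 = rng.standard_normal(n)
z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal(n)
dv = v0 * sigma_v * np.sqrt(dt) * z1    # one-step move of the non-traded asset
ds = s0 * sigma_s * np.sqrt(dt) * z2    # one-step move of the hedging asset

delta = 0.55                            # assumed option delta with respect to V
dp = delta * dv                         # option P&L to first order
h_star = np.cov(dp, ds)[0, 1] / np.var(ds)      # variance-optimal hedge ratio

resid = np.var(dp - h_star * ds) / np.var(dp)   # ≈ 1 - rho**2 = 0.36
print(f"h* ≈ {h_star:.3f}, residual variance fraction ≈ {resid:.2%}")
```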
5

Hedging Contingent Claims by Convex Local Risk-Minimization

Millot, Nicolas 17 February 2012
This thesis deals with the hedging of contingent claims in incomplete markets. Our approach may be seen as an extension of M. Schweizer's work on quadratic local risk-minimization: while still modelling assets as semimartingales, we replace the quadratic criterion with a more general convex functional of the local cost. We first obtain existence, uniqueness, and characterization results for optimal strategies in a frictionless market, in both discrete- and continuous-time settings. We then make these strategies explicit in diffusion models with and without jumps. We further extend the approach to the case where liquidity is no longer infinite, given through a stochastic supply curve. Finally, we show numerically the effect of the choice of risk functional on the composition of the optimal portfolio.
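To convey the mechanics numerically (the thesis works with general convex cost functionals; the dynamics, the loss family |x|^p, and all parameters below are our illustrative assumptions), one can minimize the expected convex loss of the one-step local hedging cost over the hedge ratio; p = 2 recovers the quadratic (Schweizer) criterion.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)
n = 100_000
ds = 0.2 * rng.standard_normal(n)                # risky-asset increment
dh = 0.6 * ds + 0.05 * rng.standard_normal(n)    # claim-value increment

def expected_loss(theta: float, p: float) -> float:
    local_cost = dh - theta * ds                 # one-step local hedging cost
    return float(np.mean(np.abs(local_cost) ** p))

for p in [2.0, 1.5, 1.1]:
    res = minimize_scalar(expected_loss, args=(p,),
                          bounds=(-2.0, 2.0), method="bounded")
    print(f"convex loss |x|^{p}: optimal hedge ratio ≈ {res.x:.3f}")
```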
6

The Leasing Process on the Commercial Real Estate Market

Hedén, Fredrik; Olofsson, Alexander January 2018
The commercial real estate market, unlike the housing market, does not suffer from demand exceeding supply; instead, tenants have a larger stock of premises to choose from. To attract new customers, real estate companies must be flexible enough to satisfy the needs of a prospective tenant. By adapting the facilities to the customer's needs, the property owner makes it possible to maximize the tenant's profit, which in turn increases the tenant's willingness to pay a higher rent. The study investigates which factors real estate companies consider, with respect to risk minimization, when assessing tenant adaptations. It was conducted using a qualitative method to gather information, opinions, and experiences from five similar but distinct real estate companies active on the real estate market; their business structures differ, as they do not operate entirely in the same segments. The study concludes that most companies on the real estate market operate in similar ways. What differentiates them is that they avoid certain types of customers, such as restaurants, spa facilities, or call centers; the common denominator is that each company avoids the industries it knows least about. A lack of knowledge and experience in certain fields makes the assessment harder and forces companies to spend more time and money to minimize the risk of a bad investment. To minimize risk, all of the companies emphasize maintaining good relations, creating a good tenant mix, creating mobility within the portfolio, and conducting a thorough financial vetting of prospective tenants.
7

Robust estimation of heavy-tailed distributions

Joly, Emilien 14 December 2015
We are interested in estimating the mean of heavy-tailed random variables, taking a more robust approach than the classical empirical mean. The goal is to develop sub-Gaussian concentration inequalities for the estimation error; in other words, we seek the strong concentration usually obtained for bounded random variables under the weaker assumption of a finite variance. Two existing estimators of the mean of a real-valued random variable are invoked and their concentration results recalled, and several adaptations to higher dimensions are discussed. Using these estimators appropriately, we introduce a new empirical risk minimization technique for heavy-tailed random variables and develop some of its applications, supported by simulations on artificial data. Finally, we study a multivariate estimation problem in the U-statistics framework, where the previous estimators again offer a natural generalization of estimators from the literature.
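A standard estimator in this literature is median-of-means, which attains sub-Gaussian-type deviations assuming only a finite variance: split the sample into k blocks, average each block, and take the median of the block means. The sketch below is our illustration of that generic recipe, not the thesis's exact construction; the Pareto example and k = 20 are arbitrary choices.

```python
import numpy as np

def median_of_means(x: np.ndarray, k: int, rng: np.random.Generator) -> float:
    """Median of k block means; taking k ~ log(1/delta) blocks gives
    sub-Gaussian-type deviations at confidence 1 - delta (finite variance)."""
    blocks = np.array_split(rng.permutation(x), k)
    return float(np.median([b.mean() for b in blocks]))

rng = np.random.default_rng(3)
x = rng.pareto(2.1, 10_000) + 1.0     # heavy tail, finite variance (alpha = 2.1)
true_mean = 2.1 / 1.1                 # mean of a Pareto(alpha, x_min = 1) variable
print(f"empirical-mean error:  {abs(x.mean() - true_mean):.4f}")
print(f"median-of-means error: {abs(median_of_means(x, 20, rng) - true_mean):.4f}")
```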
8

Evolutionary dynamics, topological disease structures, and genetic machine learning

Gryder, Ryan Wayne 06 October 2021
Topological evolution is a new dynamical systems model of biological evolution occurring within a genomic state space. It can be modeled equivalently as a stochastic dynamical system, a stochastic differential equation, or a partial differential equation drift-diffusion model. An application of this approach is a model of disease evolution tracing diseases in ways similar to standard functional traits (e.g., organ evolution). Genetically embedded diseases become evolving functional components of species-level genomes. The competition between species-level evolution (which tends to maintain diseases) and individual evolution (which acts to eliminate them) yields a novel structural topology for the stochastic dynamics involved. In particular, an unlimited set of dynamical time scales emerges as a means of timing different levels of evolution: from individual to group to species and larger units. These scales exhibit a dynamical tension between individual and group evolutions, which are modeled on very different (fast and slow, respectively) time scales. This is analyzed in the context of a potentially major constraint on evolution: the species-level enforcement of lifespan via (topological) barriers to genomic longevity. This species-enforced behavior is analogous to certain types of evolutionary altruism, but it is denoted here as extreme altruism based on its potential shaping through mass extinctions. We give examples of biological mechanisms implementing some of the topological barriers discussed and provide mathematical models for them. This picture also introduces an explicit basis for lifespan-limiting evolutionary pressures: a species-level need to maintain flux in its genome via a paced turnover of its biomass, necessitated by the need for phenomic characteristics to keep pace with genomic changes through evolution. Put briefly, the phenome must keep up with the genome, which occurs with an optimized limited lifespan. An important consequence of this model is a new role for diseases in evolution. Rather than being the accidental side-effects they are commonly taken for, they play a central functional role in shaping an optimal lifespan for a species, implemented through the topology of their embedding into the genome state space. This includes cancers, which are known to be embedded into the genome in complex and sometimes hair-triggered ways arising from DNA damage. Such cancers are also known to act in engineered and teleological ways that have been difficult to explain using currently very popular theories of intra-organismic cancer evolution. This alternative inter-organismic picture presents cancer evolution as occurring over much longer (evolutionary) time scales, rather than over the very short organic evolutions that occur in individual cancers. This in turn may explain some evolved, intricate, and seemingly engineered properties of cancer. This dynamical evolutionary model is framed in a multiscaled picture in which different time scales are almost independently active in the evolutionary process, acting on semi-independent parts of the genome. We additionally move from natural evolution to artificial implementations of evolutionary algorithms, studying genetic programming for the structured construction of machine learning features in a new structural risk minimization environment.
While genetic programming in feature engineering is not new, we propose a Lagrangian optimization criterion for defining new feature sets, inspired by structural risk minimization in statistical learning. We bifurcate the optimization of this Lagrangian into two exhaustive categories involving local and global search: the former is accomplished through local descent within given basins of attraction, while the latter is done through a combinatorial search for new basins via an evolutionary algorithm.
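As a rough sketch of the Lagrangian criterion just described (our construction, not the dissertation's algorithm: the data, the penalty weight lam, and the single-mutation evolutionary loop are all illustrative assumptions), a candidate feature set can be scored by empirical risk plus a complexity penalty and improved by a toy global search over feature masks.

```python
import numpy as np

rng = np.random.default_rng(4)
n, d = 500, 12
X = rng.standard_normal((n, d))
y = X[:, 0] - 2.0 * X[:, 3] + 0.1 * rng.standard_normal(n)  # features 0, 3 matter

def lagrangian(mask: np.ndarray, lam: float = 0.02) -> float:
    """Empirical risk of a least-squares fit on the selected features,
    plus a complexity penalty proportional to the feature count."""
    if not mask.any():
        return float(np.mean(y ** 2))
    beta, *_ = np.linalg.lstsq(X[:, mask], y, rcond=None)
    risk = float(np.mean((y - X[:, mask] @ beta) ** 2))
    return risk + lam * int(mask.sum())

best = rng.random(d) < 0.5            # random initial feature mask
for _ in range(300):                  # toy (1+1) evolutionary search
    child = best.copy()
    flip = rng.integers(d)
    child[flip] = ~child[flip]        # mutate one feature in or out
    if lagrangian(child) <= lagrangian(best):
        best = child
print("selected features:", np.flatnonzero(best))   # expect [0 3]
```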
9

The Applicability of the PICTS-SV in Offenders with Severe Substance Use Histories

Williams, Margot Maryanne 08 1900
Two important developments are addressed by this dissertation. First, the Psychological Inventory of Criminal Thinking Styles-Simplified Version (PICTS-SV) was examined in relation to the original PICTS. Second, the vulnerability of the PICTS-SV to intentional risk minimization (RM) was tested. Two separate studies recruited a total of 150 offenders from a court-mandated residential substance use treatment facility. As expected, Study I established the PICTS-SV's good concurrent validity with the PICTS, especially at the broad composite scale level. For Study II, criminal thinking failed to show the anticipated convergence with HCR-20 risk classifications or forensic correlates. Potential explanations, including a restricted range of risk levels in this sample, are discussed. As a particular strength, the findings highlight that the PICTS-SV, in contrast to many risk measures, displays robust resistance to RM distortion, although revisions to its Df-r validity scale are warranted. These results overall demonstrate strong evidence of the PICTS-SV's utility for assessing a dynamic criminogenic need to inform effective interventions and accurate risk determinations.
10

Representation Learning Based Causal Inference in Observational Studies

Lu, Danni 22 February 2021
This dissertation investigates novel statistical approaches for causal effect estimation in observational settings, where controlled experimentation is infeasible and confounding is the main hurdle in estimating causal effects. As such, deconfounding constitutes the main subject of this dissertation, that is, (i) to restore the covariate balance between treatment groups and (ii) to attenuate spurious correlations in training data to derive valid causal conclusions that generalize. By incorporating ideas from representation learning, adversarial matching, generative causal estimation, and invariant risk modeling, this dissertation establishes a causal framework that balances the covariate distribution in latent representation space to yield individualized estimations, and further contributes novel perspectives on causal effect estimation based on invariance principles. The dissertation begins with a systematic review and examination of classical propensity-score-based balancing schemes for population-level causal effect estimation, presented in Chapter 2. Three causal estimands that target different foci in the population are considered: the average treatment effect on the whole population (ATE), the average treatment effect on the treated population (ATT), and the average treatment effect on the overlap population (ATO). The procedure is demonstrated in a naturalistic driving study (NDS) to evaluate the causal effect of cellphone distraction on crash risk. While highlighting the importance of adopting causal perspectives in analyzing risk factors, discussions of the limitations in balance efficiency, robustness against high-dimensional data and complex interactions, and the need for individualization are provided to motivate subsequent developments. Chapter 3 presents a novel generative Bayesian causal estimation framework named Balancing Variational Neural Inference of Causal Effects (BV-NICE). By appealing to the Robinson factorization and a latent Bayesian model, a novel variational bound on the likelihood is derived, explicitly characterized by the causal effect and the propensity score. Notably, by treating observed variables as noisy proxies of unmeasurable latent confounders, the variational posterior approximation is repurposed as a stochastic feature encoder that fully acknowledges representation uncertainties. To resolve the imbalance in representations, BV-NICE enforces KL-regularization on the respective representation marginals using Fenchel mini-max learning, justified by a new generalization bound on counterfactual prediction accuracy. The robustness and effectiveness of this framework are demonstrated through an extensive set of tests against competing solutions on semi-synthetic and real-world datasets. In recognition of the reliability issue when extending causal conclusions beyond training distributions, Chapter 4 argues that ascertaining causal stability is the key and introduces a novel procedure called Risk Invariant Causal Estimation (RICE). By carefully re-examining the relationship between statistical invariance and causality, RICE leverages observed data disparities to enable the identification of stable causal effects. Concretely, the causal inference objective is reformulated under the framework of invariant risk modeling (IRM), where a population-optimality penalty is enforced to filter out ungeneralizable effects across heterogeneous populations.
Importantly, RICE allows settings where counterfactual reasoning with unobserved confounding or biased sampling designs becomes feasible. The effectiveness of this new proposal is verified with respect to a variety of study designs on real and synthetic data. In summary, this dissertation presents a flexible causal inference framework that acknowledges representation uncertainties and data heterogeneities. It enjoys three merits: improved balance for complex covariate interactions, enhanced robustness to unobservable latent confounders, and better generalizability to novel populations. / Doctor of Philosophy / Reasoning about cause and effect is an innate human ability. While the drive to understand cause and effect is instinctive, the rigorous reasoning process is usually trained through the observation of countless trials and failures. In this dissertation, we explore various principles and novel statistical approaches for causal inference in observational studies. Throughout the dissertation, we focus on causal effect estimation, which answers questions like "what if" and "what could have happened". The causal effect of a treatment is measured by comparing the outcomes corresponding to different treatment levels of the same unit, e.g., "what if the unit were treated instead of not treated?". The challenge lies in the fact that (i) a unit only receives one treatment at a time, so it is impossible to directly compare outcomes at different treatment levels; and (ii) comparing outcomes across different units may involve bias due to confounding, as the treatment assignment potentially follows a systematic mechanism. Therefore, deconfounding constitutes the main hurdle in estimating causal effects. This dissertation presents two parallel principles of deconfounding: (i) balancing, i.e., comparing differences under similar conditions; and (ii) contrasting, i.e., extracting invariance under heterogeneous conditions. Chapters 2 and 3 explore causal effects through balancing: the former systematically reviews a classical propensity score weighting approach in a conventional data setting, and the latter presents a novel generative Bayesian framework named Balancing Variational Neural Inference of Causal Effects (BV-NICE) for high-dimensional, complex, and noisy observational data, incorporating advanced deep learning techniques of representation learning, adversarial learning, and variational inference. The robustness and effectiveness of the proposed framework are demonstrated through an extensive set of experiments. Chapter 4 extracts causal effects through contrasting, emphasizing that ascertaining stability is the key to causality. A novel causal effect estimation procedure called Risk Invariant Causal Estimation (RICE) is proposed that leverages observed data disparities to enable the identification of stable causal effects. The improved generalizability of RICE is demonstrated on synthetic data with different structures, compared with state-of-the-art models. In summary, this dissertation presents a flexible causal inference framework that acknowledges data uncertainties and heterogeneities. By promoting two different aspects of causal principles and integrating advanced deep learning techniques, the proposed framework shows improved balance for complex covariate interactions, enhanced robustness to unobservable latent confounders, and better generalizability to novel populations.
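Chapter 2's balancing idea can be illustrated with the textbook inverse-propensity-weighted (IPW) ATE estimator. The sketch below is our illustration, not the dissertation's code: it uses the true propensity for simplicity (in practice e(x) would be estimated, e.g., by logistic regression), and all numbers are synthetic.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 20_000
x = rng.standard_normal(n)                   # a single confounder
e = 1.0 / (1.0 + np.exp(-x))                 # true propensity P(T=1 | x)
t = rng.binomial(1, e)                       # confounded treatment assignment
y = 2.0 * t + x + rng.standard_normal(n)     # outcome; true ATE = 2.0

naive = y[t == 1].mean() - y[t == 0].mean()  # biased by confounding
ipw = np.mean(t * y / e) - np.mean((1 - t) * y / (1 - e))
print(f"naive contrast: {naive:.3f}, IPW estimate: {ipw:.3f} (truth 2.0)")
```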
