201

Evaluating the benefits and effectiveness of public policy

Sandström, F. Mikael January 1999 (has links)
The dissertation consists of four essays that treat different aspects of the evaluation of public policy. Two essays are applications of the travel cost method. In the first of these, recreational travel to the Swedish coast is studied to obtain estimates of the social benefits from reduced eutrophication of the sea. The second travel cost essay estimates how the probability that a woman will undergo mammographic screening for breast cancer is affected by the distance she has to travel for such an examination; from these estimated probabilities, the woman's valuation of the examination is obtained. The two other essays deal with automobile taxation. One essay analyzes how taxation and the Swedish eco-labeling system for automobiles have affected the sales of different car models. The last essay treats the effects of taxes and of scrappage premiums on the lifespan of cars. / Diss. Stockholm : Handelshögskolan, 1999
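The abstract names the travel cost method but not its specification. As a hedged illustration, below is a common count-data formulation of that method: trip counts are regressed on travel cost via Poisson regression, and consumer surplus per trip is recovered as -1/β_cost. All variable names and data are illustrative assumptions, not taken from the dissertation.

```python
# A minimal sketch of a count-data travel cost model, the method named in the
# abstract. The specification, variable names, and data here are illustrative
# assumptions, not the dissertation's actual model.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
travel_cost = rng.uniform(10, 200, n)    # round-trip cost to the coast (SEK)
income = rng.normal(250, 50, n)          # household income (thousand SEK)
# Simulate annual trips declining in travel cost (beta_cost = -0.02 assumed).
lam = np.exp(1.5 - 0.02 * travel_cost + 0.002 * income)
trips = rng.poisson(lam)

X = sm.add_constant(np.column_stack([travel_cost, income]))
model = sm.Poisson(trips, X).fit(disp=False)
beta_cost = model.params[1]

# In the standard count-data travel cost model, consumer surplus per trip
# is -1 / beta_cost; total surplus scales with predicted trips.
print("surplus per trip:", -1.0 / beta_cost)
```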
202

On Methods for Real Time Sampling and Distributions in Sampling

Meister, Kadri January 2004 (has links)
This thesis is composed of six papers, all dealing with the issue of sampling from a finite population. We consider two different topics: real time sampling and distributions in sampling. The main focus is on Papers A–C, where a somewhat special sampling situation referred to as real time sampling is studied. Here a finite population passes or is passed by the sampler. There is no list of the population units available, and for every unit the sampler must decide whether or not to sample it upon meeting it. We focus on the problem of finding suitable sampling methods for this situation, and some new methods are proposed. Throughout, we try to avoid sampling units that are close to each other too often, i.e., we sample with negative dependencies. Here the correlations between the inclusion indicators, called sampling correlations, play an important role. The new methods are evaluated by means of a simulation study and asymptotic calculations, mainly in comparison with standard Bernoulli sampling, with the sample mean as estimator for the population mean. Assuming a stationary population model with decreasing autocorrelations, we derive the form of the nearly optimal sampling correlations by asymptotic calculations, under some restrictions on the sampling correlations. We gain most in efficiency using methods that give negatively correlated indicator variables, such that the correlation sum is small and the sampling correlations are equal for units up to lag m apart and zero thereafter. Since the proposed methods are based on sequences of dependent Bernoulli variables, an important part of the study is devoted to the problem of how to generate such sequences; their correlation structure is also studied. The remainder of the thesis consists of three diverse papers, Papers D–F, where distributional properties in survey sampling are considered. In Paper D the concern is with unified statistical inference, where both the model for the population and the sampling design are taken into account when considering the properties of an estimator; the framework of the sampling design as a multivariate distribution is used to outline two-phase sampling. In Paper E, we give probability functions for different sampling designs such as the conditional Poisson, Sampford and Pareto designs, and discuss methods to sample using the probability function of a sampling design. Paper F focuses on the design-based distributional characteristics of the π-estimator and its variance estimator. We give formulae for the higher-order moments and cumulants of the π-estimator, as well as formulae for the design-based variance of the variance estimator and the covariance of the π-estimator and its variance estimator.
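The thesis's own real-time methods are not reproduced in the abstract. As a hedged stand-in, one standard way to generate a stationary Bernoulli sequence with given marginal p and negative lag-1 sampling correlation rho is a two-state Markov chain; the construction below is an assumption for illustration, not one of the papers' proposed methods.

```python
# A minimal sketch, assuming a two-state Markov chain construction. The chain
# has stationary marginal p and lag-1 correlation rho; rho < 0 gives the
# negative dependencies discussed in the abstract.
import numpy as np

def dependent_bernoulli(n, p, rho, rng):
    """Stationary Bernoulli(p) sequence with lag-1 correlation rho.

    Requires rho >= -p / (1 - p) so transition probabilities stay in [0, 1].
    """
    p11 = p + rho * (1.0 - p)   # P(X_t = 1 | X_{t-1} = 1)
    p01 = p * (1.0 - rho)       # P(X_t = 1 | X_{t-1} = 0)
    assert 0.0 <= p11 <= 1.0 and 0.0 <= p01 <= 1.0, "rho infeasible for this p"
    x = np.empty(n, dtype=int)
    x[0] = rng.random() < p
    for t in range(1, n):
        x[t] = rng.random() < (p11 if x[t - 1] else p01)
    return x

rng = np.random.default_rng(1)
s = dependent_bernoulli(100_000, p=0.1, rho=-0.05, rng=rng)
print(s.mean())                            # close to p
print(np.corrcoef(s[:-1], s[1:])[0, 1])    # close to rho
```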
203

Limited dependent variable time series models and measurement of the rapid diffusion of negative financial events (financial contagion)

Λίβανος, Θεόδωρος 16 June 2011 (has links)
The aim of this thesis is to study financial contagion (the rapid diffusion of negative financial events) as presented in the literature, together with its causes, channels of transmission, and methods of measurement. Regarding the applied part of the existing literature, the part that examines financial contagion with limited dependent variable models is reviewed. The multinomial logit model, which gives the probability of an outcome as a function of chosen explanatory variables, is analyzed in greater detail. As part of this work, an empirical application of such a model is presented using data from the Greek stock market, in order to examine whether low returns in certain subindices of the General Price Index affect the likelihood of simultaneous joint extreme returns (coexceedances) in other subindices.
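The abstract does not give the exceedance definition used. A common convention in the contagion literature defines an exceedance as a daily return in an index's own bottom 5% tail and a coexceedance as several indices exceeding on the same day; the sketch below assumes that convention, with hypothetical subindex names and synthetic returns.

```python
# A minimal sketch of counting coexceedances across subindices. The 5% tail
# threshold and the synthetic returns are illustrative assumptions; the
# thesis's exact definitions are not reproduced here.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
returns = pd.DataFrame(
    rng.standard_t(df=4, size=(1000, 3)) * 0.01,
    columns=["banks", "insurance", "telecom"],   # hypothetical subindices
)

# Exceedance: a daily return below that subindex's own 5th percentile.
exceed = returns.lt(returns.quantile(0.05), axis=1)

# Coexceedance count per day: how many subindices exceed simultaneously.
coexceed = exceed.sum(axis=1)
print(coexceed.value_counts().sort_index())
# A multinomial logit can then model P(coexceed = k) against covariates.
```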
204

Understanding Immigrants' Travel Behavior in Florida: Neighborhood Effects and Behavioral Assimilation

Zaman, Nishat 14 November 2014 (has links)
The goal of this study was to develop Multinomial Logit models for the mode choice behavior of immigrants, with key focus on neighborhood effects and behavioral assimilation. The first aspect captures the relationship between social network ties and immigrants' chosen mode of transportation, while the second explores the gradual shift toward alternative mode usage as a function of how long immigrants have resided in the United States (US). Mode choice models were developed for work, shopping, social, recreational, and other trip purposes to evaluate the impacts of land use patterns, neighborhood typology, socioeconomic-demographic attributes, and immigrant-related attributes on individuals' travel behavior. Estimated coefficients of mode choice determinants were compared for each alternative mode (i.e., high-occupancy vehicle, public transit, and non-motorized transport) against single-occupant vehicles. The model results revealed the significant influence of neighborhood and land use variables on the usage of alternative modes among immigrants. Incorporating these indicators into the demand forecasting process will provide a better understanding of the diverse travel patterns of the unique composition of population groups in Florida.
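As a hedged illustration of the modeling step, below is a multinomial logit fit with statsmodels, using a base mode of single-occupant vehicle as the abstract describes. The covariates, coefficients, and data are assumptions for illustration, not the study's specification.

```python
# A minimal sketch of a multinomial logit mode choice model. Variable names,
# categories, and data are illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 2000
df = pd.DataFrame({
    "years_in_us": rng.integers(0, 30, n),     # immigration tenure
    "transit_density": rng.uniform(0, 1, n),   # neighborhood transit access
    "hh_income": rng.normal(60, 20, n),
})
# Modes: 0 = single-occupant vehicle (base), 1 = HOV, 2 = transit, 3 = non-motorized.
logits = np.stack([
    np.zeros(n),
    0.2 - 0.02 * df.years_in_us,
    -0.5 + 2.0 * df.transit_density - 0.01 * df.hh_income,
    -1.0 + 1.0 * df.transit_density,
], axis=1)
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
mode = np.array([rng.choice(4, p=p) for p in probs])

X = sm.add_constant(df)
fit = sm.MNLogit(mode, X).fit(disp=False)
print(fit.summary())   # one coefficient set per mode, relative to the base
```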
205

Nonparametric kernel estimation methods for discrete conditional functions in econometrics

Elamin, Obbey Ahmed January 2013 (has links)
This thesis studies the mixed data types kernel estimation framework for models of discrete dependent variables, known as kernel discrete conditional functions. The conventional parametric multinomial logit (MNL) model is compared with the mixed data types kernel conditional density estimator in Chapter 2. A new kernel estimator for discrete time single state hazard models is developed in Chapter 3, named the discrete time "external kernel hazard" estimator. The discrete time (mixed) proportional hazard estimators are then compared empirically with the discrete time external kernel hazard estimator in Chapter 4. The work in Chapter 2 estimates a labour force participation decision model using cross-section data from the UK labour force survey in 2007. The work in Chapter 4 estimates a hazard rate for job vacancies in weeks, using data from the Lancashire Careers Service (LCS) for the period from March 1988 to June 1992. Evidence from the vast literature on female labour force participation and on job-market random matching theory is used to examine the empirical results of the estimators. The parametric estimators are constrained by restrictive assumptions about the link function of the discrete dependent variable and the dummy variables of the discrete covariates. Adding interaction terms improves the performance of the parametric models, but carries other risks: it can generate multicollinearity, increase the singularity of the data matrix, and complicate the computation of the maximum likelihood function. The mixed data types kernel estimation framework, on the other hand, shows outstanding performance compared with the conventional parametric estimation methods. The kernel functions used for the discrete variables, including the dependent variable, substantially improve the performance of the kernel estimators. The kernel framework uses very few assumptions about the functional form of the variables in the model and relies on the right choice of kernel functions in the estimator. The outcomes of the kernel conditional density estimation show that female education level and fertility have a high impact on females' propensity to work and be in the labour force. The kernel conditional density estimator captures more heterogeneity among the females in the sample than the MNL model, owing to the restrictive parametric assumptions in the latter. The (mixed) proportional hazard framework, in turn, fails to capture the effect of job-market tightness on the job-vacancy hazard rate and produces inconsistent results when the assumptions about the distribution of the unobserved heterogeneity are changed. The external kernel hazard estimator overcomes those problems and produces results consistent with job-market random matching theory. The results in this thesis are useful for nonparametric estimation research in econometrics and for research in labour economics.
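For the mixed data types conditional density step, statsmodels offers one off-the-shelf implementation. The sketch below is an assumption-laden illustration of that estimator on synthetic participation data; it does not reproduce the thesis's own estimators (e.g., the external kernel hazard).

```python
# A minimal sketch of a mixed data types kernel conditional density estimate
# using statsmodels. Variables and data are illustrative assumptions.
import numpy as np
from statsmodels.nonparametric.kernel_density import KDEMultivariateConditional

rng = np.random.default_rng(4)
n = 500
education = rng.integers(0, 4, n)          # ordered discrete covariate
age = rng.normal(40, 10, n)                # continuous covariate
# Binary participation outcome, more likely with higher education.
participates = (rng.random(n) < 0.3 + 0.15 * education).astype(float)

# dep_type='u': unordered discrete outcome; indep_type='oc': ordered discrete
# then continuous covariates, in column order.
kde = KDEMultivariateConditional(
    endog=[participates],
    exog=[education, age],
    dep_type="u",
    indep_type="oc",
    bw="normal_reference",
)

# Estimated P(participates = 1 | education = 3, age = 40).
print(kde.pdf(endog_predict=[[1.0]], exog_predict=[[3, 40.0]]))
```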
206

Technology adoption in Mediterranean horticultural greenhouses

García Martínez, María del Carmen 25 November 2009 (has links)
In Spanish intensive horticulture, most exports come from greenhouse crops located in Almería, Murcia and Alicante, the areas on which this study focuses. The current competitive position does not face very serious threats, but neither does it show a growth phase. Exports and prices face competition from other Mediterranean countries, with which Spain must compete in capital and technology by raising the level of greenhouse equipment. Given the pressing, non-deferrable need to restructure existing facilities, this thesis aims to determine the current state of the technology and its evolution, as well as the characteristics of the holdings and the attitude of their owners toward the necessary innovations. The sources of information were producer prices for tomato and pepper, as the main horticultural products, and a survey of 242 holdings, using stratified proportional sampling, in the areas of El Ejido (Almería), Valle del Guadalentín and Campo de Cartagena (Murcia), and southern Alicante. The first part of the analysis was devoted to prices, calculating trend and seasonality and applying ARIMA models, in order to track the evolution of producers' incomes, make forecasts, and relate prices to the technology that could be adopted. The treatment of the survey data and its results make up the largest part of the work. Univariate statistical analysis was applied to the structural characteristics of holdings and greenhouses, and bivariate analysis, with independence tests, was used to determine relationships of interest among the factors that influence innovation processes. / García Martínez, MDC. (2009). La adopción de tecnología en los invernaderos hortícolas mediterráneos [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/6472 / Palancia
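For the price-analysis step, a hedged sketch of a seasonal ARIMA fit on a monthly producer price series follows. The model order, the seasonality, and the synthetic data are assumptions for illustration, not the thesis's fitted models.

```python
# A minimal sketch of trend/seasonality modeling with a seasonal ARIMA for a
# monthly producer price series. Order and data are illustrative assumptions.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(5)
months = pd.date_range("2000-01", periods=120, freq="MS")
t = np.arange(120)
# Synthetic tomato price: trend + annual seasonality + noise (EUR/kg).
price = 0.8 + 0.002 * t + 0.15 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.05, 120)
series = pd.Series(price, index=months)

# A seasonal ARIMA(1,1,1)(1,0,1,12) as one plausible specification.
fit = ARIMA(series, order=(1, 1, 1), seasonal_order=(1, 0, 1, 12)).fit()
print(fit.forecast(steps=12))   # 12-month-ahead price forecast
```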
207

Creation of a Next-Generation Standardized Drug Grouping for QT Prolonging Reactions using Machine Learning Techniques

Tiensuu, Jacob, Rådahl, Elsa January 2021 (has links)
This project aims to support pharmacovigilance, the science and activities relating to drug safety and the prevention of adverse drug reactions (ADRs). We focus on a specific ADR called QT prolongation, a serious reaction affecting the heartbeat. Our main goal is to group medicinal ingredients that might cause QT prolongation. This grouping can be used in safety analysis and for exclusion lists in clinical studies, and should preferably be ranked according to the level of suspected correlation. We wished to create an automated and standardised process. Drug safety reports describing the ADRs patients have experienced and the medicinal products they have taken are collected in a database called VigiBase, which we have used as the source for ingredient extraction. The ADRs are described in free texts and coded using an international standardised terminology. This helps us process the data and filter the ingredients included in a report that describes QT prolongation. To broaden the project scope to include uncoded data, we extended the process to use free-text verbatims describing the ADR as input. By processing and filtering the free-text data and training a classification model for natural language processing released by Google on VigiBase data, we were able to predict whether a free-text verbatim describes QT prolongation. The classification resulted in an F1-score of 98%. For the ingredients extracted from VigiBase, we wanted to validate whether there is a known connection to QT prolongation. The number of VigiBase occurrences is a parameter to consider, but it might be misleading, since a report can include several drugs and a drug can include several ingredients, making it hard to validate the cause. For validation, we used product labels connected to each ingredient of interest, with a tool to download, scan and code product labels in order to see which ones mention QT prolongation. To rank our final list of ingredients according to the level of suspected QT prolongation correlation, we used a multinomial logistic regression model, trained on a data subset manually labeled by pharmacists. On unlabeled validation data, the model accuracy was 68%. Analysis of the training data showed that it was not easily linearly separable, which explains the limited classification performance. The final ranked list of ingredients suspected to cause QT prolongation consists of 1086 ingredients.
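The thesis fine-tunes a Google-released NLP model, which is not reproduced here. As a deliberately simpler stand-in for the verbatim classification step, below is a TF-IDF plus logistic regression baseline on toy verbatims; the texts and labels are invented for illustration.

```python
# A minimal sketch of free-text ADR classification. This TF-IDF + logistic
# regression baseline is a simpler stand-in for the fine-tuned NLP model the
# thesis uses; the verbatims below are toy examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

verbatims = [
    "patient showed prolonged qt interval on ecg",
    "torsades de pointes after dose increase",
    "mild headache resolved without treatment",
    "nausea and vomiting on day two",
]
is_qt = [1, 1, 0, 0]   # 1 = verbatim describes QT prolongation

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(verbatims, is_qt)
print(clf.predict(["ecg revealed qt prolongation"]))   # expected: [1]
```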
208

Automatic map generation from nation-wide data sources using deep learning

Lundberg, Gustav January 2020 (has links)
The last decade has seen great advances within the field of artificial intelligence. One of the most noteworthy areas is that of deep learning, which is nowadays used in everything from self-driving cars to automated cancer screening. During the same time, the amount of spatial data encompassing not only two but three dimensions has also grown, and whole cities and countries are being scanned. Combining these two technological advances enables the creation of detailed maps with a multitude of applications, civilian as well as military. This thesis aims at combining two data sources covering most of Sweden, laser data from LiDAR scans and a surface model from aerial images, with deep learning to create maps of the terrain. The target is to learn a simplified version of orienteering maps, as these are created with high precision by experienced map makers and represent how easy or hard it would be to traverse a given area on foot. The performance on different types of terrain is measured, and it is found that open land and larger bodies of water are identified at a high rate, while trails are hard to recognize. It is further researched how the different densities found in the source data affect the performance of the models; some terrain types, trails for instance, benefit from higher-density data, while other features of the terrain, like roads and buildings, are predicted with higher accuracy from lower-density data. Finally, the certainty of the predictions is discussed and visualised by measuring the average entropy of predictions in an area. These visualisations highlight that although the predictions are far from perfect, the models are more certain about their predictions when they are correct than when they are not.
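The uncertainty measure described above, average per-pixel entropy of the class predictions over an area, can be computed directly from softmax outputs. The sketch below assumes an illustrative class count and random logits in place of real model output.

```python
# A minimal sketch of the uncertainty measure described above: the average
# per-pixel entropy of softmax class predictions over an area. The class
# count and the random predictions are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(6)
n_classes, h, w = 5, 64, 64                  # e.g., open land, water, trail, ...
logits = rng.normal(size=(n_classes, h, w))

# Softmax over the class axis.
e = np.exp(logits - logits.max(axis=0, keepdims=True))
probs = e / e.sum(axis=0, keepdims=True)

# Per-pixel entropy in nats; 0 = fully certain, log(n_classes) = uniform.
entropy = -(probs * np.log(probs + 1e-12)).sum(axis=0)

print("mean entropy over area:", entropy.mean())
print("max possible entropy  :", np.log(n_classes))
```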
209

Confidence Intervals for Population Size in a Capture-Recapture Problem.

Zhang, Xiao 14 August 2007 (has links) (PDF)
In a single capture-recapture problem, two new Wilson methods for interval estimation of population size are derived. The classical Chapman interval and the Wilson and Wilson-cc intervals are examined and compared in terms of their expected interval width and exact coverage properties under two models. The new approach performs better than the Chapman interval in each model. Bayesian analysis also provides an alternative way to estimate population size.
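The thesis's new Wilson-type intervals are not given in the abstract, so the sketch below shows only the classical baseline they are compared against: the Chapman point estimate with its standard normal-approximation interval.

```python
# A minimal sketch of the classical Chapman estimator and its normal-
# approximation interval for a single capture-recapture experiment. The
# thesis's new Wilson-type intervals are not reproduced here.
import math

def chapman_interval(n1, n2, m, z=1.96):
    """n1 first-sample captures, n2 second-sample captures, m recaptures."""
    n_hat = (n1 + 1) * (n2 + 1) / (m + 1) - 1
    var = ((n1 + 1) * (n2 + 1) * (n1 - m) * (n2 - m)
           / ((m + 1) ** 2 * (m + 2)))
    half = z * math.sqrt(var)
    return n_hat, (n_hat - half, n_hat + half)

# Example: 100 marked, 120 captured on the second occasion, 25 recaptured.
n_hat, ci = chapman_interval(100, 120, 25)
print(f"N_hat = {n_hat:.0f}, 95% CI = ({ci[0]:.0f}, {ci[1]:.0f})")
```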
210

Some Inferential Results for One-Shot Device Testing Data Analysis

So, Hon Yiu January 2016 (has links)
In this thesis, we develop some inferential results for one-shot device testing data analysis, extending and generalizing existing methods in the literature. First, a competing-risk model is introduced for one-shot testing data under accelerated life-tests. One-shot devices are products that are destroyed immediately after use; we can therefore observe only a binary status, success or failure, instead of a lifetime. Many one-shot devices contain multiple components, and the failure of any one of them leads to the failure of the device. Failed devices are inspected to identify the specific cause of failure. Since the exact lifetime is not observed, the EM algorithm becomes a natural tool for obtaining the maximum likelihood estimates of the model parameters. Here, we develop the EM algorithm for the competing exponential and Weibull cases. Second, a semi-parametric approach is developed for simple one-shot device testing data. A semi-parametric model consists of parametric and non-parametric components. For this purpose, we only assume that the hazards at different stress levels are proportional to each other, but make no distributional assumption on the lifetimes. This provides greater flexibility in model fitting and enables us to examine the relationship between the reliability of devices and the stress factors. Third, Bayesian inference is developed for one-shot device testing data under the exponential distribution and the Weibull distribution with non-constant shape parameters for competing risks. The Bayesian framework provides statistical inference from another perspective: it assumes the model parameters to be random and improves the inference by incorporating experts' experience as prior information. This method is shown to be very useful when failure observations are limited, in which case the maximum likelihood estimator may not exist. The thesis proceeds as follows. In Chapter 2, we assume the one-shot devices to have two components with exponentially distributed lifetimes under multiple stress factors, and develop an EM algorithm for likelihood inference on the model parameters as well as some useful reliability characteristics. In Chapter 3, we generalize to the situation when lifetimes follow a Weibull distribution with non-constant shape parameters. In Chapter 4, we propose a semi-parametric model for simple one-shot device test data based on the proportional hazards model and develop associated inferential results. In Chapter 5, we consider the competing-risk model with exponential lifetimes and develop inference by adopting the Bayesian approach. In Chapter 6, we generalize these Bayesian results to the situation when the lifetimes have a Weibull distribution. Finally, we provide some concluding remarks and indicate some future research directions in Chapter 7. / Thesis / Doctor of Philosophy (PhD)
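To make the EM idea concrete, here is a deliberately simplified sketch for a single exponential lifetime (the thesis's competing-risk and Weibull extensions are not shown): each device is inspected once at a known time, only the binary failed/working status is observed, the E-step fills in the expected lifetime given that status, and the M-step applies the complete-data exponential MLE.

```python
# A minimal sketch of EM for one-shot device data, simplified to a single
# exponential lifetime with rate lam. Device i is inspected once at tau[i];
# failed[i] is True if it had already failed by then.
import numpy as np

rng = np.random.default_rng(7)
true_lam = 0.5
tau = rng.uniform(0.5, 4.0, 1000)            # inspection times
failed = rng.exponential(1 / true_lam, 1000) <= tau

lam = 1.0                                    # initial guess
for _ in range(200):
    # E-step: expected lifetime given the binary status at tau.
    #   E[T | T <= tau] = 1/lam - tau * exp(-lam*tau) / (1 - exp(-lam*tau))
    #   E[T | T >  tau] = tau + 1/lam   (memorylessness)
    s = np.exp(-lam * tau)
    e_fail = 1 / lam - tau * s / (1 - s)
    e_surv = tau + 1 / lam
    expected_t = np.where(failed, e_fail, e_surv)
    # M-step: complete-data MLE for an exponential rate.
    lam = len(tau) / expected_t.sum()

print("estimated rate:", lam, "(true:", true_lam, ")")
```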
