201 |
Ochota platit za zelenou elektřinu / Willingness to pay for green electricity. Novák, Jan. January 2015 (has links)
We estimate the willingness to pay for electricity generated from renewable energy in the Czech Republic. A discrete choice experiment is used to elicit preferences for various attributes of a renewable electricity support scheme (PM emissions, GHG emissions, size of the RE power plant, revenue distribution, and costs). An original survey was carried out with 404 respondents living in two regions: Ústecký (a polluted area) and South Bohemia (a cleaner area). We find that respondents prefer decentralized renewable electricity sources over centralized ones, and local air-quality improvements over reductions in greenhouse-gas emissions. The estimated marginal willingness to pay for a 1% reduction in particulate-matter emissions equals 49 CZK, or 3.7% of the average monthly electricity bill. In total, the WTP for green electricity exceeds the current compulsory contributions to the renewable energy support scheme.
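In a conditional-logit choice model of this kind, the marginal WTP for an attribute is the ratio of its utility coefficient to the cost coefficient. A minimal sketch of that calculation, with hypothetical coefficient values (not the thesis estimates):

```python
# Marginal WTP from conditional-logit coefficients: a one-unit attribute
# improvement is worth -beta_attribute / beta_cost in money terms.
# Coefficient values below are hypothetical, chosen only for illustration.

def marginal_wtp(beta_attribute: float, beta_cost: float) -> float:
    """Money value of a one-unit change in the attribute."""
    return -beta_attribute / beta_cost

beta_pm = 0.098     # utility gain per 1 % PM-emission reduction (hypothetical)
beta_cost = -0.002  # utility loss per CZK on the monthly bill (hypothetical)

print(f"WTP: {marginal_wtp(beta_pm, beta_cost):.0f} CZK per 1 % PM reduction")
```

The negative sign converts the cost coefficient, which enters utility negatively, into a money metric.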
|
202 |
Exploring students’ patterns of reasoning. Matloob Haghanikar, Mojgan. January 1900 (has links)
Doctor of Philosophy / Department of Physics / Dean Zollman / As part of a collaborative study of the science preparation of elementary school teachers, we investigated the quality of students’ reasoning and explored the relationship between sophistication of reasoning and the degree to which the courses were considered inquiry oriented.
To probe students’ reasoning, we developed open-ended written content questions with the distinguishing feature of applying recently learned concepts in a new context. We devised a protocol for developing written content questions that provided a common structure for probing and classifying students’ sophistication level of reasoning. In designing our protocol, we considered several distinct criteria, and classified students’ responses based on their performance for each criterion.
First, we classified concepts into three types (Descriptive, Hypothetical, and Theoretical) and categorized the abstraction levels of the responses in terms of the types of concepts and the interrelationships between them. Second, we devised a rubric based on Bloom’s revised taxonomy with seven traits (both knowledge types and cognitive processes) and a defined set of criteria to evaluate each trait.
Along with analyzing students’ reasoning, we visited universities and observed the courses in which the students were enrolled. We used the Reformed Teaching Observation Protocol (RTOP) to rank the courses with respect to characteristics valued in inquiry courses. We then conducted logistic regression for a sample of 18 courses with about 900 students to estimate the relationship between traits of reasoning and RTOP score.
In addition, we analyzed the conceptual structure of students’ responses based on conceptual classification schemes and clustered the responses into six categories. We derived a regression model to estimate the relationship between the sophistication of the categories of conceptual structure and RTOP scores. An outcome variable with six categories, however, required a more complicated regression model, multinomial logistic regression, which generalizes binary logistic regression.
Across the large amount of collected data, we found that higher cognitive processes were more likely in classes with higher measures of inquiry. However, the use of more abstract concepts with higher-order conceptual structures was less prevalent in higher-RTOP courses.
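The binary logistic regressions described above relate the presence of a reasoning trait to a course's RTOP score. A minimal stdlib-only sketch of that setup, on fabricated data (the scores and outcomes are simulated, not the study's sample):

```python
import math
import random

# Tiny logistic regression by gradient ascent: probability that a higher-
# order reasoning trait appears, as a function of the course's RTOP score.
# All data here are fabricated for illustration, not the study's sample.
random.seed(0)
rtop = [random.uniform(20, 80) for _ in range(200)]
mean = sum(rtop) / len(rtop)
std = (sum((x - mean) ** 2 for x in rtop) / len(rtop)) ** 0.5
z = [(x - mean) / std for x in rtop]  # standardize for stable fitting

# Simulate: higher RTOP -> higher chance the trait appears.
trait = [1 if random.random() < 1 / (1 + math.exp(-1.2 * zi)) else 0
         for zi in z]

def fit_logistic(xs, ys, lr=0.5, epochs=2000):
    """Return (intercept, slope) by batch gradient ascent on the log-likelihood."""
    b0 = b1 = 0.0
    n = len(xs)
    for _ in range(epochs):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            p = 1 / (1 + math.exp(-(b0 + b1 * x)))
            g0 += (y - p) / n
            g1 += (y - p) * x / n
        b0 += lr * g0
        b1 += lr * g1
    return b0, b1

b0, b1 = fit_logistic(z, trait)
print(f"log-odds of the trait rise by {b1:.2f} per SD of RTOP score")
```

A positive fitted slope corresponds to the study's finding that higher cognitive processes were favored in higher-inquiry classes.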
|
203 |
預售屋、新成屋與中古屋之偏好選擇 / Housing choice among presale houses, newly constructed houses and existing houses. 王俊鈞, Wang, Jiun Jiun. Unknown Date (has links)
Housing choice is a decision every household faces. Past studies have found that homebuyers first choose between renting and owning; if they decide to buy, they first choose a location and then a housing type. The choice among housing from different market types, however, has not been addressed. Presale houses, newly constructed houses, and existing houses each carry different utility and risk, which affect homebuyers' choices. This study therefore examines homebuyers' choices and preferences among these market types.
Using data from the Construction and Planning Agency's 2009 Housing Demand Survey and a mixed multinomial logit model, we investigate individual choice behavior among presale, newly constructed, and existing houses under different constraints. The empirical results show that investors prefer presale houses, whose perceived risk is higher, expecting to trade high perceived risk for high returns. More highly educated buyers, who demand higher living quality, tend to choose presale and newly constructed houses with newer facilities. Households with higher average monthly income have greater affordability and are thus more likely to choose the higher-priced presale houses, followed by newly constructed houses. In addition, buyers who search more frequently are more likely to choose presale houses: because a presale house does not yet physically exist, its buyers spend more on search to reduce their perceived risk. The price-elasticity analysis shows that presale houses are the most competitive but also the most exposed to price shocks, while existing houses rank second in competitiveness among the three market types and are the least affected. Thus, a change in the unit-price attribute has little effect on the choices of existing-house buyers but substantially changes the choice probability of presale-house buyers.
|
204 |
Evaluating the benefits and effectiveness of public policy. Sandström, F. Mikael. January 1999 (has links)
The dissertation consists of four essays that treat different aspects of the evaluation of public policy. Two essays are applications of the travel cost method. In the first of these, recreational travel to the Swedish coast is studied to obtain estimates of the social benefits of reduced eutrophication of the sea. The second travel cost essay estimates how the probability that a woman will undergo mammographic screening for breast cancer is affected by the distance she has to travel for the examination. From these estimated probabilities, the woman's valuation of the examination is obtained. The two other essays deal with automobile taxation. One analyzes how taxation and the Swedish eco-labeling system for automobiles have affected the sales of different car models. The last essay treats the effects of taxes and of scrappage premiums on the lifetime of cars. / Diss. Stockholm : Handelshögskolan, 1999
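One standard way to turn a binary-logit participation probability into a valuation, which the screening essay's approach resembles, is the log-sum (expected consumer surplus) formula. A sketch under purely illustrative coefficients, not the dissertation's estimates:

```python
import math

# Expected consumer surplus from a binary logit participation model
# (the log-sum formula): CS = ln(1 + exp(V)) / |beta_cost|, where V is the
# utility of undergoing screening net of travel cost. Both coefficients
# below are illustrative assumptions, not the dissertation's estimates.
BETA_COST = -0.08   # utility per SEK of travel cost (hypothetical)
V0 = 1.5            # net utility of screening at zero travel cost (hypothetical)

def prob_screening(travel_cost):
    """Probability a woman undergoes screening given her travel cost."""
    v = V0 + BETA_COST * travel_cost
    return 1 / (1 + math.exp(-v))

def consumer_surplus(travel_cost):
    """Expected money-metric value of the screening option, in SEK."""
    v = V0 + BETA_COST * travel_cost
    return math.log(1 + math.exp(v)) / abs(BETA_COST)

for cost in (0, 25, 50):
    print(cost, round(prob_screening(cost), 2), round(consumer_surplus(cost), 1))
```

Both the participation probability and the implied valuation fall as travel cost rises, which is the mechanism the essay exploits.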
|
205 |
On Methods for Real Time Sampling and Distributions in Sampling. Meister, Kadri. January 2004 (has links)
This thesis is composed of six papers, all dealing with the issue of sampling from a finite population. We consider two different topics: real time sampling and distributions in sampling. The main focus is on Papers A–C, where a somewhat special sampling situation referred to as real time sampling is studied. Here a finite population passes, or is passed by, the sampler. There is no list of the population units available, and for every unit the sampler must decide whether or not to sample it when he/she meets the unit. We focus on the problem of finding suitable sampling methods for this situation, and some new methods are proposed. Throughout, we try not to sample units close to each other too often, i.e. we sample with negative dependencies. Here the correlations between the inclusion indicators, called sampling correlations, play an important role. Some evaluation of the new methods is made using a simulation study and asymptotic calculations. We study the new methods mainly in comparison with standard Bernoulli sampling, with the sample mean as an estimator of the population mean. Assuming a stationary population model with decreasing autocorrelations, we have found the form of the nearly optimal sampling correlations by asymptotic calculations, under some restrictions on the sampling correlations. We gain most in efficiency using methods that give negatively correlated indicator variables, such that the correlation sum is small and the sampling correlations are equal for units up to lag m apart and zero thereafter. Since the proposed methods are based on sequences of dependent Bernoulli variables, an important part of the study is devoted to the problem of how to generate such sequences. The correlation structure of these sequences is also studied. The remainder of the thesis consists of three diverse papers, Papers D–F, where distributional properties in survey sampling are considered.
In Paper D the concern is with unified statistical inference, where both the model for the population and the sampling design are taken into account when considering the properties of an estimator. In this paper the framework of the sampling design as a multivariate distribution is used to outline two-phase sampling. In Paper E, we give probability functions for different sampling designs, such as the conditional Poisson, Sampford, and Pareto designs, and discuss methods for sampling using the probability function of a sampling design. Paper F focuses on the design-based distributional characteristics of the π-estimator and its variance estimator. We give formulae for the higher-order moments and cumulants of the π-estimator, for the design-based variance of the variance estimator, and for the covariance of the π-estimator and its variance estimator.
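One simple way to generate dependent Bernoulli inclusion indicators with the negative neighbour correlation described above is a stationary two-state Markov chain, sketched below. This is an illustrative scheme chosen for the sketch, not one of the thesis's proposed methods:

```python
import random

# Real-time sampling with negative dependence between neighbouring units:
# the inclusion indicators form a stationary two-state Markov chain whose
# marginal inclusion probability is p, with P(include | previous unit
# included) = p1 < p, so units close together are sampled less often.

def markov_sample(n_units, p, p1, rng):
    """Return inclusion indicators with marginal probability p and
    lag-1 sampling correlation (p1 - p) / (1 - p), negative when p1 < p."""
    p0 = p * (1 - p1) / (1 - p)      # stationarity: p = p*p1 + (1-p)*p0
    indicators = [1 if rng.random() < p else 0]
    for _ in range(n_units - 1):
        q = p1 if indicators[-1] else p0
        indicators.append(1 if rng.random() < q else 0)
    return indicators

rng = random.Random(1)
ind = markov_sample(100_000, p=0.2, p1=0.05, rng=rng)
rate = sum(ind) / len(ind)
adjacent = sum(a and b for a, b in zip(ind, ind[1:]))
print(f"inclusion rate ~ {rate:.3f}, adjacent sampled pairs: {adjacent}")
```

Under independent Bernoulli sampling with p = 0.2 one would expect about n·p² = 4000 adjacent sampled pairs; the negative dependence pushes that count well below while keeping the marginal inclusion rate at p.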
|
206 |
Υποδείγματα χρονοσειρών περιορισμένης εξαρτημένης μεταβλητής και μέτρηση της ταχείας διάχυσης αρνητικών χρηματοοικονομικών συμβάντων / Limited dependent variable time series models and the measurement of rapid diffusion of negative financial events. Λίβανος, Θεόδωρος. 16 June 2011 (has links)
The aim of this thesis is to study the rapid diffusion of negative financial events (financial contagion) as presented in the literature, together with its causes, its channels of diffusion, and the ways it is measured. Within the applied part of the existing literature, we examine the strand that studies financial contagion using limited dependent variable models. The multinomial logit model, which gives the probability of an outcome as a function of the chosen explanatory variables, is analyzed in greater detail. As part of this thesis, such a model is also applied empirically to data from the Greek stock market, in order to test whether low returns on certain sub-indices of the General Price Index affect the probability of simultaneous joint exceedances in the returns (coexceedances) of other sub-indices.
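A coexceedance of the kind studied above can be counted directly: fix a lower return quantile for each index and count the days on which both fall below it. A sketch on simulated returns (the data below are fabricated for illustration; the thesis uses sub-indices of the Greek General Price Index):

```python
import random
import statistics

# Counting "coexceedances": days on which two sub-indices both fall below
# their own lower 5 % return quantile. Returns are simulated from a shared
# shock so that the lower tails move together, purely for illustration.
random.seed(42)
n_days = 1000
common = [random.gauss(0, 1) for _ in range(n_days)]          # shared shock
idx_a = [0.7 * c + random.gauss(0, 0.7) for c in common]
idx_b = [0.7 * c + random.gauss(0, 0.7) for c in common]

def lower_tail_days(returns, q=0.05):
    """Indices of days whose return is at or below the q-quantile."""
    cutoff = statistics.quantiles(returns, n=100)[int(q * 100) - 1]
    return {t for t, r in enumerate(returns) if r <= cutoff}

co = lower_tail_days(idx_a) & lower_tail_days(idx_b)
print(f"{len(co)} coexceedance days out of {n_days}")
```

Under independence, two 5% tails would coincide on only about 0.25% of days; a count well above that is the raw signal a multinomial logit for coexceedances then models.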
|
207 |
Understanding Immigrants' Travel Behavior in Florida: Neighborhood Effects and Behavioral Assimilation. Zaman, Nishat. 14 November 2014 (has links)
The goal of this study was to develop multinomial logit models of the mode choice behavior of immigrants, with key focuses on neighborhood effects and behavioral assimilation. The first aspect concerns the relationship between social network ties and immigrants’ chosen mode of transportation, while the second explores gradual changes toward alternative mode usage over immigrants’ time since migrating to the United States (US). Mode choice models were developed for work, shopping, social, recreational, and other trip purposes to evaluate the impacts of various land use patterns, neighborhood typology, and socioeconomic-demographic and immigrant-related attributes on individuals’ travel behavior. Estimated coefficients of mode choice determinants were compared between each alternative mode (i.e., high-occupancy vehicle, public transit, and non-motorized transport) and single-occupant vehicles. The model results revealed the significant influence of neighborhood and land use variables on the usage of alternative modes among immigrants. Incorporating these indicators into the demand forecasting process will provide a better understanding of the diverse travel patterns of the unique composition of population groups in Florida.
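In a multinomial logit with single-occupant vehicle as the base alternative, as above, each coefficient can be read as an odds ratio: exp(beta) is the factor by which the odds of the alternative mode relative to SOV change per unit of the covariate. A sketch with hypothetical coefficients (not the study's estimates):

```python
import math

# Interpreting MNL coefficients relative to a single-occupant-vehicle (SOV)
# baseline. Coefficient names and values are hypothetical, chosen only to
# illustrate the odds-ratio reading (e.g. assimilation over years in the US).
coefs_transit_vs_sov = {"years_in_US": -0.08, "transit_density": 0.35}

def odds_ratio(beta: float) -> float:
    """Multiplicative change in transit-vs-SOV odds per unit of the covariate."""
    return math.exp(beta)

for var, beta in coefs_transit_vs_sov.items():
    direction = "raises" if beta > 0 else "lowers"
    print(f"one unit of {var} {direction} transit-vs-SOV odds "
          f"by factor {odds_ratio(beta):.2f}")
```

A negative coefficient on years in the US, for instance, would express behavioral assimilation toward the single-occupant vehicle.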
|
208 |
Nonparametric kernel estimation methods for discrete conditional functions in econometrics. Elamin, Obbey Ahmed. January 2013 (has links)
This thesis studies the mixed data types kernel estimation framework for models of discrete dependent variables, known as kernel discrete conditional functions. The conventional parametric multinomial logit (MNL) model is compared with the mixed data types kernel conditional density estimator in Chapter 2. A new kernel estimator for discrete time single state hazard models is developed in Chapter 3 and named the discrete time “external kernel hazard” estimator. Discrete time (mixed) proportional hazard estimators are then compared empirically with the discrete time external kernel hazard estimator in Chapter 4. The work in Chapter 2 estimates a labour force participation decision model using cross-section data from the UK Labour Force Survey in 2007. The work in Chapter 4 estimates a hazard rate for job vacancies, in weeks, using data from the Lancashire Careers Service (LCS) for the period from March 1988 to June 1992. Evidence from the vast literature on female labour force participation and job-market random matching theory is used to examine the empirical results of the estimators. The parametric estimators are constrained by restrictive assumptions regarding the link function of the discrete dependent variable and the dummy variables for the discrete covariates. Adding interaction terms improves the performance of the parametric models but incurs other risks, such as multicollinearity, increased singularity of the data matrix, and more complicated computation of the ML function. The mixed data types kernel estimation framework, on the other hand, shows outstanding performance compared with the conventional parametric estimation methods. The kernel functions used for the discrete variables, including the dependent variable, substantially improve the performance of the kernel estimators.
The kernel framework makes very few assumptions about the functional form of the variables in the model and relies on the right choice of kernel functions in the estimator. The outcomes of the kernel conditional density show that female education level and fertility have a high impact on females’ propensity to work and be in the labour force. The kernel conditional density estimator captures more heterogeneity among the females in the sample than the MNL model, owing to the restrictive parametric assumptions of the latter. The (mixed) proportional hazard framework, in contrast, fails to capture the effect of job-market tightness on the job-vacancy hazard rate and produces inconsistent results when the assumptions about the distribution of the unobserved heterogeneity are changed. The external kernel hazard estimator overcomes these problems and produces results consistent with job-market random matching theory. The results in this thesis are useful for nonparametric estimation research in econometrics and for labour economics research.
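Mixed-data-type kernel estimators of the kind studied here combine a continuous kernel with a discrete kernel such as the Aitchison-Aitken kernel. A minimal sketch of a kernel conditional probability for a binary participation outcome; the bandwidths and the tiny data set are illustrative assumptions, not the thesis's specification:

```python
import math

# Mixed-data-type kernel weights: a Gaussian kernel for a continuous
# covariate times an Aitchison-Aitken kernel for a discrete one, used to
# estimate P(y = 1 | x) without a parametric link function.

def aitchison_aitken(x, xi, lam, n_categories):
    """Discrete kernel: weight 1-lam on a match, lam/(c-1) otherwise."""
    return 1 - lam if x == xi else lam / (n_categories - 1)

def gaussian(x, xi, h):
    u = (x - xi) / h
    return math.exp(-0.5 * u * u) / (h * math.sqrt(2 * math.pi))

# (age, has_young_child, participates) -- fabricated observations
data = [(25, 1, 0), (30, 1, 0), (35, 0, 1), (40, 0, 1), (45, 1, 1), (50, 0, 1)]

def p_participation(age, child, h=5.0, lam=0.2):
    """Kernel-weighted conditional probability of labour force participation."""
    weights = [gaussian(age, a, h) * aitchison_aitken(child, c, lam, 2)
               for a, c, _ in data]
    return sum(w * y for w, (_, _, y) in zip(weights, data)) / sum(weights)

print(round(p_participation(37, 0), 2))   # near the childless mid-30s observations
print(round(p_participation(27, 1), 2))   # near the young-child late-20s observations
```

Because the weights adapt locally to both covariates, the estimator can capture heterogeneity that a single parametric link would smooth away.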
|
209 |
Creation of a Next-Generation Standardized Drug Grouping for QT Prolonging Reactions using Machine Learning Techniques. Tiensuu, Jacob; Rådahl, Elsa. January 2021 (has links)
This project aims to support pharmacovigilance, the science and activities relating to drug safety and the prevention of adverse drug reactions (ADRs). We focus on a specific ADR called QT prolongation, a serious reaction affecting the heartbeat. Our main goal is to group medicinal ingredients that might cause QT prolongation. This grouping can be used in safety analysis and for exclusion lists in clinical studies, and should preferably be ranked according to the level of suspected correlation. We wished to create an automated and standardised process. Drug safety reports describing patients' experienced ADRs and the medicinal products they have taken are collected in a database called VigiBase, which we used as the source for ingredient extraction. The ADRs are described in free text and coded using an international standardised terminology. This helps us process the data and filter the ingredients included in reports that describe QT prolongation. To broaden the project scope to include uncoded data, we extended the process to take free-text verbatims describing the ADR as input. By processing and filtering the free-text data and training a natural language processing classification model released by Google on VigiBase data, we were able to predict whether a free-text verbatim describes QT prolongation. The classification resulted in an F1-score of 98%. For the ingredients extracted from VigiBase, we wanted to validate whether there is a known connection to QT prolongation. The number of VigiBase occurrences is one parameter to consider, but it can be misleading, since a report may include several drugs and a drug may include several ingredients, making the cause hard to validate. For validation, we instead used the product labels connected to each ingredient of interest, with a tool to download, scan, and code the product labels in order to see which ones mention QT prolongation.
To rank the final list of ingredients by level of suspected correlation with QT prolongation, we used a multinomial logistic regression model, trained on a data subset manually labeled by pharmacists. On held-out validation data, the model accuracy was 68%. Analysis of the training data showed that it was not easily separated linearly, which explains the limited classification performance. The final ranked list of ingredients suspected to cause QT prolongation consists of 1086 ingredients.
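The F1-score reported for the verbatim classifier is the harmonic mean of precision and recall. A sketch of the computation; the confusion-matrix counts below are made up for illustration, not the project's actual counts:

```python
# F1 score for a binary "describes QT prolongation" classification:
# harmonic mean of precision and recall, computed from confusion counts.

def f1_score(tp: int, fp: int, fn: int) -> float:
    precision = tp / (tp + fp)   # share of positive predictions that are right
    recall = tp / (tp + fn)      # share of true positives that are found
    return 2 * precision * recall / (precision + recall)

# hypothetical confusion-matrix counts
tp, fp, fn = 490, 10, 10
print(f"F1 = {f1_score(tp, fp, fn):.2f}")
```

F1 is preferred over plain accuracy here because verbatims describing QT prolongation are a minority class, and accuracy alone would reward always predicting the majority class.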
|
210 |
Automatic map generation from nation-wide data sources using deep learning. Lundberg, Gustav. January 2020 (has links)
The last decade has seen great advances in the field of artificial intelligence. One of the most noteworthy areas is deep learning, which is nowadays used in everything from self-driving cars to automated cancer screening. During the same period, the amount of spatial data encompassing not only two but three dimensions has also grown, and whole cities and countries are being scanned. Combining these two technological advances enables the creation of detailed maps with a multitude of applications, civilian as well as military.
This thesis aims at combining two data sources covering most of Sweden, laser data from LiDAR scans and a surface model from aerial images, with deep learning to create maps of the terrain. The target is to learn a simplified version of orienteering maps, as these are created with high precision by experienced map makers and represent how easy or hard it would be to traverse a given area on foot. Performance on different types of terrain is measured: open land and larger bodies of water are identified at a high rate, while trails are hard to recognize.
It is further investigated how the different densities found in the source data affect the performance of the models. Some terrain types, trails for instance, benefit from higher-density data, while other features of the terrain, such as roads and buildings, are predicted with higher accuracy from lower-density data.
Finally, the certainty of the predictions is discussed and visualised by measuring the average entropy of predictions in an area. These visualisations highlight that although the predictions are far from perfect, the models are more certain about their predictions when they are correct than when they are not.
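The average-entropy certainty measure described above can be sketched directly: each pixel's predicted class distribution has an entropy, and averaging over an area summarizes the model's confidence there. The two small prediction grids below are fabricated for illustration:

```python
import math

# Average entropy of per-pixel class predictions as a certainty measure:
# low average entropy means the model is confident in an area.

def entropy(probs):
    """Shannon entropy (nats) of one predicted class distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def average_entropy(prediction_area):
    """Mean entropy over all per-pixel class distributions in an area."""
    return sum(entropy(p) for p in prediction_area) / len(prediction_area)

# fabricated 3-class softmax outputs for a few pixels in two areas
confident_area = [[0.90, 0.05, 0.05], [0.85, 0.10, 0.05], [0.95, 0.03, 0.02]]
uncertain_area = [[0.40, 0.35, 0.25], [0.30, 0.40, 0.30], [0.34, 0.33, 0.33]]

print(round(average_entropy(confident_area), 3))
print(round(average_entropy(uncertain_area), 3))
```

Mapping this per-area average back onto the terrain gives exactly the kind of certainty visualisation the thesis describes.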
|