1

Efficient Multilevel and Multi-index Sampling Methods in Stochastic Differential Equations

Haji Ali, Abdul Lateef 22 May 2016 (has links)
Most problems in engineering and natural sciences involve parametric equations in which the parameters are not known exactly due to measurement errors, lack of measurement data, or even intrinsic variability. In such problems, one objective is to compute point or aggregate values, called "quantities of interest". A rapidly growing research area that tries to tackle this problem is Uncertainty Quantification (UQ). As the name suggests, UQ aims to accurately quantify the uncertainty in quantities of interest. To that end, the approach followed in this thesis is to describe the parameters using probabilistic measures and then to employ probability theory to approximate the probabilistic information of the quantities of interest. In this approach, the parametric equations must be accurately solved for multiple values of the parameters to explore the dependence of the quantities of interest on these parameters, using various so-called "sampling methods". In almost all cases, the parametric equations cannot be solved exactly and suitable numerical discretization methods are required. The high computational complexity of these numerical methods, coupled with the fact that the parametric equations must be solved for multiple values of the parameters, makes UQ problems computationally intensive, particularly when the dimensionality of the underlying problem and/or the parameter space is high.

This thesis is concerned with optimizing existing sampling methods and developing new ones. Starting with the Multilevel Monte Carlo (MLMC) estimator, we first prove its asymptotic normality using the Lindeberg-Feller central limit theorem. We then design the Continuation Multilevel Monte Carlo (CMLMC) algorithm that efficiently approximates the parameters required to run MLMC. We also optimize the hierarchies of one-dimensional discretization parameters that are used in MLMC and analyze the tolerance splitting parameter between the statistical error and the bias constraints. An important contribution of this thesis is the novel Multi-index Monte Carlo (MIMC) method, an extension of MLMC to high-dimensional problems with significant computational savings. Under reasonable assumptions on the weak and variance convergence, which are related to the mixed regularity of the underlying problem and the discretization method, the order of the computational complexity of MIMC is, at worst up to a logarithmic factor, independent of the dimensionality of the underlying parametric equation. We also apply the same multi-index methodology to another sampling method, namely the Stochastic Collocation method. Hence, the novel Multi-index Stochastic Collocation method is proposed and is shown to be more efficient, in problems with sufficient mixed regularity, than MIMC and other standard methods.

Finally, MIMC is applied to approximate quantities of interest of stochastic particle systems in the mean-field limit, as the number of particles tends to infinity. To approximate these quantities of interest up to an error tolerance TOL, MIMC has a computational complexity of O(TOL^-2 log(TOL)^2). This complexity is achieved by building a hierarchy based on two discretization parameters: the number of time steps in a Milstein scheme and the number of particles in the particle system. Moreover, we use a partitioning estimator to increase the correlation between two stochastic particle systems of different sizes. In comparison, the optimal computational complexity of MLMC in this case is O(TOL^-3) and the computational complexity of Monte Carlo is O(TOL^-4).
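To make the multilevel idea above concrete, the following is a minimal sketch of a plain MLMC estimator for a scalar SDE. The geometric Brownian motion test problem, the Euler-Maruyama discretization, and the fixed per-level sample sizes are illustrative assumptions, not the estimator configurations studied in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

def coupled_euler(l, M, T=1.0, x0=1.0, mu=0.05, sigma=0.2, M0=4):
    """Simulate M coupled Euler-Maruyama paths of dX = mu*X dt + sigma*X dW on
    level l (M0*2^l steps) and level l-1, sharing Brownian increments.
    Returns samples of f(X_T^l) - f(X_T^{l-1}) with f(x) = x (level 0: fine path only)."""
    n_f = M0 * 2 ** l
    dt_f = T / n_f
    Xf = np.full(M, x0)
    Xc = np.full(M, x0)
    dWc = np.zeros(M)
    for step in range(n_f):
        dW = rng.normal(0.0, np.sqrt(dt_f), M)
        Xf += mu * Xf * dt_f + sigma * Xf * dW
        if l > 0:
            dWc += dW
            if (step + 1) % 2 == 0:     # coarse level advances once per two fine steps
                Xc += mu * Xc * 2 * dt_f + sigma * Xc * dWc
                dWc[:] = 0.0
    return Xf - Xc if l > 0 else Xf

def mlmc(L, M_per_level):
    """Telescoping MLMC estimator: sum over levels of the mean level corrections."""
    return sum(coupled_euler(l, M).mean() for l, M in zip(range(L + 1), M_per_level))

print(mlmc(L=4, M_per_level=[100000, 40000, 16000, 6000, 2500]))
```

In practice (and in the CMLMC algorithm described above) the number of samples per level is not fixed in advance but chosen from estimated level variances and costs so that the statistical error and bias both meet the prescribed tolerance.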
2

Socio-Economically Responsible Investing and Income Inequality in the USA

Brown, David January 2017 (has links)
To add to the tools currently available to combat income inequality in the United States, an investment fund type is proposed, justified, described, and created using historical asset returns from 1960 to 2015. By focusing on two socio-economic indicators of poverty, the inflation and unemployment rates, this fund, when marketed to investors who live near, at, or below the poverty line, seeks to increase returns during times of increased strain on the economies of the poor. Several hurdles to this end are posed and affirmatively answered, and a fund type and a corresponding four-factor model that realized hypothetical excess returns meeting the requirements of a successful investment strategy were developed and evaluated. With the increasing importance of socially responsible investment practices, an investment bank that maintains a fund of this type could potentially see financial and reputational benefits.
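As a rough illustration of the screening idea described above, the sketch below selects assets whose returns co-move with a combined inflation-and-unemployment stress series and forms an equal-weighted fund from them. The synthetic return data, the stress-index construction, and the selection cutoff are hypothetical placeholders, not the author's actual fund construction or factor model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Placeholder data: monthly asset returns and two socio-economic stress indicators
# (changes in inflation and unemployment). Real data would come from historical sources.
n_months, n_assets = 240, 50
returns = rng.normal(0.01, 0.05, size=(n_months, n_assets))
d_inflation = rng.normal(0.0, 0.3, n_months)
d_unemployment = rng.normal(0.0, 0.2, n_months)

# Stress index: periods where inflation and unemployment both rise strain poor households.
stress = (d_inflation - d_inflation.mean()) / d_inflation.std() \
       + (d_unemployment - d_unemployment.mean()) / d_unemployment.std()

# Select assets whose returns co-move positively with the stress index, so the fund
# tends to pay off when economic strain on the poor is highest.
corr = np.array([np.corrcoef(returns[:, j], stress)[0, 1] for j in range(n_assets)])
selected = np.argsort(corr)[-10:]                        # top assets by correlation
weights = np.full(len(selected), 1.0 / len(selected))    # equal weights for the sketch

fund_returns = returns[:, selected] @ weights
print("mean monthly return of screened fund:", fund_returns.mean())
```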
3

A Test Of Multi-index Asset Pricing Models: The Case Of Istanbul Stock Exchange

Kalac, Sirri Selim 01 September 2012 (has links) (PDF)
This study employs widely accepted asset pricing models to test their explanatory power in the context of companies listed on the Istanbul Stock Exchange between 1990 and 2010. The risk factors beta, size, book-to-market equity, and momentum are used to form portfolios, and their factor loadings are estimated. The results of this study are mostly in line with previous academic research, and some unique attributes of the return-generation mechanism of the Istanbul Stock Exchange are reported.
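A minimal sketch of the factor-loading estimation step described above: a test portfolio's excess returns are regressed on market, size, book-to-market, and momentum factors by ordinary least squares. The synthetic data and factor names are illustrative assumptions; the study itself uses Istanbul Stock Exchange data from 1990 to 2010.

```python
import numpy as np

rng = np.random.default_rng(2)

# Placeholder monthly excess returns for a test portfolio and four factors
# (market, size, book-to-market, momentum).
n = 252
factors = rng.normal(0.0, 0.04, size=(n, 4))        # columns: MKT, SMB, HML, MOM
true_betas = np.array([1.1, 0.4, 0.3, -0.2])
portfolio = 0.002 + factors @ true_betas + rng.normal(0.0, 0.02, n)

# OLS: regress portfolio excess returns on an intercept (alpha) and the four factors.
X = np.column_stack([np.ones(n), factors])
coef, *_ = np.linalg.lstsq(X, portfolio, rcond=None)
alpha, betas = coef[0], coef[1:]
print("alpha:", round(alpha, 4), "factor loadings:", np.round(betas, 3))
```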
4

Locally Optimized Mapping of Slum Conditions in a Sub-Saharan Context: A Case Study of Bamenda, Cameroon

Anchang, Julius 18 November 2016 (has links)
Despite being an indicator of modernization and macro-economic growth, urbanization in regions such as Sub-Saharan Africa is tightly interwoven with poverty and deprivation. This has manifested physically as slums, which represent the worst residential urban areas, marked by a lack of access to good quality housing and basic services. To effectively combat the slum phenomenon, local slum conditions must be captured in quantitative and spatial terms. However, there are significant hurdles to this. Slum detection and mapping requires readily available and reliable data, as well as a proper conceptualization of measurement and scale. Using Bamenda, Cameroon, as a test case, this dissertation research was designed as a three-pronged attack on the slum mapping problem. The overall goal was to investigate locally optimized slum mapping strategies and methods that utilize high-resolution satellite image data, household survey data, simple machine learning, and regionalization theory.

The first major objective of the study was to tackle a "measurement" problem. The aim was to explore a multi-index approach to measure and map local slum conditions. The rationale behind this was that prior sub-Saharan slum research too often used simplified measurement techniques, such as a single unweighted composite index, to represent diverse local slum conditions. In this study, six household indicators relevant to the United Nations criteria for defining slums were extracted from a 2013 Bamenda household survey data set and aggregated for 63 local statistical areas. The extracted variables were the percent of households having the following attributes: more than two residents per room, non-owner occupancy, occupancy of a single room or studio, no flush toilet, no piped water, and no drainage. Hierarchical variable clustering was used as a surrogate for exploratory factor analysis to determine fewer latent slum factors from these six variables. Variable groups were classified such that the most correlated variables fell in the same group while non-correlated variables fell in separate groups. Each group was then examined to see if it suggested a conceptually meaningful slum factor that could be quantified as a stand-alone binary ("high"/"low") slum index. Results showed that the slum indicators in the study area could be replaced by at least two meaningful and statistically uncorrelated latent factors. One factor reflected home occupancy conditions (tenancy status, overcrowding, and living space) and was quantified, using K-means clustering of units, as an occupancy disadvantage index (Occ_D). The other reflected the state of utilities access (piped water and flush toilet) and was quantified as a utilities disadvantage index (UT_D). Location attributes were used to examine and validate both indices. Independent t-tests showed that units with high Occ_D were, on average, closer to the nearest town markets and major roads than units with low Occ_D. This was consistent with theory, as typical slum residents (in this case overcrowded and non-owner households) are expected to favor accessibility to areas of high economic activity. However, this was not the case for UT_D, which showed no such strong pattern.

The second major objective was to tackle a "learning" problem. The purpose was to explore the potential of unsupervised machine learning to detect or "learn" slum conditions from image data. The rationale was that such an approach would be efficient and less reliant on prior knowledge and expertise. A 2012 GeoEye image scene of the study area was subjected to image classification, from which the following physical settlement attributes were quantified for each of the 63 statistical areas: percent roof area, percent open space area, percent bare soil, percent paved road surface, percent dirt road surface, and building shadow-to-roof area ratio. The shadow-to-roof ratio was an innovative measure used to capture the size and density attributes of buildings. In addition to the six image-derived variables, the mean slope of each area was calculated from a digital elevation dataset. All seven attributes were subjected to principal component analysis, from which the first two components were extracted and used for hierarchical clustering of statistical areas to derive physical types. Results show that area units could be optimally classified into four physical types, labeled generically as Categories 1-4, each with at least one defining physical characteristic. Kruskal-Wallis tests comparing physical types in terms of household and location attributes showed that at least two physical types differed in terms of aggregated household slum conditions and location attributes. Category 4 areas, located on steep slopes and having a high shadow-to-roof ratio, had the highest distribution of non-owner households. They were also located close to the nearest town markets. They were thus the most likely candidates for slums in the city. Category 1 units, on the other hand, located on the outskirts and having abundant open space, were the least likely to have slum conditions.

The third major objective was to tackle the problem of "spatial scale". Neighborhoods, by their very nature of contiguity and homogeneity, represent an ideal scale for urban spatial analysis and mapping. Unfortunately, in most areas, neighborhoods are not objectively defined, and slum mapping often relies on the use of arbitrary spatial units which do not capture the true extent of the phenomenon. The objective was thus to explore the use of analytic regionalization to quantitatively derive the neighborhood unit for mapping slums. Analytic neighborhoods were created by spatially constrained clustering of statistical areas using the minimum spanning tree algorithm. Unlike previous studies that relied on socio-economic and/or demographic information, this study innovatively used multiple land cover and terrain attributes as neighborhood homogenizing factors. Five analytic neighborhoods (labeled Regions 1-5) were created this way and compared using Kruskal-Wallis tests for differences in household slum attributes. This was done to determine the largest possible contiguous areas that could be labeled as slum or non-slum neighborhoods. The results revealed that at least two analytic regions were significantly different in terms of aggregated household indicators. Region 1 stood apart as having significantly higher distributions of overcrowded and non-owner households. It could thus be viewed as the largest potential slum neighborhood in the city. In contrast, Region 3 (located at a higher elevation and separated from the rest of the city by a steep escarpment) was generally associated with low distributions of household slum attributes and could be considered the strongest model of a non-slum or formal neighborhood. Both Regions 1 and 3 were also qualitatively correlated with two locally recognized (vernacular) neighborhoods. These neighborhoods, "Sisia" (for Region 1) and "Up Station" (for Region 3), are commonly perceived by local people as occupying opposite ends of the socio-economic spectrum.

The results obtained by successfully carrying out the three major objectives have major implications for future research and policy. In the case of the multi-index analysis of slum conditions, they affirm the notion that the slum phenomenon is diverse in the local context and that remediation efforts must be compartmentalized to be effective. The results of unsupervised, image-based slum mapping show that it is a tool with high potential for rapid slum assessment even when there is no supporting field data. Finally, the results of analytic regionalization showed that the true extent of contiguous slum neighborhoods can be delineated objectively using land cover and terrain attributes. This presents an opportunity for local planning and policy actors to consider redesigning the city's neighborhood districts as analytic units. Quantitatively derived neighborhoods are likely to be more useful in the long term, be it for spatial sampling, mapping, or planning purposes.
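A compact sketch of the measurement pipeline from the first objective, assuming synthetic survey data: correlated indicators are grouped by hierarchical variable clustering (the surrogate for exploratory factor analysis), and K-means with two clusters turns one group into a binary disadvantage index such as Occ_D. The random data, the SciPy/scikit-learn calls, and the two-group cut are illustrative choices, not the dissertation's exact implementation.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)

# Placeholder: six household slum indicators (percent of households per attribute)
# aggregated over 63 statistical areas, standing in for the 2013 Bamenda survey data.
indicators = ["crowded", "non_owner", "single_room", "no_flush", "no_piped_water", "no_drainage"]
X = rng.uniform(0, 100, size=(63, 6))

# Step 1: hierarchical variable clustering on 1 - |correlation| distances,
# used as a surrogate for exploratory factor analysis.
corr = np.corrcoef(X, rowvar=False)
dist = squareform(1 - np.abs(corr), checks=False)
groups = fcluster(linkage(dist, method="average"), t=2, criterion="maxclust")
print(dict(zip(indicators, groups)))     # which indicators load on which latent factor

# Step 2: K-means with k=2 on one group's variables gives a binary high/low
# disadvantage index per statistical area (e.g. Occ_D).
occ_vars = X[:, groups == groups[0]]
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(occ_vars)
high_label = int(np.argmax(km.cluster_centers_.mean(axis=1)))   # cluster with worse means
occ_d = (km.labels_ == high_label).astype(int)                  # 1 = high disadvantage
print("areas flagged as high disadvantage:", int(occ_d.sum()))
```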
5

Improved Statistical Methods for Elliptic Stochastic Homogenization Problems: Application of Multi Level- and Multi Index Monte Carlo on Elliptic Stochastic Homogenization Problems

Daloul, Khalil January 2023 (has links)
In numerical multiscale methods, one relies on a coupling between a macroscopic model and a microscopic model. The macroscopic model does not include the microscopic properties that the microscopic model offers and that are vital for the desired solution. Such microscopic properties include parameters like material coefficients and fluxes, which may vary microscopically within the material. The effective values of these data can be computed by running local microscale simulations and averaging the microscopic data. One desires the effect of the microscopic coefficients on a macroscopic scale, and this can be obtained using classical homogenization theory. One method in homogenization theory is to solve local elliptic cell problems in order to compute the homogenized constants; this results in an error of order λ/R, where λ is the wavelength of the microscopic variations and R is the size of the simulation domain. However, one can greatly improve the accuracy by a slight modification of the elliptic homogenization PDE and by using a filter in the averaging process, obtaining much better orders of error. The modification relates the elliptic PDE to a parabolic one, which can be solved and integrated in time to recover the elliptic PDE's solution.

In this thesis I apply the modified elliptic cell homogenization method with a qth-order filter to compute the homogenized diffusion constant in a 2D Poisson equation on a rectangular domain. Two cases were simulated. The diffusion coefficient in the first case was a deterministic 2D matrix function, and in the second case a stochastic 2D matrix function, which results in a 2D stochastic differential equation (SDE). In the second case, two methods were used to determine the expected value of the homogenized constants: first the Multilevel Monte Carlo (MLMC) method, and second its generalization, Multi-index Monte Carlo (MIMC). The performance of MLMC and MIMC is then compared when used in the homogenization process.

In the homogenization process, a 2D finite element discretization was used to approximate the solution of the Poisson equation. The grid spatial steps were varied using first-order differences in MLMC (square mesh) and first-order mixed differences in MIMC (which allows for a rectangular mesh).
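The sketch below shows the structure of a MIMC estimator built from first-order mixed differences over a two-dimensional mesh hierarchy. The sample_qoi function is a placeholder that fakes the discretization bias of the cell-problem solve; in the thesis this role would be played by the FEM solution of the modified elliptic cell problem with the qth-order averaging filter, and the sample sizes per index would be optimized rather than fixed.

```python
import numpy as np

rng = np.random.default_rng(4)

def sample_qoi(i, j, xi):
    """Placeholder for the homogenized-constant estimate on a rectangular mesh with
    2^i cells in x and 2^j cells in y, evaluated for the random realizations xi.
    A real implementation would solve the modified elliptic cell problem with FEM
    and apply the qth-order averaging filter; here the discretization bias is faked."""
    bias = 0.5 * 2.0 ** (-i) + 0.5 * 2.0 ** (-j)   # first-order convergence in x and y
    return 1.0 + bias + 0.05 * xi                  # reusing xi couples the meshes

def mixed_difference(i, j, n_samples):
    """First-order mixed difference of the QoI; the four neighbouring meshes share
    the same random realizations, which keeps the correction variance small."""
    xi = rng.normal(0.0, 1.0, n_samples)
    d = sample_qoi(i, j, xi)
    if i > 0:
        d = d - sample_qoi(i - 1, j, xi)
    if j > 0:
        d = d - sample_qoi(i, j - 1, xi)
        if i > 0:
            d = d + sample_qoi(i - 1, j - 1, xi)
    return d

# MIMC estimator: sum of expected mixed differences over a total-degree index set.
L = 4
estimate = sum(mixed_difference(i, j, n_samples=2000).mean()
               for i in range(L + 1) for j in range(L + 1 - i))
print("MIMC estimate of the homogenized constant:", round(estimate, 4))
```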
6

A Study of the Investment Decision-Making Behavior of Individual Investors in the Securities Market (證券市場個人投資者投資決策行為之研究)

曾嘉麟, ZENG,JIA-LIN Unknown Date (has links)
The factors that influence stock prices can be broadly divided into three categories: (1) market factors, (2) industry factors, and (3) firm-specific factors. This study mainly investigates the relationship between stock returns in Taiwan and industry factors. The theoretical background introduces the single-index model and the multi-index model, in order to understand the main content of both models and the issues that must be considered when studying industry factors. The literature review explains that most previous studies by domestic and foreign scholars have confirmed the existence of industry factors, and describes the methods and results of research that introduces industry factors into multi-index models to determine whether doing so can reduce residual correlation and improve the ability to explain stock price movements. Previous domestic research on industry factors classified firms according to the current securities-market classification standards; this study instead classifies the listed companies in the sample into the five types identified by 陳發輝 (Chen Fa-Hui) on the basis of six variables (including the current ratio, debt ratio, EPS, asset turnover, capital, after-tax earnings growth rate, and trading turnover): growth blue-chip, highly speculative, steady-growth, conservative-stagnant, and stable-profit firms. The research design section describes the design process, including the choice of security types, the study period, the sampling criteria, the operational definitions of the variables, the data sources, and the research procedure. The empirical results and analysis section interprets the statistical results of the empirical study and explains the limitations of its empirical part.
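A minimal sketch of the multi-index return-generating model discussed above: a single stock's returns are regressed on a market index and an industry index that has been orthogonalized against the market, which is one common way to test whether industry factors reduce residual correlation. The synthetic return series and coefficient values are placeholders, not the study's Taiwan Stock Exchange data.

```python
import numpy as np

rng = np.random.default_rng(5)

# Placeholder monthly returns: a market index, an industry index, and one stock.
n = 120
market = rng.normal(0.01, 0.05, n)
industry = 0.6 * market + rng.normal(0.0, 0.03, n)   # industry co-moves with the market
stock = 0.002 + 0.9 * market + 0.5 * (industry - 0.6 * market) + rng.normal(0.0, 0.04, n)

# Multi-index model: R_i = a_i + b_i * R_market + c_i * R_industry + e_i,
# with the industry index orthogonalized against the market first.
slope, intercept = np.polyfit(market, industry, 1)
industry_resid = industry - (slope * market + intercept)
X = np.column_stack([np.ones(n), market, industry_resid])
a, b, c = np.linalg.lstsq(X, stock, rcond=None)[0]
print("alpha:", round(a, 4), "market loading:", round(b, 3), "industry loading:", round(c, 3))
```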
