  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
221

控制風險值下的最適投資組合 / Optimal Portfolio Selection under Value-at-Risk Control

洪幸資 Unknown Date (has links)
In contrast to the role of variance in the traditional mean-variance framework, this thesis introduces Value-at-Risk (VaR) as a shortfall constraint in the portfolio selection decision. Doing so is much more in keeping with investors' perception of risk, and in line with current practice at financial institutions, where VaR is widely used as an internal risk-control tool. Mathematically, however, VaR has serious limitations that make the portfolio selection problem difficult to solve to optimality. To apply VaR to ex ante portfolio decisions, the closely related and tractable risk measure Conditional Value-at-Risk (CVaR) is used as a proxy for finding efficient portfolios. The linear programming formulation of Rockafellar and Uryasev (2000) is employed to construct a Mean-CVaR efficient frontier; the VaR of the resulting portfolios on this frontier is then reduced further by a simple heuristic search, yielding an empirical Mean-VaR efficient frontier that is shown to be a useful approximation to the true one. Finally, the Campbell, Huisman and Koedijk (2001) model is applied to this frontier to obtain the optimal portfolio under a VaR constraint. In the empirical study, three Taiwan-listed stocks are used to build the Mean-VaR efficient frontier. Since the asset returns test as non-normally distributed, VaR is estimated by historical simulation from the actual return distribution, which validates that the minimum-CVaR portfolios combined with the heuristic search adequately approximate the true Mean-VaR efficient frontier. Mean-VaR efficient frontiers under different confidence levels, different return-distribution assumptions, and different weight-generation schemes are then compared and the results analyzed.
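The empirical procedure above estimates VaR by historical simulation and measures tail risk with CVaR. A minimal sketch of those two estimators (function and variable names are illustrative, not the thesis's code) might look like:

```python
import numpy as np

def var_cvar_historical(returns, weights, alpha=0.95):
    """Historical-simulation VaR and CVaR of a portfolio.

    returns : (T, n) matrix of historical asset returns
    weights : (n,) portfolio weights
    alpha   : confidence level, e.g. 0.95
    Losses are negated portfolio returns; VaR is the alpha-quantile of
    the losses, and CVaR is the mean loss at or beyond the VaR.
    """
    losses = -(returns @ weights)
    var = np.quantile(losses, alpha)
    cvar = losses[losses >= var].mean()
    return var, cvar

# toy fat-tailed (non-normal) returns for three assets
rng = np.random.default_rng(0)
rets = rng.standard_t(df=4, size=(1000, 3)) * 0.01
w = np.array([0.5, 0.3, 0.2])
var95, cvar95 = var_cvar_historical(rets, w, 0.95)
```

Since CVaR always dominates VaR at the same confidence level, minimizing CVaR bounds VaR from above, which is what makes the Mean-CVaR frontier a usable starting point for approximating the Mean-VaR frontier.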
222

ON THE CONVERGENCE AND APPLICATIONS OF MEAN SHIFT TYPE ALGORITHMS

Aliyari Ghassabeh, Youness 01 October 2013 (has links)
Mean shift (MS) and subspace constrained mean shift (SCMS) algorithms are non-parametric, iterative methods for finding a representation of a high-dimensional data set on a principal curve or surface embedded in the space. The representation of high-dimensional data on a principal curve or surface, the class of mean shift type algorithms and their properties, and applications of these algorithms are the main focus of this dissertation. Although MS and SCMS algorithms have been used in many applications, a rigorous study of their convergence is still missing. This dissertation aims to fill some of the gaps between theory and practice by investigating some convergence properties of these algorithms. In particular, we propose a sufficient condition for a kernel density estimate with a Gaussian kernel to have isolated stationary points, which guarantees the convergence of the MS algorithm. We also show that the SCMS algorithm inherits some of the important convergence properties of the MS algorithm. In particular, the monotonicity and convergence of the density estimate values along the sequence of output values of the algorithm are shown. We also show that the distance between consecutive points of the output sequence converges to zero, as does the projection of the gradient vector onto the subspace spanned by the D-d eigenvectors corresponding to the D-d largest eigenvalues of the local inverse covariance matrix. Furthermore, three new variations of the SCMS algorithm are proposed, and the running times and performance of the resulting algorithms are compared with the original SCMS algorithm. We also propose an adaptive version of the SCMS algorithm that accounts for new incoming samples without re-running the algorithm on the whole data set. As well, we develop some new potential applications of the MS and SCMS algorithms.
These applications involve finding straight lines in digital images; pre-processing data before applying locally linear embedding (LLE) and ISOMAP for dimensionality reduction; noisy source vector quantization, where the clean data need to be estimated before the quantization step; improving the performance of kernel regression in certain situations; and skeletonization of digitally stored handwritten characters. / Thesis (Ph.D, Mathematics & Statistics) -- Queen's University, 2013-09-30 18:01:12.959
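The core MS iteration described above, repeatedly moving a point to the kernel-weighted mean of the data, can be sketched as follows (a generic Gaussian-kernel version with illustrative names, not the dissertation's code):

```python
import numpy as np

def mean_shift_mode(x0, data, bandwidth=1.0, tol=1e-6, max_iter=500):
    """One mean shift trajectory: move x to the Gaussian-kernel-weighted
    mean of the data until the shift falls below tol (a mode of the KDE)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        d2 = ((data - x) ** 2).sum(axis=1)
        w = np.exp(-d2 / (2 * bandwidth ** 2))       # kernel weights
        x_new = (w[:, None] * data).sum(axis=0) / w.sum()
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# two well-separated clusters; starting near one cluster finds its mode
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(5, 0.3, (50, 2))])
mode = mean_shift_mode([0.5, 0.5], data, bandwidth=0.5)
```

The monotonicity result mentioned in the abstract says that the kernel density estimate evaluated along this trajectory never decreases, which is what makes the stopping criterion on consecutive shifts sound.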
223

Fundamentação eletromiográfica do método de pré-exaustão no treinamento de força / Electromyography as a basis to pre-exhaustion method in strength training

Leite, Allan Brennecke 03 April 2007 (has links)
Contrary to the traditional strength-training recommendation, the pre-exhaustion method proposes beginning a training session with single-joint exercises and finishing it with multi-joint exercises. The aim of this study was to use EMG to investigate temporal and activation-intensity parameters of the pectoralis major (PM), deltoid (DA) and triceps brachii (TB) muscles that could provide a basis for applying the pre-exhaustion method at 10RM in the bench press and fly exercises. Two experimental protocols were compared: P1) the pre-exhaustion method; P2) the traditional recommendation. Activation intensity based on the RMS value, as well as its relationship with the duration of muscular contraction expressed as intensity bands, showed no statistically significant differences for PM. For DA, there were no significant differences between protocols in activation intensity when the repetitions were analyzed together; however, when each repetition was analyzed separately, this muscle showed a statistically significant increase in activation intensity in P1, as well as greater recruitment in the 80-100% MVIC band. For TB, activation intensity was significantly greater in P1 than in P2 in all forms of analysis. The results showed that the motor system increased its dependence on TB as an alternative strategy for attempting to reach 10RM in the bench press in P1. It is therefore possible to state that the pre-exhaustion method may be efficient at imposing a greater neural stimulus on small accessory muscle groups during a movement, rather than on the intended prime mover. However, these findings also indicate that the effects of the pre-exhaustion method cannot yet be affirmed categorically: over the course of the P1 set there was no significant increase in the activation intensity of any single muscle, nor in the intensity bands, as there was in P2. The muscles in P1 thus began the set at a higher intensity level than in P2, because they had been stimulated beforehand.
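Activation intensity in this study is quantified by the RMS value of the EMG signal. A generic moving-window RMS, with illustrative parameters rather than the study's processing pipeline, might be computed as:

```python
import numpy as np

def moving_rms(signal, window):
    """Moving-window RMS of an EMG signal: the square root of the mean
    of squared samples inside each window position."""
    s2 = np.asarray(signal, dtype=float) ** 2
    kernel = np.ones(window) / window
    return np.sqrt(np.convolve(s2, kernel, mode='valid'))

# sanity check: a constant-amplitude sine has RMS = A / sqrt(2)
# over windows spanning whole periods
t = np.linspace(0, 1, 1000, endpoint=False)   # 1 s at 1000 samples/s
sig = 2.0 * np.sin(2 * np.pi * 50 * t)        # 50 Hz, amplitude 2
rms = moving_rms(sig, window=200)             # 10 full periods per window
```

Comparing such RMS trajectories between protocols, normalized to a maximal voluntary isometric contraction (MVIC), is what underlies the intensity bands reported above.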
224

Hedging no modelo com processo de Poisson composto / Hedging in compound Poisson process model

Sung, Victor Sae Hon 07 December 2015 (has links)
An investor who trades assets, hoping that his capital will generate profits, is subject to the economic risks of any trade, since there is no certainty about an asset's appreciation or depreciation. Hence the futures market, in which contracts can be traded in order to hedge against the risk of excessive losses or gains, making the purchase or sale of assets fair for both sides. The goal of this work is to study pure-jump Lévy processes of finite activity, also known as the compound Poisson model, and their applications. Introduced by the French mathematician Paul Pierre Lévy, Lévy processes have as their main feature the admission of jumps in their paths, something frequently observed in financial markets. We determine a hedging strategy in a market model with a compound Poisson process via the concept of mean-variance hedging and the dynamic programming principle.
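A compound Poisson path, i.e. a Poisson-distributed number of jumps at uniform times with i.i.d. jump sizes, can be simulated in a few lines. This is a generic sketch of the process itself, not the thesis's calibrated market model:

```python
import numpy as np

def compound_poisson_path(T, lam, jump_sampler, rng):
    """Sample one path of a compound Poisson process on [0, T]:
    N ~ Poisson(lam * T) jumps, jump times uniform on [0, T],
    i.i.d. jump sizes from jump_sampler; returns times and cumulative sums."""
    n = rng.poisson(lam * T)
    times = np.sort(rng.uniform(0.0, T, n))
    jumps = jump_sampler(rng, n)
    return times, np.cumsum(jumps)

rng = np.random.default_rng(2)
times, levels = compound_poisson_path(
    T=1.0, lam=100.0,
    jump_sampler=lambda rng, n: rng.normal(0.0, 1.0, n),
    rng=rng)
# E[X_T] = lam * T * E[jump] and Var[X_T] = lam * T * E[jump^2]
```

Between jumps the path is constant, which is the "pure jump, finite activity" character the abstract refers to; mean-variance hedging then seeks the self-financing strategy minimizing the expected squared hedging error under this dynamics.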
225

Dekomponeringsanalys av personbilstrafikens CO2-utsläpp i Sverige 1990–2015 / Decomposition analysis of CO2 emissions from passenger-car traffic in Sweden 1990–2015

Kalla, Christelle January 2019 (has links)
By 2045 Sweden is to reach net-zero territorial emissions, and by 2030 emissions from the transport sector are to be reduced by 70% compared with 2010. Road traffic accounts for a third of Sweden's total greenhouse gas emissions. To achieve the climate targets, the best-suited policy instruments and measures should be prioritized. A systematic investigation of the factors that drive the change in emissions can guide decision-makers in allocating resources where they do the most good. Decomposition analysis is a suitable method for this purpose, since the effects of several factors can be separated and measured. Five additive LMDI-I decomposition analyses were performed on the change in fossil CO2 emissions from passenger cars in Sweden between 1990 and 2015. The factors investigated were population, cars per capita, fuel technologies, engine sizes, distance travelled per car, emission factors and biofuel share. Data from the emissions model HBEFA, the Swedish Transport Administration and Statistics Sweden were used in the analyses. Over the period 1990-2015 CO2 emissions fell, and the decomposition analyses showed that all of the factors contributed to the change. Ranked by the share each factor's effect made up of the absolute sum of all effects, they were: distance travelled per car (35%), fuel technologies (15%), population (15%), cars per capita (13%), emission factors (11%), biofuel share (7%) and engine sizes (5%). Distance travelled per car, emission factors, biofuel share and engine sizes reduced emissions; fuel technologies, population and cars per capita increased them. The results can serve as an indication of which factors may influence future emissions the most and where measures should be taken. Suggested measures are incentives for more sustainable modes of transport, increasing the share of lower-emission cars in the fleet, and using more biofuel.
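The additive LMDI-I method used here attributes the total change in a product of factors to one exactly-additive effect per factor via the logarithmic mean. A minimal sketch with invented toy numbers (not the study's data or factor definitions):

```python
import math

def logmean(a, b):
    """Logarithmic mean L(a, b) = (a - b) / ln(a / b), with L(a, a) = a."""
    return a if a == b else (a - b) / math.log(a / b)

def lmdi_additive(factors_0, factors_T):
    """Additive LMDI-I: decompose the change in V = product of factors
    into one effect per factor; the effects sum exactly to V_T - V_0."""
    v0 = math.prod(factors_0)
    vT = math.prod(factors_T)
    L = logmean(vT, v0)
    return [L * math.log(fT / f0) for f0, fT in zip(factors_0, factors_T)]

# CO2 = population * cars-per-capita * km-per-car * emissions-per-km
# (toy numbers purely for illustration)
effects = lmdi_additive([8.5e6, 0.40, 12000, 180e-6],
                        [10.0e6, 0.48, 11000, 120e-6])
```

The exact additivity (no unexplained residual) is the property that makes LMDI-I preferred over Laspeyres-style decompositions for this kind of attribution.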
226

A comparison between the effects of black tea and rooibos on the iron status of primary school children / P. Breet

Breet, Petronella January 2003 (has links)
Thesis (M.Sc. (Nutrition))--North-West University, Potchefstroom Campus, 2004.
227

Numerical Methods for Continuous Time Mean Variance Type Asset Allocation

Wang, Jian January 2010 (has links)
Many optimal stochastic control problems in finance can be formulated in the form of Hamilton-Jacobi-Bellman (HJB) partial differential equations (PDEs). In this thesis, a general framework for solutions of HJB PDEs in finance is developed, with application to asset allocation. The numerical scheme has the following properties: it is unconditionally stable; convergence to the viscosity solution is guaranteed; there are no restrictions on the underlying stochastic process; it can be easily extended to include features as needed such as uncertain volatility and transaction costs; and central differencing is used as much as possible so that use of a locally second order method is maximized. In this thesis, continuous time mean variance type strategies for dynamic asset allocation problems are studied. Three mean variance type strategies: pre-commitment mean variance, time-consistent mean variance, and mean quadratic variation, are investigated. The numerical method can handle various constraints on the control policy. The following cases are studied: allowing bankruptcy (unconstrained case), no bankruptcy, and bounded control. In some special cases where analytic solutions are available, the numerical results agree with the analytic solutions. These three mean variance type strategies are compared. For the allowing bankruptcy case, analytic solutions exist for all strategies. However, when additional constraints are applied to the control policy, analytic solutions do not exist for all strategies. After realistic constraints are applied, the efficient frontiers for all three strategies are very similar. However, the investment policies are quite different. These results show that, in deciding which objective function is appropriate for a given economic problem, it is not sufficient to simply examine the efficient frontiers. 
Instead, the actual investment policies need to be studied in order to determine whether a particular strategy is applicable to a specific investment problem.
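The unconditional stability claimed for the scheme comes from implicit (backward Euler) timestepping with a monotone discretization. As a simplified illustration of that building block only — a linear diffusion term on a fixed grid, not the full nonlinear HJB equation — one implicit step could look like:

```python
import numpy as np

def implicit_step(v, dt, dx, sigma):
    """One backward Euler step for v_t = (sigma^2 / 2) v_xx with Dirichlet
    boundaries: solve (I + a*L) v_new = v, where L is the standard
    second-difference matrix. The system matrix is an M-matrix, so the
    step satisfies a discrete maximum principle for any dt."""
    n = len(v)
    a = 0.5 * sigma**2 * dt / dx**2
    A = np.eye(n)
    for i in range(1, n - 1):
        A[i, i - 1] = -a
        A[i, i] = 1 + 2 * a
        A[i, i + 1] = -a
    return np.linalg.solve(A, v)

# a timestep this large relative to dx^2 would blow up an explicit
# scheme, but the implicit step stays bounded by the initial data
x = np.linspace(0.0, 1.0, 51)
v0 = np.sin(np.pi * x)
v1 = implicit_step(v0, dt=1.0, dx=x[1] - x[0], sigma=0.4)
```

In the thesis's setting the same monotonicity (positive-coefficient) property is what guarantees convergence to the viscosity solution of the HJB equation; this sketch shows only the stability mechanism, with a dense solve standing in for the usual tridiagonal one.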
228

A Hybrid of Stochastic Programming Approaches with Economic and Operational Risk Management for Petroleum Refinery Planning under Uncertainty

Khor, Cheng Seong January 2006 (has links)
In view of the current situation of fluctuating high crude oil prices, it is now more important than ever for petroleum refineries to operate at an optimal level in the present dynamic global economy. Acknowledging the shortcomings of deterministic models, this work proposes a hybrid of stochastic programming formulations for an optimal midterm refinery planning that addresses three factors of uncertainties, namely price of crude oil and saleable products, product demand, and production yields. An explicit stochastic programming technique is utilized by employing compensating slack variables to account for violations of constraints in order to increase model tractability. Four approaches are considered to ensure both solution and model robustness: (1) the Markowitz’s mean–variance (MV) model to handle randomness in the objective coefficients of prices by minimizing variance of the expected value of the random coefficients; (2) the two-stage stochastic programming with fixed recourse approach via scenario analysis to model randomness in the right-hand side and left-hand side coefficients by minimizing the expected recourse penalty costs due to constraints’ violations; (3) incorporation of the MV model within the framework developed in Approach 2 to minimize both the expectation and variance of the recourse costs; and (4) reformulation of the model in Approach 3 by adopting mean-absolute deviation (MAD) as the risk metric imposed by the recourse costs for a novel application to the petroleum refining industry. A representative numerical example is illustrated with the resulting outcome of higher net profits and increased robustness in solutions proposed by the stochastic models.
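The interplay of Approaches 2 and 3 above — an expected recourse cost plus a Markowitz-style variance penalty on that cost — can be illustrated on a deliberately tiny newsvendor-style problem. All numbers are invented and the problem is solved by grid search rather than the thesis's LP formulation:

```python
import numpy as np

def recourse_objective(x, demands, probs, unit_cost, penalty, lam):
    """First-stage cost plus expected recourse (shortfall penalty) cost,
    plus lam times the variance of the recourse cost (MV criterion)."""
    shortfall = np.maximum(demands - x, 0.0)
    recourse = penalty * shortfall
    mean_r = probs @ recourse
    var_r = probs @ (recourse - mean_r) ** 2
    return unit_cost * x + mean_r + lam * var_r

# three demand scenarios; grid-search the first-stage decision x
demands = np.array([80.0, 100.0, 130.0])
probs = np.array([0.3, 0.5, 0.2])
grid = np.linspace(0.0, 150.0, 301)

best_neutral = grid[np.argmin(
    [recourse_objective(x, demands, probs, 2.0, 5.0, 0.0) for x in grid])]
best_averse = grid[np.argmin(
    [recourse_objective(x, demands, probs, 2.0, 5.0, 0.05) for x in grid])]
```

Raising the risk-aversion weight pushes the first-stage plan toward covering the worse demand scenarios, which is the "robust solution" effect the hybrid formulations aim for; swapping the variance term for mean absolute deviation gives the Approach 4 variant.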
230

A Bidirectional LMS Algorithm for Estimation of Fast Time-Varying Channels

Yapici, Yavuz 01 May 2011 (has links) (PDF)
Effort to estimate unknown time-varying channels as a part of high-speed mobile communication systems is of interest especially for next-generation wireless systems. The high computational complexity of the optimal Wiener estimator usually makes its use impractical in fast time-varying channels. As a powerful candidate, the adaptive least mean squares (LMS) algorithm offers a computationally efficient solution with its simple first-order weight-vector update equation. However, the performance of the LMS algorithm deteriorates in time-varying channels as a result of the eigenvalue disparity, i.e., spread, of the input correlation matrix in such channels. In this work, we incorporate the LMS algorithm into the well-known bidirectional processing idea to produce an extension called the bidirectional LMS. This algorithm is shown to be robust to the adverse effects of time-varying channels such as large eigenvalue spread. The associated tracking performance is observed to be very close to that of the optimal Wiener filter in many cases, and the bidirectional LMS algorithm is therefore referred to as near-optimal. The computational complexity is increased by the bidirectional employment of the LMS algorithm, but nevertheless remains significantly lower than that of the optimal Wiener filter. The tracking behavior of the bidirectional LMS algorithm is also analyzed, and a steady-state, step-size-dependent mean square error (MSE) expression is derived for single-antenna flat-fading channels with various correlation properties. The analysis is then generalized to single-antenna frequency-selective channels, where the so-called independence assumption is no longer applicable due to the channel memory at hand, and then to multi-antenna flat-fading channels. The optimal selection of the step-size values is also presented using the results of the MSE analysis.
The numerical evaluations show a very good match between the theoretical and the experimental results under various scenarios. The tracking analysis of the bidirectional LMS algorithm is believed to be novel in the sense that although there are several works in the literature on bidirectional estimation, none of them provides a theoretical analysis of the underlying estimators. An iterative channel estimation scheme is also presented as a more realistic application for each of the estimation algorithms and channel models under consideration. As a result, the bidirectional LMS algorithm is observed to be very successful for this real-life application with its increased but still practical level of complexity, near-optimal tracking performance and robustness to imperfect initialization.
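The first-order LMS update and a simple forward/backward combination can be sketched as follows. The averaging rule here is a simplification for illustration, and the combining actually used in the thesis may differ:

```python
import numpy as np

def lms(X, d, mu):
    """Standard LMS over regression pairs (X[n], d[n]):
    w <- w + mu * e * x, returning the whole weight trajectory."""
    w = np.zeros(X.shape[1])
    traj = np.zeros_like(X)
    for n in range(len(d)):
        e = d[n] - w @ X[n]      # a priori estimation error
        w = w + mu * e * X[n]    # first-order weight-vector update
        traj[n] = w
    return traj

def bidirectional_lms(X, d, mu):
    """Run LMS forward and over the time-reversed data, then average
    the two trajectories sample by sample (simplified combining rule)."""
    fwd = lms(X, d, mu)
    bwd = lms(X[::-1], d[::-1], mu)[::-1]
    return 0.5 * (fwd + bwd)

# toy setup: identify a static 2-tap channel h from noisy observations
rng = np.random.default_rng(3)
N = 2000
x = rng.normal(size=N + 1)
X = np.column_stack([x[1:], x[:-1]])   # regressor rows [x[n], x[n-1]]
h = np.array([1.0, -0.5])
d = X @ h + 0.01 * rng.normal(size=N)
w_mid = bidirectional_lms(X, d, mu=0.05)[N // 2]  # both runs converged here
```

The benefit of the backward pass shows up for genuinely time-varying channels, where averaging two runs that approach each sample from opposite directions reduces the lag of a single forward-only adaptation; the static channel here only checks that the combination converges to the true taps.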
