161
Burn-in and warranty processes in coherent systems under the general lifetime model. Nelfi Gertrudis Gonzalez Alvarez, 9 October 2009.
In this work we consider three main topics. In the first two, we generalize classical results of reliability theory on the optimization of burn-in procedures and warranty policies, using the general lifetime model of a coherent system observed at the component level, and we extend the notions of a bathtub-shaped failure rate and of the general failure model to progressively measurable processes under the complete pre-t-history of the system's components. A monotone stopping rule is applied within the proposed optimization methodology. In the third topic, we model the discounted warranty costs of a coherent system minimally repaired at the component level, propose a martingale estimator of the expected warranty cost over a fixed warranty period, and establish its asymptotic properties by means of the martingale central limit theorem.
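The discounted-warranty-cost idea above admits a compact illustration. For a single component minimally repaired under a nonhomogeneous Poisson failure process with intensity λ(t), the expected discounted cost over a warranty period [0, w] is ∫₀ʷ e^(−δt)·c·λ(t) dt. The sketch below is a toy check of a Monte Carlo estimate against numerical integration, not the thesis's coherent-system model; the intensity, cost, and discount values are invented for illustration.

```python
import math
import random

def expected_discounted_cost(lam, cost, delta, w, n=10_000):
    """Midpoint-rule value of the integral of exp(-delta*t)*cost*lam(t) over [0, w]."""
    h = w / n
    return sum(math.exp(-delta * (i + 0.5) * h) * cost * lam((i + 0.5) * h) * h
               for i in range(n))

def simulate_discounted_cost(lam, lam_max, cost, delta, w, n_paths=20_000, seed=1):
    """Monte Carlo estimate: minimal repairs follow a nonhomogeneous Poisson
    process with intensity lam, simulated by thinning against the constant
    bound lam_max; a repair at time t costs cost * exp(-delta * t)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_paths):
        t = 0.0
        while True:
            t += rng.expovariate(lam_max)
            if t > w:
                break
            if rng.random() < lam(t) / lam_max:   # accept candidate failure epoch
                total += cost * math.exp(-delta * t)
    return total / n_paths

# Increasing (post-burn-in) failure intensity, invented for illustration.
intensity = lambda t: 0.5 * t
```

With this linear intensity the integral has a closed form, so the two estimates can be cross-checked directly.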
162
Towards a blind restoration method of hyperspectral images. Zhang, Mo, 6 December 2018.
In this thesis we develop a blind restoration method for blurred and noisy single-component images that requires no prior knowledge. The manuscript consists of three chapters. The first chapter reviews the state of the art: it first discusses optimization approaches for solving the restoration problem, then analyses the main restoration methods, called semi-blind because they require a minimum of prior knowledge; five of these methods are selected for evaluation. The second chapter compares the performance of the methods selected in the first chapter. The main objective criteria for assessing the quality of restored images are presented, and among them the L1 norm of the estimation error is selected. A comparative study on a database of monochrome images, artificially degraded by two blur kernels of different support sizes and three noise levels, identifies the two most relevant methods. The first relies on a single-scale alternating approach in which the PSF and the image are estimated together. The second uses a hybrid multi-scale approach that first alternately estimates the PSF and a latent image and then, in a subsequent sequential step, restores the image; in the comparative study the advantage goes to the latter. The performance of these two methods serves as the reference against which the proposed method is compared. The third chapter presents the proposed method. We sought to make the retained hybrid approach fully blind while improving the estimation quality of both the PSF and the restored image. A first series of contributions concerns the redefinition of the scales, the initialization of the latent image at each scale level, the evolution of the parameters used to select the relevant contours supporting the PSF estimation, and the definition of a blind stopping criterion. A second series addresses the blind estimation of the two regularization parameters involved, so that they need not be fixed empirically; each parameter is associated with a distinct cost function, one for the PSF estimation and one for the latent-image estimation. In the subsequent sequential step, we refine the support of the PSF estimated in the alternating step before using it to restore the image; at this stage the only prior knowledge required is an upper bound on the PSF support. Evaluations on monochrome and hyperspectral images, artificially degraded by several motion-type blurs of different support sizes, show a clear improvement in restoration quality over the two best state-of-the-art approaches retained.
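The single-scale alternating scheme mentioned in the abstract can be sketched on a 1-D toy problem: given an observation y = conv(x, k) + noise, alternate ridge-regularized gradient steps on the latent signal x and the PSF k, projecting k onto the nonnegative unit-sum simplex. This is a minimal illustrative sketch, not the thesis's multi-scale method; the step sizes, regularization weights, and test signal are all invented.

```python
def conv(signal, kernel):
    """'Same'-size, zero-padded 1-D convolution (odd-length kernel)."""
    n, m = len(signal), len(kernel)
    half = m // 2
    out = []
    for i in range(n):
        acc = 0.0
        for j in range(m):
            k = i + j - half
            if 0 <= k < n:
                acc += signal[k] * kernel[j]
        out.append(acc)
    return out

def blind_deconv(y, kernel_size=3, iters=200, lr_x=0.1, lr_k=0.01,
                 reg_x=1e-3, reg_k=1e-3):
    """Alternate gradient steps on the latent signal x and the PSF k so that
    conv(x, k) fits the observation y (ridge-regularized least squares)."""
    n = len(y)
    half = kernel_size // 2
    x = list(y)                              # latent signal: start from the data
    k = [1.0 / kernel_size] * kernel_size    # PSF: flat, uninformative start
    for _ in range(iters):
        # x-step: the gradient is the residual correlated with the PSF
        r = [a - b for a, b in zip(conv(x, k), y)]
        gx = conv(r, k[::-1])
        x = [xi - lr_x * (gi + reg_x * xi) for xi, gi in zip(x, gx)]
        # k-step: the gradient couples the residual with the latent signal
        r = [a - b for a, b in zip(conv(x, k), y)]
        k = [kj - lr_k * (sum(r[i] * x[i + j - half] for i in range(n)
                              if 0 <= i + j - half < n) + reg_k * kj)
             for j, kj in enumerate(k)]
        # project the PSF: nonnegative entries summing to one
        k = [max(kj, 0.0) for kj in k]
        s = sum(k) or 1.0
        k = [kj / s for kj in k]
    return x, k
```

The projection step resolves the scale ambiguity between x and k that makes blind deconvolution ill-posed; the multi-scale refinements and blind parameter selection described in the abstract address what this toy leaves open.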
163
Stochastic volatility Libor modeling and efficient algorithms for optimal stopping problems. Ladkau, Marcel, 12 July 2016.
This thesis deals with several aspects of financial mathematics. An extended Libor market model is considered that offers enough flexibility to calibrate accurately to market data for caplets and swaptions. The valuation of more complex financial derivatives, for instance by simulation, is then addressed; in high dimensions such simulations can be very time-consuming, and possible complexity reductions, such as factor reduction, are shown. In addition, the well-known Andersen simulation scheme is extended from one to several dimensions, using moment matching to approximate the volatility process in a Heston model; the resulting improved convergence of the whole process yields a reduced complexity. The thesis further considers the valuation of American options as an optimal stopping problem. For an efficient evaluation of these options, particularly in high dimensions, a simulation-based approach offering dimension-independent convergence is often the only practicable solution. A new variance-reduction method given by the multilevel idea is applied to this approach: a lower bound for the option price is obtained using multilevel policy iteration, convergence rates for the simulation of the option price are derived, and a detailed complexity analysis is presented. Finally, the valuation of American options under model uncertainty is examined, lifting the restriction of considering a single probabilistic model: different models may be plausible and may lead to different option values. This approach leads to a nonlinear generalized expectation functional. A generalized Snell envelope is obtained, enabling a backward recursion via the Bellman principle, and a numerical algorithm provides lower and upper price bounds.
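The simulation-based lower bounds that multilevel policy iteration improves upon can be illustrated by the standard Longstaff-Schwartz least-squares Monte Carlo method. The sketch below prices a Bermudan put under Black-Scholes dynamics with a quadratic regression basis; it is a textbook baseline, not the thesis's multilevel algorithm, and the parameter values are the usual benchmark choices, assumed here for illustration.

```python
import math
import random

def lsm_bermudan_put(s0=36.0, strike=40.0, r=0.06, sigma=0.2, T=1.0,
                     steps=25, paths=10_000, seed=0):
    """Longstaff-Schwartz least-squares Monte Carlo lower bound for a
    Bermudan put under Black-Scholes dynamics."""
    rng = random.Random(seed)
    dt = T / steps
    disc = math.exp(-r * dt)
    drift = (r - 0.5 * sigma ** 2) * dt
    vol = sigma * math.sqrt(dt)
    # simulate geometric Brownian motion paths
    s = [[s0] * (steps + 1) for _ in range(paths)]
    for p in range(paths):
        for t in range(1, steps + 1):
            s[p][t] = s[p][t - 1] * math.exp(drift + vol * rng.gauss(0.0, 1.0))
    payoff = lambda x: max(strike - x, 0.0)
    cash = [payoff(s[p][steps]) for p in range(paths)]
    for t in range(steps - 1, 0, -1):
        cash = [c * disc for c in cash]          # discount one step back to t
        itm = [p for p in range(paths) if payoff(s[p][t]) > 0.0]
        if len(itm) < 3:
            continue
        # regress continuation value on (1, s, s^2) over in-the-money paths
        beta = _lstsq([[1.0, s[p][t], s[p][t] ** 2] for p in itm],
                      [cash[p] for p in itm])
        for p in itm:
            cont = beta[0] + beta[1] * s[p][t] + beta[2] * s[p][t] ** 2
            if payoff(s[p][t]) > cont:           # exercise beats continuation
                cash[p] = payoff(s[p][t])
    return disc * sum(cash) / paths

def _lstsq(A, b):
    """Solve the 3-parameter normal equations by Gaussian elimination."""
    n = len(A)
    m = [[sum(A[i][u] * A[i][v] for i in range(n)) for v in range(3)]
         for u in range(3)]
    rhs = [sum(A[i][u] * b[i] for i in range(n)) for u in range(3)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda row: abs(m[row][col]))
        m[col], m[piv] = m[piv], m[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for row in range(col + 1, 3):
            f = m[row][col] / m[col][col]
            for v in range(col, 3):
                m[row][v] -= f * m[col][v]
            rhs[row] -= f * rhs[col]
    x = [0.0, 0.0, 0.0]
    for row in (2, 1, 0):
        x[row] = (rhs[row] - sum(m[row][v] * x[v]
                                 for v in range(row + 1, 3))) / m[row][row]
    return x
```

Because the regression-based exercise policy is suboptimal, the resulting price is a lower bound, which is exactly the kind of estimate the multilevel idea accelerates.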
164
The optimal timing for selling newly built housing: coordinating financing decisions with real options. Li, Philip K.C. (李克誠), date unknown.
In Taiwan's real-estate development market, housing has traditionally been sold through a presale system, a special institution shaped by the political and economic conditions of its time, motivated mainly by the need to raise sufficient working capital from the market. In recent years, however, the market has gradually changed and a considerable number of projects now sell completed buildings: when the real-estate market is rising, delaying the sale can earn a project a larger return, and as project financing has become easier to obtain, funding is no longer the binding constraint. Presale may therefore no longer be the only, or the optimal, sales mode, yet developers still follow the old reasoning and use the financing ratio as the basis for timing decisions. This study investigates the choice of optimal sale timing, whether financing decisions affect that choice, and how the optimal sale timing and the option value change under different market conditions.

The study analyses the optimal sale timing of newly built housing with a real-options model, applying models derived in earlier work rather than deriving new ones. We first build a model of project revenue and a real-options decision model to represent a developer's operating environment, then feed randomly simulated house prices and financing rates into the models to generate project revenues under different market conditions. By comparing the revenues produced by different decision values, we examine the optimal sale timing of newly built housing and the value of the option. Finally, we perform sensitivity analyses, varying each model variable while holding the others fixed, to understand how each exogenous variable affects the optimal sale timing and the real-option value, and what implications should be noted.
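The simulation design described above can be caricatured in a few lines: simulate house prices as a geometric Brownian motion, model the carrying cost of an unsold project as discounting at the financing rate, and compare selling now, selling at the horizon, and a flexible threshold policy. This is a hedged toy sketch, not the thesis's calibrated model; every parameter value and the threshold rule itself are invented for illustration.

```python
import math
import random

def sale_timing_simulation(p0=1.0, mu=0.03, sigma=0.15, finance_rate=0.05,
                           horizon=2.0, steps=24, threshold=1.1,
                           paths=20_000, seed=7):
    """Compare three sale policies for a completed building: sell now,
    sell at the horizon, or sell at the first review date whose price
    exceeds threshold * p0 (else at the horizon).  Prices follow a GBM;
    carrying the unsold project is modelled as discounting revenue at
    the financing rate."""
    rng = random.Random(seed)
    dt = horizon / steps
    sell_now = p0
    at_horizon = 0.0
    flexible = 0.0
    for _ in range(paths):
        price, sold = p0, None
        for t in range(1, steps + 1):
            z = rng.gauss(0.0, 1.0)
            price *= math.exp((mu - 0.5 * sigma ** 2) * dt
                              + sigma * math.sqrt(dt) * z)
            if sold is None and price >= threshold * p0:
                sold = math.exp(-finance_rate * t * dt) * price
        terminal = math.exp(-finance_rate * horizon) * price
        at_horizon += terminal
        flexible += sold if sold is not None else terminal
    return sell_now, at_horizon / paths, flexible / paths
```

When the price drift is below the financing rate, the discounted price is a supermartingale, so the flexible policy is worth at least as much as committing to sell at the horizon; sensitivity analyses like those in Chapter 5 would vary each parameter in turn.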
Chapter 1: Introduction
Section 1: Research motivation and objectives
Section 2: Research scope and limitations
Section 3: Research framework
Chapter 2: Industry analysis and case interviews
Section 1: Sale timing
Section 2: Real-estate finance
Section 3: Implications of the literature review and case studies for this study
Chapter 3: Literature review
Section 1: Optimal sale-timing models
Section 2: Implications of the literature review and case studies for this study
Chapter 4: Model construction and design
Section 1: Optimal sale timing
Section 2: Research design
Chapter 5: Analysis of empirical results
Section 1: Financing decisions and optimal sale timing
Section 2: Sensitivity analysis of the real-option value
Section 3: Sensitivity analysis of the optimal sale timing
Chapter 6: Conclusions and recommendations
Section 1: Implications of the research results
Section 2: Recommendations
References
Chinese references
English references
165
Selected Problems in Financial Mathematics. Ekström, Erik, January 2004.
This thesis, consisting of six papers and a summary, studies the area of continuous time financial mathematics. A unifying theme for many of the problems studied is the implications of possible mis-specifications of models. Intimately connected with this question is, perhaps surprisingly, convexity properties of option prices. We also study qualitative behavior of different optimal stopping boundaries appearing in option pricing.

In Paper I a new condition on the contract function of an American option is provided under which the option price increases monotonically in the volatility. It is also shown that American option prices are continuous in the volatility.

In Paper II an explicit pricing formula for the perpetual American put option in the Constant Elasticity of Variance model is derived. Moreover, different properties of this price are studied.

Paper III deals with the Russian option with a finite time horizon. It is shown that the value of the Russian option solves a certain free boundary problem. This information is used to analyze the optimal stopping boundary.

A study of perpetual game options is performed in Paper IV. One of the main results provides a condition under which the value of the option is increasing in the volatility.

In Paper V options written on several underlying assets are considered. It is shown that, within a large class of models, the only model for the stock prices that assigns convex option prices to all convex contract functions is geometric Brownian motion.

Finally, in Paper VI it is shown that the optimal stopping boundary for the American put option is convex in the standard Black-Scholes model.
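The monotonicity-in-volatility results of Papers I and IV can be observed numerically with a standard Cox-Ross-Rubinstein binomial tree for the American put. This is a textbook pricing sketch used only to illustrate the phenomenon, not the papers' proof technique; the parameter values are common benchmark choices, assumed here.

```python
import math

def american_put_binomial(s0, strike, r, sigma, T, steps=200):
    """Cox-Ross-Rubinstein binomial-tree price of an American put."""
    dt = T / steps
    u = math.exp(sigma * math.sqrt(dt))
    d = 1.0 / u
    disc = math.exp(-r * dt)
    q = (math.exp(r * dt) - d) / (u - d)      # risk-neutral up-probability
    # terminal payoffs at the leaves (j = number of up-moves)
    values = [max(strike - s0 * u ** j * d ** (steps - j), 0.0)
              for j in range(steps + 1)]
    # roll back, taking the maximum of exercise and continuation at each node
    for t in range(steps - 1, -1, -1):
        values = [max(max(strike - s0 * u ** j * d ** (t - j), 0.0),
                      disc * (q * values[j + 1] + (1 - q) * values[j]))
                  for j in range(t + 1)]
    return values[0]
```

Raising the volatility raises the tree price of the put, consistent with the monotonicity condition on the (convex) put contract function.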
167
Revision Moment for the Retail Decision-Making System. Juszczuk, Agnieszka Beata; Tkacheva, Evgeniya, January 2010.
In this work we address the problems of loan-origination decision-making systems. In accordance with the basic principles of the loan-origination process, we consider the main rules for estimating a client's parameters, a change-point problem for given data, and a disorder-moment detection problem for real-time observations. In the first part of the work, the main principles of parameter estimation are given, and the change-point problem is considered for a given sample in discrete and continuous time using the maximum-likelihood method. In the second part, the disorder-moment detection problem for real-time observations is treated as a disorder problem for a non-homogeneous Poisson process. The corresponding optimal stopping problem is reduced to a free-boundary problem with a complete analytical solution for the case in which the intensity of defaults increases. A scheme for real-time detection of the disorder moment is then given.
168
Physical-layer security: practical aspects of channel coding and cryptography. Harrison, Willie K., 21 June 2012.
In this work, a multilayer security solution for digital communication systems is provided by considering the joint effects of physical-layer security channel codes and application-layer cryptography. We address two problems: first, the cryptanalysis of error-prone ciphertext; second, the design of a practical physical-layer security coding scheme. To our knowledge, the noisy-ciphertext attack is a novel cryptographic attack model: cryptanalysis traditionally assumes that the attacker possesses the ciphertext exactly, but with the ever-increasing amount of viable research in physical-layer security it becomes essential to perform the analysis when the ciphertext is unreliable. We do so for the simple substitution cipher using an information-theoretic framework, and for stream ciphers by characterizing the success or failure of fast-correlation attacks when the ciphertext contains errors. We then present a practical coding scheme that can be used in conjunction with cryptography to ensure positive error rates in an eavesdropper's observed ciphertext, while guaranteeing error-free communications for legitimate receivers. Our codes, called stopping set codes, provide a blanket of security that covers nearly all possible system configurations and channel parameters; they require a public authenticated feedback channel. The solutions to these two problems indicate the inherent strengthening of security that can be obtained by confusing an attacker about the ciphertext, and then give a practical method for providing that confusion. The aggregate result is a multilayer security solution for transmitting secret data that showcases security enhancements over standalone cryptography.
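A small simulation makes the premise concrete: because a substitution cipher's key is a bijection, every symbol error in the eavesdropper's observed ciphertext becomes a symbol error in the recovered plaintext, so a positive channel error rate directly degrades the attacker's view. This is a hedged toy demonstration, not the thesis's information-theoretic analysis; the alphabet, message, and error rate are invented.

```python
import random
import string

def make_key(rng):
    """Random substitution cipher over the lowercase alphabet."""
    letters = list(string.ascii_lowercase)
    shuffled = letters[:]
    rng.shuffle(shuffled)
    return dict(zip(letters, shuffled))

def encrypt(msg, key):
    return "".join(key[c] for c in msg)

def decrypt(ct, key):
    inv = {v: k for k, v in key.items()}
    return "".join(inv[c] for c in ct)

def corrupt(ct, p, rng):
    """Symbol-substitution channel: with probability p, replace a symbol
    by a uniformly chosen *different* symbol."""
    out = []
    for c in ct:
        if rng.random() < p:
            out.append(rng.choice([a for a in string.ascii_lowercase if a != c]))
        else:
            out.append(c)
    return "".join(out)

rng = random.Random(42)
key = make_key(rng)
msg = "thequickbrownfoxjumpsoverthelazydog" * 20
noisy = corrupt(encrypt(msg, key), p=0.2, rng=rng)
recovered = decrypt(noisy, key)
errors = sum(a != b for a, b in zip(recovered, msg))
print(errors / len(msg))   # close to the channel error rate p
```

Even an attacker holding the correct key recovers garbled plaintext at the channel's error rate, which is the effect the stopping set codes are designed to enforce for eavesdroppers only.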
169
The optimal stopping rule for estimating the number of shared species of two populations. 蔡政珈, date unknown.
In ecology and biology, the number of species is often used to measure the biodiversity of a population. For a single population, the best-known approach goes back to Good (1953), who pioneered estimation of the number of species based on the species observed exactly once in the sample; many later studies extended Good's idea into new estimation methods, for example the jackknife estimator of Burnham and Overton (1978) and the sample-coverage estimator of Chao and Lee (1992). By contrast, relatively few studies address the number of species shared by two populations, the best-known estimator being that of Chao et al. (2000).

In this study, we adapt Good's (1953) idea of estimating the probability of unseen species to the probability of unseen shared species, and we extend the jackknife method of Burnham and Overton (1978) to construct a first-order jackknife estimator of the number of shared species of two populations, together with its variance; a variance estimator is also derived from the multinomial distribution. Computer simulations and real data sets are used to evaluate the proposed method and to compare it with the estimator of Chao et al. (2000). Finally, we adapt the idea of Rasmussen and Starr (1979), who built an optimal stopping rule on sampling cost, apply it to the proposed estimator, and use simulation to relate the sampling cost to the evenness of the species distribution, which can serve as a basis for setting the stopping rule.
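The flavor of a first-order jackknife correction for shared species can be sketched as follows: start from the observed number of shared species and add terms driven by shared species that are singletons in one or both samples. The weights below are an illustrative assumption in the Burnham-Overton style, not the thesis's exact estimator, whose form the abstract does not give.

```python
from collections import Counter

def shared_species_jackknife(sample1, sample2):
    """Illustrative first-order jackknife-style estimate of the number of
    species shared by two populations, from two sequences of species labels."""
    c1, c2 = Counter(sample1), Counter(sample2)
    shared = set(c1) & set(c2)
    s_obs = len(shared)
    n1, n2 = len(sample1), len(sample2)
    # shared species that are singletons in one sample only, or in both
    f1p = sum(1 for sp in shared if c1[sp] == 1 and c2[sp] > 1)
    fp1 = sum(1 for sp in shared if c2[sp] == 1 and c1[sp] > 1)
    f11 = sum(1 for sp in shared if c1[sp] == 1 and c2[sp] == 1)
    return (s_obs
            + (n1 - 1) / n1 * f1p
            + (n2 - 1) / n2 * fp1
            + (n1 - 1) * (n2 - 1) / (n1 * n2) * f11)
```

Singleton shared species signal that further sampling would likely reveal more shared species, so the estimate exceeds the observed count exactly when such rare shared species are present.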
170
Detection of the Change Point and Optimal Stopping Time by Using Control Charts on Energy Derivatives. AL, Cihan; Koroglu, Kubra, January 2011.
No description available.