351

Modelling equity risk and extremal dependence: A survey of four African Stock Markets

Samuel, Richard Abayomi 18 May 2019 (has links)
Department of Statistics / MSc (Statistics) / The ripple effect of a stock market crash due to extremal dependence is a global issue receiving key attention, and it is at the core of all modelling efforts in risk management. Two methods of extreme value theory (EVT) were used in this study to model equity risk and extremal dependence in the tails of stock market indices from four African emerging markets: South Africa, Nigeria, Kenya and Egypt. The first is the "bivariate-threshold-excess model" and the second is the "point process approach". With regard to the univariate analysis, the first finding in the study shows, in descending hierarchy, that volatility with persistence is highest in the South African market, followed by the Egyptian market, then the Nigerian market and lastly the Kenyan equity market. In terms of risk hierarchy, the Egyptian EGX 30 market is the most risk-prone, followed by the South African JSE-ALSI market, then the Nigerian NIGALSH market, and the least risky is the Kenyan NSE 20 market. It is therefore concluded that risk is not a brainchild of volatility in these markets. For the bivariate modelling, the extremal dependence findings indicate that the African continent's regional equity markets present a huge investment platform for investors and traders, and offer tremendous opportunity for portfolio diversification and investment synergies between markets. These synergistic opportunities arise because the markets are asymptotically (extremally) independent, (very) weakly asymptotically dependent, or negatively dependent. This outcome is consistent with the findings of Alagidede (2008), who analysed these same markets using co-integration analysis. The bivariate-threshold-excess and point process models are appropriate for modelling the markets' risks. For modelling the extremal dependence, however, given the same marginal threshold quantile, the point process approach has more access to the extreme observations due to its wider sphere of coverage than the bivariate-threshold-excess model. / NRF
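To make the peaks-over-threshold machinery behind this abstract concrete, the sketch below fits a generalized Pareto distribution to losses above a high threshold and computes an empirical tail-dependence coefficient between two markets. It is a minimal illustration, not the thesis's bivariate-threshold-excess or point-process models; the synthetic Student-t "returns", the 95% threshold, and the index names in the comments are assumptions.

```python
# A minimal sketch (not the thesis's exact models): POT tail fit with a
# generalized Pareto distribution, plus an empirical estimate of the
# tail-dependence coefficient chi(q) between two markets.
import numpy as np
from scipy.stats import genpareto, rankdata

def gpd_var(losses, q_threshold=0.95, p=0.99):
    """VaR_p from a GPD fitted to exceedances over a high threshold."""
    u = np.quantile(losses, q_threshold)
    excess = losses[losses > u] - u
    xi, _, sigma = genpareto.fit(excess, floc=0)   # shape, loc(=0), scale
    zeta = len(excess) / len(losses)               # exceedance rate
    # Standard POT quantile formula; assumes the fitted xi is not 0.
    return u + (sigma / xi) * (((1 - p) / zeta) ** (-xi) - 1)

def chi_hat(x, y, q=0.95):
    """Empirical chi(q) = P(U > q | V > q) on rank-transformed margins."""
    u = rankdata(x) / (len(x) + 1)
    v = rankdata(y) / (len(y) + 1)
    return np.mean((u > q) & (v > q)) / np.mean(v > q)

rng = np.random.default_rng(0)
jse, egx = rng.standard_t(4, 2000), rng.standard_t(4, 2000)  # toy returns
print("VaR 99%:", gpd_var(-jse))          # losses = negated returns
print("chi(0.95):", chi_hat(-jse, -egx))  # near 0 under independence
```

A chi estimate near zero at high quantiles is what the abstract calls asymptotic (extremal) independence, the property driving the diversification conclusion.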
352

Implied Volatility Modelling, Tail Risk Premia, and Volatility Arbitrage Strategies

Al Wakil, Anmar 11 December 2017 (has links)
Volatility strategies have flourished since the Great Financial Crisis of 2008. Nevertheless, the recent catastrophic performance of such exchange-traded products has called into question their contribution to portfolio hedging and diversification. My thesis work aims to rethink and reinvent the philosophy of volatility strategies. From a preliminary empirical study based on expected utility theory, Chapter 1 diagnoses traditional volatility strategies based on buy-and-hold investments and passive replication of implied volatility. It shows that, although this type of hedging significantly outperforms traditional hedging, it is strongly inappropriate for risk-loving investors. Chapter 2 paves the way for a new generation of volatility strategies: active, option-based, and factor-based investing. Our analytical and empirical decomposition of the implied volatility smile into distinct, tradeable implied risk premia makes it possible to actively harvest the compensation for bearing higher-order risks. These insurance risk premia measure the pricing discrepancies between the risk-neutral and the physical probability distributions. Finally, Chapter 3 compares our factor-based investing approach to the volatility strategies usually employed in the hedge fund universe. Our essay clearly shows that our tail risk premia strategies are incremental determinants of hedge fund performance, in both the time series and the cross-section of returns. Hence, we show to what extent hedge fund alpha actually arises from selling crash insurance against tail risks.
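As a rough illustration of one tradeable premium of the kind Chapter 2 decomposes, the sketch below computes a variance risk premium as the gap between option-implied variance and subsequently realized variance. It is a much simpler stand-in for the author's smile decomposition; the 21-day horizon and the synthetic inputs are assumptions.

```python
# A simplified illustration (not the thesis's smile decomposition): the
# variance risk premium as the spread between option-implied variance and
# subsequently realized variance; selling variance "harvests" this spread.
import numpy as np

def realized_variance(returns, periods_per_year=252):
    """Annualized realized variance from daily log returns."""
    return periods_per_year * np.mean(np.asarray(returns) ** 2)

def variance_risk_premium(implied_vol_annual, daily_returns):
    """VRP = risk-neutral (implied) variance minus realized variance."""
    return implied_vol_annual ** 2 - realized_variance(daily_returns)

rng = np.random.default_rng(1)
daily = rng.normal(0.0, 0.15 / np.sqrt(252), 21)   # realized vol ~ 15%
print(variance_risk_premium(0.20, daily))           # > 0: an insurance premium
```

A persistently positive spread is the compensation for bearing variance risk; its crash-state losses are exactly the "selling crash insurance" mechanism the abstract describes.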
353

Information and Interaction: Identifying Safety-Relevant Requirements for Innovative Tail Lights

Mennig, Isis Maria 13 December 2017 (has links)
Humans perceive the world through their senses. Light makes the world visible and provides the visual system with important information about our environment. On the road, visual perception is supported by automotive lighting. Tail lights serve two main functions. Firstly, they improve the visibility of the vehicle for other road users at night and in bad weather. The rear position lamp conveys information about the presence, type and width of a vehicle; this information is necessary to assess and keep an appropriate distance to the vehicle in front. The second main function of tail lights is communication and interaction with the following traffic. The light signals of the brake light and the turn indicator draw attention to the vehicle in front and transmit specific information to other road users: decelerations in the longitudinal direction, changes in driving direction, and warnings of a possible source of danger. Every light signal must be clearly detectable and intuitively understandable in all situations. This doctoral thesis studied the potential influence of distinct tail light attributes on road traffic. An accident analysis showed that the majority of rear-end crashes could be prevented by appropriate speed and distance keeping as well as an adequate allocation of attention. Adaptive rear lighting shows potential to improve the perceptibility of the vehicle in poor visibility without excessive glare for the following driver. Evidence from the literature suggests that distance estimation between two vehicles can be influenced by the mounting height and position of tail lights as well as by the brightness of the rear position lamp. Traffic psychology experiments examined the influence of adaptive brightness control of the rear position lamp on human perception: continuously adjusting the brightness of the rear position lamp to the distance of the following vehicle had no effect on distance perception. To evaluate how tail light design affects the quality of distance estimation, two psychophysical experiments were conducted. They showed that adding a connecting line between two elements minimally changed the perceived distance between them. Transferred to tail lights, this implies a slight overestimation of the width of the vehicle rear with connected rear position lamps. This could have a small positive effect on the distance behavior of the following driver, who would slightly underestimate the distance due to this sensory illusion. Because light and movement are salient and thus have a high potential for capturing attention, another series of experiments assessed the effect of dynamic light functions. Reaction times to brake light deactivation were not further improved by adding dynamics to the brake light. However, dynamics in the rear position lamp produced different responses depending on the environmental conditions, the type of movement, and the speed of the animation. The development of salient and intuitively understandable light functions could therefore extend and differentiate the communication possibilities in road traffic, and deliberately directing the attention of other road users in critical situations through tail light functions could improve road safety. The aim of automotive lighting development is an innovative, universally accepted and safety-enhancing tail light design. Through an accident analysis and several perception experiments, this dissertation identified the safety-relevant requirements for tail lights and showed that various tail light attributes and functions have the potential to positively influence road safety.
354

Poisson hyperplane tessellation: Asymptotic probabilities of the zero and typical cells

Bonnet, Gilles 17 February 2017 (has links)
We consider the distribution of the zero and typical cells of a (homogeneous) Poisson hyperplane tessellation. We give a direct proof, adapted to our setting, of the well-known Complementary Theorem. We provide sharp bounds for the tail distribution of the number of facets. We also improve existing bounds for the tail distribution of size measurements of the cells, such as the volume or the mean width. We improve known results about the generalised D.G. Kendall problem, which asks about the shape of large cells. We also show that cells with many facets cannot be close to a lower-dimensional convex body. We tackle the much less studied problem of the number of facets and the shape of small cells. In order to obtain the results above, we also develop some purely geometric tools; in particular, we give new results concerning the polytopal approximation of an elongated convex body.
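A small simulation makes the zero cell tangible: the sketch below samples a Poisson line process in a disc, intersects the half-planes containing the origin, and counts the facets of the resulting zero cell. It only illustrates the object under study, under an arbitrary toy parameterization of the line process (Poisson line count, uniform distances and directions), and has no bearing on the bounds proved in the thesis.

```python
# A toy simulation, not the thesis's results: build the zero cell of a
# planar Poisson line tessellation and count its facets.
import numpy as np
from scipy.spatial import HalfspaceIntersection

rng = np.random.default_rng(2)
R, mean_lines = 10.0, 40                     # disc radius, expected line count
n = rng.poisson(mean_lines)
theta = rng.uniform(0.0, 2 * np.pi, n)       # line normal directions
p = rng.uniform(1e-6, R, n)                  # distances from the origin
normals = np.column_stack([np.cos(theta), np.sin(theta)])

# Half-plane containing the origin for line {x : <u, x> = p}: <u, x> - p <= 0.
halfspaces = np.column_stack([normals, -p])
# Bounding box so the cell stays bounded even with few lines.
box = np.array([[1, 0, -R], [-1, 0, -R], [0, 1, -R], [0, -1, -R]], float)
hs = HalfspaceIntersection(np.vstack([halfspaces, box]), np.zeros(2))

# A line contributes a facet iff some vertex of the cell lies on it.
verts = hs.intersections
on_line = np.abs(verts @ normals.T - p) < 1e-8    # (n_verts, n_lines)
print("facets of the zero cell:", int(on_line.any(axis=0).sum()))
```

Repeating this over many draws gives an empirical tail of the facet-count distribution, the quantity whose sharp bounds the thesis establishes analytically.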
355

Contributions to Modeling Spatial Extremal Events and Applications

Bassene, Aladji 06 May 2016 (has links)
In this thesis, we investigate nonparametric modeling of spatial extremes. Our results are based on the main framework of extreme value theory, thereby encompassing Pareto-type laws. This framework makes it possible to extend the study of extreme events to the spatial case, provided that the asymptotic properties of the proposed estimators satisfy the standard conditions of Extreme Value Theory (EVT) in addition to local conditions on the data structure itself. The literature offers a vast panorama of models for extreme events adapted to the structure of the data of interest. However, in the case of spatial extreme data, apart from max-stable models, few if any models address nonparametric estimation of the tail index or of extreme quantiles. We therefore extend existing work on estimating the tail index and quantiles under independent or temporally dependent data. The specificity of the methods studied lies in the fact that the asymptotic results of the proposed estimators take into account the spatial dependence structure of the data, which is far from trivial. This thesis is thus set in the context of spatial statistics of extremes, and it makes three main contributions. In the first contribution, we propose an estimator of the tail index of a heavy-tailed distribution within the framework of spatial data. This approach relies on the estimator of Hill (1975). The asymptotic properties of the introduced estimator are established when the spatial process is adequately approximated by a spatial M-dependent process or a spatial linear causal process, or when the process satisfies a strong mixing (alpha-mixing) condition. In practice, it is often useful to link the variable of interest Y with a covariate X. In this situation, the tail index depends on the observed value x of the covariate X and is called the conditional tail index. In most applications, the tail index is not the main quantity of interest but is used to estimate, for instance, extreme quantiles. The contribution of the second part is to adapt the tail index estimator introduced in the first part to the conditional framework and to use it to propose an estimator of conditional extreme quantiles. We examine so-called "fixed design" models, which correspond to the situation where the explanatory variable is deterministic, and we use the moving window approach to handle the covariate. We study the asymptotic behavior of the proposed estimators and give numerical results based on data simulated with the software "R". In the third part of this thesis, we extend the work of the second part to so-called "random design" models, for which the data are spatial observations of a pair (Y,X) of real random variables. For this model, we propose an estimator of the heavy tail index using the kernel method to handle the covariate, relying on an estimator of the conditional tail index belonging to the family introduced by Goegebeur et al. (2014b).
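The Hill estimator underlying the first contribution is easy to state in code. The sketch below implements the classical i.i.d. version; handling spatially dependent data (M-dependent, linear causal, or strongly mixing processes) is precisely what the thesis adds and is not reproduced here. The sample size and choice of k are illustrative assumptions.

```python
# A minimal sketch of the Hill (1975) tail-index estimator, the i.i.d.
# building block that the thesis extends to spatial data.
import numpy as np

def hill_estimator(x, k):
    """Hill estimate of the tail index gamma from the k largest observations."""
    xs = np.sort(np.asarray(x))[::-1]          # descending order statistics
    return np.mean(np.log(xs[:k] / xs[k]))     # mean log-spacing above xs[k]

rng = np.random.default_rng(3)
pareto = (1.0 - rng.uniform(size=5000)) ** -0.5   # Pareto tail, gamma = 0.5
print(hill_estimator(pareto, k=200))               # should be close to 0.5
```

The conditional version of parts two and three computes the same statistic on observations whose covariate falls in a window (fixed design) or weighted by a kernel in the covariate (random design).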
356

Some Contributions to Inferential Issues of Censored Exponential Failure Data

Han, Donghoon 06 1900 (has links)
In this thesis, we investigate several inferential issues regarding lifetime data from the exponential distribution under different censoring schemes. For reasons of time constraint and cost reduction, censored sampling is commonly employed in practice, especially in reliability engineering. Among various censoring schemes, progressive Type-I censoring provides not only the practical advantage of a known termination time but also greater flexibility to the experimenter in the design stage by allowing for the removal of test units at non-terminal time points. Hence, we first consider the inference for a progressively Type-I censored life-testing experiment with k uniformly spaced intervals. For small to moderate sample sizes, a practical modification is proposed to the censoring scheme in order to guarantee a feasible life-test under progressive Type-I censoring. Under this setup, we obtain the maximum likelihood estimator (MLE) of the unknown mean parameter and derive the exact sampling distribution of the MLE through the use of the conditional moment generating function, under the condition that the existence of the MLE is ensured. Using the exact distribution of the MLE as well as its asymptotic distribution and the parametric bootstrap method, we discuss the construction of confidence intervals for the mean parameter, and their performance is then assessed through Monte Carlo simulations. Next, we consider a special class of accelerated life tests, known as step-stress tests, in reliability testing. In a step-stress test, the stress levels increase discretely at pre-fixed time points, and this allows the experimenter to obtain information on the parameters of the lifetime distributions more quickly than under normal operating conditions. Here, we consider a k-step-stress accelerated life testing experiment with an equal step duration τ. In particular, the case of progressively Type-I censored data with a single stress variable is investigated. For small to moderate sample sizes, we introduce another practical modification to the model for a feasible k-step-stress test under progressive censoring, and the optimal τ is searched using the modified model. Next, we seek the optimal τ under the condition that the step-stress test proceeds to the k-th stress level, and the efficiency of this conditional inference is compared to the preceding models. In all cases, censoring is allowed at each stress-change point iτ, i = 1, 2, ..., k, and the problem of selecting the optimal τ is discussed using C-optimality, D-optimality, and A-optimality criteria. Moreover, when a test unit fails, there is often more than one fatal cause of failure, such as mechanical or electrical. Thus, we also consider simple step-stress models under Type-I and Type-II censoring when the lifetime distributions corresponding to the different risk factors are independently exponentially distributed. Under this setup, we derive the MLEs of the unknown mean parameters of the different causes under the assumption of a cumulative exposure model. The exact distributions of the MLEs of the parameters are then derived through the use of conditional moment generating functions. Using these exact distributions as well as the asymptotic distributions and the parametric bootstrap method, we discuss the construction of confidence intervals for the parameters and then assess their performance through Monte Carlo simulations. / Thesis / Doctor of Philosophy (PhD)
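The exponential-mean MLE that this thesis generalizes reduces, under censoring, to total time on test divided by the number of observed failures. The sketch below shows that estimator under ordinary (non-progressive) Type-I censoring with a parametric-bootstrap confidence interval, one of the interval methods the abstract mentions; the sample size, censoring time, and true mean are illustrative assumptions, and the progressive-removal and step-stress features of the thesis are omitted.

```python
# A sketch simplified from the thesis's progressive Type-I setting:
# exponential-mean MLE under ordinary Type-I censoring at time tau.
import numpy as np

def mle_type1(lifetimes, tau):
    """theta_hat = total time on test / number of observed failures."""
    t = np.minimum(lifetimes, tau)              # censor every unit at tau
    failures = np.sum(lifetimes <= tau)
    return np.sum(t) / failures                 # undefined if no failures

rng = np.random.default_rng(4)
theta, tau, n = 10.0, 8.0, 50
data = rng.exponential(theta, n)
theta_hat = mle_type1(data, tau)

# Parametric bootstrap CI: resample from Exp(theta_hat) and re-estimate.
boot = np.array([mle_type1(rng.exponential(theta_hat, n), tau)
                 for _ in range(2000)])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"MLE {theta_hat:.2f}, 95% bootstrap CI ({lo:.2f}, {hi:.2f})")
```

The possibility of zero observed failures is exactly why the thesis conditions on the existence of the MLE when deriving its exact distribution.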
357

Some Advanced Model Selection Topics for Nonparametric/Semiparametric Models with High-Dimensional Data

Fang, Zaili 13 November 2012 (has links)
Model and variable selection have attracted considerable attention in areas of application where datasets usually contain thousands of variables. Variable selection is a critical step to reduce the dimension of high-dimensional data by eliminating irrelevant variables. The general objective of variable selection is not only to obtain a cost-effective set of predictors but also to improve prediction and reduce prediction variance. We have made several contributions to this issue through a range of advanced topics: providing a graphical view of Bayesian Variable Selection (BVS), recovering sparsity in multivariate nonparametric models, and proposing a testing procedure for evaluating nonlinear interaction effects in a semiparametric model. To address the first topic, we propose a new Bayesian variable selection approach via the graphical model and the Ising model, which we refer to as the "Bayesian Ising Graphical Model" (BIGM). There are several advantages of our BIGM: it is easy to (1) employ the single-site updating and cluster updating algorithms, both of which are suitable for problems with small sample sizes and a larger number of variables, (2) extend this approach to nonparametric regression models, and (3) incorporate graphical prior information. In the second topic, we propose a Nonnegative Garrote on a Kernel machine (NGK) to recover the sparsity of input variables in smoothing functions. We model the smoothing function by a least squares kernel machine and construct a nonnegative garrote on the kernel model as a function of the similarity matrix. An efficient coordinate descent/backfitting algorithm is developed. The third topic involves a specific genetic pathway dataset in which the pathways interact with the environmental variables. We propose a semiparametric method to model the pathway-environment interaction. We then employ a restricted likelihood ratio test and a score test to evaluate the main pathway effect and the pathway-environment interaction. / Ph. D.
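The nonnegative garrote idea of the second topic can be illustrated on a plain linear model: start from unrestricted coefficient estimates, then fit a nonnegative shrinkage multiplier for each one by coordinate descent, driving irrelevant ones to exactly zero. The sketch below is that simplified version, not the thesis's kernel-machine NGK; the penalty level and the synthetic data are assumptions.

```python
# A sketch of the nonnegative garrote on a linear model (the thesis builds
# it on a least-squares kernel machine; this only shows the shrink-to-zero
# mechanism solved by coordinate descent).
import numpy as np

def nn_garrote(X, y, lam, n_iter=200):
    beta = np.linalg.lstsq(X, y, rcond=None)[0]    # initial OLS estimates
    Z = X * beta                                   # garrote "directions"
    c = np.ones(X.shape[1])                        # nonnegative multipliers
    for _ in range(n_iter):                        # coordinate descent
        for j in range(Z.shape[1]):
            r = y - Z @ c + Z[:, j] * c[j]         # partial residual
            c[j] = max(0.0, (Z[:, j] @ r - lam) / (Z[:, j] @ Z[:, j]))
    return c * beta                                # shrunken coefficients

rng = np.random.default_rng(5)
X = rng.normal(size=(200, 8))
y = X[:, 0] * 3.0 - X[:, 1] * 2.0 + rng.normal(size=200)  # 2 true signals
print(np.round(nn_garrote(X, y, lam=40.0), 2))    # noise columns shrink to 0
```

In the NGK of the thesis, the columns of Z are replaced by per-variable kernel components, so zeroing a multiplier removes that input from the smoothing function.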
358

Interference Analysis and Resource Management in Server Processors: from HPC to Cloud Computing

Pons Escat, Lucía 01 September 2023 (has links)
One of the main concerns of today's data centers is to maximize server utilization. On each server processor, multiple applications are executed concurrently, increasing resource efficiency. However, performance and fairness highly depend on the share of resources that each application receives, leading to performance unpredictability. The rising number of cores (and running applications) with every new generation of processors is creating a growing concern for interference at the shared resources. This thesis focuses on addressing resource interference when different applications are consolidated on the same server processor, from two main perspectives: high-performance computing (HPC) and cloud computing. In the context of HPC, resource management approaches are proposed to reduce inter-application interference at two major critical resources: the last-level cache (LLC) and the processor cores. The LLC plays a key role in the performance of current multi-cores by reducing the number of long-latency main memory accesses. LLC partitioning approaches are proposed for both inclusive and non-inclusive LLCs, as both designs are present in current server processors. In both cases, newly problematic LLC behaviors are identified and efficiently detected, granting a larger cache share to those applications that make the best use of the LLC space. As for the processor cores, many parallel applications, like graph applications, do not scale well with an increasing number of cores. Moreover, the default Linux time-sharing scheduler performs poorly when running graph applications, which process vast amounts of data. To maximize system utilization, this thesis proposes to co-locate multiple graph applications on the same server processor, assigning the optimal number of cores to each one and dynamically adapting the number of threads spawned by the running applications. When studying the impact of system-shared resources on cloud computing, this thesis addresses three major challenges: the complex infrastructure of cloud systems, the nature of cloud applications, and the impact of inter-VM interference. Firstly, this thesis presents the experimental platform developed to perform representative cloud studies with the main cloud system components (hardware and software). Secondly, an extensive characterization study is presented on a set of representative latency-critical workloads that must meet strict quality-of-service (QoS) requirements. The aim of the study is to outline issues cloud providers should consider to improve performance and resource utilization. Finally, we propose an online approach that detects and accurately estimates inter-VM interference when co-locating multiple latency-critical VMs. The approach relies on metrics that can be easily monitored in the public cloud, as VMs are handled as "black boxes". The research described above is carried out within the restrictions and requirements needed to be applicable to public cloud production systems. In summary, this thesis addresses contention in the main system shared resources in the context of server consolidation, both in HPC and cloud computing. Experimental results show that important gains are obtained over the Linux OS scheduler by reducing interference. In inclusive LLCs, turnaround time (TT) is reduced by over 40% while IPC is improved by more than 3%. In non-inclusive LLCs, fairness and TT are improved by 44% and 24%, respectively, while performance is improved by up to 3.5%. By distributing core resources efficiently, almost perfect fairness can be obtained (94%), and TT can be reduced by up to 80%. In cloud computing, performance degradation due to resource contention can be estimated with an overall prediction error of 5%. All the approaches proposed in this thesis have been designed to be applied to commercial server processors without requiring any prior information, making decisions dynamically with data collected from hardware performance counters. / Pons Escat, L. (2023). Interference Analysis and Resource Management in Server Processors: from HPC to Cloud Computing [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/195840
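To convey the flavor of utility-driven LLC partitioning (though not this thesis's actual policies), the sketch below greedily hands each next cache way to the application whose miss count would drop the most, given per-application miss curves of the kind hardware monitors can provide. The miss curves, application names, and way count are synthetic assumptions.

```python
# A sketch in the spirit of utility-based cache-way partitioning; a
# simplification, not the thesis's proposed LLC policies.
def partition_ways(miss_curves, total_ways):
    """Greedily give each next way to the app with the largest miss reduction.

    miss_curves[app][w - 1] = misses of `app` when granted w ways.
    Every app starts with 1 way so none is starved.
    """
    alloc = {app: 1 for app in miss_curves}
    for _ in range(total_ways - len(alloc)):
        def gain(app):                        # misses avoided by one more way
            w = alloc[app]
            return miss_curves[app][w - 1] - miss_curves[app][w]
        best = max(alloc, key=gain)
        alloc[best] += 1
    return alloc

# Toy miss curves for 3 co-located apps sharing an 11-way LLC.
curves = {
    "stream": [90, 88, 86, 85, 84, 83, 82, 81, 80, 79, 78],  # cache-averse
    "graph":  [70, 52, 40, 33, 28, 25, 23, 22, 21, 20, 19],  # cache-friendly
    "kv":     [50, 30, 25, 24, 23, 22, 21, 20, 19, 18, 17],  # saturating
}
print(partition_ways(curves, total_ways=11))   # e.g. most ways go to "graph"
```

The thesis goes further by detecting problematic behaviors that plain miss curves miss and by handling inclusive versus non-inclusive LLC designs, but the marginal-utility loop above is the basic trade-off being managed.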
359

IP Management and the Business Model of Digital Publishing

呂忠晏 (Lyu, Jhong Yan) Unknown Date (has links)
With the growth of the Chinese-language internet market, intellectual property is becoming increasingly important, and Taiwanese publishers, confronted with the pressure of internet-driven change, are actively searching for new business models and opportunities in digital publishing. This study examines, from the perspective of intellectual property management, how Taiwanese publishers can find a business model to seize this publishing revolution, and offers development suggestions for the industry. Digital publishing means an all-digital publication process: every step, from content creation, editing, publishing and selling to use, can be completed through the internet. A digital publishing platform serves as the intermediary connecting content creators and content users. Copyright is the most important intellectual property issue facing platform operators, because the digital publishing industry is built on content, and copyright governs how content is managed. The thesis starts from Chris Anderson's Long Tail theory, using the drivers of that phenomenon to analyse the intellectual property management of platform operators. Two digital publishing platforms in Taiwan are chosen for analysis: the Business Weekly website as the professional (commercial) platform and PIXNET as the amateur (avocational) one. Through this comparison, the study aims to find the connections between business models and intellectual property management. According to the case studies, content distribution and intellectual property management on the professional platform are unidirectional, decided by the operator, whereas on the amateur platform both are accomplished interactively, shaped by the operator and its users; the platform's business model therefore does influence intellectual property management. Looking ahead, the barrier between professional and amateur digital publishing is gradually dissolving and the two are converging, so this study also suggests that operators develop professional and amateur publishing in parallel.
360

A Study of Estimating Value at Risk Using Mixture Models

阮建豐 Unknown Date (has links)
Value at Risk (VaR) is usually calculated under the assumption that asset returns are normally distributed, but this assumption is often inconsistent with the actual distribution of asset returns. Many researchers have found that empirical return distributions have fat tails: extreme events occur with far higher probability than the normal assumption implies, so VaR computed under normality measures true losses poorly. To address this problem, this thesis models the return distribution using the historical simulation method, the variance-covariance method, and a mixture normal model, and estimates VaR at a given confidence level. The parameters of the mixture normal model are estimated with quasi-Bayesian maximum likelihood estimation and the EM algorithm. Three evaluation methods (the back test, the forward test, and a binomial test) are then used to compare the three VaR estimation approaches. The empirical results show that: (1) the return distribution exhibits a clear fat tail at the 1% left-tail critical probability; (2) the mixture normal distribution captures this 1% left tail more accurately than the other two methods; and (3) the kurtosis of the mixture normal model is close to that of the actual return distribution, confirming that the model can capture the leptokurtosis phenomenon. Keywords: Value at Risk, VaR, fat tail, historical simulation method, variance-covariance method, mixture normal distribution, quasi-Bayesian MLE, EM algorithm, back test, forward test, leptokurtosis
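The pipeline this abstract describes can be sketched end to end: fit a two-component normal mixture to returns by EM (plain maximum likelihood here, rather than the quasi-Bayesian estimation the thesis uses), read the 1% VaR off the mixture CDF, and backtest the violation count with a binomial test. The component count, starting values, and simulated returns are illustrative assumptions.

```python
# A sketch of the mixture-normal VaR pipeline: EM fit, quantile by CDF
# inversion, and a binomial backtest of 1% coverage.
import numpy as np
from scipy.stats import norm, binomtest
from scipy.optimize import brentq

def em_two_normals(x, n_iter=300):
    """EM for a two-component Gaussian mixture; returns (weights, mus, sds)."""
    w = np.array([0.5, 0.5])
    mu = np.array([x.mean() - x.std(), x.mean() + x.std()])
    sd = np.array([x.std(), x.std()])
    for _ in range(n_iter):
        dens = w * norm.pdf(x[:, None], mu, sd)        # (n, 2) joint densities
        resp = dens / dens.sum(axis=1, keepdims=True)  # E-step responsibilities
        nk = resp.sum(axis=0)                          # M-step updates
        w, mu = nk / len(x), (resp * x[:, None]).sum(axis=0) / nk
        sd = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return w, mu, sd

def mixture_var(w, mu, sd, p=0.01):
    """p-quantile of the mixture: invert the mixture CDF numerically."""
    cdf = lambda q: (w * norm.cdf(q, mu, sd)).sum() - p
    return brentq(cdf, mu.min() - 10 * sd.max(), mu.max() + 10 * sd.max())

rng = np.random.default_rng(6)
ret = np.concatenate([rng.normal(0, 0.01, 1900), rng.normal(0, 0.03, 100)])
w, mu, sd = em_two_normals(ret)
var99 = mixture_var(w, mu, sd, 0.01)                  # left-tail 1% VaR
hits = int((ret < var99).sum())                       # backtest: violations
print(var99, binomtest(hits, len(ret), 0.01).pvalue)  # H0: correct 1% coverage
```

The binomial test on the violation count plays the role of the back test and binomial check in the abstract: under correct coverage, about 1% of the observations should fall below the reported VaR.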
