341

Stochastic Volatility Models and Simulated Maximum Likelihood Estimation

Choi, Ji Eun 08 July 2011 (has links)
Financial time series studies indicate that the lognormal assumption for the return of an underlying security is often violated in practice. This is due to the presence of time-varying volatility in the return series. The most common departures are due to a fat left-tail of the return distribution, volatility clustering or persistence, and asymmetry of the volatility. To account for these characteristics of time-varying volatility, many volatility models have been proposed and studied in the financial time series literature. Two main conditional-variance model specifications are the autoregressive conditional heteroscedasticity (ARCH) and the stochastic volatility (SV) models. The SV model, proposed by Taylor (1986), is a useful alternative to the ARCH family (Engle (1982)). It incorporates time-dependency of the volatility through a latent process, which is an autoregressive model of order 1 (AR(1)), and successfully accounts for the stylized facts of the return series implied by the characteristics of time-varying volatility.

In this thesis, we review both ARCH and SV models but focus on the SV model and its variations. We consider two modified SV models. One is an autoregressive process with stochastic volatility errors (AR--SV) and the other is the Markov regime switching stochastic volatility (MSSV) model. The AR--SV model consists of two AR processes. The conditional mean process is an AR(p) model, and the conditional variance process is an AR(1) model. One notable advantage of the AR--SV model is that it better captures volatility persistence by considering the AR structure in the conditional mean process. The MSSV model consists of the SV model and a discrete Markov process. In this model, the volatility can switch from a low level to a high level at random points in time, and this feature better captures the volatility movement. We study the moment properties and the likelihood functions associated with these models.

In spite of the simple structure of the SV models, it is not easy to estimate parameters by conventional estimation methods such as maximum likelihood estimation (MLE) or the Bayesian method because of the presence of the latent log-variance process. Of the various estimation methods proposed in the SV model literature, we consider the simulated maximum likelihood (SML) method with the efficient importance sampling (EIS) technique, one of the most efficient estimation methods for SV models. In particular, the EIS technique is applied in the SML to reduce the Monte Carlo sampling error. It increases the accuracy of the estimates by determining an importance function with a conditional density function of the latent log variance at time t given the latent log variance and the return at time t-1. Initially we perform an empirical study to compare the estimation of the SV model using the SML method with EIS and the Markov chain Monte Carlo (MCMC) method with Gibbs sampling. We conclude that SML has a slight edge over MCMC. We then introduce the SML approach in the AR--SV models and study the performance of the estimation method through simulation studies and real-data analysis. In the analysis, we use the AIC and BIC criteria to determine the order of the AR process and perform model diagnostics for the goodness of fit. In addition, we introduce the MSSV models and extend the SML approach with EIS to estimate this new model. Simulation studies and empirical studies with several return series indicate that this model is reasonable when there is a possibility of volatility switching at random time points. Based on our analysis, the modified SV, AR--SV, and MSSV models capture the stylized facts of financial return series reasonably well, and the SML estimation method with the EIS technique works very well in the models and the cases considered.
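As a quick illustration of the specification described in the abstract above, the sketch below simulates the basic SV model, in which returns are driven by a latent AR(1) log-variance process. The parameter values and series length are illustrative assumptions, not estimates from the thesis.

```python
import numpy as np

# Basic SV model (illustrative parameters, not the thesis's estimates):
#   y_t = exp(h_t / 2) * eps_t,                          eps_t ~ N(0, 1)
#   h_t = mu + phi * (h_{t-1} - mu) + sigma_eta * eta_t, eta_t ~ N(0, 1)
rng = np.random.default_rng(0)

def simulate_sv(n, mu=-1.0, phi=0.95, sigma_eta=0.2):
    h = np.empty(n)
    y = np.empty(n)
    # Start the latent log-variance at its stationary distribution.
    h[0] = mu + sigma_eta / np.sqrt(1 - phi**2) * rng.standard_normal()
    for t in range(n):
        if t > 0:
            h[t] = mu + phi * (h[t - 1] - mu) + sigma_eta * rng.standard_normal()
        y[t] = np.exp(h[t] / 2) * rng.standard_normal()
    return y, h

returns, log_var = simulate_sv(5_000)
# Volatility clustering shows up as positive autocorrelation in squared returns.
print(np.corrcoef(returns[:-1]**2, returns[1:]**2)[0, 1])
```

The positive autocorrelation of squared returns printed at the end is precisely the volatility-clustering stylized fact the abstract refers to.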
342

A Content Analysis Of The Security Dimension Of The Turkish Accession To The European Union

Sayin, Ayse 01 July 2008 (has links) (PDF)
This thesis aims to analyze the security relations between Turkey and the European Union within the context of enlargement. In this framework, firstly, the historical background of the changing dynamics of their bilateral security relations is studied by focusing on both the Cold War and post-Cold War periods. In this historical study, more emphasis is placed on the post-Cold War period, in which the changing security understandings of both Turkey and the EU, the major developments leading both actors to adopt new mechanisms, and the impact of these developments on their security relations are analyzed. Secondly, after evaluating the importance of security in the European integration and enlargement processes, the security dimension of the Turkish accession, as it appears in the official enlargement discourse of the EU actors and in the articles of leading European think tanks and scholars, is examined using the content analysis method. Following this study, a critical analysis of the given speeches and articles is made. In the last part, the different security roles ascribed to Turkey by the EU actors and scholars in the related speeches and articles are discussed within the framework of Turkey's accession process. Accordingly, it is argued in this thesis that although Turkey's significance for European and regional security is accepted by the EU actors and scholars, this is not properly reflected in its accession process.
343

Problem decomposition by mutual information and force-based clustering

Otero, Richard Edward 28 March 2012 (has links)
The scale of engineering problems has sharply increased over the last twenty years. Larger coupled systems, increasing complexity, and limited resources create a need for methods that automatically decompose problems into manageable sub-problems by discovering and leveraging problem structure. The ability to learn the coupling (inter-dependence) structure and reorganize the original problem could lead to large reductions in the time to analyze complex problems. Such decomposition methods could also provide engineering insight into the fundamental physics driving problem solution. This work advances the state of the art in engineering decomposition through the application of techniques originally developed within computer science and information theory. It describes the current state of automatic problem decomposition in engineering and utilizes several promising ideas to advance the state of the practice.

Mutual information is a novel metric for data dependence that works on both continuous and discrete data. Mutual information can measure both the linear and non-linear dependence between variables, without the limitations of linear dependence measured through covariance, and it can handle data that lacks derivative information, unlike other metrics that require it. The value of mutual information to engineering design work is demonstrated on a planetary entry problem, using a novel tool developed in this work for planetary entry system synthesis. A graphical method, force-based clustering, is used to discover related sub-graph structure as a function of problem structure and links ranked by their mutual information. This method does not require the stochastic use of neural networks and could be used with any link ranking method currently utilized in the field. Application of this method is demonstrated on a large, coupled low-thrust trajectory problem.

Mutual information also serves as the basis for an alternative global optimizer, called MIMIC, which is unrelated to Genetic Algorithms. Advancing current practice, this work demonstrates MIMIC as a global method that explicitly models problem structure with mutual information, providing an alternate method for globally searching multi-modal domains. By leveraging discovered problem inter-dependencies, MIMIC may be appropriate for highly coupled problems or those with large function evaluation cost. This work introduces a useful addition to the MIMIC algorithm that enables its use on continuous input variables. By leveraging automatic decision tree generation methods from Machine Learning and a set of randomly generated test problems, decision trees for choosing which method to apply are also created, quantifying decomposition performance over a large region of the design space.
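As a concrete illustration of the dependence measure discussed above, the following sketch estimates mutual information from a two-dimensional histogram and contrasts it with covariance on a nonlinear relationship. The binning choice and toy data are assumptions for illustration; this is not the implementation used in the thesis.

```python
import numpy as np

# Histogram-based mutual information estimate (illustrative sketch).
def mutual_information(x, y, bins=20):
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()                # joint distribution over the bins
    px = pxy.sum(axis=1, keepdims=True)  # marginal of x
    py = pxy.sum(axis=0, keepdims=True)  # marginal of y
    nz = pxy > 0                         # skip empty cells to avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 10_000)
y = x**2 + 0.05 * rng.standard_normal(10_000)  # nonlinear dependence

print(np.cov(x, y)[0, 1])        # near zero: covariance misses the structure
print(mutual_information(x, y))  # clearly positive: MI detects it
```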
344

Der \"Leitbahn\"-Begriff in der Akupunktur

Kienitz, Malte Sebastian 10 March 2011 (has links) (PDF)
The aim of this work was to examine the meridians ("Leitbahnen") of acupuncture from a scientific point of view. It became clear early in the research that the topic has to be treated on several levels. First, therefore, the time frame of the emergence of acupuncture in general, and of veterinary acupuncture in particular, was narrowed down; the significance of veterinary acupuncture in ancient China was investigated; and the description and depiction of the meridians and points in animals were addressed (Ch. 3). An introduction to the theoretical foundations followed, describing their origin and development primarily from historical and sociocultural perspectives and, finally, in a medical context (Ch. 4). Next, attempts to demonstrate the meridians in a natural-science context were examined on the basis of several methodological examples (Ch. 5). Finally, the results were summarized, placed in context with one another and with aspects of research on point specificity, and an assessment of the status of the meridians in (veterinary) medical practice was given (Ch. 6).

On the basis of the available sources, the development of acupuncture can be traced from about 200 BC, although evidence of animal acupuncture appears only in the Sui period (581–618 AD). No depictions of the meridians are known from ancient China; classification tended to follow body regions instead. Meridian charts for animals were developed by transposition only in Europe in the 1950s. Acupuncture is a sub-discipline of so-called correspondence medicine. As such, its underlying theories are a product of the political and social changes between the Warring States period (481–221 BC) and the Han period (202 BC–220 AD), and for roughly 1700 years thereafter they were never fundamentally questioned. This theoretical framework has little practical relevance in China itself, whereas in the West it plays a considerably larger role, serving to set acupuncture apart from conventional medicine and to satisfy idealized expectations of an alternative therapeutic method.

Proof of the existence of the meridians has been attempted many times but has never been produced. Several results of this work, and of many efficacy studies, show that acupuncture is a multifactorial therapeutic concept. Particularly noteworthy is the receptive and transmissive role of the nervous system at different functional and integrative levels. The meridians, as lines on the body surface, have a purely descriptive character, serving to connect a number of points; there are, however, indications that one should rather speak of sensitive and effective zones. In this framework it makes little sense to hold on to a cartography of points and lines: the ties to the socio-historically conditioned theory, which does not adequately reflect the physiological and anatomical facts, are too close. Further research in the field of acupuncture must continue to strive to elucidate its mechanisms of action. At the same time, the effects of acupuncture must be quantified objectively in order to define meaningful areas of application.
345

Uncertainty and sensitivity analysis of the Ignalina NPP probabilistic safety assessment model

Bucevičius, Nerijus 19 June 2008 (has links)
Uncertainty analysis of the results of technical-system modelling is particularly important when the object of modelling is the operation of hazardous systems, the functioning of safety systems, accident scenarios, or other risk-related questions. In such cases, and especially in reactor safety analysis, it is essential that the modelling results be robust. In this work, an uncertainty and sensitivity analysis of the Ignalina NPP probabilistic safety assessment model is performed. The analysis was carried out with different statistical estimation methods using the SUSA software package, and the results were compared with those obtained with the probabilistic modelling system Risk Spectrum PSA. The comparison showed that the different methods and software packages assess the significance of the parameters identically. The statistical uncertainty and sensitivity analysis, based on Monte Carlo simulation, made it possible to estimate the influence of the parameters on the calculation results and to identify the modelling parameters with the largest impact on the result; conclusions about parameter importance and result sensitivity are obtained using a linear approximation of the model under analysis.
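A hedged sketch of the kind of statistical analysis described above: propagate parameter uncertainty through a model by Monte Carlo sampling, then rank the inputs by a sensitivity measure. The toy risk model and parameter distributions below are invented for illustration; they do not reproduce the Ignalina PSA model, SUSA, or Risk Spectrum PSA.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5_000

# Sample uncertain inputs from assumed distributions (purely illustrative).
params = {
    "lambda_pump":  rng.lognormal(mean=-9.0, sigma=0.5, size=n),
    "lambda_valve": rng.lognormal(mean=-8.0, sigma=0.7, size=n),
    "beta_ccf":     rng.uniform(0.01, 0.10, size=n),
}

# Toy stand-in for the PSA model: a system failure frequency.
result = params["lambda_pump"] + params["beta_ccf"] * params["lambda_valve"]

# Rank correlation as a simple sensitivity measure: the inputs with the
# largest |coefficient| contribute most to the output uncertainty.
def ranks(v):
    return np.argsort(np.argsort(v))

for name, v in params.items():
    r = np.corrcoef(ranks(v), ranks(result))[0, 1]
    print(f"{name}: rank correlation = {r:+.2f}")
```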
346

Measuring Systemic Importance of Taiwan's Banking System

林育慈, Lin, Yu Tzu Unknown Date (has links)
In this thesis, we apply the measure proposed by Gravelle and Li (2013) to examine the systemic importance of nine listed financial-holding-company banks in Taiwan. The systemic importance of a given bank is defined as the increase in systemic risk conditional on the crash of that bank, and is estimated using multivariate extreme value theory. Our empirical evidence shows that the most systemically important bank is First Commercial Bank, and that CTBC Bank is significantly less important than the other banks, while the differences among the remaining banks are not significant. Second, banks established earlier have higher systemic importance, and the contribution to systemic risk of public banks is, on average, higher than that of private banks. Third, we find that the size of a bank and its risk contribution are positively related: the bigger a bank is, the more important it is, so a too-big-to-fail problem may arise. Last, banks with a lower loan-to-deposit ratio are less systemically important, while the relation between the capital adequacy ratio and systemic importance is unclear.
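The conditional-crash idea above can be illustrated with a simple empirical counting scheme. Gravelle and Li (2013) estimate these probabilities with multivariate extreme value theory; the sketch below merely counts tail co-exceedances, and the one-factor return model and loadings are invented assumptions (real bank return data would replace `returns`).

```python
import numpy as np

rng = np.random.default_rng(0)
n_days, n_banks = 2_500, 9

# Toy returns: banks with higher loadings co-move more with the system.
common = rng.standard_normal(n_days)
loadings = np.linspace(0.3, 0.8, n_banks)
returns = loadings * common[:, None] + rng.standard_normal((n_days, n_banks))

system = returns.mean(axis=1)          # system-wide return
q = 0.05                               # tail probability defining a "crash"
sys_crash = system < np.quantile(system, q)

for i in range(n_banks):
    bank_crash = returns[:, i] < np.quantile(returns[:, i], q)
    p_cond = sys_crash[bank_crash].mean()   # P(system crash | bank i crash)
    print(f"bank {i}: P(system crash | bank crash) = {p_cond:.2f} "
          f"(unconditional {sys_crash.mean():.2f})")
```

Higher-loading banks show a larger conditional probability, mirroring the finding that bigger, more system-connected banks contribute more to systemic risk.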
347

Visual Saliency from 2D to Stereoscopic 3D: Examination of Psychophysical Methods and Computational Modelling

Wang, Junle 16 November 2012 (has links) (PDF)
Visual attention is one of the most important mechanisms deployed by the human visual system (HVS) to reduce the amount of information that the brain needs to process in order to apprehend the content of a scene. A growing body of work is devoted to the study of visual attention and, in particular, to its computational modelling. In this thesis we present studies bearing on several aspects of this research. Our work falls broadly into two parts: the first concerns questions related to the ground truth used, and the second relates to the modelling of visual attention under 3D viewing conditions.

In the first part, we analyse the reliability of fixation density maps derived from different eye-tracking databases. We then quantitatively identify the similarities and differences between fixation density maps and visual importance maps, these being the two types of ground truth commonly used by attention-related applications. Then, to cope with the lack of usable ground truth for modelling 3D visual attention, we conduct a binocular eye-tracking experiment that results in the creation of a new database of stereoscopic 3D images.

In the second part, we begin by examining the impact of depth on visual attention under 3D viewing conditions. We first quantify the "depth bias" associated with viewing synthetic 3D content on a stereoscopic flat screen, and we then extend the study to 3D images with natural content. We propose a depth-saliency-based model of 3D visual attention that relies on the depth contrast of the scene, and we compare two different ways for the model to exploit the depth information. Next, we study the centre bias and the differences that exist between 2D and 3D viewing conditions, and we integrate the centre bias into our 3D visual attention model. Finally, considering that visual attention combined with a blurring technique can improve the quality of experience of 3D TV, we study the influence of blur on depth perception and the relation between blur and binocular disparity.
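One plausible instantiation of the depth-contrast feature mentioned above is sketched below: a pixel is treated as salient when its depth departs from the mean depth of its neighbourhood. The window size and the toy depth map are illustrative assumptions, not the exact model proposed in the thesis.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def depth_contrast_saliency(depth, window=61):
    # Local depth contrast: |depth - neighbourhood mean|, normalised to [0, 1].
    local_mean = uniform_filter(depth.astype(float), size=window)
    contrast = np.abs(depth - local_mean)
    return contrast / (contrast.max() + 1e-12)

# Toy depth map: flat background with one nearer square object.
depth = np.full((240, 320), 5.0)
depth[105:135, 145:175] = 2.0        # object closer to the viewer

saliency = depth_contrast_saliency(depth)
print(saliency[120, 160], saliency[10, 10])  # object centre vs. background
```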
348

New Approaches to Particle Filtering: Application to Inertial Navigation Updating

Murangira, A. 25 March 2014 (has links) (PDF)
The work presented in this thesis concerns the development and implementation of a particle filtering algorithm for updating inertial navigation from altimetric measurements. The filter developed, the MRPF (Mixture Regularized Particle Filter), relies on the modelling of the posterior density as a finite mixture, on the regularized particle filter, and on the mean-shift clustering algorithm. We also propose an extension of the MRPF to the Rao-Blackwellized particle filter, called the MRBPF (Mixture Rao-Blackwellized Particle Filter). The objective is a filter suited to handling the multimodalities caused by terrain ambiguities. The use of finite mixture models makes it possible to introduce an importance sampling algorithm that generates particles in the zones of interest. A second line of research concerns the development of tools for monitoring the integrity of the particle solution. Building on change-detection theory, we propose a sequential algorithm for detecting filter divergence. The performance of the MRPF, the MRBPF, and the integrity test is evaluated on several altimetric updating scenarios.
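As a minimal illustration of the problem setting, the sketch below runs a plain bootstrap particle filter with a small regularization jitter on a one-dimensional terrain-aided navigation toy problem. It is not the MRPF or MRBPF themselves, and the terrain profile, noise levels, and tuning constants are invented. With strongly ambiguous terrain the posterior can remain multimodal, which is exactly the situation the MRPF is designed to handle.

```python
import numpy as np

rng = np.random.default_rng(2)

def terrain(x):
    # Assumed known terrain elevation map (invented for the example).
    return 50 * np.sin(0.01 * x) + 20 * np.sin(0.07 * x)

n_particles, n_steps, v, meas_std = 2_000, 60, 10.0, 2.0
true_x = 100.0
particles = rng.uniform(0, 2_000, n_particles)   # large initial uncertainty

for _ in range(n_steps):
    true_x += v                                              # vehicle moves
    z = terrain(true_x) + meas_std * rng.standard_normal()   # altimetric fix
    particles += v + 1.0 * rng.standard_normal(n_particles)  # propagate
    w = np.exp(-0.5 * ((z - terrain(particles)) / meas_std) ** 2) + 1e-300
    w /= w.sum()
    idx = rng.choice(n_particles, size=n_particles, p=w)     # resample
    # Regularization step: jitter resampled particles to limit impoverishment.
    particles = particles[idx] + 0.5 * rng.standard_normal(n_particles)

print(f"true position: {true_x:.1f}, posterior mean: {particles.mean():.1f}")
```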
349

Teachers' working methods when training students in mathematical reasoning

Linsten, Linda January 2014 (has links)
The ability to apply and follow mathematical reasoning is an ability students should develop according to the Curriculum for the compulsory school, preschool class and the leisure-time centre 2011. The purpose of this survey was to find out how teachers in compulsory school and preschool class work with the ability to apply and follow mathematical reasoning. I was also interested in investigating whether participation in continuing professional development in mathematics influences the teachers' way of working with the students. The survey consisted of six qualitative interviews, and three of the interviewees were taking part in continuing professional development in didactics for teachers of mathematics. The result showed that all interviewees consider it important to communicate mathematics, both between students and between teachers and students. The teachers included in continuing professional development showed a clear consciousness in their work on applying and following mathematical reasoning. However, among some interviewees the consciousness appeared to come instead from their experience, which is reflected in their way of working. The students' age and how far they have developed their language also appeared to be of significance for how capable they are of reasoning. The teachers analyze the ability to apply and follow mathematical reasoning, its meaning and usage, differently.
350

Digital Marketing Strategy

Bång, Andreas, Roos, Cajsa January 2014 (has links)
Abstract

Course/level: 2FE16E, Bachelor Thesis
Authors: Bång Andreas & Roos Cajsa
Tutor: Krister Jönsson
Examiner: Pejvak Oghazi
Title: Digital Marketing Strategy within Manufacturing Industry – A qualitative case study
Keywords: digital marketing strategy, relationship, branding, profit/performance, social media, social commerce, service quality, digital channels, the Internet, digital development, importance of digital channels

Background: The climate in B2B is very competitive, and a marketing strategy is vital for a company to stay competitive. Relationship, branding and profit/performance are three parts argued to be central to a marketing strategy. The Internet and digital marketing strategy is an area that lacks research in the context of the manufacturing industry; it is therefore important to identify how companies use the Internet and digital channels in their digital marketing strategy.

Research questions:
RQ 1. How do small and medium-sized companies in B2B sectors use the Internet and digital channels in their marketing strategy?
RQ 2. Why do small and medium-sized companies in B2B use the Internet and digital channels in their marketing?
RQ 3. How do small and medium-sized companies in B2B view the future development of their use of the Internet and digital channels?

Purpose: The purpose of this study is to identify how small and medium-sized companies in B2B use the Internet as a tool in their digital marketing strategy.

Methodology: Qualitative approach, multi-case study, semi-structured interviews.

Conclusion: The most used digital channel is the homepage. The more competition a company faces, the higher its adoption of digital channels. Relationships can be enhanced through digital channels.
