771

The Diffusion of New Music through Online Social Networks

Monk, Adam Joel 25 June 2012 (has links)
No description available.
772

[pt] DOIS ENSAIOS EM IDENTIFICAÇÃO FRACA EM MODELOS MACROECONÔMICOS / [en] TWO ESSAYS ON WEAK IDENTIFICATION IN MACROECONOMIC MODELS

MARCUS VINICIUS FERNANDES GOMES DE CASTRO 21 February 2020 (has links)
[pt] O problema de identificação fraca surge naturalmente em modelos macroeconômicos. Consequentemente, métodos de variáveis instrumentais produzem resultados enigmáticos de forma mais frequente do que seria empiricamente razoável. Neste trabalho, propomos dois novos métodos para tratar destas dificuldades, no que tange a duas das principais equações de modelos macro: a Curva de Phillips Novo-Keynesiana (NKPC) e a Equação de Euler (EE). Sabe-se das dificuldades em se estimar um coeficiente de sensibilidade positivo entre inflação e produto no primeiro caso, e que, mesmo quando se obtém uma estimativa positiva, o nível de rigidez nominal implicado para a economia é incompatível com o que sugerem os micro dados. Nós abordamos essa questão no primeiro capítulo, propondo um modelo de economia multi-setorial com heterogeneidade na fixação de preços entre setores. O método gera coeficientes de sensibilidade positivos e estáveis para diferentes configurações econométricas, assim como níveis de rigidez nominal alinhados com a evidência micro, para a economia como um todo e também para cada setor individualmente. Todas essas estimativas variam em linha com implicações teóricas, quando hipóteses do modelo são alteradas. O foco do segundo capítulo é a estimação da elasticidade de substituição intertemporal (EIS), parâmetro central da EE. Argumentamos como o uso de séries oficiais de consumo – que são estatisticamente tratadas antes de disponibilizadas – distorce estimativas da EIS. Propondo um modelo generalizado para desfiltrar diferentes tipos de séries de consumo disponíveis, – micro e macro, com várias frequências –, demonstramos como a utilização de consumo não filtrado gera estimativas da EIS que são consideravelmente mais estáveis, independente do arcabouço econométrico e da série de consumo usada. Resultados também parecem menos sensíveis à presença de instrumentos fracos, comparativamente a estimações usando séries oficiais. / [en] The weak identification problem arises naturally in macroeconomic models. Consequently, instrumental variables methods produce puzzling results more often than what is empirically plausible. We propose novel methods to address puzzles usually featured in two of the main equations in macro models, namely the New-Keynesian Phillips Curve (NKPC) and the Euler Equation (EE). For the former, difficulties to estimate a positive slope without incurring a degree of stickiness incompatible with the micro evidence are widely known. We address the matter in the first chapter, proposing a richer framework of a multi-sector economy with price-setting heterogeneity. The procedure generates positive and roughly unchanging slope coefficients across econometric settings, as well as degrees of stickiness in line with the micro data, both regarding the entire economy and the cross section of sectors. Importantly, all of these estimates move consistently with implications by theory when modifying the model assumptions. The second chapter focuses on the estimation of the elasticity of intertemporal substitution (EIS), central parameter of the EE in models of dynamic choice. There, we argue that the use of officially reported consumption data – which is usually filtered, smoothed, interpolated, etc – distorts estimates of the EIS. A generalised model to unfilter available consumption data is proposed, suitable for several types of data – macro and micro – at different frequencies. 
Estimation based on unfiltered consumption produces considerably more stable estimates of the EIS, regardless of the econometric approach and the type of consumption data used. Results also appear less sensitive to the presence of weak instruments than estimations using officially reported data.
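As an editorial aside, the two equations named in this abstract have standard textbook forms, sketched below in LaTeX. These are generic illustrations only, not the thesis's exact specifications: the thesis estimates a multi-sector NKPC and an Euler equation with unfiltered consumption, and the coefficient names used here are conventional rather than taken from the text.

```latex
% Hybrid New-Keynesian Phillips Curve (generic textbook form):
%   \pi_t: inflation, x_t: real marginal cost or output gap, \mathbb{E}_t: expectation at time t
\pi_t = \gamma_f\,\mathbb{E}_t[\pi_{t+1}] + \gamma_b\,\pi_{t-1} + \lambda\,x_t + \varepsilon_t

% Log-linearized consumption Euler equation (generic textbook form):
%   \Delta c_{t+1}: consumption growth, r_{t+1}: real interest rate, \psi: the EIS
\Delta c_{t+1} = \mu + \psi\,r_{t+1} + \eta_{t+1}
```

In both cases the forward-looking regressors (expected inflation, the real interest rate) must be instrumented with lagged variables that are often only weakly correlated with them, which is where the weak-identification problem discussed in the abstract enters.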
773

The EU Taxonomy on Sustainable Finance : A Major Stride Forward or a Nightmare in Practice? / EU Taxonomi för hållbara investeringar : Ett stort steg framåt eller en mardröm i praktiken?

Wallhed, Niklas January 2021 (has links)
To limit the effects of climate change and keep the increase in global mean temperature below 1.5 °C by 2100, financial markets need to shift investments into low-carbon businesses and technologies. Sustainable investing, which integrates sustainability assessments into traditional investment analysis, is one concept that can enable this transition. The EU Taxonomy on sustainable finance is a tool and forthcoming mandatory regulation that aims to promote sustainable investing and to help companies, investors and project promoters make investment decisions aligned with the transition to a low-carbon society. This study aims to describe how the EU Taxonomy relates to sustainable investments and to examine how companies in different sectors are evaluated under the EU Taxonomy, using companies included in a sustainability fund as a case study. Furthermore, the study examines whether the EU Taxonomy is in line with the Industrial Ecology concepts of Life cycle thinking and weak/strong sustainability. The results of the literature study of the EU Taxonomy, together with the case study, suggest that the implementation of the EU Taxonomy into EU regulation will lead to greater integration of sustainable investing within the EU. Furthermore, the EU Taxonomy in theory integrates Life cycle thinking and embodies a strong view of sustainability. However, this does not carry over into practice, as the practical implementation of the EU Taxonomy in its current form does not operationalize these concepts. The case study showed that there are issues with accessing company-specific information, as the analyzed companies did not disclose how their operations relate to the EU Taxonomy. Moreover, the case study indicated that some sectors find it easier to be considered environmentally sustainable under the EU Taxonomy. The EU Taxonomy is a step in the right direction towards increased integration of environmental sustainability into investments; however, the tool and regulation as they stand leave much to be desired. / För att hålla den globala medeltemperaturen under en ökning på 1.5 °C till år 2100, måste den finansiella marknaden investera i företag och teknologier som bidrar till en minskning av växthusgasutsläpp globalt. Hållbara investeringar är ett koncept som tar hänsyn till b.la utsläpp av växthusgaser i investeringsbeslut, vilket kan möjliggöra denna övergång till hållbarare samhälle. EU:s Taxonomi om hållbara investeringar är ett ramverk och framtida reglemente som ämnar att möjliggöra och hjälpa företag, investerare och projektansvariga att investera mer miljömässig hållbart. Denna studie syftar till att förklara hur EU Taxonomin kan relateras till hållbara investeringar, samt att examinera hur företag i olika sektorer kan utvärderas med hjälp av EU Taxonomin genom att genomföra en casestudie på företag inkludera i en hållbarhetsfond. Vidare ämnar denna studie examinera om EU Taxonomin är i linje med industriell ekologi med koncepten livscykeltänkande (en: Life Cycle Thinking) samt stark och svag hållbarhet. Resultatet indikerar att implementeringen av EU Taxonomin kommer att leda till en ökad integrering av hållbara investering inom EU. Vidare indikerar resultatet att EU Taxonomin har som mål att ta hänsyn till livscykeltänkande och vara i linje med stark hållbarhet, men att denna målsättning inte är inkluderat i det praktiska verktyget som EU Taxonomin använder.
Resultatet av casestudien visar att det finns problem kopplade till att bedöma hur väl företag tar hänsyn till kriterierna inkluderade i EU Taxonomin, då företag inte delger den information som behövs för att göra denna bedömning. Vidare så indikerade casestudien att vissa sektorer kommer ha det lättare när det kommer till att vara i linje med EU Taxonomins kriterier. Avslutningsvis så är EU Taxonomin ett steg i rätt riktning när det kommer till att integrera miljöaspekter i investeringar, men det praktiska verktyget och ramverk som det ser ut just nu skulle kunna vara mer utvecklat och mer strikt i praktiken.
774

Online expansion: is it another kind of strategic manufacturer response to a dominant retailer?

He, R., Xiong, Y., Cheng, Y., Hou, Jiachen January 2016 (has links)
Yes / The issues of channel conflict and channel power have received widespread research attention, including Geylani et al.’s (2007) work on channel relations in an asymmetric retail setting. Specifically, these authors suggest that a manufacturer can respond to a dominant retailer’s pricing pressure by raising the wholesale price for a weak retailer over that for the dominant retailer while transferring demand to the weak retailer channel via cooperative advertising. But is online expansion another kind of optimal strategic response by the manufacturer to a dominant retailer? In this paper, we extend this work by adding a direct online selling channel to illustrate the impact of the manufacturer’s internet entry on firms’ demands, profits, and pricing strategies and on consumer welfare. Our analysis thus includes a condition in which the manufacturer can add an online channel. If such an online channel is opened, the channel-supported network externality will always benefit the manufacturer but hurt the retailers. Consumers, however, will only benefit from the network externality when a dominant retailer is present and will be hurt when both retailers are symmetric. / National Natural Science Foundation of China, Chongqing’s Natural Science Foundation, British Academy
775

Methods for face detection and adaptive face recognition

Pavani, Sri-Kaushik 21 July 2010 (has links)
The focus of this thesis is on facial biometrics; specifically in the problems of face detection and face recognition. Despite intensive research over the last 20 years, the technology is not foolproof, which is why we do not see use of face recognition systems in critical sectors such as banking. In this thesis, we focus on three sub-problems in these two areas of research. Firstly, we propose methods to improve the speed-accuracy trade-off of the state-of-the-art face detector. Secondly, we consider a problem that is often ignored in the literature: to decrease the training time of the detectors. We propose two techniques to this end. Thirdly, we present a detailed large-scale study on self-updating face recognition systems in an attempt to answer if continuously changing facial appearance can be learnt automatically. / L'objectiu d'aquesta tesi és sobre biometria facial, específicament en els problemes de detecció de rostres i reconeixement facial. Malgrat la intensa recerca durant els últims 20 anys, la tecnologia no és infalible, de manera que no veiem l'ús dels sistemes de reconeixement de rostres en sectors crítics com la banca. En aquesta tesi, ens centrem en tres sub-problemes en aquestes dues àrees de recerca. En primer lloc, es proposa mètodes per millorar l'equilibri entre la precisió i la velocitat del detector de cares d'última generació. En segon lloc, considerem un problema que sovint s'ignora en la literatura: disminuir el temps de formació dels detectors. Es proposen dues tècniques per a aquest fi. En tercer lloc, es presenta un estudi detallat a gran escala sobre l'auto-actualització dels sistemes de reconeixement facial en un intent de respondre si el canvi constant de l'aparença facial es pot aprendre de forma automàtica.
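As an editorial illustration only (not the detectors or methods proposed in the thesis), the sketch below shows the kind of cascade-style face detector whose speed-accuracy trade-off the abstract refers to, using OpenCV's bundled Haar cascade; it assumes the opencv-python package is installed, and "example.jpg" is a hypothetical input image.

```python
# Minimal sketch of a cascade-style face detector using OpenCV's bundled
# Haar cascade. This is NOT the thesis's proposed method; it only
# illustrates the kind of detector whose speed/accuracy trade-off and
# training time the abstract discusses. Assumes opencv-python is installed.
import cv2

def detect_faces(image_path: str):
    # Load the pre-trained frontal-face Haar cascade shipped with OpenCV.
    cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    detector = cv2.CascadeClassifier(cascade_path)

    image = cv2.imread(image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

    # scaleFactor and minNeighbors control the speed-accuracy trade-off:
    # a smaller scaleFactor scans more image scales (slower, more sensitive),
    # a larger minNeighbors suppresses more false positives.
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5,
                                      minSize=(30, 30))
    return faces  # array of (x, y, w, h) bounding boxes

if __name__ == "__main__":
    print(detect_faces("example.jpg"))  # "example.jpg" is a hypothetical input
```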
776

First Time Measurements of Polarization Observables for the Charged Cascade Hyperon in Photoproduction

Bono, Jason S 06 June 2014 (has links)
The parity-violating weak decay of hyperons offers a valuable means of measuring their polarization, providing insight into the production of strange quarks and the matter they compose. Jefferson Lab’s CLAS collaboration has utilized this property of hyperons, publishing the most precise polarization measurements for the Λ and Σ in both photoproduction and electroproduction to date. In contrast, cascades, which contain two strange quarks, can only be produced through indirect processes and, as a result, exhibit low cross sections, thus remaining experimentally elusive. At present, there are two aspects in cascade physics where progress has been minimal: characterizing their production mechanism, which lacks theoretical and experimental developments, and observation of the numerous excited cascade resonances that are required to exist by flavor SU(3)_F symmetry. However, CLAS data were collected in 2008 with an integrated luminosity of 68 pb⁻¹ using a circularly polarized photon beam with energies up to 5.45 GeV, incident on a liquid hydrogen target. This dataset is, at present, the world’s largest for meson photoproduction in its energy range and provides a unique opportunity to study cascade physics with polarization measurements. The current analysis explores hyperon production through the γp → K⁺K⁺Ξ⁻ reaction by providing the first-ever determination of the spin observables P, C_x and C_z for the cascade. Three of our primary goals are to test the only cascade photoproduction model in existence, examine the underlying processes that give rise to hyperon polarization, and to stimulate future theoretical developments while providing constraints for their parameters. Our research is part of a broader program to understand the production of strange quarks and hadrons with strangeness. The remainder of this document discusses the motivation behind such research, the method of data collection, details of their analysis, and the significance of our results.
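As an editorial note on the measurement principle invoked above: the self-analyzing character of hyperon weak decays is usually expressed through the textbook angular distribution sketched below. This is the generic relation, not the thesis's full extraction formalism.

```latex
% Angular distribution of the decay baryon in the hyperon rest frame,
% measured along a chosen quantization axis i (generic textbook form):
\frac{dN}{d\cos\theta_i} \propto \tfrac{1}{2}\left(1 + \alpha\,P_i\cos\theta_i\right)
% \alpha : the hyperon's weak-decay asymmetry parameter
% P_i    : the hyperon polarization component along axis i
```

Fitting the measured cos(theta) distribution (or forming its forward-backward asymmetry) yields P_i; with a circularly polarized photon beam, the same construction applied along the beam-related axes gives access to the transferred observables C_x and C_z mentioned in the abstract.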
777

Fully nonlinear elliptic equations and semilinear fractional elliptic equations

Chen, Huyuan 10 January 2014 (has links)
Cette thèse est divisée en six parties. La première partie est consacrée à l'étude de propriétés de Hadamard et à l'obtention de théorèmes de Liouville pour des solutions de viscosité d'équations aux dérivées partielles elliptiques complètement non-linéaires avec des termes de gradient, ... / This thesis is divided into six parts. The first part is devoted to proving Hadamard properties and Liouville-type theorems for viscosity solutions of fully nonlinear elliptic partial differential equations with gradient terms ...
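For context on the "Liouville-type theorems" mentioned in this abstract, the classical prototype of such a statement is sketched below; the thesis proves analogues for viscosity solutions of fully nonlinear and fractional equations, which this sketch does not cover.

```latex
% Classical Liouville theorem (the prototype of a Liouville-type result):
% a harmonic function that is bounded on all of R^n must be constant.
\Delta u = 0 \ \text{in } \mathbb{R}^n, \qquad \sup_{\mathbb{R}^n}|u| < \infty
\quad\Longrightarrow\quad u \equiv \mathrm{const}.
```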
778

Sharḥ Lubāb al-nuqūl fī asbāb al-nuzūl [ṣafwa al-taʻlīqāt al-mutaʻllīqāt bi-aḥwāl nuzūl al-Qurʼān] / Commentary on Lubab al-nuqul fi asbab al-nuzul (the cream of reports relating to circumstances for Qur'anic revelation)

Elkoly, Mohammed Hassan Mohamed 05 1900 (has links)
Arabic text. Arabic summary and keywords cannot be copied into meta data fields / The importance of this research is evident from the field related to it; namely, that of reports about circumstances for Qur’anic revelations. Without a comprehensive knowledge of it, many of the subtleties and nuances of Qur’anic discourse remain concealed from us. For this purpose, I used al-Suyuti’s "Lubab al-Nuqul fi Asbab al-Nuzul" for the pivotal and comprehensive role it occupies among works dealing with this discipline. Briefly, my methodology was the following: I first presented a summary of the author’s biography. Secondly, I edited the manuscript I had obtained of this book from the King Faisal Library in Riyadh, Saudi Arabia. Thirdly, I made comparisons between this version and numerous printed versions of this book in order to verify the authenticity of textual information I presented in this thesis by categorising Prophetic and other reports at the levels of their soundness and weakness so that the reader may obtain a firm insight into their status levels. Fourthly, I amended reports that al-Suyuti had omitted in relation to verses he had cited. Fifthly, I graded different reports on a topic according to established criteria in this subject; often reconciling them where it was possible. I interpreted verses in their general purport by indicating that the report/s relating to the circumstances for revelation was/were already embodied in their signification. I only deviated from this norm where I found that a body of reliable scholars had given preference to a particular report vis-à-vis a certain verse; in which case I adopted their opinion. Sixthly, I attached brief commentaries to relevant verses to enable the reader to gain a more comprehensive grasp of text within its context. Finally, I defined some peculiar terminology found in this book for the benefit of unfamiliar researchers. / Religious Studies and Arabic / D. Litt. et Phil. (Islamic Studies)
779

門檻式自動迴歸模型參數之近似信賴區間 / Approximate confidence sets for parameters in a threshold autoregressive model

陳慎健, Chen, Shen Chien Unknown Date (has links)
本論文主要在估計門檻式自動迴歸模型之參數的信賴區間。由線性自動迴歸模型衍生出來的非線性自動迴歸模型中，門檻式自動迴歸模型是其中一種經常會被應用到的模型。雖然，門檻式自動迴歸模型之參數的漸近理論已經發展了許多；但是，相較於大樣本理論，有限樣本下參數的性質討論則較少。對於有限樣本的研究，Woodroofe (1989) 提出一種近似法：非常弱近似法。 Woodroofe 和 Coad (1997) 則利用此方法去架構一適性化線性模型之參數的修正信賴區間。Weng 和 Woodroofe (2006) 則將此近似法應用於線性自動迴歸模型。這個方法的應用始於定義一近似樞紐量，接著利用此方法找出近似樞紐量的近似期望值及近似變異數，並對此近似樞紐量標準化，則標準化後的樞紐量將近似於標準常態分配，因此得以架構參數的修正信賴區間。而在線性自動迴歸模型下，利用非常弱展開所導出的近似期望值及近似變異數僅會與一階動差及二階動差的微分有關。因此，本論文的研究目的就是在樣本數為適當的情況下，將線性自動迴歸模型的結果運用於門檻式自動迴歸模型。由於大部分門檻式自動迴歸模型的動差並無明確之形式；因此，本研究採用蒙地卡羅法及插分法去近似其動差及微分。最後，以第一階門檻式自動迴歸模型去配適美國的國內生產總值資料。 / Threshold autoregressive (TAR) models are a popular nonlinear extension of the linear autoregressive (AR) models. Though many have developed the asymptotic theory for parameter estimates in the TAR models, there have been fewer studies of the finite-sample properties. Woodroofe (1989) and Woodroofe and Coad (1997) developed a very weak approximation and used it to construct corrected confidence sets for parameters in an adaptive linear model. This approximation was further developed by Woodroofe and Coad (1999) and Weng and Woodroofe (2006), who derived the corrected confidence sets for parameters in the AR(p) models and other adaptive models. This approach starts with an approximate pivot, and employs the very weak expansions to determine the mean and variance corrections of the pivot. Then, the renormalized pivot is used to form corrected confidence sets. The correction terms have simple forms, and for AR(p) models they involve only the first two moments of the process and the derivatives of these moments. However, for TAR models the analytic forms for moments are known only in some cases when the autoregression function has special structures. The goal of this research is to extend the very weak method to the TAR models to form corrected confidence sets when the sample size is moderate. We propose using the difference quotient method and Monte Carlo simulations to approximate the derivatives. Some simulation studies are provided to assess the accuracy of the method. Then, we apply the approach to real U.S. GDP data.
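As an editorial illustration of the Monte Carlo and difference-quotient idea described in this abstract, a minimal sketch follows. The regime coefficients, sample sizes and the moment being differentiated are hypothetical choices; this is not the thesis's actual implementation.

```python
# Minimal sketch of the Monte Carlo + difference-quotient idea: simulate a
# two-regime TAR(1) process and approximate the derivative of a moment
# (here E[y_t]) with respect to a regime coefficient by a central difference
# of simulated means. All parameter values below are hypothetical.
import numpy as np

def simulate_tar1(phi1, phi2, threshold=0.0, sigma=1.0, n=500, burn=200, rng=None):
    """y_t = phi1*y_{t-1} + e_t if y_{t-1} <= threshold, else phi2*y_{t-1} + e_t."""
    rng = np.random.default_rng(rng)
    y = np.zeros(n + burn)
    for t in range(1, n + burn):
        phi = phi1 if y[t - 1] <= threshold else phi2
        y[t] = phi * y[t - 1] + sigma * rng.standard_normal()
    return y[burn:]  # discard the burn-in to approximate stationarity

def mc_moment(phi1, phi2, reps=500, seed=0):
    """Monte Carlo approximation of E[y_t] under the given regime coefficients."""
    rng = np.random.default_rng(seed)  # same seed => common random numbers across calls
    return np.mean([simulate_tar1(phi1, phi2, rng=rng).mean() for _ in range(reps)])

def dmoment_dphi1(phi1, phi2, h=0.01):
    """Central difference quotient approximating dE[y_t]/dphi1."""
    return (mc_moment(phi1 + h, phi2) - mc_moment(phi1 - h, phi2)) / (2.0 * h)

if __name__ == "__main__":
    # Hypothetical regime coefficients, for illustration only.
    print(dmoment_dphi1(0.3, 0.7))
```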
780

On New Constructive Tools in Bayesian Nonparametric Inference

Al Labadi, Luai 22 June 2012 (has links)
Bayesian nonparametric inference requires the construction of priors on infinite-dimensional spaces such as the space of cumulative distribution functions and the space of cumulative hazard functions. Well-known priors on the space of cumulative distribution functions are the Dirichlet process, the two-parameter Poisson-Dirichlet process and the beta-Stacy process. On the other hand, the beta process is a popular prior on the space of cumulative hazard functions. This thesis is divided into three parts. In the first part, we tackle the problem of sampling from the above-mentioned processes. Sampling from these processes plays a crucial role in many applications in Bayesian nonparametric inference. However, obtaining exact samples from these processes is impossible. The existing algorithms are either slow or very complex and may be difficult for many users to apply. We derive new approximation techniques for simulating the above processes. These new approximations provide simple, yet efficient, procedures for simulating these important processes. We compare the efficiency of the new approximations to several other well-known approximations and demonstrate a significant improvement. In the second part, we develop explicit expressions for calculating the Kolmogorov, Lévy and Cramér-von Mises distances between the Dirichlet process and its base measure. The derived expressions for each distance are used to select the concentration parameter of a Dirichlet process. We also propose a Bayesian goodness-of-fit test for simple and composite hypotheses for non-censored and censored observations. Illustrative examples and simulation results are included. Finally, we describe the relationship between frequentist and Bayesian nonparametric statistics. We show that, when the concentration parameter is large, the two-parameter Poisson-Dirichlet process and its corresponding quantile process share many asymptotic properties with the frequentist empirical process and the frequentist quantile process. Some of these properties are the functional central limit theorem, the strong law of large numbers and the Glivenko-Cantelli theorem.
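As an editorial illustration of what approximate sampling from one of these priors looks like, the sketch below draws a truncated stick-breaking (Sethuraman) representation of a Dirichlet process. This is a standard textbook approximation, not one of the new techniques derived in the thesis, and the base measure and concentration parameter shown are hypothetical choices.

```python
# Approximate draw from a Dirichlet process via truncated stick-breaking
# (Sethuraman's construction). NOT the thesis's new approximation; it only
# illustrates what "approximately sampling a prior on distribution functions"
# means. Base measure and concentration parameter below are hypothetical.
import numpy as np

def sample_dirichlet_process(alpha, base_sampler, truncation=1000, rng=None):
    """Return atoms and weights of a truncated draw F ~ DP(alpha, H),
    where F is approximated by sum_k w_k * delta(theta_k)."""
    rng = np.random.default_rng(rng)
    betas = rng.beta(1.0, alpha, size=truncation)            # stick-breaking fractions
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - betas)[:-1]))
    weights = betas * remaining                               # w_k = beta_k * prod_{j<k}(1 - beta_j)
    atoms = base_sampler(truncation, rng)                     # theta_k ~ H, i.i.d.
    return atoms, weights                                     # leftover mass ~ prod(1 - betas) is discarded

if __name__ == "__main__":
    # Hypothetical choices: base measure H = N(0, 1), concentration alpha = 5.
    atoms, weights = sample_dirichlet_process(
        alpha=5.0,
        base_sampler=lambda n, rng: rng.standard_normal(n),
        truncation=1000,
        rng=42,
    )
    # Value of the (approximate) random distribution function at x = 0:
    print(np.sum(weights[atoms <= 0.0]))
```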
