About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
181

Directors’ share dealings and corporate insolvencies: evidence from the UK

Ozkan, Aydin, Poletti-Hughes, Jannine, Trzeciakiewicz, Agnieszka 05 August 2015 (has links)
This paper investigates the relation between insider trading and the likelihood of insolvency, with a specific focus on the directors' sale and purchase transactions preceding insolvency. We use a unique data set on directors' dealings in 474 non-financial UK firms, of which 117 filed for insolvency, over the period 2000–2010. We show that the directors of insolvent firms increase their purchase transactions significantly as insolvency approaches. The results also reveal a significant relation between three different measures of insider trading activity and the likelihood of insolvency, which is observed to be positive only during the last six-month trading period. The relation is negative for the earlier trading periods. While the earlier purchase transactions appear to be motivated by superior information held by insiders, the purchase trades closer to the insolvency date are possibly initiated by directors' motives to influence the market's perception of the firm in an attempt to avert or delay insolvency.
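
The abstract does not specify the estimator, but the kind of relation it describes (a binary insolvency outcome related to insider-trading measures) can be illustrated with a simple logit model. The sketch below is purely hypothetical: the variable names and simulated data are stand-ins, not the paper's definitions or results.

```python
# Hypothetical illustration of a binary-outcome (logit) model linking an
# insolvency indicator to insider-trading measures; all variables and data
# are simulated stand-ins, not the paper's actual dataset or specification.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 474                                   # number of firms, matching the sample size quoted above
net_purchases   = rng.normal(size=n)      # e.g. directors' net purchase ratio (illustrative)
trade_frequency = rng.normal(size=n)      # e.g. number of directors' trades (illustrative)
trade_value     = rng.normal(size=n)      # e.g. value of directors' trades (illustrative)

# Simulate an insolvency indicator with a positive link to late purchases
logit_p = -1.5 + 0.8 * net_purchases + 0.3 * trade_frequency
insolvent = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_p)))

X = sm.add_constant(np.column_stack([net_purchases, trade_frequency, trade_value]))
fit = sm.Logit(insolvent, X).fit(disp=False)
print(fit.summary(xname=["const", "net_purchases", "trade_frequency", "trade_value"]))
```
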
182

Choosing summary statistics by least angle regression for approximate Bayesian computation

Faisal, Muhammad, Futschik, A., Hussain, I., Abd-el.Moemen, M. 01 February 2016 (has links)
Bayesian statistical inference relies on the posterior distribution. Depending on the model, the posterior can be more or less difficult to derive. In recent years, there has been a lot of interest in complex settings where the likelihood is analytically intractable. In such situations, approximate Bayesian computation (ABC) provides an attractive way of carrying out Bayesian inference. For obtaining reliable posterior estimates, however, it is important to keep the approximation errors in ABC small. The choice of an appropriate set of summary statistics plays a crucial role in this effort. Here, we report the development of a new algorithm that is based on least angle regression for choosing summary statistics. In two population genetic examples, the performance of the new algorithm is better than a previously proposed approach that uses partial least squares. / Higher Education Commission (HEC), College Deanship of Scientific Research, King Saud University, Riyadh, Saudi Arabia - research group project RGP-VPP-280.
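
The abstract does not give the algorithm's details, so the following is only a minimal sketch of the general idea under simple assumptions: candidate summary statistics are ranked with least angle regression (here scikit-learn's Lars) on pilot simulations, and only the selected summaries are used in plain rejection ABC on a toy Gaussian model.

```python
# Minimal sketch of rejection ABC with summary statistics chosen by least
# angle regression; the toy model and pilot-based selection step are
# illustrative assumptions, not the algorithm exactly as developed in the paper.
import numpy as np
from sklearn.linear_model import Lars

rng = np.random.default_rng(1)

def summaries(x):
    # Candidate summary statistics, only some of which are informative
    return np.array([x.mean(), np.median(x), x.std(), x.min(), x.max(), (x**3).mean()])

def simulate(theta, n=100):
    # Toy model: n observations from Normal(theta, 1)
    return summaries(rng.normal(theta, 1.0, size=n))

# Pilot simulations: rank candidate summaries with least angle regression
theta_pilot = rng.uniform(-3, 3, size=500)
S_pilot = np.array([simulate(t) for t in theta_pilot])
lars = Lars(n_nonzero_coefs=2).fit(S_pilot, theta_pilot)
keep = np.flatnonzero(lars.coef_)            # indices of the selected summaries

# Rejection ABC using only the selected summaries
s_obs = summaries(rng.normal(1.0, 1.0, size=100))[keep]
theta_draws = rng.uniform(-3, 3, size=20000)
dist = np.array([np.linalg.norm(simulate(t)[keep] - s_obs) for t in theta_draws])
eps = np.quantile(dist, 0.01)                # accept the closest 1% of draws
posterior = theta_draws[dist <= eps]
print(posterior.mean(), posterior.std())     # should concentrate near theta = 1
```
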
183

Search for steady and flaring astrophysical neutrino point sources with the IceCube detector

Alba, José Luis Bazo 27 September 2010 (has links)
High energy neutrino astronomy relies on the predictions of neutrino fluxes coming from astrophysical objects, for example active galactic nuclei. In these models, neutrinos and gamma-rays are produced in hadronic processes, which require the acceleration of protons to very high energies. Since neutrinos hardly interact and travel towards Earth undeflected by magnetic fields, they can point back to their sources. IceCube, located at the South Pole, is a large-volume detector for high energy neutrinos. In this work, data from two partial configurations of IceCube (22 and 40 strings) are analyzed. The data cover 651 days, from 2007 to 2009, and consist mostly of atmospheric muon neutrinos in the Northern sky and high energy atmospheric muons in the Southern sky. A time-integrated search for neutrino point sources in the Northern sky was developed and applied to an event sample obtained for the best sensitivity, with the IceCube 22-string configuration. The search was performed on pre-selected sources and the whole hemisphere was scanned. No evidence of a neutrino signal was found. In order to enhance the flare detection probability, an untriggered time-dependent search that looks for neutrino events clustering in time from specific sources in the entire sky was developed. This search was motivated by neutrino-photon correlations and the observations of flaring objects in gamma-rays, but focuses only on the neutrino data, covering a wide range of possible flare durations. The search method was expanded from a binned approach to a newly developed unbinned likelihood method, improving the results by 5-25%. Moreover, for the first time the Southern sky was analyzed with a time-dependent method. Source selection criteria were developed, defining two lists of variable astrophysical sources for the IceCube 22- and 40-string configurations. The results were compatible with background fluctuations for all sources tested. Therefore, upper limits on the neutrino fluence from these sources are presented.
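
For orientation, unbinned likelihood point-source searches of this kind model the data as a mixture of a signal and a background PDF and maximise the likelihood over the number of signal events. The sketch below is a deliberately simplified, spatial-only version with toy data and an assumed angular resolution; it is not the IceCube analysis code.

```python
# Simplified unbinned point-source test statistic on toy data; the real
# analysis also uses energy and (for flare searches) time terms.
import numpy as np
from scipy.optimize import minimize_scalar

def unbinned_ts(signal_pdf, background_pdf):
    """Maximise L(ns) = prod_i [ ns/N * S_i + (1 - ns/N) * B_i ] over the
    signal strength ns and return (ns_hat, 2 * log(L(ns_hat) / L(0)))."""
    S, B = np.asarray(signal_pdf), np.asarray(background_pdf)
    N = len(S)

    def neg_log_lik(ns):
        return -np.sum(np.log(ns / N * S + (1.0 - ns / N) * B))

    res = minimize_scalar(neg_log_lik, bounds=(0.0, N), method="bounded")
    return res.x, 2.0 * (neg_log_lik(0.0) - res.fun)

# Toy example: 1000 background events plus 15 signal events near a source
rng = np.random.default_rng(2)
psi = np.concatenate([rng.uniform(0, 5, 1000),           # angular distance (deg), rough flat background
                      np.abs(rng.normal(0, 0.3, 15))])   # signal clustered near the source
sigma = 0.5                                              # assumed angular resolution (deg)
S = np.exp(-psi**2 / (2 * sigma**2)) / (2 * np.pi * sigma**2)   # Gaussian signal PDF
B = np.full_like(psi, 1.0 / (2 * np.pi * 5.0))                  # rough flat background density
print(unbinned_ts(S, B))
```
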
184

Diagnóstico em regressão L1 / Diagnostic in L1 regression

Rodrigues, Kévin Allan Sales 14 March 2019 (has links)
This text presents an alternative regression method known as L1 regression. This method is robust to outliers in the Y variable, while the traditional least squares method does not provide robustness to this type of outlier. In this work we will reanalyze the housing data presented by Narula and Wellington (1977) in the light of L1 regression. We will illustrate the main inferential results, such as interpretation of the model, construction of confidence intervals and hypothesis tests for the parameters, and analysis of goodness-of-fit measures, and we will also use diagnostic measures to highlight influential observations. Among the measures of influence we will use the likelihood displacement and the conditional likelihood displacement.
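
As a quick illustration of why L1 (least absolute deviations) regression resists outliers in Y, the sketch below fits it by linear programming on simulated data (not the Narula and Wellington house-price data) and compares the coefficients with ordinary least squares.

```python
# L1 (least absolute deviations) regression via linear programming, compared
# with OLS on toy data containing one gross outlier in the response.
import numpy as np
from scipy.optimize import linprog

def lad_fit(X, y):
    """Minimise sum_i (u_i + v_i) subject to X beta + u - v = y, u, v >= 0,
    with beta split into beta+ - beta- so all variables are non-negative."""
    n, p = X.shape
    c = np.concatenate([np.zeros(2 * p), np.ones(2 * n)])
    A_eq = np.hstack([X, -X, np.eye(n), -np.eye(n)])
    res = linprog(c, A_eq=A_eq, b_eq=y,
                  bounds=[(0, None)] * (2 * p + 2 * n), method="highs")
    return res.x[:p] - res.x[p:2 * p]

rng = np.random.default_rng(3)
x = rng.uniform(0, 10, 40)
y = 2.0 + 1.5 * x + rng.normal(0, 1, 40)
y[0] += 50.0                                  # outlier in the response
X = np.column_stack([np.ones_like(x), x])
print("L1  coefficients:", lad_fit(X, y))                       # barely affected
print("OLS coefficients:", np.linalg.lstsq(X, y, rcond=None)[0])  # pulled by the outlier
```
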
185

Etude des parentés génétiques dans les populations humaines anciennes : estimation de la fiabilité et de l'efficacité des méthodes d'analyse / Genetic kinship in ancient human populations : estimating the reliability and efficiency of analysis methods

Zvénigorosky-Durel, Vincent 13 November 2018 (has links)
The study of genetic kinship allows anthropology to identify the place of an individual within the different structures in which they evolve: a biological family, a social group, a population. The application of classical probabilistic methods (established to solve cases in legal medicine, such as likelihood ratios, LR) to STR data from archaeological material has permitted the discovery of numerous parental links which together constitute genealogies, sometimes complex ones. Our continued practice of these methods has, however, led us to identify limits to the interpretation of STR data, especially in cases of complex, distant or inbred kinship, or in populations that are isolated, poorly known or extinct. The first part of the present work quantifies the reliability and efficiency of the LR method in four situations: a modern population with high allelic diversity, a modern population with low allelic diversity, a large ancient population and a small ancient population. Recent publications use the more numerous markers obtained with next-generation sequencing (NGS) to implement new strategies for detecting kinship, in particular based on the analysis of chromosome segments shared through common ancestry (IBD, "identity-by-descent", segments). These methods have permitted more reliable estimation of kinship probabilities in ancient material. They are nevertheless ill-suited to certain situations characteristic of ancient DNA studies: they were not conceived to work with a single isolated pair of individuals and they depend, like the classical methods, on the estimation of allelic diversity in the population. We therefore propose a quantification of the reliability and efficiency of the IBD segment method using NGS data, focusing on the quality of results in different situations corresponding to populations of different sizes and samples of varying heterogeneity. [...]
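
To make the likelihood-ratio idea concrete, the sketch below computes a single-locus LR for the hypothesis "parent-child" against "unrelated" from two genotypes and a table of population allele frequencies. The locus, alleles and frequencies are hypothetical; real STR casework multiplies LRs over many loci and, as discussed above, depends heavily on how well those frequencies are known.

```python
# Single-locus parent-child vs unrelated likelihood ratio from genotypes and
# hypothetical allele frequencies (Hardy-Weinberg assumed for the population).
def genotype_prob(g, p):
    """Hardy-Weinberg probability of an unordered genotype g = (a, b)."""
    a, b = g
    return p[a] ** 2 if a == b else 2 * p[a] * p[b]

def transmission_prob(allele, parent):
    """Probability that a parent with genotype `parent` transmits `allele`."""
    return 0.5 * (parent[0] == allele) + 0.5 * (parent[1] == allele)

def parent_child_lr(parent, child, p):
    """LR for 'parent-child' vs 'unrelated'; the parent's own genotype
    probability cancels between the two hypotheses."""
    c, d = child
    if c == d:
        prob_pc = transmission_prob(c, parent) * p[c]
    else:
        prob_pc = transmission_prob(c, parent) * p[d] + transmission_prob(d, parent) * p[c]
    return prob_pc / genotype_prob(child, p)

# Hypothetical allele frequencies for one STR locus and one tested pair
freqs = {"12": 0.10, "13": 0.25, "14": 0.35, "15": 0.30}
print(parent_child_lr(parent=("12", "14"), child=("14", "15"), p=freqs))
# Multi-locus LRs are the product of single-locus LRs over independent loci.
```
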
186

Jackknife Empirical Likelihood Method and its Applications

Yang, Hanfang 01 August 2012 (has links)
In this dissertation, we investigate jackknife empirical likelihood methods motivated by recent research in statistics and related fields. The computational intensity of empirical likelihood can be significantly reduced by using jackknife empirical likelihood methods without losing accuracy or stability. We demonstrate that the proposed jackknife empirical likelihood methods are able to handle several challenging and open problems, with elegant asymptotic properties and accurate simulation results in finite samples. These problems include ROC curves with missing data, the difference of two ROC curves with two-dimensional correlated data, a novel inference for the partial AUC, and the difference of two quantiles with one or two samples. In addition, empirical likelihood methodology can be successfully applied to the linear transformation model using adjusted estimating equations. Comprehensive simulation studies of coverage probabilities and average interval lengths for these topics demonstrate that the proposed jackknife empirical likelihood methods perform well in finite samples under various settings. Moreover, some related real problems are studied to support our conclusions. Finally, we provide an extensive discussion of feasible ideas for future studies based on our jackknife EL procedures.
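
The basic mechanism of jackknife empirical likelihood is to turn a (possibly non-linear) statistic into approximately independent jackknife pseudo-values and then apply ordinary empirical likelihood for a mean to them. The sketch below illustrates this for a simple U-statistic (Gini's mean difference); it is a generic illustration under standard assumptions, not code from the dissertation.

```python
# Jackknife empirical likelihood sketch: pseudo-values for a U-statistic,
# then Owen-style empirical likelihood for their mean.
import numpy as np
from scipy.optimize import brentq

def gini_mean_difference(x):
    """U-statistic theta = E|X1 - X2|, averaged over all ordered pairs."""
    x = np.asarray(x)
    return np.abs(x[:, None] - x[None, :]).sum() / (len(x) * (len(x) - 1))

def jackknife_pseudo_values(x, stat):
    n = len(x)
    full = stat(x)
    loo = np.array([stat(np.delete(x, i)) for i in range(n)])
    return n * full - (n - 1) * loo

def el_log_ratio(v, theta0):
    """-2 log empirical likelihood ratio for the mean of v at theta0."""
    d = np.asarray(v) - theta0
    if d.min() >= 0 or d.max() <= 0:
        return np.inf                           # theta0 outside the convex hull
    def score(lam):                             # derivative of the log-EL in lambda
        return np.sum(d / (1.0 + lam * d))
    lam = brentq(score, -1.0 / d.max() + 1e-10, -1.0 / d.min() - 1e-10)
    return 2.0 * np.sum(np.log1p(lam * d))

rng = np.random.default_rng(4)
x = rng.exponential(scale=1.0, size=60)
v = jackknife_pseudo_values(x, gini_mean_difference)
# Under regularity conditions the statistic is asymptotically chi-squared(1),
# so values above about 3.84 would reject theta0 at the 5% level.
print(el_log_ratio(v, theta0=1.0))              # true Gini mean difference for Exp(1) is 1
```
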
187

Empirical Likelihood Method for Ratio Estimation

Dong, Bin 22 February 2011 (has links)
Empirical likelihood, which was pioneered by Thomas and Grunkemeier (1975) and Owen (1988), is a powerful nonparametric method of statistical inference that has been widely used in the statistical literature. In this thesis, we investigate the merits of empirical likelihood for various problems arising in ratio estimation. First, motivated by the smooth empirical likelihood (SEL) approach proposed by Zhou & Jing (2003), we develop empirical likelihood estimators for diagnostic test likelihood ratios (DLRs), and derive the asymptotic distributions for suitable likelihood ratio statistics under certain regularity conditions. To skirt the bandwidth selection problem that arises in smooth estimation, we propose an empirical likelihood estimator for the same DLRs that is based on non-smooth estimating equations (NEL). Via simulation studies, we compare the statistical properties of these empirical likelihood estimators (SEL, NEL) to certain natural competitors, and identify situations in which SEL and NEL provide superior estimation capabilities. Next, we focus on deriving an empirical likelihood estimator of a baseline cumulative hazard ratio with respect to covariate adjustments under two nonproportional hazard model assumptions. Under typical regularity conditions, we show that suitable empirical likelihood ratio statistics each converge in distribution to a χ² random variable. Through simulation studies, we investigate the advantages of this empirical likelihood approach compared to use of the usual normal approximation. Two examples from previously published clinical studies illustrate the use of the empirical likelihood methods we have described. Empirical likelihood has obvious appeal in deriving point and interval estimators for time-to-event data. However, when we use this method and its asymptotic critical value to construct simultaneous confidence bands for survival or cumulative hazard functions, it typically necessitates very large sample sizes to achieve reliable coverage accuracy. We propose using a bootstrap method to recalibrate the critical value of the sampling distribution of the sample log-likelihood ratios. Via simulation studies, we compare our EL-based bootstrap estimator for the survival function with EL-HW and EL-EP bands proposed by Hollander et al. (1997) and apply this method to obtain a simultaneous confidence band for the cumulative hazard ratios in the two clinical studies that we mentioned above. While copulas have been a popular statistical tool for modeling dependent data in recent years, selecting a parametric copula is a nontrivial task that may lead to model misspecification because different copula families involve different correlation structures. This observation motivates us to use empirical likelihood to estimate a copula nonparametrically. With this EL-based estimator of a copula, we derive a goodness-of-fit test for assessing a specific parametric copula model. By means of simulations, we demonstrate the merits of our EL-based testing procedure. We demonstrate this method using the data from Wieand et al. (1989). In the final chapter of the thesis, we provide a brief introduction to several areas for future research involving the empirical likelihood approach.
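
For readers unfamiliar with the quantities in the first part of this abstract, the sketch below computes plug-in diagnostic likelihood ratios (DLR+ and DLR-) from a 2x2 table together with the usual log-scale Wald intervals. It is only a baseline for orientation and is not the empirical likelihood (SEL/NEL) estimators developed in the thesis.

```python
# Plug-in diagnostic likelihood ratios from a 2x2 table with log-scale Wald
# intervals; counts below are hypothetical.
import numpy as np

def diagnostic_likelihood_ratios(tp, fp, fn, tn, z=1.96):
    sens = tp / (tp + fn)                       # sensitivity
    spec = tn / (tn + fp)                       # specificity
    out = {}
    for name, lr, var in [
        ("DLR+", sens / (1.0 - spec), 1/tp - 1/(tp + fn) + 1/fp - 1/(fp + tn)),
        ("DLR-", (1.0 - sens) / spec, 1/fn - 1/(tp + fn) + 1/tn - 1/(fp + tn)),
    ]:
        half = z * np.sqrt(var)                 # interval half-width on the log scale
        out[name] = (lr, lr * np.exp(-half), lr * np.exp(half))
    return out

# Hypothetical counts: 90 true positives, 20 false positives,
# 10 false negatives, 180 true negatives
print(diagnostic_likelihood_ratios(tp=90, fp=20, fn=10, tn=180))
```
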
189

Searching for Gamma Rays from Galaxy Clusters with the Fermi Large Area Telescope : Cosmic Rays and Dark Matter

Zimmer, Stephan January 2013 (has links)
In this licentiate thesis, I report a search for GeV γ rays towards the locations of galaxy clusters. I mainly discuss the results of a search for cosmic-ray (CR) induced γ-ray emission, but also briefly elaborate on a related study searching for dark matter (DM) induced γ-ray emission from galaxy clusters. In addition, I provide a detailed discussion of the analysis tools that were used and discuss some additional tests that are not included in the papers this licentiate thesis is based on. In a comprehensive search covering almost the entire sky, we find no statistically significant evidence for either DM- or CR-induced γ rays from galaxy clusters. Thus we report upper limits on CR quantities that exclude emission scenarios in which the maximum hadronic injection efficiency is larger than 21%, and associated limits on the maximum CR-to-thermal pressure ratio, ⟨X_CR⟩. In addition, we update previous flux upper limits given a new set of models and taking the source extension into account. For DM masses below 100 GeV, we exclude annihilation cross sections above ∼10⁻²⁴ cm³ s⁻¹ into bb̄. For decaying DM, we exclude decay times lower than 10²⁷ s over the mass range 20 GeV–2 TeV.
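
Cross-section limits of this kind follow from the standard relation between a γ-ray flux limit, the DM particle mass, the photon yield per annihilation and the target's J-factor. The sketch below applies that relation with purely illustrative numbers; the flux limit, J-factor and photon yield are assumptions, not the values used in the thesis.

```python
# Convert a gamma-ray flux upper limit into an annihilation cross-section limit
# using Phi = <sigma v> / (8 pi m_DM^2) * N_gamma * J, with Phi in ph cm^-2 s^-1,
# m_DM in GeV and J in GeV^2 cm^-5, so <sigma v> comes out in cm^3 s^-1.
import numpy as np

def sigmav_upper_limit(flux_ul, m_dm_gev, j_factor, n_gamma):
    return 8.0 * np.pi * m_dm_gev**2 * flux_ul / (n_gamma * j_factor)

print(sigmav_upper_limit(flux_ul=1e-10,      # hypothetical flux limit, ph cm^-2 s^-1
                         m_dm_gev=100.0,     # DM mass, GeV
                         j_factor=1e17,      # hypothetical cluster J-factor, GeV^2 cm^-5
                         n_gamma=10.0))      # photons per annihilation above threshold
```
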
190

Influencers Retorik : Hur influencers argumenterar på sociala medier / Influencers’ Rhetoric : How influencers argue on social media

Samater, Miski, Ali, Ilham January 2021 (has links)
Purpose: The purpose of the study is to map how influencers argue for socially important issues, personal issues, and products/brands they market on social media. The rhetorical model and the theoretical frameworks of parasocial relationships and the Elaboration Likelihood Model (ELM) are applied to identify whether rhetorical argumentation matters in the creation of relationships. The study also aims to gain an understanding of why followers listen to influencers' recommendations and buying advice. Method: The study uses a qualitative method in the form of a content analysis and focus group interviews. Findings: The content analysis shows that the rhetorical appeals ethos, pathos, and logos are frequently used in influencers' argumentation on social media. The most common appeal was pathos, which appeared in at least 50% of the posts for paid collaborations. The focus groups show that the younger generation is more motivated than the older generation to follow and act on influencers' recommendations. Conclusion: The study shows that influencers frequently use rhetorical argumentation to convey a message on social media. The posts that influencers produce enable the creation of parasocial relationships, which in turn lead to purchase intentions. The rhetorical argumentation influencers use also leads followers to act on their recommendations and buying advice, and it is through rhetorical argumentation that influencers create parasocial relationships with their followers. The study also indicates that there are differences between the younger and older generations, with the younger generation finding it easier to enter into parasocial relationships with influencers. (This thesis is written in Swedish.)
