171 |
Study of the Higgs boson decay H → ZZ(∗) → 4ℓ and inner detector performance studies with the ATLAS experiment
Selbach, Karoline Elfriede, January 2014
The Higgs mechanism, responsible for giving mass to the electroweak W± and Z bosons, is the last piece of the Standard Model (SM) to be confirmed experimentally. Experimental evidence for the Higgs boson is therefore important and is currently pursued at the Large Hadron Collider (LHC) at CERN. The ATLAS experiment (A Toroidal LHC ApparatuS) analyses a wide range of physics processes from collisions produced by the LHC at a centre-of-mass energy of 7–8 TeV and a peak luminosity of 7.73×10³³ cm−2s−1. This thesis concentrates on the discovery and mass measurement of the Higgs boson. The analysis in the H → ZZ(∗) → 4ℓ channel is presented, where ℓ denotes electrons or muons. Statistical methods with non-parametric models are successfully cross-checked against parametric models. The per-event errors studied to improve the mass determination decrease the total mass uncertainty by 9%. The other main focus is the performance of the initial, and possible upgraded, layouts of the ATLAS inner detector. The silicon cluster size, channel occupancy and track separation in jets are analysed for a detailed understanding of the inner detector, which is exposed to high particle fluxes and is crucial for tracking and vertexing. The simulation of the detector performance is improved by adjusting the cross-talk of adjacent hit pixels and the Lorentz angle in the digitisation. To prepare the ATLAS detector for upgrade conditions, the performance is studied with pile-up of up to 200. Several possible layout configurations were considered before converging on the baseline used for the Letter of Intent, which includes increased granularity in the Pixel and SCT detectors and additional silicon layers. This layout was validated to meet the design target of an occupancy < 1% throughout the whole inner detector. The H → ZZ(∗) → 4ℓ analysis benefits from the excellent momentum resolution, particularly for leptons down to pT = 6 GeV.
The current inner detector is designed to provide momentum measurements of low-pT charged tracks with a resolution of σ_pT/pT = 0.05% × pT ⊕ 1% (pT in GeV) over a range of |η| < 2.5. The discovery of a new particle in July 2012 compatible with the Standard Model Higgs boson included a 3.6σ excess of events observed in the H → ZZ(∗) → 4ℓ channel at 125 GeV. The per-event error was studied using a narrow mass range concentrated around the signal peak (110 GeV < mH < 150 GeV). The error on the four-lepton invariant mass is derived, and its probability density function (pdf) is multiplied by the conditional pdf of the four-lepton invariant mass given the error. Applying a systematics model dependent on the true mass of the discovered particle, new fitting machinery was developed to exploit additional statistical methods for the mass measurement, resulting in a discovery with 6.6σ at mH = 124.3 +0.6/−0.5 (stat) +0.5/−0.3 (syst) GeV and μ = 1.7 ± 0.5 using the full 2011 and 2012 datasets.
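The per-event-error idea above folds each event's own mass resolution into the likelihood. A minimal sketch with a toy Gaussian resolution model and invented numbers (not the ATLAS fitting machinery):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset (invented numbers): per-event mass errors sigma_i and
# observed four-lepton masses smeared around a true Higgs mass.
TRUE_MH = 125.0
n_events = 500
sigma = rng.uniform(1.0, 3.0, n_events)   # per-event errors (GeV)
m_obs = rng.normal(TRUE_MH, sigma)        # observed m_4l per event

def nll(mh):
    """Negative log-likelihood over events. The conditional pdf of m_4l
    given the per-event error is taken as Gaussian; the pdf of the error
    itself does not depend on mh and so drops out of the minimisation."""
    return float(np.sum(0.5 * ((m_obs - mh) / sigma) ** 2 + np.log(sigma)))

grid = np.arange(120.0, 130.0, 0.01)
mh_hat = grid[np.argmin([nll(m) for m in grid])]
```

Weighting each event by its own resolution is what sharpens the mass estimate relative to a single average-resolution model.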
|
172 |
CP-nets: From Theory to Practice
Allen, Thomas E., 01 January 2016
Conditional preference networks (CP-nets) exploit the power of ceteris paribus rules to represent preferences over combinatorial decision domains compactly. CP-nets have much appeal. However, their study has not yet advanced sufficiently for their widespread use in real-world applications. Known algorithms for deciding dominance---whether one outcome is better than another with respect to a CP-net---require exponential time. Data for CP-nets are difficult to obtain: human subjects data over combinatorial domains are not readily available, and earlier work on random generation is also problematic. Also, much of the research on CP-nets makes strong, often unrealistic assumptions, such as that decision variables must be binary or that only strict preferences are permitted. In this thesis, I address such limitations to make CP-nets more useful. I show how: to generate CP-nets uniformly randomly; to limit search depth in dominance testing given expectations about sets of CP-nets; and to use local search for learning restricted classes of CP-nets from choice data.
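The ceteris paribus rules behind CP-nets can be made concrete with a tiny example. A sketch of a two-variable binary CP-net and a single improving-flip check (variable names and preference tables invented; full dominance testing chains such flips and is exponential in general):

```python
# Toy binary CP-net over two variables: A's preference is unconditional;
# B's preference depends on A, ceteris paribus. Each conditional
# preference table maps an outcome to a preference order for one variable.
cpt = {
    "A": lambda o: ["a1", "a0"],                                   # a1 > a0
    "B": lambda o: ["b1", "b0"] if o["A"] == "a1" else ["b0", "b1"],
}

def improving_flip(outcome, var, value):
    """True if changing `var` to `value`, all else held equal, yields a
    better outcome under the CP-net's preference order for `var`."""
    order = cpt[var](outcome)
    return order.index(value) < order.index(outcome[var])

print(improving_flip({"A": "a1", "B": "b0"}, "B", "b1"))  # True: given a1, b1 > b0
```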
|
173 |
The synchronization of GDP growth in the G7 during US recessions
Antonakakis, Nikolaos; Scharler, Johann, January 2012 (PDF)
Using the dynamic conditional correlation (DCC) model due to Engle (2002), we estimate time-varying correlations of quarterly real GDP growth among the G7 countries. In general, we find that rather heterogeneous patterns of international synchronization exist during US recessions. During the 2007-2009 recession, however, international co-movement increased substantially. (authors' abstract)
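The DCC(1,1) recursion underlying such estimates can be sketched directly. A minimal version on standardized residuals, with fixed illustrative parameters a and b rather than the maximum-likelihood estimates a real application would fit:

```python
import numpy as np

def dcc_correlations(eps, a=0.05, b=0.90):
    """Time-varying correlation matrices from standardized residuals
    eps (T x N) via the DCC(1,1) recursion of Engle (2002):
        Q_t = (1-a-b)*Qbar + a*e_{t-1}e_{t-1}' + b*Q_{t-1}
        R_t = diag(Q_t)^{-1/2} Q_t diag(Q_t)^{-1/2}
    Parameters a, b are illustrative, not estimated."""
    T, N = eps.shape
    Qbar = np.cov(eps, rowvar=False)   # unconditional correlation target
    Q = Qbar.copy()
    R = np.empty((T, N, N))
    for t in range(T):
        if t > 0:
            e = eps[t - 1][:, None]
            Q = (1 - a - b) * Qbar + a * (e @ e.T) + b * Q
        d = 1.0 / np.sqrt(np.diag(Q))
        R[t] = Q * np.outer(d, d)      # rescale to a correlation matrix
    return R
```

The rescaling step guarantees unit diagonals at every t, so each R_t is a valid correlation matrix.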
|
174 |
Behavioural asset pricing in Chinese stock markets
Xu, Yihan, January 2011
This thesis addresses asset pricing in Chinese A-share stock markets using a dataset consisting of all shares listed on the Shanghai and Shenzhen stock exchanges from January 1997 to December 2007. The empirical work is carried out on two theoretical foundations: the efficient market hypothesis and behavioural finance. It examines and compares the validity of two traditional asset pricing models and two behavioural asset pricing models. The investigation is initially performed within a traditional asset pricing framework. The three-factor Fama-French model is estimated and then augmented by additional macroeconomic and bond market variables. The results suggest that these traditional asset pricing models fail to explain fully the time-variation of stock returns in Chinese stock markets: they leave non-normally distributed and heteroskedastic residuals, call for further explanatory variables, and suggest the existence of a structural break. Indeed, the macroeconomic and bond market factors provide little help to the asset pricing model. Using the Fama-French model as the benchmark, further research is done by investigating investor sentiment as a third dimension beside returns and risks. Investor sentiment helps explain the mis-pricing component of returns in the Fama-French model and the time-variation in the factors themselves. Incorporating investor sentiment into the asset pricing model improves model performance, lessening the importance of the Fama-French factors and suggesting that in China sentiment affects both the way investors judge risks and portfolio returns directly. The sentiment effect on asset pricing is also examined under a nonlinear Markov-switching framework. The stochastic regime-dependent model reveals that stock returns in China are driven by fundamental factors in bear and low-volatility markets but are prone to sentiment and become uncoupled from fundamental risks in bull and high-volatility markets.
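The benchmark three-factor model above is a time-series regression of excess returns on the market, size, and value factors. A sketch on simulated data (factor series and coefficients are invented stand-ins for real MKT, SMB, and HML data):

```python
import numpy as np

rng = np.random.default_rng(0)
T = 250  # invented sample length

# Simulated factor series standing in for MKT, SMB and HML.
factors = rng.normal(size=(T, 3))
beta_true = np.array([1.1, 0.4, -0.2])
excess_ret = 0.02 + factors @ beta_true + 0.1 * rng.normal(size=T)

# Time-series regression: r_t - rf_t = alpha + b*MKT + s*SMB + h*HML + e_t
X = np.column_stack([np.ones(T), factors])
coef, *_ = np.linalg.lstsq(X, excess_ret, rcond=None)
alpha, betas = coef[0], coef[1:]
```

A non-zero alpha, or residuals that are non-normal and heteroskedastic as in the thesis, is what motivates augmenting the regressors or moving to a behavioural specification.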
|
175 |
Supply chain network design under uncertainty and risk
Hollmann, Dominik, January 2011
We consider the research problem of quantitative support for decision making in supply chain network design (SCND). We first identify the requirements for a comprehensive SCND as (i) a methodology to select uncertainties, (ii) a stochastic optimisation model, and (iii) an appropriate solution algorithm. We propose a process to select a manageable number of uncertainties to be included in a stochastic program for SCND. We develop a comprehensive two-stage stochastic program for SCND that includes uncertainty in demand, currency exchange rates, labour costs, productivity, supplier costs, and transport costs. We also use conditional value at risk (CV@R) to explore the trade-off between risk and return. We use a scenario generator based on moment matching to represent the multivariate uncertainty. The resulting stochastic integer program is computationally challenging, and we propose a novel iterative solution algorithm called adaptive scenario refinement (ASR) to solve the problem. We describe the rationale underlying ASR, validate it for a set of benchmark problems, and discuss the benefits of the algorithm applied to our SCND problem. Finally, we demonstrate the benefits of the proposed model in a case study and show that multiple sources of uncertainty and risk are important to consider in SCND. Whereas most research in the literature focuses on demand uncertainty, our study suggests that exchange rate uncertainty is more important for the choice of optimal supply chain strategies in international production networks. The SCND model and the use of the coherent downside risk measure in the stochastic program are innovative and novel; these and the ASR solution algorithm taken together make contributions to knowledge.
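The CV@R risk-return trade-off used above can be sketched on equiprobable scenario costs. A minimal illustration (the thesis's actual model is a two-stage stochastic integer program; the weight `lam` and function names here are invented):

```python
import numpy as np

def cvar(costs, alpha=0.95):
    """Conditional value-at-risk (CV@R) of equiprobable scenario costs:
    the mean cost over the worst (1 - alpha) tail of scenarios."""
    costs = np.sort(np.asarray(costs, dtype=float))
    k = max(1, int(np.ceil((1 - alpha) * len(costs))))
    return costs[-k:].mean()

def objective(costs, lam=0.5, alpha=0.95):
    """Risk-return trade-off: blend expected cost and CV@R with a
    risk-aversion weight lam (illustrative formulation)."""
    return (1 - lam) * np.mean(costs) + lam * cvar(costs, alpha)
```

Sweeping `lam` from 0 to 1 traces the trade-off between the risk-neutral expected-cost solution and the most risk-averse one.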
|
176 |
Logarithmic opinion pools for conditional random fields
Smith, Andrew, January 2007
Since their recent introduction, conditional random fields (CRFs) have been successfully applied to a multitude of structured labelling tasks in many different domains. Examples include natural language processing (NLP), bioinformatics and computer vision. Within NLP itself we have seen many different application areas, like named entity recognition, shallow parsing, information extraction from research papers and language modelling. Most of this work has demonstrated the need, directly or indirectly, to employ some form of regularisation when applying CRFs in order to overcome the tendency for these models to overfit. To date a popular method for regularising CRFs has been to fit a Gaussian prior distribution over the model parameters. In this thesis we explore other methods of CRF regularisation, investigating their properties and comparing their effectiveness. We apply our ideas to sequence labelling problems in NLP, specifically part-of-speech tagging and named entity recognition. We start with an analysis of conventional approaches to CRF regularisation, and investigate possible extensions to such approaches. In particular, we consider choices of prior distribution other than the Gaussian, including the Laplacian and Hyperbolic; we look at the effect of regularising different features separately, to differing degrees, and explore how we may define an appropriate level of regularisation for each feature; we investigate the effect of allowing the mean of a prior distribution to take on non-zero values; and we look at the impact of relaxing the feature expectation constraints satisfied by a standard CRF, leading to a modified CRF model we call the inequality CRF. Our analysis leads to the general conclusion that although there is some capacity for improvement of conventional regularisation through modification and extension, this is quite limited. 
Conventional regularisation with a prior is in general hampered by the need to fit a hyperparameter or set of hyperparameters, which can be an expensive process. We then approach the CRF overfitting problem from a different perspective. Specifically, we introduce a form of CRF ensemble called a logarithmic opinion pool (LOP), where CRF distributions are combined under a weighted product. We show how a LOP has theoretical properties which provide a framework for designing new overfitting reduction schemes in terms of diverse models, and demonstrate how such diverse models may be constructed in a number of different ways. Specifically, we show that by constructing CRF models from manually crafted partitions of a feature set and combining them with equal weight under a LOP, we may obtain an ensemble that significantly outperforms a standard CRF trained on the entire feature set, and is competitive in performance with a standard CRF regularised with a Gaussian prior. The great advantage of the LOP approach is that, unlike the Gaussian prior method, it does not require us to search a hyperparameter space. Having demonstrated the success of LOPs in the simple case, we then move on to consider more complex uses of the framework. In particular, we investigate whether it is possible to further improve the LOP ensemble by allowing parameters in different models to interact during training in such a way that diversity between the models is encouraged. Lastly, we show how the LOP approach may be used as a remedy for a problem from which standard CRFs can sometimes suffer. In certain situations, negative effects may be introduced to a CRF by the inclusion of highly discriminative features. An example of this is provided by gazetteer features, which encode a word's presence in a gazetteer. We show how LOPs may be used to reduce these negative effects, and so provide some insight into how gazetteer features may be more effectively handled in CRFs, and in log-linear models in general.
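The weighted-product combination at the heart of a LOP can be sketched for plain categorical distributions (the thesis pools sequence-level CRF distributions; the label distributions below are invented):

```python
import numpy as np

def logarithmic_opinion_pool(dists, weights):
    """Combine distributions under a weighted product,
    p_LOP(y) ∝ prod_i p_i(y)^(w_i), working in log space and then
    renormalizing with the log-sum-exp of the pooled scores."""
    log_p = sum(w * np.log(p) for w, p in zip(weights, dists))
    log_p -= np.logaddexp.reduce(log_p)  # subtract log normaliser
    return np.exp(log_p)

# Two 'expert' distributions over three labels, pooled with equal weight.
p1 = np.array([0.7, 0.2, 0.1])
p2 = np.array([0.5, 0.4, 0.1])
pool = logarithmic_opinion_pool([p1, p2], [0.5, 0.5])  # ≈ [0.607, 0.290, 0.103]
```

With equal weights this is a normalized geometric mean: a label must score well under every expert to score well under the pool, which is what drives the variance reduction the thesis exploits.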
|
177 |
An associative approach to task switching
Forrest, Charlotte Louise, January 2012
This thesis explores the behaviour of participants taking an associative approach to a task-cueing paradigm. Task-cueing is usually intended to explore controlled processing of task-sets, but small stimulus sets plausibly afford associative learning via simple and conditional discriminations. In six experiments participants were presented with typical task-cueing trials: a cue (coloured shape) followed by a digit (or in Experiment 5 a symbol) requiring one of two responses. In the standard Tasks condition (Monsell Experiment and Experiments 1-3), the participant was instructed to perform either an odd/even or a high/low task dependent on the cue. The second, cue + stimulus-response (CSR) condition was intended to induce associative learning of cue + stimulus-response mappings. In general, the Tasks condition showed a large switch cost that reduced with preparation time, a small, constant congruency effect and a small perturbation when new stimuli were introduced. By contrast the CSR condition showed a small, reliable switch cost that did not reduce with preparation time, a large congruency effect that changed over time and a large perturbation when new stimuli were introduced. These differences may indicate automatic associative processing in the CSR condition and rule-based classification in the Tasks condition. Furthermore, an associative model based on the APECS learning algorithm (McLaren, 1993) provided an account of the CSR data. Experiment 3 showed that participants were able to deliberately change their approach to the experiment from using CSR instructions to using Tasks instructions, and to some extent vice versa. Experiments 4 and 5 explored the cause of the small switch cost in the CSR condition. Consideration of the aspects of the paradigm that produced the switch cost in the APECS model yielded predictions, which were tested against behavioural data. Experiment 4 found that the resulting manipulation made participants more likely to induce task-sets.
Experiment 5 used random symbols instead of numbers, removing the underlying task-sets. The results of this experiment broadly agreed with the predictions made using APECS. Chapter 6 considers an initial attempt to create a real-time version of APECS. It also finds that an associative model of a different class (AMAN, Harris & Livesey, 2010) can provide an account of some, but not all, of the phenomena found in the CSR condition. This thesis concludes that performance in the Tasks condition is suggestive of the use of cognitive control processes, whilst associatively based responding is available as a basis for performance in the CSR condition.
|
178 |
Caveat Emptor: Does Bitcoin Improve Portfolio Diversification?
Gasser, Stephan; Eisl, Alexander; Weinmayer, Karl, January 2014 (PDF)
Bitcoin is an unregulated digital currency originally introduced in 2008 without legal tender status. Based on a decentralized peer-to-peer network to confirm transactions and generate a limited amount of new bitcoins, it functions without the backing of a central bank or any other monitoring authority. In recent years, Bitcoin has seen increasing media coverage and trading volume, as well as major capital gains and losses in a high-volatility environment. Interestingly, an analysis of Bitcoin returns shows remarkably low correlations with traditional investment assets such as other currencies, stocks, bonds or commodities such as gold or oil. In this paper, we shed light on the impact an investment in Bitcoin can have on an already well-diversified investment portfolio. Due to the non-normal nature of Bitcoin returns, we do not apply the classic mean-variance approach, but adopt a Conditional Value-at-Risk framework that does not require asset returns to be normally distributed. Our results indicate that Bitcoin should be included in optimal portfolios. Even though an investment in Bitcoin increases the CVaR of a portfolio, this additional risk is more than compensated by high returns, leading to better return-risk ratios.
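The paper's exact optimization is not reproduced here, but CVaR-based portfolio selection commonly rests on the Rockafellar-Uryasev representation, which makes CVaR a convex function of portfolio weights. A sketch of the empirical version on a loss sample:

```python
import numpy as np

def cvar_ru(losses, alpha=0.95):
    """Empirical CVaR via the Rockafellar-Uryasev representation:
        CVaR_alpha = min over z of  z + E[(L - z)+] / (1 - alpha).
    For a finite sample the minimum is attained at one of the observed
    losses, so we scan those as candidate thresholds z."""
    losses = np.asarray(losses, dtype=float)
    return min(z + np.mean(np.maximum(losses - z, 0.0)) / (1 - alpha)
               for z in losses)
```

In the full portfolio problem, `losses` becomes the scenario losses of a candidate weight vector and the same expression is minimized jointly over z and the weights, with no normality assumption on returns.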
|
179 |
Autoregressive Conditional Density
Lindberg, Jacob, January 2016
We compare two time series models: an ARMA(1,1)-ACD(1,1)-NIG model against an ARMA(1,1)-GARCH(1,1)-NIG model. Their out-of-sample performance is of interest rather than their in-sample properties. The models produce one-day-ahead forecasts which are evaluated using three statistical tests: the VaR-test, the VaRdur-test and the Berkowitz-test. All three tests are concerned with tail events, since such time series models are often used to estimate downside risk. When the two models are applied to data on Canadian stock market returns, our three statistical tests point in the direction that the ACD model and the GARCH model perform similarly. The difference between the models is small. We finish with comments on the model uncertainty inherent in the comparison.
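The thesis does not spell out its test statistics here, but a standard example of a VaR exception test of this family is Kupiec's unconditional-coverage likelihood-ratio test, sketched below (the example counts are invented):

```python
import numpy as np

def kupiec_lr(violations, n, p=0.01):
    """Kupiec's unconditional-coverage likelihood-ratio statistic for a
    VaR backtest: do violations occur at the promised rate p?
    Asymptotically chi-squared with 1 df under the null; 3.84 is the
    5% critical value."""
    x, phat = violations, violations / n
    if x == 0:
        return -2 * n * np.log(1 - p)
    return -2 * (x * np.log(p / phat)
                 + (n - x) * np.log((1 - p) / (1 - phat)))

# Example: 250 one-day-ahead 1% VaR forecasts with 5 violations.
stat = kupiec_lr(5, 250, p=0.01)
reject = stat > 3.84  # here: fail to reject the model's coverage
```

Duration-based tests such as the VaRdur-test additionally examine the spacing between violations, and the Berkowitz test looks at the whole forecast density rather than a single quantile.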
|
180 |
Lewis’ Theory of Counterfactuals and Essentialism
Lippiatt, Ian, 12 1900
Modern logic has undergone many developments since the Second World War. Two of the most interesting are Kripke's possible-worlds semantics and Lewis's system of counterfactuals: the first was developed by Saul Kripke in the 1960s and the second by David Lewis in the 1970s. In some sense, Lewis's system of counterfactuals, or counterfactual semantics (CFS), is built on top of the architecture that Kripke created with his possible-worlds semantics (PWS). But what is the Kripkean possible-worlds semantics itself built on? The answer, it seems, is a very finely tuned ontology founded on the notion of possible worlds. This thesis attempts to do the following. First, to draw a distinction between conditionals on the one hand and counterfactuals on the other, and to survey some of the historical literature on counterfactuals and their application in fields such as the philosophy of science and computer science. Second, to recapitulate Lewis's system of counterfactual semantics as developed primarily in his book Counterfactuals. Finally, to explore the metaphysical foundations of the possible-worlds account argued for by David Lewis in his conception of modal realism.
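Lewis's truth condition can be sketched computationally: "if A were the case, C would be the case" is true at the actual world when the A-worlds most similar to it all satisfy C. A toy model with invented worlds and similarity ranking (this sketch assumes a closest set of A-worlds exists, which Lewis's full system does not require):

```python
# Toy Lewis-style evaluation at the actual world w0. Worlds, the
# propositions A and C, and the similarity ranking are all invented.
worlds = {"w0", "w1", "w2", "w3"}
distance = {"w0": 0, "w1": 1, "w2": 1, "w3": 2}  # similarity to w0

A = {"w1", "w2", "w3"}   # worlds where the antecedent holds
C = {"w1", "w2"}         # worlds where the consequent holds

def counterfactual(antecedent, consequent):
    """A □→ C at w0: every closest antecedent-world is a consequent-world."""
    a_worlds = [w for w in worlds if w in antecedent]
    if not a_worlds:
        return True  # vacuously true: the antecedent is impossible
    closest = min(distance[w] for w in a_worlds)
    return all(w in consequent
               for w in a_worlds if distance[w] == closest)

print(counterfactual(A, C))  # True: nearest A-worlds w1, w2 satisfy C
```

Note how this differs from the material conditional: w3 is an A-world outside C, but it does not falsify the counterfactual because it is not among the most similar A-worlds.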
|