1

Statistical data compression by optimal segmentation. Theory, algorithms and experimental results.

Steiner, Gottfried 09 1900 (has links) (PDF)
The work deals with statistical data compression, or data reduction, by a general class of classification methods. The compression results in a representation of the data set by a partition or by some typical points (called prototypes). The optimization problems are related to minimum-variance partitions and principal-point problems. A fixpoint method and an adaptive approach are applied to solve these problems. The work presents the theoretical background of the optimization problems and lists pseudo-code for the numerical solution of the data compression. The main part of the work concentrates on practical questions of carrying out a data compression: determining a suitable number of representing points, choosing an objective function, establishing an adjacency structure, and improving the fixpoint algorithm. The performance of the proposed methods and algorithms is compared and evaluated experimentally, and numerous examples deepen the understanding of the applied methods. (author's abstract)
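The minimum-variance partition and principal-point problems mentioned in this abstract are commonly solved by a Lloyd-type fixpoint iteration, alternating assignment and centroid steps until the partition stops changing. The thesis' own pseudo-code is not reproduced in this listing; the sketch below is only an illustrative reconstruction of the general idea, with invented function and variable names.

```python
import numpy as np

def fixpoint_prototypes(data, k, max_iter=100, seed=0):
    """Lloyd-type fixpoint iteration for a minimum-variance partition:
    alternate between assigning each point to its nearest prototype and
    moving each prototype to the centroid of its cell, until the
    partition stops changing (a fixpoint of the update map)."""
    rng = np.random.default_rng(seed)
    protos = data[rng.choice(len(data), size=k, replace=False)].astype(float)
    labels = None
    for _ in range(max_iter):
        # Assignment step: nearest prototype per point
        dists = np.linalg.norm(data[:, None, :] - protos[None, :, :], axis=2)
        new_labels = dists.argmin(axis=1)
        if labels is not None and np.array_equal(labels, new_labels):
            break  # fixpoint reached
        labels = new_labels
        # Update step: each prototype becomes the mean of its cell
        for j in range(k):
            cell = data[labels == j]
            if len(cell):
                protos[j] = cell.mean(axis=0)
    return protos, labels

# Two well-separated toy clusters
data = np.array([[0.0, 0.0], [0.0, 1.0], [10.0, 10.0], [10.0, 11.0]])
protos, labels = fixpoint_prototypes(data, k=2)
```

Each pass either lowers the within-cell variance or leaves the partition unchanged, so the iteration terminates; the adaptive (online) variant mentioned in the abstract would instead update one prototype per presented data point.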
2

Internal cooling for HP turbine blades

Pearce, Robert January 2016 (has links)
Modern gas turbine engines run at extremely high temperatures which require the high pressure turbine blades to be extensively cooled in order to reach life requirements. This must be done using the minimum amount of coolant in order to reduce the negative impacts on the cycle efficiency. In the design process the cooling configuration and stress distribution must be carefully considered before verification of the design is conducted. Improvements to all three of these blade design areas are presented in this thesis which investigates internal cooling systems in the form of ribbed, radial passages and leading edge impingement systems. The effect of rotation on the heat transfer distribution in ribbed radial passages is investigated. An engine representative triple-pass serpentine passage, typical of a gas turbine mid-chord HP blade passage, is simulated using common industrial RANS CFD methodology with the results compared to those from the RHTR, a rotating experimental facility. The simulations are found to perform well under stationary conditions with the rotational cases proving more challenging. Further study and simulations of radial passages are undertaken in order to understand the salient flow and heat transfer features found, namely the inlet velocity profile and rib orientation relative to the mainstream flow. A consistent rib direction gives improved heat transfer characteristics whilst careful design of inlet conditions could give an optimised heat transfer distribution. The effect of rotation on the heat transfer distribution in leading edge impingement systems is investigated. As for the radial passages, RANS CFD simulations are compared and validated against experimental data from a rotating heat transfer rig. The simulations provide accurate average heat transfer levels under stationary and rotating conditions. The full target surface heat transfer in an engine realistic leading edge impingement system is investigated. 
Experimental data is compared to RANS CFD simulations. Experimental results are in line with previous studies and the simulations provide reasonable heat transfer predictions. A new method of combined thermal and mechanical analysis is presented and validated for a leading edge impingement system. Conjugate CFD simulations are used to provide a metal temperature distribution for a mechanical analysis. The effect of changes to the geometry and temperature profile on stress levels are studied and methods to improve blade stress levels are presented. The thermal FEA model is used to quantify the effect of HTC alterations on different surfaces within a leading edge impingement system, in terms of both temperature and stress distributions. These are then used to provide improved target HTC distributions in order to increase blade life. A new method using Gaussian process regression for thermal matching is presented and validated for a leading edge impingement case. A simplified model is matched to a full conjugate CFD solution to test the method's quality and reliability. It is then applied to two real engine blades and matched to data from thermal paint tests. The matches obtained are very close, well within experimental accuracy levels, and offer consistency and speed improvements over current methodologies.
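The Gaussian process regression used for thermal matching in this thesis is, at its core, kernel-based interpolation of a response surface. The actual kernel, inputs, and matching targets are not given in the abstract; the following is a minimal illustrative sketch of GP mean prediction with an RBF kernel, with all names and data invented.

```python
import numpy as np

def rbf(a, b, length=1.0, var=1.0):
    """Squared-exponential covariance between two 1-D input sets."""
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

def gp_predict(x_train, y_train, x_test, noise=1e-4):
    """GP posterior mean: k(x*, X) @ K(X, X)^-1 @ y."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    alpha = np.linalg.solve(K, y_train)
    return rbf(x_test, x_train) @ alpha

# Toy response surface: the GP (nearly) interpolates the observations
x_train = np.array([0.0, 1.0, 2.0, 3.0])
y_train = np.sin(x_train)
pred = gp_predict(x_train, y_train, np.array([1.0, 1.5]))
```

A matching procedure would presumably treat model parameters (e.g. HTC scaling factors) as the inputs and the mismatch against thermal-paint temperatures as the response, then search the cheap GP surrogate instead of re-running conjugate CFD.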
3

Uma revisão da análise de experimentos unifatoriais com tratamentos de natureza quantitativa: comparações múltiplas ou análise de regressão? / A review of the analysis of unifactorial experiments with quantitative treatments: Multiple Comparisons or Regression Analysis?

Rodrigues, Josiane 21 June 2011 (has links)
This work reflects on the use of multiple comparison tests and of regression analysis in the study of one-factor experiments whose treatments are levels of a quantitative factor, comparing the results and information provided by each type of analysis and examining their respective advantages and limitations. After a literature review of regression analysis and of some mean-comparison tests, a survey was conducted of articles whose main aim was to examine papers published in journals and periodicals that used some mean-comparison procedure, in order to assess whether those tests were appropriate for the statistical analyses performed. The survey showed that a significant number of researchers apply multiple comparison procedures in the statistical analysis of one-factor experiments in which the treatments are levels of a quantitative factor, which some consider an inappropriate procedure. Data from one-factor experiments with treatments of this kind were therefore also analysed, submitted both to a regression analysis and to a multiple comparison procedure, with the aim of identifying the advantages and limitations of each procedure for the experiment in question. This comparison made it clear that using multiple comparison procedures to analyse one-factor experiments involving quantitative treatments can result in a loss of information and reduced efficiency of the results when more appropriate procedures, in this case regression analysis, are available for data of this nature.
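The abstract's central contrast — a fitted response curve versus comparisons among the tested means only — can be made concrete with invented toy data (none of it from the thesis): the regression uses the quantitative ordering of the levels and can interpolate, while a multiple-comparison procedure only ranks the observed means.

```python
import numpy as np

# Invented one-factor experiment: four quantitative dose levels,
# three replicates each (illustrative values only).
doses = np.repeat([0.0, 10.0, 20.0, 30.0], 3)
yields = np.array([4.1, 3.9, 4.0,   5.8, 6.1, 6.0,
                   7.2, 6.9, 7.1,   7.4, 7.3, 7.5])

# Regression analysis: a quadratic response curve exploits the
# quantitative nature of the factor and allows interpolation
# between the tested levels.
b2, b1, b0 = np.polyfit(doses, yields, deg=2)

def predict(d):
    return b2 * d**2 + b1 * d + b0

# Multiple comparisons only rank the four tested means; the shape
# of the dose-response relationship is discarded.
means = {float(d): yields[doses == d].mean() for d in np.unique(doses)}
```

Here the regression can answer "what yield at a dose of 15?", a question the comparison of means cannot address at all.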
5

Rissbreitenentwicklung unter Langzeitbelastung anhand lokaler Verbundbeziehungen

Koschemann, Marc 10 November 2022 (has links)
The cracking behaviour of reinforced concrete is governed largely by the bond between reinforcement and concrete. Current investigations address the development of crack widths and the distribution of bond stresses under long-term loading. Different concrete grades (fcm ≈ 30–70 MPa), three different specimen types, and fibre-optic sensors are used. This article presents the experimental and measurement setup as well as the results of the first test series in comparison with existing bond models. In addition, the influences of the specimen type and the bond length are shown, along with the possibilities of capturing local bond behaviour with fibre-optic sensors.
6

PREVENTING DATA POISONING ATTACKS IN FEDERATED MACHINE LEARNING BY AN ENCRYPTED VERIFICATION KEY

Mahdee, Jodayree 06 1900 (has links)
Federated learning has gained attention recently for its ability to protect data privacy and distribute computing loads [1]. It overcomes the limitations of traditional machine learning algorithms by allowing computers to train on remote data inputs and build models while keeping participant privacy intact. Traditional machine learning offered a solution by enabling computers to learn patterns and make decisions from data without explicit programming. It opened up new possibilities for automating tasks, recognizing patterns, and making predictions. With the exponential growth of data and advances in computational power, machine learning has become a powerful tool in various domains, driving innovations in fields such as image recognition, natural language processing, autonomous vehicles, and personalized recommendations. In traditional machine learning, data is usually transferred to a central server, raising concerns about privacy and security. Centralizing data exposes sensitive information, making it vulnerable to breaches or unauthorized access. Centralized machine learning also assumes that all data is available at a central location, which is not always practical or feasible: some data may be distributed across different locations, owned by different entities, or subject to legal or privacy restrictions. Moreover, training a global model in traditional machine learning involves frequent communication between the central server and participating devices. This communication overhead can be substantial, particularly when dealing with large-scale datasets or resource-constrained devices. / Recent studies have uncovered security issues with most federated learning models. One common false assumption in the federated learning model is that participants are not attackers and would not use polluted data.
This vulnerability enables attackers to train their models using polluted data and then send the polluted updates to the training server for aggregation, potentially poisoning the overall model. In such a setting, it is challenging for an edge server to thoroughly inspect the data used for model training or to supervise any edge device. This study evaluates the vulnerabilities present in federated learning and explores various types of attacks that can occur. This paper presents a robust prevention scheme to address these vulnerabilities. The proposed prevention scheme enables federated learning servers to monitor participants actively in real time and identify infected individuals by introducing an encrypted verification scheme. The paper outlines the protocol design of this prevention scheme and presents experimental results that demonstrate its effectiveness. / Thesis / Doctor of Philosophy (PhD) / Federated learning models face significant security challenges and can be vulnerable to attacks. For instance, federated learning models assume participants are not attackers and will not manipulate the data. However, in reality, attackers can compromise the data of remote participants by inserting fake data or altering existing data, which can result in polluted training results being sent to the server. For instance, if the sample data is an animal image, attackers can modify it to contaminate the training data. This paper introduces a robust preventive approach to counter data pollution attacks in real time. It incorporates an encrypted verification scheme into the federated learning model, preventing poisoning attacks without the need for attack-specific detection programming. The main contribution of this paper is a detection and prevention mechanism that allows the training server to supervise real-time training and stop data modifications in each client's storage before and between training rounds.
The training server can identify real-time modifications and remove infected remote participants with this scheme.
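The abstract does not spell out the encrypted verification scheme, so the sketch below only illustrates the underlying idea with a keyed integrity tag (all names and data are invented): if the server issues a secret key and records a tag over each client's stored training data, recomputing the tag between rounds exposes any modification.

```python
import hashlib
import hmac
import json

def dataset_tag(key: bytes, dataset) -> str:
    """Keyed integrity tag over a client's stored training data.
    Without the server-issued key, an attacker cannot forge a tag
    that matches modified data."""
    blob = json.dumps(dataset, sort_keys=True).encode()
    return hmac.new(key, blob, hashlib.sha256).hexdigest()

key = b"server-issued-secret-key"          # hypothetical per-client key
clean = [[0.1, 0.2, 1], [0.4, 0.5, 0]]     # toy (features..., label) rows
tag_before_round = dataset_tag(key, clean)

poisoned = [[0.1, 0.2, 1], [0.4, 0.5, 1]]  # attacker flips a label
tampered = dataset_tag(key, poisoned) != tag_before_round
```

The actual protocol in the thesis presumably layers key distribution and real-time monitoring on top of such a primitive; this snippet only shows why tampering between rounds becomes detectable.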
7

Search for Charged Higgs Bosons with the ATLAS Detector at the LHC

Czodrowski, Patrick 23 August 2013 (has links) (PDF)
The discovery of a charged Higgs boson, H+, would be an unambiguous sign of physics beyond the Standard Model. This thesis describes the search for the H+ using proton-proton collisions recorded in 2011 with the ATLAS experiment at CERN's Large Hadron Collider, LHC. Within this work, a revised analysis of the charged Higgs boson search was carried out that applies a ratio method and thereby greatly improves the sensitivity of the traditional direct search approach. Light charged Higgs bosons, with a mass below that of the top quark, can be produced in top-quark decays. In contrast to heavy charged Higgs bosons, light ones are potentially observable with the first data of the experiment, owing to the large top-quark pair production cross section at the LHC. In most theories and scenarios, and over most of their phase space, light charged Higgs bosons decay predominantly via the H± → τ±ν channel. Consequently, both τ identification and τ misidentification play a special role in the search for charged Higgs bosons. A tag-and-probe method based on Z → ee events was developed specifically to measure the probability of electrons being misidentified as hadronically decaying τ leptons. These measurements were performed with the very first data. On the one hand, they yielded scale factors essential for all analyses that use the electron-veto algorithms of the τ identification; on the other hand, building on these results, a data-driven estimation method was developed and successfully implemented for the backgrounds of the charged Higgs boson search that arise from electrons misidentified as hadronically decaying τ leptons.
As part of this work, trigger studies were performed with the goal of ensuring the highest possible signal efficiencies. Novel trigger objects, based on a combination of a τ trigger and a missing-transverse-energy trigger, were designed, validated, and included in the trigger menu for the 2012 data-taking. A direct search for the charged Higgs boson was performed in three channels with a τ lepton in the final state, using the full 2011 dataset. Since no significant excess over the Standard Model predictions was observed in the data, upper limits were set on B(t → bH+). Finally, the analysis of the channel with a hadronically decaying τ lepton and a muon or electron in the final state of the tt̄ decay was repeated using the so-called ratio method. This method measures ratios of event yields instead of evaluating distributions of discriminating variables; consequently, most of the dominant systematic uncertainties cancel intrinsically. The data agree with the Standard Model predictions. With the ratio method, the upper limits were improved significantly compared to the direct search. The results of the ratio method were combined with those of the direct search that uses a hadronically decaying τ lepton and two jets in the final state of the tt̄ decay. In this way, upper limits on B(t → bH+) in the range 0.8%–3.4% were set for charged Higgs bosons with masses m_H+ between 90 GeV and 160 GeV.
Should the Minimal Supersymmetric Standard Model (MSSM) be realised in nature, the upper limits on B(t → bH+) obtained here have direct consequences for the identity of the Higgs-boson-like particle discovered at the LHC in 2012.
8

Search for heavy resonances decaying into the fully hadronic di-tau final state with the ATLAS detector

Morgenstern, Marcus Matthias 11 April 2014 (has links) (PDF)
The discovery of a heavy neutral particle would be a direct hint for new physics beyond the Standard Model. In this thesis, searches for new heavy neutral particles decaying into two tau leptons, which further decay into hadrons, are presented. They cover neutral Higgs bosons in the context of the minimal supersymmetric extension of the Standard Model (MSSM) as well as Z′ bosons, predicted by various theories with an extended gauge sector. Both analyses are based on the full 2012 proton-proton collision dataset taken by the ATLAS experiment at the Large Hadron Collider (LHC). The extended Higgs sector in the MSSM suggests additional heavy neutral Higgs bosons which decay into tau leptons about 10% of the time. Given that the dominant final state, φ → b¯b, suffers from tremendous QCD-initiated backgrounds, the decay into two tau leptons is the most promising final state in which to discover such new resonances. The fully hadronic final state is the dominant one, with a branching fraction of about 42%. It governs the sensitivity, in particular at high transverse momentum, where the QCD multijet background becomes small. Other theoretical extensions of the Standard Model, mainly driven by the concept of gauge unification, predict additional heavy particles arising from an extended underlying gauge group. Some of them further predict an enhanced coupling to fermions of the third generation. This motivates the search for Z′ bosons in the fully hadronic di-tau final state. One major challenge in physics analyses involving tau leptons is achieving outstanding performance of the trigger and identification algorithms, which must select real tau leptons with high efficiency while rejecting fake taus originating from quark- or gluon-initiated jets. In this work a new tau trigger concept based on multivariate classifiers was developed and became the default tau trigger algorithm in 2012 data-taking.
An updated tau identification technique based on the log-likelihood approach was provided for 2011 data-taking. Furthermore, a new framework was developed to perform the tuning of the tau identification algorithm and was exploited for the optimisation for 2012 data-taking. The search for new heavy neutral Higgs bosons in the context of the MSSM was performed on the full 2012 dataset, corresponding to an integrated luminosity of 19.5 fb−1 taken at a centre-of-mass energy of √s = 8 TeV. Updated event selection criteria and novel data-driven background estimation techniques were developed that increase the sensitivity of the analysis significantly. No deviations from the Standard Model prediction are observed, and thus 95% C.L. exclusion limits on the production cross section times branching ratio, σ(pp → φ) × BR(φ → ττ), are derived using the CLs method. The exclusion ranges from 13.0 pb at 150 GeV to 7.0 fb at 1 TeV for Higgs boson production in association with b-quarks, and from 23.6 pb at 150 GeV to 7.5 fb at 1 TeV for Higgs bosons produced via gluon-gluon fusion. The obtained exclusion limit on σ(pp → φ) × BR(φ → ττ) can be related to an exclusion of the MSSM parameter space in the MA-tan β plane. Various benchmark scenarios are considered. The "standard candle" is the mhmax scenario, for which tan β values between 13.3 and 55 can be excluded at 95% C.L. in the considered mass range. Updated benchmark scenarios designed to incorporate the recently discovered SM-like Higgs boson were suggested and analysed as well. In the mhmod+ (mhmod−) scenario, tan β values between 13.5 (13.3) and 55 (52) can be excluded. Finally, a search for heavy neutral resonances in the context of Z′ bosons was performed.
As in the search for new Higgs bosons, no deviation from the Standard Model prediction is observed, and hence exclusion limits on the production cross section times branching ratio, σ(pp → Z′) × BR(Z′ → ττ), and on the Z′ boson mass are derived exploiting the Bayesian approach. Z′ bosons with MZ′ < 1.9 TeV can be excluded at 95% credibility, and thus mark the strongest exclusion limit obtained in the di-tau final state by any collider experiment so far.
9

Synthèse de contrôleurs avancés pour les systèmes quasi-LPV appliqués au contrôle de moteurs automobiles / Advanced controller design for quasi-LPV systems applied to automotive engine control

Laurain, Thomas 04 December 2017 (has links)
This PhD in automatic control is part of the "Transport" research theme of the LAMIH. The objective is to improve the operation of gasoline engines, mainly by reducing fuel consumption and pollution. Given this ecological and economic challenge, and taking into account new emission standards and the short-term strategies of the industry (the Volkswagen scandal...), new controllers must be designed to govern the air intake and fuel injection inside the engine. Considering the highly nonlinear nature of the system, the Takagi-Sugeno representation and the theoretical background of the LAMIH are used. A first controller is designed to solve the idle-speed control problem. However, the complexity of the system makes this controller very costly from a computational point of view, so an alternative controller is designed for implementation in the engine's embedded computer. A second controller is obtained to maintain the air-fuel ratio at stoichiometric proportions in order to reduce pollution. Since this system is subject to a variable transport delay, a change of domain is performed to make the delay constant and to design a simple and efficient controller. Real-time experiments on the LAMIH engine test bench validate the presented methodology.
