231

A search for massive resonances decaying to top quark pairs and jet trigger performance studies with the ATLAS detector at the Large Hadron Collider

Fajardo, Luz Stella Gomez 17 July 2014 (has links)
This thesis presents a search for new particles that decay into top-quark pairs (tt̄). The analysis is performed with the ATLAS experiment at the LHC, using an integrated luminosity of 2.05 fb⁻¹ of proton–proton collision data collected at a center-of-mass energy of √s = 7 TeV. The lepton-plus-jets final state is used in the tt̄ → WbWb decay, where one W boson decays leptonically and the other hadronically. The tt̄ system is reconstructed using both resolved and boosted topologies of the top-quark decay. For the first time, correlations between the two search channels are exploited by creating a third channel from the events selected by both analyses, which improves the sensitivity to new physics phenomena. Upper limits at the 95% confidence level are derived on the production cross-section times branching ratio for narrow and wide massive states, extracted by combining the two approaches to the tt̄ reconstruction. For a narrow Z′ resonance, the observed (expected) upper limits range from 4.85 (4.81) pb for a mass of 0.6 TeV to 0.21 (0.13) pb for a mass of 2 TeV; a narrow leptophobic topcolor Z′ resonance with a mass below 1.3 TeV is excluded. Observed (expected) limits are also derived for a broad color-octet resonance, varying between 2.52 (2.59) pb and 0.37 (0.27) pb for masses of 0.7 TeV and 2 TeV, respectively; a wide Kaluza-Klein gluon with a mass below 1.65 TeV is excluded. A further aspect of this thesis is a set of performance studies of the level-1 jet trigger. Trigger efficiencies were measured using data collected by the ATLAS detector in 2010 at √s = 7 TeV. The turn-on curves obtained for a variety of jet triggers showed good agreement between data and simulation in the plateau region, and the efficiency results were used in the first stage of analyses for multi-jet cross-section measurements.
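For the jet-trigger studies summarized above, the turn-on behaviour of an efficiency curve is commonly described with an error-function fit; the parameterization below is a standard illustrative choice, not necessarily the exact functional form used in the thesis (μ, σ and ε_plateau are assumed fit parameters).

```latex
% Per-trigger efficiency as a function of the offline jet transverse
% momentum p_T: mu sets the turn-on position, sigma its width, and
% epsilon_plateau the efficiency reached in the plateau region.
\varepsilon(p_T) \;=\; \frac{\varepsilon_{\text{plateau}}}{2}
\left[\, 1 + \operatorname{erf}\!\left(\frac{p_T - \mu}{\sqrt{2}\,\sigma}\right) \right]
```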
232

Jet activity in top-quark events at √s = 13 TeV using 3.2 fb-1 data collected by the ATLAS detector

Eckardt, Christoph 27 March 2020 (has links)
This thesis presents measurements of the normalised differential cross-sections of top-quark pair production in association with jets, using 3.2 fb⁻¹ of proton-proton collision data at a centre-of-mass energy of 13 TeV recorded by the ATLAS experiment at the LHC. Jets are selected from top-quark events defined by an opposite-charge electron-muon pair and two b-tagged jets in the final state. The cross-sections are measured as functions of several observables that are sensitive to additional jets: the jet multiplicity, the transverse momentum of the additional jets, the transverse-momentum sum of all objects in the event, and spatial correlations between the two highest-momentum additional jets. The data are corrected to obtain particle-level fiducial cross-sections. The resulting measurements are compared to several predictions, allowing detailed studies of Monte Carlo QCD modelling.
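The normalised differential cross-sections referred to above are conventionally built from unfolded particle-level yields; the expression below is a generic definition given for orientation, not the exact prescription of the thesis (N_i, L and ΔX_i denote the unfolded yield, the integrated luminosity and the bin width).

```latex
% Normalised fiducial differential cross-section in bins of an
% observable X (e.g. jet multiplicity or additional-jet p_T):
% dividing by sigma_fid cancels the overall normalisation uncertainty.
\frac{1}{\sigma_{\text{fid}}}\,\frac{\mathrm{d}\sigma_i}{\mathrm{d}X}
\;=\;
\frac{1}{\sigma_{\text{fid}}}\,\frac{N_i^{\text{unfolded}}}{\mathcal{L}\,\Delta X_i},
\qquad
\sigma_{\text{fid}} \;=\; \sum_i \frac{N_i^{\text{unfolded}}}{\mathcal{L}}
```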
233

Search for heavy Higgs bosons A/H decaying to a top-quark pair in pp collisions at √s = 8 TeV with the ATLAS detector

Stănescu-Bellu, Mădălina 30 April 2021 (has links)
In this thesis a search is presented for heavy neutral pseudoscalar A and scalar H Higgs bosons, produced in gg fusion and decaying into a top-antitop quark pair. The search is conducted on the full proton-proton collision dataset recorded by the ATLAS detector at the Large Hadron Collider at a centre-of-mass energy of 8 TeV, corresponding to an integrated luminosity of 20.3 fb⁻¹. The signal process and the main background from top-quark pair production via the strong gg-fusion process interfere heavily, distorting the signal shape from a pure Breit-Wigner resonance peak into a peak-dip structure. This analysis is the first at the LHC that fully takes into account the interference between the signal and the background processes. The search relies on the statistical analysis of the top-quark pair invariant mass spectrum, which is reconstructed in signal candidate events with a high-transverse-momentum electron or muon, large missing transverse energy from the undetected neutrino, and at least four jets. No significant deviation from the expected Standard Model background is observed in data. Exclusion limits are derived in the context of the type-II Two-Higgs-Doublet Model for Higgs boson masses of 500 and 750 GeV in the low tan β parameter region, where tan β is the ratio of the vacuum expectation values of the two Higgs doublet fields. These parameter regions have been largely unexplored by searches in any final state.
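The peak-dip distortion described above comes from the interference term in the squared amplitude; the decomposition below is a schematic illustration (A_B and A_S denote the gg → tt̄ background and signal amplitudes, symbols chosen here for exposition).

```latex
% The cross term changes sign across the resonance, so the pure
% Breit-Wigner peak of |A_S|^2 shows up as a peak-dip structure in
% the reconstructed m_ttbar spectrum.
\bigl|\mathcal{A}_B + \mathcal{A}_S\bigr|^{2}
\;=\;
\bigl|\mathcal{A}_B\bigr|^{2}
\;+\;
2\,\mathrm{Re}\!\left(\mathcal{A}_B^{*}\,\mathcal{A}_S\right)
\;+\;
\bigl|\mathcal{A}_S\bigr|^{2}
```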
234

Calibration of the b-jet identification algorithms and measurement of the differential top-antitop pair production cross-section as a function of the mass and rapidity of the top-antitop system in p-p collisions at a centre-of-mass energy of 7 TeV with the ATLAS experiment at the LHC.

Tannoury, N. 09 October 2012 (has links) (PDF)
ATLAS, LHC, b quark, top quark, b-jet tagging, calibration of b-tagging algorithms, top-antitop quarks, cross-section, differential top-antitop pair cross-section, top mass, top rapidity, new physics.
235

Challenges encountered by women who requested termination of pregnancy services in the North West Province of South Africa

Mokgethi, Nomathemba Emily Blaai 08 1900 (has links)
In 1996 the South African government legalised termination of pregnancy (TOP) services, allowing women to choose to terminate unplanned pregnancies at designated facilities. Although TOP services are available, pregnant women continue to use illegal abortion services, with potentially life-threatening consequences. The purpose of this study was to identify challenges encountered by women requesting TOP services and to make recommendations for improved policies and practices, enabling more women in the North West Province (NWP) to access TOP services. This was a non-experimental, exploratory, descriptive and quantitative study. Structured interviews were conducted with 150 women who had used TOP services in phase 1, with 50 women who were unable to access TOP services in phase 2, and with 20 professional nurses providing TOP services in the NWP in phase 3. In phase 1, 96.0% (n=144) of the women needed transport to access TOP services, and 73.2% (n=109) indicated that nurses put women’s names on waiting lists, posing barriers to such access in the NWP. In phase 2, 92.0% (n=46) of these respondents had reportedly requested TOPs for the first time, but 89.0% (n=44) could not access TOP services. In phase 3, only 14 out of 19 designated facilities in the NWP, and only 20 nurses, provided TOP services during the study period. Of the 20 interviewed nurses, 74.0% (n=14) regarded the Choice on Termination of Pregnancy Act, Act 92 of 1996 (CTOP Act), as unclear and in need of revision. These professional nurses provided TOP services in the NWP by choice. Unless more facilities and more nurses can provide TOP services to the women of the NWP, these services will remain inaccessible, necessitating the continued utilisation of illegal abortion services in spite of the CTOP Act’s prescriptions. It is also recommended that management provide sufficient support and training opportunities for professional nurses working in TOP services in the NWP. / Health Studies / (D. Litt. et Phil. (Health Studies))
236

Data-centric determination of association rules in parallel database architectures

Legler, Thomas 15 August 2009 (has links) (PDF)
This thesis addresses the everyday usability of modern mass-data processing, in particular the problem of association-rule analysis. Available data volumes are growing rapidly, but their analysis is difficult for inexperienced users, so companies forgo information that is in principle available. Association rules reveal dependencies between the elements of a dataset, for example between products sold together, and can be annotated with interestingness measures that help the user recognise important relationships. Approaches are presented that make the analysis easier for the user, concerning both the robust operation of the methods and the simple interpretation of the resulting rules; the presented algorithms adapt themselves to the data being processed, which distinguishes them from other methods. Association-rule mining requires the extraction of frequent itemsets. Ways are shown to adapt existing solutions to the characteristics of modern systems, and methods for computing the N most frequent itemsets are described which, unlike known approaches, are easy to configure. Modern systems also often compute in a distributed fashion: such clusters can process large data volumes in parallel but require the merging of local results, and for distributed top-N frequent-itemset extraction on realistic partitionings, approaches with different properties are presented. Association rules are then formed from the frequent itemsets, and their presentation should likewise be easy to handle. Many measures have been proposed in the literature; depending on the requirements, each corresponds to a subjective assessment, which is not necessarily that of the user. It is therefore investigated how several interestingness measures can be combined into a global measure, which finds rules that appear important under multiple criteria and lets the user narrow down the search goal. A second approach groups rules based on the frequencies of the rule elements, which form the basis of interestingness measures; the rules of such a group are therefore similar with respect to many interestingness measures and can be evaluated together, reducing the user's manual effort. This thesis shows ways to extend association-rule mining to a broad user base and to reach new users, simplifying it to the point where it can be used as an easy-to-use data-analysis tool rather than a specialist application. / The importance of data mining is widely acknowledged today. Mining for association rules and frequent patterns is a central activity in data mining. Three main strategies are available for such mining: APRIORI, FP-tree-based approaches like FP-GROWTH, and algorithms based on vertical data structures and depth-first mining strategies like ECLAT and CHARM. Unfortunately, most of these algorithms are only moderately suitable for many “real-world” scenarios, because their usability and the special characteristics of the data are two aspects of practical association rule mining that require further work. All mining strategies for frequent patterns use a parameter called minimum support to define a minimum occurrence frequency for searched patterns. This parameter cuts down the number of patterns searched to improve the relevance of the results.
In complex business scenarios, it can be difficult and expensive to define a suitable value for the minimum support because it depends strongly on the particular datasets. Users are often unable to set this parameter for unknown datasets, and unsuitable minimum-support values can extract millions of frequent patterns and generate enormous runtimes. For this reason, it is not feasible to permit ad-hoc data mining by unskilled users. Such users do not have the knowledge and time to define suitable parameters by trial-and-error procedures. Discussions with users of SAP software have revealed great interest in the results of association-rule mining techniques, but most of these users are unable or unwilling to set very technical parameters. Given such user constraints, several studies have addressed the problem of replacing the minimum-support parameter with more intuitive top-n strategies. We have developed an adaptive mining algorithm to give untrained SAP users a tool to analyze their data easily without the need for elaborate data preparation and parameter determination. Previously implemented approaches to distributed frequent-pattern mining were expensive and time-consuming tasks for specialists. In contrast, we propose a method to accelerate and simplify the mining process by using top-n strategies and relaxing some requirements on the results, such as completeness. Unlike such data approximation techniques as sampling, our algorithm always returns exact frequency counts. The only drawback is that the result set may fail to include some of the patterns up to a specific frequency threshold. Another aspect of real-world datasets is the fact that they are often partitioned for shared-nothing architectures, following business-specific parameters like location, fiscal year, or branch office. Users may also want to conduct mining operations spanning data from different partners, even if the local data from the respective partners cannot be integrated at a single location for data security reasons or due to their large volume. Almost every data mining solution is constrained by the need to hide complexity. As far as possible, the solution should offer a simple user interface that hides technical aspects like data distribution and data preparation. Given that BW Accelerator users have such simplicity and distribution requirements, we have developed an adaptive mining algorithm to give unskilled users a tool to analyze their data easily, without the need for complex data preparation or consolidation. For example, Business Intelligence scenarios often partition large data volumes by fiscal year to enable efficient optimizations for the data used in actual workloads. For most mining queries, more than one data partition is of interest, and therefore, distribution handling that leaves the data unaffected is necessary. The algorithms presented in this work have been developed to work with data stored in SAP BW. A salient feature of SAP BW Accelerator is that it is implemented as a distributed landscape that sits on top of a large number of shared-nothing blade servers. Its main task is to execute OLAP queries that require fast aggregation of many millions of rows of data. Therefore, the distribution of data over the dedicated storage is optimized for such workloads. Data mining scenarios use the same data from storage, but reporting takes precedence over data mining, and hence, the data cannot be redistributed without massive costs.
Distribution by special data semantics or user-defined selections can produce many partitions and very different partition sizes. The handling of such real-world distributions for frequent-pattern mining is an important task, but it conflicts with the requirement of balanced partitions.
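To make the distributed top-N idea from this abstract concrete, the sketch below counts itemsets per partition and merges the exact local counts before keeping the N most frequent ones. It is a minimal illustration with hypothetical data and a simple, communication-heavy merge, not the adaptive SAP BW algorithm developed in the thesis.

```python
from collections import Counter
from itertools import combinations

def count_itemsets(transactions, max_size=2):
    """Count all itemsets up to max_size in one data partition."""
    counts = Counter()
    for tx in transactions:
        items = sorted(set(tx))
        for size in range(1, max_size + 1):
            for combo in combinations(items, size):
                counts[combo] += 1
    return counts

def top_n_distributed(partitions, n=3, max_size=2):
    """Merge exact local counts from every partition, then keep the N
    most frequent itemsets. The thesis discusses cheaper merge
    strategies that relax completeness instead of shipping all counts."""
    total = Counter()
    for part in partitions:
        total.update(count_itemsets(part, max_size))
    return total.most_common(n)

if __name__ == "__main__":
    # Hypothetical market-basket partitions (e.g. split by branch office).
    partition_a = [["bread", "butter"], ["bread", "milk", "butter"]]
    partition_b = [["milk", "bread"], ["butter", "milk"]]
    for itemset, freq in top_n_distributed([partition_a, partition_b], n=5):
        print(itemset, freq)
```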
238

Search for a vector-like quark T' decaying into top+Higgs in single production mode in full hadronic final state using CMS data collected at 8 TeV

Ruiz Alvarez, José David 21 October 2015 (has links)
During 2012, the Large Hadron Collider (LHC) delivered proton-proton collisions at a center-of-mass energy of 8 TeV to the ATLAS and CMS experiments. These two experiments were designed to discover the Higgs boson and to search for new particles predicted by several theoretical models, such as supersymmetry. The Higgs boson was discovered by the ATLAS and CMS experiments on 4 July 2012, starting a new era of discoveries in the particle-physics domain. With the confirmation of the existence of the Higgs boson, searches for new physics involving this boson are of major interest. In particular, the data can be used to look for new massive particles that decay into a Higgs boson accompanied by other particles of the standard model. One expected signature is a Higgs boson produced with a top quark, the two heaviest particles in the standard model. The standard model predicts the cross section of top-Higgs production, so any enhancement of their associated production would be a clear signature of physics beyond the standard model.
In addition, the existence of physics beyond the standard model could also be reflected in resonances that decay into a top quark and a Higgs boson. In the first part of my work I describe the theoretical and experimental foundations of the standard model, as well as the experimental apparatus. In the same theoretical chapter, I also discuss the formulation of an extension of the standard model and describe a feasibility study for a search for one of the particles predicted by such a model. The second part presents the search for a top partner, T', within the CMS experiment. This top partner is a new particle very similar to the standard model top quark, but much heavier, that can decay into a top quark and a Higgs boson. The analysis looks for this particle in the full hadronic final state, where the Higgs boson decays into two b quarks and the top quark decays into three standard model quarks, a b quark and two light quarks. In this channel, I reconstruct its mass from the identification of all its decay products. As a result of the analysis, I present the limits on the T' production cross section derived from the number of observed events in this specific signature.
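The mass reconstruction mentioned above amounts to forming the invariant mass of the jets assigned to the Higgs and top decay products; the expression below is the generic definition, written out for orientation rather than quoted from the thesis.

```latex
% Invariant mass of the T' candidate from the four-momenta (E_j, p_j)
% of the jets assigned to H -> b bbar and t -> b q qbar'.
m_{T'} \;=\;
\sqrt{\Bigl(\textstyle\sum_{j} E_j\Bigr)^{2}
      \;-\; \Bigl\|\textstyle\sum_{j} \vec{p}_{j}\Bigr\|^{2}}
```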
240

Essays on private equity leadership composition, risk and performance

Bekyol, Yilmaz 29 July 2024 (has links)
The private equity industry has experienced a decade marked by substantial growth. However, as the investment landscape for capital providers has become more complex, the leadership teams of private equity firms play an increasingly crucial role in navigating significant challenges. Focused on two themes, this dissertation explores the backgrounds of top management teams (TMTs) in private equity firms and their correlation with fund performance, as well as the backgrounds of deal lead partners and their risk assessment of leveraged buyout (LBO) investments. The first essay investigates TMT diversity, emphasizing its multi-dimensional connection with fund performance; the study differentiates between socio-demographic and occupational diversity, uncovering varying effects on fund outcomes. The second essay constructs a diversity index based on a comprehensive methodology to maximize the correlation between TMT diversity and private equity fund performance. The third essay explores the risk profiles of private equity partners in LBO investment decisions, establishing a link between socio-demographic backgrounds and distinct risk-assessment archetypes. This dissertation contributes to the literature at the intersection of private equity and TMT research, providing insights for scholars and practitioners alike.
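Diversity indices of the kind described in the second essay are often operationalized with a Blau (Herfindahl-type) index per attribute and a weighted combination across attributes; the sketch below is purely illustrative — the choice of the Blau index, the attribute names, and the weights are assumptions, not the dissertation's actual methodology.

```python
from collections import Counter

def blau_index(categories):
    """Blau index 1 - sum(p_k^2): 0 for a homogeneous team,
    approaching 1 as members spread over more categories."""
    n = len(categories)
    if n == 0:
        return 0.0
    counts = Counter(categories)
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

def tmt_diversity_score(team, weights):
    """Weighted combination of per-attribute Blau indices for one
    top management team (attributes and weights are hypothetical)."""
    score = 0.0
    for attribute, weight in weights.items():
        values = [member[attribute] for member in team]
        score += weight * blau_index(values)
    return score

if __name__ == "__main__":
    # Hypothetical TMT with socio-demographic and occupational attributes.
    team = [
        {"gender": "f", "nationality": "DE", "background": "banking"},
        {"gender": "m", "nationality": "US", "background": "consulting"},
        {"gender": "m", "nationality": "DE", "background": "banking"},
    ]
    weights = {"gender": 0.4, "nationality": 0.3, "background": 0.3}
    print(round(tmt_diversity_score(team, weights), 3))
```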
