101

Transport et manipulation d’électrons produits par interaction laser plasma sur la ligne COXINEL / Transport and manipulation of electrons produced by laser plasma interaction on COXINEL beam line

André, Thomas 18 December 2018 (has links)
Recent advances in laser plasma acceleration (LPA) techniques make it possible to generate strong accelerating gradients (GV·m⁻¹); however, the electron beams thus produced still present a large energy spread (%) and a large divergence (mrad). The COXINEL project (ERC Advanced Grant 350014, PI M.E. Couprie) aims at qualifying a laser plasma accelerator as a replacement for a conventional accelerator, with a view to a Free Electron Laser application. To achieve the required properties, the electron beam must be manipulated using a transport line. This line consists of a first triplet of variable-gradient permanent-magnet quadrupoles, which focuses the beam and allows the initial divergence to be controlled. An electromagnetic chicane then reduces the slice energy spread by lengthening the beam longitudinally. A restricted energy range can then be selected by inserting a slit inside the chicane. Finally, a quadruplet of electromagnetic quadrupoles provides the final focusing into an undulator. This thesis studies the transport of LPA-produced electron beams along this line. Different electron production regimes were used: ionization injection and a gas cell. Transport was controlled using a new method for aligning the line and compensating the drift of the initial electron pointing, by adjusting the beam position and dispersion independently at different locations along the line. A fine adjustment of the transported energy was carried out by tuning the quadrupole gradients. The produced beams were transported along the line and characterized in terms of transverse distribution, emittance, and energy. The experimental results were then successfully compared with numerical simulations. This work paves the way for the observation of undulator radiation, a preliminary step towards Free Electron Laser amplification.
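For readers unfamiliar with beam optics, the manipulation described above can be pictured with linear transfer matrices. The following is a minimal sketch under assumed focal lengths, drift distances, and beam parameters; it is generic accelerator optics, not the COXINEL lattice.

```python
import numpy as np

def drift(L):
    """2x2 transfer matrix for a field-free drift of length L (m)."""
    return np.array([[1.0, L], [0.0, 1.0]])

def thin_quad(f):
    """Thin-lens quadrupole of focal length f (m); f > 0 focuses in this plane."""
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

# Illustrative LPA-like initial beam: micron-scale source, mrad-level divergence.
sigma_x  = 1e-6    # rms size (m), assumed
sigma_xp = 2e-3    # rms divergence (rad), assumed
sigma = np.array([[sigma_x**2, 0.0], [0.0, sigma_xp**2]])  # beam (sigma) matrix

# Toy lattice: 5 cm drift to a focusing quadrupole, then 0.5 m to an observation point.
# Matrices compose right-to-left: the first element traversed is the rightmost factor.
M = drift(0.5) @ thin_quad(0.1) @ drift(0.05)

# The beam matrix transforms as M . sigma . M^T.
sigma_out = M @ sigma @ M.T
print(f"rms size at observation point: {np.sqrt(sigma_out[0, 0]):.2e} m")
```

The same sigma-matrix bookkeeping, repeated over the real triplet, chicane, and quadruplet, is what the beam-transport codes compared against the experiment perform.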
102

Corporate venturing activities of established companies

Kanbach, Dominik K. 10 January 2017 (has links)
This publication-based dissertation covers research on the corporate venturing activities of established companies over six chapters. The first chapter provides an introduction to corporate venturing and a summary of the four research papers comprising this dissertation. The second chapter is a structured literature review analyzing the heterogeneity inherent in corporate venturing activities. The characteristics that differentiate corporate venturing activities from each other are comprehensively identified, and a framework that integrates existing activities into six clusters is derived. The third chapter is a research paper that explores a recent corporate venturing activity, namely corporate accelerators, in depth. This empirical study, based on 13 case studies, identifies the objectives pursued with these programs and their design configurations, and derives four common types of corporate accelerators. The fourth chapter is a research paper that empirically analyzes entrepreneurial start-ups as knowledge sources for established companies in the context of corporate accelerators and incubators. The paper identifies knowledge need, knowledge forms, and knowledge exchange as knowledge elements in these programs. Based on the comparative analysis of 12 case studies, typical combinations of the knowledge elements are found. The fifth chapter is a teaching case study building on the decision of Media-Saturn-Holding, Europe's leading consumer electronics retailer, to develop the company's own corporate accelerator program, called SPACELAB. The sixth chapter summarizes the contributions of this dissertation for research and practice as well as its limitations and potential directions for further research.

Contents:
List of tables
Table of figures
Table of abbreviations
1 Introduction
1.1 Motivation and research gap
1.2 Summary of research papers
1.3 Presentation / publication information of research papers
1.4 References
2 Corporate venturing activities: a review of typologies and proposed framework
2.1 Introduction
2.2 Research method and descriptive analysis of reviewed studies
2.3 Dimensions of corporate venturing activities
2.4 Clustering of corporate venturing activities
2.5 Further research
2.6 Summary of findings and conclusion
2.7 List of appendices
2.8 Appendix
2.9 References
3 Corporate accelerators as recent form of start-up engagement: the what, the why, and the how
3.1 Introduction
3.2 Literature review
3.3 Analysis approach
3.4 Results of empirical analysis
3.5 Corporate accelerator typology: Four distinct types
3.6 Discussion and implications
3.7 Conclusion
3.8 References
4 Start-ups as knowledge sources for companies: an analysis of corporate accelerators and incubators
4.1 Introduction
4.2 Research approach
4.3 Knowledge elements in corporate accelerators and incubators
4.4 Typical combinations of knowledge elements
4.5 Discussion
4.6 Implications
4.7 Conclusions and limitations
4.8 List of appendices
4.9 Appendix
4.10 References
5 Media-Saturn-Holding GmbH - the SPACELAB accelerator: a teaching case study
5.1 Part 1: background and accelerator design options
5.2 Part 2: design choices made and program execution
5.3 Teaching note
5.4 List of appendices
5.5 Appendix
5.6 References
6 Contribution and further research
6.1 Contribution to research
6.2 Contribution to practice
6.3 Limitations and further research
6.4 References
103

Utvärdering av kalciumnitrat som bindetidsaccelerator / Evaluation of calcium nitrate as a setting-time accelerator

Rafiq, Ari, HamaAmin, Garmian January 2013 (has links)
The aim is to shorten the setting time of concrete, i.e. the time after casting during which the concrete surface can still be worked so that it becomes smooth. This is a major challenge for producers of ready-mix concrete in winter, because the colder the climate, the longer the setting time. The purpose of this laboratory study was to show how calcium nitrate works as a setting-time accelerator in concrete, and whether calcium nitrate affects the physical properties of the concrete. The following factors were studied to see how they affect the setting time in combination with the use of calcium nitrate: the initial temperature of the concrete, the type of superplasticizer in the concrete, the initial consistency of the concrete, and the storage climate of the concrete. A further goal was to find the right dosage for the product to be profitable to use in practice. All tests were carried out at the Sika AB laboratory. All data were carefully examined and processed in Excel to produce tables and diagrams. The investigations led to the following conclusions. The setting time can be shortened with calcium nitrate without risking the physical properties of the concrete. According to the setting-time diagrams, the 2.0% and 2.5% dosages gave the best results, i.e. the shortest setting times. Note that the +5 °C storage climate gave illogical results: the reference concrete without the accelerator gave the shortest setting time. The compressive strength is not affected by the accelerator, i.e. the product can be used without risking the load-bearing capacity of the concrete. The results showed that the initial consistency of the concrete strongly influences the setting time: the higher the consistency value, the longer the setting time. The initial temperature of the concrete also influences the setting time: the higher the concrete temperature, the shorter the setting time.
104

Coordinate conversion for the Hough transform

Eriksson, Edvin January 2021 (has links)
This thesis attempts to develop a conversion algorithm between local coordinates in the constituent detector modules and global coordinates encompassing the whole detector structure of a generic detector. The thesis is part of the preparatory work for studying the Hough transform as a means of track reconstruction in the level-1 hardware trigger of the upgraded trigger and data acquisition (TDAQ) system in the Phase-2 upgrade of the ATLAS detector at CERN. The upgrades are being made to withstand the much more extreme conditions of the High-Luminosity Large Hadron Collider (HL-LHC). Two algorithms have been developed and implemented in Python scripts to test their feasibility and to compare them against each other. The Rotation algorithm uses several rotations to correctly place the local coordinates in the global system. The second, the Shear algorithm, simplifies the process into two shears and one rotation, using the small-angle approximation. Both algorithms need to be extended to cover more parts of the detector to be considered complete. Despite having lower maximum precision, the second algorithm is considered the more promising attempt, since it is much less sensitive to the truncation error that results from working in an integer environment, which is a requirement for use in FPGAs.
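To illustrate why a shear-based formulation suits integer arithmetic, here is a minimal sketch of approximating a small rotation by two fixed-point shears. It shows the general idea only; the thesis's Shear algorithm (two shears plus one rotation) is not reproduced here, and the scale factor and coordinates are assumptions.

```python
import numpy as np

SCALE = 1 << 16  # fixed-point scale, assumed; FPGA logic works in integers

def rotate_float(x, y, theta):
    """Exact rotation, for reference."""
    c, s = np.cos(theta), np.sin(theta)
    return c * x - s * y, s * x + c * y

def rotate_shear_int(x, y, theta):
    """Small-angle rotation as two integer shears.

    shear_x(-theta) followed by shear_y(theta) gives the matrix
    [[1, -theta], [theta, 1 - theta**2]], which matches a true rotation
    to first order in theta and has determinant exactly 1, so the only
    error sources are the small-angle approximation and the truncation
    of the fixed-point divisions.
    """
    t = int(round(theta * SCALE))     # fixed-point angle
    x1 = x - (t * y) // SCALE         # shear along x
    y1 = y + (t * x1) // SCALE        # shear along y, using the sheared x
    return x1, y1

# Local module coordinates (integer units, assumed) and a small module tilt.
x, y, theta = 12000, 3400, 0.02
print(rotate_float(x, y, theta))      # (11929.6..., 3639.3...)
print(rotate_shear_int(x, y, theta))  # (11932, 3638)
```

The determinant-one property is consistent with the abstract's remark that the shear formulation degrades more gracefully under integer truncation than a direct rotation.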
105

Designing radiation protection for a linear accelerator : using Monte Carlo simulations / Framtagning av förslag på förstärkt strålskydd för en linjäraccelerator : med hjälp av Monte Carlo-simuleringar

Lindahl, Jonatan January 2019 (has links)
The Department of Radiation Sciences at Umeå University has obtained an old linear accelerator, intended for educational purposes. The goal of this thesis was to find proper reinforced radiation protection in an intended bunker (a room with thick concrete walls), to ensure that the radiation outside the bunker falls within acceptable levels. The main method was the use of Monte Carlo simulations. To properly simulate the accelerator, knowledge of the energy distribution of the emitted radiation was needed. For this, a novel method for spectrum determination was developed, using several depth-dose measurements, including off-axis ones; the method shows promising results in finding the spectrum when measurements outside the primary beam are included. The found energy spectrum was then used to simulate the accelerator in the intended bunker. The resulting dose distribution was visualized together with 3D CAD images of the bunker, to easily see at which locations outside the bunker the dose was high. An important finding was that some changes are required to ensure that the public does not receive too high doses of radiation in a public outdoor area located above the bunker; otherwise, the accelerator may only be run 1.8 hours per year. A workaround to this problem could be simply to plant a thorn bush covering the dangerous area of radius 3 m. After such a measure has been taken, which is assumed in the following results, the focus moves to the radiation that leaks into the accelerator's intended control room, which is located right outside the bunker's entrance door. The results show that the accelerator may only be run for a maximum of 6.1 or 3.3 hours per year (depending on the placement of the accelerator in the room) without a specific extra reinforced radiation protection consisting mainly of lead bricks. With the specific extra protection added, the accelerator may instead be run 44 or 54 hours per year, a distinct improvement. However, the dose rate in the control room is still quite high, 13.7 μGy/h or 11.2 μGy/h, compared with the average dose rate received by someone living in Sweden, which is 0.27 μGy/h. Therefore, further measures are recommended. This is, however, a worst-case scenario, since the leakage from the accelerator itself was simulated as having the same energy spectrum as the primary beam at 0.1% of the intensity, which is the maximum leakage dose according to the specifications for the accelerator. This is probably an overestimation of the intensity, and the energy spectrum of the leakage is probably lower in energy than the primary beam in at least some directions. Implementing more knowledge of the leakage spectrum in future work should therefore result in more allowed run hours for the accelerator.
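The allowed-run-time figures follow from a simple proportion: allowed hours per year equal an annual dose constraint divided by the simulated dose rate. The sketch below reproduces that arithmetic; the 600 μGy/year constraint is an inference (it reproduces the reported 44 and 54 hours from the reported dose rates), not a value stated in the abstract.

```python
def allowed_hours_per_year(annual_dose_limit_ugy, dose_rate_ugy_per_h):
    """Hours of beam time per year that keep the accumulated dose below
    the annual constraint (linear, no-safety-factor estimate)."""
    return annual_dose_limit_ugy / dose_rate_ugy_per_h

# Dose rates reported in the abstract for the shielded control room.
# 600 uGy/year is an inferred constraint, chosen because it reproduces
# the reported 44 and 54 allowed hours; it is not stated in the abstract.
for rate in (13.7, 11.2):  # microGy/h
    hours = allowed_hours_per_year(600.0, rate)
    print(f"{rate} uGy/h -> {hours:.0f} h/year allowed")
```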
106

Datenzentrierte Bestimmung von Assoziationsregeln in parallelen Datenbankarchitekturen / Data-centric determination of association rules in parallel database architectures

Legler, Thomas 15 August 2009 (has links) (PDF)
This thesis addresses the everyday usability of modern mass data processing, in particular the problem of association rule analysis. Existing data volumes are growing rapidly, but analyzing them is difficult for untrained users, so companies forgo information that is in principle available. Association rules reveal dependencies between the elements of a dataset, for example between products sold. These rules can be annotated with interestingness measures that enable the user to recognize important relationships. Approaches are presented that make data analysis easier for the user, concerning both the robustness of the methods and the simple evaluation of the rules. Unlike other methods, the presented algorithms adapt to the data being processed. Association rule mining requires the extraction of frequent itemsets. Ways are shown to adapt solution approaches to the characteristics of modern systems. As one approach, methods for computing the N most frequent itemsets are described, which, unlike known approaches, are easy to configure. Moreover, modern systems often compute in a distributed fashion. Such clusters can process large data volumes in parallel but require local results to be merged. For distributed top-N frequent-itemset extraction on realistic partitionings, approaches with different properties are presented. Association rules are then formed from the frequent itemsets, and preparing them for the user should likewise be simple. Many measures have been proposed in the literature; each corresponds to a subjective assessment, though not necessarily that of the user. It is therefore investigated how several interestingness measures can be combined into one global measure. This finds rules that appear important under several measures, and the user can narrow the search goal with these suggestions. A second approach groups rules by the frequencies of the rule elements, which form the basis of interestingness measures. The rules in such a group are therefore similar with respect to many interestingness measures and can be evaluated together, which reduces the user's manual effort. This thesis shows ways to extend association rule mining to a broad range of users and to reach new users. Association rule mining is simplified to the point that it can serve as an easy-to-use data analysis tool rather than a specialist application. / The importance of data mining is widely acknowledged today. Mining for association rules and frequent patterns is a central activity in data mining. Three main strategies are available for such mining: APRIORI, FP-tree-based approaches like FP-GROWTH, and algorithms based on vertical data structures and depth-first mining strategies like ECLAT and CHARM. Unfortunately, most of these algorithms are only moderately suitable for many "real-world" scenarios, because their usability and the special characteristics of the data are two aspects of practical association rule mining that require further work. All mining strategies for frequent patterns use a parameter called minimum support to define a minimum occurrence frequency for searched patterns. This parameter cuts down the number of patterns searched to improve the relevance of the results. In complex business scenarios, it can be difficult and expensive to define a suitable value for the minimum support because it depends strongly on the particular datasets. Users are often unable to set this parameter for unknown datasets, and unsuitable minimum-support values can extract millions of frequent patterns and generate enormous runtimes. For this reason, it is not feasible to permit ad-hoc data mining by unskilled users. Such users do not have the knowledge and time to define suitable parameters by trial-and-error procedures. Discussions with users of SAP software have revealed great interest in the results of association-rule mining techniques, but most of these users are unable or unwilling to set very technical parameters. Given such user constraints, several studies have addressed the problem of replacing the minimum-support parameter with more intuitive top-n strategies. We have developed an adaptive mining algorithm to give untrained SAP users a tool to analyze their data easily without the need for elaborate data preparation and parameter determination. Previously implemented approaches to distributed frequent-pattern mining were expensive and time-consuming tasks for specialists. In contrast, we propose a method to accelerate and simplify the mining process by using top-n strategies and relaxing some requirements on the results, such as completeness. Unlike data approximation techniques such as sampling, our algorithm always returns exact frequency counts. The only drawback is that the result set may fail to include some of the patterns up to a specific frequency threshold.

Another aspect of real-world datasets is the fact that they are often partitioned for shared-nothing architectures, following business-specific parameters like location, fiscal year, or branch office. Users may also want to conduct mining operations spanning data from different partners, even if the local data from the respective partners cannot be integrated at a single location for data security reasons or due to their large volume. Almost every data mining solution is constrained by the need to hide complexity. As far as possible, the solution should offer a simple user interface that hides technical aspects like data distribution and data preparation. Given that BW Accelerator users have such simplicity and distribution requirements, we have developed an adaptive mining algorithm to give unskilled users a tool to analyze their data easily, without the need for complex data preparation or consolidation. For example, Business Intelligence scenarios often partition large data volumes by fiscal year to enable efficient optimizations for the data used in actual workloads. For most mining queries, more than one data partition is of interest, and therefore, distribution handling that leaves the data unaffected is necessary. The algorithms presented here have been developed to work with data stored in SAP BW. A salient feature of SAP BW Accelerator is that it is implemented as a distributed landscape that sits on top of a large number of shared-nothing blade servers. Its main task is to execute OLAP queries that require fast aggregation of many millions of rows of data. Therefore, the distribution of data over the dedicated storage is optimized for such workloads. Data mining scenarios use the same data from storage, but reporting takes precedence over data mining, and hence, the data cannot be redistributed without massive costs. Distribution by special data semantics or user-defined selections can produce many partitions and very different partition sizes. The handling of such real-world distributions for frequent-pattern mining is an important task, but it conflicts with the requirement of balanced partitions.
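As an illustration of the top-n idea discussed above, here is a minimal single-node sketch in Python. It is a brute-force toy on assumed data, not the dissertation's adaptive or distributed algorithm; the function and variable names (top_n_frequent_itemsets, baskets) are made up for the example.

```python
from collections import Counter
from itertools import combinations

def top_n_frequent_itemsets(transactions, n, max_size=2):
    """Return the n most frequent itemsets (up to max_size items each).

    Brute-force enumeration for illustration only; real algorithms
    avoid this blow-up and must merge counts across data partitions.
    """
    counts = Counter()
    for t in transactions:
        items = sorted(set(t))
        for size in range(1, max_size + 1):
            counts.update(combinations(items, size))
    return counts.most_common(n)

baskets = [
    {"beer", "chips"}, {"beer", "chips", "salsa"},
    {"chips", "salsa"}, {"beer"}, {"beer", "chips"},
]
for itemset, freq in top_n_frequent_itemsets(baskets, n=3):
    print(itemset, freq)
```

The point of the interface is that the user chooses n, an intuitive quantity, instead of a dataset-dependent minimum support, and exact frequency counts are returned for whatever the result set contains.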
108

Simulations of Quantum Black Hole collisions at the LHC with PYTHIA

Niblaeus, Carl January 2011 (has links)
In this bachelor's thesis the concept of microscopic black hole production at colliders is investigated. By using extra-dimensional models in which the value of the Planck mass can be reduced to the TeV scale, gravity can be made stronger at small distances and the hierarchy problem can be solved. Since gravity is then much stronger already at the TeV scale, there is a possibility that microscopic black holes are produced at the LHC. In this thesis the production of quantum black holes, black holes with masses around the Planck mass, is implemented in the event generator PYTHIA. Events in which the quantum black holes decay into two particles are simulated and studied. A main contribution is the successful colour connection of the final states. An open problem for future simulations is how to give the black holes a spectrum of masses.
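For orientation, the kinematic core of such a simulation is the isotropic two-body decay of a heavy resonance in its rest frame. The sketch below uses standard textbook formulas with an assumed black-hole mass; it is not the thesis's PYTHIA implementation and omits colour connections entirely.

```python
import numpy as np

rng = np.random.default_rng(1)

def two_body_decay(M, m1, m2):
    """Four-momenta (E, px, py, pz) of an isotropic two-body decay
    of a resonance of mass M at rest into daughters of masses m1, m2."""
    # Momentum magnitude from energy-momentum conservation.
    p = np.sqrt((M**2 - (m1 + m2)**2) * (M**2 - (m1 - m2)**2)) / (2 * M)
    cos_t = rng.uniform(-1.0, 1.0)          # isotropic direction
    phi = rng.uniform(0.0, 2 * np.pi)
    sin_t = np.sqrt(1.0 - cos_t**2)
    n = np.array([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t])
    p1 = np.concatenate(([np.hypot(m1, p)], p * n))   # E1 = sqrt(m1^2 + p^2)
    p2 = np.concatenate(([np.hypot(m2, p)], -p * n))  # back-to-back
    return p1, p2

# Example: a 5 TeV quantum black hole (assumed mass) decaying to two
# massless partons, in GeV.
q1, q2 = two_body_decay(5000.0, 0.0, 0.0)
print(q1, q2)
```

An event generator then boosts these momenta to the lab frame and hadronizes the partons, which is where the colour-connection bookkeeping mentioned above becomes essential.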
109

Application of GEANT4 toolkit for simulations of high gradient phenomena

Persson, Daniel January 2018 (has links)
To study electron emission and dark currents in the accelerating structures of particle colliders, a test facility with a spectrometer has been constructed at CERN. This spectrometer has been simulated with the C++ toolkit GEANT4, and in this project the simulation was improved to handle new, realistic input data for the emitted electrons. The goal was to find relations between where the electrons are emitted inside the accelerating structure and the energy or position of the particles measured by the spectrometer. The result is that there is a linear relation between the initial position of the electrons and the width of the position distribution measured by the spectrometer. There also appears to be a relation between the energy the emitted electrons gain in the accelerating structure, which is related to their initial position, and the energy they deposit in the spectrometer. Further studies comparing the simulations with real measurement data are required to determine whether these relations hold, to establish their reliability, and to better understand the phenomena.
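The linear relation reported above would typically be extracted with a least-squares straight-line fit. A minimal sketch follows, with entirely made-up numbers standing in for the simulation output:

```python
import numpy as np

# Hypothetical (made-up) simulation output: emission position z inside the
# structure (mm) vs. rms width of the measured position distribution (mm).
z_emit = np.array([0.0, 40.0, 80.0, 120.0, 160.0, 200.0])
width  = np.array([1.1, 1.9, 2.8, 3.7, 4.4, 5.3])

# Least-squares straight line: width ~ slope * z_emit + intercept.
slope, intercept = np.polyfit(z_emit, width, deg=1)
residuals = width - (slope * z_emit + intercept)
print(f"slope={slope:.4f} mm/mm, intercept={intercept:.2f} mm, "
      f"rms residual={residuals.std():.3f} mm")
```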
110

Thermo-mechanical analysis of cryo-cooled electrode system in COMSOL

Olofsson, Joel January 2018 (has links)
In the planned linear accelerator called the Compact Linear Collider (CLIC), electrons and positrons will be accelerated to velocities near the speed of light. A limiting factor in accelerating structures is vacuum breakdowns: electrical discharges from a surface that result when a large electric field is applied. In the preparatory studies for CLIC, Uppsala University, in collaboration with the European Organization for Nuclear Research (CERN), is building a DC spark system to analyze vacuum breakdowns. This system, containing large planar electrodes, will be cooled all the way down to around 4 K in order to limit the rate at which vacuum breakdowns happen. When cooling a system like this, which consists of components made of different materials, the question arises of how the system will be affected. The objective of this project is to investigate how the cooling will affect the stability in terms of stresses and to analyze the cool-down time of the system. Another goal is to make a material recommendation for a few parts based on the results. This is done by simulating the cooling in COMSOL Multiphysics, a program that uses finite element analysis to solve complex problems where different branches of physics interact. The conclusion is that the system will most likely be stable as it is, and there is no need to redesign it. The recommended material is alumina, because it should cause the least stress and the smallest gap between the electrodes once the cooling is done. There was no big difference in cool-down time between the materials. Further studies and simulations of the system are also recommended, since many factors were not taken into consideration in this study.
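For a rough sense of the stresses involved, a fully constrained part cooled by ΔT develops a stress of order σ = E·α·ΔT. The sketch below evaluates this upper bound with typical handbook constants (illustrative values, not the thesis's data; a COMSOL model additionally integrates the temperature-dependent α(T), which drops sharply toward 4 K):

```python
# Rough thermal-stress estimate for a fully constrained part cooled from
# room temperature to 4 K: sigma = E * alpha * dT.
# Using a constant room-temperature alpha overestimates the strain,
# since alpha falls toward zero at cryogenic temperatures.
materials = {
    # name: (Young's modulus E in GPa, expansion coefficient alpha in 1/K)
    # Typical handbook values, used here only for illustration.
    "alumina":         (370.0, 8.0e-6),
    "stainless steel": (200.0, 16.0e-6),
    "copper":          (120.0, 17.0e-6),
}

dT = 293.0 - 4.0  # cool-down temperature range (K)

for name, (E_gpa, alpha) in materials.items():
    sigma_mpa = E_gpa * 1e3 * alpha * dT  # GPa -> MPa
    print(f"{name}: upper-bound thermal stress ~ {sigma_mpa:.0f} MPa")
```

Real parts are only partially constrained, so actual stresses are far below these bounds; the estimate mainly shows why material pairings with mismatched α matter for the electrode gap.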
