About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
501

Padronização do Y-90 pelo método CIEMAT/NIST em sistema de cintilação líquida e pelo método do traçador em sistema de coincidência 4πβ-γ / Standardization of Y-90 by CIEMAT/NIST method in scintillation counting system and by tracing method in 4πβ-γ coincidence system

Sales, Tatiane da Silva Nascimento 30 May 2014 (has links)
O ⁹⁰Y tem uma meia-vida de 2,7 dias, decaindo com 99,98% por emissão beta para o estado fundamental do ⁹⁰Zr. Neste trabalho foram aplicadas duas metodologias para a padronização do ⁹⁰Y. O método do traçador em um sistema de coincidência de 4πβ-γ, onde foi medido o emissor beta puro, misturado com um emissor de beta-gama, que proporciona a eficiência de detecção beta. Para este método, o radionuclídeo ²⁴Na, que decai com meia-vida de 0,623 dia pela emissão beta, com energia beta máxima de 1393 keV, seguido por dois raios gama, foi usado como traçador. A eficiência foi obtida, selecionando-se o pico de absorção total com energia de 1369 keV no canal gama. Alíquotas conhecidas do traçador, previamente padronizadas pelo método de coincidência 4πβ-γ, foram misturadas com alíquotas conhecidas de ⁹⁰Y. A atividade do emissor beta puro foi calculada por meio de um sistema de coincidência por software (SCS) usando discriminação eletrônica para alterar a eficiência de beta. O comportamento da curva de extrapolação foi predito por meio do código Esquema, que utiliza a técnica de Monte Carlo. O outro método usado foi o método CIEMAT/NIST desenvolvido para sistemas de contagem de cintilação líquida. Para este método, utilizou-se uma solução padrão de ³H. O sistema 2100TR TRICARB foi usado para as medições, o qual opera em coincidência com duas fotomultiplicadoras; uma fonte externa, colocada perto do sistema de medição foi usada para determinar o parâmetro quenching. O coquetel utilizado foi o Ultima Gold, a variação do fator de quenching foi obtida pelo uso de nitrometano. As amostras radioativas foram preparadas em frascos de vidro com baixa concentração de potássio. As atividades determinadas pelos dois métodos foram comparadas e os resultados obtidos são concordantes dentro das incertezas experimentais. Por meio deste trabalho, foi possível avaliar o desempenho do método CIEMAT/NIST, que apresenta várias vantagens em relação ao método do traçador, entre elas a facilidade para a preparação das fontes, medidas simples e rápidas sem a necessidade de determinar as curvas de extrapolação. / ⁹⁰Y has a half-life of 2.7 days, decaying by beta emission (99.98%) to the ground state of ⁹⁰Zr. In this work, two methodologies for the standardization of yttrium-90 (⁹⁰Y) were applied. One was the tracer method performed in a 4πβ-γ coincidence system, in which the pure beta emitter is measured mixed with a beta-gamma emitter that provides the beta detection efficiency. For this method, the radionuclide ²⁴Na, which decays with a half-life of 0.623 day by beta emission with a maximum beta energy of 1393 keV followed by two gamma rays, was used as tracer. The efficiency was obtained by selecting the 1369 keV total-absorption peak in the gamma channel. Known aliquots of the tracer, previously standardized by 4πβ-γ coincidence, were mixed with known aliquots of ⁹⁰Y. The activity of the pure beta emitter was calculated by means of a Software Coincidence System (SCS), using electronic discrimination to change the beta efficiency. The behavior of the extrapolation curve was predicted by means of the Esquema code, which uses the Monte Carlo technique. The other was the CIEMAT/NIST method, developed for liquid scintillation counting (LSC) systems. For this method, a ³H standard solution was used. A TRICARB 2100TR system was used for the measurements; it operates with two photomultipliers in coincidence, and an external source placed near the measurement system is used to determine the quenching parameter. Ultima Gold was used as the liquid scintillation cocktail, and the quenching parameter curve was obtained with a nitromethane carrier solution. The radioactive samples were prepared in glass vials with low potassium concentration. The activities determined by the two methods were compared and agree within the experimental uncertainties. This work made it possible to evaluate the performance of the CIEMAT/NIST method, which presents several advantages over the tracer method, among them easier source preparation and simple, fast measurements without the need to determine extrapolation curves.
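To make the efficiency-extrapolation step of the coincidence method concrete, here is a minimal numerical sketch in Python. It is illustrative only: the count rates are invented, and it shows the generic 4πβ-γ extrapolation idea rather than the SCS/Esquema implementation used in the thesis.

```python
# Minimal sketch of the efficiency-extrapolation step used in 4pi-beta-gamma
# coincidence counting (illustrative only; count rates are hypothetical).
import numpy as np

# Hypothetical count rates while the beta efficiency is varied by electronic
# discrimination (counts per second).
n_beta  = np.array([950.0, 900.0, 840.0, 760.0, 680.0])   # beta channel
n_gamma = np.array([400.0, 400.0, 400.0, 400.0, 400.0])   # gamma channel (1369 keV gate)
n_coinc = np.array([380.0, 360.0, 336.0, 304.0, 272.0])   # coincidence channel

eff_beta = n_coinc / n_gamma            # beta efficiency at each working point
x = (1.0 - eff_beta) / eff_beta         # inefficiency parameter
y = n_beta * n_gamma / n_coinc          # apparent activity

# Linear extrapolation to x -> 0 (100 % beta efficiency) gives the activity N0.
slope, intercept = np.polyfit(x, y, 1)
print(f"Extrapolated activity N0 ~= {intercept:.1f} Bq")
```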
502

Análise óptica e térmica do receptor de um sistema de concentradores Fresnel lineares / Optical and thermal analysis of the receiver of a linear Fresnel concentrator system

Scalco, Patricia 22 January 2016 (has links)
CNPQ – Conselho Nacional de Desenvolvimento Científico e Tecnológico / O estudo de diferentes fontes de energia é de extrema importância, tanto em termos econômicos e sociais, como no âmbito ambiental. Assim, o uso da energia solar para a geração de calor para alimentar processos que necessitam de temperaturas em torno de 300 ºC aparece como uma alternativa para suprir o uso de combustíveis fósseis em ambientes industriais, seja de forma parcial ou total. Para atingir essa faixa de temperatura, devem ser utilizados equipamentos de alto desempenho e que possam concentrar ao máximo a radiação solar. Assim, é utilizada a tecnologia de refletores Fresnel lineares, que se baseia no princípio de concentração solar, onde os raios solares incidem em espelhos que refletem essa radiação para um receptor. O receptor é composto por um tubo absorvedor e por uma segunda superfície refletora, conhecida como concentrador secundário, que tem como função maximizar a quantidade de raios absorvidos pelo receptor. Esse tipo de instalação tem se mostrado competitiva diante de outros tipos de concentração solar devido à sua estrutura simples, custo reduzido e fácil manutenção. Assim, neste trabalho serão analisados aspectos ópticos e térmicos do conjunto do receptor, tanto para o concentrador secundário do formato trapezoidal como para o CPC. Para isso, o estudo foi dividido em duas etapas. Na primeira etapa foi feito o traçado de raios para as duas geometrias do concentrador secundário estudadas a fim de determinar o fator de interceptação e as perdas ópticas envolvidas neste processo. Além disso, foi analisada a influência da inserção de uma superfície de vidro na base do receptor. A segunda etapa consistiu na análise térmica, onde foi feito o estudo da transferência de calor no receptor com a finalidade de determinar a eficiência do sistema, bem como os fatores que influenciam no desempenho do mesmo. Na análise geométrica, o fator de interceptação para o concentrador secundário do tipo trapezoidal foi de 36% para o receptor aberto e 45% para o receptor com o fechamento de vidro. Para o concentrador secundário do tipo CPC, os resultados foram de 44% para o receptor aberto e 56% para o receptor isolado com vidro. Através da análise térmica, foi possível estabelecer a eficiência do sistema que, para a melhor condição de trabalho, DNI de 1000 W/m², foi de 80%. / The study of different energy sources is extremely important, both in the economic and social spheres and in the environmental field. Thus, the use of solar energy to generate heat for processes that require temperatures around 300 ºC appears as an alternative to replace fossil fuels in industrial environments, either partially or totally. To reach this temperature range, high-performance equipment must be used that can concentrate solar radiation as much as possible. For this purpose, linear Fresnel reflector technology is used, which is based on the principle of solar concentration, where solar rays strike mirrors that reflect this radiation to a receiver. The receiver is composed of an absorber tube and a second reflecting surface, known as the secondary concentrator, whose function is to maximize the number of rays absorbed by the receiver. This type of installation has proven competitive with other types of solar concentration because of its simple structure, low cost and easy maintenance. Thus, this work analyzes optical and thermal aspects of the receiver assembly for both the trapezoidal and the CPC (compound parabolic concentrator) secondary concentrators. For this, the study was divided into two stages. In the first stage, ray tracing was performed for the two secondary concentrator geometries studied in order to determine the interception factor and the optical losses involved in this process. In addition, the influence of inserting a glass surface at the base of the receiver, isolating it from the environment, was analyzed. The second stage consisted of the thermal analysis, in which the heat transfer in the receiver was studied in order to determine the efficiency of the system as well as the factors that influence its performance. In the geometric analysis, the interception factor for the trapezoidal secondary concentrator was 36% for the open receiver and 45% for the receiver with the glass enclosure. For the CPC secondary concentrator, the results were 44% for the open receiver and 56% for the receiver with the glass enclosure. Through the thermal analysis, it was possible to establish the efficiency of the system, which, for the best working condition, a DNI of 1000 W/m², was 80%.
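As a rough illustration of the kind of energy balance behind a receiver thermal-efficiency figure like the one quoted above, the Python sketch below computes an efficiency from absorbed radiation and heat losses. All geometry, temperature and loss values are assumptions for the example, not results from the thesis.

```python
# Rough energy-balance sketch for a linear Fresnel receiver (illustrative;
# the areas, loss coefficient and optical efficiency below are assumed).
DNI = 1000.0          # direct normal irradiance, W/m^2 (best case cited above)
A_mirror = 20.0       # total primary mirror aperture area, m^2 (assumed)
eta_optical = 0.56    # reflectance x interception factor, etc. (assumed)
U_loss = 8.0          # overall receiver heat-loss coefficient, W/(m^2 K) (assumed)
A_receiver = 1.5      # receiver surface area, m^2 (assumed)
T_receiver = 300.0    # mean absorber temperature, deg C (process target cited above)
T_ambient = 25.0      # ambient temperature, deg C (assumed)

q_absorbed = eta_optical * DNI * A_mirror                  # W reaching the absorber
q_loss = U_loss * A_receiver * (T_receiver - T_ambient)    # W lost to the surroundings
eta_thermal = (q_absorbed - q_loss) / (DNI * A_mirror)

print(f"Absorbed: {q_absorbed:.0f} W, losses: {q_loss:.0f} W, efficiency: {eta_thermal:.2f}")
```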
503

Função de mapeamento brasileira da atmosfera neutra e sua aplicação no posicionamento GNSS na América do Sul / Brazilian neutral atmosphere mapping function and its application to GNSS positioning in South America

Gouveia, Tayná Aparecida Ferreira. January 2019 (has links)
Advisor: João Francisco Galera Monico / Resumo: A tecnologia Global Navigation Satellite Systems (GNSS) tem sido amplamente utilizada em posicionamento, desde as aplicações cotidianas (acurácia métrica), até aplicações que requerem alta acurácia (poucos cm ou dm). Quando se pretende obter alta acurácia, diferentes técnicas devem ser aplicadas a fim de minimizar os efeitos que o sinal sofre desde sua transmissão, no satélite, até sua recepção. O sinal GNSS ao se propagar na atmosfera neutra (da superfície até 50 km), é afetado por gases hidrostáticos e vapor d’água. A variação desses constituintes atmosféricos causa uma refração no sinal que gera um atraso. Esse atraso pode ocasionar erros na medida de no mínimo 2,5 m (zenital) e superior a 25 m (inclinado). A determinação do atraso na direção inclinada (satélite-receptor) de acordo com o ângulo de elevação é realizada pelas funções de mapeamento. Uma das técnicas para o cálculo do atraso é o traçado de raio (ray tracing). Essa técnica permite mapear o caminho real que o sinal percorreu e modelar a interferência da atmosfera neutra sobre esse sinal. Diferentes abordagens podem ser usadas para obter informações que descrevem os constituintes da atmosfera neutra. Dentre as possibilidades pode-se citar o uso de medidas de radiossondas, modelos de previsão do tempo e clima (PNT), medidas GNSS, assim como modelos teóricos. Modelos de PNT regionais do Centro de Previsão de Tempo e Estudos Climáticos (CPTEC) do Instituto Nacional de Pesquisas Espaciais (INPE) apresentam-se como um... (Resumo completo, clicar acesso eletrônico abaixo) / Abstract: Global Navigation Satellite Systems (GNSS) technology has been widely used in positioning, from everyday applications (metric accuracy) to applications that require high accuracy (a few cm or dm). For high accuracy, different techniques must be applied to minimize the effects the signal suffers between its transmission at the satellite and its reception. The GNSS signal, when propagating through the neutral atmosphere (from the surface up to 50 km), is affected by hydrostatic gases and water vapor. The variation of these atmospheric constituents causes a refraction of the signal that generates a delay. This delay can cause errors of at least 2.5 m in the zenith direction and of more than 25 m along slant paths. The determination of the delay in the slant direction (satellite-receiver) as a function of the elevation angle is performed by mapping functions. One of the techniques for calculating the delay is ray tracing. This technique makes it possible to map the actual path traveled by the signal and to model the interference of the neutral atmosphere on it. Different approaches can be used to obtain the information describing the neutral atmosphere constituents - temperature, pressure and humidity. The possibilities include the use of radiosonde measurements, weather and climate prediction models (NWP), GNSS measurements, as well as theoretical models. Regional NWP models from the Center for Weather Forecasting and Climate Studies (CPTEC) of the National Institute for Space Research (INPE) are a good alternative to provide atmospheri... (Complete abstract click electronic access below) / Doutor
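To illustrate what a mapping function does with a zenith delay, here is a small Python sketch using the generic continued-fraction form (the Marini/Niell/VMF family), not the Brazilian mapping function developed in the thesis; the coefficients and zenith delays are placeholder values.

```python
# Sketch: mapping zenith hydrostatic/wet delays to a slant delay with a
# generic continued-fraction mapping function. Coefficients are placeholders,
# not fitted values from any published model or from this thesis.
import math

def mapping_function(elev_deg, a, b, c):
    """Continued-fraction mapping function m(e)."""
    s = math.sin(math.radians(elev_deg))
    top = 1.0 + a / (1.0 + b / (1.0 + c))
    bottom = s + a / (s + b / (s + c))
    return top / bottom

zhd = 2.3   # zenith hydrostatic delay, m (typical near sea level)
zwd = 0.2   # zenith wet delay, m (assumed)

for elev in (90.0, 30.0, 10.0, 5.0):
    m_h = mapping_function(elev, 1.2e-3, 2.9e-3, 62.6e-3)   # placeholder coefficients
    m_w = mapping_function(elev, 5.8e-4, 1.4e-3, 4.5e-2)    # placeholder coefficients
    slant = zhd * m_h + zwd * m_w
    print(f"elevation {elev:5.1f} deg -> slant delay ~ {slant:.2f} m")
```

At 5° elevation the mapping factor is roughly 10, which is how a zenith delay of about 2.5 m grows to the >25 m slant delays mentioned in the abstract.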
504

Optimisation of an Ultrasonic Flow Meter Based on Experimental and Numerical Investigation of Flow and Ultrasound Propagation

Temperley, Neil Colin, Mechanical & Manufacturing Engineering, Faculty of Engineering, UNSW January 2002 (has links)
This thesis presents a procedure to optimise the shape of a coaxial transducer ultrasonic flow meter. The technique uses separate numerical simulations of the fluid flow and the ultrasound propagation within a meter duct. A new flow meter geometry has been developed, having a significantly improved (smooth and monotonic) calibration curve. In this work the complex fluid flow field and its influence on the propagation of ultrasound in a cylindrical flow meter duct are investigated. A geometric acoustics (ray tracing) propagation model is applied to a flow field calculated by a commercial Computational Fluid Dynamics (CFD) package. The simulation results are compared to measured calibration curves for a variety of meter geometries having varying lengths and duct diameters. The modelling shows reasonable agreement with the calibration characteristics for several meter geometries over a Reynolds number range of 100...100000 (based on bulk velocity and meter duct diameter). Various CFD simulations are validated against flow visualisation measurements, Laser Doppler Velocimetry measurements or published results. The thesis includes software to calculate the acoustic ray propagation and also to calculate the optimal shape for the annular gap around the transducer housings in order to achieve the desired flow acceleration. A dimensionless number is proposed to characterise the mean deflection of an acoustic beam due to interaction with a fluid flow profile (or acoustic velocity gradient). For flow in a cylindrical duct, the 'acoustic beam deflection number' is defined as M g* (L/D)^2, where: M is the Mach number of the bulk velocity; g* is the average non-dimensionalised velocity gradient insonified by the acoustic beam (g* is a function of transducer diameter - typically g* = 0.5...4.5); L is the transducer separation; and D is the duct diameter. Large values of this number indicate considerable beam deflection that may lead to undesirable wall reflections and diffraction effects. For a single-path coaxial transducer ultrasonic flow meter, there are practical limits to the length of a flow meter and to the maximum size of a transducer for a given duct diameter. The 'acoustic beam deflection number' characterises the effect of these parameters.
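As a quick numerical illustration of the dimensionless group defined above, the sketch below evaluates M g* (L/D)^2 for a hypothetical meter; the flow and geometry values are invented and only indicate the order of magnitude.

```python
# Evaluate the 'acoustic beam deflection number' M * g_star * (L/D)^2.
# Example values are hypothetical, not taken from the thesis.
def beam_deflection_number(bulk_velocity, sound_speed, g_star, length, diameter):
    mach = bulk_velocity / sound_speed          # Mach number of the bulk flow
    return mach * g_star * (length / diameter) ** 2

# Example: 10 m/s bulk flow of air (c ~ 343 m/s) in a 25 mm duct,
# transducers 150 mm apart, assumed g* of 2.
value = beam_deflection_number(bulk_velocity=10.0, sound_speed=343.0,
                               g_star=2.0, length=0.150, diameter=0.025)
print(f"Acoustic beam deflection number ~ {value:.2f}")
```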
505

On the Search for High-Energy Neutrinos: Analysis of data from AMANDA-II

Lundberg, Johan January 2008 (has links)
A search for a diffuse flux of cosmic neutrinos with energies in excess of 10¹⁴ eV was performed using two years of AMANDA-II data, collected in 2003 and 2004. A 20% evenly distributed sub-sample of experimental data was used to verify the detector description and the analysis cuts. Very good agreement between this 20% sample and the background simulations was observed. The analysis was optimised for discovery, at a relatively low price in limit-setting power. The background estimate for the livetime of the examined 80% sample is 0.035 events ± 68%, with an additional 41% systematic uncertainty. The total neutrino flux needed for a 5σ discovery to be made with 50% probability was estimated to be 3.4 ∙ 10⁻⁷ E⁻² GeV s⁻¹ sr⁻¹ cm⁻², equally distributed over the three flavours, taking statistical and systematic uncertainties in the background expectation and the signal efficiency into account. No experimental events survived the final discriminator cut. Hence, no ultra-high-energy neutrino candidates were found in the examined sample. A 90% upper limit is placed on the total ultra-high-energy neutrino flux at 2.8 ∙ 10⁻⁷ E⁻² GeV s⁻¹ sr⁻¹ cm⁻², taking both systematic and statistical uncertainties into account. The energy range in which 90% of the simulated E⁻² signal is contained is 2.94 ∙ 10¹⁴ eV to 1.54 ∙ 10¹⁸ eV (central interval), assuming an equal distribution over the neutrino flavours at the Earth. The final acceptance is distributed as 48% electron neutrinos, 27% muon neutrinos, and 25% tau neutrinos. A set of models for the production of neutrinos in active galactic nuclei that predict spectra deviating from E⁻² was excluded.
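For readers who want to see the counting-statistics side of a discovery criterion like the one above, here is a bare-bones Poisson sketch; it is an assumed illustration of the standard counting argument only, and ignores the systematic uncertainties that the analysis folds in.

```python
# Given an expected background of 0.035 events, find the smallest observed
# count that would constitute a one-sided 5-sigma excess, and the signal
# expectation needed to reach that count with 50% probability.
from scipy.stats import norm, poisson

background = 0.035
p_5sigma = norm.sf(5.0)            # one-sided 5-sigma p-value, ~2.9e-7

# Smallest n such that P(N >= n | background) < p_5sigma
n_crit = 1
while poisson.sf(n_crit - 1, background) >= p_5sigma:
    n_crit += 1

# Signal mean s such that P(N >= n_crit | background + s) >= 0.5
s = 0.0
while poisson.sf(n_crit - 1, background + s) < 0.5:
    s += 0.01

print(f"critical count: {n_crit}, signal for 50% discovery probability: ~{s:.2f} events")
```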
506

Discovery Of Application Workloads From Network File Traces

Yadwadkar, Neeraja 12 1900 (has links) (PDF)
An understanding of Input/Output data access patterns of applications is useful in several situations. First, gaining an insight into what applications are doing with their data at a semantic level helps in designing efficient storage systems. Second, it helps to create benchmarks that mimic realistic application behavior closely. Third, it enables autonomic systems as the information obtained can be used to adapt the system in a closed loop. All these use cases require the ability to extract the application-level semantics of I/O operations. Methods such as modifying application code to associate I/O operations with semantic tags are intrusive. It is well known that network file system traces are an important source of information that can be obtained non-intrusively and analyzed either online or offline. These traces are a sequence of primitive file system operations and their parameters. Simple counting, statistical analysis or deterministic search techniques are inadequate for discovering application-level semantics in the general case, because of the inherent variation and noise in realistic traces. In this paper, we describe a trace analysis methodology based on Profile Hidden Markov Models. We show that the methodology has powerful discriminatory capabilities that enables it to recognize applications based on the patterns in the traces, and to mark out regions in a long trace that encapsulate sets of primitive operations that represent higher-level application actions. It is robust enough that it can work around discrepancies between training and target traces such as in length and interleaving with other operations. We demonstrate the feasibility of recognizing patterns based on a small sampling of the trace, enabling faster trace analysis. Preliminary experiments show that the method is capable of learning accurate profile models on live traces in an online setting. We present a detailed evaluation of this methodology in a UNIX environment using NFS traces of selected commonly used applications such as compilations as well as on industrial strength benchmarks such as TPC-C and Postmark, and discuss its capabilities and limitations in the context of the use cases mentioned above.
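To show the flavour of HMM-based trace scoring in code, here is a toy forward-algorithm example; it uses an ordinary two-state HMM with invented states and probabilities, far simpler than the Profile Hidden Markov Models trained in the thesis.

```python
# Score a short sequence of NFS-like operations under a small HMM using the
# forward algorithm. States, probabilities and the trace are made up.
import numpy as np

ops = ["lookup", "read", "read", "write", "getattr"]        # observed operations
vocab = {"lookup": 0, "read": 1, "write": 2, "getattr": 3}

# Two hypothetical hidden states: a "scan phase" and an "update phase".
start = np.array([0.6, 0.4])
trans = np.array([[0.8, 0.2],
                  [0.3, 0.7]])
emit = np.array([[0.3, 0.5, 0.1, 0.1],    # scan phase favours lookup/read
                 [0.1, 0.2, 0.5, 0.2]])   # update phase favours write

def log_likelihood(sequence):
    """Forward algorithm in the probability domain (fine for short traces)."""
    alpha = start * emit[:, vocab[sequence[0]]]
    for op in sequence[1:]:
        alpha = (alpha @ trans) * emit[:, vocab[op]]
    return np.log(alpha.sum())

print(f"log-likelihood of the trace under this model: {log_likelihood(ops):.3f}")
```

Comparing such likelihoods across models trained on different applications is the basic idea behind recognizing which application produced a trace.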
507

Flicker Source Identification At A Point Of Common Coupling Of The Power System

Altintas, Erinc 01 June 2010 (has links) (PDF)
Voltage fluctuations below 30 Hz in the electricity grid lead to oscillations in light intensity that can be perceived by the human eye, which is called flicker. In this thesis, the sources of flicker at a point of common coupling (PCC) are investigated. When more than one flicker source is connected to a PCC, the individual effect of each source is determined using a new method based on the reactive current components of the sources. This method is mainly based on the flickermeter design defined by the International Electrotechnical Commission (IEC), but uses the current variations in addition to the voltage variations to compute flicker. The proposed method is applied to several different types of loads supplied from a PCC and their flicker contributions at the busbar are investigated. Experiments are performed on field data obtained by the power quality analyzers (PQ+) developed by the National Power Quality Project, and the method has been found to provide accurate results for the flicker contributions of various loads. The PQ+ analyzers with the proposed flicker contribution detection algorithm are called Flicker Contribution Meters (FCM) and will be installed at points of the Turkish Electricity Transmission Network when required.
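As background on why reactive current matters here, the sketch below uses the textbook approximation ΔV/V ≈ (R·ΔP + X·ΔQ)/V² for the voltage change a fluctuating load produces across the short-circuit impedance at a PCC; it is not the contribution-detection algorithm of the thesis, and all numbers are assumed.

```python
# Relative voltage change at a PCC caused by a load step, using the common
# approximation dV/V ~ (R*dP + X*dQ)/V^2. Impedance and load values assumed.
R = 0.5        # source resistance at the PCC, ohm (assumed)
X = 2.0        # source reactance at the PCC, ohm (assumed)
V = 34_500.0   # line-to-line voltage, V (assumed)

dP = 2.0e6     # active power step of a fluctuating load, W (assumed)
dQ = 8.0e6     # reactive power step, var (assumed)

relative_voltage_change = (R * dP + X * dQ) / V**2
print(f"relative voltage change ~ {100 * relative_voltage_change:.2f} %")
```

Because X is usually much larger than R in transmission systems, the reactive power (current) swings of a load dominate the voltage fluctuation, and hence its flicker contribution.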
508

Strategy for construction of polymerized volume data sets

Aragonda, Prathyusha 12 April 2006 (has links)
This thesis develops a strategy for polymerized volume data set construction. Given a volume data set defined over a regular three-dimensional grid, a polymerized volume data set (PVDS) can be defined as follows: edges between adjacent vertices of the grid are labeled 1 (active) or 0 (inactive) to indicate the likelihood that an edge is contained in (or spans the boundary of) a common underlying object, adding information not in the original volume data set. This edge labeling “polymerizes” adjacent voxels (those sharing a common active edge) into connected components, facilitating segmentation of embedded objects in the volume data set. Polymerization of the volume data set also aids real-time data compression, geometric modeling of the embedded objects, and their visualization. To construct a polymerized volume data set, an adjacency class within the grid system is selected. Edges belonging to this adjacency class are labeled as interior, exterior, or boundary edges using discriminant functions whose functional forms are derived for three local adjacency classes. The discriminant function parameter values are determined by supervised learning. Training sets are derived from an initial segmentation on a homogeneous sample of the volume data set, using an existing segmentation method. The strategy of constructing polymerized volume data sets is initially tested on synthetic data sets which resemble neuronal volume data obtained by three-dimensional microscopy. The strategy is then illustrated on volume data sets of mouse brain microstructure at a neuronal level of detail. Visualization and validation of the resulting PVDS is shown in both cases. Finally the procedures of polymerized volume data set construction are generalized to apply to any Bravais lattice over the regular 3D orthogonal grid. Further development of this latter topic is left to future work.
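To make the "polymerization into connected components" concrete, here is a small union-find sketch over a voxel grid; the edge-activity rule is a toy threshold, whereas the thesis labels edges with learned discriminant functions.

```python
# Merge voxels that share an "active" edge into connected components with
# union-find. Grid size and the activity rule below are illustrative only.
import numpy as np

shape = (4, 4, 4)
volume = np.random.rand(*shape)                 # stand-in for real volume data

parent = {v: v for v in np.ndindex(*shape)}     # union-find forest over voxels

def find(v):
    while parent[v] != v:
        parent[v] = parent[parent[v]]           # path halving
        v = parent[v]
    return v

def union(a, b):
    parent[find(a)] = find(b)

# Toy rule: an axis-aligned edge is active if both endpoints exceed a threshold.
threshold = 0.5
for v in np.ndindex(*shape):
    for axis in range(3):
        w = list(v); w[axis] += 1; w = tuple(w)
        if w[axis] < shape[axis] and volume[v] > threshold and volume[w] > threshold:
            union(v, w)

components = {find(v) for v in np.ndindex(*shape) if volume[v] > threshold}
print(f"number of polymerized components: {len(components)}")
```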
509

L'uso delle reti sociali per la costruzione di campioni probabilistici: possibilità e limiti per lo studio di popolazioni senza lista di campionamento / The use of social networks to construct probability samples: possibilities and limits for studying populations without a sampling frame

VITALINI, ALBERTO 04 March 2011 (has links)
Il campionamento a valanga è considerato un tipo di campionamento non probabilistico, la cui rappresentatività può essere valutata solo sulla base di considerazioni soggettive. D’altro canto esso risulta spesso il solo praticamente utilizzabile nel caso di popolazioni senza lista di campionamento. La tesi si divide in due parti. La prima, teorica, descrive alcuni tentativi proposti in letteratura di ricondurre le forme di campionamento a valanga nell’alveo dei campionamenti probabilistici; tra questi è degno di nota il Respondent Driven Sampling, un disegno campionario che dovrebbe combinare il campionamento a valanga con un modello matematico che pesa le unità estratte in modo da compensare la non casualità dell’estrazione e permettere così l’inferenza statistica. La seconda, empirica, indaga le prestazioni del RDS sia attraverso simulazioni sia con una web-survey su una comunità virtuale in Internet, di cui si conoscono la struttura delle relazioni e alcune caratteristiche demografiche per ogni individuo. Le stime RDS, calcolate a partire dai dati delle simulazioni e della web-survey, sono confrontate con i valori veri della popolazione e le potenziali fonti di distorsione (in particolare quelle relative all’assunzione di reclutamento casuale) sono analizzate. / Populations without a sampling frame are inherently hard to sample with conventional sampling designs. Often the only practical methods of obtaining a sample involve following social links from some initially identified respondents to add more research participants to the sample. These kinds of link-tracing designs make the sample liable to various forms of bias and make it extremely difficult to generalize the results to the population studied. This thesis is divided into two parts. The first part describes some attempts to build a statistical theory of link-tracing designs and examines in depth Respondent-Driven Sampling (RDS), a link-tracing sampling design that should allow researchers to make asymptotically unbiased estimates, under certain conditions, in populations without a sampling frame. The second part investigates the performance of RDS by simulating sampling from a virtual community on the Internet for which both the network structure of the population and demographic traits of each individual are known. In addition to the simulations, the thesis tests RDS by conducting a web survey of the same population. RDS estimates from the simulations and the web survey are compared to the true population values, and potential sources of bias (in particular those related to the random recruitment assumption) are discussed.
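To show how RDS weights respondents, here is a sketch of the widely used Volz-Heckathorn (RDS-II) estimator, which weights each respondent by the inverse of the reported network degree; the data are invented, and this is one common RDS estimator, not necessarily the one evaluated in the thesis.

```python
# RDS-II estimate of a trait proportion: sum(y_i/d_i) / sum(1/d_i),
# where d_i is respondent i's reported degree. Data below are invented.
respondents = [
    # (degree, has_trait)
    (10, 1), (50, 0), (5, 1), (20, 0), (8, 1), (100, 0), (12, 1),
]

num = sum(trait / degree for degree, trait in respondents)
den = sum(1.0 / degree for degree, trait in respondents)
rds_estimate = num / den

naive_estimate = sum(trait for _, trait in respondents) / len(respondents)
print(f"naive sample proportion: {naive_estimate:.2f}, RDS-II estimate: {rds_estimate:.2f}")
```

In this toy data set the trait-holders report smaller networks, so the degree weighting pushes the estimate above the naive sample proportion.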
510

Advanced Memory Data Structures for Scalable Event Trace Analysis

Knüpfer, Andreas 17 April 2009 (has links) (PDF)
The thesis presents a contribution to the analysis and visualization of computational performance based on event traces with a particular focus on parallel programs and High Performance Computing (HPC). Event traces contain detailed information about specified incidents (events) during run-time of programs and allow minute investigation of dynamic program behavior, various performance metrics, and possible causes of performance flaws. Due to long running and highly parallel programs and very fine detail resolutions, event traces can accumulate huge amounts of data which become a challenge for interactive as well as automatic analysis and visualization tools. The thesis proposes a method of exploiting redundancy in the event traces in order to reduce the memory requirements and the computational complexity of event trace analysis. The sources of redundancy are repeated segments of the original program, either through iterative or recursive algorithms or through SPMD-style parallel programs, which produce equal or similar repeated event sequences. The data reduction technique is based on the novel Complete Call Graph (CCG) data structure which allows domain specific data compression for event traces in a combination of lossless and lossy methods. All deviations due to lossy data compression can be controlled by constant bounds. The compression of the CCG data structure is incorporated in the construction process, such that at no point substantial uncompressed parts have to be stored. Experiments with real-world example traces reveal the potential for very high data compression. The results range from factors of 3 to 15 for small scale compression with minimum deviation of the data to factors > 100 for large scale compression with moderate deviation. Based on the CCG data structure, new algorithms for the most common evaluation and analysis methods for event traces are presented, which require no explicit decompression. By avoiding repeated evaluation of formerly redundant event sequences, the computational effort of the new algorithms can be reduced in the same extent as memory consumption. The thesis includes a comprehensive discussion of the state-of-the-art and related work, a detailed presentation of the design of the CCG data structure, an elaborate description of algorithms for construction, compression, and analysis of CCGs, and an extensive experimental validation of all components. / Diese Dissertation stellt einen neuartigen Ansatz für die Analyse und Visualisierung der Berechnungs-Performance vor, der auf dem Ereignis-Tracing basiert und insbesondere auf parallele Programme und das Hochleistungsrechnen (High Performance Computing, HPC) zugeschnitten ist. Ereignis-Traces (Ereignis-Spuren) enthalten detaillierte Informationen über spezifizierte Ereignisse während der Laufzeit eines Programms und erlauben eine sehr genaue Untersuchung des dynamischen Verhaltens, verschiedener Performance-Metriken und potentieller Performance-Probleme. Aufgrund lang laufender und hoch paralleler Anwendungen und dem hohen Detailgrad kann das Ereignis-Tracing sehr große Datenmengen produzieren. Diese stellen ihrerseits eine Herausforderung für interaktive und automatische Analyse- und Visualisierungswerkzeuge dar. Die vorliegende Arbeit präsentiert eine Methode, die Redundanzen in den Ereignis-Traces ausnutzt, um sowohl die Speicheranforderungen als auch die Laufzeitkomplexität der Trace-Analyse zu reduzieren. 
Die Ursachen für Redundanzen sind wiederholt ausgeführte Programmabschnitte, entweder durch iterative oder rekursive Algorithmen oder durch SPMD-Parallelisierung, die gleiche oder ähnliche Ereignis-Sequenzen erzeugen. Die Datenreduktion basiert auf der neuartigen Datenstruktur der "Vollständigen Aufruf-Graphen" (Complete Call Graph, CCG) und erlaubt eine Kombination von verlustfreier und verlustbehafteter Datenkompression. Dabei können konstante Grenzen für alle Abweichungen durch verlustbehaftete Kompression vorgegeben werden. Die Datenkompression ist in den Aufbau der Datenstruktur integriert, so dass keine umfangreichen unkomprimierten Teile vor der Kompression im Hauptspeicher gehalten werden müssen. Das enorme Kompressionsvermögen des neuen Ansatzes wird anhand einer Reihe von Beispielen aus realen Anwendungsszenarien nachgewiesen. Die dabei erzielten Resultate reichen von Kompressionsfaktoren von 3 bis 5 mit nur minimalen Abweichungen aufgrund der verlustbehafteten Kompression bis zu Faktoren > 100 für hochgradige Kompression. Basierend auf der CCG-Datenstruktur werden außerdem neue Auswertungs- und Analyseverfahren für Ereignis-Traces vorgestellt, die ohne explizite Dekompression auskommen. Damit kann die Laufzeitkomplexität der Analyse im selben Maß gesenkt werden wie der Hauptspeicherbedarf, indem komprimierte Ereignis-Sequenzen nicht mehrmals analysiert werden. Die vorliegende Dissertation enthält eine ausführliche Vorstellung des Stands der Technik und verwandter Arbeiten in diesem Bereich, eine detaillierte Herleitung der neu eingeführten Datenstrukturen, der Konstruktions-, Kompressions- und Analysealgorithmen sowie eine umfangreiche experimentelle Auswertung und Validierung aller Bestandteile.
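The compression idea behind Complete Call Graphs can be illustrated with a toy example: structurally identical call subtrees (repeated loop iterations, SPMD ranks) are detected and stored only once. The sketch below shares identical subtrees via hashing; it is an assumed simplification that omits timestamps, durations and the lossy merging of merely similar subtrees.

```python
# Toy illustration of subtree sharing in a call tree. A call tree is a nested
# tuple (function_name, (child, child, ...)); identical subtrees collapse to
# one stored node, so 1000 identical iterations cost almost nothing extra.
leaf = ("compute", ())
iteration = ("iterate", (leaf, ("exchange", ()), leaf))
root = ("main", tuple(iteration for _ in range(1000)))   # 1000 identical iterations

unique_nodes = {}

def intern(node):
    """Return one canonical shared instance per distinct subtree (hash-consing)."""
    name, children = node
    key = (name, tuple(id(intern(c)) for c in children))
    return unique_nodes.setdefault(key, node)

def count_nodes(node):
    name, children = node
    return 1 + sum(count_nodes(c) for c in children)

intern(root)
print(f"nodes in the uncompressed call tree: {count_nodes(root)}")
print(f"distinct subtrees stored after sharing: {len(unique_nodes)}")
```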
