1

Data Dependence Analysis and Its Applications on Loop Transformation

Yang, Cheng-Ming 18 July 2000 (has links)
For the past several decades, parallel processing has been an important research subject in computer science. Statistically, most of the execution time of a numerical program is spent in loops. If a parallelizing compiler can restructure loops so that a conventional sequential program exploits the characteristics of a vector or parallel machine, execution efficiency improves greatly. Data dependence analysis is central to a parallelizing compiler because it provides the information needed for loop restructuring: it determines whether a loop can be vectorized or parallelized by analyzing whether the same array element or variable is accessed more than once in a loop (i.e., whether the same memory location is accessed more than once during loop execution).

Despite considerable research on parallelizing compilers in recent years, data dependence analysis remains a bottleneck. Many data dependence tests, such as the Banerjee test, Omega test, I test, and Power test, have been used in the design of parallelizing compilers. In this thesis, we propose a novel exact data dependence test called the Interval Reduced test (IR test). The method reduces the integer bounds of each constraint variable by repeated projection; when the effective region of a variable becomes empty, the constraint containing that variable has no integer solution, and the memory accesses under the constraint are therefore independent. The IR test is only suitable for loops whose bounds are rectangular, triangular, or, under some limited conditions, unknown at compile time. To enhance the dependence-analysis capability of the IR test, we also propose the Extension-IR test, which extends dependence testing of one-dimensional array references to linear subscripts with variable bounds under any given direction vector; the Extension-IR test runs in polynomial time.

When array subscripts are non-linear expressions or too complex for existing dependence tests, we devise a new parallelization algorithm, the non-linear array subscripts test (NLA test), to handle them. Iterations subject to loop-carried dependence are scheduled into different wavefronts, while iterations with no loop-carried dependence are assigned to the same wavefront; based on the wavefront information, the original loop is transformed into parallel code for execution at run time.

Loop interchange is an important restructuring technique for supporting vectorization and parallelization. In this thesis, we propose a technique that efficiently determines whether two non-adjacent loops in a perfect loop nest, or in some imperfect loop nests, can be interchanged. A method for determining whether two arbitrary levels of a perfect loop nest containing IF and GOTO statements can be interchanged is also presented.
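The interval-reduction idea at the heart of the IR test can be made concrete with a short sketch. The following Python function is written for this listing rather than taken from the thesis (the function name, input encoding, and rounding details are assumptions): it tightens each variable's integer bounds by projecting the dependence equation onto that variable, and reports independence when an interval empties.

    from math import ceil, floor

    def ir_test(coeffs, rhs, bounds):
        """Sketch of an interval-reduction dependence test (not the thesis's code).

        Decides whether  sum_k coeffs[k] * x[k] == rhs  can have an integer
        solution with  bounds[k][0] <= x[k] <= bounds[k][1].
        Returns False (accesses are independent) when some variable's
        interval becomes empty, True (dependence cannot be ruled out)
        when the intervals reach a non-empty fixpoint.
        """
        lo = [l for l, _ in bounds]
        hi = [h for _, h in bounds]
        changed = True
        while changed:
            changed = False
            for k, a in enumerate(coeffs):
                if a == 0:
                    continue
                # Project the equation onto x[k]: a * x[k] = rhs - rest,
                # where `rest` ranges over the interval sum of the other terms.
                rest_lo = sum(c * (lo[j] if c > 0 else hi[j])
                              for j, c in enumerate(coeffs) if j != k)
                rest_hi = sum(c * (hi[j] if c > 0 else lo[j])
                              for j, c in enumerate(coeffs) if j != k)
                num_lo, num_hi = rhs - rest_hi, rhs - rest_lo
                if a > 0:
                    new_lo, new_hi = ceil(num_lo / a), floor(num_hi / a)
                else:
                    new_lo, new_hi = ceil(num_hi / a), floor(num_lo / a)
                new_lo, new_hi = max(lo[k], new_lo), min(hi[k], new_hi)
                if new_lo > new_hi:
                    return False  # empty interval: no integer solution
                if (new_lo, new_hi) != (lo[k], hi[k]):
                    lo[k], hi[k] = new_lo, new_hi
                    changed = True
        return True

    # Example: A[i] is written and A[i + 200] is read with 1 <= i <= 100.
    # The dependence equation i_w - i_r == 200 has no solution in bounds:
    print(ir_test([1, -1], 200, [(1, 100), (1, 100)]))  # -> False

One projection pass already shrinks i_w's interval to [201, 300], which is disjoint from the loop bounds [1, 100], so the test declares the two accesses independent.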
2

Modeling Temporal and Spatial Data Dependence with Bayesian Nonparametrics

Ren, Lu January 2010 (has links)
In this thesis, temporal and spatial dependence are incorporated into nonparametric priors to help infer patterns, clusters, or segments in data. In traditional nonparametric mixture models, observations are usually assumed exchangeable, even though dependence often exists that is associated with the space or time at which the data are generated.

Focused on model-based clustering and segmentation, this thesis addresses the issue in different ways for temporal and spatial dependence.

For sequential data analysis, the dynamic hierarchical Dirichlet process is proposed to capture temporal dependence across different groups. The data collected at any time point are represented via a mixture associated with an appropriate underlying model; the statistical properties of data collected at consecutive time points are linked via a random parameter that controls their probabilistic similarity. The new model favors smooth evolutionary clustering while allowing innovative patterns to be inferred. Experimental analysis is performed on music, and the model may also be employed on text data for learning topics.

Spatially dependent data are more challenging to model because of their spatial grid structure and the often large computational cost of analysis. As a nonparametric clustering prior, the logistic stick-breaking process introduced here imposes the belief that proximate data are more likely to be clustered together. Multiple logistic regression functions generate a set of sticks, each dominating a spatially localized segment. The proposed model is employed on image segmentation and speaker diarization, yielding generally homogeneous segments with sharp boundaries.

In addition, we also consider multi-task learning, with each task associated with spatial dependence. For the specific application of co-segmentation of multiple images, a hierarchical Bayesian model called H-LSBP is proposed. By sharing the same mixture atoms across different images, the model infers the similarity between each pair of images and hence can be employed for image sorting. / Dissertation
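To make the spatial prior concrete, here is a small NumPy sketch of a logistic stick-breaking construction in the spirit described above. It is an illustration written for this listing, not code from the dissertation; the weight matrix `W`, intercepts `b`, and truncation level are assumptions.

    import numpy as np

    def logistic_stick_breaking(X, W, b):
        """Illustrative logistic stick-breaking construction (a sketch).

        X : (n, d) array of spatial locations / covariates.
        W : (K-1, d) logistic regression weights, one row per stick.
        b : (K-1,) intercepts.
        Returns an (n, K) array of mixture probabilities per location:
        each of the first K-1 sticks dominates a spatially localized
        region, and the last column absorbs the remaining mass.
        """
        v = 1.0 / (1.0 + np.exp(-(X @ W.T + b)))  # stick proportions v_k(x)
        remaining = np.cumprod(1.0 - v, axis=1)   # prod_{j<=k} (1 - v_j(x))
        pi = np.empty((X.shape[0], v.shape[1] + 1))
        pi[:, 0] = v[:, 0]
        pi[:, 1:-1] = v[:, 1:] * remaining[:, :-1]
        pi[:, -1] = remaining[:, -1]              # leftover mass, last atom
        return pi

    # Toy usage on a 1-D grid: nearby points get similar segment weights.
    rng = np.random.default_rng(0)
    X = np.linspace(-3, 3, 7).reshape(-1, 1)
    W, b = rng.normal(size=(3, 1)), rng.normal(size=3)
    print(logistic_stick_breaking(X, W, b).round(2))  # each row sums to 1

Because each stick proportion is a smooth logistic function of location, proximate points receive similar mixture weights, which is exactly the "proximate data cluster together" belief the abstract describes.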
3

Assisted Abstraction of Virtual Components for the Rapid Verification of Digital Systems

Muhammad, W. 19 December 2008 (has links) (PDF)
Today, the design of IPs (IP: Intellectual Property) can benefit from new symbolic verification techniques: data abstraction and formal static analysis. We believe it is necessary to clearly separate control from data before any automatic verification. We proposed a definition of control based on the intuitive idea that control affects the sequencing of data. Around this idea, the work consisted of building on the semantics of Boolean operators and proposing an extension that expresses this notion of sequencing. This led us to conclude that a perfect separation of control and data is illusory, because the computations depend too heavily on the syntactic representation. To reach our goal, we therefore relied on knowledge provided by the designer: an a priori separation of control inputs from data inputs. From this, we proposed a slicing algorithm to partition the model. An abstraction is then obtained in the case where the control is indeed independent of the data. To speed up simulations, we replaced the bit-level data processing with a functional execution model, while keeping the control part unchanged. This model integrates temporal aspects, which allows it to be plugged into model-checking tools. We introduce the notion of significance as a support for intentional data in IP models. Significance is used to represent Boolean data dependences in order to verify data batches formally and statically. We propose several approximations that implement this new notion.
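The slicing step described above lends itself to a small illustration. The following Python sketch is a generic reconstruction of dependence-based slicing under designer-labeled control inputs, not the thesis's actual tool; the graph encoding, names, and toy design are all assumptions.

    from collections import deque

    def control_slice(deps, control_inputs):
        """Sketch: partition a design into control and data parts by slicing.

        deps : dict mapping each signal to the list of signals it reads.
        control_inputs : set of inputs labeled "control" by the designer.
        Every signal reachable from a control input along dependence edges
        goes into the control part; the remainder is the data part, which
        could be replaced by a functional (non bit-level) execution model.
        """
        # Invert the dependence relation: which signals read each signal?
        readers = {}
        for sig, srcs in deps.items():
            for s in srcs:
                readers.setdefault(s, set()).add(sig)
        # Forward reachability from the designer-labeled control inputs.
        control_part, frontier = set(control_inputs), deque(control_inputs)
        while frontier:
            s = frontier.popleft()
            for r in readers.get(s, ()):
                if r not in control_part:
                    control_part.add(r)
                    frontier.append(r)
        all_signals = set(deps)
        for srcs in deps.values():
            all_signals.update(srcs)
        return control_part, all_signals - control_part

    # Toy IP: `mode` is a control input driving a mux select; a, b are data.
    deps = {
        "sel": ["mode"],        # decode logic
        "sum": ["a", "b"],      # datapath
        "out": ["sel", "sum"],  # mux output: control and data meet here
    }
    ctrl, data = control_slice(deps, {"mode"})
    print(sorted(ctrl))  # ['mode', 'out', 'sel']
    print(sorted(data))  # ['a', 'b', 'sum']

Note how `out` lands in the control part even though it also carries data: exactly the kind of entanglement that, as the abstract observes, makes a perfect control/data separation illusory and motivates relying on the designer's a priori labeling.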
