261 |
An Automatic Code Generation Tool For Partitioned Software in Distributed Computing. Singh, Neeta S, 30 March 2005 (has links)
In a large class of distributed embedded systems, most of the code generation models in use today target object-oriented applications. Distributed computing with procedural-language applications is challenging because there are no compatible code generators for testing partitioned programs mapped onto a multi-processor system. In this thesis, we design a code generator that produces procedural-language code for a distributed embedded system. Unpartitioned programs, along with the partition primitives, are converted into independently executable concrete implementations. The process consists of two steps: first, translating the primitives of the unpartitioned program into equivalent code clusters, and second, scheduling the implementations of these code clusters according to the data dependencies inherent in the unpartitioned program. Communication and scheduling of the partitioned programs require the original source code to be reverse engineered. A reverse engineering tool creates a metadata table describing the program elements and dependency trees. The gathered data is used by the Parallel Virtual Machine (PVM) message-passing system for communication between the partitioned programs in the distributed environment. The proposed code generation model has been implemented in C, and experimental results are presented for various test cases. Further, metrics are developed for assessing the quality of the results, taking into consideration correctness and execution time.
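As a rough illustration of the scheduling step, the sketch below (in Python, with invented cluster names; the thesis's generator emits C) topologically orders code clusters by their data dependencies, so that clusters that become ready together have no mutual dependencies and could be dispatched to different processors:

```python
# A minimal sketch of dependency-driven scheduling of code clusters,
# not the thesis's tool. Cluster names and the dependency table are
# hypothetical.
from graphlib import TopologicalSorter  # Python 3.9+

# deps[c] = set of clusters whose outputs cluster c consumes
deps = {
    "read_input": set(),
    "filter":     {"read_input"},
    "transform":  {"read_input"},
    "merge":      {"filter", "transform"},
}

ts = TopologicalSorter(deps)
ts.prepare()
# Clusters reported ready in the same wave are mutually independent;
# here we just print the waves instead of dispatching to processors.
while ts.is_active():
    ready = list(ts.get_ready())
    print("dispatch in parallel:", ready)
    ts.done(*ready)
```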
|
262 |
Planar segmentation for Geometric Reverse Engineering using data from a laser profile scanner mounted on an industrial robot. Rahayem, Mohamed, January 2008 (has links)
Laser scanners combined with devices for accurate orientation, such as Coordinate Measuring Machines (CMMs), are often used in Geometric Reverse Engineering (GRE) to measure point data. The industrial robot as an orientation device has relatively low accuracy but offers the advantages of being numerically controlled, fast, flexible, comparatively cheap, and compatible with industrial environments. It is therefore of interest to investigate whether it can be used in this application.
This thesis describes a measuring system consisting of a laser profile scanner mounted on an industrial robot with a turntable. It also gives an introduction to GRE and describes an automatic GRE process using this measuring system. The thesis presents a detailed accuracy analysis, supported by experiments, showing how 2D profile data can be used to achieve a higher accuracy than the basic accuracy of the robot. The core topic of the thesis is the investigation of a new technique for planar segmentation. The new method is implemented in the GRE system and compared with an implementation of a more traditional method.
Results from practical experiments show that the new method is much faster while being equally accurate or better.
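For readers unfamiliar with planar segmentation, the following sketch shows a traditional region-growing baseline of the kind the thesis compares against, not the thesis's new method; the thresholds, neighbourhood radius and toy data are all invented:

```python
# Schematic planar segmentation by region growing over a point cloud.
import numpy as np

def fit_plane(pts):
    """Least-squares plane through pts: returns (centroid, unit normal)."""
    c = pts.mean(axis=0)
    # Normal = direction of least variance (last right-singular vector).
    _, _, vt = np.linalg.svd(pts - c)
    return c, vt[-1]

def grow_planar_region(points, seed, dist_tol=0.5, radius=2.0):
    """Grow a planar segment outward from a seed point index."""
    # Initialise with the seed's neighbourhood so the first fit is stable.
    d0 = np.linalg.norm(points - points[seed], axis=1)
    region = set(np.where(d0 < radius)[0].tolist())
    frontier = set(region)
    while frontier:
        c, n = fit_plane(points[list(region)])
        new = set()
        for i in frontier:
            # Candidate neighbours within `radius` of a frontier point.
            near = np.where(np.linalg.norm(points - points[i], axis=1) < radius)[0]
            for j in near:
                if j not in region and abs((points[j] - c) @ n) < dist_tol:
                    new.add(int(j))
        region |= new
        frontier = new
    return sorted(region)

# Toy cloud: a noisy plane plus outliers floating above it.
rng = np.random.default_rng(0)
plane = np.c_[rng.uniform(0, 10, (200, 2)), rng.normal(0, 0.05, 200)]
outliers = rng.uniform(0, 10, (20, 3)) + [0, 0, 5]
cloud = np.vstack([plane, outliers])
print(len(grow_planar_region(cloud, seed=0)), "points in planar segment")
```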
|
263 |
Shape interpolation (L'interpolation de formes). Da, Tran Kai Frank, 21 January 2002 (has links) (PDF)
Many computer applications need to interpret sampled data and provide a representation, as accurate as possible, of the objects from which the data originate. Examples include medical imaging, reverse engineering, virtual reality applications, and special effects for film. The problem studied in this thesis can be stated as follows: given a set S of 3D points sampled on an object O, provide a geometric model of the surface bounding O. First, we detail the implementation of a classical computational geometry solution within the CGAL library (http://www.cgal.org/): the modules developed, alpha shapes in dimensions 2 and 3, are now an integral part of the library and have been distributed since release 2.3. Next, we present a new approach to 3D reconstruction from point clouds, whose principle is to unfold an orientable surface over the data. This method proves very effective, and in particular is able to provide answers in difficult cases; it also offers excellent performance and can handle large data sets. Finally, we describe a new 3D reconstruction method for points organized in cross-sections. It is an interpolation method based on natural neighbors, a system of local barycentric coordinates. It unites two major trends: it provides a functional definition of the reconstructed object, C^1 almost everywhere, while relying only on discrete geometric structures of the Delaunay triangulation type. Interpolating parallel slices moreover yields an efficient solution, since all computations are carried out in dimension 2.
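The alpha-shape construction mentioned above can be illustrated compactly: keep exactly the Delaunay triangles whose circumradius falls below a threshold alpha. The sketch below uses SciPy's 2D Delaunay triangulation as a stand-in (the CGAL Alpha_shapes packages are the actual implementation described in the thesis), with invented sample data:

```python
# Toy 2D alpha shape: filter Delaunay triangles by circumradius.
import numpy as np
from scipy.spatial import Delaunay

def alpha_shape_triangles(points, alpha):
    """Return the Delaunay triangles of `points` with circumradius < alpha."""
    tri = Delaunay(points)
    keep = []
    for simplex in tri.simplices:
        p, q, r = points[simplex]
        # Side lengths and area give the circumradius R = abc / (4 * area).
        a = np.linalg.norm(q - r)
        b = np.linalg.norm(p - r)
        c = np.linalg.norm(p - q)
        area = 0.5 * abs((q - p)[0] * (r - p)[1] - (q - p)[1] * (r - p)[0])
        if area > 1e-12 and a * b * c / (4.0 * area) < alpha:
            keep.append(simplex)
    return np.array(keep)

rng = np.random.default_rng(1)
# Sample an annulus: a convex hull would fill the hole, an alpha shape won't.
theta = rng.uniform(0, 2 * np.pi, 400)
radius = rng.uniform(1.0, 1.5, 400)
pts = np.c_[radius * np.cos(theta), radius * np.sin(theta)]
print(len(alpha_shape_triangles(pts, alpha=0.3)), "triangles kept")
```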
|
264 |
Completeness of Fact Extractors and a New Approach to Extraction with Emphasis on the Refers-to Relation. Lin, Yuan, 07 August 2008 (has links)
This thesis deals with fact extraction, which analyzes source code (and sometimes related artifacts) to produce extracted facts about the code. These facts may, for example, record where in the code variables are declared and where they are used, as well as related information. These extracted facts are typically used in software reverse engineering to reconstruct the design of the program.
This thesis has two main parts, each of which deals with a formal approach to fact extraction. Part 1 of the thesis deals with the question: how can we demonstrate that a fact extractor actually does its job? That is, does the extractor produce the facts that it is supposed to produce? This thesis builds on the concept of semantic completeness of a fact extractor, as defined by Tom Dean et al., and further defines source, syntax and compiler completeness. One of the contributions of this thesis is to show that in certain important cases (when the extractor is deterministic and its front end is idempotent), there is an efficient algorithm to determine whether the extractor is compiler complete. This result is surprising, considering that in general it is undecidable whether two programs are semantically equivalent, and source code and its corresponding extracted facts are each essentially programs that must be proved equivalent, or at least sufficiently similar.
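As a toy illustration of the two hypotheses, and emphatically not the thesis's actual algorithm, the sketch below pairs a deterministic stand-in extractor with an idempotent stand-in front end and mechanically compares the facts extracted before and after the front end runs; all three functions are invented stand-ins:

```python
# Toy harness around the definitions: E is a deterministic extractor,
# F an idempotent front end (F(F(p)) == F(p)). Both are stand-ins.
def front_end(source: str) -> str:
    """Stand-in front end: normalises whitespace (idempotent)."""
    return " ".join(source.split())

def extract_facts(source: str) -> frozenset:
    """Stand-in extractor: records which identifiers occur."""
    return frozenset(tok for tok in source.replace(";", " ").split()
                     if tok.isidentifier())

def facts_agree(program: str) -> bool:
    # Sanity-check the idempotence hypothesis, then compare fact sets.
    assert front_end(front_end(program)) == front_end(program)
    return extract_facts(program) == extract_facts(front_end(program))

print(facts_agree("int x ;  x = y ;"))  # True for this toy pair
```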
The larger part of the thesis, Part 2, presents Algebraic Refers-to Analysis (ARA), a new approach to fact extraction with emphasis on the Refers-to relation. ARA provides a framework for specifying fact extraction, based on a three-step pipeline: (1) basic (lexical and syntactic) extraction, (2) a normalization step and (3) a binding step.
For practical programming languages, these three steps are repeated, in stages and phases, until the Refers-to relation is computed. During the writing of this thesis, ARA pipelines for C, Java, C++, Fortran, Pascal and Ada have been designed. A prototype fact extractor for the C language has been created.
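A schematic rendering of the pipeline, with Python sets standing in for the relations of the thesis's relational algebra (scope names and facts here are invented), might look as follows: basic extraction yields declaration and use facts, normalization yields a scope-nesting relation, and binding resolves each use to the innermost enclosing declaration:

```python
# Step 1, basic extraction: (scope, name) facts for declarations and uses.
decls = {("global", "x"), ("f", "y")}
uses  = {("f", "x"), ("f", "y")}
# Step 2, normalization: a parent relation giving scope nesting.
parent = {("f", "global")}

def enclosing(scope):
    """Yield a scope and its ancestors, innermost first."""
    yield scope
    for s, p in parent:
        if s == scope:
            yield from enclosing(p)

# Step 3, binding: each use refers to the innermost enclosing declaration.
refers_to = set()
for scope, name in uses:
    for s in enclosing(scope):
        if (s, name) in decls:
            refers_to.add(((scope, name), (s, name)))
            break

print(sorted(refers_to))
# [(('f', 'x'), ('global', 'x')), (('f', 'y'), ('f', 'y'))]
```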
Validating ARA means demonstrating that ARA pipelines satisfy programming language standards such as the ISO C++ standard. In other words, we show that the ARA phases (stages and formulas) are correctly transcribed from the rules in the language standard.
Compared with existing approaches such as attribute grammars, ARA has the following advantages. First, ARA formulas are concise, elegant and, more importantly, insightful; as a result, they lead to some interesting discoveries about the programming languages. Second, ARA is validated on the basis of set theory and relational algebra, which is more reliable than exhaustive testing. Finally, ARA formulas are supported by existing software tools such as database management systems and relational calculators.
Overall, the contributions of this thesis include (1) the concept of a hierarchy of completeness and the automatic testing of completeness, (2) the use of the relational data model in fact extraction, (3) Algebraic Refers-to Analysis (ARA) itself, and (4) the discovery of some interesting facts about programming languages.
|
265 |
Supporting Framework Use via Automatically Extracted Concept-Implementation Templates. Heydarnoori, Abbas, January 2009 (has links)
Object-oriented application frameworks allow the reuse of both software design and code and are one of the most effective reuse technologies available today. Frameworks provide domain-specific concepts, which are generic units of functionality. Framework-based applications are constructed by writing completion code that instantiates these concepts. The instantiation of such concepts requires implementation steps in the completion code, such as subclassing framework-provided classes, implementing interfaces, and calling appropriate framework services. Unfortunately, many existing frameworks are difficult to use because of their large and complex APIs and often incomplete user documentation. To cope with this problem, application developers often use existing framework applications as a guide. While existing applications contain valuable examples of concept implementation steps, locating them in the application code is often challenging.
To address this issue, this dissertation introduces the notion of concept implementation templates, which summarize the necessary concept implementation steps, together with a technique named FUDA (Framework API Understanding through Dynamic Analysis) that automatically extracts such templates from runtime information collected when a concept is invoked in two or more different contexts in one or more sample applications. The experimental evaluation of FUDA with twelve realistic concepts on top of four widely used frameworks suggests that the technique is effective in producing quality implementation templates for a given concept, with high precision and recall, from only two sample applications and execution scenarios. Moreover, a user study with twelve subjects observed that the choice of templates vs. documentation had much less impact on development time than the concept complexity.
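The core intuition can be caricatured in a few lines: API calls common to traces of the same concept from two different contexts form a candidate template, while context-specific calls drop out. The traces below are invented, and the real FUDA technique involves considerably more trace processing:

```python
# Two invented call traces of the same concept ("show a dialog") from
# two different sample applications.
trace_app1 = ["Dialog.__init__", "Dialog.setTitle", "Button.__init__",
              "Dialog.add", "Logger.debug", "Dialog.show"]
trace_app2 = ["Dialog.__init__", "Dialog.setIcon", "Button.__init__",
              "Dialog.add", "Dialog.show"]

# Calls present in both traces, kept in the order of the first trace,
# approximate the concept's implementation template.
common = set(trace_app1) & set(trace_app2)
template = [call for call in trace_app1 if call in common]
print(template)
# ['Dialog.__init__', 'Button.__init__', 'Dialog.add', 'Dialog.show']
```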
|
268 |
An Accelerated Aerodynamic Optimization Approach For A Small Turbojet Engine Centrifugal Compressor. Ceylanoglu, Arda, 01 December 2009 (links) (PDF)
Centrifugal compressors are widely used in propulsion technology. As an important part of turbo-engines, centrifugal compressors increase the pressure of the air and let the pressurized air flow into the combustion chamber. The developed pressure and the flow characteristics mainly affect the thrust generated by the engine.
The design of centrifugal compressors is a challenging and time-consuming process involving several tests, computational fluid dynamics (CFD) analyses and optimization studies. In this study, a methodology for the geometry optimization and CFD analysis of the centrifugal compressor of an existing small turbojet engine is introduced, with increased pressure ratio as the objective. The purpose is to optimize the impeller geometry of a centrifugal compressor such that the pressure ratio at the maximum speed of the engine is maximized. The methodology provides guidance on the geometry optimization of centrifugal impellers supported by CFD analysis outputs.
The original geometry of the centrifugal compressor is obtained by means of optical scanning. Then, a parametric model of the 3-D geometry is created using CAD software. A design of experiments (DOE) procedure is applied to the geometrical parameters in order to decrease the computational effort and guide the optimization process. All the designs gathered through the DOE study are modelled in the CAD software and meshed for CFD analyses, which are carried out to investigate the resulting pressure ratio and flow characteristics.
The results of the CFD studies are used within an artificial neural network methodology to create a fit between the geometric parameters (inputs) and the pressure ratio (output). The resulting fit is then used in the optimization study, and a centrifugal compressor with a higher pressure ratio is obtained through a single-objective optimization process supported by the design of experiments methodology.
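A minimal sketch of this surrogate-based step is given below, using scikit-learn's MLPRegressor as the neural network and SciPy's optimizer; the synthetic response function, parameter count and bounds are all invented stand-ins for the DOE-sampled CFD pressure ratios:

```python
# Surrogate fit over a pretend DOE table, then optimization of the fit.
import numpy as np
from sklearn.neural_network import MLPRegressor
from scipy.optimize import minimize

rng = np.random.default_rng(2)
# Pretend DOE table: 40 designs, 3 normalised geometry parameters in [0, 1].
X = rng.uniform(0, 1, (40, 3))
# Synthetic stand-in for the CFD pressure ratio at each design point.
y = 3.0 + 0.8 * X[:, 0] - 0.5 * (X[:, 1] - 0.6) ** 2 + 0.2 * X[:, 2]

surrogate = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000,
                         random_state=0).fit(X, y)

# Maximise the predicted pressure ratio over the parameter box.
res = minimize(lambda p: -surrogate.predict(p.reshape(1, -1))[0],
               x0=np.full(3, 0.5), bounds=[(0, 1)] * 3)
print("best parameters:", res.x, "predicted pressure ratio:", -res.fun)
```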
|
269 |
Identification of topological and dynamic properties of biological networks through diverse types of data. Guner, Ugur, 23 May 2011 (has links)
It is becoming increasingly important to understand biological networks in order to understand complex diseases, identify novel, safer protein targets for therapies and design efficient drugs. 'Systems biology' has emerged as a discipline to uncover biological networks from genomic data, and computational methods for identifying these networks have become immensely important, growing in number in parallel with the increasing amount of genomic data.
In this thesis we introduce novel computational methods for identifying topological and dynamic properties of biological networks. Biological data is available in various forms. Experimental data on the interactions between biological components provides a connectivity map of the system as a network of interactions, while time series or steady-state experiments on concentrations or activity levels of biological constituents give a dynamic picture of this web of interactions. Biological data is usually scarce relative to the number of components in the networks and subject to high levels of noise. The data is available from various resources, but it can have missing information and inconsistencies. Hence it is critical to design intelligent computational methods that can incorporate data from different resources while accounting for the noise component.
This thesis is organized as follows. Chapters 1 and 2 introduce the basic concepts for biological network types; Chapter 2 also gives background on biochemical network identification, its data types and computational approaches for reverse engineering these networks. Chapter 3 introduces our novel constrained total least squares approach for recovering network topology and dynamics from noisy measurements; we show our method to be superior to existing reverse engineering methods. Chapter 4 extends Chapter 3 with a Bayesian parameter estimation algorithm capable of incorporating noisy time series and prior information on the connectivity of the network. The quality of the prior information is critical to inferring the dynamics of the networks, and the major drawback of prior connectivity data is the presence of false negatives (missing links); hence, powerful link prediction methods are necessary to identify missing links. At this juncture, a novel link prediction method is introduced in Chapter 5, capable of predicting missing links in connectivity data; an application of this method to protein-protein association data from a literature mining database is demonstrated. Chapter 6 presents a further extension into link prediction applications: the prediction of drug adverse effects. Adverse effects are the major reason for the failure of drugs in the pharmaceutical industry, so it is very important to identify potential toxicity risks early in the drug development process. Motivated by this, Chapter 6 introduces our computational framework that integrates drug-target, drug-side effect, pathway-target and mouse phenotype-mouse gene data to predict side effects. Chapter 7 gives the significant findings and overall achievements of the thesis, and suggests subsequent steps that can follow the work presented here to improve network prediction methods.
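As a bare-bones illustration of the regression step in Chapter 3, the sketch below recovers a connectivity matrix from a simulated noisy time series using ordinary least squares; the thesis's constrained total least squares additionally accounts for noise in the regressors, which plain least squares ignores. The network and data here are synthetic:

```python
# Infer linear network dynamics x(t+1) = A x(t) + noise from time series.
import numpy as np

rng = np.random.default_rng(3)
A_true = np.array([[ 0.9, 0.0, 0.1],
                   [-0.2, 0.8, 0.0],
                   [ 0.0, 0.3, 0.7]])   # hidden connectivity to recover

# Simulate a noisy time series driven by A_true.
T, n = 200, 3
X = np.zeros((T, n))
X[0] = rng.normal(size=n)
for t in range(T - 1):
    X[t + 1] = A_true @ X[t] + rng.normal(0, 0.05, n)

# Least-squares estimate: solve X[1:] = X[:-1] @ B, where B = A transposed.
A_hat, *_ = np.linalg.lstsq(X[:-1], X[1:], rcond=None)
print(np.round(A_hat.T, 2))   # approximately recovers A_true
```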
|
270 |
Three dimensional objects: visualization and deformation algorithms (Trimačiai objektai: atvaizdavimo ir deformacijos algoritmai). Žukas, Andrius, 11 August 2008 (has links)
The chosen theme of this Master of Science thesis is "Three dimensional objects: visualization and deformation algorithms". It considers surface reconstruction from point clouds and the possibilities of applying surface deformation algorithms. During the analysis phase we found that the main problem with algorithms for surface reconstruction from scanned point clouds is their lack of speed. This thesis therefore proposes a reverse engineering method based on 2D Delaunay triangulation. The method divides the point cloud into several parts, maps the points of each part to a plane, computes a 2D Delaunay triangulation, and maps the resulting mesh back onto the point cloud. We also discuss theoretical possibilities of applying a known algorithm for surface deformation. During the implementation phase we found that our algorithms work as expected, producing results more quickly than other previously proposed methods. We also observed that 2D Delaunay triangulation is preferable for very large point clouds, while 3D Delaunay triangulation is preferable for point clouds containing no more than approximately 2000 points.
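A condensed sketch of this pipeline for a single point-cloud part might look like the following (partitioning of the cloud is omitted, and the toy patch is invented): fit a projection plane, triangulate in 2D, and reuse the triangle indices on the original 3D points.

```python
# Project a point-cloud part to a plane, triangulate in 2D, lift back to 3D.
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(4)
# Toy cloud part: a gently curved patch z = f(x, y).
xy = rng.uniform(-1, 1, (500, 2))
cloud = np.c_[xy, 0.2 * xy[:, 0] ** 2 + 0.1 * xy[:, 1]]

# Map to 2D: project onto the two principal directions of the cloud.
centred = cloud - cloud.mean(axis=0)
_, _, vt = np.linalg.svd(centred, full_matrices=False)
uv = centred @ vt[:2].T           # 2D parameter coordinates

tri = Delaunay(uv)                # 2D Delaunay triangulation
# "Map back": each 2D triangle indexes three original 3D points.
mesh = cloud[tri.simplices]       # shape: (n_triangles, 3 vertices, xyz)
print(mesh.shape[0], "triangles over", len(cloud), "points")
```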
|