61 |
A Stochastic Petri Net Reverse Engineering Methodology for Deep Understanding of Technical Documents. Rematska, Giorgia, 06 June 2018
No description available.
|
62 |
Decompiling Go: Using metadata to improve decompilation readability / Dekompilering av Go: Att använda metadata för att förbättra dekompileringens läsbarhet. Grenfeldt, Mattias, January 2023
Malware written in Go is on the rise, and yet, tools for investigating Go programs, such as decompilers, are limited. A decompiler takes a compiled binary and tries to recover its source code. Go is a high-level language that requires runtime metadata to implement many of its features, such as garbage collection and polymorphism. While decompilers have to some degree used this metadata to benefit manual reverse engineering, there is more that can be done. To remedy this, we extend the decompiler Ghidra with improvements that increase the readability of the decompilation of Go binaries by using runtime metadata. We make progress towards enabling Ghidra to represent Go's assembly conventions. We implement multiple analyses: some which reduce noise for the reverse engineer to filter through, some which enhance the decompilation by adding types, etc. The analyses are a mix of reimplementations of previous work and novel improvements. The analyses use metadata known beforehand but in new ways: applying data types at polymorphic function call sites, and using function names to import signatures from source code. We also discover previously unused metadata, which points to promising future work. Our experimental evaluation compares our extension against previously existing extensions for decompilers using multiple readability metrics. Our extension improves on metrics measuring the amount of code, such as lines of code. It also decreases the number of casts. However, the extension performs worse on other metrics, producing more variables and glue functions. In conclusion, our extension produces more compact code while also increasing its informativeness for the reverse engineer.
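To make the metadata idea above concrete, here is a minimal Python sketch that is not part of the thesis's Ghidra extension: it assumes a hypothetical table of function names recovered from a Go binary's runtime function table and matches them against known standard-library signatures, the kind of signature import the analyses perform. The addresses, names and signature map are invented for illustration.

```python
# Minimal sketch (hypothetical data): match function names recovered from a
# Go binary's runtime metadata against known standard-library signatures so a
# decompiler could re-apply them. Real tooling would parse the metadata from
# the binary; here the recovered table is hard-coded for illustration.

# (address, name) pairs as they might be recovered from the function table
recovered_functions = [
    (0x401000, "runtime.newobject"),
    (0x4570A0, "fmt.Println"),
    (0x4623C0, "main.handleRequest"),   # user code: no known signature
]

# Signatures harvested from Go documentation (assumed, abbreviated)
known_signatures = {
    "fmt.Println": "func(a ...any) (n int, err error)",
    "runtime.newobject": "func(typ *_type) unsafe.Pointer",
}

def annotate(functions, signatures):
    """Yield (address, name, signature-or-None) for each recovered function."""
    for addr, name in functions:
        yield addr, name, signatures.get(name)

for addr, name, sig in annotate(recovered_functions, known_signatures):
    note = sig if sig else "<no imported signature>"
    print(f"{addr:#x}  {name:<22} {note}")
```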
|
63 |
Betrachtungen zur Skelettextraktion umformtechnischer Bauteile (Considerations on skeleton extraction for formed parts). Kühnert, Tom; Brunner, David; Brunnett, Guido, January 2011
Skeleton extraction is an important tool, particularly in shape analysis. Within the research project "Extraktion fertigungsrelevanter Merkmale aus 3D-Daten umformtechnischer Bauteile zur featurebasierten Fertigungsprozessgestaltung" (extraction of manufacturing-relevant features from 3D data of formed parts for feature-based manufacturing process design), a cooperation between the Chair of Graphische Datenverarbeitung und Visualisierung at Technische Universität Chemnitz and the Fraunhofer-Institut für Werkzeugmaschinen und Umformtechnik Chemnitz, skeleton extraction was implemented for feature recognition. This document first gives an insight into the fundamental methods and problems of such an extraction. The results of several research focus areas, which arose from the bulk-formed parts under investigation, are presented. Of particular interest is the robust extraction of curve skeletons for parts with a non-cylindrical main shape, as well as for parts with secondary form elements. Furthermore, post-processing and evaluation of the curve skeleton, as well as related research work and results, are discussed.

Contents:
1. Introduction
1.1. Relation to the research project
1.2. Objectives and organisation
2. Development of fundamental algorithms
2.1. Voxelisation
2.1.1. Basic algorithmic idea
2.1.2. Quality and runtime of the voxelisation
2.1.3. Requirements on the geometry
2.2. Euclidean distance transform
2.3. Vector/potential fields in the voxel grid
2.4. Divergence
2.5. Visualisation
2.6. Filtering
2.7. Skeletonisation
2.7.1. Sequential thinning
2.7.2. Parallel thinning
2.8. Invariance to rotation and noise
3. Research focus areas
3.1. Problem definition
3.2. Related work
3.3. Solution approaches within skeleton extraction
3.4. Solution approaches within geometry processing
3.5. Summary
4. Skeleton processing
4.1. Graph generation
4.2. Post-processing steps
4.3. Object analysis based on the curve skeleton
4.3.1. Profile section
4.3.2. Curvature computation
4.3.3. Euclidean distance to the boundary
4.3.4. Mass determination
4.4. Interface definition
5. Other research results and considerations
5.1. Acceleration
5.2. Larger kernel
5.3. Examination of related research: Level Set Graph
5.4. Examination of related research: shape abstraction
5.5. Alignment of the geometry
5.6. Analysis of the geometry
Appendices
A. Shape understanding, ligature instability
B. Hierarchical space subdivision and feature size
C. 2D/3D investigations of the GVF
D. Preservation of surfaces
E. Examples of automatically skeletonised objects
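As a hedged illustration of the pipeline in chapter 2 (voxelisation, Euclidean distance transform, thinning), the following Python sketch runs analogous steps on a synthetic 2D stand-in for a voxelised part; the shape is invented and the thinning uses scikit-image's medial-axis routine rather than the report's own sequential or parallel thinning.

```python
# A minimal 2D stand-in (assumed synthetic shape) for the report's pipeline:
# rasterise a part, compute the Euclidean distance transform, and thin the
# result to a skeleton.
import numpy as np
from scipy.ndimage import distance_transform_edt
from skimage.morphology import medial_axis

# "Voxelise" a simple L-shaped part on a 2D grid (True = inside the part)
part = np.zeros((60, 60), dtype=bool)
part[10:50, 10:22] = True      # vertical bar of the L
part[38:50, 10:50] = True      # horizontal bar of the L

# Euclidean distance transform: distance of every interior cell to the boundary
dist = distance_transform_edt(part)

# Thinning: the medial axis approximates the curve skeleton of the 2D shape
skeleton, skel_dist = medial_axis(part, return_distance=True)

print("interior cells:", int(part.sum()))
print("skeleton cells:", int(skeleton.sum()))
print("max inscribed radius along skeleton:", float(skel_dist.max()))
```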
|
64 |
Ontological approach for database integration. Alalwan, Nasser Alwan, January 2011
Database integration is one of the research areas that has gained a lot of attention from researchers. Its goal is to represent the data from different database sources in one unified form. To reach database integration, two obstacles have to be faced: the first is the distribution of data, and the second is heterogeneity. The Web addresses the distribution problem, and for heterogeneity there are many approaches that can be used to solve the database integration problem, such as data warehouses and federated databases. The problem with these two approaches is the lack of semantics. Therefore, our approach exploits the Semantic Web methodology. The hybrid ontology method can be used to solve the database integration problem. In this method two elements are available, the source (database) and the domain ontology; however, the local ontology is missing. In fact, to ensure the success of this method the local ontologies should be produced. Our approach obtains the semantics from the logical model of the database to generate a local ontology. Then, validation and enhancement are acquired from the semantics obtained from the conceptual model of the database. Our approach is therefore applied in two phases: generation and validation-enrichment.

In the generation phase, we utilise reverse engineering techniques in order to capture the semantics hidden in the SQL definitions. The approach then reproduces the logical model of the database. Finally, our transformation system is applied to generate an ontology. In our transformation system, all the concepts of classes, relationships and axioms are generated. Firstly, the process of class creation contains many rules participating together to produce classes. Our rules succeed in solving problems such as fragmentation and hierarchy. They also eliminate the superfluous classes arising from multi-valued attribute relations and take care of neglected cases such as relationships with additional attributes. The final class creation rule handles generic relation cases. The rules for relationships between concepts are generated while eliminating relationships between integrated concepts. Finally, there are many rules that consider relationship and attribute constraints, which are transformed into axioms in the ontological model. The formal rules of our approach are domain independent; the approach also produces a generic ontology that is not restricted to a specific ontology language. The rules consider the gap between the database model and the ontological model; therefore, some database constructs will not have an equivalent in the ontological model.

The second phase consists of the validation and enrichment processes. The best way to validate the transformation result is to use the semantics obtained from the conceptual model of the database. In the validation phase, the domain expert captures the missing or superfluous concepts (classes or relationships). In the enrichment phase, the generalisation method can be applied to classes that share common attributes. Also, complex or composite attributes can be represented as classes. We implement the transformation system in a tool called SQL2OWL in order to show the correctness and functionality of our approach. The evaluation of our system showed the success of the proposed approach. The evaluation uses several techniques. Firstly, a comparative study is conducted between the results produced by our approach and those of similar approaches. The second evaluation technique is a weighted scoring system which specifies the criteria that affect the transformation system. The final evaluation technique is a scoring scheme. We consider the quality of the transformation system by applying a compliance measure in order to show the strength of our approach compared to existing approaches. Finally, the measures of success that our approach considers are system scalability and completeness.
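The following Python sketch is not the SQL2OWL tool itself; it only illustrates, on an invented two-table schema, the kind of rule described above: tables become classes, plain columns become datatype properties, and foreign keys become object properties.

```python
# A simplified, hypothetical illustration (not the thesis's SQL2OWL tool) of the
# core transformation idea: tables -> classes, columns -> datatype properties,
# foreign keys -> object properties between classes.

toy_schema = {
    "Employee": {
        "columns": {"emp_id": "INTEGER", "name": "VARCHAR", "dept_id": "INTEGER"},
        "primary_key": "emp_id",
        "foreign_keys": {"dept_id": "Department"},
    },
    "Department": {
        "columns": {"dept_id": "INTEGER", "title": "VARCHAR"},
        "primary_key": "dept_id",
        "foreign_keys": {},
    },
}

XSD = {"INTEGER": "xsd:integer", "VARCHAR": "xsd:string"}

def schema_to_owl(schema):
    """Emit Turtle-style statements for a naive relational-to-ontology mapping."""
    lines = []
    for table, meta in schema.items():
        lines.append(f":{table} rdf:type owl:Class .")
        for col, sql_type in meta["columns"].items():
            if col in meta["foreign_keys"]:
                target = meta["foreign_keys"][col]
                lines.append(f":has{target} rdf:type owl:ObjectProperty ; "
                             f"rdfs:domain :{table} ; rdfs:range :{target} .")
            else:
                lines.append(f":{col} rdf:type owl:DatatypeProperty ; "
                             f"rdfs:domain :{table} ; rdfs:range {XSD[sql_type]} .")
    return "\n".join(lines)

print(schema_to_owl(toy_schema))
```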
|
65 |
Constraint based program transformation theory. Natelberg, Stefan, January 2009
The FermaT Transformation Engine is an industrial-strength toolset for the migration of Assembler- and Cobol-based legacy systems to C. It uses an intermediate language and several dozen mathematically proven transformations to raise the abstraction level of source code or to restructure and simplify it as needed. The actual program transformation process with the aid of this toolset is semi-automated, which means that a maintainer has not only to apply one transformation after another but also to evaluate each transformation result. This can be a very difficult task, especially if the given program is very large and if many transformations have to be applied. Moreover, it cannot be assured that a transformation target will be achieved, because this relies on the decisions taken by the respective maintainer, which in turn are based on his personal knowledge. Even a small mistake can lead to a failure of the entire program transformation process, which usually causes an extensive and time-consuming backtrack. Furthermore, it is difficult to compare the results of different transformation sequences applied to the same program. To put it briefly, the manual approach is inflexible and often hard to use, especially for maintainers with little knowledge of transformation theory.

There already exist different approaches to solve these well-known problems and to simplify the accessibility of the FermaT Transformation Engine. One recently presented approach is based on a particular prediction technique, whereas another is based on various search tactics. Both intend to automate the program transformation process. However, the approaches solve some problems but not without introducing others. On the one hand, the prediction-based approach is very fast but often not able to provide a transformation sequence which achieves the defined program transformation targets. The results depend heavily on the algorithms which analyse the given program and on the knowledge which is available to make the right decisions during the program transformation process. On the other hand, the search-based approach usually finds suitable results in terms of the given target, but only in combination with small programs and short transformation sequences. It is simply not possible to perform an extensive search on a large-scale program in reasonable time.

To solve the described problems and to extend the operating range of the FermaT Transformation Engine, this thesis proposes a constraint-based program transformation system. The approach is semi-automated and provides the possibility to outline an entire program transformation process on the basis of constraints and transformation schemes. In this context, a constraint is a condition which has to be satisfied at some point during the application of a transformation sequence, whereas a transformation scheme defines the search space, which consists of a set of transformation sequences. After the constraints and the scheme have been defined, the system uses a unique knowledge-based prediction technique followed by a particular search tactic to reduce the number of transformation sequences within the search space and to find a transformation sequence which is applicable and which satisfies the given constraints. Moreover, it is possible to describe these transformation schemes with the aid of a formal language.

The presented thesis will provide a definition and a classification of constraints for program transformations. It will discuss capabilities and effects of transformations and their value in defining transformation sets. The modelling of program transformation processes with the aid of transformation schemes, which in turn are based on finite automata, will be presented, and the inclusion of constraints in these schemes will be explained. A formal language to describe transformation schemes will be introduced and the automated construction of these schemes from the language will be shown. Furthermore, the thesis will discuss a unique prediction technique which uses the capabilities of transformations, an evaluation of the transformation sequences on the basis of transformation effects, and a particular search tactic which is related to linear and tree search tactics. The practical value of the presented approach will be demonstrated with the aid of three medium-scale case studies. The first one will show how to raise the abstraction level, whereas the second will show how to decrease the complexity of a particular program. The third will show how to increase the execution speed of a selected program. Moreover, the work will be summarised and evaluated on the basis of the research questions. Its limitations will be disclosed and some suggestions for future work will be made.
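The following toy Python sketch illustrates the idea of searching a space of transformation sequences for one that satisfies a constraint; the transformation names, their effects on a crude program metric, and the constraint itself are all invented, standing in for FermaT's real transformations and the thesis's knowledge-based prediction and search tactics.

```python
# A toy model (all transformation names, effects and the constraint are invented
# for illustration) of the idea above: a scheme spans a search space of
# transformation sequences, and a constraint must hold after a sequence is applied.
from itertools import product

# Each "transformation" maps a crude program metric to a new metric.
transformations = {
    "remove_redundant_vars": lambda m: {**m, "variables": m["variables"] - 2},
    "merge_blocks":          lambda m: {**m, "statements": m["statements"] - 10},
    "inline_procedures":     lambda m: {**m, "statements": m["statements"] + 5,
                                             "procedures": m["procedures"] - 1},
}

def satisfies(metrics):
    """Constraint: the final program must be smaller and use fewer variables."""
    return metrics["statements"] <= 80 and metrics["variables"] <= 10

def search(initial, max_length=3):
    """Enumerate sequences up to max_length and return the first one that
    satisfies the constraint (a stand-in for the thesis's guided search)."""
    names = list(transformations)
    for length in range(1, max_length + 1):
        for seq in product(names, repeat=length):
            metrics = dict(initial)
            for name in seq:
                metrics = transformations[name](metrics)
            if satisfies(metrics):
                return seq, metrics
    return None, None

seq, result = search({"statements": 100, "variables": 12, "procedures": 6})
print("sequence:", seq)
print("resulting metrics:", result)
```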
|
66 |
Edge scanning and swept surface approximation in reverse engineering. Schreve, Kristiaan, 12 1900
Thesis (PhD)--University of Stellenbosch, 2001. / ENGLISH ABSTRACT: Broadly speaking, Reverse Engineering is the process of digitising a physical object and creating a computer model of the object. If sharp edges formed by two surfaces can be extracted from a point cloud (which is the set of measured points), this can speed up the segmentation of the point cloud, and the edges may also be used to construct swept surfaces (or various other types of surface that best capture the design intent).

A strategy is presented to "scan" edges. The strategy simulates a CMM (Coordinate Measurement Machine) as it would scan a sequence of short lines straddling the edge. Rather than measuring on a physical object, the algorithm developed in this dissertation "scans" the points in the point cloud. Each line is divided into two parts, or line sections, belonging to the surfaces forming the edge. The points of the line sections are then approximated with polynomials. Each edge point is the intersection of two such polynomials. In many engineering components sharp edges are replaced with fillet radii, or the edges become worn or damaged. This algorithm is capable of reconstructing the original sharp edge without prior segmentation.

A simple analytical model was developed to determine the theoretically achievable accuracy. This analytical accuracy was compared with the accuracy of edges extracted from point clouds. A series of experiments was done on point clouds. The input parameters of the experiments were chosen using the technique of Design of Experiments. Using the experimental results, the parameters that most significantly influence the accuracy of the algorithm were determined. From the analytical and experimental analysis, guidelines were developed which will help a designer to specify sensible input parameters for the algorithm. With these guidelines it is possible to find an edge with an accuracy comparable with an edge found with the traditional method of finding edges with NURBS surface intersections.

Finally, the algorithm was combined with a swept surface fitting algorithm. The scanned edges are used as rails and profile curves for the swept surfaces. The algorithms were demonstrated by reverse engineering part of another core box for an inlet manifold.

If the edge detection parameters are specified according to the guidelines developed here, this algorithm can successfully detect edges. The maximum gap size in the point cloud is an important limiting factor, but its effect has also been quantified.
cloud is an important limiting factor, but its effect has also been quantified. / AFRIKAANSE OPSOMMING: In Truwaartse Ingenieurswese word 'n fisiese voorwerp opgemeet en 'n rekenaar
model word daarvan geskep. Die segmentering van die puntewolk (dit is die
versameling gemete punte) sal aansienlik vergemaklik word indien dit moontlik is om
skerp rante in die puntewolk te identifiseer. Die rante sal dan gebruik kan word om
veegvlakke (swept surfaces), of enige ander tipe oppervalk wat die ontwerp die beste
beskryf, te konstrueer.
Hierdie proefskrif beskryf 'n strategie wat die rante kan opmeet. Dit simuleer die
manier waarvolgens 'n Koërdinaatmeetmasjien 'n reeks lyne, wat oor die rant lê, sou
meet. In plaas van op 'n fisiese voorwerp op te meet, "meet" die algoritme op 'n
puntwolk. Elke lyn word dan in twee dele verdeel (elke deel word 'n meetlynseksie
genoem). Elke meetlynseksie behoort aan een van die twee oppervlaktes wat die rant
vorm. Die rant punte word bereken as die interseksie van twee polinome wat deur die
punte van die meetlynseksie gepas is. Dit is dikwels die geval met meganiese
onderdele dat skerp rante vervang word met 'n vulstraal of dit kan ook gebeur dat die
rant verweer het of beskadig is. Die algoritme, wat hier beskryf word, kan selfs die
oorspronklike skerp rant in sulke gevalle herkonstrueer.
'n Eenvoudige analitiese model is ontwikkelom die teoretiese akkuraatheid van die
algoritme te bepaal. Die teoretiese akkuraatheid is vergelyk met die akkuraatheid van
rante wat uit puntewolke bepaal is. 'n Reeks eksperimente is op puntwolke gedoen.
Die parameters vir die eksperimente is gekies deur van Eksperimentele Ontwerp
gebruik te maak. Met behulp van hierdie tegniek kon bepaal word watter meetparameters
die grootste invloed op die akkuraatheid van die gemete punte het. Die
teoretiese en eksperimentele resultate is gebruik om riglyne daar te stel waarmee die
intreeparameters van die algoritme gekies kan word. Met hierdie riglyne is dit
moontlik om 'n rant te vind met 'n akkuraatheid vergelykbaar met die tradisionele
metode om die rante te vind met behulp van NURBS oppervlakte interseksies.
Laastens is die algoritme gekombineer met 'n algoritme wat veegvlakke deur punte
kan pas. Die gemete rante word gebruik as spore en profiele vir die veegvlakke. Die tegnieke is gebruik om 'n CAD model van 'n sandkernvorm (vir die giet van 'n
inlaatspruitstuk) te maak.
Deur die riglyne te gebruik om die intreeparameters vir die algoritme te spesifiseer,
kan rante suksesvol uit puntewolke bepaal word. Die maksimum afstand tussen
naburige punte in die puntewolk beperk die gebruik van die algoritme, maar die effek
hiervan is ook vasgevat in die riglyne wat ontwikkel is vir die algoritme.
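A minimal Python sketch of the core edge-point computation described above, using synthetic data: the points of the two line sections on either side of an edge are approximated with polynomials, and the edge point is taken as their intersection. The surfaces, noise level and polynomial degree are arbitrary choices for illustration, not values from the dissertation.

```python
# Minimal sketch with synthetic data: fit a polynomial to each line section
# straddling an edge and take the intersection of the two fits as the edge point.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "measured" line section crossing an edge at x = 1.0 (two surfaces)
x_left = np.linspace(0.0, 0.9, 20)
z_left = 0.5 * x_left + rng.normal(0.0, 0.002, x_left.size)
x_right = np.linspace(1.1, 2.0, 20)
z_right = 0.5 - 1.5 * (x_right - 1.0) + rng.normal(0.0, 0.002, x_right.size)

# Approximate each line section with a low-order polynomial
p_left = np.polyfit(x_left, z_left, 2)
p_right = np.polyfit(x_right, z_right, 2)

# The edge point is the intersection of the two polynomials: a root of their difference
diff = np.polysub(p_left, p_right)
roots = np.roots(diff)
edge_x = next(r.real for r in roots if abs(r.imag) < 1e-9 and 0.0 <= r.real <= 2.0)
edge_z = np.polyval(p_left, edge_x)

print(f"reconstructed edge point: x = {edge_x:.4f}, z = {edge_z:.4f}")  # expect x near 1.0
```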
|
67 |
Automating business intelligence recovery in software evolution. Kang, Jian, January 2009
The theme of this thesis is to pave a path to vertically extract business intelligence (BI) from software code into a business intelligence base, which is a reservoir of BI. Business intelligence is the atomic unit for building program comprehensibility from a business-logic point of view. It stands out because it covers all reverse engineering levels from code to specification. It refers to technologies for the localisation, extraction and analysis of business intelligence in a software system. Such an approach naturally requires information transformation from the software system to the business intelligence base, and hence a novel set of automatic business intelligence recovery methods is needed.

After a brief introduction of the major issues covered by this thesis, the state of the art of the area coined by the author as "business intelligence elicitation from software systems" is presented, in particular the kinds of business intelligence that can be elicited from a software system and their corresponding reverse engineering technical solutions. Several new techniques are invented to pave the way towards realising this approach and to make it lightweight. In particular, a programming-style-based method is proposed to partition a source program into business-intelligence-oriented program modules; concept recovery rules are defined to recover business intelligence concepts from the names embedded in a program module; and formal concept analysis is applied to model the recovered business intelligence and present business logic. The future research of this task is viewed as "automating business intelligence accumulation in the Web", which is defined to bridge the work in this thesis to current Web computing trends. A prototype tool for recovering business intelligence from a Web-based mobile retailing system is then presented, followed by a case study evaluating the approach from different aspects. Finally, conclusions are drawn. Original contributions of this research work to the field of software reverse engineering are made explicit and future opportunities are explored.
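As a hedged illustration of the name-based concept recovery step mentioned above, the following Python sketch splits identifiers from a program module into candidate business terms; the identifiers and the stop-list are invented, and the thesis's actual concept recovery rules and formal concept analysis go well beyond this.

```python
# Sketch of name-based concept recovery: split identifiers into lower-case
# terms, drop technical noise words, and count the surviving candidate terms.
import re
from collections import Counter

identifiers = [
    "calculateOrderDiscount", "customer_account_balance",
    "OrderLineItem", "applyLoyaltyBonus", "getDBConnection",
]

# Technical noise words that carry no business meaning (assumed stop-list)
STOP_TERMS = {"get", "set", "db", "connection", "calculate", "apply"}

def split_identifier(name):
    """Split camelCase, acronym runs and snake_case into lower-case terms."""
    spaced = re.sub(r"([a-z0-9])([A-Z])", r"\1 \2", name)
    spaced = re.sub(r"([A-Z]+)([A-Z][a-z])", r"\1 \2", spaced)
    return [part.lower() for part in spaced.replace("_", " ").split()]

term_counts = Counter(
    term
    for ident in identifiers
    for term in split_identifier(ident)
    if term not in STOP_TERMS
)

# The surviving terms are candidate business concepts for the module
print(term_counts.most_common())
```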
|
68 |
Development of a computer interface for a clamp-on ultrasonic flow meter. Sundin, Peter, January 2007
The section for volume, flow and temperature at SP Technical Research Institute of Sweden performs measurements of volume, flow and temperature in liquids.

Flow meters are best calibrated in their installation, to take sources of error like installation effects and the medium into account. If this can be done without having to place measurement equipment inside the pipe, it brings several practical benefits.

Clamp-on ultrasonic flow meters have been available on the market for many years. But even with today's improvements, their measurement uncertainty is still five to ten times too large to make them useful as references for calibration procedures.

This thesis focuses on the analysis, using reverse engineering, of an existing clamp-on ultrasonic flow meter.

The goal of the project is the evaluation and further development of the ultrasonic flow meter's existing computer interface, with the purpose of offering the option of using Microsoft Excel and Visual Basic for data acquisition and measurement of the flow rate of liquids.
|
69 |
Ghost in the Shell: A Counter-intelligence Method for Spying while Hiding in (or from) the Kernel with APCs. Alexander, Jason, 18 October 2012
Advanced malicious software threats have become commonplace in cyberspace, with large-scale attacks exploiting consumer, corporate and government systems on a constant basis. Regardless of the target, upon successful infiltration into a target system an attacker will commonly deploy a backdoor to maintain persistent access as well as a rootkit to evade detection on the infected machine. If the attacked system has access to classified or sensitive material, virus eradication may not be the best response. Instead, a counter-intelligence operation may be initiated to track the infiltration back to its source. It is important that the counter-intelligence operations are not detectable by the infiltrator.
Rootkits can not only hide malware, they can also hide the detection and analysis operations of the defenders from malware. This thesis presents a rootkit based on Asynchronous Procedure Calls (APC). This allows the counter-intelligence software to exist inside the kernel and avoid detection. Two techniques are presented to defeat current detection methods: Trident, using a kernel-mode driver to inject payloads into the user-mode address space of processes, and Sidewinder, moving rapidly between user-mode threads without intervention from any kernel-mode controller.
Finally, an implementation of the explored techniques is discussed. The Dark Knight framework is outlined, explaining the loading process that employs Master Boot Record (MBR) modifications and the primary driver that enables table hooking, kernel object manipulation, virtual memory subversion, payload injection, and subterfuge. A brief overview of Host-based Intrusion Detection Systems is also presented to outline how the Dark Knight system can be used in conjunction with them for immediate reactive investigations. / Thesis (Master, Computing) -- Queen's University, 2012-10-18 09:54:09.678
|
70 |
Crystallizing Application Configurations. Zhang, Zanqing, January 2006
Software applications have both static and dynamic dependencies. Static dependencies are those derived from the source code. Dynamic runtime dependencies are established at runtime and may be based on information external to the source code, such as configuration files. Flexible applications commonly rely on configuration to adapt to diverse environments. An application's configuration encodes runtime dependencies between the various parts of the application. Reverse engineering tools have traditionally been based solely on static dependencies extracted from the source code. Neglecting dynamic dependencies encoded in an application's configuration can result in incorrect or incomplete program comprehension. Unfortunately, many applications store their configuration in an ad hoc, unstructured format from which it is not feasible to extract runtime dependencies by traditional reverse engineering. Our work takes advantage of well structured, published configuration formats, such as that of J2EE applications. Using these formats we are able to extend reverse engineering to analyse this previously neglected information. We introduce a technique called crystallization, which extracts configuration facts that encode dynamic dependencies. We use these recovered facts to predict and validate dynamic dependencies. Crystallizing configurations has the potential to increase developer productivity by providing better program comprehension.
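A minimal Python sketch of the crystallization idea on a simplified, hypothetical J2EE deployment descriptor: the configuration is parsed and the URL-pattern-to-servlet-class dependencies it encodes are recovered, dependencies a purely static source-code extractor would miss. The descriptor content is invented, and the real extraction handles far richer configuration.

```python
# Sketch of "crystallizing" configuration facts from a simplified web.xml-style
# deployment descriptor: recover which URL patterns dynamically depend on which
# servlet classes. The descriptor below is hypothetical.
import xml.etree.ElementTree as ET

WEB_XML = """
<web-app>
  <servlet>
    <servlet-name>checkout</servlet-name>
    <servlet-class>com.example.shop.CheckoutServlet</servlet-class>
  </servlet>
  <servlet-mapping>
    <servlet-name>checkout</servlet-name>
    <url-pattern>/checkout/*</url-pattern>
  </servlet-mapping>
</web-app>
"""

def crystallize(descriptor_xml):
    """Return (url-pattern, servlet-class) pairs declared in the descriptor."""
    root = ET.fromstring(descriptor_xml)
    name_to_class = {
        s.findtext("servlet-name"): s.findtext("servlet-class")
        for s in root.findall("servlet")
    }
    return [
        (m.findtext("url-pattern"), name_to_class.get(m.findtext("servlet-name")))
        for m in root.findall("servlet-mapping")
    ]

for url, clazz in crystallize(WEB_XML):
    print(f"{url} -> {clazz}")
```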
|