51

Αντίστροφη μηχανίκευση συστημάτων διαχείρισης περιεχομένου ανοιχτού κώδικα σε επίπεδο μοντέλου / Model-level reverse engineering of open-source content management systems

Μανδρώζος, Ασημάκης 07 June 2013 (has links)
Content management systems (CMS) are widely used in the construction of Web applications. Their main advantage is that people who are not familiar with internet technologies can easily create and maintain their own websites through graphical interfaces. One of the most popular CMS systems is Joomla. Joomla is open source, so anyone can download it free of charge, use it, and view its source code. It distinguishes itself from other CMS systems through its simplicity of use and the large user community that supports it.
WebML is a modeling language for web applications. Its aim is to present the structure of such an application. By using WebML units as an abstraction level for a web application, it is easy to discern both its structure and how it works. In this thesis, WebML is used to model the web pages produced by the Joomla CMS. This modeling helps us discern the design strengths and weaknesses that a web application created with this CMS may present. The modeling is performed automatically by a tool implemented in the Java programming language, which presents the results both graphically and in XML format.
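The thesis tool itself is written in Java; purely as an illustrative sketch of the general idea of representing a page's WebML units and emitting them as XML (the unit, attribute, and element names below are hypothetical, not the tool's actual schema), a minimal model might look like:

```python
# Sketch: serializing a WebML-style page model to XML.
# All element/attribute names are illustrative, not the thesis tool's schema.
import xml.etree.ElementTree as ET

def build_site_model():
    site = ET.Element("siteView", id="sv1", name="public")
    page = ET.SubElement(site, "page", id="p1", name="ArticleList")
    # An index unit publishes a list of entity instances; a link
    # connects it to a data unit showing one instance in detail.
    index = ET.SubElement(page, "indexUnit", id="u1", entity="Article")
    detail = ET.SubElement(page, "dataUnit", id="u2", entity="Article")
    ET.SubElement(index, "link", to="u2", type="normal")
    return site

model = build_site_model()
xml_text = ET.tostring(model, encoding="unicode")
print(xml_text)
```

The same in-memory model could just as well feed a graphical renderer, which is how a single tool can offer both output formats.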
52

Katana databas 1.0

Bärling, Leo January 2009 (has links)
The task of this thesis has been to create an application, Katana-databas 1.0, for analysing C code. The output it generates is stored in a data structure whose contents are written, at the end of the program run, to a text file used by Katana, a tool for reverse engineering developed by Johan Kraft at Mälardalen University. Katana-databas has the following limitations. (1) It can only handle preprocessed files, i.e. files containing no lines beginning with "#". (2) Only complete files can be handled. (3) No references to unknown functions or variables are allowed. (4) The application cannot handle any ADTs; it can only handle primitive types. (5) Finally, the application is written for pure C code only, and thus does not handle code written in C++. The task has been solved by creating an automatically generated lexer with Flex and Bison rules in Visual Studio. Thereafter a limited parser was developed whose purpose is to process the lexemes the lexer generates. The underlying motivation for the thesis is to replace Understand with Katana-databas: Katana has so far used the database in Understand, but that tool is closed source, and what is sought is open source code, which Katana-databas is based on.
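The application itself is built from a Flex-generated lexer and a dedicated parser; purely as a sketch of the overall extraction idea (the function name and regex below are mine, not the application's), a crude scan of preprocessed C for function definitions with primitive return types could look like:

```python
# Sketch: naive extraction of function definitions from preprocessed C.
# Illustrative only; the actual application uses a Flex lexer and a parser.
import re

FUNC_DEF = re.compile(
    r"^\s*(?:int|void|char|float|double)\s+"   # primitive return types only
    r"([A-Za-z_]\w*)\s*\([^;]*\)\s*\{",        # name(params) {
    re.MULTILINE)

def extract_functions(c_source: str) -> list[str]:
    """Return the names of function definitions found in preprocessed C."""
    return FUNC_DEF.findall(c_source)

sample = """
int add(int a, int b) {
    return a + b;
}
void report(void) {
}
"""
print(extract_functions(sample))  # ['add', 'report']
```

A regex scan like this breaks down quickly on real C, which is exactly why the thesis uses a generated lexer and a parser over the resulting lexemes instead.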
53

General Unpacking : Overview and Techniques

Niculae, Danut January 2015 (has links)
Over the years, packed malware has started to appear at a more rapid pace. Hackers are modifying the source code of popular packers to create new types of compressors which can fool anti-virus software. Due to the sheer volume of packer variations, creating unpacking scripts based on the packer's signature has become a tedious task. In this paper we analyse generic unpacking techniques and apply them to ten popular compression software packages. The techniques prove successful in nine out of ten cases, providing an easy and accessible way to unpack the provided samples.
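A common preliminary to any generic unpacking work is simply detecting that a sample is packed at all. One standard heuristic, sketched below (the ~7.2 bits/byte threshold is a commonly cited ballpark, not a figure from this paper), is the Shannon entropy of the file's bytes, since compressed or encrypted payloads approach 8 bits per byte:

```python
# Sketch: Shannon-entropy heuristic for spotting packed/compressed payloads.
# The 7.2 threshold is a common ballpark value, not taken from the paper.
import math
import os
from collections import Counter

PACKED_THRESHOLD = 7.2  # bits per byte

def shannon_entropy(data: bytes) -> float:
    """Entropy in bits per byte (0.0 to 8.0)."""
    if not data:
        return 0.0
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

plain = b"AAAA" * 1024          # highly repetitive: entropy 0.0
random_like = os.urandom(4096)  # models a packed/encrypted section

print(shannon_entropy(plain))                       # 0.0
print(shannon_entropy(random_like) > PACKED_THRESHOLD)
```

High entropy alone does not identify the packer; it only flags which sections are worth unpacking with the generic techniques the paper evaluates.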
54

Návrh replikované výroby zvoleného dílu za využití technologie Reverse engineering a Rapid prototyping / Design of replicated production of the selected part using the technology Reverse Engineering and Rapid prototyping

Horňák, Matúš January 2020 (has links)
The theoretical part of this diploma thesis describes the methods of Reverse Engineering and Rapid Prototyping, covering each method's characteristics, pros and cons, and usability. The practical part deals with the application of these methods to part of a ledge of a Škoda 1000 MB: digitalization of the object, creation of a new volume model, analysis of its dimensions and geometry using deviation analysis, creation of a prototype, choice of a suitable manufacturing technology, and technical-economic aspects.
55

Reverse Engineering med hjälp av 3D-skanning / Reverse Engineering using 3D-scanning

Wu, Christy January 2021 (has links)
In the field of mechanical engineering, there is an increasing interest in Reverse Engineering using 3D scanning. The technique is based on creating Computer-Aided Design (CAD) models of real objects. The present project was carried out at the Department of Applied Physics and Electronics at Umeå University in order to evaluate the performance of Reverse Engineering on objects that are challenging to draw directly in CAD programs. Four physical objects were selected for analysis: a bolt, a twelve-point socket, a propeller, and a worm wheel; the latter provided by the company Rototilt Group AB. A structured-light 3D scanner with a specified accuracy of 0.04 mm was used to image the objects. The 3D images were then post-processed and transferred to CAD software to create the CAD drawings. Finally, the CAD models were printed with a 3D printer, and a tolerance analysis with a limit of 0.2 mm was performed to compare the dimensions of the original objects, the various digital models, and the printed objects. The results show that Reverse Engineering (with some limitations) is a good method for objects that are difficult to model in CAD. The technique is well suited to reconstructing physical objects into CAD models quickly and with high accuracy.
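The tolerance analysis compares corresponding dimensions across the original, digital, and printed versions against the 0.2 mm limit. A minimal sketch of such a pass/fail check (the measurement values below are invented for illustration; only the 0.2 mm limit comes from the thesis):

```python
# Sketch: pass/fail tolerance comparison between measured dimension sets.
# The 0.2 mm limit is from the thesis; the measurements are invented.
TOLERANCE_MM = 0.2

def out_of_tolerance(reference: dict, measured: dict, tol: float = TOLERANCE_MM):
    """Return the dimensions whose deviation from the reference exceeds tol."""
    return {name: round(abs(measured[name] - ref), 2)
            for name, ref in reference.items()
            if abs(measured[name] - ref) > tol}

original = {"outer_diameter": 24.00, "height": 14.50, "bore": 8.00}
printed  = {"outer_diameter": 24.12, "height": 14.78, "bore": 7.95}

print(out_of_tolerance(original, printed))  # {'height': 0.28}
```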
56

Static Evaluation of Type Inference and Propagation on Global Variables with Varying Context

Frasure, Ivan 06 June 2019 (has links)
No description available.
57

Obtaining Architectural Descriptions from Legacy Systems: The Architectural Synthesis Process (ASP)

Waters, Robert Lee 29 October 2004 (has links)
A majority of software development today involves maintenance or evolution of legacy systems. Evolving these legacy systems while maintaining good software design principles is a significant challenge. Research has shown the benefits of using software architecture as an abstraction to analyze quality attributes of proposed designs. Unfortunately, for most legacy systems, a documented software architecture does not exist. Developing a good architectural description frequently requires extensive experience on the part of the developer trying to recover the legacy system's architecture. This work first describes a four-phase process that provides a framework within which architectural recovery activities can be automated. These phases consist of: extraction (obtaining a subset of information about the legacy system from a single source), classification (partitioning the information based upon its viewpoint), union (combining all the information in a particular viewpoint into a candidate view), and fusion (cross-checking all candidate views for consistency). The work then concentrates on the major problem facing automated architectural recovery: the concept assignment problem. To overcome this problem, a technique called semantic approximation is presented and validated via experimental results. Semantic approximation uses a combination of text data mining and a mathematical technique called concept analysis to build a lattice of similar concepts between higher-level domain information and low-level code concepts. The experimental data reveal that while semantic approximation does improve results over the more traditional lexical and topological approaches, it does not yet fully solve the concept assignment problem.
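The concept-analysis step rests on formal concept analysis, which derives a lattice of (object set, attribute set) pairs from a binary incidence relation. A minimal sketch of enumerating the formal concepts of a toy identifier-to-domain-term table (the data is invented for illustration; the thesis applies this to real code and domain documents):

```python
# Sketch: enumerating formal concepts from an object-attribute relation,
# the core step of concept analysis. Toy data, invented for illustration.
from itertools import chain, combinations

# Incidence relation: code identifier -> domain terms it mentions.
incidence = {
    "AccountMgr":  {"account", "balance"},
    "Transfer":    {"account", "balance", "audit"},
    "AuditLogger": {"audit"},
}

def common_attrs(objs):
    """Attributes shared by every object in objs (all attrs for empty objs)."""
    sets = [incidence[o] for o in objs]
    return set.intersection(*sets) if sets else set(
        chain.from_iterable(incidence.values()))

def objects_with(attrs):
    """Objects possessing every attribute in attrs."""
    return {o for o, a in incidence.items() if attrs <= a}

def formal_concepts():
    """All (extent, intent) pairs closed under the derivation operators."""
    concepts = set()
    objs = list(incidence)
    for r in range(len(objs) + 1):
        for subset in combinations(objs, r):
            intent = frozenset(common_attrs(subset))
            extent = frozenset(objects_with(intent))
            concepts.add((extent, intent))
    return concepts

for extent, intent in sorted(formal_concepts(), key=lambda c: len(c[0])):
    print(sorted(extent), "<->", sorted(intent))
```

Ordering these concepts by extent inclusion yields the lattice; semantic approximation then matches high-level domain terms against the intents of low-level code concepts within it.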
58

Noch jemand ohne Rückfahrkarte? Anmerkungen zu den gestalterischen Potentialen des Reverse Engineering: Noch jemand ohne Rückfahrkarte? Anmerkungen zu den gestalterischen Potentialen des Reverse Engineering

Groh, Rainer January 2012 (has links)
Seit geraumer Zeit wird im Maschinenbau (und nicht nur dort) mit Reverse Engineering ein komplexes Vorgehen im Entwicklungsprozess bezeichnet. Bislang getrennt und in Etappen ablaufende Vorgehensweisen werden durch den Rechnereinsatz integriert. Schlüssel dafür sind computergrafische Algorithmen, die es erlauben, aus Scan-, Röntgen- und Messdaten (Punktwolken) Oberflächen zu rekonstruieren. Die als Polygonnetze beschriebenen Oberflächen können für CAD und CAM, für Optimierungsverfahren (FEM) oder für die Qualitätssicherung (Werkstoffprüfung) genutzt werden. [... aus der Einleitung]
59

A Methodology for Designing Product Components with Built-in Barriers to Reverse Engineering

Harston, Stephen P. 14 July 2009 (has links) (PDF)
Reverse engineering, defined as extracting information about a product from the product itself, is a common industry practice for gaining insight into innovative products. Both the original designer and those reverse engineering the original design can benefit from estimating the time and barrier to reverse engineer a product. This thesis presents a set of metrics and parameters that can be used to calculate the barrier to reverse engineer any product as well as the time required to do so. To the original designer, these numerical representations of the barrier and time can be used to strategically identify and improve product characteristics so as to increase the difficulty and time to reverse engineer them. One method for increasing the time and barrier to reverse engineer a product – presented in this thesis – is to treat material microstructures (crystallographic grain size, orientation, and distribution) as continuous design variables that can be manipulated to identify unusual material properties and to design devices with unexpected mechanical performance. A practical approach, carefully tied to proven manufacturing strategies, is used to tailor material microstructures by strategically orienting and laminating thin anisotropic metallic sheets. This approach, coupled with numerical optimization, manipulates material microstructures to obtain desired material properties at designer-specified locations (heterogeneously) or across the entire part (homogeneously). As the metrics and parameters characterizing the reverse engineering time and barrier are also quantitative in nature, they can also be used in conjunction with numerical optimization techniques, thereby enabling products to be developed with a maximum reverse engineering barrier and time – at a minimum development cost. 
On the other hand, these quantitative measures enable competitors who reverse engineer original designs to focus their efforts on products that will result in the greatest return on investment. Many products were analyzed in an empirical study demonstrating that the characterization of the time to reverse engineer a product has an average error of 12.2%; the results for three of these products are presented here. Two additional examples are also presented, showing how microstructure manipulation leads to product hardware with unexpected mechanical performance, effectively increasing the reverse engineering time and barrier.
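The thesis's metrics are numerical, which is what allows them to drive optimization. Purely as an illustrative sketch of the general form such a model can take (the formula and all parameter values below are hypothetical, not Harston's actual metrics), reverse-engineering time can be modeled as extractable information divided by an analyst's extraction rate, scaled by a barrier factor:

```python
# Sketch: a hypothetical quantitative model of reverse-engineering time.
# The formula and numbers are illustrative; see the thesis for the real metrics.
def reverse_engineering_time(info_bits: float, flow_rate: float,
                             barrier: float) -> float:
    """Estimated hours = barrier * information content / extraction rate.

    info_bits: information that must be extracted from the product.
    flow_rate: bits the analyst can extract per hour.
    barrier:   dimensionless difficulty multiplier (>= 1).
    """
    if flow_rate <= 0:
        raise ValueError("flow rate must be positive")
    return barrier * info_bits / flow_rate

baseline = reverse_engineering_time(info_bits=5000, flow_rate=250, barrier=1.0)
hardened = reverse_engineering_time(info_bits=5000, flow_rate=250, barrier=2.5)
print(baseline, hardened)  # 20.0 50.0
```

Because such a model is differentiable in its design parameters, it can be plugged into a numerical optimizer to trade development cost against the barrier, as the thesis describes.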
60

Characterization of the Initial Flow Rate of Information During Reverse Engineering

Anderson, Nicole 21 April 2011 (has links) (PDF)
The future of companies that are founded on the development of new and innovative products is threatened when competitors reverse engineer and imitate the products. If the original developers could predict how long it would take a competitor to reverse engineer a product, it may be possible for them to delay, if not prevent, that competitor's entry into the market. Metrics and measures have been developed that can estimate the time it would take an individual to reverse engineer a product. The main purpose of these metrics and measures is to help designers determine how quickly a competitor could reverse engineer a product and develop and market a competing product. A critical parameter of these metrics is the flow rate of information (how quickly information can be extracted from a product), which is a parameter unique to each individual. This thesis seeks to establish a method for creating probability distributions that could be used to select a reasonable flow rate for an individual, by using data collected on the initial flow rate of multiple individuals.
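Since the flow rate is unique to each individual, one way to turn collected samples into a usable value is to fit a probability distribution and read off a conservative percentile. The sketch below uses a normal fit purely for illustration (the sample values are invented, and the thesis does not prescribe this particular distribution family):

```python
# Sketch: fitting a distribution to observed information flow rates and
# selecting a conservative percentile. Normal fit and data are illustrative.
from statistics import NormalDist

# Hypothetical flow-rate samples (bits of product information per hour).
samples = [180, 210, 195, 240, 205, 220, 188, 232, 199, 215]

dist = NormalDist.from_samples(samples)
# A fast competitor is better modeled by an upper percentile than the mean.
p90 = dist.inv_cdf(0.90)

print(round(dist.mean, 1))  # 208.4
print(round(p90, 1))
```

Feeding the 90th-percentile rate, rather than the mean, into a time metric yields a worst-case (shortest) estimate of how long a capable competitor would need.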
