About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
121

Der Weg zum digitalen Zwilling mit Mainstream CAD-Lösungen / The Path to the Digital Twin with Mainstream CAD Solutions

Schawohl, Elke 05 July 2018
From the initial idea to the delivery of a product, a range of processes run that must be coordinated and optimized in order to bring products to market maturity quickly. Digitalization of processes and a company-wide uniform data platform are indispensable in product development; the digital twin and PLM move into focus. The challenge for industry lies in optimizing products. Where does optimization begin? During design, various optimization tools intervene in the development phase: scalable FEM tools enable analyses that accompany the design work, and various design tools within the CAD solution save time and cost. The next generation of design is generative design, in which the advantages of additive manufacturing are taken into account during model creation, thereby optimizing the part design. Reverse engineering offers the possibility of working directly with facet data and generating surfaces from them. Convergent Modeling provides the seamless combination of "B-rep" solids and "facet" models. The Solid Edge portfolio represents the future of product development: Solid Edge apps extend the range of functions, and applications developed for specific market segments round out the possible uses.
122

Riktlinjer för Pensionering av IT-System / Guidelines for the Retirement of IT Systems

Marcus, Elwin, Emil, Lehti January 2016
Retirement of old IT systems in favour of newer ones has become an increasingly relevant issue, due to improvements in hardware performance and new programming languages. However, it is not straightforward for companies to determine how this retirement process is best handled, and there are not many studies covering the phenomenon. The goal of this thesis is to find general guidelines for the retirement of IT systems. This is done through a case study on the retirement of the old MAPPER/BIS platform in favour of a more modern C# system currently used at Handelsbanken, one of Sweden's largest banks. With the help of a qualitative research method, a literature study, and interviews at Handelsbanken and with external parties, we aim to understand and analyse what is important in a retirement process, in order to create recommendations for our guidelines, also drawing on parts of the EM3: Software Retirement Process Model. The thesis results in a total of 16 guidelines, presented as tables in the study, which companies can use in their retirement processes. Ten guidelines concern the retirement of the platform and six concern conversion rules. The study has, however, shown that not all systems on the MAPPER platform could be retired.
123

An Initial Investigation of Neural Decompilation for WebAssembly / En Första Undersökning av Neural Dekompilering för WebAssembly

Benali, Adam January 2022
WebAssembly is a new World Wide Web standard that is used as a compilation target and is meant to enable high-performance applications. As it becomes more popular, the need for corresponding decompilers increases, for instance for security reasons. However, building an accurate decompiler capable of restoring the original source code is a complicated task. Recently, Neural Machine Translation (NMT) has been proposed as an alternative to traditional decompilers, which involve a great deal of manual and laborious work. We investigate the viability of Neural Machine Translation for decompiling WebAssembly binaries to C source code. We experiment with state-of-the-art transformer and LSTM sequence-to-sequence (Seq2Seq) models with attention. We build a custom, randomly generated dataset of WebAssembly-to-C pairs of source code and use different metrics to quantitatively evaluate the performance of the models. Our implementation consists of several processing steps that have the WebAssembly input and the C output as endpoints. The results show that the transformer outperforms the LSTM-based neural model. Moreover, while the model restores the syntax and control-flow structure with up to 95% accuracy, it is incapable of recovering the data flow. The different benchmarks on which we run our evaluation indicate a drop in decompilation accuracy as the cyclomatic complexity and the nesting of the programs increase. Nevertheless, our approach has a lot of potential, encouraging its usage in future works.
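The abstract mentions evaluating the decompiled C against the reference with "different metrics" but does not name them. The sketch below shows one plausible, stdlib-only choice, a normalized token-level similarity; the crude tokenizer and the metric are illustrative assumptions, not the thesis's actual evaluation code.

```python
# Hypothetical token-level similarity metric for decompilation output.
import difflib
import re

def c_tokens(src: str) -> list[str]:
    """Crude C tokenizer: identifiers, numbers, and single punctuation characters."""
    return re.findall(r"[A-Za-z_]\w*|\d+|\S", src)

def token_similarity(predicted: str, reference: str) -> float:
    """Ratio in [0, 1]; 1.0 means the token sequences match exactly."""
    matcher = difflib.SequenceMatcher(a=c_tokens(predicted), b=c_tokens(reference))
    return matcher.ratio()

if __name__ == "__main__":
    ref = "int add(int a, int b) { return a + b; }"
    pred = "int add(int a, int b) { return b + a; }"  # same syntax, different data flow
    print(f"similarity = {token_similarity(pred, ref):.2f}")
```

A metric like this rewards recovered syntax and structure while remaining blind to semantic (data-flow) differences, which is consistent with the gap the abstract reports.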
124

Development of an experimental diaphragm valve used for velocity profiling of such devices

Humphreys, P., Erfort, E., Fester, V., Chhiba, M., Kotze, R., Philander, O., Sam, M. January 2010
Published Article / The design, manufacture and use of diaphragm valves in the minerals industry are becoming increasingly important since this sector is restricted from using excessive amounts of water in its operations. This forces a change in the flow properties of these devices from turbulent to laminar in nature and thus necessitates the characterization of these flows for future designs. Furthermore, diaphragm valves have a short service life due to a variety of reasons that include the abrasive nature of the flow environment. This paper describes the activities of the Adaptronics Advanced Manufacturing Technology Laboratory (AMTL) at the Cape Peninsula University of Technology in the research and development of diaphragm valves using rapid prototyping technologies. As a first step, an experimental diaphragm valve was reverse engineered and retrofitted with ultrasonic transducers used in Ultrasonic Velocity Profiling (UVP) measurements. The use of this device enables velocity-profile measurements that give insight into the flow structure within the valve and the increased pressure losses it generates. It also showed that components fabricated using the Z-Corporation machine could withstand the working environment of diaphragm valves. Research is now being conducted on ultrasonic transducer placement to further enhance velocity profiling through the device. As a second step we produced a thin-walled stainless steel diaphragm valve using rapid prototyping technology and investment casting processes. A study of the durability of this device will be conducted and certain geometric and manufacturing aspects of this valve will be discussed.
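For context, UVP infers an axial velocity at each measurement depth from the Doppler shift of the returned echo. Below is a minimal sketch of that conversion using the standard pulsed-Doppler relation; the sound speed, emission frequency and beam angle are assumed, illustrative values, not parameters taken from this paper.

```python
# Convert measured Doppler shifts into a velocity profile (illustrative values only).
import math

SOUND_SPEED = 1480.0              # speed of sound in the fluid [m/s] (assumed)
F_EMITTED = 4.0e6                 # transducer emission frequency [Hz] (assumed)
BEAM_ANGLE = math.radians(70.0)   # angle between beam and flow direction (assumed)

def velocity_from_doppler(f_doppler: float) -> float:
    """Axial flow velocity [m/s] for a measured Doppler shift [Hz]."""
    return SOUND_SPEED * f_doppler / (2.0 * F_EMITTED * math.cos(BEAM_ANGLE))

if __name__ == "__main__":
    # One Doppler-shift sample per measurement gate along the beam.
    doppler_profile = [800.0, 1500.0, 1900.0, 1500.0, 800.0]
    profile = [velocity_from_doppler(fd) for fd in doppler_profile]
    print([round(v, 3) for v in profile])
```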
125

Semi-automatic extraction of primitive geometric entities from point clouds

Goussard, Charl Leonard 12 1900
Thesis (MScEng)--University of Stellenbosch, 2001. / This thesis describes an algorithm to extract primitive geometric entities (flat planes, spheres or cylinders, as determined by the user's inputs) from unstructured, unsegmented point clouds. The algorithm extracts whole entities or only parts thereof, and the entity boundaries are computed automatically. Minimal user interaction is required to extract these entities. The algorithm is accurate and robust. It is intended for use in the reverse engineering environment, where point clouds typically have normally distributed measurement errors. Comprehensive testing and results are shown, as well as the algorithm's usefulness in that environment.
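As an illustration of the simplest of the three primitives, the sketch below fits a total-least-squares plane to a point cloud via SVD of the centred points. It is a toy example on synthetic data; the thesis's fitting of spheres and cylinders, handling of partial entities and automatic boundary computation go well beyond this.

```python
# Total-least-squares plane fit through a point cloud (synthetic data).
import numpy as np

def fit_plane(points: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Return (centroid, unit normal) of the best-fit plane through Nx3 points."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]                       # direction of least variance
    return centroid, normal / np.linalg.norm(normal)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    xy = rng.uniform(-1.0, 1.0, size=(200, 2))
    z = 0.5 * xy[:, 0] - 0.2 * xy[:, 1] + rng.normal(scale=0.01, size=200)  # noisy plane
    cloud = np.column_stack([xy, z])
    c, n = fit_plane(cloud)
    print("centroid:", np.round(c, 3), "normal:", np.round(n, 3))
```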
126

Incorporation of Departure Time Choice in a Mesoscopic Transportation Model for Stockholm

Kristoffersson, Ida January 2009
Travel demand management policies such as congestion charges encourage car-users to change, among other things, route, mode and departure time. Departure time may be especially affected by time-varying charges, since car-users can avoid high peak-hour charges by travelling earlier or later, so-called peak-spreading effects. Conventional transport models do not include departure time choice as a response, yet for the evaluation of time-varying congestion charges departure time choice is essential.

In this thesis a transport model called SILVESTER is implemented for Stockholm. It includes departure time, mode and route choice. Morning trips, commuting as well as other trips, are modelled, and time is discretized into fifteen-minute periods so that peak-spreading effects can be analysed. The implementation is built around an existing route choice model called CONTRAM, for which a Stockholm network already exists. The CONTRAM network has been in use for a long time in Stockholm, and an origin-destination matrix calibrated against local traffic counts and travel times guarantees local credibility. On the demand side, an earlier developed departure time and mode choice model of mixed logit type is used. It was estimated on CONTRAM travel times to be consistent with the route choice model, and the behavioural response under time-varying congestion charges was estimated from a hypothetical study conducted in Stockholm.

Paper I describes the implementation of SILVESTER. The paper shows the model structure, how model run time was reduced, and tests of convergence. As regards run time, a 75% reduction was achieved by reducing the number of origin-destination pairs while not changing the travel time and distance distributions too much.

In Paper II car-users' underlying preferred departure times are derived using a method called reverse engineering. This method derives the preferred departure times that reproduce as well as possible the observed travel pattern of the base year. Reverse engineering has previously only been used on small example road networks; Paper II shows that application to a real-life road network is possible and gives reasonable results. / Silvester
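As a rough illustration of how a departure-time choice model trades travel time, charges and schedule deviation off against one another, here is a plain multinomial-logit sketch over fifteen-minute periods. The actual SILVESTER demand model is a mixed logit estimated on Stockholm data; every coefficient and input value below is invented for illustration.

```python
# Toy multinomial-logit departure-time choice over fifteen-minute periods.
import math

PERIODS = ["07:30", "07:45", "08:00", "08:15", "08:30"]
TRAVEL_TIME = [22.0, 26.0, 31.0, 27.0, 23.0]   # minutes, peaks around 08:00 (assumed)
CHARGE = [10.0, 15.0, 20.0, 15.0, 10.0]        # SEK, time-varying toll (assumed)
SCHED_DELAY = [30.0, 15.0, 0.0, 15.0, 30.0]    # minutes away from preferred departure

B_TIME, B_CHARGE, B_SCHED = -0.10, -0.05, -0.04   # illustrative coefficients

def choice_probabilities() -> list[float]:
    utilities = [B_TIME * t + B_CHARGE * c + B_SCHED * d
                 for t, c, d in zip(TRAVEL_TIME, CHARGE, SCHED_DELAY)]
    exp_u = [math.exp(u) for u in utilities]
    total = sum(exp_u)
    return [e / total for e in exp_u]

if __name__ == "__main__":
    for period, p in zip(PERIODS, choice_probabilities()):
        print(f"{period}: {p:.2%}")
```

Raising the peak charge in such a model shifts probability mass from the 08:00 period to the shoulders, which is the peak-spreading effect the thesis is set up to evaluate.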
127

Automatic reconstruction and analysis of security policies from deployed security components

Martinez, Salvador 30 June 2014
Security is a critical concern for any information system. Security properties such as confidentiality, integrity and availability need to be enforced in order to make systems safe. In complex environments, where information systems are composed of a number of heterogeneous subsystems, each subsystem plays a key role in the global system security. For the specific case of access control, access-control policies may be found in several components (databases, networks and applications), all, supposedly, working together. Nevertheless, since these policies have most often been manually implemented and/or evolved separately, they easily become inconsistent. In this context, discovering and understanding which security policies are actually being enforced by the information system emerges as a critical necessity. The main challenge is bridging the gap between vendor-dependent security features and a higher-level representation that expresses these policies in a way that abstracts from the specificities of concrete system components, and is thus easier to understand and reason about. This high-level representation would also allow us to implement all evolution/refactoring/manipulation operations on the security policies in a reusable way. In this work we propose such a reverse engineering and integration mechanism for access-control policies, relying on model-driven technologies to achieve this goal.
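To make the idea of a vendor-independent representation concrete, the sketch below lifts two toy vendor-specific rules (a SQL GRANT and a firewall entry) into one subject/action/resource model that can be queried uniformly. The triple format and the parsers are illustrative assumptions; the thesis itself works with model-driven engineering artefacts (metamodels and model transformations), not ad-hoc parsing like this.

```python
# Lift vendor-specific access-control rules into one simple, queryable policy model.
from dataclasses import dataclass
import re

@dataclass(frozen=True)
class Rule:
    subject: str
    action: str
    resource: str

def from_sql_grant(stmt: str) -> Rule:
    """Parse a simple 'GRANT <action> ON <table> TO <role>' statement."""
    m = re.match(r"GRANT (\w+) ON (\w+) TO (\w+)", stmt, re.IGNORECASE)
    action, table, role = m.groups()
    return Rule(subject=role, action=action.lower(), resource=f"db:{table}")

def from_firewall_line(line: str) -> Rule:
    """Parse a toy 'allow <role> -> <host>:<port>' firewall entry."""
    role, target = line.removeprefix("allow ").split(" -> ")
    return Rule(subject=role, action="connect", resource=f"net:{target}")

if __name__ == "__main__":
    policy = {
        from_sql_grant("GRANT select ON accounts TO clerk"),
        from_firewall_line("allow clerk -> dbserver:5432"),
    }
    # With everything in one model, consistency questions become simple set queries.
    print(any(r.subject == "clerk" and r.resource.startswith("db:") for r in policy))
```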
128

Redesign supported by data models with particular reference to reverse engineering

Borja Ramirez, Vicente January 1997
The research reported in this thesis is focused on the creation of a CAE system to support Reverse Engineering. It is centred around the computational representation of products (Product Model) and manufacturing capabilities (Manufacturing Model). These models are essential for modern and future software systems aimed at assisting the design process, enabling data sharing among participants who use various computational tools. Reverse Engineering is employed as a particular context and motivation for exploring the application of the models. The research builds on the achievements of the recently finished Model Oriented Simultaneous Engineering System (MOSES) project, undertaken jointly by Leeds University and the Department of Manufacturing Engineering of Loughborough University. MOSES' work on information modelling was analysed and combined with the author's original proposals to develop suitable support for Reverse Engineering, applicable to redesign in general. A process for Reverse Engineering is proposed and documented, and a data-model-driven CAE system to support it is specified. The CAE system includes a Product Model, a Manufacturing Model and two software application environments. The Product Model is based on the information requirements of the Reverse Engineering process and is suitable for representing multi-component products from different perspectives throughout their life cycle. The applications assist the characteristic activities of Reverse Engineering; in particular, the system is used for exploring the application of Product and Manufacturing Models in supporting Design for Manufacture. The theoretical research is tested by means of a case study which explores the Reverse Engineering of a component, supported by a prototype software instance of the CAE system. The case study component is an axle which forms part of a product designed and manufactured by a collaborating company.
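The sketch below reduces the Product Model / Manufacturing Model pairing to a few plain classes and a Design-for-Manufacture style check that every feature of a component lies within the capability of some available process. The class names, fields and the check itself are invented simplifications for illustration, not the MOSES or thesis data models.

```python
# Minimal linked product/manufacturing data with a Design-for-Manufacture check.
from dataclasses import dataclass, field

@dataclass
class Feature:
    kind: str                 # e.g. "hole", "keyway"
    tolerance_mm: float       # required tolerance

@dataclass
class Component:
    name: str
    features: list[Feature] = field(default_factory=list)

@dataclass
class Process:
    name: str
    achievable_tolerance_mm: float
    feature_kinds: set[str] = field(default_factory=set)

def manufacturable(component: Component, processes: list[Process]) -> bool:
    """True if every feature can be produced by at least one capable process."""
    return all(
        any(f.kind in p.feature_kinds and p.achievable_tolerance_mm <= f.tolerance_mm
            for p in processes)
        for f in component.features
    )

if __name__ == "__main__":
    axle = Component("axle", [Feature("hole", 0.05), Feature("keyway", 0.10)])
    shop = [Process("drilling", 0.02, {"hole"}), Process("milling", 0.05, {"keyway", "slot"})]
    print(manufacturable(axle, shop))   # True: each feature is within some process capability
```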
129

Harmonising metalworking fluid formulations with end-of-life biological treatment

Uapipatanakul, Boontida January 2015
Metalworking fluids (MWFs) are coolants and lubricants widely employed in metal-cutting work. They are designed to be long-lasting products, and manufacturers have designed MWFs with little awareness of end-of-life disposal, including biocides that make biological treatment challenging. Here, Syntilo 9913 was used as a case study to develop a cradle-to-grave product that was biologically stable in use but amenable to sustainable hybrid biological treatment at end-of-life. The product was reverse engineered employing a factorial design approach based on a priori knowledge of the product components. From the combinatorial work, it was observed that chemical interactions can result in synergistic and antagonistic effects on toxicity and biodegradability. Among the major components of most MWFs are amines such as triethanolamine (TEA). TEA does not biodeteriorate in single-compound screening, but in combination with many other components it was found to cause "softening" of MWF formulations. Octylamine was found to be best for "bio-hardening", but it was not economically sustainable. Hence, the modified biocide-free synthetic MWF, Syntilo 1601, was reformulated with TEA, isononanoic acid, neodecanoic acid, Cobratec TT50S and Pluronic 17R40, which were resistant to biological treatment. Although there was no change in the overall oxidation state of the MWF, metabolic activity did occur, as breakdown products were observed. This suggested that both the raw materials and the metabolic breakdown products were recalcitrant. Thus, immobilisation agents were applied to aid further biodegradation by removing toxic bottleneck compounds. It was found that hybrid nano-iron and kaffir lime leaf performed similarly in removing chemical oxygen demand and ammonium from the system. Work in this thesis demonstrated that the combined use of biological treatment and immobilisation agents effectively overcomes the limitations of biological treatment alone by removing bottleneck compounds, allowing greater COD reduction. This laboratory-scale work is a proof of principle, which needs to be tested at full scale.
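To show the shape of the factorial-design screening mentioned above, the sketch below enumerates a two-level full factorial over three components and estimates main effects from a toy response. The component list, the response function and its antagonistic TEA-biocide interaction are invented for illustration; the thesis measured real toxicity and biodegradability endpoints in the laboratory.

```python
# Two-level full-factorial screening of formulation components (toy response).
from itertools import product

COMPONENTS = ["TEA", "isononanoic_acid", "biocide"]

def toy_response(levels: dict[str, int]) -> float:
    """Stand-in response (e.g. % biodegradation) with an antagonistic interaction."""
    base = 40.0 + 20.0 * levels["TEA"] + 10.0 * levels["isononanoic_acid"]
    return base - 25.0 * levels["biocide"] - 10.0 * levels["TEA"] * levels["biocide"]

def main_effect(runs: list[tuple[dict[str, int], float]], component: str) -> float:
    """Difference between mean response at the high and low level of one component."""
    high = [y for x, y in runs if x[component] == 1]
    low = [y for x, y in runs if x[component] == 0]
    return sum(high) / len(high) - sum(low) / len(low)

if __name__ == "__main__":
    runs = []
    for levels in product([0, 1], repeat=len(COMPONENTS)):   # 2^3 = 8 runs
        setting = dict(zip(COMPONENTS, levels))
        runs.append((setting, toy_response(setting)))
    for c in COMPONENTS:
        print(f"main effect of {c}: {main_effect(runs, c):+.1f}")
```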
130

Learning via Query Synthesis

Alabdulmohsin, Ibrahim Mansour 07 May 2017
Active learning is a subfield of machine learning that has been successfully used in many applications. One of its main branches is query synthesis, where the learning agent constructs artificial queries from scratch in order to reveal sensitive information about the underlying decision boundary. It has found applications in areas such as adversarial reverse engineering, automated science, and computational chemistry. Nevertheless, the existing literature on membership query synthesis has generally focused on finite concept classes or toy problems, with limited extension to real-world applications. In this thesis, I develop two spectral algorithms for learning halfspaces via query synthesis. The first algorithm is a maximum-determinant convex optimization method, while the second is a Markovian method that relies on Khachiyan's classical update formulas for solving linear programs. The general theme of these methods is to construct an ellipsoidal approximation of the version space and then to synthesize queries via spectral decomposition. Moreover, I also describe how these algorithms can be extended to other settings, such as pool-based active learning. Having demonstrated that halfspaces can be learned quite efficiently via query synthesis, the second part of this thesis proposes strategies for mitigating the risk of reverse engineering in adversarial environments. One approach that can be used to render query synthesis algorithms ineffective is to implement a randomized response. In this thesis, I propose a semidefinite program (SDP) for learning a distribution of classifiers, subject to the constraint that any individual classifier picked at random from this distribution provides reliable predictions with a high probability. This algorithm is then justified both theoretically and empirically. A second approach is to use a non-parametric classification method, such as similarity-based classification. In this thesis, I argue that learning via the empirical kernel maps, also commonly referred to as 1-norm Support Vector Machine (SVM) or Linear Programming (LP) SVM, is the best method for handling indefinite similarities. The advantages of this method are established both theoretically and empirically.
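The "ellipsoidal approximation of the version space, then synthesize queries via spectral decomposition" theme can be seen in miniature below: consistent halfspace directions are rejection-sampled, their covariance stands in for the ellipsoid, and the next query is taken along its top eigenvector. This toy 2-D sketch illustrates the general theme only under these stated simplifications; it is not the thesis's maximum-determinant or Khachiyan-based algorithm.

```python
# Toy query synthesis for a 2-D homogeneous halfspace via a sampled "version space".
import numpy as np

rng = np.random.default_rng(1)
true_w = np.array([0.8, -0.6])                     # hidden halfspace normal (unit length)

def oracle(x: np.ndarray) -> int:
    """Membership oracle answering synthesized queries."""
    return 1 if true_w @ x >= 0 else -1

def consistent_sample(queries, labels, n=20000):
    """Directions on the unit circle that agree with all answers so far."""
    w = rng.normal(size=(n, 2))
    w /= np.linalg.norm(w, axis=1, keepdims=True)
    for x, y in zip(queries, labels):
        w = w[np.sign(w @ x) == y]
    return w

queries, labels = [], []
for _ in range(6):
    vs = consistent_sample(queries, labels)
    # Widest axis of the sampled version space = top eigenvector of its covariance.
    eigvals, eigvecs = np.linalg.eigh(np.cov(vs.T))
    query = eigvecs[:, np.argmax(eigvals)]         # synthesize a query along that axis
    queries.append(query)
    labels.append(oracle(query))

estimate = consistent_sample(queries, labels).mean(axis=0)
estimate /= np.linalg.norm(estimate)
print("estimated halfspace normal:", np.round(estimate, 3), "true:", true_w)
```

Each synthesized query roughly bisects the remaining set of consistent directions, which is why halfspaces can be pinned down with few queries; the thesis achieves this with proper ellipsoid updates rather than sampling.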
