  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
71

EMULATION FOR MULTIPLE INSTRUCTION SET ARCHITECTURES

Christopher M Wright (10645670) 07 May 2021 (has links)
System emulation and firmware re-hosting are popular techniques for answering security and performance questions, such as whether a firmware image contains security vulnerabilities or meets timing requirements when run on a specific hardware platform. While this motivation for emulation and binary analysis has previously been explored and reported, getting started with work or research in the field is difficult. Further, the actual firmware re-hosting for various Instruction Set Architectures (ISAs) is usually time-consuming and difficult, and at times may seem impossible. To this end, I provide a comprehensive guide for the practitioner or system emulation researcher, along with tools that work for a large number of ISAs, reducing the challenges of getting re-hosting working or of porting previous work to new architectures. I lay out the common challenges faced during firmware re-hosting, explain successive steps, and survey common tools for overcoming these challenges. I provide emulation classifications on five axes: emulator method, system type, fidelity, emulator purpose, and control. These classifications and comparison criteria enable the practitioner to determine the appropriate tool for emulation. I use them to categorize popular works in the field and present 28 common challenges faced when creating, emulating, and analyzing a system, from obtaining the firmware to post-emulation analysis. I then introduce HQTracer, a HALucinator [1]/QEMU [2] tracing tool; PMatch, a binary function matching tool; and GHALdra, an emulator that works for more than 30 different ISAs and enables high-level emulation.
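The high-level emulation idea the abstract relies on — intercepting the firmware's calls into its hardware abstraction layer and servicing them with host-side handlers instead of emulating the peripheral hardware — can be sketched in a few lines. This is a toy illustration of the general technique, not GHALdra's or HALucinator's actual API; the addresses and handler names are invented.

```python
# Toy high-level emulation (HLE) dispatcher: when the emulated program
# calls an address known to be a HAL function, run a host-side handler
# instead of the firmware's machine code. Addresses and names are
# illustrative, not from any real firmware.

class HLEEmulator:
    def __init__(self):
        self.handlers = {}   # address -> host-side handler
        self.output = []

    def intercept(self, address):
        """Register a host-side handler for a HAL function address."""
        def register(fn):
            self.handlers[address] = fn
            return fn
        return register

    def call(self, address, *args):
        """Emulate a call: divert to a handler if one is registered."""
        if address in self.handlers:
            return self.handlers[address](self, *args)
        raise NotImplementedError(f"no handler for {address:#x}")

emu = HLEEmulator()

@emu.intercept(0x0800_1234)          # e.g. the firmware's uart_send()
def uart_send(emu, byte):
    emu.output.append(byte)          # model the peripheral at a high level
    return 0                         # status code the firmware expects

for b in b"ok":
    emu.call(0x0800_1234, b)
print(bytes(emu.output))             # b'ok'
```

Because the peripheral is modeled at the function level rather than the register level, the same handler works unchanged across ISAs — which is what makes the approach attractive for multi-architecture re-hosting.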
72

Study of Photovoltaic System Integration in Microgrids through Real-Time Modeling and Emulation of its Components Using HiLeS

Gutiérrez Galeano, Alonso 06 September 2017 (has links)
Nowadays, the integration of photovoltaic systems into electrical grids is encouraging the expansion of microgrids. However, this integration has also increased the complexity of the power system, leading to new research challenges. Some of these challenges require the development of innovative modeling approaches able to deal with this increasing complexity. This thesis therefore contributes an innovative component-based methodology for real-time modeling and emulation of photovoltaic systems integrated into microgrids. The proposed modeling approach uses the Systems Modeling Language (SysML) to describe the structure and behavior of integrated photovoltaic systems, taking their multidisciplinary characteristics into account. In addition, this study presents the High-Level Specification of Embedded Systems (HiLeS) framework, which automatically transforms the developed SysML models into embedded code and Petri nets. Automatic code generation and Petri-net-based design make it possible to exploit FPGAs, with their high adaptability and processing performance, for real-time emulation of photovoltaic systems. The dissertation focuses on partially shaded photovoltaic systems and flexible power electronics architectures because of their relevant influence on current microgrids. Furthermore, this research perspective is used to evaluate control and supervision strategies under both normal and fault conditions. This work represents the first step toward an innovative real-time approach for modeling and emulating complex photovoltaic systems with modularity, a high degree of scalability, and non-uniform working conditions. Finally, experimental and analytical results validate the proposed methodology.
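The component models such a methodology starts from can be as simple as the single-diode photovoltaic cell equation, evaluated pointwise by the real-time emulator. The sketch below uses that textbook model; the parameter values are illustrative round numbers, not values from the thesis.

```python
import math

# Simplified single-diode PV cell model: I = Iph - I0*(exp(V/(n*Vt)) - 1).
# Iph: photocurrent, I0: diode saturation current, n: ideality factor,
# Vt: thermal voltage. All parameter values here are illustrative.
def pv_current(v, iph=5.0, i0=1e-9, n=1.3, vt=0.02585):
    return iph - i0 * (math.exp(v / (n * vt)) - 1.0)

# At short circuit (V = 0) the cell delivers the full photocurrent;
# the current collapses as V approaches the open-circuit voltage.
print(round(pv_current(0.0), 3))          # 5.0
print(pv_current(0.7) < pv_current(0.5))  # True
```

A partially shaded string would be modeled by composing several such cells with different `iph` values plus bypass diodes, which is exactly the kind of structure a SysML component model captures before code generation.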
73

The development of a system that emulates percussion to detect the borders of the liver

Rauch, Hanz Frederick 03 1900 (has links)
Thesis (MScEng (Mechanical and Mechatronic Engineering))--University of Stellenbosch, 2009. / Percussion is a centuries-old bedside diagnostic technique used to diagnose various conditions of the thorax and abdomen, among them abnormalities of the liver. The physician taps the patient's skin in the area of interest and listens to the generated sound to determine the qualities or presence of the underlying tissue or organ. The research in this thesis views percussion as a system identification method that uses an impulse response to identify the underlying system. A design employing an electromagnetic actuator as input pulse generator and an accelerometer as impulse response recorder was motivated and built. Tests were performed on volunteers, and the recorded signals were analysed to find methods of identifying the presence of the liver. The analyses matched signals to models, or simply extracted signal features, and related these model parameters or features to the presence of the liver. Matching was done using statistical pattern recognition methods, and the true presence of the liver was established using MR images. Features extracted from the test data could not be matched to the presence of the liver with sufficient confidence, leading to the conclusion that either the test, the apparatus, or the analysis was flawed. The lack of success compelled a further test on a mock-up of the problem: a silicone model with an anomaly representing the organ under test. Results from these tests showed that signals should be measured further from the actuator, and that the approach followed could lead to successful location of the anomaly and discrimination between subtle differences in its consistency. It is concluded that further research should aim first to validate percussion as performed by the physician, and then increase complexity in a phased manner, validating results and apparatus at each step. The approach followed was perhaps too bold in light of the lack of fundamental understanding of percussion and its underlying mechanisms.
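The signal-processing step the thesis describes — extracting features from a recorded impulse response and feeding them to a classifier — might look like the following sketch. The signals here are synthetic decaying sinusoids standing in for accelerometer recordings; no real tissue parameters are implied.

```python
import math

# Toy feature extraction for percussion-as-system-identification: treat the
# recorded tap response as a decaying sinusoid and use its dominant
# frequency as a feature. A classifier would threshold on such features.

def impulse_response(freq_hz, decay, fs=1000, n=256):
    """Synthetic stand-in for an accelerometer recording of one tap."""
    return [math.exp(-decay * t / fs) * math.sin(2 * math.pi * freq_hz * t / fs)
            for t in range(n)]

def dominant_freq(signal, fs=1000):
    """Crude DFT peak search (magnitude only); good enough for a sketch."""
    n = len(signal)
    best_k, best_mag = 0, 0.0
    for k in range(1, n // 2):
        re = sum(s * math.cos(2 * math.pi * k * t / n) for t, s in enumerate(signal))
        im = sum(s * math.sin(2 * math.pi * k * t / n) for t, s in enumerate(signal))
        mag = re * re + im * im
        if mag > best_mag:
            best_k, best_mag = k, mag
    return best_k * fs / n

# A "dull" tap over a solid organ would ring at a different frequency and
# decay faster than a resonant one over air-filled tissue.
print(dominant_freq(impulse_response(62.5, 5.0)))   # 62.5
```

In the thesis the ground truth for the classifier came from MR images; here the "ground truth" is simply the frequency the synthetic signal was generated with.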
74

Evading Greek models : Three studies on Roman visual culture

Habetzeder, Julia January 2012 (has links)
For a long time, Roman ideal sculptures have primarily been studied within the tradition of Kopienkritik. Owing to some of the theoretical assumptions tied to this practice, several important aspects of Roman visual culture have been neglected as the overall aim of such research has been to gain new knowledge regarding assumed Classical and Hellenistic models. This thesis is a collection of three studies on Roman ideal sculpture. The articles share three general aims: 1. To show that the practice of Kopienkritik has, so far, not produced convincing interpretations of the sculpture types and motifs discussed. 2. To show that aspects of the methodology tied to the practice of Kopienkritik (thorough examination and comparison of physical forms in sculptures) can, and should, be used to gain insights other than those concerning hypothetical Classical and Hellenistic model images. 3. To present new interpretations of the sculpture types and motifs studied, interpretations which emphasize their role and importance within Roman visual culture. The first article shows that reputed, post-Antique restorations may have an unexpected—and unwanted—impact on the study of ancient sculptures. This is examined by tracing the impact that a restored motif ("Satyrs with cymbals") has had on the study of an ancient sculpture type: the satyr ascribed to the two-figure group "The invitation to the dance". The second article presents and interprets a sculpture type which had previously gone unnoticed—The satyrs of "The Palazzo Massimo-type". The type is interpreted as a variant of "The Marsyas in the forum", a motif that was well known within the Roman cultural context. The third article examines how, and why, two motifs known from Classical models were changed in an eclectic fashion once they had been incorporated into Roman visual culture. 
The motifs concerned are kalathiskos dancers, which were transformed into Victoriae, and pyrrhic dancers, which were also reinterpreted as mythological figures: the curetes. / At the time of the doctoral defense, the following papers were unpublished and had a status as follows: Paper 1: Accepted. Paper 3: Accepted.
75

Massively parallel computing for particle physics

Preston, Ian Christopher January 2010 (has links)
This thesis presents methods to run scientific code safely on a global-scale desktop grid. Current attempts to harness the world's idle desktop computers face obstacles such as donor security, portability of code, and privilege requirements. Nereus, a Java-based architecture, is a novel framework that overcomes these obstacles and allows the creation of a globally scalable desktop grid capable of executing Java bytecode. However, most scientific code is written for the x86 architecture. To enable the safe execution of unmodified scientific code, we created JPC, a pure-Java x86 PC emulator. The Nereus framework is applied to two tasks: a trivially parallel data generation task, BlackMax, and a parallelization and fault tolerance framework, Mycelia. Mycelia is an implementation of the Map-Reduce parallel programming paradigm. BlackMax is a microscopic black-hole event generator of direct relevance for the Large Hadron Collider (LHC). The Nereus-based BlackMax adaptation dramatically speeds up the production of data, limited only by the number of desktop machines available.
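Mycelia implements the Map-Reduce paradigm; the pattern itself fits in a few lines. This is the generic textbook skeleton, not Mycelia's actual API.

```python
from collections import defaultdict

# Generic Map-Reduce skeleton (the paradigm Mycelia implements): a map
# phase emits (key, value) pairs, a shuffle groups them by key, and a
# reduce phase folds each group. This is the textbook pattern, not
# Mycelia's API.
def map_reduce(inputs, mapper, reducer):
    groups = defaultdict(list)
    for item in inputs:                 # map phase
        for key, value in mapper(item):
            groups[key].append(value)   # shuffle: group by key
    return {k: reducer(k, vs) for k, vs in groups.items()}  # reduce phase

# Classic word count as a usage example.
docs = ["event generator", "event data"]
counts = map_reduce(
    docs,
    mapper=lambda doc: [(w, 1) for w in doc.split()],
    reducer=lambda _w, ones: sum(ones),
)
print(counts["event"])  # 2
```

In a desktop grid, each `mapper` call would run on a donated machine inside the JVM sandbox, with the framework handling shuffle and fault tolerance.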
76

TCP in Wireless Networks: Challenges, Optimizations and Evaluations

Alfredsson, Stefan January 2005 (has links)
This thesis presents research on transport-layer behavior in wireless networks. As the Internet expands its reach to include mobile devices, it has become apparent that some of the original design assumptions of the dominant transport protocol, TCP, are approaching their limits. A key feature of TCP is its congestion control algorithm, built on the assumption that packet loss is normally very low and that packet loss is therefore a sign of network congestion. This holds true for wired networks, but mobile wireless networks also exhibit non-congestion-related packet loss; the varying signal power inherent in mobility and handover between base stations are two example causes. This thesis provides an overview of the challenges for TCP in wireless networks, together with a compilation of suggested TCP optimizations for these environments. A TCP modification called TCP-L is proposed: by making a reliability tradeoff, it allows an application to increase its performance in environments where residual bit errors normally degrade throughput. The performance of TCP-L is experimentally evaluated with an implementation in the Linux kernel. Transport-layer performance in a 4G scenario is also experimentally investigated, focusing on the impact of the link-layer design and its parameterization. Further, for emulation-based protocol evaluations, controlled packet-loss and bit-error generation is shown to be an important aspect.
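TCP-L's reliability tradeoff — delivering payloads despite residual bit errors rather than discarding them and triggering retransmission and congestion backoff — can be illustrated schematically. The frame layout and the tolerance flag below are invented for this sketch and are not TCP-L's actual mechanism or wire format.

```python
# Schematic receiver illustrating TCP-L's reliability tradeoff: with the
# tradeoff enabled, a segment whose checksum fails is delivered anyway
# (the application tolerates bit errors) instead of being dropped and
# misread as a sign of congestion. Frame layout and flag are invented
# for this sketch.

def checksum(payload: bytes) -> int:
    return sum(payload) % 256

def receive(frames, tolerate_errors=False):
    delivered, dropped = [], 0
    for payload, cksum in frames:
        if checksum(payload) == cksum or tolerate_errors:
            delivered.append(payload)
        else:
            dropped += 1        # standard TCP: this loss looks like congestion
    return delivered, dropped

good = (b"audio-chunk-1", checksum(b"audio-chunk-1"))
corrupt = (b"audio-chunk-\x02", checksum(b"audio-chunk-2"))  # flipped byte

print(receive([good, corrupt])[1])                             # 1 dropped
print(len(receive([good, corrupt], tolerate_errors=True)[0]))  # 2 delivered
```

The tradeoff only makes sense for error-tolerant payloads such as media streams, which is why it is exposed to the application rather than applied unconditionally.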
77

Reduced-scale similitude models for systems engineering and "compacted-time" simulated experimentation: application to a microgrid including electrochemical storage

Varais, Andy 10 January 2019 (has links) (PDF)
This thesis was carried out in collaboration with SCLE SFE (ENGIE Group) and the Laplace laboratory. It develops a methodology for building so-called "similitude" models at reduced power and time scales. These models can serve for system analysis, but they are particularly useful for real-time experimentation on energy systems. Indeed, experiments are very often conducted at reduced scale for reasons of size, cost, and so on. Some parts of these experiments can be "emulated" (simulated physically by power devices) while others consist of physical components: this is known as a Hardware-in-the-Loop (HIL) procedure. Although the scale-reduction approach is general in principle, our main field of application is microgrids integrating intermittent renewable sources coupled with storage components. Consequently, our work focuses on building power/energy/time similitude models of renewable sources and storage units. The notion of time reduction, which we call "compacted virtual time", is one of the key concepts of this work. Particular attention is paid to the development of a physical emulator of an electrochemical battery, since energy storage is a key element of a microgrid; moreover, the battery exhibits strong nonlinearities that the similitude scaling must take into account, which makes it non-trivial. Once these models are developed, they are tested through simulated experiments using physical emulators at reduced power scale and in compacted virtual time. These tests also make it possible to contrast the notions of "model-copy" emulators, in which a model is used to reproduce the behavior of the system, and "image-copy" emulators, in which the behavior of the system is reproduced from one of its real components (for example, a cell for the battery).
78

Parallelizing hierarchical cache units for ICN routers

Mansilha, Rodrigo Brandão January 2017 (has links)
A key challenge in Information-Centric Networking (ICN) is to develop cache units (also called Content Stores, CS) that meet three requirements: large storage space, fast operation, and affordable cost. The so-called Hierarchical Content Store (HCS) is a promising approach to satisfying these requirements jointly. It exploits the temporal correlation between content requests to predict future demands; for example, a user who requests the first minute of a movie is assumed to also request the second. In theory, this premise enables proactive transfers of content from a relatively large but slow cache area (Layer 2, L2) to a faster but smaller one (Layer 1, L1), increasing the throughput and size of the CS by an order of magnitude while keeping the cost constant. However, developing an HCS raises several practical challenges. It requires a careful coupling of the L2 and L1 memory levels that accounts for their transfer rates and sizes, which depend on both hardware specifications (e.g., L2 read rate, use of multiple physical SSDs in parallel, bus speed) and software aspects (e.g., the SSD controller, memory management). In this context, this thesis presents two main contributions. First, we propose an architecture that overcomes the inherent HCS bottlenecks by parallelizing multiple HCS instances; in summary, the proposed scheme avoids race conditions through deterministic partitioning of content requests among multiple threads. Second, we propose a methodology for investigating HCS development that combines emulation techniques with analytical modeling. The proposed methodology offers advantages over prototyping- and simulation-based methods: the L2 is emulated, enabling the investigation of a broader variety of boundary scenarios (in both hardware and software terms) than would be possible through prototyping with current technologies, while real code from a prototype is used for the other HCS components (e.g., L1, layer management, and the API), providing more realistic results than simulation alone.
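The deterministic partitioning that removes the need for synchronization can be sketched as follows: each request is routed by a hash of the content name, so requests for the same content always land on the same thread and per-thread caches never race. The hash choice here is illustrative, not the one used in the thesis.

```python
from hashlib import sha256

# Sketch of lock-free partitioning for parallel HCS instances: each
# content name maps deterministically to one worker, so requests for the
# same content never race across threads. The hash choice is illustrative.

NUM_WORKERS = 4

def worker_for(content_name: str) -> int:
    digest = sha256(content_name.encode()).digest()
    return int.from_bytes(digest[:4], "big") % NUM_WORKERS

# The same name always lands on the same worker, so each worker's cache
# (its own L1/L2 pair) needs no cross-thread synchronization.
w = worker_for("/video/lecture-01/segment-0007")
assert w == worker_for("/video/lecture-01/segment-0007")
print(0 <= w < NUM_WORKERS)  # True
```

The cost of this design is potential load imbalance when a few contents are very popular, which is one of the tradeoffs an emulation-plus-analytical-model methodology can quantify.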
79

Contributions to experimentation on large-scale distributed systems

Nussbaum, Lucas 04 December 2008 (has links) (PDF)
This thesis belongs to the field of experimentation on distributed systems, and in particular their testing and validation. Alongside the classical evaluation methods (modeling, simulation, experimental platforms such as PlanetLab or Grid'5000), methods based on emulation and virtualization offer a promising alternative. They make it possible to execute the real application under study while presenting it with a synthetic environment corresponding to the desired experimental conditions: experiments can thus be carried out at low cost under varied conditions, including some that would be impossible to reproduce in a real environment. But such emulation tools cannot be used without addressing questions about their realism and scalability. In this work, we take an incremental approach to building an emulation platform for studying large-scale peer-to-peer systems. We begin by comparing the available software solutions for emulating network links, then illustrate their use, notably by studying a complex network application: TUNS, an IP-over-DNS tunnel. We then build our emulation platform, P2PLab, using one of the network emulators studied earlier together with a network topology model suited to the study of peer-to-peer systems. We propose a lightweight virtualization solution that achieves a good folding ratio (a large number of emulated nodes per physical machine). After validating the platform, we use it to study the BitTorrent peer-to-peer file distribution protocol in experiments involving nearly 15,000 participating nodes.
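Software link emulation of the kind compared in this thesis boils down to imposing latency, bandwidth, and loss on traversing packets. A minimal, deterministic sketch of that model (seeded loss, fixed one-way latency, serialization delay from bandwidth) is given below; it is the simplest possible link model, not any specific emulator's.

```python
import random

# Minimal software link emulator: fixed one-way latency, bandwidth-limited
# serialization delay, and random loss. Packets are (send_time_s, size_bytes)
# tuples; the return value is the list of arrival times of surviving packets.
# This is the simplest possible link model, not a specific emulator's.

def emulate_link(packets, latency_s=0.05, bandwidth_bps=1_000_000,
                 loss_rate=0.01, seed=42):
    rng = random.Random(seed)          # seeded for reproducible experiments
    clock = 0.0                        # time the link becomes free again
    arrivals = []
    for send_time, size_bytes in packets:
        # Serialize onto the link: wait for it to be free, then transmit.
        clock = max(clock, send_time) + size_bytes * 8 / bandwidth_bps
        if rng.random() < loss_rate:
            continue                   # packet lost on the link
        arrivals.append(clock + latency_s)
    return arrivals

pkts = [(i * 0.001, 1500) for i in range(10)]   # 10 x 1500-byte packets
out = emulate_link(pkts, loss_rate=0.0)
print(len(out))          # 10
print(out[0] > 0.05)     # True: at least the one-way latency
```

Realism questions of the sort the thesis raises show up immediately even in this toy: for example, whether loss should be independent per packet (as here) or bursty, and whether queuing before the link should be bounded.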
80

A Synthesizable VHDL Behavioral Model of A DSP On Chip Emulation Unit

Li, Qingsen January 2003 (has links)
This thesis describes the design of a VHDL behavioral model of a DSP On-Chip Emulation unit. The prototype for this design is the OnCE port of the Motorola DSP56002. The capabilities of the On-Chip Emulation unit are accessible through four pins, which allow the user to step through a program, to set breakpoints that stop program execution at a specific address, and to examine the contents of registers, memory, and pipeline information. The detailed design, including input/output signals and sub-blocks, is presented in this thesis. The user interacts with the DSP through a GUI on the host computer via the RS232 port; an interface between the RS232 port and the On-Chip Emulation unit is therefore designed as well. The functionality is designed to be the same as described by Motorola and is verified by a test bench. The writing of the test bench, the test sequences, and the results are also presented.
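The style of debug interaction such an on-chip emulation port provides — stepping, breakpoints, and register inspection driven by a host over a narrow interface — can be sketched as a command dispatcher over a toy core. The command names and the core below are invented for illustration; they are not the DSP56002 OnCE protocol.

```python
# Toy on-chip-emulation debug stub: STEP / BREAK / READREG commands against
# a tiny accumulator machine. The command set and core are invented for
# illustration; they are not Motorola's OnCE protocol.

class ToyCore:
    def __init__(self, program):
        self.program, self.pc, self.acc = program, 0, 0
        self.breakpoints = set()

    def step(self):
        op, arg = self.program[self.pc]
        if op == "ADD":
            self.acc += arg
        self.pc += 1

    def run(self):
        while self.pc < len(self.program):
            if self.pc in self.breakpoints:
                return "break"        # halt before executing this address
            self.step()
        return "done"

def debug_command(core, cmd, arg=None):
    """Host-side entry point: in hardware this would arrive over RS232."""
    if cmd == "BREAK":
        core.breakpoints.add(arg)
    elif cmd == "STEP":
        core.step()
    elif cmd == "READREG":
        return {"pc": core.pc, "acc": core.acc}[arg]

core = ToyCore([("ADD", 1), ("ADD", 2), ("ADD", 4)])
debug_command(core, "BREAK", 2)      # breakpoint at address 2
core.run()                           # stops before the third instruction
print(debug_command(core, "READREG", "acc"))   # 3
```

In the actual design, the equivalent of `debug_command` is implemented in hardware behind the four-pin interface, and the GUI on the host marshals these commands over the serial link.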
