21

Towards the development of a reliable reconfigurable real-time operating system on FPGAs

Hong, Chuan January 2013 (has links)
In the last two decades, Field Programmable Gate Arrays (FPGAs) have rapidly developed from simple “glue-logic” into a powerful platform capable of implementing a System on Chip (SoC). Modern FPGAs achieve not only higher performance than General Purpose Processors (GPPs), thanks to hardware parallelism and dedicated logic, but also better programming flexibility than Application Specific Integrated Circuits (ASICs). Moreover, the hardware programming flexibility of FPGAs can be harnessed further, for both performance and manipulability, through Dynamic Partial Reconfiguration (DPR). DPR allows a part or parts of a circuit to be reconfigured at run-time without interrupting the operation of the rest of the chip. As a result, hardware resources can be exploited more efficiently, since chip area can be reused by swapping hardware tasks in and out in a time-multiplexed fashion. In addition, DPR improves fault tolerance: transient errors such as Single Event Upsets (SEUs) and permanent damage can be mitigated by reconfiguring the FPGA to avoid error accumulation. Furthermore, power and heat can be reduced by removing finished or idle tasks from the chip. For all these reasons, DPR has significantly promoted Reconfigurable Computing (RC) and has become a very active research topic. However, since hardware integration is increasing at an exponential rate and applications are becoming more complex with growing user demands, high-level application design and low-level hardware implementation are increasingly separated and layered. As a consequence, users gain little advantage from DPR without the support of system-level middleware. To bridge the gap between high-level applications and low-level hardware implementation, this thesis presents important contributions towards a Reliable, Reconfigurable and Real-Time Operating System (R3TOS), which lets users exploit DPR from the application level by managing the complex hardware in the background. In R3TOS, hardware tasks behave just like software tasks: they can be created, scheduled, and mapped to different computing resources on the fly. The novel contributions of this work are: 1) an efficient implementation of a task scheduler and allocator; 2) a novel real-time scheduling algorithm (FAEDF) and two efficacious allocation algorithms (EAC and EVC), which schedule tasks in real time and circumvent emerging faults while maintaining more compact empty areas; 3) the design and implementation of a fault-tolerant microprocessor that harnesses existing FPGA resources such as Error Correction Code (ECC) and configuration primitives; 4) a novel symmetric multiprocessing (SMP)-based architecture that supports a shared-memory programming interface; and 5) two demonstrations of the integrated system: a) a K-Nearest Neighbour classifier, a non-parametric classification algorithm widely used in data mining, and b) pairwise sequence alignment with the Smith-Waterman algorithm, used for identifying similarities between two biological sequences. R3TOS provides considerably higher flexibility to support scalable multi-user, multitasking applications, whereby resources are managed dynamically with respect to user requirements and hardware availability. As a result, not only are hardware resources used more efficiently, but system performance is also significantly increased.
Results show that scheduling and allocation efficiency is improved by up to 2x, and overall system performance is further improved by ~2.5x. Future work includes the development of a Network-on-Chip (NoC), which is expected to further increase communication throughput, as well as the standardization and automation of the system design flow, to be carried out in line with other high-level synthesis tools so that application developers can benefit from the system more efficiently.
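The FAEDF scheduler and the EAC/EVC allocators are specific to the thesis and are not reproduced here; as a rough illustration of the underlying problem — deciding which hardware task to configure next and where to place it on the device — a minimal sketch can combine plain earliest-deadline-first ordering with first-fit placement on a one-dimensional strip of reconfigurable columns. The task names, sizes and deadlines below are hypothetical.

```python
import heapq

class HwTask:
    def __init__(self, name, columns, exec_time, deadline):
        self.name = name            # task identifier (hypothetical)
        self.columns = columns      # number of reconfigurable columns the task occupies
        self.exec_time = exec_time  # time the task needs once configured
        self.deadline = deadline

def first_fit(free, need):
    """Return the first start column of a contiguous free run of `need` columns, or None."""
    run = 0
    for col, is_free in enumerate(free):
        run = run + 1 if is_free else 0
        if run == need:
            return col - need + 1
    return None

def edf_schedule(tasks, total_columns):
    """Toy EDF scheduler with first-fit placement; returns (start_time, task, column) tuples."""
    free = [True] * total_columns
    ready = [(t.deadline, i, t) for i, t in enumerate(tasks)]
    heapq.heapify(ready)                        # earliest deadline at the top
    running, now, schedule = [], 0, []          # running: (finish_time, start_col, task)
    while ready or running:
        # release the columns of tasks that have finished by `now`
        done = [(f, s, t) for f, s, t in running if f <= now]
        running = [(f, s, t) for f, s, t in running if f > now]
        for _, start, t in done:
            for c in range(start, start + t.columns):
                free[c] = True
        if ready:
            _, _, t = ready[0]
            start = first_fit(free, t.columns)
            if start is not None:               # place the most urgent task if it fits
                heapq.heappop(ready)
                for c in range(start, start + t.columns):
                    free[c] = False
                running.append((now + t.exec_time, start, t))
                schedule.append((now, t.name, start))
                continue
        if running:                             # otherwise wait for the next completion
            now = min(f for f, _, _ in running)
        elif ready:
            raise ValueError("task is wider than the device")
    return schedule

# Hypothetical task set on a device with 16 reconfigurable columns
tasks = [HwTask("fir", 4, 3, 10), HwTask("fft", 6, 5, 8), HwTask("aes", 3, 2, 6)]
print(edf_schedule(tasks, 16))
```

Unlike this baseline, the thesis's algorithms additionally steer placement around emerging faults and keep the remaining empty area compact.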
22

Contributions to the Transaction-Level Modeling of Systems-on-a-Chip / Contributions à la modélisation transactionnelle des systèmes sur puce

Funchal, Giovanni 18 November 2011 (has links)
Cette thèse porte sur la modélisation des systèmes-sur-puce au niveau transactionnel, une approche connue sous le nom de prototypage virtuel. Les prototypes virtuels sont d'un grand intérêt industriel parce qu'ils permettent de démarrer certaines activités (telles que le développement du logiciel embarqué) plus tôt dans le flot de conception. Du fait que cette approche est relativement nouvelle, un grand nombre de problèmes de modélisation sont encore ouverts. En particulier, il est essentiel de comprendre à quel point un modèle donné est proche du système hypothétique qu'il est sensé représenter. C'est un problème difficile car nous n'avons pas les moyens de réaliser une comparaison objective, vu que le système modélisé n'est pas disponible physiquement au moment de la modélisation. Nous avons besoin d'une méthodologie pour traiter ces difficultés, qui s'étendent au-delà de simples exigences objectives et de l'analyse de besoin fonctionnel. Dans ce contexte, l'industrie cherche des directives de modélisation claires, fondées sur l'expérience et l'identification des pratiques actuelles et des problèmes récurrents. Dans cette thèse, nous présentons une étude compréhensive d'un large éventail de considérations techniques impliquées dans le flot de conception du logiciel et du matériel qui constituent un système-sur-puce typique. Nous utilisons ces connaissances pour identifier une source particulière de divergence entre le modèle et le système modélisé. Nous montrons que cette divergence masque certains bogues du logiciel sur le prototype virtuel. Nous mettons en évidence la pratique de modélisation à l'origine de cette situation. Deuxièmement, nous essayons d'identifier des problèmes liés à l'utilisation du langage de modélisation dans les pratiques actuelles. Nous prétendons que, d'une part, ces problèmes sont dûs à la confusion entre les concepts de la modélisation transactionnelle et leur implémentation dans le langage standard de l'industrie ; et d'autre part que ce n'est qu'en menant des comparaisons avec un autre langage que l'on pourrait quantifier leur étendue. Pour ce faire, nous proposons un cadre d'application spécialement conçu pour guider l'étude des concepts fondamentaux de la modélisation transactionnelle. Entre autres, nous introduisons une nouvelle méthode pour la modélisation du temps dans les simulateurs à événements discrets. Cette méthode dévoile la différence entre une action instantanée et une tâche avec durée. Ensuite, elle l'exploite de plusieurs manières : pour enrichir les outils de visualisation de traces ; pour dériver une définition claire de chevauchement de tâches ; pour accélérer la simulation à moindre effort, en parallélisant l'exécution d'actions se déroulant à des temps simulés différents ; et pour révéler des bogues subtiles en tenant compte du fait que les actions à des temps simulés différents ne sont pas forcément synchronisées. / This thesis deals with modeling of Systems-on-a-Chip (SoC) at the Transactional Level (TLM), an approach also known as virtual prototyping. Virtual prototypes are of special industrial interest because they allow some activities (such as embedded software development) to start earlier in the design flow. Because this approach is relatively new, several modeling issues are still open. In particular, there is an increasing need for understanding how close a given model is to the hypothetical system it is intended to represent. 
This is a difficult problem, especially because we lack a way to perform an objective comparison, since the modeling activity precedes the physical existence of the modeled system. A methodology is required to address these concerns, going beyond classical objective and functional quality requirements. In this context, the industry is looking for clear modeling guidelines based on experience and on the identification of current modeling practices and known recurring problems. In this thesis, we present a comprehensive study of a range of technical considerations involved in the design flow of the hardware and software that constitute a typical SoC. We use this knowledge to identify one particular source of divergence between the model and the modeled system. We show that this divergence causes some software bugs to become hidden in the virtual prototype, and we trace this situation back to the corresponding modeling practice. Secondly, we attempt to identify language-dependency issues in the modeling practices. We claim that it is only by confronting the current industry-standard language with an alternative one that we can measure the extent to which common modeling issues are caused by mixing up conceptual transaction-level modeling with its implementation in that language. Therefore, we propose a complete experimentation framework specifically designed to help in the study of the fundamental concepts beneath TLM. Amongst other features, this framework introduces a novel approach to modeling time in discrete-event simulators that distinguishes between instantaneous actions and tasks that take time. We show that this notion can be exploited to enrich trace visualization tools; to derive a clear definition of overlapping tasks; to effortlessly achieve an important simulation speedup by enabling parallel execution of actions occurring at different simulation times; and to expose subtle bugs by removing the constraint that actions at different simulation times are necessarily synchronized.
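As a rough illustration of the duration-aware view of simulated time described above (not the thesis's framework), the sketch below models each task as an explicit half-open interval of simulated time rather than an instantaneous event, which makes a definition of overlapping tasks, and a check for unsynchronized accesses to a shared resource, straightforward. The event names and the example trace are hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Task:
    name: str
    start: int   # simulated time at which the task begins
    end: int     # simulated time at which it completes (duration = end - start)

def overlaps(a: Task, b: Task) -> bool:
    """Two tasks with duration overlap iff their half-open intervals intersect."""
    return a.start < b.end and b.start < a.end

def report_conflicts(trace, shared_resource):
    """Flag pairs of overlapping tasks that touch the same resource — a toy stand-in
    for the kind of subtle bug a purely instantaneous-event model can hide."""
    tasks = [t for t, res in trace if res == shared_resource]
    conflicts = []
    for i, a in enumerate(tasks):
        for b in tasks[i + 1:]:
            if overlaps(a, b):
                conflicts.append((a.name, b.name))
    return conflicts

# Hypothetical trace: (task, resource it accesses)
trace = [
    (Task("cpu_write", 10, 14), "RAM"),
    (Task("dma_read", 12, 20), "RAM"),   # overlaps cpu_write on RAM
    (Task("timer_irq", 15, 15), "IRQ"),  # zero-duration, i.e. an instantaneous action
]
print(report_conflicts(trace, "RAM"))    # [('cpu_write', 'dma_read')]
```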
23

Deterministisk Komprimering/Dekomprimering av Testvektorer med Hjälp av en Inbyggd Processor och Faxkodning / Deterministic Test Vector Compression/Decompression Using an Embedded Processor and Facsimile Coding

Persson, Jon January 2005 (has links)
Modern semiconductor design methods make it possible to design increasingly complex systems-on-a-chip (SOCs). Testing such SOCs becomes highly expensive due to rapidly increasing test data volumes, with longer test times as a result. Several approaches exist that compress the test stimuli and add hardware for decompression. This master's thesis presents a test data compression method based on a modified facsimile code. An embedded processor on the SOC is used to decompress the data and apply it to the cores of the SOC. The use of already existing hardware reduces the need for additional hardware.

Test data may be rearranged in several ways, which affects the compression ratio. Several modifications are discussed and tested. To be realistic, a decompression algorithm has to be able to run on a system with limited resources. With an assembler implementation it is shown that the proposed method can be effectively realized in such environments. Experimental results, where the proposed method is applied to benchmark circuits, show that the method compares well with similar methods.

A method of including the response vector is also presented. This approach makes it possible to abort a test as soon as an error is discovered, while still compressing the data used. To correctly compare the test response with the expected one, the data needs to include don't-care bits. The technique uses a mask vector to mark the don't-care bits. The test vector, response vector and mask vector are merged in four different ways to find the best arrangement.
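The modified facsimile code itself is not described in the abstract, but the basic idea it builds on — treating each test vector like a fax scan line, filling don't-care bits from the previous (reference) vector so that long identical runs appear, and then run-length encoding the result — can be sketched as follows. The vector values, the 'X' don't-care convention and the all-zero initial reference are illustrative assumptions.

```python
def fill_dont_cares(vector, reference):
    """Replace 'X' bits with the corresponding bit of the previous (reference) vector,
    creating long identical runs, as in facsimile coding of similar scan lines."""
    return "".join(r if v == "X" else v for v, r in zip(vector, reference))

def run_length_encode(bits):
    """Simple run-length code: list of (bit, run length) pairs."""
    runs, current, count = [], bits[0], 0
    for b in bits:
        if b == current:
            count += 1
        else:
            runs.append((current, count))
            current, count = b, 1
    runs.append((current, count))
    return runs

def compress(test_vectors):
    """Toy compressor: each vector is encoded after being filled from the previous one."""
    reference = "0" * len(test_vectors[0])      # assumed all-zero first reference
    encoded = []
    for v in test_vectors:
        filled = fill_dont_cares(v, reference)
        encoded.append(run_length_encode(filled))
        reference = filled                      # becomes the next reference line
    return encoded

# Hypothetical scan-in vectors with don't-care ('X') positions
vectors = ["0X0011XX", "0100X1X1", "XXXX1111"]
for runs in compress(vectors):
    print(runs)
```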
24

Automated Bus Generation for Multi-processor SoC Design

Ryu, Kyeong Keol 12 July 2004 (has links)
In the design of a multi-processor System-on-a-Chip (SoC), the bus architecture typically comes to the forefront because system performance depends not only on the speed of the Processing Elements (PEs) but also on the bus architecture of the system. An efficient bus architecture with effective arbitration for reducing contention on the bus plays an important role in maximizing performance. Therefore, among the many issues in multi-processor SoC research, this dissertation focuses on two issues related to the bus architecture. One issue is how to quickly and easily design an efficient bus architecture for an SoC. The second issue is how to quickly explore the design space across performance-influencing factors to achieve a high-performance bus system. The objective of this research is to provide a Computer-Aided Design (CAD) tool with which the user can quickly explore the SoC bus design space in search of a high-performance bus system. From a straightforward description of the numbers and types of PEs, non-PEs, memories and buses (including, for example, the address and data bus widths of the buses and memories), our bus synthesis tool, called BusSynth, generates a Register-Transfer Level (RTL) Verilog Hardware Description Language (HDL) description of the specified bus system. The user can utilize this RTL Verilog in bus-accurate simulations to more quickly arrive at an efficient bus architecture for a multi-processor SoC. The proposed methodology gives designers a great benefit in fast design-space exploration of bus systems across a variety of performance-influencing factors such as bus types, PE types and software programming styles (e.g., pipelined parallel or functional parallel). We also show that BusSynth can efficiently generate bus systems in a matter of seconds, as opposed to weeks of design effort to integrate each system component by hand. Moreover, unlike previous related work, BusSynth can support a wide variety of PEs, memory types and bus architectures (including a hybrid bus architecture) in search of a high-performance SoC.
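BusSynth's input format and generated RTL are not shown in the abstract; the sketch below only illustrates the general idea of template-based bus generation, emitting a skeleton Verilog module from a compact description of bus masters and bus widths. The module name, port conventions and the arbiter stub are assumptions, not BusSynth's actual output.

```python
def generate_bus_rtl(name, masters, addr_width, data_width):
    """Emit a skeleton Verilog module for a shared bus with one request/grant pair per
    master; the arbitration logic is left as a stub. Purely illustrative."""
    ports = ["    input  wire clk,", "    input  wire rst_n,"]
    for m in masters:
        ports.append(f"    input  wire req_{m},")
        ports.append(f"    output reg  gnt_{m},")
    ports.append(f"    inout  wire [{addr_width - 1}:0] addr,")
    ports.append(f"    inout  wire [{data_width - 1}:0] data")
    body = [
        f"module {name} (",
        "\n".join(ports),
        ");",
        "    // TODO: arbitration (e.g. round-robin) over the req_*/gnt_* pairs",
        "endmodule",
    ]
    return "\n".join(body)

# Hypothetical system: two processing elements and a DMA engine on one bus
print(generate_bus_rtl("soc_bus", ["pe0", "pe1", "dma"], addr_width=32, data_width=64))
```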
25

Réseau sur puce sécurisé pour applications cryptographiques sur FPGA / Secure Network-on-Chip for cryptographic applications on FPGA

Druyer, Rémy 26 October 2017 (has links)
Que ce soit au travers des smartphones, des consoles de jeux portables ou bientôt des supercalculateurs, les systèmes sur puce (System-on-Chip (SoC)) ont vu leur utilisation largement se répandre durant ces deux dernières décennies. Ce phénomène s’explique notamment par leur faible consommation de puissance au regard des performances qu’ils sont capables de délivrer, et du large panel de fonctions qu’ils peuvent intégrer. Les SoC s’améliorant de jour en jour, ils requièrent de la part des systèmes d’interconnexions qui supportent leurs communications des performances de plus en plus élevées. Pour répondre à cette problématique, les réseaux sur puce (Network-on-Chip (NoC)) ont fait leur apparition. En plus des ASIC, les circuits reconfigurables FPGA sont un des choix possibles lors de la réalisation d’un SoC. Notre première contribution a donc été de réaliser et d’étudier les performances du portage du réseau sur puce générique Hermes, initialement conçu pour ASIC, sur circuit reconfigurable. Cela nous a permis de confirmer que l’architecture du système d’interconnexions doit être adaptée à celle du circuit pour pouvoir atteindre les meilleures performances possibles. Par conséquent, notre deuxième contribution a été la conception de l’architecture de TrustNoC, un réseau sur puce optimisé pour FPGA à hautes performances en latence, en fréquence de fonctionnement, et en quantité de ressources logiques occupées. Un autre aspect primordial qui concerne les systèmes sur puce, et plus généralement tous les systèmes numériques, est la sécurité. Notre dernière principale contribution a été d’étudier les menaces qui s’exercent sur les SoC durant toutes les phases de leur vie, puis de développer, à partir d’un modèle de menaces, des mécanismes matériels de sécurité permettant de lutter contre des détournements d’IP et des attaques logicielles. Nous avons également veillé à limiter au maximum le surcoût qu’engendrent les mécanismes de sécurité sur les performances du réseau sur puce. / Whether through smartphones, portable game consoles, or high-performance computing, Systems-on-Chip (SoC) have seen their use spread widely over the last two decades. This can be explained by the low power consumption of these circuits with regard to the performance they are able to deliver, and by the numerous functions they can integrate. Since SoCs are improving every day, they require ever higher performance from the interconnects that support their communications. In order to address this issue, Networks-on-Chip (NoC) have emerged. In addition to ASICs, FPGA circuits are one of the possible choices when designing an SoC. Our first contribution was therefore to port the generic Hermes NoC, initially designed for ASICs, to a reconfigurable circuit and to study its performance. This allowed us to confirm that the architecture of the interconnection system must be adapted to that of the circuit in order to achieve the best possible performance. Thus, our second contribution was to design TrustNoC, an NoC optimized for FPGA platforms, with low latency, a high operating frequency, and a moderate amount of logic resources required for implementation. Security is also an essential aspect of systems-on-chip, and more generally of all digital systems. Our last main contribution was to study the threats that target SoCs throughout their life cycle, and then, starting from a threat model, to develop and integrate hardware security mechanisms into TrustNoC in order to counter IP hijacking and software attacks.
During the design of these security mechanisms, we tried to limit their overhead on the NoC performance as much as possible.
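Hermes, the NoC ported in the first contribution, is commonly described as a wormhole-switched 2D mesh with deterministic XY routing. A minimal sketch of XY routing (independent of the internals of either Hermes or TrustNoC, with hypothetical router coordinates) is:

```python
def xy_route(src, dst):
    """Deterministic XY routing on a 2D mesh: move along X until the destination
    column is reached, then along Y. Returns the list of hops (router coordinates)."""
    (x, y), (dx, dy) = src, dst
    path = [(x, y)]
    while x != dx:
        x += 1 if dx > x else -1
        path.append((x, y))
    while y != dy:
        y += 1 if dy > y else -1
        path.append((x, y))
    return path

# Route a packet from router (0, 0) to router (2, 3) on the mesh
print(xy_route((0, 0), (2, 3)))
# [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (2, 3)]
```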
27

台灣IC設計業者與Fab廠間技術知識連結關係之研究-以系統單晶片(SoC)為例 / A Study of the Technical Knowledge Linkages between Taiwan's IC Design Houses and Fabs: The Case of the System-on-Chip (SoC)

蔡博文, Michael Tsai, Po-Wen Unknown Date (has links)
This study examines the technical knowledge linkages between Taiwan's IC design houses and their fabs, using the System-on-Chip (SoC) as the example. The main research questions are: (1) how do an IC design house's internal technical knowledge linkages affect its knowledge flows? (2) how do the characteristics of technical knowledge affect the knowledge linkages between IC design houses and fabs? (3) how do the knowledge linkages between IC design houses and fabs affect knowledge flows? The study follows a case-study method, interviewing six IC design houses and two fabs. The main conclusions are as follows.

I. Effects of the IC design house's internal knowledge linkages on its knowledge flows: (1) knowledge accumulation differs with the project organization structure — with heavyweight project teams, knowledge is accumulated mainly in people (cases A, C and F), whereas with autonomous teams, knowledge is accumulated in documents as well as in people (cases B, D and E); (2) when team members solve problems through face-to-face communication, whether formal or informal, and share their experience, knowledge creation through socialization is facilitated (cases A, B, C, D and E).

II. Effects of the characteristics of technical knowledge on the linkages between IC design houses and fabs: (1) the interaction mode differs with the path dependency of the technical knowledge — development projects with low path dependency adopt a joint-development model (cases A, D and E), while projects with high path dependency adopt an early-involvement model (cases B, C and F); (2) the demands placed on the fab's service experience also differ — high for projects with low path dependency (cases A, D and E) and low for projects with high path dependency (cases B, C and F); (3) path dependency does not affect the IC design house's use of shuttle runs early in the design phase to verify its latest design (cases A, B, C, D and E); (4) yield management differs as well — projects with low path dependency use systematic yield management (cases A and D), while projects with high path dependency use ad hoc yield management (cases B, C, E and F); company E relies only on mature process technology of low complexity with relatively high yield, and therefore does not need systematic yield management.

III. Effects of the knowledge linkages between IC design houses and fabs on knowledge flows: (1) when the fab provides rich service experience, the IC design house integrates technical knowledge more effectively during knowledge creation (cases A through F); (2) riding shuttle runs early in the design phase helps the IC design house build prototypes for knowledge creation (cases A, B, C, D and E); (3) when systematic yield management is adopted between the IC design house and the fab, it helps the IC design house cultivate boundary spanners for knowledge absorption (cases A and D).
28

Composants abstraits pour la vérification fonctionnelle des systèmes sur puce / High-level component-based models for functional verification of systems-on-a-chip

Romenska, Yuliia 10 May 2017 (has links)
Les travaux présentés dans cette thèse portent sur la modélisation, la spécification et la vérification des modèles des Systèmes sur Puce (SoCs) au niveau d’abstraction transactionnel et à un niveau d’abstraction plus élevé. Les SoCs sont hétérogènes : ils comprennent des composants matériels et des processeurs pour réaliser le logiciel incorporé, qui est en lien direct avec du matériel. La modélisation transactionnelle (TLM) basée sur SystemC a été très fructueuse à fournir des modèles exécutables des SoCs à un haut niveau d’abstraction, aussi appelés prototypes virtuels (VPs). Ces modèles peuvent être utilisés plus tôt dans le cycle de développement des logiciels et pour la validation des matériels réels. La vérification basée sur assertions (ABV) permet de vérifier les propriétés tôt dans le cycle de conception, de façon à trouver les défauts et à faire gagner le temps et l’effort nécessaires pour la correction de ces défauts. Les modèles TL peuvent être sur-contraints, c’est-à-dire qu’ils ne présentent pas tous les comportements du matériel. Ainsi, ceci ne permet pas la détection de tous les défauts de la conception. Nos contributions consistent en deux parties orthogonales et complémentaires : d’une part, nous identifions les sources des sur-contraintes dans les modèles TLM, qui apparaissent à cause de l’ordre d’interaction entre les composants. Nous proposons une notion d’ordre mou qui permet la suppression de ces sur-contraintes. D’autre part, nous présentons un mécanisme généralisé de stubbing qui permet la simulation précoce avec des prototypes virtuels SystemC/TLM. Nous offrons un jeu de patrons pour capturer les propriétés d’ordre mou et définissons une transformation directe de ces patrons en moniteurs SystemC. Notre mécanisme généralisé du stubbing permet la simulation précoce avec les prototypes virtuels SystemC/TLM, dans lesquels certains composants ne sont pas entièrement déterminés sur les valeurs des données échangées, l’ordre d’interaction et/ou le timing. Ces composants ne possèdent qu’une spécification abstraite, sous forme de contraintes entre les entrées et les sorties. Nous montrons que les problèmes essentiels de la synchronisation entre les composants peuvent être capturés à l’aide de notre simulation avec les stubs. Le mécanisme est générique ; nous mettons l’accent uniquement sur les concepts-clés, les principes et les règles qui rendent le mécanisme de stubbing implémentable et applicable aux études de cas industriels. N’importe quel langage de spécification satisfaisant nos exigences (par ex. le langage des ordres mous) peut être utilisé pour spécifier les composants, c’est-à-dire qu’il peut être branché au framework de stubbing. Nous fournissons une preuve de concept pour démontrer l’intérêt d’utiliser la simulation avec stubs pour la détection anticipée et la localisation des défauts de synchronisation du modèle. / This thesis deals with the modeling, specification and testing of models of Systems-on-a-Chip (SoCs) at the transaction abstraction level and higher. SoCs are heterogeneous: they comprise both hardware components and processors to execute embedded software, which closely interacts with hardware. SystemC-based Transaction Level Modeling (TLM) has been very successful in providing high-level executable component-based models for SoCs, also called virtual prototypes (VPs). These models can be used early in the design flow for the development of the software and the validation of the actual hardware.
For SystemC/TLM virtual prototypes, Assertion-Based Verification (ABV) allows property checking early in the design cycle, helping to find bugs early in the model and to save the time and effort needed to fix them. TL models can be over-constrained, which means that they do not represent all the behaviors of the hardware and thus do not allow the detection of some malfunctions of the prototype. Our contributions consist of two orthogonal and complementary parts: on the one hand, we identify sources of over-constraints in TL models appearing due to the order of interactions between components, and propose a notion of loose-ordering which allows these over-constraints to be removed. On the other hand, we propose a generalized stubbing mechanism which allows very early simulation with SystemC/TLM virtual prototypes. We propose a set of patterns to capture loose-ordering properties, and define a direct translation of these patterns into SystemC monitors. Our generalized stubbing mechanism enables early simulation with SystemC/TLM virtual prototypes in which some components are not entirely determined with respect to the values of the exchanged data, the order of the interactions and/or the timing. Those components have only very abstract specifications, in the form of constraints between inputs and outputs. We show that essential synchronization problems between components can be captured using our simulation with stubs. The mechanism is generic; we focus only on the key concepts, principles and rules which make the stubbing mechanism implementable and applicable to real, industrial case studies. Any specification language satisfying our requirements (e.g., loose-orderings) can be used to specify the components, i.e., it can be plugged into the stubbing framework. We provide a proof of concept to demonstrate the interest of using simulation with stubs for very early detection and localization of synchronization bugs in the design.
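The loose-ordering pattern language and its translation into SystemC monitors are not detailed in the abstract; the sketch below only conveys the flavor of such a runtime check: strict precedence constraints are verified against a trace, while unlisted pairs of events remain loosely ordered and may interleave freely. The event names and the pattern encoding are illustrative assumptions.

```python
class PrecedenceMonitor:
    """Checks only the strict precedence constraints; event pairs that are not listed
    are 'loosely ordered' and may interleave freely without being flagged."""

    def __init__(self, must_precede):
        self.must_precede = must_precede   # dict: event -> set of events it must precede
        self.seen = set()
        self.violations = []

    def observe(self, event):
        # if some event that `event` must precede has already been seen, the strict order is broken
        for later in self.must_precede.get(event, set()):
            if later in self.seen:
                self.violations.append((event, later))
        self.seen.add(event)

# Strict constraint: 'init' must precede both 'read' and 'write'.
# The relative order of 'read' and 'write' is left loose: either order is acceptable.
monitor = PrecedenceMonitor({"init": {"read", "write"}})
for e in ["write", "init", "read"]:      # hypothetical transaction trace
    monitor.observe(e)
print(monitor.violations)                # [('init', 'write')] -> 'init' came after 'write'
```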
29

Commissioning of a FPGA/DSP unit for Centralized Control of a Variable Phase Pole Multiphase Machine / Programmering av FPGA/DSP för Styrning av Multifasmaskin med Variabel fas pol Konfiguration

Hansson, Jonas January 2022 (has links)
Multiphase Electrical Machines (MPEMs) with variable phase-pole configurations have recently gained interest as they offer advantages such as a larger operating range and improved fault tolerance compared with more traditional MPEMs. The current state-of-the-art modeling and control method, Vector Space Decomposition (VSD), has the drawback of introducing discontinuities in the model when transitioning from one phase-pole configuration to another. The newly developed Harmonic Plane Decomposition (HPD) theory, an extension of VSD, solves this issue by unifying all possible phase-pole configurations into one transformation. To enable further research on HPD theory and MPEMs with variable phase-pole configurations, a test bench is desired to obtain experimental results. The Sustainable Power Lab at KTH already possesses an MPEM with varying phase-pole configurations, the Wound Independently-Controlled Stator Coils (WICSC) machine, but is missing a complete drive that can capitalize on the additional degrees of freedom that the machine offers. The aim of the thesis is to commission the test bench by implementing HPD control on a Digital Signal Processor (DSP) and designing the missing hardware required to close the signal chain. The HPD control was implemented on a Xilinx System on a Chip (SoC) programmed using C and hardware description languages. Parts of the control were tested, but due to issues with the data acquisition the complete control could not be fully tested and verified. For the missing hardware, three Printed Circuit Boards (PCBs) were designed for the test bench. Two of them were built and tested, and one PCB was designed but not fabricated during the project. / Multifasmaskiner med variabel fas pol konfiguration har på senaste tiden fått mer uppmärksamhet på grund av fördelar som mer operationspunkter och förbättrad feltolerans jämfört mot mer traditionella multifasmaskiner. Den modernaste metoden, Vector Space Decomposition (VSD), för att styra traditionella multifasmaskiner har nackdelen att modellen blir diskontinuerlig när man går från en fas pol konfiguration till en annan. Den nyutvecklade Harmonic Plane Decomposition (HPD) teorin, vilket är en påbyggnad på VSD, löser det problemet genom att samla alla möjliga fas pol konfigurationer till en transform. För att fortsätta forskningen kring HPD och multifasmaskiner med variabel fas pol konfiguration behövs en testbänk för att kunna skaffa experimentella resultat. Elkraftlabbet på KTH har redan investerat i en multifasmaskin med variabel fas pol konfiguration, men saknar en komplett växelriktare som kan utnyttja maskinens alla frihetsgrader. Målet med projektet var att färdigställa testbänken genom att implementera HPD styrningen på en processor och konstruera hårdvara som testbänken saknade. HPD styrningen implementerades på en systemkrets från Xilinx och programmerades med C och hårdvarubeskrivande språk. Delar av styrningen kunde testas men på grund av problem med inläsningen av data så gick det inte att verifiera hela den implementerade styrningen. Till testbänken konstruerades också tre kretskort varav två byggdes, testades och fick sin funktionalitet verifierad. Det andra kretskortet konstruerades men blev inte tillverkat under projektet.
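HPD itself is not reproduced here, but the VSD transformation it extends has a common textbook construction for a symmetrical n-phase winding with an odd phase number: one pair of rows per harmonic plane plus a zero-sequence row. A minimal sketch, with the five-phase machine and the balanced test current as illustrative assumptions:

```python
import numpy as np

def vsd_matrix(n):
    """Amplitude-invariant generalized Clarke (VSD) matrix for a symmetrical
    n-phase winding, n odd: (n-1)/2 orthogonal planes plus one zero-sequence row.
    A textbook construction, not the thesis's HPD transformation."""
    alpha = 2 * np.pi / n
    rows = []
    for k in range(1, (n - 1) // 2 + 1):           # one (alpha_k, beta_k) row pair per plane
        rows.append([np.cos(k * j * alpha) for j in range(n)])
        rows.append([np.sin(k * j * alpha) for j in range(n)])
    rows.append([0.5] * n)                          # zero-sequence component
    return (2.0 / n) * np.array(rows)

T = vsd_matrix(5)                                   # five-phase example
i_phase = np.cos(np.arange(5) * 2 * np.pi / 5)      # balanced fundamental phase currents
print(np.round(T @ i_phase, 6))                     # maps onto the first plane only
```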
30

Physical Design of Optoelectronic System-on-a-Chip/Package Using Electrical and Optical Interconnects: CAD Tools and Algorithms

Seo, Chung-Seok 19 November 2004 (has links)
Current electrical systems face a performance limitation imposed by the electrical interconnect technology, which determines the overall processing speed. In addition, electrical interconnect networks that contain many long-distance wires require high power to drive. One of the best ways to overcome these bottlenecks is the use of optical interconnect to limit interconnect latency and power. This research explores new computer-aided design algorithms for developing optoelectronic systems. These algorithms focus on place-and-route problems using optical interconnections, covering system-on-a-chip design as well as system-on-a-package design. Optical interconnection models are developed first; the CAD algorithms then incorporate these models and solve the place-and-route problems for optoelectronic systems. The MCNC and GSRC benchmark circuits are used to evaluate the algorithms.
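The thesis's optical interconnection models are not given in the abstract; as a rough illustration of why optical links help mainly on long nets, the sketch below compares a distributed-RC (Elmore) estimate for an unrepeated electrical wire with waveguide time-of-flight plus a fixed conversion overhead, and picks the faster option per net. All constants are placeholder assumptions, not values from the thesis.

```python
# Placeholder per-unit-length constants; real values depend on the technology.
R_PER_MM = 200.0        # ohm/mm, assumed electrical wire resistance
C_PER_MM = 0.2e-12      # F/mm, assumed electrical wire capacitance
N_GROUP = 1.5           # assumed group index of the optical waveguide
T_CONVERT = 50e-12      # s, assumed total E/O + O/E conversion overhead
C0_MM_PER_S = 3.0e11    # speed of light in mm/s

def electrical_delay(length_mm):
    """Distributed-RC (Elmore) estimate for an unrepeated wire: 0.38 * R * C * L^2."""
    return 0.38 * (R_PER_MM * length_mm) * (C_PER_MM * length_mm)

def optical_delay(length_mm):
    """Time of flight in the waveguide plus fixed conversion overhead."""
    return length_mm * N_GROUP / C0_MM_PER_S + T_CONVERT

def choose_interconnect(length_mm):
    return "optical" if optical_delay(length_mm) < electrical_delay(length_mm) else "electrical"

for length in (1, 5, 20):                # mm, hypothetical net lengths
    print(length, choose_interconnect(length))
```

With these placeholder constants the crossover falls at a few millimeters, which is why such a model steers only the long global nets toward the optical layer during place and route.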
