991

The Functional Paradigm in Embedded Real-Time Systems : A study in the problems and opportunities the functional programming paradigm entails to embedded real-time systems

Bergström, Emil, Tong, Shiliang January 2014 (has links)
This thesis explores the possibilities of the functional programming paradigm in the domain of hard embedded real-time systems. The implementation consists of re-implementing an already developed system that was written in the imperative and object-oriented paradigms. The functional implementation is compared with the original implementation in a study of code complexity, timing properties, CPU utilization and memory usage. Three of the periodic tasks of the original system are re-developed, and the whole development process is driven by the TDD development cycle. The programming language used is C, but with a functional approach to the problem. The conclusion of this thesis is that the functional implementation gives a more stable, reliable and readable system, but introduces some overhead in code volume, memory usage and CPU utilization. The main benefit of using the functional paradigm in this type of system is the ability to use the TDD development cycle; the main drawback is that the implementation relies heavily on garbage collection due to the enforcement of data immutability. We find in conclusion that the functional paradigm is only an option when the hardware is over-dimensioned, mainly in memory size and CPU power. When developing small systems with scarce resources, one should choose another paradigm.
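A minimal C sketch (not from the thesis) of why enforced immutability creates allocation pressure: every "update" returns a freshly allocated copy, and the superseded state becomes garbage to reclaim. The SensorState type and all names here are hypothetical.

```c
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical state record for illustration only. */
typedef struct {
    double temperature;
    double pressure;
} SensorState;

/* Pure, functional-style update: the input state is never modified. */
static const SensorState *with_temperature(const SensorState *s, double t)
{
    SensorState *copy = malloc(sizeof *copy);
    if (copy == NULL)
        return NULL;          /* caller must handle allocation failure */
    *copy = *s;               /* copy all fields ...                   */
    copy->temperature = t;    /* ... then override just one            */
    return copy;              /* old state becomes garbage to reclaim  */
}

int main(void)
{
    SensorState s0 = { 20.0, 101.3 };
    const SensorState *s1 = with_temperature(&s0, 21.5);
    if (s1 != NULL) {
        printf("old: %.1f  new: %.1f\n", s0.temperature, s1->temperature);
        free((void *)s1);
    }
    return 0;
}
```

In a mutable design the field would simply be overwritten in place, with no allocation at all; the copy per update is what drives the memory and garbage-collection overhead the thesis reports.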
992

Field Load Data Acquisition with regard to Vibration, Shock and Climate including Self-heating of ECUs

Yadur Balagangadhar, Nakul 02 March 2015 (has links) (PDF)
For the reliability design of engine control unit (ECU) devices in motor vehicles, knowledge of the stresses occurring in the field during the product service life is essential. In addition to environmental influences such as temperature, moisture and humidity, vibration and shock are in focus. To ensure that the products are robust while remaining easy and inexpensive to manufacture, they must be designed appropriately in the development process. For this, the load spectra for the mechanical influences of road and operating conditions are to be determined. The work also covers temperature and humidity values examined at typical installation locations. The essential everyday usage situations (commuters, taxi, farmer, ...) should be considered. To this end, existing measurement technology must be combined into a comprehensive logger system with communication to the vehicle.
993

From Models to Code and Back : A Round-trip Approach for Model-driven Engineering of Embedded Systems

Ciccozzi, Federico January 2014 (has links)
The complexity of modern systems is continuously growing, thus demanding novel, powerful development approaches. In this direction, model-driven and component-based software engineering have reached the status of promising paradigms for the development of complex systems. Moreover, in the embedded domain, their combination is believed to be helpful in handling the ever-increasing complexity of such systems. However, in order for them and their combination to definitively break through at industrial level, code generated from models through model transformations should preserve the system properties modelled at design level. This research work focuses on aiding the preservation of system properties throughout the entire development process, across different abstraction levels. Towards this goal, we provide the possibility of analysing and preserving system properties through a development chain consisting of three steps: (i) generation of code from system models, (ii) execution and analysis of the generated code, and (iii) back-propagation of analysis results to system models. With the introduction of steps (ii) and (iii), properties that are hard to predict at modelling level are compared with runtime values, which allows the developer to work exclusively at modelling level, focusing on optimising system models with the help of those values. / This doctoral thesis presents new and improved techniques for model-driven and component-based software development. The aim is to preserve system properties, as specified in models, through the different stages of development and as models are translated between abstraction levels and into code. We introduce the possibility of studying and preserving system properties by creating a three-step chain that: (i) generates code from the system model, (ii) executes and analyses the generated code, and (iii) finally feeds the analysis values back to the system model. The introduction of steps (ii) and (iii) makes it possible to carry out a detailed analysis of properties that are difficult, or even impossible, to study using system models alone. The advantage of this approach is that it relieves the developer of working directly with code to change system properties; instead, the developer can work entirely with models and focus on optimising system models using analysis values from test runs of the system. We are convinced that this kind of technique is necessary to support model-driven software development, since today's techniques do not allow system developers to specify, analyse and optimise system properties at model level. / The ever-growing complexity of modern software systems leads to the need for new and more effective development approaches. In this direction, model-driven engineering and component-based software engineering have been recognised as promising alternatives for the development of complex systems. Moreover, their interaction is considered particularly advantageous in managing the development of embedded systems. For these approaches, as well as their interaction, to definitively take hold in industry, the code generated from models through dedicated transformations must be able to preserve the system properties, both functional and extra-functional, defined in the models.
The research work presented in this doctoral thesis focuses on the preservation of system properties throughout the entire development process and across the different abstraction levels. The main result is an automatic round-trip engineering approach that supports the preservation of system properties through: (1) automatic code generation, (2) monitoring and analysis of the execution of the generated code on specific platforms, and (3) the possibility of propagating the results vertically from runtime back to the modelling level. In this way, properties that can only be estimated approximately by static means are evaluated against the values obtained at runtime. This makes it possible to optimise the system at design level through the models, rather than manually at code level, while ensuring the preservation of the system properties of interest.
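As a concrete (and entirely hypothetical) illustration of steps (ii) and (iii), the sketch below measures one runtime property of a piece of generated code, its execution time, and reports it against a budget that would have been annotated in the model; all names and the budget value are invented.

```c
#include <stdio.h>
#include <time.h>

/* Budget that, in a round-trip flow, would come from the model. */
#define MODELLED_WCET_NS 500000L

static void generated_task(void)   /* stands in for generated code */
{
    volatile long acc = 0;
    for (long i = 0; i < 100000; i++)
        acc += i;
}

int main(void)
{
    struct timespec t0, t1;

    /* Step (ii): execute and analyse the generated code. */
    clock_gettime(CLOCK_MONOTONIC, &t0);
    generated_task();
    clock_gettime(CLOCK_MONOTONIC, &t1);
    long ns = (t1.tv_sec - t0.tv_sec) * 1000000000L
            + (t1.tv_nsec - t0.tv_nsec);

    /* Step (iii): this measured value would be back-propagated into
     * the model so the developer can refine it at modelling level. */
    printf("measured %ld ns (modelled budget %ld ns)\n",
           ns, MODELLED_WCET_NS);
    return 0;
}
```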
994

Deflection and shape change of smart composite laminates using shape memory alloy actuators

Giles, Adam R. January 2005 (has links)
Shape memory materials have long been known to possess the unique ability of memorising their shape at a particular temperature. If these materials are pre-strained into the plastic range, they tend to recover their original unstrained shapes via phase transformation when subjected to heat stimulation. In recent years, this shape memory effect (SME), or strain recovery capability, has been explored in aerospace structures for actuating the real-time movement of structural components. Among all the shape memory materials, the nickel-titanium based shape memory alloy (SMA) has by far received the most attention because of its high recovery capability. Since SMAs are usually drawn into the form of wires, they are particularly suitable for integration into fibre-reinforced composite structures. These composite structures with integrated SMA wires are called smart adaptive structures. To achieve the SME, the wires are normally embedded in the host composite structure. In returning to their unstrained shape upon heating, they exert internal stresses on the host composite structure in which they are embedded, which can produce a controlled change in the shape of structural components. Although a significant amount of research has been dedicated to characterising and modelling the SME of SMA wires, little experimental work has been done to offer an in-depth understanding of the mechanical behaviour of these smart adaptive polymeric composite structures. This project examined the deflection and shape change of carbon/epoxy and glass/epoxy cantilever beams through heating and cooling of internal nitinol SMA wires/strips. The heat damage mechanism and cyclic behaviour are major factors in the operation of such a system and need to be clearly understood in order to develop, and gain confidence in, possible future smart actuating systems. The objectives of the research were therefore to investigate (i) the effect of embedding SMA wires on the mechanical properties of the host composite, (ii) the single-cycle and multiple-cycle actuation performance of smart beams, and (iii) the thermal effects of excessive heat on the surrounding composite matrix.
995

Microarchitecture and FPGA Implementation of the Multi-level Computing Architecture

Capalija, Davor 30 July 2008 (has links)
We design the microarchitecture of the Multi-Level Computing Architecture (MLCA), focusing on its Control Processor (CP). The design of the CP microarchitecture presents both opportunities and challenges, stemming from the coarse granularity of the tasks and the large number of inputs and outputs of each task instruction. We therefore explore changes to standard superscalar microarchitectural techniques. We design the entire CP microarchitecture and implement it on an FPGA using SystemVerilog. We synthesize and evaluate the MLCA system based on a 4-processor shared-memory multiprocessor. The performance of realistic applications shows scalable speedups comparable to those obtained in simulation. We believe that our implementation achieves low complexity in terms of FPGA resource usage and operating frequency. In addition, we argue that our design methodology allows the CP to scale as the entire system grows.
997

FPGA-based Soft Vector Processors

Yiannacouras, Peter 23 February 2010 (has links)
FPGAs are increasingly used to implement embedded digital systems because of their low time-to-market and low cost compared to integrated-circuit design, and their superior performance and area efficiency compared to a general-purpose microprocessor. However, the hardware design necessary to achieve this superior performance and area is very difficult, causing long design times and preventing widespread adoption of FPGA technology. The amount of hardware design can be reduced by employing a microprocessor for the less-critical computation in the system. Often this microprocessor is implemented using the FPGA's reprogrammable fabric as a soft processor, which preserves the benefits of a single-chip FPGA solution without specializing the device with dedicated hard processors. Current soft processors have simple architectures that provide performance adequate for only the least-critical computations. Our goal is to improve soft processors by scaling their performance and expanding their suitability to more critical computation. To this end we focus on the data parallelism found in many embedded applications and propose that soft processors be augmented with vector extensions to exploit this parallelism. We support this proposal through experimentation with a parameterized soft vector processor called VESPA (Vector Extended Soft Processor Architecture), which is designed, implemented, and evaluated on real FPGA hardware. The scalability of VESPA, combined with several other architectural parameters, can be used to finely span a large design space and derive a custom architecture exactly matching the needs of an application. Such customization is a key advantage for soft processors, since their architectures can be easily reconfigured by the end-user. Specifically, customizations can be made to the pipeline, functional units, and memory system within VESPA, and general-purpose overheads can be automatically eliminated. Comparing VESPA to manual hardware design, we observe a 13x speed advantage for hardware over our fastest VESPA, though this is significantly less than the 500x speed advantage over scalar soft processors. The performance-per-area of VESPA is also observed to be significantly higher than that of a scalar soft processor, suggesting that the addition of vector extensions makes more efficient use of silicon area for data-parallel workloads.
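As an illustration of the data parallelism that vector extensions like VESPA's are built to exploit, consider an element-wise kernel with no loop-carried dependence; this is generic C, not VESPA code, and the kernel is just the conventional SAXPY example.

```c
#include <stddef.h>

/* A classic data-parallel kernel: every iteration is independent,
 * so a vector unit can process many elements per cycle instead of
 * one element per scalar instruction. */
void saxpy(size_t n, float a, const float *x, float *y)
{
    for (size_t i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];   /* no dependence between iterations */
}
```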
998

Time-Triggered Program Monitoring

Thomas, Johnson January 2012 (has links)
Debugging is an important phase in the embedded software development cycle because it accounts for a high proportion of the overall product development cost. Debugging is difficult for real-time applications, as such programs are time-sensitive and must meet deadlines, often in a resource-constrained environment. A common approach for real-time systems is to monitor the execution instead of stepping through the program, because stepping will usually violate all deadline constraints. We consider a time-triggered approach for program monitoring at runtime, resulting in bounded and predictable overhead. In time-triggered execution monitoring, a monitor runs as a separate process in parallel with the application program and samples the program's state periodically to evaluate a set of properties. However, the time-triggered approach can have high overhead depending on the granularity of the monitoring effort. To reduce this overhead, we instrument the program with markers that allow the monitor to sample less frequently. This leads to two interesting problems: (a) where to place the markers in the code and (b) how to manipulate the markers. While related work investigates the first part, in this work we investigate the second. We examine different instrumentation schemes and propose two new schemes based on bitvectors that significantly reduce the overhead of time-triggered execution monitoring. Time-triggered execution monitoring suffers from several drawbacks: the time-triggered monitor requires certain synchronization features at the operating-system level, may suffer from various concurrency and synchronization dependencies in a real-time setting, and requires the embedded environment to provide multi-tasking features. To address these problems, we propose a new method called time-triggered self-monitoring, in which the program under inspection is instrumented so that it samples its own state periodically, without requiring assistance from an external monitor or an internal timer. The experimental results show that a time-triggered self-monitored program performs significantly better in terms of execution time, binary code size, and context switches than the same program monitored by an external time-triggered monitor.
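A hypothetical sketch of the basic idea of periodic state sampling is given below, using a POSIX timer signal; the thesis's self-monitoring scheme goes further by instrumenting the program so that neither an external monitor nor an internal timer is needed. All names here are invented.

```c
#include <signal.h>
#include <unistd.h>

static volatile sig_atomic_t watched = 0;  /* state under inspection */

/* Sampling point: evaluate a property over the sampled state.
 * Only async-signal-safe calls (write, alarm) are used here. */
static void sample(int signo)
{
    (void)signo;
    if (watched > 100)
        write(STDERR_FILENO, "property violated\n", 18);
    alarm(1);                               /* re-arm: next sample in 1 s */
}

int main(void)
{
    signal(SIGALRM, sample);
    alarm(1);                               /* first sampling point */
    for (;;) {
        watched++;                          /* the application's real work */
        usleep(10000);
    }
}
```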
999

Certification of an Instruction Set Simulator

Shi, Xiaomu 10 July 2013 (has links) (PDF)
This thesis presents our work on the certification of part of a C/C++ program named SimSoC (Simulation of System on Chip), which simulates the behaviour of architectures based on processors such as ARM, PowerPC, MIPS or SH4. A system-on-chip simulator can be used to develop the software of a specific embedded system, in order to shorten the development and test phases, in particular when the simulation speed is realistic (about 100 million instructions per second per core in the case of SimSoC). The resulting savings in development time and cost translate into fast, interactive design cycles, avoiding the burden of a hardware development platform. SimSoC is a complex piece of software, comprising about 60,000 lines of C++, integrating parts written in SystemC and non-trivial optimizations to reach high simulation speed. The part of SimSoC dedicated to the ARM processor, one of the most widespread in the SoC domain, transcribes the information contained in a reference manual of more than 1000 pages. Errors are unavoidable at this level of complexity, and some slipped through the intensive tests performed on the previous version of SimSoC for the ARMv5, which nevertheless managed to simulate a complete Linux boot. A critical question therefore arises: does the simulator actually simulate the real hardware? To provide positive answers to this question, our work aims at proving the correctness of a significant part of SimSoC, so as to increase the user's confidence in the simulator, notably for critical systems. We concentrated our efforts on a particularly sensitive component of SimSoC: the simulator of the ARMv6 instruction set, which is part of the current version of SimSoC. Approaches based on axiomatic semantics (Hoare logic, for example) are the most common for proving imperative programs. However, we preferred to try a less classical but more direct approach, based on the operational semantics of C: this had been possible in theory since such a semantics was formalized in Coq within the CompCert project, and it put the full power of Coq at our disposal for managing the complexity of the specification. To our knowledge, beyond the certification of a simulator, this is the first experiment in proving the correctness of C programs at this scale based on operational semantics. We define a representation of the ARM instruction set and its addressing modes formalized in Coq, thanks to an automatic generator taking as input the instruction pseudo-code from the ARM reference manual. We also generate the CompCert abstract syntax tree of the C code simulating the same instructions within Simlight, a lightweight version of SimSoC. From these two Coq representations, we can state and prove the correctness of Simlight, relying on the operational semantics defined in CompCert. This methodology has been applied to at least one instruction from each category of the ARM instruction set. Along the way, we improved the Coq technology available for performing inversions, a form of reasoning used intensively in this kind of situation.
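To make concrete what "C code simulating the instructions" looks like in a simulator of this kind, here is a hypothetical sketch of an instruction-set simulator's semantic function; the state layout and names are invented and are not actual Simlight/SimSoC code.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical processor state; the real simulator models the full
 * ARMv6 state described in the reference manual. */
typedef struct {
    uint32_t regs[16];   /* general-purpose registers r0-r15 */
    uint32_t cpsr;       /* current program status register  */
} Arm_State;

/* Semantics of a simplified ADD Rd, Rn, Rm: the kind of C function
 * whose agreement with the Coq model generated from the manual's
 * pseudo-code is what the correctness proof establishes. */
static void arm_add(Arm_State *st, unsigned d, unsigned n, unsigned m)
{
    st->regs[d] = st->regs[n] + st->regs[m];
    st->regs[15] += 4;   /* advance the program counter */
}

int main(void)
{
    Arm_State st = { { 0 }, 0 };
    st.regs[1] = 2;
    st.regs[2] = 40;
    arm_add(&st, 0, 1, 2);               /* ADD r0, r1, r2 */
    printf("r0 = %u\n", st.regs[0]);     /* prints 42 */
    return 0;
}
```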
1000

Structural And Functional Analysis Of Henry James

Celebi, Hatice 01 January 2003 (has links) (PDF)
The aim of this thesis is to analyse the narrative structure of the novel The Portrait of a Lady, to reveal how meaning is made, and to show how certain elements are transferred to the film version along with the consequent changes in meaning and emphasis. The structural analysis of The Portrait will chiefly rely on the scheme Shlomith Rimmon-Kenan draws in her book Narrative Fiction. The functional analysis showing the consequent changes in meaning and emphasis, on the other hand, will rely on Roland Barthes's theory of functions as discussed in his article "Structural Analysis of Narratives". In order to explore the narrative structure of The Portrait of a Lady, this thesis will examine story, characterization, time and focalization, and demonstrate the techniques Henry James uses in narration. In the functional analysis of the novel, the functions of the units discussed in the story and the characterization will be compared to the functions of the same units transferred to the adaptation, to reveal how the meaning and emphasis of the novel change.
