1001 |
Test Driven Development Of Embedded Systems
Ispir, Mustafa, 01 December 2004 (has links) (PDF)
In this thesis, the Test Driven Development (TDD) method is studied for use in developing embedded software, and the required framework is written for the Rhapsody development environment.
This thesis integrates TDD into a classical development cycle, without necessitating a transition to agile software development methodologies, and provides the unit test framework required to apply TDD to an object-oriented embedded software development project under a specific development environment and specific project conditions. A unit testing tool, RhapUnit, is developed specifically for this purpose, both to support the proposed approach and to illustrate its application.
The results show that RhapUnit supplies the testing functionality required for developing embedded software in Rhapsody with TDD. The development of RhapUnit is also a successful example of the application of TDD.
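RhapUnit's actual API is not reproduced in the abstract; the sketch below only illustrates the red-green unit-testing cycle that TDD relies on, using a minimal, framework-free C++ harness. All names (SatCounter, EXPECT_EQ) and values are hypothetical choices for illustration, not part of RhapUnit or the thesis.

```cpp
// Minimal sketch of a TDD-style unit test for an embedded-style module,
// written in plain C++ with no framework dependencies. Illustrative only;
// this is not RhapUnit's API.
#include <cstdio>

// Hypothetical unit under test: a saturating 8-bit counter of the kind
// often found in embedded firmware.
struct SatCounter {
    unsigned char value = 0;
    void increment() { if (value < 255) ++value; }
};

static int failures = 0;

#define EXPECT_EQ(expected, actual)                                          \
    do {                                                                     \
        if ((expected) != (actual)) {                                        \
            std::printf("FAIL %s:%d: expected %d, got %d\n",                 \
                        __FILE__, __LINE__, (int)(expected), (int)(actual)); \
            ++failures;                                                      \
        }                                                                    \
    } while (0)

// In TDD, these tests are written first (they fail), and the implementation
// above is then filled in until they pass -- the red/green half of the cycle.
void test_counter_starts_at_zero() {
    SatCounter c;
    EXPECT_EQ(0, c.value);
}

void test_counter_saturates_at_255() {
    SatCounter c;
    for (int i = 0; i < 300; ++i) c.increment();
    EXPECT_EQ(255, c.value);
}

int main() {
    test_counter_starts_at_zero();
    test_counter_saturates_at_255();
    if (failures == 0) std::printf("all tests passed\n");
    else               std::printf("%d failure(s)\n", failures);
    return failures == 0 ? 0 : 1;
}
```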
|
1002 |
Increasing the Performance and Predictability of the Code Execution on an Embedded Java Platform / Ansätze zur Steigerung der Leistungsfähigkeit und Vorhersagbarkeit der Codeausführung auf einer eingebetteten Java-Plattform
Preußer, Thomas, 21 October 2011 (links) (PDF)
This thesis explores the execution of object-oriented code on an embedded Java platform. It presents established approaches and derives new ones for the implementation of high-level object-oriented functionality and commonly expected system services. The goal of the developed techniques is to provide the architectural basis for efficient and predictable code execution.
The research vehicle of this thesis is the Java-programmed SHAP platform. It comprises the platform tool chain and the highly customizable SHAP bytecode processor. SHAP offers a fully operational embedded CLDC environment, in which the proposed techniques have been implemented, verified, and evaluated.
Two strands are followed to achieve the goal of this thesis. First, the sequential execution of bytecode is optimized through the joint effort of an optimizing offline linker and an on-chip application loader. Additionally, SHAP pioneers a reference coloring mechanism, which enables constant-time interface method dispatch that need not be backed by a large, sparse dispatch table.
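The abstract does not detail SHAP's reference coloring scheme itself; the sketch below only illustrates the general interface-coloring idea behind such constant-time dispatch: assign each interface a colour (a small slot index) so that no class implements two interfaces of the same colour, then dispatch through a compact per-class table indexed by that colour. All names and tables here are illustrative assumptions, not SHAP internals.

```cpp
// Sketch of interface colouring for constant-time interface method dispatch.
// The offline linker assigns every interface a colour such that no class
// implements two interfaces sharing a colour; dispatch then indexes a compact
// per-class itable by colour instead of searching a large sparse table.
#include <cstdio>
#include <vector>

using Method = void (*)();

void list_add()  { std::printf("List.add\n"); }
void list_size() { std::printf("Collection.size (List implementation)\n"); }

// Colours assigned offline: interface -> slot index.
enum InterfaceColour { COLOUR_LIST = 0, COLOUR_COLLECTION = 1 };

struct ClassInfo {
    // One entry per colour; each entry points at this class's method table
    // for the interface holding that colour (nullptr if not implemented).
    std::vector<const Method*> itable_by_colour;
};

int main() {
    static const Method list_itable[]       = { list_add };
    static const Method collection_itable[] = { list_size };

    ClassInfo array_list;
    array_list.itable_by_colour = { list_itable, collection_itable };

    // Constant-time dispatch: one colour lookup plus one method-slot lookup.
    const Method* itable = array_list.itable_by_colour[COLOUR_COLLECTION];
    itable[0]();  // calls the Collection.size implementation of this class
    return 0;
}
```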
Second, this thesis explores the implementation of essential system services within designated concurrent hardware modules. This effort is necessary to decouple the computational progress of the user application from the interference induced by time-sharing software implementations of these services. The concrete contributions comprise a spill-free on-chip stack, a predictable method cache, and a concurrent garbage collector.
Each proposed technique is described and evaluated after the relevant state of the art has been reviewed. This review is not limited to earlier small embedded approaches but also includes techniques that have proven successful on larger-scale platforms. Conversely, the chances that these platforms may benefit from the techniques developed for SHAP are also discussed.
|
1003 |
Modeling of the excited modes in inverted embedded microstrip lines using the finite-difference time-domain (FDTD) technique
Haque, Amil, 20 November 2008 (links)
This thesis investigates the presence of multiple quasi-TEM modes in inverted embedded microstrip lines. It has already been shown that parasitic modes exist in inverted embedded microstrips due to field leakage inside the dielectric substrate, especially for high dielectric constants (such as silicon). This thesis expands upon that work and characterizes those modes for a variety of geometrical dimensions. Chapter 1 focuses on the theory behind the different transmission line modes that may be present in inverted embedded microstrips. Based on the structure of the inverted embedded microstrip, the conventional microstrip mode, the quasi-conventional microstrip mode, and the stripline mode are expected. Chapter 2 discusses in detail the techniques used to decompose the total probed field into the various modes present in the inverted embedded microstrip lines. First, a short explanation of the finite-difference time-domain method, which is used for the simulation and modeling of inverted microstrips up to 50 GHz, is provided. Next, a flowchart of the process involved in decomposing the modes is laid out. Lastly, the challenges of this approach are highlighted to give an appreciation of the difficulty of obtaining accurate results.
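The thesis's 3-D FDTD models of inverted microstrips are not reproduced in the abstract; the minimal 1-D Yee-scheme sketch below only illustrates the leapfrog E/H time-stepping that the finite-difference time-domain method is built on. Grid size, Courant number and the Gaussian source are arbitrary illustrative choices, not values from the thesis.

```cpp
// Minimal 1-D FDTD (Yee) leapfrog update in free space, normalised units.
#include <cmath>
#include <cstdio>
#include <vector>

int main() {
    const int    nz      = 200;   // number of spatial cells
    const int    n_steps = 500;   // number of time steps
    const double S       = 0.5;   // Courant number (<= 1 for 1-D stability)

    std::vector<double> ez(nz, 0.0);  // electric field samples
    std::vector<double> hy(nz, 0.0);  // magnetic field samples

    for (int t = 0; t < n_steps; ++t) {
        // Update H from the spatial difference (curl) of E.
        for (int k = 0; k < nz - 1; ++k)
            hy[k] += S * (ez[k + 1] - ez[k]);

        // Update E from the spatial difference (curl) of H.
        for (int k = 1; k < nz; ++k)
            ez[k] += S * (hy[k] - hy[k - 1]);

        // Soft Gaussian-pulse excitation in the middle of the grid.
        const double t0 = 40.0, spread = 12.0;
        ez[nz / 2] += std::exp(-((t - t0) * (t - t0)) / (spread * spread));
    }

    // Print a few field samples so the program has observable output.
    std::printf("ez[50]=%g ez[100]=%g ez[150]=%g\n", ez[50], ez[100], ez[150]);
    return 0;
}
```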
Chapter 3 presents the results (dispersion diagrams and the values/percentages of the individual mode energies) obtained after running time-domain simulations for a variety of geometrical dimensions. Chapter 4 concludes the thesis by explaining the results in terms of the transmission line theory presented in Chapter 1 and by mentioning possible future work.
|
1004 |
High dielectric constant polymer nanocomposites for embedded capacitor applications
Lu, Jiongxin, 17 September 2008 (links)
Driven by the ever-growing demands for miniaturization, increased functionality, high performance and low cost in microelectronic products and packaging, embedded passives will be one of the key emerging techniques for realizing system integration, offering various advantages over traditional discrete components. Novel materials for embedded capacitor applications are in great demand, for which a high dielectric constant (k), low dielectric loss and process compatibility with printed circuit boards are the most important prerequisites. To date, no available material satisfies all of these prerequisites, and research is needed to develop materials for embedded capacitor applications. Conductive filler/polymer composites are likely candidate materials because they show a dramatic increase in their dielectric constant close to the percolation threshold. One of the major hurdles for this type of high-k composite is the high dielectric loss inherent in these systems.
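The percolation behaviour mentioned above is commonly described by a power-law divergence of the effective permittivity as the filler volume fraction approaches the percolation threshold from below. The scaling form shown here is the standard percolation-theory expression and is not quoted from the abstract; the exponent value is an assumption.

```latex
% Standard percolation-theory scaling for a conductor/polymer composite:
% the effective permittivity diverges as the filler volume fraction f
% approaches the percolation threshold f_c from below. The exponent
% s ~ 1 (three dimensions) is an assumed typical value.
\[
  \varepsilon_{\mathrm{eff}} \;\propto\; \varepsilon_{m}\,(f_{c}-f)^{-s},
  \qquad f < f_{c}, \quad s \approx 1,
\]
```

where \(\varepsilon_{m}\) is the permittivity of the polymer matrix. The divergence accounts for the dramatic rise in k near the threshold, while the onset of conducting filler networks near the same point is what tends to drive up the dielectric loss noted above.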
In this research, material and process innovations were explored to design and develop conductive filler/polymer nanocomposites based on nanoparticles with controlled parameters, achieving a balance between a sufficiently high dielectric constant and a low dielectric loss that satisfies the requirements for embedded decoupling capacitor applications.
This work involved the synthesis of metal nanoparticles with different parameters, including size, size distribution, aggregation and surface properties, and an investigation of how these varied parameters affect the dielectric properties of the high-k nanocomposites incorporating these metal nanoparticles. The dielectric behaviors of the nanocomposites were studied systematically over a range of frequencies to determine the dependence of the dielectric constant, dielectric loss tangent and dielectric strength on these parameters.
|
1005 |
Scratch-pad memory management for static data aggregates
Li, Lian, Computer Science & Engineering, Faculty of Engineering, UNSW, January 2007 (links)
Scratch-pad memory (SPM), a fast on-chip SRAM managed by software, is widely used in embedded systems. Compared to a hardware-managed cache, SPM can be more efficient in performance, power and area cost, and has the added advantage of better time predictability. In this thesis, SPMs should be seen in a general context. For example, in stream processors, a software-managed stream register file is usually used to stage data to and from off-chip memory. In IBM's Cell architecture, each co-processor has a software-managed local store for keeping data and instructions. SPM management is critical for SPM-based embedded systems.

In this thesis, we propose two novel methodologies, the memory colouring methodology and the perfect colouring methodology, to place the static data aggregates of a program, such as arrays and structs, in SPM. Our methodologies are dynamic in the sense that some data aggregates can be swapped into and out of SPM during program execution. To this end, a live range splitting heuristic is introduced in order to create potential data transfer statements between SPM and off-chip memory.

The memory colouring methodology is a general-purpose compiler approach. The novelty of this approach lies in partitioning an SPM into a pseudo register file and then generalising existing graph colouring algorithms for register allocation to colour data aggregates. In this thesis, a scheme for partitioning an SPM into a pseudo register file is introduced. This methodology is inter-procedural and therefore operates on the interference graph for the data aggregates in the whole program. Different graph colouring algorithms may give rise to different results due to the live range splitting and spilling heuristics used. As a result, two representative graph colouring algorithms, George and Appel's iterative-coalescing and Park and Moon's optimistic-coalescing, are generalised and evaluated for SPM allocation.

Like memory colouring, perfect colouring is also inter-procedural. The novelty of this second methodology lies in formulating the SPM allocation problem as an interval colouring problem. The interval colouring problem is NP-hard and no widely accepted approximation algorithms exist. The key observation is that the interference graphs for data aggregates in many embedded applications form a special class of superperfect graphs. This has led to the development of two additional SPM allocation algorithms. While differing in whether live range splits and spills are done sequentially or together, both algorithms place data aggregates in SPM based on the cliques in an interference graph. In both cases, optimality is guaranteed: all data aggregates in an interference graph can be placed in SPM if the given SPM size is no smaller than the chromatic number of the graph.

We have developed two memory colouring algorithms and two perfect colouring algorithms for SPM allocation and evaluated them using a set of embedded applications. Our results show that both methodologies are efficient and effective in handling large-scale embedded applications. While neither methodology outperforms the other consistently, perfect colouring has yielded better overall results on the set of benchmarks used in our experiments. All these algorithms are expected to be valuable; for example, they can be made available as part of the same compiler framework to assist the embedded designer in exploring a large number of optimisation opportunities for a particular embedded application.
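The memory-colouring and perfect-colouring algorithms themselves are not given in the abstract. The sketch below only illustrates the underlying problem formulation, placing interfering data aggregates into non-overlapping SPM address intervals, using a naive first-fit heuristic; the aggregate sizes, interference edges and 64-byte SPM capacity are made-up illustrative values.

```cpp
// Sketch of SPM placement as interval colouring: each data aggregate needs a
// contiguous block of SPM bytes (an "interval of colours") that must not
// overlap the blocks of aggregates it interferes with (i.e. that are live at
// the same time). This greedy first-fit pass is for illustration only and is
// not the thesis's memory-colouring or perfect-colouring algorithm.
#include <cstdio>
#include <string>
#include <vector>

struct Aggregate {
    std::string name;
    int size;                     // bytes required in SPM
    std::vector<int> interferes;  // indices of simultaneously live aggregates
    int offset = -1;              // assigned SPM offset (-1 = spilled to DRAM)
};

int main() {
    const int spm_size = 64;  // illustrative SPM capacity in bytes
    std::vector<Aggregate> aggs = {
        {"arrayA",  32, {1}},     // interferes with structB
        {"structB", 24, {0, 2}},  // interferes with arrayA and arrayC
        {"arrayC",  40, {1}},     // interferes with structB only
    };

    for (auto& a : aggs) {
        // First-fit: take the first interval that fits in SPM and is disjoint
        // from the interval of every already-placed interfering neighbour.
        for (int off = 0; off + a.size <= spm_size && a.offset < 0; ++off) {
            bool clash = false;
            for (int n : a.interferes) {
                const Aggregate& b = aggs[n];
                if (b.offset >= 0 && off < b.offset + b.size && b.offset < off + a.size)
                    clash = true;
            }
            if (!clash) a.offset = off;
        }
        if (a.offset >= 0)
            std::printf("%s -> SPM [%d, %d)\n", a.name.c_str(), a.offset, a.offset + a.size);
        else
            std::printf("%s -> spilled to off-chip memory\n", a.name.c_str());
    }
    return 0;
}
```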
|
1006 |
Embedded speech recognition systems
Cheng, Octavian, January 2008 (links)
Apart from recognition accuracy, decoding speed and vocabulary size, another point of consideration when developing a practical ASR application is the adaptability of the system. An ASR system is more useful if it can cope with changes that are introduced by users, for example, new words and new grammar rules. In addition, the system can also automatically update the underlying knowledge sources, such as language model probabilities, for better recognition accuracy. Since the knowledge sources need to be adaptable, it is inflexible to combine them statically, because on-line modification becomes difficult once all the knowledge sources have been combined into one static search space. The second objective of the thesis is to develop an algorithm which allows dynamic integration of knowledge sources during decoding. In this approach, each knowledge source is represented by a weighted finite state transducer (WFST). The knowledge source that is subject to adaptation is factorized from the entire search space. The adapted knowledge source is then combined with the others during decoding. In this thesis, we propose a generalized dynamic WFST composition algorithm, which avoids the creation of non-coaccessible paths, performs weight look-ahead and does not impose any constraints on the topology of the WFSTs. Experimental results on the Wall Street Journal (WSJ1) 20k-word trigram task show that our proposed approach has better word accuracy versus real-time factor characteristics than other dynamic composition approaches.
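The generalized dynamic composition algorithm itself is not reproduced in the abstract. The sketch below only illustrates the basic idea of lazy WFST composition, expanding composed states on demand during decoding rather than building the full static search space up front, and it deliberately omits the epsilon handling, weight look-ahead and coaccessibility filtering that the thesis addresses. The toy transducers and integer labels are illustrative assumptions.

```cpp
// Minimal sketch of lazy (on-the-fly) composition of two epsilon-free WFSTs
// in the tropical semiring: composed states (s1, s2) are created only when
// they are reached, and arcs are produced by matching T1 output labels with
// T2 input labels.
#include <cstdio>
#include <map>
#include <queue>
#include <utility>
#include <vector>

struct Arc { int ilabel, olabel, next; double weight; };
struct Fst { std::vector<std::vector<Arc>> arcs; };  // arcs[state] = outgoing arcs

int main() {
    // T1 maps input label 1 to output label 10 with weight 0.5;
    // T2 maps input label 10 to output label 20 with weight 1.0.
    Fst t1{{ {{1, 10, 1, 0.5}}, {} }};
    Fst t2{{ {{10, 20, 1, 1.0}}, {} }};

    std::map<std::pair<int, int>, int> state_id;  // (s1, s2) -> composed state
    std::queue<std::pair<int, int>> frontier;     // composed states to expand

    auto get_state = [&](int s1, int s2) {
        auto it = state_id.find({s1, s2});
        if (it != state_id.end()) return it->second;
        int id = (int)state_id.size();            // lazily allocate a new state
        state_id[{s1, s2}] = id;
        frontier.push({s1, s2});
        return id;
    };

    get_state(0, 0);  // composed start state
    while (!frontier.empty()) {
        auto [s1, s2] = frontier.front();
        frontier.pop();
        int src = state_id[{s1, s2}];
        for (const Arc& a1 : t1.arcs[s1])
            for (const Arc& a2 : t2.arcs[s2])
                if (a1.olabel == a2.ilabel) {     // match T1 output with T2 input
                    int dst = get_state(a1.next, a2.next);
                    // Tropical semiring: weights add along a path.
                    std::printf("arc %d -%d:%d/%.1f-> %d\n",
                                src, a1.ilabel, a2.olabel, a1.weight + a2.weight, dst);
                }
    }
    return 0;
}
```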
|