101 |
Scheduling for composite event detection in wireless sensor networks / Unknown Date
Wireless sensor networks are used in areas that are inaccessible or inhospitable, or where continuous monitoring is required. The main use of such networks is event detection: monitoring a particular environment for an event such as fire or flooding. Composite event detection breaks the detection of an event down into the specific conditions that must be present for the event to occur, so that each sensor node does not need to carry every sensing component necessary to detect the event. Since energy efficiency is important, the sensor nodes need to be scheduled so that they consume as little energy as possible, extending the network lifetime. In this thesis, a solution to the sensor Scheduling for Composite Event Detection (SCED) problem is presented as a way to improve the network lifetime when using composite event detection. / by Arny I. Ambrose. / Thesis (M.S.C.S.)--Florida Atlantic University, 2008. / Includes bibliography. / Electronic reproduction. Boca Raton, Fla., 2008. Mode of access: World Wide Web.
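The thesis's SCED formulation and algorithm are not reproduced in this record; as a hedged illustration of the underlying idea, the toy sketch below schedules, round by round, a set of live nodes whose sensing components jointly cover every atomic condition of the composite event, so that nodes with redundant components can rest and the network lifetime is extended. All node names, condition names and the greedy rule are invented for the example.

```python
# Toy illustration (not the SCED algorithm from the thesis): greedy round-based
# scheduling of sensor nodes so that every atomic condition of a composite event
# (e.g. {"temperature", "smoke", "light"}) is covered each round by live nodes.

def greedy_schedule(nodes, conditions, energy):
    """nodes: dict node_id -> set of sensed conditions
       energy: dict node_id -> remaining energy units
       Returns the number of rounds achieved (a proxy for network lifetime)."""
    rounds = 0
    while True:
        uncovered = set(conditions)
        active = []
        while uncovered:
            # pick the live node covering the most still-uncovered conditions
            best = max(
                (n for n in nodes if energy[n] > 0 and nodes[n] & uncovered),
                key=lambda n: len(nodes[n] & uncovered),
                default=None,
            )
            if best is None:          # the event can no longer be fully covered
                return rounds
            active.append(best)
            uncovered -= nodes[best]
        for n in active:              # only the activated nodes spend energy
            energy[n] -= 1
        rounds += 1

nodes = {"a": {"temperature", "smoke"}, "b": {"smoke", "light"},
         "c": {"temperature"}, "d": {"light"}}
print(greedy_schedule(nodes, {"temperature", "smoke", "light"},
                      {n: 3 for n in nodes}))
```

With this sample data, nodes a and b cover the event for three rounds before the remaining nodes can no longer detect smoke, so the sketch reports a lifetime of 3 rounds.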
|
102 |
Towards a portal and search engine to facilitate academic and research collaboration in engineering and … / Unknown Date
While international academic and research collaborations are of great importance at this time, it is not easy to find researchers in the engineering field who publish in languages other than English. Because of this disconnect, there exists a need for a portal to find Who’s Who in Engineering Education in the Americas. The objective of this thesis is to build an object-oriented architecture for this proposed portal. The Unified Modeling Language (UML) model developed in this thesis incorporates the basic structure of a social network for academic purposes. Reverse engineering of three social network portals yielded important aspects of their structures that have been incorporated into the proposed UML model. Furthermore, the present work includes a pattern for academic social networks. / Includes bibliography. / Thesis (M.S.)--Florida Atlantic University, 2014. / FAU Electronic Theses and Dissertations Collection
|
103 |
Enhancing performance in publish/subscribe systems / Unknown Date
Publish/subscribe is a powerful paradigm for distributed applications in which clients exchange information in a decoupled way. In pub/sub applications, the number of publishers and subscribers ranges from hundreds to millions, and publish/subscribe systems need to disseminate numerous events through a network of brokers. Because broker resources are limited, many events may not be handled in time, which causes an overload problem. This creates the need for an admission control mechanism to provide guaranteed service in publish/subscribe systems. Our approach solves this overload problem in the broker network by limiting incoming subscriptions according to certain criteria: the resources available in the broker network (bandwidth, CPU and memory) and the resource requirements of each subscription. / by Akshay Kamdar. / Thesis (M.S.C.S.)--Florida Atlantic University, 2009. / Includes bibliography. / Electronic reproduction. Boca Raton, Fla., 2009. Mode of access: World Wide Web.
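The abstract does not spell out the admission test itself, so the sketch below is only a hypothetical illustration of criteria-based admission control at a single broker: a new subscription is admitted only if its estimated bandwidth, CPU and memory demands fit within the broker's remaining capacity. The class, field names and figures are invented for the example.

```python
# Hypothetical broker-side admission control: admit a subscription only if its
# estimated demands fit in the broker's remaining bandwidth, CPU and memory.

class Broker:
    def __init__(self, bandwidth, cpu, memory):
        self.capacity = {"bandwidth": bandwidth, "cpu": cpu, "memory": memory}
        self.used = {"bandwidth": 0.0, "cpu": 0.0, "memory": 0.0}
        self.subscriptions = []

    def admit(self, sub_id, demand):
        """demand: dict with estimated 'bandwidth', 'cpu' and 'memory' usage."""
        if any(self.used[r] + demand[r] > self.capacity[r] for r in self.capacity):
            return False                      # reject: the broker would overload
        for r in demand:
            self.used[r] += demand[r]
        self.subscriptions.append(sub_id)
        return True

broker = Broker(bandwidth=100.0, cpu=50.0, memory=200.0)
print(broker.admit("s1", {"bandwidth": 60.0, "cpu": 20.0, "memory": 80.0}))  # True
print(broker.admit("s2", {"bandwidth": 60.0, "cpu": 10.0, "memory": 50.0}))  # False: bandwidth exhausted
```

A full system would additionally need per-subscription demand estimation and coordination across the broker network; the sketch only shows the local admission decision.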
|
104 |
Efficient Algorithms for Elliptic Curve Cryptosystems on Embedded Systems / Woodbury, Adam D / 01 October 2001
"This thesis describes how an elliptic curve cryptosystem can be implemented on low cost microprocessors without coprocessors with reasonable performance. We focus in this paper on the Intel 8051 family of microcontrollers popular in smart cards and other cost-sensitive devices, and on the Motorola Dragonball, found in the Palm Computing Platform. The implementation is based on the use of the Optimal Extension Fields GF((2^8-17)^17) for low end 8-bit processors, and GF((2^13-1)^13) for 16-bit processors. Two advantages of our method are that subfield modular reduction can be performed infrequently, and that an adaption of Itoh and Tsujii's inversion algorithm may be used for the group operation. We show that an elliptic curve scalar multiplication with a fixed point, which is the core operation for a signature generation, can be performed in a group of order approximately 2^134 in less than 2 seconds on an 8-bit smart card. On a 16-bit microcontroller, signature generation in a group of order approximately 2^169 can be performed in under 700 milliseconds. Unlike other implementations, we do not make use of curve parameters defined over a subfield such as Koblitz curves."
|
105 |
Unmediated Interaction: Communicating with Computers and Embedded Devices as If They Are Not There / Smith, Brian Anthony / January 2018
Although computers are smaller and more readily accessible today than they have ever been, I believe that we have barely scratched the surface of what computers can become. When we use computing devices today, we end up spending a lot of our time navigating to particular functions or commands to use devices their way rather than executing those commands immediately. In this dissertation, I explore what I call unmediated interaction, the notion of people using computers as if the computers are not there and as if the people are using their own abilities or powers instead. I argue that facilitating unmediated interaction via personalization, new input modalities, and improved text entry can reduce both input overhead and output overhead, which are the burden of providing inputs to and receiving outputs from the intermediate device, respectively. I introduce three computational methods for reducing input overhead and one for reducing output overhead. First, I show how input data mining can eliminate the need for user inputs altogether. Specifically, I develop a method for mining controller inputs to gain deep insights about a player's playing style, their preferences, and the nature of video games that they are playing, all of which can be used to personalize their experience without any explicit input on their part. Next, I introduce gaze locking, a method for sensing eye contact from an image that allows people to interact with computers, devices, and other objects just by looking at them. Third, I introduce computationally optimized keyboard designs for touchscreen manual input that allow people to type on smartphones faster and with far fewer errors than currently possible. Last, I introduce the racing auditory display (RAD), an audio system that makes it possible for people who are blind to play the same types of racing games that sighted players can play, and with a similar speed and sense of control as sighted players. The RAD shows how we can reduce output overhead to provide user interface parity between people with and without disabilities. Together, I hope that these systems open the door to even more efforts in unmediated interaction, with the goal of making computers less like devices that we use and more like abilities or powers that we have.
|
106 |
Reconfigurable memory systems for embedded microprocessors / Koltes, Andreas / January 2015
No description available.
|
107 |
Design of application-specific instruction set processors with asynchronous methodology for embedded digital signal processing applications. / January 2005
Kwok Yan-lun Andy. / Thesis submitted in: November 2004. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2005. / Includes bibliographical references (leaves 133-137). / Abstracts in English and Chinese. / Abstract --- p.i / 摘要 --- p.ii / Acknowledgements --- p.iii / List of Figures --- p.vii / List of Tables and Examples --- p.x / Chapter 1. --- Introduction --- p.1 / Chapter 1.1. --- Motivation --- p.1 / Chapter 1.2. --- Objective and Approach --- p.4 / Chapter 1.3. --- Thesis Organization --- p.5 / Chapter 2. --- Related Work --- p.7 / Chapter 2.1. --- Coverage --- p.7 / Chapter 2.2. --- ASIP Design Methodologies --- p.8 / Chapter 2.3. --- Asynchronous Technology on Processors --- p.12 / Chapter 2.4. --- Summary --- p.14 / Chapter 3. --- Asynchronous Design Methodology --- p.15 / Chapter 3.1. --- Overview --- p.15 / Chapter 3.2. --- Asynchronous Design Style --- p.17 / Chapter 3.2.1. --- Micropipelines --- p.17 / Chapter 3.2.2. --- Fine-grain Pipelining --- p.20 / Chapter 3.2.3. --- Globally-Asynchronous Locally-Synchronous (GALS) Design --- p.22 / Chapter 3.3. --- Advantages of GALS in ASIP Design --- p.27 / Chapter 3.3.1. --- Reuse of Synchronous and Asynchronous IP --- p.27 / Chapter 3.3.2. --- Fine Tuning of Performance and Power Consumption --- p.27 / Chapter 3.3.3. --- Synthesis-based Design Flow --- p.28 / Chapter 3.4. --- Design of GALS Asynchronous Wrapper --- p.28 / Chapter 3.4.1. --- Handshake Protocol --- p.28 / Chapter 3.4.2. --- Pausible Clock Generator --- p.29 / Chapter 3.4.3. --- Port Controllers --- p.30 / Chapter 3.4.4. --- Performance of the Asynchronous Wrapper --- p.33 / Chapter 3.5. --- Summary --- p.35 / Chapter 4. --- Platform Based ASIP Design Methodology --- p.36 / Chapter 4.1. --- Platform Based Approach --- p.36 / Chapter 4.1.1. --- The Definition of Our Platform --- p.37 / Chapter 4.1.2. --- The Definition of the Platform Based Design --- p.37 / Chapter 4.2. --- Platform Architecture --- p.38 / Chapter 4.2.1. --- The Nature of DSP Algorithms --- p.38 / Chapter 4.2.2. --- Design Space of Datapath Optimization --- p.46 / Chapter 4.2.3. --- Proposed Architecture --- p.49 / Chapter 4.2.4. --- The Strategy of Realizing an Optimized Datapath --- p.51 / Chapter 4.2.5. --- Pipeline Organization --- p.59 / Chapter 4.2.6. --- GALS Partitioning --- p.61 / Chapter 4.2.7. --- Operation Mechanism --- p.63 / Chapter 4.3. --- Overall Design Flow --- p.67 / Chapter 4.4. --- Summary --- p.70 / Chapter 5. --- Design of the ASIP Platform --- p.72 / Chapter 5.1. --- Design Goal --- p.72 / Chapter 5.2. --- Instruction Fetch --- p.74 / Chapter 5.2.1. --- Instruction fetch unit --- p.74 / Chapter 5.2.2. --- Zero-overhead loops and Subroutines --- p.75 / Chapter 5.3. --- Instruction Decode --- p.77 / Chapter 5.3.1. --- Instruction decoder --- p.77 / Chapter 5.3.2. --- The Encoding of Parallel and Complex Instructions --- p.80 / Chapter 5.4. --- Datapath --- p.81 / Chapter 5.4.1. --- Base Functional Units --- p.81 / Chapter 5.4.2. --- Functional Unit Wrapper Interface --- p.83 / Chapter 5.5. --- Register File Systems --- p.84 / Chapter 5.5.1. --- Memory Hierarchy --- p.84 / Chapter 5.5.2. --- Register File Organization --- p.85 / Chapter 5.5.3. --- Address Generation --- p.93 / Chapter 5.5.4. --- Load and Store --- p.98 / Chapter 5.6. --- Design Verification --- p.100 / Chapter 5.7. --- Summary --- p.104 / Chapter 6. --- Case Studies --- p.105 / Chapter 6.1. --- Objective --- p.105 / Chapter 6.2. --- Approach --- p.105 / Chapter 6.3. --- Based versus Optimized --- p.106 / Chapter 6.3.1. --- Matrix Manipulation --- p.106 / Chapter 6.3.2. --- Autocorrelation --- p.109 / Chapter 6.3.3. --- CORDIC --- p.110 / Chapter 6.4. --- Optimized versus Advanced Commercial DSPs --- p.113 / Chapter 6.4.1. --- Introduction to TMS320C62x and SC140 --- p.113 / Chapter 6.4.2. --- Results --- p.115 / Chapter 6.5. --- Summary --- p.116 / Chapter 7. --- Conclusion --- p.118 / Chapter 7.1. --- When ASIPs encounter asynchronous --- p.118 / Chapter 7.2. --- Contributions --- p.120 / Chapter 7.3. --- Future Directions --- p.121 / Chapter A --- Synthesis of Extended Burst-Mode Asynchronous Finite State Machine --- p.122 / Chapter B --- Base Instruction Set --- p.124 / Chapter C --- Special Registers --- p.127 / Chapter D --- Synthesizable Model of GALS Wrapper --- p.130 / Reference --- p.133
|
108 |
Computer vision based embedded fire detection system. / 基於計算機視覺的嵌入式火災監測系統 / Ji yu ji suan ji shi jue de qian ru shi huo zai jian ce xi tong / January 2011
Gong, Yibo. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2011. / Includes bibliographical references (p. 99-108). / Abstracts in English and Chinese. / Abstract --- p.ii / Acknowledgement --- p.v / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Motivation and Objective --- p.1 / Chapter 1.2 --- Contributions --- p.4 / Chapter 1.2.1 --- Embedded fire detection platform --- p.4 / Chapter 1.2.2 --- Extended CAMSHIFT object detection frame work --- p.5 / Chapter 1.2.3 --- Cooperative multiple camera module --- p.8 / Chapter 1.2.4 --- Aerial maritime survivor detection system --- p.9 / Chapter 1.3 --- Organization of this thesis --- p.9 / Chapter 2 --- Background Study --- p.11 / Chapter 2.1 --- Embedded computer vision --- p.11 / Chapter 2.2 --- Visual Fire detection --- p.12 / Chapter 2.3 --- Color-based object detection and tracking --- p.15 / Chapter 2.4 --- Multiple-camera system cooperation --- p.16 / Chapter 2.5 --- Multiple-camera system calibration --- p.18 / Chapter 3 --- Overview of the embedded fire detection system --- p.22 / Chapter 3.1 --- Functional modules of the detection unit --- p.25 / Chapter 3.2 --- Dataflow within the detection unit --- p.28 / Chapter 4 --- Simulated annealing based MEAN SHIFT framework --- p.31 / Chapter 4.1 --- Simulated annealing framework --- p.33 / Chapter 4.2 --- Combination of simulated annealing with MEAN SHIFT --- p.37 / Chapter 5 --- Extended CAMSHIFT framework for fire detection --- p.42 / Chapter 5.1 --- Bidirectional color histogram training and backprojection --- p.43 / Chapter 5.2 --- Choice of properly sized fire window --- p.48 / Chapter 5.3 --- Alternative optimization based search window resizing --- p.49 / Chapter 5.4 --- Multiple modal particle filter based window size optimization --- p.53 / Chapter 5.4.1 --- Multiple modal particle filter --- p.53 / Chapter 5.4.2 --- Integration of the MMPF with CAMSHIFT framework --- p.57 / Chapter 5.5 --- fire monitoring --- p.63 / Chapter 6 --- The multiple camera module --- p.65 / Chapter 6.1 --- Calibration of the multi-camera system --- p.66 / Chapter 6.2 --- Region mapping and cooperation among the cameras --- p.69 / Chapter 7 --- Implementation and Experiments --- p.71 / Chapter 7.1 --- Implementation --- p.71 / Chapter 7.2 --- Experiments and performance evaluations --- p.74 / Chapter 7.2.1 --- Bidirectional histogram training and backprojection --- p.76 / Chapter 7.2.2 --- Performance of the hybrid Simulated annealing-Mean shift framework --- p.78 / Chapter 7.2.3 --- Alternative optimization based search window resizing for CAMSHIFT --- p.84 / Chapter 7.2.4 --- Multiple modal particle filter based search window resizing for CAMSHIFT --- p.87 / Chapter 7.2.5 --- Real-scenario test on the arm system --- p.94 / Chapter 7.2.6 --- Comparison of the two search window resizing mechanisms --- p.96 / Chapter 7.2.7 --- Accuracy of the multiple camera calibration method --- p.97 / Chapter 8 --- Extension to aerial maritime survivor search --- p.99 / Chapter 8.1 --- Introduction --- p.99 / Chapter 8.2 --- Implementation and experiment results --- p.102 / Chapter 9 --- Conclusion --- p.105 / Chapter 9.1 --- Contribution and summary of the work --- p.105 / Chapter 9.2 --- Future work --- p.107 / Bibliography --- p.109
|
109 |
Quality Evaluation in Fixed-point Systems with Selective Simulation / Evaluation de la qualité des systèmes en virgule fixe avec la simulation sélective / Nehmeh, Riham / 13 June 2017
Le temps de mise sur le marché et les coûts d’implantation sont les deux critères principaux à prendre en compte dans l'automatisation du processus de conception de systèmes numériques. Les applications de traitement du signal utilisent majoritairement l'arithmétique virgule fixe en raison de leur coût d'implantation plus faible. Ainsi, une conversion en virgule fixe est nécessaire. Cette conversion est composée de deux parties correspondant à la détermination du nombre de bits pour la partie entière et pour la partie fractionnaire. Le raffinement d'un système en virgule fixe nécessite d'optimiser la largeur des données en vue de minimiser le coût d'implantation tout en évitant les débordements et un bruit de quantification excessif. Les applications dans les domaines du traitement d'image et du signal sont tolérantes aux erreurs si leur probabilité ou leur amplitude est suffisamment faible. De nombreux travaux de recherche se concentrent sur l'optimisation de la largeur de la partie fractionnaire sous contrainte de précision. La réduction du nombre de bits pour la partie fractionnaire conduit à une erreur d'amplitude faible par rapport à celle du signal. La théorie de la perturbation peut être utilisée pour propager ces erreurs à l'intérieur des systèmes à l'exception du cas des opérations un- smooth, comme les opérations de décision, pour lesquelles une erreur faible en entrée peut conduire à une erreur importante en sortie. De même, l'optimisation de la largeur de la partie entière peut réduire significativement le coût lorsque l'application est tolérante à une faible probabilité de débordement. Les débordements conduisent à une erreur d'amplitude élevée et leur occurrence doit donc être limitée. Pour l'optimisation des largeurs des données, le défi est d'évaluer efficacement l'effet des erreurs de débordement et de décision sur la métrique de qualité associée à l'application. L'amplitude élevée de l'erreur nécessite l'utilisation d'approches basées sur la simulation pour évaluer leurs effets sur la qualité. Dans cette thèse, nous visons à accélérer le processus d'évaluation de la métrique de qualité. Nous proposons un nouveau environnement logiciel utilisant des simulations sélectives pour accélérer la simulation des effets des débordements et des erreurs de décision. Cette approche peut être appliquée à toutes les applications de traitement du signal développées en langage C. Par rapport aux approches classiques basées sur la simulation en virgule fixe, où tous les échantillons d'entrée sont traités, l'approche proposée simule l'application uniquement en cas d'erreur. En effet, les dépassements et les erreurs de décision doivent être des événements rares pour maintenir la fonctionnalité du système. Par conséquent, la simulation sélective permet de réduire considérablement le temps requis pour évaluer les métriques de qualité des applications. De plus, nous avons travaillé sur l'optimisation de la largeur de la partie entière, qui peut diminuer considérablement le coût d'implantation lorsqu'une légère dégradation de la qualité de l'application est acceptable. Nous exploitons l'environnement logiciel proposé auparavant à travers un nouvel algorithme d'optimisation de la largeur des données. La combinaison de cet algorithme et de la technique de simulation sélective permet de réduire considérablement le temps d'optimisation. / Time-to-market and implementation cost are high-priority considerations in the automation of digital hardware design. 
Nowadays, digital signal processing applications use fixed-point architectures due to their advantages in terms of implementation cost. Thus, floating-point to fixed-point conversion is mandatory. The conversion process consists of two parts corresponding to the determination of the integer part word-length and the fractional part word-length. The refinement of fixed-point systems requires optimizing data word-lengths to prevent overflows and excessive quantization noise while minimizing implementation cost. Applications in the image and signal processing domains are tolerant to errors if their probability or their amplitude is small enough. Numerous research works focus on optimizing the fractional part word-length under an accuracy constraint. Reducing the number of bits for the fractional part word-length leads to a small error compared to the signal amplitude. Perturbation theory can be used to propagate these errors inside the systems, except for unsmooth operations, like decision operations, for which a small error at the input can lead to a high error at the output. Likewise, optimizing the integer part word-length can significantly reduce the cost when the application is tolerant to a low probability of overflow. Overflows lead to errors with high amplitude and thus their occurrence must be limited. For word-length optimization, the challenge is to efficiently evaluate the effect of overflow and unsmooth errors on the application quality metric. The high amplitude of these errors requires simulation-based approaches to evaluate their effects on quality. In this thesis, we aim at accelerating the process of quality metric evaluation. We propose a new framework using selective simulation to accelerate the simulation of overflow and unsmooth error effects. This approach can be applied to any C-based digital signal processing application. Compared to complete fixed-point simulation-based approaches, where all the input samples are processed, the proposed approach simulates the application only when an error occurs. Indeed, overflows and unsmooth errors must be rare events to maintain the system functionality. Consequently, selective simulation significantly reduces the time required to evaluate the application quality metric. Moreover, we focus on optimizing the integer part, which can significantly decrease the implementation cost when a slight degradation of the application quality is acceptable. Indeed, many applications are tolerant to overflows if the probability of overflow occurrence is low enough. Thus, we exploit the proposed framework in a new integer word-length optimization algorithm. The combination of the optimization algorithm and the selective simulation technique significantly decreases the optimization time.
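The thesis's framework itself is not shown in this record; the toy sketch below is only a hypothetical illustration of the selective-simulation idea for overflow errors: a cheap detection pass finds the (rare) samples whose values exceed the chosen integer word-length, and only those samples are re-simulated in fixed point, while error-free samples reuse the floating-point reference output. The signal model, word-lengths and metric are invented for the example.

```python
# Toy, memoryless illustration of selective simulation (not the thesis framework):
# a cheap pass flags the samples whose reference value overflows the integer
# word-length, and only those samples are re-simulated in fixed point.
import numpy as np

def saturate(x, int_bits, frac_bits):
    """Saturating quantization to a signed int_bits.frac_bits fixed-point format."""
    step = 2.0 ** -frac_bits
    lo, hi = -2.0 ** int_bits, 2.0 ** int_bits - step
    return np.clip(np.round(x / step) * step, lo, hi)

def reference(x):                       # floating-point "golden" system (a toy gain stage)
    return 1.8 * x

def fixed_point(x, int_bits, frac_bits):
    return saturate(reference(x), int_bits, frac_bits)

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 100_000)

y_ref = reference(x)
overflow = np.abs(y_ref) >= 2.0 ** 2    # detection pass for int_bits = 2
y = y_ref.copy()
idx = np.flatnonzero(overflow)
y[idx] = fixed_point(x[idx], int_bits=2, frac_bits=11)   # simulate only the rare error events

rel_err = np.mean((y - y_ref) ** 2) / np.mean(y_ref ** 2)
print(f"re-simulated {idx.size} of {x.size} samples, "
      f"relative error power = {10 * np.log10(rel_err + 1e-30):.1f} dB")
```

Because only a few percent of the samples trigger the fixed-point path here, the evaluation cost tracks the number of error events rather than the input length, which is the intuition the abstract describes.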
|
110 |
Scratch-pad memory management for static data aggregates / Li, Lian, Computer Science & Engineering, Faculty of Engineering, UNSW / January 2007
Scratch-pad memory (SPM), a fast on-chip SRAM managed by software, is widely used in embedded systems. Compared to a hardware-managed cache, SPM can be more efficient in performance, power and area cost, and has the added advantage of better time predictability. In this thesis, SPM is understood in a general sense. For example, in stream processors, a software-managed stream register file is usually used to stage data to and from off-chip memory. In IBM's Cell architecture, each co-processor has a software-managed local store for keeping data and instructions. SPM management is critical for SPM-based embedded systems. In this thesis, we propose two novel methodologies, the memory colouring methodology and the perfect colouring methodology, for placing a program's static data aggregates, such as arrays and structs, in SPM. Our methodologies are dynamic in the sense that some data aggregates can be swapped into and out of SPM during program execution. To this end, a live range splitting heuristic is introduced in order to create potential data transfer statements between SPM and off-chip memory. The memory colouring methodology is a general-purpose compiler approach. The novelty of this approach lies in partitioning an SPM into a pseudo register file and then generalising existing graph colouring algorithms for register allocation to colour data aggregates. In this thesis, a scheme for partitioning an SPM into a pseudo register file is introduced. This methodology is inter-procedural and therefore operates on the interference graph for the data aggregates in the whole program. Different graph colouring algorithms may give rise to different results due to the live range splitting and spilling heuristics used. As a result, two representative graph colouring algorithms, George and Appel's iterated coalescing and Park and Moon's optimistic coalescing, are generalised and evaluated for SPM allocation. Like memory colouring, perfect colouring is inter-procedural. The novelty of this second methodology lies in formulating the SPM allocation problem as an interval colouring problem. The interval colouring problem is NP-hard, and no widely accepted approximation algorithms exist. The key observation is that the interference graphs for data aggregates in many embedded applications form a special class of superperfect graphs. This has led to the development of two additional SPM allocation algorithms. While differing in whether live range splits and spills are done sequentially or together, both algorithms place data aggregates in SPM based on the cliques in an interference graph. In both cases, we can guarantee that all data aggregates in an interference graph are placed in SPM whenever the given SPM size is no smaller than the chromatic number of the graph. We have developed two memory colouring algorithms and two perfect colouring algorithms for SPM allocation and evaluated them on a set of embedded applications. Our results show that both methodologies are efficient and effective in handling large-scale embedded applications. While neither methodology consistently outperforms the other, perfect colouring has yielded better overall results on the benchmarks used in our experiments. All of these algorithms are expected to be valuable; for example, they can be made available within the same compiler framework to help the embedded designer explore a large number of optimisation opportunities for a particular application.
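The sketch below is not one of the thesis's algorithms; it is a toy first-fit placement in the interval-colouring spirit the abstract describes: every data aggregate needs a contiguous block of SPM as wide as its size, two aggregates whose live ranges interfere must receive disjoint address intervals, and non-interfering aggregates may share the same addresses. Sizes, names and the ordering heuristic are invented for the example.

```python
# Toy first-fit placement in the interval-colouring spirit (not the thesis algorithms):
# each aggregate needs `size` contiguous SPM bytes, and aggregates that interfere
# must occupy disjoint address intervals. Returns offsets, or None for an aggregate
# that cannot be placed within the SPM size (it would stay in off-chip memory).

def place_in_spm(sizes, interferes, spm_size):
    """sizes: dict name -> bytes; interferes: set of frozenset({a, b}) pairs."""
    offsets = {}
    # place larger aggregates first, a common heuristic
    for name in sorted(sizes, key=sizes.get, reverse=True):
        taken = sorted(
            (offsets[o], offsets[o] + sizes[o])
            for o in offsets
            if offsets[o] is not None and frozenset({name, o}) in interferes
        )
        start = 0
        for lo, hi in taken:            # slide past every conflicting interval
            if start + sizes[name] <= lo:
                break
            start = max(start, hi)
        if start + sizes[name] > spm_size:
            offsets[name] = None        # spill: leave this aggregate off-chip
        else:
            offsets[name] = start
    return offsets

sizes = {"A": 512, "B": 256, "C": 512, "D": 128}
interferes = {frozenset(p) for p in [("A", "B"), ("B", "C"), ("A", "C")]}
print(place_in_spm(sizes, interferes, spm_size=1024))
```

In this sample run, A and C fill the 1 KB SPM, B is left off-chip, and D reuses A's addresses because the two never interfere, which is exactly the sharing that the interval-colouring view permits.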
|