351 |
Teaching Signals to Students: a Tool for Visualizing Signal, Filter and DSP Concepts
Ashraf, Pouya; Billman, Linnar; Wendelin, Adam. January 2016 (has links)
Students at Uppsala University have for some years been given the opportunity to take courses in subjects directly or indirectly related to the fields of signal processing and signal analysis. According to the directors of these courses, a considerable number of students recurrently have difficulty grasping different concepts related to this field of study. This report covers a tool that allows teachers to easily visualize and listen to different manipulations of signals, which should help students gain an intuitive understanding of the subject. Features of the system include multiple kinds of analog filters, sampling with variable settings, and zero-order hold reconstruction. The finished system is flexible, tunable and modifiable to the teacher's every need, making it usable for a wide variety of courses involving signal processing. The system meets its requirements even though individual components' results deviate slightly from ideal values.
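One of the features mentioned above, zero-order hold reconstruction, can be sketched in a few lines. The signal and rates below are illustrative assumptions, not details taken from the actual tool:

```python
import math

def zero_order_hold(samples, factor):
    """Approximate ZOH reconstruction on a denser time grid by
    holding each sample value constant over `factor` sub-intervals."""
    return [s for s in samples for _ in range(factor)]

# Sample a 5 Hz sine at 50 Hz for one second, then hold each sample
# over 10 sub-intervals to form the analog "staircase" approximation.
samples = [math.sin(2 * math.pi * 5 * n / 50) for n in range(50)]
staircase = zero_order_hold(samples, 10)
```

The staircase output is what a digital-to-analog converter without smoothing would produce, which is exactly the kind of effect a student can see and hear side by side with the original signal.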
|
352 |
A model-continuous specification and design methodology for embedded multiprocessor signal processing systems
Janka, Randall Scott. 12 1900 (has links)
Thesis made openly available per email from author, August 2015.
|
353 |
THE STUDY OF EMBEDDED INTELLIGENT VEHICLE NAVIGATION SYSTEM
Shengxi, Ding; Bo, Zhang; Jingchang, Tan; Dayi, Zeng. 10 1900 (has links)
International Telemetering Conference Proceedings / October 21, 2002 / Town & Country Hotel and Conference Center, San Diego, California / The intelligent vehicle navigation system is a multifunctional and complex integrated system involving auto-positioning technology, geographic information systems and digital map databases, computer technology, multimedia, and wireless communication technology. In this paper, an autonomous navigation system based on an embedded hardware and embedded software platform is proposed. The system has the advantages of low cost, low power consumption, rich functionality, and high stability and reliability.
|
354 |
Compilation techniques for high-performance embedded systems with multiple processors
Franke, Bjorn. January 2004 (has links)
Despite the progress made in developing more advanced compilers for embedded systems, programming of embedded high-performance computing systems based on Digital Signal Processors (DSPs) is still a highly skilled manual task. This is true for single-processor systems, and even more so for embedded systems based on multiple DSPs. Compilers often fail to optimise existing DSP codes written in C due to the employed programming style. Parallelisation is hampered by the complex multiple-address-space memory architecture found in most commercial multi-DSP configurations. This thesis develops an integrated optimisation and parallelisation strategy that can deal with low-level C codes and produces optimised parallel code for a homogeneous multi-DSP architecture with distributed physical memory and multiple logical address spaces. In a first step, low-level programming idioms are identified and recovered. This enables the application of high-level code and data transformations well known in the field of scientific computing. Iterative feedback-driven search for “good” transformation sequences is investigated. A novel approach to parallelisation based on a unified data and loop transformation framework is presented and evaluated. Performance optimisation is achieved through exploitation of data locality on the one hand, and utilisation of DSP-specific architectural features such as Direct Memory Access (DMA) transfers on the other. The proposed methodology is evaluated against two benchmark suites (DSPstone & UTDSP) and four different high-performance DSPs, one of which is part of a commercial four-processor multi-DSP board also used for evaluation. Experiments confirm the effectiveness of the program recovery techniques as enablers of high-level transformations and automatic parallelisation. Source-to-source transformations of DSP codes yield an average speedup of 2.21 across four different DSP architectures.
The parallelisation scheme, in conjunction with a set of locality optimisations, is able to produce linear and even super-linear speedups on a number of relevant DSP kernels and applications.
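The iterative feedback-driven search for "good" transformation sequences can be illustrated with a toy greedy loop. The "programs", transformations and cost function below are stand-ins for real code transformations and measured execution time, not the thesis's actual implementation:

```python
def feedback_search(program, transformations, cost):
    """Greedy feedback-driven search: repeatedly apply whichever
    candidate transformation reduces the measured cost the most,
    and stop as soon as no candidate improves on the current best."""
    best = program
    while True:
        candidates = [t(best) for t in transformations]
        winner = min(candidates, key=cost)
        if cost(winner) >= cost(best):
            return best
        best = winner

# Toy stand-ins: "programs" are integers, cost is the value itself,
# and each "transformation" nudges the program toward lower cost.
halve = lambda x: x // 2 if x > 1 else x
decrement = lambda x: x - 1 if x > 0 else x
result = feedback_search(37, [halve, decrement], cost=lambda x: x)
```

In the real setting the cost function is a measured execution on the target DSP, which is what makes the search "feedback-driven" rather than purely analytical.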
|
355 |
A Study on Timed Base Choice Criteria for Testing Embedded Software
Bergström, Henning. January 2016 (has links)
Programs for Programmable Logic Controllers (PLCs) are often written in graphical or textual languages. Control engineers design and use them in systems where safety is vital, such as avionics, nuclear power plants or transportation systems. A malfunction of such a computer could have severe consequences; therefore, thorough testing of PLCs is important. The Base Choice (BC) combination strategy was proposed as a suitable technique for testing software. Test cases are created based on the BC strategy by varying the values of one parameter at a time while keeping the values of the other parameters fixed on the values in the base choice. However, this strategy might not be as effective when used on embedded software, where parameters need to be set for a certain amount of time in order to trigger a certain interesting behavior. By incorporating time as another parameter when generating the tests, the goal is to create a better strategy that increases not only code coverage but also fault detection compared to the base choice strategy. The Timed Base Choice (TBC) coverage criterion is an improvement upon the regular Base Choice criterion with the inclusion of time. We define TBC as follows: the base test case in the timed base choice criterion is determined by the tester of the program. A criterion suggested by Ammann and Offutt is the “most likely value” from the point of view of the user. In addition, a time choice T is determined by the tester as the most likely time for keeping the base test case at the same values. From the base test case, new test cases are created by varying the interesting values of one parameter at a time, keeping the values of the other parameters fixed on the base test case. Each new test case is executed with the input values set for a certain amount of time determined by the time choice T. The time choice is given in time units.
The research questions stated in this thesis are as follows: Research Question 1 (RQ1): How do Timed Base Choice tests compare to Base Choice tests in terms of decision coverage? Research Question 2 (RQ2): How do Timed Base Choice tests compare to Base Choice tests in terms of fault detection? In order to answer these questions, an empirical study was made in which 11 programs were tested along with test cases generated by BC and TBC. Each program was executed on a PLC along with the belonging test cases and several faulty programs (mutants). From this testing we obtained the decision coverage achieved for each program by BC and TBC respectively, as well as a mutation score measuring how many of the mutated programs were detected and killed. We found that TBC outperformed BC testing both in terms of decision coverage and fault detection. Using TBC testing we managed to achieve full decision coverage on several programs where we were unable to do so using regular BC. This shows that TBC is an improvement upon the regular BC in both respects, thus answering our previously stated research questions.
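Under the definition given above, generating TBC test cases from a base choice is mechanical. The PLC parameters and values below are invented for illustration and are not taken from the study's 11 programs:

```python
def timed_base_choice(base, interesting, hold_time):
    """Generate Timed Base Choice test cases: the base test case
    itself, plus one test per interesting value of each parameter,
    holding all other parameters at their base-choice values.
    Every test carries the time T for which its inputs are held."""
    tests = [(dict(base), hold_time)]
    for param, values in interesting.items():
        for value in values:
            if value == base[param]:
                continue  # the base value is already covered
            case = dict(base)
            case[param] = value
            tests.append((case, hold_time))
    return tests

# Hypothetical PLC inputs: a temperature sensor and a valve flag.
base = {"temp": 20, "valve": False}
interesting = {"temp": [-10, 20, 90], "valve": [True, False]}
tests = timed_base_choice(base, interesting, hold_time=5)
```

With T set to zero this degenerates to plain Base Choice, which is why TBC can be seen as a strict generalization of BC.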
|
356 |
Partitioning methodology validation for embedded systems design
Eriksson, Jonas. January 2016 (has links)
As modern embedded systems become more sophisticated, the demands on their applications increase significantly. A current trend is to utilize the advances of heterogeneous platforms, i.e. platforms consisting of different computational units (e.g. CPU, FPGA or GPU), where different parts of the application can be distributed among the computational units as software and hardware implementations. This technology can improve the application characteristics to meet requirements (e.g. execution time, power consumption and design cost), but it leads to a new challenge: finding the best combination of hardware and software implementations (referred to as a system configuration). The decisions whether a part of the application should be implemented in software (e.g. as C code) or hardware (e.g. as VHDL code) affect the entire product life-cycle. Traditionally these decisions are made manually by the developers in the early stage of the design phase. However, due to the increasing complexity of applications, the need rises for a systematic process that aids the developer in making these decisions. Prior to this work a methodology called MULTIPAR was designed to address this problem. MULTIPAR applies component-/model-based techniques to design the application, i.e. the application is modeled as a number of interconnected components, where some of the components will be implemented as software and the remaining ones as hardware. To perform the partitioning decisions, i.e. determining for each component whether it should be implemented as software or hardware, MULTIPAR proposes a set of formulas to calculate the properties of the entire system based on the properties of each component working in isolation. This thesis aims to show to what extent the proposed system formulas are valid. In particular it focuses on validating the formulas that calculate the system response time, system power consumption, system static memory and system FPGA area.
The formulas were validated through an industrial case study, in which the system properties for different system configurations were both measured and calculated by applying the formulas. The measured and calculated values for the system properties were compared by conducting a statistical analysis. The case study demonstrated that the system properties can be accurately calculated by applying the system formulas.
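The abstract does not reproduce the MULTIPAR formulas themselves. Purely to fix ideas, a naive additive composition of the four validated properties might look like the following; the property names and the assumption that contributions simply sum are mine, not MULTIPAR's:

```python
def system_properties(components):
    """Naive additive model: software components contribute static
    memory, hardware components contribute FPGA area; response time
    and power are summed over every component in the configuration."""
    total = {"response_time": 0.0, "power": 0.0,
             "static_memory": 0, "fpga_area": 0}
    for c in components:
        total["response_time"] += c["time"]
        total["power"] += c["power"]
        if c["partition"] == "sw":
            total["static_memory"] += c["memory"]
        else:
            total["fpga_area"] += c["area"]
    return total

# A hypothetical two-component configuration, one part per partition.
config = [
    {"partition": "sw", "time": 2.0, "power": 0.5, "memory": 128},
    {"partition": "hw", "time": 1.0, "power": 0.25, "area": 400},
]
props = system_properties(config)
```

The value of such formulas, whatever their exact form, is that changing one component's partition updates the predicted system properties without re-measuring the whole system, which is what the case study set out to validate.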
|
357 |
ENERGY-AWARE OPTIMIZATION FOR EMBEDDED SYSTEMS WITH CHIP MULTIPROCESSOR AND PHASE-CHANGE MEMORY
Li, Jiayin. 01 January 2012 (has links)
Over the last two decades, the functions of embedded systems have evolved from simple real-time control and monitoring to more complicated services. Embedded systems equipped with powerful chips can provide the performance that computationally demanding information processing applications need. However, due to the power issue, the easy way of gaining performance by scaling up chip frequencies is no longer feasible. Recently, low-power architecture design has become the main trend in embedded system design.
In this dissertation, we present our approaches to attacking the energy-related issues in embedded system design: thermal issues in 3D chip multiprocessors (CMPs), the endurance issue in phase-change memory (PCM), the battery issue in embedded system designs, the impact of inaccurate information in embedded systems, and the use of cloud computing to move workload to remote cloud computing facilities.
We propose a real-time constrained task scheduling method to reduce the peak temperature on a 3D CMP, including an online 3D CMP temperature prediction model and a set of algorithms for scheduling tasks to different cores in order to minimize the peak on-chip temperature. To address the challenging issues in applying PCM in embedded systems, we propose a PCM main memory optimization mechanism based on the utilization of scratch pad memory (SPM). Furthermore, we propose an MLC/SLC configuration optimization algorithm to enhance the efficiency of the hybrid DRAM + PCM memory. We also propose an energy-aware task scheduling algorithm for parallel computing in mobile systems powered by batteries.
When scheduling tasks in embedded systems, we make the scheduling decisions based on information such as the estimated execution time of tasks. We therefore design a method for evaluating the impact of inaccurate information on resource allocation in embedded systems. Finally, in order to move workload from embedded systems to a remote cloud computing facility, we present a resource optimization mechanism for heterogeneous federated multi-cloud systems, and we propose two online dynamic algorithms for resource allocation and task scheduling that take resource contention into account.
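To give a flavour of what peak-minimizing task assignment involves (the dissertation's temperature prediction model and scheduling algorithms are far more elaborate), a classic greedy heuristic places each task on the currently least-loaded core, using accumulated load as a crude stand-in for heat:

```python
def greedy_assign(tasks, n_cores):
    """Longest-processing-time-first greedy: place each task on the
    core with the smallest accumulated load, keeping the maximum
    per-core load (a crude proxy for peak temperature) low."""
    loads = [0.0] * n_cores
    assignment = {}
    for task_id, load in sorted(tasks.items(), key=lambda kv: -kv[1]):
        core = loads.index(min(loads))   # coolest core so far
        loads[core] += load
        assignment[task_id] = core
    return assignment, max(loads)

# Four hypothetical tasks with given loads on a 2-core chip.
assignment, peak = greedy_assign({"a": 4, "b": 3, "c": 3, "d": 2}, 2)
```

A thermal-aware scheduler replaces the load proxy with a temperature model and adds real-time deadline constraints, which is exactly where the dissertation's contribution lies.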
|
358 |
Exploring a LOGO microworld : the first minutes
Ng, Kevin. 14 October 2014 (has links)
In his 1980 book, Mindstorms, Seymour Papert proposes using microworlds to help children learn mathematics like mathematicians. In a microworld like LOGO that is culturally rich in math, Papert claims that learning math can be as natural as learning French in France. Although the technology at the time was adequate, LOGO faltered due to improper implementation in the classroom. A newfound political interest in inquiry and computer literacy could breathe new life into Papert's vision. In contrast with the routinized approaches to introducing aspects of programming that, arguably, limited the trajectory for the implementation of programming in schools (Papert, 1980), this report explores what can and does happen in the first few minutes using a more open, student-directed approach to programming with high school physics students. A grounded theory approach led to connections with Vygotsky's Zone of Proximal Development.
|
359 |
Millimeter Wave Radar Interfacing with Android Smartphone
Gholamhosseinpour, Ali. January 2015 (has links)
Radar system development is generally costly, complicated and time consuming. This has kept radar mostly inside industries and research centers with the equipment necessary to produce and operate such systems. Until recent years, realization of a fully integrated radar system on a chip was not feasible; however, this is no longer the case, and several types of sensors are available from different manufacturers. Radar sensors offer advantages unmatched by other sensing and imaging technologies, such as operation in fog, in dust and over long distances. This makes them suitable for use in navigation, automation, robotics and security applications. The purpose of this thesis is to demonstrate the feasibility of a simplified radar system user interface via integration with the most common portable computer, a smartphone, so that users with minimal knowledge of radar system design and operation can use it in different applications. Smartphones are very powerful portable computers equipped with a suite of sensors, with the potential to be used in a wide variety of applications, so it seems logical to take advantage of their computing power and portability. The combination of a radar sensor and a smartphone can act as a demonstrator in an effort to bring radar sensors one step closer to the hands of developers and consumers. In this study the following areas are explored and proper solutions are implemented: the design of a control board capable of driving a radar sensor, capturing the signal and transferring it to a secondary device (PC or smartphone) both wired and wirelessly, e.g. over Bluetooth; firmware that drives the control board and can receive, interpret and execute messages from a PC or a smartphone; cross-compatible master software that runs on Linux, Windows, Mac and Android and communicates with the firmware/control board; proper analysis methods for signal capture and processing; automation of some parameter adjustments for the different modes of operation of the radar system, in order to make the user interface as simple as possible; and a user-friendly interface and API that runs on both PC and smartphone.
|
360 |
Automated Orchestra for Industrial Automation on Virtualized Multicore Environment / Extending Real-Time component-based Framework to Virtual Nodes : Demonstration: Automated Orchestra real-time Application
Mahmud, Nesredin. January 2013 (has links)
Industrial control systems are applied in many areas, e.g. motion control for industrial robotics, process control of large plants such as in the oil and gas sector, and large national power grids. Over the last decade, with the advancement and adoption of virtualization and multicore technology (e.g. virtual machine monitors, cloud computing, server virtualization, application virtualization), IT systems and automation industries have benefited from low investment, effective system management and high service availability. However, virtualization and multicore technologies pose a serious challenge to real-time systems, threatening the timeliness and predictability of real-time applications running on control systems. To address this challenge, we have extended a real-time component-based framework with virtual nodes and evaluated the framework in the context of a virtualized multicore environment. The evaluation is demonstrated by modeling and implementing an orchestra application with QoS for CPU, memory and network bandwidth. The orchestra application is a real-time, distributed application deployed on virtualized multicore PCs connected to speakers. The result is undistorted orchestra performance played through speakers connected to the physical computer nodes. The contributions of the thesis are: 1) extending a real-time component-based framework, Future Automation Software Architecture (FASA), with virtual nodes using Virtual Computation Resources (VCR), and 2) the design and installation of a reusable test environment for the development, debugging and testing of real-time applications on a network of virtualized multicore nodes. / Vinnova project “AUTOSAR for Multi-Core in Automotive and Automation Industries”
|