1 |
Garden Monitoring with Embedded Systems
von Hacht, Karl-Johan, January 2015 (has links)
In today’s modern society, handling crops accountably and without loss has become increasingly important. By letting a gardener evaluate the progress of his plants from relevant data, these losses can be reduced and the effectiveness of the whole plantation increased. This work concerns the construction of such a system, composed from a developer’s perspective of three different platforms, from the sampling of data in the garden to an end user who can easily understand the translated data. The first platform is created from scratch, both hardware and software; the next is assembled from finished hardware components and built with simpler software. The last is essentially a software solution in an already finished hardware environment.
|
2 |
Object-oriented techniques applied to real-time systems
Maclean, Stuart Douglas, January 1995 (has links)
No description available.
|
3 |
Methodologies for Approximation of Unary Functions and Their Implementation in Hardware
Hertz, Erik, January 2016 (has links)
Applications in computer graphics, digital signal processing, communication systems, robotics, astrophysics, fluid physics and many other areas have evolved to become very computation intensive. Algorithms are becoming increasingly complex and require higher accuracy in the computations. In addition, software solutions for these applications are in many cases not sufficient in terms of performance, so a hardware implementation is needed. A recurring bottleneck in the algorithms is the performance of the approximations of unary functions, such as trigonometric functions, logarithms and the square root, as well as binary functions such as division. The challenge is therefore to develop a methodology for implementing approximations of unary functions in hardware that can cope with the growing requirements. The methodology must result in fast execution time, low-complexity basic operations that are simple to implement in hardware, and – since many applications are battery powered – low power consumption. To ensure appropriate performance of the entire computation in which the approximation is a part, the characteristics and distribution of the approximation error must also be manageable. The new approximation methodologies presented in this thesis aim to reduce the sizes of the look-up tables by the use of auxiliary functions. They are founded on a synthesis of parabolic functions by multiplication – instead of addition, which is the most common approach. Three approximation methodologies have been developed, the last two being further developments of the first. For some functions, such as roots, inverses and inverse roots, a straightforward approximation is not manageable. Since these functions are frequent in many computation-intensive algorithms, very efficient implementations of them are necessary. New methods for this are also presented in this thesis.
They are all founded on working in a floating-point format, and, for the roots functions, a change of number base is also used. The transformations not only enable simpler solutions but also increased accuracy, since the approximation algorithm is performed on a mantissa of limited range. Tools for error analysis have been developed as well. The characteristics and distribution of the approximation error in the new methodologies are presented and compared with existing state-of-the-art methods such as CORDIC. The verification and evaluation of the solutions have to a large extent been made as comparative ASIC implementations with other approximation methods, separately or embedded in algorithms. As an example, an implementation of the logarithm made using the third methodology developed, Harmonized Parabolic Synthesis (HPS), is compared with an implementation using the CORDIC algorithm. Both implementations are designed to provide 15-bit resolution. The design implemented using HPS performs 12 times better than the CORDIC implementation in terms of throughput. In terms of energy consumption, the new methodology consumes 96% less. The chip area is 60% smaller than for the CORDIC algorithm. In summary, the new approximation methodologies presented are found to well meet the demanding requirements that exist in this area.
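The abstract does not give the methodology's equations, but the core idea it names, replacing a large value table with a small table of second-order segment approximations applied to a range-limited mantissa, can be illustrated generically. Note that the thesis synthesizes parabolic functions by multiplication, while this simpler sketch uses an additive per-segment fit; the segment count and the Taylor-style coefficients are illustrative assumptions, not the thesis's Harmonized Parabolic Synthesis coefficients:

```python
import math

def log2_approx(x, segments=16):
    # Work only on the mantissa, as the thesis does: x = m * 2**e with
    # m in [1, 2), so the approximation range is limited and accuracy improves.
    m, e = math.frexp(x)        # m in [0.5, 1), e integer
    m *= 2.0
    e -= 1                      # now m in [1, 2)
    # Piecewise-parabolic lookup: a small table of quadratic coefficients
    # per segment stands in for a large table of function values.
    i = int((m - 1.0) * segments)
    x0 = 1.0 + i / segments
    # Hypothetical coefficients: a local 2nd-order Taylor fit of log2 at x0.
    c0 = math.log2(x0)
    c1 = 1.0 / (x0 * math.log(2))
    c2 = -1.0 / (2.0 * x0 * x0 * math.log(2))
    d = m - x0
    return e + c0 + c1 * d + c2 * d * d
```

In a hardware version the three coefficients per segment would come from a small ROM, so each evaluation costs one table read, two multiplications and two additions.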
|
4 |
Sensor Integration for High Temperature Measurements
Ragnarsson, David, January 2017 (has links)
In today's mining industry, most sensor measurements in high-temperature environments are expensive, and the sensors are not well integrated with the materials treated at high temperatures. Conditions can vary considerably between the sensor's location and where the materials are located. High-performance measurements are crucial to achieving better control of the oven: a more optimized process gives better combustion, which decreases fuel consumption and is more energy efficient. To improve these measurements, wireless sensor systems are needed that can be well integrated with the materials and have a low cost, so that the same system need not be reused and it does not matter if it is destroyed in the oven. In this thesis, the focus lies on building the electronics and software for controlling a wide-band oxygen sensor. The electronics are built from components with an upper temperature limit of 125 ◦C or above. The sensor itself is supposed to be heated by an internal heating element; in these experiments, however, it is heated by its surroundings in the oven. A major challenge in the work was the design of the control loop that keeps the sensor at a correct and stable operating point. When initial oxygen measurements were compared with reference measurements made simultaneously in the oven, they did not match well; the differences were shown to be caused by the different locations of the sensor and the reference measurements. Further measurements in a live industrial setting confirmed the functionality of the system.
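The abstract does not detail the control loop; a common way to hold a sensor at a stable operating point is a PI controller with output clamping and anti-windup. The sketch below is a generic illustration under that assumption; the class name, gains and setpoint are hypothetical, not taken from the thesis:

```python
class PIController:
    """Minimal PI loop of the kind used to hold a sensor at a set
    operating point. Gains, limits and setpoint are illustrative."""

    def __init__(self, kp, ki, setpoint, out_min=0.0, out_max=1.0):
        self.kp, self.ki = kp, ki
        self.setpoint = setpoint
        self.integral = 0.0
        self.out_min, self.out_max = out_min, out_max

    def update(self, measurement, dt):
        error = self.setpoint - measurement
        self.integral += error * dt
        out = self.kp * error + self.ki * self.integral
        # Clamp the actuator output and apply anti-windup:
        # stop accumulating the integral while saturated.
        if out > self.out_max:
            out = self.out_max
            self.integral -= error * dt
        elif out < self.out_min:
            out = self.out_min
            self.integral -= error * dt
        return out
```

In use, `update()` would be called each sampling period with the measured operating-point signal, and the returned value would drive the actuator (e.g. a pump or heater duty cycle).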
|
5 |
DESIGNING A LAB ASSIGNMENT FOR STUDYING REAL-TIME EMBEDDED SYSTEMS
Rännar, Rasmus; Mustaniemi, Miika, January 2019 (has links)
Embedded systems are all around us in the modern world and continue to evolve as time passes. It is therefore important to keep knowledge in the field evolving, and education is a big part of that. This thesis focuses on how to design a lab assignment for a course on embedded systems, with an emphasis on networking. Embedded systems have reliability and timeliness requirements, which must be accounted for when designing the system's network. The work started with a literature study of communication protocols and how they support the requirements imposed by embedded systems. Using this knowledge, hardware was evaluated and chosen. With the lab assignment in mind, the Arduino Zero was chosen as the platform, along with three network modules: Wi-Fi, Bluetooth and Zigbee. The hardware was used to implement a simple embedded system consisting of two nodes: a sensor node and a controller node. The sensor node sends data to the controller, which then acts upon it. Three programs were written, each with its own communication solution (time-triggered, event-triggered and a hybrid), and then tested in different environments. From the test results, guidelines were formulated on how to design an assignment and what hardware to use. A general guideline was also created, describing a lab assignment step by step. We recommend switching the platform from the Arduino Zero to the Arduino Uno to reduce the number of workarounds needed to get the system running. Having more than one communication protocol also proved valuable, since the students could show their knowledge by arguing for their choice of protocol.
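The three communication solutions are only named in the abstract; the time-triggered and event-triggered policies can be sketched generically as follows. The function and parameter names are illustrative, not the thesis's Arduino code:

```python
def run_sensor_node(read_sensor, send, steps, threshold=1.0, mode="time"):
    """Sketch of two of the communication policies compared in the thesis:
    time-triggered sends every cycle, event-triggered only on a significant
    change. The threshold value is an illustrative assumption."""
    last_sent = None
    for _ in range(steps):
        value = read_sensor()
        if mode == "time":
            send(value)   # periodic: predictable bandwidth, fixed latency bound
        elif last_sent is None or abs(value - last_sent) >= threshold:
            send(value)   # event-triggered: less traffic, data-dependent timing
            last_sent = value
```

The hybrid solution mentioned in the abstract would presumably combine both policies, for example sending on change but at least once every N cycles so the controller can detect a dead sensor.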
|
6 |
MEASURING THE REAL-TIME LATENCY OF AN I.MX7D USING XENOMAI AND THE YOCTO PROJECT / MÄTA RESPONSTIDEN AV EN I.MX7D MED HJÄLP AV XENOMAI OCH YOCTO-PROJEKTET
Coenen, Bram, January 2019 (has links)
In this thesis the real-time latency of an i.MX7D processor on a CL-SOM-IMX7 board is evaluated. The real-time Linux for the system is created using Xenomai, with both the I-Pipe patch and the PREEMPT_RT patch. The embedded distribution is built using the Yocto Project and uses a vendor i.MX kernel maintained by NXP. The maximum latency for the cobalt core is 268 μs for user-space tasks with a loaded CPU. These types of tasks have the highest latency of Xenomai's three task categories. A latency measurement of the PREEMPT_RT patch showed a maximum latency of 412 μs with an idle CPU. It is therefore concluded that the cobalt core has a lower latency and is better suited for real-time applications. A comparison is made with other modules, and it is found that the latency measured in this thesis is high compared to, for example, a Raspberry Pi 3B. The source code and configurations for the project can be found at https://github.com/bracoe/meta-xenomai-imx7d
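The measurement method can be sketched generically: schedule a periodic wake-up and record how far past the deadline the task actually runs. This Python analogue of a cyclictest-style loop is an assumption about the method, not the thesis's Xenomai test code, and user-space Python on a stock kernel will report far worse numbers than the 268 μs cited above:

```python
import time

def worst_wakeup_latency(iterations=100, interval_s=0.001):
    """Sleep until a deadline and record how late the task actually wakes.
    The worst observed overshoot approximates the scheduling latency.
    Illustrative only; not the thesis's measurement code."""
    worst = 0.0
    for _ in range(iterations):
        deadline = time.monotonic() + interval_s
        time.sleep(interval_s)
        overshoot = time.monotonic() - deadline  # timer + scheduling latency
        worst = max(worst, overshoot)
    return worst
```

On a real-time kernel the same idea is implemented with a high-priority thread and a high-resolution timer, which is what makes the worst case bounded rather than merely small on average.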
|
7 |
Grunden till en simulering av batterier och batterisystem [The foundation of a simulation of batteries and battery systems]
Johansson, Gustav, January 2019 (has links)
No description available.
|
8 |
Efficient Implementation of Histogram Dimension Reduction using Deep Learning : The project focuses on implementing deep learning algorithms on the state-of-the-art Nvidia Drive PX GPU platform to achieve high performance.
Ng, Robin, January 2017 (has links)
No description available.
|
9 |
Formally Assured Intelligent Systems for Enhanced Ambient Assisted Living Support
Kunnappilly, Ashalatha, January 2019 (has links)
Ambient Assisted Living (AAL) solutions aim to assist the elderly in independent and safe living. During the last decade, the AAL field has witnessed significant development due to advancements in Information and Communication Technologies, Ubiquitous Computing and the Internet of Things. However, a closer look at existing AAL solutions shows that these improvements are mostly used to deliver one or a few functions, mainly of the same type (e.g. health-monitoring functions). Comparatively few initiatives integrate different kinds of AAL functionality, such as fall detection, reminders and fire alarms, besides health monitoring, into a common framework with intelligent decision-making that can offer enhanced reasoning by combining multiple events. To address this gap, this thesis proposes two categories of AAL architecture frameworks onto which different functionalities, chosen based on user preferences, can be integrated. One follows a centralized approach, using an intelligent Decision Support System (DSS); the other follows a truly distributed approach, involving multiple intelligent agents. The centralized architecture is our initial choice, due to its ease of development: multiple functionalities are combined with a centralized DSS that can assess the dependency between multiple events in real time. While easy to develop, our centralized solution suffers from the well-known single point of failure, which we remove by adding a redundant DSS. Nevertheless, the scalability, flexibility, multi-user access, and potential self-healing capability of the centralized solution are hard to achieve, so we also propose a distributed, agent-based architecture as a second solution, providing the community with two different AAL solutions that can be applied depending on needs and available resources.
Both solutions are to be used in safety-critical applications, so their design-time assurance, that is, a guarantee that they meet functional requirements and deliver the needed quality of service, is beneficial. Our first solution is a generic architecture that follows the design of many commercial AAL solutions, with sensors, a data collector, DSS, security and privacy, database (DB) systems, user interfaces (UI), and cloud-computing support. We represent this architecture in the Architecture Analysis and Design Language (AADL) via a set of component patterns that we propose. The advantage of using patterns is that they are easily reusable when building specific AAL architectures. Our patterns describe the behavior of the components in the Behavioral Annex of AADL, and the error behavior in AADL's Error Annex. We also show various instantiations of our generic model that can be developed based on user requirements. To formally assure these solutions against functional, timing and reliability requirements, we show how to employ exhaustive model checking using the state-of-the-art model checker UPPAAL, as well as statistical model-checking techniques with UPPAAL SMC, an extension of the UPPAAL model checker for stochastic systems, which can be employed when exhaustive verification does not scale. The second proposed architecture is an agent-based architecture for AAL systems, where agents are intelligent entities capable of communicating with each other in order to decide on an action to take. The decision support is thus distributed among agents and can be used by multiple users across multiple locations. Because this solution requires describing agents and their interaction, the existing core AADL does not suffice as an architectural framework.
Hence, we propose an extension to the core AADL language, the Agent Annex, with formal semantics as Stochastic Transition Systems, which allows us to specify probabilistic, non-deterministic and real-time AAL system behaviors. In order to formally assure our multi-agent system, we employ the state-of-the-art probabilistic model checker PRISM, which allows us to perform probabilistic yet exhaustive verification. As a final contribution, we also present a small-scale validation of an architecture of the first category, with end users from three countries (Romania, Poland, Denmark), carried out with partners from those countries. Our work in this thesis paves the way towards the development of user-centered, intelligent ambient assisted living solutions with ensured quality of service.
|
10 |
Performance Study and Analysis of Time Sensitive Networking
Muminovic, Mia; Suljic, Haris, January 2019 (has links)
Modern technology requires reliable, fast, and cheap networks as a backbone for data transmission. Among the many available solutions, switched Ethernet combined with the Time Sensitive Networking (TSN) standards excels because it provides high bandwidth and real-time characteristics using low-cost hardware. For industry to adopt this technology, extensive performance studies need to be conducted, and this thesis provides one. Concretely, the thesis examines the performance of two amendments, IEEE 802.1Qbv and IEEE 802.1Qbu, recently added to the TSN standards. The academic community understands the potential of this technology, so several simulation frameworks already exist, but most of them are unstable and under-tested. This thesis builds on top of existing frameworks and uses one developed in OMNeT++. Performance is analyzed through several separate scenarios and is measured in terms of end-to-end transmission latency and link utilization. The attained results justify industry interest in this technology and could lead to its greater adoption in the future.
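Of the two amendments, IEEE 802.1Qbv (the time-aware shaper) is the easier to sketch: a gate control list cycles through time windows, and in each window only the traffic classes whose gates are open may transmit. The model below is a minimal illustration, not the OMNeT++ framework used in the thesis, and the schedule values are invented:

```python
def open_gates(gate_control_list, t_us):
    """Minimal model of an IEEE 802.1Qbv gate control list: the list of
    (duration_us, open_traffic_classes) entries repeats every cycle, and
    only classes whose gate is open at time t_us may transmit."""
    cycle = sum(duration for duration, _ in gate_control_list)
    offset = t_us % cycle
    elapsed = 0
    for duration, gates in gate_control_list:
        if offset < elapsed + duration:
            return gates
        elapsed += duration
    return frozenset()  # unreachable for a non-empty list
```

For example, the hypothetical schedule `[(100, {7}), (300, {0, 1, 2})]` gives traffic class 7 an exclusive 100 μs window in each 400 μs cycle, which is what bounds the latency of time-critical frames regardless of best-effort load.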
|