  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
551

Programming methodologies for ADAS applications in parallel heterogeneous architectures / Méthodologies de programmation d'applications ADAS sur des architectures parallèles et hétérogènes

Dekkiche, Djamila 10 November 2017
Computer Vision (CV) is crucial for understanding and analyzing the driving scene in order to build more intelligent Advanced Driver Assistance Systems (ADAS). However, implementing CV-based ADAS in a real automotive environment is not straightforward. Indeed, CV algorithms combine the challenges of high computing performance and algorithm accuracy. To meet these requirements, new heterogeneous circuits have been developed. They consist of several processing units with different parallel computing technologies, such as GPUs and dedicated accelerators. To better exploit the performance of such architectures, different languages are required depending on the underlying parallel execution model. In this work, we investigate various parallel programming methodologies based on a complex stereo-vision case study. We introduce the relevant features and limitations of each approach. We then evaluate the programming tools employed, mainly in terms of computing performance and programming productivity. The feedback from this research is crucial for the development of future CV algorithms well matched to parallel architectures, with a better compromise between computing performance, algorithm accuracy, and programming effort.
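
As a rough, illustrative sketch of the kind of data-parallel kernel such a stereo-vision case study maps onto GPUs and dedicated accelerators, the NumPy snippet below computes a naive SAD block-matching disparity map; the function name, parameters, and cost function are assumptions for illustration, not the algorithm evaluated in the thesis.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def sad_disparity(left, right, max_disp=64, block=9):
    """Naive SAD block-matching disparity map over grayscale float images of
    equal size. Parameter values are illustrative only."""
    h, w = left.shape
    cost = np.full((max_disp, h, w), np.inf, dtype=np.float32)
    for d in range(max_disp):
        # absolute difference between the left image and the right image shifted by d
        diff = np.abs(left[:, d:] - right[:, :w - d])
        # aggregate the per-pixel cost over a block window (box filter)
        cost[d, :, d:] = uniform_filter(diff, size=block)
    # winner-takes-all: keep the disparity with the lowest aggregated cost
    return np.argmin(cost, axis=0).astype(np.uint8)
```

Each disparity hypothesis is independent of the others, which is why such a kernel can be expressed as CUDA threads, OpenMP loops, or a dedicated pipeline, the performance/effort trade-off space this work compares.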
552

Performance Analysis of the Preemption Mechanism in TSN

Murselović, Lejla January 2020
Ethernet-based real-time network communication technologies are nowadays a promising option for industrial applications, offering high bandwidth, scalability, and performance compared to existing real-time networks. Time-Sensitive Networking (TSN) is an enhancement of the existing Ethernet standards and thus offers compatibility, cost efficiency, and simplified infrastructure, like the previous prioritization and bridging standards. TSN is suitable for networks that carry both time-critical and non-time-critical traffic. The timing requirements of time-critical traffic are left undisturbed by less-critical traffic thanks to TSN features such as the Time-Aware Scheduler, a time-triggered scheduling mechanism that guarantees the fulfilment of the temporal requirements of highly time-critical traffic. Features such as Credit-Based Shapers and preemption result in a more efficiently utilized network. This thesis focuses on the effects that the preemption mechanism has on network performance. A simulation-based performance analysis of a single-node, single-egress-port model is conducted for different configuration patterns, including configurations with multiple express traffic classes. The simulation tool used is a custom-developed simulator called TSNS. In a single-egress-port model, the most significant performance indicator is the response time, which is one of the simulation measurements obtained from the TSNS network simulator. Comparing the results of these different network configurations, using realistic traffic patterns, provides a quantitative evaluation of network performance when the network is configured in various ways, including multiple preemption scenarios.
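
As a back-of-the-envelope illustration of why preemption improves response times at a single egress port, the sketch below compares the blocking an express frame can suffer from one lower-priority frame with and without frame preemption; the link speed, overheads, and fragment size are assumptions, not figures from the thesis or from TSNS.

```python
LINK_BPS = 1_000_000_000          # assumed 1 Gbit/s link

def tx_time(frame_bytes, overhead_bytes=20):
    """Wire time of a frame, including an assumed 20 B preamble + inter-frame gap."""
    return (frame_bytes + overhead_bytes) * 8 / LINK_BPS

def worst_case_blocking(low_prio_bytes, preemption, min_fragment=64):
    """Blocking an express frame can see from one preemptable frame already in
    transmission: the whole frame without preemption, only the non-preemptable
    remainder (at least a minimum-size fragment) with preemption enabled."""
    if preemption:
        return tx_time(min_fragment + 4)      # remaining fragment plus an assumed extra mCRC
    return tx_time(low_prio_bytes)

for mode in (False, True):
    print(f"preemption={mode}: {worst_case_blocking(1500, mode) * 1e6:.2f} us")
```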
553

FPGA acceleration of superpixel segmentation

Östgren, Magnus January 2020
Superpixel segmentation is a preprocessing step for computer vision applications in which an image is split into segments referred to as superpixels. Running the main algorithm on these superpixels reduces the number of data points processed compared to running it on pixels directly, while still retaining much of the same information. In this thesis, the possibility of running superpixel segmentation on an FPGA is investigated. This has resulted in the development of a modified version of the algorithm SLIC (Simple Linear Iterative Clustering). An FPGA implementation of this algorithm has been built in VHDL; it is designed as a pipeline that unrolls the iterations of SLIC. The designed algorithm shows a lot of potential and runs on real hardware, but more work is required to make the implementation more robust and to remove some visual artefacts.
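
To make the structure of SLIC concrete, here is a minimal software sketch of the assignment step that a pipelined design would unroll: each pixel is compared only against cluster centres within a 2S window using the combined colour/spatial distance. This is a generic NumPy formulation of standard SLIC, not the modified algorithm or the VHDL design developed in the thesis.

```python
import numpy as np

def slic_assign(lab_img, centers, S, m=10.0):
    """One SLIC assignment iteration. lab_img: HxWx3 CIELAB image,
    centers: iterable of (L, a, b, cy, cx), S: grid interval, m: compactness."""
    h, w, _ = lab_img.shape
    labels = -np.ones((h, w), dtype=np.int32)
    best = np.full((h, w), np.inf, dtype=np.float32)
    ys, xs = np.mgrid[0:h, 0:w]
    for k, (L, a, b, cy, cx) in enumerate(centers):
        # restrict the search to a 2S x 2S window around the centre
        y0, y1 = int(max(cy - 2 * S, 0)), int(min(cy + 2 * S, h))
        x0, x1 = int(max(cx - 2 * S, 0)), int(min(cx + 2 * S, w))
        patch = lab_img[y0:y1, x0:x1]
        d_lab = np.sum((patch - np.array([L, a, b])) ** 2, axis=2)
        d_xy = (ys[y0:y1, x0:x1] - cy) ** 2 + (xs[y0:y1, x0:x1] - cx) ** 2
        D = d_lab + d_xy * (m / S) ** 2      # squared combined distance
        win_best = best[y0:y1, x0:x1]
        improved = D < win_best
        win_best[improved] = D[improved]
        labels[y0:y1, x0:x1][improved] = k
    return labels
```

In a pipelined hardware version, the outer iterations of this assign/update loop become pipeline stages, which is the unrolling described above.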
554

Towards Reliable Computer Vision in Aviation: An Evaluation of Sensor Fusion and Quality Assessment

Björklund, Emil, Hjorth, Johan January 2020
Research conducted in the aviation industry focuses on two major areas: increased safety and a reduced environmental footprint. This thesis investigates the possibilities of increasing situational awareness with computer vision in avionics systems. Image fusion methods are evaluated with appropriate pre-processing of three image sensors, one in the visual spectrum and two in the infra-red spectrum. The sensor setup is chosen to cope with the different weather and operational conditions of an aircraft, with a focus on the final approach and landing phases. An extensive set of image quality assessment metrics derived from a systematic review is applied to provide a precise evaluation of the image quality produced by the fusion methods. A total of four image fusion methods are evaluated, two of which are convolutional-network-based, using the networks for feature extraction in the detail layers. Approaches based on visual saliency maps and sparse representation are also evaluated. With the methods implemented in MATLAB, the results show that a conventional method using a rolling guidance filter for layer separation and a visual saliency map provides the best results. The results are further confirmed with a subjective ranking test, in which the image quality of the fusion methods is evaluated further.
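
As a rough sketch of the two-scale idea behind the best-performing method (base/detail separation followed by saliency-weighted recombination), the snippet below fuses one visual and one infra-red image with OpenCV. A plain Gaussian blur stands in for the rolling guidance filter, and the blurred-Laplacian saliency measure is an assumption; this is not the MATLAB implementation evaluated in the thesis.

```python
import cv2
import numpy as np

def fuse_pair(vis, ir, sigma=5):
    """Two-scale fusion sketch for single-channel images of equal size:
    split each image into base + detail layers, fuse the base layers with
    per-pixel saliency weights and the detail layers with a max-absolute rule."""
    vis, ir = vis.astype(np.float32), ir.astype(np.float32)
    base_v = cv2.GaussianBlur(vis, (0, 0), sigma)   # stand-in for a rolling guidance filter
    base_i = cv2.GaussianBlur(ir, (0, 0), sigma)
    det_v, det_i = vis - base_v, ir - base_i
    sal_v = cv2.GaussianBlur(np.abs(cv2.Laplacian(vis, cv2.CV_32F)), (0, 0), sigma)
    sal_i = cv2.GaussianBlur(np.abs(cv2.Laplacian(ir, cv2.CV_32F)), (0, 0), sigma)
    w = sal_v / (sal_v + sal_i + 1e-6)              # weight for the visual image
    base = w * base_v + (1 - w) * base_i
    detail = np.where(np.abs(det_v) > np.abs(det_i), det_v, det_i)
    return np.clip(base + detail, 0, 255).astype(np.uint8)
```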
555

Eliminating effects of Flakiness in Embedded Software Testing : An industrial case study

Kanneganti, Joshika, Vadrevu, Krithi Sameera January 2020
Background. Unstable and unpredictable tests, herein referred to as flaky tests, pose a serious challenge to systems in the production environment. If a device is not tested thoroughly, it will be sent back from the production centers for retesting, which is an expensive affair. Removing flaky tests involves detecting the flaky tests, finding the causes of flakiness and, finally, eliminating the flakiness. The existing literature provides information on causes of flakiness and techniques for eliminating it in software systems. These are studied thoroughly, and interviews are used to understand whether they are applicable in the context of embedded systems. Objectives. The primary objective is to identify causes of flakiness in a device under test as well as techniques for eliminating flakiness. Methods. We applied a literature review to establish the current state of the art on flakiness. A case study was selected to address the objectives of the study, with interviews and observations carried out to collect data and data analysis performed using a directed content analysis method. Results. The observations resulted in eliminating four causes of flakiness in embedded systems, and the interviews identified four elimination techniques that were not found in the literature. Conclusions. Causes of flakiness and elimination techniques for the embedded systems domain are identified. Knowledge translation between the domains was carried out effectively.
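
Detection, the first of the three steps mentioned above, is commonly approximated by rerunning tests and looking for verdict changes. The sketch below shows that naive rerun strategy; the test commands are hypothetical placeholders, and the approach deliberately ignores the hardware-specific causes investigated in this study.

```python
import subprocess
from collections import Counter

def detect_flaky(test_cmds, reruns=10):
    """Rerun-based flakiness detector: a test whose verdict changes across
    identical reruns is flagged as flaky, with its observed pass rate."""
    flaky = {}
    for cmd in test_cmds:
        verdicts = Counter(
            subprocess.run(cmd, shell=True, capture_output=True).returncode == 0
            for _ in range(reruns)
        )
        if len(verdicts) > 1:                     # both pass and fail were observed
            flaky[cmd] = verdicts[True] / reruns  # observed pass rate
    return flaky

# Hypothetical device-under-test commands, for illustration only.
print(detect_flaky(["./run_hw_test --case boot", "./run_hw_test --case radio"]))
```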
556

Camera pose estimation with moving Aruco-board: Retrieving camera pose in a stereo camera tolling system application / Kamerapositionskalibrering med Aruco-tavla i rörelse.

Isaksson, Jakob, Magnusson, Lucas January 2020
Stereo camera systems can be utilized for different applications such as position estimation, distance measuring, and 3D modelling. However, this requires the cameras to be calibrated. This paper proposes a traditional calibration solution with Aruco markers mounted on a vehicle to estimate the pose of a stereo camera system in a tolling environment. Our method is based on Perspective-n-Point, which presumes the intrinsic matrix is already known. The goal is to find each camera's pose by identifying the marker corners in pixel coordinates as well as in world coordinates. Our tests show a worst-case error of 21.5 cm and a potential for centimetre accuracy. Validity is also verified by testing the obtained pose estimation live in the camera system. The paper concludes that the method has potential for higher accuracy not obtained in our experiment due to several factors. Further work would focus on enlarging the markers and widening the distance between them.
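
A minimal sketch of the detect-then-solve pipeline described above, assuming OpenCV 4.7+ with the aruco module: detect the markers, pair their pixel corners with known world coordinates, and recover the camera pose with Perspective-n-Point. The intrinsic matrix, dictionary, and the id-to-world-coordinate mapping are placeholder assumptions, not values from the paper.

```python
import cv2
import numpy as np

# Assumed intrinsics; the real values come from a prior intrinsic calibration.
K = np.array([[1200.0, 0.0, 960.0], [0.0, 1200.0, 540.0], [0.0, 0.0, 1.0]])
dist = np.zeros(5)

def estimate_camera_pose(image, world_corners):
    """world_corners: dict mapping marker id -> 4x3 array of corner world
    coordinates (metres). Returns camera centre and rotation in the world frame."""
    aruco = cv2.aruco
    detector = aruco.ArucoDetector(aruco.getPredefinedDictionary(aruco.DICT_4X4_50))
    corners, ids, _ = detector.detectMarkers(image)
    obj_pts, img_pts = [], []
    for c, i in zip(corners, ids.flatten()):          # assumes at least one marker is visible
        if i in world_corners:
            obj_pts.append(world_corners[i])          # known 3D corner positions
            img_pts.append(c.reshape(4, 2))           # detected 2D pixel positions
    ok, rvec, tvec = cv2.solvePnP(np.vstack(obj_pts).astype(np.float32),
                                  np.vstack(img_pts).astype(np.float32), K, dist)
    R, _ = cv2.Rodrigues(rvec)
    return -R.T @ tvec, R                             # camera centre in world coordinates, rotation
```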
557

An Embedded Garbage Collection Module with Support for Multiple Mutators and Weak References

Preußer, Thomas B., Reichel, Peter, Spallek, Rainer G. 14 November 2012
This report details the design of a garbage collection (GC) module, which introduces modern GC features to the domain of embedded implementations. The described design supports weak references and feeds reference queues. Its architecture allows multiple concurrent application cores operating as mutators on the shared memory managed by the GC module. The garbage collection is exact and fully concurrent so as to enable the uninterrupted computational progress of the mutators. It combines a distributed root marking with a centralized heap scan of the managed memory. It features a novel mark-and-copy GC strategy on a segmented memory, which thereby overcomes both the tremendous space overhead of two-space copying and the compaction race of mark-and-compact approaches. The proposed GC architecture has been practically implemented and proven using the embedded bytecode processor SHAP as a sample testbed. The synthesis results for settings up to three SHAP mutator cores are given and online functional measurements are presented. Basic performance dependencies on the system configuration are evaluated.
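
The interplay between marking, weak references, and reference queues can be sketched in software terms as follows. This is a single-threaded, high-level illustration of the idea only; the object model is invented for the example and does not reflect the concurrent hardware design of the SHAP GC module.

```python
class Obj:
    """Toy heap object: strong references plus an optional weak reference."""
    def __init__(self, refs=(), weak_target=None):
        self.refs = list(refs)           # strong references to other objects
        self.weak_target = weak_target   # target of a weak reference, if any
        self.marked = False

def mark_and_clear_weak(roots, reference_queue):
    """Trace strong references only, then clear weak references whose targets
    stayed unmarked and enqueue the cleared reference objects."""
    stack, weak_refs = list(roots), []
    while stack:
        obj = stack.pop()
        if obj.marked:
            continue
        obj.marked = True
        if obj.weak_target is not None:
            weak_refs.append(obj)        # note the weak edge but do not trace through it
        stack.extend(obj.refs)
    for ref in weak_refs:
        if not ref.weak_target.marked:
            ref.weak_target = None       # target is unreachable: clear the weak reference
            reference_queue.append(ref)  # and feed the reference queue
```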
558

EXPLORATION OF DEEP LEARNING APPLICATIONS ON AN AUTONOMOUS EMBEDDED PLATFORM (BLUEBOX 2.0)

Dewant Katare 06 December 2019
An autonomous vehicle depends on a combination of the latest technologies, i.e. ADAS safety features such as Adaptive Cruise Control (ACC), Autonomous Emergency Braking (AEB), Automatic Parking, Blind Spot Monitoring, Forward Collision Warning or Avoidance (FCW or FCA), and Lane Departure Warning. The current trend is to implement these technologies using artificial or deep neural networks in place of the traditionally used algorithms. Recent research in deep learning and the development of capable processors for autonomous or self-driving cars has shown a great deal of promise, but hardware deployment involves many complexities because of limited resources such as memory, computational power, and energy. Deploying several of the mentioned ADAS safety features with multiple sensors and individual processors increases integration complexity and also results in a distributed system, which is a pivotal concern for autonomous vehicles.

This thesis tackles two important ADAS safety features, Forward Collision Warning and object detection, using machine learning and deep neural networks, and their deployment on an autonomous embedded platform.

This thesis proposes the following:
1. A machine-learning-based approach for the forward collision warning system in an autonomous vehicle.
2. 3D object detection using lidar and camera, primarily based on lidar point clouds.

The proposed forward collision warning model is based on a forward-facing automotive radar providing sensed input values such as acceleration, velocity, and separation distance to a classifier algorithm which, on the basis of a supervised learning model, alerts the driver of a possible collision. Decision trees, linear regression, support vector machines, stochastic gradient descent, and a fully connected neural network are used for the prediction.

The second proposed method uses an object detection architecture that combines 2D object detectors with contemporary 3D deep learning techniques. In this approach, a 2D object detector is used first, proposing 2D bounding boxes on the images or video frames. Additionally, a 3D object detection technique is used in which the point clouds are instance-segmented and, based on the raw point cloud density, a 3D bounding box is predicted around the previously segmented objects.
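
A toy version of the first proposal, a supervised classifier mapping radar measurements (separation distance, relative velocity, acceleration) to a warn/no-warn decision, is sketched below with scikit-learn. The synthetic data and the 2.5 s time-to-collision labelling rule are assumptions for illustration only; they are not the data set, thresholds, or models evaluated in the thesis.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Synthetic radar samples: [separation distance (m), relative velocity (m/s), acceleration (m/s^2)].
rng = np.random.default_rng(0)
X = np.column_stack([rng.uniform(1, 100, 2000),    # distance to the lead vehicle
                     rng.uniform(-30, 5, 2000),    # relative velocity (negative = closing)
                     rng.uniform(-5, 3, 2000)])    # relative acceleration

closing = -X[:, 1]                                 # positive when the gap is shrinking
ttc = np.where(closing > 0, X[:, 0] / np.maximum(closing, 1e-6), np.inf)
y = (ttc < 2.5).astype(int)                        # 1 = issue a forward collision warning

clf = DecisionTreeClassifier(max_depth=4).fit(X, y)
print(clf.predict([[20.0, -12.0, -1.0]]))          # close, fast-closing target -> expect a warning
```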
559

Adaptive Sensor: Exploring the use of dynamic role allocation based on interest to detect blood and tumors in a smart pill

Yang, Can January 2018
For intelligent systems, the ability to adapt a sensor's sensing capabilities offers promise for reducing the number, weight, and volume of sensors required. This basic idea is in line with a recent assertion by the well-known roboticist Rodney Brooks that versatile robots could be used to perform various tasks instead of requiring a large number of specialized robots. In the current work, we consider the concept of a "smart" sensor which could dynamically adapt itself to replace multiple static sensors, within the application area of ingestible smart pills, where small sensors might be required to detect problems such as bleeding or tumours. Simulations were used to evaluate some basic strategies for how to adapt the sensor and their effectiveness was compared; a hardware prototype using LEDs to indicate system switching was also prepared.
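
As a purely illustrative sketch of what dynamic role allocation could look like in code, the toy policy below switches the pill's sensing role to whichever anomaly score currently looks most interesting; the score names, threshold, and fallback "survey" role are invented and do not correspond to the strategies evaluated in the simulations.

```python
def choose_role(scores, threshold=0.6):
    """Toy role-allocation policy: adopt the role with the highest anomaly
    score if it exceeds a threshold, otherwise keep surveying."""
    candidate = max(scores, key=scores.get)
    return candidate if scores[candidate] > threshold else "survey"

print(choose_role({"blood": 0.2, "tumour": 0.7}))   # -> 'tumour'
print(choose_role({"blood": 0.1, "tumour": 0.3}))   # -> 'survey'
```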
560

Development of a demo platform on mobile devices for 2D- and 3D-sound processing

Rosencrantz, Frans January 2020
This thesis project aims at developing a demonstration platform on mobile devices for testing and demonstrating algorithms for 2D and 3D spatial sound reproduction. The demo system consists of four omnidirectional microphones in a square planar array, an Octo sound card (from Audio Injector), a Raspberry Pi 3B+ (R-Pi) single-board computer, and an inertial measurement unit (IMU) located in the center of the array. The microphone array captures sound, which is digitized and transferred to the R-Pi. On the R-Pi, the digitized sound signal is processed with the directional audio coding (DirAC) algorithm to preserve the spatial properties of the sound. Finally, the digital signal and spatial properties are rendered through Dirac VR to produce a spatial stereo signal of the recorded environment. The directional audio coding algorithm was initially implemented in Matlab and then ported to C++, since the R-Pi does not support Matlab natively. The ported algorithm was verified on a four-channel-in, six-channel-out system, processing 400 000 samples at 44.1 kHz. The results show that the C++ DirAC implementation has a maximum error of 4.43e-05, or -87 dB, compared to the original Matlab implementation. For future research on spatial audio reproduction, a four-microphone smartphone mock-up was constructed based on the same hardware used in the demo system. A software interface was also implemented for transferring the microphone recordings and the orientation of the mock-up to Matlab.
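
The analysis half of DirAC estimates a direction of arrival and a diffuseness value from first-order (B-format) signals. The broadband, time-domain NumPy sketch below illustrates those two estimates; scaling constants are omitted, so it is a simplification of the idea rather than a copy of the Matlab/C++ implementation described here.

```python
import numpy as np

def dirac_analysis(w, x, y, z):
    """Estimate direction of arrival and diffuseness from B-format signals
    w, x, y, z (1-D arrays of equal length). Constants are omitted, so the
    diffuseness value is only indicative."""
    # Active intensity vector (up to a constant): pressure times particle velocity.
    I = np.array([np.mean(w * x), np.mean(w * y), np.mean(w * z)])
    energy = np.mean(w ** 2 + 0.5 * (x ** 2 + y ** 2 + z ** 2))  # sound-field energy, up to a constant
    azimuth = np.degrees(np.arctan2(I[1], I[0]))
    elevation = np.degrees(np.arctan2(I[2], np.hypot(I[0], I[1])))
    diffuseness = 1.0 - np.linalg.norm(I) / (energy + 1e-12)
    return azimuth, elevation, diffuseness
```

In a full DirAC implementation the same analysis runs per frequency band and per time frame, and the synthesis stage uses the direction and diffuseness estimates to distribute the signal to the output channels.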
