1

A DESIGN FOR A 10.4 GIGABIT/SECOND SOLID-STATE DATA RECORDER

Wise, Richard J. Jr, October 1999
International Telemetering Conference Proceedings / October 25-28, 1999 / Riviera Hotel and Convention Center, Las Vegas, Nevada / A need has been identified in the Test and Evaluation (T&E) and tactical aircraft communities for a ruggedized high-speed instrumentation data recorder to complement the ever-increasing number of high frame-rate digital cameras and sensors. High-speed digital camera manufacturers are entering this market in order to provide adequate recording capability for their own cameras. This paper discusses a Solid-State Data Recorder (SSDR) for use in imaging and high-speed sensor data acquisition applications. The SSDR is capable of a 10.4 Gb/sec sustained, 16 Gb/sec burst, input data rate via a proprietary 32-channel-by-10-bit generic high-speed parallel interface, a massively parallel 256-bit bus architecture, and a unique memory packaging design. A 32-bit PCIbus control/archive interface and a dedicated DCRsi™ interface are also employed, allowing data archiving to standard high-speed interfaces (SCSI, Fibre Channel, USB, etc.) and to DCRsi™-compatible tape recorders.
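As a rough back-of-the-envelope illustration (not part of the original paper), the word clock implied by the quoted rates can be derived from the 32-channel-by-10-bit interface. The sketch below assumes the quoted figures are raw aggregate payload rates across all 32 channels; the actual clocking scheme is not specified in the abstract.

```python
# Hedged sketch: back-of-the-envelope word-rate check for the SSDR interface.
# Assumes the quoted rates are raw payload bits across all 32 channels.

CHANNELS = 32
BITS_PER_WORD = 10
BITS_PER_TRANSFER = CHANNELS * BITS_PER_WORD  # 320 bits moved per parallel transfer

def word_clock_mhz(rate_gbit_per_s: float) -> float:
    """Transfer (word) clock in MHz needed to sustain the given aggregate rate."""
    return rate_gbit_per_s * 1e9 / BITS_PER_TRANSFER / 1e6

print(f"sustained 10.4 Gb/s -> {word_clock_mhz(10.4):.1f} MHz word clock")  # ~32.5 MHz
print(f"burst     16.0 Gb/s -> {word_clock_mhz(16.0):.1f} MHz word clock")  # ~50.0 MHz
```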
2

Stagioni: Temperature management to enable near-sensor processing for performance, fidelity, and energy-efficiency of vision and imaging workloads

January 2019
abstract: Vision processing on traditional architectures is inefficient due to energy-expensive off-chip data movements. Many researchers advocate pushing processing close to the sensor to substantially reduce data movements. However, continuous near-sensor processing raises the sensor temperature, impairing the fidelity of imaging/vision tasks. The work characterizes the thermal implications of using 3D stacked image sensors with near-sensor vision processing units. The characterization reveals that near-sensor processing reduces system power but degrades image quality. For reasonable image fidelity, the sensor temperature needs to stay below a threshold that is situationally determined by application needs. Fortunately, the characterization also identifies opportunities -- unique to the needs of near-sensor processing -- to regulate temperature based on dynamic visual task requirements and to rapidly increase capture quality on demand. Based on the characterization, the work proposes and investigates two thermal management strategies -- stop-capture-go and seasonal migration -- for imaging-aware thermal management, presents the parameters that govern the policy decisions, and explores the trade-offs between system power and policy overhead. The evaluation shows that these dynamic thermal management strategies can unlock the energy-efficiency potential of near-sensor processing with minimal performance impact and without compromising image fidelity. / Masters Thesis, Computer Engineering, 2019
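As a hedged illustration of the kind of policy the abstract names, the sketch below shows a minimal stop-capture-go style loop that gates near-sensor processing on a die-temperature threshold. The thresholds, function names and timing are hypothetical placeholders, not taken from the thesis.

```python
# Hedged sketch of a "stop-capture-go" style policy: pause near-sensor processing
# when the stacked sensor exceeds a fidelity-driven temperature threshold, resume
# once it has cooled. Thresholds, sensor APIs, and timing are illustrative only.
import time

T_MAX_C = 55.0      # hypothetical fidelity threshold for the current task
T_RESUME_C = 45.0   # hypothetical hysteresis point to avoid rapid toggling

def read_sensor_temp_c() -> float:
    """Placeholder for a platform-specific die-temperature read."""
    raise NotImplementedError

def capture_and_process_frame() -> None:
    """Placeholder for one near-sensor capture + vision-processing step."""
    raise NotImplementedError

def stop_capture_go_loop() -> None:
    processing_enabled = True
    while True:
        temp = read_sensor_temp_c()
        if processing_enabled and temp >= T_MAX_C:
            processing_enabled = False          # stop: let the sensor stack cool
        elif not processing_enabled and temp <= T_RESUME_C:
            processing_enabled = True           # go: fidelity budget restored
        if processing_enabled:
            capture_and_process_frame()
        else:
            time.sleep(0.05)                    # idle briefly while cooling
```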
3

Vehicle Perception: Localization, Mapping with Detection, Classification and Tracking of Moving Objects

Vu, Trung-Dung 18 September 2009
Perceiving and understanding the environment surrounding a vehicle is a very important step in building driving assistance systems or autonomous vehicles. In this thesis, we study the problems of simultaneous localization and mapping (SLAM) with detection, classification and tracking of moving objects in the context of dynamic outdoor environments, focusing on the use of a laser scanner as the main perception sensor. If these tasks can be accomplished reliably in real time, a vast range of potential automotive applications opens up. The first contribution of this research is a grid-based approach that solves the problems of SLAM and moving object detection together. To correct the vehicle location obtained from odometry, we introduce a new fast incremental scan matching method that works reliably in dynamic outdoor environments. Once a good vehicle location is estimated, the surrounding map is updated incrementally and moving objects are detected without a priori knowledge of the targets. Experimental results on datasets collected from different scenarios demonstrate the efficiency of the method. The second contribution builds on the first: once a good vehicle localization and a reliable map are obtained, we focus on moving objects and present a method for simultaneous detection, classification and tracking of moving objects. A model-based approach is introduced to interpret the laser measurement sequence over a sliding window of time by hypotheses of moving object trajectories. The data-driven Markov chain Monte Carlo (DDMCMC) technique is used to solve the data association in the spatio-temporal space and effectively find the most likely solution. We test the proposed algorithm on real-life urban traffic data and present promising results. The third contribution is the integration of our perception module on a real vehicle for a particular automotive safety application, named Pre-Crash. This work was performed in the framework of the European project PReVENT-ProFusion in collaboration with Daimler AG. A comprehensive experimental evaluation based on relevant crash and non-crash scenarios is presented, which confirms the robustness and reliability of our proposed method.
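To make the grid-based idea concrete, the following is a minimal sketch (not the thesis's exact formulation) of detecting likely moving objects by flagging laser hits that land in grid cells previously observed as free; the threshold value and array layout are assumptions for illustration.

```python
# Hedged illustration of grid-based moving-object detection: a laser hit landing
# in a cell that the map has so far seen as free is flagged as dynamic. This is a
# simplified stand-in for the thesis's incremental grid/scan-matching pipeline.
import numpy as np

FREE_THRESHOLD = 0.2   # illustrative: cells below this occupancy probability count as "known free"

def detect_moving_hits(occupancy_grid: np.ndarray, hit_cells: np.ndarray) -> np.ndarray:
    """
    occupancy_grid: 2D array of occupancy probabilities built from past scans.
    hit_cells:      (N, 2) integer array of (row, col) cells struck by the current scan.
    Returns a boolean mask marking hits that are likely caused by moving objects.
    """
    probs = occupancy_grid[hit_cells[:, 0], hit_cells[:, 1]]
    return probs < FREE_THRESHOLD   # hit where the map says "free" -> likely dynamic

# Example usage with a toy 5x5 map and three laser hits:
grid = np.full((5, 5), 0.5)
grid[2, 2] = 0.05                       # this cell has consistently been observed free
hits = np.array([[2, 2], [0, 0], [4, 4]])
print(detect_moving_hits(grid, hits))   # [ True False False ]
```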
4

Multiple sensor fusion for detection, classification and tracking of moving objects in driving environments

Chavez Garcia, Ricardo Omar 25 September 2014
Advanced driver assistance systems (ADAS) help drivers to perform complex driving tasks and to avoid or mitigate dangerous situations. The vehicle senses the external world using sensors and then builds and updates an internal model of the environment configuration. Vehicle perception consists of establishing the spatial and temporal relationships between the vehicle and the static and moving obstacles in the environment. It is composed of two main tasks: simultaneous localization and mapping (SLAM) deals with modelling the static parts of the environment, and detection and tracking of moving objects (DATMO) is responsible for modelling the moving parts. In order to perform good reasoning and control, the system has to model the surrounding environment correctly. Accurate detection and classification of moving objects is a critical aspect of a moving object tracking system, which is why several sensors are usually part of a common intelligent vehicle system. Classification of moving objects is needed to determine the possible behaviour of the objects surrounding the vehicle, and it is usually performed at the tracking level. Knowledge about the class of moving objects at the detection level can help improve their tracking. Most current perception solutions consider classification information only as aggregate information for the final perception output. Management of incomplete information is also an important requirement for perception systems. Incomplete information can originate from sensor-related causes, such as calibration issues and hardware malfunctions, or from scene perturbations, such as occlusions, weather conditions and object shifting. It is important to manage these situations by taking them into account in the perception process. The main contributions of this dissertation focus on the DATMO stage of the perception problem. Precisely, we believe that by including the object's class as a key element of the object's representation and by managing the uncertainty from multiple sensor detections, we can improve the results of the perception task, i.e., obtain a more reliable list of moving objects of interest represented by their dynamic state and appearance information. Therefore, we address the problems of sensor data association and sensor fusion for object detection, classification and tracking at different levels within the DATMO stage. Although we focus on a set of three main sensors (radar, lidar and camera), we propose a modifiable architecture that can include other types or numbers of sensors. First, we define a composite object representation that includes class information as part of the object state from the early stages through to the final output of the perception task. Second, we propose, implement and compare two different perception architectures that solve the DATMO problem according to the level at which object association, fusion and classification information is included and performed. Our data fusion approaches are based on the evidential framework, which is used to manage and include the uncertainty from sensor detections and object classifications. Third, we propose an evidential data association approach to establish a relationship between two sources of evidence from object detections, and we observe how class information improves the final result of the DATMO component. Fourth, we integrate the proposed fusion approaches as part of a real-time vehicle application. This integration has been performed in a real vehicle demonstrator from the interactIVe European project. Finally, we analyse and experimentally evaluate the performance of the proposed methods. We compare our evidential fusion approaches against each other and against a state-of-the-art method using real data from different driving scenarios. These comparisons focus on the detection, classification and tracking of different moving objects: pedestrians, bikes, cars and trucks.
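The fusion approaches are described as evidential, which typically means Dempster-Shafer style mass functions. The sketch below shows Dempster's rule of combination applied to class evidence over {pedestrian, bike, car, truck}; the mass values and sensor roles are illustrative, not the thesis's actual assignments or discounting scheme.

```python
# Hedged sketch of Dempster's rule of combination over the class frame
# {pedestrian, bike, car, truck}, as one way to fuse class evidence from two
# sensors. Mass values are made up for illustration.
from itertools import product

def dempster_combine(m1: dict, m2: dict) -> dict:
    """Combine two mass functions whose focal elements are frozensets of classes."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb          # mass assigned to contradictory hypotheses
    norm = 1.0 - conflict
    return {k: v / norm for k, v in combined.items()}

# Illustrative masses: lidar shape evidence vs. camera appearance evidence.
omega  = frozenset({"pedestrian", "bike", "car", "truck"})   # full ignorance
lidar  = {frozenset({"car", "truck"}): 0.7, omega: 0.3}
camera = {frozenset({"car"}): 0.6, omega: 0.4}
print(dempster_combine(lidar, camera))   # most mass concentrates on {"car"}
```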
5

Large-scale high-performance video surveillance

Sutor, S. R. (Stephan R.) 07 October 2014
Abstract: The last decade was marked by a series of harmful events ranging from economic crises to organized crime, acts of terror and natural catastrophes. This has led to a paradigm shift concerning security. Millions of surveillance cameras have been deployed, which created new challenges, as the systems and operations behind those cameras could not cope with the rapid growth in the number of video cameras and systems. In today's control rooms, often hundreds or even thousands of cameras are displayed, overloading security officers with irrelevant information. The purpose of this research was the creation of a novel video surveillance system with automated analysis mechanisms that enable security authorities and their operators to cope with this information flood. By automating the process, video surveillance is transformed into a proactive information system. The progress in technology as well as the ever-increasing demand for security have proven to be an enormous driver for security technology research such as this study. This work shall contribute to the protection of our personal freedom, our lives, our property and our society by aiding the prevention of crime and terrorist attacks. In this study, the design science research methodology was utilized in order to ensure scientific rigor while constructing and evaluating artifacts. The requirements for this research were gathered in close cooperation with high-level security authorities, and prior research was studied in detail. The created construct, the "Intelligent Video Surveillance System", is a distributed, highly scalable software framework that can serve as the basis for any kind of high-performance video surveillance system, from installations focusing on high availability to flexible cloud-based installations that scale across multiple locations and tens of thousands of cameras. First, in order to provide a strong foundation, a modular, distributed system architecture was created, which was then augmented by a multi-sensor analysis process. This enabled the analysis of data from multiple sources, combining video and other sensors in order to automatically detect critical events. Further, an intelligent mobile client, the video surveillance local control, was created to address remote access applications. Finally, a wireless self-contained surveillance system was introduced: a novel smart camera concept that enables ad hoc and mobile surveillance. The value of the created artifacts was proven by evaluation at two real-world sites: an international airport with a large-scale installation and high-security requirements, and a security service provider offering a multitude of video-based services by operating a video control center with thousands of connected cameras.
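As a hedged illustration of the multi-sensor analysis idea described above, the sketch below correlates a video analytics alarm with an independent sensor trigger inside a short time window before escalating a critical event. The event names and window length are assumptions, not the framework's actual interfaces.

```python
# Hedged sketch of multi-sensor event correlation: escalate a critical event only
# when a video analytics alarm and an independent sensor trigger occur within a
# short time window. Names and the window length are illustrative only.
from dataclasses import dataclass

@dataclass
class SensorEvent:
    source: str       # e.g. "video_motion", "door_contact"
    timestamp: float  # seconds since epoch

def correlate(video_events, other_events, window_s: float = 5.0):
    """Return (video, other) pairs whose timestamps fall within window_s seconds."""
    hits = []
    for v in video_events:
        for o in other_events:
            if abs(v.timestamp - o.timestamp) <= window_s:
                hits.append((v, o))
    return hits

alarms = correlate(
    [SensorEvent("video_motion", 1000.0)],
    [SensorEvent("door_contact", 1002.5)],
)
print(alarms)   # one correlated pair -> candidate critical event
```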
