  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

F16 MID-LIFE UPGRADE INSTRUMENTATION SYSTEM SOLVING THE PROBLEM OF SPACE IN THE AIRCRAFT AND IN THE RF SPECTRUM

Siu, David P. 10 1900 (has links)
International Telemetering Conference Proceedings / October 27-30, 1997 / Riviera Hotel and Convention Center, Las Vegas, Nevada / The older F16 jet fighters are currently being flight tested to evaluate the upgraded electronics for the aircraft avionics, flight control and weapons systems. An instrumentation system was needed that could record three different video signals, record four Military-Standard-1553B (Mil-Std-1553B) data streams, record one PCM stream, transmit the PCM stream, and transmit two video signals. Using off-the-shelf equipment, the F16 instrumentation system was designed to meet the electronic specifications, the limited available space of a small jet fighter, and the limited space in the S-band frequency range.
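As a back-of-the-envelope illustration of the "space in the RF spectrum" constraint, the sketch below checks whether three transmitted links fit inside an S-band telemetry allocation without overlapping; every number (band edges, center frequencies, bandwidths) is an assumed placeholder, not data from the paper.

    # Assumed S-band telemetry allocation and link plan -- illustrative numbers only.
    BAND_MHZ = (2200.0, 2290.0)   # assumed usable telemetry band edges

    links = [  # (name, center frequency MHz, occupied bandwidth MHz) -- all placeholders
        ("PCM stream",   2212.5, 5.0),
        ("video link 1", 2230.5, 10.0),
        ("video link 2", 2250.5, 10.0),
    ]

    def fits(band, links):
        lo, hi = band
        edges = sorted((f - bw / 2, f + bw / 2, name) for name, f, bw in links)
        ok = all(lo <= e0 and e1 <= hi for e0, e1, _ in edges)   # inside the band
        ok &= all(edges[i][1] <= edges[i + 1][0]                  # no channel overlap
                  for i in range(len(edges) - 1))
        return ok

    print("Link plan fits in the allocation:", fits(BAND_MHZ, links))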
2

Υλοποίηση του MPEG-4 Simple Profile CODEC στην πλατφόρμα TMS320DM6437 για επεξεργασία βίντεο σε πραγματικό χρόνο / Implementation of MPEG-4 Simple Profile CODEC in DSP platform TMS320DM6437 for video processing in real-time

Σωτηρόπουλος, Κωνσταντίνος 30 April 2014 (has links)
Η παρούσα ειδική ερευνητική εργασία εκπονήθηκε στα πλαίσια του Διατμηματικού Προγράμματος Μεταπτυχιακών Σπουδών Ειδίκευσης στα “Συστήματα Επεξεργασίας Σημάτων και Επικοινωνιών” στο Τμήμα Φυσικής του Πανεπιστημίου Πατρών. Αντικείμενο της παρούσας εργασίας είναι η σχεδίαση και ανάπτυξη του MPEG – 4 Simple Profile CODEC στο περιβάλλον Simulink με σκοπό την τελική εκτέλεση του αλγορίθμου DSP που θα προκύψει, στην πλατφόρμα ανάπτυξης TMS320DM6437 EVM. Στο πρώτο κεφάλαιο ορίζεται η έννοια της κωδικοποίησης βίντεο σε πραγματικό χρόνο και περιγράφεται η σύγχυση που επικρατεί γύρω από αυτήν. Επίσης γίνεται μια περιγραφή των επεξεργαστών ψηφιακού σήματος ως προς τα τυπικά χαρακτηριστικά που διαθέτουν, την αρχιτεκτονική τους, την αρχιτεκτονική μνήμης, τα στοιχεία υλικού που διαθέτουν για τη ροή του DSP προγράμματος, ενώ παράλληλα, παρουσιάζεται η ιστορική εξέλιξη των DSPs που οδήγησε στους σύγχρονους DSPs και οι οποίοι, διαθέτουν καλύτερες επιδόσεις από τους προπάτορές τους, και αυτό χάρη στις τεχνολογικές και αρχιτεκτονικές εξελίξεις όπως, οι χαμηλότεροι κανόνες σχεδίασης, η γρήγορη προσπέλαση κρυφής μνήμης δύο επιπέδων, η σχεδίαση του DMA και ενός μεγαλύτερου συστήματος διαύλου. Στο τέλος του κεφαλαίου παρουσιάζεται η αρχιτεκτονική της πλατφόρμας ανάπτυξης TMS320DM6437 EVM καθώς και οι διεπαφές υλικού που διαθέτει για την είσοδο και έξοδο βίντεο/ήχου από αυτήν. Στο δεύτερο κεφάλαιο γίνεται μια εκτενής παρουσίαση των εννοιών που συναντώνται στην κωδικοποίηση βίντεο. Στην αρχή του κεφαλαίου απεικονίζεται το γενικό μοντέλο ενός κωδικοποιητή/αποκωδικοποιητή και βάσει αυτού προχωράμε στην περιγραφή του χρονικού μοντέλου, το οποίο επιβάλλει την πρόβλεψη του τρέχοντος πλαισίου βίντεο χρησιμοποιώντας το προηγούμενο, ενώ παράλληλα, εξηγεί και μεθόδους για την εκτίμηση κίνησης περιοχών (μακρομπλοκ) μέσα στο πλαίσιο ενός βίντεο και το πώς μπορεί να γίνει ο υπολογισμός του σφάλματος κίνησης τους. Στη συνέχεια περιγράφεται το μοντέλο εικόνας το οποίο στην πράξη αποτελείται από τρία συστατικά μέρη: τον μετασχηματισμό (αποσυσχετίζει και συμπιέζει τα δεδομένα), την κβάντιση (μειώνει την ακρίβεια των μετασχηματισμένων δεδομένων) και την ανακατάταξη (ανακατατάσσει τα δεδομένα ούτως ώστε να ομαδοποιήσει μαζί τις σημαντικές τιμές). Οι συντελεστές του μετασχηματισμού μετά την ανακατάταξη και την κωδικοποίηση, μπορούν να κωδικοποιηθούν περαιτέρω με τη χρήση κωδικών μεταβλητού μήκους (Huffman κωδικοποίηση) ή μέσω αριθμητικής κωδικοποίησης. Στο τέλος του κεφαλαίου περιγράφεται το υβριδικό μοντέλο DPCM/DCT CODEC πάνω στον οποίο στηρίζεται και η υλοποίηση του MPEG – 4 Simple Profile CODEC. Στο τρίτο κεφάλαιο ουσιαστικά γίνεται μια περιγραφή των χαρακτηριστικών του MPEG – 4 Simple Profile CODEC, των εργαλείων που χρησιμοποιεί, της έννοιας αντικείμενο που πλέον υπεισέρχεται στην κωδικοποίηση βίντεο καθώς και τα είδη προφίλ και επιπέδων που υποστηρίζει το συγκεκριμένο πρωτόκολλο κωδικοποίησης/αποκωδικοποίησης. Στο τέταρτο κεφάλαιο παρουσιάζεται η υλοποίηση του κωδικοποιητή, του αποκωδικοποιητή του MPEG – 4 Simple Profile CODEC καθώς και των επιμέρους υποσυστημάτων που τους απαρτίζουν. Στο πέμπτο κεφάλαιο περιγράφεται η αλληλεπίδραση του χρήστη με το σύστημα κωδικοποίησης/αποκωδικοποίησης, τι παράμετροι χρειάζονται να δοθούν ως είσοδοι από αυτόν, καθώς και πως είναι δυνατή η χρήση του συγκεκριμένου συστήματος. / This project objective is the design and development of MPEG – 4 Simple Profile CODEC in Simulink environment in order to execute the resulting DSP algorithm on the development platform TMS320DM6437 EVM. 
The first chapter defines the term real-time video coding, which is often misunderstood. It also gives a brief description of DSP systems, covering their typical characteristics, their processor and memory architectures, and the hardware elements provided to support the flow of a DSP program. The evolution of DSPs over time is also presented; it led to modern DSPs with better performance than their ancestors thanks to technological and architectural improvements such as smaller design rules, fast two-level cache access, (E)DMA circuitry and a wider bus system. The end of the chapter presents the architecture of the TMS320DM6437 EVM board and its hardware interfaces for video and audio input/output. The second chapter gives an extensive presentation of the concepts encountered in video coding. At the beginning of the chapter a general encoder/decoder model is depicted, followed by a description of the temporal model, which predicts the current frame from the previous one and explains methods for macroblock motion estimation and motion compensation. Next, the image model is described; in practice it consists of three components: the transform (which decorrelates and compresses the data), quantization (which reduces the accuracy of the transformed data) and reordering (which rearranges the data so that significant values are grouped together). After reordering, the transform coefficients can be further compressed using variable-length coding (Huffman coding) or arithmetic coding. At the end of the chapter the hybrid DPCM/DCT CODEC model is described, on which the implementation of the MPEG-4 Simple Profile CODEC is based. The third chapter describes the characteristics of the MPEG-4 Simple Profile CODEC, the tools it uses, the "object" concept introduced in video coding, and the profiles and levels supported by this coding/decoding standard. It also describes how rectangular frames are coded and presents the Simulink model of the MPEG-4 Simple Profile CODEC, which is the basis for the DSP algorithm executed on the development platform. The fourth chapter presents the implementation of the MPEG-4 Simple Profile encoder and decoder and their constituent subsystems. The fifth chapter describes the interaction between the user and the CODEC, the parameters that must be entered as inputs, and how the system can be used.
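As a concrete illustration of the image model described above (transform, quantization, reordering), here is a minimal numpy sketch of what happens to a single 8x8 block; the block size, quantization step and scan order are generic textbook choices, not parameters taken from this thesis's Simulink implementation.

    import numpy as np

    def dct_matrix(n=8):
        # Orthonormal DCT-II basis matrix
        k = np.arange(n)
        c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
        c[0, :] = np.sqrt(1.0 / n)
        return c

    def zigzag_order(n=8):
        # Indices of an n x n block in zigzag (low-frequency first) order
        return sorted(((i, j) for i in range(n) for j in range(n)),
                      key=lambda p: (p[0] + p[1], p[0] if (p[0] + p[1]) % 2 else p[1]))

    def encode_block(block, qstep=16):
        c = dct_matrix(block.shape[0])
        coeffs = c @ (block - 128.0) @ c.T            # transform: decorrelate the pixels
        quant = np.round(coeffs / qstep).astype(int)  # quantization: drop precision
        return [quant[i, j] for i, j in zigzag_order(block.shape[0])]  # reordering

    block = np.arange(64).reshape(8, 8).astype(float)  # dummy 8x8 luma block
    print(encode_block(block)[:10])  # significant (low-frequency) coefficients come first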
3

Estudo comparativo de codificação tridimensional para o SBTVD / A comparative study of three-dimensional coding for the SBTVD

Tomé, Adenilson José Araujo January 2015 (has links)
Orientador: Prof. Dr. Celso Setsuo Kurashima / Dissertação (mestrado) - Universidade Federal do ABC, Programa de Pós-Graduação em Engenharia da Informação, 2015. / No dia 2 de dezembro de 2007 o Brasil deu seu primeiro passo rumo à implantação do Sistema Brasileiro de TV Digital (SBTVD). A adoção do padrão H.264/MPEG-4 AVC proporciona tecnologias avançadas para codificação de áudio e vídeo que possibilitam a transmissão de áudio e vídeo de alta definição e, também a codificação de vídeo tridimensional para transmissão. A presente dissertação de mestrado visa estudar a codificação de conteúdo tridimensional através do padrão H.264/MPEG-4 AVC para a inserção dos sinais codificados dentro do SBTVD. O trabalho apresenta um estudo sobre o SBTVD, as suas normas, a camada de transporte e a multiplexação dos sinais digitais. Para dar suporte a essa dissertação foram estudadas técnicas de captura de vídeos a partir de N câmeras, codificação de fluxos de vídeo tridimensionais e o encapsulamento e transmissão dos fluxos de vídeo dentro do sistema Brasileiro de TV Digital. Foram codificados vídeos side by side, formato já amplamente utilizado, inclusive no Brasil pela emissora aberta Rede TV! e o formato com suporte a múltiplas vistas. Os vídeos codificados foram transmitidos e analisados mensurando e comparando a qualidade de cada vídeo produzido. / On December 2, 2007 Brazil took its first step towards the implementation of the Brazilian Digital TV System (SBTVD). The adoption of the H.264/MPEG-4 AVC standard provides advanced audio and video coding technologies that enable the broadcasting of high-definition audio and video, as well as the coding of three-dimensional video for transmission. This master's dissertation studies the coding of three-dimensional content with the H.264/MPEG-4 AVC standard for the insertion of the coded signals into the SBTVD. The work presents a study of the SBTVD, its standards, the transport layer and the multiplexing of digital signals. To support the dissertation, techniques were studied for capturing video from N cameras, coding three-dimensional video streams, and encapsulating and transmitting the video streams within the Brazilian Digital TV system. Videos were coded in the side-by-side format, which is already widely used, including in Brazil by the free-to-air broadcaster Rede TV!, and in a format that supports multiple views. The coded videos were transmitted and analyzed by measuring and comparing the quality of each video produced.
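To make the side-by-side format mentioned above concrete, the following sketch packs two full-resolution views into a single frame that a standard 2D encoder can then compress; the simple column decimation (with no anti-alias filtering) and the frame size are illustrative assumptions only.

    import numpy as np

    def side_by_side(left, right):
        # Pack two full-resolution views into one frame of the same size.
        # Each view is horizontally subsampled to half width (plain column
        # decimation here; a real encoder would low-pass filter first) and
        # the halves are concatenated, left view on the left.
        assert left.shape == right.shape
        return np.concatenate([left[:, ::2], right[:, ::2]], axis=1)

    # toy 1080p luma planes standing in for the two camera views
    left = np.random.randint(0, 256, (1080, 1920), dtype=np.uint8)
    right = np.random.randint(0, 256, (1080, 1920), dtype=np.uint8)
    frame = side_by_side(left, right)
    print(frame.shape)  # (1080, 1920): ready to feed to an ordinary 2D H.264 encoder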
4

Modeling of Video Quality for Automatic Video Analysis and Its Applications in Wireless Camera Networks

Kong, Lingchao 01 October 2019 (has links)
No description available.
5

Kódování 4K videa v reálném čase s technologií NVENC / 4K real-time video encoding using NVENC technology

Buchta, Martin January 2020 (has links)
This diploma thesis focuses on real-time 4K video encoding using NVENC technology. The first chapter describes the most widely used video codecs, H.264 and HEVC. It also explains the principles of graphics cards and their programmable units. An analysis of the open-source Video Codec SDK solution is also part of the thesis. The main focus of the thesis is the implementation of an application that can encode 4K video from multiple cameras in real time. Performance and quality tests were performed on the application, and their results are analyzed and discussed.
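For readers who want to try NVENC without the Video Codec SDK itself, this hedged sketch drives NVIDIA's hardware encoder through ffmpeg's h264_nvenc encoder from Python, one process per camera; the input names, preset and bitrate are placeholders, and availability depends on the local ffmpeg build and GPU.

    import subprocess

    # One hardware-encoding process per camera stream; NVENC offloads the work
    # to the GPU, so several 4K encodes can run concurrently.
    inputs = ["cam0.mp4", "cam1.mp4", "cam2.mp4"]   # hypothetical capture files/streams
    procs = []
    for i, src in enumerate(inputs):
        cmd = [
            "ffmpeg", "-y", "-i", src,
            "-c:v", "h264_nvenc",     # NVIDIA hardware H.264 encoder
            "-preset", "fast",        # placeholder preset; names depend on the ffmpeg build
            "-b:v", "20M",            # illustrative 4K bitrate
            f"out_{i}.mp4",
        ]
        procs.append(subprocess.Popen(cmd))

    for p in procs:
        p.wait()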
6

Performance Improvement and Energy Saving Solutions In Phase Unwrapping and Video Communication Applications

Barabadi, Bardia 20 August 2021 (has links)
In the form of images and videos, visual content has attracted considerable interest and attention since the early days of the computer era. Because of the high density of information in such content, however, it has always been challenging to generate, process and broadcast videos and images. These challenges grew along with the demand for higher-quality content and attracted the research community's attention. Even though much work has been done by researchers and engineers in academic and industrial environments, the demand for high-quality content introduces new constraints on quality, performance (speed) and energy consumption. This thesis focuses on two image and video processing applications and introduces new approaches and tweaks to improve performance and save resources while keeping the functionality intact. In the first part, we target Interferometric Synthetic Aperture Radar (InSAR), an imaging technique used by satellites to capture the earth's surface. Many algorithms have been developed to extract useful information, such as height and displacement, from such images. However, the sheer size of these images, along with the complexity of most of these algorithms, leads to very long processing times and heavy resource utilization. In this work, we take one of the dominant algorithms used in almost every InSAR application, phase unwrapping, and introduce an approach that gains speedups of up to 6.5 times. We evaluated our method on InSAR images taken by the Radarsat-2 sensor and showed its impact on a real-world application. In the second part of this thesis, we look at a prevalent application, video streaming. Video streaming now dominates internet traffic, so even a slight improvement in energy consumption or resource utilization makes a sizable difference. Although streamers use various encoding techniques, the clients' quality of experience prevents them from pushing these techniques too far. On the other hand, there has been growing interest in another line of research that develops techniques aiming to restore the quality of videos that have been subjected to compression. Although these techniques are used by many users on the receiver side, streamers often ignore their capabilities. In our work, we introduce an approach that makes the streamer aware of the capabilities of the receiver and uses that awareness to reduce the cost of transmission without compromising the end user's quality of experience. We demonstrated the technique and proved our concept by applying it to the HEVC encoding standard and the JCT-VC dataset. / Graduate
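To illustrate the phase-unwrapping step targeted in the first part, the sketch below shows the one-dimensional version of the problem with numpy; real InSAR unwrapping is two-dimensional and far more involved, so this is only meant to show what "wrapped" and "unwrapped" phase mean.

    import numpy as np

    # A smooth "true" phase ramp, as might arise from terrain height in an interferogram
    true_phase = np.linspace(0, 12 * np.pi, 500)

    # The sensor only observes the phase wrapped into (-pi, pi]
    wrapped = np.angle(np.exp(1j * true_phase))

    # 1-D unwrapping: add/subtract 2*pi wherever consecutive samples jump by more than pi
    unwrapped = np.unwrap(wrapped)

    print(np.allclose(unwrapped, true_phase))  # True: the ramp is recovered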
7

Évaluation de la qualité et transmission en temps-réel de vidéos médicales compressées : application à la télé-chirurgie robotisée / Compressed video quality assessment and transmission : application to tele-surgery

Nouri, Nedia 09 September 2011 (has links)
L'évolution des techniques chirurgicales, par l'utilisation de robots, permet des interventions mini-invasives avec une très grande précision et ouvre des perspectives d'interventions chirurgicales à distance, comme l'a démontré la célèbre expérimentation « Opération Lindbergh » en 2001. La contrepartie de cette évolution réside dans des volumes de données considérables qui nécessitent des ressources importantes pour leur transmission. La compression avec pertes de ces données devient donc inévitable. Celle-ci constitue un défi majeur dans le contexte médical, celui de l'impact des pertes sur la qualité des données et leur exploitation. Mes travaux de thèse concernent l'étude de techniques permettant l'évaluation de la qualité des vidéos dans un contexte de robotique chirurgicale. Deux approches méthodologiques sont possibles : l'une à caractère subjectif et l'autre à caractère objectif. Nous montrons qu'il existe un seuil de tolérance à la compression avec pertes de type MPEG2 et H.264 pour les vidéos chirurgicales. Les résultats obtenus suite aux essais subjectifs de la qualité ont permis également de mettre en exergue une corrélation entre les mesures subjectives effectuées et une mesure objective utilisant l'information structurelle de l'image. Ceci permet de prédire la qualité telle qu'elle est perçue par les observateurs humains. Enfin, la détermination d'un seuil de tolérance à la compression avec pertes a permis la mise en place d'une plateforme de transmission en temps réel sur un réseau IP de vidéos chirurgicales compressées avec le standard H.264 entre le CHU de Nancy et l'école de chirurgie / The digital revolution in the medical environment is speeding up the development of remote robot-assisted surgery, and consequently the transmission of medical digital data such as pictures or videos becomes possible. However, medical video transmission requires significant bandwidth and high compression ratios, only reachable with lossy compression. Research effort has therefore been focused on video compression algorithms such as MPEG2 and H.264. In this work, we investigate whether compression thresholds and the associated bitrates are compatible with the quality acceptance level in the field of medical video. To evaluate compressed medical video quality, we performed a subjective assessment test with a panel of human observers using a DSCQS (Double-Stimulus Continuous Quality Scale) protocol derived from the ITU-R BT.500-11 recommendations. Promising results suggest that 3 Mbit/s could be sufficient (a compression level around 90:1 compared to the original 270 Mbit/s) as far as perceived quality is concerned. In addition, determining a tolerance threshold for lossy compression allowed the implementation of a platform for real-time transmission over an IP network of surgical videos compressed with the H.264 standard, between the University Hospital of Nancy and the school of surgery.
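The objective measure "using the structural information of the image" mentioned above is typically SSIM; the sketch below compares an original surgical frame with its compressed version using scikit-image. The file names are placeholders, and this makes no claim to be the exact metric or toolchain used in the thesis.

    import numpy as np
    from skimage.metrics import structural_similarity as ssim
    import imageio.v3 as iio   # any image loader would do here

    # Placeholder file names: one pristine frame and its lossy-compressed version
    reference = iio.imread("frame_original.png")
    distorted = iio.imread("frame_h264.png")

    if reference.ndim == 3:    # collapse RGB to a grayscale plane if needed
        reference = reference.mean(axis=2).astype(np.uint8)
        distorted = distorted.mean(axis=2).astype(np.uint8)

    # SSIM close to 1.0 means the structural content is well preserved
    score = ssim(reference, distorted, data_range=255)
    print(f"SSIM = {score:.3f}")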
8

Nové algoritmy pro kódování videosekvencí / New video coding algorithms

Zach, Ondřej January 2020 (has links)
The presented doctoral thesis deals with modern video coding algorithms, in particular the High Efficiency Video Coding algorithm, and its use in an online streaming environment. Since end viewers increasingly want to watch video content anytime and anywhere, the way the content is delivered to the viewer is becoming as important as the encoding itself. In this work we focus on the use of HEVC in services based on HTTP adaptive streaming, especially services using DASH. We also address other aspects that influence the Quality of Experience as perceived by the end user, such as the presence of advertising or other system parameters. To collect users' opinions, we often use crowdsourcing in our experiments; part of this work is therefore devoted to crowdsourcing itself and to how it can be used for video quality assessment.
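As a reminder of how HTTP adaptive streaming (DASH) behaves on the client side, the sketch below picks a representation from a bitrate ladder given a measured throughput; the ladder values and safety margin are invented for illustration and are unrelated to the experiments in the thesis.

    # Illustrative DASH-style rate adaptation: pick the highest representation
    # whose bitrate fits under the measured throughput with a safety margin.
    LADDER_KBPS = [1500, 3000, 6000, 12000]   # example HEVC bitrate ladder

    def choose_representation(throughput_kbps: float, margin: float = 0.8) -> int:
        budget = throughput_kbps * margin      # keep headroom so the buffer does not drain
        candidates = [b for b in LADDER_KBPS if b <= budget]
        return candidates[-1] if candidates else LADDER_KBPS[0]

    for measured in (2200, 5400, 20000, 900):
        print(measured, "->", choose_representation(measured), "kbps")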
9

Recording Rendering API Commands for Instant Replay : A Runtime Overhead Comparison to Real-Time Video Encoding

Holmberg, Marcus January 2020 (has links)
Background. Instant replay allows an application to highlight events without exporting a video of the whole session. Hardware-accelerated video encoding allows replay footage to be encoded in real time with little to no impact on the runtime performance of the actual simulation in the application. Hardware-accelerated video encoding, however, is not supported on all devices, such as low-tier mobile devices, nor on all platforms, such as web browsers. When hardware acceleration is not supported, the replay has to be encoded using a software-implemented encoder instead. Objectives. To evaluate whether recording rendering API commands is a suitable replacement for real-time encoding when hardware-accelerated video encoding is not supported. Method. An experimental research method is used to make quantitative measurements of the proposed approach, Reincore, and a real-time encoder. The measured metrics are frame time and memory consumption. The Godot game engine is modified with modules for real-time video encoding (H.264, H.265 and VP9 codecs) and for recording and replaying rendering API commands. The engine is also used to create test scenes to evaluate whether object count, image motion, object loading/unloading, replay video resolution and replay video duration have any impact on the runtime overhead in frame time and memory consumption. Results. The implemented rendering API command replayer, Reincore, appears to have minimal to no impact on frame time in all scenarios, except for a spike in frame time when the replayer initializes synchronization. Reincore is, however, overall inferior to real-time video encoding in terms of runtime memory overhead. Conclusions. Overall, real-time encoding using H.264 or H.265 shows a frame time similar to recording rendering commands, but command recording implies a more significant memory overhead than real-time encoding. The frame time of real-time encoding with the VP9 codec is inferior to recording rendering API commands. / Bakgrund. Återspelning tillåter applikationer att visa upp händelser utan att exportera en video för hela sessionen. Hårdvaruaccelererad videokodning tillåter video av återspelning att kodas i realtid med minimal påverkan på applikationens prestanda för simulering. Hårdvaruaccelererad videokodning stöds dock inte alltid på alla enheter eller plattformar, så som lågt presterande mobila enheter eller webbläsare. När hårdvaruacceleration inte stöds, måste videokodning ske med en mjukvarubaserad implementering istället. Syfte. Att utvärdera om återspelning genom inspelade renderingskommandon som fördröjer arbetet för videokodning är ett lämpligt alternativ till videokodning i realtid, när hårdvaruacceleration inte stöds. Metod. En experimentell forskningsmetod används för att samla kvantitativ mätdata från den föreslagna tillvägagången, Reincore, och en realtidsvideokodare. Mätdatan består av bildtid och minnesanvändning. Genom att modifiera spelmotorn Godot skapas moduler för realtids-videokodning samt inspelning av renderingskommandon. Spelmotorn används också för att skapa testscener för att utvärdera om antal objekt, bildrörelse, skapande av objekt under körning, upplösning eller videolängd har någon inverkan på bildtid eller minnesanvändning. Resultat. Den implementerade renderingskommando-inspelaren, Reincore, visar minimal påverkan på bildtid, med undantag för en temporär ökning när återspelaren initierar synkronisering. Reincore visar sig vara underlägsen till realtids-videokodning när det gäller minnesanvändning. Slutsatser. Realtids-videokodning med H.264 eller H.265 som video-codec visar övergripande bättre resultat för återspelning än renderingskommandoinspelning, när det gäller både bildtid samt minnesanvändning. Bildtiden för VP9 video-codec för realtids-videokodning visar däremot sämre resultat än renderingskommandoinspelning.
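The idea of recording rendering API commands instead of encoding video in real time can be sketched as a bounded per-frame command log that is replayed on demand; the command names, frame budget and stand-in renderer below are invented for illustration and do not reflect Reincore's actual design.

    from collections import deque

    class CommandRecorder:
        """Keep the rendering commands of the last max_frames frames for instant replay."""

        def __init__(self, max_frames: int = 600):    # e.g. ~10 s at 60 fps
            self.frames = deque(maxlen=max_frames)    # old frames are dropped automatically
            self.current = []

        def record(self, command: str, *args) -> None:
            self.current.append((command, args))      # cheap append; no pixels are encoded here

        def end_frame(self) -> None:
            self.frames.append(self.current)
            self.current = []

        def replay(self, renderer) -> None:
            # Re-issue the stored commands; video encoding can happen afterwards, off the hot path.
            for frame in self.frames:
                for command, args in frame:
                    getattr(renderer, command)(*args)

    # usage sketch with a stand-in renderer
    class PrintRenderer:
        def set_camera(self, position): print("camera", position)
        def draw_mesh(self, mesh_id, transform): print("draw", mesh_id, transform)

    rec = CommandRecorder(max_frames=2)
    rec.record("set_camera", (0, 1, -5)); rec.record("draw_mesh", 42, "identity"); rec.end_frame()
    rec.replay(PrintRenderer())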
10

Seminar Hochleistungsrechnen und Benchmarking: x264-Encoder als Benchmark / Seminar on high-performance computing and benchmarking: the x264 encoder as a benchmark

Naumann, Stefan January 2014 (has links)
Modern video encoding requires a large number of computations. Among other things, the picture is partitioned into macroblocks, motion vectors are computed and motion predictions are made in order to save storage space for the compressed file. The x264 encoder tries to do this in several different ways, which makes the actual encoding process slow; on older or slower PCs it takes considerably longer than other methods. In addition, the x264 encoder uses standards such as SSE, AVX or OpenCL to save time by processing several data elements at once. x264 is therefore also well suited for evaluating such standards and for examining the speedup gained by using vector operations or GPU acceleration.
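In the spirit of using x264 as a benchmark, a minimal timing harness might compare an encode with and without the encoder's assembly/SIMD paths; the input file name is a placeholder and the --no-asm switch is assumed from x264's command-line help, so verify it against your build before relying on it.

    import subprocess, time

    INPUT = "test_sequence.y4m"   # placeholder raw test sequence

    def time_encode(extra_args):
        start = time.perf_counter()
        subprocess.run(["x264", *extra_args, "-o", "/dev/null", INPUT],
                       check=True, capture_output=True)
        return time.perf_counter() - start

    t_simd = time_encode([])              # default: SSE/AVX paths enabled
    t_plain = time_encode(["--no-asm"])   # assumed switch disabling assembly/SIMD optimizations
    print(f"with SIMD: {t_simd:.1f} s, without: {t_plain:.1f} s, speedup x{t_plain / t_simd:.2f}")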
