21

Very Low Bitrate Video Communication : A Principal Component Analysis Approach

Söderström, Ulrik January 2008 (has links)
A large amount of the information in conversations comes from non-verbal cues such as facial expressions and body gestures. These cues are lost when we don't communicate face-to-face. But face-to-face communication doesn't have to happen in person. With video communication we can at least deliver information about the facial mimic and some gestures. This thesis is about video communication over distance: communication that can be made available over networks with low capacity, since the bitrate needed for video communication is low. A visual image needs to have high quality and resolution to be semantically meaningful for communication. Delivering such video over networks requires that the video is compressed. The standard way to compress video images, used by H.264 and MPEG-4, is to divide the image into blocks and represent each block with mathematical waveforms, usually frequency features. These mathematical waveforms are quite good at representing any kind of video since they do not resemble anything; they are just frequency features. But because they are completely arbitrary they cannot compress video enough to enable use over networks with limited capacity, such as GSM and GPRS. Another issue is that such codecs have high complexity because of the redundancy removal through positional shifts of the blocks. High complexity and bitrate mean that a device has to consume a large amount of energy for encoding, decoding and transmission of such video, and energy is a very important factor for battery-driven devices. These drawbacks of standard video coding mean that it isn't possible to deliver video anywhere and anytime when it is compressed with such codecs. To resolve these issues we have developed a totally new type of video coding. Instead of using mathematical waveforms for representation we use faces to represent faces. This makes the compression much more efficient than if waveforms are used, even though the faces are person-dependent. We build a model of the changes in the face, the facial mimic, and use this model to encode the images. The model consists of representative facial images, and we use a powerful mathematical tool to extract it: principal component analysis (PCA). This coding has very low complexity since encoding and decoding consist only of multiplication operations. The faces are treated as single encoding entities and all operations are performed on full images; no block processing is needed. These features mean that PCA coding can deliver high-quality video at very low bitrates with low complexity for encoding and decoding. With the use of asymmetrical PCA (aPCA) it is possible to use only semantically important areas for encoding while decoding full frames or a different part of the frames. We show that a codec based on PCA can compress facial video to a bitrate below 5 kbps and still provide high quality. This bitrate can be delivered on a GSM network. We also show the possibility of extending PCA coding to the encoding of high definition video.
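To illustrate the idea, the following Python sketch (not from the thesis; the library choice, component count and random stand-in frames are assumptions) builds an Eigenface-style PCA model from training face frames and encodes/decodes a new frame as a small vector of projection coefficients, so transmission per frame reduces to a handful of numbers:

```python
import numpy as np

def build_pca_model(training_frames, n_components=10):
    """Build an Eigenface-style model from grayscale face frames (H x W each)."""
    X = np.stack([f.ravel().astype(np.float64) for f in training_frames])  # (N, H*W)
    mean = X.mean(axis=0)
    # Principal components of the centred training set via SVD.
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    basis = vt[:n_components]                 # (n_components, H*W)
    return mean, basis

def encode(frame, mean, basis):
    """Encoding is a single matrix-vector multiplication: project onto the basis."""
    return basis @ (frame.ravel().astype(np.float64) - mean)   # n_components coefficients

def decode(coeffs, mean, basis, shape):
    """Decoding is likewise a linear combination of the basis images."""
    return (mean + basis.T @ coeffs).reshape(shape)

# Example with random stand-ins for real face frames.
rng = np.random.default_rng(0)
train = [rng.random((64, 64)) for _ in range(50)]
mean, basis = build_pca_model(train)
frame = rng.random((64, 64))
coeffs = encode(frame, mean, basis)           # e.g. 10 floats per frame -> very low bitrate
reconstruction = decode(coeffs, mean, basis, frame.shape)
```

With, say, ten coefficients per frame quantised to a few bits each, the per-frame payload is tiny, which is the intuition behind the sub-5 kbps figure quoted above; the thesis' actual model construction and the aPCA variant are more elaborate than this sketch.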
22

HTTP Live Streaming : En studie av strömmande videoprotokoll

Swärd, Rikard January 2013 (has links)
The use of streaming video is growing rapidly at the moment. A popular concept is adaptive bitrate streaming, in which a video is encoded at several different bitrates. These videos are then split into small files and made available via the Internet. When you want to play such a video, you first download a file that describes where the files are located and in which bitrates they are encoded. The media player then begins downloading the files and playing them. If the physical conditions, such as the download speed or CPU load, change during playback, the media player can easily change the quality of the video by starting to download files of a different bitrate, thus avoiding playback stalls. This report takes a closer look at four techniques for adaptive bitrate streaming: HTTP Live Streaming, Dynamic Adaptive Streaming over HTTP, HTTP Dynamic Streaming and Smooth Streaming, with respect to which protocols they use. The report also examines how Apple and FFmpeg have implemented HTTP Live Streaming with respect to how much data needs to be read from a file before the video can begin playing. The report shows that there are no large differences between the four techniques. However, Dynamic Adaptive Streaming over HTTP stands out a bit by being completely independent of the audio and video protocols used. The report also shows a shortcoming in the specification of HTTP Live Streaming, as it is not specified that the first complete frame of the video stream should be at the beginning of the file. In Apple's implementation, up to 30 kB of data needs to be read before playback can start, while in FFmpeg's implementation it is about 600 bytes.
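To make the adaptive-bitrate mechanism concrete, here is a small, hedged Python sketch (not taken from the report; the example playlist contents and the selection rule are assumptions): it parses the variant entries of an HLS master playlist and picks the highest bitrate that fits within the currently measured throughput.

```python
import re

MASTER_PLAYLIST = """\
#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=640x360
low/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2500000,RESOLUTION=1280x720
mid/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=5000000,RESOLUTION=1920x1080
high/index.m3u8
"""

def parse_variants(playlist: str):
    """Return (bandwidth_bps, uri) pairs from #EXT-X-STREAM-INF entries."""
    variants = []
    lines = playlist.splitlines()
    for i, line in enumerate(lines):
        match = re.match(r"#EXT-X-STREAM-INF:.*BANDWIDTH=(\d+)", line)
        if match and i + 1 < len(lines):
            variants.append((int(match.group(1)), lines[i + 1].strip()))
    return variants

def choose_variant(variants, measured_bps, safety=0.8):
    """Pick the highest-bandwidth variant that fits within a safety margin of the throughput."""
    fitting = [v for v in variants if v[0] <= measured_bps * safety]
    return max(fitting) if fitting else min(variants)

variants = parse_variants(MASTER_PLAYLIST)
print(choose_variant(variants, measured_bps=3_500_000))  # -> (2500000, 'mid/index.m3u8')
```

The safety margin and the exact switching policy vary between players; the report's comparison concerns the container formats and protocols around this mechanism, not one specific selection rule.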
23

AN EVALUATION OF SDN AND NFV SUPPORT FOR PARALLEL, ALTERNATIVE PROTOCOL STACK OPERATIONS IN FUTURE INTERNETS

Suresh, Bhushan 09 July 2018 (has links)
Virtualization on top of high-performance servers has enabled the virtualization of network functions such as caching, deep packet inspection, etc. Such Network Function Virtualization (NFV) is used to dynamically adapt to changes in network traffic and application popularity. We demonstrate how the combination of Software Defined Networking (SDN) and NFV can support the parallel operation of different Internet architectures on top of the same physical hardware. We introduce our architecture for this approach in an actual test setup, using CloudLab resources. We start our evaluation with a small setup where we evaluate the feasibility of the SDN and NFV architecture, and we incrementally increase the complexity of the setup to run a live video streaming application. We use two vastly different protocol stacks, namely TCP/IP and NDN (Named Data Networking), to demonstrate the capability of our approach. The evaluation of our approach shows that it introduces a new level of flexibility when it comes to operating different Internet architectures on top of the same physical network, and with this flexibility comes the ability to switch between the two protocol stacks depending on the application.
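As a rough illustration of what running parallel protocol stacks on shared hardware can look like at the forwarding layer, the sketch below is a hedged assumption, not the thesis' actual SDN rule set: it classifies Ethernet frames by EtherType and hands them to different virtualized stacks (the NDN EtherType value is the one commonly used by NFD deployments, and the VNF names are hypothetical).

```python
ETH_P_IP = 0x0800    # IPv4
ETH_P_NDN = 0x8624   # EtherType commonly used for NDN over Ethernet (assumption for this sketch)

# A minimal SDN-style match-action table: match on EtherType, action = deliver to a VNF.
flow_table = {
    ETH_P_IP:  "vnf-tcpip-stack",    # hypothetical virtualized TCP/IP forwarding function
    ETH_P_NDN: "vnf-ndn-forwarder",  # hypothetical virtualized NDN forwarder (e.g. an NFD instance)
}

def dispatch(frame: bytes) -> str:
    """Read the EtherType from bytes 12-13 of an Ethernet frame and pick the target stack."""
    ethertype = int.from_bytes(frame[12:14], "big")
    return flow_table.get(ethertype, "drop")

# Example: a frame whose EtherType marks it as NDN traffic.
fake_frame = bytes(12) + (0x8624).to_bytes(2, "big") + b"payload"
print(dispatch(fake_frame))  # -> vnf-ndn-forwarder
```

In the actual evaluation the dispatching is done by SDN switches and the stacks run as network functions on CloudLab nodes; this table is only meant to show the match-and-steer principle.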
24

SUPPORTING DATA CENTER AND INTERNET VIDEO APPLICATIONS WITH STRINGENT PERFORMANCE NEEDS: MEASUREMENTS AND DESIGN

Ehab Mohammad Ghabashneh (18257911) 28 March 2024 (has links)
Ensuring a high quality of experience for Internet applications is challenging owing to the significant variability (e.g., of traffic patterns) inherent to both cloud data-center networks and wide area networks. This thesis focuses on optimizing application performance by both conducting measurements to characterize traffic variability and designing applications that can perform well in the face of variability. On the data center side, a key aspect that impacts performance is traffic burstiness at fine granular time scales. Yet, little is known about traffic burstiness and how it impacts application loss. On the wide area side, we focus on video applications as a major traffic driver. While optimizing traditional video traffic remains a challenge, new forms of video such as 360° introduce additional challenges such as responsiveness, on top of the bandwidth uncertainty challenge. In this thesis, we make three contributions.

First, for data center networks, we present Millisampler, a lightweight network traffic characterization tool for continual monitoring which operates at fine, configurable time scales and is deployed across all servers in a large real-world data center network. Millisampler takes a host-centric perspective to characterize traffic across all servers within a data center rack at the same time. Next, we present a data-center-scale joint analysis of burstiness, contention, and loss. Our results show (i) bursts are likely to encounter contention; (ii) contention varies significantly over short timescales; and (iii) higher contention need not lead to more loss, and the interplay with workload and burst properties matters.

Second, we consider challenges with traditional video in wide area networks. We take a step towards understanding the interplay between Content Delivery Networks (CDNs) and video performance through end-to-end measurements. Our results show that (i) video traffic in a session can be sourced from multiple CDN layers, and (ii) throughput can vary significantly based on the traffic source. Next we evaluate the potential benefits of exposing CDN information to the client Adaptive Bit Rate (ABR) algorithm. Emulation experiments show the approach has the potential to reduce prediction inaccuracies and enhance video quality of experience (QoE).

Third, for 360° videos, we argue for a new streaming model which is explicitly designed for continuous, rather than stalling, playback to preserve interactivity. Next, we propose Dragonfly, a new 360° system that leverages the additional degrees of freedom provided by this design point. Dragonfly proactively skips tiles (i.e., spatial segments of the video) using a model that defines an overall utility function capturing factors relevant to user experience. We conduct a user study which shows that the majority of interactivity feedback indicates Dragonfly is highly reactive, while the majority of feedback for state-of-the-art systems indicates they are slow to react. Further, extensive emulations show Dragonfly improves image quality significantly without stalling playback.
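The overall utility function mentioned for Dragonfly is not spelled out in the abstract. The following Python sketch is one plausible, heavily hedged form of such a tile scheduler (the weighting, the names and the deadline rule are all assumptions, not the system's actual model): it scores tiles and proactively skips those that cannot be fetched before their playback deadline, so playback never stalls.

```python
from dataclasses import dataclass

@dataclass
class Tile:
    tile_id: int
    view_prob: float      # predicted probability the tile falls in the user's viewport
    quality_gain: float   # image-quality improvement if the tile is fetched
    size_bits: int        # encoded size of the tile
    deadline_s: float     # time until the tile must be on screen

def utility(tile: Tile) -> float:
    # Hypothetical utility: favour tiles the user is likely to look at and that improve
    # quality the most, discounted by how expensive they are to download.
    return tile.view_prob * tile.quality_gain / tile.size_bits

def schedule(tiles, throughput_bps):
    """Fetch tiles in decreasing utility order; skip any tile that would miss its deadline
    (continuous-playback design point: skipping is preferred over stalling)."""
    fetched, t = [], 0.0
    for tile in sorted(tiles, key=utility, reverse=True):
        download_time = tile.size_bits / throughput_bps
        if t + download_time <= tile.deadline_s:
            fetched.append(tile.tile_id)
            t += download_time
        # else: skip the tile rather than stall playback
    return fetched

tiles = [Tile(0, 0.9, 20.0, 400_000, 0.5), Tile(1, 0.2, 15.0, 350_000, 0.5), Tile(2, 0.7, 10.0, 300_000, 0.5)]
print(schedule(tiles, throughput_bps=2_000_000))  # fetches the high-utility tiles that fit
```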
25

A Study of Factors Which Influence QoD of HTTP Video Streaming Based on Adobe Flash Technology

Sun, Bin, Uppatumwichian, Wipawat January 2013 (has links)
Recently, there has been a significant rise in Hyper-Text Transfer Protocol (HTTP) video streaming usage worldwide. However, knowledge of the performance of HTTP video streaming is still limited, especially regarding the factors which affect video quality. The reason is that HTTP video streaming has different characteristics from other video streaming systems. In this thesis, we show how the delivered quality of a Flash video playback is affected by different factors from diverse layers of the video delivery system, including the congestion control algorithm, delay variation, playout buffer length, video bitrate and so on. We introduce Quality of Delivery Degradation (QoDD) and use it to measure how much the Quality of Delivery (QoD) is degraded. The study is carried out in a dedicated, controlled environment where we can alter the influential factors and then measure what happens. After that, we use statistical methods to analyze the data and find the relationships between the influential factors and the quality of video delivery, which are expressed as mathematical models. The results show that the status and choice of factors have a significant impact on the QoD. By proper control of the factors, the quality of delivery can be improved. The improvements are approximately 24% by TCP memory size, 63% by congestion control algorithm, 30% by delay variation, 97% by delay when considering delay variation, 5% by loss and 92% by video bitrate.
26

Bitrate Reduction Techniques for Low-Complexity Surveillance Video Coding

Gorur, Pushkar January 2016 (has links) (PDF)
High resolution surveillance video cameras are invaluable resources for effective crime prevention and forensic investigations. However, increasing communication bandwidth requirements of high definition surveillance videos are severely limiting the number of cameras that can be deployed. Higher bitrate also increases operating expenses due to higher data communication and storage costs. Hence, it is essential to develop low complexity algorithms which reduce data rate of the compressed video stream without affecting the image fidelity. In this thesis, a computer vision aided H.264 surveillance video encoder and four associated algorithms are proposed to reduce the bitrate. The proposed techniques are (I) Speeded up foreground segmentation, (II) Skip decision, (III) Reference frame selection and (IV) Face Region-of-Interest (ROI) coding. In the first part of the thesis, a modification to the adaptive Gaussian Mixture Model (GMM) based foreground segmentation algorithm is proposed to reduce computational complexity. This is achieved by replacing expensive floating point computations with low cost integer operations. To maintain accuracy, we compute periodic floating point updates for the GMM weight parameter using the value of an integer counter. Experiments show speedups in the range of 1.33 - 1.44 on standard video datasets where a large fraction of pixels are multimodal. In the second part, we propose a skip decision technique that uses a spatial sampler to sample pixels. The sampled pixels are segmented using the speeded up GMM algorithm. The storage pattern of the GMM parameters in memory is also modified to improve cache performance. Skip selection is performed using the segmentation results of the sampled pixels. In the third part, a reference frame selection algorithm is proposed to maximize the number of background Macroblocks (MB’s) (i.e. MB’s that contain background image content) in the Decoded Picture Buffer. This reduces the cost of coding uncovered background regions. Distortion over foreground pixels is measured to quantify the performance of skip decision and reference frame selection techniques. Experimental results show bit rate savings of up to 94.5% over methods proposed in literature on video surveillance data sets. The proposed techniques also provide up to 74.5% reduction in compression complexity without increasing the distortion over the foreground regions in the video sequence. In the final part of the thesis, face and shadow region detection is combined with the skip decision algorithm to perform ROI coding for pedestrian surveillance videos. Since person identification requires high quality face images, MB’s containing face image content are encoded with a low Quantization Parameter setting (i.e. high quality). Other regions of the body in the image are considered as RORI (Regions of reduced interest) and are encoded at low quality. The shadow regions are marked as Skip. Techniques that use only facial features to detect faces (e.g. Viola Jones face detector) are not robust in real world scenarios. Hence, we propose to initially detect pedestrians using deformable part models. The face region is determined using the deformed part locations. Detected pedestrians are tracked using an optical flow based tracker combined with a Kalman filter. The tracker improves the accuracy and also avoids the need to run the object detector on already detected pedestrians. Shadow and skin detector scores are computed over super pixels. 
Bilattice based logic inference is used to combine multiple likelihood scores and classify the super pixels as ROI, RORI or RONI. The coding mode and QP values of the MB’s are determined using the super pixel labels. The proposed techniques provide a further reduction in bitrate of up to 50.2%.
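The "integer operations with periodic floating-point updates" idea behind the speeded-up segmentation can be sketched roughly as follows. This is a hedged Python illustration of the general approach, not the thesis' exact update rules: the update period and matching threshold are assumed values, and mean/variance adaptation is omitted for brevity.

```python
import numpy as np

K = 3                 # Gaussian components per pixel
UPDATE_PERIOD = 16    # frames between floating-point weight refreshes (assumed value)

class PixelModel:
    """Per-pixel mixture model keeping cheap integer match counters between weight refreshes."""
    def __init__(self):
        self.means = np.array([64.0, 128.0, 192.0])     # toy initial component means
        self.variances = np.full(K, 225.0)
        self.weights = np.full(K, 1.0 / K)               # floating-point weights (refreshed rarely)
        self.counts = np.zeros(K, dtype=np.int32)        # integer counters (updated every frame)
        self.frames = 0

    def update(self, value: int) -> bool:
        """Return True if the pixel matches a background component, using integer ops per frame."""
        dist2 = (self.means - value) ** 2
        matched = int(np.argmin(dist2))
        is_background = dist2[matched] < 6.25 * self.variances[matched]   # ~2.5-sigma test
        self.counts[matched] += 1        # integer increment instead of a per-frame float weight update
        self.frames += 1
        if self.frames % UPDATE_PERIOD == 0:
            # Periodic floating-point refresh of the weights from the integer counters;
            # in the full algorithm these weights rank components as background/foreground.
            self.weights = self.counts / self.counts.sum()
        return bool(is_background)

model = PixelModel()
print([model.update(v) for v in [130, 131, 129, 250]])   # the last value is likely foreground
```

The point of the restructuring is that the per-frame, per-pixel work stays in integer arithmetic, which is what yields the reported 1.33-1.44x speedups on multimodal scenes.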
27

Wake-up Receiver for Ultra-low Power Wireless Sensor Networks

Bdiri, Sadok 05 July 2021 (has links)
In ultra-low power Wireless Sensor Networks (WSNs), sensor nodes need to interact, depending on the application, even at a rapid pace while preserving battery life. Wireless communication brings quite a burden, as the radio transceiver requires a relatively large amount of power during both the transmission and reception phases. In WSNs with on-demand communication, the sensor nodes are required to maintain responsiveness and to act as soon as they receive a request, reducing the overall latency of the network. This aspect is more challenging in asynchronous WSNs, as the receiver possesses no information about the packet arrival time. In purely on-demand communication, duty-cycling shows little to almost no improvement. The receiving node, in such a scheme, is expected to last for years while also being accessible to other peers. Here arises the utility of an external ultra-low power radio receiver known as a Wake-up Receiver (WuRx). Its essential task is to remain the only part of the system running while the rest of the system enters the lowest power mode (i.e., sleep state). Once a request signal is received, it notifies the host processor and other peripherals of an incoming communication. With the sensor node in sleep state (WuRx active only), substantial power savings can be achieved. If the WuRx is able to interact rapidly, the added latency remains negligible. As crucial performance figures, the sensitivity and bit rate are immediately affected by the extremely low power budget, at different magnitudes depending mainly on the incorporated architecture. This thesis focuses on the design of a feature-balanced WuRx. The passive radio frequency (PRF) architecture relies on passive detection, consuming zero power to extract On-Off Keying (OOK) modulated envelopes. The achievable sensitivity, however, is reduced compared to more complex architectures. A WuRx based on the PRF architecture can effectively enable short-range applications. The sensitivity can vary with respect to several parameters, including the total generated noise, circuit technology and topology. Two variants of the PRF WuRx are introduced, with the baseband amplifier being the main change. The first revision employs a high-performance amplifier with reduced average energy consumption, thanks to a novel power gating control. The second variant focuses on employing an ultra-low power baseband amplifier, as it is expected to be in a continuously active state. This thesis also provides the necessary analysis of the passive front-end with the intention of enhancing the overall WuRx sensitivity. Proofs of concept are embedded in sensor node boards and feature -61 dBm and -64 dBm of sensitivity for the first and the second variant, respectively, at a packet error rate (PER) of 1%, whilst demanding a similar power of 7.2 µW during packet listening. During packet decoding, the first variant demands 150 µW of power, caused largely by the baseband amplifier. The achieved latency is less than 30 ms and the bit rate is 4 kbit/s with Manchester encoding. For long-range applications, a higher-sensitivity WuRx is proposed based on a Tuned-RF (TRF) architecture. By embedding a low-noise amplifier (LNA) in the receiver chain, very weak radio signals can be detected. This WuRx achieves a higher sensitivity of -90 dBm. The design of the LNA prioritized the highest gain and lowest bias current by sacrificing linearity, which has little impact on signal integrity for OOK modulated signals.
The total active power consumption of the TRF WuRx is 1.38 mW. In this work, a fast sampling approach based on a power gating protocol allows a drastic reduction in average energy consumption. By being able to sample in a matter of a few microseconds, the WuRx can detect the presence of a packet and return to sleep state right after packet decoding. Power gating drops the average power consumption to 2.8 µW at a packet detection latency of 32 ms for interval times of less than 2 s between communication requests. The proposed solutions are able to decode a minimum length of a 16-bit pattern and operate in the license-free 868 MHz ISM band. This thesis also includes the analysis and implementation of the low-power front-end building blocks that are employed by the proposed WuRx.
Contents: 1 Introduction 1.1 Motivation 1.2 Wake-up Receiver Design Requirements 1.2.1 Energy Consumption 1.2.2 Network Coverage and Robustness 1.2.3 Wake-up Packet Addressing 1.2.4 WuPt Detection Latency 1.2.5 Hosting System, Form-factor and Fabrication Technology 1.3 Thesis Organisation 2 Wireless Sensor Networks 2.1 Radio Communication 2.1.1 Electromagnetic Spectrum 2.1.2 Link Budget Analysis 2.2 Asynchronous Radio Receiver Duty-cycle Control 2.2.1 B-MAC and X-MAC Protocols 2.2.2 Energy and Latency Analysis 2.3 Power Supply Requirements 2.3.1 Low Self-discharge Battery 2.3.2 Energy Harvester 2.4 Summary 3 State-of-the-Art of Wake-up Receivers 3.1 Wake-up Receiver Architectural Analysis 3.1.1 Passive RF Detector 3.1.2 Classical Radio Architectures 3.2 Wake-up Receiver Back-end Stages 3.2.1 Baseband Amplifiers 3.2.2 Analog to Digital Conversion 3.2.3 Wake-up Packet Decoder 3.3 Power Consumption Reduction at Circuit Level 3.3.1 Power Gating 3.3.2 Interference Rejection and Filtering 3.4 Summary 4 Proposal of Novel Wake-up Receivers 4.1 Ultra-low Power On-demand Communication in Wireless Sensor Networks: Challenges and Requirements 4.2 Passive RF Wake-up Receiver 4.3 Power-gated Tuned-RF Wake-up Receiver 5 Low-power RF Front-end 5.1 Narrow-band Low-noise Amplifier (LNA) 5.1.1 Topology 5.1.2 Voltage Gain 5.1.3 Stability 5.1.4 Noise Figure 5.1.5 Linearity 5.2 Envelope Detector 5.2.1 Theory of Square-law Detection and Sensitivity Analysis 5.2.2 Single-Diode Envelope Detector 5.2.3 Voltage Multiplier Envelope Detector 5.3 Hardware Assessment 5.3.1 LNA 5.3.2 Envelope Detector 5.4 Summary 6 Passive RF Wake-up Receiver 6.1 Circuit Implementation 6.1.1 Address Decoder 6.1.2 Envelope Detector 6.1.3 Power-gated Baseband Amplifier 6.1.4 Ultra Low-power Baseband Amplifier 6.2 Experimental Results 6.2.1 Wireless Sensor Node 6.2.2 Measurements 6.3 Summary 7 Power-gated Tuned-RF Wake-up Receiver 7.1 Power-gating Protocol 7.2 Circuit Design 7.2.1 Radio Front-end 7.2.2 Data Slicer 7.2.3 Digital Baseband 7.3 Performance Evaluation 7.4 Summary 8 Conclusion 8.1 Performance Summary 8.2 Future Perspective 8.3 Applications A Two-tone Simulation Setup B Diode Models and Simulation Setup C Preamble Detection C Code Implementation Bibliography Publications
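The microwatt-range average follows from the duty-cycled operation described above. A short, hedged back-of-the-envelope calculation in Python (only the 1.38 mW active figure comes from the abstract; the sleep power, sampling window and sampling period are assumptions chosen for illustration) shows how power gating produces it:

```python
# Average power of a power-gated (duty-cycled) wake-up receiver.
P_ACTIVE_W = 1.38e-3   # TRF WuRx power while sampling the channel (from the abstract)
P_SLEEP_W = 1.0e-6     # assumed sleep-state power (not stated in the abstract)
T_SAMPLE_S = 5e-6      # assumed sampling window ("a matter of a few microseconds")
T_PERIOD_S = 5e-3      # assumed sampling period between channel checks

duty = T_SAMPLE_S / T_PERIOD_S
p_avg = P_ACTIVE_W * duty + P_SLEEP_W * (1 - duty)
print(f"duty cycle = {duty:.3%}, average power ≈ {p_avg * 1e6:.2f} µW")
# -> roughly 2.4 µW with these assumed numbers, the same order as the 2.8 µW average reported.
```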
28

Fisheye live streaming : A study of the dewarping function and the performance of the streaming / Fisköga-objektiv direktsändning : Ett studie av fisköga-förvrängning korrigering samt prestanda av direktsändning

Zhengyu, Wang, Al-Shorji, Yousuf January 2018 (has links)
Provision of live video streaming from fisheye cameras is a popular business in the IT sector. Video dewarping is one of its specialized fields and is expanding rapidly. As the requirements on video quality become higher and higher, there is an increasing need for efficient solutions that can be used to process the video and obtain the desired results. The problem is to determine the right combination of transmission bitrate and resolution for live streaming of the dewarped videos. The purpose of this thesis is to develop a prototype solution for dewarping video from a fisheye camera and re-streaming it to a client. This prototype is used for testing combinations of bitrate and resolution of the video in different scenarios. A system is devised to live stream video from a fisheye camera, dewarp the video in a server and display the video in media players. The results reveal that a combination of a bitrate of 3.5-4.5 Mbps and a resolution of 720p is best suited for transmission to avoid noticeable lag in playback. Observer comments indicate promising use of the dewarped videos in Virtual Reality (VR) applications.
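The abstract does not specify the dewarping math used in the prototype. As a hedged illustration, the Python sketch below (assuming an equidistant fisheye projection r = f·θ and a pinhole output view, both assumptions) builds the pixel remapping from a rectilinear output image back into the fisheye image, which is the core of any dewarping step:

```python
import numpy as np

def dewarp_map(out_w, out_h, out_fov_deg, fish_w, fish_h, fish_fov_deg):
    """Return (map_x, map_y): for every output pixel, its source coordinate in the fisheye image.
    Assumes an equidistant fisheye (r = f * theta) centred in the frame and a pinhole output view."""
    # Pinhole focal length for the requested output field of view.
    f_out = (out_w / 2) / np.tan(np.radians(out_fov_deg) / 2)
    # Fisheye focal length: the image radius corresponds to half the fisheye FOV.
    f_fish = (min(fish_w, fish_h) / 2) / np.radians(fish_fov_deg / 2)

    xs = np.arange(out_w) - out_w / 2
    ys = np.arange(out_h) - out_h / 2
    x, y = np.meshgrid(xs, ys)

    # Viewing ray for each output pixel, then its angle from the optical axis.
    r_out = np.sqrt(x**2 + y**2)
    theta = np.arctan2(r_out, f_out)           # angle off-axis
    r_fish = f_fish * theta                    # equidistant projection
    scale = np.divide(r_fish, r_out, out=np.zeros_like(r_fish), where=r_out > 0)

    map_x = fish_w / 2 + x * scale
    map_y = fish_h / 2 + y * scale
    return map_x.astype(np.float32), map_y.astype(np.float32)

# Example: a 1280x720, 90-degree view cut from a 1080p, 180-degree fisheye frame.
mx, my = dewarp_map(1280, 720, 90, 1920, 1080, 180)
# The maps can be fed to a per-frame remapping routine (e.g. cv2.remap) on the server.
```

Because the mapping is fixed for a given camera and output view, it can be precomputed once and applied to every incoming frame before the dewarped stream is re-encoded at the chosen bitrate and resolution.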
29

Mejora del streaming de vídeo en DASH con codificación de bitrate variable mediante el algoritmo Look Ahead y mecanismos de coordinación para la reproducción, y propuesta de nuevas métricas para la evaluación de la QoE

Belda Ortega, Román 19 July 2021 (has links)
This thesis presents several proposals aimed at improving video transmission through the DASH (Dynamic Adaptive Streaming over HTTP) standard. This research work studies the DASH transmission protocol and its characteristics. At the same time, it proposes encoding with constant quality and variable bitrate as the most suitable video encoding mode for on-demand content transmission through the DASH standard. Given the proposal to use the constant-quality encoding mode, the role played by adaptation algorithms in the user experience when consuming multimedia content becomes more important. In this sense, this thesis presents an adaptation algorithm called Look Ahead which, without modifying the standard, uses the information on the sizes of the video segments included in the multimedia containers to avoid making adaptation decisions that lead to undesirable stalls during the playback of multimedia content. In order to evaluate the improvements of the presented adaptation algorithm, three models for objective QoE evaluation are proposed. These models make it possible to predict, in a simple and objective way, the QoE that users would experience, using well-known parameters such as the average bitrate, the PSNR (Peak Signal-to-Noise Ratio) and the VMAF (Video Multimethod Assessment Fusion) value, all applied to each segment. Finally, the behavior of DASH in Wi-Fi environments with high user density is studied. In this context, a high number of playback stalls can occur because of poor estimation of the available transfer rate, caused by the ON/OFF download pattern of DASH and the variability of access to the Wi-Fi medium. To alleviate this situation, a coordination service based on SAND (MPEG's Server and Network Assisted DASH) is proposed, which provides an estimate of the transfer rate based on information about the state of the clients' players. / Belda Ortega, R. (2021). Mejora del streaming de vídeo en DASH con codificación de bitrate variable mediante el algoritmo Look Ahead y mecanismos de coordinación para la reproducción, y propuesta de nuevas métricas para la evaluación de la QoE [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/169467
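The abstract describes Look Ahead only at a high level. The following Python sketch is a hedged reconstruction of the core idea (the decision rule, look-ahead depth and names are assumptions, not the thesis' exact algorithm): because the actual byte size of every upcoming VBR segment is known from the container index, the client can simulate the buffer evolution for each representation and reject any that would drain the buffer.

```python
def simulate_buffer(buffer_s, segment_sizes_bytes, throughput_bps, segment_duration_s):
    """Simulate the buffer level while downloading the next few segments of one representation."""
    level = buffer_s
    for size in segment_sizes_bytes:
        level -= (size * 8) / throughput_bps    # time spent downloading drains the buffer
        if level <= 0:
            return False                         # playback would stall
        level += segment_duration_s              # the downloaded segment adds playable time
    return True

def look_ahead_choice(representations, buffer_s, throughput_bps, segment_duration_s, depth=3):
    """Pick the highest-bitrate representation whose next `depth` segments fit without a stall.
    `representations` maps a name to the upcoming segment sizes in bytes (known from the index)."""
    ordered = sorted(representations.items(),
                     key=lambda kv: sum(kv[1]), reverse=True)    # largest (highest quality) first
    for name, sizes in ordered:
        if simulate_buffer(buffer_s, sizes[:depth], throughput_bps, segment_duration_s):
            return name
    return ordered[-1][0]                                        # fall back to the smallest one

reps = {
    "1080p": [1_800_000, 2_400_000, 2_100_000],   # per-segment sizes in bytes (VBR, so they vary)
    "720p":  [1_000_000, 1_300_000, 1_100_000],
    "480p":  [500_000, 600_000, 550_000],
}
print(look_ahead_choice(reps, buffer_s=6.0, throughput_bps=3_000_000, segment_duration_s=4.0))
# -> "720p": the 1080p segments would drain the 6-second buffer at 3 Mbps, so they are rejected.
```

This is the advantage of combining constant-quality VBR encoding with segment-size awareness: a throughput-only controller would treat all segments of a representation as equally expensive, whereas looking ahead over the real sizes avoids the decisions that end in stalls.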
