  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
91

Scalable Multiple Description Coding and Distributed Video Streaming over 3G Mobile Networks

Zheng, Ruobin January 2003 (has links)
In this thesis, a novel Scalable Multiple Description Coding (SMDC) framework is proposed. To address the bandwidth-fluctuation, packet-loss, and heterogeneity problems of wireless networks, and to further enhance the error-resilience tools in Moving Pictures Experts Group 4 (MPEG-4), the joint design of layered coding (LC) and multiple description coding (MDC) is explored. The framework leverages a proposed distributed multimedia delivery mobile network (D-MDMN) to provide path diversity, combating streaming-video outages caused by handoff in the Universal Mobile Telecommunications System (UMTS). The corresponding intra-RAN (Radio Access Network) and inter-RAN handoff procedures in D-MDMN are studied in detail; they replace the data-forwarding principle of UMTS with one of video-stream re-establishment. Furthermore, a new IP (Internet Protocol) Differentiated Services (DiffServ) video marking algorithm is proposed to support unequal error protection (UEP) of the LC components of SMDC. Performance evaluation is carried out through simulation using OPNET Modeler 9.0. Simulation results show that the proposed handoff procedures in D-MDMN outperform those in UMTS in terms of handoff latency, end-to-end delay, and handoff scalability. Performance evaluation of the proposed IP DiffServ video marking algorithm shows that it is better suited to video streaming in IP mobile networks than the previously proposed DiffServ video marking algorithm (DVMA).
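The joint LC/MDC idea above can be illustrated with a toy sketch: temporal odd/even splitting yields two independently decodable descriptions, and each description is further split into a base layer (coarse samples) and an enhancement layer (the residual). The function names and the quantization step are illustrative assumptions, not the thesis's actual codec.

```python
def make_descriptions(frames):
    """Split a frame sequence into two temporal descriptions (MDC)."""
    return frames[0::2], frames[1::2]

def layer_split(samples, step=16):
    """Base layer = coarsely quantized samples; enhancement = residual (LC)."""
    base = [(s // step) * step for s in samples]
    enh = [s - b for s, b in zip(samples, base)]
    return base, enh

def reconstruct(base, enh=None):
    """Decode the base alone (coarse) or base + enhancement (exact)."""
    if enh is None:
        return list(base)
    return [b + e for b, e in zip(base, enh)]

frames = [23, 87, 142, 200, 55, 91, 170, 33]
d1, d2 = make_descriptions(frames)
b1, e1 = layer_split(d1)
# Losing description 2 and the enhancement layer still yields a coarse
# version of every other frame -- the graceful degradation MDC + LC buys.
print(reconstruct(b1))            # coarse odd-indexed frames
print(reconstruct(b1, e1) == d1)  # True: full quality with both layers
```

On a lossy path, a receiver holding any one description plus its base layer still plays coarse video; each extra piece refines it.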
92

Automated Construction of Macromodels from Frequency Data for Simulation of Distributed Interconnect Networks

Min, Sung-Hwan 12 April 2004 (has links)
As the complexity of interconnects and packages increases and the rise and fall time of the signal decreases, the electromagnetic effects of distributed passive devices are becoming an important factor in determining the performance of gigahertz systems. The electromagnetic behavior extracted using an electromagnetic simulation or from measurements is available as frequency dependent data. This information can be represented as a black box called a macromodel, which captures the behavior of the passive structure at the input/output ports. In this dissertation, the macromodels have been categorized as scalable, passive and broadband macromodels. The scalable macromodels for building design libraries of passive devices have been constructed using multidimensional rational functions, orthogonal polynomials and selective sampling. The passive macromodels for time-domain simulation have been constructed using filter theory and multiport passivity formulae. The broadband macromodels for high-speed simulation have been constructed using band division, selector, subband reordering, subband dilation and pole replacement. An automated construction method has been developed. The construction time of the multiport macromodel has been reduced. A method for reducing the order of the macromodel has been developed. The efficiency of the methods was demonstrated through embedded passive devices, known transfer functions and distributed interconnect networks.
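A minimal sketch of the core step, fitting a rational macromodel to frequency-domain data, is a linearized least-squares solve in the spirit of Levy's method; the dissertation's actual construction (orthogonal polynomials, selective sampling, passivity enforcement) is far more involved. The one-pole test function below is an assumption for illustration.

```python
import numpy as np

def fit_rational(freqs_hz, H):
    """Fit H(s) ~ (a0 + a1*s) / (1 + b1*s) by linear least squares."""
    s = 2j * np.pi * np.asarray(freqs_hz)
    # Linearize: a0 + a1*s - b1*(s*H) = H, with unknowns x = [a0, a1, b1]
    A = np.column_stack([np.ones_like(s), s, -s * H])
    # Split real/imag parts so lstsq sees a real-valued system
    A_ri = np.vstack([A.real, A.imag])
    b_ri = np.concatenate([H.real, H.imag])
    x, *_ = np.linalg.lstsq(A_ri, b_ri, rcond=None)
    return x  # [a0, a1, b1]

freqs = np.linspace(1e3, 1e6, 50)
s = 2j * np.pi * freqs
H = 1.0 / (1.0 + s / (2 * np.pi * 1e5))  # one-pole response, fc = 100 kHz
a0, a1, b1 = fit_rational(freqs, H)
print(round(float(a0), 6))                # ~1.0
print(round(float(b1) * 2 * np.pi * 1e5, 4))  # ~1.0: the pole is recovered
```

Because the sample data is exactly rational, the linearized solve recovers the pole; on measured data one would iterate with frequency weighting and enforce passivity afterwards.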
93

Energy Efficient Multicast Scheduling with Adaptive Modulation and Coding for IEEE 802.16e Wireless Metropolitan Area Networks

Hsu, Chao-Yuan 14 July 2011 (has links)
One of the major applications driving wireless network services is video streaming, which relies on the ability to simultaneously multicast the same video contents to a group of users, thus reducing bandwidth consumption. On the other hand, due to slow progress in battery technology, the investigation of power-saving technologies has become important. IEEE 802.16e (also known as Mobile WiMAX) is currently the international MAC (medium access control) standard for wireless metropolitan area networks. However, in 802.16e, the power saving class for multicast traffic is designed only for best-effort-based management operations. In contrast, SMBC-AMC adopts the concepts of "multicast superframe" and "logical broadcast channel" to support push-based multicast applications. However, SMBC-AMC requires that (1) the number of frames in each logical broadcast channel be equal, (2) all mobile stations have the same duty cycle, and (3) the base station use the same modulation to send data within a frame. These requirements make SMBC-AMC too inflexible to achieve high multicast energy throughput. In this thesis, we propose cross-layer energy-efficient multicast scheduling algorithms, called EEMS-AMC, for scalable video streaming. The goal of EEMS-AMC is to find a multicast data schedule that maximizes the multicast energy throughput of a WiMAX network. Specifically, EEMS-AMC has the following attractive features: (1) By means of admission control and a restriction on the multicast superframe length, EEMS-AMC ensures that the base-layer data of all admitted video streams can be delivered to mobile stations within their timeliness requirements. (2) EEMS-AMC adopts a greedy approach to scheduling the base-layer data so that the average duty cycle of all admitted stations approaches the theoretical minimum. (3) EEMS-AMC uses the metric "potential multicast throughput" to find the proper modulation for each enhancement-layer data unit, and the metric "multicast energy throughput gain" to find a near-optimal enhancement-layer data schedule. Simulation results show that EEMS-AMC significantly outperforms SMBC-AMC in terms of average duty cycle, multicast energy throughput, multicast packet loss rate, and normalized total utility.
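One plausible reading of the "potential multicast throughput" metric can be sketched as follows: a multicast frame sent with a higher-rate modulation reaches only the stations whose channels support it, so the scheduler picks the modulation maximizing rate times reachable-station count. The rate table and the per-station supported-rate inputs are illustrative assumptions, not the thesis's exact formulation.

```python
MOD_RATES = {"QPSK": 2, "16QAM": 4, "64QAM": 6}  # bits per symbol

def best_modulation(station_max_rates):
    """Pick the modulation maximizing potential multicast throughput,
    i.e. (bits per symbol) x (stations able to decode that modulation)."""
    def throughput(mod):
        rate = MOD_RATES[mod]
        reachable = sum(1 for r in station_max_rates if r >= rate)
        return rate * reachable
    return max(MOD_RATES, key=throughput)

# 5 stations: two can only decode QPSK, three can decode up to 64QAM.
stations = [2, 2, 6, 6, 6]
print(best_modulation(stations))  # 64QAM: 6*3 = 18 beats QPSK's 2*5 = 10
```

The same comparison flips when most stations are in poor channel conditions, which is exactly why a per-layer modulation choice beats SMBC-AMC's single fixed modulation.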
94

Resource Allocation for MIMO Relay and Scalable H.264/AVC Video Transmission over Cooperative Communication Networks

Wu, Yi-Sian 10 September 2012 (has links)
This thesis proposes resource allocation algorithms for multi-input multi-output (MIMO) relay and scalable H.264/AVC video transmission over cooperative communication networks. For MIMO relay, we explore reception diversity with maximal ratio combining (MRC) and transmission diversity with space-time block codes (STBC), respectively. A condition is then proposed to maximize the overall output signal-to-noise ratio (SNR); under this condition, ineffective relays are excluded in sequence from the cooperation. Simulation results indicate that the bit error rate (BER) achieved with relay selection is similar to that of the scheme applying all relays, while the number of relays used decreases. For scalable H.264/AVC video, frame-significance analysis is introduced to investigate the video-quality dependency between a coded frame and its references across temporal and quality layers. The proposed algorithm allocates a relay and sub-band to each layer based on channel conditions and the priority of classified video packets. Experimental results indicate that the proposed algorithm is superior to the temporal-based and quality-based allocation cooperative schemes.
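A hedged sketch of why excluding "ineffective" relays can help: with space-time block codes, using more relays adds diversity SNR but lowers the STBC code rate (rate 1 for two antennas, 3/4 for three or four), so a weak relay can cost more rate than the SNR it contributes. The rate table is standard STBC theory, but the capacity-style metric and the greedy exclusion rule are illustrative assumptions, not the thesis's exact condition.

```python
import math

STBC_RATE = {1: 1.0, 2: 1.0, 3: 0.75, 4: 0.75}  # code rate vs. relay count

def effective_throughput(branch_snrs):
    """Code rate x spectral efficiency for the MRC-combined SNR."""
    rate = STBC_RATE[len(branch_snrs)]
    return rate * math.log2(1.0 + sum(branch_snrs))

def select_relays(branch_snrs):
    """Exclude the weakest relay in sequence while throughput improves."""
    active = sorted(branch_snrs, reverse=True)
    while len(active) > 1:
        if effective_throughput(active[:-1]) > effective_throughput(active):
            active = active[:-1]  # drop the current weakest relay
        else:
            break
    return active

snrs = [8.0, 6.0, 0.2]  # linear branch SNRs; the third relay is weak
print(select_relays(snrs))  # [8.0, 6.0]: dropping the weak relay wins
```

At low SNR the comparison reverses (e.g. three relays at 0.2 each are all kept), mirroring the thesis's observation that relay selection matches full cooperation's BER with fewer relays.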
95

The Research of Very Low Bit-Rate and Scalable Video Compression Using Cubic-Spline Interpolation

Wang, Chih-Cheng 18 June 2001 (has links)
This thesis applies one-dimensional (1-D) and two-dimensional (2-D) cubic-spline interpolation (CSI) schemes to the MPEG standard for very low bit-rate video coding. In addition, the CSI scheme is used to implement a scalable video compression scheme. The CSI scheme is based on the least-squares method with a cubic convolution function. It has been shown that the CSI scheme yields a very accurate algorithm for smoothing and produces better reconstructed image quality than linear interpolation, linear-spline interpolation, cubic convolution interpolation, and cubic B-spline interpolation. To obtain very low bit-rate video, the CSI scheme is used along with the MPEG-1 standard for video coding. Computer simulations show that this modified MPEG not only avoids the blocking effect caused by MPEG at high compression ratios but also yields a very low bit-rate video coding scheme that still maintains reasonable video quality. Finally, the CSI scheme is also used to achieve scalable video compression. This new scalable video compression scheme allows the data rate to be changed dynamically by the CSI scheme, which is very useful when operating over communication networks with different transmission capacities.
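The cubic convolution function underlying schemes like the CSI discussed above can be sketched with the standard Keys kernel (a = -0.5); the thesis's CSI additionally applies a least-squares fit, which is omitted here for brevity. The sample signal is purely illustrative.

```python
def keys_kernel(x, a=-0.5):
    """Cubic convolution interpolation kernel (support [-2, 2])."""
    x = abs(x)
    if x < 1:
        return (a + 2) * x**3 - (a + 3) * x**2 + 1.0
    if x < 2:
        return a * x**3 - 5 * a * x**2 + 8 * a * x - 4 * a
    return 0.0

def interpolate(samples, t):
    """Interpolate a uniformly sampled 1-D signal at position t."""
    i = int(t)
    total = 0.0
    for k in range(i - 1, i + 3):          # the 4 nearest samples
        if 0 <= k < len(samples):
            total += samples[k] * keys_kernel(t - k)
    return total

signal = [0.0, 1.0, 4.0, 9.0, 16.0, 25.0]  # samples of t**2
print(interpolate(signal, 2.0))  # 4.0: sample points are reproduced exactly
print(interpolate(signal, 2.5))  # 6.25: the kernel is exact on quadratics
```

In a codec, such a kernel is what fills in the pixels of an upsampled (or sub-pel motion-compensated) frame from the transmitted low-resolution layer.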
96

Design, Manufacturing, and Assembly of a Flexible Thermoelectric Device

Martinez, Christopher Anthony 01 January 2013 (has links)
This thesis documents the design, manufacturing, and assembly of a flexible thermoelectric device. Such a device has immediate use in haptic, medical, and athletic applications. The governing theory behind the device is explained, and a one-dimensional heat transfer model is developed to estimate performance. This model, together with consideration of the manufacturing and assembly possibilities, drove the design choices. Once the design was finalized, manufacturing methods for the various components were explored. The system was created by etching copper patterns on a copper/polyimide laminate and screen-printing solder paste onto the circuits. Thermoelectric elements were manually assembled. Several proof-of-concept prototypes were made to validate the approach. Development of the assembly process also involved proof-of-concept prototyping and partial assembly analysis. A full-scale device was produced and tested to assess its thermoelectric behavior. The resulting performance was an interface temperature drop of 3 °C in 10 seconds with 1.5 A supplied, and a maximum temperature drop of 9.9 °C after 2 minutes with 2.5 A supplied. While the measured behavior fell short of predictions, it appears adequate for the intended purpose. The differences appear to be due to larger-than-expected thermal resistances between the device and the heat sinks, and to possible degradation of the thermoelectric elements from excess solder coating their edges.
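The steady-state energy balance behind a one-dimensional model like the one described above can be sketched from textbook thermoelectric theory: Peltier cooling S·Tc·I is offset by half the Joule heating I²R and by conduction K·ΔT, giving a maximum (zero-load) temperature difference. All parameter values below are illustrative assumptions, not measurements from the thesis's device.

```python
def steady_state_dt(S, R, K, Tc, I):
    """Max temperature difference at zero heat load: Qc = S*Tc*I - I^2*R/2 - K*dT = 0."""
    return (S * Tc * I - 0.5 * I**2 * R) / K

# Illustrative module-level parameters (rough Bi2Te3 orders of magnitude)
S = 0.02    # Seebeck coefficient of the module, V/K
R = 2.0     # electrical resistance, ohm
K = 0.4     # thermal conductance, W/K
Tc = 295.0  # cold-side temperature, K

for current in (1.0, 2.0, 3.0):
    print(round(steady_state_dt(S, R, K, Tc, current), 2))
```

The diminishing return at higher current (Joule heating grows as I² while Peltier cooling grows linearly) is consistent with the thesis observing a larger but slower temperature drop at 2.5 A than at 1.5 A.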
97

Accelerated Fuzzy Clustering

Parker, Jonathon Karl 01 January 2013 (has links)
Clustering algorithms are a primary tool in data analysis, facilitating the discovery of groups and structure in unlabeled data. They are used in a wide variety of industries and applications. Despite their ubiquity, clustering algorithms have a flaw: they take an unacceptable amount of time to run as the number of data objects increases. The need to compensate for this flaw has led to the development of a large number of techniques intended to accelerate their performance. This need grows greater every day, as collections of unlabeled data grow larger and larger. How does one increase the speed of a clustering algorithm as the number of data objects increases and at the same time preserve the quality of the results? This question was studied using the Fuzzy c-means clustering algorithm as a baseline. Its performance was compared to the performance of four of its accelerated variants. Four key design principles of accelerated clustering algorithms were identified. Further study and exploration of these principles led to four new and unique contributions to the field of accelerated fuzzy clustering. The first was the identification of a statistical technique that can estimate the minimum amount of data needed to ensure a multinomial, proportional sample. This technique was adapted to work with accelerated clustering algorithms. The second was the development of a stopping criterion for incremental algorithms that minimizes the amount of data required, while maximizing quality. The third and fourth techniques were new ways of combining representative data objects. Five new accelerated algorithms were created to demonstrate the value of these contributions. One additional discovery made during the research was that the key design principles most often improve performance when applied in tandem. This discovery was applied during the creation of the new accelerated algorithms. 
Experiments show that the new algorithms improve speedup with minimal quality loss, are demonstrably better than related methods, and occasionally improve both speedup and quality over the base algorithm.
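The baseline being accelerated, Fuzzy c-means, reduces to two alternating updates: fuzzy memberships from distances to the centers, then centers as membership-weighted means. A minimal two-cluster sketch, with a deterministic min/max initialization and a tiny illustrative data set (not from the dissertation):

```python
def fcm2(data, m=2.0, iters=50):
    """Fuzzy c-means for 2 clusters on 1-D data; returns sorted centers."""
    centers = [min(data), max(data)]  # deterministic init for 2 clusters
    for _ in range(iters):
        # Membership update: u_i = 1 / sum_j (d_i / d_j)^(2/(m-1))
        u = []
        for x in data:
            d = [abs(x - v) + 1e-12 for v in centers]  # avoid div-by-zero
            u.append([1.0 / sum((d[i] / d[j]) ** (2.0 / (m - 1.0))
                                for j in range(2)) for i in range(2)])
        # Center update: mean of the data weighted by u^m
        centers = [sum(u[k][i] ** m * data[k] for k in range(len(data)))
                   / sum(u[k][i] ** m for k in range(len(data)))
                   for i in range(2)]
    return sorted(centers)

data = [1.0, 1.2, 0.8, 10.0, 10.3, 9.7]
print([round(v, 1) for v in fcm2(data)])  # centers near 1.0 and 10.0
```

Every pass touches all n data objects, which is exactly the cost the chapter's sampling- and incremental-based accelerations attack as n grows.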
98

Adaptive video transmission over wireless channels with optimized quality of experiences

Chen, Chao, active 2013 18 February 2014 (has links)
Video traffic is growing rapidly in wireless networks. Different from ordinary data traffic, video streams have higher data rates and tighter delay constraints. The ever-varying throughput of wireless links, however, cannot support continuous video playback if the video data rate is kept at a high level. To this end, adaptive video transmission techniques are employed to reduce the risk of playback interruptions by dynamically matching the video data rate to the varying channel throughput. In this dissertation, I develop new models to capture viewers' quality of experience (QoE) and design adaptive transmission algorithms to optimize the QoE. The contributions of this dissertation are threefold. First, I develop a new model for the viewers' QoE in rate-switching systems in which the video source rate is adapted every several seconds. The model is developed to predict an important aspect of QoE, the time-varying subjective quality (TVSQ), i.e., the up-to-the-moment subjective quality of a video as it is played. I first build a video database of rate-switching videos and measure TVSQs via a subjective study. Then, I parameterize and validate the TVSQ model using the measured TVSQs. Finally, based on the TVSQ model, I design an adaptive rate-switching algorithm that optimizes the time-averaged TVSQs of wireless video users. Second, I propose an adaptive video transmission algorithm to optimize the Overall Quality (OQ) of rate-switching videos, i.e., the viewers' judgement on the quality of the whole video. Through the subjective study, I find that the OQ is strongly correlated with the empirical cumulative distribution function (eCDF) of the video quality perceived by viewers. Based on this observation, I develop an adaptive video transmission algorithm that maximizes the number of video users who satisfy given constraints on the eCDF of perceived video qualities. Third, I propose an adaptive transmission algorithm for scalable videos. 
Different from rate-switching systems, scalable videos support rate adaptation for each video frame. The proposed adaptive transmission algorithm maximizes the time-averaged video quality while maintaining continuous video playback. When the channel throughput is high, the algorithm increases the video data rate to improve video quality. Otherwise, the algorithm decreases the video data rate to buffer more video data and reduce the risk of playback interruption. Simulation results show that the performance of the proposed algorithm is close to a performance upper bound. / text
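The buffer-aware adaptation rule described above can be sketched with a toy policy: when the playback buffer is comfortable, spend throughput on quality; when it runs low, drop the rate to rebuild the buffer. The rate ladder, thresholds, and the simple buffer recursion are illustrative assumptions, not the dissertation's optimized QoE policy.

```python
RATES = [250, 500, 1000, 2000, 4000]  # available video data rates, kbps

def pick_rate(throughput_kbps, buffer_s, low_water=5.0):
    """Lowest rate while the buffer is low; else the highest affordable rate."""
    if buffer_s < low_water:
        return RATES[0]  # rebuild the playback buffer first
    affordable = [r for r in RATES if r <= throughput_kbps]
    return affordable[-1] if affordable else RATES[0]

def simulate(throughputs, seg_len=2.0, buffer_s=0.0):
    """Each segment adds seg_len of video; the buffer drains while downloading."""
    chosen = []
    for tp in throughputs:
        r = pick_rate(tp, buffer_s)
        chosen.append(r)
        download_time = r * seg_len / tp  # seconds to fetch the segment
        buffer_s = max(0.0, buffer_s - download_time) + seg_len
    return chosen

# Throughput dips mid-session; the policy responds by lowering the rate.
print(simulate([3000, 3000, 3000, 3000, 600, 600, 3000, 3000]))
```

The printed schedule starts conservatively to fill the buffer, climbs once it is safe, and steps down during the throughput dip instead of stalling playback.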
99

Modeling Large Social Networks in Context

Ho, Qirong 01 July 2014 (has links)
Today’s social and internet networks contain millions or even billions of nodes, and copious amounts of side information (context) such as text, attribute, temporal, image and video data. A thorough analysis of a social network should consider both the graph and the associated side information, yet we also expect the algorithm to execute in a reasonable amount of time on even the largest networks. Towards the goal of rich analysis on societal-scale networks, this thesis provides (1) modeling and algorithmic techniques for incorporating network context into existing network analysis algorithms based on statistical models, and (2) strategies for network data representation, model design, algorithm design and distributed multi-machine programming that, together, ensure scalability to very large networks. The methods presented herein combine the flexibility of statistical models with key ideas and empirical observations from the data mining and social networks communities, and are supported by software libraries for cluster computing based on original distributed systems research. These efforts culminate in a novel mixed-membership triangle motif model that easily scales to large networks with over 100 million nodes on just a few cluster machines, and can be readily extended to accommodate network context using the other techniques presented in this thesis.
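Triangle motifs, the building block of the mixed-membership model described above, can be enumerated with a minimal neighbor-set-intersection sketch (the scalable system in the thesis is, of course, distributed and far more elaborate; the toy graph here is illustrative).

```python
from itertools import combinations

def triangles(edges):
    """Return the set of triangle motifs in an undirected graph."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    found = set()
    for u in adj:
        # Any two neighbors of u that are themselves adjacent close a triangle.
        for v, w in combinations(adj[u], 2):
            if w in adj[v]:
                found.add(frozenset((u, v, w)))
    return found

edges = [("a", "b"), ("b", "c"), ("a", "c"), ("c", "d")]
print(len(triangles(edges)))  # 1: the triangle {a, b, c}
```

On a 100-million-node graph, even this enumeration must be sharded across machines and pruned by node degree, which is the kind of representation and systems work the thesis pairs with the model.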
100

Motion compensation-scalable video coding

Αθανασόπουλος, Διονύσιος 17 September 2007 (has links)
The subject of this thesis is scalable video coding using the wavelet transform. Scalable video coding is a framework in which representations of a video with different quality, resolution, and frame rate can be derived from a single compressed video sequence. Scalability is an important property today, when video streaming and video communication take place over unreliable transmission media and between terminals with different capabilities. This thesis first studies the wavelet transform, the basic tool for scalable coding of both images and video sequences. We then analyze the idea of multiresolution analysis and the implementation of the wavelet transform via the lifting scheme, which sparked renewed interest in scalable video coding. Scalable video coding systems fall into two categories: those that apply the wavelet transform first in the temporal domain and then in the spatial domain, and those that apply it first in the spatial domain and then in the temporal domain. We focus on the first category and analyze the scalable encoding/decoding process and its constituent parts. Finally, we examine how various parameters affect the performance of a scalable video coding system and present experimental results. Based on these results, we propose an adaptive way of selecting the parameters to improve performance while reducing complexity.
/ In this master's thesis we examine scalable video coding based on the wavelet transform. Scalable video coding refers to a compression framework in which content representations with different quality, resolution, and frame rate can be extracted from parts of one compressed bitstream. Scalable video coding based on motion-compensated spatiotemporal wavelet decompositions is becoming increasingly popular, as it provides coding performance competitive with state-of-the-art coders while accommodating varying network bandwidths and different receiver capabilities (frame rate, display size, CPU, etc.), and offers solutions for network congestion and video server design. We investigate the wavelet transform, multiresolution analysis, and the lifting scheme, and then focus on scalable video coding/decoding. There are two architectures for scalable video coding: the first performs the wavelet transform first in the temporal direction and then the spatial decomposition; the other performs the spatial transform first and then the temporal decomposition. We focus on the first architecture, also known as t+2D scalable coding. Several coding parameters affect the performance of a scalable video coding scheme, such as the number of temporal levels and the interpolation filter used for subpixel accuracy. We have conducted extensive experiments to test the influence of these parameters, which proves to depend on the video content. We therefore present an adaptive way of choosing the values of these parameters based on the video content. Experimental results show that the proposed method not only significantly improves performance but also reduces the complexity of the coding procedure.
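The lifting scheme mentioned above can be sketched for the simplest case, the Haar wavelet: split the signal into even/odd samples, predict the odds from the evens (detail coefficients), then update the evens to preserve the running average (approximation coefficients). The example signal is illustrative.

```python
def haar_lift(signal):
    """One Haar lifting level: returns (approximation, detail) coefficients."""
    evens, odds = signal[0::2], signal[1::2]
    detail = [o - e for o, e in zip(odds, evens)]        # predict step
    approx = [e + d / 2 for e, d in zip(evens, detail)]  # update step
    return approx, detail

def haar_unlift(approx, detail):
    """Invert the lifting steps exactly, in reverse order."""
    evens = [a - d / 2 for a, d in zip(approx, detail)]  # undo update
    odds = [e + d for e, d in zip(evens, detail)]        # undo predict
    out = []
    for e, o in zip(evens, odds):
        out += [e, o]
    return out

x = [4.0, 6.0, 10.0, 12.0, 8.0, 2.0]
a, d = haar_lift(x)
print(a)                       # [5.0, 11.0, 5.0]: pairwise averages
print(haar_unlift(a, d) == x)  # True: lifting is exactly invertible
```

In the t+2D systems discussed above, the same predict/update structure is applied along the temporal axis with motion compensation inside the predict step, then recursively on the approximation to build the spatial decomposition.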
