221

Exploration Framework For Detecting Outliers In Data Streams

Sean, Viseth 27 April 2016 (has links)
Current real-world applications generate large volumes of data that are continuously updated over time. Detecting outliers on such evolving datasets requires continuously updating the result, and response time is critical for these time-sensitive applications. This is challenging for two reasons. First, the algorithm is complex: even mining outliers once from a static dataset is already very expensive. Second, users must specify input parameters to approach the true outliers, and because the number of parameter settings is large, online trial-and-error exploration is not only impractical and expensive but also tedious for analysts. Worse yet, since the dataset is changing, the best parameter settings must continually be updated to respond to user exploration requests. Overall, the large number of parameter settings and the evolving nature of the data make efficiently mining outliers from dynamic datasets very challenging. In this thesis, we therefore design an exploration framework for detecting outliers in data streams, called EFO, which enables analysts to continuously explore anomalies in dynamic datasets. EFO is a continuous, lightweight preprocessing framework. It embraces two optimization principles, "best life expectancy" and "minimal trial," to compress evolving datasets into a knowledge-rich abstraction of the important interrelationships among data. An incremental sorting technique is also used to exploit the almost-ordered lists that arise in this framework. The knowledge abstraction generated by EFO then supports not only traditional outlier detection requests but also novel outlier exploration operations on evolving datasets. Our experimental study on two real datasets demonstrates that EFO outperforms the state-of-the-art technique in CPU processing cost when varying stream volume, velocity and outlier rate.
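For context on the kind of query EFO answers, the sketch below shows the classic distance-threshold outlier definition over a sliding window: a point is flagged when fewer than min_neighbors other points in the window lie within radius. This is a generic illustration, not code from the thesis; the function and parameter names are assumptions, and the naive O(n^2) per-window pass deliberately omits the incremental maintenance that EFO's knowledge abstraction is designed to provide.

```python
from collections import deque

def distance_outliers(window, radius, min_neighbors):
    """Flag points with fewer than `min_neighbors` other points within
    `radius` in the current window (the standard distance-threshold
    outlier definition for streams)."""
    outliers = []
    for i, x in enumerate(window):
        neighbors = sum(1 for j, y in enumerate(window)
                        if i != j and abs(x - y) <= radius)
        if neighbors < min_neighbors:
            outliers.append(x)
    return outliers

# A sliding window over a 1-D stream: keep only the latest 200 points.
window = deque(maxlen=200)
for value in [0.1, 0.2, 9.7, 0.15, 0.25]:   # toy stream fragment
    window.append(value)
print(distance_outliers(list(window), radius=0.5, min_neighbors=2))  # [9.7]
```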
222

Profiling large-scale live video streaming and distributed applications

Deng, Jie January 2018 (has links)
Today, distributed applications run at data-centre and Internet scales, from intensive data analysis, such as MapReduce, to the dynamic demands of a worldwide audience, such as YouTube. The network is essential to these applications at both scales. To provide adequate support, we must understand the full requirements of the applications, which are revealed by their workloads. In this thesis, we study distributed applications at both scales to enrich this understanding. Large-scale Internet applications have been studied for years, including social networking services (SNS), video on demand (VoD), and content delivery networks (CDN). An emerging type of video broadcasting on the Internet, featuring crowdsourced live video streaming, has garnered attention, allowing platforms such as Twitch to attract over 1 million concurrent users globally. To better understand Twitch, we collected real-time popularity data combined with metadata about the content and found that the broadcasters, rather than the content, drive its popularity. Unlike YouTube and Netflix, where content can be cached, video streaming on Twitch is generated instantly and must be delivered to users immediately to enable real-time interaction. We therefore performed a large-scale measurement of Twitch's content location, revealing the global footprint of its infrastructure and uncovering the dynamic stream hosting and client redirection strategies that help Twitch serve millions of users at scale. We next consider applications that run inside the data centre. Distributed computing applications rely heavily on the network because of their data transmission needs and the scheduling of resources and tasks. One successful application, Hadoop, has been widely deployed for Big Data processing, yet little work has been devoted to understanding its network behaviour. We found that Hadoop's behaviour is limited by the hardware resources available and the jobs presented. After characterising Hadoop traffic on our testbed with a set of benchmark jobs, we built a simulator to reproduce Hadoop's job traffic. With the simulator, users can investigate the connections between Hadoop traffic and network performance without additional hardware cost, and different network components, such as network topologies, queueing policies, and transport-layer protocols, can be added to study their effect on performance. In this thesis, we extended the knowledge of networking by investigating two widely used applications, one at Internet scale and one in the data centre. We (i) studied the most popular live video streaming platform, Twitch, as a new type of Internet-scale distributed application, revealing that broadcaster factors drive the popularity of such platforms; (ii) discovered the footprint of Twitch's streaming infrastructure and its dynamic stream hosting and client redirection strategies, providing an in-depth example of video streaming delivery at Internet scale; (iii) investigated the traffic generated by a distributed application by characterising Hadoop traffic under various parameters; and (iv) with this knowledge, built a simulation tool so users can efficiently investigate the performance of different network components under distributed-application traffic.
223

Bandwidth-efficient video streaming with network coding on peer-to-peer networks

Huang, Shenglan January 2017 (has links)
Over the last decade, live video streaming applications have gained great popularity among users but put great pressure on video servers and the Internet. To satisfy the growing demand for live video streaming, Peer-to-Peer (P2P) delivery has been developed to relieve video servers of bandwidth bottlenecks and computational load. Furthermore, Network Coding (NC) has been proposed and proven a significant breakthrough in information theory and coding theory. Previous research shows that NC not only brings substantial improvements in throughput and delay, but also provides innovative solutions to several resource allocation issues, such as the coupon-collection problem and the allocation and scheduling procedure. However, a complex NC-driven P2P streaming network poses substantial challenges to the packet scheduling algorithm. This thesis focuses on packet scheduling algorithms for video multicast in NC-driven P2P streaming networks. It determines how the upload bandwidth of peer nodes is allocated in different transmission scenarios to achieve better Quality of Service (QoS). First, an optimized rate allocation algorithm is proposed for scalable video transmission (SVT) in NC-based lossy streaming networks. This algorithm trades off average video distortion against average bandwidth redundancy in each generation: it determines how senders allocate their upload bandwidth to different classes of scalable data so that the sum of the distortion and the weighted redundancy ratio is minimized. Second, in the NC-based non-scalable video transmission system, we reduce the bandwidth inefficiency caused by asynchronized communication among peers. A scalable compensation model and an adaptive push algorithm are proposed to reduce unrecoverable transmissions caused by network loss and insufficient bandwidth, and a centralized packet scheduling algorithm is proposed to reduce uninformative transmissions caused by asynchronized communication among sender nodes. We then propose a distributed packet scheduling algorithm, which adds a critical scalability property to the packet scheduling model. Third, bandwidth resource scheduling for SVT is studied further. A novel multiple-generation scheduling algorithm is proposed to determine the quality classes that a receiver node should subscribe to so that the overall perceived video quality is maximized, and a single-generation scheduling algorithm for SVT is proposed to provide a faster and simpler solution to the video quality maximization problem. Thorough theoretical analysis is conducted in developing all proposed algorithms, and their performance is evaluated via comprehensive simulations. We demonstrate that, by adjusting the conventional transmission model and introducing new packet scheduling models, overall QoS and bandwidth efficiency are dramatically improved. In the non-scalable video streaming system, the maximum video quality gain is around 5 dB compared with the random push method, and the overall uninformative transmission ratio is reduced to 1%-2%. In the scalable video streaming system, the maximum video quality gain is around 7 dB, and the overall uninformative transmission ratio is reduced to 2%-3%.
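For readers unfamiliar with network coding, the minimal sketch below (an illustration under simplifying assumptions, not code from the thesis) encodes one generation of source chunks into coded packets that are random XOR combinations over GF(2). A receiver can recover the originals once it collects as many linearly independent coded packets as there are source chunks, which is how NC sidesteps the coupon-collection problem mentioned above; practical systems typically work over GF(2^8), and the decoding step (Gaussian elimination) is omitted here.

```python
import random

def encode_gf2(source_packets, num_coded):
    """Produce coded packets as random XOR combinations of the k source
    packets in one generation. Each coded packet carries its GF(2)
    coefficient vector so a receiver can later decode by Gaussian
    elimination (not shown)."""
    k = len(source_packets)
    size = len(source_packets[0])
    coded = []
    for _ in range(num_coded):
        coeffs = [random.randint(0, 1) for _ in range(k)]
        if not any(coeffs):                 # avoid the useless all-zero vector
            coeffs[random.randrange(k)] = 1
        payload = bytearray(size)
        for c, pkt in zip(coeffs, source_packets):
            if c:
                payload = bytearray(a ^ b for a, b in zip(payload, pkt))
        coded.append((coeffs, bytes(payload)))
    return coded

# Example: one generation of four 1 kB chunks encoded into six coded packets,
# giving receivers redundancy against losing any particular packet.
generation = [bytes(random.getrandbits(8) for _ in range(1024)) for _ in range(4)]
coded_packets = encode_gf2(generation, num_coded=6)
```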
224

Digital disruption in the recording industry

Sun, Hyojung January 2017 (has links)
With the rise of peer-to-peer software like Napster, many predicted that the digitalisation, sharing and dematerialisation of music would bring a radical transformation of the recording industry. This opened a period of controversy and uncertainty in which competing visions of technology-induced change were articulated, markedly polarised between utopian and dystopian accounts, with no clear view of the way forward. A series of moves followed as various players sought to valorise music on digital music networks, culminating in the emergence of successful streaming services. This thesis examines why there was a mismatch between the initial predictions and what has actually happened in the market. It offers a detailed examination of the innovation processes through which digital technology was implemented and domesticated in the recording industry, revealing a complex, contradictory and constantly evolving landscape in which the development of digital music distribution was far removed from the smooth trajectories envisaged by those who saw these developments as shaped simply by technical or economic determinants. The research is based upon qualitative analysis of fifty-five interviews with a wide range of entrepreneurs and innovators, focusing on two successful innovation cases with different points of insertion within the digital recording industry: (1) Spotify, currently the world's most popular digital music streaming service; and (2) INgrooves, an independent digital music distribution service provider whose system is also used by Universal Music Group. The thesis applies perspectives from the Social Shaping of Technology ("SST") and its extension into Social Learning in Technological Innovation. It explores the widely dispersed processes of innovation through which the complex interactions among heterogeneous players, with conflicting interests and differing commitments to the digital music networks, guided diverging choices in relation to particular market conditions and user requirements. The thesis makes three major contributions to understanding digital disruption in the recording industry. (1) In contrast to prevailing approaches that take P2P distribution as the single point of focus, the study investigates the multiplicity of actors and sites of innovation in the digital recording industry. It demonstrates that the dematerialisation of music did not lead to a simple, technologically driven transformation of the industry; instead, a diverse array of realignments had to take place across the music sector to develop digital music valorisation networks. (2) By examining the detailed processes involved in the evolution of digital music services, it highlights the ways in which business models are shaped through a learning process of finding and matching the constantly changing needs of digital music users. Based on the observation that business models must be discovered in the course of making technologies work in the market, a new framework of 'social shaping of business models' is proposed to conceptualise business models as an emergent process in which firms refine their strategies in the light of emerging circumstances. (3) Drawing upon the concepts of musical networks (Leyshon 2001) and mediation (Hennion 1989), the thesis investigates the interaction of diverse actors across the circuit of the recording business: production, distribution, valorisation, and consumption. The comprehensive analysis of the intricate interplay between innovation actors and their interactions in the economic, cultural, legal and institutional context highlights the need for a more sophisticated and nuanced understanding of the recording industry.
225

Resource management in multimedia communication systems

Hou, Yuen Tan 01 January 2003 (has links)
No description available.
226

Modeling and analysis of P2P streaming.

January 2008 (has links)
Zhou, Yipeng. Thesis (M.Phil.)--Chinese University of Hong Kong, 2008. Includes bibliographical references (leaves 64-66). Abstracts in English and Chinese. The record reproduces only the thesis front matter and table of contents: 1. Introduction (background; contribution; organization); 2. Related Work (streaming; P2P VoD); 3. Basic Model of Synchronized Case; 4. Model of Chunk Selection Strategies (greedy, rarest-first, and mixed strategies; buffer size, peer population, and continuity; metrics: continuity and start-up latency); 5. Experiment and Application (numerical examples and analysis; sensitivity study, including a discrete model with factor, a server pull strategy, and varying the subset size touched by the server; application to real-world protocols); 6. Model of Unsynchronized Case (model for unsynchronized playback; overlap maximization problem; properties of the synchronized cluster; analysis of playback continuity with different buffer sizes and with two clusters separated by a lag); 7. Performance Evaluation of Unsynchronized System; 8. Conclusion; Appendix A: Equation Derivation; Bibliography.
227

Measurement and application of many-to-one data flows.

January 2007 (has links)
Ho, Po Yee. Thesis (M.Phil.)--Chinese University of Hong Kong, 2007. Includes bibliographical references (leaves 77-81). Abstracts in English and Chinese. The record reproduces only the thesis front matter and table of contents: 1. Introduction; 2. Background and Related Work (link/path capacity; unutilized bandwidth; achievable bandwidth); 3. Measurement Methodology (PlanetLab measurement; FTP measurement); 4. Analysis of Measurement Data (per-flow achievable bandwidth; inter-flow correlation; intra-flow temporal correlation; intra-flow bandwidth variation; predictability of bandwidth properties; long-term flow properties); 5. A Mathematical Framework (bandwidth variations; bandwidth predictability; sensitivity analysis); 6. Predictive Buffering Algorithm (related work; system model; prediction algorithms for constant and variable bit-rate videos; parameter estimation); 7. Performance Evaluation (trace-driven simulation setup; video playback performance and buffering time over CBR and VBR videos); 8. Future Work (playback rate adaptation; sender selection algorithm; dynamic and predictive flow allocation; challenges in P2P applications); 9. Conclusion; Bibliography.
228

The Gratification Niches of Traditional and Digital Radio

Shelline, Don G 01 March 2016 (has links)
We live in an age where science fiction is quickly becoming science fact. Dick Tracy's two-way wrist TVs are Apple Watches, automated smart homes are plentiful, and cars can now drive themselves. In those cars, riders no longer need to depend on a deejay to choose their music for them; listeners build their own radio stations, on the spot, out of any music and conversation they want to hear, all at the touch of a button, fully connected to Wi-Fi, the Internet, and unlimited cell data plans. This research examines digital radio's impact on traditional radio in the current media environment. It first reviews the history of radio, specifically examining how radio reacted and adapted when a new form of competitive media entered the mass communication environment, and how radio fared in the face of that competition. The research then examines the uses and gratifications of both traditional and digital radio, analyzed through media niche theory. From this, we ascertain the niche breadth of each medium, how much overlap exists between the two, and which medium achieves niche superiority over the other in terms of the gratifications observed.
229

Dynamic features of neural activity in primary auditory cortex captured by an integrate-and-fire network model for auditory streaming

Mahat, Aarati 01 December 2018 (has links)
Past decades of auditory research have identified several acoustic features that influence the perceptual organization of sound, in particular the frequency of tones and the rate of presentation. One class of stimuli that has been intensively studied is sequences of tones that alternate in frequency. They are typically presented as repeating doublets ABAB... or repeating triplets ABA-ABA-..., where the symbol "-" stands for a gap of silence between triplet repeats. The duration of each tone or silence is typically tens to hundreds of milliseconds, and listeners hearing the sequence perceive either one auditory object ("stream integration") or two separate auditory objects ("stream segregation"). Animal studies have characterized single- and multi-unit neural activity and event-related local field potentials while systematically varying the frequency separation between tones (ΔF) or the presentation rate (PR). They found that B-tone responses in doublets were differentially suppressed with increasing PR and that B-tone responses in triplets decreased with larger ΔF. However, the neural mechanisms underlying these animal data have yet to be explained. In this study, we built an integrate-and-fire network model of the primary auditory cortex (AC) that accurately reproduced the experimental results. We then extended the model to account for basic spectro-temporal features of electrocorticography (ECoG) recordings from the posteromedial part of Heschl's gyrus (HGPM; the cortical area equivalent to the AC of monkeys), obtained from humans listening to sequences of ABA- triplets. Finally, we constructed a reduced firing-rate model of the proposed integrate-and-fire network and analyzed its dynamics as a function of its parameters. A large network of voltage-dependent leaky integrate-and-fire neurons (3600 excitatory, 900 inhibitory) was constructed to simulate neural activity in layers 3/4 of AC during streaming of tone triplets. Parameters describing synaptic and membrane properties were based on experimental data from earlier studies of AC. The network structure assumed a spatially dependent probability of connections and tonotopic organization, with subpopulations of neurons tuned to different frequencies along the tonotopic map. In silico recordings were performed during the presentation of long sequences of triplets and/or doublets. The network's output was derived with two types of measurement in mind: spiking activity of individual neurons and/or local populations of neurons, and local field potentials. The network's spiking activity reliably reproduced the reported data, including the dependence of B-tone responses in ABA- triplets on the stimulus parameter ΔF. Approximations of average evoked potentials (AEPs) from ECoG signals recorded at four depth contacts placed over human HGPM during auditory streaming of triplets were also obtained.
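As a point of reference for the model class used here, the sketch below simulates a single plain leaky integrate-and-fire neuron with Euler integration. It is an illustration only: the thesis network is far richer (voltage-dependent synapses, 3600 excitatory and 900 inhibitory neurons, tonotopic connectivity), and the parameter values shown are generic textbook assumptions rather than those fitted in the study.

```python
import numpy as np

def simulate_lif(input_current, dt=1e-4, tau_m=0.02, v_rest=-0.070,
                 v_thresh=-0.050, v_reset=-0.065, r_m=1e7):
    """Single leaky integrate-and-fire neuron, Euler-integrated.

    input_current: injected current in amperes, one value per time step dt.
    Returns the membrane-potential trace (V) and the indices of spike times.
    """
    v = v_rest
    trace, spikes = [], []
    for t, i_in in enumerate(input_current):
        v += (-(v - v_rest) + r_m * i_in) * dt / tau_m   # leaky integration
        if v >= v_thresh:                                # threshold crossing
            spikes.append(t)
            v = v_reset                                  # reset after a spike
        trace.append(v)
    return np.array(trace), spikes

# 200 ms trial: 50 ms silence, a 100 ms tone-like current step, 50 ms silence.
dt = 1e-4
drive = np.zeros(int(0.2 / dt))
drive[int(0.05 / dt):int(0.15 / dt)] = 2.5e-9            # 2.5 nA during the "tone"
v_trace, spike_times = simulate_lif(drive, dt=dt)
```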
230

Scalable and cost-effective framework for continuous media-on-demand.

Nguyen, Dang Nam Chi January 2006 (has links)
This dissertation was motivated by the exponential growth in the bandwidth capacity of the Internet, coupled with the immense growth in broadband adoption by the public, which has led to a wide variety of new online services. Chief among the emerging applications is the on-demand delivery of multimedia content to end users over the network. It is the "on-demand" aspect that has led to problems which, despite advances in hardware technology and network capacity, have hampered wide-scale adoption of multimedia delivery. The focus of this dissertation is to address these problems, namely scalability, cost-effectiveness, and the network quality of service needed for timely presentation of multimedia content. We proposed an architecture, which we refer to as "Delayed-Multicast", to address the scalability problem. The architecture introduces buffers within the network to reduce the demands on core network bandwidth and server load. A feasibility study was conducted through a prototype, demonstrating that such a system is within reach using cheap, commercial off-the-shelf (COTS) components and freely available system software such as Linux with real-time support. The introduction of buffers within the network raises the question of how to minimize buffer space. We developed an optimal algorithm for allocating buffer space in a single-level caching layout (i.e. only one buffer in the transmission path from the server to the end user). For multi-level network caching, we thoroughly examined different optimization problems from an algorithmic perspective, including how to minimize total system memory and how to minimize the maximum memory used per node. We proved that determining the optimal buffer allocation in many of these cases is an NP-complete problem. Consequently, we developed heuristics to handle multi-level caching and showed through simulations that they greatly help in minimizing buffer space and network bandwidth requirements. An important aspect of the heuristics is how to handle the case when the arrival times of client requests are not known a priori; for these "online" problems we also proposed heuristics that can significantly reduce overall system resource requirements. When the cost of buffer space is taken into account along with the cost of network bandwidth, a further optimization problem is how to minimize the total system cost. Here, too, we proposed heuristics, which simulations show can significantly reduce the total system cost. Beyond resource allocation in terms of buffer space and bandwidth, we also examined how to provision the necessary network quality of service on demand. Most current networks rely on best-effort delivery, which is ill suited to multimedia traffic. We proposed a solution that relies on the programmable network plane present in many current routers to dynamically alter the priority of flows in real time, and demonstrated the effectiveness of this flow prioritization on an actual Nortel router. Finally, we examined how to admit flows and achieve fair bandwidth allocation for end users within a Differentiated Services (DiffServ) network. DiffServ is an IETF standard that aims to provide a "better than best-effort" network in a scalable manner and is widely used, especially within a single autonomous domain, for prioritizing different classes of traffic. However, how to provide fair bandwidth allocation among competing flows remains an open problem. We proposed an edge-aware resource discovery loop, which, as the name suggests, sends packets to gather information about the internal state of the core network. With this information, we proposed a price-based admission control algorithm for use within the DiffServ network that allows fair admission, effective congestion control, and fair bandwidth allocation among different traffic flows.
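To make the Delayed-Multicast intuition concrete, the back-of-envelope sketch below estimates the buffer a single relay node would need in order to serve later-arriving requests from one upstream stream: under the simplifying assumption that the relay retains a contiguous tail of the stream, it must hold roughly the span of video between the earliest and latest request times. This is purely illustrative and is not the thesis's optimal single-level allocation algorithm or its multi-level heuristics; the function name and numbers are assumptions.

```python
def delayed_multicast_buffer_bytes(request_times_s, bitrate_bps):
    """Rough buffer requirement at one relay serving the same stream.

    A request arriving t seconds after the earliest one can be fed from
    the relay's buffer only while the relay still holds the last t seconds
    of the stream, so the buffer must span the largest such gap.
    """
    span_s = max(request_times_s) - min(request_times_s)
    return span_s * bitrate_bps / 8   # convert bits to bytes

# Three clients request the same 2 Mb/s stream at t = 0 s, 12 s and 30 s:
# the relay needs to retain about 30 s of video, i.e. roughly 7.5 MB.
print(delayed_multicast_buffer_bytes([0, 12, 30], 2_000_000))
```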
