About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
311

Fernmeldegeheimnis und Überwachung : Schutzbereiche und Eingriffe, Durchführung und Kosten / [Telecommunications secrecy and surveillance: scopes of protection and interference, implementation and costs]

Himberger, Simon. January 2004 (has links) (PDF)
University dissertation, Vienna, 2003.
312

Abgabeverfahren bei begrenzten Ressourcen wie z. B. Telekommunikationsfrequenzen unter wettbewerbsrechtlichen Gesichtspunkten [Allocation procedures for limited resources, such as telecommunications frequencies, from a competition-law perspective]

Berger, Thomas R. G. January 2008 (has links)
Also published as: Munich, University dissertation, 2007
313

P2P Live Video Streaming

Chatzidrossos, Ilias January 2010 (has links)
The ever increasing demand for video content directed the focus of research from traditional server-based schemes to peer-to-peer systems for video delivery. In such systems, video data is delivered to the users by utilizing the resources of the users themselves, leading to a potentially scalable solution. Users connect to each other, forming a p2p overlay network on top of the Internet, and exchange the video segments among themselves. The performance of a p2p system is characterized by its capability to deliver the video content to all peers without errors and with the smallest possible delay. This constitutes a challenge since peers dynamically join and leave the overlay and also contribute different amounts of resources to the system.

The contribution of this thesis lies in two areas. The first area is the performance evaluation of the most prominent p2p streaming architectures. We study the streaming quality in multiple-tree-based systems. We derive models to evaluate the stability of a multiple-tree overlay in dynamic scenarios and the efficiency of the data distribution over the multiple trees. Then, we study the data propagation in mesh-based overlays. We develop a general framework for the evaluation of forwarding algorithms in such overlays and use this framework to evaluate the performance of four different algorithms.

The second area of the thesis is a study of streaming in heterogeneous p2p overlays. The streaming quality depends on the aggregate resources that peers contribute to the system: low average contribution leads to low streaming quality. Therefore, maintaining high streaming quality requires mechanisms that either prohibit non-contributing peers or encourage contribution. In this thesis we investigate both approaches. For the former, we derive a model to capture the evolution of available capacity in an overlay and propose simple admission control mechanisms to avoid capacity drainage. For the latter, in our last work, we propose a novel incentive mechanism that maximizes the streaming quality in an overlay by encouraging highly contributing peers to offer more of their resources.
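The capacity-drainage idea behind such admission control can be illustrated with a minimal sketch (a hypothetical balance test, not the model derived in the thesis): a joining peer is admitted only if the overlay's aggregate upload capacity still covers the download demand of all peers.

```python
def can_admit(upload_capacities, new_peer_upload, stream_rate):
    """Admit a joining peer only if aggregate upload capacity still
    covers the total download demand (stream_rate per peer)."""
    total_upload = sum(upload_capacities) + new_peer_upload
    demand = stream_rate * (len(upload_capacities) + 1)
    return total_upload >= demand

# Upload capacities in units of the stream rate; one free-rider present
overlay = [1.5, 1.2, 0.0, 1.0]
admitted = can_admit(overlay, 0.0, 1.0)  # would a second free-rider drain capacity?
```

Under this test a peer uploading at least the stream rate is always admissible, while free-riders are admitted only while surplus capacity remains — the same intuition that the thesis formalizes as the evolution of available capacity.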
314

Simulation of wireless communications in underground tunnels

He, Shabai January 2012 (has links)
The newly released 4G wireless communication standard reminds us that higher transmission data rates and more reliable service are urgently required. However, fulfilling this demand can be problematic in a complex environment such as a mine. In this thesis, a characterization of underground mine tunnels, aimed at combating the effect of intersymbol interference, is presented.

A ray-tracing simulation method is applied to characterize the channel impulse response at different positions in an underground tunnel. From this channel impulse response, we can determine how intersymbol interference affects different wireless systems. Intersymbol interference occurs due to multipath propagation in a time-dispersive channel.

Adaptive equalization is the most effective way to compensate for intersymbol interference. An adaptive filter adapts its coefficients to compensate for the channel, so that the combination of filter and channel offers a flat frequency response and linear phase. The bit error rate performance without adaptive equalization is compared with that achieved using an equalizer. Moreover, adaptive equalization approaches using the RLS and LMS algorithms are compared with each other. The tradeoffs between convergence rate, computational cost, stability, and ensemble-averaged minimum squared error are analyzed to determine how to select the optimum adaptive equalizer.
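The LMS tap-update loop compared in the thesis can be sketched as follows; the two-path channel here is a hypothetical stand-in for a ray-traced tunnel impulse response, not the simulator actually used.

```python
import numpy as np

def lms_equalizer(received, desired, num_taps=11, mu=0.01):
    """LMS adaptive equalizer: adjust FIR taps by stochastic gradient
    descent on the squared error against a known training sequence."""
    w = np.zeros(num_taps)
    out = np.zeros(len(received))
    for n in range(num_taps - 1, len(received)):
        x = received[n - num_taps + 1:n + 1][::-1]  # newest sample first
        y = np.dot(w, x)
        e = desired[n] - y
        w += 2 * mu * e * x                         # steepest-descent tap update
        out[n] = y
    return w, out

# Hypothetical two-path channel: direct ray plus a delayed echo (ISI)
rng = np.random.default_rng(0)
symbols = rng.choice([-1.0, 1.0], size=5000)
channel = np.array([1.0, 0.4])
received = np.convolve(symbols, channel)[:len(symbols)]
w, out = lms_equalizer(received, symbols)
mse = np.mean((out[2000:] - symbols[2000:]) ** 2)   # residual ISI after training
```

RLS replaces the scalar step `mu` with a recursively updated inverse correlation matrix, converging in far fewer symbols at a higher per-iteration cost — exactly the tradeoff the thesis analyzes.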
315

Impairment Mitigation in High Capacity and Cost-efficient Optical Data Links

Iglesias Olmedo, Miguel January 2017 (has links)
The work presented in this thesis fits within the broader area of fiber-optic communications. This is an important area of research, as it provides a breeding ground for the present and future technologies supporting the Internet. Due to the ever-increasing bandwidth demands worldwide, the network infrastructures that make up the Internet are continuously being upgraded. This thesis identifies key segments of the Internet that are likely to become its bottleneck if new technology does not replace the current one: datacenter intra- and inter-connects, and metropolitan core area networks. In each category, we provide a comprehensive overview of the state of the art, identify key impairments affecting data transmission, and suggest solutions to overcome them.

For datacenter intra- and inter-connects, the key impairments are the limited bandwidth of electro-optic devices, and dispersion. Solutions attempting to tackle these impairments are constrained by cost and power consumption. The solution provided is MultiCAP, an alternative advanced modulation format that is more tolerant of dispersion and provides bandwidth-management features, while being flexible enough to sacrifice performance in order to gain simplicity. MultiCAP was the first advanced modulation format to exceed 100 Gb/s for a datacenter interconnect, in 2013, and it set the world record for data transmission over a single VCSEL in a short-reach data link in 2014.

On metro-core networks, the challenge is to efficiently mitigate the carrier-induced frequency noise generated by modern semiconductor lasers. We point out that, when such lasers are employed, the commonly used laser linewidth fails to estimate system performance, and we propose an alternative figure of merit that we name "effective linewidth". We derive this figure of merit analytically, explore it by numerical simulations, and validate our results experimentally by transmitting 28 Gbaud DP-16QAM over an optical link.
GRIFFON
316

Sustainable Throughput Measurements for Video Streaming

Nutalapati, Hima Bindu January 2017 (has links)
With the increase in demand for video streaming services on handheld mobile terminals with limited battery life, it is important to maintain the user Quality of Experience (QoE) while taking resource consumption into consideration. The goal is to offer as good a quality as feasible, avoiding as much user annoyance as possible, which means delivering the video without uncontrollable quality distortions. This is possible when an optimal (or desirable) throughput value is chosen, since exceeding that threshold means entering a region of unstable QoE. Hence, the concept of QoE-aware sustainable throughput is introduced as the maximal value of the desirable throughput that avoids disturbances in the QoE due to delivery issues, or keeps them at an acceptable minimum.

The thesis aims at measuring sustainable throughput values when video streams of different resolutions are streamed from a server to a mobile client over wireless links, in the presence of network disturbances (packet loss and delay). The video streams are collected at the client side for quality assessment, and the maximal throughput at which the QoE problems can still be kept at a desired level is determined. Scatter plots were generated for the individual opinion scores and their corresponding throughput values for each disturbance case, and regression analysis was performed to find the best fit for the observed data. Logarithmic, exponential, linear, and power regressions were considered in this thesis. The R-squared value was calculated for each regression model, and the model with the R-squared value closest to 1 was determined to be the best fit; the power and logarithmic models had the R-squared values closest to 1. Better quality ratings were observed for the low-resolution videos in the presence of packet loss and delay for the considered test cases. The QoE disturbances can thus be kept at a desirable level for the low-resolution videos; among the test cases considered, the 360p video was the most resilient to high delay and packet loss values and had the best opinion scores. Hence, the throughput is sustainable at this threshold.
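The model-selection step described above can be sketched as follows; the opinion scores and throughput values here are hypothetical, used only to show how the R-squared values of the four candidate models are compared.

```python
import numpy as np

def r_squared(y, y_hat):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1 - ss_res / ss_tot

def fit_models(throughput, mos):
    """Fit linear, logarithmic, exponential and power models by least
    squares and return each model's R^2 on the same data."""
    fits = {}
    # linear: mos = a*t + b
    a, b = np.polyfit(throughput, mos, 1)
    fits["linear"] = r_squared(mos, a * throughput + b)
    # logarithmic: mos = a*ln(t) + b
    a, b = np.polyfit(np.log(throughput), mos, 1)
    fits["logarithmic"] = r_squared(mos, a * np.log(throughput) + b)
    # exponential: mos = b*exp(a*t)  (linear in semi-log space)
    a, logb = np.polyfit(throughput, np.log(mos), 1)
    fits["exponential"] = r_squared(mos, np.exp(logb) * np.exp(a * throughput))
    # power: mos = b * t^a  (linear in log-log space)
    a, logb = np.polyfit(np.log(throughput), np.log(mos), 1)
    fits["power"] = r_squared(mos, np.exp(logb) * throughput ** a)
    return fits

# Hypothetical mean opinion scores versus throughput (Mbit/s)
throughput = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
mos = np.array([1.8, 2.6, 3.3, 4.0, 4.5])
scores = fit_models(throughput, mos)
best = max(scores, key=scores.get)  # model with R^2 closest to 1
```

Note that the exponential and power fits are linearized by fitting in (semi-)log space, so their R-squared values should be evaluated on the original MOS scale, as done here, to be comparable with the linear and logarithmic fits.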
317

Wireless system design : NB-IoT downlink simulator

Krasowski, Piotr, Troha, Douglas January 2017 (has links)
The newly defined NB-IoT standard currently lacks a toolkit and simulator. In order to develop algorithms for this new standard, reference channels and signals are needed during tests. MATLAB is commonly used for testing LTE signals, and therefore the toolkit was developed in this environment. The toolkit focuses primarily on the Layer 1 functionality of NB-IoT: grid generation, encoding, rate matching, and modulation of the channels. The simulator focuses on testing the developed toolkit in a virtual LTE NB-IoT environment that attempts to emulate a base station and a terminal. The path followed is scheduling, channel processing, grid generation, QPSK and OFDM modulation through a modeled channel, OFDM demodulation, channel estimation, equalisation, QPSK demodulation, and reversal of the channel processing. The simulator primarily tests the NPDSCH channel implementations. Measurements of bit error and block error rates were made, and it was concluded that they follow the expected trends. More testing is required to validate the remaining channels. A sector equaliser and an interpolating equaliser were tested by measuring the block error rate and checking constellation diagrams, and it was concluded that the performance of the interpolating equaliser is more consistent. In order to improve the equalisation further, the noise estimation must be reworked.
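The QPSK mapping and hard-decision demapping stage of such a chain can be sketched as below (an illustrative Gray-coded mapping in the style of the LTE modulation mapper, not the MATLAB toolkit developed in the thesis; the noise level is arbitrary).

```python
import numpy as np

def qpsk_modulate(bits):
    """Map bit pairs to Gray-coded QPSK symbols:
    first bit -> I sign, second bit -> Q sign, unit average power."""
    b = bits.reshape(-1, 2)
    i = 1 - 2 * b[:, 0]
    q = 1 - 2 * b[:, 1]
    return (i + 1j * q) / np.sqrt(2)

def qpsk_demodulate(symbols):
    """Hard-decision demapping back to bits."""
    bits = np.empty(2 * len(symbols), dtype=int)
    bits[0::2] = (symbols.real < 0).astype(int)
    bits[1::2] = (symbols.imag < 0).astype(int)
    return bits

rng = np.random.default_rng(1)
tx_bits = rng.integers(0, 2, 2048)
tx = qpsk_modulate(tx_bits)
# Mild complex AWGN; at this SNR no bit errors are expected
noise = (rng.normal(size=tx.shape) + 1j * rng.normal(size=tx.shape)) * 0.1
ber = np.mean(qpsk_demodulate(tx + noise) != tx_bits)
```

In the full simulator this stage sits between channel processing/grid generation on the transmit side and equalisation on the receive side, with OFDM (de)modulation wrapped around it.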
318

Power Profiling of different Heterogeneous Computers

Atla, Prashant January 2017 (has links)
Context: In the present world, there is an increase in the usage of communication services. The growth in usage and in services relying on the communication network has brought an increase in energy consumption for all the resources involved, such as computers and other networking components. Energy consumption has become another important metric, so there is a need for efficient networking services in various fields, which can be obtained by using efficient networking components such as computers. For that purpose we have to know the energy usage behavior of each component. Similarly, as the use of large data centers grows, there is a huge requirement for computation resources. For efficient use of these resources we need measurements of each component of the system and its contribution to the total power consumption of the system. This can be achieved by power profiling different heterogeneous computers to estimate and optimize resource usage.

Objectives: In this study, we investigate the power profiles of different heterogeneous computers at the level of each system component, using a predefined workload. The total power consumption of each system component is measured and evaluated using the Open Energy Monitor (OEM).

Methods: In order to perform the power profiling, an experimental test bed is implemented. Experiments with different workloads on each component are conducted on all the computers. The power of each system under test (SUT) is measured using the OEM, which is connected to each SUT.

Results: From the results obtained, the power profiles of the different SUTs are tabulated and analyzed. The profiling is done at component level under different workload scenarios for four different heterogeneous computers. From the results and analysis it can be stated that the power consumed by each component of a computer varies with its configuration. From the results we also evaluate the superposition property, i.e., whether the per-component power increases add up to the power drawn under combined load.
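The superposition check can be sketched as a simple balance between idle power and per-component increments; the wattages below are hypothetical, not measurements from the thesis.

```python
def superposition_estimate(idle_power, component_deltas):
    """Estimate total power under combined load as idle power plus the
    sum of each component's measured increase over idle (superposition)."""
    return idle_power + sum(component_deltas.values())

# Hypothetical per-component measurements for one SUT (watts)
idle = 35.0
deltas = {"cpu": 28.0, "memory": 4.5, "disk": 6.0, "network": 2.5}
estimated = superposition_estimate(idle, deltas)

# Hypothetical measurement under combined load; a small relative error
# would support the superposition property for this SUT
measured_combined = 74.0
error_pct = 100 * abs(estimated - measured_combined) / measured_combined
```

If the relative error stays small across workloads, per-component profiles can be composed to predict total consumption without measuring every workload combination.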
319

Analyzing VoIP connectivity and performance issues

Sadaoui, Mehenni January 2019 (has links)
The appearance of Voice over IP (VoIP) revolutionized the telecommunications world. This technology delivers voice communications over Internet Protocol (IP) networks instead of the public switched telephone network (PSTN); calls can be made between two VoIP phones as well as between a VoIP phone and an analog phone connected to a VoIP adapter [1]. The use of this technology gives access to more communication options compared to conventional telephony, but users face various problems, mostly connectivity and performance issues related to factors such as latency and jitter [2]. These factors directly affect call quality and can result in choppy voice, echoes, or even call failure. The main objective of this work was to create a tool for automatic analysis and evaluation of packet traces: identifying connectivity and performance issues, reconstructing the audio streams, and estimating the call quality. The results show that the objectives stated above are met: a tool that automatically analyzes VoIP calls was created. It takes non-encrypted pcap files as input and returns a list of calls with different parameters related to connectivity and performance, such as delay and jitter; it also reconstructs the audio of every VoIP stream and plots the waveform and spectrum of the reconstructed audio for evaluation purposes.
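A standard way to quantify jitter from a packet trace is the RTP interarrival jitter of RFC 3550: a running estimate of transit-time variation, smoothed with gain 1/16. The sketch below illustrates that computation (the thesis tool's exact metric may differ); the packet timings are hypothetical.

```python
def rtp_jitter(arrival_times, rtp_timestamps):
    """Interarrival jitter per RFC 3550, section 6.4.1: for each packet,
    take the transit-time difference versus the previous packet and
    fold it into a running estimate with gain 1/16."""
    jitter = 0.0
    prev_transit = None
    for arrival, ts in zip(arrival_times, rtp_timestamps):
        transit = arrival - ts
        if prev_transit is not None:
            d = abs(transit - prev_transit)
            jitter += (d - jitter) / 16.0
        prev_transit = transit
    return jitter

# Hypothetical 20 ms voice packets; the third packet arrives 5 ms late
arrivals   = [0.0, 20.0, 45.0, 60.0, 80.0]   # ms, wall clock
timestamps = [0.0, 20.0, 40.0, 60.0, 80.0]   # ms, RTP clock
j = rtp_jitter(arrivals, timestamps)
```

A single late packet perturbs the estimate for several subsequent packets before the 1/16 smoothing decays it, which is why sustained jitter, not an isolated delay spike, is what degrades call quality.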
320

Performance Analysis of the Impact of Vertical Scaling on Application Containerized with Docker : Kubernetes on Amazon Web Services - EC2

Midigudla, Dhananjay January 2019 (has links)
Containers are widely used as a base technology to package applications, and the microservice architecture is gaining popularity for deploying large-scale applications, with containers running different aspects of the application. Due to the dynamic load on a service, a need arises to scale compute resources up or down for the containerized applications in order to maintain the performance of the application.

Objectives: To evaluate the impact of vertical scaling on the performance of a containerized application deployed with Docker containers and Kubernetes, including identification of the performance metrics that are most affected, and hence to characterize any eventual negative effect of vertical scaling.

Method: A literature study on Kubernetes and Docker containers, followed by a proposed vertical-scaling solution that can add or remove compute resources such as CPU and memory for the containerized application.

Results and Conclusions: Latency and connect times were the performance metrics of the containerized application that were analyzed. From the obtained results, it was concluded that vertical scaling has no significant impact on the performance of a containerized application in terms of latency and connect times.
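The core decision in a vertical-scaling solution can be sketched as a simple utilization-driven policy (a hypothetical illustration with made-up thresholds, not the mechanism implemented in the thesis): grow the container's resource request when measured utilization is high, shrink it when utilization is low.

```python
def vertical_scale(current_request_millicores, utilization,
                   low=0.3, high=0.8, step=1.25):
    """Naive vertical-scaling policy for a container's CPU request:
    scale up above the high-utilization threshold, scale down below
    the low one, otherwise leave the request unchanged."""
    if utilization > high:
        return int(current_request_millicores * step)
    if utilization < low:
        return int(current_request_millicores / step)
    return current_request_millicores

# e.g. a container requesting 400m CPU at 90% utilization is scaled up
new_request = vertical_scale(400, 0.90)
```

In Kubernetes, applying such a new request to a running pod traditionally required recreating the container, which is one reason measuring the performance impact of vertical scaling (as this thesis does) matters.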
