451

Distribution and Individual Watermarking of Streamed Content for Copy Protection

Stenborg, Karl-Göran January 2005 (has links)
Media such as movies and images are nowadays produced and distributed digitally. It is usually simple to make copies of digital content, so illegal pirate copies can be duplicated and distributed in large quantities. One way to deter authorized content receivers from illegally redistributing the media is watermarking. If the digital media contains an individual watermark, a receiver who pirates and redistributes it also distributes his identity, so a located pirate copy can be traced back to the pirate. The watermarked media should otherwise be indistinguishable from the original media content. To distribute media content, scalable transmission methods such as broadcast and multicast should be used; this way the distributor only needs to transmit the media once to reach all his authorized receivers. But since the same content is distributed to all receivers, the requirement of individual watermarks seems contradictory. In this thesis we show how individually watermarked media content can be transmitted in a scalable way. Known methods are reviewed and a new method is presented. The new method is independent of the type of distribution used. A system with robust watermarks that are difficult to remove is described. Only small parts of the media content are needed to identify the pirates, and the method gives only a small data expansion compared to distribution of non-watermarked media. We also show how information theory tools can be used to expand the amount of data in the watermarks given a specific size of the media used for the watermarking. These tools can also be used to identify parts of the watermark that have been changed by deliberate deterioration of the watermarked media made by pirates. Report code: LiU-Tek-Lic-2005:67.
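The fingerprinting idea behind this abstract can be sketched with basic additive spread-spectrum watermarking. This is a generic illustration, not the thesis's scalable-distribution scheme, and all parameters (mark length, embedding strength, user IDs) are made up:

```python
import random

def make_watermark(user_id: int, length: int = 64) -> list:
    """Derive a pseudo-random +/-1 sequence from a receiver's identity."""
    rng = random.Random(user_id)
    return [rng.choice((-1, 1)) for _ in range(length)]

def embed(samples, mark, strength=2.0):
    """Additive spread-spectrum embedding of the watermark."""
    return [s + strength * m for s, m in zip(samples, mark)]

def correlate(samples, mark):
    """Correlation detector: a high score means this user's mark is present."""
    return sum(s * m for s, m in zip(samples, mark)) / len(mark)

rng = random.Random(0)
media = [rng.gauss(0, 1) for _ in range(64)]    # stand-in media samples
pirate_copy = embed(media, make_watermark(42))  # receiver 42 leaks a copy

# Correlating the found copy against every user's mark singles out
# the guilty receiver: only the embedded mark produces a high score.
scores = {uid: correlate(pirate_copy, make_watermark(uid)) for uid in (7, 42, 99)}
```

The detector survives moderate noise or deliberate deterioration because the score degrades gracefully rather than failing outright.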
452

Beyond Pulse Position Modulation : a Feasibility Study

Gustafsson, Danielle January 2023 (has links)
This thesis presents a feasibility study of the beyond pulse position modulation (BPPM) error-correction protocol. The BPPM protocol was invented at Ericsson AB and describes a modulation encoding that uses vertically and horizontally polarized single photons for optical transmission and error correction. The thesis work is a mixture of experimental laboratory work and theoretical software simulations intended to mimic actual optical fiber transmission. One aspect of the project involves designing the optical communication system used to evaluate the probabilities of transmission errors in the form of false detections and losses of light. The BPPM protocol is implemented and used for software-simulated error generation and correction. With the available laboratory setup as the point of reference, error correction using the BPPM protocol is studied with pulses of light containing more than one photon. The results show that the BPPM protocol can recover some of the information that is lost during optical fiber transmission. Factors such as the size of the codewords, the number of photons per pulse, and the detection efficiency of the single-photon detector (SPD) used have a significant impact on the success of the transmission.
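The BPPM protocol itself is not publicly documented, but the underlying idea of pulse position modulation, where a lost photon shows up as a correctable erasure rather than a wrong symbol, can be sketched as follows (slot count and loss probability are illustrative):

```python
import random

SLOTS = 4  # 4-PPM: each frame of 4 time slots carries log2(4) = 2 bits

def ppm_encode(symbol: int) -> list:
    """Place a single pulse in slot `symbol` of an otherwise empty frame."""
    frame = [0] * SLOTS
    frame[symbol] = 1
    return frame

def ppm_decode(frame):
    """Return the symbol, or None (an erasure) if no single pulse is seen."""
    return frame.index(1) if sum(frame) == 1 else None

def lossy_channel(frame, loss_prob, rng):
    """Model fiber loss / detector inefficiency: each pulse may vanish."""
    return [p if rng.random() > loss_prob else 0 for p in frame]

rng = random.Random(1)
sent = [rng.randrange(SLOTS) for _ in range(1000)]
received = [ppm_decode(lossy_channel(ppm_encode(s), 0.1, rng)) for s in sent]

erasures = received.count(None)
# Lost pulses become erasures, which an outer code can then correct;
# erasures are cheaper to correct than undetected symbol errors.
```

In this toy model a surviving pulse is always decoded correctly; real systems must also handle dark counts and false detections, as the thesis discusses.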
453

Analysis and Measurement of Key Performance Indicators for MIMO Antennas

Kynman, Ossian January 2023 (has links)
Multiple-input multiple-output (MIMO) is a wireless communication technique in which antenna arrays at the receiver and transmitter utilize signal multipath propagation to increase data throughput capacity. The unique benefits MIMO provides have, over the last 20 years, led to a steady increase in usage in both Wi-Fi and mobile networks. Predicting the performance of an antenna array designed for MIMO is more difficult than predicting the performance of a single antenna, because the increased performance derives from the processed combination of information from each antenna element. To determine the benefit that additional antenna elements can provide to a wireless system, the statistical correlation between the signals received from all antenna elements needs to be evaluated. This correlation is expressed with the correlation coefficient $\rho$. The correlation coefficient may be estimated from the far-field radiation pattern measured in an anechoic chamber, or measured in the statistically isotropic and homogeneous radiation environment provided by a reverberation chamber. However, Blanch et al. (2003) proposed a much simpler method to estimate the correlation coefficient using a Vector Network Analyzer (VNA) to measure scattering parameters (S-parameters), assuming perfect antenna efficiency. In 2005, Hallbjörner proposed a modified version of the estimate that includes the effect of antenna efficiency. This project aimed to measure and compare the results from the two types of chamber tests along with the two S-parameter-based approximation methods mentioned. To accomplish this, three different antenna arrays with four elements each, with varying efficiency and mutual coupling, were designed and manufactured. The antenna arrays were then measured in an anechoic chamber and in a reverberation chamber, and had their S-parameters determined with a VNA.
From the measurements it was found that the results from both types of chamber tests agree well, indicating that both are viable methods of signal correlation estimation. The S-parameter method proposed by Blanch was found to be inaccurate for the antennas tested, likely due to low radiation efficiencies. The approximation method proposed by Hallbjörner produced better results, but requires the efficiencies of the antennas, which are generally not simple to determine. In conclusion, S-parameter measurements, which are commonly used by the wireless industry, do not provide valid estimates of the MIMO performance of antenna arrays unless they are complemented with measurements of antenna efficiency.
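The two-port S-parameter estimate attributed to Blanch et al. (2003) is compact enough to sketch directly. Note that this form assumes lossless antennas, which is exactly the limitation the measurements above expose; Hallbjörner's 2005 variant additionally requires the measured radiation efficiencies and is omitted here:

```python
def ecc_blanch(s11: complex, s12: complex, s21: complex, s22: complex) -> float:
    """Envelope correlation coefficient of a 2-port antenna array computed
    from S-parameters alone (Blanch et al. 2003), implicitly assuming
    100% radiation efficiency."""
    num = abs(s11.conjugate() * s12 + s21.conjugate() * s22) ** 2
    den = ((1 - abs(s11) ** 2 - abs(s21) ** 2)
           * (1 - abs(s22) ** 2 - abs(s12) ** 2))
    return num / den

# A well-matched, weakly coupled pair yields a low envelope correlation
# (example values are illustrative, not from the measured arrays).
rho = ecc_blanch(0.1 + 0j, 0.05 + 0j, 0.05 + 0j, 0.1 + 0j)
```

For lossy antennas the denominator overestimates radiated power, which is one way to see why the pure S-parameter estimate breaks down at low efficiency.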
454

The Deeper Investigation of Smart Healthcare Systems using 5G Security

Ananthula, Bindu, Budde, Niharika January 2023 (has links)
A promising approach to raising the caliber and accessibility of healthcare services is the development of smart healthcare systems. However, the union of wireless networks and smart medical devices has created additional security issues, such as the possibility of identity theft, data breaches, and denial-of-service attacks. These vulnerabilities emphasize the importance of creating a safe and dependable smart healthcare system that can safeguard patient data and guarantee the confidentiality of private medical information. This study suggests adopting 5G security standards to address the security issues of smart healthcare systems. The STRIDE threat modeling approach, which comprises six threat categories (spoofing, tampering, repudiation, information disclosure, denial of service, and elevation of privilege), is used to investigate potential threats in smart healthcare systems. The report suggests using strong encryption protocols such as AES-CCM and ECDH between smart healthcare equipment, together with 5G-AKA, to reduce these potential threats. The proposed approach showed appreciable improvements in data security and privacy. According to the findings, 5G security standards can be used to efficiently reduce security risks in smart healthcare systems and establish a trustworthy and secure platform for delivering medical services. The study emphasizes the importance of including strong security controls in smart healthcare systems to secure patient information and raise the overall standard of treatment.
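The primitives the study names can be combined as sketched below, using the third-party Python `cryptography` package: an ephemeral ECDH exchange followed by AES-CCM authenticated encryption of a sensor reading. The device/gateway names, the HKDF key-derivation step, and the payload are illustrative additions, not part of the 5G-AKA procedure itself:

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives.ciphers.aead import AESCCM

# Ephemeral key pairs for two hypothetical endpoints.
device_key = ec.generate_private_key(ec.SECP256R1())
gateway_key = ec.generate_private_key(ec.SECP256R1())

# ECDH: both sides derive the same shared secret from the peer's public key.
shared = device_key.exchange(ec.ECDH(), gateway_key.public_key())
assert shared == gateway_key.exchange(ec.ECDH(), device_key.public_key())

# Stretch the raw secret into a 128-bit AES-CCM key.
key = HKDF(algorithm=hashes.SHA256(), length=16, salt=None,
           info=b"illustrative-session").derive(shared)

# AES-CCM gives confidentiality plus integrity for a sensor reading,
# with the device identity bound in as associated data.
aesccm = AESCCM(key)
nonce = os.urandom(13)          # CCM accepts 7- to 13-byte nonces
ciphertext = aesccm.encrypt(nonce, b"heart_rate=72", b"device-42")
plaintext = aesccm.decrypt(nonce, ciphertext, b"device-42")
```

Tampering with the ciphertext or the associated data makes `decrypt` raise, which is the integrity property that matters against the tampering and spoofing threats in the STRIDE model.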
455

Tolerance Analysis of a Multi-mode Ceramic Resonator

Naeem, Khawar January 2013 (has links)
No description available.
456

Towards Performance Evaluation and Future Applications of eBPF

Gunturu, Manideep, Aluguri, Rohan January 2024 (has links)
Extended Berkeley Packet Filter (eBPF) is an instruction set and an execution environment inside the Linux kernel. eBPF improves flexibility for data processing and is realized via a virtual machine featuring both a Just-In-Time (JIT) compiler and an interpreter running in the kernel. It executes custom eBPF programs supplied by the user, effectively allowing user-defined functionality to run inside the kernel. eBPF has received widespread adoption by companies such as Facebook and Netflix, and by academia, for a wide range of application domains. eBPF can be used to program the eXpress Data Path (XDP), a kernel network layer that processes packets close to the Network Interface Card (NIC) for fast packet processing. In this thesis, eBPF with XDP, and iptables, are considered as network functions (NFs) implemented in a virtual machine (VM) for packet filtering. The traffic source (source VM) and traffic sink (destination VM) are in the same subnet. The aim of this thesis is to understand and investigate the implementation of NFs in VMs and to analyze performance metrics. In VirtualBox, VMs are created to implement the NFs. The results are obtained for the measurements essential to the performance evaluation of the NFs and are presented in graphs.
457

Relaying Protocols for Wireless Networks

Nasiri Khormuji, Majid January 2008 (has links)
Motivated by current applications in multihop transmission and ad hoc networks, the classical three-node relay channel, consisting of a source-destination pair and a relay, has received significant attention. One of the crucial aspects of the relay channel is the design of proper relaying protocols, i.e., how the relay should take part in the transmission. This thesis addresses the problem and provides a partial answer to it. We propose and study two novel relaying protocols. The first is based on constellation rearrangement (CR) and is suitable for higher-order modulation schemes. With CR, the relay uses a bit-symbol mapping that is different from the one used by the source. We find the optimal bit-symbol mappings for both the source and the relay and the associated optimal detectors, and show that the improvement over conventional relaying with Gray mapping at the source and the relay can amount to a power gain of several dB. This performance improvement comes at no additional power or bandwidth expense, and at virtually no increase in complexity. The second is a half-duplex decode-and-forward (DF) relaying scheme based on partial repetition (PR) coding at the relay. With PR, if the relay decodes the received message successfully, it re-encodes the message using the same channel code as the one used at the source, but retransmits only a fraction of the codeword. We analyze the proposed scheme and optimize the cooperation level (i.e., the fraction of the message that the relay should transmit). We compare our scheme with conventional repetition, in which the relay retransmits the entire decoded message, with parallel coding, and additionally with dynamic DF. The finite-SNR analysis reveals that the proposed partial repetition can provide a gain of several dB over conventional repetition.
Surprisingly, the proposed scheme is able to achieve the same performance as parallel coding for some relay network configurations, but at a much lower complexity. Additionally, the thesis treats the problem of resource allocation for collaborative transmit diversity using DF protocols with different types of CSI feedback at the source. One interesting observation is that joint power-bandwidth allocation provides only marginal gain over relaying protocols with optimal bandwidth allocation. / QC 20101119
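The partial repetition idea can be illustrated with a toy binary-symmetric-channel model. This sketch assumes error-free decoding at the relay and lets the destination simply take the relay's copy of the repeated bits, a crude stand-in for the optimized combining analyzed in the thesis; all channel error probabilities are made-up numbers:

```python
import random

def bsc(bits, p, rng):
    """Binary symmetric channel: flip each bit with probability p."""
    return [b ^ (rng.random() < p) for b in bits]

def receive(msg, alpha, p_sd, p_rd, rng):
    """Destination's estimate: the relay (assumed to decode error-free)
    retransmits only the first fraction `alpha` of the bits over its
    cleaner link, and the destination trusts those bits."""
    direct = bsc(msg, p_sd, rng)        # noisy source -> destination copy
    k = int(alpha * len(msg))
    relayed = bsc(msg[:k], p_rd, rng)   # relay resends a partial repetition
    return relayed + direct[k:]

rng = random.Random(2)
msg = [rng.randrange(2) for _ in range(10_000)]

def errors(est):
    return sum(a != b for a, b in zip(msg, est))

no_relay = errors(receive(msg, 0.0, 0.05, 0.001, rng))
half_rep = errors(receive(msg, 0.5, 0.05, 0.001, rng))
# Repeating even half the codeword sharply reduces residual bit errors.
```

The cooperation level `alpha` is exactly the knob the thesis optimizes: full repetition (`alpha = 1`) wastes the relay's airtime, while small `alpha` leaves too many bits on the noisy direct link.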
458

Bit-Rate Allocation, Scheduling, and Statistical Multiplexing for Wireless Video Streaming

Vukadinovic, Vladimir January 2008 (has links)
Due to the scarcity of wireless resources, efficient resource allocation is essential to the success of cellular systems. With the proliferation of bandwidth-hungry multimedia applications with diverse traffic characteristics and quality-of-service requirements, resource management is becoming particularly challenging. In this thesis, we address some of the key link-layer resource allocation mechanisms that affect the performance of video streaming in cellular systems: bit-rate allocation, opportunistic scheduling, and statistical multiplexing. The bit-rate allocation problem involves the distortion-optimal assignment of source, channel, and pilot data rates under link capacity constraints. We derive an analytical model that captures the video distortion as a function of these data rates and, based on it, study various bit-rate allocation strategies. The opportunistic scheduling problem addresses the throughput-optimal assignment of time slots among users with diverse channel conditions under certain fairness constraints. We focus on two aspects of opportunistic scheduling: the performance of delay-constrained streaming applications and possible extensions of the opportunistic concepts to multicast scenarios. Finally, statistical multiplexing is a resource-efficient method for smoothing out the extreme burstiness of video streams. We study possible statistical multiplexing gains of H.264 video streams in the context of the E-MBMS architecture. / QC 20101126
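The statistical multiplexing gain mentioned above is easy to demonstrate: provisioning for the peak of the aggregate requires far less capacity than provisioning each bursty stream for its own peak. The rate distribution below is purely illustrative, not an H.264 traffic model:

```python
import random

rng = random.Random(3)
STREAMS, FRAMES = 20, 1000

# Bursty per-frame bit rates (Mbit/s, illustrative) for each stream.
rates = [[max(0.0, rng.gauss(2.0, 1.0)) for _ in range(FRAMES)]
         for _ in range(STREAMS)]

# Dedicated channels must each cover their own stream's peak rate;
# a shared channel only needs to cover the peak of the aggregate,
# because independent bursts rarely coincide.
sum_of_peaks = sum(max(stream) for stream in rates)
peak_of_sum = max(sum(frame) for frame in zip(*rates))

mux_gain = sum_of_peaks / peak_of_sum   # > 1 means capacity saved
```

The gain grows with the number of independent streams, which is why the multicast setting studied in the thesis is a natural fit for it.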
459

The role of fault management in the embedded system design

Vitucci, Carlo January 2024 (has links)
In the last decade, the world of telecommunications has seen the value of services definitively affirmed and the value of connectivity decline. This change of pace in the use of the network (and available hardware resources) has led to continuous, unlimited growth in data traffic, increased incomes for service providers, and a constant erosion of operators' incomes for voice and Short Message Service (SMS) traffic. The change in mobile service consumption is evident to operators. The market today is in the hands of over-the-top (OTT) media content delivery companies (Google, Meta, Netflix, Amazon, etc.), and the fifth generation of mobile networks (5G), the latest generation of mobile architecture, is nothing other than how operators can invest in system infrastructure to participate in the prosperous service business. With the advent of 5G, the worlds of cloud and telecommunications have found their meeting point, paving the way for new infrastructures and services, such as smart cities, industry 4.0, industry 5.0, and Augmented Reality (AR)/Virtual Reality (VR). People, infrastructures, and devices are connected to provide services that we even struggle to imagine today, but a highly interconnected system requires high levels of reliability and resilience. Hardware reliability has increased since the 1990s. However, it is equally correct to mention that the introduction of new technologies in the nanometer domain and the growing complexity of on-chip systems have made fault management critical to guarantee the quality of the service offered to the customer and the sustainability of the network infrastructure. In this thesis, our first contribution is a review of the fault management implementation framework for the radio access network domain. Our approach introduces a holistic vision of fault management in which there is increasingly more significant attention to the recovery action, the crucial target of the proposed framework.
A new contribution underlines the attention toward the recovery target: we revisited the taxonomy of faults in mobile systems to enhance the result of the recovery action, which, in our opinion, must be propagated between the different layers of an embedded system (hardware, firmware, middleware, and software). The practical adoption of the new framework and the new taxonomy allowed us to make a unique contribution of the thesis: the proposal of a new algorithm for managing system memory errors, both temporary (soft) and permanent (hard). The holistic vision of error management we introduce in this thesis involves hardware that proactively manages faults. An efficient implementation of fault management is only possible if the hardware design considers error-handling techniques and methodologies. Another contribution of this thesis is the definition of the fault management requirements for the RAN embedded system hardware design. Another primary function of the proposed fault management framework is fault prediction. Recognizing error patterns means allowing the system to react in time, even before the error condition occurs, or identifying the topology of the error to implement more targeted and, therefore, more efficient recovery actions. The operating temperature is always a critical characteristic of embedded radio access network systems. Base stations must be able to work in very different temperature conditions. However, the working temperature also directly affects the probability of error for the system. In this thesis, we have also contributed a machine-learning algorithm for predicting the working temperature of base stations in radio access networks: a first step towards a more sophisticated implementation of error prevention and prediction.
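The temperature-prediction contribution is described only at a high level; as a stand-in, a least-squares fit of board temperature against traffic load on synthetic data shows the shape such a predictor might take (all coefficients and noise levels are made up):

```python
import random

# Synthetic history: board temperature rises roughly linearly with load.
rng = random.Random(4)
load = [rng.uniform(0.0, 1.0) for _ in range(200)]
temp = [35.0 + 20.0 * x + rng.gauss(0.0, 1.0) for x in load]

# Ordinary least-squares fit: the simplest possible learned predictor.
n = len(load)
mean_x, mean_y = sum(load) / n, sum(temp) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(load, temp))
         / sum((x - mean_x) ** 2 for x in load))
intercept = mean_y - slope * mean_x

predicted = intercept + slope * 0.8   # forecast temperature at 80% load
```

A production predictor would fold in more features (ambient temperature, fan state, traffic mix) and a richer model, but the pipeline of fitting on history and forecasting ahead of the error condition is the same.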
460

Das Mobiltelefon im Spiegel fiktionaler Fernsehserien : symbolische Modelle der Handyaneignung /

Karnowski, Veronika. January 2008 (has links)
Also: Zürich, Universität, dissertation, 2008.
