About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
251

Determining recording time of digital sound recordings using the ENF criterion / Tidsbestämning av digitala ljudinspelningar med hjälp av ENF-kriteriet

Andersson, Fredrik January 2009 (has links)
In forensic investigations, verification of digital recordings is an important aspect. There are numerous methods to verify the authenticity of recordings, but it is difficult to determine when the media was recorded. By studying the electrical network frequency (ENF), one can find a unique signature and then match the recording to this signature. By matching a recorded signal against a database that contains all necessary information, one can find the time when the recording was made.
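As an illustration of the matching step described in the abstract, the frequency track extracted from a recording can be slid along the database's ENF track, taking the offset with the smallest squared mismatch as the estimated recording time. This is a minimal Python sketch under that least-squares assumption, not code from the thesis:

```python
import numpy as np

def match_enf(recording_enf, database_enf):
    """Slide the recording's extracted ENF track along the database
    track and return the offset (in samples) with the smallest
    squared mismatch, i.e. the estimated recording time."""
    r = np.asarray(recording_enf, dtype=float)
    d = np.asarray(database_enf, dtype=float)
    n = len(r)
    errors = [np.sum((d[i:i + n] - r) ** 2)
              for i in range(len(d) - n + 1)]
    return int(np.argmin(errors))
```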
252

Distributed source coding in sensor networks : A practical implementation

Petersen, Sigmund Seehuus January 2007 (has links)
In this thesis we take a closer look at wireless sensor networks and source coding. A necessary condition for this work to have any meaning is that the sensors in the network are spatially co-located and that there is correlation between the data the sensors observe. When there is correlation, there is redundancy in the communicated information that can be removed by source coding techniques. This can be done by distributed source coding. Slepian and Wolf showed theoretically that there is no rate loss even when the sensors do not communicate with each other [slepian73]. Wyner and Ziv extended this from the lossless case of Slepian and Wolf to lossy source coding [wyner76]. Pradhan and Ramchandran found a practical implementation of the Slepian-Wolf and Wyner-Ziv theory based on channel coding principles [pradhan03]. This can be done because the correlation between any two sources can be modelled as a channel with an error probability. We build our work on their ideas. The channel coding technique we have found most advantageous for this scheme is Low-Density Parity-Check (LDPC) coding. LDPC coding is the most advanced form of linear block coding to date. It is represented by a sparse parity-check matrix. While LDPC coding in the traditional sense is used for bandwidth expansion of the source to protect it from channel errors, it is used for bandwidth compression, or rate reduction, in the distributed sense. The distributed LDPC scheme is applied to medical ECG data as an example. Due to lack of time and the comprehensive nature of the task, the adapted message-passing decoding algorithm needed to complete the implementation could not be finished. We have illustrated the distributed encoder system with a (7,4) Hamming code as an example. The performance of this system is not good enough for any practical use, but it will serve as a guideline for possible future work in the area.
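As a sketch of the syndrome-based (7,4) Hamming example the abstract mentions: the encoder transmits only the 3-bit syndrome of each 7-bit block, and the decoder recovers the block from the syndrome plus the correlated side information, assuming the two differ in at most one bit. Function names and layout are illustrative, not the thesis's code:

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code; column i is the
# binary representation of i+1, so a single differing bit in
# position i yields the syndrome i+1.
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])

def encode(x):
    """Compress a 7-bit source word to its 3-bit syndrome."""
    return H.dot(x) % 2

def decode(syndrome, y):
    """Recover x from its syndrome and side information y, assuming
    the correlation model lets x and y differ in at most one bit."""
    s = (syndrome + H.dot(y)) % 2      # syndrome of the difference x XOR y
    pos = s[0] * 4 + s[1] * 2 + s[2]   # position of the differing bit, plus one
    x_hat = y.copy()
    if pos:
        x_hat[pos - 1] ^= 1
    return x_hat
```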
253

Development of the Control Interface for the Fast Magnet Current Change Monitor (FMCM) : Part of the LHC Machine Protection System at CERN

Jevard, Tonje Vik January 2007 (has links)
A large number of magnets, both superconducting and normal conducting, are installed for the guidance of the two proton beams around the Large Hadron Collider (LHC), the world's largest particle accelerator currently under construction at the European Organization for Nuclear Research (CERN). Due to the unprecedented energies stored in the beams and the magnets, sophisticated systems are under development to protect the equipment in case of failure. However, scenarios have been identified where failures in the magnet powering will lead to very fast beam losses in less than 100 µs, due to the low time constants of the electrical circuits and the consequent fast current decay. For these circuits, systems that are currently deployed will not be fast enough to generate and transmit a beam dump request before the magnetic field change affects the beam trajectory. A dedicated system for the detection of such fast failures is already operational at the Hadron-Electron Ring Accelerator (HERA) in Hamburg. This system, the Fast Magnet Current Change Monitor (FMCM), has been adapted to meet CERN requirements and needs to be integrated into the CERN accelerator environment. For remote monitoring and Post Mortem analysis every FMCM is connected to the CERN control system by means of an RS-422 interface. This master's thesis is focused on the software development and analysis of the control interface for the described FMCM units. The communication between the FMCMs and the CERN control system has been designed and implemented in C++, following the guidelines given by the Front End Software Architecture (FESA) framework. An analysis of the RS-422 interface with respect to Signal Integrity and Electromagnetic Compatibility verified the current setup of the RS-422 serial interface for the given transmission parameters. Transient bursts are considered to be the most common type of disturbance in the LHC and the related surface buildings. Hence, error detection has been implemented to ensure reliable communication by causing retransmissions of the data until it has been correctly received.
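The abstract does not say which error-detecting code is used on the RS-422 link; purely as an illustration of the detect-and-retransmit idea, a CRC-16/CCITT check wrapped in a retransmit-until-valid loop might look like this (frame format and function names hypothetical):

```python
def crc16_ccitt(data: bytes, crc: int = 0xFFFF) -> int:
    """CRC-16/CCITT, a common error-detecting code on serial links."""
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = (crc << 1) ^ 0x1021 if crc & 0x8000 else crc << 1
            crc &= 0xFFFF
    return crc

def receive_frame(read_frame, request_retransmit):
    """Keep requesting retransmission until the payload checks out."""
    while True:
        payload, received_crc = read_frame()
        if crc16_ccitt(payload) == received_crc:
            return payload
        request_retransmit()
```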
254

Theory, Simulation and Measurement of Wireless Multipath Fading Channels

Mella, Kristian January 2007 (has links)
Multipath fading is a very common phenomenon in signal transmission over wireless channels. When a signal is transmitted over a multipath channel, it is subject to reflection, diffraction and refraction. This results in multiple versions of the same signal arriving at the receiver, each of which has suffered different path loss, time delay, phase shift and often also frequency shift. The latter is a result of Doppler shifts, which are experienced whenever there is relative movement between the receiver and transmitter or scatterers. The communication environment changes quickly over location or time, thus introducing uncertainties into the channel response. Such channels result in increased system complexity, and the propagation effects need to be identified in order to predict the channel behaviour. Path loss is experienced in all types of radio channels, and its metrics are often determined by empirical path loss models. The path loss affects the mean received signal level, whereas large-scale fading (shadowing) results in large-scale fluctuations of this received level. These variations are superimposed by the small-scale fluctuations, or small-scale fading, caused by multipath reception and Doppler shifts. Small-scale fading is simulated to gain a better understanding of these effects. In order to observe these effects satisfactorily, the whole digital radio communication system chain must be simulated. Simulations are also performed to estimate the data capacity over both mobile and fixed multipath channels, and the resulting capacity of multipath reception exceeds the capacity of a flat channel due to increased received energy. In order to classify the effect of multipath channels on signal transmission, the profile of the channel for a given scenario has to be known, i.e. channel metrics such as the RMS delay spread are essential for a successful radio system design. A multipath channel profile and its RMS delay spread can be derived from a large number of channel measurements performed for a given scenario. Measurements of the multipath channel impulse response have been performed, the RMS delay spread has been calculated, and the channel measurement procedure itself is simulated in Matlab.
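The RMS delay spread referred to above is the power-weighted standard deviation of the path delays in the channel's power delay profile; a minimal sketch (names illustrative):

```python
import numpy as np

def rms_delay_spread(delays, powers):
    """RMS delay spread of a power delay profile.
    delays: path delays in seconds; powers: linear (not dB) path gains."""
    t = np.asarray(delays, dtype=float)
    p = np.asarray(powers, dtype=float)
    mean_delay = np.sum(p * t) / np.sum(p)
    return np.sqrt(np.sum(p * (t - mean_delay) ** 2) / np.sum(p))

# Example: two paths 1 µs apart, the second 3 dB weaker
print(rms_delay_spread([0e-6, 1e-6], [1.0, 0.5]))  # ~0.47 µs
```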
255

Underwater Communications : An OFDM-system for Underwater Communications

Gregersen, Svein Erik Søndervik January 2007 (has links)
In the fall of 2006, NTNU (the Norwegian University of Science and Technology) initiated a strategic project in cooperation with SINTEF, with the aim of gaining more knowledge about underwater acoustic communications. This study is a part of that project and focuses on a system for underwater communication. An orthogonal frequency division multiplexing (OFDM) system using differential quadrature phase shift keying (DQPSK) has been defined and implemented in MATLAB. The system has been characterized through thorough simulations and testing. Initial measurements have also been carried out in order to test the developed system on a real underwater acoustic channel, and the results have been analysed.
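As a sketch of the transmitter side of such a system — the FFT size, cyclic-prefix length and Gray mapping below are assumptions for illustration, not the parameters used in the thesis:

```python
import numpy as np

def dqpsk_ofdm_symbol(bits, prev_phases, n_fft=64, cp_len=16):
    """Map 2*n_fft bits to one OFDM symbol with differential QPSK on
    each subcarrier: the new phase is the previous phase plus a
    Gray-coded increment, so the receiver needs no absolute phase
    reference (useful on a time-varying acoustic channel)."""
    gray = {(0, 0): 0, (0, 1): 1, (1, 1): 2, (1, 0): 3}
    pairs = np.asarray(bits).reshape(n_fft, 2)
    incr = np.array([gray[tuple(p)] for p in pairs]) * (np.pi / 2)
    phases = prev_phases + incr
    symbol = np.fft.ifft(np.exp(1j * phases))                   # OFDM modulation
    return np.concatenate([symbol[-cp_len:], symbol]), phases   # add cyclic prefix
```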
256

Diffusion-Based Model for Noise-Induced Hearing Loss

Aas, Sverre, Tronstad, Tron Vedul January 2007 (has links)
Among several different damaging mechanisms, oxidative stress is found to play an important role in noise-induced hearing loss (NIHL). This is supported both by findings of oxidative damage after noise exposure and by the fact that upregulation of antioxidant defenses seems to reduce the ear's susceptibility to noise. Oxidative stress mechanisms could help explain several of the characteristics of NIHL, and we therefore believe that it would be advantageous to estimate noise-induced hearing impairment on the basis of these, rather than on the prevailing energy-based methods. In this thesis we have tried to model the progress of NIHL using diffusion principles, under the assumption that accumulation of reactive oxygen species (ROS) is the cause of hearing impairment. Production, and the subsequent accumulation, of ROS in a group of outer hair cells (OHCs) is assessed by different implementations of sound pressure as the input parameter, and the ROS concentration is used to estimate the noise-induced threshold shift. The amount of stress experienced by the ear is implemented as a summation of ROS concentration raised to different exponents. Measured asymptotic threshold shift (ATS) values are used to calibrate the development of threshold shifts. Additionally, the results are evaluated against the standards developed by the International Organization for Standardization (ISO) and the American Occupational Safety and Health Administration (OSHA). Results indicate that ROS production is not directly proportional to the sound pressure, but rather shows an accelerated formation and accumulation at increasing sound pressure levels (SPLs). There are also indications that the correlation between the concentration of ROS and temporary threshold shift (TTS) and/or permanent threshold shift (PTS) is more complex than our assumption. Because our model is based on diffusion principles, we get the same tendency of noise-induced hearing loss development as experimentally measured TTS development. The model also takes into account the potentially damaging mechanisms which occur during recovery after exposure, and it has the ability to use TTS data for calibration. We therefore suggest that modeling of ROS accumulation in the hair cells could be used advantageously to estimate noise-induced hearing loss.
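The thesis's actual model is not reproduced here; as a minimal sketch of the production/clearance idea, assuming ROS production grows as a power of the sound pressure and clearance is first order in the concentration (both assumptions):

```python
import numpy as np

def ros_concentration(spl_db, k=2.0, clearance=0.01, dt=1.0):
    """Integrate a toy production/clearance model of ROS accumulation:
    production grows as (sound pressure)**k, clearance is proportional
    to the current concentration, so elevated ROS persists for a while
    after the exposure ends (consistent with damage during recovery)."""
    pressure = 10.0 ** (np.asarray(spl_db, dtype=float) / 20.0)
    c = np.zeros(len(pressure) + 1)
    for i, production in enumerate(pressure ** k):
        c[i + 1] = c[i] + dt * (production - clearance * c[i])
    return c
```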
257

Energy-Efficient Link Adaptation and Resource Allocation in Energy-Constrained Wireless Ad Hoc Networks

Krogsveen, Even January 2007 (has links)
Wireless ad hoc networks have a number of advantages over traditional, infrastructure-based networks. Robustness and easy deployment are two of the main advantages. However, the distributed nature of such networks raises a number of design challenges, especially when energy efficiency and QoS requirements are to be taken into consideration. These challenges can only be met by allowing closer cooperation and mutual adaptation between the protocol layers, referred to as a cross-layer design paradigm. In energy-constrained wireless ad hoc networks, each node can only transmit to a limited number of other nodes directly. Hence, in order to reach distant destinations, intermediate nodes must relay the traffic of their peer nodes, resulting in multihop routes. The total energy consumption associated with an end-to-end transmission over such a route can be significantly reduced if the nodes are correctly configured. A cross-layer optimization scheme, based on adaptive modulation and power control, is proposed in this thesis. The optimization scheme assumes that an existing route has been found, and it supports QoS requirements in terms of end-to-end bit error rate and delay. Both transmission and circuit energy consumption are taken into consideration. By jointly optimizing all nodes along the route, the total energy consumption can be reduced by more than 50% compared to a fixed-rate system. The adaptive system also exhibits superior capabilities to meet stringent QoS requirements. Results for both continuous and discrete rate adaptation are produced, and it is found that discrete adaptation causes only a small performance degradation compared to the optimal, continuous case. Simulations also show that the system is vulnerable to inaccurate link state information. Finally, the effects of maximum-rate limitation and of ignoring the circuit power consumption are investigated.
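As a toy illustration of the rate/energy trade-off described above — the link-budget model and all constants are assumptions, not values from the thesis:

```python
def best_rate(distance, p_circuit=0.1, rates=(1, 2, 4, 6), alpha=1e-9, n=3.5):
    """Pick the discrete rate (bits/symbol) minimizing energy per bit:
    transmit power grows with rate and distance, while circuit energy
    per bit shrinks because the radio finishes the transfer sooner."""
    best = None
    for b in rates:
        p_tx = alpha * (2 ** b - 1) * distance ** n  # simplified link budget
        e_bit = (p_tx + p_circuit) / b               # energy per bit at rate b
        if best is None or e_bit < best[1]:
            best = (b, e_bit)
    return best  # (bits per symbol, energy per bit)
```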
258

A study of Forward Models in Seismic Inversion

Nilsen, Maria January 2007 (has links)
Knowledge about the physical parameters of the seafloor is often important information. This master's thesis looks at seismic inversion as a way to find these parameters, with particular emphasis on the choice of forward model. A seismic inversion has a number of variables which can be changed to obtain a good result, and the forward model has a big impact on the outcome: both the time spent on the inversion and which parameters the inversion is best suited to estimate are determined by the choice of forward model. An inversion code written in Matlab by Fredrik Helland is used. It uses genetic algorithms for optimization and OSIRIS as the forward model. This code is expanded to deal with several forward models and seafloor geometries. Testing of the inversion code shows that the forward models serve different purposes. The ray tracing model is still at a concept level, but should be usable in the future once it runs faster and can deal with more than 3 layers. The dispersion method and the wave number integration method both work well, and the results show that a combination of them might be the best choice if all the geoacoustic parameters of the seafloor are sought.
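As a generic sketch of such a genetic-algorithm optimization loop (selection, crossover, mutation); the operators and constants are illustrative, not Helland's code:

```python
import numpy as np

def genetic_inversion(misfit, bounds, pop=50, gens=100, mut=0.1):
    """Evolve candidate parameter vectors to minimize the misfit
    between modelled (forward model) and observed seismic data.
    bounds: list of (low, high) pairs, one per geoacoustic parameter."""
    lo, hi = np.array(bounds, dtype=float).T
    x = lo + np.random.rand(pop, len(lo)) * (hi - lo)
    for _ in range(gens):
        fitness = np.array([misfit(v) for v in x])
        parents = x[np.argsort(fitness)[:pop // 2]]   # selection: keep best half
        mates = parents[np.random.permutation(len(parents))]
        children = (parents + mates) / 2              # crossover: blend pairs
        children += mut * (hi - lo) * np.random.randn(*children.shape)  # mutation
        x = np.clip(np.vstack([parents, children]), lo, hi)
    return x[np.argmin([misfit(v) for v in x])]
```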
259

Experience with the Construction and Use of Polyphonic Test Signals based on Single Monophonic Recordings for Localisation Listening Tests

Ursin, Torbjørn January 2007 (has links)
The paper presents experiments made in search of answers to two principal questions: 1. Can one single musician be made to sound like several musicians playing together? 2. In a music ensemble where one of its constituents has a distinctive spectrum, how do the deviant spectral components influence a listener's ability to localise the source? In the first part of the experiment, an attempt was made to simulate a flute ensemble. Based on a recording of one flute playing a short piece, the flute was multiplied into a quintet. Along the way, several properties were manipulated in an attempt to make the quintet sound like a real quintet: timing, spectrum, intensity, and phase. In the second part, one flute in a quintet was subjected to a spectral tilt, i.e. high frequency components were boosted while low frequency components were diminished. A test panel was engaged to help evaluate the questions. First, the panel compared the simulated quintet to a reference quintet, trying to distinguish the simulation from the reference. Subsequently, listening to a reference quintet, the panel tried to localise the one flute which had undergone a spectral tilt. A musical piece was played 5 times; first, one of the flutes was moderately tilted, then the tilt's magnitude was increased for every run until it was eventually noticeable. For each run, the test panel was asked to indicate the tilted flute, or a random flute if none appeared tilted to them. The majority of the test panel did not manage to tell the simulated quintet from the reference. However, the reference may have been imperfect, and the simulation process somewhat affects sound quality. When it comes to localisation, a rather excessive tilt was necessary for the test panel to be able to localise it, even though more moderate tilts were clearly audible.
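A spectral tilt of the kind described — boosting high frequencies while cutting lows around a pivot — can be applied in the frequency domain; a minimal sketch with an assumed slope and pivot:

```python
import numpy as np

def spectral_tilt(x, fs, db_per_octave=3.0, pivot_hz=1000.0):
    """Apply a dB-per-octave gain ramp to a signal: unity gain at the
    pivot frequency, boost above it, attenuation below it."""
    spectrum = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    octaves = np.log2(np.maximum(freqs, 1.0) / pivot_hz)
    gain = 10.0 ** (db_per_octave * octaves / 20.0)
    return np.fft.irfft(spectrum * gain, n=len(x))
```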
260

Radio Resource Allocation for Increased Capacity in Cellular Networks

Dybdahl, Sigbjørn Hernes January 2007 (has links)
Cellular networks are widely deployed for wireless communication, and as the number of users of these networks increases, so does the need for higher spectral efficiency. Because radio resources are scarce, clever measures have to be taken in order to increase throughput in wireless networks. Ever higher rates are demanded, but we also want to preserve a fair distribution of the available resources. Therefore, we consider the problem of joint power allocation and user scheduling while achieving a desired level of fairness in wireless cellular systems. Dynamic resource allocation is employed for the simulated full-reuse networks in order to cope with inter-cell interference and to optimize spectrum efficiency. Binary power allocation is implemented and compared to the performance without power control, for minimum transmit power levels equal to 0 and greater than 0. We show that binary power control with individual power levels for each cell is optimal for two-cell networks. We also present an extension of proportional fair scheduling to multi-cell networks, and analyze its performance for different cell sizes and time windows. Finally, we highlight the equivalence between multi-cell, multi-user and multi-carrier proportional fair scheduling. Simulation results show how power control and user scheduling increase throughput, reduce power consumption and achieve a desired level of fairness. Hence, considerable gains in network throughput can be obtained through adaptive power allocation and multiuser diversity.
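As a sketch of the proportional fair rule extended here: in each slot the scheduler serves the user with the highest ratio of instantaneous rate to windowed average throughput; the window length t_c below is illustrative:

```python
import numpy as np

def pf_schedule(inst_rates, avg_rates):
    """Proportional fair scheduling: serve the user whose current
    channel is best relative to its own average throughput."""
    return int(np.argmax(np.asarray(inst_rates) / np.asarray(avg_rates)))

def update_avg(avg_rates, served, rate, t_c=100.0):
    """Exponentially weighted throughput average over ~t_c slots;
    a larger t_c trades short-term fairness for throughput."""
    avg = [(1 - 1 / t_c) * a for a in avg_rates]
    avg[served] += rate / t_c
    return avg
```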
