341

Diffusion-Based Model for Noise-Induced Hearing Loss

Aas, Sverre, Tronstad, Tron Vedul January 2007 (has links)
Among several different damaging mechanisms, oxidative stress is found to play an important role in noise-induced hearing loss (NIHL). This is supported both by findings of oxidative damage after noise exposure and by the fact that upregulation of antioxidant defenses seems to reduce the ear's susceptibility to noise. Oxidative stress mechanisms could help explain several of the characteristics of NIHL, and we therefore believe that it would be advantageous to estimate noise-induced hearing impairment on the basis of these, rather than the prevailing energy-based methods. In this thesis we have tried to model the progression of NIHL using diffusion principles, under the assumption that accumulation of reactive oxygen species (ROS) is the cause of hearing impairment. Production, and the subsequent accumulation, of ROS in a group of outer hair cells (OHCs) is assessed with different implementations of sound pressure as the input parameter, and the ROS concentration is used in the estimation of noise-induced threshold shift. The amount of stress experienced by the ear is implemented as a summation of ROS concentration with different exponents of power. Measured asymptotic threshold shift (ATS) values are used as a calibrator for the development of threshold shifts. Additionally, the results are evaluated in comparison to the standards developed by the International Organization for Standardization (ISO) and the American Occupational Safety and Health Administration (OSHA). Results indicate that ROS production is not directly proportional to the sound pressure, but rather shows an accelerated formation and accumulation with increasing sound pressure levels (SPLs). Indications are also that the correlation between the concentration of ROS and temporary threshold shift (TTS) and/or permanent threshold shift (PTS) is more complex than our assumption. Because our model is based on diffusion principles, we get the same tendency of noise-induced hearing loss development as experimentally measured TTS development. It also takes into account the potentially damaging mechanisms which occur during recovery after exposure, and has the ability to use TTS data for calibration. We therefore suggest that modeling of ROS accumulation in the hair cells could be used advantageously to estimate noise-induced hearing loss.
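As a rough illustration of the kind of model the abstract describes, the sketch below accumulates ROS as a function of sound pressure and sums powers of the concentration into a stress dose. Every constant, exponent and function name is a made-up placeholder for illustration, not a calibrated value or routine from the thesis.

```python
import numpy as np

def ros_stress_dose(spl_db, dt=1.0, alpha=1e-6, gamma=2.0,
                    clearance=0.01, k=1.5):
    """Toy diffusion-style ROS accumulation model. All constants (alpha,
    gamma, clearance, k) are hypothetical placeholders.

    spl_db    : sound pressure level per time step [dB SPL]
    gamma     : exponent relating sound pressure to ROS production
    clearance : first-order removal of ROS (antioxidant defenses / diffusion out)
    k         : exponent used when summing the concentration into a stress dose
    """
    p = 20e-6 * 10.0 ** (np.asarray(spl_db, dtype=float) / 20.0)  # dB SPL -> Pa
    ros, dose = 0.0, 0.0
    for pt in p:
        production = alpha * pt ** gamma            # grows faster than linearly
        ros += (production - clearance * ros) * dt  # accumulation minus clearance
        dose += ros ** k * dt                       # summed concentration^k
    return dose

# Example: eight hours of exposure in one-minute steps, 100 vs 90 dB SPL
d100 = ros_stress_dose(np.full(480, 100.0))
d90 = ros_stress_dose(np.full(480, 90.0))
print(d100 / d90)   # relative stress dose; would be mapped to TTS/PTS via ATS data
```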
342

Power Allocation In Cognitive Radio

Canto Nieto, Ramon, Colmenar Ortega, Diego January 2008 (has links)
One of the major challenges in the design of wireless networks is the use of the frequency spectrum. Numerous studies on spectrum utilization show that 70% of the allocated spectrum is in fact not utilized. This has led researchers to look for better ways of using the spectrum, giving rise to the concept of Cognitive Radio (CR). Perhaps one of the main goals when designing a CR system is finding the best way to decide when a user should be active and when it should not. In this thesis, the performance of the Binary Power Allocation protocol is analyzed in depth under different conditions for a defined network. The main metric used is the outage probability, studying the behavior of the system over a wide range of values for different transmission parameters such as rate, outage probability constraints, protection radius, power ratio and maximum transmission power. All the studies are performed on a network with only one Primary User per cell, communicating with a Base Station. This user shares the cell with N potential secondary users, randomly distributed in space and communicating with their respective secondary receivers, of which only M are allowed to transmit according to the Binary Power Allocation protocol. In order to analyze the system broadly and guide the reader to a better comprehension of its behavior, different considerations are taken. Firstly, an ideal model is presented, with no error in the channel information acquisition and random switching "off" of users. Secondly, we try to improve the behavior of the system by developing different methods for deciding which user to drop when it becomes harmful to the primary user's communication. Besides this, more realistic models of the channel state information are considered, including log-normal and Gaussian error distributions. Methods and modifications used to reach the obtained analytical results are presented in detail, and these results are followed by simulation results. Some results that do not agree with theoretical expectations are also presented and commented on, in order to open further directions for development and research.
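The following Monte Carlo sketch shows one way a binary (on/off) secondary power rule and the resulting primary-user outage probability could be simulated. The geometry, power levels and the simple protection-radius rule are illustrative assumptions for the example, not the thesis' actual protocol or parameter values.

```python
import numpy as np

rng = np.random.default_rng(0)

def primary_outage(n_sec=20, cell_radius=500.0, protect_radius=50.0,
                   p_primary=1.0, p_secondary=0.1, rate=1.0,
                   pathloss_exp=3.5, noise=1e-9, trials=10_000):
    """Monte Carlo estimate of primary outage under a binary on/off rule."""
    sinr_min = 2.0 ** rate - 1.0             # SINR needed for the target rate
    outages = 0
    for _ in range(trials):
        # Secondary transmitters dropped uniformly in the cell (BS at origin)
        d = cell_radius * np.sqrt(rng.uniform(size=n_sec))
        d = np.maximum(d, 1.0)
        active = d > protect_radius          # binary rule: off inside the guard zone
        fading = rng.exponential(size=n_sec) # Rayleigh power fading
        interference = np.sum(active * p_secondary * fading * d ** (-pathloss_exp))
        # Primary link: user at a fixed 100 m from its base station
        h = rng.exponential()
        sinr = p_primary * h * 100.0 ** (-pathloss_exp) / (noise + interference)
        outages += sinr < sinr_min
    return outages / trials

print(primary_outage())
```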
343

A Pragmatic Approach to Modulation Scaling Based Power Saving for Maximum Communication Path Lifetime in Wireless Sensor Networks

Malavia Marín, Raúl January 2008 (has links)
The interest in Wireless Sensor Networks is rapidly increasing due to their interesting advantages related to cost, coverage and network deployment. They are present in civil applications and in most scenarios depend upon the batteries which are the exclusive power source for the tiny sensor nodes. The energy consumption is an important issue for research, and many interesting projects have been developed in several areas. They focus on topology topics, Medium Access Control or physical issues. Many projects aim at the physical layer where the node's power consumption is optimized through scaling the modulation scheme used in node communications. Results show that an optimal modulation scheme can lead to the minimum power consumption over the whole wireless sensor network. A usual simplification in research is to target individual paths and not take into account the whole network. However nodes may be part of several paths, and therefore nodes closer to the sinks may consume higher amounts of energy. This fact is the chief motivation of our research, where modulation scaling over the nodes with more energy is performed in order to increase the lifetime of the nodes having lower energy reserves. Simulation results showed typical values of path lifetime expectancy of 50 to 120 percent higher than comparable power-aware methods.
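A minimal sketch of the modulation-scaling idea, assuming a toy energy-per-bit model and a path latency budget: better-charged nodes take on faster, costlier constellations so the weakest node can use a cheaper one. All constants, option sets and function names here are hypothetical, not the thesis' model or simulation setup.

```python
import itertools

SYMBOL_RATE = 250e3          # symbols per second, illustrative radio

def energy_per_bit(b, c_rf=2e-8, c_elec=1e-8):
    # Toy model: RF energy grows with constellation size 2^b, while the
    # electronics energy falls because higher-order symbols finish sooner.
    return c_rf * (2.0 ** b - 1.0) / b + c_elec / b

def tx_time(b, bits):
    return bits / (b * SYMBOL_RATE)

def balance_path(residual_energy, bits=1e5, latency_budget=0.45,
                 b_options=(2, 4, 6)):
    """Brute-force sketch: choose bits/symbol per hop so that the weakest
    node's lifetime (rounds until empty) is maximized while the whole path
    still meets a latency budget."""
    best = None
    for combo in itertools.product(b_options, repeat=len(residual_energy)):
        if sum(tx_time(b, bits) for b in combo) > latency_budget:
            continue                      # too slow: violates the path deadline
        lifetimes = [e / (energy_per_bit(b) * bits)
                     for e, b in zip(residual_energy, combo)]
        if best is None or min(lifetimes) > best[1]:
            best = (combo, min(lifetimes))
    return best

# Node 2 has the weakest battery: the search assigns it the cheapest (slowest)
# modulation and lets the better-charged nodes speed up to keep the deadline.
print(balance_path([1.0, 0.3, 0.9]))
```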
344

Performance of a Multichannel Audio Correction System Outside the Sweetspot: Further Investigations of the Trinnov Optimizer

Wille, Joachim Olsen January 2008 (has links)
This report is a continuation of the student project "Evaluation of Trinnov Optimizer audio reproduction system". It further investigates the properties and function of the Trinnov Optimizer, a correction system for audio reproduction systems. During the student project, measurements were performed in an anechoic lab to provide information on the functionality and abilities of the Trinnov Optimizer. Massive amounts of data were recorded, and these have also been the foundation of this report. The new work presented here is the interpretation of those results using Matlab. The Optimizer by Trinnov [9] is a standalone system for reproduction of audio over a single or multiple loudspeaker setup. It is designed to correct frequency and phase response, in addition to correcting loudspeaker placements and cancelling simple early reflections in a multiple loudspeaker setup. The purpose of further investigating this issue was to understand more about the sound field produced around the listening position, and to give more detailed results on the changes in the sound field after correction. The importance of correcting the system not only at the listening position, but also in the surrounding area, is obvious because there is often more than one listener. This report provides further insight based on physical measurements rather than subjective statements about the performance of a room and loudspeaker correction device. WinMLS has been used to measure the system with single- and multiple-microphone setups. Some results from the earlier student project are also included in this report to verify the measurement methods and to show the correspondence between the different measuring systems. Some of the data have therefore been compared to the Trinnov Optimizer's own measurements and appear similar in this report. Some errors found in the initial report, namely the results from the phase response measurements, have also been corrected. Multiple loudspeakers in a 5.0 setup have been measured with 5 microphones on a rotating boom to measure the sound pressure over an area around the listening position. This allowed the effect of simple reflection cancellation, and the ability to generate virtual sources, to be investigated. For the specific cases that were investigated in this report, the Optimizer showed the following: frequency and phase response are in every situation optimized to the extent of the Optimizer's algorithms; every case shows improvement in the frequency and phase response over the whole measured area; direct frontal reflections were deconvolved up to 300 Hz over the whole measured area with a radius of 56 cm; a reflection from the side was deconvolved up to roughly 200 Hz for microphones 1 through 3, up to a radius of 31.25 cm, and up to 100 Hz for microphones 4 and 5; and the ability to create virtual sources corresponds fairly well to the theoretical expectations. The video sequences that were developed give an interesting new angle on the problems investigated. Rather than looking at plots from different angles, which is difficult and time consuming, the videos provide an intuitive perspective that illuminates the same issues as the commonly presented frequency and phase response measurements.
345

Ultra-Wideband Sensor-Communication

Amat Pascual, Ángel José January 2008 (has links)
One of the fundamental concerns in wireless communications with battery-operated terminals is the battery life. Basically, there are two ways of reducing power consumption: the algorithms should be simple and efficiently implemented (at least in the wireless terminals), and the transmit power should be limited. This document considers discrete-time, progressive signal transmission with feedback [ramstad]. For a forward Gaussian channel with an ideal feedback channel, the system performs according to OPTA (Optimal Performance Theoretically Attainable [berger]). In this case, with substantial bandwidth expansion through multiple retransmissions, the power can be lowered to a theoretical minimum. In the case of a non-ideal return channel, the results are limited by the feedback channel's signal-to-noise ratio. Going one step further, a more realistic view of the channel must consider fading due to multiple reflections, especially in indoor scenarios. This thesis discusses how to model the channel fading and how to simulate it from different probability distributions. Afterwards, some solutions to avoid, or at least reduce, the undesirable effects caused by fading are proposed. In these solutions, the fading characteristics (power and dynamic range) and the application requirements play a very important role in the final system design. Finally, transmission of a realistic signal in a realistic scenario is attempted: audio transmission over fading channels. The results are then compared in general terms to similar equipment, such as a generic wireless microphone system.
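A small sketch of simulating fading gains from different probability distributions, as the abstract discusses, and of reading off a fixed-rate outage probability under each model. The distribution parameters and the comparison metric are illustrative choices, not the thesis' indoor channel model.

```python
import numpy as np

rng = np.random.default_rng(1)

def fading_gains(n, kind="rayleigh", k_factor=4.0, sigma_db=6.0):
    """Draw channel power gains from a few common fading models."""
    if kind == "rayleigh":          # no line of sight: |h|^2 is exponential
        return rng.exponential(1.0, n)
    if kind == "rician":            # dominant line-of-sight path with K-factor
        s = np.sqrt(k_factor / (k_factor + 1.0))
        sigma = np.sqrt(1.0 / (2.0 * (k_factor + 1.0)))
        h = (s + sigma * rng.standard_normal(n)) + 1j * sigma * rng.standard_normal(n)
        return np.abs(h) ** 2
    if kind == "lognormal":         # slow shadowing, spread given in dB
        return 10.0 ** (sigma_db * rng.standard_normal(n) / 10.0)
    raise ValueError(kind)

# Outage of a fixed-rate link under each model: P(log2(1 + g*SNR) < R)
snr, rate = 10.0, 2.0
for kind in ("rayleigh", "rician", "lognormal"):
    g = fading_gains(100_000, kind)
    print(kind, np.mean(np.log2(1.0 + g * snr) < rate))
```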
346

Autonomous Algorithms for Dynamic Spectrum Management in DSL Systems

Rognsvåg, Jan Vidar January 2008 (has links)
Crosstalk between twisted-pair cables has become the dominant source of interference in today's DSL (Digital Subscriber Line) systems. Existing standards set maximum limits on the transmitted power spectral density based on worst-case estimates of crosstalk interference over all pair-to-pair combinations, with consequent limits on transmission capacity. Furthermore, in methods based on Static Spectrum Management (SSM), power is allocated statically across the entire available bandwidth, without regard to frequency-dependent attenuation and interference. This work addresses the development of autonomous algorithms for Dynamic Spectrum Management (DSM) in wireline communication over the existing copper network. DSM allows power allocation based on measurements of channel variations across the frequency band, so terminals can limit their own transmitted power spectral density once the desired transmission rate has been reached. Such dynamic spectrum allocation makes it possible to prioritize frequency bands with high signal-to-noise ratios (SNR), while heavily disturbed subbands have their allocated power sharply reduced or are switched off entirely. Various algorithms were implemented and analyzed, with the well-known water-filling algorithm playing a particularly central role in the computations; it gave very good results compared with existing SSM-based power allocation.
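For reference, a textbook implementation of the water-filling allocation mentioned above; this is the classical algorithm over parallel subchannels, not the autonomous DSM algorithms developed in the thesis, and the example channel gains are invented.

```python
import numpy as np

def water_filling(gains, noise, total_power):
    """Classic water-filling over parallel subchannels.

    gains       : |H_k|^2 channel power gains per subcarrier
    noise       : noise power per subcarrier
    total_power : power budget to distribute
    Returns the per-subcarrier powers maximizing sum log2(1 + g*p/n).
    """
    inv = np.asarray(noise, dtype=float) / np.asarray(gains, dtype=float)
    order = np.argsort(inv)                 # best subchannels first
    inv_sorted = inv[order]
    power = np.zeros_like(inv)
    for k in range(len(inv_sorted), 0, -1):
        # Water level if only the k best subchannels are active
        mu = (total_power + inv_sorted[:k].sum()) / k
        if mu > inv_sorted[k - 1]:          # all k allocations are positive
            power[order[:k]] = mu - inv_sorted[:k]
            break
    return power

# Example: 8 subcarriers with frequency-dependent attenuation, unit noise
g = np.array([1.0, 0.8, 0.5, 0.3, 0.2, 0.1, 0.05, 0.01])
p = water_filling(g, np.ones(8), total_power=4.0)
print(p.round(3), np.log2(1.0 + g * p).sum())
```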
347

Sensor Array Signal Processing for Source Localization

Manzano García-Muñoz, Cristina January 2008 (has links)
This work is a study of source localization methods, more precisely of beamforming approaches. The necessary background theory is provided first, and then developed further to explain the basis of each approach. The studied problem consists of an array of sensors on which the signal to be processed impinges. Several examples of incident signals are provided in order to compare the performance of the methods. The goal of the approaches is to find the Incident Signal Power and the Direction Of Arrival of the Signal (or Signals) Of Interest. With this information, the source can be located in angle and range. After the study, the conclusions show which methods to choose depending on the application pursued. Finally, some ideas and guidelines for future investigation in the field are given.
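A compact sketch of the simplest family of such approaches, the conventional (delay-and-sum) beamformer, estimating direction of arrival and incident power from a sample covariance matrix. The array geometry, source angles and noise level are invented for the example and are not taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(2)

def steering_vector(theta_deg, n_sensors=8, spacing=0.5):
    """Uniform linear array response; element spacing in wavelengths."""
    k = np.arange(n_sensors)
    return np.exp(-2j * np.pi * spacing * k * np.sin(np.deg2rad(theta_deg)))

# Two narrowband sources at -20 and 35 degrees, plus white sensor noise
n_sensors, n_snap = 8, 200
doas, powers = [-20.0, 35.0], [1.0, 0.5]
X = sum(np.sqrt(p) * np.outer(steering_vector(d, n_sensors),
                              rng.standard_normal(n_snap) + 1j * rng.standard_normal(n_snap))
        for d, p in zip(doas, powers))
X = X + 0.1 * (rng.standard_normal((n_sensors, n_snap))
               + 1j * rng.standard_normal((n_sensors, n_snap)))

R = X @ X.conj().T / n_snap        # sample spatial covariance matrix

# Conventional beamformer: scan over angle; peak locations estimate the DOAs
# and peak heights the incident signal powers.
angles = np.arange(-90.0, 90.5, 0.5)
spectrum = np.array([np.real(steering_vector(a, n_sensors).conj()
                             @ R @ steering_vector(a, n_sensors)) / n_sensors ** 2
                     for a in angles])
peaks = [i for i in range(1, len(angles) - 1)
         if spectrum[i - 1] < spectrum[i] > spectrum[i + 1]]
top = sorted(peaks, key=lambda i: spectrum[i], reverse=True)[:2]
print([(angles[i], round(float(spectrum[i]), 3)) for i in top])
```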
348

Objective Image Quality Metrics for Ultrasound Imaging

Simpson, Cecilie Øinæs January 2009 (has links)
Objective evaluation of the image quality of ultrasound images is a comprehensive task due to the relatively low image quality compared to other imaging techniques. It is desirable to determine the quality of ultrasound images objectively, since quantification of the quality removes the subjective evaluation, which can lead to varying results. The scanner will also be more user friendly if the user is given feedback on the quality of the current image. This thesis has investigated the objective evaluation of image quality in phantom images. Emphasis has been placed on the spatial variance parameter, which is incorporated in the image analysis system developed during the project assignment. The spatial variance was tested for a variety of settings, for instance different beam densities and numbers of MLAs. In addition, different power spectra have been evaluated in relation to the ProbeContact algorithm developed by the Department of Circulation and Medical Imaging (ISB). The algorithm has also been incorporated in the image analysis system. The results show that the developed algorithm gives a good indication of the spatial variance. An image becomes more and more spatially variant as the beam density decreases. If the beam density goes below the Nyquist sampling limit, the point target will appear to move more slowly when passing a beam, since the region between two beams is undersampled. This effect is seen in the correlation coefficient plots, which are used as a measure of spatial variance. The results from the calculations related to the ProbeContact algorithm show that rearranging the order of the averaging and the Fourier transformation has an impact on the calculated probe contact, but the differences are tolerable. All the evaluated methods can be used, but performing the Fourier transform before averaging can be viewed as the best solution, since it gives a lateral power spectrum with low variance and a smooth mean frequency and bandwidth when they are compared over several frames. This is suggested with the reservation that basic settings are used. Performing a 1D (in the lateral direction) or 2D Fourier transform before averaging has no impact on the resulting power spectrum as long as a normalized Fourier transform is used. The conclusion is that the image analysis system, including the spatial variance parameter, is a good tool for evaluating various parameters related to image quality. The system is improved by the ProbeContact algorithm, which gives a good indication of the image quality based on the acoustic contact of the probe. Even though the image analysis system is limited to phantom images, the thesis is a starting point in the process of obtaining objective evaluation of the image quality in clinical images, since others may use it as a basis for their work.
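The sketch below illustrates a correlation-coefficient measure of spatial variance of the kind the abstract refers to: the response of a point target at each lateral position is correlated against a reference position, and dips in the curve indicate spatial variance. The point-spread functions are synthetic toys and the whole pipeline is an illustrative assumption, not the thesis' image analysis system.

```python
import numpy as np

def spatial_variance_curve(psf_frames):
    """Correlation of each point-target image with the first (reference) one.
    A spatially invariant system keeps the coefficient near 1; undersampled
    beam spacing makes it dip between beams."""
    ref = psf_frames[0].ravel()
    return np.array([np.corrcoef(ref, f.ravel())[0, 1] for f in psf_frames])

# Toy point-spread functions: a Gaussian blob whose width "breathes" as the
# point target moves between transmit beams (hypothetical numbers).
x = np.linspace(-1.0, 1.0, 64)
X, Y = np.meshgrid(x, x)
positions = np.linspace(0.0, 1.0, 20)       # target position in beam spacings
frames = [np.exp(-(X**2 + Y**2) / (2.0 * (0.10 + 0.04 * np.abs(np.sin(np.pi * p)))**2))
          for p in positions]
print(spatial_variance_curve(frames).round(3))
```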
349

Real-Time JPEG2000 Video Decoding on General-Purpose Computer Hardware

Halsteinli, Erlend January 2009 (has links)
There is widespread use of compression in multimedia content delivery, e.g. within video on demand services and transport links between live events and production sites. The content must undergo compression prior to transmission in order to deliver high quality video and audio over most networks; this is especially true for high definition video content. JPEG2000 is a recent image compression standard and a suitable compression algorithm for high definition, high rate video. With its highly flexible embedded lossless and lossy compression scheme, JPEG2000 has a number of advantages over existing video codecs. The only evident drawbacks with respect to real-time applications are that the computational complexity is quite high and that JPEG2000, being an image compression codec as opposed to a video codec, typically has higher bandwidth requirements. Special-purpose hardware can deliver high performance, but is expensive and not easily updated. A JPEG2000 decoder application running on general-purpose computer hardware can complement solutions depending on special-purpose hardware and will see its performance scale with the available processing power. In addition, production costs will be non-existent once the application is developed. The application implemented in this project is a streaming media player. It receives a compressed video stream through an IP interface, decodes it frame by frame and presents the decoded frames in a window. The decoder is designed to better take advantage of the processing power available in today's desktop computers. Specifically, decoding is performed on both CPU and GPU in order to decode a minimum of 50 frames per second of a 720p JPEG2000 video stream. The CPU-executed part of the decoder application is written in C++, based on the Kakadu SDK, and involves all decoding steps up to and including the reverse wavelet transform. The GPU-executed part of the decoder is enabled by the CUDA programming language, and includes luma upsampling and the irreversible color transform. Results indicate that general-purpose computer hardware today can easily decode JPEG2000 video at bit rates up to 45 Mbit/s. However, when the video stream is received at 50 fps through the IP interface, packet loss at the socket level limits the attained frame rate to about 45 fps at rates of 40 Mbit/s or lower. If this packet loss could be eliminated, real-time decoding would be obtained up to 40 Mbit/s. At rates above 40 Mbit/s, the attained frame rate is limited by the decoder performance and not the packet loss. Higher codestream rates should be sustainable if the reverse wavelet transform could be moved from the CPU to the GPU, since the current pipeline is highly unbalanced.
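As an illustration of one of the GPU-mapped steps, the sketch below applies the inverse irreversible color transform (ICT) defined in JPEG2000 Part 1, preceded by a naive nearest-neighbor 2x upsampling stand-in for the subsampled components. It is written in NumPy for readability rather than CUDA or the Kakadu SDK, and the frame data are dummy values.

```python
import numpy as np

def upsample2x(component):
    """Nearest-neighbor 2x upsampling of a subsampled component (a simple
    stand-in for the decoder's interpolation filter)."""
    return np.repeat(np.repeat(component, 2, axis=0), 2, axis=1)

def inverse_ict(y, cb, cr):
    """Inverse irreversible color transform (ICT) from JPEG2000 Part 1.
    Inputs are floats with chroma already upsampled to full resolution and
    centered around zero."""
    r = y + 1.402 * cr
    g = y - 0.344136 * cb - 0.714136 * cr
    b = y + 1.772 * cb
    return np.clip(np.stack([r, g, b], axis=-1), 0.0, 255.0).astype(np.uint8)

# Dummy 720p frame: full-resolution luma, half-resolution chroma components
h, w = 720, 1280
y = np.full((h, w), 128.0)
cb = np.full((h // 2, w // 2), 10.0)
cr = np.full((h // 2, w // 2), -10.0)
rgb = inverse_ict(y, upsample2x(cb), upsample2x(cr))
print(rgb.shape, rgb[0, 0])
```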
350

An exploration of user needs and experiences towards an interactive multi-view video presentation

Danielsen, Eivind January 2009 (has links)
After a literature review of multi-view video technologies, the focus was placed on a multi-view video presentation in which the user receives multiple video streams and can freely switch between them. User interaction was considered a key function of this system. The goal was to explore user needs and expectations towards an interactive multi-view video presentation. A multi-view video player was implemented according to specifications derived from possible scenarios and from user needs and expectations gathered through an online survey. The media player was written in Objective-C with Cocoa, and was developed using the integrated development environment Xcode and the graphical user interface tool Interface Builder. The media player was built around QuickTime's QTKit framework. A plugin tool, Perian, added extra media format support to QuickTime. The results from the online survey show that only a minority have experience with such a multi-view video presentation. However, those who had tried multi-view video are positive towards it. The usage of the system is strongly dependent on content, which should be highly entertainment- and action-oriented. Switching of views was considered a key feature by the users in the conducted test of the multi-view video player. This feature provides a more interactive application and more satisfied users when the content is suitable for multi-view video. Rearranging and hiding views also contributed to a positive viewing experience. However, it is important to note that these results are not sufficient to fully investigate user needs and expectations towards an interactive multi-view video presentation.
