21

MR Spectroscopy : Real-Time Quantification of in-vivo MR Spectroscopic data

Massé, Kunal January 2009 (has links)
Over the last two decades, magnetic resonance spectroscopy (MRS) has enjoyed increasing success in biomedical research. The technique can discern several metabolites in human tissue non-invasively and thus offers a multitude of medical applications. In clinical routine, quantification plays a key role in the evaluation of the different chemical compounds, and quantifying the metabolites that characterize specific pathologies helps physicians establish a diagnosis. Estimating metabolite quantities remains a major challenge in MRS. This thesis presents the implementation of a promising quantification algorithm called selective-frequency singular value decomposition (SELF-SVD). Numerous tests on simulated MRS data were carried out to give insight into the complex dependencies between the various components of the data. Based on the test results, suggestions are made on how best to set the SELF-SVD parameters depending on the nature of the data. The algorithm was also tested for the first time on in-vivo 1H MRS data, where the SELF-SVD quantification results allow the localization of a brain tumor.
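The core mechanism behind SVD-based MRS quantification can be illustrated with a small sketch. The routine below is a simplified HSVD-style estimator, not the thesis's SELF-SVD implementation (which additionally restricts estimation to a selected frequency band): the FID is modeled as a sum of damped complex exponentials, and frequencies, dampings, and amplitudes are recovered from the SVD of a Hankel matrix. All parameter names and values are illustrative.

```python
import numpy as np

def hsvd_quantify(fid, n_components, dt):
    """Estimate frequencies, dampings, and amplitudes of an FID modeled as
    a sum of damped complex exponentials, via SVD of a Hankel matrix.
    (Simplified HSVD-style sketch; SELF-SVD additionally works on a
    selected frequency band.)"""
    N = len(fid)
    L = N // 2
    # Hankel matrix built from the time-domain signal
    H = np.array([fid[i:i + L] for i in range(N - L + 1)])
    U, s, Vh = np.linalg.svd(H, full_matrices=False)
    # Signal subspace spanned by the leading left singular vectors
    Uk = U[:, :n_components]
    # Shift invariance of the subspace: solve Uk_top @ Z ~ Uk_bottom
    Z = np.linalg.lstsq(Uk[:-1], Uk[1:], rcond=None)[0]
    poles = np.linalg.eigvals(Z)
    freqs = np.angle(poles) / (2 * np.pi * dt)   # Hz
    dampings = -np.log(np.abs(poles)) / dt       # 1/s
    # Amplitudes by linear least squares against the estimated poles
    t = np.arange(N) * dt
    basis = np.exp(np.outer(t, np.log(poles) / dt))   # basis[n, k] = poles[k]**n
    amps = np.linalg.lstsq(basis, fid, rcond=None)[0]
    return freqs, dampings, np.abs(amps)
```

On a noiseless simulated FID with a single damped peak, the routine recovers the peak frequency and damping essentially exactly; with noise and overlapping peaks, the choice of `n_components` becomes the critical parameter.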
22

The Optimal Packet Duration of ALOHA and CSMA in Ad Hoc Wireless Networks

Corneliussen, Jon Even January 2009 (has links)
In this thesis, the optimal transmission rate in ad hoc wireless networks is analyzed. The performance metric used in the analysis is the probability of outage. In our system model, users/packets arrive randomly in space and time according to a Poisson point process and are then transmitted to their intended destinations using either ALOHA or CSMA as the MAC protocol. Our model is based on an SINR requirement: the received SINR must stay above a predetermined threshold for the whole duration of a packet for the transmission to be considered successful; otherwise an outage has occurred. To analyze how the transmission rate affects the probability of outage, we assume packets of K bits and let the packet duration T vary. The nodes in the network then transmit packets at a requested rate of Rreq = K/T bits per second. We incorporate the transmission rate into existing lower bounds on the outage probability of ALOHA and CSMA, and use these expressions to find the packet duration that minimizes the probability of outage. For the ALOHA protocol, we derive an analytic expression for the optimal spectral efficiency of the network as a function of the path loss, which is used to find the optimal packet duration Topt. For the CSMA protocol, the optimal packet duration is observed through simulations. We find that to minimize the probability of outage, the system parameters should be chosen such that the requested transmission rate divided by the system bandwidth equals the optimal spectral efficiency of the network.
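The rate-versus-exposure tradeoff can be sketched numerically. The closed form below is a generic Rayleigh-fading, Poisson-interference outage expression, not the bound derived in the thesis, and all parameter values are hypothetical. Shortening T raises the required rate K/T and hence the SINR threshold; lengthening T increases the number of packets that overlap in time.

```python
import numpy as np

# Illustrative parameters (hypothetical values, not from the thesis)
K = 1000.0      # packet size, bits
W = 1e6         # system bandwidth, Hz
lam = 1e-4      # space-time arrival density, packets per m^2 per second
alpha = 4.0     # path-loss exponent
r = 10.0        # transmitter-receiver distance, m

def outage_aloha(T):
    """Illustrative outage estimate for ALOHA: the SINR threshold grows
    with the requested rate K/T, while the density of potentially
    interfering packets grows with the exposure time T."""
    beta = 2 ** (K / (T * W)) - 1        # SINR threshold via Shannon mapping
    density = lam * 2 * T                # packets overlapping a T-second packet
    # Rayleigh-fading Poisson-interference success probability
    delta = 2 / alpha
    factor = np.pi * delta / np.sin(np.pi * delta)
    return 1 - np.exp(-density * np.pi * r**2 * beta**delta * factor)

# Numerical search for the outage-minimizing packet duration
Ts = np.logspace(-4, 0, 400)
T_opt = Ts[np.argmin([outage_aloha(T) for T in Ts])]
```

The minimizer `T_opt` corresponds to operating at the network's optimal spectral efficiency K/(T_opt * W), mirroring the conclusion of the abstract.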
23

Adaptive Coding and Modulation Techniques for HF Communication : Performance of different adaption techniques implemented with the HDL+ protocol

Carlsen, Martin January 2009 (has links)
The main goal of this thesis is to present two good alternatives to the HDL+ protocol proposed for ratification in STANAG 4538, as HDL+ is partially restricted by patent claims. The HDL+ protocol is used as a starting point, and to work around the patented parts, the adaptive process is altered and the code-combining process is removed for the highest rate. To simplify the comparison between the proposed protocols and HDL+, both proposed protocols are simulated in a MATLAB environment over the same channels for which Harris has presented the throughput capabilities of HDL+. These channels include the AWGN channel, a single-tap channel with flat fading, the ITU-MLD channel, and the ITU-MLD channel with long- and intermediate-time SNR variations. The results show that the current implementation of the proposed protocols does not achieve as high a throughput as HDL+, but there are indications that better results are possible with further development.
24

Computer Assisted Pronunciation Training : Evaluation of non-native vowel length pronunciation

Versvik, Eivind January 2009 (has links)
Computer Assisted Pronunciation Training (CAPT) systems have become popular tools for practicing second languages, since many second-language learners prefer to practice pronunciation in a stress-free environment with no other listeners. No such tool exists for training pronunciation of the Norwegian language. Pronunciation exercises in training systems should target important properties of the language that second-language learners are not familiar with. In Norwegian, two acoustically similar words can be contrasted by vowel length alone; these words are called vowel length words. Vowel length is not distinctive in many other languages. This master's thesis has examined how to build the part of a CAPT system that evaluates non-native vowel length pronunciations. To this end, a vowel length classifier was developed. The approach was to segment utterances using automatic methods (Dynamic Time Warping and Hidden Markov Models) and to extract several classification features from the segmented utterances. A linear classifier, trained by the Fisher linear discriminant principle, was used to discriminate between short and long vowel pronunciations. A database of Norwegian minimal-pair words with respect to vowel length was recorded. Recordings from native Norwegians were used to train the classifier; recordings from non-natives (Chinese and Iranian speakers) were used for testing, resulting in an error rate of 6.7%. Confidence measures further improved the error rate to 3.4% by discarding 8.3% of the utterances, and it can be argued that more than half of the discarded utterances were correctly discarded because of errors in the pronunciation. A CAPT demo developed in a former assignment was improved to use classifiers trained with the described approach.
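A minimal sketch of the Fisher-linear-discriminant training step is given below, assuming each utterance has already been segmented and reduced to a feature vector (for example, absolute and relative vowel durations). The feature set, dimensions, and class labels are illustrative, not the thesis's.

```python
import numpy as np

def fisher_lda_train(X_short, X_long):
    """Fisher linear discriminant: find the projection direction that
    maximizes between-class scatter relative to within-class scatter.
    X_short, X_long: (n_samples, n_features) arrays, one per class."""
    mu0, mu1 = X_short.mean(axis=0), X_long.mean(axis=0)
    # Pooled within-class scatter matrix
    Sw = (np.cov(X_short, rowvar=False) * (len(X_short) - 1)
          + np.cov(X_long, rowvar=False) * (len(X_long) - 1))
    w = np.linalg.solve(Sw, mu1 - mu0)       # discriminant direction
    threshold = w @ (mu0 + mu1) / 2          # midpoint decision boundary
    return w, threshold

def classify(x, w, threshold):
    """Label a feature vector as a long or short vowel pronunciation."""
    return "long" if w @ x > threshold else "short"
```

The midpoint threshold is the simplest choice; the confidence-measure step described in the abstract could correspond to rejecting utterances whose projection `w @ x` falls too close to the threshold.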
28

Distribution Based Spectrum Sensing in Cognitive Radio

Christiansen, Jørgen Berle January 2010 (has links)
Blind spectrum sensing in cognitive radio is addressed in this thesis, with particular emphasis on performance in the low signal-to-noise range. It is shown how methods relying on traditional sample-based estimation, such as the energy detector and autocorrelation-based detectors, suffer at low SNRs. We attempt to solve this problem by investigating how higher-order statistics and information-theoretic distance measures can be applied to spectrum sensing. Results from a thorough literature survey indicate that the Kullback-Leibler (KL) divergence, an information-theoretic distance, is promising for devising a novel cognitive radio spectrum sensing scheme. Two novel detection algorithms based on KL divergence estimation are proposed; however, only one of them has a fully proven theoretical foundation, while the other has a partial theoretical framework supported by empirical results. The detection performance of the two proposed detectors is compared with two reference detectors: the energy detector and an autocorrelation-based detector. Simulations show that the proposed KL divergence-based algorithms perform worse than the energy detector in all the considered scenarios, while one of them performs better than the autocorrelation-based detector for certain signals. The reason the proposed detectors perform worse than the energy detector, despite the good properties of the estimators at low signal-to-noise ratios, is that the KL divergence between signal and noise is small; the low divergence stems from the fact that signal and noise have very similar probability density functions. Detection performance is also assessed by applying the detectors to raw data of a downconverted UMTS signal. It is shown that the noise distribution deviates from the standard assumption (circularly symmetric complex white Gaussian). Due to this deviation, the autocorrelation-based reference detector and the two proposed KL divergence-based detectors are challenged: these detectors rely heavily on the aforementioned assumption and fail to function properly when applied to signals with deviating characteristics.
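A toy version of a histogram-based KL-divergence detection rule, alongside an energy detector for reference, might look as follows. This is a hypothetical sketch, not either of the thesis's proposed algorithms; the bin count, smoothing, and thresholds are illustrative.

```python
import numpy as np

def kl_divergence_detector(samples, noise_ref, bins=30, threshold=0.1):
    """Hypothetical sensing rule: compare the histogram of the observed
    samples against a noise-only reference recording; declare a signal
    present when the estimated KL divergence exceeds a threshold."""
    lo = min(samples.min(), noise_ref.min())
    hi = max(samples.max(), noise_ref.max())
    p, _ = np.histogram(samples, bins=bins, range=(lo, hi))
    q, _ = np.histogram(noise_ref, bins=bins, range=(lo, hi))
    # Laplace smoothing avoids log(0) and division by zero in empty bins
    p = (p + 1) / (p.sum() + bins)
    q = (q + 1) / (q.sum() + bins)
    kl = float(np.sum(p * np.log(p / q)))
    return kl > threshold

def energy_detector(samples, noise_var, threshold_factor=1.5):
    """Reference energy detector: average power against a scaled noise floor."""
    return np.mean(np.abs(samples) ** 2) > threshold_factor * noise_var
```

Note how the sketch exhibits the abstract's central caveat: when the signal-plus-noise distribution is nearly Gaussian with near-noise variance, the two histograms coincide, the divergence stays below any usable threshold, and the rule fails even though the energy detector may still see a power offset.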
29

3D-overflatemodellering basert på stereoskopiske bilder / 3D Surface Modeling based on Stereoscopic Images

Røren, Thomas Thorsen January 2012 (has links)
This project concerned the development of a prototype 3D model based on stereoscopic images. The purpose is to model chronic skin wounds for visualization and depth estimation of the wounds. The modeling can be used for area and volume calculations of the wounds, which provide valuable diagnostic information. Geometric calibration, image rectification, and image matching of the stereo images were carried out to perform the modeling. Challenges in these processes were related to the precision of image matching in homogeneous regions and to the height resolution of the model. The image matching can be improved by preprocessing the stereo images to increase contrast, and the height resolution can be increased by changing the experimental setup. As a continuation of this project, the 3D modeling will be combined with hyperspectral imaging to additionally provide spectral information about the wound.
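The depth-estimation step in such a pipeline rests on standard stereo triangulation: after calibration, rectification, and matching, the per-pixel disparity d converts to depth as Z = f * B / d. A minimal sketch, with illustrative parameter values:

```python
import numpy as np

def disparity_to_depth(disparity, focal_px, baseline_mm):
    """Triangulation on rectified stereo images: Z = f * B / d.
    disparity: per-pixel disparity in pixels (0 marks unmatched pixels);
    focal_px: focal length in pixels; baseline_mm: camera separation.
    Returns depth in the same units as the baseline."""
    d = np.asarray(disparity, dtype=float)
    depth = np.full_like(d, np.inf)   # unmatched pixels get infinite depth
    valid = d > 0
    depth[valid] = focal_px * baseline_mm / d[valid]
    return depth
```

The formula also explains the height-resolution challenge noted in the abstract: depth resolution degrades quadratically with distance, so it can be improved by widening the baseline or increasing the focal length, i.e. by changing the experimental setup.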
