  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
111

Ultrasound Contrast Imaging - Improved Tissue Suppression in Amplitude Modulation

Hofstad, Erlend Fagertun January 2006 (has links)
<p>The ability to image myocardial perfusion is important for detecting coronary disease. GE Vingmed Ultrasound images perfusion using a contrast agent in combination with a pulse inversion (PI) technique, but this technique does not perform adequately for all patients. Other techniques have therefore been tested, including transmission of pulses with different amplitudes (amplitude modulation, AM), to enhance the nonlinear signal from contrast bubbles. A weakness of AM, however, is the difficulty of achieving sufficient cancellation of the linear tissue signal. In this diploma work an effort has been made to improve the tissue suppression in amplitude modulation. First, the source of the insufficient suppression was sought by measuring electrical and acoustical pulses. This examination revealed an asymmetry between pulses of different amplitude. To reduce this error, several attempts at constructing a compensation filter were made, finally resulting in a filter created from echo data acquired from a tissue-mimicking phantom. The filter was then tested on a flow phantom to see how it affected the signal from tissue and contrast bubbles, compared with the former use of a constant instead of the filter. The comparison showed a 1.5-3.2 dB increase in tissue suppression (TS). Unfortunately, the filtering process slightly reduced the contrast signal as well, resulting in a smaller increase in Contrast-to-Tissue Ratio (CTR) than in TS: 1.0-2.8 dB. During the work, the source of another problem concerning tissue suppression was discovered. In earlier work by the author cite{prosjekt}, the experimental results suffered from low TS around the transmitted frequency, which could not be explained at that time. This problem was found to be caused by reverberations from one pulse interfering with the echoes from the next pulse.
The solution suggested in this thesis is to transmit pulses in such a way that every pulse used to create an image is preceded by a comparable pulse. For instance, suppose a technique employs two pulses to create an image, where the first has half the amplitude and opposite polarity of the second. Then, to eliminate the reverberations, the first imaging pulse must be preceded by a pulse that has half the amplitude and opposite polarity of the pulse preceding the second imaging pulse.</p>
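The amplitude-modulation principle this work builds on can be sketched in a few lines: echoes from a half-amplitude and a full-amplitude transmit are combined so that any linearly scaled (tissue-like) response cancels, while a nonlinear (bubble-like) response leaves a residual. The toy quadratic nonlinearity below is invented for illustration and is not the thesis's processing chain:

```python
import numpy as np

def am_residual(echo_full, echo_half):
    """Amplitude-modulation residual: scale the half-amplitude echo by 2
    and subtract. A perfectly linear scatterer cancels exactly; a
    nonlinear (bubble) response leaves a residual signal."""
    return echo_full - 2.0 * echo_half

# Toy transmit pulse and scatterer models (hypothetical, illustration only)
t = np.linspace(0, 1e-6, 200)
pulse = np.sin(2 * np.pi * 2.5e6 * t)

linear = am_residual(pulse, 0.5 * pulse)                  # tissue-like echo
bubble = am_residual(pulse + 0.1 * pulse**2,              # invented quadratic
                     0.5 * pulse + 0.1 * (0.5 * pulse)**2)  # nonlinearity

print(np.max(np.abs(linear)))   # essentially zero: tissue suppressed
print(np.max(np.abs(bubble)) > 0)
```

The imperfect cancellation the thesis addresses arises when the real half- and full-amplitude transmit pulses are not exact scaled copies of each other, so the subtraction also leaves a linear tissue residual; the compensation filter is meant to correct that asymmetry before combining.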
112

Practical Evaluation of IEEE 802.15.4/ZigBee Medical Sensor Networks

Hansen, Mats Skogholt, Støa, Stig January 2006 (has links)
<p>In clinical diagnostics and treatment of patients, several biological parameters have to be measured and monitored. Typical physiological parameters important to the medical staff are blood gases, invasive blood pressures, pulse rate, temperature, electrocardiogram (ECG), etc. Introducing a wireless network for the sensor data gives greater flexibility both for the patient and for the medical staff. Wireless connections carry digital data, which together with other digital data may enable novel clinical as well as logistics applications. With computers or PDAs at other locations within the hospital, the medical staff would be able to monitor a patient regardless of his or her position, as long as he or she is connected to the network. The goal of this thesis is to investigate to what extent today's off-the-shelf sensor network platforms are suitable for biomedical sensor networks, i.e. whether the requirements of medical data gathering are fulfilled. IEEE 802.15.4 is a wireless standard that promises great flexibility, low cost, small hardware and low power consumption. It uses the publicly available 2.4 GHz ISM band for the radio and supports a data rate of 250 kbps. Several network structures are supported, and multi-hop transmission promises great flexibility for dynamic networks. Medical sensor data such as ECG and blood pressure require data rates of 3.2 kbps and 1.6 kbps respectively. With these rates, a theoretical upper limit for the network size in terms of number of nodes is found to be 47 nodes. Because the radio uses the increasingly crowded ISM band, coexistence issues have also been investigated. The MICAz platform is used to examine the possibilities of the IEEE 802.15.4 standard in a medical sensor scenario. Several simulation tools are assessed, and TrueTime is found to best suit the simulation needs for the intended experiments.
The experiments carried out examine the possibility of implementing a wireless sensor network for medical sensor data. Four MICAz nodes have been set up in different experiments, and simulations are used to extend the results to a greater number of nodes. The first experiment aims to reveal the range and radio reliability of indoor and outdoor environments; the indoor experiment experiences loss of the direct line-of-sight component, whereas the outdoor experiment always has a strong line-of-sight component. Further, an experiment is set up to find the packet loss under controlled conditions with varying packet sizes and optional resends on faulty transmissions. The same experiments are meant to uncover the effect of interference from IEEE 802.11 stations. Another experiment employs the multi-hop capabilities of the nodes. Finally, a simulation is used to investigate the network breakdown in terms of network density. The signal strength experiment revealed fluctuating results indoors. The outdoor experiment showed that the nodes have an effective working range of 15 meters with low packet loss, and about 25 meters without breaking the connection. The packet loss test for different packet sizes and resends exposed a timing issue in the network access of the nodes: with the standard backoff scheme of the CSMA/CA algorithm, large packet losses were found. No significant increase in packet loss was uncovered in the interference experiments. By implementing a preemptive backoff scheme, far better results were achieved in both the interference-free and interference cases. The multi-hop experiment exposed another shortcoming of the hardware: a large number of packets in the network seems to saturate the microcontroller on the motes, and very irregular results were experienced. The corresponding simulation showed no sign of network saturation.
The breakdown test showed that with the data rates of the medical sensors used, up to 10 nodes can be deployed before the packet error rate passes 5%. For large packets the equivalent number is 20, because of better utilization of the network resources. By implementing a simple compression algorithm known as delta encoding, the capacity of the system was doubled without any added error probability. There are still many unsolved and unaddressed issues regarding medical sensor networks with IEEE 802.15.4. The MICAz platform is too immature for heavy network operations like routing of large amounts of data. The full potential of IEEE 802.15.4, including its beacon mode, is yet to be investigated. Energy and security issues have to be addressed together with QoS. All in all, the results from the experiments point in a negative direction, but with improvements to the platform and OS, together with a well-considered adaptation of the standard, acceptable results may emerge.</p>
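Delta encoding, the compression scheme credited above with doubling capacity, transmits the first sample followed by successive differences; slowly varying signals such as ECG yield small deltas that fit in fewer bits. A minimal sketch (the sample values are hypothetical, not thesis data):

```python
def delta_encode(samples):
    """Delta encoding: keep the first sample, then successive differences."""
    deltas = [samples[0]]
    for prev, cur in zip(samples, samples[1:]):
        deltas.append(cur - prev)
    return deltas

def delta_decode(deltas):
    """Inverse: cumulative sum restores the original samples exactly."""
    out = [deltas[0]]
    for d in deltas[1:]:
        out.append(out[-1] + d)
    return out

ecg = [512, 514, 515, 513, 510, 509]   # hypothetical 10-bit ADC samples
enc = delta_encode(ecg)
assert delta_decode(enc) == ecg        # lossless round trip
print(enc)  # [512, 2, 1, -2, -3, -1] — deltas need far fewer bits
```

Because decoding is an exact cumulative sum, the scheme adds no error probability of its own, matching the observation in the abstract; its cost is that a lost packet corrupts all subsequent samples until the next absolute value is sent.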
113

Evaluation of multiuser scheduling algorithm in OFDM for different services

Bahillo Martinez, Alfonso January 2006 (has links)
<p>The goal of this Master's thesis is to study radio resources shared among users with different service requirements. The analyzed properties of the wireless connection are fairness, throughput and delay for users demanding different services and QoS requirements. Four scheduling algorithms are used for allocating system resources. Two of them, Max Rate and Round Robin, are used as references for throughput and fairness respectively. The other two algorithms, Proportional Fair Scheduling and Rate Craving Greedy, exploit multiuser diversity to improve throughput without compromising fairness. Different fading radio channel models are investigated, but only urban environments and pedestrian users are simulated in this report. OFDM is the technique used to transmit signals over the wireless channel. The performance of these algorithms is analyzed and compared through MATLAB computer simulations.</p>
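Proportional Fair Scheduling, one of the two multiuser-diversity algorithms evaluated, serves in each slot the user maximizing the ratio of instantaneous rate to average delivered throughput. A minimal sketch; the averaging constant and the exponential rate model are illustrative assumptions, not the thesis's simulation parameters:

```python
import random

def proportional_fair(rates, avg_throughput):
    """Pick the user maximizing instantaneous rate / average throughput."""
    return max(range(len(rates)), key=lambda u: rates[u] / avg_throughput[u])

random.seed(1)
n_users, alpha = 4, 0.05        # alpha: EMA averaging factor (assumed)
avg = [1e-6] * n_users          # tiny initial averages avoid division by zero
served = [0] * n_users
for _ in range(2000):
    # Hypothetical fading: i.i.d. exponential instantaneous rates
    rates = [random.expovariate(1.0) for _ in range(n_users)]
    u = proportional_fair(rates, avg)
    served[u] += 1
    for i in range(n_users):    # exponential moving average of served rate
        avg[i] = (1 - alpha) * avg[i] + alpha * (rates[i] if i == u else 0.0)
print(served)  # statistically identical users get roughly equal slot shares
```

The ratio metric is what balances throughput against fairness: a user in a deep fade has a low numerator, but a long-starved user has a low denominator, so each user tends to be served near its own channel peaks.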
114

In vivo Magnetic Resonance Spectroscopy and Diffusion Weighted Magnetic Resonance Imaging for Non-Invasive Monitoring of Treatment Response of Subcutaneous HT29 Xenografts in Mice

Røe, Kathrine January 2006 (has links)
<p>This work investigates whether in vivo magnetic resonance spectroscopy (MRS) and diffusion-weighted magnetic resonance imaging (DW-MRI) can be used for non-invasive monitoring of treatment response in an experimental tumor model. Twenty-nine nude mice with colorectal adenocarcinoma HT29 xenografts on each flank were included in two separate experiments. In the first experiment, control tumors were compared to tumors irradiated with 15 Gy at Day 2. MR baseline values were established at Day 1, followed by four post-treatment MR examinations. Mice were sacrificed for histological response evaluation and high-resolution ex vivo magic angle spinning (HR-MAS) MRS of tumor tissue samples, for correlation with in vivo MR data. The second experiment included three groups receiving combined chemoradiation therapy: a control group, a Capecitabine (359 mg/kg daily, Day 1 - Day 5) group, and a Capecitabine (359 mg/kg daily, Day 1 - Day 5) + Oxaliplatin (10 mg/kg at Day 2) group. All left-sided tumors were irradiated with 15 Gy at Day 2. Three repeated MR examinations were compared to the MR baseline values established at Day 1. After the MR examinations the mice were sacrificed for histological response evaluation. The choice of chemotherapy was based on a clinical patient study currently running at Rikshospitalet-Radiumhospitalet HF, the LARC-RRP (Locally Advanced Rectal Cancer - Radiation Response Prediction) study. In Experiment 1, localized 1H MR spectra were acquired at short (35 ms) and long (144 ms) echo times (TEs) using a single-voxel technique. The metabolite choline is related to tumor growth. The choline peak area relative to the unsuppressed 35 ms TE water area in the same voxel, i.e. the normalized choline ratio, was assessed in all MRS examinations. For both TEs, the choline ratio increased after irradiation, followed by a decrease and a renewed increase 12 days after irradiation.
In Experiment 1, statistically significant differences at the 0.1 level were observed between the choline ratios at Day 5 and Day 12 (p = 0.068) for short TE, and between the ratios at Day 3 and Day 8 (p = 0.05) for long TE. The change in choline ratio was in accordance with the tumor necrotic fraction (NF) found in histological analyses. Principal component analysis (PCA) revealed a correlation between the score values of ex vivo HR-MAS MR spectra and necrosis, suggesting a correlation between ex vivo and in vivo MRS. In both experiments, the diffusion in the HT29 xenografts varied during treatment. There was a correlation between the amount of necrosis in the tumor and the apparent diffusion coefficient (ADC) calculated from the DW-MRI examinations. In Experiment 1, statistically significant differences at the 0.1 level were observed between the ADCs at Day 3 and Day 5 (p = 0.05), between Day 5 and Day 12 (p = 0.068), and between Day 8 and Day 12 (p = 0.068). The HT29 xenografts responded to treatment with an initial increase of necrosis due to the short-term effect of treatment, stimulating development of fibrosis. In accordance with the changes in choline and ADC, the level of necrosis increased 8 - 12 days after the start of treatment, which might correspond to the long-term effect of treatment. The findings in this work show that in vivo MRS and DW-MRI can be used for non-invasive monitoring of treatment response in an experimental tumor model, suggesting that in vivo MRS and DW-MRI could yield important information about a tumor's response to therapy.</p>
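The ADC values discussed above come from fitting a mono-exponential decay S(b) = S0·exp(−b·ADC) to the diffusion-weighted signal. A minimal two-point version of that calculation, with hypothetical signal intensities rather than thesis data:

```python
import math

def apparent_diffusion_coefficient(s0, sb, b):
    """Solve the mono-exponential model S(b) = S0 * exp(-b * ADC) for ADC.
    b in s/mm^2, ADC in mm^2/s; s0 and sb are signal intensities at
    b = 0 and at the diffusion-weighted b-value."""
    return math.log(s0 / sb) / b

# Hypothetical intensities from a two-b-value acquisition
s0, sb, b = 1000.0, 367.9, 1000.0   # b = 1000 s/mm^2
adc = apparent_diffusion_coefficient(s0, sb, b)
print(adc)  # about 1.0e-3 mm^2/s
```

Rising necrosis tends to raise the ADC because cell membranes that restrict water diffusion break down, which is the physical basis for the necrosis-ADC correlation reported in the abstract.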
115

Estimation of Size Distribution and Abundance of Zooplankton based on Measured Acoustic Backscattered Data

Storetvedt, Kjetil January 2006 (has links)
<p>In recent years the scientific community has been investigating the possibility of using zooplankton as a commercial resource. It is therefore of interest to determine the size distribution and abundance of zooplankton. NTNU has for this purpose developed an Acoustical Plankton Recorder (APR) for determination of the size distribution and abundance of plankton. The system utilizes three frequencies, namely 200 kHz, 710 kHz and 1 MHz. A particular species, Calanus finmarchicus, which primarily has a size range of 1-3 mm, is considered the most interesting one since it has a large population in the Norwegian Sea. This report describes the signal processing from the “raw” values measured with the APR to the calculated plankton distribution. The following steps have been carried out:
• First, the measured values are processed with an exact “Time Varied Gain” (TVG) function. This function compensates for the range from the APR to the target using information about the absorption, the range, the pulse length and the bandwidth of the receiver. The exact TVG function has been used because the ordinary TVG function gives a positive bias for ranges below 10 meters.
• After the TVG function, the echo integrator equation with noise subtraction is applied. This reduces the effect of noise in the measurements and improves the linearity principle.
• Finally, inversion is carried out. Two algorithms are used for the inversion, namely “Least Squares non-Negative” and the GA algorithm. The inversion tries to find the best fit between the measured data and the mathematically modelled plankton distribution, thereby calculating the size distribution and abundance of plankton.
The echo integration with noise subtraction works by calculating the energy in the measurement over a series of samples corresponding to a distance interval in active mode.
The samples corresponding to the same distance are then used to calculate the energy in passive mode, and this value is subtracted from the energy in active mode. The accuracy of the method depends on the number of samples used, giving better results with an increasing number of samples. The method therefore has to be used with consideration of the resolution needed in the measurements. The “Least Squares non-Negative” and GA algorithms are compared by testing them on the same synthetic data. In some cases “Least Squares non-Negative” seems to work better, but in other cases the GA algorithm gives the best results. Both methods have problems determining the abundance of smaller plankton when large plankton, or potentially air bubbles, are present.</p>
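The first two processing steps above can be sketched as follows. Note that the sketch uses the *ordinary* 20 log R TVG for brevity; the thesis applies an exact variant (accounting for pulse length and receiver bandwidth) precisely because this formula is positively biased below about 10 m. All sample values are invented:

```python
import math

def tvg_gain_db(r, alpha):
    """Ordinary volume-backscatter TVG: 20*log10(R) spherical-spreading
    loss plus two-way absorption 2*alpha*R (alpha in dB/m)."""
    return 20.0 * math.log10(r) + 2.0 * alpha * r

def echo_energy(samples):
    """Echo integration: sum of squared samples over one depth bin."""
    return sum(s * s for s in samples)

print(tvg_gain_db(100.0, 0.04))    # 40 dB spreading + 8 dB absorption

# Noise subtraction: energy recorded in passive mode (transmitter off)
# is subtracted from the active-mode energy for the same depth bin.
active = [0.9, 1.1, 0.8, 1.0]      # hypothetical TVG-corrected samples
passive = [0.10, 0.12, 0.09, 0.10] # noise-only recording, same bin
signal_energy = echo_energy(active) - echo_energy(passive)
print(signal_energy)
```

As the abstract notes, the subtraction is only accurate on average: with few samples per bin the passive-mode energy estimate is noisy, so finer depth resolution trades off against noise-subtraction accuracy.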
116

Subjective quality evaluation of the effect of packet loss in High-Definition Video

Vorren, Sander Sunde January 2006 (has links)
<p>Video streamed over packet-switched networks such as the Internet is vulnerable to packet loss, which results in a degradation of quality. This degradation can be measured subjectively or by objective measures. The Internet is a best-effort, media-unaware environment where all packets receive equal quality of service (QoS), disregarding the fact that some packets are more essential than others in streaming multimedia applications. Using the differentiated services (DiffServ) model, unequal degrees of QoS are offered, so that essential packets are prioritized through the network. The main objective of this thesis is to conduct an informal subjective evaluation experiment, where the test material consists of high-definition video distorted by various packet loss rates, using both the best-effort Internet and DiffServ as underlying channel models. The results from the subjective evaluation experiment are compared to those of objective video quality estimation to see how well the objective models perform. The video sequences are encoded with the H.264/AVC video compression standard and transmitted in RTP packets. Packet loss is introduced with a DiffServ simulator, and the decoded distorted sequences are assessed. Results show that NTIA and SSIM were the video quality models with the highest and lowest performance, respectively, in terms of PLCC, SRCC and RMSE. The NTIA model had statistically significantly higher performance than SSIM for PLCC and SRCC at a 95% confidence level. When comparing packet loss rate against the objective measures, the performance of best effort degrades more rapidly than that of DiffServ. However, the subjective evaluations did not show any statistically significant differences between the two channel models at a 90% confidence level. The DMOS values were categorized into low, medium and high packet loss rates.
Studying the high (5%-10%) packet loss rate category, the DiffServ model achieved a higher mean DMOS value than the Best Effort model. For the low (0%-2.5%) and medium (2.5%-4%) packet loss rate categories, the Best Effort model achieved a higher mean DMOS value than the DiffServ model.</p>
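The PLCC and SRCC figures used to rank the objective models are plain correlation computations between objective scores and subjective DMOS values. A self-contained sketch with invented score/DMOS pairs, not the thesis's data:

```python
def pearson(x, y):
    """Pearson linear correlation coefficient (PLCC)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def spearman(x, y):
    """Spearman rank-order correlation (SRCC): Pearson on the ranks.
    Naive ranking; ties are not handled here."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order):
            r[i] = float(rank)
        return r
    return pearson(ranks(x), ranks(y))

# Hypothetical model scores vs. DMOS from a subjective test
objective = [0.95, 0.80, 0.60, 0.40, 0.20]
dmos      = [4.8, 4.1, 3.2, 2.5, 1.4]
print(round(pearson(objective, dmos), 3))   # ≈ 0.998: near-linear fit
print(round(spearman(objective, dmos), 3))  # 1.0: perfectly monotone
```

PLCC rewards linear agreement with the subjective scores, while SRCC only rewards preserving their rank order; evaluating both, as the thesis does, separates calibration errors from ordering errors.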
117

Automatic Intima-Lumen detection in Cardiovascular Ultrasound.

Edvardsen, Kristin Holm January 2006 (has links)
<p>The main goal of this thesis has been to develop an algorithm that robustly identifies the carotid artery wall boundaries throughout a heart cycle in an ultrasound image. An existing automatic vessel detection algorithm (AVDA) uses tissue velocity imaging (TVI) and B-mode data to score candidate points according to various criteria; the candidate pair with the lowest score is selected. AVDA was extended by implementing two alternatives to the existing external cost criterion: the first combines information on the gradient and the standard deviation of the intensity signatures (the GradTrans criterion, GTC); the second exploits the shape of the intensity signature across the vessel wall in the detection of the intima-lumen interfaces, using a Model Matching criterion with 1.5 Gaussian curves (MMC1.5). An edge criteria modeling tool (ECMT) was developed for studying the intensity signatures across the vessel wall, to find out how the various cost criteria score the signatures of different datasets. In addition, the ECMT was used for parameter tuning. The implemented criteria were verified by comparing automatically detected edges with manually detected edges on 22 datasets. In addition, the automatically detected vessel diameter was compared against the manually detected diameter on 49 datasets. The verification indicated that the GTC is not a good criterion for detecting the intima-lumen interface: it either failed completely to detect the wall, or detected the media-adventitia boundary instead of the intima-lumen boundary. The MMC1.5 criterion, on the other hand, seems promising; it often detects the wall correctly or with only a small deviation. Compared with the manual diameter measurement, MMC1.5 had a bias of 0.014 mm, and the standard deviation of the error was 1.056 mm. In some images the criterion failed completely to detect the wall correctly, the reason being that a similar structure in the wall or an artifact in the lumen was detected instead.
After removing these outliers, the diameter detection by MMC1.5 had a bias of 0.146 mm and a standard deviation of 0.347 mm. The criterion has to be developed further to be more robust, less time consuming, and to overcome the cases of complete failure. A feasibility study was done to see whether the AVDA can be used on brachial artery recordings to increase the efficiency of Flow Mediated Dilation measurements. The study seems promising when using the gradient criterion. This makes it possible to automatically measure diameter changes throughout a heart cycle, as well as to measure differences in diameter between recordings.</p>
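The model-matching idea behind MMC1.5 can be illustrated with a plain normalized-correlation score of a measured intensity signature against a Gaussian-shaped template. This is a simplified stand-in: the actual criterion uses a 1.5-Gaussian wall model whose exact parameterization is not reproduced here, and all values below are invented:

```python
import math

def gaussian_template(n, center, sigma):
    """Gaussian-shaped intensity template (hypothetical stand-in for
    the thesis's 1.5-Gaussian vessel-wall model)."""
    return [math.exp(-0.5 * ((i - center) / sigma) ** 2) for i in range(n)]

def match_score(signature, template):
    """Normalized correlation: 1.0 means a perfect shape match, so a
    model-matching cost could be defined as 1 - match_score."""
    num = sum(s * t for s, t in zip(signature, template))
    den = (sum(s * s for s in signature)
           * sum(t * t for t in template)) ** 0.5
    return num / den

wall = gaussian_template(21, 10, 3)                       # ideal signature
noisy = [v + 0.01 * ((-1) ** i) for i, v in enumerate(wall)]
print(match_score(noisy, gaussian_template(21, 10, 3)))   # close to 1.0
```

A shape-based score of this kind explains both the strength and the failure mode reported above: it locks onto the characteristic wall profile, but any lumen artifact with a similar profile scores just as well.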
118

3D Speckle-Tracking : Sub-voxel Techniques and Tracking Accuracy

Bostad, Ingvild Haraldsen January 2006 (has links)
<p>Speckle tracking is a method useful for examining myocardial function. It allows the cardiologist to examine left ventricular wall function when there are indications of a myocardial infarction. The method is well developed and in diagnostic use for 2D cardiac ultrasound imaging. Different techniques for sub-sample interpolation are investigated and tested. Sub-sample methods can be useful because they allow the tracking algorithm to track smaller displacements. The assessment is based on four-dimensional ultrasound images of a tissue phantom, with time being the fourth dimension. Tracking in 3D ultrasound images makes it possible to follow points in a 2D ultrasound image that move in and out of the 2D imaging plane during a heart cycle. Good tracking in several dimensions demands ultrasound images with sufficiently good resolution in both the lateral and the axial direction, in order to track sub-pixel displacements. To accomplish this, one needs techniques for sub-pixel resolution, and one must find how these affect the quality of the sampling. The results indicate that interpolation improves the tracking accuracy. This is consistent for all the different interpolation methods, but not for the different depths in the ultrasound images, nor for the different ultrasound images.</p>
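One common way to obtain sub-sample displacements of the kind investigated here is parabolic interpolation around the integer-lag peak of the tracking correlation function: a parabola is fitted through the peak and its two neighbours, and its vertex gives a fractional offset. The three correlation values below are invented for illustration, and this is one of several possible sub-sample techniques, not necessarily the thesis's:

```python
def parabolic_subsample(y_prev, y_peak, y_next):
    """Fit a parabola through three samples around the correlation peak
    and return the sub-sample offset of its vertex, in [-0.5, 0.5]."""
    denom = y_prev - 2.0 * y_peak + y_next
    if denom == 0.0:          # flat neighbourhood: no refinement possible
        return 0.0
    return 0.5 * (y_prev - y_next) / denom

# Hypothetical correlation values around an integer-lag maximum
offset = parabolic_subsample(0.90, 1.00, 0.96)
print(offset)  # ≈ 0.214: true peak lies between this sample and the next
```

The total displacement estimate is then the integer lag plus this fractional offset, which is what lets the tracker resolve motion smaller than one pixel.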
119

Dynamic Spectrum Management in DSL Systems

Villalba, Pablo Villalba January 2007 (has links)
<p>The candidate shall study different methods for dynamic spectrum management in DSL systems, with main emphasis on autonomous methods like iterative waterfilling. The methods shall be implemented in Matlab.</p>
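Iterative waterfilling, the autonomous method emphasized above, has each DSL user repeatedly waterfill its transmit power against the background noise plus the crosstalk produced by the other users' current spectra, with no central coordination. A small sketch in Python (the thesis calls for Matlab; the channel gains, noise level and power budgets below are invented):

```python
def waterfill(noise, power_budget):
    """Single-user waterfilling: power[k] = max(0, mu - noise[k]) on each
    tone, with the water level mu chosen so powers sum to the budget."""
    active = sorted(range(len(noise)), key=lambda k: noise[k])
    while active:
        mu = (power_budget + sum(noise[k] for k in active)) / len(active)
        if mu > noise[active[-1]]:   # every remaining tone is under water
            break
        active.pop()                 # drop the worst (highest-noise) tone
    return [max(0.0, mu - n) for n in noise]

def iterative_waterfilling(direct, cross, sigma, budgets, iters=20):
    """Autonomous DSM: each user waterfills against noise + crosstalk
    from the other users' current spectra, repeated until settled.
    cross[v][u][k] is user v's crosstalk gain into user u on tone k."""
    n_users, n_tones = len(direct), len(direct[0])
    p = [[budgets[u] / n_tones] * n_tones for u in range(n_users)]
    for _ in range(iters):
        for u in range(n_users):
            interference = [sigma + sum(cross[v][u][k] * p[v][k]
                                        for v in range(n_users) if v != u)
                            for k in range(n_tones)]
            # Effective noise seen by user u, normalized by direct gain
            effective = [interference[k] / direct[u][k]
                         for k in range(n_tones)]
            p[u] = waterfill(effective, budgets[u])
    return p

# Hypothetical two-user, two-tone line (all gains illustrative only)
direct = [[1.0, 0.5], [0.6, 1.0]]
cross = [[None, [0.10, 0.05]],
         [[0.08, 0.10], None]]
p = iterative_waterfilling(direct, cross, sigma=0.01, budgets=[1.0, 1.0])
print([[round(x, 3) for x in row] for row in p])
```

Each user's update is selfish, which is what makes the method autonomous; the iteration typically settles at a Nash equilibrium of the resulting power-allocation game rather than at the globally optimal spectra.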
120

Musical descriptors : An assessment of psychoacoustical models in the presence of lossy compression

Gunderson, Steinar Heimdal January 2007 (has links)
<p>A simple system for recognizing music is presented, based on various musical descriptors: numbers that describe some aspect of the music. Several descriptors are discussed; in particular, a novel descriptor, the floor-1 cepstral coefficient (F1CC) measure, a refinement of MFCCs based on the Vorbis psychoacoustic model, is presented and evaluated. Various forms of statistical dimensionality reduction, among them PCA and LDA, are also considered in this context. Finally, a few directions for future work are discussed.</p>
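The PCA step mentioned above projects descriptor vectors onto their directions of maximal variance before matching. A minimal sketch with random stand-in descriptors (the dimensions and data are invented, not the thesis's feature set):

```python
import numpy as np

def pca_reduce(X, k):
    """Project feature vectors onto the top-k principal components.
    X: (n_samples, n_features) matrix of descriptor vectors."""
    Xc = X - X.mean(axis=0)
    # SVD of the centered data: rows of Vt are the principal directions
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

# Hypothetical 4-dimensional descriptor vectors for six tracks
rng = np.random.default_rng(0)
X = rng.normal(size=(6, 4))
Z = pca_reduce(X, 2)
print(Z.shape)  # (6, 2): each track reduced to two coordinates
```

Unlike PCA, LDA (also considered in the text) uses class labels and picks directions that separate classes rather than directions of maximal overall variance, which is why the two are worth comparing for a recognition task.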
