61

Application de la réflectométrie GNSS à l'étude des redistributions des masses d'eau à la surface de la terre / Application of GNSS reflectometry to the study of water storage redistribution over the Earth's surface

Roussel, Nicolas 26 November 2015 (has links)
La réflectométrie GNSS (ou GNSS-R) est une technique de télédétection originale et opportuniste qui consiste à analyser les ondes électromagnétiques émises en continu par la soixantaine de satellites des systèmes de positionnement GNSS (GPS, GLONASS, etc.), qui sont captées par une antenne après réflexion sur la surface terrestre. Ces signaux interagissent avec la surface réfléchissante et contiennent donc des informations sur ses propriétés. Au niveau de l'antenne, les ondes réfléchies interfèrent avec celles arrivant directement des satellites. Ces interférences sont particulièrement visibles dans le rapport signal-sur-bruit (SNR, i.e., Signal-to-Noise Ratio), paramètre enregistré par une station GNSS classique. Il est ainsi possible d'inverser les séries temporelles du SNR pour estimer des caractéristiques du milieu réfléchissant. Si la faisabilité et l'intérêt de cette méthode ne sont plus à démontrer, la mise en oeuvre de cette technique pose un certain nombre de problèmes, à savoir quelles précisions et résolutions spatio-temporelles peuvent être atteintes, et par conséquent, quels sont les observables géophysiques accessibles. Mon travail de thèse a pour objectif d'apporter des éléments de réponse sur ce point, et est axé sur le développement méthodologique et l'exploitation géophysique des mesures de SNR réalisées par des stations GNSS classiques. Je me suis focalisé sur l'estimation des variations de hauteur de l'antenne par rapport à la surface réfléchissante (altimétrie) et de l'humidité du sol en domaine continental. La méthode d'inversion des mesures SNR que je propose a été appliquée avec succès pour déterminer les variations locales de : (1) la hauteur de la mer au voisinage du phare de Cordouan du 3 mars au 31 mai 2013 où les ondes de marées et la houle ont pu être parfaitement identifiées ; et (2) l'humidité du sol dans un champ agricole à proximité de Toulouse, du 5 février au 15 mars 2014. 
Ma méthode permet de s'affranchir de certaines restrictions imposées jusqu'à présent dans les travaux antérieurs, où la vitesse de variation verticale de la surface de réflexion était supposée négligeable. De plus, j'ai développé un simulateur qui m'a permis de tester l'influence de nombreux paramètres (troposphère, angle d'élévation du satellite, hauteur d'antenne, relief local, etc.) sur la trajectoire des ondes réfléchies et donc sur la position des points de réflexion. Mon travail de thèse montre que le GNSS-R est une alternative performante et un complément non négligeable aux techniques de mesure actuelles, en faisant le lien entre les différentes résolutions temporelles et spatiales actuellement atteintes par les outils classiques (sondes, radar, diffusiomètres, etc.). Cette technique offre l'avantage majeur d'être basée sur un réseau de satellites déjà en place et pérenne, et est applicable à n'importe quelle station GNSS géodésique, notamment celles des réseaux permanents (e.g., le RGP français). Ainsi, en installant une chaîne de traitement de ces acquisitions de SNR en domaine côtier, il serait possible d'utiliser les mesures continues des centaines de stations pré-existantes, et d'envisager de réaliser des mesures altimétriques à l'échelle locale, ou de mesurer l'humidité du sol pour les antennes situées à l'intérieur des terres. / GNSS reflectometry (or GNSS-R) is an original and opportunistic remote sensing technique based on the analysis of the electromagnetic waves continuously emitted by GNSS positioning systems satellites (GPS, GLONASS, etc.) that are captured by an antenna after reflection on the Earth's surface. These signals interact with the reflective surface and hence contain information about its properties. When they reach the antenna, the reflected waves interfere with those coming directly from the satellites. This interference is particularly visible in the signal-to-noise ratio (SNR) parameter recorded by conventional GNSS stations. 
It is thus possible to invert the SNR time series to estimate the reflective surface characteristics. While the feasibility and usefulness of this method are well established, the implementation of this technique poses a number of issues, namely which spatio-temporal accuracies and resolutions can be achieved, and consequently which geophysical observables are accessible. The aim of my PhD research work is to provide some answers on this point, focusing on the methodological development and geophysical exploitation of the SNR measurements performed by conventional GNSS stations. I focused on the estimation of variations in the antenna height relative to the reflecting surface (altimetry) and on the soil moisture in continental areas. The SNR data inversion method that I propose has been successfully applied to determine local variations of: (1) the sea level near the Cordouan lighthouse (not far from Bordeaux, France) from March 3 to May 31, 2013, where the main tidal periods and waves have been clearly identified; and (2) the soil moisture in an agricultural plot near Toulouse, France, from February 5 to March 15, 2014. My method eliminates some restrictions imposed in earlier work, where the velocity of the vertical variation of the reflective surface was assumed to be negligible. Furthermore, I developed a simulator that allowed me to assess the influence of several parameters (troposphere, satellite elevation angle, antenna height, local relief, etc.) on the path of the reflected waves and hence on the position of the reflection points. My work shows that GNSS-R is a powerful alternative and a significant complement to current measurement techniques, establishing a link between the different temporal and spatial resolutions currently achieved by conventional tools (sensors, radar, scatterometers, etc.). 
This technique offers the major advantage of relying on an already-deployed and sustainable satellite network, and can be applied to any GNSS geodetic station, including those of permanent networks (e.g., the French RGP). Therefore, by installing a processing chain for these SNR acquisitions, data from hundreds of pre-existing stations could be used to make local altimetry measurements in coastal areas or to estimate soil moisture for inland antennas.
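The SNR-inversion idea described above can be illustrated with a short numerical sketch. This is not the author's processing chain, only the standard GNSS-R relation: the reflected signal adds an oscillation in sin(elevation) whose frequency is 2h/λ, where h is the antenna height above the reflecting surface, so a periodogram over candidate heights recovers h. All values here (wavelength, height, elevation range, phase) are assumed for illustration.

```python
import numpy as np

# Synthetic SNR interference pattern: the reflected signal adds an
# oscillation in sin(elevation) whose frequency is 2h/lambda.
lam = 0.1903          # GPS L1 wavelength [m]
h_true = 5.0          # assumed antenna height above the water [m]
elev = np.radians(np.linspace(5, 30, 500))   # satellite elevation angles
x = np.sin(elev)
snr = 0.5 * np.cos(4 * np.pi * h_true / lam * x + 0.3)  # detrended SNR

# Brute-force periodogram over candidate heights: project the SNR
# series onto cos/sin at each trial frequency and keep the peak.
h_grid = np.linspace(1.0, 10.0, 2000)
power = np.empty_like(h_grid)
for i, h in enumerate(h_grid):
    phase = 4 * np.pi * h / lam * x
    power[i] = np.hypot(snr @ np.cos(phase), snr @ np.sin(phase))

h_est = h_grid[power.argmax()]
print(f"estimated reflector height: {h_est:.2f} m")  # close to 5.0
```

Using both quadratures makes the estimate insensitive to the unknown phase offset of the reflection.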
62

Hierarchical clustering using equivalence test : application on automatic segmentation of dynamic contrast enhanced image sequence / Clustering hiérarchique en utilisant le test d’équivalent : application à la segmentation automatique des séries dynamiques de perfusion

Liu, Fuchen 11 July 2017 (has links)
L'imagerie de perfusion permet un accès non invasif à la micro-vascularisation tissulaire. Elle apparaît comme un outil prometteur pour la construction de biomarqueurs d'imagerie pour le diagnostic, le pronostic ou le suivi de traitement anti-angiogénique du cancer. Cependant, l'analyse quantitative des séries dynamiques de perfusion souffre d'un faible rapport signal sur bruit (SNR). Le SNR peut être amélioré en faisant la moyenne de l'information fonctionnelle dans de grandes régions d'intérêt, qui doivent néanmoins être fonctionnellement homogènes. Pour ce faire, nous proposons une nouvelle méthode pour la segmentation automatique des séries dynamiques de perfusion en régions fonctionnellement homogènes, appelée DCE-HiSET. Au coeur de cette méthode, HiSET (Hierarchical Segmentation using Equivalence Test ou Segmentation hiérarchique par test d'équivalence) propose de segmenter des caractéristiques fonctionnelles ou signaux (indexés par le temps par exemple) observés discrètement et de façon bruitée sur un espace métrique fini, considéré comme un paysage, avec un bruit sur les observations indépendant, gaussien et de variance connue. HiSET est un algorithme de clustering hiérarchique qui utilise la p-valeur d'un test d'équivalence multiple comme mesure de dissimilarité et se compose de deux étapes. La première exploite la structure de voisinage spatial pour préserver les propriétés locales de l'espace métrique, et la seconde récupère les structures homogènes spatialement déconnectées à une échelle globale plus grande. Étant donné un écart d'homogénéité $\delta$ attendu pour le test d'équivalence multiple, les deux étapes s'arrêtent automatiquement par un contrôle de l'erreur de type I, fournissant un choix adaptatif du nombre de régions. Le paramètre $\delta$ apparaît alors comme paramètre de réglage contrôlant la taille et la complexité de la segmentation. 
Théoriquement, nous prouvons que, si le paysage est fonctionnellement constant par morceaux avec des caractéristiques fonctionnelles bien séparées entre les morceaux, HiSET est capable de retrouver la partition exacte avec grande probabilité quand le nombre de temps d'observation est assez grand. Pour les séries dynamiques de perfusion, les hypothèses dont dépend HiSET sont obtenues à l'aide d'une modélisation des intensités (signaux) et d'une stabilisation de la variance qui dépend d'un paramètre supplémentaire $a$ et est justifiée a posteriori. Ainsi, DCE-HiSET est la combinaison d'une modélisation adaptée des séries dynamiques de perfusion avec l'algorithme HiSET. À l'aide de séries dynamiques de perfusion synthétiques en deux dimensions, nous avons montré que DCE-HiSET se révèle plus performant que de nombreuses méthodes de clustering de pointe. En termes d'application clinique de DCE-HiSET, nous avons proposé une stratégie pour affiner une région d'intérêt grossièrement délimitée par un clinicien sur une série dynamique de perfusion, afin d'améliorer la précision de la frontière des régions d'intérêt et la robustesse de l'analyse basée sur ces régions tout en diminuant le temps de délimitation. La stratégie de raffinement automatique proposée est basée sur une segmentation par DCE-HiSET suivie d'une série d'opérations de type érosion et dilatation. Sa robustesse et son efficacité sont vérifiées grâce à la comparaison des résultats de classification, réalisée sur la base des séries dynamiques associées, de 99 tumeurs ovariennes et avec les résultats de l'anapathologie sur biopsie utilisés comme référence. Finalement, dans le contexte des séries d'images 3D, nous avons étudié deux stratégies, utilisant des structures de voisinage des coupes transversales différentes, basées sur DCE-HiSET pour obtenir la segmentation de séries dynamiques de perfusion en trois dimensions. (...) 
/ Dynamic contrast-enhanced (DCE) imaging allows non-invasive access to tissue micro-vascularization. It appears as a promising tool to build imaging biomarkers for diagnosis, prognosis, or anti-angiogenesis treatment monitoring of cancer. However, quantitative analysis of DCE image sequences suffers from a low signal-to-noise ratio (SNR). SNR may be improved by averaging functional information in large regions of interest, which however need to be functionally homogeneous. To achieve SNR improvement, we propose a novel method for automatic segmentation of DCE image sequences into functionally homogeneous regions, called DCE-HiSET. As the core of the proposed method, HiSET (Hierarchical Segmentation using Equivalence Test) aims to cluster functional (e.g. with respect to time) features or signals discretely observed with noise on a finite metric space considered to be a landscape. HiSET assumes independent Gaussian noise with known constant level on the observations. It uses the p-value of a multiple equivalence test as dissimilarity measure and consists of two steps. The first exploits the spatial neighborhood structure to preserve the local property of the metric space, and the second recovers (spatially) disconnected homogeneous structures at a larger (global) scale. Given an expected homogeneity discrepancy $\delta$ for the multiple equivalence test, both steps stop automatically through a control of the type I error, providing an adaptive choice of the number of clusters. Parameter $\delta$ appears as the tuning parameter controlling the size and the complexity of the segmentation. Assuming that the landscape is functionally piecewise constant with well separated functional features, we prove that HiSET will retrieve the exact partition with high probability when the number of observation times is large enough. 
In the application to DCE image sequences, the assumption is achieved by modeling the observed intensity in the sequence through a proper variance stabilization, which depends only on one additional parameter $a$. Therefore, DCE-HiSET is the combination of this DCE imaging modeling step with our statistical core, HiSET. Through a comparison on synthetic 2D DCE image sequences, DCE-HiSET has been proven to outperform other state-of-the-art clustering-based methods. As a clinical application of DCE-HiSET, we proposed a strategy to refine a roughly manually delineated ROI on a DCE image sequence, in order to improve the precision at the border of ROIs and the robustness of DCE analysis based on ROIs, while decreasing the delineation time. The automatic refinement strategy is based on the segmentation through DCE-HiSET and a series of erosion-dilation operations. The robustness and efficiency of the proposed strategy are verified by the comparison of the classification of 99 ovarian tumors based on their associated DCE-MR image sequences with the results of biopsy anapathology used as benchmark. Furthermore, DCE-HiSET is also adapted to the segmentation of 3D DCE image sequences through two different strategies with distinct considerations regarding the neighborhood structure across slices. This PhD thesis has been supported by a CIFRE contract of the ANRT (Association Nationale de la Recherche et de la Technologie) with the French company INTRASENSE, which designs, develops and markets medical imaging visualization and analysis solutions, including Myrian®. DCE-HiSET has been integrated into Myrian® and tested to be fully functional.
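The neighbour-merging step at the core of HiSET can be sketched on a toy 1-D example. This is a crude stand-in, not the thesis's multiple equivalence test: adjacent segments merge while the gap between their means stays within the homogeneity discrepancy δ plus a noise allowance, with the noise level σ known, as in the HiSET setting. All values are assumed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D "landscape": piecewise-constant signal observed with known
# Gaussian noise (sigma known, as HiSET assumes).
sigma, delta = 0.2, 0.5
signal = np.concatenate([np.full(50, 0.0), np.full(50, 3.0), np.full(50, 0.0)])
obs = signal + sigma * rng.normal(size=signal.size)

# Start from singleton segments (contiguous index lists).
segments = [[i] for i in range(obs.size)]

def gap_minus_tol(a, b):
    """Negative when two adjacent segments look homogeneous enough to merge."""
    gap = abs(obs[a].mean() - obs[b].mean())
    tol = delta + 3 * sigma * np.sqrt(1 / len(a) + 1 / len(b))
    return gap - tol

# Greedily merge the most similar adjacent pair until none qualifies,
# giving an adaptive number of regions (no preset cluster count).
while len(segments) > 1:
    scores = [gap_minus_tol(segments[i], segments[i + 1])
              for i in range(len(segments) - 1)]
    i = int(np.argmin(scores))
    if scores[i] >= 0:
        break
    segments[i:i + 2] = [segments[i] + segments[i + 1]]

print(len(segments), "homogeneous regions")  # expect 3
```

The stopping rule mimics HiSET's key property: the number of regions is chosen adaptively by the homogeneity criterion rather than fixed in advance.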
63

Smarta nivåmätningar av dagvatten i realtid : Med en ny metod baserad på Time-of-Flight LiDAR sensorn VL53L1X

Burgos, Marcelo January 2022 (has links)
Dagvatten transporteras via dagvattenbrunnar in i ledningsnät och bort från betongytor. Problem uppstår då dessa sätter igen vilket leder till att vägar och bostäder kan översvämmas. Detta medför ett behov att övervaka när dagvattenbrunnarna sätter igen. Sundsvalls kommun har tillsammans med Mittuniversitet använt sig av differentiella tryckgivare för att mäta vattennivåer i syfte att detektera när dagvattenbrunnarna sätter igen. Tryckgivaren fungerar bra under sommartid men ger felaktiga utslag under vintern. Det har därmed föreslagits en kontaktfri lösningsmetod som omfattar ToF LiDAR sensorn VL53L1X. ToF LiDAR sensorer används för att bestämma avstånd till objekt; eftersom dessa inte vanligtvis appliceras inom vattennivåmätningar är detta en ny metod inom det området. Syftet med arbetet var att utreda ifall ToF LiDAR sensorn VL53L1X kunde användas för att mäta vattennivåer samt avgöra om den kan tillämpas för övervakning av dagvatten under sommar- och vintertid. Övergripande mål var att utreda ifall sensorn kunde implementeras i en nod. Ett flertal förstudier gjordes för att utreda vilka faktorer som påverkade mätresultatet och för att karakterisera sensorns konfigurering för att anpassa sensorn för vattennivåmätningar i avsikt att effektivisera mätmetoden. Det framgick av förstudierna att faktorerna vattengrumlighet, solljus och avstånd till mätobjektet påverkade mätningarna så att mätresultatet försämrades. Det har konstaterats med arbetet som underlag att sensorn kan mäta vattennivåer och kan tillämpas för att övervaka dagvattennivåer. Mätresultat vid vattennivåmätningar kan åstadkommas med en mätnoggrannhet på ca 28 mm och ett mätfel på ca 46 mm. Mätresultatet gäller under omständigheterna att vattnet är rent, under påverkan av solljus samt att sensorns höjdposition är maximalt 90 cm. / Stormwater is transported through stormwater wells into a pipe network and away from concrete surfaces. 
Problems emerge when these clog, causing flooding of roads and housing. This creates a need to monitor when the wells clog. Sundsvall municipality has, together with Mittuniversitet, used differential pressure sensors to measure water levels in order to detect when the stormwater wells clog. The differential pressure sensor operates well during the summer season but gives inadequate readings during winter. Therefore, a contact-free method based on the ToF LiDAR sensor VL53L1X has been suggested. ToF LiDAR sensors are applied to determine distances and are not usually used to measure water levels, so the suggested method is a novel approach for this type of application. The purpose of the study was to investigate whether the ToF LiDAR sensor VL53L1X could be used to measure water levels and to decide whether it can be applied for monitoring of stormwater during the summer and winter seasons. The overall goal was to investigate whether the sensor could be implemented in a node. Several pre-studies were done to examine which factors influence the measurement results and to characterize the sensor configuration, adapting the sensor for water level measurements and making the measuring method more effective. The pre-studies showed that water turbidity, sunlight, and distance to the measured object degraded the measurement results. Based on these studies, it has been established that the sensor can measure water levels and can be applied to monitor stormwater levels. Water level measurements can be obtained with an accuracy of approximately 28 mm and an error of approximately 46 mm. These results apply under the conditions that the water is clean, under the influence of sunlight, and that the sensor's height position is at most 90 cm.
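The level computation behind such a ToF setup is simple: the water level is the mounting height of the sensor minus the measured distance to the surface, with a median over repeated readings to tame the tens-of-millimetre noise reported above. The mounting height and the burst of readings below are hypothetical, not taken from the thesis.

```python
import statistics

SENSOR_HEIGHT_MM = 900  # sensor mounted 90 cm above the well bottom (assumed)

def water_level_mm(distance_readings_mm):
    """Water level from ToF distance readings (sensor to water surface).

    A median over a burst of readings suppresses outliers such as the
    ~tens-of-mm errors reported for the VL53L1X in this setting.
    """
    d = statistics.median(distance_readings_mm)
    return SENSOR_HEIGHT_MM - d

# Hypothetical burst of readings around 620 mm (note the 640 outlier):
print(water_level_mm([617, 622, 619, 640, 618]))  # -> 281
```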
64

Mechanisms of Deep Brain Stimulation for the Treatment of Parkinson's Disease: Evidence from Experimental and Computational Studies

So, Rosa Qi Yue January 2012 (has links)
Deep brain stimulation (DBS) is used to treat the motor symptoms of advanced Parkinson's disease (PD). Although this therapy has been widely applied, the mechanisms of action underlying its effectiveness remain unclear. The goal of this dissertation was to investigate the mechanisms underlying the effectiveness of subthalamic nucleus (STN) DBS by quantifying changes in neuronal activity in the basal ganglia during both effective and ineffective DBS.

Two different approaches were adopted in this study. The first approach was the unilateral 6-hydroxydopamine (6-OHDA) lesioned rat model. Using this animal model, we developed behavioral tests that were used to quantify the effectiveness of DBS with various frequencies and temporal patterns. These changes in behavior were correlated with changes in the activity of multiple single neurons recorded from the globus pallidus externa (GPe) and substantia nigra reticulata (SNr). The second approach was a computational model of the basal ganglia-thalamic network. The output of the model was quantified using an error index that measured the fidelity of transmission of information in model thalamic neurons. We quantified changes in error index as well as neural activity within the model GPe and globus pallidus interna (GPi, equivalent to the SNr in rats).

Using these two approaches, we first quantified the effects of different frequencies of STN DBS. High frequency stimulation was more effective than low frequency stimulation at reducing motor symptoms in the rat, as well as improving the error index of the computational model. In both the GPe and SNr/GPi from the rat and computational model, pathological low frequency oscillations were present. These low frequency oscillations were suppressed during effective high frequency DBS but not low frequency DBS. Furthermore, effective high frequency DBS generated oscillations in neural firing at the same frequency of stimulation. Such changes in neuronal firing patterns were independent of changes in firing rates.

Next, we investigated the effects of different temporal patterns of high frequency stimulation. Stimulus trains with the same number of pulses per second but different coefficients of variation (CVs) were delivered to the PD rat as well as the PD model. 130 Hz regular DBS was more effective than irregular DBS at alleviating motor symptoms of the PD rat and improving the error index in the computational model. However, the most irregular stimulation pattern was still more effective than low frequency stimulation. All patterns of DBS were able to suppress the pathological low frequency oscillations present in the GPe and SNr/GPi, but only 130 Hz stimulation increased high frequency 130 Hz oscillations. Therefore, the suppression of pathological low frequency neural oscillations was necessary but not sufficient to produce the maximum benefits of DBS.

The effectiveness of regular high frequency STN DBS was associated with a decrease in pathological low frequency oscillations and an increase in high frequency oscillations. These observations indicate that the effects of DBS are not only mediated by changes in firing rate, but also involve changes in neuronal firing patterns within the basal ganglia. The shift in neural oscillations from low to high frequency during effective STN DBS suggests that high frequency regular DBS suppresses pathological firing by entraining neurons to the stimulus pulses.

Therefore, results from this dissertation support the hypothesis that the underlying mechanism of effective DBS is its ability to entrain and regularize neuronal firing, thereby disrupting pathological patterns of activity within the basal ganglia. / Dissertation
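The kind of spectral measure used to detect oscillatory firing can be sketched as follows. The 9 Hz "pathological" oscillation and all other values are assumed for illustration, not taken from the dissertation's recordings; the point is only how a dominant low-frequency component in a firing-rate trace shows up as a spectral peak.

```python
import numpy as np

# Synthetic firing-rate trace: a low-frequency oscillation (9 Hz,
# assumed) riding on a constant baseline rate of 40 spikes/s.
fs = 1000.0                         # sampling rate [Hz]
t = np.arange(0, 4, 1 / fs)         # 4 s of data
rate = 40 + 10 * np.sin(2 * np.pi * 9 * t)

# Power spectrum of the mean-subtracted trace; the argmax locates the
# dominant oscillation frequency.
spec = np.abs(np.fft.rfft(rate - rate.mean())) ** 2
freqs = np.fft.rfftfreq(rate.size, 1 / fs)
peak = freqs[spec.argmax()]
print(f"dominant oscillation: {peak:.1f} Hz")  # 9.0 Hz
```

The same machinery applied before and during stimulation would show the low-frequency peak being suppressed and a peak at the stimulation frequency appearing, which is the signature described above.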
65

Multi-antenna Relay Beamforming with Per-antenna Power Constraints

Xiao, Qiang 27 November 2012 (has links)
Multi-antenna relay beamforming is a promising candidate for next-generation wireless communication systems. The assumption of a sum power constraint at the relay in previous work is often unrealistic in practice, since each antenna of the relay is limited by its own front-end power amplifier and thus has its own individual power constraint. In this thesis, given per-antenna power constraints, we obtain a semi-closed-form solution for the optimal relay beamforming design in two-hop amplify-and-forward relay beamforming and establish its duality with the point-to-point single-input multiple-output (SIMO) beamforming system. Simulation results show that the per-antenna power constraint case has much lower per-antenna peak power and much smaller variance of per-antenna power usage than the sum-power constraint case. A heuristic iterative algorithm to minimize the total power of the relay network is proposed.
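The difference between the two constraint types can be illustrated with a toy scaling example (this is not the thesis's semi-closed-form solution): under a sum power budget, a single global scale factor can leave one antenna far above its individual amplifier cap, while per-antenna budgets bound the peak power directly. All numbers are assumed.

```python
import numpy as np

# Toy relay beamforming weight magnitudes (assumed, unnormalized).
w = np.array([1.0, 0.2, 0.6, 0.1])
P_total, P_ant = 4.0, 1.0        # budgets: total power, per-antenna power

# (a) Sum power constraint: one global scale factor uses the full budget,
#     but the strongest antenna may exceed its own amplifier limit.
w_sum = w * np.sqrt(P_total / np.sum(w ** 2))

# (b) Per-antenna constraints: scale so the largest antenna meets its cap,
#     bounding the per-antenna peak power by construction.
w_per = w * np.sqrt(P_ant / np.max(w ** 2))

print(np.max(w_sum ** 2), np.max(w_per ** 2))  # peak ~2.84 vs 1.0
```

This is why per-antenna constraints yield the lower peak power and smaller power-usage variance reported in the simulation results above.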
67

Determining The Asymmetry In Supernova Explosions By Studying The Radial Velocities Of Ob Runaway Stars

Dincel, Baha 01 July 2012 (has links) (PDF)
Understanding the asymmetry in core-collapse supernova explosions is highlighted by various astrophysicists as a key factor in determining the observational properties of pulsars. The initial kick given by the explosion to the pulsar affects its spin period and space velocity. Up to now, although the observations do not show a direct relation between the observational features of the pulsar and its space velocity, they show a clear relation between the spin period and the magnetic field strength, and hence its radiation processes. The method chosen in this thesis was to trace the companions of the progenitors, if they were in close binaries, which become runaway stars after the supernova explosion. Among the candidates selected in Guseinov et al (2005), the spectral types of 11 runaway candidates from 7 supernova remnants were determined by analyzing their spectroscopic observations. Radial velocity determination was applied to the discovered B6V-type star GSC 03156-01430 inside the supernova remnant G78.2+2.1. Also, by studying the proper motion data, we compared the motion of the runaway star and the related pulsar in order to determine the asymmetry in the supernova explosion. The neutron star PSR 2021+4026 is moving with a 2-D velocity of ~580 km/s with respect to the rest frame of its birth association Cyg OB9, ~550 km/s more than expected in the symmetric case. Reconstructing the pre-supernova binary shows that the asymmetry in the supernova explosion does not depend on the binarity.
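The 2-D (transverse) velocity quoted above follows from the standard proper-motion relation v [km/s] = 4.74 · μ [arcsec/yr] · d [pc], equivalently μ in mas/yr times d in kpc. The sketch below uses assumed illustrative numbers, not the thesis's actual measurements.

```python
# Transverse (2-D) velocity from proper motion and distance.
# v[km/s] = 4.74 * mu[arcsec/yr] * d[pc]  ==  4.74 * mu[mas/yr] * d[kpc]
def transverse_velocity_kms(mu_mas_per_yr, distance_kpc):
    return 4.74 * mu_mas_per_yr * distance_kpc

# Assumed example: a proper motion of 80 mas/yr at 1.5 kpc.
print(round(transverse_velocity_kms(80, 1.5)), "km/s")  # 569 km/s
```

Comparing such a velocity against the rest frame of the birth association (here, Cyg OB9) gives the excess over the symmetric-explosion expectation.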
68

Advanced Transceiver Algorithms for OFDM(A) Systems

Mahmoud, Hisham A. 25 March 2009 (has links)
With increasing advancements in digital technology, future wireless systems promise to support higher data rates, higher mobile speeds, and wider coverage areas, among other features. While further technological developments allow systems to support higher computational complexity, lower power consumption, and larger memory units, other resources remain limited. One such resource, which is of great importance to wireless systems, is the available spectrum for radio communications. To be able to support high data rate wireless applications, there is a need for larger bandwidths in the spectrum. Since the spectrum cannot be expanded, studies have been concerned with fully utilizing the available spectrum. One approach to achieve this goal is to reuse the available spectrum through space, time, frequency, and code multiplexing techniques. Another approach is to optimize the transceiver design so as to achieve the highest throughput over the used spectrum. From the physical layer perspective, there is a need for a highly flexible and efficient modulation technique to carry the communication signal. A multicarrier modulation technique known as orthogonal frequency division multiplexing (OFDM) is one example of such a technique. OFDM has been used in a number of current wireless standards such as the wireless fidelity (WiFi) and worldwide interoperability for microwave access (WiMAX) standards by the Institute of Electrical and Electronics Engineers (IEEE), and has been proposed for future 4G technologies such as the long term evolution (LTE) and LTE-Advanced standards by the 3rd Generation Partnership Project (3GPP), and the Wireless World Initiative New Radio (WINNER) standard by Information Society Technologies (IST). This is due to OFDM's high spectral efficiency, resistance to narrow band interference, support for high data rates, adaptivity, and scalability. 
In this dissertation, OFDM and multiuser OFDM, also known as orthogonal frequency division multiple access (OFDMA), techniques are investigated as candidates for advanced wireless systems. Features and requirements of future applications are discussed in detail, and OFDM's ability to satisfy these requirements is investigated. We identify a number of challenges that, when addressed, can improve the performance and throughput of OFDM-based systems. The challenges are investigated over three stages. In the first stage, minimizing, or avoiding, the interference between multiple OFDMA users as well as adjacent systems is addressed. An efficient algorithm for OFDMA uplink synchronization that maintains the orthogonality between multiple users is proposed. For adjacent channel interference, a new spectrum shaping method is proposed that can reduce the out-of-band radiation of OFDM signals. Both methods increase the utilization of available spectrum and reduce interference between different users. In the second stage, the goal is to maximize the system throughput for a given available bandwidth. The OFDM system performance is considered under practical channel conditions, and the corresponding bit error rate (BER) expressions are derived. Based on these results, the optimum pilot insertion rate is investigated. In addition, a new pilot pattern that improves the system's ability to estimate and equalize various radio frequency (RF) impairments is proposed. In the last stage, acquiring reliable measurements regarding the received signal is addressed. Error vector magnitude (EVM) is a common performance metric that is used in many of today's standards and measurement devices. Inferring the signal-to-noise ratio (SNR) from EVM measurements has been investigated for either high SNR values or data-aided systems. We show that using current methods does not yield reliable estimates of the SNR under other conditions. 
Thus, we consider the relation between EVM and SNR for nondata-aided systems. We provide expressions that allow for accurate SNR estimation under various practical channel conditions.
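The data-aided EVM-SNR rule of thumb discussed above can be sketched as follows. Note the dissertation's point is precisely that this simple relation fails for nondata-aided systems and at low SNR; the sketch shows only the baseline data-aided case, with rms EVM expressed as a fraction of the reference signal power.

```python
import math

# Data-aided rule of thumb: SNR ~ 1 / EVM^2, with EVM as an rms fraction.
def snr_db_from_evm(evm_rms):
    return 10 * math.log10(1.0 / evm_rms ** 2)

print(round(snr_db_from_evm(0.1), 1), "dB")  # 10% EVM -> 20.0 dB
```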
69

Computerised Microtomography : Non-invasive imaging and analysis of biological samples, with special reference to monitoring development of osteoporosis in small animals

Stenström, Mats January 2001 (has links)
The use of computerised microtomography (CμT) in biomedical research is well established, with most applications developed at synchrotron facilities. The possibility to non-invasively monitor morphological changes in biological samples makes it an attractive technique in biomedicine. However, high absorbed doses and long examination times are a disadvantage that limits the possibilities of performing longitudinal examinations. The aim of this work was to optimise CμT using conventional X-ray tubes for applications in non-destructive material testing and for skeleton research in small animals (rat). A calculational model of the imaging system was developed and used to optimise the relation between image quality, expressed as the signal-to-noise ratio (SNR) in detecting a contrasting detail, and imaging time in material testing. The model was modified to optimise the relation between the SNR in detecting a trabecular detail in cancellous bone and the mean absorbed dose in spongiosa and skin for (rat) tibia and femur. Gastrectomized Sprague-Dawley rats were used to initiate osteoporotic changes. In order to detect differences between gastrectomized rats and controls, spatial resolutions of 150 μm or better were needed. The minimum absorbed doses in femur spongiosa at SNR = 5 were 1 mGy to 700 mGy at spatial resolutions from 100 μm to 10 μm. In femur skin, the corresponding minimum absorbed doses were 2 mGy to 2000 mGy. Corresponding values for tibia were 0.3 mGy to 300 mGy for both spongiosa and skin (spatial resolutions from 100 μm to 10 μm). Taking 0.5 Gy as the tolerance limit for the spongiosa dose, longitudinal studies with six repeated examinations will be possible at a spatial resolution of 25 μm in femur, and 17 examinations in tibia.
70

A Comparison of Measures of Signal-To-Noise Ratio, Jitter, Shimmer, and Speaking Fundamental Frequency in Smoking and Nonsmoking Females

Coy, Kelly (Kelly Bishop) 12 1900 (has links)
Fifteen nonsmoking and fifteen smoking females 19 to 36 years of age were evaluated on measures of signal-to-noise ratio (SNR), jitter, shimmer, and speaking fundamental frequency (F0). The results indicated that: 1) there is a significant difference between female smokers and nonsmokers on measures of SNR, mean F0, and maximum F0; and 2) there is no significant difference between female smokers and nonsmokers on measures of jitter, shimmer, and minimum F0. The SNR was found to be a powerful tool which is capable of distinguishing subtle vocal characteristics between the subject groups. It would appear that cigarette smoking may have an impact on the voice before distinct laryngeal pathologies are present.
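Local jitter, one of the measures compared above, can be sketched as the mean absolute difference between consecutive pitch periods divided by the mean period; shimmer is the analogous measure computed on cycle amplitudes rather than cycle lengths. The cycle lengths below are hypothetical, not from the study's recordings.

```python
import statistics

def jitter_percent(periods_ms):
    """Local jitter: mean |T_i - T_(i-1)| over the mean period, in %."""
    diffs = [abs(a - b) for a, b in zip(periods_ms[1:], periods_ms)]
    return 100 * (sum(diffs) / len(diffs)) / statistics.mean(periods_ms)

# Hypothetical cycle lengths (ms) for a ~200 Hz voice:
print(round(jitter_percent([4.8, 5.0, 5.2, 5.0]), 2), "%")  # -> 4.0
```

In practice the periods would come from pitch-marking the acoustic waveform; clinically, sustained healthy phonation shows jitter well under this illustrative value.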
