About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
101

Statistical Analysis of Geolocation Fundamentals Using Stochastic Geometry

O'Lone, Christopher Edward 22 January 2021 (has links)
The past two decades have seen a surge in the number of applications requiring precise positioning data. Modern cellular networks offer many services based on the user's location, such as emergency services (e.g., E911), and emerging wireless sensor networks are being used in applications spanning environmental monitoring, precision agriculture, warehouse and manufacturing logistics, and traffic monitoring, just to name a few. In these sensor networks in particular, obtaining precise positioning data of the sensors gives vital context to the measurements being reported. While the Global Positioning System (GPS) has traditionally been used to obtain this positioning data, the deployment locations of these cellular and sensor networks in GPS-constrained environments (e.g., cities, indoors, etc.), along with the need for reliable positioning, requires a localization scheme that does not rely solely on GPS. This has led to localization being performed entirely by the network infrastructure itself, or by the network infrastructure aided, in part, by GPS. In the literature, benchmarking localization performance in these networks has traditionally been done in a deterministic manner. That is, for a fixed setup of anchors (nodes with known location) and a target (a node with unknown location), a commonly used benchmark for localization error, such as the Cramér-Rao lower bound (CRLB), can be calculated for a given localization strategy, e.g., time-of-arrival (TOA), angle-of-arrival (AOA), etc. While this CRLB calculation provides excellent insight into expected localization performance, its traditional treatment as a deterministic value for a specific setup is limited. Rather than trying to gain insight into a specific setup, network designers are more often interested in aggregate localization error statistics within the network as a whole. Questions such as: "What percentage of the time is localization error less than x meters in the network?" are commonplace.
In order to answer these types of questions, network designers often turn to simulations; however, these come with many drawbacks, such as lengthy execution times and the inability to provide fundamental insights due to their inherent "black box" nature. Thus, this dissertation presents the first analytical solution with which to answer these questions. By leveraging tools from stochastic geometry, anchor positions and potential target positions can be modeled by Poisson point processes (PPPs). This allows for the CRLB of position error to be characterized over all setups of anchor positions and potential target positions realizable within the network. This leads to a distribution of the CRLB, which can completely characterize localization error experienced by a target within the network, and can consequently be used to answer questions regarding network-wide localization performance. The particular CRLB distribution derived in this dissertation is for fourth-generation (4G) and fifth-generation (5G) sub-6 GHz networks employing a TOA localization strategy. Recognizing the tremendous potential that stochastic geometry has in gaining new insight into localization, this dissertation continues by further exploring the union of these two fields. First, the concept of localizability, which is the probability that a mobile is able to obtain an unambiguous position estimate, is explored in a 5G, millimeter wave (mm-wave) framework. In this framework, unambiguous single-anchor localization is possible with either a line-of-sight (LOS) path between the anchor and mobile or, if blocked, then via at least two non-line-of-sight (NLOS) paths. Thus, for a single anchor-mobile pair in a 5G, mm-wave network, this dissertation derives the mobile's localizability over all environmental realizations this anchor-mobile pair is likely to experience in the network.
This is done by: (1) utilizing the Boolean model from stochastic geometry, which statistically characterizes the random positions, sizes, and orientations of reflectors (e.g., buildings) in the environment, (2) considering the availability of first-order (i.e., single-bounce) reflections as well as the LOS path, and (3) considering the possibility that reflectors can either facilitate or block reflections. In addition to the derivation of the mobile's localizability, this analysis also reveals that unambiguous localization, via reflected NLOS signals exclusively, is a relatively small contributor to the mobile's overall localizability. Lastly, using this first-order reflection framework developed under the Boolean model, this dissertation then statistically characterizes the NLOS bias present on range measurements. This NLOS bias is a common phenomenon that arises when trying to measure the distance between two nodes via the time delay of a transmitted signal. If the LOS path is blocked, then the extra distance that the signal must travel to the receiver, in excess of the LOS path, is termed the NLOS bias. Due to the random nature of the propagation environment, the NLOS bias is a random variable, and as such, its distribution is sought. As before, assuming NLOS propagation is due to first-order reflections, and that reflectors can either facilitate or block reflections, the distribution of the path length (i.e., absolute time delay) of the first-arriving multipath component (MPC) is derived. This result is then used to obtain the first NLOS bias distribution in the localization literature that is based on the absolute delay of the first-arriving MPC for outdoor time-of-flight (TOF) range measurements. This distribution is shown to match exceptionally well with commonly assumed gamma and exponential NLOS bias models in the literature, which were only attained previously through heuristic or indirect methods. 
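The closed-form Boolean-model derivations are the dissertation's contribution; as an illustrative stand-in, the overall object (the distribution of the excess delay of the first-arriving single-bounce path) can be approximated by a crude Monte Carlo sketch. All parameter values here, and the simplification of treating each reflector as a single point that always facilitates a reflection, are assumptions for illustration only, not the dissertation's model:

```python
import numpy as np

rng = np.random.default_rng(0)

def first_reflection_delay(tx, rx, lam=1e-4, region=500.0, n_trials=5000):
    """Monte Carlo sketch of the NLOS bias: reflector locations form a
    Poisson point process (PPP); for each realization, take the shortest
    single-bounce path tx -> reflector -> rx and record its excess length
    over the (blocked) LOS distance."""
    d_los = np.linalg.norm(rx - tx)
    biases = []
    for _ in range(n_trials):
        n = rng.poisson(lam * region**2)          # number of reflectors
        if n == 0:
            continue
        pts = rng.uniform(0.0, region, size=(n, 2))  # point-reflector proxy
        path = (np.linalg.norm(pts - tx, axis=1)
                + np.linalg.norm(rx - pts, axis=1))  # single-bounce lengths
        biases.append(path.min() - d_los)            # excess over LOS
    return np.array(biases)
```

An empirical histogram of the returned biases is the quantity that the dissertation instead characterizes in closed form (and compares against gamma and exponential models).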
Finally, the flexibility of this analytical framework is utilized by further deriving the angle-of-arrival (AOA) distribution of the first-arriving MPC at the mobile. This distribution gives novel insight into how environmental obstacles affect the AOA and also represents the first AOA distribution, of any kind, derived under the Boolean model. In summary, this dissertation uses the analytical tools offered by stochastic geometry to gain new insights into localization metrics by performing analyses over the entire ensemble of infrastructure or environmental realizations that a target is likely to experience in a network. / Doctor of Philosophy / The past two decades have seen a surge in the number of applications requiring precise positioning data. Modern cellular networks offer many services based on the user's location, such as emergency services (e.g., E911), and emerging wireless sensor networks are being used in applications spanning environmental monitoring, precision agriculture, warehouse and manufacturing logistics, and traffic monitoring, just to name a few. In these sensor networks in particular, obtaining precise positioning data of the sensors gives vital context to the measurements being reported. While the Global Positioning System (GPS) has traditionally been used to obtain this positioning data, the deployment locations of these cellular and sensor networks in GPS-constrained environments (e.g., cities, indoors, etc.), along with the need for reliable positioning, requires a localization scheme that does not rely solely on GPS. This has led to localization being performed entirely by the network infrastructure itself, or by the network infrastructure aided, in part, by GPS. When speaking in terms of localization, the network infrastructure consists of what are called anchors, which are simply nodes (points) with a known location. These can be base stations, WiFi access points, or designated sensor nodes, depending on the network.
In trying to determine the position of a target (i.e., a user, or a mobile), various measurements can be made between this target and the anchor nodes in close proximity. These measurements are typically distance (range) measurements or angle (bearing) measurements. Localization algorithms then process these measurements to obtain an estimate of the target position. The performance of a given localization algorithm (i.e., estimator) is typically evaluated by examining the distance, in meters, between the position estimates it produces and the actual (true) target position. This is called the positioning error of the estimator. There are various benchmarks that bound the best (lowest) error that these algorithms can hope to achieve; however, these benchmarks depend on the particular setup of anchors and the target. The benchmark of localization error considered in this dissertation is the Cramér-Rao lower bound (CRLB). To determine how this benchmark of localization error behaves over the entire network, all of the various setups of anchors and the target that would arise in the network must be considered. Thus, this dissertation uses a field of statistics called stochastic geometry to model all of these random placements of anchors and the target, which represent all the setups that can be experienced in the network. Under this model, the probability distribution of this localization error benchmark across the entirety of the network is then derived. This distribution allows network designers to examine localization performance in the network as a whole, rather than just for a specific setup, and allows one to obtain answers to questions such as: "What percentage of the time is localization error less than x meters in the network?" Next, this dissertation examines a concept called localizability, which is the probability that a target can obtain a unique position estimate.
Oftentimes localization algorithms can produce position estimates that congregate around different potential target positions, and thus, it is important to know when algorithms will produce estimates that congregate around a unique (single) potential target position; hence the importance of localizability. In fifth generation (5G), millimeter wave (mm-wave) networks, only one anchor is needed to produce a unique target position estimate if the line-of-sight (LOS) path between the anchor and the target is unimpeded. If the LOS path is impeded, then a unique target position can still be obtained if two or more non-line-of-sight (NLOS) paths are available. Thus, over all possible environmental realizations likely to be experienced in the network by this single anchor-mobile pair, this dissertation derives the mobile's localizability, or in this case, the probability the LOS path or at least two NLOS paths are available. This is done by utilizing another analytical tool from stochastic geometry known as the Boolean model, which statistically characterizes the random positions, sizes, and orientations of reflectors (e.g., buildings) in the environment. Under this model, considering the availability of first-order (i.e., single-bounce) reflections as well as the LOS path, and considering the possibility that reflectors can either facilitate or block reflections, the mobile's localizability is derived. This result reveals the roles that the LOS path and the NLOS paths play in obtaining a unique position estimate of the target. Using this first-order reflection framework developed under the Boolean model, this dissertation then statistically characterizes the NLOS bias present on range measurements. This NLOS bias is a common phenomenon that arises when trying to measure the distance between two nodes via the time-of-flight (TOF) of a transmitted signal. 
If the LOS path is blocked, then the extra distance that the signal must travel to the receiver, in excess of the LOS path, is termed the NLOS bias. As before, assuming NLOS propagation is due to first-order reflections and that reflectors can either facilitate or block reflections, the distribution of the path length (i.e., absolute time delay) of the first-arriving multipath component (MPC) (or first-arriving "reflection path") is derived. This result is then used to obtain the first NLOS bias distribution in the localization literature that is based on the absolute delay of the first-arriving MPC for outdoor TOF range measurements. This distribution is shown to match exceptionally well with commonly assumed NLOS bias distributions in the literature, which were only attained previously through heuristic or indirect methods. Finally, the flexibility of this analytical framework is utilized by further deriving the angle-of-arrival (AOA) distribution of the first-arriving MPC at the mobile. This distribution yields the probability that, for a specific angle, the first-arriving reflection path arrives at the mobile at this angle. This distribution gives novel insight into how environmental obstacles affect the AOA and also represents the first AOA distribution, of any kind, derived under the Boolean model. In summary, this dissertation uses the analytical tools offered by stochastic geometry to gain new insights into localization metrics by performing analyses over all of the possible infrastructure or environmental realizations that a target is likely to experience in a network.
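The central object of the dissertation, a distribution of the TOA CRLB over random anchor placements, can be illustrated numerically: draw anchors from a PPP, compute the standard TOA position-error bound for each realization, and read network-wide statistics off the empirical distribution. The dissertation derives this distribution analytically; the sketch below, with arbitrary parameter values, is only a Monte Carlo stand-in:

```python
import numpy as np

rng = np.random.default_rng(1)

def toa_crlb(anchors, target, sigma=1.0):
    """Position-error CRLB for TOA ranging with i.i.d. Gaussian range errors
    of std sigma: trace of the inverse Fisher information J = (1/sigma^2)
    * sum of outer products of unit vectors from target to each anchor."""
    diff = anchors - target
    u = diff / np.linalg.norm(diff, axis=1, keepdims=True)
    J = (u.T @ u) / sigma**2
    return np.trace(np.linalg.inv(J))

def crlb_distribution(lam=1e-4, region=1000.0, n_trials=2000, sigma=1.0):
    """Empirical CRLB distribution over PPP anchor realizations, for a
    target fixed at the center of a square region."""
    target = np.array([region / 2, region / 2])
    vals = []
    for _ in range(n_trials):
        n = rng.poisson(lam * region**2)
        if n < 3:                      # need >= 3 anchors for a 2-D TOA fix
            continue
        anchors = rng.uniform(0.0, region, size=(n, 2))
        vals.append(toa_crlb(anchors, target, sigma))
    return np.array(vals)

crlb = crlb_distribution()
# e.g. fraction of realizations whose RMSE bound is below 2 m:
frac = np.mean(np.sqrt(crlb) < 2.0)
```

The quantity `frac` is exactly the kind of answer to "What percentage of the time is localization error less than x meters?" that the abstract describes, here obtained by simulation rather than by the closed-form distribution.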
102

Russia's national interests towards the Caucasus: implications for Georgian sovereignty

Papava, David Z. 06 1900 (has links)
Approved for public release; distribution is unlimited / This thesis explores the causes of Russian foreign policy towards Georgia. It argues that the Russian Federation continues to pursue a policy which weakens the sovereignty of the Caucasus. The main priority of this thesis is to identify why the Russian Federation seems to be pursuing a set of policies that economically and politically weaken the sovereignty of Georgia. Therefore, this thesis examines the forces and factors of Russian domestic politics that drive Russian national interests towards the Caucasus. The analysis focuses on one particular issue-area: the role of the economic elite in shaping Russia's domestic and foreign policies vis-à-vis the state in the electricity sector. In focusing on the energy policies of the Russian Federation, this thesis reveals the negative consequences for Georgia's sovereignty that result from a strong Russian influence in the region. This thesis analyzes how Russian national interests towards Georgia challenge the latter to establish autonomous decision-making with regard to its foreign policy and to exercise its own authority through an exclusive competence in internal affairs of the state. In conclusion, this thesis offers policy prescriptions on how Georgia might best preserve its sovereignty with respect to the Russian Federation in terms of energy dependency. / Civilian, Ministry of Defense, Georgia
103

Dimensioneringsmetoder för platta på mark utsatt för koncentrerad last / Analysis methods for slabs on ground subjected to concentrated loading

Johansson-Näslund, Sackarias, Gripeteg, Johan January 2016 (has links)
Purpose: Despite many previous articles and tests on the subject "analysis methods for concrete slabs on ground subjected to concentrated loading", there is still uncertainty about which analysis method to use and whether the results correspond to real failure loads. The purpose of this study has been to evaluate and compare different analysis methods for slabs on ground subjected to concentrated loading.

Method: Initially, literature studies were performed in which different analysis methods were studied. Three methods were chosen based on different aspects. It was found that A. Losberg's (1961) method is mainly used in Sweden, while other countries in Europe use Meyerhof's (1962) method. Rao & Singh's (1986) method has a similar approach to Meyerhof's but adds two different types of failure modes. Two peer-reviewed articles were also chosen from which secondary data could be retrieved. The articles described tests where concrete slabs were loaded until failure. The test conditions were used to perform calculations with the three analysis methods. A comparison was made between the test results and the results from the calculations.

Findings: It is concluded that there are some differences between Losberg's, Meyerhof's and Rao & Singh's analysis methods. Largely, the three methods require the same input; they differ in the choice of analysis solution, but despite a degree of variation in the calculation results, the overall picture for the different loading cases is quite unified. For central loading, all analysis methods result in a capacity lower than the test values, varying from 56% to 93% of the failure load. Concerning the edge and corner cases, the spread of results is even wider. Calculations for the reinforced slab result in a capacity higher than the test values, while calculations for the plain concrete slab result in a capacity considerably lower than the test values.

Implications: The results of this study indicate that the three analysis methods are applicable for internal loading. The spread of the results makes it difficult to estimate the margin to the actual failure load, but the safety factors according to Eurocode 2 should provide a safe failure margin. Regarding the edge and corner cases, it is more difficult to draw conclusions due to the large spread of results. Further research and testing is needed.

Limitations: The study is limited to three analysis methods and the results from two articles in which two different concrete slabs were tested. Inclusion of additional analysis methods and articles with test results would expand the generalizability of the study. However, due to the limited extent of the study and the available time, this was not possible.

Keywords: Meyerhof, Rao & Singh, Losberg, concrete slab on ground, concrete slab on grade, point load, concentrated load.
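For orientation, Meyerhof-style capacity checks of the kind compared in this study are often written in terms of the slab's radius of relative stiffness and its moment capacity. The sketch below uses coefficient values as they are commonly quoted in secondary sources; they are transcriptions, not taken from this thesis or verified against Meyerhof (1962), and should be checked against the original before any real use:

```python
import math

def relative_stiffness_radius(E, h, nu, k):
    """Radius of relative stiffness l (Westergaard), consistent SI units:
    E [Pa], slab thickness h [m], Poisson's ratio nu,
    modulus of subgrade reaction k [N/m^3]."""
    return (E * h**3 / (12.0 * (1.0 - nu**2) * k)) ** 0.25

def meyerhof_capacity(M0, a, l, case="internal"):
    """Commonly quoted forms of Meyerhof-type collapse loads for a slab on
    ground: M0 is the (sagging + hogging) moment capacity per unit width,
    a the radius of the loaded contact area, l the radius of relative
    stiffness. Coefficients are assumptions from secondary literature."""
    ratio = a / l
    if case == "internal":
        return 6.0 * M0 * (1.0 + 2.0 * ratio)
    if case == "edge":
        return 3.5 * M0 * (1.0 + 3.0 * ratio)
    if case == "corner":
        return 2.0 * M0 * (1.0 + 4.0 * ratio)
    raise ValueError("case must be 'internal', 'edge' or 'corner'")
```

The internal/edge/corner split mirrors the three loading cases whose calculated capacities the study compares against the test failure loads.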
104

Traitement d'antenne adapté aux modèles linéaires intégrant une interférence structurée. Application aux signaux mécaniques. / Array processing adapted to linear models incorporating structured interference. Application to mechanical signals.

Bouleux, Guillaume 04 December 2007 (has links) (PDF)
This work is set in the context of array processing applied to the linear model. In various domains, such as biomedical applications or RADAR, the number of Directions Of Arrival (DOA) of interest is a reduced subset of all the directions making up the model. We therefore adopt a structured model of the form

Observation = Signal of interest + Structured interference + Noise

where the structured interference is composed of a number of known or estimated Directions Of Arrival. From this model, we propose two types of approaches: (1) we assume knowledge of M-S DOAs out of a total of M, and (2) we wish to estimate M DOAs sequentially.

The literature provides solutions to the problem of estimating S DOAs of interest out of a total of M. The proposed solutions use an orthogonal deflation of the noisy signal subspace. We give a new Cramér-Rao Bound (CRB), which we call the Prior-CRB, associated with this type of model, and we show under which (very restrictive) conditions this bound is lower than the classical CRB derived from the linear model composed of M DOAs. To escape the constraints tied to the orthogonal-deflation model, we then propose to use an oblique deflation in place of the orthogonal one, and we construct new estimators of the DOAs of interest. In view of the simulations, the performance is much better than that of the orthogonal-deflation algorithms, and we explain this performance by deriving the theoretical variances of each of the proposed estimators. Through the analysis of these variances, we show why the oblique projection is more appropriate, and we give an ordering of the variances associated with the algorithms studied.

Here again, the sequential estimation of M DOAs is a problem of great interest. However, the solutions proposed in the literature use orthogonal deflation to sequentially cancel the previously estimated directions via a modified MUSIC criterion. We depart from this by proposing an algorithm that weights the MUSIC pseudo-spectrum with a quadratic zero-forcing function. This approach shows much better performance than the orthogonal-deflation methods and allows resolution well beyond the Rayleigh limit thanks to the control of the weighting function. We further show that this algorithm is efficient and that the propagation error can be cancelled by tuning a parameter of the weighting function. To best characterize the performance of this algorithm, we propose a CRB, which we call the Interfering-CRB, derived from a linear model consisting of one DOA of interest and M-1 interfering DOAs (DOAs previously estimated or still to be estimated). We show that this bound "reflects" the ZF-MUSIC algorithm well.
105

Nouvelles méthodes en filtrage particulaire - Application au recalage de navigation inertielle par mesures altimétriques / New particle filtering methods - Application to inertial navigation updating using altimeter measurements

DAHIA, Karim 04 January 2005 (has links) (PDF)
The objective of this thesis is to develop and study a new type of particle filter called the Kalman-particle kernel filter (KPKF). The KPKF models the conditional density of the state as a mixture of Gaussians centered on the particles, with covariance matrices of small norm. The KPKF algorithm uses a Kalman-type correction of the state and a particle-type correction that modifies the particle weights. A new type of resampling preserves the structure of this mixture. The KPKF combines the advantages of the regularized particle filter in terms of robustness and of the extended Kalman filter in terms of accuracy. This new filtering method has been applied to updating the inertial navigation of an aircraft equipped with a radio altimeter. The results obtained show that the KPKF works for large initial position uncertainty areas.
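A minimal scalar sketch of the measurement update in such a Gaussian-mixture particle filter, assuming a linear measurement z = Hx + noise. This illustrates only the combined Kalman-type and weight-type corrections the abstract mentions; the thesis's full algorithm (prediction step, structure-preserving resampling, nonlinear altimeter model) is not reproduced:

```python
import numpy as np

def kpkf_update(means, covs, weights, z, H=1.0, R=0.5):
    """One measurement update for a scalar-state Gaussian-mixture filter:
    the posterior is kept as a mixture of Gaussians centered on particles.
    Each component receives a per-particle Kalman correction; the weights
    are updated by the predictive likelihood of z under each component."""
    S = H * covs * H + R                       # innovation variances
    K = covs * H / S                           # Kalman gains
    new_means = means + K * (z - H * means)    # Kalman-type correction
    new_covs = (1.0 - K * H) * covs
    # particle-type correction: reweight by predictive likelihood of z
    lik = np.exp(-0.5 * (z - H * means) ** 2 / S) / np.sqrt(2 * np.pi * S)
    w = weights * lik
    return new_means, new_covs, w / w.sum()
```

Components near the measurement gain weight while every component's kernel covariance shrinks, which is the sense in which the scheme blends extended-Kalman accuracy with particle-filter robustness.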
106

Cartes incertaines et planification optimale pour la localisation d'un engin autonome / Uncertain maps and optimal planning for the localization of an autonomous vehicle

Celeste, Francis 10 February 2010 (has links) (PDF)
Important advances have been made in the field of mobile robotics. The growing use of ground robots and small drones is only possible through the addition of autonomous motion capabilities in the operating environment. The problem of localizing the system, by matching measurements from on-board sensors against primitives contained in a map, is essential. This process, which relies on fusion techniques, has been widely studied. In this thesis, we propose methods for planning the motion of a mobile platform with the objective of guaranteeing, at execution time, a localization performance based on an uncertain prior map. A method for the controlled generation of realizations of noisy maps, exploiting the theory of point processes, is presented first. This map database makes it possible to build multi-level maps for localization. The optimization criterion is defined from functionals of the posterior Cramér-Rao bound, which accounts for the uncertainty on the dynamics of the mobile and for the uncertain mapping. We propose different approaches, based on the cross-entropy method, to obtain motion strategies with discrete and continuous dynamic models. The quality of the optimal solutions provided by these heuristic approaches is analyzed using results from extreme value theory. Finally, we outline an approach for the targeted improvement of maps under resource constraints in order to improve localization performance.
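The posterior Cramér-Rao bound used as the planning criterion can be propagated recursively along a candidate trajectory. For a linear-Gaussian state-space model the standard recursion (a special case of the Tichavský et al. recursion) takes the simple form sketched below; the matrices and the reduction to the linear-Gaussian case are illustrative assumptions, not the thesis's full model with map uncertainty:

```python
import numpy as np

def posterior_crb_recursion(J0, F, Q, H, R, n_steps):
    """Posterior CRB recursion for a linear-Gaussian model
    x_{k+1} = F x_k + w_k (cov Q), z_k = H x_k + v_k (cov R):
        J_{k+1} = (Q + F J_k^{-1} F^T)^{-1} + H^T R^{-1} H.
    Returns the position-error bound trace(J_k^{-1}) at each step,
    the kind of functional a planner can score trajectories with."""
    J = J0.copy()
    Rinv = np.linalg.inv(R)
    bounds = []
    for _ in range(n_steps):
        J = np.linalg.inv(Q + F @ np.linalg.inv(J) @ F.T) + H.T @ Rinv @ H
        bounds.append(np.trace(np.linalg.inv(J)))
    return bounds
```

In the thesis, functionals of this bound (with measurement availability depending on the uncertain map) define the cost that the cross-entropy method then optimizes over motion strategies.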
107

Étude théorique et expérimentale du suivi de particules uniques en conditions extrêmes : imagerie aux photons uniques / Theoretical and experimental study of single-particle tracking under extreme conditions: single-photon imaging

Cajgfinger, Thomas 19 October 2012 (has links) (PDF)
This manuscript presents my thesis work on the electron-bombarded CMOS (ebCMOS) single-photon detector with a high readout rate (500 frames/second). The first part compares three ultra-sensitive detectors and their methods for improving single-photon sensitivity: the low-noise CMOS (sCMOS), the electron-multiplying CCD (emCCD) with per-pixel signal multiplication, and the ebCMOS with amplification by an applied electric field. The method for measuring the intra-pixel impact of photons on the ebCMOS detector is presented. The second part compares the localization precision of these three detectors under extreme conditions of very low photon flux (<10 photons/frame). The theoretical limit is first computed using the Cramér-Rao lower bound for representative parameter sets. An experimental comparison of the three detectors is then described. The setup allows the creation of one or more point sources with controlled position, photon count, and background noise. The results obtained allow a comparison of the efficiency, purity, and localization precision of the sources. The last part describes two experiments carried out with the ebCMOS camera. The first is the tracking of free nanocrystals (D>10 μm²/s) at the Nanoptec center with Christophe Dujardin's team. The second concerns the swimming of bacteria at surfaces, at the Joliot Curie institute with Laurence Lemelle's team. The single-photon point-source tracking algorithm, with the implementation of a Kalman filter, is also described.
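For the kind of theoretical limit referred to here, a widely used closed-form approximation of the localization bound for a Gaussian point-spread function is that of Thompson et al. (2002). The sketch below is that approximation, offered as an indicative stand-in rather than the thesis's actual Cramér-Rao computation; the background and pixelation terms in particular are assumptions transcribed from that paper:

```python
import math

def gaussian_psf_precision(s, N, b=0.0, a=0.0):
    """Approximate localization precision (std, same units as s) for a
    Gaussian PSF of width s imaged with N detected photons.
    With no background (b = 0) and no pixelation (a = 0) this reduces to
    the shot-noise limit s / sqrt(N). Otherwise the Thompson et al.
    form is used: var = sa^2/N + 8*pi*sa^4*b^2/(a^2*N^2),
    with sa^2 = s^2 + a^2/12, pixel size a, background noise std b."""
    if b == 0.0 and a == 0.0:
        return s / math.sqrt(N)
    sa2 = s * s + a * a / 12.0
    return math.sqrt(sa2 / N + 8.0 * math.pi * sa2 * sa2 * b * b / (a * a * N * N))
```

At <10 photons per frame the N in the denominator is the dominant constraint, which is why the comparison between sCMOS, emCCD and ebCMOS at these fluxes is so demanding.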
108

Target Localization Methods For Frequency-only Mimo Radar

Kalkan, Yilmaz 01 September 2012 (has links) (PDF)
This dissertation is focused on developing new target localization and target velocity estimation methods for frequency-only multi-input, multi-output (MIMO) radar systems with widely separated antennas. If the frequency resolutions of the transmitted signals are sufficient, the received frequencies and Doppler shifts alone can be used to find the position of the target. In order to estimate the position and the velocity of the target, most multistatic radars or radar networks use multiple independent measurements from the target, such as time-of-arrival (TOA), angle-of-arrival (AOA) and frequency-of-arrival (FOA). Although frequency-based systems have many advantages, frequency-based target localization methods are very limited in the literature because highly non-linear equations are involved in the solutions. In this thesis, alternative target localization and target velocity estimation methods with low complexity are proposed for frequency-only systems. One of the proposed methods is able to estimate the target position and the target velocity based on the measurements of the Doppler frequencies. Moreover, the target movement direction can be estimated efficiently. This method is referred to as "Target Localization via Doppler Frequencies" (TLDF), and it can be used not only for radar but for all frequency-based localization systems, such as sonar or wireless sensor networks. Besides the TLDF method, two alternative target position estimation methods are proposed as well. These methods are based on the Doppler frequencies, but they require the target velocity vector to be known. These methods are referred to as "Target Localization via Doppler Frequencies and Target Velocity" (TLD&V) methods and can be divided into two sub-methods. One of them is based on the derivatives of the Doppler frequencies and is hence called the "Derivated Doppler" (TLD&V-DD) method.
The second method uses the Maximum Likelihood (ML) principle with a grid search, and is hence referred to as the "Sub-ML" (TLD&V-subML) method. A more realistic signal model for ground-based, widely separated MIMO radar is formed, including Swerling target fluctuations and the Doppler frequencies. The Cramér-Rao bounds (CRB) are derived for the target position and target velocity estimates under this signal model. After the received signal is constructed, the Doppler frequencies are estimated using a DFT-based periodogram spectral estimator. Then, the estimated Doppler frequencies are collected in a fusion center to localize the target. Finally, the multiple-target localization problem is investigated for frequency-only MIMO radar and a new data association method is proposed. Using the TLDF method, the validity of the approach is simulated not only for targets moving linearly but also for maneuvering targets. The proposed methods can localize the target and estimate its velocity with less error than the traditional isodoppler-based method. Moreover, these methods are superior to the traditional method with respect to computational complexity. Simulations in MATLAB demonstrate the superiority of the proposed methods over the traditional method.
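The grid-search idea behind the Sub-ML variant can be illustrated with a toy analog: several stationary narrowband receivers, a target with known velocity, and position recovered by minimizing the squared mismatch between measured and predicted Doppler shifts. The geometry, wavelength and sensor layout below are illustrative assumptions and are much simpler than the dissertation's MIMO signal model:

```python
import numpy as np

def doppler(p, v, sensors, lam=0.03):
    """Doppler shift (Hz) seen at each sensor for a target at p moving with
    velocity v: minus the range rate divided by the wavelength lam."""
    d = p - sensors
    r = np.linalg.norm(d, axis=1)
    return -(d @ v) / (r * lam)

def grid_search_position(f_meas, v, sensors, region=1000.0, step=10.0, lam=0.03):
    """Coarse grid-search (sub-ML-style) position estimate from Doppler
    shifts with a known velocity vector: pick the grid point whose
    predicted Doppler profile best matches the measurements."""
    xs = np.arange(step, region, step)
    best, best_cost = None, np.inf
    for x in xs:
        for y in xs:
            p = np.array([x, y])
            cost = np.sum((doppler(p, v, sensors, lam) - f_meas) ** 2)
            if cost < best_cost:
                best, best_cost = p, cost
    return best
```

In the Gaussian-error case this least-squares cost is proportional to the negative log-likelihood, which is why a grid minimizer of it approximates the ML estimate.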
109

TOA-Based Robust Wireless Geolocation and Cramér-Rao Lower Bound Analysis in Harsh LOS/NLOS Environments

Yin, Feng, Fritsche, Carsten, Gustafsson, Fredrik, Zoubir, Abdelhak M January 2013 (has links)
We consider time-of-arrival based robust geolocation in harsh line-of-sight/non-line-of-sight environments. Herein, we assume the probability density function (PDF) of the measurement error to be completely unknown and develop an iterative algorithm for robust position estimation. The iterative algorithm alternates between a PDF estimation step, which approximates the exact measurement error PDF (albeit unknown) under the current parameter estimate via adaptive kernel density estimation, and a parameter estimation step, which resolves a position estimate from the approximate log-likelihood function via a quasi-Newton method. Unless the convergence condition is satisfied, the resolved position estimate is then used to refine the PDF estimation in the next iteration. We also present the best achievable geolocation accuracy in terms of the Cramér-Rao lower bound. Various simulations have been conducted in both real-world and simulated scenarios. When the number of received range measurements is large, the new proposed position estimator attains the performance of the maximum likelihood estimator (MLE). When the number of range measurements is small, it deviates from the MLE, but still outperforms several salient robust estimators in terms of geolocation accuracy, which comes at the cost of higher computational complexity.
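A compact sketch of the alternating scheme described above, for TOA range measurements: a kernel density estimate of the current residuals stands in for the unknown error PDF, and a numerical optimizer re-estimates the position from the resulting approximate log-likelihood. SciPy's `gaussian_kde` and BFGS are substitutes for the paper's adaptive kernel density estimator and quasi-Newton step, and a fixed iteration count replaces its convergence test:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import gaussian_kde

rng = np.random.default_rng(4)

def robust_toa_position(ranges, anchors, p0, n_iters=5):
    """Alternate between (1) a PDF-estimation step: fit a KDE to the range
    residuals under the current position estimate, and (2) a parameter-
    estimation step: maximize the approximate log-likelihood over the
    position with a quasi-Newton method."""
    p = np.asarray(p0, dtype=float)
    for _ in range(n_iters):
        resid = ranges - np.linalg.norm(anchors - p, axis=1)
        kde = gaussian_kde(resid)                  # PDF-estimation step

        def nll(q):
            r = ranges - np.linalg.norm(anchors - q, axis=1)
            return -np.sum(np.log(kde(r) + 1e-12))

        p = minimize(nll, p, method="BFGS").x      # parameter-estimation step
    return p
```

Because the KDE is refit from whatever residuals the data actually produce, the likelihood model adapts to heavy-tailed NLOS errors instead of assuming a Gaussian, which is the source of the robustness claimed in the abstract.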
110

The Country And The Village: Representations of the Rural in Twentieth-century South Asian Literatures

Mohan, Anupama 05 September 2012 (has links)
Twentieth-century Indian and Sri Lankan literatures (in English, in particular) have shown a strong tendency towards conceptualising the rural and the village within the dichotomous paradigms of utopia and dystopia. Such representations have consequently cast the village in idealized (pastoral) or in realist (counter-pastoral/dystopic) terms. In Chapters One and Two, I read together Mohandas Gandhi’s Hind Swaraj (1908) and Leonard Woolf’s The Village in the Jungle (1913) and argue that Gandhi and Woolf can be seen at the head of two important, but discrete, ways of reading the South Asian village vis-à-vis utopian thought, and that at the intersection of these two ways lies a rich terrain for understanding the many forms in which later twentieth-century South Asian writers chose to re-create city-village-nation dialectics. In this light, I examine in Chapter Three the work of Raja Rao (Kanthapura, 1938) and O. V. Vijayan (The Legends of Khasak, 1969) and in Chapter Four the writings of Martin Wickramasinghe (Gamperaliya, 1944) and Punyakante Wijenaike (The Waiting Earth, 1966) as providing a re-visioning of Gandhi’s and Woolf’s ideas of the rural as a site for civic and national transformation. I conclude by examining in Chapter Five Michael Ondaatje’s Anil’s Ghost (2000) and Amitav Ghosh’s The Hungry Tide (2005) as emblematic of a recent turn in South Asian fiction centred on the rural where the village embodies a “heterotopic” space that critiques and offers a conceptual alternative to the categorical imperatives of utopia and dystopia. I use Michel Foucault’s notion of the “heterotopia” to re-evaluate the utopian dimension in these novels. 
Although Foucault himself under-theorized the notion of heterotopia and what he did say connected the idea to urban landscapes and imaginaries, we may yet recuperate from his formulations a “third space” of difference that provides an opportunity to rethink the imperatives of utopia in literature and helps understand the rural in twentieth-century South Asian writing in new ways.
