51

Development of three AI techniques for 2D platform games

Persson, Martin January 2005 (has links)
This thesis serves as an introduction for anyone who has an interest in artificial intelligence in games and experience in programming, or for anyone who knows nothing of computer games but wants to learn about them. The first part presents a brief introduction to AI, then an introduction to games and game programming for someone with little knowledge of games. This part includes game programming terminology, different game genres and a little history of games, followed by an introduction to a couple of common techniques used in game AI. The main contribution of this dissertation is in the second part, where three techniques that were never properly implemented before 3D games took over the market are introduced, and it is explained how they would be implemented to live up to today's standards and demands. These are: line of sight, image recognition and pathfinding. These three techniques are used in today's 3D games, so if a 2D game were released today the demands on the AI would be much higher than they were ten years ago when 2D games stagnated. The last part is an evaluation of the three discussed topics.
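As an illustration of the first of the three techniques, a 2D tile-grid line-of-sight test can be sketched with Bresenham's line algorithm. This is an assumed, minimal implementation choice for exposition, not the one developed in the thesis:

```python
def line_of_sight(grid, start, end):
    """Check visibility between two tiles using Bresenham's line.

    grid: 2D list indexed as grid[y][x]; truthy cells block sight.
    Returns True if no blocking tile lies strictly between start and end.
    """
    (x0, y0), (x1, y1) = start, end
    dx, dy = abs(x1 - x0), abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx - dy
    x, y = x0, y0
    while (x, y) != (x1, y1):
        # the start tile itself never blocks its own view
        if (x, y) != (x0, y0) and grid[y][x]:
            return False
        e2 = 2 * err
        if e2 > -dy:
            err -= dy
            x += sx
        if e2 < dx:
            err += dx
            y += sy
    return True
```

A 2D AI agent would call this once per frame (or on a timer) against the tile map to decide whether the player is currently visible.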
52

Optimisation des lois de commande d’un imageur sur critère optronique. Application à un imageur à deux étages de stabilisation. / Line of Sight controller global tuning based on a high-level optronic criterion. Application to a double-stage stabilization platform

Frasnedo, Sophie 06 December 2016 (has links)
This work on the Line of Sight stabilization of an optronic device meets the heightened stabilization requirements that come with the reduction of the time allotted to controller tuning. It first addresses the intrinsic improvement of the system's stabilization performance. The proposed solution adds a second stabilization stage to an existing single-stage structure. The architecture of this new stage is specified, its components are chosen among existing technologies and characterized experimentally, and a complete model of the double-stage system is then proposed. A further goal is the simplification of the controller tuning process. The designed cost function F includes a high-level optronic criterion, the Modulation Transfer Function (which quantifies the blur brought into the image by the residual motion of the platform), instead of the usual low-level criterion, which requires additional verification and can prove conservative. Because F is costly to evaluate, a Bayesian optimization algorithm, suited to optimization with a reduced budget of evaluations, is used to tune the controllers of both stabilization stages simultaneously, within industrial time constraints, from the previously developed system model.
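The role of the Modulation Transfer Function as a stabilization criterion can be sketched with the textbook Gaussian-jitter MTF model, MTF(f) = exp(-2 pi^2 sigma^2 f^2). This is a standard optronics formula and an assumption here, not necessarily the exact criterion embedded in the thesis's cost function F:

```python
import math

def jitter_mtf(sigma, f):
    """MTF degradation from Gaussian line-of-sight jitter:
    MTF(f) = exp(-2 * pi^2 * sigma^2 * f^2).

    sigma: RMS stabilization residual projected into the image plane
           (unit consistent with 1/f, e.g. mrad with f in cycles/mrad).
    f: spatial frequency.
    """
    return math.exp(-2.0 * math.pi ** 2 * sigma ** 2 * f ** 2)
```

Perfect stabilization (sigma = 0) gives MTF = 1 at every frequency; any residual jitter attenuates high spatial frequencies first, which is exactly the blur a high-level optronic criterion can penalize directly instead of bounding the residual motion itself.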
53

An Approach For Computing Intervisibility Using Graphical Processing Units

Tracy, Judd 01 January 2004 (has links)
In large scale entity-level military force-on-force simulations it is essential to know when one entity can visibly see another entity. This visibility determination plays an important role in the simulation and can affect the outcome of the simulation. When virtual Computer Generated Forces (CGF) are introduced into the simulation these intervisibilities must now be calculated by the virtual entities on the battlefield. But as the simulation size increases so does the complexity of calculating visibility between entities. This thesis presents an algorithm for performing these visibility calculations using Graphical Processing Units (GPU) instead of the Central Processing Units (CPU) that have been traditionally used in CGF simulations. This algorithm can be distributed across multiple GPUs in a cluster and its scalability exceeds that of CGF-based algorithms. The poor correlations of the two visibility algorithms are demonstrated showing that the GPU algorithm provides a necessary condition for a "Fair Fight" when paired with visual simulations.
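The underlying intervisibility computation can be sketched on the CPU by sampling the terrain profile along the sight line between two entities. This is a simplified serial reference for the kind of test the thesis maps onto the GPU; the function name and fixed-step sampling scheme are illustrative assumptions:

```python
def intervisible(height, a, b, samples=100):
    """Terrain intervisibility: sample the profile between two entities
    and test whether the terrain rises above the straight sight line.

    height(x, y) -> terrain elevation; a, b are (x, y, z) eye points.
    """
    ax, ay, az = a
    bx, by, bz = b
    for i in range(1, samples):
        t = i / samples
        x = ax + t * (bx - ax)
        y = ay + t * (by - ay)
        z = az + t * (bz - az)          # height of the sight line at t
        if height(x, y) > z:
            return False                # terrain blocks the sight line
    return True
```

On a GPU the same test becomes a per-pair (or per-sample) data-parallel kernel, which is what makes the approach scale with simulation size.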
54

A Coverage Area Estimation Model for Interference-Limited Non-Line-of-Sight Point-to-Multipoint Fixed Broadband Wireless Communication Systems

RamaSarma, Vaidyanathan 04 October 2002 (has links)
First-generation, line-of-sight (LOS) fixed broadband wireless access techniques have been around for several years. However, services based on this technology have been limited in scope to service areas where transceivers can communicate with their base stations, unimpeded by trees, buildings and other obstructions. This limitation has serious consequences in that the system can deliver only 50% to 70% coverage within a given cell radius, thus affecting earned revenue. Next generation broadband fixed wireless access techniques are aimed at achieving a coverage area greater than 90%. To achieve this target, these techniques must be based on a point-to-multipoint (PMP) cellular architecture with low base station antennas, thus possessing the ability to operate in true non-line-of-sight (NLOS) conditions. A possible limiting factor for these systems is link degradation due to interference. This thesis presents a new model to estimate the levels of co-channel interference for such systems operating within the 3.5 GHz multichannel multipoint distribution service (MMDS) band. The model is site-specific in that it uses statistical building/roof height distribution parameters obtained from practically modeling several metropolitan cities in the U.S. using geographic information system (GIS) tools. This helps to obtain a realistic estimate and helps analyze the tradeoff between cell radius and modulation complexity. Together, these allow the system designer to decide on an optimal location for placement of customer premises equipment (CPE) within a given cell area. / Master of Science
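The trade-off between cell radius and co-channel interference can be sketched with a Monte Carlo coverage estimate under a log-distance path-loss model. This is a toy single-interferer setup with assumed parameter values, far simpler than the thesis's GIS-driven, site-specific model:

```python
import math
import random

def coverage_fraction(radius, interferer_dist, n=3.5, sir_min_db=14.0,
                      trials=20000, seed=1):
    """Monte Carlo estimate of the fraction of a cell where the
    signal-to-interference ratio exceeds sir_min_db.

    One serving base station at the origin, one co-channel interferer at
    (interferer_dist, 0), equal transmit powers, log-distance path loss
    with exponent n. All values are illustrative assumptions.
    """
    rng = random.Random(seed)
    covered = 0
    for _ in range(trials):
        # draw a point uniformly inside the cell disc
        r = radius * math.sqrt(rng.random())
        theta = rng.uniform(0.0, 2.0 * math.pi)
        x, y = r * math.cos(theta), r * math.sin(theta)
        d_s = max(math.hypot(x, y), 1.0)                    # to server
        d_i = max(math.hypot(x - interferer_dist, y), 1.0)  # to interferer
        # equal powers: SIR depends only on the distance ratio
        sir_db = 10.0 * n * (math.log10(d_i) - math.log10(d_s))
        if sir_db >= sir_min_db:
            covered += 1
    return covered / trials
```

Pushing the interferer further away (larger frequency-reuse distance) or lowering the required SIR (simpler modulation) raises the covered fraction, which is the cell-radius versus modulation-complexity trade-off the model is meant to expose.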
55

Autonomous Unmanned Ground Vehicle (UGV) Follower Design

Chen, Yuanyan 19 September 2016 (has links)
No description available.
56

Statistical Analysis of Geolocation Fundamentals Using Stochastic Geometry

O'Lone, Christopher Edward 22 January 2021 (has links)
The past two decades have seen a surge in the number of applications requiring precise positioning data. Modern cellular networks offer many services based on the user's location, such as emergency services (e.g., E911), and emerging wireless sensor networks are being used in applications spanning environmental monitoring, precision agriculture, warehouse and manufacturing logistics, and traffic monitoring, just to name a few. In these sensor networks in particular, obtaining precise positioning data of the sensors gives vital context to the measurements being reported. While the Global Positioning System (GPS) has traditionally been used to obtain this positioning data, the deployment locations of these cellular and sensor networks in GPS-constrained environments (e.g., cities, indoors, etc.), along with the need for reliable positioning, requires a localization scheme that does not rely solely on GPS. This has led to localization being performed entirely by the network infrastructure itself, or by the network infrastructure aided, in part, by GPS. In the literature, benchmarking localization performance in these networks has traditionally been done in a deterministic manner. That is, for a fixed setup of anchors (nodes with known location) and a target (a node with unknown location) a commonly used benchmark for localization error, such as the Cramer-Rao lower bound (CRLB), can be calculated for a given localization strategy, e.g., time-of-arrival (TOA), angle-of-arrival (AOA), etc. While this CRLB calculation provides excellent insight into expected localization performance, its traditional treatment as a deterministic value for a specific setup is limited. Rather than trying to gain insight into a specific setup, network designers are more often interested in aggregate localization error statistics within the network as a whole. Questions such as: "What percentage of the time is localization error less than x meters in the network?" are commonplace. 
In order to answer these types of questions, network designers often turn to simulations; however, these come with many drawbacks, such as lengthy execution times and the inability to provide fundamental insights due to their inherent ``black box'' nature. Thus, this dissertation presents the first analytical solution with which to answer these questions. By leveraging tools from stochastic geometry, anchor positions and potential target positions can be modeled by Poisson point processes (PPPs). This allows for the CRLB of position error to be characterized over all setups of anchor positions and potential target positions realizable within the network. This leads to a distribution of the CRLB, which can completely characterize localization error experienced by a target within the network, and can consequently be used to answer questions regarding network-wide localization performance. The particular CRLB distribution derived in this dissertation is for fourth-generation (4G) and fifth-generation (5G) sub-6GHz networks employing a TOA localization strategy. Recognizing the tremendous potential that stochastic geometry has in gaining new insight into localization, this dissertation continues by further exploring the union of these two fields. First, the concept of localizability, which is the probability that a mobile is able to obtain an unambiguous position estimate, is explored in a 5G, millimeter wave (mm-wave) framework. In this framework, unambiguous single-anchor localization is possible with either a line-of-sight (LOS) path between the anchor and mobile or, if blocked, then via at least two NLOS paths. Thus, for a single anchor-mobile pair in a 5G, mm-wave network, this dissertation derives the mobile's localizability over all environmental realizations this anchor-mobile pair is likely to experience in the network. 
This is done by: (1) utilizing the Boolean model from stochastic geometry, which statistically characterizes the random positions, sizes, and orientations of reflectors (e.g., buildings) in the environment, (2) considering the availability of first-order (i.e., single-bounce) reflections as well as the LOS path, and (3) considering the possibility that reflectors can either facilitate or block reflections. In addition to the derivation of the mobile's localizability, this analysis also reveals that unambiguous localization, via reflected NLOS signals exclusively, is a relatively small contributor to the mobile's overall localizability. Lastly, using this first-order reflection framework developed under the Boolean model, this dissertation then statistically characterizes the NLOS bias present on range measurements. This NLOS bias is a common phenomenon that arises when trying to measure the distance between two nodes via the time delay of a transmitted signal. If the LOS path is blocked, then the extra distance that the signal must travel to the receiver, in excess of the LOS path, is termed the NLOS bias. Due to the random nature of the propagation environment, the NLOS bias is a random variable, and as such, its distribution is sought. As before, assuming NLOS propagation is due to first-order reflections, and that reflectors can either facilitate or block reflections, the distribution of the path length (i.e., absolute time delay) of the first-arriving multipath component (MPC) is derived. This result is then used to obtain the first NLOS bias distribution in the localization literature that is based on the absolute delay of the first-arriving MPC for outdoor time-of-flight (TOF) range measurements. This distribution is shown to match exceptionally well with commonly assumed gamma and exponential NLOS bias models in the literature, which were only attained previously through heuristic or indirect methods. 
Finally, the flexibility of this analytical framework is utilized by further deriving the angle-of-arrival (AOA) distribution of the first-arriving MPC at the mobile. This distribution gives novel insight into how environmental obstacles affect the AOA and also represents the first AOA distribution, of any kind, derived under the Boolean model. In summary, this dissertation uses the analytical tools offered by stochastic geometry to gain new insights into localization metrics by performing analyses over the entire ensemble of infrastructure or environmental realizations that a target is likely to experience in a network. / Doctor of Philosophy / The past two decades have seen a surge in the number of applications requiring precise positioning data. Modern cellular networks offer many services based on the user's location, such as emergency services (e.g., E911), and emerging wireless sensor networks are being used in applications spanning environmental monitoring, precision agriculture, warehouse and manufacturing logistics, and traffic monitoring, just to name a few. In these sensor networks in particular, obtaining precise positioning data of the sensors gives vital context to the measurements being reported. While the Global Positioning System (GPS) has traditionally been used to obtain this positioning data, the deployment locations of these cellular and sensor networks in GPS-constrained environments (e.g., cities, indoors, etc.), along with the need for reliable positioning, requires a localization scheme that does not rely solely on GPS. This has led to localization being performed entirely by the network infrastructure itself, or by the network infrastructure aided, in part, by GPS. When speaking in terms of localization, the network infrastructure consists of what are called anchors, which are simply nodes (points) with a known location. These can be base stations, WiFi access points, or designated sensor nodes, depending on the network. 
In trying to determine the position of a target (i.e., a user, or a mobile), various measurements can be made between this target and the anchor nodes in close proximity. These measurements are typically distance (range) measurements or angle (bearing) measurements. Localization algorithms then process these measurements to obtain an estimate of the target position. The performance of a given localization algorithm (i.e., estimator) is typically evaluated by examining the distance, in meters, between the position estimates it produces vs. the actual (true) target position. This is called the positioning error of the estimator. There are various benchmarks that bound the best (lowest) error that these algorithms can hope to achieve; however, these benchmarks depend on the particular setup of anchors and the target. The benchmark of localization error considered in this dissertation is the Cramer-Rao lower bound (CRLB). To determine how this benchmark of localization error behaves over the entire network, all of the various setups of anchors and the target that would arise in the network must be considered. Thus, this dissertation uses a field of statistics called stochastic geometry to model all of these random placements of anchors and the target, which represent all the setups that can be experienced in the network. Under this model, the probability distribution of this localization error benchmark across the entirety of the network is then derived. This distribution allows network designers to examine localization performance in the network as a whole, rather than just for a specific setup, and allows one to obtain answers to questions such as: "What percentage of the time is localization error less than x meters in the network?" Next, this dissertation examines a concept called localizability, which is the probability that a target can obtain a unique position estimate. 
Oftentimes localization algorithms can produce position estimates that congregate around different potential target positions, and thus, it is important to know when algorithms will produce estimates that congregate around a unique (single) potential target position; hence the importance of localizability. In fifth generation (5G), millimeter wave (mm-wave) networks, only one anchor is needed to produce a unique target position estimate if the line-of-sight (LOS) path between the anchor and the target is unimpeded. If the LOS path is impeded, then a unique target position can still be obtained if two or more non-line-of-sight (NLOS) paths are available. Thus, over all possible environmental realizations likely to be experienced in the network by this single anchor-mobile pair, this dissertation derives the mobile's localizability, or in this case, the probability the LOS path or at least two NLOS paths are available. This is done by utilizing another analytical tool from stochastic geometry known as the Boolean model, which statistically characterizes the random positions, sizes, and orientations of reflectors (e.g., buildings) in the environment. Under this model, considering the availability of first-order (i.e., single-bounce) reflections as well as the LOS path, and considering the possibility that reflectors can either facilitate or block reflections, the mobile's localizability is derived. This result reveals the roles that the LOS path and the NLOS paths play in obtaining a unique position estimate of the target. Using this first-order reflection framework developed under the Boolean model, this dissertation then statistically characterizes the NLOS bias present on range measurements. This NLOS bias is a common phenomenon that arises when trying to measure the distance between two nodes via the time-of-flight (TOF) of a transmitted signal. 
If the LOS path is blocked, then the extra distance that the signal must travel to the receiver, in excess of the LOS path, is termed the NLOS bias. As before, assuming NLOS propagation is due to first-order reflections and that reflectors can either facilitate or block reflections, the distribution of the path length (i.e., absolute time delay) of the first-arriving multipath component (MPC) (or first-arriving ``reflection path'') is derived. This result is then used to obtain the first NLOS bias distribution in the localization literature that is based on the absolute delay of the first-arriving MPC for outdoor TOF range measurements. This distribution is shown to match exceptionally well with commonly assumed NLOS bias distributions in the literature, which were only attained previously through heuristic or indirect methods. Finally, the flexibility of this analytical framework is utilized by further deriving the angle-of-arrival (AOA) distribution of the first-arriving MPC at the mobile. This distribution yields the probability that, for a specific angle, the first-arriving reflection path arrives at the mobile at this angle. This distribution gives novel insight into how environmental obstacles affect the AOA and also represents the first AOA distribution, of any kind, derived under the Boolean model. In summary, this dissertation uses the analytical tools offered by stochastic geometry to gain new insights into localization metrics by performing analyses over all of the possible infrastructure or environmental realizations that a target is likely to experience in a network.
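The CRLB benchmark for one fixed TOA setup, the deterministic quantity whose network-wide distribution the dissertation derives, can be sketched for the 2-D case. The sketch assumes i.i.d. Gaussian range errors with standard deviation sigma; this is the standard textbook formulation, not the dissertation's own code:

```python
import math

def toa_crlb(anchors, target, sigma=1.0):
    """Cramer-Rao lower bound on 2-D position error variance for TOA
    ranging with i.i.d. Gaussian range errors of std-dev sigma.

    Fisher information: J = (1/sigma^2) * sum_i u_i u_i^T, where u_i is
    the unit vector from the target to anchor i. The bound on total
    position error variance is trace(J^{-1}).
    """
    jxx = jxy = jyy = 0.0
    tx, ty = target
    for ax, ay in anchors:
        d = math.hypot(ax - tx, ay - ty)
        ux, uy = (ax - tx) / d, (ay - ty) / d
        jxx += ux * ux
        jxy += ux * uy
        jyy += uy * uy
    jxx /= sigma ** 2
    jxy /= sigma ** 2
    jyy /= sigma ** 2
    det = jxx * jyy - jxy * jxy
    return (jxx + jyy) / det   # trace of the 2x2 inverse
```

For three anchors spaced 120 degrees apart around the target, the bound evaluates to (4/3) * sigma^2; the stochastic-geometry approach described above treats the anchor geometry feeding this formula as random and characterizes the resulting distribution of the bound.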
57

A ROBUST DIGITAL WIRELESS LINK FOR TACTICAL UAV’S

Takacs, Edward, Durso, Christopher M., Dirdo, David 10 1900 (has links)
ITC/USA 2005 Conference Proceedings / The Forty-First Annual International Telemetering Conference and Technical Exhibition / October 24-27, 2005 / Riviera Hotel & Convention Center, Las Vegas, Nevada / A conventionally designed radio frequency amplifier operated in its linear region exhibits low DC to RF conversion efficiency. Typically, for a power amplifier designed for digital modulation applications, the amplifier is operated “backed-off” from its P1dB point by a factor of 10 or -10 dB. The typical linear amplifier is biased for either Class A or Class A/B operation depending on the acceptable design trade-offs between efficiency and linearity between these two methods. A novel design approach to increasing the efficiency of a linear RF power amplifier using a modified Odd-Way Doherty technique is presented in this paper. The design was simulated, built and then tested. The design yields improvements in efficiency and linearity.
58

Geometric Model for Tracker-Target Look Angles and Line of Sight Distance

Laird, Daniel T. 10 1900 (has links)
ITC/USA 2015 Conference Proceedings / The Fifty-First Annual International Telemetering Conference and Technical Exhibition / October 26-29, 2015 / Bally's Hotel & Convention Center, Las Vegas, NV / To determine the tracking abilities of a Telemetry (TM) antenna control unit (ACU) requires 'truth data' to analyze the accuracy of measured, or observed, tracking angles. This requires we know the actual angle, i.e., that we know where the target is above the earth. The positional truth is generated from target time-space position information (TSPI), which implicitly places the target's global positioning system (GPS) as the source of observational accuracy. In this paper we present a model to generate local look-angles (LA) and line-of-sight (LoS) distance with respect to (w.r.t.) target global GPS. We ignore inertial navigation system (INS) data in generating relative position at time T; thus we model the target as a global point in time relative to the local tracker's global fixed position in time. This is the first of three companion papers on tracking analyses employing Statistically Defensible Test & Evaluation (SDT&E) methods.
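The look-angle and LoS-distance geometry can be sketched in a local East-North-Up frame. This is a flat-earth simplification of the paper's model; a real TSPI pipeline would first convert the GPS/WGS-84 coordinates of tracker and target into this local frame:

```python
import math

def look_angles(tracker, target):
    """Azimuth, elevation (degrees) and line-of-sight distance from a
    fixed tracker to a target, both given as (east, north, up) in a
    local frame centered on the tracker site.
    """
    e = target[0] - tracker[0]
    n = target[1] - tracker[1]
    u = target[2] - tracker[2]
    horiz = math.hypot(e, n)
    az = math.degrees(math.atan2(e, n)) % 360.0   # 0 deg = North, clockwise
    el = math.degrees(math.atan2(u, horiz))
    los = math.sqrt(e * e + n * n + u * u)
    return az, el, los
```

Comparing these computed truth angles against the ACU's observed angles at each time step is what yields the tracking-error statistics the analysis needs.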
59

Three-dimensional scene recovery for measuring sighting distances of rail track assets from monocular forward facing videos

Warsop, Thomas E. January 2011 (has links)
Rail track asset sighting distance must be checked regularly to ensure the continued and safe operation of rolling stock. Methods currently used to check asset line-of-sight involve manual labour or laser systems. Video cameras and computer vision techniques provide one possible route for cheaper, automated systems. Three categories of computer vision method are identified for possible application: two-dimensional object recognition, two-dimensional object tracking and three-dimensional scene recovery. However, presented experimentation shows recognition and tracking methods produce less accurate asset line-of-sight results for increasing asset-camera distance. Regarding three-dimensional scene recovery, evidence is presented suggesting a relationship between image feature and recovered scene information. A novel framework which learns these relationships is proposed. Learnt relationships from recovered image features probabilistically limit the search space of future features, improving efficiency. This framework is applied to several scene recovery methods and is shown (on average) to decrease computation by two-thirds for a possible, small decrease in accuracy of recovered scenes. Asset line-of-sight results computed from recovered three-dimensional terrain data are shown to be more accurate than those of the two-dimensional methods, and are not affected by increasing asset-camera distance. Finally, the analysis of terrain in terms of its effect on asset line-of-sight is considered. Terrain elements, segmented using semantic information, are ranked with a metric combining a minimum line-of-sight blocking distance and the growth required to achieve this minimum distance. Since this ranking measure is relative, it is shown how an approximation of the terrain data can be applied, decreasing computation time. Further efficiency increases are found by decomposing the problem into a set of two-dimensional problems and applying binary search techniques. 
The combination of the research elements presented in this thesis provide efficient methods for automatically analysing asset line-of-sight and the impact of the surrounding terrain, from captured monocular video.
60

An adaptive autopilot design for an uninhabited surface vehicle

Annamalai, Andy S. K. January 2014 (has links)
The work described herein concerns the development of an innovative approach to the design of autopilots for uninhabited surface vehicles. In order to fulfil the requirements of autonomous missions, uninhabited surface vehicles must be able to operate with a minimum of external intervention. Existing strategies are limited by their dependence on a fixed model of the vessel. Thus, any change in plant dynamics has a non-trivial, deleterious effect on performance. This thesis presents an approach based on an adaptive model predictive control that is capable of retaining full functionality even in the face of sudden changes in dynamics. In the first part of this work recent developments in the field of uninhabited surface vehicles and trends in marine control are discussed. Historical developments and different strategies for model predictive control as applicable to surface vehicles are also explored. This thesis also presents innovative work done to improve the hardware on the existing Springer uninhabited surface vehicle to serve as an effective test and research platform. Advanced controllers such as a model predictive controller are reliant on the accuracy of the model to accomplish missions successfully. Hence, different techniques to obtain the model of Springer are investigated. Data obtained from experiments at Roadford Reservoir, United Kingdom are utilised to derive a generalised model of Springer by employing an innovative hybrid modelling technique that incorporates the different forward speeds and variable payload on-board the vehicle. Waypoint line of sight guidance provides the reference trajectory essential to complete missions successfully. The performances of traditional autopilots such as proportional integral and derivative controllers when applied to Springer are analysed. 
Autopilots based on modern controllers such as linear quadratic Gaussian and its innovative variants are integrated with the navigation and guidance systems on-board Springer. The modified linear quadratic Gaussian is obtained by combining various state estimators based on the Interval Kalman filter and the weighted Interval Kalman filter. Changes in system dynamics are a challenge faced by uninhabited surface vehicles and result in erroneous autopilot behaviour. To overcome this challenge different adaptive algorithms are analysed and an innovative, adaptive autopilot based on model predictive control is designed. The acronym 'aMPC' is coined to refer to adaptive model predictive control, obtained by combining the advances made to weighted least squares during this research with model predictive control. Successful experimentation is undertaken to validate the performance and autonomous mission capabilities of the adaptive autopilot despite changes in system dynamics.
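The waypoint line-of-sight guidance reference mentioned above can be sketched as a minimal heading law; Springer's actual guidance and aMPC loop are more involved, so this is only the core geometry:

```python
import math

def los_heading(pos, waypoint):
    """Desired heading (radians, 0 = North, clockwise positive) that
    points a vessel at pos = (east, north) toward the next waypoint --
    the basic waypoint line-of-sight guidance reference.
    """
    dx = waypoint[0] - pos[0]   # East offset to the waypoint
    dy = waypoint[1] - pos[1]   # North offset to the waypoint
    return math.atan2(dx, dy)
```

At each control step the autopilot (PID, LQG, or aMPC) is then tasked with driving the vessel's actual heading toward this reference; waypoint switching logic typically advances to the next waypoint once within an acceptance radius.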
