731

The optimization of gesture recognition techniques for resource-constrained devices

Niezen, Gerrit 26 January 2009 (has links)
Gesture recognition is becoming increasingly popular as an input mechanism for human-computer interfaces. The availability of MEMS (Micro-Electromechanical System) 3-axis linear accelerometers allows for the design of an inexpensive mobile gesture recognition system. Wearable inertial sensors are a low-cost, low-power solution to recognize gestures and, more generally, track the movements of a person. Gesture recognition algorithms have traditionally only been implemented in cases where ample system resources are available, i.e. on desktop computers with fast processors and large amounts of memory. In the cases where a gesture recognition algorithm has been implemented on a resource-constrained device, only the simplest algorithms were implemented to recognize only a small set of gestures. Current gesture recognition technology can be improved by making algorithms faster, more robust, and more accurate. The most dramatic results in optimization are obtained by completely changing an algorithm to decrease the number of computations. Algorithms can also be optimized by profiling or timing the different sections of the algorithm to identify problem areas. Gestures have two aspects of signal characteristics that make them difficult to recognize: segmentation ambiguity and spatio-temporal variability. Segmentation ambiguity refers to not knowing the gesture boundaries, and therefore reference patterns have to be matched with all possible segments of input signals. Spatio-temporal variability refers to the fact that each repetition of the same gesture varies dynamically in shape and duration, even for the same gesturer. The objective of this study was to evaluate the various gesture recognition algorithms currently in use, after which the most suitable algorithm was optimized in order to implement it on a mobile device. Gesture recognition techniques studied include hidden Markov models, artificial neural networks and dynamic time warping. A dataset for evaluating the gesture recognition algorithms was gathered using a mobile device’s embedded accelerometer. The algorithms were evaluated based on computational efficiency, recognition accuracy and storage efficiency. The optimized algorithm was implemented in a user application on the mobile device to test the empirical validity of the study. / Dissertation (MEng)--University of Pretoria, 2009. / Electrical, Electronic and Computer Engineering / unrestricted
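The abstract does not state which of the three techniques was ultimately optimized for the mobile device, but dynamic time warping (one of the three techniques named) is straightforward to illustrate. The sketch below shows a minimal DTW matcher for 3-axis accelerometer traces; the function and variable names are chosen for illustration and are not taken from the dissertation.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two gesture traces.

    a, b: arrays of shape (length, 3) holding 3-axis accelerometer samples.
    """
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])   # local Euclidean cost
            cost[i, j] = d + min(cost[i - 1, j],       # insertion
                                 cost[i, j - 1],       # deletion
                                 cost[i - 1, j - 1])   # match
    return cost[n, m]

def classify(trace, templates):
    """Return the label of the stored template closest to the input trace."""
    return min(templates, key=lambda label: dtw_distance(trace, templates[label]))
```

Because the warping path absorbs differences in duration, DTW directly addresses the spatio-temporal variability described above.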
732

Early detection of cardiac arrhythmia based on Bayesian methods from ECG data / La détection précoce des troubles du rythme cardiaque sur la base de méthodes bayésiens à partir des données ECG

Montazeri Ghahjaverestan, Nasim 10 July 2015 (has links)
L'apnée est une complication fréquente chez les nouveaux-nés prématurés. L'un des problèmes les plus fréquents est l'épisode d'apnée bradycardie dont la répétition influence de manière négative le développement de l'enfant. C'est pourquoi les enfants prématurés sont surveillés en continu par un système de monitoring. Depuis la mise en place de ce système, l'espérance de vie et le pronostic de vie des prématurés ont été considérablement améliorés et ainsi la mortalité réduite. En effet, les avancées technologiques en électronique, informatique et télécommunications ont conduit à l'élaboration de systèmes multivoies de monitoring néonatal de plus en plus performants. L'un des principaux signaux exploités dans ces systèmes est l'électrocardiogramme (ECG). Toutefois, même si l'analyse de l'ECG a évolué au fil des années, l'ensemble des informations qu'il fournit n'est pas encore totalement exploité dans les processus de décision, notamment en monitoring en Unité de Soins Intensifs en Néonatalogie (USIN). L'objectif principal de cette thèse est d'améliorer la prise en compte des dynamiques multi-dimensionnelles en proposant de nouvelles approches basées sur un formalisme bayésien, pour la détection précoce des apnées bradycardies chez le nouveau-né prématuré. Aussi, dans cette thèse, nous proposons deux approches bayésiennes, basées sur les caractéristiques de signaux biologiques en vue de la détection précoce de l'apnée bradycardie des nouveaux-nés prématurés. Tout d'abord avec l'approche de Markov caché, nous proposons deux extensions du Modèle de Markov Caché (MMC) classique. La première, qui s'appelle Modèle de Markov Caché Couplé (MMCC), créé une chaîne de Markov à chaque dimension de l'observation et établit un couplage entre les chaînes. La seconde, qui s'appelle Modèle Semi-Markov Caché Couplé (MSMCC), combine les caractéristiques du modèle de MSMC avec le mécanisme de couplage entre canaux. Pour les deux nouveaux modèles (MMCC et MSMCC), les algorithmes récursifs basées sur la version classique de Forward-Backward sont introduits pour résoudre les problèmes d'apprentissage et d'inférence dans le cas couplé. En plus des modèles de Markov, nous proposons deux approches passées sur les filtres de Kalman pour la détection d'apnée. La première utilise les modifications de la morphologie du complexe QRS et est inspirée du modèle générateur de McSharry, déjà utilisé en couplant avec un filtre de Kalman étendu dans le but de détecter des changements subtils de l'ECG, échantillon par échantillon. La deuxième utilise deux modèles AR (l'un pour le processus normal et l'autre pour le processus de bradycardie). Les modèles AR sont appliqués sur la série RR, alors que le filtre de Kalman suit l'évolution des paramètres du modèle AR et fournit une mesure de probabilité des deux processus concurrents. / Apnea-bradycardia episodes (breathing pauses associated with a significant fall in heart rate) are the most common disease in preterm infants. Consequences associated with apnea-bradycardia episodes involve a compromise in oxygenation and tissue perfusion, a poor neuromotor prognosis at childhood and a predisposing factor to sudden-death syndrome in preterm newborns. It is therefore important that these episodes are recognized (early detected or predicted if possible), to start an appropriate treatment and to prevent the associated risks. 
In this thesis, we propose two Bayesian Network (BN) approaches (Markovian and Switching Kalman Filter) for the early detection of apnea-bradycardia events in preterm infants, using different features extracted from electrocardiographic (ECG) recordings. Concerning the Markovian approach, we propose new frameworks for two generalizations of the classical Hidden Markov Model (HMM). The first framework, the Coupled Hidden Markov Model (CHMM), assigns a Markov chain (channel) to each dimension of the observation and establishes a coupling among the channels. The second framework, the Coupled Hidden semi-Markov Model (CHSMM), combines the characteristics of the Hidden semi-Markov Model (HSMM) with the above-mentioned coupling concept. For each framework, we present appropriate recursions so that modified Forward-Backward (FB) algorithms can be used to solve the learning and inference problems. The proposed learning algorithm is based on the Maximum Likelihood (ML) criterion. Moreover, we propose two new Switching Kalman Filter (SKF) based algorithms, called wave-based and R-based, which provide an index for bradycardia detection from the ECG. The wave-based algorithm builds on McSharry's dynamical model of ECG beat generation, used within an extended Kalman filter to detect subtle changes in the ECG sample by sample. We also propose a new SKF algorithm that models normal beats and beats with bradycardia by two different AR processes.
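As a rough illustration of the R-based idea (two competing AR processes on the RR-interval series), the sketch below compares the Gaussian log-likelihoods of a window of RR intervals under fixed "normal" and "bradycardia" AR coefficients. It is a simplification under assumed, pre-estimated parameters and omits the switching Kalman filter that tracks the AR parameters in the thesis; all names and values are illustrative.

```python
import numpy as np

def ar_loglik(rr, coeffs, noise_var):
    """Gaussian log-likelihood of an RR-interval window under a fixed AR(p) model."""
    p = len(coeffs)
    ll = 0.0
    for t in range(p, len(rr)):
        pred = np.dot(coeffs, rr[t - p:t][::-1])   # one-step AR prediction
        err = rr[t] - pred
        ll += -0.5 * (np.log(2 * np.pi * noise_var) + err ** 2 / noise_var)
    return ll

def bradycardia_index(rr, ar_normal, ar_brady, var_normal, var_brady):
    """Probability-like index that the window follows the bradycardia AR model
    (equal prior probabilities assumed)."""
    ll_n = ar_loglik(rr, ar_normal, var_normal)
    ll_b = ar_loglik(rr, ar_brady, var_brady)
    return 1.0 / (1.0 + np.exp(ll_n - ll_b))   # logistic of the log-likelihood ratio
```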
733

Analyse du surcroît de la population agricole en Pologne et en Turquie : une étude comparative / Analysis of the overpopulation in the agricultural sector in Poland and Turkey

Akdere, Özlem 13 December 2013 (has links)
La Pologne et la Turquie témoignent depuis plusieurs années d’une transition économique semblable à travers une forte croissance du PIB, une augmentation des exportations et surtout une hausse des flux internationaux de capitaux. Malgré la transformation économique, le secteur agricole demeure encore une activité importante dans leur économie respective. Comparée aux autres pays européens, le décalage important entre la contribution de l’agriculture au PIB et le nombre des personnes employées révèle une très faible productivité de la main-d’œuvre. L’agriculture représente une source principale d’emploi notamment dans la zone rurale. En dépit de la diminution constante de l’emploi agricole de ces dernières années, il existe une surpopulation dans le secteur et un problème du chômage déguisé. La Pologne, membre de l’Union européenne (UE) depuis mai 2004, a bénéficié fortement des fonds structurels afin d’améliorer et de moderniser son agriculture. Quant à la Turquie, pays candidat à l’UE depuis octobre 2005, elle tente d’adapter son agriculture à la politique agricole commune (PAC). Notre recherche est essentiellement basée sur une étude comparative des pays présentant de nombreuses similitudes mais aussi de réelles divergences quant au niveau de leur structure agraire. À travers des réformes mises en vigueur pendant et après la période d’adhésion en Pologne, on cherche à répondre à la question si les réformes appliquées en Pologne sont ou non transposables au cas de la Turquie. / Poland and Turkey have been demonstrating for several years now a similar economic transition through a strong growth of GDP, a boost in exports and especially an increase in the flow of international capital. Despite the economic transformation, the agricultural sector remains an important activity in their respective economies. Compared to other European countries, the large gap between the contribution of agriculture to GDP and the number of people employed in it shows a very low productivity of labor. Agriculture is a main source of employment, especially in rural areas. Despite the steady decline in agricultural employment in recent years, the sector is overpopulated and a disguised unemployment problem remains. Poland, a member of the European Union (EU) since May 2004, has greatly benefited from the Structural Funds to improve and modernize its agriculture. As for Turkey, an EU candidate since October 2005, it is trying to adapt its agriculture to the Common Agricultural Policy (CAP). Our research is mainly based on a comparative study of two countries with many similarities but also real differences in their agrarian structures. Drawing on the reforms that came into effect during and after Poland's accession period, we try to determine whether or not the reforms implemented in Poland are transferable to the case of Turkey.
734

Modelling and analysis of wireless MAC protocols with applications to vehicular networks

Jafarian, Javad January 2014 (has links)
Wireless networks have become so popular that most devices will soon rely on them, but their increasingly widespread use creates new challenges in wireless channel access. Multi-channel CSMA protocols have been designed to enhance the throughput of next-generation wireless networks compared to single-channel protocols, yet their performance analysis still requires careful consideration. In this thesis, a set of techniques is proposed to model and analyse CSMA protocols in terms of channel sensing and channel access. In particular, the performance of un-slotted multi-channel CSMA protocols is analysed while accounting for hidden terminals, and the modelling phase takes important impairments such as shadowing and path loss into account. Given the importance of spectrum sensing in CSMA protocols, the Double-Threshold Energy Detector (DTED) is then investigated in detail, and an iterative algorithm is proposed to determine the optimum detection parameters in a sensing-throughput problem formulation. Vehicle-to-Roadside (V2R) communication over multi-channel wireless networks, as part of Intelligent Transportation Systems (ITS), is also modelled and analysed: a novel mathematical model is proposed to investigate the level of connectivity an arbitrary vehicle experiences while transmitting packets to a roadside unit (RSU).
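The abstract does not spell out the DTED decision rule, but a double-threshold energy detector is commonly organised along the following lines; the sketch is an illustrative simplification in which the two thresholds and the handling of the uncertain region are assumptions (the thesis determines the detection parameters through the sensing-throughput formulation).

```python
import numpy as np

def double_threshold_detect(samples, lower, upper):
    """Classify one sensing window as 'free', 'occupied', or 'uncertain'.

    samples: baseband samples collected during the sensing slot.
    lower, upper: the two energy thresholds of the double-threshold detector.
    """
    energy = np.mean(np.abs(samples) ** 2)   # test statistic: average energy
    if energy >= upper:
        return "occupied"                    # confident detection: defer transmission
    if energy <= lower:
        return "free"                        # confident absence: channel may be accessed
    return "uncertain"                       # ambiguous region: e.g. sense longer or cooperate
```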
735

Phoneme duration modelling for speaker verification

Van Heerden, Charl Johannes 26 June 2009 (has links)
Higher-level features are considered to be a potential remedy against transmission line and cross-channel degradations, currently some of the biggest problems associated with speaker verification. Phoneme durations in particular are not altered by these factors; thus a robust duration model will be a particularly useful addition to traditional cepstral-based speaker verification systems. In this dissertation we investigate the feasibility of phoneme durations as a feature for speaker verification. Simple speaker-specific triphone duration models are created to statistically represent the phoneme durations. Durations are obtained from a hidden Markov model (HMM) based automatic speech recognition system and are modeled using single-mixture Gaussian distributions. These models are applied in a speaker verification system (trained and tested on the YOHO corpus) and found to be a useful feature, even when used in isolation. When fused with acoustic features, verification performance increases significantly. A novel speech rate normalization technique is developed in order to remove some of the inherent intra-speaker variability (due to differing speech rates). Speech rate variability has a negative impact on both speaker verification and automatic speech recognition. Although the duration modelling seems to benefit only slightly from this procedure, the fused system performance improvement is substantial. Other factors known to influence the duration of phonemes are incorporated into the duration model. Utterance-final lengthening is known to be a consistent effect and thus “position in sentence” is modeled. “Position in word” is also modeled since triphones do not provide enough contextual information. This is found to improve performance since some vowels’ durations are particularly sensitive to their position in the word. Data scarcity becomes a problem when building speaker-specific duration models. By using information from available data, unknown durations can be predicted in an attempt to overcome the data scarcity problem. To this end we develop a novel approach to predict unknown phoneme durations from the values of known phoneme durations for a particular speaker, based on the maximum likelihood criterion. This model is based on the observation that phonemes from the same broad phonetic class tend to co-vary strongly, but that there are also significant cross-class correlations. This approach is tested on the TIMIT corpus and found to be more accurate than using back-off techniques. / Dissertation (MEng)--University of Pretoria, 2009. / Electrical, Electronic and Computer Engineering / unrestricted
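To make the scoring of a speaker-specific triphone duration model concrete, the sketch below assumes a single Gaussian per triphone and returns the average log-likelihood of the durations produced by the ASR alignment; the data structures, the skipping of unseen triphones, and the names are illustrative assumptions, not the dissertation's implementation.

```python
import math

def duration_score(durations, model):
    """Average log-likelihood of observed triphone durations under a speaker model.

    durations: list of (triphone, duration_in_frames) pairs from the ASR alignment.
    model: dict mapping triphone -> (mean, variance) of its single-Gaussian duration model.
    """
    total, count = 0.0, 0
    for triphone, dur in durations:
        if triphone not in model:   # unseen triphone: skip (a back-off would go here)
            continue
        mean, var = model[triphone]
        total += -0.5 * (math.log(2 * math.pi * var) + (dur - mean) ** 2 / var)
        count += 1
    return total / max(count, 1)
```

A verification decision would compare this score, alone or fused with the cepstral score, against a threshold tuned on development data.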
736

An approach for the sustainability of ICT centres implemented by Technikon SA in Southern Africa

Hulbert, David Thomas 16 July 2008 (has links)
The study primarily described and analysed the attempts made by the former Technikon SA to implement ICT centres in Southern Africa over a period of six years. Based on contemporary theory, the study suggested an approach for the implementation of ICT centres in developing regions. In the introduction, the study deals with the problems of technology transfer to developing regions and refers to the impact of globalisation on third-world economies. In particular, the study highlights the barriers to technology transfer, with specific emphasis on the peculiarities that are unique to each region. The study further analysed the approach used by the former Technikon SA for the deployment of ICT centres, especially as ICT centres were considered by many to be an ideal means of technology transfer. In order to contextualise the understanding and findings of the research, the study relied on the fact that it was based on a longitudinal design, which made it possible to observe and record the life of an ICT centre over a significant period of time. Not only was it evident that there was little regard for the respective communities' needs and no indication that these ICT centres were successful, but the same mistakes were being repeated. At national level, an enormous amount of effort and money had been channelled into the roll-out of ICT centres with little guarantee of success. International symposiums suggested that, through technology, third-world economies could make the quantum leap into the information age and that the deployment of ICT centres was one of the ways in which this could be achieved at the local level. There was little evidence to suggest that any significant success had been achieved through the many attempts at ICT centre deployment. Through the study, a research instrument was developed to assess and measure the success of each of the centres. The approach for ICT deployment suggested in the study was based on this research instrument as well as on models developed by certain theorists (Heeks, Van Ardenhoven and Snyman). The study finally analysed the nature and impact of implementing ICT centres without considering the elements identified as critical success factors. Critical success factors, including role players from government to the community, local ownership, identification of local needs, local knowledge, an understanding of local conditions, support structures and partnerships, were shown to be key to the success and sustainability of ICT centres. The study also provides a perspective on the conflict that arose between the implementer of ICT centres and the communities. / Thesis (PhD (Information Science))--University of Pretoria, 2009. / Information Science / unrestricted
737

Méthodes de lissage et d'estimation dans des modèles à variables latentes par des méthodes de Monte-Carlo séquentielles / Smoothing and estimation methods in hidden variable models through sequential Monte-Carlo methods

Dubarry, Cyrille 09 October 2012 (has links)
Les modèles de chaînes de Markov cachées ou plus généralement ceux de Feynman-Kac sont aujourd'hui très largement utilisés. Ils permettent de modéliser une grande diversité de séries temporelles (en finance, biologie, traitement du signal, ...) La complexité croissante de ces modèles a conduit au développement d'approximations via différentes méthodes de Monte-Carlo, dont le Markov Chain Monte-Carlo (MCMC) et le Sequential Monte-Carlo (SMC). Les méthodes de SMC appliquées au filtrage et au lissage particulaires font l'objet de cette thèse. Elles consistent à approcher la loi d'intérêt à l'aide d'une population de particules définies séquentiellement. Différents algorithmes ont déjà été développés et étudiés dans la littérature. Nous raffinons certains de ces résultats dans le cas du Forward Filtering Backward Smoothing et du Forward Filtering Backward Simulation grâce à des inégalités de déviation exponentielle et à des contrôles non asymptotiques de l'erreur moyenne. Nous proposons également un nouvel algorithme de lissage consistant à améliorer une population de particules par des itérations MCMC, et permettant d'estimer la variance de l'estimateur sans aucune autre simulation. Une partie du travail présenté dans cette thèse concerne également les possibilités de mise en parallèle du calcul des estimateurs particulaires. Nous proposons ainsi différentes interactions entre plusieurs populations de particules. Enfin nous illustrons l'utilisation des chaînes de Markov cachées dans la modélisation de données financières en développant un algorithme utilisant l'Expectation-Maximization pour calibrer les paramètres du modèle exponentiel d'Ornstein-Uhlenbeck multi-échelles / Hidden Markov chain models, or more generally Feynman-Kac models, are now widely used. They allow the modelling of a great variety of time series (in finance, biology, signal processing, ...). Their increasing complexity gave birth to approximations using Monte-Carlo methods, among which Markov Chain Monte-Carlo (MCMC) and Sequential Monte-Carlo (SMC). SMC methods applied to particle filtering and smoothing are dealt with in this thesis. These methods consist in approximating the law of interest through a sequentially defined particle population. Different algorithms have already been developed and studied in the literature. We make some of these results more precise in the particular cases of the Forward Filtering Backward Smoothing and the Forward Filtering Backward Simulation algorithms, by showing exponential deviation inequalities and by giving non-asymptotic upper bounds on the mean error. We also introduce a new smoothing algorithm that improves a particle population through MCMC iterations and makes it possible to estimate the estimator's variance without further simulation. Part of the work presented in this thesis is devoted to the parallel computation of particle estimators; we study different interaction schemes between several particle populations. Finally, we also illustrate the use of hidden Markov chains in the modelling of financial data through an algorithm using Expectation-Maximization to calibrate the parameters of the exponential Ornstein-Uhlenbeck multiscale stochastic volatility model.
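To make the SMC building block concrete, a minimal bootstrap particle filter is sketched below. This is a generic illustration rather than any of the thesis's algorithms: the init, transition, and likelihood arguments are placeholders supplied by the state-space model, and the stored particle clouds are the raw material on which FFBS-type smoothers then operate.

```python
import numpy as np

def bootstrap_filter(observations, n_particles, init, transition, likelihood):
    """Minimal bootstrap particle filter.

    init(n):          draws n initial particles (1-D array)
    transition(x):    propagates a particle array one time step
    likelihood(y, x): observation density of y evaluated at each particle in x
    Returns the list of resampled particle clouds, one per observation.
    """
    particles = init(n_particles)
    clouds = []
    for y in observations:
        particles = transition(particles)     # propagate through the dynamics
        weights = likelihood(y, particles)    # weight by the new observation
        weights = weights / weights.sum()     # normalise the importance weights
        idx = np.random.choice(n_particles, n_particles, p=weights)
        particles = particles[idx]            # multinomial resampling
        clouds.append(particles.copy())
    return clouds
```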
738

Modèles de mélange et de Markov caché non-paramétriques : propriétés asymptotiques de la loi a posteriori et efficacité / Non Parametric Mixture Models and Hidden Markov Models : Asymptotic Behaviour of the Posterior Distribution and Efficiency

Vernet, Elodie, Edith 15 November 2016 (has links)
Les modèles latents sont très utilisés en pratique, comme en génomique, économétrie, reconnaissance de parole... Comme la modélisation paramétrique des densités d’émission, c’est-à-dire les lois d’une observation sachant l’état latent, peut conduire à de mauvais résultats en pratique, un récent intérêt pour les modèles latents non paramétriques est apparu dans les applications. Or ces modèles ont peu été étudiés en théorie. Dans cette thèse je me suis intéressée aux propriétés asymptotiques des estimateurs (dans le cas fréquentiste) et de la loi a posteriori (dans le cadre Bayésien) dans deux modèles latents particuliers : les modèles de Markov caché et les modèles de mélange. J’ai tout d’abord étudié la concentration de la loi a posteriori dans les modèles non paramétriques de Markov caché. Plus précisément, j’ai étudié la consistance puis la vitesse de concentration de la loi a posteriori. Enfin je me suis intéressée à l’estimation efficace du paramètre de mélange dans les modèles semi paramétriques de mélange. / Latent models have been widely used in diverse fields such as speech recognition, genomics and econometrics. Because parametric modeling of emission distributions, that is, the distributions of an observation given the latent state, may lead to poor results in practice, in particular for clustering purposes, recent interest in non-parametric latent models has appeared in applications. Yet little thought has been given to theory in this framework. During my PhD I have been interested in the asymptotic behaviour of estimators (in the frequentist case) and of the posterior distribution (in the Bayesian case) in two particular non-parametric latent models: hidden Markov models and mixture models. I first studied the concentration of the posterior distribution in non-parametric hidden Markov models; more precisely, I considered posterior consistency and posterior concentration rates. Finally, I have been interested in efficient estimation of the mixture parameter in semi-parametric mixture models.
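For reference, the posterior concentration property studied here is usually stated as follows; this is the standard formulation, not a result quoted from the thesis.

```latex
% Posterior concentration at rate \varepsilon_n around the true parameter \theta_0:
% the posterior mass outside a shrinking ball vanishes in P_{\theta_0}-probability.
\Pi\bigl( d(\theta, \theta_0) > M \varepsilon_n \,\bigm|\, X_{1:n} \bigr)
  \xrightarrow[\; n \to \infty \;]{} 0
  \quad \text{in } P_{\theta_0}\text{-probability, for some constant } M > 0 .
```

Posterior consistency corresponds to the same statement with a fixed ball radius in place of the shrinking radius M ε_n.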
739

Lägesosäkerhet vid mätning av dold punkt med totalstation och GNSS

Persson, Patrik, Sjölén, Dennis January 2018 (has links)
En dold punkt är en punkt som inte kan mätas direkt utan måste mätas indirekt med hjälp av t.ex. Global Navigation Satellite System (GNSS) eller totalstation. Det finns ett flertal olika metoder med GNSS och totalstation som passar till olika inmätningssituationer för att mäta en dold punkt. Mätning av dolda punkter med totalstation inträffar ofta i industrimiljöer där rör och liknande hänger i vägen för totalstationens siktlinje till mätobjektet. GNSS med nätverks-realtids kinematisk (nätverks-RTK) mätning, en metod som ökar inom mätningsjobb, är en bra metod för att indirekt mäta dolda punkter utomhus där det antingen är dålig mottagning av satellitsignaler eller inte är möjligt att ställa upp en antenn över punkten. Syftet med denna studie är att undersöka hur bra lägesosäkerhet det går att uppnå för mätning av dold punkt med GNSS och totalstation och även jämföra de olika metoderna som testas. Fem olika metoder beskrivs för att kunna bestämma en dold punkts koordinater med totalstation. Bl.a. en med stång och prismor för mätning i plan och höjd, som även kommer användas i denna studie. Lägesosäkerheten 0,1 mm i både plan och höjd bör kunna uppnås med den metoden. Metoder som kan användas med GNSS och nätverks-RTK är t.ex. en rak linje och dess bäring, skärningen av två raka linjer och skärningen av två längdmätningar. Med nätverks-RTK kan mätningar uppnå en lägesosäkerhet på millimeter-nivå baserat på SWEPOS nätverks-RTK-tjänst. Det är även viktigt med tidskorrelation mellan mätningar om de ska göras oberoende av varandra. Resultaten av lägesosäkerheten i denna studie kan sedan jämföras med de från tidigare studier, om liknande värden kan erhållas vid mätning av dold punkt och hur mycket de skiljer sig. Metoden med totalstation som testats i denna studie är en stång med två prismor på som hålls mot den dolda punkten. Prismorna på stången mättes in med totalstationen och med hjälp av punkternas koordinater kan bäringen mellan dem räknas ut, vektorn förlängs till den dolda punkten, och sedan kan den dolda punktens koordinater räknas ut. De metoder som testats med GNSS är beräkning med en rak linje och dess bäring och beräkning med dubbla längdmätningar. För både GNSS och totalstationsmätningarna har minsta kvadratmetoden använts för att beräkna den dolda punkten och dess lägesosäkerhet. Fyra olika varianter av totalstationsmätningarna utfördes. 0,7 m fast anlagd prismastång med manuell inriktning, 1,0 m fast anlagd prismastång med manuell inriktning, 1,0 m fast anlagd prismastång med automatisk inriktning och 1,0 m handhållen prismastång med manuell inriktning. Alla varianter utfördes i två mätningsomgångar. Lägesosäkerheten vid mätningar för en dold punkt med totalstation var i denna studie 1-2 mm i plan och runt 1 mm i höjd, lägst lägesosäkerhet gav manuell inriktning (0,7 m mellan prismorna) med 0,93 mm i plan och 0,79 mm i höjd. Vilken mätningsvariant som var bäst med totalstationsmätningarna varierade mellan mätningsomgångarna, men skillnaden dem emellan var inte så stor. Det är därför svårt att säga säkert vilken variant som ger bäst lägesosäkerhet med det antalet mätningar som utfördes i denna studie. Med GNSS erhölls osäkerheter på som lägst 7,3 mm där dubbla längdmätningar med stativ gav bäst resultat. Om GNSS-mottagaren placerades på ett stativ eller hölls upp med eller utan stödpinnar förändrade inte slutresultatet så mycket, men som väntat gav stativet lägst lägesosäkerhet. 
/ A hidden point is a point that cannot be measured directly but must be measured indirectly using, for example, a Global Navigation Satellite System (GNSS) or a total station. Several GNSS and total station methods exist, each suited to a different survey situation. Measurement of hidden points with a total station often occurs in industrial environments where pipes and similar obstacles block the total station's line of sight to the measured object. GNSS with network Real-Time Kinematic positioning (network RTK), a method increasingly used in surveying work, is well suited to measuring hidden points indirectly outdoors where satellite reception is poor or it is not possible to set up an antenna over the point. The purpose of this study is to investigate how low a positional uncertainty can be achieved when measuring a hidden point with GNSS and a total station, and to compare the different methods tested. Five different methods for determining the coordinates of a hidden point with a total station are described, among them one using a bar with prisms for measurement in plan and height, which is also used in this study; a positional uncertainty of 0.1 mm in both plan and height should be achievable with that method. Methods that can be used with GNSS and network RTK include a straight line and its bearing, the intersection of two straight lines, and the intersection of two distance measurements. With network RTK, measurements can reach millimetre-level positional uncertainty using the SWEPOS network RTK service. Time correlation between measurements must also be considered if they are to be treated as independent. The positional uncertainties obtained in this study can then be compared with those of previous studies, to see whether similar values are obtained when measuring a hidden point and how much they differ. The total station method tested in this study uses a bar with two prisms held against the hidden point. The prisms on the bar are measured with the total station; from their coordinates the bearing between them is calculated, the vector is extended to the hidden point, and the coordinates of the hidden point are then computed. The methods tested with GNSS are calculation from a straight line and its bearing, and calculation from double distance measurements. For both the GNSS and the total station measurements, the least squares method was used to calculate the hidden point and its positional uncertainty. Four variants of the total station measurement were performed: a 0.7 m fixed prism bar with manual alignment, a 1.0 m fixed prism bar with manual alignment, a 1.0 m fixed prism bar with automatic alignment, and a 1.0 m hand-held prism bar with manual alignment. All variants were performed in two measuring rounds. The positional uncertainty when measuring a hidden point with the total station was 1-2 mm in plan and around 1 mm in height; the lowest uncertainty was obtained with manual alignment and 0.7 m between the prisms, at 0.93 mm in plan and 0.79 mm in height. Which variant performed best with the total station varied between the two measuring rounds, but the differences were small, so it is difficult to say with certainty which variant gives the lowest positional uncertainty with the number of measurements performed in this study.
With GNSS, uncertainties of at lowest 7.3 mm were obtained, with double distance measurements from a tripod yielding the best results. Whether the GNSS receiver was placed on a tripod or held up with or without support legs did not change the final result much, but, as expected, the tripod gave the lowest positional uncertainty.
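The prism-bar computation described above reduces to extending the unit vector between the two measured prisms by a known offset. A minimal sketch is given below; the coordinates, prism spacing, and extension distance are chosen purely for illustration and are not values from the study.

```python
import numpy as np

def hidden_point(p1, p2, extension):
    """Compute a hidden point lying on the line through two measured prisms.

    p1, p2:    (N, E, H) coordinates of the two prisms on the bar,
               ordered so that the hidden point lies beyond p2.
    extension: known distance from p2 to the hidden point along the bar (metres).
    """
    direction = (p2 - p1) / np.linalg.norm(p2 - p1)   # unit vector along the bar
    return p2 + extension * direction

# Illustrative prism coordinates with 0.7 m spacing, as in the study's first variant
p1 = np.array([100.000, 200.000, 50.000])
p2 = np.array([100.000, 200.700, 50.000])
print(hidden_point(p1, p2, 0.300))   # hidden point 0.3 m beyond the second prism
```

In practice each prism would be measured several times, and the least squares adjustment mentioned above would propagate the measurement uncertainties to the hidden point.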
740

Managing Manufacturing Outsourcing Relationships

Skowronski, Keith Collins 22 November 2016 (has links)
No description available.
