681 |
Modelagem e previsão de volatilidade para o setor siderúrgico brasileiro : volatilidade estocástica versus determinísticaRibeiro, Bruno Passos Spínola January 2009 (has links)
A busca da correta modelagem e previsão de volatilidade em séries financeiras é o que motiva grande parte dos analistas e gestores de carteiras. Esta dissertação buscou, portanto, comparar dois tipos de modelos de volatilidade - determinística e estocástica - para as três principais séries de retornos de ações do setor siderúrgico brasileiro, quais sejam: Gerdau PN (GGBR4), Usiminas PN (USIM5) e CSN ON (CSNA3). Os três ativos apresentaram estruturas semelhantes para suas volatilidades. Para as três séries foram encontradas especificações determinísticas do tipo AR (1) - EGARCH (1,1) e AR (1) - TGARCH (0,1), ambas com volatilidades estimadas muito próximas. No caso estocástico optou-se por um modelo AR (1) - SV Estacionário para as três séries de retornos. A maior persistência foi observada no ativo da Gerdau, mostrando que um choque sobre o ativo da Gerdau demora mais a se dissipar do que um choque de mesma magnitude sobre os ativos de Usiminas e CSN. Quanto ao efeito alavancagem, a ação da Usiminas apresentou o maior resultado estimado, mostrando que retornos negativos em um dado instante t geram maior volatilidade no período seguinte (t+1) sobre o ativo da Usiminas. Por último, comparou-se a qualidade preditiva das duas classes de modelos de volatilidade por meio de previsões um passo à frente durante 21 dias, utilizando-se três estatísticas de previsão - erro médio (ME), raiz do erro quadrático médio (RMSE) e erro absoluto médio (MAE). Para o ativo USIM5 as três estatísticas sugerem que o modelo escolhido deve ser o estocástico. Para os ativos GGBR4 e CSNA3 o ME sugere que o modelo escolhido deve ser o determinístico e o RMSE e o MAE sugerem que o modelo escolhido deve ser o estocástico. / The accurate modeling and forecasting of volatility in financial series is what motivates most analysts and portfolio managers. 
This dissertation therefore sought to compare two types of volatility models - deterministic and stochastic - for the three major series of stock returns in the Brazilian steel industry, namely: Gerdau PN (GGBR4), Usiminas PN (USIM5) and CSN ON (CSNA3). The three assets showed similar volatility structures. For all three series we found deterministic specifications of the type AR (1) - EGARCH (1,1) and AR (1) - TGARCH (0,1), both with very close volatility estimates. In the stochastic case we chose a stationary AR (1) - SV model for the three series of returns. The highest persistence was observed in the Gerdau asset, showing that a shock on this asset takes longer to dissipate than a shock of the same magnitude on the assets of Usiminas and CSN. As for the leverage effect, the Usiminas series had the highest estimated result, showing that negative returns at a given time t generate greater volatility in the following period (t+1) on the Usiminas asset. Finally, we compared the predictive quality of the two classes of volatility models through one-step-ahead forecasts over 21 days, using three forecast statistics - mean error (ME), root mean squared error (RMSE) and mean absolute error (MAE). For the asset USIM5, the three statistics suggest that the stochastic model should be chosen. For the assets GGBR4 and CSNA3, the ME favors the deterministic model, while the RMSE and MAE favor the stochastic one.
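The three forecast statistics used in the comparison above can be computed directly from the forecast errors; a minimal sketch (the return series below is purely illustrative, not the GGBR4/USIM5/CSNA3 data):

```python
import math

def forecast_errors(actual, predicted):
    """Compute the three one-step-ahead forecast statistics.

    ME   (mean error)              -- average signed deviation (bias)
    RMSE (root mean squared error) -- penalizes large misses more
    MAE  (mean absolute error)     -- average magnitude of the miss
    """
    errors = [a - p for a, p in zip(actual, predicted)]
    n = len(errors)
    me = sum(errors) / n
    rmse = math.sqrt(sum(e * e for e in errors) / n)
    mae = sum(abs(e) for e in errors) / n
    return me, rmse, mae

# Hypothetical 5-day volatility forecasts (illustrative numbers only):
actual = [0.021, 0.018, 0.025, 0.019, 0.022]
predicted = [0.020, 0.019, 0.023, 0.021, 0.020]
me, rmse, mae = forecast_errors(actual, predicted)
```

Note how the three statistics can disagree, as they do for GGBR4 and CSNA3: ME rewards a small signed bias even when individual errors are large, while RMSE and MAE measure error magnitude.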
|
682 |
Controle preditivo retroalimentado por estados estimados, aplicado a uma planta laboratorialPaim, Anderson de Campos January 2009 (has links)
A retroalimentação de controladores preditivos que utilizam modelos em espaço de estado pode ser realizada de duas formas: (a) correção por bias, em que as saídas preditas são corrigidas adicionando-se um valor proporcional à discrepância encontrada entre o valor medido atual e sua respectiva predição e (b) retroalimentação dos estados, onde se determinam as condições iniciais através da estimação dos estados, e a partir de uma melhor condição inicial se realizam as predições futuras usadas no cálculo das ações de controle. Nesta dissertação, estas duas abordagens são comparadas utilizando a Planta Laboratorial de Seis Tanques Esféricos. As técnicas de Filtro de Kalman Estendido (EKF) e Filtro de Kalman Estendido com Restrições (CEKF) foram empregadas para estimar os estados não medidos. Inicialmente foram feitos testes off-line destes algoritmos de estimação. Para estes testes foi utilizada uma série de dados da planta laboratorial do estudo de caso, na qual são estudadas as influências de diversos fatores de ajuste que determinam a qualidade final de estimação. Estes ajustes serviram de base para a aplicação destes algoritmos em tempo real, quando, então, os estimadores de estados são associados ao sistema de controle do processo baseado em um algoritmo de controle preditivo. Após se ter certificado a qualidade das estimações de estado, partiu-se para sua utilização como uma alternativa de retroalimentação de controladores preditivos. Estes resultados foram comparados com os obtidos através da correção simples por bias. Os resultados experimentais apontam para uma piora marginal da retroalimentação por estimadores de estados frente à correção por bias, pelo menos para o caso do controlador preditivo linear utilizado na comparação. Entretanto, espera-se que resultados melhores sejam obtidos no caso de modelos preditivos não-lineares, uma vez que nestes casos o modelo é bem mais sensível à qualidade da condição inicial. 
/ The feedback of predictive controllers that use state-space models can be accomplished in two ways: (a) bias correction, where the predicted outputs are corrected by adding a value proportional to the discrepancy between the current measurement and its respective prediction; and (b) state feedback, which establishes the initial conditions through state estimation, so that the future predictions used in the control calculation start from a better initial condition. In this dissertation these two approaches are compared using a laboratory plant of six spherical tanks. The Extended Kalman Filter (EKF) and the Constrained Extended Kalman Filter (CEKF) were used to estimate the unmeasured states. Initially, off-line tests of these estimation algorithms were carried out using a dataset from the laboratory plant of the case study, in which the influence of the several tuning factors that determine the final estimation quality was studied. These tunings served as a basis for applying the algorithms in real time, with the state estimators coupled to the process control system based on a predictive control algorithm. After the quality of the state estimates was ascertained, they were used as an alternative feedback for predictive controllers. These results were compared with those obtained by the simple bias correction. The experimental results show a marginal worsening with state-estimator feedback compared to bias correction, at least for the linear predictive controller used in the comparison. However, better results are expected for non-linear predictive models, since in that case the model is much more sensitive to the quality of the initial condition.
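The difference between the two feedback forms can be illustrated on a scalar toy system (the model coefficients, gains and noise covariances below are illustrative assumptions, not the six-tank plant):

```python
# Scalar system x[k+1] = a*x[k] + b*u[k], measurement y[k] = x[k] + noise.
a, b = 0.9, 0.1

def predict_bias_corrected(x_model, y_meas, horizon, u=0.0):
    """(a) Bias correction: shift every model prediction by the
    current measurement discrepancy."""
    bias = y_meas - x_model
    preds = []
    x = x_model
    for _ in range(horizon):
        x = a * x + b * u
        preds.append(x + bias)
    return preds

def kalman_update(x_est, p_est, y_meas, q=0.01, r=0.1):
    """(b) State feedback: one predict/update step of a scalar Kalman
    filter; future predictions then start from the corrected state."""
    # predict step
    x_pred = a * x_est
    p_pred = a * p_est * a + q
    # update step
    k_gain = p_pred / (p_pred + r)
    x_new = x_pred + k_gain * (y_meas - x_pred)
    p_new = (1 - k_gain) * p_pred
    return x_new, p_new
```

In (a) the model trajectory is left untouched and only shifted; in (b) the initial condition itself is corrected, which is why the thesis expects the approach to matter more for nonlinear predictive models.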
|
683 |
Módulo de auto-localização para um agente exploratório usando Filtro de Kalman / Self-localization module for exploratory agent using Kalman filterMachado, Karla Fedrizzi January 2003 (has links)
Construir um robô capaz de realizar tarefas sem qualquer interferência humana é um dos maiores desafios da Robótica Móvel. Dispondo apenas de sensores, um robô autônomo precisa explorar ambientes desconhecidos e, simultaneamente, construir um mapa confiável a fim de se localizar e realizar a tarefa. Na presença de erros de odometria, o robô não consegue se auto-localizar corretamente em seu mapa interno e acaba por construir um mapa deformado e não condizente com a realidade. Este trabalho apresenta uma solução para o problema da auto-localização de robôs móveis autônomos. Esta solução faz uso de um método linear de cálculo de estimativas chamado Filtro de Kalman para corrigir a posição do robô em seu mapa interno do ambiente enquanto realiza a exploração. A proposta leva em consideração que toda entidade que se movimenta em um ambiente conta sempre com alguns pontos de referência para se localizar. Estes pontos são implementados como objetos especiais chamados marcas de Kalman. Em simulação, o reconhecimento das marcas pode ser feito de duas maneiras: através de sua posição no mapa ou através de sua identidade. Nos experimentos realizados em simulação, o método é testado para diferentes erros no ângulo de orientação do robô. Os resultados são comparados levando em consideração as deformações no mapa gerado, com e sem marcas de Kalman, e o erro médio da posição do robô durante todo o processo exploratório. / Building a robot capable of performing tasks without any human interference is one of the biggest challenges of Mobile Robotics. Having only sensors, an autonomous robot needs to explore unknown environments and, simultaneously, build a reliable map in order to locate itself and perform the task. In the presence of odometry errors, the robot is not capable of establishing its own position on its internal map and ends up building a deformed map that does not reflect reality. 
This work presents a solution to the self-localization problem of autonomous mobile robots. The solution uses a linear estimation method called the Kalman Filter to correct the robot's position on its internal map of the environment while exploring. The proposal considers that any entity that moves in an environment always relies on some reference points to establish its own position. These points are implemented as special objects called Kalman landmarks. In simulation, the recognition of such landmarks can be done in two different ways: through their position on the map or through their identity. In the experiments performed in simulation, the method is tested for different errors in the robot's orientation angle. The results are compared considering the deformations in the generated map, with and without the Kalman landmarks, and the average error of the robot's position throughout the exploratory process.
|
684 |
Autonomous Quadrotor Navigation by Detecting Vanishing Points in Indoor EnvironmentsJanuary 2018 (has links)
abstract: Toward the ambitious long-term goal of a fleet of cooperating Flexible Autonomous Machines operating in an uncertain Environment (FAME), this thesis addresses various perception and control problems in autonomous aerial robotics. The objective of this thesis is to motivate the use of perspective cues in single images for the planning and control of quadrotors in indoor environments. In addition to providing empirical evidence for the abundance of such cues in indoor environments, the usefulness of these perspective cues is demonstrated by designing a control algorithm for navigating a quadrotor in indoor corridors. An Extended Kalman Filter (EKF), implemented on top of the vision algorithm, serves to improve the robustness of the algorithm to changing illumination.
In this thesis, vanishing points are the perspective cues used to control and navigate a quadrotor in an indoor corridor. Indoor corridors are an abundant source of parallel lines. As a consequence of perspective projection, parallel lines in the real world, that are not parallel to the plane of the camera, intersect at a point in the image. This point is called the vanishing point of the image. The vanishing point is sensitive to the lateral motion of the camera and hence the quadrotor. By tracking the position of the vanishing point in every image frame, the quadrotor can navigate along the center of the corridor.
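Under the pinhole model described above, the vanishing point can be computed as the intersection of two detected corridor edge lines; a minimal sketch using homogeneous coordinates (the pixel coordinates below are made up for illustration, not from the thesis experiments):

```python
def line_through(p1, p2):
    """Homogeneous line through two image points (their cross product)."""
    (x1, y1), (x2, y2) = p1, p2
    return (y1 - y2, x2 - x1, x1 * y2 - x2 * y1)

def vanishing_point(l1, l2):
    """Intersection of two homogeneous lines (their cross product),
    dehomogenized back to pixel coordinates."""
    a1, b1, c1 = l1
    a2, b2, c2 = l2
    x = b1 * c2 - b2 * c1
    y = c1 * a2 - c2 * a1
    w = a1 * b2 - a2 * b1  # w == 0 would mean parallel image lines
    return (x / w, y / w)

# Two corridor edges converging in a 640x480 image (illustrative):
left_edge = line_through((0, 480), (300, 240))
right_edge = line_through((640, 480), (340, 240))
vp = vanishing_point(left_edge, right_edge)
```

Tracking the horizontal coordinate of `vp` from frame to frame gives the lateral cue used to steer the quadrotor toward the corridor center.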
Experiments are conducted using the Augmented Reality (AR) Drone 2.0. The drone is equipped with the following components: (1) a 720p forward-facing camera for vanishing point detection, (2) a 240p downward-facing camera, (3) an Inertial Measurement Unit (IMU) for attitude control, (4) an ultrasonic sensor for estimating altitude, (5) an on-board 1 GHz processor for processing low-level commands. The reliability of the vision algorithm is demonstrated by flying the drone in indoor corridors. / Dissertation/Thesis / Masters Thesis Electrical Engineering 2018
|
685 |
A Control System For A 3-Axis Camera StabilizerHasnain, Bakhtiyar Asef, Algoz, Ali January 2018 (has links)
The purpose of the project is to implement a control system for a 3-axis camera stabilizer. The stabilization is done by controlling three brushless DC motors driving the yaw, pitch and roll movements of the camera stabilizer's frame, respectively. The stabilizer's frame (equipped with three motors) used in this project is taken directly from a commercial product, the Feiyu Tech G4S. The control system consists of a Teensy 3.6 microcontroller unit (MCU) implementing three PID controllers, the motor drivers for the three motors, and an inertial measurement unit (IMU) with 9 degrees of freedom. The MCU is also used to process the IMU measurements of the camera position in 3-axis motion: it converts the IMU raw data to an angle for each axis, then processes the angle data using a Kalman filter to reduce the noise. At the end of the project a prototype was built and tested, using the control system to run the stabilizing process. It is shown to work quite successfully: it runs smoothly in the roll and pitch axes and compensates for unwanted movement. However, the yaw axis does not function as intended due to misplacement and poor calibration of the magnetometer sensor in the IMU, which is left for future work.
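A discrete PID step of the kind run per axis on the MCU might look like the sketch below (the gains and sample time are illustrative assumptions, not the values tuned for the G4S frame):

```python
class PID:
    """Minimal discrete PID controller, one instance per gimbal axis."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measured):
        """One control step: returns the motor command for this axis."""
        error = setpoint - measured
        self.integral += error * self.dt          # accumulated error
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)

# One control step: camera 1 rad off the roll setpoint (illustrative).
pid_roll = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.01)
u = pid_roll.step(setpoint=0.0, measured=1.0)
```

In the thesis setup the `measured` angle would come from the Kalman-filtered IMU fusion rather than raw sensor data.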
|
686 |
Extraction et débruitage de signaux ECG du foetus. / Extraction of Fetal ECGNiknazar, Mohammad 07 November 2013 (has links)
Les malformations cardiaques congénitales sont la première cause de décès liés à une anomalie congénitale. L'électrocardiogramme du fœtus (ECGf), qui est censé contenir beaucoup plus d'informations par rapport aux méthodes échographiques conventionnelles, peut être mesuré par des électrodes sur l'abdomen de la mère. Cependant, il est très faible et mélangé avec plusieurs sources de bruit et interférence, y compris l'ECG de la mère (ECGm) dont le niveau est très fort. Dans les études précédentes, plusieurs méthodes ont été proposées pour l'extraction de l'ECGf à partir des signaux enregistrés par des électrodes placées à la surface du corps de la mère. Cependant, ces méthodes nécessitent un nombre de capteurs important, et s'avèrent inefficaces avec un ou deux capteurs. Dans cette étude trois approches innovantes reposant sur une paramétrisation algébrique, statistique ou par variables d'état sont proposées. Ces trois méthodes mettent en œuvre des modélisations différentes de la quasi-périodicité du signal cardiaque. Dans la première approche, le signal cardiaque et sa variabilité sont modélisés par un filtre de Kalman. Dans la seconde approche, le signal est découpé en fenêtres selon les battements, et l'empilage constitue un tenseur dont on cherchera la décomposition. Dans la troisième approche, le signal n'est pas modélisé directement, mais il est considéré comme un processus Gaussien, caractérisé par ses statistiques à l'ordre deux. Dans les différents modèles, contrairement aux études précédentes, l'ECGm et le (ou les) ECGf sont modélisés explicitement. Les performances des méthodes proposées, qui utilisent un nombre minimum de capteurs, sont évaluées sur des données synthétiques et des enregistrements réels, y compris les signaux cardiaques des fœtus jumeaux. / Congenital heart defects are the leading cause of birth defect-related deaths. 
The fetal electrocardiogram (fECG), which is believed to contain much more information than conventional sonographic methods, can be measured by placing electrodes on the mother's abdomen. However, it has very low power and is mixed with several sources of noise and interference, including the strong maternal ECG (mECG). In previous studies, several methods have been proposed for the extraction of the fECG from signals recorded on the maternal body surface. However, these methods require a large number of sensors, and are ineffective with only one or two sensors. In this study, state-modeling, statistical and deterministic approaches are proposed for capturing weak traces of fetal cardiac signals. These three methods implement different models of the quasi-periodicity of the cardiac signal. In the first approach, the cardiac signal and its variability are modeled by a Kalman filter. In the second approach, the signal is divided into windows according to the beats; stacking the windows constructs a tensor that is then decomposed. In the third approach, the signal is not modeled directly, but is considered as a Gaussian process characterized by its second-order statistics. In all the proposed methods, unlike previous studies, the mECG and the fECG(s) are explicitly modeled. The performance of the proposed methods, which use a minimal number of electrodes, is assessed on synthetic data and actual recordings, including twin fetal cardiac signals.
|
687 |
Evaluation of TDOA based Football Player’s Position Tracking Algorithm using Kalman FilterKanduri, Srinivasa Rangarajan Mukhesh, Medapati, Vinay Kumar Reddy January 2018 (has links)
Time Difference Of Arrival (TDOA) based position tracking is one of the pinnacles of sports tracking technology. Using radio frequency communication, advanced filtering techniques and various computation methods, the position of a moving player in a virtually created sports arena can be identified using MATLAB and related to the player's movement in real time. For football in particular, this acts as a powerful tool for coaches to enhance team performance. Football clubs can use the player tracking data to boost their own team strengths and gain insight into their competing teams as well. This method helps to improve the success rate of athletes and clubs by analyzing the results, which helps in crafting their tactical and strategic approach to game play. The algorithm can also be used to enhance the viewing experience of the audience in the stadium, as well as of the broadcast. In this thesis work, a typical football field scenario is assumed and an array of base stations (BS) is installed equidistantly along the perimeter of the field. The player carries a radio transmitter which emits a radio frequency signal throughout the assigned game time. Using the concept of TDOA, position estimates of the player are generated and the transmitter is tracked continuously by the BS. The position estimates are then fed to a Kalman filter, which filters and smooths the position estimates of the player between the sample points considered. Different paths of the player - straight-line, circular and zig-zag paths in the field - are animated and the positions of the player are tracked. Based on the error rate of the player's estimated position, the performance of the Kalman filter is evaluated. The Kalman filter's performance is analyzed by varying the number of sample points.
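Given known base-station positions, the TDOA measurement model amounts to range differences divided by the propagation speed; a minimal sketch (the player and anchor coordinates are made-up illustrations, not the thesis scenario):

```python
import math

C = 299_792_458.0  # propagation speed of the radio signal, m/s

def tdoa(tx, anchors):
    """TDOA of the transmitter's signal at each anchor, measured
    relative to the first anchor in the list."""
    dists = [math.dist(tx, a) for a in anchors]
    t0 = dists[0] / C
    return [d / C - t0 for d in dists]

# Player at (30, 40) m, three base stations on the field perimeter
# (all coordinates illustrative):
td = tdoa((30.0, 40.0), [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0)])
```

Inverting this model (finding the position from measured time differences) is the hyperbolic localization problem whose noisy solutions are then smoothed by the Kalman filter.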
|
688 |
Influência da ionosfera no posicionamento GPS : estimativas dos resíduos no contexto de duplas diferenças e eliminação dos efeitos de 2ª e 3ª ordem /Marques, Haroldo Antonio. January 2008 (has links)
Orientador: João Francisco Galera Monico / Banca: José Tadeu Garcia Tommaselli / Banca: Edvaldo Simões da Fonseca Júnior / Resumo: Dados de receptores GPS de dupla freqüência são, em geral, processados utilizando a combinação ion-free, o que permite eliminar os efeitos de primeira ordem da ionosfera. Porém, os efeitos de segunda e terceira ordem, geralmente, são negligenciados no processamento de dados GPS. Nesse trabalho, esses efeitos foram levados em consideração no processamento dos dados. Foram investigados os modelos matemáticos associados a esses efeitos, as transformações envolvendo o campo magnético da Terra e a utilização do TEC advindo dos Mapas Globais da Ionosfera ou calculados a partir das pseudodistâncias. Numa outra investigação independente, os efeitos residuais de primeira ordem da ionosfera, resultantes da dupla diferença da pseudodistância e da fase da onda portadora, foram considerados como incógnitas no ajustamento. Porém, esses efeitos residuais foram tratados como pseudo-observações, associados aos processos aleatórios random walk e white noise e, adicionados ao algoritmo de filtro de Kalman. Dessa forma, o modelo matemático preserva a característica de número inteiro da ambigüidade da fase, facilitando a aplicação de algoritmos de solução da ambigüidade, que no caso desse trabalho, utilizou-se o método LAMBDA. Para o caso da consideração dos efeitos de segunda e terceira ordem da ionosfera, foram realizados processamentos de dados GPS envolvendo o modo relativo e o Posicionamento por Ponto Preciso. Os resultados mostraram que a não consideração desses efeitos no processamento dos dados GPS pode introduzir variações da ordem de três a quatro milímetros nas coordenadas das estações. / Abstract: Data from dual frequency receiver, in general, are processed using the ion-free combination that allows the elimination of the first order ionospheric effects. 
However, the second and third order ionospheric effects are generally neglected in GPS data processing. In this work, these effects were taken into account. The mathematical models associated with the second and third order effects were investigated, as well as the transformations involving the Earth's magnetic field and the use of TEC obtained from Global Ionosphere Maps or computed from the pseudoranges. The first order ionospheric residual effects, resulting from the pseudorange and carrier phase double differences, were treated as unknowns in the adjustment. These effects were handled as pseudo-observables, associated with random walk and white noise stochastic processes, and added to the Kalman filter algorithm. Therefore, the mathematical model preserves the integer nature of the phase ambiguities, facilitating the application of ambiguity resolution approaches; in this work, the LAMBDA method was used. For the second and third order ionospheric effects, GPS data processing was carried out in both the relative mode and Precise Point Positioning. The results showed that neglecting these effects in GPS data processing can introduce variations of the order of three to four millimeters in the station coordinates. / Mestre
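The first-order effect removed by the ion-free combination scales with the inverse square of the carrier frequency, so the dual-frequency combination cancels it exactly; a sketch (the geometric range and delay values are illustrative):

```python
F1, F2 = 1575.42e6, 1227.60e6  # GPS L1 and L2 carrier frequencies, Hz

def ion_free(p1, p2):
    """First-order ionosphere-free pseudorange combination:
    P_IF = (f1^2 * P1 - f2^2 * P2) / (f1^2 - f2^2)."""
    g = (F1 / F2) ** 2
    return (g * p1 - p2) / (g - 1.0)

# Geometric range rho plus a first-order delay I on L1; on L2 the
# delay is g = (f1/f2)^2 times larger (illustrative numbers):
rho, delay = 20_000_000.0, 5.0
g = (F1 / F2) ** 2
p_if = ion_free(rho + delay, rho + g * delay)  # recovers rho
```

The second and third order terms addressed in the thesis fall off as 1/f^3 and 1/f^4, which is why they survive this combination and must be modeled separately.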
|
689 |
Sistema de detecção em tempo real de faixas de sinalização de trânsito para veículos inteligentes utilizando processamento de imagemAlves, Thiago Waszak January 2017 (has links)
A mobilidade é uma marca da nossa civilização. Tanto o transporte de carga quanto o de passageiros compartilham de uma enorme infra-estrutura de conexões operada com o apoio de um sofisticado sistema logístico. Simbiose otimizada de módulos mecânicos e elétricos, os veículos evoluem continuamente com a integração de avanços tecnológicos e são projetados para oferecer o melhor em conforto, segurança, velocidade e economia. As regulamentações organizam o fluxo de transporte rodoviário e as suas interações, estipulando regras a fim de evitar conflitos. Mas a atividade de condução pode tornar-se estressante em diferentes condições, deixando os condutores humanos propensos a erros de julgamento e criando condições de acidente. Os esforços para reduzir acidentes de trânsito variam desde campanhas de re-educação até novas tecnologias. Esses tópicos têm atraído cada vez mais a atenção de pesquisadores e indústrias para Sistemas de Transporte Inteligentes baseados em imagens, que visam a prevenção de acidentes e o auxílio ao motorista na interpretação das formas de sinalização urbana. Este trabalho apresenta um estudo sobre técnicas de detecção em tempo real de faixas de sinalização de trânsito em ambientes urbanos e intermunicipais, com o objetivo de realçar as faixas de sinalização da pista para o condutor do veículo ou para o veículo autônomo, proporcionando um controle maior da área de tráfego destinada ao veículo e provendo alertas de possíveis situações de risco. A principal contribuição deste trabalho é otimizar a forma como as técnicas de processamento de imagem são utilizadas para realizar a extração das faixas de sinalização, com o objetivo de reduzir o custo computacional do sistema. Para realizar essa otimização, foram definidas pequenas áreas de busca de tamanho fixo e posicionamento dinâmico. 
Essas áreas de busca vão isolar as regiões da imagem onde as faixas de sinalização estão contidas, reduzindo em até 75% a área total onde são aplicadas as técnicas utilizadas na extração de faixas. Os resultados experimentais mostraram que o algoritmo é robusto em diversas variações de iluminação ambiente, sombras e pavimentos com cores diferentes, tanto em ambientes urbanos quanto em rodovias e autoestradas. Os resultados mostram uma taxa de detecção correta média de 98,1%, com tempo médio de operação de 13,3 ms. / Mobility is an imprint of our civilization. Both freight and passenger transport share a huge infrastructure of connecting links operated with the support of a sophisticated logistic system. As an optimized symbiosis of mechanical and electrical modules, vehicles evolve continuously with the integration of technological advances and are engineered to offer the best in comfort, safety, speed and economy. Regulations organize the flow of road transport and its interactions, stipulating rules to avoid conflicts. But driving can become stressful under different conditions, leaving human drivers prone to misjudgments and creating accident conditions. Efforts to reduce traffic accidents that may cause injuries and even deaths range from re-education campaigns to new technologies. These topics have increasingly attracted the attention of researchers and industries to image-based Intelligent Transportation Systems that aim to prevent accidents and help the driver interpret urban signage. This work presents a study on real-time detection techniques for traffic lane markings in urban and intercity environments, aiming to highlight the lane markings for the driver or for an autonomous vehicle, providing greater control of the traffic area assigned to the vehicle and alerts of possible risk situations. 
The main contribution of this work is to optimize how the image processing techniques are used to perform lane extraction, in order to reduce the computational cost of the system. To achieve this optimization, small search areas of fixed size and dynamic positioning were defined. These search areas isolate the regions of the image where the lane markings are contained, reducing by up to 75% the total area to which the lane extraction techniques are applied. The experimental results showed that the algorithm is robust under several variations of ambient light, shadows and pavements of different colors, in both urban environments and on highways and motorways. The results show an average correct detection rate of 98.1%, with an average operating time of 13.3 ms.
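The dynamically positioned, fixed-size search area can be sketched as follows (the window dimensions and frame size are illustrative assumptions, not the values used in the thesis):

```python
def search_window(prev_x, frame_w, win_w=160, win_h=60, y_top=300):
    """Fixed-size search area re-centered horizontally on the lane
    marking found in the previous frame, clamped to the image bounds.

    Returns (x0, y0, width, height) of the region of interest.
    """
    x0 = min(max(prev_x - win_w // 2, 0), frame_w - win_w)
    return (x0, y_top, win_w, win_h)

# For a 640x480 frame, two such windows (left and right markings)
# cover 2 * 160 * 60 = 19200 px instead of the full 640 * 480 px.
roi = search_window(prev_x=100, frame_w=640)
```

Because each window follows the marking detected in the previous frame, the expensive extraction filters only ever run on these small regions.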
|
690 |
Performance Comparison of Localization Algorithms for UWB Measurements with Closely Spaced AnchorsNilsson, Max January 2018 (has links)
Tracking objects or people in an indoor environment has a wide variety of uses in many different areas, similarly to positioning systems outdoors. Indoor positioning systems operate in a very different environment however, having to deal with obstructions while also having high accuracy. A common solution for indoor positioning systems is to have three or more stationary anchor antennas spread out around the perimeter of the area that is to be monitored. The position of a tag antenna moving in range of the anchors can then be found using trilateration. One downside of such a setup is that the anchors must be setup in advance, meaning that rapid deployment to new areas of such a system may be impractical. This thesis aims to investigate the possibility of using a different setup, where three anchors are placed close together, so as to fit in a small hand-held device. This would allow the system to be used without any prior setup of anchors, making rapid deployment into new areas more feasible. The measurements done by the antennas for use in trilateration will always contain noise, and as such algorithms have had to be developed in order to obtain an approximation of the position of a tag in the presence of noise. These algorithms have been developed with the setup of three spaced out anchors in mind, and may not be sufficiently accurate when the anchors are spaced very closely together. To investigate the feasibility of such a setup, this thesis tested four different algorithms with the proposed setup, to see its impact on the performance of the algorithms. The algorithms tested are the Weighted Block Newton, Weighted Clipped Block Newton, Linear Least Squares and Non-Linear Least Squares algorithms. The Linear Least Squares algorithm was also run with measurements that were first run through a simple Kalman filter. 
Previous studies have used the algorithms to find an estimated position of the tag and compared their efficiency using the positional error of the estimate. This thesis also uses the positional estimates to determine the angular position of the estimate in relation to the anchors, and uses that to compare the algorithms. Measurements were done using DWM1001 Ultra Wideband (UWB) antennas, and four different cases were tested. In case 1 the anchors and tag were 10 meters apart in line-of-sight; case 2 was the same as case 1 but with a person standing between the tag and the anchors. In case 3 the tag was moved behind a wall with an adjacent open door, and in case 4 the tag was in the same place as in case 3 but the door was closed. The Linear Least Squares algorithm using the filtered measurements was found to be the most effective in all cases, with a maximum angular error of less than 5° in the worst case. The worst case here was case 2, showing that the influence of a human body has a strong effect on the UWB signal, causing large errors in the estimates of the other algorithms. The presence of a wall between the anchors and the tag was found to have a minimal impact on the angular error, while having a larger effect on the spatial error. Further studies regarding the effects of the human body on UWB signals may be necessary to determine the feasibility of handheld applications, as well as the effect of the tag and/or the anchors moving on the efficiency of the algorithms.
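The angular error used here to compare the algorithms can be computed from the bearings of the estimated and true tag positions as seen from the anchor cluster; a sketch (the positions are illustrative, not measurement data):

```python
import math

def angular_error(estimate, truth, origin=(0.0, 0.0)):
    """Angle in degrees between the estimated and true tag directions
    as seen from the closely spaced anchor cluster at `origin`."""
    def bearing(p):
        return math.atan2(p[1] - origin[1], p[0] - origin[0])
    diff = bearing(estimate) - bearing(truth)
    # wrap the difference into (-pi, pi] before taking the magnitude
    diff = (diff + math.pi) % (2 * math.pi) - math.pi
    return abs(math.degrees(diff))

# Estimate whose bearing is 45 degrees off the true direction:
err = angular_error((3.0, 3.0), (3.0, 0.0))
```

With all three anchors packed into one handheld device, bearing accuracy matters more than absolute position, which motivates this metric.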
|