41 |
Secure public-key encryption from factorisation-related problems / Brown, Jaimee. January 2007 (has links)
Public key encryption plays a vital role in securing sensitive data in practical applications. The security of many encryption schemes relies on mathematical problems related to the difficulty of factoring large integers. In particular, subgroup problems in composite order groups are a general class of problems widely used in the construction of secure public-key encryption schemes. This thesis studies public-key encryption schemes that are provably secure based on the difficulty of subgroup or other integer factorisation related problems in the standard model. Firstly, a number of new public-key encryption schemes are presented which are secure in the sense of indistinguishability against chosen-ciphertext attack in the standard model. These schemes are obtained by instantiating the two previous paradigms for chosen-ciphertext security by Cramer and Shoup, and Kurosawa and Desmedt, with three previously studied subgroup membership problems. The resulting schemes are very efficient, and are comparable, if not superior, in efficiency to previously presented instantiations. Secondly, a new approach is presented for constructing RSA-related public key encryption schemes secure in the sense of indistinguishability against chosen-ciphertext attack without random oracles. This new approach requires a new set of assumptions, called the Oracle RSA-type assumptions. The motivating observation is that RSA-based encryption schemes can be viewed as tag-based encryption schemes, and as a result can be used as a building block in a previous technique for obtaining chosen-ciphertext security. Two example encryption schemes are additionally presented, each of which is of comparable efficiency to other public key schemes of similar security. Finally, the notion of self-escrowed public-key infrastructures is revisited, and a security model is defined for self-escrowed encryption schemes. The security definitions proposed consider adversarial models which reflect an attacker's ability to recover private keys corresponding to public keys of the attacker's choice. General constructions for secure self-escrowed versions of ElGamal, RSA, Cramer-Shoup and Kurosawa-Desmedt encryption schemes are also presented, and efficient instantiations are provided. In particular, one instantiation solves the 'key doubling problem' observed in all previous self-escrowed encryption schemes. Also, for another instantiation a mechanism is described for distributing key recovery amongst a number of authorities.
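For reference, the chosen-ciphertext security notion used throughout this abstract is the standard IND-CCA game; a generic statement of the adversary's advantage (notation is illustrative, not taken from the thesis) is:

```latex
% Standard IND-CCA advantage (generic notation, not taken from the thesis):
% (pk, sk) <- KeyGen(1^lambda); the adversary A, given pk and a decryption oracle
% Dec(sk, .), outputs (m_0, m_1); the challenger draws b in {0,1} and returns
% c* = Enc(pk, m_b); A keeps its oracle access (except on c*) and outputs a guess b'.
\[
  \mathrm{Adv}^{\mathrm{cca}}_{\mathcal{A}}(\lambda)
  \;=\; \Bigl|\, \Pr[\, b' = b \,] - \tfrac{1}{2} \,\Bigr| ,
\]
% and a scheme is secure in this sense if the advantage is negligible in lambda
% for every efficient adversary.
```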
|
42 |
Analise de tecnicas de localização em redes de sensores sem fio / Analysis of localization techniques in wireless sensor networks / Moreira, Rafael Barbosa. 26 February 2007 (has links)
Advisor: Paulo Cardieri / Master's dissertation - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação
Abstract: This dissertation investigates the localization problem in wireless sensor networks. A performance analysis of localization techniques is presented, carried out both through simulation and through evaluation of the Cramér-Rao lower bound on the localization error. In both analyses, the effects of several factors on performance are assessed, related to the network topology and the propagation environment. The simulation analysis considers localization techniques based on received-signal-strength observations, while the Cramér-Rao analysis also covers techniques based on the time of arrival and the angle of arrival of the received signal. The work also evaluates how the bias of the distance estimates (used in the localization process) affects the Cramér-Rao lower bound. This bias is usually neglected in the literature, which may lead to inaccuracies in the computation of the Cramér-Rao bound under certain propagation conditions. A new expression for this bound is derived for a simple estimator, now taking the bias into account. Building on this development, a new expression for the Cramér-Rao lower bound is also derived that accounts for the effects of lognormal fading and Nakagami fading in the propagation channel / Master's level / Telecommunications and Telematics / Master in Electrical Engineering
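For background, the received-signal-strength techniques analysed here are usually built on a log-distance path-loss model with lognormal shadowing; a generic form (symbols are illustrative, not taken from the dissertation) is:

```latex
% Log-distance path loss with lognormal shadowing (generic textbook form)
\[
  P_r(d)\,[\mathrm{dBm}] \;=\; P_0 - 10\,n_p \log_{10}\!\frac{d}{d_0} + X_\sigma ,
  \qquad X_\sigma \sim \mathcal{N}\bigl(0,\sigma_{\mathrm{dB}}^2\bigr),
\]
% where P_0 is the power received at the reference distance d_0 and n_p is the
% path-loss exponent. Inverting this relation to estimate d from a measured P_r
% yields a biased distance estimate, which is the bias effect discussed above.
```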
|
43 |
Statistical Methods for Image Change Detection with Uncertainty / Lingg, Andrew James. January 2012 (has links)
No description available.
|
44 |
Statistical Analysis of Geolocation Fundamentals Using Stochastic Geometry / O'Lone, Christopher Edward. 22 January 2021 (has links)
The past two decades have seen a surge in the number of applications requiring precise positioning data. Modern cellular networks offer many services based on the user's location, such as emergency services (e.g., E911), and emerging wireless sensor networks are being used in applications spanning environmental monitoring, precision agriculture, warehouse and manufacturing logistics, and traffic monitoring, just to name a few. In these sensor networks in particular, obtaining precise positioning data of the sensors gives vital context to the measurements being reported. While the Global Positioning System (GPS) has traditionally been used to obtain this positioning data, the deployment locations of these cellular and sensor networks in GPS-constrained environments (e.g., cities, indoors), along with the need for reliable positioning, require a localization scheme that does not rely solely on GPS. This has led to localization being performed entirely by the network infrastructure itself, or by the network infrastructure aided, in part, by GPS.
In the literature, benchmarking localization performance in these networks has traditionally been done in a deterministic manner. That is, for a fixed setup of anchors (nodes with known location) and a target (a node with unknown location) a commonly used benchmark for localization error, such as the Cramer-Rao lower bound (CRLB), can be calculated for a given localization strategy, e.g., time-of-arrival (TOA), angle-of-arrival (AOA), etc. While this CRLB calculation provides excellent insight into expected localization performance, its traditional treatment as a deterministic value for a specific setup is limited.
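For concreteness, the fixed-setup TOA/ranging CRLB referred to here typically takes the following standard form (generic notation, not the dissertation's exact model):

```latex
% CRLB for 2-D ranging/TOA localization with N anchors and i.i.d. Gaussian ranging
% errors of variance sigma^2 (standard result, generic notation)
\[
  \mathbf{J}(\mathbf{x}) \;=\; \frac{1}{\sigma^2}\sum_{i=1}^{N}\mathbf{u}_i\mathbf{u}_i^{\mathsf{T}},
  \qquad
  \mathbf{u}_i \;=\; \frac{\mathbf{x}-\mathbf{a}_i}{\lVert \mathbf{x}-\mathbf{a}_i\rVert},
  \qquad
  \mathbb{E}\bigl[\lVert \hat{\mathbf{x}}-\mathbf{x}\rVert^{2}\bigr]
  \;\ge\; \operatorname{tr}\!\bigl(\mathbf{J}(\mathbf{x})^{-1}\bigr),
\]
% where x is the target position and a_i the anchor positions; the bound depends
% entirely on the particular anchor-target geometry, which is what motivates
% treating it as a random quantity below.
```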
Rather than trying to gain insight into a specific setup, network designers are more often interested in aggregate localization error statistics within the network as a whole. Questions such as: "What percentage of the time is localization error less than x meters in the network?" are commonplace. In order to answer these types of questions, network designers often turn to simulations; however, these come with many drawbacks, such as lengthy execution times and the inability to provide fundamental insights due to their inherent "black box" nature. Thus, this dissertation presents the first analytical solution with which to answer these questions. By leveraging tools from stochastic geometry, anchor positions and potential target positions can be modeled by Poisson point processes (PPPs). This allows for the CRLB of position error to be characterized over all setups of anchor positions and potential target positions realizable within the network. This leads to a distribution of the CRLB, which can completely characterize localization error experienced by a target within the network, and can consequently be used to answer questions regarding network-wide localization performance. The particular CRLB distribution derived in this dissertation is for fourth-generation (4G) and fifth-generation (5G) sub-6GHz networks employing a TOA localization strategy.
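The dissertation's contribution is the analytical characterisation of this distribution; purely as a numerical illustration of the quantity involved, a brute-force Monte Carlo over PPP-distributed anchors might look like the sketch below (density, noise level and threshold are assumed values):

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 5e-5        # anchor density (anchors per square metre) -- assumed value
R = 500.0         # radius of the observation disc (m)        -- assumed value
sigma = 3.0       # ranging-error standard deviation (m)      -- assumed value
x_thresh = 10.0   # "error less than x meters" threshold      -- assumed value

peb = []
for _ in range(2000):
    n = rng.poisson(lam * np.pi * R**2)        # PPP: Poisson count, uniform locations
    if n < 3:
        continue                               # need >= 3 anchors for a 2-D TOA fix
    r = R * np.sqrt(rng.uniform(size=n))
    th = rng.uniform(0.0, 2.0 * np.pi, size=n)
    anchors = np.column_stack((r * np.cos(th), r * np.sin(th)))
    u = -anchors / np.linalg.norm(anchors, axis=1, keepdims=True)   # target at origin
    fim = u.T @ u / sigma**2                   # ranging-only Fisher information
    peb.append(np.sqrt(np.trace(np.linalg.inv(fim))))               # position error bound

peb = np.sort(peb)
print("P(PEB < %.0f m) ~ %.3f" % (x_thresh, np.searchsorted(peb, x_thresh) / len(peb)))
```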
Recognizing the tremendous potential that stochastic geometry has in gaining new insight into localization, this dissertation continues by further exploring the union of these two fields. First, the concept of localizability, which is the probability that a mobile is able to obtain an unambiguous position estimate, is explored in a 5G, millimeter wave (mm-wave) framework. In this framework, unambiguous single-anchor localization is possible with either a line-of-sight (LOS) path between the anchor and mobile or, if blocked, then via at least two non-line-of-sight (NLOS) paths. Thus, for a single anchor-mobile pair in a 5G, mm-wave network, this dissertation derives the mobile's localizability over all environmental realizations this anchor-mobile pair is likely to experience in the network. This is done by: (1) utilizing the Boolean model from stochastic geometry, which statistically characterizes the random positions, sizes, and orientations of reflectors (e.g., buildings) in the environment, (2) considering the availability of first-order (i.e., single-bounce) reflections as well as the LOS path, and (3) considering the possibility that reflectors can either facilitate or block reflections. In addition to the derivation of the mobile's localizability, this analysis also reveals that unambiguous localization, via reflected NLOS signals exclusively, is a relatively small contributor to the mobile's overall localizability.
Lastly, using this first-order reflection framework developed under the Boolean model, this dissertation then statistically characterizes the NLOS bias present on range measurements. This NLOS bias is a common phenomenon that arises when trying to measure the distance between two nodes via the time delay of a transmitted signal. If the LOS path is blocked, then the extra distance that the signal must travel to the receiver, in excess of the LOS path, is termed the NLOS bias. Due to the random nature of the propagation environment, the NLOS bias is a random variable, and as such, its distribution is sought. As before, assuming NLOS propagation is due to first-order reflections, and that reflectors can either facilitate or block reflections, the distribution of the path length (i.e., absolute time delay) of the first-arriving multipath component (MPC) is derived. This result is then used to obtain the first NLOS bias distribution in the localization literature that is based on the absolute delay of the first-arriving MPC for outdoor time-of-flight (TOF) range measurements. This distribution is shown to match exceptionally well with commonly assumed gamma and exponential NLOS bias models in the literature, which were only attained previously through heuristic or indirect methods. Finally, the flexibility of this analytical framework is utilized by further deriving the angle-of-arrival (AOA) distribution of the first-arriving MPC at the mobile. This distribution gives novel insight into how environmental obstacles affect the AOA and also represents the first AOA distribution, of any kind, derived under the Boolean model.
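For readers unfamiliar with the term, the NLOS bias on a range measurement can be written generically as follows; the exponential and gamma forms are the "commonly assumed" models referred to above (parameters are placeholders, not values from the dissertation):

```latex
% Generic range measurement with NLOS bias (placeholder notation)
\[
  r \;=\; d + b + n, \qquad b \ge 0, \quad n \sim \mathcal{N}(0,\sigma^2),
\]
% where d is the true (LOS) distance and b is the NLOS bias; the commonly assumed
% models take b ~ Exp(lambda) or b ~ Gamma(k, theta), whereas the dissertation
% derives the distribution of b from the Boolean reflector model.
```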
In summary, this dissertation uses the analytical tools offered by stochastic geometry to gain new insights into localization metrics by performing analyses over the entire ensemble of infrastructure or environmental realizations that a target is likely to experience in a network. / Doctor of Philosophy / The past two decades have seen a surge in the number of applications requiring precise positioning data. Modern cellular networks offer many services based on the user's location, such as emergency services (e.g., E911), and emerging wireless sensor networks are being used in applications spanning environmental monitoring, precision agriculture, warehouse and manufacturing logistics, and traffic monitoring, just to name a few. In these sensor networks in particular, obtaining precise positioning data of the sensors gives vital context to the measurements being reported. While the Global Positioning System (GPS) has traditionally been used to obtain this positioning data, the deployment locations of these cellular and sensor networks in GPS-constrained environments (e.g., cities, indoors), along with the need for reliable positioning, require a localization scheme that does not rely solely on GPS. This has led to localization being performed entirely by the network infrastructure itself, or by the network infrastructure aided, in part, by GPS.
When speaking in terms of localization, the network infrastructure consists of what are called anchors, which are simply nodes (points) with a known location. These can be base stations, WiFi access points, or designated sensor nodes, depending on the network. In trying to determine the position of a target (i.e., a user, or a mobile), various measurements can be made between this target and the anchor nodes in close proximity. These measurements are typically distance (range) measurements or angle (bearing) measurements. Localization algorithms then process these measurements to obtain an estimate of the target position.
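As a toy illustration of this processing step, a least-squares fit of a 2-D position to a handful of noisy range measurements could look like the following sketch (anchor positions and noise level are invented for the example):

```python
import numpy as np
from scipy.optimize import least_squares

# hypothetical anchors (known positions, metres) and a true target position
anchors = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
target = np.array([30.0, 60.0])

# noisy range measurements from the target to each anchor (1 m ranging noise assumed)
rng = np.random.default_rng(1)
ranges = np.linalg.norm(anchors - target, axis=1) + rng.normal(0.0, 1.0, len(anchors))

# nonlinear least-squares position estimate from the ranges
residuals = lambda x: np.linalg.norm(anchors - x, axis=1) - ranges
estimate = least_squares(residuals, x0=np.array([50.0, 50.0])).x
print("estimate:", estimate, "error (m):", np.linalg.norm(estimate - target))
```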
The performance of a given localization algorithm (i.e., estimator) is typically evaluated by examining the distance, in meters, between the position estimates it produces vs. the actual (true) target position. This is called the positioning error of the estimator. There are various benchmarks that bound the best (lowest) error that these algorithms can hope to achieve; however, these benchmarks depend on the particular setup of anchors and the target. The benchmark of localization error considered in this dissertation is the Cramer-Rao lower bound (CRLB). To determine how this benchmark of localization error behaves over the entire network, all of the various setups of anchors and the target that would arise in the network must be considered. Thus, this dissertation uses a field of statistics called stochastic geometry to model all of these random placements of anchors and the target, which represent all the setups that can be experienced in the network. Under this model, the probability distribution of this localization error benchmark across the entirety of the network is then derived. This distribution allows network designers to examine localization performance in the network as a whole, rather than just for a specific setup, and allows one to obtain answers to questions such as: "What percentage of the time is localization error less than x meters in the network?"
Next, this dissertation examines a concept called localizability, which is the probability that a target can obtain a unique position estimate. Oftentimes localization algorithms can produce position estimates that congregate around different potential target positions, and thus, it is important to know when algorithms will produce estimates that congregate around a unique (single) potential target position; hence the importance of localizability. In fifth generation (5G), millimeter wave (mm-wave) networks, only one anchor is needed to produce a unique target position estimate if the line-of-sight (LOS) path between the anchor and the target is unimpeded. If the LOS path is impeded, then a unique target position can still be obtained if two or more non-line-of-sight (NLOS) paths are available. Thus, over all possible environmental realizations likely to be experienced in the network by this single anchor-mobile pair, this dissertation derives the mobile's localizability, or in this case, the probability the LOS path or at least two NLOS paths are available. This is done by utilizing another analytical tool from stochastic geometry known as the Boolean model, which statistically characterizes the random positions, sizes, and orientations of reflectors (e.g., buildings) in the environment. Under this model, considering the availability of first-order (i.e., single-bounce) reflections as well as the LOS path, and considering the possibility that reflectors can either facilitate or block reflections, the mobile's localizability is derived. This result reveals the roles that the LOS path and the NLOS paths play in obtaining a unique position estimate of the target.
Using this first-order reflection framework developed under the Boolean model, this dissertation then statistically characterizes the NLOS bias present on range measurements. This NLOS bias is a common phenomenon that arises when trying to measure the distance between two nodes via the time-of-flight (TOF) of a transmitted signal. If the LOS path is blocked, then the extra distance that the signal must travel to the receiver, in excess of the LOS path, is termed the NLOS bias. As before, assuming NLOS propagation is due to first-order reflections and that reflectors can either facilitate or block reflections, the distribution of the path length (i.e., absolute time delay) of the first-arriving multipath component (MPC) (or first-arriving "reflection path") is derived. This result is then used to obtain the first NLOS bias distribution in the localization literature that is based on the absolute delay of the first-arriving MPC for outdoor TOF range measurements. This distribution is shown to match exceptionally well with commonly assumed NLOS bias distributions in the literature, which were only attained previously through heuristic or indirect methods. Finally, the flexibility of this analytical framework is utilized by further deriving the angle-of-arrival (AOA) distribution of the first-arriving MPC at the mobile. This distribution yields the probability that, for a specific angle, the first-arriving reflection path arrives at the mobile at this angle. This distribution gives novel insight into how environmental obstacles affect the AOA and also represents the first AOA distribution, of any kind, derived under the Boolean model.
In summary, this dissertation uses the analytical tools offered by stochastic geometry to gain new insights into localization metrics by performing analyses over all of the possible infrastructure or environmental realizations that a target is likely to experience in a network.
|
45 |
HANDLE CONCEPT FOR STRING TRIMMER / HANDLEKONCEPT STRING TRIMMER / Vanaja Murugesapillai, Anoop. January 2019 (has links)
String trimmers, also known as weed cutters or strimmers, are indispensable tools for professional gardeners; they help gardeners maintain lawns and farmland, and have been in use for a long time. Professionals normally use bike-handle string trimmers, while amateurs with small gardens normally use loop-handle models. The bike-handle models are more powerful and offer more comfort during long working sessions. While trimming, users steer and control the machine through the handles. The handle is the main touch point of the machine, and it needs to provide enough comfort and support the machine's functions. Cramer, a German garden tools brand owned by The Globe Group, focuses on researching and developing garden power tools for amateurs as well as professional users. For them it is important to satisfy the customer and improve the user experience of their products. This project focuses only on the handle of the string trimmer, with the aim of improving the overall user experience. It mainly deals with the ergonomic aspects of the handle for a large percentile of the population; it is very important to design a handle that professional users can use for long periods of time. Alongside comfort, this project emphasizes improving the user experience by giving the handle a modern, friendly and premium expression, both visually and physically. Even though the project develops a handle concept, it needs to align with the overall brand language.
|
46 |
Traitement d'antenne adapté aux modèles linéaires intégrant une interférence structurée. Application aux signaux mécaniques. / Array processing adapted to linear models incorporating structured interference. Application to mechanical signals. / Bouleux, Guillaume. 04 December 2007 (has links) (PDF)
This work is set in the framework of array processing applied to the linear model. In various fields, such as biomedical engineering or RADAR, the number of directions of arrival (DOAs) of interest is a reduced subset of all the directions making up the model. We therefore adopt a structured model of the form

Observation = Signal of interest + Structured interference + Noise

where the structured interference consists of a number of known or previously estimated directions of arrival. From this model, we propose two types of approaches: (1) we assume knowledge of M-S DOAs out of a total of M, and (2) we seek to estimate M DOAs sequentially.

The literature provides solutions to the problem of estimating S DOAs of interest out of a total of M. The proposed solutions use an orthogonal deflation of the noisy signal subspace. We derive a new Cramér-Rao Bound (CRB), which we call the Prior-CRB, associated with this type of model, and we show under which (very restrictive) conditions this bound is lower than the classical CRB obtained from the linear model composed of M DOAs. To free ourselves from the constraints of the orthogonal-deflation model, we propose to use an oblique deflation in place of the orthogonal one, and we construct new estimators of the DOAs of interest. Simulations show performance far better than that of the orthogonal-deflation algorithms, and we explain this performance by deriving the theoretical variances of each of the proposed estimators. Through the analysis of these variances, we show why the oblique projection is more appropriate, and we establish an ordering of the variances associated with the studied algorithms.

Here again, the sequential estimation of M DOAs is a problem of great interest. However, the solutions proposed in the literature use an orthogonal deflation to sequentially cancel the previously estimated directions via a modified MUSIC criterion. We depart from this by proposing an algorithm that weights the MUSIC pseudo-spectrum with a quadratic zero-forcing function. This approach shows far better performance than the orthogonal-deflation methods and clearly overcomes the Rayleigh resolution limit thanks to the control of the weighting function. We further show that this algorithm is efficient and that the propagation error can be cancelled by tuning a parameter of the weighting function. To best characterize the performance of this algorithm, we propose a CRB, which we call the Interfering-CRB, derived from a linear model consisting of one DOA of interest and M-1 interfering DOAs (DOAs previously estimated or still to be estimated). We show that this bound "reflects" the ZF-MUSIC algorithm well.
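For context, the baseline that the weighted and deflation-based variants described above modify is the standard MUSIC pseudo-spectrum; a minimal sketch for a uniform linear array is given below (the array geometry, scan grid and test scenario are assumptions of the example, not of the thesis):

```python
import numpy as np
from scipy.signal import find_peaks

def music_spectrum(X, n_sources, scan_deg):
    """Standard narrowband MUSIC pseudo-spectrum for a uniform linear array with
    half-wavelength spacing; X is the (n_sensors, n_snapshots) data matrix."""
    m = X.shape[0]
    Rxx = X @ X.conj().T / X.shape[1]              # sample covariance
    _, v = np.linalg.eigh(Rxx)                     # eigenvalues in ascending order
    En = v[:, : m - n_sources]                     # noise subspace
    spec = []
    for th in np.deg2rad(scan_deg):
        a = np.exp(1j * np.pi * np.arange(m) * np.sin(th))   # ULA steering vector
        spec.append(1.0 / np.real(a.conj() @ En @ En.conj().T @ a))
    return np.array(spec)

# usage sketch: two sources at -10 and 20 degrees, 8-sensor ULA, 200 snapshots
m, n, doas = 8, 200, np.deg2rad([-10.0, 20.0])
A = np.exp(1j * np.pi * np.outer(np.arange(m), np.sin(doas)))
S = (np.random.randn(2, n) + 1j * np.random.randn(2, n)) / np.sqrt(2)
X = A @ S + 0.1 * (np.random.randn(m, n) + 1j * np.random.randn(m, n))
grid = np.linspace(-90.0, 90.0, 361)
spec = music_spectrum(X, 2, grid)
peaks, _ = find_peaks(spec)
print(grid[peaks[np.argsort(spec[peaks])[-2:]]])   # two strongest peak angles (deg)
```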
|
47 |
Target Localization Methods For Frequency-only MIMO Radar / Kalkan, Yilmaz. 01 September 2012 (has links) (PDF)
This dissertation is focused on developing new target localization and target velocity estimation methods for frequency-only multi-input, multi-output (MIMO) radar systems with widely separated antennas. If the frequency resolution of the transmitted signals is sufficient, the received frequencies and the Doppler shifts alone can be used to find the position of the target.
In order to estimate the position and the velocity of the target, most multistatic radars or radar networks use multiple independent measurements from the target, such as time-of-arrival (TOA), angle-of-arrival (AOA) and frequency-of-arrival (FOA). Although frequency-based systems have many advantages, frequency-based target localization methods are scarce in the literature because their solutions involve highly non-linear equations. In this thesis, alternative low-complexity target localization and target velocity estimation methods are proposed for frequency-only systems.
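To see where the non-linearity comes from, consider the generic bistatic Doppler relation for one transmitter-receiver pair (standard notation; not the thesis's exact signal model):

```latex
% Bistatic Doppler shift for a target at position x with velocity v, stationary
% transmitter t_i and receiver r_j, carrier wavelength lambda
\[
  f_{d,ij}
  \;=\; -\frac{1}{\lambda}\,\frac{d}{dt}\Bigl(\lVert \mathbf{x}-\mathbf{t}_i\rVert
        + \lVert \mathbf{x}-\mathbf{r}_j\rVert\Bigr)
  \;=\; -\frac{1}{\lambda}\,\mathbf{v}^{\mathsf{T}}\!\left(
        \frac{\mathbf{x}-\mathbf{t}_i}{\lVert \mathbf{x}-\mathbf{t}_i\rVert}
        + \frac{\mathbf{x}-\mathbf{r}_j}{\lVert \mathbf{x}-\mathbf{r}_j\rVert}\right),
\]
% which couples the unknown position and velocity through unit-vector terms and is
% the source of the highly non-linear equations mentioned above.
```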
One of the proposed methods is able to estimate the target position and velocity based on measurements of the Doppler frequencies; moreover, the target's direction of movement can be estimated efficiently. This method is referred to as "Target Localization via Doppler Frequencies" (TLDF), and it can be used not only for radar but for all frequency-based localization systems, such as sonar or wireless sensor networks.
Besides the TLDF method, two alternative target position estimation methods are proposed. These methods are also based on the Doppler frequencies, but they require the target velocity vector to be known. They are referred to as "Target Localization via Doppler Frequencies and Target Velocity" (TLD&V) methods and can be divided into two sub-methods. The first is based on the derivatives of the Doppler frequencies and is hence called the "Derivated Doppler" (TLD&V-DD) method. The second uses the Maximum Likelihood (ML) principle with a grid search and is hence referred to as the "Sub-ML" (TLD&V-subML) method.
A more realistic signal model for ground-based, widely separated MIMO radar is formed, incorporating Swerling target fluctuations and the Doppler frequencies. The Cramer-Rao Bounds (CRB) for the target position and target velocity estimates are derived for this signal model. After the received signal is constructed, the Doppler frequencies are estimated using a DFT-based periodogram spectral estimator. The estimated Doppler frequencies are then collected at a fusion center to localize the target.
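As a minimal stand-in for the DFT-based periodogram step described here (the thesis's signal model with Swerling fluctuations is richer), a coarse Doppler-frequency estimate from a baseband snapshot might look like:

```python
import numpy as np

def estimate_doppler(rx, fs, nfft=4096):
    """Coarse single-target Doppler estimate from a baseband snapshot via the DFT
    periodogram: pick the frequency bin with the largest power."""
    P = np.abs(np.fft.fft(rx, nfft)) ** 2 / len(rx)     # periodogram
    f = np.fft.fftfreq(nfft, d=1.0 / fs)
    return f[np.argmax(P)]

# toy check (assumed numbers): one complex tone at a 1 kHz Doppler shift in noise
fs, fd, n = 100e3, 1e3, 2048
t = np.arange(n) / fs
rx = np.exp(2j * np.pi * fd * t) + 0.1 * (np.random.randn(n) + 1j * np.random.randn(n))
print(estimate_doppler(rx, fs))    # prints a value close to 1000 Hz
```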
Finally, the multiple-target localization problem is investigated for frequency-only MIMO radar and a new data association method is proposed. Using the TLDF method, the validity of this data association method is demonstrated by simulation, not only for targets moving linearly but also for maneuvering targets.
The proposed methods can localize the target and estimate its velocity with less error than the traditional isodoppler-based method, and they are also superior to the traditional method in terms of computational complexity. MATLAB simulations demonstrate these advantages of the proposed methods over the traditional method.
|
48 |
TOA-Based Robust Wireless Geolocation and Cramér-Rao Lower Bound Analysis in Harsh LOS/NLOS Environments / Yin, Feng; Fritsche, Carsten; Gustafsson, Fredrik; Zoubir, Abdelhak M. January 2013 (has links)
We consider time-of-arrival based robust geolocation in harsh line-of-sight/non-line-of-sight environments. Herein, we assume the probability density function (PDF) of the measurement error to be completely unknown and develop an iterative algorithm for robust position estimation. The iterative algorithm alternates between a PDF estimation step, which approximates the exact measurement error PDF (albeit unknown) under the current parameter estimate via adaptive kernel density estimation, and a parameter estimation step, which resolves a position estimate from the approximate log-likelihood function via a quasi-Newton method. If the convergence condition is not yet satisfied, the resolved position estimate is used to refine the PDF estimate in the next iteration. We also present the best achievable geolocation accuracy in terms of the Cramér-Rao lower bound. Experiments have been conducted in both real-world and simulated scenarios. When the number of received range measurements is large, the new proposed position estimator attains the performance of the maximum likelihood estimator (MLE). When the number of range measurements is small, it deviates from the MLE, but still outperforms several salient robust estimators in terms of geolocation accuracy, which comes at the cost of higher computational complexity.
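A stripped-down sketch of this alternation is given below; it uses a plain Gaussian KDE and BFGS as stand-ins for the paper's adaptive kernel density estimator and quasi-Newton step, and omits the paper's convergence test (function and variable names are illustrative, not from the paper):

```python
import numpy as np
from scipy.stats import gaussian_kde
from scipy.optimize import minimize

def robust_toa_position(anchors, ranges, x0, n_iter=10):
    """Sketch of the alternating scheme: (1) estimate the ranging-error PDF from the
    current residuals with a kernel density estimator, (2) re-fit the position by
    maximising the resulting approximate log-likelihood with a quasi-Newton step."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        resid = ranges - np.linalg.norm(anchors - x, axis=1)   # current error samples
        kde = gaussian_kde(resid)                              # PDF estimation step
        def neg_loglik(p):
            r = ranges - np.linalg.norm(anchors - p, axis=1)
            return -np.sum(np.log(kde(r) + 1e-12))
        x = minimize(neg_loglik, x, method="BFGS").x           # parameter estimation step
    return x
```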
|
49 |
Modélisation de signaux fortement non stationnaires à phase et à amplitude locales polynomiales / Modelling of highly non-stationary signals with locally polynomial phase and amplitude / Jabloun, Meryem. 10 July 2007 (has links) (PDF)
This research work is devoted to the design and development of a new method for estimating and reconstructing highly non-stationary signals that are non-linearly modulated in both amplitude and frequency. Estimating such signals in a very noisy context is a delicate problem, and the existing methods in the literature have several drawbacks in this case. We show how a local approach allows the model to adapt better to the nature of the local variations of the instantaneous amplitudes and frequencies, and the estimation results are consequently improved. The originality of the proposed method lies in applying well-suited parametric models to short time segments extracted from the signal under study. We propose a segmentation strategy and then a fusion strategy for the estimated segments, allowing the signal to be reconstructed over its entire duration. The proposed approach avoids the need for a global signal model requiring a high approximation order.

The efficiency of the estimation is first validated on a short time segment. The model considered locally consists of a polynomial approximation of the frequency and the amplitude, expressed in a discrete orthonormal polynomial basis that we compute. This basis reduces the coupling between the model parameters. We propose and compare two different techniques for estimating these parameters. The first is based on maximizing the likelihood function using the stochastic optimization technique of simulated annealing, while the second is based on a Bayesian approach employing MCMC methods simulated by the Metropolis-Hastings algorithm.

We show, on simulations as well as on real signals, that the proposed approach provides good estimation results compared with those of the HAF.
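The Bayesian variant relies on standard MCMC machinery; as a generic illustration (not the thesis's model-specific sampler), a random-walk Metropolis-Hastings loop looks like this:

```python
import numpy as np

def metropolis_hastings(log_post, theta0, n_samples, step=0.1, rng=None):
    """Generic random-walk Metropolis-Hastings sampler: propose a Gaussian step,
    accept it with probability min(1, posterior ratio), otherwise keep the state."""
    rng = rng or np.random.default_rng()
    theta = np.asarray(theta0, dtype=float)
    lp = log_post(theta)
    samples = []
    for _ in range(n_samples):
        prop = theta + step * rng.standard_normal(theta.shape)   # symmetric proposal
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:                 # accept/reject
            theta, lp = prop, lp_prop
        samples.append(theta.copy())
    return np.array(samples)
```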
|
50 |
Capture de mouvement par mesure de distances dans un réseau corporel hétérogène / Motion capture by distance measurements in a heterogeneous body network / Aloui, Saifeddine. 05 February 2013 (has links) (PDF)
Ambulatory motion capture is a rapidly growing topic, with applications as diverse as monitoring the elderly, assisting elite athletes, and functional rehabilitation. These applications require that the movement not be constrained by an external system, that it can be performed in various situations, including outdoors, that the equipment be light and low-cost, and that the system be truly ambulatory and free of complex calibration procedures. Currently, only systems using an exoskeleton or inertial modules (often combined with magnetic modules) allow ambulatory motion capture. The weight of an exoskeleton is considerable and it constrains the person's movements, which makes it unusable for some applications such as monitoring the elderly. Inertial technology is lighter and allows motion capture without constraints on the measurement space or on the movements performed; however, it suffers from gyroscope drift, and the system must be recalibrated.

The goal of this thesis is to develop a low-cost, real-time motion capture system for articulated chains that is truly ambulatory, requires no specific capture infrastructure, and can be used in many application domains (rehabilitation, sport, leisure, etc.). We focus in particular on intra-body measurements: all sensors are placed on the body and no external device is used. In addition to a final demonstrator validating the proposed approach, we also develop tools for dimensioning the system in terms of the technology, number and position of the sensors, and for evaluating different data fusion algorithms. To do so, we use the Cramér-Rao bound.

The subject is therefore multidisciplinary. It addresses the modelling and dimensioning of fully ambulatory hybrid systems. It studies estimation algorithms adapted to whole-body motion capture, dealing with state observability issues and taking into account the biomechanical constraints that can be applied. A suitable processing chain thus makes it possible to reconstruct the subject's posture in real time from intra-body measurements, the source also being placed on the body.
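As an illustration of how the Cramér-Rao bound can drive such dimensioning choices, a minimal helper for a Gaussian distance-measurement model is sketched below (node positions and noise level are invented for the example, not taken from the thesis):

```python
import numpy as np

def gaussian_crb(jacobian, sigma):
    """CRB (covariance lower bound) for a Gaussian measurement model y = h(x) + n,
    n ~ N(0, sigma^2 I): the inverse of the Fisher information J^T J / sigma^2."""
    J = np.atleast_2d(np.asarray(jacobian, dtype=float))
    fim = J.T @ J / sigma**2
    return np.linalg.inv(fim)

# toy example (invented numbers): one unknown node position in 2-D, observed through
# three intra-body distance measurements to nodes at known positions (metres)
nodes = np.array([[0.0, 0.0], [0.3, 0.0], [0.0, 0.4]])
x = np.array([0.2, 0.2])                                   # hypothetical node position
J = (x - nodes) / np.linalg.norm(x - nodes, axis=1, keepdims=True)  # d||x - n_i||/dx
print(np.sqrt(np.trace(gaussian_crb(J, sigma=0.01))))      # position error bound (m)
```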
|