121

Non-reversible mathematical transforms for secure biometric face recognition

Dabbah, Mohammad A. January 2008 (has links)
As the demand for higher and more sophisticated security solutions has dramatically increased, a trustworthy and more intelligent authentication technology has to take over: biometric authentication. Although biometrics provides promising solutions, it remains a grand challenge for pattern recognition and artificial intelligence. More importantly, biometric data are themselves vulnerable and require comprehensive protection that ensures their security at every stage of the authentication procedure, including the processing stage. Without this protection, biometric authentication cannot replace traditional authentication methods. This protection, however, cannot be accomplished using conventional cryptographic methods due to the nature of biometric data, their usage and their inherent dynamic changes. The new protection method has to transform the biometric data into a secure domain where the original information cannot be reversed or retrieved. This secure domain must also be suitable for accurate authentication. In addition, due to the permanence of biometric data and the limited number of valid biometrics for each individual, the transform has to be able to generate multiple versions of the same original biometric trait. This facilitates the replacement and cancellation of any compromised transformed template with a newer one without compromising the security of the system; hence such a transform is best known as a cancellable biometric. Two cancellable face biometric transforms have been designed, implemented and analysed in this thesis: the Polynomial and Co-occurrence Mapping (PCoM) and the Randomised Radon Signatures (RRS). The PCoM transform is based on high-order polynomial function mappings and co-occurrence matrices derived from the face images. The secure template is formed by the Hadamard product of the generated matrices. A mathematical framework of two-dimensional Principal Component Analysis (2DPCA) recognition is established for accuracy performance evaluation and analysis. The RRS transform is based on the Radon Transform (RT) and random projection. The Radon Signature is generated from the parametric Radon domain of the face and mixed with the random projection of the original face image. The transform relies on the extracted signatures and the Johnson-Lindenstrauss lemma for high accuracy performance. Fisher Discriminant Analysis (FDA) is used for evaluating the accuracy performance of the transformed templates. Each of the transforms has its own security analysis, alongside a comprehensive security analysis of both. This comprehensive analysis is based on a conventional measure for the Exhaustive Search Attack (ESA) and a newly derived measure based on the lower-bound guessing entropy for the Smart Statistical Attack (SSA). This entropy measure is shown to be greater than the Shannon lower bound of the guessing entropy for the transformed templates, showing that the transforms provide greater security, while the ESA analysis demonstrates immunity against brute-force attacks. In terms of authentication performance, both transforms have either maintained or improved the accuracy of authentication. The PCoM has maintained the recognition rates for the CMU Advanced Multimedia Processing Lab (AMP) and the CMU Pose, Illumination & Expression (PIE) databases at 98.35% and 90.13% respectively, while improving the rate for the Olivetti Research Ltd (ORL) database to 97%.
The transform has achieved a maximum recognition performance improvement of 4%. Meanwhile, the RRS transform has obtained outstanding performance, achieving zero error rates for the ORL and PIE databases while improving the rate for the AMP database by 37.50%. In addition, the transform has significantly enhanced the separation of the genuine and impostor distributions, by 263.73%, 24.94% and 256.83% for the ORL, AMP and PIE databases respectively, while the overlap of these distributions has been completely eliminated for the ORL and PIE databases.
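To make the role of the random projection and the Johnson-Lindenstrauss lemma in the RRS transform concrete, the following is a minimal NumPy sketch of a key-seeded projection, under assumed dimensions and key handling; it illustrates the underlying idea only, not the thesis implementation, and omits the Radon-signature mixing step entirely.

```python
import numpy as np

def cancellable_template(face_vec, user_key, out_dim=128):
    """Project a face feature vector through a key-seeded random matrix.

    Illustrative sketch only: the RRS transform also mixes in Radon
    signatures; shown here is just the random projection whose
    distance-preserving behaviour the Johnson-Lindenstrauss lemma
    underpins.
    """
    rng = np.random.default_rng(user_key)           # key-dependent, hence revocable
    proj = rng.standard_normal((out_dim, face_vec.size))
    proj /= np.sqrt(out_dim)                        # JL scaling keeps pairwise distances in expectation
    return proj @ face_vec

# A compromised template is cancelled by issuing a new key:
face = np.random.rand(64 * 64)                      # stand-in for a 64x64 face image
t_old = cancellable_template(face, user_key=1234)
t_new = cancellable_template(face, user_key=5678)   # new template, same biometric trait
```

Because the projection is seeded by a user-specific key, multiple independent templates can be issued from one biometric trait, which is exactly the cancellability property described above.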
122

Perceptible affordances and feedforward for gestural interfaces : assessing effectiveness of gesture acquisition with unfamiliar interactions

Chueke, J. January 2016 (has links)
The move towards touch-based interfaces disrupts the established ways in which users manipulate and control graphical user interfaces. The predominant mode of interaction established by the desktop interface is to 'double-click' an icon in order to open an application, file or folder. Icons show users where to click, and their shape, colour and graphic style suggest how they respond to user action. In sharp contrast, in a touch-based interface, an action may require a user to form a gesture with a certain number of fingers, a particular movement, and in a specific place. Often, none of this is suggested in the interface. This thesis adopts the approach of research through design to address the problem of how to inform the user about which gestures are available in a given touch-based interface, how to perform each gesture, and, finally, the effect of each gesture on the underlying system. Its hypothesis is that presenting automatic, animated visual prompts that depict touch and preview gesture execution will mitigate the problems users encounter when they execute commands within unfamiliar gestural interfaces. Moreover, the thesis argues for a new framework to assess the efficiency of gestural UI designs. A significant aspect of this new framework is a rating system used to assess distinct phases within users' evaluation and execution of a gesture. In support of the thesis hypothesis, two empirical studies were conducted. The first introduces the visual prompts to train participants in unfamiliar gestures and gauges participants' interpretation of their meaning. The second study consolidates the design features that yielded lower error rates in the first study and assesses different interaction techniques, such as when to display the visual prompt. Both studies demonstrate the benefits of providing visual prompts to improve user awareness of available gestures. In addition, both studies confirm the efficiency of the rating system in identifying the most common problems users have with gestures and possible design features to mitigate such problems. The thesis contributes: 1) a gesture-and-effect model and a corresponding rating system that can be used to assess gestural user interfaces, 2) the identification of common problems users have with unfamiliar gestural interfaces and design recommendations to mitigate these problems, and 3) a novel design technique that improves user awareness of unfamiliar gestures within novel gestural interfaces.
123

Speech segmentation and speaker diarisation for transcription and translation

Sinclair, Mark January 2016 (has links)
This dissertation outlines work related to Speech Segmentation – segmenting an audio recording into regions of speech and non-speech – and Speaker Diarization – further segmenting those regions into those pertaining to homogeneous speakers. Knowing not only what was said but also who said it and when has many useful applications. As well as providing a richer level of transcription for speech, we show how such knowledge can improve Automatic Speech Recognition (ASR) system performance and can also benefit downstream Natural Language Processing (NLP) tasks such as machine translation and punctuation restoration. While segmentation and diarization may appear to be relatively simple tasks to describe, in practice we find that they are very challenging and are, in general, ill-defined problems. Therefore, we first provide a formalisation of each of the problems as the sub-division of speech within acoustic space and time. Here, we see that the task can become very difficult when we want to partition this domain into our target classes of speakers, whilst avoiding other classes that reside in the same space, such as phonemes. We present a theoretical framework for describing and discussing the tasks, as well as introducing existing state-of-the-art methods and research. Current Speaker Diarization systems are notoriously sensitive to hyper-parameters and lack robustness across datasets. Therefore, we present a method which uses a series of oracle experiments to expose the limitations of current systems and to identify the system components to which these limitations can be attributed. We also demonstrate how Diarization Error Rate (DER), the dominant error metric in the literature, is not a comprehensive or reliable indicator of overall performance or of error propagation to subsequent downstream tasks. These results inform our subsequent research. We find that, as a precursor to Speaker Diarization, the task of Speech Segmentation is a crucial first step in the system chain. Current methods typically do not account for the inherent structure of spoken discourse. We therefore explore a novel method which exploits an utterance-duration prior in order to better model the segment distribution of speech. We show how this method improves not only segmentation, but also the performance of subsequent speech recognition, machine translation and speaker diarization systems. Typical ASR transcriptions do not include punctuation, and the task of enriching transcriptions with this information is known as 'punctuation restoration'. The benefit is not only improved readability but also better compatibility with NLP systems that expect sentence-like units, such as conventional machine translation. We show how segmentation and diarization are related tasks that are able to contribute acoustic information that complements existing linguistically-based punctuation approaches. There is a growing demand for speech technology applications in the broadcast media domain. This domain presents many new challenges, including diverse noise and recording conditions. We show that the capacity of existing GMM-HMM based speech segmentation systems is limited for such scenarios and present a Deep Neural Network (DNN) based method which offers more robust speech segmentation, resulting in improved speech recognition performance for a television broadcast dataset.
Ultimately, we are able to show that speech segmentation is an inherently ill-defined problem whose solution depends heavily on the downstream task for which it is intended.
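Since DER figures prominently in the argument above, a small sketch of how the metric aggregates its three error types may help; the durations below are made-up inputs, and in practice reference and hypothesis segments must first be aligned (e.g. by NIST's md-eval scoring tool) before these quantities are known.

```python
def diarization_error_rate(false_alarm, missed, confusion, total_speech):
    """Diarization Error Rate from scored durations (seconds).

    DER = (false alarm + missed speech + speaker confusion) / total speech.
    A single figure like this can mask which error type dominates, which
    is one reason DER alone is a poor predictor of downstream impact.
    """
    return (false_alarm + missed + confusion) / total_speech

# e.g. 12 s false alarm, 30 s missed, 48 s confused over 600 s of speech:
print(diarization_error_rate(12.0, 30.0, 48.0, 600.0))  # 0.15, i.e. 15% DER
```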
124

An evaluation of the performance of an optical measurement system for the three-dimensional capture of the shape and dimensions of the human body

Orwin, Claire Nicola January 2000 (has links)
As the clothing industry moves away from traditional models of mass production, interest in customised clothing has increased. The technology to produce cost-effective customised clothing is already in place; however, the prerequisite is accurate body dimensional data. In response, image capture systems have been developed which are capable of recording a three-dimensional image of the body, from which measurements and shape information may be extracted. The use of these systems for customised clothing has, to date, been limited by issues of inaccuracy, cost and portability. To address the issue of inaccuracy, a diagnostic procedure has been developed through the performance evaluation of an image capture system. By systematically evaluating physical and instrumental parameters, the more relevant sources of potential error were identified, quantified and subsequently corrected to form a 'closed loop' experimental procedure. A systematic test procedure is therefore presented which may be universally applied to image capture systems working on the same principle. The methodology was based upon the isolation and subsequent testing of variables thought to be potential sources of error. The process therefore included altering the physical parameters of the target object in relation to the image capture system and amending the configuration and calibration settings within the system. From the evaluation, the most relevant sources of error were identified as the cosine effect, measurement point displacement, the dimensional differences between views and the influence of the operator in measurement. The test procedure proved effective both in evaluating the performance of the system under investigation and in enabling the quantification of errors. Both random and systematic errors were noted, which may be quantified or corrected to enable improved accuracy in the measured results. Recommendations have been made for improving the performance of the current image capture system; these include the integration of a cosine-effect correction algorithm and suggestions for the automation of the image alignment process. The limitations of the system, such as its reliance on manual intervention for both the measurement and stitching processes, are discussed, as is its suitability for providing dimensional information for bespoke clothing production. Recommendations are also made for the creation of an automated test procedure for testing the performance of alternative image capture systems, which involves evaluating the accuracy of object replication for both multiple and single image capture units using calibration objects that combine a range of surfaces.
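As a rough illustration of the recommended cosine-effect correction, the sketch below assumes the simple foreshortening model measured = true × cos(θ), where θ is the angle between the surface normal and the viewing direction; the correction the system actually requires may take a different form.

```python
import math

def correct_cosine_effect(measured, angle_deg):
    """Undo foreshortening of a dimension viewed at an angle.

    Assumes measured = true * cos(theta); illustrative model only.
    """
    return measured / math.cos(math.radians(angle_deg))

# A body segment read as 95.1 mm at 15 degrees off-normal:
print(correct_cosine_effect(95.1, 15.0))  # ~98.4 mm after correction
```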
125

Physiological measurement based automatic driver cognitive distraction detection

Azman, Afizan January 2013 (has links)
Vehicle safety and road safety are two important and closely related issues, and road accidents are mostly caused by driver distraction. Distractions such as eating, drinking, talking to a passenger, using an IVIS (In-Vehicle Information System) and thinking about something unrelated to driving are among the main causes of road accidents. Driver distraction can be categorized into three types: visual distraction, manual distraction and cognitive distraction. Visual distraction occurs when the driver's eyes are off the road; manual distraction occurs when the driver takes one or both hands off the steering wheel and places them on something unrelated to driving safety. Cognitive distraction, by contrast, happens when a driver's mind is not on the road. Cognitive distraction has been found to be the most dangerous of the three, because the thinking process can induce a driver to view and/or handle something unrelated to safety-relevant information while driving a vehicle. This study proposes a physiological measurement approach to detect driver cognitive distraction. Features such as lip movement, eyebrow movement, mouth movement, eye movement, gaze rotation, head rotation and blinking frequency are used for this purpose. Three sets of experiments were conducted. The first was conducted in a lab with faceLAB cameras and served as a pilot study to determine the correlation between mouth movement and eye movement during cognitive distraction. The second was conducted in a real traffic environment using faceAPI cameras to detect movement of the lips and eyebrows. The third was also conducted in a real traffic environment; however, the faceLAB and faceAPI toolkits were combined to capture more features. A reliable and stable classification algorithm, the Dynamic Bayesian Network (DBN), was used as the main algorithm for analysis. Several other algorithms, namely Support Vector Machine (SVM), Logistic Regression (LR), AdaBoost and the Static Bayesian Network (SBN), were also used for comparison. Results showed that DBN is the best algorithm for driver cognitive distraction detection. Finally, a comparison was made between the results of this study and those of other researchers. Experimental results showed that the lip and eyebrow features used in this study are strongly correlated and play a significant role in improving cognitive distraction detection.
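For a sense of how such a classifier comparison can be run, here is a hedged scikit-learn sketch over synthetic stand-in features; the DBN itself is omitted because it requires a temporal-model library, and the data, feature layout and scores are placeholders rather than the study's results.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in data: rows = time windows, columns = tracked features
# (lip movement, eyebrow movement, gaze rotation, head rotation, blink rate, ...).
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 7))
y = rng.integers(0, 2, 500)       # 1 = cognitively distracted, 0 = attentive

for name, clf in [("SVM", SVC()),
                  ("Logistic Regression", LogisticRegression(max_iter=1000)),
                  ("AdaBoost", AdaBoostClassifier())]:
    score = cross_val_score(clf, X, y, cv=5).mean()   # 5-fold cross-validation
    print(f"{name}: mean accuracy {score:.3f}")
```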
126

Automatic number plate recognition on FPGA

Zhai, Xiaojun January 2013 (has links)
Intelligent Transportation Systems (ITSs) play an important role in modern traffic management and can be divided into intelligent infrastructure systems and intelligent vehicle systems. Automatic Number Plate Recognition systems (ANPRs) are among the infrastructure systems, allowing users to track, identify and monitor moving vehicles by automatically extracting their number plates. ANPR is a well-proven technology that is widely used throughout the world by both public and commercial organisations. There is a wide variety of commercial uses for the technology, including automatic congestion charge systems, access control and the tracing of stolen cars. The fundamental requirements of an ANPR system are image capture using an ANPR camera and processing of the captured image. The image processing part, which is a computationally intensive task, includes three stages: Number Plate Localisation (NPL), Character Segmentation (CS) and Optical Character Recognition (OCR). The common hardware choice for its implementation is the high-performance workstation. However, the cost, compactness and power issues that come with such solutions motivate the search for other platforms. Recent improvements in low-power, high-performance Field Programmable Gate Arrays (FPGAs) and Digital Signal Processors (DSPs) for image processing have motivated researchers to consider them as a low-cost solution for accelerating such computationally intensive tasks. Current ANPR systems generally use a separate camera and a stand-alone computer for processing. By optimising the ANPR algorithms to take specific advantage of technical features and innovations available within new FPGAs, such as low power consumption, short development time and vast on-chip resources, it becomes possible to replace the high-performance roadside computers with small in-camera dedicated platforms. In spite of this, costs associated with the computational resources required for complex algorithms, together with limited memory, have hindered the development of embedded vision platforms. The work described in this thesis is concerned with the development of a range of image processing algorithms for NPL, CS and OCR, and of corresponding FPGA architectures. MATLAB implementations have been used as a proof of concept for the proposed algorithms prior to the hardware implementation. The proposed architectures are speed/area-efficient and have been implemented and verified using the Mentor Graphics RC240 FPGA development board equipped with a 4M-gate Xilinx Virtex-4 LX40. The proposed NPL architecture can localise a number plate in 4.7 ms whilst achieving a 97.8% localisation rate and consuming only 33% of the available area of the Virtex-4 FPGA. The proposed CS architecture can segment the characters within an NP image in 0.2-1.4 ms with a 97.7% successful segmentation rate and consumes only 11% of the Virtex-4 FPGA on-chip resources. The proposed OCR architecture can recognise a character in 0.7 ms with a 97.3% successful recognition rate and consumes only 23% of the Virtex-4 FPGA available area. In addition to the three main stages, two pre-processing stages, consisting of image binarisation, rotation and resizing, are also proposed to link these stages together. These stages consume 9% of the available FPGA on-chip resources. The overall results show that the entire ANPR system can be implemented on a single FPGA that can be placed within an ANPR camera housing to create a stand-alone unit.
The benefits are drastically improved energy efficiency and the removal of the installation and cabling costs associated with bulky PCs situated in expensive, cooled, waterproof roadside cabinets.
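As a software-level illustration of what the NPL stage does, the following OpenCV sketch finds plate candidates by vertical-edge density, morphological closing and aspect-ratio filtering; it is a stand-in under assumed thresholds, not the speed/area-optimised architecture implemented on the FPGA.

```python
import cv2

def localise_plates(bgr_image):
    """Return candidate number-plate bounding boxes (x, y, w, h).

    Sketch of a classic NPL pipeline: plates are rich in vertical edges,
    so edges are thresholded and closed horizontally into bands, then
    filtered by a plate-like aspect ratio.  Thresholds are assumptions.
    """
    grey = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    edges = cv2.Sobel(grey, cv2.CV_8U, 1, 0, ksize=3)
    _, binary = cv2.threshold(edges, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (17, 3))
    closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)   # fuse characters into a band
    contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    candidates = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if h > 0 and 2.0 < w / h < 6.0 and w > 60:               # keep plate-like shapes only
            candidates.append((x, y, w, h))
    return candidates
```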
127

Multi-scale discrete primitives recognition

Ouattara, Jean Serge Dimitri 04 December 2014 (has links)
This thesis is about discrete geometry, and particularly the recognition of multi-scale discrete primitives. We consider a multi-scale discrete primitive to be a superimposition of discrete primitives of different scales, and we propose approaches to determine the characteristics of a discrete primitive or of a part of one. First, we propose a new approach for the recognition of digital subsegments, based on properties of the order of the arithmetic remainders of the digital straight line. We establish links between the leaning points of the digital subsegment and the points with minimal and maximal arithmetic remainders on the digital straight line. Based on comparisons with existing approaches, this approach proves more efficient. Second, we present work on the recognition of digital arcs and circles by the generalized circumcenter. We study the dual of the generalized bisector and propose computing the generalized circumcenter through visibility computations in the dual space, reducing the computation time. This approach is valid both in a regular grid and in an irregular isothetic grid. Finally, we present work on digital straight line recognition by the generalized preimage. We use the notion of boundary to reduce the number of elements entering the computation of the generalized preimage, which simplifies the computation and reduces the computation time. This approach likewise applies in a regular grid as well as in an irregular isothetic grid.
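To make the remainder-based idea concrete, here is a minimal sketch of the classical criterion tying arithmetic remainders to naive digital straight lines; it illustrates the background notion the approach builds on, not the subsegment-recognition algorithm itself.

```python
def fits_naive_dsl(points, a, b):
    """Test whether integer points lie on some naive digital straight
    line of slope a/b, via the spread of their arithmetic remainders.

    The remainder of (x, y) on D(a, b) is r = a*x - b*y; the points fit
    a naive DSL of that slope iff max(r) - min(r) < max(|a|, |b|).
    Background criterion only, not the thesis algorithm.
    """
    remainders = [a * x - b * y for x, y in points]
    return max(remainders) - min(remainders) < max(abs(a), abs(b))

# Pixels of y = round(2x/5) for x = 0..9 fit slope 2/5:
pts = [(x, round(2 * x / 5)) for x in range(10)]
print(fits_naive_dsl(pts, a=2, b=5))  # True
```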
128

Non-invasive forearm subcutaneous vein depth estimation using multispectral imaging and diffuse reflectance images

Meng, Goh Chuan 22 November 2018 (has links)
The estimation of subcutaneous vein depth has been an important research topic in recent years due to its importance in optimizing the intravenous (IV) access of venipuncture. Various techniques and systems of vein visualization have been proposed to improve vein viewing, but the lack of vein depth information limits their performance in assisting IV access; thus, IV access in many cases remains dependent on the skill or experience of the clinicians. Several techniques have been proposed to estimate vein depth using diffuse reflectance, of which the optical density ratio (ODR) technique is the most complete solution. The concept of measuring vein depth using an ODR-based technique deserves to be applied in the real world due to its low cost, its non-invasive properties and the fact that it is a non-skin-contact measurement technique. Nishidate et al. [1] suggested optimum conditions for measuring vein depth and thickness using ODR, supported by experiments with a customized tissue-like agar gel phantom. However, such experiments may not be sufficient to prove its applicability to in vivo measurement, due to the lack of experiments with real data.
Therefore, this thesis work first set out to improve the model proposed by Nishidate et al. and to expand it to the in vivo estimation of vein depth on real patients. The proposed system incorporates new components such as an autonomous vein segmentation algorithm, a more accurate estimation method for melanin content (Cm) and a fully new hardware design with reliable parts. Importantly, the experiments estimate the vein depth on real patients and include a thorough comparison with ultrasound data. The experimental results show a strong Pearson correlation of 0.843 with the ultrasound data, evidence that the developed system works for the in vivo measurement of vein depth. In addition, an optimum vein filter (matched filter) is proposed for use in the imaging system to preserve the most accurate vein detection and allow the system to produce results with the least detection error. The selection of the optimum vein filter has laid an important platform from which to obtain accurate vein segmentation of a NIR image.
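To make the ODR principle concrete, the sketch below computes an optical density ratio from reflectances at two wavelength bands; the band indices and any subsequent mapping from ODR to depth are illustrative assumptions, not the calibrated model developed in the thesis.

```python
import numpy as np

def optical_density_ratio(r_vein, r_skin, band_a=0, band_b=1):
    """Optical density ratio between two wavelength bands.

    OD(lambda) = -log10(R_vein / R_skin) over the vein relative to the
    adjacent skin; the ratio OD(band_a) / OD(band_b) varies with vein
    depth.  Band choice here is an assumption for illustration.
    """
    od = -np.log10(np.asarray(r_vein, float) / np.asarray(r_skin, float))
    return od[band_a] / od[band_b]

# Reflectances over a vein vs. adjacent skin at two NIR bands:
print(optical_density_ratio([0.32, 0.45], [0.55, 0.60]))  # ODR -> depth via calibration
```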
129

Clinical gait assessment using 3D data

Khokhlova, Margarita 19 November 2018 (has links)
Clinical gait analysis is traditionally subjective, being performed by clinicians observing patients' gait. Common alternatives to such analysis are marker-based systems and ground-force-platform-based systems. However, this standard gait analysis requires specialized locomotion laboratories, expensive equipment, and lengthy setup and post-processing times. Researchers have made numerous attempts to propose a computer-vision-based alternative for clinical gait analysis. With the appearance of commercial 3D cameras, the problem of qualitative gait assessment was revisited, as researchers realized the potential of depth-sensing devices for motion analysis applications. However, despite much encouraging progress in 3D sensing technologies, their real use in clinical applications remains scarce. In this dissertation, we develop models and techniques for movement assessment using a Microsoft Kinect sensor. In particular, we study the possibility of using the different data provided by an RGBD camera for motion and posture analysis. The main contributions of this dissertation are the following. First, we carried out a literature study to identify the important gait parameters, the feasibility of different possible technical solutions and the existing gait assessment methods.
Second, we propose a 3D-point-cloud-based posture descriptor. The designed descriptor can classify static human postures from 3D data without the use of skeletonization algorithms. Third, we build an acquisition system for gait analysis based on the Kinect v2 sensor. Fourth, we propose an abnormal gait detection approach based on the skeleton data. We demonstrate that our gait analysis tool works well on a collection of custom data and on existing benchmarks. We show that our gait assessment approach advances progress in the field, is ready to be used in a gait assessment scenario and requires a minimum of equipment.
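As an illustration of skeleton-based gait analysis of the kind described, the sketch below derives coarse descriptors from Kinect-style ankle trajectories; the feature set and peak-picking are illustrative assumptions, not the abnormality detector proposed in the thesis.

```python
import numpy as np

def gait_features(ankle_left, ankle_right, fps=30.0):
    """Coarse gait descriptors from two (n_frames, 3) joint trajectories
    in metres, as produced by a Kinect v2 skeleton stream."""
    sep = np.linalg.norm(ankle_left - ankle_right, axis=1)   # inter-ankle distance per frame
    peaks = (sep[1:-1] > sep[:-2]) & (sep[1:-1] > sep[2:])   # local maxima ~ individual steps
    duration = len(sep) / fps
    return {"mean_ankle_separation_m": float(sep.mean()),
            "max_step_length_m": float(sep.max()),
            "cadence_steps_per_min": 60.0 * int(peaks.sum()) / duration}

# Synthetic walk: ankles swinging in anti-phase along the walking axis.
t = np.linspace(0, 4, 120)
zeros = np.zeros_like(t)
left = np.stack([zeros, zeros, 0.35 * np.sin(2 * np.pi * t)], axis=1)
right = np.stack([zeros, zeros, -0.35 * np.sin(2 * np.pi * t)], axis=1)
print(gait_features(left, right))
```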
130

Definition and hardware implementation of a configurable motion estimator for adaptive video compression

Elhamzi, Wajdi 04 February 2013 (has links)
No summary available.
