361

A influência dos padrões extremos de crescimento da face sobre o perfil tegumentar, analisada cefalometricamente em jovens leucodermas brasileiros / The influence of extreme facial growth patterns on the soft tissue profile, analysed cephalometrically in young white Brazilians.

Dainesi, Eduardo Alvares 16 December 1998 (has links)
A maioria dos trabalhos encontrados na literatura a respeito das alterações do perfil facial tegumentar com o passar da idade, analisa estas alterações comparando-se jovens com boa estética facial e com oclusão normal, indicando principalmente a presença ou não de um dimorfismo sexual. Entretanto, pouco se tem pesquisado sobre as alterações do perfil facial tegumentar em relação ao tipo de crescimento da face. Este estudo objetivou a determinação do comportamento do perfil facial tegumentar em relação aos padrões extremos de crescimento da face, em jovens leucodermas brasileiros, analisados cefalometricamente. A amostra constou de 38 jovens leucodermas, de ambos os sexos, sem tratamento ortodôntico ou cirúrgico prévio, com integridade dos arcos dentários e descendentes de portugueses, espanhóis ou italianos. De acordo com a proporção entre a altura facial ântero-inferior (AFAI) e a altura facial anterior total (AFAT), dividiu-se a amostra em dois grupos: um com 19 jovens, apresentando um padrão de crescimento facial vertical e outro, também com 19 jovens, apresentando um padrão horizontal. Estes jovens foram radiografados nas faixas etárias de 6, 9, 12, 15 e 18 anos, compondo 5 fases para a avaliação do comportamento do perfil tegumentar de acordo com o crescimento. As mensurações das variáveis analisadas foram obtidas pelo método computadorizado, com auxílio do programa Dentofacial Planner 7.0. Os resultados indicaram que não houve influência do tipo de crescimento facial sobre as alterações em espessura do perfil tegumentar. Houve uma grande influência do padrão vertical de crescimento sobre as alterações em altura do perfil tegumentar. Em ambos os padrões faciais, proporcionalmente, os maiores aumentos em espessura ocorreram nas regiões nasal e subnasal. Para o padrão horizontal, os maiores aumentos em altura ocorreram nas regiões labial superior e nasal e para o padrão vertical nas regiões labial superior e mentoniana. 
O padrão de crescimento horizontal exibiu uma maior variação, na maioria das medidas analisadas, dos 9 aos 12 anos de idade, enquanto o vertical exibiu uma maior variação dos 12 aos 15 anos de idade. Ambos os padrões de crescimento da face, principalmente o vertical, demonstraram uma diminuição da convexidade facial. As demais mensurações analisadas, apesar de algumas oscilações ocorridas entre as fases estudadas, não apresentaram diferenças significantes. / Although the orthodontic literature is replete with studies of facial soft tissue profile changes, only a small number have described facial soft tissue growth in relation to facial type. The purpose of this study was therefore to evaluate the relationship between extreme vertical facial growth patterns and soft tissue changes in young white Brazilian subjects, by means of serial cephalometric radiographs. The sample consisted of 38 white subjects of both genders, without previous orthodontic or surgical treatment, with intact dental arches, and of Portuguese, Spanish or Italian descent. The sample was classified into excessive and short lower anterior face height groups using the ratio of lower anterior face height to total anterior face height (LAFH/TAFH), yielding two groups of 19 subjects each. Lateral cephalometric radiographs were taken at ages 6, 9, 12, 15 and 18 years and then digitized; the data were analyzed with Dentofacial Planner 7.0 (Dentofacial Planner Software Inc., Toronto, Canada). Results indicated that soft tissue thickness changes did not differ significantly between the two extreme vertical facial growth patterns, whereas soft tissue height changes were significantly greater in the vertical pattern. Proportionally, both patterns showed their greatest thickness increases in the subnasal and nasal areas.
Vertically, short-face subjects showed greater proportional growth in the nasal and upper labial areas, while long-face subjects showed greater increases in the upper labial and chin areas. Soft tissue facial convexity decreased in both extreme facial patterns, primarily in the vertical one. The remaining measurements showed no significant differences between the two extreme vertical facial growth patterns.
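The grouping criterion used in the study above, the LAFH/TAFH ratio, is simple enough to state in code. A minimal sketch; the cutoff values below are illustrative placeholders, not the thresholds used in the thesis, which selected its two extreme groups from the tails of its own sample distribution:

```python
def growth_pattern(lafh_mm, tafh_mm, low=0.53, high=0.58):
    """Classify the facial growth pattern from the ratio of lower
    anterior face height (LAFH) to total anterior face height (TAFH).

    The cutoffs `low` and `high` are hypothetical values chosen for
    illustration only.
    """
    ratio = lafh_mm / tafh_mm
    if ratio >= high:
        return "vertical", ratio      # long lower face
    if ratio <= low:
        return "horizontal", ratio    # short lower face
    return "intermediate", ratio


# Heights in millimetres from a cephalometric tracing
pattern, ratio = growth_pattern(70.0, 115.0)
```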
362

Automatic segmentation and registration techniques for 3D face recognition / CUHK electronic theses & dissertations collection

January 2008 (has links)
A 3D range image acquired by 3D sensing explicitly represents a three-dimensional object's shape regardless of viewpoint and lighting variations, and this technology has great potential to eventually resolve the face recognition problem. An automatic 3D face recognition system consists of three stages: facial region segmentation, registration and recognition, and the success of each stage influences the system's ultimate decision. Lately, research efforts have mainly been devoted to the final recognition stage. In this thesis, our study focuses on segmentation and registration techniques, with the aim of providing a more solid foundation for future 3D face recognition research. / We first propose an automatic 3D face segmentation method, in which proportions of the facial and nose regions taken from anthropometry are used to locate those regions. We evaluate this segmentation method on the FRGC dataset and obtain a success rate as high as 98.87% on nose tip detection; compared with results reported by other researchers in the literature, our method yields the highest score. / We then propose a fully automatic registration method that handles facial expressions with high accuracy and robustness for 3D face image alignment. In our method the nose region, which is anatomically more rigid than other facial regions, is automatically located and analyzed to compute the precise location of a symmetry plane. Extensive experiments on the FRGC (V1.0 and V2.0) benchmark 3D face dataset evaluate the accuracy and robustness of the registration method. First, we compare its results with two other registration methods: one employs manually marked points on visualized face data, and the other uses a symmetry-plane analysis of the whole face region. Second, we combine the registration method with other face recognition modules and apply them in both face identification and verification scenarios. Experimental results show that our approach outperforms the other two methods; for example, a 97.55% Rank-1 identification rate and a 2.25% EER are obtained using our method for registration and PCA for matching on the FRGC V1.0 dataset. These are the highest scores reported to date using PCA on similar datasets. / Tang, Xinmin. / Source: Dissertation Abstracts International, Volume: 70-06, Section: B, page: 3616. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2008. / Includes bibliographical references (leaves 109-117). / Abstracts in English and Chinese.
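The two figures quoted above, Rank-1 identification rate and EER, can be computed from similarity scores as follows. This is a generic sketch of the evaluation metrics only, not of the thesis's registration or matching code; the toy convention that probe i's true match sits in gallery column i is an assumption for illustration:

```python
import numpy as np

def rank1_rate(scores):
    """Fraction of probes whose top-scoring gallery entry is the true
    identity; scores[i, j] is the similarity of probe i to gallery j,
    with probe i's true identity at column i (toy convention)."""
    return float(np.mean(np.argmax(scores, axis=1) == np.arange(len(scores))))

def eer(genuine, impostor):
    """Equal error rate: sweep a decision threshold over all observed
    scores and return the point where the false-accept and
    false-reject rates are closest."""
    best, best_gap = 1.0, float("inf")
    for t in np.sort(np.concatenate([genuine, impostor])):
        far = np.mean(impostor >= t)  # false accept rate
        frr = np.mean(genuine < t)    # false reject rate
        if abs(far - frr) < best_gap:
            best_gap, best = abs(far - frr), (far + frr) / 2
    return float(best)
```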
363

Embedded Face Detection and Facial Expression Recognition

Zhou, Yun 30 April 2014 (has links)
Face detection has been applied in many fields such as surveillance, human-machine interaction, entertainment and health care. Two main reasons for the extensive attention paid to this research domain are: 1) the widespread need for security creates strong demand for face recognition systems, and 2) face recognition is user-friendly and fast, since it requires almost nothing of the user. The system described here is based on an ARM Cortex-A8 development board and includes porting the Linux operating system, developing device drivers, and detecting faces using Haar-like features and the Viola-Jones algorithm. The face detection system uses the AdaBoost algorithm to detect human faces in frames captured by the camera. The thesis also compares the pros and cons of several popular image processing algorithms. The facial expression recognition system involves face detection and emotion feature interpretation, and consists of an offline training part and an online test part. An active shape model (ASM) for facial feature point detection, optical flow for face tracking, and a support vector machine (SVM) for classification are applied in this research.
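The Haar-like features behind the Viola-Jones detector are plain rectangle sums evaluated over an integral image. A self-contained NumPy sketch of that building block (not the board's actual detection code, which would typically come from a library such as OpenCV):

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero top row and left column, so any
    rectangle sum costs four lookups -- the trick that lets the
    Viola-Jones detector evaluate Haar-like features cheaply."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def rect_sum(ii, r, c, h, w):
    """Sum of the h-by-w rectangle whose top-left pixel is (r, c)."""
    return int(ii[r + h, c + w] - ii[r, c + w] - ii[r + h, c] + ii[r, c])

def haar_two_rect(ii, r, c, h, w):
    """Horizontal two-rectangle Haar-like feature: left half minus
    right half (responds to vertical edges such as the nose bridge)."""
    half = w // 2
    return rect_sum(ii, r, c, h, half) - rect_sum(ii, r, c + half, h, half)
```

AdaBoost then selects and weights thousands of such features to form the cascade of boosted classifiers used at detection time.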
364

Statistical approaches for facial feature extraction and face recognition = 抽取臉孔特徵及辨認臉孔的統計學方法 / Chou qu lian kong te zheng ji bian ren lian kong de tong ji xue fang fa

January 2004 (has links)
Sin Ka Yu = 抽取臉孔特徵及辨認臉孔的統計學方法 / 冼家裕. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2004. / Includes bibliographical references (leaves 86-90). / Text in English; abstracts in English and Chinese. / Sin Ka Yu = Chou qu lian kong te zheng ji bian ren lian kong de tong ji xue fang fa / Xian Jiayu.
Contents:
Chapter 1. Introduction --- p.1
  1.1. Motivation --- p.1
  1.2. Objectives --- p.4
  1.3. Organization of the thesis --- p.4
Chapter 2. Facial Feature Extraction --- p.6
  2.1. Introduction --- p.6
  2.2. Review of Statistical Approaches --- p.8
    2.2.1. Eigenfaces --- p.8
      2.2.1.1. Eigenfeatures
    2.2.3. Singular Value Decomposition --- p.14
    2.2.4. Summary --- p.15
  2.3. Review of fiducial point localization methods --- p.16
    2.3.1. Symmetry-based Approach --- p.16
    2.3.2. Color-based Approaches --- p.17
    2.3.3. Integral Projection --- p.17
    2.3.4. Deformable Template --- p.20
  2.4. Corner-based Fiducial Point Localization --- p.22
    2.4.1. Facial Region Extraction --- p.22
    2.4.2. Corner Detection --- p.25
    2.4.3. Corner Selection --- p.27
      2.4.3.1. Mouth Corner Pair Detection --- p.27
      2.4.3.2. Iris Detection --- p.27
  2.5. Experimental Results --- p.30
  2.6. Conclusions --- p.30
  2.7. Notes on Publications --- p.30
Chapter 3. Fiducial Point Extraction with Shape Constraint --- p.32
  3.1. Introduction --- p.32
  3.2. Statistical Theory of Shape --- p.33
    3.2.1. Shape Space --- p.33
    3.2.2. Shape Distribution --- p.34
  3.3. Shape-Guided Fiducial Point Localization --- p.38
    3.3.1. Shape Constraints --- p.38
    3.3.2. Intelligent Search --- p.40
  3.4. Experimental Results --- p.40
  3.5. Conclusions --- p.42
  3.6. Notes on Publications --- p.42
Chapter 4. Statistical Pattern Recognition --- p.43
  4.1. Introduction --- p.43
  4.2. Bayes Decision Rule --- p.44
  4.3. Gaussian Maximum Probability Classifier --- p.46
  4.4. Maximum Likelihood Estimation of Mean and Covariance Matrix --- p.48
  4.5. Small Sample Size Problem --- p.50
    4.5.1. Dispersed Eigenvalues --- p.50
    4.5.2. Distorted Classification Rule --- p.55
  4.6. Review of Methods Handling the Small Sample Size Problem --- p.57
    4.6.1. Linear Discriminant Classifier --- p.57
    4.6.2. Regularized Discriminant Analysis --- p.59
    4.6.3. Leave-one-out Likelihood Method --- p.63
    4.6.4. Bayesian Leave-one-out Likelihood Method --- p.65
  4.7. Proposed Method --- p.68
    4.7.1. A New Covariance Estimator --- p.70
    4.7.2. Model Selection --- p.75
    4.7.3. The Mixture Parameter --- p.76
  4.8. Experimental Results --- p.77
    4.8.1. Implementation --- p.77
    4.8.2. Results --- p.79
  4.9. Conclusion --- p.81
  4.10. Notes on Publications --- p.82
Chapter 5. Conclusions and Future Works --- p.83
  5.1. Conclusions and Contributions --- p.83
  5.2. Future Works --- p.84
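The small-sample-size problem of Chapter 4 (Sections 4.5-4.7) arises when a covariance matrix is estimated from fewer samples than dimensions: its eigenvalues become dispersed and the matrix singular. The thesis proposes its own covariance estimator; as a generic illustration of the same idea, here is a shrinkage-style estimator that blends the sample covariance with a scaled identity. The mixture parameter alpha is a fixed illustrative value here, whereas the thesis selects its mixture parameter by criteria such as leave-one-out likelihood:

```python
import numpy as np

def shrunk_covariance(X, alpha=0.1):
    """Blend the sample covariance S of data X (n_samples x n_dims)
    with a scaled identity:

        S_alpha = (1 - alpha) * S + alpha * (tr(S) / d) * I

    Even when n_samples < n_dims leaves S singular, S_alpha stays
    positive definite, so a Gaussian classifier can invert it."""
    S = np.cov(X, rowvar=False, bias=True)
    d = S.shape[0]
    return (1 - alpha) * S + alpha * (np.trace(S) / d) * np.eye(d)
```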
365

Audio-guided video based face recognition. / CUHK electronic theses & dissertations collection

January 2006 (has links)
Face recognition is one of the most challenging computer vision research topics, since faces appear different even for the same person due to expression, pose, lighting, occlusion and many other confounding factors in real life. During the past thirty years a number of face recognition techniques have been proposed, but these methods focus exclusively on image-based face recognition, which takes a still image as input. One problem with image-based face recognition is that a pre-recorded face photo can fool a camera into accepting it as a live subject. A second problem is that image-based recognition accuracy is still too low for some practical applications compared with other high-accuracy biometric technologies. To alleviate these problems, video-based face recognition has recently been proposed. One major advantage of video-based face recognition is that it prevents fraudulent system penetration by pre-recorded facial images: the difficulty of forging a video sequence in front of a live video camera (possible, but very difficult) helps ensure that the biometric data come from the user at the time of authentication. Another key advantage is that more information is available in a video sequence than in a single image; if this additional information can be properly extracted, recognition accuracy can be further increased. / In this thesis, we develop a new video-to-video face recognition algorithm [86]. To take advantage of the large amount of information in the video sequence, while overcoming the processing speed and data size problems, we develop several new techniques, including temporal and spatial frame synchronization, multi-level subspace analysis, and multi-classifier integration for video sequence processing.
An aligned video sequence for each person is first obtained by applying temporal and spatial synchronization, which effectively establishes the face correspondence using both audio and video information; multi-level subspace analysis or multi-classifier integration is then employed for further analysis of the synchronized sequence. The method preserves all the temporal-spatial information contained in a video sequence. Near-perfect classification results are obtained on the largest available XM2VTS face video database. In addition, using a similar framework, two much-improved still-image based face recognition algorithms [93][94] are developed by incorporating a Gabor representation, a nonparametric feature extraction method, and multiple classifier integration. Extensive experiments on two well-known face databases (XM2VTS and Purdue) clearly show the superiority of the new algorithms. / by Li Zhifeng. / "March 2006." / Adviser: Xiaoou Tang. / Source: Dissertation Abstracts International, Volume: 67-11, Section: B, page: 6621. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2006. / Includes bibliographical references (p. 105-114). / Abstracts in English and Chinese.
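The multi-classifier integration step is described only at a high level in this record; one standard way to combine several matchers' outputs is normalised sum-rule fusion. A hedged sketch under that assumption (the record does not state that this exact scheme is the one used):

```python
import numpy as np

def fuse_scores(score_mats, weights=None):
    """Sum-rule fusion: min-max normalise each classifier's score
    matrix to [0, 1], then take a (weighted) average. Equal weights
    are used by default; a constant score matrix contributes zeros."""
    normed = []
    for s in score_mats:
        s = np.asarray(s, dtype=float)
        span = s.max() - s.min()
        normed.append((s - s.min()) / span if span > 0 else np.zeros_like(s))
    if weights is None:
        weights = np.full(len(normed), 1.0 / len(normed))
    return sum(w * s for w, s in zip(weights, normed))
```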
366

Facial feature extraction and its applications = 臉部特徵之擷取及其應用 / Lian bu te zheng zhi xie qu ji qi ying yong

January 2001 (has links)
Lau Chun Man. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2001. / Includes bibliographical references (leaves 173-177). / Text in English; abstracts in English and Chinese.
Contents:
Acknowledgement --- p.ii
Abstract --- p.iii
Contents --- p.vi
List of Tables --- p.x
List of Figures --- p.xi
Notations --- p.xv
Chapter 1. Introduction --- p.1
  1.1. Facial features --- p.1
    1.1.1. Face region --- p.2
    1.1.2. Contours and locations of facial organs --- p.2
    1.1.3. Fiducial points --- p.3
    1.1.4. Features from Principal Components Analysis --- p.5
    1.1.5. Relationships between facial features --- p.7
  1.2. Facial feature extraction --- p.8
    1.2.1. Extraction of contours and locations of facial organs --- p.9
    1.2.2. Extraction of fiducial points --- p.9
  1.3. Face recognition --- p.10
  1.4. Face animation --- p.11
  1.5. Thesis outline --- p.12
Chapter 2. Extraction of contours and locations of facial organs --- p.13
  2.1. Introduction --- p.13
  2.2. Deformable template model --- p.21
    2.2.1. Introduction --- p.21
    2.2.2. Segmentation of facial organs --- p.21
    2.2.3. Estimation of iris location --- p.22
    2.2.4. Eye template model --- p.23
    2.2.5. Eye contour extraction --- p.24
    2.2.6. Experimental results --- p.26
  2.3. Integral projection method --- p.28
    2.3.1. Introduction --- p.28
    2.3.2. Pre-processing of the intensity map --- p.28
    2.3.3. Processing of facial mask --- p.28
    2.3.4. Integral projection --- p.29
    2.3.5. Extraction of the irises --- p.30
    2.3.6. Experimental results --- p.30
  2.4. Active contour model (Snake) --- p.32
    2.4.1. Introduction --- p.32
    2.4.2. Forces on active contour model --- p.33
    2.4.3. Mathematical representation of Snake --- p.33
    2.4.4. Internal energy --- p.33
    2.4.5. Image energy --- p.35
    2.4.6. External energy --- p.36
    2.4.7. Energy minimization --- p.36
    2.4.8. Experimental results --- p.36
  2.5. Summary --- p.38
Chapter 3. Extraction of fiducial points --- p.39
  3.1. Introduction --- p.39
  3.2. Theory --- p.42
    3.2.1. Face region extraction --- p.42
    3.2.2. Iris detection and energy function --- p.44
    3.2.3. Extraction of fiducial points --- p.53
    3.2.4. Optimization of energy functions --- p.54
  3.3. Experimental results --- p.55
  3.4. Geometric features --- p.61
    3.4.1. Definition of geometric features --- p.61
    3.4.2. Selection of geometric features for face recognition --- p.63
    3.4.3. Discussion --- p.73
  3.5. Gabor features --- p.75
    3.5.1. Introduction --- p.75
    3.5.2. Properties of Gabor wavelets --- p.82
    3.5.3. Gabor features for face recognition --- p.85
  3.6. Summary --- p.89
Chapter 4. The use of fiducial points for face recognition --- p.90
  4.1. Introduction --- p.90
    4.1.1. Problem of face recognition --- p.92
    4.1.2. Face recognition process --- p.93
    4.1.3. Features for face recognition --- p.94
    4.1.4. Distance measure --- p.95
    4.1.5. Interpretation of recognition results --- p.96
  4.2. Face recognition by Principal Components Analysis (PCA) --- p.98
    4.2.1. Introduction --- p.98
    4.2.2. PCA recognition system overview --- p.101
    4.2.3. Face database --- p.103
    4.2.4. Experimental results and analysis --- p.103
  4.3. Face recognition by geometric features --- p.105
    4.3.1. System overview --- p.105
    4.3.2. Face database --- p.107
    4.3.3. Experimental results and analysis --- p.107
    4.3.4. Summary --- p.109
  4.4. Face recognition by Gabor features --- p.110
    4.4.1. System overview --- p.110
    4.4.2. Face database --- p.112
    4.4.3. Experimental results and analysis --- p.112
    4.4.4. Comparison of recognition rate --- p.123
    4.4.5. Summary --- p.124
  4.5. Summary --- p.125
Chapter 5. The use of fiducial points for face animation --- p.126
  5.1. Introduction --- p.126
  5.2. Wire-frame model --- p.129
    5.2.1. Wire-frame model I --- p.129
    5.2.2. Wire-frame model II --- p.132
  5.3. Construction of individualized 3-D face model --- p.133
    5.3.1. Wire-frame fitting --- p.133
    5.3.2. Texture mapping --- p.136
    5.3.3. Experimental results --- p.142
  5.4. Face definition and animation in MPEG4 --- p.144
    5.4.1. Introduction --- p.144
    5.4.2. Correspondences between fiducial points and FDPs --- p.146
    5.4.3. Automatic generation of FDPs --- p.148
    5.4.4. Generation of expressions by FAPs --- p.148
  5.5. Summary --- p.152
Chapter 6. Discussions and Conclusions --- p.153
  6.1. Discussions --- p.153
    6.1.1. Extraction of contours and locations of facial organs --- p.154
    6.1.2. Extraction of fiducial points --- p.155
    6.1.3. The use of fiducial points for face recognition --- p.156
    6.1.4. The use of fiducial points for face animation --- p.157
  6.2. Conclusions --- p.160
Appendix A. Mathematical derivation of Principal Components Analysis --- p.160
Appendix B. Face database --- p.173
Bibliography
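The PCA (eigenfaces) recognition of Section 4.2 reduces each face image to a handful of coefficients in a learned subspace. A minimal sketch of that projection using SVD, as generic eigenfaces rather than the thesis's exact implementation:

```python
import numpy as np

def pca_basis(faces, k):
    """Eigenfaces via SVD. `faces` is (n_images, n_pixels); returns
    the mean face and the top-k orthonormal principal directions."""
    mean = faces.mean(axis=0)
    _, _, vt = np.linalg.svd(faces - mean, full_matrices=False)
    return mean, vt[:k]

def project(face, mean, basis):
    """Coefficients of a face in the eigenface subspace; recognition
    then compares these k-dimensional vectors instead of raw pixels."""
    return basis @ (face - mean)
```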
367

A design framework for ISFAR (an intelligent surveillance system with face recognition).

January 2008 (has links)
Chan, Fai. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2008. / Includes bibliographical references (leaves 104-108). / Abstracts in English and Chinese.
Contents:
Chapter 1. Introduction --- p.14
  1.1. Background --- p.14
    1.1.1. Introduction to Intelligent Surveillance System (ISS) --- p.14
    1.1.2. Typical architecture of Surveillance System --- p.17
    1.1.3. Single-camera vs Multi-camera Surveillance System --- p.17
    1.1.4. Intelligent Surveillance System with Face Recognition (ISFAR) --- p.20
    1.1.5. Minimal requirements for automatic Face Recognition --- p.21
  1.2. Motivation --- p.22
  1.3. Major Contributions --- p.26
    1.3.1. A unified design framework for ISFAR --- p.26
    1.3.2. Prototyping of ISFAR (ISFARO) --- p.29
    1.3.3. Evaluation of ISFARO --- p.29
  1.4. Thesis Organization --- p.30
Chapter 2. Related Works --- p.31
  2.1. Distant Human Identification (DHID) --- p.31
  2.2. Distant Targets Identification System --- p.33
  2.3. Virtual Vision System with Camera Scheduling --- p.35
Chapter 3. A unified design framework for ISFAR --- p.37
  3.1. Camera system modeling --- p.40
    3.1.1. Stereo Triangulation (Human face location estimation) --- p.40
    3.1.2. Camera system calibration --- p.42
  3.2. Human face detection --- p.44
  3.3. Human face tracking --- p.46
  3.4. Human face correspondence --- p.50
    3.4.1. Information consistency in stereo triangulation --- p.51
    3.4.2. Proposed object correspondence algorithm --- p.52
  3.5. Human face location and velocity estimation --- p.57
  3.6. Human-Camera Synchronization --- p.58
    3.6.1. Controlling a PTZ Camera for capturing human facial images --- p.60
    3.6.2. Mathematical Formulation of the Human Face Capturing Problem --- p.61
Chapter 4. Prototyping of ISFAR (ISFARO) --- p.64
  4.1. Experiment Setup --- p.64
  4.2. Speed of the PTZ camera - AXIS 213 PTZ --- p.67
  4.3. Performance of human face detection and tracking --- p.68
  4.4. Performance of human face correspondence --- p.72
  4.5. Performance of human face location estimation --- p.74
  4.6. Stability test of the Human-Camera Synchronization model --- p.75
  4.7. Performance of ISFARO in capturing human facial images --- p.76
  4.8. System Profiling of ISFARO --- p.79
  4.9. Summary --- p.79
Chapter 5. Improvements to ISFARO --- p.80
  5.1. System Dynamics of ISFAR --- p.80
  5.2. Proposed improvements to ISFARO --- p.82
    5.2.1. Semi-automatic camera system calibration --- p.82
    5.2.2. Velocity estimation using Kalman filter --- p.83
    5.2.3. Reduction in PTZ camera delay --- p.87
    5.2.4. Compensation of image blurriness due to motion from human --- p.89
  5.3. Experiment Setup --- p.91
  5.4. Performance of human face location estimation --- p.91
  5.5. Speed of the PTZ Camera - SONY SNC RX-570 --- p.93
  5.6. Performance of human face velocity estimation --- p.95
  5.7. Performance of improved ISFARO in capturing human facial images --- p.99
Chapter 6. Conclusions --- p.101
Chapter 7. Bibliography --- p.104
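The stereo triangulation of Section 3.1.1 recovers a face's 3-D location from its positions in two camera views. For a calibrated, rectified pair the textbook depth-from-disparity form suffices; this sketch assumes that idealised geometry, whereas the thesis's camera model includes a full calibration step:

```python
def triangulate(xl, xr, y, focal_px, baseline_m):
    """Rectified-stereo triangulation: depth Z = f * B / disparity.

    xl, xr: horizontal image coordinates (pixels) of the face centre
    in the left and right cameras; y: its vertical coordinate; all
    measured from the principal point. Returns (X, Y, Z) in metres,
    in the left-camera frame."""
    disparity = xl - xr
    if disparity <= 0:
        raise ValueError("target must have positive disparity")
    Z = focal_px * baseline_m / disparity
    return (xl * Z / focal_px, y * Z / focal_px, Z)
```

The estimated (X, Y, Z) position, refreshed over time, is what the framework's Kalman-filter velocity estimate and PTZ camera scheduling build upon.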
368

Conception, réalisation et caractérisation d'un transformateur de commande / Design, realization and characterization of a drive transformer

Mahamat, Ahmat Taha 29 May 2017 (has links)
Ce travail concerne la conception, la réalisation et la caractérisation d’un transformateur de commande pour interrupteurs de puissance à grille isolée, le transformateur assurant l’isolation galvanique entre étage de commande et circuit de puissance. L’objectif du travail n’était pas de répondre à un cahier des charges précis mais de développer une nouvelle voie technologique pour la réalisation de transformateur planaire intégrable. Les principales caractéristiques d’un tel transformateur sont : - une inductance élevée (rapport inductance/surface occupée le plus grand possible) ; - des résistances séries faibles ; - un couplage capacitif entre primaire et secondaire le plus faible possible. Ces contraintes nous ont conduits à étudier un transformateur planaire à couches magnétiques dont les enroulements primaire et secondaire sont enterrés dans le matériau magnétique afin de réduire l’entrefer. La structure Face to Face a été retenue avec un décalage de 45° entre enroulements primaire et secondaire. Après une étude en simulation, chaque enroulement enterré dans un matériau ferrite a été réalisé séparément puis assemblé pour donner naissance au transformateur. De très nombreuses étapes technologiques : micro-usinage laser femtoseconde, dépôts de cuivre par pulvérisation cathodique, photolithographie, planarisation, gravure chimique... ont été mises en oeuvre. Le transformateur ainsi réalisé est constitué d’un empilement de couches magnétiques, conductrices et isolantes. Il a été caractérisé des très basses fréquences jusqu’à plusieurs dizaines de MHz. Les résultats de mesure obtenus sont proches des résultats de simulation, la bande passante du transformateur s’étendant de 20 kHz à 7 MHz. / This work concerns the design, realization and characterization of a drive transformer for insulated-gate power switches, the transformer providing galvanic isolation between the drive stage and the power circuit.
The aim of the work was not to meet a specific set of requirements but to develop a new technological route for producing an integrable planar transformer. The main characteristics sought for such a transformer are: a high inductance (the largest possible inductance-to-area ratio); low series resistances; and the smallest possible capacitive coupling between primary and secondary. These constraints led us to study a planar transformer with magnetic layers whose primary and secondary windings are buried in the magnetic material in order to reduce the air gap. A face-to-face structure was chosen, with a 45° offset between the primary and secondary windings. After a simulation study, each winding, buried in a ferrite material, was fabricated separately; the two were then assembled to form the transformer. Many technological steps were implemented: femtosecond laser micromachining, copper deposition by sputtering, photolithography, planarization, chemical etching, and more. The resulting transformer consists of a stack of magnetic, conductive and insulating layers. It was characterized from very low frequencies up to several tens of MHz. The measured results are close to the simulation results, with the transformer's bandwidth extending from 20 kHz to 7 MHz.
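A passband like the measured 20 kHz to 7 MHz is consistent with a first-order equivalent circuit in which the magnetizing inductance sets the low-frequency cut-off and the leakage inductance the high-frequency one. The sketch below uses invented component values chosen only to land near that band; they are not the thesis's measured parameters:

```python
import math

def band_edges(l_mag, l_leak, r_ohm=50.0):
    """First-order passband estimate for a transformer driven from a
    resistance r_ohm: the magnetizing inductance l_mag shunts the
    signal below f_low = R / (2*pi*L_mag), and the series leakage
    inductance l_leak rolls it off above f_high = R / (2*pi*L_leak)."""
    f_low = r_ohm / (2 * math.pi * l_mag)
    f_high = r_ohm / (2 * math.pi * l_leak)
    return f_low, f_high

# Illustrative values: 400 uH magnetizing and 1.1 uH leakage inductance
# in a 50-ohm system give roughly a 20 kHz to 7 MHz band.
f_low, f_high = band_edges(400e-6, 1.1e-6)
```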
369

Connectionist models of the perception of facial expressions of emotion

Mignault, Alain, 1962- January 1999 (has links)
No description available.
370

Older Men Working it Out: A strong face of ageing and disability

Fleming, Alfred Andrew January 2001 (has links)
This hermeneutical study interprets and describes the phenomena of ageing and living with disability. The lived experiences of 14 older men and the horizon of this researcher developed an understanding of what it is like for men to grow old and, for some, to live with the effects of a major disability. The study is grounded in the philosophical hermeneutics of Gadamer and framed in the context of embodiment, masculinity, and narrative. I conducted multiple in-depth interviews with older men aged from 67 to 83 years. Seven of the participants had experienced a stroke, and I was able to explore the phenomenon of disability with them. Through thematic and narrative analyses of the textual data, interpretations were developed that identified common meanings and understandings of the phenomena of ageing and disability. These themes and narratives reveal that the men's understandings are at odds with conventional negative views of ageing and disability. These older men are 'alive and kicking'; they voice counternarratives to the dominant construction of ageing as decline and weakness, and have succeeded in remaking the lifeworld after stroke. Overall, I have come to understand an overarching meaning of older men 'working it out' as illustrative of a strong face of ageing and disability. Older men seek out opportunities to participate actively in community life and, despite the challenges of ageing and disability, lead significant and meaningful lives. These findings challenge and extend our limited understandings of men's experiences of ageing and living with disability. This interpretation offers gendered directions for policy development, clinical practice, and future research.
