311

Analysis of the human corneal shape with machine learning

Bouazizi, Hala 01 1900
This thesis aims to investigate the best conditions in which the anterior corneal surface of normal corneas can be preprocessed, classified and predicted using geometric modeling (GM) and machine learning (ML) techniques. The focus is on the anterior corneal surface, which is mainly responsible for the refractive power of the cornea. Dealing with preprocessing, the first study (Chapter 2) examines the conditions in which GM can best be applied to reduce the dimensionality of a dataset of corneal surfaces to be used in ML projects. Four types of geometric models of corneal shape were tested regarding their accuracy and processing time: two polynomial (P) models – Zernike polynomial (ZP) and spherical harmonic polynomial (SHP) models – and two corresponding rational function (R) models – Zernike rational function (ZR) and spherical harmonic rational function (SHR) models. SHP and ZR are both known to be more accurate than ZP as corneal shape models for the same number of coefficients, but which type of model is the most accurate between SHP and ZR? And is an SHR model, which is both an SH model and an R model, even more accurate? Also, does modeling accuracy come at the cost of processing time, an important issue for testing large datasets as required in ML projects? Focusing on low J values (J being the number of model coefficients) to address these issues in consideration of the dimensionality constraints that apply in ML tasks, it was found, based on a number of evaluation tools, that SH models were both more accurate than their Z counterparts, that R models were both more accurate than their P counterparts, and that the SH advantage was more important than the R advantage. Processing time curves as a function of J showed that P models were processed in quasilinear time, R models in polynomial time, and that Z models were faster than SH models. Therefore, while SHR was the most accurate geometric model, it was also the slowest (a problem that can partly be remedied by applying a preoptimization procedure). ZP was the fastest model and, with normal corneas, it remains an interesting option for testing and development, especially for clustering tasks due to its transparent interpretability.
The best compromise between accuracy and speed for ML preprocessing is SHP. The classification of corneal shapes with clinical parameters has a long tradition, but the visualization of their effects on the corneal shape with group maps (average elevation maps, standard deviation maps, average difference maps, etc.) is relatively recent. In the second study (Chapter 3), we constructed an atlas of average elevation maps for different clinical variables (including geometric, refraction and demographic variables) that can be instrumental in the evaluation of ML task inputs (datasets) and outputs (predictions, clusters, etc.). A large dataset of normal adult anterior corneal surface topographies recorded in the form of 101×101 elevation matrices was first preprocessed by geometric modeling to reduce the dimensionality of the dataset to a small number of Zernike coefficients found to be optimal for ML tasks. The modeled corneal surfaces of the dataset were then grouped in accordance with the clinical variables available in the dataset, transformed into categorical variables. An average elevation map was constructed for each group of corneal surfaces of each clinical variable, in their natural (non-normalized) state and in their normalized state, by averaging their modeling coefficients to get an average surface and by representing this average surface in reference to the best-fit sphere in a topographic elevation map. To validate the atlas thus constructed in both its natural and normalized modalities, ANOVA tests were conducted for each clinical variable of the dataset to verify their statistical consistency with the literature before verifying whether the corneal shape transformations displayed in the maps were themselves visually consistent. This was the case. The possible uses of such an atlas are discussed. The third study (Chapter 4) is concerned with the use of a dataset of geometrically modeled corneal surfaces in an ML task of clustering. The unsupervised classification of corneal surfaces is recent in ophthalmology. Most of the few existing studies on corneal clustering resort to feature extraction (as opposed to geometric modeling) to achieve the dimensionality reduction of the dataset. The goal is usually to automate the process of corneal diagnosis, for instance by distinguishing irregular corneal surfaces (keratoconus, Fuchs, etc.) from normal surfaces and, in some cases, by classifying irregular surfaces into subtypes. Complementary to these corneal clustering studies, the proposed study resorts mainly to geometric modeling to achieve dimensionality reduction and focuses on normal adult corneas in an attempt to identify their natural groupings, possibly in combination with feature extraction methods. Geometric modeling was based on Zernike polynomials, known for their interpretative transparency and their sufficient accuracy for normal corneas. Different types of clustering methods were evaluated in pretests to identify the most effective at producing neatly delimited clusters that are clearly interpretable. Their evaluation was based on clustering scores (to identify the best number of clusters), polar charts and scatter plots (to visualize the modeling coefficients involved in each cluster), average elevation maps and average profile cuts (to visualize the average corneal surface of each cluster), and statistical cluster comparisons on different clinical parameters (to validate the findings in reference to the clinical literature).
K-means, applied to geometrically modeled surfaces without feature extraction, produced the best clusters, both for natural and normalized surfaces. While the clusters produced with natural corneal surfaces were based on the corneal curvature, those produced with normalized surfaces were based on the corneal axis. In each case, the best number of clusters was four. The importance of curvature and axis as grouping criteria in corneal data distribution is discussed. The fourth study presented in this thesis (Chapter 5) explores the ML paradigm to verify whether accurate predictions of normal corneal shapes can be made from clinical data, and how. The database of normal adult corneal surfaces was first preprocessed by geometric modeling to reduce its dimensionality into short vectors of 12 to 20 Zernike coefficients, found to be in the range of appropriate numbers to achieve optimal predictions. The nonlinear regression methods examined from the scikit-learn library were gradient boosting, Gaussian process, kernel ridge, random forest, k-nearest neighbors, bagging, and multilayer perceptron. The predictors were based on the clinical variables available in the database, including geometric variables (best-fit sphere radius, white-to-white diameter, anterior chamber depth, corneal side), refraction variables (sphere, cylinder, axis) and demographic variables (age, gender). Each combination of a regression method, one of the 256 possible sets of clinical variables (used as predictors) and a number of Zernike coefficients (used as targets) defined a regression model in a prediction test. All the regression models were evaluated based on their mean RMSE score (establishing the distance between the predicted corneal surfaces and the raw topographic true surfaces), and the best of them were validated on 20 random shuffles of the dataset to determine more precisely which one performed best. The best model identified was further qualitatively assessed based on an atlas of predicted and true average elevation maps by which the predicted surfaces could be visually compared to the true surfaces on each of the clinical variables used as predictors. The best regression model was gradient boosting using all available clinical variables as predictors and 16 Zernike coefficients as targets. The most explanatory predictor was the best-fit sphere radius, followed by the side and refraction variables. The average elevation maps of the true anterior corneal surfaces and the predicted surfaces based on this model were remarkably similar for each clinical variable.
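A minimal sketch of the kind of prediction test described above, assuming synthetic placeholder data: clinical predictors are regressed onto a vector of Zernike coefficients with scikit-learn's gradient boosting wrapped for multi-output targets, and the error is summarized with RMSE. The variable names, data shapes and the choice to score the coefficient vectors directly (rather than reconstructed corneal surfaces, as in the thesis) are illustrative assumptions, not the author's code.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.multioutput import MultiOutputRegressor

# X: clinical predictors (e.g. best-fit sphere radius, white-to-white diameter,
# anterior chamber depth, side, sphere, cylinder, axis, age, gender), assumed to
# be numerically encoded; y: 16 Zernike coefficients per cornea. Both are random
# placeholders standing in for the thesis dataset.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 9))
y = rng.normal(size=(1000, 16))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# One gradient-boosting regressor per Zernike coefficient (vector-valued target).
model = MultiOutputRegressor(GradientBoostingRegressor(random_state=0))
model.fit(X_train, y_train)

pred = model.predict(X_test)
rmse = np.sqrt(mean_squared_error(y_test, pred))  # RMSE over coefficient vectors
print(f"RMSE on held-out coefficient vectors: {rmse:.4f}")
```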
312

The Use and Utility of Disaster Facebook Groups for Managing Communication Networks after the Camp Fire: A Case Study of the Unique Spaces for Connection for Survivors' Resilience and Recovery

Bailey C Benedict (11197701) 28 July 2021
With natural disasters occurring with greater frequency and severity, understanding how to facilitate survivors' resilience and recovery is becoming increasingly important. The Camp Fire in California, which started on November 8, 2018, was one of the most destructive wildfires in recorded history in terms of loss of life and damage to property. Aid from many types of entities (e.g., non-profits, governments, and for-profits) at various levels (e.g., local, state, and federal) was available to survivors, but perhaps the most influential source of support was Disaster Facebook Groups. In the month after the Camp Fire, around 50 Camp Fire Facebook Groups (CFFGs) were created, with over 100 CFFGs existing over the course of recovery. CFFGs are Facebook Groups with the goal of helping Camp Fire survivors. The support exchanged in CFFGs was immense and ranged from financial assistance to emotional support to community building.

This dissertation offers a mixed-method, event-specific case study of the use and utility of Disaster Facebook Groups after the Camp Fire. I examined how CFFGs offered unique and valuable spaces for connection that allowed members to engage in resilience organizing and disaster response and recovery. To conduct this case study, after engaging in observations of the Groups for over two years, I interviewed 25 administrators of CFFGs and distributed a survey in the Groups that was completed by survivors of the Camp Fire who were members of at least one CFFG during their recovery. I used network perspectives and the Communication Theory of Resilience (Buzzanell, 2010, 2019) as lenses through which administrators' and survivors' experiences with CFFGs were understood. I analyzed the two datasets using multiple and mixed methods, primarily thematic analysis and path modeling.

The analyses for this case study are presented in four studies. The first two studies provide an understanding of the spaces for connection offered by CFFGs (i.e., characterizing the CFFGs and describing the spaces for connection as both helpful and hurtful), while the last two studies examine the use and utility of CFFGs (i.e., explaining the evolution of activity in CFFGs and investigating the connectivity and social support in CFFGs).

Across the four studies, I explored three central arguments, which are the primary contributions of this dissertation. First, I advocated for incorporating network thinking into resilience theorizing. With the findings of this dissertation, I extend the Communication Theory of Resilience by offering "managing communication networks" as a refinement of its fourth process of resilience (i.e., using and maintaining communication networks). Managing communication networks addresses the active strategies people use to manage their communication networks, including expanding, contracting, maintaining, and using those networks, as they endure and overcome hardship. I also forward the argument that people's resilience is encompassed by their social networks, meaning their social network can be passively implicated by their resilience or actively involved in it, but can also initiate resilience on their behalf.

Second, I contended that Disaster Facebook Groups offer unique and valuable spaces for connection that facilitate resilience organizing and disaster response for at least five reasons.
I argued that Disaster Facebook Groups empower emergent organizing; privilege local knowledge; are convenient; lack anonymity, which adds authenticity; and allow for individualization. The findings of this dissertation provide evidence of how these reasons converged in CFFGs to enable members to exchange support that was not, and could not be, available elsewhere.

Third, I hypothesized that the use of Disaster Facebook Groups would predict the utility of Disaster Facebook Groups, resilience, and recovery for survivors. I tested two models that use different variables to represent the use and utility of CFFGs and recovery from the Camp Fire. The first model investigated how activity in CFFGs influenced the perceived helpfulness of CFFGs and how both the activity in and perceived helpfulness of CFFGs influenced the extent of recovery for survivors. I used retrospective data about five time points across survivors' first two years of recovery and found the model was most explanatory up to one month after the Fire. The second model assessed how various indicators of connectivity in CFFGs impacted received social support (i.e., informational, emotional, and tangible support), resilience, and satisfaction with recovery for survivors. The intensity of survivors' connections to CFFGs, when they joined their first CFFG, and how many Facebook Friends they gained from their participation in CFFGs were the most predictive indicators of connectivity. From the Groups, survivors reported receiving informational support more than emotional support and emotional support more than tangible support.

I put the findings of the four studies, as well as the three central arguments, in conversation with each other in the discussion section, focusing on theory, practice, and methodology. Regarding theory, I contribute network thinking to resilience theorizing: I underscore resilience as an inherently networked process; I acknowledge expanding and contracting communication networks as sub-processes of resilience that complement but are distinctly different from using and maintaining communication networks; and I forward "managing communication networks" as a refinement and extension of the Communication Theory of Resilience's fourth process of resilience (i.e., using and maintaining communication networks). Related to practice, I call for the continuation of conversations around Disaster Facebook Groups as unique and valuable spaces for connection, particularly regarding the five reasons I established. I also give suggestions for practice related to the use and utility of Disaster Facebook Groups for disaster response and recovery. For methodological considerations, I discuss the importance of forming relationships with participants when engaging in research about online communities and natural disasters, and I call into question the translation of findings about social media across platforms and the role of neoliberalism in resilience and disaster research and practice. Despite its limitations, this dissertation makes meaningful contributions to theory, practice, and methodology, while offering fruitful directions for future research. This mixed-method, event-specific case study brings attention to the influential citizen-driven disaster response in Facebook Groups after the Camp Fire.
313

Использование машинного обучения для автоматической интерпретации данных из систем веб-аналитики : магистерская диссертация / Using machine learning to automatically interpret data from web analytics systems

Цинцов, Н. В., Tsintsov, N. V. January 2023
In this work, an integrated approach to the analysis and interpretation of user data collected within a web analytics system was developed and implemented. Using machine learning and data analytics methods, the key user events that impact particular business metrics were investigated and identified. The initial stages of the project included data collection and pre-processing, followed by clustering to identify hidden relationships and structures. Various libraries for explaining machine learning models, such as Eli5 and SHAP, were used or tested. Clustering methods including K-means, DBSCAN, spectral clustering, and OPTICS were tested for the task. The classification algorithms used were logistic regression, random forest and CatBoost; a neural network was also applied. To determine feature significance, permutation importance was computed with the logistic regression, random forest and neural network models. The main result was a script that automatically collects and processes the data and determines the most significant events. The resulting tools greatly facilitate the work of analysts, helping to identify key aspects of user behavior and to build more effective interaction strategies. The results have high potential for improving business decisions and optimizing work with the user audience.
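As an illustration of the feature-significance step described above, the hedged sketch below applies scikit-learn's permutation importance to a random forest trained on per-user event counts. The event names, the synthetic data and the target metric are placeholders; the thesis's actual pipeline, data schema and models are not reproduced here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Placeholder analytics data: per-user counts of a few assumed event types and a
# binary business metric loosely driven by two of them.
event_names = ["search", "view_promo", "add_to_cart", "signup", "checkout"]
X = rng.integers(0, 20, size=(5000, len(event_names))).astype(float)
y = (X[:, 2] + X[:, 4] + rng.normal(scale=5.0, size=5000) > 25).astype(int)

X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Permutation importance: the drop in validation score when a feature is shuffled.
result = permutation_importance(clf, X_val, y_val, n_repeats=10, random_state=0)
ranking = sorted(zip(event_names, result.importances_mean), key=lambda t: -t[1])
for name, score in ranking:
    print(f"{name}: {score:.3f}")
```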
314

Stylometry: Quantifying Classic Literature For Authorship Attribution: A Machine Learning Approach

Yousif, Jacob, Scarano, Donato January 2024
Classic literature is rich linguistically, historically, and culturally, making it valuable for future studies. This project selected a set of 48 classic books and conducted a stylometric analysis on them, adopting an approach from related work: dividing the books into text segments, quantifying the resulting segments, and analyzing the quantified values to understand the books' linguistic attributes. Beyond this analysis, the project conducted classification tasks with two further objectives. First, the quantified values of the text segments were used for classification with advanced models such as LightGBM and TabNet to assess how well this approach supports authorship attribution. Second, a state-of-the-art model, RoBERTa, was applied to classification tasks using the segmented texts themselves to evaluate its performance in authorship attribution. The results uncovered the characteristics of the books to a reasonable degree. Regarding authorship attribution, the results suggest that segmenting and quantifying text using stylometric analysis and supervised machine learning algorithms is practical for such tasks, although the approach may still require further improvements to achieve optimal performance. Lastly, RoBERTa demonstrated high performance in authorship attribution tasks.
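A hedged sketch of the segment-quantification idea described above: each text segment is reduced to a few stylometric measurements (sentence length, word length, type-token ratio, punctuation rates) and fed to a LightGBM classifier for authorship attribution. The feature set, the toy segments and the integer author labels are illustrative assumptions rather than the features or data actually used in the project.

```python
import re
import numpy as np
from lightgbm import LGBMClassifier

def stylometric_features(segment: str) -> list:
    """Reduce a text segment to a few simple stylometric measurements."""
    words = re.findall(r"[A-Za-z']+", segment)
    sentences = [s for s in re.split(r"[.!?]+", segment) if s.strip()]
    n_words = max(len(words), 1)
    return [
        len(words) / max(len(sentences), 1),        # mean sentence length
        sum(len(w) for w in words) / n_words,       # mean word length
        len({w.lower() for w in words}) / n_words,  # type-token ratio
        segment.count(",") / n_words,               # comma rate
        segment.count(";") / n_words,               # semicolon rate
    ]

# Toy segments repeated to give the classifier something to fit; in the project,
# segments would come from splitting the 48 books into fixed-size chunks.
segments = ["It was the best of times, it was the worst of times.",
            "Call me Ishmael. Some years ago, never mind how long precisely."] * 25
labels = [0, 1] * 25  # 0 and 1 stand for two authors

X = np.array([stylometric_features(s) for s in segments])
y = np.array(labels)

clf = LGBMClassifier(random_state=0)
clf.fit(X, y)
print(clf.predict(X[:2]))
```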
315

Automatické třídění fotografií podle obsahu / Automatic Photography Categorization

Veľas, Martin January 2013
This thesis deals with content-based automatic photo categorization. The aim of the work is to experiment with advanced techniques of image representation and to create a classifier able to process a large image dataset with sufficient accuracy and computation speed. A traditional solution based on visual codebooks is enhanced by computing color features, soft assignment of visual words to extracted feature vectors, use of image segmentation in the process of visual codebook creation, and division of the picture into cells, which are processed separately. A linear SVM classifier with explicit data embedding is used for its efficiency. Finally, the results of experiments with the above-mentioned image categorization techniques are discussed.
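A hedged sketch of a visual-codebook pipeline along the lines described above: local descriptors are clustered into a codebook, images are encoded as soft-assigned visual-word histograms, and a linear SVM is trained on an explicit additive chi-squared feature map as a stand-in for the explicit data embedding mentioned in the abstract. All descriptors, image counts and category labels are random placeholders, and the color features, segmentation and spatial cells of the thesis are omitted.

```python
import numpy as np
from sklearn.cluster import MiniBatchKMeans
from sklearn.kernel_approximation import AdditiveChi2Sampler
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
K = 64  # codebook size

# Placeholder local descriptors (e.g. SIFT-like, 128-D) pooled from training images.
all_descriptors = rng.normal(size=(10_000, 128))
codebook = MiniBatchKMeans(n_clusters=K, random_state=0).fit(all_descriptors)

def soft_histogram(descriptors, codebook, sigma=1.0):
    """Soft-assign each descriptor to every visual word, weighted by distance."""
    d = np.linalg.norm(descriptors[:, None, :] - codebook.cluster_centers_[None], axis=2)
    w = np.exp(-d ** 2 / (2 * sigma ** 2))
    w /= w.sum(axis=1, keepdims=True)   # per-descriptor soft assignment
    h = w.sum(axis=0)
    return h / h.sum()                  # normalized bag-of-visual-words histogram

# Placeholder images: each image is a set of descriptors with a category label.
images = [rng.normal(size=(200, 128)) for _ in range(40)]
labels = rng.integers(0, 4, size=40)    # four photo categories, randomly assigned

X = np.array([soft_histogram(img, codebook) for img in images])

# Explicit chi-squared feature map plus a linear SVM, standing in for the
# "explicit data embedding" mentioned in the abstract.
embedded = AdditiveChi2Sampler().fit_transform(X)
clf = LinearSVC().fit(embedded, labels)
print(clf.predict(embedded[:5]))
```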
316

Human Pose and Action Recognition using Negative Space Analysis

Janse Van Vuuren, Michaella 12 1900
This thesis proposes a novel approach to extracting pose information from image sequences. Current state-of-the-art techniques focus exclusively on the image space occupied by the body for pose and action recognition. The method proposed here, however, focuses on the negative spaces: the areas surrounding the individual. This has resulted in the colour-coded negative space approach, an image preprocessing step that circumvents the need for complicated model fitting or template matching methods. The approach can be described as follows: negative spaces surrounding the human silhouette are extracted using horizontal and vertical scanning processes. These negative space areas are more numerous, and undergo more radical changes in shape, than the single area occupied by the figure of the person performing an action. The colour-coded negative space representation is formed using the four binary images produced by the scanning processes. Features are then extracted from the colour-coded images. These are based on the percentage of area occupied by distinct coloured regions as well as the bounding box proportions. Pose clusters are identified using feedback from an independent action set. Subsequent images are classified using a simple Euclidean distance measure. An image sequence is thus temporally segmented into its corresponding pose representations. Action recognition simply becomes the detection of a temporally ordered sequence of poses that characterises the action. The method is purely vision-based, utilising monocular images with no need for body markers or special clothing. Two datasets were constructed using several actors performing different poses and actions. Some of these actions included actors waving their arms, sitting down or kicking a leg. These actions were recorded against a monochrome background to simplify the segmentation of the actors from the background. The actions were recorded on DV cam and digitised into a database. The silhouette images from these actions were isolated and placed in a frame or bounding box. The next step was to highlight the negative spaces using a directional scanning method. This scanning method colour-codes the negative spaces of each action. What became immediately apparent is that very distinctive colour patterns formed for different actions. To emphasise the action, different colours were allocated to the negative spaces surrounding the figure. For example, the space between the legs of an actor standing in a T-pose with legs apart would be allocated yellow, while the spaces below the arms were allocated different shades of green. The space surrounding the head would be different shades of purple. During an action, when the actor moves one leg up in a kicking fashion, the yellow colour would increase. Conversely, when the actor closes his legs and puts them together, the yellow colour filling the negative space would decrease substantially. What also became apparent is that these coloured negative spaces are interdependent and influence each other during the course of an action. For example, when an actor lifts one of his legs, increasing the yellow-coded negative space, the green space between that leg and the arm decreases. This interrelationship between colours holds true for all poses and actions presented in this thesis. In terms of pose recognition, it is significant that these colour-coded negative spaces, and the way they change during an action or a movement, are substantial and instantly recognisable.
Compare, for example, seeing someone lift an arm with seeing a vast negative space change shape. In a controlled research environment, several actors were instructed to perform a number of different actions. After colour-coding the negative spaces, it became apparent that every action can be recognised by a unique colour-coded pattern. The challenge is to ascribe a numerical representation, a mathematical quantification, that extracts the essence of what is so visually apparent. The essence of pose recognition and its measurability lies in the relationship between the colours in these negative spaces and how they impact on each other during a pose or an action. The simplest way of measuring this relationship is by calculating the percentage of each colour present during an action. These calculated percentages become the basis of pose and action recognition. Plotting these percentages on a graph confirms that the essence of these different actions and poses can in fact be captured and recognised. Despite variations in these traces caused by time differences, personal appearance and mannerisms, what emerged is a clear, recognisable pattern that can be married to an action or different parts of an action. Actors might lift their left leg, some slightly higher than others, some slower than others, and these variations in terms of colour percentages would be recorded as a trace, but there would be very specific stages during the action where the traces would correspond, making the action recognisable. In conclusion, using negative space as a tool in human pose recognition and tracking presents an exciting research avenue because it is influenced less by variations such as differences in personal appearance and changes in the angle of observation. This approach is also simple and does not rely on complicated models and templates.
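A simplified, speculative sketch of the directional scanning idea: background pixels reachable from the left, right, top and bottom edges of the frame before hitting the silhouette are given four distinct codes (standing in for colours), and a pose is summarized by the percentage of the frame each coded region occupies. The exact scanning and colour-coding procedure of the thesis is not reproduced; this is only one possible reading of it.

```python
import numpy as np

def directional_negative_space(silhouette: np.ndarray) -> np.ndarray:
    """silhouette: 2-D bool array, True where the body is. Returns an int map
    where 1/2/3/4 mark background reached from the left/right/top/bottom edge."""
    occupied = silhouette.astype(bool)
    codes = np.zeros(occupied.shape, dtype=np.uint8)

    cum_lr = occupied.cumsum(axis=1)                      # body pixels seen scanning left -> right
    cum_rl = occupied[:, ::-1].cumsum(axis=1)[:, ::-1]    # scanning right -> left
    cum_tb = occupied.cumsum(axis=0)                      # scanning top -> bottom
    cum_bt = occupied[::-1, :].cumsum(axis=0)[::-1, :]    # scanning bottom -> top

    codes[cum_lr == 0] = 1                                # left of the silhouette
    codes[(cum_rl == 0) & (codes == 0)] = 2               # right of the silhouette
    codes[(cum_tb == 0) & (codes == 0)] = 3               # above it
    codes[(cum_bt == 0) & (codes == 0)] = 4               # below it (includes between the legs)
    return codes

def region_percentages(codes: np.ndarray) -> dict:
    """Percentage of the frame occupied by each coded negative-space region."""
    total = codes.size
    return {c: float((codes == c).sum()) / total for c in (1, 2, 3, 4)}

# Toy example: a vertical bar standing in for a silhouette inside an 8x8 frame.
frame = np.zeros((8, 8), dtype=bool)
frame[2:8, 3:5] = True
print(region_percentages(directional_negative_space(frame)))
```

Poses could then be compared by the Euclidean distance between their percentage vectors, in the spirit of the classification step described above.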
317

Channel Probing for an Indoor Wireless Communications Channel

Hunter, Brandon 13 March 2003
The statistics of the amplitude, time and angle of arrival of multipaths in an indoor environment are all necessary components of multipath models used to simulate the performance of spatial diversity in receive antenna configurations. The model presented by Saleh and Valenzuela, as extended by Spencer et al., includes all three of these parameters for a 7 GHz channel. A system was built to measure these multipath parameters at 2.4 GHz for multiple locations in an indoor environment. Another system was built to measure the angle of transmission for a 6 GHz channel. The addition of this parameter allows spatial diversity at the transmitter, as well as the receiver, to be simulated. The process of going from raw measurement data to discrete arrivals and then to clustered arrivals is analyzed. Many possible errors associated with discrete arrival processing are discussed along with possible solutions. Four clustering methods are compared and their relative strengths and weaknesses are pointed out. The effects that errors in the clustering process have on parameter estimation and model performance are also simulated.
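For context, the Saleh-Valenzuela model referenced above treats multipath components as clusters of rays: clusters arrive as a Poisson process, rays arrive as a faster Poisson process within each cluster, and mean power decays exponentially with both cluster delay and ray delay. The sketch below draws arrivals from such a model with made-up parameter values; it does not reproduce the statistics measured in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

LAMBDA_C = 1 / 60e-9   # cluster arrival rate (1/s), illustrative only
LAMBDA_R = 1 / 5e-9    # ray arrival rate within a cluster (1/s), illustrative only
GAMMA_C = 30e-9        # cluster power decay constant (s)
GAMMA_R = 10e-9        # ray power decay constant (s)
MAX_DELAY = 300e-9     # stop generating beyond this excess delay (s)

arrivals = []          # (absolute excess delay, amplitude) pairs
T = 0.0                # first cluster at zero excess delay
while T < MAX_DELAY:
    tau = 0.0          # first ray of each cluster at the cluster arrival time
    while T + tau < MAX_DELAY:
        # Mean power decays exponentially with cluster delay T and ray delay tau.
        mean_power = np.exp(-T / GAMMA_C) * np.exp(-tau / GAMMA_R)
        amplitude = rng.rayleigh(scale=np.sqrt(mean_power / 2))  # Rayleigh magnitude
        arrivals.append((T + tau, amplitude))
        tau += rng.exponential(1 / LAMBDA_R)   # next ray inter-arrival time
    T += rng.exponential(1 / LAMBDA_C)         # next cluster inter-arrival time

arrivals.sort()
print(f"{len(arrivals)} simulated multipath arrivals")
```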
