301

Projection of High-Dimensional Genome-Wide Expression on SOM Transcriptome Landscapes

Nikoghosyan, Maria, Loeffler-Wirth, Henry, Davidavyan, Suren, Binder, Hans, Arakelyan, Arsen 23 January 2024
Self-organizing map (SOM) portrayal has been proven to be a powerful approach for the analysis of transcriptomic, genomic, epigenetic, single-cell, and pathway-level data as well as for “multi-omic” integrative analyses. However, the SOM method has a major disadvantage: it requires retraining on the entire dataset once a new sample is added, which can be resource- and time-demanding. It also shifts the gene landscape, thus complicating the interpretation and comparison of results. To overcome this issue, we have developed two transfer-learning approaches that allow the SOM space to be extended with new samples while preserving its intrinsic structure. The extension SOM (exSOM) approach adds secondary data to the existing SOM space by “meta-gene adaptation”, while supervised SOM portrayal (supSOM) adds a support vector machine regression model on top of the original SOM algorithm to “predict” the portrait of a new sample. Both methods have been shown to accurately combine existing and new data. With simulated data, exSOM outperforms supSOM in accuracy, while supSOM significantly reduces the computing time and outperforms exSOM on that criterion. Analysis of real datasets demonstrated the validity of the projection methods, with independent datasets mapped onto the existing SOM space. Moreover, both methods handle well the projection of samples with characteristics that were not present in the training datasets.
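A minimal sketch of the extension idea, under stated assumptions (the function, the 40x40 grid and the random data are illustrative, not taken from the paper): once a SOM has been trained on the reference cohort, each gene keeps its best-matching unit, and the portrait of a new sample is simply the per-unit average of its expression values, so the existing landscape is never retrained.

```python
import numpy as np

def project_sample(expression: np.ndarray, bmu: np.ndarray, n_units: int) -> np.ndarray:
    """expression: (n_genes,) profile of the new sample.
    bmu: (n_genes,) pre-trained best-matching unit index per gene.
    Returns the (n_units,) meta-gene portrait of the new sample."""
    portrait = np.zeros(n_units)
    counts = np.zeros(n_units)
    np.add.at(portrait, bmu, expression)   # sum expression per meta-gene
    np.add.at(counts, bmu, 1)              # genes mapped to each unit
    counts[counts == 0] = 1                # empty units stay at zero
    return portrait / counts

# Hypothetical usage: 10,000 genes mapped onto a 40x40 SOM (1,600 meta-genes).
rng = np.random.default_rng(0)
portrait = project_sample(rng.normal(size=10_000),
                          rng.integers(0, 1600, size=10_000), 1600)
```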
302

Alternative Solution to Catastrophical Forgetting on FewShot Instance Segmentation

Álvarez Fernández Del Vallado, Juan January 2021
Video instance segmentation is a rapidly growing research area within the computer vision field. Models for segmentation require already-annotated data, which can be a daunting task to produce when starting from scratch. Although there are some publicly available datasets for image instance segmentation, they are limited to the application they target. This work proposes a new approach to training an instance segmentation model using transfer learning, notably reducing the need for annotated data. Transferring knowledge from domain A to domain B can result in catastrophic forgetting, leaving an algorithm unable to generalize properly and to retain the knowledge previously acquired in the initial domain. This problem is studied and a solution is proposed based on data transformations applied precisely during the transfer of knowledge to the target domain, following the empirical research method and using publicly available video instance segmentation datasets as resources for the experiments. The conclusions show that there is a relationship between the data transformations and the ability to generalize across both domains.
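The abstract does not spell out the exact transformations, so the following is only a hedged illustration of the general recipe, with the model choice and hyperparameters as assumptions: keep the pre-trained backbone frozen to limit forgetting, and apply augmentations while fine-tuning on the target domain (in a real segmentation pipeline, geometric transforms must also be applied to the masks).

```python
import torch
import torchvision.transforms as T
from torchvision.models.segmentation import deeplabv3_resnet50

# Augmentations applied at transfer time (geometric ones would be mirrored
# onto the target masks in a real training loop).
transfer_transforms = T.Compose([
    T.RandomResizedCrop(512, scale=(0.5, 1.0)),
    T.ColorJitter(brightness=0.3, contrast=0.3),
    T.RandomHorizontalFlip(),
])

# DeepLabv3 stands in for the instance-segmentation model studied in the thesis.
model = deeplabv3_resnet50(weights="DEFAULT")     # knowledge from domain A
for p in model.backbone.parameters():             # freeze shared features
    p.requires_grad = False
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4)
```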
303

Real-time hand segmentation using deep learning / Hand-segmentering i realtid som använder djupinlärning

Favia, Federico January 2021
Hand segmentation is a fundamental part of many computer vision systems aimed at gesture recognition or hand tracking. In particular, augmented reality solutions need a very accurate gesture analysis system in order to satisfy end consumers appropriately. The hand segmentation step is therefore critical. Segmentation is a well-known problem in image processing: the process of dividing a digital image into multiple regions of pixels with similar qualities. Classifying which pixels belong to the hand and which belong to the background must be done in real time and with reasonable computational complexity. While in the past mainly lightweight probabilistic and machine learning approaches were used, this work investigates the challenges of real-time hand segmentation achieved through several deep learning techniques. Is it possible to improve current state-of-the-art segmentation systems for smartphone applications? Several models are tested and compared based on accuracy and processing speed. A transfer learning-like approach guides this work, since many architectures were built only for generic semantic segmentation or for particular applications such as autonomous driving. Great effort is spent on organizing a solid and generalized dataset of hands, exploiting existing ones and data collected by ManoMotion AB. Since the primary aim was highly accurate hand segmentation, the RefineNet architecture is ultimately selected, and both quantitative and qualitative evaluations are performed, considering its advantages and analysing the issues related to computational time, which could be improved in the future.
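As a rough illustration of the accuracy-versus-speed trade-off discussed above (the lightweight model, input size and timing setup are assumptions, not the thesis configuration), a pretrained backbone with a fresh two-class hand/background head can be timed per frame like this:

```python
import time
import torch
from torchvision.models.segmentation import lraspp_mobilenet_v3_large

# Pretrained MobileNetV3 backbone, new 2-class (hand / background) head.
model = lraspp_mobilenet_v3_large(num_classes=2, weights_backbone="DEFAULT")
model.eval()

x = torch.randn(1, 3, 256, 256)                   # dummy camera frame
with torch.no_grad():
    t0 = time.perf_counter()
    logits = model(x)["out"]                      # (1, 2, 256, 256) logits
    print(f"{(time.perf_counter() - t0) * 1e3:.1f} ms per frame on CPU")
mask = logits.argmax(dim=1)                       # 0 = background, 1 = hand
```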
304

Effective estimation of battery state-of-health by virtual experiments via transfer- and meta-learning

Schmitt, Jakob, Horstkötter, Ivo, Bäker, Bernard 15 March 2024
The continuous monitoring of the state of health (SOH) of electric vehicle (EV) batteries is a problem of great research relevance due to the time-consuming battery cycling and capacity measurements that are usually required to create a SOH estimation model. Instead of the widely used approach of modelling the battery’s degradation behaviour with as little cycling effort as possible, the applied SOH monitoring approach is the first of its kind that is based solely on commonly logged battery management system (BMS) signals and does not rely on tedious capacity measurements. These signals are used to train digital battery twins, which are subsequently subjected to virtual capacity tests to estimate the SOH. In this work, transfer learning is applied to increase the data and computational efficiency of the digital battery twin training process and thus facilitate real-world application: through selective parameter initialisation it enables SOH estimation for unknown ageing states at less than a tenth of the usual training time. However, successful SOH estimation with a mean SOH deviation of 0.05% using transfer learning still requires the presence of pauses in the dataset. Meta-learning extends the idea of transfer learning in that the baseline model takes several ageing states into account simultaneously. While learning the basic battery-electric behaviour, it is forced to preserve a certain level of uncertainty at the same time, which appears crucial for the successful fine-tuning of the model parameters based on three pause-free load profiles, resulting in a mean SOH deviation of 0.85%. This optimised virtual SOH experiment framework provides the cornerstone for a scalable and robust estimation of the remaining battery capacity on a purely data-driven basis.
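A hedged sketch of the transfer step described above, with an invented toy twin model (the architecture, signal layout and hyperparameters are assumptions): the weights of a twin trained on a known ageing state initialise the twin for a new state, which is then fine-tuned briefly instead of being retrained from scratch.

```python
import copy
import torch
import torch.nn as nn

class BatteryTwin(nn.Module):
    """Toy digital twin: maps a window of logged BMS signals
    (current, voltage, temperature) to a predicted terminal voltage."""
    def __init__(self, window: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 * window, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, x):
        return self.net(x)

source_twin = BatteryTwin()               # assume trained on a known ageing state
target_twin = copy.deepcopy(source_twin)  # selective parameter initialisation
optimizer = torch.optim.Adam(target_twin.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# Short fine-tuning on the new state's logged data (dummy tensors here).
x, y = torch.randn(256, 3 * 64), torch.randn(256, 1)
for _ in range(10):
    optimizer.zero_grad()
    loss = loss_fn(target_twin(x), y)
    loss.backward()
    optimizer.step()
```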
305

Hierarchical Control of Simulated Aircraft / Hierarkisk kontroll av simulerade flygplan

Mannberg, Noah January 2023
This thesis investigates the effectiveness of employing pretraining and a discrete "control signal" bottleneck layer in a neural network trained in aircraft navigation through deep reinforcement learning. The study defines two distinct tasks to assess the efficacy of this approach. The first task is used for pretraining specific parts of the network, while the second task evaluates the potential benefits of the technique. The experimental findings indicate that the network successfully learned three main macro actions during pretraining: flying straight ahead, turning left, and turning right, and achieved high rewards on the task. However, using the pretrained network on the transfer task yielded poor performance, possibly due to the limited effective action space or deficiencies in the training process. The study discusses several potential solutions, such as incorporating multiple pretraining tasks and altering the training process, as avenues for future research. Overall, this study highlights the challenges and opportunities associated with combining pretraining with a discrete bottleneck layer in the context of simulated aircraft navigation using reinforcement learning.
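A minimal sketch of a discrete control-signal bottleneck of the kind described, under stated assumptions (layer sizes are placeholders, and the straight-through Gumbel-softmax is one way to keep a discrete layer trainable; the thesis may discretise differently): the policy is forced to route everything through one of three macro actions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BottleneckPolicy(nn.Module):
    """Observation -> 3-way discrete 'control signal' -> low-level action."""
    def __init__(self, obs_dim: int = 12, act_dim: int = 4):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                                     nn.Linear(64, 3))   # straight / left / right
        self.controller = nn.Sequential(nn.Linear(3, 64), nn.ReLU(),
                                        nn.Linear(64, act_dim))

    def forward(self, obs):
        logits = self.encoder(obs)
        # Straight-through Gumbel-softmax: one-hot in the forward pass,
        # differentiable in the backward pass.
        signal = F.gumbel_softmax(logits, tau=1.0, hard=True)
        return self.controller(signal), signal

policy = BottleneckPolicy()
action, signal = policy(torch.randn(1, 12))
```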
306

Deep Neural Network for Classification of H&E-stained Colorectal Polyps : Exploring the Pipeline of Computer-Assisted Histopathology

Brunzell, Stina January 2024
Colorectal cancer is one of the most prevalent malignancies globally, and the recently introduced digital pathology enables the use of machine learning as an aid for fast diagnostics. This project aimed to develop a deep neural network model to specifically identify and differentiate dysplasia in the epithelium of colorectal polyps, posed as a binary classification problem. The available dataset consisted of 80 whole-slide images of different H&E-stained polyp sections, which were partitioned into smaller patches annotated by a pathologist. The best-performing model was a pre-trained ResNet-18 utilising a weighted sampler, weight decay and augmentation during fine-tuning. Reaching an area under the precision-recall curve of 0.9989 and 97.41% accuracy on previously unseen data, the model's performance was judged to fall below the task's intra-observer variability and to be in line with the inter-observer variability. The final model is publicly available at https://github.com/stinabr/classification-of-colorectal-polyps.
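A hedged sketch of the training ingredients named above (pre-trained ResNet-18, weighted sampler, weight decay, augmentation); the class counts, hyperparameters, transforms and dummy data are placeholders, not the thesis values.

```python
import torch
import torchvision.transforms as T
from torch.utils.data import WeightedRandomSampler, DataLoader, TensorDataset
from torchvision.models import resnet18

# Augmentations (applied inside the patch Dataset in a real pipeline).
augment = T.Compose([T.RandomHorizontalFlip(), T.RandomVerticalFlip(),
                     T.ColorJitter(brightness=0.1, contrast=0.1)])

model = resnet18(weights="DEFAULT")                       # ImageNet pre-training
model.fc = torch.nn.Linear(model.fc.in_features, 2)       # dysplastic vs. non-dysplastic
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, weight_decay=1e-4)

# Weighted sampler: oversample the minority class so batches are roughly balanced.
labels = torch.randint(0, 2, (1000,))                     # placeholder patch labels
class_weights = 1.0 / torch.bincount(labels).float()
sampler = WeightedRandomSampler(class_weights[labels], num_samples=len(labels))
dataset = TensorDataset(torch.randn(1000, 3, 224, 224), labels)
loader = DataLoader(dataset, batch_size=32, sampler=sampler)
```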
307

[pt] APLICAÇÕES DE APRENDIZADO PROFUNDO NO MONITORAMENTO DE CULTURAS: CLASSIFICAÇÃO DE TIPO, SAÚDE E AMADURECIMENTO DE CULTURAS / [en] APPLICATIONS OF DEEP LEARNING FOR CROP MONITORING: CLASSIFICATION OF CROP TYPE, HEALTH AND MATURITY

GABRIEL LINS TENORIO 18 May 2020
Crop efficiency can be improved by continually monitoring crop state and making decisions based on that analysis. The data for analysis can be obtained through image sensors, and the monitoring process can be automated by using image recognition algorithms with different levels of complexity. Some of the most successful algorithms are related to supervised deep learning approaches which use some form of Convolutional Neural Network (CNN). In this master's dissertation, we employ supervised deep learning models for classification, regression, object detection, and semantic segmentation in crop monitoring tasks, using image samples obtained at three different levels: Satellites, Unmanned Aerial Vehicles (UAVs) and Unmanned Ground Vehicles (UGVs). Both the satellite and UAV levels involve the use of multispectral images. For the first level, we implement a CNN model based on transfer learning to classify vegetative species. We also improve the transfer learning performance with a newly proposed statistical analysis method. Next, for the second level, we implement a multi-task semantic segmentation algorithm to detect sugarcane crops and infer their state (e.g. crop health and age). The algorithm also detects the surrounding vegetation, which is relevant in the search for weeds. At the third level, we implement a Single Shot Multibox Detector algorithm to detect tomato clusters. To evaluate the clusters' state, we use two different approaches: an implementation based on image segmentation and a supervised CNN regressor capable of estimating their maturity. In order to quantify the tomato clusters in videos at different maturation stages, we employ a Region of Interest implementation and a proposed tracking system which uses temporal information. For all three levels, we present solutions and results that outperform state-of-the-art baselines.
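As one hedged illustration of the third level (tomato cluster detection), a generic single-shot detector can be run and thresholded as below; the COCO-pretrained ssd300_vgg16 and the 0.5 confidence threshold are stand-ins for the fine-tuned detector described in the dissertation.

```python
import torch
from torchvision.models.detection import ssd300_vgg16

detector = ssd300_vgg16(weights="DEFAULT")   # COCO-pretrained stand-in
detector.eval()

frame = torch.rand(3, 300, 300)              # dummy RGB frame, values in [0, 1]
with torch.no_grad():
    pred = detector([frame])[0]              # dict with 'boxes', 'labels', 'scores'

keep = pred["scores"] > 0.5                  # confidence threshold
boxes = pred["boxes"][keep]                  # candidate cluster bounding boxes
```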
308

BERTie Bott’s Every Flavor Labels : A Tasty Guide to Developing a Semantic Role Labeling Model for Galician

Bruton, Micaella January 2023
For the vast majority of languages, Natural Language Processing (NLP) tools are either absent entirely or leave much to be desired in their final performance. Despite having nearly 4 million speakers, one such low-resource language is Galician. In an effort to expand available NLP resources, this project sought to construct a dataset for Semantic Role Labeling (SRL) and produce a baseline for future research to use in comparisons. SRL is a task which has shown success in amplifying the final output of various NLP systems, including Machine Translation and other interactive language models. This project was successful in that regard and produced 24 SRL models and two SRL datasets, one Galician and one Spanish. mBERT and XLM-R were chosen as the baseline architectures; additional models were first pre-trained on the SRL task in a language other than the target to measure the effects of transfer learning. Scores are reported on a scale of 0.0-1.0. The best-performing Galician SRL model achieved an f1 score of 0.74, introducing a baseline for future Galician SRL systems. The best-performing Spanish SRL model achieved an f1 score of 0.83, outperforming the baseline set by the 2009 CoNLL Shared Task by 0.025. A pre-processing method, verbal indexing, was also introduced, which allowed for increased performance in the SRL parsing of highly complex sentences; the effects were amplified in scenarios where the model was both pre-trained and fine-tuned on datasets utilizing the method, but were still visible even when it was only used during fine-tuning.
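A hedged sketch of the fine-tuning setup (mBERT used as a token classifier over SRL labels); the label count, the marker tokens used to flag the target verb as a stand-in for the "verbal indexing" idea, and the example sentence are illustrative assumptions, not the thesis's exact preprocessing.

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification

MODEL = "bert-base-multilingual-cased"          # mBERT baseline architecture
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForTokenClassification.from_pretrained(MODEL, num_labels=20)

# Toy verb marking: wrap the predicate so the model knows which verb's
# arguments to label (the thesis's verbal indexing may differ).
tokens = ["O", "neno", "[V]", "come", "[/V]", "a", "mazá"]
enc = tokenizer(tokens, is_split_into_words=True, return_tensors="pt")
logits = model(**enc).logits                    # (1, seq_len, 20) role scores
```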
309

Diffusion Tensor Imaging Analysis for Subconcussive Trauma in Football and Convolutional Neural Network-Based Image Quality Control That Does Not Require a Big Dataset

Ikbeom Jang 14 May 2019
Diffusion Tensor Imaging (DTI) is a magnetic resonance imaging (MRI)-based technique that has frequently been used for the identification of brain biomarkers of neurodevelopmental and neurodegenerative disorders because of its ability to assess the structural organization of brain tissue. In this work, I present (1) preclinical findings of a longitudinal DTI study that investigated asymptomatic high school football athletes who experienced repetitive head impact and (2) an automated pipeline for assessing the quality of DTI images that uses a convolutional neural network (CNN) and transfer learning. The first section addresses the effects of repetitive subconcussive head trauma on the white matter of adolescent brains. Significant concerns exist regarding sub-concussive injury in football, since many studies have reported that repetitive blows to the head may change the microstructure of white matter. This is more problematic in youth-aged athletes whose white matter is still developing. Using DTI and head impact monitoring sensors, regions of significantly altered white matter were identified, and within-season effects of impact exposure were characterized by identifying the volume of regions showing significant changes for each individual. The second section presents a novel pipeline for DTI quality control (QC). The complex nature and long acquisition time associated with DTI make it susceptible to artifacts that often result in inferior diagnostic image quality. We propose an automated QC algorithm based on a deep convolutional neural network (DCNN). The adaptation of transfer learning makes it possible to train a DCNN with a relatively small dataset in a short time. The QC algorithm detects not only motion- or gradient-related artifacts, but also various erroneous acquisitions, including images with regional signal loss or those that have been incorrectly imaged or reconstructed.
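A hedged sketch of the small-dataset transfer-learning recipe described (an ImageNet-pretrained CNN with only the final layer retrained for a binary accept/reject decision); the ResNet-18 backbone, input handling and dummy data are assumptions, and real DTI volumes would first be sliced and normalised.

```python
import torch
from torchvision.models import resnet18

model = resnet18(weights="DEFAULT")                    # ImageNet-pretrained backbone
for p in model.parameters():                           # freeze all pretrained layers
    p.requires_grad = False
model.fc = torch.nn.Linear(model.fc.in_features, 2)    # accept / reject head, trainable

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
slices = torch.randn(8, 3, 224, 224)                   # placeholder DTI slices (channel-replicated)
labels = torch.randint(0, 2, (8,))
loss = torch.nn.functional.cross_entropy(model(slices), labels)
```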
310

Reconnaissance de postures humaines par fusion de la silhouette et de l'ombre dans l'infrarouge / Human posture recognition by fusing silhouette and shadow in the infrared

Gouiaa, Rafik 01 1900
Human posture recognition (HPR) from video sequences is one of the major active research areas in computer vision. It is one step of the global process of human activity recognition (HAR) for behaviour analysis. Many HPR application systems have been developed, including video surveillance, human-machine interaction, and video retrieval. Generally, applications related to HPR follow one of two main approaches: single camera or multi-camera. Despite the interesting performance achieved by multi-camera systems, their complexity and the huge amount of information to be processed greatly limit their widespread use for HPR. The main goal of this thesis is to simplify the multi-camera system by replacing a camera with a light source. In fact, a light source can be seen as a virtual camera which generates a cast-shadow image representing the silhouette of the person that blocks the light. Our system consists of a single camera and one or more infrared light sources (invisible to the eye). Despite some technical difficulties in cast-shadow segmentation and cast-shadow deformation caused by walls and furniture, several advantages can be achieved by using our system. Indeed, we avoid the synchronization and calibration problems of multiple cameras and reduce the cost of the system and the amount of processed data by replacing a camera with one light source. We introduce two different approaches to automatically recognize human postures. The first approach directly combines the person's silhouette and cast-shadow information and uses a 2D silhouette descriptor to extract discriminative features useful for HPR. The second approach is inspired by the shape-from-silhouette technique: it reconstructs the visual hull of the posture using a set of cast-shadow silhouettes and extracts informative features through a 3D shape descriptor. Using these approaches, our goal is to prove the usefulness of combining the person's silhouette and cast-shadow information for recognizing elementary human postures (stand, bend, crouch, fall, ...). The proposed system can be used for video surveillance of uncluttered areas such as a corridor in a seniors' residence (for example, for the detection of falls) or in a company (for security). Its low cost may allow greater use of video surveillance for the benefit of society. Scientifically, the theoretical and practical demonstration of such a system is original and offers great potential for video surveillance.
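A hedged sketch of the 2D approach under stated assumptions: Hu-moment shape features from the silhouette and cast-shadow masks are concatenated and fed to an ordinary classifier; the descriptor, the SVM and the random placeholder data are illustrative choices, not the thesis's exact pipeline.

```python
import numpy as np
import cv2
from sklearn.svm import SVC

def shape_features(mask: np.ndarray) -> np.ndarray:
    """7 Hu moments of a binary mask (silhouette or cast shadow)."""
    return cv2.HuMoments(cv2.moments(mask.astype(np.uint8))).ravel()

def posture_features(silhouette: np.ndarray, shadow: np.ndarray) -> np.ndarray:
    # Fuse the two 2D cues into a single feature vector.
    return np.concatenate([shape_features(silhouette), shape_features(shadow)])

# Placeholder training data: one feature vector per labelled frame.
rng = np.random.default_rng(0)
X = np.stack([posture_features(rng.integers(0, 2, (120, 160)),
                               rng.integers(0, 2, (120, 160))) for _ in range(40)])
y = rng.integers(0, 4, 40)        # 0 = stand, 1 = bend, 2 = crouch, 3 = fall
clf = SVC(kernel="rbf").fit(X, y)
```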
