321

Deep Learning Models for Context-Aware Object Detection

Arefiyan Khalilabad, Seyyed Mostafa 15 September 2017
In this thesis, we present ContextNet, a novel general object detection framework for incorporating context cues into a detection pipeline. Current deep learning methods for object detection exploit state-of-the-art image recognition networks to classify a given region of interest (ROI) into predefined classes and regress a bounding box around it, without using any information about the corresponding scene. ContextNet is based on the intuitive idea that cues about the general scene (e.g., kitchen or library) change the priors on the presence or absence of certain object classes. We provide a general means of integrating this notion into the decision process for a given ROI by running a network pretrained on scene recognition datasets in parallel with a pretrained network that extracts object-level features for the ROI. Through comprehensive experiments on PASCAL VOC 2007, we demonstrate the effectiveness of our design choices: the resulting system outperforms the baseline on most object classes and reaches 57.5 mAP (mean Average Precision) on the PASCAL VOC 2007 test set, compared with 55.6 mAP for the baseline. / MS / The object detection problem is to find objects of interest in a given image and draw labelled boxes around them. With the emergence of deep learning in recent years, current object detection methods rely on deep learning technologies, with the detection process based solely on features extracted from several thousand regions of the given image. We propose a novel framework for incorporating scene information into the detection process. For example, if we know an image was taken in a kitchen, the probability of seeing a cow or an airplane decreases while the probability of observing plates and people increases. Our new detection network uses this intuition to improve detection accuracy. Through extensive experiments, we show that the proposed method outperforms the baseline for almost all object types.
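A minimal sketch of the two-stream fusion this abstract describes might look as follows in PyTorch; the backbones, feature dimensions, and fusion-by-concatenation head are illustrative assumptions, not the thesis's actual ContextNet architecture.

```python
# Illustrative sketch of a two-stream context-fusion classifier (PyTorch).
# Backbone choices, feature sizes, and class count are assumptions for
# demonstration; they are not taken from the thesis.
import torch
import torch.nn as nn
import torchvision.models as models

class ContextFusionHead(nn.Module):
    """Classify an ROI using object features fused with scene features."""
    def __init__(self, num_classes=21):  # 20 VOC classes + background
        super().__init__()
        # Object stream: features for the cropped ROI.
        self.object_stream = models.resnet18(weights=None)
        self.object_stream.fc = nn.Identity()   # expose 512-d features
        # Scene stream: features for the whole image (e.g., a network
        # pretrained on a scene recognition dataset such as Places).
        self.scene_stream = models.resnet18(weights=None)
        self.scene_stream.fc = nn.Identity()
        # Fused classifier over concatenated features.
        self.classifier = nn.Linear(512 + 512, num_classes)

    def forward(self, roi_crop, full_image):
        obj_feat = self.object_stream(roi_crop)     # (N, 512)
        scene_feat = self.scene_stream(full_image)  # (N, 512)
        fused = torch.cat([obj_feat, scene_feat], dim=1)
        return self.classifier(fused)               # (N, num_classes)

head = ContextFusionHead()
logits = head(torch.randn(2, 3, 224, 224), torch.randn(2, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 21])
```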
322

Automatisierte Erkennung anatomischer Strukturen und Dissektionsebenen im Rahmen der roboterassistierten anterioren Rektumresektion mittels Künstlicher Intelligenz / Automated recognition of anatomical structures and dissection planes in robot-assisted anterior rectal resection using artificial intelligence

Carstens, Matthias 09 July 2024
As the third most common cancer and the second most common cause of cancer death, colorectal carcinoma (CRC) plays a major role in interdisciplinary oncological therapy. In about 50% of patients, the CRC is located in the rectum. Curative treatment consists of surgical removal of the rectum together with the regional lymph nodes. To date, no clinical or oncological advantages of robot-assisted rectal resection over the conventional laparoscopic approach have been demonstrated. In this work, a machine learning (artificial intelligence, AI) algorithm was trained to automatically identify certain critical anatomical structures and dissection planes. The goal is to establish an assistance function that helps the surgeon spare autonomic nerves and blood vessels, which could improve the oncological outcome. A total of 29 anterior rectal resections were included, each divided into 5 surgical phases (peritoneal incision, vascular dissection, medial mobilization, lateral mobilization, mesorectal excision). Around 500–2,500 frames per phase were extracted from the surgical videos, and selected structures were semantically segmented. Leave-one-out cross-validation was used to validate the algorithm. A Mask R-CNN-based deep learning algorithm served as the machine learning method. To evaluate the predictions, the object detection metrics intersection over union (IoU), precision, recall, F1, and specificity were computed. Good IoU values were achieved for instrument detection (IoU up to 0.82 ± 0.26), for Gerota's fascia (IoU: 0.74 ± 0.03) and the mesocolon (IoU: 0.65 ± 0.05) during medial mobilization, for the abdominal wall (IoU: 0.78 ± 0.04) and fat (IoU: 0.64 ± 0.10) during lateral mobilization, and for the peritoneum incised at the first cut (IoU: 0.69 ± 0.22). Less precise automatic detection was measured for the mesorectal fascia (IoU: 0.28 ± 0.08), the mesorectum (IoU: 0.45 ± 0.08), the colon and small intestine (IoU: 0.46 ± 0.09 and 0.33 ± 0.24, respectively), and the inferior mesenteric vein (IoU: 0.25 ± 0.17). Insufficient values were obtained for the actual dissection lines, the seminal vesicles, and the inferior mesenteric artery, with average IoU values ranging from below 0.01 to 0.16. Moreover, the artificial neural network usually either recognized a structure quite well or not at all; intermediate individual scores were rare. In summary, these results show that an AI is able to recognize anatomical structures in laparoscopic footage of such a complex operation. Small or highly variable structures such as the seminal vesicles, blood vessels, or the mesorectal fascia are particularly difficult for the AI to identify. It can be assumed that the predictions could be improved with a larger and more diverse training dataset. For structures such as dissection lines, which lack any real visual distinction from other structures in the image, other image regions could be relevant for overlaying a guidance line. A future implementation of this method in the operating room as a navigation function for the surgeon would therefore be possible.
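The intersection-over-union metric reported throughout this evaluation can be computed per structure as in the following minimal sketch, which assumes binary segmentation masks given as NumPy arrays.

```python
# Minimal sketch of the intersection-over-union (IoU) metric used in the
# evaluation, assuming binary masks (prediction vs. annotation) as NumPy arrays.
import numpy as np

def mask_iou(pred: np.ndarray, target: np.ndarray) -> float:
    """IoU between two binary segmentation masks of equal shape."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    if union == 0:          # structure absent in both: define IoU as 1.0
        return 1.0
    return intersection / union

# Toy example: two overlapping square "structures" on a 100x100 frame.
pred = np.zeros((100, 100)); pred[20:60, 20:60] = 1
gt = np.zeros((100, 100)); gt[30:70, 30:70] = 1
print(f"IoU = {mask_iou(pred, gt):.2f}")  # 0.39
```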
323

Detection of Rail Clip with YOLO on Raspberry Pi

Shahi, Jonathan January 2024
In a modern world where artificial intelligence (AI) is becoming increasingly integrated into our daily lives, one of the most fundamental and essential skills for an AI is to learn and process information, especially through object detection. Many algorithms could be used for this task, but our main focus is on "You Only Look Once", also known as the YOLO algorithm. This study dives into the use of YOLO within embedded systems, specifically for detecting train-related objects on a Raspberry Pi. The aim is to overcome the limitations in processing power and memory typical of small-scale computing platforms like the Raspberry Pi, while maintaining high detection accuracy, fast processing time, and low energy consumption. This is achieved by training the YOLO model with different image resolutions and hyperparameter settings, then running inference so that the energy consumption can be calculated. The results indicate that while lower resolutions yield lower accuracy, they significantly reduce the computational demands on the Raspberry Pi, making it a viable solution for real-time applications in environments where power availability is limited.
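A resolution sweep of the kind the study describes might be scripted as below, assuming the Ultralytics YOLO toolchain; the dataset config, model variant, and file names are hypothetical placeholders, not the thesis's actual setup.

```python
# Hypothetical sketch of training YOLO at several input resolutions and
# timing inference (a proxy for per-frame energy cost on an edge device).
import time
from ultralytics import YOLO

for imgsz in (640, 416, 320, 224):
    model = YOLO("yolov8n.pt")                       # nano model for edge devices
    model.train(data="rail_clips.yaml",              # hypothetical dataset config
                imgsz=imgsz, epochs=50, batch=16)
    # Time a single inference at the same resolution.
    start = time.perf_counter()
    results = model.predict("test_frame.jpg", imgsz=imgsz)
    elapsed = time.perf_counter() - start
    print(f"imgsz={imgsz}: {elapsed * 1000:.1f} ms/frame")
```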
324

Toward a Computational Historiography of Alchemy: Challenges and Obstacles of Object Detection for Historical Illustrations of Mining, Metallurgy and Distillation in 16th–17th Century Print

Lang, Sarah, Liebl, Bernhard, Burghardt, Manuel 04 July 2024
This study explores the use of modern computer vision methods for object detection in historical images extracted from 16th–17th century printed books containing illustrations of distillation, mining, metallurgy, and alchemical apparatus. We found that the transfer of knowledge from contemporary photographic data to historical etchings proves less effective than anticipated, revealing limitations in current methods such as visual feature descriptors, pixel segmentation, representation learning, and object detection with YOLOv8. These findings highlight the stylistic disparities between modern images and early print illustrations, suggesting new research directions for historical image analysis.
325

Carried baggage detection and recognition in video surveillance with foreground segmentation

Tzanidou, Giounona January 2014
Security cameras installed in public spaces or in private organizations continuously record video data with the aim of detecting and preventing crime. For that reason, video content analysis applications, whether for real-time (i.e., analytic) or post-event (i.e., forensic) analysis, have gained considerable interest in recent years. The primary focus of this thesis is on two key aspects of video analysis: reliable moving-object segmentation, and carried-object detection and identification. A novel moving-object segmentation scheme by background subtraction is presented, relying on a background model based on multi-directional gradient and phase congruency. As a post-processing step, the detected foreground contours are refined by classifying edge segments as belonging to either the foreground or the background. A contour completion technique based on anisotropic diffusion is also introduced to this area for the first time. The proposed method targets cast-shadow removal, invariance to gradual illumination change, and closed-contour extraction. A state-of-the-art carried-object detection method is employed as a benchmark algorithm. This method analyses silhouettes by comparing human temporal templates with unencumbered human models. The implementation is improved by automatically estimating the pedestrian's viewing direction, and the algorithm is extended with a carried-luggage identification module. Because the temporal template is a frequency template whose information is not sufficient on its own, a colour temporal template is introduced. The standard steps of the benchmark algorithm are approached from a different perspective, extended with colour information, resulting in more accurate carried-object segmentation. The experiments conducted in this research show that the proposed closed foreground segmentation technique attains all the aforementioned goals. The incremental improvements applied to the carried-object detection algorithm reveal the full potential of the scheme, and the experiments demonstrate the ability of the proposed algorithm to surpass the state-of-the-art method.
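For orientation, the generic foreground-segmentation step that this work improves upon can be illustrated with a standard OpenCV baseline (MOG2); this is not the author's gradient- and phase-congruency-based model, and the video filename is a placeholder.

```python
# Standard OpenCV background-subtraction baseline (MOG2), shown only to
# illustrate the general foreground-segmentation step; the thesis proposes its
# own model based on multi-directional gradient and phase congruency.
import cv2

cap = cv2.VideoCapture("surveillance.avi")   # hypothetical input video
subtractor = cv2.createBackgroundSubtractorMOG2(history=500,
                                                varThreshold=16,
                                                detectShadows=True)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)           # 255 = foreground, 127 = shadow
    # Suppress detected shadows (shadow pixels are labelled 127).
    _, fg = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)
    cv2.imshow("foreground", fg)
    if cv2.waitKey(30) == 27:                # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```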
326

Visual Representations and Models: From Latent SVM to Deep Learning

Azizpour, Hossein January 2016
Two important components of a visual recognition system are the representation and the model. Both involve selecting and learning the features that are indicative for recognition and discarding those that are uninformative. This thesis proposes different techniques within two learning frameworks for representation and modeling: latent support vector machines (latent SVMs) and deep learning. First, we propose various approaches to group the positive samples into clusters of visually similar instances. Given a fixed representation, the sampled space of the positive distribution is usually structured. The proposed clustering techniques include a novel similarity measure based on exemplar learning, an approach for using additional annotation, and an augmentation of the latent SVM that automatically finds clusters whose members can be reliably distinguished from the background class. In another effort, a strongly supervised DPM is suggested to study how these models can benefit from privileged information. The extra information comes in the form of semantic part annotations (i.e., their presence and location), which are used to constrain the DPM's latent variables during or prior to the optimization of the latent SVM. Its effectiveness is demonstrated on the task of animal detection. Finally, we generalize the formulation of discriminative latent variable models, including DPMs, to incorporate a new set of latent variables representing the structure or properties of negative samples; we therefore term these negative latent variables. We show how this generalization relates to state-of-the-art techniques and helps visual recognition by explicitly searching for counter-evidence of an object's presence. Following the resurgence of deep networks, the final works of this thesis focus on deep learning in order to produce a generic representation for visual recognition. A convolutional network (ConvNet) is trained on ImageNet, a large annotated image classification dataset with ~1.3 million images. The activations at each layer of the trained ConvNet can then be treated as the representation of an input image. We show that such a representation is surprisingly effective for various recognition tasks, making it clearly superior to the handcrafted features previously used in visual recognition (such as HOG in our first works on DPMs). We further investigate how this representation can be improved for a given task, proposing various factors, applied before or after training, that can improve the efficacy of the ConvNet representation. These factors are analyzed on 16 datasets from various subfields of visual recognition.
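The idea of treating a trained ConvNet's activations as an off-the-shelf representation can be sketched as follows; the modern torchvision ResNet-50 and the layer choice are illustrative assumptions, not the networks used in the thesis.

```python
# Sketch of using a pretrained ConvNet's activations as a generic image
# representation (PyTorch/torchvision); network and layer choice are
# illustrative, not the exact setup from the thesis.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = torch.nn.Identity()   # drop classifier: output 2048-d features
model.eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

img = preprocess(Image.open("example.jpg").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    features = model(img)        # (1, 2048) feature vector
# `features` can now be fed to an SVM or linear classifier for a target task.
print(features.shape)
```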
327

Object Detection and Tracking Using Uncalibrated Cameras

Amara, Ashwini 14 May 2010
This thesis considers the problem of tracking an object in world coordinates using measurements obtained from multiple uncalibrated cameras. A general approach to tracking the location of a target involves several phases, including calibrating the cameras, detecting the object's feature points over frames, tracking the object over frames, and analyzing the object's motion and behavior. The approach here contains two stages. First, the problem of camera calibration using a calibration object is studied. This approach retrieves the camera parameters from the known 3D locations of ground data and their corresponding image coordinates. The next important part of this work is the development of an automated system to estimate the trajectory of the object in 3D from image sequences, achieved by combining, adapting, and integrating several state-of-the-art algorithms. Synthetic data based on a nearly-constant-velocity object motion model is used to evaluate the performance of the camera calibration and state estimation algorithms.
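The calibration stage, which recovers camera parameters from known 3D points and their image projections, can be illustrated with OpenCV's standard routine; the chessboard target and file names below are assumptions, whereas the thesis uses its own calibration object with known ground locations.

```python
# Illustration of camera calibration from a known object using OpenCV; a
# chessboard target and the image filenames are assumed for this sketch.
import cv2
import numpy as np
import glob

pattern = (9, 6)                                   # inner corners of the board
# 3D coordinates of the corners in the board's own frame (z = 0 plane).
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for path in glob.glob("calib_*.jpg"):              # hypothetical image names
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Recover intrinsics (K, distortion) and per-view extrinsics (R, t).
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("reprojection RMS error:", rms)
```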
328

Mise en relation d'images et de modèles 3D avec des réseaux de neurones convolutifs / Relating images and 3D models with convolutional neural networks

Suzano Massa, Francisco Vitor 09 February 2017
The recent availability of large catalogs of 3D models enables new possibilities for 3D reasoning on photographs. This thesis investigates the use of convolutional neural networks (CNNs) for relating 3D objects to 2D images. We first introduce two contributions that are used throughout this thesis: an automatic memory-reduction library for deep CNNs, and a study of CNN features for cross-domain matching. For the first, we develop a library built on top of Torch7 which automatically reduces the memory requirements for deploying a deep CNN by up to 91%. For the second, we study the effectiveness of various CNN features extracted from a pretrained network when applied to images of different modalities (real or synthetic). We show that despite the large cross-domain difference between rendered views and photographs, it is possible to use some of these features for instance retrieval, with possible applications to image-based rendering. CNNs have recently been used for object viewpoint estimation, sometimes with very different design choices. We present these approaches in a unified framework and analyse the key factors that affect performance. We propose a joint training method that combines both detection and viewpoint estimation, which performs better than considering viewpoint estimation separately. We also study the impact of formulating viewpoint estimation as either a discrete or a continuous task, quantify the benefits of deeper architectures, and demonstrate that using synthetic data is beneficial. With all these elements combined, we improve over previous state-of-the-art results on the Pascal3D+ dataset by approximately 5% in mean average viewpoint precision. In the instance retrieval study, the image of the object is given and the goal is to identify which of a number of 3D models it is. We extend this work to object detection, where instead we are given a 3D model (or a set of 3D models) and are asked to locate and align the model in the image. We show that simply using CNN features is not enough for this task, and we propose to learn a transformation that brings the features of real images close to the features of rendered views. We evaluate our approach both qualitatively and quantitatively on two standard datasets, the IKEAobject dataset and the chair subset of the Pascal VOC 2012 dataset, and we show state-of-the-art results on both.
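The learned transformation that brings real-image features toward rendered-view features could, under stated assumptions, be as simple as the regression sketch below; the two-layer adapter, the 4096-d feature size, and the MSE objective are illustrative, not the thesis's exact formulation.

```python
# Minimal sketch of learning a mapping from real-image CNN features toward
# rendered-view features; architecture, dimensionality, and loss are
# assumptions for illustration only.
import torch
import torch.nn as nn

dim = 4096                              # assumed CNN feature dimensionality
adapter = nn.Sequential(
    nn.Linear(dim, dim), nn.ReLU(),
    nn.Linear(dim, dim),
)
optimizer = torch.optim.Adam(adapter.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# real_feats / synth_feats: paired features for the same object instance,
# extracted from photographs and rendered views respectively (dummy data here).
real_feats = torch.randn(256, dim)
synth_feats = torch.randn(256, dim)

for step in range(100):
    optimizer.zero_grad()
    mapped = adapter(real_feats)        # bring real features into the
    loss = loss_fn(mapped, synth_feats) # rendered-view feature space
    loss.backward()
    optimizer.step()
print("final alignment loss:", loss.item())
```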
329

Machine visual feedback through CNN detectors : Mobile object detection for industrial application

Rexhaj, Kastriot January 2019
This paper concerns itself with object detection as a possible solution to Valmet's quest for a visual-feedback system that can help operators and other personnel interact more easily with their machines and equipment. New advances in deep learning, specifically CNN models, have produced neural networks with detection capabilities. Object detection has historically been mostly inaccessible to industry due to the complexity of solutions involving various intricate image-processing algorithms. In that regard, deep learning offers a more accessible way to create scalable object detection solutions. This study therefore reviews recent literature detailing detection models, with a selective focus on factors making them realizable on ARM hardware and, in turn, mobile devices like phones. An attempt was made to single out the most lightweight and hardware-efficient model and implement it as a prototype in order to help Valmet in their decision process around future object detection products. The survey led to the choice of an SSD-MobileNetV2 detection architecture, due to promising characteristics making it suitable for performance-constrained smartphones. This CNN model was implemented on Valmet's phone of choice, the Samsung Galaxy S8, and it successfully achieved object detection functionality. Evaluation shows a mean average precision of 60% in detecting objects and 4.7 FPS performance on the chosen phone model. TensorFlow was used for developing, training, and evaluating the model. The report concludes by recommending that Valmet pursue solutions built on top of these kinds of models, and expresses an optimistic outlook on this type of technology. Realizing performance of this magnitude on a mid-tier phone using deep learning (which historically is very computationally intensive) sets up great strides for this type of technology in the future; along with better smartphones, substantial benefits are expected for both industry and consumers.
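Deploying such a detector on a phone typically involves converting the trained TensorFlow model to TensorFlow Lite, as in this hypothetical sketch; the SavedModel path and quantization choice are placeholders, and the thesis's actual deployment pipeline may have differed.

```python
# Hypothetical sketch of preparing a trained detector for a phone by
# converting it to TensorFlow Lite; the SavedModel path is a placeholder.
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model("ssd_mobilenet_v2/saved_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]   # weight quantization
tflite_model = converter.convert()

with open("detector.tflite", "wb") as f:
    f.write(tflite_model)

# On the device, the model runs through the TFLite interpreter:
interpreter = tf.lite.Interpreter(model_path="detector.tflite")
interpreter.allocate_tensors()
print(interpreter.get_input_details()[0]["shape"])     # expected input size
```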
330

Leannet : uma arquitetura que utiliza o contexto da cena para melhorar o reconhecimento de objetos

Silva, Leandro Pereira da 27 March 2018
Computer vision is the science that aims to give computers the capability of seeing the world around them. Among its tasks, object recognition intends to classify objects and to identify where each object is in a given image. As objects tend to occur in particular environments, their contextual association can be useful for improving the object recognition task. To incorporate contextual awareness into object recognition, the proposed approach identifies the scene context separately from the object and fuses both pieces of information to improve object detection. To do so, we propose a novel architecture composed of two convolutional neural networks running in parallel: one for object identification and the other for identifying the context in which the object is located. Finally, the information from the two-stream architecture is concatenated to perform the object classification. The evaluation is performed on the public PASCAL VOC 2007 and MS COCO datasets, comparing the performance of our approach with architectures that do not use scene context to classify objects. Results show that our approach is able to raise the scores of in-context objects and reduce the scores of out-of-context objects.
