131

Human Body Part Detection And Multi-Human Tracking In Surveillance Videos

Topcu, Hasan Huseyin 01 May 2012 (has links) (PDF)
With recent developments in Computer Vision and Pattern Recognition, surveillance applications are equipped with event/activity understanding and interpretation capabilities, which usually require recognizing humans in real-world scenes. Real-world scenes such as airports, streets and train stations are complex because they involve many people, complicated occlusions and cluttered backgrounds. Even so, modern human detectors can locate pedestrians accurately in complex scenes, and visual trackers can follow targets in cluttered environments. The integration of visual object detection and tracking, which are fundamental features of available surveillance applications, is one solution to the multi-human tracking problem in crowded scenes and is the subject of this thesis. Human body part detectors, capable of detecting human heads and upper bodies, are trained with Support Vector Machines (SVM) using the Histogram of Oriented Gradients (HOG) descriptor, one of the state-of-the-art descriptors for human detection. The training process is elaborated by investigating the effects of the HOG descriptor parameters. Heads and upper bodies are searched within regions of interest (ROI) computed by motion detection. These body part detectors are then integrated with a multi-human tracker that solves the data association problem with the Multi-Scan Markov Chain Monte Carlo Data Association (MCMCDA) algorithm. Associated measurements of upper body locations are used for state correction of each track, and state estimation is done with a Kalman Filter. Detector performance is evaluated on the MIT Pedestrian and INRIA Human datasets.
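
A minimal sketch of the kind of pipeline described above, assuming a HOG + linear SVM body-part detector and a constant-velocity Kalman filter for track correction (window size, HOG parameters and the dataset loader are assumptions, not the thesis's settings):

# Illustrative sketch, not the thesis code: HOG + linear SVM detector
# plus a constant-velocity Kalman filter for track state correction.
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

def hog_features(patches, cell=(8, 8), block=(2, 2), bins=9):
    """Compute HOG descriptors for equally sized grayscale patches."""
    return np.array([
        hog(p, orientations=bins, pixels_per_cell=cell, cells_per_block=block)
        for p in patches
    ])

def train_part_detector(pos_patches, neg_patches):
    """Train a linear SVM on HOG features of positive/negative examples."""
    X = hog_features(list(pos_patches) + list(neg_patches))
    y = np.array([1] * len(pos_patches) + [0] * len(neg_patches))
    return LinearSVC(C=0.01).fit(X, y)

class KalmanTrack:
    """Constant-velocity Kalman filter over state (x, y, vx, vy)."""
    def __init__(self, x, y, dt=1.0, q=1e-2, r=1.0):
        self.x = np.array([x, y, 0.0, 0.0])
        self.P = np.eye(4)
        self.F = np.array([[1, 0, dt, 0], [0, 1, 0, dt],
                           [0, 0, 1, 0], [0, 0, 0, 1]], dtype=float)
        self.H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], dtype=float)
        self.Q, self.R = q * np.eye(4), r * np.eye(2)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def correct(self, z):
        """Correct the state with an associated detection position (x, y)."""
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (np.asarray(z, dtype=float) - self.H @ self.x)
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]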
132

FMDBMS - A Fuzzy MPEG-7 Database Management System

Ercin, Nazif Ilker 01 June 2012 (has links) (PDF)
Continuous progress in multimedia research in recent years has led to a proliferation of multimedia applications in everyday life. The ever-growing demand for high-performance multimedia applications creates the need for new and efficient storage and retrieval techniques. Numerous studies in the literature attempt to describe the content of multimedia documents. The Moving Picture Experts Group's XML-based MPEG-7 is one such standard, making it possible to describe multimedia content in terms of both low- and high-level properties. The MPEG-7 DDL allows new types to be defined from already defined types, and within the past ten years MPEG-7 has become a widely accepted standard in multimedia applications. In this thesis, an XML database application is developed to manage MPEG-7 descriptions, utilizing eXist XML DB as the database management system and a Java application as the frontend. MPEG-7 Description Schemes are extended by introducing fuzzy semantic types, such as FuzzyObject and FuzzyEvent, using the MPEG-7 DDL. In this respect, applying fuzzy XML methods to the MPEG-7 standard is a novel approach.
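
A toy illustration of the fuzzy semantic types mentioned above, expressed in Python for brevity (the thesis defines FuzzyObject and related types in the MPEG-7 DDL / XML Schema; the membership field and threshold query below are assumptions made for exposition):

# Toy sketch only: a fuzzy semantic object with a membership degree and a
# simple alpha-cut style query over it.
from dataclasses import dataclass

@dataclass
class FuzzyObject:
    label: str          # e.g. "car"
    membership: float   # degree in [0, 1] to which the region depicts the label

def fuzzy_select(objects, label, alpha=0.7):
    """Return objects labelled `label` whose membership is at least alpha."""
    return [o for o in objects if o.label == label and o.membership >= alpha]

hits = fuzzy_select([FuzzyObject("car", 0.9), FuzzyObject("car", 0.4)], "car")
print(hits)  # only the 0.9 match passes the 0.7 threshold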
133

Ugurel, Gokhan 01 June 2012 (has links) (PDF)
In real-time embedded systems, more and more developers are choosing the soft processor option to save money, power and area on their boards. The reconfigurability of a soft processor gives the designer more options and also solves the problem of processor obsolescence. Another increasing trend is the use of real-time operating systems (RTOSs) on microprocessors and microcontrollers. RTOSs help software developers meet the critical deadlines of a real-time environment with their deterministic and predictable behaviour. By providing service APIs and fast response times for task management, memory and interrupts, RTOSs decrease the development time of ongoing and future projects. Comparing RTOSs on RTOS-specific benchmark criteria, called RTOS benchmarking in the literature, helps software developers choose the appropriate RTOS for their requirements and pushes RTOS vendors to strengthen their products where they are weak. This study compares three popular RTOSs on Xilinx's soft processor platform MicroBlaze. Xilkernel, µC/OS-II and FreeRTOS are selected among the nine RTOSs available for MicroBlaze and are compared against critical RTOS benchmarking criteria: task preemption time, task preemption time under load, semaphore get/release time, message pass/receive time, fixed-size dynamic memory get/release time, UART RS-422 message interrupt serving time, RTOS initialization time and memory footprint. Results are interpreted using the architectural concepts of the RTOSs considered.
134

Algorithms For The Weapon-Target Allocation Problem

Turan, Ayse 01 July 2012 (has links) (PDF)
Within the air defense domain, the Weapon-Target Allocation problem is a fundamental problem. It deals with the allocation of a set of firing units, or weapons, to a set of hostile targets so that the total expected effect on the targets is maximized. The Weapon-Target Allocation problem has been proven to be NP-complete by Lloyd and Witsenhausen [14]. In this thesis, various algorithms, including search algorithms, maximum marginal return algorithms, evolutionary algorithms and bipartite graph matching algorithms, are applied to the problem. Algorithms from the literature are adapted to the problem and implemented, and existing algorithms are improved by taking the maximum allowed time criterion into account. A testbed is developed to compare the algorithms; it allows users to implement new algorithms and easily compare the algorithms they select. Using the testbed, the implemented algorithms are compared with respect to optimality and performance criteria. The results are examined and, by combining the algorithms that give better results, a new algorithm is proposed to solve the problem more efficiently. The proposed algorithm is also compared to the other algorithms, and computational results are presented.
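
A hedged sketch of one of the algorithm families mentioned above, a greedy maximum marginal return heuristic for the static allocation problem (the value/probability model and tie-breaking below are assumptions, not the thesis's exact formulation):

# Illustrative sketch: greedy maximum-marginal-return heuristic for static
# weapon-target allocation. values[j] is the target value, p[i][j] the kill
# probability of weapon i against target j; each weapon is assigned once.
import numpy as np

def greedy_mmr_allocation(values, p):
    values = np.asarray(values, dtype=float)
    p = np.asarray(p, dtype=float)
    n_weapons, n_targets = p.shape
    survival = np.ones(n_targets)          # product of (1 - p) per target so far
    assignment = np.full(n_weapons, -1)
    unused = set(range(n_weapons))
    for _ in range(n_weapons):
        # Marginal return of assigning weapon i to target j:
        # reduction in expected surviving value = V_j * survival_j * p_ij.
        gain, i, j = max((values[j] * survival[j] * p[i, j], i, j)
                         for i in unused for j in range(n_targets))
        assignment[i] = j
        survival[j] *= (1.0 - p[i, j])
        unused.remove(i)
    expected_surviving_value = float(values @ survival)
    return assignment, expected_surviving_value

# Example: 3 weapons, 2 targets.
alloc, cost = greedy_mmr_allocation([10.0, 5.0],
                                    [[0.8, 0.3], [0.5, 0.6], [0.2, 0.9]])
print(alloc, cost)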
135

Assessing Standard-compliance Of Public Institution Web Sites Of Turkey

Yalcinkaya, Sinem 01 September 2005 (has links) (PDF)
Since 2003, almost every public institution in Turkey has developed a web site through the e-Transformation Turkey project (eDTr). With so many institutions and different web sites, a standard becomes inevitable. To address this need, TÜRKSAT A.S. published a standard in January 2009 consisting of rules and recommendations for these web sites, also referencing international web standards. The purpose of this study is to analyze the compatibility of public institutions' web sites in Turkey with the TÜRKSAT standard. In this study, 32 rules are selected and verified for 50 public institution web sites: 20 rules are verified with a tool named WSSCV, developed in the context of this thesis; 5 rules are verified with a commercial tool named Total Validator; and 7 rules are verified manually. Results show that the standard prepared by TÜRKSAT is not used during the development of public institution web applications and that compliance of the checked web sites with the standard is very low. However, the standard itself also needs to be updated for today's technology.
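
An illustrative sketch of automated rule verification in the spirit of the WSSCV-style checks described above; the two rules shown (a non-empty title element, alt text on every image) are hypothetical examples and are not taken from the TÜRKSAT standard:

# Toy rule checker using only the standard library; the rules are examples.
from html.parser import HTMLParser

class RuleChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_title = False
        self.title_text = ""
        self.images_missing_alt = 0

    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self.in_title = True
        elif tag == "img" and not dict(attrs).get("alt"):
            self.images_missing_alt += 1

    def handle_endtag(self, tag):
        if tag == "title":
            self.in_title = False

    def handle_data(self, data):
        if self.in_title:
            self.title_text += data

def check_page(html):
    """Return a pass/fail style report for the two example rules."""
    checker = RuleChecker()
    checker.feed(html)
    return {
        "has_title": bool(checker.title_text.strip()),
        "images_missing_alt": checker.images_missing_alt,
    }

print(check_page("<html><head><title>Kurum</title></head><body><img src='a.png'></body></html>"))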
136

Emotion Analysis Of Turkish Texts By Using Machine Learning Methods

Boynukalin, Zeynep 01 July 2012 (has links) (PDF)
Automatic analysis of emotion in text is of increasing interest in today's research. The aim is to develop a system that can detect the type of a user's emotion from his or her text. Emotion classification of English texts has been studied by several researchers, with promising results. In this thesis, an emotion classification study on Turkish texts is introduced; to the best of our knowledge, this is the first study on emotion analysis of Turkish texts. For English there exist well-defined datasets for emotion classification, but no suitable Turkish dataset could be found for this study. Therefore, another important contribution is the generation of a new Turkish dataset for emotion analysis, built by combining two types of sources. Several classification algorithms are applied to the dataset and their results are compared. Due to the nature of the Turkish language, new features are added to the existing methods to improve the success of the proposed method.
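
A minimal sketch of an emotion classification pipeline of the kind described above, assuming a bag-of-character-n-grams representation and a Naive Bayes classifier; the toy examples and feature choices are illustrative and are not the thesis's dataset or method:

# Illustrative sketch: text -> character n-gram counts -> Naive Bayes.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy labelled examples; the thesis builds a much larger Turkish dataset.
texts = ["bugün çok mutluyum", "çok üzgünüm", "bu beni korkutuyor", "harika bir gün"]
labels = ["happiness", "sadness", "fear", "happiness"]

model = make_pipeline(CountVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
                      MultinomialNB())
model.fit(texts, labels)
print(model.predict(["ne güzel bir haber"]))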
137

Fusing Semantic Information Extracted From Visual, Auditory And Textual Data Of Videos

Gulen, Elvan 01 July 2012 (has links) (PDF)
In recent years, due to the increasing use of videos, manual information extraction has become insufficient for users, and extracting semantic information automatically has turned into a serious requirement. Today there exist systems that extract semantic information automatically from visual, auditory or textual data separately, but the number of studies that use more than one data source is very limited. As earlier studies on this topic have shown, using multimodal video data for automatic information extraction yields better results by increasing the accuracy of the semantic information retrieved from visual, auditory and textual sources. In this thesis, a complete system is introduced that fuses the semantic information obtained from visual, auditory and textual video data. The fusion system analyzes and unites the semantic information extracted from multimodal data by utilizing concept interactions, and consequently generates a semantic dataset that is ready to be stored in a database. Experiments are also conducted to compare the proposed multimodal fusion with semantic information extraction from a single modality and with other fusion methods. The results indicate that fusing all available information along with concept relations yields better overall results than any unimodal approach and other traditional fusion methods.
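
A simple late-fusion sketch, given as an assumption for exposition rather than the fusion method of the thesis: per-concept scores from the visual, auditory and textual analysers are combined by a weighted average and then adjusted with pairwise concept-relation weights:

# Illustrative late fusion of per-modality concept scores.
def fuse(scores, modality_weights, relations, boost=0.1):
    """scores: modality -> {concept: score in [0, 1]}.
    relations: (concept_a, concept_b) -> correlation in [-1, 1]."""
    concepts = sorted({c for s in scores.values() for c in s})
    fused = {}
    for c in concepts:
        num = sum(modality_weights[m] * scores[m].get(c, 0.0) for m in scores)
        den = sum(modality_weights[m] for m in scores)
        fused[c] = num / den
    # Adjust each concept by evidence from correlated concepts.
    adjusted = dict(fused)
    for (a, b), w in relations.items():
        if a in fused and b in fused:
            adjusted[a] = min(1.0, max(0.0, adjusted[a] + boost * w * fused[b]))
    return adjusted

print(fuse(
    {"visual": {"explosion": 0.7}, "audio": {"explosion": 0.9, "scream": 0.6},
     "text": {"scream": 0.4}},
    {"visual": 0.5, "audio": 0.3, "text": 0.2},
    {("explosion", "scream"): 0.8},
))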
138

Introducing Rolling Axis Into Motion Controlled Gameplay As A New Degree Of Freedom Using Microsoft Kinect

Bozgeyikli, Evren C. 01 September 2012 (has links) (PDF)
Motion control is a rapidly improving area of game technology. In the last few years, motion sensing devices for video games such as Nintendo Wii, Microsoft Kinect for Xbox 360 and Sony PlayStation Move have gained popularity among players, with many compatible motion controlled games. Microsoft Kinect for Xbox 360 provides a controller-free interaction system in which the player controls games using only body movements. Although Kinect provides a natural way of interaction, the rolling action of body joints is not recognized within the standard motion sensing scope of the device. The aim of this thesis is to provide an improved gameplay system with an increased degree of freedom by introducing a rolling axis of movement, using Microsoft Kinect for Xbox 360 for motion sensing. This improved gameplay system gives players a more natural and accurate way of motion controlled interaction, eliminating the unnatural gestures that would otherwise have to be memorized to compensate for the missing roll movement recognition.
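
A minimal sketch of deriving a roll-style control value from tracked skeleton joints (a generic steering-wheel example, not necessarily the recognition method developed in the thesis):

# Roll angle of the hand-to-hand line about the camera's depth axis.
import math

def roll_from_hands(left_hand, right_hand):
    """left_hand, right_hand: (x, y, z) joint positions in camera space.
    Returns the roll angle in degrees; 0 when the hands are level."""
    dx = right_hand[0] - left_hand[0]
    dy = right_hand[1] - left_hand[1]
    return math.degrees(math.atan2(dy, dx))

print(roll_from_hands((-0.3, 0.1, 2.0), (0.3, -0.1, 2.0)))  # tilted hands -> nonzero roll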
139

Enhancing Content Management Systems With Semantic Capabilities

Gonul, Suat 01 August 2012 (has links) (PDF)
Content Management Systems (CMSs) generally store data so that the content is either distributed among several relational database tables or stored in files as a whole, without any distinctive characteristics. These storage mechanisms cannot manage semantic information about the data and lack semantic retrieval, search and browsing of the stored content. To enhance non-semantic CMSs with advanced semantic features, both the semantics within the CMS itself and additional semantic information related to the managed content should be taken into account. However, extracting implicit knowledge from legacy CMSs, lifting it into a semantic content management environment and providing semantic operations on the content is a challenging task that requires adopting several recent advances in information extraction (IE), information retrieval (IR) and the Semantic Web. In this study, we propose an integrative approach that includes automatic lifting of content from legacy systems, automatic annotation of data with information retrieved from the Linked Open Data (LOD) cloud, and several semantic operations on the content in terms of storage and search. We use a simple RDF path language to create custom semantic indexes and to filter annotations obtained from the LOD cloud in a way that suits specific use cases. Filtered annotations are materialized along with the actual content of the document in dedicated indexes, and this semantic indexing infrastructure allows semantically meaningful search facilities on top of it. We realize our approach within the Apache Stanbol project, a subproject developed in the scope of the IKS project, focusing on its document storage and retrieval parts. We evaluate our approach in the healthcare domain with different domain ontologies (SNOMED/CT, ART, RXNORM), in addition to DBpedia as part of the LOD cloud, which are used to annotate documents and content obtained from different health portals.
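
A small sketch, under stated assumptions, of filtering LOD annotations with a tiny dot-separated path pattern before materializing them next to the document; the path syntax and annotation structure below are hypothetical illustrations, not Apache Stanbol's actual RDF path language:

# Illustrative path filtering over nested annotation dictionaries.
def select_path(annotation, path):
    """annotation: nested dicts from a LOD lookup; path: e.g. 'drug.interactsWith.label'."""
    node = annotation
    for step in path.split("."):
        if isinstance(node, dict) and step in node:
            node = node[step]
        else:
            return None
    return node

def build_index_entry(doc_id, text, annotations, paths):
    """Materialize the document text plus the filtered annotation values."""
    fields = {"id": doc_id, "content": text}
    for p in paths:
        values = [v for a in annotations if (v := select_path(a, p)) is not None]
        if values:
            fields[p] = values
    return fields

entry = build_index_entry(
    "doc-1", "Aspirin interacts with warfarin.",
    [{"drug": {"interactsWith": {"label": "warfarin"}}}],
    ["drug.interactsWith.label"],
)
print(entry)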
140

Human Activity Classification Using Spatio-temporal

Akpinar, Kutalmis 01 September 2012 (has links) (PDF)
This thesis compares state-of-the-art methods and proposes solutions for human activity classification from video data. Human activity classification means finding the meaning of human activities captured on video; it is needed to improve surveillance video analysis and summarization, video data mining and robot intelligence. This thesis focuses on the classification of low-level human activities, which serve as an important information source for determining high-level activities. In this study, the feature relation histogram based activity description proposed by Ryoo et al. (2009) is implemented and extended. The feature histogram is widely used in feature-based approaches; however, the feature relation histogram can additionally represent the locational information of the features. Our extension defines a new set of relations between the features, which makes the method more effective for action description. Classifications are performed and results are compared for the plain feature histogram, Ryoo's feature relation histogram and our feature relation histogram, using the same datasets and feature type. Our experiments show that the feature relation histogram performs slightly better than the feature histogram, and our feature relation histogram outperforms both. Although the difference is not clearly observable on datasets containing periodic actions, a 12% improvement is observed on the non-periodic action datasets. Our work shows that the spatio-temporal relations represented by our new set of relations are a better way to represent an activity for classification.
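
An illustrative sketch of a spatio-temporal feature relation histogram built over pairs of quantized local features; the coarse relation set used here (same/after/before crossed with near/far) is an assumption, not the thesis's extended relation set:

# Illustrative relation histogram over pairs of (x, y, t, codeword) features.
import itertools
import numpy as np

def pairwise_relation(f1, f2, space_thr=20.0, time_thr=5.0):
    """Map a feature pair to a coarse spatio-temporal relation index."""
    dt = f2[2] - f1[2]
    ds = np.hypot(f2[0] - f1[0], f2[1] - f1[1])
    temporal = 0 if abs(dt) <= time_thr else (1 if dt > 0 else 2)  # same / after / before
    spatial = 0 if ds <= space_thr else 1                           # near / far
    return temporal * 2 + spatial                                   # 6 relations in total

def relation_histogram(features, n_codewords, n_relations=6):
    """features: list of (x, y, t, codeword). Returns a flattened histogram
    over (codeword_i, codeword_j, relation) triples."""
    hist = np.zeros((n_codewords, n_codewords, n_relations))
    for f1, f2 in itertools.combinations(features, 2):
        hist[f1[3], f2[3], pairwise_relation(f1, f2)] += 1
    return hist.ravel()

print(relation_histogram([(0, 0, 0, 0), (5, 5, 2, 1), (100, 0, 20, 1)], n_codewords=2).sum())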
