111

Pattern Extraction By Using Both Spatial And Temporal Features On Turkish Meteorological Data

Goler, Isil 01 January 2011 (has links) (PDF)
With the growth in the size of datasets, data mining has been an important research topic, receiving substantial interest from both academia and industry for many years. Spatio-temporal data mining, the extraction of knowledge from large amounts of spatio-temporal data, is a particularly demanding field, because huge volumes of spatio-temporal data are collected in a wide range of applications. Successful analysis of large spatio-temporal databases therefore requires the development of novel data mining algorithms and computational techniques. In this thesis, a spatio-temporal mining technique is proposed and applied to Turkish meteorological data collected from various weather stations in Turkey. The study also includes an analysis and interpretation of the spatio-temporal rules generated for the Turkish meteorological data set. We introduce a second-level mining technique that defines general trends of the patterns according to spatial changes. Generated patterns are investigated under different temporal sets in order to monitor how the events change over time.
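The two-level scheme described above can be illustrated with a minimal sketch: first-level mining computes the support of spatio-temporal patterns, and second-level mining regroups those patterns by region to expose spatial trends. All station names, regions, and events below are illustrative placeholders, not the thesis data set or its actual algorithm.

```python
from collections import defaultdict

# Toy observations standing in for station records: (station, region, season, event).
observations = [
    ("Ankara", "Central", "winter", "snow"),
    ("Ankara", "Central", "winter", "snow"),
    ("Izmir", "Aegean", "winter", "rain"),
    ("Izmir", "Aegean", "summer", "dry"),
    ("Trabzon", "BlackSea", "summer", "rain"),
    ("Trabzon", "BlackSea", "winter", "rain"),
]

def pattern_support(obs):
    """First-level mining: support of each (region, season, event) pattern."""
    counts = defaultdict(int)
    for _station, region, season, event in obs:
        counts[(region, season, event)] += 1
    return {p: c / len(obs) for p, c in counts.items()}

def regional_trends(support):
    """Second-level mining: regroup pattern support by region so that
    trends across spatial units can be compared."""
    trends = defaultdict(dict)
    for (region, season, event), s in support.items():
        trends[region][(season, event)] = s
    return dict(trends)

support = pattern_support(observations)
trends = regional_trends(support)
```

Comparing the per-region dictionaries under different temporal subsets (e.g. winters only vs. summers only) is one way to monitor how patterns shift with temporal changes.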
112

Development Of A Grid-aware Master Worker Framework For Artificial Evolution

Ketenci, Ahmet 01 December 2010 (has links) (PDF)
The Genetic Algorithm (GA) has become a very popular tool for various kinds of problems, including optimization problems with wide search spaces where grid search techniques are usually infeasible or ineffective at finding a solution that is good enough. The most computationally intensive component of a GA is the calculation of the goodness (fitness) of candidate solutions. However, since the fitness calculations of individuals do not depend on each other, this process can be parallelized easily. The easiest way to reach large amounts of computational power is to use a grid. Grids are composed of multiple clusters, so they can offer far more resources than a single cluster. On the other hand, the grid may not be the easiest environment in which to develop parallel programs, because of the lack of tools and libraries for communication among processes. In this work, we introduce a new framework, GridAE, for GA applications. GridAE uses the master-worker model for parallelization and offers a GA library to users. It also abstracts the message passing process away from users. Moreover, it has both a command line interface and a web interface for job management. These properties make the framework usable even for developers with limited parallel programming or grid computing experience. The performance of GridAE is tested on a shape optimization problem, and the results show that the framework is best suited to problems with large populations.
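The key observation above, that fitness evaluations are mutually independent, is what makes the master-worker model fit so naturally. The sketch below illustrates only that scatter/gather step on a single machine with `multiprocessing`; GridAE's actual API and its grid-level message passing are not shown here, and the sphere fitness function is an invented example.

```python
from multiprocessing import Pool
import random

def fitness(candidate):
    """Goodness of one individual: a toy sphere function that is maximal
    (0.0) when every gene equals 3.0. Each call is independent."""
    return -sum((x - 3.0) ** 2 for x in candidate)

def evaluate_population(population, workers=4):
    """Master-worker step: the master scatters individuals to worker
    processes and gathers their fitness values in population order."""
    with Pool(workers) as pool:
        return pool.map(fitness, population)

if __name__ == "__main__":
    random.seed(0)
    population = [[random.uniform(0.0, 6.0) for _ in range(5)] for _ in range(8)]
    scores = evaluate_population(population)
    best = population[scores.index(max(scores))]
```

On a real grid the `Pool` would be replaced by communication across clusters, which is precisely the plumbing the framework abstracts away from the user.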
113

Tsunami Source Inversion Using Genetic Algorithm

Sen, Caner 01 February 2011 (has links) (PDF)
The tsunami forecasting methodology developed by the United States National Oceanic and Atmospheric Administration's Center for Tsunami Research is based on the concept of a pre-computed tsunami database, which includes tsunami model results from Mw 7.5 earthquakes called tsunami source functions. Tsunami source functions are placed in several rows along the subduction zones of the world's oceans. The linearity of tsunami propagation in the open ocean allows scaling and/or combination of the pre-computed tsunami source functions. An offshore scenario is obtained by inverting scaled and/or combined tsunami source functions against Deep-ocean Assessment and Reporting of Tsunamis (DART) buoy measurements. A graphical user interface called Genetic Algorithm for INversion (GAIN) was developed in MATLAB using the general optimization toolbox to perform the inversion. The 15 November 2006 Kuril and 27 February 2010 Chile tsunamis are chosen as case studies. One or several DART buoy measurements are used to test different error minimization functions, with and without earthquake magnitude as a constraint. The inversion results are discussed by comparing the forecasting model results with tide gauge measurements.
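The linearity argument above is what makes the inversion tractable: the offshore scenario is a weighted sum of pre-computed unit source functions, and the inversion recovers the weights. The sketch below uses two synthetic sine "source functions" and a noise-free synthetic measurement, and recovers the weights by ordinary least squares; the thesis instead searches for the weights with a genetic algorithm (GAIN) against real DART records.

```python
import numpy as np

# Hypothetical unit source functions sampled at the times of a DART
# record; real source functions come from the pre-computed database.
t = np.linspace(0.0, 1.0, 50)
G = np.column_stack([np.sin(2 * np.pi * t), np.sin(4 * np.pi * t)])

# Synthetic "measurement": 1.5 x source 1 + 0.5 x source 2.
dart = G @ np.array([1.5, 0.5])

# Because propagation is linear, the scenario is G @ weights; here a
# least-squares solve stands in for the GA-driven error minimization.
weights, *_ = np.linalg.lstsq(G, dart, rcond=None)
```

With noisy data and constraints such as a target earthquake magnitude, the problem is no longer a clean linear solve, which is where a GA-based error minimization becomes attractive.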
114

Route Optimization For Solid Waste Transportation Using Parallel Hybrid Genetic Algorithms

Uskay, Selim Onur 01 December 2010 (has links) (PDF)
The transportation phase of solid waste management is highly critical, as it may constitute approximately 60 to 75 percent of the total cost. Therefore, even a small improvement in the collection operation can result in a significant saving in the overall cost. Although a considerable number of studies exist on the Vehicle Routing Problem (VRP), the vast majority are not integrated with GIS and hence do not consider the path constraints of real road networks relevant to waste collection, such as one-way roads and U-turns. This study involves the development of computer software that optimizes waste collection routes for solid waste transportation while considering path constraints and road gradients. Two different routing models are proposed. The aim of the first model is to minimize the total distance travelled, whereas that of the second is to minimize the total fuel consumption, which depends on the loading conditions of the truck and the road gradient. A comparison is made between these two approaches, which are expected to generate routes with different characteristics. The obtained results are satisfactory: the distance optimization model generates routes that are shorter in length, whereas the fuel consumption optimization model generates routes that are slightly longer but traverse steeply inclined roads while the truck load is still low. The resultant routes are demonstrated on a 3D terrain view.
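The intuition behind the second objective can be made concrete with a toy per-segment fuel model in which consumption grows with the current truck weight and with uphill gradient. The coefficients and the exact functional form below are invented for illustration; the thesis's actual fuel model is not reproduced here.

```python
def fuel_cost(segments, empty_weight=10.0, alpha=0.05, beta=2.0):
    """Illustrative fuel model for one route.
    segments: list of (length_km, gradient, pickup_tonnes) in travel order.
    Consumption on a segment scales with current total weight and adds a
    penalty proportional to positive (uphill) gradient."""
    load = 0.0
    total = 0.0
    for length, gradient, pickup in segments:
        weight = empty_weight + load
        total += length * weight * (alpha + beta * max(gradient, 0.0))
        load += pickup  # waste collected at the end of the segment
    return total

# Same pickups, different ordering of the climb relative to the load:
flat = [(1.0, 0.0, 1.0), (1.0, 0.0, 1.0)]
steep_first = [(1.0, 0.10, 1.0), (1.0, 0.0, 1.0)]
```

Under such a model, a route that visits steep streets before the truck fills up costs less fuel than one that climbs them fully loaded, which is exactly the trade-off that separates the two routing models.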
115

Exploiting Information Extraction Techniques For Automatic Semantic Annotation And Retrieval Of News Videos In Turkish

Kucuk, Dilek 01 February 2011 (has links) (PDF)
Information extraction (IE) is known to be an effective technique for the automatic semantic indexing of news texts. In this study, we propose a text-based, fully automated system for the semantic annotation and retrieval of news videos in Turkish, which exploits several IE techniques on the video texts. The IE techniques employed by the system include named entity recognition, automatic hyperlinking, person entity extraction with coreference resolution, and event extraction. The system utilizes the outputs of the components implementing these IE techniques as semantic annotations for the underlying news video archives. Apart from the IE components, the proposed system comprises a news video database together with components for news story segmentation, sliding text recognition, and semantic video retrieval. We also propose a semi-automatic counterpart of the system, in which the only manual intervention takes place during text extraction. Both systems are executed on genuine video data sets consisting of videos broadcast by the Turkish Radio and Television Corporation. The current study is significant in that it proposes the first fully automated system to facilitate the semantic annotation and retrieval of news videos in Turkish; moreover, the proposed system and its semi-automated counterpart are quite generic, and hence they could be customized to build similar systems for video archives in other languages as well. IE research on Turkish texts is known to be rare, and within the course of this study we have proposed and implemented novel techniques for several IE tasks on Turkish texts. As an application example, we have demonstrated the utilization of the implemented IE components to facilitate multilingual video retrieval.
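Of the IE techniques listed above, named entity recognition is the simplest to illustrate. The sketch below is a toy gazetteer-based tagger producing (entity, type, offset) annotations of the kind a retrieval component could index; the entity lists and the sample sentence are invented, and the thesis's actual recognizer for Turkish is considerably more sophisticated.

```python
# Illustrative gazetteer; a real system would hold far larger lists
# plus pattern rules for person, location, and organization names.
GAZETTEER = {
    "Ankara": "LOCATION",
    "TRT": "ORGANIZATION",
    "Abdullah Gul": "PERSON",
}

def annotate(text):
    """Return (entity, type, char_offset) triples found in a video text,
    sorted by position, for use as semantic annotations."""
    annotations = []
    for name, etype in GAZETTEER.items():
        start = text.find(name)
        if start != -1:
            annotations.append((name, etype, start))
    return sorted(annotations, key=lambda a: a[2])

sample = "TRT reported that Abdullah Gul visited Ankara."
```

Storing such triples against each news story segment is what turns free video text into a searchable semantic index.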
116

Using Google Analytics, Card Sorting And Search Statistics For Getting Insights About Metu Website

Dalci, Mustafa 01 February 2011 (has links) (PDF)
Websites are one of the most popular and quickest ways of communicating with users and providing information. Measuring the effectiveness of a website, the availability of information on the website, and the effects of its information architecture on users
117

Automatic Reconstruction Of Photorealistic 3-d Building Models From Satellite And Ground-level Images

Sumer, Emre 01 April 2011 (has links) (PDF)
This study presents an integrated framework for the automatic generation of photorealistic 3-D building models from satellite and ground-level imagery. First, the 2-D building patches and the corresponding footprints are extracted from high-resolution imagery using an adaptive fuzzy-genetic algorithm approach. Next, the photorealistic facade textures are automatically extracted from single ground-level building images using a developed approach that includes facade image extraction, rectification, and occlusion removal. Finally, the textured 3-D building models are generated automatically by mapping the corresponding textures onto the facades of the models. The developed 2-D building extraction and delineation approach was implemented on a selected urban area of the Batikent district of Ankara, Turkey. The building regions were extracted with an approximate detection rate of 93%, and the overall delineation accuracy was computed to be 3.9 meters. The developed concept for facade image extraction was tested on two distinct datasets. The facade image extraction accuracies were computed to be 82% and 81% for the Batikent and eTrims datasets, respectively. As for the rectification results, 60% and 80% of the facade images yielded errors under ten pixels for the Batikent and eTrims datasets, respectively. In the evaluation of occlusion removal, the average scores were computed to be 2.58 and 2.28 for the Batikent and eTrims datasets, respectively, on a scale ranging from 1 (excellent) to 6 (unusable). The modeling of the 110 single buildings with photorealistic textures took about 50 minutes of processor running time and yielded a satisfactory level of accuracy.
118

Acquisition Of Liver Specific Parasites-bacteria-drugs-diseases-genes Knowledge From Medline

Yildirim, Pinar 01 April 2011 (has links) (PDF)
Biomedical literature such as MEDLINE articles is a rich resource for discovering and tracking disease and drug knowledge. For example, information regarding the drugs used for a particular disease, or the changes in drug usage over time, is valuable. However, this information is buried in thousands of MEDLINE articles, and acquiring knowledge from them requires complex processes based on biomedical text mining techniques. Today, parasitic and bacterial diseases affect hundreds of millions of people worldwide, resulting in significant mortality and devastating social and economic consequences. Many control and eradication programs are conducted around the world, and many drugs have been developed for diseases caused by parasites and bacteria. In this study, parasites and bacteria affecting the liver, together with the drugs used in their treatment, were investigated. In addition, relationships between these diseases and genes, as well as parasites and bacteria, were searched for using data mining and biomedical text mining techniques. This study reveals that the treatment of parasitic and bacterial diseases appears to have been stable over the last four decades. The methodology introduced in this study also presents a reference model for acquiring medical knowledge from the literature.
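A basic building block for the kind of literature mining described above is co-occurrence counting: how often a drug and a disease are mentioned in the same abstract. The sketch below runs over a three-sentence toy corpus with invented term lists; real input would be MEDLINE records, and the thesis's pipeline involves far more than whitespace tokenization.

```python
from collections import Counter
from itertools import product

# Toy corpus standing in for MEDLINE abstracts.
abstracts = [
    "praziquantel is effective against schistosomiasis in endemic areas",
    "treatment of schistosomiasis with praziquantel remains standard",
    "albendazole used for echinococcosis of the liver",
]
drugs = {"praziquantel", "albendazole"}
diseases = {"schistosomiasis", "echinococcosis"}

def cooccurrence(texts, drugs, diseases):
    """Count drug-disease pairs mentioned within the same abstract."""
    counts = Counter()
    for text in texts:
        words = set(text.split())
        for drug, disease in product(drugs & words, diseases & words):
            counts[(drug, disease)] += 1
    return counts

pairs = cooccurrence(abstracts, drugs, diseases)
```

Bucketing such counts by publication decade is one simple way to examine whether the drugs associated with a disease have remained stable over time.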
119

A Verification Approach For Dynamics Of Metamodel Based Conceptual Models Of The Mission Space

Eryilmaz, Utkan 01 June 2011 (has links) (PDF)
Conceptual models were introduced in the simulation world in order to describe the problem domain in detail before any implementation is attempted. One of the recent approaches for conceptual modeling of the military mission space is the KAMA approach, which provides a process description, a UML-based notation, and a supporting tool for developing conceptual models. The prominence of the approach stems from the availability of guidance and its application in real-life case studies. Although the credibility of a conceptual model can be improved through the use of a structured notation and tools, verification and validation activities must still be performed to arrive at more credible conceptual models. A conceptual model includes two categories of information, static and dynamic, the latter describing the changes that occur over time. In this study, the dynamic characteristics of conceptual models described in the KAMA notation are explored, and a verification approach based on them is proposed. The dynamic aspects of the KAMA notation and example conceptual models provide the information necessary to characterize the dynamic properties of conceptual models. Using these characteristics as a basis, an approach is formulated that consists of formal and semi-formal techniques as well as supporting tools. For the description of additional properties for dynamic verification, an extended form of KAMA, called the KAMA-DV notation, is developed. The approach is applied in two different real-life case studies, and its effectiveness is compared with earlier verification studies.
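One representative dynamic-verification property is reachability: every state in a behavioral model should be reachable from the initial state, otherwise the model contains dead specification. The sketch below checks this on an invented state graph; KAMA-DV models carry much richer information than a bare transition map, so this only illustrates the flavor of such a check.

```python
from collections import deque

# Illustrative behavioral diagram as state -> successor-states edges.
transitions = {
    "Idle": ["Planning"],
    "Planning": ["Executing", "Idle"],
    "Executing": ["Reporting"],
    "Reporting": ["Idle"],
    "Orphan": [],  # deliberately unreachable, to trip the check
}

def reachable(start, trans):
    """Breadth-first search over the state graph from the initial state."""
    seen, queue = {start}, deque([start])
    while queue:
        for nxt in trans.get(queue.popleft(), []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

def unreachable_states(start, trans):
    """States declared in the model but never reachable at run time."""
    return sorted(set(trans) - reachable(start, trans))
```

A verification tool would report `Orphan` as a modeling defect before any simulation implementation is attempted.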
120

An Ontology And Conceptual Graph Based Best Matching Algorithm For Context-aware Applications

Koushaeian, Reza 01 May 2011 (has links) (PDF)
Context-aware computing is based on using knowledge about the current context. Interpreting the current context into understandable knowledge is carried out by reasoning over the context and, in some cases, by matching the current context against a desired context. In this thesis we concentrate on the context matching problem in the context-aware computing domain. Context matching can be performed in various ways, as in other matching processes. Our approach is best matching, in order to generate granular similarity results rather than being limited to Boolean values. We use an ontology as the encoded domain knowledge for our matching method. The context matching method depends on how context is represented, and we selected conceptual graphs for this purpose. We propose a generic algorithm for context matching based on ontological information that benefits from conceptual graph theory and its advantages.
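The distinction drawn above, graded "best matching" rather than Boolean matching, can be sketched with an ontology-based concept similarity: two concepts score by the depth of their deepest shared ancestor relative to their own depths. The tiny device ontology below is invented, and the thesis matches full conceptual graphs, not single concept nodes, so this is only the scoring core.

```python
# Illustrative ontology as child -> parent edges rooted at "Thing".
ONTOLOGY = {
    "Smartphone": "MobileDevice",
    "Tablet": "MobileDevice",
    "MobileDevice": "Device",
    "Laptop": "Device",
    "Device": "Thing",
}

def ancestors(concept):
    """Path from a concept up to the ontology root, inclusive."""
    path = [concept]
    while concept in ONTOLOGY:
        concept = ONTOLOGY[concept]
        path.append(concept)
    return path

def depth(concept):
    """Number of edges from the concept up to the root."""
    return len(ancestors(concept)) - 1

def concept_similarity(a, b):
    """Graded (non-Boolean) similarity in [0, 1]: depth of the deepest
    shared ancestor over the deeper of the two concept depths."""
    pa, pb = ancestors(a), ancestors(b)
    shared = [c for c in pa if c in pb]
    deepest = max(depth(a), depth(b))
    if deepest == 0:
        return 1.0 if a == b else 0.0
    return depth(shared[0]) / deepest if shared else 0.0

def best_match(query, candidates):
    """Best matching: pick the candidate with the highest graded score."""
    return max(candidates, key=lambda c: concept_similarity(query, c))
```

Because the score is a ratio rather than a yes/no answer, a context with no exact counterpart can still be ranked against partially matching candidates, which is the point of best matching.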
