411

Abstraction In Reinforcement Learning

Girgin, Sertan 01 March 2007 (has links) (PDF)
Reinforcement learning is the problem faced by an agent that must learn behavior through trial-and-error interactions with a dynamic environment. Generally, the problem to be solved contains subtasks that repeat at different regions of the state space. Without any guidance, an agent has to learn the solutions of all subtask instances independently, which degrades learning performance. In this thesis, we propose two approaches that build connections between different regions of the search space, leading to better utilization of gained experience and accelerated learning. In the first approach, we extend the existing work of McGovern and formalize stochastic conditionally terminating sequences, which have higher representational power. We then describe how to efficiently discover and employ useful abstractions during learning based on such sequences. The method constructs a tree structure to keep track of frequently used action sequences together with visited states; this tree is then used to select the actions to be executed at each step. In the second approach, we propose a novel method to identify states with similar sub-policies and show how they can be integrated into the reinforcement learning framework to improve learning performance. The method uses an efficient data structure to find common action sequences starting from observed states and defines a similarity function between states based on the number of such sequences. Using this similarity function, updates on the action-value function of a state are propagated to all similar states. This, consequently, allows experience acquired during learning to be applied in a broader context. The effectiveness of both approaches is demonstrated empirically through extensive experiments on various domains.
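The second approach's core idea — mirroring a temporal-difference update onto similar states — can be sketched as follows. The similarity function and the constants here are illustrative stand-ins, not the thesis's sequence-based measure:

```python
# Hedged sketch: Q-learning where the TD step applied to one state is
# also applied, scaled by similarity, to states deemed similar.

def similarity(s1, s2):
    # Toy stand-in: grid states on the same row count as "similar".
    return 1.0 if s1[0] == s2[0] and s1 != s2 else 0.0

def update_with_sharing(Q, states, s, a, reward, s_next, actions,
                        alpha=0.5, gamma=0.9, kappa=0.5):
    """Standard Q-learning update on (s, a), followed by a scaled copy
    of the same temporal-difference step on every similar state."""
    td = reward + gamma * max(Q[(s_next, b)] for b in actions) - Q[(s, a)]
    Q[(s, a)] += alpha * td
    for t in states:
        sim = similarity(s, t)
        if sim > 0:
            Q[(t, a)] += alpha * kappa * sim * td
    return Q
```

Here `kappa` plays the role of a trust factor in the similarity measure; the thesis derives similarity from counts of common action sequences instead.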
412

A Comparative Analysis Of The Eu And Turkey: Macroeconomic Convergence And Trade Similarity

Akca, Ayse 01 September 2010 (has links) (PDF)
The aim of this thesis is to evaluate Turkey's readiness to join the Economic and Monetary Union of the European Union (EMU) in terms of similarity and convergence. The study is conducted in a comparative and descriptive way. First, the similarity and convergence of Turkey to selected countries are examined with respect to its macroeconomic position. Taking the EMU as a benchmark and comparing Turkey's convergence with that of other countries and country groups, it is found that Turkey's macroeconomic deficiencies are not severe enough to characterize it as a wholly insufficient candidate for the EMU. Next, the similarity and convergence of the trade structures of Turkey and the fifteen member states of the European Union (EU15) are examined for the period 1995 to 2008. The results indicate that the Turkish export structure is clearly converging to that of the EU15 over time. In general, the findings of the thesis indicate mostly continuous convergence in all of the indicators considered, but Turkey still does not perfectly meet all of the convergence criteria. Therefore, some suggestions are made that would facilitate Turkey's EMU membership.
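The abstract does not name its trade-similarity metric; one standard choice in such studies is the Finger-Kreinin index, which sums the minimum of the two countries' export shares per product category. This sketch is an illustration, not necessarily the thesis's method:

```python
def export_similarity(shares_a, shares_b):
    """Finger-Kreinin similarity: sum of the minimum export shares
    across product categories. 1.0 means identical export structures,
    0.0 means completely disjoint ones."""
    categories = set(shares_a) | set(shares_b)
    return sum(min(shares_a.get(c, 0.0), shares_b.get(c, 0.0))
               for c in categories)
```

Tracking this index year by year gives exactly the kind of convergence curve the abstract describes for Turkey versus the EU15.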
413

Option Pricing With Fractional Brownian Motion

Inkaya, Alper 01 October 2011 (has links) (PDF)
Traditional financial modeling is based on semimartingale processes with stationary and independent increments. However, empirical investigations of financial data do not always support these assumptions, and this contradiction showed the need for new stochastic models. Fractional Brownian motion (fBm), proposed by Benoit Mandelbrot, is one such model. FBm is the only continuous Gaussian process that is self-similar with stationary, dependent increments. The correlation between increments of an fBm changes according to its self-similarity parameter H. This property of fBm helps capture the correlation dynamics of the data and consequently obtain better forecast results. However, for values of H different from 1/2, fBm is not a semimartingale, and the classical Ito formula does not apply in that case. This gives rise to the need for white noise theory to construct integrals with respect to fBm and to obtain fractional Ito formulas. In this thesis, the representation of fBm and its fundamental properties are examined. The construction of Wick-Ito-Skorohod (WIS) and fractional WIS integrals is investigated. An Ito-type formula and Girsanov-type theorems are stated. The financial applications of fBm are discussed, and the Black-Scholes price of a European call option on an asset assumed to follow a geometric fBm is derived. The statistical aspects of fBm are investigated: estimators for the self-similarity parameter H and simulation methods for fBm are summarized. Using Hurst's R/S methodology, estimates of the parameter H are obtained, and these values are used to evaluate the fractional Black-Scholes prices of a European call option with different maturities. These values are then compared to the Black-Scholes price of the same option to demonstrate the effect of long-range dependence on option prices. Estimates of H at different time scales are also obtained to investigate multiscaling in financial data. An outlook on future work is given.
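The R/S estimation of H mentioned above can be sketched as follows; the window sizes and the plain least-squares fit are illustrative choices:

```python
import math

def rescaled_range(x):
    """R/S statistic of one series segment: range of cumulative
    deviations from the mean, divided by the standard deviation."""
    n = len(x)
    mean = sum(x) / n
    dev = [xi - mean for xi in x]
    z, cum = [], 0.0
    for d in dev:
        cum += d
        z.append(cum)
    r = max(z) - min(z)
    s = math.sqrt(sum(d * d for d in dev) / n)
    return r / s if s > 0 else 0.0

def hurst_rs(series, window_sizes):
    """Estimate H as the slope of log(mean R/S) against log(n),
    averaging R/S over non-overlapping windows of each size n."""
    logs_n, logs_rs = [], []
    for n in window_sizes:
        segs = [series[i:i + n] for i in range(0, len(series) - n + 1, n)]
        rs = [rescaled_range(seg) for seg in segs]
        rs = [v for v in rs if v > 0]
        if rs:
            logs_n.append(math.log(n))
            logs_rs.append(math.log(sum(rs) / len(rs)))
    k = len(logs_n)
    mx, my = sum(logs_n) / k, sum(logs_rs) / k
    num = sum((a - mx) * (b - my) for a, b in zip(logs_n, logs_rs))
    den = sum((a - mx) ** 2 for a in logs_n)
    return num / den
```

For independent increments the estimate should sit near H = 1/2 (finite samples bias it slightly upward); persistent series push it above 1/2, which is what signals long-range dependence in the option-pricing comparison.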
414

An Ontology-based Hybrid Recommendation System Using Semantic Similarity Measure And Feature Weighting

Ceylan, Ugur 01 September 2011 (has links) (PDF)
The task of a recommendation system is to recommend items that are relevant to the preferences of users. The two main approaches in recommendation systems are collaborative filtering and content-based filtering. Collaborative filtering systems suffer from major problems such as sparsity, scalability, and the new-item and new-user problems. In this thesis, a hybrid recommendation system based on a content-boosted collaborative filtering approach is proposed in order to overcome the sparsity and new-item problems of collaborative filtering. The content-based part of the proposed approach exploits semantic similarities between items, based on a priori defined ontology-based metadata in the movie domain, together with feature weights derived from content-based user models. Recommendations are generated using the semantic similarities between items and the collaborative user models. The results of the evaluation phase show that the proposed approach improves the quality of recommendations.
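A minimal sketch of the content-boosted idea: semantic item similarity is computed per feature and weighted, then used to blend a user's known ratings. The feature names, weights, and the per-feature Jaccard measure are assumptions for illustration, not the thesis's ontology:

```python
def semantic_similarity(item_a, item_b, weights):
    """Weighted per-feature Jaccard overlap between two items, each
    described as a dict of feature name -> set of ontology terms."""
    total = sum(weights.values())
    score = 0.0
    for feat, w in weights.items():
        a, b = item_a.get(feat, set()), item_b.get(feat, set())
        if a or b:
            score += w * len(a & b) / len(a | b)
    return score / total

def predict_rating(target, rated_items, ratings, weights):
    """Content-boosted prediction: similarity-weighted mean of the
    user's ratings on items similar to the target."""
    sims = [(semantic_similarity(target, it, weights), r)
            for it, r in zip(rated_items, ratings)]
    den = sum(s for s, _ in sims)
    return sum(s * r for s, r in sims) / den if den else 0.0
```

Because predictions lean on item metadata rather than co-ratings, a brand-new movie can be scored immediately, which is how the new-item problem is sidestepped.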
415

Identifying Network Dynamics with Large Access Graph and Case-Based Reasoning

Lin, Yi-Yao 11 July 2002 (has links)
This study adopts a large access graph algorithm and a case-based reasoning approach to generalize user access patterns and diagnose network events, respectively, in order to facilitate network management. The large access graph (LAG) algorithm discovers frequent interconnections among hosts to provide an overview of network access relations. The case-based reasoning (CBR) system diagnoses current network events using past experience. NetFlow log data collected from the router of the dormitory network of National Sun Yat-Sen University is used to demonstrate the two methods. Evaluation results measured by recall, precision, and accuracy show that the two mechanisms are useful in helping the network administrator keep track of network access relations and diagnose network events.
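The LAG step — keeping only host-to-host connections whose flow count reaches a support threshold — can be sketched as follows; the threshold value and (src, dst) flow format are illustrative:

```python
from collections import Counter

def large_access_graph(flows, min_support):
    """Count (src, dst) flow pairs and keep only the edges whose
    count meets the support threshold -- a minimal stand-in for the
    LAG algorithm's frequent-connection discovery."""
    counts = Counter((src, dst) for src, dst in flows)
    return {edge: c for edge, c in counts.items() if c >= min_support}
```

Applied to NetFlow records, the surviving edges form the overview graph of habitual access relations that the administrator inspects.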
416

Data Warehouse Change Management Based on Ontology

Tsai, Cheng-Sheng 12 July 2003 (has links)
In this thesis, we provide a solution to the schema change problem. In a data warehouse system, if schema changes occur in a data source, the overall system loses consistency between the data sources and the data warehouse, and these schema changes render the data warehouse obsolete. We have developed three stages to handle schema changes occurring in databases: change detection, diagnosis, and handling. Recommendations are generated by the DB-agent to inform the DW-agent, which notifies the DBA of what a schema change affects in the star schema and where. In this study, we mainly handle seven kinds of schema changes in a relational database, covering both non-adding and adding schema changes. In our experiments, non-adding schema changes achieve a high correct mapping rate when using the traditional mappings between a data warehouse and a database. An adding schema change, by contrast, involves many uncertainties in diagnosis and handling; for this reason, we compare the similarity between the added relation or attribute and the ontology concepts or concept attributes to generate a good recommendation. The evaluation results show that the proposed approach is capable of detecting these schema changes correctly and recommending the changes to the DBA appropriately.
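For the adding-schema-change case, one way to rank candidate ontology attributes is plain name similarity; this sketch uses Python's difflib as a stand-in for the thesis's ontology-based similarity measure:

```python
from difflib import SequenceMatcher

def recommend_mapping(new_attribute, ontology_attributes):
    """Rank ontology attributes by string similarity to a newly added
    column name; return (score, best_match) for the recommendation."""
    scored = [(SequenceMatcher(None, new_attribute.lower(),
                               attr.lower()).ratio(), attr)
              for attr in ontology_attributes]
    return max(scored)
```

The returned score can be surfaced to the DBA as the confidence of the recommendation, so low-scoring matches prompt a manual mapping instead.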
417

Retrieval by spatial similarity based on interval neighbor group

Huang, Yen-Ren 23 July 2008 (has links)
The objective of the present work is to employ a multiple-instance learning image retrieval system that incorporates a spatial similarity measure. Multiple-instance learning is a way of modeling ambiguity in supervised learning given multiple examples. From a small collection of positive and negative example images, semantically relevant concepts can be derived automatically and employed to retrieve images from an image database. The degree of similarity between two spatial relations is linked to the distance between the associated nodes in an interval neighbor group (ING): the shorter the distance, the higher the degree of similarity. Once all the pairwise similarity values are derived, an ensemble similarity measure integrates these pairwise assessments and gives an overall similarity value between two images. Images in a database can therefore be quantitatively ranked according to their degree of ensemble similarity with the query image. The similarity retrieval method evaluates ensemble similarity based on the spatial relations and common objects present in the maximum common subimage of the query and a database image. Reliable spatial relation features extracted from the image, combined with a multiple-instance learning paradigm to derive relevant concepts, can thus produce retrieval results that better match the user's expectations. To demonstrate the feasibility of the proposed approach, two sets of tests for querying an image database are performed: the proposed RSS-ING scheme vs. the 2D Be-string similarity method, and single-instance vs. multiple-instance learning. The performance in terms of similarity curves, execution time, and memory requirements favors the proposed multiple-instance spatial similarity-based approach.
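The distance-based similarity between spatial relations can be sketched with a small neighbor graph; the relations and edges below are a hypothetical fragment for illustration, not the actual ING of the thesis:

```python
from collections import deque

# Hypothetical neighbor graph over a few interval relations; the
# thesis's interval neighbor group is richer, this only shows the idea
# that similarity falls off with graph distance.
NEIGHBORS = {
    "before":   ["meets"],
    "meets":    ["before", "overlaps"],
    "overlaps": ["meets", "starts"],
    "starts":   ["overlaps", "during"],
    "during":   ["starts"],
}

def relation_similarity(r1, r2):
    """Similarity decays with the shortest-path distance between the
    two relations' nodes: 1 / (1 + distance)."""
    if r1 == r2:
        return 1.0
    seen, queue = {r1}, deque([(r1, 0)])
    while queue:
        node, d = queue.popleft()
        if node == r2:
            return 1.0 / (1.0 + d)
        for nb in NEIGHBORS[node]:
            if nb not in seen:
                seen.add(nb)
                queue.append((nb, d + 1))
    return 0.0
```

An ensemble measure would then aggregate these pairwise values over all object pairs shared by the query and a database image.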
418

A Self-Constructing Fuzzy Feature Clustering for Text Categorization

Liu, Ren-jia 26 August 2009 (has links)
Feature clustering is a powerful method to reduce the dimensionality of feature vectors for text classification. In this work, we propose a fuzzy similarity-based self-constructing algorithm for feature clustering. The words in the feature vector of a document set are grouped into clusters based on a similarity test: words that are similar to each other are grouped into the same cluster. Each cluster is characterized by a membership function with a statistical mean and deviation. When all the words have been fed in, a desired number of clusters is formed automatically. We then have one extracted feature for each cluster; the extracted feature corresponding to a cluster is a weighted combination of the words contained in the cluster. By this algorithm, the derived membership functions match closely with, and properly describe, the real distribution of the training data. Moreover, the user need not specify the number of extracted features in advance, and trial-and-error for determining the appropriate number of extracted features can be avoided. The 20 Newsgroups data set and the Cade 12 web directory are used as our experimental data, and we adopt a support vector machine to classify the documents. Experimental results show that our method runs faster and obtains better extracted features than other methods.
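The self-constructing step — join the cluster with the highest Gaussian membership, or found a new cluster when no membership reaches a threshold — can be sketched in one dimension; the threshold rho and width sigma0 are illustrative parameters:

```python
import math

def self_constructing_cluster(values, rho=0.5, sigma0=0.2):
    """Feed values one by one. A value joins the cluster where its
    Gaussian membership is highest, or founds a new cluster when no
    membership reaches rho. Each cluster is [mean, count]."""
    clusters = []
    for v in values:
        best, best_mu = None, 0.0
        for c in clusters:
            mu = math.exp(-((v - c[0]) ** 2) / (2 * sigma0 ** 2))
            if mu > best_mu:
                best, best_mu = c, mu
        if best is None or best_mu < rho:
            clusters.append([v, 1])
        else:
            best[1] += 1
            best[0] += (v - best[0]) / best[1]  # incremental mean update
    return clusters
```

In the thesis the inputs are word distribution vectors rather than scalars, and each cluster also maintains a deviation, but the one-pass, threshold-driven growth is the same: the number of clusters is never specified in advance.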
419

A Query Dependent Ranking Approach for Information Retrieval

Lee, Lian-Wang 28 August 2009 (has links)
Ranking model construction is an important topic in information retrieval. Recently, many approaches based on the idea of "learning to rank" have been proposed for this task, and most of them attempt to score all documents of different queries with a single function. In this thesis, we propose a novel framework for query-dependent ranking. A simple similarity measure is used to calculate similarities between queries, and an individual ranking model is constructed for each training query with its corresponding documents. When a new query is issued, the documents retrieved for it are ranked according to scores determined by a ranking model combined from the models of similar training queries. A mechanism for determining the combining weights is also provided. Experimental results show that this query-dependent ranking approach is more effective than other approaches.
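The combination step can be sketched as a similarity-weighted blend of per-query models; the term-overlap similarity and the linear blend are illustrative assumptions, not the thesis's actual weighting mechanism:

```python
def query_similarity(q1, q2):
    """Toy term-overlap (Jaccard) similarity between two queries."""
    a, b = set(q1.split()), set(q2.split())
    return len(a & b) / len(a | b) if a | b else 0.0

def combined_score(new_query, doc, models, training_queries):
    """Score a document with a similarity-weighted blend of the
    per-training-query models; each model maps doc -> score."""
    weights = [query_similarity(new_query, q) for q in training_queries]
    total = sum(weights)
    if total == 0:
        return 0.0
    return sum(w * m(doc) for w, m in zip(weights, models)) / total
```

Models trained on queries unrelated to the new one receive zero weight, so each new query is effectively scored by its own bespoke ranking function.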
420

A Similarity-based Data Reduction Approach

Ouyang, Jeng 07 September 2009 (has links)
Finding an efficient data reduction method for large-scale problems is an imperative task. In this work, we propose a similarity-based self-constructing fuzzy clustering algorithm to sample instances for the classification task. Instances that are similar to each other are grouped into the same cluster. When all the instances have been fed in, a number of clusters are formed automatically. The statistical mean of each cluster is then regarded as representing all the instances covered by the cluster. This approach has two advantages: it is faster and uses less memory, and the number of representative instances need not be specified in advance by the user. Experiments on real-world datasets show that our method runs faster and obtains a better reduction rate than other methods.
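Replacing each cluster by its statistical mean can be sketched as follows; the cluster assignment is passed in as a stand-in for the fuzzy clustering step described above:

```python
def reduce_by_clustering(instances, assign):
    """Replace each cluster of instances by its mean vector. `assign`
    maps an instance index to a cluster id (a stand-in for the fuzzy
    clustering step). Returns (representatives, reduction_rate)."""
    groups = {}
    for i, x in enumerate(instances):
        groups.setdefault(assign(i), []).append(x)
    reps = [tuple(sum(col) / len(g) for col in zip(*g))
            for g in groups.values()]
    rate = 1 - len(reps) / len(instances)
    return reps, rate
```

A classifier trained on the representatives instead of the full dataset is what gives the speed and memory savings the abstract claims, at the cost of some within-cluster detail.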
