41

A New Measure of Classifiability and its Applications

Dong, Ming 08 November 2001 (has links)
No description available.
42

Strategy Synthesis for Multi-Agent Games of Imperfect Information

Lycken, Jakob, Westerlund, Simon January 2020 (has links)
It is a notoriously difficult task to find winning strategies for multi-agent games, especially if one or more agents lack the information required to determine which state the game is in. When this type of uncertainty arises in a game, it is referred to as a multi-agent game of imperfect information. In this project we designed and built a tool for strategy synthesis of multi-agent games against nature. The strategy synthesis was knowledge-based: a multi-agent extension of the Knowledge-Based Subset Construction, built by a previous project group, was applied to the input games. This construction creates a new knowledge-based game with reduced uncertainty compared to the initial multi-agent game of imperfect information. We constructed the tool using a forward-search heuristic, which means that it locates all existing winning strategies. We studied the performance of the tool by comparing it to a baseline approach relying solely on randomisation, on five different games. Our tool found every relevant strategy for each game at least 35% faster than the baseline took to find the same number of unique winning strategies. (If a strategy can win without transitioning through a state, then that state is not relevant and is not part of the strategy.) These comparisons show that the tool works well. / Bachelor's thesis in electrical engineering 2020, KTH, Stockholm
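The knowledge-based subset construction can be sketched for the single-agent case. The following is a hedged toy model, not the authors' tool: the agent sees only an observation label, so it tracks a "knowledge set" of possible states; forward exploration builds the knowledge game, and a backward fixpoint marks the knowledge sets from which some action surely reaches the goal. The game encoding and all names are assumptions for illustration.

```python
def knowledge_successors(knowledge, action, transitions, observation):
    """Successor knowledge sets, one per observation the agent may receive."""
    reachable = set()
    for s in knowledge:
        reachable |= transitions.get((s, action), set())
    groups = {}
    for s in reachable:
        groups.setdefault(observation[s], set()).add(s)
    return {frozenset(states) for states in groups.values()}

def winning_strategy(initial, actions, transitions, observation, goal):
    """Map each winning knowledge set to an action; None if `initial` loses."""
    init_k = frozenset(initial)
    nodes, frontier, edges = {init_k}, [init_k], {}
    while frontier:                       # forward exploration of knowledge sets
        k = frontier.pop()
        for a in actions:
            edges[(k, a)] = knowledge_successors(k, a, transitions, observation)
            for k2 in edges[(k, a)] - nodes:
                nodes.add(k2)
                frontier.append(k2)
    winning = {k for k in nodes if k <= goal}
    strategy, changed = {}, True
    while changed:                        # backward fixpoint over the knowledge game
        changed = False
        for k in nodes - winning:
            for a in actions:
                succs = edges[(k, a)]
                if succs and succs <= winning:
                    winning.add(k)
                    strategy[k] = a
                    changed = True
                    break
    return strategy if init_k in winning else None
```

Here a strategy must win against every state the agent considers possible, which is exactly the uncertainty reduction the construction provides.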
43

Seleção de atributos em agrupamento de dados utilizando algoritmos evolutivos / Feature subset selection in data clustering using evolutionary algorithm

Martarelli, Nádia Junqueira 03 August 2016 (has links)
With the advent of information technology, the analysis and interpretation of data is no longer carried out exclusively by humans: knowledge discovery in large databases now relies on computational support. This support requires organizing the formerly manual activities into a process of three major stages. The first stage includes a dimensionality-reduction task, which aims to eliminate attributes that do not contribute to the data analysis, resulting in the selection of a subset of the original attributes. Selecting a subset of attributes can be viewed as a search problem, since there are numerous ways to combine the original attributes into subsets. One search strategy that can be adopted is randomized search, performed by a genetic algorithm or one of its variants. This work proposes applying two variations of the genetic algorithm, the Constructive Genetic Algorithm and the Biased Random-Key Genetic Algorithm, to the feature selection problem in data clustering, since neither variation had yet been applied to this problem. To assess their performance, both were compared with the traditional genetic algorithm, and with each other, using three datasets taken from the UCI machine learning repository. The results showed that, in terms of solution quality, the Constructive and Biased Random-Key genetic algorithms generally performed better than the traditional approach; a significant difference in efficiency between the two variations and the traditional approach was also observed.
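The core BRKGA loop for feature selection can be sketched as follows. This is a hedged illustration: the fitness below is a toy stand-in (in the thesis a subset would be scored by a clustering criterion on real data), and `INFORMATIVE` and all parameter values are assumptions.

```python
import random

INFORMATIVE = {0, 2, 5}                   # hypothetical "useful" features
N_FEAT, POP, ELITE, MUTANTS, RHO, GENS = 8, 30, 6, 4, 0.7, 60

def decode(keys):                         # key > 0.5 means "feature selected"
    return {i for i, k in enumerate(keys) if k > 0.5}

def fitness(keys):                        # reward useful picks, penalise noisy ones
    sel = decode(keys)
    return len(sel & INFORMATIVE) - 0.5 * len(sel - INFORMATIVE)

def crossover(elite, other, rng):         # biased: inherit each key from elite w.p. RHO
    return [e if rng.random() < RHO else o for e, o in zip(elite, other)]

def brkga(rng=random.Random(1)):
    new = lambda: [rng.random() for _ in range(N_FEAT)]
    pop = [new() for _ in range(POP)]
    for _ in range(GENS):
        pop.sort(key=fitness, reverse=True)
        children = [crossover(rng.choice(pop[:ELITE]), rng.choice(pop[ELITE:]), rng)
                    for _ in range(POP - ELITE - MUTANTS)]
        # next generation: elite copied unchanged, biased children, fresh mutants
        pop = pop[:ELITE] + children + [new() for _ in range(MUTANTS)]
    return decode(max(pop, key=fitness))
```

The random-key encoding is what distinguishes BRKGA from a plain binary-mask GA: any vector of floats decodes to a valid subset, so crossover never produces an infeasible chromosome.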
44

Erdős-Szekeres type theorems

Eliáš, Marek January 2012 (has links)
Let P = (p_1, p_2, ..., p_N) be a sequence of points in the plane, where p_i = (x_i, y_i) and x_1 < x_2 < ... < x_N. A famous 1935 theorem of Erdős and Szekeres asserts that every such P contains a monotone subsequence S of √N points. Another, equally famous theorem from the same paper implies that every such P contains a convex or concave subsequence of Ω(log N) points. First we define a (k+1)-tuple K ⊆ P to be positive if it lies on the graph of a function whose k-th derivative is everywhere nonnegative, and similarly for a negative (k+1)-tuple. Then we say that S ⊆ P is k-th-order monotone if its (k+1)-tuples are all positive or all negative. In this thesis we investigate quantitative bounds for the corresponding Ramsey-type result. We obtain an Ω(log_(k−1) N) lower bound (the (k−1)-times iterated logarithm). We also improve bounds for related problems: order types and one-sided sets of hyperplanes.
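The √N guarantee for ordinary (first-order) monotone subsequences is easy to check computationally. A short sketch: since the x-coordinates are increasing, a monotone subsequence of P corresponds to a monotone subsequence of the y-values, and every sequence of N distinct values contains an increasing or decreasing subsequence of ⌈√N⌉ terms. LIS is computed by the standard O(n log n) patience method.

```python
import math
import random
from bisect import bisect_left

def lis_length(seq):
    """Length of the longest strictly increasing subsequence."""
    tails = []                            # tails[i] = least possible tail of an
    for x in seq:                         # increasing subsequence of length i + 1
        i = bisect_left(tails, x)
        if i == len(tails):
            tails.append(x)
        else:
            tails[i] = x
    return len(tails)

def longest_monotone(seq):
    """Longest increasing-or-decreasing subsequence length."""
    return max(lis_length(seq), lis_length([-x for x in seq]))

# The Erdős-Szekeres bound holds for any permutation, e.g.:
perm = random.Random(0).sample(range(100), 100)
assert longest_monotone(perm) >= math.ceil(math.sqrt(len(perm)))
```

The bound is tight: a sequence built from √N decreasing blocks of √N increasing values attains exactly √N.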
46

Dynamic Data Citation Service-Subset Tool for Operational Data Management

Schubert, Chris, Seyerl, Georg, Sack, Katharina January 2019 (has links) (PDF)
In earth observation and the climatological sciences, data and their data services grow daily over a large spatial extent, owing to the high coverage rate of satellite sensors and model calculations, but also to continuous meteorological in situ observations. In order to reuse such data, especially data fragments and their data services, in a collaborative and reproducible manner by citing the original source, data analysts, e.g., researchers or impact modelers, need a way to identify the exact version, precise time information, parameters, and names of the dataset used. A manual process would make the citation of data fragments, as a subset of an entire dataset, complex and imprecise. Data in climate research are in most cases multidimensional, structured grid data that can change partially over time. Citing such evolving content requires the approach of "dynamic data citation". The applied approach is based on associating queries with persistent identifiers. These queries contain the subsetting parameters, e.g., the spatial coordinates of the desired study area or the time frame with a start and end date, which are automatically included in the metadata of the newly generated subset and thus represent the information about the data history, the data provenance, which has to be established in data repository ecosystems. The Research Data Alliance Data Citation Working Group (RDA Data Citation WG) summarized the scientific status quo and the state of the art of existing citation and data management concepts, and developed a scalable dynamic-data-citation methodology for evolving data. The Data Centre at the Climate Change Centre Austria (CCCA) has implemented these recommendations and has offered an operational dynamic-data-citation service for climate scenario data since 2017.
Aware that this topic depends heavily on bibliographic citation research that is still under discussion, the CCCA Dynamic Data Citation service focused on climate-domain-specific issues, such as the characteristics of the data, formats, software environments, and usage behavior. Beyond disseminating the experience gained, the current effort concerns the scalability of the implementation, e.g., towards a potential Open Data Cube solution.
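The query-plus-PID pattern described above can be sketched minimally: instead of copying a subset, persist the normalized subsetting query, a timestamp, and a hash of its result under a persistent identifier, and resolve the PID by re-executing the query against the versioned data. The store, the row schema, and the "hdl:" prefix below are assumptions for illustration, not the CCCA implementation.

```python
import hashlib
import json
from datetime import datetime, timezone

class CitationStore:
    def __init__(self, rows):
        self.rows = rows                  # each row: {"lat": ..., "t": ...}
        self.registry = {}                # PID -> citation record

    def execute(self, query):
        """Re-runnable subset: spatial range plus an upper time bound."""
        lo, hi = query["lat_range"]
        hits = [r for r in self.rows
                if lo <= r["lat"] <= hi and r["t"] <= query["t_max"]]
        return sorted(hits, key=lambda r: (r["t"], r["lat"]))

    def _digest(self, rows):
        return hashlib.sha256(
            json.dumps(rows, sort_keys=True).encode()).hexdigest()

    def cite(self, lat_range, t_max):
        query = {"lat_range": lat_range, "t_max": t_max}
        digest = self._digest(self.execute(query))
        pid = "hdl:20.500/" + digest[:10]     # hypothetical PID scheme
        self.registry[pid] = {"query": query,
                              "created": datetime.now(timezone.utc).isoformat(),
                              "result_hash": digest}
        return pid

    def resolve(self, pid):
        """Re-execute the stored query; flag whether the result still matches."""
        rec = self.registry[pid]
        rows = self.execute(rec["query"])
        return rows, self._digest(rows) == rec["result_hash"]
```

Because the query carries its own time bound, rows appended after citation do not change the resolved subset, which is the reproducibility property the RDA recommendation targets.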
47

Selection and ranking procedures based on likelihood ratios

Chotai, Jayanti January 1979 (has links)
This thesis deals with random-size subset selection and ranking procedures derived through likelihood ratios, mainly in terms of the P*-approach. Let Π_1, ..., Π_k be k (≥ 2) populations such that Π_i (i = 1, ..., k) has the normal distribution with unknown mean θ_i and variance a_i σ², where a_i is known and σ² may be unknown, and a random sample of size n_i is taken from Π_i. To begin with, we give a procedure (with tables) which selects Π_i if sup_(Ω_i) L(θ; x) > c sup_Ω L(θ; x), where Ω is the parameter space for θ = (θ_1, ..., θ_k); where Ω_i (with Ω_i ⊆ Ω) is the set of all θ with θ_i = max_j θ_j; where L(·; x) is the likelihood function based on the total sample; and where c is the largest constant that makes the rule satisfy the P*-condition. Then we consider other likelihood ratios, with intuitively reasonable subspaces of Ω, and derive several new rules. Comparisons among some of these rules and rule R of Gupta (1956, 1965) are made using different criteria: numerical for k = 3, and a Monte Carlo study for k = 10. For the case when the populations have the uniform (0, θ_i) distributions and we have unequal sample sizes, we consider selection for the population with min_(1≤j≤k) θ_j. Comparisons with Barr and Rizvi (1966) are made, and generalizations are given. The rule is generalized to densities satisfying some reasonable assumptions (mainly unimodality of the likelihood, and monotonicity of the likelihood ratio). An exponential class is considered, and the results are exemplified by the gamma density and the Laplace density. Extensions and generalizations to cover the selection of the t best populations (using various requirements) are given. Finally, a discussion of the complete ranking problem, and of the relation between subset selection based on likelihood ratios and statistical inference under order restrictions, is given. / digitalisering@umu
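Gupta's rule R, the reference procedure mentioned above, admits a short sketch. This is a hedged illustration under assumed conventions: with k normal populations of common known variance σ² and equal sample sizes n, retain population i whenever x̄_i ≥ max_j x̄_j − d·σ/√n, where d is calibrated (here by Monte Carlo, rather than the exact integral) so that in the least favourable equal-means configuration the best population is retained with probability at least P*.

```python
import random

def calibrate_d(k, p_star, reps=20000, rng=random.Random(0)):
    """Monte Carlo calibration of d: under equal means the standardized
    sample means z_i are i.i.d. N(0, 1), and population 1 is retained
    iff max_j z_j - z_1 <= d; take the p_star quantile of that gap."""
    gaps = sorted(max(zs) - zs[0]
                  for zs in ([rng.gauss(0, 1) for _ in range(k)]
                             for _ in range(reps)))
    return gaps[int(p_star * reps)]

def gupta_select(xbars, sigma, n, d):
    """Random-size selected subset: indices within d*sigma/sqrt(n) of the max."""
    cutoff = max(xbars) - d * sigma / n ** 0.5
    return [i for i, xb in enumerate(xbars) if xb >= cutoff]
```

Note that the selected subset has random size, the feature the thesis's P*-approach shares: a clear leader yields a singleton, while close means yield a large subset.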
48

Adaptive Algorithms for Weighted Queries on Weighted Binary Relations and Labeled Trees

Veraskouski, Aleh 23 July 2007 (has links)
Keyword queries are extremely easy for a user to write. They have become a standard way to query for information in web search engines and most other information retrieval systems, whose users are usually laypersons and may not know the database schema or the data it contains. As keyword queries do not impose any structural constraints on the retrieved information, the quality of the obtained results is far from perfect, and one can hardly improve it without changing how the queries are asked and how the information is stored in the database. The purpose of this thesis is to propose a method to improve the quality of information retrieval by adding weights to the existing ways of asking keyword queries and storing information in the database. We consider weighted queries on two different data structures: weighted binary relations and weighted multi-labeled trees. We propose adaptive algorithms to solve these queries and prove complexity measures for these algorithms in terms of high-level operations. We describe how these algorithms can be implemented and derive upper bounds on their complexity in two specific models of computation: the comparison model and the word-RAM model.
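A classical adaptive algorithm in the comparison model, offered here as a hedged, unweighted analogue of such query processing (not the thesis's weighted algorithms), is intersecting sorted keyword posting lists with galloping (doubling) search: the cost adapts to how interleaved the lists are rather than to their total length.

```python
from bisect import bisect_left

def gallop(arr, lo, target):
    """First index >= lo with arr[index] >= target (len(arr) if none),
    found by doubling the step, then binary search in the bracketed range."""
    hi = lo + 1
    while hi < len(arr) and arr[hi] < target:
        lo, hi = hi, hi + (hi - lo) * 2
    return bisect_left(arr, target, lo, min(hi + 1, len(arr)))

def adaptive_intersect(lists):
    """Intersect sorted, duplicate-free lists; the smallest list drives,
    and each longer list keeps a cursor that only moves forward."""
    lists = sorted(lists, key=len)
    cursors = [0] * len(lists)
    result = []
    for x in lists[0]:
        found = True
        for j in range(1, len(lists)):
            i = gallop(lists[j], cursors[j], x)
            cursors[j] = i
            if i == len(lists[j]) or lists[j][i] != x:
                found = False
                break
        if found:
            result.append(x)
    return result
```

On nearly disjoint lists each gallop jumps far in O(log) comparisons, so the total work can be far below the sum of the list lengths, which is the sense in which the algorithm is adaptive.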
50

Algorithms for irreducible infeasible subset detection in CSP - Application to frequency planning and graph k-coloring

Hu, Jun 27 November 2012 (has links) (PDF)
The frequency assignment problem (FAP) consists in assigning frequencies to the radio links of a network so as to satisfy the electromagnetic-interference constraints among the links. Given the limited spectrum resources for each application, the frequency resources are often insufficient to deploy a wireless network without interference. In this case, the network is over-constrained and the problem is infeasible. Our objective is to identify an area with heavy interference. The work presented here concerns the detection of such an area with an algorithmic approach based on modeling the problem as a CSP. The frequency assignment problem can be modeled as a constraint satisfaction problem (CSP), represented by a triple: a set of variables (radio links), a set of constraints (electromagnetic interference), and a set of available frequencies. The interfered area in a CSP can be considered an irreducible infeasible subset (IIS). An IIS is an infeasible subproblem of irreducible size, that is to say, all proper subsets of an IIS are feasible. Identifying an IIS in a CSP serves two general interests. First, locating an IIS can easily prove the infeasibility of the problem: because the size of the IIS is assumed to be small compared to the entire problem, its infeasibility is relatively easier to prove. Second, we can locate the reason for infeasibility; in this case, the decision maker can relax the constraints inside the IIS, which may lead to a feasible solution to the problem. This work proposes algorithms to identify an IIS in an over-constrained CSP. These algorithms have been tested on well-known benchmarks of the FAP and of the graph k-coloring problem. The results show a significant improvement on FAP instances compared to known methods.
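The classical "deletion filter" for extracting an IIS can be sketched on the graph k-coloring CSP mentioned above. This is a hedged baseline, simpler than the thesis's algorithms: constraints are the edges of a graph that must be k-colored, feasibility is checked by brute force (fine for toy instances), and each constraint whose removal keeps the problem infeasible is dropped.

```python
from itertools import product

def colorable(vertices, edges, k):
    """Brute-force test: does some k-coloring satisfy every edge constraint?"""
    return any(all(col[u] != col[v] for u, v in edges)
               for col in (dict(zip(vertices, assignment))
                           for assignment in product(range(k), repeat=len(vertices))))

def deletion_filter(vertices, edges, k):
    """Reduce an infeasible edge set to an irreducible infeasible subset."""
    assert not colorable(vertices, edges, k), "input must be infeasible"
    core = list(edges)
    for e in list(core):
        trial = [c for c in core if c != e]
        if not colorable(vertices, trial, k):
            core = trial                  # still infeasible without e: drop it
    return core
```

For example, a K4 with a pendant edge is not 3-colorable, and the filter isolates the six K4 edges as the IIS; irreducibility means removing any one of them restores feasibility, which is exactly the relaxation hint offered to the decision maker.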
