  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
121

Técnicas heurísticas de escalonamento paralelo em workflow / Heuristic scheduling techniques for parallel workflow

Tampelini, Leonardo Garcia, 1983- 20 August 2018 (has links)
Advisor: Jacques Wainer / Master's dissertation - Universidade Estadual de Campinas, Instituto de Computação / Previous issue date: 2012 / Abstract: With the dissemination of business management technologies, companies seek to provide faster, higher-quality services. In this context, areas such as workflow management have contributed to a better organization of task distribution. Bringing the scheduling and workflow areas together shows great potential to meet these requirements, but the scheduling literature shows a noticeable lack of studies on the treatment of parallel routing structures, which are commonly found in workflow models. This work aims to bring these two areas closer together by presenting three new scheduling approaches for ordering cases within parallel routing structures (AND). To reach this objective, a set of simulators was implemented representing the dynamic workflow environment, its uncertainties, and the different scenarios in which AND structures may occur. The performance of these policies was compared with rules widely used in workflow systems, such as FIFO (First In First Out), EDD (Earliest Due Date) and SPT (Shortest Processing Time). The results were analyzed by means of analysis of variance (ANOVA) together with Tukey's test. They show that it is more advantageous to use techniques specific to the AND routing structure than to apply only the most commonly used rules / Master's / Computer Science / Master in Computer Science
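The three baseline dispatching rules this dissertation compares can be sketched as simple sort keys over the queued cases. This is an illustrative sketch, not the dissertation's simulator code; the case fields are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Case:
    name: str
    arrival: float    # time the case entered the queue
    due_date: float   # deadline for the case
    proc_time: float  # estimated processing time

# The three baseline dispatching rules: FIFO orders by arrival time,
# EDD by earliest due date, SPT by shortest processing time.
RULES = {
    "FIFO": lambda c: c.arrival,
    "EDD":  lambda c: c.due_date,
    "SPT":  lambda c: c.proc_time,
}

def schedule(queue, rule):
    """Return the queued cases in the order the given rule would serve them."""
    return sorted(queue, key=RULES[rule])

queue = [
    Case("A", arrival=0, due_date=30, proc_time=9),
    Case("B", arrival=1, due_date=12, proc_time=4),
    Case("C", arrival=2, due_date=20, proc_time=2),
]

for rule in RULES:
    print(rule, [c.name for c in schedule(queue, rule)])
# FIFO serves A, B, C; EDD serves B, C, A; SPT serves C, B, A
```

The same three cases are ordered differently by each rule, which is exactly the behavior the dissertation's simulators measure against the AND-specific techniques.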
122

An automatic test generation method for chip-level circuit descriptions

Barclay, Daniel Scott January 1987 (has links)
An automatic method generates tests for circuits described in a hardware description language (HDL). The input description is in a non-procedural subset of VHDL, with a simplified period-oriented timing model. The fault model, based on previous research, includes micro-operation and control statement faults. The test method uses path tracing, working directly from the circuit description rather than a derived graph or table. Artificial intelligence problem-solving techniques of goals and goal solving are used to represent and manipulate sensitization, justification, and propagation requirements. Backtracking is used to recover from incorrect choices. The method is implemented in Prolog, an artificial intelligence language. Results of this experimental Prolog implementation are summarized and analyzed for strengths and weaknesses of the test method. Suggestions are included to counter the weaknesses. A user's manual is included for the experimental implementation. / M.S.
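The goal-solving-with-backtracking machinery can be sketched, very loosely, as a depth-first search over goal decompositions. The rule base and goal names below are invented for illustration and are not the thesis's actual fault model:

```python
# Hypothetical goal decompositions (invented for illustration; not the
# thesis's actual rule set). Each goal maps to a list of alternative
# decompositions, each of which is a list of subgoals.
RULES = {
    "test_stmt": [["sensitize", "propagate"]],
    "sensitize": [["drive_a_high"], ["drive_b_high"]],  # two alternatives
    "propagate": [["drive_b_high"]],
}
# Primitive goals achievable directly at the circuit inputs; "drive_a_high"
# is deliberately left unachievable so the search must backtrack.
PRIMITIVES = {"drive_b_high"}

def solve(goals):
    """Return a list of primitive actions solving all goals, or None.
    Tries each alternative decomposition in order and backtracks
    (chronologically) when a branch cannot be completed."""
    if not goals:
        return []
    goal, rest = goals[0], goals[1:]
    if goal in PRIMITIVES:
        tail = solve(rest)
        return None if tail is None else [goal] + tail
    for alternative in RULES.get(goal, []):
        plan = solve(alternative + rest)
        if plan is not None:
            return plan
    return None

print(solve(["test_stmt"]))  # backtracks off drive_a_high, then succeeds
```

The search first tries to sensitize via `drive_a_high`, fails, backtracks, and completes the plan through `drive_b_high`, mirroring how incorrect sensitization or justification choices are recovered from.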
123

Development and Testing of a Heuristic Line Balancing Program for a Microcomputer

Creech, Dean B. 01 January 1986 (has links) (PDF)
Development, operation, and testing of a heuristic line balancing program for a microcomputer are discussed. Tasks are grouped into work stations along an assembly line such that the number of work stations required is minimized. The model is built primarily using the Hoffmann (1963) procedure with modifications described by Gehrlein and Patterson (1975). For purposes of comparison, the Rank Positional Weight technique (Helgeson and Birnie, 1961) is also included in the model. Testing included thirty-seven different balances using problems from the literature. For each balance, both Rank Positional Weight and Hoffmann solutions were obtained in the forward and reverse directions. Four measures of performance were considered in this study: (1) the average percentage a balance is above the optimum solutions, in terms of number of stations; (2) time to obtain a balance; (3) the best solution in terms of the lowest number of stations and lowest standard deviation of the slack times; and (4) the largest value of minimum station slack time. Overall it was found that the Hoffmann procedure with a delay factor of 1.5 was best suited for the microcomputer application. Further work is recommended to find the optimum delay factor and to apply the Modified Hoffmann procedure to line balancing problems where the cycle time is minimized given a set of work stations.
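As a rough sketch of the Rank Positional Weight idea (Helgeson and Birnie): a task's weight is its own time plus the times of every task that must follow it, and tasks are assigned to stations in descending weight order, respecting precedence and the cycle time. The task data below is invented, and the sketch assumes every individual task time fits within the cycle time:

```python
def all_successors(task, succs):
    """Collect the transitive successors of a task in the precedence graph."""
    out, stack = set(), list(succs.get(task, []))
    while stack:
        s = stack.pop()
        if s not in out:
            out.add(s)
            stack.extend(succs.get(s, []))
    return out

def rpw_balance(times, succs, cycle_time):
    """Rank Positional Weight heuristic (sketch): open stations one at a
    time and greedily add the heaviest available task that fits, where a
    task is available once all its predecessors have been assigned."""
    weights = {t: times[t] + sum(times[s] for s in all_successors(t, succs))
               for t in times}
    preds = {t: set() for t in times}
    for t, followers in succs.items():
        for s in followers:
            preds[s].add(t)
    stations, assigned = [], set()
    while len(assigned) < len(times):
        load, station = 0.0, []
        while True:
            fits = [t for t in times
                    if t not in assigned and preds[t] <= assigned
                    and load + times[t] <= cycle_time]
            if not fits:
                break
            best = max(fits, key=lambda t: weights[t])
            station.append(best)
            assigned.add(best)
            load += times[best]
        stations.append(station)
    return stations

# Four tasks, precedence a->c, b->c, c->d, cycle time 7:
print(rpw_balance({"a": 4, "b": 3, "c": 2, "d": 5},
                  {"a": ["c"], "b": ["c"], "c": ["d"]}, 7))
# -> [['a', 'b'], ['c', 'd']]
```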
124

Heurísticas do design em jogos digitais: o caso League of Legends / Design heuristics in digital games: the case of League of Legends

Vieira, Guilherme Sousa 11 October 2018 (has links)
Previous issue date: 2018-10-11 / Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - CAPES / Abstract: This master's thesis investigates the game League of Legends, exploring its design philosophies in order to identify and discuss heuristics in game design, using as its basis the heuristic evaluation of the rework of one of its characters, the champion Xerath. We begin by dissecting the game LoL: how it came about, what it was like in its initial form, and the working philosophy of its developer studio; we then move on to a description of the game in its technical and ludic aspects. Next we delve into the discussion of heuristics, identity and aesthetics in digital games, building on the authors researched, especially the texts of Brenda Laurel, Sherry Turkle and James Paul Gee, to establish a basis for the heuristic evaluation of the character. We then turn to the contextualization of the character to be evaluated, the champion Xerath: his birth, his involvement with the author, and his status before and after the rework. In its final part, the research revisits the points discussed and applies the heuristic evaluation to the changes made to the character; this is followed by the extrapolation of the work and the final considerations
125

Integration of qualitative and quantitative data for decision aiding in production planning

Herrera, Luis Enrique 06 November 2007 (has links)
In this dissertation we have addressed the problem of modeling expertise in domains characterized by unquantifiable, often subjective, information, and using that model of expertise as the foundation for building computer-based decision support systems. The key feature of the expert model is to make explicit the essential characteristics of the knowledge experts use to process objective, quantitative information, for making decisions in environments rich in qualitative data. This model is then used as the basis for an "intelligent" interactive assistant that presents information appropriate for the context to operators who may not have developed the necessary expertise. The core of the assistant is a heuristic algorithm that reflects what an expert decision maker would actually do. The algorithm incorporates a set of production rules, i.e., if-then-else rules, to define relevance conditions of quantitative data. These rules employ a dominance principle, i.e., a heuristic association of the relevance of quantitative data with the attributes of qualitative data, characterized as a set of ordered values. The heuristic algorithm is embedded in the assistant and is used to assist non-expert operators in locating information useful for making decisions. The modeling methodology and the heuristic algorithm are applicable for modeling expertise in a class of decision problems characterized by large amounts of qualitative and quantitative data. The process of structuring the expert's knowledge requires empirical evidence from actual decision problems; this evidence feeds the algorithm with heuristic associations between qualitative and quantitative data. The algorithm uses the dominance principle to decide what information to present for a particular set of conditions.
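The dominance principle described above, production rules that tie the relevance of quantitative data to ordered qualitative attribute values, can be sketched as follows. The urgency scale, measure names, and thresholds are all hypothetical, not taken from the dissertation:

```python
# Ordered qualitative scale (illustrative): higher index dominates lower.
URGENCY = ["routine", "elevated", "critical"]

# Hypothetical production rules: the minimum urgency level at which each
# quantitative measure becomes relevant to the decision maker.
RELEVANCE_RULES = {
    "scrap_rate":       "routine",
    "queue_length":     "elevated",
    "machine_downtime": "elevated",
    "order_backlog":    "critical",
}

def relevant_measures(urgency, data):
    """Dominance principle (sketch): a qualitative level makes relevant every
    measure whose threshold it equals or dominates on the ordered scale."""
    level = URGENCY.index(urgency)
    return {k: v for k, v in data.items()
            if URGENCY.index(RELEVANCE_RULES[k]) <= level}

data = {"scrap_rate": 0.02, "queue_length": 14,
        "machine_downtime": 3.5, "order_backlog": 220}
print(relevant_measures("elevated", data))
# scrap_rate, queue_length and machine_downtime pass; order_backlog does not
```

The assistant would present only the filtered measures, so a non-expert operator sees the quantitative data an expert would consider relevant at that qualitative level.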
126

The role of planning in two artificial intelligence architectures

Glossenger, John Kenneth. January 1991 (has links)
Thesis (M.S.)--Kutztown University of Pennsylvania, 1991. / Source: Masters Abstracts International, Volume: 45-06, page: 3181. Typescript. Includes bibliographical references (leaves 75-76).
127

Problemas de empacotamento com itens irregulares : heurísticas e avaliação de construtores de NFP / Irregular packing problems : heuristics and evaluation of NFP constructors

Silveira, Tiago, 1987- 23 August 2018 (has links)
Advisor: Eduardo Candido Xavier / Master's dissertation - Universidade Estadual de Campinas, Instituto de Computação / Previous issue date: 2013 / Abstract: The complete abstract is available with the full electronic document / Master's / Computer Science / Master in Computer Science
128

Stochastic approach to Brokering heuristics for computational grids / Approche stochastique d'heuristiques de méta-ordonnancement dans les grilles de calcul

Berten, Vandy 08 June 2007 (has links)
Computational Grids are large infrastructures composed of several components such as clusters or massively parallel machines, generally spread across a country or the world, linked together through a network such as the Internet, and allowing transparent access to any resource. Grids have become unavoidable for a large part of the scientific community requiring computational power, such as high-energy physics, bioinformatics or earth observation. Large projects are emerging, often at an international level, but even if Grids are on the way to becoming efficient and user-friendly systems, computer scientists and engineers still have a huge amount of work to do to improve their efficiency. Among the many problems to solve or improve upon, scheduling the work and balancing the load is of first importance.

This work concentrates on the way work is dispatched on such systems, and mainly on how the first level of scheduling, generally named brokering or meta-scheduling, is performed. We deeply analyze the behavior of popular strategies, compare their efficiency, and propose a new, very efficient brokering policy providing notable performance, attested by the large number of simulations we performed and provide in the document.

The work is split into two main parts. After introducing the mathematical framework on which the remainder of the manuscript is based, we study systems where the grid brokering is done without any feedback information, i.e. without knowing the current state of the clusters when the resource broker (the grid component receiving jobs from clients and performing the brokering) makes its decision. We show how a computational grid behaves if the brokering is done in such a way that each cluster receives a quantity of work proportional to its computational capacity.

The second part of this work is rather independent from the first, and consists in the presentation of a brokering strategy, based on Whittle's indices, that tries to minimize the average sojourn time of jobs. We show how efficient the proposed strategy is for computational grids compared to the strategies popular in production systems. We also show its robustness to several parameter changes, and provide several very efficient algorithms for the computations required by this index policy. We finally extend our model in several directions. / Doctorate in Sciences, Computer Science specialization
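The feedback-free, capacity-proportional brokering studied in the first part can be sketched as a weighted random dispatcher: each job is routed to a cluster with probability proportional to that cluster's capacity, with no knowledge of current load. The cluster names and capacities below are invented:

```python
import random

def make_broker(capacities, rng=None):
    """Feedback-free broker (sketch): route each incoming job to a cluster
    with probability proportional to its computational capacity."""
    rng = rng or random.Random()
    names = list(capacities)
    weights = [capacities[n] for n in names]
    def broker(job):
        return rng.choices(names, weights=weights, k=1)[0]
    return broker

# Three clusters; cluster "c" holds half the total capacity.
broker = make_broker({"a": 25, "b": 25, "c": 50}, random.Random(1))
counts = {"a": 0, "b": 0, "c": 0}
for job in range(10_000):
    counts[broker(job)] += 1
print(counts)  # roughly 2500 / 2500 / 5000
```

Over many jobs each cluster's share of the work converges to its share of the total capacity, which is the regime whose behavior the first part of the thesis analyzes.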
129

Develop heuristics to the popular Minesweeper game

Huang, Angela Tzujui 01 January 2004 (has links)
This project describes Automine, a program intended to aid in solving the Minesweeper computer game. Automine is a C program for Linux built on the X Window System graphics library. The program uses heuristics and probability calculations to help determine safe squares and squares concealing mines, with the goal of allowing a player to achieve minimal time performance. The source code for Automine and for a game simulation is provided in the appendices.
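A minimal sketch of the kind of single-constraint probability heuristic such a solver might use (this is an illustration of the general technique, not Automine's actual code, which is in C): for one revealed number, each unknown neighbour is a mine with probability (number minus flagged neighbours) divided by the count of unknown neighbours.

```python
def single_cell_odds(number, unknown, flagged):
    """Naive single-constraint heuristic: given one revealed number, the
    mines still hidden around it are (number - len(flagged)); each unknown
    neighbour is a mine with that count divided by len(unknown).
    A probability of 0.0 marks a provably safe square, 1.0 a certain mine."""
    if not unknown:
        return {}
    remaining = number - len(flagged)
    p = remaining / len(unknown)
    return {cell: p for cell in unknown}

# A revealed "2" with one flagged neighbour and two unknown neighbours:
print(single_cell_odds(2, unknown=[(0, 1), (1, 0)], flagged=[(1, 1)]))
# {(0, 1): 0.5, (1, 0): 0.5}

# A revealed "1" whose mine is already flagged marks the rest safe:
print(single_cell_odds(1, unknown=[(2, 2)], flagged=[(1, 1)]))
# {(2, 2): 0.0}
```

A full solver would intersect these per-cell estimates across all revealed numbers and open the squares with the lowest combined probability.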
130

Metaheuristic approaches to realistic portfolio optimisation

Busetti, Franco Raoul 06 1900 (has links)
In this thesis we investigate the application of two heuristic methods, genetic algorithms and tabu/scatter search, to the optimisation of realistic portfolios. The model is based on the classical mean-variance approach, but enhanced with floor and ceiling constraints, cardinality constraints and nonlinear transaction costs which include a substantial illiquidity premium, and is then applied to a large 100-stock portfolio. It is shown that genetic algorithms can optimise such portfolios effectively and within reasonable times, without extensive tailoring or fine-tuning of the algorithm. This approach is also flexible in not relying on any assumed or restrictive properties of the model and can easily cope with extensive modifications such as the addition of complex new constraints, discontinuous variables and changes in the objective function. The results indicate that both floor and ceiling constraints have a substantial negative impact on portfolio performance, and their necessity should be examined critically relative to their associated administration and monitoring costs. Another insight is that nonlinear transaction costs which are comparable in magnitude to forecast returns will tend to diversify portfolios; the effect of these costs on portfolio risk is, however, ambiguous, depending on the degree of diversification required for cost reduction. Generally, the number of assets in a portfolio invariably increases as a result of constraints, costs and their combination. The implementation of cardinality constraints is essential for finding the best-performing portfolio, and the ability of the heuristic method to deal with cardinality constraints is one of its most powerful features. / Decision Sciences / M. Sc. (Operations Research)
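A minimal sketch of a genetic algorithm for a cardinality-constrained mean-variance portfolio, in the spirit of the approach described. It omits the floor/ceiling and transaction-cost terms for brevity, uses a tiny five-asset universe, and all returns, covariances and GA parameters are invented:

```python
import random

MU = [0.08, 0.12, 0.10, 0.07, 0.15]          # forecast returns (illustrative)
SIGMA = [[0.10, 0.01, 0.02, 0.00, 0.01],     # covariance matrix (illustrative)
         [0.01, 0.16, 0.03, 0.01, 0.02],
         [0.02, 0.03, 0.12, 0.01, 0.02],
         [0.00, 0.01, 0.01, 0.08, 0.01],
         [0.01, 0.02, 0.02, 0.01, 0.20]]
K, LAMBDA = 3, 2.0                           # cardinality limit, risk aversion

def repair(w):
    """Enforce the cardinality constraint: keep the K largest weights,
    zero the rest, and renormalise so the weights sum to 1."""
    keep = sorted(range(len(w)), key=lambda i: -w[i])[:K]
    w = [w[i] if i in keep else 0.0 for i in range(len(w))]
    total = sum(w) or 1.0
    return [x / total for x in w]

def fitness(w):
    """Mean-variance utility: expected return minus LAMBDA times variance."""
    ret = sum(wi * mi for wi, mi in zip(w, MU))
    var = sum(w[i] * SIGMA[i][j] * w[j]
              for i in range(len(w)) for j in range(len(w)))
    return ret - LAMBDA * var

def ga(pop_size=40, gens=200, rng=random.Random(0)):
    """Elitist GA: keep the best half, breed children by averaging two
    parents (crossover) and perturbing one weight (mutation), repairing
    every chromosome back onto the constraint set."""
    pop = [repair([rng.random() for _ in MU]) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]   # crossover
            i = rng.randrange(len(child))
            child[i] += rng.uniform(-0.1, 0.1)            # mutation
            children.append(repair([max(x, 0.0) for x in child]))
        pop = parents + children
    return max(pop, key=fitness)

best = ga()
print([round(x, 3) for x in best], round(fitness(best), 4))
```

The repair step is the simplest way to keep the GA inside the cardinality constraint; the thesis's full model instead works with floor/ceiling bounds and nonlinear cost terms added to the objective.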
