261

Investigation into the use of meta-heuristics in the optimisation of log positioning during sawmill processing

Du Plessis, Johan de Villiers
Thesis (MSc (Forest and Wood Science))--University of Stellenbosch, 2011. / ENGLISH ABSTRACT: The percentage yield of sawn timber recovered from logs has a large influence on the profitability of a sawmill. The positioning of the log as it is fed into the primary breakdown saw is one of the factors that impact on the volume recovery percentage. The log’s position can be adjusted by changes in rotation, offset and skewness, and depending on the ranges and increments used for these variables, the number of possible combinations can become substantial. In a sawmill the time available to assess possible log positions is limited, and different strategies can be followed to arrive at an optimal or close-to-optimal positioning solution without an exhaustive evaluation of solutions. Meta-heuristics are sometimes used to arrive at solutions for combinatorial optimisation problems in a limited time. The effectiveness of this approach on the optimisation of timber volume recovery based on log form is evaluated in this study using sawmill simulation data of sixty SA pine logs. A new meta-heuristic, for convenience named the Tentacle algorithm, was developed specifically for the problem of log positioning optimisation. The results obtained with the Tentacle algorithm were compared with results from three existing meta-heuristics, i.e. the Simulated Annealing algorithm, the Population Based Incremental Learning algorithm and the Particle Swarm Optimisation algorithm, in terms of efficiency and effectiveness in finding good log positioning solutions in a limited time. A fifth method, that of exhaustively searching smaller, high-quality areas around the centred “horns-up” and “horns-down” positions in the search space, was compared to that of using the meta-heuristic algorithms. In terms of volume recovery, the Tentacle algorithm performed, on average, the best of the four algorithms evaluated. However, exhaustive searches in the smaller, high-quality areas in the search space outperformed the algorithms.
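The abstract names the algorithms compared but not their mechanics. As a rough illustration of how one of them, simulated annealing, can search the discrete space of rotation, offset and skewness combinations under a fixed evaluation budget, consider the Python sketch below; the grid ranges and the yield_of scoring function are invented placeholders for the sawmill simulator, not the study's model.

import math
import random

# Discrete log-positioning search space, as described in the abstract:
# rotation (degrees), offset (mm) and skew (degrees), each on a fixed grid.
ROTATIONS = range(0, 360, 2)
OFFSETS = range(-30, 31, 2)
SKEWS = range(-5, 6)

def yield_of(pos):
    """Toy stand-in for the sawmill simulator: returns a volume-recovery
    score for a (rotation, offset, skew) position. The study scored
    positions with simulation data from sixty SA pine logs."""
    r, o, s = pos
    return math.cos(math.radians(r)) - 0.01 * abs(o) - 0.05 * abs(s)

def random_neighbour(pos):
    """Perturb one of the three positioning variables by one grid step."""
    r, o, s = pos
    axis = random.randrange(3)
    if axis == 0:
        r = (r + random.choice((-2, 2))) % 360
    elif axis == 1:
        o = min(30, max(-30, o + random.choice((-2, 2))))
    else:
        s = min(5, max(-5, s + random.choice((-1, 1))))
    return (r, o, s)

def simulated_annealing(evaluations=1000, t0=1.0, cooling=0.995):
    """Maximise yield within a fixed evaluation budget, mirroring the
    limited assessment time available in a sawmill."""
    current = (random.choice(ROTATIONS), random.choice(OFFSETS), random.choice(SKEWS))
    best, temp = current, t0
    for _ in range(evaluations):
        cand = random_neighbour(current)
        delta = yield_of(cand) - yield_of(current)
        if delta >= 0 or random.random() < math.exp(delta / temp):
            current = cand
            if yield_of(current) > yield_of(best):
                best = current
        temp *= cooling
    return best, yield_of(best)

print(simulated_annealing())

The same budgeted loop structure applies to the other compared algorithms; only the way candidate positions are generated differs.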
262

Decisions and disinformation: an evaluation of the usefulness of the Fast and Frugal Heuristics Programme in uncovering implicature-type disinformation

Potgieter, Burger Gericke
Thesis (MA)--Stellenbosch University, 2012. / ENGLISH ABSTRACT: This thesis investigates ways in which the Fast & Frugal Heuristics (F&FH) programme in the field of Judgment and Decision Making (JDM) theory can be brought to bear on the phenomenon of disinformation. The study applies existing theory to develop an argument around the capacity of the F&FH framework to respond in a normative, descriptive and prescriptive fashion specifically to implicature-type disinformation. This leads to conclusions about the usefulness of the programme for a specific field that is supposed to be within its ambit. The study attempts to answer the research question by examining the philosophical and developmental history of JDM and of disinformation as a theme of inquiry. With the necessary background as context, the phenomenon of disinformation is investigated, specifically in the case of advertisements, with particular focus on pictorial metaphor that may lead to disinformation. The study concludes that F&FH succeeds only to some extent in its descriptive capacity, whilst it fails to provide normative or prescriptive insights when faced with implicature-type disinformation, in the following ways. Firstly, proponents of the F&FH programme seem self-contradictory about the value of F&FH as a decision-making theory: they are generally positive about its descriptive, normative and prescriptive abilities, whilst fully admitting to fundamental problems in every aspect of the theory and its applications. Secondly, even though there is a general admission of the importance of social and cultural elements in decision making, F&FH remains intrinsically individualistic; as such, it will fail to recognise deception and disinformation, as these form part of a language act that is specifically designed around hidden motives and specialised persuasion techniques. Thirdly, F&FH will not be able to resolve the underlying issues it faces without breaking free from its philosophical underpinnings: it remains primarily empiricist through its behaviourist/positivist assumptions and applications, and as such fails to recognise the validity of concepts such as meaning, belief and attitude.
263

Metaheuristics for petrochemical blending problems

Venter, Lieschen
ENGLISH ABSTRACT: The main aim in blending problems is to determine the best blend of available ingredients to form a certain quantity of product(s). This product should adhere to strict specifications. In this study the best blend means the least-cost blend of ingredients (input) required to meet a minimum level of product (output) specifications. The most prevalent tools for solving blending problems in the industry are spreadsheets, simulators and mathematical programming. While there may be considerable benefit in using these types of tools to identify potential opportunities and infeasibilities, there is a potentially even greater benefit in searching automatically for alternative solutions that are more economical and efficient. Heuristics and metaheuristics are presented as useful alternative solution approaches. In this thesis different metaheuristic techniques are developed and applied to three typical blending problems of varied size taken from the petrochemical industry. A fourth instance of real-life size is also introduced. Heuristics are developed intuitively, while metaheuristics are adopted from the literature. Random search techniques, such as blind random search and local random search, deliver fair results. Within the class of genetic algorithms the best results for all three problems were obtained using ranked fitness assignment with tournament selection of individuals. Good results are also obtained by means of tabu search approaches, even considering the continuous nature of these problems. A simulated annealing approach also yielded fair results. A comparison of the results of the different approaches shows that the tabu search technique delivers the best result with respect to solution quality and execution time for all three problems under consideration. Simulated annealing, however, delivers the best result with respect to solution quality and execution time for the introduced real-life-size problem.
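To make the blending setting concrete, the following sketch implements the simplest technique the abstract mentions, blind random search: sample ingredient fractions at random and keep the cheapest blend that meets a minimum product specification. All numbers are invented for illustration and are not taken from the thesis.

import random

# Toy single-product blending instance (invented numbers):
# cost per ton of each ingredient and one quality property per ingredient.
COSTS = [120.0, 95.0, 150.0]   # ingredients A, B, C
OCTANE = [88.0, 84.0, 98.0]    # property value of each ingredient
MIN_OCTANE = 91.0              # minimum product specification

def random_blend():
    """Sample random fractions on the simplex (non-negative, summing to one)."""
    cuts = sorted(random.random() for _ in range(len(COSTS) - 1))
    bounds = [0.0] + cuts + [1.0]
    return [b - a for a, b in zip(bounds, bounds[1:])]

def cost(frac):
    return sum(c * f for c, f in zip(COSTS, frac))

def feasible(frac):
    """Blend property modelled as the weighted average of ingredient properties."""
    return sum(p * f for p, f in zip(OCTANE, frac)) >= MIN_OCTANE

def blind_random_search(samples=100_000):
    """Keep the cheapest feasible blend found by pure random sampling."""
    best, best_cost = None, float("inf")
    for _ in range(samples):
        frac = random_blend()
        if feasible(frac) and cost(frac) < best_cost:
            best, best_cost = frac, cost(frac)
    return best, best_cost

print(blind_random_search())

The tabu search and genetic algorithm approaches the thesis favours replace the blind sampling loop with guided moves over the same continuous space.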
264

Combining search strategies for distributed constraint satisfaction

Magaji, Amina Sambo-Muhammad January 2015
Many real-life problems such as distributed meeting scheduling, mobile frequency allocation and resource allocation can be solved using multi-agent paradigms. Distributed constraint satisfaction problems (DisCSPs) are a framework for describing such problems in terms of related subproblems, called complex local problems (CLPs), which are dispersed over a number of locations, each with its own constraints on the values its variables can take. An agent knows the variables in its CLP plus the variables (and their current values) which are directly related to one of its own variables, and the constraints relating them. It knows little about the rest of the problem. Thus, each CLP is solved by an agent which cooperates with other agents to solve the overall problem. Algorithms for solving DisCSPs can be classified as either systematic or local search, with the former being complete and the latter incomplete. The algorithms generally assume that each agent has only one variable, as they can solve DisCSPs with CLPs using “virtual” agents. However, in large DisCSPs where it is appropriate to trade completeness off against timeliness, systematic search algorithms can be expensive compared to local search algorithms, which generally converge more quickly to a solution (if a solution is found). A major drawback of local search algorithms is getting stuck at local optima. Significant research has focused on heuristics which can be used in an attempt to either escape or avoid local optima. This thesis makes significant contributions to local search algorithms for DisCSPs. Firstly, we present a novel combination of heuristics in DynAPP (Dynamic Agent Prioritisation with Penalties), a distributed synchronous local search algorithm for solving DisCSPs having one variable per agent. DynAPP combines penalties on values and dynamic agent prioritisation heuristics to escape local optima. Secondly, we develop a divide-and-conquer approach that handles DisCSPs with CLPs by exploiting the structure of the problem. The divide-and-conquer approach prioritises the finding of variable instantiations which satisfy the constraints between agents, which are often more expensive to satisfy than constraints within an agent. The approach also exploits concurrency and combines the following search strategies: (i) both systematic and local searches; (ii) both centralised and distributed searches; and (iii) a modified compilation strategy. We also present Multi-DCA (Divide and Conquer Algorithm for Agents with CLPs), an algorithm that implements the divide-and-conquer approach. DynAPP and Multi-DCA were evaluated on several benchmark problems and compared to the leading algorithms for DisCSPs and DisCSPs with CLPs respectively. The results show that, in the region of difficult problems, combining search heuristics and exploiting problem structure in distributed constraint satisfaction achieve significant benefits (i.e. generally lower computational time and communication costs) over existing competing methods.
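DynAPP's distributed protocol is not spelled out in the abstract, but its escape mechanism, penalties on values, can be illustrated on a centralised toy problem. The sketch below applies penalty-augmented local search to a small graph-colouring CSP; the instance and the exact penalty rule are assumptions for illustration, not the published algorithm.

import random

# A tiny binary CSP: colour a 4-node graph with 3 colours so that no edge
# joins two nodes of the same colour. In the one-variable-per-agent DisCSPs
# the thesis targets, each variable would be owned by a separate agent.
EDGES = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
COLOURS = (0, 1, 2)

def conflicts(assign, var, value):
    """Number of constraints var would violate if set to value."""
    return sum(1 for a, b in EDGES
               if (a == var and assign[b] == value)
               or (b == var and assign[a] == value))

def penalty_local_search(n_vars=4, steps=10_000):
    """Min-conflicts-style search with penalties on (variable, value) pairs:
    when no improving value exists, the current value is penalised so the
    search is pushed away from the local optimum -- the escape idea that
    DynAPP combines with dynamic agent prioritisation."""
    assign = [random.choice(COLOURS) for _ in range(n_vars)]
    penalty = {}  # (var, value) -> accumulated penalty
    for _ in range(steps):
        in_conflict = [v for v in range(n_vars) if conflicts(assign, v, assign[v])]
        if not in_conflict:
            return assign  # all constraints satisfied
        var = random.choice(in_conflict)
        def score(value):
            return conflicts(assign, var, value) + penalty.get((var, value), 0)
        best = min(COLOURS, key=score)
        if score(best) >= score(assign[var]):
            penalty[(var, assign[var])] = penalty.get((var, assign[var]), 0) + 1
        assign[var] = best
    return None

print(penalty_local_search())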
265

Optimization of Surgery Delivery Systems

January 2010
Abstract: Optimization of surgical operations is a challenging managerial problem for surgical suite directors. This dissertation presents modeling and solution techniques for operating room (OR) planning and scheduling problems. First, several sequencing and patient appointment time setting heuristics are proposed for scheduling an Outpatient Procedure Center. A discrete event simulation model is used to evaluate how scheduling heuristics perform with respect to the competing criteria of expected patient waiting time and expected surgical suite overtime for a single day, compared to current practice. Next, a bi-criteria Genetic Algorithm is used to determine if better solutions can be obtained for this single-day scheduling problem. The efficacy of the bi-criteria Genetic Algorithm when surgeries are allowed to be moved to other days is also investigated. Numerical experiments based on real data from a large health care provider are presented. The analysis provides insight into the best scheduling heuristics and the tradeoff between patient-based and provider-based criteria. Second, a multi-stage stochastic mixed integer programming formulation for the allocation of surgeries to ORs over a finite planning horizon is studied. The demand for surgery and surgical durations are random variables. The objective is to minimize two competing criteria: expected surgery cancellations and OR overtime. A decomposition method, Progressive Hedging, is implemented to find near-optimal surgery plans. Finally, properties of the model are discussed and methods are proposed to improve the performance of the algorithm based on the special structure of the model. It is found that simple rules can improve the schedules used in practice: sequencing surgeries from the longest to the shortest mean duration causes high expected overtime and should be avoided, while sequencing from the shortest to the longest mean duration performed quite well in our experiments. Expending greater computational effort with more sophisticated optimization methods does not lead to substantial improvements; however, controlling the daily procedure mix may achieve substantial improvements in performance. A novel stochastic programming model for a dynamic surgery planning problem is also proposed, and the efficacy of the Progressive Hedging algorithm is investigated. It is found that there is a significant correlation between the performance of the algorithm and the type and number of scenario bundles in a problem instance. The computational time spent solving scenario subproblems is among the most significant factors that impact the performance of the algorithm. The quality of the solutions can be improved by detecting and preventing cyclical behaviors. / Dissertation/Thesis / Ph.D. Industrial Engineering 2010
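As a rough illustration of the sequencing finding, the sketch below uses a small Monte Carlo simulation to compare shortest-mean-first and longest-mean-first sequencing on expected patient waiting time and room overtime. The mean durations, the exponential duration model and the appointment rule are invented assumptions, far simpler than the dissertation's discrete-event model.

import random
import statistics

# Toy OR day: each surgery gets an appointment slot equal to its mean
# duration; actual durations are random. Mean durations in minutes.
MEANS = [30, 45, 60, 90, 120]
DAY_LENGTH = sum(MEANS)  # regular-time capacity of the room

def simulate(order, reps=5_000):
    """Estimate expected total patient waiting time and room overtime for a
    given sequence of mean durations."""
    waits, overtimes = [], []
    for _ in range(reps):
        clock = wait = appointment = 0.0
        for m in order:
            start = max(clock, appointment)  # patient arrives at appointment time
            wait += start - appointment
            clock = start + random.expovariate(1.0 / m)  # random actual duration
            appointment += m  # next appointment after this slot's mean
        waits.append(wait)
        overtimes.append(max(0.0, clock - DAY_LENGTH))
    return statistics.mean(waits), statistics.mean(overtimes)

spt = sorted(MEANS)                 # shortest mean duration first
lpt = sorted(MEANS, reverse=True)   # longest first: found to cause high overtime
print("shortest first:", simulate(spt))
print("longest first: ", simulate(lpt))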
266

Down the Rabbit Hole: Perceptions of Identity Formation In and Through the Educative Experience of Women from Working-Class Backgrounds

January 2011
Abstract: There is a body of literature--albeit largely from the UK and Australia--that examines the ways in which class and gender influence life course, including educational attainment; however, much of this literature offers explanations and analyses for why individuals choose the life course they do. By assuming a cause-effect relationship between class and gender and life course, these studies perpetuate the idea that life can be predicted and controlled. Such an approach implies there is but one way of viewing--or an "official reading" of--the experience of class and gender. This silences other readings. This study goes beneath these "interpretations" and explores the phenomenon of identity and identity making in women who grew up working-class. Included is an investigation into how these women recognize and participate in their own identity making, identifying the interpretations they created and apply to their experience and the ways in which they juxtapose their educative experience. Using semi-structured interviews, I interviewed 21 women with working-class habitus. The strategy of inquiry that corresponded best to the goal of this project was heuristics, a variant of empathetic phenomenology. Heuristics distinguishes itself, first, by including the life experience of the researcher while still showing how different people may participate in an event in their lives and how these individuals may give it radically different meanings. This has two effects: (1) the researcher recognizes that their own life experience affects their interpretations of these stories and (2) it elucidates the researcher's own life as it relates to identity formation and educational experience. Second, heuristics encourages different ways of presenting findings through a variety of art forms meant to enhance the immediacy and impact of an experience rather than offer any explanation of it. As a result of the research, four themes essential to locating the experience of women who grew up working-class were discovered: making, paying attention, taking care, and up. These themes have pedagogic significance as women with working-class habitus navigate from this social space; the downstream effect is how and what these women take up as education. / Dissertation/Thesis / Ph.D. Curriculum and Instruction 2011
267

Study and resolution methods for multi-skill project scheduling problems: integration into a free ERP package

Mohamed Dhib, Cheikh 08 April 2013
The work presented in this thesis, carried out under a CIFRE industrial research agreement, deals with multi-skill project scheduling problems. We have studied two project scheduling models, defined in collaboration with project management experts at the Néréide company. In the first model, a task is defined by the set of skills it requires, the load needed for each skill, and whether or not it may be interrupted. To build a predictive schedule which respects all problem constraints and minimizes the project completion time (makespan), we propose list heuristics and meta-heuristics. A mixed-integer linear programming model and lower bounds are also developed. From a predefined schedule, we then propose methods to adjust the plan in response to disruptions occurring during project execution. To solve this reactive problem, we propose an iterative exact approach based on an integer linear formulation, as well as a genetic algorithm of type NSGA-II. It is, therefore, a bi-criteria reactive approach in which the computed solutions must minimize both the project completion date and the maximum number of changes in task-to-employee assignments. In the second model, a special case of the preemptive model above, we focus on the case where a task requires exactly one skill, with preemption possible only when resources are unavailable (absence, leave, etc.). In this model, a task is also defined by a release date and a due date, and a cost per person/skill is introduced. It is, therefore, a bi-objective project scheduling problem in which the computed solutions must minimize both the maximum tardiness and the global cost of assigning people to tasks. Heuristics and meta-heuristics are proposed for this model. Some of the proposed solution methods have been implemented as add-ons for the OFBiz framework.
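For the second model, where each task requires exactly one skill, a simple list-scheduling heuristic of the kind the thesis builds on can be sketched as follows: process the tasks in list order and give each one to the earliest-available employee mastering the required skill. The instance data are illustrative assumptions, not taken from the thesis.

# Greedy list scheduling for a one-skill-per-task instance (invented data).
TASKS = [  # (name, required skill, duration in days)
    ("t1", "java", 3), ("t2", "sql", 2), ("t3", "java", 4),
    ("t4", "ui", 2), ("t5", "sql", 1),
]
SKILLS = {"alice": {"java", "sql"}, "bob": {"java", "ui"}, "carol": {"sql"}}

def list_schedule(tasks, skills):
    """Assign each task to the qualified employee who frees up earliest.
    Returns the schedule and the makespan."""
    free_at = {e: 0 for e in skills}  # instant at which each employee is free
    schedule = []
    for name, skill, dur in tasks:
        qualified = [e for e in skills if skill in skills[e]]
        if not qualified:
            raise ValueError(f"no employee has skill {skill!r}")
        emp = min(qualified, key=free_at.get)
        start = free_at[emp]
        free_at[emp] = start + dur
        schedule.append((name, emp, start, start + dur))
    return schedule, max(free_at.values())

schedule, makespan = list_schedule(TASKS, SKILLS)
for row in schedule:
    print(row)
print("makespan:", makespan)

Varying the task order (for example by due date, or randomly inside a meta-heuristic) yields the family of list heuristics such approaches explore.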
268

HISTORY-SENSITIVE RECOVERY OF FEATURES IN CODE OF EVOLVING PROGRAM FAMILIES

CAMILA PATRICIA BAZILIO NUNES 19 January 2017
A program family might degenerate due to unplanned changes in its implementation, thus hindering the maintenance of family members. This degeneration is often induced by feature code of the program family that is changed individually in each member without considering other family members. In extreme cases, the program family code is fully or partially replicated and individually changed across several evolving members. Hence, as a family evolves over time, it might no longer be possible to identify and classify the implementation elements realizing common and variable features. One of the imminent activities to address these problems is the history-sensitive recovery of a program family's features. This recovery process encompasses the historical analysis of each family member in order to identify and classify the implementation elements (i.e. methods, attributes) according to their variability nature. Existing work fails to analyse the evolution of the family members with the goal of recovering features' implementation elements. Additionally, existing techniques for feature analysis are not effective as they only take into consideration the history of a single member product. In summary, the contributions of this thesis are threefold: (i) a catalogue of mapping mismatches to guide software engineers in promoting the correctness and completeness of their feature mappings; this catalogue is useful to ensure a better effectiveness of the recovery process during the mapping analysis; (ii) a suite of five heuristics for the automatic expansion of feature mappings throughout the program family history; those heuristics rely on both the multi-dimensional historical analysis of program families and the catalogue of mapping mismatches; and (iii) a suite of history-sensitive heuristics for classifying the implementation elements realizing each family feature according to their variability degree.
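Contribution (iii) can be illustrated with a deliberately simplified sketch: given, for each family member, the set of implementation elements mapped to a feature, classify each element by how many members contain it. The data and the two-way common/variable rule here are hypothetical; the thesis's heuristics also exploit each member's revision history.

# Hypothetical feature mappings: member -> implementation elements mapped
# to one feature across three family members.
FEATURE_MAPPINGS = {
    "memberA": {"Parser.parse", "Parser.reset", "Log.write"},
    "memberB": {"Parser.parse", "Parser.reset"},
    "memberC": {"Parser.parse", "Log.write", "Cache.put"},
}

def classify_by_variability(mappings):
    """Elements present in every member are 'common'; elements present in
    only some members are 'variable'."""
    members = list(mappings)
    result = {}
    for elements in mappings.values():
        for element in elements:
            count = sum(element in mappings[m] for m in members)
            result[element] = "common" if count == len(members) else "variable"
    return result

for element, kind in sorted(classify_by_variability(FEATURE_MAPPINGS).items()):
    print(f"{element}: {kind}")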
269

A Progressive Enhancement Neural Model for dimensionality reduction

Camargo, Sandro da Silva January 2010
In recent decades, advances in data generation, collection and storage technologies have contributed to increasing database sizes in different areas of human knowledge. This increase is seen not only in the number of samples, but mainly in dimensionality, i.e. the number of features describing each sample. Adding features increases the dimension of the mathematical space, leading to an exponential growth of the data hypervolume. This problem is called the “curse of dimensionality”. The curse of dimensionality has been a routine problem for scientists who, in order to understand and explain certain phenomena, are faced with the need to find meaningful low-dimensional structures hidden in high-dimensional search spaces. This process is called data dimensionality reduction (DDR). From a computational viewpoint, the natural consequence of DDR is a reduction of the hypothesis search space, improving performance and simplifying the results of knowledge modeling in autonomous learning systems. Among the techniques currently used in autonomous learning systems, artificial neural networks (ANNs) have become particularly attractive for modeling complex systems, especially when modeling is hard or when the system dynamics do not allow on-line control. Despite being a powerful technique, ANNs have their performance affected by the curse of dimensionality: when the input space dimension is high, ANNs can use a significant part of their resources to represent irrelevant portions of the search space, making the learning process harder. Although ANNs, like other machine learning techniques, can identify the more informative features for a modeling process, the use of DDR techniques often improves learning results. This thesis proposes a wrapper which implements a Progressive Enhancement Neural Model for DDR in supervised autonomous learning systems in order to optimize the modeling process. To validate the proposed approach, experiments were performed with private databases and with public repositories from different knowledge domains. The generalization ability of the developed models is evaluated by means of cross-validation techniques. The results obtained demonstrate that the proposed approach can identify the more informative features, enabling DDR and making it possible to create simpler and more accurate models. The implementation of the proposed approach and the related experiments were carried out in the Matlab environment, using the ANN toolbox.
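The wrapper scheme the thesis builds on can be sketched independently of the neural model: repeatedly train a model on candidate feature subsets and keep the subset with the best cross-validated score. The sketch below uses greedy forward selection wrapped around a nearest-centroid classifier so it stays self-contained; the thesis wraps an ANN instead, and the dataset here is synthetic.

import random
import statistics

def cv_accuracy(X, y, features, folds=5):
    """k-fold cross-validated accuracy of a nearest-centroid classifier
    restricted to the given feature indices."""
    idx = list(range(len(X)))
    random.shuffle(idx)
    scores = []
    for f in range(folds):
        test = set(idx[f::folds])
        train = [i for i in idx if i not in test]
        centroids = {
            label: [statistics.mean(X[i][j] for i in train if y[i] == label)
                    for j in features]
            for label in set(y)
        }
        correct = sum(
            min(centroids, key=lambda c: sum((X[i][j] - v) ** 2
                                             for j, v in zip(features, centroids[c])))
            == y[i]
            for i in test
        )
        scores.append(correct / len(test))
    return statistics.mean(scores)

def forward_selection(X, y, n_features):
    """Greedily add the feature that most improves the wrapped model;
    stop when no candidate improves the cross-validated score."""
    selected, remaining, best_score = [], list(range(n_features)), 0.0
    while remaining:
        j, score = max(((j, cv_accuracy(X, y, selected + [j])) for j in remaining),
                       key=lambda t: t[1])
        if score <= best_score:
            break
        selected.append(j)
        remaining.remove(j)
        best_score = score
    return selected, best_score

# Synthetic data: feature 0 carries the class signal, feature 1 is noise.
X = [[i + random.gauss(0, 0.3), random.random()] for i in range(40)]
y = [0 if i < 20 else 1 for i in range(40)]
print(forward_selection(X, y, n_features=2))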
270

Service restoration and switch allocation in power distribution networks: bounds and algorithms

Benavides Rojas, Alexander Javier January 2010
The improvement of reliability in electrical power distribution networks is an important issue for electricity supply industries, due to strict regulations in many countries. After a failure in the network, some switches are used to isolate the failure, while others restore energy to some consumers. The optimal selection of the switches to open or close to restore energy is called the service restoration problem. The installation of switches in strategic places may reduce the outage time in case of blackouts, and thus improve the reliability of the network. The optimal selection of places to install switches is called the switch allocation problem. These two problems are closely related. This dissertation studies the switch allocation problem, considering the service restoration problem as a sub-problem. Two methods are proposed to estimate the reliability of a distribution network with a given set of installed switches. The main focus is on heuristics to solve the joint problem. The dissertation proposes methods such as tabu search, greedy randomized adaptive search procedure (GRASP), and iterated sample construction with path relinking. It also studies the benefit of greedy, semigreedy, random and sample construction methods, and the performance of sample, first-improvement and best-improvement local search strategies. The different methods are compared and analyzed. The results show that sample approaches are inexpensive and lead to solutions of good quality. Iterated sample construction with path relinking is the best of the proposed methods for solving the joint problem.
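Of the methods listed, GRASP has a particularly compact skeleton: a greedy randomized construction followed by local search, restarted many times. The sketch below applies it to a toy switch-allocation instance; the cost function is an invented placeholder for the reliability estimate that the dissertation computes through service restoration.

import random

# Toy instance: install K switches among N candidate positions. BENEFIT is
# a stand-in for how much a switch at each position improves reliability.
N, K = 12, 4
random.seed(1)
BENEFIT = [random.uniform(1, 10) for _ in range(N)]

def cost(solution):
    """Lower is better. A real implementation would estimate expected
    outage cost after service restoration for this switch placement; here
    we reward benefit and penalise switches at adjacent positions."""
    crowding = sum(1 for a in solution for b in solution if abs(a - b) == 1)
    return -sum(BENEFIT[i] for i in solution) + 0.5 * crowding

def construct(alpha=0.3):
    """Semigreedy construction: add switches one at a time, picking at
    random from the best alpha-fraction of candidates (the RCL)."""
    solution = set()
    while len(solution) < K:
        cands = sorted((cost(solution | {i}), i) for i in range(N) if i not in solution)
        rcl = cands[:max(1, int(len(cands) * alpha))]
        solution.add(random.choice(rcl)[1])
    return solution

def local_search(solution):
    """First-improvement swaps: exchange an installed switch for a free
    position while doing so lowers the cost."""
    improved = True
    while improved:
        improved = False
        for i in list(solution):
            for j in (j for j in range(N) if j not in solution):
                cand = (solution - {i}) | {j}
                if cost(cand) < cost(solution):
                    solution, improved = cand, True
                    break
            if improved:
                break
    return solution

def grasp(iterations=50):
    best = None
    for _ in range(iterations):
        s = local_search(construct())
        if best is None or cost(s) < cost(best):
            best = s
    return sorted(best), cost(best)

print(grasp())

Iterated sample construction, the dissertation's best performer, keeps this two-phase shape but reuses information from earlier samples and adds path relinking between elite solutions.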
