1.
Automatic Data Partitioning By Hierarchical Genetic Search. Shenoy, U Nagaraj, 09 1900
CDAC / The introduction of languages like High Performance Fortran (HPF), which allow the programmer to indicate how the arrays used in the program are to be distributed across the local memories of a multicomputer, has not completely unburdened the parallel programmer from the intricacies of these architectures. To tap the full potential of these architectures, the compiler has to perform the crucial task of data partitioning automatically. This would not only unburden the programmer but would also make programs more efficient, since the compiler can be made intelligent enough to account for architectural nuances.
The topic of this thesis, automatic data partitioning, deals with finding the best data partition for the various arrays used in a program in such a way that the cost of executing the entire program is minimized. The compiler can resort to runtime redistribution of the arrays at various points in the program if this is found profitable. Several aspects of this problem have been proven NP-complete, and other researchers have suggested heuristic solutions. In this thesis we propose a genetic algorithm, the Hierarchical Genetic Search algorithm, to solve this problem.
2.
Screening Web Breaks in a Pressroom by Soft Computing. Ahmad, Alzghoul, January 2008
Web breaks are considered one of the most significant runnability problems in a pressroom. This work concerns the analysis of the relation between various parameters (variables) characterizing the paper, the printing press, the printing process, and the occurrence of web breaks. A large number of variables, 61 in total, obtained off-line as well as measured online during the printing process, are used in the investigation. Each paper reel is characterized by a vector x of 61 components.

Two main approaches are explored. The first treats the problem as a task of classifying data into "break" and "non-break" classes. The procedures of classifier training, selection of relevant input variables, and selection of the classifier's hyper-parameters are aggregated into one process based on genetic search. The second approach combines genetic-search-based variable selection with data mapping into a low-dimensional space. The genetic search process results in a variable set providing the best mapping according to some quality function.

The empirical study was performed using data collected at a pressroom in Sweden. The total number of data points available for the experiments was 309. Among those, only 37 data points represent web break cases. The results of the investigation have shown that the linear relations between the independent variables and the web break frequency are not strong.

Three important groups of variables were identified: Lab data (variables characterizing paper properties, measured off-line in a paper mill lab), Ink registry (variables characterizing operator actions aimed at adjusting ink registry), and Web tension. We found that the most important variables are: Ink registry Y LS MD (adjustments of yellow ink registry in the machine direction on the lower paper side), Air permeability (characterizes paper porosity), Paper grammage, Elongation MD, and four variables characterizing web tension: Moment mean, Min sliding Mean, Web tension variance, and Web tension mean.

The proposed methods were helpful in finding the variables influencing the occurrence of web breaks and can also be used for solving other industrial problems.
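The first approach, with a genome that is a bitmask over the candidate variables, could look like the following sketch. The wrapper fitness here is a stand-in relevance score (the `relevance` values are invented); a faithful implementation would train and validate a classifier for each candidate subset instead.

```python
import random

def fitness(mask, relevance, size_penalty=0.05):
    """Toy wrapper score: reward relevant variables, penalise subset size.
    Stands in for cross-validated classifier accuracy."""
    chosen = [i for i, bit in enumerate(mask) if bit]
    if not chosen:
        return 0.0
    return sum(relevance[i] for i in chosen) - size_penalty * len(chosen)

def select_variables(relevance, pop_size=40, gens=60, seed=1):
    rng = random.Random(seed)
    n = len(relevance)
    population = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(gens):
        population.sort(key=lambda m: -fitness(m, relevance))
        parents = population[:pop_size // 2]           # elitist selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            child = [x if rng.random() < 0.5 else y    # uniform crossover
                     for x, y in zip(a, b)]
            if rng.random() < 0.3:                     # bit-flip mutation
                child[rng.randrange(n)] ^= 1
            children.append(child)
        population = parents + children
    return max(population, key=lambda m: fitness(m, relevance))

# 10 candidate variables; only the first 4 carry signal in this toy setup.
relevance = [0.9, 0.8, 0.7, 0.6] + [0.01] * 6
mask = select_variables(relevance)
print(mask)
```

The same loop can also carry the classifier's hyper-parameters in the genome alongside the bitmask, which is how the abstract describes aggregating all three procedures into one genetic search.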
3.
Dynamic Weapon-Target Assignment Problem. Gunsel, Emrah, 01 September 2008
The Weapon-Target Assignment (WTA) problem is a fundamental problem arising in defense-related applications of operations research. Solving the WTA problem amounts to selecting the most appropriate weapon for each target; the basic aim is to achieve the maximum effect on the targets. Different algorithms, such as branch and bound (B&B), genetic algorithms (GA), and variable neighborhood search (VNS), are used to solve this problem. In this thesis, a more complex version of this problem is defined and adapted to fire support automation (Command, Control, Communication, Computer, Intelligence; C4I) systems. For each target, a weapon with appropriate ammunition, fuel, timing, status, and risk is assigned; the selection of ammunition, economy of fuel, risk analysis, and time scheduling are all integrated into the solution. B&B, GA, and VNS are used to solve the static and dynamic WTA problems. Simulations have shown that GA and VNS are the methods best suited to solving the WTA problem.
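For intuition, the static WTA problem can be stated and solved by brute force on a toy instance: each weapon is assigned one target, and the objective is to minimize the expected target value that survives. The kill probabilities and target values below are invented; real instances are far too large for enumeration, which is why B&B, GA, and VNS are used.

```python
from itertools import product

def surviving_value(assignment, p, value):
    """Expected target value surviving a weapon->target assignment.
    p[i][j] is the probability that weapon i destroys target j."""
    survive = [1.0] * len(value)
    for weapon, target in enumerate(assignment):
        survive[target] *= 1.0 - p[weapon][target]
    return sum(v * s for v, s in zip(value, survive))

def brute_force_wta(p, value):
    """Enumerate every assignment (targets ** weapons candidates)."""
    n_weapons, n_targets = len(p), len(value)
    best = min(product(range(n_targets), repeat=n_weapons),
               key=lambda a: surviving_value(a, p, value))
    return best, surviving_value(best, p, value)

# Hypothetical 3-weapon / 2-target instance.
p = [[0.8, 0.3],
     [0.5, 0.6],
     [0.2, 0.7]]
value = [10.0, 8.0]
assignment, remaining = brute_force_wta(p, value)
print(assignment, round(remaining, 3))  # -> (0, 1, 1) 2.96
```

Note the diminishing returns built into the objective: once weapon 0 covers target 0, the remaining two weapons do better stacking on target 1 than reinforcing target 0, which is exactly the kind of structure the metaheuristics must discover at scale.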
5.
Development and Assessment of an Automated Categorization System for Organic Laryngeal Diseases. Kašėta, Marius, 03 September 2010
The incidence of various organic laryngeal diseases, including oncological ones, is increasing due to the aging of society, lifestyle changes brought about by civilization, and environmental pollution. Because of their latent course and the absence of early clinical symptoms, more than half of laryngeal cancer cases are diagnosed at late stages. The untimely interpretation of early symptoms, prescription of treatment, and prediction of outcomes also play a considerable role. These shortcomings are determined by many factors, not least the physician's lack of experience. To reduce the subjectivity in the evaluation of diagnostic tests and the differences in experience between investigators and physicians of different qualifications, and to ensure a consistent teaching and learning process, it is relevant, from both a scientific and a practical point of view, to create a universally accessible system that provides assistance in diagnosing organic laryngeal diseases. Such a system should rely on parameters from many different information sources and should adapt flexibly to the introduction of new parameters and the accumulation of information. No generally used laryngeal disease categorization system exists that combines several information sources and helps the physician make a decision in unclear situations.
Aim
To develop an automated categorization system for organic laryngeal diseases, based on the analysis of microlaryngoscopic images, acoustic and subjective voice parameters, and patient demographic data, and to assess its effectiveness.
Objectives
1. To determine the statistical methods and... [see the full text] / In clinical practice, the diagnostic procedure for laryngeal diseases is based on evaluation of the patient's complaints, history, and data from instrumental as well as histological examination. A variety of techniques for examination of the larynx and objective measurement of voice quality have been developed in recent years.
Analysis of the patient's complaints and history, evaluation of his/her voice, and assessment of laryngeal visualization (laryngoscopy) data remain the primary information sources used to categorize and diagnose laryngeal disorders. Automated analysis of voice is increasingly used for detecting and screening laryngeal disorders. Time, frequency, and cepstral domains are usually used to extract features characterizing a voice signal. Analysis of the literature on automated categorization of voice aimed at detecting laryngeal pathologies showed that the categorization is usually based on one, two, or three types of features. There are no reports in the literature attempting to extract a larger variety of features for characterizing a voice signal. Moreover, there are no investigations of the utility of a large variety of feature types in categorizing the voice signal into the healthy class and several pathological voice classes using a committee of support vector machines (SVM).
Whilst automated categorization of voice into pathological and healthy classes is rather common, there have been very few attempts to create systems for automated analysis of color... [to full text]
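The committee idea reduces to voting over base learners. The sketch below substitutes hypothetical threshold rules on assumed acoustic features (jitter, shimmer, harmonics-to-noise ratio) for trained SVMs; only the aggregation step reflects the committee mechanism described in the abstract.

```python
from collections import Counter

def committee_predict(classifiers, x):
    """Majority vote over a committee of base classifiers. In the thesis the
    base learners are SVMs trained on different feature types; here they are
    stand-in callables returning a class label."""
    votes = Counter(clf(x) for clf in classifiers)
    return votes.most_common(1)[0][0]

# Three hypothetical single-feature rules; the feature vector is assumed
# to be x = (jitter, shimmer, hnr) with invented thresholds.
classifiers = [
    lambda x: "pathological" if x[0] > 1.0 else "healthy",   # jitter rule
    lambda x: "pathological" if x[1] > 3.5 else "healthy",   # shimmer rule
    lambda x: "pathological" if x[2] < 20.0 else "healthy",  # HNR rule
]

print(committee_predict(classifiers, (1.4, 2.0, 15.0)))  # two of three agree
```

A committee pays off precisely when the base learners err on different inputs, which is the motivation for training each SVM on a different feature type rather than on the pooled feature vector.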
6.
SUSTAINABLE LIFETIME VALUE CREATION THROUGH INNOVATIVE PRODUCT DESIGN: A PRODUCT ASSURANCE MODEL. Seevers, K. Daniel, 01 January 2014
In the field of product development, many organizations struggle to create a value proposition that can overcome the headwinds of technology change, regulatory requirements, and intense competition, in an effort to satisfy the long-term goals of sustainability. Today, organizations are realizing that they have lost portfolio value due to poor reliability, early product retirement, and abandoned design platforms. Beyond Lean and Green Manufacturing, shareholder value can be enhanced by taking a broader perspective, and integrating sustainability innovation elements into product designs in order to improve the delivery process and extend the life of product platforms.
This research is divided into two parts that lead to closing the loop towards Sustainable Value Creation in product development. The first part presents a framework for achieving Sustainable Lifetime Value through a toolset that bridges the gap between financial success and sustainable product design. Focus is placed on the analysis of the sustainable value proposition between producers, consumers, society, and the environment, and on the half-life of product platforms. The Half-Life Return Model is presented, designed to provide feedback to producers in the pursuit of improving the return on investment for the primary stakeholders. The second part applies the driving aspects of the framework with the development of an Adaptive Genetic Search Algorithm. The algorithm is designed to improve fault detection and mitigation during the product delivery process. A computer simulation is used to study the effectiveness of the primary aspects introduced in the search algorithm, with the aim of improving the reliability growth of the system during the development life-cycle.
The results of the analysis draw attention to the sensitivity of the driving aspects identified in the product development lifecycle, which affect the long-term goals of sustainable product development. With the use of the techniques identified in this research, cost-effective test case generation can be improved without a major degradation in the diversity of the search patterns required to ensure a high level of fault detection. This in turn can lead to improvements in the driving aspects of the Half-Life Return Model, and ultimately the goal of designing sustainable products and processes.
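One way to read "adaptive genetic search" for test-case generation is a GA whose mutation rate reacts to population diversity, so that fault detection improves without the search patterns collapsing onto one region. The sketch below is an assumption-laden illustration: the seeded-fault model, thresholds, and rates are invented, not the dissertation's algorithm.

```python
import random

# Hypothetical fault model: each seeded fault is "detected" by any test
# input falling in its interval of the input space [0, 100).
FAULTS = [(2, 10), (40, 55), (70, 71), (90, 99)]

def detected(suite):
    return {i for i, (lo, hi) in enumerate(FAULTS)
            for t in suite if lo <= t <= hi}

def diversity(pop):
    """Fraction of distinct test inputs across the population."""
    flat = {t for suite in pop for t in suite}
    return len(flat) / (len(pop) * len(pop[0]))

def adaptive_search(suite_size=4, pop_size=20, gens=80, seed=3):
    rng = random.Random(seed)
    pop = [[rng.randrange(100) for _ in range(suite_size)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda s: -len(detected(s)))
        elite = pop[:pop_size // 2]
        # Adaptive step: mutate more aggressively when diversity collapses.
        mut = 0.6 if diversity(elite) < 0.3 else 0.2
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            child = [x if rng.random() < 0.5 else y for x, y in zip(a, b)]
            if rng.random() < mut:
                child[rng.randrange(suite_size)] = rng.randrange(100)
            children.append(child)
        pop = elite + children
    return max(pop, key=lambda s: len(detected(s)))

best = adaptive_search()
print(sorted(best), len(detected(best)))
```

The narrow fault interval (70, 71) is the interesting case: a purely elitist search rarely samples it, while the diversity-triggered mutation keeps injecting fresh inputs that can.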
7.
Advanced Optimization Models in Waste Management. Procházka, Vít, January 2014
This thesis deals with the optimization of waste collection in a mid-sized town. The model is formulated based on requirements from a real process. To solve this problem, an original memetic algorithm was developed and implemented in C++.
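A memetic algorithm is a genetic algorithm in which every offspring is refined by local search before rejoining the population. The sketch below pairs a simple order crossover with 2-opt on a toy collection tour; the points are invented and the thesis's actual C++ implementation surely differs.

```python
import math
import random

def tour_length(tour, pts):
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(tour, pts):
    """Local search: reverse segments while any reversal shortens the tour.
    This refinement step is what makes the algorithm 'memetic'."""
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour) + 1):
                cand = tour[:i] + tour[i:j][::-1] + tour[j:]
                if tour_length(cand, pts) < tour_length(tour, pts) - 1e-9:
                    tour, improved = cand, True
    return tour

def memetic(pts, pop_size=10, gens=20, seed=7):
    rng = random.Random(seed)
    n = len(pts)
    pop = []
    for _ in range(pop_size):
        t = list(range(n))
        rng.shuffle(t)
        pop.append(two_opt(t, pts))              # local search at birth
    for _ in range(gens):
        a, b = rng.sample(pop, 2)
        cut = rng.randrange(1, n)
        child = a[:cut] + [c for c in b if c not in a[:cut]]  # order crossover
        child = two_opt(child, pts)              # memetic refinement
        pop.sort(key=lambda t: tour_length(t, pts))
        if tour_length(child, pts) < tour_length(pop[-1], pts):
            pop[-1] = child                      # replace worst
    return min(pop, key=lambda t: tour_length(t, pts))

# Hypothetical collection points on a small grid.
pts = [(0, 0), (0, 2), (2, 2), (2, 0), (1, 3)]
best = memetic(pts)
print(round(tour_length(best, pts), 4))
```

A real waste-collection model adds vehicle capacities, time windows, and street-network distances, but the division of labor stays the same: crossover explores tour structure, 2-opt polishes each candidate.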
8.
Predicting Location and Training Effectiveness (PLATE). Bruenner, Erik Rolf, 01 June 2023
Physical activity and exercise have been shown to have an enormous impact on many areas of human health and can reduce the risk of many chronic diseases. In order to better understand how exercise may affect the body, current kinesiology studies are designed to track human movements over large intervals of time. Procedures used in these studies provide a way for researchers to quantify an individual’s activity level over time, along with tracking various types of activities that individuals may engage in. Movement data of research subjects is often collected through various sensors, such as accelerometers. Data from these specialized sensors may be fed into a deep learning model which can accurately predict what movements a person is making based on aggregated sensor data. However, in order for prediction models to produce accurate classifications of activities, they must be ‘trained’. Training occurs through the process of supervised learning on large amounts of data where movements are already known. These training data sets are also known as ‘validation’ data or ‘ground truth’.
Currently, generation of these ground truth sets is very labor-intensive. To generate these labeled data sets, research assistants must analyze many hours of video footage of research subjects. These research assistants painstakingly categorize each video, second by second, with a description of the activity the subject was engaging in. Using only labeled video, the PLATE project facilitates the generation of ground truth data by developing an artificial intelligence (AI) that predicts video quality labels, along with labels that denote the physical location in which these activities occurred.
The PLATE project builds on previous work by a former graduate student, Roxanne Miller. Miller developed a classification system to categorize subject activities into groups such as ‘Stand’, ‘Sit’, ‘Walk’, ‘Run’, etc. The PLATE project focuses instead on developing AI that generates ground truth labels for the quality of video data and for the location shown in the video. In the context of the PLATE project, video quality refers to whether or not a test subject is visible in the frame. Location classifications include ‘indoors’, ‘outdoors’, and ‘traveling’. More specifically, indoor locations are further identified as ‘house’, ‘office’, ‘school’, ‘store’ or ‘commercial’ space. Outdoor locations are further classified as ‘commercial space’, ‘park/greenspace’, ‘residential’ or ‘neighborhood’.
The nature of our location classification problem lends itself particularly well to a hierarchical classification approach, where general indoor, outdoor, or travel categories are predicted, then separate models predict the subclassifications of these categories. The PLATE project uses three convolutional neural networks in its hierarchical location prediction pipeline, and one convolutional neural network to predict if video frames are high or low quality.
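The hierarchical pipeline can be sketched as a top-level router plus per-category refiners. The callables below stand in for the trained CNNs and key on a single made-up "brightness" feature; only the two-stage routing reflects the structure described above.

```python
def hierarchical_predict(frame_features, top_model, sub_models):
    """Two-stage prediction: a general model picks indoor/outdoor/travel,
    then a per-category model refines the label. Category names follow the
    PLATE description; the models are stand-in callables, not trained CNNs."""
    general = top_model(frame_features)
    if general == "travel":                # no sub-categories for travel
        return general, None
    return general, sub_models[general](frame_features)

# Toy stand-ins keyed on one assumed feature; thresholds are invented.
top = lambda f: "outdoor" if f["brightness"] > 0.6 else "indoor"
subs = {
    "indoor": lambda f: "office" if f["brightness"] > 0.4 else "house",
    "outdoor": lambda f: ("park/greenspace" if f["brightness"] > 0.8
                          else "residential"),
}

print(hierarchical_predict({"brightness": 0.9}, top, subs))
```

One consequence of this design is that stage errors compound: a specific-location prediction is correct only when both stages are, which is consistent with the specific-location accuracy reported below trailing the general-location accuracy.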
Results from the PLATE project demonstrate that quality can be predicted with an accuracy of 96%, general location with an accuracy of 75%, and specific locations with an accuracy of 31%. The findings and model produced by the PLATE project are utilized in the PathML project as part of a ground truth prediction software for activity monitoring studies.
PathML is a project funded by the NIH as part of a Small Business Research Initiative. Cal Poly partnered with Sentimetrix Inc, a data analytics/machine learning company, to build a methodology for automated labeling of human physical activity. The partnership aims to utilize this methodology to develop a software tool that performs automatic labeling and facilitates the subsequent human inspection. Phase I (proof of concept) of the project took place from September 2021 to August 2022; Phase II (final software production) is pending. This thesis is part of the research that took place during Phase I and continues to support Phase II development.
9.
A SIMPLE AND EFFECTIVE HYBRID GENETIC SEARCH FOR THE JOB SEQUENCING AND TOOL SWITCHING PROBLEM. MECLER, JORDANA ZERPINI, 19 August 2020
The job sequencing and tool switching problem (SSP) has been extensively studied in the field of operations research, due to its practical relevance and methodological interest. Given a machine that can load a limited number of tools simultaneously and a number of jobs that each require a subset of the available tools, the SSP seeks a job sequence that minimizes the total number of tool switches on the machine. To solve this problem, we propose a simple and efficient hybrid genetic search based on a generic solution representation, a tailored decoding operator, efficient local searches, and diversity management techniques. To guide the search, we introduce a secondary objective designed to break ties. These techniques make it possible to explore structurally different solutions and escape local optima. As shown in our computational experiments on classical benchmark instances, our algorithm significantly outperforms all previous approaches while remaining simple to apprehend and easy to implement. We finally report results on a new set of larger instances to stimulate future research and comparative analyses.
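For a fixed job sequence, the classical Keep-Tool-Needed-Soonest (KTNS) policy of Tang and Denardo (1988) yields the minimum number of switches, which makes it a natural decoding operator for sequence-based representations (whether the thesis's tailored decoder is exactly KTNS is not stated here). The instance below is invented; it assumes the magazine capacity is at least the largest tool set of any single job.

```python
def ktns_switches(sequence, tools_for, capacity):
    """Count tool switches for a fixed job sequence under the KTNS policy:
    when a tool must be loaded into a full magazine, evict the loaded tool
    whose next use lies furthest in the future. Initial magazine loading is
    not counted as switching, a common convention in the SSP literature."""
    magazine, switches = set(), 0
    for pos, job in enumerate(sequence):
        needed = set(tools_for[job])
        for tool in needed - magazine:
            if len(magazine) >= capacity:
                def next_use(t):
                    for k in range(pos + 1, len(sequence)):
                        if t in tools_for[sequence[k]]:
                            return k
                    return len(sequence)        # never used again
                victim = max(magazine - needed, key=next_use)
                magazine.remove(victim)
                switches += 1
            magazine.add(tool)
    return switches

# Hypothetical instance: 4 jobs, 4 tools, magazine holds 2 tools.
tools_for = {0: {0, 1}, 1: {1, 2}, 2: {0, 1}, 3: {2, 3}}
print(ktns_switches([0, 1, 2, 3], tools_for, capacity=2))  # -> 4
print(ktns_switches([0, 2, 1, 3], tools_for, capacity=2))  # -> 2
```

The example also shows why the genetic search operates on sequences alone: swapping jobs 1 and 2 halves the switch count, and the decoder prices every candidate sequence exactly.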
10.
IMPROVED HYBRID GENETIC SEARCH FOR THE INVENTORY ROUTING PROBLEM. BRUNO GUIMARAES DE CASTRO, 15 February 2024
Theme: This study investigates the Inventory Routing Problem (IRP) within the context of Vendor-Managed Inventory (VMI), a prevalent supply chain practice in which suppliers assume responsibility for replenishment. The IRP, a combinatorial problem that has been widely studied for almost 40 years, encompasses three distinct subproblems: delivery scheduling, inventory management, and vehicle routing. Problem: Despite its age, the IRP continues to attract attention from industry and academia. The recent 12th DIMACS Implementation Challenge dedicated a track to the IRP, and among the commonly used benchmarks, 401 instances still lack optimal solutions, particularly in the challenging Large instance subset. Hypothesis and Justification: The HGS framework proposed by Vidal et al. (2012) emerged as a prominent tool used successfully by numerous teams in the competition. However, to the best of our knowledge, the HGS framework had not been tested on the IRP. This study proposes a method combining the HGS framework with an efficient local search strategy, namely NSIRP, proposed by Diniz et al. (2020), to tackle the IRP. Methodology: We implemented the proposed method and compared its performance to 21 existing methods using the literature benchmarks. Summary of Results: Our approach identified 79 new Best Known Solutions (BKS) out of 1100 instances. If applied under the same rules as the DIMACS competition, our method would have secured first place. Contributions and Impacts: This work contributes to the ongoing development of IRP methods, offering an efficient and competitive approach that may inspire further research and practical applications in the realm of inventory management and vehicle routing.
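The three subproblems meet in the objective function, so any IRP search needs a plan evaluator combining routing and holding costs under stockout feasibility. The sketch below is a minimal such evaluator under simplifying assumptions (single vehicle, visit-in-index-order tours, invented data); it is not the HGS+NSIRP method itself, only the kind of fitness function such a search would call.

```python
import math

def evaluate_irp_plan(deliveries, demand, holding, depot, customers):
    """Total cost of a multi-period replenishment plan: routing cost of each
    period's delivery tour plus end-of-period inventory holding cost.
    Returns None if any customer stocks out (infeasible plan)."""
    n = len(customers)
    stock = [0.0] * n
    total = 0.0
    for period_deliveries, period_demand in zip(deliveries, demand):
        visited = [i for i, q in enumerate(period_deliveries) if q > 0]
        if visited:  # naive tour: depot -> visited customers in order -> depot
            stops = [depot] + [customers[i] for i in visited] + [depot]
            total += sum(math.dist(a, b) for a, b in zip(stops, stops[1:]))
        for i in range(n):
            stock[i] += period_deliveries[i] - period_demand[i]
            if stock[i] < 0:
                return None                   # stockout
            total += holding[i] * stock[i]    # end-of-period holding cost
    return total

# Hypothetical two-customer, two-period instance.
depot = (0, 0)
customers = [(0, 3), (4, 0)]
demand = [[1, 2], [1, 2]]        # per-period demand per customer
holding = [0.5, 0.5]
plan = [[2, 4], [0, 0]]          # deliver everything up front
print(evaluate_irp_plan(plan, demand, holding, depot, customers))  # -> 13.5
```

Comparing this plan with just-in-time deliveries `[[1, 2], [1, 2]]` (two tours of length 12, cost 24.0) shows the core IRP trade-off: front-loading pays 1.5 in holding cost to save a whole routing tour.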