131

Inferring Genetic Regulatory Networks Using Cost-based Abduction and Its Relation to Bayesian Inference

Andrews, Emad Abdel-Thalooth 16 July 2014 (has links)
Inferring Genetic Regulatory Networks (GRN) from multiple data sources is a fundamental problem in computational biology. Computational models for GRN range from simple Boolean networks to stochastic differential equations. To successfully model GRN, a computational method has to be scalable and capable of integrating different biological data sources effectively and homogeneously. In this thesis, we introduce a novel method to model GRN using Cost-Based Abduction (CBA) and study the relation between CBA and Bayesian inference. CBA is an important AI formalism for reasoning under uncertainty that can integrate different biological data sources effectively. We use three different yeast genome data sources (protein-DNA, protein-protein, and knock-out data) to build a skeleton (unannotated) graph which acts as a theory to build a CBA system. The Least Cost Proof (LCP) for the CBA system fully annotates the skeleton graph to represent the learned GRN. Our results show that CBA is a promising tool in computational biology in general and in GRN modeling in particular because CBA knowledge representation can intrinsically implement the AND/OR logic in GRN while enforcing cis-regulatory logic constraints effectively, allowing the method to operate on a genome-wide scale. Besides allowing us to successfully learn yeast pathways such as the pheromone pathway, our method is scalable enough to analyze the full yeast genome in a single CBA instance, without sub-networking. The scalability of our method comes from the fact that our CBA model size grows in a quadratic, rather than exponential, manner with respect to data size and path length. We also introduce a new algorithm to convert CBA into an equivalent binary linear program that computes the exact LCP for the CBA system, thus reaching the optimal solution. Our work establishes a framework to solve Bayesian networks using integer linear programming and high order recurrent neural networks through CBA as an intermediate representation.
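As a point of reference only (a schematic textbook-style formulation, not the conversion algorithm introduced in the thesis), the least cost proof computation mentioned above can be posed as a 0-1 (binary) linear program. With H the set of assumable hypotheses with costs c_h, binary variables x_a marking the atoms taken as proved, y_r marking the rules used, and g the goal atom:

    \min_{x,y}\; \sum_{h \in H} c_h\, x_h
    \quad \text{s.t.}\quad
    x_g = 1, \qquad
    x_a \le \sum_{r:\,\mathrm{head}(r)=a} y_r \ \ \text{for every non-hypothesis atom } a, \qquad
    y_r \le x_b \ \ \text{for every rule } r \text{ and every } b \in \mathrm{body}(r), \qquad
    x_a,\, y_r \in \{0,1\}.

Every feasible assignment encodes a proof of the goal, and a minimizer is a least cost proof; the formulation is given here only to make the phrase "binary linear program" concrete.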
132

Diagnostic en ligne des systèmes à événements discrets complexes : approche mixte logique/probabiliste / Online diagnosis for complex discrete event systems : mixed approach based on logical/probabilistic

Nguyen, Dang-Trinh 15 October 2015 (has links)
Today's manufacturing systems are challenged by increasing demand diversity and volume, which result in short product life cycles and the emergence of high-mix, low-volume production, of which the semiconductor industry is a characteristic example. One of the main objectives in the manufacturing domain is therefore to reduce cycle time while ensuring product quality at reduced cost. In such a competitive environment, product quality is ensured by introducing more rigorous controls at each production step, which extends cycle times and increases production costs. This can be reduced by introducing run-to-run (R2R) loops, in which product quality is controlled after multiple consecutive production steps. However, a product quality drift detected by metrology at the end of a run-to-run loop results in stopping the corresponding sequence of production equipment. Manufacturing systems are equipped with sensors that provide the basis for real-time monitoring and diagnosis; however, the placement of these sensors is constrained by the equipment structure and the functions they perform, and sensors cannot be placed everywhere across the equipment because of the associated data-analysis challenge. This leaves non-observable components that limit our ability to support effective real-time monitoring and fault-diagnosis initiatives. Consequently, the production equipment in an R2R loop is stopped upon drift detection at the inspection step, because we are unable to diagnose which equipment or components are responsible for the drift; production capacity is thus reduced not by faulty equipment or components but by our inability to diagnose them efficiently and effectively. In this scenario, the key challenge is to diagnose the faulty equipment and localize the failure(s) behind these unscheduled equipment breakdowns. The situation becomes more complex if the potential failures are unknown and require expert intervention before corrective maintenance can be applied; in addition, new failures can emerge as a consequence of other failures and of delays in their localization and detection. The success of the manufacturing domain, in such a competitive environment, therefore depends on quick and accurate fault isolation, detection and diagnosis. This thesis proposes a methodology that exploits historical data on the executed operations, restricted to the suspect ones, to reduce the search space of potentially faulty components and then diagnose failures and their causes more accurately; the key focus is to improve the effectiveness and efficiency of real-time monitoring of potentially faulty components and of cause diagnosis. The approach builds on the Logical Diagnosis model (Deschamps et al., 2007), which offers real-time diagnosis in an automated production system: it reduces the search space of faulty equipment for a given production flow and optimizes the learning step for the subsequent Bayesian network (BN). The BN model, built on the graphical structure received from the Logical Diagnosis model, then computes joint and conditional probabilities for each node in order to rank (score) the candidates to inspect first and to support corrective-maintenance decisions upon scheduled and unscheduled equipment breakdowns. The resulting method, at the crossroads of a deterministic and a probabilistic approach in a dynamic context, enables real-time diagnosis for corrective maintenance in fully or semi-automated manufacturing systems. A software application was developed to validate the proposal on a real case study; the results are encouraging and have led to publications in international conferences and a submission to the International Journal of Risk and Reliability.
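To make the last step concrete, the following is a minimal sketch of ranking suspect equipment by posterior fault probability once a drift has been observed. The equipment names, the probabilities and the use of the pgmpy library are assumptions for illustration, not the implementation described in the thesis.

    # Hypothetical suspect graph: two machines (M1, M2) upstream of the metrology
    # step can each cause the observed quality drift.
    from pgmpy.models import BayesianNetwork
    from pgmpy.factors.discrete import TabularCPD
    from pgmpy.inference import VariableElimination

    model = BayesianNetwork([("M1", "Drift"), ("M2", "Drift")])

    cpd_m1 = TabularCPD("M1", 2, [[0.95], [0.05]])   # P(M1 faulty) = 0.05
    cpd_m2 = TabularCPD("M2", 2, [[0.90], [0.10]])   # P(M2 faulty) = 0.10
    cpd_drift = TabularCPD(                          # P(Drift | M1, M2)
        "Drift", 2,
        [[0.99, 0.10, 0.20, 0.02],                   # row: Drift = 0
         [0.01, 0.90, 0.80, 0.98]],                  # row: Drift = 1
        evidence=["M1", "M2"], evidence_card=[2, 2],
    )
    model.add_cpds(cpd_m1, cpd_m2, cpd_drift)

    infer = VariableElimination(model)
    # Drift observed at the inspection step: score each candidate by P(faulty | Drift = 1).
    scores = {m: infer.query([m], evidence={"Drift": 1}).values[1] for m in ["M1", "M2"]}
    print(sorted(scores.items(), key=lambda kv: -kv[1]))  # visit the highest score first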
133

Modelagem computacional para cálculo da lâmina ideal para irrigação de cana-de-açúcar. / Computational modeling to calculate the optimal amount of irrigation of sugarcane.

Barros, Petrucio Antonio Medeiros 04 December 2009 (has links)
Sugarcane has been prominent in Brazil's national economy from colonization to the present day, and is largely responsible for the country's position as the world's largest producer and exporter of sugar and ethanol. Irrigation is the main technology for countering the randomness of rainfall and reducing oscillations in productivity, but its economic success depends on the amount of water applied, since irrigation is expensive. Seeking the minimum amount of applied water that yields the maximum return on investment (deficit irrigation) is therefore a recommended strategy for sugarcane, especially with mobile sprinkler irrigation, which makes it possible to irrigate selected areas according to their needs. Using productivity, applied-water and rainfall data provided by the Coruripe sugar mill (head office), production functions were developed for the varieties RB 92579, RB 867515 and RB 93509. From these, the amounts of irrigation water needed to obtain the maximum productivity, the maximum net income, and the net income equivalent to that of maximum productivity were calculated. The results indicate that applying a reduced amount of water to obtain a net income equivalent to that of full irrigation is a good strategy; where more resources are available, the maximum return on investment should be sought. On this basis, Bayesian-network models were built to estimate net income, handle the randomness of rainfall, and support inferences and simulations. The knowledge represented in the networks shows that when it rains more than expected, irrigation often becomes economically unviable, whereas when rainfall is scarce, irrigation is financially viable and essential to the continuity of the business.
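As a purely illustrative numeric sketch (hypothetical coefficients and prices, not the Coruripe data or the fitted production functions of the work), the comparison between the water depth that maximizes yield and the one that maximizes net income can be worked out as follows:

    def yield_t_ha(w):
        """Hypothetical quadratic production function: yield (t/ha) vs applied water w (mm)."""
        return 60.0 + 0.12 * w - 0.00012 * w * w

    PRICE = 25.0        # hypothetical revenue per tonne of cane
    WATER_COST = 0.9    # hypothetical cost per mm of applied water (per ha)

    def net_income(w):
        return PRICE * yield_t_ha(w) - WATER_COST * w

    # Closed-form optima of the two quadratics:
    w_max_yield = 0.12 / (2 * 0.00012)                           # dY/dw = 0      -> 500 mm
    w_max_income = (0.12 - WATER_COST / PRICE) / (2 * 0.00012)   # dIncome/dw = 0 -> 350 mm

    print(f"max yield : {w_max_yield:.0f} mm, income {net_income(w_max_yield):.0f}")
    print(f"max income: {w_max_income:.0f} mm, income {net_income(w_max_income):.0f}")

With these made-up numbers the income-maximizing depth is 150 mm smaller than the yield-maximizing one and still earns more, which is the deficit-irrigation argument of the abstract in miniature.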
134

Rede Bayesiana empregada no gerenciamento da saúde dos sistemas na computação em nuvem / Bayesian networks applied to the health management of systems in cloud computing

Alves, Renato dos Santos 10 August 2016 (has links)
Cloud computing is a convenient computing model because it allows ubiquitous, on-demand access to a set of configurable, shared resources that can be rapidly provisioned and made available with minimal effort or interaction with the service provider. IaaS is one way of delivering cloud computing, in which the server infrastructure, networking, storage and the whole environment needed to run the operating system and the application are contracted as services. Even so, traditional companies still have doubts about transferring their data beyond the limits of the corporation. The health of cloud computing systems is fundamental to the business, and given their complexity it is difficult to ensure that all services and resources will work properly. To support more adequate management of systems and services in the cloud, this work proposes a cloud system health-diagnosis architecture. The architecture is modularized into specialized functions for monitoring, data mining, and inference with a Bayesian network. Event records from the monitoring of systems and computing resources are essential in this architecture, because the recorded data are mined to identify failure patterns following one or more events in the environment. Two algorithms were proposed for mining the monitoring data: one for preprocessing and one for data transformation. The resulting data sets were the input for building the Bayesian networks: through structural and parametric learning algorithms, a Bayesian network was created for each system and service offered by the cloud. The Bayesian networks are intended to assist decision making on the prevention, prediction and correction of failures in systems and services, allowing system health and performance to be managed more appropriately. To verify how well the architecture supports fault diagnosis, the inference accuracy of the Bayesian networks was validated by cross-validation using the data sets generated from the monitoring of systems and services.
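A minimal sketch of the last two stages described above, assuming synthetic data, hypothetical column names, and the pgmpy and scikit-learn libraries (not necessarily the tools used in the work): learn a Bayesian network from mined monitoring events and estimate its diagnostic accuracy by cross-validation.

    import numpy as np
    import pandas as pd
    from sklearn.model_selection import KFold
    from pgmpy.estimators import HillClimbSearch, BicScore, MaximumLikelihoodEstimator
    from pgmpy.models import BayesianNetwork

    # Synthetic stand-in for the preprocessed/transformed monitoring events.
    rng = np.random.default_rng(0)
    n = 2000
    cpu_alarm = rng.integers(0, 2, n)
    disk_alarm = rng.integers(0, 2, n)
    service_down = ((cpu_alarm | disk_alarm) & (rng.random(n) < 0.8)).astype(int)
    events = pd.DataFrame({"cpu_alarm": cpu_alarm, "disk_alarm": disk_alarm,
                           "service_down": service_down})

    accuracies = []
    for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(events):
        train, test = events.iloc[train_idx], events.iloc[test_idx]

        # Structural learning (hill climbing, BIC score), then parametric learning.
        dag = HillClimbSearch(train).estimate(scoring_method=BicScore(train))
        model = BayesianNetwork(dag.edges())
        model.fit(train, estimator=MaximumLikelihoodEstimator)

        # Infer the health variable from the other monitors and score the fold.
        predicted = model.predict(test.drop(columns=["service_down"]))
        accuracies.append((predicted["service_down"].values ==
                           test["service_down"].values).mean())

    print(sum(accuracies) / len(accuracies))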
135

Implémentation sur SoC des réseaux Bayésiens pour l'état de santé et la décision dans le cadre de missions de véhicules autonomes / SoC implementation of Bayesian networks for health management and decision making for autonomous vehicles missions

Zermani, Sara 21 November 2017 (has links)
Autonomous vehicles, such as drones, are used in different application areas to perform simple or complex missions. On the one hand, they generally operate in uncertain environmental conditions, which can lead to disastrous consequences for humans and the environment. It is therefore necessary to continuously monitor the health of the system in order to detect and locate failures and to make decisions in real time; these decisions must maximize the ability to meet the mission objectives while maintaining the safety requirements. On the other hand, such vehicles must perform tasks with large computation demands under performance constraints, so dedicated hardware accelerators are needed to offload the processor and meet the required computational speed-up. This is what we sought to demonstrate in this dual-objective thesis. The first objective is to define a model for health management and decision making. To this end, we use Bayesian networks, which are efficient probabilistic graphical models for diagnosis and decision making under uncertainty, and we propose a generic model based on a Failure Modes and Effects Analysis (FMEA) that takes into account the different observations from the monitors and the contexts in which errors appear. The second objective is the design and realization of hardware accelerators for Bayesian networks in general, and for our health-management and decision models in particular. As no tool existed for the embedded implementation of Bayesian-network computation, we propose a complete software workbench, going from a graphical or textual Bayesian network to the generation of the bitstream ready for software or hardware implementation on an FPGA. Finally, we test and validate our implementations on the Xilinx ZedBoard, which incorporates an ARM Cortex-A9 processor and an FPGA.
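To make explicit the kind of arithmetic such an accelerator has to evaluate, here is a tiny pure-Python enumeration over an FMEA-style fragment with one failure mode F, one monitor M and one appearance context C. The variables and probabilities are made up for illustration; the point is that the posterior is a normalized sum of products, which is the kernel a hardware implementation computes.

    P_F = {0: 0.98, 1: 0.02}      # prior on the failure mode
    P_C = {0: 0.7, 1: 0.3}        # prior on the error-appearance context
    P_M = {                       # P(M = 1 | F, C): probability that the monitor fires
        (0, 0): 0.01, (0, 1): 0.05,
        (1, 0): 0.90, (1, 1): 0.60,
    }

    def posterior_failure(m_obs, c_obs):
        """P(F = 1 | M = m_obs, C = c_obs) by direct enumeration (sum of products)."""
        weights = {}
        for f in (0, 1):
            p_m = P_M[(f, c_obs)] if m_obs == 1 else 1.0 - P_M[(f, c_obs)]
            weights[f] = P_F[f] * P_C[c_obs] * p_m
        return weights[1] / (weights[0] + weights[1])

    print(posterior_failure(m_obs=1, c_obs=0))  # monitor fired in the nominal context
    print(posterior_failure(m_obs=1, c_obs=1))  # monitor fired in a degraded context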
136

Diagnóstico e tratamento de falhas críticas em sistemas instrumentados de segurança. / Diagnosis and treatment of critical faults in safety instrumented systems.

Reinaldo Squillante Júnior 02 December 2011 (has links)
Safety Instrumented Systems (SIS) are designed to prevent and/or mitigate accidents, avoiding undesirable high-risk scenarios, protecting people's health and the environment, and saving costs on industrial equipment. The use of formal methods in SIS design is strongly recommended to assure the safety specifications in accordance with the applicable standards, mainly to reach the desired safety integrity level (SIL). In addition, safety standards such as ANSI/ISA S.84.01, IEC 61508 and IEC 61511, among others, prescribe activities related to the safety life cycle (SLC) of an SIS design. Among these are the development and validation of control algorithms in which the aspects devoted to the diagnosis of critical faults are semantically separated from their treatment, associated with a coordination control to filter out the occurrence of spurious failures. In this context, the contribution of this work is a formal method for the modeling and analysis of SIS, including the diagnosis and treatment of critical faults, based on Bayesian networks (BN) and Petri nets (PN). The approach considers the diagnosis and treatment of each safety instrumented function (SIF) obtained from the risk analysis carried out with the HAZOP (Hazard and Operability) methodology.
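For readers less familiar with the second formalism named above, the elementary Petri net firing rule can be sketched in a few lines; the places, weights and the safe-state example are hypothetical and are not the thesis model.

    def enabled(marking, pre):
        """A transition is enabled when every input place holds enough tokens."""
        return all(marking.get(p, 0) >= w for p, w in pre.items())

    def fire(marking, pre, post):
        """Consume tokens from the input places, produce tokens in the output places."""
        if not enabled(marking, pre):
            raise ValueError("transition not enabled")
        new = dict(marking)
        for p, w in pre.items():
            new[p] -= w
        for p, w in post.items():
            new[p] = new.get(p, 0) + w
        return new

    # A SIF-like treatment step: a confirmed critical fault drives the process to a safe state.
    m0 = {"critical_fault_confirmed": 1, "process_running": 1}
    m1 = fire(m0, pre={"critical_fault_confirmed": 1, "process_running": 1},
                  post={"safe_state": 1})
    print(m1)  # {'critical_fault_confirmed': 0, 'process_running': 0, 'safe_state': 1}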
137

Sistema de apoio à decisão de gerenciamento de risco de Clostridium estertheticum, em matadouro-frigorífico de bovinos / Decision support system for risk management for Clostridium estertheticum bovine slaughterhouses

MELO, Camila Silveira de 07 October 2011 (has links)
Blown pack is a spoilage process characterized by the build-up of gas inside the packaging of chilled meat cuts, giving them a repulsive appearance. In Brazil, this spoilage has been described in many states, mainly in meat for export, which needs a longer shelf life. The main cause of blown pack is attributed to Clostridium estertheticum, a bacterium that multiplies easily at refrigeration temperatures and under anaerobic conditions, both of which are found in vacuum-packed chilled meat. Managing this bacterium along the slaughter flowchart is very difficult because of its sporulated form and its ease of dissemination and growth during meat processing. This work therefore aimed to propose a decision support system for the risk management of Clostridium estertheticum in the bovine slaughter flowchart and in vacuum-packed chilled beef cuts. To this end, the Control Points and Critical Control Points for the bacterium were identified, and the risks in the production of vacuum-packed chilled beef cuts were classified and quantified. The decision support system was based on the concepts of risk assessment and Bayesian networks and was built in the Netica shell, with the probabilities entered manually during meetings with experts in the area. The first structure of the proposed system was built for the classification and quantification of microbiological risks, having as parent node the risk characterization of blown pack; the child nodes linked to it were based on risk exposure and hazard characterization for the spoilage organisms implicated in blown pack of meat cuts. In the second part of the system, the slaughter operations and the determining factors of contamination in the processes were selected: the parent nodes identified the contamination risks and the child nodes the slaughter conditions. The performance of the system was assessed through its specificity and sensitivity. The proposed model performed satisfactorily and was faithful to the production reality. The system indicated a high risk of blown pack from C. estertheticum, lactic acid bacteria and Enterobacteriaceae, and its performance in identifying contamination risk along the slaughter flowchart was high, with 100% specificity and sensitivity. The proposed model produced clear diagnoses, pointing out the operations that require more attention from the risk manager.
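As a reminder of how the two reported performance measures are computed (the confusion-matrix counts below are hypothetical, not the validation data of the work):

    def sensitivity(tp, fn):
        return tp / (tp + fn)   # true-positive rate: contaminated cases correctly flagged

    def specificity(tn, fp):
        return tn / (tn + fp)   # true-negative rate: clean cases correctly cleared

    # Hypothetical counts for a validation set of slaughter operations:
    print(sensitivity(tp=18, fn=0), specificity(tn=42, fp=0))   # 1.0 1.0, i.e. 100 %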
138

Low complexity turbo equalization using superstructures

Myburgh, Hermanus Carel January 2013 (has links)
In a wireless communication system the transmitted information is subjected to a number of impairments, among which inter-symbol interference (ISI), thermal noise and fading are the most prevalent. Owing to the dispersive nature of the communication channel, ISI results from the arrival of multiple delayed copies of the transmitted signal at the receiver. Thermal noise is caused by the random fluctuation of electrons in the receiver hardware, while fading is the result of constructive and destructive interference, as well as absorption during transmission. To protect the source information, error-correction coding (ECC) is performed in the transmitter, after which the coded information is interleaved in order to temporally separate the information to be transmitted. Turbo equalization (TE) is a technique whereby equalization (to correct ISI) and decoding (to correct errors) are performed iteratively, by exchanging extrinsic information formed from the optimal posterior probabilistic information produced by each algorithm. The extrinsic information determined from the decoder output is used as prior information by the equalizer, and vice versa, allowing the bit-error rate (BER) performance to be improved with each iteration. Turbo equalization achieves excellent BER performance, but its computational complexity grows exponentially with channel memory as well as with encoder memory, and it can therefore not be used in dispersive channels where the channel memory is large. A number of low complexity equalizers have consequently been developed to replace the maximum a posteriori probability (MAP) equalizer in order to reduce the complexity. Some of the resulting low complexity turbo equalizers achieve performance comparable to that of a conventional turbo equalizer that uses a MAP equalizer. In other cases the low complexity turbo equalizers perform much worse than the corresponding conventional turbo equalizer (CTE) because of suboptimal equalization and the inability of the low complexity equalizers to utilize the extrinsic information effectively as prior information. In this thesis the author develops two novel iterative low complexity turbo equalizers. The turbo equalization problem is modeled on superstructures, where, in the context of this thesis, a superstructure performs the task of both the equalizer and the decoder. The resulting low complexity turbo equalizers process all the available information as a whole, so there is no exchange of extrinsic information between different subunits. The first is modeled on a dynamic Bayesian network (DBN), treating the turbo equalization problem as a quasi-directed acyclic graph that allows a dominant connection between the observed variables and their corresponding hidden variables, as well as weak connections between the observed variables and past and future hidden variables. The resulting turbo equalizer is named the dynamic Bayesian network turbo equalizer (DBN-TE). The second low complexity turbo equalizer developed in this thesis is modeled on a Hopfield neural network, and is named the Hopfield neural network turbo equalizer (HNN-TE). The HNN-TE is an amalgamation of the HNN maximum likelihood sequence estimation (MLSE) equalizer, developed previously by this author, and an HNN MLSE decoder derived from a single codeword HNN decoder.
Both the low complexity turbo equalizers developed in this thesis are able to jointly and iteratively equalize and decode coded, randomly interleaved information transmitted through highly dispersive multipath channels. The performance of both these low complexity turbo equalizers is comparable to that of the conventional turbo equalizer while their computational complexities are superior for channels with long memory. Their performance is also comparable to that of other low complexity turbo equalizers, but their computational complexities are worse. The computational complexity of both the DBN-TE and the HNN-TE is approximately quadratic at best (and cubic at worst) in the transmitted data block length, exponential in the encoder constraint length and approximately independent of the channel memory length. The approximate quadratic complexity of both the DBN-TE and the HNN-TE is mostly due to interleaver mitigation, requiring matrix multiplication, where the matrices have dimensions equal to the data block length, without which turbo equalization using superstructures is impossible for systems employing random interleavers. / Thesis (PhD)--University of Pretoria, 2013.
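For reference, the extrinsic-information exchange of the conventional turbo equalizer described above (the bookkeeping that the superstructure approach deliberately avoids) takes the standard log-likelihood-ratio form:

    L_e^{E}(c_k) = L^{E}(c_k \mid \mathbf{r}, L_a^{E}) - L_a^{E}(c_k), \qquad L_a^{D} = \Pi^{-1}(L_e^{E}),
    L_e^{D}(c_k) = L^{D}(c_k \mid L_a^{D}) - L_a^{D}(c_k), \qquad L_a^{E} = \Pi(L_e^{D}),

where L^{E} and L^{D} are the a posteriori log-likelihood ratios produced by the equalizer and the decoder for coded bit c_k, L_a^{E} and L_a^{D} are the corresponding a priori values, r is the received sequence, and \Pi, \Pi^{-1} denote interleaving and deinterleaving.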
139

Amélioration des techniques d'optimisation combinatoire par retour d'expérience dans le cadre de la sélection de scénarios de Produit/Projet / Improvement of combinatorial optimization using experience feedback mechanism

Pitiot, Paul 25 May 2009 (has links)
Defining and using a model that couples product design and project management from the earliest phases of a system study corresponds to a strong industrial demand. Such a model makes it possible to take into account, simultaneously, decisions coming from the two environments (product and project), but it considerably increases the dimension of the search space that the decision-support system must explore, in particular for multi-objective optimization. Metaheuristics such as evolutionary algorithms are an interesting way to solve this highly combinatorial problem. The problem nevertheless has an interesting, unexploited characteristic: it is common to reuse, by adapting them, components or procedures previously implemented in past products and projects. The idea put forward in this work is to use this available a priori knowledge to guide the evolutionary algorithm's search for new solutions. Bayesian networks were chosen for the interactive modeling of expert knowledge, and new evolutionary operators were defined to exploit the knowledge contained in the network. In addition, the system is completed by a parametric learning process during optimization, which makes it possible to adapt the model if the guidance does not give good results. The proposed method provides a faster and more effective optimization, and it also gives the decision maker a graphical, interactive knowledge model associated with the studied project. An experimental platform was built to validate the approach.
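A minimal sketch of what a knowledge-guided variation operator can look like, with hypothetical design variables and probabilities in plain Python (the thesis defines its own operators over the coupled product/project model): part of the offspring is drawn by ancestral sampling from the expert Bayesian network instead of by blind random generation.

    import random

    # Tiny expert network: the manufacturing process depends on the chosen component.
    P_COMPONENT = {"std_part": 0.7, "custom_part": 0.3}
    P_PROCESS = {                                   # P(process | component)
        "std_part":    {"machining": 0.8, "casting": 0.2},
        "custom_part": {"machining": 0.3, "casting": 0.7},
    }

    def sample_from(dist):
        r, acc = random.random(), 0.0
        for value, p in dist.items():
            acc += p
            if r <= acc:
                return value
        return value

    def guided_individual():
        """Ancestral sampling: sample parents first, then children given their parents."""
        component = sample_from(P_COMPONENT)
        process = sample_from(P_PROCESS[component])
        return {"component": component, "process": process}

    def random_individual():
        return {"component": random.choice(list(P_COMPONENT)),
                "process": random.choice(["machining", "casting"])}

    # One generation mixing guided and purely random offspring (the mixing ratio is a free
    # knob of this sketch).
    population = [guided_individual() if random.random() < 0.8 else random_individual()
                  for _ in range(10)]
    print(population)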
140

Localisation précise et fiable de véhicules par approche multisensorielle / Accurate and reliable vehicle localization thanks to a multisensor approach

Aynaud, Claude 08 December 2015 (has links)
Vehicle localization is a crucial step in the development of smart vehicles, and research on the subject has grown considerably in recent years. The effort is usually focused on localization accuracy; we present here a localization method on an existing map whose objective is to estimate the robot's position not only accurately but also reliably. To achieve this, the algorithm has two main steps: the selection and perception of the most relevant information, and the update of the estimated position and of its confidence, the latter step also allowing previous errors to be detected and then repaired or eliminated. Perception of the environment is carried out with different sensors associated with specific detectors. Humans likewise use different senses and intuitively select the most effective one depending on the situation: with enough illumination we rely on our eyes, otherwise on hearing or touch. We have developed a similar approach for the robot, which takes the environmental constraints and the current position estimate into account to select, at each instant, the most relevant combination of sensor, landmark and detector. The perception step, driven by a top-down process, can exploit already known information, allowing the search to focus on the expected landmark and improving the detection and data-association steps. This top-down approach relies on a Bayesian network, which models the interactions between the different events while handling uncertainty, and which offers great flexibility for adding further events that can cause false detections (such as sensor failures or meteorological conditions). The environment data come from a georeferenced map (from a GIS); with maps readily available on the internet, this makes the best use of already existing information. A georeferenced map also provides a common reference frame between vehicles and infrastructure elements, which simplifies the exchange of information and can be very useful for multi-vehicle coordination, for example. The results show that the developed approach gives accurate and reliable localization, both static and dynamic, precise enough to support autonomous driving applications, and new sensors can be added naturally without requiring specific heuristics.
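As an illustration of the top-down selection step (hypothetical sensors, landmarks, detectors and probabilities; the actual system reasons over richer context variables with a Bayesian network), picking the perception triple with the best expected detection for the current context can be sketched as:

    DETECTION_PROB = {
        # (sensor, landmark, detector): P(successful detection | context)
        ("camera", "road_marking", "edge_detector"): {"day": 0.90, "night": 0.30},
        ("camera", "building_facade", "template"):   {"day": 0.75, "night": 0.20},
        ("lidar",  "building_facade", "plane_fit"):  {"day": 0.80, "night": 0.80},
        ("lidar",  "curb", "line_fit"):              {"day": 0.65, "night": 0.65},
    }

    def select_perception(context):
        """Return the most promising (sensor, landmark, detector) for this context."""
        return max(DETECTION_PROB, key=lambda triple: DETECTION_PROB[triple][context])

    print(select_perception("day"))    # a camera-based detector wins in daylight
    print(select_perception("night"))  # a lidar-based detector wins at night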
