161
A combinatorial study of soundness and normalization in n-graphs
ANDRADE, Laís Sousa de, 29 July 2015
N-Graphs is a multiple-conclusion natural deduction system with proofs as directed graphs, motivated by the idea of proofs as geometric objects and aimed at the study of the geometry of Natural Deduction systems. Following that line of research, this work revisits the system from a purely combinatorial perspective, determining geometrical conditions on the graphs of proofs to explain its soundness criterion and proof growth during normalization. Applying recent developments in the fields of proof graphs, proof-nets and N-Graphs itself, we propose a linear-time algorithm for proof verification of the full system, a result that can be related to the proof-net solutions of Murawski (2000) and Guerrini (2011), and a normalization procedure based on the notion of sub-N-Graphs, introduced by Carvalho in 2014. We first present a new soundness criterion for meta-edges, along with the extension of Carvalho's sequentization proof to the full system. For this criterion we define an algorithm for proof verification that uses a DFS-like search to find invalid cycles in a proof-graph. Since the soundness criterion for proof graphs is analogous to the proof-net procedure, the algorithm can also be extended to check proofs in multiplicative linear logic without units (MLL−) in linear time. The new normalization proposed here combines a modified version of Alves' (2009) original beta and permutative reductions with an adaptation of Carbone's duplication operation on sub-N-Graphs. The procedure is simpler than the original one and works as an extension of both the normalization defined by Prawitz and the combinatorial study developed by Carbone, i.e., normal proofs enjoy the separation and subformula properties and have a structure that can represent how patterns lying in normal proofs can be recovered from the graph of the original proof with cuts.
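As an illustration of the verification idea described above, the following is a minimal sketch of a DFS-based cycle check on a directed graph, assuming the proof-graph is given as an adjacency list. The predicate deciding which cycles count as invalid is a placeholder: the thesis's actual soundness criterion for meta-edges is not reproduced here. The traversal runs in time linear in the number of vertices and edges, matching the linear-time claim.

```python
# Illustrative sketch only: a DFS colouring that reports directed cycles in
# O(V + E) time. In N-Graphs only certain cycles violate soundness, so
# `is_invalid_cycle` below is a stand-in predicate, not the thesis's criterion.

WHITE, GREY, BLACK = 0, 1, 2   # unvisited, on the current DFS path, finished

def find_invalid_cycles(graph, is_invalid_cycle=lambda cycle: True):
    colour = {v: WHITE for v in graph}
    parent = {}
    invalid = []

    def dfs(u):
        colour[u] = GREY
        for v in graph.get(u, []):
            if colour[v] == WHITE:
                parent[v] = u
                dfs(v)
            elif colour[v] == GREY:      # back edge u -> v closes a cycle v ... u
                cycle, w = [u], u
                while w != v:
                    w = parent[w]
                    cycle.append(w)
                cycle.reverse()
                if is_invalid_cycle(cycle):
                    invalid.append(cycle)
        colour[u] = BLACK

    for v in graph:
        if colour[v] == WHITE:
            dfs(v)
    return invalid

if __name__ == "__main__":
    # Toy directed graph with one cycle b -> c -> d -> b.
    g = {"a": ["b"], "b": ["c"], "c": ["d"], "d": ["b"], "e": []}
    print(find_invalid_cycles(g))    # [['b', 'c', 'd']]
```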
162
Modeling and analysis of telemental health systems with Petri nets
Aeschliman, Ryan, January 1900
Master of Science, Industrial & Manufacturing Systems Engineering (David H. Ben-Arieh)
Telemental health systems, a form of telemedicine, use electronic communication media to provide patients in remote locations with access to psychological and psychiatric specialists. The structure of telemental health systems has a major impact on their performance. Discrete-event simulations offer useful results concerning capacities and utilization of specific resources; simulation, however, cannot provide theoretical properties of the analyzed systems. Petri net representations of systems can overcome this shortfall, offering a wide range of easily analyzed and useful properties. Their ability to model resource conflict, parallel activities, and failure modes fits nicely with the reality of telemental health systems. Analysis of the behavioral properties of Petri nets can provide meaningful information for system analysts; the most useful properties include net boundedness, liveness, and non-reachability of certain undesirable states. The thesis discusses methods to establish all of these properties. Specifically, it provides property-preserving net reductions to facilitate the analysis of boundedness and liveness, and describes an integer programming model to solve reachability and coverability problems.
Moreover, this thesis outlines a simulation analysis of synchronous and asynchronous telemental health systems and then describes a Petri net model of a generic telemental health delivery system, to which the integer programming model and net reductions are applied. The integer programming analysis indicated that the number of resources in the system remains static, that full utilization of resources at a given time is possible, that conflict over resources is possible, and that improper work prioritization is possible within the model. Net reduction and analysis with open-source software showed that the model is bounded and live. These results can aid telemedicine system architects in diagnosing potential process issues, and the methods described here provide an excellent tool for further, more granular analysis of telemedicine systems.
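To give a concrete flavour of the kind of analysis mentioned above, the sketch below enumerates the reachable markings of a toy Petri net (one clinician resource serving a queue of patients) by repeatedly firing enabled transitions, which is the brute-force counterpart of boundedness and reachability analysis. The net, its initial marking and the exploration bound are invented for illustration; this is not the thesis's telemental health model or its integer programming formulation.

```python
# Illustrative sketch: breadth-first exploration of a Petri net's reachable
# markings. The toy net below (a clinician resource serving waiting patients)
# is invented; it is not the thesis's telemental health model.

from collections import deque

# places: 0 = waiting patients, 1 = free clinicians, 2 = sessions in progress
TRANSITIONS = {
    "start_session": ((1, 1, 0), (0, 0, 1)),   # consumes a patient and a clinician
    "end_session":   ((0, 0, 1), (0, 1, 0)),   # frees the clinician
}

def enabled(marking, consume):
    return all(m >= c for m, c in zip(marking, consume))

def fire(marking, consume, produce):
    return tuple(m - c + p for m, c, p in zip(marking, consume, produce))

def explore(initial, limit=10_000):
    """Return the reachable markings, or None if the exploration bound is hit."""
    seen, frontier = {initial}, deque([initial])
    while frontier:
        m = frontier.popleft()
        for consume, produce in TRANSITIONS.values():
            if enabled(m, consume):
                nxt = fire(m, consume, produce)
                if nxt not in seen:
                    if len(seen) >= limit:
                        return None           # possibly unbounded or too large
                    seen.add(nxt)
                    frontier.append(nxt)
    return seen

if __name__ == "__main__":
    reachable = explore((3, 1, 0))            # 3 waiting patients, 1 clinician
    print(len(reachable), "reachable markings:", sorted(reachable))
```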
163
Compact reliability and maintenance modeling of complex repairable systems
Valenzuela Vega, Rene Cristian, 22 May 2014
Maintenance models are critical for evaluating alternative maintenance policies for modern engineering systems. A poorly selected policy can result in excessive life-cycle costs as well as unnecessary risks of catastrophic system failures. Economic dependence refers to the difference between the cost of combining the maintenance of a number of components and the cost of performing the same maintenance actions individually; maintenance that takes advantage of this difference is often called opportunistic.
A large number of components and economic interdependence are two pervasive characteristics of modern engineering systems that make the modeling of their maintenance processes particularly challenging. Simulation can handle both of these characteristics computationally, but the complexity, especially from the model verification perspective, becomes overwhelming as the number of components increases. This research introduces a new procedure for maintenance models of multi-unit repairable systems with economic dependence among their components and under opportunistic maintenance policies. The procedure is based on the stochastic Petri net with aging tokens modeling framework; it uses a component-level modeling approach to overcome the state explosion of the model, combined with a novel order-reduction scheme that effectively combines the impact of the other components into a single distribution. The scheme is justified, its accuracy is assessed, and applications to systems of realistic complexity are considered.
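The sketch below gives a feel for the kind of question such models answer: a Monte Carlo comparison of a corrective-only policy against an opportunistic one for a two-component system, where renewing the second component during an intervention already paid for is cheaper than a separate visit. The lifetime distributions, costs and age threshold are invented for illustration; this is not the stochastic Petri net with aging tokens model developed in the thesis.

```python
# Illustrative Monte Carlo sketch of economic dependence in maintenance:
# combining two renewals in one intervention saves a shared set-up cost.
# All numbers are invented; this is not the thesis's model.

import random

SETUP_COST = 100.0          # cost of an intervention (crew, downtime), paid once
REPAIR_COST = 40.0          # cost of renewing one component
HORIZON = 1_000.0           # length of one simulated operating history
SCALE = (120.0, 150.0)      # Weibull characteristic lives of components A and B
SHAPE = 2.0                 # shape > 1: wear-out, so preventive renewal can pay off
AGE_THRESHOLD = 80.0        # opportunistic rule: also renew the other unit if older

def simulate(opportunistic, rng):
    t, cost = 0.0, 0.0
    age = [0.0, 0.0]
    life = [rng.weibullvariate(SCALE[i], SHAPE) for i in range(2)]
    while True:
        remaining = [life[i] - age[i] for i in range(2)]
        i = 0 if remaining[0] <= remaining[1] else 1   # next unit to fail
        t += remaining[i]
        if t > HORIZON:
            return cost
        age[0] += remaining[i]
        age[1] += remaining[i]
        cost += SETUP_COST + REPAIR_COST               # corrective renewal of unit i
        renew = [i]
        if opportunistic and age[1 - i] >= AGE_THRESHOLD:
            cost += REPAIR_COST                        # set-up cost already paid
            renew.append(1 - i)
        for j in renew:
            age[j] = 0.0
            life[j] = rng.weibullvariate(SCALE[j], SHAPE)

def average_cost(opportunistic, runs=5_000, seed=1):
    rng = random.Random(seed)
    return sum(simulate(opportunistic, rng) for _ in range(runs)) / runs

if __name__ == "__main__":
    print("corrective only:", round(average_cost(False), 1))
    print("opportunistic  :", round(average_cost(True), 1))
```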
164
Scaling up malaria interventions: Integrating free distribution of long-lasting insecticide-treated mosquito nets during vaccination campaigns. A new strategy to meet the Millennium Development Goal
Monclair, Marianne, January 2008
Objective: To review the Red Cross and Red Crescent societies' integrated campaigns with free distribution of insecticide-treated nets (ITNs) that took place between 2002 and 2006, and their contribution to the Millennium Development Goals (MDG) and the Abuja target. Method: Review of surveys, evaluations and reports from the International Federation of Red Cross and Red Crescent integrated campaigns. Published articles up to 2007 were accessed from the electronic databases Medline, PubMed and the Cochrane Library, from the websites of WHO, UNICEF and GFATM, and from related articles available on international organisations' websites, in addition to informal discussions and meetings with key stakeholders. Results: The integrated vaccination and free distribution of long-lasting insecticidal nets (LLINs) achieved rapid, high and equal LLIN coverage among all wealth quintiles. The MDG and Abuja targets for ITN coverage at household level were reached within a week, giving a unique opportunity for a significant reduction in malaria incidence, morbidity and mortality. ITN possession remained higher than utilisation, but utilisation increased when a follow-up visit, ensuring that nets were hung and properly used, had taken place at household level after the campaign. Conclusion: Large-scale free distribution of LLINs bridges the equity gap between poor and rich and increased the use rate among children under five and pregnant women. The low utilisation versus possession remains a challenge, and thus a "minimum standard" two-phased strategy is recommended to reach maximum impact and the MDG: phase one prepares pre-campaign data, logistical planning and distribution, while phase two should focus on a post-campaign Keep Up program providing health education at household level to ensure proper net hanging and use.
ISBN 978-91-85721-42-9
165
Systems reliability modelling for phased missions with maintenance-free operating periods
Chew, Samuel P., January 2010
In 1996, a concept was proposed by the UK Ministry of Defence with the intention of making the field of reliability more useful to the end user, particularly within military aerospace. This idea was the Maintenance Free Operating Period (MFOP), a duration of time in which the overall system can complete all of its required missions without the need to undergo emergency repairs or maintenance, with a defined probability of success. The system can encounter component or subsystem failures, but these must be carried with no effect on the overall mission until such time as repair takes place. It is thought that advanced technologies such as redundant systems, prognostics and diagnostics will play a major role in the successful use of MFOP in practical applications. Many types of system operate missions that are made up of several sequential phases. For a mission to be successful, the system must satisfactorily complete each of the objectives in each of the phases. If the system fails or cannot complete its goals in any one phase, the mission has failed. Each phase will require the system to use different items, and so the failure logic changes from phase to phase. Mission unreliability is defined as the probability that the system fails to function successfully during at least one phase of the mission; an important problem is the efficient calculation of this value. This thesis investigates the creation of a modelling method that considers as many features as possible of systems undergoing both MFOPs and phased missions. This uses Petri nets, a type of digraph allowing storage and transit of tokens which represent system states. A simple model is presented, following which a more complex model is developed and explained, encompassing those ideas which are believed to be important in delivering a long MFOP with a high degree of confidence. A demonstration of the process by which the modelling method could be used to improve the reliability performance of a large system is then shown. The complex model is employed in the form of a Monte Carlo simulation program, which is applied to a life-size system such as may be encountered in the real world. Improvements are suggested and results from their implementation analysed.
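To illustrate what mission unreliability means computationally, the sketch below estimates it by Monte Carlo simulation for a made-up two-component, two-phase mission in which phase 1 needs both components and phase 2 tolerates the loss of either. The failure rates, phase durations and phase logic are invented; the thesis's Petri net model covers far richer behaviour (repair, redundancy management, MFOP accounting).

```python
# Illustrative Monte Carlo estimate of mission unreliability for a two-phase
# mission with non-repairable components. Since failures are not repaired,
# checking the set of surviving components at the end of each phase suffices
# for this monotone failure logic. All numbers are invented.

import random

FAILURE_RATE = {"A": 1e-3, "B": 2e-3}            # constant failure rates per hour
PHASES = [
    # (duration in hours, success predicate over the set of working components)
    (5.0,  lambda up: "A" in up and "B" in up),  # phase 1: both components needed
    (20.0, lambda up: "A" in up or "B" in up),   # phase 2: either one suffices
]

def mission_succeeds(rng):
    time_to_failure = {c: rng.expovariate(r) for c, r in FAILURE_RATE.items()}
    elapsed = 0.0
    for duration, phase_ok in PHASES:
        elapsed += duration
        up = {c for c, t in time_to_failure.items() if t > elapsed}
        if not phase_ok(up):
            return False
    return True

def mission_unreliability(runs=200_000, seed=42):
    rng = random.Random(seed)
    failures = sum(not mission_succeeds(rng) for _ in range(runs))
    return failures / runs

if __name__ == "__main__":
    print(f"estimated mission unreliability: {mission_unreliability():.4f}")
```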
166
Numerical and statistical approaches for model checking of stochastic processes
Djafri, Hilal, 19 June 2012
We propose in this thesis several contributions related to the quantitative verification of systems. This discipline aims to evaluate the functional and performance properties of a system. Such a verification requires two ingredients: a formal model to represent the system and a temporal logic to express the desired property. The evaluation is then done with a statistical or numerical method. The spatial complexity of numerical methods, which is proportional to the size of the model's state space, makes them impractical when the state space is very large. The method of stochastic comparison with censored Markov chains is one of the methods that reduce memory requirements by restricting the analysis to a subset of the states of the original Markov chain. In this thesis we provide new bounds that depend on the information available about the chain. We introduce a new quantitative temporal logic named Hybrid Automata Stochastic Logic (HASL) for the verification of discrete event stochastic processes (DESP). HASL employs Linear Hybrid Automata (LHA) to select prefixes of relevant execution paths of a DESP. LHA allow rather elaborate information to be collected on the fly during path selection, providing the user with a powerful means to express sophisticated measures. In essence, HASL provides a unifying verification framework in which temporal reasoning is naturally blended with elaborate reward-based analysis. We have also developed COSMOS, a tool that implements statistical verification of HASL formulas over stochastic Petri nets.
Flexible manufacturing systems (FMS) have often been modelled with Petri nets. However, the modeller must have a good knowledge of this formalism. To facilitate such modelling we propose an application-oriented methodology of compositional modelling that does not require any knowledge of Petri nets from the modeller.
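The sketch below conveys the statistical side of such verification in miniature: it estimates, with a confidence interval, the probability that a simple birth-death queue reaches a given level before a time bound, by simulating independent paths. It uses none of HASL, LHA or COSMOS; the model, property and parameters are invented purely to show the shape of a statistical model-checking loop.

```python
# Illustrative statistical model-checking loop: estimate
# p = Pr[ queue length reaches K before time T ] for an M/M/1 queue, with a
# normal-approximation confidence interval. Not HASL or COSMOS; the model and
# property are invented for the example.

import math
import random

LAMBDA, MU = 0.8, 1.0       # arrival and service rates
K, T = 10, 50.0             # property: level K reached before time T

def path_satisfies(rng):
    t, n = 0.0, 0
    while True:
        rate = LAMBDA + (MU if n > 0 else 0.0)
        t += rng.expovariate(rate)
        if t >= T:
            return False                      # time bound hit first
        if rng.random() < LAMBDA / rate:      # next event is an arrival
            n += 1
            if n >= K:
                return True
        else:                                 # next event is a departure
            n -= 1

def estimate(runs=100_000, seed=7, z=1.96):
    rng = random.Random(seed)
    hits = sum(path_satisfies(rng) for _ in range(runs))
    p = hits / runs
    half_width = z * math.sqrt(p * (1 - p) / runs)
    return p, half_width

if __name__ == "__main__":
    p, hw = estimate()
    print(f"Pr[reach {K} before t={T}] ~ {p:.4f} +/- {hw:.4f} (95% CI)")
```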
167
Black-box identification of automated discrete event systems
Estrada Vargas, Ana Paula, 20 February 2013
This thesis deals with the identification of automated discrete event systems (DES) operating in an industrial context. In particular, the work focuses on systems composed of a plant and a programmable logic controller (PLC) operating in closed loop; the identification consists in obtaining an approximate model, expressed as an interpreted Petri net (IPN), from the observed behaviour given in the form of a single sequence of input-output vectors of the PLC. First, an overview of previous work on identification of DES is presented, as well as a comparative study of the main recent approaches on the matter. Then the addressed problem is stated: important technological characteristics of automated systems and PLCs are detailed. Such characteristics must be considered in solving the identification problem, but they cannot be handled by previous identification techniques. The main contribution of this thesis is the creation of two complementary identification methods. The first method allows an IPN model to be constructed systematically from a single input-output sequence representing the observable behaviour of the DES. The obtained IPN models describe in detail the evolution of inputs and outputs during the system's operation.
The second method has been conceived for addressing large and complex industrial DES; it is based on a statistical approach yielding compact and expressive IPN models. It consists of two stages: the first obtains, from the input-output sequence, the reactive part of the model, composed of observable places and transitions; the second builds the non-observable part of the model, including places that ensure the reproduction of the observed input-output sequence. The proposed methods, based on polynomial-time algorithms, have been implemented in software tools, which have been tested with input-output sequences obtained from real systems in operation. The tools are described and their application is illustrated through two case studies.
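A much simplified first step of such black-box identification can be pictured as follows: compare consecutive PLC input-output vectors and record which signals rose or fell, so that each distinct change vector becomes a candidate event for a transition of the identified model. The sketch below performs only that event-extraction step on an invented binary I/O sequence with hypothetical signal names; constructing the interpreted Petri net itself, with its observable and non-observable places, is the substance of the thesis and is not reproduced here.

```python
# Illustrative sketch of the event-extraction step of black-box DES
# identification: turn a sequence of binary PLC I/O vectors into a sequence of
# events (rising/falling edges) and count the distinct events observed.
# Signal names and data are invented; building the IPN model is not shown.

from collections import Counter

SIGNALS = ["start_btn", "sensor_a", "valve", "motor"]   # hypothetical names

# One observed input-output vector per PLC scan cycle (invented data).
SEQUENCE = [
    (0, 0, 0, 0),
    (1, 0, 0, 0),
    (1, 0, 0, 1),
    (1, 1, 0, 1),
    (1, 1, 1, 1),
    (0, 1, 1, 0),
    (0, 0, 0, 0),
]

def extract_events(sequence, names):
    """Yield, for each consecutive pair of vectors, the set of edges observed."""
    for prev, curr in zip(sequence, sequence[1:]):
        edges = frozenset(
            f"{name}{'_rise' if c > p else '_fall'}"
            for name, p, c in zip(names, prev, curr)
            if p != c
        )
        if edges:            # identical consecutive vectors carry no event
            yield edges

if __name__ == "__main__":
    events = list(extract_events(SEQUENCE, SIGNALS))
    for e in events:
        print(sorted(e))
    print("distinct events:", len(Counter(events)))
```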
168
A systems biology approach to multi-scale modelling and analysis of planar cell polarity in Drosophila melanogaster wing
Gao, Qian, January 2013
Systems biology aims to describe and understand biology at a global scale, where biological systems function as a result of complex mechanisms that operate at several scales. Modelling and simulation are computational tools that are invaluable for describing, understanding and predicting these mechanisms in a quantitative and integrative way. Multi-scale methods that couple the design, simulation and analysis of models spanning several spatial and temporal scales are therefore becoming an emerging focus of systems biology. This thesis uses an exemplar – planar cell polarity (PCP) signalling – to illustrate a generic approach to modelling biological systems at different spatial scales, using the new concept of Hierarchically Coloured Petri Nets (HCPN). PCP signalling refers to the coordinated polarisation of cells within the plane of various epithelial tissues to generate sub-cellular asymmetry along an axis orthogonal to their apical-basal axes. This polarisation is required for many developmental events in both vertebrates and non-vertebrates. Defects in PCP in vertebrates are responsible for developmental abnormalities in multiple tissues, including the neural tube, the kidney and the inner ear. In the Drosophila wing, PCP is seen in the parallel orientation of hairs that protrude from each of the approximately 30,000 epithelial cells to robustly point toward the wing tip. This work applies HCPN to model a tissue comprising multiple cells hexagonally packed in a honeycomb formation, in order to describe the phenomenon of PCP in the Drosophila wing. HCPN facilitate the construction of mathematically tractable, compact and parameterised large-scale models. Different levels of abstraction that can be used to simplify such a complex system are first illustrated. The PCP system is first represented at an abstract level without modelling details of the cell. Each cell is then sub-divided into seven virtual compartments, with adjacent cells coupled via the formation of intercellular complexes. A more detailed model is later developed, describing the intra- and inter-cellular signalling mechanisms involved in PCP signalling. The initial model is for a wild-type organism; a family of related models is then constructed, permitting different hypotheses to be explored regarding the mechanisms underlying PCP. Among them, the largest model consists of 800 cells which, when unfolded, yields 164,000 places (each of which is described by an ordinary differential equation). This thesis illustrates the power and validity of the approach by showing how the models can be easily adapted to describe well-documented genetic mutations in the Drosophila wing, and by employing techniques including clustering and model checking over time series of primary and secondary data to analyse and check such multi-scale models. The HCPN models support the interpretation of biological observations reported in the literature and are able to make sensible predictions. As HCPN model multi-scale systems in a compact, parameterised and scalable way, this modelling approach can be applied to other large-scale or multi-scale systems.
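To give a feel for the spatial layer of such a model, the sketch below builds a small honeycomb patch of hexagonal cells in axial coordinates and lists each cell's neighbours, the adjacency along which intercellular complexes would couple compartments of adjacent cells. The patch size is arbitrary, and the compartments, species and equations of the HCPN model are deliberately omitted; this is a geometric illustration only.

```python
# Illustrative sketch: a hexagonally packed patch of cells in axial coordinates
# (q, r), with the six-neighbour adjacency along which a tissue-level model
# would couple adjacent cells. Compartments and kinetics are omitted.

HEX_DIRECTIONS = [(1, 0), (1, -1), (0, -1), (-1, 0), (-1, 1), (0, 1)]

def hex_patch(radius):
    """All axial coordinates within `radius` steps of the centre cell."""
    return {
        (q, r)
        for q in range(-radius, radius + 1)
        for r in range(max(-radius, -q - radius), min(radius, -q + radius) + 1)
    }

def neighbours(cells):
    """Map each cell to the neighbouring cells present in the patch."""
    return {
        (q, r): [(q + dq, r + dr) for dq, dr in HEX_DIRECTIONS
                 if (q + dq, r + dr) in cells]
        for (q, r) in cells
    }

if __name__ == "__main__":
    cells = hex_patch(2)                    # 19 cells: 1 + 6 + 12
    adjacency = neighbours(cells)
    print(len(cells), "cells")
    print("neighbours of the centre cell:", sorted(adjacency[(0, 0)]))
```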
169
The Principal as Technology Integration Leader
Lewis, Dietrick, 01 January 2011
Technology integration, the incorporation of technology resources and technology-related practices into the daily routines, work, and management of schools, is an essential component of 21st-century schools. One of the most important aspects of technology integration is the role that principals play. Despite the importance of this role, many principals report that their preparation programs did not fully prepare them to lead a technology integration effort. One program designed to assist principals is the Alabama Math, Science, and Technology Initiative (AMSTI). Using the International Society for Technology in Education's National Educational Technology Standards for Administrators (NETS-A) as a guide, AMSTI provides training to principals in technology integration.
In an effort to describe the role of the principal in leading a technology integration initiative in 21st-century schools, five AMSTI principals who exhibited excellence in technology integration were selected. Data were collected through interviews and observations and then transcribed, coded, and analyzed. From the analysis four themes emerged: lead by example, provide technology opportunities, minimize hindrances, and train regularly. The research questions were answered based on this analysis.
Several conclusions were formed from the research questions. Principals who were successful in leading a technology integration initiative had a shared vision that included a plan and goals for the initiative. They led by example and got faculty to believe in the merits of the initiative. Principals also showed a willingness to learn and provided resources such as training and technology for students and faculty. The final report serves as a resource for those charged with leading a technology integration initiative in 21st-century schools.
170
Levels of Perineuronal Nets in the Basolateral Amygdala Are Correlated with Sex Differences in Fear Learning
Bals, Julia, January 2017
Thesis advisor: John P. Christianson
Trauma and exposure to extreme stressors greatly increase a person's vulnerability to developing mental illnesses such as post-traumatic stress disorder (PTSD). Patients with PTSD often have impaired fear and safety learning, and despite the fact that women are more than twice as likely to develop PTSD, much of the research on this disorder has relied on male subjects. This paper reviews potential contributors to the sex differences seen in PTSD and fear-related learning. Our group has found that female rats show greater fear discrimination abilities than their male counterparts, but show no difference in levels of safety learning. Analysis of specialized extracellular matrix structures called perineuronal nets (PNNs) revealed that females displayed a much higher density of PNNs in the basolateral amygdala (BLA) than males, but not in the prefrontal cortex (PFC).
Thesis (BS), Boston College, 2017. Submitted to Boston College, College of Arts and Sciences. Discipline: Departmental Honors; Psychology.