91

A study of the efficacy of organ cultures to examine wood formation in Pinus radiata D. Don

Putoczki, Tracy Lynn January 2006 (has links)
Pinus radiata D. Don is an economically important plantation species to New Zealand that is susceptible to the wood quality flaw 'intra-ring checking'. Intra-ring checking is a term used to describe radial fractures that can occur in the earlywood portion of a growth ring, altering the appearance and resilience of the wood, thereby decreasing its economic value. This thesis presents a study that was part of a broad, ongoing collaborative investigation directed at understanding wood quality issues, with the long term goal of enhancement of future radiata pine crops. These investigations are funded by the Wood Quality Initiative Ltd., and involve basic science, field trials and engineering studies related to intra-ring checking. Specifically, the present study was designed to establish the effects of the mineral nutrients boron, calcium and magnesium on wood formation, to determine whether they are associated with intra-ring checking. This research was carried out in three stages. Firstly, the ultra-structural and biochemical properties of wood with intra-ring checking were examined to determine if specific features of the cell wall were associated with the incidence of intra-ring checks. Electron microscopy techniques revealed that the CML/S1 region of the cell wall often showed a decrease in CML lignin staining and S1 striations in wood with intra-ring checks. However, Klason and acetyl bromide assays did not show a change in lignin content. In order to understand how changes in the CML/S1 region of the cell wall may occur, methods were required that would allow for the observation of wood formation in a controlled environment. In the second stage of this study, an organ culture technique was successfully developed to allow for the growth of radiata pine cambial tissue, sandwiched between phloem and xylem, on a defined nutrient medium. 
This nutrient medium was manipulated, using ion-binding resins, to control the amount of boron, calcium and magnesium available to the growing tissues, to determine if variations in wood formation could be induced. In the final stage of this research, an extensive comparative examination of different techniques that could be used for the observation and measurement of selected wood properties was undertaken, in order to determine the efficacy of the organ cultures for studying wood formation in an altered nutrient environment. Wood properties were examined for various stages of xylogenesis, beginning with cell division and expansion, followed by cell wall deposition, and lastly with the onset of lignification, in order to define the success of the culture technique. Electron microscopy investigations suggested that in the presence of very little boron the CML/S1 wall showed darker striation deposits, while an increase in calcium availability resulted in a more defined CML/S1/S2 wall region compared to the controls. Further examination of the cell walls suggested that pectin esterification and possibly lignification could also be increased by limited boron availability. However, in many of the observed and measured parameters of wood properties, a great deal of complex 'between-tree' and 'within-culture' variation was observed. The results show that the association between nutrient availability and the incidence of intra-ring checking cannot be established from this organ culture study. In a concurrent study, a preliminary investigation of arabinogalactan-proteins (AGPs) in radiata pine was undertaken. Radiata pine AGPs were positioned in the compound middle lamella of xylem cells, suggesting potential roles in cell-cell adhesion or cell-cell signalling. For the first time, radiata pine AGPs were isolated and characterized in terms of their protein and carbohydrate composition, both of which yielded features typical of AGPs in other plant species. 
Unique to radiata pine AGPs was the presence of a large proportion of 5-linked arabinose. While the precise function(s) of AGPs are unknown, the results obtained in this research have established a basis for further investigation into the potential for their involvement in wood formation. Overall, new tools have been established to facilitate future research on radiata pine, a commercially important species, and novel results have been obtained concerning the mechanisms of wood formation therein.
92

Analyse des traces d'exécution pour la vérification des protocoles d'interaction dans les systèmes multiagents / Analysis of execution traces for the verification of interaction protocols in multiagent systems

Ben Ayed, Nourchène January 2003 (has links)
Thesis digitized by the Direction des bibliothèques de l'Université de Montréal.
93

Kinerja: a workflow execution environment

Procter, Sam January 1900 (has links)
Master of Science / Department of Computing and Information Sciences / John Hatcliff / Like all businesses, clinical care groups and facilities are under a range of pressures to enhance the efficacy of their operations. Though there are a number of ways to go about these improvements, one exciting methodology involves the documentation and analysis of clinical workflows. Unfortunately, there is no industry standard tool which supports this, and many available workflow documentation technologies are not only proprietary, but technologically insufficient as well. Ideally, these workflows would be documented at a formal enough level to support their execution; this would allow the partial automation of documented clinical procedures. However, the difficulty involved in this automation effort is substantial: not only is there the irreducible complexity inherent to automation, but a number of the solutions presented so far layer on additional complexity. To solve this, the author introduces Kinerja, a state-of-the-art execution environment for formally specified workflows. Operating on a subset of the academically and industrially proven workflow language YAWL, Kinerja allows for both human-guided governance and computer-guided verification of workflows, and allows for seamless switching between modalities. Though the base of Kinerja is essentially an integrated framework allowing for considerable extensibility, a number of modules have already been developed to support the checking and execution of clinical workflows. One such module integrates symbolic execution, which greatly reduces the time and space necessary for a complete exploration of a workflow's state space.
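The exhaustive state-space exploration that a workflow verification module performs can be illustrated with a minimal sketch. All task names and the dependency encoding below are invented for illustration; real YAWL workflows are far richer. A state is the set of completed tasks, and a task becomes enabled once all of its dependencies are done:

```python
from collections import deque

def reachable_states(tasks, deps):
    """Enumerate every reachable workflow state by breadth-first search.
    tasks: set of task names; deps: task -> set of prerequisite tasks."""
    start = frozenset()
    seen, queue = {start}, deque([start])
    while queue:
        done = queue.popleft()
        for t in tasks:
            # a task is enabled if not yet done and all prerequisites are done
            if t not in done and deps.get(t, set()) <= done:
                nxt = frozenset(done | {t})
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
    return seen

# Hypothetical diamond-shaped clinical workflow: triage, then two
# parallel steps (labs, imaging), then discharge.
tasks = {"triage", "labs", "imaging", "discharge"}
deps = {"labs": {"triage"}, "imaging": {"triage"},
        "discharge": {"labs", "imaging"}}
states = reachable_states(tasks, deps)
print(len(states))  # 6 reachable states, including the empty and final ones
```

Even this toy diamond shows why exhaustive exploration gets expensive: parallel branches multiply the number of intermediate states, which is exactly the cost a symbolic-execution module would aim to reduce.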
94

Attributed Multi-Relational Attention Network for Fact-checking URL Recommendation

You, Di 11 July 2019 (has links)
To combat fake news, researchers have mostly focused on detecting fake news, while journalists have built and maintained fact-checking sites (e.g., Snopes.com and Politifact.com). However, fake news dissemination has been greatly promoted by social media sites, and these fact-checking sites have not been fully utilized. To overcome these problems and complement existing methods against fake news, in this thesis we propose a deep-learning based fact-checking URL recommender system to mitigate the impact of fake news on social media sites such as Twitter and Facebook. In particular, our proposed framework consists of a multi-relational attentive module and a heterogeneous graph attention network to learn the complex, semantic relationships between user-URL pairs, user-user pairs, and URL-URL pairs. Extensive experiments on a real-world dataset show that our proposed framework outperforms seven state-of-the-art recommendation models, achieving at least a 3–5.3% improvement.
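As a rough illustration of the attentive pooling such a framework relies on, here is a generic scaled dot-product attention step in NumPy; the names, dimensions and random inputs are invented, not the thesis' actual architecture:

```python
import numpy as np

def attention_pool(query, neighbors):
    """Pool neighbor embeddings (e.g. URLs a user interacted with)
    into one context vector via scaled dot-product attention."""
    d = query.shape[-1]
    scores = neighbors @ query / np.sqrt(d)   # one score per neighbor
    weights = np.exp(scores - scores.max())   # numerically stable softmax
    weights /= weights.sum()
    return weights, weights @ neighbors       # (n,) weights, (d,) context

rng = np.random.default_rng(0)
q = rng.normal(size=8)        # hypothetical user embedding
nbrs = rng.normal(size=(5, 8))  # hypothetical URL embeddings
w, ctx = attention_pool(q, nbrs)
```

A multi-relational module would learn a separate (or shared, relation-conditioned) attention of this shape per relation type (user-URL, user-user, URL-URL) and combine the resulting context vectors.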
95

Débogage de modèles comportementaux par analyse de contre-exemple / Debugging of Behavioural Models using Counterexample Analysis

Barbon, Gianluca 14 December 2018 (has links)
Model checking is an established technique for automatically verifying that a model satisfies a given temporal property. When the model violates the property, the model checker returns a counterexample, i.e., a sequence of actions leading to a state where the property is not satisfied. Understanding this counterexample in order to debug the specification is a complicated task for several reasons: (i) the counterexample can contain a large number of actions; (ii) the debugging task is mostly carried out manually; (iii) the counterexample does not explicitly point out the source of the bug hidden in the model; (iv) the most relevant actions are not highlighted in the counterexample; (v) the counterexample does not give a global view of the problem.
This work presents a new approach that improves the usability of model checking by simplifying the comprehension of counterexamples. Our solution aims at keeping in counterexamples only those actions that are relevant for debugging purposes. This is achieved by detecting in the models specific choices between transitions that lead to a correct behaviour or fall into an erroneous part of the model. These choices, which we call "neighbourhoods", turn out to be of major importance for understanding the bug behind the counterexample.
To extract such choices we propose two different methods. The first supports the debugging of counterexamples for violations of safety properties: it builds from the original model a new model containing all the counterexamples, and then compares the two models to identify neighbourhoods. The second supports the debugging of counterexamples for violations of liveness properties: given a liveness property, it extends the model with prefix/suffix information w.r.t. that property, and this enriched model is then analysed to identify neighbourhoods.
A model annotated with neighbourhoods can be exploited in two ways. First, the erroneous part of the model can be visualized with a specific focus on neighbourhoods, in order to obtain a global view of the bug behaviour. Second, a set of abstraction techniques we developed can be used to extract the relevant actions from counterexamples, which makes their comprehension easier. Our approach is fully automated by a tool we implemented, which has been validated on real-world case studies from various application areas.
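The neighbourhood idea can be sketched as follows; this is a toy illustration with invented state names, not the authors' tooling. Given a transition system in which some states are known to fall into the erroneous part of the model, a neighbourhood is a state offering both a correct and an incorrect outgoing transition:

```python
def find_neighbourhoods(transitions, erroneous):
    """transitions: state -> list of (action, successor);
    erroneous: states from which the property violation is unavoidable.
    A neighbourhood is a state with outgoing transitions of both kinds."""
    neighbourhoods = {}
    for state, outgoing in transitions.items():
        good = [a for a, s in outgoing if s not in erroneous]
        bad = [a for a, s in outgoing if s in erroneous]
        if good and bad:
            neighbourhoods[state] = {"correct": good, "incorrect": bad}
    return neighbourhoods

# Toy model: from s1, action 'b' falls into the erroneous part.
lts = {
    "s0": [("a", "s1")],
    "s1": [("b", "err"), ("c", "s2")],
    "s2": [],
    "err": [],
}
print(find_neighbourhoods(lts, {"err"}))
# {'s1': {'correct': ['c'], 'incorrect': ['b']}}
```

In a counterexample passing through s1, the choice at s1 is the part worth showing to the user; the surrounding actions can be abstracted away.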
96

Vérification des performances et de la correction des systèmes distribués / Performance and correctness assessment of distributed systems

Rosa, Cristian 24 October 2011 (has links)
Distributed systems are in the mainstream of information technology. It has become standard to rely on multiple distributed units to improve the performance of an application, to tolerate component failures, or to handle problems too large to fit in a single processing unit. The design of algorithms adapted to the distributed context is particularly difficult because of the asynchrony and nondeterminism that characterize these systems. Simulation offers the ability to study the performance of distributed applications without the complexity and cost of real execution platforms. Model checking, on the other hand, makes it possible to assess the correctness of such systems in a fully automatic manner. In this thesis, we explore the idea of integrating a model checker and a simulator for distributed systems in a single framework, in order to gain both performance and correctness assessment capabilities.
To deal with the state explosion problem, we present a dynamic partial order reduction (DPOR) algorithm that performs the exploration based on a reduced set of networking primitives, which makes it possible to verify programs written for any of the communication APIs offered by the simulator. This is only possible thanks to a complete formal specification of the semantics of these networking primitives, which allows reasoning about the independence of communication actions as required by the DPOR algorithm. We show through experimental results that our approach is capable of dealing with non-trivial, unmodified C programs written for the SimGrid simulator. Moreover, we propose a solution to the scalability problem of CPU-bound simulations, envisioning the simulation of peer-to-peer applications with millions of participating nodes. Contrary to classical parallelization approaches, we propose to parallelize some internal steps of the simulation while keeping the whole process sequential. We present a complexity analysis of the parallel simulation algorithm and compare it to the classical sequential algorithm to obtain a criterion that characterizes the situations in which a speed-up can be expected with our approach. An important result is the observed relation between the precision of the models used to simulate the hardware resources and the potential degree of parallelization attainable with this approach. We present several case studies that benefit from the parallel simulation, and we detail the results of a simulation at unprecedented scale of the Chord peer-to-peer protocol with two million nodes, executed on a single machine with a precise model of the network.
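The core intuition behind DPOR — independent actions commute, so only one representative of each equivalence class of interleavings needs exploring — can be sketched as follows. The mailbox-based independence relation and the brute-force enumeration are illustrative simplifications, not SimGrid's actual algorithm:

```python
from itertools import permutations

def independent(a, b):
    """Two networking actions commute when they touch different mailboxes.
    An action is modelled as a (kind, mailbox) pair."""
    return a[1] != b[1]

def canonical(trace):
    """Normal form of a trace: repeatedly swap adjacent independent
    actions into sorted order, so equivalent interleavings collide."""
    t = list(trace)
    changed = True
    while changed:
        changed = False
        for i in range(len(t) - 1):
            if independent(t[i], t[i + 1]) and t[i] > t[i + 1]:
                t[i], t[i + 1] = t[i + 1], t[i]
                changed = True
    return tuple(t)

def representatives(actions):
    """One representative interleaving per equivalence class."""
    return {canonical(p) for p in permutations(actions)}

acts = [("send", "mb1"), ("send", "mb2"), ("recv", "mb1")]
# 3! = 6 interleavings, but the only dependency is the order of the two
# mb1 actions, so only 2 classes need exploring:
print(len(representatives(acts)))  # 2
```

A real DPOR implementation computes this reduction on the fly during the search instead of enumerating all interleavings, but the saving comes from the same independence relation.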
97

Uso de dados de diferente suporte em geoestatística e desenvolvimentos em simulação geoestatística multivariada / Use of data of different support in geostatistics and developments in multivariate geostatistical simulation

Bassani, Marcel Antônio Arcari January 2018 (has links)
This thesis investigates three problems: (1) the use of data of different support in geostatistics, (2) multivariate simulation with constraints, and (3) verification of the multivariate distribution. When the samples have different supports, this difference must be taken into account to build a grade model. The thesis proposes kriging with average covariances between the data to account for data of different support. The methodology is compared with two methods: (1) kriging using point-support covariances between the data, and (2) the indirect approach. Kriging using point-support covariances between the data ignores the difference in support between the data. The indirect approach works with the variable accumulation instead of the original grade. Kriging with average covariances resulted in more precise estimates than the other two methods. Multivariate mineral deposits often have variables subject to fraction and sum constraints. Fraction constraints occur when one variable is a fraction of another, such as Recoverable and Total Alumina in a bauxite deposit: the Recoverable Alumina must not exceed the Total Alumina. Sum constraints occur when the sum of the variables must not exceed a critical threshold; for instance, the sum of grades in a mineral deposit must not be above one hundred percent. The thesis develops a methodology to cosimulate grades with sum and fraction constraints. The simulations reproduce the histograms, variograms and multivariate relationships, and honor the sum and fraction constraints. Multivariate geostatistical simulations should reproduce the relationships between the variables. In this context, the thesis investigates the verification of the multivariate distribution of geostatistical simulations. The thesis develops a metric to measure the distance between the multivariate distributions of the data and of the simulations. The metric was effective at detecting error and bias. Moreover, the metric was used to compare multivariate geostatistical simulation methods.
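For context, the point-support baseline that the thesis improves upon can be sketched as simple kriging with point-to-point covariances; the exponential covariance model and all numbers below are illustrative, and the thesis' contribution replaces these point covariances with covariances averaged over each sample's support:

```python
import numpy as np

def simple_kriging(x_data, z_data, x0, sill=1.0, rng_param=10.0, mean=0.0):
    """Simple kriging in 1-D with an exponential covariance
    C(h) = sill * exp(-3|h| / range). Returns (estimate, weights)."""
    cov = lambda h: sill * np.exp(-3.0 * np.abs(h) / rng_param)
    C = cov(x_data[:, None] - x_data[None, :])  # data-to-data covariances
    c0 = cov(x_data - x0)                       # data-to-target covariances
    w = np.linalg.solve(C, c0)                  # kriging weights
    return mean + w @ (z_data - mean), w

x = np.array([0.0, 5.0, 12.0])   # hypothetical sample locations
z = np.array([1.2, 0.7, 1.9])    # hypothetical grades
est, w = simple_kriging(x, z, x0=5.0)
print(est)  # estimating at a data location returns that datum exactly
```

The average-covariance variant would keep the same system but replace each entry of `C` and `c0` with the covariance averaged over the supports of the two samples involved, which is what makes mixing drill-hole and bulk samples consistent.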
98

Disjunction of Regular Timing Diagrams

Feng, Yu 12 October 2010 (has links)
"Timing diagrams are used in industrial practice as a specification language of circuit components. They have been formalized for efficient use in model checking. This formalization is often more succinct and convenient than the use of temporal logic. We explore the relationship between timing diagrams and temporal logic formulas by showing that closure under disjunction does not hold for timing diagrams. We give an algorithm that returns a disjunction (if any) of two given timing diagrams. We also give algorithms that decide satisfiability of a timing diagram and return exact time separations between events in a timing diagram. An Alloy specification for timing diagrams with one waveform has also been built."
99

Verification of Task Parallel Programs Using Predictive Analysis

Nakade, Radha Vi 01 October 2016 (has links)
Task parallel programming languages provide a way to create asynchronous tasks that can run concurrently. The advantage of using task parallelism is that the programmer can write code that is independent of the underlying hardware. The runtime determines the number of processor cores that are available and the most efficient way to execute the tasks. When two or more concurrently executing tasks access a shared memory location and at least one of the accesses is a write, the program contains a data race. Data races can introduce non-determinism in the program output, making it important to have data race detection tools. To detect data races in task parallel programs, a new sound and complete technique based on computation graphs is presented in this work. The data race detection algorithm runs in O(N²) time, where N is the number of nodes in the graph. A computation graph is a directed acyclic graph that represents the execution of the program. For detecting data races, the computation graph stores the shared heap locations accessed by the tasks. An algorithm for creating computation graphs augmented with the memory locations accessed by the tasks is also described here. This algorithm runs in O(N) time, where N is the number of operations performed in the tasks. This work also presents an implementation of this technique for the Java implementation of the Habanero programming model. The results of this data race detector are compared to Java Pathfinder's precise race detector extension and its permission-regions based race detector extension. The results show a significant reduction in the time required for data race detection using this technique.
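The graph-based check can be sketched as follows; this is a naive toy illustration of the idea, not the Habanero implementation. Two accesses race if they touch the same location, at least one is a write, and neither node reaches the other in the computation graph (i.e., they are not ordered by happens-before):

```python
from collections import defaultdict
from itertools import combinations

def has_race(edges, accesses):
    """edges: node -> successor nodes of the computation graph (a DAG);
    accesses: node -> list of (location, 'r'|'w') accesses."""
    nodes = set(edges) | {m for ms in edges.values() for m in ms} | set(accesses)

    def dfs(start):  # nodes reachable from start (excluding start itself)
        stack, seen = [start], set()
        while stack:
            for m in edges.get(stack.pop(), ()):
                if m not in seen:
                    seen.add(m)
                    stack.append(m)
        return seen

    reach = {n: dfs(n) for n in nodes}
    for a, b in combinations(accesses, 2):
        for loc_a, kind_a in accesses[a]:
            for loc_b, kind_b in accesses[b]:
                if loc_a == loc_b and "w" in (kind_a, kind_b):
                    if b not in reach[a] and a not in reach[b]:
                        return True  # concurrent conflicting accesses
    return False

# Fork-join: n0 forks n1 and n2, which both write x -> race.
edges = {"n0": ["n1", "n2"], "n1": ["n3"], "n2": ["n3"]}
accesses = {"n1": [("x", "w")], "n2": [("x", "w")]}
print(has_race(edges, accesses))  # True
```

Making the reachability query constant-time after preprocessing is what gives the quadratic bound over node pairs quoted in the abstract.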
100

Approche réactive pour la conduite en convoi des véhicules autonomes : Modélisation et vérification / Reactive approach for autonomous vehicle platoon systems : modelling and verification

El Zaher, Madeleine 22 November 2013 (has links)
This thesis addresses the problem of platooning: sets of autonomous vehicles that move together while maintaining a spatial configuration, without any physical coupling. Its goals are, first, the definition of a decision-making approach for platoon systems and, second, the definition of a verification method suited to proving properties of vehicle platoons, with particular attention to safety properties.
The proposed decision-making approach is decentralized and self-organized. Platoon vehicles are autonomous: each determines its behaviour locally, based only on its own perception capabilities and without explicit communication, so that the organization of the platoon, its maintenance and its evolution emerge from the behaviour of the individual vehicles. The approach applies to platoons with several types of configuration and allows dynamic changes of configuration.
The proposed verification method uses model checking. Model checking of complex systems can run into the combinatorial explosion problem. To deal with it, we use a compositional verification method: the system model is decomposed into components, an auxiliary property is associated with each component, and the global property is then deduced from the set of auxiliary properties by applying a compositional deduction rule. The computational complexity is kept under control because model checking is applied to the subsystems. We define a deduction rule suited to platooning systems, in particular those based on decentralized approaches. The rule considers each vehicle as a component. It is sound under the condition that adding a new component to the system does not influence the behaviour of the rest of the system; the proposed decentralized platooning approach satisfies this condition. Two safety properties have been verified: collision avoidance and a comfortable ride for the passengers.
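The emergent, perception-only control idea can be illustrated with a minimal longitudinal sketch; the gains, time step and proportional control law below are invented for illustration and are not the thesis' actual controller. Each follower adjusts its speed from the locally perceived gap and relative speed to the vehicle ahead, and the safety property of interest (collision avoidance) can be checked on the resulting trajectories:

```python
def step(positions, speeds, dt=0.1, gap_ref=10.0, k1=0.5, k2=1.0, v_max=20.0):
    """One simulation step: each follower reacts only to its own
    perception of the vehicle ahead; no vehicle-to-vehicle messages."""
    new_speeds = [speeds[0]]  # the leader follows its own speed policy
    for i in range(1, len(positions)):
        gap = positions[i - 1] - positions[i]
        accel = k1 * (gap - gap_ref) + k2 * (speeds[i - 1] - speeds[i])
        new_speeds.append(min(max(speeds[i] + accel * dt, 0.0), v_max))
    new_positions = [p + v * dt for p, v in zip(positions, new_speeds)]
    return new_positions, new_speeds

# Three vehicles, initially with uneven gaps, all at 15 m/s.
pos, spd = [30.0, 12.0, 0.0], [15.0, 15.0, 15.0]
min_gap = min(pos[i] - pos[i + 1] for i in range(2))
for _ in range(600):  # simulate 60 s
    pos, spd = step(pos, spd)
    min_gap = min(min_gap, *(pos[i] - pos[i + 1] for i in range(2)))
```

With these gains the gaps converge towards `gap_ref` without any collision; a compositional proof would establish the corresponding property per vehicle and lift it to the whole platoon via the deduction rule.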
