171

Dynamic modeling of branches and knot formation in loblolly pine (Pinus taeda L.) trees

Trincado, Guillermo 06 December 2006
A stochastic framework was developed to simulate the process of initiation, diameter growth, death and self-pruning of branches in loblolly pine (Pinus taeda L.) trees. A data set was obtained from destructive sampling of whorl sections from 34 trees grown under different initial spacings. Data from dissected branches were used to develop a model for representing knot shape, which assumed that the live portion of a knot can be modeled by a one-parameter equation and the dead portion by a cylindrical shape. For the developed knot model, analytical expressions were derived for estimating the volume of knots (live/dead portions) for three types of branch conditions on simulated trees: (i) live branches, (ii) non-occluded dead branches, and (iii) occluded dead branches. This model was intended to recover information on knot shape and volume during the simulation of branch dynamics. Three components were modeled and hierarchically connected: whorls, branches and knots. For each new growing season, whorls and branches are assigned stochastically along and around the stem. Thereafter, branch diameter growth is predicted as a function of relative location within the live crown and stem growth. Using a taper equation, the spatial location (X, Y, Z) of both the live and dead portions of simulated knots is maintained in order to create a 3D representation of the internal stem structure. At the end of the projection period, information can be obtained on (i) the vertical trend of branch diameter and location along and around the stem, (ii) the volume of knots, and (iii) the spatial location, size and type (live or dead) of knots. The proposed branch model was linked to the individual-tree growth and yield model PTAEDA3.1 to evaluate the effect of initial spacing and thinning intensity on branch growth in sawtimber trees. The dynamic branch model permitted generation of additional information on sawlog quality under different management regimes. The arithmetic mean diameter of the four largest branches, one from each radial quadrant of the log (i.e., Branch Index, BI), and the number of whorls per log were considered indicators of sawlog quality. The developed framework makes it possible to include additional wood properties in the simulation system, allowing linkage with industrial conversion processes (e.g., sawing simulation). This integrated modeling system should promote further research to obtain the data on crown and branch dynamics needed to validate the overall performance of the proposed branch model and to improve its components. / Ph. D.
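As an illustration of the knot geometry described in this abstract, a minimal sketch follows. The abstract does not give the one-parameter equation, so the power-law profile, the parameter p, and all names below are assumptions; only the cylindrical dead portion is stated in the source.

```python
import math

def live_knot_volume(r_live, l_live, p=0.5):
    """Volume of the live knot portion, assuming a one-parameter power
    profile r(x) = r_live * (x / l_live)**p from the pith (x = 0) to the
    point of branch death (x = l_live). Integrating pi * r(x)**2 over x
    gives the closed form below."""
    return math.pi * r_live ** 2 * l_live / (2.0 * p + 1.0)

def dead_knot_volume(r_dead, l_dead):
    """Volume of the dead knot portion, assumed cylindrical: the knot
    keeps the branch radius at death until occlusion."""
    return math.pi * r_dead ** 2 * l_dead
```

A p of 1 would give a conical live portion; smaller values fatten the profile near the bark, which is why a single shape parameter can cover a family of knot forms.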
172

Guiding RTL Test Generation Using Relevant Potential Invariants

Khanna, Tania 02 August 2018
In this thesis, we propose to use relevant potential invariants in a simulation-based test generation technique driven by swarm intelligence, to generate relevant test vectors for design validation at the Register Transfer Level (RTL). Providing useful guidance to the test generator is critical for such techniques. In our approach, we provide guidance by exploiting potential invariants in the design. These potential invariants are mined using random stimuli and hold for every stimulus observed. Since they are only likely to be true, we try to generate stimuli that falsify them; any such vectors help reach some corners of the design. However, the space of potential invariants can be extremely large. To reduce execution time, we also implement a two-layer filter to remove the irrelevant potential invariants that may not contribute to reaching difficult states. With the filter, the generated vectors help to reduce the overall test length while still reaching the same coverage as when all unfiltered potential invariants are considered. Experimental results show that with only the filtered potential invariants, we were able to reach equal or better branch coverage than that reported by BEACON on the ITC99 benchmarks, with a considerable reduction in vector lengths and at reduced execution time. / Master of Science / In recent years, the size and complexity of hardware designs have been increasing at an enormous rate. As a result, verification of these designs is of utmost importance and demands far more resources and time than the design itself. To describe their designs, developers use Hardware Description Languages (HDLs), which capture the important decision points of the system, also called the branches of the circuit. Several methodologies have been proposed to check how many branches of a design can be traversed by a set of inputs. This practice is important for confirming correct functionality, since faults can be caught at these decision points. These methodologies include checking with random inputs, exhaustively checking every possible input, investing many hours of labor to verify with appropriate inputs, or simply automating the process of generating inputs. In this thesis, we focus on one such automated process called BEACON, or Branch-oriented Evolutionary Ant Colony OptimizatioN. We propose a modification that improves this method by using properties of the design. These properties, also known as invariants, help to cover those branches that require extra effort in terms of both inputs and time and are thus hard to cover. With these significant invariants added, the modified BEACON is able to cover almost all accessible branches in the system in significantly less time and with fewer vectors than the original BEACON, which saves considerable resources.
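A minimal sketch of the invariant-mining step described above. The pairwise a <= b template, the simulator hook, and all names are assumptions (the thesis's invariant templates are not given in this abstract), and the two-layer filter is omitted.

```python
import random

def mine_potential_invariants(simulate, signals, vector_bits, n_random=1000):
    """Collect potential invariants: simple pairwise relations a <= b that
    hold in every state reached under random stimuli. Because they are only
    *likely* true, a test generator can then target vectors that falsify
    them, steering simulation toward rarely reached corners of the design."""
    candidates = {(a, b) for a in signals for b in signals if a != b}
    for _ in range(n_random):
        vec = [random.randint(0, 1) for _ in range(vector_bits)]
        state = simulate(vec)  # assumed to map each signal name to a value
        candidates = {(a, b) for (a, b) in candidates
                      if state[a] <= state[b]}
    return candidates
```

Surviving candidates then become falsification targets; any vector that violates one has, by construction, reached a state no random stimulus reached.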
173

Structural Features Related to Tree Crotch Strength

Farrell, Robert William 11 June 2003
Crotches were cut out of red maple (Acer rubrum), callery pear (Pyrus calleryana), and sawtooth oak (Quercus acutissima) trees (2.5–7 in. d.b.h.) and then pulled apart in an engineering testing machine to identify physical parameters correlated with crotch strength. Parameters measured included the diameter of the branch and of the trunk above and below the crotch, angle of the branch and branch bark ridge, and the length of the crotch and the branch bark ridge. The force required to break each sample was used to calculate breaking strength based on the formula for bending stress. Each parameter was tested for correlation with crotch strength within the individual species and for the three species combined. The ratio of branch diameter over crotch width had the highest correlation coefficient for crotch strength. Branch angle was also correlated with crotch strength but not as highly as the ratio of the diameters. V-shaped crotches (those with included bark) were significantly weaker than U-shaped crotches for all species. The ratio of the two stem diameters greatly influenced the manner in which the crotches broke. In crotches where the branch diameter was 2/3 the size of the trunk or smaller, the crotch broke by being pulled directly out of the trunk. Crotches with branches more than 2/3 the diameter of the trunk broke when the trunk split longitudinally and had significantly lower strength values. These results indicate that increased crotch strength results from a small branch diameter relative to that of the trunk. / Master of Science
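For reference, the bending-stress calculation the abstract cites takes the standard form below for a roughly circular cross-section of diameter d broken by a force F acting at moment arm L; the thesis's exact measurement conventions are not given in this abstract and may differ.

```latex
\sigma = \frac{Mc}{I} = \frac{(FL)\,(d/2)}{\pi d^{4}/64} = \frac{32\,FL}{\pi d^{3}}
```

Here M = FL is the bending moment, c = d/2 the distance from the neutral axis to the outer fiber, and I the second moment of area of a circular section.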
174

Designing for the Future: Promoting Ecoliteracy in the Design of Children's Outdoor Play Environments

Freuder, Tracy Grace 24 August 2006
Rapid development of U.S. cities and towns, along with changes in society and technology, is dramatically affecting childhood experience. Wild places and vacant lots for kids to play in are disappearing. Parents have limited time to spend with their children and fear letting them play outside alone. Traffic is a growing safety hazard, and there is an increasing preference for entertainment in the form of TV and video games over outdoor exploration. As a result, children are becoming alienated from nature. They are growing up without developing a personal attachment to their natural surroundings or an understanding of their impact on the environment. The design of outdoor play areas can help reconnect children to their surroundings and lead to a more environmentally minded generation. Ecoliteracy suggests an understanding of ecological principles as well as an appreciation of the environment and an attitude of stewardship. In addition to helping kids acquire factual knowledge, outdoor play spaces should cultivate a sense of wonder and delight and an emotional appreciation of the living world. Through research, observation and application, this thesis project identifies design criteria for promoting ecoliteracy in outdoor play environments. / Master of Landscape Architecture
175

An Efficient 2-Phase Strategy to Achieve High Branch Coverage

Prabhu, Sarvesh P. 06 March 2012
Symbolic execution-based test generation is gaining popularity for software test generation. The increasing complexity of software programs poses new challenges for symbolic execution-based test generation because of the path explosion problem. We present a new 2-phase symbolic execution driven strategy that quickly achieves high branch coverage in software. Phase 1 follows a greedy approach that quickly covers as many branches as possible by exploring each branch through its corresponding shortest path prefix. Phase 2 covers the remaining branches that are left uncovered when the shortest path to the branch was infeasible. In Phase 1, basic conflict-driven learning is used to skip all paths that contain any of the previously encountered conflicting conditions, while in Phase 2, more intelligent conflict-driven learning is used to skip regions that do not have a feasible path to any unexplored branch. This results in a considerable reduction in unnecessary SMT solver calls. Experimental results show that significant speedup can be achieved, effectively reducing the time to detect a bug and providing higher branch coverage for a fixed timeout period than previous techniques. / Master of Science
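A sketch of the Phase 1 idea under stated assumptions: the control-flow graph is encoded as a successor map (a hypothetical encoding), and the conflict-driven learning and SMT interaction are omitted.

```python
from collections import deque

def shortest_path_prefix(cfg, entry, target):
    """BFS over the control-flow graph: return the shortest node path from
    entry to the target branch. Phase 1 submits only this prefix's path
    condition to the solver; if it is infeasible, the branch is deferred
    to Phase 2. cfg maps a node to an iterable of successor nodes."""
    queue, seen = deque([(entry, [entry])]), {entry}
    while queue:
        node, path = queue.popleft()
        if node == target:
            return path
        for succ in cfg.get(node, ()):
            if succ not in seen:
                seen.add(succ)
                queue.append((succ, path + [succ]))
    return None  # target unreachable in the CFG
```

Trying only the shortest prefix per branch is what makes Phase 1 greedy: it spends at most one solver call per branch before moving on.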
176

Semantic Decomposition By Covering

Sripadham, Shankar B. 10 August 2000
This thesis describes the implementation of a covering algorithm for semantic decomposition of sentences in technical patents. This research complements the ASPIN project, which has the long-term goal of providing an automated system for digital system synthesis from patents. In order to develop a prototype of the system explained in a patent, a natural language processor (sentence interpreter) is required. Such systems typically attempt to interpret a sentence by syntactic analysis (parsing) followed by semantic analysis. Quite often, the technical narrative contains grammatical errors, incomplete sentences, anaphoric references and typographical errors that can cause the grammatical parse to fail. In such situations, an alternative method that uses a repository of pre-compiled, simple sentences (called frames) to analyze the sentences of the patent can be a useful backup. By semantically decomposing the sentences of patents into a set of frames whose meanings are fully understood, the meaning of the patent sentences can be interpreted. This thesis deals with the semantic decomposition of sentences using a branch and bound covering algorithm. The algorithm is implemented in C++. A number of experiments were conducted to evaluate its performance. The covering algorithm uses a standard branch and bound algorithm to semantically decompose sentences. The algorithm is fast, flexible and can provide good coverage results (100% coverage for some sentences). The system covered 67.68% of the sentence tokens using 3459 frames in the repository. 54.25% of the frames identified by the system in covers for sentences were found to be semantically correct. The experiments suggest that the performance of the system can be improved by increasing the number of frames in the repository. / Master of Science
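A rough sketch of the covering step, not the thesis's exact formulation: frames are modeled here as plain token sets (an assumption), and the search minimizes the number of frames in a full cover.

```python
def min_frame_cover(tokens, frames):
    """Branch-and-bound set cover: pick the fewest frames (token sets)
    whose union covers `tokens`. The bound prunes any branch that cannot
    beat the best cover found so far. Returns frame indices, or None if
    no full cover exists."""
    best = [None]

    def search(uncovered, chosen, idx):
        if not uncovered:
            if best[0] is None or len(chosen) < len(best[0]):
                best[0] = list(chosen)
            return
        # bound: even one more frame cannot improve on the incumbent
        if best[0] is not None and len(chosen) + 1 >= len(best[0]):
            return
        if idx == len(frames):
            return
        if frames[idx] & uncovered:          # branch 1: take frame idx
            chosen.append(idx)
            search(uncovered - frames[idx], chosen, idx + 1)
            chosen.pop()
        search(uncovered, chosen, idx + 1)   # branch 2: skip frame idx

    search(frozenset(tokens), [], 0)
    return best[0]
```

For example, min_frame_cover({'the', 'valve', 'opens'}, [{'the', 'valve'}, {'opens'}, {'valve', 'opens'}]) returns [0, 1].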
177

Madison, Hamilton, and Reagan: The Limits of Executive Power in Foreign Policy and the Reagan Intervention in Nicaragua

Lallinger, Stefan 20 May 2011
The distribution of power between the executive branch and the legislative branch in the realm of foreign policy is a delicate balance, and one that has been debated since the Founding Fathers met in Philadelphia in the summer of 1787. The debate has become no less intense and no less crucial in the modern, nuclear age, and it remains unresolved. The Reagan administration's foray into Nicaragua during the 1980s and its confrontations with Congress during that period illuminate the complexities of the power-sharing arrangement in foreign policy and offer an ideal case study of executive-legislative war power. The lesson to be drawn from America's involvement in Nicaragua is that expanded presidential power in the realm of foreign policy is necessary for the safety of the country in today's world, but dangerous without vigorous oversight and an ultimate check by Congress.
178

Avaliação de métodos ótimos e subótimos de seleção de características de texturas em imagens / Evaluation of optimal and suboptimal feature selection methods applied to image textures

Roncatti, Marco Aurelio 10 July 2008
Texture features are efficient image descriptors and can be employed in a wide range of applications, such as classification and segmentation. However, when the number of features is considerably high, pattern recognition tasks may be compromised. Feature selection helps prevent this problem, as it can be used to reduce data dimensionality and reveal the features which best characterise the images under investigation. This work evaluates optimal and suboptimal feature selection algorithms in the context of textural features extracted from images. Branch and bound, exhaustive search and sequential floating forward selection (SFFS) were the algorithms investigated. The criterion functions employed during selection were the Jeffries-Matusita (JM) distance and the minimum distance classifier (MDC) accuracy rate. Texture features were computed from first-order statistics, co-occurrence matrices and Gabor filters. Three experiments were conducted: classification of regions of an aerial photograph of a eucalyptus plantation, unsupervised segmentation of mosaics of Brodatz texture samples, and supervised segmentation of MRI images of the brain. Branch and bound is an optimal algorithm and many times more efficient than exhaustive search, but it is still time-consuming. This work proposes a novel strategy for the branch and bound algorithm, named forest, which considerably improved its performance. The evaluation of the feature selection methods revealed that the best feature subsets were those computed with the MDC accuracy rate criterion function. Exhaustive search and branch and bound, even with the forest strategy, were considered unfeasible due to their high processing times when the number of features is very large. The SFFS approach yielded the best results: not only was it faster, it also found the optimal or nearly optimal solutions. Finally, it was observed that the precision of pattern recognition tasks increases as the number of features decreases, and that the best feature subsets frequently combine texture features computed by distinct methods.
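Since SFFS was the best-performing method here, a compact sketch may help; `score` abstracts the criterion function (e.g. MDC accuracy rate or JM distance), and the bookkeeping is simplified relative to full textbook SFFS. All names are illustrative.

```python
def sffs(features, score, k):
    """Sequential floating forward selection: repeatedly add the feature
    that maximizes the criterion (forward step), then keep removing the
    least useful feature while the reduced subset beats the best subset
    of that size found so far (floating backward step). `score` maps a
    tuple of features to a criterion value; higher is better."""
    selected, best = [], {}                 # best[size] = (score, subset)
    while len(selected) < k:
        f = max((f for f in features if f not in selected),
                key=lambda f: score(tuple(selected) + (f,)))
        selected.append(f)
        s = score(tuple(selected))
        if s > best.get(len(selected), (float('-inf'),))[0]:
            best[len(selected)] = (s, list(selected))
        while len(selected) > 2:            # conditional exclusion
            g = max(selected,
                    key=lambda g: score(tuple(x for x in selected if x != g)))
            reduced = [x for x in selected if x != g]
            s = score(tuple(reduced))
            if s > best.get(len(reduced), (float('-inf'),))[0]:
                selected, best[len(reduced)] = reduced, (s, list(reduced))
            else:
                break
    return best[k][1]
```

The floating backward step is what lets SFFS escape the nesting effect of plain forward selection, which is why it often lands on or near the optimum found by branch and bound at a fraction of the cost.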
179

Algorithmen im Wirkstoffdesign / Algorithms in Drug Design

Thimm, Martin 31 January 2006
Two important questions in drug design are the following: how to compute the similarity of two molecules, and how to cluster molecules by similarity. The first part describes two different approaches to comparing molecules for 3D similarity. The first algorithm uses just the 3D coordinates of the atoms as input. We show that this algorithm is able to detect similar activity or similar adverse reactions, even with a simple, purely geometry-based scoring function; compared to previous, simpler approaches, more interesting hits are found. The second algorithm additionally uses the connectivity structures of the molecular graphs as input. This fully flexible approach (conformers of the molecules are treated simultaneously) may even find provably optimal solutions. Parameter settings for practically relevant instances allow running times that make it possible to search even large databases. The second part describes two methods to organize a database of molecular structures so that searching for geometrically similar structures is considerably faster than linear scanning. After analyzing the data with graph-theoretical methods, two algorithms for two different ranges of similarity are designed, using hierarchical methods and dominating set techniques. Finally, we propose a new biclustering algorithm. Biclustering problems have recently appeared mainly in the context of analysing gene expression data. Again, graph-theoretical methods are our main tools: in our model, biclusters correspond to dense subgraphs of certain bipartite graphs. In a first phase, the algorithm deterministically finds supersets of solution candidates; thinning out these sets by heuristic methods then yields the solutions. This new algorithm outperforms previous comparable methods, in part by a clear margin, and its simple structure makes it easy to customize for practical applications.
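A toy illustration of the bipartite-graph model of biclustering described above; the thresholding, the pairwise seeding, and all names are assumptions, and the heuristic thinning phase is omitted.

```python
import itertools

def seed_biclusters(matrix, tau, min_cols=3):
    """Bipartite-graph view of biclustering: connect gene g to condition c
    when matrix[g][c] > tau; candidate biclusters (supersets, to be thinned
    heuristically afterwards) are seeded from gene pairs that share at
    least min_cols conditions."""
    expressed = [frozenset(c for c, v in enumerate(row) if v > tau)
                 for row in matrix]
    seeds = []
    for g1, g2 in itertools.combinations(range(len(matrix)), 2):
        shared = expressed[g1] & expressed[g2]
        if len(shared) >= min_cols:
            seeds.append(({g1, g2}, shared))
    return seeds
```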
180

Parallel Scheduling in the Cloud Systems: Approximate and Exact Methods / Ordonnancement parallèle des systèmes Cloud : méthodes approchées et exactes

Hassan Abdeljabbar Hassan, Mohammed Albarra 15 December 2016
Cloud computing has emerged as a strong concept for sharing costs and resources across end-users, and several related service models are in wide use (IaaS, PaaS, SaaS...). In this context, our research focused on designing new methodologies and algorithms that optimize performance using scheduling and combinatorial theory. We were interested in optimizing the performance of a cloud computing environment in which resources are heterogeneous (operators, machines, processors...) but limited, and our objective was to build advanced algorithms that take the specificities of such an environment into account while ensuring solution quality. Generally, scheduling consists of organizing activities in a specific system under rules that must be respected. Scheduling problems are essential in project management, but also in a wide range of real systems (telecommunications, computing, transportation, production...). More generally, solving a scheduling problem reduces to organizing and synchronizing a set of activities (jobs or tasks) by exploiting the available capacities (resources), respecting technical rules (constraints), and maximizing effectiveness (according to a set of criteria). Most of these problems belong to the class of NP-hard problems, for which no polynomial-time exact algorithm is expected to exist unless P=NP; their study is thus scientifically interesting in addition to being of high practical relevance. In particular, we aimed to build new, efficient combinatorial methods for parallel-machine scheduling problems where resources have different speeds and tasks are linked by precedence constraints. We studied two methodological approaches, exact and metaheuristic, across three scheduling problems: task scheduling in a cloud environment, which generalizes to unrelated parallel machine scheduling, and the open shop scheduling problem under different constraints. For unrelated parallel machines with precedence constraints, we proposed novel genetic-based task scheduling algorithms to minimize the maximum completion time (makespan). These algorithms combine the genetic algorithm approach with techniques and batching rules such as list scheduling (LS) and earliest completion time (ECT). We evaluated and compared the proposed algorithms against a well-known genetic algorithm from the literature for task scheduling on heterogeneous computing systems, and extended the comparison to an existing greedy search method and to an exact formulation based on basic integer linear programming. The proposed genetic algorithms perform well, dominating the evaluated methods across problem sizes and time complexity on large benchmark sets of instances. We also extended three existing mathematical formulations to derive exact solutions for this problem; these formulations were validated and compared to each other through extensive computational experiments. Moreover, we proposed integer linear programming formulations, with an exponential number of constraints, for unrelated parallel machine scheduling with precedence/disjunctive constraints, based on interval and m-clique free graphs. We analyzed in depth the substructure of interval graphs containing no clique of a given size, studied the associated polytope, and showed that the facets we found are valid not only for parallel machine scheduling with precedence constraints but for any parallel machine scheduling problem. We developed a Branch-and-Cut algorithm in which the separation problems, used to insert our facets dynamically, are solved by graph algorithms, and we analyzed the results to identify the most useful inequalities. Finally, we addressed the generalized open shop problem together with the classical open shop problem, proposing a mathematical programming formulation and several families of valid inequalities, and reusing the previously defined constraints to improve performance on the generalized problem. Tests of the improved model on classical open shop instances from the literature gave good results, solving some instances faster.
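To make the ECT batching rule mentioned above concrete, a minimal list-scheduling sketch follows; the data encoding and names are assumptions, and the genetic-algorithm layer built on top of such rules is omitted.

```python
def ect_list_schedule(proc, preds):
    """Earliest-completion-time (ECT) list scheduling for unrelated
    parallel machines under precedence constraints. proc[j][m] is the
    processing time of job j on machine m; preds[j] is the set of jobs
    that must finish before j starts. Returns the makespan and per-job
    (machine, start, end) assignments. This is a heuristic sketch."""
    n_jobs, n_mach = len(proc), len(proc[0])
    machine_free = [0.0] * n_mach
    finish, schedule, done = {}, {}, set()
    while len(done) < n_jobs:
        ready = [j for j in range(n_jobs)
                 if j not in done and preds[j] <= done]
        best = None  # (end, job, machine, start) with the earliest end
        for j in ready:
            release = max((finish[p] for p in preds[j]), default=0.0)
            for m in range(n_mach):
                start = max(machine_free[m], release)
                end = start + proc[j][m]
                if best is None or end < best[0]:
                    best = (end, j, m, start)
        end, j, m, start = best
        machine_free[m], finish[j] = end, end
        schedule[j] = (m, start, end)
        done.add(j)
    return max(finish.values()), schedule
```

For instance, with proc = [[3, 2], [2, 4]] and preds = {0: set(), 1: {0}}, job 0 goes to the faster machine 1 (finishing at 2) and job 1 then finishes at 4, giving a makespan of 4.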
