  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
621

A graph theory-based 'expert system' methodology for structure-activity studies

Henderson, Robert Vann January 1992 (has links)
No description available.
622

Diffusion Connectometry and Graph Theory Reveal Structural “Sweet Spot” for Language Performance

Williamson, Brady January 2017 (has links)
No description available.
623

EFFICIENT GROUP COMMUNICATION AND THE DEGREE-BOUNDED SHORTEST PATH PROBLEM

HELMICK, MICHAEL T. 02 July 2007 (has links)
No description available.
624

Reactor behavior and its relation to chemical reaction network structure

Knight, Daniel William January 2015 (has links)
No description available.
625

A Verified Program for the Enumeration of All Maximal Independent Sets

Merten, Samuel A. January 2016 (has links)
No description available.
626

Some results on the association schemes of bilinear forms /

Huang, Tayuan January 1985 (has links)
No description available.
627

Models for Systemic Risk

Shao, Quentin H. January 2017 (has links)
Systemic risk is the risk that an economic shock may result in the breakdown of the fundamental functions of the financial system. It can involve multiple vectors of contagion, such as chains of losses or consecutive failures of financial institutions, that may ultimately cause the financial system to fail to provide liquidity, stable prices, and economic activity. This thesis develops methods to quantify systemic risk, its effect on the financial system and, perhaps more importantly, to determine its cause. In the first chapter, we provide an overview and a literature review of the topics covered in this thesis: first a review of network-based models of systemic risk, and then a review of market impact models. In the second chapter, we consider a single unregulated financial institution with constant absolute risk aversion preferences that optimizes its strategies in a multi-asset market impact model with temporary and permanent impact. We prove the existence of, and explicitly derive, the optimal trading strategies, and we numerically explore the sensitivity of the optimal trading curve. This chapter sets the foundation for further research into multi-agent models and systemic risk models with optimal behaviours. In the third chapter, we extend the market impact model to the multi-agent setting. The agents follow game-theoretic strategies constrained by the regulations imposed on them, and must liquidate if they become insolvent or unable to meet those regulations. This chapter provides a bridge between market impact models and network models of systemic risk. In chapter four, we introduce a financial network model that combines the default and liquidity stress mechanisms into a "double cascade mapping."
Unlike simpler models, this model can quantify how the illiquidity or default of one bank influences the overall level of liquidity stress and default in the system. We derive large-network asymptotic cascade mapping formulas that can be used for efficient network computations of the double cascade. Finally, we use systemic risk measures to compare outcomes with and without an asset fire-sale mechanism. / Thesis / Doctor of Philosophy (PhD)
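The network-contagion idea behind chapter four can be illustrated with a much simpler model than the thesis's double cascade mapping. The sketch below is a bare threshold default cascade (in the spirit of Gai–Kapadia-style models, not the mapping derived in the thesis): a bank defaults once its losses from defaulted counterparties exhaust its equity buffer, and defaults are iterated to a fixed point. All names and numbers are hypothetical.

```python
def default_cascade(capital, exposures, initially_defaulted):
    """Iterate a threshold default cascade to its fixed point.

    capital[i]:      equity buffer of bank i.
    exposures[i][j]: amount bank i is owed by bank j (lost if j defaults).
    Returns the set of banks defaulted once the cascade settles."""
    n = len(capital)
    defaulted = set(initially_defaulted)
    changed = True
    while changed:
        changed = False
        for i in range(n):
            if i in defaulted:
                continue
            # Loss to bank i from all counterparties that have defaulted.
            loss = sum(exposures[i][j] for j in defaulted)
            if loss >= capital[i]:
                defaulted.add(i)
                changed = True
    return defaulted

# Three hypothetical banks: bank 2 fails first, wiping out bank 1's
# exposure to it, which in turn topples bank 0.
capital = [2.0, 1.0, 0.5]
exposures = [[0.0, 3.0, 0.0],
             [0.0, 0.0, 1.5],
             [0.2, 0.0, 0.0]]
print(sorted(default_cascade(capital, exposures, {2})))  # → [0, 1, 2]
```

The thesis's double cascade additionally tracks liquidity stress alongside solvency; this sketch shows only the solvency channel.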
628

Harnessing Simulated Data with Graphs

Maia, Henrique Teles January 2022 (has links)
Physically accurate simulations allow for unlimited exploration of arbitrarily crafted environments. From a scientific perspective, digital representations of the real world are useful because they make it easy validate ideas. Virtual sandboxes allow observations to be collected at-will, without intricate setting up for measurements or needing to wait on the manufacturing, shipping, and assembly of physical resources. Simulation techniques can also be utilized over and over again to test the problem without expending costly materials or producing any waste. Remarkably, this freedom to both experiment and generate data becomes even more powerful when considering the rising adoption of data-driven techniques across engineering disciplines. These are systems that aggregate over available samples to model behavior, and thus are better informed when exposed to more data. Naturally, the ability to synthesize limitless data promises to make approaches that benefit from datasets all the more robust and desirable. However, the ability to readily and endlessly produce synthetic examples also introduces several new challenges. Data must be collected in an adaptive format that can capture the complete diversity of states achievable in arbitrary simulated configurations while too remaining amenable to downstream applications. The quantity and zoology of observations must also straddle a range which prevents overfitting but is descriptive enough to produce a robust approach. Pipelines that naively measure virtual scenarios can easily be overwhelmed by trying to sample an infinite set of available configurations. Variations observed across multiple dimensions can quickly lead to a daunting expansion of states, all of which must be processed and solved. These and several other concerns must first be addressed in order to safely leverage the potential of boundless simulated data. 
In response to these challenges, this thesis proposes to wield graphs in order to instill structure over digitally captured data, and curb the growth of variables. The paradigm of pairing data with graphs introduced in this dissertation serves to enforce consistency, localize operators, and crucially factor out any combinatorial explosion of states. Results demonstrate the effectiveness of this methodology in three distinct areas, each individually offering unique challenges and practical constraints, and together showcasing the generality of the approach. Namely, studies observing state-of-the-art contributions in design for additive manufacturing, side-channel security threats, and large-scale physics based contact simulations are collectively achieved by harnessing simulated datasets with graph algorithms.
629

Heuristics for laying out information graphs

Lavinus, Joseph W. 30 December 2008 (has links)
The representation of information in modern database systems is complicated by the need to represent relationships among pieces of information. A natural representation for such databases is the information graph, which associates the pieces of information with vertices in the graph and the relationships with edges. Five characteristics of this representation are noteworthy. First, each vertex has a size (in bytes) sufficient to store its corresponding piece of information. Second, retrieval in an information graph may follow a number of patterns; in particular, retrieval of adjacent vertices via edge traversals must be efficient. Third, in many applications, such as a dictionary or bibliographic archive, the information graph may be considered static. Fourth, the ultimate home for an information graph is likely to be a roughly linear medium such as a magnetic disk or CD-ROM. Finally, information graphs are quite large: hundreds of thousands of vertices and tens of megabytes in size. / Master of Science
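The layout problem the abstract describes, placing a graph's vertices on a linear medium so that edge traversals stay local, can be sketched with one simple heuristic: order the vertices by breadth-first search, so neighbors tend to land near each other, and score a layout by its total edge span. This is a generic illustration under assumed names, not one of the thesis's specific heuristics.

```python
from collections import deque

def bfs_layout(adjacency):
    """Lay out a graph on a line by BFS order and score the result.

    adjacency: {vertex: list of neighbor vertices} (undirected).
    Returns (order, span) where order is the linear placement and
    span is the total edge span, i.e. the sum over edges of the
    distance between their endpoints' positions."""
    order, seen = [], set()
    for start in adjacency:          # cover disconnected components
        if start in seen:
            continue
        seen.add(start)
        queue = deque([start])
        while queue:
            v = queue.popleft()
            order.append(v)
            for w in adjacency[v]:
                if w not in seen:
                    seen.add(w)
                    queue.append(w)
    pos = {v: i for i, v in enumerate(order)}
    # Count each undirected edge once (v < w) when summing spans.
    span = sum(abs(pos[v] - pos[w])
               for v in adjacency for w in adjacency[v] if v < w)
    return order, span

# A path graph lays out perfectly: every edge spans distance 1.
path = {'a': ['b'], 'b': ['a', 'c'], 'c': ['b', 'd'], 'd': ['c']}
print(bfs_layout(path))  # → (['a', 'b', 'c', 'd'], 3)
```

A production layout for disk or CD-ROM would additionally weight vertices by their byte size so that frequently co-traversed vertices share a block; the span metric here is the simplest proxy for that locality goal.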
630

Table-driven quadtree traversal algorithms

Lattanzi, Mark R. 01 August 2012 (has links)
Two quadtree algorithms are presented that use table-driven traversals to reduce the time complexity required to achieve their respective goals. The first algorithm is a two-step process that converts a boundary representation of a polygon into a corresponding region representation of the same image. The first step orders the border pixels of the polygon. The second step fills in the polygon in O(B) time, where B is the number of border pixels for the polygon of interest. A table propagates the correct values of upcoming nodes in a simulated traversal of the final region quadtree. This is unique because the pointer representation of the tree being traversed does not exist. A linear quadtree representation is constructed as this traversal proceeds. The second algorithm is an update algorithm for a quadtree (or octree) of moving particles. Particle simulations have had the long-standing problem of calculating the interactions among n particles. It takes O(n²) time for direct computation of all the interactions between n particles. Greengard [Gree87, Carr87] has devised a way to approximate these calculations in linear time using a tree data structure. However, the particle simulation must still rebuild the particle tree after every iteration, which requires O(n log n) time. Our algorithm updates the existing tree of particles, rather than building a new tree. It operates in near linear time in the number of particles being simulated. The update algorithm uses a table to store particles as they move between nodes of the tree. / Master of Science
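Both algorithms in this abstract rest on the linear quadtree idea: a leaf is identified by a key rather than by pointers, so a particle's new cell can be computed directly and only particles that crossed a cell boundary need to be re-binned. The sketch below illustrates that update-in-place idea using Morton (Z-order) keys; it is a generic illustration with hypothetical names, not the table-driven algorithm of the thesis.

```python
def morton_key(x, y, depth):
    """Interleave the bits of grid coordinates (x, y) to produce the
    Morton (Z-order) key of a quadtree leaf at the given depth."""
    key = 0
    for bit in range(depth):
        key |= ((x >> bit) & 1) << (2 * bit)
        key |= ((y >> bit) & 1) << (2 * bit + 1)
    return key

def update_cells(particles, cell_of, size, depth):
    """Re-bin only the particles whose leaf changed, instead of
    rebuilding the whole tree each iteration.

    particles: {pid: (px, py)} with coordinates in [0, size).
    cell_of:   {pid: morton_key} from the previous step (updated in place).
    Returns the set of particle ids that moved to a new leaf."""
    cells_per_side = 1 << depth
    moved = set()
    for pid, (px, py) in particles.items():
        gx = int(px * cells_per_side / size)
        gy = int(py * cells_per_side / size)
        key = morton_key(gx, gy, depth)
        if cell_of.get(pid) != key:
            cell_of[pid] = key
            moved.add(pid)
    return moved
```

Between timesteps most particles stay inside their cell, so the per-iteration work is dominated by the few boundary crossers, which is the intuition behind the near-linear update time claimed in the abstract.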
