21

Low-Rank Riemannian Optimization Approach to the Role Extraction Problem

Unknown Date (has links)
This dissertation uses Riemannian optimization theory to increase our understanding of the role extraction problem and its algorithms. Recent ideas of using the low-rank projection of the neighborhood pattern similarity measure, together with our theoretical analysis of the relationship between the rank of the similarity measure and the number of roles in the graph, motivate our proposal to use Riemannian optimization to compute a low-rank approximation of the similarity measure. We propose two indirect approaches to solving the role extraction problem. The first uses the standard two-phase process: in the first phase, we use Riemannian optimization to compute a low-rank approximation of the graph's similarity matrix; in the second phase, we apply k-means clustering to the low-rank factor of the similarity matrix to extract the role partition of the graph. This approach is designed to be efficient in time and space complexity while still extracting good-quality role partitions. We use basic experiments and applications to illustrate the speed, robustness, and quality of our two-phase indirect role extraction approach. The second indirect approach combines the two phases of the first into a one-phase approach that iteratively approximates the low-rank similarity matrix, extracts the role partition of the graph, and updates the rank of the similarity matrix. We show that using Riemannian rank-adaptive techniques when computing the low-rank similarity matrix improves the robustness of the clustering algorithm. / A Dissertation submitted to the Department of Mathematics in partial fulfillment of the requirements for the degree of Doctor of Philosophy. / Fall Semester 2017. / September 21, 2017. / blockmodeling, graph partitioning, networks, Riemannian optimization, role extraction problem / Includes bibliographical references. / Kyle A. Gallivan, Professor Co-Directing Dissertation; Paul Van Dooren, Professor Co-Directing Dissertation; Gordon Erlebacher, University Representative; Giray Ökten, Committee Member; Mark Sussman, Committee Member.
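As a rough sketch of the two-phase pipeline described above, the snippet below computes a rank-r factor of a similarity matrix and clusters its rows with k-means. A truncated SVD stands in for the dissertation's Riemannian optimization solver, and the similarity matrix `S`, rank `r`, and role count are hypothetical placeholders; this is a minimal illustration, not the author's implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

def two_phase_roles(S, r, n_roles):
    """Phase 1: rank-r approximation of the similarity matrix (truncated SVD
    here, standing in for the Riemannian solver). Phase 2: k-means on the
    rows of the low-rank factor gives the role partition."""
    U, s, _ = np.linalg.svd(S, hermitian=True)
    factor = U[:, :r] * np.sqrt(s[:r])          # low-rank factor, one row per node
    return KMeans(n_clusters=n_roles, n_init=10).fit_predict(factor)

# Hypothetical usage; S would be a neighborhood-pattern similarity matrix.
rng = np.random.default_rng(0)
X = rng.standard_normal((60, 5))
S = X @ X.T                                     # placeholder PSD similarity matrix
print(two_phase_roles(S, r=5, n_roles=3))
```

Clustering the low-rank factor rather than the full matrix is what makes the two-phase approach cheap in space: only an n-by-r factor is ever stored.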
22

Pattern Identification and Analysis in Urban Flows

January 2018 (has links)
abstract: Two urban flows are analyzed, one concerned with pollutant transport in a Phoenix, Arizona neighborhood and the other with windshear detection at the Hong Kong International Airport (HKIA). Lagrangian measures, identified with finite-time Lyapunov exponents, are first used to characterize transport patterns of inertial pollutant particles. Motivated by actual events, the focus is on flows in realistic urban geometry. Both deterministic and stochastic transport patterns are identified as inertial Lagrangian coherent structures. For the deterministic case, the organizing structures are well defined and are extracted at different hours of a day to reveal the variability of coherent patterns. For the stochastic case, a random displacement model for fluid particles is formulated and used to derive the governing equations for inertial particles, to examine the change in organizing structures due to "zeroth-order" random noise. It is found that (1) the Langevin equation for inertial particles can be reduced to a random displacement model; (2) using random noise based on inhomogeneous turbulence, whose diffusivity is derived from k-ε models, major coherent structures survive to organize local flow patterns while weaker structures are smoothed out by random motion. A study of three-dimensional Lagrangian coherent structures (LCS) near HKIA is then presented and related to previous developments of two-dimensional (2D) LCS analyses in detecting windshear experienced by landing aircraft. The LCS are contrasted among three independent models and against 2D coherent Doppler light detection and ranging (LIDAR) data. Adding the velocity information perpendicular to the LIDAR scanning cone helps solidify flow structures inferred from previous studies; contrast among models reveals the intermodel variability; and comparison with flight data evaluates the performance of the models in terms of Lagrangian analyses. It is found that, while the three models and the LIDAR do recover similar features of the windshear experienced by a landing aircraft (along the landing trajectory), their Lagrangian signatures over the entire domain are quite different: a portion of each numerical model captures certain features resembling those LCS extracted from independent 2D LIDAR analyses based on observations. Overall, the Weather Research and Forecasting (WRF) model provides the best agreement with the LIDAR data. Finally, the three-dimensional variational (3DVAR) data assimilation scheme in WRF is used to incorporate the LIDAR line-of-sight velocity observations into the WRF model forecast at HKIA. Using two different days as test cases, it is found that the LIDAR data can be successfully and consistently assimilated into WRF. Using the updated model forecasts, LCS are extracted along the LIDAR scanning cone and compared to onboard flight data. The LCS generated from the updated WRF forecasts are generally better correlated with the windshear experienced by landing aircraft than the LIDAR-extracted LCS alone, which suggests that such a data assimilation scheme could be used for the prediction of windshear events. / Dissertation/Thesis / Doctoral Dissertation Applied Mathematics 2018
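For readers unfamiliar with the Lagrangian measures mentioned above, the sketch below illustrates a standard finite-time Lyapunov exponent (FTLE) computation for a generic 2D velocity field: integrate a grid of tracers, differentiate the flow map, and take the largest eigenvalue of the Cauchy-Green tensor. The double-gyre test flow is a textbook stand-in, not the urban or HKIA flow fields used in the dissertation, and all parameters are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

def ftle_field(velocity, xs, ys, t0, T):
    """Finite-time Lyapunov exponent on a grid for a 2D velocity field.

    velocity(t, x, y) returns (u, v) arrays; one tracer is integrated per
    grid node, and the flow-map gradient is taken by central differences.
    """
    X, Y = np.meshgrid(xs, ys)

    def rhs(t, q):
        x, y = np.split(q, 2)
        u, v = velocity(t, x, y)
        return np.concatenate([u, v])

    q0 = np.concatenate([X.ravel(), Y.ravel()])
    sol = solve_ivp(rhs, (t0, t0 + T), q0, rtol=1e-6)
    Fx = sol.y[: q0.size // 2, -1].reshape(X.shape)
    Fy = sol.y[q0.size // 2 :, -1].reshape(Y.shape)

    # Flow-map gradient, then largest eigenvalue of the Cauchy-Green tensor.
    dxdx = np.gradient(Fx, xs, axis=1); dxdy = np.gradient(Fx, ys, axis=0)
    dydx = np.gradient(Fy, xs, axis=1); dydy = np.gradient(Fy, ys, axis=0)
    ftle = np.empty_like(Fx)
    for i in range(Fx.shape[0]):
        for j in range(Fx.shape[1]):
            F = np.array([[dxdx[i, j], dxdy[i, j]],
                          [dydx[i, j], dydy[i, j]]])
            lam_max = np.linalg.eigvalsh(F.T @ F)[-1]
            ftle[i, j] = np.log(np.sqrt(lam_max)) / abs(T)
    return ftle

def double_gyre(t, x, y, A=0.1, eps=0.25, om=2 * np.pi / 10):
    # Classic time-periodic test flow on [0, 2] x [0, 1].
    a = eps * np.sin(om * t)
    b = 1 - 2 * a
    f = a * x**2 + b * x
    u = -np.pi * A * np.sin(np.pi * f) * np.cos(np.pi * y)
    v = np.pi * A * np.cos(np.pi * f) * np.sin(np.pi * y) * (2 * a * x + b)
    return u, v

ftle = ftle_field(double_gyre, np.linspace(0, 2, 60), np.linspace(0, 1, 30), 0.0, 10.0)
print("max FTLE:", ftle.max())  # ridges of this field approximate LCS
```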
23

Epidemic models on adaptive networks with network structure constraints

Tunc, Ilker 01 January 2013 (has links)
Spread of infectious diseases progresses as a result of contacts between the individuals in a population. Therefore, it is crucial to gain insight into the pattern of connections to better understand and possibly control the spread of infectious diseases. Moreover, people may respond to an epidemic by changing their social behaviors to prevent infection. As a result, the structure of the network of social contacts evolves adaptively as a function of the disease status of the nodes. Recently, the dynamic relationships between different network topologies and adaptation mechanisms have attracted great attention in modeling epidemic spread. However, in most of these models, the original network structure is not preserved, because the adaptation mechanisms involve random changes in the links. In this dissertation, we study more realistic models with network structure constraints that retain aspects of the original network structure.

We study a susceptible-infected-susceptible (SIS) disease model on an adaptive network with two communities. Different levels of heterogeneity in terms of average connectivity and connection strength are considered. We study the effects of a disease avoidance adaptation mechanism based on the rewiring of susceptible-infected links through which the disease could spread. We choose the rewiring rules so that the network structure with two communities is preserved when the rewiring of links occurs uniformly. The high-dimensional network system is approximated with a lower-dimensional mean field description based on a moment closure approximation. Good agreement between the solutions of the mean field equations and the results of the simulations is obtained at the steady state. In contrast to the non-adaptive case, similar infection levels in both communities are observed even when they are weakly coupled. We show that the adaptation mechanism tends to bring both the infection level and the average degree of the communities closer to each other.

In this rewiring mechanism, the local neighborhood of a node changes and is never restored to its previous state. However, in real life people tend to preserve their neighborhood of friends. We propose a more realistic adaptation mechanism, where susceptible nodes temporarily deactivate their links to infected neighbors and reactivate the links to those neighbors after they recover. Although the original network is static, the subnetwork of active links evolves.

We derive mean field equations that predict the behavior of the system at the steady state. Two different regimes are observed. In the slow network dynamics regime, the adaptation simply reduces the effective average degree of the network. However, in the fast network dynamics regime, the adaptation further suppresses the infection level by reducing the number of dangerous links. In addition, a non-monotonic dependence of the active degree on the deactivation rate is observed.

We extend the temporary deactivation adaptation mechanism to a scale-free network, where the degree distribution shows heavy tails. It is observed that the tail of the degree distribution of the active subnetwork has a different exponent than that of the original network. We present a heuristic explanation supporting that observation. We derive improved mean field equations based on a new moment closure approximation, obtained by considering the active degree distribution conditioned on the total degree. These improved mean field equations show better agreement with the simulation results than a standard mean field analysis based on homogeneity assumptions.
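The following is a minimal sketch of an SIS model on an adaptive network with rewiring of susceptible-infected links, in the spirit of the first mechanism described above. It uses uniform rewiring rather than the community-preserving rules of the dissertation, and all parameters and the update rule are illustrative assumptions.

```python
import random
import networkx as nx

def sis_adaptive_step(G, status, beta, gamma, w):
    """One synchronous step of SIS spread with rewiring of S-I links.

    status maps node -> 'S' or 'I'; beta is the per-link infection
    probability, gamma the recovery probability, and w the probability
    that a susceptible node rewires a link away from an infected neighbor.
    """
    new_status = dict(status)
    for u in G.nodes:
        if status[u] == 'I':
            if random.random() < gamma:
                new_status[u] = 'S'
            continue
        for v in list(G.neighbors(u)):
            if status[v] != 'I':
                continue
            if random.random() < w:
                # Rewire the S-I link to a random susceptible non-neighbor,
                # keeping the node's degree fixed.
                targets = [n for n in G.nodes
                           if status[n] == 'S' and n != u and not G.has_edge(u, n)]
                if targets:
                    G.remove_edge(u, v)
                    G.add_edge(u, random.choice(targets))
            elif random.random() < beta:
                new_status[u] = 'I'
                break
    return new_status

G = nx.erdos_renyi_graph(500, 0.02, seed=1)
status = {n: 'I' if random.random() < 0.05 else 'S' for n in G.nodes}
for _ in range(200):
    status = sis_adaptive_step(G, status, beta=0.04, gamma=0.02, w=0.01)
print("infected at steady state:", sum(s == 'I' for s in status.values()))
```

The temporary-deactivation mechanism studied later in the dissertation would instead flag S-I edges as inactive and restore them on recovery, so the underlying graph never changes.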
24

Look-back stopping times and their applications to liquidation risk and exotic options

Li, Bin 01 May 2013 (has links)
In addition to first passage times, many look-back stopping times play a significant role in modeling various risks in insurance and finance, as well as in defining financial instruments. Motivated by recent problems in risk management and exotic options, we study several look-back stopping times, including drawdown and drawup times, Parisian times, and inverse occupation times of time-homogeneous Markov processes such as diffusion and jump-diffusion processes. Since the structures of these look-back stopping times are much more complex than those of fundamental stopping times such as first passage times, we aim to develop general approaches for studying them, such as approximation and perturbation approaches, which carry over to a wide class of stochastic processes. Many explicit formulas for these stopping times are derived, from which we gain a quantitative understanding of the associated problems in insurance and finance. In our study, we mainly use the techniques of Laplace transforms and partial differential equations (PDEs). Because of their complex structures, the distributions of these look-back stopping times are usually not explicit, even for the simplest linear Brownian motion. Under Laplace transforms, however, many important formulas become explicit, which enables further derivation and analysis. In addition, PDE methods provide an effective and efficient approach for both theoretical investigation and numerical study of these stopping times.
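As a concrete illustration of one of the look-back stopping times studied here, the sketch below uses Monte Carlo simulation to estimate the first time the drawdown of a linear Brownian motion exceeds a level a, together with a crude estimate of the Laplace transform E[exp(-λτ)]. It is a simulation-based sanity check under an Euler scheme, not one of the analytical approaches developed in the thesis; all parameters are arbitrary.

```python
import numpy as np

def drawdown_time_mc(mu, sigma, a, dt=1e-3, t_max=50.0, n_paths=20000, seed=0):
    """Monte Carlo estimate of tau_a = inf{t : max_{s<=t} X_s - X_t >= a}
    for a linear Brownian motion X_t = mu*t + sigma*W_t (Euler scheme)."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n_paths)
    run_max = np.zeros(n_paths)
    tau = np.full(n_paths, np.inf)          # inf = drawdown never hit by t_max
    alive = np.ones(n_paths, dtype=bool)
    for k in range(1, int(t_max / dt) + 1):
        n_alive = alive.sum()
        x[alive] += mu * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_alive)
        run_max[alive] = np.maximum(run_max[alive], x[alive])
        hit = alive & (run_max - x >= a)
        tau[hit] = k * dt
        alive &= ~hit
        if not alive.any():
            break
    return tau

tau = drawdown_time_mc(mu=0.05, sigma=0.2, a=0.5)
lam = 0.1
# exp(-lam*inf) = 0, so paths that never hit contribute nothing, as they should.
print("E[exp(-lam*tau_a)] ~=", np.exp(-lam * tau).mean())
```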
25

Effects of behavioral changes and mixing patterns in mathematical models for smallpox epidemics

Del Valle, Sara Yemimah 01 January 2005 (has links)
In Chapter 1, we study the effects of behavioral changes in a smallpox attack model. Response strategies to a smallpox bioterrorist attack have focused on interventions such as isolation, contact tracing, quarantine, ring vaccination, and mass vaccination. We formulate and analyze a mathematical model in which some individuals lower their daily contact activity rates once an epidemic has been identified in a community. We use computer simulations to analyze the effects of behavior change alone and in combination with other control measures. We demonstrate that the spread of the disease is highly sensitive to how rapidly people reduce their contact activity. In Chapter 2, we study mixing patterns between age groups using social networks. The course of an epidemic through a population is determined by the interactions among individuals. To capture these elements of reality, we use the contact network simulations for the city of Portland, Oregon that were developed as part of the TRANSIMS/EpiSims project to study and identify mixing patterns. We analyze contact patterns between different age groups and identify those groups who are at higher risk of infection. We describe a new method for estimating transmission matrices that describe the mixing and the probability of transmission between the age groups. We use this matrix in a simple differential equation model for the spread of smallpox. Our differential equation model shows that the epidemic size of a smallpox outbreak could be greatly affected by the level of residual immunity in the population. In Chapter 3, we study the effects of mixing patterns in the presence of population heterogeneity. We investigate the impact that different mixing assumptions have on the spread of a disease in an age-structured differential equation model. We use realistic, semi-biased, and biased mixing matrices and investigate the impact that these mixing patterns have on epidemic outcomes when compared to random mixing. Furthermore, we investigate the impact of population heterogeneity, such as differences in susceptibility and infectivity within the population, for a smallpox epidemic outbreak. We find that different mixing assumptions lead to differences in disease prevalence and final epidemic size.
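To make the age-structured modeling concrete, here is a minimal SIR-type system with a transmission (mixing) matrix between groups, in the spirit of the differential equation model described above. The three groups, matrix entries, and rates are hypothetical placeholders, not the TRANSIMS/EpiSims-derived estimates used in the dissertation.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical 3-group example (children, adults, seniors); all numbers
# are illustrative.
N = np.array([0.25, 0.55, 0.20])        # group sizes as population fractions
beta = np.array([[1.2, 0.4, 0.2],       # beta[i, j]: rate at which infectious
                 [0.4, 0.6, 0.3],       # group j transmits to susceptible
                 [0.2, 0.3, 0.4]])      # group i (per unit time)
gamma = 1 / 14.0                        # recovery rate

def sir_rhs(t, y):
    s, i = y[0:3], y[3:6]
    force = beta @ i                    # force of infection felt by each group
    return np.concatenate([-s * force, s * force - gamma * i, gamma * i])

y0 = np.concatenate([N - 1e-4, np.full(3, 1e-4), np.zeros(3)])
sol = solve_ivp(sir_rhs, (0, 365), y0, max_step=1.0)
print("attack rate by group:", sol.y[6:9, -1] / N)
```

Swapping the mixing matrix between random, semi-biased, and biased variants while holding everything else fixed is exactly the kind of comparison Chapter 3 performs.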
26

The partially monotone tensor spline estimation of joint distribution function with bivariate current status data

Wu, Yuan 01 July 2010 (has links)
The analysis of the joint distribution function with bivariate event time data is a challenging problem both theoretically and numerically. This thesis develops a tensor spline-based nonparametric maximum likelihood estimation method to estimate the joint distribution function with bivariate current status data. Tensor I-splines are developed to replace the traditional tensor B-splines in approximating the joint distribution function, in order to simplify the computation of the restricted maximum likelihood estimation problem. The generalized gradient projection algorithm is used to solve the restricted optimization problem. We show that the proposed tensor spline-based nonparametric estimator is consistent and derive its rate of convergence. Simulation studies with moderate sample sizes show that the finite-sample performance of the proposed estimator is generally satisfactory.
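As a loose illustration of the monotone-spline idea, the sketch below builds a monotone basis by integrating B-splines (I-splines proper are integrated M-splines, which span the same monotone space up to rescaling) and fits a nondecreasing curve by nonnegative least squares. It is one-dimensional for brevity, whereas the thesis works with tensor (bivariate) products, and nnls stands in for the generalized gradient projection algorithm; everything here is a hypothetical toy, not the thesis's estimator.

```python
import numpy as np
from scipy.interpolate import BSpline
from scipy.optimize import nnls

def monotone_basis(knots, degree, x):
    """Monotone basis built by integrating B-splines: each column is the
    antiderivative of one B-spline, rescaled to rise from 0 to 1."""
    t = np.concatenate([[knots[0]] * degree, knots, [knots[-1]] * degree])
    n_basis = len(t) - degree - 1
    cols = []
    for k in range(n_basis):
        c = np.zeros(n_basis)
        c[k] = 1.0
        F = BSpline(t, c, degree).antiderivative()
        cols.append(F(x) / F(knots[-1]))
    return np.column_stack(cols)

# Toy 1-D fit: nonnegative coefficients (nnls) on a monotone basis keep the
# fitted curve nondecreasing, as required of a distribution function.
rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0, 1, 200))
y = x**2 + rng.normal(0, 0.05, 200)     # noisy monotone target
B = monotone_basis(np.linspace(0, 1, 8), degree=3, x=x)
coef, _ = nnls(B, y)
fit = B @ coef                          # nondecreasing estimate
print("min increment of the fit:", np.diff(fit).min())
```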
27

Defining new insight into fatal human arrhythmia: a mathematical analysis

Wolf, Roseanne Marie 01 May 2012 (has links)
Background: Normal cardiac excitability depends upon the coordinated activity of ion channels and transporters. Mutations in genes encoding ion channels that affect their biophysical properties have been known for over twenty years as a root cause of potentially fatal human electrical rhythm disturbances (arrhythmias). More recently, defects in ion channel-associated proteins (e.g., adapter, regulatory, and cytoskeletal proteins) have been shown to cause arrhythmia. Mathematical modeling is ideally suited to integrate large volumes of cellular and in vivo data from human patients and animal disease models, with the overall goal of determining cellular mechanisms for these atypical human cardiac diseases that involve complex defects in ion channel membrane targeting and/or regulation. Methods and Results: Computational models of ventricular, atrial, and sinoatrial cells were used to determine the mechanism for increased susceptibility to arrhythmias and sudden death in human patients with inherited defects in ankyrin-based targeting pathways. The loss of ankyrin-B was first incorporated into detailed models of the ventricular myocyte to identify the cellular mechanism for arrhythmias in human patients with loss-of-function mutations in ANK2 (which encodes ankyrin-B). Mathematical modeling was used to identify the cellular pathway responsible for abnormal Ca2+ handling and cardiac arrhythmias in ventricular cells. A multi-scale computational model of ankyrin-B deficiency in atrial and sinoatrial cells and tissue was then developed to determine the mechanism for the increased susceptibility to atrial fibrillation in these human patients. Finally, a state-based Markov model of the voltage-gated Na+ channel was incorporated into a ventricular cell model, and parameter estimation was performed to determine the mechanism for a new class of human arrhythmia variants that confer susceptibility to arrhythmia by interfering with a regulatory complex comprised of a second member of the ankyrin family, ankyrin-G. Conclusions: Ca2+ accumulation was observed at baseline in the ankyrin-B-deficient ventricular model, with pro-arrhythmic spontaneous release and afterdepolarizations in the presence of simulated β-adrenergic stimulation, consistent with the finding of catecholaminergic-induced arrhythmias in human patients. The simulations demonstrated that loss of the membrane Na+/Ca2+ exchanger and Na+-K+-ATPase contributed to Ca2+ overload and afterdepolarizations, with loss of the Na+/Ca2+ exchanger as the dominant mechanism. In the atrial model of ankyrin-B deficiency, the loss of L-type Ca2+ channel targeting was identified as the dominant mechanism for the initiation of atrial fibrillation. Finally, the simulations showed that human variants affecting ankyrin-G-dependent regulation of NaV1.5 result in arrhythmia by mimicking phosphorylation of the channel. Most importantly, mathematical modeling has been used to determine the molecular mechanism underlying these human arrhythmia syndromes.
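To indicate what a state-based Markov channel model looks like, here is a minimal deterministic simulation of a hypothetical three-state Na+ channel (closed, open, inactivated) under a voltage step. The states, rate functions, and parameters are invented for illustration and are far simpler than the NaV1.5 model used in the dissertation.

```python
import numpy as np

def na_channel_markov(v_trace, dt, rates):
    """Forward-Euler simulation of dp/dt = Q(v) p for a hypothetical
    three-state Markov channel: Closed <-> Open -> Inactivated -> Closed."""
    p = np.array([1.0, 0.0, 0.0])       # start fully closed: [C, O, I]
    open_frac = np.empty(len(v_trace))
    for k, v in enumerate(v_trace):
        a, b, g, d = rates(v)           # C->O, O->C, O->I, I->C
        Q = np.array([[-a,       b,   d],
                      [ a, -(b + g), 0.0],
                      [0.0,      g,  -d]])
        p = p + dt * (Q @ p)            # columns of Q sum to 0, so sum(p) stays 1
        open_frac[k] = p[1]
    return open_frac

def rates(v):
    # Illustrative voltage dependence, not fitted to any channel.
    a = 5.0 * np.exp(v / 20.0)          # activation speeds up with depolarization
    b = 0.5 * np.exp(-v / 20.0)
    g = 2.0                             # inactivation from the open state
    d = 0.05                            # slow recovery from inactivation
    return a, b, g, d

# Voltage step from -80 mV to -10 mV at t = 5 (arbitrary time units).
v = np.where(np.arange(0, 20, 0.01) < 5, -80.0, -10.0)
print("peak open probability:", na_channel_markov(v, 0.01, rates).max())
```

A variant that, say, shifts the inactivation rate g mimics the kind of regulatory defect the dissertation probes with parameter estimation.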
28

Workforce and inventory management under uncertain demand

Valeva, Silviya Dimitrova 01 May 2017 (has links)
This thesis studies the problem of production and inventory planning for an organization facing uncertainty in demand. Specifically, we examine the problem of assigning workers to tasks, seeking to maximize profits while taking into consideration learning through experience and stochasticity in demand. As quantitative descriptions of human learning are nonlinear, we employ a reformulation technique that uses binary and continuous variables and linear constraints. Similarly, as demand is not assumed to be known with certainty, we embed this mixed-integer representation of how experience translates to productivity in a stochastic workforce assignment model. We further present a matheuristic solution technique and a Markov decision process formulation with a one-step lookahead that allows the problem to be solved in stages over time as demand information becomes available. With an extensive computational study, we demonstrate the advantages of the matheuristic approach over an off-the-shelf solver and derive managerial insights about task assignment, workforce capacity development, and inventory management. We show that cross-training increases as demand uncertainty increases, that worker practice increases as inventory holding costs increase, and that workers with less initial experience receive more practice than workers with higher initial experience. We further observe that the proposed lookahead MDP model outperforms similar myopic models, producing both increased profit and decreased lost sales, and is especially valuable when high demand variation is expected. By recognizing individual differences in learning and modeling the improvement in productivity through experience, our results show that the ability to manage workforce capacity can be an effective substitute for inventory. Additionally, we observe that optimal solutions favor the use of inventory for more valuable products and rely on higher productivity for less valuable ones. Further analysis suggests that slower learners tend to specialize more and that teams with a slower average learning rate tend to produce more inventory.
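A one-step lookahead policy of the kind mentioned above can be sketched in a few lines: for each candidate action, average the immediate profit plus a terminal value estimate over sampled demand scenarios, and pick the best. The state encoding, profit function, and value estimate below are hypothetical toys, not the dissertation's MDP, which also tracks worker experience.

```python
def one_step_lookahead(actions, demand_samples, step, value_estimate, state):
    """Choose the action maximizing the average of (immediate profit +
    one-step value estimate) over sampled demand scenarios."""
    best_action, best_value = None, float('-inf')
    for a in actions:
        total = 0.0
        for d in demand_samples:
            profit, next_state = step(state, a, d)
            total += profit + value_estimate(next_state)
        avg = total / len(demand_samples)
        if avg > best_value:
            best_action, best_value = a, avg
    return best_action

# Hypothetical toy: state = inventory, action = units produced, demand = d.
def step(inventory, produce, demand):
    stock = inventory + produce
    sold = min(stock, demand)
    profit = 5.0 * sold - 2.0 * produce - 0.5 * (stock - sold)  # price, cost, holding
    return profit, stock - sold

best = one_step_lookahead(actions=range(11), demand_samples=[3, 5, 8],
                          step=step, value_estimate=lambda inv: 1.0 * min(inv, 5),
                          state=2)
print("produce", best, "units")
```

The myopic policies the thesis compares against correspond to setting the value estimate to zero.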
29

Classifying 2-string tangles within families and tangle tabulation

Caples, Christine 15 December 2017 (has links)
A knot can be thought of as a knotted piece of string with the ends glued together. A tangle is formed by intersecting a knot with a 3-dimensional ball. The portion of the knot in the interior of the ball, along with the fixed intersection points on the surface of the ball, forms the tangle. Tangles can be used to model protein-DNA binding, so another way to think of a tangle is in terms of segments of DNA (the strings) bounded by the protein complex (the 3-dimensional ball). In this thesis, we look at an algorithm used to list tangles. We also classify tangles into families.
30

Dynamic field theory applied to fMRI signal analysis

Ambrose, Joseph Paul 01 July 2016 (has links)
In the field of cognitive neuroscience, there is a need for theory-based approaches to fMRI data analysis. The dynamic neural field model-based approach has been developed to meet this demand. This dissertation describes my contributions to this approach. The methods and tools were demonstrated through a case study experiment on response selection and inhibition. The experiment was analyzed via both the standard behavioral approach and the new model-based method, and the two methods were compared head to head. The methods were quantitatively comparable at the individual level of analysis. At the group level, the model-based method reveals distinct functional networks localized in the brain. This validates the dynamic neural field model-based approach in general, as well as my recent contributions.
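For context, a dynamic neural field in the Amari tradition evolves an activation field u(x, t) under tau * du/dt = -u + h + input + w * f(u), where w is a lateral-interaction kernel (local excitation, surround inhibition) applied by convolution and f is a sigmoid output. The sketch below simulates a one-dimensional field with illustrative parameters; it is a generic textbook model, not the dissertation's fMRI-linked implementation.

```python
import numpy as np

def simulate_field(n=200, steps=2000, dt=0.1, tau=10.0, h=-5.0):
    """1-D dynamic neural field: tau*du/dt = -u + h + stim + (w * f(u)),
    with an excitation/inhibition kernel w and sigmoid f; all parameters
    are illustrative."""
    pos = np.arange(n)
    dist = pos - n // 2
    w = (1.5 * np.exp(-dist**2 / (2 * 4.0**2))       # local excitation
         - 0.75 * np.exp(-dist**2 / (2 * 12.0**2)))  # surround inhibition
    stim = 7.0 * np.exp(-(pos - n / 2)**2 / (2 * 5.0**2))  # localized input
    u = np.full(n, h)
    for _ in range(steps):
        f = 1.0 / (1.0 + np.exp(-u))                 # sigmoid firing rate
        u += dt / tau * (-u + h + stim + np.convolve(f, w, mode='same'))
    return u

u = simulate_field()
print("peak activation:", round(u.max(), 2))  # a self-stabilized peak forms
```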
