421

Optical Polarization Observations of Epsilon Aurigae During the 2009-2011 Eclipse

Henson, Gary D., Burdette, John, Gray, Sharon 29 May 2012 (has links)
Polarization observations of the unique eclipsing binary, Epsilon Aurigae, are being carried out using a new dual-beam imaging polarimeter on the 0.36 m telescope of the Harry D. Powell Observatory. This bright binary system has a 27.1-year period with an eclipse duration of nearly two years. The primary is known to be a pulsating F0 supergiant, with the secondary a large and essentially opaque disk. We report here on the characteristics of the polarimeter and on the status of V-band observations that are being obtained to better understand the system's geometry and the nature of its two components. In particular, the characteristics of the secondary disk remain a puzzle. Results are compared to polarization observations from the 1982-1984 eclipse.
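For context, a dual-beam polarimeter of this kind typically yields the normalized Stokes parameters, from which the degree and angle of linear polarization follow; the standard relations (not specific to this instrument) are

$$ q = \frac{Q}{I}, \qquad u = \frac{U}{I}, \qquad p = \sqrt{q^2 + u^2}, \qquad \theta = \tfrac{1}{2}\,\mathrm{atan2}(u, q), $$

so the V-band observations reduce to a time series of $(p, \theta)$ across the eclipse, which is what constrains the system geometry.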
422

Nondifferentiable Optimization of Lagrangian Dual Formulations for Linear Programs with Recovery of Primal Solutions

Lim, Churlzu 15 July 2004 (has links)
This dissertation is concerned with solving large-scale, ill-structured linear programming (LP) problems via Lagrangian dual (LD) reformulations. A principal motivation for this work arises in the context of solving mixed-integer programming (MIP) problems, where LP relaxations, sometimes in higher-dimensional spaces, are widely used for bounding and cut-generation purposes. Often, such relaxations turn out to be large-sized, ill-conditioned problems for which both simplex and interior-point methods tend to be ineffective. In contrast, Lagrangian relaxation or dual formulations, when applied in concert with suitable primal recovery strategies, have the potential for providing quick bounds as well as enabling useful branching mechanisms. However, the objective function of the Lagrangian dual is nondifferentiable, and hence we cannot apply the popular gradient- or Hessian-based techniques commonly used in differentiable optimization. Moreover, the subgradient methods in popular use are typically slow to converge and tend to stall while still remote from optimality. More advanced methods, such as the bundle method and the space dilation method, involve additional computational and storage requirements that make them impractical for large-scale applications. Furthermore, although we might derive an optimal or near-optimal solution for the LD, depending on the dual-adequacy of the methodology used, a primal solution may not be available; while some algorithmically simple primal solution recovery schemes have been developed in theory to accompany Lagrangian dual optimization, their practical performance has been disappointing. Rectifying these inadequacies is the focal point of this dissertation. Many practical applications dealing with production planning and control, engineering design, and decision-making in different operational settings fall within the purview of this context and stand to gain from advances in this technology. With this motivation, our primary interests are to develop effective nondifferentiable optimization (NDO) methods for solving Lagrangian duals of large-sized linear programs and to design practical primal solution recovery techniques, thereby facilitating the derivation of quick bounds/cuts and branching mechanisms within branch-and-bound/cut methodologies for solving mixed-integer programs.

We begin by adapting the Volume Algorithm (VA) of Barahona and Anbil (2000), developed at IBM, as a direction-finding strategy within the variable target value method (VTVM) of Sherali et al. (2000). This adaptation makes VA resemble a deflected subgradient scheme, in contrast with the bundle-type interpretation afforded by the modification of VA proposed by Bahiense et al. (2002). Although VA was originally developed to recover a primal optimal solution, we first present an example demonstrating that it might indeed converge to a nonoptimal primal solution. However, under a suitable condition on the geometric moving-average factor, we establish the convergence of the proposed algorithm in the dual space. A detailed computational study reveals that this approach yields a competitive procedure compared with alternative strategies, including the average direction strategy (ADS) of Sherali and Ulular (1989), a modified Polyak-Kelley cutting-plane strategy (PKC) designed by Sherali et al. (2001), and the modified Volume Algorithm routines RVA and BVA proposed by Bahiense et al. (2002), all embedded within the same VTVM framework. As far as CPU times are concerned, the VA strategy consumed the least computational effort for most problems in attaining a near-optimal objective value. Moreover, the VA, ADS, and PKC strategies revealed considerable savings in CPU effort over a popular commercial linear programming solver, CPLEX Version 8.1, when used to derive near-optimal solutions.

Next, we consider two variable target value methods, the Level Algorithm of Brännlund (1993) and VTVM, which require no prior knowledge of upper bounds on the optimal objective value while guaranteeing convergence to an optimal solution. We evaluate these two algorithms in conjunction with various direction-finding and step-length strategies such as PS, ADS, VA, and PKC. Furthermore, we generalize the PKC strategy by further modifying the cuts' right-hand-side values and additionally performing sequential projections onto previously generated Polyak-Kelley cutting planes; we call this the generalized PKC (GPKC) strategy. Moreover, we point out some latent deficiencies in the two aforementioned variable target value algorithms with regard to their target value update mechanisms, and we suggest modifications to alleviate these shortcomings, along with an additional local search procedure to strengthen performance. Noting that no related convergence analyses had been presented, we prove the convergence of the Level Algorithm when used in conjunction with the ADS, VA, or GPKC schemes, and we establish the convergence of VTVM when employing GPKC. In our computational study, the modified VTVM algorithm produced the best-quality solutions when implemented with the GPKC strategy, where the latter performs sequential projections onto the four most recently generated Polyak-Kelley cutting planes, as available. We also demonstrate that the proposed modifications and the local search technique significantly improve overall performance. Moreover, the VTVM procedure consistently outperformed the Level Algorithm as well as a popular heuristic subgradient method of Held et al. (1974) that is widely used in practice. As far as CPU times are concerned, the modified VTVM procedure in concert with the GPKC strategy revealed the best performance, providing near-optimal solutions with, on average, about 27.84% of the effort required by CPLEX 8.1 to produce solutions of the same quality.

We next consider the Lagrangian dual of a bounded-variable, equality-constrained linear programming problem and develop two novel approaches that attempt to circumvent or obviate the nondifferentiability of the objective function. First, noting that the Lagrangian dual function is differentiable almost everywhere, whenever the NDO algorithm encounters a nondifferentiable point, we employ a proposed perturbation technique (PT) to detect a differentiable point in the vicinity of the current solution from which a further search can be conducted. In the second approach, the barrier-Lagrangian dual reformulation (BLR) method, the primal problem is reformulated by constructing a barrier function for the set of bounding constraints such that an optimal solution to the original problem can be recovered by suitably adjusting the barrier parameter. However, instead of solving the barrier problem itself, we dualize the equality constraints to formulate a Lagrangian dual function, which is shown to be twice differentiable. Since differentiable pathways are made available via these two techniques, we can advantageously utilize differentiable optimization methods along with popular conjugate gradient schemes. Based on these constructs, we propose a two-phase algorithmic procedure: in Phase I, the PT and BLR methods along with various deflected gradient strategies are utilized; in Phase II, we switch to the modified VTVM algorithm in concert with GPKC (VTVM-GPKC), which revealed the best performance in the previous study. We also designed two target value initialization methods to commence Phase II based on the output from Phase I. The computational results reveal that Phase I indeed helps to significantly improve algorithmic performance compared with implementing VTVM-GPKC alone, even though the latter was run for twice as many iterations as used in the two-phase procedures. Among the implemented procedures, the PT method in concert with certain prescribed deflection and Phase II initialization schemes yielded the best overall solution quality and CPU time performance, consuming only 3.19% of the effort required by CPLEX 8.1 to produce comparable solutions. We also tested some ergodic primal recovery strategies with and without employing BLR as a warm-start, and observed that an initial BLR phase can significantly enhance the convergence of such primal recovery schemes.

Having observed that the VTVM algorithm requires the fine-tuning of several parameters for different classes of problems, our next investigation focuses on developing a robust variable target value framework that entails the management of only a few parameters. We therefore design a novel algorithm, the Trust Region Target Value (TRTV) method, in which a trust region is constructed in the dual space and its center and size are adjusted in a manner that eventually induces a dual optimum to lie at the center of the hypercube trust region; a related convergence analysis is also conducted. We additionally examine a variation of TRTV in which the hyperrectangular trust region is adjusted to normalize the effects of the dual variables. In our computational study, we compared the performance of TRTV with that of the VTVM-GPKC procedure. For four direction-finding strategies (PS, VA, ADS, and GPKC), the TRTV algorithm consistently produced better-quality solutions than VTVM-GPKC, with the best performance obtained when TRTV was employed in concert with the PS strategy. Moreover, the solution quality produced by TRTV was consistently better than that obtained via VTVM, lending it a greater degree of robustness. As far as computational effort is concerned, the TRTV-PS combination consumed, on average, only 4.94% of the CPU time required by CPLEX 8.1 to find solutions of comparable quality. Based on our extensive set of test problems, therefore, TRTV along with the PS strategy appears to be the best and most robust procedure among those tested.

Finally, we explore an outer-linearization (cutting-plane) method along with a trust region approach for refining available dual solutions and recovering a primal optimum in the process. This method enables us to escape the jamming phenomenon commonly experienced at nonoptimal points when applying NDO methods, as well as to refine the available dual solution toward a dual optimum. Furthermore, we can recover a primal optimal solution once the dual solution is close enough to a dual optimum, without generating a potentially excessive set of constraints. In our computational study, we tested two such trust region strategies, the Box-step (BS) method of Marsten et al. (1975) and a new Box Trust Region (BTR) approach, both appended to the foregoing TRTV-PS dual optimization methodology, and we also experimented with deleting nonbinding constraints when the number of cuts exceeds a prescribed threshold. This refinement further improved solution quality, reducing the near-zero relative optimality gap for TRTV-PS by 20.6-32.8%. The best strategy turned out to be the BTR method with deletion of nonbinding constraints (BTR-D). As far as the recovery of optimal solutions is concerned, the BTR-D scheme yielded the best measure of primal feasibility and, although it was terminated after accumulating only 50 constraints, it outperformed the ergodic primal recovery scheme of Shor (1985), which was run for 2000 iterations while also assuming knowledge of the optimal objective value in the dual scheme.

In closing, we note that many optimization problems arising in complex systems such as communication network design, semiconductor manufacturing, and supply chain management have been formulated as large-sized mixed-integer programs, but deriving even near-optimal solutions for them has been elusive due to exorbitant computational requirements. Since the computational effort for solving mixed-integer programs via branch-and-bound/cut methods depends strongly on how effectively the underlying LP relaxations can be solved, applying theoretically convergent and practically effective NDO methods, in concert with efficient primal recovery procedures, to suitable Lagrangian dual reformulations of these relaxations can significantly enhance overall performance. We therefore hope that the methodologies designed and analyzed in this research effort will have a notable positive impact on the analysis of such complex systems. / Ph. D.
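To make the core machinery concrete, the following is a minimal numpy sketch of deflected subgradient ascent on the Lagrangian dual of a bounded-variable LP, with an ergodic (averaged) primal iterate as a simple recovery scheme. It illustrates the generic ideas the dissertation builds on — a Volume-Algorithm-style geometric deflection, a Polyak-type target step, and ergodic averaging — and is not a reconstruction of the VTVM, GPKC, or TRTV procedures; the function name, step rule, and deflection factor are illustrative assumptions.

```python
import numpy as np

def lagrangian_dual_subgradient(c, A, b, u, iters=500, alpha=0.7, target=None):
    """Deflected subgradient ascent on the Lagrangian dual of
        min c'x  s.t.  Ax = b,  0 <= x <= u,
    i.e. theta(lam) = min_{0<=x<=u} c'x + lam'(b - Ax),
    with an ergodic (averaged) primal iterate as a simple recovery scheme."""
    m, n = A.shape
    lam, d = np.zeros(m), np.zeros(m)
    x_bar, best = np.zeros(n), -np.inf
    for k in range(1, iters + 1):
        rc = c - A.T @ lam                  # reduced costs of the inner problem
        x = np.where(rc < 0, u, 0.0)        # separable minimization over the box
        theta = c @ x + lam @ (b - A @ x)   # dual value: a lower bound on the LP
        best = max(best, theta)
        g = b - A @ x                       # subgradient of theta at lam
        d = alpha * g + (1.0 - alpha) * d   # geometric-average deflection
        nd = float(d @ d)
        if target is not None and theta < target and nd > 0:
            step = (target - theta) / nd    # Polyak-style step toward a target
        else:
            step = 1.0 / k                  # diminishing fallback
        lam += step * d
        x_bar += (x - x_bar) / k            # ergodic primal average
    return lam, best, x_bar
```

The inner minimization is separable over the box constraints, which is what makes each dual evaluation cheap even when the LP itself is large; the variable-target-value machinery studied in the dissertation is essentially a principled way of supplying and updating the `target` used in the step rule.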
423

Free Surface Waves And Interacting Bouncing Droplets: A Parametric Resonance Case Study

Borja, Francisco J. 07 1900 (has links)
Parametric resonance is a particular type of resonance in which a parameter of a system changes with time. A particularly interesting case is when the parameter changes periodically, which can lead to very intricate behavior; this differs from periodic forcing in that solutions are not necessarily periodic. Parametric resonance is realized when a fluid bath is shaken periodically, which produces an effective time-dependent gravitational force. This system is used to study the onset of surface waves in a bath with non-uniform topography. A linear model for the surface waves is derived from the Euler equations in the limit of shallow waves, incorporating the geometry of the bottom and surface tension. Experiments are performed to compare with the proposed model, and good qualitative agreement is found. Another experiment relying on a shaken fluid bath is that of bouncing fluid droplets. In the case of two droplets, the shaking allows a larger bouncing droplet to attract a smaller moving droplet in a way that creates a bound system. This bound system is studied and shows properties analogous to quantum systems, so a quantum mechanical model for a two-dimensional atom is studied, as well as a proposed model for the droplet-wave system in terms of the equations of fluid mechanics.
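As a point of reference, the flat-bottom, inviscid version of this problem reduces (following Benjamin and Ursell) to a Mathieu-type equation for each surface mode amplitude $a_k$; the thesis's model generalizes this to non-uniform topography. Up to sign conventions for the forcing term,

$$ \ddot{a}_k + \left(gk + \frac{\sigma k^3}{\rho}\right)\tanh(kh)\left[1 + \frac{\gamma}{g}\cos(\omega t)\right] a_k = 0, $$

where $h$ is the depth, $\sigma$ the surface tension, $\rho$ the density, and $\gamma$ the shaking acceleration; subharmonic (Faraday) waves set in when the forcing drives a parametric-resonance tongue of this equation.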
424

Subgroup Identification in Clinical Trials

Li, Xiaochen 04 1900 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Subgroup analyses assess the heterogeneity of treatment effects in groups of patients defined by patients' baseline characteristics. Identifying subgroups of patients with differential treatment effects is crucial for tailored therapeutics and personalized medicine. Model-based variable selection methods are well developed and widely applied to select significant treatment-by-covariate interactions for subgroup analyses, and machine learning, data-driven methods for subgroup identification have also been developed. In this dissertation, I consider two different types of subgroup identification methods: one nonparametric and machine-learning based, the other model based. In the first approach, the subgroup identification problem is transformed into an optimization problem, and a stochastic search technique is implemented to partition the whole population into disjoint subgroups with differential treatment effects. In the second approach, an integrative three-step model-based variable selection method is proposed for subgroup analyses in longitudinal data. Using this three-step variable selection framework, informative features and their interactions with the treatment indicator can be identified for subgroup analysis in longitudinal data; the method can be extended to longitudinal binary or categorical data. Simulation studies and real data examples demonstrate the performance of the proposed methods. / 2022-05-06
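To illustrate the model-based flavor of interaction selection in its simplest cross-sectional form (not the dissertation's three-step longitudinal procedure — the data, names, and penalty choice below are assumptions), one can penalize treatment-by-covariate interaction terms and read off the nonzero ones:

```python
import numpy as np
from sklearn.linear_model import LassoCV

# Simulated data: a differential treatment effect through covariate 0 only.
rng = np.random.default_rng(0)
n, p = 400, 10
X = rng.normal(size=(n, p))            # baseline covariates
trt = rng.integers(0, 2, size=n)       # treatment indicator
y = 1.0 * trt + 2.0 * trt * X[:, 0] + X[:, 1] + rng.normal(size=n)

# Design: [treatment | main effects | treatment-by-covariate interactions].
design = np.column_stack([trt[:, None], X, trt[:, None] * X])
fit = LassoCV(cv=5).fit(design, y)
inter = fit.coef_[p + 1:]              # coefficients on the trt*X columns
print("selected interaction covariates:", np.flatnonzero(inter != 0))
```

Covariates whose interaction coefficients survive the penalty define candidate subgroups with differential treatment effect; the longitudinal extension in the dissertation layers this kind of selection over repeated measurements.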
425

Linear Mixed Model Selection by Partial Correlation

Alabiso, Audry 29 April 2020 (has links)
No description available.
426

Portfolio Optimization Problems with Transaction Costs

Gustavsson, Stina, Gyllberg, Linnéa January 2023 (has links)
Portfolio theory is a cornerstone of modern finance, based on the idea that an investor can reduce risk by diversifying investments across various assets. In this thesis, Harry Markowitz's mean-variance optimization theory is expanded upon by taking into account variable and fixed transaction costs, making the model slightly more realistic. Parameters are estimated from historical data, and the portfolios considered are those that would be of interest to Generation Z. Using transaction costs from some of Sweden's biggest and most popular banks, the impact of the transaction costs can be seen in the presented graphs. Though many more aspects could be considered to make the model even more realistic, the results give insight into how one might invest in the stock market to increase the chances of a good expected return at minimal variance (risk).
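One common way to fold both cost types into the Markowitz problem (the thesis's exact formulation may differ) is

$$ \min_{w}\; w^{\top}\Sigma w \quad \text{s.t.}\quad \mu^{\top}w \;-\; \sum_{i}\Big(c_i\,\lvert w_i - w_i^{0}\rvert + f_i\,\mathbf{1}[w_i \neq w_i^{0}]\Big) \;\ge\; r_{\min}, \qquad \mathbf{1}^{\top}w = 1, $$

where $w^{0}$ is the current portfolio, $c_i$ the proportional (variable) cost, and $f_i$ the fixed cost per trade. The proportional term keeps the problem convex, while the fixed-cost indicator does not, which is why such models are typically handled with binary variables or heuristics.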
427

The Physiometrics of Inflammation and Implications for Medical and Psychiatric Research: Toward Empirically-informed Inflammatory Composites

Moriarity, Daniel, 0000-0001-8678-7307 January 2022 (has links)
Most psychoneuroimmunology research examines individual proteins; however, some studies have used summed-score composites of all available inflammatory markers without evaluating the appropriateness of this decision. Using three different samples (MIDUS-2: N = 1,255 adults; MIDUS-R: N = 863 adults; ACE: N = 315 adolescents), this study investigates the dimensionality of eight inflammatory proteins (C-reactive protein (CRP), interleukin (IL)-6, IL-8, IL-10, tumor necrosis factor-α (TNF-α), fibrinogen, E-selectin, and intercellular adhesion molecule (ICAM)-1) and compares the resulting factor structure to (a) an “a priori” factor structure in which all inflammatory proteins load equally onto a single dimension (a technique that has been used previously) and (b) proteins modeled individually (i.e., no latent variable), in terms of model fit, replicability, reliability, temporal stability, and associations with medical history and depression symptoms. A hierarchical factor structure with two first-order factors (Factor 1A: CRP, IL-6, fibrinogen; Factor 2A: TNF-α, IL-8, IL-10, ICAM-1, IL-6) and a second-order general inflammation factor was identified in MIDUS-2, replicated in MIDUS-R, and partially replicated in ACE (which had only CRP, IL-6, IL-8, IL-10, and TNF-α but, unlike the other two samples, has longitudinal data). Both the empirically identified structure and modeling proteins individually fit the data better than the one-dimensional “a priori” structure, though results did not clearly indicate whether the empirically identified factor structure or the individually modeled proteins had superior fit. Modeling the empirically identified factors and individual proteins (without a latent factor) as outcomes of medical diagnoses led to comparable conclusions, but modeling the empirically identified factors resulted in fewer results “lost” to correction for multiple comparisons. Importantly, when the factor scores were recreated in a longitudinal dataset, none of the individual proteins, the “a priori” factor, or the empirically identified general inflammation factor significantly predicted concurrent depression symptoms in multilevel models; however, both empirically identified first-order factors were significantly associated with depression, in opposite directions. Measurement properties are reported for the different aggregates and individual proteins as appropriate, which can be used in the design and interpretation of future studies. These results indicate that modeling inflammation as a unidimensional construct equally associated with all available proteins does not fit the data well. Instead, empirically supported aggregates of inflammation, or individual inflammatory markers, should be used in accordance with theory. Further, the aggregation of shared variance achieved by constructing empirically supported aggregates might increase predictive validity compared to other modeling choices, maximizing statistical power. / Psychology
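Schematically, the hierarchical structure reported here is a standard second-order factor model: each protein $x_i$ loads on one first-order factor (IL-6 on both), and the first-order factors load on a general inflammation factor,

$$ x_i = \lambda_i\,\eta_{f(i)} + \varepsilon_i, \qquad \eta_f = \gamma_f\,\xi + \zeta_f, $$

where $\eta_1, \eta_2$ are the two first-order factors and $\xi$ is general inflammation; the “a priori” composite corresponds to forcing all proteins onto a single factor with equal loadings, which is the constraint the data reject.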
428

Development of a Multiscale Internal State Variable Inelasticity-Corrosion Damage Model for Magnesium Alloys

Song, Weiwei 14 August 2015 (has links)
This dissertation proposes a multiscale Internal State Variable (ISV) inelasticity-corrosion damage model motivated by experimental microstructure-property relations of magnesium alloys. The corrosion damage framework was laid out based on observations of the different corrosion mechanisms occurring on an extruded AM30 magnesium alloy. The extruded AM30 alloy was studied under two corrosion environments (cyclical salt spray and immersion) in order to compare corrosion rates under different exposure conditions. Coupons were examined at various times to determine the history effects of three corrosion mechanisms: (1) general corrosion; (2) pitting corrosion, in terms of the nucleation rate, growth rate, and coalescence rate; and (3) intergranular corrosion. The multiscale ISV corrosion model was developed by bridging the macroscale corrosion damage to the mesoscale electrochemical kinetics, microscale material features, and nanoscale material activation energies. Corrosion testing results for Mg alloys (pure Mg, Mg-2% Al, and Mg-6% Al) were used to develop, calibrate, and validate the model, and good agreement was found between the model results and the corrosion testing data. Finally, the simultaneous effects of corrosion and cyclic loading were tested, but not modeled, for the extruded AM30 alloy by conducting fatigue experiments in a 3.5 wt.% NaCl solution environment. The corrosion fatigue life of the AM30 alloy was significantly reduced owing to corrosion pit formation on the specimen surface, hydrogen diffusing into the material, and dissolution of the fracture surface into the solution. The corrosion damage arising on the fatigue specimens shortened the crack nucleation stage and enhanced the crack propagation rate.
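The abstract does not give the model equations, but ISV corrosion-damage formulations of this family are often written schematically as an additive total with a multiplicative pitting term, for example

$$ \phi_{\text{total}} = \phi_{\text{gen}} + \eta\,\nu\,c + \phi_{\text{igc}}, $$

where $\eta$, $\nu$, and $c$ denote pit nucleation density, average pit volume, and a coalescence factor (mirroring ISV pore-damage models), and $\phi_{\text{gen}}$, $\phi_{\text{igc}}$ capture general and intergranular corrosion. This is an illustrative form only, not the dissertation's actual equations.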
429

An Integrated Approach for Predicting Nitrogen Status in Early Cotton and Corn

Fox, Amelia Ann Amy 09 May 2015 (has links)
Spectral reflectance of cotton (Gossypium hirsutum L.) and corn (Zea mays L.) holds promise for deriving variable-rate N (VRN) treatments calibrated with red-edge inflection (REI)-type vegetation indices (VIs). The objectives of this study were to define the relationships between two commercially available sensors and the VIs best suited to predicting N status. Field trials were conducted during the 2012-2013 growing seasons using fixed and variable N rates in cotton ranging from 33.6 to 134.4 kg N ha-1 and fixed N rates in corn ranging from 0.0 to 268.8 kg N ha-1. Leaf N concentration, SPAD chlorophyll, and crop yield were analyzed for their relation to fertilizer N treatment. Sensor effects were significant, and red-edge VIs correlated most strongly with N status. A theoretical ENDVI index was derived from the research dataset as an improvement on, and alternative to, Guyot's Red Edge Inflection and the Simplified Canopy Chlorophyll Content Index (SI).
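For reference, the Guyot-style red-edge inflection point referred to here is usually computed by linear interpolation between four bands (the exact band set may differ by sensor):

$$ \mathrm{REIP} = 700 + 40\,\frac{\tfrac{1}{2}\left(R_{670} + R_{780}\right) - R_{700}}{R_{740} - R_{700}}, $$

with $R_\lambda$ the reflectance at the indicated wavelength in nm. Red-edge indices of this type track canopy chlorophyll, and hence N status, more linearly than NDVI under high-biomass conditions, which is why they are favored for VRN calibration.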
430

Evaluating the Effects of Variable Corn Seedling Emergence and Replanting Methods for Substandard Corn Stands

Pettit, Kevin Allen 04 May 2018 (has links)
Mississippi growers often have issues with corn seedling establishment due to saturated, cool soils, which can reduce productivity. Our first objective was to quantify the yield reduction associated with variable emergence. Four patterns simulating various extents of affected plants and four different emergence delays were hand planted uniformly at a standard population, and plants were closely monitored to document emergence variability. Growth stages were measured three separate ways to identify the best field method for characterizing stand variability. The data suggest yield disadvantages associated with emergence variability. A second objective was to evaluate practical replanting methods for Mid-South corn growers. Treatments included four populations planted at a normal time and at a replant interval, and two different series of treatments were imposed to evaluate the productivity of intra-planting seed into a partial stand. Corn grain yield was 11% greater when replanting into a clean seedbed compared to all intra-planted treatments.
