211

A Comparison of Discrete and Continuous Survival Analysis

Kim, Sunha 08 May 2014 (has links)
Researchers have long been uncertain about how to choose between two popular survival models: discrete-time and continuous-time survival analysis. This study aimed to provide empirical comparisons of the two models in educational contexts and to suggest guidelines for researchers who must adopt a suitable survival model. For model specification, the study attended to three factors: time metrics, censoring proportions, and sample sizes. To arrive at a comprehensive understanding of these factors, the study investigated their separate and combined effects. Furthermore, to understand how the factors interact, this study examined their role in determining hazard rates, which are known to cause discrepancies between discrete and continuous survival models. To provide empirical evidence across different combinations of the factors, this study built a series of discrete and continuous survival models using secondary and simulated data. In the first study, using empirical data from the National Longitudinal Survey of Youth 1997 (NLSY97), the two models were compared across different sizes of time metrics. In the second study, datasets were simulated under various combinations of the other two factors, censoring proportions and sample sizes, and the two models were again compared. The major finding is that discrete models are recommended under large units of time metrics, low censoring proportions, or small sample sizes. In particular, the discrete model produced better outcomes for conditions with a low censoring proportion (20%) and a small number (i.e., four) of large time units (i.e., years), regardless of sample size. Close examination showed that these conditions of time metrics, censoring proportion, and sample size resulted in high hazards (i.e., 0.20).
In conclusion, to determine a proper model, it is recommended to examine the hazard in each time unit, given the specific time metrics, censoring proportion, and sample size. / Ph. D.
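The discrete-time hazard this abstract keeps returning to can be sketched with a toy life-table estimator: the hazard in period t is the share of subjects still at risk who experience the event in t. The ten subjects below are hypothetical, not the NLSY97 sample used in the study.

```python
# Life-table hazard estimate: h(t) = events in period t / subjects at risk at start of t.
# The data below are hypothetical.

def discrete_hazards(event_times, censored, n_periods):
    hazards = []
    at_risk = len(event_times)
    for t in range(1, n_periods + 1):
        events = sum(1 for T, c in zip(event_times, censored) if T == t and not c)
        exits = sum(1 for T in event_times if T == t)  # events and censorings both leave the risk set
        hazards.append(events / at_risk if at_risk else 0.0)
        at_risk -= exits
    return hazards

# four yearly periods; True marks a censored observation
times    = [1, 2, 2, 3, 3, 3, 4, 4, 4, 4]
censored = [False, False, True, False, False, True, False, True, True, True]
print(discrete_hazards(times, censored, 4))
```

A per-period hazard around 0.2 or above, as in the third period here (2/7), is the kind of condition under which the study found the discrete model preferable.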
212

A Mathematical Model of a Denitrification Metabolic Network in Pseudomonas aeruginosa

Arat, Seda 23 January 2013 (has links)
Lake Erie, one of the Great Lakes in North America, has witnessed recurrent summertime low-oxygen dead zones for decades. This yearly phenomenon drives microbial production of the greenhouse gas nitrous oxide through denitrification. Complete denitrification is a microbial process that reduces nitrate to nitrogen gas via nitrite, nitric oxide, and nitrous oxide. After scanning the microbial community in Lake Erie, Pseudomonas aeruginosa was selected for examination, not because it is abundant in Lake Erie, but because it can perform denitrification under anaerobic conditions. This study focuses on a mathematical model of the denitrification metabolic network in Pseudomonas aeruginosa and on generating testable hypotheses using polynomial dynamical systems and stochastic discrete dynamical systems. Analysis of the long-term behavior of the system while varying the concentration levels of oxygen, nitrate, and phosphate suggests that phosphate strongly affects the denitrification performance of the network. / Master of Science
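The modeling framework mentioned above can be illustrated with a toy polynomial dynamical system over F2 = {0, 1}: each variable is updated by a Boolean rule, and the long-term behavior is read off from the fixed points. The three variables and their update rules below are invented placeholders, not the thesis's actual network.

```python
from itertools import product

# Toy polynomial dynamical system over F2.  The rules are hypothetical.

def step(state, phosphate):
    nitrate, nitrite, n2 = state
    return (nitrate and not phosphate,  # hypothetical: nitrate persists only without phosphate
            nitrate and phosphate,      # hypothetical: nitrite produced when phosphate is present
            nitrite)                    # N2 produced from nitrite

def fixed_points(phosphate):
    """Steady states of the network for a fixed phosphate level."""
    return [s for s in product([0, 1], repeat=3)
            if tuple(int(x) for x in step(s, phosphate)) == s]

# changing the phosphate input changes the set of steady states
print(fixed_points(0), fixed_points(1))
```

In the same spirit, the thesis compares the long-term behavior of the network as the oxygen, nitrate, and phosphate levels are varied, which is how the influence of phosphate shows up.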
213

Development of the Adaptive Collision Source Method for Discrete Ordinates Radiation Transport

Walters, William Jonathan 08 May 2015 (has links)
A novel collision source method has been developed to solve the Linear Boltzmann Equation (LBE) more efficiently by adaptation of the angular quadrature order. The angular adaptation method is unique in that the flux from each scattering source iteration is obtained, with potentially a different quadrature order used for each. Traditionally, the flux from every iteration is combined, with the same quadrature applied to the combined flux. Since the scattering process tends to distribute the radiation more evenly over angles (i.e., make it more isotropic), the quadrature requirements generally decrease with each iteration. This method allows for an optimal use of processing power, by using a high order quadrature for the first few iterations that need it, before shifting to lower order quadratures for the remaining iterations. This is essentially an extension of the first collision source method, and is referred to as the adaptive collision source (ACS) method. The ACS methodology has been implemented in the 3-D, parallel, multigroup discrete ordinates code TITAN. This code was tested on a variety of test problems including fixed-source and eigenvalue problems. The ACS implementation in TITAN has shown a reduction in computation time by a factor of 1.5-4 on the fixed-source test problems, for the same desired level of accuracy, as compared to the standard TITAN code. / Ph. D.
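The intuition behind per-iteration adaptation can be shown numerically: as a toy angular flux relaxes toward isotropy with each collision, the Gauss-Legendre order needed to integrate it over angle to a fixed tolerance shrinks. The peaked flux model, relaxation rate, and tolerance below are assumptions for illustration, not TITAN's transport solve.

```python
import numpy as np

# Minimal sketch: required angular quadrature order per "collision" of a toy flux.

def min_order(f, tol=1e-8, max_n=64):
    """Smallest Gauss-Legendre order integrating f over [-1, 1] to within tol."""
    x, w = np.polynomial.legendre.leggauss(max_n)
    reference = w @ f(x)                       # high-order reference value
    for n in range(2, max_n + 1):
        x, w = np.polynomial.legendre.leggauss(n)
        if abs(w @ f(x) - reference) < tol:
            return n
    return max_n

for collision in range(4):
    sharpness = 20.0 / 4 ** collision          # forward peak relaxes with each collision
    flux = lambda mu, s=sharpness: np.exp(s * (mu - 1))
    print(collision, min_order(flux))          # required order decreases
```

This is the economy the ACS method exploits: a high order only for the early, anisotropic collision sources, and cheaper quadratures afterward.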
214

Predicting Oxygen Transfer in Hypolimnetic Oxygenation Devices

McGinnis, Daniel Frank 08 May 2000 (has links)
The purpose of this research was to apply a discrete-bubble model to predict the performance of several hypolimnetic oxygenators. The model predicts the oxygen transfer rate in a hypolimnetic oxygenator based on the initial bubble size formed at the diffuser. Because the discrete-bubble model is based on fundamental principles, it could also be applied to other mass transfer applications involving the injection of bubbles into a fluid. The model has been applied to a linear bubble-plume diffuser, a full-lift hypolimnetic aerator, and the Speece Cone with promising results. The first step in this research was to investigate the principles of bubble formation at a submerged orifice, bubble rise velocity, and bubble mass transfer. The discrete-bubble model is then presented. The model traces a single bubble rising through a fluid, accounting for changes in bubble size due to mass transfer, temperature, and hydrostatic pressure. The bubble rise velocity and mass transfer coefficients are given by empirical correlations that depend on the bubble size; the bubble size is therefore recalculated at every increment, and the values for the rise velocity and mass transfer coefficients are continually updated. The discrete-bubble model is verified by comparison to experimental data collected in large-scale oxygen transfer tests. Finally, the model is applied to the three most common hypolimnetic oxygenation systems: the Speece Cone, the bubble-plume diffuser, and the full-lift hypolimnetic aerator. The last of these is presented by Vickie Burris in her thesis, <i>Hypolimnetic Aerators: Predicting Oxygen Transfer and Water Flow Rate</i>. / Master of Science
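A minimal version of such a discrete-bubble trace can be written as a forward-Euler loop. The constant rise velocity and mass transfer coefficient here are simplifying assumptions; the thesis uses size-dependent empirical correlations re-evaluated at every increment.

```python
import math

# One oxygen bubble released at depth, traced to the surface with forward Euler.
# u_rise and k_l are crude constants here, not the thesis's correlations.

def trace_bubble(r0, depth, u_rise=0.25, k_l=2e-4, c_sat=0.4, dt=0.1):
    """Return (radius at the surface in m, fraction of the gas transferred)."""
    P_ATM, RHO_G, RT = 101325.0, 9810.0, 8.314 * 283.0  # Pa, Pa per m of water, J/mol
    z = depth
    p = P_ATM + RHO_G * z
    n = p * (4 / 3) * math.pi * r0 ** 3 / RT            # ideal-gas moles in the bubble
    n0, r = n, r0
    while z > 0 and n > 0:
        r = (3 * n * RT / (4 * math.pi * p)) ** (1 / 3)  # radius from current n and p
        n -= k_l * 4 * math.pi * r ** 2 * c_sat * dt     # mass transfer to the water
        z -= u_rise * dt                                  # bubble rises
        p = P_ATM + RHO_G * max(z, 0.0)                   # hydrostatic pressure falls
    return r, 1 - max(n, 0.0) / n0

radius, transferred = trace_bubble(r0=1e-3, depth=10.0)
print(radius, transferred)
```

The two competing effects in the abstract are both visible: the bubble loses gas to the water as it rises, yet falling hydrostatic pressure lets it expand.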
215

A Discrete Approach to the Poincaré-Miranda Theorem

Ahlbach, Connor Thomas 12 May 2013 (has links)
The Poincaré-Miranda Theorem is a topological result about the existence of a zero of a function under particular boundary conditions. In this thesis, we explore proofs of the Poincaré-Miranda Theorem that are discrete in nature; that is, they prove a continuous result using an intermediate lemma about discrete objects. We explain a proof by Tkacz and Turzanski that establishes the Poincaré-Miranda Theorem via the Steinhaus Chessboard Theorem, involving colorings of partitions of n-dimensional cubes. Then, we develop a new proof of the Poincaré-Miranda Theorem that relies on a polytopal generalization of Sperner's Lemma due to De Loera, Peterson, and Su. Finally, we extend these discrete ideas in an attempt to prove the existence of a zero under the boundary condition of Morales.
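The discrete viewpoint can be imitated numerically: partition the square into cells and look for one on which every component of f changes sign; under the Miranda boundary conditions such a cell exists and its centre approximates a zero. This sketch is a numerical illustration only, not the combinatorial argument of the thesis.

```python
import math

# Scan the n-by-n grid on [0,1]^2 for a cell whose corners give each component
# of f both signs; its centre approximates a zero of f.

def find_zero_cell(f, n=64):
    h = 1.0 / n
    for i in range(n):
        for j in range(n):
            corners = [f(i * h + a * h, j * h + b * h) for a in (0, 1) for b in (0, 1)]
            if all(min(c[k] for c in corners) <= 0 <= max(c[k] for c in corners)
                   for k in (0, 1)):
                return ((i + 0.5) * h, (j + 0.5) * h)
    return None

# f1 < 0 on the face x=0 and f1 > 0 on x=1; f2 < 0 on y=0 and f2 > 0 on y=1,
# so the Poincare-Miranda conditions hold and a zero exists (near (0.452, 0.5)).
f = lambda x, y: (x - 0.5 + 0.1 * math.sin(y), y - 0.5)
print(find_zero_cell(f))
```

Refining the grid shrinks the candidate cell, which is exactly how a discrete lemma about colored cells yields a continuous existence statement in the limit.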
216

Valuing environmental benefits using the contingent valuation method : an econometric analysis

Kriström, Bengt January 1990 (has links)
The purpose of this study is to investigate methods for assessing the value people place on preserving our natural environments and resources. It focuses on the contingent valuation method, which directly asks people about their preferences. In particular, the study focuses on the use of discrete response data in contingent valuation experiments.

The first part of the study explores the economic theory of the total value of a natural resource, analyzing the principal components of total value: use values and non-use values. Our application is a study of the value Swedes attach to the preservation of eleven forest areas with high recreational value and unique environmental qualities. Six forests were selected on the basis of an official investigation covering virgin forests and other areas with unique environmental qualities. In addition, five virgin forests were selected.

Two types of valuation questions are analyzed, the continuous and the discrete. The first type asks directly about willingness to pay, while the second suggests a price that the respondent may accept or reject. The results of the continuous question suggest an average willingness to pay of about 1,000 SEK per household for preservation of the areas. Further analysis of the data suggests that this value depends on several characteristics of the respondent, such as income and whether or not the respondent is an altruist.

Two econometric approaches are used to analyze the discrete responses: a flexible parametric approach and a non-parametric approach. In addition, a Bayesian approach is described. It is shown that the results of a contingent valuation experiment may depend to some extent on the choice of probability model. A re-sampling approach and a Monte Carlo approach are used to shed light on the design of a contingent valuation experiment with discrete responses. The econometric analysis ends with an analysis of the often observed disparity between discrete and continuous valuation questions.

A cost-benefit analysis is performed in the final chapter. Its purpose is to illustrate how the contingent valuation approach may be combined with opportunity-cost data to improve the decision basis in the environmental policy domain. This analysis does not give strong support to the cutting alternative. Finally, the results of this investigation are compared with evidence from other studies.

The main conclusion is that assessing people's sentiments toward changes in our natural environments and resources can be a useful supplement to decisions about their proper husbandry. The study also highlights the importance of careful statistical analysis of data gained from contingent valuation experiments. / digitalisering@umu
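The discrete (take-it-or-leave-it) format and a non-parametric analysis of it can be sketched as follows. Each simulated respondent sees one bid and accepts iff their latent willingness to pay (WTP) exceeds it; the logistic WTP distribution (mean 1000 SEK), the bid vector, and the sample sizes are invented for illustration.

```python
import math
import random

random.seed(7)

def draw_wtp():
    """One draw from an assumed logistic WTP distribution (location 1000, scale 200)."""
    u = random.random()
    return 1000 + 200 * math.log(u / (1 - u))

bids = [200, 500, 800, 1100, 1400, 1700]
answers = {b: [draw_wtp() > b for _ in range(300)] for b in bids}

# Non-parametric (Turnbull-style) lower bound on mean WTP: the empirical "yes"
# share at each bid estimates the survival function of WTP; summing
# share * interval width (with survival taken as 0 past the last bid)
# bounds the mean from below.
prev, mean_lb = 0, 0.0
for b in bids:
    share_yes = sum(answers[b]) / len(answers[b])
    mean_lb += share_yes * (b - prev)
    prev = b
print(round(mean_lb), "SEK")
```

A parametric route would instead fit a probability model (e.g., a logit in the bid) to the same accept/reject data; the abstract's point is that the resulting value estimate can depend on which model is chosen.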
217

On Independence, Matching, and Homomorphism Complexes

Hough, Wesley K. 01 January 2017 (has links)
First introduced by Forman in 1998, discrete Morse theory has become a standard tool in topological combinatorics. The main idea of discrete Morse theory is to pair cells in a cellular complex in a manner that permits cancellation via elementary collapses, reducing the complex under consideration to a homotopy-equivalent complex with fewer cells. In chapter 1, we introduce the relevant background for discrete Morse theory. In chapter 2, we define a discrete Morse matching for a family of independence complexes that generalize the matching complexes of suitable "small" grid graphs. Using this matching, we determine the dimensions of the chain spaces for the resulting Morse complexes and derive bounds on the location of non-trivial homology groups. Furthermore, we determine the Euler characteristic for these complexes and prove that several of their homology groups are non-zero. In chapter 3, we introduce the notion of a homomorphism complex for partially ordered sets, placing particular emphasis on maps between chain posets and the Boolean algebras. We extend the notion of folding from general graph homomorphism complexes to the poset case, and we define an iterative discrete Morse matching for these Boolean complexes. We provide formulas for enumerating the number of critical cells arising from this matching as well as for the Euler characteristic. We end with a conjecture on the optimality of our matching derived from connections to 3-equal manifolds.
218

Discrete Fractional Hermite-Hadamard Inequality

Arslan, Aykut 01 April 2017 (has links)
This thesis is comprised of three main parts: the Hermite-Hadamard inequality on discrete time scales, the fractional Hermite-Hadamard inequality, and Karush-Kuhn-Tucker conditions on higher-dimensional discrete domains. In the first part of the thesis, Chapters 2 & 3, we define a convex function on a special time scale T whose time points need not be uniformly distributed on the time line. Using the substitution rules of integration, we prove the Hermite-Hadamard inequality for convex functions defined on T. In the fourth chapter, we introduce a fractional-order Hermite-Hadamard inequality and characterize convexity in terms of this inequality. In the fifth chapter, we discuss convexity on n-dimensional discrete time scales T = T1 × T2 × ... × Tn, where the Ti ⊂ R, i = 1, 2, ..., n, are discrete time scales that are not necessarily periodic. We introduce the discrete analogues of the fundamental concepts of real convex optimization, such as convexity of a function, subgradients, and the Karush-Kuhn-Tucker conditions. We close the thesis with two remarks on future directions for research in this area.
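As a reference point, the classical continuous Hermite-Hadamard inequality can be checked numerically, alongside the delta sum that replaces the integral on a discrete time scale. The precise time-scale inequality proved in the thesis is not reproduced here; the code only shows the two ingredients it relates.

```python
# Classical Hermite-Hadamard for convex f:
#     f((a+b)/2)  <=  (1/(b-a)) * integral_a^b f(x) dx  <=  (f(a)+f(b))/2

def avg_integral(f, a, b, n=10_000):
    """Average of f over [a, b] by the composite midpoint rule."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h / (b - a)

def delta_integral(f, points):
    """Time-scale delta integral: sum of f(t_i) * (t_{i+1} - t_i)."""
    return sum(f(t) * (s - t) for t, s in zip(points, points[1:]))

f, a, b = lambda x: x * x, 1.0, 3.0
mid, avg, ends = f((a + b) / 2), avg_integral(f, a, b), (f(a) + f(b)) / 2
print(mid, round(avg, 4), ends)  # 4.0 <= 4.3333 <= 5.0, as the inequality predicts

# On a non-uniform time scale the integral becomes a delta sum, and the classical
# bounds need not hold verbatim; re-deriving them is the subject of the discrete inequality.
T = [1.0, 1.2, 1.7, 2.0, 2.8, 3.0]
print(delta_integral(f, T) / (b - a))
```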
219

Factors Affecting Discrete-Time Survival Analysis Parameter Estimation and Model Fit Statistics

Denson, Kathleen 05 1900 (has links)
Discrete-time survival analysis as an educational research technique has focused on analyzing and interpreting parameter estimates. The purpose of this study was to examine the effects of certain data characteristics on the hazard estimates and goodness-of-fit statistics. Fifty-four simulated data sets were generated by crossing four conditions in a 2 (time periods) by 3 (distribution of Y = 1) by 3 (distribution of Y = 0) by 3 (sample size) design.
220

Automated Conjecturing Approach for Benzenoids

Muncy, David 01 January 2016 (has links)
Benzenoids are graphs representing the carbon structure of molecules, defined by a closed path in the hexagonal lattice. These compounds are of interest to chemists studying existing and potential carbon structures. The goal of this study is to conjecture and prove relations between graph-theoretic properties of benzenoids. First, we generate conjectures on upper bounds for the domination number in benzenoids using invariant-defined functions; this work extends ideas to be presented in a forthcoming paper. Next, we generate conjectures using property-defined functions. As the title indicates, the conjectures we prove are not devised on our own; rather, they are generated by an automated conjecture-making process. This program, named Conjecturing, was developed by Craig Larson and Nico Van Cleemput.
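One of the invariants involved, the domination number, can be computed by brute force for a small graph. The hexagon below (the benzene ring, the smallest benzenoid) stands in for the many benzenoids a conjecturing run would sweep.

```python
from itertools import combinations

# Smallest set of vertices whose closed neighborhoods cover the whole graph.

def domination_number(vertices, edges):
    nbrs = {v: {v} for v in vertices}
    for u, w in edges:
        nbrs[u].add(w)
        nbrs[w].add(u)
    for k in range(1, len(vertices) + 1):
        for cand in combinations(vertices, k):
            if set().union(*(nbrs[v] for v in cand)) == set(vertices):
                return k

hexagon = [(i, (i + 1) % 6) for i in range(6)]  # 6-cycle: the benzene ring
print(domination_number(range(6), hexagon))      # -> 2
```

Brute force is exponential, which is one reason automated conjecturing of provable upper bounds in terms of cheaper invariants is attractive.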
