  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
171

Impacts of Ignoring Nested Data Structure in Rasch/IRT Model and Comparison of Different Estimation Methods

Chungbaek, Youngyun 06 June 2011 (has links)
This study investigates the impacts of ignoring nested data structure in the Rasch/1PL item response theory (IRT) model via two-level and three-level hierarchical generalized linear models (HGLM). Rasch/IRT models are frequently used in educational and psychometric research on data obtained from multistage cluster sampling, which is likely to violate the assumption of independent observations of examinees that these models require. This violation, however, is ignored in current standard practice, which applies the standard Rasch/IRT model to large-scale testing data. A simulation study (Study Two) was conducted to address the effects of ignoring nested data structure in Rasch/IRT models under various conditions, following a simulation study (Study One) comparing the accuracy and efficiency of three estimation methods commonly used in HGLM: Penalized Quasi-Likelihood (PQL), the Laplace approximation, and Adaptive Gaussian Quadrature (AGQ). As expected, PQL tended to produce seriously biased item difficulty and ability variance estimates, whereas Laplace and AGQ were almost unbiased in both 2-level and 3-level analyses. In terms of root mean squared error (RMSE), the three methods performed without substantive differences for item difficulty and ability variance estimates in both 2-level and 3-level analyses, except for the level-2 ability variance estimates in the 3-level analysis. Overall, Laplace and AGQ performed similarly well in terms of bias and RMSE of parameter estimates; however, Laplace exhibited a much lower convergence rate than AGQ in 3-level analyses. 
The results from AGQ, which produced the most accurate and stable estimates among the three computational methods, showed that the theoretical standard errors (SEs), i.e., asymptotic information-based SEs, were underestimated by up to 34% when 2-level analyses were applied to data generated from a 3-level model, implying that the Type I error rate is inflated when nested data structures are ignored in Rasch/IRT models. The underestimation of the theoretical standard errors became substantially more severe as the true ability variance or the number of students within schools increased, regardless of test length or the number of schools. / Ph. D.
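As a rough illustration of the data-generating process described above, a nested Rasch simulation might look like the sketch below. All sizes, variances, and difficulty values are invented for the example, not taken from the study; only the structure (students nested in schools, responses from a Rasch model) follows the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes and variance components (not the study's design)
n_schools, n_students, n_items = 20, 30, 10
tau = 0.5        # between-school ability variance
sigma = 1.0      # within-school ability variance
b = np.linspace(-1.5, 1.5, n_items)   # item difficulties

# Abilities: school effect plus individual deviation (the nesting
# that standard Rasch/IRT fitting ignores)
school_eff = rng.normal(0.0, np.sqrt(tau), size=n_schools)
theta = (school_eff.repeat(n_students)
         + rng.normal(0.0, np.sqrt(sigma), size=n_schools * n_students))

# Rasch model: P(correct) = logistic(theta - b)
logit = theta[:, None] - b[None, :]
p = 1.0 / (1.0 + np.exp(-logit))
y = (rng.random(p.shape) < p).astype(int)

# Intraclass correlation of ability implied by the variance components
icc = tau / (tau + sigma)
```

The larger `icc` is, the more the independence assumption is violated and, per the abstract, the more the theoretical standard errors are understated by a flat 2-level analysis.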
172

Modeling, Analysis, and Algorithmic Development of Some Scheduling and Logistics Problems Arising in Biomass Supply Chain, Hybrid Flow Shops, and Assembly Job Shops

Singh, Sanchit 15 July 2019 (has links)
In this work, we address a variety of problems with applications to `ethanol production from biomass', `agile manufacturing', and `mass customization'. Our motivation stems from the potential use of biomass as an alternative to non-renewable fuels, the prevalence of `flexible manufacturing systems', and the popularity of `mass customization' in today's highly competitive markets. Production scheduling and the design and optimization of logistics networks are the underlying topics of our work. In particular, we address three problems: the Biomass Logistics Problem, the Hybrid Flow Shop Scheduling Problem, and the Stochastic Demand Assembly Job Scheduling Problem. The Biomass Logistics Problem is a strategic cost analysis for the setup and operation of a biomass supply chain network aimed at the production of ethanol from switchgrass. We discuss the structural components and operations of such a network and incorporate real-life GIS data of a geographical region in a model that captures this problem. We then develop a `Nested Benders' based algorithm and demonstrate its effectiveness in solving this problem efficiently. The Hybrid Flow Shop Scheduling Problem concerns the scheduling of a production lot over a two-stage hybrid flow shop configuration of machines, often encountered in `flexible manufacturing systems'. We use `lot-streaming' in order to minimize the makespan. Although the general case of this problem is NP-hard, we develop a pseudo-polynomial time algorithm for the special case in which the sublot sizes are treated as continuous. For the case of discrete sublot sizes, we develop a branch-and-bound-based method and experimentally demonstrate its effectiveness in obtaining near-optimal solutions. 
The Stochastic Demand Assembly Job Scheduling Problem deals with the scheduling of a set of products in a production setting where manufacturers seek to fulfill multiple objectives, such as `economy of scale' together with the flexibility to produce a variety of products for their customers, while minimizing delivery lead times. We design a novel methodology geared towards these objectives and propose a Lagrangian relaxation-based algorithm for efficient computation. / Doctor of Philosophy / In this work, we organize our research efforts in three broad areas - Biomass Supply Chain, Hybrid Flow Shop, and Assembly Job Shop - which are separate in terms of their application but connected by scheduling and logistics as the underlying functions. For each, we formulate the problem statement and identify the challenges and opportunities from the viewpoint of mathematical decision making. We use well-known results from the theory of optimization and linear algebra to design effective algorithms that solve these specific problems within a reasonable time limit. Even though the emphasis is on conducting an algorithmic analysis of the proposed solution methods and on solving the problems analytically, we strive to capture all the relevant and practical features of the problems during the formulation of each problem statement, thereby maintaining their applicability. The Biomass Supply Chain pertains to the production of fuel-grade ethanol from naturally occurring biomass in the form of switchgrass. Such a system requires the establishment of a supply chain and logistics network that connects the production fields at its source, the intermediate points for temporary storage of the biomass, and the bio-energy plant and refinery at its end, where the cellulosic content of the biomass is converted to crude oil and ethanol, respectively. We define the components and operations necessary for the functioning of such a supply chain. 
The Biomass Logistics Problem that we address is a strategic cost analysis for the setup and operation of such a biomass supply chain network. We focus on a region in South Central Virginia and use detailed geographic map data to obtain the land use pattern in the region. We survey the existing literature to obtain various transportation-related cost factors and the costs associated with the use of equipment. Our ultimate aim is to assess, from an economic standpoint, the feasibility of running a biomass supply chain in the region of interest. To this end, we represent the Biomass Logistics Problem with a cost-based optimization model and solve it as a series of smaller problems. A Hybrid Flow Shop (HFS) is a configuration of machines often encountered in flexible manufacturing systems, wherein a particular station of machines can process jobs/tasks simultaneously. In our work, we consider a specific type of HFS, with a single machine at the first stage and multiple identical machines at the second stage. A batch or lot of jobs/items is considered for scheduling over such an HFS. Depending upon the area of application, such a batch is either allowed to be split into continuous sections or restricted to be split in discrete sizes only. The objective is to minimize the completion time of the last job on its assigned machine at the second stage. We call this problem the Hybrid Flow Shop Scheduling Problem, which is known to be a hard problem in the literature. We derive results that reduce the complexity of this problem and develop both exact and heuristic methods to obtain near-optimal solutions. An Assembly Job Shop is a variant of the classical Job Shop in which a set of assembly operations is scheduled over a set of assembly machines. Each operation can only be started once all the operations preceding it are completed. 
Assembly Job Shops are at the core of some highly competitive manufacturing facilities built on the philosophy of Mass Customization. Assuming inherent demand uncertainty, this philosophy aims to achieve `economy of scale' together with the flexibility to produce a variety of products for the customers while simultaneously minimizing delivery lead times. We incorporate some of these challenges in a concise production scheduling framework and call this problem the Stochastic Demand Assembly Job Scheduling Problem. We design a novel methodology geared towards the set objectives and propose an effective algorithm for efficient computation.
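The makespan objective under lot-streaming described in this entry can be sketched as follows. This is a hypothetical evaluation routine for a given split of the lot, using a simple earliest-free-machine dispatch rule; it is not the thesis's algorithm, and the unit processing times and machine counts are illustrative.

```python
import heapq

def makespan(sublots, p1, p2, m):
    """Makespan of a lot split into sublots on a two-stage hybrid flow
    shop: one machine at stage 1 (unit time p1) and m identical machines
    at stage 2 (unit time p2). Sublots pass through stage 1 in order and
    are dispatched to the earliest-free stage-2 machine."""
    machines = [0.0] * m          # next-free times of stage-2 machines
    heapq.heapify(machines)
    t1 = 0.0                      # stage-1 clock
    cmax = 0.0
    for s in sublots:
        t1 += p1 * s              # sublot finishes stage 1
        free = heapq.heappop(machines)
        done = max(t1, free) + p2 * s
        heapq.heappush(machines, done)
        cmax = max(cmax, done)
    return cmax
```

Splitting the lot lets stage 2 start before stage 1 has finished the whole batch, which is why lot-streaming can shrink the makespan: for a lot of 4 units with `p1 = p2 = 1` and two stage-2 machines, the unsplit lot finishes at 8 while the split `[2, 2]` finishes at 6.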
173

Impact of Ignoring Nested Data Structures on Ability Estimation

Shropshire, Kevin O'Neil 03 June 2014 (has links)
The literature is clear that intentional or unintentional clustering of data elements typically results in inflation of the estimated standard errors of fixed parameter estimates. This study is unique in that it examines the impact of multilevel data structures on subject ability estimates, which are random-effect predictions known as empirical Bayes estimates in the one-parameter IRT/Rasch model. The literature on the impact of complex survey design on latent trait models is mixed, and no "best practice" has been established for handling this situation. A simulation study was conducted to address two questions related to ability estimation. First, what impact does design-based clustering have on desirable statistical properties when estimating subject ability with the one-parameter IRT/Rasch model? Second, since empirical Bayes estimators have shrinkage properties, what impact does clustering of first-stage sampling units have on measurement validity: does the first-stage sampling unit affect the ability estimate, and if so, is this desirable and equitable? Two models were fit to a factorial experimental design in which the data were simulated over various conditions. The first, a Rasch model formulated as an HGLM, ignores the sample design (incorrect model), while the second incorporates the first-stage sampling unit (correct model). Study findings generally showed that the two models were comparable with respect to desirable statistical properties under a majority of the replicated conditions; more measurement error in ability estimation was found when the intraclass correlation is high and the item pool is small, which in practice is the exception rather than the norm. However, the empirical Bayes estimates were found to depend on the first-stage sampling unit, raising issues of equity and fairness in educational decision making. A real-world complex survey design with binary outcome data was also fit with both models. 
Analysis of these data supported the simulation results, leading to the conclusion that modeling binary Rasch data may involve a policy tradeoff between desirable statistical properties and measurement validity. / Ph. D.
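The shrinkage property at the heart of this entry's second question can be written in one line. The formula below is the standard precision-weighted empirical Bayes compromise, shown for illustration; it is not necessarily the exact estimator used in the study.

```python
def eb_shrinkage(theta_ml, group_mean, tau2, se2):
    """Empirical Bayes (shrinkage) prediction of ability: a weighted
    compromise between the subject's own estimate (theta_ml, with
    sampling variance se2) and the mean of its first-stage sampling
    unit, e.g. a school (group_mean, with between-group variance tau2).
    The weight lam is the classical reliability ratio."""
    lam = tau2 / (tau2 + se2)   # weight on the subject's own data
    return lam * theta_ml + (1.0 - lam) * group_mean
```

The equity concern follows directly from the formula: two students with identical response patterns (`theta_ml`) but different school means receive different ability predictions whenever `lam < 1`.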
174

BROADBAND AND MULTI-SCALE ELECTROMAGNETIC SOLVER USING POTENTIAL-BASED FORMULATIONS WITH DISCRETE EXTERIOR CALCULUS AND ITS APPLICATIONS

Boyuan Zhang (18446682) 01 May 2024 (has links)
<p dir="ltr">A novel computational electromagnetic (CEM) solver using potential-based formulations and discrete exterior calculus (DEC) is proposed. The proposed solver consists of two parts: the DEC A-Phi solver and the DEC F-Psi solver. A and Phi are the magnetic vector potential and electric scalar potential of the electromagnetic (EM) field, respectively; F and Psi are the electric vector potential and magnetic scalar potential, respectively. The two solvers are dual to each other, and most research is carried out with respect to the DEC A-Phi solver.</p><p dir="ltr">Systematical approach for constructing the DEC A-Phi matrix equations is provided in this thesis, including the construction of incidence matrices, Hodge star operators and different boundary conditions. The DEC A-Phi solver is proved to be broadband stable from DC to optics, while classical CEM solvers suffer from stability issues at low frequencies (also known as the low-frequency breakdown). The proposed solver is ideal for broadband and multi-scale analysis, which is of great importance in modern industry.</p><p dir="ltr">To empower the proposed solver with the ability to solve industry problems with large number of unknowns, iterative solvers are preferred. The error-minimization mechanism buried in iterative solvers allows user to control the effect of numerical error accumulation to the solution vector. Proper preconditioners are almost always needed to accelerate the convergence of iterative solvers in large scale problems. In this thesis, preconditioning schemes for the proposed solver are studied.</p><p dir="ltr">In the DEC A-Phi solver, current sources can be applied easily, but it is difficult to implement voltage sources. To incorporate voltage sources in the potential-based solver, the DEC F-Psi solver is proposed. The DEC A-Phi and F-Psi solvers are dual formulations to each other, and the construction of the F-Psi solver can be generalized from the A-Phi solver straightforward.</p>
175

Mathematical methods for portfolio management

Ondo, Guy-Roger Abessolo 08 1900 (has links)
Portfolio Management is the process of allocating an investor's wealth to investment opportunities over a given planning period. Not only should Portfolio Management be treated within a multi-period framework, but one should also take into consideration the stochastic nature of the related parameters. After a short review of key concepts from Finance Theory, e.g. utility functions, risk attitude, Value-at-Risk estimation methods, and mean-variance efficiency, this work describes a framework for the formulation of the Portfolio Management problem in a Stochastic Programming setting. Classical solution techniques for the resolution of the resulting Stochastic Programs (e.g. L-shaped Decomposition, approximation of the probability function) are presented. These are discussed within both the two-stage and the multi-stage case, with special emphasis on the former. A description of how Importance Sampling and EVPI are used to improve the efficiency of the classical methods is presented. Postoptimality Analysis, a sensitivity analysis method, is also described. / Statistics / M. Sc. (Operations Research)
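As a minimal concrete instance of the mean-variance efficiency concept reviewed in this dissertation, the single-period minimum-variance portfolio has a closed form, w proportional to the inverse covariance matrix applied to the all-ones vector. This deterministic special case is shown only for orientation; the stochastic programs discussed above generalize it to multiple periods and scenarios.

```python
import numpy as np

def min_variance_weights(cov):
    """Closed-form minimum-variance portfolio weights, w ∝ Σ⁻¹·1,
    normalised to sum to one (full investment, no other constraints)."""
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)   # solve Σ w = 1 instead of inverting Σ
    return w / w.sum()
```

With two uncorrelated assets of variance 1 and 4, the weights come out 0.8 and 0.2: the less volatile asset gets the larger share, in inverse proportion to variance.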
176

Trajectories of Peircean philosophical theology : scriptural reasoning, axiology of thinking, and nested continua

Slater, Gary January 2015 (has links)
The writings of the American pragmatist thinker Charles S. Peirce (1839-1914) provide resources for what this thesis calls the “nested continua model” of theological interpretation. A diagrammatic demonstration of iconic relational logic akin to Peirce’s Existential Graphs, the nested continua model is imagined as a series of concentric circles graphed upon a two-dimensional plane. When faced with some problem of interpretation, one may draw discrete markings that signify that problem’s logical distinctions, then represent in the form of circles successive contexts by which these distinctions may be examined in relation to one another, arranged ordinally at relative degrees of specificity and vagueness, aesthetic intensity and concrete reasonableness. Drawing from Peter Ochs’s Scriptural Reasoning model of interfaith dialogue and Robert C. Neville’s axiology of thinking—each of which makes creative use of Peirce’s logic—this project aims to achieve an analytical unity between these two thinkers’ projects, which can then be addressed to further theological ends. The model hinges between diagrammatic and ameliorative functions, honing its logic to disclose contexts in which its theological or metaphysical claims might, if needed, be revised. Such metaphysical claims include love as that which unites feeling with intelligibility, hell as imprisonment within an opaque circle of interpretation whose distorted reflections render violence upon oneself and others, and the divine as both the center of aesthetic creativity and outermost horizon from which our many layers of interpretive criteria emerge. These are claims made from a particular identity in a particular cultural context, but the logical rules upon which they are based are accessible to all, and the hope of the model is to help people overcome problems of interpretation and orient themselves toward eternity without ignoring the world around them.
177

The impact of statin adherence on cerebrovascular disease in primary prevention in a real-world setting

Ellia, Laura January 2008 (has links)
Thesis digitized by the Division de la gestion de documents et des archives, Université de Montréal.
178

Global sensitivity analysis for nested and multiscale modelling

Caniou, Yann 29 November 2012 (has links)
This thesis is a contribution to the nested modelling of complex systems. It proposes a global methodology for quantifying uncertainties and their origins in a workflow composed of several models that may be intricately linked. The work is organized along three axes. First, the dependence structure of the model parameters induced by the nested modelling is rigorously described using copula theory. Then, two sensitivity analysis methods for models with correlated inputs are presented: one based on the analysis of the distribution of the model response, the other on the decomposition of the covariance. Finally, a framework inspired by graph theory is proposed for describing the nesting of the models. 
The proposed methodology is applied to large-scale industrial applications: a multiscale model of the mechanical properties of concrete by a homogenization method and a multiphysics analysis of damage on the cylinder head of a diesel engine. The results obtained provide the practitioner with essential information for a significant improvement of the performance of the structure.
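The covariance-decomposition idea mentioned in this abstract can be illustrated on a linear model with correlated inputs. For Y = a·X, the decomposition Var(Y) = Σᵢ Cov(Y, aᵢXᵢ) yields sensitivity shares that sum to one even when the inputs are dependent, which is exactly the situation ordinary Sobol' indices do not handle. The coefficients and covariance below are invented for the example.

```python
import numpy as np

# Linear model Y = a·X with correlated inputs X ~ (0, C)
a = np.array([1.0, 2.0])
C = np.array([[1.0, 0.5],
              [0.5, 1.0]])          # off-diagonal: input correlation

var_y = a @ C @ a                   # Var(Y) = aᵀ C a
shares = (a * (C @ a)) / var_y      # S_i = Cov(Y, a_i X_i) / Var(Y)
```

Here Var(Y) = 7 and the shares are 2/7 and 5/7; each input's share includes the part of the variance it carries through its correlation with the other input, so the shares always total 1.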
179

Spatial analysis of on-board observer programme data: how is it relevant to the management of discards?

Pointin, Fabien 05 November 2018 (has links)
Since 2002, the member states of the European Union (EU) have collected, managed, and supplied the data needed for the management of fisheries, and of discards in particular. In this context, at-sea observer programmes collect data on board fishing vessels on the composition and quantity of the catch, including discards. 
Building on these data, this thesis analyses the spatio-temporal distribution of landings and discards so as to contribute to their management. To this end, a mapping method based on nested grids has been developed, designed to produce pluriannual, annual, and quarterly maps of landings and discards per species or group of species according to the fishing métier. A platform based on Big Data technologies was then used to refine and automate the mapping method. Using an online storage system and a high-performance computing system, a large number of maps could be produced automatically per métier, grouping or not years, quarters, and species. Finally, the usefulness of the produced maps for managing discards is demonstrated, particularly in the context of the Landing Obligation (Regulation (EU) No 1380/2013). Combined with fleet cost and revenue data, these maps make it possible to identify fishing zones and/or periods prone to unwanted catches that could be avoided while minimising the impact on the fleets' economic performance.
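A nested (variable-mesh) grid of the kind this abstract describes can be sketched as a quadtree-style subdivision: a cell is split into four while it holds too many observations, so dense fishing grounds get fine cells and sparse areas stay coarse. The function and its parameters are illustrative, not the thesis's implementation.

```python
def nested_grid(points, x0, y0, size, max_count, min_size):
    """Recursively split a square cell [x0, x0+size) x [y0, y0+size)
    into four while it contains more than max_count points and remains
    larger than min_size. Returns leaf cells as (x, y, size, count)."""
    inside = [(x, y) for x, y in points
              if x0 <= x < x0 + size and y0 <= y < y0 + size]
    if len(inside) <= max_count or size <= min_size:
        return [(x0, y0, size, len(inside))]
    half = size / 2.0
    cells = []
    for dx in (0.0, half):
        for dy in (0.0, half):
            cells += nested_grid(inside, x0 + dx, y0 + dy,
                                 half, max_count, min_size)
    return cells
```

A small cluster of observations near the origin forces two levels of subdivision there, while the rest of the domain remains a handful of coarse, mostly empty cells.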
180

Detection and identification of Xanthomonas citri subsp. malvacearum on cotton seeds by means of molecular techniques

Denise Moedim Balani 09 February 2010 (has links)
Xanthomonas citri subsp. malvacearum is the causal agent of angular leaf spot of cotton, an important disease reported in production areas in Brazil and worldwide. From the comparative analysis of partial rpoB gene sequences of X. citri subsp. malvacearum, X. campestris pv. campestris, X. axonopodis pv. axonopodis and X. citri subsp. citri strains, the primer pair xam1F/2R was designed. Nineteen species of the genus Xanthomonas and isolates of the genera Acidovorax, Burkholderia, Erwinia, Pseudomonas and Ralstonia were tested, and the specific PCR product of about 560 base pairs was observed only for strains of X. citri subsp. malvacearum. The primers proved highly sensitive, with detection levels of 8 cfu per 5.0 µL of pure bacterial culture suspension and 1.0 ng of X. citri subsp. malvacearum genomic DNA. From contaminated seed samples, bacterial colonies were isolated with morphology and coloration characteristic of X. citri subsp. malvacearum. These isolates were subjected to Gram staining, starch hydrolysis, hypersensitivity reaction (HR) tests on tobacco and tomato leaves, pathogenicity tests on cotton plants, amplification with the specific primers, and sequencing of the resulting fragment; the results confirmed their identification as X. citri subsp. malvacearum. Combined BIO-PCR/nested-PCR experiments were performed on the material obtained from extracting the pathogen from contaminated seeds, using primers corresponding to part of the rpoB gene in the first amplification step and, in the second step, the product of the first amplification with the specific primers xam1F/2R. A band of approximately 560 bp corresponding to the specific fragment of X. citri subsp. malvacearum was observed for all samples tested. 
In this work, a PCR test was thus developed for the quick and accurate detection and identification of this bacterium in cotton seed samples.
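The amplicon-length logic behind the ~560 bp diagnostic band can be sketched in silico: locate the two primer sites on the template and measure the span between them. The sequences below are invented toys, not the real rpoB template or xam1F/2R primers, and for simplicity both primers are written in template orientation (the second argument is the reverse complement of the reverse primer).

```python
def amplicon(template, fwd, rev_rc):
    """Return the PCR product length for a primer pair on a template
    strand, or None if either site is absent. rev_rc is the reverse
    primer's reverse complement, so both sites read left to right."""
    i = template.find(fwd)
    j = template.find(rev_rc, max(i, 0) + len(fwd))
    if i < 0 or j < 0:
        return None                      # no product: a primer site is missing
    return j + len(rev_rc) - i           # product spans both primer sites
```

Nested PCR rests on the fact that the second-round primers sit strictly inside the first product, so the nested amplicon is always shorter, as the toy template below shows (14 bp outer product, 10 bp nested product).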
