281

Tamanho de parcela e número de repetições em aveia branca / Plot size and number of replications in white oat

Lavezo, André 16 December 2016 (has links)
The study aimed to determine the optimum plot size (Xo) and the number of replications needed to evaluate fresh mass (FM), dry mass (DM) and grain yield (PROD) in white oat, and to check the variability of Xo among cultivars and sowing dates. Four cultivars (URS Charrua, URS Taura, URS Estampa and URS Corona) were evaluated at three sowing dates (date 1 - 04/28/2014, date 2 - 05/28/2014 and date 3 - 07/14/2014) in 96 uniformity trials of 3 m × 3 m for the determination of Xo for the variables FM and DM. For the determination of Xo for PROD, 32 uniformity trials of 3 m × 3 m were used, eight with each cultivar. At flowering, the plants of the trials intended for FM and DM were collected and weighed to obtain FM, then dried in a forced-air oven at 65 ± 3 °C for 48 hours and weighed again to obtain DM. At the end of the crop cycle (grain maturity stage), the grain was harvested to determine PROD (kg ha-1). The optimum plot size (Xo) was determined by the method of maximum curvature of the coefficient of variation model, and the mean comparisons (between evaluation dates and cultivars for FM and DM, and among cultivars for PROD) were made with the Scott-Knott test via bootstrap analysis. In white oat, Xo varies among cultivars and sowing dates. For the four cultivars at the three sowing dates, Xo values of 1.66 m² and 1.73 m² are adequate to evaluate FM and DM, respectively. Four replications, to evaluate up to 50 treatments in completely randomized and randomized block designs, are sufficient for differences between treatment means of 44.75% of the experiment mean to be significant by the Tukey test (p = 0.05) for the variables FM and DM. An Xo of 1.57 m² is sufficient to evaluate PROD in these four white oat cultivars. To evaluate PROD with up to 50 treatments in the same designs, four replications are sufficient for differences between treatment means of 40.53% of the experiment mean to be significant by the Tukey test at 5% probability.
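The plot-size method referred to above fits a coefficient-of-variation model to uniformity-trial data and takes the point of maximum curvature as the optimum plot size. A minimal sketch of that idea, assuming the common model form CV(x) = a·x^(-b) and using invented illustrative numbers rather than the thesis data:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical uniformity-trial summary: CV (%) for simulated plots built
# from 1, 2, 3, ... basic units (illustrative numbers only).
plot_size = np.array([1, 2, 3, 4, 6, 9, 12, 18, 36], dtype=float)   # basic units
cv = np.array([22.0, 17.5, 15.2, 13.8, 12.1, 10.7, 9.9, 8.9, 7.6])  # percent

# Fit the coefficient-of-variation model CV(x) = a * x**(-b).
(a, b), _ = curve_fit(lambda x, a, b: a * x ** (-b), plot_size, cv, p0=(20.0, 0.5))

# Curvature of y = a*x^(-b): kappa(x) = |y''| / (1 + y'^2)^(3/2).
x = np.linspace(plot_size.min(), plot_size.max(), 10_000)
y1 = -a * b * x ** (-b - 1)            # first derivative
y2 = a * b * (b + 1) * x ** (-b - 2)   # second derivative
kappa = np.abs(y2) / (1 + y1 ** 2) ** 1.5

xo = x[np.argmax(kappa)]               # plot size at maximum curvature
print(f"fitted CV(x) = {a:.2f} * x^(-{b:.3f}); optimum plot size Xo ~ {xo:.2f} basic units")
```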
282

TUNING OPTIMIZATION SOFTWARE PARAMETERS FOR MIXED INTEGER PROGRAMMING PROBLEMS

Sorrell, Toni P 01 January 2017 (has links)
The tuning of optimization software is of key interest to researchers solving mixed integer programming (MIP) problems. The efficiency of the optimization software can be greatly impacted by the solver's parameter settings and the structure of the MIP. A designed-experiment approach is used to fit a statistical model that suggests the parameter settings providing the largest reduction in the primal integral metric. Using tuning exemplars with six and 59 factors (parameters) of the optimization software, experimentation takes place on three classes of MIPs: survivable fixed telecommunication network design, a formulation of the support vector machine with the ramp loss and L1-norm regularization, and node packing for coding theory graphs. This research presents and demonstrates a framework for tuning a portfolio of MIP instances, not only to obtain good parameter settings for future instances of the same class of MIPs but also to gain insight into which parameters and parameter interactions are significant for that class. The framework is also used for benchmarking solvers with tuned parameters on a portfolio of instances. A group screening method provides a way to reduce the number of factors in a design and shortens the tuning process. Portfolio benchmarking provides performance information about optimization solvers on a class of instances with similar structure.
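As a rough, hypothetical sketch of the screening-and-tuning workflow described above (the solver call is a placeholder, not a real solver API; the factor names and response surface are invented), a two-level full factorial over a handful of parameters with a main-effects-plus-interactions regression on the primal integral:

```python
import itertools
import numpy as np

np.random.seed(0)

# Hypothetical binary (low = -1 / high = +1) solver parameters to screen.
# In a real study these would be actual solver options, and the response
# would come from running the solver on the instance portfolio.
factors = ["cuts", "heuristics", "presolve", "branching"]

def primal_integral(setting):
    """Placeholder for 'run the solver and return the average primal integral'.
    Synthetic response with one main effect, one interaction, and noise."""
    x = dict(zip(factors, setting))
    return (10.0 - 1.5 * x["heuristics"] - 0.8 * x["cuts"] * x["presolve"]
            + np.random.normal(scale=0.3))

design = list(itertools.product([-1, +1], repeat=len(factors)))   # 2^4 = 16 runs
y = np.array([primal_integral(run) for run in design])

# Model matrix: intercept, main effects, and all two-factor interactions.
D = np.array(design, dtype=float)
columns, names = [np.ones(len(design))], ["intercept"]
for i, name in enumerate(factors):
    columns.append(D[:, i])
    names.append(name)
for i, j in itertools.combinations(range(len(factors)), 2):
    columns.append(D[:, i] * D[:, j])
    names.append(f"{factors[i]}*{factors[j]}")
X = np.column_stack(columns)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
for name, coef in sorted(zip(names, beta), key=lambda t: -abs(t[1])):
    print(f"{name:22s} {coef:+.2f}")
```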
283

Estimation of treatment effects under combined sampling and experimental designs

Smith, Christina D. January 1900 (has links)
Doctor of Philosophy / Department of Statistics / Dallas E. Johnson / Over the years, sampling design and experimental design have developed independently, with little mutual compatibility. However, many studies do (or should) involve both a sampling design and an experimental design. For example, a polluted site may be exhaustively partitioned into area plots, a random sample of plots selected, and the selected plots randomly assigned to three clean-up regimens. In this research the relationship between sampling design and experimental design is discussed and a basic review of each is given. An estimator that combines sampling and experimental design is presented and its development explained. Properties of this estimator are derived and some applications of the estimator are examined. Finally, a simulation study comparing this estimator with the traditional estimator is presented.
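A small simulation sketch of the setting described above, with invented numbers: a finite population of plots, a simple random sample, random assignment of the sampled plots to three clean-up regimens, and the traditional difference-of-treatment-means estimator that the dissertation's combined estimator is compared against:

```python
import numpy as np

rng = np.random.default_rng(42)

# Finite population of area plots at the polluted site (values invented).
N = 600                                  # plots in the exhaustive partition
baseline = rng.normal(100, 15, size=N)   # pre-treatment contamination level

# Stage 1 (sampling design): simple random sample of n plots without replacement.
n = 90
sampled = rng.choice(N, size=n, replace=False)

# Stage 2 (experimental design): randomly assign sampled plots to 3 regimens.
regimen = rng.permutation(np.repeat([0, 1, 2], n // 3))
true_effect = np.array([0.0, -12.0, -20.0])          # hypothetical clean-up effects

response = baseline[sampled] + true_effect[regimen] + rng.normal(0, 5, size=n)

# Traditional estimator: difference of sample treatment means.
means = np.array([response[regimen == t].mean() for t in range(3)])
print("estimated regimen means:", np.round(means, 1))
print("estimated effect of regimen 2 vs 0:", round(means[2] - means[0], 1))
```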
284

Modelagem e otimização do processo de síntese do ácido propanóico via fermentação do glicerol / Modeling and optimization of process for synthesis of propionic acid through the glycerol fermentation

Coêlho, Dayana de Gusmão, 1986- 07 April 2011 (has links)
Advisor: Rubens Maciel Filho / Master's thesis - Universidade Estadual de Campinas, Faculdade de Engenharia Química / Abstract: Currently, most propionic acid production relies on chemical synthesis, with petroleum as the raw material. Awareness of the limitations of non-renewable sources, and in particular of the depletion of petroleum-derived feedstocks, poses the major challenge of finding alternative sources that are competitive and sustainable, with environmental preservation in mind. This work studies a fermentation process intended to reduce the pressure on natural resources currently exerted by petrochemical routes and to provide parallel, alternative routes based on renewable feedstocks. The project investigates the production of propionic acid by a biotechnological process using glycerol as the raw material and the microorganism Propionibacterium acidipropionici. For the analysis of the batch system, the process is modeled mathematically with unstructured models and simulated using the Runge-Kutta method. The operating and kinetic parameters were optimized by applying design-of-experiments techniques through Response Surface Methodology and a Genetic Algorithm. Through the development of the models and the optimization of parameters and operating conditions, the conditions, limitations and feasibility of the process can be determined, providing fundamental tools for investigating the behavior of the process in order to maximize propionic acid production and enable an alternative route for this product. / Master's / Desenvolvimento de Processos Químicos / Master in Chemical Engineering
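A minimal sketch of the kind of unstructured batch model and Runge-Kutta simulation mentioned above, using Monod-type growth with a simple product-inhibition term; all parameter values are illustrative placeholders, not the fitted values from the dissertation:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative kinetic parameters (placeholders, not fitted values).
mu_max, Ks = 0.25, 5.0      # 1/h, g/L   (Monod growth on glycerol)
Kp = 30.0                   # g/L        (linear product-inhibition threshold)
Yxs, Yps = 0.45, 0.55       # g/g        (biomass and product yields on substrate)

def batch(t, y):
    X, S, P = y                                  # biomass, glycerol, propionic acid (g/L)
    mu = mu_max * S / (Ks + S) * max(0.0, 1 - P / Kp)
    dX = mu * X
    dS = -dX / Yxs
    dP = Yps * (-dS)
    return [dX, dS, dP]

sol = solve_ivp(batch, t_span=(0, 72), y0=[0.2, 40.0, 0.0],
                method="RK45", dense_output=True)   # explicit Runge-Kutta integration
X, S, P = sol.y[:, -1]
print(f"after 72 h: biomass {X:.1f} g/L, glycerol {S:.1f} g/L, propionic acid {P:.1f} g/L")
```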
285

Increasing the Feasibility of Multilevel Studies through Design Improvements and Analytic Advancements

Cox, Kyle 19 November 2019 (has links)
No description available.
286

Modelling and optimisation of flexible PVC compound formulation for mine cables

Fechter, Reinhard Heinrich January 2017 (has links)
The thermal stability, fire retardancy and basic mechanical properties, as a function of the mass fractions of the poly(vinyl chloride) (PVC) compound ingredients, can be modelled using 2nd order Scheffé polynomials. The empirical models for each response variable can be determined using statistical experimental design. The particular models for each response variable, which are selected for predictive ability using k-fold cross validation, can be interpreted using statistical analysis of the model terms. The statistical analysis of the model terms can reveal the synergistic or antagonistic interactions between ingredients, some of which have not been reported in literature. The interaction terms in the models also mean that the effect of a certain ingredient is dependent on the mass fractions of the other ingredients. Sensitivity analysis can be used to examine the overall effect of a change in a particular formulation on the response variables. The empirical models can be used to minimise the cost of the PVC compound by varying the formulation. The optimum formulation is a function of the costs of the various ingredients and the limits which are placed on the response variables. To analyse the system as a whole, parametric analysis can be used. The number of different parametric analyses which can be done is very large and depends on the specific questions which need to be answered. Parametric analysis can be used to gain insight into the complex behaviour of the system with changing requirements, as a decision making tool in a commercial environment or to determine the completeness of the different measuring techniques used to describe the thermal stability and fire retardancy of the PVC compound. Statistical experimental design allows for the above methods to be used which leads to significant time and labour savings over attempting to reach the same conclusions using the traditional one-factor-at-a-time experiments with changes in the phr of an ingredient. It is recommended that the data generated for this investigation is analysed in more detail using the methods outlined for this investigation. This can be facilitated by making the analysis of the data (and therefore the data itself) more accessible through a usable interface. The data set itself can also be expanded to include new ingredients requiring very few additional experiments. If a PVC compound that contains none of the ingredients that were used in this investigation is of interest a new separate data set needs to be generated. This can be done by following the same procedure used in this investigation. In fact the method that is used in this investigation can be generalised to optimise the proportions of the ingredients of any mixture. / Dissertation (MEng)--University of Pretoria, 2017. / Chemical Engineering / MEng / Unrestricted
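A minimal sketch of fitting a second-order Scheffé polynomial to mixture data, on a synthetic three-component example rather than the dissertation's PVC formulation data: the model has no intercept and contains only the component proportions and their pairwise products.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)

# Synthetic mixture design: proportions of 3 ingredients summing to 1
# (a real PVC formulation has more components and constrained ranges).
q = 3
X = rng.dirichlet(np.ones(q), size=30)

# Synthetic response, e.g. a mechanical property, with an x1*x2 synergy term.
beta_true = np.array([10.0, 6.0, 4.0])
y = X @ beta_true + 8.0 * X[:, 0] * X[:, 1] + rng.normal(0, 0.2, size=len(X))

# Second-order Scheffé model: y = sum_i b_i x_i + sum_{i<j} b_ij x_i x_j (no intercept).
pairs = list(itertools.combinations(range(q), 2))
Z = np.column_stack([X] + [X[:, i] * X[:, j] for i, j in pairs])
coef, *_ = np.linalg.lstsq(Z, y, rcond=None)

names = [f"x{i+1}" for i in range(q)] + [f"x{i+1}*x{j+1}" for i, j in pairs]
for name, b in zip(names, coef):
    print(f"{name:8s} {b:+.2f}")
```

Model selection by k-fold cross-validation, as described above, would compare variants of this model matrix (for example, with some interaction columns removed) on held-out folds.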
287

Robots Without Faces: Non-Verbal Social Human-Robot Interaction

Bethel, Cindy L 17 June 2009 (has links)
Non-facial and non-verbal methods of affective expression are essential for naturalistic social interaction in robots that are designed to be functional and lack expressive faces (appearance-constrained), such as those used in search and rescue, law enforcement, and military applications. This research identifies five main methods of non-facial and non-verbal affective expression (body movement, posture, orientation, color, and sound). From the psychology, computer science, and robotics literature, a set of prescriptive recommendations was distilled for the appropriate non-facial and non-verbal affective expression methods for each of three proximity zones of interest (intimate: contact - 0.46 m, personal: 0.46 - 1.22 m, and social: 1.22 - 3.66 m). These recommendations serve as design guidelines for retroactively adding affective expression through software, with minimal or no physical modifications to a robot, or for designing a new robot. This benefits both the human-robot interaction (HRI) and robotics communities. A large-scale, complex human-robot study was conducted to verify these design guidelines, using 128 participants and four methods of evaluation (self-assessments, psychophysiological measures, behavioral observations, and structured interviews) for convergent validity. The study was conducted in a high-fidelity, confined-space simulated disaster site, with all robot interactions performed in the dark. This research investigated whether the use of non-facial and non-verbal affective expression provided a mechanism for naturalistic social interaction between a functional, appearance-constrained robot and the human with which it interacted. As part of this research study, the valence and arousal dimensions of the Self-Assessment Manikin (SAM) were validated for use as an assessment tool for future HRI human-robot studies. Also presented is a set of practical recommendations for designing, planning, and executing a successful, large-scale, complex human-robot study using appropriate sample sizes and multiple methods of evaluation for validity and reliability in HRI studies. As evidenced by the results, humans were calmer with robots that exhibited non-facial and non-verbal affective expressions during social human-robot interactions in urban search and rescue applications. The results also indicated that humans calibrated their responses to robots based on their first robot encounter.
288

Identification of material parameters in linear elasticity - some numerical results

Hein, Torsten, Meyer, Marcus 28 November 2007 (has links)
In this paper we present some numerical results concerning the identification of material parameters in linear elasticity under small deformations. On the basis of a specific example, different aspects of the parameter estimation problem are considered. We deal with practical questions, such as the experimental design needed to obtain sufficient data for recovering the unknown parameters, as well as questions of treating the corresponding inverse problems numerically. Two algorithms for solving these problems are introduced, and extensive numerical case studies are presented and discussed.
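As a toy illustration of the parameter-identification setting, reduced to a 1D surrogate rather than the paper's elasticity model: recovering a Young's modulus from noisy displacement measurements of an axially loaded bar by nonlinear least squares (all values invented).

```python
import numpy as np
from scipy.optimize import least_squares

# Toy forward model: axial displacement of a bar under end load F,
#   u(x) = F * x / (E * A).
# Identifying E from noisy u-measurements stands in for the harder
# identification of elastic parameters in 2D/3D.
L, A, F = 1.0, 1e-4, 5e3           # m, m^2, N   (illustrative values)
E_true = 70e9                       # Pa
x_meas = np.linspace(0.1, 1.0, 10) * L

rng = np.random.default_rng(3)
u_meas = F * x_meas / (E_true * A) * (1 + rng.normal(0, 0.02, size=x_meas.size))

def residuals(theta_gpa):
    """Residuals in micrometres; the parameter is E in GPa for better scaling."""
    E = theta_gpa[0] * 1e9
    return (F * x_meas / (E * A) - u_meas) * 1e6

fit = least_squares(residuals, x0=[30.0], bounds=(1.0, 500.0))
print(f"identified E ~ {fit.x[0]:.1f} GPa (true {E_true / 1e9:.0f} GPa)")
```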
289

Bayesian Optimal Experimental Design Using Multilevel Monte Carlo

Ben Issaid, Chaouki 12 May 2015 (has links)
Experimental design can be vital when experiments are resource-intensive and time-consuming. In this work, we carry out experimental design in the Bayesian framework. To measure the amount of information that can be extracted from the data in an experiment, we use the expected information gain as the utility function, which is the expected logarithm of the ratio between the posterior and prior distributions. Optimizing this utility function enables us to design experiments that yield the most informative data about the model parameters. One of the major difficulties in evaluating the expected information gain is that it naturally involves nested integration over a possibly high dimensional domain. We use the Multilevel Monte Carlo (MLMC) method to accelerate the computation of the nested high dimensional integral. The advantages are twofold. First, MLMC can significantly reduce the cost of the nested integral for a given tolerance, by using an optimal sample distribution among different sample averages of the inner integrals. Second, the MLMC method imposes fewer assumptions, such as the asymptotic concentration of posterior measures required, for instance, by the Laplace approximation (LA). We test the MLMC method using two numerical examples. The first example is the design of sensor deployment for a Darcy flow problem governed by a one-dimensional Poisson equation. We place the sensors in the locations where the pressure is measured, and we model the conductivity field as a piecewise constant random vector with two parameters. The second is a chemical Enhanced Oil Recovery (EOR) core-flooding experiment assuming homogeneous permeability. We measure the cumulative oil recovery from a horizontal core flooded with water, surfactant and polymer at different injection rates. The model parameters consist of the endpoint relative permeabilities, the residual saturations and the relative permeability exponents for the three phases: water, oil and microemulsions. We also compare the performance of MLMC with that of the LA and the direct Double Loop Monte Carlo (DLMC). In fact, we show that, in the case of the aforementioned examples, MLMC combined with LA turns out to be the best method in terms of computational cost.
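A minimal sketch of the nested (double-loop) Monte Carlo estimator of the expected information gain for a toy scalar linear-Gaussian experiment; this is the DLMC baseline that the MLMC approach accelerates, and all model choices here are illustrative:

```python
import numpy as np
from scipy.special import logsumexp

rng = np.random.default_rng(0)

# Toy experiment: observe y = d * theta + noise, with theta ~ N(0, sigma_p^2).
# The scalar d plays the role of the design variable being optimized.
sigma_p, sigma_n = 1.0, 0.3

def log_lik(y, theta, d):
    r = y - d * theta
    return -0.5 * (r / sigma_n) ** 2 - np.log(sigma_n * np.sqrt(2.0 * np.pi))

def eig_dlmc(d, n_outer=1000, m_inner=1000):
    """Double-loop Monte Carlo estimate of the expected information gain."""
    theta_out = rng.normal(0.0, sigma_p, size=n_outer)
    y = d * theta_out + rng.normal(0.0, sigma_n, size=n_outer)
    # Inner loop: log evidence log p(y) ~= logsumexp_m log p(y | theta_m) - log M.
    theta_in = rng.normal(0.0, sigma_p, size=(n_outer, m_inner))
    log_evidence = logsumexp(log_lik(y[:, None], theta_in, d), axis=1) - np.log(m_inner)
    return np.mean(log_lik(y, theta_out, d) - log_evidence)

for d in (0.5, 1.0, 2.0):
    exact = 0.5 * np.log(1.0 + (d * sigma_p / sigma_n) ** 2)   # closed form for this toy
    print(f"design d={d:.1f}: DLMC EIG ~ {eig_dlmc(d):.3f}   (exact {exact:.3f})")
```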
290

Development of a Non-Intrusive Continuous Sensor for Early Detection of Fouling in Commercial Manufacturing Systems

Fernando Jose Cantarero Rivera (9183332) 31 July 2020 (has links)
Fouling is a critical issue in commercial food manufacturing. Fouling can cause biofilm formation and pose a threat to the safety of food products. Early detection of fouling can lead to informed decision making about the product's safety and quality, and to effective system cleaning to avoid biofilm formation. In this study, a Non-Intrusive Continuous Sensor (NICS) was designed to estimate the thermal conductivity of the product as it flows through the system at high temperatures, as an indicator of fouling. Thermal properties of food products are important for product and process design and to ensure food safety. Online monitoring of thermal properties during production and development stages at higher processing temperatures (~140°C, as in current aseptic processes) is not possible due to limitations in sensing technology and to safety concerns arising from the high temperature and pressure conditions. Such an in-line and noninvasive sensor can provide information about fouling layer formation, food safety issues, and quality degradation of the products. A computational fluid dynamics model was developed to simulate the flow within the sensor and provide predicted data output. Glycerol, water, 4% potato starch solution, reconstituted non-fat dry milk (NFDM), and heavy whipping cream (HWC) were selected as test products, with the latter two used for fouling layer thickness studies. The product and fouling layer thermal conductivities were estimated at high temperatures (~140°C). Scaled sensitivity coefficients and optimal experimental design were taken into consideration to improve the accuracy of parameter estimates. Glycerol, water, 4% potato starch, NFDM, and HWC were estimated to have thermal conductivities of 0.292 ± 0.006, 0.638 ± 0.013, 0.487 ± 0.009, 0.598 ± 0.010, and 0.359 ± 0.008 W/(m·K), respectively. The thermal conductivity of the fouling layer decreased as the processing time increased. At the end of one hour of processing, thermal conductivity reached an average minimum of 0.365 ± 0.079 W/(m·K) and 0.097 ± 0.037 W/(m·K) for NFDM and HWC fouling, respectively. The sensor's novelty lies in the short duration of the experiments, the non-intrusive aspect of its measurements, and its implementation for commercial manufacturing.
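A brief sketch of the scaled sensitivity coefficients mentioned above, X_p(t) = p · ∂T/∂p, computed by finite differences for a simple lumped-capacitance heating model rather than the study's sensor model (illustrative parameters only):

```python
import numpy as np

# Toy model: lumped-capacitance temperature response toward a ~140 C medium,
#   T(t) = T_inf + (T0 - T_inf) * exp(-h*A/(rho*c_p*V) * t).
# Parameters examined: h (heat-transfer coefficient) and c_p (specific heat).
def model(t, h, c_p, T0=20.0, T_inf=140.0, A=0.01, V=1e-4, rho=1000.0):
    return T_inf + (T0 - T_inf) * np.exp(-h * A / (rho * c_p * V) * t)

def scaled_sensitivity(t, params, name, rel_step=1e-4):
    """Scaled sensitivity coefficient X_p(t) = p * dT/dp, by central differences."""
    p = params[name]
    hi = dict(params, **{name: p * (1 + rel_step)})
    lo = dict(params, **{name: p * (1 - rel_step)})
    dTdp = (model(t, **hi) - model(t, **lo)) / (2 * p * rel_step)
    return p * dTdp

t = np.linspace(0, 600, 7)                       # s
params = {"h": 250.0, "c_p": 4180.0}             # W/(m^2 K), J/(kg K); illustrative
for name in params:
    print(name, np.round(scaled_sensitivity(t, params, name), 2))
# In this toy model T depends only on the ratio h/c_p, so X_h and X_cp are
# mirror images: a warning that the two parameters cannot be estimated
# simultaneously from such data, which is the kind of issue that scaled
# sensitivities and optimal experimental design are used to diagnose.
```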
