21

Commissioning a Commercial Laser Induced Fluorescence System for Characterization of Static Mixer Performance

Ezhilan, Madhumitha 28 August 2017 (has links)
No description available.
22

Generalised analytic queueing network models : the need, creation, development and validation of mathematical and computational tools for the construction of analytic queueing network models capturing more critical system behaviour

Almond, John January 1988 (has links)
Modelling is an important technique in the comprehension and management of complex systems. Queueing network models capture most relevant information from computer system and network behaviour. The construction and resolution of these models is constrained by many factors. Approximations retain detail that would be lost in exact solution and/or provide results at lower cost than simulation. Information at the resource and interactive command level is gathered with monitors under ULTRIX. Validation studies indicate central processor service times are highly variable on the system, and more pessimistic predictions assuming this variability are in part verified by observation. The utility of the Generalised Exponential (GE) as a distribution parameterised by mean and variance is explored. Small networks of GE service centres can be solved exactly using methods proposed for Generalised Stochastic Petri Nets. For two-centre systems of GE type a new technique simplifying the balance equations is developed. A very efficient "building block" is presented for exactly solving two-centre systems with service or transfer blocking, Bernoulli feedback and load-dependent rate, multiple GE servers. In the tandem finite buffer algorithm the building block illustrates the problems encountered when modelling high variability in blocking networks. A parametric validation study is made of approximations for single-class closed networks of First-Come-First-Served (FCFS) centres with general service times. The multiserver extension using the building block is validated. Finally the Maximum Entropy approximation is extended to FCFS centres with multiple chains and implemented with computationally efficient convolution.
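A minimal sketch of the Generalised Exponential (GE) distribution the abstract builds on, assuming its common two-moment formulation (an atom at zero mixed with an exponential tail, parameterised by mean and squared coefficient of variation); the function name and parameter values are illustrative, not taken from the thesis.

```python
import numpy as np

def sample_ge(mean, scv, size, rng=None):
    """Sample a Generalised Exponential (GE) variate: with probability
    1 - tau the value is 0, otherwise exponential with mean mean/tau,
    where tau = 2 / (scv + 1).  This matches the requested mean and
    squared coefficient of variation for scv >= 1."""
    rng = rng or np.random.default_rng()
    tau = 2.0 / (scv + 1.0)
    at_zero = rng.random(size) >= tau            # atom at zero
    tail = rng.exponential(mean / tau, size)     # exponential tail
    return np.where(at_zero, 0.0, tail)

# Quick check: empirical mean ~1.0 and squared CV ~4.0.
x = sample_ge(mean=1.0, scv=4.0, size=200_000)
print(x.mean(), x.var() / x.mean() ** 2)
```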
23

On the Application of the Bootstrap : Coefficient of Variation, Contingency Table, Information Theory and Ranked Set Sampling

Amiri, Saeid January 2011 (has links)
This thesis deals with the bootstrap method. Three decades after Bradley Efron's seminal paper, the horizons of this method still need more exploration. The research presented herein steps into different fields of statistics where the bootstrap can serve as a fundamental statistical tool in almost any application. The thesis considers various statistical problems, which are explained briefly below. Bootstrap method: A comparison of the parametric and the nonparametric bootstrap of the variance is presented. The bootstrap of ranked set sampling (RSS) is dealt with, alongside the wealth of theories and applications on the RSS bootstrap that exist today, and the performance of RSS in resampling is explored. Furthermore, the application of the bootstrap method to inference for contingency table tests is studied. Coefficient of variation: This part shows the capacity of the bootstrap for inference on the coefficient of variation, a task the asymptotic method does not perform very well. Information theory: There are few works on inference in information theory, especially on the inference of entropy; the papers included in this thesis pursue inference of entropy using the bootstrap method.
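To make the coefficient-of-variation part concrete: a minimal sketch of a nonparametric percentile bootstrap interval for the CV. The percentile construction is one standard choice among the several variants such a thesis compares; the data and all names here are illustrative.

```python
import numpy as np

def bootstrap_cv_ci(x, n_boot=10_000, alpha=0.05, rng=None):
    """Nonparametric percentile bootstrap interval for the coefficient
    of variation (sd / mean) of the sample x."""
    rng = rng or np.random.default_rng()
    x = np.asarray(x, dtype=float)
    idx = rng.integers(0, len(x), size=(n_boot, len(x)))
    resamples = x[idx]
    cvs = resamples.std(axis=1, ddof=1) / resamples.mean(axis=1)
    return tuple(np.quantile(cvs, [alpha / 2, 1 - alpha / 2]))

x = np.random.default_rng(1).lognormal(0.0, 0.4, size=50)  # illustrative
print(bootstrap_cv_ci(x))
```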
24

Relative vegetation height variation and reflectance of herbaceous-dominated patches in Central Sweden

Santiago, Jo January 2020 (has links)
Semi-natural landscapes are recognized as suitable habitats for different plant species and provide ecosystem services that contribute to increased plant biodiversity. At the stand level, plant biodiversity is influenced by vegetation structure, of which vegetation height is an important parameter. Photogrammetry from drone-captured images has the potential to provide a quick and cost-effective analysis of vegetation height. In addition, the relation between spectral signatures and species distribution can indicate where higher plant biodiversity can be found, as species can be identified based on their spectral signatures. Spectral signatures are thus used in the current study, in conjunction with vegetation height, as a proxy for plant biodiversity in herbaceous-dominated patches. Two field surveys were conducted to collect drone data and reflectance data in July and August 2019. Twelve plots of ten metres in diameter were delimited in the drone-derived orthophotos around the coordinates of the reflectance readings. To assess vegetation height, the difference between the digital surface model derived from the orthophotos and the national digital elevation model was determined. Two indices were calculated: the modified soil-adjusted vegetation index (MSAVI) and the coefficient of variation of heights (CV), and the relationship between them was evaluated as a proxy for plant biodiversity. Drone-derived point clouds can be used to measure vegetation height in herbaceous-dominated environments owing to the very fine scale of drone imagery. A possible negative correlation was found between MSAVI and CV in both surveyed months (July r² = 0.675; August r² = 0.401) when the outlier plots were removed from the analysis. There is not enough evidence to clearly explain the anomalous behaviour of the outlier plots. Further research is needed to confirm the use of the relationship between vegetation height variability and reflectance as a proxy for plant biodiversity assessment in herbaceous-dominated environments.
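The two indices are simple to compute; a minimal sketch, using the widely cited MSAVI2 form of the index. The band reflectances and plot heights below are illustrative assumptions, not the study's measurements.

```python
import numpy as np

def msavi(nir, red):
    """Modified soil-adjusted vegetation index (the common MSAVI2 form)."""
    nir, red = np.asarray(nir, dtype=float), np.asarray(red, dtype=float)
    return (2 * nir + 1 - np.sqrt((2 * nir + 1) ** 2 - 8 * (nir - red))) / 2

def height_cv(heights):
    """Coefficient of variation of vegetation heights within one plot."""
    h = np.asarray(heights, dtype=float)
    return h.std(ddof=1) / h.mean()

# One illustrative plot: mean band reflectances and drone-derived heights.
nir_mean, red_mean = 0.42, 0.08
plot_heights = np.array([0.31, 0.45, 0.28, 0.52, 0.40])   # metres
print(msavi(nir_mean, red_mean), height_cv(plot_heights))
```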
25

Reliability Based Design of Lime-Cement Columns based on Total Settlement Criterion

Ehnbom, Victor, Kumlin, Filip January 2011 (has links)
The geotechnical community has for decades been acquainted with the use of statistical approaches for design optimization. This has been accepted as an operational method by many practitioners in the field but has yet to see a major full-scale breakthrough and acceptance in practice. The advantage of quantifying the many different sources of uncertainty in a design is already fairly well acknowledged, and it is in this report extended to the case of road embankments founded on soft soil improved by lime-cement columns. A statistical approach was adopted through the practice of reliability-based design (RBD) to consider the importance of the variability of the input variables, with the target of streamlining the result by decreasing uncertainties (by means of increased measurements, careful installation, etc.). By constructing a working model that gives the area ratio between columns and soil needed to fulfill the criteria set as input values, weight is put on investigating the effects of different coefficients of variation (COV). The analyses show that the property variabilities have such a significant influence on the requisite area ratio that active use of RBD is a useful tool for optimizing designs in geotechnical engineering. The methodology favors the contractor's own development of the mixing process, since higher design values can be utilized when the variability of the installed columns is reduced.
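A minimal Monte Carlo sketch of how a property COV drives the required area ratio under a settlement criterion. The composite-stiffness settlement model and every number below are illustrative placeholders, not the thesis's design model.

```python
import numpy as np

rng = np.random.default_rng(7)

def failure_probability(area_ratio, cov_col, n=100_000):
    """Monte Carlo probability that settlement exceeds the limit under a
    placeholder composite-stiffness model:
        s = q*H / (a*E_col + (1 - a)*E_soil),
    with a lognormal column modulus of the given COV."""
    q, h, s_limit = 80.0, 8.0, 0.03            # kPa, m, m -- illustrative
    e_soil, e_col_mean = 1_500.0, 150_000.0    # kPa -- illustrative
    sigma = np.sqrt(np.log(1.0 + cov_col ** 2))
    mu = np.log(e_col_mean) - 0.5 * sigma ** 2
    e_col = rng.lognormal(mu, sigma, n)
    s = q * h / (area_ratio * e_col + (1.0 - area_ratio) * e_soil)
    return np.mean(s > s_limit)

# A larger property COV demands a larger area ratio for the same target.
for cov in (0.1, 0.3, 0.5):
    a = 0.10
    while failure_probability(a, cov) > 1e-3:   # target failure probability
        a += 0.01
    print(f"COV = {cov:.1f}: required area ratio ~ {a:.2f}")
```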
26

Safety formats for non-linear finite element analyses of reinforced concrete beams loaded to shear failure

Ekesiöö, Anton, Ekhamre, Andreas January 2018 (has links)
There exist several methods that can be used to introduce a level of safety when performing non-linear finite element analysis of a structure. These methods are called safety formats, and they estimate safety by different means and formulas, which are discussed further in this thesis. The aim of this master thesis is to evaluate a model uncertainty factor for one safety format, the estimation of coefficient of variation (ECOV) method, since it is suggested for inclusion in the next version of Eurocode. The ECOV method is also compared with the most common and widely used safety format, the partial factor (PF) method. The first part of this thesis presents the different safety formats more thoroughly, followed by a theoretical part. The theory part aims to provide a deeper knowledge of the finite element method and non-linear finite element analysis, together with some beam theory that explains the shear mechanism in different beam types. The study was conducted on six beams in total: three deep beams and three slender beams. The deep beams were tested in a laboratory in the 1970s and the slender beams in the 1990s. All beams failed in shear in the experimental tests. A detailed description of the beams is presented in the thesis. The simulations of the beams were all performed in the FEM programme ATENA 2D to obtain high resemblance to the experimental tests. The results from the simulations show that the ECOV method generally gives a higher capacity than the PF method. For the slender beams both methods gave rather high design capacities, with a mean of about 82% of the experimental capacity. For the deep beams both methods gave low design capacities, with a mean of around 46% of the experimental capacity. The results regarding the model uncertainty factor show that its mean value should be around 1.06 for slender beams and around 1.25 for deep beams.
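The ECOV method is compact enough to sketch. The form below follows its commonly cited formulation (two non-linear analyses, one with mean and one with characteristic material parameters); the sensitivity factors, the default model uncertainty factor and the input capacities are illustrative assumptions.

```python
import math

def ecov_design_resistance(r_mean, r_char, gamma_rd=1.06,
                           alpha_r=0.8, beta=3.8):
    """ECOV safety format in its commonly cited form: estimate the COV
    of resistance from two non-linear analyses (mean and characteristic
    material parameters), then apply a global resistance factor and the
    model uncertainty factor gamma_rd."""
    v_r = math.log(r_mean / r_char) / 1.65      # estimated COV of resistance
    gamma_r = math.exp(alpha_r * beta * v_r)    # global resistance factor
    return r_mean / (gamma_rd * gamma_r)

# Illustrative capacities from two hypothetical analyses (kN):
print(ecov_design_resistance(r_mean=420.0, r_char=360.0))
```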
28

A Study of the Effects of Operational Time Variability in Assembly Lines with Linear Walking Workers

Amini Malaki, Afshin January 2012 (has links)
In the present fierce global competition, poor responsiveness, low flexibility to meet demand uncertainty, and the low efficiency of traditional assembly lines are adequate motives to persuade manufacturers to adopt highly flexible production tools such as cross-trained workers who move along the assembly line while carrying out their planned jobs at different stations [1]. Cross-trained workers can be applied in various models of assembly lines. A novel model, nowadays considered in many industries, is the linear walking worker assembly line, which employs workers who travel along the line and fully assemble the product from beginning to end [2]. However, these flexible assembly lines consistently suffer imbalance across their stations, which causes a significant loss in line efficiency. Operational time variability is one of the main sources of this imbalance [3] and is the focus of this study, which investigated the possibility of decreasing the mentioned loss by arranging workers with different variability in a particular order along walking worker assembly lines. The problem motivation comes from the literature on unbalanced lines, which is focused on the bowl phenomenon: Hillier and Boling [4] showed that unbalancing a line in a bowl shape could achieve the optimal production rate. This study took a conceptual design proposed by a local automotive company as a case study and used discrete event simulation as the research method to examine the questions and hypotheses of this research. The results showed an improvement of about 2.4% in throughput from arranging workers in a specific order, which is significant compared to the 1 to 2 percent improvement on the equivalent fixed line. In addition, analysis of the results concluded that achieving the largest improvement requires grouping all low-skill workers together; however, the pattern of imbalance significantly affects both the validity and the magnitude of this improvement.
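A minimal simulation sketch of the ordering question. The model below is a deliberately simplified tandem-line recursion with cyclically assigned walking workers and lognormal task times; it is not the study's discrete event simulation model, and all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)

def throughput(worker_cvs, n_jobs=20_000, n_stations=5, mean_time=1.0):
    """Throughput of a simplified walking-worker line: job j is carried
    by worker j % len(worker_cvs); task times are lognormal with that
    worker's CV; workers cannot overtake (tandem-line recursion)."""
    finish = np.zeros(n_stations)        # latest completion per station
    t = 0.0
    for j in range(n_jobs):
        cv = worker_cvs[j % len(worker_cvs)]
        sigma = np.sqrt(np.log(1.0 + cv ** 2))
        mu = np.log(mean_time) - 0.5 * sigma ** 2   # keep the mean fixed
        times = rng.lognormal(mu, sigma, n_stations)
        t = 0.0
        for k in range(n_stations):
            t = max(t, finish[k]) + times[k]  # wait for the job ahead
            finish[k] = t
    return n_jobs / t

# Grouping low-variability workers versus alternating them:
print(throughput([0.1, 0.1, 0.3, 0.3]))
print(throughput([0.1, 0.3, 0.1, 0.3]))
```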
29

Efficient Confidence Interval Methodologies for the Noncentrality Parameters of Noncentral T-Distributions

Kim, Jong Phil 06 April 2007 (has links)
The problem of constructing a confidence interval for the noncentrality parameter of a noncentral t-distribution based upon one observation from the distribution is an interesting problem with important applications. A general theoretical approach to the problem is provided by the specification and inversion of acceptance sets for each possible value of the noncentrality parameter. The standard method is based upon the arbitrary assignment of equal tail probabilities to the acceptance set, while the choices of the shortest possible acceptance sets and UMP unbiased acceptance sets provide even worse confidence intervals; indeed, since the standard confidence intervals are uniformly shorter than those of the UMPU method, the standard method is itself "biased". However, with the correct choice of acceptance sets it is possible to improve on the confidence interval length of the standard method for all values of the observation. The problem of testing the equality of the noncentrality parameters of two noncentral t-distributions is also considered; it arises naturally from the comparison of two signal-to-noise ratios for simple linear regression models. A test procedure is derived that is guaranteed to maintain type I error while having only minimal conservativeness, and comparisons are made with several other approaches based on variance-stabilizing transformations. Simulations confirm that the new procedure has type I error probabilities guaranteed not to exceed the nominal level, and demonstrate that its size and power compare well with the procedures based on variance-stabilizing transformations.
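The standard equal-tails method reduces to two root-finding problems, since the noncentral t CDF is decreasing in the noncentrality parameter; a minimal sketch, with illustrative inputs and an assumed bracketing span.

```python
from scipy.optimize import brentq
from scipy.stats import nct

def nc_confidence_interval(t_obs, df, alpha=0.05, span=30.0):
    """Equal-tailed CI for the noncentrality parameter of a noncentral
    t-distribution from a single observation t_obs: invert the CDF,
    which is decreasing in the noncentrality, at 1-alpha/2 and alpha/2."""
    lower = brentq(lambda d: nct.cdf(t_obs, df, d) - (1 - alpha / 2),
                   t_obs - span, t_obs + span)
    upper = brentq(lambda d: nct.cdf(t_obs, df, d) - alpha / 2,
                   t_obs - span, t_obs + span)
    return lower, upper

print(nc_confidence_interval(t_obs=3.2, df=10))
```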
30

Determination of sample size for the evaluation of yellow melon hybrids

Moura, Kallyo Halyson Santos 30 April 2008 (has links)
The sample size in experiments should be determined so that the parameters of interest can be estimated with adequate precision while saving time, labour and resources. The objective of the present work was to determine the sample size for evaluation experiments of melon hybrids in Mossoró. An experiment was carried out in randomized blocks with three replications to evaluate twenty yellow melon hybrids. The experimental plot consisted of two rows of 5.0 m with twenty plants at a spacing of 2.0 x 0.5 m. The traits evaluated were pulp thickness, pulp firmness, content of soluble solids and mean fruit weight. Data were taken plant by plant, identifying each harvested fruit. Bootstrap resampling, a segmented linear model with plateau, sampling intensity and the modified maximum curvature method were used to determine the sample size. The recommended sample sizes for determining pulp thickness, mean fruit weight, content of soluble solids and pulp firmness are 13, 13, 9 and 8 fruits, respectively. The minimum sample sizes determined by the modified maximum curvature method for pulp thickness, mean fruit weight, content of soluble solids and pulp firmness were 6.32, 7.35, 5.32 and 3.25, respectively.
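A minimal sketch of the bootstrap route to a sample size, in the same spirit as the resampling method above: resample candidate sizes from pilot data and stop at the first size meeting a precision target. The pilot data, target and stopping rule are illustrative assumptions rather than the thesis's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(3)

def required_sample_size(pilot, rel_halfwidth=0.10, n_boot=4_000,
                         alpha=0.05):
    """Smallest n whose bootstrap CI for the mean has a relative
    half-width no larger than rel_halfwidth, resampling n fruits at a
    time from the pilot measurements."""
    pilot = np.asarray(pilot, dtype=float)
    for n in range(3, len(pilot) + 1):
        idx = rng.integers(0, len(pilot), size=(n_boot, n))
        means = pilot[idx].mean(axis=1)
        lo, hi = np.quantile(means, [alpha / 2, 1 - alpha / 2])
        if (hi - lo) / 2.0 <= rel_halfwidth * pilot.mean():
            return n
    return len(pilot)

pulp_thickness = rng.normal(3.4, 0.5, size=40)   # cm, illustrative pilot
print(required_sample_size(pulp_thickness))
```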
