91

Testování protokolů pro video na vyžádání v programu Apache JMeter / Video on Demand Protocols Testing using Apache JMeter

Srnec, Tomáš January 2018 (has links)
The master's thesis deals with testing the application protocols HLS and RTSP in the JMeter program. The aim of the thesis is to design and implement test modules for both protocols that perform stress tests. The first part of the thesis describes the types of stress tests, the JMeter performance-testing program, and video on demand services. The next chapter describes the selected protocols, especially HLS and RTSP, which are used in this thesis. The practical part contains the design and implementation of the test modules, including test plans. Finally, the results are processed and commented on.
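
A rough illustration of what one simulated HLS viewer in such a stress test does — written here as a Python sketch, whereas the thesis implements this logic as JMeter test modules. The playlist URL, viewer count, and segment limit are placeholder assumptions.

```python
import concurrent.futures
import time
import urllib.parse
import urllib.request

PLAYLIST_URL = "http://example.com/stream/playlist.m3u8"  # hypothetical endpoint

def play_session(client_id: int, max_segments: int = 5) -> float:
    """Fetch the playlist, then a few media segments, like one HLS viewer."""
    start = time.monotonic()
    with urllib.request.urlopen(PLAYLIST_URL) as resp:
        playlist = resp.read().decode("utf-8")
    # In an M3U8 playlist, the non-comment lines are the media segment URIs.
    segments = [ln for ln in playlist.splitlines() if ln and not ln.startswith("#")]
    for uri in segments[:max_segments]:
        with urllib.request.urlopen(urllib.parse.urljoin(PLAYLIST_URL, uri)) as seg:
            seg.read()  # download and discard the segment bytes
    return time.monotonic() - start

# Stress test: 50 concurrent simulated viewers; collect per-session times.
with concurrent.futures.ThreadPoolExecutor(max_workers=50) as pool:
    times = list(pool.map(play_session, range(50)))
print(f"mean session time: {sum(times) / len(times):.2f} s")
```

In JMeter terms, the same pattern corresponds to HTTP samplers inside a thread group, with listeners aggregating the response-time statistics.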
92

Developing an Efficient Cover Cropping System for Maximum Nitrogen Recovery in Massachusetts

Farsad, Ali 13 May 2011 (has links)
Time of planting plays a critical role in nitrogen (N) uptake by a rye cover crop (CC). Even a few days of delay in planting can severely decrease CC performance. Evaluating the amount of N accumulation related to time of planting is critical to the farmer, who has to optimize the winter rye planting date based on completion of corn harvest, suitable weather conditions, and time availability for fall manure application. Winter rye cover crop was planted on six dates each fall, from mid-August to early October at weekly intervals, from 2004 to 2009. The results suggest that delay beyond the critical planting date (CPD) decreases rye N uptake dramatically. Suggested CPDs for northwestern parts of Massachusetts are not applicable because they are too early (third to fourth week of August). CPDs for central parts of the State are from the first to the second week of September; farmers in these zones can take advantage of the cover crop through better time management and planting no later than the CPD. In eastern areas of Massachusetts the CPD is the third week of September. By evaluating the effect of planting date on rye growth and N accumulation throughout the State, this model provides a powerful decision-making tool for increasing N recovery and reducing nutrient leaching. Sixteen units of cost-effective and accurate automated lysimeters were designed and installed to measure post-harvest nitrate leaching from a rye cover crop field during the falls and winters of 2007 to 2009. The electronic system was designed to monitor soil tension and apply an equal amount of suction to the sampling media. Hourly data on soil tension and the vacuum applied to the system were collected and stored by each unit. A safety system was designed to protect the vacuum pump against unexpected major vacuum leakage events. The controller can easily be reprogrammed for different performance strategies. Other major parts of the lysimeter included the power supply systems, vacuum pump, vacuum tanks, sampling jars, suction cups and plates, and electronic valves. The electronic system showed very reliable and accurate performance under field conditions.
93

Gibbs Sampling and Expectation Maximization Methods for Estimation of Censored Values from Correlated Multivariate Distributions

HUNTER, TINA D. 25 August 2008 (has links)
No description available.
94

A study of flow fields during filling of a sampler

Zhang, Zhi January 2009 (has links)
Recently, the steel industry has paid increasing attention to decreasing the number and size of non-metallic inclusions in final products. Therefore, more effort has been made to monitor inclusion size distributions during the metallurgical process, especially during secondary steelmaking. A liquid sampling procedure is one of the commonly applied methods for monitoring the inclusion size distribution in ladles, for example, during secondary steelmaking. Here, a crucial point is that the steel sampler should be filled and solidified without changing the inclusion characteristics that exist at steelmaking temperatures. In order to preserve the original size and distributions in the extracted samples, it is important to avoid collisions and coagulations of inclusions inside the sampler during filling. Therefore, one of the first steps is to investigate the flow pattern inside the sampler during filling, in order to obtain more in-depth knowledge of the sampling process and ensure that its influence is minimized.

The main objective of this work is a fundamental study of the above-mentioned sampler filling process. A production sampler employed in industry was scaled up according to Froude number similarity in the experimental study. Particle Image Velocimetry (PIV) was used to capture the flow field and calculate the velocity vectors during the entire experiment. Also, a mathematical model was developed for an in-depth investigation of the flow pattern inside the sampler during filling. Two different turbulence models were applied in the numerical study: the realizable k-ε model and the Wilcox k-ω model. The predictions were compared to experimental results obtained by the PIV measurements, and a fairly good agreement was found between the PIV measurements and the calculations predicted by the Wilcox k-ω model. Thus, it is concluded that the Wilcox k-ω model can be used in the future to predict the filling of steel samplers.
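
The scale-up mentioned above follows Froude similarity, Fr = U / sqrt(g·L): matching the Froude number between prototype and model fixes the velocity and time scales once the geometric scale is chosen. A small worked sketch, with an assumed illustrative scale factor rather than the one used in the thesis:

```python
import math

# Froude similarity: Fr_model = Fr_prototype
#   U_m / sqrt(g * L_m) = U_p / sqrt(g * L_p)  =>  U_m / U_p = sqrt(L_m / L_p)
# Time scale follows from t = L / U:  t_m / t_p = scale / sqrt(scale) = sqrt(scale).
scale = 3.0                        # L_model / L_prototype (illustrative assumption)
velocity_ratio = math.sqrt(scale)  # U_model / U_prototype
time_ratio = math.sqrt(scale)      # t_model / t_prototype

print(f"velocity scale: {velocity_ratio:.3f}, time scale: {time_ratio:.3f}")
```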
95

Bayesian Cluster Analysis : Some Extensions to Non-standard Situations

Franzén, Jessica January 2008 (has links)
The Bayesian approach to cluster analysis is presented. We assume that all data stem from a finite mixture model, where each component corresponds to one cluster and is given by a multivariate normal distribution with unknown mean and variance. The method produces posterior distributions of all cluster parameters and proportions, as well as associated cluster probabilities for all objects. We extend this method in several directions to some common but non-standard situations. The first extension covers the case with a few deviant observations not belonging to one of the normal clusters: an extra component/cluster is created for them, which has a larger variance or a different distribution, e.g. uniform over the whole range. The second extension is clustering of longitudinal data: all units are clustered at all time points separately, and the movements between time points are modeled by Markov transition matrices, so that the clustering at one time point is affected by what happens at the neighbouring time points. The third extension handles datasets with missing data, e.g. item non-response; we impute the missing values iteratively in an extra step of the Gibbs sampler estimation algorithm. The Bayesian inference of mixture models has many advantages over the classical approach, but it is not without computational difficulties. A software package for Bayesian inference of mixture models, written in Matlab, is introduced. The programs of the package handle the basic cases of clustering data that are assumed to arise from mixture models of multivariate normal distributions, as well as the non-standard situations.
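
As a hedged sketch of the estimation machinery described above, here is a minimal Gibbs sampler for a two-component univariate normal mixture with known variance; the thesis treats the multivariate case with unknown variances, and the data and hyperparameters below are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data from two univariate normal clusters (sigma^2 fixed for brevity).
x = np.concatenate([rng.normal(-2, 1, 150), rng.normal(3, 1, 100)])
K, sigma2 = 2, 1.0
mu = rng.normal(0, 1, K)      # cluster means, to be sampled
pi = np.full(K, 1.0 / K)      # mixture proportions
alpha, tau2 = 1.0, 100.0      # Dirichlet and normal-prior hyperparameters

for _ in range(1000):
    # 1. Sample cluster labels given means and proportions.
    logp = -0.5 * (x[:, None] - mu[None, :]) ** 2 / sigma2 + np.log(pi)
    p = np.exp(logp - logp.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    z = np.array([rng.choice(K, p=row) for row in p])
    # 2. Sample means from their conjugate normal full conditionals.
    for k in range(K):
        nk = (z == k).sum()
        var = 1.0 / (nk / sigma2 + 1.0 / tau2)
        mean = var * x[z == k].sum() / sigma2
        mu[k] = rng.normal(mean, np.sqrt(var))
    # 3. Sample proportions from their Dirichlet full conditional.
    pi = rng.dirichlet(alpha + np.bincount(z, minlength=K))

print("posterior draw of cluster means:", np.round(mu, 2))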
96

Optimisation spatio-temporelle d’efforts de recherche pour cibles manoeuvrantes et intelligentes / Spatio-temporal optimisation of search efforts for smart and reactive moving targets

Chouchane, Mathieu 17 October 2013 (has links)
In this work, we propose a solution to a problem posed by the DGA Techniques navales in order to survey a strategic area: determining the optimal spatio-temporal deployment of sensors that maximizes the probability of detecting a mobile and smart target. The target is said to be smart because, under certain conditions, it is capable of detecting the threat posed by the sensors and adapting its behaviour to avoid it. A deployment can also be very expensive, so its cost has to be taken into account when solving the problem. It is important to note that the wide spectrum of applications within this field of research also reflects the need for a highly complex theoretical framework based on stochastic mono- or multi-objective optimisation. Until now, none of the existing works have dealt with the cost of the deployments, and the majority treat only one type of constraint at a time. Current works mostly rely on operational-research algorithms that model the constraints in discretized space and time. In the first part, we present an algorithm which computes the most efficient spatio-temporal deployment of sensors without taking its cost into account. This optimisation method is based on an application of the generalised splitting method. In the second part, we first show that a weighted sum of the two criteria yields solutions without increasing the computation time (see the sketch below). For our second approach, we draw on evolutionary multiobjective optimisation algorithms and adapt the generalised splitting method to multiobjective optimisation. Finally, we compare our results with those of the NSGA-II algorithm.
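
A minimal sketch of the weighted-sum scalarisation mentioned above, assuming a cost normalised to [0, 1] and an illustrative weight; the candidate deployments are placeholders, not outputs of the generalised splitting method.

```python
# Collapse the two criteria — detection probability (maximise) and deployment
# cost (minimise) — into one score via a weighted sum.
def scalarise(detection_prob: float, cost: float, w: float = 0.7) -> float:
    # Higher score is better; cost assumed normalised to [0, 1].
    return w * detection_prob - (1.0 - w) * cost

candidates = [(0.80, 0.9), (0.65, 0.3), (0.70, 0.5)]  # (P_detect, norm. cost)
best = max(candidates, key=lambda c: scalarise(*c))
print("best deployment under w=0.7:", best)
```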
97

An application of Bayesian Hidden Markov Models to explore traffic flow conditions in an urban area

Andersson, Lovisa January 2019 (has links)
This study employs Bayesian Hidden Markov Models as a method to explore vehicle traffic flow conditions in an urban area of Stockholm, based on sensor data from separate road positions. Inter-arrival times are used as the observed sequences. These sequences of inter-arrival times are assumed to be generated from the distributions of four different (and hidden) traffic flow states: nightly free flow, free flow, mixture, and congestion. The filtered and smoothed probability distributions of the hidden states and the most probable state sequences are obtained using the forward, forward-backward, and Viterbi algorithms. The No-U-Turn sampler is used to sample from the posterior distributions of all unknown parameters. The obtained results show satisfactorily that Hidden Markov Models can detect different traffic flow conditions. Some of the models had convergence problems, but even those models produced satisfactory results: two of the models that converged seemed to overestimate the presence of congested traffic, while all of the models that did not converge seemed to estimate the probability of being in a congested state adequately. Since the interest of this study lies in estimating the current traffic flow condition, and not in parameter inference, the choice of Bayesian Hidden Markov Models is satisfactory. Due to the unsupervised nature of the problem studied, it is difficult to evaluate the accuracy of the results. However, a model with simulated data and known states was also implemented, which resulted in high classification accuracy. This indicates that Hidden Markov Models are a good choice for estimating traffic flow conditions.
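
As a sketch of the decoding step described above, here is a log-space Viterbi implementation for a four-state HMM; the transition matrix, initial distribution, and emission scores are illustrative stand-ins, not the fitted quantities from the study.

```python
import numpy as np

def viterbi(log_pi, log_A, log_B):
    """log_pi: (K,) initial log-probs, log_A: (K, K) transition log-probs,
    log_B: (T, K) per-observation emission log-likelihoods."""
    T, K = log_B.shape
    delta = np.empty((T, K))             # best log-score ending in each state
    psi = np.zeros((T, K), dtype=int)    # backpointers
    delta[0] = log_pi + log_B[0]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_A   # (from-state, to-state)
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_B[t]
    path = np.empty(T, dtype=int)
    path[-1] = delta[-1].argmax()
    for t in range(T - 2, -1, -1):               # backtrack the best path
        path[t] = psi[t + 1, path[t + 1]]
    return path

# Four states: nightly free flow, free flow, mixture, congestion (illustrative).
K = 4
log_pi = np.log(np.full(K, 1.0 / K))
A = np.full((K, K), 0.05) + np.eye(K) * 0.85     # "sticky" transitions
log_A = np.log(A / A.sum(axis=1, keepdims=True))
rng = np.random.default_rng(1)
log_B = rng.normal(size=(20, K))                 # stand-in emission scores
print(viterbi(log_pi, log_A, log_B))
```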
98

Novel Bayesian Methods for Disease Mapping: An Application to Chronic Obstructive Pulmonary Disease

Liu, Jie 01 May 2002 (has links)
Mapping of mortality rates has been a valuable public health tool. We describe novel Bayesian methods for constructing maps which do not depend on a post-stratification of the estimated rates. We also construct posterior modal maps rather than posterior mean maps. Our methods are illustrated using mortality data from chronic obstructive pulmonary disease (COPD) in the continental United States. Poisson regression models have attracted much attention in the scientific community for their superiority in modeling rare events (including mortality counts from COPD). Christiansen and Morris (JASA 1997) described a hierarchical Bayesian model for heterogeneous Poisson counts under the exchangeability assumption. We extend this model to include latent classes (groups of similar Poisson rates unknown to an investigator). Also, it is standard practice to construct maps using quantiles (e.g., quintiles) of the estimated mortality rates: based on quintiles, the mortality rates are cut into five equal-size groups, each containing 20% of the data, and a different color is applied to each of them on the map. A potential problem is that this method assumes an equal number of data points in each group, but this is often not the case. The latent class model produces a method to construct maps without using quantiles, providing a more natural representation of the colors. Typically, for rare events, the posterior densities of the rates are skewed, making the posterior mean map inappropriate and inaccurate. Thus, although it is standard practice to present posterior mean maps, we also develop a method to provide the joint posterior modal map (i.e., the map with the highest posterior probability over the ensemble). For the COPD data, collected 1988-1992 over 798 health service areas, we use Markov chain Monte Carlo methods to fit the model, and an output analysis is used to construct the new maps.
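
For concreteness, here is the standard quintile map construction that the latent-class approach is contrasted with: the estimated rates are cut into five equal-count groups, one colour class per group. The rates below are simulated stand-ins, not the COPD estimates from the thesis.

```python
import numpy as np

rng = np.random.default_rng(42)
rates = rng.gamma(shape=2.0, scale=5.0, size=798)  # 798 health service areas

# Quintile cut points, then map each rate to a colour class 0..4.
edges = np.quantile(rates, [0.2, 0.4, 0.6, 0.8])
colour_class = np.searchsorted(edges, rates)       # ~20% of areas per class

print(np.bincount(colour_class))                   # roughly equal group sizes
```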
99

Bayesian Predictive Inference Under Informative Sampling and Transformation

Shen, Gang 29 April 2004 (has links)
We have considered the problem in which a biased sample is selected from a finite population, and this finite population is itself a random sample from an infinitely large population, called the superpopulation. The parameters of the superpopulation and the finite population are of interest. There is some information about the selection mechanism in that the selection probabilities are linearly related to the measurements. This is typical of establishment surveys, where the selection probabilities are taken to be proportional to the previous year's characteristics. When all the selection probabilities are known, as in our problem, inference about the finite population can be made, but inference about the distribution is not so clear. For continuous measurements, one might assume that the values are normally distributed, but as a practical issue normality can be tenuous. In such a situation a transformation to normality may be useful, but this transformation will destroy the linearity between the selection probabilities and the values. The purpose of this work is to address this issue. In this light we have constructed two models, an ignorable selection model and a nonignorable selection model. We use the Gibbs sampler and the sampling importance re-sampling algorithm to fit the nonignorable selection model. We have emphasized estimation of the finite population parameters, although within this framework other quantities can be estimated easily. We have found that our nonignorable selection model can correct the bias due to unequal selection probabilities, and it provides improved precision over the estimates from the ignorable selection model. In addition, we have described the case in which all the selection probabilities are unknown. This is useful because many agencies (e.g., government) tend to withhold these selection probabilities when public-use data are constructed. Also, we have given an extensive theoretical discussion of Poisson sampling, the underlying sampling scheme in our models, which is especially useful in the case in which the selection probabilities are unknown.
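
A small sketch of the sampling design described above, using simulated measurements: inclusion probabilities proportional to the previous year's values, a Poisson-sampling draw (independent Bernoulli selections), and a Horvitz-Thompson estimate of the finite-population total. All values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
N, n = 1000, 100                        # finite-population size, target sample size
y_prev = rng.lognormal(mean=3.0, sigma=0.8, size=N)  # previous year's values

# Inclusion probabilities proportional to size, scaled so they sum to n.
p = n * y_prev / y_prev.sum()
p = np.minimum(p, 1.0)                  # cap units that would be selected with certainty
sample = rng.random(N) < p              # Poisson sampling: independent Bernoulli draws

# Horvitz-Thompson estimator of the population total (y_prev doubles as the
# study variable here, purely for illustration).
ht_total = (y_prev[sample] / p[sample]).sum()
print(f"HT estimate: {ht_total:.0f}  vs  true total: {y_prev.sum():.0f}")
```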
100

Estimativa das tensões internas e externas atuantes no amostrador SPT durante sua cravação / Evaluation of internal and external average stresses on the SPT sampler

Zapata Galvis, Juliana 15 July 2015 (has links)
The Standard Penetration Test (SPT) is one of the in-situ geotechnical tests most used in Brazil, as well as in many parts of the world, to determine the NSPT index. Through empirical correlations, the NSPT index is used to estimate soil parameters, bearing capacity, foundation settlement, etc. Because these correlations have no scientific basis, researchers have developed rational methods of analysis based on energy concepts. Using these concepts, the efficiency of the SPT, which is essential in the analysis of the test results, can be assessed. The amounts of energy involved in the SPT are evaluated by the EFV method, which requires rods instrumented with accelerometers and load cells during the tests. From the force and acceleration records, the amounts of energy, the experimental dynamic reaction force of the soil, the theoretical static and dynamic reaction forces, and the stresses acting on the sampler were assessed. In this work, since the force and acceleration signals were recorded at an instrumented section just above the sampler, the system efficiency could be determined according to the definition proposed by Aoki and Cintra (2000), including the energy corrections suggested by Odebrecht (2003). In this study, a sample extractor system consisting of a base, a hydraulic cylinder, and a load cell was designed to experimentally quantify the internal friction force, allowing the stresses acting on the sampler to be evaluated. Also, Aoki's a parameter, which is the ratio between the internal friction and the external friction between the soil and the sampler, could be calculated.
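
The EFV method referred to above computes the energy delivered through the instrumented section as E = ∫ F(t)·v(t) dt, with the velocity obtained by integrating the accelerometer signal. A minimal sketch with synthetic signals — the pulse shapes and sampling interval are assumptions, not thesis data:

```python
import numpy as np

dt = 2e-5                                   # sample interval [s] (assumed)
t = np.arange(0, 0.02, dt)
force = 50e3 * np.exp(-t / 4e-3)            # synthetic decaying force pulse [N]
accel = 500.0 * np.exp(-t / 4e-3)           # synthetic acceleration record [m/s^2]

velocity = np.cumsum(accel) * dt            # v(t) = integral of a(t) (rectangle rule)
energy = np.trapz(force * velocity, dx=dt)  # E = integral of F(t) * v(t) dt  [J]
print(f"energy through the instrumented section: {energy:.1f} J")
```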
