About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
261

Rationing & Bayesian expectations with application to the labour market

Förster, Hannah January 2006 (has links)
The first goal of the present work addresses the need for different rationing methods of the Global Change and Financial Transition (GFT) working group at the Potsdam Institute for Climate Impact Research (PIK): I provide a toolbox which contains a variety of rationing methods to be applied to micro-economic disequilibrium models of the lagom model family. This toolbox consists of well-known rationing methods and of rationing methods developed specifically for lagom. To ensure easy application, the toolbox is constructed in modular fashion. The second goal of the present work is to present a micro-economic labour market where heterogeneous labour suppliers experience consecutive job opportunities and need to decide whether to apply for employment. The labour suppliers are heterogeneous with respect to their qualifications and their beliefs about the application behaviour of their competitors. They learn simultaneously, in Bayesian fashion, about their individual perceived probability of obtaining employment conditional on application (PPE) by observing each other's application behaviour over a cycle of job opportunities. / In this thesis I am concerned with two things. First, I develop a modelling toolbox that contains various rationing methods; these are either known from the literature or were developed specifically for the lagom model family. Second, I show that rationing methods from this toolbox can be used to model a fictitious labour market, on which job-seeking agents act who are heterogeneous with respect to their qualifications and their beliefs about the application behaviour of their competitors. These agents experience consecutive job offers and observe the application behaviour of their competitors in order to learn, in Bayesian fashion, about their individual probability of obtaining a position.
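The Bayesian learning step described in the abstract can be illustrated with a minimal conjugate-Beta sketch. The thesis's actual learning rule over competitors' application behaviour is richer; the function names and the uniform Beta prior here are illustrative assumptions:

```python
from fractions import Fraction

def update_beta(alpha, beta, successes, failures):
    """Conjugate Bayesian update of a Beta(alpha, beta) belief after
    observing application outcomes over a cycle of job opportunities."""
    return alpha + successes, beta + failures

def expected_ppe(alpha, beta):
    """Posterior mean of the perceived probability of employment (PPE)."""
    return Fraction(alpha, alpha + beta)

# An agent starts with a uniform Beta(1, 1) prior over its PPE,
# then observes 3 successful and 7 unsuccessful applications.
a, b = update_beta(1, 1, successes=3, failures=7)
print(expected_ppe(a, b))  # → 1/3
```

With each further cycle of job opportunities the posterior concentrates, which is the sense in which the agents "learn simultaneously in Bayesian fashion".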
262

Examining the Effects of Site-Selection Criteria for Evaluating the Effectiveness of Traffic Safety Improvement Countermeasures

Kuo, Pei-Fen May 2012 (has links)
The before-after study is still the most popular method used by traffic engineers and transportation safety analysts for evaluating the effects of an intervention. However, this kind of study may be plagued by important methodological limitations that can significantly alter the study outcome, notably regression-to-the-mean (RTM) and site-selection effects. So far, most of the research on these biases has focused on the RTM. Hence, the primary objective of this study is to present a method that can reduce the site-selection bias when an entry criterion is used in before-after studies for continuous data (e.g., speeds, reaction times) and count data (e.g., numbers of crashes or fatalities). The proposed method documented in this research provides a way to adjust the Naive estimator by using the sample data alone, without relying on data collected from a control group, since finding enough appropriate sites for a control group is much harder in traffic-safety analyses. In this study, the proposed method, referred to as the Adjusted method, was compared to methods commonly used in before-after studies. The results showed that, among all methods evaluated, the Naive method is the most significantly affected by selection bias. Using the control-group (CG) method, the analysis-of-covariance (ANCOVA) method, or the empirical Bayes method based on a control group (EBCG) can eliminate the site-selection bias, as long as the characteristics of the control group are exactly the same as those of the treatment group. However, control-group data with the same characteristics based on a truncated distribution or sample may not be available in practice. Moreover, the site-selection bias generated by using a dissimilar control group might be even higher than with the Naive method. The Adjusted method can partially eliminate site-selection bias even when biased estimators of the mean, variance, and correlation coefficient of a truncated normal distribution are used or are not known with certainty.
In addition, three actual datasets were used to evaluate the accuracy of the Adjusted method for estimating site-selection biases for various types of data that have different mean and sample-size values.
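The site-selection effect described above can be demonstrated with a small simulation (hypothetical numbers; this is not the thesis's Adjusted method): sites are selected by an entry criterion on their before-period counts, and the Naive before-after estimator then reports a spurious "reduction" even though no treatment effect exists:

```python
import random

random.seed(42)

def naive_before_after(sites, threshold):
    """Naive before-after estimate on sites passing an entry criterion
    (before-count >= threshold), with NO real treatment effect.
    Any apparent 'reduction' is regression-to-the-mean plus
    site-selection bias."""
    selected = [(b, a) for b, a in sites if b >= threshold]
    before = sum(b for b, _ in selected)
    after = sum(a for _, a in selected)
    return (before - after) / before  # apparent relative reduction

def draw(mean):
    """Binomial(20, mean/20) draw, a rough stand-in for a Poisson count."""
    return sum(random.random() < mean / 20 for _ in range(20))

# Before and after counts are independent draws around the SAME site mean.
sites = [(draw(m), draw(m)) for m in [random.uniform(2, 10) for _ in range(500)]]
print(naive_before_after(sites, threshold=8))
```

The printed "reduction" is positive despite the absence of any treatment, which is exactly the bias the Adjusted method is designed to remove.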
263

Contributions to Bayesian wavelet shrinkage

Remenyi, Norbert 07 November 2012 (has links)
This thesis provides contributions to research in Bayesian modeling and shrinkage in the wavelet domain. Wavelets are a powerful tool for describing phenomena that change rapidly in time, and wavelet-based modeling has become a standard technique in many areas of statistics and, more broadly, in science and engineering. Bayesian modeling and estimation in the wavelet domain have found useful applications in nonparametric regression, image denoising, and many other areas. In this thesis, we build on existing techniques and propose new methods for applications in nonparametric regression, image denoising, and partially linear models. The thesis consists of an overview chapter and four main topics. In Chapter 1, we provide an overview of recent developments and the current status of Bayesian wavelet shrinkage research. The chapter contains an extensive literature review of almost 100 references. The main focus of the overview chapter is on nonparametric regression, where the observations come from an unknown function contaminated with Gaussian noise. We present many methods which employ model-based and adaptive shrinkage of the wavelet coefficients through Bayes rules. These include new developments such as dependence models, complex wavelets, and Markov chain Monte Carlo (MCMC) strategies. Some applications of Bayesian wavelet shrinkage, such as curve classification, are discussed. In Chapter 2, we propose the Gibbs Sampling Wavelet Smoother (GSWS), an adaptive wavelet denoising methodology. We use the traditional mixture prior on the wavelet coefficients, but also formulate a fully Bayesian hierarchical model in the wavelet domain, accounting for the uncertainty of the prior parameters by placing hyperpriors on them. Since a closed-form solution for the Bayes estimator does not exist, the procedure is computational: the posterior mean is computed via MCMC simulation.
We show how to efficiently develop a Gibbs sampling algorithm for the proposed model. The developed procedure is fully Bayesian, is adaptive to the underlying signal, and provides good denoising performance compared to state-of-the-art methods. Application of the method is illustrated on a real data set arising from the analysis of metabolic pathways, where an iterative shrinkage procedure is developed to preserve the mass balance of the metabolites in the system. We also show how the methodology can be extended to complex wavelet bases. In Chapter 3, we propose a wavelet-based denoising methodology based on a Bayesian hierarchical model using a double Weibull prior. The interesting feature is that, in contrast to the mixture priors traditionally used by some state-of-the-art methods, the wavelet coefficients are modeled by a single density. Two estimators are developed, one based on the posterior mean and the other based on the larger posterior mode, and we show how to calculate these estimators efficiently. The methodology provides good denoising performance, comparable even to state-of-the-art methods that use a mixture prior and an empirical Bayes setting of hyperparameters; this is demonstrated by simulations on standard test functions. An application to a real-world data set is also considered. In Chapter 4, we propose a wavelet shrinkage method based on a neighborhood of wavelet coefficients, which includes two neighboring coefficients and a parental coefficient. The methodology is called Lambda-neighborhood wavelet shrinkage, motivated by the shape of the considered neighborhood. We propose a Bayesian hierarchical model using a contaminated exponential prior on the total mean energy in the Lambda-neighborhood. The hyperparameters in the model are estimated by the empirical Bayes method, and the posterior mean, median, and Bayes factor are obtained and used in the estimation of the total mean energy.
Shrinkage of the neighboring coefficients is based on the ratio of the estimated and observed energy. The proposed methodology is comparable and often superior to several established wavelet denoising methods that utilize neighboring information, which is demonstrated by extensive simulations. An application to a real-world data set from inductance plethysmography is considered, and an extension to image denoising is discussed. In Chapter 5, we propose a wavelet-based methodology for estimation and variable selection in partially linear models. The inference is conducted in the wavelet domain, which provides a sparse and localized decomposition appropriate for nonparametric components with various degrees of smoothness. A hierarchical Bayes model is formulated on the parameters of this representation, where the estimation and variable selection are performed by a Gibbs sampling procedure. For both the parametric and nonparametric parts of the model, we use point-mass-at-zero contamination priors with a double exponential spread distribution. In this sense, we extend the model of Chapter 2 to partially linear models. Only a few papers in the area of partially linear wavelet models exist, and we show that the proposed methodology is often superior to the existing methods with respect to the task of estimating model parameters. Moreover, the method is able to perform Bayesian variable selection by a stochastic search for the parametric part of the model.
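As a much simpler illustration of the wavelet-shrinkage idea underlying these chapters (the thesis's methods use hierarchical priors and MCMC, not the fixed threshold shown here), a one-level Haar transform with soft thresholding of the detail coefficients might look like:

```python
import math

def haar_transform(x):
    """One level of the Haar wavelet transform of an even-length signal:
    returns (approximation, detail) coefficient lists."""
    s = math.sqrt(2)
    approx = [(a + b) / s for a, b in zip(x[::2], x[1::2])]
    detail = [(a - b) / s for a, b in zip(x[::2], x[1::2])]
    return approx, detail

def haar_inverse(approx, detail):
    """Inverse of haar_transform."""
    s = math.sqrt(2)
    out = []
    for a, d in zip(approx, detail):
        out += [(a + d) / s, (a - d) / s]
    return out

def soft_threshold(coeffs, t):
    """Shrink coefficients toward zero: small (noise-dominated) ones
    become exactly zero, large ones are reduced by t."""
    return [math.copysign(max(abs(c) - t, 0.0), c) for c in coeffs]

signal = [4.0, 4.1, 4.0, 3.9, 8.0, 8.1, 8.0, 7.9]  # noisy step function
approx, detail = haar_transform(signal)
denoised = haar_inverse(approx, soft_threshold(detail, t=0.2))
print([round(v, 2) for v in denoised])
# → [4.05, 4.05, 3.95, 3.95, 8.05, 8.05, 7.95, 7.95]
```

Bayesian shrinkage rules such as those in Chapters 2–4 replace the fixed threshold with a data-adaptive amount of shrinkage derived from the posterior of each coefficient.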
264

Game theoretic and machine learning techniques for balancing games

Long, Jeffrey Richard 29 August 2006
Game balance is the problem of determining the fairness of actions or sets of actions in competitive, multiplayer games. This problem primarily arises in the context of designing board and video games. Traditionally, balance has been achieved through large amounts of play-testing and trial-and-error on the part of the designers. In this thesis, it is our intent to lay down the beginnings of a framework for a formal and analytical solution to this problem, combining techniques from game theory and machine learning. We first develop a set of game-theoretic definitions for different forms of balance, and then introduce the concept of a strategic abstraction. We show how machine classification techniques can be used to identify high-level player strategy in games, using the two principal methods of sequence alignment and Naive Bayes classification. Bioinformatics sequence alignment, when combined with a 3-nearest-neighbor classification approach, can, with only 3 exemplars of each strategy, correctly identify the strategy used in 55% of cases using all data, and 77% of cases on data that experts indicated actually had a strategic class. Naive Bayes classification achieves similar results, with 65% accuracy on all data and 75% accuracy on data rated to have an actual class. We then show how these game theoretic and machine learning techniques can be combined to automatically build matrices that can be used to analyze game balance properties.
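A minimal sketch of the Naive Bayes strategy classification described above (the play-trace features and strategy labels here are invented for illustration; the thesis classifies real game traces):

```python
from collections import Counter, defaultdict

def train_naive_bayes(examples):
    """examples: list of (feature_tuple, label). Returns the counts
    needed for Laplace-smoothed class-conditional probabilities."""
    class_counts = Counter(label for _, label in examples)
    feat_counts = defaultdict(Counter)  # (position, label) -> value counts
    for feats, label in examples:
        for i, v in enumerate(feats):
            feat_counts[(i, label)][v] += 1
    return class_counts, feat_counts

def classify(model, feats):
    """Pick the label maximising P(label) * prod_i P(feature_i | label)."""
    class_counts, feat_counts = model
    total = sum(class_counts.values())
    best, best_p = None, -1.0
    for label, n in class_counts.items():
        p = n / total
        for i, v in enumerate(feats):
            # Laplace smoothing with 2 pseudo-counts per feature position
            p *= (feat_counts[(i, label)][v] + 1) / (n + 2)
        if p > best_p:
            best, best_p = label, p
    return best

# Toy play traces: (opening_move, mid_game_focus) -> strategy label
data = [(("expand", "economy"), "boom"), (("expand", "economy"), "boom"),
        (("attack", "army"), "rush"), (("attack", "army"), "rush"),
        (("attack", "economy"), "rush")]
model = train_naive_bayes(data)
print(classify(model, ("attack", "army")))  # → rush
```

The strategic-abstraction step in the thesis maps raw game actions to feature tuples like the ones above before classification.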
265

Segmentation of human ovarian follicles from ultrasound images acquired in vivo using geometric active contour models and a naïve Bayes classifier

Harrington, Na 14 September 2007
Ovarian follicles are spherical structures inside the ovaries which contain developing eggs. Monitoring the development of follicles is necessary in both gynecological medicine (diagnosis of ovarian diseases and infertility treatment) and veterinary medicine (determining when to introduce superstimulation in cattle, or dividing herds into different stages of the estrous cycle).

Ultrasound imaging provides a non-invasive method for monitoring follicles. However, manually detecting follicles in ovarian ultrasound images is time consuming and sensitive to the observer's experience. Existing (semi-)automatic follicle segmentation techniques show the power of automation, but are not widely used due to their limited success.

A new automated follicle segmentation method is introduced in this thesis. Human ovarian images acquired in vivo were smoothed using an adaptive neighbourhood median filter. Dark regions were initially segmented using geometric active contour models. Since only some of these segmented dark regions were true follicles, a naïve Bayes classifier was applied to determine whether each segmented dark region was a true follicle or not.

The Hausdorff distance between the contours of the automatically segmented regions and the gold standard was 2.43 ± 1.46 mm per follicle, and the average root mean square distance per follicle was 0.86 ± 0.49 mm. Both were larger than those reported for other follicle segmentation algorithms. The mean absolute distance between the contours of the automatically segmented regions and the gold standard was 0.75 ± 0.32 mm, which was below that reported for other follicle segmentation algorithms.

The overall follicle recognition rate was 33% to 35%, and the overall image misidentification rate was 23% to 33%. When only follicles with a diameter of at least 3 mm were considered, the follicle recognition rate increased to 60% to 63%, and the follicle misidentification rate increased slightly to 24% to 34%. The proposed follicle segmentation method thus proved accurate in detecting a large number of follicles with a diameter of at least 3 mm.
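The Hausdorff distance used above to compare segmented contours against the gold standard can be sketched as a brute-force computation over point sets (the coordinates here are illustrative; the thesis reports distances in millimetres):

```python
import math

def hausdorff(contour_a, contour_b):
    """Symmetric Hausdorff distance between two point sets (contours):
    the largest distance from a point on one contour to the closest
    point on the other."""
    def directed(a, b):
        return max(min(math.dist(p, q) for q in b) for p in a)
    return max(directed(contour_a, contour_b), directed(contour_b, contour_a))

# A segmented contour shifted 1 mm to the right of the gold standard.
gold = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
segmented = [(x + 1.0, y) for x, y in gold]
print(hausdorff(gold, segmented))  # → 1.0
```

Unlike the mean absolute distance, the Hausdorff distance reports the single worst disagreement between the two contours, which is why it is the larger of the error measures quoted above.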
266

A wearable real-time system for physical activity recognition and fall detection

Yang, Xiuxin 23 September 2010
This thesis designs and implements a wearable system to recognize physical activities and detect falls in real time. Recognizing people's physical activity has a broad range of applications, including helping people maintain their energy balance through health assessment and intervention tools, investigating the links between common diseases and levels of physical activity, and providing feedback to motivate individuals to exercise. In addition, fall detection has become an active research topic due to the growing population over 65 throughout the world, as well as the serious effects and problems caused by falls.

In this work, the Sun SPOT wireless sensor system is used as the hardware platform for recognizing physical activity and detecting falls. Sensors with tri-axis accelerometers collect acceleration data, from which useful features are extracted. The evaluation results indicate that, for this particular application, the Naive Bayes algorithm outperforms other popular algorithms in both accuracy and ease of implementation.

The wearable system works in two modes, indoor and outdoor, depending on the user's demand. The Naive Bayes classifier is successfully implemented on the Sun SPOT sensor. Evaluation of the sampling rate shows that 20 Hz is an optimal sampling frequency in this application. If only one sensor is available to recognize physical activity, the best location is the thigh. If two sensors are available, attaching them to the left and right thighs is the best option, achieving 90.52% overall accuracy in the experiment.

For fall detection, a master sensor is attached to the chest and a slave sensor to the thigh to collect acceleration data. The results show that all falls are successfully detected. Forward, backward, leftward, and rightward falls were distinguished from standing and walking using the fall detection algorithm. Normal physical activities were not misclassified as falls, and no false alarms occurred in fall detection while the user was wearing the system in daily life.
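While the thesis uses a Naive Bayes classifier over accelerometer features, the basic free-fall-then-impact signature of a fall can be sketched with a simple threshold rule (the thresholds and window length here are illustrative assumptions, not the thesis's values):

```python
import math

def magnitude(ax, ay, az):
    """Resultant acceleration of a tri-axis accelerometer sample, in g."""
    return math.sqrt(ax * ax + ay * ay + az * az)

def detect_fall(samples, free_fall_g=0.4, impact_g=2.5, window=20):
    """Threshold sketch: a fall shows a brief free-fall phase (magnitude
    well below the resting 1 g) followed shortly by an impact spike."""
    free_fall_at = None
    for t, (ax, ay, az) in enumerate(samples):
        m = magnitude(ax, ay, az)
        if m < free_fall_g:
            free_fall_at = t
        elif free_fall_at is not None and m > impact_g and t - free_fall_at <= window:
            return True
    return False

standing = [(0.0, 0.0, 1.0)] * 40  # steady 1 g while standing still
fall = standing[:10] + [(0.0, 0.0, 0.1)] * 5 + [(0.5, 0.5, 3.0)] + standing[:10]
print(detect_fall(standing), detect_fall(fall))  # → False True
```

A classifier such as Naive Bayes generalizes this idea by learning, from labelled data, which combinations of features separate falls from ordinary movement.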
267

Spam filter for SMS-traffic

Fredborg, Johan January 2013 (has links)
Communication through text messaging, SMS (Short Message Service), is nowadays a huge industry with billions of active users. Because of this huge user base, it has attracted many companies trying to market themselves through unsolicited messages in this medium, in the same way as was previously done through email. The phenomenon is so common that SMS spam has become a plague in many countries. This report evaluates several established machine learning algorithms to see how well they can be applied to the problem of filtering unsolicited SMS messages. Each filter is evaluated mainly by analyzing its accuracy on stored message data. The report also discusses and compares hardware requirements against performance, measured as how many messages can be evaluated in a fixed amount of time. The evaluation shows that a decision tree filter is the best choice among the filters evaluated: it has the highest accuracy as well as a message-processing rate high enough to be applicable. This decision tree filter has been implemented, and its accuracy is shown to be as high as that of the implementation used during the evaluation. Although the decision tree filter is the best of the filters evaluated, its accuracy turned out not to be high enough to meet the specified requirements. The results are nevertheless promising for further work in this area using improved methods on the best-performing algorithms.
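A toy sketch of a decision tree SMS filter of the kind evaluated in the report (the features and splits here are hand-written for illustration; the report's filter learns its splits from labelled training data):

```python
def features(message):
    """Hypothetical SMS features; a real filter would derive and weight
    these from a labelled training corpus."""
    text = message.lower()
    return {
        "has_url": "http" in text or "www." in text,
        "has_prize_word": any(w in text for w in ("win", "free", "prize")),
        "mostly_upper": sum(c.isupper() for c in message) > len(message) / 2,
    }

def classify_sms(message):
    """A tiny hand-written decision tree over the features above,
    mimicking the shape (not the learned splits) of a trained tree."""
    f = features(message)
    if f["has_url"]:
        return "spam"
    if f["has_prize_word"]:
        return "spam" if f["mostly_upper"] else "suspect"
    return "ham"

print(classify_sms("WIN A FREE PRIZE NOW!!!"))     # → spam
print(classify_sms("see you at lunch tomorrow?"))  # → ham
```

Classification is a handful of comparisons per message, which is why a decision tree offers the high message-processing rate the report measures.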
268

Addressing the Uncertainty Due to Random Measurement Errors in Quantitative Analysis of Microorganism and Discrete Particle Enumeration Data

Schmidt, Philip J. 10 1900 (has links)
Parameters associated with the detection and quantification of microorganisms (or discrete particles) in water such as the analytical recovery of an enumeration method, the concentration of the microorganisms or particles in the water, the log-reduction achieved using a treatment process, and the sensitivity of a detection method cannot be measured exactly. There are unavoidable random errors in the enumeration process that make estimates of these parameters imprecise and possibly also inaccurate. For example, the number of microorganisms observed divided by the volume of water analyzed is commonly used as an estimate of concentration, but there are random errors in sample collection and sample processing that make these estimates imprecise. Moreover, this estimate is inaccurate if poor analytical recovery results in observation of a different number of microorganisms than what was actually present in the sample. In this thesis, a statistical framework (using probabilistic modelling and Bayes’ theorem) is developed to enable appropriate analysis of microorganism concentration estimates given information about analytical recovery and knowledge of how various random errors in the enumeration process affect count data. Similar models are developed to enable analysis of recovery data given information about the seed dose. 
This statistical framework is used to address several problems: (1) estimation of parameters that describe random sample-to-sample variability in the analytical recovery of an enumeration method, (2) estimation of concentration, and quantification of the uncertainty therein, from single or replicate data (which may include non-detect samples), (3) estimation of the log-reduction of a treatment process (and the uncertainty therein) from pre- and post-treatment concentration estimates, (4) quantification of random concentration variability over time, and (5) estimation of the sensitivity of enumeration processes given knowledge about analytical recovery. The developed models are also used to investigate alternative strategies that may enable collection of more precise data. The concepts presented in this thesis are used to enhance analysis of pathogen concentration data in Quantitative Microbial Risk Assessment so that computed risk estimates are more predictive. Drinking water research and prudent management of treatment systems depend upon collection of reliable data and appropriate interpretation of the data that are available.
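The core inference in this framework, estimating concentration from a count given imperfect analytical recovery, can be sketched with a grid-based posterior (a flat prior and a Poisson count model are assumed here for illustration; the thesis's models are more detailed):

```python
import math

def posterior_concentration(count, volume, recovery, grid):
    """Posterior over concentration c (organisms per litre), assuming
    observed count ~ Poisson(recovery * c * volume) and a flat prior
    over the supplied grid of candidate concentrations."""
    def likelihood(c):
        lam = recovery * c * volume
        return math.exp(-lam) * lam ** count / math.factorial(count)
    weights = [likelihood(c) for c in grid]
    total = sum(weights)
    return [w / total for w in weights]

# 12 organisms observed in 10 L of water with 40% analytical recovery.
grid = [c / 10 for c in range(1, 101)]  # candidate concentrations 0.1 .. 10
post = posterior_concentration(count=12, volume=10.0, recovery=0.4, grid=grid)
mean = sum(c * p for c, p in zip(grid, post))
print(round(mean, 2))
```

Note how the posterior mean (about 3.25 organisms/L) exceeds the naive estimate of 12/10 = 1.2 organisms/L, because the 40% recovery implies that more organisms were present than were observed.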
