11

EXAMINING MINIMUM BETS’ INFLUENCE ON THE ACTUAL BET WAGERED ON FIXED LINE SLOT MACHINES: A DESCRIPTIVE ANALYSIS

Taylor, Kevin 01 May 2016 (has links)
A descriptive analysis was conducted to examine the influence that minimum bets of 30 credits and 50 credits had on the actual bets wagered on slot machines operating on fixed lines. Results suggested that slots with the lower minimum bet were associated with higher wagers. A total of 107 participants (37 males and 70 females) were actively gambling at two casinos located just outside of Chicago, Illinois. On average, the participants who played the slot machines with a minimum bet of 30 credits actually bet more than the participants who played the slot machines with a minimum bet of 50 credits. More notably, results from a Chi-square test of significance suggested a significant association between the minimum bet required to play and the presence, or absence, of “minimizers” and “maximizers” (p < .05). Additional data analyses, including an independent t-test, were also conducted to examine gender’s role in wagering tendencies. The main purpose of this paper was to examine minimizing and maximizing gambling behavior across low-value and higher-value machines.
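For readers unfamiliar with the test mentioned above, a minimal sketch of a chi-square test of association between minimum-bet level and minimizer/maximizer behaviour follows; the counts are hypothetical placeholders, not the study's data.

    # Illustrative only: hypothetical contingency table, not the thesis data.
    from scipy.stats import chi2_contingency

    # Rows: minimum-bet level (30 vs 50 credits); columns: minimizers, maximizers
    table = [[30, 25],   # 30-credit machines (hypothetical counts)
             [35, 17]]   # 50-credit machines (hypothetical counts)

    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi2 = {chi2:.3f}, dof = {dof}, p = {p:.4f}")
    if p < 0.05:
        print("Association between minimum bet and betting behaviour is significant at alpha = 0.05")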
12

Local maximum entropy approximation-based modelling of the canine heart

Rama, Ritesh Rao January 2012 (has links)
The Local Maximum Entropy (LME) method is an approximation technique known to have good approximation characteristics. This is due to its non-negative shape functions and its weak Kronecker delta property, which allow the solutions to be continuous and smooth, in contrast to the Moving Least Squares (MLS) method used in the Element Free Galerkin (EFG) method. The method is based on a convex optimisation scheme in which a non-linear equation is solved with the help of a Newton algorithm, implemented in an in-house code called SESKA. In this study, the aim is to compare LME and MLS and highlight the differences. Preliminary benchmark tests of LME are found to be very conclusive. The method is able to approximate the deformation of a cantilever beam with higher accuracy than MLS. Moreover, its rapid convergence rate on the Cook's membrane problem demonstrates that it requires a relatively coarse mesh to reach the exact solution. With those encouraging results, LME is then applied to a larger non-linear cardiac mechanics problem: simulating a healthy and a myocardially infarcted canine left ventricle (LV) during one heart beat. The LV is idealised as a prolate spheroidal ellipsoid. It undergoes expansion during the diastolic phase, modelled by a non-linear passive stress model that incorporates the transversely isotropic properties of the material. The contraction, during the systolic phase, is simulated by Guccione's active stress model. The infarct region is considered non-contractile and twice as stiff as the healthy tissue. The material loss, especially during the necrotic phase, is incorporated through a homogenisation approach. Firstly, the loss of contractile ability in the infarct region counteracts the overall contraction behaviour through a bulging deformation, where high stresses are noted. Secondly, with regard to the behaviour of LME, it is found to feature a high convergence rate and a decrease in computation time with respect to MLS. However, it is also observed that LME is quite sensitive to the nodal spacing, in particular for an unstructured nodal distribution, where it produces results that are completely unreliable.
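As an illustration of the LME construction mentioned above, a minimal one-dimensional sketch follows: it evaluates LME shape functions at a point by solving the first-order reproducing condition with Newton's method. The node layout and locality parameter are arbitrary assumptions, and the code is not taken from SESKA.

    import numpy as np

    def lme_shape_functions(x, nodes, beta, tol=1e-12, max_iter=50):
        """1D local max-entropy shape functions at point x (illustrative sketch).

        Solves for the Lagrange multiplier lam by Newton's method so that the
        first-order reproducing condition sum_a N_a (x_a - x) = 0 holds.
        """
        dx = nodes - x
        lam = 0.0
        N = None
        for _ in range(max_iter):
            w = np.exp(-beta * dx**2 + lam * dx)
            N = w / w.sum()                  # non-negative, partition of unity
            r = np.dot(N, dx)                # residual of the constraint
            J = np.dot(N, dx**2) - r**2      # derivative dr/dlam (always > 0)
            if abs(r) < tol:
                break
            lam -= r / J
        return N

    nodes = np.linspace(0.0, 1.0, 6)
    h = nodes[1] - nodes[0]
    N = lme_shape_functions(0.37, nodes, beta=4.0 / h**2)   # beta*h^2 = 4 is an assumed locality
    print(N, N.sum(), np.dot(N, nodes))                     # sums to 1 and reproduces x = 0.37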
13

Estimating Wind Velocities in Atmospheric Mountain Waves Using Sailplane Flight Data

Zhang, Ni January 2012 (has links)
Atmospheric mountain waves form in the lee of mountainous terrain under appropriate conditions of the vertical structure of wind speed and atmospheric stability. Trapped lee waves can extend hundreds of kilometers downwind from the mountain range, and they can extend tens of kilometers vertically into the stratosphere. Mountain waves are important in meteorology: they affect the general circulation of the atmosphere, can influence the vertical structure of wind speed and temperature fields, produce turbulence and downdrafts that can be an aviation hazard, and affect the vertical transport of aerosols and trace gases, and ozone concentration. Sailplane pilots make extensive use of mountain lee waves as a source of energy with which to climb. Many sailplane wave flights are conducted every year throughout the world, and they frequently cover large distances and reach high altitudes. Modern sailplanes frequently carry flight recorders that record their position at regular intervals during the flight. There is therefore potential to use these recorded data to determine the 3D wind velocity at positions on the sailplane flight path. This would provide an additional source of information on mountain waves to supplement other measurement techniques. The recorded data are limited, however, and determination of wind velocities is not straightforward. This thesis is concerned with the development and application of techniques to determine the vector wind field in atmospheric mountain waves using the limited flight data collected during sailplane flights. A detailed study is made of the characteristics, uniqueness, and sensitivity to data errors of the problem of estimating the wind velocities from limited flight data consisting of ground velocities, possibly supplemented by airspeed or heading data. A heuristic algorithm is developed for estimating 3D wind velocities in mountain waves from ground velocity and airspeed data, and the algorithm is applied to flight data collected during “Perlan Project” flights. The problem is then posed as a statistical estimation problem, and maximum likelihood and maximum a posteriori estimators are developed for a variety of different kinds of flight data. These estimators are tested on simulated flight data and on data from Perlan Project flights.
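As a toy illustration of the kind of estimation problem described above (not the thesis' heuristic or statistical algorithms), the sketch below recovers a constant horizontal wind from ground-velocity vectors and airspeeds by nonlinear least squares; all sample values are hypothetical.

    import numpy as np
    from scipy.optimize import least_squares

    # Hypothetical samples while circling: ground-velocity vectors (east, north; m/s)
    # and the corresponding airspeeds, consistent with a wind of (2, 3) m/s.
    vg = np.array([[32.0, 3.0], [2.0, 33.0], [-28.0, 3.0], [2.0, -27.0]])
    airspeed = np.array([30.0, 30.0, 30.0, 30.0])

    def residuals(w):
        # For a constant horizontal wind w, |v_ground - w| should equal the airspeed.
        return np.linalg.norm(vg - w, axis=1) - airspeed

    sol = least_squares(residuals, x0=np.zeros(2))
    print("estimated wind (east, north):", sol.x)   # approximately (2, 3)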
14

Introduction to fast Super-Paramagnetic Clustering

Yelibi, Lionel 25 February 2020 (has links)
We map stock market interactions to spin models to recover their hierarchical structure using a simulated-annealing-based Super-Paramagnetic Clustering (SPC) algorithm. This is directly compared to a modified implementation of a maximum likelihood approach to fast Super-Paramagnetic Clustering (f-SPC). The methods are first applied to standard toy test-case problems, and then to a dataset of 447 stocks traded on the New York Stock Exchange (NYSE) over 1249 days. The signal-to-noise ratio of stock market correlation matrices is briefly considered. Our results approximately recover clusters representative of standard economic sectors, as well as mixed clusters whose dynamics shed light on the adaptive nature of financial markets and raise concerns about the effectiveness of industry-based static financial market classification in the world of real-time data analytics. A key result is that the standard maximum likelihood methods are confirmed to converge to solutions within a Super-Paramagnetic (SP) phase. We use insights arising from this to discuss the implications of using a Maximum Entropy Principle (MEP) as opposed to the Maximum Likelihood Principle (MLP) as an optimization device for this class of problems.
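The sketch below is not SPC or f-SPC; it is a standard hierarchical clustering on the correlation-based distance commonly used for stock returns, included only to illustrate how sector-like clusters can be recovered from a returns matrix. The returns are random placeholders.

    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    # Hypothetical log-returns matrix: rows = days, columns = stocks
    rng = np.random.default_rng(0)
    returns = rng.normal(size=(1249, 20))

    corr = np.corrcoef(returns.T)           # stock-by-stock correlation matrix
    dist = np.sqrt(2.0 * (1.0 - corr))      # Mantegna's correlation distance

    iu = np.triu_indices_from(dist, k=1)    # condensed distance vector for linkage
    Z = linkage(dist[iu], method="average")
    labels = fcluster(Z, t=5, criterion="maxclust")   # cut the dendrogram into 5 clusters
    print(labels)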
15

Towards a Bayesian framework for optical tomography

Kwee, Ivo Widjaja January 2000 (has links)
No description available.
16

Terminal Palaeocene events in the North Sea and Faeroe-Shetland Basin

King, Adrian January 2001 (has links)
No description available.
17

Circuit Design of Maximum a Posteriori Algorithm for Turbo Code Decoder

Kao, Chih-wei 30 July 2010 (has links)
none
18

Characterisation of population diversity from quantified measures of a nonlinear model. Application to hyperbaric diving

Bennani, Youssef 10 December 2015 (has links)
This thesis proposes a new method for nonparametric density estimation from censored data, where the censoring regions can have arbitrary shapes and are elements of partitions of the parametric domain. This study was motivated by the need to estimate the distribution of the parameters of a biophysical model of decompression, in order to be able to predict the risk of decompression sickness. In this context, the observations (dive grades) correspond to quantified counts of the bubbles circulating in the blood of a set of divers having explored a variety of diving profiles (depth, duration); the biophysical model predicts the gas volume produced along a given diving profile for a diver with known biophysical parameters. In a first step, we point out the limitations of the classical nonparametric maximum-likelihood estimator. We propose several methods for its calculation and show that it suffers from several problems: in particular, it concentrates the probability mass in a few regions only, which makes it inappropriate for describing a natural population. We then propose a new approach relying both on the maximum-entropy principle, in order to ensure a suitable regularity of the solution, and on the maximum-likelihood criterion, to guarantee a good fit to the data. It consists in searching for the maximum-entropy probability law whose maximum deviation from the empirical averages (observed grade frequencies) is set so as to maximize the data likelihood. Several examples illustrate the superiority of our solution compared to the classical nonparametric maximum-likelihood estimator, in particular concerning generalisation performance.
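As a rough illustration of the idea sketched above (maximum entropy subject to a bounded deviation from observed region frequencies), the snippet below solves a small discrete version with scipy; the grid, regions, frequencies, and tolerance are hypothetical, and the formulation is deliberately simplified relative to the thesis.

    import numpy as np
    from scipy.optimize import minimize

    grid = np.linspace(0.0, 1.0, 50)                      # discretised parameter domain
    observed = np.array([0.45, 0.35, 0.20])               # hypothetical frequencies of 3 censoring regions
    regions = [grid < 0.3, (grid >= 0.3) & (grid < 0.7), grid >= 0.7]
    eps = 0.05                                            # allowed deviation from observed frequencies

    def neg_entropy(p):
        p = np.clip(p, 1e-12, None)
        return float(np.sum(p * np.log(p)))               # minimise negative entropy

    constraints = [{"type": "eq", "fun": lambda p: p.sum() - 1.0}]
    for mask, f in zip(regions, observed):
        # |sum of p over region - observed frequency| <= eps, split into two inequalities
        constraints.append({"type": "ineq", "fun": lambda p, m=mask, f=f: eps - (p[m].sum() - f)})
        constraints.append({"type": "ineq", "fun": lambda p, m=mask, f=f: eps + (p[m].sum() - f)})

    p0 = np.full(grid.size, 1.0 / grid.size)
    res = minimize(neg_entropy, p0, bounds=[(0.0, 1.0)] * grid.size, constraints=constraints)
    print(res.x.sum(), [float(res.x[m].sum()) for m in regions])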
19

The effects of glycine-arginine-alpha-ketoisocaproic acid calcium on maximum strength and muscular endurance

Harris, Mareio Cortez 06 August 2011 (has links)
Glycine-arginine-alpha-ketoisocaproic acid calcium (GAKIC) is a product advertised to increase muscular endurance during exercise via metabolic intervention. Purpose: The purpose of this study was to determine the effect of GAKIC ingestion on maximum strength and muscular endurance. Methods: Using a double-blinded, crossover design, participants completed an upper- and lower-body resistance exercise protocol once using 11.2 g of GAKIC and once with a placebo. Results: An increase in maximum strength was observed in the 1RM portion of the lower-body protocol phase, with statistical trends in the lower-body TLV portion of testing. No significant differences were found in upper-body 1RM, upper-body TLV, HR, BLa, or glucose between conditions. Conclusion: We concluded that, in this protocol, GAKIC increased maximum strength in the 1RM leg press exercise. Further research in resistance exercise is encouraged.
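For context on the kind of crossover comparison reported above, a minimal paired comparison is sketched below; the 1RM values are hypothetical and not from the study.

    import numpy as np
    from scipy.stats import ttest_rel

    # Hypothetical paired 1RM leg-press results (kg) for the same participants
    # under the GAKIC and placebo conditions of a crossover design.
    gakic   = np.array([210, 185, 250, 198, 230, 175, 220, 205])
    placebo = np.array([202, 180, 246, 195, 224, 176, 214, 200])

    t, p = ttest_rel(gakic, placebo)
    print(f"t = {t:.3f}, p = {p:.4f}")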
20

Efficient Algorithms for the Maximum Convex Sum Problem

Thaher, Mohammed Shaban Atieh January 2014 (has links)
This research develops and investigates newly defined problems: the Maximum Convex Sum (MCS) and its generalisation, the K-Maximum Convex Sum (K-MCS), in a two-dimensional (2D) array, based on dynamic programming. The study centres on finding the most useful informative array portion as defined by different parameters involved in the data, which is expressed generically in this thesis as the Maximum Sum Problem (MSP). This concept originates in the Maximum Sub-Array (MSA) problem, which relies on rectangular regions to find the informative array portion; it follows that both MSA and MCS belong to the MSP. This research takes a new stand by using an alternative shape in the MSP context: the convex shape. Since 1977, there has been substantial research on the MSA problem and on finding informative sub-array portions in the best possible time complexity. Conventionally, the research norm has been to use the rectangular shape in the MSA framework without investigating alternative shapes for the MSP. Theoretically, there are shapes that can improve the MSP outcome and its utility in applications, but research has rarely discussed this. Advocating the use of a different shape in the MSP context requires rigorous investigation and the creation of a platform from which to launch a new exploratory research area, which can then be developed further by considering the implications and practicality of the new approach. This thesis strives to open up a new research frontier based on using the convex shape in the MSP context. This research defines the new MCS problem in 2D; develops and evaluates algorithms that solve the MCS problem in the best possible time complexity; incorporates techniques to advance the MCS algorithms; generalises the MCS problem to cover the K-Disjoint Maximum Convex Sums (K-DMCS) problem and the K-Overlapping Maximum Convex Sums (K-OMCS) problem; and eventually implements the MCS algorithmic framework using real data in an ecology application. Thus, this thesis provides a theoretical and practical framework that contributes scientifically to addressing some of the research gaps in the MSP and the new research path: the MCS problem. The MCS and K-MCS algorithmic models depart from the rectangular shape used in MSA, yet retain a time complexity within the best known time complexities of the MSA algorithms. Future in-depth studies of the Maximum Convex Sum (MCS) problem can advance the algorithms developed in this thesis and their time complexity.
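For background only, the one-dimensional ancestor of the 2D MSA and MCS problems is the classic maximum-sum subarray problem, solvable by Kadane's dynamic-programming algorithm in O(n). The sketch below illustrates that 1D baseline; it is not the thesis' convex-shape algorithm.

    def max_subarray(a):
        """Classic 1D maximum-sum subarray (Kadane's algorithm), O(n).

        Returns (best_sum, start, end) with end inclusive.
        """
        best_sum, best_start, best_end = a[0], 0, 0
        cur_sum, cur_start = a[0], 0
        for i in range(1, len(a)):
            if cur_sum < 0:
                cur_sum, cur_start = a[i], i     # restart the running segment
            else:
                cur_sum += a[i]                  # extend the running segment
            if cur_sum > best_sum:
                best_sum, best_start, best_end = cur_sum, cur_start, i
        return best_sum, best_start, best_end

    print(max_subarray([1, -3, 4, -1, 2, 1, -5, 4]))   # (6, 2, 5): subarray [4, -1, 2, 1]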
