61
Forecasting seat sales in passenger airlines: introducing the round-trip model. Varedi, Mehrdad, 07 January 2010.
This thesis aims to improve sales forecasting in the context of passenger airlines. We study two issues that could potentially improve forecasting accuracy: using day-to-day price change rather than price itself, and linking flights that passengers are likely to treat as the two legs of a round trip; we refer to the latter as the Round-Trip Model (RTM). We find that price change is a significant variable throughout the last three weeks before departure, regardless of the number of days remaining, which opens the possibility of planning revenue-maximizing price-change patterns. We also find that the RTM can improve the precision of the forecasting models and provide an improved pricing strategy for planners.
To study the effect of price change on sales, analysis of variance is applied; finite regression mixture models are tested to identify linked traffic between flights running in opposite directions on a route; and an adaptive neuro-fuzzy inference system (ANFIS) is applied to develop comparative models contrasting price with price change, and one-way with round-trip formulations. The price-change model demonstrated more robust results with comparable estimation errors, and the concept model for the round trip with only one linked flight reduced estimation error by 5%. This empirical study is performed on a database of 22,900 flights obtained from a major North American passenger airline.
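A minimal sketch of the kind of analysis of variance described above: daily bookings are grouped by the direction of the day-to-day price change and tested week by week over the final weeks before departure. The column names (flight_id, days_to_departure, price, bookings) and the grouping into weeks are illustrative assumptions, not the thesis's actual variables or design.

```python
# Illustrative one-way ANOVA of bookings against the direction of the
# day-to-day price change, repeated for each of the last three weeks
# before departure. All column names here are hypothetical.
import numpy as np
import pandas as pd
from scipy.stats import f_oneway

def price_change_anova(df: pd.DataFrame) -> pd.DataFrame:
    """df has one row per flight per day: flight_id, days_to_departure, price, bookings."""
    df = df.sort_values(["flight_id", "days_to_departure"], ascending=[True, False]).copy()
    # Day-to-day price change within each flight (positive = price increased).
    df["price_change"] = df.groupby("flight_id")["price"].diff()
    df = df.dropna(subset=["price_change"])
    df["change_dir"] = np.sign(df["price_change"]).map({-1: "down", 0: "flat", 1: "up"})
    df["week_out"] = df["days_to_departure"] // 7

    rows = []
    for week, g in df[df["week_out"] < 3].groupby("week_out"):
        groups = [grp["bookings"].to_numpy() for _, grp in g.groupby("change_dir") if len(grp) > 1]
        if len(groups) >= 2:
            f_stat, p_val = f_oneway(*groups)
            rows.append({"weeks_to_departure": int(week), "F": f_stat, "p_value": p_val})
    return pd.DataFrame(rows)
```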
62
Towards Finding Optimal Mixture Of Subspaces For Data Classification. Musa, Mohamed Elhafiz Mustafa, 01 October 2003.
In pattern recognition, when data has different structures in different parts of the input space, fitting one global model can be slow and inaccurate. Learning methods can quickly learn the structure of the data in local regions, offering faster and more accurate model fitting. Breaking the training data set into smaller subsets, however, may lead to a curse-of-dimensionality problem, as a training subset may not be large enough to estimate the required parameters of the submodels, and increasing the size of the training data may not be possible in many situations. Interestingly, the data in local regions become more correlated, so decorrelation methods can reduce the data dimensions and hence the number of parameters. In other words, we can find uncorrelated low-dimensional subspaces that capture most of the data variability. Current subspace modelling methods have shown better performance than global modelling methods for this type of training data structure. Nevertheless, these methods still need more research, as they suffer from two limitations:
- There is no standard method to specify the optimal number of subspaces.
- There is no standard method to specify the optimal dimensionality for each subspace.
In the current models these two parameters are determined beforehand. In this dissertation we propose and test algorithms that try to find a suboptimal number of principal subspaces and a suboptimal dimensionality for each principal subspace automatically.
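A minimal "local subspaces" baseline, not the dissertation's algorithm: partition the data with k-means, fit a PCA in each region, and keep just enough components to explain a chosen fraction of the local variance. The number of regions and the variance threshold are assumptions here; the thesis is precisely about choosing these quantities automatically.

```python
# Sketch: mixture of local PCA subspaces with a fixed region count and
# a fixed retained-variance threshold (both assumed, not learned).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

def local_subspaces(X, n_regions=3, var_threshold=0.95, seed=0):
    labels = KMeans(n_clusters=n_regions, n_init=10, random_state=seed).fit_predict(X)
    subspaces = []
    for r in range(n_regions):
        Xr = X[labels == r]
        pca = PCA().fit(Xr)
        # Smallest local dimensionality reaching the variance threshold.
        k = int(np.searchsorted(np.cumsum(pca.explained_variance_ratio_), var_threshold) + 1)
        subspaces.append({"region": r, "dim": k,
                          "mean": pca.mean_, "basis": pca.components_[:k]})
    return labels, subspaces

# Example: three noisy 2-D patches embedded in a 10-D space.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(size=(200, 2)) @ rng.normal(size=(2, 10)) + rng.normal(loc=5 * i, size=10)
               for i in range(3)])
labels, subspaces = local_subspaces(X)
print([(s["region"], s["dim"]) for s in subspaces])
```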
63
Driver Modeling Based on Driving Behavior and Its Evaluation in Driver Identification. Miyajima, Chiyomi; Nishiwaki, Yoshihiro; Ozawa, Koji; Wakita, Toshihiro; Itou, Katsunobu; Takeda, Kazuya; Itakura, Fumitada, January 2007.
No description available.
64
Wavelet Transform For Texture Analysis With Application To Document Analysis. Busch, Andrew W., January 2004.
Texture analysis is an important problem in machine vision, with applications in many fields including medical imaging, remote sensing (SAR), automated flaw detection in various products, and document analysis, to name but a few. Over the last four decades many techniques for the analysis of textured images have been proposed in the literature for the purposes of classification, segmentation, synthesis and compression. Such approaches include analysis of the properties of individual texture elements, statistical features obtained from the grey-level values of the image itself, random field models, and multichannel filtering. The wavelet transform, a unified framework for the multiresolution decomposition of signals, falls into this final category, and allows a texture to be examined at a number of resolutions whilst maintaining spatial resolution. This thesis explores the use of the wavelet transform for the specific task of texture classification and proposes a number of improvements to existing techniques, both in the area of feature extraction and classifier design. By applying a nonlinear transform to the wavelet coefficients, a better characterisation can be obtained for many natural textures, leading to increased classification performance when using first- and second-order statistics of these coefficients as features. In the area of classifier design, a combination of an optimal discriminant function and a non-parametric Gaussian mixture model classifier is shown experimentally to outperform other classifier configurations. By modelling the relationships between neighbouring bands of the wavelet transform, more information regarding a texture can be obtained. Using such a representation, an efficient algorithm for the searching and retrieval of textured images from a database is proposed, as well as a novel set of features for texture classification. These features are experimentally shown to outperform features proposed in the literature, as well as provide increased robustness to small changes in scale. Determining the script and language of a printed document is an important task in the field of document processing. In the final part of this thesis, the use of texture analysis techniques to accomplish these tasks is investigated. Using maximum a posteriori (MAP) adaptation, prior information regarding the nature of script images can be used to increase the accuracy of these methods. Novel techniques for estimating the skew of such documents, normalising text blocks prior to extraction of texture features, and accurately classifying multiple fonts are also presented.
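A sketch of the general recipe described above, not the thesis's exact features or classifier: decompose an image with a 2-D wavelet transform, take simple first- and second-order statistics of the subband coefficients as features, and classify with one Gaussian mixture model per texture class. The wavelet choice, decomposition level and mixture size are assumptions, and the nonlinear coefficient transform, discriminant function and neighbouring-band modelling proposed in the thesis are not reproduced here.

```python
# Wavelet subband statistics + per-class GMM texture classifier (illustrative only).
import numpy as np
import pywt
from sklearn.mixture import GaussianMixture

def wavelet_texture_features(image, wavelet="db2", level=3):
    coeffs = pywt.wavedec2(image, wavelet=wavelet, level=level)
    feats = []
    for band in coeffs[1:]:                   # detail subbands (H, V, D) at each level
        for sub in band:
            mag = np.abs(sub)
            feats += [mag.mean(), mag.std()]  # first- and second-order statistics
    return np.array(feats)

class GMMTextureClassifier:
    def __init__(self, n_components=4):
        self.n_components = n_components
        self.models = {}

    def fit(self, images, labels):
        X = np.array([wavelet_texture_features(im) for im in images])
        for c in set(labels):
            Xc = X[np.array(labels) == c]
            self.models[c] = GaussianMixture(self.n_components,
                                             covariance_type="diag").fit(Xc)
        return self

    def predict(self, images):
        X = np.array([wavelet_texture_features(im) for im in images])
        classes = list(self.models)
        scores = np.column_stack([self.models[c].score_samples(X) for c in classes])
        return [classes[i] for i in scores.argmax(axis=1)]
```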
65
Statistical language modelling for large vocabulary speech recognition. McGreevy, Michael, January 2006.
The move towards larger vocabulary Automatic Speech Recognition (ASR) systems places greater demands on language models. In a large vocabulary system, acoustic confusion is greater, so more reliance is placed on the language model for disambiguation. In addition, ASR systems are increasingly being deployed in situations where the speaker is not conscious of their interaction with the system, such as in recorded meetings and surveillance scenarios. This results in more natural speech, which contains many false starts and disfluencies. In this thesis we investigate a novel approach to the modelling of speech corrections. We propose a syntactic model of speech corrections and seek to determine whether this model can improve on the performance of standard language modelling approaches when applied to conversational speech. We investigate a number of related variations on our basic approach and compare them against the class-based N-gram. We also investigate the modelling of styles of speech. Specifically, we investigate whether the incorporation of prior knowledge about sentence types can improve the performance of language models. We propose a sentence mixture model based on word-class N-grams, in which the sentence mixture models and the word-class membership probabilities are jointly trained, and compare this approach with word-based sentence mixture models.
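A toy illustration of the class-based N-gram baseline mentioned above, in which P(w_i | w_{i-1}) is factored as P(w_i | c_i) · P(c_i | c_{i-1}) for word classes c. The tiny corpus and hand-made class map are purely illustrative assumptions, no smoothing is applied, and this is not the thesis's jointly trained sentence mixture model.

```python
# Class-based bigram from raw counts: P(w | prev) = P(class(w) | class(prev)) * P(w | class(w)).
from collections import Counter

def class_bigram_model(sentences, word2class):
    emit, trans, class_count = Counter(), Counter(), Counter()
    for sent in sentences:
        classes = [word2class[w] for w in sent]
        for w, c in zip(sent, classes):
            emit[(c, w)] += 1
            class_count[c] += 1
        for c1, c2 in zip(classes, classes[1:]):
            trans[(c1, c2)] += 1

    def prob(prev_word, word):
        c_prev, c = word2class[prev_word], word2class[word]
        p_trans = trans[(c_prev, c)] / sum(v for (a, _), v in trans.items() if a == c_prev)
        p_emit = emit[(c, word)] / class_count[c]
        return p_trans * p_emit

    return prob

word2class = {"the": "DET", "a": "DET", "dog": "N", "cat": "N", "runs": "V", "sleeps": "V"}
sents = [["the", "dog", "runs"], ["a", "cat", "sleeps"], ["the", "cat", "runs"]]
p = class_bigram_model(sents, word2class)
print(p("the", "dog"))   # P(N | DET) * P(dog | N)
```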
66
IMPROVED COMPUTATIONAL AND EMPIRICAL MODELS OF HYDROCYCLONES. Narasimha Mangadoddy, Unknown Date.
The principal objectives of the work described in this thesis were: 1. To develop an improved multiphase CFD model for classifying cyclones and further improve understanding of the separation mechanism based on fluid flow and turbulence inside the cyclone. 2. To develop an improved empirical model of classifying cyclones, covering a wide range of design and operating conditions. The multi-phase CFD model developed in this work is based on the approach reported by Brennan et al. (2002) and Brennan (2003) using Fluent, and involves individual models for the air-core, turbulence, and particle classification. Two-phase VOF and mixture models for an air/water system were used to predict the air-core and the pressure and flow fields on 3D fitted fine grids. The turbulence was resolved using both DRSM (QPS) and LES turbulence models. The predicted mean and turbulent flow fields from the LES and DRSM turbulence models were compared with the LDA measurements of Hsieh (1988); the LES model predicts the experimental data more accurately than the DRSM model. The standard mixture model (Manninen et al., 1996) and the modified mixture model for a water/air/solids system were used to predict cyclone performance. The standard mixture model predicted classification efficiency reasonably well at low solids concentrations, but under-predicted the recovery of coarse size fractions to underflow. To improve the predictions at moderate to high feed solids, the author modified the slip velocity with additional Bagnold dispersive forces, Saffman lift forces, and a hindered settling correction for particle drag in the mixture model, superimposed on an LES turbulence model. Several cyclone geometries were used for validating the multiphase CFD model. The modified mixture model improves prediction of the separation of coarse size particles, and the predictions closely match the experimental results in various cyclones. The particle classification mechanism has been further elucidated using the simulated particle concentration distributions. At high solids concentrations, the modified CFD model predicts the efficiency curve reasonably well, especially the cut-size of the cyclone, but prediction of fine particle recovery to overflow is poor compared to the experimental data. It appears that the fines are significantly affected by turbulent dispersion, and that the flow resistance due to the high viscosity of the slurry at the apex is not sufficiently accounted for in the modified mixture model. The improved multi-phase CFD model was validated against two sets of experimental data available in the literature: particle concentrations measured by gamma ray tomography in a dense medium cyclone (Subramanian, 2002), and particle size distributions inside a hydrocyclone (Renner, 1976). Large eddy simulation (LES) with the modified mixture model, including a medium with a feed size distribution, appears to be promising in predicting medium segregation inside a dense medium cyclone. The CFD-predicted sample size distributions at different positions are reasonably comparable with Renner's (1976) experimental data near the wall and in the bottom cone, but differ considerably near the forced vortex region and near the tip of the vortex finder wall. The CFD model shows no air-core formation at the low operating pressure used by Renner, which suggests his experiments involved an unusual or unstable forced-vortex-based cyclone separation. The effect of turbulence on fluid and solid particle motion was also studied in this thesis.
The resolved turbulent fluctuations from LES of the hydrocyclone at steady flow were analysed using ensemble averaging. The ratio of the effective turbulent acceleration of each particle size to the centrifugal acceleration was calculated for various cyclones, which showed that turbulent mixing becomes less important for larger particles. The trends in this ratio correlate with the equilibrium positions of the particles from the multiphase LES. The analysis indicates that short-circuiting might be aggravated by turbulent mixing across the locus of zero vertical velocity (LZVV) against the classification force, and along the vortex finder wall into the inner upflow region of the cyclone. An experimental study of the "fish-hook" effect was pursued in various industrial-scale cyclones to evaluate the effect of various cyclone parameters. The observed diameter at which fine particle recovery starts to increase is mainly affected by feed solids content and spigot diameter, but less influenced by feed pressure. The observed particle recovery to the underflow at the fish-hook dip size, the bypass, is always higher than the underflow water split, and any cyclone variable that affects the underflow water split will also affect the bypass value. CFD studies showing high particle Reynolds numbers for coarse particles were used to provide a qualitative mechanism for fines reporting to the underflow in the wakes behind the larger particles (Tang et al., 1992). The Frachon and Cilliers (1999) model was used to fit and evaluate the fish-hook parameters, and the variations of these parameters were quantified for changes in cyclone design and operating conditions. The development of an improved empirical hydrocyclone model was attempted by collecting extensive historical data covering a wide range of cyclones. Additional experiments on 10 and 20 inch Krebs cyclones were performed to fill the gaps in the database, especially at low to moderate feed solids concentration and with different cone sections. Tangential velocity, turbulent diffusion, slurry viscosity and particle hindered settling correlations were identified from CFD as the key inputs to the particle classification mechanism for the empirical model. A new cyclone model structure based on a dimensionless approach has been developed. The fitted models, including that for flow rate Q, give a very good fit to the data, while the model for separation sharpness gives reasonable correlations with cyclone design and operating conditions. 208 additional data sets were used to validate the new hydrocyclone model.
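One common form of hindered-settling correction (Richardson and Zaki, 1954), shown only to illustrate the kind of slip-velocity modification described above; the exact closure, coefficients and the Bagnold dispersive and Saffman lift terms used in the thesis's modified mixture model are not reproduced here, and the example numbers are assumptions.

```python
# Stokes terminal velocity with a Richardson-Zaki hindered-settling correction.
def stokes_terminal_velocity(d_p, rho_p, rho_f, mu, g=9.81):
    """Terminal settling velocity (m/s) of a single sphere in the Stokes regime."""
    return (rho_p - rho_f) * g * d_p ** 2 / (18.0 * mu)

def hindered_settling_velocity(d_p, rho_p, rho_f, mu, solids_fraction, n=4.65):
    """u = u_t * (1 - phi)^n, with n ~ 4.65 at low particle Reynolds number."""
    u_t = stokes_terminal_velocity(d_p, rho_p, rho_f, mu)
    return u_t * (1.0 - solids_fraction) ** n

# Example: 100 micron silica (2650 kg/m^3) in water at 30% solids by volume.
print(hindered_settling_velocity(100e-6, 2650.0, 1000.0, 1.0e-3, 0.30))
```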
67
Bayesian Networks and Gaussian Mixture Models in Multi-Dimensional Data Analysis with Application to Religion-Conflict Data. January 2012.
abstract: This thesis examines the application of statistical signal processing approaches to data arising from surveys intended to measure psychological and sociological phenomena underpinning human social dynamics. The use of signal processing methods for the analysis of signals arising from measurement of social, biological, and other non-traditional phenomena has been an important and growing area of signal processing research over the past decade. Here, we explore the application of statistical modeling and signal processing concepts to data obtained from the Global Group Relations Project, specifically to understand and quantify the effects and interactions of social psychological factors related to intergroup conflicts. We use Bayesian networks to specify prospective models of conditional dependence between social psychological factors and conflict variables; the networks are represented by directed acyclic graphs, and the significant interactions are modeled as conditional probabilities. Since the data are sparse and multi-dimensional, we fit Gaussian mixture models (GMMs) to the data to estimate the conditional probabilities of interest. The parameters of the GMMs are estimated using the expectation-maximization (EM) algorithm. However, the EM algorithm may suffer from over-fitting due to the high dimensionality and the limited number of observations in this data set; therefore, the Akaike information criterion (AIC) and the Bayesian information criterion (BIC) are used for GMM order estimation. To assist intuitive understanding of the interactions between social variables and intergroup conflicts, we introduce a color-based visualization scheme in which the intensities of colors are proportional to the conditional probabilities observed. / Dissertation/Thesis / M.S. Electrical Engineering 2012
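A minimal sketch of the order-selection step described above: GMMs of increasing order are fit with EM (here via scikit-learn's GaussianMixture) and the order that minimises BIC is kept; AIC works the same way. The candidate range and the synthetic data are assumptions for illustration only.

```python
# GMM order selection by information criterion (BIC by default, AIC optional).
import numpy as np
from sklearn.mixture import GaussianMixture

def select_gmm_order(X, max_components=8, criterion="bic", seed=0):
    best_model, best_score = None, np.inf
    for k in range(1, max_components + 1):
        gmm = GaussianMixture(n_components=k, covariance_type="full",
                              random_state=seed).fit(X)
        score = gmm.bic(X) if criterion == "bic" else gmm.aic(X)
        if score < best_score:
            best_model, best_score = gmm, score
    return best_model, best_score

# Synthetic example with three well-separated 2-D clusters.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=[0, 0], size=(150, 2)),
               rng.normal(loc=[4, 4], size=(150, 2)),
               rng.normal(loc=[0, 5], size=(100, 2))])
model, bic = select_gmm_order(X)
print(model.n_components, round(bic, 1))
```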
68
Effects of nickel and manganese on the embrittlement of low-copper pressure vessel steels. Zelenty, Jennifer Evelyn, January 2016.
Solute clustering is known to play a significant role in the embrittlement of reactor pressure vessel (RPV) steels. When precipitates form they impede the movement of dislocations, causing an increase in hardness and a shift in the ductile-brittle transition temperature. Over time this can cause the steel to become brittle and more susceptible to fracture. Thus, understanding precipitate formation is of great importance to the nuclear industry. The first part of this thesis aims to isolate and better understand the thermal aging component of embrittlement in low copper, model RPV steels. Currently, relatively little is known about the effects of Ni and Mn in a low copper environment. Therefore, it is of interest to determine if Ni and Mn form precipitates under these conditions. To this end, hardness measurements and atom probe tomography were utilized to link the mechanical properties to the microstructure. After 11,690 hours of thermal aging a statistically significant decrease in hardening was observed. Consistent with hardness measurements, no precipitates were present within the matrix of the thermally aged RPV steels. The local chemistry method was then applied to investigate the very early stages of solute clustering. Association was found to be statistically significant in both the thermally aged and as-received model RPV steels. Therefore, no apparent trends regarding the changes in solute association between the as-received and thermally aged RPV steels were identified. Small, non-random clusters were observed at heterogeneous nucleation sites, such as carbide/matrix interfaces and grain boundaries, within the thermally aged material. The clusters found at the carbide/matrix interfaces were all rich in Mn and approximately 90-150 atoms in size. The clusters located along the observed low-angle grain boundary, however, were significantly larger (on the order of hundreds of atoms) and rich in Ni. Lastly, copper-rich precipitates (CRPs) and Mn- and Ni-rich precipitates (MNPs) were observed within the cementite phase of a high copper and low copper RPV steel, respectively, following long term thermal aging. APT was used to characterize these precipitates and obtain more detailed chemical information. The presence of such precipitates indicates that a range of precipitation can take place within the cementite phase of thermally aged RPV steels. The second part of this thesis aims to investigate the effects of ion irradiation on the microstructure of low copper RPV steels via APT. These steels were ion irradiated with 6.4 MeV Fe³⁺ ions with a dose rate of 1.5 × 10⁻⁴ dpa/s at 290°C. MNPs were observed in all five of the RPV steels analyzed. These precipitates were found to have nucleated within the matrix as well as at dislocations and grain boundaries. Using the maximum separation method these MNPs were extracted and characterized. Precipitate composition, size, volume fraction, and number density were determined for each of the five samples. Lastly, several grain boundaries were characterized. Several emerging trends were observed within the samples: Ni content within the precipitates did not vary significantly once a threshold between 30-50% was reached; bulk Mn content appeared to dictate Si and Mn content within the precipitates; and samples low in bulk Ni content were characterized by a higher number density of smaller precipitates.
Additionally, by regressing precipitate volume fraction against the interaction of Ni and Mn, a linear relationship was found to be statistically significant.
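A sketch of the cluster-search idea behind the maximum separation method used above: solute atoms lying within a maximum separation distance d_max are linked into clusters, and clusters with fewer than N_min atoms are discarded. DBSCAN with min_samples=1 is used here as a stand-in for the linking step; the d_max and N_min values and the input format (one row per solute atom, coordinates in nm) are assumptions, not the thesis's parameters.

```python
# Approximate maximum-separation cluster search on solute atom positions.
import numpy as np
from sklearn.cluster import DBSCAN

def find_solute_clusters(positions_nm, d_max=0.5, n_min=10):
    """positions_nm: (n_atoms, 3) array of solute (e.g. Ni/Mn/Si) atom coordinates."""
    labels = DBSCAN(eps=d_max, min_samples=1).fit_predict(positions_nm)
    clusters = []
    for lab in np.unique(labels):
        members = positions_nm[labels == lab]
        if len(members) >= n_min:
            centre = members.mean(axis=0)
            clusters.append({
                "n_atoms": len(members),
                "centre_nm": centre,
                # Radius of gyration: RMS distance of member atoms from the centroid.
                "radius_gyration_nm": np.sqrt(((members - centre) ** 2).sum(axis=1).mean()),
            })
    return clusters
```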
69
ADAPTIVE LEARNING OF NEURAL ACTIVITY DURING DEEP BRAIN STIMULATION. January 2015.
abstract: Parkinson's disease is a neurodegenerative condition diagnosed in patients based on clinical history and the motor signs of tremor, rigidity and bradykinesia; the estimated number of patients living with Parkinson's disease around the world is seven to ten million. Deep brain stimulation (DBS) provides substantial relief of the motor signs of Parkinson's disease. It is an advanced surgical technique used when drug therapy is no longer sufficient, and it alleviates the motor symptoms of Parkinson's disease by targeting the subthalamic nucleus with high-frequency electrical stimulation.
This work proposes a behavior recognition model for patients with Parkinson's disease. In particular, an adaptive learning method is proposed to classify behavioral tasks of Parkinson's disease patients using local field potential and electrocorticography signals collected during DBS implantation surgeries. Unique patterns exhibited between these signals in a matched feature space would lead to distinction between motor and language behavioral tasks. Unique features are first extracted from the deep brain signals in the time-frequency space using the matching pursuit decomposition algorithm. The Dirichlet process Gaussian mixture model then uses the extracted features to cluster the different behavioral signal patterns, without training or any prior information. The performance of the method is compared with other machine learning methods, and the advantages of each method are discussed under different conditions. / Dissertation/Thesis / Masters Thesis Electrical Engineering 2015
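A minimal sketch of the clustering step described above: a Dirichlet process Gaussian mixture (in its truncated form, as implemented by scikit-learn's BayesianGaussianMixture) groups extracted time-frequency features without fixing the number of behavioral clusters in advance. The feature matrix, truncation level and weight threshold are assumptions, and the matching pursuit feature extraction itself is not reproduced here.

```python
# Nonparametric clustering of behavioral feature vectors with a (truncated) DP-GMM.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

def cluster_behavior_features(features, max_components=10, seed=0):
    """features: (n_trials, n_features) matrix, e.g. matching-pursuit time-frequency features."""
    dpgmm = BayesianGaussianMixture(
        n_components=max_components,                       # truncation level, not the cluster count
        weight_concentration_prior_type="dirichlet_process",
        covariance_type="full",
        random_state=seed,
    ).fit(features)
    labels = dpgmm.predict(features)
    # Rough heuristic: components with non-negligible weight give the effective cluster count.
    effective = int(np.sum(dpgmm.weights_ > 0.01))
    return labels, effective
```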
70
Observing the unobservable?: Segmentation of tourism expenditure in Venice using unobservable heterogeneity to find latent classes. Lundberg, Magdalena, January 2018.
Consumer segmentation based on expenditure is usually done using observed characteristics, such as age and income. This thesis highlights the problem of the negative externalities from which Venice suffers due to mass tourism, and aims to assess whether unobservable heterogeneity can be used to detect latent classes within tourism expenditure. Segmenting the tourism market using this approach is valuable for policy making, and it is also useful for actors in the market who want to identify and attract high spenders. In that way, a destination may uphold a sustainable level of tourism instead of increasing tourist numbers. The method used for this approach is finite mixture modelling (FMM), which is not much used for consumer markets, and therefore this thesis also contributes to tourism expenditure methodology. It adds to the literature by increasing the knowledge about the importance of unobserved factors when segmenting visitors. The results show that four latent classes are found in tourism expenditure. Some of the variables that are significant in determining tourism expenditure are shown to affect expenditure differently in different classes, while some are shown not to be significant. The conclusions are that segmenting tourism expenditure using unobserved heterogeneity is significant, and that variables which are barely significant in determining the expenditure of the population can be strongly significant in determining the expenditure of a certain class.
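A compact EM sketch of the kind of finite mixture model used for the segmentation described above: expenditure is regressed on covariates with class-specific coefficients, so the same variable can have a different effect in each latent class. The number of classes, variable names and synthetic data are illustrative assumptions, not the thesis's specification or data.

```python
# EM for a finite mixture of Gaussian linear regressions (latent-class regression).
import numpy as np
from scipy.special import logsumexp
from scipy.stats import norm

def fit_mixture_of_regressions(X, y, n_classes=4, n_iter=300, seed=0):
    rng = np.random.default_rng(seed)
    n = len(y)
    Xb = np.column_stack([np.ones(n), X])              # add intercept
    resp = rng.dirichlet(np.ones(n_classes), size=n)   # random initial responsibilities
    for _ in range(n_iter):
        # M-step: class weights, weighted least squares, weighted residual variance.
        betas, sigmas, weights = [], [], resp.mean(axis=0)
        for k in range(n_classes):
            w = np.sqrt(resp[:, k])
            beta, *_ = np.linalg.lstsq(Xb * w[:, None], y * w, rcond=None)
            resid = y - Xb @ beta
            sigma = np.sqrt(np.sum(resp[:, k] * resid ** 2) / resp[:, k].sum())
            betas.append(beta); sigmas.append(sigma)
        # E-step: posterior class probabilities for every observation.
        log_post = np.column_stack([
            np.log(weights[k]) + norm.logpdf(y, loc=Xb @ betas[k], scale=sigmas[k])
            for k in range(n_classes)])
        resp = np.exp(log_post - logsumexp(log_post, axis=1, keepdims=True))
    return np.array(betas), np.array(sigmas), weights, resp

# Two-class toy example: the covariate raises spending in one class and not the other.
rng = np.random.default_rng(1)
x = rng.uniform(0, 1, 600)
z = rng.integers(0, 2, 600)
y = np.where(z == 1, 50 + 80 * x, 40 + 0 * x) + rng.normal(0, 5, 600)
betas, sigmas, weights, resp = fit_mixture_of_regressions(x[:, None], y, n_classes=2)
print(np.round(betas, 1), np.round(weights, 2))
```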