111. A Combined Motif Discovery Method. Lu, Daming. 06 August 2009 (has links)
A central problem in bioinformatics is finding the binding sites of regulatory motifs. This challenging problem provides a platform for applying a variety of data mining methods. In the work described here, a combined motif discovery method using mutual information and Gibbs sampling was developed. A new scoring schema was introduced that incorporates mutual information and joint information content. Simulated tempering was embedded into classic Gibbs sampling to avoid local optima. The method was applied to the 18 DNA sequences containing CRP binding sites validated by Stormo, and the results were compared with BioProspector. Based on the results, the new scoring schema overcomes the limitation that the basic position weight matrix (PWM) model captures only single-position information. Simulated tempering proved to be an adaptive adjustment of the search strategy and showed much increased resistance to local optima.
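As a sketch of the combination described, the toy implementation below embeds a temperature ladder into a site-sampling Gibbs motif search. The sequences, motif width, ladder, and the simplified tempering acceptance rule (which drops the tempered normalizing constants) are illustrative assumptions; the thesis's mutual-information scoring schema is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
BASES = "ACGT"

def profile(seqs, pos, w, skip=None):
    """Pseudocount-smoothed position weight matrix from the chosen windows."""
    counts = np.ones((w, 4))  # Laplace pseudocounts
    for i, (s, p) in enumerate(zip(seqs, pos)):
        if i == skip:
            continue
        for j in range(w):
            counts[j, BASES.index(s[p + j])] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def window_logscore(s, p, w, pwm):
    return sum(np.log(pwm[j, BASES.index(s[p + j])]) for j in range(w))

def gibbs_motif(seqs, w, iters=3000, temps=(1.0, 2.0, 4.0)):
    """Site-sampling Gibbs motif search with a simulated-tempering ladder."""
    pos = [int(rng.integers(0, len(s) - w + 1)) for s in seqs]
    t = 0  # current rung of the temperature ladder
    for it in range(iters):
        i = it % len(seqs)
        pwm = profile(seqs, pos, w, skip=i)  # leave-one-out profile
        n = len(seqs[i]) - w + 1
        scores = np.array([window_logscore(seqs[i], p, w, pwm) for p in range(n)])
        # Higher temperature flattens the landscape, easing escape from local optima.
        probs = np.exp((scores - scores.max()) / temps[t])
        pos[i] = int(rng.choice(n, p=probs / probs.sum()))
        # Tempering move: Metropolis step between neighboring temperatures
        # (normalizing constants of the tempered distributions omitted here).
        total = sum(window_logscore(s, p, w, profile(seqs, pos, w))
                    for s, p in zip(seqs, pos))
        t_new = min(max(t + int(rng.choice([-1, 1])), 0), len(temps) - 1)
        if np.log(rng.random()) < total / temps[t_new] - total / temps[t]:
            t = t_new
    return pos

seqs = ["ATGCATTACGGACGTAG", "CCGTTACGGTATGCAAT", "GGATTTACGGATCGTAC"]
print(gibbs_motif(seqs, w=6))  # positions of the planted TTACGG motif
```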
112. Finding the Man, Husband, Physician & Father: Creating the Role of Doc Gibbs in Thornton Wilder's Our Town. Payne, Patrick. 17 December 2010 (has links)
This thesis serves as documentation of my efforts to accurately define my creative process as an actor in creating the role of Doc Gibbs in Our Town by Thornton Wilder. It includes research, a rehearsal journal, a character analysis, and an evaluation of my performance. Our Town was produced by the University of New Orleans Department of Film, Theatre and Communication Arts in New Orleans, Louisiana. The play was performed in the Robert E. Nims Theatre of the Performing Arts Center at 7:30 pm on the evenings of April 22 through 24 and April 29 through May 1, 2010, as well as in one matinee at 2:30 pm on Sunday, May 2, 2010.
113. Application and Further Development of TrueSkill™ Ranking in Sports. Ibstedt, Julia; Rådahl, Elsa; Turesson, Erik; vande Voorde, Magdalena. January 2019 (has links)
The aim of this study was to explore the TrueSkill™ ranking model developed by Microsoft, applying it to various sports and constructing extensions to the model. Two different inference methods for TrueSkill were constructed, using Gibbs sampling and message passing. Additionally, the sequential Gibbs sampling method was successfully extended into a batch method in order to eliminate game-order dependency and create a fairer, although computationally heavier, ranking system. All methods were further implemented with extensions that take home-team advantage, score difference, and finally a combination of the two into consideration. The methods were applied to football (Premier League), ice hockey (NHL), and tennis (ATP Tour) and evaluated on the accuracy of their predictions before each game. On football, the extensions improved the prediction accuracy from 55.79% to 58.95% for the sequential methods, while the vanilla Gibbs batch method reached an accuracy of 57.37%. Altogether, the extensions improved the performance of the vanilla methods on all data sets. The home-team advantage extension performed better than score difference on both football and ice hockey, while the combination of the two reached the highest accuracy. The Gibbs batch method had the highest prediction accuracy among the vanilla models for all sports. The results of this study imply that TrueSkill could be considered a useful ranking model for other sports as well, especially if tuned and implemented with extensions suitable for the particular sport.
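For intuition, here is a minimal Gibbs sampler for a TrueSkill-like model of a single match between two players. The prior values, performance noise beta, and conjugate update equations follow the standard two-player Gaussian setup and are assumptions for illustration, not the report's code.

```python
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(1)

def gibbs_trueskill(mu=(25.0, 25.0), sigma=(8.33, 8.33), beta=4.17,
                    n_samples=5000, burn_in=500):
    """Gibbs sampler for two skills given the outcome 'player 1 beat player 2'."""
    mu1, mu2 = mu
    v1, v2 = sigma[0] ** 2, sigma[1] ** 2
    vd = 2 * beta ** 2            # variance of the performance difference
    s1, s2 = mu1, mu2
    draws = []
    for it in range(n_samples):
        # 1) Performance difference, truncated to d > 0 because player 1 won.
        loc, scale = s1 - s2, np.sqrt(vd)
        d = truncnorm.rvs(-loc / scale, np.inf, loc=loc, scale=scale,
                          random_state=rng)
        # 2) Skill 1 given d and s2: conjugate Gaussian update.
        prec = 1 / v1 + 1 / vd
        mean = (mu1 / v1 + (d + s2) / vd) / prec
        s1 = rng.normal(mean, np.sqrt(1 / prec))
        # 3) Skill 2 given d and s1.
        prec = 1 / v2 + 1 / vd
        mean = (mu2 / v2 + (s1 - d) / vd) / prec
        s2 = rng.normal(mean, np.sqrt(1 / prec))
        if it >= burn_in:
            draws.append((s1, s2))
    return np.array(draws)

post = gibbs_trueskill()
print("posterior means:", post.mean(axis=0))  # winner drifts up, loser down
```

The report's batch variant differs in that it conditions on all games at once rather than sweeping them one match at a time, which is what removes the game-order dependency.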
114. Performance Measurement in the eCommerce Industry. Donkor, Simon. 29 April 2003 (has links)
The eCommerce industry introduced new business principles, as well as new strategies for achieving them, and as a result some traditional measures of success are no longer valid. We classified and ranked the performance of twenty business-to-consumer eCommerce companies by developing critical benchmarks using the Balanced Scorecard methodology. We applied a latent class model, a statistical model in the Bayesian framework, to facilitate the determination of the best and worst performing companies. An eCommerce site's greatest asset is its customers, which is why some of the most valued and sophisticated metrics used today revolve around customer behavior. The results of our classification and ranking procedure showed that companies that ranked high overall also ranked comparatively well in the customer-analysis ranking. For example, Amazon.com, one of the highest-rated eCommerce companies, with a large customer base, ranked second in the critical benchmark developed for measuring customer analysis. The results of our simulation also showed that the latent class model is a good fit for the classification procedure and has a high classification rate for the worst and best performing companies. The resulting work offers a practical tool with the ability to identify profitable investment opportunities for financial managers and analysts.
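A minimal sketch of a Bayesian latent class model of the kind described, fit by Gibbs sampling on binary benchmark indicators. The two-class setup, Beta(1, 1) priors, and toy pass/fail data are illustrative assumptions, and label switching is ignored for brevity.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy data: rows are companies, columns are pass/fail benchmark indicators.
Y = rng.binomial(1, np.r_[np.full((10, 6), 0.8), np.full((10, 6), 0.3)])

def gibbs_latent_class(Y, n_iter=2000):
    n, m = Y.shape
    z = rng.integers(0, 2, size=n)            # latent class labels
    keep = np.zeros(n)
    for it in range(n_iter):
        # Class-conditional Bernoulli rates, Beta(1, 1) prior.
        theta = np.empty((2, m))
        for k in range(2):
            rows = Y[z == k]
            theta[k] = rng.beta(1 + rows.sum(axis=0),
                                1 + len(rows) - rows.sum(axis=0))
        # Mixing weight, Beta(1, 1) prior.
        pi = rng.beta(1 + (z == 1).sum(), 1 + (z == 0).sum())
        # Re-sample each company's class from its posterior odds.
        loglik = Y @ np.log(theta).T + (1 - Y) @ np.log(1 - theta).T
        logpost = loglik + np.log([1 - pi, pi])
        p1 = 1 / (1 + np.exp(logpost[:, 0] - logpost[:, 1]))
        z = rng.binomial(1, p1)
        if it >= n_iter // 2:
            keep += z
    return keep / (n_iter - n_iter // 2)      # P(class 1) per company

print(np.round(gibbs_latent_class(Y), 2))     # separates the two toy groups
```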
115. Modélisation probabiliste d'impression à l'échelle micrométrique / Probabilistic modeling of prints at the microscopic scale. Nguyen, Quoc Thong. 18 May 2015 (has links)
We develop probabilistic models of prints at the microscopic scale. The models account for the random shape of the dots that make up a print, and could later be exploited in various applications, including the authentication of printed documents. An analysis of printing on different paper types and with different printers shows that the large variety of dot shapes depends on both the printing technology and the paper. The proposed model captures both the gray-level distribution and the spatial distribution of ink on the paper. For the gray levels, the models of the inked and blank areas are obtained by selecting parametric distributions from a set of candidate laws whose shapes are close to the histograms, using the Kolmogorov-Smirnov criterion. The spatial model of the ink is binary. The first model is a field of independent, non-stationary Bernoulli variables whose parameters form a generalized Gaussian kernel. A second spatial model of the ink particles additionally accounts for pixel dependence through a non-stationary Markov model. Two iterative estimation methods were developed: one approximates the maximum likelihood with a quasi-Newton algorithm, the other approximates the minimum mean square error estimator with a Metropolis-Hastings-within-Gibbs algorithm. The performance of the estimators is evaluated and compared on simulated images, and the accuracy of the models is analyzed on sets of microscopic-scale print images obtained from different printers. The results show the good behavior of the estimators and the consistency of the models.
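A minimal sketch of the first spatial model, assuming the ink probability of a dot decays as a generalized Gaussian kernel, with random-walk Metropolis-within-Gibbs updates of the kernel's scale and shape. The parameter names, flat priors, grid size, and proposal width are illustrative assumptions, not the thesis's estimator.

```python
import numpy as np

rng = np.random.default_rng(3)

def ink_prob(scale, shape, n=32):
    """Generalized-Gaussian ink probability around the dot center."""
    y, x = np.mgrid[:n, :n]
    r = np.hypot(x - n / 2, y - n / 2)
    return np.clip(np.exp(-(r / scale) ** shape), 1e-9, 1 - 1e-9)

def loglik(img, scale, shape):
    p = ink_prob(scale, shape, img.shape[0])
    return np.sum(img * np.log(p) + (1 - img) * np.log(1 - p))

# Simulate one dot, then recover (scale, shape) by Metropolis-within-Gibbs.
true = dict(scale=6.0, shape=1.5)
img = rng.binomial(1, ink_prob(**true))

scale, shape = 4.0, 1.0           # initial guess; flat priors assumed
samples = []
for it in range(5000):
    for name in ("scale", "shape"):
        cur = dict(scale=scale, shape=shape)
        prop = dict(cur)
        prop[name] = cur[name] + rng.normal(0, 0.2)  # random-walk proposal
        if prop[name] > 0 and (np.log(rng.random())
                               < loglik(img, **prop) - loglik(img, **cur)):
            scale, shape = prop["scale"], prop["shape"]
    if it >= 1000:
        samples.append((scale, shape))

print("posterior mean:", np.mean(samples, axis=0), "truth:", true)
```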
116. Computational petrology: Subsolidus equilibria in the upper mantle. Sommacal, Silvano (silvano.sommacal@anu.edu.au). January 2004 (has links)
Processes that take place in the Earth's mantle are not accessible to direct observation. Natural samples of mantle material transported to the surface as xenoliths provide useful information on phase relations and on the compositions of phases at the pressure and temperature conditions of each rock fragment. In the past, petrologists devoted considerable effort to investigating upper-mantle processes experimentally. Results of high-temperature, high-pressure experiments have provided insight into lower crust-upper mantle phase relations as a function of temperature, pressure, and composition. However, the attainment of equilibrium in these experiments, especially in complex systems, may be very difficult to test rigorously, and experimental results may require extrapolation to different pressures, temperatures, or bulk compositions. More recently, thermodynamic modeling has proved to be a very powerful approach to this problem, allowing the physicochemical conditions at which mantle processes occur to be deciphered. On the other hand, no comprehensive thermodynamic model exists for investigating lower crust-upper mantle phase assemblages in complex systems.
In this study, a new thermodynamic model describing phase equilibria between silicate and/or oxide crystalline phases has been derived. For every solution phase, the molar Gibbs free energy is given by the sum of contributions from the energies of the end-members, ideal mixing on sites, and excess site-mixing terms. It is argued here that the end-member term of the Gibbs free energy for complex solid solution phases (e.g. pyroxene, spinel) has not previously been treated in the most appropriate manner. As an example, the correct expression of this term for a pyroxene solution in a general (Na-Ca-Mg-Fe2+-Al-Cr-Fe3+-Si-Ti) system is presented, and the principle underlying its formulation for any complex solution phase is elucidated.
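In generic notation, the decomposition just described takes roughly the following form for the molar Gibbs energy of one solution phase (a standard site-mixing sketch with assumed symbols, not the thesis's exact expression):

```latex
G_m \;=\; \sum_i X_i\,G^{\circ}_i
\;+\; RT\,\sum_s q_s \sum_j y_{j,s}\,\ln y_{j,s}
\;+\; G^{\mathrm{xs}}
```

Here $X_i$ are end-member proportions, $G^{\circ}_i$ their standard-state Gibbs energies, $q_s$ the multiplicity of crystallographic site $s$, $y_{j,s}$ the fraction of species $j$ on site $s$, and $G^{\mathrm{xs}}$ collects the excess (non-ideal) mixing terms. A known subtlety with complex phases such as pyroxene is that the same site occupancies can be represented by different end-member sets, so the formulation of the first term requires care.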
Based on the thermodynamic model, an algorithm has been developed to compute lower crust-upper mantle phase equilibria for subsolidus mineral assemblages as a function of composition, temperature, and pressure. Included in the algorithm is a new way to represent the total Gibbs free energy of any multi-phase complex system. At a given temperature and pressure, a closed multi-phase system is at equilibrium when the chemical compositions of the phases present and the number of moles of each are such that the Gibbs free energy of the system reaches its minimum value. From a mathematical point of view, the determination of equilibrium phase assemblages can thus be defined as a constrained minimization problem. To solve the Gibbs free energy minimization problem, a Feasible Iterate Sequential Quadratic Programming method (FSQP) is employed: the system's Gibbs free energy is minimized under several different linear and non-linear constraints. The algorithm, coded as a highly flexible FORTRAN computer program (named Gib), is currently set up to perform equilibrium calculations in Na2O-CaO-MgO-FeO-Al2O3-Cr2O3-Fe2O3-SiO2-TiO2 systems, but is designed so that any other oxide component can easily be added.
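A minimal sketch of the constrained-minimization formulation, using a toy two-component system with two ideal-solution phases and SciPy's SLSQP (a sequential quadratic programming routine) standing in for FSQP. The end-member energies and bulk composition are made-up numbers, and the mass-balance constraints are the only ones imposed.

```python
import numpy as np
from scipy.optimize import minimize

R, T = 8.314, 1400.0                       # J/(mol K), K
# Made-up end-member Gibbs energies g[phase][component] in J/mol.
g = np.array([[-5000.0, -1000.0],          # phase alpha: components A, B
              [-2000.0, -6000.0]])         # phase beta:  components A, B
bulk = np.array([1.0, 1.0])                # total moles of A and of B

def total_G(n):
    """Total Gibbs energy of two ideal solutions; n = moles per (phase, comp)."""
    n = n.reshape(2, 2)
    x = n / n.sum(axis=1, keepdims=True)   # mole fractions within each phase
    return float(np.sum(n * (g + R * T * np.log(x))))

# Mass balance: moles of each component summed over phases match the bulk.
cons = [{"type": "eq", "fun": lambda n: n.reshape(2, 2).sum(axis=0) - bulk}]
res = minimize(total_G, x0=np.full(4, 0.5), method="SLSQP",
               bounds=[(1e-9, None)] * 4, constraints=cons)
print(res.x.reshape(2, 2))  # equilibrium distribution of A, B over the phases
```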
To accurately forward-model phase equilibria with Gib, precise estimates of the thermodynamic data for mineral end-members and of the solution parameters adopted in the computation are needed. As a result, the values of these parameters had to be derived or refined for every solution phase in the investigated systems. A computer program (called GibInv) has been set up that allows the simultaneous refinement of any of the end-member and mixing parameters; its implementation is described here in detail. Derivation of internally consistent thermodynamic data is obtained using the Bayesian technique. The program, after being successfully tested on a synthetic case, is first applied to pyroxene assemblages in the system CaO-MgO-FeO-Al2O3-SiO2 (i.e. CMFAS) and in its constituent subsystems. Preliminary results are presented.
The new thermodynamic model is then applied to assemblages of Ca-Mg-Fe olivines and of coexisting pyroxenes (orthopyroxene, low-Ca and high-Ca clinopyroxene; two or three depending on temperature, pressure, and bulk composition) in the CMFAS system and its subsystems. Olivine and pyroxene solid-solution and end-member parameters are refined, partly using GibInv and partly on a trial-and-error basis, and, where necessary, new parameters are derived. Olivine and pyroxene phase relations within these systems and their subsystems are calculated over a wide range of temperatures and pressures and compare very favorably with experimental constraints.
117. 漲跌幅限制下股價行為與財務指標受扭曲程度之研究 / The Impacts of Stock Price Limits on Security Price Behavior and Financial Risk Indices Measures. Huang, Je Rome (黃健榮). Unknown Date (has links)
Price limits have been in place in Taiwan's stock market for over thirty years. The regulator's rationale for keeping the mechanism is that it prevents excessively volatile price movements and curbs speculation. Beyond the intuitive concern that limits distort investors' holding-risk measures, this study identifies two further problems: (1) limit hits are used as a technical indicator, and (2) limits distort financial risk measures.
Examining the strengths and weaknesses of GMM, the Gibbs sampler, and the two-limit Tobit model, the study finds that the commonly used GMM estimator is not unbiased; although adjustments can improve its efficiency, it still cannot measure the various effects of price limits. The Gibbs sampler depends heavily on a specific prior distribution, which may itself introduce bias. And most existing studies using the Tobit model ignore the effect of price limits on stock prices, so the resulting estimates are also biased.
The sample covers January 3, 1990 through October 9, 1995, modeled with the two-limit Tobit model. For rigor, the data were preprocessed before use, and CAAR was used to verify the model's validity. The empirical results show that price limits significantly alter investor behavior: before a limit is hit, this study finds a technical-indicator effect and an upward bias in the standard deviation statistic, which may mislead corporate financial decisions and academic conclusions. / This study explores how price limits, which have been in place on the Taiwan Stock Exchange for over thirty years, affect both security price behavior and security risk indices. Its empirical results add to our understanding of the social costs and benefits of price limits. The SEC has been advocating the merits of price limits, emphasizing that they help eliminate speculative trades and reduce security price volatility. In contrast, it remains a popular view that price limits increase investors' holding costs and risks. To empirically examine the effects of price limits in Taiwan, this paper adopts the two-limit Tobit model, together with CAAR as an indicator of specification validity. The test results lend support to the notion of (1) a technical-indicator effect immediately before the price limits are hit, and (2) an enhancement effect the day after.
Moreover, price limits contribute to bias in both systematic and total risk estimates (namely, β and σ) and thus distort investment decisions. This study also contributes to the contemporary literature by examining the merits and limitations of GMM, the Gibbs sampler, and the two-limit Tobit model. The GMM estimator is subject to statistical bias; one may gain efficiency via adjustment, and yet GMM has pitfalls in directly measuring price-limit effects. The major limitation of the Gibbs sampler is its reliance on specific prior information, which may lead to bias. And most papers adopting the Tobit model simply input the original data into the program, ignoring the fact that price limits may contaminate the following day's price data.
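A minimal sketch of Gibbs sampling for a two-limit Tobit model via data augmentation, assuming a linear latent-return model censored at ±7%. The limits, flat and Jeffreys priors, and simulated data are illustrative assumptions, not the thesis's specification.

```python
import numpy as np
from scipy.stats import truncnorm, invgamma

rng = np.random.default_rng(4)
LIM = 0.07                                   # assumed +/-7% daily price limit

# Simulated censored returns: y* = a + b*x + eps, observed y clipped at the limits.
n = 400
x = rng.normal(0, 1, n)
y_star = 0.01 + 0.04 * x + rng.normal(0, 0.03, n)
y = np.clip(y_star, -LIM, LIM)
X = np.column_stack([np.ones(n), x])
lo, hi = y <= -LIM, y >= LIM                 # limit-down / limit-up days

beta, sig2 = np.zeros(2), 0.01
draws = []
for it in range(4000):
    # 1) Impute latent returns on limit days from truncated normals.
    mu, s = X @ beta, np.sqrt(sig2)
    z = y.copy()
    if lo.any():
        z[lo] = truncnorm.rvs(-np.inf, (-LIM - mu[lo]) / s,
                              loc=mu[lo], scale=s, random_state=rng)
    if hi.any():
        z[hi] = truncnorm.rvs((LIM - mu[hi]) / s, np.inf,
                              loc=mu[hi], scale=s, random_state=rng)
    # 2) beta | z, sig2 with a flat prior: normal around the OLS solution.
    XtX_inv = np.linalg.inv(X.T @ X)
    b_hat = XtX_inv @ X.T @ z
    beta = rng.multivariate_normal(b_hat, sig2 * XtX_inv)
    # 3) sig2 | z, beta: inverse gamma (Jeffreys prior 1/sig2 assumed).
    resid = z - X @ beta
    sig2 = invgamma.rvs(n / 2, scale=resid @ resid / 2, random_state=rng)
    if it >= 1000:
        draws.append(beta)

print("posterior mean of beta:", np.mean(draws, axis=0))  # near (0.01, 0.04)
```

The point of the augmentation step is exactly the contamination issue raised above: on limit days the recorded price understates the latent move, so the sampler treats the true return as missing rather than taking the clipped value at face value.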
118. Dynamique de la Motorisation et Usage de l'Automobile en France (L'Île-de-France en Perspective) / Dynamics of Car Ownership and Automobile Use in France (Île-de-France in Perspective). Collet, Roger. 13 November 2007 (has links) (PDF)
Since the 1950s, the automobile has been a major subject of debate in France. A fabulous means of communication for its defenders, it is associated by its detractors with the squandering of natural resources. With 81% of French households owning a car in 2001, the public endorsement of the automobile is undeniable. Yet ecological stakes, sound regulation of road traffic, and a living space already saturated with cars make an "all-car" society impossible, while "zero cars" seems quite utopian. In this context, the urgent task is to civilize automobile behavior, and that first requires analyzing agents' car ownership and use, which is the object of our research. Using the French Parc Auto panel data, we model three fundamentals of motorization. Households' "automobility" is first analyzed with Becker's rational addiction model. Their level of car ownership is then studied with an ordered Probit model with an autoregressive latent structure. The last part of the thesis deals with individuals' qualitative car acquisition choices, fitting a multinomial Probit model. Throughout this research, behavioral dynamics are examined in order to measure the weight of past decisions in current ownership and use behavior. We also measure the impact of agents' income and of fuel prices on their automobile behavior. In addition, the zoning considered, which notably divides the Île-de-France region into central Paris, the inner ring, and the outer ring, allows some results specific to Île-de-France residents to be presented.
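For the ownership-level model, a minimal static ordered probit fit by maximum likelihood may help fix ideas; it omits the autoregressive latent structure, and the variables, cutpoints, and data are illustrative.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(5)

# Toy data: household income index -> number of cars (0, 1, or 2+).
n = 1000
income = rng.normal(0, 1, n)
latent = 0.8 * income + rng.normal(0, 1, n)
cars = np.digitize(latent, [-0.5, 0.9])       # true cutpoints

def neg_loglik(params):
    """Ordered probit: P(y = k) = Phi(c_k - x'b) - Phi(c_{k-1} - x'b)."""
    b, c1, dc = params
    cuts = np.array([-np.inf, c1, c1 + np.exp(dc), np.inf])  # kept ordered
    xb = b * income
    p = norm.cdf(cuts[cars + 1] - xb) - norm.cdf(cuts[cars] - xb)
    return -np.sum(np.log(np.clip(p, 1e-12, None)))

res = minimize(neg_loglik, x0=np.zeros(3), method="BFGS")
b, c1, dc = res.x
print("slope:", b, "cutpoints:", c1, c1 + np.exp(dc))  # near 0.8, -0.5, 0.9
```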
119. Chaînes à liaisons complètes et mesures de Gibbs unidimensionnelles / Chains with complete connections and one-dimensional Gibbs measures. Maillard, Grégory. 26 June 2003 (links) (PDF)
We introduce a statistical mechanics formalism for the study of discrete-time stochastic processes (chains), for which we prove: (i) general properties of extremal chains, including triviality of the tail σ-field, short-range correlations, realization via infinite-volume limits, and ergodicity; (ii) two new conditions for the uniqueness of the consistent chain; (iii) loss-of-memory results and mixing properties for chains in the Dobrushin regime. We consider systems with a finite alphabet, possibly equipped with a grammar. We establish conditions for a chain to define a Gibbs measure and vice versa. We discuss the equivalence of uniqueness criteria for chains and for fields, and establish bounds on the continuity rates of the respective systems of conditional probabilities. Finally, we prove a (re)construction theorem for specifications starting from single-site conditioning.
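To fix ideas, the two objects being related admit the following standard sketch (generic notation over a finite alphabet, not the thesis's exact formulation): a chain is specified by one-sided conditional probabilities on the past, a one-dimensional Gibbs measure by two-sided conditioning.

```latex
\text{chain:}\quad
P\bigl(X_0 = x_0 \mid X_{-1} = x_{-1}, X_{-2} = x_{-2}, \dots\bigr)
  = g(x_0 \mid x_{-1}, x_{-2}, \dots),
\qquad
\text{Gibbs:}\quad
\mu\bigl(X_0 = x_0 \mid X_k = x_k,\; k \neq 0\bigr)
  \;\propto\; \exp\!\Bigl(-\sum_{\Lambda \ni 0} \Phi_\Lambda(x)\Bigr)
```

The chain-to-Gibbs direction asks when the law of a process defined by such a one-sided kernel $g$ is also Gibbsian for an absolutely summable interaction $\Phi$, and conversely; continuity and summability conditions on $g$ and $\Phi$ control the correspondence.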
120. Identification d'un modèle de comportement thermique de bâtiment à partir de sa courbe de charge / Identification of a building's thermal behavior model from its load curve. Zayane, Chadia. 11 January 2011 (links) (PDF)
In a context of heightened concern for energy savings, the value of developing strategies to minimize a building's consumption needs no further demonstration. Whether these strategies consist of recommending wall insulation, improving heating management, or encouraging certain occupant behaviors, a preliminary step of identifying the building's thermal behavior is unavoidable. Unlike existing studies, the approach taken here requires no instrumentation of the building. We also consider buildings under normal occupancy, with a heating controller in the loop, which is an additional unknown of the problem. We therefore identify a global model of the building together with its controller from: data from the nearest Météo France weather station; the setpoint temperature reconstructed from domain knowledge; the heating consumption obtained from a building management system (GTB) or a smart meter; and other heat gains (lighting, occupancy, etc.) estimated from domain and thermal knowledge. Identification is first performed by estimating the seven parameters defining the global model, minimizing the one-step-ahead prediction error. We then adopt a Bayesian inversion approach, whose result is a simulation of the posterior distributions of the parameters and of the building's indoor temperature. The analysis of the resulting stochastic simulations aims to study the contribution of additional prior knowledge about the problem (typical parameter values) and to demonstrate the limits of the modeling assumptions in certain cases.
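A minimal sketch of the kind of identification described, assuming a first-order RC building model driven by outdoor temperature and heating power, with R and C fit by minimizing the one-step-ahead prediction error. The model structure, synthetic data, and two-parameter setup are illustrative; the thesis identifies a seven-parameter global model that includes the heating controller.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(6)
dt = 3600.0                                   # one-hour time step [s]

# Synthetic data: outdoor temperature, heating power, and the indoor response.
hours = np.arange(24 * 7)
T_out = 5 + 5 * np.sin(2 * np.pi * hours / 24)          # [deg C]
Q = np.where(T_out < 7, 2000.0, 500.0)                  # heating power [W]
R_true, C_true = 0.005, 2e7                             # [K/W], [J/K]
T_in = np.empty_like(T_out)
T_in[0] = 18.0
for t in range(len(hours) - 1):
    flow = (T_out[t] - T_in[t]) / R_true + Q[t]         # net heat flow [W]
    T_in[t + 1] = T_in[t] + dt * flow / C_true + rng.normal(0, 0.05)

def one_step_residuals(theta):
    """Residuals of one-step-ahead predictions for log-parameters theta."""
    R, C = np.exp(theta)                      # log scale keeps R, C positive
    pred = T_in[:-1] + dt * ((T_out[:-1] - T_in[:-1]) / R + Q[:-1]) / C
    return pred - T_in[1:]

fit = least_squares(one_step_residuals, x0=np.log([0.01, 1e7]))
print("estimated R, C:", np.exp(fit.x))       # close to 0.005, 2e7
```

The Bayesian inversion step described in the abstract would replace this point estimate with posterior draws of the parameters and of the indoor temperature trajectory, for example by MCMC over the same one-step prediction model.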