11. Probabilistic Modeling in Community-based Question Answering Services. Zolaktaf Zadeh, Zeinab, 29 February 2012.
Community-based Question Answering (CQA) services enable members to ask questions and have them answered by the community. These services have the potential of rapidly creating large archives of questions and answers. However, their information is rarely exploited. This thesis presents a new statistical topic model for modeling Question-Answering archives. The model explicitly captures topic dependency and correlation between questions and answers, and models differences in their vocabulary. The proposed model is applied for the task of Question Answering and its performance is evaluated using a dataset extracted from the programming website Stack Overflow. Experimental results show that it achieves improved performance in retrieving the correct answer for a query question compared to the LDA model. The model has also been applied for Automatic Tagging and comparisons with LDA show that the new model achieves better clustering performance for larger numbers of topics.
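As context for the LDA comparison mentioned above, here is a minimal, hypothetical sketch of an LDA retrieval baseline in Python (using gensim); the toy corpus, preprocessing, and topic count are illustrative assumptions, not details from the thesis.

```python
# Hypothetical sketch of an LDA retrieval baseline: rank archived answers
# by topic-distribution similarity to a query question.
from gensim import corpora, models, similarities

questions = ["how do I sort a dict by value in python",
             "what is the difference between list and tuple"]
answers = ["use sorted with a key function over dict items",
           "tuples are immutable sequences, lists are mutable"]

tokenized = [doc.lower().split() for doc in questions + answers]
dictionary = corpora.Dictionary(tokenized)
bows = [dictionary.doc2bow(doc) for doc in tokenized]

lda = models.LdaModel(bows, id2word=dictionary, num_topics=2, random_state=0)

# Represent the answers in topic space and index them for similarity search.
answer_bows = bows[len(questions):]
index = similarities.MatrixSimilarity(lda[answer_bows], num_features=2)

query = "sorting a dictionary by its values"
query_topics = lda[dictionary.doc2bow(query.lower().split())]
print(sorted(enumerate(index[query_topics]), key=lambda x: -x[1]))
```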
12. Stochastic modeling of random heterogeneous materials. Tran, Vinh Phuc, 05 October 2016.
If the length scales are well separated, homogenization theory provides a rigorous framework for heterogeneous materials. In this context, the macroscopic properties can be retrieved from the solution to an auxiliary problem formulated over a representative volume element (with appropriate boundary conditions). In the present work, we focus on the homogenization of heterogeneous materials that are described at the finest scale by two different material models (both depending on a specific characteristic length), while the equivalent homogeneous medium behaves in both cases as a classical Cauchy medium. In the first part, a random Cauchy microstructure is considered. Solving the auxiliary problem numerically on multiple realizations can be very costly when the characteristic length scales of the constitutive phases are not well separated and/or when the mechanical contrast is high. To circumvent these limitations, our study relies on a mesoscopic description of the material combined with information theory. In this mesostructure, obtained by filtering, the finest-scale features are smoothed out. The second part is dedicated to gradient materials, in which at least one internal length exists and induces size effects at the macroscopic scale. The random microstructure is described by a recently introduced stress-gradient model. Despite their conceptual similarity, we show that the stress-gradient and strain-gradient models define two distinct classes of materials. Next, simple approaches such as mean-field homogenization techniques are proposed to better understand the assumptions underlying the stress-gradient model. The resulting semi-analytical estimates allow us to explore the influence of the model parameters on the homogenized properties and constitute a first step toward full-field simulations.
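For reference, the auxiliary problem the abstract refers to takes, in its classical form on a representative volume element Ω with kinematic uniform boundary conditions, the following textbook form (the thesis's stress-gradient setting generalizes this, so treat it only as the standard starting point):

```latex
% Classical homogenization auxiliary problem on an RVE \Omega
% (kinematic uniform boundary conditions), and the averaging rule
% defining the effective stiffness C^eff:
\begin{aligned}
\operatorname{div}\boldsymbol{\sigma} &= \boldsymbol{0}
  & &\text{in } \Omega,\\
\boldsymbol{\sigma} &= \mathbf{C}(\boldsymbol{x}) : \boldsymbol{\varepsilon}(\boldsymbol{u})
  & &\text{in } \Omega,\\
\boldsymbol{u} &= \mathbf{E}\cdot\boldsymbol{x}
  & &\text{on } \partial\Omega,
\end{aligned}
\qquad\qquad
\mathbf{C}^{\mathrm{eff}} : \mathbf{E} = \langle \boldsymbol{\sigma} \rangle_{\Omega}.
```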
13. A Probabilistic Approach for Automated Discovery of Biomarkers using Expression Data from Microarray or RNA-Seq Datasets. Sundaramurthy, Gopinath, 03 June 2016.
No description available.
14. Study of fatigue properties of rolled steels from self-heating measurements under cyclic loading: tests, observations, model and influence of a plastic pre-strain. Munier, Rémi, 03 February 2012.
The determination of the high-cycle fatigue properties of rolled steel sheets for the automotive industry is costly in time and material: 25 specimens and almost one month of testing are required to obtain a standard fatigue (S-N) curve. In order to reduce the time dedicated to fatigue characterization, a fast method based on the self-heating of the material under cyclic loading is developed and applied to a wide range of grades. Self-heating measurements reveal two distinct dissipative regimes, one for the lowest amplitudes of cyclic loading and one for the highest. A two-scale probabilistic model is then developed to predict the high-cycle fatigue behavior from the self-heating measurements. It is composed of a matrix with an elasto-plastic behavior and a population of inclusions with a second elasto-plastic behavior whose activation threshold is random. Both self-heating regimes can thus be described faithfully. Using the weakest-link hypothesis and an energetic criterion, a prediction of the fatigue behavior is made; only three days are then required to obtain a complete fatigue curve. The relevance of the approach is validated by comparison with standard fatigue curves. Then, observations by optical microscopy, atomic force microscopy, and EBSD are carried out on a high-strength low-alloy steel grade. The objective is twofold: first, to better understand the phenomena occurring under cyclic loading that lead to the two self-heating regimes; second, to justify the relevance of the ingredients introduced into the model.

A second major part deals with the influence of a plastic pre-strain on the evolution of fatigue properties. Indeed, automotive components obtained from rolled steel sheets undergo various forming operations that plastically deform the material. These modifications of the material state produce evolutions of the fatigue properties that are nevertheless not taken into account in the current design of components (they are not determined with the standard method because of prohibitive testing time). The speed of the proposed approach makes it possible to characterize these evolutions. From the modifications of the self-heating properties after various modes of plastic pre-strain (tension, plane tension, shearing), and by studying various loading directions, it is possible to predict the associated evolution of the fatigue properties for a wide range of pre-strains. The quality of the predictions is validated by comparison with standard fatigue curves.
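For orientation, the weakest-link hypothesis invoked above is classically expressed as a Weibull-type failure probability; this is the generic textbook form, not the thesis's specific two-scale energetic criterion:

```latex
% Generic weakest-link (Weibull) failure probability for a structure of
% volume V under a heterogeneous stress amplitude field \Sigma(x);
% V_0, S_0 and m are material parameters:
P_F = 1 - \exp\left[ -\frac{1}{V_0} \int_{V}
      \left( \frac{\Sigma(\boldsymbol{x})}{S_0} \right)^{m} \mathrm{d}V \right]
```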
15. Modeling Purposeful Adaptive Behavior with the Principle of Maximum Causal Entropy. Ziebart, Brian D., 01 December 2010.
Predicting human behavior from a small number of training examples is a challenging machine learning problem. In this thesis, we introduce the principle of maximum causal entropy, a general technique for applying information theory to decision-theoretic, game-theoretic, and control settings where relevant information is sequentially revealed over time. This approach guarantees decision-theoretic performance by matching purposeful measures of behavior (Abbeel & Ng, 2004), and/or enforces game-theoretic rationality constraints (Aumann, 1974), while otherwise being as uncertain as possible, which minimizes worst-case predictive log-loss (Grünwald & Dawid, 2003).
We derive probabilistic models for decision, control, and multi-player game settings using this approach. We then develop corresponding algorithms for efficient inference that include relaxations of the Bellman equation (Bellman, 1957), and simple learning algorithms based on convex optimization. We apply the models and algorithms to a number of behavior prediction tasks. Specifically, we present empirical evaluations of the approach in the domains of vehicle route preference modeling using over 100,000 miles of collected taxi driving data, pedestrian motion modeling from weeks of indoor movement data, and robust prediction of game play in stochastic multi-player games.
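As a worked pointer, the causally conditioned entropy being maximized, and the softened Bellman recursion that the inference algorithms relax to, can be written as follows (this notation is a common presentation of Ziebart's principle, not copied from the thesis):

```latex
% Causally conditioned entropy of actions given states, the quantity
% maximized subject to feature-matching constraints:
H(\mathbf{A}^{T} \,\|\, \mathbf{S}^{T})
  = \mathbb{E}\left[ -\sum_{t=1}^{T} \log \pi(a_t \mid s_{1:t}, a_{1:t-1}) \right]

% The resulting policy satisfies a softened Bellman recursion:
Q(s_t, a_t) = r(s_t, a_t)
  + \mathbb{E}_{s_{t+1} \sim P(\cdot \mid s_t, a_t)}\!\left[ V(s_{t+1}) \right],
\qquad
V(s_t) = \log \sum_{a_t} \exp Q(s_t, a_t),
\qquad
\pi(a_t \mid s_t) = \exp\big( Q(s_t, a_t) - V(s_t) \big)
```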
16. Enabling scalable self-management for enterprise-scale systems. Kumar, Vibhore, 07 May 2008.
Implementing self-management for enterprise systems is difficult. First, the scale and complexity of such systems make it hard to understand and interpret system behavior or, worse, the root causes of certain behaviors. Second, it is not clear how goals specified at the system level translate to component-level actions that drive the system. Third, the dynamic environments in which such systems operate require self-management techniques that not only adapt the system but also adapt their own decision-making processes. Finally, to build a self-management solution that is acceptable to administrators, it should have the properties of tractability and trust, which allow an administrator to both understand and fine-tune self-management actions.
This dissertation work introduces, implements, and evaluates iManage, a novel system state-space based framework for enabling self-management of enterprise-scale systems. The system state-space, in iManage, is defined to be a collection of monitored system parameters and metrics (termed system variables). In addition, from amongst the system variables, it identifies the variables of interest, which determine the operational status of a system, and the controllable variables, which are the ones that can be deterministically modified to affect the operational status of a system. Using this formal representation, we have developed and integrated into iManage techniques that establish a probabilistic model relating the variables of interest and the controllable variables under the prevailing operational conditions. Such models are then used by iManage to determine corrective actions in case of SLA violations and/or to determine per-component ranges for controllable variables, which if independently adhered to by each component, lead to SLA compliance. To address the issue of scale in determining system models, iManage makes use of a novel state-space partitioning scheme that partitions the state-space into smaller sub-spaces thereby allowing us to more precisely model the critical system aspects. Our chosen modeling techniques are such that the generated models can be easily understood and modified by the administrator. Furthermore, iManage associates each proposed self-management action with a confidence-attribute that determines whether the action in question merits autonomic enforcement or not.
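To make the state-space idea concrete, here is a small, hypothetical sketch of the general pattern (partition the state-space, fit a per-partition model from controllable variables to a variable of interest, and attach a confidence attribute); the clustering and regression choices are illustrative assumptions, not iManage's actual algorithms.

```python
# Illustrative sketch (not iManage's implementation): partition the
# state-space, fit a simple model per partition relating controllable
# variables to a variable of interest, and attach a confidence attribute.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(500, 2))          # controllable variables
y = np.where(X[:, 0] < 0.5,                   # variable of interest
             2.0 * X[:, 1], 5.0 - 3.0 * X[:, 1]) + rng.normal(0, 0.05, 500)

partitions = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
models = {}
for k in range(2):
    mask = partitions.labels_ == k
    model = LinearRegression().fit(X[mask], y[mask])
    confidence = model.score(X[mask], y[mask])   # R^2 as a crude confidence
    models[k] = (model, confidence)
    print(f"partition {k}: R^2 = {confidence:.3f}")

# A proposed corrective action would only merit autonomic enforcement
# if its partition's model confidence exceeds a trust threshold.
```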
17. A Multi-Scale Strategy for the Probabilistic Modeling of Reinforced Concrete Structures. Nader, Christian, 21 April 2017.
No modeling approach exists nowadays that can provide reliable information about the cracking process in large or complex reinforced concrete structures. This is, however, an important issue for controlling the lifespan of structures, which lies at the heart of the principle of sustainable development. In this work we introduce a new approach to model the cracking processes in large reinforced concrete structures, such as dams or nuclear power plants. For these types of structures it is unreasonable, in terms of computation time, to explicitly model the rebars and the steel-concrete bond. Nevertheless, access to data about the cracking process is imperative for structural analysis and diffusion problems. In order to obtain information about cracking in the structure without resorting to local approaches, we therefore developed a probabilistic macroscopic cracking model based on a multi-scale simulation strategy. The strategy is a multi-step process that covers the whole modeling of the structure within the finite element framework, from meshing to model creation and parameter identification.

The heart of the strategy is inspired by regression (supervised learning) algorithms: data at the local scale (the training data, coupled with working knowledge of the mechanical problem) shape the macroscopic model. The identification of the probabilistic macroscopic model is case-specific because it holds information about the local behavior, obtained in advance via numerical experimentation. This information is then projected onto the macroscopic finite element scale via inverse analysis. The numerical experiments are performed with a validated cracking model for concrete and a bond model for the steel-concrete interface, allowing a fine description of the cracking processes. Although the identification phase can be relatively time-consuming, the resulting structural simulation is very time-efficient, leading to a significant reduction of the overall computation time with no loss of information or accuracy in the results at the macroscopic scale.
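As a toy illustration of the identification step described above (projecting local-scale training data onto a macroscopic model via inverse analysis), the following sketch fits a coarse model to responses standing in for fine-scale simulations; the response functions and parameter values are invented for illustration.

```python
# Conceptual sketch (illustrative assumptions throughout): identify
# macroscopic model parameters by inverse analysis so that the coarse
# model reproduces responses generated by fine-scale "training" runs.
import numpy as np
from scipy.optimize import least_squares

strains = np.linspace(0.0, 2e-3, 20)

def fine_scale_response(eps):
    # Stand-in for costly local simulations (concrete + bond models).
    return 30e9 * eps * (1.0 - 40.0 * eps)   # softening-like curve

def macro_model(params, eps):
    E, d = params                            # stiffness, damage-like slope
    return E * eps * (1.0 - d * eps)

training_stress = fine_scale_response(strains)
fit = least_squares(
    lambda p: macro_model(p, strains) - training_stress,
    x0=[20e9, 10.0])
print("identified macroscopic parameters:", fit.x)
```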
18. Specialized Agents Task Allocation in Autonomous Multi-Robot Systems. AL-Buraiki, Omar S. M., 25 November 2020.
With the promise to shape the future of industry, multi-agent robotic technologies have the potential to change many aspects of daily life. Over the coming decade, they are expected to impact transportation systems, military applications such as reconnaissance and surveillance, search-and-rescue operations, or space missions, as well as provide support to emergency first responders.
Motivated by the latest developments in the field of robotics, this thesis contributes to the evolution of the future generation of multi-agent robotic systems as they become smarter, more accurate, and diversified in terms of applications. But in order to achieve these goals, the individual agents forming cooperative robotic systems need to be specialized in what they can accomplish, while ensuring accuracy and preserving the ability to perform diverse tasks.
This thesis addresses the problem of task allocation in swarm robotics in the specific context where specialized capabilities of the individual agents are considered. Based on the assumption that each individual agent possesses specialized functional capabilities and that the expected tasks, which are distributed in the surrounding environment, impose specific requirements, the proposed task allocation mechanisms are formulated in two different spaces. First, a rudimentary form of the team members’ specialization is formulated as a cooperative control problem embedded in the agents’ dynamics control space. Second, an advanced formulation of agents’ specialization is defined to estimate the individual agents’ task allocation probabilities in a dedicated specialization space, which represents the core contribution of this thesis to the advancement and practice in the area of swarm robotics.
The original task allocation process formulated in the specialization space evolves through four stages of development. First, a task-feature recognition stage is conceptually introduced to leverage the output of a sensing layer embedded in the robotic agents to drive the proposed task allocation scheme. Second, a matching scheme is developed to best match each agent's specialized capabilities with the corresponding detected tasks. At this stage, a general binary definition of agents' specialization serves as the basis for task-agent association. Third, the task-agent matching scheme is expanded into an innovative probabilistic specialty-based task-agent allocation framework, generalizing the concept and exploiting the potential of considering agents' specializations; a minimal numerical sketch of this idea follows below. Fourth, the general framework is further refined with a modulated definition of the agents' specialization based on their mechanical and physical structure and embedded resources. The original framework is extended, and a prioritization layer is also introduced to improve the system's response to complex tasks that are characterized based on the recognition of multiple classes.
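Below is the minimal numerical sketch referenced above: task-allocation probabilities obtained by normalizing specialty-match scores per task. The capability encoding and softmax scoring are illustrative assumptions rather than the thesis's exact formulation.

```python
# Minimal illustrative sketch (not the thesis's formulation): compute
# task-allocation probabilities from how well each agent's specialty
# vector matches the feature requirements of each detected task.
import numpy as np

# Rows: agents; columns: specialized capabilities (e.g., grip, sense, lift).
agent_specialties = np.array([[0.9, 0.1, 0.3],
                              [0.2, 0.8, 0.5],
                              [0.4, 0.4, 0.9]])
# Rows: detected tasks; columns: required capabilities.
task_requirements = np.array([[1.0, 0.0, 0.2],
                              [0.1, 0.9, 0.4]])

scores = agent_specialties @ task_requirements.T       # match scores
probs = np.exp(scores) / np.exp(scores).sum(axis=0)    # per-task softmax

for t in range(task_requirements.shape[0]):
    print(f"task {t}: allocation probabilities over agents = {probs[:, t]}")
```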
Experimental validation of the proposed specialty-based task allocation approach is conducted in simulation and on real-world experiments, and the results are presented and discussed in light of potential applications to demonstrate the effectiveness and efficiency of the proposed framework.
19. Algorithmic Bayesian modeling of visual word recognition and visual attention. Phenix, Thierry, 15 January 2018.
In this thesis, we propose an original theoretical framework for visual word recognition and implement it mathematically in order to evaluate its ability to reproduce the experimental observations of the field. A critical review of existing computational models leads us to define specifications in the form of five hypotheses, which form the basis of the proposed framework: the model is built on a three-level architecture (sensory, perceptual, lexical); processing is parallel over all the letters of the stimulus; positional coding is distributed; and, finally, sensory processing of letters integrates gaze position, visual acuity, and the distribution of visual attention. To implement the model, we rely on the Bayesian algorithmic modeling methodology, leading to the BRAID model (for "Bayesian word Recognition with Attention, Interference and Dynamics").

We verify the model's ability to account for data on letter perceptibility (e.g., word and pseudo-word superiority effects, context effects), word recognition, and lexical decision (e.g., frequency and neighborhood effects). In total, we successfully simulate 28 behavioral experiments, accounting for subtle effects in each of the targeted domains. We discuss the model's theoretical choices in light of these experimental results and propose directions for extending the model, highlighting the flexibility of the chosen formalism.
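As a generic illustration of the kind of inference such a model performs, a naive-Bayes caricature of Bayesian word recognition can be written as follows (BRAID's three-level, attention-weighted dynamics are far richer than this):

```latex
% Naive-Bayes caricature of Bayesian word recognition: posterior over
% candidate words W given noisy percepts p_1, ..., p_n of the n letter
% slots, where \ell_i(W) is the letter of W at position i and P(W) is a
% lexical prior (e.g., word frequency):
P(W \mid p_{1:n}) \;\propto\; P(W) \prod_{i=1}^{n} P\big(p_i \mid \ell_i(W)\big)
```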
20. Micromechanical modeling of cleavage fracture in polycrystalline materials. Stec, Mateusz, January 2008.
Cleavage fracture in ferritic steels can be described as a sequence of a few critical steps. First, a microcrack nucleates, often in a hard inclusion. This microcrack then propagates into the surrounding matrix material. The last obstacle before failure is the encounter of grain boundaries. If the microcrack is not arrested at any of these steps, cleavage takes place. Temperature plays an important role, since it changes the failure mode from ductile to brittle within a narrow temperature interval.

In Papers A and B, micromechanical models of the last critical phase (cleavage across a grain boundary) are developed in order to examine the mechanics of this phase. An extensive parameter study is performed in Paper A, where the cleavage planes of two grains are allowed to tilt relative to each other. It is shown there that triaxiality has a significant effect on the largest grain size that can arrest a rapidly propagating microcrack. This effect is explained by the development of the plastic zone prior to crack growth. The effect of temperature, addressed through a change in the visco-plastic response of the ferrite, shows that the critical grain size increases with temperature. This implies that with increasing temperature more cracks can be arrested, that is, fewer can become critical, and thus the resistance to fracture increases. Paper B presents simulations of microcrack propagation when the cleavage planes of two neighboring grains are tilted and twisted relative to each other. It is shown that when a microcrack enters a new grain, it first does so along primary cleavage planes. During further growth the crack front protrudes along the primary planes and lags behind along the secondary ones. The effects of tilt and twist on the critical grain size are decoupled, with twist misorientation offering the greater resistance to propagation.

Simulations of particle cracking and of microcrack growth across an inclusion-matrix interface are presented in Paper C. It is shown that the particle stress can be expressed by an Eshelby-type expression modified for plasticity. The analysis of dynamic growth results in a modified Griffith expression. Both findings are implemented into a micromechanics-based probabilistic model for cleavage that is of weakest-link type and incorporates all critical phases of cleavage: crack nucleation, propagation across the particle-matrix interface, and propagation into consecutive grains.

The proposed model depends on six parameters, which are obtained for three temperatures in Paper D using experimental data from SE(B) tests. At the lowest temperature, -30 °C, the model gives an excellent prediction of the cumulative probability of failure by cleavage fracture and captures the threshold toughness and the experimental scatter. At 25 °C and 55 °C the model slightly overestimates the fracture probability. In Paper E, a series of fracture experiments is performed on half-elliptical surface cracks at 25 °C in order to further verify the model. The experiments show a significant scatter in the fracture toughness. The model significantly overestimates the fracture probability for this crack geometry.
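For orientation, the classical Griffith expression that Paper C's dynamic analysis modifies has the familiar textbook form (plane-stress version shown; the thesis derives the modified variant):

```latex
% Classical Griffith criterion: critical remote stress for an internal
% crack of half-length a in a brittle solid with Young's modulus E and
% surface energy \gamma_s (plane stress):
\sigma_c = \sqrt{ \frac{2 E \gamma_s}{\pi a} }
```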