91 |
Advances in the stochastic and deterministic analysis of multistable biochemical networks. Petrides, Andreas. January 2018.
This dissertation is concerned with the potential multistability of protein concentrations in the cell that can arise in biochemical networks: situations where a protein, or a family of proteins, may sit at one of two or more different steady-state concentrations in otherwise identical cells, despite the cells being in the same environment. Models of multisite protein phosphorylation have shown that this mechanism can exhibit unlimited multistability. Nevertheless, these models have not considered enzyme docking, the binding of enzymes to one or more substrate docking sites separate from the motif that is chemically modified. Enzyme docking is, however, increasingly being recognised as a means of achieving specificity in protein phosphorylation and dephosphorylation cycles. Most models in the literature for these systems are deterministic, i.e. based on Ordinary Differential Equations, despite the fact that these are accurate only in the limit of large molecule numbers. For small molecule numbers, a discrete, probabilistic (stochastic) approach is more suitable. However, compared to the tools available in the deterministic framework, the tools available for stochastic analysis offer inadequate visualisation and intuition. We first try to bridge that gap by developing three tools: a) a discrete 'nullclines' construct applicable to stochastic systems, an analogue of the ODE nullclines; b) a stochastic tool based on a Weakly Chained Diagonally Dominant M-matrix formulation of the Chemical Master Equation; and c) an algorithm that constructs non-reversible Markov chains with desired stationary probability distributions. We subsequently prove that, for multisite protein phosphorylation and similar models in the deterministic domain, enzyme docking and the consequent enzyme sequestration of the substrate must inevitably limit the extent of multistability, ultimately to a single steady state. In contrast, bimodality can be obtained in the stochastic domain even in situations where bistability is not possible for large molecule numbers. We finally extend our results to the case of an autophosphorylating kinase, as for example $Ca^{2+}$/calmodulin-dependent protein kinase II (CaMKII), a key enzyme in synaptic plasticity.
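As a rough illustration of the deterministic/stochastic contrast discussed above (and not of the thesis's multisite or enzyme-docking models), the sketch below builds the Chemical Master Equation generator of a hypothetical one-species phosphorylation cycle with Hill-type positive feedback, solves for its exact stationary distribution, and compares the distribution's modes with the fixed points of the corresponding ODE. All rate constants and the feedback form are invented for the example.

```python
import numpy as np

# Toy one-species phosphorylation cycle with Hill-type positive feedback
# (hypothetical rates; NOT the multisite/enzyme-docking models of the thesis).
# n = number of phosphorylated molecules out of N total.
N = 60
k0, k1, K, h = 0.01, 4.0, 30.0, 4     # basal and feedback phosphorylation rates
gamma = 1.0                            # dephosphorylation rate per molecule

def birth(n):   # propensity of the transition n -> n + 1
    return (N - n) * (k0 + k1 * n**h / (K**h + n**h))

def death(n):   # propensity of the transition n -> n - 1
    return gamma * n

# Chemical Master Equation generator Q (tridiagonal for this one-species cycle);
# matrices of this type underlie the M-matrix formulation mentioned above.
Q = np.zeros((N + 1, N + 1))
for n in range(N + 1):
    if n < N:
        Q[n, n + 1] = birth(n)
    if n > 0:
        Q[n, n - 1] = death(n)
    Q[n, n] = -Q[n].sum()

# Exact stationary distribution: solve p^T Q = 0 with sum(p) = 1.
A = Q.T.copy()
A[-1, :] = 1.0
b = np.zeros(N + 1)
b[-1] = 1.0
p = np.linalg.solve(A, b)

# Local maxima of the stationary distribution (the "stochastic" modes).
modes = [n for n in range(N + 1)
         if (n == 0 or p[n] >= p[n - 1]) and (n == N or p[n] > p[n + 1])]
print("modes of the CME stationary distribution:", modes)

# Deterministic comparison: fixed points of dx/dt = birth(x) - death(x).
x = np.linspace(0.0, N, 6001)
f = birth(x) - death(x)
fixed_points = x[:-1][np.sign(f[:-1]) != np.sign(f[1:])]
print("approximate ODE fixed points:", np.round(fixed_points, 1))
```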
|
92 |
O conceito de estabilizabilidade fraca para sistemas lineares com saltos Markovianos / The weak stabilizability concept for linear systems with Markov jumps. Manfrim, Amanda Liz Pacífico. 08 March 2006.
This work introduces weak controllability and weak stabilizability concepts for discrete-time Markov jump linear systems. We first construct a collection of matrices C that resembles the controllability matrices of deterministic linear systems. This collection allows us to define a weak controllability concept, by requiring that the matrices have full rank, as well as to introduce a weak stabilizability concept that is dual to the weak detectability concept found in the Markov jump systems literature. An important feature of the new concept is that it generalizes the mean square stabilizability concept previously found in the literature. The role that weak stabilizability plays in the filtering problem is investigated via case studies, developed in the context of Kalman filtering with observation of the Markov parameter; they suggest that weak stabilizability together with mean square detectability ensures that the state estimator is mean square stable.
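The filtering case studies mentioned above can be pictured with a small, generic sketch: a two-mode discrete-time Markov jump linear system whose mode is observed, tracked by the standard mode-dependent (time-varying) Kalman filter. The system matrices, noise covariances and transition probabilities below are invented for illustration, and the code does not reproduce the thesis's collection of matrices C or its stabilizability tests.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-mode Markov jump linear system (made-up matrices):
#   x_{k+1} = A[m_k] x_k + w_k,   y_k = C[m_k] x_k + v_k,
# where the Markov mode m_k is observed by the filter.
A = [np.array([[0.9, 0.2], [0.0, 0.7]]),
     np.array([[1.1, 0.0], [0.3, 0.5]])]
C = [np.array([[1.0, 0.0]]),
     np.array([[0.0, 1.0]])]
P_trans = np.array([[0.9, 0.1],    # mode transition probabilities
                    [0.2, 0.8]])
Qw, Rv = 0.05 * np.eye(2), 0.1 * np.eye(1)

def kalman_step(xhat, Pcov, y, m):
    """One predict/update step of the mode-dependent Kalman filter."""
    xpred = A[m] @ xhat
    Ppred = A[m] @ Pcov @ A[m].T + Qw
    S = C[m] @ Ppred @ C[m].T + Rv
    Kg = Ppred @ C[m].T @ np.linalg.inv(S)
    xhat = xpred + Kg @ (y - C[m] @ xpred)
    Pcov = (np.eye(2) - Kg @ C[m]) @ Ppred
    return xhat, Pcov

# Simulate the Markov chain, the state and the filter together.
m, x = 0, np.array([1.0, -1.0])
xhat, Pcov = np.zeros(2), np.eye(2)
for k in range(200):
    m = rng.choice(2, p=P_trans[m])                          # mode jump
    x = A[m] @ x + rng.multivariate_normal(np.zeros(2), Qw)  # state update
    y = C[m] @ x + rng.multivariate_normal(np.zeros(1), Rv)  # noisy output
    xhat, Pcov = kalman_step(xhat, Pcov, y, m)
print("final estimation error:", np.round(x - xhat, 3))
```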
|
94 |
Dynamic Resampling for Preference-based Evolutionary Multi-objective Optimization of Stochastic Systems: Improving the efficiency of time-constrained optimization. Siegmund, Florian. January 2016.
In preference-based Evolutionary Multi-objective Optimization (EMO), the decision maker is looking for a diverse but locally focused non-dominated front in a preferred area of the objective space, as close as possible to the true Pareto front. Since solutions found outside the area of interest are considered less important or even irrelevant, the optimization can focus its efforts on the preferred area and find the solutions that the decision maker is looking for more quickly, i.e., with fewer simulation runs. This is particularly important if the available time for optimization is limited, as is the case in many real-world applications. Although previous studies using this kind of guided search with preference information, for example with the R-NSGA-II algorithm, have shown positive results, only very few of them considered the stochastic outputs of simulated systems. In the literature, this phenomenon of stochastic evaluation functions is sometimes called noisy optimization. If an EMO algorithm is run without any countermeasure against noisy evaluation functions, its performance will deteriorate compared to the case where the true mean objective values are known. While, in general, static resampling of solutions to reduce the uncertainty of all evaluated design solutions can allow EMO algorithms to avoid this problem, it significantly increases the required simulation time/budget, as many samples are wasted on inferior candidate solutions. In comparison, a Dynamic Resampling (DR) strategy allows the exploration and exploitation trade-off to be optimized, since the required accuracy of the objective values varies between solutions. In a dense, converged population, it is important to know the accurate objective values, whereas noisy objective values are less harmful when an algorithm is exploring the objective space, especially early in the optimization process. Therefore, a well-designed Dynamic Resampling strategy, which resamples each solution carefully according to its resampling need, can help an EMO algorithm achieve better results than a static resampling allocation. While there are abundant studies in Simulation-based Optimization that consider Dynamic Resampling, the survey done in this study found no related work that considers how combinations of Dynamic Resampling and preference-based guided search can further enhance the performance of EMO algorithms, especially when the problems under study involve computationally expensive evaluations, such as production systems simulation. The aim of this thesis is therefore to study, design and compare new combinations of preference-based EMO algorithms with various DR strategies, in order to improve the solution quality found by simulation-based multi-objective optimization with stochastic outputs under a limited function evaluation or simulation budget. Specifically, based on the advantages and flexibility offered by interactive, reference point-based approaches, the performance enhancements of R-NSGA-II when augmented with various DR strategies, with increasing degrees of statistical sophistication, as well as several adaptive features in terms of optimization parameters, have been studied. The research results have clearly shown that optimization results can be improved if a hybrid DR strategy is used and adaptive algorithm parameters are chosen according to the noise level and problem complexity.
In the case of a limited simulation budget, the results support the conclusion that both decision-maker preferences and DR should be used at the same time to achieve the best results in simulation-based multi-objective optimization.
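A simplified, illustrative version of the dynamic-resampling idea (not any of the specific DR strategies developed in the thesis) is sketched below: a solution is re-evaluated until the standard error of its estimated objectives falls below a threshold that is tightened for solutions close to a hypothetical decision-maker reference point, so that the sampling budget concentrates on the preferred region. The objective functions, noise levels and parameters are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

def noisy_objectives(x):
    """Stand-in for a stochastic simulation: two objectives plus noise."""
    f1 = x[0]**2 + rng.normal(0, 0.3)
    f2 = (x[0] - 2.0)**2 + rng.normal(0, 0.3)
    return np.array([f1, f2])

def dynamic_resample(x, ref_point, b_min=2, b_max=20, se_target=0.05):
    """Sample one solution sequentially until its mean objectives are accurate
    enough, spending more budget on solutions that are both noisy and close to
    the reference point (a simplified, illustrative allocation rule)."""
    samples = [noisy_objectives(x) for _ in range(b_min)]
    while len(samples) < b_max:
        mean = np.mean(samples, axis=0)
        se = np.std(samples, axis=0, ddof=1) / np.sqrt(len(samples))
        # Closeness to the reference point tightens the accuracy requirement.
        closeness = 1.0 / (1.0 + np.linalg.norm(mean - ref_point))
        if se.max() <= se_target / closeness:
            break
        samples.append(noisy_objectives(x))
    return np.mean(samples, axis=0), len(samples)

ref = np.array([0.5, 1.5])            # hypothetical reference point
for x in ([0.3], [1.0], [3.0]):
    mean, used = dynamic_resample(np.array(x), ref)
    print(f"x={x}: mean objectives {np.round(mean, 2)}, samples used {used}")
```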
|
95 |
Learning in Partially Observable Markov Decision Processes. Sachan, Mohit. 21 August 2013.
Indiana University-Purdue University Indianapolis (IUPUI) / Learning in Partially Observable Markov Decision Processes (POMDPs) is motivated by the essential need to address a number of realistic problems. A number of methods exist for learning in POMDPs, but learning with a limited amount of information about the model of the POMDP remains a highly desirable capability. Learning with minimal information is desirable in complex systems, as methods requiring complete information among decision makers are impractical in such systems due to the increase in problem dimensionality.
In this thesis we address the problem of decentralized control of POMDPs with unknown transition probabilities and rewards. We suggest learning in the POMDP using a tree-based approach: the states of the POMDP are guessed using this tree. Each node in the tree contains an automaton and acts as a decentralized decision maker for the POMDP. The start state of the POMDP is known as the landmark state. Each automaton in the tree uses a simple learning scheme to update its action choice and requires minimal information. The principal result derived is that, without prior knowledge of transition probabilities and rewards, the automata tree of decision makers converges to a set of actions that maximizes the long-term expected reward per unit time obtained by the system. The analysis is based on learning in sequential stochastic games and on properties of ergodic Markov chains. Simulation results are presented to compare the long-term rewards of the system under different decision control algorithms.
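The "simple learning scheme" referred to above can be pictured with a classical learning automaton. The sketch below implements a linear reward-inaction update against a toy two-action environment; the step size, reward probabilities and environment are invented for illustration, and the thesis's automata-tree construction and convergence analysis are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(2)

class LearningAutomaton:
    """A linear reward-inaction automaton: an illustrative example of the kind
    of simple, low-information learning scheme a tree node could use (the
    thesis's exact update rule is not reproduced here)."""
    def __init__(self, n_actions, step=0.05):
        self.p = np.full(n_actions, 1.0 / n_actions)  # action probabilities
        self.step = step

    def choose(self):
        self.last = rng.choice(len(self.p), p=self.p)
        return self.last

    def update(self, reward):
        # Reward-inaction: shift probability toward the chosen action only
        # when the (binary) environment response is favourable.
        if reward:
            a = self.last
            self.p *= (1.0 - self.step)
            self.p[a] += self.step

# Toy stationary environment: action 0 is rewarded more often than action 1.
reward_prob = [0.8, 0.3]
la = LearningAutomaton(n_actions=2)
for _ in range(2000):
    a = la.choose()
    la.update(rng.random() < reward_prob[a])
print("learned action probabilities:", np.round(la.p, 3))
```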
|
96 |
Evaluating the expressiveness of specification languages: for stochastic safety-critical systems. Jamil, Fahad Rami. January 2024.
This thesis investigates the expressiveness of specification languages for stochastic safety-critical systems, addressing the need for expressiveness when describing system behaviour formally. Through a case study and enhancements to the specification languages, the research explores the impact of different frameworks on a set of specifications. The results highlight the importance of continuously developing the specification languages so that they can capture the complex behaviours of systems with probabilistic properties. The findings emphasise the need to extend the chosen specification languages more formally, to ensure that the languages can capture the complexity of the systems they describe. The research contributes valuable insights into improving the expressiveness of specification languages for ensuring system safety and operational reliability.
|
97 |
Bayes Filters with Improved Measurements for Visual Object Tracking / Bayes Filter mit verbesserter Messung für das Tracken visueller Objekte. Liu, Guoliang. 20 March 2012.
No description available.
|