91 |
Algorithms For Stochastic Games And Service Systems
Prasad, H L 05 1900 (has links) (PDF)
This thesis is organized into two parts: the first covers my main area of research, stochastic games, and the second my contributions in the area of service systems. I first provide an abstract of my work on stochastic games.
The field of stochastic games has been actively pursued over the last seven decades because of several important applications in oligopolistic economics. In the past, zero-sum stochastic games have been modelled and solved for Nash equilibria using the standard techniques of Markov decision processes. General-sum stochastic games, on the contrary, have posed difficulty as they cannot be reduced to Markov decision processes. Over the past few decades the quest for algorithms to compute Nash equilibria in general-sum stochastic games has intensified, and several important algorithms, such as the stochastic tracing procedure [Herings and Peeters, 2004], NashQ [Hu and Wellman, 2003], FFQ [Littman, 2001], etc., and their generalised representations such as the optimization problem formulations for various reward structures [Filar and Vrieze, 1997], have been proposed. However, these either lack generality or are intractable even for medium-sized problems, or both. In our venture towards algorithms for stochastic games, we start with a non-linear optimization problem and design a simple gradient descent procedure for it. Though this procedure finds the Nash equilibrium for a sample terrain-exploration problem, we observe that this need not hold in general. We characterize the necessary conditions and define the notion of a KKT-N point: a Karush-Kuhn-Tucker (KKT) point that corresponds to a Nash equilibrium. Thus, for a simple gradient-based algorithm to guarantee convergence to a Nash equilibrium, all KKT points of the optimization problem need to be KKT-N points, which restricts the applicability of such algorithms.
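The gradient procedure over mixed strategies can be pictured as projected gradient descent over the probability simplex. The sketch below is a minimal illustration with a toy quadratic objective standing in for the game's optimization problem; the function names and the target strategy are illustrative, not the thesis's actual formulation:

```python
import numpy as np

def project_simplex(v):
    # Euclidean projection of v onto the probability simplex {x >= 0, sum(x) = 1}
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1 - css) / (np.arange(len(v)) + 1) > 0)[0][-1]
    theta = (1 - css[rho]) / (rho + 1)
    return np.maximum(v + theta, 0)

def projected_gradient(grad, x0, step=0.1, iters=200):
    # Gradient step followed by projection back onto the simplex
    x = project_simplex(np.asarray(x0, dtype=float))
    for _ in range(iters):
        x = project_simplex(x - step * grad(x))
    return x

# toy objective: squared distance to a target strategy inside the simplex
target = np.array([0.5, 0.3, 0.2])
x = projected_gradient(lambda x: 2 * (x - target), np.array([1.0, 0.0, 0.0]))
```

The thesis's point is precisely that such a descent procedure reaches a Nash equilibrium only when the KKT point it converges to happens to be a KKT-N point.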
We then take a step back and look for a better characterization of those points of the optimization problem which correspond to Nash equilibria of the underlying game. As a result of this exploration, we derive two sets of necessary and sufficient conditions. The first set, the KKT-SP conditions, is inspired by the KKT conditions themselves and is obtained by breaking the main optimization problem down into several sub-problems and applying the KKT conditions to each of them. The second set, the SG-SP conditions, is a simplified set of conditions which characterizes these Nash points more compactly. Using the KKT-SP and SG-SP conditions, we propose three algorithms, OFF-SGSP, ON-SGSP and DON-SGSP, which we show provide Nash equilibrium strategies for general-sum discounted stochastic games. Here OFF-SGSP is an off-line algorithm while ON-SGSP and DON-SGSP are on-line algorithms. In particular, we believe that DON-SGSP is the first decentralized on-line algorithm for general-sum discounted stochastic games. We show that both our on-line algorithms are computationally efficient. In fact, DON-SGSP is not only applicable to multi-agent scenarios but is also directly applicable to the single-agent case, i.e., MDPs (Markov decision processes).
The second part of the thesis focuses on formulating and solving the problem of minimizing the labour cost in service systems. We define the setting of service systems and then model the labour-cost problem as a constrained discrete-parameter Markov-cost process, parametrized by the number of workers in the various shifts and with the various skill levels. With the numbers of workers as optimization variables, we provide a detailed formulation of a constrained optimization problem in which the objective is the expected long-run average of the single-stage labour costs, and the main set of constraints comprises the expected long-run averages of the aggregate SLAs (Service Level Agreements). For this constrained optimization problem, we provide two stochastic optimization algorithms, SASOC-SF-N and SASOC-SF-C, which use smoothed-functional approaches to estimate the gradient and perform gradient descent. SASOC-SF-N uses the Gaussian distribution for smoothing while SASOC-SF-C uses the Cauchy distribution. SASOC-SF-C is the first Cauchy-based smoothing algorithm which requires a fixed number (two) of simulations independent of the number of optimization variables. We show that these algorithms provide an order-of-magnitude better performance than an existing industry-standard tool, OptQuest, and that SASOC-SF-C gives better overall performance.
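The core idea of smoothed-functional gradient estimation is to perturb the parameters with random noise and infer the gradient from the resulting change in simulated cost. The sketch below is a generic two-sided Gaussian smoothed-functional estimator, not the exact SASOC update; the toy cost function and all names are illustrative:

```python
import numpy as np

def sf_gradient(J, theta, beta=0.1, samples=100, rng=None):
    # Two-sided Gaussian smoothed-functional gradient estimate:
    # grad J(theta) ~ E[ eta * (J(theta + beta*eta) - J(theta - beta*eta)) / (2*beta) ]
    rng = np.random.default_rng(rng)
    g = np.zeros_like(theta, dtype=float)
    for _ in range(samples):
        eta = rng.standard_normal(theta.shape)
        g += eta * (J(theta + beta * eta) - J(theta - beta * eta)) / (2 * beta)
    return g / samples

# toy simulated cost: J(theta) = ||theta||^2, whose true gradient is 2*theta
theta = np.array([1.0, -2.0])
g = sf_gradient(lambda t: float(t @ t), theta, beta=0.05, samples=4000, rng=0)
```

Each estimate needs only cost evaluations (simulations), never an analytic gradient, which is what makes the approach usable with a discrete-event simulation of the service system.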
|
92 |
Předvídatelnost středoevropských akciových výnosů: Překonají Neuronové sítě moderní ekonomické analýzy? / On the predictability of Central European stock returns: Do Neural Networks outperform modern economic techniques?
Baruník, Jozef January 2006 (has links)
In this thesis we apply neural networks, as nonparametric and nonlinear methods, to modelling the returns of the Central European stock markets (Czech, Polish, Hungarian and German). In the first two chapters we define the prediction task and link classical econometric analysis to neural networks. We also present the optimization methods used in the tests: conjugate gradients, Levenberg-Marquardt, and an evolutionary search method. Further on, we present statistical methods for comparing the predictive accuracy of non-nested models, as well as measures of economic significance. In the empirical tests we first show the power of neural networks on the Mackey-Glass chaotic time series, followed by real-world data: the daily and weekly returns of the mentioned stock exchanges for the 2000-2006 period. We find that neural networks have significantly lower prediction error than classical models for the daily DAX series and the weekly PX-50 and BUX series. Lags of the time series were used, and cross-country predictability was also tested, but the results were not significantly different. We also achieved economic significance of the predictions for both the daily and weekly PX-50, BUX and DAX, with 60% prediction accuracy. Finally, we use a neural network to learn the Black-Scholes model and compare the pricing errors of...
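Both the classical models and the neural networks in this setting are fed lagged returns as inputs. A minimal sketch of constructing the lag matrix and fitting a linear autoregression as a baseline (the sine series is only a stand-in for a return series; names are illustrative):

```python
import numpy as np

def lag_matrix(returns, n_lags):
    # Row t holds [r_{t-1}, ..., r_{t-n_lags}]; the target is r_t
    r = np.asarray(returns, dtype=float)
    X = np.column_stack([r[n_lags - 1 - k : len(r) - 1 - k] for k in range(n_lags)])
    y = r[n_lags:]
    return X, y

returns = np.sin(np.arange(200) * 0.3)  # stand-in for a daily return series
X, y = lag_matrix(returns, n_lags=5)
coef, *_ = np.linalg.lstsq(X, y, rcond=None)  # linear AR baseline
mse = float(np.mean((X @ coef - y) ** 2))
```

A neural network would replace the least-squares fit with a nonlinear map from the same lagged inputs, which is where the comparison of prediction errors in the thesis comes from.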
|
93 |
Mravenčí kolonie / Ant colony
Hart, Pavel January 2008 (has links)
The first part of the thesis is a literature survey of optimization algorithms. Three of the algorithms were implemented and tested, namely the ant colony algorithm, tabu search and simulated annealing, all three applied to the traveling salesman problem. In the second part of the thesis the algorithms were tested and compared. In the last part the influence of the ant colony parameters was evaluated.
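Of the three metaheuristics, simulated annealing is the most compact to sketch. Below is a minimal, self-contained version for the traveling salesman problem using 2-opt moves; the parameters and the circle instance are illustrative, not taken from the thesis:

```python
import math
import random

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def simulated_annealing_tsp(dist, T0=10.0, cooling=0.995, iters=20000, seed=0):
    rng = random.Random(seed)
    n = len(dist)
    tour = list(range(n))
    rng.shuffle(tour)
    cur_len = tour_length(tour, dist)
    best, best_len, T = tour[:], cur_len, T0
    for _ in range(iters):
        i, j = sorted(rng.sample(range(n), 2))
        cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]  # 2-opt segment reversal
        d = tour_length(cand, dist) - cur_len
        # accept improvements always, worsening moves with probability exp(-d/T)
        if d < 0 or rng.random() < math.exp(-d / T):
            tour, cur_len = cand, cur_len + d
            if cur_len < best_len:
                best, best_len = tour[:], cur_len
        T *= cooling  # geometric cooling schedule
    return best, best_len

# toy instance: 8 cities on a unit circle, whose optimal tour follows the circle
pts = [(math.cos(2 * math.pi * k / 8), math.sin(2 * math.pi * k / 8)) for k in range(8)]
dist = [[math.dist(p, q) for q in pts] for p in pts]
best, best_len = simulated_annealing_tsp(dist)
```

Ant colony optimization and tabu search share the same tour representation and cost function; only the move-selection logic differs.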
|
94 |
Topology optimization of truss-like structures, from theory to practice
Richardson, James 21 November 2013
The goal of this thesis is the development of theoretical methods targeting the implementation of topology optimization in structural engineering applications. In civil engineering, structures are typically assemblies of many standardized components, such as bars, and the largest gains in efficiency can be made during the preliminary design of the overall structure. The work is aimed mainly at truss-like structures in civil engineering applications; however, several of the developments are general enough to encompass continuum structures and other areas of engineering research too. The research aims to address the following challenges:

- Discrete variable optimization, generally necessary for truss problems in civil engineering, tends to be computationally very expensive;
- the gap between industrial applications in civil engineering and optimization research is quite large, meaning that the developed methods are currently not fully embraced in practice; and
- industrial applications demand robust and reliable solutions to the real-world problems faced by the civil engineering profession.

In order to face these challenges, the research is divided into several research papers, included as chapters in the thesis.

Discrete binary variables in structural topology optimization often lead to very large computational cost and sometimes even failure of algorithm convergence. A novel method, so-called Kinematic Stability Repair (KSR), was developed for improving the performance of topology optimization of truss-like structures with discrete design variables. Two typical examples of topology optimization problems with binary variables are bracing systems and steel grid shell structures. These important industrial applications of topology optimization are investigated in the thesis. A novel method is developed for topology optimization of grid shells whose global shape has been determined by form-finding.

Furthermore, a novel technique for façade bracing optimization is developed. In this application a multiobjective approach was used to give the designers freedom to make changes as the design advanced through the various stages of the design process. The application of the two methods to practical engineering problems inspired a theoretical development which has wide-reaching implications for discrete optimization: the pitfalls of symmetry reduction. A seemingly self-evident method of cardinality reduction makes use of the geometric symmetry of a structure in order to reduce the problem size. The research shows that this assumption is not valid for discrete variable problems: despite intuition to the contrary, for symmetric problems, asymmetric solutions may outperform their symmetric counterparts.

In reality many uncertainties exist on the geometry, loading and material properties of structural systems, which affects the performance (robustness) of the non-ideal, realized structure. To address this, a general robust topology optimization framework for both continuum and truss-like structures is introduced, along with a novel analysis technique for truss structures under material uncertainties. Next, this framework is extended to discrete variable, multiobjective optimization problems of truss structures, taking uncertainties on the material stiffness and the loading into account. The two corresponding chapters were submitted as papers to the journals Computers and Structures and Structural and Multidisciplinary Optimization. Finally, a concluding chapter summarizes the main findings of the research, and a number of appendices clarify several pertinent issues. / Doctorat en Sciences de l'ingénieur / info:eu-repo/semantics/nonPublished
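The symmetry-reduction pitfall can be seen on a deliberately tiny toy problem (not from the thesis): two mirror-image bars with discrete areas and a schematic "stiffness" constraint. Restricting the search to symmetric designs excludes the true optimum:

```python
from itertools import product

# Toy mirror-symmetric design problem: bar areas a1, a2 drawn from {1, 2},
# schematic stiffness requirement a1 + a2 >= 3, weight a1 + a2 to be minimized.
designs = list(product([1, 2], repeat=2))
feasible = [d for d in designs if sum(d) >= 3]

best = min(feasible, key=sum)                       # search the full discrete space
symmetric = [d for d in feasible if d[0] == d[1]]   # symmetry-reduced search space
best_symmetric = min(symmetric, key=sum)
```

The unrestricted optimum has weight 3 and is necessarily asymmetric (one bar of area 1, one of area 2), while the best symmetric design (2, 2) has weight 4; in a continuous formulation the symmetric point (1.5, 1.5) would be optimal, which is exactly why the intuition fails only in the discrete case.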
|
95 |
Energy Optimization Strategy for System-Operational Problems
Al-Ani, Dhafar S. 04 1900
- Energy Optimization Strategies
- Hydraulic Models for Water Distribution Systems
- Heuristic Multi-objective Optimization Algorithms
- Multi-objective Optimization Problems
- System Constraints
- Encoding Techniques
- Optimal Pumping Operations
- Solving Real-World Optimization Problems

The water supply industry is a very important element of a modern economy; it represents a key element of urban infrastructure and is an integral part of our modern civilization. Billions of dollars per annum are spent internationally on pumping operations in rural water distribution systems to treat and reliably transport water from source to consumers.

In this dissertation, a new multi-objective optimization approach, referred to as the energy optimization strategy, is proposed for minimizing the electrical energy consumption for pumping and its cost, the pump maintenance cost, and the cost of maximum power peak, while optimizing water quality and operational reliability in rural water distribution systems. Minimizing the energy cost considers the electrical energy consumed during regular operation as well as the cost of maximum power peak. Optimizing operational reliability is based on the ability of the network to provide service in case of abnormal events (e.g., network failure or fire) by considering and managing reservoir levels. Minimizing pumping costs also involves consideration of the network and pump maintenance cost implied by the number of pump switches. Water quality optimization is achieved through the consideration of chlorine residual during water transportation.

An Adaptive Parallel Clustering-based Multi-objective Particle Swarm Optimization (APC-MOPSO) algorithm is proposed, combining existing and new concepts: the Pareto front, operating-mode specification, a best-efficiency-point selection technique, a searching-for-gaps method, and modified K-means clustering. APC-MOPSO is employed to optimize the above-mentioned set of multiple objectives in operating rural water distribution systems.

Saskatoon West is a rural water distribution system owned and operated by SaskWater, a statutory Crown corporation providing water, wastewater and related services to municipal, industrial, government, and domestic customers in the province of Saskatchewan. It is used to provide water to the city of Saskatoon and surrounding communities. The system has six main components: (1) the pumping stations, namely Queen Elizabeth (QE) and Aurora; (2) the raw-water pipeline from QE to the Agrium area; (3) the treatment plant located within the Village of Vanscoy; (4) the raw-water pipeline serving four major consumers, including PCS Cogen, PCS Cory, Corman Park, and Agrium; (5) the treated-water pipeline serving the domestic community of the Village of Vanscoy; and (6) the large Agrium community storage reservoir.

In this dissertation, the Saskatoon West WDS is chosen to implement the proposed energy optimization strategy. Given the data supplied by SaskWater, this application has resulted in savings of approximately 7 to 14% in energy costs without adversely affecting the infrastructure of the system, while maintaining the same level of service provided to SaskWater's clients.

The implementation of the energy optimization strategy on the Saskatoon West WDS over 168 hours (a one-week optimization period) resulted in savings of approximately 10% in electrical energy cost and 4% in the cost of maximum power peak. Moreover, the results showed that the pumping reliability is improved by 3.5% (i.e., improved efficiency, head pressure, and flow rate). A case study is used to demonstrate the effectiveness of the multi-objective formulations and the solution methodologies, including the formulation of the system-operational optimization problem with five objective functions. Besides the reduction in energy costs, water quality, network reliability, and pumping characteristics are all concurrently enhanced, as shown in the collected results. The benefits of using the proposed energy optimization strategy as a replacement for many existing optimization methods are also demonstrated. / Doctor of Science (PhD)
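At the heart of any multi-objective method such as APC-MOPSO is the Pareto-dominance test used to maintain the front of non-dominated solutions. A minimal sketch (the cost tuples are hypothetical evaluations, e.g. energy cost versus pump-switch count, not data from the dissertation):

```python
def dominates(a, b):
    # a dominates b (minimization): no worse in every objective, strictly better in at least one
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    # Keep every point that no other point dominates
    return [p for p in points if not any(dominates(q, p) for q in points if q is not p)]

# hypothetical (energy cost, pump switches) evaluations of four pump schedules
costs = [(10.0, 5.0), (8.0, 6.0), (9.0, 4.0), (12.0, 7.0)]
front = pareto_front(costs)
```

The swarm's archive of such non-dominated schedules is what the clustering step in APC-MOPSO then organizes to keep the front well spread.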
|
96 |
Optimization Algorithm Based on Novelty Search Applied to the Treatment of Uncertainty in Models
Martínez Rodríguez, David 23 December 2021
Novelty Search is a recent paradigm in evolutionary and bio-inspired optimization algorithms, based on the idea of forcing the search into those unexplored parts of the domain of the function that might be unattractive for the algorithm, with the aim of avoiding stagnation in local optima. Novelty Search has been applied to the Particle Swarm Optimization algorithm, yielding a new algorithm named Novelty Swarm (NS). NS has been applied to the CEC2005 benchmark, comparing its results with other state-of-the-art algorithms. The results show better behaviour on highly nonlinear functions at the cost of increased computational complexity. In the rest of the thesis, the NS algorithm has been applied to different models, specifically the design of an internal combustion engine, energy demand estimation with Grammatical Swarm, the evolution of the bladder cancer of a specific patient, and the evolution of COVID-19. It is also remarkable that, in the study of the COVID-19 models, the uncertainty of both the data and the evolution of the disease has been taken into account. / Martínez Rodríguez, D. (2021). Optimization Algorithm Based on Novelty Search Applied to the Treatment of Uncertainty in Models [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/178994
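The novelty criterion is commonly scored as the mean distance from a candidate to its k nearest neighbours in an archive of previously visited positions; candidates far from everything seen so far are "novel" and worth exploring. A minimal sketch of that score (the archive contents are illustrative, and the exact novelty definition in NS may differ in detail):

```python
import numpy as np

def novelty(x, archive, k=3):
    # Novelty of a candidate = mean Euclidean distance to its k nearest archived positions
    d = np.sort(np.linalg.norm(np.asarray(archive) - np.asarray(x), axis=1))
    return float(np.mean(d[:k]))

archive = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]
n_near = novelty([0.1, 0.1], archive)  # close to explored region: low novelty
n_far = novelty([5.0, 5.0], archive)   # far from everything seen: high novelty
```

In a novelty-driven swarm, this score (rather than, or in addition to, the objective value) steers particles toward unexplored regions of the domain.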
|
97 |
Dynamic Programming Approaches for Estimating and Applying Large-scale Discrete Choice Models
Mai, Anh Tien 12 1900
People go through their lives making all kinds of decisions, some of which affect their demand for transportation: for example, where to live and where to work, how and when to travel, and which route to take. Transport-related choices are typically time dependent and characterized by a large number of alternatives that can be spatially correlated. This thesis deals with models that can be used to analyze and predict discrete choices in large-scale networks. The proposed models and methods are highly relevant for, but not limited to, transport applications.
We model decisions as sequences of choices within the dynamic discrete choice framework, also known as parametric Markov decision processes. Such models are known to be difficult to estimate and to apply for prediction, because dynamic programming problems need to be solved in order to compute the choice probabilities. In this thesis we show that it is possible to exploit the network structure and the flexibility of dynamic programming, so that the dynamic discrete choice modelling approach is not only useful for time-dependent choices but also makes it easier to model large-scale static choices.
The thesis consists of seven articles containing a number of models and methods for estimating, applying and testing large-scale discrete choice models. In the following we group the contributions under three themes: route choice modeling, large-scale multivariate extreme value (MEV) model estimation and nonlinear optimization algorithms.
Five articles are related to route choice modeling. We propose different dynamic discrete choice models that allow path utilities to be correlated, based on the MEV and mixed logit models. The resulting route choice models become expensive to estimate, and we deal with this challenge by proposing innovative methods that reduce the estimation cost. For example, we propose a decomposition method that not only opens up the possibility of mixing, but also speeds up the estimation of simple logit models, which has implications for traffic simulation as well. Moreover, we compare the utility-maximization and regret-minimization decision rules, and we propose a misspecification test for logit-based route choice models.
The second theme is related to the estimation of static discrete choice models with large choice sets.
We establish that a class of MEV models can be reformulated as dynamic discrete choice models on the networks of correlation structures. These dynamic models can then be estimated quickly using dynamic programming techniques and an efficient nonlinear optimization algorithm.
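The dynamic-programming view amounts to computing an expected-maximum-utility value function by backward induction on a network: the value at a node is the log-sum of exponentiated (arc utility + downstream value) over its successors, and link-choice probabilities follow from the values. A minimal sketch on a four-node acyclic toy network (all names and utilities are illustrative):

```python
import math

def expected_max_utility(succ, util, order, dest):
    # Backward induction on a DAG: V(dest) = 0 and
    # V(k) = log( sum over successors m of exp(util[(k, m)] + V(m)) )
    V = {dest: 0.0}
    for k in order:  # reverse-topological order, destination excluded
        V[k] = math.log(sum(math.exp(util[(k, m)] + V[m]) for m in succ[k]))
    return V

succ = {"a": ["b", "c"], "b": ["d"], "c": ["d"]}
util = {("a", "b"): -1.0, ("a", "c"): -1.0, ("b", "d"): -1.0, ("c", "d"): -2.0}
V = expected_max_utility(succ, util, order=["b", "c", "a"], dest="d")

# logit link-choice probability at "a": P(b | a) = exp(util(a,b) + V(b) - V(a))
p_ab = math.exp(util[("a", "b")] + V["b"] - V["a"])
```

On cyclic networks the same equations become a fixed-point system rather than a single backward pass, which is where the computational techniques of the thesis come in.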
Finally, the third theme focuses on structured quasi-Newton techniques for estimating discrete choice models by maximum likelihood. We examine and adapt switching methods that can be easily integrated into usual optimization algorithms (line search and trust region) to accelerate the estimation process.
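As context for this theme, the sketch below shows a plain BFGS quasi-Newton method with Armijo backtracking line search applied to a toy binary-logit negative log-likelihood. It is a generic baseline, not the thesis's structured or switching variants (which modify how the Hessian approximation is built); the data and all names are illustrative:

```python
import numpy as np

def bfgs(f, grad, x0, iters=200, tol=1e-8):
    # Plain BFGS with Armijo backtracking line search
    x = np.asarray(x0, dtype=float)
    n = len(x)
    H = np.eye(n)              # inverse-Hessian approximation
    g = grad(x)
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        p = -H @ g
        t, fx, slope = 1.0, f(x), g @ p
        while f(x + t * p) > fx + 1e-4 * t * slope:
            t *= 0.5
        s = t * p
        x_new = x + s
        g_new = grad(x_new)
        y = g_new - g
        if s @ y > 1e-12:      # curvature condition for a valid BFGS update
            rho = 1.0 / (s @ y)
            I = np.eye(n)
            H = (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) + rho * np.outer(s, s)
        x, g = x_new, g_new
    return x

# toy binary logit: maximize the likelihood by minimizing the negative log-likelihood
X = np.column_stack([np.ones(6), np.arange(6.0)])
y_obs = np.array([0.0, 0.0, 1.0, 0.0, 1.0, 1.0])

def nll(beta):
    z = X @ beta
    return float(np.sum(np.logaddexp(0.0, z) - y_obs * z))

def nll_grad(beta):
    p = 1.0 / (1.0 + np.exp(-(X @ beta)))
    return X.T @ (p - y_obs)

beta_hat = bfgs(nll, nll_grad, np.zeros(2))
```

Switching strategies of the kind studied in the thesis plug into exactly this loop, choosing at each iteration between the secant-based update and a structure-exploiting Hessian approximation.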
The proposed dynamic discrete choice models and estimation methods can be used in various discrete choice applications. In the area of big data analytics, models that can deal with large choice sets and sequential choices are important.
Our research can therefore be of interest in various demand analysis applications (predictive analytics) or can be integrated with optimization models (prescriptive analytics). Furthermore, our studies indicate the potential of dynamic programming techniques in this context, even for static models, which opens up a variety of future research directions.
|
98 |
Νέες μέθοδοι εκμάθησης για ασαφή γνωστικά δίκτυα και εφαρμογές στην ιατρική και βιομηχανία / New learning techniques to train fuzzy cognitive maps and applications in medicine and industry
Παπαγεωργίου, Ελπινίκη 25 June 2007 (has links)
The main contribution of this dissertation is the development of new learning and convergence methodologies for Fuzzy Cognitive Maps (FCMs), proposed to improve and adapt their behaviour and to increase their performance, establishing them as effective dynamic modelling systems. The new, improved Fuzzy Cognitive Maps, via the learning and adaptation of their weights, have been used in medicine for diagnosis and decision-making, as well as to alleviate the problem of potentially uncontrollable convergence to undesired states in models of industrial process control systems, with very satisfactory results.
This Dissertation presents, validates, and implements two new unsupervised learning algorithms for Fuzzy Cognitive Maps, Active Hebbian Learning (AHL) and Nonlinear Hebbian Learning (NHL), based on the classic unsupervised Hebb-type learning rule of neural networks, as well as a new learning approach for Fuzzy Cognitive Maps based on evolutionary algorithms, specifically Particle Swarm Optimization and Differential Evolution. The proposed AHL and NHL algorithms support new learning methodologies for FCMs that improve their operation, efficiency, and reliability, and give the experts who develop the FCM for each problem a way to learn parameters for fine-tuning the cause-effect relationships (weights) between the concepts. These types of learning, combined with sound knowledge of each problem-system, increase the performance of FCMs and extend their use. In addition to the unsupervised Hebb-type learning algorithms for FCMs, new FCM learning techniques based on evolutionary algorithms are developed and proposed. More specifically, a new learning methodology is proposed for applying the evolutionary Particle Swarm Optimization algorithm to the adaptation of FCMs, and more concretely to the determination of the optimal regions of FCM weight values. With this method, the experts' knowledge is taken into consideration in the form of restrictions on the concepts whose values are of interest, which are defined as output concepts, while the weights are confined to the numeric values of the fuzzy regions on which all the experts have agreed.
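The Hebb-type adaptation underlying NHL can be sketched as follows. This is a minimal illustration, assuming a sigmoid threshold function and a nonlinear Hebbian update with a decay factor, as commonly stated in the FCM learning literature; the map, activations, and constants below are invented for the example and are not the Dissertation's own values:

```python
import numpy as np

def sigmoid(x, lam=1.0):
    # Sigmoid threshold function commonly used in FCM inference.
    return 1.0 / (1.0 + np.exp(-lam * x))

def fcm_step(W, A):
    # Standard FCM inference: each concept aggregates weighted influences.
    return sigmoid(A @ W)

def nhl_step(W, A, eta=0.01, gamma=0.98):
    """One nonlinear Hebbian update of the weight matrix.

    W[j, i] is the influence of concept j on concept i; only the
    nonzero (expert-defined) interconnections are adapted.
    """
    W_new = W.copy()
    for j, i in zip(*np.nonzero(W)):
        # Hebb-type rule: move each weight toward correlated concept
        # activations, with a decay factor gamma for stability.
        W_new[j, i] = gamma * W[j, i] + eta * A[i] * (
            A[j] - np.sign(W[j, i]) * W[j, i] * A[i])
    return np.clip(W_new, -1.0, 1.0)

# Toy 3-concept map with expert-defined initial weights.
W = np.array([[0.0, 0.4, -0.3],
              [0.5, 0.0, 0.6],
              [-0.2, 0.3, 0.0]])
A = np.array([0.6, 0.4, 0.5])  # initial concept activations

for _ in range(20):
    A = fcm_step(W, A)
    W = nhl_step(W, A)
```

The sketch keeps the expert-defined sign pattern (absent interconnections stay zero) while the nonzero weights drift within [-1, 1], which is the sense in which NHL "fine-tunes" rather than relearns the map.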
Thus, by imposing restrictions on all the weights and on the output concepts, and by determining a suitable objective function for each problem, appropriate weight matrices result that can lead the system to desirable operating regions while simultaneously satisfying the specific conditions of the problem. The first two proposed unsupervised learning methods for FCMs are used and applied successfully to two complicated problems in medicine: decision-making in the radiotherapy process and tumour characterization of the urinary bladder in real clinical cases. All the proposed algorithms are also applied to models of industrial process-control systems, with very satisfactory results. As their application to concrete problems shows, these algorithms improve the FCM model, contribute to more intelligent systems, and extend their applicability to real and complex problems. The main contribution of the present Dissertation is the development of new learning and convergence methodologies for Fuzzy Cognitive Maps, proposing two new unsupervised learning algorithms, Active Hebbian Learning and Nonlinear Hebbian Learning, for adapting the weights of the interconnections between FCM concepts, as well as Evolutionary Algorithms optimizing concrete objective functions for each examined problem. The new, improved Fuzzy Cognitive Maps obtained via these weight-adaptation algorithms have been used to develop an integrated two-level hierarchical system to support decision-making in radiotherapy, a new diagnostic tool for tumour characterization of the urinary bladder, and solutions to industrial process-control problems.
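As an illustration of the constrained-PSO idea, the following is a minimal sketch: a hypothetical three-concept map, invented expert intervals for the weights, and an invented target band for the single output concept (none of these values come from the Dissertation). A standard global-best PSO searches only inside the expert-agreed box, and the objective penalizes steady-state output values outside the desired band:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical 3-concept map; concept 2 is the output concept.
EDGES = [(0, 1), (1, 2), (0, 2)]                       # j -> i interconnections
BOUNDS = np.array([(0.2, 0.6), (0.3, 0.8), (-0.5, -0.1)])  # expert-agreed fuzzy regions
TARGET = (0.68, 0.74)                                  # desired band for the output concept

def steady_state(weights, steps=50):
    W = np.zeros((3, 3))
    for (j, i), w in zip(EDGES, weights):
        W[j, i] = w
    A = np.full(3, 0.5)
    for _ in range(steps):
        A = sigmoid(A @ W + A)   # FCM inference with a self-memory term
    return A

def objective(weights):
    out = steady_state(weights)[2]
    lo_t, hi_t = TARGET
    # Zero inside the desired band, quadratic penalty outside it.
    return max(0.0, lo_t - out) ** 2 + max(0.0, out - hi_t) ** 2

# Minimal global-best PSO over the box defined by the expert intervals.
n_particles, dim = 20, len(EDGES)
lo, hi = BOUNDS[:, 0], BOUNDS[:, 1]
X = rng.uniform(lo, hi, (n_particles, dim))
V = np.zeros_like(X)
P, P_val = X.copy(), np.array([objective(x) for x in X])
g = P[P_val.argmin()].copy()

for _ in range(100):
    r1, r2 = rng.random((2, n_particles, dim))
    V = 0.7 * V + 1.5 * r1 * (P - X) + 1.5 * r2 * (g - X)
    X = np.clip(X + V, lo, hi)   # keep every weight inside its fuzzy region
    vals = np.array([objective(x) for x in X])
    better = vals < P_val
    P[better], P_val[better] = X[better], vals[better]
    g = P[P_val.argmin()].copy()
```

After the loop, `g` is a weight vector that respects the experts' intervals and drives the output concept into the target band; the per-problem objective function is exactly the piece that changes from application to application.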
|
99 |
Virtual reality therapy for Alzheimer’s disease with speech instruction and real-time neurofeedback system
Ai, Yan 05 1900 (has links)
Alzheimer's disease (AD) is a degenerative brain disease that causes progressive memory loss and cognitive decline, and gradually deteriorates a person's ability to cope with the complexity and demands of the daily tasks required to live autonomously in today's society. Current pharmacological treatments can slow the degradation process attributed to the disease, but they can also cause undesirable side effects. One non-pharmacological treatment that can effectively relieve symptoms is animal-assisted therapy (AAT). However, owing to limitations such as the cost of the animals and hygiene concerns, virtual animals are used in this field. Yet animated virtual animals, rough image quality, and a one-way interaction mode in which the animals passively wait for the user's instructions can hardly stimulate emotional feedback between the user and the virtual animals, which considerably weakens the therapeutic effect.
This study aims to explore the effectiveness of using virtual animals in place of live animals and their impact on reducing negative emotions in the patient. This objective guided the design of the Zoo Therapy project, which presents an immersive environment of 3D virtual animals in which the impact on the patient's emotions is measured in real time by electroencephalography (EEG). The static objects and virtual animals in Zoo Therapy are all presented using realistic 3D models. Specially developed animal movements, sounds, and pathfinding systems support the simulated interactive behaviour of the virtual animals. Moreover, to make the user's interaction experience more realistic, Zoo Therapy offers a novel communication mechanism that implements bidirectional human-machine interaction supported by three interaction methods: menu panels, speech instructions, and neurofeedback.
The most direct way to interact with the virtual reality (VR) environment is the menu panel, i.e., interaction by clicking buttons on panels with the VR controller. However, some users with AD found the VR controller difficult to use. To accommodate those who are not well suited to the VR controller, a speech instruction system can be used as an interface. This system was received positively by the 5 participants who tried it.
Even if the user chooses not to interact actively with the virtual animal through the two methods above, the neurofeedback system guides the animal to interact actively with the user according to the user's emotions.
The classic neurofeedback system uses a rule-based system to issue instructions. The limitations of this method are its rigidity and its inability to take into account the relationships among a participant's different emotions. To address these problems, this thesis presents a reinforcement learning (RL)-based method that issues instructions to different people according to their different emotions. In the simulation experiment on synthetic AD emotional data, the RL-based method is more sensitive to emotional changes than the rule-based method and can automatically learn latent rules to maximize the user's positive emotions.
Because of the Covid-19 pandemic, we were unable to conduct large-scale experiments. However, a follow-up project combined Zoo Therapy VR with gesture recognition and demonstrated its effectiveness by evaluating participants' EEG emotion values. / Alzheimer’s disease (AD) is a degenerative brain disease that causes progressive memory loss and cognitive decline, and gradually impairs one’s ability to cope with the complexity and demands of the daily routine tasks necessary to live autonomously in our current society. Current pharmacological treatments can slow down the degradation process attributed to the disease, but such treatments may also cause some undesirable side effects. One of the non-pharmacological treatments that can effectively relieve symptoms is animal-assisted therapy (AAT). But due to some limitations such as animal cost and hygiene issues, virtual animals are used in this field. However, animated virtual animals, rough picture quality, and a one-way interaction mode in which animals passively wait for the user's instructions can hardly stimulate emotional feedback between the user and the virtual animals, which greatly weakens the therapeutic effect.
This study aims to explore the effectiveness of using virtual animals in place of their living counterparts and their impact on the reduction of negative emotions in the patient. This approach has been implemented in the Zoo Therapy project, which presents an immersive 3D virtual-reality animal environment in which the impact on the patient’s emotions is measured in real time using electroencephalography (EEG). The static objects and virtual animals in Zoo Therapy are all presented using realistic 3D models. The specially developed animal movements, sounds, and pathfinding systems support the simulated interactive behavior of the virtual animals. In addition, to make the user's interaction experience more realistic, the innovation of this approach also lies in its communication mechanism, which implements bidirectional human-computer interaction supported by three interaction methods: menu panels, speech instructions, and neurofeedback.
The most straightforward way to interact with the VR environment is through the menu panels, i.e., by clicking buttons on panels with the VR controller. However, some AD users found the VR controller difficult to use. To accommodate those who are not well suited to VR controllers, a speech instruction system can be used as an interface; it was received positively by the 5 participants who tried it.
Even if the user chooses not to actively interact with the virtual animal in the above two methods, the Neurofeedback system will guide the animal to actively interact with the user according to the user's emotions.
Mainstream neurofeedback systems use hand-crafted rules to give instructions. The limitations of this method are its inflexibility and its inability to take into account the relationships among the participant's various emotions. To solve these problems, this thesis presents a reinforcement learning (RL)-based method that gives instructions to different people based on multiple emotions. In the synthetic AD emotional-data simulation experiment, the RL-based method is more sensitive to emotional changes than the rule-based method and can automatically learn latent rules to maximize the user's positive emotions.
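A tabular Q-learning sketch of this idea is given below. The emotion states, animal actions, and synthetic response probabilities are all assumptions made for illustration, not the actual Zoo Therapy design: the agent learns which animal behaviour tends to improve the user's (simulated) emotion in each state, with the emotion improvement serving as the reward:

```python
import random
from collections import defaultdict

random.seed(1)

# Hypothetical discretized EEG emotion states and animal actions.
STATES = ["calm", "anxious", "bored"]
ACTIONS = ["approach_user", "make_sound", "play", "stay"]

# Toy simulator standing in for the synthetic AD emotional data:
# probability that an action improves the user's emotion in a state.
EFFECT = {
    ("anxious", "approach_user"): 0.7, ("anxious", "stay"): 0.4,
    ("bored", "play"): 0.8, ("bored", "make_sound"): 0.5,
    ("calm", "stay"): 0.6,
}

def step(state, action):
    p = EFFECT.get((state, action), 0.2)
    improved = random.random() < p
    reward = 1.0 if improved else -0.2   # reward = change in positive emotion
    next_state = "calm" if improved else random.choice(STATES)
    return next_state, reward

# Tabular Q-learning with epsilon-greedy exploration.
Q = defaultdict(float)
alpha, gamma, eps = 0.1, 0.9, 0.1
state = "anxious"
for _ in range(5000):
    if random.random() < eps:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: Q[(state, a)])
    nxt, r = step(state, action)
    best_next = max(Q[(nxt, a)] for a in ACTIONS)
    Q[(state, action)] += alpha * (r + gamma * best_next - Q[(state, action)])
    state = nxt

# Learned policy: which animal behaviour to trigger per emotion state.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in STATES}
```

Unlike a fixed rule table, the learned Q-values can be retrained per user, which is the sense in which the RL-based method adapts its instructions to different people.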
Due to the Covid-19 pandemic, we were unable to conduct large-scale experiments. However, a follow-up project combined Zoo Therapy VR with gesture recognition and demonstrated its effectiveness by evaluating participants' EEG emotion values.
|
100 |
Some Contributions to Distribution Theory and Applications
Selvitella, Alessandro 11 1900 (has links)
In this thesis, we present some new results in distribution theory for both discrete and continuous random variables, together with their motivating applications.
We start with some results about the Multivariate Gaussian Distribution and its characterization as a maximizer of the Strichartz Estimates. Then, we present some characterizations of discrete and continuous distributions through ideas coming from optimal transportation. After this, we turn to Simpson's Paradox and show that it is ubiquitous, appearing in Quantum Mechanics as well. We conclude with a group of results about discrete and continuous distributions invariant under symmetries, in particular invariant under the groups $A_1$, an elliptical version of $O(n)$ and $\mathbb{T}^n$.
As mentioned, all the results proved in this thesis are motivated by their applications in different research areas. The applications will be thoroughly discussed. We have tried to keep each chapter self-contained and recalled results from other chapters when needed.
The following is a more precise summary of the results discussed in each chapter.
In chapter \ref{chapter 2}, we discuss a variational characterization of the Multivariate Normal distribution (MVN) as a maximizer of the Strichartz Estimates. Strichartz Estimates are a fundamental tool in the proof of well-posedness results for dispersive PDEs. Unlike the characterization of the MVN distribution as a maximizer of the entropy functional, the characterization as a maximizer of the Strichartz Estimates does not require the constraint of fixed variance. In this chapter, we compute the precise optimal constant for the whole range of Strichartz admissible exponents, discuss the connection of this problem to Restriction Theorems in Fourier analysis, and give some statistical properties of the family of Gaussian Distributions which maximize the Strichartz Estimates, such as the Fisher Information, the Index of Dispersion and Stochastic Ordering. We conclude the chapter by presenting an optimization algorithm to compute the maximizers numerically.
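As a sketch, and with the caveat that the normalization and the precise admissible range treated in the chapter are not reproduced here, the estimate and its Gaussian maximizers can be written as:

```latex
\[
  \left\| e^{it\Delta} u_0 \right\|_{L^q_t L^r_x(\mathbb{R}\times\mathbb{R}^n)}
  \;\le\; C_{n,q,r}\, \| u_0 \|_{L^2(\mathbb{R}^n)},
  \qquad \frac{2}{q} + \frac{n}{r} = \frac{n}{2},
\]
with extremal initial data of Gaussian form,
\[
  u_0(x) = a\, e^{-\tfrac{1}{2}\, x^\top \Sigma^{-1} x \,+\, b \cdot x},
\]
i.e.\ (up to the symmetries of the estimate) profiles proportional to
densities of multivariate normal laws, which is the variational
characterization of the MVN distribution referred to above.
```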
Chapter \ref{chapter 3} is devoted to the characterization of distributions by means of techniques from Optimal Transportation and the Monge-Amp\`{e}re equation. We give emphasis to methods for statistical inference on distributions that do not possess good regularity, decay or integrability properties: for example, distributions which do not admit a finite expected value, such as the Cauchy distribution. The main tool used here is a modified version of the characteristic function (a particular case of the Fourier Transform). An important motivation to develop these tools comes from Big Data analysis, and in particular the Consensus Monte Carlo Algorithm.
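The modified characteristic function used in the chapter is not reproduced here, but the underlying idea, doing inference through the empirical characteristic function when moments fail to exist, can be sketched for the Cauchy distribution, whose characteristic function $\varphi(t) = e^{i x_0 t - \gamma |t|}$ exposes the location $x_0$ and scale $\gamma$ directly (the sample size and frequency grid below are arbitrary choices for the example):

```python
import numpy as np

rng = np.random.default_rng(7)
x0_true, gamma_true = 2.0, 1.5

# Cauchy(x0, gamma) samples via the inverse CDF; the sample mean of
# these values does not converge, so moment-based inference fails.
u = rng.random(20000)
sample = x0_true + gamma_true * np.tan(np.pi * (u - 0.5))

# Empirical characteristic function on a small grid of frequencies.
t = np.array([0.1, 0.2, 0.3])
ecf = np.exp(1j * np.outer(t, sample)).mean(axis=1)

# For Cauchy(x0, gamma): phi(t) = exp(i*x0*t - gamma*|t|), hence
#   arg(phi(t)) = x0 * t   and   -log|phi(t)| = gamma * |t|.
x0_hat = np.mean(np.angle(ecf) / t)
gamma_hat = np.mean(-np.log(np.abs(ecf)) / t)
```

The estimates recover location and scale to within a few percent, even though the distribution has no finite mean, which is the kind of robustness that motivates characteristic-function methods in the Consensus Monte Carlo setting.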
In chapter \ref{chapter 4}, we study the \emph{Simpson's Paradox}. The \emph{Simpson's Paradox} is the phenomenon that appears in some datasets, where subgroups with a common trend (say, all negative trend) show the reverse trend when they are aggregated (say, positive trend). Although this issue has an elementary mathematical explanation, its statistical implications are deep. Basic examples appear in arithmetic, geometry, linear algebra, statistics, game theory, sociology (e.g. gender bias in the graduate school admission process) and so on. In our new results, we prove the occurrence of the \emph{Simpson's Paradox} in Quantum Mechanics. In particular, we prove that the \emph{Simpson's Paradox} occurs for solutions of the \emph{Quantum Harmonic Oscillator} both in the stationary case and in the non-stationary case. We prove that the phenomenon is not isolated and that it appears (asymptotically) in the context of the \emph{Nonlinear Schr\"{o}dinger Equation} as well. The likelihood of the \emph{Simpson's Paradox} in Quantum Mechanics and the physical implications are also discussed.
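A minimal numerical illustration of the aggregation reversal, with invented data: two subgroups each have a clearly negative regression slope, yet because the group with larger $x$ also has larger $y$ overall, the pooled slope is positive:

```python
import numpy as np

rng = np.random.default_rng(42)

def slope(x, y):
    # Ordinary least-squares slope of y on x.
    return np.polyfit(x, y, 1)[0]

# Group A: small x, high intercept, negative within-group trend.
xa = rng.uniform(0, 3, 200)
ya = 8.0 - 1.0 * xa + rng.normal(0, 0.3, 200)

# Group B: large x, low intercept, the same negative trend.
xb = rng.uniform(5, 8, 200)
yb = 16.0 - 1.0 * xb + rng.normal(0, 0.3, 200)

sa, sb = slope(xa, ya), slope(xb, yb)
s_all = slope(np.concatenate([xa, xb]), np.concatenate([ya, yb]))
# Both subgroup slopes are near -1, while the aggregated slope is positive:
# the between-group shift in means dominates the within-group covariance.
```

The reversal is driven entirely by the confounding between group membership and $x$, which is the elementary explanation the chapter alludes to before turning to its quantum-mechanical analogues.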
Chapter \ref{chapter 5} contains some new results about distributions with symmetries. We first discuss a result on symmetric order statistics: we prove that the symmetry of any of the order statistics is equivalent to the symmetry of the underlying distribution. Then, we characterize elliptical distributions through group invariance and give some of their properties. Finally, we study geometric probability distributions on the torus with applications to molecular biology. In particular, we introduce a new family of distributions generated through stereographic projection, derive several of their properties, and compare them with the von Mises distribution and its multivariate extensions. / Thesis / Doctor of Philosophy (PhD)
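The stereographic family itself is a contribution of the thesis and its exact definition is not reproduced here; as a hedged sketch of the construction's flavour, pushing a linear distribution onto the unit circle through the inverse stereographic map $x \mapsto 2\arctan(x)$ (here with a normal base distribution, a choice made only for illustration) looks as follows:

```python
import numpy as np

rng = np.random.default_rng(0)

def stereographic_circle_sample(n, loc=0.0, scale=1.0):
    """Sample angles by pushing a real-valued base distribution onto
    the unit circle via the inverse stereographic map x -> 2*arctan(x)."""
    x = rng.normal(loc, scale, n)   # base linear distribution
    return 2.0 * np.arctan(x)       # angles in (-pi, pi)

theta = stereographic_circle_sample(10000)

# Mean resultant length, a standard circular concentration summary:
# R near 1 means tightly concentrated angles, R near 0 means diffuse.
R = np.abs(np.mean(np.exp(1j * theta)))
```

Product versions of such circular laws on each coordinate of $\mathbb{T}^n$ give torus distributions of the kind compared against the von Mises family in the chapter.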
|