91

Topology optimization of truss-like structures, from theory to practice

Richardson, James 21 November 2013 (has links)
The goal of this thesis is the development of theoretical methods targeting the implementation of topology optimization in structural engineering applications. In civil engineering, structures are typically assemblies of many standardized components, such as bars, where the largest gains in efficiency can be made during the preliminary design of the overall structure. The work is aimed mainly at truss-like structures in civil engineering applications; however, several of the developments are general enough to encompass continuum structures and other areas of engineering research as well. The research addresses the following challenges:

- Discrete variable optimization, generally necessary for truss problems in civil engineering, tends to be computationally very expensive;
- the gap between industrial applications in civil engineering and optimization research is quite large, meaning that the developed methods are currently not fully embraced in practice; and
- industrial applications demand robust and reliable solutions to the real-world problems faced by the civil engineering profession.

To face these challenges, the research is divided into several research papers, included as chapters in the thesis. Discrete binary variables in structural topology optimization often lead to very large computational cost and sometimes even failure of algorithm convergence. A novel method was developed for improving the performance of topology optimization problems in truss-like structures with discrete design variables, using so-called Kinematic Stability Repair (KSR). Two typical examples of topology optimization problems with binary variables are bracing systems and steel grid shell structures. These important industrial applications of topology optimization are investigated in the thesis. A novel method is developed for topology optimization of grid shells whose global shape has been determined by form-finding.
Furthermore, a novel technique for façade bracing optimization is developed. In this application a multiobjective approach was used to give the designers freedom to make changes as the design advanced through the various stages of the design process. The application of the two methods to practical engineering problems inspired a theoretical development with wide-reaching implications for discrete optimization: the pitfalls of symmetry reduction. A seemingly self-evident method of cardinality reduction uses geometric symmetry in structures to reduce the problem size. The research shows that this assumption is not valid for discrete variable problems: despite intuition to the contrary, for symmetric problems, asymmetric solutions may be better than their symmetric counterparts. In reality, many uncertainties exist on geometry, loading and material properties in structural systems, which affects the performance (robustness) of the non-ideal, realized structure. To address this, a general robust topology optimization framework for both continuum and truss-like structures is introduced, developing a novel analysis technique for truss structures under material uncertainties. Next, this framework is extended to discrete variable, multiobjective optimization problems of truss structures, taking uncertainties on the material stiffness and the loading into account. Two papers corresponding to these two chapters were submitted to the journals Computers and Structures and Structural and Multidisciplinary Optimization, respectively. Finally, a concluding chapter summarizes the main findings of the research. A number of appendices are included at the end of the manuscript, clarifying several pertinent issues. / Doctorate in Engineering Sciences / info:eu-repo/semantics/nonPublished
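The symmetry pitfall described in this abstract can be illustrated with a toy discrete sizing problem (a hypothetical example for illustration only, not taken from the thesis): two identical bars must jointly provide a required capacity, each section is chosen from a discrete catalogue, and the lightest feasible design turns out to be asymmetric.

```python
from itertools import product

# Hypothetical toy problem: two symmetric bars must jointly provide a load
# capacity of at least 3 units; each cross-section comes from the discrete
# catalogue {1, 2}; weight is proportional to the total section used.
CATALOGUE = (1, 2)
REQUIRED_CAPACITY = 3

def feasible(design):
    return sum(design) >= REQUIRED_CAPACITY

def weight(design):
    return sum(design)

# Enumerate all discrete designs and keep the lightest feasible one.
best = min((d for d in product(CATALOGUE, repeat=2) if feasible(d)), key=weight)
print(best)  # → (1, 2): the symmetric (1, 1) is infeasible, (2, 2) is heavier
```

Restricting the search to symmetric designs would miss the optimum here, which is the pitfall the thesis identifies for discrete variable problems.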
92

Optimization Algorithm Based on Novelty Search Applied to the Treatment of Uncertainty in Models

Martínez Rodríguez, David 23 December 2021 (has links)
Novelty Search is a recent paradigm in evolutionary and bio-inspired optimization algorithms, based on the idea of forcing the search for the global optimum into unexplored parts of the function domain that might be unattractive to the algorithm, with the aim of avoiding stagnation in local optima. Novelty Search has been applied to the Particle Swarm Optimization algorithm, yielding a new algorithm named Novelty Swarm (NS). NS has been applied to the CEC2005 benchmark, comparing its results with other state-of-the-art algorithms. The results show better behaviour on highly nonlinear functions at the cost of increased computational complexity. In the rest of the thesis, the NS algorithm is applied to different models, specifically the design of an internal combustion engine, energy demand estimation with Grammatical Swarm, the evolution of the bladder cancer of a specific patient, and the evolution of COVID-19. It is also noteworthy that, in the study of the COVID-19 models, uncertainty in both the data and the evolution of the disease has been taken into account. / Martínez Rodríguez, D. (2021). Optimization Algorithm Based on Novelty Search Applied to the Treatment of Uncertainty in Models [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/178994 / TESIS
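The core idea of biasing a particle swarm toward unexplored regions can be sketched as follows. This is an illustrative simplification, not the thesis's Novelty Swarm algorithm: all parameter names, the archive mechanism, and the extra velocity term are assumptions made for the sketch.

```python
import random

def sphere(x):  # toy objective; the thesis benchmarks on CEC2005 instead
    return sum(v * v for v in x)

def novelty(x, archive, k=3):
    # mean distance to the k nearest archived positions: high = unexplored
    dists = sorted(sum((a - b) ** 2 for a, b in zip(x, p)) ** 0.5 for p in archive)
    return sum(dists[:k]) / min(k, len(dists))

def novelty_pso(f, dim=2, particles=20, iters=200, w=0.7, c1=1.5, c2=1.5, c3=0.5):
    """Illustrative PSO variant: besides the usual personal-best and global-best
    attraction, velocities gain a term pulling toward the most 'novel'
    (least explored) archived position."""
    rnd = random.Random(0)
    X = [[rnd.uniform(-5, 5) for _ in range(dim)] for _ in range(particles)]
    V = [[0.0] * dim for _ in range(particles)]
    pbest = [x[:] for x in X]
    gbest = min(X, key=f)[:]
    archive = [x[:] for x in X]
    for _ in range(iters):
        # the most novel archived point serves as an exploration attractor
        nov = max(archive, key=lambda p: novelty(p, archive))
        for i in range(particles):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * rnd.random() * (pbest[i][d] - X[i][d])
                           + c2 * rnd.random() * (gbest[d] - X[i][d])
                           + c3 * rnd.random() * (nov[d] - X[i][d]))
                X[i][d] += V[i][d]
            if f(X[i]) < f(pbest[i]):
                pbest[i] = X[i][:]
            if f(X[i]) < f(gbest):
                gbest = X[i][:]
        archive.append(min(X, key=f)[:])
    return gbest

best = novelty_pso(sphere)
```

The extra `c3` term trades convergence speed for exploration pressure, which matches the abstract's observation that NS pays for better behaviour on highly nonlinear functions with higher computational cost.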
93

Energy Optimization Strategy for System-Operational Problems

Al-Ani, Dhafar S. 04 1900 (has links)
- Energy Optimization Strategies
- Hydraulic Models for Water Distribution Systems
- Heuristic Multi-objective Optimization Algorithms
- Multi-objective Optimization Problems
- System Constraints
- Encoding Techniques
- Optimal Pumping Operations
- Solving Real-World Optimization Problems

The water supply industry is a very important element of a modern economy; it represents a key element of urban infrastructure and is an integral part of our modern civilization. Billions of dollars per annum are spent internationally on pumping operations in rural water distribution systems to treat and reliably transport water from source to consumers.

In this dissertation, a new multi-objective optimization approach, referred to as the energy optimization strategy, is proposed for minimizing the electrical energy consumption for pumping and its cost, pump maintenance cost, and the cost of maximum power peak, while optimizing water quality and operational reliability in rural water distribution systems. The energy cost minimization considers the electrical energy consumed during regular operation and the cost of maximum power peak. Optimizing operational reliability is based on the ability of the network to provide service during abnormal events (e.g., network failure or fire) by considering and managing reservoir levels. Minimizing pumping costs also involves the network and pump maintenance cost, which is driven by the number of pump switches. Water quality optimization is achieved by considering chlorine residual during water transportation.

An Adaptive Parallel Clustering-based Multi-objective Particle Swarm Optimization (APC-MOPSO) algorithm has been proposed that combines existing and new concepts: the Pareto front, operating-mode specification, a selecting-best-efficiency-point technique, a searching-for-gaps method, and modified K-Means clustering. APC-MOPSO is employed to optimize the above-mentioned set of multiple objectives in operating rural water distribution systems.

Saskatoon West is a rural water distribution system owned and operated by Sask-Water (a statutory Crown corporation providing water, wastewater and related services to municipal, industrial, government, and domestic customers in the province of Saskatchewan). It is used to provide water to the city of Saskatoon and surrounding communities. The system has six main components: (1) the pumping stations, namely Queen Elizabeth and Aurora; (2) the raw water pipeline from QE to the Agrium area; (3) the treatment plant located within the Village of Vanscoy; (4) the raw water pipeline serving four major consumers, including PCS Cogen, PCS Cory, Corman Park, and Agrium; (5) the treated water pipeline serving the domestic community of the Village of Vanscoy; and (6) the large Agrium community storage reservoir.

In this dissertation, the Saskatoon West WDS is chosen to implement the proposed energy optimization strategy. Given the data supplied by Sask-Water, the scope of this application has resulted in savings of approximately 7 to 14% in energy costs without adversely affecting the infrastructure of the system, while maintaining the same level of service provided to Sask-Water's clients.

The implementation of the energy optimization strategy on the Saskatoon West WDS over 168 hours (i.e., a one-week optimization period) resulted in savings of approximately 10% in electrical energy cost and 4% in the cost of maximum power peak. Moreover, the results showed that pumping reliability is improved by 3.5% (i.e., improving its efficiency, head pressure, and flow rate). A case study is used to demonstrate the effectiveness of the multi-objective formulations and the solution methodologies, including the formulation of the system-operational optimization problem as five objective functions. Besides the reduction in energy costs, water quality, network reliability, and pumping characterization are all concurrently enhanced, as shown in the collected results. The benefits of using the proposed energy optimization strategy as a replacement for many existing optimization methods are also demonstrated. / Doctor of Science (PhD)
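The Pareto-front bookkeeping at the core of any multi-objective particle swarm optimizer such as the APC-MOPSO described above can be sketched generically (an illustrative sketch, not the dissertation's implementation; the example objective values are made up):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all objectives minimized):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the non-dominated objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# hypothetical trade-off: (energy cost, number of pump switches)
candidates = [(100, 8), (90, 10), (120, 5), (90, 12), (80, 14)]
front = pareto_front(candidates)
print(front)  # → [(100, 8), (90, 10), (120, 5), (80, 14)]
```

Here (90, 12) is discarded because (90, 10) achieves the same energy cost with fewer pump switches; every remaining point represents a genuinely different trade-off between the objectives.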
94

Dynamic Programming Approaches for Estimating and Applying Large-scale Discrete Choice Models

Mai, Anh Tien 12 1900 (has links)
People go through life making all kinds of decisions, and some of these decisions affect their demand for transportation: for example, their choices of where to live and where to work, how and when to travel, and which route to take. Transport-related choices are typically time dependent and characterized by a large number of alternatives that can be spatially correlated. This thesis deals with models that can be used to analyze and predict discrete choices in large-scale networks. The proposed models and methods are highly relevant for, but not limited to, transport applications. We model decisions as sequences of choices within the dynamic discrete choice framework, also known as parametric Markov decision processes. Such models are known to be difficult to estimate and to apply for prediction because dynamic programming problems need to be solved in order to compute choice probabilities. In this thesis we show that it is possible to exploit the network structure and the flexibility of dynamic programming so that the dynamic discrete choice modeling approach is not only useful for modeling time-dependent choices, but also makes it easier to model large-scale static choices. The thesis consists of seven articles containing a number of models and methods for estimating, applying and testing large-scale discrete choice models. In the following we group the contributions under three themes: route choice modeling, large-scale multivariate extreme value (MEV) model estimation, and nonlinear optimization algorithms. Five articles are related to route choice modeling. We propose different dynamic discrete choice models that allow paths to be correlated, based on the MEV and mixed logit models. The resulting route choice models become expensive to estimate, and we deal with this challenge by proposing innovative methods that reduce the estimation cost.
For example, we propose a decomposition method that not only opens up the possibility of mixing, but also speeds up the estimation of simple logit models, which also has implications for traffic simulation. Moreover, we compare the utility maximization and regret minimization decision rules, and we propose a misspecification test for logit-based route choice models. The second theme is related to the estimation of static discrete choice models with large choice sets. We establish that a class of MEV models can be reformulated as dynamic discrete choice models on the networks of correlation structures. These dynamic models can then be estimated quickly using dynamic programming techniques and an efficient nonlinear optimization algorithm. Finally, the third theme focuses on structured quasi-Newton techniques for estimating discrete choice models by maximum likelihood. We examine and adapt switching methods that can be easily integrated into usual optimization algorithms (line search and trust region) to accelerate the estimation process. The proposed dynamic discrete choice models and estimation methods can be used in various discrete choice applications. In the area of big data analytics, models that can deal with large choice sets and sequential choices are important. Our research can therefore be of interest in various demand analysis applications (predictive analytics) or can be integrated with optimization models (prescriptive analytics). Furthermore, our studies indicate the potential of dynamic programming techniques in this context, even for static models, which opens up a variety of future research directions.
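The dynamic programming step that makes such models expensive to estimate can be sketched on a toy network. This is an illustrative recursive-logit-style value iteration under assumed utilities, not the thesis code: the value of a node is the logsum of instantaneous utility plus downstream value, and link choice probabilities follow from it.

```python
import math

# Toy network: outgoing arcs with (negative) utilities; destination 'D'.
arcs = {
    'A': [('B', -1.0), ('C', -1.5)],
    'B': [('D', -1.0)],
    'C': [('D', -0.5)],
    'D': [],
}

def values(arcs, dest, iters=50):
    """Bellman recursion V(s) = log sum_a exp(u(s,a) + V(s_a)), with V(dest) = 0;
    iterated to a fixed point (value iteration)."""
    V = {s: 0.0 if s == dest else -100.0 for s in arcs}
    for _ in range(iters):
        for s, out in arcs.items():
            if s != dest and out:
                V[s] = math.log(sum(math.exp(u + V[t]) for t, u in out))
    return V

def choice_probs(arcs, V, s):
    """Logit probability of each outgoing arc at state s."""
    return {t: math.exp(u + V[t] - V[s]) for t, u in arcs[s]}

V = values(arcs, 'D')
probs = choice_probs(arcs, V, 'A')
print(probs)  # both routes have total utility -2, so each gets probability 0.5
```

On a real network these recursions run over thousands of states per likelihood evaluation, which is exactly the cost the thesis's decomposition and reformulation methods attack.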
95

Νέες μέθοδοι εκμάθησης για ασαφή γνωστικά δίκτυα και εφαρμογές στην ιατρική και βιομηχανία / New learning techniques to train fuzzy cognitive maps and applications in medicine and industry

Παπαγεωργίου, Ελπινίκη 25 June 2007 (has links)
The main contribution of this dissertation is the development of new learning and convergence methodologies for Fuzzy Cognitive Maps (FCMs), proposed to improve and adapt their behaviour and to increase their performance, turning them into effective dynamic modelling systems. The new, improved Fuzzy Cognitive Maps, via learning and adaptation of their weights, have been used in medicine for diagnosis and decision-making, as well as to alleviate the problem of potentially uncontrollable convergence to undesired states in models of industrial process control systems, with very satisfactory results.

This dissertation presents, validates and applies two new unsupervised learning algorithms for Fuzzy Cognitive Maps, Active Hebbian Learning (AHL) and Nonlinear Hebbian Learning (NHL), based on the classic unsupervised Hebb-type learning rule of neural networks, as well as a new learning approach for Fuzzy Cognitive Maps based on evolutionary algorithms, more specifically on Particle Swarm Optimization and on the Differential Evolution algorithm. The proposed AHL and NHL algorithms support new learning methodologies for FCMs that improve their operation, efficiency and reliability, and that provide the experts who develop the FCM for a given problem with a way to learn the parameters for fine-tuning the cause-effect relationships (weights) between the concepts. These types of learning, accompanied by sound knowledge of the problem or system at hand, increase the performance of FCMs and broaden their use. In addition to the unsupervised Hebb-type learning algorithms for FCMs, new learning techniques for FCMs based on evolutionary algorithms are developed and proposed. More specifically, a new methodology is proposed for applying the Particle Swarm Optimization evolutionary algorithm to the adaptation of FCMs, more concretely to the determination of the optimal regions of the weight values of FCMs. This method takes into consideration the experts' knowledge of the model in the form of constraints on the concepts whose state values are of interest, which are defined as output concepts, and, for the weights, the fuzzy regions agreed upon by all the experts.

Thus, by imposing constraints on all weights and on the output concepts and defining a suitable objective function for each problem, appropriate weight matrices result that can lead the system to desirable regions of operation while simultaneously satisfying the specific conditions and constraints of the problem. The two proposed unsupervised learning methods for FCMs are used and applied successfully to two complicated problems in medicine: decision-making in the radiotherapy process and tumour characterization of the urinary bladder in real clinical cases. All the proposed algorithms are also applied to models of industrial process control systems, with very satisfactory results. As their application to concrete problems shows, these algorithms improve the FCM model, contribute to more intelligent systems, and extend their applicability to real and complex problems. The main contribution of the present dissertation is thus to develop new learning and convergence methodologies for Fuzzy Cognitive Maps, proposing two new unsupervised Hebb-type learning algorithms, Active Hebbian Learning and Nonlinear Hebbian Learning, for the adaptation of the weights of the interconnections between the concepts of Fuzzy Cognitive Maps, as well as evolutionary algorithms optimizing concrete objective functions for each examined problem. The new, improved Fuzzy Cognitive Maps, via the weight adaptation algorithms, have been used to develop an integrated two-level hierarchical system for decision support in radiotherapy, to develop a new diagnostic tool for tumour characterization of the urinary bladder, and to solve industrial process control problems.
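The flavour of Hebb-type weight adaptation in an FCM can be sketched as follows. This is a deliberately simplified illustration, not the dissertation's exact NHL update rule: the squashing function, learning rate, and the Oja-style decay term are assumptions made for the sketch.

```python
import math

def sigmoid(x, lam=1.0):
    return 1.0 / (1.0 + math.exp(-lam * x))

def fcm_step(A, W):
    """One FCM inference step: each concept aggregates the weighted influence
    of the other concepts and passes through a squashing function."""
    n = len(A)
    return [sigmoid(A[i] + sum(W[j][i] * A[j] for j in range(n) if j != i))
            for i in range(n)]

def nhl_step(A, W, eta=0.05):
    """Simplified nonlinear Hebbian update: strengthen W[j][i] when concepts
    j and i co-activate, with an Oja-style decay that keeps weights bounded.
    Only expert-declared (nonzero) causal links are adapted."""
    n = len(A)
    for j in range(n):
        for i in range(n):
            if i != j and W[j][i] != 0.0:
                W[j][i] += eta * A[j] * (A[i] - W[j][i] * A[j])
    return W

# three concepts; W[j][i] is the expert-initialized influence of j on i
A = [0.4, 0.7, 0.2]
W = [[0.0, 0.5, 0.0],
     [-0.4, 0.0, 0.6],
     [0.0, 0.3, 0.0]]
for _ in range(20):          # alternate inference and weight adaptation
    A = fcm_step(A, W)
    W = nhl_step(A, W)
```

The point of the sketch is the interplay the dissertation exploits: inference drives concept activations, and the Hebbian rule fine-tunes only the causal links the experts declared, rather than learning a weight matrix from scratch.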
96

Reconfigurable Reflective Arrayed Waveguide Grating on Silicon Nitride

Fernández Vicente, Juan 29 April 2021 (has links)
[EN] This thesis focuses on the modelling, design and first experimental demonstration of the Reconfigurable Reflective Arrayed Waveguide Grating (R-RAWG) device. To build this device, which has potential applications in spectrometry, a silicon nitride platform termed CNM-VLC was chosen, since this material allows operation over a broad range of wavelengths. The platform has certain limitations, however, and the building blocks required for the device initially showed low performance. Therefore, a methodology was developed and validated that yielded better optical splitters. An inverted taper was also designed, which considerably improved the coupling of light into the chip. This was made possible by an exhaustive analysis of the options available in the literature, which also guided the choice of the best way to realize a reconfigurable mirror on the platform without changing or adding any fabrication steps. Reconfigurable mirrors based on feedback optical splitters were demonstrated, and codes were developed that predict the behaviour of the fabricated device. Building on this work, an R-RAWG was designed to operate over a broad wavelength range while keeping the phase actuators out of danger of being damaged. A code was also developed for modelling the R-RAWG that mimics the fabrication of these devices; on top of it, a method called DPASTOR was developed, which uses algorithms from machine learning to optimize the response using only the optical output power. Finally, a PCB and an assembly interconnecting the photonic chip were designed and fabricated, and a measurement method was developed that yielded a stable response, demonstrating a multitude of optical filter responses with the same device. / Fernández Vicente, J. (2021). 
Reconfigurable Reflective Arrayed Waveguide Grating on Silicon Nitride [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/165783 / TESIS
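The DPASTOR idea summarized above, optimizing the device response using only the scalar optical output power, can be sketched as a derivative-free search over phase-actuator settings. Everything below (the toy power model, targets, step sizes) is invented for illustration and is not the thesis's actual algorithm:

```python
import random

def output_power(phases):
    # Stand-in for the measured optical output power; in the real setup this
    # value would be read from the photonic chip. Toy model: maximum power
    # (value 0) when every phase hits an arbitrary hypothetical target.
    targets = [0.3, 1.1, 2.0, 0.7]
    return -sum((p - t) ** 2 for p, t in zip(phases, targets))

def optimize_phases(n_actuators=4, steps=2000, step_size=0.05, seed=0):
    """Coordinate-wise random search driven only by the scalar output power."""
    rng = random.Random(seed)
    phases = [0.0] * n_actuators
    best = output_power(phases)
    for _ in range(steps):
        i = rng.randrange(n_actuators)
        delta = rng.choice((-step_size, step_size))
        phases[i] += delta
        power = output_power(phases)
        if power > best:
            best = power        # keep the improvement
        else:
            phases[i] -= delta  # revert the trial move
    return phases, best
```

The point of the sketch is that only the scalar power reading drives the search; no internal phase measurement is needed.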
97

Virtual reality therapy for Alzheimer’s disease with speech instruction and real-time neurofeedback system

Ai, Yan 05 1900 (has links)
Alzheimer's disease (AD) is a degenerative brain disease that causes progressive memory loss, cognitive decline, and gradually impairs one's ability to cope with the complexity and demands of the daily tasks necessary to live autonomously in today's society. Current pharmacological treatments can slow the degradation process attributed to the disease, but they may also cause undesirable side effects. One non-pharmacological treatment that can effectively relieve symptoms is animal-assisted therapy (AAT). Due to limitations such as animal cost and hygiene issues, however, virtual animals are used in this field. The animated virtual animals, rough picture quality, and one-directional interaction mode, in which animals passively wait for the user's instructions, can hardly stimulate emotional feedback between the user and the virtual animals, which greatly weakens the therapeutic effect. This study aims to explore the effectiveness of using virtual animals in place of their living counterparts and their impact on reducing negative emotions in the patient. This approach has been implemented in the Zoo Therapy project, which presents an immersive 3D virtual-reality animal environment in which the impact on the patient's emotions is measured in real time using electroencephalography (EEG). The static objects and virtual animals in Zoo Therapy are all presented using realistic 3D models. Specially developed animal movements, sounds, and pathfinding systems support the simulated interactive behaviour of the virtual animals. 
In addition, to make the user's interaction experience more realistic, the innovation of this approach also lies in its communication mechanism, which implements bidirectional human-computer interaction supported by three interaction methods: menu panels, speech instruction, and neurofeedback. The most straightforward way to interact with the VR environment is through the menu panels, i.e., by clicking buttons on panels with the VR controller. However, it was difficult for some AD users to use the VR controller. To accommodate those who are not well suited to VR controllers, a speech instruction system can be used as an interface; it was received positively by the 5 participants who tried it. Even if the user chooses not to actively interact with the virtual animal through the two methods above, the neurofeedback system will guide the animal to actively interact with the user according to the user's emotions. Mainstream neurofeedback systems use hand-crafted rules to give instructions. This approach is inflexible and cannot take into account the relationships between the participant's various emotions. To solve these problems, this thesis presents a reinforcement learning (RL)-based method that tailors instructions to different people based on multiple emotions. In the simulation experiment on synthetic AD emotional data, the RL-based method is more sensitive to emotional changes than the rule-based method and can automatically learn latent rules to maximize the user's positive emotions. Due to the Covid-19 epidemic, we were unable to conduct large-scale experiments. However, a follow-up project combined Zoo Therapy VR with gesture recognition and demonstrated its effectiveness by evaluating participants' EEG emotion values.
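The rule-free neurofeedback idea described above, an RL agent learning which animal interaction to trigger from the user's emotion state, might be sketched as tabular Q-learning on synthetic emotion data. Everything below (states, actions, transition probabilities, rewards) is invented for illustration and is not the thesis's actual model:

```python
import random

# Discretized emotion states and hypothetical animal actions.
STATES = ["negative", "neutral", "positive"]
ACTIONS = ["approach", "make_sound", "wait"]

def synthetic_step(state, action, rng):
    # Toy dynamics standing in for real EEG data: "approach" tends to
    # improve mood the most, "wait" the least.
    improve = {"approach": 0.7, "make_sound": 0.5, "wait": 0.1}[action]
    idx = STATES.index(state)
    if rng.random() < improve and idx < 2:
        idx += 1
    reward = idx - 1  # -1 negative, 0 neutral, +1 positive
    return STATES[idx], reward

def train(episodes=500, alpha=0.1, gamma=0.9, eps=0.2, seed=1):
    """Tabular Q-learning with epsilon-greedy exploration."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    for _ in range(episodes):
        state = "negative"
        for _ in range(10):
            if rng.random() < eps:
                action = rng.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: q[(state, a)])
            nxt, r = synthetic_step(state, action, rng)
            best_next = max(q[(nxt, a)] for a in ACTIONS)
            q[(state, action)] += alpha * (r + gamma * best_next - q[(state, action)])
            state = nxt
    return q
```

After training, the learned values should prefer the mood-improving action in the negative state, which is the "automatically learned rule" the abstract alludes to.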
98

Some Contributions to Distribution Theory and Applications

Selvitella, Alessandro 11 1900 (has links)
In this thesis, we present some new results in distribution theory for both discrete and continuous random variables, together with their motivating applications. We start with some results about the Multivariate Gaussian Distribution and its characterization as a maximizer of the Strichartz Estimates. Then, we present some characterizations of discrete and continuous distributions through ideas coming from optimal transportation. After this, we pass to the Simpson's Paradox and see that it is ubiquitous and it appears in Quantum Mechanics as well. We conclude with a group of results about discrete and continuous distributions invariant under symmetries, in particular invariant under the groups $A_1$, an elliptical version of $O(n)$ and $\mathbb{T}^n$. As mentioned, all the results proved in this thesis are motivated by their applications in different research areas. The applications will be thoroughly discussed. We have tried to keep each chapter self-contained and recalled results from other chapters when needed. The following is a more precise summary of the results discussed in each chapter. In chapter \ref{chapter 2}, we discuss a variational characterization of the Multivariate Normal distribution (MVN) as a maximizer of the Strichartz Estimates. Strichartz Estimates appear as a fundamental tool in the proof of wellposedness results for dispersive PDEs. With respect to the characterization of the MVN distribution as a maximizer of the entropy functional, the characterization as a maximizer of the Strichartz Estimate does not require the constraint of fixed variance. In this chapter, we compute the precise optimal constant for the whole range of Strichartz admissible exponents, discuss the connection of this problem to Restriction Theorems in Fourier analysis and give some statistical properties of the family of Gaussian Distributions which maximize the Strichartz estimates, such as Fisher Information, Index of Dispersion and Stochastic Ordering. 
We conclude this chapter by presenting an optimization algorithm to compute the maximizers numerically. Chapter \ref{chapter 3} is devoted to the characterization of distributions by means of techniques from Optimal Transportation and the Monge-Amp\`{e}re equation. We give emphasis to methods for statistical inference on distributions that do not possess good regularity, decay or integrability properties, for example distributions which do not admit a finite expected value, such as the Cauchy distribution. The main tool used here is a modified version of the characteristic function (a particular case of the Fourier Transform). An important motivation to develop these tools comes from Big Data analysis and in particular the Consensus Monte Carlo Algorithm. In chapter \ref{chapter 4}, we study the \emph{Simpson's Paradox}: the phenomenon that appears in some datasets, where subgroups with a common trend (say, all negative trend) show the reverse trend when they are aggregated (say, positive trend). Although this issue has an elementary mathematical explanation, the statistical implications are deep. Basic examples appear in arithmetic, geometry, linear algebra, statistics, game theory and sociology (e.g. gender bias in the graduate school admission process). In our new results, we prove the occurrence of the \emph{Simpson's Paradox} in Quantum Mechanics. In particular, we prove that the \emph{Simpson's Paradox} occurs for solutions of the \emph{Quantum Harmonic Oscillator} both in the stationary case and in the non-stationary case. We prove that the phenomenon is not isolated and that it appears (asymptotically) in the context of the \emph{Nonlinear Schr\"{o}dinger Equation} as well. The likelihood of the \emph{Simpson's Paradox} in Quantum Mechanics and the physical implications are also discussed. Chapter \ref{chapter 5} contains some new results about distributions with symmetries. 
We first discuss a result on symmetric order statistics. We prove that the symmetry of any of the order statistics is equivalent to the symmetry of the underlying distribution. Then, we characterize elliptical distributions through group invariance and give some of their properties. Finally, we study geometric probability distributions on the torus with applications to molecular biology. In particular, we introduce a new family of distributions generated through stereographic projection, derive several of their properties and compare them with the von Mises distribution and its multivariate extensions. / Thesis / Doctor of Philosophy (PhD)
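Simpson's paradox as described above, subgroups sharing one trend that reverses under aggregation, is easy to reproduce numerically. The data below are invented purely for illustration:

```python
def slope(points):
    """Least-squares slope of y on x for a list of (x, y) pairs."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    num = sum((x - mx) * (y - my) for x, y in points)
    den = sum((x - mx) ** 2 for x, _ in points)
    return num / den

# Each subgroup trends downward...
group_a = [(1, 10), (2, 9), (3, 8)]
group_b = [(6, 20), (7, 19), (8, 18)]

# ...yet the pooled data trend upward, because the higher-x group
# also sits at a higher level of y.
combined = group_a + group_b
```

Here `slope(group_a)` and `slope(group_b)` are both negative while `slope(combined)` is positive, which is exactly the reversal the paradox names.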
99

Efficient Algorithms for the Computation of Optimal Quadrature Points on Riemannian Manifolds

Gräf, Manuel 05 August 2013 (has links) (PDF)
We consider the problem of numerical integration, where one aims to approximate an integral of a given continuous function from the function values at given sampling points, also known as quadrature points. A useful framework for such an approximation process is provided by the theory of reproducing kernel Hilbert spaces and the concept of the worst case quadrature error. However, the computation of optimal quadrature points, which minimize the worst case quadrature error, is in general a challenging task and requires efficient algorithms, in particular for large numbers of points. The focus of this thesis is on the efficient computation of optimal quadrature points on the torus T^d, the sphere S^d, and the rotation group SO(3). For that reason we present a general framework for the minimization of the worst case quadrature error on Riemannian manifolds, in order to construct such quadrature points numerically. To this end, we consider, for M quadrature points on a manifold, the worst case quadrature error as a function defined on the M-fold product manifold. For the optimization on such high-dimensional manifolds we make use of the method of steepest descent, the Newton method, and the conjugate gradient method, for which we propose two efficient evaluation approaches for the worst case quadrature error and its derivatives. The first evaluation approach follows ideas from computational physics, where we interpret the quadrature error as a pairwise potential energy. These ideas allow us to reduce, for certain instances, the complexity of the evaluations from O(M^2) to O(M log(M)). For the second evaluation approach we express the worst case quadrature error in the Fourier domain. 
This enables us to utilize the nonequispaced fast Fourier transforms for the torus T^d, the sphere S^2, and the rotation group SO(3), which reduce the computational complexity of the worst case quadrature error for polynomial spaces of degree N from O(N^k M) to O(N^k log^2(N) + M), where k is the dimension of the corresponding manifold. For the usual choice N^k ~ M we achieve the complexity O(M log^2(M)) instead of O(M^2). In conjunction with the proposed conjugate gradient method on Riemannian manifolds we arrive at a particularly efficient optimization approach for the computation of optimal quadrature points on the torus T^d, the sphere S^d, and the rotation group SO(3). Finally, with the proposed optimization methods we are able to provide new lists of quadrature formulas for high polynomial degrees N on the sphere S^2 and the rotation group SO(3). Further applications of the proposed optimization framework arise from the interesting connections between worst case quadrature errors, discrepancies and potential energies. In particular, discrepancies provide an intuitive notion for describing the uniformity of point distributions and are of particular importance for high-dimensional integration in quasi-Monte Carlo methods. A generalized form of uniform point distributions arises in applications of image processing and computer graphics, where one is concerned with the problem of distributing points in an optimal way according to a prescribed density function. We will show that such problems can be naturally described by the notion of discrepancy, and thus fit perfectly into the proposed framework. A typical application is halftoning of images, where nonuniform distributions of black dots create the illusion of gray-toned images. We will see that the proposed optimization methods compete with state-of-the-art halftoning methods.
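The pairwise potential-energy view mentioned above can be illustrated on the simplest case, points on a circle (the torus T^1), where gradient descent on a repulsive pairwise energy drives the points toward the equispaced configuration. This is an illustrative sketch only (a toy 1/d^2 energy with invented parameters), not the thesis's actual kernels or NFFT-based evaluation:

```python
import math

def energy(thetas):
    """Pairwise repulsive energy 1/|x_i - x_j|^2 of points on the unit circle."""
    e = 0.0
    n = len(thetas)
    for i in range(n):
        for j in range(i + 1, n):
            d2 = 2.0 - 2.0 * math.cos(thetas[i] - thetas[j])  # squared chord length
            e += 1.0 / d2
    return e

def gradient(thetas):
    """Gradient of the energy with respect to each angle."""
    n = len(thetas)
    g = [0.0] * n
    for i in range(n):
        for j in range(n):
            if i != j:
                diff = thetas[i] - thetas[j]
                d2 = 2.0 - 2.0 * math.cos(diff)
                g[i] += -2.0 * math.sin(diff) / (d2 * d2)  # d/dtheta_i of 1/d2
    return g

def minimize(n=8, steps=2000, lr=0.005):
    # Start from a slightly perturbed equispaced configuration and descend.
    thetas = [2.0 * math.pi * i / n + 0.05 * math.sin(i) for i in range(n)]
    for _ in range(steps):
        g = gradient(thetas)
        thetas = [t - lr * gi for t, gi in zip(thetas, g)]
    return thetas
```

After descent the angular gaps equalize, recovering the equispaced nodes that are optimal on T^1; the naive O(M^2) cost of `energy` and `gradient` is precisely what the fast summation and NFFT techniques of the thesis avoid.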
100

Strategische Planung technischer Kapazität in komplexen Produktionssystemen: mathematische Optimierung grafischer Modelle mit der Software AURELIE

Hochmuth, Christian Andreas 28 May 2020 (has links)
Recent developments lead to increasingly complex production systems, especially in the case of series production with a great number of variants. As a result, considerable challenges exist in planning the technical capacity with strategic time horizon efficiently, transparently and flexibly. Since numerous interdependencies must be considered, it can be observed in practice that completeness and understandability of the models are mutually exclusive. To solve this conflict of objectives, a software-based workflow is proposed, which was implemented in the newly developed software AURELIE. The workflow relies on the graphical modeling of a planned system of value streams, the automated validation and transformation of the graphical model and the automated optimization of the resulting mathematical model. The starting point is a graphical model, which is not only understandable, but also reflects the system completely with respect to its complexity. 
From a research perspective, the essential contribution, besides a formal system description and the identification of the research gap, lies in the development of the required models and algorithms. The degree of novelty is given by the holistic solution approach, which is proven feasible by the software AURELIE. From a practical perspective, efficiency, transparency and flexibility in the planning process are significantly increased. This is confirmed by the worldwide implementation of the software AURELIE at the locations of Bosch Rexroth AG.
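The capacity arithmetic this abstract alludes to (cycle times, utilization, capacity, and investment) can be sketched in a few lines. All figures and step names below are invented for illustration; the actual AURELIE models are far richer, with linked process steps, value streams, and linear optimization:

```python
import math

def required_machines(demand_per_year, cycle_time_s, uptime, hours_per_year):
    """Smallest machine count whose capacity covers the yearly demand."""
    available_s = hours_per_year * 3600 * uptime       # usable machine time
    capacity_per_machine = available_s / cycle_time_s  # units per machine-year
    return math.ceil(demand_per_year / capacity_per_machine)

# Invented process steps of a value stream: cycle time, uptime, unit cost.
STEPS = [
    {"name": "milling",  "cycle_time_s": 120, "uptime": 0.85, "cost": 250_000},
    {"name": "assembly", "cycle_time_s": 90,  "uptime": 0.90, "cost": 80_000},
]

def plan(demand_per_year, hours_per_year=6000):
    """Machine counts per step and the total investment they imply."""
    counts, total = {}, 0
    for step in STEPS:
        n = required_machines(demand_per_year, step["cycle_time_s"],
                              step["uptime"], hours_per_year)
        counts[step["name"]] = n
        total += n * step["cost"]
    return counts, total
```

For a demand of 200,000 units per year this toy plan needs two milling machines and one assembly station; minimizing such investments subject to capacity constraints is the kind of objective the thesis formulates as a linear optimization problem.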
