1

Interaction Based Measure of Manufacturing Systems Complexity and Supply Chain Systems Vulnerability Using Information Entropy

Alamoudi, Rami Hussain, 20 April 2008
The first primary objective of this dissertation is to develop a framework that can quantitatively measure the complexity of manufacturing systems in various configurations, including conjoined and disjoined systems. An analytical model of manufacturing system complexity based on information entropy theory is proposed and verified. The model uses the probability distribution of information about resource allocations, described in terms of the interactions among resources for part processing and the part processing requirements. In the proposed framework, both direct and indirect interactions among resources are modeled using a matrix, called the interaction matrix, which accounts for part processing and waiting times. The proposed complexity model identifies a manufacturing system with evenly distributed interactions among its resources as more complex, because under disruption more information is required to identify the source of the disruption. In addition, implicit relationships between system complexity and performance, in terms of resource utilization, waiting time, cycle time, and throughput, are studied by developing a computer program that simulates a general job-shop environment.

The second primary objective of this dissertation is to develop a mathematical model for measuring the vulnerability of supply chain systems. Global supply chains are exposed to many kinds of disruption, which has pushed the issue of supply chain resilience higher than ever before onto business as well as supporting agendas. In this dissertation, an extension of the proposed manufacturing complexity measure is used to quantify the vulnerability of supply chain systems using information entropy theory and an influence matrix. We define the vulnerability of supply chain systems in terms of the information required to describe the system's topology and the interrelationships among its components. The proposed vulnerability framework focuses on disruptive events such as natural disasters, terrorist attacks, or industrial disputes, rather than on deviations such as variations in demand, procurement, and transportation.
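To make the entropy measure concrete, here is a minimal sketch in Python of how an interaction-matrix-based complexity score might be computed. The normalization scheme and the base-2 logarithm are illustrative assumptions; the dissertation's actual construction of the interaction matrix (direct plus indirect interactions, processing and waiting times) is richer than this.

```python
import numpy as np

def entropy_complexity(interaction_matrix):
    """Shannon-entropy complexity score for a system of resources.

    Sketch of the general idea: normalize pairwise resource
    interactions (e.g., processing and waiting times) into a
    probability distribution and measure its entropy.  Evenly
    distributed interactions yield maximal entropy, i.e., the
    system is judged more complex.
    """
    p = np.asarray(interaction_matrix, dtype=float)
    p = p / p.sum()                 # normalize to a distribution
    p = p[p > 0]                    # drop zero entries (0 log 0 := 0)
    return -np.sum(p * np.log2(p))  # entropy in bits

# A system with evenly spread interactions scores higher than one
# whose interactions concentrate on a few resource pairs.
uniform = np.ones((3, 3))
skewed = np.array([[8.0, 1.0, 0.0],
                   [0.0, 0.5, 0.0],
                   [0.0, 0.0, 0.5]])
print(entropy_complexity(uniform))  # ~3.17 bits (log2 of 9 cells)
print(entropy_complexity(skewed))   # ~1.02 bits, much lower
```

This matches the abstract's claim: when interactions are spread evenly, more information (more bits) is needed to locate the source of a disruption.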
2

Macroscopic Analysis of Large-scale Systems: Epistemic Emergence and Spatiotemporal Aggregation

Lamarche-Perrin, Robin, 14 October 2013
The analysis of large-scale systems faces syntactic and semantic difficulties: how do we observe millions of distributed and asynchronous entities? How do we interpret the disorder that results from the microscopic observation of such entities? How do we produce and handle relevant abstractions for the systems' macroscopic analysis?

Faced with the failure of the analytic approach, the concept of epistemic emergence - related to the nature of knowledge - allows us to define an alternative strategy. This strategy is motivated by the observation that scientific activity relies on abstraction processes that provide macroscopic descriptions to broach the systems' complexity. This thesis is specifically interested in the production of spatial and temporal abstractions through data aggregation. In order to generate scalable representations, two essential aspects of the aggregation process must be controlled. Firstly, the complexity and the information content of macroscopic representations should be jointly optimized in order to preserve the details relevant to the observer while minimizing the cost of the analysis. We propose several measures of quality (internal criteria) to evaluate, compare, and select representations depending on the context and the objectives of the analysis. Secondly, in order to preserve their explanatory power, the generated abstractions should be consistent with the background knowledge exploited by the observer during the analysis. We propose to exploit the system's organisational, structural, and topological properties (external criteria) to constrain the aggregation process and to generate syntactically and semantically consistent representations. Consequently, automating the aggregation process requires solving a constrained optimization problem. We propose a generic algorithm that adapts to the criteria expressed by the observer, and we show that the complexity of this optimization problem depends directly on these criteria.

The macroscopic approach supported by this thesis is evaluated on two classes of systems. Firstly, the aggregation process is applied to the visualisation of large-scale distributed applications for performance analysis. It allows the detection of anomalies at several levels of granularity in the execution traces and the explanation of these anomalies in terms of the system's syntactic properties. Secondly, the process is applied to the aggregation of news media data for the analysis of international relations. The geographical and temporal aggregation of media attention allows the definition of semantically consistent macroscopic events for the analysis of the international system. Furthermore, we believe that the approach and the tools presented in this thesis can be extended to a wider class of application domains.
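As a rough illustration of the internal-criteria idea, the sketch below scores a candidate aggregation by trading off complexity reduction against information loss. The KL-divergence loss, the uniform reconstruction within each part, and the `alpha` weight are illustrative assumptions for this sketch, not the thesis's actual quality measures.

```python
import numpy as np

def aggregation_score(values, parts, alpha=0.5):
    """Toy internal criterion for a candidate aggregation.

    values: positive microscopic observations (e.g., media attention
            per geographical entity).
    parts:  a partition of the indices into aggregates.

    The score rewards complexity reduction (fewer parts to inspect)
    and penalizes information loss, measured as the KL divergence
    between the microscopic distribution and its aggregated
    reconstruction.  alpha is a hypothetical trade-off knob.
    """
    v = np.asarray(values, dtype=float)
    p = v / v.sum()                          # microscopic distribution
    q = np.empty_like(p)
    for part in parts:                       # aggregated reconstruction:
        q[part] = p[part].sum() / len(part)  # uniform within each part
    loss = np.sum(p * np.log2(p / q))        # KL(p || q)
    reduction = 1.0 - len(parts) / len(v)    # in [0, 1)
    return alpha * reduction - (1.0 - alpha) * loss

values = [5.0, 5.2, 4.8, 40.0]  # three similar entities, one outlier
print(aggregation_score(values, [[0, 1, 2], [3]]))  # merge similar: ~0.25
print(aggregation_score(values, [[0, 3], [1, 2]]))  # merge dissimilar: ~0.05
```

Aggregating the three similar entities loses almost no information while halving the number of parts, so it scores best; this is the kind of behavior the internal criteria are designed to capture.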
3

Least Squares in Sampling Complexity and Statistical Learning

Bartel, Felix, 19 January 2024
Data gathering is a constant in human history, with ever increasing quantity and dimensionality. To get a feel for the data, make it interpretable, or find underlying laws, it is necessary to fit a function to the finite and possibly noisy data. In this thesis we focus on a method for achieving this: least squares approximation. Its discovery dates back to around 1800, and it has since proven to be an indispensable tool that is efficient and capable of achieving optimal error when used correctly. Crucial for the least squares method are the ansatz functions and the sampling points. To discuss them, we gather tools from probability theory, frame subsampling, and $L_2$-Marcinkiewicz-Zygmund inequalities. With these we give results in the worst-case or minimax setting, where a set of points is sought for approximating a class of functions, which we model as a generic reproducing kernel Hilbert space. Further, we give error bounds in the statistical learning setting for approximating individual functions from possibly noisy samples. Here, we include the covariate-shift setting as a subfield of transfer learning. A parameter choice question naturally arises for balancing over- and underfitting effects. We tackle this by using the cross-validation score, for which we show a fast way of computing it as well as prove its quality.

Contents:
1 Introduction
2 Least squares approximation
3 Reproducing kernel Hilbert spaces (RKHS)
4 Concentration inequalities
5 Subsampling of finite frames
6 $L_2$-Marcinkiewicz-Zygmund (MZ) inequalities
7 Least squares in the worst-case setting
8 Least squares in statistical learning
9 Cross-validation
10 Outlook
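As an illustration of the fast cross-validation claim, the sketch below uses the classical leave-one-out identity for linear least squares: the leave-one-out residuals equal r_i / (1 - h_ii), with h_ii the diagonal of the hat matrix, so the score comes from a single fit instead of n refits. The polynomial ansatz and the degree-selection loop are illustrative choices; the thesis develops the cross-validation score in the more general reproducing kernel Hilbert space setting.

```python
import numpy as np

def loocv_score(A, y):
    """Fast leave-one-out cross-validation for linear least squares.

    Classical identity: for the fit min_c ||A c - y||_2, the
    leave-one-out residuals are r_i / (1 - h_ii), where h_ii is the
    diagonal of the hat matrix H = A (A^T A)^{-1} A^T.  This avoids
    refitting the model n times.
    """
    Q, _ = np.linalg.qr(A)               # thin QR factorization of A
    h = np.sum(Q * Q, axis=1)            # leverages h_ii = ||Q[i, :]||^2
    c, *_ = np.linalg.lstsq(A, y, rcond=None)
    r = y - A @ c                        # ordinary residuals
    return np.mean((r / (1.0 - h)) ** 2)

# Hypothetical usage: pick a polynomial degree by minimizing the score,
# balancing under- and overfitting.
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(-1.0, 1.0, 40))
y = np.cos(np.pi * x) + 0.05 * rng.standard_normal(40)
for degree in (1, 3, 5, 9):
    A = np.vander(x, degree + 1)         # polynomial ansatz functions
    print(degree, loocv_score(A, y))
```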
