1

Real-time Analysis and Control for Smart Manufacturing Systems

January 2020
abstract: Recent advances in manufacturing systems, such as advanced embedded sensing, big data analytics, IoT, and robotics, promise a paradigm shift in the manufacturing industry towards smart manufacturing systems. Real-time data are now available in many industries, such as automotive, semiconductor, and food production, and can reflect machine conditions and the operational performance of the production system. However, a major research gap remains: how to use these real-time data to evaluate and predict production system performance, and to facilitate timely decision making and production control on the factory floor. To tackle these challenges, this dissertation takes an integrated analytical approach that hybridizes data analytics, stochastic modeling, and decision making under uncertainty to solve practical manufacturing problems. Specifically, this research considers the machine degradation process. It has been shown that machines working in different operating states may break down in different probabilistic manners; in addition, machines working in worse operating states are more likely to fail, causing more frequent downtime and reducing system throughput. Yet analytical methods to quantify the potential impact of machine condition degradation on overall system performance, and thereby support operational decision making on the factory floor, are still lacking. To address these issues, this dissertation considers a serial production line with finite buffers and multiple machines following a Markovian degradation process. An integrated model based on an aggregation method is built to quantify overall system performance and its interaction with the machine condition process. Moreover, system properties are investigated to analyze the influence of system parameters on system performance. In addition, three types of bottlenecks are defined, and corresponding indicators are derived to provide guidelines for improving system performance. These methods provide quantitative tools for modeling, analyzing, and improving manufacturing systems in which machine condition degradation is coupled with productivity, given real-time signals. / Dissertation/Thesis / Doctoral Dissertation Industrial Engineering 2020
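As a rough illustration of the coupling the dissertation studies, the sketch below simulates a single machine whose condition follows a Markov chain and whose production rate depends on its state. All transition probabilities and rates are hypothetical, and the thesis's aggregation method for multi-machine lines with finite buffers is not reproduced here.

```python
import numpy as np

# Toy model: machine condition as a discrete-time Markov chain over states
# 0 (good), 1 (degraded), 2 (failed), with state-dependent production rates.
# All numbers below are hypothetical, for illustration only.
P = np.array([
    [0.95, 0.05, 0.00],   # good: mostly stays good, may degrade
    [0.00, 0.90, 0.10],   # degraded: may fail
    [0.20, 0.00, 0.80],   # failed: repaired back to good with prob. 0.2
])
rate = {0: 1.0, 1: 0.7, 2: 0.0}  # parts produced per period in each state

rng = np.random.default_rng(0)
state, throughput, T = 0, 0.0, 10_000
for _ in range(T):
    throughput += rate[state]
    state = rng.choice(3, p=P[state])

print(f"long-run throughput per period ~ {throughput / T:.3f}")
```

A worse degradation profile (larger off-diagonal mass in the first two rows) lowers the long-run throughput, which is the qualitative effect the dissertation quantifies analytically.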
2

Causation and the objectification of agency

Schulz, Christoph January 2015
This dissertation defends the so-called 'agency-approach' to causation, which attempts to ground the causal relation in the cause's role of being a means to bring about its effect. The defence is confined to a conceptual interpretation of this theory, pertaining to the concept of causation as it appears in a causal judgement. However, causal judgements are not seen as limited to specific domains, and they are not attributed exclusively to human agents. As a methodological framework for describing the different perspectives of causal judgements, a method from the philosophy of information is employed: the so-called 'method of abstraction'. According to this method, levels of abstraction are devised for the subjective perspective of the acting agent, for the agent as observer of other agents' actions, and for the agent that judges efficient causation. As a further piece of propaedeutic work, a class of similar (yet not agency-centred) approaches to causation is considered, and their modelling paradigms, Bayesian networks and interventions objectively construed, are criticised. The dissertation then proceeds to the defence of the agency-approach, the first part of which is a defence against the objection of conceptual circularity, which holds that the approach analyses causation in causal terms. While the circularity objection is rebutted, I rely at that stage on a set of subjective concepts, i.e. concepts suited to describing the agent's own experience while performing actions. To give a further, positive corroboration of the agency-approach, the central chapter six of the dissertation investigates the natural origins and constraints of the concept of agency. The thermodynamic account developed in that part affords a third-person perspective on actions, with a cybernetic feedback cycle as its core element. At that point, the stage is set to analyse the relation between the first- and third-person perspectives on actions previously assumed. A dual-aspect interpretation of the cybernetic-thermodynamic picture developed in chapter six is applied directly to the levels of abstraction proposed earlier. The level of abstraction that underpins judgements of efficient causation, the kind of causation seemingly devoid of agency, appears as a derived scheme produced by and dependent on the concept of agency. This account of efficient causation, the 'objectification of agency', affords the rebuttal of a second objection against the agency-approach, namely that it is inappropriately anthropomorphic. The dissertation concludes with an account of single-case, or token-level, causation, and with an examination of the impact of the causal concept on the validity of causal models.
3

Mise en place d'un modèle de fuite multi-états en secteur hydraulique partiellement instrumenté / Mastering losses on drinking water network

Claudio, Karim 19 December 2014
Advances in the instrumentation of drinking water networks have considerably improved their monitoring. Automatic meter reading (AMR) is arguably the technology that has brought the greatest progress in water management in recent years, for the operator as well as for the end user: it turns what used to be annual consumption information (from manual meter reading) into infra-daily information. But however effective AMR may be, it has one drawback: its cost. Fully instrumenting a network requires investments that some operators cannot afford. Building a sample of meters to equip therefore makes it possible to estimate the total consumption of a network while minimizing capital expenditure. This sample must be constructed intelligently, so that the imprecision of the estimator does not compromise the assessment of consumption. Precise knowledge of water consumption makes it possible to quantify the volumes lost in the network. But even an exact assessment of losses is not enough to eliminate all leaks: since the distribution network is mostly buried, and therefore invisible, so are the leaks. A fraction of leaks is invisible and even undetectable by current leak detection techniques, and hence cannot be repaired.
The construction of a multi-state leak model decomposes the leakage flow according to the successive stages of a leak: invisible and undetectable, invisible but detectable by leak detection surveys, and finally visible at the surface. This semi-Markovian model takes operational constraints into account, in particular the fact that only panel data are available. Decomposing the leakage flow thus allows better network management, by targeting and adapting leak-control actions to the network's state of degradation.
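To make the three leak stages concrete, here is a toy decomposition of leakage flow by stage. It approximates the model as a continuous-time Markov chain with exponential sojourn times (the thesis uses a semi-Markov model fitted to panel data), and all rates and flow values are hypothetical.

```python
import numpy as np

# States: 0 = invisible & undetectable, 1 = detectable by leak survey,
# 2 = visible at the surface. Q is a generator matrix (rows sum to zero);
# all rates are hypothetical, per month.
Q = np.array([
    [-0.10,  0.10,  0.00],   # undetectable -> detectable
    [ 0.00, -0.05,  0.05],   # detectable -> visible
    [ 0.50,  0.00, -0.50],   # visible leaks repaired quickly; cycle restarts
])

# Stationary distribution pi solves pi @ Q = 0 with pi summing to 1.
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

flow = np.array([0.2, 0.6, 2.0])  # hypothetical mean flow (m3/h) per stage
total = pi @ flow
for name, p, f in zip(["undetectable", "detectable", "visible"], pi, flow):
    print(f"{name:>12}: time share {p:.2f}, {p * f / total:.1%} of leakage")
```

The point of such a decomposition is operational: the share attributable to the detectable stage tells the operator how much leakage active leak surveys can recover, while the undetectable share bounds what no survey can fix.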
4

Task-Driven Integrity Assessment and Control for Vehicular Hybrid Localization Systems

Drawil, Nabil 17 January 2013
Throughout the last decade, vehicle localization has attracted significant attention in a wide range of applications, including navigation systems, road tolling, smart parking, and collision avoidance. To deliver on their requirements, these applications need a specific localization accuracy. However, current localization techniques lack the required accuracy, especially for mission-critical applications. Although various approaches for improving localization accuracy have been reported in the literature, there is still a need for more efficient and more effective measures that can ascribe a level of accuracy to the localization process. Such measures would enable localization systems to manage the localization process and resources so as to achieve the highest accuracy possible, and to mitigate the impact of inadequate accuracy on the target application. In this thesis, a framework for fusing different localization techniques is introduced in order to estimate the location of a vehicle along with a location integrity assessment that captures the impact of the measurement conditions on localization quality. Knowledge about estimate integrity allows the system to plan the use of its localization resources so as to match the target accuracy of the application. The framework provides tools for modeling the impact of operating conditions on estimate accuracy and integrity, and enables more robust system performance in three steps. First, localization system parameters are used to construct a feature space that constitutes probable accuracy classes. Because accuracy classes overlap strongly in this feature space, a hierarchical classification strategy with class unfolding (HCCU) is developed to address the class ambiguity problem; HCCU is shown to be superior to other hierarchical configurations. Furthermore, a context-based accuracy classification (CBAC) algorithm is introduced to enhance classification performance: knowledge about the surrounding environment is used to optimize classification as a function of the observation conditions. Second, a task-driven integrity (TDI) model is developed to make application modules aware of the trust level of the localization output. This trust level is typically a function of the measurement conditions; therefore, the TDI model monitors specific parameters of the localization technique and infers the impact of changing environmental conditions on the quality of the localization process. A generalized TDI solution is also introduced to handle cases where sufficient information about the sensing parameters is unavailable. Finally, the outputs of the employed localization techniques (i.e., location estimates, accuracy, and integrity level assessments) need to be fused. However, these techniques are heterogeneous, and their pieces of information conflict in many situations. Therefore, a novel evidence structure model, the Spatial Evidence Structure Model (SESM), is developed and used to construct a frame of discernment comprising discretized spatial data. SESM-based fusion paradigms are capable of fusing the information provided by the employed techniques. Both the location estimate accuracy and the aggregated integrity resulting from the fusion process demonstrate superiority over the individual localization techniques.
Furthermore, a context-aware, task-driven resource allocation mechanism is developed to manage the fusion process. Its main objective is to optimize the usage of system resources and achieve task-driven performance. Extensive experimental work is conducted on real-life and simulated data to validate the models developed in this thesis. The experimental results show that task-driven integrity assessment and control is applicable and effective for hybrid localization systems.
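The fusion step can be illustrated with a plain Dempster-Shafer combination over a discretized road segment. This is a generic stand-in for the SESM rather than the thesis's model itself, and the mass assignments below are hypothetical.

```python
from itertools import product

# Illustrative Dempster-Shafer fusion over discretized spatial cells.
# Focal elements are frozensets of cell indices; each technique's masses sum to 1.
def combine(m1, m2):
    """Dempster's rule of combination for two mass functions on one frame."""
    raw, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            raw[inter] = raw.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb   # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence")
    return {s: w / (1.0 - conflict) for s, w in raw.items()}

# Two techniques (e.g., GNSS and map matching) with hypothetical masses
# over cells 2..5 of a discretized road segment:
gnss = {frozenset({3, 4}): 0.7, frozenset({2, 3, 4, 5}): 0.3}
mapm = {frozenset({4, 5}): 0.6, frozenset({3, 4, 5}): 0.4}

for cells, mass in sorted(combine(gnss, mapm).items(), key=lambda kv: -kv[1]):
    print(sorted(cells), round(mass, 3))
```

The normalization by `1 - conflict` is what lets heterogeneous, partially conflicting sources still yield a usable combined belief, which is the situation the thesis describes.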
5

Empirical evaluation of a Markovian model in a limit order market

Trönnberg, Filip January 2012
A stochastic model for the dynamics of a limit order book is evaluated and tested on empirical data. Arrivals of limit, market, and cancellation orders are described in terms of a Markovian queuing system with exponentially distributed inter-event times. In this model, several key quantities can be calculated analytically, such as the distribution of times between price moves, price volatility, and the probability of an upward price move, all conditional on the state of the order book. We show that the exponential distribution fits the times between order book events poorly, and further show that little resemblance exists between the analytical formulas of this model and the empirical data. The log-normal and Weibull distributions are suggested as replacements, as they appear to fit the empirical data better.
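A minimal version of the distributional check might look as follows, with synthetic Weibull-distributed gaps standing in for the empirical inter-event times; the choice of candidate distributions and the decision to pin the location parameter at zero are assumptions of this sketch.

```python
import numpy as np
from scipy import stats

# Fit candidate distributions to inter-event times and compare goodness of
# fit with the Kolmogorov-Smirnov statistic. Synthetic heavy-clustered data
# stands in for the empirical order flow used in the thesis.
rng = np.random.default_rng(1)
gaps = rng.weibull(0.6, size=5000)  # bursty, unlike an exponential sample

candidates = {
    "exponential": stats.expon,
    "log-normal": stats.lognorm,
    "Weibull": stats.weibull_min,
}
for name, dist in candidates.items():
    params = dist.fit(gaps, floc=0.0)            # location pinned at zero
    ks = stats.kstest(gaps, dist.cdf, args=params)
    print(f"{name:>12}: KS statistic = {ks.statistic:.3f}")
```

On data like this, the exponential fit shows a markedly larger KS statistic than the other two, mirroring the thesis's finding that order book event times are poorly described by an exponential law.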
