  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Creating An Editor For The Implementation of WorkFlow+: A Framework for Developing Assurance Cases

Chiang, Thomas January 2021 (has links)
As vehicles become more complex, the work required to ensure that they are safe increases enormously. This in turn makes testing systems, subsystems, and components, both individually and when integrated, a far more complicated task. As a result, managing the safety engineering process for vehicle development is of major interest to all automotive manufacturers. The goal of this research is to introduce a tool that supports a new framework for modeling safety processes, which can partially address some of these challenges. WorkFlow+ is a framework developed to combine data flow and process flow in order to increase traceability, enable users to model the safety engineering workflow for their products at the desired granularity, and produce assurance cases that allow regulators and evaluators to validate that the product is safe for its users and the public. The development of an editor brings WorkFlow+ to life. / Thesis / Master of Applied Science (MASc)
12

Dissertation_XiaoquanGao.pdf

Xiaoquan Gao (12049385) 04 December 2024 (has links)
Public sector services often face challenges in allocating limited resources effectively. Despite their fundamental importance to societal welfare, these systems often operate without sufficient analytical support, and their decision-making processes remain understudied in the academic literature. While data-driven analytical approaches offer promising solutions for addressing complex tradeoffs and resource constraints, the unique characteristics of public systems create significant challenges for modeling and for developing efficient solutions. This dissertation addresses these challenges by applying stochastic models to enhance decision-making in two critical areas: emergency medical services in healthcare and jail diversion in the criminal justice system.

The first part focuses on integrating drones into emergency medical services to shorten response times and improve patient outcomes. We develop a Markov Decision Process (MDP) model to address the coordination between aerial and ground vehicles, accounting for uncertain travel times and bystander availability. To solve this complex problem, we develop a tractable approximate policy iteration algorithm that approximates the value function through neural networks, with basis functions tailored to the spatial and temporal characteristics of the EMS system. Case studies using historical data from Indiana provide valuable insights for managing real-time EMS logistics. Our results show that drone augmentation can reduce response times by over 30% compared to traditional ambulances. This research provides practical guidelines for implementing drone-assisted emergency medical services while contributing to the literature on hybrid delivery systems.

The second part develops data-driven analytical tools to improve placement decisions in jail diversion programs, balancing public safety and individual rehabilitation. Community corrections programs offer promising alternatives to incarceration but face their own resource constraints. We develop an MDP model that captures the complex tradeoffs between individual recidivism risks and the impacts of overcrowding. Our model extends beyond traditional queueing problems by incorporating criminal justice-specific features, including deterministic service times and convex occupancy-dependent costs. To overcome the theoretical challenges, we develop a novel unified approach that combines system coupling with policy deviation bounds to analyze value functions, ultimately establishing superconvexity. This theoretical foundation enables us to develop an efficient algorithm based on time-scale separation, providing practical tools for optimizing diversion decisions. A case study based on real data from our community partner shows that our approach can reduce recidivism rates by 28% compared to current practices. Beyond its academic impact, this research has been used by community partners to secure program funding for future staffing.
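The drone-versus-ambulance dispatch tradeoff described above can be illustrated with a toy tabular MDP. The sketch below runs plain value iteration on a two-unit state space with entirely hypothetical costs and transition probabilities; the dissertation itself uses approximate policy iteration with neural-network value functions, which this deliberately does not reproduce.

```python
import itertools

GAMMA = 0.9
STATES = list(itertools.product([0, 1], repeat=2))   # (ambulance busy?, drone busy?)
ACTIONS = ("ambulance", "drone", "queue")
COST = {"ambulance": 9.0, "drone": 4.0, "queue": 20.0}  # hypothetical response costs
P_FREE = 0.5   # hypothetical chance a busy unit frees up before the next call

def feasible(s, a):
    amb_busy, drone_busy = s
    return {"ambulance": not amb_busy, "drone": not drone_busy, "queue": True}[a]

def transitions(s, a):
    """Return [(probability, next_state)] after taking action a in state s."""
    amb, dr = s
    if a == "ambulance":
        amb = 1
    elif a == "drone":
        dr = 1
    out = []
    # Each busy unit independently becomes free with probability P_FREE.
    for na, pa in ([(0, P_FREE), (1, 1 - P_FREE)] if amb else [(0, 1.0)]):
        for nd, pd in ([(0, P_FREE), (1, 1 - P_FREE)] if dr else [(0, 1.0)]):
            out.append((pa * pd, (na, nd)))
    return out

def value_iteration(tol=1e-9):
    V = {s: 0.0 for s in STATES}
    while True:
        newV = {
            s: min(
                COST[a] + GAMMA * sum(p * V[t] for p, t in transitions(s, a))
                for a in ACTIONS
                if feasible(s, a)
            )
            for s in STATES
        }
        if max(abs(newV[s] - V[s]) for s in STATES) < tol:
            return newV
        V = newV

def greedy(s, V):
    """Best dispatch action in state s under value estimate V."""
    return min(
        (a for a in ACTIONS if feasible(s, a)),
        key=lambda a: COST[a] + GAMMA * sum(p * V[t] for p, t in transitions(s, a)),
    )

V = value_iteration()
```

Even this toy version exhibits the qualitative tradeoff the abstract describes: dispatching the cheap unit now makes it unavailable later, so the optimal action depends on the value of future states, not just immediate cost.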
13

L'introduction de la gestion du cycle de vie produit dans les entreprises de sous-traitance comme vecteur d'agilité opérationnelle et de maîtrise du produit / ‘Product Lifecycle Management’ (PLM) in the subcontracting industry as a key for operational agility and product management

Pinel, Muriel 30 May 2013 (has links)
Faced with a constantly evolving environment, companies must change, sometimes profoundly. These evolutions, in principle intentional and monitored, are carried out through so-called business projects. Two major goals can be identified among those pursued through such projects: developing operational agility and mastering the product. This thesis focuses on the PLM (Product Lifecycle Management) project, and more precisely on the implementation of the information system needed to manage the product lifecycle: the PLM system. This information system coordinates all or part of the information related to the definition, manufacturing, use, and retirement of products. After a state of the art that precisely defines the concepts of product lifecycle management, a method is proposed for defining the requirements of the PLM system. Defining this method highlights two consistency needs: consistency among the various models of the PLM system, and consistency among the various product representations used throughout the product lifecycle. A modeling framework based on the systemic paradigm, the ambivalence paradigm, and metamodeling concepts is then proposed. This framework provides elements that address the identified consistency needs, and it supports the adoption of the synergistic reasoning essential to developing the operational agility sought by the enterprise. An experiment is carried out to illustrate the concepts introduced by the modeling framework.
14

Development of a modeling framework for design of low-cost and appropriate rehabilitation strategies for Nyala abandoned mine

Mhlongo, Sphiwe Emmauel 01 October 2013 (has links)
Department of Mining and Environmental Geology / MESC
15

Analysis of enterprise IT service availability : Enterprise architecture modeling for assessment, prediction, and decision-making

Franke, Ulrik January 2012 (has links)
Information technology has become increasingly important to individuals and organizations alike. Not only does IT allow us to do what we always did faster and more effectively, but it also allows us to do new things, organize ourselves differently, and work in ways previously unimaginable. However, these advantages come at a cost: as we become increasingly dependent upon IT services, we also demand that they are continuously and uninterruptedly available for use. Despite advances in reliability engineering, the complexity of today's increasingly integrated systems offers a non-trivial challenge in this respect. How can high availability of enterprise IT services be maintained in the face of constant additions and upgrades, decade-long life-cycles, dependencies upon third-parties and the ever-present business-imposed requirement of flexible and agile IT services? The contribution of this thesis includes (i) an enterprise architecture framework that offers a unique and action-guiding way to analyze service availability, (ii) identification of causal factors that affect the availability of enterprise IT services, (iii) a study of the use of fault trees for enterprise architecture availability analysis, and (iv) principles for how to think about availability management. This thesis is a composite thesis of five papers. Paper 1 offers a framework for thinking about enterprise IT service availability management, highlighting the importance of variance of outage costs. Paper 2 shows how enterprise architecture (EA) frameworks for dependency analysis can be extended with Fault Tree Analysis (FTA) and Bayesian networks (BN) techniques. FTA and BN are proven formal methods for reliability and availability modeling. Paper 3 describes a Bayesian prediction model for systems availability, based on expert elicitation from 50 experts. Paper 4 combines FTA and constructs from the ArchiMate EA language into a method for availability analysis on the enterprise level. 
The method is validated by five case studies, where annual downtime estimates were always within eight hours of the actual values. Paper 5 extends the Bayesian prediction model from paper 3 and the modeling method from paper 4 into a full-blown enterprise architecture framework, expressed in a probabilistic version of the Object Constraint Language. The resulting modeling framework is tested in nine case studies of enterprise information systems.
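The fault-tree availability analysis that papers 2 and 4 build on reduces, in its simplest form, to propagating component availabilities through AND/OR failure gates. A minimal sketch, assuming independent components and made-up availability figures (the thesis's actual models are richer, and the example service topology below is hypothetical):

```python
# Leaves carry steady-state availabilities. In failure logic, an OR-gate
# models a series dependency (the service fails if ANY input fails) and
# an AND-gate models redundancy (it fails only if ALL inputs fail).
from dataclasses import dataclass
from typing import Sequence, Union

@dataclass
class Leaf:
    availability: float

@dataclass
class Gate:
    kind: str                               # "AND" or "OR" (failure logic)
    inputs: Sequence[Union["Gate", Leaf]]

def availability(node):
    if isinstance(node, Leaf):
        return node.availability
    child = [availability(c) for c in node.inputs]
    if node.kind == "OR":                   # any failure propagates: multiply availabilities
        prod = 1.0
        for a in child:
            prod *= a
        return prod
    unavail = 1.0                           # "AND": all inputs must fail
    for a in child:
        unavail *= 1.0 - a
    return 1.0 - unavail

# Hypothetical enterprise service: a web tier in series with a database
# that fails only if both redundant replicas fail.
service = Gate("OR", [
    Leaf(0.999),                            # web tier
    Gate("AND", [Leaf(0.99), Leaf(0.99)]),  # two DB replicas
])
print(availability(service))                # 0.999 * (1 - 0.01**2)
```

The same recursion is what makes fault trees attractive as an EA analysis backend: the tree structure can be derived from architecture-model dependencies, and the arithmetic stays compositional.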
16

Development of a multimodal port freight transportation model for estimating container throughput

Gbologah, Franklin Ekoue 08 July 2010 (has links)
Computer-based simulation models have often been used to study the multimodal freight transportation system, but these studies have not been able to dynamically couple the various modes into one model; they are therefore limited in their ability to inform on dynamic system-level interactions. This research thesis is motivated by the need to dynamically couple the multimodal freight transportation system to operate at multiple spatial and temporal scales. It is part of a larger research program to develop a systems modeling framework applicable to freight transportation, which attempts to dynamically couple railroad, seaport, and highway freight transportation models. The focus of this thesis is the development of the coupled railroad and seaport models; a separate volume (Wall 2010) on the development of the highway model has been completed. The railroad and seaport model was developed using Arena® simulation software and comprises the Ports of Savannah, GA, Charleston, SC, and Jacksonville, FL, their adjacent CSX rail terminals, and connecting CSX railroads in the southeastern U.S. However, only the simulation outputs for the Port of Savannah are discussed in this paper. It should be noted that the modeled port layout is only conceptual; therefore, any inferences drawn from the model's outputs do not represent actual port performance. The model was run for 26 continuous simulation days, generating 141 containership calls, 147 highway truck deliveries of containers, 900 trains, and a throughput of 28,738 containers at the Port of Savannah, GA. An analysis of each train's trajectory from origin to destination shows that trains spend between 24 and 67 percent of their travel time idle on the tracks, waiting for permission to move. Train parking demand analysis on the adjacent shunting area at the multimodal terminal suggests that not enough containers are coming from the port, because the demand is due only to trains waiting to load. The simulation also shows that, on average, containerships calling at the Port of Savannah take about 3.2 days to find an available dock to berth and unload containers. The observed mean turnaround time for containerships was 4.5 days. The experiment also shows that container residence time within the port and the adjacent multimodal rail terminal varies widely. Residence times within the port range from about 0.2 hours to 9 hours, with a mean of 1 hour. The average residence time inside the rail terminal is about 20 minutes, but observations varied from as little as 2 minutes to as much as 2.5 hours. In addition, about 85 percent of container residence time in the port is spent idle. This research thesis demonstrates that it is possible to dynamically couple the different sub-models of the multimodal freight transportation system. However, challenges remain for future research. The principal challenge is the development of a more efficient train movement algorithm that can incorporate the actual Direct Traffic Control (DTC) and/or Automatic Block Signal (ABS) track segmentation; such an algorithm would likely improve the capacity estimates of the railroad network. In addition, future research should seek to reduce the high computational cost imposed by a discrete process modeling methodology and the adoption of a single-container resolution level for terminal operations. A methodology combining both discrete and continuous process modeling, as proposed in this study, could lessen computational costs and lower computer system requirements at the cost of some of the model's feedback capabilities. This tradeoff must be carefully examined.
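The berth-queue dynamics reported above (ships waiting days for an available dock) are the classic multi-server queue at the heart of such port models. A minimal discrete-event sketch, with hypothetical arrival and service rates rather than the thesis's Arena model:

```python
# Ships arrive at random, wait for one of C berths, are serviced, and leave.
# A heap of berth-free times is enough to drive the event logic.
import heapq
import random

def simulate(n_ships=1000, berths=3, mean_interarrival=0.5, mean_service=1.2, seed=1):
    """Return (mean wait, mean turnaround) in days for n_ships arrivals."""
    random.seed(seed)
    free_at = [0.0] * berths                  # earliest time each berth is free
    heapq.heapify(free_at)
    t = total_wait = total_turnaround = 0.0
    for _ in range(n_ships):
        t += random.expovariate(1.0 / mean_interarrival)   # next arrival
        start = max(t, heapq.heappop(free_at))             # wait for a berth
        service = random.expovariate(1.0 / mean_service)
        heapq.heappush(free_at, start + service)
        total_wait += start - t
        total_turnaround += (start - t) + service
    return total_wait / n_ships, total_turnaround / n_ships

wait, turnaround = simulate()
```

With these made-up rates the berths are heavily loaded (offered load 1.2/0.5 = 2.4 against 3 berths), so waits are substantial, echoing the congestion behavior the abstract describes; the thesis's model adds the train and truck sides that this sketch omits.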
17

Conceiving and Implementing a language-oriented approach for the design of automated learning scenarios

Moura, César 20 June 2007 (has links) (PDF)
This thesis addresses the design of pedagogical scenarios for e-learning. To ease the exchange of materials describing teaching strategies, the community has recently worked to propose a standard language generic enough to represent any scenario, independent even of the underlying educational paradigm. Generically called Educational Modeling Language (EML), this type of language brings a new way of designing learning environments, moving away from traditional Instructional System Design: instead of delivering a finished application, EMLs provide a standard conceptual model, a notation to express it, and editors and frameworks, leaving end designers the task of creating their own "applications". EMLs thus enable the creation and execution of scenario instances in a more open and flexible approach, increasing the ability of the resulting applications to adapt to users' needs.

This flexibility nevertheless remains limited, and after a few years of research EMLs have begun to show their weaknesses. The language chosen as the standard of the domain, IMS-LD, has proven generic but not very expressive, and does not allow a faithful representation of the variety of existing scenarios; it is the users who must adapt to the syntax and semantics of the standard.

This thesis starts from an observation about the difficulties of the design process itself, and about the risk of a gap between educators and software developers. To improve the ability of teaching teams to specify, and even implement, pedagogical scenarios, we propose an approach in which the EML adapts to the user's needs: users can create their own language (or languages) if they need to. Moreover, the same scenario can be described simultaneously by different EMLs (or models), reflecting the different perspectives, and even paradigms, of each stakeholder.

This approach, called multi-EML, is made possible by recent advances in software engineering such as Model-Driven Architecture, the best-known implementation of a new programming paradigm called Language-Oriented Programming (LOP), which also includes other implementations.

Our proposal consists of an authoring environment built on Language-Oriented Programming principles, using the open Eclipse platform and in particular its LOP implementation, the Eclipse Modeling Framework (EMF). Designers thus have a tool that lets them create formal specifications describing the intended scenarios and automatically generate the corresponding applications, in a process that starts from informal descriptions by domain experts. Recognizing that education experts, those who understand the domain best, are not necessarily computer scientists, the proposed environment, called MDEduc, also provides an editor for describing a scenario in an informal notation, namely the pedagogical pattern, from which the formal models can be derived. In addition, we propose to keep these informal descriptions side by side and in correspondence with the more formal, prescriptive descriptions, and to allow round trips between them at every phase of the learning system's life cycle.
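The multi-EML idea, one scenario described by several user-defined languages that must stay consistent, can be sketched very small. The two "languages" below (a role/activity view and a resource-flow view) and the consistency rule between them are purely illustrative; they are not MDEduc's actual metamodels.

```python
# Two tiny user-defined scenario languages over the same scenario,
# plus a cross-model consistency check in the multi-EML spirit.
from dataclasses import dataclass
from typing import List

@dataclass
class Activity:            # role/activity view
    name: str
    role: str

@dataclass
class Flow:                # resource-flow view
    producer: str          # activity that produces a resource
    consumer: str          # activity that consumes it

def consistent(activities: List[Activity], flows: List[Flow]) -> bool:
    """Every activity a flow refers to must exist in the activity model."""
    names = {a.name for a in activities}
    return all(f.producer in names and f.consumer in names for f in flows)

scenario = [Activity("read_case", "learner"), Activity("give_feedback", "tutor")]
flows = [Flow("read_case", "give_feedback")]
print(consistent(scenario, flows))  # True
```

In an EMF-based tool the same role is played by metamodels and cross-model constraints; the point of the sketch is only that each stakeholder's view can use its own vocabulary while a shared check keeps the views coherent.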
18

Component-Based Model-Driven Software Development

Johannes, Jendrik 07 January 2011 (has links) (PDF)
Model-driven software development (MDSD) and component-based software development are both paradigms for reducing complexity and for increasing abstraction and reuse in software development. In this thesis, we aim at combining the advantages of each by introducing methods from component-based development into MDSD. In MDSD, all artefacts that describe a software system are regarded as models of the system and are treated as the central development artefacts. To obtain a system implementation from such models, they are transformed and integrated until implementation code can be generated from them. Models in MDSD can have very different forms: they can be documents, diagrams, or textual specifications defined in different modelling languages. Integrating these models of different formats and abstraction in a consistent way is a central challenge in MDSD. We propose to tackle this challenge by explicitly separating the tasks of defining model components and composing model components, which is also known as distinguishing programming-in-the-small and programming-in-the-large. That is, we promote a separation of models into models for modelling-in-the-small (models that are components) and models for modelling-in-the-large (models that describe compositions of model components). To perform such component-based modelling, we introduce two architectural styles for developing systems with component-based MDSD (CB-MDSD). For CB-MDSD, we require a universal composition technique that can handle models defined in arbitrary modelling languages. A technique that can handle arbitrary textual languages is universal invasive software composition for code fragment composition. We extend this technique to universal invasive software composition for graph fragments (U-ISC/Graph) which can handle arbitrary models, including graphical and textual ones, as components. 
Such components are called graph fragments, because we treat each model as a typed graph and support reuse of partial models. To put the composition technique into practice, we developed the tool Reuseware that implements U-ISC/Graph. The tool is based on the Eclipse Modelling Framework and can therefore be integrated into existing MDSD development environments based on the framework. To evaluate the applicability of CB-MDSD, we realised for each of our two architectural styles a model-driven architecture with Reuseware. The first style, which we name ModelSoC, is based on the component-based development paradigm of multi-dimensional separation of concerns. The architecture we realised with that style shows how a system that involves multiple modelling languages can be developed with CB-MDSD. The second style, which we name ModelHiC, is based on hierarchical composition. With this style, we developed abstraction and reuse support for a large modelling language for telecommunication networks that implements the Common Information Model industry standard.
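The core move of treating models as typed graphs and composing fragments at designated points can be shown in miniature. In the sketch below, each fragment is a graph whose "slot" nodes get fused with nodes of the base model during composition; the dict-based graph encoding and the `f.` namespacing are this sketch's own conventions, in the spirit of U-ISC/Graph but not Reuseware's actual API.

```python
# Compose a graph fragment into a base graph by binding slots to base nodes.
# Graphs are dicts: {"nodes": {id: type}, "edges": [(src, dst)]}.

def compose(base, fragment, bindings):
    """Merge `fragment` into `base`, fusing each slot with its bound node.

    `bindings` maps fragment node ids (the slots) to base node ids.
    Unbound fragment nodes are copied in under a fresh namespaced id.
    """
    nodes = dict(base["nodes"])
    edges = list(base["edges"])
    rename = {}
    for nid, ntype in fragment["nodes"].items():
        if nid in bindings:
            rename[nid] = bindings[nid]          # fuse slot with base node
        else:
            rename[nid] = f"f.{nid}"             # keep fragment node, namespaced
            nodes[rename[nid]] = ntype
    edges += [(rename[s], rename[d]) for s, d in fragment["edges"]]
    return {"nodes": nodes, "edges": edges}

base = {"nodes": {"Sys": "Class"}, "edges": []}
frag = {"nodes": {"slot": "Class", "Log": "Class"}, "edges": [("slot", "Log")]}
composed = compose(base, frag, {"slot": "Sys"})
print(composed)   # "slot" is fused with "Sys"; "Log" is copied in as "f.Log"
```

Because the operation is defined on plain typed graphs, it is agnostic to whether the fragments originated as diagrams or text, which is exactly the universality the thesis pursues for arbitrary modelling languages.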
19

Component-Based Model-Driven Software Development

Johannes, Jendrik 15 December 2010 (has links)
