  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Composability of parallel codes on heterogeneous architectures / La composition des codes parallèles sur plates-formes hétérogènes

Hugo, Andra-Ecaterina 12 December 2014 (has links)
Pour répondre aux besoins de précision et d'efficacité des simulations scientifiques, la communauté du Calcul Haute Performance augmente progressivement les demandes en termes de parallélisme, rajoutant ainsi un besoin croissant de réutiliser les bibliothèques parallèles optimisées pour les architectures complexes. L'utilisation simultanée de plusieurs bibliothèques de calcul parallèle au sein d'une application soulève bien souvent des problèmes d'efficacité. En compétition pour l'obtention des ressources, les routines parallèles, pourtant optimisées, se gênent et l'on voit alors apparaître des phénomènes de surcharge, de contention ou de défaut de cache. Dans cette thèse, nous présentons une technique de cloisonnement de flux de calculs qui permet de limiter les effets de telles interférences. Le cloisonnement est réalisé à l'aide de contextes d'exécution qui partitionnent les unités de calcul voire en partagent certaines. La répartition des ressources entre les contextes peut être modifiée dynamiquement afin d'optimiser le rendement de la machine. A cette fin, nous proposons l'utilisation de certaines métriques par un superviseur pour redistribuer automatiquement les ressources aux contextes. Nous décrivons l'intégration des contextes d'ordonnancement au support d'exécution pour machines hétérogènes StarPU et présentons des résultats d'expériences démontrant la pertinence de notre approche. Dans ce but, nous avons implémenté une extension du solveur direct creux qr mumps dans laquelle nous avons fait appel à ces mécanismes d'allocation de ressources. A travers les contextes d'ordonnancement, nous décrivons une nouvelle méthode de décomposition du problème basée sur un algorithme de « proportional mapping ». Le superviseur permet de réadapter dynamiquement et automatiquement l'allocation des ressources au parallélisme irrégulier de l'application. L'utilisation des contextes d'ordonnancement et du superviseur a amélioré la localité et la performance globale du solveur. / To face the ever more demanding requirements in terms of accuracy and speed of scientific simulations, the High Performance Computing community is constantly increasing the demands in terms of parallelism, thus adding tremendous value to parallel libraries strongly optimized for highly complex architectures. Enabling HPC applications to perform efficiently when invoking multiple parallel libraries simultaneously is a great challenge. Even if a uniform runtime system is used underneath, scheduling tasks or threads coming from different libraries over the same set of hardware resources introduces many issues, such as resource oversubscription, undesirable cache flushes or memory bus contention. In this thesis, we present an extension of StarPU, a runtime system specifically designed for heterogeneous architectures, that allows multiple parallel codes to run concurrently with minimal interference. Such parallel codes run within scheduling contexts that provide confined execution environments which can be used to partition computing resources. Scheduling contexts can be dynamically resized to optimize the allocation of computing resources among concurrently running libraries. We introduced a hypervisor that automatically expands or shrinks contexts using feedback from the runtime system (e.g. resource utilization). 
We demonstrated the relevance of this approach by extending an existing generic sparse direct solver (qr mumps) to use these mechanisms and introduced a new decomposition method based on proportional mapping that is used to build the scheduling contexts. In order to cope with the very irregular behavior of the application, the hypervisor manages dynamically the allocation of resources. By means of the scheduling contexts and the hypervisor we improved the locality and thus the overall performance of the solver.
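The context-resizing policy described in this abstract can be pictured with a small, illustrative sketch. The class and function names below are hypothetical and this is not StarPU's actual C API; it only shows the kind of feedback-driven decision a hypervisor could make when one scheduling context sits partly idle while another is starved.

```python
# Minimal sketch of a feedback-driven hypervisor that resizes two
# scheduling contexts. All names are hypothetical; this is not StarPU's API.

class SchedulingContext:
    def __init__(self, name, workers):
        self.name = name
        self.workers = set(workers)   # computing units owned by this context
        self.idle_time = 0.0          # feedback reported by the runtime

    def utilization(self, window):
        busy = len(self.workers) * window - self.idle_time
        return busy / (len(self.workers) * window) if self.workers else 1.0


def hypervisor_step(ctx_a, ctx_b, window=1.0, threshold=0.2):
    """Move one worker from the context with the most idle time to the
    other when the utilization gap exceeds `threshold`."""
    gap = ctx_a.utilization(window) - ctx_b.utilization(window)
    donor, receiver = (ctx_b, ctx_a) if gap > threshold else (ctx_a, ctx_b)
    if abs(gap) > threshold and len(donor.workers) > 1:
        worker = donor.workers.pop()
        receiver.workers.add(worker)
        print(f"moved worker {worker}: {donor.name} -> {receiver.name}")


# Example: context A is fully busy while context B sits partly idle.
a = SchedulingContext("qr_mumps", workers=[0, 1])
b = SchedulingContext("other_lib", workers=[2, 3, 4, 5])
a.idle_time, b.idle_time = 0.0, 2.5
hypervisor_step(a, b)
```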
2

Towards Grid-Wide Modeling and Simulation

Xie, Yong, Teo, Yong Meng, Cai, W., Turner, S. J. 01 1900 (has links)
Modeling and simulation permeate all areas of business, science and engineering. With the increase in the scale and complexity of simulations, large amounts of computational resources are required, and collaborative model development is needed, as multiple parties could be involved in the development process. The Grid provides a platform for coordinated resource sharing and application development and execution. In this paper, we survey existing technologies in modeling and simulation, and we focus on interoperability and composability of simulation components for both simulation development and execution. We also present our recent work on an HLA-based simulation framework on the Grid, and discuss the issues to achieve composability. / Singapore-MIT Alliance (SMA)
3

Definitions and Measures of Performance for Standard Biological Parts

Conboy, Caitlin, Braff, Jen, Endy, Drew 16 March 2006 (has links)
We are working to enable the engineering of integrated biological systems. Specifically, we would like to be able to build systems using standard parts that, when combined, have reliable and predictable behavior. Here, we define standard characteristics for describing the absolute physical performance of genetic parts that control gene expression. The first characteristic, PoPS, defines the level of transcription as the number of RNA polymerase molecules that pass a point on DNA each second, on a per DNA copy basis (PoPS = Polymerase Per Second; PoPSdc = PoPS per DNA copy). The second characteristic, RiPS, defines the level of translation as the number of ribosome molecules that pass a point on mRNA each second, on a per mRNA copy basis (RiPS = Ribosomes Per Second; RiPSmc = RiPS per mRNA copy). In theory, it should be possible to routinely combine devices that send and receive PoPS and RiPS signals to produce gene expression-based systems whose quantitative behavior is easy to predict. To begin to evaluate the utility of the PoPS and RiPS framework we are characterizing the performance of a simple gene expression device in E. coli growing at steady state under standard operating conditions; we are using a simple ordinary differential equation model to estimate the steady state PoPS and RiPS levels. / Poster presented at the 2005 ICSB meeting, held at Harvard Medical School in Boston, MA.
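As a worked illustration of how a simple ODE model could back out PoPS and RiPS from steady-state measurements, the relations below use assumed notation (mRNA count m, protein count p, DNA copy number n, degradation/dilution rates delta_m and delta_p); they are a sketch, not the authors' actual model.

```latex
% Illustrative steady-state relations (assumed notation, not the authors' model):
% m = mRNA per cell, p = protein per cell, n = DNA copy number,
% \delta_m, \delta_p = degradation/dilution rates.
\begin{align}
  \frac{dm}{dt} &= \mathrm{PoPS_{dc}}\, n - \delta_m m,
  & \frac{dp}{dt} &= \mathrm{RiPS_{mc}}\, m - \delta_p p, \\
  \text{at steady state:}\quad
  \mathrm{PoPS_{dc}} &= \frac{\delta_m\, m_{ss}}{n},
  & \mathrm{RiPS_{mc}} &= \frac{\delta_p\, p_{ss}}{m_{ss}}.
\end{align}
```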
4

On Specifying and Enforcing Access Control of Web Services Based Workflows

Chen, Yun-Chih 11 August 2009 (has links)
Web services have become the de facto standard components for quickly building a business process that satisfies the business goal of an organization. Nowadays, Web services have found their way into describing the functions of automatic tasks as well as manual tasks. An important part of the specification of a business process, especially for manual tasks, is access control. This thesis considers both types of tasks involved in a Web services-based process, together with the corresponding access control problem, and proposes a selection approach for choosing the performer of each task so as to satisfy all access control constraints. Based on the role-based access control model, we focus on two types of access control: separation of duties (SoD) and binding of duties (BoD). Both role-level and participant-level SoDs and BoDs that need to be dynamically enforced are considered in this thesis. The proposed performer selection approach is evaluated on a workflow scenario and is shown to have the highest chance of satisfying all predefined access control constraints when compared to other methods.
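To make the two constraint types concrete, here is a minimal sketch of checking SoD and BoD constraints while assigning performers to tasks. The data model and the greedy selection are illustrative assumptions, not the thesis' actual selection algorithm.

```python
# Sketch of constraint-aware performer selection for a workflow (hypothetical
# data model; the thesis' actual selection approach may differ).

def eligible(task, candidate, roles, assigned, sod_pairs, bod_pairs):
    """candidate is eligible for `task` if they hold an allowed role and no
    separation-of-duty (SoD) or binding-of-duty (BoD) constraint is violated."""
    if not roles[candidate] & task["allowed_roles"]:
        return False
    for t1, t2 in sod_pairs:                      # SoD: different performers
        other = t2 if task["id"] == t1 else t1 if task["id"] == t2 else None
        if other in assigned and assigned[other] == candidate:
            return False
    for t1, t2 in bod_pairs:                      # BoD: same performer
        other = t2 if task["id"] == t1 else t1 if task["id"] == t2 else None
        if other in assigned and assigned[other] != candidate:
            return False
    return True


roles = {"alice": {"clerk"}, "bob": {"clerk", "manager"}}
tasks = [{"id": "prepare", "allowed_roles": {"clerk"}},
         {"id": "approve", "allowed_roles": {"manager"}}]
sod_pairs = [("prepare", "approve")]              # the preparer must not approve
assigned = {}
for task in tasks:
    performer = next(p for p in roles
                     if eligible(task, p, roles, assigned, sod_pairs, []))
    assigned[task["id"]] = performer
print(assigned)   # -> {'prepare': 'alice', 'approve': 'bob'}
```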
5

A Verification Framework for Component Based Modeling and Simulation: “Putting the pieces together”

Mahmood, Imran January 2013 (has links)
The discipline of component-based modeling and simulation offers promising gains, including reductions in development cost, time, and system complexity. This paradigm is very profitable as it promotes the use and reuse of modular components and is auspicious for the effective development of complex simulations. It is, however, confronted by a series of research challenges when it comes to actually practicing this methodology. One such important issue is composability verification. In modeling and simulation (M&S), composability is the capability to select and assemble components in various combinations to satisfy specific user requirements. Therefore, to ensure the correctness of a composed model, it is verified with respect to its requirements specifications. There are different approaches and existing component modeling frameworks that support composability; however, in our observation most of the component modeling frameworks possess no or only weak built-in support for composability verification. One such framework is the Base Object Model (BOM), which fundamentally poses a satisfactory potential for effective model composability and reuse. However, it falls short of the required semantics, necessary modeling characteristics and built-in evaluation techniques, which are essential for modeling complex system behavior and reasoning about the validity of the composability at different levels. In this thesis a comprehensive verification framework is proposed to contend with some important issues in composability verification, and a verification process is suggested to verify the composability of different kinds of system models, such as reactive, real-time and probabilistic systems. With the assumption that all these systems are concurrent in nature, in which different composed components interact with each other simultaneously, the requirements for extensive techniques for structural and behavioral analysis become increasingly challenging. The proposed verification framework provides methods, techniques and tool support for verifying composability at its different levels. These levels are defined as foundations of a consistent model composability. Each level is discussed in detail and an approach is presented to verify composability at that level. In particular we focus on the Dynamic-Semantic Composability level due to its significance in the overall composability correctness and also due to the level of difficulty it poses in the process. In order to verify composability at this level we investigate the application of three different approaches, namely (i) Petri Nets based Algebraic Analysis, (ii) Colored Petri Nets (CPN) based State-space Analysis, and (iii) Communicating Sequential Processes based Model Checking. All three approaches attack the problem of verifying dynamic-semantic composability in different ways, but they all share the same aim, i.e., to confirm the correctness of a composed model with respect to its requirement specifications. Besides the operative integration of these approaches in our framework, we also contributed to the improvement of each approach for effective applicability in composability verification, such as applying algorithms for automating Petri Net algebraic computations, introducing a state-space reduction technique in CPN based state-space analysis, and introducing function libraries to perform verification tasks and help the modeler with ease of use during the composability verification. 
We also provide detailed examples of using each approach with different models to explain the verification process and their functionality. Lastly, we provide a comparison of these approaches and suggest guidelines for choosing the right one based on the nature of the model and the available information. With the right choice of an approach, and following the guidelines of our component-based M&S life-cycle, a modeler can easily construct and verify BOM-based composed models with respect to their requirement specifications. / Overseas Scholarship for PhD in Selected Studies Phase II Batch I, Higher Education Commission of Pakistan. QC 20130224
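As a small illustration of the state-space style of analysis mentioned above, the sketch below does a breadth-first reachability check for a forbidden state of a toy composed model; it is a generic illustration, not the CPN-based tooling developed in the thesis.

```python
# Illustrative breadth-first state-space exploration used to check that a
# composed model never reaches a forbidden state. Generic sketch only.

from collections import deque

def explore(initial, successors, is_bad):
    """Return a counterexample path to a bad state, or None if none is reachable."""
    parent, frontier = {initial: None}, deque([initial])
    while frontier:
        state = frontier.popleft()
        if is_bad(state):
            path = []
            while state is not None:
                path.append(state)
                state = parent[state]
            return list(reversed(path))
        for nxt in successors(state):
            if nxt not in parent:
                parent[nxt] = state
                frontier.append(nxt)
    return None

# Toy composed model: two components each counting 0..3; the (assumed)
# requirement is that they are never both at their maximum simultaneously.
def successors(state):
    a, b = state
    return [(min(a + 1, 3), b), (a, min(b + 1, 3))]

print(explore((0, 0), successors, is_bad=lambda s: s == (3, 3)))
# -> a path such as [(0, 0), (1, 0), ..., (3, 3)], i.e. the requirement is violated
```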
6

Simulation Software as a Service and Service-Oriented Simulation Experiment

Guo, Song 28 July 2012 (has links)
Simulation software is being increasingly used in various domains for system analysis and/or behavior prediction. Traditionally, researchers and field experts need to have access to the computers that host the simulation software to do simulation experiments. With recent advances in cloud computing and Software as a Service (SaaS), a new paradigm is emerging where simulation software is used as services that are composed with others and dynamically influence each other for service-oriented simulation experiment on the Internet. The new service-oriented paradigm brings new research challenges in composing multiple simulation services in a meaningful and correct way for simulation experiments. To systematically support simulation software as a service (SimSaaS) and service-oriented simulation experiment, we propose a layered framework that includes five layers: an infrastructure layer, a simulation execution engine layer, a simulation service layer, a simulation experiment layer and finally a graphical user interface layer. Within this layered framework, we provide a specification for both simulation experiment and the involved individual simulation services. Such a formal specification is useful in order to support systematic compositions of simulation services as well as automatic deployment of composed services for carrying out simulation experiments. Built on this specification, we identify the issue of mismatch of time granularity and event granularity in composing simulation services at the pragmatic level, and develop four types of granularity handling agents to be associated with the couplings between services. The ultimate goal is to achieve standard and automated approaches for simulation service composition in the emerging service-oriented computing environment. Finally, to achieve more efficient service-oriented simulation, we develop a profile-based partitioning method that exploits a system’s dynamic behavior and uses it as a profile to guide the spatial partitioning for more efficient parallel simulation. We develop the work in this dissertation within the application context of wildfire spread simulation, and demonstrate the effectiveness of our work based on this application.
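To illustrate what a granularity handling agent placed on a service coupling might do, here is a minimal sketch of a time-granularity adapter that downsamples a fine-grained output stream to the coarser time step the receiving service expects. The interface names and the aggregation policy are assumptions, not the dissertation's actual specification of its four agent types.

```python
# Sketch of a time-granularity handling agent on the coupling between two
# simulation services. Hypothetical interface, illustrative only.

class TimeGranularityAgent:
    def __init__(self, fine_step, coarse_step, aggregate=max):
        assert coarse_step % fine_step == 0, "steps must be compatible"
        self.ratio = coarse_step // fine_step
        self.aggregate = aggregate
        self.buffer = []

    def on_fine_event(self, value):
        """Collect fine-grained values; emit one coarse value per coarse step."""
        self.buffer.append(value)
        if len(self.buffer) == self.ratio:
            coarse, self.buffer = self.aggregate(self.buffer), []
            return coarse        # forwarded to the coarse-grained service
        return None              # nothing to forward yet


# Fire-spread service publishes every 1 min; weather service consumes every 5 min.
agent = TimeGranularityAgent(fine_step=1, coarse_step=5)
for minute, intensity in enumerate([0.1, 0.2, 0.9, 0.4, 0.3, 0.5]):
    out = agent.on_fine_event(intensity)
    if out is not None:
        print(f"minute {minute}: forward aggregated intensity {out}")
# -> minute 4: forward aggregated intensity 0.9
```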
7

Improved composability of software components through parallel hardware platforms for in-car multimedia systems

Knirsch, Andreas January 2015 (has links)
Recent years have witnessed a significant change to vehicular user interfaces (UI). This is the result of increased functionality, triggered by the continuous proliferation of vehicular software and computer systems. The UI represents the integration point that must fulfil particular usability requirements despite the increased functionality. A concurrent trend is the substitution of federated systems with integrated architectures. The steadily rising number of interacting functional components and the increasing integration density imply a growing complexity that affects system development. This evolution raises demands for concepts that aid the composition of such complex and interactive embedded software systems, operated within safety-critical environments. This thesis explores the requirements related to composability of software components, based on the example of In-Car Multimedia (ICM), and proposes a novel software architecture that provides an integration path for next-generation ICM. The investigation begins with an examination of characteristics, existing frameworks and applied practice regarding the development and composition of ICM systems. To this end, constructive aspects are identified as potential means for improving the composability of independently developed software components that differ in criticality, temporal and computational characteristics. This research examines the feasibility of partitioning software components by exploiting parallel hardware architectures. Experimental evaluations demonstrate the applicability of encapsulated scheduling domains. These are achieved through the utilisation of multiple technologies that complement each other and provide different levels of containment, while featuring efficient communication to preserve adequate interoperability. In spite of allocating dedicated computational resources to software components, certain resources are still shared and require concurrent access. Particular attention has been paid to the management of concurrent access to shared resources so as to consider the software components' individual criticality and derived priority. A software-based resource arbiter is specified and evaluated to improve the system's determinism. Within the context of automotive interactive systems, the UI is of vital importance, as it must conceal inherent complexity to minimise driver distraction. Therefore, the architecture is enhanced with a UI compositing infrastructure to facilitate the implementation of a homogeneous and comprehensive look and feel despite the segregation of functionality. The core elements of the novel architecture are validated both individually and in combination through a proof-of-concept prototype. The proposed integral architecture supports the development and in particular the integration of mixed-critical and interactive systems.
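The resource arbiter is described only at a high level, so here is a hedged sketch of one plausible policy: pending requests for a shared resource are granted in order of the requesting component's criticality. The class, method names and priority scheme are illustrative assumptions, not the thesis' implementation.

```python
# Minimal sketch of a priority-based arbiter for a shared resource (e.g. a GPU
# or a bus): pending requests are granted in order of the requesting
# component's criticality. Illustrative names and policy only.

import heapq
import itertools

class ResourceArbiter:
    def __init__(self):
        self._queue = []                  # (priority, seq, component)
        self._seq = itertools.count()     # FIFO tie-break within a priority

    def request(self, component, priority):
        """Lower numbers mean higher criticality (0 = safety-relevant)."""
        heapq.heappush(self._queue, (priority, next(self._seq), component))

    def grant_next(self):
        if not self._queue:
            return None
        _, _, component = heapq.heappop(self._queue)
        return component


arbiter = ResourceArbiter()
arbiter.request("media_player", priority=2)
arbiter.request("rear_view_camera", priority=0)   # higher criticality
arbiter.request("navigation", priority=1)
print([arbiter.grant_next() for _ in range(3)])
# -> ['rear_view_camera', 'navigation', 'media_player']
```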
8

Market driven elastic secure infrastructure

Tikale, Sahil 30 May 2023 (has links)
In today’s Data Centers, a combination of factors leads to the static allocation of physical servers and switches into dedicated clusters, such that it is difficult to add or remove hardware from these clusters for short periods of time. This siloing of the hardware leads to inefficient use of clusters. This dissertation proposes a novel architecture for improving the efficiency of clusters by enabling them to add or remove bare-metal servers for short periods of time. We demonstrate, by implementing a working prototype of the architecture, that such silos can be broken and that it is possible to share servers between clusters that are managed by different tools, have different security requirements, and are operated by tenants of the Data Center that may not trust each other. Physical servers and switches in a Data Center are grouped for a combination of reasons. They are used for different purposes (staging, production, research, etc.), host applications required for servicing specific workloads (HPC, Cloud, Big Data, etc.), and/or are configured to meet stringent security and compliance requirements. Additionally, the different provisioning systems and tools such as Openstack-Ironic, MaaS and Foreman that are used to manage these clusters take control of the servers, making it difficult to add or remove hardware from their control. Moreover, these clusters are typically stood up with sufficient capacity to meet the anticipated peak workload. This leads to inefficient usage of the clusters: they are under-utilized during off-peak hours, and in cases where demand exceeds capacity the clusters suffer from degraded quality of service (QoS) or may violate service level objectives (SLOs). Although today’s clouds offer huge benefits in terms of on-demand elasticity, economies of scale, and a pay-as-you-go model, many organizations are reluctant to move their workloads to the cloud. Organizations that (i) need total control of their hardware, (ii) have custom deployment practices, (iii) need to meet stringent security and compliance requirements, or (iv) do not want to pay the high costs incurred from running workloads in the cloud prefer to own their hardware and host it in a data center. This includes a large section of the economy, including financial companies, medical institutions, and government agencies, that continues to host its own clusters outside of the public cloud. Considering that all the clusters may not undergo peak demand at the same time provides an opportunity to improve the efficiency of clusters by sharing resources between them. The dissertation describes the design and implementation of the Market Driven Elastic Secure Infrastructure (MESI) as an alternative to the public cloud and as an architecture for the lowest layer of the public cloud to improve its efficiency. It allows mutually non-trusting physically deployed services to share the physical servers of a data center efficiently. The approach proposed here is to build a system composed of a set of services, each fulfilling a specific functionality. A tenant of MESI has to trust only a minimal functionality of the tenant that offers the hardware resources; the rest of the services can be deployed by each tenant themselves. MESI is based on the idea of enabling tenants to share hardware they own with tenants they may not trust and between clusters with different security requirements. 
The architecture gives tenants control and the freedom to choose whether they deploy and manage these services themselves or use them from a trusted third party. MESI services fit into three layers that build on each other to provide: 1) Elastic Infrastructure, 2) Elastic Secure Infrastructure, and 3) Market-driven Elastic Secure Infrastructure. (1) The Hardware Isolation Layer (HIL), the bottommost layer of MESI, is designed for moving nodes between the multiple tools and schedulers used for managing the clusters. HIL controls the layer-2 switches and bare-metal servers such that tenants can elastically adjust the size of their clusters in response to the changing demand of the workload. It enables the movement of nodes between clusters with minimal to no modifications required to the tools and workflows used for managing these clusters. (2) The Elastic Secure Infrastructure (ESI) builds on HIL to enable sharing of servers between clusters with different security requirements and mutually non-trusting tenants of the Data Center. ESI enables the borrowing tenant to minimize its trust in the node provider and take control of the trade-offs between cost, performance, and security. This enables sharing of nodes not only between tenants of the same organization but also between organizations that are tenants of a co-located Data Center. (3) The Bare-metal Marketplace is an incentive-based system that uses economic principles of the marketplace to encourage tenants to share their servers with others, not just when they do not need them but also when others need them more. It provides tenants the ability to define their own cluster objectives and sharing constraints, and the freedom to decide the number of nodes they wish to share with others. MESI is evaluated using prototype implementations at each layer of the architecture. (i) The HIL prototype, implemented in only 3000 lines of code (LOC), is able to support many provisioning tools and schedulers with little to no modification, adds no overhead to the performance of the clusters, and is in active production use at the MOC managing over 150 servers and 11 switches. (ii) The ESI prototype builds on the HIL prototype and adds to it an attestation service, a provisioning service, and a deterministically built open-source firmware. Results demonstrate that it is possible to build a cluster that is secure, elastic, and fairly quick to set up, while the tenant requires only minimal trust in the provider for the availability of the node. (iii) The MESI prototype demonstrates the feasibility of a one-of-a-kind multi-provider marketplace for trading bare-metal servers in which the providers also use the nodes. The evaluation of the MESI prototype shows that all the clusters benefit from participating in the marketplace: agents trade bare-metal servers in the marketplace to meet the requirements of their clusters. Results show that, compared to operating as silos, individual clusters see a 50% improvement in the total work done, up to a 75% reduction in queue waiting times, and up to a 60% improvement in the aggregate utilization of the test bed. This dissertation makes the following contributions: (i) It defines the MESI architecture, which allows mutually non-trusting tenants of the data center to share resources between clusters with different security requirements. 
(ii) It demonstrates that it is possible to design a service that breaks the silos of static cluster allocation yet has a small Trusted Computing Base (TCB) and adds no overhead to the performance of the clusters. (iii) It provides a unique architecture that puts the tenant in control of its own security and minimizes the trust needed in the provider for sharing nodes. (iv) It delivers a working prototype of a multi-provider marketplace for bare-metal servers, a first proof of concept demonstrating that real bare-metal nodes can be traded at practical time scales, such that moving nodes between clusters is fast enough to get useful work done. (v) Finally, results show that it is possible to encourage even mutually non-trusting tenants to share their nodes with each other without any central authority making allocation decisions. Many smart, dedicated engineers and researchers have contributed to this work over the years. I jointly led the efforts to design the HIL and ESI layers, and led the design and implementation of the bare-metal marketplace and the overall MESI architecture.
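To make the node-movement idea concrete, here is a minimal sketch of the kind of operation HIL enables: detaching a bare-metal node from one cluster's network and attaching it to another's. The HILClient class and its methods are hypothetical stand-ins, not the real HIL API.

```python
# Illustrative sketch of HIL-style node reallocation between clusters.
# Hypothetical interface, not the actual HIL service.

class HILClient:
    def __init__(self):
        self.allocation = {}                     # node -> (project, network)

    def detach(self, node):
        self.allocation.pop(node, None)          # free the node and its network

    def attach(self, node, project, network):
        assert node not in self.allocation, f"{node} is still allocated"
        self.allocation[node] = (project, network)


def move_node(hil, node, src_project, dst_project, dst_network):
    """Move `node` between clusters; the destination's own provisioning tool
    takes over once the node is on its network."""
    hil.detach(node)
    hil.attach(node, dst_project, dst_network)
    print(f"{node}: {src_project} -> {dst_project} on {dst_network}")


hil = HILClient()
hil.attach("node-17", "hpc-cluster", "vlan-100")
# Off-peak for the HPC cluster, so lend node-17 to the cloud cluster.
move_node(hil, "node-17", "hpc-cluster", "cloud-cluster", "vlan-200")
```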
9

Abortable and Query-abortable Types and Their Efficient Implementation

Horn, Stephanie Lorraine 24 September 2009 (has links)
We introduce abortable and query-abortable object types intended for implementation in asynchronous shared-memory systems with low contention. Implementations of such types behave like ordinary objects when accessed sequentially, but may abort operations when accessed concurrently. An aborted operation may or may not take effect, i.e., cause a state transition, and it returns no indication of which possibility occurred. Since this uncertainty can be problematic, a query-abortable type supports a QUERY operation that each process can use to determine its last non-QUERY operation on the object that caused a state transition, and the response associated with this state transition. Our research is closely related to obstruction-free implementations (introduced by Herlihy, Luchangco and Moir) and responsive obstruction-free implementations (introduced by Attiya, Guerraoui and Kouznetsov). Like abortable and query-abortable types, these implementations may exhibit degraded behaviour in the face of contention. We show that abortable registers--registers strictly weaker than safe registers--can be used to obtain obstruction-free and responsive obstruction-free implementations for any type. We present universal constructions for abortable and query-abortable types that are novel and efficient in the number of registers used. Specifically, they are based on a simple timestamping mechanism for detecting concurrent executions, and, in systems with n processes, use only n abortable registers or only O(n^2) single-reader, single-writer abortable registers. The timestamping mechanism we introduce is based on the inc&read counter type and appears to be interesting in its own right. As a generalization, we study the k-inc&read counter types, for k>0. We also identify a potential problem with correctness properties based on step contention: with such properties, the composition of correct object implementations may result in an implementation that is not correct. In other words, implementations defined in terms of step contention are not always composable. To avoid this problem, we introduce a property based on interval contention, namely non-triviality, to define the correct behaviour of abortable and query-abortable object implementations.
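The abort semantics can be pictured with a toy sketch: an operation observes a version number before it starts and aborts if a concurrent operation has intervened. This is an illustration of the interface only (and uses a lock internally for simplicity); the thesis' constructions are shared-memory register algorithms, not lock-based code.

```python
# Toy sketch of abort-on-contention semantics. Illustrative interface only.

import threading

ABORT = object()                       # distinguished "operation aborted" response

class AbortableRegister:
    def __init__(self, value=None):
        self._lock = threading.Lock()
        self._value, self._version = value, 0

    def begin(self):
        return self._version           # timestamp observed before the operation

    def write(self, observed_version, value):
        with self._lock:
            if self._version != observed_version:
                return ABORT           # contention detected: caller learns nothing more
            self._value, self._version = value, self._version + 1
            return "ok"

    def read(self):
        return self._value


reg = AbortableRegister()
t = reg.begin()
reg.write(reg.begin(), "other process writes first")    # concurrent update
print(reg.write(t, "stale write") is ABORT)              # -> True: the write aborts
```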
