221

Designing machine learning ensembles : a game coalition approach

Alzubi, Omar A. January 2013 (has links)
No description available.
222

Gestion des risques liés au transport des matières dangereuses / Risk management related to the transport of dangerous goods

Najib, Mehdi 31 October 2014 (has links)
The evolution of international trade and the growth of intercontinental exchanges have created an ongoing need for the transport of goods. In this context, maritime transport has seen enormous growth owing to its efficiency in moving large quantities of goods. This mode of transport has been revolutionized by the introduction of containers and by the development of new multimodal platforms specialized in container handling: container terminals (CTs). CTs are subject to constraints and requirements that they must satisfy in terms of efficiency, safety, and dependability. This thesis aims to manage the risks related to container transport in a CT while taking into account the collaborative aspect of the supply chain and the activities carried out upstream of container delivery, reconciling risk management with performance in the CT. The implementation is based on a multi-paradigm approach enabling the urbanization of the GOST traceability system (Geo-localization, Optimization and Security of Transport) and the development of a Container Terminal Management System (CTMS). For the management of risks related to container transport, we propose a tracking-and-tracing solution based on the urbanization of the GOST system, the intelligent-product concept, and service-oriented architectures. This solution aims to improve the collection of risk management information provided by the supply chain actors. To this end, we first urbanize the GOST system to fit the new risk management requirements. We then enrich the intelligent-product concept to develop an appropriate intelligent-container model. Finally, we use model-driven architectures to automate the generation of the web-service code that collects traceability data. Two approaches for interfacing the intelligent container with these web services are proposed: the first orchestrates the services according to business-process logic; the second relies on an Enterprise Service Bus (ESB). All these solutions are integrated into the CTMS, which is built on agent technology and combines a risk management approach with an evaluation of CT performance. The risk management approach rests on two processes: the first targets suspicious containers and is built around an expert system enriched by a learning method, the Apriori algorithm; the second verifies spatial segregation during storage. Finally, a case study was carried out to validate the proposed solution, along with a simulation to evaluate its performance.
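The abstract names Apriori as the rule-learning component of the targeting process but gives no detail, so the following is a minimal sketch of Apriori-style frequent-itemset mining over container inspection records; the attribute names, the transactions, and the support threshold are all invented for illustration, and the surrounding expert system is not modeled.

```python
# A minimal Apriori sketch over hypothetical container-inspection records.
from itertools import combinations

# Each transaction: the set of attributes observed on one inspected container.
transactions = [
    {"hazmat", "transshipped", "mismatched_manifest"},
    {"hazmat", "transshipped"},
    {"hazmat", "mismatched_manifest"},
    {"transshipped", "mismatched_manifest"},
    {"hazmat", "transshipped", "mismatched_manifest"},
]

def apriori(transactions, min_support=0.4):
    """Return frequent itemsets (as frozensets) with their support."""
    n = len(transactions)
    items = {item for t in transactions for item in t}
    current = {frozenset([i]) for i in items}  # candidate 1-itemsets
    frequent = {}
    k = 1
    while current:
        counts = {c: sum(1 for t in transactions if c <= t) for c in current}
        level = {c: cnt / n for c, cnt in counts.items() if cnt / n >= min_support}
        frequent.update(level)
        # Candidate (k+1)-itemsets: unions of frequent k-itemsets.
        keys = list(level)
        current = {a | b for a, b in combinations(keys, 2) if len(a | b) == k + 1}
        k += 1
    return frequent

for itemset, support in sorted(apriori(transactions).items(), key=lambda kv: -kv[1]):
    print(sorted(itemset), f"support={support:.2f}")
```

Frequent combinations such as {hazmat, transshipped, mismatched_manifest} would then feed the expert system as candidate targeting rules.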
223

Evolving a secure grid-enabled, distributed data warehouse : a standards-based perspective

Li, Xiao-Yu January 2007 (has links)
As digital data collection has increased in scale and number, such data have become an important resource serving a wide community of researchers. Cross-institutional data sharing and collaboration offer a suitable way to support research institutions that lack data and the related IT infrastructure. Grid computing has become a widely adopted approach to enabling cross-institutional resource sharing and collaboration; it integrates a distributed and heterogeneous collection of locally managed users and resources. This project proposes a distributed data warehouse system that uses Grid technology to enable data access, integration, and collaborative operations across multiple distributed institutions in the context of HIV/AIDS research. The study is based on wider research into an OGSA-based Grid services architecture, comprising a data-analysis system that utilizes a data warehouse, data marts, and a near-line operational database hosted by distributed institutions. Within this framework, specific patterns for collaboration, interoperability, resource virtualization, and security are included. The heterogeneous and dynamic nature of the Grid environment introduces a number of security challenges, so the study also addresses a set of particular security aspects, including PKI-based authentication, single sign-on, dynamic delegation, and attribute-based authorization. These mechanisms, as supported by the Globus Toolkit's Grid Security Infrastructure, are used to enable interoperability and establish trust relationships among the security mechanisms and policies of different institutions, to manage credentials, and to ensure secure interactions.
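As a purely generic illustration of the attribute-based authorization pattern listed above — it does not use or imitate the Globus Toolkit's actual GSI APIs — the sketch below grants access to a resource when the certificate subject holds at least one attribute required by that resource's policy; every policy entry, attribute, and name is hypothetical.

```python
# A generic attribute-based authorization sketch (not Globus GSI).
from dataclasses import dataclass

@dataclass(frozen=True)
class Credential:
    subject: str            # distinguished name from a PKI certificate
    attributes: frozenset   # attributes asserted for the subject, e.g. roles

# resource -> set of attributes, any one of which grants access
policy = {
    "warehouse/aids-cohort": {"role:researcher", "role:data-steward"},
    "warehouse/admin": {"role:data-steward"},
}

def authorize(cred: Credential, resource: str) -> bool:
    """Grant access iff the subject holds at least one required attribute."""
    required = policy.get(resource, set())
    return bool(required & cred.attributes)

alice = Credential("CN=Alice,O=InstA", frozenset({"role:researcher"}))
print(authorize(alice, "warehouse/aids-cohort"))  # True
print(authorize(alice, "warehouse/admin"))        # False
```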
224

Inhabiting the information space : Paradigms of collaborative design environments

Shakarchī, ʻAlī 11 1900 (has links)
The notion of information space (iSpace) is that a collective context of transmitters and receivers can serve as a medium to share, exchange, and apply data and knowledge among a group of human beings or software agents. Inhabiting this space requires a perception of its dimensions and limits, and an understanding of the way data are diffused among inhabitants. One important aspect of iSpace is that it expands the limits of communication between distributed designers, allowing them to carry out tasks that were very difficult to accomplish with the diverse but poorly integrated current communication technologies. In architecture, design team members often rely on each other's expertise to review and solve design problems, and interact with one another for critique and presentations. This process is called collaborative design. The main focus of this research is applying this process of collaboration to the iSpace to serve as a supplementary medium of communication, rather than a replacement for it, and understanding how design team members can use it to enhance the effectiveness of the design process and increase the efficiency of communication. The first chapter gives an overview of the research, defines its objectives and scope, and provides background on the evolving technological media in design practice. It also summarizes several case studies of collaborative design projects as real examples to introduce the subject. The second chapter studies collaborative design activities with respect to creative problem solving, group behaviour, and the information flow between members. It also examines the technical and social problems of distributed collaboration. The third chapter defines the iSpace and analyzes its components (epistemological, utilitarian, and cultural) based on research done by others. It also studies the impact of the iSpace on the design process in general and on the architectural product in particular. The fourth chapter describes software programs written as prototypes for this research that allow realtime and non-realtime collaboration over the internet, tailored specifically to the needs of design teams in distributed collaboration in architecture. These prototypes are: 1. pinUpBoard (a realtime shared display board for pin-ups); 2. sketchBoard (a realtime whiteboarding application with multiple sessions); 3. mediaBase (a shared database management system); 4. teamCalendar (a shared interactive calendar on the internet); 5. talkSpace (organized forums for discussions). / Faculty of Applied Science / School of Architecture and Landscape Architecture (SALA) / Graduate
225

Performance comparison of data distribution management strategies in large-scale distributed simulation.

Dzermajko, Caron 05 1900 (has links)
Data distribution management (DDM) is a High Level Architecture/Run-time Infrastructure (HLA/RTI) service that manages the distribution of state updates and interaction information in large-scale distributed simulations. The key to efficient DDM is to limit and control the volume of data exchanged during the simulation, relaying data only to those hosts that require it. This thesis focuses on different DDM implementations and strategies, analyzing three DDM methods: fixed grid-based, dynamic grid-based, and region-based. It also examines the use of multi-resolution modeling with various DDM strategies and analyzes the performance effects of aggregation/disaggregation with these strategies. Running numerous federation executions, I simulate four scenarios on a cluster of workstations with a mini-RTI Kit framework and propose a set of benchmarks for comparing the DDM schemes. The goals of this work are to determine the most efficient model for applying each DDM scheme, discover the scalability limits of the various DDM methods, evaluate the effects of aggregation/disaggregation on performance and resource usage, and present accepted benchmarks for use in future research.
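To make the fixed grid-based scheme concrete, here is a minimal sketch assuming a 2-D routing space with axis-aligned regions; the cell size, host names, and regions are hypothetical. The routing space is partitioned into fixed cells once, and an update is relayed only to hosts whose subscription regions share a cell with the update region.

```python
# A fixed grid-based DDM matching sketch (2-D routing space assumed).
from collections import defaultdict

CELL = 10.0  # fixed grid cell size

def cells(region):
    """Yield the grid cells overlapped by an axis-aligned region (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = region
    for i in range(int(x0 // CELL), int(x1 // CELL) + 1):
        for j in range(int(y0 // CELL), int(y1 // CELL) + 1):
            yield (i, j)

# host -> subscription region
subscriptions = {"hostA": (0, 0, 15, 15), "hostB": (40, 40, 55, 55)}

# Invert to cell -> subscribed hosts, built once (the "fixed" part of the scheme).
cell_index = defaultdict(set)
for host, region in subscriptions.items():
    for c in cells(region):
        cell_index[c].add(host)

def route_update(update_region):
    """Return the hosts that should receive an update in this region."""
    return set().union(*(cell_index.get(c, set()) for c in cells(update_region)))

print(route_update((12, 12, 14, 14)))  # {'hostA'} — hostB shares no cell
```

The dynamic grid-based and region-based methods analyzed in the thesis refine this trade-off: exact region intersection avoids the false positives that coarse cells can introduce, at a higher matching cost.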
226

An Integrated Architecture for Ad Hoc Grids

Amin, Kaizar Abdul Husain 05 1900 (has links)
Extensive research has been conducted by the grid community to enable large-scale collaborations in pre-configured environments. Grid collaborations can vary in scale and motivation, resulting in a coarse classification of grids: national grid, project grid, enterprise grid, and volunteer grid. Despite the differences in scope and scale, all traditional grids in practice share some common assumptions: they support mutually collaborative communities, adopt centralized control for membership, and assume a well-defined, non-changing collaboration. To support grid applications that do not conform to these assumptions, we propose the concept of ad hoc grids. In the context of this research, we propose a novel architecture for ad hoc grids that integrates a suite of component frameworks. Specifically, our architecture combines a community management framework, security framework, abstraction framework, quality-of-service framework, and reputation framework. The overarching objective of our integrated architecture is to support a variety of grid applications in a self-controlled fashion with the help of a self-organizing ad hoc community. We introduce mechanisms in our architecture that successfully isolate malicious elements from the community, inherently improving the quality of grid services and extracting deterministic quality assurances from the underlying infrastructure. We also emphasize the technology independence of our architecture, thereby offering the requisite platform for technology interoperability. The feasibility of the proposed architecture is verified with a high-quality ad hoc grid implementation. Additionally, we have analyzed the performance and behavior of ad hoc grids with respect to several control parameters.
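The reputation framework's scoring rule is not specified in the abstract, so the sketch below is one hypothetical way a self-organizing community could isolate malicious members: an exponentially weighted per-peer score with an exclusion threshold. SCORE0, ALPHA, and THRESHOLD are invented parameters.

```python
# A hypothetical reputation-threshold sketch for an ad hoc grid community.
from collections import defaultdict

SCORE0, THRESHOLD = 0.5, 0.2   # initial trust and exclusion cutoff
ALPHA = 0.8                    # weight of history vs. the newest observation

scores = defaultdict(lambda: SCORE0)

def report(peer: str, outcome: float) -> None:
    """Blend a new interaction outcome (0 = malicious, 1 = good) into the score."""
    scores[peer] = ALPHA * scores[peer] + (1 - ALPHA) * outcome

def community(peers):
    """Members still trusted enough to submit or serve grid jobs."""
    return [p for p in peers if scores[p] >= THRESHOLD]

peers = ["n1", "n2"]
for _ in range(10):
    report("n1", 1.0)   # consistently good behavior
    report("n2", 0.0)   # consistently malicious behavior
print(community(peers))  # ['n1'] — n2 decays below the threshold and is isolated
```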
227

Infrastructure For Performance Tuning MPI Applications

Mohror, Kathryn Marie 01 January 2004 (has links)
Clusters of workstations are becoming increasingly popular as a low-budget alternative for supercomputing power. In these systems, message-passing is often used to allow the separate nodes to act as a single computing machine. Programmers of such systems face a daunting challenge in understanding the performance bottlenecks of their applications. This is largely due to the vast amount of performance data that is collected, and the time and expertise necessary to use traditional parallel performance tools to analyze that data. The goal of this project is to increase the level of performance tool support for message-passing application programmers on clusters of workstations. We added support for LAM/MPI into the existing parallel performance tool, Paradyn. LAM/MPI is a commonly used, freely available implementation of the Message Passing Interface (MPI), and also includes several newer MPI features, such as dynamic process creation. In addition, we added support for non-shared filesystems into Paradyn and enhanced the existing support for the MPICH implementation of MPI. We verified that Paradyn correctly measures the performance of the majority of LAM/MPI programs on Linux clusters and show the results of those tests. In addition, we discuss MPI-2 features that are of interest to parallel performance tool developers and design support for these features for Paradyn.
228

Performance Modeling of Large-Scale Parallel-Distributed Processing for Cloud Environment / クラウド環境における大規模並列分散処理の性能モデル

Hirai, Tsuguhito 23 May 2018 (has links)
Kyoto University / 0048 / New-system doctorate by coursework / Doctor of Informatics / Kou No. 21280 / Joho-haku No. 674 / 新制||情||116 (University Library) / Department of Systems Science, Graduate School of Informatics, Kyoto University / (Chief examiner) Professor Toshiyuki Tanaka, Professor Nobuo Yamashita, Associate Professor Hiroyuki Masuyama, Professor Shoji Kasahara / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Informatics / Kyoto University / DFAM
229

Chunked extendible arrays and its integration with the global array toolkit for parallel image processing

Nimako, Gideon January 2016 (has links)
A thesis submitted to the Faculty of Engineering and the Built Environment in fulfilment of the requirements for the degree of Doctor of Philosophy, 2016 / Online resource (xii, 151 leaves) / Several meetings of the Extremely Large Databases community for large-scale scientific applications have advocated the use of multidimensional arrays as the appropriate model for representing scientific databases. Scientific databases gradually grow to massive sizes on the order of terabytes and petabytes. As such, their storage requires efficient dynamic storage schemes in which the array is allowed to arbitrarily extend the bounds of its dimensions. Conventional multidimensional array representations in today's programming environments do not extend or shrink their bounds without relocating elements of the data set; in general, extendibility of the bounds is limited to only one dimension. This thesis presents a technique for storing dense multidimensional arrays by chunks such that the array can be extended along any dimension without compromising the access time of an element. This is done with a computed access mapping function that maps the k-dimensional index onto a linear index of the storage locations. This concept forms the basis for the implementation of an array file of any number of dimensions, where the bounds of the array dimensions can be extended arbitrarily. Such a feature currently exists in the Hierarchical Data Format version 5 (HDF5); however, extending the bound of a dimension in an HDF5 array file can be unusually expensive in time. In our storage scheme for dense array files, such extensions can be performed while elements of the array are still accessed orders of magnitude faster than in HDF5 or conventional array files. We also present the Parallel Chunked Extendible Dense Array (PEXTA), a new parallel I/O model for the Global Array Toolkit. PEXTA provides the necessary Application Programming Interface (API) for explicit data transfer between the memory-resident global array and its secondary-storage counterpart, and also allows the persistent array to be extended along any dimension without compromising the access time of an element or of sub-array elements. These APIs provide a platform for high-speed parallel hyperspectral image processing without performance degradation, even when the imagery files undergo extensions. / MT2017
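The access-mapping idea can be illustrated with a deliberately simplified sketch: a chunk directory maps each k-dimensional index to a (chunk coordinate, offset) pair, so extending a bound only allocates new chunks and never moves stored elements. This is not the thesis's actual computed mapping function onto a single linear index; the chunk shape and class names are illustrative.

```python
# A simplified chunked extendible-array sketch: extension only appends chunks.
import numpy as np

class ChunkedArray:
    def __init__(self, chunk_shape):
        self.chunk_shape = chunk_shape
        self.chunks = {}  # chunk coordinate -> ndarray; created on demand

    def _locate(self, index):
        """Map a k-d element index to (chunk coordinate, offset within chunk)."""
        coord = tuple(i // c for i, c in zip(index, self.chunk_shape))
        offset = tuple(i % c for i, c in zip(index, self.chunk_shape))
        return coord, offset

    def __setitem__(self, index, value):
        coord, offset = self._locate(index)
        if coord not in self.chunks:  # extension: allocate a new chunk only
            self.chunks[coord] = np.zeros(self.chunk_shape)
        self.chunks[coord][offset] = value

    def __getitem__(self, index):
        coord, offset = self._locate(index)
        return self.chunks[coord][offset]

a = ChunkedArray(chunk_shape=(4, 4))
a[2, 3] = 1.0
a[100, 7] = 2.0   # extends dimension 0 far beyond the initial bound
print(a[2, 3], a[100, 7], len(a.chunks))  # 1.0 2.0 2 — old chunk untouched
```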
230

A model checking based framework for the trace analysis of distributed systems

Hallal, Hesham H. January 2007 (has links)
No description available.
