11

DESIGN OF A CONFIGURATION AND MANAGEMENT TOOL FOR INSTRUMENTATION NETWORKS

Roach, John 10 1900 (has links)
ITC/USA 2007 Conference Proceedings / The Forty-Third Annual International Telemetering Conference and Technical Exhibition / October 22-25, 2007 / Riviera Hotel & Convention Center, Las Vegas, Nevada / The development of network-based data acquisition systems has resulted in a new architecture for supporting flight instrumentation that has the potential to revolutionize the way we test our aircraft. However, the inherent capability and flexibility in a networked test architecture can only be realized by the flight engineer if a sufficiently powerful toolset is available that can configure and manage the system. This paper introduces the concept of an instrumentation configuration and management system (ICMS) that acts as the central resource for configuring, controlling, and monitoring the instrumentation network. Typically, the ICMS supports a graphical user interface into the workings of the instrumentation network, providing the user with a friendly and efficient way to verify the operation of the system. Statistics being gathered at different peripherals within the network would be collected by this tool and formatted for interpretation by the user. Any error conditions or out-of-bounds situations would be detected by the ICMS and signaled to the user. Changes made to the operation of any of the peripherals in the network (if permitted) would be managed by the ICMS to ensure consistency of the system. Furthermore, the ICMS could guarantee that the appropriate procedures were being followed and that the operator had the required privileges needed to make any changes. This paper describes the high-level design of a modular and multi-platform ICMS and its use within the measurement-centric aircraft instrumentation network architecture under development by the Network Products Division at Teletronics.
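The out-of-bounds detection described in this abstract can be illustrated with a minimal sketch (the peripheral names and limits below are hypothetical, not taken from the Teletronics ICMS): statistics gathered from the instrumentation peripherals are compared against configured limits, and any violation is reported to the operator.

```python
# Minimal sketch of ICMS-style out-of-bounds detection (hypothetical names and
# limits; not the actual Teletronics ICMS implementation).
from dataclasses import dataclass

@dataclass
class Limit:
    low: float
    high: float

# Hypothetical per-peripheral statistic limits configured by the flight engineer.
LIMITS = {
    "dau1.buffer_fill_pct": Limit(0.0, 80.0),
    "switch1.dropped_frames": Limit(0.0, 0.0),
    "recorder.free_space_pct": Limit(10.0, 100.0),
}

def check_statistics(stats: dict[str, float]) -> list[str]:
    """Return human-readable alerts for every statistic outside its limits."""
    alerts = []
    for name, value in stats.items():
        limit = LIMITS.get(name)
        if limit and not (limit.low <= value <= limit.high):
            alerts.append(f"{name}={value} outside [{limit.low}, {limit.high}]")
    return alerts

# Example: statistics gathered from peripherals on one polling cycle.
print(check_statistics({"dau1.buffer_fill_pct": 92.5, "switch1.dropped_frames": 0}))
```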
12

iNET Interoperability Tools

Araujo, Maria S., Seegmiller, Ray D., Noonan, Patrick J., Newton, Todd A., Samiadji-Benthin, Chris S., Moodie, Myron L., Grace, Thomas B., Malatesta, William A. 10 1900 (has links)
ITC/USA 2011 Conference Proceedings / The Forty-Seventh Annual International Telemetering Conference and Technical Exhibition / October 24-27, 2011 / Bally's Las Vegas, Las Vegas, Nevada / The integrated Network Enhanced Telemetry (iNET) program has developed standards for network-based telemetry systems, which implementers and range users of Telemetry Network System (TmNS) equipment can use to promote interoperability between components. While standards promote interoperability, only implementation of the standards can ensure it. This paper discusses the tools that are being developed by the iNET program which implement the technologies and protocols specified in the iNET standards in order to ensure interoperability between TmNS components and provide a general framework for device development. Capabilities provided by the tools include system management, TmNS message processing, metadata processing, and time synchronization.
13

Μελέτη, σχεδιασμός και ανάπτυξη λογισμικού ανοικτού κώδικα απομακρυσμένης διαχείρισης υπολογιστικών και δικτυακών συστημάτων / Study, design and development of open-source software for the remote management of computing and network systems

Κάραλης, Ιωάννης 09 February 2009 (has links)
The management of information systems is growing rapidly in complexity as server architectures become more distributed and the number of managed entities multiplies, forcing businesses to turn to complex and costly system and network management solutions. For a small or medium-sized enterprise, the investment in such a solution is often hard to justify, since most management tasks are considered routine while the cost of training and specializing staff is high. At the same time, the open-source community has not produced a reliable and complete solution for systems management. This thesis aims to create a unified management system for computers running Linux and Windows. A complete open-source application providing the core management capabilities was studied, designed and developed. These capabilities include asset inventory, software delivery, remote control and system monitoring, along with additional operating-system management features. The goal of the work is to produce an innovative unified solution, reusing as far as possible the existing solutions and tools available from the open-source community. Specifically: existing open-source solutions are surveyed, compared against evaluation criteria and requirements, and those to be integrated into the final application are selected; the integration concept is presented and the unified system containing the selected solutions is designed; and the management application is developed according to that design. Client and server software was developed for both platforms, together with a full Win32 administrator GUI and an installation environment. The application was completed in 16 months, passing through Alpha and Beta stages to a final production release. It was named "OpenRSM", from the initials of "Opensource Remote System Management", was published on the internet through the open-source community "SourceForge" in the "System Administration" category, and was put into pilot operation at Greek public-sector organizations. The final application was measured and evaluated for performance and reliability; the results, together with its final feature set, exceeded expectations.
14

Verteilungsaspekte im Rahmen der strategischen Informationssystemplanung / Distribution aspects within strategic information systems planning

Wolf, Frank, January 1999 (has links)
Thesis (doctoral)--Technische Hochschule, Aachen, 1999.
15

AEGIS platforms using KVA analysis to assess Open Architecture in sustaining engineering /

Adler, Jameson R., Ahart, Jennifer L. January 2007 (has links) (PDF)
Thesis (M.S. in Systems Technology (Command, Control and Communications (C3)))--Naval Postgraduate School, June 2007. / Thesis Advisor(s): Thomas Housel. "June 2007." Includes bibliographical references (p. 79-82). Also available in print.
16

IP-Based Networking as Part of the Design of a Payload Control System

Horan, Stephen, Aaronscooke, Ryan, Jaramillo, Daniel 10 1900 (has links)
ITC/USA 2008 Conference Proceedings / The Forty-Fourth Annual International Telemetering Conference and Technical Exhibition / October 27-30, 2008 / Town and Country Resort & Convention Center, San Diego, California / As part of a project to develop small satellites, we have developed a combined ground station and flight computer that use IP-based networking for command and telemetry data communications. The system uses a private IP network between the payload and the ground station. Commands are sent to the payload as short UDP message packets. Status and real-time telemetry are sent as UDP text strings. Production data are sent as files using an FTP-style data exchange. Production data types include numeric data (sensor data) and JPEG-formatted picture data (full pictures and thumbnails). Details of the software used, the challenges of making the system work over a low-quality radio link, and integration with the operating system will be discussed.
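A minimal sketch of the command/telemetry pattern described in this abstract, assuming hypothetical port numbers and command strings (the abstract does not specify them): commands go out as short UDP packets and status comes back as UDP text strings; production data would be transferred separately via an FTP-style file exchange.

```python
# Minimal sketch of the UDP command/telemetry pattern described above.
# Addresses, ports and the command format are assumptions for illustration only.
import socket

PAYLOAD_ADDR = ("192.168.1.10", 5001)   # hypothetical payload command port
TELEMETRY_PORT = 5002                   # hypothetical port for status strings

def send_command(cmd: str) -> None:
    """Send a short command as a single UDP packet."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(cmd.encode("ascii"), PAYLOAD_ADDR)

def receive_telemetry(timeout_s: float = 5.0) -> str:
    """Wait for one UDP text-string telemetry report from the payload.

    Raises socket.timeout if nothing arrives within timeout_s.
    """
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.bind(("", TELEMETRY_PORT))
        sock.settimeout(timeout_s)
        data, _ = sock.recvfrom(1024)
        return data.decode("ascii", errors="replace")

send_command("CAMERA TAKE_PICTURE THUMBNAIL")   # hypothetical command string
print(receive_telemetry())
```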
17

Event-based risk management of large scale information technology projects

Alem, Mohammad January 2013 (has links)
Globalisation has come as a double-edged blade for information technology (IT) companies, providing growth opportunities and yet posing many challenges. Software development is moving from a monolithic model to a distributed approach, where many entities and organisations are involved in the development process. Risk management is an important area for dealing with the technical and social issues that arise in companies' planning and programming schedules, and this new way of working requires more attention to be paid to temporal, socio-cultural and control aspects than before. Multinational companies like IBM have begun to consider how to address the distributed nature of their projects across the globe. With offices across the globe, such a company has people of different cultures, languages and ethics working on single, larger IT projects from different locations. Other IT companies face the same problems, despite the many kinds of approaches available for handling risk management in large-scale IT companies. IBM commissioned the Distributed Risk Management Process (DRiMaP) model as a suitable solution. This model focused on the collaborative and ongoing control aspects, and paid attention to the need for risk managers, project managers and management to include risk management in all phases of projects and the business cycle. The authors of the DRiMaP model did not subject it to extensive testing. This research sets out to evaluate, improve and extend the model process and thereby develop a new and dynamic approach to distributed information systems development. To do this, the research compares and contrasts the model with other risk management approaches. An Evolutionary Model is developed and subjected to empirical testing through a hybrid constructive research approach: a survey is used to draw out the observations of project participants, structured interviews gather the opinions of project experts, a software tool is developed to implement the model, and SysML and Monte Carlo methods are applied to simulate the functioning of the model. The Evolutionary Model was found to partially address the shortcomings of the DRiMaP model, and to provide a valuable platform for the development of an enterprise risk management solution.
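The Monte Carlo simulation mentioned in this abstract can be illustrated with a small sketch (the risk events, probabilities and impacts below are hypothetical, not the author's model): each run samples which risk events occur and sums their cost, and averaging many runs estimates the project's risk exposure.

```python
# Minimal Monte Carlo sketch of event-based risk exposure (hypothetical
# probabilities and impacts; not the DRiMaP/Evolutionary Model itself).
import random

# Each risk event: (probability of occurring in the project, cost impact in person-days)
RISK_EVENTS = {
    "key developer leaves": (0.10, 40),
    "requirements change late": (0.30, 25),
    "integration with remote site slips": (0.20, 15),
}

def simulate_once() -> float:
    """One project run: sample which events occur and sum their impact."""
    return sum(impact for p, impact in RISK_EVENTS.values() if random.random() < p)

def expected_exposure(runs: int = 100_000) -> float:
    """Estimate mean risk exposure over many simulated project runs."""
    return sum(simulate_once() for _ in range(runs)) / runs

print(f"Estimated exposure: {expected_exposure():.1f} person-days")
```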
18

Use and management of information systems in academic libraries in Ghana

Dzandza, Patience Emefa January 2019 (has links)
Philosophiae Doctor - PhD / The use of Information Systems (ISs) has been widely accepted and proven to increase service quality in many organizations. Academic libraries have embraced ISs and have implemented them to perform different activities. The efficient utilization and management of ISs in libraries will help libraries derive maximum benefit from the adopted ISs. The research used the DeLone and McLean (2003) IS success theory to determine the impact of IS management on the quality of the IS, the use of the IS and the benefits gained. The researcher studied nine (30%) of the thirty university libraries which are members of the Consortium of Academic and Research Libraries in Ghana (CARLIGH), an association that supports member libraries' IS and electronic resource use. A mixed-methods approach combining questionnaires and interviews with content analysis of the university websites was used to gather data. Findings indicated that academic libraries in Ghana are making use of several ISs, including ILS, DAM, social media and websites, amidst a number of challenges. The research also revealed that the management of ISs affects their quality, that the quality of ISs affects use, and that use affects the benefits gained. The researcher proposed a standard IS management guideline which Ghanaian academic libraries could adopt for using and managing ISs to achieve greater efficiency and better service delivery.
19

EXPLORATION OF RUNTIME DISTRIBUTED MAPPING TECHNIQUES FOR EMERGING LARGE SCALE MPSOCS / EXPLORATION DE TECHNIQUES D’ALLOCATION DE TÂCHES DYNAMIQUES ET DISTRIBUÉES POUR MPSOCS DE LARGE ÉCHELLE

Grandi Mandelli, Marcelo 13 July 2015 (has links)
MPSoCs with hundreds of cores are already available in the market. According to the ITRS roadmap, such systems will integrate thousands of cores by the end of the decade. Deciding where each task will execute in the system is a major issue in MPSoC design; in the literature, this issue is defined as task mapping. The growth in the number of cores increases the complexity of task mapping. The main concerns for task mapping in large systems are: (i) scalability; (ii) dynamic workload; and (iii) reliability. It is necessary to distribute the mapping decision across the system to ensure scalability. The workload of emerging large MPSoCs may be dynamic, i.e., new applications may start at any moment, leading to different mapping scenarios; therefore, it is necessary to execute the mapping process at runtime to support a dynamic workload. Reliability is tightly connected to the system workload distribution: load imbalance may generate hotspot zones and, consequently, thermal problems, which may result in unreliable system operation. In large-scale MPSoCs, reliability issues get worse, since the growing number of cores on the same die increases power density and, consequently, the system temperature. The literature presents different task mapping techniques to improve system reliability; however, these techniques use centralized mapping approaches, which are not scalable. To address these three challenges, the main goal of this Thesis is to propose and evaluate distributed mapping heuristics, executed at runtime, that ensure scalability and a fair workload distribution. Distributing the workload and the traffic inside the NoC increases the system reliability in the long term, due to the minimization of hotspot regions. To enable the design space exploration of large MPSoCs, the first contribution of the Thesis is a multi-level modeling framework, which supports different models and debugging capabilities that enrich and facilitate the design of MPSoCs. The simulation of lower-level models (e.g. RTL) generates performance parameters used to calibrate abstract models (e.g. untimed models); the abstract models pave the way to explore mapping heuristics in large systems. Most mapping techniques focus on optimizing communication volume in the NoC, which may compromise reliability due to overloaded processors. On the other hand, a heuristic optimizing only the workload distribution may overload NoC links, compromising their reliability. The second significant contribution of the Thesis is the proposition of dynamic and distributed mapping heuristics that make a tradeoff between communication volume (NoC links) and workload distribution (CPU usage). Results related to execution time, communication volume, energy consumption, power traces and temperature distribution in large MPSoCs (144 processors) confirm the tradeoff hypothesis: trading off workload and communication volume improves system reliability through the reduction of hotspot regions, without compromising system performance.
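A minimal sketch of the communication/workload tradeoff described in this abstract, under simplifying assumptions (a hop-count communication model and illustrative weights, not the Thesis's actual heuristics): each candidate core is scored by a weighted sum of its communication distance to already-mapped communicating tasks and its current load, and the lowest-cost core is selected.

```python
# Minimal sketch of a task-mapping cost function trading off communication
# volume (hop distance on a mesh NoC) and workload distribution (core load).
# Weights and the hop-count model are illustrative assumptions only.

def hops(a: tuple[int, int], b: tuple[int, int]) -> int:
    """Manhattan (XY-routing) hop distance between two mesh coordinates."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def map_task(comm_peers, core_load, alpha=1.0, beta=1.0):
    """Pick the core minimizing alpha*communication cost + beta*load.

    comm_peers: {peer_core_coord: communication volume with the new task}
    core_load:  {core_coord: current number of tasks mapped on that core}
    """
    def cost(core):
        comm = sum(vol * hops(core, peer) for peer, vol in comm_peers.items())
        return alpha * comm + beta * core_load[core]
    return min(core_load, key=cost)

# Example: 3x3 mesh, new task communicates with a task already mapped at (0, 0).
cores = {(x, y): 0 for x in range(3) for y in range(3)}
cores[(0, 0)] = 12   # core (0,0) is heavily loaded
# A neighbouring core wins despite the extra hop, balancing the workload.
print(map_task({(0, 0): 10}, cores))
```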
20

Trajectoires d'adoption et d'appropriation de TIC issues du web en entreprise : une analyse empirique de la diffusion du web 2.0 en entreprise / Adoption and appropriation trajectories of web based ICT within enterprise : an empirical analysis of web 2.0

Guesmi, Samy 29 October 2012 (has links)
This thesis focuses on the recent trends on the web usually referred to as "web 2.0" (or the participative web). More specifically, this work takes an interest in the diffusion within organisations of the web-based collaborative tools that stem from web 2.0. Mobilizing literature from the sociology of uses and from management information systems, this work pursues a twofold aim: to propose an analysis of the different possible conditions of web 2.0's adoption within the enterprise, depending on the form it takes when it penetrates (natural or reconfigured), and to draw upon this analysis to explain the appropriation process. The two investigations that were conducted on complementary fields made it possible to develop a framework for a possible reading of the adoption and appropriation trajectories of web 2.0 tools within enterprises. The first field focuses on the analysis of the adoption and appropriation of a wiki used for two years within an R&D department; this investigation is based on a mixed methodology that combines traffic data, qualitative data and quantitative data. The second field takes an interest in decentralized coordination practices, that is to say practices similar to those that take place around web 2.0 tools but widened to other types of technologies. Taking a practice focus, this investigation tries to unravel the coordination equipment concretely employed by workers with diverse profiles within enterprises of different kinds; it is based on a post-survey carried out after the Organisational Change and Computerization – Information and Communication Technologies survey conducted in 2006. The data gathered on these two fields led to the proposal of three possible scenarios of web 2.0 adoption and appropriation within the enterprise: "the classical diffusion", "the tinkering" and "the enterprise 2.0". Based on an interpretive approach, these scenarios are at the same time fed by and confronted with the fields, but are proposed simply as one possible reading of adoption and appropriation based on archetypes. This work proposes to widen the field of research in management information systems and the existing analytical tools to take better account of the emergence of a new generation of tools within the enterprise.
