11

A database system architecture supporting coexisting query languages and data models

Hepp, Pedro E. January 1983
Database technology is already recognised and increasingly used in administering and organising large bodies of data and as an aid in developing software. This thesis considers the applicability of this technology in small but potentially expanding application environments with users of varying levels of competence. A database system architecture with the following main characteristics is proposed: 1. It is based on a set of software components that facilitates the implementation and evolution of a software development environment centered on a database. 2. It enables the implementation of different user interfaces to provide adequate perceptions of the information content of the database according to the user's competence, familiarity with the system or the complexity of the processing requirements. 3. It is oriented toward databases that require moderate resources from the computer system to start an application. Personal or small-group databases are likely to benefit most from this approach.
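The coexistence of query languages over one database can be hinted at with a toy sketch: a single storage kernel exposed through interchangeable language front-ends, one per level of user competence. All class names and the two miniature languages below are invented for illustration; they are not the thesis's actual components.

```python
from abc import ABC, abstractmethod

class StorageKernel:
    """Shared storage layer: holds relations as lists of dicts."""
    def __init__(self):
        self.relations = {}

    def insert(self, relation, row):
        self.relations.setdefault(relation, []).append(dict(row))

    def scan(self, relation, predicate=lambda row: True):
        return [r for r in self.relations.get(relation, []) if predicate(r)]

class QueryInterface(ABC):
    """A user-facing language bound to the shared kernel."""
    def __init__(self, kernel):
        self.kernel = kernel

    @abstractmethod
    def query(self, text):
        ...

class SimpleFilterLanguage(QueryInterface):
    """Novice interface: 'relation field=value'."""
    def query(self, text):
        relation, _, cond = text.partition(" ")
        field, _, value = cond.partition("=")
        return self.kernel.scan(relation, lambda r: str(r.get(field)) == value)

class PythonExprLanguage(QueryInterface):
    """Expert interface: an arbitrary Python predicate over rows.
    eval is acceptable in a toy; never use it on untrusted input."""
    def query(self, text):
        relation, _, expr = text.partition(" ")
        return self.kernel.scan(relation, lambda r: eval(expr, {}, {"row": r}))

kernel = StorageKernel()
kernel.insert("staff", {"name": "Ada", "grade": 7})
kernel.insert("staff", {"name": "Alan", "grade": 5})
novice = SimpleFilterLanguage(kernel)
expert = PythonExprLanguage(kernel)
print(novice.query("staff grade=7"))           # [{'name': 'Ada', 'grade': 7}]
print(expert.query("staff row['grade'] > 4"))  # both rows
```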
12

The Role of Standards in COTS Integration Projects

Stottlemyer, Alan R.; Hassett, Kevin M. October 1996
International Telemetering Conference Proceedings / October 28-31, 1996 / Town and Country Hotel and Convention Center, San Diego, California / We have long used standards to guide the development process of software systems. Standards such as POSIX, X Windows, and SQL have become part of the language of software developers and have guided the coding of systems that are intended to be portable and interoperable. Standards also have a role to play in the integration of commercial off-the-shelf (COTS) products. At NASA's Goddard Space Flight Center, we have been participating on the Renaissance Team, a reengineering effort that has seen the focus shift from custom-built systems to the use of COTS to satisfy prime mission functions. As part of this effort, we developed a process that identified standards applicable to the evaluation and integration of products and assessed how those standards should be applied. Since the goal is to develop a set of standards that can be used to instantiate systems of differing sizes and capabilities, the standards selected have been broken into four areas: global integration standards, global development standards, mission development standards, and mission integration standards. Each of the areas is less restrictive than the preceding area in the standards that are allowed. This paper describes the process that we used to select and categorize the standards to be applied to Renaissance systems.
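As a rough sketch of the four-area categorization described above, one might encode standards in a small registry and filter by how restrictive an area a given system must satisfy. The area assignments and the filtering rule below are placeholders for illustration, not the Renaissance Team's actual categorization.

```python
from dataclasses import dataclass
from enum import Enum

class Area(Enum):
    # Ordered from most to least restrictive, following the paper's
    # four-area breakdown.
    GLOBAL_INTEGRATION = 1
    GLOBAL_DEVELOPMENT = 2
    MISSION_DEVELOPMENT = 3
    MISSION_INTEGRATION = 4

@dataclass
class Standard:
    name: str
    area: Area

# Placeholder assignments; the actual mapping is the paper's.
catalog = [
    Standard("SQL", Area.GLOBAL_INTEGRATION),
    Standard("POSIX", Area.GLOBAL_DEVELOPMENT),
    Standard("X Window System", Area.MISSION_DEVELOPMENT),
]

def standards_up_to(catalog, area):
    """All standards at least as restrictive as `area`, i.e. those a
    system governed at that level would also be expected to honor."""
    return [s for s in catalog if s.area.value <= area.value]

print([s.name for s in standards_up_to(catalog, Area.GLOBAL_DEVELOPMENT)])
# ['SQL', 'POSIX']
```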
13

A Real-Time Telemetry Data Processing System with Open System Architecture

Zhang, Jun; Feng, MeiPing; Zhu, Yanbo; He, Bin; Zhang, Qishan October 1994
International Telemetering Conference Proceedings / October 17-20, 1994 / Town & Country Hotel and Conference Center, San Diego, California / Faced with multiple data streams, high bit rates, variable data formats, complicated frame structures and changeable application environments, programmable PCM telemetry systems need a new data processing system built on an advanced telemetry system architecture. This paper considers the characteristics of real-time telemetry data processing, analyzes the design of an open system architecture for a real-time telemetry data processing system (TDPS), presents an open-architecture scheme and design for a real-time TDPS, gives the structure model of a distributed network system, and develops the interface between the network database and the telemetry database, as well as telemetry processing software with a man-machine interface. Finally, a practical, multi-functional real-time TDPS with an open system architecture has been built, based on the UNIX operating system, supporting the TCP/IP protocol and using the Oracle relational database management system. This scheme and design have proved efficient for real-time processing, high-speed operation, mass storage and multi-user access.
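A minimal sketch of the kind of pipeline the paper describes, decoding fixed-format PCM frames and storing parameters in a relational database, might look as follows. The frame layout and sync word are assumptions for illustration, and SQLite stands in for the Oracle RDBMS used in the actual system.

```python
import sqlite3
import struct

# Hypothetical frame layout: 16-bit sync word, 8-bit stream id,
# four 16-bit parameters.
FRAME = struct.Struct(">HB4H")

def decode(frame_bytes):
    sync, stream_id, *params = FRAME.unpack(frame_bytes)
    if sync != 0xEB90:  # a common PCM sync pattern, assumed here
        raise ValueError("lost frame sync")
    return stream_id, params

db = sqlite3.connect(":memory:")
db.execute(
    "CREATE TABLE telemetry (stream INTEGER, p1 REAL, p2 REAL, p3 REAL, p4 REAL)"
)

def ingest(frame_bytes):
    """Decode one frame and persist its parameters."""
    stream_id, params = decode(frame_bytes)
    db.execute("INSERT INTO telemetry VALUES (?, ?, ?, ?, ?)", (stream_id, *params))

ingest(FRAME.pack(0xEB90, 1, 100, 200, 300, 400))
print(db.execute("SELECT * FROM telemetry").fetchall())
```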
14

Acquisition and Near Real-Time Display of Multispectral Test Data from Widely Separated Test Sites

Donlan, Brian; Sabo, Frank October 1994
International Telemetering Conference Proceedings / October 17-20, 1994 / Town & Country Hotel and Conference Center, San Diego, California / As modern weapons grow more sophisticated and capable of operating autonomously, the challenge of testing these weapons has also grown more complex. Seekers may be multispectral and must be able to overcome threat countermeasures. To effectively analyze the performance of these weapons, time-correlated test data must be efficiently, simultaneously acquired from both the weapons' internal busses and from the threat countermeasures' internal communication busses, often in a "live fire" environment. The test data must be transmitted to a central processing station where test personnel may immediately analyze the performance of the weapon with the assistance of scientific visualization techniques. In addition, the data must be captured on permanent media for future playback and more detailed analysis. One solution is to link the test article, threat countermeasures and other test support resources through an Integrated Telemetry System (ITS). Instrumentation to acquire high-speed test data is installed in data collection vans that are remotely located in the vicinity of the article under test or in the vicinity of the threat countermeasures systems or test support resources. The remote vans will be interconnected and linked to a control van which provides a centralized test control and monitoring point. Remote Data Formatter (RDF) instrumentation units, located in the remote vans, can acquire data from and control seekers, sensors, emission sources or other equipment located in or near the remote vans. The RDF units can also format the data for transmission to the control van via either fiber optic or microwave radio links. The data transmitted from multiple remote vans is received by Real-time Data Processing System (RTPS) units located in the control van for merging, processing and recording. Some of the processed data can be transferred to a Host Processing System (HPS) where it can be displayed on color graphic workstations. The control van's HPS workstations provide user-friendly displays and menus for test setup and control. Both the remote and control vans are equipped with secure digital communication systems capable of supporting compressed digital video, audio, high-speed instrumentation data and an Ethernet computer network.
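One small piece of the described architecture, the RTPS units' merging of time-stamped records arriving from multiple remote vans, can be sketched as an ordered merge. The record layout and van identifiers below are illustrative assumptions, not the ITS data formats.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Record:
    timestamp: float                      # records compare by time only
    van_id: str = field(compare=False)
    payload: bytes = field(compare=False)

def merge_streams(*van_streams):
    """Time-order records from multiple remote vans, as the RTPS units
    in the control van would before recording and display. Each input
    stream is assumed to be locally time-ordered."""
    yield from heapq.merge(*van_streams)

van_a = [Record(0.10, "A", b"seeker"), Record(0.30, "A", b"seeker")]
van_b = [Record(0.05, "B", b"countermeasure"), Record(0.20, "B", b"countermeasure")]
for rec in merge_streams(van_a, van_b):
    print(rec.timestamp, rec.van_id, rec.payload)
```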
15

An architectural approach for reasoning about trust properties

Namiluko, Cornelius January 2012
The need for trustworthy system operation has been acknowledged in many circles. However, establishing that a system is trustworthy is a significant challenge. While trusted computing proposes technical mechanisms towards this end, less attention is directed towards providing a basis for trusting such systems. Consequently, it is not clear: (i) how such mechanisms influence the overall trust in a system; (ii) the properties and assumptions upon which trust is based; and (iii) the evidence necessary to reason about these properties. This can be attributed to a number of factors including: (i) the complexity of modern systems; (ii) a lack of consensus on a definition of trust; and (iii) a lack of a systematic approach for identifying and using evidence to reason about trust-related properties. This dissertation presents research towards addressing these challenges. We argue that an architectural approach provides effective abstractions for making trust properties and assumptions explicit and reasoning about a system's ability to satisfy these properties. We propose a framework for identifying, categorising and mapping trust-properties to aspects of a system that could be used to reason about these properties. Guided by this framework, we propose and develop models for representing knowledge about a particular aspect and using it to reason about trust-properties. A semantic model, based on the semantics of Z, is developed to characterise building blocks of trustworthy systems and to demonstrate how the system's constituents determine its trustworthiness. An abstraction model based on formal verification is developed to reason about the impact of the system's construction and configuration on its trustworthiness. Finally, two complementary models for capturing the runtime aspects of the system are developed. A trace-based model enables analysis of runtime evidence in the form of event logs and a provenance-based model captures operations on the system as a provenance graph. The models are validated on a trusted grid architecture, a password manager and a trustworthy collaborative system.
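The trace-based model can be hinted at with a toy predicate over an ordered event log: a trust property holds if the required events appear in the required relative order. The event names below are illustrative, not taken from the dissertation.

```python
def satisfies(trace, required_order):
    """True iff the events in `required_order` occur in `trace` in that
    relative order (other events may interleave). The `in` test on an
    iterator consumes it, giving a standard subsequence check."""
    it = iter(trace)
    return all(event in it for event in required_order)

boot_log = ["power_on", "measure_bootloader", "measure_kernel", "launch_app"]

# Property: the kernel must be measured before any application launches.
print(satisfies(boot_log, ["measure_kernel", "launch_app"]))   # True
print(satisfies(boot_log, ["launch_app", "measure_kernel"]))   # False
```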
16

Evaluating ARCADIA/Capella vs. OOSEM/SysML for System Architecture Development

Alai, Shashank P. 08 1900
Indiana University-Purdue University Indianapolis (IUPUI) / Systems engineering is gaining pace in many segments of the product manufacturing industries. Model-Based Systems Engineering (MBSE) is the formalized application of modeling to perform systems engineering activities. To effectively utilize the full potential of MBSE, a methodology consisting of appropriate processes, methods and tools is a key necessity. In the last decade, several MBSE projects have been implemented in industries ranging from aerospace and defense to automotive, healthcare and transportation. The Systems Modeling Language (SysML) standard has been a key enabler of these projects at many companies. Although SysML can provide a rich representation of any system through various viewpoints, the journey towards adopting SysML to realize the true potential of MBSE has been a challenge. Among all, one of the common roadblocks faced by systems engineers across industries has been the software engineering-based nature of SysML, which makes its modeling concepts hard to grasp for people who do not have a software engineering background. As a consequence, developing a system (or system-of-systems) architecture model using SysML has remained a challenging task for many engineers, even a decade after the language's inception and through multiple successive iterations of its specification. Being a modeling language, SysML is method-agnostic, but its associated limitations outweigh this advantage. ARCADIA (Architecture Analysis and Design Integrated Approach) is a systems and software architecture engineering method based on architecture-centric and model-based engineering activities. Applied properly, ARCADIA offers a very effective way to model the architecture of multi-domain systems and overcomes many of the limitations of traditional SysML implementations. This thesis evaluates the architecture development capabilities of ARCADIA/Capella against SysML used with the Object-Oriented Systems Engineering Method (OOSEM). The study focuses on the key equivalences and differences between the two MBSE solutions from a model development perspective and provides several criteria for evaluating their effectiveness for architecture development, using a conceptual case of Adaptive Cruise Control (ACC). The evaluation is based on three perspectives: architecture quality, ability to support key process deliverables, and the overall methodology. Toward this end, an industry-wide survey of MBSE practitioners and thought leaders was conducted, both to identify concerns in using models and to validate the results of the study. The case study demonstrates how the ARCADIA/Capella approach addresses several challenges currently faced in SysML implementation. From a process point of view, ARCADIA/Capella and SysML equally support the provision of the key deliverable artifacts required in the systems engineering process. However, the candidate architectures developed using the two approaches differ considerably in aspects such as the mapping of form to function and the creation of functional architectures. The ARCADIA/Capella approach allows engineers to develop a 'good' system architecture representation efficiently and intuitively. The study also answers several useful criteria pertaining to the overall candidate methodologies, serving as a practitioner's reference for selecting the most suitable approach.
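The three evaluation perspectives named above suggest a simple weighted-criteria comparison; a hypothetical sketch follows. The weights and scores are placeholders and do not reproduce the study's findings.

```python
# Weights mirror the thesis's three perspectives; the numbers are invented.
criteria = {
    "architecture_quality": 0.4,
    "process_deliverable_support": 0.3,
    "overall_methodology": 0.3,
}

# Placeholder 1-5 ratings, not the study's actual results.
scores = {
    "ARCADIA/Capella": {"architecture_quality": 4,
                        "process_deliverable_support": 4,
                        "overall_methodology": 5},
    "OOSEM/SysML":     {"architecture_quality": 3,
                        "process_deliverable_support": 4,
                        "overall_methodology": 3},
}

def weighted_total(candidate):
    """Weighted sum of a candidate's ratings across all criteria."""
    return sum(w * scores[candidate][c] for c, w in criteria.items())

for candidate in scores:
    print(candidate, round(weighted_total(candidate), 2))
```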
17

An Interactive Wildfire Spread and Suppression Simulation Environment Based on DEVS-FIRE

Song, Fei 21 November 2008
Wildfires pose serious threats to society and the environment. Simulating wildfire spread and fire suppression remains a challenging task due to the complexity of wildfire behavior and fire suppression tactics. In previous work, a wildfire spread and suppression simulation model called DEVS-FIRE was developed. Building on that model, this thesis develops a graphical user interface to support an interactive simulation environment for surface wildfire spread and suppression. The developed environment allows users to dynamically set up fire spread simulations and to interactively deploy firefighting agents to experiment with different fire suppression tactics. The graphical user interface is implemented using the Java Swing framework and is integrated with the DEVS-FIRE model in a well-designed manner. The software architecture is described, and the simulation environment and experimental results with different fuel, terrain and weather data are presented.
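A greatly simplified cellular sketch of fire spread, of the kind the DEVS-FIRE environment simulates with far richer fuel, terrain and weather models, is shown below. This is not the DEVS formalism itself, and the flat spread probability is an assumption for illustration.

```python
import random

UNBURNED, BURNING, BURNED = 0, 1, 2

def step(grid, spread_prob=0.6):
    """One simplified spread step: each burning cell may ignite its four
    neighbours, then burns out. DEVS-FIRE derives ignition timing from
    fuel, terrain and weather data; this toy uses a flat probability."""
    rows, cols = len(grid), len(grid[0])
    nxt = [row[:] for row in grid]
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] != BURNING:
                continue
            nxt[r][c] = BURNED
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                nr, nc = r + dr, c + dc
                if (0 <= nr < rows and 0 <= nc < cols
                        and grid[nr][nc] == UNBURNED
                        and random.random() < spread_prob):
                    nxt[nr][nc] = BURNING
    return nxt

# In the real GUI a user sets up the ignition point interactively.
grid = [[UNBURNED] * 5 for _ in range(5)]
grid[2][2] = BURNING
for _ in range(3):
    grid = step(grid)
for row in grid:
    print(row)
```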
18

Study on Architecture-Oriented Enterprise Private Cloud Model

Hsu, Chine-chuan 12 June 2012
Cloud computing has changed the face of Information Technology (IT) infrastructure: in addition to lowering operating costs, it provides real-time services and reduces the barrier to information services. In order to adapt to rapidly changing market demand, enterprises are beginning to consider the feasibility of deploying cloud computing. The business environment changes so fast that an integrated dynamic framework and an intelligent service system are needed to achieve enterprises' visions, objectives and strategies, and to respond quickly. Addressing the three main service types of cloud computing, Software as a Service (SaaS), Platform as a Service (PaaS) and Infrastructure as a Service (IaaS), this study proposes an integration model for enterprise administration. Cloud computing provides dynamic resource adjustment. Based on its deployment model, from the deployment of organizations to customer interactions, cloud computing is divided into public, private and hybrid clouds. The information security and efficiency of a private cloud allow enterprises to execute their operations smoothly according to their business rules, so more and more enterprises are inclined to deploy private clouds. This study uses the structure-behavior coalescence architecture description language (SBC-ADL) to build the systems architecture, and provides thorough suggestions for dynamic resource allocation as a reference model for any enterprise planning to deploy cloud computing services. Enterprises that have already implemented cloud computing services can refer to the systems architecture to improve their business management. Describing the relationships among the parts of the systems architecture helps in quickly understanding system operation. By reducing misunderstanding and increasing work efficiency and information correctness, SBC-ADL also works well as an effective tool for training and communication within the IT department.
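The structure-behavior coalescence idea can be sketched as a single model holding both the system hierarchy (structure) and the interactions between systems (behavior). The component and service names below are placeholders, not the study's private-cloud architecture.

```python
from dataclasses import dataclass, field

@dataclass
class System:
    name: str
    subsystems: list = field(default_factory=list)

@dataclass
class Interaction:
    caller: str
    callee: str
    service: str

# Structure: a hypothetical private cloud split along the three
# service layers named in the abstract.
structure = System("PrivateCloud", [
    System("SaaS-Layer"), System("PaaS-Layer"), System("IaaS-Layer"),
])

# Behavior: illustrative service calls between the layers.
behavior = [
    Interaction("SaaS-Layer", "PaaS-Layer", "deploy_application"),
    Interaction("PaaS-Layer", "IaaS-Layer", "allocate_vm"),
]

def describe(system, interactions, indent=0):
    """Walk the coalesced model: print the structure, and under each
    system the interactions it initiates."""
    print(" " * indent + system.name)
    for i in interactions:
        if i.caller == system.name:
            print(" " * (indent + 2) + f"-> {i.callee}: {i.service}")
    for sub in system.subsystems:
        describe(sub, interactions, indent + 2)

describe(structure, behavior)
```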
19

Profiling and reducing micro-architecture bottlenecks at the hardware level / BLAP: a basic-block architecture characterizer

Moreira, Francis Birck January 2014
Most mechanisms in current superscalar processors use instruction-granularity information for speculation, such as branch predictors or prefetchers. However, many of these characteristics can be obtained at the basic-block level, increasing the amount of code that can be covered while requiring less space to store the data. Moreover, the code can be profiled more accurately, and a wider variety of information can be gathered by analyzing the different instruction types inside a block. Because of these advantages, block-level analysis can offer more opportunities for mechanisms that use this information. For example, it is possible to integrate information about branch prediction and memory accesses to provide precise information for speculative mechanisms, increasing accuracy and performance. We propose the Block-Level Architecture Profiler (BLAP), an online hardware mechanism that profiles bottlenecks at the microarchitectural level, such as delinquent memory loads, hard-to-predict branches and contention for functional units. BLAP works at the basic-block level, providing information that can be used to reduce the impact of these bottlenecks. A prefetch-dropping mechanism and a DRAM memory controller policy were developed to use the profiling information provided by BLAP and demonstrate its potential. Together, these mechanisms are able to improve system performance by up to 17.39% (3.90% on average). Our technique also showed average gains of 13.14% when evaluated under high memory pressure due to highly aggressive prefetching.
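A software sketch of the block-level profiling idea follows: statistics accumulate per basic block rather than per instruction, and a consumer such as a memory controller policy can query for delinquent blocks. Field names and the threshold are illustrative assumptions; BLAP itself is a hardware mechanism.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class BlockStats:
    executions: int = 0
    cache_misses: int = 0
    branch_mispredictions: int = 0

# One entry per basic block, keyed by the block's start address, so a
# single entry covers many instructions.
table = defaultdict(BlockStats)

def retire_block(block_addr, misses, mispredictions):
    """Update the block's statistics when it retires."""
    entry = table[block_addr]
    entry.executions += 1
    entry.cache_misses += misses
    entry.branch_mispredictions += mispredictions

def delinquent_blocks(miss_rate_threshold=2.0):
    """Blocks whose loads miss often enough that, e.g., a memory
    controller policy might prioritize their requests."""
    return [addr for addr, s in table.items()
            if s.executions and s.cache_misses / s.executions >= miss_rate_threshold]

retire_block(0x400A10, misses=3, mispredictions=0)
retire_block(0x400A10, misses=2, mispredictions=1)
retire_block(0x400B80, misses=0, mispredictions=0)
print([hex(a) for a in delinquent_blocks()])  # ['0x400a10']
```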
