171

CLUE: A Cluster Evaluation Tool

Parker, Brandon S. 12 1900 (has links)
Modern high-performance computing depends on parallel processing systems. Most current benchmarks report only high-level computational throughput metrics, which may be sufficient for single-processor systems but can misrepresent the true capability of parallel systems. A new benchmark is therefore proposed. CLUE (Cluster Evaluator) uses a cellular automata algorithm to evaluate the scalability of parallel processing machines. The benchmark also uses algorithmic variations to evaluate the impact of individual system components on the overall serial fraction and efficiency. CLUE is not a replacement for other performance-centric benchmarks; rather, it shows the scalability of a system and provides metrics that reveal where overall performance can be improved. CLUE is a new benchmark that compares different parallel systems better than existing benchmarks do and can diagnose where a particular parallel system can be optimized.
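The serial fraction the abstract refers to can be illustrated with the Karp-Flatt metric, a standard way to back a serial fraction out of a measured speedup. This is a generic sketch, not CLUE's actual code, and the numbers are invented:

```python
# Generic sketch of the serial-fraction idea (not CLUE itself).
def karp_flatt(speedup: float, p: int) -> float:
    """Experimentally determined serial fraction e = (1/S - 1/p) / (1 - 1/p)."""
    return (1.0 / speedup - 1.0 / p) / (1.0 - 1.0 / p)

def amdahl_speedup(serial_fraction: float, p: int) -> float:
    """Speedup predicted by Amdahl's law for a given serial fraction."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / p)

# Example: a measured 6.0x speedup on 8 processors implies ~4.8% serial work,
# which in turn caps the speedup achievable on a larger machine.
e = karp_flatt(speedup=6.0, p=8)
print(f"serial fraction: {e:.3f}")
print(f"predicted speedup on 16 processors: {amdahl_speedup(e, 16):.2f}")
```

A serial fraction that grows with processor count is the classic sign of communication or synchronization overhead that a throughput-only benchmark would miss.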
172

Development of time and workload methodologies for Micro Saint models of visual display and control systems

Moscovic, Sandra A. 22 December 2005 (has links)
The Navy, through its Total Quality Leadership (TQL) program, has emphasized the need for objective criteria in making design decisions. Numerous tools are available to help human factors engineers meet the Navy's need. For example, simulation modeling provides objective design decisions without incurring the high costs associated with prototype building and testing. Unfortunately, simulation modeling of human-machine systems is limited by the lack of task completion time and variance data for various objectives. Moreover, no study has explored the use of a simulation model with a Predetermined Time System (PTS) as a valid method for making design decisions for display interactive consoles. This dissertation concerns the development and validation of a methodology to incorporate a PTS known as Modapts into a simulation modeling tool known as Micro Saint. The operator task context for the model was an interactive displays and controls console known as the AN/SLQ-32(V). In addition, the dissertation examined the incorporation of a cognitive workload metric known as the Subjective Workload Assessment Technique (SWAT) into the Micro Saint model. The dissertation was conducted in three phases. In the first phase, a task analysis was performed to identify operator task and hardware interface redesign options. In the second phase, data were collected from two groups of six participants who performed an operationally realistic task on 24 different configurations of a Macintosh AN/SLQ-32(V) simulator. Configurations of the simulated AN/SLQ-32(V) were defined by combinations of two display formats, two color conditions, and two emitter symbol sets, presented under three emitter density conditions. Data from Group 1 were used to assign standard deviations, probability distributions, and Modapts times to a Micro Saint model of the task.
The third phase of the study consisted of (1) verifying the model-generated performance and workload scores by comparing them against scores obtained from Group 1 using regression analyses, and (2) validating the model by comparison against Group 2. The results indicate that the Modapts/Micro Saint methodology was a valid way to predict the performance scores obtained from the 24 simulated AN/SLQ-32(V) prototypes (R² = 0.78). The workload metric used in the task network model accounted for 76 percent of the variance in Group 2 mean workload scores, but the slope of the regression differed from unity (p = 0.05). This statistical finding suggests that the model does not provide an exact prediction of workload scores. Further regression analysis of Group 1 and Group 2 workload scores indicates that the two groups were not homogeneous with respect to workload ratings. / Ph. D.
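The validation step described above (regressing observed scores on model-generated scores, then inspecting R² and the slope's distance from unity) can be sketched generically. The data below are invented for illustration and are not the dissertation's scores:

```python
# Generic model-validation sketch with invented data (not the dissertation's).
import numpy as np

def validate(predicted, observed):
    """Regress observed on predicted; return (slope, intercept, R^2)."""
    x = np.asarray(predicted, dtype=float)
    y = np.asarray(observed, dtype=float)
    slope, intercept = np.polyfit(x, y, 1)      # least-squares line y = slope*x + b
    r2 = np.corrcoef(x, y)[0, 1] ** 2
    return slope, intercept, r2

model_scores    = [10.0, 12.0, 15.0, 18.0, 20.0, 24.0]
observed_scores = [11.0, 12.0, 16.0, 17.0, 21.0, 25.0]
slope, intercept, r2 = validate(model_scores, observed_scores)
# A slope near 1 with high R^2 means the model tracks the observed scores;
# a slope significantly different from unity signals a systematic bias.
print(f"slope = {slope:.3f}, R^2 = {r2:.3f}")
```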
173

Multimodule simulation techniques for chip level modeling

Cho, Chang H. January 1986 (has links)
The design and implementation of a multimodule chip-level simulator, whose source description language is based on the original GSP2 system, are described. To enhance simulation speed, a special addressing scheme ("sharing a single memory location") is used in the implementation of pin connections. The basic data structures and algorithms of the simulator are described. The simulator can model many digital devices interconnected as a digital network. It is also capable of modeling external buses and handling the suspension of processes in a multimodule simulation environment. An example of a multimodule digital system simulation is presented. / M.S.
174

Implementing security in an IP Multimedia Subsystem (IMS) next generation network - a case study

Unknown Date (has links)
The IP Multimedia Subsystem (IMS) has gone from just a step in the evolution of the GSM cellular architecture control core to being the de facto framework for Next Generation Network (NGN) implementations and deployments by operators worldwide: not only cellular mobile operators, but also fixed-line, cable television, and alternative operators. With this transition from standards documents to the real world, engineers at these new multimedia communications companies must make these new networks secure against threats and real attacks that were not part of the previous generation of networks. We present the IMS and other competing frameworks, analyze the security issues, present the topic of Security Patterns, introduce several new patterns, including the basis for a Generic Network pattern, and apply these concepts to designing a security architecture for a fictitious 3G operator using IMS for the control core. / by Jose M. Ortiz-Villajos. / Thesis (M.S.C.S.)--Florida Atlantic University, 2009. / Includes bibliography. / Electronic reproduction. Boca Raton, Fla., 2009. Mode of access: World Wide Web.
175

Equivalence Checking for High-Assurance Behavioral Synthesis

Hao, Kecheng 10 June 2013 (has links)
The rapidly increasing complexity of hardware designs is forcing design methodologies and tools to move to the Electronic System Level (ESL), a higher abstraction level with better productivity than the state-of-the-art Register Transfer Level (RTL). Behavioral synthesis, which automatically synthesizes ESL behavioral specifications into RTL implementations, plays a central role in this transition. However, since behavioral synthesis is a complex and error-prone translation process, designers' lack of confidence in its correctness has become a major barrier to its wide adoption. Therefore, techniques for establishing equivalence between an ESL specification and its synthesized RTL implementation are critical to bringing behavioral synthesis into practice. The major research challenge in equivalence checking for behavioral synthesis is the significant semantic gap between ESL and RTL: the semantics of ESL involve untimed, sequential execution, whereas the semantics of RTL involve timed, concurrent execution. We propose a sequential equivalence checking (SEC) framework for certifying a behavioral synthesis flow, which exploits information on the successive intermediate design representations produced by the synthesis flow to bridge the semantic gap. In particular, the intermediate design representation after scheduling and pipelining transformations permits effective correspondence of internal operations between this representation and the synthesized RTL implementation, enabling scalable, compositional equivalence checking. Certification of loop and function pipelining transformations is made possible by a combination of theorem proving and SEC that exploits pipeline generation information from the synthesis flow (e.g., the iteration interval of a generated pipeline). The complexity introduced by bubbles in function pipelines is reduced by symbolically encoding all possible bubble insertions in one pipelined design representation.
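The untimed-versus-timed gap described above can be illustrated in miniature. The following toy model (not the dissertation's framework) compares an untimed specification against a two-stage pipelined implementation whose outputs lag by one cycle, which is exactly the kind of latency-shifted correspondence an SEC flow must establish:

```python
# Toy illustration of sequential equivalence across a semantic gap:
# an untimed spec y = (a + b) * c versus a two-stage pipelined version.
def spec(a, b, c):
    # Untimed ESL-style specification: one result per input, no clock.
    return (a + b) * c

class PipelinedImpl:
    """RTL-style model: stage 1 adds, stage 2 multiplies one cycle later."""
    def __init__(self):
        self.reg = None  # pipeline register holding (a + b, c)

    def clock(self, a, b, c):
        out = None if self.reg is None else self.reg[0] * self.reg[1]
        self.reg = (a + b, c)
        return out

def check_equivalence(inputs):
    # Correspondence: the impl output at cycle t+1 must equal the spec
    # output for the inputs presented at cycle t (a one-cycle latency map).
    impl = PipelinedImpl()
    produced = [impl.clock(a, b, c) for (a, b, c) in inputs]
    produced.append(impl.clock(0, 0, 0))        # flush the pipeline
    expected = [spec(a, b, c) for (a, b, c) in inputs]
    return produced[1:len(inputs) + 1] == expected

print(check_equivalence([(1, 2, 3), (4, 5, 6), (7, 8, 9)]))  # → True
```

Real SEC frameworks establish this correspondence symbolically over all inputs rather than by simulation on a few vectors, but the latency-shifted matching is the same idea.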
The result of this dissertation is a robust, practical, and scalable framework for certifying RTL designs synthesized from ESL specifications. We have validated the robustness, practicality, and scalability of our approach on industrial-scale ESL designs that result in tens of thousands of lines of RTL implementations.
176

High Level Preprocessor of a VHDL-based Design System

Palanisamy, Karthikeyan 27 October 1994 (has links)
This thesis presents work done on a design automation system in which high-level synthesis is integrated with logic synthesis. DIADES, a design automation system developed at PSU, starts the synthesis process from a language called ADL. The major part of this thesis deals with transforming the ADL-based DIADES system into a VHDL-based DIADES system. In this thesis I have upgraded and modified the existing DIADES system so that it becomes a preprocessor to a comprehensive VHDL-based design system from Mentor Graphics. High-level synthesis in the DIADES system includes two stages: data path synthesis and control unit synthesis. The conversion of data path synthesis is done in this thesis. In the DIADES system a digital system is described on the behavioral level in terms of variables and operations using the language ADL. The digital system described in ADL is compiled to a format called the GRAPH language, in which the behavior of a digital system is represented by a specific sequence of program statements. The descriptions in the GRAPH language are compiled to a format called the STRUCT language, in which the system is described in terms of lists of nodes and arrows. The main task of this thesis is to convert the descriptions in the GRAPH language and in the STRUCT language to the VHDL format. All generated VHDL code is compatible with the Mentor Graphics VHDL format, and all of it can be compiled, simulated, and synthesized by the Mentor Graphics tools.
177

Performance-directed design of asynchronous VLSI systems / Samuel Scott Appleton.

Appleton, Samuel Scott January 1997 (has links)
Bibliography: p. 269-285. / xxii, 285 p. : ill. ; 30 cm. / Title page, contents and abstract only. The complete thesis in print form is available from the University Library. / Describes a new method for describing asynchronous systems (free-flow asynchronism). The method is demonstrated through two applications: a channel signalling system and amedo. / Thesis (Ph.D.)--University of Adelaide, Dept. of Electrical and Electronic Engineering, 1998
178

Uma infra-estrutura confiavel para arquiteturas baseadas em serviços Web aplicada a pesquisa de biodiversidade / A dependable infrastructure for service-oriented architectures applied at biodiversity research

Gonçalves, Eduardo Machado 15 August 2018 (has links)
Orientador: Cecilia Mary Fischer Rubira / Dissertação (mestrado) - Universidade Estadual de Campinas, Instituto de Computação / Made available in DSpace on 2018-08-15T11:38:59Z (GMT). No. of bitstreams: 1 Goncalves_EduardoMachado_M.pdf: 3443509 bytes, checksum: b9211dc7c7cdb58d86853bd60f992664 (MD5) Previous issue date: 2009 / Resumo: Service-Oriented Architecture (SOA) is responsible for mapping the relevant business processes to their corresponding services which, together, deliver the final value to the user. This architecture must meet the main dependability requirements, among them high availability and high reliability of the service-based solution. The objective of this work is to develop a software infrastructure, called Arquitetura Mediador, that operates in the communication between service clients and the Web services themselves, in order to implement fault tolerance techniques that make effective use of the available service redundancies. Arquitetura Mediador was designed to be remotely accessible via Web services, so that the impact of adopting it is minimized. The proposed solution was validated using Web-service-based applications implemented in the BioCORE project, which supports biologists in their research activities of maintaining a collection of information on species biodiversity / Abstract: The Service-Oriented Architecture is responsible for mapping the relevant business processes to the services that, together, add value for the final user. This architecture must meet the main dependability requirements, among them high availability and high reliability of the service-based solution.
The objective of this work is to develop a software infrastructure, called Arquitetura Mediador, that operates in the communication between web service clients and the web services themselves, in order to implement fault tolerance techniques that make effective use of the available service redundancies. The Arquitetura Mediador infrastructure was designed to be remotely accessible via web services, so that the impact of its adoption is minimized. The validation of the proposed solution was made using web-service-based applications implemented in the BioCORE project. This project aims to support biologists in their research activities and to maintain information about collections of species and biodiversity / Mestrado / Engenharia de Software / Mestre em Ciência da Computação
179

Modulo computacional, baseado em redes neurais, para a força de corte e para a rugosidade, em torneamento / Computacional module, based on neural networks for cutting force and roughness in turning

Almeida, Sergio Luis Rabelo de 14 June 2006 (has links)
Orientador: Olivio Novaski / Tese (doutorado) - Universidade Estadual de Campinas, Faculdade de Engenharia Mecanica / Made available in DSpace on 2018-08-07T14:11:27Z (GMT). No. of bitstreams: 1 Almeida_SergioLuisRabelode_D.pdf: 2212505 bytes, checksum: b547ce8e7c1a1faea1a6c7468054f32f (MD5) Previous issue date: 2006 / Resumo: The CAM software packages available on the market today make it easy to automate the generation of CNC programs from a CAD model. Tool trajectories are calculated with respect to the final geometry of the part. In general, however, these programs do not provide resources for correctly estimating the machining parameters (cutting speed, feed, depth of cut), nor their influence on quantities relevant to the process, such as cutting force and roughness. This creates a mismatch with the physical reality of the process. Additionally, such programs were not developed with a didactic approach, and their prerequisites (a CAD interface, command of a foreign language, etc.) hinder students' learning of CNC machining processes. This work therefore addresses these problems, as experienced by technical schools, by developing a computational module, coupled to a commercial didactic CNC turning package, that allows the prediction of machining forces and roughness at CNC programming time. A neural network was chosen as the core technique, since it yields quite satisfactory approximations of the turning process.
The results indicate that the adopted neural network models (multilayer perceptron and radial basis function) satisfactorily approximate the behavior of the cutting force and roughness as a function of the chosen machining parameters (cutting speed, feed, and depth of cut) in a series of use cases, using the computational module developed / Abstract: Most CAM software in the market allows the user to easily create a CNC program from CAD models. The tool paths are calculated with respect to the final piece geometry. However, this software does not, as part of its functionality, allow the user to estimate the cutting parameters (cutting speed, feed, and depth of cut) or their influence on process variables such as cutting force and roughness. There is, in that sense, a gap between the geometrical and physical scenarios of the machining process. Additionally, such software was not developed with didactic requirements in mind, which makes it difficult for students to learn machining concepts using CNC technology; the CAD and foreign-language interfaces are examples of this. This work approaches these issues, which are particularly common among technical schools, by developing a computational module, embedded in a commercial didactic CNC software package, capable of predicting cutting forces (in roughing) and surface roughness (in finishing) at programming time. A neural network technique was used as the base core, since it allows good estimates of the turning process. The results indicate that the ANN topologies (multilayer perceptron and radial basis function) correlate satisfactorily with the experimental behavior of the cutting force and roughness with regard to the chosen input parameters (cutting speed, feed, and depth of cut) for different cases using the software prototype / Doutorado / Materiais e Processos de Fabricação / Doutor em Engenharia Mecânica
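The radial-basis-function family named in the abstract can be sketched in a few lines. The following is a minimal, generic RBF model (not the thesis code) fitted to a synthetic toy law relating normalized machining parameters (cutting speed, feed, depth of cut) to a "cutting force"; both the law and the grid of training points are invented for illustration:

```python
# Minimal RBF-network sketch with a synthetic target (not the thesis code).
import numpy as np

def rbf_design(X, centers, gamma):
    # Gaussian kernel matrix: K[i, j] = exp(-gamma * ||X[i] - centers[j]||^2)
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * d2)

def fit_rbf(X, y, gamma):
    Phi = rbf_design(X, X, gamma)               # one center per training sample
    w = np.linalg.solve(Phi + 1e-8 * np.eye(len(X)), y)   # tiny ridge for stability
    return lambda Xq: rbf_design(np.atleast_2d(np.asarray(Xq, float)), X, gamma) @ w

# Training grid of normalized (speed, feed, depth) and a made-up "force" law.
g = np.linspace(0.0, 1.0, 3)
X = np.array([(v, f, d) for v in g for f in g for d in g])
y = 100.0 * X[:, 1] * X[:, 2] + 5.0 * X[:, 0]   # toy law, not measured data
model = fit_rbf(X, y, gamma=10.0)
print("predicted force at (0.5, 0.5, 0.5):", float(model([0.5, 0.5, 0.5])[0]))
```

With one Gaussian center per training point the model interpolates the training data almost exactly; a real application would choose fewer centers and validate against held-out measurements.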
180

Investigation of Immersion Cooled ARM-Based Computer Clusters for Low-Cost, High-Performance Computing

Mohammed, Awaizulla Shareef 08 1900 (has links)
This study aimed to investigate the performance of ARM-based computer clusters using a two-phase immersion cooling approach, and to demonstrate its potential benefits over air-based natural and forced convection approaches. ARM-based clusters were created using the Raspberry Pi models 2 and 3, a commodity-level, single-board computer. The immersion cooling mode utilized two types of dielectric liquids, HFE-7000 and HFE-7100. Experiments involved running the benchmarking tests Sysbench and High Performance Linpack (HPL), as well as the combination of both, in order to quantify the key parameters of device junction temperature, frequency, execution time, computing performance, and energy consumption. Results indicated that the device core temperature has a direct effect on computing performance and energy consumption. In the reference natural convection cooling mode, as the temperature rose, the cluster began to decrease its operating frequency to protect the internal cores from damage. This resulted in a decline in computing performance and an increase in execution time, which in turn increased energy consumption. In more extreme cases, the performance of the cluster dropped by a factor of four, while energy consumption increased by 220%. This study therefore demonstrated that the two-phase immersion cooling method, with its near-isothermal, high heat transfer capability, enables fast, energy-efficient, and reliable operation, particularly benefiting high-performance computing applications where conventional air-based cooling methods would fail.
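The throttling-versus-energy trade-off described above can be illustrated with a back-of-the-envelope model. The figures below are invented for illustration (they are not the study's measurements); the key assumption is that power falls more slowly than frequency, since static power does not scale down with clock speed:

```python
# Illustrative energy model with invented numbers (not the study's data).
def energy_per_job(work_cycles, freq_hz, power_w):
    """Fixed-work job: time = cycles / frequency, energy = power * time."""
    t = work_cycles / freq_hz
    return power_w * t, t

work = 3.6e11                                        # hypothetical cycles per run
e_full, t_full = energy_per_job(work, 1.2e9, 4.0)    # nominal: 1.2 GHz at 4.0 W
e_thr,  t_thr  = energy_per_job(work, 0.6e9, 2.6)    # throttled: 0.6 GHz at 2.6 W
print(f"nominal:   {t_full:.0f} s, {e_full:.0f} J")
print(f"throttled: {t_thr:.0f} s, {e_thr:.0f} J")
# Halving the frequency doubles the run time, so energy per job rises even
# though instantaneous power drops: better cooling thus saves both time and energy.
```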
