211

Design and implementation of hardware and software for a medium/high-resolution graphics board with local processing capability [Projeto e implementação de hardware e software para uma placa gráfica de média/alta resolução com capacidade de processamento local]

Berardi, Paulo Cesar 02 June 1989 (has links)
Advisor: Clesio Luiz Tozzi / Master's dissertation - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica / Abstract: This dissertation describes the design and implementation of a graphics board prototype with local processing capability (MC68008 processor), a resolution of 512x380 pixels, 16 simultaneous colors chosen from a palette of 4096, and a refresh rate of 60 Hz. Keeping the same architecture and using more current components, the board's resolution can be expanded to 1024x1024 pixels with 256 simultaneous colors. A graphics kernel, resident in EPROM, was also developed to perform board initialization and the reception, analysis and execution of commands. Using the board's processing power, more complex primitives such as circles, ellipses and text were incorporated into the kernel, along with all segment, window and viewport management functions. One of the most important characteristics of the software is its portability/reusability: by rewriting a few utility modules it can be deployed in a variety of graphics environments / Master's / Automation / Master in Electrical Engineering
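The kernel's circle primitive lends itself to integer-only raster techniques. The sketch below shows the classic midpoint circle algorithm such a kernel could use; the dissertation does not specify its actual method, and `set_pixel` is a hypothetical stand-in for the board's frame-buffer write routine.

```python
# A minimal sketch (illustration only) of the midpoint circle algorithm,
# the kind of raster primitive a graphics kernel like the one described
# above might implement. `set_pixel` is a hypothetical frame-buffer write.
def draw_circle(set_pixel, cx, cy, radius):
    """Rasterize a circle using integer-only midpoint stepping."""
    x, y = radius, 0
    d = 1 - radius  # decision variable: is the midpoint inside the circle?
    while x >= y:
        # Plot the eight symmetric octant points.
        for px, py in ((x, y), (y, x), (-y, x), (-x, y),
                       (-x, -y), (-y, -x), (y, -x), (x, -y)):
            set_pixel(cx + px, cy + py)
        y += 1
        if d < 0:
            d += 2 * y + 1          # midpoint inside: step only in y
        else:
            x -= 1
            d += 2 * (y - x) + 1    # midpoint outside: also step inward in x
```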
212

Design, analysis and implementation of bulk-synchronous parallel algorithms, data structures and techniques

Siniolakis, Constantinos J. January 1998 (has links)
The objective of this thesis is the unified investigation of a wide range of fundamental parallel methods that are transportable amongst pragmatic parallel machines having different numbers of processors, different periodicity of global synchronization and different bandwidth of inter-processor communication. The computational model adopted is the bulk-synchronous parallel (BSP) model, which abstracts the characteristics of parallel machines into three numerical parameters p, L and g, quantifying, respectively, processors, periodicity and bandwidth - the model differentiates memory that is local to a processor from memory that is non-local, yet, for the sake of universality, does not differentiate network proximity. The BSP parameters p, L and g, together with the problem size n, are employed to measure the performance, and consequently, the transportability of parallel methods across machines having different values of these parameters. We show that optimality to within small multiplicative constant factors close to one can be achieved for a multiplicity of fundamental computational problems by transportable algorithms and data structures that can be applied for a wide range of values of the BSP parameters. While these algorithms and data structures are fairly simple themselves, describing their performance in terms of these parameters is somewhat complicated. The main reward for quantifying these complications is that software can be written once and migrated efficiently amongst a variety of parallel machines. The methods considered in this thesis - both theoretically and experimentally - embody deterministic and randomized techniques for the efficient realization of fundamental algorithms (broadcasting, computing parallel-prefixes, load-balancing, list-contracting, merging, sorting, integer-sorting, selecting, searching and hashing), data structures (heaps, search trees and hash tables) and applications (computational geometry, parallel model simulations and structured query language primitives).
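For reference, standard BSP cost accounting charges each superstep w + g*h + L, where w is the maximum local work on any processor and h the maximum number of words any processor sends or receives. The sketch below illustrates this with a common two-superstep broadcast; the parameter values in the example are invented for illustration, not taken from the thesis.

```python
def bsp_superstep_cost(w, h, g, L):
    """Cost of one BSP superstep: local work w, h-relation h,
    per-word communication cost g, synchronisation cost L."""
    return w + g * h + L

def two_phase_broadcast_cost(n, p, g, L):
    """Standard two-superstep broadcast of n words to p processors:
    (1) the source scatters a distinct n/p-word piece to every processor
        (the source sends n words in total, so h1 = n);
    (2) every processor sends its piece to all the others
        (each sends and receives about n words, so h2 ~= n)."""
    h1 = n
    h2 = n
    return bsp_superstep_cost(0, h1, g, L) + bsp_superstep_cost(0, h2, g, L)

# Example with made-up machine parameters: n = 1_000_000 words, p = 64,
# g = 4 time-units/word, L = 10_000 time-units per synchronisation.
print(two_phase_broadcast_cost(1_000_000, 64, 4, 10_000))
```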
213

A design methodology for self-timed VLSI systems

Al-Helwani, A. M. January 1985 (has links)
No description available.
214

An evaluation of load balancing algorithms for distributed systems

Benmohammed-Mahieddine, Kouider January 1991 (has links)
Distributed systems are gradually being accepted as the dominant computing paradigm of the future. However, due to the diversity and multiplicity of resources, and the need for transparency to users, global resource management raises many questions. At the performance level, the potential benefits of load balancing in resolving the occasional congestion experienced by some nodes while others are idle or lightly loaded are commonly accepted. It is also acknowledged that no single load balancing algorithm deals satisfactorily with changing system characteristics and dynamic workload environments. In modelling distributed systems for load balancing, optimistic assumptions about system characteristics are commonly made, with no evaluation of alternative system design options such as communication protocols. When realistic assumptions are made about system attributes such as communication bandwidth, load balancing overheads, and the workload model, doubts are cast on the capability of load balancing to improve the performance of distributed systems significantly. A taxonomy is developed for both the components and the attributes of load balancing algorithms, to provide a common terminology and a comprehensive view of load balancing in distributed systems. For adaptive algorithms the taxonomy is extended to identify the issues involved and the ways of adding adaptability along different dimensions. A design methodology is also outlined. A review of related work is used to identify the most promising load balancing strategies and the modelling assumptions made in previous load balancing studies. Subsequently, the research problems addressed in this thesis and the design of new algorithms are detailed. A simulated system developed to allow experimentation with various load balancing algorithms under different workload models and system attributes is described. Based on the structure of the file system and the classes of node processing speed involved, different models of loosely-coupled distributed systems can be defined. Four models are developed: disk-based homogeneous nodes, diskless homogeneous nodes, diskless heterogeneous nodes, and disk-based heterogeneous nodes. The nodes are connected through a broadcast transfer device. A set of representative load balancing algorithms covering a range of strategies are evaluated and compared for the four models of distributed systems. The algorithms developed include a new algorithm called Diffuse, based on explicit adaptability, for the homogeneous systems. In the case of heterogeneous systems, novel modifications are made to a number of algorithms to take into account the heterogeneity of node speeds. The evaluation on homogeneous systems is two-fold: an assessment of the effect of system attributes on the performance of the distributed system subject to these algorithms, and a comparison of the relative merits of the algorithms using different performance metrics, in particular a classification of the performance of the Diffuse algorithm relative to others in the literature. For the heterogeneous systems, the performance of the adapted algorithms is compared to that of the standard versions and to the no-load-balancing case. As a result of this evaluation, for a set of combinations of performance objectives, distributed system attributes, and workload environments, we identify the most appropriate load balancing algorithm and optimal values for its adjustable parameters.
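The abstract does not detail the Diffuse algorithm, but the name suggests the textbook diffusion scheme in which each node repeatedly sheds a fixed fraction of its load imbalance to its neighbours. A minimal sketch of that generic idea follows, with illustrative parameters only; it is not a reconstruction of the thesis's algorithm.

```python
def diffusion_round(load, edges, alpha=0.25):
    """One synchronous diffusion load-balancing round.
    load:  dict node -> current load (mutated in place)
    edges: iterable of undirected (u, v) pairs, each listed once
    alpha: fraction of the pairwise imbalance moved per round"""
    delta = {v: 0.0 for v in load}
    for u, v in edges:
        flow = alpha * (load[u] - load[v])  # > 0: u sheds work to v
        delta[u] -= flow
        delta[v] += flow
    for v in load:
        load[v] += delta[v]

# Tiny example on a 4-node ring: an initial hotspot evens out over rounds.
load = {0: 8.0, 1: 0.0, 2: 0.0, 3: 0.0}
ring = [(0, 1), (1, 2), (2, 3), (3, 0)]
for _ in range(20):
    diffusion_round(load, ring)
print(load)  # all four values approach the average load, 2.0
```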
215

Some aspects of the efficient use of multiprocessor control systems

Woodward, Michael C. January 1981 (has links)
Computer technology, particularly at the circuit level, is fast approaching its physical limitations. As future needs for greater power from computing systems grow, increases in circuit switching speed (and thus instruction speed) will be unable to match these requirements. Greater power can also be obtained by incorporating several processing units into a single system. This ability to increase the performance of a system by the addition of processing units is one of the major advantages of multiprocessor systems. Four major characteristics of multiprocessor systems have been identified (28) which demonstrate their advantage: throughput, flexibility, availability, and reliability. The additional throughput obtained from a multiprocessor has been mentioned above. This increase in the power of the system can be obtained in a modular fashion, with extra processors being added as greater processing needs arise. The addition of extra processors also has (in general) the desirable advantage of giving a smoother cost-performance curve (63). Flexibility is obtained from the increased ability to construct a system matching the user's requirements at a given time without placing restrictions upon future expansion. With multiprocessor systems, the potential also exists of making greater use of the resources within the system. Availability and reliability are inter-related. Increased availability is achieved, in a well designed system, by ensuring that processing capabilities can be provided to the user even if one (or more) of the processing units has failed. The service provided, however, will probably be degraded due to the reduction in processing capacity. Increased reliability is obtained by the ability of the processing units to compensate for the failure of one of their number. This recovery may involve complex software checks and a consequent decrease in available power even when all the units are functioning.
216

The design of distributed processing systems using stable modules

Kramer, J. January 1980 (has links)
No description available.
217

Pascal-orientated computer design

Schmitz, E. A. January 1980 (has links)
No description available.
218

Data transmission at 19,200 bit/s over telephone channels

Bateman, Stephen C. January 1985 (has links)
The thesis is concerned with the transmission and reception of digital data at 19,200 bit/s over voice-frequency telephone channels. Following a feasibility study based on both practical and theoretical constraints, the telephone network itself is investigated to determine methods of circuit characterisation and the causes and effects of distortion and other signal impairments.
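A back-of-the-envelope Shannon-capacity check suggests why 19,200 bit/s over a voice-frequency channel is feasible but demanding. The bandwidth and SNR figures below are typical textbook assumptions, not measurements from the thesis.

```python
# Shannon capacity C = B * log2(1 + SNR) for an idealised voice channel.
# All numbers are illustrative assumptions.
import math

bandwidth_hz = 3000                 # assumed usable voice-band width
snr_db = 30                         # assumed signal-to-noise ratio
snr_linear = 10 ** (snr_db / 10)
capacity = bandwidth_hz * math.log2(1 + snr_linear)
print(f"Shannon limit: {capacity:.0f} bit/s")  # ~29,900 bit/s > 19,200 bit/s

# One way to realise 19,200 bit/s: 2400 symbols/s carrying 8 bits each,
# e.g. a 256-point constellation (again an assumption for illustration).
print(2400 * 8)  # 19200
```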
219

Video replay in computer animation

Hawkins, Stuart Philip January 1990 (has links)
No description available.
220

A Power System-Based IoT Network for Remote Sensing Applications

Gaiero, Dominic 01 June 2021 (has links) (PDF)
Cities around the world are facing increasingly significant challenges, including rapid urbanization, resource management, and environmental threats. In California, for example, wildfires present an ever-growing threat that gravely harms people, destroys communities, and causes billions of dollars in damages. The task of addressing these environmental threats and many other challenges is greatly aided by widespread data collection and real-time inference. However, as IoT networks scale and near-data analytics demands more energy, IoT endpoints grow in power draw and complexity, limiting their deployment. Additionally, deploying endpoints in remote locations imposes further reliability and communication constraints. In this thesis, we propose an approach for building scalable and reliable near-data analytics systems by leveraging existing power systems. The insight behind this approach is that power transmission and distribution systems provide 1) an elevated vantage point ideal for sensing, 2) wide coverage of remote and urban areas, 3) a cost-effective power supply via energy harvesting, and 4) the ability to use existing power infrastructure to further improve application accuracy. We describe an implementation of our approach using power system-based sensor and gateway nodes, and their integration with cloud processing resources. We evaluate the cost, power, and communication of this approach in the context of a remote wildfire sensing application, and demonstrate that it provides improved accuracy and scalability at significantly lower cost than conventional approaches.
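A rough duty-cycle power budget illustrates why energy harvesting can sustain such endpoints. All figures below are illustrative assumptions, not values from the thesis.

```python
# Average power draw of a duty-cycled sensor endpoint (sketch only;
# the current and timing figures are invented for illustration).
def average_power_mw(active_mw, sleep_mw, active_s, period_s):
    """Average draw of a node that wakes every `period_s` seconds to
    sense and transmit for `active_s` seconds, then sleeps."""
    duty = active_s / period_s
    return active_mw * duty + sleep_mw * (1 - duty)

# Example: 150 mW while sampling/transmitting for 2 s every 5 minutes,
# 0.05 mW asleep -> about 1.05 mW average, within a small harvester's budget.
avg = average_power_mw(150, 0.05, 2, 300)
print(f"{avg:.2f} mW average")
```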
