31 
Computational properties of regular processor networks. Ermann, Renato. 08 1900 (has links)
No description available.

32 
A design methodology for optimal parallel coupled cyclic computing. Haggard, Roger Lynn. 08 1900 (has links)
No description available.

33 
Clumps : a candidate model of efficient, general purpose parallel computation. Campbell, Duncan Karl Gordon. January 1994 (has links)
No description available.

34 
A parallel distributed processing approach to the representation of knowledge for natural language understanding. Sutcliffe, R. F. E. January 1988 (has links)
No description available.

35 
Parallel solution of power system linear equations. Grey, David John. January 1995 (has links)
At the heart of many power system computations lies the solution of a large sparse set of linear equations. These equations arise from the modelling of the network and are the cause of a computational bottleneck in power system analysis applications. Efficient sequential techniques have been developed to solve these equations, but the solution is still too slow for applications such as real-time dynamic simulation and online security analysis. Parallel computing techniques have been explored in the attempt to find faster solutions, but the methods developed to date have not efficiently exploited the full power of parallel processing. This thesis considers the solution of the linear network equations encountered in power system computations. Based on the insight provided by the elimination tree, it is proposed that a novel matrix structure is adopted to allow the exploitation of parallelism which exists within the cutset of a typical parallel solution. Using this matrix structure it is possible to reduce the size of the sequential part of the problem and to increase the speed and efficiency of a typical LU-based parallel solution. A method for transforming the admittance matrix into the required form is presented, along with network partitioning and load balancing techniques. Sequential solution techniques are considered and existing parallel methods are surveyed to determine their strengths and weaknesses. Combining the benefits of existing solutions with the new matrix structure allows an improved LU-based parallel solution to be derived. A simulation of the improved LU solution is used to show the improvements in performance over a standard LU-based solution that result from the adoption of the new techniques. The results of a multiprocessor implementation of the method are presented, and the new method is shown to have a better performance than existing methods for distributed memory multiprocessors.
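The partitioned solution the abstract describes can be illustrated with a small worked example. The sketch below is not the thesis's code: the matrices, partition, and block sizes are invented for illustration. It shows the general bordered block-diagonal technique, where independent sub-network blocks are eliminated in parallel and only the small cutset system (a Schur complement) is solved sequentially.

```python
# Illustrative sketch of a bordered block-diagonal solve (invented data).
import numpy as np

# Two independent sub-network blocks A1, A2 and a cutset block D,
# coupled through borders B_k (block-to-cutset) and C_k (cutset-to-block).
A1 = np.array([[4.0, -1.0], [-1.0, 4.0]])
A2 = np.array([[5.0, -2.0], [-2.0, 5.0]])
B1 = np.array([[-1.0], [0.0]])
B2 = np.array([[0.0], [-1.0]])
C1, C2 = B1.T, B2.T
D = np.array([[6.0]])
b1 = np.array([1.0, 2.0])
b2 = np.array([0.5, 1.5])
bc = np.array([1.0])

# Step 1 (parallelisable): each processor eliminates its own block.
x1p = np.linalg.solve(A1, b1); W1 = np.linalg.solve(A1, B1)
x2p = np.linalg.solve(A2, b2); W2 = np.linalg.solve(A2, B2)

# Step 2 (sequential): solve the small cutset system via the Schur complement.
S = D - C1 @ W1 - C2 @ W2
xc = np.linalg.solve(S, bc - C1 @ x1p - C2 @ x2p)

# Step 3 (parallelisable): back-substitute into each block.
x1 = x1p - W1 @ xc
x2 = x2p - W2 @ xc

# Check against the full assembled system.
A = np.block([[A1, np.zeros((2, 2)), B1],
              [np.zeros((2, 2)), A2, B2],
              [C1, C2, D]])
x = np.concatenate([x1, x2, xc])
b = np.concatenate([b1, b2, bc])
print(np.allclose(A @ x, b))  # True
```

Only step 2 is inherently sequential, which is why shrinking the cutset, as the abstract proposes, shrinks the sequential part of the computation.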

36 
Iterative methods for linear and geometrically nonlinear parallel finite element analysis. Stang, Jorgen. January 1995 (has links)
No description available.

37 
Image processing using cellular neural networks. Saatci, Ertugrul. January 2003 (has links)
No description available.

38 
Fault-tolerant parallel applications using a network of workstations. Smith, James Antony. January 1997 (has links)
It is becoming common to employ a Network Of Workstations, often referred to as a NOW, for general purpose computing, since the allocation of an individual workstation offers good interactive response. However, there may still be a need to perform very large scale computations which exceed the resources of a single workstation. It may be that the amount of processing implies an inconveniently long duration, or that the data manipulated exceeds available storage. One possibility is to employ a more powerful single machine for such computations. However, there is growing interest in seeking a cheaper alternative by harnessing the significant idle time often observed in a NOW, and also possibly employing a number of workstations in parallel on a single problem. Parallelisation permits use of the combined memories of all participating workstations, but also introduces a need for communication, and success in any hardware environment depends on the amount of communication relative to the amount of computation required. In the context of a NOW, much success is reported with applications which have low communication requirements relative to computation requirements. Here it is claimed that there is reason for investigation into the use of a NOW for parallel execution of computations which are demanding in storage, potentially even exceeding the sum of memory in all available workstations. Another consideration is that where a computation is of sufficient scale, some provision for tolerating partial failures may be desirable. However, generic support for storage management and fault-tolerance in computations of this scale for a NOW is not currently available, and the suitability of a NOW for solving such computations has not been investigated to any large extent. The work described here is concerned with these issues.
The approach employed is to make use of an existing distributed system which supports nested atomic actions (atomic transactions) to structure fault-tolerant computations with persistent objects. This system is used to develop a fault-tolerant "bag of tasks" computation model, where the bag and shared objects are located on secondary storage. In order to understand the factors that affect the performance of large parallel computations on a NOW, a number of specific applications are developed. The performance of these applications is analysed using a semi-empirical model. The same measurements underlying these performance predictions may be employed in estimation of the performance of alternative application structures. Using services provided by the distributed system referred to above, each application is implemented. The implementation allows verification of predicted performance and also permits identification of issues regarding construction of components required to support the chosen application structuring technique. The work demonstrates that a NOW certainly offers some potential for gain through parallelisation and that, for large grain computations, the cost of implementing fault tolerance is low.
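The control structure of a fault-tolerant "bag of tasks" can be sketched as below. This is not the thesis's implementation: the thesis keeps the bag and shared objects on secondary storage inside nested atomic actions, whereas this in-memory sketch uses threads to stand in for workstations and a simple retry (returning a failed task to the bag) to stand in for recovery. The task body and failure injection are invented for illustration.

```python
# Hedged sketch of a fault-tolerant bag-of-tasks model (invented example).
import queue
import threading

def run_bag(tasks, work, n_workers=4):
    bag = queue.Queue()
    for t in tasks:
        bag.put(t)
    results = {}
    lock = threading.Lock()

    def worker():
        # Each worker repeatedly pulls independent tasks until the bag is empty.
        while True:
            try:
                task = bag.get_nowait()
            except queue.Empty:
                return
            try:
                r = work(task)
            except Exception:
                bag.put(task)  # partial failure: return the task to the bag
            else:
                with lock:
                    results[task] = r

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for th in threads:
        th.start()
    for th in threads:
        th.join()
    return results

# Example: squaring integers, with one transient failure injected.
failed_once = set()
def flaky_square(n):
    if n == 3 and n not in failed_once:
        failed_once.add(n)
        raise RuntimeError("simulated workstation failure")
    return n * n

print(sorted(run_bag(range(6), flaky_square).items()))
# [(0, 0), (1, 1), (2, 4), (3, 9), (4, 16), (5, 25)]
```

Because the tasks are independent and idempotent, re-executing a failed task is safe; the atomic actions in the thesis's system provide the same guarantee for tasks that update shared persistent objects.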

39 
Parallel processing and VLSI design: a high speed efficient multiplier. Dandu, Venkata Satyanarayana Raju. January 1985 (has links)
Thesis (M.S.)--Ohio University, November, 1985. / Title from PDF t.p.

40 
Pipelined floating point divider with built-in testing circuits. Lyu, Chungnan. January 1988 (has links)
Thesis (M.S.)--Ohio University, June, 1988. / Title from PDF t.p.
