11. Efficient use of Multi-core Technology in Interactive Desktop Applications. Karlsson, Johan, January 2015.
The emergence of multi-core processors has ended the era in which applications enjoyed free, regular performance improvements without source code modifications. This thesis gathers experiences from retrofitting parallelism into a desktop application originally written for sequential execution. The main contributions are the underlying theory and the performance evaluation, experiments and tests comparing the parallel software regions with their sequential counterparts. Feasibility is demonstrated by putting the theory into practice while a complex, commercially active desktop application is rewritten to support parallelism. The thesis finds no simple, guaranteed solution to the problem of making a serial application execute in parallel. However, experiments and tests prove that many of the evaluated methods offer tangible performance advantages over sequential execution.
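The abstract gives no code; the following is a minimal sketch, under assumed names, of the kind of transformation it describes: a sequential region whose iterations are independent is handed to a worker pool without changing its result. `render_thumbnail`, `process_sequential` and `process_parallel` are all hypothetical.

```python
from concurrent.futures import ThreadPoolExecutor

def render_thumbnail(image_id):
    # Hypothetical stand-in for an independent per-item workload.
    return image_id * image_id

def process_sequential(ids):
    # The original sequential region: a plain loop over the items.
    return [render_thumbnail(i) for i in ids]

def process_parallel(ids, workers=4):
    # The loop body has no cross-iteration dependencies, so the region
    # can be dispatched to a worker pool and still produce the same list.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(render_thumbnail, ids))
```

Checking that both versions agree, as in `process_parallel(range(8)) == process_sequential(range(8))`, is exactly the kind of correctness test the thesis pairs with its performance measurements.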
12. Parallel architectures for real-time image processing. Martinez, Kirk, January 1989.
No description available.
13. Defining the selective mechanism of problem solving in a distributed system. Mashhadi, Tahereh Yaghoobi, January 2001.
Distribution and parallelism are historically important approaches to implementing artificial intelligence systems. Research in distributed problem solving considers solving a particular problem by sharing it across a number of cooperating processing agents. Communicating problem solvers cooperate by exchanging partial solutions to converge on global results. The purpose of this research programme is to contribute to the field of Artificial Intelligence by developing a knowledge representation language. The project creates a computational model, grounded in an underlying theory of cognition, that addresses the problem of finding clusters of relevant problem-solving agents whose partial solutions, when put together, provide the overall solution to a given complex problem. To validate this approach, a model of a distributed production system has been created. A model of a supporting parallel architecture for the proposed distributed production problem solving system (DPSS) is described, along with its mechanism for inference processing. The architecture should offer sufficient computing power to cope with the larger search space required by the knowledge representation, together with the required faster methods of processing. The inference engine, which combines task-sharing and result-sharing perspectives, proceeds in three phases: initialising, clustering and integrating. Based on a fitness measure derived to balance communication and computation across the clusters, new clusters are assembled using genetic operators; the algorithm is also guided by the knowledge expert. A cost model for fitness values has been used, parameterised by computation ratio and communication performance.
Following the establishment of this knowledge representation scheme and identification of a supporting parallel architecture, a simulation of the array of PEs has been developed to emulate the behaviour of such a system. The thesis reports on findings from a series of tests used to assess its potential gains. The performance of the DPSS has been evaluated to verify the validity of this approach by measuring the gain in speed of execution in a parallel environment as compared with serial processing. The evaluation of test results shows the validity of the proposed approach in constructing large knowledge based systems.
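The abstract names a fitness measure that balances communication and computation but gives no formula; the sketch below is one hypothetical reading, with `comp_ratio` standing in for the "computation ratio" parameter of the cost model. All names and the exact weighting are assumptions for illustration.

```python
def cluster_fitness(comp_cost, comm_cost, comp_ratio=0.5):
    # Hypothetical cost model: weigh the computation kept inside a
    # cluster against the communication crossing its boundary.
    total = comp_ratio * comp_cost + (1.0 - comp_ratio) * comm_cost
    # Lower combined cost maps to higher fitness, in the range (0, 1].
    return 1.0 / (1.0 + total)
```

A genetic clustering loop of the kind described would select and recombine candidate clusters by this score, favouring groupings whose agents compute much and communicate little.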
14. ParForPy: Loop Parallelism in Python. Gaska, Benjamin James, January 2017.
Scientists are trending towards high-level programming languages such as Python. The convenience of these languages often comes at a performance cost, and as the amount of data being processed grows, this can make them infeasible to use. Parallelism is a means to achieve better performance, but many users are unaware of it or find it difficult to work with. This thesis presents ParForPy, a loop-parallelization construct intended to simplify the use of parallelism in Python. It also discusses how to determine when parallelism matches a given problem well. Results indicate that ParForPy both improves program execution time and is perceived as a simpler construct to understand than other techniques for parallelism in Python.
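ParForPy's actual interface is not reproduced in the abstract; as a hedged illustration, a comparable parfor-style helper can be sketched with the standard library's `multiprocessing` module, which uses processes to sidestep the interpreter lock for CPU-bound loop bodies. `body` and `parfor` are assumed names, not ParForPy's API.

```python
from multiprocessing import Pool

def body(x):
    # A loop body with no cross-iteration dependencies.
    return x * x + 1

def parfor(func, iterable, workers=4):
    # Sketch of a parfor-style construct: apply the loop body to every
    # element in parallel and collect results in their original order.
    with Pool(processes=workers) as pool:
        return pool.map(func, iterable)
```

On a parallelizable loop, `parfor(body, range(n))` returns the same list as the sequential `[body(x) for x in range(n)]`; the decision of when such a rewrite pays off is exactly the matching question the thesis discusses.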
15. Measuring and Improving the Potential Parallelism of Sequential Java Programs. Van Valkenburgh, Kevin, 25 September 2009.
No description available.
16. Acceleration techniques in ray tracing for dynamic scenes. Samothrakis, Stavros Nikolaou, January 1998.
No description available.
17. Portable implementations of nested data-parallel programming languages. Au, Kwok Tat Peter, January 1999.
No description available.
18. The Scientific Community Metaphor. Kornfeld, William A., and Hewitt, Carl, 01 January 1981.
Scientific communities have proven to be extremely successful at solving problems. They are inherently parallel systems, and their macroscopic nature makes them amenable to careful study. In this paper the character of scientific research is examined, drawing on sources in the philosophy and history of science. We maintain that the success of scientific research depends critically on its concurrency and pluralism. A variant of the language Ether is developed that embodies the notions of concurrency necessary to emulate some of the problem-solving behavior of scientific communities. Capabilities of scientific communities are discussed in parallel with simplified models of these capabilities in this language.
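Ether itself is not shown in the abstract; the toy sketch below only gestures at its central idea, concurrent problem solvers publishing assertions into a shared store that others can build on, and is in no way Ether's actual semantics. `AssertionBase`, `publish` and `researcher` are hypothetical names.

```python
import threading

class AssertionBase:
    # Toy shared store: concurrent "researchers" publish partial
    # results (assertions) that other workers could later consult.
    def __init__(self):
        self._facts = []
        self._lock = threading.Lock()

    def publish(self, fact):
        with self._lock:
            self._facts.append(fact)

    def facts(self):
        with self._lock:
            return list(self._facts)

def researcher(base, name, findings):
    # Each researcher works independently and asserts its findings.
    for f in findings:
        base.publish((name, f))

base = AssertionBase()
workers = [threading.Thread(target=researcher, args=(base, i, ["result"]))
           for i in range(4)]
for t in workers:
    t.start()
for t in workers:
    t.join()
```

The point of the metaphor survives even in this toy: no single thread owns the answer, and the community's knowledge is the concurrent accumulation of everyone's assertions.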
19. KFusion: obtaining modularity and performance with regards to general purpose GPU computing and co-processors. Kiemele, Liam, 14 December 2012.
Concurrency has recently come to the forefront of computing as multi-core processors become more and more common. General-purpose graphics processing unit computing brings with it new language support for co-processor environments such as OpenCL and CUDA. Programming language support for these architectures introduces a fundamentally new mechanism for modularity: the kernel. Developers attempting to leverage this mechanism to separate concerns often incur unanticipated performance penalties. My proposed solution aims to preserve the benefits of kernel boundaries for modularity while eliminating their inherent costs at compile time and run time. KFusion is a prototype tool for transforming programs written in OpenCL to make them more efficient. By leveraging loop fusion and deforestation, it can eliminate the costs associated with compositions of kernels that share data. Case studies show that KFusion can address key memory bandwidth and latency bottlenecks and yield substantial performance improvements.
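KFusion operates on OpenCL kernels, which the abstract does not reproduce; this plain-Python sketch only illustrates the fusion/deforestation idea it relies on: composing two "kernels" without materialising the intermediate buffer between them. The two stages shown are invented for illustration.

```python
def unfused(xs):
    # Two separate "kernels": the first materialises an intermediate
    # list that the second immediately consumes, costing extra memory
    # traffic at the kernel boundary.
    tmp = [x * 2 for x in xs]       # kernel A
    return [t + 1 for t in tmp]     # kernel B

def fused(xs):
    # Fused kernel: the same composition, but the intermediate buffer
    # and its memory traffic are eliminated (deforestation).
    return [x * 2 + 1 for x in xs]
```

The two versions are observationally equal, which is what lets a tool like KFusion keep kernel boundaries as a modularity device in the source while erasing their cost in the generated code.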
20. Locality Conscious Scheduling Strategies for High Performance Data Analysis Applications. Vydyanathan, Nagavijayalakshmi, 20 August 2008.
No description available.