Suitability of Java for Solving Large Sparse Positive Definite Systems of Equations Using Direct Methods

Armstrong, Shea January 2004 (has links)
The purpose of this thesis is to determine whether Java, a programming language that evolved out of a research project at Sun Microsystems in 1990, is suitable for solving large sparse linear systems using direct methods; that is, whether a Java implementation can achieve performance comparable to Fortran, the language traditionally used for sparse matrix computation. Performance is evaluated primarily by execution speed and memory requirements, with ease of development as a secondary criterion. Many attractive features of the Java programming language make it desirable for sparse matrix computation and provide the motivation for the thesis. The 'write once, run anywhere' proposition, coupled with nearly ubiquitous Java support, removes the need to rewrite programs when the underlying hardware changes. Features such as garbage collection (automatic recycling of memory) and array-index bounds checking make Java programs more robust than those written in Fortran. Java has garnered a poor reputation as a high-performance computing platform, largely because of its poor performance relative to Fortran in its early years. There is now a consensus among researchers that the problem lies not in the Java language itself but in its implementations; improving compiler technology for numerical codes is therefore critical to achieving high performance in numerical Java applications. Preliminary work involved converting SPARSPAK, a collection of Fortran 90 subroutines developed by Dr. Alan George for solving large sparse systems of linear equations and least squares problems, into Java (J-SPARSPAK). It is well known that the majority of the solution process is spent in the numeric factorization phase. Initial benchmarks showed Java performing, on average, 3.6 times slower than Fortran for this critical phase. We detail how we improved Java performance to within a factor of two of Fortran.
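The numeric factorization phase referred to above is a sparse Cholesky decomposition A = LL^T. As a point of reference only, the minimal Java kernel below is a dense Cholesky factorization; it is not code from the thesis or from SPARSPAK (which exploits sparsity through symbolic analysis and compressed storage), but it illustrates the kind of tight triple-loop floating-point work whose Java-versus-Fortran performance the thesis measures.

// Minimal illustration, not thesis or SPARSPAK code: dense Cholesky A = L*L^T.
public final class DenseCholesky {

    // Factors a symmetric positive definite n-by-n matrix stored row-major in 'a',
    // overwriting its lower triangle with L.
    static void factor(double[] a, int n) {
        for (int j = 0; j < n; j++) {
            // Diagonal: L[j][j] = sqrt(A[j][j] - sum_k L[j][k]^2)
            double d = a[j * n + j];
            for (int k = 0; k < j; k++) {
                d -= a[j * n + k] * a[j * n + k];
            }
            if (d <= 0.0) {
                throw new IllegalArgumentException("matrix is not positive definite");
            }
            double ljj = Math.sqrt(d);
            a[j * n + j] = ljj;
            // Below the diagonal: L[i][j] = (A[i][j] - sum_k L[i][k]*L[j][k]) / L[j][j]
            for (int i = j + 1; i < n; i++) {
                double s = a[i * n + j];
                for (int k = 0; k < j; k++) {
                    s -= a[i * n + k] * a[j * n + k];
                }
                a[i * n + j] = s / ljj;
            }
        }
    }

    public static void main(String[] args) {
        // Small SPD example; its exact factor L has rows [2], [6, 1], [-8, 5, 3].
        double[] a = {
              4.0,  12.0, -16.0,
             12.0,  37.0, -43.0,
            -16.0, -43.0,  98.0
        };
        factor(a, 3);
        for (int i = 0; i < 3; i++) {
            for (int j = 0; j <= i; j++) {
                System.out.printf("L[%d][%d] = %6.2f%n", i, j, a[i * 3 + j]);
            }
        }
    }
}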

One Compiler to Rule Them All : Extending the Storm Programming Language Platform with a Java Frontend

Ahrenstedt, Simon, Huber, Daniel January 2023 (has links)
The thesis aims to develop a method for extending the language platform Storm with a Java frontend. The project was conducted using an Action Research methodology and highlights both triumphs and challenges. Despite the significant overhead related to note generation and problem statement formulation, this methodology proved beneficial in identifying problems and providing the framework to solve them.

The first research question (RQ.1) evaluates to what extent the language platform Storm is suitable for implementing the object-oriented language Java. Using Storm, only a BNF grammar and a specification for three-address code instructions are needed. Despite difficulties encountered during the implementation, the platform offers tools that allow comprehensive customization of the new language's intended behavior and functionality.

The second research question (RQ.2) explores a suitable method for implementing a new language in Storm. It is suggested to first implement a foundational structure comprising statements, blocks, scope handling, and variable declarations. From this foundation, new functionality can be gradually introduced and tested by connecting it to the appropriate location in the structure. Once all functionality has been added and tested, a refactoring step can take place to modify the BNF if needed.
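For readers unfamiliar with three-address code, the sketch below is a hypothetical Java illustration, not Storm's actual intermediate representation or API, of how a Java-like expression might be lowered into instructions with one destination and at most two source operands.

// Hypothetical sketch of a three-address code instruction stream.
// Storm's real intermediate representation differs; requires Java 16+ for records.
import java.util.ArrayList;
import java.util.List;

public final class TacSketch {

    enum Op { CONST, ADD, MUL, COPY }

    // One instruction: dest := lhs op rhs (rhs unused for CONST/COPY).
    record Instr(Op op, String dest, String lhs, String rhs) {
        @Override public String toString() {
            return switch (op) {
                case CONST, COPY -> dest + " := " + lhs;
                case ADD         -> dest + " := " + lhs + " + " + rhs;
                case MUL         -> dest + " := " + lhs + " * " + rhs;
            };
        }
    }

    public static void main(String[] args) {
        // Lowering of the Java expression "x = a + b * 3" into three-address form.
        List<Instr> code = new ArrayList<>();
        code.add(new Instr(Op.CONST, "t1", "3", null));
        code.add(new Instr(Op.MUL,   "t2", "b", "t1"));
        code.add(new Instr(Op.ADD,   "t3", "a", "t2"));
        code.add(new Instr(Op.COPY,  "x",  "t3", null));
        code.forEach(System.out::println);
    }
}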
