111

Migrace zdrojových kódů pomocí dekompilace / Source-Code Migration Using Decompilation

Korec, Tomáš January 2014 (has links)
This thesis deals with source-code migration of high-level programming languages using decompilation. A migration tool developed within the thesis is built on top of the middle-end and back-end parts of the Lissom project decompiler. Several compilers generating LLVM IR code from the input languages are discussed, and those suitable for integration into the migration tool were chosen. The compiled LLVM IR code is the input of the decompiler's optimizing middle-end. The output of the migration tool is code in the C language or in a Python-like language generated by the decompiler's back-end. The input languages are Fortran and its dialects, C/C++/Objective-C/Objective-C++, and D. The thesis describes the problems connected with the migration of these languages, their solutions, and ways to improve the quality and readability of the produced source code.
112

DGRSVX and DMSRIC: Fortran 77 subroutines for solving continuous-time matrix algebraic Riccati equations with condition and accuracy estimates

Petkov, P. Hr., Konstantinov, M. M., Mehrmann, V. 12 September 2005 (has links)
We present new Fortran 77 subroutines which implement the Schur method and the matrix sign function method for the solution of the continuous-time matrix algebraic Riccati equation on the basis of LAPACK subroutines. In order to avoid some of the well-known difficulties with these methods due to a loss of accuracy, we combine the implementations with block scalings as well as condition estimates and forward error estimates. Results of numerical experiments comparing the performance of both methods for more than one hundred well- and ill-conditioned Riccati equations of order up to 150 are given. It is demonstrated that there exist several classes of examples for which the matrix sign function approach performs more reliably and more accurately than the Schur method. In all cases the forward error estimates make it possible to obtain a reliable bound on the accuracy of the computed solution.
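For context, a standard statement of the problem these routines target is the continuous-time algebraic Riccati equation together with its associated Hamiltonian matrix; the sign-function method iterates on that matrix. The formulation below is the textbook one and is not quoted from the report itself.

```latex
% Continuous-time algebraic Riccati equation (CARE)
A^{T}X + XA - XBR^{-1}B^{T}X + Q = 0 .
% Associated Hamiltonian matrix and matrix sign function by Newton iteration
H = \begin{pmatrix} A & -BR^{-1}B^{T} \\ -Q & -A^{T} \end{pmatrix},
\qquad
Z_{0} = H, \quad Z_{k+1} = \tfrac{1}{2}\bigl(Z_{k} + Z_{k}^{-1}\bigr), \quad
\operatorname{sign}(H) = \lim_{k \to \infty} Z_{k}.
```

The stabilizing solution X is recovered from the stable invariant subspace of H, computed either from its Schur form or from the subspace associated with the eigenvalue −1 of sign(H); the block scalings and condition estimates mentioned in the abstract guard against the loss of accuracy this extraction can suffer.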
113

Modeling of Hybrid STATCOM in PSSE

Mikwar, Abulaziz January 2017 (has links)
Flexible AC Transmission Systems (FACTS) have the ability to support voltage and increase transmission capacity. In order to specify a FACTS device that performs according to expectations in a network, a set of studies and network analyses must be performed. Part of these studies is done using power system analysis programs such as PSS®E, a planning tool that simulates large power systems in the phasor domain using RMS values. These planning tools are used for evaluating stability and reinforcement needs in a power system, and the results play a vital role in investment decisions. FACTS devices are modeled in PSS®E using a programming language called FORTRAN, and it is important to model them accurately to avoid misleading results. In this Master's thesis, STATCOM and Hybrid-STATCOM models are proposed and programmed according to ABB's control strategy. The models are tested in PSS®E and verified against detailed models in PSCAD, and are also compared against other generic models that are widely used across the industry.
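As a rough illustration of what such a user-written FORTRAN dynamic model computes at each time step, the sketch below shows a discrete PI voltage regulator producing a limited reactive-current order. It is a generic control-loop sketch with assumed names, gains, and limits; it is not the PSS®E model API nor ABB's control strategy.

```fortran
! Illustrative only: a discrete PI voltage regulator of the kind used inside
! a STATCOM dynamic model (variable names and limits are assumptions).
subroutine statcom_pi_step(vref, vmeas, dt, kp, ki, qmax, state, iq_order)
  implicit none
  real, intent(in)    :: vref, vmeas   ! reference and measured bus voltage (pu)
  real, intent(in)    :: dt, kp, ki    ! time step and PI gains
  real, intent(in)    :: qmax          ! reactive output limit (pu)
  real, intent(inout) :: state         ! integrator state
  real, intent(out)   :: iq_order      ! reactive current order (pu)
  real :: err

  err   = vref - vmeas
  state = state + ki * err * dt                 ! integrate the voltage error
  state = max(-qmax, min(qmax, state))          ! anti-windup clamp
  iq_order = kp * err + state
  iq_order = max(-qmax, min(qmax, iq_order))    ! output limit
end subroutine statcom_pi_step
```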
114

Analysis of GPU-based convolution for acoustic wave propagation modeling with finite differences: Fortran to CUDA-C step-by-step

Sadahiro, Makoto 04 September 2014 (has links)
By projecting observed microseismic data backward in time to when fracturing occurred, it is possible to locate the fracture events in space, assuming a correct velocity model. In order to achieve this task in near real time, a robust computational system to handle backward propagation, or Reverse Time Migration (RTM), is required. We can then test many different velocity models for each run of the RTM. We investigate the use of a Graphics Processing Unit (GPU) based system using Compute Unified Device Architecture for C (CUDA-C) as the programming language. Our preliminary results show a large improvement in run time over programming methods based on conventional Central Processing Unit (CPU) computing with Fortran. Considerable room for improvement still remains.
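To make concrete the kind of Fortran kernel being ported to CUDA-C, the following is a minimal sketch of one time step of a second-order finite-difference update for the 2-D constant-density acoustic wave equation; the grid sizes, array names, and simple five-point stencil are assumptions for illustration rather than the thesis code.

```fortran
! Minimal sketch: one time step of a 2-D acoustic wave equation update,
! second order in time and space (constant density assumed).
! p_prev, p_curr, p_next: pressure fields at t-dt, t, t+dt
! vel: velocity model; dx, dt: grid spacing and time step.
subroutine step_wavefield(nx, nz, dt, dx, vel, p_prev, p_curr, p_next)
  implicit none
  integer, intent(in)  :: nx, nz
  real,    intent(in)  :: dt, dx
  real,    intent(in)  :: vel(nx, nz), p_prev(nx, nz), p_curr(nx, nz)
  real,    intent(out) :: p_next(nx, nz)
  integer :: i, k
  real    :: lap, r

  p_next = 0.0
  do k = 2, nz - 1
     do i = 2, nx - 1
        ! five-point Laplacian of the current pressure field
        lap = (p_curr(i+1,k) + p_curr(i-1,k) + p_curr(i,k+1) + p_curr(i,k-1) &
               - 4.0 * p_curr(i,k)) / (dx * dx)
        r = vel(i,k) * vel(i,k) * dt * dt
        ! leapfrog update in time
        p_next(i,k) = 2.0 * p_curr(i,k) - p_prev(i,k) + r * lap
     end do
  end do
end subroutine step_wavefield
```

In a CUDA-C port, each (i, k) grid point of this doubly nested loop typically becomes one GPU thread, which is where the reported run-time improvement comes from.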
115

Application of Process Systems Engineering Tools and Methods to Fermentation-Based Biorefineries

Darkwah, Kwabena 01 January 2018 (has links)
Biofuels produced from lignocellulosic biomass via the fermentation platform are sustainable energy alternatives to fossil fuels. Process Systems Engineering (PSE) uses computer-based tools and methods to design, simulate and optimize processes. Application of PSE tools to the design of economic biorefinery processes requires the development of simulation approaches that can be integrated with existing, mature PSE tools used to optimize traditional refineries, such as Aspen Plus. Current unit operation models lack the ability to describe unsteady-state fermentation processes, link unsteady-state fermentation with in situ separations, and optimize these processes for competing factors (e.g., yield and productivity). This work applies a novel architecture of commercial PSE tools, Aspen Plus and MATLAB, to develop techniques to simulate time-dependent fermentation with and without in situ separations for process design, analysis and optimization of the operating conditions. Traditional batch fermentation simulations with in situ separations decouple these interdependent steps into a separate "steady state" reactor followed by an equilibrium separation of the final fermentation broth. A typical mechanistic system of ordinary differential equations (ODEs) describing a batch fermentation does not fit the standard built-in power-law reaction kinetics model in Aspen Plus. To circumvent this challenge, a novel platform that links the batch reactor to a FORTRAN user kinetics subroutine (which incorporates the ODEs), combined with component substitution (to simulate non-databank components), is utilized to simulate an unsteady-state batch and in situ gas stripping process. The resulting model predicts the product profile to be sensitive to the gas flow rate, unlike previous "steady state" simulations. This demonstrates the importance of linking a time-dependent fermentation model to the fermentation environment for the design and analysis of fermentation processes. A novel platform linking the genetic-algorithm multi-objective and single-objective optimizations in MATLAB to the unsteady-state batch fermentation simulation in Aspen Plus, through a component object model (COM) communication platform, is utilized to optimize the operating conditions of a typical batch fermentation process. Two major findings are that prior concentration of sugars from a typical lignocellulosic hydrolysate may be needed, and that with a higher initial sugar concentration the fermentation process must be integrated with an in situ separation process to optimize fermentation performance. With this framework, fermentation experimentalists can use the full suite of PSE tools and methods to integrate biorefineries and refineries, and as a decision-support tool to guide the design, analysis and optimization of fermentation-based biorefineries.
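To make the "user kinetics subroutine" idea concrete, here is a minimal, self-contained Fortran sketch of a Monod-type batch fermentation rate model of the kind such a subroutine would evaluate; the rate law, parameter values, and the simple interface are illustrative assumptions and are not Aspen Plus's actual user-model interface.

```fortran
! Illustrative Monod-type batch fermentation kinetics (not the actual Aspen
! Plus user-subroutine interface): given biomass x and substrate s
! concentrations (g/L), return time derivatives of biomass, substrate, product.
subroutine ferment_rates(x, s, dxdt, dsdt, dpdt)
  implicit none
  real, intent(in)  :: x, s
  real, intent(out) :: dxdt, dsdt, dpdt
  real, parameter :: mu_max = 0.4   ! 1/h, maximum specific growth rate (assumed)
  real, parameter :: ks     = 1.5   ! g/L, Monod half-saturation constant (assumed)
  real, parameter :: yxs    = 0.45  ! g biomass per g substrate (assumed)
  real, parameter :: ypx    = 2.0   ! g product per g biomass (assumed)
  real :: mu

  mu   = mu_max * s / (ks + s)      ! Monod specific growth rate
  dxdt = mu * x                     ! biomass growth
  dsdt = -dxdt / yxs                ! substrate consumption
  dpdt = ypx * dxdt                 ! growth-associated product formation
end subroutine ferment_rates
```

An ODE integrator driving this routine over the batch time, with the gas-stripping removal term added to the product balance, is what couples the time-dependent fermentation to the in situ separation described above.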
116

Automatic data distribution for massively parallel processors

García Almiñana, Jordi 16 April 1997 (has links)
Massively Parallel Processor systems provide the required computational power to solve most large-scale High Performance Computing applications. Machines with physically distributed memory allow a cost-effective way to achieve this performance; however, these systems are very difficult to program and tune. In a distributed-memory organization each processor has direct access to its local memory and indirect access to the remote memories of other processors, but accessing a local memory location can be more than an order of magnitude faster than accessing a remote one. In these systems, the choice of a good data distribution strategy can dramatically improve performance, although different parts of the data distribution problem have been proved to be NP-complete. The selection of an optimal data placement depends on the program structure, the program's data sizes, the compiler capabilities, and some characteristics of the target machine. In addition, there is often a trade-off between minimizing interprocessor data movement and load balancing on processors. Automatic data distribution tools can assist the programmer in the selection of a good data layout strategy. These are usually source-to-source tools which annotate the original program with data distribution directives. Crucial aspects such as data movement, parallelism, and load balance have to be taken into consideration in a unified way to efficiently solve the data distribution problem. In this thesis a framework for automatic data distribution is presented, in the context of a parallelizing environment for massively parallel processor (MPP) systems. The applications considered for parallelization are usually regular problems, in which the data structures are dense arrays. The data mapping strategy generated is optimal for a given problem size and target MPP architecture, according to our current cost and compilation model. A single data structure, named the Communication-Parallelism Graph (CPG), which holds symbolic information related to the data movement and parallelism inherent in the whole program, is the core of our approach. This data structure allows the estimation of the data movement and parallelism effects of any data distribution strategy supported by our model. Assuming that some program characteristics have been obtained by profiling and that some specific target machine features have been provided, the symbolic information included in the CPG can be replaced by constant values, expressed in seconds, representing the data movement time overhead and the time saved due to parallelization. The CPG is then used to model a minimal path problem which is solved by a general-purpose linear 0-1 integer programming solver. Linear programming techniques guarantee that the solution provided is optimal, and they are highly efficient at solving this kind of problem. The data mapping capabilities provided by the tool include alignment of the arrays, one- or two-dimensional distribution in a BLOCK or CYCLIC fashion, a set of remapping actions to be performed between phases if profitable, plus the associated parallelization strategy. The effects of control flow statements between phases are taken into account in order to improve the accuracy of the model. The novelty of the approach resides in handling all stages of the data distribution problem, which have traditionally been treated in several independent phases, in a single step, and in providing an optimal solution according to our model.
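As a generic illustration of the kind of model handed to the 0-1 integer programming solver (not the thesis's exact formulation), a minimal-path problem over a graph G = (V, E) with edge costs c_{ij} between a source s and a sink t can be written as:

```latex
\min_{x}\; \sum_{(i,j)\in E} c_{ij}\, x_{ij}
\quad\text{subject to}\quad
\sum_{j:(i,j)\in E} x_{ij} \;-\; \sum_{j:(j,i)\in E} x_{ji} \;=\;
\begin{cases}
 1 & i = s,\\
-1 & i = t,\\
 0 & \text{otherwise},
\end{cases}
\qquad x_{ij}\in\{0,1\}.
```

In the CPG setting, the edge costs would correspond to the data-movement overheads and parallelization savings, expressed in seconds, attached to the graph after profiling.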
117

Modèle de coopération entre calcul formel et calcul numérique pour la simulation et l'optimisation des systèmes / A model of cooperation between symbolic and numerical computation for the simulation and optimization of systems

Alloula, Karim; Le Lann, Jean-Marc. January 2008 (has links)
Reproduction of: Doctoral thesis: Industrial Systems: Toulouse, INPT: 2007. / Title taken from the title screen. Bibliography: 111 references.
118

Sistema COPPE-FORTRAN: um compilador Fortran Residente para o computador IBM-1130 / The COPPE-FORTRAN system: a resident Fortran compiler for the IBM-1130 computer

Salenbauch, Pedro September 1972 (has links)
The overload of university computing centers, caused by the large number of new users that appeared once Fortran began to be taught to students, is presented. The COPPE-FORTRAN system, a resident load-and-go Fortran compiler for the IBM-1130 computer, is introduced as a solution. Various aspects of this system, such as its objectives, components, implementation techniques and results, are discussed.
119

Analysis and Optimum Design of Stiffened Shear Webs in Airframes

Viljoen, Awie 13 January 2005 (has links)
The analysis and optimum design of stiffened shear webs in aircraft structures is addressed. The post-buckling behaviour of the webs is assessed using the iterative algorithm developed by Grisham. This method requires only linear finite element analyses, and convergence is typically achieved in as few as five iterations. The Grisham algorithm is extensively compared with empirical analysis methods previously used for aircraft structures and also with a refined, non-linear quasi-static finite element analysis. The Grisham algorithm provides for both compressive buckling in two directions as well as shear buckling, and overcomes some of the conservatism inherent in conventional methods of analysis. In addition, the method is notably less expensive than a complete non-linear finite element analysis, even though global collapse cannot be predicted. While verification of the analysis methodology is the main focus of the study, an initial investigation into optimization is also made. In optimizing stiffened thin-walled structures, the Grisham algorithm is combined with a genetic algorithm, and allowable stress constraints are accommodated using a simple penalty formulation. / Dissertation (MEng (Mechanical and Aeronautical Engineering))--University of Pretoria, 2006.
120

Materiálově nelineární řešení konstrukcí z plastů / Material nonlinear solution of structures made of plastics

Weis, Lukáš January 2014 (has links)
The presented thesis focuses on the static analysis of plastic structures, taking into account the nonlinear behaviour of the material depending on the stress. The static analysis is performed using the finite element method. The difference between the materially linear and materially nonlinear approaches is illustratively described in the introduction. A shell finite element, enhanced by the possibility of further division into layers and integration points along its thickness, is suitable for the numerical analysis of plastic structures. Separate chapters are devoted to the integration of the resulting values over the height of the cross-section; the integration of the material stiffness matrix correctly reflects the emergence of eccentricity. Part of the attention is devoted to numerical quadrature rules. The next chapter is devoted to materially nonlinear models. Two approaches are described: a simpler one, using an isotropic nonlinear elastic model, and a more general one, using an orthotropic plastic model. The theoretical description is complemented by a graphic interpretation of the criteria according to the individual authors. A significant portion of this work is devoted to the algorithmization of the calculation procedures described in the theoretical chapters. The algorithms are implemented in the Fortran language as a dynamic-link library which is part of the software program RFEM 5, widely used in engineering practice. A part of the work is a study comparing the performance of the different technologies applicable to the algorithmization of the described issues. The agreement between the theoretical analysis of the material models and the subsequent implementation within RFEM 5 is demonstrated on the example of a bent cantilever. Finally, a thermoplastic above-ground tank structure is subjected to detailed materially linear and materially nonlinear analyses, and the two approaches are compared in terms of the resulting stress and deformation.
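As a small illustration of the "integration of the resulting values over the height of the cross-section" mentioned above, here is a minimal Fortran sketch that integrates a layered stress distribution into a membrane force and a bending moment using the midpoint rule over layers; the layer discretization, variable names, and the per-layer constitutive call are illustrative assumptions, not the RFEM 5 implementation.

```fortran
! Minimal sketch: integrate normal stress through the shell thickness to get
! the membrane force n and bending moment m per unit width (midpoint rule
! over nlay layers). sigma_of_strain stands in for an arbitrary (possibly
! nonlinear) uniaxial stress-strain law.
subroutine integrate_thickness(nlay, h, eps0, kappa, n, m)
  implicit none
  integer, intent(in)  :: nlay          ! number of layers through the thickness
  real,    intent(in)  :: h             ! total thickness
  real,    intent(in)  :: eps0, kappa   ! mid-surface strain and curvature
  real,    intent(out) :: n, m          ! membrane force and bending moment
  integer :: i
  real :: dz, z, eps, sig

  dz = h / real(nlay)
  n  = 0.0
  m  = 0.0
  do i = 1, nlay
     z   = -0.5 * h + (real(i) - 0.5) * dz   ! layer midpoint coordinate
     eps = eps0 + kappa * z                  ! plane-sections strain profile
     sig = sigma_of_strain(eps)              ! layer stress from material law
     n   = n + sig * dz
     m   = m + sig * z * dz
  end do
contains
  real function sigma_of_strain(eps)
    real, intent(in) :: eps
    real, parameter :: e_mod = 2000.0        ! MPa, assumed modulus
    sigma_of_strain = e_mod * eps            ! placeholder linear law
  end function sigma_of_strain
end subroutine integrate_thickness
```

Replacing the midpoint rule with a higher-order Gauss quadrature per layer, as discussed in the chapter on numerical quadrature rules, refines the same resultants without changing the structure of the loop.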
