211

Heapy: A Memory Profiler and Debugger for Python

Nilsson, Sverker January 2006 (has links)
Excessive memory use may cause severe performance problems and system crashes. Without appropriate tools, it may be difficult or impossible to determine why a program is using too much memory. This applies even though Python provides automatic memory management: garbage collection can help avoid many memory allocation bugs, but only to a certain extent due to the lack of information during program execution. There is still a need for tools helping the programmer to understand the memory behaviour of programs, especially in complicated situations. The primary motivation for Heapy is that there has been a lack of such tools for Python.

The main questions addressed by Heapy are how much memory is used by objects, which objects are of most interest for optimization purposes, and why objects are kept in memory. Memory leaks are often of special interest and may be found by comparing snapshots of the heap population taken at different times. Memory profiles, using different kinds of classifiers that may include retainer information, can provide quick overviews revealing optimization possibilities not thought of beforehand. Reference patterns and shortest reference paths provide different perspectives on object access patterns to help explain why objects are kept in memory.
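A minimal sketch of the snapshot-comparison workflow described above, based on the guppy/heapy interface as commonly documented (hpy, setrelheap, heap, byrcs); exact names and behaviour may differ between versions, and the leaking workload is hypothetical.

```python
# Sketch only: guppy/heapy snapshot comparison; the cache-filling loop is a
# hypothetical workload suspected of leaking, not anything from the thesis.
from guppy import hpy

hp = hpy()
hp.setrelheap()          # baseline: only objects allocated after this call are counted

cache = {}
for i in range(10000):   # hypothetical workload that keeps objects alive
    cache[i] = "x" * 100

snapshot = hp.heap()     # heap population relative to the baseline
print(snapshot)          # memory profile classified by type
print(snapshot.byrcs)    # reclassify by referrers to see why objects are kept in memory
```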
212

Walk-A-Way : A Maya Plug-in for Walk Cycle Automation

Christiansson, Kajsa January 2009 (has links)
In 3D and 2D animation, walk cycles of characters appear very frequently and are an important way of expressing various aspects of the story being told. However, walk cycles are tedious and time-consuming to animate. In this work an Autodesk Maya plug-in has been developed that aims at automating this process. The plug-in can be highly beneficial for animators who need to create convincing walk cycles in a fast and simple way. It calculates the right values for each phase of the walk cycle, and its GUI makes it easy to provide the required input parameters. In addition, the plug-in allows a character to be animated walking along a chosen path.
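A hedged sketch of how per-phase keyframes might be set through Maya's Python interface; the joint name, frame spacing and pose values are illustrative assumptions, not the plug-in's actual logic.

```python
# Sketch only: keying one phase of a walk cycle with maya.cmds.setKeyframe.
# 'L_foot_ctrl' and the lift heights below are hypothetical example values.
import maya.cmds as cmds

def key_walk_phase(joint, start_frame, step, lift_heights):
    """Key a vertical foot-lift curve for one walk-cycle phase."""
    for i, height in enumerate(lift_heights):
        cmds.setKeyframe(joint, attribute='translateY',
                         time=start_frame + i * step, value=height)

# e.g. contact -> passing position -> contact for the left foot control
key_walk_phase('L_foot_ctrl', start_frame=1, step=6,
               lift_heights=[0.0, 2.5, 0.0])
```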
213

Introducing Lantmäteriet’s gravity data in ArcGIS with implementation of customized GIS functions

Ryttberg, Mattias January 2013 (has links)
Gravity is measured and used by Lantmäteriet to calculate a model of the geoid and thereby obtain accurate reference heights for positioning. Lantmäteriet continuously measures new gravity and height data across Sweden to complement, replace and add data points, mainly through field measurements at benchmark points. One of the major reasons for continued measurements at, for example, benchmark points is that measurement techniques keep improving, which makes the observations more accurate. More accurate gravity values lead to a more accurate calculation of the geoid, and a more accurate geoid in turn allows more precise height values and thus more precise positioning across Sweden. Lantmäteriet is in the process of updating its entire database of gravity data and is also measuring at locations where data are sparse or missing. As a stage in renewing the database and other systems, the Geodesy department wishes to get an introduction to the ArcGIS environment. By customizing several ArcGIS functions, Lantmäteriet's work with this extensive data set will become easier and perhaps faster. Customized tools will, for example, make adding and removing data points easier, and put cross-validation and several other functions only a click of a button away.
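A hedged sketch of what a customized "add gravity point" tool could look like in arcpy; the feature class path and field names are assumptions and do not reflect Lantmäteriet's actual schema.

```python
# Sketch only: inserting one gravity measurement into a point feature class.
# The geodatabase path, field names and coordinates are hypothetical.
import arcpy

def add_gravity_point(feature_class, x, y, gravity_mgal, station_id):
    """Insert one gravity measurement as a new point feature."""
    fields = ['SHAPE@XY', 'STATION_ID', 'GRAVITY_MGAL']
    with arcpy.da.InsertCursor(feature_class, fields) as cursor:
        cursor.insertRow([(x, y), station_id, gravity_mgal])

add_gravity_point(r'C:\data\gravity.gdb\benchmarks',
                  x=674032.1, y=6580123.4,
                  gravity_mgal=981234.567, station_id='BM-1024')
```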
214

PIC/FLIP Fluid Simulation Using Block-Optimized Grid Data Structure

Salomonsson, Fredrik January 2011 (has links)
This thesis examines and presents how to implement a Particle-In-Cell / Fluid-Implicit-Particle (PIC/FLIP) fluid solver that takes advantage of the inherent parallelism of Digital Domain's sparse block-optimized data structure, DB-Grid. The method offers a hybrid approach between particle-based and grid-based simulation. The thesis also discusses different approaches for storing and accessing the data associated with each particle. To dynamically create and remove attributes from the particles, Disney's open-source API Partio is used, which also handles saving the particles to disk. Finally, the thesis shows how to expose C++ classes to Python by wrapping everything into a Python module using the Boost.Python API, and discusses the benefits of having a scripting language.
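A hedged one-dimensional sketch of the PIC/FLIP velocity update that such a solver builds on; it illustrates only the particle/grid blend and is unrelated to DB-Grid or the actual thesis implementation.

```python
# Sketch only: 1D PIC/FLIP blend. PIC interpolates the new grid velocity to the
# particles; FLIP adds the interpolated grid-velocity change to the old particle
# velocity; the two are mixed by a blend ratio.
import numpy as np

def update_particle_velocities(xp, vp, grid_x, v_old, v_new, flip_ratio=0.95):
    """Blend PIC (overwrite from grid) with FLIP (add grid delta)."""
    v_pic = np.interp(xp, grid_x, v_new)                 # pure PIC update
    v_flip = vp + np.interp(xp, grid_x, v_new - v_old)   # pure FLIP update
    return flip_ratio * v_flip + (1.0 - flip_ratio) * v_pic

grid_x = np.linspace(0.0, 1.0, 11)
v_old = np.zeros(11)
v_new = np.full(11, -9.82 * 0.01)        # grid velocities after one gravity step
xp = np.array([0.12, 0.5, 0.87])         # particle positions
vp = np.zeros(3)                         # particle velocities
print(update_particle_velocities(xp, vp, grid_x, v_old, v_new))
```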
215

Wind Power and Its Impact on the Moldovan Electrical System

Eriksson, Joel, Gozdz Englund, Simon January 2012 (has links)
This master's thesis project was carried out in cooperation with Borlänge Energi, with the aim of reducing Moldova's high dependency on electric energy from Ukraine, Transnistria and Russia. The project examines what reduction would be possible through wind power installations on the existing Moldovan electrical grid, without exceeding the capacity of the transmission lines or the voltage levels allowed by regulation. The southern regions of Moldova proved to have the best wind conditions, and the locations Besarabeasca, Zarnesti, Leovo, Ciadyr and Cimislia in the south were chosen for wind power installations. For the analysis, a model of the Moldovan electrical system is constructed in the power flow software PSS/E, with each of the five chosen locations modelled as a generator representing the wind power installation. Different scenarios are created to examine possible wind power installations, with the southern 110 kV system as the focus area. All scenarios are analysed with a contingency analysis in which transmission lines in the focus region are tripped, and both the contingency analysis and the scenarios are automated using the programming language Python. An economic analysis shows payback periods for wind power investments in Moldova, as well as the sensitivity to the electricity price and discount rates. The project concludes that wind power installations are possible with the Moldovan electric grid as it looks today, and that they would reduce the high dependency on imported electrical energy.
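A hedged sketch of how such a scenario loop might be automated with the PSS/E Python module psspy; psseinit, case, fnsl and solved are standard calls but their exact signatures vary by PSS/E version, the saved-case file names are hypothetical, and the line-tripping step used in the actual study is omitted here.

```python
# Sketch only: load each prepared scenario case, run a power flow, record
# whether it converged. File names and the bus limit are illustrative.
import psspy

def check_scenarios(scenario_files):
    """Run a full Newton-Raphson power flow for each saved scenario case."""
    psspy.psseinit(10000)                        # initialize PSS/E with a bus limit
    results = {}
    for name, sav in scenario_files.items():
        psspy.case(sav)                          # load the saved case for this scenario
        psspy.fnsl()                             # full Newton-Raphson power flow
        results[name] = (psspy.solved() == 0)    # 0 means the solution converged
    return results

print(check_scenarios({'base': 'moldova_base.sav',
                       'wind_south_110kV': 'moldova_wind_south.sav'}))
```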
216

Impact of error : Implementation and evaluation of a spatial model for analysing landscape configuration

Wennbom, Marika January 2012 (has links)
Quality and error assessment is an essential part of spatial analysis, and with the increasing number of applications made possible by today's extensive access to spatial data, such as satellite imagery, and computing power, it is especially important to address. This study evaluates the impact of input errors associated with satellite sensor noise for a spatial method aimed at characterising aspects of landscapes associated with the historical village structure, the Hybrid Characterisation Model (HCM), which was developed as a tool to monitor sub-goals of the Swedish environmental goal "A varied agricultural landscape". The method, together with the error-simulation procedure used to generate random errors in the input data, is implemented and automated as a Python script, enabling easy iteration of the procedure. The HCM is evaluated qualitatively (by visual analysis) and quantitatively by comparing kappa index values between the outputs affected by error. Comparing the results of the qualitative and quantitative evaluations shows that the kappa index is an applicable quality measure for the HCM. The qualitative analysis compares the impact of error at two different scales, the village scale and the landscape scale, and shows that the HCM performs well on the landscape scale for up to 30% error and on the village scale for up to 10%, and that the impact of error differs depending on the shape of the analysed feature. The Python script produced in this study could be further developed and modified to evaluate the HCM for other aspects of input error, such as classification errors, although for such studies to be motivated, the potential errors associated with the model and its parameters must first be further evaluated.
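A hedged sketch of the error-simulation idea: perturb a fraction of cells in a classified raster at random and measure agreement with Cohen's kappa. The two-class raster and the perturbation style are illustrative assumptions, not the HCM itself, which compares model outputs produced from clean and perturbed inputs.

```python
# Sketch only: simulate input error by reassigning a random fraction of cells,
# then compute Cohen's kappa between the clean and the perturbed rasters.
import numpy as np

def perturb(raster, error_rate, n_classes, rng):
    """Reassign a random fraction of cells to a random class (simulated error)."""
    noisy = raster.copy()
    mask = rng.random(raster.shape) < error_rate
    noisy[mask] = rng.integers(0, n_classes, mask.sum())
    return noisy

def kappa(a, b, n_classes):
    """Cohen's kappa between two categorical rasters of equal shape."""
    a, b = a.ravel(), b.ravel()
    p_o = np.mean(a == b)                              # observed agreement
    p_e = sum(np.mean(a == k) * np.mean(b == k)        # chance agreement
              for k in range(n_classes))
    return (p_o - p_e) / (1.0 - p_e)

rng = np.random.default_rng(0)
clean = rng.integers(0, 2, (200, 200))                 # stand-in for a classified output
noisy = perturb(clean, error_rate=0.10, n_classes=2, rng=rng)
print(kappa(clean, noisy, n_classes=2))
```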
217

Nutzerorientiertes Management von materiellen und immateriellen Informationsobjekten (User-Oriented Management of Material and Immaterial Information Objects)

Hübsch, Chris 12 December 2001 (has links) (PDF)
Creation of a stable, extensible and scalable infrastructure for providing services in the environment of libraries and similar knowledge-providing institutions, using XML-RPC and Python.
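A minimal sketch of an XML-RPC service of the kind described, using Python's standard library (shown with the Python 3 module layout rather than the Python 2 one the 2001 work would have used); the method name and port are illustrative only.

```python
# Sketch only: expose a hypothetical metadata lookup over XML-RPC.
from xmlrpc.server import SimpleXMLRPCServer

def lookup(object_id):
    """Hypothetical metadata lookup for an information object."""
    return {"id": object_id, "title": "example record"}

server = SimpleXMLRPCServer(("localhost", 8000), allow_none=True)
server.register_function(lookup, "lookup")
server.serve_forever()
```

A client could then call it with `xmlrpc.client.ServerProxy("http://localhost:8000").lookup("abc")`.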
218

wxWindows / wxPython

Wegener, Jens 17 May 2002 (has links)
Joint workshop of the University Computing Centre and the Chair of Computer Networks and Distributed Systems of the Faculty of Computer Science at TU Chemnitz. The talk presents wxWindows and wxPython as a solution for developing platform-independent software with a graphical user interface.
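A minimal wxPython sketch of the kind of cross-platform GUI the talk presents, written against the modern wxPython ("Phoenix") API rather than the 2002-era wxWindows bindings.

```python
# Sketch only: a window with a single button that closes the application.
import wx

class MainFrame(wx.Frame):
    def __init__(self):
        super().__init__(None, title="wxPython example")
        panel = wx.Panel(self)
        button = wx.Button(panel, label="Quit", pos=(10, 10))
        button.Bind(wx.EVT_BUTTON, lambda event: self.Close())

app = wx.App()
MainFrame().Show()
app.MainLoop()
```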
219

A Warranted Domain Theory and Developmental Framework for a Web-based Treatment in Support of Physician Wellness

Donnelly, David Scott 01 January 2013 (has links)
This study employed a design-based research methodology to develop a theoretically sound approach for designing instructional treatments. The instruction of interest addressed the broad issue of physician wellness among medical school faculty, with particular emphasis on physician self-diagnosis and self-care. The theoretically sound approach comprised a domain theory and design framework. The domain theory was posited subsequent to an examination of the literature, and subjected to expert examination through three cycles of instructional treatment development. The design framework for crafting the treatment was created from components of existing frameworks, and evolved with the cycles of development. The instructional treatment was designed to be delivered to a web browser from a server using a Python microframework to preserve the anonymity of the end user. Experts in three relevant knowledge domains verified that the instructional treatment embodied the domain theory, and was suitable for use as a practical instructional treatment. Subsequently, a limited-time pilot deployment was initiated among practicing faculty physicians (N=273) to solicit user feedback. Responses were obtained through a survey instrument created for the purpose and hosted on a remote website. Although the response rate was low (12%), the responses were encouraging and useful for guiding future research and treatment development.
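The abstract does not name the microframework; Flask is used below purely as an illustrative stand-in for minimal, anonymity-preserving delivery of one instructional step (no accounts, no cookies, nothing user-identifying stored).

```python
# Sketch only: serve one step of a hypothetical instructional module without
# creating sessions or storing any information about the requesting user.
from flask import Flask

app = Flask(__name__)

@app.route("/module/<int:step>")
def module_step(step):
    # Hypothetical content lookup; no cookies set, no server-side user records.
    return f"<h1>Wellness module, step {step}</h1>"

if __name__ == "__main__":
    app.run()
```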
220

Seismic inversion through operator overloading

Herrmann, Felix J. January 2007 (has links)
Inverse problems in (exploration) seismology are known for their large to very large scale. For instance, certain sparsity-promoting inversion techniques involve vectors that easily exceed 2^30 unknowns, while seismic imaging involves the construction and application of matrix-free discretized operators where single matrix-vector evaluations may require hours, days or even weeks on large compute clusters. For these reasons, software development in this field has remained the domain of highly technical codes programmed in low-level languages with little eye for easy development, code reuse and integration with (nonlinear) programs that solve inverse problems. Following ideas from Symes' Rice Vector Library, Bartlett's C++ object-oriented interface Thyra, and Reduction/Transformation operators (both part of the Trilinos software package), we developed a software-development environment based on operator overloading. This environment provides a pathway from in-core prototype development to out-of-core and MPI 'production' code with a high level of code reuse. This code reuse is accomplished by integrating the out-of-core and MPI functionality into the dynamic object-oriented programming language Python. This integration is implemented through operator overloading and allows for the development of a coordinate-free solver framework that (i) promotes code reuse; (ii) analyses the statements in an abstract syntax tree; and (iii) generates executable statements. In the current implementation, we developed an interface to generate executable statements for the out-of-core, Unix-pipe-based (seismic) processing package RSF-Madagascar (rsf.sf.net). The modular design allows for interfaces to other seismic processing packages and to in-core Python packages such as numpy. So far, the implementation overloads linear operators and elementwise reduction/transformation operators. We are planning extensions towards nonlinear operators and integration with existing (parallel) solver frameworks such as Trilinos.
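A hedged sketch, in plain Python/NumPy, of the operator-overloading idea described above: linear operators compose lazily through overloaded * and +, so solver code can be written coordinate-free. This is not the RSF-Madagascar-backed framework itself; the toy operators are illustrative.

```python
# Sketch only: operators are built from expressions and evaluated lazily,
# i.e. only when finally applied to a vector.
import numpy as np

class LinearOperator:
    def __init__(self, forward):
        self.forward = forward                  # function: vector -> vector
    def __mul__(self, other):
        if isinstance(other, LinearOperator):   # A * B builds a composed operator
            return LinearOperator(lambda x: self.forward(other.forward(x)))
        return self.forward(other)              # A * x applies the operator
    def __add__(self, other):                   # A + B builds an operator sum
        return LinearOperator(lambda x: self.forward(x) + other.forward(x))

# Two toy operators standing in for matrix-free seismic modelling operators.
Scale = LinearOperator(lambda x: 2.0 * x)
Shift = LinearOperator(lambda x: np.roll(x, 1))

x = np.arange(5.0)
print((Scale * Shift + Scale) * x)   # evaluated only when applied to a vector
```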
