431

HPC Virtualization with Xen on Itanium

Bjerke, Håvard K. F. January 2005 (has links)
The Xen Virtual Machine Monitor has proven to achieve higher efficiency in virtualizing the x86 architecture than competing x86 virtualization technologies. This makes virtualization on the x86 platform more feasible in high-performance and mainframe computing, where virtualization can offer attractive solutions for managing resources between users. Virtualization is also attractive on the Itanium architecture. Future x86 and Itanium computer architectures include extensions which make virtualization more efficient. Moving to virtualizing resources through Xen may ready computer centers for the possibilities offered by these extensions. The Itanium architecture is "uncooperative" in terms of virtualization: privilege-sensitive instructions make full virtualization inefficient and impose the need for para-virtualization. Para-virtualizing Linux involves changing certain native operations in the guest kernel in order to adapt it to the Xen virtual architecture. Minimal para-virtualization impact on Linux is achieved by having the hypervisor trap and emulate illegal instructions rather than replacing them. Transparent para-virtualization allows the same Linux kernel binary to run both on top of Xen and on physical hardware. Itanium region registers allow a more graceful distribution of memory between guest operating systems without disturbing the Translation Lookaside Buffer. The Extensible Firmware Interface provides a standardized interface to hardware functions, and is easier to virtualize than legacy hardware interfaces. The overhead of running para-virtualized Linux on Itanium is reasonably small, measured at around 4.9%. The overhead of running transparently para-virtualized Linux on physical hardware is likewise small compared to non-virtualized Linux.
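As a conceptual illustration of the trap-and-emulate approach the abstract describes (a hypothetical Python sketch, not Xen or Itanium code; the instruction names, guest state fields, and handlers are invented for illustration):

    # Hypothetical sketch of trap-and-emulate dispatch in a hypervisor.
    # When a guest executes a privilege-sensitive instruction, the CPU
    # traps into the hypervisor, which emulates the instruction's effect
    # against the guest's virtual (shadow) state instead of real hardware.

    class GuestState:
        def __init__(self):
            self.virtual_psr = 0          # shadow of the processor status register
            self.region_registers = [0] * 8

    def emulate(guest, instruction, operand):
        # Dispatch table of emulation handlers for trapped instructions.
        handlers = {
            "mov_to_psr": lambda g, v: setattr(g, "virtual_psr", v),
            "mov_to_rr":  lambda g, v: g.region_registers.__setitem__(v[0], v[1]),
        }
        if instruction not in handlers:
            raise NotImplementedError(f"no emulation for {instruction}")
        handlers[instruction](guest, operand)

    guest = GuestState()
    emulate(guest, "mov_to_psr", 0x1)     # guest believes it wrote the real PSR
    print(hex(guest.virtual_psr))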
432

Digital Forensics: Methods and tools for retrieval and analysis of security credentials and hidden data.

Furuseth, Andreas Grytting January 2005 (has links)
This master's thesis proposes digital forensic methods for the retrieval and analysis of steganography during a digital investigation. The proposed methods are examined using scenarios. From the examination of steganography in these cases, it is concluded that the recommended methods can be automated and increase the chances of an investigator detecting steganography.
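One classic, automatable steganalysis technique of the kind the thesis discusses is the chi-square attack on least-significant-bit embedding. A minimal sketch, assuming an 8-bit grayscale image as a NumPy array (the function name and bin-mass cutoff are illustrative, not from the thesis):

    import numpy as np
    from scipy.stats import chi2

    def chi_square_lsb_test(pixels):
        """Chi-square attack on LSB embedding (Westfeld-Pfitzmann style).

        LSB embedding tends to equalize the frequencies within each "pair
        of values" (2k, 2k+1); we test observed even-value counts against
        the pair means. A result near 1.0 suggests LSB embedding.
        """
        hist = np.bincount(pixels.ravel(), minlength=256).astype(float)
        even, odd = hist[0::2], hist[1::2]
        expected = (even + odd) / 2.0
        mask = expected > 5           # keep bins with enough mass for the test
        stat = np.sum((even[mask] - expected[mask]) ** 2 / expected[mask])
        dof = mask.sum() - 1
        return 1.0 - chi2.cdf(stat, dof)

    # Usage (assuming Pillow): p = chi_square_lsb_test(
    #     np.array(Image.open("suspect.png").convert("L")))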
433

Predicting MicroRNA targets

Sætrom, Ola January 2005 (has links)
MicroRNAs are a large family of short non-coding RNAs that regulate protein production by binding to mRNAs. A single miRNA can regulate an mRNA by itself, or several miRNAs can cooperate in regulating the mRNA; which occurs depends on the degree of complementarity between the miRNA and the target mRNA. Here, we present the program TargetBoost, which uses a classifier generated by a combination of hardware-accelerated genetic programming and boosting to screen several large datasets against several miRNAs, and computes the likelihood that genes in the dataset are regulated by the set of miRNAs used in the screening. We also present results from a comparison of several different scoring functions for measuring cooperative effects. We found that the classifier used in TargetBoost is best at finding target sites that regulate mRNAs by themselves. A demo of TargetBoost can be found at http://www.interagon.com/demo.
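TargetBoost's classifier is generated by genetic programming plus boosting; as a generic illustration of how any boosted ensemble scores a candidate target site (a sketch with placeholder weak classifiers and weights, not the thesis's actual model):

    # Generic boosted-ensemble scoring: the final score is a weighted vote
    # of weak classifiers h_i, each weighted by its boosting weight alpha_i.
    # The weak classifiers and site features below are made up for illustration.

    def boosted_score(site, weak_classifiers, alphas):
        """Return sum_i alpha_i * h_i(site); each h_i maps a site to +1/-1."""
        return sum(a * h(site) for h, a in zip(weak_classifiers, alphas))

    # Toy weak classifiers over a miRNA:mRNA duplex description.
    h1 = lambda s: 1 if s["seed_match"] else -1          # perfect seed pairing
    h2 = lambda s: 1 if s["pairing_score"] > 0.6 else -1

    site = {"seed_match": True, "pairing_score": 0.72}
    print(boosted_score(site, [h1, h2], [0.8, 0.5]))     # positive => predicted target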
434

Tracking the Lineage of Arbitrary Processing Sequences

Valeur, Håvar January 2005 (has links)
Data is worthless without knowing what the data represents, and you need metadata to efficiently manage large data sets. As computing power becomes cheaper and more data is derived, metadata becomes more important than ever. Today, researchers set up more experimental scientific workflows than before, and as a result many of the steps leading up to an implementation are skipped. Those steps usually included documenting the work, which is not a central part of the more experimental approach. Since documenting is no longer a natural part of the scientific workflow, and the workflow might change considerably through its lifetime, many data products lack documentation. Since the way scientists work has changed, we feel the way they document their work needs to change. Currently there is no metadata system that retrieves metadata directly from the scientific process without the researcher having to change his code or otherwise manually set up the system to handle the workflow. This thesis suggests ways to automate metadata retrieval, and shows how two of these techniques can be implemented. Automatic lineage and metadata retrieval will help researchers document the process a data product has gone through. My implementation shows how to retrieve lineage and metadata by instrumenting Interactive Data Language scripts, and how to retrieve lineage from shell scripts by looking at the system calls made by the executable. The implementation discussed in this thesis is intended to be a client for the Earth System Science Server, a metadata system for earth science data.
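A minimal sketch of the system-call approach to lineage capture, assuming a Linux host with strace available (the script and its regex are illustrative, not the thesis's implementation, which predates the openat-based tracing shown here):

    import re
    import subprocess
    import tempfile

    def trace_file_lineage(command):
        """Run a command under strace and infer which files it read or wrote.

        Files opened read-only are treated as inputs (lineage ancestors);
        files opened for writing are treated as derived outputs.
        """
        with tempfile.NamedTemporaryFile(mode="r", suffix=".trace") as log:
            subprocess.run(
                ["strace", "-f", "-e", "trace=openat", "-o", log.name] + command,
                check=True,
            )
            inputs, outputs = set(), set()
            pattern = re.compile(r'openat\(.*?"([^"]+)", ([A-Z_|]+)')
            for line in open(log.name):
                m = pattern.search(line)
                if not m or "= -1" in line:      # skip failed opens
                    continue
                path, flags = m.groups()
                (outputs if ("O_WRONLY" in flags or "O_RDWR" in flags)
                         else inputs).add(path)
        return inputs, outputs

    # Usage: ins, outs = trace_file_lineage(["gzip", "-k", "data.txt"])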
435

Individual fiber segmentation of three-dimensional microtomograms of paper and fiber-reinforced composite materials

Bache-Wiig, Jens, Henden, Per Christian January 2005 (has links)
The structure of a material is of special significance to its properties, and material structure has been an active area of research. In order to analyze the structure based on digital microscopy images of the material, noise reduction and binarization of these images are necessary. Measurements on fiber networks, found in paper and wood-fiber-reinforced composites, require a segmentation of the imaged material sample into individual fibers. The acquisition process for modern X-ray absorption-mode micro-tomographic images is described. An improved method for the binarization of paper and fiber-reinforced composite volumes is suggested. State-of-the-art techniques for individual fiber segmentation are examined and an improved method is suggested. Software tools for the mentioned image processing tasks have been created and made available to the public. The orientation distribution of selected paper and composite samples was measured using these tools.
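Binarization of a noisy tomogram is commonly done with a global histogram threshold. A minimal NumPy sketch using Otsu's method, a standard baseline technique (the thesis's improved binarization method is not reproduced here):

    import numpy as np

    def otsu_threshold(volume, bins=256):
        """Otsu's method: pick the threshold maximizing between-class variance."""
        hist, edges = np.histogram(volume.ravel(), bins=bins)
        p = hist.astype(float) / hist.sum()
        centers = (edges[:-1] + edges[1:]) / 2.0
        w0 = np.cumsum(p)                      # cumulative weight of background class
        w1 = 1.0 - w0
        cum_mean = np.cumsum(p * centers)
        mu0 = cum_mean / np.where(w0 > 0, w0, 1)
        mu1 = (cum_mean[-1] - cum_mean) / np.where(w1 > 0, w1, 1)
        between = w0 * w1 * (mu0 - mu1) ** 2   # between-class variance per cut
        return centers[np.argmax(between)]

    volume = np.random.rand(32, 32, 32)        # stand-in for a microtomogram
    binary = volume > otsu_threshold(volume)   # True = fiber, False = void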
436

COMPARING BINARY, CSD AND CSD4 APPROACHES ON THE ASPECT OF POWER CONSUMPTION IN FIR FILTERS USING MULTIPLIERLESS ARCHITECTURES ON FPGAS

Birkeland, Guri Kristine January 2005 (has links)
The aim of this thesis is to compare several different FIR-filter design algorithms with respect to the power they consume. Three approaches are presented: the first based on binary, two's-complement representation of the filter coefficients; the second on CSD representation; and the third on CSD4 representation of the coefficients. The three approaches are compared by their overall power consumption when implemented on an FPGA. In theory, representing coefficients in CSD number representation yields a 33% reduction of non-zero bits compared to binary representation for long wordlengths, and representing them in CSD4 yields a further reduction of 36% over CSD. These are the theoretical numbers. This thesis presents a practical example, simulated in distributed arithmetic on Xilinx FPGAs. Twelve different filters have been simulated, with between 4 and 200 taps. An automatic design generation tool has been developed in C to ease the process of VHDL code generation. The tool generates two basic architectures, each consisting of three designs: one based on binary numbers, one on CSD, and one on CSD4 number representation. The simulations were done in Xilinx Project Navigator 7.1.02i, on the Spartan II device family for the smaller filters and on Spartan 3 for the larger filters; the power analysis was done using Xilinx XPower. The results are not what the theory predicts. For filters with between 4 and 32 taps, simulated on Spartan II, the results show an increased difference in power consumption between the binary and CSD4 approaches, in favour of the binary one: on average, binary consumes 24.5% less power than CSD4. For the filters with a larger number of taps (62-200), simulated on Spartan 3, the results show essentially equal power consumption for all three approaches; in other words, the percentage difference between binary, CSD and CSD4 is almost zero. It has thus not been shown in this thesis that the CSD4 approach consumes less power than the binary approach in any case. This is, however, only an early step in the larger research field exploring the possibilities of CSD4 number representation. The future will show whether CSD4 number representation turns out to be beneficial, and whether its use in FIR filters can exceed the efficiency of RAG-n and other currently optimal algorithms.
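To illustrate the non-zero-bit reduction that motivates CSD, here is a generic sketch of canonical-signed-digit encoding (a standard algorithm, not the thesis's C tool):

    def to_csd(n):
        """Encode a positive integer in canonical signed digit (CSD) form.

        Returns digits in {-1, 0, +1}, least significant first, with no two
        adjacent non-zero digits; fewer non-zero digits means fewer adders
        in a multiplierless FIR implementation.
        """
        digits = []
        while n:
            if n & 1:
                d = 2 - (n & 3)    # +1 if n mod 4 == 1, -1 if n mod 4 == 3
                n -= d
            else:
                d = 0
            digits.append(d)
            n >>= 1
        return digits

    # Coefficient 7 = 111 in binary (three non-zero bits) becomes
    # 8 - 1 = [-1, 0, 0, 1] in CSD (two non-zero digits).
    print(to_csd(7))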
437

Motif discovery in biological sequences

Sandve, Geir Kjetil January 2005 (has links)
This master's thesis is a Ph.D. research plan for motif discovery in biological sequences, and consists of three main parts. Chapter 2 is a survey of methods for motif discovery in DNA regulatory regions, with a special emphasis on computational models. The survey presents an integrated model of the problem that allows systematic and coherent treatment of the surveyed methods. Chapter 3 presents a new algorithm for composite motif discovery in biological sequences. This algorithm has been used with success for motif discovery in protein sequences, and will in future work be extended to explore properties of the DNA regulatory mechanism. Finally, chapter 4 describes several current research projects, as well as some more general future directions of research. The research focuses on the development of new algorithms for the discovery of composite motifs in DNA. These algorithms will partly be used for systematic exploration of the DNA regulatory mechanism. An increased understanding of this mechanism may lead to more accurate computational models, and hence more sensitive motif discovery methods.
438

HPC File Server Monitoring and Tuning

Andresen, Rune Johan January 2005 (has links)
As HPC systems grow, the distributed file systems serving them need to handle an increased load of data. In order to maintain performance, the underlying file servers need to distribute the load of data volumes efficiently over the available disks. This is particularly true at CERN, the European Organization for Nuclear Research, which expects to be handling petabytes of data in the near future. In this thesis, new utilities are developed that analyze file server data, which is then used to semi-automatically tune the file system. This is achieved using a commercial database to store the data and integrating it with the file server, which requires a database and a system design that can handle a large amount of data. File server data collections, known as "volumes", can vary in size and be accessed at any time. To increase overall system performance, volume history data is analyzed to locate volumes that may be gathered for better load balancing. For instance, using the volume history data, it is possible to detect volumes that are most accessed during the day and gather them on one file server together with volumes that are most accessed during the night, thereby optimizing the file server's capacity. As part of this work, a user interface that can visualize the history data for volumes and partitions is designed and implemented on top of the AFS file system at CERN. Our initial results presented in this thesis reveal that it is possible to locate volumes that have a repeating access period and thus gather them on the same partition. Other analyses and suggestions for future work are also discussed.
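The pairing idea can be illustrated with a small sketch (hypothetical volume names, access histograms, and a naive greedy pairing; the thesis's actual analysis over AFS volume history is not reproduced):

    # Naive sketch: pair day-peaked volumes with night-peaked ones so each
    # partition sees a flatter combined load over 24 hours. Volume names and
    # hourly access counts below are made up for illustration.

    def peak_hour(hourly_accesses):
        return max(range(24), key=lambda h: hourly_accesses[h])

    def pair_complementary(volumes):
        """volumes: dict of name -> list of 24 hourly access counts."""
        day = [v for v, h in volumes.items() if 8 <= peak_hour(h) < 20]
        night = [v for v, h in volumes.items() if not 8 <= peak_hour(h) < 20]
        return list(zip(day, night))   # each pair shares a partition

    volumes = {
        "user.alice":   [2] * 8 + [90] * 12 + [2] * 4,    # busy during the day
        "backup.batch": [80] * 6 + [1] * 14 + [80] * 4,   # busy at night
    }
    print(pair_complementary(volumes))  # [('user.alice', 'backup.batch')]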
439

Use of GPU Functionality in Volume Rendering

Eide, Kristian Edvard Nigar January 2005 (has links)
Volume rendering describes the process of creating a 2D projection of a 3D discretely sampled data set. The field has a number of applications, most notably within medical imaging, where the output of CT and MRI scanners is a volume data set, as well as geology, where seismic surveys are visualized as an aid when searching for oil and gas. Rendering a volume is a computationally intensive task due to the large amount of data that needs to be processed, and it is only recently, with the advent of commodity 3D accelerator cards, that interactive rendering of volumes has become possible. The latest generations of 3D graphics cards include a Graphics Processing Unit, or GPU, which is capable of executing small code fragments at very high speed. These small programs, while not as flexible as traditional programs, still represent a significant improvement in what can be achieved with the added computational ability of the graphics card. This thesis explores how volume rendering can be enhanced by the use of a GPU. In particular, it shows an improvement to the GPU-based raycasting approach presented in [1], as well as a method for integrating the "depth peeling" technique [6] with a volume renderer to correctly render transparent geometry embedded in the volume. In addition, an introduction to volume rendering and GPU programming is given, and a rendering of a volume with the Phong illumination model is shown.
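The core of a raycaster of this kind is front-to-back alpha compositing along each ray. A minimal CPU-side sketch in NumPy (illustrative only; the GPU implementation the thesis discusses runs this per fragment on the graphics card, and the transfer function here is a toy):

    import numpy as np

    def composite_ray(samples, transfer_function, step_opacity_scale=0.1):
        """Front-to-back compositing of scalar samples along one ray.

        transfer_function maps a scalar sample to (r, g, b, alpha).
        Accumulation: C += (1 - A) * a_i * c_i ; A += (1 - A) * a_i,
        with early ray termination once the ray is nearly opaque.
        """
        color = np.zeros(3)
        alpha = 0.0
        for s in samples:
            r, g, b, a = transfer_function(s)
            a *= step_opacity_scale           # scale opacity by step size
            color += (1.0 - alpha) * a * np.array([r, g, b])
            alpha += (1.0 - alpha) * a
            if alpha > 0.99:                  # early ray termination
                break
        return color, alpha

    # Toy transfer function: density drives both brightness and opacity.
    tf = lambda s: (s, s * 0.8, 1.0 - s, s)
    samples = np.linspace(0.0, 1.0, 64)       # stand-in for samples along a ray
    print(composite_ray(samples, tf))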
440

A Multivariate Image Analysis Toolbox

Hagen, Reidar Strand January 2005 (has links)
The toolkit has been implemented as planned: the groundwork for visualization mappings and relationships between datasets has been finished. Wavelet transforms have been used to compress datasets in order to reduce computational time. Principal Component Analysis and other transforms are working. Examples of use have been provided, along with several ways of visualizing them. Multivariate image analysis is viable on regular workstations.
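As an illustration of the kind of transform such a toolbox applies, here is a generic NumPy sketch of Principal Component Analysis on a multivariate image (not the toolbox's own code):

    import numpy as np

    def image_pca(image, n_components=3):
        """PCA on a multivariate image of shape (height, width, channels).

        Each pixel is treated as an observation; the result is the image
        projected onto the n_components directions of largest variance.
        """
        h, w, c = image.shape
        X = image.reshape(-1, c).astype(float)
        X -= X.mean(axis=0)                       # center each channel
        # SVD of the centered data gives the principal axes as rows of Vt.
        _, _, Vt = np.linalg.svd(X, full_matrices=False)
        scores = X @ Vt[:n_components].T
        return scores.reshape(h, w, n_components)

    image = np.random.rand(64, 64, 8)             # stand-in multivariate image
    print(image_pca(image).shape)                 # (64, 64, 3)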
