321

Conformance testing of Data Exchange Set implementations

Larsen, Fredrik Lied January 2005 (has links)
Product information exchange has been described by a number of standards. The “Standard for the Exchange of Product model data” (STEP) is published by ISO as an international standard to cover this exchange. “Product Life Cycle Support” (PLCS) is a standard developed as an extension to STEP, covering the complete life-cycle information needs for products. PLCS uses Data Exchange Sets (DEXs) to exchange information. A DEX is a subset of the PLCS structure applicable for product information exchange. A DEX is specified in a separate document from the PLCS standard and is published under the Organization for the Advancement of Structured Information Standards (OASIS). The development of DEXs is ongoing; nine DEXs have been identified and are being developed within OASIS, each covering a specific business concept. Implementations based on the DEX specifications are necessary in order to send and receive DEXs populated with product information. The implementations add content to a DEX structure in a neutral file format that can be exchanged. Interoperability between senders and receivers of DEXs cannot be guaranteed; however, conformance testing of implementations can help increase the chances of interoperability. Conformance testing is the process of testing an implementation against a set of requirements stated in the specification or standard used to develop the implementation. Conformance testing is performed by sending inputs to the implementation and observing the output, which is then analysed with respect to the expected output. STEP dedicates a whole section of the standard to conformance testing of STEP implementations; this section describes how implementations of STEP shall be tested and analysed. Because PLCS is an extension of STEP and DEXs are subsets of PLCS, conformance testing for STEP is used as a basis for DEX conformance testing. A testing methodology based on STEP conformance testing and the DEX specifications is developed. The methodology explains how conformance testing can be achieved on DEX implementations, illustrated with a test example for a specific DEX. The thesis develops a proposed set of test methods for conformance testing of DEX adapter implementations. Conformance testing of Export adapters tests the adapter’s ability to populate and output a correct DEX according to the applicable DEX specification. Conformance testing of the Import adapter verifies that the content of the populated input DEX is retrievable in the receiving computer system. A specific DEX, “Identify a part and its constituent parts”, is finally used as an example of how to test against a specific DEX specification. Test cases are derived from a set of test requirements identified from the DEX specification, and testing of these requirements is explained in detail.
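
A minimal sketch of the output-based testing described above, assuming a hypothetical command-line export adapter: an input file is passed to the adapter and the produced exchange file is checked against a list of required entities. The adapter command, file names and entity names are illustrative assumptions, not part of the PLCS or OASIS specifications.

    import subprocess
    from pathlib import Path

    def run_export_adapter(adapter_cmd, input_file, output_file):
        # Invoke the (hypothetical) export adapter as an external program.
        subprocess.run([*adapter_cmd, input_file, output_file], check=True)

    def missing_entities(dex_file, required_entities):
        # A real harness would parse the exchange file with a STEP toolkit;
        # a textual scan is enough to illustrate the pass/fail decision.
        text = Path(dex_file).read_text()
        return [e for e in required_entities if e not in text]

    run_export_adapter(["./export_adapter"], "product_data.xml", "out.p21")
    missing = missing_entities("out.p21", ["PRODUCT", "PRODUCT_DEFINITION"])
    print("PASS" if not missing else "FAIL, missing: %s" % missing)

The same pattern applies to import adapters, with the check made against the data retrievable from the receiving system instead of against an output file.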
322

Automatic recognition of standard views in ultrasound images of the heart

Torland, Anne Vold January 2005 (has links)
Medical imaging gives clinicians new opportunities in the inspection of anatomical structures, surgical planning and diagnosis. Computer vision is often used with the aim of automating these processes. Ultrasound imaging is one of the most popular medical imaging modalities: the equipment is portable and relatively inexpensive, the procedure is non-invasive and there are few known side effects. However, the acquisition of ultrasound images, for instance of the heart, is not a trivial job for the inexperienced. Five classes of standard images, or standard views, have been developed to ensure acceptable quality of ultrasound heart images. Automatic recognition of these standard views, or classification, would be a good starting point for an “Ultrasound for dummies” project. Recently, a new class of object recognition methods has emerged. These methods are based on matching of local features. Image content is transformed into local feature coordinates, which are ideally invariant to translation, rotation, scaling and other image parameters. In [21], David Lowe proposes the Scale Invariant Feature Transform (SIFT), a method for extracting distinctive invariant features from an image, and suggests a method for using these features to recognize different images of the same object. In this thesis I suggest using SIFT features to classify heart view images. The invariance requirements of a standard heart view recognition system are special. Therefore, in addition to using Lowe’s algorithm for feature extraction, a new matching algorithm specialized for the heart view classification task is proposed.
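
A minimal sketch of the matching idea, assuming reference images for each standard view are available on disk (the view names and file names are illustrative): SIFT descriptors are extracted with OpenCV, candidate matches are filtered with Lowe’s ratio test, and the query image is assigned to the reference view with the most surviving matches. This is the generic recognition scheme, not the specialized matching algorithm proposed in the thesis.

    import cv2

    sift = cv2.SIFT_create()
    matcher = cv2.BFMatcher(cv2.NORM_L2)

    def descriptors(path):
        # Extract SIFT descriptors from a grayscale image.
        image = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        _, desc = sift.detectAndCompute(image, None)
        return desc

    def match_count(query_desc, ref_desc, ratio=0.8):
        # Count matches that survive Lowe's ratio test.
        pairs = matcher.knnMatch(query_desc, ref_desc, k=2)
        return sum(1 for m, n in pairs if m.distance < ratio * n.distance)

    views = ["apical_4ch", "apical_2ch", "parasternal_long_axis",
             "parasternal_short_axis", "subcostal"]          # assumed view names
    references = {view: descriptors(view + ".png") for view in views}

    query = descriptors("unknown_view.png")
    best = max(views, key=lambda v: match_count(query, references[v]))
    print("Predicted standard view:", best)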
323

Exploring interface metaphors for using handhelds and PCs together

Alsos, Ole Andreas January 2005 (has links)
By distributing the user interface between devices like a PDA and a PC, one can utilize the best characteristics of each device. This thesis has investigated what conceptual models and interface metaphors one should use when designing systems that use handheld computers and PCs together. This has been done by exploring the design space of the devices, resulting in seven interface metaphors that have been adapted to a hospital case. Based on results from a focus group session and an interview, several prototypes based on the interface metaphors have been developed. These prototypes all enable a physician to display x-ray images on a patient terminal by using a PDA. In a usability test experiment, the users’ actions and think-aloud protocols when using the prototypes were captured and analyzed to find their mental models. The analysis resulted in four general metaphors that users internalize when using handhelds and PCs together. A design process using the users’ mental models as a basis for the creation of the conceptual model is presented. The thesis concludes that the general metaphors found can be a good basis for the design of a conceptual model, and ends with general guidelines for systems using handhelds and PCs together. Keywords: Handheld, PC, conceptual model, mental model, metaphor, design process, usability test, card sort.
324

Organizing Mobile Work Processes in Ubiquitous Computing Environments

Jacobsen, Kristoffer January 2005 (has links)
This thesis explores the domain of ubiquitous computing and relates situations of mobile work to Virtual Organizations (VOs). Motivated by the work performed in the MOWAHS project, this thesis aims to contribute to understanding virtual organizations and to continuously assessing and improving the work processes within them. Emerging technologies enable improved sensing of users’ actions, wishes and requirements, which can be utilized for facilitating situated activities in dynamic organizations. Taking an organizational approach to the subject, we aim to describe new ways of coordinating actors automatically in these environments based on context information from the surroundings. Through analysis of simple mobile work scenarios, we extract knowledge of how different situations of mobile work demand coordination. This is used as a method for identifying the importance of work process information in monitoring coordination. We provide an architecture proposal for a coordination module and suggestions for how context information about the work processes could be acquired and represented as knowledge to the organization.
325

HPC Virtualization with Xen on Itanium

Bjerke, Håvard K. F. January 2005 (has links)
The Xen Virtual Machine Monitor has proven to achieve higher efficiency in virtualizing the x86 architecture than competing x86 virtualization technologies. This makes virtualization on the x86 platform more feasible in High-Performance and mainframe computing, where virtualization can offer attractive solutions for managing resources between users. Virtualization is also attractive on the Itanium architecture. Future x86 and Itanium computer architectures include extensions which make virtualization more efficient. Moving to virtualizing resources through Xen may ready computer centers for the possibilities offered by these extensions. The Itanium architecture is “uncooperative” in terms of virtualization: privilege-sensitive instructions make full virtualization inefficient and impose the need for para-virtualization. Para-virtualizing Linux involves changing certain native operations in the guest kernel in order to adapt it to the Xen virtual architecture. Minimal para-virtualization impact on Linux is achieved by having the hypervisor trap and emulate illegal instructions rather than replacing them. Transparent para-virtualization allows the same Linux kernel binary to run on top of Xen and on physical hardware. Itanium region registers allow a more graceful distribution of memory between guest operating systems without disturbing the Translation Lookaside Buffer. The Extensible Firmware Interface provides a standardized interface to hardware functions, and is easier to virtualize than legacy hardware interfaces. The overhead of running para-virtualized Linux on Itanium is reasonably small, measured to be around 4.9%. The overhead of running transparently para-virtualized Linux on physical hardware is also reasonably small compared to non-virtualized Linux.
326

Digital Forensics: Methods and tools for retrieval and analysis of security credentials and hidden data.

Furuseth, Andreas Grytting January 2005 (has links)
This master’s thesis proposes digital forensic methods for the retrieval and analysis of steganography during a digital investigation. The proposed methods are examined using scenarios. From the examination of steganography in these scenarios, it is concluded that the recommended methods can be automated and increase the chances that an investigator detects steganography.
327

Predicting MicroRNA targets

Sætrom, Ola January 2005 (has links)
MicroRNAs are a large family of short non-coding RNAs that regulate protein production by binding to mRNAs. A single miRNA can regulate an mRNA by itself, or several miRNAs can cooperate in regulating the mRNA; which of these applies depends on the degree of complementarity between the miRNA and the target mRNA. Here, we present the program TargetBoost which, using a classifier generated by a combination of hardware-accelerated genetic programming and boosting, allows screening of several large datasets against several miRNAs and computes the likelihood that the genes in a dataset are regulated by the set of miRNAs used in the screening. We also present results from a comparison of several different scoring functions for measuring cooperative effects. We found that the classifier used in TargetBoost is best for finding target sites that regulate mRNAs by themselves. A demo of TargetBoost can be found at http://www.interagon.com/demo.
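
As a rough illustration of the complementarity idea (not the TargetBoost classifier, which combines genetic programming and boosting), the sketch below scans an mRNA sequence for sites that pair perfectly with the miRNA “seed”, nucleotides 2-8; the sequences are only examples.

    COMPLEMENT = str.maketrans("AUGC", "UACG")

    def seed_sites(mirna, mrna):
        # Positions in the mRNA whose bases pair perfectly with the miRNA seed.
        seed = mirna[1:8]                         # nucleotides 2-8 of the miRNA
        site = seed.translate(COMPLEMENT)[::-1]   # reverse complement on the mRNA
        return [i for i in range(len(mrna) - len(site) + 1)
                if mrna[i:i + len(site)] == site]

    mirna = "UAGCUUAUCAGACUGAUGUUGA"       # example miRNA sequence
    mrna = "AGCAACAUCAGUCUGAUAAGCUAACG"    # example 3' UTR fragment
    print(seed_sites(mirna, mrna))         # -> [15]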
328

Tracking the Lineage of Arbitrary Processing Sequences

Valeur, Håvar January 2005 (has links)
Data is worthless without knowing what the data represents, and you need metadata to efficiently manage large data sets. As computing power becomes cheaper and more data is derived, metadata becomes more important than ever. Today researchers set up more experimental scientific workflows than before, and as a result many of the steps leading up to the implementation are skipped. Those steps usually included documenting the work, which is not a central part of the more experimental approach. Since documenting is no longer a natural part of the scientific workflow, and the workflow might change a lot through its lifetime, many data products lack documentation. Since the way scientists work has changed, we feel the way they document their work needs to change. Currently there is no metadata system that retrieves metadata directly from the scientific process without the researcher having to change his code or otherwise manually set up the system to handle the workflow. This thesis suggests ways to automate the metadata retrieval and shows how two of these techniques can be implemented. Automatic lineage and metadata retrieval will help researchers document the process a data product has gone through. My implementation shows how to retrieve lineage and metadata by instrumenting Interactive Data Language scripts, and how to retrieve lineage from shell scripts by looking at the system calls made by the executable. The implementation discussed in this paper is intended to be a client for the Earth System Science Server, a metadata system for earth science data.
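
A minimal sketch of the system-call approach for shell scripts, assuming a Linux host with strace installed; the traced command and the parsing are simplifications of what a real lineage client would need.

    import re
    import subprocess
    import tempfile

    def files_opened_by(command):
        # Run the command under strace and collect the files it opened,
        # ignoring calls that failed with ENOENT.
        with tempfile.NamedTemporaryFile(suffix=".trace") as log:
            subprocess.run(["strace", "-f", "-e", "trace=open,openat",
                            "-o", log.name] + command, check=True)
            opened = set()
            for line in open(log.name):
                match = re.search(r'open(?:at)?\(.*?"([^"]+)"', line)
                if match and "ENOENT" not in line:
                    opened.add(match.group(1))
            return opened

    print(files_opened_by(["./process_data.sh"]))   # hypothetical workflow script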
329

Individual fiber segmentation of three-dimensional microtomograms of paper and fiber-reinforced composite materials

Bache-Wiig, Jens, Henden, Per Christian January 2005 (has links)
The structure of a material is of special significance to its properties, and material structure has been an active area of research. In order to analyze the structure based on digital microscopy images of the material, noise reduction and binarization of these images are necessary. Measurements on fiber networks, as found in paper and wood-fiber-reinforced composites, require a segmentation of the imaged material sample into individual fibers. The acquisition process for modern X-ray absorption-mode micro-tomographic images is described. An improved method for the binarization of paper and fiber-reinforced composite volumes is suggested. State-of-the-art techniques for individual fiber segmentation are examined and an improved method is suggested. Software tools for the mentioned image processing tasks have been created and made available to the public. The orientation distribution of selected paper and composite samples was measured using these tools.
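
A minimal sketch of the noise reduction and binarization step on a single tomogram slice, using Gaussian smoothing and Otsu thresholding with scikit-image; this is a generic baseline, not the improved method the thesis proposes, and the file name is an assumption.

    from skimage import filters, io

    slice_image = io.imread("tomogram_slice.png", as_gray=True)

    # Suppress acquisition noise before choosing a threshold.
    smoothed = filters.gaussian(slice_image, sigma=1.0)

    # Otsu's method picks a global threshold separating fibre from background.
    threshold = filters.threshold_otsu(smoothed)
    binary = smoothed > threshold

    print("Foreground (fibre) area fraction:", binary.mean())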
330

COMPARING BINARY, CSD AND CSD4 APPROACHES ON THE ASPECT OF POWER CONSUMPTION IN FIR FILTERS USING MULTIPLIERLESS ARCHITECTURES ON FPGAS

Birkeland, Guri Kristine January 2005 (has links)
The aim of this thesis is to compare several different algorithms for FIR-filter design with respect to the amount of power they consume. Three different approaches are presented: one based on binary, two's-complement representation of the coefficients in the filter, a second based on CSD representation, and a third based on CSD4 representation of the coefficients. The three approaches are compared in terms of their overall power consumption when implemented on an FPGA. In theory, representing coefficients in CSD yields a 33% reduction of non-zero bits compared to binary representation for long wordlengths, and representing them in CSD4 yields a further reduction of 36% over CSD. These are the theoretical numbers. This thesis presents a practical example, simulated in distributed arithmetic on Xilinx FPGAs. Twelve different filters have been simulated, with the number of taps between 4 and 200. An automatic design-generation tool has been developed in C to ease the process of VHDL code generation. The automation tool generates two basic architectures, each consisting of three designs: one based on binary numbers, one based on CSD, and one based on CSD4 number representation. The simulations have been done in Xilinx Project Navigator 7.1.02i, on the Spartan-II device family for the smaller filters and on Spartan-3 for the larger filters. The power analysis is done using Xilinx XPower. The results from this thesis are not what the theory states. For filters with between 4 and 32 taps, simulated on Spartan-II, the results show an increased difference in power consumption between the binary and the CSD4 approach, in favour of the binary one; on average for these designs, binary consumes 24.5% less power than CSD4. For the filters with a larger number of taps (62-200), simulated on Spartan-3, the results show power consumption that is equal for all three approaches in a filter; in other words, the percentage difference between binary, CSD and CSD4 is almost zero. This thesis has not shown that the binary approach consumes less power than the CSD4 approach in every case. This is, however, only an early step in the larger research field exploring the possibilities of CSD4 number representation. The future will show whether the CSD4 number representation turns out to be beneficial, and whether its use in FIR filters will exceed the efficiency of RAG-n and other currently optimal algorithms.
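
A minimal sketch of radix-2 CSD recoding of a single coefficient and the resulting count of non-zero digits, which is what determines the number of adders in a multiplierless (shift-and-add) realization; the radix-4 CSD4 recoding studied in the thesis is not shown.

    def to_csd(n):
        # Canonical signed digit recoding: digits in {-1, 0, +1}, least
        # significant first, with no two adjacent non-zero digits.
        digits = []
        while n != 0:
            if n % 2 == 0:
                d = 0
            else:
                d = 2 - (n % 4)   # +1 if the low bits are ...01, -1 if ...11
            digits.append(d)
            n = (n - d) // 2
        return digits

    coefficient = 55                              # binary 110111, five ones
    csd = to_csd(coefficient)                     # 1 0 0 -1 0 0 -1, i.e. 64 - 8 - 1
    print("binary non-zero bits:", bin(coefficient).count("1"))
    print("CSD non-zero digits:", sum(1 for d in csd if d != 0))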
