About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations (NDLTD).
Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
561

The Expectation Propagation Algorithm for use in Approximate Bayesian Analysis of Latent Gaussian Models

Skar, Christian January 2010 (has links)
Analyzing latent Gaussian models using approximate Bayesian inference methods has proven to be a fast and accurate alternative to running time-consuming Markov chain Monte Carlo simulations. A crucial part of these methods is the use of a Gaussian approximation, which is commonly found using an asymptotic expansion approximation. This study considered an alternative method for making a Gaussian approximation, the expectation propagation (EP) algorithm, which is known to be more accurate, but also more computationally demanding. By assuming that the latent field is a Gaussian Markov random field, specialized algorithms for factorizing sparse matrices were used to speed up the EP algorithm. The approximation methods were then compared with regard to both computational complexity and the accuracy of the approximations. The expectation propagation algorithm was shown to provide some improvements in accuracy compared to the asymptotic expansion approximation when tested on a binary logistic regression model. However, tests of the computational time required for computing approximations in simple examples show that the EP algorithm is as much as 15-20 times slower than the alternative method.
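As a loose illustration of the algorithm named in this abstract, the sketch below runs EP's moment-matching loop on a deliberately simple scalar model (a Gaussian prior observed through probit factors). The model, parameters, and `ep_probit` helper are illustrative assumptions, not the thesis's sparse-matrix GMRF implementation.

```python
# Minimal 1-D expectation propagation: a scalar latent x ~ N(0, v0)
# observed through probit factors Phi(y_i * x), y_i in {-1, +1}.
# Illustrative toy model only, not the GMRF setting of the thesis.
import numpy as np
from scipy.stats import norm

def ep_probit(y, v0=4.0, iters=20):
    n = len(y)
    tau = np.zeros(n)      # site precisions
    nu = np.zeros(n)       # site precision-weighted means
    sigma2, mu = v0, 0.0   # global Gaussian approximation q(x)
    for _ in range(iters):
        for i in range(n):
            # Cavity distribution: remove site i from q.
            tau_cav = 1.0 / sigma2 - tau[i]
            nu_cav = mu / sigma2 - nu[i]
            v_cav, m_cav = 1.0 / tau_cav, nu_cav / tau_cav
            # Moment-match the tilted distribution Phi(y_i x) * cavity;
            # these moments are available in closed form for the probit.
            z = y[i] * m_cav / np.sqrt(1.0 + v_cav)
            r = norm.pdf(z) / norm.cdf(z)
            m_new = m_cav + y[i] * v_cav * r / np.sqrt(1.0 + v_cav)
            v_new = v_cav - v_cav**2 * r * (z + r) / (1.0 + v_cav)
            # Update site i so that cavity * site has the matched moments.
            tau[i] = 1.0 / v_new - tau_cav
            nu[i] = m_new / v_new - nu_cav
            sigma2, mu = v_new, m_new
    return mu, sigma2

mu, sigma2 = ep_probit(np.array([1, 1, -1, 1, 1]))
print(f"EP posterior: mean={mu:.3f}, var={sigma2:.3f}")
```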
562

Topology and Data

Brekke, Øyvind January 2010 (has links)
Today there is an immense production of data, and the need for better methods to analyze data is ever increasing. Topology has many features and good ideas which seem favourable for analyzing certain datasets where statistics is starting to have problems, for example datasets originating from microarray experiments. However, topological methods cannot be directly applied to finite point sets coming from such data, or at least doing so will not say anything interesting. So we have to modify the datasets in some way so that we can work on them with the topological machinery. This way of applying topology may be viewed as a kind of discrete version of topology. In this thesis we present some ways to construct simplicial complexes from a finite point cloud, in an attempt to model the underlying space. Together with simplicial homology, persistent homology, and barcodes, this gives us a tool to uncover topological features in finite point clouds. The theory is tested with a Java software package called JPlex, which is an implementation of these ideas. Lastly, a method called Mapper is covered. This is also a method for creating simplicial complexes from a finite point cloud; however, Mapper is mostly used to create low-dimensional simplicial complexes that can be easily visualized, and structures are then detected that way. An implementation of the Mapper method is also tested on a self-made dataset.
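A small sketch of the barcode idea in the lowest dimension, under simplifying assumptions: for H0 (connected components), the persistence intervals of a Vietoris-Rips filtration can be read off a union-find pass over the edges sorted by length. Higher-dimensional homology, as computed by JPlex, is out of scope here, and the two-cluster test data is made up.

```python
# Degree-0 persistence barcode of a Vietoris-Rips filtration:
# components are all born at scale 0 and die when edges merge them.
import numpy as np

def h0_barcode(points):
    n = len(points)
    # All pairwise edges sorted by length -- the filtration order.
    edges = sorted(
        (np.linalg.norm(points[i] - points[j]), i, j)
        for i in range(n) for j in range(i + 1, n)
    )
    parent = list(range(n))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    bars = []
    for length, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj              # two components merge:
            bars.append((0.0, length))   # the absorbed one dies here
    bars.append((0.0, np.inf))           # one component survives forever
    return bars

rng = np.random.default_rng(0)
# Two well-separated clusters: expect one long finite bar.
pts = np.vstack([rng.normal(0, 0.1, (10, 2)),
                 rng.normal(3, 0.1, (10, 2))])
for b, d in sorted(h0_barcode(pts), key=lambda t: -t[1])[:3]:
    print(f"[{b:.2f}, {d:.2f})")
```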
563

Skin effects and UV dosimetry of climate therapy in patients with psoriasis

Bartosova, Veronika January 2010 (has links)
Sun exposure and climate therapy are an effective treatment for psoriasis. However, even though this treatment gives patients relief from their discomforting symptoms, it has some potentially dangerous side effects, such as an increased risk of skin cancer and premature skin aging. A prospective field study will follow patients undergoing climate therapy. During this study the UV dose to each patient will be monitored by personal dosimeters worn by the patients. Furthermore, the patients' skin spectra, acquired by means of optical spectroscopy, will be obtained daily. Both psoriatic and unaffected skin will be observed. These data will be used to assess the skin changes which take place during the psoriasis treatment. This project focuses on developing an automatic algorithm for handling bulk spectrometric measurement data and on proposing ways of numerically evaluating the skin spectra. These numerical values will later be used to compare the patients' daily spectra and monitor the progress of the treatment. An inverse model based on a lookup table and successive iteration was proposed in this project. The model matches diffuse skin reflectance spectra, modeled with a diffuse skin model, to the measured patient skin spectra. The measured skin spectra are then characterized by the diffuse skin model input parameters found by the inverse model: oxygenation, blood volume and the melanin absorption coefficient. Additionally, four indices were proposed to supplement the parameters found by the inverse model, namely the erythema index, melanin index, hemoglobin index and oxygenation index. Measurements of several skin spectra, including psoriatic plaque spectra, were carried out and used to test the performance of the inverse fitting model. The proposed model proved to match the measured spectra acceptably well for the purpose of distinguishing between different measured spectra. The largest deviation is at the ends of the spectra, due to the use of a constant scattering coefficient and to additional parameters that are not directly relevant to sun exposure and hence not considered by the model. The proposed parameters, together with the indices, proved to be a viable means of evaluating the healing of psoriatic plaques as well as determining the changes caused by the sun in normal skin.
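A schematic sketch of the lookup-table inversion described above: simulate reflectance spectra over a grid of tissue parameters with a forward model, then recover the parameters of a measured spectrum by nearest-neighbour search. The `toy_forward_model` below is a deliberately crude stand-in, not the diffusion-based skin model used in the thesis, and all grid ranges are invented.

```python
# Lookup-table inversion: grid the forward model, match measured spectra.
import numpy as np
from itertools import product

wavelengths = np.linspace(450, 750, 61)  # nm

def toy_forward_model(oxy, blood, melanin):
    # Crude absorption-like shapes standing in for Hb, HbO2 and melanin.
    hb = (1 - oxy) * np.exp(-((wavelengths - 555) / 30.0) ** 2)
    hbo2 = oxy * np.exp(-((wavelengths - 575) / 25.0) ** 2)
    mel = melanin * (wavelengths / 450.0) ** -3
    return np.exp(-(blood * (hb + hbo2) + mel))

def build_table(grid):
    entries = list(product(*grid))
    spectra = np.array([toy_forward_model(*p) for p in entries])
    return entries, spectra

def invert(measured, entries, spectra):
    # Nearest neighbour in the least-squares sense over the whole table.
    errs = np.sum((spectra - measured) ** 2, axis=1)
    return entries[int(np.argmin(errs))]

grid = (np.linspace(0.3, 1.0, 8),   # oxygenation
        np.linspace(0.1, 2.0, 8),   # blood volume (a.u.)
        np.linspace(0.0, 0.5, 8))   # melanin (a.u.)
entries, spectra = build_table(grid)
truth = (0.8, 0.9, 0.2)
noise = np.random.default_rng(1).normal(0, 0.005, wavelengths.size)
measured = toy_forward_model(*truth) + noise
found = invert(measured, entries, spectra)
print("true:", truth)
print("recovered:", tuple(round(float(v), 2) for v in found))
```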
564

Reactive Power Compensation using a Matrix Converter

Holtsmark, Nathalie Marie-Anna January 2010 (has links)
This Master's thesis investigates a new application for the matrix converter: shunt reactive power compensation. The suggested Matrix Converter-based Reactive power Compensation (MCRC) device is composed of a matrix converter, whose input is connected to the grid, and an electric machine at the output of the converter. The reactive power flowing into or out of the grid can be regulated with the matrix converter by controlling the magnitude and/or phase angle of the current at the input of the converter. Unlike traditional AC-DC-AC converters, the matrix converter has no bulky DC-link capacitor. The envisioned electric machine is a permanent magnet (PM) synchronous machine, which is compact as well, yielding an overall compact device. The main focus of the thesis is to evaluate the reactive power range that the MCRC device can offer. This range depends mainly on the modulation of the matrix converter. Two different modulation techniques are studied: the indirect space vector modulation and the three-vector-scheme. The indirect space vector modulation can provide or draw reactive power at the input of the matrix converter as long as there is a nonzero active power flow through the converter. For pure reactive power compensation the indirect space vector modulation cannot be used, and the three-vector-scheme must be used instead. Both modulation techniques are presented in detail, along with their reactive power compensation ranges. To verify the reactive power capabilities of the device, three different simulation models were built in MATLAB Simulink. The first simulation model represents the MCRC device with the matrix converter modulated with the indirect space vector modulation; the second represents the MCRC device with the matrix converter modulated with the three-vector-scheme. In both models the PM machine is represented by a simple equivalent circuit. Simulations done with both models show good agreement between the theoretical analysis of the device and the simulation results. The last simulation model features a simplified version of the MCRC system connected to a grid where a symmetrical fault occurs. The MCRC proves to be effective in re-establishing the voltage to its pre-fault value.
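A back-of-the-envelope sketch of the quantity being controlled here: the active and reactive power a shunt-connected device exchanges with the grid as a function of the magnitude and phase angle of its input current. The voltage, current, and angle values are purely illustrative, not figures from the thesis.

```python
# P and Q for a balanced three-phase connection as a function of the
# displacement angle phi between voltage and input current.
import math

def three_phase_powers(v_phase_rms, i_rms, phi_rad):
    p = 3.0 * v_phase_rms * i_rms * math.cos(phi_rad)  # active power (W)
    q = 3.0 * v_phase_rms * i_rms * math.sin(phi_rad)  # reactive power (var)
    return p, q

# With the indirect space vector modulation, nonzero Q is only available
# while active power flows (P != 0); pure compensation (phi = +/-90 deg,
# P = 0) requires the three-vector-scheme, as the abstract notes.
for deg in (30, 60, 90):
    p, q = three_phase_powers(230.0, 10.0, math.radians(deg))
    print(f"phi={deg:3d} deg  ->  P={p:7.0f} W   Q={q:7.0f} var")
```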
565

Investigation of methods for speckle contrast reduction

Welde, Kristine January 2010 (has links)
Speckle arises when coherent light is reflected from a rough screen and observed by an intensity detector with a finite aperture. Because speckle causes serious image degradation when lasers are used as light sources in e.g. projectors, methods for reducing the speckle contrast need to be developed. Different speckle contrast reduction methods are investigated in this thesis, such as a rotating diffuser and a sinusoidal rotating grating. In addition, speckle simulations with the optical system design software ZEMAX have been explored. A setup consisting of a 4-f imaging system with a rotating diffuser in the Fourier plane was developed in order to decide whether it is advantageous to perform speckle reduction in the Fourier plane. Hence, measurement series were performed with the rotating diffuser placed at different positions in the 4-f imaging system for comparison. Measurement series were executed both with an empty object plane and with a lens in it to spread the light in the Fourier plane. Placing a rotating diffuser in the Fourier plane does not appear to be effective for speckle contrast reduction. The last setup investigated was a transmissive spatial light modulator (SLM) placed in the beam path. Sinusoidal rotating gratings created by means of gray levels, to simulate a potential modulator based on a deformable polymer layer, were implemented on the SLM. The gratings were rotated around their centers, and in a spiral, in order to reduce the speckle contrast. For the first method the modulator speckle contrast was 34% for N = 18 averaged images, and for the second method it was 31% for N = 36 averaged images, both with a grating period of 4 pixels. Due to the drawbacks of the SLM, optimal results were not achieved, but the SLM is useful as a proof of concept. Further measurements should be performed for this promising, novel method based on a true sinusoidal grating.
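A numerical sketch of the figure of merit used above: speckle contrast C = sigma_I / mean(I), which ideally drops as 1/sqrt(N) when N independent speckle patterns are averaged within one exposure. The simulated fully developed speckle below is a textbook idealization; it shows why the measured 34% at N = 18 and 31% at N = 36 sit above the ideal limit.

```python
# Speckle contrast versus number of averaged independent patterns.
import numpy as np

rng = np.random.default_rng(42)

def speckle_pattern(shape=(256, 256)):
    # Fully developed speckle: intensity of a complex circular Gaussian
    # field is exponentially distributed, giving contrast C = 1.
    field = rng.normal(size=shape) + 1j * rng.normal(size=shape)
    return np.abs(field) ** 2

def contrast(img):
    return img.std() / img.mean()

for n in (1, 4, 18, 36):
    avg = np.mean([speckle_pattern() for _ in range(n)], axis=0)
    print(f"N={n:2d}: C={contrast(avg):.3f}  (ideal 1/sqrt(N)={1/np.sqrt(n):.3f})")
```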
566

Representing and reasoning with constraints in Creek

Stige, Martin January 2006 (has links)
This work studies constraint mechanisms in frame-based knowledge representation systems, with the aim of improving the knowledge modelling abilities of the TrollCreek system. TrollCreek is an implementation of Creek, an architecture for case-based reasoning (CBR) that uses an explicit frame-based knowledge model to guide the CBR process. The objective of this project is to develop a constraint mechanism for TrollCreek. In doing this, the earlier Lisp implementation of Creek and four other frame-based systems are examined with emphasis on their constraint mechanisms. Based on these systems, a constraint mechanism for TrollCreek is discussed and specified. The part of the mechanism considered most central is implemented and evaluated.
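A toy sketch of the kind of slot constraint a frame-based knowledge model might enforce, such as value-type and cardinality restrictions checked when a slot is filled. The class names and checks are illustrative assumptions, not TrollCreek's actual representation or API.

```python
# Frames with constrained slots: violations are caught at fill time.
class Slot:
    def __init__(self, name, value_type=None, cardinality=None):
        self.name, self.value_type, self.cardinality = name, value_type, cardinality
        self.values = []

    def add(self, value):
        if self.value_type and not isinstance(value, self.value_type):
            raise TypeError(f"{self.name}: expected {self.value_type.__name__}")
        if self.cardinality and len(self.values) >= self.cardinality:
            raise ValueError(f"{self.name}: at most {self.cardinality} value(s)")
        self.values.append(value)

class Frame:
    def __init__(self, name, **slots):
        self.name, self.slots = name, slots

    def fill(self, slot, value):
        self.slots[slot].add(value)

car = Frame("Car",
            has_engine=Slot("has_engine", value_type=str, cardinality=1),
            has_wheel=Slot("has_wheel", value_type=str, cardinality=4))
car.fill("has_engine", "V8")
try:
    car.fill("has_engine", "V6")   # violates the cardinality constraint
except ValueError as e:
    print("constraint violation:", e)
```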
567

Swarm-based information retrieval : Automatic knowledge-acquisition using MAS, SI and the Internet

Rykkelid, Håvard January 2006 (has links)
In testing the viability of automatic knowledge-acquisition using simple techniques and brute force on the Internet, a system was implemented in Java. Techniques from both the multi-agent system and swarm intelligence paradigms were used to structure the system, improve searches, increase stability and increase modularity. The system presented relies on using existing search-engines to find texts on the World Wide Web containing a user-specified key-word. Knowledge is identified in the texts using key-sentences; terms related to the key-word become new key-words in an incremental search. The result is expressed as sentences in a KR-language. The answers from a run were often interesting and surprising, and gave information beyond an encyclopedic scope, even if the answers often contained false information. The results of the implemented system verified the viability of both the designed framework and the theory behind it.
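A stripped-down sketch of the incremental search loop described above: sentences containing the current key-word are mined for candidate terms, which seed the next round. The real system was written in Java and fetched texts through live search engines; this sketch only mimics the loop on a canned corpus, and all names and stop-words are invented.

```python
# Incremental key-word expansion over a toy corpus.
import re
from collections import deque

corpus = [
    "An ant is a social insect. Ants live in colonies.",
    "A colony is organised without central control.",
    "Stigmergy lets an insect coordinate through the environment.",
]

STOPWORDS = {"a", "an", "the", "is", "in", "are", "lets", "without", "through"}

def key_sentences(text, keyword):
    return [s for s in re.split(r"(?<=[.!?])\s+", text)
            if keyword.lower() in s.lower()]

def related_terms(sentence, keyword):
    words = re.findall(r"[a-z]+", sentence.lower())
    return {w for w in words if w not in STOPWORDS and w != keyword}

def crawl(seed, max_rounds=3):
    seen, queue, facts = {seed}, deque([seed]), []
    for _ in range(max_rounds):
        if not queue:
            break
        kw = queue.popleft()
        for text in corpus:
            for sent in key_sentences(text, kw):
                facts.append((kw, sent))            # candidate knowledge
                for term in related_terms(sent, kw) - seen:
                    seen.add(term)                  # term becomes a new key-word
                    queue.append(term)
    return facts

for kw, fact in crawl("ant"):
    print(f"{kw!r}: {fact}")
```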
568

Forensic analysis of an unknown embedded device

Eide, Jarle, Olsen, Jan Ove Skogheim January 2006 (has links)
Every year thousands of new digital consumer device models come on the market. These devices include video cameras, photo cameras, computers, mobile phones and a multitude of different combinations. Most of these devices have the ability to store information in one form or another. This is a problem for law enforcement agencies, as they need access to all these new kinds of devices and the information on them in investigations. Forensic analysis of electronic and digital equipment has lately become much more complex because of the sheer number of new devices and their increasing internal technological sophistication. This thesis tries to help the situation by reverse engineering a Qtek S110 device. More specifically, we analyze how the storage system of this device, called the object store, is implemented on the device's operating system, Windows Mobile. We hope to figure out how the device stores user data and what happens to this data when it is "deleted". We further try to define a generalized methodology for such forensic analysis of unknown digital devices. The methodology takes into account that such analysis will have to be performed by teams of reverse engineers rather than by single individuals. Based on prior external research, we constructed and tested the methodology successfully. We were able to figure out the object store's internal workings more or less entirely, and constructed a software tool called BlobExtractor that can extract data, including "deleted" data, from the device without using the operating system API. The main reverse engineering strategies utilized were black-box testing and disassembly. We believe our results can be the basis for future advanced recovery tools for Windows Mobile devices and that our generalized reverse engineering methodology can be utilized on many kinds of unknown digital devices.
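A schematic sketch of one step in such an analysis: carving fixed-format records out of a raw memory dump by scanning for a magic signature, including records flagged as deleted. The record layout below is entirely hypothetical; the thesis had to reverse engineer the real object store format before a tool like BlobExtractor could do this on actual Windows Mobile images.

```python
# Carving hypothetical records (magic, length, deleted-flag, payload)
# out of a raw byte dump without any help from the OS API.
import struct

MAGIC = b"\xde\xad\xbe\xef"          # hypothetical record signature
HEADER = struct.Struct("<4sIB")      # magic, payload length, deleted flag

def carve_records(dump: bytes):
    records, pos = [], 0
    while (pos := dump.find(MAGIC, pos)) != -1:
        if pos + HEADER.size > len(dump):
            break
        _, length, deleted = HEADER.unpack_from(dump, pos)
        payload = dump[pos + HEADER.size : pos + HEADER.size + length]
        records.append({"offset": pos, "deleted": bool(deleted), "data": payload})
        pos += HEADER.size + length
    return records

# Synthetic dump: one live record and one "deleted" record amid junk.
dump = (b"junk" + HEADER.pack(MAGIC, 5, 0) + b"hello" +
        b"pad" + HEADER.pack(MAGIC, 3, 1) + b"bye")
for r in carve_records(dump):
    state = "deleted" if r["deleted"] else "live"
    print(f"offset {r['offset']}: {state} record {r['data']!r}")
```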
569

Bluetooth broadcasting

Ho, Johan January 2006 (has links)
Background: The wireless technology Bluetooth has rapidly become more commonly supported by electronic devices like mobile phones and PDAs. Several companies are currently developing Bluetooth broadcasting systems to use for marketing. This report is the result of researching the use of Bluetooth broadcasting for delivering information for more general purposes, how well Bluetooth actually works for broadcasting, and the topic of user privacy. Results: Broadcasting with Bluetooth did work with a few devices at the same time, since Bluetooth allows up to seven simultaneous connections to one Bluetooth radio. By making a passive system, where the user is the one who requests information, it also will not affect users' privacy. However, my research also found a few issues with Bluetooth which might affect a broadcast, the most noticeable being the somewhat low transfer rate and device discovery not always working when many users perform device discovery at the same time. The fact that it only supports seven connections is also a limitation. Basically, while it is possible to use Bluetooth for broadcasting, it might be problematic to use it for targeting a large audience. Conclusions: Even with the problems mentioned, Bluetooth broadcasting provides quite a unique way of broadcasting, and with newer versions of Bluetooth these issues might become less of a problem. Bluetooth broadcasting definitely has some potential.
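A rough capacity estimate following the limitation noted above: a single Bluetooth radio serves at most seven simultaneous connections, so the time to push one file to a crowd grows in waves of seven. The effective throughput figure is an assumption for illustration, not a measurement from the report.

```python
# How long it takes to broadcast one file when only seven users can be
# connected to the radio at a time.
import math

def broadcast_time(n_users, file_kb, rate_kbps=80.0, slots=7):
    per_user_s = file_kb * 8.0 / rate_kbps     # seconds per transfer
    waves = math.ceil(n_users / slots)         # audience served 7 at a time
    return waves * per_user_s

for users in (7, 50, 200):
    print(f"{users:4d} users -> ~{broadcast_time(users, file_kb=200):5.0f} s")
```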
570

On the Efficiency of Data Communication for the Ultramonit Corrosion Monitoring System

Rommetveit, Tarjei January 2006 (has links)
Ultramonit is a system under development for permanent installation on critical parts of subsea oil and gas pipelines in order to monitor corrosion continuously using ultrasound. The communication link which connects the Ultramonit units with the outside world is identified as the system's bottleneck, and it is thus of interest to compress the ultrasonic data before transmission. The main goal of this diploma work has been to implement and optimize a lossy compression scheme in C on the available hardware (HW) with respect to a self-defined fidelity measure. Limited resources, such as constraints on memory and processing time, have been a major issue during implementation. The real-time aspect of the problem results in an intricate relation between transfer time, processing time and compression ratio for a given fidelity. The encoder is optimized with respect to two different bit allocation schemes, two different filters, and various parameters. Compared to transferring the unprocessed traces, the results demonstrate that the transfer time can be reduced by a factor of 12. This yields acceptable fidelity for the main application of long-term monitoring of subsea pipelines. However, for ultra-high-precision applications, where the total change in thickness due to corrosion is less than a few micrometers, compression should not be employed.
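A minimal sketch of the trade-off this work optimizes: lossy compression of an ultrasonic trace against a fidelity measure and the resulting bit budget. The scheme below (keeping and quantizing the largest DCT coefficients) and all numbers are illustrative assumptions, not the thesis's actual C implementation, filters, or bit allocation.

```python
# Toy transform coding of an ultrasonic trace: keep the largest DCT
# coefficients, quantize them, and measure fidelity as SNR.
import numpy as np
from scipy.fft import dct, idct

def compress(trace, keep=0.1, bits=8):
    c = dct(trace, norm="ortho")
    k = max(1, int(keep * len(c)))
    idx = np.argsort(np.abs(c))[-k:]              # largest coefficients
    scale = np.abs(c[idx]).max()
    q = np.round(c[idx] / scale * (2 ** (bits - 1) - 1))
    return idx, q, scale, len(trace)

def decompress(idx, q, scale, n, bits=8):
    c = np.zeros(n)
    c[idx] = q / (2 ** (bits - 1) - 1) * scale
    return idct(c, norm="ortho")

t = np.linspace(0, 1, 2048)
trace = np.exp(-((t - 0.3) / 0.01) ** 2) * np.sin(2 * np.pi * 200 * t)
idx, q, scale, n = compress(trace)
rec = decompress(idx, q, scale, n)
snr = 10 * np.log10(np.sum(trace**2) / np.sum((trace - rec) ** 2))
# Crude bit budget: 16-bit raw samples vs 8-bit coefficients + 11-bit indices.
ratio = (n * 16) / (len(q) * 8 + len(idx) * 11)
print(f"SNR = {snr:.1f} dB, compression ratio ~ {ratio:.1f}x")
```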
