  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
791

Superconducting magnetic energy storage in power systems with renewable energy sources

Nielsen, Knut Erik January 2010
The increasing focus on large-scale integration of new renewable energy sources such as wind power and wave power introduces a need for energy storage. Superconducting Magnetic Energy Storage (SMES) is a promising alternative for active power compensation: with high efficiency, very fast response time and high power capability, it is well suited for levelling fast fluctuations. This thesis investigates the feasibility of a current source converter as a power conditioning system for SMES applications. The current source converter is compared with the voltage source converter solution from the project thesis, a control system is developed for the converter, and the modulation technique is investigated. The SMES is connected in shunt with an induction generator facing a stiff network; its objective is to compensate for power fluctuations from the induction generator caused by variations in wind speed. The converter is controlled by a PI regulator and a current compensation technique derived from abc-theory. Simulations of the system are carried out in the software PSIM. The simulations show that the SMES works as both an active and a reactive power compensator and smooths the power delivered to the grid. The converter does not, however, appear to be an optimal solution at present, mainly because of high harmonic distortion of the output currents. The system might nevertheless be interesting for low-power applications such as wave power.
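To make the power-smoothing role of the SMES concrete, the short sketch below lets a storage device absorb the difference between a fluctuating wind-power signal and a low-pass reference. It is only an illustration: the PI regulator, the abc-theory current compensation and the PSIM converter model from the thesis are not reproduced, and every number (time constant, power and energy ratings, noise level) is an assumption.

```python
import numpy as np

# Illustrative sketch of SMES power smoothing (not the thesis's PSIM model).
# The PI regulator and abc-theory compensation are replaced by a simple
# low-pass reference and a feedforward split; all numbers are assumed.

dt = 0.01                                   # time step [s]
t = np.arange(0.0, 60.0, dt)
rng = np.random.default_rng(0)
p_gen = 2.0e6 + 0.3e6 * np.sin(2 * np.pi * 0.5 * t) \
        + 0.1e6 * rng.standard_normal(t.size)     # fluctuating wind power [W]

tau = 5.0                                   # smoothing time constant [s]
alpha = dt / tau
p_rate, e_max = 0.5e6, 2.0e6                # SMES power/energy limits (assumed)

p_ref = p_gen[0]                            # low-pass reference the grid should see
e_smes = 0.5 * e_max                        # start the coil half charged [J]
p_grid = np.empty_like(p_gen)

for k in range(t.size):
    p_ref += alpha * (p_gen[k] - p_ref)     # exponential moving average
    p_smes = np.clip(p_gen[k] - p_ref, -p_rate, p_rate)   # power absorbed by the coil
    if not (0.0 <= e_smes + p_smes * dt <= e_max):
        p_smes = 0.0                        # coil empty or full: stop compensating
    e_smes += p_smes * dt
    p_grid[k] = p_gen[k] - p_smes           # smoothed power delivered to the grid

print(f"std before: {p_gen.std():.0f} W   std after: {p_grid.std():.0f} W")
```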
792

Mimetic Finite Difference Method on GPU : Application in Reservoir Simulation and Well Modeling

Singh, Gagandeep January 2010
Heterogeneous and parallel computing systems are increasingly appealing for high-performance computing. Among heterogeneous systems, GPUs have become an attractive device for compute-intensive problems: their many-core architecture, originally customized for graphics processing, is now widely available through programming architectures that exploit parallelism in GPUs. We follow this trend and implement a classical mathematical model describing incompressible single-phase fluid flow through a porous medium. The porous medium is an oil reservoir represented by corner-point grids; important geological and mathematical properties of corner-point grids are discussed. The model also incorporates pressure- and rate-controlled wells, which are used in some realistic simulations. Among the test models is Model 2 of the 10th SPE Comparative Solution Project. The underlying mathematical model is derived and then discretised using the mimetic finite difference method. The heterogeneous system used is a desktop computer with an NVIDIA GPU, and the programming architecture is CUDA, which is described. Two versions of the final discretised system have been implemented: a traditional version using an assembled global stiffness sparse matrix, and a matrix-free version in which only the element stiffness matrices are used. The former evaluates two GPU libraries, CUSP and THRUST, which are briefly described. The linear system is solved with the Jacobi-preconditioned conjugate gradient method. Numerical tests on realistic and complex reservoir models show significant performance benefits compared to corresponding CPU implementations.
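The solver named in the abstract, the Jacobi-preconditioned conjugate gradient method, can be sketched on the CPU with SciPy; a small 2-D Poisson matrix stands in for the mimetic reservoir system, and the CUSP/THRUST GPU libraries used in the thesis are not involved.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg, LinearOperator

# CPU-only sketch of a Jacobi-preconditioned conjugate gradient solve.
# A small 2-D Poisson matrix replaces the mimetic reservoir system here.

n = 100                                             # grid is n x n
T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))   # 1-D Laplacian
A = sp.kronsum(T, T).tocsr()                        # 2-D Poisson matrix (SPD)
b = np.ones(A.shape[0])

inv_diag = 1.0 / A.diagonal()                       # Jacobi preconditioner: M^{-1} = diag(A)^{-1}
M = LinearOperator(A.shape, matvec=lambda r: inv_diag * r)

x, info = cg(A, b, M=M, maxiter=5000)
print("converged" if info == 0 else f"cg returned {info}",
      "| residual norm:", np.linalg.norm(b - A @ x))
```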
793

Dynamic Management of Software Components in a Ubiquitous Collaborative Environment

Kristiansen, Yngvar January 2010
The key motivation of this thesis is to find innovative solutions that ease the deployment of ubiquitous systems, with the purpose of making technology-supported collaboration an easier task. Users in a ubiquitous environment continuously encounter new resources that might provide some value. As the number of these resources increases, managing them becomes a central task in a ubiquitous computing system. The problems and challenges discussed in this thesis are related to continuous and unpredictable changes in the ubiquitous environment, which make it difficult for users to retrieve appropriate software for utilizing resources. We also discuss the challenges of managing resources and sharing them between users. The research questions are: RQ-1: How can we extend existing service management architectures to support user-centered and community-based service management? RQ-2: Which technologies, architectures and platforms are the most suitable for implementing user-centered and community-based service management? RQ-3: How can we evaluate the usability and utility of user-centered and community-based service management, and what are the most compelling scenarios? The contributions are, correspondingly: C1: a solution proposal and an implementation of an improved service management system, based on earlier work on the Ubicollab platform. C2: four items were found suitable: (1) the deployment model used by distribution platforms for mobile applications (such as AppStore and Android Market), (2) OSGi, (3) R-OSGi, and (4) HTTP-based communication using Java Servlets. C3: the evaluation of such systems can be done in a three-step process: (1) examining whether the system fulfils its requirement specification, (2) comparing the system's functionality with that of a scenario-described ideal system, and (3) creating applications that demonstrate the utility of the system.
794

Evaluating Different Simulation-Based Estimates for Value and Risk in Interest Rate Portfolios

Kierulf, Kaja January 2010
This thesis evaluates risk measures for interest rate portfolios. First, a model for interest rates is established: the LIBOR market model. The model is applied to Norwegian and international interest rate data and used to calculate the value of the portfolio by Monte Carlo simulation. Estimation of volatility and correlation is discussed, as are the two risk measures value at risk and expected tail loss. The data used are analysed before the results of the backtesting evaluating the two risk measures are presented.
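The risk-measure step itself is simple once Monte Carlo scenarios exist. The sketch below estimates value at risk and expected tail loss from simulated profit-and-loss samples; the samples are drawn from a toy heavy-tailed distribution rather than from a calibrated LIBOR market model, and the confidence level and scale are assumptions.

```python
import numpy as np

# Sketch of the risk-measure step only: given Monte Carlo samples of the
# portfolio's change in value (here drawn from a toy t-distribution rather
# than the calibrated LIBOR market model), estimate value at risk (VaR)
# and expected tail loss (ETL) at a 99% confidence level.

rng = np.random.default_rng(1)
pnl = rng.standard_t(df=5, size=100_000) * 1e5     # simulated P&L per scenario

alpha = 0.99
losses = -pnl                                      # positive numbers are losses
var = np.quantile(losses, alpha)                   # value at risk
etl = losses[losses >= var].mean()                 # expected tail loss / expected shortfall

print(f"99% VaR: {var:,.0f}   99% ETL: {etl:,.0f}")
```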
795

Matrix-Free Conjugate Gradient Methods for Finite Element Simulations on GPUs

Refsnæs, Runar Heggelien January 2010
A block-structured approach for solving two-dimensional finite element approximations of the Poisson equation on graphics processing units (GPUs) is developed. Linear triangular elements are used, and a matrix-free version of the conjugate gradient method is utilized to solve test problems with over 30 million elements. A speedup of 24 is achieved on an NVIDIA Tesla C1060 GPU compared to a serial CPU version of the same solution approach, and a comparison is made with previous GPU implementations of the same problem.
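A minimal matrix-free conjugate gradient loop looks as follows: the operator is a function applying a 5-point Laplacian stencil, so no global matrix is ever stored. This is a plain NumPy illustration of the idea, not the GPU implementation with linear triangular elements described above.

```python
import numpy as np

# Matrix-free conjugate gradients: the operator A is a function applying a
# 5-point Laplacian stencil with zero Dirichlet boundary, so no global
# stiffness matrix is assembled or stored.

n = 200                                       # unknowns per direction

def apply_A(u_flat):
    """y = A u for the 2-D Poisson operator on an n-by-n grid."""
    u = u_flat.reshape(n, n)
    y = 4.0 * u
    y[1:, :] -= u[:-1, :]
    y[:-1, :] -= u[1:, :]
    y[:, 1:] -= u[:, :-1]
    y[:, :-1] -= u[:, 1:]
    return y.ravel()

def cg_matrix_free(apply_A, b, tol=1e-8, maxiter=10_000):
    x = np.zeros_like(b)
    r = b - apply_A(x)
    p = r.copy()
    rr = r @ r
    for _ in range(maxiter):
        Ap = apply_A(p)
        alpha = rr / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rr_new = r @ r
        if np.sqrt(rr_new) < tol * np.linalg.norm(b):
            break
        p = r + (rr_new / rr) * p
        rr = rr_new
    return x

b = np.ones(n * n)
x = cg_matrix_free(apply_A, b)
print("residual norm:", np.linalg.norm(b - apply_A(x)))
```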
796

Rekursiv blokkoppdatering av Isingmodellen / Recursive block updating of the Ising model

Sæther, Bjarne January 2006
In this report we compare three variants of Markov chain Monte Carlo (MCMC) simulation of the Ising model: single-node updating, naive block updating and recursive block updating. We begin with a general introduction to Markov random fields and the Ising model, and then present the theoretical foundation that MCMC methods rest on. After that we give a theoretical introduction to single-node updating, followed by an introduction to naive block updating, the traditional way of performing block updating. We then give a corresponding introduction to a recently proposed method for block updating, namely recursive block updating. Block updating has proven useful with respect to mixing during simulation; that is, it explores the sample space of the distribution of interest in fewer iterations than single-node updating. The problem with naive block updating, however, is that the computational cost quickly becomes high, as each iteration takes very long. We also try out recursive block updating, which aims to reduce the computational cost per iteration when performing block updating on a Markov random field. We then present the simulation algorithms and results. We have simulated the Ising model with single-node updating, naive block updating and recursive block updating, comparing the number of iterations until the Markov random field converges and, in particular, the computation time per iteration. We show that with naive block updating the computational cost per iteration increases by a factor of 91000 when going from a 3 × 3 block to a 5 × 5 block; the corresponding figure for recursive block updating is an increase by a factor of 83. We also compare the time until the Ising model converges. With naive block updating the Ising model takes 15 seconds to converge with a 3 × 3 block, 910 seconds with a 4 × 4 block and 182000 seconds with a 5 × 5 block. The corresponding times for recursive block updating are 3.74 seconds for a 3 × 3 block, 72 seconds for a 4 × 4 block and 141.2 seconds for a 5 × 5 block. With single-node updating the field converges in 6.6 seconds.
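For reference, single-node (Gibbs) updating of the Ising model, the baseline in the comparison above, can be written in a few lines. The sketch below uses a periodic lattice and assumed values for the lattice size and inverse temperature; the naive and recursive block-updating schemes, which resample whole blocks jointly, are not shown.

```python
import numpy as np

# Single-node (Gibbs) updating of the Ising model, the baseline that block
# updating is compared against.  Lattice size and inverse temperature are
# assumed values; block updating is not implemented here.

rng = np.random.default_rng(2)
n, beta = 32, 0.4                     # lattice size and inverse temperature
spins = rng.choice([-1, 1], size=(n, n))

def gibbs_sweep(spins, beta, rng):
    """One pass over the lattice, resampling each spin from its full conditional."""
    n = spins.shape[0]
    for i in range(n):
        for j in range(n):
            s_neigh = (spins[(i - 1) % n, j] + spins[(i + 1) % n, j]
                       + spins[i, (j - 1) % n] + spins[i, (j + 1) % n])
            p_up = 1.0 / (1.0 + np.exp(-2.0 * beta * s_neigh))  # P(spin = +1 | neighbours)
            spins[i, j] = 1 if rng.random() < p_up else -1
    return spins

for sweep in range(100):
    gibbs_sweep(spins, beta, rng)
print("mean magnetisation after 100 sweeps:", spins.mean())
```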
797

Security of quantum key distribution source

Simonsen, Eivind Sjøtun January 2010
Cryptography has begun its journey into the field of quantum information theory. Classical cryptography has weaknesses that may be exploited in the future, either through developments in mathematics or by quantum computers. Quantum key distribution (QKD) is a promising path for cryptography to enable secure communication in the future. Although the theory of QKD promises absolute security, current quantum crypto systems have flaws, as perfect devices have proven impossible to build. These flaws can, however, be taken into account in security proofs to ensure security even with imperfect devices. Security loopholes in QKD systems are being discovered as development progresses; nevertheless, the system being built at NTNU is intended to address them all, creating a totally secure system. During this thesis, work continued on assembling the interferometer which is the basis for encoding qubits: it was fully connected on an optical table, and interference was obtained. On the theoretical side, calculations for a photon-source-specific parameter were carried out, consisting of expanding the previous framework and applying the results both in an established security proof and in a recent generalization of this proof. Two source effects were in focus: the laser's random phase and its fluctuating pulse intensity. Where analytical derivation was no longer possible, Matlab was used for numerical calculations. Under the conditions of the framework and proofs this thesis rests on, randomized phase turned out to give a negligible improvement over the case of non-random phase. Fluctuating amplitude showed a larger effect, reducing system performance; the input parameters were extreme, however, so in a realistic situation the fluctuations should not affect system performance significantly. They must nevertheless be taken into account when proving system security.
798

Modelling of multimode and layer structure by transfermatrices

Sveinsson, Helge Mjølhus January 2010 (has links)
Modelling of multimode and layer structure by transfermatrices
799

Distribution Based Spectrum Sensing in Cognitive Radio

Christiansen, Jørgen Berle January 2010
This thesis addresses blind spectrum sensing in cognitive radio, with particular emphasis on performance in the low signal-to-noise range. It is shown how methods relying on traditional sample-based estimation, such as the energy detector and autocorrelation-based detectors, suffer at low SNRs. We attempt to solve this problem by investigating how higher-order statistics and information-theoretic distance measures can be applied to spectrum sensing. Results from a thorough literature survey indicate that the information-theoretic Kullback-Leibler (KL) divergence is promising when trying to devise a novel cognitive radio spectrum sensing scheme. Two novel detection algorithms based on KL divergence estimation are proposed; unfortunately, only one of them has a fully proven theoretical foundation, while the other has a partial theoretical framework supported by empirical results. The detection performance of the two proposed detectors is assessed against two reference detectors: the energy detector and an autocorrelation-based detector. Simulations show that the proposed KL divergence based algorithms perform worse than the energy detector in all the considered scenarios, while one of them performs better than the autocorrelation-based detector for certain signals. The reason the proposed detectors perform worse than the energy detector, despite the good properties of the estimators at low signal-to-noise ratios, is that the KL divergence between signal and noise is small; the low divergence stems from the fact that signal and noise have very similar probability distributions. Detection performance is also assessed by applying the detectors to raw data of a downconverted UMTS signal. It is shown that the noise distribution deviates from the standard assumption of circularly symmetric complex white Gaussian noise. Due to this deviation, the autocorrelation-based reference detector and the two proposed KL divergence based detectors are challenged: they rely heavily on this assumption and fail to function properly when applied to signals with deviating characteristics.
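The two ingredients compared above can be illustrated with a toy experiment: an energy-detector test statistic and a histogram-based Kullback-Leibler divergence between observed samples and the assumed Gaussian noise model. The signal model, noise level and bin count below are assumptions, and the KL estimator is a simple plug-in version, not the estimators developed in the thesis.

```python
import numpy as np
from scipy.stats import norm

# Toy comparison of two test statistics for "noise only" vs "signal present":
# the classical energy detector and a histogram-based Kullback-Leibler
# divergence between the observed samples and the assumed N(0, sigma^2)
# noise model.

rng = np.random.default_rng(3)
sigma, n = 1.0, 10_000
noise = sigma * rng.standard_normal(n)
signal = noise + 0.5 * np.sign(rng.standard_normal(n))   # crude BPSK at about -6 dB SNR

def energy_statistic(x, sigma):
    return np.sum(x ** 2) / sigma ** 2                   # ~ chi-squared with n dof under H0

def kl_statistic(x, sigma, bins=50):
    edges = np.linspace(-5 * sigma, 5 * sigma, bins + 1)
    counts, _ = np.histogram(x, bins=edges)
    p_hat = counts / x.size                              # empirical bin probabilities
    q = np.diff(norm.cdf(edges, scale=sigma))            # noise-model bin probabilities
    mask = p_hat > 0
    return np.sum(p_hat[mask] * np.log(p_hat[mask] / q[mask]))

for name, x in [("noise only", noise), ("signal + noise", signal)]:
    print(f"{name:15s} energy = {energy_statistic(x, sigma):9.0f}"
          f"   KL = {kl_statistic(x, sigma):.4f}")
```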
800

Design and Evaluation of a Personalized Mobile Tourist Application

Wium, Magnar January 2010
Mobile applications supporting tourists with travel information can use information about the user's location, time and personal preferences to provide personalized recommendations. This can address the problem of displaying information and navigating on small mobile devices, as it allows tourists to receive information that fits their current situation and needs. However, filtering information introduces new challenges in terms of facilitating user control and transparency. In this thesis we have developed and evaluated a personalized mobile tourist application based on collaborative filtering that tries to meet these challenges. The design of the application builds on experience from similar projects and on research on interaction design in recommender systems. The user evaluation of our system suggests that our approach is feasible, but more research is needed to predict the acceptance of such an application among tourists.
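As an illustration of the collaborative-filtering idea behind the recommendations, the sketch below predicts ratings with a similarity-weighted average over other users. The attraction names and ratings are invented, and the code does not reflect the thesis's actual system or data.

```python
import numpy as np

# Generic user-based collaborative filtering on a toy rating matrix.
# Attractions and ratings are invented; this is not the thesis's system.

attractions = ["museum", "fjord cruise", "old town walk", "aquarium", "ski jump"]
ratings = np.array([            # rows: users, columns: attractions, 0 = not rated
    [5, 4, 0, 1, 0],
    [4, 5, 4, 0, 1],
    [1, 0, 5, 4, 5],
    [0, 1, 4, 5, 4],
], dtype=float)

def cosine(u, v):
    mask = (u > 0) & (v > 0)                 # compare only co-rated items
    if not mask.any():
        return 0.0
    return float(u[mask] @ v[mask] /
                 (np.linalg.norm(u[mask]) * np.linalg.norm(v[mask])))

def predict(user, item):
    """Similarity-weighted average of other users' ratings of this item."""
    num = den = 0.0
    for other in range(ratings.shape[0]):
        if other == user or ratings[other, item] == 0:
            continue
        w = cosine(ratings[user], ratings[other])
        num += w * ratings[other, item]
        den += abs(w)
    return num / den if den else 0.0

user = 0
unrated = [i for i in range(len(attractions)) if ratings[user, i] == 0]
best = max(unrated, key=lambda i: predict(user, i))
print(f"recommend '{attractions[best]}' to user {user}"
      f" (predicted rating {predict(user, best):.2f})")
```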
