51

Electromagnetic Scattering : A Surface Integral Equation Formulation

Aas, Rune Øistein January 2012 (has links)
A numerical approach to solving the problem of electromagnetic (EM) scattering from a single scatterer is studied. The problem involves calculating the total EM field at arbitrary observation points when a planar EM wave is scattered. The method considered is a surface integral equation (SIE) formulation involving the use of a dyadic Green's function. A theoretical derivation of the magnetic field integral equation (MFIE) and the electric field integral equation (EFIE) from Maxwell's equations is shown. The Method of Weighted Residuals (MWR) and Kirchhoff's Approximation (KA), with their respective domains of application, are studied as ways of estimating the surface current densities. A parallelized implementation of the SIE method, including both the KA and the MWR, is written in FORTRAN. The implementation is applied to three concrete versions of the scattering problem, all involving a spherical perfectly conducting scatterer: the cases of incoming wavelength much larger than, much smaller than, and comparable to the radius of the scatterer. The problems are divided into two separate solution categories, depending on whether or not the KA is assumed valid. A recursive discretization algorithm was found to be superior to a Delaunay triangulation algorithm due to less spread in element shape and area. The produced results agreed well with the expected interference pattern and symmetry requirements, with relative errors on the order of magnitude $10^{-5}$ and less. The case of wavelength large compared to the radius was also compared with Rayleigh scattering theory, considering the far-field dependence on wavelength, scattering angle and distance from the scatterer. This resulted in relative errors of 2.1 percent and less. The main advantage of the SIE method is that it only requires the surface of the scatterer to be discretized, thus saving computational time and memory compared to methods requiring discretization of a volume.
The method is also capable of producing accurate results for observation points arbitrarily close to the scatterer surface. A brief discussion on how the program may be modified in order to extend its capabilities is also included.
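The Rayleigh-regime comparison described above rests on the classical far-field scaling of scattered intensity with wavelength, angle and distance. The sketch below (Python, not the thesis's FORTRAN code) encodes that dependence; the angular factor $(1+\cos^2\theta)/2$ is the unpolarized dielectric-sphere form, used here purely as an illustration, while the $\lambda^{-4}$ and $r^{-2}$ dependencies are generic.

```python
import numpy as np

def rayleigh_far_field_intensity(wavelength, radius, r, theta):
    """Relative far-field intensity for Rayleigh scattering off a small
    sphere (radius << wavelength): I ~ a^6 / (lambda^4 r^2) * (1 + cos^2 theta)/2.
    Prefactors are omitted; only the functional dependence is kept."""
    k = 2.0 * np.pi / wavelength
    return (k**4 * radius**6 / r**2) * (1.0 + np.cos(theta)**2) / 2.0
```

For example, doubling the wavelength at fixed geometry reduces the scattered intensity by a factor of $2^4 = 16$, which is the kind of dependence the thesis checks against Rayleigh theory.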
52

On the Design of Accurate Spatial and Temporal Temperature Measurements in Sea Ice

Meyer, Karsten January 2012 (has links)
Studies on sea ice have become increasingly popular among researchers in the last decades, due to its effect on the global climate and the challenges it presents for arctic engineering. The research demands reliable data acquisition on various properties, in this case the temperature gradient in forming sea ice. This project continues the development of a spatial temperature instrument, which is intended to provide measurements with an accuracy of 0.01°C, enabling researchers to distinguish between fine variations in melting point due to salinity. The outcome of the project is a complete revision of the probe design, which, according to simulations, now for the first time provides an environment for the sensor array that satisfies the accuracy requirement. The design also combines the low thermal resistance between sensor and medium of an earlier steel probe with the low thermal interference of an insulating plastic probe. Manufacturing and assembly of the updated temperature instrument with a new probe is almost finished, and should, after completion, provide experimental data to back up the claims made by the simulations.
53

Development and testing of a Linnik Interference Microscope for Sub-surface Inspection of Silicon during moving Indentation

Kittang, Lars Oskar Osnes January 2012 (has links)
Fixed-abrasive diamond wire sawing is a promising technique for reducing the costs of sawing silicon wafers for solar cells. The microscopic mechanisms of material removal in the process are, however, not fully understood, and must be surveyed in order for costs to be further reduced. An interference microscope for sub-surface inspection of mono-crystalline silicon has been built based on the Linnik configuration, with specific application to in-situ monitoring of moving indentations. The working principles of the instrument are explained from a literature study on relevant theory, combining concepts of optical interference and coherence with imaging theory. The optical system has been experimentally tested in terms of its performance in conventional imaging as well as its interferometric capabilities. Tests of the imaging performance show that a large magnification is accompanied by a lateral resolution with a lower limit of $0.9\,\mu\mathrm{m}$ and an adequately long depth of field. This provides improved conditions for imaging of internal reflections in silicon, compared to a previously used prototype. Using a light source of low temporal coherence, the capability of the system to measure depth profiles of silicon surfaces has been tested. The technique calculates depths from interferograms recorded by scanning a reference field. Preliminary results from a flat test surface show that depths are not determined accurately enough for calculated profiles to be considered reliable reconstructions. It is argued that the inaccuracy is caused by a number of experimental factors, including non-uniform illumination, undesired reflections and non-uniform sampling intervals in scanning. Two experiments with moving indentations on silicon surfaces have been performed, monitored by conventional imaging and by calculation of interferometric phase maps, respectively.
The results are seen in the context of the theoretical understanding of material removal mechanisms in fixed-abrasive diamond wire sawing. The evolution of surface damage is observed as interconnection of chippings in both experiments. In addition, sub-surface lateral cracks are identified from interferometric phase maps. The phase maps of surface damage can, however, only to a limited extent be interpreted as topographic contour lines of surface depth. A deeper knowledge of removal mechanisms requires quantitative measurements of depths. This can be better achieved by calculating accurate depth profiles from interferograms. Future enhancement of the system depends on a reevaluation of the optical design as well as better control of the sampling intervals in scanning.
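Depth recovery from a low-coherence scan of the kind described above typically locates the peak of the fringe envelope, which occurs at zero optical path difference. Below is a minimal Python sketch of that idea (not the thesis implementation), estimating the envelope from the one-sided spectrum (analytic signal):

```python
import numpy as np

def depth_from_scan(positions, intensities):
    """Estimate surface height from a low-coherence interferogram recorded
    while scanning the reference arm: the fringe envelope peaks where the
    optical path difference is zero. The envelope is approximated by the
    magnitude of the analytic signal, built by zeroing negative frequencies."""
    signal = intensities - np.mean(intensities)  # remove DC background
    n = len(signal)
    spectrum = np.fft.fft(signal)
    spectrum[n // 2:] = 0.0                      # keep positive frequencies only
    envelope = np.abs(2.0 * np.fft.ifft(spectrum))
    return positions[np.argmax(envelope)]
```

In practice the accuracy of such an estimate depends on exactly the factors the abstract lists: illumination uniformity, spurious reflections, and even sampling of the scan positions.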
54

Gold and Platinum Surface Nanostructures on Highly Oriented Pyrolytic Graphite

Karlsen, Terje Kultom January 2012 (has links)
Self-assembled platinum and gold nanostructures, formed by evaporation and subsequent diffusion-limited aggregation of metal on highly oriented pyrolytic graphite, have been studied by photoemission spectroscopy and scanning electron microscopy. Dendritic gold nanostructures were observed on samples onto which gold was evaporated at room temperature. For samples onto which gold was evaporated at reduced temperatures, no such dendrites were found. For samples evaporated with platinum, small nano-spiders were seen at short evaporation times, and more complex fractal structures at longer evaporation times. Studying the oxidation of carbon monoxide over the platinum nanostructures yielded no clear correlation between nanostructure size and oxidation rate.
55

The Local Level-Set Extraction Method for Robust Calculation of Geometric Quantities in the Level-Set Method

Ervik, Åsmund January 2012 (has links)
The level-set method is an implicit interface capturing method that can be used in two or more dimensions. The method is popular e.g. in computer graphics and, as here, in simulations of two-phase flow. The motivation for the simulations performed here is to obtain a better understanding of the complex two-phase flow phenomena occurring in heat exchangers used for liquefaction of natural gas, including the study of droplet-film interactions and coalescence. One of the main advantages of the level-set method is that it handles changes in the interface topology in a natural way. In the present work, the calculation of the curvature and normal vectors of an interface represented by the level-set method is considered. The curvature and normal vectors are usually calculated using central-difference stencils, but this standard method fails when the interface undergoes a topological change, e.g. when two droplets collide and merge. Several methods have previously been developed to handle this problem. In the present work, a new method is presented, which builds on existing methods. The new method handles more general cases than previous methods. In contrast to some previous methods, the present method retains the implicit formulation and can easily be extended to three-dimensional simulations, as demonstrated in this work. Briefly, the new method consists of extracting one or more local level sets for bodies close to the grid point considered, reinitializing these local level sets to remove kinks, and using these to calculate the curvature and normal vector at the grid point considered. For the curvature, multiple values are averaged, while for the normal vector, the one corresponding to the closest interface is selected. With this new method, several two-phase flow simulations are performed that are relevant for understanding the liquefaction of natural gas. The new method enables simulations that are more general than previous ones.
A two-dimensional simulation was performed of a 0.6 mm diameter methanol droplet falling through air and merging with a deep pool of methanol. The new method gave good results in this case, but unphysical oscillations in the pressure field rendered the result unsuitable for comparison with experimental results. Several similar cases with significantly lower density differences between the two fluids were also considered; in these cases, the pressure field behaved physically, but the results are less applicable to the understanding of natural gas liquefaction, and better suited for validation of the new method. In particular, an axisymmetric simulation of a 0.11 mm diameter water droplet in decane merging with a deep pool of water has been considered. The results of this simulation show very close agreement with experimental data. Attempts were also made to simulate a larger droplet, but in this case finer grids were needed than what could be achieved here, due to the computational cost of grid refinement. Purely geometrical results are also presented in order to validate the new method, and three-dimensional results are given for a static interface configuration, demonstrating that the method is easily extended to higher dimensions.
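The standard central-difference curvature computation that the new method improves upon can be sketched in a few lines (a minimal 2D Python illustration, not the thesis code). It evaluates $\kappa = \nabla\cdot(\nabla\phi/|\nabla\phi|)$ and works well for smooth $\phi$, but breaks down near kinks in the level-set field, e.g. just before two droplets merge:

```python
import numpy as np

def curvature_central(phi, h):
    """Curvature kappa = div(grad(phi)/|grad(phi)|) of a 2D level-set field
    phi on a uniform grid with spacing h, using central differences:
    kappa = (pxx*py^2 - 2*px*py*pxy + pyy*px^2) / |grad(phi)|^3."""
    px  = (np.roll(phi, -1, 1) - np.roll(phi, 1, 1)) / (2 * h)   # d(phi)/dx
    py  = (np.roll(phi, -1, 0) - np.roll(phi, 1, 0)) / (2 * h)   # d(phi)/dy
    pxx = (np.roll(phi, -1, 1) - 2 * phi + np.roll(phi, 1, 1)) / h**2
    pyy = (np.roll(phi, -1, 0) - 2 * phi + np.roll(phi, 1, 0)) / h**2
    pxy = (np.roll(np.roll(phi, -1, 1), -1, 0) - np.roll(np.roll(phi, 1, 1), -1, 0)
           - np.roll(np.roll(phi, -1, 1), 1, 0) + np.roll(np.roll(phi, 1, 1), 1, 0)) / (4 * h**2)
    denom = (px**2 + py**2) ** 1.5 + 1e-12   # guard against |grad(phi)| = 0
    return (pxx * py**2 - 2 * px * py * pxy + pyy * px**2) / denom
```

For a signed-distance field of a circle of radius $r$, this returns $\kappa \approx 1/r$ away from the center; it is exactly near merging interfaces, where $\phi$ has a kink, that these stencils produce the spurious values the local level-set extraction is designed to avoid.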
56

System Identification of Unmanned Aerial Vehicles

Ingebretsen, Thomas January 2012 (has links)
The least squares method has been applied to estimate parameters in an aerodynamic model of a simulated aircraft, using data that can be expected to be available from sensors on an Unmanned Aerial Vehicle. A combination of two non-linear state observers has been implemented to estimate wind data such as angle of attack, sideslip and dynamic pressure. Simulations have confirmed that the observers are able to estimate the wind data using noisy sensor measurements. Parameter estimation has been demonstrated with both measured and estimated wind data.
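The parameter-estimation step can be illustrated as an ordinary least-squares fit of a linear-in-parameters model. The sketch below is hypothetical (the lift-coefficient model, regressor names and numbers are illustrative, not taken from the thesis):

```python
import numpy as np

def fit_aero_params(regressors, measured):
    """Least-squares estimate theta of a linear-in-parameters model
    y = A @ theta: minimizes ||A @ theta - y||^2."""
    theta, *_ = np.linalg.lstsq(regressors, measured, rcond=None)
    return theta

# Hypothetical lift model: C_L = C_L0 + C_La * alpha + C_Lde * delta_e
rng = np.random.default_rng(0)
alpha = rng.uniform(-0.1, 0.2, 50)       # angle of attack [rad]
delta_e = rng.uniform(-0.05, 0.05, 50)   # elevator deflection [rad]
A = np.column_stack([np.ones_like(alpha), alpha, delta_e])
C_L = A @ np.array([0.3, 5.0, 0.4])      # synthetic noiseless "measurements"
theta = fit_aero_params(A, C_L)          # recovers [0.3, 5.0, 0.4]
```

With noisy measurements the same call returns the minimum-variance linear estimate rather than the exact coefficients, which is the situation the thesis tests with observer-estimated wind data.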
57

Maximum Entropy and Maximum Entropy Production in Macroecology

Sognnæs, Ida Andrea Braathen January 2011 (has links)
The Maximum Entropy Theory of Ecology (METE), developed by John Harte, presents an entirely new method of making inferences in ecology. The method is based on the established mathematical procedure of Maximum Information Entropy (MaxEnt), developed by Edwin T. Jaynes, and is used to derive a range of important relationships in macroecology. The Maximum Entropy Production (MEP) principle is a more recent theory. This principle was used by Paltridge to successfully predict the climate on Earth in 1975. It has been suggested that this principle can be used for predicting the evolution of ecosystems over time in the framework of METE. This idea is at the very frontier of Harte's theory. This thesis investigates the hypothesis that the information entropy defined in METE is described by the MEP principle. I show that the application of the MEP principle to the information entropy in METE leads to a range of conceptual and mathematical difficulties. I show that the initial hypothesis alone cannot predict the time rate of change, but that it does predict that the number of individual organisms and the total metabolic rate of an ecosystem will continue to grow indefinitely, whereas the number of species will approach one. I also conduct a thorough review of the MEP literature and discuss the possibility of applying the MEP principle to METE based on analogies. I also study a proof of the MEP principle published by Dewar in 2003 and 2005 in order to investigate the possibility of an application based on first principles. I conclude that the MEP principle has a low probability of success if applied directly to the information entropy in METE. One of the most central relationships derived in METE is the expected number of species in a plot of area $A$. I conduct a numerical simulation in order to study the variance of the actual number of species in a collection of plots.
I then suggest two methods to be used for comparison between predictions and observations in METE. I also conduct a numerical study of selected stability properties of Paltridge's climate model and conclude that none of these can explain the observed MEP state in nature.
58

Engineered Surfaces for Redirection of Light

Walle, Øystein January 2011 (has links)
It is of interest to construct windows that can spread the transmitted light in a specified manner. The Kirchhoff approximation in the geometrical optics limit, in combination with a chosen general form of the window surfaces, yields the profile for the window surfaces, letting us specify how the light should be spread. The probability distribution function for the slopes of a window surface consisting of joined line segments was implemented in the simulation software Maxwell1D. Simulations show the feasibility of such windows. However, they do not perform well when illuminated at an angle of incidence other than the one they were designed for. By using Snell's law to compensate, a simpler system can be used, greatly reducing the time needed for the simulations while simultaneously obtaining higher accuracy in the results.
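The Snell's-law compensation mentioned above amounts to mapping the external angle of incidence to the internal propagation angle before tracing the simpler system. A minimal sketch (the refractive indices are illustrative defaults, not values from the thesis):

```python
import numpy as np

def refracted_angle(theta_i, n1=1.0, n2=1.5):
    """Snell's law: n1*sin(theta_i) = n2*sin(theta_t); returns theta_t [rad].
    Assumes n1*sin(theta_i)/n2 <= 1, i.e. no total internal reflection."""
    return np.arcsin(n1 * np.sin(theta_i) / n2)
```

Applying this mapping once per incidence angle, instead of re-simulating the full layered geometry, is what makes the compensated simpler system cheap to evaluate.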
59

A random Matrix Approach to collective Trends of falling and rising Stock Markets

Hansen, Christoffer Berge January 2011 (has links)
An inverse statistics analysis of one-minute stock quotes from 492 large European companies has revealed the existence of a gain-loss asymmetry in the corresponding index. The gain-loss asymmetry differs from that observed for daily closure prices of the Dow Jones Industrial Average [38], as the probability of the optimal investment horizon for a gain is higher than that of a loss. For individual stocks, the gain-loss asymmetry was observed to appear only for significantly larger return levels. To the best of our knowledge, this is the first time such an analysis has been performed on high-frequency data. A principal component analysis was done by performing an eigenvalue decomposition of the correlation matrix from a sliding time window. The first principal component was observed to describe the market excellently. Its corresponding eigenvalue was observed to be significantly larger than theoretical predictions from random matrix theory, implying that the eigenvalue carries information common to all stocks. Using this eigenvalue as an index measuring the collectivity in the market has revealed the existence of collective trends that appear to be stronger during falling than rising markets. This has been observed for two different datasets: the above-described one-minute stock quotes, and daily closure prices of the 29 stocks composing the DJIA in late February 2008. The observation is in accordance with the results of Balogh et al. [40], and provides further support to the speculation of Johansen et al. [37] that a difference in collective trends is the reason behind the gain-loss asymmetry observed in indexes but not in individual stocks at the same return level.
The key idea behind the fear factor model of Donangelo et al. [42] has been strongly supported by the observation that collective trends appear to be stronger during sharp index drops. As the collectivity increment has been observed to depend on the size of the index drop, it is suggested that the model should also incorporate individual fear factors for economic sectors, in addition to the global fear factor governing the market as a whole. Periods exhibiting a rising index positively correlated with the strength of collectivity have indicated the presence of an optimism factor that should also be incorporated in the fear factor model [42], forcing stocks to rise synchronously.
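The random-matrix comparison described above can be sketched in a few lines (a minimal Python illustration under stated assumptions, not the thesis code): the largest eigenvalue of the empirical correlation matrix is compared with the Marchenko-Pastur upper edge $\lambda_+ = (1 + \sqrt{N/T})^2$ expected for uncorrelated returns; an eigenvalue far above this edge signals a genuine collective "market mode".

```python
import numpy as np

def leading_eigenvalue(returns):
    """Largest eigenvalue of the correlation matrix of a (T, N) window of
    returns, plus the Marchenko-Pastur upper edge for pure-noise returns."""
    T, N = returns.shape
    z = (returns - returns.mean(axis=0)) / returns.std(axis=0)  # standardize
    corr = z.T @ z / T                       # empirical correlation matrix
    lam_max = np.linalg.eigvalsh(corr)[-1]   # eigvalsh sorts ascending
    lam_plus = (1.0 + np.sqrt(N / T)) ** 2
    return lam_max, lam_plus
```

With a common factor shared by all stocks (a synthetic one-factor market), `lam_max` lands far above `lam_plus`, which is the signature of collectivity the thesis tracks through the sliding window.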
60

Asymmetriske Energivariasjoner i Turbulens : Invers statistikk metode for beskrivelse av asymmetrisk energivariasjon / Asymmetric Energy Fluctuations in Turbulence : Inverse statistical method for the description of asymmetric energy variation

Mersland, Mailinn Blandkjenn January 2011 (has links)
Energy variation in fully developed turbulence is studied. In a time series of the turbulence energy, an asymmetry between positive and negative energy changes has been discovered. The inverse statistics method provides a way to study this asymmetry more closely. For the analysis in this thesis, turbulent flow is generated using the GOY shell model. The shell model is an approximation to the Navier-Stokes equations for the motion of the flow, and has previously been shown to give realistic values for the energy and velocity of a turbulent flow. Using forward statistics and inverse statistics on the turbulence energy, an asymmetry in the energy changes has been found. It is shown that a negative energy variation most likely occurs before a positive energy variation of the same magnitude. The time difference is empirically found to follow the relation $\tau_{\delta E} \sim \delta E^{0.749}$, where $\tau_{\delta E}$ is the expected waiting time for an energy variation $\delta E$. In addition, a positive trend is found in the energy time series, meaning that the energy change over short time intervals is most likely positive. An attempt is made to describe this asymmetry, and the origin of the phenomenon is discussed.
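The inverse-statistics quantity used here, the waiting time for an energy change of a given size, can be sketched as a first-passage computation (a minimal Python illustration, not the thesis code):

```python
import numpy as np

def first_passage_times(series, level):
    """Inverse statistics: for each starting time t, the waiting time until
    the series has changed by at least `level` (positive level: a rise of
    that size; negative level: a drop). Start times with no passage are
    skipped; the mean of the result estimates the expected waiting time."""
    times = []
    n = len(series)
    for t in range(n - 1):
        future = series[t + 1:] - series[t]
        hits = np.nonzero(future >= level)[0] if level > 0 else np.nonzero(future <= level)[0]
        if hits.size:
            times.append(hits[0] + 1)  # +1: first future sample is one step away
    return np.array(times)
```

Averaging these waiting times as a function of the level gives the empirical scaling relation between expected waiting time and energy variation reported above.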
