551.
Fabrication and Characterization of GaAs/AlGaAs Core-Shell Photonic Nanowires. Rogstad, Espen, January 2009.
<p>GaAs/AlGaAs core-shell nanowires (NWs) were grown on GaAs(111)B substrates by Au-assisted molecular beam epitaxy (MBE) to investigate how different Al compositions in the shell influence the structural and optical properties of the NWs. Scanning electron microscopy (SEM) revealed that an increase in Al content leads to a higher radial growth rate and a lower axial growth rate of the AlGaAs shell. Low-temperature μ-photoluminescence (PL) measurements showed a marked improvement in luminescence for the GaAs/AlGaAs core-shell NWs compared to GaAs NWs without a shell.</p>
552.
Accurate discretizations of torqued rigid body dynamics. Gustafsson, Einar, January 2010.
<p>This paper investigates the solution of the free rigid body equations of motion, as well as the equations governing the torqued rigid body. We consider two semi-exact methods for the solution of the free rigid body equations, and we discuss the use of both rotation matrices and quaternions to describe the motion of the body, with our focus on the quaternion formulation. The approach to which we give the most attention is based on the Magnus series expansion, and we derive numerical methods of order 2, 4, 6, and 8, which are optimal in that they require a minimal number of commutators. The other approach uses Gaussian quadrature to approximate an elliptic integral of the third kind. Both methods rely on the exact solution of the Euler equation, which involves the exact computation of the elliptic integral of the first kind. For the torqued rigid body equations, we divide the system into two parts, one of which is the free rigid body equations; the solutions of the two parts are then combined in the Störmer-Verlet splitting scheme. We use these methods to solve the so-called marine vessel equations. Our numerical experiments suggest that the methods we present are robust and accurate numerical integrators of both the free and the torqued rigid body.</p>
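The splitting strategy described in this abstract can be sketched in a few lines: a half-step torque "kick" on the angular momentum, a free rigid body step, and another half kick. This is only a rough sketch, not the thesis's method: the exact free flow in the thesis uses elliptic integrals, whereas here a standard RK4 step stands in for it, and the inertia values and torque are hypothetical.

```python
import numpy as np

I = np.array([1.0, 2.0, 3.0])  # hypothetical principal moments of inertia

def euler_rhs(m):
    # Free rigid body Euler equations in angular momentum form:
    # dm/dt = m x omega, with omega = I^{-1} m (diagonal inertia).
    return np.cross(m, m / I)

def free_step(m, h):
    # RK4 stand-in for the exact (elliptic-integral based) free flow.
    k1 = euler_rhs(m)
    k2 = euler_rhs(m + 0.5 * h * k1)
    k3 = euler_rhs(m + 0.5 * h * k2)
    k4 = euler_rhs(m + h * k3)
    return m + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def torqued_step(m, h, torque):
    # Störmer-Verlet (Strang) splitting: half torque kick,
    # free rigid body flow, half torque kick.
    m = m + 0.5 * h * torque(m)
    m = free_step(m, h)
    m = m + 0.5 * h * torque(m)
    return m
```

The free flow conserves the norm of the angular momentum exactly; a good numerical stand-in should conserve it to high accuracy over a single step.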
553.
Improving Elastography using SURF Imaging for Suppression of Reverberations. Grythe, Jørgen, January 2010.
<p>For some applications of the Second-order UltRasound Field (SURF) imaging technique, a real-time delay-estimation algorithm has been developed for estimating spatially range-varying delays in RF signals. The algorithm is a phase-based approach to subsample delay estimation and makes no assumption about the local delay variation; any parametric model can be used to describe it. The phase-based estimator uses estimates of the instantaneous frequency and the phase difference, and the relationship between the two, to estimate the delay. The estimated delay can be used to calculate an improved estimate of the instantaneous frequency, which in turn yields new, updated delay values in an iterative scheme. Although the iterative scheme introduces a larger bias, the estimated delay values have a significantly lower standard deviation than with the original method. The delay estimator, originally developed for estimating propagation delays for SURF imaging, can also be used for elastography. By not being restricted to locally constant delays, it can more robustly estimate sharp changes in tissue stiffness and resolve small differences in strain more closely. Two parametric models for the local delay have been tried: one linear and one second-degree polynomial. The two models have been tested on an elastography recording provided by Ultrasonix Medical Corporation (Vancouver, Canada), and in vitro. Using a second-degree polynomial as the parametric model for the delay is better than a linear model at detecting edges of inclusions located at depths where the strain is lower than near the transducer surface. The differences can be further emphasized by spatial filtering with a median filter. The downside of updating the model is an increase in computational time of approximately 50%.
Multiple reflections, also known as reverberations, appear as acoustic noise in ultrasound images and can greatly impair time-delay estimation, particularly in elastography. Today reverberation suppression is achieved by second harmonic imaging, but that method suffers from low penetration and little or no signal in the near field. The SURF imaging technique offers reverberation suppression combined with imaging at the fundamental frequency. A reverberation model has been established, and the effect of reverberations on estimated elastography images is studied. When a layered silicon plate was used as the reverberation model and placed on top of the imaging phantom, elastography images were not obtained because the quality of the recording was degraded by power loss. When reverberations were instead added by computer simulation to a recording made with a SURF probe with reverberation suppression, a marked difference was observed between elastography estimates on the image with reverberations and on the image with both reverberations and reverberation suppression. Estimating on a signal with reverberations, the phase-based time-delay algorithm was unable to distinguish any differences in elasticity at all. Estimating time delays on a signal with reverberations and SURF reverberation suppression, however, the algorithm was able to clearly estimate differences in strain and display the presence of an inclusion.</p>
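The idea of combining an instantaneous-frequency estimate with a local phase difference can be sketched as follows. This is a minimal illustration of a phase-based subsample delay estimator in this general family, not the thesis's actual SURF implementation; the function name and the sign convention (positive delay means the second signal lags) are assumptions.

```python
import numpy as np
from scipy.signal import hilbert

def estimate_delay(x, y, fs):
    """Phase-based subsample delay estimate between two narrowband
    signals sampled at rate fs, per sample:
        tau ≈ -(phase of y minus phase of x) / (2*pi*f_inst).
    Positive tau means y lags x (assumed sign convention)."""
    ax, ay = hilbert(x), hilbert(y)
    # Local phase difference between the two analytic signals.
    dphi = np.angle(ay * np.conj(ax))
    # Instantaneous frequency from the unwrapped phase of x.
    phase = np.unwrap(np.angle(ax))
    finst = np.gradient(phase) * fs / (2 * np.pi)
    return -dphi / (2 * np.pi * finst)
```

For a pure delayed sinusoid the estimate recovers the delay to well below one sample period; edge samples are unreliable because of the Hilbert transform's boundary effects, so a robust summary (e.g. a median over the interior) is used below.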
554.
Calculation of the coverage area of mobile broadband communications: Focus on land. Martínez Gálvez, Antonio, January 2010.
555.
Photonic crystal light emitting diode. Leirset, Erlend, January 2010.
<p>This master's thesis describes electromagnetic simulations of a gallium antimonide (GaSb) light emitting diode (LED). A problem for such devices is that most of the generated light undergoes total internal reflection at the surface and is therefore prevented from coupling out of the semiconductor material. Etching a 2D photonic crystal grating into the LED surface circumvents total internal reflection and could therefore be used to increase the total transmission. The simulation method developed here finds geometry parameters for the photonic crystal that optimize light extraction. A set of plane waves was simulated using FDTD to build an equivalent of the Fresnel equations for the photonic crystal surface, from which the total transmittance and radiation patterns for the simulated geometries were calculated. The results indicated an increase in transmission of up to 70% using a square grating of holes with a radius of 0.5 µm, a hole depth of 0.4 µm, and a grating constant of 1 µm. A hexagonal grating of holes and a square grating of isotropically etched holes were also simulated and showed improvements on the same scale, but with different hole dimensions. The simulations were computationally very demanding, and the simulation structure therefore had to be heavily trimmed to keep the calculation time reasonable. This may have reduced the accuracy of the results; in particular, the optimum grating constant and the value of the optimum improvement itself are believed to be somewhat inaccurate.</p>
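The total internal reflection problem described above can be quantified with a simple escape-cone estimate: for isotropic emission inside a high-index material, only rays within the critical-angle cone of a flat surface can exit. This is a back-of-the-envelope sketch, not the thesis's FDTD method, and the GaSb refractive index of 3.8 is an assumed round value.

```python
import math

def extraction_fraction(n_semiconductor, n_outside=1.0):
    # Fraction of isotropically emitted light that falls inside the
    # escape cone set by total internal reflection at a single flat
    # top surface (Fresnel losses inside the cone are ignored).
    theta_c = math.asin(n_outside / n_semiconductor)  # critical angle
    return 0.5 * (1.0 - math.cos(theta_c))

# With n ≈ 3.8 (assumed for GaSb near its emission wavelength) only
# about 1-2% of the light escapes a plain surface, which is why a
# photonic crystal grating that relaxes total internal reflection
# can give a large relative improvement.
```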
556.
Bandwidth Selection in Kernel Density Estimation. Kile, Håkon, January 2010.
<p>In kernel density estimation, the most crucial step is selecting a proper bandwidth (smoothing parameter). There are two conceptually different approaches to this problem: a subjective and an objective one. In this report we consider only the objective approach, which is based on minimizing an error defined by an error criterion. The most common objective bandwidth selection method minimizes a squared error expression, but this method is not without its critics: it is said not to perform satisfactorily in the tail(s) of the density and to put too much weight on observations close to the mode(s). An approach that minimizes an absolute error expression is thought to be free of these drawbacks. We provide a new explicit formula for the mean integrated absolute error. The optimal mean integrated absolute error bandwidth is compared to the optimal mean integrated squared error bandwidth, and we argue that these two bandwidths are essentially equal. In addition, we study data-driven bandwidth selection and propose a new data-driven bandwidth selector, which shows promising behavior with respect to the visual error criterion, especially for limited sample sizes.</p>
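As a concrete illustration of objective, squared-error-based bandwidth selection, the classical normal-reference (Silverman-type) rule picks the bandwidth minimizing the asymptotic mean integrated squared error under a Gaussian reference density. This is a standard textbook rule, not the new selector proposed in the report.

```python
import numpy as np

def silverman_bandwidth(x):
    # Normal-reference bandwidth: minimizes the asymptotic MISE when
    # the underlying density is Gaussian. The robust scale estimate
    # min(std, IQR/1.349) guards against heavy tails.
    n = len(x)
    iqr = np.percentile(x, 75) - np.percentile(x, 25)
    sigma = min(np.std(x, ddof=1), iqr / 1.349)
    return 0.9 * sigma * n ** (-0.2)

def kde(x, grid, h):
    # Gaussian-kernel density estimate evaluated on a grid.
    u = (grid[:, None] - x[None, :]) / h
    return np.exp(-0.5 * u**2).sum(axis=1) / (len(x) * h * np.sqrt(2 * np.pi))
```

For standard normal data the rule gives h ≈ 0.9·n^(-1/5), and the resulting estimate at the mode is close to the true density value 1/√(2π) ≈ 0.399.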
557.
Analysis of the Transport Layer Security protocol. Firing, Tia Helene, January 2010.
<p>In this master's thesis we present a security analysis of the TLS protocol, with particular emphasis on the recently discovered renegotiation attack. Our security proof shows that the Handshake protocol with renegotiation, including the fix from the IETF, is secure and hence no longer vulnerable to the renegotiation attack. We have also analysed the Handshake protocol with session resumption, and the Application Data protocol together with the Record protocol; both were deemed secure as well. All security proofs are based on the Universal Composability (UC) security framework.</p>
558.
Topology and Data. Brekke, Birger, January 2010.
<p>In recent years, research has explored topology as a new tool for studying data sets, typically high dimensional ones. These studies have produced new methods for qualitative analysis, simplification, and visualization of high dimensional data sets. One good example where these methods are useful is the study of microarray (DNA) data. To use these methods, one needs knowledge of several topics in topology. In this paper we introduce simplicial homology, persistent homology, Mapper, and some simplicial complex constructions.</p>
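As a small illustration of the simplicial homology machinery mentioned above, the Betti numbers of a low-dimensional complex can be computed from the ranks of its boundary matrices. This is a minimal sketch over the rationals (real persistent-homology software works over a field with optimized reductions); the orientation conventions, with edges as sorted pairs and triangles as sorted triples, are assumptions of this sketch.

```python
import numpy as np

def betti_numbers(vertices, edges, triangles):
    # b0 = #components, b1 = #independent loops, from boundary ranks:
    # b0 = |V| - rank(d1),  b1 = |E| - rank(d1) - rank(d2).
    # Edges are sorted pairs (a, b), triangles sorted triples (a, b, c).
    d1 = np.zeros((len(vertices), len(edges)))
    for j, (a, b) in enumerate(edges):
        d1[a, j] = -1.0
        d1[b, j] = 1.0
    edge_index = {e: i for i, e in enumerate(edges)}
    d2 = np.zeros((len(edges), len(triangles)))
    for j, (a, b, c) in enumerate(triangles):
        # boundary of (a,b,c) = (b,c) - (a,c) + (a,b)
        d2[edge_index[(a, b)], j] += 1.0
        d2[edge_index[(b, c)], j] += 1.0
        d2[edge_index[(a, c)], j] -= 1.0
    r1 = np.linalg.matrix_rank(d1) if edges else 0
    r2 = np.linalg.matrix_rank(d2) if triangles else 0
    return int(len(vertices) - r1), int(len(edges) - r1 - r2)
```

A hollow triangle (three vertices, three edges, no 2-simplex) has one component and one loop; filling in the triangle kills the loop.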
559.
Flow-times in an M/G/1 Queue under a Combined Preemptive/Non-preemptive Priority Discipline: Scheduled Waiting Time on Single Track Railway Lines. Fatnes, Johan Narvestad, January 2010.
<p>A priority based rule for use during the process of scheduling trains operating on a single track railway line was proposed by the Norwegian railway operator and owner, Jernbaneverket. The purpose of this study is to investigate the effect of the suggested scheduling rule on the scheduled waiting times suffered by trains operating on a segment of the railway line. It is shown that the scheduling rule, under certain limiting assumptions, can be studied in the setting of queuing theory and that it has properties in common with a theoretical priority discipline combining two well documented priority rules. The main part of this study is the development and analysis of a threshold based, combined preemptive/non-preemptive priority discipline. Under the combined discipline, preemptions are allowed during the early stage of processing only. Theoretical expressions for flow-times of jobs passing through the queuing system are reached through detailed studies of the non-preemptive and the preemptive priority discipline. The relationship between the suggested priority based scheduling rule and the theoretical, combined priority discipline is finally illustrated by simulations. When adjusted for actual time spent by trains on traversing the line segment, the steady state solution for flow-times obtained from queuing theory yields an accurate expression for the trains' average scheduled waiting times. The scheduling problem can in fact be modeled accurately by an M/G/1 queue under the combined priority discipline.</p>
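For the non-preemptive building block mentioned above, the classical M/G/1 mean waiting times per priority class have a closed form: W_k = R / ((1 - sigma_{k-1})(1 - sigma_k)), where R is the mean residual service work and sigma_k the cumulative load of classes up to k. The sketch below implements this standard textbook formula, not the combined threshold discipline derived in the thesis.

```python
def nonpreemptive_wait(lam, es, es2):
    """Mean waiting time per class in an M/G/1 queue under a
    non-preemptive priority discipline (class 0 = highest priority).
    lam: arrival rates, es: mean service times, es2: second moments
    of service time, one entry per class."""
    # Mean residual service work seen by an arrival (PASTA):
    R = 0.5 * sum(l * s2 for l, s2 in zip(lam, es2))
    rho = [l * s for l, s in zip(lam, es)]
    waits, cum = [], 0.0
    for k in range(len(lam)):
        prev = cum          # sigma_{k-1}: load of strictly higher classes
        cum += rho[k]       # sigma_k: load up to and including class k
        waits.append(R / ((1.0 - prev) * (1.0 - cum)))
    return waits
```

With a single class this reduces to the Pollaczek-Khinchine waiting-time formula, and with several classes the higher-priority class always waits less.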
560.
Parameter Estimation in Extreme Value Models with Markov Chain Monte Carlo Methods. Gausland, Eivind Blomholm, January 2010.
<p>In this thesis I have studied how to estimate the parameters of an extreme value model with Markov Chain Monte Carlo (MCMC) given a data set. This is done with synthetic Gaussian time series generated from spectral densities (spectra) with a "box" shape; three different spectra have been used. In the acceptance probability of the MCMC algorithm, the likelihood is built up by dividing the time series into blocks consisting of a constant number of points. In each block, only the maximum value, i.e. the extreme value, is used, and each extreme value is then treated as independent. Since the time series are generated this way, theoretical values exist for the parameters of the extreme value model, so when the MCMC algorithm is used to fit a model to the generated data, the true parameter values are already known. For the first and widest spectrum, the method is unable to find estimates matching the true parameter values. For the two other spectra, I obtained good estimates for some block lengths, while other block lengths gave poor estimates compared to the true values. It appeared that an increasing block length gave more accurate estimates as the spectrum became more narrow-banded, but a final simulation on a time series generated from a narrow-banded spectrum disproved this hypothesis.</p>
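The block-maxima-plus-MCMC procedure described above can be sketched as follows: extract block maxima, then run a random-walk Metropolis sampler on the extreme-value likelihood. This toy fits a Gumbel model with flat priors to maxima of an i.i.d. Gaussian series; it is not the thesis's model or spectra, and all tuning values (step size, iteration count) are assumptions.

```python
import numpy as np

def block_maxima(series, block_len):
    # Keep only the maximum of each fixed-length block; these maxima
    # are then treated as (approximately) independent observations.
    n = (len(series) // block_len) * block_len
    return series[:n].reshape(-1, block_len).max(axis=1)

def gumbel_loglik(mu, beta, m):
    # Gumbel log-likelihood with location mu and scale beta > 0.
    if beta <= 0:
        return -np.inf
    z = (m - mu) / beta
    return np.sum(-np.log(beta) - z - np.exp(-z))

def metropolis(m, n_iter=4000, step=0.05, seed=1):
    # Random-walk Metropolis over (mu, beta) with flat priors;
    # the likelihood ratio drives the acceptance probability.
    rng = np.random.default_rng(seed)
    mu, beta = float(np.mean(m)), float(np.std(m))
    ll = gumbel_loglik(mu, beta, m)
    samples = []
    for _ in range(n_iter):
        mu_p = mu + step * rng.standard_normal()
        beta_p = beta + step * rng.standard_normal()
        ll_p = gumbel_loglik(mu_p, beta_p, m)
        if np.log(rng.uniform()) < ll_p - ll:
            mu, beta, ll = mu_p, beta_p, ll_p
        samples.append((mu, beta))
    return np.array(samples)
```

For maxima of blocks of 100 standard normal points, the posterior concentrates near a location around 2.3-2.5 and a positive scale well below 1, consistent with classical extreme value theory for Gaussian maxima.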