About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
381

Optimum bit-by-bit power allocation for minimum distortion transmission

Karaer, Arzu 25 April 2007 (has links)
In this thesis, bit-by-bit power allocation to minimize the mean-squared error (MSE) distortion of a basic communication system is studied. This communication system consists of a quantizer, an optional channel encoder, and a Binary Phase Shift Keying (BPSK) modulator. The quantizer uses natural binary mapping. First, the case with no channel coding is considered. In the uncoded case, hard-decision decoding is done at the receiver. Errors in the more significant information bits contribute more to the distortion than errors in the less significant bits. For the uncoded case, the optimum power profile for each bit is determined analytically and through computer-based optimization methods such as differential evolution. At low signal-to-noise ratio (SNR), the less significant bits are allocated negligible power compared to the more significant bits. At high SNR, the optimum bit-by-bit power allocation gives a constant MSE gain in dB over uniform power allocation. Second, the coded case is considered. Linear block codes such as the (3,2), (4,3) and (5,4) single parity check codes and the (7,4) Hamming code are used, and soft-decision decoding is done at the receiver. Approximate expressions for the MSE are used to find a near-optimum power profile for the coded case, with the optimization again done by differential evolution. For a simple code like the (7,4) Hamming code, simulations show that up to 3 dB of MSE gain can be obtained by changing the power allocation on the information and parity bits. A systematic method to find the power profile for linear block codes is also introduced, given the input-output weight enumerating function of the code: all information bits share one power level and all parity bits share another, and the two levels can differ.
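A minimal sketch of the uncoded setup described above — not code from the thesis — assuming a K-bit natural-binary quantizer, BPSK with hard decisions over an AWGN channel, a single-bit-error approximation to the MSE, and SciPy's differential evolution; all names and numbers are illustrative:

    import numpy as np
    from scipy.optimize import differential_evolution
    from scipy.special import erfc

    K = 4            # bits per quantizer sample (illustrative)
    N0 = 1.0         # noise spectral density
    TOTAL_POWER = K  # energy budget per sample (uniform allocation = 1 per bit)

    def q_func(x):
        # Gaussian tail probability Q(x).
        return 0.5 * erfc(x / np.sqrt(2.0))

    def approx_mse(powers):
        # Normalize the candidate profile so it always meets the power budget.
        p = np.maximum(powers, 1e-12)
        p = p * TOTAL_POWER / p.sum()
        # BPSK bit error probability per bit with hard decisions.
        pe = q_func(np.sqrt(2.0 * p / N0))
        # With natural binary mapping, an error in bit i (i = 0 is the MSB)
        # shifts the reconstruction by roughly 2^(K-1-i) quantizer steps.
        weights = (2.0 ** (K - 1 - np.arange(K))) ** 2
        return float(np.sum(weights * pe))

    result = differential_evolution(approx_mse, bounds=[(0.0, TOTAL_POWER)] * K, seed=1)
    best = np.maximum(result.x, 1e-12)
    best = best * TOTAL_POWER / best.sum()
    print("per-bit powers (MSB first):", np.round(best, 3))

At low total power this kind of search tends to starve the least significant bits, consistent with the behaviour the abstract reports.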
382

Ambient Noise Analysis in Shallow Water at Southwestern Sea of Taiwan

Tsai, Chung-Ting 31 December 2007 (has links)
Sound waves propagate far better in the ocean than electromagnetic waves, so sonar systems are widely applied in underwater investigations. However, a sonar receives not only the target signal but also noise arriving from all directions. This noise degrades sonar performance, so understanding ocean ambient noise is an important issue in both academic study and military applications. The ambient noise data for this research were collected by a passive acoustic recording system deployed in the sea southwest of Taiwan, together with wind velocity measurements in the experimental area. The influence of wind velocity variations on noise level fluctuations was first examined through correlation analysis. The fluctuations were expressed in terms of statistical distribution, mean value, and standard deviation over different time series. The results show that the 500 Hz and 1.5 kHz bands were saturated by high-level signals from unknown sources in spring and summer, so their average sound levels were higher than in fall and winter, by about 10 dB and 5 dB respectively. In the seasonal analysis, the 2.4 kHz and 3.6 kHz bands had quite stable mean levels, with standard deviations around 3 dB; in particular, the noise level at 3.6 kHz fluctuated less throughout the year than at any other frequency analyzed. It was also observed that the noise level decreased with increasing frequency. Using linear regression, this research derived an estimation equation for the ambient noise level at high wind speeds. However, the estimated values are higher than the measured data because of the distribution of wind velocities: the wind data in this study were skewed toward lower velocities, so the predicted values were overestimated.
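As an illustration of the regression step only — the numbers below are hypothetical, not the measured data — a Knudsen-style linear fit of noise level against the logarithm of wind speed could look like this:

    import numpy as np

    # Hypothetical observations: wind speed (m/s) and noise level (dB re 1 uPa^2/Hz).
    wind = np.array([4.2, 6.1, 7.5, 9.0, 11.3, 13.8, 15.2])
    noise = np.array([62.0, 65.5, 67.1, 69.0, 71.2, 73.0, 74.1])

    # Fit NL = a + b * log10(wind), the usual form for wind-driven ambient noise.
    b, a = np.polyfit(np.log10(wind), noise, 1)
    print(f"NL ~= {a:.1f} + {b:.1f} * log10(wind speed)")

    # Extrapolate the fitted level to a high wind speed (e.g. 18 m/s).
    print(a + b * np.log10(18.0))

If the training data are skewed toward low wind speeds, such an extrapolation can overestimate the level at high wind speeds, which is the bias the abstract describes.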
383

Study of Tidal Phase and Amplitude Characteristic in Kaohsiung Harbor and Central Taiwan Strait

Wang, Wei-hua 10 February 2009 (has links)
In recent years, tide gauges have improved in temporal resolution and measurement accuracy, so the quality of observational data has become stable and reliable. However, installing tide gauges offshore is restricted by many factors such as seabed topography, weather, sea state, and the leveling survey from land to gauge. Good tidal correction is one of the key factors in the accuracy of bathymetric surveying, particularly in areas where the tidal range is large. This study used tide predictions derived from Yu's (1993) tidal numerical model, verified them against observed tide data, and further established tidal zones of the Taiwan Strait based on tidal characteristics. Using the Taichung and Mailiao tide stations as reference stations, the direct tide station correction, tidal zone correction, nearest model grid correction, and virtual station correction methods were applied, and the accuracy of the calculated tide values was evaluated by amplitude ratio and tidal phase difference. The tidal zone correction does not depend solely on the spatial distance from the reference station, and it was found to be one of the best-performing approaches. However, further improvement in tide correction may be needed because different numerical models use different spatial resolutions. In addition, the boundary conditions of a harbor are very complicated for a tidal model, which is why it is hard to build a numerical model for a harbor. In this study, two additional high-accuracy radar tide gauges were installed in Kaohsiung Harbor and a first-class leveling survey was performed in order to maintain tidal measurement accuracy and to avoid the propagation of errors. According to the experimental results, the average tidal phase at the second entrance of Kaohsiung Harbor is about 6 minutes earlier than that at the first entrance, and the average difference in tidal height is approximately 2-3 cm. For this reason, attention should be paid to choosing a proper reference tide station for tidal correction in dredging hydrographic surveys, as well as to possible tidal observation errors such as the meteorological tide. If the two additional tide gauges of this study are removed in the future, tide heights can still be predicted from the fixed tide gauge.
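A toy illustration (hypothetical data, not the thesis method) of how an amplitude ratio and tidal phase difference between a reference station and a survey point can be estimated from a least-squares fit of the M2 constituent:

    import numpy as np

    M2_PERIOD_HOURS = 12.4206012  # principal lunar semidiurnal constituent

    def m2_fit(t_hours, heights):
        """Least-squares amplitude and phase of the M2 constituent."""
        w = 2.0 * np.pi / M2_PERIOD_HOURS
        design = np.column_stack([np.cos(w * t_hours), np.sin(w * t_hours),
                                  np.ones_like(t_hours)])
        (a, b, _), *_ = np.linalg.lstsq(design, heights, rcond=None)
        return np.hypot(a, b), np.arctan2(b, a)

    # Hypothetical hourly series at a reference station and a survey point.
    t = np.arange(0.0, 15 * 24.0, 1.0)
    ref = 1.0 * np.cos(2 * np.pi / M2_PERIOD_HOURS * t)
    obs = 0.9 * np.cos(2 * np.pi / M2_PERIOD_HOURS * (t - 0.5))  # 0.5 h lag, smaller range

    amp_ref, ph_ref = m2_fit(t, ref)
    amp_obs, ph_obs = m2_fit(t, obs)
    print("amplitude ratio:", amp_obs / amp_ref)
    print("phase difference (hours):", (ph_obs - ph_ref) * M2_PERIOD_HOURS / (2 * np.pi))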
384

Mutual fund portfolio optimization for investment-linked insurance

Chen, Hsin-jung 27 July 2009 (has links)
Investment-linked insurance has been sold in Taiwan for almost a decade, since 2001. In 2002, after strong sales of investment-linked insurance, the domestic insurance companies also joined the market. With investment-linked insurance, policyholders retain the protection of life insurance while also sharing in the earnings of the investment. Since the main investment instruments of investment-linked insurance are mutual funds, it is important to study how to allocate the portfolio optimally. This research considers the returns of the mutual funds under a tree-model assumption. The objective is to find the optimal portfolio that has minimum variance while attaining a given expected return level, a problem also known as the mean-variance portfolio problem. In the empirical work, we study daily price data for eleven mutual funds from September 2007 to November 2008. Using the data of the first 12 months, we first establish initial tree price models and then update the parameters of the tree model by the EWMA method. The optimal trading strategies of the mean-variance portfolio are investigated under this model setting. We classify the mutual funds into three categories: equity funds, balanced funds and bond funds. Different combinations of these three kinds of funds are considered to find the optimal trading strategy for each. The results show that the realized returns obtained using this optimal trading strategy in practice are close to the pre-specified expected return level.
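Independent of the tree-model machinery, the mean-variance subproblem described above has a standard solution via the KKT conditions of the constrained quadratic program; a small sketch (hypothetical inputs for an equity, a balanced and a bond fund, not the thesis data):

    import numpy as np

    def min_variance_weights(mu, cov, target):
        """Minimum-variance fully-invested weights achieving a target expected return."""
        n = len(mu)
        ones = np.ones(n)
        # KKT system for: min w' cov w  s.t.  w' mu = target,  w' 1 = 1.
        kkt = np.block([
            [2.0 * cov, mu[:, None], ones[:, None]],
            [mu[None, :], np.zeros((1, 2))],
            [ones[None, :], np.zeros((1, 2))],
        ])
        rhs = np.concatenate([np.zeros(n), [target, 1.0]])
        sol = np.linalg.solve(kkt, rhs)
        return sol[:n]  # the remaining entries are the Lagrange multipliers

    # Hypothetical annualized expected returns and covariance matrix.
    mu = np.array([0.10, 0.06, 0.03])
    cov = np.array([[0.040, 0.012, 0.002],
                    [0.012, 0.020, 0.003],
                    [0.002, 0.003, 0.005]])
    w = min_variance_weights(mu, cov, target=0.05)
    print("weights:", np.round(w, 3), "variance:", float(w @ cov @ w))

In the thesis the expected returns and covariances would come from the tree models updated by EWMA rather than from fixed inputs like these.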
385

Mean Field Study Of Point Defects In B2-NiAl

Gururajan, M P 02 1900 (has links)
Point defects control many properties of technological importance in intermetallic compounds, such as atomic diffusion, creep, hardness, mechanical properties and sintering. Further, since intermetallic compounds are characterized by long-range atomic order, the point defects in these compounds can be qualitatively different from those in pure metals and disordered alloys. In the present study, we have chosen β-NiAl for our point defect studies since it is a potential candidate for high temperature applications and a model system for the study of basic phenomena in ordered alloys. We have used a mean field formulation for studying point defect concentrations. The outline of the formulation is as follows: We divide the rigid, body centred cubic lattice into two interpenetrating cubic sublattices called α and β, which are made up of the cube corners and body centres respectively. We write a generic free energy function (G) that involves the temperature T and the six sublattice occupancies, viz. the A (Ni), B (Al) and vacancy (V) occupancies on the two sublattices α and β. We use the constraint on the number of α and β sublattice sites, viz. that the number of α sublattice sites equals the number of β sublattice sites, to write G as a function of T and four of the six sublattice occupancies. We define three auxiliary parameters η1, η2 and η3, which correspond to the vacancy concentration, the differential B-species population on the two sublattices (the chemical or atomic order), and the differential vacancy population on the two sublattices, respectively. We then rewrite G as a function of T, xB and the ηi. G can now be minimized with respect to the three auxiliary variables so that we recover the free energy G as a function of xB and T only. The formulation requires as inputs the Ni-Ni, Al-Al, Ni-Al, Ni-V and Al-V interaction energies in the nearest-neighbour (nn) and next-nearest-neighbour (nnn) shells. We have obtained the Ni-Ni, Al-Al and Ni-Al interaction energies from the effective pair potentials reported in the literature. For the Ni-V and Al-V interaction energies we have used a bond-breaking model in which the Ni-V and Al-V interaction energies in the nnn shell are assumed to be zero. Using the above interaction parameters in our mean field formulation, we have determined the concentrations of the various types of point defects in β-NiAl. We have specifically chosen the temperature range of 800-2000 K and the composition range of 45-55 atomic% Al. Our results can be summarised as follows: 1. The predominant defect in the stoichiometric alloy is a combination of an Ni-antisite defect and two vacancies on the Ni sublattice. 2. The Al-rich alloys of composition (50 + ∆) atomic% Al contain 2∆% vacancies; since the alloys are almost perfectly ordered, these vacancies predominantly occupy the Ni sublattice. Similarly, the Ni-rich alloys of composition (50 − ∆) atomic% Al contain ∆% Ni antisites. 3. Both the vacancies on the Ni sublattice (in Al-rich alloys) and the Ni antisites (in Ni-rich alloys) show negligible temperature dependence, and hence owe their origin to the off-stoichiometry. 4. In all the alloys, the Al antisites have the lowest concentration (of the order of 10^-6 even at 2000 K) and the concentration of vacancies on the β sublattice is the next lowest. Thus, our results support the view that β-NiAl is a triple-defect B2 compound and, if we regard constitutional vacancies as those with little or no temperature dependence, constitutional vacancies exist in Al-rich β-NiAl.
This conclusion is in agreement with some of the experimental results. However, it must be pointed out that there is considerable disagreement among experimental results from different groups.
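The 2∆% figure in result 2 can be checked with a simple site balance, assuming (as stated) near-perfect order so that all excess Al is accommodated by vacancies on the Ni sublattice; this is a reading of the result, not a derivation from the thesis. For 100 atoms of composition Ni(50−∆)Al(50+∆), with all Al on the β sublattice and equal numbers of α and β sites,

    N_\beta = 50 + \Delta, \qquad N_\alpha = N_\beta, \qquad
    V_\alpha = N_\alpha - N_{\mathrm{Ni}} = (50+\Delta) - (50-\Delta) = 2\Delta,

i.e. about 2∆ vacancies per 100 atoms. Conversely, a Ni-rich alloy Ni(50+∆)Al(50−∆) with a full Al sublattice of 50 sites leaves ∆ Ni atoms to sit as antisites on the Al sublattice, matching the ∆% Ni-antisite figure.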
386

Analytical and Numerical methods for a Mean curvature flow equation with applications to financial Mathematics and image processing

Zavareh, Alireza January 2012 (has links)
This thesis provides an analytical and two numerical methods for solving a parabolic equation of two-dimensional mean curvature flow with some applications. In analytical method, this equation is solved by Lie group analysis method, and in numerical method, two algorithms are implemented in MATLAB for solving this equation. A geometric algorithm and a step-wise algorithm; both are based on a deterministic game theoretic representation for parabolic partial differential equations, originally proposed in the genial work of Kohn-Serfaty [1]. / +46-767165881
387

Statistical Idealities and Expected Realities in the Wavelet Techniques Used for Denoising

DeNooyer, Eric-Jan D. 01 January 2010 (has links)
In the field of signal processing, one of the underlying enemies of obtaining a good-quality signal is noise. The most common examples of signals that can be corrupted by noise are images and audio signals. Since the early 1980s, when the wavelet transform became a formally defined modern tool, statistical techniques have been incorporated into wavelet-based processes with the goal of maximizing signal-to-noise ratios. We provide a brief history of wavelet theory, going back to Alfréd Haar's 1909 dissertation on orthogonal functions, as well as its important relationship to the earlier work of Joseph Fourier (circa 1801), which brought about that famous mathematical transformation, the Fourier series. We demonstrate how wavelet theory can be used to reconstruct an analyzed function, and hence that it can be used to analyze and reconstruct images and audio signals as well. Then, in order to ground the understanding of the application of wavelets to the science of denoising, we discuss some important concepts from statistics. From all of these, we introduce the subject of wavelet shrinkage, a technique that combines wavelets and statistics into a "thresholding" scheme that effectively reduces noise without doing too much damage to the desired signal. Subsequently, we discuss how the effectiveness of these techniques is measured, both in the ideal sense and in the expected sense. We then look at an illustrative example of the application of one technique. Finally, we analyze this example more generally, in accordance with the underlying theory, and draw some conclusions as to when wavelets are an effective technique for increasing a signal-to-noise ratio.
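As a concrete illustration of wavelet shrinkage — not code from the dissertation — a minimal soft-thresholding denoiser using the universal (VisuShrink) threshold, assuming the PyWavelets package:

    import numpy as np
    import pywt

    def denoise(signal, wavelet="db4", level=4):
        # Decompose into approximation and detail coefficients.
        coeffs = pywt.wavedec(signal, wavelet, level=level)
        # Estimate the noise standard deviation from the finest detail
        # coefficients via the median absolute deviation.
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745
        # Universal (VisuShrink) threshold.
        thresh = sigma * np.sqrt(2.0 * np.log(len(signal)))
        # Soft-threshold every detail band, keep the approximation as-is.
        shrunk = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
        return pywt.waverec(shrunk, wavelet)[: len(signal)]

    # Example: denoise a noisy sine wave.
    t = np.linspace(0.0, 1.0, 1024)
    noisy = np.sin(2 * np.pi * 5 * t) + 0.3 * np.random.randn(t.size)
    clean = denoise(noisy)

Soft thresholding shrinks every detail coefficient toward zero by the threshold amount, which is the trade-off between noise removal and signal damage that the abstract refers to.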
388

Effect of Cement Chemistry and Properties on Activation Energy

Bien-Aime, Andre J. 01 January 2013 (has links)
The objective of this work is to examine the effect of cement chemistry and physical properties on activation energy. Previous research has indicated that time-dependent concrete properties such as strength, heat evolution, and thermal cracking are predictable through the concept of activation energy; the equivalent-age concept, which uses the activation energy, is key to such predictions. Furthermore, research has shown that Portland cement concrete properties are affected by particle size distribution, Blaine fineness, mineralogy and chemical composition. In this study, four Portland cements were used to evaluate different methods of activation energy determination based on the strength and heat of hydration of paste and mortar mixtures. Moreover, the equivalency of activation energies determined through strength and through heat of hydration is addressed. The findings indicate that an activation energy determined through strength measurements cannot be used for heat of hydration prediction. Additionally, models were proposed that are capable of predicting the activation energy for heat of hydration and for strength. The proposed models incorporate the effect of cement chemistry, mineralogy, and particle size distribution in predicting activation energy.
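For context (not reproduced from the thesis), the Arrhenius-based equivalent-age maturity function that makes use of the activation energy, as in ASTM C1074, can be sketched as follows; the activation energy value and temperature history below are illustrative:

    import numpy as np

    R = 8.314  # universal gas constant, J/(mol*K)

    def equivalent_age(temps_c, dt_hours, ea=40000.0, t_ref_c=23.0):
        """Arrhenius equivalent age (hours) for a concrete temperature history.

        temps_c  -- average curing temperature in each interval, deg C
        dt_hours -- length of each interval, hours
        ea       -- apparent activation energy, J/mol (illustrative value)
        """
        t_k = np.asarray(temps_c) + 273.15
        t_ref_k = t_ref_c + 273.15
        factors = np.exp(-ea / R * (1.0 / t_k - 1.0 / t_ref_k))
        return float(np.sum(factors * np.asarray(dt_hours)))

    # Example: 24 one-hour intervals at 35 deg C mature faster than at the
    # 23 deg C reference, so the equivalent age exceeds 24 hours.
    print(equivalent_age([35.0] * 24, [1.0] * 24))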
389

The effect of compression ratio on the performance of a direct injection diesel engine

Aivaz Balian, Razmik January 1990 (has links)
This thesis considers the effect of compression ratio on the performance of a direct injection diesel engine. One aspect of engine performance is considered in great detail, namely the combustion performance at increased clearance volume. This aspect was of particular interest because variable compression ratio (VCR) systems normally operate by varying the clearance volume. The investigation relied upon results obtained both from experiments and from computer simulation models. The experimental tests were carried out using a single-cylinder direct-injection diesel engine under simulated turbocharged conditions at a reduced compression ratio. A number of one-dimensional computer models were developed; these simulate the induction and compression strokes, and the fuel spray trajectories in the presence of air swirl. The major objectives of the investigation were: to assess the benefits of VCR in terms of improvements in output power and fuel economy; to assess the effects of increased clearance volume on combustion and investigate methods for ameliorating the resulting problems; and to develop computational models which could aid understanding of the combustion process under varying clearance volume conditions. It was concluded that at the reduced compression ratio of 12.9:1 (compared to the standard value of 17.4:1 for the naturally aspirated engine), the brake mean effective pressure (BMEP) could be increased by more than 50%, and the brake specific fuel consumption (BSFC) could be reduced by more than 20%. These improvements were achieved without the maximum cylinder pressure or engine temperatures exceeding the highest values for the standard engine. Combustion performance deteriorated markedly, but certain modifications to the injection system proved successful in ameliorating the problems. These included an increase in the number of injector nozzle holes from 3 to 4, an increase in injection rate of about 28%, and an advance in injection timing of about 6°CA. In addition, operation with a weaker air-fuel ratio, in the range of 30 to 40:1, reduced smoke emissions and improved BSFC. The use of intercooling under VCR conditions provided only modest gains in performance. The NO emission was found to be insensitive to engine operating conditions (at the fixed compression ratio of 12.9:1) as long as the peak cylinder pressure was maintained constant. Engine test results were used to assess the accuracy of four published correlations for predicting ignition delay; the best prediction of ignition delay with these correlations deviated by up to 50% from the measured values. The computer simulation models provided useful insights into the fuel distribution within the engine cylinder. It also became possible to quantify the interaction between the swirling air and the fuel sprays using two parameters: the crosswind and impingement velocities of the fuel spray when it impinges on the piston-bowl walls. Tentative trends were identified which showed that high crosswind velocity coincided with lower smoke emissions and lower BSFC.
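The four published correlations assessed in the thesis are not reproduced here, but ignition-delay correlations of this kind typically take an Arrhenius form in cylinder pressure and temperature,

    \tau_{id} \;=\; A\, p^{-n} \exp\!\left(\frac{E_A}{\tilde{R}\, T}\right),

where A, n and the apparent activation energy E_A are empirical constants fitted to engine data; deviations of the size quoted above reflect how sensitive such fits are to the engine and operating conditions used to calibrate them.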
390

Application of real options to valuation and decision making in the petroleum E&P industry

Xu, Liying, 1962- 17 July 2012 (has links)
This study establishes a binomial lattice method for applying real options theory to valuation and decision making in the petroleum exploration and production industry, with a specific focus on the timing of the switch from primary to water flooding oil recovery. First, the evolution of West Texas Intermediate (WTI) historical oil prices over the past 25 years is studied and modeled with geometric Brownian motion (GBM) and one-factor mean reversion price models to capture oil price uncertainty. Second, to conduct the real options evaluation, specific reservoir simulations are designed and oil production profiles for primary and water flooding recovery of a synthetic onshore oil reservoir are generated using the UTCHEM reservoir simulator. Third, a cash flow model for producing the oil reservoir is created under a concessionary fiscal system. Finally, the binomial lattice real options evaluation method is established to value the project with flexibility in the switching time from primary to water flooding recovery under uncertain oil prices. The research reaches seven conclusions: 1) for the GBM price model, the assumptions of constant drift rate and constant volatility do not hold for the WTI historical oil price; 2) the one-factor mean reversion price model fits the historical WTI oil prices better than the GBM model; 3) the evolution of historical WTI oil prices from January 2, 1986 to May 28, 2010 followed three price regimes with different long-run prices; 4) the established real options evaluation method can be used to identify the best time to switch from primary to water flooding recovery under stochastic oil prices; 5) with the mean reversion oil price model and the most up-to-date cost data, the real options method finds an earlier water flooding switching time than the traditional net present value (NPV) optimization; 6) the real options results reveal that most of the time water flooding should start when the oil price is high and should not start when the oil price is low; and 7) the water flooding switching time is sensitive to the oil price model used and to the investment and operating costs.
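To make the lattice mechanics concrete — this is a generic Cox-Ross-Rubinstein sketch, not the thesis' model, which builds the lattice on a mean-reverting oil price and project cash flows — backward induction for an American-style option can be written as:

    import numpy as np

    def crr_american_option(s0, strike, r, sigma, T, steps):
        """Value an American-style call on a GBM price with a CRR binomial lattice."""
        dt = T / steps
        u = np.exp(sigma * np.sqrt(dt))      # up factor
        d = 1.0 / u                          # down factor
        p = (np.exp(r * dt) - d) / (u - d)   # risk-neutral up probability
        disc = np.exp(-r * dt)

        # Terminal prices and payoffs (node 0 is the highest price).
        prices = s0 * u ** np.arange(steps, -1, -1) * d ** np.arange(0, steps + 1)
        values = np.maximum(prices - strike, 0.0)

        # Backward induction: exercise early if the immediate payoff is larger.
        for step in range(steps - 1, -1, -1):
            prices = s0 * u ** np.arange(step, -1, -1) * d ** np.arange(0, step + 1)
            cont = disc * (p * values[:-1] + (1.0 - p) * values[1:])
            values = np.maximum(prices - strike, cont)
        return values[0]

    # Illustrative numbers only.
    print(crr_american_option(s0=80.0, strike=75.0, r=0.05, sigma=0.30, T=2.0, steps=200))

In the thesis' setting, the exercise payoff at each node would be the value of switching to water flooding at that oil price and time, and the continuation value would be the discounted expected value of continuing primary recovery.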
