601

Symmetric Generalized Gaussian Multiterminal Source Coding

Chang, Yameng Jr January 2018 (has links)
Consider a generalized multiterminal source coding system, where (l choose m) encoders, each observing a distinct size-m subset of l (l ≥ 2) zero-mean unit-variance symmetrically correlated Gaussian sources with correlation coefficient ρ, compress their observations in such a way that a joint decoder can reconstruct the sources within a prescribed mean squared error distortion based on the compressed data. The optimal rate-distortion performance of this system was previously known only for the two extreme cases m = l (the centralized case) and m = 1 (the distributed case), and except when ρ = 0, the centralized system can achieve strictly lower compression rates than the distributed system under all non-trivial distortion constraints. Somewhat surprisingly, it is established in the present thesis that the optimal rate-distortion performance of the afore-described generalized multiterminal source coding system with m ≥ 2 coincides with that of the centralized system for all distortions when ρ ≤ 0 and for distortions below an explicit positive threshold (depending on m) when ρ > 0. Moreover, when ρ > 0, the minimum achievable rate of generalized multiterminal source coding subject to an arbitrary positive distortion constraint d is shown to be within a finite gap (depending on m and d) from its centralized counterpart in the large l limit, except possibly at the critical distortion d = 1 − ρ. / Thesis / Master of Applied Science (MASc)
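The symmetric correlation structure underlying this result is easy to reproduce numerically. A minimal sketch (the values of l, m, and ρ below are illustrative assumptions, not taken from the thesis) that counts the encoders and exposes the eigenvalues of the source covariance matrix; note that 1 − ρ, the critical distortion mentioned above, is exactly the repeated eigenvalue:

```python
import numpy as np
from math import comb

l, m, rho = 4, 2, 0.3                       # illustrative values, not from the thesis
num_encoders = comb(l, m)                   # one encoder per distinct size-m subset

# Symmetric source covariance: unit variances, common correlation rho
Sigma = (1 - rho) * np.eye(l) + rho * np.ones((l, l))
eigvals = np.linalg.eigvalsh(Sigma)         # returned in ascending order

# Spectrum: 1 - rho with multiplicity l - 1, and 1 + (l - 1) * rho once
assert num_encoders == 6
assert np.allclose(eigvals[:-1], 1 - rho)
assert np.isclose(eigvals[-1], 1 + (l - 1) * rho)
```

The rank-one-plus-identity structure of Sigma is what makes the symmetric case analytically tractable.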
602

Robust Distributed Compression of Symmetrically Correlated Gaussian Sources

Zhang, Xuan January 2018 (has links)
Consider a lossy compression system with l distributed encoders and a centralized decoder. Each encoder compresses its observed source and forwards the compressed data to the decoder for joint reconstruction of the target signals under the mean squared error distortion constraint. It is assumed that the observed sources can be expressed as the sum of the target signals and the corruptive noises, which are generated independently from two (possibly different) symmetric multivariate Gaussian distributions. Depending on the parameters of such Gaussian distributions, the rate-distortion limit of this lossy compression system is characterized either completely or for a subset of distortions (including, but not necessarily limited to, those sufficiently close to the minimum distortion achievable when the observed sources are directly available at the decoder). The results are further extended to the robust distributed compression setting, where the outputs of a subset of encoders may also be used to produce a non-trivial reconstruction of the corresponding target signals. In particular, we obtain in the high-resolution regime a precise characterization of the minimum achievable reconstruction distortion based on the outputs of k + 1 or more encoders when every k out of all l encoders are operated collectively in the same mode that is greedy in the sense of minimizing the distortion incurred by the reconstruction of the corresponding k target signals with respect to the average rate of these k encoders. / Thesis / Master of Applied Science (MASc)
603

Mean-Variance Utility Functions and the Investment Behaviour of Canadian Life Insurance Companies / Investment Behaviour of Canadian Life Insurance Companies

Krinsky, Itzhak 10 1900 (has links)
In recent years, considerable effort has been directed toward establishing the nature of the investment behaviour of life insurance companies. In this dissertation an extended portfolio analysis model was developed for the simultaneous determination of the efficient composition of insurance and investment activities of a life insurance company. This was done within a model that takes advantage of the existing finance foundations and of the concepts and techniques of modern demand system analysis. Unlike current models, which use quadratic programming techniques and are concerned with the construction of efficient sets, we have used a utility maximization approach. A two-parameter portfolio model was constructed utilizing elements of utility theory and of the theory of insurance. The model provided us with the proportion of assets held in the balance sheet as well as the liabilities used to raise the necessary capital. The model developed has sufficient empirical content to yield hypotheses about life insurance portfolio behaviour, and it was therefore tested using appropriate econometric techniques. A comparative static analysis yielded elasticities of substitution between financial assets and liabilities. The estimation of these elasticities in the context of a flexible functional form model forms a central part of this dissertation. More specifically, by utilizing a mean-variance portfolio framework and a general Box-Cox utility function, we were able to model the demand for assets and liabilities by an insurance company. On empirical grounds we found that, in general, the square root quadratic utility function best fits the data. We also evaluated the square root quadratic approximation by showing that, broadly speaking, it yields signs for elasticities of substitution which are consistent with the theory. A by-product of the model developed is the ability to compare stock and mutual life insurance companies.
The common belief that mutual companies follow a riskier path in the way they conduct their business was supported by the results of this study. The results obtained are of significant importance, since life insurance companies have substantial obligations to millions of households in the economy. Furthermore, despite the extraordinary decline in the importance of the life insurance industry in the bond and mortgage markets during the sixties and the seventies, the industry is still a major supplier of funds to those markets. / Thesis / Doctor of Philosophy (PhD)
604

Decentralized Integration of Distributed Energy Resources into Energy Markets with Physical Constraints

Chen Feng (18556528) 29 May 2024 (has links)
<p dir="ltr">With the growing installation of distributed energy resources (DERs) at homes, more residential households are able to reduce the overall energy cost by storing unused energy in the storage battery when there is abundant renewable energy generation, and using the stored energy when there is insufficient renewable energy generation and high demand. It could be even more economical for the household if energy can be traded and shared among neighboring households. Despite the great economic benefit of DERs, they could also make it more challenging to ensure the stability of the grid due to the decentralization of agents' activities.</p><p><br></p><p dir="ltr">This thesis presents two approaches that combine market and control mechanisms to address these challenges. In the first work, we focus on the integration of DERs into local energy markets. We introduce a peer-to-peer (P2P) local energy market and propose a consensus multi-agent reinforcement learning (MARL) framework, which allows agents to develop strategies for trading and decentralized voltage control within the P2P market. It is compared to both the fully decentralized and centralized training & decentralized execution (CTDE) framework. Numerical results reveal that under each framework, the system is able to converge to a dynamic balance with the guarantee of system stability as each agent gradually learns the approximately optimal strategy. Theoretical results also prove the convergence of the consensus MARL algorithm under certain conditions.</p><p dir="ltr">In the second work, we introduce a mean-field game framework for the integration of DERs into wholesale energy markets. This framework helps DER owners automatically learn optimal decision policies in response to market price fluctuations and their own variable renewable energy outputs. 
We prove the existence of a mean-field equilibrium (MFE) for the wholesale energy market, and we develop a heuristic decentralized mean-field learning algorithm to converge to an MFE, taking into consideration the demand/supply shock and flexible demand. Our numerical experiments point to convergence to an MFE and show that our framework effectively reduces peak load and price fluctuations, especially during exogenous demand or supply shocks.</p>
605

The Effects of Return Current on Hard X-Ray Photon and Electron Spectra in Solar Flares

Zharkova, Valentina V., Gordovskyy, Mykola 18 May 2009 (has links)
No / The effect of a self-induced electric field is investigated analytically and numerically on differential and mean electron spectra produced by beam electrons during their precipitation into a flaring atmosphere as well as on the emitted hard X-ray (HXR) photon spectra. The induced electric field is found to be a constant in upper atmospheric layers and to fall sharply in the deeper atmosphere from some "turning point" occurring either in the corona (for intense and softer beams) or in the chromosphere (for weaker and harder beams). The stronger and softer the beam, the higher the electric field before the turning point and the steeper its decrease after it. Analytical solutions are presented for the electric fields, which are constant or decreasing with depth, and the characteristic "electric" stopping depths are compared with the "collisional" ones. A constant electric field is found to decelerate precipitating electrons and to significantly reduce their number in the upper atmospheric depth, resulting in their differential spectra flattening at lower energies (<100 keV). While a decreasing electric field slows down the electron deceleration, allowing them to precipitate into deeper atmospheric layers than for a constant electric field, the joint effect of electric and collisional energy losses increases the energy losses by lower energy electrons compared to pure collisions and results in maxima at energies of 40-80 keV in the differential electron spectra. This, in turn, leads to the maxima in the mean source electron spectra and to the "double power law" HXR photon spectra (with flattening at lower energies) similar to those reported from the RHESSI observations. The more intense and soft the beams are, the stronger is the lower energy flattening and the higher is the "break" energy where the flattening occurs.
606

An investigation on the effects of beam squint caused by an analog beamformed user terminal utilizing antenna arrays

Abd-Alhameed, Raed, Hu, Yim Fun, Al-Yasir, Yasir I.A., Parchin, N.O., Ullah, Atta 09 September 2023 (has links)
Yes / In the equivalent frequency-based model, the antenna array gain is utilised to characterise the frequency response of the beam squint effect generated by the antenna array. This impact is considered for a wide range of uniform linear array (ULA) and uniform planar array (UPA) designs, including those with and without tapering configurations. For a closer look at how the frequency response of the array adapts to the variations in the incidence angle of the signal, the bandwidth of the spectrum is varied and investigated. To study this effect, we have considered using the gain array response as an equivalent channel model in our approach. Beam squinting caused by distortion in the frequency response gain can be verified by one of two equalisers: a zero-forcing (ZF) equaliser or a minimum mean square error (MMSE) equaliser. Different cases with their analysis and results are studied and compared in terms of coded and uncoded modulations. / This work was supported in part by the Satellite Network of Experts V under Contract 4000130962/20/NL/NL/FE, and in part by the Innovation Program under Grant H2020-MSCA-ITN-2016 SECRET-722424.
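The two equalisers named above have standard one-tap, per-subcarrier forms. A hedged sketch under a toy flat-fading model — the channel gains, subcarrier count, and noise variance are illustrative assumptions, not the paper's array model:

```python
import numpy as np

rng = np.random.default_rng(0)
h = rng.normal(size=8) + 1j * rng.normal(size=8)    # per-subcarrier channel gains (assumed)
noise_var = 0.1

w_zf = 1.0 / h                                      # zero-forcing: inverts the channel exactly
w_mmse = np.conj(h) / (np.abs(h) ** 2 + noise_var)  # MMSE: trades residual bias for less noise gain

x = np.ones(8, dtype=complex)    # pilot symbols
y = h * x                        # noiseless received samples
assert np.allclose(w_zf * y, x)             # ZF restores pilots perfectly in the noiseless case
assert np.all(np.abs(w_mmse * h) < 1.0)     # MMSE shrinks the end-to-end gain below unity
```

The trade-off visible here is the usual one: ZF amplifies noise on weak subcarriers (small |h|), while MMSE regularizes the inversion by the noise variance.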
607

Feasible Generalized Least Squares: theory and applications

González Coya Sandoval, Emilio 04 June 2024 (has links)
We study the Feasible Generalized Least-Squares (FGLS) estimation of the parameters of a linear regression model in which the errors are allowed to exhibit heteroskedasticity of unknown form and to be serially correlated. The main contribution is twofold: first, we aim to demystify the reasons often advanced for using OLS instead of FGLS by showing that the latter estimator is robust, more efficient, and more precise. Second, we devise consistent FGLS procedures, robust to misspecification, which achieve a lower mean squared error (MSE), often close to that of the correctly specified infeasible GLS. In the first chapter we restrict our attention to the case of independent heteroskedastic errors. We suggest a Lasso-based procedure to estimate the skedastic function of the residuals. This estimate is then used to construct a FGLS estimator. Using extensive Monte Carlo simulations, we show that this Lasso-based FGLS procedure has better finite-sample properties than OLS and other linear regression-based FGLS estimates. Moreover, the FGLS-Lasso estimate is robust to misspecification of both the functional form and the variables characterizing the skedastic function. The second chapter generalizes our investigation to the case of serially correlated errors. There are three main contributions: first, we show that GLS is consistent requiring only pre-determined regressors, whereas OLS requires exogenous regressors to be consistent. The second contribution is to show that GLS is much more robust than OLS; even a misspecified GLS correction can achieve a lower MSE than OLS. The third contribution is to devise a FGLS procedure valid whether or not the regressors are exogenous, which achieves a MSE close to that of the correctly specified infeasible GLS. Extensive Monte Carlo experiments are conducted to assess the performance of our FGLS procedure against OLS in finite samples. FGLS achieves important reductions in MSE and variance relative to OLS.
In the third chapter we consider an empirical application; we re-examine the Uncovered Interest Parity (UIP) hypothesis, which states that the expected rate of return to speculation in the forward foreign exchange market is zero. We extend the FGLS procedure to a setting in which lagged dependent variables are included as regressors. We thus provide a consistent and efficient framework to estimate the parameters of a general k-step-ahead linear forecasting equation. Finally, we apply our FGLS procedures to the analysis of the two main specifications to test the UIP.
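The two-step structure of an FGLS procedure for heteroskedastic errors can be sketched in a few lines. This is a textbook log-squared-residuals variant, not the Lasso-based procedure of the thesis; the data-generating process and coefficients below are synthetic assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
x = rng.uniform(1, 3, size=n)
X = np.column_stack([np.ones(n), x])
sigma = 0.5 * x                                  # heteroskedasticity: error sd grows with x
y = X @ np.array([1.0, 2.0]) + sigma * rng.normal(size=n)

# Step 1: OLS to obtain residuals
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta_ols

# Step 2: estimate the skedastic function from log squared residuals
gamma, *_ = np.linalg.lstsq(X, np.log(resid**2 + 1e-12), rcond=None)
w = np.exp(-X @ gamma / 2)                       # weights proportional to 1 / estimated sd

# Step 3: weighted least squares = FGLS
beta_fgls, *_ = np.linalg.lstsq(X * w[:, None], y * w, rcond=None)
```

Under correct specification of the skedastic regression, this estimator approaches the efficiency of the infeasible GLS; the thesis's point is that a well-constructed skedastic estimate keeps most of that gain even under misspecification.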
608

Enhanced strain-based fatigue methodology for high strength aluminum alloys

Arcari, Attilio 29 March 2010 (has links)
The design of any mechanical component requires an understanding of the general static, dynamic, and environmental conditions under which the component will operate, so that it gives satisfactory results in terms of performance and endurance. The premature failure of any component is undesirable and potentially catastrophic; predicting component performance and endurance so that repair or replacement can proceed in time is therefore vital to the stability of the structure in which the component is installed. The ability of a component to withstand fatigue loading conditions during service is called its fatigue life, and design predictions can be conservative or non-conservative. In this study, improvements to a strain-based approach to fatigue were obtained by studying the effects of mean stresses on fatigue life and investigating cyclic mean stress relaxation in two aluminum alloys, 7075-T6511 and 7249-T76511, used in structural aircraft applications. The two aluminum alloys were tested and their fatigue behavior characterized. The project, entirely funded by NAVAIR (Naval Air Systems Command) and jointly coordinated with TDA (Technical Data Analysis Inc.), aimed to obtain fatigue data for both aluminum alloys, with particular interest in the 7249 alloy because of its enhanced corrosion resistance, and to give guidelines for improving the performance of FAMS (Fatigue Analysis of Metallic Structures), a life prediction software, with respect to both mean stress effects and mean stress relaxation. The sensitivity of engineering materials to mean stresses is of high relevance in a strain-based fatigue approach. The performance of the most common models used to calculate mean stress correction factors was studied for the two aluminum alloys 7075 and 7249 to give guidelines for their use in life predictions. Not only do mean stresses have a strong influence on fatigue life, but they are also subject to transient cyclic behavior.
This study considered both an empirical approach and a plasticity theory approach to simulate these transient effects and include them in life calculations. The results give valid directions for a successful modification of FAMS, or any other life calculation software, to account for transient phenomena. / Ph. D.
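As one concrete instance of the mean stress correction factors evaluated above, the Morrow correction modifies the Basquin (elastic) term of the strain-life relation by reducing the fatigue strength coefficient by the mean stress. A minimal sketch with illustrative coefficients (assumed for a 7075-class alloy, not taken from this dissertation):

```python
def basquin_life(sigma_a, sigma_m, sigma_f=1466.0, b=-0.143):
    """Cycles to failure from the Basquin relation with the Morrow correction:
    sigma_a = (sigma_f' - sigma_m) * (2N)^b  =>  N = 0.5 * (sigma_a / (sigma_f' - sigma_m))**(1/b).
    sigma_f' (MPa) and b are illustrative coefficients, assumed rather than fitted here."""
    return 0.5 * (sigma_a / (sigma_f - sigma_m)) ** (1.0 / b)

n_zero_mean = basquin_life(300.0, 0.0)      # fully reversed loading, 300 MPa amplitude
n_tensile = basquin_life(300.0, 100.0)      # same amplitude with a tensile mean stress
assert n_tensile < n_zero_mean              # tensile mean stress shortens predicted life
```

Other common corrections (e.g. Smith-Watson-Topper) weight the mean stress differently; comparing such models against test data is exactly the kind of guideline exercise the abstract describes.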
609

Random Vibration Analysis of Higher-Order Nonlinear Beams and Composite Plates with Applications of ARMA Models

Lu, Yunkai 11 November 2009 (has links)
In this work, the random vibration of higher-order nonlinear beams and composite plates subjected to stochastic loading is studied. The fourth-order nonlinear beam equation is examined to study the effect of rotary inertia and shear deformation on the root mean square values of the displacement response. A new linearly coupled equivalent linearization method is proposed and compared with the widely used traditional equivalent linearization method. The new method is proven to yield predictions closer to the numerical simulation results of the nonlinear beam vibration. A systematic investigation of the nonlinear random vibration of composite plates is conducted, in which the effects of nonlinearity, the choice of plate theory (the first-order shear deformation plate theory versus the classical plate theory), and temperature gradient on the plate's statistical transverse response are addressed. Attention is paid to calculating the R.M.S. values of stress components, since they directly affect the fatigue life of the structure. A statistical data reconstruction technique named ARMA modeling and its applications in random vibration data analysis are discussed. The model is applied to the simulation data of nonlinear beams. It is shown that the technique gives good estimates of both the nonlinear frequencies and the power spectral densities. / Ph. D.
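The ARMA-based reconstruction idea can be illustrated by its simplest special case: a pure AR model fitted via the Yule-Walker equations. A sketch on synthetic AR(2) data (the coefficients and sample size are assumptions for illustration, not the dissertation's simulation data):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20000
a1, a2 = 0.6, -0.2                       # true AR(2) coefficients (assumed)
x = np.zeros(n)
eps = rng.normal(size=n)
for k in range(2, n):
    x[k] = a1 * x[k - 1] + a2 * x[k - 2] + eps[k]

def acov(x, lag):
    """Biased sample autocovariance at the given lag."""
    return np.mean(x[: len(x) - lag] * x[lag:])

# Yule-Walker: solve R a = r using sample autocovariances
R = np.array([[acov(x, 0), acov(x, 1)],
              [acov(x, 1), acov(x, 0)]])
r = np.array([acov(x, 1), acov(x, 2)])
a_hat = np.linalg.solve(R, r)            # recovers (a1, a2) up to sampling error
```

Once the AR (or full ARMA) coefficients are estimated, the model's spectral density gives the power spectral density estimate the abstract refers to.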
610

Non-Wiener Effects in Narrowband Interference Mitigation Using Adaptive Transversal Equalizers

Ikuma, Takeshi 25 April 2007 (has links)
The least mean square (LMS) algorithm is widely expected to operate near the corresponding Wiener filter solution. An exception to this popular perception occurs when the algorithm is used to adapt a transversal equalizer in the presence of additive narrowband interference. The steady-state LMS equalizer behavior does not correspond to that of the fixed Wiener equalizer: the mean of its weights is different from the Wiener weights, and its mean squared error (MSE) performance may be significantly better than the Wiener performance. The contributions of this study serve to better understand this so-called non-Wiener phenomenon of the LMS and normalized LMS adaptive transversal equalizers. The first contribution is the analysis of the mean of the LMS weights in steady state, assuming a large interference-to-signal ratio (ISR). The analysis is based on the Butterweck expansion of the weight update equation. The equalization problem is transformed to an equivalent interference estimation problem to make the analysis of the Butterweck expansion tractable. The analytical results are valid for all step-sizes. Simulation results are included to support the analytical results and show that the analytical results predict the simulation results very well, over a wide range of ISR. The second contribution is the new MSE estimator based on the expression for the mean of the LMS equalizer weight vector. The new estimator shows vast improvement over the Reuter-Zeidler MSE estimator. For the development of the new MSE estimator, the transfer function approximation of the LMS algorithm is generalized for the steady-state analysis of the LMS algorithm. This generalization also revealed the cause of the breakdown of the MSE estimators when the interference is not strong: the assumption that the variation of the weight vector around its mean is small relative to the mean of the weight vector itself no longer holds.
Both the expression for the mean of the weight vector and the MSE estimator are first analyzed for the LMS algorithm. The results are then extended to the normalized LMS algorithm by a simple redefinition of the adaptation step size. / Ph. D.
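The LMS transversal equalizer at the heart of this study is compact enough to sketch. A toy Python version with a strong narrowband (sinusoidal) interferer — the signal model, step size, and tap count are illustrative assumptions, not the dissertation's settings:

```python
import numpy as np

rng = np.random.default_rng(2)
n, taps, mu = 5000, 11, 0.01

d = rng.choice([-1.0, 1.0], size=n)          # desired BPSK symbols
t = np.arange(n)
# received = signal + strong narrowband interferer + noise (large ISR)
x = d + 3.0 * np.cos(0.2 * np.pi * t) + 0.1 * rng.normal(size=n)

w = np.zeros(taps)
sq_err = []
for k in range(taps - 1, n):
    u = x[k - taps + 1 : k + 1][::-1]        # current and past samples, newest first
    e = d[k] - w @ u                         # equalizer output error
    w += mu * e * u                          # LMS weight update
    sq_err.append(e * e)

mse_ss = float(np.mean(sq_err[-1000:]))      # steady-state empirical MSE, well below 1
```

With w = 0 the error power starts at E[d^2] = 1; after adaptation the equalizer notches the tone while largely passing the signal, which is the regime where the non-Wiener behavior described above appears.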
