151

Modeling and computations of multivariate datasets in space and time

Demel, Samuel Seth January 1900 (has links)
Doctor of Philosophy / Department of Statistics / Juan Du / Spatio-temporal and/or multivariate dependence occurs naturally in datasets obtained in various disciplines, such as atmospheric sciences, meteorology, engineering and agriculture. There is a great need to effectively model the complex dependence and correlation structure exhibited in these datasets. For this purpose, this dissertation studies methods and applications of spatio-temporal modeling and multivariate computation. First, a collection of spatio-temporal functions is proposed to model spatio-temporal processes which are continuous in space and discrete over time. Theoretically, we derive the necessary and sufficient conditions to ensure model validity. In practice, the possibility of taking advantage of well-established time series and spatial statistics tools makes it relatively easy to identify and fit the proposed model. Spatio-temporal models with ARMA discrete temporal margins are fitted to Kansas precipitation and Irish wind datasets for estimation and prediction, and compared with existing general parametric models in terms of likelihood and mean squared prediction error. Second, to deal with the immense computational burden of statistical inference for multiple attributes recorded at a large number of locations, we develop Wendland-type compactly supported covariance matrix function models and propose a multivariate covariance tapering technique with those functions to reduce computation. Simulation studies and US temperature data are used to illustrate applications of the proposed multivariate tapering and the computational gain in spatial cokriging. Finally, to study the impact of weather change on corn yield in Kansas, we develop a spatial functional linear regression model accounting for the fact that weather data were recorded daily or hourly, as opposed to the yearly crop yield data, and for the underlying spatial autocorrelation. The parameter function is estimated under the functional data analysis framework and its characteristics are investigated to show the influential factors and critical periods of weather change dictating crop yield during the growing season.
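For readers unfamiliar with the covariance tapering idea mentioned in this abstract, the Python sketch below shows the core computation on synthetic data: a compactly supported Wendland-type correlation is multiplied elementwise with a dense base covariance so that the result is sparse and cheaper to work with. The taper form, the exponential base covariance, and all parameters are illustrative assumptions, not taken from the dissertation.

```python
# Minimal sketch of covariance tapering for spatial prediction (illustrative only).
import numpy as np
from scipy.spatial.distance import cdist
from scipy import sparse

def exponential_cov(d, sigma2=1.0, phi=0.3):
    """Base (non-compact) exponential covariance."""
    return sigma2 * np.exp(-d / phi)

def wendland_taper(d, theta=0.2):
    """Wendland-type compactly supported correlation: zero beyond range theta."""
    r = d / theta
    return np.where(r < 1.0, (1.0 - r) ** 4 * (4.0 * r + 1.0), 0.0)

rng = np.random.default_rng(0)
locs = rng.uniform(size=(500, 2))            # 500 synthetic spatial locations
d = cdist(locs, locs)                        # pairwise distances

C_full = exponential_cov(d)                                 # dense covariance matrix
C_tapered = sparse.csr_matrix(C_full * wendland_taper(d))   # Schur product -> sparse

print("nonzero fraction after tapering:",
      C_tapered.nnz / C_full.size)           # well below 1, so sparse solvers apply
```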
152

Analys av metanflöden från sjön Erken / Analysis of Methane Fluxes at Lake Erken

Mintz, Ryan January 2016 (has links)
While it is not the most abundant greenhouse gas, a significant portion of the greenhouse effect is caused by methane. The amount of methane in the atmosphere is increasing, indicating that there is a continuous source of methane to the atmosphere. One significant source of methane is freshwater lakes, even though they cover only a small portion of the Earth’s surface. Because of this, it is important to monitor methane fluxes from lakes in order to understand the processes which affect the magnitude of these fluxes. Methane is produced in the sediment at the bottom of the lake and transported through the water by ebullition, diffusive flux, storage flux, or plant-mediated emission. This study examined the amount of methane transmitted to the atmosphere by these processes at Lake Erken in eastern Sweden. Using the eddy covariance method, we can study the methane flux with good spatial and temporal resolution. Regular sampling of lake water, both from the surface and from depths of 5 and 10 meters, also helps us to understand the amount of methane dissolved in the lake. These measurements can help us to better understand the transfer velocity, or the efficiency of the exchange between water and air, as well as the amount of methane transported from lakes to the atmosphere. Water sampling showed that there is very little variation in methane concentration between different parts of the lake. Concentrations at four surface locations are nearly identical. These surface measurements are also similar to concentrations at different depths. Over time, the concentrations generally stayed the same, with isolated high- and low-concentration events. The amount of methane emitted by the lake was studied with the lake divided into a shallow-water area and a deep-water area. The magnitude of fluxes from both areas was very similar, but the shallow-water area had a higher total flux. The fluxes were well correlated with wind speed, with higher fluxes during periods of higher wind speed. This relates well to transfer velocity theory and the bulk flux approximation. However, there was no clear diurnal cycle in methane fluxes; fluxes during the night were similar to daytime fluxes. Atmospheric pressure also had an impact on fluxes, with greater fluxes at times of lower pressure. A large seasonal variation was clear: more methane escaped the water in autumn and winter than in spring or summer. This is due in part to the fluxes that occur when the lake freezes over or thaws and the water in the lake turns over, bringing methane-rich water from the lake’s bottom to the surface. As expected, the waterside concentration of methane also had a strong correlation with the fluxes. The main conclusions of this study are: 1) methane fluxes vary with wind speed, waterside concentrations, and the seasons; 2) water depth and diurnal cycles do not affect methane fluxes as strongly. Keywords: methane, transfer velocity, flux, waterside concentration, eddy covariance
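As a rough illustration of the bulk flux approximation and transfer velocity concept referred to above, the sketch below computes a diffusive flux from an assumed transfer velocity and an assumed difference between dissolved and equilibrium methane concentrations; all numbers and units here are hypothetical, not measurements from Lake Erken.

```python
# Minimal sketch of the bulk flux approximation:
# flux = transfer velocity * (dissolved concentration - equilibrium concentration).
def bulk_methane_flux(k_transfer_m_per_day, c_water_umol_per_L, c_equilibrium_umol_per_L):
    """Return flux in umol per m^2 per day (illustrative units and values)."""
    delta_c = (c_water_umol_per_L - c_equilibrium_umol_per_L) * 1000.0  # umol/L -> umol/m^3
    return k_transfer_m_per_day * delta_c

# Illustrative numbers only: supersaturated surface water outgasses methane.
flux = bulk_methane_flux(k_transfer_m_per_day=0.5,
                         c_water_umol_per_L=0.30,
                         c_equilibrium_umol_per_L=0.003)
print(f"flux ~ {flux:.0f} umol CH4 m^-2 day^-1")
```

In this framework, higher wind speed (or waterside convection) increases the transfer velocity, which is why fluxes track wind speed in the measurements described above.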
153

Enough is Enough : Sufficient number of securities in an optimal portfolio

Barkino, Iliam, Rivera Öman, Marcus January 2016 (has links)
This empirical study has shown that optimal portfolios need approximately 10 securities to diversify away the unsystematic risk. This challenges previous studies of randomly chosen portfolios, which state that at least 30 securities are needed. The result of this study sheds light on the difference in risk diversification between random portfolios and optimal portfolios and is a valuable contribution for investors. The study suggests that a major part of the unsystematic risk in a portfolio can be diversified away with fewer securities by using portfolio optimization. Individual investors especially, who usually have portfolios consisting of few securities, benefit from these results. There are today multiple user-friendly software applications that can perform the computations of portfolio optimization without the user having to know the mathematics behind the program. Microsoft Excel’s Solver function is an example of a widely used tool for portfolio optimization. In this study, however, MATLAB was used to perform all the optimizations. The study was carried out on data for 140 stocks listed on NASDAQ Stockholm during 2000-2014. Multiple optimizations were done with varying input in order to yield a result that depended only on the investigated variable, that is, how many different stocks are needed to diversify away the unsystematic risk in a portfolio.
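The sketch below (in Python rather than the authors' MATLAB) illustrates the kind of computation behind this result: minimum-variance weights are computed for portfolios of increasing size from simulated one-factor returns, and portfolio risk flattens once roughly ten securities are included. The return model, factor loadings, and sample sizes are assumptions for illustration only, not the study's data.

```python
# Minimal sketch: minimum-variance portfolio risk vs. number of securities (simulated data).
import numpy as np

rng = np.random.default_rng(1)
n_assets, n_days = 140, 1000
market = rng.normal(0, 0.01, n_days)                 # common market factor (systematic risk)
betas = rng.uniform(0.5, 1.5, n_assets)
returns = np.outer(market, betas) + rng.normal(0, 0.02, (n_days, n_assets))  # + idiosyncratic risk

def min_variance_risk(cov):
    """Annualized std of the unconstrained minimum-variance portfolio."""
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)
    w /= w.sum()
    return np.sqrt(w @ cov @ w * 252)

for n in (1, 5, 10, 30, 100):
    cov = np.atleast_2d(np.cov(returns[:, :n], rowvar=False))
    print(f"{n:3d} securities: min-variance portfolio risk = {min_variance_risk(cov):.3f}")
```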
154

A study on structured covariance modeling approaches to designing compact recognizers of online handwritten Chinese characters

Wang, Yongqiang, 王永強 January 2009 (has links)
published_or_final_version / Computer Science / Master / Master of Philosophy
155

Une information sur les matrices de covariance : la liaison-information / Information on covariance matrices: the liaison-information

Michaux, Christian 30 June 1973 (has links) (PDF)
.
156

Linear transceivers for MIMO relays

Shang, Cheng Yu Andy January 2014 (has links)
Relays can be used in wireless communication systems to provide cell coverage extension, reduce coverage holes and increase throughput. Full duplex (FD) relays, which transmit and receive in the same time slot, can have a higher transmission rate compared with half duplex (HD) relays. However, FD relays suffer from self interference (SI) problems, which are caused by the transmitted relay signal being received by the relay receiver. This can reduce the performance of FD relays. In the literature, the SI channel is commonly nulled and removed as it simplifies the problem considerably. In practice, complete nulling is impossible due to channel estimation errors. Therefore, in this thesis, we consider the leakage of SI from the FD relay. Our goal is to reduce the SI and increase the signal-to-noise ratio (SNR) of the relay system. Hence, we propose different precoder and weight vector designs. These designs may increase the end-to-end (e2e) signal-to-interference-plus-noise ratio (SINR) at the destination. Here, a precoder is applied to the signal before transmission and a weight vector is applied to the received signal after reception. Initially, we consider an academic example based on a two-path FD multiple-input multiple-output (MIMO) system. The analysis of the SINR with the implementation of precoders and weight vectors shows that the SI component has the same underlying signal as the source signal when a relay processing delay is not considered. Hence, to simulate the SI problem more realistically, we alter our relay design and focus on a one-path FD MIMO relay system with a relay processing delay. For the implementation of precoders and weight vectors, choosing the optimal scheme is numerically challenging. Thus, we design the precoders and weight vectors using ad-hoc and near-optimal schemes. The ad-hoc schemes for the precoders are singular value decomposition (SVD), minimising the signal-to-leakage-plus-noise ratio (SLNR) using the Rayleigh-Ritz (RR) method, and zero forcing (ZF). The ad-hoc schemes for the weight vectors are SVD, minimum mean squared error (MMSE) and ZF. The near-optimal scheme uses an iterative RR method to compute the source precoder and destination weight vector, while the relay precoder and weight vector are computed using the ad-hoc methods which provide the best performance. The average power and the instantaneous power normalisations are the two methods used to constrain the relay precoder power. The average power normalisation method uses a novel closed-form covariance matrix with an optimisation approach to constrain the relay precoder. This closed-form covariance matrix is mathematically derived using matrix vectorization techniques. For the instantaneous power normalisation method, the constraint process does not require an optimisation approach. However, with this method the e2e SINR is difficult to calculate; therefore, we use the symbol error rate (SER) as a measure of performance. The results from the different precoder and weight vector designs suggest that reducing the SI using the relay weight vector instead of the relay precoder results in a higher e2e SINR. Consequently, to increase the e2e SINR, performing complicated processing at the relay receiver is more effective than at the relay transmitter.
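As a generic illustration of the zero-forcing and MMSE receive weight designs named above (not the thesis' relay-specific precoders, and ignoring self interference), the sketch below builds both weight matrices for a random MIMO channel and applies them to one received vector. The channel model, dimensions, and noise level are assumed purely for illustration.

```python
# Minimal sketch of ZF and MMSE receive weight matrices for a generic MIMO link.
import numpy as np

rng = np.random.default_rng(2)
n_rx, n_tx, sigma2 = 4, 2, 0.1   # 2 data streams, 4 receive antennas, noise variance
H = (rng.normal(size=(n_rx, n_tx)) + 1j * rng.normal(size=(n_rx, n_tx))) / np.sqrt(2)

# Zero-forcing weights: invert the channel, but amplify noise on weak streams.
W_zf = np.linalg.pinv(H)

# MMSE weights: balance residual interference against noise enhancement.
W_mmse = np.linalg.solve(H.conj().T @ H + sigma2 * np.eye(n_tx), H.conj().T)

x = (rng.integers(0, 2, n_tx) * 2 - 1).astype(complex)          # BPSK symbols
noise = np.sqrt(sigma2 / 2) * (rng.normal(size=n_rx) + 1j * rng.normal(size=n_rx))
y = H @ x + noise

print("transmitted:  ", x)
print("ZF estimate:  ", np.round(W_zf @ y, 2))
print("MMSE estimate:", np.round(W_mmse @ y, 2))
```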
157

Analyzing the use of UTAUT model in explaining an online behaviour : Internet banking adoption

Al-Qeisis, Kholoud Ibrahim January 2009 (has links)
Technology acceptance research is a constantly developing field. The disciplines that contributed to its development are either beliefs focused or system focused. The unified theory of acceptance and use of technology (UTAUT) combined both. The current research model proposes an extension to the UTAUT that accounts for online usage behaviour. The proposed research model is tested in two countries (UK and Jordan) to investigate the viability of the unified model of technology acceptance in different boundaries as a model of individuals’ discretionary usage of Internet banking. The study also questions the roles of other determinants and moderators in this context. Results found support for the effect of the proposed extension, website quality perceptions, on usage behaviour in both countries’ models; the total effect of this extension made website quality perceptions the most influential determinant of usage behaviour in both models, with the performance expectancy construct second in effect. Social influence had no impact on usage behaviour in either model, which is consistent with previous research that advocates a declining role of social influence under discretionary usage and increased experience conditions. Furthermore, the moderating role of performance expectancy previously established in TAM research was supported in the UTAUT model in both countries’ models. Moreover, both models reported a non-moderating effect of gender, which is also in line with recent research findings that suggest declining gender differences under voluntary usage conditions and advanced experience. Education and income were moderators only for the UK model. Although the research findings demonstrated that both countries’ models were “configurally” similar with respect to model specifications, the models’ explanatory power for usage behaviour was dissimilar: the UK model’s explanatory power exceeded that of Jordan’s model, presenting an opportunity for future research. The current research contributes to knowledge in the field of technology acceptance research. It demonstrated that website quality perceptions, as a multidimensional concept, play an important role in the online usage context. It also demonstrated that the unified model of technology acceptance established in western culture can be transferred to a non-western culture, although with varying degrees of explanatory power.
158

Lake Fluxes of Methane and Carbon Dioxide

Podgrajsek, Eva January 2015 (has links)
Methane (CH4) and carbon dioxide (CO2) are two important greenhouse gases. Recent studies have shown that lakes, although they cover a small area of the globe, can be very important natural sources of atmospheric CH4 and CO2. It is therefore important to monitor the fluxes of these gases between lakes and the atmosphere in order to understand the processes that govern the exchange. Using the eddy covariance method for lake flux studies increases the temporal and spatial resolution of the fluxes, which gives more information on the governing processes. Eddy covariance measurements at a Swedish lake revealed a diel cycle in the fluxes of both CH4 and CO2, with higher fluxes during nighttime than during daytime. The high nighttime CO2 fluxes could to a large extent be explained by enhanced transfer velocities due to waterside convection. For the diel cycle of CH4 flux, it was suggested that waterside convection could enhance the transfer velocity, transport CH4-rich water to the surface, and trigger ebullition. Simultaneous flux measurements of CH4 and CO2 were made using both the eddy covariance method and the floating chamber method, of which the latter is the traditional measuring method for lake fluxes. For CO2 the two methods agreed well during some periods but differed considerably during others. Disagreement between the methods might be due to horizontal heterogeneity in the partial pressure of CO2 in the lake. The methods agreed better for the CH4 flux measurements. However, it is clear that, due to the discontinuous nature of the floating chambers, this method will likely miss important high-flux events. The main conclusions of this thesis are: 1) the two gas flux methods are not directly comparable and should be seen as supplementary to each other; 2) waterside convection enhances the fluxes of both CH4 and CO2 across the water-air interface. If gas flux measurements are not conducted during nighttime, potential high-flux periods might be missed and estimates of the total amount of gas released from lakes to the atmosphere may be biased.
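For readers unfamiliar with the eddy covariance method used throughout this work, the sketch below shows the core calculation on synthetic high-frequency data: the flux is the covariance of vertical wind speed and gas concentration fluctuations over an averaging period. The sampling rate, signal levels, and correlation are illustrative assumptions, and real processing involves additional corrections not shown here.

```python
# Minimal sketch of the eddy covariance flux calculation on synthetic data.
import numpy as np

rng = np.random.default_rng(3)
n = 10 * 60 * 30                     # 30 minutes of 10 Hz measurements
w = rng.normal(0.0, 0.3, n)          # vertical wind speed (m/s)
# Concentration partly correlated with updrafts -> a net upward methane flux.
c = 1.9 + 0.02 * w + rng.normal(0.0, 0.05, n)   # CH4 (arbitrary concentration units)

w_prime = w - w.mean()               # fluctuations about the averaging-period mean
c_prime = c - c.mean()
flux = np.mean(w_prime * c_prime)    # eddy covariance flux (concentration units * m/s)
print(f"flux = {flux:.4f} (concentration units * m s^-1)")
```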
159

Scale-Dependent Community Theory for Streams and Other Linear Habitats.

Holt, Galen, Chesson, Peter 09 1900 (has links)
The maintenance of species diversity occurs at the regional scale but depends on interacting processes at the full range of lower scales. Although there is a long history of study of regional diversity as an emergent property, analyses of fully multiscale dynamics are rare. Here, we use scale transition theory for a quantitative analysis of multiscale diversity maintenance with continuous scales of dispersal and environmental variation in space and time. We develop our analysis with a model of a linear habitat, applicable to streams or coastlines, to provide a theoretical foundation for the long-standing interest in environmental variation and dispersal, including downstream drift. We find that the strength of regional coexistence is strongest when local densities and local environmental conditions are strongly correlated. Increasing dispersal and shortening environmental correlations weaken the strength of coexistence regionally and shift the dominant coexistence mechanism from fitness-density covariance to the spatial storage effect, while increasing local diversity. Analysis of the physical and biological determinants of these mechanisms improves understanding of traditional concepts of environmental filters, mass effects, and species sorting. Our results highlight the limitations of the binary distinction between local communities and a species pool and emphasize species coexistence as a problem of multiple scales in space and time.
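A minimal numerical sketch of the fitness-density covariance concept discussed above, under assumed patch-level fitness and density values: the regional growth rate equals the mean local fitness plus the covariance between local fitness and relative density, so the contribution is largest when favourable local conditions and high local densities coincide. All quantities are synthetic and purely illustrative, not from the authors' model.

```python
# Minimal sketch of the scale transition: regional growth = mean fitness + fitness-density covariance.
import numpy as np

rng = np.random.default_rng(4)
n_patches = 200
env = rng.normal(size=n_patches)                  # local environmental conditions
fitness = np.exp(0.3 * env)                       # local finite rate of increase
density = np.maximum(1.0 + 0.8 * env + rng.normal(0, 0.2, n_patches), 0.05)
rel_density = density / density.mean()            # relative density, mean = 1

regional_growth = np.mean(fitness * rel_density)  # density-weighted regional growth rate
fd_cov = np.mean((fitness - fitness.mean()) * (rel_density - rel_density.mean()))

print(f"regional growth rate:                     {regional_growth:.3f}")
print(f"mean local fitness + fitness-density cov: {fitness.mean() + fd_cov:.3f}")
```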
160

Étude de la puissance de test dans le contrôle des valeurs de base lors de l'utilisation de modèles linéaires / A study of test power when controlling for baseline values using linear models

Roy, Julie January 2006 (has links)
Thesis digitized by the Direction des bibliothèques de l'Université de Montréal.
