471

Samonastavitelná regulace elektrického motoru / Self-tuning control of electric motor

Havlíček, Jiří January 2017 (has links)
The diploma thesis deals with self-tuning PSD controllers. The parameters of the plant model are obtained by a non-recursive (one-shot) method of least squares. With the assistance of the Matlab/Simulink environment, the individual PSD controller variants are compared on a second-order system. The thesis also presents a simulation of self-tuning cascade control of the current and speed loops of a PMSM. The final part covers the implementation of the individual algorithms on the dSPACE platform for a real PMSM.
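The one-shot (batch) least-squares identification step the abstract mentions can be sketched as follows. This is an illustrative example, not code from the thesis; the second-order model coefficients and signal lengths are invented:

```python
import numpy as np

# Simulate a discrete second-order plant:
# y[k] = a1*y[k-1] + a2*y[k-2] + b1*u[k-1] + b2*u[k-2]
rng = np.random.default_rng(0)
a1, a2, b1, b2 = 1.5, -0.7, 0.5, 0.3     # "true" parameters to recover
N = 200
u = rng.standard_normal(N)               # excitation input
y = np.zeros(N)
for k in range(2, N):
    y[k] = a1 * y[k - 1] + a2 * y[k - 2] + b1 * u[k - 1] + b2 * u[k - 2]

# Batch least squares: stack all regressors at once and solve in one shot
# (as opposed to a recursive update at every sample)
Phi = np.column_stack([y[1:-1], y[:-2], u[1:-1], u[:-2]])  # rows k = 2..N-1
theta, *_ = np.linalg.lstsq(Phi, y[2:], rcond=None)
print(np.round(theta, 3))  # ≈ [1.5, -0.7, 0.5, 0.3]
```

The recovered `theta` would then feed a controller-tuning rule; in a real self-tuning loop the identification is repeated as new data arrive.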
472

Trialability, perceived risk and complexity of understanding as determinants of cloud computing services adoption

Etsebeth, Eugene Everard 16 February 2013 (has links)
In 2011 one-third of South African organisations did not intend to adopt cloud computing services because IT decision-makers lacked understanding of the related concepts and benefits (Goldstuck, 2011). This research develops a media-oriented model to examine the adoption of these services in South Africa. The model uses the technology acceptance model (TAM) and innovation diffusion theory (IDT) to develop variables considered determinants of adoption, including trialability, complexity of understanding, perceived risk, perceived ease of use and perceived usefulness. An electronic survey was sent to 107 IT decision-makers; over 80% of the respondents were C-suite executives. The Partial Least Squares (PLS) method, a second-generation technique with advantages over ordinary regression models, was chosen to depict and test the proposed model. The data analysis included evaluating and modifying the model, assessing the new measurement model, testing the hypotheses of the model structure and presenting the structural model. The research found that media, experts and word of mouth mitigate perceived risks, including bandwidth, connectivity and power. Furthermore, trialability and perceived usefulness were affected by social influence, as well as influencing adoption. The results enable service providers and marketers to develop product roadmaps and pinpoint media messages. / Dissertation (MBA)--University of Pretoria, 2012. / Gordon Institute of Business Science (GIBS) / unrestricted
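The core idea behind PLS, extracting a latent component with maximal covariance with the outcome rather than regressing on noisy indicators directly, can be sketched in a few lines. This is a generic one-component illustration with invented data, not the study's PLS-SEM analysis:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 120
# A latent construct (e.g. "social influence") drives four observed indicators
latent = rng.standard_normal(n)
X = np.column_stack([latent + 0.1 * rng.standard_normal(n) for _ in range(4)])
# The outcome (e.g. "adoption intent") depends on the latent construct
y = 2.0 * latent + 0.1 * rng.standard_normal(n)

# One NIPALS step: weight vector = direction of max covariance with y
Xc, yc = X - X.mean(0), y - y.mean()
w = Xc.T @ yc
w /= np.linalg.norm(w)
t = Xc @ w                       # latent score per respondent
b = (t @ yc) / (t @ t)           # regress outcome on the latent score
r2 = 1 - np.sum((yc - b * t) ** 2) / np.sum(yc ** 2)
print(round(r2, 3))              # close to 1: the component explains y well
```

Full PLS-SEM as used in the dissertation additionally models paths between several latent constructs; this sketch shows only the measurement-to-outcome step.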
473

An Evaluation of a Modified Behavioral Skills Training Procedure for Teaching Poison Prevention Skills to Children with Developmental Disabilities

Petit-Frere, Paula 21 March 2019 (has links)
Although household products such as pharmaceuticals and cleaning chemicals are part of a child's everyday life, accidental poisonings can occur as a result of ingestion. Children diagnosed with developmental disabilities are even more susceptible to injury when they come into contact with these poisonous agents. Behavioral approaches have been used extensively to teach safety skills to children with disabilities; however, studies targeting poison prevention skills have required additional, more intrusive methods for children to acquire the skills. Thus, the purpose of this study was to evaluate the efficacy of a modified behavioral skills training (BST) package that incorporates a system of least prompts. Results showed that BST with a system of least prompts increased poison prevention skills for all three participants, and the skills were maintained at follow-up.
474

Least-squares Migration and Full Waveform Inversion with Multisource Frequency Selection

Huang, Yunsong 09 1900 (has links)
Multisource Least-Squares Migration (LSM) of phase-encoded supergathers has shown great promise in reducing the computational cost of conventional migration. For the marine acquisition geometry, however, this approach faces the challenge of an erroneous misfit due to the mismatch between the limited number of live traces per shot recorded in the field and the pervasive number of traces generated by the finite-difference modeling method. To tackle this mismatch problem, I present a frequency-selection strategy for LSM of supergathers. The key idea is, at each LSM iteration, to assign a unique frequency band to each shot gather, so that the spectral overlap among those shots, and therefore their crosstalk, is zero. Consequently, each receiver can unambiguously identify and then discount the superfluous sources, i.e. those that are not associated with the receiver in marine acquisition. To compare with standard migration, I apply the proposed method to the 2D SEG/EAGE salt model and obtain better-resolved images computed at about 1/8 the cost; results for the 3D SEG/EAGE salt model with an Ocean Bottom Seismometer (OBS) survey show a speedup of 40×. This strategy is next extended to multisource Full Waveform Inversion (FWI) of supergathers for marine streamer data, with the same advantages of computational efficiency and storage savings. In the Finite-Difference Time-Domain (FDTD) method, to mitigate spectral leakage due to the delayed onsets of sine waves detected at receivers, I double the simulation time and retain only the second half of the simulated records. To compare with standard FWI, I apply the proposed method to the 2D SEG/EAGE salt velocity model and to Gulf of Mexico (GOM) field data, and obtain speedups of about 4× and 8×, respectively. Formulas are then derived for the resolution limits of the various constituent wavepaths pertaining to FWI: diving waves, primary reflections, diffractions, and multiple reflections. They suggest that inverting multiples can provide some low- and intermediate-wavenumber components of the velocity model not available in the primaries. In addition, diffractions can provide twice the resolution of specular reflections, or better, for comparable depths of the reflector and diffractor. The width of the diffraction-transmission wavepath is on the order of λ at the diffractor location.
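The frequency-selection principle, that shots occupying disjoint frequency bands can be blended into one supergather and still be separated exactly at a receiver, can be demonstrated with a toy two-shot example. Single tones stand in for band-limited shot gathers here; this is an illustration, not the thesis code:

```python
import numpy as np

fs, n = 1000.0, 1000
t = np.arange(n) / fs

# Assign each "shot" a unique, non-overlapping frequency band
shot1 = np.sin(2 * np.pi * 40 * t)    # shot 1: energy at 40 Hz
shot2 = np.sin(2 * np.pi * 120 * t)   # shot 2: energy at 120 Hz
supergather = shot1 + shot2           # blended record at one receiver

# The receiver separates the shots by masking the spectrum
S = np.fft.rfft(supergather)
freqs = np.fft.rfftfreq(n, 1 / fs)
mask1 = freqs < 80                    # keep only shot 1's band
recovered1 = np.fft.irfft(np.where(mask1, S, 0), n)

# Zero spectral overlap means zero crosstalk: recovery is exact
print(np.max(np.abs(recovered1 - shot1)))
```

Because the bands do not overlap, the residual is at machine-precision level, which is the "zero crosstalk" property the strategy exploits.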
475

Public Opinion on Tobacco, Alcohol, and Sugar Policy and its Economic Implications in Sweden : A study on sociodemographic factors’ effects on health policy attitudes of Swedes

Karlsson, Jonas January 2020 (has links)
Using paired-samples t-tests, this study examines attitudes toward government intervention to decrease the consumption of tobacco, alcohol, and sugar to improve public health in Sweden. The effects of four sociodemographic variables (gender, age, education, and income) on attitudes toward health policies are tested using Ordinary Least Squares (OLS) and ordered probit regressions. The research uses cross-sectional data supplied by a national survey. The results show that, according to respondents, tobacco should be regulated the most, followed by alcohol and lastly sugar: tobacco and alcohol consumption need clear societal restrictions, while individuals should be responsible for their own sugar consumption. This implies that tobacco and alcohol restrictions introduced by the government should be effective and should therefore reduce consumption and subsequently decrease a country's economic costs; the opposite is true for sugar policy. Women, younger people, highly educated people, and people with higher incomes show greater support for tobacco restrictions. Women, younger people, and highly educated people show more support for alcohol restrictions. Lastly, respondents with higher levels of education are more supportive of sugar restrictions.
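An OLS regression of a policy-attitude score on sociodemographic covariates, of the kind the study runs, can be sketched with simulated data. All numbers below are invented; the coefficient signs merely mirror the reported direction of the findings:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
# Illustrative sociodemographic covariates (not the survey's actual data)
female = rng.integers(0, 2, n)
age = rng.integers(18, 85, n)
education = rng.integers(0, 4, n)   # e.g. 0 = primary ... 3 = postgraduate

# Simulated support score: women, younger and more educated respondents
# more supportive of restrictions (signs chosen to mirror the findings)
attitude = (3 + 0.5 * female - 0.02 * age + 0.3 * education
            + 0.2 * rng.standard_normal(n))

# Ordinary Least Squares fit
X = np.column_stack([np.ones(n), female, age, education])
beta, *_ = np.linalg.lstsq(X, attitude, rcond=None)
print(np.round(beta, 2))  # ≈ [3.0, 0.5, -0.02, 0.3]
```

The ordered probit model the study also uses treats the attitude scale as ordinal rather than continuous; the covariate structure is the same.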
476

Establishing Junior-level Colleges in Developing Nations: a Site Selection Process Using Data From Uganda

Iaeger, Paula Irene 05 1900 (has links)
This research synthesizes data and presents it using mapping software to help identify potential site locations for community-centered higher education alternatives and more traditional junior-level colleges in Uganda. What factors can be used to rank one site over another for the location of such an institution, and, if these factors can be isolated, why should they be used by local authorities? The variables are drawn from the Southern and Eastern Africa Consortium for Monitoring Educational Quality (SACMEQ), Afrobarometer, census data, and technology reports and surveys. These variables are reduced, grouped and mapped to help determine the best location for a junior-level college. Local expert opinion on geopolitical, economic, and educational conditions can be combined with the database data to identify potential sites for junior-level colleges, with the potential to reduce the failure rate of such post-secondary ventures. These data are analyzed in the context of reported higher education policies and outcomes from the national ministries, the United Nations Educational, Scientific and Cultural Organization (UNESCO), quality assurance agencies in the region, the World Bank, and national datasets. The final product is a model and tool that local experts can use to better select future sites for expanding higher education, especially in rural areas of the least developed countries.
477

Using Second-Order Information in Training Deep Neural Networks

Ren, Yi January 2022 (has links)
In this dissertation, we are concerned with the advancement of optimization algorithms for training deep learning models, and in particular with practical second-order methods that take into account the structure of deep neural networks (DNNs). Although first-order methods such as stochastic gradient descent have long been the predominant optimization algorithms used in deep learning, second-order methods are of interest because of their ability to use curvature information to accelerate the optimization process. After the presentation of some background information in Chapter 1, Chapters 2 and 3 focus on the development of practical quasi-Newton methods for training DNNs. We analyze the Kronecker-factored structure of the Hessian matrix of multi-layer perceptrons and convolutional neural networks and consequently propose block-diagonal Kronecker-factored quasi-Newton methods named K-BFGS and K-BFGS(L). To handle the non-convex nature of DNNs, we also establish new double-damping techniques for our proposed methods. Our K-BFGS and K-BFGS(L) methods have memory requirements comparable to first-order methods and incur only mild overhead in per-iteration time complexity. In Chapter 4, we develop a new approximate natural gradient method named Tensor Normal Training (TNT), in which the Fisher matrix is viewed as the covariance matrix of a tensor normal distribution (a generalized form of the normal distribution). The tractable Kronecker-factored approximation to the Fisher information matrix that results from this approximation enables TNT to enjoy memory requirements and per-iteration computational costs that are only slightly higher than those of first-order methods. Notably, unlike KFAC and K-BFGS/K-BFGS(L), TNT requires only knowledge of the shapes of the trainable parameters of a model and does not depend on the specific model architecture.
In Chapter 5, we consider subsampled versions of the Gauss-Newton and natural gradient methods applied to DNNs. Because of the low-rank nature of the subsampled matrices, we make use of the Sherman-Morrison-Woodbury formula along with backpropagation to efficiently compute their inverses. We also show that, under rather mild conditions, the algorithm converges to a stationary point if Levenberg-Marquardt damping is used. The results of a substantial number of numerical experiments, comparing the performance of our methods to state-of-the-art methods used to train DNNs, are reported in Chapters 2, 3, 4 and 5; they demonstrate the efficiency and effectiveness of our proposed second-order methods.
478

Techno-economic Assessment of Wind Energy to Supply the Demand of Electricity for a Residential Community in Ethiopia

Yebi, Adamu January 2011 (has links)
The electricity sector is a major source of the carbon dioxide emissions that contribute to global climate change. Over the past decade, wind energy has steadily emerged as a potential low-carbon energy source whose deployment has grown over time. As wind power generation increases around the world, there is growing interest both in adding intermittent power to the electricity grid and in designing off-grid wind energy systems. The goal of this thesis is to investigate a techno-economically viable wind energy system that supplies electricity and heat for a given residential community in Ethiopia. To ease the optimization process, the HOMER software is used to identify the potential wind area and to optimize a cost-effective wind energy system.
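The kind of yield estimate that underlies such a techno-economic assessment, combining a wind-speed distribution with a turbine power curve, can be sketched crudely as follows. All numbers are invented and the power curve is heavily simplified; HOMER performs a far more detailed hourly simulation:

```python
import numpy as np

# Weibull wind model (illustrative shape/scale, not Ethiopian site data)
k, c = 2.0, 6.5                    # shape (-) and scale (m/s)
rng = np.random.default_rng(4)
v = c * rng.weibull(k, 100_000)    # sampled wind speeds, m/s

def power_kw(v):
    """Toy power curve for a 1 MW turbine: cut-in 3 m/s, rated 12 m/s."""
    p = np.clip((v - 3.0) / (12.0 - 3.0), 0.0, 1.0) ** 3 * 1000.0
    p[v >= 25.0] = 0.0             # cut-out at high wind speeds
    return p

capacity_factor = power_kw(v).mean() / 1000.0
annual_mwh = capacity_factor * 8760      # one year of hours, 1 MW rated
print(round(capacity_factor, 2))
```

The capacity factor, and hence the energy yield per unit of installed cost, is the quantity that drives the economic side of the optimization.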
479

Multivariate analysis and GIS in generating vulnerability map of acid sulfate soils.

Nguyen, Nga January 2015 (has links)
The study employed multivariate methods to generate vulnerability maps for acid sulfate (AS) soils in the Norrbotten county of Sweden. The relationships between the reclassified datasets and each biogeochemical element were carefully evaluated with Kruskal-Wallis one-way ANOVA and PLS analysis. The statistical results of the Kruskal-Wallis ANOVA provided useful knowledge of the relationships between the preliminary vulnerability ranks in the classified datasets and the amount of each biogeochemical element. This statistical knowledge, together with expert knowledge, was then used to generate the final vulnerability ranks of AS soils in the classified datasets, which served as the independent input variables in the PLS analyses. The results of the Kruskal-Wallis one-way ANOVA and PLS analyses showed a strong correlation between higher levels of total Cu2+, Ni2+ and S and the higher vulnerability ranks in the classified datasets. Hence, total Cu2+, Ni2+ and S were chosen as the dependent variables for further PLS analyses. In particular, the Variable Importance in the Projection (VIP) value of each classified dataset was standardized to generate its weight. The vulnerability map of AS soils was the result of a linear combination of the standardized values in the classified datasets and their weights. Seven weight sets were formed from either univariate or multivariate PLS analyses. Accuracy tests were performed by testing the classification of measured pH values of 74 soil profiles against the different vulnerability maps and by evaluating the areas that were not AS soil within the groups of medium-to-high AS soil probability in the land-cover and soil-type datasets. In comparison with the other weight sets, the weight sets from the multivariate PLS analyses of total Ni2+ & S and of total Cu2+ & S had the most robust predictive performance. Sensitivity analysis was done for the weight set of total Ni2+ & S, and the results showed that the availability of ditches, the change in the terrain surfaces, the altitude level, and the slope had a strong influence on the vulnerability map of AS soils. The study showed that multivariate analysis is an effective methodology for predicting the probability of acid sulfate soil.
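The map-generation step described above, a weighted linear combination of standardized raster layers, can be sketched generically. The layers, grid size, and weights below are invented placeholders, not the study's datasets or VIP values:

```python
import numpy as np

rng = np.random.default_rng(5)
layers = rng.random((3, 4, 4))        # 3 classified datasets on a 4x4 grid
weights = np.array([0.5, 0.3, 0.2])   # standardized (e.g. VIP-derived) weights

# Standardize each layer, then take the weighted linear combination per cell
mu = layers.mean(axis=(1, 2), keepdims=True)
sd = layers.std(axis=(1, 2), keepdims=True)
z = (layers - mu) / sd
vulnerability = np.tensordot(weights, z, axes=1)   # shape (4, 4)
print(vulnerability.shape)
```

In a GIS workflow each `layers[i]` would be a rasterized, reclassified dataset (land cover, soil type, slope, etc.) and the output grid would be rendered as the vulnerability map.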
480

Minimax D-optimal designs for regression models with heteroscedastic errors

Yzenbrandt, Kai 20 April 2021 (has links)
Minimax D-optimal designs for regression models with heteroscedastic errors are studied and constructed. These designs are robust against possible misspecification of the error variance in the model. We propose a flexible assumption for the error variance and use a minimax approach to define robust designs. As is often the case, it is hard to find robust designs analytically, since the associated design problem is not a convex optimization problem. However, the minimax D-optimal design problem has an objective function that is a difference of two convex functions. An effective algorithm is developed to compute minimax D-optimal designs under the least squares estimator and the generalized least squares estimator. The algorithm can be applied to construct minimax D-optimal designs for any linear or nonlinear regression model with heteroscedastic errors. In addition, several theoretical results are obtained for the minimax D-optimal designs. / Graduate
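The D-optimality criterion these designs maximize can be illustrated in the classical homoscedastic simple-linear-regression case (a toy sketch of the criterion only, not the thesis's minimax algorithm):

```python
import numpy as np

def logdet_info(points, weights):
    """log det of the information matrix M = sum_i w_i f(x_i) f(x_i)^T
    for the model y = b0 + b1*x, i.e. f(x) = (1, x)."""
    F = np.column_stack([np.ones(len(points)), points])
    M = F.T @ (np.asarray(weights)[:, None] * F)
    return np.linalg.slogdet(M)[1]

# On the design space [-1, 1], equal mass on the two endpoints is the
# classical D-optimal design for a straight line
endpoint = logdet_info([-1.0, 1.0], [0.5, 0.5])
spread = logdet_info([-1.0, 0.0, 1.0], [1 / 3, 1 / 3, 1 / 3])
print(endpoint > spread)  # True: the endpoint design has larger log det
```

The minimax version of the problem replaces this single log det with the worst case over a set of plausible error-variance functions, which is what makes the objective a difference of convex functions.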
