141

Mapping parallel programs to heterogeneous multi-core systems

Grewe, Dominik January 2014 (has links)
Heterogeneous computer systems are ubiquitous in all areas of computing, from mobile to high-performance computing. They promise to deliver increased performance at lower energy cost than purely homogeneous, CPU-based systems. In recent years GPU-based heterogeneous systems, which combine a programmable GPU with a multi-core CPU, have become increasingly popular. GPUs have become flexible enough to handle not only graphics workloads but also various kinds of general-purpose algorithms, and are thus used as coprocessors or accelerators alongside the CPU. Developing applications for GPU-based heterogeneous systems involves several challenges. Firstly, not all algorithms are equally suited to GPU computing, so it is important to carefully map the tasks of an application to the most suitable processor in a system. Secondly, current frameworks for heterogeneous computing, such as OpenCL, are low-level, requiring a thorough understanding of the hardware by the programmer. This high barrier to entry could be lowered by automatically generating and tuning the low-level code from a high-level, more user-friendly programming language. Both challenges are addressed in this thesis. For the task mapping problem, a machine-learning-based approach is presented: it combines static features of the program code with runtime information on input sizes to predict the optimal mapping of OpenCL kernels. This approach is further extended to take contention on the GPU into account. Both methods outperform competing mapping approaches by a significant margin. Furthermore, this thesis develops a method for targeting GPU-based heterogeneous systems from OpenMP, a directive-based framework for parallel computing. OpenMP programs are translated to OpenCL and optimized for GPU performance. At runtime a predictive model decides whether to execute the original OpenMP code on the CPU or the generated OpenCL code on the GPU. This approach is shown to outperform both a competing approach and hand-tuned code.
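The kind of learned mapping heuristic described above can be sketched in a few lines. The following Python fragment is a hypothetical illustration, not the author's implementation: a decision tree trained on static kernel features plus the runtime input size predicts the faster device for a new kernel. The feature set and training data are invented.

    # Hypothetical CPU-vs-GPU mapping heuristic: static kernel features plus
    # runtime input size feed a classifier that predicts the faster device.
    from sklearn.tree import DecisionTreeClassifier

    # Each row: [compute-to-memory ratio, branch count, bytes transferred, input size]
    X_train = [
        [4.0,  2, 1_000_000, 1 << 20],   # compute-heavy, large input: GPU won
        [0.5, 12,   500_000, 1 << 10],   # branchy, small input: CPU won
        [3.5,  1, 2_000_000, 1 << 22],
        [0.8,  9,   100_000, 1 << 8],
    ]
    y_train = ["GPU", "CPU", "GPU", "CPU"]   # measured best device per kernel

    model = DecisionTreeClassifier(max_depth=3).fit(X_train, y_train)

    # At launch time, predict a device for a new kernel/input-size pair.
    print(model.predict([[2.7, 3, 800_000, 1 << 18]]))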
142

Confinement tuning of a 0-D plasma dynamics model

Hill, Maxwell D. 27 May 2016 (has links)
Investigations of tokamak dynamics, especially as they relate to the challenge of burn control, require an accurate representation of energy and particle confinement times. While the ITER-98 scaling law represents a correlation of data from a wide range of tokamaks, confinement scaling laws will need to be fine-tuned to specific operational features of specific tokamaks in the future. A methodology for developing, by regression analysis, tokamak- and configuration-specific confinement tuning models is presented and applied to DIII-D as an illustration. It is shown that inclusion of tuning parameters in the confinement models can significantly enhance the agreement between simulated and experimental temperatures relative to simulations in which only the ITER-98 scaling law is used. These confinement tuning parameters can also be used to represent the effects of various heating sources and other plasma operating parameters on overall plasma performance and may be used in future studies to inform the selection of plasma configurations that are more robust against power excursions.
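A one-parameter version of this tuning idea can be sketched as follows: assume a multiplicative tuning factor C such that tau_E = C * tau_ITER98 and fit C by least squares over a set of discharges. The data values below are invented for illustration.

    # Least-squares fit of a single multiplicative confinement tuning factor.
    import numpy as np

    tau_iter98 = np.array([0.11, 0.14, 0.09, 0.16])  # scaling-law predictions (s)
    tau_exp    = np.array([0.10, 0.15, 0.08, 0.17])  # experimentally inferred (s)

    # Minimising sum((tau_exp - C*tau_iter98)^2) gives C = sum(x*y) / sum(x^2).
    C = np.dot(tau_iter98, tau_exp) / np.dot(tau_iter98, tau_iter98)
    print(f"tuning factor C = {C:.3f}")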
143

Analysis of news sentiment and its application to finance

Yu, Xiang January 2014 (has links)
We report our investigation of how news stories influence the behaviour of tradable financial assets, in particular equities. We consider the established methods of turning news events into a quantifiable measure and explore the models which connect these measures to financial decision making and risk control. The study in this thesis is built around two problems of both practical and research interest: determining trading strategies and quantifying trading risk. We have constructed a new measure which takes into consideration (i) the volume of news and (ii) the decaying effect of news sentiment. In this way we derive the impact of aggregated news events for a given asset; we have defined this as the impact score. We also characterise the behaviour of assets using three parameters, namely return, volatility and liquidity, and construct predictive models which incorporate impact scores. The derivation of the impact measure and the characterisation of asset behaviour by introducing liquidity are two innovations reported in this thesis and are claimed to be contributions to knowledge. The impact of news on asset behaviour is explored using two sets of predictive models: univariate models and multivariate models. In our univariate predictive models, a universe of 53 assets was considered in order to examine the relationship between news and assets across 9 different sectors. For the multivariate case, we have selected 5 stocks from the financial sector only, as this is relevant for the purpose of constructing trading strategies. We have analysed the celebrated Black-Litterman model (1991) and constructed our Bayesian multivariate predictive models such that we can incorporate domain expertise to improve the predictions. Not only does this suggest one of the best ways to choose priors in Bayesian inference for financial models using news sentiment, but it also allows the use of current and synchronised data with market information. This is also a novel aspect of our work and a further contribution to knowledge.
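The impact score lends itself to a short sketch: a sum of sentiment values weighted by an exponential decay, with news volume entering through the number of terms. The half-life and data below are illustrative assumptions, not the thesis calibration.

    import math

    def impact_score(events, t_now, half_life_hours=6.0):
        # events: list of (timestamp_hours, sentiment in [-1, 1]) for one asset.
        lam = math.log(2) / half_life_hours
        # Each news event contributes its sentiment, decayed by its age.
        return sum(s * math.exp(-lam * (t_now - t)) for t, s in events if t <= t_now)

    events = [(0.0, 0.8), (2.0, -0.3), (5.0, 0.5)]
    print(impact_score(events, t_now=6.0))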
144

Modelling and Model-Based Control Design for a Rotorcraft Unmanned Aerial Vehicle

Choi, Rejina Ling Wei January 2014 (has links)
Designing high-performance control of a rotorcraft unmanned aerial vehicle (RUAV) requires a mathematical model that describes the dynamics of the vehicle. The model is derived from first-principles modelling, such as rigid-body dynamics and actuator dynamics. It is found that a simplified decoupled model of the RUAV fits the data slightly better than the complex model for helicopter attitude dynamics in hover or near-hover flight conditions. In addition, the simplified modelling approach makes the analysis of the system dynamics easier. A system identification method is applied to identify the unknown intrinsic parameters in the nominal model: a manually piloted flight experiment is carried out, and input-output data about a nominal operating region are recorded for the parameter identification process. An integral-based parameter identification algorithm is then used to identify the model parameters that give the best match between the simulated and measured output responses. The results obtained show that the dominant dynamics are captured. The advantages of the integral-based method include fast computation, insensitivity to initial parameter values and a fast convergence rate in comparison with other contemporary system identification methods such as the prediction error method (PEM), the maximum likelihood method, the equation error method and the output error method. Moreover, the integral-based parameter identification method can readily be extended to tackle slowly time-varying model parameters and fast-varying disturbances. The model prediction is found to improve significantly when iterative integral-based parameter identification is employed, which further validates the minimal modelling approach. From the literature review, many control schemes have been designed and validated in simulation. However, few of them have been implemented in real flight, let alone under windy and severe conditions, where large unpredictable variations in system parameters and unexpected disturbances are present. The emphasis in this part is therefore on a control design with satisfactory reference tracking or regulation capability in the presence of unmodelled dynamics and external disturbances. A Generalised Predictive Controller (GPC) is considered for the helicopter attitude dynamics due to its insensitivity to model mismatch and its capability to handle nominal models with deadtime. The robustness analysis shows that the robustness of the basic GPC is significantly improved by using a Smith Predictor (SP) in place of the optimal predictor of the basic GPC. The effectiveness of the proposed robust GPC was demonstrated by controlling the helicopter heading on a test rig, in terms of reference tracking performance and input disturbance rejection. The second motivation is the investigation of an adaptive GPC from the perspective of performance improvements over the robust GPC. The promising experimental results prove the feasibility of the adaptive GPC controller, especially when the underlying robust GPC is tuned with low robustness, and justify the use of the simplified model. A further robust model predictive control approach is also considered, in which disturbances are identified in real time using an iterative integral-based method.
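Integral-based identification is easy to illustrate on a first-order model y' = a*y + b*u: integrating both sides gives y(t) - y(0) = a*int(y) + b*int(u), which is linear in (a, b) and solvable by ordinary least squares. The sketch below, using simulated data, is a minimal version of the idea rather than the thesis algorithm.

    import numpy as np

    dt = 0.01
    t = np.arange(0, 5, dt)
    u = np.ones_like(t)                               # step input
    a_true, b_true = -2.0, 4.0
    y = b_true / -a_true * (1 - np.exp(a_true * t))   # exact step response

    # Cumulative trapezoidal integrals of y and u.
    Iy = np.concatenate(([0.0], np.cumsum((y[1:] + y[:-1]) / 2 * dt)))
    Iu = np.concatenate(([0.0], np.cumsum((u[1:] + u[:-1]) / 2 * dt)))

    # Solve y(t) - y(0) = a*Iy + b*Iu for (a, b) by least squares.
    A = np.column_stack([Iy, Iu])
    theta, *_ = np.linalg.lstsq(A, y - y[0], rcond=None)
    print(theta)   # recovers approximately [-2.0, 4.0]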
145

A study of predictive capacity and working memory in mild Alzheimer's disease and normal controls using saccadic eye movements

Ruthirakuhan, Myuri 22 May 2013 (has links)
Alzheimer’s disease (AD) is a neurodegenerative disorder with no existing cure. Since cognitive control influences saccade behaviour, saccades provide a valuable tool for studying cognitive changes in healthy and pathological aging. This thesis aims to evaluate differences in predictive capacity and working memory between cognitively normal older adults (NC) and mild AD patients using customized saccade paradigms and a battery of neuropsychological tests. In the predictive paradigm, we hypothesize that AD participants display a decreased level of prediction, predictive capacity and learning capacity. In the memory-guided paradigm, we hypothesize that AD participants have a decreased ability to maintain fixation and a decreased capacity to retain information and reproduce it correctly. Overall, we found that in the predictive paradigm, NC displayed a greater degree of prediction than AD participants. However, both groups had an optimal level of prediction at intermediate inter-stimulus intervals (ISI) (750 and 1000 ms). As ISI increased, both groups, although more so the AD group, elicited a greater proportion of saccadic reaction times (SRTs) below -200 ms and -400 ms. This may suggest that as ISI increased, participants switched from a predictive to an anticipatory/guessing strategy. At an ISI of 500 ms, NC’s learning capacity was greater than that of AD participants. Cognitive scores on neuropsychological tests did not correlate with learning capacity in NC. However, learning capacity in AD participants was positively correlated with working memory capacity and attentional control. The memory-guided paradigm revealed that AD participants completed fewer viable trials and fewer correct trials, and made more combined directional and timing errors, than NC. Cognitive correlations showed that NC’s working memory capacity correlated positively with the frequency of correct trials and negatively with saccade errors. Since AD participants completed only 10% of viable trials correctly, the task may have been too difficult for them to comprehend, rendering the correlations invalid. These findings suggest that although the predictive paradigm does not solely assess prediction, it may provide a measure to cognitively differentiate NC from AD patients and detect AD severity. Since the memory-guided paradigm may be too difficult for AD participants, it may provide a better indicator of cognitive changes associated with healthy aging. / Thesis (Master, Neuroscience Studies) -- Queen's University, 2013-05-21 18:54:19.492
146

Practical on-line model validation for model predictive controllers (MPC)

Naidoo, Yubanthren Tyrin. January 2010 (has links)
A typical petro-chemical or oil-refining plant is known to operate with hundreds if not thousands of control loops. All critical loops are required to operate at their respective optimal levels for the plant to run efficiently. With such a large number of vital loops, it is difficult for engineers to monitor and maintain these loops so that they operate under optimum conditions at all times. Parts of processes are interactive, increasingly so nowadays with growing integration, requiring the use of more advanced control systems. The most widely applied advanced process control system is the Model Predictive Controller (MPC). The success of these controllers is evident in the large number of applications worldwide. These controllers rely on a process model in order to predict future plant responses. Naturally, the performance of model-based controllers is intimately linked to the quality of the process models. Industrial project experience has shown that the most difficult and time-consuming work in an MPC project is modelling and identification. With time, the performance of these controllers degrades due to changes in feed, operating regime and plant configuration. One cause of controller degradation is the degradation of these process models. If a discrepancy between the controller’s plant model and the plant itself exists, controller performance may be adversely affected. It is important to detect these changes and re-identify the plant model to maintain control performance over time. In order to avoid the time-consuming process of complete model identification, a model validation tool is developed which provides a model quality indication based on real-time plant data. The focus has been on developing a method that is simple to implement but still robust. The techniques and algorithms presented are developed as far as possible to resemble an on-line software environment and are capable of running parallel to the process in real time. These techniques are based on parametric (regression) and nonparametric (correlation) analyses which complement each other in identifying problems within on-line models. These methods pinpoint the precise location of a mismatch. This implies that only a few inputs have to be perturbed in the re-identification process and only the degraded portion of the model needs to be updated. This work is carried out for the benefit of SASOL, focused exclusively on the Secunda plant, which has a large number of model predictive controllers that must be maintained for optimal economic benefit. The efficacy of the methodology developed is illustrated in several simulation studies designed to mirror occurrences in industrial processes. The methods were also tested on an industrial application. The key results and shortfalls of the methodology are documented. / Thesis (M.Sc.Eng.)-University of KwaZulu-Natal, Durban, 2010.
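The nonparametric (correlation) side of such a check is simple to sketch: if the model matches the plant, the residual e = y_plant - y_model should be uncorrelated with the input u, so a significant cross-correlation flags a mismatch in that channel. The data below are simulated with a deliberate gain mismatch.

    import numpy as np

    rng = np.random.default_rng(0)
    u = rng.standard_normal(2000)
    y_plant = 1.2 * u + 0.1 * rng.standard_normal(2000)  # "plant": gain 1.2
    y_model = 1.0 * u                                    # model: gain 1.0
    e = y_plant - y_model

    # Normalised correlation at lag 0; |r| well above ~2/sqrt(N) flags mismatch.
    r = np.corrcoef(u, e)[0, 1]
    print(r, "threshold ~", 2 / np.sqrt(len(u)))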
147

Model predictive control of a Brayton cycle based power plant

Lusanga, Peter Kabanda January 2012 (has links)
The aim of this study is to implement model predictive control in order to optimally control the power output of a Brayton cycle based power plant. Other control strategies have been tried, but there still exists a need for better performance. In real systems, a number of constraints exist, and incorporating these into the control design is no trivial task. Unlike most control strategies, model predictive control allows the designer to explicitly incorporate constraints in its formulation. The original design of the PBMR power plant, which uses helium gas as the working fluid, is considered. The power output of the system can be controlled by manipulating the helium inventory of the gas cycle. A linear model of the power plant, implemented in Simulink®, is used as an evaluation platform for the control strategy. The helium inventory is manipulated by means of actuators which are driven by values generated by the controller. The controller computes these values by minimizing the cost of future outputs over a finite horizon in the presence of constraints. The dynamic response of the system is used to tune the controller. The power output performance for different configurations of the controller, under perfect conditions and with disturbances, is examined. The best configuration is used, resulting in an optimal power control system for the Brayton cycle based power plant. Results showed that the method employed can be used to implement the control strategy and that better performance can be realised with model predictive control. / Thesis (M.Ing. (Electrical and Electronic Engineering))--North-West University, Potchefstroom Campus, 2012
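The finite-horizon optimization at the core of such a controller can be sketched for a generic linear plant; the matrices and input limit below are invented and are not the PBMR model. The controller minimizes a quadratic cost over the horizon subject to an input constraint and applies only the first move.

    import numpy as np
    import cvxpy as cp

    A = np.array([[1.0, 0.1], [0.0, 0.9]])
    B = np.array([[0.0], [0.1]])
    N = 20                                    # prediction horizon
    x0 = np.array([1.0, 0.0])

    x = cp.Variable((2, N + 1))
    u = cp.Variable((1, N))
    cost, constraints = 0, [x[:, 0] == x0]
    for k in range(N):
        cost += cp.sum_squares(x[:, k + 1]) + 0.1 * cp.sum_squares(u[:, k])
        constraints += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                        cp.abs(u[:, k]) <= 0.5]   # actuator limit
    cp.Problem(cp.Minimize(cost), constraints).solve()
    print(u.value[:, 0])                      # apply only the first move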
148

Predictive analysis at Kronofogden : Classifying first-time debtors with an uplift model

Rantzer, Måns January 2016 (has links)
The use of predictive analysis is becoming more commonplace with each passing day, which strengthens the case that even governmental institutions should adopt it. Kronofogden is in the middle of a digitization process and is therefore in a unique position to implement predictive analysis at the core of its operations. This project aims to study whether methods from predictive analysis can predict how many debts will be received for a first-time debtor, through the use of uplift modeling. The difference between uplift modeling and conventional modeling is that the former aims to measure the difference in behavior after a treatment, in this case guidance from Kronofogden. Another aim of the project is to examine whether the scarce literature on uplift modeling has it right about how the conventional two-model approach fails to perform well in practical situations. The project shows results similar to Kronofogden's internal evaluations. Three models were compared: random forests, gradient-boosted models and neural networks, with the last performing best. Positive uplift could be found for 1-5% of the debtors, meaning the current cutoff level of 15% is too high. The models have several potential sources of error, however: the modeling choices, the possibility that the data are not informative enough, and the possibility that the actual expected uplift for new data is zero.
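The "conventional two-model approach" mentioned above is simple to sketch: fit one classifier on treated cases and one on controls, and take the difference in predicted probabilities as the uplift estimate. The data below are synthetic.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(1)
    X = rng.standard_normal((1000, 5))                  # debtor features
    treated = rng.integers(0, 2, 1000).astype(bool)     # received guidance?
    y = (X[:, 0] + 0.5 * treated + rng.standard_normal(1000) > 0).astype(int)

    m_t = RandomForestClassifier(n_estimators=100).fit(X[treated], y[treated])
    m_c = RandomForestClassifier(n_estimators=100).fit(X[~treated], y[~treated])

    # Uplift = P(outcome | treated) - P(outcome | control), per individual.
    uplift = m_t.predict_proba(X)[:, 1] - m_c.predict_proba(X)[:, 1]
    print("share with positive uplift:", np.mean(uplift > 0))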
149

Stochastic model predictive control

Ng, Desmond Han Tien January 2011 (has links)
The work in this thesis focuses on the development of a Stochastic Model Predictive Control (SMPC) algorithm for linear systems with additive and multiplicative stochastic uncertainty subject to linear input/state constraints. Constraints can be in the form of hard constraints, which must be satisfied at all times, or soft constraints, which can be violated up to a pre-defined limit on the frequency of violation or on the expected number of violations in a given period. When constraints are included in the SMPC algorithm, the difficulty arising from stochastic model parameters manifests itself in the online optimization in two ways: in predicting the probability distribution of future states, and in imposing constraints on closed-loop responses through constraints on predictions. This problem is overcome through the introduction of layered tubes around a centre trajectory. These tubes are optimized online in order to produce a systematic and less conservative approach to handling constraints. The layered tubes, centered around a nominal trajectory, achieve soft constraint satisfaction through constraints on the probabilities of one-step-ahead transitions of the predicted state between the layered tubes and constraints on the probability of one-step-ahead constraint violations. An application in the field of Sustainable Development policy is used as an example. With some adaptation, the algorithm is extended to the case where the uncertainty is not independently and identically distributed. Also, by including linearization errors, it is extended to non-linear systems with additive uncertainty.
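A single ingredient of such schemes can be shown numerically: a chance constraint P(x <= b) >= 1 - p on a Gaussian one-step-ahead prediction is equivalent to a tightened deterministic bound on the predicted mean. The numbers are illustrative, and the thesis's layered-tube construction is considerably more elaborate than this single back-off.

    from scipy.stats import norm

    b = 1.0        # state bound
    p = 0.1        # allowed violation probability
    sigma = 0.2    # standard deviation of the one-step-ahead prediction

    # Require mean + z_{1-p} * sigma <= b, i.e. back the bound off by a quantile.
    backoff = norm.ppf(1 - p) * sigma
    print("deterministic bound on predicted mean:", b - backoff)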
150

British banking-halls as a property investment

Tipping, Malvern January 2011 (has links)
This research relates to British banking-halls as a class of real estate investment. Sale-and-leaseback has become an increasingly common approach to the holding of British banking-halls during the last two decades. One measure used in making property investment decisions is the all risks yield (ARY). Investors and their advisors need a framework for predicting which retail bank premises are likely to achieve the highest ARY when assembling investment portfolios of such properties. Such a framework requires the identification of the factors that significantly influence the yields of British banking-halls, and this research aims to develop one. A triangulation methodology was adopted to establish and test the predictive framework. A literature review established the theory before a qualitative study, based upon semi-structured interviews and a questionnaire, was used to establish the influencing factors. A cross-sectional study of auction data then formed the basis of the quantitative regression study. The qualitative and quantitative studies validated four factors as significant in influencing yield: tenant banking company, lot size, super-region and the macro-economic cycle index. A toolkit comprising a predictive framework for identifying those banking-halls likely to produce the highest ARY was produced, capable of being used by professional practitioners and investors in predicting high yield for portfolio-building purposes. The predictive framework was developed from the quantitative data for the three banks with the most premises sold by sale-and-leaseback. It forms a baseline from which further studies can build to test its significance for other banks, so that a more robust predictive framework for banking-hall investments can be developed. Further research can also be conducted, building on the findings of this research, to develop predictive frameworks forecasting yields for investment in other commercial retail sectors.
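The regression at the heart of such a framework can be sketched as an ordinary least-squares fit of ARY on the four factors. The encodings and data below are invented; in practice the categorical factors would be dummy-coded rather than given integer labels.

    import numpy as np

    # Columns: tenant bank (coded), log lot size, super-region (coded), macro index
    X = np.array([[1, 11.5, 2,  0.3],
                  [2, 12.1, 1,  0.1],
                  [1, 13.0, 3, -0.2],
                  [3, 11.8, 2,  0.4],
                  [2, 12.6, 1,  0.0],
                  [3, 12.3, 3,  0.2]], dtype=float)
    y = np.array([5.8, 6.1, 5.5, 6.4, 6.0, 6.2])   # observed ARY (%)

    X1 = np.column_stack([np.ones(len(X)), X])     # add intercept
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    print(beta)                                    # intercept + factor coefficients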
