91

Combining empirical mode decomposition with neural networks for the prediction of exchange rates / Jacques Mouton

Mouton, Jacques January 2014 (has links)
The foreign exchange market is one of the largest and most active financial markets, with enormous daily trading volumes. Exchange rates are influenced by the interactions of a large number of agents, each operating with different intentions and on different time scales. This gives rise to nonlinear and non-stationary behaviour, which complicates modelling. This research proposes a neural network-based model, trained on data filtered with a novel Empirical Mode Decomposition (EMD) filtering method, for the forecasting of exchange rates. One minor and two major exchange rates are evaluated in this study. Firstly, the ideal prediction horizons for trading are calculated for each of the exchange rates. The data is filtered according to this ideal prediction horizon using the EMD-filter. This EMD-filter dynamically filters the data based on the apparent number of intrinsic modes in the signal that can contribute towards prediction over the selected horizon. The filter is employed to remove high-frequency noise and components that would not contribute to the prediction of the exchange rate at the chosen timescale. This results in a clearer signal that still includes nonlinear behaviour. An artificial neural network predictor is trained on the filtered data using different sampling rates that are compatible with the cut-off frequency. The neural network is able to capture the nonlinear relationships between historic and future filtered data with greater certainty than a neural network trained on unfiltered data. Results show that, for all the exchange rates, the neural network trained on EMD-filtered data is significantly more accurate at predicting exchange rates than the benchmark models of a neural network trained on unfiltered data and a random walk model. The EMD-filtered neural network's predicted returns for the higher sample rates show higher correlations with the actual returns, and significant profits can be made when applying a trading strategy based on the predictions. Lower sample rates that only marginally satisfy the Nyquist criterion perform comparably with the neural network trained on unfiltered data; this may indicate that some aliasing occurs at these sampling rates, as the EMD low-pass filter has a gradual cut-off, leaving some high-frequency noise within the signal. The proposed model of the neural network trained on EMD-filtered data was able to uncover systematic relationships between the filtered inputs and actual outputs. The model is able to deliver profitable average monthly returns for most of the tested sampling rates and forecast horizons of the different exchange rates. This provides evidence that systematic, predictable behaviour is present within exchange rates, and that this systematic behaviour can be modelled if it is properly separated from high frequency noise. / MIng (Computer and Electronic Engineering), North-West University, Potchefstroom Campus, 2015
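
The filter-then-predict pipeline the abstract describes can be sketched compactly. The following is a minimal illustration, not the thesis's implementation: it assumes the PyEMD package (pip install EMD-signal) for the decomposition and scikit-learn for the network, and the number of discarded high-frequency modes, the lag window, and the synthetic series are illustrative choices rather than the study's calibrated values.

    import numpy as np
    from PyEMD import EMD                      # pip install EMD-signal
    from sklearn.neural_network import MLPRegressor

    def emd_lowpass(signal, n_drop=2):
        # Decompose into intrinsic mode functions (IMFs, ordered fast to
        # slow) and discard the n_drop highest-frequency modes -- a rough
        # analogue of the low-pass EMD filter described in the abstract.
        imfs = EMD().emd(signal)
        return imfs[n_drop:].sum(axis=0)

    def lagged_windows(series, n_lags=10, horizon=1):
        # n_lags past values predict the value `horizon` steps ahead.
        X = [series[i:i + n_lags]
             for i in range(len(series) - n_lags - horizon + 1)]
        y = series[n_lags + horizon - 1:]
        return np.array(X), np.array(y)

    rng = np.random.default_rng(0)
    rate = 1.3 + np.cumsum(rng.normal(0.0, 1e-3, 2000))  # synthetic "exchange rate"
    filtered = emd_lowpass(rate)

    X, y = lagged_windows(filtered)
    split = int(0.8 * len(X))
    model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    model.fit(X[:split], y[:split])
    print("out-of-sample R^2:", model.score(X[split:], y[split:]))
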
93

A semi-empirical approach to modelling well deliverability in gas condensate reservoirs

Ugwu, Johnson Obunwa January 2011 (has links)
A critical issue in the development of gas condensate reservoirs is accurate prediction of well deliverability. In this investigation a procedure has been developed for accurate prediction of well production rates using a semi-empirical approach. The use of state-of-the-art fine-grid numerical simulation is time-consuming and computationally demanding, and therefore not suitable for the rapid, real-time production management decisions required on site. Development of accurate fit-for-purpose correlations for fluid property prediction below the saturation pressure was a major consideration, to properly allow for retrograde condensation, the complications of multiphase flow, and mobility issues. Previous works are limited to the use of experimentally measured pressure, volume, temperature (PVT) property data, together with static relative permeability correlations, for simulation of well deliverability. To overcome these limitations, appropriate fluid property correlations required for prediction of well deliverability, together with a dynamic three-phase relative permeability correlation, have been developed to enable forecasting of these properties at all the desired reservoir conditions. The developed correlations include: condensate hybrid compressibility factor, viscosity, density, compositional pseudo-pressure, and dynamic three-phase relative permeability. The study made use of published databases of experimentally measured gas condensate PVT properties and three-phase relative permeability data. The developed correlations have been implemented in both vertical and horizontal well models, and parametric studies have been performed to determine the critical parameters that control productivity in gas condensate reservoirs, using specific case studies. On validation, the improved correlations showed superior performance over existing correlations. The investigation has built on relevant literature to present an approach that modifies the black oil model for accurate well deliverability prediction in condensate reservoirs at conditions normally ignored by the conventional approach. The original contribution to knowledge and practice includes (i) the improved property correlation equations (4.44, 4.47, 4.66, 4.69, 4.75, 5.21) and (ii) extension of the gas rate equations for condensate rate prediction in both vertical and horizontal wells. Standard industry software, the Eclipse compositional model E-300, has been used to validate the procedure. The results show higher well performance compared with the industry standard. The new procedure is able to model well deliverability with limited PVT and rock property data, which is not possible with most available methods. It also makes possible the evaluation of various enhanced hydrocarbon recovery techniques and the optimisation of gas condensate recovery.
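
For reference, the classical single-phase real-gas pseudo-pressure (Al-Hussainy, Ramey and Crawford) that underlies standard gas rate equations is shown below in LaTeX form; the thesis's compositional pseudo-pressure correlation generalises this quantity to account for retrograde condensate dropout and relative permeability effects, and its exact form (the equations 4.44-5.21 cited above) is not reproduced here.

    m(p) = \int_{p_{\mathrm{ref}}}^{p} \frac{2\,p'}{\mu(p')\,Z(p')}\,\mathrm{d}p'
    \qquad\text{with the backpressure deliverability form}\qquad
    q_g = C\left[\, m(\bar{p}_R) - m(p_{wf}) \,\right]^{\,n}

where mu is gas viscosity, Z the compressibility factor, p_R the average reservoir pressure, p_wf the flowing bottomhole pressure, and C and n empirical deliverability constants.
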
94

Advances in empirical similitude method

Tadepalli, Srikanth 02 November 2009 (has links)
Dimensional Analysis is a technique that has allowed engineering evaluation of complex objects by scaling the analysis results of representative simpler models. The original premise of the procedure stems from the idea of developing non-dimensional parameters to relate physical events to an underlying analytical basis. Extending the process to incorporate nonlinear and time-variant behavior has led to the development of a novel process of similitude called the Empirical Similitude Method (ESM), in which experimental data from test specimens are combined to produce the required prediction values. Using the original motivation and hypothesis of ESM, this research has expanded the experimental similitude process by using adapted matrix representations and continuous functional mapping of test results. This new approach has provided more rigorous mathematical definitions for similarity and prediction estimations based on an innovative error minimization algorithm. Shape factors are also introduced and integrated into ESM to obtain comprehensive evaluation of specimen choices. A detailed overview is provided summarizing the methods, principles and laws of traditional similitude (TSM) and the systems that satisfy extension into ESM. The applicability of ESM in different systems is described based on the limitations of TSM in the evaluation of complex structures. Several examples and ideas spanning the aerodynamic, thermal, mechanical and electromagnetic domains are illustrated to complement the inherent technical analysis. For example, the new ESM procedure is shown to be considerably more accurate than earlier methods in predicting the values of the drag coefficient of an airfoil. A final foray into the regime of "design evaluation by similarity" is made to elucidate the applicability and efficiency of the developed techniques in practical systems and products. A thorough methodology is also presented highlighting pertinent procedures and processes in the usage of this method. / text
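
As a concrete instance of the TSM baseline against which ESM is compared, the airfoil drag case cited above reduces, via the Buckingham pi theorem, to a single dimensionless relation (the standard textbook form, not the ESM matrix formulation):

    C_D = \frac{F}{\tfrac{1}{2}\,\rho V^{2} S} = \phi(\mathrm{Re}),
    \qquad \mathrm{Re} = \frac{\rho V c}{\mu}

where F is drag force, rho fluid density, V airspeed, S reference area, c chord length, and mu viscosity. TSM predicts full-scale drag by matching Re between model and prototype; ESM instead fits prediction values directly from test-specimen data when no such clean pi-group relation holds.
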
95

Empirical Studies of Mobile Apps and Their Dependence on Mobile Platforms

Syer, Mark 24 January 2013 (has links)
Our increasing reliance on mobile devices has given rise to a new class of software applications (i.e., mobile apps). Tens of thousands of developers have developed hundreds of thousands of mobile apps that are available across multiple platforms. These apps are used by millions of people around the world every day. However, most software engineering research has been performed on large desktop or server applications. We believe that research efforts must begin to examine mobile apps. Mobile apps are rapidly growing, yet they differ from traditionally-studied desktop/server applications. In this thesis, we examine such apps by performing three quantitative studies. First, we study differences in the size of the code bases and development teams of desktop/server applications and mobile apps. We then study differences in the code, dependency and churn properties of mobile apps from two different mobile platforms. Finally, we study the impact of size, coupling, cohesion and code reuse on the quality of mobile apps. Some of the most notable findings are that mobile apps are much smaller than traditionally-studied desktop/server applications and that most mobile apps tend to be developed by only one or two developers. Mobile app developers tend to rely heavily on functionality provided by the underlying mobile platform through platform-specific APIs. We find that Android app developers tend to rely on the Android platform more than BlackBerry app developers rely on the BlackBerry platform. We also find that defects in Android apps tend to be concentrated in a small number of files and that files that depend on the Android platform tend to have more defects. Our results indicate that major differences exist between mobile apps and traditionally-studied desktop/server applications. However, the mobile apps of two different mobile platforms also differ. Further, our results suggest that mobile app developers should avoid excessive platform dependencies and focus their testing efforts on source code files that rely heavily on the underlying mobile platform. Given the widespread use of mobile apps and the lack of research surrounding these apps, we believe that our results will have significant impact on software engineering research. / Thesis (Master, Computing) -- Queen's University, 2013-01-24 10:15:56.086
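
A rough sketch of the kind of platform-dependence measurement such a study relies on: count platform-specific imports per source file and rank files for testing attention, per the finding that platform-dependent files tend to have more defects. The android./androidx. prefix heuristic and the repository path are illustrative assumptions, not the thesis's instrumentation.

    import re
    from pathlib import Path

    # Platform-specific imports as a proxy for reliance on the Android APIs.
    PLATFORM_IMPORT = re.compile(r"^\s*import\s+androidx?\.", re.MULTILINE)

    def platform_dependence(repo_root):
        # Map each .java file to its count of android.*/androidx.* imports.
        counts = {}
        for src in Path(repo_root).rglob("*.java"):
            counts[str(src)] = len(PLATFORM_IMPORT.findall(
                src.read_text(errors="ignore")))
        return counts

    deps = platform_dependence("path/to/app")      # hypothetical checkout
    for path, n in sorted(deps.items(), key=lambda kv: -kv[1])[:10]:
        print(n, path)                             # candidates for focused testing
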
96

Comparing Five Empirical Biodata Scoring Methods for Personnel Selection

Ramsay, Mark J. 08 1900 (has links)
A biodata based personnel selection measure was created to improve the retention rate of Catalog Telemarketing Representatives at a major U.S. retail company. Five separate empirical biodata scoring methods were compared to examine their usefulness in predicting retention and reducing adverse impact. The Mean Standardized Criterion Method, the Option Criterion Correlation Method, Horizontal Percentage Method, Vertical Percentage Method, and Weighted Application Blank Method using England's (1971) Assigned Weights were employed. The study showed that when using generalizable biodata items, all methods, except the Weighted Application Blank Method, were similar in their ability to discriminate between low and high retention employees and produced similar low adverse impact effects. The Weighted Application Blank Method did not discriminate between the low and high retention employees.
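
Of the five schemes, the Vertical Percentage Method is straightforward to sketch under its common textbook definition: each response option is weighted by the difference between the percentage of high-retention and low-retention employees who chose it. The data below are invented for illustration.

    import numpy as np

    def vertical_percentage_weights(responses, stayed):
        # responses: chosen option id per applicant for one biodata item;
        # stayed: True for high-retention employees. Each option's weight
        # is the high-group endorsement rate minus the low-group rate,
        # in percentage points.
        weights = {}
        for opt in np.unique(responses):
            pct_high = 100.0 * np.mean(responses[stayed] == opt)
            pct_low = 100.0 * np.mean(responses[~stayed] == opt)
            weights[int(opt)] = pct_high - pct_low
        return weights

    responses = np.array([0, 1, 1, 2, 0, 1, 2, 2, 1, 0])   # made-up item data
    stayed = np.array([1, 1, 0, 0, 1, 1, 0, 0, 1, 0], dtype=bool)
    w = vertical_percentage_weights(responses, stayed)
    scores = np.array([w[int(r)] for r in responses])       # applicant item scores
    print(w)
    print(scores)
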
97

Release management in free and open source software ecosystems

Poo-Caamaño, Germán 02 December 2016 (has links)
Releasing software is challenging. To decide when to release software, developers may consider a deadline, a set of features, or quality attributes. Yet, there are many stories of software that is not released on time. In large-scale software development, release management requires significant communication and coordination. It is particularly challenging in Free and Open Source Software (FOSS) ecosystems, in which hundreds of loosely connected developers and their projects are coordinated to release software according to a schedule. In this work, we investigate the release management process in two large-scale FOSS development projects. In particular, our focus is the communication in the whole release management process in each ecosystem across multiple releases. The main research questions addressed in this dissertation are: (1) How do developers in these FOSS ecosystems communicate and coordinate to build and release a common product based on different projects? (2) What are the release management tasks in a FOSS ecosystem? and (3) What are the challenges that release managers face in a FOSS ecosystem? To understand this process and its challenges better, we used a multiple case study methodology and collected evidence from a combination of the following sources: documents, archival records, interviews, direct observation, participant observation, and physical artifacts. We conducted the case studies on two FOSS ecosystems: GNOME and OpenStack. We analyzed over two and a half years of communication in each ecosystem and studied developers' interactions. GNOME is a collection of libraries, system services, and end-user applications; together, these projects provide a unified desktop: the GNOME desktop. OpenStack is a collection of software tools for building and managing cloud computing platforms for public and private clouds. We catalogued communication channels, categorized coordination activities in one channel, and triangulated our results by interviewing key developers identified through social network analysis. We found factors that impact the release process in a software ecosystem: the release schedule (a positive influence), influence rather than direct control, and diversity. The release schedule drives most of the communication within an ecosystem. To achieve a concerted release, a Release Team helps developers reach technical consensus through influence rather than direct control. The diverse composition of the Release Team might increase its reach and influence in the ecosystem. Our results can help organizations build better large-scale teams and show that software engineering research focused on individual projects might miss important parts of the picture. The contributions of this dissertation are: (1) an empirical study of release management in two FOSS ecosystems, (2) a set of lessons learned from the case studies, and (3) a theory of release management in FOSS ecosystems. We summarize our theory, which explains our understanding of release management in FOSS ecosystems, in three statements: (1) the size and complexity of the integrated product are constrained by the release managers' capacity, (2) release management should be capable of reaching the whole ecosystem, and (3) release managers need social and technical skills. The dissertation discusses this theory in the light of the case studies, other research efforts, and its implications. / Graduate / 0984 / gpoo+proquest@calcifer.org
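
The social-network step mentioned above (identifying key developers to interview) can be sketched with networkx: build a graph from communication pairs and rank developers by degree centrality. The reply pairs below are invented for illustration; the dissertation's channel data is not reproduced.

    import networkx as nx

    # Edge = two developers who exchanged messages on a release channel.
    replies = [("ana", "bob"), ("bob", "carol"), ("ana", "carol"),
               ("dave", "ana"), ("erin", "ana"), ("bob", "dave")]

    G = nx.Graph()
    G.add_edges_from(replies)

    centrality = nx.degree_centrality(G)
    key_devs = sorted(centrality, key=centrality.get, reverse=True)[:3]
    print(key_devs)   # most-connected developers: interview candidates
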
98

Three Essays in Financial Economics

Julio, Ivan F. 06 August 2013 (has links)
No description available.
99

Analýza pracovní spokojenosti ve společnosti TOMOS, a.s. / Job satisfaction analysis in the company TOMOS, a.s.

Sumová, Lucie January 2010 (has links)
This diploma thesis presents practical research on job satisfaction in the chosen company, Tomos Praha, a.s. It deals with the theoretical background of job satisfaction, its relation to motivation, and its impact on the performance of employees. It describes particular theories of job satisfaction and focuses on the factors which influence job satisfaction. The practical part examines job satisfaction in the company Tomos Praha, a.s. from different points of view, analyses the data acquired by the empirical research, and proposes steps for improving the current situation.
100

Fundamentos teóricos da atividade de estudo como modelo didático para o ensino das disciplinas científicas / Theoretical foundations of the learning activity as a didactic model for the teaching of scientific disciplines

Magagnato, Pamela Cristina. January 2011 (has links)
Advisor: Mara Sueli Simão Moraes / Committee: Elizabeth Mattiazzo Cardia / Committee: Sueli Terezinha F. Martins / Abstract: This is a theoretical study aimed at paving the way for a later teaching experiment. This theoretical support was based on the Formative Experiment (FE) performed in the Soviet Union and coordinated by Davydov and Elkonin (in which a specific school activity, the learning activity, was created), on the dialectical materialist theory of knowledge, on Cultural-Historical Psychology, and on what Badillo (2004) conceives as a didactic model. Firstly, this study does a brief evaluation of models: the scientific model, the dialectical method as a way to systematize the scientific model, and the didactic model considered as a scientific model of Didactic Science. It then presents the theory of activity in its psychological aspect, through a general characterization of the development of the human psyche, considering the idea that different ways of thinking lead to different forms of development of the psyche and, therefore, to distinct forms of controlling one's own conduct. Secondly, this paper conceptualizes empirical thought and its production model in schools, distinguishing it from theoretical thinking, which, in sequence, is characterized in its relationship with its model of production: the learning activity. Finally, it shows that empirical thought is formed through contents organized by formal logic, while theoretical thinking is formed by performing the learning activity itself, which follows the upward movement from the abstract to the concrete and allows the development of important higher mental functions, such as analysis, reflection, planning and mental experiment. In short, there is an example of the learning activity used to teach the concept of number in mathematics, and a presentation of the main results obtained in the FE. The study then examines the procedures used in the FE to conclude that this FE is characterized... (Complete abstract: click electronic access below) / Master
