481 |
Supporting large scale collaboration and crowd-based investigation in economics: a computational representation for description and simulation of financial models
Faleiro, Jorge. January 2018 (has links)
Finance should be studied as a hard science, where scientific methods apply. When a trading strategy is proposed, the underlying model should be transparent and defined robustly to allow other researchers to understand and examine it thoroughly. Any reports on experimental results must allow other researchers to trace back to the original data and models that produced them. As in any hard science, results must be repeatable to allow researchers to collaborate and build upon each other’s results. Large-scale collaboration, when applying the steps of scientific investigation, is an efficient way to leverage crowd science to accelerate research in finance. Unfortunately, the current reality is far from that. Evidence shows that current methods of investigation in finance in most cases do not allow for reproducible and falsifiable procedures of scientific investigation. As a consequence, the majority of financial decisions at all levels, from personal investment choices to overarching global economic policies, rely on some variation of trial and error and are mostly non-scientific by definition. We lack transparency for procedures and evidence, proper explanation of market events, predictability of effects, and identification of causes. There is no clear demarcation of what is inherently scientific, and as a consequence, the line between the fake and the true is blurred. In this research, we advocate the use of a next-generation investigative approach leveraging the forces of human diversity, micro-specialized crowds, and proper computer-assisted control methods associated with accessibility, reproducibility, communication, and collaboration. This thesis is structured in three distinctive parts. The first part defines a set of very specific cognitive and non-cognitive enablers for crowd-based scientific investigation: methods of proof, large-scale collaboration, and a domain-specific computational representation.
These enablers allow the application of procedures of structured scientific investigation powered by crowds, a “collective brain in which neurons are human collaborators”. The second part defines a specialized computational representation to allow proper controls and large-scale collaboration in the field of economics. A computational representation is a role-based representation system based on facets, contributions, and constraints of data, used to define concepts related to a specific domain of knowledge for crowd-based investigation. The third and last part performs an end-to-end investigation of a non-trivial problem in finance by measuring the actual performance of a momentum strategy in technical analysis, applying the formal methods of investigation developed over the first and second parts of this research.
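The reproducibility argument above can be made concrete with a minimal sketch: a toy momentum rule backtested on synthetic prices under a fixed random seed, so that any researcher re-running the experiment obtains the identical result. The rule, parameters, and data below are illustrative assumptions, not the thesis's actual representation or strategy.

```python
import random

def momentum_signal(prices, lookback=20):
    """Return +1 (long) if the price rose over the lookback window, else -1."""
    signals = []
    for t in range(len(prices)):
        if t < lookback:
            signals.append(0)  # not enough history yet
        else:
            signals.append(1 if prices[t] > prices[t - lookback] else -1)
    return signals

def backtest(prices, signals):
    """Sum of signal-weighted one-step price changes; a deliberately minimal metric."""
    pnl = 0.0
    for t in range(1, len(prices)):
        pnl += signals[t - 1] * (prices[t] - prices[t - 1])
    return pnl

random.seed(42)  # fixed seed: the whole experiment is repeatable bit-for-bit
prices = [100.0]
for _ in range(250):
    prices.append(prices[-1] + random.gauss(0.05, 1.0))  # synthetic random walk with drift

signals = momentum_signal(prices)
print(round(backtest(prices, signals), 2))
```

Because the data generation, model, and evaluation are all in one transparent script with a pinned seed, the result can be traced, reproduced, and falsified, which is exactly the property the thesis argues most financial research lacks.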
|
482 |
The development of an adaptive environment (framework) to assist the teaching, learning and assessment of geography within the Omani secondary education system
Al-Lawati, Batoul. January 2012 (links)
Owing to particular historical reasons, the Sultanate of Oman emerged into the modern world only in 1970 and launched its state education system in 1972. Less than thirty years later, the Sultanate recognized that a major overhaul of state education was needed to face the challenges that globalization posed to its population and to its economy. The policies for the transition to the Basic Education (BE) system stipulated that students should receive training in information technology (IT) and English from Year One. These provisions were implemented from academic year 1998/1999, so that by the commencement of academic year 2010/2011 three cohorts of students had received a full ten years of schooling in IT. This research investigated the effects of integrating IT into the geography curriculum in Cycle Two of the BE system. Despite an extensive and painstaking search, no previous published study was found that dealt with the pedagogic use of IT in the Omani BE system. One study (Osman 2010) surveyed users of the Oman Educational IT Portal, but it was a general attitudinal survey of all users and did not progress beyond use of a questionnaire. Therefore, this study is the first to conduct fieldwork research in Oman to develop indicators to measure Omani students' performance in and reactions to eLearning. The study also includes two dedicated surveys covering Omani students' and teachers' opinions of and attitudes to eLearning. This is therefore the first study of this type that has been conducted in or for Oman. The findings support the importance of integrating eLearning into the curriculum in Oman, to enhance the delivery of a range of curriculum subjects through the pedagogical use of IT. 
Through a comparison of responses from teachers and students in Oman and two other countries, this study also explores issues emerging from a comparison between cultures (Gulf Arab and Western) in terms of the varying effects that cultural and other factors can exert on teachers' and learners' acceptance of educational technology in different countries. Again, it is a feature of this research that it is the first to conduct such a comparative study on such a scale involving Gulf Arab students and teachers. This study raises issues surrounding the optimisation of acceptance, including (1) the necessity of increasing internet speed in Oman; (2) the current inadequacy of eLearning resources; (3) the proper management of eLearning integration; (4) the need for enhancement of eLearning training and skills for both teachers and learners; and (5) the further relationships inherent in the interaction of culture and the acceptance of technology.
|
483 |
Computer aided mathematical modelling of turbulent flow for orifice metering
Hafiz bin Haji Khozali, Muhammed. January 1981 (links)
The time-averaged Navier-Stokes partial differential equations have been used in the mathematical modelling of fluid flow for steady, incompressible, non-cavitating, high Reynolds number turbulent flow through an orifice plate. The model developed for orifice plates was based on a particular closed-form turbulence model: the k-ε two-equation model developed at Imperial College, London, and embodied in the TEACH-T finite difference computer code. A basic model for axisymmetric flow through an orifice meter was developed by appropriate modification of the TEACH-T program to incorporate orifice plate geometry, upstream/downstream distances, Reynolds number, inlet velocity profile and the calculation of output quantities of interest such as discharge and pressure loss coefficients. The model was tested for convergence and general adequacy on an orifice of diameter ratio β = 0.7 in a 4 inch pipeline and at a Reynolds number of 10^5. Quantitative tests were then conducted on thin orifice plates in the range 0.3 ≤ β ≤ 0.7. Results were compared with those from BSI 1042 for discharge coefficients (flange, D-D/2 and corner tappings) and published results for pressure loss coefficients. The results show that the discharge coefficient predictions are within 3% of experiment, with very close agreement in the mid-range (β = 0.45). The pressure loss coefficient predictions are within 15% of experiment. Sensitivity tests were then conducted to see how these coefficients varied with such quantities as inlet velocity profile, turbulence levels and orifice plate thickness. These results indicated that the orifice is relatively insensitive to velocity profiles (1/12 power law and uniform) and turbulence levels. Also, below a certain orifice plate thickness ratio the discharge coefficient is almost constant. It is concluded that such modelling can be a most valuable aid in understanding the behaviour of the orifice meter and similar devices.
In particular, this would aid in the design of novel flow meters based on the differential pressure principle. Extensive mathematical and computational details, including the derivation of the k-ε model equations from first principles, are relegated to appendices. A source listing of the developed model is also provided in Appendix G.
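As a rough illustration of the output quantities discussed above, a discharge coefficient can be backed out of a measured flow rate and differential pressure using the standard incompressible orifice equation. The numerical values below (flow, pressure drop, fluid) are invented for illustration; only the pipe size and β = 0.7 echo the test case in the abstract.

```python
import math

def discharge_coefficient(q, d_pipe, beta, dp, rho=1000.0):
    """Back out Cd from volumetric flow q [m^3/s], pipe diameter d_pipe [m],
    diameter ratio beta, differential pressure dp [Pa] and density rho [kg/m^3],
    via the standard incompressible orifice equation
    q = Cd * A_orifice / sqrt(1 - beta^4) * sqrt(2 * dp / rho)."""
    d_orifice = beta * d_pipe
    area = math.pi * d_orifice ** 2 / 4.0
    return q * math.sqrt(1.0 - beta ** 4) / (area * math.sqrt(2.0 * dp / rho))

# Illustrative numbers only: a 4-inch (0.1016 m) pipe, beta = 0.7, water,
# and a plausible flow/pressure pair.
cd = discharge_coefficient(q=0.01, d_pipe=0.1016, beta=0.7, dp=5000.0, rho=1000.0)
print(round(cd, 3))  # falls in the typical 0.6-0.7 range for orifice plates
```

In the thesis the CFD model predicts the flow field and hence the pressure at the tapping locations; a relation of this form then converts those predictions into the discharge coefficient compared against the standard.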
|
484 |
Multi-objective optimisation of low-thrust trajectories
Zuiani, Federico. January 2015 (links)
This research work developed an innovative computational approach to the preliminary design of low-thrust trajectories optimising multiple mission criteria. Low-Thrust (LT) propulsion has become the propulsion system of choice for a number of near-Earth and interplanetary missions. Consequently, in the last two decades a wealth of research has been devoted to the development of computational methods for designing low-thrust trajectories. Most of the techniques, however, minimise or maximise a single figure of merit under a set of design constraints. Less effort has been devoted to the development of efficient methods for the minimisation (or maximisation) of two or more figures of merit. On the other hand, in the preliminary mission design phase, the decision maker is interested in analysing as many design solutions as possible against different trade-off criteria. Therefore, in this PhD work, an innovative Multi-Objective (MO), memetic optimisation algorithm, called Multi-Agent Collaborative Search (MACS2), has been implemented to tackle low-thrust trajectory design problems with multiple figures of merit. Tests on both academic and real-world problems showed that the proposed MACS2 paradigm performs better than or as well as other state-of-the-art Multi-Objective optimisation algorithms. Concurrently, a set of novel approximate, first-order analytical formulae has been developed to obtain a fast but reliable estimation of the main trade-off criteria. These formulae allow for the fast propagation of orbital motion under a constant perturbing acceleration, and have been shown to propagate long LT trajectories quickly and relatively accurately under the typical acceleration levels delivered by current engine technology. Various applications are presented to demonstrate the validity of the combination of the analytical formulae with MACS2.
Among them are the preliminary design of the JAXA low-cost DESTINY mission to L2, a novel approach to the optimisation under uncertainty of deflection actions for Near Earth Objects (NEO), and the de-orbiting of space debris with low thrust alone and with a combination of low thrust and solar radiation pressure.
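The multi-objective setting described above rests on the notion of Pareto dominance: one design beats another only if it is no worse in every objective and strictly better in at least one. A minimal sketch of dominance and non-dominated filtering follows; it is not the MACS2 algorithm itself, and the (delta-v, time-of-flight) objective pairs are hypothetical.

```python
def dominates(a, b):
    """True if a dominates b under minimisation: no worse in every
    objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the non-dominated points (the trade-off front)."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Hypothetical (delta-v [km/s], time-of-flight [days]) pairs for candidate trajectories.
candidates = [(3.0, 200), (2.5, 260), (3.5, 180), (2.5, 300), (4.0, 170)]
front = pareto_front(candidates)
print(front)
```

Here (2.5, 300) is eliminated because (2.5, 260) matches its delta-v with a shorter flight time; the surviving points are the trade-off set a mission designer would inspect. Population-based algorithms such as MACS2 aim to approximate this front for problems where it cannot be enumerated.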
|
485 |
The importance of statistical measure when describing phenotype
Hajne, Joanna. January 2015 (links)
Data collected in life sciences studies mostly include a genotype description of the organism, a phenotype characterisation of the organism, and experiment-specific covariates, including a description of experimental procedures and laboratory (environmental) conditions. Here, phenotype measurements are taken for Neurospora crassa (wild type) growing on agar under standard laboratory conditions. I define a phenotype as a set of traits including apical extension velocity, branching angle, and branching distance. I use the above measures (traits) to model (estimate) the biologically complex filamentous fungal network as a simplified 'In Silico Fungus' consisting of a series of straight lines. Phenotype data are often characterised, by appeal to the central limit theorem, by means and standard deviations; subsequently, P values are used to show statistical validity. Here, I question whether making the normality assumption simply because this approach is popular is always justified. I therefore test three different scenarios by making different assumptions about the data collected. (1) Firstly, I use the most popular approach: I assume the phenotype data come from a continuous, normal (Gaussian) distribution, and thus predict future measurement outcomes by using a normal (Gaussian) parametric approximation. (2) Secondly, I use the most intuitive approach: I make no assumptions about the data collected and predict future measurement outcomes by drawing values pseudo-randomly from the actual, raw, discrete dataset. (3) Finally, I use a strategy balanced between the previous two: I construct a customised, continuous, non-parametric distribution based on the data collected, and thus predict future measurement outcomes by using the kernel density estimation method. Subsequently, I implement all of the strategies above, (1), (2), and (3), in the In Silico Fungus programme to compare the computer simulation outcomes.
More specifically, I compare the surface coverage, expressed as the proportion of the surface occupied by the fungus. The results show that the differences between the data regimes (1), (2), and (3) are significant. Therefore, I conclude that correct assessment of data normality is crucial for the correct interpretation and implementation of scientific observations. I suspect that the data classification process described determines the successful implementation of biological findings, especially in fields such as medicine and engineering.
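The three sampling regimes compared above can each be sketched in a few lines. The trait values, seed, and bandwidth rule below are assumptions for illustration only; Silverman's rule of thumb stands in for whatever bandwidth selection the thesis actually uses.

```python
import random
import statistics

random.seed(0)
data = [random.gauss(5.0, 1.2) for _ in range(200)]  # stand-in trait measurements

# (1) Parametric: fit a normal distribution and sample from it.
mu, sigma = statistics.mean(data), statistics.stdev(data)
normal_draw = random.gauss(mu, sigma)

# (2) Non-parametric, discrete: resample directly from the raw observations
# (no distributional assumption at all).
bootstrap_draw = random.choice(data)

# (3) Non-parametric, continuous: sample from a Gaussian kernel density
# estimate by picking an observation and jittering it by the bandwidth h.
h = 1.06 * sigma * len(data) ** -0.2  # Silverman's rule of thumb (assumed)
kde_draw = random.choice(data) + random.gauss(0.0, h)

print(normal_draw, bootstrap_draw, kde_draw)
```

Regime (1) can only ever reproduce a bell curve, regime (2) can only ever return values already seen, and regime (3) smoothly interpolates between observations; fed into the simulation, the three regimes therefore generate measurably different surface-coverage outcomes.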
|
486 |
Massively parallel time- and frequency-domain Navier-Stokes Computational Fluid Dynamics analysis of wind turbine and oscillating wing unsteady flows
Drofelnik, Jernej. January 2017 (links)
Increasing interest in renewable energy sources for electricity production complying with stricter environmental policies has greatly contributed to further optimisation of existing devices and the development of novel renewable energy generation systems. The research and development of these advanced systems is tightly bound to the use of reliable design methods, which enable accurate and efficient design. Reynolds-averaged Navier-Stokes Computational Fluid Dynamics is one of the design methods that may be used to accurately analyse complex flows past current and forthcoming renewable energy fluid machinery such as wind turbines and oscillating wings for marine power generation. The use of this simulation technology offers a deeper insight into the complex flow physics of renewable energy machines than the lower-fidelity methods widely used in industry. The complex flows past these devices, which are characterised by highly unsteady and, often, predominantly periodic behaviour, can significantly affect power production and structural loads. Therefore, such flows need to be accurately predicted. The research work presented in this thesis deals with the development of a novel, accurate, scalable, massively parallel CFD research code COSA for general fluid-based renewable energy applications. The research work also demonstrates the capabilities of newly developed solvers of COSA by investigating complex three-dimensional unsteady periodic flows past oscillating wings and horizontal-axis wind turbines. Oscillating wings for the extraction of energy from an oncoming water or air stream feature highly unsteady hydrodynamics. The flow past oscillating wings may feature dynamic stall and leading edge vortex shedding, and is significantly three-dimensional due to finite-wing effects. Detailed understanding of these phenomena is essential for maximising the power generation efficiency.
Most of the knowledge on oscillating wing hydrodynamics is based on two-dimensional low-Reynolds number computational fluid dynamics studies and experimental testing. However, real installations are expected to feature Reynolds numbers of the order of 1 million and strong finite-wing-induced losses. This research investigates the impact of finite wing effects on the hydrodynamics of a realistic aspect-ratio-10 oscillating wing device in a stream with Reynolds number of 1.5 million, for two high-energy extraction operating regimes. The benefits of using endplates in order to reduce finite-wing-induced losses are also analysed. Three-dimensional time-accurate Reynolds-averaged Navier-Stokes simulations using Menter's shear stress transport turbulence model and a 30-million-cell grid are performed. Detailed comparative hydrodynamic analyses of the finite and infinite wings highlight that the power generation efficiency of the finite wing with sharp tips for the considered high energy-extraction regimes decreases by up to 20%, whereas the maximum power drop is 15% at most when using the endplates. Horizontal-axis wind turbines may experience strong unsteady periodic flow regimes, such as those associated with the yawed wind condition. Reynolds-averaged Navier-Stokes CFD has been demonstrated to predict horizontal-axis wind turbine unsteady flows with accuracy suitable for reliable turbine design. The major drawback of conventional Reynolds-averaged Navier-Stokes CFD is its high computational cost. A time-step-independent time-domain simulation of horizontal-axis wind turbine periodic flows requires long runtimes, as several rotor revolutions have to be simulated before the periodic state is achieved. Runtimes can be significantly reduced by using the frequency-domain harmonic balance method for solving the unsteady Reynolds-averaged Navier-Stokes equations.
This research has demonstrated that this promising technology can be efficiently used for the analyses of complex three-dimensional horizontal-axis wind turbine periodic flows, and has a vast potential for rapid wind turbine design. The three-dimensional simulations of the periodic flow past the blade of the NREL 5-MW baseline horizontal-axis wind turbine in yawed wind have been selected for the demonstration of the effectiveness of the developed technology. The comparative assessment is based on thorough parametric time-domain and harmonic balance analyses. Presented results highlight that horizontal-axis wind turbine periodic flows can be computed by the harmonic balance solver about fifty times more rapidly than by the conventional time-domain analysis, with accuracy comparable to that of the time-domain solver.
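The premise of the harmonic balance method, that a periodic flow is well captured by a small number of Fourier harmonics so one can solve directly for those harmonics instead of time-marching many rotor revolutions, can be illustrated with a toy periodic signal. The signal and harmonic count below are assumptions for illustration, not the turbine flow field.

```python
import math

def harmonics(samples, n_harm):
    """Leading Fourier coefficients (mean plus n_harm harmonics) of one period,
    computed by a direct discrete Fourier transform."""
    n = len(samples)
    coeffs = []
    for k in range(n_harm + 1):
        re = sum(s * math.cos(2 * math.pi * k * j / n) for j, s in enumerate(samples)) / n
        im = -sum(s * math.sin(2 * math.pi * k * j / n) for j, s in enumerate(samples)) / n
        coeffs.append((re, im))
    return coeffs

def reconstruct(coeffs, n, j):
    """Evaluate the truncated Fourier series at sample index j."""
    total = coeffs[0][0]  # mean (DC) component
    for k, (re, im) in enumerate(coeffs[1:], start=1):
        angle = 2 * math.pi * k * j / n
        total += 2 * (re * math.cos(angle) - im * math.sin(angle))
    return total

n = 64
# A periodic 'load' with two dominant harmonics, mimicking the once- and
# twice-per-revolution content of a yawed-rotor signal.
signal = [1.0 + 0.5 * math.sin(2 * math.pi * j / n) + 0.2 * math.cos(4 * math.pi * j / n)
          for j in range(n)]
coeffs = harmonics(signal, 2)
err = max(abs(signal[j] - reconstruct(coeffs, n, j)) for j in range(n))
print(err)
```

Two harmonics reproduce this signal to round-off error. Harmonic balance exploits exactly this: solving a small coupled set of steady-like problems, one per retained harmonic, in place of marching hundreds of time steps per revolution, which is the source of the roughly fifty-fold speed-up reported above.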
|
487 |
Operational optimisation of water distribution networks
Lopez-Ibanez, Manuel. January 2009 (links)
Water distribution networks are a fundamental part of any modern city and their daily operations constitute a significant expenditure in terms of energy and maintenance costs. Careful scheduling of pump operations may lead to significant energy savings and reduce wear and tear. By means of computer simulation, an optimal schedule of pumps can be found by an optimisation algorithm. The subject of this thesis is the study of pump scheduling as an optimisation problem. New representations of pump schedules are investigated for restricting the number of potential schedules. Recombination and mutation operators are proposed, in order to use the new representations in evolutionary algorithms. These new representations are empirically compared to traditional representations using different network instances, one of them being a large and complex network from the UK. By means of the new representations, the evolutionary algorithm developed during this thesis finds new best-known solutions for both networks. Pump scheduling as the multi-objective problem of minimising energy and maintenance costs in terms of Pareto optimality is also investigated in this thesis. Two alternative surrogate measures of maintenance cost are considered: the minimisation of the number of pump switches and the maximisation of the shortest idle time. A single run of the multi-objective evolutionary algorithm obtains pump schedules with lower electrical cost and lower number of pump switches than those found in the literature. Alternatively, schedules with very long idle times may be found with slightly higher electrical cost. Finally, ant colony optimisation is also adapted to the pump scheduling problem. Both Ant System and Max-Min Ant System are tested. Max-Min Ant System, in particular, outperforms all other algorithms in the large real-world network instance and obtains competitive results in the smallest test network. Computation time is further reduced by parallel simulation of pump schedules.
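The two maintenance surrogates above are easy to state precisely. A minimal sketch on a hypothetical 24-hour binary on/off schedule follows (the schedule itself is invented for illustration and is not from the thesis):

```python
def pump_switches(schedule):
    """Number of off-to-on transitions in a binary on/off schedule;
    each switch-on contributes to pump wear."""
    return sum(1 for prev, cur in zip(schedule, schedule[1:]) if prev == 0 and cur == 1)

def shortest_idle_time(schedule):
    """Length of the shortest run of consecutive off hours (0 if never idle);
    longer minimum idle times give pumps more time to cool between duties."""
    runs, current = [], 0
    for state in schedule:
        if state == 0:
            current += 1
        elif current:
            runs.append(current)
            current = 0
    if current:
        runs.append(current)
    return min(runs) if runs else 0

# Hypothetical 24-hour schedule for a single pump (1 = on, one entry per hour).
schedule = [1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0]
print(pump_switches(schedule), shortest_idle_time(schedule))  # 3 switches, shortest idle run of 2 h
```

In the multi-objective formulation, one of these quantities is traded off against electrical cost: minimising switches (or maximising the shortest idle time) pulls toward fewer, longer pumping blocks, while tariff-driven energy minimisation pulls toward pumping in cheap hours.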
|