
New genetic programming methods for rainfall prediction and rainfall derivatives pricing

Cramer, Sam. January 2017.
Rainfall derivatives fall under the umbrella concept of weather derivatives, whereby an underlying weather variable, in our case rainfall, determines the value of the derivative. These financial contracts are currently in their infancy, having started trading on the Chicago Mercantile Exchange (CME) in 2011. Such contracts are very useful for investors or trading firms who wish to hedge against the direct or indirect adverse effects of rainfall. The first crucial problem this thesis focuses on is predicting the level of rainfall. Two techniques are routinely used for this. The first, and most commonly used, is a Markov chain extended with rainfall prediction; the second is a Poisson-cluster model. Both techniques have weaknesses in their predictive power for rainfall data. More specifically, a large number of the rainfall pathways they produce are not representative of future rainfall levels, and the predictions are heavily influenced by prior information, so that predicted future rainfall levels tend towards the average of previously observed values.

This motivates us to develop a new algorithm for the problem domain, based on Genetic Programming (GP), to improve prediction of the underlying rainfall variable. GP produces white-box (interpretable, as opposed to black-box) models, which allows us to probe the models produced, and it can capture nonlinear and unexpected patterns in the data without making strict assumptions about the data. Daily rainfall data presents some difficulties for GP: the values are non-negative and discontinuous on the real time line, and the time series exhibits high volatility and weak seasonality. This makes rainfall derivatives much more challenging to deal with than other weather contracts, such as those on temperature or wind. Indeed, GP does not perform well when applied directly to daily rainfall data. We therefore propose a data transformation that improves GP's predictive power: the daily rainfall amounts are accumulated over a sliding window. To evaluate performance, we compare the prediction accuracy of GP against the approach most commonly used for rainfall derivatives and against six other machine learning algorithms, on 42 data sets collected from cities across the USA and Europe. We find that GP predicts rainfall more accurately than the approaches currently used in the literature, and comparably to the other machine learning methods.

However, the equations generated by GP do not capture the volatility and the extreme wet and dry periods of rainfall. We therefore propose decomposing the rainfall problem into 'sub-problems' for GP to solve: the rainfall time series is partitioned so that each partition represents a selected range of rainfall amounts and is modelled by a separate GP equation, with a Genetic Algorithm assisting in the partitioning of the data. Through this decomposition we are able to predict the underlying data better than all of the machine learning benchmark methods, and GP provides a better representation of the extreme periods in the rainfall time series.
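
As a rough illustration of the accumulation transformation described above, the following Python sketch turns a daily rainfall series into sliding-window accumulated amounts; the 30-day window and the synthetic data are illustrative assumptions, not values from the thesis:

```python
import numpy as np

def sliding_window_accumulate(daily_rainfall, window=30):
    """Transform daily rainfall amounts into accumulated amounts over a
    sliding window, smoothing the zero-inflated, discontinuous daily
    series before it is handed to GP."""
    daily = np.asarray(daily_rainfall, dtype=float)
    # A prefix-sum with a leading zero lets each window total be
    # computed as the difference of two prefix sums.
    prefix = np.concatenate(([0.0], np.cumsum(daily)))
    return prefix[window:] - prefix[:-window]

# Example: a year of synthetic daily rainfall (mm), mostly dry days.
rng = np.random.default_rng(0)
daily = rng.gamma(shape=0.3, scale=8.0, size=365) * (rng.random(365) < 0.4)
accumulated = sliding_window_accumulate(daily, window=30)
print(accumulated[:5])
```
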
The natural progression from rainfall prediction is to price rainfall futures contracts. Unlike other pricing domains in the trading market, there is no generally recognised pricing framework in the literature. Much of this is because weather derivatives (including rainfall derivatives) exist in an incomplete market, where the existing and well-studied pricing methods cannot be directly applied. There are two well-known pricing techniques: indifference pricing and arbitrage-free pricing. One requirement for pricing is knowing the level of risk or uncertainty in the market, which allows for a contract price free of arbitrage. GP can be used to price derivatives, but the risk cannot be estimated directly: a single GP equation gives only point predictions, so we must extract a density of proposed rainfall values in order to determine the most probable outcome. We propose three methods to achieve this. The first samples many different equations, extrapolating a density from the best equation of each generation over multiple runs. The second builds on the first by considering contract-specific equations, rather than a single equation explaining all contracts, before extrapolating a density. The third has GP evolve a collection of stochastic equations for pricing rainfall derivatives. We find that GP is a suitable method for pricing and that the proposed methods produce good pricing results: the first and second methods price closer to the rainfall futures prices quoted by the CME, while the third reproduces the actual rainfall for the period of interest more accurately.
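
A minimal sketch of the density-based idea behind the first method, under strong simplifying assumptions: predictions collected from best-of-generation GP equations across runs are smoothed into a density, and the most probable index value is turned into a price. The function name, tick value and discounting here are hypothetical, not the thesis's pricing model:

```python
import numpy as np
from scipy.stats import gaussian_kde

def price_rainfall_future(index_samples, tick_value=1.0, rate=0.0, tau=0.25):
    """Price a rainfall index future from a density of index values.

    index_samples: rainfall-index predictions collected from the best
    GP equation of each generation over multiple runs. The density's
    mode (the most probable outcome) is discounted into a price; any
    further risk adjustment is omitted in this sketch."""
    kde = gaussian_kde(index_samples)                 # extrapolate a density
    grid = np.linspace(min(index_samples), max(index_samples), 512)
    most_probable = grid[np.argmax(kde(grid))]        # mode of the density
    return np.exp(-rate * tau) * tick_value * most_probable

# Example with synthetic predictions from many runs/generations (mm).
rng = np.random.default_rng(1)
samples = rng.normal(loc=120.0, scale=15.0, size=2000)
print(round(price_rainfall_future(samples, tick_value=50.0), 2))
```
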

How can cheese be made sustainable?: an actor-network analysis

Brooking, Hannah. January 2018.
The notions of eco-localism and sustainable intensification have emerged in the sustainability literature as approaches to sustainable food and sustainable agriculture. Much of the literature on these two notions focuses on the farm or the dairy; this study instead explores their applicability to the production, distribution and consumption of cheese, moving beyond the farm gate. It examines the discourses of sustainability within three different cheese actor-networks, including the degree to which those discourses are embedded in actual practices and in the products produced. This interdisciplinary study also investigates sustainability issues and potential solutions for achieving sustainability, and provides a system development methodology with the potential to make actor-network theory (ANT) more practical.

Cheese is thought to be notoriously unsustainable: on average, 10 litres of milk are needed to make just 1 kg of hard cheese, and there are concerns over methane and other greenhouse gas emissions, as well as environmental waste, across the network. At the same time, cheese is important for sustaining rural livelihoods and for employment, especially given concerns over milk prices, falling farm incomes and reductions in dairy farming. Many dairy farmers are therefore looking to diversify and add value to their milk production by turning to cheesemaking; the Specialist Cheesemakers Association (SCA) has recorded an increase in both enquiries from dairy farmers and new members joining (Specialist Cheesemakers Association, 2015).

This study adopts two approaches that combine into a single method for exploring sustainability: ANT and i*, a system development methodology from computer science. ANT is used in combination with i* to develop a methodological framework, which the research applies to eco-localist and sustainably intensified cheese networks from production to consumption, from farm to fork. Semi-structured interviews, questionnaires and ethnographic observations were used to assemble information on sustainability challenges within milk production, cheesemaking, distribution and sales. The research produced a sustainability framework and determined that 'sustainable cheese' is not a fictitious agri-food, but that 'sustainability' is hard to achieve because there is no finite end point. The study also identified sustainability problems for eco-localist and sustainably intensified cheese actor-networks and explored potential ways to demonstrably improve the sustainability of cheese.

Data-driven refactorings for Haskell

Adams, Stephen. January 2017.
Agile software development allows software to evolve gradually over time. Decisions made during the early stages of a program's lifecycle often come at a cost in the form of technical debt: implementing a program in a naive or "easy" way makes later rework more difficult than changing the behaviour of a more robust solution would be. Refactoring is one of the primary ways to reduce technical debt. It is the process of changing the internal structure of a program without changing its external behaviour, with the goal of increasing the code quality, maintainability and extensibility of the source program. Performing refactorings manually is time-consuming and error-prone, which makes automated refactoring tools very useful.

Haskell is a strongly typed, pure functional programming language whose rich type system allows for complex and powerful data models and abstractions. These abstractions and data models are an important part of Haskell programs. This thesis argues that these parts of a program accrue technical debt, and that refactoring is an important technique for reducing this type of technical debt. Refactorings exist that tackle issues with a program's data model, but they are specific to the object-oriented programming paradigm. This thesis reports on work to design and automate refactorings that help Haskell programmers develop and evolve these abstractions. It also discusses the current design and implementation of HaRe (the Haskell Refactorer), which now supports the Glasgow Haskell Compiler's implementation of the Haskell 2010 standard and its extensions, and uses some of GHC's internal packages in its implementation.
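
As a language-agnostic illustration of the core idea only (HaRe itself operates on Haskell source, and the thesis's refactorings are specific to Haskell's type system), here is a behaviour-preserving refactoring of a naive tuple-based data model into an explicit abstraction, sketched in Python:

```python
from dataclasses import dataclass

# Before: a naive tuple-based data model accrues technical debt as the
# program grows, since field access by index is fragile.
def describe_before(person):
    return f"{person[0]} ({person[1]})"

# After: the same external behaviour, with the data model refactored
# into an explicit, named abstraction.
@dataclass
class Person:
    name: str
    age: int

def describe_after(person: Person) -> str:
    return f"{person.name} ({person.age})"

# External behaviour is unchanged, which is the defining property of a
# refactoring.
assert describe_before(("Ada", 36)) == describe_after(Person("Ada", 36))
```
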

Best response dynamics in simultaneous and sequential network design games

Radoja, Matthew Milan. January 2017.
This thesis is concerned with the analysis of best response dynamics in simultaneous and sequential network design games with fair cost sharing, for both capacitated and uncapacitated networks. We address questions related to the evolution of stable states through selfish updates. First, we examine in general what effects such updates can have, from various perspectives, on the quality of the solutions to a game. We then move to a more specific analysis of updates beginning from an optimal profile, providing insight into the price-of-stability measure of network efficiency from the perspective of the user incurring the highest cost in the game. Finally, we investigate updates beginning from an empty strategy profile and make some observations about the quality of the resulting profile.
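
The sketch below runs best-response dynamics under fair cost sharing on a small directed network, starting from the empty strategy profile as in the thesis's final setting. The graph, edge costs and tolerance are illustrative assumptions; since fair-cost network design games are potential games, the updates terminate:

```python
import networkx as nx

def shared_cost(path, others_usage, G):
    """Cost of a path under fair cost sharing: each edge's cost is split
    equally among the players using it."""
    edges = list(zip(path, path[1:]))
    return sum(G[u][v]["cost"] / (others_usage.get((u, v), 0) + 1)
               for u, v in edges)

def best_response_dynamics(G, terminals):
    """Apply selfish best-response updates from the empty profile until
    no player can lower its own cost (a pure Nash equilibrium)."""
    paths = {i: None for i in range(len(terminals))}
    improved = True
    while improved:
        improved = False
        for i, (s, t) in enumerate(terminals):
            # Count how many *other* players use each edge right now.
            usage = {}
            for j, p in paths.items():
                if j != i and p is not None:
                    for e in zip(p, p[1:]):
                        usage[e] = usage.get(e, 0) + 1
            best = min(nx.all_simple_paths(G, s, t),
                       key=lambda p: shared_cost(p, usage, G))
            if (paths[i] is None or
                    shared_cost(best, usage, G)
                    < shared_cost(paths[i], usage, G) - 1e-12):
                paths[i] = best
                improved = True
    return paths

# Two players from s to t: a direct edge vs. a cheaper shareable detour.
G = nx.DiGraph()
G.add_edge("s", "t", cost=3.0)
G.add_edge("s", "a", cost=1.0)
G.add_edge("a", "t", cost=1.0)
print(best_response_dynamics(G, [("s", "t"), ("s", "t")]))
```

In this instance both players converge on the detour and split its cost, which is exactly the kind of stable state whose quality the thesis analyses.
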

Throughput optimisation in multi-channel Wireless Mesh Networks

Mashraqi, Aisha Mousa. January 2017.
Wireless Mesh Networks (WMNs) are becoming common due to the features they provide, especially low cost and the ability to self-configure. In WMNs, data traffic is transmitted through intermediate nodes, and given the nature of wireless networks, routing data from senders to destinations and managing network resources efficiently are challenging. Various factors degrade network performance, and in particular reduce throughput, such as signal interference, mobility and congestion. The focus of this research is to improve throughput in multi-channel wireless mesh networks from two perspectives.

The first issue considered is selecting a path with maximum available bandwidth and low signal interference for transmitting data from source to destination. We design two routing metrics, the Expected Transmission Time with Queueing (ETTQ) and the Delay and Interference Aware Metric (DIAM), which take into account intra-flow interference, inter-flow interference and delay. Simulations of these routing metrics in the Network Simulator (NS2) demonstrate that DIAM can estimate the intra-flow interference, inter-flow interference and delay of a link, and thereby select paths efficiently.

The second problem addressed is controlling network congestion. We tackle packet drops in the Interface Queue (IFQ) caused by node congestion, reducing the number of dropped packets by allocating flow rates from the solution of a linear program (LP). NS2 simulations show that the LP-based flow rates improve network throughput in chain networks. For more complex networks, traffic rate adjustments alone are not sufficient, so we propose a simple forwarding delay scheme, the Ad Hoc On-Demand Distance Vector protocol with Forwarding delay (AODV-F), used with the DIAM routing metric, which reduces node congestion and improves throughput; this scheme has also been evaluated in NS2. Moreover, as the results demonstrate, the LP-adjusted flow rates and the forwarding delay also address the issue of maximising flow fairness.
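
A simplified sketch of interference-aware path selection: the link ETT follows the standard definition from the mesh-routing literature (expected transmissions times packet transmission time), while the penalty for consecutive same-channel links is a hypothetical stand-in for intra-flow interference, not the thesis's ETTQ or DIAM definitions:

```python
def ett(loss_fwd, loss_rev, packet_bits, bandwidth_bps):
    """Expected Transmission Time of a link: ETX = 1 / (df * dr)
    expected transmissions, each taking S/B seconds."""
    etx = 1.0 / ((1.0 - loss_fwd) * (1.0 - loss_rev))
    return etx * packet_bits / bandwidth_bps

def path_metric(links, interference_penalty=0.5):
    """Illustrative additive path metric: sum of link ETTs plus a
    penalty whenever consecutive links share a channel, approximating
    intra-flow interference. The penalty form is an assumption."""
    total = sum(l["ett"] for l in links)
    for a, b in zip(links, links[1:]):
        if a["channel"] == b["channel"]:
            total += interference_penalty * min(a["ett"], b["ett"])
    return total

# Two candidate 2-hop paths: same topology, different channel diversity.
path1 = [{"ett": ett(0.1, 0.1, 12000, 6e6), "channel": 1},
         {"ett": ett(0.1, 0.1, 12000, 6e6), "channel": 1}]
path2 = [{"ett": ett(0.2, 0.2, 12000, 6e6), "channel": 1},
         {"ett": ett(0.2, 0.2, 12000, 6e6), "channel": 6}]
print(path_metric(path1), path_metric(path2))  # prefer the lower metric
```
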

Goal compliance assurance for dynamically adaptive workflows

Allehyani, Budoor Ahmad. January 2018.
Business processes capture the functional requirements of an organisation. Today's businesses operate in very dynamic and complex environments, so the suitability of automation techniques depends on their ability to react to change rapidly and reliably. Reacting rapidly requires an adaptation process for business processes, which also supports better quality of service, evidenced through performance and availability. The adaptation process must support self-monitoring of the business processes, detection of the need for a change, deciding on the right change, and executing that change, and it must be performed reliably and automatically, with minimal user intervention. One technique that enables automatic adaptation is a policy-driven approach, typically using E-C-A (event-condition-action) policies. Policies can change the behaviour of running business processes according to changing requirements by inserting, replacing or deleting functionality. However, there are no assurances over policies' behaviour in terms of satisfying the original goal; this is the gap that this thesis fills.

The presented work provides an approach to support assurances in the face of automated adaptation and changing requirements. To that end, we use trace refinement and ontologies to ensure goal compliance during adaptation, and we present a goal-compliance framework which incorporates an adaptation process based on E-C-A policies together with goal-compliance constraints for assurance purposes. The framework is evaluated on its performance in two categories: (1) the complexity of both the processes and the adaptation, and (2) execution time, including adaptation and verification. The evaluation results show that the framework reliably guarantees satisfaction of the process goal with minimal user intervention, and that its performance is promising, a very important consideration for a runtime environment.
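
A minimal sketch of E-C-A-driven adaptation with a compliance check before a change is committed. The policy, workflow steps and `goal_check` predicate are hypothetical, and `goal_check` merely stands in for the thesis's trace-refinement verification:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class EcaPolicy:
    """An event-condition-action policy: when `event` is observed and
    `condition` holds on the observed state, `action` adapts the
    running workflow (insert / replace / delete a step)."""
    event: str
    condition: Callable[[dict], bool]
    action: Callable[[list, dict], list]

def adapt(workflow, state, event, policies, goal_check):
    """Apply matching policies, committing an adapted workflow only if
    it still complies with the original goal (the assurance step)."""
    for p in policies:
        if p.event == event and p.condition(state):
            candidate = p.action(list(workflow), state)
            if goal_check(candidate):  # goal-compliance assurance
                workflow = candidate
    return workflow

# Hypothetical policy: if the payment service is slow, swap in a backup.
slow_payment = EcaPolicy(
    event="service_latency",
    condition=lambda s: s["latency_ms"] > 500,
    action=lambda wf, s: ["backup_pay" if step == "pay" else step
                          for step in wf],
)
goal = lambda wf: "ship" in wf and ("pay" in wf or "backup_pay" in wf)
wf = ["order", "pay", "ship"]
print(adapt(wf, {"latency_ms": 900}, "service_latency", [slow_payment], goal))
```
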

Reverse engineering packet structures from network traces by segment-based alignment

Esoul, Othman Mohamed A. January 2018.
Many applications in security, from understanding unfamiliar protocols to fuzz-testing and guarding against potential attacks, rely on analysing network protocols. In many situations we cannot rely on access to a specification, or even an implementation, of the protocol, and must instead rely on raw network data "sniffed" from the network. When this is the case, one of the key challenges is to discern the underlying packet structures from the raw data, a task commonly carried out in two steps: message clustering and message alignment.

Clustering quality is critically contingent on selecting the right parameters. In this thesis, we experimentally investigated two aspects: 1) the effect of different parameters on clustering, and 2) whether a suitable parameter configuration for clustering can be inferred for undocumented protocols (when message classes are unavailable). We quantified the impact of specific parameters on clustering, and used clustering validation measures to predict parameter configurations with high clustering accuracy. Our results indicate that: 1) the choice of distance measure and the message length have the most substantial impact on clustering accuracy; and 2) the Ball-Hall intrinsic validation measure yields the best results in predicting suitable parameter configurations for clustering.

While clustering is used to detect message types (similar groups) within a dataset, sequence alignment algorithms are often used to detect the protocol message structure (field partitioning). Most approaches have used variants of the Needleman-Wunsch algorithm to perform byte-wise alignment, but these can suffer when messages are heterogeneous, or where protocol fields are separated by long variable fields. In this thesis, we present an alternative alignment algorithm known as segment-based alignment. The results indicate that segment-based alignment can produce more accurate results than traditional alignment techniques, especially with long and diverse network packets.
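
For reference, here is a compact implementation of the classic byte-wise Needleman-Wunsch global alignment that most existing approaches build on (the scoring values are illustrative):

```python
def needleman_wunsch(a: bytes, b: bytes, match=1, mismatch=-1, gap=-1):
    """Byte-wise Needleman-Wunsch global alignment of two messages.
    Returns the two aligned sequences with None marking gaps."""
    n, m = len(a), len(b)
    # DP table: best alignment score of prefixes a[:i] and b[:j].
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap
    for j in range(1, m + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = match if a[i - 1] == b[j - 1] else mismatch
            score[i][j] = max(score[i - 1][j - 1] + sub,
                              score[i - 1][j] + gap,
                              score[i][j - 1] + gap)
    # Traceback to recover one optimal alignment.
    out_a, out_b, i, j = [], [], n, m
    while i > 0 or j > 0:
        sub = match if i > 0 and j > 0 and a[i - 1] == b[j - 1] else mismatch
        if i > 0 and j > 0 and score[i][j] == score[i - 1][j - 1] + sub:
            out_a.append(a[i - 1]); out_b.append(b[j - 1]); i -= 1; j -= 1
        elif i > 0 and score[i][j] == score[i - 1][j] + gap:
            out_a.append(a[i - 1]); out_b.append(None); i -= 1
        else:
            out_a.append(None); out_b.append(b[j - 1]); j -= 1
    return out_a[::-1], out_b[::-1]

# Align two messages sharing a keyword but differing in a value field.
print(needleman_wunsch(b"GET /idx", b"GET /index.html"))
```

Segment-based alignment, by contrast, aligns whole segments of bytes rather than individual bytes, which is what helps with long variable fields.
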

Automating proofs with state machine inference

Gransden, Thomas Glenn. January 2017.
Interactive theorem provers are tools that help to produce formal proofs in a semi-automatic fashion. Originally designed to verify mathematical statements, they can also be useful in an industrial context. Despite being endorsed by leading mathematicians and computer scientists, these tools are not widely used, mainly because constructing proofs requires a large amount of human effort and knowledge and, frustratingly, many theorem proving systems offer only limited proof automation. To address this limitation, a new technique called SEPIA (Search for Proofs Using Inferred Automata) is introduced. Large libraries of completed proofs are typically available, but identifying useful information in them can be difficult and time-consuming. SEPIA uses state-machine inference techniques to produce descriptive models from corpora of Coq proofs; the resulting models can then be used to generate proofs automatically. SEPIA is also combined with other approaches to form an intelligent suite of methods (called Coq-PR3) for automatically generating proofs. All of the techniques presented are available as extensions for the ProofGeneral interface. In the experimental work, the new techniques are evaluated on two large Coq datasets and shown to prove more theorems automatically than existing proof automation. Various aspects of the discovered proofs are also explored, including a comparison between the automatically generated proofs and manually created ones. Overall, the techniques are demonstrated to be a potentially useful addition to the proof development process because of their ability to automate proofs in Coq.
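
As a much-simplified illustration of learning from proof corpora (SEPIA infers richer state machines than this), the sketch below builds a bigram transition model over tactic names and greedily proposes a candidate tactic sequence; the corpus shown is invented:

```python
from collections import defaultdict

def infer_model(proof_corpus):
    """Build a simple transition model over tactic names from a corpus
    of completed proofs (each proof is a list of tactics). The states
    of this toy automaton are the tactics themselves."""
    transitions = defaultdict(lambda: defaultdict(int))
    for proof in proof_corpus:
        for cur, nxt in zip(["<start>"] + proof, proof + ["<qed>"]):
            transitions[cur][nxt] += 1
    return transitions

def suggest(transitions, max_len=5):
    """Greedily walk the most frequent transitions to propose one
    candidate tactic sequence to try against the current goal."""
    seq, state = [], "<start>"
    while len(seq) < max_len:
        options = transitions[state]
        if not options:
            break
        state = max(options, key=options.get)
        if state == "<qed>":
            break
        seq.append(state)
    return seq

corpus = [["intros", "induction n", "simpl", "auto"],
          ["intros", "simpl", "reflexivity"],
          ["intros", "induction n", "auto"]]
print(suggest(infer_model(corpus)))
```
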

Engineering compact dynamic data structures and in-memory data mining

Poyias, Andreas. January 2018.
Compact and succinct data structures use space that approaches the information-theoretic lower bound on the space required to represent the data. In practice, their memory footprint is orders of magnitude smaller than that of conventional data structures, while remaining competitive in speed. A main drawback of many of these structures is that they do not support dynamic operations efficiently: rebuilding a static data structure on every update can be exceedingly expensive. In this thesis, we propose a number of novel compact dynamic data structures, including m-Bonsai, a compact tree representation, and compact dynamic rewritable (CDRW) arrays, a compact representation of variable-length bit-strings. These data structures answer queries efficiently and perform updates quickly while maintaining their small memory footprint. In addition to designing these data structures, we analyse them theoretically, implement them, and test them to show their good practical performance.

Many data mining algorithms require data structures that can query and dynamically update data in memory. One such algorithm is FP-growth, one of the fastest algorithms for Frequent Itemset Mining, which is among the most fundamental problems in data mining. FP-growth reads the entire dataset into memory, updates its data structures in memory, and performs a series of queries on the given data. We propose a compact implementation of the FP-growth algorithm, PFP-growth. In our experimental evaluation, our implementation is an order of magnitude more space-efficient than the classic implementation of FP-growth, and 2-3 times more space-efficient than a more recent, carefully engineered implementation, while remaining competitive in terms of speed.
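
A sketch of the FP-tree construction at the core of FP-growth, the in-memory structure a compact implementation such as PFP-growth must represent space-efficiently; this plain-pointer version is for illustration only:

```python
class FPNode:
    __slots__ = ("item", "count", "children", "parent")
    def __init__(self, item, parent=None):
        self.item, self.count, self.parent = item, 1, parent
        self.children = {}

def build_fp_tree(transactions, min_support):
    """Build an FP-tree: count items, drop infrequent ones, then insert
    each transaction sorted by global frequency so that common prefixes
    share nodes (this sharing is where the compression comes from)."""
    counts = {}
    for t in transactions:
        for item in t:
            counts[item] = counts.get(item, 0) + 1
    frequent = {i for i, c in counts.items() if c >= min_support}
    root = FPNode(None)
    for t in transactions:
        items = sorted((i for i in t if i in frequent),
                       key=lambda i: (-counts[i], i))
        node = root
        for item in items:
            child = node.children.get(item)
            if child is None:
                child = FPNode(item, node)
                node.children[item] = child
            else:
                child.count += 1
            node = child
    return root

tree = build_fp_tree([{"a", "b"}, {"b", "c"}, {"a", "b", "c"}, {"b"}], 2)
print({i: n.count for i, n in tree.children.items()})  # root's children
```
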

Energy-aware scheduling in decentralised multi-cloud systems

Alsughayyir, Aeshah Yahya. January 2018.
Cloud computing is an emerging Internet-based computing paradigm that provides the on-demand services requested nowadays by almost all online users. Although it makes great use of virtualised environments to execute applications efficiently in low-cost hosting, it has turned energy waste and overconsumption into major concerns: many studies have projected that the energy consumption of cloud data centres could grow significantly, reaching 35% of the total energy consumed worldwide and threatening to further deepen the world's energy crisis. Moreover, cloud infrastructure is built on a large amount of server equipment, including high performance computing (HPC) hardware, and servers are naturally prone to failure.

In this thesis, we study, both practically and theoretically, the feasibility of optimising energy consumption in multi-cloud systems. We explore a deadline-based scheduling problem in which HPC applications are executed by a heterogeneous set of geographically distributed clouds, which we assume participate in a federated approach. The practical part of the thesis focuses on combining two energy dimensions when scheduling HPC applications, namely the energy consumed for execution and for data transmission, while simultaneously minimising application rejections and deadline violations to support resource reliability alongside energy optimisation. In the theoretical part, we present the first online algorithms for the non-pre-emptive scheduling of jobs with agreeable deadlines on heterogeneous parallel processors. Through simulation and experimental analysis using real parallel workloads from large-scale systems, the results show that a considerable amount of energy can be saved by carefully scheduling cloud applications over a multi-cloud system, and that our practical approaches provide promising energy savings with an acceptable level of resource reliability. We believe these scheduling approaches are particularly important to the central aim of green cloud computing: increasing energy efficiency.
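
An illustrative greedy for the setting described, assigning each arriving job to the machine that can meet its deadline with the least energy, or rejecting it otherwise. This is a hypothetical baseline meant to convey the problem, not one of the thesis's algorithms; machine speeds and powers are invented:

```python
def greedy_energy_schedule(jobs, machines):
    """Online, non-pre-emptive deadline scheduling on heterogeneous
    machines: each job (in release-time order) goes to the machine
    that finishes it by its deadline with the least energy."""
    finish = {m["name"]: 0.0 for m in machines}  # when each machine frees up
    schedule, rejected = [], []
    for job in jobs:
        best = None
        for m in machines:
            start = max(finish[m["name"]], job["release"])
            end = start + job["work"] / m["speed"]
            energy = m["power"] * (job["work"] / m["speed"])
            if end <= job["deadline"] and (best is None or energy < best[0]):
                best = (energy, m["name"], start, end)
        if best is None:
            rejected.append(job["id"])  # deadline cannot be met anywhere
        else:
            energy, name, start, end = best
            finish[name] = end
            schedule.append((job["id"], name, start, end, energy))
    return schedule, rejected

machines = [{"name": "fast", "speed": 2.0, "power": 10.0},
            {"name": "slow", "speed": 1.0, "power": 4.0}]
jobs = [{"id": 1, "release": 0, "work": 4, "deadline": 5},
        {"id": 2, "release": 0, "work": 4, "deadline": 3}]
print(greedy_energy_schedule(jobs, machines))
```

Note the trade-off the sketch exposes: the slow machine is more energy-efficient, but tight deadlines force work onto the fast, power-hungry one.
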
