191

X-ray computed tomography and image-based modelling of plant, root and soil systems, for better understanding of phosphate uptake

Keyes, Samuel January 2013 (has links)
A major constraint to crop growth is the poor bioavailability of edaphic nutrients, especially phosphate (P). Improving the nutrient acquisition efficiency of crops is crucial in addressing pressing global food-security issues arising from increasing world population, reduced fertile land and changes in the climate. Despite the undoubted importance of root architecture and root/soil interactions to nutrient uptake, there is a lack of approaches for quantifying plant roots non-invasively at all scales. Mathematical models have allowed our understanding of root and soil interactions to be improved, but are almost invariably reliant on idealised geometries or virtual root growth models. In order to improve phenotyping of advantageous traits for low-P conditions and improve the accuracy of root growth and uptake models, more sophisticated and robust approaches to in vivo root and soil characterisation are needed. Microfocus X-ray Computed Tomography (µ-CT) is a methodology that has shown promise for non-invasive imaging of roots and soil at various scales. However, this potential has not been extended to consideration of either very small (rhizosphere scale) or large (mature root system scale) samples. This thesis combines discovery experiments and method development in order to achieve two primary objectives:
• The development of more robust, well-described approaches to root and soil µ-CT imaging. Chapters 2 and 3 explore the potential of clinical contrasting methods in root investigation, and show how careful consideration of imaging parameters, combined with the development of a user-invariant image-processing protocol, can improve measurement of macro-porous volume fraction, a key soil parameter.
• The development of an assay for first-time 3D imaging of root hairs in situ within the rhizosphere (Chapter 4). The resulting data are used to parameterise an explicit P uptake model at the hair scale, suggesting a different contribution of hairs to uptake than was predicted using idealised geometries. Chapter 5 then extends the paradigm for root hair imaging and model generation, building a robust, modular workflow for investigating P dynamics in the rhizosphere that can accommodate non-optimal soil-water states.
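The macro-porous volume fraction that Chapters 2 and 3 target can be pictured with a minimal segmentation sketch: threshold a reconstructed µ-CT greyscale volume and report the pore-voxel fraction. This is an illustration only, not the thesis's protocol; the Otsu threshold, the assumption that pores are darker than the soil matrix, and the synthetic volume below are all assumptions.

```python
import numpy as np
from skimage.filters import threshold_otsu

def macroporosity(volume: np.ndarray) -> float:
    """Estimate macro-porous volume fraction from a greyscale CT volume.

    Assumes pores (air) appear darker than the soil matrix, so voxels below
    an automatically chosen (Otsu) threshold are counted as pore space.
    """
    t = threshold_otsu(volume)
    pore_voxels = np.count_nonzero(volume < t)
    return pore_voxels / volume.size

# Synthetic stand-in for a reconstructed scan: bright matrix with dark pores.
rng = np.random.default_rng(0)
vol = rng.normal(loc=180, scale=10, size=(64, 64, 64))
pores = rng.random(vol.shape) < 0.15          # ~15% of voxels become pores
vol[pores] = rng.normal(loc=60, scale=10, size=pores.sum())

print(f"Estimated macro-porosity: {macroporosity(vol):.3f}")
```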
192

Using derivative information in the statistical analysis of computer models

Stephenson, Gemma January 2010 (has links)
Complex deterministic models are an important tool for studying a wide range of systems. Often, though, such models are computationally too expensive to perform the many runs required. In this case one option is to build a Gaussian process emulator which acts as a surrogate, enabling fast prediction of the model output at specified input configurations. Derivative information may be available, either through the running of an appropriate adjoint model or as a result of some analysis previously performed. An emulator would likely benefit from the inclusion of this derivative information. Whether further efficiency is achieved, however, depends on the relation between the computational cost of obtaining the derivatives and the value of the derivative information in the emulator. In our examples we see that derivatives are more valuable in models which have shorter correlation lengths, and that emulators without derivatives generally require twice as many model runs as emulators with derivatives to produce a similar predictive performance. We conclude that an optimal solution is likely to be a hybrid design consisting of adjoint runs in some parts of the input space and standard model runs in others. Knowledge of the derivatives of complex models can add greatly to their utility, for example in the application of sensitivity analysis or data assimilation. One way of generating such derivatives, as suggested above, is by coding an adjoint model. Despite the availability of automatic differentiation software, this remains a complex task, and the adjoint model, once written, is computationally more demanding. We suggest an alternative method for generating partial derivatives of complex model output with respect to model inputs. We propose the use of a Gaussian process emulator which, as long as the model is suitable for emulation, can be used to estimate derivatives even without any derivative information known a priori. We present encouraging results which show how an emulator of derivatives could reduce the demand for writing and running adjoint models. This is done with the use of both toy models and the climate model C-GOLDSTEIN.
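As a rough illustration of the idea that an emulator can itself supply derivative estimates, the sketch below fits a plain Gaussian process emulator with a squared-exponential kernel to a handful of model runs and differentiates the posterior mean analytically. The one-dimensional toy simulator, fixed hyperparameters and nugget value are assumptions for illustration; they are not taken from the thesis.

```python
import numpy as np

# Toy "simulator" standing in for an expensive model run.
f = lambda x: np.sin(3 * x) + 0.5 * x

# Design points (model runs) and their outputs.
X = np.linspace(0.0, 2.0, 8)
y = f(X)

# Squared-exponential kernel with assumed hyperparameters.
sigma2, ell, nugget = 1.0, 0.4, 1e-8
def k(a, b):
    return sigma2 * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)

K = k(X, X) + nugget * np.eye(len(X))
alpha = np.linalg.solve(K, y)          # K^{-1} y

def emulator_mean(xs):
    return k(xs, X) @ alpha

def emulator_dmean(xs):
    # d/dx* of the SE kernel: -(x* - x_i) / ell^2 * k(x*, x_i)
    dK = -(xs[:, None] - X[None, :]) / ell**2 * k(xs, X)
    return dK @ alpha

xs = np.array([0.5, 1.0, 1.5])
print("emulated output    :", emulator_mean(xs))
print("emulated derivative:", emulator_dmean(xs))
print("true derivative    :", 3 * np.cos(3 * xs) + 0.5)
```

With enough well-placed design points the emulated derivatives track the true ones closely, which is the behaviour the abstract's "emulator of derivatives" relies on.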
193

Modelling near-bank flow hydraulics

Spyropoulos, Emmanouil January 2009 (has links)
River bank erosion models are a fundamental requirement for understanding the migration and evolution of river meanders, estimating the potential for land loss and the threat to floodplain infrastructure, and predicting the delivery of contaminated floodplain sediments to aquatic ecosystems. While progress has recently been made in understanding and modelling the processes controlling large-scale mass failure, less attention has been paid to the role that fluvial erosion plays in bank retreat. This project aims to address this gap by developing a new fluvial erosion model. Recent developments in bank erosion monitoring technology, and in the quantification of bank erodibility parameters using jet-testing devices, offer the means of determining fluvial erosion rates and bank erodibility. However, the missing link remains the need to obtain high-resolution, spatially distributed flow data to characterize the near-bank fluid shear stresses that drive bank erosion. One possible solution is to use Computational Fluid Dynamics (CFD) models as a substitute for empirical data. Herein I evaluate a series of three-dimensional CFD simulations for a meander loop on the River Asker at Bridport in southern England. CFD models under specific steady peak flow conditions were developed using Fluent 6.2, with peak flow discharge estimates obtained from an adjacent gauging station. All the models obtained from the three examined flow events were successfully verified and validated using clearly defined and structured procedures. The modelling results indicated that the main qualitative features of the flow remain even as flow discharge varies. However, notable differences were observed between the examined flow events, such as a general increase in velocity and shear stress throughout the reach as flow stage is gradually increased, a slight reduction in the size and extent of separation zones at bankfull stage, a movement of impingement points further downstream, and a continuation of the secondary flow within the fast streamtube further towards the bends' exits. Bed/bank shear stress is mostly seen to decrease at shallow riffles as discharge approaches bankfull, while pools experience an increase in bed/bank shear stress with increasing discharge. Zones of higher bed/bank shear stress extend and combine, while marginal recirculation zones and areas of relatively low bed/bank shear stress generally reduce in area to form discrete locations for erosion and deposition phenomena. At bankfull stage, the magnitudes of velocity and simulated shear stress within the inner bank separation zones are found to be higher than those observed under low flow conditions, and they may be sufficient to result in the removal of accumulated sediments into the main downstream flow. The presence of regions of high velocity in the form of a streamtube, especially along the outer banks, creates high shear stresses within these areas. As a result, outer bank migration rates are likely to be relatively high in bends with inner bank separation zones.
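The link between simulated near-bank shear stress and bank retreat is commonly expressed with an excess-shear-stress relation, epsilon = k_d (tau - tau_c). The thesis develops its own fluvial erosion model, so the sketch below uses only this generic textbook form; the erodibility k_d, critical shear stress tau_c and sample shear stresses are chosen purely for illustration.

```python
import numpy as np

def fluvial_erosion_rate(tau, k_d=1e-6, tau_c=2.0):
    """Generic excess-shear-stress erosion law (illustrative parameters).

    tau   : near-bank shear stress [Pa], e.g. extracted from a CFD solution
    k_d   : erodibility coefficient [m^3 / (N s)]  (assumed value)
    tau_c : critical shear stress for entrainment [Pa]  (assumed value)
    Returns erosion rate [m/s]; zero where tau does not exceed tau_c.
    """
    return k_d * np.maximum(tau - tau_c, 0.0)

# Hypothetical shear stresses sampled along an outer bank at bankfull stage.
tau_bank = np.array([1.2, 2.5, 4.8, 7.1, 3.3])
rate = fluvial_erosion_rate(tau_bank)
print("bank retreat over one day [m]:", rate * 86400)
```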
194

Dark retweets : an investigation of non-conventional retweeting patterns

Azman, Norhidayah January 2014 (has links)
Retweets are an important mechanism for the propagation of information on the Twitter social media platform. However, many retweets do not use the official retweet mechanism, or even community-established conventions, and these "dark retweets" are not accounted for in many existing analyses. In this thesis, a typology of 19 different tweet propagation types is presented, based on seven characteristics: whether it is proprietary, the mechanism used, whether it is created by followers or non-followers, whether it mentions other users, whether it is explicitly propagating another tweet, whether it links to an original tweet, and the audience that it is pushed to. Based on this typology and two retweetability confidence factors, the degree of a retweet's "darkness" can be determined. This typology was evaluated over two datasets: a random sample of 27,146 tweets, and a URL drill-down dataset of 262,517 tweets. It was found that dark retweets amounted to 20.8% of the random sample; however, the behaviour of dark retweets is not uniform. The existence of supervisible and superdark URLs skews the average proportion of dark retweets in a dataset. Dark retweet behaviour was explored further by examining the average reach of retweet actions and identifying content domains in which dark retweets seem more prevalent. It was found that 1) the average reach of a dark retweet action (3,614 users per retweet) was just over double the average reach of a visible retweet action (1,675 users per retweet), and 2) dark retweets were more frequently used in spreading social media (41% of retweets) and spam (40.6%) URLs, whilst they were least prevalent in basic information domains such as music (8.5%), photos (5%) and videos (3.9%). It was also found that once the supervisible and superdark URLs were discarded from the analysis, the proportion of dark retweets decreased from 20.8% to 12%, whilst visible retweets increased from 79.2% to 88%. This research contributes a 19-type tweet propagation typology and the findings that dark retweets exist, but that their behaviour varies depending on the retweeter and the URL content domain.
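To make the distinction concrete, here is a deliberately simplified classifier, not the 19-type typology from the thesis: it labels a tweet as an official retweet, a conventional retweet (e.g. "RT @user" or "via @user"), or a candidate dark retweet when it propagates a URL seen in an earlier tweet without any of those markers. The field names follow the commonly used Twitter API shape and the sample tweets are assumptions.

```python
import re

RT_CONVENTIONS = re.compile(r"(^|\s)(RT|MT)\s+@\w+|via\s+@\w+", re.IGNORECASE)

def classify_propagation(tweet: dict, known_urls: set) -> str:
    """Label one tweet; `known_urls` holds URLs already seen in earlier tweets."""
    if tweet.get("retweeted_status") is not None:   # used the official retweet button
        return "official_retweet"
    if RT_CONVENTIONS.search(tweet.get("text", "")):
        return "conventional_retweet"
    if any(url in known_urls for url in tweet.get("urls", [])):
        return "dark_retweet_candidate"             # propagation without attribution
    return "original"

# Tiny worked example with hypothetical tweets.
stream = [
    {"text": "Our new paper is out http://ex.am/1", "urls": ["http://ex.am/1"], "retweeted_status": None},
    {"text": "RT @alice Our new paper is out http://ex.am/1", "urls": ["http://ex.am/1"], "retweeted_status": None},
    {"text": "worth a read http://ex.am/1", "urls": ["http://ex.am/1"], "retweeted_status": None},
]
seen = set()
for t in stream:
    print(classify_propagation(t, seen), "-", t["text"])
    seen.update(t["urls"])
```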
195

Competition between demand-side intermediaries in ad exchanges

Stavrogiannis, Lampros C. January 2014 (has links)
Online advertising constitutes one of the main sources of revenue for the majority of businesses on the web. Online advertising inventory was traditionally traded via bilateral contracts between publishers and advertisers, largely through a number of intermediaries. However, what caused an explosion in the volume and, consequently, the revenue of online ads was the incorporation of auctions as the major mechanism for trading sponsored search ads in all major search engines. This reduced transaction costs and allowed for the advertisement of small websites, which constitute the majority of Internet traffic. Auction-based markets were harder to establish in the display advertising industry due to the higher volume of inventory and the pre-existence of traditional intermediaries, often leading to inefficiencies and a lack of transparency. Nevertheless, this has recently changed with the introduction of ad exchanges, centralized marketplaces for the allocation of display advertising inventory that support auctions and real-time bidding. The appearance of ad exchanges has also altered the market structure of both demand-side and supply-side intermediaries, which increasingly adopt auctions to perform their business operations. Hence, each time a user enters a publisher's website, the contracted ad exchange runs an auction among a number of demand-side intermediaries, each of which represents its interested advertisers and typically submits a bid by running a local auction among these advertisers. Against this background, within this thesis we look both at the auction design problem of the ad exchange and the demand-side intermediaries, and at the strategies to be adopted by advertisers. Specifically, we study the revenue and efficiency effects of the introduction of, and competition between, demand-side intermediaries in a single-item auction setting with independent private valuations. The introduction of these intermediaries constitutes a major issue for ad exchanges, since they hide some of the demand from the ad exchange and hence can make a profit by pocketing the difference between what they receive from their advertisers and what they pay at the exchange. Ad exchanges were created to offer transparency to both sides of the market, so it is important to study the share of the revenue that intermediaries receive, to justify the services they offer given the competition they face from other such intermediaries. The existence of mediators is a well-known problem in other settings. For this reason, our formulation is general enough to encompass other areas where two levels of auctions arise, such as procurement auctions with subcontracting and auctions with colluding bidders. In more detail, we study the effects of the demand-side intermediaries' choice of auction for three widely used mechanisms: two variations of the second-price sealed-bid (Vickrey) auction, termed PRE and POST, and the first-price sealed-bid (FPSB) auction. We first look at a scenario with a finite number of intermediaries, each implementing the same mechanism, where we compare the profits attained by all stakeholders. We find that there cannot be a complete profit ranking of the three auctions: FPSB auctions yield higher expected profit for a small number of competing intermediaries, otherwise PRE auctions are better for the intermediaries. We also find that the ad exchange benefits from intermediaries implementing POST auctions.
We then let demand-side intermediaries set reserve (or floor) prices, which are known to increase an auctioneer's expected revenue. For reasons of analytical tractability we only consider scenarios with two intermediaries, but we also compare the two Vickrey variations in heterogeneous settings where one intermediary implements the first variation and the other implements the second. We find that intermediaries in general follow mixed reserve-price-setting strategies whose distributions are difficult to derive analytically. For this reason, we use the fictitious play algorithm to calculate approximate equilibria and numerically compare the revenue and efficiency of the three mechanisms for specific instances. We find that PRE seems to perform best in terms of attained profit but is less efficient than POST. Hence, the latter might be a better option for intermediaries in the long term. Finally, we extend the previous setting by letting advertisers strategically select one of the two intermediaries when the latter implement each of the two Vickrey variations. We analytically derive the advertisers' intermediary-selection strategies in equilibrium. Given that, in some cases, these strategies are rather complex, we again use the fictitious play algorithm to numerically calculate the intermediaries' and the ad exchange's best responses for the same instances as before. We find that, when both intermediaries implement POST auctions, advertisers always select the low-reserve intermediary; otherwise they generally follow randomized strategies. Lastly, we find that the ad exchange benefits from intermediaries implementing the pre-award Vickrey variation, compared to a setting with two heterogeneous Vickrey intermediary auctioneers, whereas the opposite is true for the intermediaries.
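A stylized simulation can illustrate how an intermediary profits from the gap between its local auction and the exchange auction. In the sketch below each intermediary runs a local second-price auction and bids its local second price (what it would collect) into a second-price exchange auction. This is one simple variant chosen for illustration, not necessarily the PRE or POST mechanism as defined in the thesis, and the numbers of advertisers and the uniform value distribution are assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate(n_intermediaries=3, advertisers_each=4, rounds=50_000):
    """Monte Carlo sketch of intermediary profit in a two-level second-price auction."""
    profits = np.zeros(n_intermediaries)
    for _ in range(rounds):
        # Independent private values, Uniform(0, 1), for each intermediary's advertisers.
        values = rng.random((n_intermediaries, advertisers_each))
        local_second = np.sort(values, axis=1)[:, -2]  # local Vickrey price = intermediary revenue
        bids = local_second                            # each intermediary bids what it would collect
        winner = int(np.argmax(bids))
        exchange_price = np.sort(bids)[-2]             # second price at the exchange
        # Winner pockets the gap between advertiser payment and exchange payment.
        profits[winner] += bids[winner] - exchange_price
    return profits / rounds

print("average per-round profit of each intermediary:", simulate())
```

Because each intermediary bids exactly what its winning advertiser pays, its margin is never negative here; richer strategies (reserve prices, shading) are what the thesis analyses.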
196

Modelling economic bubbles : is Web 2.0 next?

Newman, Russell January 2015 (has links)
The Web 2.0 phenomenon has produced a number of technology companies that, in various rounds of venture capital funding, have attracted very high indicative valuations. Following these rising valuations, investment banks took an interest in the sector. However, while the companies concerned seem stable as private entities, their novel approach to business makes their financial characteristics difficult to predict. Parallels are drawn between the 2001 dot-com bubble and the current Web 2.0 sector. This thesis highlights a dependency between modern web companies and the established technology sector. It aims to identify the extent to which the contemporary technology sector (encompassing Web 2.0) has exhibited characteristics similar to those of the dot-com bubble. To that end, this thesis identifies characteristics of modern and historic bubbles, and uses them to formulate a hypothetical set of indicators, in the form of a conceptual model. To determine whether these indicators exist in real data, a novel, repeatable statistical test is developed. It first identifies statistical heuristics representative of bubble circumstances, and then compares other periods to them. Thus, given sufficient data, any period may be tested. Periods are analysed prior to, during and after the dot-com bubble. During the dot-com bubble, consistently strong venture capital activity is observed, and linked to the growth in the number of people using the Internet. This is indicative of the poor decision-making by investors documented at the time. In recent periods, patterns in venture capital investment describe an industry that is much more cautious than before, reducing the probability of the formation of a similar bubble. Looking at the past, this thesis observes investor activity that 'caused' the dot-com bubble as early as 1995-96, which raises questions about when the bubble started and the lead times on market collapses.
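The period-comparison step can be pictured with a very rough sketch: summarise an indicator (here, hypothetical quarterly venture-capital deal volumes) inside a reference bubble window, then test whether another window looks statistically similar. The two-sample Kolmogorov-Smirnov test and the synthetic data below are stand-ins, not the heuristics or datasets used in the thesis.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)

# Hypothetical quarterly VC deal volumes (arbitrary units).
dotcom_window = rng.normal(loc=300, scale=40, size=20)   # frothy reference period
recent_window = rng.normal(loc=150, scale=30, size=20)   # more cautious period

stat, p_value = ks_2samp(dotcom_window, recent_window)
print(f"KS statistic = {stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Distributions differ: the recent window does not resemble the bubble period.")
else:
    print("No significant difference from the bubble period detected.")
```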
197

A voting-based agent system to support personalised e-learning in a course selection scenario

Aseere, Ali January 2012 (has links)
Agent technologies are a promising approach to solving a number of problems concerned with personalised learning, due to the inherent autonomy and independence they provide for learners. The objective of this thesis is to find out whether a multiagent system could potentially replace a centralised infrastructure, and to explore the impact of agents taking different strategies. More specifically, our aim is to show not only how intelligent agent systems can form a good framework for distributed e-learning systems, but also how they can be applied in contexts where learners are autonomous and independent. The study also aims to investigate fairness issues and propose a simple framework of fairness definitions derived from the relevant literature. To this end, a university course selection scenario has been chosen, where the university has many courses available but only sufficient resources to run the most preferred ones. Instead of a centralised system, we consider a decentralised approach where individuals can make a collective decision about which courses should run by using a multiagent system based on voting. This voting process consists of multiple rounds, allowing a student agent to accurately represent the student's preferences and learn from previous rounds. The effectiveness of this research is demonstrated in three experiments. The first experiment explores whether voting procedures and multiagent technology could potentially replace a centralised infrastructure. It also explores the impact of agents using different strategies on overall student satisfaction. The second experiment demonstrates the potential for using multiagent systems and voting in settings where students have more complex preferences. The last experiment investigates how intelligent agent-based e-learning systems can ensure fairness between individuals using different strategies. This work shows that agent technology could provide levels of decentralisation and personalisation that could be extended to various types of personal and informal learning. It also highlights the importance of the issue of fairness in intelligent and personalised e-learning systems. In this context, it may be said that there is only one potential view of fairness that is practical for these systems, which is the social welfare view that looks to the overall outcome.
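As a very small illustration of multi-round voting over courses, the sketch below has student agents cast approval votes each round and eliminates the least-supported course until the number of courses fits the available resources. The elimination rule, the "approve your top two" agent strategy and the data are assumptions for illustration, not the protocol evaluated in the thesis.

```python
def run_voting(preferences, capacity):
    """Multi-round elimination: drop the least-approved course until `capacity` remain.

    preferences: dict student -> list of courses ordered from most to least preferred.
    Each round a student approves their top 2 still-running courses (a simple strategy).
    """
    running = {c for prefs in preferences.values() for c in prefs}
    while len(running) > capacity:
        votes = {c: 0 for c in running}
        for prefs in preferences.values():
            for course in [c for c in prefs if c in running][:2]:
                votes[course] += 1
        # Eliminate the course with the fewest approvals (ties broken by name).
        loser = min(sorted(votes), key=lambda c: votes[c])
        running.remove(loser)
    return running

prefs = {
    "s1": ["AI", "Web", "DB", "HCI"],
    "s2": ["Web", "AI", "HCI", "DB"],
    "s3": ["DB", "AI", "Web", "HCI"],
    "s4": ["HCI", "DB", "AI", "Web"],
}
print("Courses that run:", run_voting(prefs, capacity=2))
```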
198

Social networking theory and the rise of digital marketing in the light of big data

Dervan, Philip January 2015 (has links)
The topic of this thesis is the use of ‘Big Data’ as a catalyst for true precision target marketing, where online advertisements across all communication channels are so timely and relevant that they are welcomed by the consumer because they improve the customer experience. In particular, the research has been directed to demonstrate the link between investment in digital branding and sales revenue at the company level. This thesis includes a review of the accumulation of ‘Big Data’ from a plethora of social networks, and an assessment of its current use and application by marketing and sales departments and other emerging functions. The hypothesis tested was that the companies most advanced in processing ‘Big Data’ by rules-based, algorithmic, digital analysis are the companies realizing the greatest return on investment in the use of ‘Big Data’. The research was conducted using a questionnaire and interviews with senior staff working in large consultancy and related firms who are actively engaged in the utilization of social media and large datasets. As there is a lack of understanding within companies of how to use social media, and many obstacles have to be overcome, the research was intended to unearth some insights into the effective use of data. The research indicated that companies with certain organizational and operational characteristics actively use social media, although the utilization is often limited in scope. However, companies that do use social media effectively gain measurable ROI and tend to track users across many venues. The companies using advanced ‘Big Data’ analytical tools to describe and predict user characteristics, applying the intelligence to target, time, tailor and trigger the release of cogent content to the ‘dynamic throng of individual audiences’, are experiencing the highest return on social media investment. This thesis makes a contribution to the wider understanding of social media use by large business entities, and to the current and future problems that this explosion of data is creating and is likely to create.
199

Smartphones and pervasive play : an examination of the effect Foursquare has on physical, spatial and social practices

Saker, Michael January 2014 (has links)
No description available.
200

Recognition of elementary upper limb movements in nomadic environment

Biswas, Dwaipayan January 2015 (has links)
ICT-enabled body-worn remote rehabilitation systems have been projected as an effective means of combating the major socio-economic challenge resulting from the need for quality care delivery for stroke survivors. The two major problems faced by such systems are: 1) while effective for characterising the patient’s performance during a constrained “exercise phase” in remote settings, the more natural indicator of rehabilitation status, i.e. the patient’s performance in an “unconstrained nomadic environment”, is often not considered; and 2) being body-worn and thus constrained by battery life, their sustainability for long-term continuous monitoring is questionable. These shortcomings motivated: 1) the exploration of effective algorithmic strategies for accurately detecting movement of affected body parts (more specifically, movement of the upper limb, since it is frequently affected by stroke episodes) in unconstrained scenarios; and 2) the translation of the algorithms to dedicated low-power hardware, with the aim of enhancing the battery life of a resource-constrained body-worn sensor-based remote rehabilitation system for sustained operation satisfying the notion of long-term continuous monitoring. Following the instructions of expert physiotherapists, this work concentrates on detecting three fundamental upper limb movements in unconstrained scenarios: extension/flexion of the forearm; rotation of the forearm about the elbow; and rotation of the arm about the long axis of the forearm, using body-worn inertial sensors. After selecting the appropriate type of inertial sensors and their positions through exhaustive experiments, two novel algorithms were proposed to recognize the above-mentioned movements: 1) a clustering and minimum-distance classifier based approach, and 2) tracking the orientation of an inertial sensor placed on the wrist. The performance of the algorithms has been evaluated prospectively through an archetypal activity, ‘making a cup of tea’, which includes multiple occurrences of the chosen movements. The proposed clustering-based approach detected the three movements with average accuracies of 88% and 70% using accelerometer data, and 83% and 70% using gyroscope data, obtained from the wrist for healthy subjects and stroke survivors respectively. In comparison, the proposed sensor-orientation-based methodology, using only a wrist-worn accelerometer, recognized the three movements with accuracies in the range of 91-99% for healthy subjects and 70-85% for stroke survivors. However, the clustering-based approach provides greater flexibility in terms of incorporating new types of movements beyond the ones chosen here, and can also be used to track changes in motor functionality over time. Subsequently it was translated into a novel ASIC, resulting in a dynamic power consumption of 25.9 mW at 20 MHz in 130 nm technology. The sensor-orientation-based approach, on the other hand, was validated in hardware using an Altera DEII FPGA system for high-speed real-time movement recognition.
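The clustering and minimum-distance-classifier idea can be pictured with a small sketch: represent each labelled training window of accelerometer data by a feature vector, keep one centroid per movement class, and assign a new window to the nearest centroid. The features, window length, synthetic data and three class labels below are assumptions for illustration, not the exact pipeline in the thesis.

```python
import numpy as np

def features(window: np.ndarray) -> np.ndarray:
    """Simple per-axis features for a (samples x 3) accelerometer window."""
    return np.concatenate([window.mean(axis=0), window.std(axis=0)])

def train_centroids(windows, labels):
    """One centroid (mean feature vector) per movement class."""
    feats = np.array([features(w) for w in windows])
    return {c: feats[np.array(labels) == c].mean(axis=0) for c in set(labels)}

def classify(window, centroids):
    """Minimum-distance classification: the nearest class centroid wins."""
    f = features(window)
    return min(centroids, key=lambda c: np.linalg.norm(f - centroids[c]))

# Hypothetical labelled windows for three elementary movements.
rng = np.random.default_rng(0)
classes = ["extension_flexion", "forearm_rotation", "arm_rotation"]
offsets = {"extension_flexion": 0.0, "forearm_rotation": 1.0, "arm_rotation": -1.0}
train_w, train_y = [], []
for c in classes:
    for _ in range(20):
        train_w.append(rng.normal(loc=offsets[c], scale=0.3, size=(50, 3)))
        train_y.append(c)

centroids = train_centroids(train_w, train_y)
test = rng.normal(loc=1.0, scale=0.3, size=(50, 3))   # should resemble forearm_rotation
print("Predicted movement:", classify(test, centroids))
```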
