
IT’S IN THE DATA 2 : A study on how effective design of a digital product’s user onboarding experience can increase user retention

Fridell, Gustav January 2021
User retention is a key factor for Software as a Service (SaaS) companies to ensure long-term growth and profitability. One area that can have a lasting impact on a digital product’s user retention is its user onboarding experience, that is, the methods and elements that guide new users to become familiar with the product and activate them to become fully registered users. Within the area of user onboarding, multiple authors discuss “best practice” design patterns that are claimed to positively influence the retention of new users; however, none of the sources reviewed provides statistically significant evidence for this claim. Thus, the objective of this study was to design and implement a set of commonly applied design patterns within a digital product’s user onboarding experience and evaluate their effects on user retention. Through A/B testing on the SaaS product GetAccept, the following two design patterns were evaluated: reduce friction (reducing the number of barriers and steps for a new user when first using a digital product) and monitor progress (monitoring and clearly showcasing the progress of a new user’s journey when first using a digital product). The retention metric used to evaluate the two design patterns was first-week user retention, defined as the share of users who, after signing up, sign in again at least once within one week. This was tested by randomly assigning new users to groups: treatment groups that received changes implementing the design patterns, and a control group that did not. Comparing first-week user retention between the groups using Fisher’s exact test led to the conclusion that, with statistical significance, both evaluated design patterns positively influenced user retention for GetAccept. Furthermore, given the generalizable nature of GetAccept’s product and the aspects evaluated, this conclusion should also be applicable to other companies and digital products with similar characteristics, and the method used to evaluate the impact of implementing the design patterns should be applicable to evaluating other design patterns and/or changes in digital products. However, as the data collection method could not ensure full validity, the study could and should be repeated with the same design patterns on another digital product and set of users in order to strengthen the reliability of the conclusions drawn.
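The statistical comparison the abstract describes is straightforward to reproduce. Below is a minimal sketch, with invented retention counts, of how first-week retention in a treatment and a control group can be compared with Fisher's exact test via SciPy; only the method, not the numbers, reflects the study.

```python
# Hedged sketch: comparing first-week retention between a treatment group and
# a control group with Fisher's exact test. All counts are invented.
from scipy.stats import fisher_exact

# 2x2 contingency table: rows = groups, columns = [retained, not retained]
table = [[130, 370],   # treatment: onboarding with "reduce friction" changes
         [ 95, 405]]   # control: unchanged onboarding

odds_ratio, p_value = fisher_exact(table, alternative="greater")
print(f"odds ratio = {odds_ratio:.2f}, one-sided p = {p_value:.4f}")
# A small p-value indicates significantly higher retention in the treatment group.
```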

語料庫英語教學之研究:以“see, watch, look at”為例 / “See”, “Watch” and “Look at” : Teaching Taiwanese EFL students on a corpora-based approach

謝瑋倫, Hsieh, Wei Lun Unknown Date
In Taiwan, many EFL students have difficulty using the proper vocabulary at the right time. Because Mandarin and English vocabulary stand in a one-to-many semantic correspondence, and because Taiwanese EFL students are often taught to learn English vocabulary by memorizing its Mandarin equivalent, students often have difficulty choosing the proper English word in different contexts. The arrangement of the junior high school English curriculum has made it even more difficult for students to learn vocabulary accurately. Because the improper use of vocabulary often brings about confusion or misunderstanding, a practical method is needed to cope with these problems. Computer-Assisted Language Learning (CALL) has become a trend, and in this study the researcher takes “see”, “watch”, and “look at” as examples to demonstrate a corpus-based teaching procedure. The subjects are 8th graders in junior high school. The scope of the research is confined to the prototypical meanings of these verbs; students should possess this basic knowledge before they go on to learn other, extended meanings. Before class, the concordance lines are selected and carefully edited by the teacher to meet the needs of the course. In class, consciousness-raising tasks, combined with quizzes and complementary materials, provide students with comprehensive knowledge of the three verbs. After the activities, crucial information about the verbs is clearly exhibited, and useful methods are presented to help distinguish them. With the “context providers”, namely the corpora, both teachers and students are supplied with authentic and plentiful examples, which Taiwanese EFL learners often lack. Through participating in these activities, students become active participants and create their own knowledge. It is hoped that with the assistance of data-driven learning (DDL), EFL teachers will be able to provide their students not only with more reliable information but also with more constructive and systematic instruction.
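As a rough illustration of the kind of concordance material such a lesson builds on, the sketch below pulls keyword-in-context (KWIC) lines for the three verbs from NLTK's built-in Brown corpus; the corpus and display settings are stand-ins, not those used in the study.

```python
# Hedged sketch: pulling keyword-in-context (KWIC) concordance lines that a
# teacher could edit for class, using NLTK's Brown corpus as a stand-in.
import nltk
from nltk.corpus import brown
from nltk.text import Text

nltk.download("brown", quiet=True)   # fetch the corpus on first run
text = Text(brown.words())

for verb in ["see", "watch", "look"]:
    print(f"--- {verb} ---")
    text.concordance(verb, width=60, lines=5)  # print 5 KWIC lines per verb
```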

Mobile systems for monitoring Parkinson's disease

Memedi, Mevludin January 2014
A challenge for the clinical management of Parkinson's disease (PD) is the large within- and between-patient variability in symptom profiles, as well as the emergence of motor complications, which represent a significant source of disability in patients. This thesis deals with the development and evaluation of methods and systems for supporting the management of PD by using repeated measures, consisting of subjective assessments of symptoms and objective assessments of motor function through fine motor tests (spirography and tapping), collected by means of a telemetry touch screen device. One aim of the thesis was to develop methods for objective quantification and analysis of the severity of motor impairments represented in spiral drawings and tapping results. This was accomplished by first quantifying the digitized movement data with time series analysis and then using them in data-driven modelling to automate the assessment of symptom severity. The objective measures were then analysed with respect to subjective assessments of motor conditions. Another aim was to develop a method for providing information content comparable to clinical rating scales by combining subjective and objective measures into composite scores, using time series analysis and data-driven methods. The scores represent six symptom dimensions and an overall test score reflecting the global health condition of the patient. In addition, the thesis presents the development of a web-based system that provides a visual representation of symptoms over time, allowing clinicians to remotely monitor the symptom profiles of their patients. The quality of the methods was assessed by reporting different metrics of validity, reliability and sensitivity to treatment interventions and natural PD progression over time. Results from two studies demonstrated that the methods developed for the fine motor tests had good metrics, indicating that they are appropriate for quantitatively and objectively assessing the severity of motor impairments of PD patients. The fine motor tests captured different symptoms: spiral drawing impairment and tapping accuracy related to dyskinesias (involuntary movements), whereas tapping speed related to bradykinesia (slowness of movement). A longitudinal data analysis indicated that the six symptom dimensions and the overall test score contained important elements of the information in the clinical scales and can be used to measure the effects of PD treatment interventions and disease progression. A usability evaluation of the web-based system showed that the information presented was comparable to qualitative clinical observations, and the system was recognized as a tool that will assist in the management of patients.
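As a rough sketch of how a tapping test can be quantified, the example below computes two illustrative features, tapping speed and positional accuracy, from hypothetical timestamped tap coordinates; the data format and feature definitions are assumptions, not the thesis's actual time-series methods.

```python
# Hedged sketch: two illustrative tapping-test features. The data format and
# feature definitions are assumptions, not the thesis's actual methods.
import numpy as np

# Simulated taps as (timestamp in seconds, x, y) plus a fixed target position
taps = np.array([[0.00, 101, 198], [0.45,  97, 203], [0.95, 105, 195],
                 [1.50,  99, 201], [2.10, 102, 199]])
target = np.array([100, 200])

duration = taps[-1, 0] - taps[0, 0]
speed = (len(taps) - 1) / duration                      # taps per second
errors = np.linalg.norm(taps[:, 1:] - target, axis=1)   # pixel distance to target
accuracy = errors.mean()

print(f"speed: {speed:.2f} taps/s (bradykinesia-related), "
      f"mean error: {accuracy:.1f} px (dyskinesia-related)")
```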

Contribution au pronostic de durée de vie des systèmes piles à combustible PEMFC / Contribution to lifetime prognostics for proton exchange membrane fuel cell (PEMFC) systems

Silva Sanchez, Rosa Elvira 21 May 2015
This thesis work aims to provide solutions for the limited lifetime of Proton Exchange Membrane Fuel Cell Systems (PEM-FCS), based on two complementary disciplines. A first approach consists in increasing the lifetime of the PEM-FCS by designing and implementing a Prognostics & Health Management (PHM) architecture. PEM-FCS are inherently multi-physical (electrical, fluidic, electrochemical, thermal, mechanical, etc.) and multi-scale (in time and space) systems, so their behaviour is hard to apprehend. The nonlinear nature of the phenomena, the reversible or irreversible character of the degradations, and the interactions between components make a failure-modelling stage quite difficult. Moreover, the current lack of homogeneity in the manufacturing process makes statistical characterization of their behaviour difficult. Deploying a PHM solution would make it possible to anticipate and avoid failures, assess the state of health, estimate the Remaining Useful Lifetime (RUL) of the system and, finally, consider control and/or maintenance actions to ensure operational continuity. A second approach proposes a passive hybridization of the PEMFC with Ultra Capacitors (UC), so as to operate the fuel cell closer to its optimal operating conditions and thereby minimize the impact of aging. UCs appear as a complementary source to the PEMFC owing to their high power density, their ability to charge and discharge rapidly, their reversibility and their long lifetime. Taking fuel cell hybrid electric vehicles as an example, the association between a PEMFC and UCs can be realised with either an active or a passive hybrid architecture. The overall behaviour of the system depends both on the choice of architecture and on the positioning of these elements with respect to the electrical load. Today, research in this area focuses mainly on energy management between the on-board sources and storage devices, and on the definition and optimization of a power-electronic interface designed to condition the flow of energy between them. However, the presence of power converters adds sources of faults and failures (failure of the converter's own switches, and the impact of high-frequency current ripple on the aging of the PEMFC) and also increases the energy losses of the complete system (even if the converter's efficiency is high, it still degrades the overall energy balance).
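To make the prognostics step concrete, here is a minimal sketch of RUL estimation by extrapolating a monitored health indicator to a failure threshold; the voltage data, threshold, and linear-trend assumption are illustrative and not drawn from the thesis.

```python
# Hedged sketch: RUL estimation by extrapolating a health indicator (here an
# assumed per-cell stack voltage) to an assumed end-of-life threshold.
import numpy as np

hours = np.array([0, 200, 400, 600, 800, 1000], dtype=float)
voltage = np.array([0.700, 0.690, 0.685, 0.675, 0.670, 0.660])  # illustrative
failure_threshold = 0.600   # assumed end-of-life criterion

slope, intercept = np.polyfit(hours, voltage, 1)   # linear degradation model
end_of_life = (failure_threshold - intercept) / slope
rul = end_of_life - hours[-1]

print(f"estimated RUL: {rul:.0f} h (end of life at ~{end_of_life:.0f} h)")
```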

Measurement of the Standard Model W⁺W⁻ production cross-section using the ATLAS experiment on the LHC / Mesure de la section efficace de production des bosons W⁺W⁻ dans l'experience ATLAS au LHC

Zeman, Martin 02 October 2014
Measurements of di-boson production cross-sections are an important part of the physics programme at the CERN Large Hadron Collider. These physics analyses provide the opportunity to probe the electroweak sector of the Standard Model at the TeV scale and could also indicate the existence of new particles or probe physics beyond the Standard Model. The excellent performance of the LHC throughout 2011 and 2012 allowed for very competitive measurements. This thesis provides a comprehensive overview of the experimental considerations and methods used in the measurement of the W⁺W⁻ production cross-section in proton-proton collisions at √s = 7 TeV and 8 TeV. The treatise covers the material in great detail, starting with an introduction to the theoretical framework of the Standard Model and following with an extensive discussion of the methods implemented in recording and reconstructing physics events in an experiment of this magnitude. The associated online and offline software tools are included in the discussion. The relevant experiments are covered, including a very detailed section on the ATLAS detector. The final chapter contains a detailed description of the analysis of W-pair production in the leptonic decay channels using the datasets recorded by the ATLAS experiment during 2011 and 2012 (Run I). The analyses use 4.6 fb⁻¹ of data recorded at √s = 7 TeV and 20.28 fb⁻¹ recorded at 8 TeV. The experimentally measured cross-section for W-pair production at the ATLAS experiment is consistently enhanced compared to the predictions of the Standard Model at centre-of-mass energies of 7 TeV and 8 TeV. The thesis concludes with the presentation of differential cross-section measurement results.
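The core of any such counting measurement is the relation sigma = (N_observed − N_background) / (efficiency × luminosity). The sketch below evaluates it with invented placeholder numbers; none of the values are ATLAS results.

```python
# Hedged sketch of the generic cross-section formula
#   sigma = (N_observed - N_background) / (efficiency * luminosity)
# evaluated with invented placeholder numbers, not ATLAS results.
n_observed = 6636.0     # selected candidate events (made up)
n_background = 1546.0   # estimated background events (made up)
efficiency = 0.0584     # acceptance x selection efficiency (made up)
luminosity = 20.28e3    # integrated luminosity in pb^-1 (i.e. 20.28 fb^-1)

cross_section = (n_observed - n_background) / (efficiency * luminosity)
print(f"sigma ~ {cross_section:.1f} pb")
```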

Unstable equilibrium : modelling waves and turbulence in water flow

Connell, R. J. January 2008
This thesis develops a one-dimensional version of a new data-driven model of turbulence that uses the Karhunen-Loève (KL) expansion to provide a spectral solution of the turbulent flow field, based on analysis of Particle Image Velocimetry (PIV) turbulence data. The analysis derives a second-order random field over the whole flow domain that gives better turbulence properties in areas of non-uniform flow, and where flow separates, than present models based on the Navier-Stokes equations. These latter models need simplifying assumptions to reduce the number of calculations so that they can run on present-day computers or supercomputers, and these assumptions reduce their accuracy. The improved flow field is gained at the expense of the model not being generic: the new data-driven model can only be used for the flow situation of the data, as the analysis shows that the kernel of the turbulent flow field of an undular hydraulic jump could not be related to the surface waves, a key feature of the jump. The kernel developed has two parts, called the outer and inner parts. A comparison shows that the ratio of outer to inner kernel primarily reflects the ratio of turbulent production to turbulent dissipation: the outer part, with a larger correlation length, reflects the larger structures of the flow that contain most of the turbulent energy production, while the inner part reflects the smaller structures that contain most of the turbulent energy dissipation. The new data-driven model can use a kernel with changing variance and/or regression coefficient over the domain, necessitating the use of both numerical and analytical methods. The model allows the use of a two-part regression-coefficient kernel, the solution being the sum of the results from each part of the kernel. This research highlighted the need to assess the size of the structures calculated by models based on the Navier-Stokes equations in order to validate them; at present most studies use mean velocities and turbulent fluctuations to validate a model's performance. As the new data-driven model gives better turbulence properties, it could be used in complicated flow situations, such as flow around a rock groyne, to give a better assessment of the forces and pressures resulting from turbulent fluctuations for the design of such structures. Further development to make the model usable includes: solving the numerical problem associated with the double kernel; reducing the number of modes required; obtaining a solution for the kernel of two- and three-dimensional flows; including the change in correlation length with time, as the model presently gives instantaneous realisations of the flow field; and incorporating third- and fourth-order statistics so that the modelled velocity field is not restricted to Gaussian distribution properties. As the third- and fourth-order statistics are Reynolds-number dependent, this will enable the model to be applied to PIV data from physical scale models. In summary, this new data-driven model is complementary to models based on the Navier-Stokes equations, providing better results in complicated design situations. Further research to develop the new model is viewed as an important step forward in the analysis of river control structures such as rock groynes, which are prevalent on New Zealand rivers protecting large cities.
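For readers unfamiliar with the KL expansion, the sketch below shows the basic mechanics in one dimension: eigendecompose a covariance kernel and synthesise a random realisation of the fluctuating field from the leading modes. The exponential kernel and its parameters are illustrative, not those derived from the PIV data.

```python
# Hedged sketch of the KL mechanics in 1-D: eigendecompose a covariance kernel
# and synthesise a realisation of the fluctuating velocity field from the
# leading modes. Kernel shape and parameters are illustrative only.
import numpy as np

x = np.linspace(0.0, 1.0, 100)                  # 1-D flow domain
corr_len = 0.1
cov = np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)  # exponential kernel

eigvals, eigvecs = np.linalg.eigh(cov)          # KL eigenpairs
order = np.argsort(eigvals)[::-1][:10]          # keep the 10 largest modes
lam, phi = eigvals[order], eigvecs[:, order]

rng = np.random.default_rng(0)
xi = rng.standard_normal(10)                    # uncorrelated random coefficients
u_fluct = phi @ (np.sqrt(lam) * xi)             # one realisation of u'(x)
print(u_fluct.shape)                            # (100,) Gaussian fluctuation field
```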

Digital Intelligence – Möglichkeiten und Umsetzung einer informatikgestützten Frühaufklärung / Digital Intelligence – opportunities and implementation of a data-driven foresight

Walde, Peter 18 January 2011
The goal of Digital Intelligence, i.e. data-driven strategic foresight, is to support the shaping of the future on the basis of valid and well-founded digital information, with comparatively little effort and enormous savings in time and cost. Support comes from innovative technologies for (semi-)automatic language and data processing, for example information retrieval, (temporal) data, text and web mining, information visualization, conceptual structures, and informetrics. They make it possible to detect key topics and latent relationships in unmanageably large, distributed, and inhomogeneous data sets such as patents, scientific publications, press documents, or web content in good time, and to deliver them quickly and in a targeted manner. Digital Intelligence thus makes intuitively sensed patterns and developments explicit and measurable. This research work aims, first, to demonstrate the opportunities computer science offers for data-driven foresight and, second, to put them into practice in a pragmatic context. Its starting point is an introduction to the discipline of strategic foresight and its data-driven branch, Digital Intelligence. The theoretical and, in particular, the computer-science foundations of foresight are discussed and classified, above all the possibilities of time-oriented data exploration. Various methods and software tools are designed and developed that support the time-oriented exploration of unstructured text data in particular (temporal text mining). Only techniques that can be used pragmatically within a large institution and under the specific requirements of strategic foresight are considered. Particularly noteworthy are a platform for collective search and an innovative method for identifying weak signals. Finally, a Digital Intelligence service is presented and discussed that was successfully implemented on this basis in a global technology-oriented corporation and that enables systematic competitor, market, and technology analysis based on the digital traces people leave behind.
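As a toy illustration of temporal text mining for weak-signal identification, the sketch below tracks the yearly document frequency of a candidate term and flags terms that are rising but still rare; the corpus, term, and thresholds are invented, not the method developed in the thesis.

```python
# Hedged toy sketch: flag terms whose yearly document frequency is rising but
# still low as candidate weak signals. Corpus, term and thresholds are invented.
import numpy as np

docs_by_year = {
    2007: ["fuel cell stack design", "lithium battery anode"],
    2008: ["lithium battery cathode", "graphene electrode test"],
    2009: ["graphene electrode scaling", "solid-state battery cell"],
}

term = "graphene"
years = sorted(docs_by_year)
freq = [sum(term in doc for doc in docs_by_year[y]) / len(docs_by_year[y])
        for y in years]

slope = np.polyfit(years, freq, 1)[0]           # linear trend of the frequency
if slope > 0 and max(freq) < 0.8:               # rising but not yet mainstream
    print(f"'{term}' looks like a weak signal (trend {slope:+.2f}/year)")
```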

Introduction of New Products in the Supply Chain : Optimization and Management of Risks

El KHOURY, Hiba 31 January 2012
Shorter product life cycles and rapid product obsolescence provide increasing incentives to introduce new products to markets more quickly. As a consequence of rapidly changing market conditions, firms focus on improving their new product development processes to reap the benefits of early market entry. Researchers have analyzed market entry, but have seldom provided quantitative approaches for the product rollover problem. This research builds upon the literature by using established optimization methods to examine how firms can minimize their net loss during the rollover process. Specifically, our work explicitly optimizes the timing of removal of old products and introduction of new products, the optimal strategy, and the magnitude of net losses when the market entry approval date of a new product is unknown. In the first paper, we use the conditional value at risk to optimize the net loss and investigate the effect of the manager's risk perception on the rollover process. We compare it to the minimization of the classical expected net loss. We derive conditions for optimality and unique closed-form solutions for single and dual rollover cases. In the second paper, we investigate the rollover problem, but for a time-dependent demand rate for the second product, approximating the Bass model. Finally, in the third paper, we apply the data-driven optimization approach to the product rollover problem where the probability distribution of the approval date is unknown; instead, we have historical observations of approval dates. We develop the optimal times of rollover and show the superiority of the data-driven method over the conditional value at risk in cases where it is difficult to guess the real probability distribution.
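To give a flavour of the data-driven, risk-aware formulation, the sketch below evaluates the expected net loss and the Conditional Value at Risk (CVaR) of candidate rollover dates directly on historical approval-date observations; the loss model and all numbers are deliberately simple placeholders, not the papers' formulation.

```python
# Hedged sketch: score candidate rollover dates against historical approval
# dates using expected net loss and CVaR. Loss model and numbers are toy
# placeholders, not the papers' formulation.
import numpy as np

approval_days = np.array([30, 45, 50, 62, 70, 90, 120])  # historical samples
c_gap, c_overlap = 3.0, 1.0   # per-day cost of having no product / keeping the old one

def net_loss(rollover_day, approval_day):
    gap = max(approval_day - rollover_day, 0)      # rolled over before approval
    overlap = max(rollover_day - approval_day, 0)  # kept the old product too long
    return c_gap * gap + c_overlap * overlap

def cvar(losses, alpha=0.9):
    # mean of the worst (1 - alpha) share of outcomes
    tail_start = int(np.ceil(alpha * len(losses))) - 1
    return np.sort(losses)[tail_start:].mean()

for t in (45, 60, 75):
    losses = np.array([net_loss(t, a) for a in approval_days])
    print(f"rollover day {t}: E[loss] = {losses.mean():.1f}, CVaR90 = {cvar(losses):.1f}")
```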

Kvantitativ Modellering av förmögenhetsrättsliga dispositiva tvistemål / Quantitative legal prediction : Modeling cases amenable to out-of-court Settlements

Egil, Martinsson January 2014
BACKGROUND: The idea of legal automation is a controversial topic that has been discussed for hundreds of years, in modern times in the context of law and artificial intelligence. Strangely, real-world applications are very rare. Assuming that the judicial system, like any system, transforms inputs into outputs, one would think that we should be able to measure it, gain insight into its inner workings, and ultimately use these measurements to make predictions about its output. This thesis devotes particular interest to civil procedures in commercial matters amenable to out-of-court settlement (förmögenhetsrättsliga dispositiva tvistemål) and poses the question: can we predict the outcome of civil procedures using statistical methods? METHOD: By analyzing procedural law and legal doctrine, the civil procedure was modelled in terms of a random variable with a discrete observable outcome. Data for 14,821 cases were extracted from eight different courts. Five of these courts (13,299 cases) were used to train the models, and three randomly chosen courts (1,522 cases) were kept untouched for validation. Most cases concerned monetary claims (66%) and/or damages (12%). Binary and multinomial logistic regression were used as classifiers. RESULTS: The models were found to be uncalibrated, but they clearly outperformed random score assignment at separating classes, and at a preset threshold gave accuracies significantly higher (p << 0.001) than random guessing; in identifying settlements or the correct type of verdict, performance was significantly better (p << 0.003) than consistently guessing the most common outcome. CONCLUSION: Models trained on case data from one set of courts can, to some extent, predict the outcomes of cases from another set of courts, supporting the conclusion that the outcome of this type of civil process can be predicted using statistical methods.
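A minimal sketch of the modelling approach, training a logistic regression on cases from one group of "courts" and validating on a held-out group, is shown below; the features and data are invented stand-ins for the case variables used in the thesis.

```python
# Hedged sketch: train a logistic regression on cases from "training courts"
# and validate on held-out "validation courts". Features and the outcome rule
# are invented stand-ins for the thesis's case variables.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
n = 1000
X = np.column_stack([
    rng.normal(11, 2, n),      # log of claimed amount (toy)
    rng.integers(0, 2, n),     # case-type flag (toy)
    rng.integers(2, 5, n),     # number of parties (toy)
])
y = (X[:, 0] + rng.normal(0, 2, n) > 11).astype(int)  # 1 = settlement (toy rule)

train, test = slice(0, 800), slice(800, None)  # split standing in for court groups
model = LogisticRegression().fit(X[train], y[train])
print(f"held-out accuracy: {accuracy_score(y[test], model.predict(X[test])):.2f}")
```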

Data-driven prediction of saltmarsh morphodynamics

Evans, Ben Richard January 2018
Saltmarshes provide a diverse range of ecosystem services and are protected under a number of international designations. Nevertheless, they are generally declining in extent in the United Kingdom and North West Europe. The drivers of this decline are complex and poorly understood. When considering mitigation and management for future ecosystem service provision, it will be important to understand why, where, and to what extent decline is likely to occur. Few studies have attempted to forecast saltmarsh morphodynamics at a system level over decadal time scales. There is no synthesis of existing knowledge available for specific site predictions, nor is there a formalised framework for individual site assessment and management. This project evaluates the extent to which machine learning approaches (boosted regression trees, neural networks and Bayesian networks) can facilitate synthesis of information and prediction of decadal-scale morphological tendencies of saltmarshes. Importantly, data-driven predictions are independent of the assumptions underlying physically-based models, and therefore offer an additional opportunity to cross-validate between the two paradigms. Marsh margins and interiors are both considered but are treated separately, since they are regarded as being sensitive to different process suites. The study therefore identifies factors likely to control morphological trajectories and develops geospatial methodologies to derive proxy measures relating to controls or processes. These metrics are developed at a high spatial density, on the order of tens of metres, allowing for the resolution of fine-scale behavioural differences. Conventional statistical approaches, as previously adopted, are applied to the dataset to assess consistency with previous findings, with some agreement found. The data are subsequently used to train and compare three types of machine learning model. Boosted regression trees outperform the other two methods in this context. The resulting models are able to explain more than 95% of the variance in marginal changes and 91% for internal dynamics. Models are selected based on validation performance and are then queried with realistic future scenarios representing altered input conditions that may arise as a consequence of future environmental change. Responses to these scenarios are evaluated, suggesting system sensitivity to all scenarios tested and offering a high degree of spatial detail in the responses. While mechanistic interpretation of some responses is challenging, process-based justifications are offered for many of the observed behaviours, providing confidence that the results are realistic. The work demonstrates a potentially powerful alternative (and complement) to current morphodynamic models, one that can be applied over large areas with relative ease compared to numerical implementations. Powerful analyses with broad scope are now available to the field of coastal geomorphology through the combination of spatial data streams and machine learning. Such methods are shown to be of great potential value in support of applied management and monitoring interventions.
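As a sketch of the best-performing model class, the example below fits boosted regression trees (gradient boosting) to invented stand-ins for the geospatial predictors and marsh-margin change, then scores them on held-out data; the variable names and the data-generating rule are assumptions.

```python
# Hedged sketch: boosted regression trees on invented stand-ins for geospatial
# predictors and marsh-margin change; names and data rule are assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(42)
n = 2000
X = np.column_stack([
    rng.uniform(0, 1, n),      # wave-exposure proxy
    rng.normal(1.5, 0.5, n),   # elevation (m)
    rng.uniform(0, 10, n),     # sediment-supply proxy
])
y = -2.0 * X[:, 0] + 0.8 * X[:, 1] + rng.normal(0, 0.1, n)  # margin change (m/yr)

model = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05)
model.fit(X[:1500], y[:1500])
print(f"held-out R^2: {model.score(X[1500:], y[1500:]):.2f}")
```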
