  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Deep Learning of Unknown Governing Equations

Chen, Zhen January 2021 (has links)
No description available.
2

A Physically Informed Data-Driven Approach to Analyze Human Induced Vibration in Civil Structures

Kessler, Ellis Carl 24 June 2021 (has links)
With the rise of the Internet of Things (IoT) and smart buildings, new algorithms are being developed to understand how occupants interact with buildings via structural vibration measurements. These vibration-based occupant inference (VBOI) algorithms have been developed to localize footsteps within a building, to classify occupants, and to monitor occupant health. This dissertation presents a three-stage journey proposing a path forward for VBOI research based on physically informed data-driven models of structural dynamical systems. The first part of this dissertation presents a method for extracting temporal gait parameters via underfloor accelerometers. The time between an occupant's consecutive steps can be measured from structural vibration alone, with accuracy similar to current gait analysis tools such as force plates and in-shoe pressure sensors. The benefit of this and other VBOI gait analysis algorithms lies in their ease of use: gait analysis is currently limited to clinical settings with specialized measurement systems, whereas VBOI can bring gait analysis to any building. VBOI algorithms often make simplifying assumptions about the dynamics of the building in which they operate, and through a calibration procedure many of them can learn some system parameters. However, as demonstrated in the second part of this dissertation, some commonly made assumptions oversimplify phenomena present in civil structures such as attenuation, reflections, and dispersion. A series of experimental and theoretical investigations shows that three common assumptions made in VBOI algorithms each fail to account for at least one of these phenomena, leading to algorithms whose accuracy depends on the conditions in which they are used. The final part of this dissertation introduces a physically informed data-driven modelling technique which could be used in VBOI to create a more complete model of a building.
Continuous residue interpolation (CRI) takes FRF measurements at a discrete number of testing locations and creates a predictive model with continuous spatial resolution. The fitted CRI model can be used to simulate the response at any location to an input at any other location. An example of using CRI for VBOI localization is shown. / Doctor of Philosophy / Vibration-based occupant inference (VBOI) algorithms are an emerging area of research in smart buildings instrumented with vibration sensors. These algorithms use vibration measurements of the building's structure to learn something about the occupants inside. For example, the vibration of a floor in response to a person's footstep could be used to estimate where that person is, without the need for any line-of-sight sensors such as cameras or motion sensors. The storyline of this dissertation makes three stops. The first is the demonstration of a VBOI algorithm for monitoring occupant health. The second is an investigation of some assumptions commonly made while developing VBOI algorithms, seeking to shed light on when they lead to accurate results and when they should be used with caution. The third, and final, is the development of a data-driven modelling method which uses knowledge about how systems vibrate to build as detailed a model of the system as possible. Current VBOI algorithms have demonstrated the ability to accurately infer a range of information about occupants through vibration measurements. This is shown by a varied literature of localization algorithms, as well as a growing number of algorithms for performing gait analysis. Gait analysis is the study of how people walk and of how their walk correlates with their health. The vibration-based gait analysis procedure in this work demonstrates extracting distributions of temporal gait parameters, such as the time between steps. However, many current VBOI algorithms make significant simplifying assumptions about the dynamics of civil structures.
Experimental and theoretical investigations of some of these assumptions show that while each is accurate in certain situations, the dynamics of civil structures are too complex to be completely captured by these simplified models. The proposed path forward for VBOI algorithms is to employ more sophisticated data-driven modelling techniques. Data-driven models use measurements from the system to build a model of how the system would respond to new inputs. The final part of this dissertation is the development of a novel data-driven modelling technique that could be useful for VBOI. The new method, continuous residue interpolation (CRI), uses knowledge of how systems vibrate to build a model of a vibrating system, not only at the locations that were measured but over the whole system. This allows a relatively small amount of testing to be used to create a model of the entire system, which can in turn be used for VBOI algorithms.
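The flavor of residue interpolation can be sketched in a few lines of NumPy. The example below is a toy stand-in, not the dissertation's actual method: a single-mode structure with an assumed sinusoidal residue (mode) shape, residues extracted by least squares at five "measured" points, interpolated with a cubic polynomial, and used to predict the FRF at an unmeasured location. All shapes and numbers are invented for illustration.

```python
import numpy as np

# Illustrative single-mode structure: FRF H(x, w) = r(x) / (wn^2 - w^2 + 2j*zeta*wn*w)
wn, zeta = 2 * np.pi * 12.0, 0.02            # assumed natural frequency (rad/s) and damping
r_true = lambda x: np.sin(np.pi * x)         # assumed smooth spatial residue shape

w = np.linspace(1, 150, 400)                 # frequency grid (rad/s)
den = wn**2 - w**2 + 2j * zeta * wn * w      # shared modal denominator

x_meas = np.array([0.1, 0.3, 0.5, 0.7, 0.9])     # measured locations (normalized span)
H_meas = [r_true(x) / den for x in x_meas]       # "measured" FRFs at those points

# Step 1: extract the residue at each measured location (least squares against 1/den)
basis = 1.0 / den
r_fit = np.array([np.real(np.vdot(basis, H) / np.vdot(basis, basis)) for H in H_meas])

# Step 2: interpolate the residues continuously over space (cubic polynomial here)
coef = np.polyfit(x_meas, r_fit, 3)

# Step 3: predict the FRF at an unmeasured location and compare with the truth
x_new = 0.42
H_pred = np.polyval(coef, x_new) / den
H_ref = r_true(x_new) / den
rel_err = np.linalg.norm(H_pred - H_ref) / np.linalg.norm(H_ref)
```

With a smooth residue shape, a handful of testing locations is enough for the interpolated model to predict the FRF at unmeasured points to within a few percent, which is the property the dissertation exploits for localization.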
3

Run-to-run modelling and control of batch processes

Duran Villalobos, Carlos Alberto January 2016 (has links)
The University of Manchester. Carlos Alberto Duran Villalobos. Doctor of Philosophy in the Faculty of Engineering and Physical Sciences. December 2015. This thesis presents an innovative batch-to-batch optimisation technique that was able to improve the productivity of two benchmark fed-batch fermentation simulators: Saccharomyces cerevisiae and penicillin production. In developing the proposed technique, several important challenges needed to be addressed. For example, the technique relied on a linear Multiway Partial Least Squares (MPLS) model that had to adapt from one operating region to another as productivity increased, while still estimating the end-point quality of each batch accurately. The proposed optimisation technique uses a Quadratic Programming (QP) formulation to calculate the Manipulated Variable Trajectory (MVT) from one batch to the next. Its main advantage over previously published approaches was a higher yield and faster convergence to an optimal MVT. Validity constraints were also included in the batch-to-batch optimisation to restrict the QP calculations to only that part of the space described by useful predictions of the MPLS model. The results from experiments with the two simulators showed that the validity constraints slowed the rate of convergence of the optimisation technique and in some cases resulted in a slight reduction in final yield; however, they did improve the consistency of the batch optimisation. Another important contribution of this thesis was a series of experiments combining a variety of smoothing techniques used in MPLS modelling with the proposed batch-to-batch optimisation technique. These experiments made clear that the smoothing techniques did not significantly improve the MPLS model's prediction accuracy. However, the batch-to-batch optimisation technique did show improvements when filtering was implemented.
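The batch-to-batch idea can be sketched with a heavily simplified stand-in: a surrogate gradient predicts how end-point yield responds to the manipulated-variable trajectory (MVT), and a box-constrained step (standing in for both the QP and the validity constraints) updates the trajectory after each batch. This replaces the thesis's MPLS model and QP solver with a closed-form update, queries a toy simulator for the gradient, and uses invented numbers throughout.

```python
import numpy as np

n = 20
u_opt = np.linspace(0.2, 0.8, n)        # unknown "true" optimal MVT (illustrative)

def run_batch(u):
    """Plant stand-in: end-point yield peaks when the MVT matches u_opt."""
    return 100.0 - np.sum((u - u_opt) ** 2)

u = np.full(n, 0.5)                     # initial manipulated-variable trajectory
step_bound = 0.05                       # per-element trust-region bound (validity-style)
yields = [run_batch(u)]
for batch in range(30):
    # Local gradient via finite differences. A real batch-to-batch scheme gets
    # one plant run per batch and relies on the MPLS model for this information;
    # here we simply query the simulator.
    grad = np.zeros(n)
    h = 1e-4
    for i in range(n):
        up = u.copy()
        up[i] += h
        grad[i] = (run_batch(up) - yields[-1]) / h
    # Box-constrained step toward higher predicted yield (stands in for the QP)
    u = u + np.clip(0.2 * grad, -step_bound, step_bound)
    yields.append(run_batch(u))
```

The bounded step mirrors the trade-off reported in the thesis: tighter bounds slow convergence but keep every update inside a region where the model's predictions can be trusted.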
4

Enhancing urban centre resilience under climate-induced disasters using data analytics and machine learning techniques

Haggag, May January 2021 (has links)
According to the Centre for Research on the Epidemiology of Disasters, the global average number of Climate-Induced Disasters (CID) has tripled in less than four decades, from approximately 1,300 between 1975 and 1984 to around 3,900 between 2005 and 2014. In addition, around 1 million deaths and $1.7 trillion in damage costs have been attributed to CID since 2000, with around $210 billion incurred in 2020 alone. Consequently, the World Economic Forum has identified extreme weather as the top-ranked global risk in terms of likelihood, and among the top five risks in terms of impact, in each of the last four years. These risks are not expected to diminish: i) the number of CID is anticipated to double during the next 13 years; ii) annual fatalities due to CID are expected to increase by 250,000 deaths in the next decade; and iii) annual CID damage costs are expected to increase by around 20% in 2040 compared to those realized in 2020. Given the anticipated increase in CID frequency, the intensification of CID impacts, the rapid growth of the world's population, and the fact that two thirds of that population will be living in urban areas by 2050, it has become crucial to enhance both community and city resilience under CID. Resilience, in this context, refers to the ability of a system to bounce back, recover or adapt in the face of adverse events. This is a very ambitious goal, given both the extreme unpredictability of the frequency and impacts of CID and the complex behavior of cities, which stems from the interconnectivity of their constituent infrastructure systems. With the emergence of data-driven machine learning, in which models are trained on historical data and can efficiently learn to predict complex features, developing robust models that predict the frequency and impacts of CID has become more conceivable.
Through data analytics and machine learning techniques, this work aims at enhancing city resilience by predicting both the occurrence and the expected impacts of climate-induced disasters on urban areas. The first part of this dissertation presents a critical review of the research pertaining to the resilience of critical infrastructure systems; meta-research through topic modelling is employed to quantitatively uncover related latent topics in the field. The second part aims at predicting the occurrence of CID by developing a framework that links different climate change indices to historical disaster records. In the third part of this work, a framework is developed for predicting the performance of critical infrastructure systems under CID. Finally, the fourth part of this dissertation develops a systematic data-driven framework for the prediction of CID property damages. This work is expected to aid stakeholders in developing spatio-temporal preparedness plans under CID, which can help mitigate the adverse impacts of CID on infrastructure systems and improve their resilience. / Thesis / Doctor of Philosophy (PhD)
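The second part's core task, linking climate indices to a binary disaster-occurrence label, can be illustrated with a minimal classifier. The sketch below is a from-scratch logistic regression on synthetic data; the hypothetical "climate indices", coefficients, and labels are all invented, and the thesis's actual framework uses real disaster records and richer features.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 2000
# Two synthetic "climate indices" per region-month (purely illustrative)
X = rng.normal(size=(n, 2))
true_w, true_b = np.array([1.5, -1.0]), -0.5
p = 1 / (1 + np.exp(-(X @ true_w + true_b)))
y = rng.random(n) < p                      # binary disaster-occurrence labels

# Logistic regression fitted by batch gradient descent
w, b = np.zeros(2), 0.0
for _ in range(2000):
    pred = 1 / (1 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (pred - y)) / n      # gradient of the log-loss w.r.t. weights
    b -= 0.5 * np.mean(pred - y)           # gradient w.r.t. the intercept

acc = np.mean(((1 / (1 + np.exp(-(X @ w + b)))) > 0.5) == y)
```

The fitted weights recover the sign and rough magnitude of the generating coefficients, which is the kind of index-to-occurrence link the framework formalizes at scale.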
5

A qualitative assessment and optimization of URANS modelling for unsteady cavitating flows

Apte, Dhruv Girish 07 June 2024 (has links)
Cavitation is characterized by the formation of vapor bubbles when the pressure in a working fluid drops sharply below the vapor pressure. These bubbles, upon exiting the low-pressure region, burst, releasing tremendous amounts of energy. Unsteady cavitating flows are influential in several respects: they are responsible for erosion damage and vibrations in hydraulic engineering devices, but they are also exploited in non-invasive medical surgery and in drilling for geothermal energy. While the phenomenon has been investigated using both experimental and numerical methods, it continues to pose a challenge for numerical modelling techniques due to its flow unsteadiness and the cavitation-turbulence interaction. A principal requirement in modelling cavitation is the coupling of a cavitation model with a turbulence model. While scale-resolving turbulence modelling techniques such as Direct Numerical Simulation (DNS) and, to a certain extent, Large Eddy Simulation (LES) may seem an intuitive solution, the physical complexities involved in cavitation make them extremely expensive computationally. Thus, Unsteady Reynolds-Averaged Navier-Stokes (URANS) models have been widely utilized as the workhorse for cavitating simulations. However, URANS models are unable to reproduce the periodic vapor shedding observed in experiments and are therefore often adjusted with empirical corrections. Recently, hybrid RANS-LES models, which behave as RANS or LES depending on the flow region, have been introduced and employed to model cavitating flows. In addition, frameworks have emerged that use data from high-fidelity simulations or experiments to drive numerical algorithms and aid standard turbulence modelling procedures in accurately simulating turbulent flows.
This dissertation is aimed at (1) evaluating the abilities of these corrections, traditional URANS models, and hybrid RANS-LES models to model cavitation, and (2) optimizing the URANS modelling strategy by designing a methodology, driven by experimental data, to augment the turbulence modelling and simulate cavitating flow in a converging-diverging nozzle. / Doctor of Philosophy / The famous painting Arion on the Dolphin by the French artist François Boucher shows a dolphin rescuing the poet Arion from the choppy seas after he is thrown overboard. Today, seeing silhouettes of dolphins swimming near the shore as the Sun sets is a calming sight. However, as these creatures splash their fins in the water, the fins create a drastic pressure difference, resulting in the formation of ribbons of vapor bubbles. As the bubbles exit the low-pressure zones, they collapse and release tremendous amounts of energy. This energy manifests as shockwaves, rendering a sight that is pleasant to the human eye extremely painful for dolphins. These shocks also impact the metal blades of hydraulic machinery such as pumps and ship propellers. This dissertation aims to investigate the physics driving this phenomenon using accurate numerical simulations. We first conduct two-dimensional simulations and observe that standard numerical techniques for modelling the turbulence are unable to simulate cavitation accurately. The investigation is then extended to three-dimensional simulations using hybrid RANS-LES models, which aim to strike a delicate balance between accuracy and efficiency. We observe that these models are able to reproduce the flow dynamics seen in experiments but are extremely expensive in terms of computational cost due to the three-dimensional nature of the calculations.
The investigation then switches to a data-driven approach where a machine learning algorithm driven by experimental data informs the standard turbulence models and is able to simulate cavitating flows accurately and efficiently.
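The bubble-collapse physics described above is classically captured by the Rayleigh-Plesset equation. The sketch below integrates a simplified form (no viscosity or surface tension) with a fixed-step RK4 scheme and compares the computed collapse time against the textbook Rayleigh estimate. This is a standard single-bubble illustration, not the URANS methodology of the dissertation, and the initial radius and pressures are arbitrary.

```python
import numpy as np

rho = 1000.0          # water density (kg/m^3)
dp = 101325.0         # p_inf - p_v driving the collapse (Pa)
R0 = 1e-3             # initial bubble radius (m)

def rhs(state):
    """Simplified Rayleigh-Plesset: R*Rdd + 1.5*Rd**2 = -dp/rho (inviscid, no tension)."""
    R, Rd = state
    Rdd = (-dp / rho - 1.5 * Rd**2) / R
    return np.array([Rd, Rdd])

state = np.array([R0, 0.0])
dt, t = 1e-8, 0.0
while state[0] > 0.1 * R0:            # integrate until the radius falls to 10% of R0
    k1 = rhs(state)
    k2 = rhs(state + 0.5 * dt * k1)
    k3 = rhs(state + 0.5 * dt * k2)
    k4 = rhs(state + dt * k3)
    state = state + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    t += dt

t_rayleigh = 0.915 * R0 * np.sqrt(rho / dp)   # classical Rayleigh collapse time
```

The integrated collapse time lands within a few percent of the Rayleigh estimate, and the runaway wall velocity near the end of the collapse hints at why the emitted shocks are so damaging to propellers and pump blades.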
6

An evaluation of a data-driven approach to regional scale surface runoff modelling

Zhang, Ruoyu 03 August 2018 (has links)
Modelling surface runoff can benefit many fields, such as agriculture planning, flood and drought risk assessment, and water resource management. In this study, we built a data-driven model that can reproduce monthly surface runoff on a 4-km grid network covering 13 watersheds in the Chesapeake Bay area. We used a random forest algorithm to build the model, with monthly precipitation, temperature, land cover, and topographic data as predictors and monthly surface runoff generated by the SWAT hydrological model as the response. A sub-model was developed for each of the 12 monthly surface runoff estimates, independent of one another. Accuracy statistics and variable importance measures from the random forest algorithm reveal that precipitation was the most important variable in the model, but that including climatological data from multiple months as predictors significantly improves model performance. Using 3-month climatological data, land cover, and DEM derivatives from 40% of the 4-km grid cells as the training dataset, our model successfully predicted surface runoff for the remaining 60% of the cells (mean R2 (RMSE) across the 12 monthly models of 0.83 (6.60 mm)). The lowest R2 was associated with the model for August, when surface runoff is at its lowest for the year. Among the studied watersheds, the highest predictive errors were found in the one with the greatest topographic complexity, for which the model tended to underestimate surface runoff. For the other 12 watersheds, the data-driven model produced smaller and more spatially consistent predictive errors. / Master of Science / Surface runoff data can be valuable to many fields, such as agriculture planning, water resource management, and flood and drought risk assessment. The traditional approach to acquiring surface runoff data is to run hydrological models.
However, running such models requires advanced knowledge of the watersheds and of computational technologies. In this study, we build a statistical model that can reproduce monthly surface runoff on a 4-km grid covering 13 watersheds in the Chesapeake Bay area. The model uses publicly accessible climate, land cover, and topographic datasets as predictors, and monthly surface runoff from the SWAT model as the response. We develop one model for each of the 12 months, independent of one another. To test whether the model can generalize surface runoff across the entire study area, we use 40% of the grid data as the training sample and the remainder for validation. The accuracy statistics (an annual mean R2 of 0.83 and RMSE of 6.60 mm) show that our model can accurately reproduce monthly surface runoff in the study area. The statistics for the August model are not as satisfying as those for the other months; a possible reason is that August has the lowest surface runoff of the year, so there is not enough variation for the algorithm to distinguish minor differences in the response during model building. When applying the model to watersheds with steep terrain, the results should be treated with caution, as errors there may be relatively large.
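The shape of this workflow (random forest regression, a 40/60 train-validation split over grid cells, R2 scoring, and feature importances) can be sketched with scikit-learn on synthetic data. The predictors, the "SWAT-like" response formula, and every coefficient below are invented for illustration; the real study used separate monthly sub-models and multi-month climatological predictors.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(7)
n = 3000
# Synthetic grid-cell predictors: precipitation (mm), temperature (C), slope, forest fraction
precip = rng.gamma(4.0, 25.0, n)
temp = rng.normal(12.0, 8.0, n)
slope = rng.uniform(0.0, 30.0, n)
forest = rng.uniform(0.0, 1.0, n)
X = np.column_stack([precip, temp, slope, forest])
# Invented "SWAT-like" response: runoff grows with rain, shrinks with forest cover
runoff = 0.4 * precip * (1 - 0.5 * forest) + 0.3 * slope + rng.normal(0, 5.0, n)

train = rng.random(n) < 0.4                       # 40% of cells for training, as in the study
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X[train], runoff[train])
r2 = r2_score(runoff[~train], model.predict(X[~train]))
importance = model.feature_importances_           # precipitation should dominate
```

On this synthetic stand-in, precipitation carries the largest feature importance and the held-out R2 is high, mirroring the qualitative findings reported in the abstract.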
7

Simulations and data-based models for electrical conductivities of graphene nanolaminates

Rothe, Tom 13 August 2021 (has links)
Graphene-based conductor materials (GCMs) consist of stacked and decoupled layers of graphene flakes and could potentially transfer graphene's outstanding material properties, such as its exceptional electrical conductivity, to the macro scale, where alternatives to heavy and expensive metallic conductors are desperately needed. To reach super-metallic conductivity, however, the electrical conductivity must be systematically optimized with respect to the structural and physical input parameters. A recent trend in process and material optimization is data-based modelling, which uses data-science methods to quickly identify and abstract information and relationships from the available data. In this work, such data-based models for the conductivity of a real GCM thin-film sample are built on data generated with a specially improved and extended version of the network simulation approach of Rizzi et al. [1, 2, 3]. Appropriate methods for creating data-based models of GCMs are introduced, and typical challenges in the modelling process are addressed, so that data-based models for other properties of GCMs can easily be created as soon as sufficient data are accessible. Combined with experimental measurements by Slawig et al.
[4], the created data-based models allow for a coherent and comprehensive description of the thin-films' electrical parameters across several length scales.

Contents:
List of Figures
List of Tables
Symbol Directory
List of Abbreviations
1 Introduction
2 Simulation approaches for graphene-based conductor materials
  2.1 Traditional simulation approaches for GCMs
    2.1.1 Analytical model for GCMs
    2.1.2 Finite element method simulations for GCMs
  2.2 A network simulation approach for GCMs
    2.2.1 Geometry generation
    2.2.2 Electrical network creation
    2.2.3 Contact and probe setting
    2.2.4 Conductivity computation
    2.2.5 Results obtained with the network simulation approach
  2.3 An improved implementation for the network simulation
    2.3.1 Rizzi's implementation of the network simulation approach
    2.3.2 A network simulation tool for parameter studies
    2.3.3 Extending the network simulation approach for anisotropy investigations and multilayer flakes
3 Data-based material modelling
  3.1 Introduction to data-based modelling
  3.2 Data-based modelling in material science
  3.3 Interpretability of data-based models
  3.4 The data-based modelling process
    3.4.1 Preliminary considerations
    3.4.2 Data acquisition
    3.4.3 Preprocessing the data
    3.4.4 Partitioning the dataset
    3.4.5 Training the model
    3.4.6 Model evaluation
    3.4.7 Real-world applications
  3.5 Regression estimators
    3.5.1 Mathematical introduction to regression
    3.5.2 Regularization and ridge regression
    3.5.3 Support Vector Regression
    3.5.4 Introducing non-linearity through kernels
4 Data-based models for a real GCM thin-film
  4.1 Experimental measurements
  4.2 Simulation procedure
  4.3 Data generation
  4.4 Creating data-based models
    4.4.1 Quadlinear interpolation as benchmark model
    4.4.2 KR, KRR and SVR
    4.4.3 Enlarging the dataset
    4.4.4 KR, KRR and SVR on the enlarged training dataset
  4.5 Application to the GCM sample
5 Conclusion and Outlook
  5.1 Conclusion
  5.2 Outlook
Acknowledgements
Statement of Authorship
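One of the regression estimators named in this record, kernel ridge regression (KRR), has a compact closed-form solution that can be shown in pure NumPy. The data below are a synthetic stand-in for simulated conductivity values over two input parameters; the kernel width, regularization strength, and target function are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic stand-in for simulation data: response over two normalized input parameters
X = rng.uniform(0, 1, size=(200, 2))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + rng.normal(0, 0.05, 200)

def rbf_kernel(A, B, gamma=10.0):
    """Gaussian (RBF) kernel matrix between the rows of A and the rows of B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

lam = 1e-3                                   # ridge regularization strength
K = rbf_kernel(X, X)
alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)   # KRR dual coefficients

X_test = rng.uniform(0, 1, size=(100, 2))
y_test = np.sin(3 * X_test[:, 0]) + X_test[:, 1] ** 2
y_pred = rbf_kernel(X_test, X) @ alpha       # predict at unseen inputs
rmse = np.sqrt(np.mean((y_pred - y_test) ** 2))
```

The kernel introduces the non-linearity (section 3.5.4 in the contents) while the ridge term keeps the solve well conditioned; on this smooth synthetic target the held-out RMSE sits near the noise level.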
8

[en] AUTOMFIS: A FUZZY SYSTEM FOR MULTIVARIATE TIME SERIES FORECAST / [pt] AUTOMFIS: UM SISTEMA FUZZY PARA PREVISÃO DE SÉRIES TEMPORAIS MULTIVARIADAS

JULIO RIBEIRO COUTINHO 08 April 2016 (has links)
[en] A time series is the most common representation for the evolution of a variable over time. In a time series forecasting problem, a model is fitted to obtain future values of the series, under the assumption that the information needed to do so is contained in the series' own history. Since the phenomena represented by time series do not always exist in isolation, the model can be enriched with the historical values of other, related time series. The structure formed by several series of the same interval and dimension occurring in parallel is called a multivariate time series. This dissertation proposes a methodology for generating a Fuzzy Inference System (FIS) for multivariate time series forecasting from historical data, aiming at good performance both in forecasting accuracy and in rule-base interpretability, with the goal of extracting knowledge about the relationships between the modeled series. To this end, several aspects of the operation and construction of such a FIS are addressed, taking into account its complexity and semantic clarity. The model is evaluated by applying it to the multivariate time series of the complete M3 competition database and comparing its accuracy with that of the participating methods. In addition, its knowledge-extraction possibilities are explored through two case studies built from publicly available real data. The results confirm that AutoMFIS is capable of modelling multivariate time series satisfactorily and of extracting meaningful knowledge from the databases.
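The mechanics of a fuzzy rule base can be shown with a minimal zero-order Takagi-Sugeno sketch; this is a generic stand-in, not AutoMFIS itself, and the membership functions, rule base, and consequents are invented. Two lagged inputs (e.g. the last values of two related series) are fuzzified into "low"/"high" sets, and the forecast is the weighted average of the rule consequents.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-12), (c - x) / (c - b + 1e-12)), 0.0)

# Two fuzzy sets per (normalized) input; the triangles overlap so activations blend.
sets = {"low": (-1.0, 0.0, 1.0), "high": (0.0, 1.0, 2.0)}

rules = [  # (set for series-1 lag, set for series-2 lag, consequent forecast)
    ("low",  "low",  0.1),
    ("low",  "high", 0.4),
    ("high", "low",  0.6),
    ("high", "high", 0.9),
]

def forecast(x1, x2):
    """Fire all rules (product t-norm) and defuzzify by weighted average."""
    w = np.array([tri(x1, *sets[a]) * tri(x2, *sets[b]) for a, b, _ in rules])
    c = np.array([r[2] for r in rules])
    return float(w @ c / (w.sum() + 1e-12))
```

The interpretability claim of the abstract lives in the rule table: each row reads as a plain if-then statement about the lagged values, and intermediate inputs activate several rules at once, e.g. `forecast(0.5, 0.5)` blends all four consequents equally.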
9

Reliable Information Exchange in IIoT : Investigation into the Role of Data and Data-Driven Modelling

Lavassani, Mehrzad January 2018 (has links)
The concept of the Industrial Internet of Things (IIoT) is the tangible building block for the realisation of the fourth industrial revolution. It should improve the productivity, efficiency and reliability of industrial automation systems, leading to revenue growth in industrial scenarios. IIoT needs to encompass various disciplines and technologies to constitute an operable and harmonious system. One essential requirement for a system to exhibit such behaviour is reliable exchange of information. In industrial automation, the information life-cycle starts at the field level, with data collected by sensors, and ends at the enterprise level, where that data is processed into knowledge for business decision making. In IIoT, the process of knowledge discovery is expected to start in the lower layers of the automation hierarchy and to cover the data exchange between connected smart objects performing collaborative tasks. This thesis aims to assist the comprehension of the processes for information exchange in IIoT-enabled industrial automation: in particular, how reliable exchange of information can be performed by communication systems at the field level given an underlying wireless sensor technology, and how data analytics can complement the processes at various levels of the automation hierarchy. Furthermore, this work explores how an IIoT monitoring system can be designed and developed. Communication reliability is addressed by proposing a redundancy-based medium access control protocol for mission-critical applications and analysing its performance with regard to real-time and deterministic delivery. The importance of data, and the benefits of data analytics at various levels of the automation hierarchy, are examined by suggesting data-driven methods for visualisation, centralised system modelling and distributed data-stream modelling.
The design and development of an IIoT monitoring system are addressed by proposing a novel three-layer framework that incorporates wireless sensor, fog, and cloud technologies. Moreover, an IIoT testbed system is developed to realise the proposed framework. The outcome of this study suggests that redundancy-based mechanisms improve communication reliability; however, in the context of IIoT they can also introduce drawbacks such as poor link utilisation and limited scalability. Data-driven methods result in enhanced readability of visualisations and reduce the need for ground truth in system modelling. The results illustrate that distributed modelling can lessen the negative effect of redundancy-based mechanisms on link utilisation by reducing the up-link traffic. Mathematical analysis reveals that introducing a fog layer in the IIoT framework removes the single point of failure and enhances scalability, while meeting the latency requirements of the monitoring application. Finally, the experimental results show that the IIoT testbed works adequately and can serve the future development and deployment of IIoT applications. / SMART (Smarta system och tjänster för ett effektivt och innovativt samhälle)
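The reliability-versus-utilisation trade-off of redundancy-based delivery can be made concrete with a toy calculation: if each copy of a packet arrives independently with probability p, sending k copies drives the loss probability down geometrically, but every extra copy occupies another transmission slot. The link probability below is an invented example, not a figure from the thesis.

```python
def delivery_prob(p_link: float, k: int) -> float:
    """Probability that at least one of k independent copies of a packet arrives."""
    return 1.0 - (1.0 - p_link) ** k

# Illustrative 90%-reliable link: each extra copy multiplies the residual loss
# probability by 0.1 (0.9 -> 0.99 -> 0.999 ...), but also multiplies the channel
# occupancy, which is exactly the link-utilisation drawback noted above.
p = 0.9
gains = [(k, delivery_prob(p, k)) for k in (1, 2, 3, 4)]
```

Each additional "nine" of reliability costs a full extra slot per packet, which is why distributed modelling that cuts up-link traffic can offset the redundancy overhead.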
10

Mobile systems for monitoring Parkinson's disease

Memedi, Mevludin January 2014 (has links)
A challenge for the clinical management of Parkinson's disease (PD) is the large within- and between-patient variability in symptom profiles, as well as the emergence of motor complications, which represent a significant source of disability in patients. This thesis deals with the development and evaluation of methods and systems for supporting the management of PD by using repeated measures, consisting of subjective assessments of symptoms and objective assessments of motor function through fine motor tests (spirography and tapping), collected by means of a telemetry touch-screen device. One aim of the thesis was to develop methods for objective quantification and analysis of the severity of the motor impairments represented in spiral drawings and tapping results. This was accomplished by first quantifying the digitized movement data with time series analysis and then using them in data-driven modelling to automate the assessment of symptom severity. The objective measures were then analysed with respect to subjective assessments of motor conditions. Another aim was to develop a method providing information content comparable to that of clinical rating scales by combining subjective and objective measures into composite scores, using time series analysis and data-driven methods. The scores represent six symptom dimensions and an overall test score reflecting the global health condition of the patient. In addition, the thesis presents the development of a web-based system providing a visual representation of symptoms over time, allowing clinicians to remotely monitor the symptom profiles of their patients. The quality of the methods was assessed by reporting different metrics of validity, reliability and sensitivity to treatment interventions and to natural PD progression over time.
Results from two studies demonstrated that the methods developed for the fine motor tests had good metrics, indicating that they are appropriate for quantitatively and objectively assessing the severity of motor impairments in PD patients. The fine motor tests captured different symptoms: spiral drawing impairment and tapping accuracy related to dyskinesias (involuntary movements), whereas tapping speed related to bradykinesia (slowness of movement). A longitudinal data analysis indicated that the six symptom dimensions and the overall test score contained important elements of the information in the clinical scales and can be used to measure the effects of PD treatment interventions and of disease progression. A usability evaluation of the web-based system showed that the information presented in the system was comparable to qualitative clinical observations, and the system was recognized as a tool that will assist in the management of patients.
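The kind of temporal quantification described for the tapping test can be illustrated in a few lines: from a stream of tap timestamps, recover the inter-tap intervals, then summarize speed (related to bradykinesia) and variability. The synthetic timestamps and thresholds below are invented for illustration, not the thesis's actual features or data.

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic tap timestamps (s): mean inter-tap interval of 0.35 s with mild
# variability, loosely mimicking a touch-screen tapping test.
intervals = rng.normal(0.35, 0.03, 60)
taps = np.cumsum(intervals)

iti = np.diff(taps)                      # inter-tap intervals recovered from timestamps
mean_iti = iti.mean()
cv_iti = iti.std() / mean_iti            # coefficient of variation: a simple variability score
tap_rate = 1.0 / mean_iti                # taps per second: a simple speed score
```

Simple summary statistics like these become the inputs to the data-driven models that map raw test traces onto symptom-severity scores.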
