51
Rack-based Data Center Temperature Regulation Using Data-driven Model Predictive Control / Shi, Shizhu, January 2019
Due to the rapid and prosperous development of information technology, data centers are widely used in every aspect of social life, such as industry, the economy, and even our daily lives. This work develops a data-driven, model-based model predictive control (MPC) scheme to regulate temperature for a class of single-rack data centers (DCs). An auto-regressive exogenous (ARX) model is identified for our DC system using partial least squares (PLS) to predict the behavior of the multi-input-single-output (MISO) thermal system. An MPC controller is then designed to control the temperature inside the IT rack based on the identified ARX model. Moreover, fuzzy c-means (FCM) is employed to cluster the measured data set. Based on the clustered data sets, PLS is adopted to identify multiple locally linear ARX models, which are combined with appropriate weights in order to capture the behavior of the highly nonlinear thermal system inside the IT rack. The effectiveness of the proposed method is illustrated through experiments on our single-rack DC, and it is also compared with proportional-integral (PI) control. / Thesis / Master of Applied Science (MASc)
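As a rough illustration of the kind of identification step described above, the sketch below fits a MISO ARX model to synthetic input-output data using scikit-learn's PLSRegression and checks its one-step-ahead predictions. The data, lag orders, and variable names are assumptions for illustration only, not the thesis's actual system, and the MPC and FCM clustering steps are omitted.

```python
# Minimal sketch: identify a MISO ARX model with PLS and use it for one-step prediction.
# Data, model orders, and variable names are illustrative, not taken from the thesis.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
N = 500
u = rng.standard_normal((N, 2))        # two inputs, e.g. fan speed and IT load (hypothetical)
y = np.zeros(N)                        # single output, e.g. rack inlet temperature
for k in range(2, N):                  # synthetic plant standing in for the real DC rack
    y[k] = (0.7 * y[k-1] - 0.1 * y[k-2]
            + 0.5 * u[k-1, 0] + 0.3 * u[k-1, 1]
            + 0.01 * rng.standard_normal())

na, nb = 2, 1                          # ARX orders (assumed)
rows = range(max(na, nb), N)
Phi = np.array([np.concatenate((y[k-na:k][::-1], u[k-nb:k].ravel())) for k in rows])
Y = y[max(na, nb):]

pls = PLSRegression(n_components=3)    # PLS handles collinear regressors
pls.fit(Phi, Y)
y_hat = pls.predict(Phi).ravel()       # one-step-ahead predictions
print("prediction RMSE:", np.sqrt(np.mean((y_hat - Y) ** 2)))
```

An MPC layer would then repeatedly optimize the future input sequence over a prediction horizon subject to the identified model, and a multi-model version would blend several such local ARX models with FCM-derived weights; neither step is shown here.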
52
Big-Data Driven Optimization Methods with Applications to LTL Freight Routing / Tamvada, Srinivas, January 2020
We propose solution strategies for hard Mixed Integer Programming (MIP) problems, with a focus on distributed parallel MIP optimization. Although our proposals are inspired by the Less-than-truckload (LTL) freight routing problem, they are more generally applicable to hard MIPs from other domains. We start by developing an Integer Programming model for the LTL freight routing problem and present a novel heuristic for solving the model in a reasonable amount of time on large LTL networks. Next, we identify some adaptations to MIP branching strategies that are useful for achieving improved scaling upon distribution when the LTL routing problem (or other hard MIPs) is solved using parallel MIP optimization. Recognizing that our model represents a pseudo-Boolean optimization (PBO) problem, we leverage solution techniques used by PBO solvers to develop a CPLEX-based look-ahead solver for LTL routing and other PBO problems. Our focus once again is on achieving improved scaling upon distribution. We also analyze a technique for implementing subtree parallelism during distributed MIP optimization. We believe that our proposals represent a significant step towards solving big-data driven optimization problems (such as the LTL routing problem) in a more efficient manner. / Thesis / Doctor of Philosophy (PhD) / Less-than-truckload (LTL) freight transportation is a vital part of Canada's economy, with revenues running into billions of dollars and a cascading impact on many other industries. LTL operators often have to deal with large volumes of shipments, unexpected changes in traffic conditions, and uncertainty in demand patterns. In an industry that already has low profit margins, it is therefore vitally important to make good routing decisions without expending a lot of time.

The optimization of such LTL freight networks often results in complex big-data driven optimization problems. In addition to the challenge of finding optimal solutions for these problems, analysts often have to deal with the complexities of big-data driven inputs. In this thesis we develop several solution strategies for solving the LTL freight routing problem, including an exact model, novel heuristics, and techniques for solving the problem efficiently on a cluster of computers.

Although the techniques we develop are inspired by LTL routing, they are more generally applicable for solving big-data driven optimization problems from other domains. Experiments conducted over the years in consultation with industry experts indicate that our proposals can significantly improve solution quality and reduce time to solution. Furthermore, our proposals open up interesting avenues for future research.
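To make concrete what a "hard MIP" looks like in this setting, the sketch below gives a heavily simplified, generic arc-flow routing formulation. The sets, costs, trailer capacity Q, and demands are assumed symbols, and this is not the thesis's actual LTL model.

```latex
% Illustrative (not the thesis's) arc-flow routing MIP on a network G=(N,A),
% commodities k in K with demand d_k from origin o_k to destination s_k,
% per-unit flow cost c_a, fixed trailer cost f_a, trailer capacity Q.
\begin{align*}
\min_{x,\,z}\quad & \sum_{a \in A} \Big( f_a z_a + \sum_{k \in K} c_a x_a^k \Big) \\
\text{s.t.}\quad
& \sum_{a \in \delta^+(i)} x_a^k - \sum_{a \in \delta^-(i)} x_a^k =
  \begin{cases} d_k & i = o_k \\ -d_k & i = s_k \\ 0 & \text{otherwise} \end{cases}
  && \forall i \in N,\; k \in K \\
& \sum_{k \in K} x_a^k \le Q\, z_a && \forall a \in A \\
& x_a^k \ge 0, \quad z_a \in \mathbb{Z}_{\ge 0} && \forall a \in A,\; k \in K
\end{align*}
```

In a pure pseudo-Boolean variant of such a model all decision variables are binary, which is what makes PBO solution techniques relevant to routing formulations of this kind.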
53
A Physically Informed Data-Driven Approach to Analyze Human Induced Vibration in Civil Structures / Kessler, Ellis Carl, 24 June 2021
With the rise of the Internet of Things (IoT) and smart buildings, new algorithms are being developed to understand how occupants are interacting with buildings via structural vibration measurements. These vibration-based occupant inference algorithms (VBOI) have been developed to localize footsteps within a building, to classify occupants, and to monitor occupant health. This dissertation will present a three-stage journey proposing a path forward for VBOI research based on physically informed data-driven models of structural dynamical systems.
The first part of this dissertation presents a method for extracting temporal gait parameters via underfloor accelerometers. The time between an occupant's consecutive steps can be measured using only structural vibration measurements, with accuracy similar to that of current gait analysis tools such as force plates and in-shoe pressure sensors. The benefit of this, and of other VBOI gait analysis algorithms, lies in their ease of use. Gait analysis is currently limited to clinical settings with specialized measurement systems; VBOI gait analysis, however, makes it possible to bring gait analysis to any building.
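The following is a minimal sketch of the general idea (not the dissertation's algorithm): footstep impacts appear as peaks in a floor acceleration signal, and the spacing between detected peaks gives the time between consecutive steps. The signal, sampling rate, and detection thresholds are fabricated for illustration.

```python
# Minimal sketch: estimate inter-step times from a (synthetic) floor acceleration signal.
# The signal, sampling rate, and detection thresholds are illustrative assumptions.
import numpy as np
from scipy.signal import find_peaks

fs = 1000.0                                   # sampling rate [Hz] (assumed)
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(3)
step_times = np.arange(0.5, 9.5, 0.55)        # synthetic steps every ~0.55 s
accel = 0.02 * rng.standard_normal(t.size)    # sensor noise
for ts in step_times:                         # each footstep: decaying 20 Hz burst
    idx = t >= ts
    accel[idx] += np.exp(-8 * (t[idx] - ts)) * np.sin(2 * np.pi * 20 * (t[idx] - ts))

peaks, _ = find_peaks(accel, height=0.3, distance=int(0.3 * fs))
inter_step = np.diff(t[peaks])                # temporal gait parameter: time between steps
print("mean step interval [s]:", inter_step.mean().round(3))
```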
VBOI algorithms often make simplifying assumptions about the dynamics of the building in which they operate. Through a calibration procedure, many VBOI algorithms can learn some system parameters. However, as demonstrated in the second part of this dissertation, some commonly made assumptions oversimplify phenomena present in civil structures such as attenuation, reflections, and dispersion. A series of experimental and theoretical investigations shows that three common assumptions made in VBOI algorithms are each unable to account for at least one of these phenomena, leading to algorithms that are accurate only under certain conditions.
The final part of this dissertation introduces a physically informed data-driven modelling technique which could be used in VBOI to create a more complete model of a building. Continuous residue interpolation (CRI) takes frequency response function (FRF) measurements at a discrete number of testing locations and creates a predictive model with continuous spatial resolution. The fitted CRI model can be used to simulate the response at any location to an input at any other location. An example of using CRI for VBOI localization is shown. / Doctor of Philosophy / Vibration-based occupant inference (VBOI) algorithms are an emerging area of research in smart buildings instrumented with vibration sensors. These algorithms use vibration measurements of the building's structure to learn something about the occupants inside the building. For example, the vibration of a floor in response to a person's footstep could be used to estimate where that person is without the need for any line-of-sight sensors like cameras or motion sensors. The storyline of this dissertation will make three stops:
The first is the demonstration of a VBOI algorithm for monitoring occupant health.
The second is an investigation of some assumptions commonly made while developing VBOI algorithms, seeking to shed light on when they lead to accurate results and when they should be used with caution.
The third, and final, is the development of a data-driven modelling method which uses knowledge about how systems vibrate to build as detailed a model of the system as possible.
Current VBOI algorithms have demonstrated the ability to accurately infer a range of information about occupants through vibration measurements. This is shown with a varied literature of localization algorithms, as well as a growing number of algorithms for performing gait analysis. Gait analysis is the study of how people walk, and its correlation to their health. The vibration-based gait analysis procedure in this work demonstrates extracting distributions of temporal gait parameters, like the time between steps.
However, many current VBOI algorithms make significant simplifying assumptions about the dynamics of civil structures. Experimental and theoretical investigations of some of these assumptions show that while all assumptions are accurate in certain situations, the dynamics of civil structures are too complex to be completely captured by these simplified models.
The proposed path forward for VBOI algorithms is to employ more sophisticated data-driven modelling techniques. Data-driven models use measurements from the system to build a model of how the system would respond to new inputs. The final part of this dissertation is the development of a novel data-driven modelling technique that could be useful for VBOI. The new method, continuous residue interpolation (CRI), uses knowledge of how systems vibrate to build a model of a vibrating system, not only at the locations which were measured, but over the whole system. This allows a relatively small amount of testing to be used to create a model of the entire system, which can in turn be used for VBOI algorithms.
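As a toy illustration of the idea behind residue-based interpolation (and explicitly not the dissertation's CRI algorithm), the sketch below synthesizes FRFs from assumed modal poles and location-dependent residues, then linearly interpolates the residues between tested locations to predict the FRF at an untested point. The poles, mode shapes, and locations are invented for illustration.

```python
# Toy illustration of residue interpolation (not the dissertation's CRI method):
# FRFs are modelled as H(w, x) = sum_k R_k(x) / (1j*w - lam_k) + c.c., and the residues
# R_k(x), known at a few tested locations x, are interpolated to a new location.
import numpy as np

w = 2 * np.pi * np.linspace(1, 50, 400)                    # frequency axis [rad/s]
lam = np.array([-0.5 + 2j*np.pi*8, -0.8 + 2j*np.pi*21])    # assumed modal poles
x_meas = np.array([0.1, 0.3, 0.6, 0.9])                    # tested locations along a beam (assumed)
# assumed mode shapes -> residues at the tested locations
R_meas = np.array([[np.sin(np.pi * x), np.sin(2 * np.pi * x)] for x in x_meas])

def frf(res, poles, w):
    """Synthesize an FRF from residues and poles (plus complex-conjugate terms)."""
    s = 1j * w[:, None]
    return np.sum(res / (s - poles) + np.conj(res) / (s - np.conj(poles)), axis=1)

x_new = 0.45                                               # untested location
R_new = np.array([np.interp(x_new, x_meas, R_meas[:, k]) for k in range(len(lam))])
H_new = frf(R_new, lam, w)                                 # predicted FRF at the untested point
print("predicted |H| peak:", np.abs(H_new).max().round(3))
```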
54
Fusing Modeling and Testing to Enhance Environmental Testing Approaches / Devine, Timothy Andrew, 09 July 2019
A proper understanding of the dynamics of a mechanical system is crucial to ensure the highest levels of performance. This understanding is frequently developed through modeling and testing of components. Modeling provides a cost-effective method for rapidly developing knowledge of the system; however, the model is incapable of accounting for fluctuations that occur in physical spaces. Testing, when performed properly, provides a near-exact understanding of how a part or assembly functions, but can be expensive both fiscally and temporally.
Often, practitioners of the two disciplines work in parallel, never bothering to intersect with the other group. Further advancement in ways to fuse modeling and testing together can produce a more comprehensive understanding of dynamic systems while remaining inexpensive in terms of computation, financial cost, and time. The goal of the presented work is therefore to develop ways to merge the two branches to include test data in models of operational systems. This is done through a series of analytical and experimental tasks examining the boundary conditions of various systems.
The first avenue explored was an attempt at modeling unknown boundary conditions from an operational environment by modeling the same system in known configurations using a controlled environment, such as what is seen in a laboratory test. An analytical beam was studied under applied environmental loading with grounding stiffnesses added to simulate an operational condition, and an attempt was made to match the response using a beam with free boundaries and a reduced number of excitation points. Due to the properties of the inverse-problem approach taken, the responses of the two systems matched at control locations; at non-control locations, however, the responses showed a large degree of variation. From the mismatch in mechanical impedance, it is apparent that improperly representing boundary conditions can have drastic effects on the accuracy of models and re-creation tests.
With the progression now directed towards modeling and testing of boundary conditions, methods were explored to combine the two approaches working in harmony. The second portion of this work focuses on modeling an unknown boundary connection using a collection of similar, testable boundary conditions to parametrically interpolate to the unknown configuration. This was done by using data-driven models of the known systems as the interpolating functions, with system boundary stiffness being the varied parameter. This approach yielded parametric model responses nearly identical to the original system responses for analytical systems and showed early signs of promise for an experimental beam.
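A toy version of this parametric-interpolation idea is sketched below (not the thesis's models or data): frequency responses of a simple grounded oscillator are computed at a few known grounding stiffnesses and interpolated over the stiffness parameter to approximate an untested configuration.

```python
# Toy version of the parametric-interpolation idea (not the thesis's models or data):
# frequency responses are known for a few grounding stiffnesses k and interpolated to a new k.
import numpy as np

m, c = 1.0, 0.8                                # assumed mass and damping
w = 2 * np.pi * np.linspace(0.1, 5, 300)       # frequency axis [rad/s]

def receptance(k):
    """FRF x/F of a single-DOF system grounded through stiffness k."""
    return 1.0 / (-m * w**2 + 1j * c * w + k)

k_known = np.array([50.0, 100.0, 200.0])       # "testable" boundary stiffnesses
H_known = np.array([receptance(k) for k in k_known])

k_new = 140.0                                  # unknown configuration to be predicted
H_interp = np.array([np.interp(k_new, k_known, H_known[:, i].real)
                     + 1j * np.interp(k_new, k_known, H_known[:, i].imag)
                     for i in range(w.size)])
H_true = receptance(k_new)
err = np.linalg.norm(H_interp - H_true) / np.linalg.norm(H_true)
print("relative interpolation error:", round(float(err), 3))
```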
After the two conducted studies, the potential for extending a parametric data-driven model approach to other systems is discussed. In addition, improvements to the approach are discussed, as well as the benefits it brings. / Master of Science / A proper understanding of the dynamics of a mechanical system in a severe environment is crucial to ensure the highest levels of performance. This understanding is frequently developed through modeling and testing of components. Modeling provides a cost-effective method for rapidly developing knowledge of the system; however, the model is incapable of accounting for fluctuations that occur in physical spaces. Testing, when performed properly, provides a near-exact understanding of how a part or assembly functions; however, it can be expensive both fiscally and temporally. Often, practitioners of the two disciplines work in parallel, never bothering to intersect with the other group and favoring one approach over the other for various reasons. Further advancement in ways to fuse modeling and testing together can produce a more comprehensive understanding of dynamic systems subject to environmental excitation while remaining inexpensive in terms of computation, financial cost, and time.
Due to this, the presented work aims to develop ways to merge the two branches to include test data in models of operational systems. This is done through a series of analytical and experimental tasks examining the boundary conditions of various systems, at first attempting to replicate the system response using inverse approaches. This is then followed by modeling boundary stiffnesses using data-driven and parametric modeling approaches. The validity and potential impact of these methods are also discussed.
55
A Case Study of Crestwood Primary School: Organizational Routines Implemented For Data-Driven Decision Making / Williams, Kimberly Graybeal, 30 October 2014
The research study investigated how organizational routines influenced classroom and intervention instruction in a primary school. Educators have used student data for decades, but they continue to struggle with the best way to use data to influence instruction. The historical overview of the research highlighted the context of data use from the Effective Schools movement through the No Child Left Behind Act, noting the progression of emphasis placed on student data results. While numerous research studies have focused on the use of data, the National Center for Educational Evaluation and Regional Assistance (2009) reported that existing research on the use of data to make instructional decisions does not yet provide conclusive evidence of what practices work to improve student achievement.
A descriptive case study methodology was employed to investigate the educational phenomenon of organizational routines implemented for data-driven decision making to influence classroom and intervention instruction. The case study examined a school that faced the macrolevel pressures of school improvement. The study triangulated data from surveys, interviews, and document analysis in an effort to reveal common themes about organizational routines for data-driven decision making.
The study participants identified 14 organizational routines as influencing instruction. The interview questions focused on the common themes of (a) curriculum alignment, (b) common assessments, (c) guided reading levels, (d) professional learning communities, and (e) acceleration plans. The survey respondents and interview participants explained how the organizational routines facilitated the use of data by providing (a) focus and direction, (b) student-centered instruction, (c) a focus on student growth, (d) collaboration and teamwork, (e) flexible grouping of students, and (f) teacher reflection and ownership of all students. Challenges and unexpected outcomes of the organizational routines for data-driven decision making were also discussed. The challenges with the most references included (a) time, (b) too much data, (c) data with conflicting information, (d) the pacing guide, and (e) changing teacher attitudes and practices. Ultimately, a data-driven culture was cultivated within the school that facilitated instructional adjustments, resulting in increased academic achievement. / Ed. D.
56
Organ Viability Assessment in Transplantation based on Data-driven Modeling / Lan, Qing, 03 March 2020
Organ transplantation is one of the most important and effective solutions for saving end-stage patients, who have one or more critical organ failures. However, the supply of organs for transplantation is inadequate to meet demand. Even worse, the lack of accurate non-invasive assessment methods wastes 20% of donor organs every year. Currently, the most frequently used organ assessment methods are visual inspection and biopsy. Yet both methods are subjective: the assessment accuracy depends on the evaluator's experience. Moreover, repeated biopsies will potentially damage the organs. To reduce the waste of donor organs, online, non-invasive, and quantitative organ assessment methods are greatly needed.
Organ viability assessment is a challenging problem for four reasons: 1) there are no universally accepted guidelines or procedures for surgeons to quantitatively assess organ viability; 2) there is no easily deployed, non-invasive biological in situ data to correlate with organ viability; 3) organ viability is difficult to model because of heterogeneity among organs; 4) both visual inspection and biopsy can be applied only at the present time, and how to forecast the viability of similar-but-non-identical organs at a future time remains an open question.
Motivated by the challenges, the overall objective of this dissertation is to develop online non-invasive and quantitative assessment methods to predict and forecast the organ viability. As a result, four data-driven modeling research tasks are investigated to achieve the overall objective:
1) Quantitative and qualitative models are used to jointly predict the number of dead cells and the liver viability based on features extracted from biopsy images. This method can quantitatively assess the organ viability, which could be used to validate the biopsy results from pathologists to increase the evaluation accuracy.
2) A multitask learning logistic regression model is applied to assess liver viability, using principal component analysis to extract infrared image features and quantify the correlation between liver viability and spatial infrared imaging data; a simplified sketch of this kind of pipeline is given after this list. This non-invasive online assessment method can evaluate organ viability without physical contact, reducing the risk of damaging the organs.
3) A spatial-temporal smooth variable selection method is conducted to improve the liver viability prediction accuracy by considering both spatial and temporal effects from the infrared images without feature engineering. In addition, it provides medical interpretation based on variable selection to highlight the most significant regions on the liver resulting in viability loss.
4) A multitask general path model is implemented to forecast the heterogeneous kidney viability based on limited historical data by learning the viability loss paths of each kidney during preservation. The generality of this method is validated by tissue deformation forecasting in needle biopsy process to potentially improve the biopsy accuracy.
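The sketch below, referenced in task 2 above, illustrates the general shape of such an infrared-image pipeline on synthetic data: principal-component features of image frames feed a viability classifier. A plain single-task logistic regression stands in for the multitask formulation, and all data, dimensions, and labels are fabricated for illustration.

```python
# Simplified sketch of an infrared-image viability pipeline: PCA features + logistic regression.
# Synthetic data; plain (single-task) logistic regression stands in for the multitask model.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_frames, h, wpx = 200, 16, 16                 # small synthetic "infrared" frames
labels = rng.integers(0, 2, n_frames)          # 1 = viable, 0 = not viable (fabricated)
frames = rng.normal(30.0, 0.5, (n_frames, h, wpx))
frames[labels == 1, 4:12, 4:12] += 1.5         # viable organs assumed slightly warmer centrally

X = frames.reshape(n_frames, -1)
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)

pca = PCA(n_components=10).fit(X_tr)           # spatial features via principal components
clf = LogisticRegression(max_iter=1000).fit(pca.transform(X_tr), y_tr)
print("held-out accuracy:", clf.score(pca.transform(X_te), y_te))
```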
In summary, the proposed data-driven methods can predict and forecast organ viability without damaging the organ. As a result, the increased utilization rate of donor organs will benefit more end-stage patients by dramatically extending their life spans. / Doctor of Philosophy / Organ transplantation is the ultimate solution to save end-stage patients with one or more organ failures. However, the supply of organs for transplantation is inadequate to meet demand. Even worse, the lack of accurate and non-invasive viability assessment methods wastes 20% of donor organs every year. Currently, the most frequently used organ assessment methods are visual inspection and biopsy. Yet both methods are subjective: the assessment accuracy depends on the personal experience of the evaluator. Moreover, repeated biopsies will potentially damage the organs. As a result, online, non-invasive, and quantitative organ assessment methods are greatly needed. This is important because such methods will increase the organ utilization rate by saving more discarded organs that have transplantation potential.
The overall objective of this dissertation is to advance the knowledge on modeling organ viability by developing online non-invasive and quantitative methods to predict and forecast the viability of heterogeneous organs in transplantation. After an introduction in Chapter 1, four research tasks are investigated. In Chapter 2, quantitative and qualitative models jointly predicting porcine liver viability are proposed based on features from biopsy images to validate the biopsy results. In Chapter 3, a multi-task learning logistic regression model is proposed to assess the cross-liver viability by correlating liver viability with spatial infrared data validated by porcine livers. In Chapter 4, a spatial-temporal smooth variable selection is proposed to predict liver viability by considering both spatial and temporal correlations in modeling without feature engineering, which is also validated by porcine livers. In addition, the variable selection results provide medical interpretations by capturing the significant regions on the liver in predicting viability. In Chapter 5, a multitask general path model is proposed to forecast kidney viability validated by porcine kidney. This forecasting method is generalized to apply to needle biopsy tissue deformation case study with the objective to improve the needle insertion accuracy. Finally, I summarize the research contribution and discuss future research directions in Chapter 6. The proposed data-driven methods can predict and forecast organ viability without damaging the organ. As a result, the increased utilization rate of donor organs will benefit more patients by dramatically extending their life spans and bringing them back to normal daily activities.
57
Cross-Validation of Data-Driven Correction Reduced Order Modeling / Mou, Changhong, 03 October 2018
In this thesis, we develop a data-driven correction reduced order model (DDC-ROM) for numerical simulation of fluid flows. The general DDC-ROM involves two stages: (1) we apply ROM filtering (such as ROM projection) to the full order model (FOM) and construct the filtered ROM (F-ROM). (2) We use data-driven modeling to model the nonlinear interactions between resolved and unresolved modes, which solves the F-ROM's closure problem.
In the DDC-ROM, a linear or quadratic ansatz is used in the data-driven modeling step. In this thesis, we propose a new cubic ansatz. To get the unknown coefficients in our ansatz, we solve an optimization problem that minimizes the difference between the FOM data and the ansatz. We test the new DDC-ROM in the numerical simulation of the one-dimensional Burgers equation with a small diffusion coefficient. Furthermore, we perform a cross-validation of the DDC-ROM to investigate whether it can be successful in computational settings that are different from the training regime. / M.S. / Practical engineering and scientific problems often require the repeated simulation of unsteady fluid flows. In these applications, the computational cost of high-fidelity full-order models can be prohibitively high. Reduced order models (ROMs) represent efficient alternatives to brute force computational approaches. In this thesis, we propose a data-driven correction ROM (DDC-ROM) in which available data and an optimization problem are used to model the nonlinear interactions between resolved and unresolved modes. In order to test the new DDC-ROM's predictability, we perform its cross-validation for the one-dimensional viscous Burgers equation and different training regimes.
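In generic notation (assumed here, and not necessarily the thesis's exact operators), the quadratic Galerkin ROM with a data-driven correction can be written as follows; the correction coefficients are fit by least squares against closure data computed from FOM snapshots, and the thesis's cubic ansatz adds a third-order term in the same fashion.

```latex
% Generic form (notation assumed): Galerkin ROM coefficients a(t) with a data-driven correction.
\begin{align*}
\dot{a} &= A\,a + a^{\top} B\, a + \mathrm{Correction}(a), \\
\mathrm{Correction}(a) &\approx \tilde{A}\,a \quad \text{(linear ansatz)}, \qquad
\mathrm{Correction}(a) \approx \tilde{A}\,a + a^{\top} \tilde{B}\, a \quad \text{(quadratic ansatz)},
\end{align*}
% where \tilde{A} (and \tilde{B}) minimize, in the least-squares sense, the misfit between the
% ansatz and the exact closure term evaluated from FOM snapshot data; a cubic ansatz adds a
% third-order term in a.
```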
58
Data-driven Methods in Mechanical Model Calibration and Prediction for Mesostructured Materials / Kim, Jee Yun, 01 October 2018
Mesoscale design involving control of the material distribution pattern can create a statistically heterogeneous material system, which has shown increased adaptability to complex mechanical environments involving highly non-uniform stress fields. Advances in multi-material additive manufacturing can aid in this mesoscale design, providing voxel-level control of material properties. This vast freedom in the design space also unlocks possibilities for optimization of the material distribution pattern. The optimization problem can be divided into a forward problem focusing on accurate prediction and an inverse problem focusing on efficient search of the optimal design. In the forward problem, the physical behavior of the material can be modeled based on fundamental mechanics laws and simulated through finite element analysis (FEA). A major limitation in modeling is the unknown parameters in the constitutive equations that describe the constituent materials; determining these parameters via conventional single-material testing has proven to be insufficient, which necessitates novel and effective approaches to calibration.
A calibration framework based on Bayesian inference, which integrates data from simulations and physical experiments, has been applied to a study involving a mesostructured material fabricated by fused deposition modeling. The calibration results provide insights into what values these parameters converge to, as well as which material parameters the model output depends on most strongly, while accounting for sources of uncertainty introduced during the modeling process. Additionally, this statistical formulation is able to provide quick predictions of the physical system by implementing a surrogate and a discrepancy model. The surrogate model is meant to be a statistical representation of the simulation results, circumventing issues arising from computational load, while the discrepancy model is meant to account for the difference between the simulation output and physical experiments. In this thesis, this Bayesian calibration framework is applied to a material bending problem, where in-situ mechanical characterization data and FEA simulations based on constitutive modeling are combined to produce updated values of the unknown material parameters with uncertainty. / Master of Science / A material system obtained by applying a pattern of multiple materials has proven its adaptability to complex practical conditions. The layer-by-layer manufacturing process of additive manufacturing allows for this type of design because of its control over where material is deposited. This possibility then raises the question of how a multi-material system can be optimized in its design for a given application. In this research, we focus mainly on the problem of accurately predicting the response of the material when subjected to stimuli. Conventionally, simulations aided by finite element analysis (FEA) were relied upon for prediction; however, this also presents many issues, such as long run times and uncertainty in context-specific inputs of the simulation. We have instead adopted a framework using advanced statistical methodology able to combine both experimental and simulation data to significantly reduce run times as well as quantify the various uncertainties associated with running simulations.
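The surrogate-plus-discrepancy structure described above commonly follows a Kennedy-O'Hagan-style statistical model; a generic form with assumed symbols (not copied from the thesis) is:

```latex
% Common form of Bayesian model calibration with surrogate and discrepancy (symbols assumed):
\begin{align*}
y^{\mathrm{exp}}(x) &= \eta(x, \theta) + \delta(x) + \varepsilon,
  \qquad \varepsilon \sim \mathcal{N}(0, \sigma^2), \\
p(\theta, \delta, \sigma^2 \mid \mathcal{D}) &\propto
  p(\mathcal{D} \mid \theta, \delta, \sigma^2)\, p(\theta)\, p(\delta)\, p(\sigma^2),
\end{align*}
% where \eta(x,\theta) is the (surrogate-approximated) simulator at inputs x with unknown
% material parameters \theta, \delta(x) is the model discrepancy, and the posterior yields
% calibrated parameter values with quantified uncertainty.
```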
59
Combining Data-driven and Theory-guided Models in Ensemble Data Assimilation / Popov, Andrey Anatoliyevich, 23 August 2022
There once was a dream that data-driven models would replace their theory-guided counterparts. We have awoken from this dream. We now know that data cannot replace theory. Data-driven models still have their advantages, mainly in computational efficiency, but also in providing us with some special sauce that is unreachable by our current theories. This dissertation aims to provide a way in which both the accuracy of theory-guided models and the computational efficiency of data-driven models can be combined. This combination of theory-guided and data-driven models allows us to draw on ideas from a much broader set of disciplines, and can help pave the way for robust and fast methods. / Doctor of Philosophy / As an illustrative example, take the problem of predicting the weather. Typically a supercomputer will run a model several times to generate predictions a few days into the future. Sensors such as those on satellites will then pick up observations at a few points on the globe that are not representative of the whole atmosphere. These observations are combined, ``assimilated'', with the computer model predictions to create a better representation of our current understanding of the state of the earth. This predict-assimilate cycle is repeated every day and is called (sequential) data assimilation. The prediction step was traditionally performed by a computer model based on rigorous mathematics. With the advent of big data, many have wondered if models based purely on data would take over. This has not happened. This thesis is concerned with running traditional mathematical models alongside data-driven models in the prediction step, and then building a theory in which both can be used in data assimilation at the same time, in order to avoid a drop in accuracy while decreasing computational cost.
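To make the predict-assimilate cycle concrete, a standard stochastic ensemble Kalman filter iteration is written below in generic notation; this is the textbook form, not the dissertation's specific hybrid theory-guided/data-driven formulation.

```latex
% A standard (stochastic) ensemble Kalman filter cycle; notation is generic.
\begin{align*}
\text{forecast:}\quad & x^f_i = \mathcal{M}(x^a_i), \qquad i = 1,\dots,N_{\mathrm{ens}}, \\
\text{analysis:}\quad & x^a_i = x^f_i + \mathbf{K}\big(y + \epsilon_i - \mathbf{H} x^f_i\big),
  \qquad \epsilon_i \sim \mathcal{N}(0, \mathbf{R}), \\
& \mathbf{K} = \mathbf{P}^f \mathbf{H}^{\top}
  \big(\mathbf{H}\mathbf{P}^f\mathbf{H}^{\top} + \mathbf{R}\big)^{-1},
\end{align*}
% where M is the forecast model (theory-guided, data-driven, or a combination of the two),
% P^f is the ensemble forecast covariance, H the observation operator, and y the observations.
```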
60
Wavelet-based Dynamic Mode Decomposition in the Context of Extended Dynamic Mode Decomposition and Koopman Theory / Tilki, Cankat, 17 June 2024
Koopman theory is widely used for data-driven modeling of nonlinear dynamical systems. One of the well-known algorithms that stem from this approach is Extended Dynamic Mode Decomposition (EDMD), a data-driven algorithm for uncontrolled systems. In this thesis, we will start by discussing the EDMD algorithm. We will discuss how this algorithm encompasses Dynamic Mode Decomposition (DMD), a widely used data-driven algorithm. Then we will extend our discussion to input-output systems and identify ways to extend Koopman operator theory to input-output systems. We will also discuss how various algorithms can be identified as instances of this framework. Special care is given to Wavelet-based Dynamic Mode Decomposition (WDMD). WDMD is a variant of DMD that uses only the input and output data. WDMD does that by generating auxiliary states acquired from the Wavelet transform. We will show how the action of the Koopman operator can be simplified by using the Wavelet transform and how the WDMD algorithm can be motivated by this representation. We will also introduce a slight modification to WDMD that makes it more robust to noise. / Master of Science / To analyze a real-world phenomenon, we first build a mathematical model to capture its behavior. Traditionally, to build a mathematical model, we isolate its governing principles and encode them into a function. However, when the phenomenon is not well understood, isolating these principles is not possible. Hence, rather than understanding its principles, we sample data from the phenomenon and build our mathematical model directly from this data by using approximation techniques. In this thesis, we will start by focusing on cases where we can fully observe the phenomena and no external stimuli are present. We will discuss how some algorithms originating from these approximation techniques can be identified as instances of the Extended Dynamic Mode Decomposition (EDMD) algorithm. For that, we will review an alternative approach to mathematical modeling, called the Koopman approach, and explain how the Extended DMD algorithm stems from this approach. Then we will focus on the case where there are external stimuli and we can only partially observe the phenomena. We will discuss generalizations of the Koopman approach for this case, and how various algorithms that model such systems can be identified as instances of the EDMD algorithm adapted for this case. Special attention is given to the Wavelet-based Dynamic Mode Decomposition (WDMD) algorithm. WDMD builds a mathematical model from the data by borrowing ideas from Wavelet theory, which is used in signal processing. In this way, WDMD does not require sampling of the fully observed system. This gives WDMD the flexibility to be used in cases where we can only partially observe the phenomena. While showing that WDMD is an instance of EDMD, we will also show how Wavelet theory can simplify the Koopman approach and thus how it can pave the way for an easier analysis.
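As a small, self-contained reference point, the sketch below implements standard (exact) DMD on synthetic snapshot data, the special case that EDMD reduces to when linear observables are used; the data and truncation rank are illustrative assumptions, and the wavelet-generated auxiliary states of WDMD are not shown.

```python
# Minimal standard DMD on synthetic snapshot data (the special case of EDMD with linear
# observables); the data and truncation rank are illustrative.
import numpy as np

rng = np.random.default_rng(2)
n, m = 50, 120
modes_true = rng.standard_normal((n, 2))
t = np.arange(m)
dynamics = np.vstack([np.cos(0.3 * t), np.sin(0.3 * t)])   # a single oscillatory pair
data = modes_true @ dynamics + 1e-3 * rng.standard_normal((n, m))

X, Y = data[:, :-1], data[:, 1:]                  # snapshot pairs x_k -> x_{k+1}
U, s, Vh = np.linalg.svd(X, full_matrices=False)
r = 2                                             # truncation rank (assumed)
Ur, sr, Vr = U[:, :r], s[:r], Vh[:r, :].conj().T
A_tilde = Ur.conj().T @ Y @ Vr / sr               # reduced linear operator
eigvals, W = np.linalg.eig(A_tilde)
Phi = Y @ Vr / sr @ W                             # (exact) DMD modes
print("DMD eigenvalues:", np.round(eigvals, 3))   # should lie near exp(+/- 0.3j)
```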