  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

HaMMLeT: An Infinite Hidden Markov Model with Local Transitions

Dawson, Colin Reimer January 2017 (has links)
In classical mixture modeling, each data point is modeled as arising i.i.d. (typically) from a weighted sum of probability distributions. When data arises from different sources that may not give rise to the same mixture distribution, a hierarchical model can allow the source contexts (e.g., documents, sub-populations) to share components while assigning different weights across them (while perhaps coupling the weights to "borrow strength" across contexts). The Dirichlet Process (DP) Mixture Model (e.g., Rasmussen (2000)) is a Bayesian approach to mixture modeling which models the data as arising from a countably infinite number of components: the Dirichlet Process provides a prior on the mixture weights that guards against overfitting. The Hierarchical Dirichlet Process (HDP) Mixture Model (Teh et al., 2006) employs a separate DP Mixture Model for each context, but couples the weights across contexts. This coupling is critical to ensure that mixture components are reused across contexts. An important application of HDPs is to time series models, in particular Hidden Markov Models (HMMs), where the HDP can be used as a prior on a doubly infinite transition matrix for the latent Markov chain, giving rise to the HDP-HMM (first developed, as the "Infinite HMM", by Beal et al. (2001), and subsequently shown to be a case of an HDP by Teh et al. (2006)). There, the hierarchy is over rows of the transition matrix, and the distributions across rows are coupled through a top-level Dirichlet Process. In the first part of the dissertation, I present a formal overview of Mixture Models and Hidden Markov Models. I then turn to a discussion of Dirichlet Processes and their various representations, as well as associated schemes for tackling the problem of doing approximate inference over an infinitely flexible model with finite computational resources. 
I will then turn to the Hierarchical Dirichlet Process (HDP) and its application to an infinite state Hidden Markov Model, the HDP-HMM. These models have been widely adopted in Bayesian statistics and machine learning. However, a limitation of the vanilla HDP is that it offers no mechanism to model correlations between mixture components across contexts. This is limiting in many applications, including topic modeling, where we expect certain components to occur or not occur together. In the HMM setting, we might expect certain states to exhibit similar incoming and outgoing transition probabilities; that is, for certain rows and columns of the transition matrix to be correlated. In particular, we might expect pairs of states that are "similar" in some way to transition frequently to each other. The HDP-HMM offers no mechanism to model this similarity structure. The central contribution of the dissertation is a novel generalization of the HDP-HMM which I call the Hierarchical Dirichlet Process Hidden Markov Model With Local Transitions (HDP-HMM-LT, or HaMMLeT for short), which allows for correlations between rows and columns of the transition matrix by assigning each state a location in a latent similarity space and promoting transitions between states that are near each other. I present a Gibbs sampling scheme for inference in this model, employing auxiliary variables to simplify the relevant conditional distributions, which have a natural interpretation after re-casting the discrete time Markov chain as a continuous time Markov Jump Process where holding times are integrated out, and where some jump attempts "fail". I refer to this novel representation as the Markov Process With Failed Jumps. I test this model on several synthetic and real data sets, showing that for data where transitions between similar states are more common, the HaMMLeT model more effectively finds the latent time series structure underlying the observations.
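The Dirichlet Process prior on mixture weights described above is commonly realized through the stick-breaking (GEM) construction. As a minimal illustrative sketch (not code from the dissertation; the concentration parameter and truncation level are arbitrary choices), the first few weights can be sampled like this:

```python
import random

def stick_breaking_weights(alpha, k):
    """Draw the first k mixture weights of a Dirichlet Process prior via
    stick-breaking: beta_i ~ Beta(1, alpha), w_i = beta_i * prod_{j<i}(1 - beta_j)."""
    weights = []
    remaining = 1.0  # length of the stick not yet broken off
    for _ in range(k):
        beta = random.betavariate(1.0, alpha)
        weights.append(remaining * beta)
        remaining *= (1.0 - beta)
    return weights

random.seed(0)
w = stick_breaking_weights(alpha=2.0, k=20)
# sum(w) < 1: the leftover mass belongs to the infinitely many
# components that were never instantiated.
```

Larger `alpha` spreads mass over more components; this is exactly the "guard against overfitting" role the abstract attributes to the DP prior.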
2

Algorithms for Reconstructing and Reasoning about Chemical Reaction Networks

Cho, Yong Ju 24 January 2013 (has links)
Recent advances in systems biology have uncovered detailed mechanisms of biological processes such as the cell cycle, circadian rhythms, and signaling pathways. These mechanisms are modeled by chemical reaction networks (CRNs), which are typically simulated by converting them to ordinary differential equations (ODEs), with the goal of closely reproducing the observed quantitative and qualitative behaviors of the modeled process. This thesis proposes two algorithmic problems related to the construction and comprehension of CRN models. The first problem focuses on reconstructing CRNs from given time series. Given multivariate time course data obtained by perturbing a given CRN, how can we systematically deduce the interconnections between the species of the network? We demonstrate how this problem can be modeled as, first, one of uncovering conditional independence relationships using buffering experiments and, second, one of determining the properties of the individual chemical reactions. Experimental results demonstrate the effectiveness of our approach on both synthetic and real CRNs. The second problem this work focuses on is to aid in network comprehension, i.e., to understand the motifs underlying complex dynamical behaviors of CRNs. Specifically, we focus on bistability---an important dynamical property of a CRN---and propose algorithms to identify the core structures responsible for conferring bistability. The approach we take is to systematically infer the instability causing structures (ICSs) of a CRN and use machine learning techniques to relate properties of the CRN to the presence of such ICSs. This work has the potential to aid in not just network comprehension but also model simplification, by helping reduce the complexity of known bistable systems. / Ph. D.
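The CRN-to-ODE conversion the abstract mentions can be illustrated with a toy example. The reaction system (a single reversible reaction A ⇌ B under mass-action kinetics), the rate constants, and the Euler scheme below are illustrative assumptions, not material from the thesis:

```python
def simulate_crn(k_f, k_r, a0, b0, dt=0.001, steps=5000):
    """Euler-integrate the mass-action ODEs for A <-> B:
    dA/dt = -k_f*A + k_r*B, dB/dt = -dA/dt."""
    a, b = a0, b0
    for _ in range(steps):
        rate = -k_f * a + k_r * b
        a += dt * rate
        b -= dt * rate  # total mass A + B is conserved
    return a, b

a, b = simulate_crn(k_f=2.0, k_r=1.0, a0=1.0, b0=0.0)
# At equilibrium k_f*A = k_r*B, so A -> 1/3 and B -> 2/3 here.
```

Reconstructing a CRN, in this framing, amounts to recovering the reaction structure and rate constants from observed trajectories like `(a, b)` over time.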
3

Deep Time Series Modeling: From Distribution Regularity to Distribution Shift

Fan, Wei 01 January 2023 (has links) (PDF)
Time series data, as a pervasive data format, play a key role in numerous real-world scenarios. Effective time series modeling can help with accurate forecasting, resource optimization, risk management, etc. Given its great importance, how can we model the nature of pervasive time series data? Existing works have adopted statistical analysis, state space models, Bayesian models, or other machine learning models for time series modeling. However, these methods usually follow certain assumptions and do not reveal the core, underlying rules of time series. Moreover, the recent advancement of deep learning has made neural networks a powerful tool for pattern recognition. This dissertation targets the problem of time series modeling using deep learning techniques to achieve accurate forecasting of time series. I will propose a principled approach for deep time series modeling from a novel distribution perspective. After in-depth exploration, I categorize and study two essential characteristics of time series, i.e., the distribution regularity and the distribution shift, respectively. I will investigate how time series data exhibiting these two characteristics can be analyzed through distribution extraction, distribution scaling, and distribution transformation. By applying recent deep learning techniques to distribution learning for time series, this dissertation aims to achieve more effective and efficient forecasting and decision-making. I will carefully illustrate the proposed methods across three themes and summarize the key findings and improvements achieved through experiments. Finally, I will present my future research plan and discuss how to broaden my research on deep time series modeling into a more general Data-Centric AI system for more generalized, reliable, fair, effective, and efficient decision-making.
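One common, simple way deep forecasting pipelines cope with distribution shift is window-wise normalization: strip each input window's own mean and scale, forecast in the normalized space, then map back. The sketch below is a generic illustration of that idea (function names and data are invented, not from the dissertation):

```python
import statistics

def normalize_window(window):
    """Remove the window's own mean/std and return the statistics
    needed to map a forecast back to the original scale."""
    mu = statistics.fmean(window)
    sigma = statistics.pstdev(window) or 1.0  # guard against constant windows
    scaled = [(x - mu) / sigma for x in window]
    return scaled, mu, sigma

def denormalize(value, mu, sigma):
    """Map a forecast made in normalized space back to the data scale."""
    return value * sigma + mu

scaled, mu, sigma = normalize_window([10.0, 12.0, 11.0, 13.0])
```

A model trained on `scaled` sees inputs with a stable distribution even when the raw series drifts; the shift is re-applied only at prediction time via `denormalize`.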
4

Time series modeling in water loss

Chuang, Wen-Cheng January 1987 (has links)
No description available.
5

Graph-based Time-series Forecasting in Deep Learning

Chen, Hongjie 02 April 2024 (has links)
Time-series forecasting has long been studied and remains an important research task. In scenarios where multiple time series need to be forecast, approaches that exploit the mutual impact between time series result in more accurate forecasts. This has been demonstrated in various applications, including demand forecasting and traffic forecasting, among others. Hence, this dissertation focuses on graph-based models, which leverage inter-node relations to forecast more efficiently and effectively by associating time series with nodes. This dissertation begins by introducing the notion of graph time-series models in a comprehensive survey of related models. The main contributions of this survey are: (1) A novel categorization is proposed to thoroughly analyze over 20 representative graph time-series models from various perspectives, including temporal components, propagation procedures, and graph construction methods, among others. (2) Similarities and differences among models are discussed to provide a fundamental understanding of decisive factors in graph time-series models. Model challenges and future directions are also discussed. Following the survey, this dissertation develops graph time-series models that utilize complex time-series interactions to yield context-aware, real-time, and probabilistic forecasting. The first method, Context Integrated Graph Neural Network (CIGNN), targets resource forecasting with contextual data. Previous solutions either neglect contextual data or only leverage static features, which fail to exploit contextual information. Its main contributions include: (1) Integrating multiple contextual graphs; and (2) Introducing and incorporating temporal, spatial, relational, and contextual dependencies. The second method, Evolving Super Graph Neural Network (ESGNN), targets large-scale time-series datasets through training on super graphs. 
Most graph time-series models let each node associate with a time series, potentially resulting in a high time cost. Its main contributions include: (1) Generating multiple super graphs to reflect node dynamics at different periods; and (2) Proposing an efficient super graph construction method based on K-Means and LSH. The third method, Probabilistic Hypergraph Recurrent Neural Network (PHRNN), targets datasets under the assumption that nodes interact in a simultaneous broadcasting manner. Previous hypergraph approaches leverage a static-weight hypergraph, which fails to capture the interaction dynamics among nodes. Its main contributions include: (1) Learning a probabilistic hypergraph structure from the time series; and (2) Proposing the use of a KNN hypergraph for hypergraph initialization and regularization. The last method, Graph Deep Factors (GraphDF), aims at efficient and effective probabilistic forecasting. Previous probabilistic approaches neglect the interrelations between time series. Its main contributions include: (1) Proposing a framework that consists of a relational global component and a relational local component; (2) Conducting analysis in terms of accuracy, efficiency, scalability, and simulation with opportunistic scheduling; and (3) Designing an algorithm for incremental online learning. / Doctor of Philosophy / Time-series forecasting has long been studied due to its usefulness in numerous applications, including demand forecasting, traffic forecasting, and workload forecasting, among others. In scenarios where multiple time series need to be forecast, approaches that exploit the mutual impact between time series result in more accurate forecasts. Hence, this dissertation focuses on a specific area of deep learning: graph time-series models. These models associate time series with a graph structure for more efficient and effective forecasting. 
This dissertation introduces the notion of graph time series through a comprehensive survey and analyzes representative graph time-series models to help readers gain a fundamental understanding of graph time series. Following the survey, this dissertation develops graph time-series models that utilize complex time-series interactions to yield context-aware, real-time, and probabilistic forecasting. The first method, Context Integrated Graph Neural Network (CIGNN), incorporates multiple contextual graph time series for resource time-series forecasting. The second method, Evolving Super Graph Neural Network (ESGNN), constructs dynamic super graphs for large-scale time-series forecasting. The third method, Probabilistic Hypergraph Recurrent Neural Network (PHRNN), designs a probabilistic hypergraph model that learns the interactions between nodes as distributions in a hypergraph structure. The last method, Graph Deep Factors (GraphDF), targets probabilistic time-series forecasting with a relational global component and a relational local model. These methods collectively cover various data characteristics and model structures, including graphs, super graphs, and hypergraphs; a single graph, dual graphs, and multiple graphs; point forecasting and probabilistic forecasting; offline learning and online learning; and both small and large-scale datasets. This dissertation also highlights the similarities and differences between these methods. In the end, future directions in the area of graph time series are also provided.
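The neighbor-aggregation idea at the heart of graph time-series models can be sketched in a few lines. This is only a hand-written caricature: the adjacency matrix, the fixed blending weight `alpha`, and the node values are invented, whereas the models surveyed above learn such weights from data:

```python
def graph_smoothed_forecast(adj, last_values, alpha=0.5):
    """One-step forecast per node: blend the node's own last value with
    the average of its neighbors' last values (a minimal sketch of
    neighbor aggregation in graph time-series forecasting)."""
    n = len(adj)
    forecast = []
    for i in range(n):
        neighbors = [j for j in range(n) if adj[i][j]]
        if neighbors:
            agg = sum(last_values[j] for j in neighbors) / len(neighbors)
        else:
            agg = last_values[i]  # isolated node: fall back to itself
        forecast.append(alpha * last_values[i] + (1 - alpha) * agg)
    return forecast

adj = [[0, 1, 1], [1, 0, 0], [1, 0, 0]]  # node 0 linked to nodes 1 and 2
f = graph_smoothed_forecast(adj, [1.0, 3.0, 5.0])
```

Replacing the fixed `alpha` and the raw adjacency with learned, time-varying structures is, roughly, what separates CIGNN, ESGNN, and PHRNN from this toy.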
6

Risk-Averse Optimization and its Applications in Power Grids with Renewable Energy Integration

Dashti, Hossein January 2017 (has links)
Electric power is one of the most critical parts of everyday life, from lighting, heating, and cooling homes to powering televisions and computers. Modern power grids face several challenges such as efficiency, sustainability, and reliability. Increasing electrical energy demand, distributed generation, integration of uncertain renewable energy resources, and demand-side management are among the main underlying reasons for this growing complexity. Additionally, the elements of power systems are often vulnerable to failures for many reasons, such as system limits, poor maintenance, human errors, terrorist/cyber attacks, and natural phenomena. One common factor complicating the operation of electrical power systems is the underlying uncertainty in the demands, supplies, and failures of system components. Stochastic optimization approaches provide mathematical frameworks for decision making under uncertainty. They enable a decision maker to incorporate some knowledge of the uncertainty into the decision making process to find an optimal trade-off between cost and risk. In this dissertation, we focus on the application of three risk-averse approaches to power systems modeling and optimization. In particular, we develop models and algorithms addressing the cost-effectiveness and reliability issues in power grids with integration of renewable energy resources. First, we consider a unit commitment problem for centralized hydrothermal systems, where we study improving the reliability of such systems under water inflow uncertainty. We present a two-stage robust mixed-integer model to find optimal unit commitment and economic dispatch decisions against extreme weather conditions such as drought years. Further, we employ time series analysis (specifically, vector autoregressive models) to construct physically based uncertainty sets for water inflow into the reservoirs. 
Since the extensive formulation is impractical to solve for moderate-size networks, we develop an efficient Benders' decomposition algorithm to solve this problem. We present numerical results on a real-life case study showing the effectiveness of the model and the proposed solution method. Next, we address cost-effectiveness and reliability issues considering the integration of solar energy in distributed (decentralized) generation (DG) such as microgrids. In particular, we consider optimal placement and sizing of DG units as well as long-term generation planning to efficiently balance electric power demand and supply. However, the intermittent nature of renewable energy resources such as solar irradiance imposes several difficulties on the decision making process. We propose a two-stage stochastic programming model with chance constraints to control the risk of load shedding (i.e., power shortage) in distributed generation. We take advantage of another time series modeling approach, the autoregressive integrated moving average (ARIMA) model, to characterize the uncertain solar irradiance more accurately. Additionally, we develop combined sample average approximation (SAA) and linearization techniques to solve the problem more efficiently. We examine the proposed framework with numerical tests on a radial network in Arizona. Lastly, we address the robustness of strategic networks in general, including power grids and airports. One of the key robustness requirements is connectivity between each pair of nodes through a sufficiently short path, which makes a network cluster more robust with respect to potential disruptions such as man-made or natural disasters. If one can reinforce the network components against future threats, the goal is to determine optimal reinforcements that would yield a cluster with minimum risk of disruption. 
We propose a risk-averse model where each cluster represents an R-robust 2-club, which by definition is a subgraph with at least R node/edge-disjoint paths connecting each pair of nodes, where each path consists of at most 2 edges. We then develop a combinatorial branch-and-bound algorithm and compare it with an equivalent mathematical programming approach on random and real-world networks.
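The sample average approximation of a chance constraint like the load-shedding one above can be sketched with plain Monte Carlo sampling. The Gaussian demand/solar models and every parameter value below are invented for illustration; the dissertation's actual models use ARIMA-characterized irradiance:

```python
import random

def estimate_shortage_probability(capacity, demand_mean, demand_sd,
                                  solar_mean, solar_sd, n=100_000, seed=7):
    """SAA estimate of P(demand - solar > capacity), i.e. the
    probability of load shedding, under toy Gaussian inputs."""
    rng = random.Random(seed)
    shortfalls = 0
    for _ in range(n):
        demand = rng.gauss(demand_mean, demand_sd)
        solar = max(0.0, rng.gauss(solar_mean, solar_sd))  # irradiance >= 0
        if demand - solar > capacity:
            shortfalls += 1
    return shortfalls / n

p = estimate_shortage_probability(capacity=80.0, demand_mean=100.0,
                                  demand_sd=10.0, solar_mean=30.0, solar_sd=8.0)
```

A chance constraint at level `eps` is then the requirement `p <= eps`; the SAA replaces the true probability with this sampled frequency so the constraint becomes checkable inside an optimization loop.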
7

I Can See What You Are Feeling, but Can I Feel It? Physiological Linkage while Viewing Communication of Emotion via Touch

Kissel, Heather Ann 20 May 2022 (has links)
Past research has demonstrated that emotions can accurately be communicated via touch (e.g., Hertenstein, Keltner, App, Bulleit, and Jaskolka, 2006). In stranger female dyads, physiological linkage plays a role in the mechanism whereby this successful communication occurs, as touch strengthens and lengthens linkage (Kissel, 2020). While touch has a direct impact on physiological processes, viewing touch may have similar effects. The current study explored this possibility with regard to physiological linkage. Hertenstein et al. (2006) demonstrated that participants can correctly decode emotions from observing videos of communication via touch to the forearm and hand. The current study replicated this finding with forty-seven female participants, while also determining the levels of physiological linkage between the "live" observers and the video-recorded participants from Kissel (2020) using dynamic linear time series modeling. Results showed that physiological linkage can occur between "live" and recorded participants. Participants demonstrated longer linkage times with the initial dyad they viewed, but linkage with videoed communicators whose communications were correctly perceived by their fellow videoed receiver had a larger influence on emotion word, valence, intensity, and quadrant detection accuracy. Based on these results, physiological linkage may influence empathic accuracy in virtual settings. / Doctor of Philosophy / A common American English slang expression to state that you relate to someone on a deep personal level is "I feel you." This is a verbal expression of empathy, but what if empathy goes deeper than our thoughts or memories of similar experiences? What if our bodies experience the same emotion as the person with whom we are interacting? This is possible through the phenomenon of "physiological linkage." 
Physiological linkage occurs when physiological signals, such as heart rate, between interaction partners start to sync up—for example, when one person's heart rate speeds up, so does the heart rate of the person with whom they are interacting. The author's thesis study demonstrated that this linkage can occur when people communicate emotions solely through touch. But what happens if you are watching these emotion communications instead of experiencing them? The current study examined if physiological linkage occurs between people watching a video and the people emoting in the video. The results showed that linkage does occur while watching emotional touch interactions and that this can help the observer understand what these emotions are (even if the observer can see no faces and hear no voices). Touch has many health benefits, so the observation that watching recorded touch interactions can have a similar bodily effect has implications for increasing health and connectedness. This is particularly important given the limited face-to-face and touch interactions, as well as the increase in video call interactions, resulting from the COVID-19 pandemic.
8

Physiological Linkage and Communication of Emotion via Touch

Kissel, Heather 08 1900 (has links)
Previous research has demonstrated that communication of emotion via touch is possible and occurs well above chance levels, though the potential mechanism whereby this occurs has yet to be determined. The current study aimed to determine if physiological linkage, or the synchrony between various physiological signals between two interaction partners, played a role in successful communication of emotion via touch. Dynamic linear time series analysis was used to determine the strength and length of synchrony between the inter-beat intervals of fifty-two stranger female-female dyads (n=104, mean age=19.88) during two rounds of an emotion communication task in which they communicated a randomized list of emotions to each other via forearm touch alone, without being able to see their interaction partner. Results showed the highest magnitude linkage coefficients and the greatest number of consecutive lagged linked seconds during the “touch alone” communication—demonstrating that touch increases physiological linkage. Stronger and longer physiological linkage across tasks predicted emotion word, valence, intensity, and quadrant (from the circumplex model) detection accuracy. Participants serving as the initial communicator in the first round of emotion communication tended to have a greater influence on the physiology of initial receivers. Overall, greater physiological linkage as the result of touch predicted successful communication of emotion via touch and is therefore likely a portion of the mechanism underlying this phenomenon. / M.S. / People often communicate with their friends, family, and acquaintances using touch—when meeting a loved one after a long time, we might give them a particularly tight hug; to congratulate someone, we give a high five; and even in business settings, handshakes are used as a form of greeting or parting. 
Touch can also be used to communicate distinct emotions, just like a frown or a stern tone can communicate visually and aurally that someone is angry. However, although past research has demonstrated this communicative ability of touch, it is not yet known how touch is able to communicate emotion. The current study hypothesized that physiological linkage might play a role. Physiological linkage occurs when physiological signals, such as heart rate, between interaction partners start to sync up—for example, when one person’s heart rate speeds up, so does the heart rate of the person with whom they are interacting. Results showed that greater levels of physiological linkage occurred in response to touch and that these increased levels of physiological linkage predicted people’s ability to successfully determine which emotion was communicated to them via touch to their forearm. All the emotions were communicated via touch alone; participants could not see or hear their interaction partner. This demonstrates how powerful communication via touch can be. Future research should examine how touch and physiological linkage can be incorporated into medical and psychological therapies.
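A toy stand-in for the lagged-synchrony idea (not the dynamic linear time series models actually used in these studies) is a lagged Pearson correlation: shift one partner's signal and find the delay at which it best tracks the other's. The signals below are fabricated for illustration:

```python
import math

def pearson(x, y):
    """Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def best_lag(x, y, max_lag):
    """Return the lag (in samples) at which y best tracks x: shift y
    back by `lag` and correlate the overlapping segments."""
    scores = {lag: pearson(x[:len(x) - lag], y[lag:])
              for lag in range(max_lag + 1)}
    return max(scores, key=scores.get)

x = [0, 1, 0, 2, 0, 3, 0, 4, 0, 5, 0, 6]
y = [9, 9] + x[:-2]              # y is x delayed by two samples
lag = best_lag(x, y, max_lag=4)  # recovers the 2-sample delay
```

Real linkage analyses additionally model each person's own autocorrelation before asking how much the partner's signal explains, which is what the dynamic linear time series approach adds over this sketch.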
9

Investigation Of Damage Detection Methodologies For Structural Health Monitoring

Gul, Mustafa 01 January 2009 (has links)
Structural Health Monitoring (SHM) is employed to track and evaluate damage and deterioration during regular operation as well as after extreme events for aerospace, mechanical, and civil structures. A complete SHM system incorporates performance metrics, sensing, signal processing, data analysis, transmission, and management for decision-making purposes. Damage detection in the context of SHM can be successful by employing a collection of robust and practical damage detection methodologies that can be used to identify, locate, and quantify damage or, in general terms, changes in observable behavior. In this study, different damage detection methods are investigated for global condition assessment of structures. First, different parametric and non-parametric approaches are revisited and further improved for damage detection using vibration data. Modal flexibility, modal curvature, and un-scaled flexibility based on the dynamic properties that are obtained using the Complex Mode Indicator Function (CMIF) are used as parametric damage features. Second, statistical pattern recognition approaches using time series modeling in conjunction with outlier detection are investigated as a non-parametric damage detection technique. Third, a novel methodology using ARX models (Auto-Regressive models with eXogenous input) is proposed for damage identification. By using this new methodology, it is shown that damage can be detected, located, and quantified without the need for external loading information. Next, laboratory studies are conducted on different test structures with a number of different damage scenarios for the evaluation of the techniques in a comparative fashion. Finally, application of the methodologies to real-life data is also presented, along with the capabilities and limitations of each approach in light of the analysis results of the laboratory and real-life data.
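The time-series flavor of damage detection can be sketched with an AR(1) residual check: fit an autoregressive model to the healthy structure's vibration signal, then flag responses whose residuals grow. The decaying-oscillation signal and the AR(1) simplification below are illustrative, not the thesis's ARX methodology:

```python
def fit_ar1(series):
    """Least-squares AR(1) coefficient: x_t ~ phi * x_{t-1}."""
    num = sum(series[t] * series[t - 1] for t in range(1, len(series)))
    den = sum(series[t - 1] ** 2 for t in range(1, len(series)))
    return num / den

def residual_rms(series, phi):
    """RMS of one-step-ahead prediction errors under the AR(1) model."""
    n = len(series) - 1
    sq = sum((series[t] - phi * series[t - 1]) ** 2
             for t in range(1, len(series)))
    return (sq / n) ** 0.5

# Baseline signal: a decaying oscillation that is exactly AR(1).
baseline = [(-0.9) ** t for t in range(50)]
phi = fit_ar1(baseline)                 # recovers -0.9 for this signal
healthy_rms = residual_rms(baseline, phi)
# Damage detection idea: residual_rms(new_signal, phi) >> healthy_rms
# suggests the dynamics have changed.
```

The thesis's ARX models extend this by adding other sensors' outputs as exogenous terms, which is what lets damage be localized rather than merely detected.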
10

Novel Approaches For Demand Forecasting In Semiconductor Manufacturing

Kumar, Chittari Prasanna 01 1900 (has links)
Accurate demand forecasting is a key capability for a manufacturing organization, more so for a semiconductor manufacturer. Many crucial decisions are based on demand forecasts. The semiconductor industry is characterized by very short product lifecycles (10 to 24 months) and extremely uncertain demand. The pace at which both the manufacturing technology and the product design change induces changes in manufacturing throughput and potential demand. Well-known methods like exponential smoothing, moving average, weighted moving average, ARMA, ARIMA, econometric methods, and neural networks have been used in industry with varying degrees of success. We propose a novel forecasting technique based on Support Vector Regression (SVR). Specifically, we formulate ν-SVR models for semiconductor product demand data. We propose a 3-phased input vector modeling approach to capture demand characteristics learnt while building a standard ARIMA model on the data. Forecasting experiments are conducted on demand data for different semiconductor products, such as 32- and 64-bit CPU products, 32-bit microcontroller units, DSPs for cellular products, and NAND and NOR flash products. Demand data was provided by SRC (Semiconductor Research Consortium) member companies and consists of actual sales recorded each month. Model performance is judged using different performance metrics from the extant literature. Results of the experiments show that, compared to other demand forecasting techniques, ν-SVR can significantly reduce both the mean absolute percentage error and the normalized mean squared error of forecasts. ν-SVR with our 3-phased input vector modeling approach performs better than standard ARIMA and simple ν-SVR models in most of the cases.
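The two error metrics named in the abstract are straightforward to compute; one common definition of each is sketched below (the demand/forecast numbers are invented for illustration, and normalization conventions for NMSE vary across the literature):

```python
def mape(actual, forecast):
    """Mean absolute percentage error, in percent."""
    n = len(actual)
    return 100.0 / n * sum(abs((a - f) / a) for a, f in zip(actual, forecast))

def nmse(actual, forecast):
    """Mean squared error normalized by the variance of the actuals,
    so nmse < 1 means the forecast beats predicting the mean."""
    n = len(actual)
    mean_a = sum(actual) / n
    mse = sum((a - f) ** 2 for a, f in zip(actual, forecast)) / n
    var = sum((a - mean_a) ** 2 for a in actual) / n
    return mse / var

actual = [100.0, 120.0, 90.0, 110.0]    # monthly demand (illustrative)
forecast = [110.0, 114.0, 99.0, 99.0]   # model output (illustrative)
```

Note that MAPE is undefined for zero-demand months and penalizes over- and under-forecasts asymmetrically, which is one reason studies like this report several metrics side by side.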
