11

Temporal Disaggregation of Daily Precipitation Data in a Changing Climate

Wey, Karen January 2006 (has links)
Models for spatially interpolating hourly precipitation data and temporally disaggregating daily precipitation to hourly data are developed for application to multisite scenarios at the watershed scale. The intent is to create models that produce valid input for a hydrologic rainfall-runoff model from daily data produced by a stochastic weather generator. These models will be used to determine the potential effects of climate change on local precipitation events. A case study is presented applying these models to the Upper Thames River basin in Ontario, Canada; however, the models are generic and applicable to any watershed with few changes.

Some hourly precipitation data were required to calibrate the temporal disaggregation model. Spatial interpolation of these hourly precipitation data was required before temporal disaggregation could be completed. Spatial interpolation methods were investigated and an inverse distance method was applied to the data. Analysis of the output from this model confirms that isotropy is a valid assumption for this application and illustrates that the model is robust. The results also show that further study is required for accurate spatial interpolation of hourly precipitation data at the watershed scale.

An improved method of fragments is used to perform temporal disaggregation of daily precipitation data. A parsimonious approach to multisite fragment calculation is introduced within this model, along with other improvements on the methods presented in the literature. The output from this model clearly indicates that spatial and temporal variations are maintained throughout the disaggregation process. Analysis of the results indicates that the model creates plausible precipitation events.

The models presented here were run for multiple climate scenarios to determine which GCM scenario has the most potential to affect precipitation. Discussion of the potential impacts of climate change on the region of study is provided. Selected events are examined in detail to give a representation of extreme precipitation events which may be experienced in the study area due to climate change.
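The inverse distance method mentioned in the abstract is a standard weighting scheme; a minimal illustrative sketch (not the thesis's actual implementation — the gauge layout and decay exponent below are assumptions) is:

```python
import numpy as np

def idw_interpolate(xy_obs, values, xy_target, power=2.0):
    """Inverse distance weighting: estimate precipitation at a target point
    from surrounding gauge observations (power = distance-decay exponent)."""
    d = np.linalg.norm(xy_obs - xy_target, axis=1)
    if np.any(d == 0):                       # target coincides with a gauge
        return float(values[np.argmin(d)])
    w = 1.0 / d**power                       # closer gauges get larger weights
    return float(np.sum(w * values) / np.sum(w))

# Example: three gauges around an ungauged point (hypothetical coordinates)
gauges = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
rain_mm = np.array([2.0, 0.5, 1.0])          # hourly totals (mm)
print(idw_interpolate(gauges, rain_mm, np.array([3.0, 3.0])))
```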
12

Integer Programming Models for finding Optimal Part-Machine Families

Mason, Cynthia 10 May 2013 (has links)
In this thesis, we develop integer programming models that find the optimal part-machine family solutions which disaggregate a factory process at the lowest cost. The groupings created using the methods presented in this thesis can then act as the basis for the application of Group Technology, which includes machine placement, job scheduling, and part routing. Four exact 0-1 linear programming techniques are developed and presented. The first technique focuses only on part subcontracting as a means to disaggregate, and the second only on machine duplication. The final two methods both yield part-machine family disaggregation through simultaneous part subcontracting and machine duplication. Once these methods are applied to example problems, the results provide exact solutions, which have not been found in previous work. / NSERC Discovery Grant
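A toy sketch in the spirit of the subcontracting-based 0-1 formulation described above, where each part is either assigned to a machine family or subcontracted at a cost (the data, the family partition, and the use of PuLP are assumptions for illustration, not the thesis's models):

```python
import pulp

# Hypothetical machine families, part requirements, and subcontracting costs.
machines_in_family = {"F1": {"M1", "M2"}, "F2": {"M3"}}
part_needs = {"P1": {"M1", "M2"}, "P2": {"M2", "M3"}, "P3": {"M3"}}
subcontract_cost = {"P1": 8, "P2": 3, "P3": 5}

prob = pulp.LpProblem("part_family_assignment", pulp.LpMinimize)
assign = pulp.LpVariable.dicts(
    "assign", [(p, f) for p in part_needs for f in machines_in_family], cat="Binary")
sub = pulp.LpVariable.dicts("sub", part_needs, cat="Binary")

# Minimize the total subcontracting cost.
prob += pulp.lpSum(subcontract_cost[p] * sub[p] for p in part_needs)

for p, needed in part_needs.items():
    # Each part is either assigned to exactly one family or subcontracted.
    prob += pulp.lpSum(assign[(p, f)] for f in machines_in_family) + sub[p] == 1
    # A part may only join a family that contains every machine it requires.
    for f, members in machines_in_family.items():
        if not needed <= members:
            prob += assign[(p, f)] == 0

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({p: ("subcontract" if sub[p].value() == 1 else
           next(f for f in machines_in_family if assign[(p, f)].value() == 1))
       for p in part_needs})
```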
13

Towards a Mechanistic Understanding of the Molecular Chaperone Hsp104

Lum, Ronnie 18 February 2011 (has links)
The AAA+ chaperone Hsp104 mediates the reactivation of aggregated proteins in Saccharomyces cerevisiae and is crucial for cell survival after exposure to stress. Protein disaggregation depends on cooperation between Hsp104 and a cognate Hsp70 chaperone system. Hsp104 forms a hexameric ring with a narrow axial channel penetrating the centre of the complex. In Chapter 2, I show that conserved loops in each AAA+ module that line this channel are required for disaggregation and that the position of these loops is likely determined by the nucleotide bound state of Hsp104. This evidence supports a common protein remodeling mechanism among Hsp100 members in which proteins are unfolded and threaded along the axial channel. In Chapter 3, I use a peptide-based substrate mimetic to reveal other novel features of Hsp104’s disaggregation mechanism. An Hsp104-binding peptide selected from solid phase arrays recapitulated several properties of an authentic Hsp104 substrate. Inactivation of the pore loops in either AAA+ module prevented stable peptide or protein binding. However, when the loop in the first AAA+ was inactivated, stimulation of ATPase turnover in the second AAA+ module of this mutant was abolished. Drawing on these data, I propose a detailed mechanistic model of protein unfolding by Hsp104 in which an initial unstable interaction involving the loop in the first AAA+ module simultaneously promotes penetration of the substrate into the second axial channel binding site and activates ATP turnover in the second AAA+ module. In Chapter 4, I explore the recognition elements within a model Hsp104-binding peptide that are required for rapid binding to Hsp104. Removal of bulky hydrophobic residues and lysines abrogated the ability of this peptide to function as a peptide-based substrate mimetic for Hsp104. Furthermore, rapid binding of a model unfolded protein to Hsp104 required an intact N-terminal domain and ATP binding at the first AAA+ module. Taken together, I have defined numerous structural features within Hsp104 and its model substrates that are crucial for substrate binding and processing by Hsp104. This work provides a theoretical framework that will encourage research in other protein remodeling AAA+ ATPases.
14

Deep Neural Networks Based Disaggregation of Swedish Household Energy Consumption

Bhupathiraju, Praneeth Varma January 2020 (has links)
Context: In recent years, household energy consumption has risen to levels that are no longer sustainable, creating a pressing need to use energy more sustainably. One of the main causes of this unsustainable consumption is that users are largely unaware of how much energy the smart appliances (dishwasher, refrigerator, washing machine, etc.) in their households consume. To make this appliance-level usage visible to household users, energy analytics companies must analyze the energy consumed by the individual smart appliances in a house. To achieve this, Kelly et al. [7] performed energy disaggregation using deep neural networks and produced good results, and Zhang et al. [8] went a step further in improving the deep neural networks proposed by Kelly et al. The task was performed using the non-intrusive load monitoring (NILM) technique. Objectives: The thesis aims to assess the performance of the deep neural networks proposed by Kelly et al. [7] and Zhang et al. [8]. We use deep neural networks to disaggregate dishwasher energy consumption, in the presence of vampire loads such as electric heaters, in a Swedish household setting. We also identify the training time of the proposed deep neural networks. Methods: An intensive literature review is done to identify state-of-the-art deep neural network techniques used for energy disaggregation. All experiments are performed on a dataset provided by the energy analytics company Eliq AB. The data are collected from 4 households in Sweden. All the households contain a vampire load, an electrical heater whose power consumption is visible in the main power sensor. A separate smart plug is used to collect the dishwasher power consumption data. Each algorithm is trained on data from two of the houses; the remaining two houses are held out for testing. The metrics used for analyzing the algorithms are accuracy, recall, precision, root mean square error (RMSE), and F1 measure. These metrics help identify the algorithm best suited to disaggregating dishwasher energy in our case. Results: The results of our study show that the gated recurrent unit (GRU) performed best when compared to the other neural networks in our study: the simple recurrent neural network (SRN), convolutional neural network (CNN), long short-term memory (LSTM), and recurrent convolutional neural network (RCNN). The accuracy, RMSE, and F1 score of the GRU algorithm are higher than those of the other algorithms. However, if the user does not consider F1 score and RMSE as evaluation metrics and instead considers training time, then the simple recurrent neural network outperforms all the other networks with an average training time of 19.34 minutes.
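A minimal sequence-to-point GRU sketch in the spirit of the networks compared in the abstract (PyTorch is assumed; the window length, layer sizes, and convolutional front end are illustrative, not the thesis's exact configuration):

```python
import torch
import torch.nn as nn

class GRUDisaggregator(nn.Module):
    """Map a window of aggregate mains power to one appliance power estimate."""
    def __init__(self, hidden=64):
        super().__init__()
        self.conv = nn.Conv1d(1, 16, kernel_size=5, padding="same")   # local features
        self.gru = nn.GRU(16, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Sequential(nn.Linear(2 * hidden, 128), nn.ReLU(), nn.Linear(128, 1))

    def forward(self, mains):               # mains: (batch, window) aggregate power
        x = self.conv(mains.unsqueeze(1))   # (batch, 16, window)
        x = x.transpose(1, 2)               # (batch, window, 16) for the GRU
        _, h = self.gru(x)                  # h: (2, batch, hidden), one per direction
        h = torch.cat([h[0], h[1]], dim=1)  # concatenate forward and backward states
        return self.head(h).squeeze(1)      # predicted appliance power for the window

model = GRUDisaggregator()
mains_window = torch.randn(8, 99)            # dummy batch of aggregate readings
print(model(mains_window).shape)             # torch.Size([8])
```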
15

Efficient Algorithms for Mining Large Spatio-Temporal Data

Chen, Feng 21 January 2013 (has links)
Knowledge discovery on spatio-temporal datasets has attracted growing interest. Recent advances in remote sensing technology mean that massive amounts of spatio-temporal data are being collected, and their volume keeps increasing at an ever faster pace. It becomes critical to design efficient algorithms for identifying novel and meaningful patterns from massive spatio-temporal datasets. Unlike other data sources, this data exhibits significant space-time statistical dependence, and the i.i.d. assumption is no longer valid. Exact modeling of space-time dependence leads to exponential growth of model complexity as the data size increases. This research focuses on the construction of efficient and effective approaches using approximate inference techniques for three main mining tasks: spatial outlier detection, robust spatio-temporal prediction, and novel applications to real world problems.

Spatial novelty patterns, or spatial outliers, are those data points whose characteristics are markedly different from their spatial neighbors. There are two major branches of spatial outlier detection methodologies, which can be either global Kriging based or local Laplacian smoothing based. The former approach requires exact modeling of spatial dependence, which is time-intensive; the latter requires the i.i.d. assumption for the smoothed observations, which is not statistically solid. These two approaches are constrained to numerical data, but in real-world applications we are often faced with a variety of non-numerical data types, such as count, binary, nominal, and ordinal. To summarize, the main research challenges are: 1) how much spatial dependence can be eliminated via Laplacian smoothing; 2) how to effectively and efficiently detect outliers in large numerical spatial datasets; 3) how to generalize numerical detection methods and develop a unified outlier detection framework suitable for large non-numerical datasets; 4) how to achieve accurate spatial prediction even when the training data have been contaminated by outliers; 5) how to deal with spatio-temporal data for the preceding problems.

To address the first and second challenges, we mathematically validated the effectiveness of Laplacian smoothing in eliminating spatial autocorrelations. This work provides fundamental support for existing Laplacian smoothing based methods. We also discovered a nontrivial side effect of Laplacian smoothing, which injects additional spatial variation into the data due to convolution effects. To capture this extra variability, we proposed a generalized local statistical model and designed two fast forward and backward outlier detection methods that achieve a better balance between computational efficiency and accuracy than most existing methods and are well suited to large numerical spatial datasets.

We addressed the third challenge by mapping non-numerical variables to latent numerical variables via a link function, such as the logit function used in logistic regression, and then utilizing error-buffer artificial variables, which follow a Student-t distribution, to capture the large variations caused by outliers. We proposed a unified statistical framework which integrates the advantages of the spatial generalized linear mixed model, the robust spatial linear model, reduced-rank dimension reduction, and Bayesian hierarchical modeling. A linear-time approximate inference algorithm was designed to infer the posterior distribution of the error-buffer artificial variables conditioned on observations. We demonstrated that traditional numerical outlier detection methods can be applied directly to the estimated artificial variables for outlier detection. To the best of our knowledge, this is the first linear-time outlier detection algorithm that supports a variety of spatial attribute types, such as binary, count, ordinal, and nominal.

To address the fourth and fifth challenges, we proposed a robust version of the Spatio-Temporal Random Effects (STRE) model, namely the Robust STRE (R-STRE) model. The regular STRE model is a recently proposed statistical model for large spatio-temporal data with linear-order time complexity, but it is not well suited to non-Gaussian and contaminated datasets. This deficiency can be systematically addressed by increasing the robustness of the model using heavy-tailed distributions, such as the Huber, Laplace, or Student-t distribution, to model the measurement error instead of the traditional Gaussian. However, the resulting R-STRE model becomes analytically intractable, and direct application of approximate inference techniques still has cubic-order time complexity. To address the computational challenge, we reformulated the prediction problem as a maximum a posteriori (MAP) problem with a non-smooth objective function, transformed it into an equivalent quadratic programming problem, and developed an efficient interior-point numerical algorithm with near-linear-order complexity. This work presents the first near-linear-time robust prediction approach for large spatio-temporal datasets in both offline and online cases. / Ph. D.
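An illustrative local-smoothing outlier score in the spirit of the Laplacian-smoothing branch described above (the neighborhood size, synthetic data, and threshold are assumptions, not the thesis's algorithm):

```python
import numpy as np

def spatial_outlier_scores(coords, values, k=5):
    """Compare each observation with the mean of its k nearest neighbors and
    return standardized residuals; large |z| suggests a spatial outlier."""
    n = len(values)
    diffs = np.empty(n)
    for i in range(n):
        d = np.linalg.norm(coords - coords[i], axis=1)
        nbrs = np.argsort(d)[1:k + 1]               # k nearest neighbors, excluding self
        diffs[i] = values[i] - values[nbrs].mean()  # smoothed residual
    return (diffs - diffs.mean()) / diffs.std()

rng = np.random.default_rng(0)
coords = rng.uniform(0, 100, size=(200, 2))
values = coords[:, 0] * 0.1 + rng.normal(0, 1, 200)  # smooth spatial trend + noise
values[10] += 15                                      # inject one spatial outlier
scores = spatial_outlier_scores(coords, values)
print(np.where(np.abs(scores) > 3)[0])                # likely flags index 10
```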
16

Development of Building Markers and Unsupervised Non-intrusive Disaggregation Model for Commercial Buildings’ Energy Usage

Hossain, Mohammad Akram 01 June 2018 (has links)
No description available.
17

A spatial analysis of disaggregated commuting data: implications for excess commuting, jobs-housing balance, and accessibility

Lee, Wook 04 August 2005 (has links)
No description available.
18

INFORMATION AND INCENTIVES IN RETAIL SALES

Lee, Soojin January 2019 (has links)
I examine how managers mitigate the side effects of an overly complicated performance evaluation system in the context of the high-end retail industry. The standard performance evaluation system in the industry has evolved to include multiple performance measures. The detailed measures can incentivize employees to perform multiple performance-relevant activities, but they inevitably increase the complexity of the performance evaluation system. This complexity increases the risk of information overload for employees, decreasing judgment quality and potentially decreasing their performance. Drawing on the psychology literature, I postulate two factors that moderate the relationship between information overload and performance: 1) disaggregated feedback provides detailed information on each category of performance measures and compensates employees for each performance measure rather than for the overall performance level; 2) feedforward informs employees about how their actions affect their compensation. Both factors mitigate the negative performance effect of information overload by clarifying to employees the causality embedded in the complex performance evaluation system. I conduct two field experiments that implement the disaggregated feedback and feedforward policies, respectively, for sales outlets of a high-end retail firm, and examine whether the policies mitigate the information overload problem and improve performance. I find that the treatment group exhibits improvement in performance, suggesting that disaggregated feedback and feedforward reduce information overload. / Business Administration/Accounting
19

Modulation of conformational space and dynamics of unfolded outer membrane proteins by periplasmic chaperones

Chamachi, Neharika 03 June 2021 (has links)
Beta-barrel outer membrane proteins (OMPs) present on the outer membrane of Gram-negative bacteria are vital to cell survival. Their biogenesis is a challenging process which is tightly regulated by protein-chaperone interactions at various stages. Upon secretion from the inner membrane, OMPs are solubilized by the periplasmic chaperones seventeen kilodalton protein (Skp) and survival factor A (SurA) and maintained in a folding-competent state until they reach the outer membrane. As the periplasm is an energy-deficient environment, thermodynamics plays an important role in fine-tuning these chaperone-OMP interactions. Thus, a complete understanding of such associations necessitates an investigation into both the structural and thermodynamic aspects of the underlying intercommunication. Yet, these have been difficult to discern because of the conformational heterogeneity of the bound substrates, fast chain dynamics, and the aggregation-prone nature of OMPs. This demands the use of single-molecule spectroscopy techniques, specifically single-molecule Förster resonance energy transfer (smFRET). In this thesis, by leveraging the conformational and temporal resolution offered by smFRET, an exciting insight is obtained into the mechanistic and functional features of the unfolded and Skp/SurA-bound states of two differently sized OMPs: OmpX (8 β-strands) and outer membrane phospholipase A (OmpLA, 12 β-strands). First, it was elucidated that the unfolded states of both proteins exhibit slow interconversion within their sub-populations. Remarkably, upon complexing with chaperones, irrespective of the chosen OMP, the bound substrates expanded, with localised chain reconfiguration on a sub-millisecond timescale. Yet, due to the different interaction mechanisms employed by Skp (encapsulation) and SurA (multivalent binding), their clients were found to be characterised by distinct conformational ensembles. Importantly, the extracted thermodynamic parameters of change in enthalpy and entropy exemplified the mechanistically dissimilar functionalities of the two chaperones. Furthermore, both Skp and SurA were found to be capable of disintegrating aggregated OMPs rather cooperatively, highlighting their multifaceted chaperone activity. This work is of significant fundamental value towards understanding ubiquitous chaperone-protein interactions and opens up the possibility of designing drugs targeting the chaperone-OMP complex itself, one step ahead of OMP assembly on the outer membrane.
20

Prediction of Strong Ground Motion and Hazard Uncertainties

Tavakoli, Behrooz January 2003 (has links)
The purpose of this thesis is to provide a detailed description of recent methods and the scientific basis for characterizing earthquake sources within a certain region with distinct tectonic environments. The focus will be on those characteristics that are most significant to the ground-shaking hazard and on how we can incorporate our current knowledge into hazard analyses for engineering design purposes. I treat two particular geographical areas where I think current hazard analysis methods are in need of significant improvement, and suggest some approaches that have proven to be effective in past applications elsewhere. A combined hazard procedure is used to estimate seismicity in northern Central America, where there appear to be four tectonic environments for modeling the seismogenic sources, and in Iran, where large earthquakes usually occur on known faults. A preferred seismic hazard model for northern Central America and the western Caribbean plate, based on earthquake catalogs, geodetic measurements, and geological information, is presented. I used the widely practiced method of relating seismicity data to geological data to assess the various seismic hazard parameters and test parameter sensitivities. The sensitivity and overall uncertainty in peak ground acceleration (PGA) estimates are calculated for northwestern Iran by using a specific randomized blocks design. A Monte Carlo approach is utilized to evaluate the ground motion hazard and its uncertainties in northern Central America. A set of new seismic hazard maps, exhibiting probabilistic PGA values with 50%, 10%, and 5% probabilities of exceedance (PE) in 50 years, is presented for the area of relevance. Disaggregation of seismic hazard is carried out for the cities of San Salvador and Guatemala by using a spatial distribution of epicenters around these sites to select design ground motions for seismic risk decisions. In conclusion, consideration of the effect of parameters such as seismic moment, fault rupture, rupture directivity, and stress drop is strongly recommended when estimating near-field ground motions. The rupture process of the 2002 Changureh earthquake (Mw = 6.5), Iran, was analyzed using the empirical Green's function (EGF) method. This method simulates strong ground motions for future large earthquakes at particular sites where no empirical data are available.
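The exceedance levels quoted (50%, 10%, and 5% in 50 years) map to annual rates and return periods under the usual Poisson assumption; a short worked computation (illustrative only, not taken from the thesis):

```python
import math

def annual_rate(pe, years=50):
    """Invert PE = 1 - exp(-lam * T) for the annual exceedance rate lam."""
    return -math.log(1.0 - pe) / years

for pe in (0.50, 0.10, 0.05):
    lam = annual_rate(pe)
    print(f"{pe:.0%} PE in 50 yr -> annual rate {lam:.5f}, return period {1/lam:.0f} yr")
# 10% in 50 years corresponds to the familiar ~475-year return period.
```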
