  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Optimization of Disaggregated Space Systems Using the Disaggregated Integral Systems Concept Optimization Technology Methodology

Wagner, Katherine Mott 10 July 2020 (has links)
This research describes the development and application of the Disaggregated Integral Systems Concept Optimization Technology (DISCO-Tech) methodology. DISCO-Tech is a modular space system design tool that focuses on the optimization of disaggregated and non-traditional space systems. It uses a variable-length genetic algorithm to simultaneously optimize orbital parameters, payload parameters, and payload distribution for space systems. The solutions produced by the genetic algorithm are evaluated using cost estimation, coverage analysis, and spacecraft sizing modules. A set of validation cases is presented. DISCO-Tech is then applied to three representative space mission design problems. The first problem is the design of a resilient rideshare-manifested fire detection system. This analysis uses a novel framework for evaluating constellation resilience to threats using mixed integer linear programming. A solution is identified where revisit times of under four hours are achievable for $10.5 million, one quarter of the cost of a system manifested using dedicated launches. The second problem applies the same resilience techniques to the design of an expanded GPS monitor station network. Nine additional monitor stations are identified that allow the network to continuously monitor the GPS satellites even when five of the monitor stations are inoperable. The third problem is the design of a formation of satellites for performing sea surface height detection using interferometric synthetic aperture radar techniques. A solution is chosen that meets the performance requirements of an upcoming monolithic system at 70% of that system's cost. / Doctor of Philosophy / Civilians, businesses, and the government all rely on space-based resources for their daily operations. For example, the signal provided by GPS satellites is used by drivers, commercial pilots, soldiers, and more. Communications satellites provide phone and internet to users in remote areas.
Weather satellites provide short-term forecasting and measure climate change. Because of the importance of these and other space systems, it is necessary that they are designed in an efficient, reliable, and cost-effective manner. The Disaggregated Integral Systems Concept Optimization Technology (DISCO-Tech) is introduced as a means of designing these space systems. DISCO-Tech optimizes various aspects of the space mission, including the number of satellites needed to complete the mission, the location of the satellites, and the sensors that each satellite needs to accomplish its mission. This dissertation describes how DISCO-Tech works, then applies DISCO-Tech to several example missions. The first mission uses satellites to monitor forest fires in California. In order to reduce the cost of this mission, the satellites share launch vehicles with satellites from other, unrelated missions. Next, DISCO-Tech is used to choose the placement of new ground stations for GPS satellites. Because GPS is an important asset, this study also assesses the performance of the network of ground stations when some of the stations are inoperable. Finally, DISCO-Tech is used to design a group of satellites that measure sea level, since sea level is important for climatology research. A design is presented for a group of satellites that perform these measurements at a lower cost than a planned mission that uses a single satellite.
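The variable-length genetic algorithm at the heart of DISCO-Tech can be illustrated with a minimal sketch. The gene layout, parameter ranges, and mutation rates below are illustrative assumptions, not the actual DISCO-Tech encoding; the point is only that the mutation operator can grow or shrink the chromosome, letting the search explore different degrees of disaggregation:

```python
import random

def random_satellite(rng):
    # Hypothetical gene: (altitude_km, inclination_deg, payload_id)
    return (rng.uniform(400, 1200), rng.uniform(0, 98), rng.randrange(3))

def mutate(constellation, rng):
    """Variable-length mutation: besides perturbing an existing gene,
    the operator may add or remove a whole satellite, so candidate
    solutions with different numbers of spacecraft compete directly."""
    c = list(constellation)
    r = rng.random()
    if r < 0.2 and len(c) > 1:
        c.pop(rng.randrange(len(c)))        # remove a satellite
    elif r < 0.4:
        c.append(random_satellite(rng))     # add a satellite
    else:
        i = rng.randrange(len(c))
        alt, inc, pay = c[i]
        c[i] = (alt + rng.gauss(0, 25.0), inc, pay)  # perturb altitude
    return c
```

In a full optimizer, each mutated constellation would then be scored by the cost, coverage, and sizing modules the abstract describes.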
2

Temporal Mining Approaches for Smart Buildings Research

Shao, Huijuan 30 January 2017 (has links)
With the advent of modern sensor technologies, significant opportunities have opened up to help conserve energy in residential and commercial buildings. Moreover, the rapid urbanization we are witnessing requires optimized energy distribution. This dissertation focuses on two sub-problems in improving energy conservation: energy disaggregation and occupancy prediction. Energy disaggregation attempts to separate the energy usage of each circuit or each electric device in a building using only aggregate electricity usage information from the meter for the whole house. The second problem, occupancy prediction, can be addressed using non-invasive indoor activity tracking to predict the locations of people inside a building. We cast both problems as temporal mining problems. We exploit motif mining with constraints to distinguish devices with multiple states, which helps tackle the energy disaggregation problem. Our results reveal that motif mining is adept at distinguishing devices with multiple power levels and at disentangling the combinatorial operation of devices. For the second problem we propose time-gap constrained episode mining to detect activity patterns, followed by the use of a mixture of episode-generating HMM (EGH) models to predict home occupancy. Finally, we demonstrate that the mixture EGH model can also help predict the location of a person to address non-invasive indoor activity tracking. / Ph. D.
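As a toy illustration of the motif-mining idea (not the dissertation's actual constrained algorithm), an aggregate power trace can be quantized into discrete power levels and scanned for recurring fixed-width subsequences; the repeats are candidate device signatures. The quantization step and window width below are made-up example parameters:

```python
from collections import Counter

def find_motifs(power, width=3, level_size=50, min_count=2):
    """Quantize an aggregate power trace into discrete levels, then
    count repeated fixed-width subsequences (motifs). Motifs that
    recur at least min_count times are candidate device signatures,
    e.g. the characteristic on/off cycle of an appliance."""
    sym = [round(p / level_size) for p in power]  # quantize to levels
    grams = Counter(tuple(sym[i:i + width])
                    for i in range(len(sym) - width + 1))
    return {g: c for g, c in grams.items() if c >= min_count}
```

A real disaggregator would additionally constrain motifs (e.g. on state durations) and match them against known multi-level device models, as the abstract describes.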
3

Increased Trust: The Effect of Disaggregated Financial Statements on Potential Nonprofit Donations

Schmelzer, Anthony Andrew 06 July 2018 (has links)
No description available.
4

Non-Parametric Learning for Energy Disaggregation

Khan, Mohammad Mahmudur Rahman 10 August 2018 (has links)
This thesis work presents a non-parametric learning method, the Extended Nearest Neighbor (ENN) algorithm, as a tool for data disaggregation in smart grids. The ENN algorithm makes the prediction according to the maximum gain of intra-class coherence. This algorithm not only considers the K nearest neighbors of the test sample but also considers whether these K data points consider the test sample as their nearest neighbor or not. So far, ENN has shown noticeable improvement in the classification accuracy for various real-life applications. To further enhance its prediction capability, in this thesis we propose to incorporate a metric learning algorithm, namely the Large Margin Nearest Neighbor (LMNN) algorithm, as a training stage in ENN. Our experiments on real-life energy data sets have shown significant performance improvement compared to several other traditional classification algorithms, including the classic KNN method and Support Vector Machines.
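The ENN idea described above can be sketched in a few lines. This simplified version uses a plain intra-class coherence statistic (fraction of same-class pairs among each point's k nearest neighbors over the augmented data set); the published algorithm's generalized class-wise statistic and the LMNN metric-learning stage are omitted:

```python
import numpy as np

def enn_predict(X_train, y_train, x_test, k=3):
    """Simplified Extended Nearest Neighbor (ENN) sketch: try assigning
    the test sample to each class in turn, and keep the class that
    maximizes intra-class coherence over the augmented set. Unlike
    plain KNN, every point's neighborhood is affected by the test
    sample, so the prediction also reflects whether training points
    would count the test sample among their own nearest neighbors."""
    classes = np.unique(y_train)
    best_class, best_score = None, -np.inf
    for c in classes:
        X = np.vstack([X_train, x_test])
        y = np.append(y_train, c)
        score = 0.0
        for i in range(len(X)):
            d = np.linalg.norm(X - X[i], axis=1)
            d[i] = np.inf                      # exclude the point itself
            nn = np.argsort(d)[:k]
            score += np.mean(y[nn] == y[i])    # same-class neighbor rate
        if score > best_score:
            best_class, best_score = c, score
    return best_class
```

The LMNN stage the thesis proposes would learn a Mahalanobis metric first, replacing the Euclidean distance used here.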
5

Netswap: Network-based Swapping for Server-Embedded Board Clusters

Errabelly, Sandeep 05 July 2023 (has links)
Capital equipment costs and energy costs are the major cost drivers in datacenters. Prior works have explored various techniques, like efficient scheduling algorithms and advanced power management, to maximize resource utilization and thereby reduce capital and energy costs. The HEXO project has explored heterogeneous-Instruction Set Architecture (ISA) server-embedded clusters to minimize these costs. HEXO's key idea is to migrate stateful virtual machines from high-performance x86-based servers to low-power, low-cost ARM-based embedded boards, reducing the servers' resource congestion and thereby improving throughput and energy efficiency. However, embedded boards generally have significantly lower onboard memory, typically in the range of 100MB to 4GB. Due to this limitation, high memory-demand applications cannot be migrated to embedded devices. This limits the scope of applications that can be used with heterogeneous-ISA server-embedded clusters such as HEXO. This thesis proposes Netswap, a mechanism that utilizes the server's free memory as remote memory for the embedded board. Netswap comprises three main components: the swap-out and swap-in mechanism, a bitmap-based Free Memory Manager, and the Netswap Remote Daemon. Experimental studies using micro- and macrobenchmarks reveal that Netswap improves the throughput and energy efficiency of server-embedded clusters by as much as 40% and 20%, respectively, over server-only baselines. / Master of Science / Capital costs and energy costs are major expenditures for datacenters. The HEXO project addresses these expenditures by including small embedded devices in datacenters. These embedded devices are cheaper and consume less energy than a typical server, but they have limited onboard RAM. This memory limitation restricts HEXO's ability to run applications with higher memory demand.
This thesis introduces Netswap, which solves this issue by utilizing the free memory available on the servers as secondary memory for the connected embedded devices. We discuss various design choices for efficiently implementing such a remote memory mechanism.
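The bitmap-based Free Memory Manager component can be sketched compactly. This toy version (the page-index API and sizes are assumptions for illustration, not Netswap's actual implementation) tracks one bit per remote page of server memory:

```python
class BitmapFreeMemoryManager:
    """Toy bitmap allocator: one bit per remote page on the server;
    0 = free, 1 = allocated to some embedded board. A bitmap keeps
    the bookkeeping overhead to a single bit per page."""

    def __init__(self, num_pages):
        self.num_pages = num_pages
        self.bitmap = bytearray((num_pages + 7) // 8)

    def alloc(self):
        """Return the index of a free remote page, or None if full."""
        for i in range(self.num_pages):
            byte, bit = divmod(i, 8)
            if not (self.bitmap[byte] >> bit) & 1:
                self.bitmap[byte] |= 1 << bit
                return i
        return None

    def free(self, i):
        """Mark page i as free again."""
        byte, bit = divmod(i, 8)
        self.bitmap[byte] &= ~(1 << bit) & 0xFF
```

In the full system, a swap-out would pair such an allocation with a network transfer of the evicted page to the Netswap Remote Daemon.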
6

Towards a Mechanistic Understanding of the Molecular Chaperone Hsp104

Lum, Ronnie 18 February 2011 (has links)
The AAA+ chaperone Hsp104 mediates the reactivation of aggregated proteins in Saccharomyces cerevisiae and is crucial for cell survival after exposure to stress. Protein disaggregation depends on cooperation between Hsp104 and a cognate Hsp70 chaperone system. Hsp104 forms a hexameric ring with a narrow axial channel penetrating the centre of the complex. In Chapter 2, I show that conserved loops in each AAA+ module that line this channel are required for disaggregation and that the position of these loops is likely determined by the nucleotide bound state of Hsp104. This evidence supports a common protein remodeling mechanism among Hsp100 members in which proteins are unfolded and threaded along the axial channel. In Chapter 3, I use a peptide-based substrate mimetic to reveal other novel features of Hsp104’s disaggregation mechanism. An Hsp104-binding peptide selected from solid phase arrays recapitulated several properties of an authentic Hsp104 substrate. Inactivation of the pore loops in either AAA+ module prevented stable peptide or protein binding. However, when the loop in the first AAA+ was inactivated, stimulation of ATPase turnover in the second AAA+ module of this mutant was abolished. Drawing on these data, I propose a detailed mechanistic model of protein unfolding by Hsp104 in which an initial unstable interaction involving the loop in the first AAA+ module simultaneously promotes penetration of the substrate into the second axial channel binding site and activates ATP turnover in the second AAA+ module. In Chapter 4, I explore the recognition elements within a model Hsp104-binding peptide that are required for rapid binding to Hsp104. Removal of bulky hydrophobic residues and lysines abrogated the ability of this peptide to function as a peptide-based substrate mimetic for Hsp104. Furthermore, rapid binding of a model unfolded protein to Hsp104 required an intact N-terminal domain and ATP binding at the first AAA+ module. 
Taken together, I have defined numerous structural features within Hsp104 and its model substrates that are crucial for substrate binding and processing by Hsp104. This work provides a theoretical framework that will encourage research in other protein remodeling AAA+ ATPases.
7

Désagrégation spatiale des données de mobilité du recensement de la population appliquée à l'Ile-de-France / Spatial disaggregation of population census mobility data applied to the Île-de-France region

Pivano, Cyril 20 October 2016 (has links)
In progress / In progress
8

Elicitation indirecte de modèles de tri multicritère / Indirect elicitation of multicriteria sorting models

Cailloux, Olivier 14 November 2012 (has links)
Le champ de l’Aide Multicritère à la Décision (AMCD) propose des manières de modéliser formellement les préférences d’un décideur de façon à lui apporter des éclaircissements. Le champ s’intéresse aux problèmes impliquant une décision et faisant intervenir plusieurs points de vue pour évaluer les options (ou alternatives) disponibles. Ce travail vise principalement à proposer des méthodes d’élicitation, donc des façons de questionner un décideur ou un groupe de décideurs pour obtenir un ou plusieurs modèles de préférence. Ces méthodes utilisent des techniques dites de désagrégation consistant à prendre des exemples de décision pour base de la modélisation. Dans le contexte étudié, les modèles de préférence sont des modèles de tri : ils déterminent une façon d’affecter des alternatives à des catégories ordonnées par préférence. Nous nous intéressons à la classe de modèles de tri MR Sort. Nous présentons une méthode permettant de faire converger un groupe de décideurs vers un modèle de tri unique. Elle s’appuie sur des programmes mathématiques. Nous analysons également en détail les difficultés liées aux imprécisions numériques posées par l’implémentation de ces programmes. Nous proposons aussi un algorithme permettant de comparer deux modèles MR Sort. Nous introduisons une manière novatrice d’interroger le décideur d’une façon qui permet de prendre en compte ses hésitations, via l’expression de degrés de crédibilités, lorsqu’il fournit des exemples d’affectation. Les résultats de la méthode permettent au décideur de visualiser les compromis possibles entre la crédibilité et la précision des conclusions obtenues. Nous proposons une méthode de choix de portefeuille. Elle intègre des préoccupations d’évaluation absolue, afin de s’assurer de la qualité intrinsèque des alternatives sélectionnées, et des préoccupations portant sur l’équilibre du portefeuille résultant. 
Nous expliquons également en quoi cette méthode peut constituer une alternative à la discrimination positive. Nous décrivons les composants logiciels réutilisables que nous avons soumis à une plateforme de services web, ainsi que les fonctionnalités développées dans une bibliothèque qui implémente les méthodes proposées dans ce travail. Un schéma de données existe visant à standardiser l’encodage de données de méthodes d’AMCD en vue de faciliter la communication entre composants logiciels. Nous proposons une nouvelle approche visant à résoudre un certain nombre d’inconvénients de l’approche actuelle. Nous développons en guise de perspective une proposition visant à inscrire la modélisation des préférences dans une épistémologie de type réaliste. / The field of Multicriteria Decision Aid (MCDA) aims to model in a formal way the preferences of a decision maker (DM) in order to provide information that can help her in a decision problem. MCDA is interested in situations where the available options (called alternatives) are evaluated on multiple points of view. This work suggests elicitation methods: ways of questioning a DM or a group of DMs in order to obtain one or several preference models. These methods rely on so-called disaggregation techniques, which use exemplary decisions as a basis for building the preference model. In our context, the preference models are sorting models: they determine a way of assigning alternatives to preference-ordered categories. We are interested in a class of sorting models called MR Sort. We present a method that helps a group of DMs converge to a unique sorting model. It uses mathematical programs. We also analyze in detail the difficulties due to numerical imprecision when implementing these programs, and we propose an algorithm for comparing two MR Sort models.
We introduce a novel way of interrogating the DM in order to take her hesitations into account, through the expression of degrees of credibility, when she gives assignment examples. Results of the method let the DM examine possible compromises between credibility and precision of the conclusions. We propose a method to choose portfolios. It encompasses two dimensions: absolute evaluation, in order to ensure that the selected alternatives are sufficiently good, and balance of the resulting portfolio. We also explain how this method compares to affirmative action. We describe the reusable software components that we have submitted to a web services platform, as well as functionalities developed in a library that implements the methods this work proposes. A data scheme exists that aims to standardize the encoding of data related to MCDA methods, in order to ease communication between software components. We propose a new approach aiming to solve some drawbacks of the current approach. We develop as a perspective a proposal that aims to integrate preference modeling into the framework of realistic epistemology.
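The MR Sort assignment rule mentioned in this abstract can be sketched as follows. The profiles, weights, and majority threshold below are made-up example inputs, and veto conditions and the optimistic assignment variant are ignored:

```python
def mr_sort(alternative, profiles, weights, majority):
    """Pessimistic MR Sort sketch: an alternative reaches category k+1
    if it is at least as good as the k-th category's lower profile on
    a coalition of criteria whose weights sum to at least the majority
    threshold. Profiles must be listed from the lowest category
    boundary to the highest."""
    category = 0
    for k, profile in enumerate(profiles):
        support = sum(w for a, b, w in zip(alternative, profile, weights)
                      if a >= b)
        if support >= majority:
            category = k + 1
        else:
            break  # profiles are ordered, so higher ones cannot pass
    return category
```

Disaggregation, as described in the abstract, works in the other direction: given assignment examples from the DMs, mathematical programs infer profiles, weights, and the majority threshold consistent with those examples.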
9

Stochastic Disaggregation of Daily Rainfall for Fine Timescale Design Storms

Mahbub, S. M. Parvez Bin, s.mahbub@qut.edu.au January 2008 (has links)
Rainfall data are usually gathered at daily timescales due to the availability of daily rain-gauges throughout the world. However, rainfall data at fine timescales are required for certain hydrologic modelling applications, such as crop simulation modelling and erosion modelling. Limited availability of such data leads to the option of daily rainfall disaggregation. This research investigates the use of a stochastic rainfall disaggregation model on a regional basis to disaggregate daily rainfall into any desired fine timescale in the State of Queensland, Australia. With the incorporation of seasonality into the variance relationship and capping of the fine timescale maximum intensities, the model was found to be a useful tool for disaggregating daily rainfall in the regions of Queensland. The degree of model complexity in terms of binary chain parameter calibration was also reduced by using only three parameters for Queensland. The resulting rainfall Intensity-Frequency-Duration (IFD) curves better predicted the intensities at fine timescale durations compared with the existing Australian Rainfall and Runoff (ARR) approach. The model has also been linked to the SILO Data Drill synthetic data to disaggregate daily rainfall at sites where limited or no fine timescale observed data are available. This research has analysed the fine timescale rainfall properties at various sites in Queensland and established sufficient confidence in using the model for Queensland.
10

Temporal Disaggregation of Daily Precipitation Data in a Changing Climate

Wey, Karen January 2006 (has links)
Models for spatially interpolating hourly precipitation data and temporally disaggregating daily precipitation to hourly data are developed for application to multisite scenarios at the watershed scale. The intent is to create models to produce data which are valid input for a hydrologic rainfall-runoff model, from daily data produced by a stochastic weather generator. These models will be used to determine the potential effects of climate change on local precipitation events. A case study is presented applying these models to the Upper Thames River basin in Ontario, Canada; however, these models are generic and applicable to any watershed with few changes.

Some hourly precipitation data were required to calibrate the temporal disaggregation model. Spatial interpolation of this hourly precipitation data was required before temporal disaggregation could be completed. Spatial interpolation methods were investigated and an inverse distance method was applied to the data. Analysis of the output from this model confirms that isotropy is a valid assumption for this application and illustrates that the model is robust. The results for this model show that further study is required for accurate spatial interpolation of hourly precipitation data at the watershed scale.

An improved method of fragments is used to perform temporal disaggregation on daily precipitation data. A parsimonious approach to multisite fragment calculation is introduced within this model as well as other improvements upon the methods presented in the literature. The output from this model clearly indicates that spatial and temporal variations are maintained throughout the disaggregation process. Analysis of the results indicates that the model creates plausible precipitation events.

The models presented here were run for multiple climate scenarios to determine which GCM scenario has the most potential to affect precipitation. Discussion on the potential impacts of climate change on the region of study is provided. Selected events are examined in detail to give a representation of extreme precipitation events which may be experienced in the study area due to climate change.
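The method of fragments underlying this kind of temporal disaggregation can be illustrated with a toy sketch. The data layout and the nearest-total selection rule are assumptions for illustration; the thesis's improved multisite variant is more involved:

```python
def disaggregate_daily(daily_total, fragment_library):
    """Method-of-fragments sketch: borrow the 24 hourly fractions
    (summing to 1) recorded for the observed day whose daily total is
    closest to the target, then scale them so the disaggregated hourly
    values sum exactly to daily_total."""
    donor = min(fragment_library,
                key=lambda f: abs(f["total"] - daily_total))
    return [daily_total * w for w in donor["fractions"]]
```

Because the fragments come from observed days, the within-day temporal pattern of real storms is preserved, which is what makes the disaggregated series plausible input for a rainfall-runoff model.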
