1071 |
Marivaux moraliste dans Le spectateur français / Rouben, César, January 1964
No description available.
|
1072 |
Framework for active solar collection systems / Hassan, Marwa M., 01 July 2003
A framework presenting a new methodology for the design and evaluation of active solar collection systems was developed. Although this methodology emphasizes the importance of detailed modeling for accurate prediction of building performance, it also presents a process through which the detailed modeling results can be reused in a simplified iterative procedure, giving the designer the flexibility to revise and improve the preliminary design. For demonstration purposes, the framework was used to design and evaluate two case studies, located in Blacksburg (VA) and Minneapolis (MN). These locations were selected because both represent cold-weather regions, presenting a need for solar energy to meet heating and hot water requirements, while the cold weather in Blacksburg is less severe than in Minneapolis. The two cases therefore result in different thermal loading structures, enabling validation of the framework. The solar collection system supplying both case studies consisted of a low-temperature flat-plate solar collector and a storage system.
Thermal performance of the Blacksburg case study was evaluated using detailed modeling techniques, while thermal performance of the Minneapolis case study was evaluated using a simplified modeling technique. In the first case study, hourly evaluation of the thermal performance of the solar collection system was accomplished using finite element (FE) analysis, while hourly evaluation of the building thermal performance was made using the EnergyPlus software. The results of the finite element analysis were used to develop a statistical predictive design equation. The energy consumption for the second case study was calculated using the heating design day method, and the energy collection for that case study was calculated using the predictive design equation developed from the first case study's results. Results showed that, for the building located in Blacksburg, the solar collection system can supply an average of 85% of the building's heating and hot water requirements throughout the year. For the building located in Minneapolis, the solar collection system can supply an average of 56% of the building's heating and hot water requirements throughout the year, assuming no nighttime window insulation and similar insulation thicknesses for both cases. / Ph. D.
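The abstract does not give the collector model itself; as a rough illustration of the kind of hourly energy balance such a detailed or simplified evaluation rests on, the sketch below uses the standard Hottel-Whillier-Bliss relation for a flat-plate collector. All parameter values are hypothetical, not taken from the dissertation.

```python
# Illustrative only: a standard flat-plate collector energy balance
# (Hottel-Whillier-Bliss). Parameter values are hypothetical and not
# taken from the dissertation.

def collector_useful_gain(area_m2, f_r, tau_alpha, u_l, irradiance_w_m2,
                          t_inlet_c, t_ambient_c):
    """Hourly useful energy gain of a flat-plate collector, in watts.

    Q_u = A_c * F_R * [G_T*(tau*alpha) - U_L*(T_in - T_amb)], floored at
    zero since the collector is bypassed when losses exceed absorption.
    """
    gain = area_m2 * f_r * (irradiance_w_m2 * tau_alpha
                            - u_l * (t_inlet_c - t_ambient_c))
    return max(gain, 0.0)

# Example hour: 800 W/m^2 on a 40 m^2 array feeding 40 C water at 5 C ambient.
q_u = collector_useful_gain(area_m2=40.0, f_r=0.8, tau_alpha=0.85, u_l=4.5,
                            irradiance_w_m2=800.0, t_inlet_c=40.0,
                            t_ambient_c=5.0)
print(f"Useful gain: {q_u / 1000:.1f} kW")
```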
|
1073 |
An On-Road Investigation of Commercial Motor Vehicle Operators and Self-Rating of Alertness and Temporal Separation as Indicators of Driver Fatigue / Belz, Steven M., 29 November 2000
This on-road field investigation employed, for the first time, a completely automated, trigger-based data collection system capable of evaluating driver performance in an extended-duration, real-world commercial motor vehicle environment. The complexities associated with the development of the system, both technological and logistical, and the necessary modifications to the plan of research are presented herein.
This study, performed in conjunction with an ongoing three-year contract with the Federal Highway Administration, examined the use of self-rating of alertness and temporal separation (minimum time-to-collision, minimum headway, and mean headway) as indicators of driver fatigue. Without exception, the regression analyses for both the self-rating of alertness and temporal separation yielded models low in predictive ability; neither metric was found to be a valid indicator of driver fatigue. Various reasons for the failure of self-rating of fatigue as a valid measure are discussed. Dispersion in the data, likely due to extraneous (non-fatigue-related) factors (e.g., other drivers), is credited with reducing the sensitivity of the temporal separation indicators.
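The abstract does not define the three temporal separation metrics formally; the sketch below computes them as they are conventionally defined in car-following research, from range and speed samples. The function and variable names are hypothetical, not the study's instrumentation code.

```python
# Illustrative definitions of the three temporal-separation metrics named
# above, as conventionally defined in car-following research.

def temporal_separation_metrics(ranges_m, own_speeds_mps, lead_speeds_mps):
    """Return (min time-to-collision, min headway, mean headway).

    Time-to-collision = range / closing speed (defined only while closing).
    Headway = range / following vehicle's own speed.
    """
    ttcs, headways = [], []
    for r, v_own, v_lead in zip(ranges_m, own_speeds_mps, lead_speeds_mps):
        closing = v_own - v_lead
        if closing > 0:            # TTC is defined only while closing in
            ttcs.append(r / closing)
        if v_own > 0:
            headways.append(r / v_own)
    return (min(ttcs) if ttcs else float("inf"),
            min(headways) if headways else float("inf"),
            sum(headways) / len(headways) if headways else float("inf"))

min_ttc, min_hw, mean_hw = temporal_separation_metrics(
    ranges_m=[30.0, 25.0, 22.0],
    own_speeds_mps=[25.0, 25.0, 25.0],
    lead_speeds_mps=[22.0, 23.0, 25.0])
print(min_ttc, min_hw, mean_hw)
```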
Overall fatigue levels for all temporal separation incidents (those with a time-to-collision of four seconds or less) were found to be significantly higher than for randomly triggered incidents. On this basis, it is surmised that temporal separation may be a sensitive indicator for time-to-collision values greater than the 4-second criterion employed in this study.
Two unexpected relationships in the data are also discussed. A "wall" effect was found to exist for minimum time-to-collision values at 1.9 seconds: none of the participants in this research effort exhibited following behaviors with a time-to-collision below 1.9 seconds. In addition, based upon the data collected for this research, anecdotal evidence suggests that commercial motor vehicle operators do not appear to follow the standard progression of events associated with the onset of fatigue. / Ph. D.
|
1074 |
Parallel Inverted Indices for Large-Scale, Dynamic Digital Libraries / Sornil, Ohm, 09 February 2001
The dramatic increase in the amount of content available in digital form gives rise to large-scale digital libraries, targeted to support millions of users and terabytes of data. Retrieving information from a system of this scale in an efficient manner is a challenging task due to the size of the collection as well as of the index. This research deals with the design and implementation of an inverted index that supports searching for information in a large-scale digital library, implemented atop a massively parallel storage system. Inverted index partitioning is studied in a simulation environment, aiming at a terabyte of text. As a result, a high-performance partitioning scheme is proposed: it combines the best qualities of the term and document partitioning approaches in a new Hybrid Partitioning Scheme. Simulation experiments show that this organization provides good performance over a wide range of conditions. Further, the issues of creation and incremental updates of the index are considered. A disk-based inversion algorithm and an extensible inverted index architecture are described, and experimental results with actual collections are presented. Finally, distributed algorithms to create a parallel inverted index partitioned according to the hybrid scheme are proposed, and performance is measured on a portion of the equipment that normally makes up the 100-node Virginia Tech PetaPlex™ system.
NOTE: (02/2007) An updated copy of this ETD was added after there were patron reports of problems with the file. / Ph. D.
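The abstract names term partitioning, document partitioning, and the Hybrid Partitioning Scheme without spelling them out; the toy sketch below illustrates the general idea (whole posting lists hashed to nodes, postings spread by document ID, and a hybrid that chunks each posting list across nodes). This is an assumption-laden illustration, not the dissertation's implementation.

```python
# Toy sketch of inverted-index partitioning across nodes. In term
# partitioning each node owns whole posting lists for a subset of terms;
# in document partitioning each node indexes a subset of documents; a
# hybrid splits each posting list into document chunks and spreads the
# chunks, so long lists gain parallelism while short lists stay local.

postings = {                      # term -> sorted list of document IDs
    "library": [1, 4, 7, 9],
    "digital": [2, 4, 8],
    "index":   [1, 2, 3, 9],
}

def term_partition(postings, n_nodes):
    nodes = [dict() for _ in range(n_nodes)]
    for term, docs in postings.items():
        nodes[hash(term) % n_nodes][term] = docs   # whole list on one node
    return nodes

def document_partition(postings, n_nodes):
    nodes = [dict() for _ in range(n_nodes)]
    for term, docs in postings.items():
        for d in docs:                             # postings follow doc ID
            nodes[d % n_nodes].setdefault(term, []).append(d)
    return nodes

def hybrid_partition(postings, n_nodes, chunk=2):
    nodes = [dict() for _ in range(n_nodes)]
    for term, docs in postings.items():
        for i in range(0, len(docs), chunk):       # chunk the posting list
            node = (hash(term) + i // chunk) % n_nodes
            nodes[node].setdefault(term, []).extend(docs[i:i + chunk])
    return nodes

print(hybrid_partition(postings, n_nodes=2))
```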
|
1075 |
Investigation of Liquid Trapping Following Supercritical Fluid Extraction / McDaniel, Lori Heldreth, 30 September 1999
Supercritical fluid extraction (SFE) is an alternative to traditional extractions with organic solvents. SFE consists of removing the analyte(s) from the matrix, solubilizing them, moving them into the bulk fluid, and sweeping the fluid containing them out of the extraction vessel.
As the fluid leaves the extraction vessel, it decompresses, changing its volume and temperature, which can lead to analyte loss.
This work focused on the trapping process, with the restrictor immersed in a liquid, after SFE. Experiments compared the effects of trapping parameters on the collection efficiencies of fat-soluble vitamins of similar polarities and structures. The most important variable was the selection of the collection solvent; its physical properties, such as viscosity, surface tension, and density, were found to be important.
Additionally, adding a modifier to the collection solvent, in an attempt to change its physical properties and influence collection efficiencies for a polarity test mix, was studied. Addition of a modifier can improve collection efficiencies and allow higher collection temperatures to be used, but the modifier did not increase trapping recoveries to the extent that collection pressurization did.
The occurrence of a methylation reaction of decanoic acid during the SFE and collection processes, using a methanol-modified fluid or collection solvent, was investigated. The majority of the reaction occurred during the collection process, and the degree of methylation was found to be dependent on temperature, but not on static or dynamic extraction time. When no acidic catalyst other than carbon dioxide in the presence of water was present, conversion was limited to about 2%, but it was quantitative with an added acidic catalyst.
The last portion of this work involved the application of the SFE process to the extraction and analysis of extractable material in eight hardwood and softwood pulp samples. Grinding the samples increased extractable fatty acid methyl esters (FAMEs) ten-fold, and in-situ derivatizations resulted in higher FAME recoveries than derivatization after SFE. Liquid trapping enhanced recoveries of lower FAMEs when compared to tandem (solid/liquid) trapping. In-situ acetylations sometimes yielded acetylated glucoses. Large differences in FAME concentrations were seen for the hardwood samples, but lesser differences were seen for the softwood pulp samples. / Ph. D.
|
1076 |
Optimizing Information Freshness in Wireless Networks / Li, Chengzhang, 18 January 2023
Age of Information (AoI) is a performance metric that can be used to measure the freshness of information. Since its inception, it has captured the attention of the research community and is now an area of active research. By definition, AoI measures the elapsed time between the present moment and the generation time of the information. AoI is fundamentally different from traditional metrics such as delay or latency, as the latter only consider the transit time for a packet to traverse the network.
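As a worked illustration of the distinction drawn above (this is the standard formulation of AoI from the literature, not a result specific to this dissertation):

```latex
% Standard AoI formulation; U(t) is the generation time of the newest
% update received by time t.
\Delta(t) = t - U(t)
% For a packet generated at time g_i and delivered at time d_i, delay is
% fixed at delivery: D_i = d_i - g_i. AoI instead grows linearly between
% deliveries and only resets to d_i - g_i when a fresher update arrives,
% so it penalizes infrequent sampling as well as slow transit.
```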
Among the state of the art in the literature, we identify two limitations that deserve further investigation. First, many existing efforts on AoI have been limited to information-theoretic exploration, considering extremely simple models and unrealistic assumptions that are far from real-world communication systems. Second, among most existing work on scheduling algorithms to optimize AoI, there is a lack of research on guaranteeing AoI deadlines. The goal of this dissertation is to address these two limitations. First, we design schedulers to minimize AoI under more practical settings, including varying sampling periods, varying sample sizes, cellular transmission models, and dynamic channel conditions. Second, we design schedulers to guarantee hard or soft AoI deadlines for each information source. More importantly, inspired by our results on guaranteeing AoI deadlines, we develop a general design framework that can be applied to construct high-performance schedulers for AoI-related problems.
This dissertation is organized into three parts. In the first part, we study two problems on AoI minimization under general settings. (i) We consider general and heterogeneous sampling behaviors among source nodes, varying sample size, and a cellular-based transmission model.
We develop a near-optimal low-complexity scheduler---code-named Juventas---to minimize AoI. (ii) We study the AoI minimization problem under a 5G network with dynamic channels. To meet the stringent real-time requirement for 5G, we develop a GPU-based near-optimal algorithm---code-named Kronos---and implement it on commercial off-the-shelf (COTS) GPUs.
In the second part, we investigate three problems on guaranteeing AoI deadlines. (i) We study the problem of guaranteeing a hard AoI deadline for the information from each source. We present a novel low-complexity procedure, called Fictitious Polynomial Mapping (FPM), and prove that FPM can find a feasible scheduler for any hard deadline vector when the system load is under ln 2. (ii) For soft AoI deadlines, i.e., when occasional violations can be tolerated, we present a novel procedure called Unstable Tolerant Scheduler (UTS). UTS hinges upon the notions of Almost Uniform Schedulers (AUSs) and step-down rate vectors. We show that UTS has strong performance guarantees under different settings. (iii) We investigate a 5G scheduling problem to minimize the proportion of time when the AoI exceeds a soft deadline. We derive a property called uniform fairness and use it as a guideline to develop a 5G scheduler---Aequitas. To meet the real-time requirement in 5G, we implement Aequitas on a COTS GPU.
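The abstract states the ln 2 threshold without defining the system load; as a hedged sketch of what such a schedulability check looks like in practice, the snippet below assumes a load defined as per-source transmission time divided by AoI deadline, summed over sources. That definition is an assumption for illustration, not necessarily the one used by FPM.

```python
import math

# Hedged illustration of a ln(2)-style schedulability check for hard AoI
# deadlines. The load definition below is an assumption for illustration;
# the exact definition used by FPM is given in the dissertation.

def system_load(tx_times, deadlines):
    """Sum of per-source (transmission time / deadline) terms."""
    return sum(w / d for w, d in zip(tx_times, deadlines))

def feasible_under_ln2(tx_times, deadlines):
    return system_load(tx_times, deadlines) < math.log(2)  # ~0.693

# Three sources: transmission times (slots) and hard AoI deadlines (slots).
print(feasible_under_ln2(tx_times=[1, 1, 2], deadlines=[10, 8, 20]))
```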
In the third part, we present Eywa---a general design framework that can be applied to construct high-performance schedulers for AoI-related optimization and decision problems. The design of Eywa is inspired by the notions of AUS schedulers and step-down rate vectors when we develop UTS in the second part. To validate the efficacy of the proposed Eywa framework, we apply it to solve a number of problems, such as minimizing the sum of AoIs, minimizing bandwidth requirement under AoI constraints, and determining the existence of feasible schedulers to satisfy AoI constraints. We find that for each problem, Eywa can either offer a stronger performance guarantee than the state-of-the-art algorithms, or provide new/general results that are not available in the literature. / Doctor of Philosophy / Age of Information (AoI) is a performance metric that can be used to measure the freshness of information. It measures the elapsed time period between the present time and the generation time of the information. Through a literature review, we have identified two limitations: (i) many existing efforts on AoI have employed extremely simple models and unrealistic assumptions, and (ii) most existing work focuses on optimizing AoI, while overlooking AoI deadline requirements in some applications.
The goal of this dissertation is to address these two limitations. For the first limitation, we study the problem to minimize the average AoI in general and practical settings, such as dynamic channels and 5G NR networks. For the second limitation, we design schedulers to guarantee hard or soft AoI deadlines for information from each source. Finally, we develop a general design framework that can be applied to construct high-performance schedulers for AoI-related problems.
|
1077 |
The Importance of Data in RF Machine Learning / Clark IV, William Henry, 17 November 2022
While the toolset known as Machine Learning (ML) is not new, several of the tools available within it have seen revitalization with improved hardware and have been applied across several domains in the last two decades. Deep Neural Network (DNN) applications have contributed to significant research on Radio Frequency (RF) problems over the last decade, spurred by results in image and audio processing. Machine Learning (ML), and Deep Learning (DL) specifically, are driven by access to relevant data during the training phase of the application, because the learned feature sets are derived from vast amounts of similar data. Despite this critical reliance on data, the literature provides insufficient answers on how to quantify the training-data needs of an application in order to achieve a desired level of performance.
This dissertation first aims to create a practical definition that bounds the problem space of Radio Frequency Machine Learning (RFML), which we take to mean the application of ML as close to the sampled baseband signal, directly after digitization, as possible, while allowing for preprocessing when reasonably defined and justified. After constraining the problem to the RFML domain space, the kinds of ML that have been applied, as well as the techniques that have shown benefits, are reviewed from the literature. With the problem space defined and the trends in the literature examined, the next goal is to provide a better understanding of the concept of data quality through quantification. This quantification helps explain how data quality affects the final performance of ML systems, how it drives the quantity of data observations required within that space, and how its impacts can be generalized and contrasted. With an understanding of how data quality and quantity affect the performance of a system in the RFML space, data generation techniques and realizations, from conceptual through real-time hardware implementations, are examined. Consequently, the results of this dissertation provide a foundation for estimating the investment required to realize a performance goal within a Deep Learning (DL) framework, as well as a rough order of magnitude for common goals within the RFML problem space. / Doctor of Philosophy / Machine Learning (ML) is a powerful toolset capable of solving difficult problems across many domains. A fundamental part of this toolset is the representative data used to train a system. Unlike the domains of image or audio processing, for which datasets are constantly being developed thanks to usage agreements with entities such as Facebook, Google, and Amazon, the field of ML within the Radio Frequency (RF) domain, or Radio Frequency Machine Learning (RFML), does not have access to such crowdsourced means of creating labeled datasets. Therefore, data within the RFML problem space must be intentionally cultivated to address the target problem.
This dissertation explains the problem space of RFML and then quantifies the effect of data quality on the training of RFML systems. Taking this a step further, the work provides a means of estimating the data quantity needed to achieve high levels of performance based on the current Deep Learning (DL) approach to the problem, which in turn can be used as guidance to refine the approach when real-world data quantity requirements exceed practical acquisition levels. Finally, the problem of data generation is examined, providing context for the difficulties associated with procuring high-quality data for problems in the RFML space.
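The abstracts above do not state how such a data-quantity estimate would be computed; one common way to ground an estimate of this kind (an illustrative assumption, not necessarily the dissertation's method) is to fit a power-law learning curve to performance at a few dataset sizes and extrapolate to a target error:

```python
import numpy as np

# Hedged illustration: fit err(n) ~ a * n**b (b < 0) to the error observed
# at a few training-set sizes, then invert the fit to estimate the dataset
# size needed for a target error. A common learning-curve heuristic, not
# necessarily the dissertation's method; all numbers are hypothetical.

sizes = np.array([1e3, 5e3, 2e4, 1e5])         # training examples
errors = np.array([0.38, 0.27, 0.19, 0.13])    # measured error rates

b, log_a = np.polyfit(np.log(sizes), np.log(errors), 1)
a = np.exp(log_a)

target_err = 0.05
n_needed = (target_err / a) ** (1.0 / b)
print(f"Estimated examples for {target_err:.0%} error: {n_needed:,.0f}")
```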
|
1078 |
Autonomous Sample Collection Using Image-Based 3D Reconstructions / Torok, Matthew M., 14 May 2012
Sample collection is a common task for mobile robots, and there are a variety of manipulators available to perform this operation. This thesis presents a novel scoop sample collection system design that is able to both collect and contain a sample using the same hardware. To ease the operator burden during sampling, the scoop system is paired with new semi-autonomous and fully autonomous collection techniques. These are derived from data provided by colored 3D point clouds produced via image-based 3D reconstructions. A custom robotic mobility platform, the Scoopbot, is introduced to perform completely automated imaging of the sampling area and also to pick up the desired sample. The Scoopbot is wirelessly controlled by a base station computer, which runs software to create and analyze the 3D point cloud models. Relevant sample parameters, such as dimensions and volume, are calculated from the reconstruction and reported to the operator. During tests of the system in full (48 images) and fast (6-8 images) modes, the Scoopbot was able to identify and retrieve a sample without any human intervention. Finally, a new building crack detection algorithm (CDA) is created to use the 3D point cloud outputs from image sets gathered by a mobile robot. The CDA was shown to successfully identify and color-code several cracks in a full-scale concrete building element. / Master of Science
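The abstract does not detail how dimensions and volume are extracted from the point cloud; a minimal sketch of that kind of computation (axis-aligned bounding box for dimensions, convex hull for volume, on hypothetical already-segmented points) might look like this:

```python
import numpy as np
from scipy.spatial import ConvexHull

# Minimal sketch: recover sample dimensions and volume from a 3D point
# cloud, assuming the sample's points have already been segmented out of
# the full reconstruction. Hypothetical data; not the Scoopbot pipeline.

rng = np.random.default_rng(0)
sample_points = rng.uniform(-0.05, 0.05, size=(500, 3))  # meters

dims = sample_points.max(axis=0) - sample_points.min(axis=0)
hull = ConvexHull(sample_points)        # tighter volume than the box

print(f"Bounding-box dims (m): {np.round(dims, 3)}")
print(f"Convex-hull volume (m^3): {hull.volume:.6f}")
```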
|
1079 |
Design of a data acquisition system to control and monitor a velocity probe in a fluid flow field / Herwig, Nancy Lou, January 1982
A data acquisition system to control the position of a velocity probe and to acquire digital voltages as an indication of fluid velocity is presented. This system replaces a similar manually operated traverse system; it relieves the operator of control and acquisition tasks while providing a more consistent and systematic approach to the acquisition process. The design includes the TRS-80 microcomputer, with external interfacing accomplished using the STD-based bus design. / Master of Science
|
1080 |
Real time data acquisition for load management / Ghosh, Sushmita, 15 November 2013
Demand for data transfer between computers has increased ever since the introduction of the Personal Computer (PC). Data communication on a PC is much more productive because the PC is an intelligent terminal that can connect to various hosts on the same I/O hardware circuit as well as execute processes on its own as an isolated system.
Yet the PC on its own is useless for data communication. It requires a hardware interface circuit and software for controlling the handshaking signals and setting up communication parameters. Often the data is distorted by noise in the line; such transmission errors are embedded in the data and require careful filtering.
This thesis deals with the development of a data acquisition system that collects real-time load and weather data and stores them as a historical database for use by a load forecast algorithm in a load management system. A filtering technique has been developed here that checks for transmission errors in the raw data. The microcomputers used in this development are the IBM PC/XT and the AT&T 3B2 supermicro computer. / Master of Science
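The abstract does not describe the filtering technique itself; a minimal sketch of the kind of sanity filter such a system might apply to raw load readings (a physical-range check plus spike rejection, with hypothetical thresholds) follows:

```python
# Minimal sketch of a transmission-error filter for raw load readings:
# a physical-range check plus a rate-of-change (spike) check. Thresholds
# are hypothetical; the thesis's actual filtering technique may differ.

def filter_load_readings(readings_kw, lo=0.0, hi=5000.0, max_step_kw=200.0):
    """Return readings with out-of-range values and spikes dropped."""
    clean, last = [], None
    for r in readings_kw:
        if not (lo <= r <= hi):
            continue                  # corrupted: outside physical range
        if last is not None and abs(r - last) > max_step_kw:
            continue                  # corrupted: implausible jump
        clean.append(r)
        last = r
    return clean

print(filter_load_readings([812.0, 815.5, 9999.0, 820.1, 40.2, 818.7]))
```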
|