31

Analyzing Compressed Air Demand Trends to Develop a Method to Calculate Leaks in a Compressed Air Line Using Time Series Pressure Measurements

Daniel, Ebin John 05 1900 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Compressed air is a powerful source of stored energy and is used in applications ranging from painting to pressing, making it a versatile tool for manufacturers. Because of the high cost and energy consumption associated with producing compressed air and its use throughout industrial manufacturing, it is often referred to as the fourth utility, behind electricity, natural gas, and water. For this reason, air compressors and their associated equipment are a frequent focus of improvement efforts for manufacturing plant managers. Because compressed air is used in many ways, the methods used to extract and transfer its energy vary as well. Compressed air can flow through different types of piping, such as aluminum, polyvinyl chloride (PVC), or rubber, with varying hydraulic diameters, and through different fittings such as 90-degree elbows, T-junctions, and valves. This raises one of the major concerns in managing the energy consumption of an air compressor: the waste of air through leaks.

Air leaks account for a considerable portion of the energy wasted in a compressed air system, causing a multitude of problems that the compressor must make up for to maintain steady operation of the pneumatic devices on the manufacturing floor. Leaks in the piping network act as continuous consumers: they not only siphon off compressed air but also reduce the pressure needed within the pipes. The air compressors must work harder to compensate for the lost pressure and air, causing an overconsumption of energy and power. Overworking a compressor also stresses its internal equipment beyond its capabilities, especially when it is already running at full load, considerably reducing its lifespan. In addition, if there are multiple leaks close to the pneumatic devices, the immediate loss of pressure and air can cause the devices to operate inefficiently and reduce production. Cumulatively, these effects impact the manufacturer considerably in both energy consumption and profit.

Multiple methods of air leak detection and accounting currently exist for quantifying leak impact on compressed air systems. They are usually conducted while the air compressors are running but when there is no, or minimal, active air consumption by the pneumatic devices on the floor. These periods, called non-production hours, generally occur during breaks or between employee shift changes. This time is chosen deliberately so that the only air consumption within the piping is that of the leaks; the majority of the energy and power consumed during this window can therefore be attributed to feeding them. The collected data is then extrapolated to estimate the energy and power consumed by these leaks over the rest of the year.

A few problems arise, however, when using such a method to understand the effects of the leaks throughout the year. One issue is the assumption that the air and pressure lost through the found leaks remain constant even during production hours, i.e., the hours of active air consumption by the pneumatic devices on the floor. This may not hold, because increased flow rates and varying line pressure can increase the amount of air lost through the same orifices that were initially detected. Another challenge with data collected during a single non-production period is that additional leaks may form later, leaving the energy and power lost to the newer leaks unaccounted for. Since the initial estimates omit these additional losses, plant managers may underestimate the effects of the air leaks. To combat these issues, a continuous method of air leak analysis is required, monitoring the air compressors' efficiency with respect to the leaks in real time. In a model that included both production and non-production hours in the leak accounting, estimated energy losses increased by 50.33% and demand losses by 82.90% when the effects of the air leaks were observed continuously and in real time. A real-time monitoring system can provide an in-depth understanding of the compressed air system and its efficiency. Managing leaks within a compressed air system is challenging, especially when the amount of energy wasted through those leaks is unaccounted for. The main goal of this research was to find a nonintrusive way to calculate the amount of air, and the energy, lost through these leaks using time series pressure measurements. Previous studies have shown a strong relationship between pressure difference and air use within pneumatic lines; this correlation, along with other factors, is exploited in this research to develop a novel and viable method of leak accounting for a Continuous Air Leak Monitoring (CALM) system.
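The CALM method itself is not reproduced in this record, but the conventional non-production-hours estimate it improves on is easy to sketch. The following is a minimal sketch in Python, assuming a simple receiver pressure-decay ("bleed-down") model; the volume, pressures, and specific-power figure are illustrative assumptions, not values from the thesis.

```python
# Classical receiver pressure-decay leak estimate (a minimal sketch; all
# numbers below are illustrative assumptions, not data from the thesis).
# Leak flow ~ V * (P1 - P2) / (P_atm * dt), from the ideal gas law applied
# to the receiver volume as it bleeds down with the compressor off.

P_ATM = 14.7  # atmospheric pressure, psia

def leak_flow_cfm(receiver_volume_ft3, p_start_psig, p_end_psig, minutes):
    """Average free-air leak flow (CFM) from a pressure-decay test."""
    dp = p_start_psig - p_end_psig
    return receiver_volume_ft3 * dp / (P_ATM * minutes)

def annual_leak_energy_kwh(leak_cfm, kw_per_cfm, hours_per_year=8760):
    """Energy to feed the leaks, assuming a constant compressor specific
    power (kW/CFM) and a constant leak rate -- the extrapolation the
    thesis argues can understate losses during production hours."""
    return leak_cfm * kw_per_cfm * hours_per_year

if __name__ == "__main__":
    q = leak_flow_cfm(receiver_volume_ft3=120, p_start_psig=110,
                      p_end_psig=90, minutes=3)
    print(f"Estimated leak flow: {q:.1f} CFM")
    print(f"Annual energy: {annual_leak_energy_kwh(q, kw_per_cfm=0.25):.0f} kWh")
```

The constant-leak, constant-specific-power extrapolation in the last step is precisely what continuous monitoring with time-series pressure data is meant to replace.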
32

Compressed wavefield extrapolation with curvelets

Lin, Tim T. Y., Herrmann, Felix J. January 2007 (has links)
An explicit algorithm for the extrapolation of one-way wavefields is proposed which combines recent developments in information theory and theoretical signal processing with the physics of wave propagation. Because of excessive memory requirements, explicit formulations for wave propagation have proven to be a challenge in 3-D. By using ideas from "compressed sensing", we are able to formulate the (inverse) wavefield extrapolation problem on small subsets of the data volume, thereby reducing the size of the operators. According to compressed sensing theory, signals can successfully be recovered from an incomplete set of measurements when the measurement basis is incoherent with the representation in which the wavefield is sparse. In this new approach, the eigenfunctions of the Helmholtz operator are recognized as a basis that is incoherent with curvelets, which are known to compress seismic wavefields. By casting the wavefield extrapolation problem in this framework, wavefields can successfully be extrapolated in the modal domain via a computationally cheaper operation. A proof of principle for the "compressed sensing" method is given for wavefield extrapolation in 2-D. The results show that our method is stable and produces results identical to the direct application of the full extrapolation operator.
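The recovery step the abstract describes amounts to a sparsity-promoting inverse problem. As a generic stand-in (not the curvelet/Helmholtz machinery of the thesis), iterative soft thresholding (ISTA) on a random Gaussian system illustrates recovery from an incomplete, incoherent set of measurements; the dimensions and penalty weight below are assumptions for the demo.

```python
import numpy as np

def ista(A, y, lam=0.05, n_iter=500):
    """Iterative soft thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1.
    A textbook sparse-recovery solver; the thesis instead uses curvelet-domain
    sparsity with Helmholtz eigenfunctions as the measurement basis."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ x - y)              # gradient of the data-fit term
        z = x - g / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(0)
n, m, k = 256, 80, 8                       # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)   # incoherent (random) measurements
x_hat = ista(A, A @ x_true, lam=0.01, n_iter=2000)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```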
33

Chemical modification of wood : dimensional stabilization of viscoelastic thermal compressed wood /

Gabrielli, Christopher P. January 1900 (has links)
Thesis (M.S.)--Oregon State University, 2008. / Printout. Includes bibliographical references (leaves 114-120). Also available on the World Wide Web.
34

Wireless ECG system with Bluetooth Low Energy and compressed sensing

Li, Wanbo 12 July 2016 (has links)
Electrocardiogram (ECG) is a noninvasive technology widely used in health care systems for the diagnosis of heart diseases, and a wearable ECG sensor with long-term monitoring is necessary for real-time heart disease detection. However, conventional ECG systems are restricted by their physical size and power consumption. In this thesis, we propose a wireless ECG system with Bluetooth Low Energy (BLE) and Compressed Sensing (CS). The proposed system includes an ECG sensor board based on a BLE chip, an Android application, and a web service with a database. The ECG signal is first collected by the ECG sensor board, then transmitted to the Android application through the BLE protocol, and finally uploaded from the Android app to the cloud database. We also introduce compressed sensing into our system with a novel sparse sensing matrix, data compression, and a modified Compressive Sampling Matching Pursuit (CoSaMP) reconstruction algorithm. Experimental results show that the amount of data transmitted is reduced by about 57% compared to not using compressed sensing, and reconstruction time is 64% less than with the Orthogonal Matching Pursuit (OMP) or Iterative Re-weighted Least Squares (IRLS) algorithms. / Graduate
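The thesis's modified CoSaMP is not spelled out in this record, but the standard CoSaMP iteration it builds on (Needell & Tropp) can be sketched as follows; the Gaussian sensing matrix in the demo is a placeholder for the thesis's novel sparse matrix.

```python
import numpy as np

def cosamp(Phi, y, s, max_iter=50, tol=1e-8):
    """Standard CoSaMP: recover an s-sparse x from y = Phi @ x.
    The thesis modifies this baseline; that modification is not shown here."""
    n = Phi.shape[1]
    x = np.zeros(n)
    residual = y.copy()
    for _ in range(max_iter):
        proxy = Phi.T @ residual                     # signal proxy
        omega = np.argsort(np.abs(proxy))[-2 * s:]   # 2s largest components
        support = np.union1d(omega, np.flatnonzero(x))
        b, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        x = np.zeros(n)
        keep = np.argsort(np.abs(b))[-s:]            # prune back to s entries
        x[support[keep]] = b[keep]
        residual = y - Phi @ x
        if np.linalg.norm(residual) <= tol * np.linalg.norm(y):
            break
    return x

rng = np.random.default_rng(1)
n, m, s = 512, 128, 10
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
print("exact recovery:", np.allclose(cosamp(Phi, Phi @ x_true, s), x_true, atol=1e-6))
```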
35

A COMPARISON OF VIDEO COMPRESSION ALGORITHMS

Thom, Gary A., Deutermann, Alan R. 10 1900 (has links)
International Telemetering Conference Proceedings / October 23-26, 2000 / Town & Country Hotel and Conference Center, San Diego, California / Compressed video is necessary for a variety of telemetry requirements, and a large number of competing video compression algorithms exist. This paper compares the ability of these algorithms to meet criteria of interest for telemetry applications, including quality, compression, noise susceptibility, motion performance, and latency. The algorithms are divided into those which employ inter-frame compression and those which employ intra-frame compression. A video tape presentation will also be shown to illustrate the performance of the video compression algorithms.
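The inter-/intra-frame distinction the paper's comparison rests on can be illustrated with a toy sketch: intra-frame coding compresses each frame independently, while inter-frame coding compresses frame-to-frame differences. The zlib back end and synthetic video below are illustrative stand-ins, not any of the algorithms compared in the paper.

```python
import numpy as np
import zlib

def intra_size(frames):
    """Compress each frame independently (intra-frame only)."""
    return sum(len(zlib.compress(f.tobytes())) for f in frames)

def inter_size(frames):
    """Compress the first frame, then frame-to-frame differences (inter-frame)."""
    total = len(zlib.compress(frames[0].tobytes()))
    for prev, cur in zip(frames, frames[1:]):
        diff = cur.astype(np.int16) - prev.astype(np.int16)
        total += len(zlib.compress(diff.tobytes()))
    return total

# Synthetic video: a bright block sliding across a static, detailed background.
# The residuals between frames are mostly zero, so inter-frame coding wins.
rng = np.random.default_rng(2)
background = rng.integers(0, 256, (64, 64), dtype=np.uint8)
frames = []
for t in range(16):
    f = background.copy()
    f[20:40, t:t + 20] = 255
    frames.append(f)
print("intra:", intra_size(frames), "bytes   inter:", inter_size(frames), "bytes")
```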
36

Volumetric MRI of the lungs during forced expiration

Berman, Benjamin P., Pandey, Abhishek, Li, Zhitao, Jeffries, Lindsie, Trouard, Theodore P., Oliva, Isabel, Cortopassi, Felipe, Martin, Diego R., Altbach, Maria I., Bilgin, Ali 06 1900 (has links)
Purpose: Lung function is typically characterized by spirometer measurements, which do not offer spatially specific information. Imaging during exhalation provides spatial information but is challenging due to large movement over a short time. The purpose of this work is to provide a solution to lung imaging during forced expiration using accelerated magnetic resonance imaging. The method uses a radial golden-angle stack-of-stars gradient echo acquisition and compressed sensing reconstruction. Methods: A technique for dynamic three-dimensional imaging of the lungs from highly undersampled data is developed and tested on six subjects. This method takes advantage of image sparsity, both spatially and temporally, including the use of reference frames called bookends. Sparsity with respect to total variation, together with the residual from the bookends, enables reconstruction from an extremely limited amount of data. Results: Dynamic three-dimensional images can be captured at sub-150 ms temporal resolution, using only three (or fewer) acquired radial lines per slice per timepoint. The images have a spatial resolution of 4.6 x 4.6 x 10 mm. Lung volume calculations based on image segmentation are compared to those from simultaneously acquired spirometer measurements. Conclusion: Dynamic lung imaging during forced expiration is made possible by compressed sensing accelerated dynamic three-dimensional radial magnetic resonance imaging. (C) 2015 Wiley Periodicals, Inc.
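Reconstructions of this kind (temporal/spatial sparsity plus bookend residuals) are typically posed as a regularized inverse problem; one plausible formulation, with symbols and weights that are assumptions rather than the paper's exact objective, is:

\hat{x} \;=\; \arg\min_{x}\; \tfrac{1}{2}\,\| F_u x - y \|_2^2 \;+\; \lambda_1\, \mathrm{TV}(x) \;+\; \lambda_2\, \| x - x_{\mathrm{be}} \|_1

where F_u is the undersampled radial Fourier (acquisition) operator, y the acquired k-space data, TV the spatio-temporal total variation, and x_be a reference bookend frame; lambda_1 and lambda_2 trade data fidelity against the two sparsity terms.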
37

De l'échantillonage optimal en grande et petite dimension / On optimal sampling in high and low dimension

Carpentier, Alexandra 05 October 2012 (has links)
During my PhD, I had the chance to learn and work under the great supervision of my advisor Rémi (Munos) in two fields that are of particular interest to me: Bandit Theory and Compressed Sensing. While studying these domains I came to the conclusion that they are connected when viewed through the prism of optimal sampling. Both fields are concerned with strategies for sampling the space in an efficient way: Bandit Theory in low dimension, and Compressed Sensing in high dimension. In this dissertation, I present most of the work my co-authors and I produced during the three years of my PhD.
38

A framework for fast and efficient algorithms for sparse recovery problems / CUHK electronic theses & dissertations collection

January 2015 (has links)
The sparse recovery problem aims to reconstruct a high-dimensional sparse signal from its low-dimensional measurements given a carefully designed measuring process. This thesis presents a framework for graphical-model-based sparse recovery algorithms. Different measurement processes lead to specific problems: the sparse recovery problems studied in this thesis include compressive sensing, network tomography, group testing, and compressive phase retrieval. For compressive sensing and network tomography, the measurement processes are linear (freely chosen and topology-constrained measurements, respectively). For group testing and compressive phase retrieval, the processes are non-linear (disjunctive and intensity measurements, respectively). For all the problems in this thesis, we present algorithms whose measurement structures are based on bipartite graphs. By studying the properties of bipartite graphs and designing novel measuring processes and corresponding decoding algorithms, the number of measurements and the computational decoding complexity of each algorithm are information-theoretically either order-optimal or nearly order-optimal. / Cai, Sheng. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2015. / Includes bibliographical references (leaves 229-247). / Abstracts also in Chinese.
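The thesis's decoders are not given in this record; for the group-testing case, the standard COMP decoder over a random bipartite pooling design conveys the flavor of graph-based decoding with disjunctive (OR) measurements. The pool sizes and counts below are illustrative assumptions.

```python
import numpy as np

def comp_decode(tests, outcomes):
    """COMP decoder: an item remains a defective candidate only if every
    test (pool) containing it came back positive. `tests` is a (T, n)
    boolean pooling matrix; `outcomes` is a length-T boolean vector."""
    n = tests.shape[1]
    candidate = np.ones(n, dtype=bool)
    for pool, positive in zip(tests, outcomes):
        if not positive:                 # a negative test clears all its items
            candidate[pool] = False
    return candidate

# Toy run: 5 defectives among 200 items, 60 random pools.
rng = np.random.default_rng(3)
n, T, d = 200, 60, 5
defective = np.zeros(n, dtype=bool)
defective[rng.choice(n, d, replace=False)] = True
tests = rng.random((T, n)) < (1.0 / d)   # each item joins a pool w.p. ~1/d
outcomes = (tests.astype(int) @ defective.astype(int)) > 0   # disjunctive (OR)
decoded = comp_decode(tests, outcomes)
print("missed:", int((defective & ~decoded).sum()),
      "false alarms:", int((~defective & decoded).sum()))
```

COMP never misses a true defective; its errors are one-sided (false alarms), which is why it serves as a simple baseline for the more refined graph-based decoders this line of work studies.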
39

Experimental Investigation of a Green Hybrid Thruster Using a Moderately Enriched Compressed Air as the Oxidizer

Bulcher, Marc Anthony 01 December 2018 (has links)
A hybrid rocket is a propulsion system that uses propellants in two different phases, typically a solid fuel inside the combustion chamber and a separate gaseous or liquid oxidizer stored in a tank. Hybrid rockets are an area of research interest because of their low explosive risk, inexpensive components, and high degree of reliability. In the Propulsion Research Laboratory at Utah State University, pure oxygen is among the top choices for hybrid rocket oxidizers due to its low cost and ease of storage. When paired with a solid fuel known as ABS (acrylonitrile butadiene styrene) plastic, specific impulse values exceed 200 seconds at one atmosphere. This metric outperforms hydrazine, a propellant standard for in-space propulsion that exhibits high vapor toxicity and explosive hazards. However, due to the low density of oxygen, propulsion applications require storage pressures up to 3000 psig, at which the use of oxygen can present a fire hazard. As a result, this thesis investigates the feasibility of replacing oxygen with moderately enriched compressed air containing oxygen levels up to 40%, while maintaining performance metrics equal to or above hydrazine. To demonstrate the performance of moderately enriched air as a hybrid rocket oxidizer, comparisons to tests using pure oxygen are presented.
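For context on the 200-second figure: specific impulse relates thrust to propellant mass flow, Isp = F / (mdot * g0). A quick check with made-up hot-fire numbers (illustrative only, not measurements from the thesis):

```python
G0 = 9.80665  # standard gravity, m/s^2

def specific_impulse(thrust_n, mass_flow_kg_s):
    """Isp = F / (mdot * g0), in seconds."""
    return thrust_n / (mass_flow_kg_s * G0)

# Hypothetical data point: 800 N of thrust at 0.40 kg/s total propellant flow.
isp = specific_impulse(800.0, 0.40)
print(f"Isp = {isp:.0f} s")  # ~204 s -> above the ~200 s hydrazine-class benchmark
```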
40

Implementation of Vectorization-Based VLIW DSP with Compact Instructions

Lee, Chun-Hsien 23 August 2005 (has links)
The main goal of this thesis is to design and implement a high-performance processor core for the digital signal processing algorithms used in DVB-T systems. The DSP must keep up with the signal flow in real time, and completing the FFT on 8192 input points within the time budget is the most important requirement. To meet the FFT timing demand while keeping the DSP clock frequency as low as possible, the approach is to increase the degree of instruction-level parallelism (ILP). This thesis designs a VLIW-architecture processing core, called the DVB-T DSP, that supports instruction parallelism with sufficient execution units. The thesis also uses software pipelining to schedule loops for the highest ILP when executing FFT butterfly operations. Furthermore, to provide a smooth data stream for the pipeline, the thesis designs a mechanism that improves on modulo addressing, called extended modulo addressing, which collects discrete vectors into one continuous vector. Program size is a significant problem for VLIW processor architectures, as programs tend to be larger than on other architectures. To address this, the thesis proposes an instruction compression mechanism that roughly doubles program density without affecting execution efficiency. Simulation results show that the DVB-T DSP meets the FFT timing demand at 133 MHz and also performs well on other digital signal processing algorithms.
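Modulo addressing wraps an address register around a circular buffer in hardware; the "extended" variant described here gathers a strided, wrapping access pattern into one contiguous vector for the datapath. A software model of that idea follows (an assumption-laden sketch of the addressing behavior, not the thesis's hardware mechanism):

```python
import numpy as np

def modulo_addresses(base, start, step, length, count):
    """Addresses generated by a modulo (circular) address register:
    each access advances by `step` and wraps inside [base, base+length)."""
    return [base + (start + i * step) % length for i in range(count)]

def gather_contiguous(memory, base, start, step, length, count):
    """Model of extended modulo addressing: collect a strided circular
    access pattern into one contiguous vector for the execution units."""
    addrs = modulo_addresses(base, start, step, length, count)
    return np.array([memory[a] for a in addrs])

memory = np.arange(100)  # toy memory whose contents equal their addresses
# Strides of 6 wrapping inside a 16-word circular buffer starting at address 16:
print(gather_contiguous(memory, base=16, start=0, step=6, length=16, count=8))
```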
