
Investigation of microparticle to system level phenomena in thermally activated adsorption heat pumps

Raymond, Alexander William 20 May 2010 (has links)
Heat actuated adsorption heat pumps offer the opportunity to improve overall energy efficiency in waste heat applications by eliminating shaft work requirements accompanying vapor compression cycles. The coefficient of performance (COP) in adsorption heat pumps is generally low. The objective of this thesis is to model the adsorption system to gain critical insight into how its performance can be improved. Because adsorption heat pumps are intermittent devices, which induce cooling by adsorbing refrigerant in a sorption bed heat/mass exchanger, transient models must be used to predict performance. In this thesis, such models are developed at the adsorbent particle level, heat/mass exchanger component level and system level. Adsorption heat pump modeling is a coupled heat and mass transfer problem. Intra-particle mass transfer resistance and sorption bed heat transfer resistance are shown to be significant, but for very fine particle sizes, inter-particle resistance may also be important. The diameter of the adsorbent particle in a packed bed is optimized to balance inter- and intra-particle resistances and improve sorption rate. In the literature, the linear driving force (LDF) approximation for intra-particle mass transfer is commonly used in place of the Fickian diffusion equation to reduce computation time; however, it is shown that the error in uptake prediction associated with the LDF depends on the working pair, half-cycle time, adsorbent particle radius, and operating temperatures at hand. Different methods for enhancing sorption bed heat/mass transfer have been proposed in the literature including the use of binders, adsorbent compacting, and complex extended surface geometries. To maintain high reliability, the simple, robust annular-finned-tube geometry with packed adsorbent is specified in this work. The effects of tube diameter, fin pitch and fin height on thermal conductance, metal/adsorbent mass ratio and COP are studied. 
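The LDF approximation discussed above replaces the radial Fickian diffusion equation with a single first-order rate law for the particle-averaged uptake, which is why the particle radius enters the sorption rate so directly. A minimal sketch using Glueckauf's classical rate constant k = 15D/r_p^2 (the diffusivity, radius, and loading values are illustrative, not taken from the thesis):

```python
import math

def ldf_uptake(D, r_p, q_star, q0=0.0, t=600.0):
    """Particle-averaged uptake q(t) under the linear driving force model
    dq/dt = k (q* - q), with Glueckauf's rate constant k = 15 D / r_p**2.
    D: intra-particle diffusivity (m^2/s), r_p: particle radius (m)."""
    k = 15.0 * D / r_p ** 2                      # LDF rate constant, 1/s
    return q_star + (q0 - q_star) * math.exp(-k * t)

# Smaller particles equilibrate faster: halving r_p quadruples k.
q_fine = ldf_uptake(D=1e-10, r_p=1e-4, q_star=0.25, t=10.0)    # 100 um radius
q_coarse = ldf_uptake(D=1e-10, r_p=2e-4, q_star=0.25, t=10.0)  # 200 um radius
```

At very small radii, inter-particle (bed-scale) resistance takes over, which is why the thesis optimizes the particle diameter rather than simply minimizing it.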
As one might expect, many closely spaced fins, or high fin density, yields high thermal conductance; however, it is found that the increased inert metal mass associated with the high fin density diminishes COP. It is also found that thin adsorbent layers with low effective conduction resistance lead to high thermal conductance. As adsorbent layer thickness decreases, the relative importance of tube-side convective resistance rises, so mini-channel sized tubes are used. After selecting the proper tube geometry, an overall thermal conductance is calculated for use in a lumped-parameter sorption bed simulation. To evaluate the accuracy of the lumped-parameter approach, a distributed parameter sorption bed simulation is developed for comparison. Using the finite difference method, the distributed parameter model is used to track temperature and refrigerant distributions in the finned tube and adsorbent layer. The distributed-parameter tube model is shown to be in agreement with the lumped-parameter model, thus independently verifying the overall UA calculation and the lumped-parameter sorption bed model. After evaluating the accuracy of the lumped-parameter model, it is used to develop a system-level heat pump simulation. This simulation is used to investigate a non-recuperative two-bed heat pump containing activated carbon fiber-ethanol and silica gel-water working pairs. The two-bed configuration is investigated because it yields a desirable compromise between the number of components (heat exchangers, pumps, valves, etc.) and steady cooling rate. For non-recuperative two-bed adsorption heat pumps, the average COP prediction in the literature is 0.39 for experiments and 0.44 for models. It is important to improve the COP in mobile waste heat applications because without high COP, the available waste heat during startup or idle may be insufficient to deliver the desired cooling duty. 
In this thesis, a COP of 0.53 is predicted for the non-recuperative silica gel-water chiller. If thermal energy recovery is incorporated into the cycle, a COP as high as 0.64 is predicted for source, ambient, and average evaporator temperatures of 90°C, 35°C, and 7.0°C, respectively. The improvement in COP over heat pumps reported in the literature is attributed to the adsorbent particle size optimization and the careful selection of sorption bed heat exchanger geometry.

Development of a multimodal port freight transportation model for estimating container throughput

Gbologah, Franklin Ekoue 08 July 2010 (has links)
Computer-based simulation models have often been used to study the multimodal freight transportation system, but these studies have not been able to dynamically couple the various modes into one model and are therefore limited in their ability to inform on dynamic system-level interactions. This research thesis is motivated by the need to dynamically couple the multimodal freight transportation system so that it operates at multiple spatial and temporal scales. It is part of a larger research program to develop a systems modeling framework applicable to freight transportation, one that attempts to dynamically couple railroad, seaport, and highway freight transportation models. The focus of this thesis is the development of the coupled railroad and seaport models; a separate volume (Wall 2010) covers the development of the highway model. The railroad and seaport model was developed using Arena® simulation software and comprises the Ports of Savannah, GA, Charleston, SC, and Jacksonville, FL, their adjacent CSX rail terminals, and the connecting CSX railroads in the southeastern U.S. However, only the simulation outputs for the Port of Savannah are discussed in this paper. It should be noted that the modeled port layout is only conceptual; therefore, inferences drawn from the model's outputs do not represent actual port performance. The model was run for 26 continuous simulation days, generating 141 containership calls, 147 highway truck deliveries of containers, 900 trains, and a throughput of 28,738 containers at the Port of Savannah, GA. An analysis of each train's trajectory from origin to destination shows that trains spend between 24 and 67 percent of their travel time idle on the tracks waiting for permission to move.
Train parking demand analysis for the shunting area adjacent to the multimodal terminal indicates that not enough containers arrive from the port, because the parking demand arises only from trains waiting to load. The simulation also shows that containerships calling at the Port of Savannah take, on average, about 3.2 days to find an available dock at which to berth and unload containers; the observed mean turnaround time for containerships was 4.5 days. Container residence time within the port and the adjacent multimodal rail terminal varies widely: residence times within the port range from about 0.2 hours to 9 hours with a mean of 1 hour, while the average residence time inside the rail terminal is about 20 minutes, with observations varying from as little as 2 minutes to a high of 2.5 hours. In addition, about 85 percent of container residence time in the port is spent idle. This research thesis demonstrates that it is possible to dynamically couple the different sub-models of the multimodal freight transportation system. However, challenges remain for future research. The principal challenge is the development of a more efficient train movement algorithm that can incorporate the actual Direct Traffic Control (DTC) and/or Automatic Block Signal (ABS) track segmentation; such an algorithm would likely improve the capacity estimates of the railroad network. Future research should also seek to reduce the high computational cost imposed by a discrete process modeling methodology and by the single-container resolution adopted for terminal operations. A methodology combining discrete and continuous process modeling, as proposed in this study, could lessen computational costs and lower computer system requirements at the cost of some of the model's feedback capabilities. This tradeoff must be carefully examined.
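The berthing delays reported above come from a discrete-event queueing process: ships wait until one of a fixed number of docks is free. A toy FIFO multi-dock sketch in that spirit (hypothetical arrival and service parameters; not the Arena® model from the thesis):

```python
import heapq
import random

def berth_wait_times(n_ships, n_docks, mean_gap, mean_unload, seed=1):
    """Toy FIFO berthing queue: ships arrive with exponential inter-arrival
    gaps and each occupies one dock for an exponential unload time.
    Returns the waiting time (days) of every ship."""
    rng = random.Random(seed)
    free_at = [0.0] * n_docks            # earliest time each dock frees up
    heapq.heapify(free_at)
    t, waits = 0.0, []
    for _ in range(n_ships):
        t += rng.expovariate(1.0 / mean_gap)       # next arrival time
        dock_free = heapq.heappop(free_at)
        start = max(t, dock_free)                  # wait if all docks busy
        waits.append(start - t)
        heapq.heappush(free_at, start + rng.expovariate(1.0 / mean_unload))
    return waits

# 141 calls, as in the simulation above; rates in days are hypothetical
waits = berth_wait_times(n_ships=141, n_docks=3, mean_gap=0.35, mean_unload=0.9)
mean_wait = sum(waits) / len(waits)
```

Even this toy model reproduces the qualitative finding: as dock utilization approaches capacity, waiting time grows sharply relative to unload time.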

The role and regulatory mechanisms of Nox1 in vascular systems

Yin, Weiwei 28 June 2012 (has links)
As an important endogenous source of reactive oxygen species (ROS), NADPH oxidase 1 (Nox1) has received tremendous attention in the past few decades. It has been identified to play a key role as the initial "kindle," whose activation is crucial for amplifying ROS production through several propagation mechanisms in the vascular system. As a consequence, Nox1 has been implicated in the initiation and genesis of many cardiovascular diseases and has therefore been the subject of detailed investigations. The literature on experimental studies of the Nox1 system is extensive. Numerous investigations have identified essential features of the Nox1 system in vasculature and characterized key components, possible regulatory signals and/or signaling pathways, potential activation mechanisms, a variety of Nox1 stimuli, and its potential physiological and pathophysiological functions. While these experimental studies have greatly enhanced our understanding of the Nox1 system, many open questions remain regarding the overall functionality and dynamic behavior of Nox1 in response to specific stimuli. Such questions include the following. What are the main regulatory and/or activation mechanisms of Nox1 systems in different types of vascular cells? Once Nox1 is activated, how does the system return to its original, unstimulated state, and how will its subunits be recycled? What are the potential disassembly pathways of Nox1? Are these pathways equally important for effectively reutilizing Nox1 subunits? How does Nox1 activity change in response to dynamic signals? Are there generic features or principles within the Nox1 system that permit optimal performance? These types of questions have not been answered by experiments, and they are indeed quite difficult to address with experiments. I demonstrate in this dissertation that one can pose such questions and at least partially answer them with mathematical and computational methods. 
Two specific cell types, namely endothelial cells (ECs) and vascular smooth muscle cells (VSMCs), are used as "templates" to investigate distinct modes of regulation of Nox1 in different vascular cells. By using a diverse array of modeling methods and computer simulations, this research identifies different types of regulation and their distinct roles in the activation process of Nox1. In the first study, I analyze ECs stimulated by mechanical stimuli, namely shear stresses of different types. The second study uses different analytical and simulation methods to reveal generic features of alternative disassembly mechanisms of Nox1 in VSMCs. This study leads to predictions of the overall dynamic behavior of the Nox1 system in VSMCs as it responds to extracellular stimuli, such as the hormone angiotensin II. The studies and investigations presented here improve our current understanding of the Nox1 system in the vascular system and might help us to develop potential strategies for manipulation and controlling Nox1 activity, which in turn will benefit future experimental and clinical studies.
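Although the dissertation's models are far richer, the basic activation logic described above can be illustrated with a two-state mass-action sketch, in which a stimulus drives assembly of active Nox1 and a first-order process returns it to the inactive pool (rate constants here are hypothetical, not fitted values):

```python
def nox1_active_fraction(stimulus, k_on=0.5, k_off=0.2, total=1.0,
                         dt=0.01, t_end=50.0):
    """Two-state sketch of Nox1 regulation: inactive <-> active.
        dA/dt = k_on * S * (total - A) - k_off * A
    Integrated with explicit Euler; returns the trajectory of the
    active fraction A(t)."""
    A, trace = 0.0, []
    for _ in range(int(t_end / dt)):
        A += dt * (k_on * stimulus * (total - A) - k_off * A)
        trace.append(A)
    return trace

# Steady state approaches k_on*S*total / (k_on*S + k_off); removing the
# stimulus lets the k_off term return the system to its unstimulated state.
steady = nox1_active_fraction(stimulus=1.0)[-1]
```

Dropping the stimulus to zero after the plateau would show the deactivation phase; distinguishing alternative disassembly pathways and subunit recycling is exactly where the dissertation's detailed models go beyond a sketch like this.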

A grid-level unit commitment assessment of high wind penetration and utilization of compressed air energy storage in ERCOT

Garrison, Jared Brett 10 February 2015 (has links)
Emerging integration of renewable energy has prompted a wide range of research on the use of energy storage to compensate for the added uncertainty that accompanies these resources. In the Electric Reliability Council of Texas (ERCOT), compressed air energy storage (CAES) has drawn particular attention because Texas has suitable geology and also lacks appropriate resources and locations for pumped hydroelectric storage (PHS). While there have been studies on incorporation of renewable energy, utilization of energy storage, and dispatch optimization, this is the first body of work to integrate all these subjects along with the proven ability to recreate historical dispatch and price conditions. To quantify the operational behavior, economic feasibility, and environmental impacts of CAES, this work utilized sophisticated unit commitment and dispatch (UC&D) models that determine the least-cost dispatch for meeting a set of grid and generator constraints. This work first addressed the ability of these models to recreate historical dispatch and price conditions through a calibration analysis that incorporated major model improvements such as capacity availability and sophisticated treatment of combined heat and power (CHP) plants. These additions appreciably improved the consistency of the model results when compared to historical ERCOT conditions. An initial UC&D model was used to investigate the impacts on the dispatch of a future high wind generation scenario with the potential to utilize numerous CAES facilities. For all future natural gas prices considered, the addition of CAES led to reduced use of high marginal cost generator types, increased use of base-load generator types, and average reductions in the total operating costs of 3.7 million dollars per week. 
Additional analyses demonstrated the importance of allowing CAES to participate in all available energy and ancillary services (AS) markets, and showed that a reduction in future thermal capacity would increase the use of CAES. A second UC&D model, which incorporated advanced features such as variable marginal heat rates, was used to analyze the influence of future wind generation variability on the dispatch and the resulting environmental impacts. This analysis revealed that higher wind variability increases the daily net-load ramping requirements, which results in less use of coal and nuclear generators in favor of faster-ramping units, along with reductions in emissions and water use. The changes to the net load also increased the volatility of energy and AS prices between daily minimum and maximum levels, and these impacts compounded as higher levels of wind variability were reached. Lastly, the advanced UC&D model was used to evaluate the operational behavior and potential economic feasibility of a first-entrant conventional or adiabatic CAES system. Both storage systems were found to operate in a single mode that enabled very high utilization of their capacity, indicating that both have highly desirable characteristics. The results suggest a positive case for investment in a first-entrant CAES facility in the ERCOT market.
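The cost savings attributed to CAES arise because storage flattens the net load, shifting generation from high-marginal-cost peakers toward base-load units. A toy merit-order dispatch sketch of that mechanism (hypothetical generator stack and load profile; the actual UC&D models solve a full unit commitment problem with many more constraints):

```python
def merit_order_cost(load, units):
    """Dispatch a merit-order stack against an hourly load profile.
    units: list of (capacity_MW, marginal_cost_$_per_MWh), cheapest first.
    Returns the total operating cost."""
    total = 0.0
    for demand in load:
        remaining = demand
        for cap, cost in units:
            g = min(cap, remaining)      # run cheapest units first
            total += g * cost
            remaining -= g
            if remaining <= 0:
                break
        assert remaining <= 1e-9, "stack too small for demand"
    return total

units = [(400, 20.0), (300, 35.0), (300, 80.0)]   # base, mid, peaker
load_peaky = [500, 900, 700, 300]                  # same energy, peaky shape
load_flat = [600, 600, 600, 600]                   # ideal (lossless) storage
saving = merit_order_cost(load_peaky, units) - merit_order_cost(load_flat, units)
```

Flattening avoids the 80 $/MWh peaker entirely in this example; a real CAES round trip incurs charging losses and fuel use, which the thesis models account for.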

Optimization of natural wastewater treatment systems (Βελτιστοποίηση φυσικών συστημάτων επεξεργασίας υγρών αποβλήτων)

Γαλανόπουλος, Χρήστος 05 February 2015 (has links)
A pilot-scale study of two parallel shallow-basin systems (basin depth 0.35 m), one planted with Typha latifolia and the other unplanted, was conducted to support the modeling of free water surface (FWS) constructed wetlands. The basins were fed with real municipal sewage at hydraulic retention times ranging from 27.6 to 38.0 days. The water volume in each basin was monitored for two consecutive years, and rainfall and evaporation rates were calculated in parallel. The volume difference between the basins was attributed to water uptake by the plants and was compared with evapotranspiration predictions for similar plants obtained with the REF-ET calculation software. The plants were harvested three times during the first year in order to estimate their nitrogen uptake rate. The main difference between the two systems was the water removal through plant evapotranspiration. The pilot unit was operated to achieve removal of both organic matter (BOD5) and total nitrogen (TN) from the sewage. Its design enabled the development of a mathematical model following the framework of the activated sludge model (ASM). The model describes the dominant microbial processes within each basin: ammonification, aerobic heterotrophic growth, nitrification, and algal growth. Simulation and parameter estimation were performed in the AQUASIM environment. A strong seasonal dependence was observed in the behavior of each basin when the model was fitted to the first year of data, and the model was satisfactorily validated against the second year's data.
The observed mean annual removal efficiencies of BOD5 and TN were 60% and 69%, respectively, for the unplanted basin, and 83% and 75%, respectively, for the planted basin. The model predicted mean annual removal efficiencies of 82% for BOD5 and 65% for TN in the planted basin, satisfying the design criteria for a full-scale constructed wetland. The model's ability to predict not only organic matter removal but also total nitrogen removal was judged sufficient when tested against a real FWS constructed wetland serving a population equivalent of 400, with the sole modification being the inclusion of oxygen limitation in the nitrification rate. The dynamic model was then extended with a direct prediction of the plant evapotranspiration rate and used to design a full-scale constructed wetland case study. The inputs required for this design were the inflow rate, climatic data (temperature and rainfall) for the design region, and the effluent quality requirements. For a case study of 4000 population equivalent with mean annual effluent requirements of BOD5 = 25 mg/L and TN = 15 mg/L, a total wetland surface of 11 ha was required. If two basins in series are used, the first planted and the second unplanted, the total surface can be reduced by approximately 27%, controlled only through the initial maximum planting of the first basin.
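For comparison with the dynamic ASM-style model described above, FWS wetland sizing is often first approximated with the first-order areal ("k-C*") model, in which the outlet concentration decays exponentially with the ratio of an areal rate constant to the hydraulic loading rate. A sketch with hypothetical parameter values (a standard design shortcut, not the thesis model):

```python
import math

def kc_star_outlet(c_in, k, q, c_star=0.0):
    """First-order areal ('k-C*') wetland model:
        C_out = C* + (C_in - C*) * exp(-k / q)
    k: areal rate constant (m/yr), q: hydraulic loading rate (m/yr),
    C*: background concentration (mg/L)."""
    return c_star + (c_in - c_star) * math.exp(-k / q)

# Hypothetical numbers: BOD5 entering at 120 mg/L, k = 34 m/yr, q = 30 m/yr
bod_out = kc_star_outlet(c_in=120.0, k=34.0, q=30.0, c_star=3.5)
removal = 1.0 - bod_out / 120.0
```

Because q is inflow divided by wetland area, solving this relation for q at the target outlet concentration directly yields a required surface area, which is the kind of estimate the dynamic model then refines with seasonal and evapotranspiration effects.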

Timing verification in transaction modeling

Tsikhanovich, Alena 12 1900 (has links)
Hardware/software (Hw/Sw) systems are becoming essential in all aspects of everyday life. Their growing presence in products and services creates a need for efficient development methods, but productive design of these systems is limited by several factors, among them the increasing complexity of applications, the increasing degree of integration, the heterogeneous nature of products and services, and shrinking time-to-market delays. Transaction Level Modeling (TLM) is considered one of the most promising simulation paradigms for managing design complexity, allowing the exploration and validation of design alternatives at high levels of abstraction. This research proposes a timing-expression methodology for TLM based on temporal constraint analysis.
We propose to use a combination of two paradigms to accelerate the design process: TLM on the one hand, and a methodology for expressing timing between different transactions on the other. Using a timing specification model and the underlying timing-constraint verification algorithms can decrease the time needed for verification by simulation, and combining simulation and analytical design exploration methods in one framework can improve the analytical power of design verification and validation. We propose a new timing verification algorithm based on a linearization procedure for min/max constraints, together with an optimization technique that improves its efficiency. We complete the mathematical representation of all constraint types discussed in the literature, creating a unified timing specification methodology that can express a wider class of applications than previously presented ones. We also develop methods for communication-structure exploration and refinement that allow the timing verification algorithms to be applied in system exploration at different TLM levels.
Because there are many definitions of TLM, and many development environments support TLM in their design cycles with various strengths and weaknesses, in the context of our research we define a Hw/Sw specification and simulation methodology that supports TLM in such a way that several modeling concepts can be treated separately. Relying on modern software engineering technologies such as XML, XSLT, XSD, and object-oriented programming supported by the .NET Framework, the methodology makes intermediate design models reusable in order to cope with the time-to-market constraint. It provides a general approach to system modeling that separates application-modeling aspects, such as the models of computation used to describe the system at multiple abstraction levels, from the system specification. As a result, system functionality can be clearly identified in the model without details tied to the development platform, leading to better "portability" of the application model.
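For the linear subset of the timing constraints discussed above (after min/max constraints have been linearized), consistency checking reduces to the classical difference-constraint test: each constraint t_j − t_i ≤ c becomes a weighted edge, and the system is feasible iff the graph has no negative cycle. A minimal Bellman-Ford sketch of that test (illustrative only, not the thesis algorithm):

```python
def feasible(n_events, constraints):
    """Difference-constraint consistency test. Each constraint is a triple
    (i, j, c) meaning t_j - t_i <= c. Build edge i -> j with weight c and
    run Bellman-Ford from an implicit source connected to every event with
    weight 0; the system is feasible iff there is no negative cycle."""
    dist = [0.0] * n_events          # implicit source: dist 0 everywhere
    for _ in range(n_events - 1):
        for i, j, c in constraints:
            if dist[i] + c < dist[j]:
                dist[j] = dist[i] + c
    # a still-relaxable edge means a negative cycle: inconsistent system
    return all(dist[i] + c >= dist[j] for i, j, c in constraints)

# t1 - t0 <= 5 and t1 - t0 >= 2: feasible
ok = feasible(2, [(0, 1, 5), (1, 0, -2)])
# t1 - t0 <= 1 and t1 - t0 >= 3: inconsistent
bad = feasible(2, [(0, 1, 1), (1, 0, -3)])
```

The final `dist` values also give one satisfying assignment of event times when the system is feasible, which is useful for reporting a witness schedule.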

Sparsity Motivated Auditory Wavelet Representation and Blind Deconvolution

Adiga, Aniruddha January 2017 (has links) (PDF)
In many scenarios, events such as singularities and transients that carry important information about a signal undergo spreading during acquisition or transmission, and it is important to localize them. For example, edges in an image, or point sources in a microscopy or astronomical image, are blurred by the point-spread function (PSF) of the acquisition system, while in a speech signal, the epochs corresponding to glottal closure instants are shaped by the vocal-tract response. Such events can be extracted with the help of techniques that promote sparsity, which enables separation of the smooth components from the transient ones. In this thesis, we consider the development of such sparsity-promoting techniques. The contributions of the thesis are three-fold: (i) an auditory-motivated continuous wavelet design and representation, which helps identify singularities; (ii) a sparsity-driven deconvolution technique; and (iii) a sparsity-driven deconvolution technique for the reconstruction of finite-rate-of-innovation (FRI) signals. We use the speech signal to illustrate the performance of the techniques in the first two parts, and super-resolution microscopy (2-D) in the third part. In the first part, we develop a continuous wavelet transform (CWT) starting from an auditory motivation. Wavelet analysis provides good time and frequency localization, which has made it a popular tool for time-frequency analysis of signals. The CWT is a multiresolution analysis tool that involves decomposition of a signal using a constant-Q wavelet filterbank, akin to the time-frequency analysis performed by the basilar membrane in the peripheral human auditory system. This connection motivated us to develop wavelets that possess auditory localization capabilities. Gammatone functions are extensively used in modeling the basilar membrane, but the non-zero average of these functions poses a hurdle.
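The admissibility hurdle mentioned above is easy to see numerically: the raw Gammatone function t^(n−1) e^(−2πbt) cos(2πft) has a small but non-zero average, so it must be modified before it qualifies as a wavelet. A sketch with illustrative parameters (n = 4, b = 125 Hz, f = 1 kHz; not the thesis construction):

```python
import math

def gammatone(t, n=4, b=125.0, f=1000.0):
    """Raw (unnormalized) Gammatone function for t >= 0:
    t**(n-1) * exp(-2*pi*b*t) * cos(2*pi*f*t)."""
    return t ** (n - 1) * math.exp(-2.0 * math.pi * b * t) \
        * math.cos(2.0 * math.pi * f * t)

# Crude Riemann estimate of the average (DC) value over [0, 0.1] s:
dt = 1e-6
dc = sum(gammatone(k * dt) for k in range(100_000)) * dt
# dc is tiny but not exactly zero, so the raw Gammatone fails the
# zero-mean (admissibility) requirement and needs modification.
```

A wavelet built from this function must cancel the residual DC component, which is what the thesis's Gammatone-wavelet construction addresses before analyzing admissibility and vanishing moments.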
We construct bona fide wavelets from the Gammatone function, called Gammatone wavelets, and analyze their properties such as admissibility, time-bandwidth product, and vanishing moments. Of particular interest is the vanishing-moments property, which enables the wavelet to suppress smooth regions in a signal, leading to sparsification. We show how this property of the Gammatone wavelets, coupled with multiresolution analysis, can be employed for singularity and transient detection. Using these wavelets, we also construct equivalent filterbank models and obtain cepstral feature vectors from such a representation. We show that the Gammatone wavelet cepstral coefficients (GWCC) are effective for robust speech recognition compared with mel-frequency cepstral coefficients (MFCC). In the second part, we consider the problem of sparse blind deconvolution (SBD) starting from a signal obtained as the convolution of an unknown PSF and a sparse excitation. The BD problem is ill-posed, and the goal is to employ sparsity to arrive at an accurate solution. We formulate the SBD problem within a Bayesian framework. The estimation of the filter and excitation involves optimization of a cost function that consists of an ℓ2 data-fidelity term and an ℓp-norm (p ∈ [0, 1]) regularizer as the sparsity-promoting prior. Since the ℓp-norm is not differentiable at the origin, we consider a smoothed version of the ℓp-norm as a proxy in the optimization. Apart from the regularizer being non-convex, the data term is also non-convex in the filter and excitation, as they are both unknown. We optimize the non-convex cost using an alternating minimization strategy, and develop an alternating ℓp-ℓ2 projections algorithm (ALPA).
We demonstrate convergence of the iterative algorithm, analyze in detail the role of the pseudo-inverse solution as an initialization for ALPA, and provide probabilistic bounds on its accuracy considering the presence of noise and the condition number of the linear system of equations. We also consider the case of bounded noise and derive tight tail bounds using the Hoeffding inequality. As an application, we consider the problem of blind deconvolution of speech signals. In the linear model for speech production, voiced speech is assumed to be the result of a quasi-periodic impulse train exciting a vocal-tract filter. The locations of the impulses, or epochs, indicate the glottal closure instants, and the spacing between them indicates the pitch. Hence, the excitation in the case of voiced speech is sparse, and its deconvolution from the vocal-tract filter is posed as an SBD problem. We employ ALPA for SBD and show that the excitation obtained is sparser than the excitations obtained using sparse linear prediction, the smoothed ℓ1/ℓ2 sparse blind deconvolution algorithm, and majorization-minimization-based sparse deconvolution techniques. We also consider the problem of epoch estimation and show that the epochs estimated by ALPA in both clean and noisy conditions are closer to the instants indicated by the electroglottograph when compared with the estimates provided by the zero-frequency filtering technique, which is the state-of-the-art epoch estimation technique. In the third part, we consider the problem of deconvolution of a specific class of continuous-time signals called finite-rate-of-innovation (FRI) signals, which are not bandlimited but are specified by a finite number of parameters over an observation interval. The signal is assumed to be a linear combination of delayed versions of a prototypical pulse. The reconstruction problem is posed as a 2-D SBD problem. The kernel is assumed to have a known form but with unknown parameters. 
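The precise ALPA updates (the exact ℓp projections and the initialization analysis) are developed in the thesis; only the alternating skeleton is sketched below, with a gradient step on the smoothed ℓp cost standing in for the projection step. All sizes and constants are hypothetical, and the scale ambiguity and trivial solution that a real SBD method must handle are ignored here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic SBD instance (hypothetical sizes): sparse excitation, short filter.
n, k = 200, 8
x_true = np.zeros(n)
x_true[rng.choice(n, 6, replace=False)] = rng.normal(0.0, 1.0, 6)
h_true = np.hanning(k) + 0.1
y = np.convolve(x_true, h_true)[:n] + 0.01 * rng.normal(size=n)

def conv_cols(x, k):
    """n x k matrix C with C @ h = (h * x), causal convolution truncated to n."""
    C = np.zeros((len(x), k))
    for j in range(k):
        C[j:, j] = x[:len(x) - j]
    return C

def conv_mat(h, n):
    """n x n lower-triangular matrix H with H @ x = (h * x), truncated to n."""
    H = np.zeros((n, n))
    for i, hi in enumerate(h):
        H += hi * np.eye(n, k=-i)
    return H

def lp_smooth(x, p=0.7, eps=1e-3):
    """Smooth proxy for the lp 'norm': sum((x^2 + eps)^(p/2))."""
    return np.sum((x * x + eps) ** (p / 2))

lam, step = 0.05, 0.02
x, h = y.copy(), np.ones(k) / k        # crude initialization

def cost(x, h):
    return 0.5 * np.sum((y - conv_cols(x, k) @ h) ** 2) + lam * lp_smooth(x)

c0 = cost(x, h)
for _ in range(30):
    # l2 step: with x fixed, the filter update is a linear least-squares fit.
    h, *_ = np.linalg.lstsq(conv_cols(x, k), y, rcond=None)
    # lp step: with h fixed, a few gradient steps on the smoothed non-convex
    # cost stand in for the lp projection step of ALPA.
    H = conv_mat(h, n)
    for _ in range(5):
        grad = H.T @ (H @ x - y) + lam * 0.7 * x * (x * x + 1e-3) ** (0.7 / 2 - 1)
        x -= step * grad

print(cost(x, h) < c0)   # the alternating scheme reduces the cost
```

Each half-step minimizes (or descends) the same cost with the other unknown frozen, which is why the overall objective is non-increasing even though the joint problem is non-convex.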
Given the sampled version of the FRI signal, the delays, quantized to the nearest point on the sampling grid, are first estimated using a proximal-operator-based alternating ℓp-ℓ2 algorithm (ALPAprox) and then super-resolved to obtain off-grid (OG) estimates using gradient-descent optimization. The overall technique is termed OG-ALPAprox. We show an application of OG-ALPAprox to a particular modality of super-resolution microscopy (SRM) called stochastic optical reconstruction microscopy (STORM). The resolution of the traditional optical microscope is limited by diffraction; this bound is known as the Abbe limit. The goal of SRM is to engineer the optical imaging system to resolve structures in specimens, such as proteins, whose dimensions are smaller than the diffraction limit. The specimen to be imaged is tagged or labeled with light-emitting, or fluorescent, chemical compounds called fluorophores. These compounds specifically bind to proteins and exhibit fluorescence upon excitation. The fluorophores are assumed to be point sources, and the light emitted by them undergoes spreading due to diffraction. STORM employs a sequential approach wherein, at each step, only a few fluorophores are randomly excited and the image is captured by a sensor array. The obtained image is diffraction-limited; however, the separation between the fluorophores allows for localizing the point sources with high precision. The localization is performed using Gaussian peak-fitting. This process of random excitation coupled with localization is performed sequentially, and the results are subsequently consolidated to obtain a high-resolution image. We pose the localization as an SBD problem and employ OG-ALPAprox to estimate the locations. We also report comparisons with the de facto standard Gaussian peak-fitting algorithm and show that the statistical performance of OG-ALPAprox is superior. Experimental results on real data show that the reconstruction quality is on par with Gaussian peak-fitting.
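The baseline against which OG-ALPAprox is compared — Gaussian peak-fitting of a diffraction-limited spot — can be sketched in a few lines (this is the reference method, not OG-ALPAprox itself; the PSF width and the sub-pixel source position below are hypothetical):

```python
import numpy as np
from scipy.optimize import curve_fit

# Simulate a diffraction-limited spot: a 2-D Gaussian PSF sampled on a pixel
# grid, with the true emitter at a sub-pixel (off-grid) position.
rng = np.random.default_rng(1)
size, sigma = 15, 1.8            # pixels; sigma models the PSF width
x0_true, y0_true = 7.3, 6.6      # hypothetical sub-pixel source location
yy, xx = np.mgrid[0:size, 0:size]
img = np.exp(-((xx - x0_true)**2 + (yy - y0_true)**2) / (2 * sigma**2))
img += 0.02 * rng.normal(size=img.shape)   # sensor noise

def gauss2d(coords, a, x0, y0, s):
    """Isotropic 2-D Gaussian evaluated at stacked (x, y) coordinates."""
    x, y = coords
    return a * np.exp(-((x - x0)**2 + (y - y0)**2) / (2 * s**2))

coords = np.vstack([xx.ravel(), yy.ravel()])
p0 = [img.max(), size / 2, size / 2, 2.0]   # crude initialization
popt, _ = curve_fit(gauss2d, coords, img.ravel(), p0=p0)

# The fitted centre recovers the source to well below one pixel.
print(abs(popt[1] - x0_true) < 0.1 and abs(popt[2] - y0_true) < 0.1)
```

Because the emitters in each STORM frame are well separated, a local fit of this kind suffices per spot; the consolidated fit centres over many frames form the super-resolved image.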
108

Modelagem, análise e experimentação de sistema fotovoltaico isolado baseado em plataforma de simulação com diagrama de blocos / Modeling, analysis, and experimentation of a stand-alone photovoltaic system based on a block-diagram simulation platform

Santos Junior, Francisco Antonio Ferreira dos 29 February 2016 (has links)
This work presents a block-diagram model of a stand-alone (grid-independent) photovoltaic power generation system, including the DC regulation and voltage inversion stages and the control system, based on dynamic simulations in Simulink/Matlab® using exclusively the built-in blocks available in its library. A well-known technique from the literature, MPPT (maximum power point tracking), was used to track the maximum power of the photovoltaic generation. The control used to maintain a constant output voltage at the Push-Pull converter, however, is based on a method similar to MPPT, which constitutes a novelty of this work. The model of the entire PV system is integrated with these control systems in Simulink for investigation and production of the simulation results. An experimental platform comprising a photovoltaic panel emulator, a 1 kW Push-Pull converter, a three-phase three-leg inverter, and a hydraulic load consisting of a motor-pump set was built in the laboratory. The experimental results corroborate the methodology.
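The abstract does not specify which MPPT algorithm was used; perturb-and-observe (P&O) is the most common choice in the literature, and a minimal sketch of it, with a hypothetical power-voltage curve standing in for the PV array, looks like this:

```python
import numpy as np

def pv_power(v):
    """Hypothetical PV power-voltage curve (single-diode-like, Voc ~ 21 V)."""
    i = 5.0 * (1.0 - np.exp((v - 21.0) / 1.5))   # current collapses near Voc
    return max(v * i, 0.0)

def perturb_and_observe(v0=12.0, dv=0.1, steps=200):
    """Classic P&O MPPT: keep perturbing the operating voltage in the
    direction that increased the measured power; reverse when power drops."""
    v, p_prev, direction = v0, pv_power(v0), 1.0
    for _ in range(steps):
        v += direction * dv
        p = pv_power(v)
        if p < p_prev:
            direction = -direction   # power fell: reverse the perturbation
        p_prev = p
    return v

v_mpp = perturb_and_observe()
print(16.5 < v_mpp < 18.0)   # settles in a small band around the true MPP
```

In steady state the operating point oscillates within one perturbation step of the maximum power point, which is the usual accuracy/ripple trade-off of P&O controlled by `dv`.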
109

Vers une caractérisation spatiotemporelle pour l'analyse du cycle de vie / Towards a Spatiotemporal Characterization for Life Cycle Analysis

Beloin-Saint-Pierre, Didier 03 December 2012 (has links)
This thesis presents various developments of the life cycle assessment (LCA) method that improve the consideration of spatiotemporal specificities when modeling systems. These developments address the question of how to characterize the flows that describe a system. The discussion begins with an analysis of recent developments of the LCA method regarding the consideration of spatiotemporal specificities in its different phases. This analysis identifies several weaknesses in how space and time are characterized today. A new spatiotemporal characterization mode is therefore proposed to minimize the adverse effects of the existing characterization modes. Representativeness of the modeled system, potential precision of the characterization, and the amount of work required to model different systems are the three main criteria considered in the elaboration of this new mode. In particular, the new mode improves the genericity of the processes used to model systems in different databases, which reduces the unavoidable increase in workload associated with temporal characterization. The new mode does, however, require a major change in the method of computing life cycle inventories, owing to its use of temporal distributions. The feasibility of using the new mode and the new inventory calculation method is then demonstrated by implementing them in case studies of energy production from renewable sources. The two selected case studies highlight the relevance of such a spatiotemporal characterization, which yields a more representative model of the systems depending on the level of precision retained. 
With this new approach, named ESPA+, this higher level of representation, however, brings a potential decrease of completeness for the analysis of the system. Indeed, it is difficult to model the spatiotemporal characteristics of a complete system.
110

Commande crone appliquée à l'optimisation de la production d'une éolienne / CRONE command for the optimization of wind turbine production

Feytout, Benjamin 11 December 2013 (has links)
These studies, conducted in collaboration between VALEOL and the IMS laboratory, propose solutions for optimizing the production and operation of a wind turbine. The general theme of the work is the design of control laws for the system or its subsystems using the CRONE methodology, which answers a need for robustness. Each study covers modeling, system identification, and control design, followed by application through simulations or tests on reduced-scale and full-size models. Chapter 1 provides an overview of the issues treated in this manuscript, with states of the art and a discussion of the industrial and economic context of 2013. Chapter 2 introduces the CRONE methodology for the synthesis of robust controllers. It is used to control the rotation speed of a variable-speed wind turbine featuring an innovative architecture: a mechanical variable-speed drive and a synchronous generator. Chapter 3 compares three new optimization criteria for the CRONE methodology. The aim is to reduce the complexity of the methodology and make it easier for any user to apply. Results for the different criteria are obtained through simulations, first on an academic example and then on a DFIG wind turbine model. Chapter 4 focuses on reducing the structural loads transmitted by the wind to the turbine, by improving pitch-angle control through individual action on each blade as a function of rotor position and wind disturbances. Chapter 5 is devoted to the design of an anti-icing/de-icing system for blades, developed within a regional (Aquitaine) project. After modeling and identification of the process, the CRONE design is used to regulate the temperature of an electrically powered heating polymer coating applied to the blades. The study is completed by the design of an observer to detect the presence of ice.
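The abstract does not detail the CRONE template used; the core idea behind first-generation CRONE design — a fractional (non-integer) order open loop whose constant phase makes the phase margin insensitive to plant gain variations — can be checked numerically. The order ν and crossover frequency below are hypothetical:

```python
import numpy as np

# Open loop beta(s) = (wc/s)**nu with non-integer order nu: its phase is
# -nu * 90 degrees at every frequency, so gain changes shift the crossover
# frequency but leave the phase margin (here 180 - nu*90 = 45 deg) unchanged.
nu, wc = 1.5, 10.0                       # fractional order, crossover (rad/s)
w = np.logspace(-1, 3, 500)
beta = (wc / (1j * w)) ** nu

phase_deg = np.angle(beta, deg=True)
print(np.allclose(phase_deg, -nu * 90.0))   # constant phase across frequencies
```

A rational controller then approximates this fractional behaviour over the frequency band of interest; the optimization criteria compared in Chapter 3 govern how that template is tuned.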