471 |
A suboptimal SLM based on symbol interleaving scheme for PAPR reduction in OFDM systems
Liu, Yung-Fu, 31 July 2012
Orthogonal frequency division multiplexing (OFDM) is the standard for next-generation mobile communication, and one of its major drawbacks is the high peak-to-average power ratio (PAPR). In this paper, we propose a low-complexity selected mapping (SLM) scheme to reduce PAPR. In [27], Wang proposed a low-complexity SLM scheme that uses conversion vectors in the form of perfect sequences to address the problem that the phase rotation vectors derived from the conversion vectors do not usually have equal magnitude in the frequency domain. This paper proposes a low-complexity SLM scheme based on perfect sequences that additionally applies symbol interleaving to reduce the correlation between the candidate signals in the time domain. It is shown that the complementary cumulative distribution function (CCDF) of the proposed scheme is closer to that of the traditional SLM scheme than Wang's scheme in [27], at the cost of some additional complexity, while its computational complexity remains much lower than that of traditional SLM.
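As a hedged illustration of the baseline these schemes build on, the Python sketch below implements conventional SLM for a single OFDM symbol: generate several phase-rotated candidates and transmit the one with the lowest PAPR. The block length, number of candidates, and QPSK mapping are illustrative assumptions, not parameters from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
N, U = 256, 8          # subcarriers and number of candidates (assumed)

def papr_db(x):
    """Peak-to-average power ratio of a discrete-time signal, in dB."""
    p = np.abs(x) ** 2
    return 10.0 * np.log10(p.max() / p.mean())

# Random QPSK symbols on the subcarriers.
X = (rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)) / np.sqrt(2)

# Conventional SLM: rotate the frequency-domain symbols by U random
# +/-1 phase vectors and transmit the candidate with the lowest PAPR.
best = None
for _ in range(U):
    candidate = np.fft.ifft(X * rng.choice([-1, 1], N))
    if best is None or papr_db(candidate) < papr_db(best):
        best = candidate

print(f"PAPR without SLM: {papr_db(np.fft.ifft(X)):.2f} dB")
print(f"PAPR with SLM   : {papr_db(best):.2f} dB")
```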
|
472 |
A Novel Precoding Scheme for Systems Using Data-Dependent Superimposed Training
Chen, Yu-chih, 31 July 2012
In the data-dependent superimposed training (DDST) scheme, to enable channel estimation free of data-induced interference, the data sequence is shifted by subtracting a data-dependent sequence before the training sequence is added at the transmitter. The resulting distortion term causes a data identification problem (DIP) at the receiver. In this thesis, we propose two precoding schemes based on previous work. To maintain a low peak-to-average power ratio (PAPR), the precoding matrix is restricted to a diagonal matrix. The first scheme, termed the efficient diagonal scheme, enlarges the minimum distance between the closest codewords; conditions ensuring that the precoding matrix is efficient for M-ary phase shift keying (MPSK) and M-ary quadrature amplitude modulation (MQAM) are given. The second scheme pursues the lowest receiver complexity by reducing the size of the search set; it trades bit error rate (BER) performance for lower complexity at the receiver. The simulation results show that the PAPR is improved and the DIP is solved in both schemes.
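For readers unfamiliar with DDST, the following minimal Python sketch illustrates the transmitter-side construction the abstract refers to: a data-dependent sequence removes the periodic component of the data so it cannot interfere with the superimposed training at the pilot bins. The block sizes and training values are assumptions, and the proposed precoding schemes themselves are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

P = 8                  # training period (assumed)
K = 16                 # periods per block, so N = K * P
N = K * P

# QPSK data and a P-periodic training sequence (values illustrative).
x = (rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)) / np.sqrt(2)
c = 0.3 * np.tile(np.exp(2j * np.pi * np.arange(P) / P), K)

# Data-dependent sequence: subtract the P-periodic component of the
# data so it cannot interfere with the training at the pilot bins.
periodic_mean = x.reshape(K, P).mean(axis=0)
e = -np.tile(periodic_mean, K)

s = x + e + c          # transmitted block (shifted data + training)

# At the pilot bins (multiples of K) the data contribution vanishes,
# which is what makes interference-free channel estimation possible.
print(np.abs(np.fft.fft(x + e)[::K]).max())   # ~1e-16
```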
|
473 |
ALTERNATE POWER AND ENERGY STORAGE/REUSE FOR DRILLING RIGS: REDUCED COST AND LOWER EMISSIONS PROVIDE LOWER FOOTPRINT FOR DRILLING OPERATIONS
Verma, Ankit, May 2009
Diesel engines operating the rig pose the problems of low efficiency and high emissions. In addition, rig power requirements vary widely with time and with the ongoing operation. It is therefore in the best interest of operators to investigate alternate drilling energy sources that can make the entire drilling process economical and environmentally friendly. One of the major ways to reduce the footprint of drilling operations is to provide more efficient power sources for them. There are various options for alternate energy storage/reuse. A quantitative comparison of physical size and economics shows that rigs powered by the electrical grid can provide lower-cost operations, emit fewer emissions, are quieter, and have a smaller surface footprint than conventional diesel-powered drilling.
This thesis describes a study to evaluate the feasibility of adopting technology to reduce the size of the power-generating equipment on drilling rigs and to provide "peak shaving" energy through new energy-generating and energy storage devices such as flywheels. An energy audit was conducted on a new-generation lightweight Huisman LOC 250 rig drilling in South Texas to gather comprehensive time-stamped drilling data. A study of emissions during drilling operations was also conducted during the audit. The data was analyzed using MATLAB and compared to a theoretical energy audit. The study showed that it is possible to remove peaks of the rig power requirement with a flywheel kinetic energy recovery and storage (KERS) system, and that linking to the electrical grid would supply sufficient power to operate the rig normally. Both the link to the grid and the KERS system would fit within a standard ISO container.
A cost-benefit analysis of the containerized system to transfer grid power to a rig, coupled with the KERS, indicated that such a design had the potential to save more than $10,000 per week of drilling operations with significantly lower emissions, quieter operation, and a smaller well pad.
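A hedged sketch of the peak-shaving idea evaluated in the study: given a rig power demand profile and a grid import limit, size the flywheel buffer by classic reservoir (sequent-peak style) accounting. The load profile and limit below are invented for illustration and are not the audited Huisman LOC 250 data.

```python
import numpy as np

# Synthetic one-minute rig power demand in kW: a 400 kW baseline with
# short 900 kW peaks each hour (illustrative, not the audit data).
t = np.arange(600)                         # ten hours of minutes
demand = 400.0 + 500.0 * (t % 60 < 2)

grid_limit = 550.0                         # assumed grid import limit, kW

# Reservoir (sequent-peak) sizing: the flywheel discharges whenever
# demand exceeds the grid limit and recharges when there is headroom.
net_kwh = (demand - grid_limit) / 60.0     # drawn (+) or returned (-) per minute
level = np.cumsum(net_kwh)
capacity = (level - np.minimum.accumulate(level)).max()

print(f"Peak demand          : {demand.max():.0f} kW")
print(f"Grid import capped at: {grid_limit:.0f} kW")
print(f"Required flywheel    : {capacity:.1f} kWh")
```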
|
474 |
Direct Use Of Pgv For Estimating Peak Nonlinear Oscillator Displacements
Kucukdogan, Bilge, 01 November 2007
Recently established approximate methods for estimating the lateral deformation demands on structures are based on the prediction of nonlinear oscillator displacements (Sd,ie). In this study, a predictive model is proposed to estimate the inelastic spectral displacement as a function of peak ground velocity (PGV). Prior to the generation of the proposed model, nonlinear response history analyses were conducted on several building models covering a wide range of fundamental periods and hysteretic behaviors to observe the performance of the selected demands and the chosen ground-motion intensity measures (peak ground acceleration, PGA; peak ground velocity, PGV; and elastic pseudo-spectral acceleration at the fundamental period, PSa(T1)). Confined to the building models used and the ground-motion dataset, the correlation studies revealed the superiority of PGV with respect to the other intensity measures in identifying the variation in global deformation demands of structural systems (i.e., maximum roof and maximum interstory drift ratio). This rationale is the driving force for proposing the PGV-based prediction model. The proposed model accounts for the variation of Sd,ie for bilinear hysteretic behavior under constant ductility (µ) and normalized strength ratio (R) associated with postyield stiffness ratios of 0% and 5%. Within the limitations imposed by the ground-motion database, the predictive model can estimate Sd,ie by employing the PGV predictions obtained from attenuation relationships. In this way the influence of important seismological parameters can be incorporated into the variation of Sd,ie in a fairly rational manner. Various case studies are presented to show the consistency of the Sd,ie estimates produced by the proposed model using PGV values obtained from recent ground motion prediction equations.
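To make the predicted quantity concrete, here is a minimal Python sketch that computes the peak displacement of a bilinear hysteretic oscillator by explicit response history analysis (central difference integration with kinematic hardening). The period, damping, yield strength, postyield ratio, and the synthetic pulse standing in for a ground motion record are all assumed values, not data from the study.

```python
import numpy as np

def peak_bilinear_sd(ag, dt, T=1.0, zeta=0.05, fy=1.0, alpha=0.05):
    """Peak displacement of a unit-mass bilinear SDOF oscillator under
    base acceleration ag, via central difference integration."""
    m = 1.0
    k = m * (2.0 * np.pi / T) ** 2        # initial stiffness
    c = 2.0 * zeta * np.sqrt(k * m)       # viscous damping
    n = len(ag)
    u = np.zeros(n)
    fs, u_prev = 0.0, 0.0                 # restoring force, u[i-1]
    k_hat = m / dt**2 + c / (2.0 * dt)
    for i in range(1, n - 1):
        p_hat = (-m * ag[i] - fs + (2.0 * m / dt**2) * u[i]
                 - (m / dt**2 - c / (2.0 * dt)) * u_prev)
        u_next = p_hat / k_hat
        # Bilinear hysteresis with kinematic hardening: elastic trial
        # step, then clamp onto the postyield envelope.
        fs_trial = fs + k * (u_next - u[i])
        upper = alpha * k * u_next + (1.0 - alpha) * fy
        lower = alpha * k * u_next - (1.0 - alpha) * fy
        fs = min(max(fs_trial, lower), upper)
        u_prev, u[i + 1] = u[i], u_next
    return np.abs(u).max()

# Synthetic decaying pulse standing in for a ground motion record.
dt = 0.005
t = np.arange(0.0, 10.0, dt)
ag = 4.0 * np.sin(2.0 * np.pi * t) * np.exp(-t)   # m/s^2

print(f"Peak inelastic displacement: {peak_bilinear_sd(ag, dt):.4f} m")
```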
|
475 |
Performance Improvement Of Vlsi Circuits With Clock Scheduling
Kapucu, Kerem, 01 December 2009
Clock scheduling is studied to improve the performance of synchronous sequential circuits. The performance improvement covers the optimization of the clock frequency and of the peak power consumption, separately. For clock period minimization, the cycle stealing method is utilized, in which the redundant cycle time of fast combinational logic is transferred to slower logic by proper clock skew adjustment of the registers. The clock scheduling system determines the minimum clock period at which a synchronous sequential circuit can operate without hazards, and the timing of each register is adjusted for operation at that period. The dependence of the propagation delays of combinational gates on load capacitance values is modeled in order to increase the accuracy of the clock period minimization algorithm. Simulation results show up to 45% speed-up for circuits scheduled by the system. For peak power minimization, the dependence of the switching currents of circuit elements on load capacitance values is modeled. A new method, the Shaped Pulse Approximation (SPA) method, is proposed for estimating the switching power dissipation of circuit elements for arbitrary capacitive loads; switching current waves can be estimated accurately using SPA with less than 10% normalized RMS error. The clock scheduling algorithm of Takahashi for reducing the peak power consumption of synchronous sequential circuits is implemented using the SPA method. Up to 73% decrease in peak power dissipation is observed in simulation results when a proper clock scheduling scheme is applied to the test circuits.
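As a hedged sketch of the cycle-stealing idea, the Python fragment below finds the minimum clock period for a toy three-register loop by binary search, checking at each candidate period whether a consistent skew assignment exists via Bellman-Ford on the setup/hold difference constraints. The path delays and margins are invented; the thesis's load-dependent delay models and actual algorithm are not reproduced here.

```python
# (src, dst, d_min, d_max): combinational path delays in ns (assumed).
paths = [(0, 1, 1.0, 6.0), (1, 2, 0.5, 2.0), (2, 0, 1.0, 4.0)]
n_regs = 3
setup, hold = 0.1, 0.1

def feasible(T):
    """Can skews s_0..s_2 satisfy all setup/hold constraints at period T?
    Difference constraints are checked with Bellman-Ford; a negative
    cycle means no valid skew assignment exists."""
    edges = []
    for i, j, dmin, dmax in paths:
        edges.append((j, i, T - dmax - setup))  # setup: s_i - s_j <= T - dmax - setup
        edges.append((i, j, dmin - hold))       # hold:  s_j - s_i <= dmin - hold
    dist = [0.0] * n_regs
    for it in range(n_regs):
        changed = False
        for u, v, w in edges:
            if dist[u] + w < dist[v] - 1e-12:
                dist[v] = dist[u] + w
                changed = True
        if changed and it == n_regs - 1:
            return False
    return True

lo, hi = 0.0, max(p[3] for p in paths) + setup  # zero-skew period is feasible
for _ in range(50):                             # binary search on the period
    mid = (lo + hi) / 2.0
    lo, hi = (lo, mid) if feasible(mid) else (mid, hi)
print(f"Minimum clock period with skew scheduling: {hi:.3f} ns")
```

For this toy loop the binding constraint is the hold/setup pair on the slowest path, so the schedule steals slack from the faster stages and beats the zero-skew period of 6.1 ns.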
|
476 |
Double-punch test for evaluating the performance of steel fiber-reinforced concrete
Woods, Aaron Paul, 19 June 2012
The objective of this study is to develop test protocols for comparing the effectiveness of fiber-reinforced concrete (FRC) mixtures with high-performance steel fibers. Steel fibers can be added to fresh concrete to increase the tensile strength, ductility, and durability of concrete structures. In order to qualify steel fiber-reinforced concrete (SFRC) mixtures for field applications, a material test capable of predicting the performance of SFRC under field loading conditions is required. However, current test methods used to evaluate the structural properties of FRC (such as residual strength and toughness) are widely regarded as inadequate; a simple, accurate, and consistent test method is needed. It was determined that the Double-Punch Test (DPT), originally introduced by Chen in 1970 for plain concrete, could be extended to fiber-reinforced concrete to satisfy this industry need. In the DPT, a concrete cylinder is placed vertically between the loading platens of the test machine and compressed by two steel punches located concentrically on the top and bottom surfaces of the cylinder. It is hypothesized that the Double-Punch Test is capable of comparing future fiber-reinforcement design options for use in structural applications and is suitable for evaluating FRC in general. The DPT Research and Testing Program was administered to produce sufficient within-laboratory data to draw conclusions and make recommendations regarding the simplicity, reliability, and reproducibility of the DPT for evaluating the performance of SFRC. Several variables (including fiber manufacturer, fiber content, and testing equipment) were evaluated to verify the relevance of the DPT for FRC. In this thesis, the results of 120 Double-Punch Tests are summarized and protocols for the test's effective application to fiber-reinforced concrete are recommended. Fundamental data are also provided indicating that the DPT could be standardized by national and international agencies, such as the American Society for Testing and Materials (ASTM), as a method to evaluate the mechanical behavior of FRC. This project is sponsored by the Texas Department of Transportation (TxDOT) through TxDOT Project 6348, "Controlling Cracking in Prestressed Concrete Panels and Optimizing Bridge Deck Reinforcing Steel," which is aimed at improving bridge deck construction through developments in design details, durability, and quality control procedures.
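For reference, the double-punch tensile strength is commonly evaluated with Chen's 1970 relation f_t = Q / (pi * (1.2 * b * H - a^2)), where Q is the peak load, a the punch radius, b the cylinder radius, and H the cylinder height. The short Python sketch below evaluates it for an assumed 6 x 12 in. specimen; the numbers are illustrative, not results from the testing program.

```python
import math

# Chen's (1970) double-punch relation (as commonly cited):
#   f_t = Q / (pi * (1.2 * b * H - a**2))
# All specimen numbers below are illustrative assumptions.
Q = 35000.0    # peak applied load, lbf (assumed)
a = 1.0        # punch radius, in.
b = 3.0        # cylinder radius, in. (6 in. diameter specimen)
H = 12.0       # cylinder height, in.

f_t = Q / (math.pi * (1.2 * b * H - a ** 2))
print(f"Double-punch tensile strength: {f_t:.0f} psi")
```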
|
477 |
EVALUATING THE EFFECTIVENESS OF PEAK POWER TRACKING TECHNOLOGIES FOR SOLAR ARRAYS ON SMALL SPACECRAFT
Erb, Daniel Martin, 01 January 2011
The unique environment of CubeSat and small satellite missions allows certain accepted paradigms of the larger satellite world to be re-examined in order to trade performance for simplicity, mass, and volume. Peak Power Tracking technologies for solar arrays are generally implemented to meet the end-of-life power requirements of satellite missions given radiation degradation over time; the short lifetime of the typical small satellite mission removes the need to compensate for this degradation. While Peak Power Tracking implementations can deliver increased power by taking advantage of, and compensating for, the temperature cycles that solar cells experience, this comes at the expense of system complexity, and, given smart system design, the performance gain is negligible and possibly detrimental. This thesis investigates different Peak Power Tracking implementations and compares them, using computer simulation, to two Fixed Point implementations as well as a Direct Energy Transfer system in terms of performance and system complexity. This work demonstrates that, though Peak Power Tracking systems work as designed, under most circumstances Direct Energy Transfer systems should be used in small satellite applications, as they give the same or better performance with less complexity.
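As a hedged sketch of one Peak Power Tracking implementation of the kind compared in the thesis, the Python fragment below runs a perturb-and-observe tracker against a toy photovoltaic power-voltage curve. The curve shape, step size, and starting voltage are assumptions for illustration only.

```python
import numpy as np

def pv_power(v):
    """Toy photovoltaic power-voltage curve with one peak (assumed)."""
    i = 2.0 * (1.0 - np.exp((v - 5.0) / 0.8))   # crude diode-like I-V
    return max(v * i, 0.0)

# Perturb and observe: step the operating voltage, keep stepping in
# the same direction while power increases, reverse when it drops.
v, step = 2.0, 0.05
p_prev = pv_power(v)
for _ in range(200):
    v += step
    p = pv_power(v)
    if p < p_prev:
        step = -step        # overshot the maximum power point
    p_prev = p

print(f"P&O settles near v = {v:.2f} V, p = {pv_power(v):.2f} W")
```

A Fixed Point implementation, by contrast, would simply clamp the operating voltage at a predetermined value, trading the tracker's complexity for whatever power is lost when temperature shifts the true peak.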
|
478 |
The Joint Modelling of Trip Timing and Mode Choice
Day, Nicholas, 24 February 2009
This thesis jointly models the 24-hour work trip timing and mode choice decisions of commuters in the Greater Toronto Area. A discrete-continuous specification, with a multinomial logit model for mode choice and an accelerated time hazard model for trip timing, is used to allow for unrestricted correlation between these two fundamental decisions. Statistically significant correlations are found between mode choice and trip timing for work journeys, with expected differences between modes. Furthermore, the joint models have a wide range of policy-sensitive, statistically significant parameters of intuitive sign and magnitude, revealing expected differences between workers of different occupation groups. The estimated models also show a high degree of fit to observed cumulative departure and arrival time distribution functions and to observed mode choices. Finally, sensitivity tests demonstrate that the model is capable of capturing peak spreading in response to increasing auto congestion.
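As a hedged illustration of the discrete half of such a discrete-continuous specification, the sketch below evaluates multinomial logit mode choice probabilities in Python. The modes, utility coefficients, and level-of-service attributes are invented, not estimates from the thesis.

```python
import numpy as np

# Systematic utilities V_m for three commute modes; the coefficients
# and level-of-service attributes are illustrative assumptions.
beta_time, beta_cost = -0.04, -0.25     # per minute, per dollar
modes = {
    "auto":    {"time": 30.0, "cost": 8.0},
    "transit": {"time": 45.0, "cost": 3.0},
    "walk":    {"time": 90.0, "cost": 0.0},
}

v = np.array([beta_time * m["time"] + beta_cost * m["cost"]
              for m in modes.values()])

# Multinomial logit: P(m) = exp(V_m) / sum_k exp(V_k), computed with
# the max subtracted for numerical stability.
p = np.exp(v - v.max())
p /= p.sum()

for name, prob in zip(modes, p):
    print(f"P({name}) = {prob:.3f}")
```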
|
480 |
Caractérisation des régimes de crues fréquentes en France - un regard géostatistique / Analysis of frequent floods regimes in France - a geostatistical approach
Porcheron, Delphine, 27 September 2018
Few studies have attempted to estimate statistics for frequent floods at ungauged sites. These floods have in fact been neglected by the hydrological community, which is more inclined to focus on the extreme events (return periods of at least 10 years) used in flood risk management. However, the high-flow regime is not limited to these characteristics alone. A good knowledge of moderate floods is required in many fields, such as hydroecology and hydromorphology: the frequent occurrence of these floods implies regular shaping of the riverbed, and they thus help condition ecological habitats within freshwater hydrosystems. The objective of this thesis is to characterise the regime of frequent floods, i.e. with return periods of 1 to 5 years, in metropolitan France. This requires considering the records available nationally and extracting the relevant hydrological information from them. Building a reliable sample allowing a robust analysis is therefore an important step. The selection of stations relies on an analysis of extreme discharge values extracted from variable-time-step discharge records (series length, stationarity, behaviour of the statistical distributions...) as well as on information provided by the operators of the gauging stations. The approach adopted describes moderate flood events comprehensively, in terms of both discharges and volumes, through a multi-duration analysis based on QdF (discharge-duration-frequency) curves, which provide the flood quantiles (peak and volumes). The convergent QdF model used here reduces the number of parameters describing the flood regime to three. To characterise the frequent-flood regime over the whole French river network, the approach relies on so-called "regionalisation" methods, which transfer the hydrological information available at gauged sites to the entire river network. Several approaches were considered. Empirical formulations established over regional subdivisions were implemented; frequently used, this technique requires limiting the number of stations with non-overlapping records, to avoid representing temporal variability rather than a spatial effect. Respecting this constraint entails losing 30% of the gauging stations in the initial sample. It is to limit this non-negligible loss of information that the TREK (Time-REferenced data Kriging) method was developed. This mapping algorithm was designed to take into account the temporal support of the available data in addition to the spatial support: the available data contribute more or less to the estimates according to their own observation periods. TREK thus mitigates the data loss caused by imposing a common reference period or a maximum allowed proportion of gaps. To meet the objectives of the thesis, the various estimation methods for ungauged sites are implemented and their efficiency is assessed by cross-validation. This objective comparison makes it possible to select the optimal model for characterising the frequent-flood regime over the French river network.
/ Only a few studies have focused on frequent-flood regimes at ungauged locations. Most work has concentrated on the extreme flood events (return periods of 10 years or more) needed for solving engineering issues in flood risk management. However, the high-flow regime is not confined to extreme values. A good understanding of frequent floods is required in a wide array of topics such as hydroecology and hydromorphology: frequent floods maintain and rejuvenate ecological habitats and influence the geomorphology of the streambed, so their distribution must also be known. The main objective of this work is to characterise frequent floods from a statistical point of view (with return periods between 1 and 5 years) in France. Forming the dataset is a crucial preliminary step in deriving robust and reliable statistics. The selection relies on several criteria, related for example to the quality of discharge measurements, the length of records, and the self-assessment of the people in charge, and finally on an analysis of extreme values extracted from the time series (stationarity, shape of the distributions...). A comprehensive description of frequent-flood regimes (intensity, duration, and frequency) is achieved by applying the flow-duration-frequency (QdF) model, which takes into account the temporal dynamics of floods; this approach is analogous to the intensity-duration-frequency (IdF) model commonly used for extreme rainfall analysis. At gauged locations, the QdF model can be summarised with only three parameters: the position and scale parameters of the exponential distribution fitted to the samples of instantaneous peak floods, and a parameter homogeneous to a decay time computed from observed data. Different regionalisation methods were applied to estimate these three QdF parameters at ungauged locations. Regionalisation methods rely on the concept of transferring hydrological information from sites of measurement to ungauged sites; however, these approaches require simultaneous records, otherwise the map may be spoiled by temporal variability rather than display truly spatial patterns. Regional empirical formulas were derived, but the constraints discussed above lead to discarding 30% of the dataset. The Time-REferenced data Kriging (TREK) method has been developed to overcome this issue. This algorithm was developed to account for the temporal support over which the variable of interest has been calculated, in addition to its spatial support. The approach reduces the loss of data caused by the selection of a common reference period of records otherwise required to build a reliable dataset. The performance of each method has been assessed by cross-validation, and a combination of the best features is finally selected to map the frequent-flood features over France.
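A hedged sketch of the three-parameter convergent QdF model described above, assuming one common convergent form in which multi-duration quantiles decay hyperbolically from the instantaneous peak quantile; the parameter values and the exact quantile expression are illustrative assumptions, not fitted results from the thesis.

```python
import math

# Three parameters as described in the abstract: position (x0) and
# scale (g) of an exponential distribution fitted to instantaneous
# flood peaks, plus a characteristic decay time (delta). All values
# and the exact quantile form are illustrative assumptions.
x0, g = 80.0, 25.0      # exponential position and scale, m^3/s
delta = 12.0            # decay time, hours

def q_peak(T):
    """Instantaneous peak quantile for return period T in years."""
    return x0 + g * math.log(T)

def q_qdf(d, T):
    """Convergent QdF: quantile of mean discharge over duration d
    (hours), decaying hyperbolically from the instantaneous peak."""
    return q_peak(T) / (1.0 + d / delta)

for T in (1, 2, 5):
    row = "  ".join(f"d={d:>2}h: {q_qdf(d, T):6.1f}" for d in (0, 6, 24))
    print(f"T={T} yr  {row}  (m^3/s)")
```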
|