141 |
A System Architecture-based Model for Planning Iterative Development Processes: General Model Formulation and Analysis of Special Cases
Jootar, Jay; Eppinger, Steven D. 01 1900
The development process for a complex system is typically iterative in nature. One of the critical decisions in managing such a process is how to partition the system development into iterations. This paper proposes a mathematical model that captures the dynamics of such an iterative process. The analysis of two special cases of the model provides insight into how this decision should be made. / Singapore-MIT Alliance (SMA)
|
142 |
Understanding the Sources of Abnormal Returns from the Momentum Strategy
Zhang, Yu 01 December 2010
This thesis studies the sources of the returns from the momentum strategy and seeks to shed some light on the debate over the market efficiency hypothesis that has run for the past twenty years. By decomposing the momentum returns with a mathematical model, we directly investigate the contributors and their relative importance in generating these returns.
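As a point of reference (not necessarily the exact model used in this thesis), the expected profit of the classic weighted relative-strength momentum strategy, in the spirit of Lo and MacKinlay (1990), decomposes into the three kinds of sources examined below:

$$
\mathrm{E}[\pi_t] \;=\; \underbrace{\sigma_{\mu}^{2}}_{\text{cross-sectional dispersion of mean returns}} \;+\; \underbrace{\frac{N-1}{N^{2}}\sum_{i=1}^{N}\operatorname{Cov}\left(r_{i,t},\,r_{i,t-1}\right)}_{\text{own-autocovariance}} \;-\; \underbrace{\frac{1}{N^{2}}\sum_{i\neq j}\operatorname{Cov}\left(r_{i,t},\,r_{j,t-1}\right)}_{\text{cross-autocovariance (lead-lag)}}
$$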
Our empirical results show that the autocorrelation of individual stock returns is one of the driving forces behind the momentum expected returns; the magnitude of this autocorrelation decreases as the ranking period becomes more remote. The second important source is the cross-sectional variation of expected returns within the winner and loser portfolios at a given time. The third is the difference in expected returns between the winner and loser portfolios. To our surprise, the cross-autocovariance contributes little to the momentum expected returns; the lead-lag effect can therefore produce momentum returns, but its impact is not as significant as we had anticipated.
More importantly, by varying the weights of the winner and loser portfolios, we find that the own-autocovariance of the winner portfolio is almost negligible compared with that of the loser portfolio: the returns of the winners are much more random than those of the losers. This asymmetry in own-autocovariance, revealed by the return decomposition, offers a further explanation for the recent finding that the winner and loser portfolios contribute asymmetrically to the momentum returns, and that it is the losers, rather than the winners, that drive them.
The market, therefore, may not be as efficient as previously believed.
|
143 |
Recovery and evaluation of the solid products produced by thermocatalytic decomposition of tire rubber compounds
Liang, Lan 25 April 2007
A thermal catalytic decomposition process has been developed to recycle used tire rubber. The process enables the recovery of useful products such as hydrocarbons and carbon blacks. During catalytic decomposition, the tire rubber is broken down into smaller hydrocarbons, which are collected in the process. The solid reaction residue, which normally consists of carbon black, catalysts, other inorganic rubber compound components, and organic carbonaceous deposits, was subjected to a series of treatments intended to recover the valuable carbon black and catalyst. The process economics depend strongly on the commercial value of the recovered carbon black and on the ability to recover and recycle the catalysts used in the process. Important properties of the recovered carbon black product were characterized and compared with those of commercial-grade carbon blacks: its composition was analyzed by TGA and EDX, its structure and morphology were studied by transmission electron microscopy (TEM), and its specific surface area was measured by BET nitrogen adsorption. The recovered product possesses qualities at least comparable to, or even better than, those of the commercial-grade carbon black N660. Methods for increasing the market value of this recovered carbon black product are discussed. Anhydrous aluminum chloride (AlCl3) was used as the primary catalyst in the process. A catalyst recovery method based on AlCl3 sublimation and recondensation was studied and found to be infeasible: the catalyst appears to form an organometallic complex with the decomposed hydrocarbons, becoming chemically bonded to the residue material and hence not removable by evaporation. A scheme for further study of catalyst recovery is suggested.
|
144 |
Development of Spatio-Temporal Wavelet Post Processing Techniques for Application to Thermal Hydraulic Experiments and Numerical Simulations
Salpeter, Nathaniel 2012 May 1900
This work focuses on high-fidelity experimental and numerical thermal hydraulic studies and on advanced frequency decomposition methods. The major contribution is a proposed method for spatio-temporal decomposition of the frequencies present in a flow, which provides an instantaneous visualization of coherent frequency "structures". The significance of this technique from an engineering standpoint is its ease of implementation and its value as a tool for design engineers. To validate the method, synthetic verification data, experimental data sets, and numerical results are used. The first experiment involves flow through the side entry orifice (SEO) of a boiling water reactor (BWR), measured with non-intrusive particle tracking velocimetry (PTV) techniques. The second experiment examines a simulated double-ended guillotine break in a prismatic block gas-cooled reactor. Numerical simulations of jet flow mixing in the lower plenum of a prismatic block high temperature gas-cooled reactor are used as a final data set for verification and to demonstrate the applicability of the method to an actual computational fluid dynamics validation case.
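A minimal sketch of one way such an instantaneous frequency map can be built (a single-frequency Morlet wavelet applied independently at each spatial measurement point; the wavelet, normalisation, and parameters here are illustrative assumptions, not the thesis's actual implementation):

```python
import numpy as np

def morlet(t, f, n_cycles=6.0):
    """Complex Morlet wavelet centred at frequency f (Hz); n_cycles sets the
    trade-off between time and frequency resolution."""
    sigma_t = n_cycles / (2.0 * np.pi * f)            # temporal width of the Gaussian
    w = np.exp(2j * np.pi * f * t) * np.exp(-t**2 / (2.0 * sigma_t**2))
    return w / np.sqrt(np.sum(np.abs(w) ** 2))        # unit-energy normalisation

def frequency_map(u, dt, f):
    """u: (n_points, n_times) array, e.g. one velocity component sampled at many
    spatial locations.  Returns |wavelet coefficient| at frequency f for every
    point and instant: an instantaneous map of where that frequency is active."""
    n_t = u.shape[1]
    t = (np.arange(n_t) - n_t // 2) * dt
    w = morlet(t, f)
    out = np.empty(u.shape)
    for i, sig in enumerate(u):
        # cross-correlate the mean-removed signal with the wavelet
        c = np.convolve(sig - sig.mean(), np.conj(w)[::-1], mode="same")
        out[i] = np.abs(c)
    return out
```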
|
145 |
Spectral Decomposition Using S-transform for Hydrocarbon Detection and Filtering
Zhang, Zhao 2011 August 1900
Spectral decomposition is a modern tool that uses seismic data to generate additional useful information for hydrocarbon detection, lithology identification, stratigraphic interpretation, filtering, and other tasks in seismic exploration. Various spectral decomposition methods have been reported and investigated for seismic data over the past years, but many do not account for the non-stationary character of seismic data and are therefore unlikely to give satisfactory results. The S-transform, developed in recent years, provides time-dependent frequency analysis while maintaining a direct relationship with the Fourier spectrum, a property that other spectral decomposition methods may lack. In this thesis, I investigate the feasibility and efficiency of using the S-transform for hydrocarbon detection and time-varying surface-wave filtering.
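As a minimal sketch (normalisation conventions vary; this follows Stockwell's frequency-domain formulation and is not necessarily the exact implementation used in the thesis), the discrete S-transform of a 1-D trace can be computed as:

```python
import numpy as np

def stockwell(h):
    """Discrete S-transform of a real 1-D signal h: each voice n is the inverse
    FFT of the spectrum shifted by n samples and windowed with a Gaussian whose
    width scales as 1/n, giving frequency-dependent time resolution."""
    h = np.asarray(h, dtype=float)
    N = len(h)
    H = np.fft.fft(h)
    m = np.fft.fftfreq(N) * N                         # signed frequency indices
    S = np.zeros((N // 2 + 1, N), dtype=complex)
    S[0, :] = h.mean()                                # zero-frequency voice is the mean
    for n in range(1, N // 2 + 1):
        gauss = np.exp(-2.0 * np.pi**2 * m**2 / n**2) # localizing Gaussian window
        S[n, :] = np.fft.ifft(np.roll(H, -n) * gauss)
    return S                                          # S[n, j]: frequency n/(N*dt), time j*dt
```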
The S-transform was first applied to two seismic data sets, from a clastic reservoir in the North Sea and from a deep carbonate reservoir in the Sichuan Basin, China. Results from both cases demonstrated that the S-transform decomposition technique can detect hydrocarbon zones effectively and helps to relate lithology changes to high-frequency variations and hydrocarbon occurrence to low-frequency anomalies. Its time resolution, however, needs to be improved.
In the second part of my thesis, I used the S-transform to develop a novel time-frequency-wavenumber-domain (T-F-K) filtering method to separate surface waves from reflected waves in seismic records. The S-T-F-K filtering proposed here can be used to analyze surface waves on separate f-k panels at different times. The method was tested on hydrophone records of four-component seismic data acquired in the shallow-water Persian Gulf, where the average water depth is about 10 m and Scholte waves and other surface waves are persistently strong. Results showed that this new S-T-F-K method is able to separate and attenuate surface waves and to greatly improve the quality of seismic reflection signals that are otherwise completely concealed by the aliased surface waves.
|
146 |
Physical and chemical analysis of pig carcass decomposition in a fine sand
Larizza, Melina 01 August 2010
The development and improvement of methods for estimating the postmortem interval (PMI) is a common area of research in forensic science. This research was conducted to physically and chemically analyze pig carcass decomposition on a soil surface using conventional and newly developed methods, for potential use in estimating the PMI. Photographs of pig carcasses decomposing on forested and open land were scored using a decomposition scoring system, and the decomposition scores were related to accumulated degree days (ADD). Overall, the ADD values were significantly different for the two groups of carcasses; however, the ADD values for the onset of each score were more similar between groups. The decomposition scoring results also indicated that refinements must be made to the calculation of ADD to allow a meaningful comparison of pig and human decomposition. The decomposition of pig carcasses altered the water content, pH and fatty acid content of the soil. The fatty acids myristic, palmitic, palmitoleic, stearic and oleic acid were successfully extracted from decomposition soil and analyzed. Palmitic, stearic and oleic acids were the most abundant fatty acids detected, whilst the levels of myristic and palmitoleic acids were negligible in comparison. A three-peak cycle was also observed for each fatty acid. Variations in soil pH and in the fatty acid content of decomposition soil have the potential to indicate the presence of a decomposition site. Furthermore, a nonlinear diffusion model was developed to predict the development of the cadaver decomposition island (CDI) in soil over time. Simulation of the model indicated that it has the potential to generate PMI estimates for the early stages of decomposition by relating the effective radius of the CDI to a particular time point. The general findings of this research indicate that more accurate methods for PMI estimation can potentially be developed with further research. / UOIT
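For reference, ADD is commonly computed by summing daily mean temperatures above a base temperature (a base of 0 °C is a common choice in decomposition studies); a minimal sketch with illustrative numbers:

```python
def accumulated_degree_days(daily_mean_temps_c, base_c=0.0):
    """Sum of daily mean temperatures above the base temperature."""
    return sum(max(t - base_c, 0.0) for t in daily_mean_temps_c)

# e.g. five days with mean temperatures of 18, 21, 15, 9 and 12 degrees C
print(accumulated_degree_days([18, 21, 15, 9, 12]))   # -> 75.0
```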
|
147 |
Ketone Production from the Thermal Decomposition of Carboxylate Salts
Landoll, Michael 14 March 2013
The MixAlco process uses an anaerobic, mixed-culture fermentation to convert lignocellulosic biomass to carboxylate salts. The fermentation broth must be clarified so that only carboxylate salts, water, and minimal impurities remain. Carboxylate salts are concentrated by evaporation and thermally decomposed into ketones. The ketones can then be chemically converted to a wide variety of chemicals and fuels.
The presence of excess lime in the thermal decomposition step reduced product yield. Mixtures of calcium carboxylate salts were thermally decomposed at 450 °C. Low lime-to-salt ratios (g Ca(OH)2/g salt) of 0.00134 and below had a negligible effect on ketone yield. In contrast, salts with higher lime-to-salt ratios of 0.00461, 0.0190, and 0.272 showed losses in ketone yield of 3.5, 4.6, and 9.4%, respectively. These losses were caused primarily by increases in tars and heavy oils; a three-fold increase in hydrocarbon production also occurred. To predict ketone product distribution, a random-pairing model and a Gibbs free energy minimization model were applied to the thermal decomposition of mixed calcium and sodium carboxylate salts; random pairing appears to predict ketone product composition better.
Activation energies (EA) were determined using three isoconversional methods for sodium acetate, calcium acetate, two mixed sodium carboxylate salts, and two mixed calcium carboxylate salts. For each salt type, EA varied significantly with conversion. The average EA for sodium acetate and calcium acetate was 226.65 and 556.75 kJ/mol, respectively; the averages for the two mixed sodium carboxylate salts were 195.61 and 218.18 kJ/mol, and those for the two mixed calcium carboxylate salts were 232.78 and 176.55 kJ/mol. In addition, three functions of conversion were tested to see which best modeled the experimental data; the Sestak-Berggren model was the best overall. Possible reactor designs and configurations that address the challenges of continuous thermal decomposition of carboxylate salts are also presented and discussed.
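As background (the three isoconversional methods used in the thesis are not named here; Friedman's differential method is one common choice), an isoconversional analysis and the Sestak-Berggren conversion function take the form

$$
\ln\left(\frac{d\alpha}{dt}\right)_{\alpha} = \ln\left[A_{\alpha}\, f(\alpha)\right] - \frac{E_{a,\alpha}}{R\,T_{\alpha}},
\qquad
f(\alpha) = \alpha^{m}\,(1-\alpha)^{n},
$$

so that, at each fixed conversion \(\alpha\), the slope of \(\ln(d\alpha/dt)\) versus \(1/T\) across experiments gives \(-E_{a,\alpha}/R\).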
Methods of fermentation broth clarification were tested. Flocculation showed little improvement in broth purity, whereas coagulation yielded broth of 93.23% purity. Filtration with pore sizes from 1 micrometer down to 240 Daltons increased broth purity from 90.79 to 98.33% as the pore size decreased.
|
148 |
Singular Value Decomposition
Ek, Christoffer January 2012
Digital information transmission is a growing field. E-mail and other digital communication are used daily around the world, and alongside this growth there is a growing interest in keeping the information secure. In signal processing, antenna transmission is a well-known concept: free space between a transmitter and a receiver is an example of a system. In a rough environment, such as a room with reflections and independent electrical devices, there will be considerable distortion in the system, and the transmitted signal may be distorted by the system characteristics and by noise. System identification is another well-known concept in signal processing. This thesis focuses on system identification of unknown systems in a rough environment. It introduces mathematical tools from linear algebra and applies them in signal processing, focusing mainly on a matrix factorization known as the Singular Value Decomposition (SVD). The SVD is used here to solve complicated matrix inverses and to identify systems. This thesis was carried out in collaboration with Combitech AB, whose expertise in signal processing was of great help when putting the theory into practice. Using the programming environment LabVIEW, the mathematical tools were implemented and synchronized with the instruments used to generate the signals and systems.
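A minimal sketch (assuming a simple FIR system model and numpy, not the thesis's actual LabVIEW implementation) of how a truncated-SVD pseudoinverse can identify an unknown system from measured input and output data:

```python
import numpy as np

def identify_fir(x, y, order, rel_tol=1e-3):
    """Estimate an FIR system h (length `order`) from input x and noisy output y,
    assuming y is approximately convolve(x, h).  The convolution matrix is
    inverted through a truncated-SVD pseudoinverse, which keeps the solution
    stable when the matrix is ill-conditioned."""
    N = len(y)
    X = np.zeros((N, order))              # convolution (Toeplitz-like) matrix: X @ h ~ y
    for k in range(order):
        X[k:, k] = x[:N - k]
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    keep = s > rel_tol * s[0]             # discard near-zero singular values
    return Vt[keep].T @ ((U[:, keep].T @ y) / s[keep])

# quick check against a known 3-tap system
rng = np.random.default_rng(0)
x = rng.standard_normal(500)
h_true = np.array([0.5, -0.3, 0.2])
y = np.convolve(x, h_true)[:500] + 0.01 * rng.standard_normal(500)
print(identify_fir(x, y, order=3).round(3))   # close to [0.5, -0.3, 0.2]
```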
|
149 |
Decomposing polygons into r-stars or alpha-bounded subpolygons
Worman, Chris 09 August 2004
To make computations on large data sets more efficient, algorithms frequently divide information into smaller, more manageable pieces. This idea, for example, forms the basis of the common algorithmic approach known as divide and conquer. If we wish to use this principle in planar geometric computations, however, we may require specialized techniques for decomposing our data, since the data sets are typically points, lines, regions, or polygons. This motivates algorithms that can break up polygons into simpler pieces; algorithms that perform such computations are said to compute polygon decompositions. There are many ways to decompose a polygon and many types of polygons to decompose, and both applications and theoretical interest demand algorithms for a wide variety of decomposition problems.
In this thesis we study two polygon decomposition problems. The first is equivalent to the Rectilinear Art Gallery problem: we seek a decomposition of a polygon into so-called r-stars, which model visibility in an orthogonal setting. We show that a certain type of decomposition, known as a Steiner cover, of a simple orthogonal polygon into r-stars can be computed in polynomial time. In the second problem, we explore the complexity of decomposing polygons into components with an upper bound on their size, where the size of a polygon refers to the size of its bounding box. This problem is motivated by a polygon collision detection heuristic that approximates a polygon by its bounding box to determine whether an exact collision detection computation should take place (sketched below). We show that it is NP-complete to decide whether a polygon that contains holes can be decomposed into a specified number of size-constrained components.
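A minimal sketch of that bounding-box pre-check (an illustrative helper, not taken from the thesis): the expensive exact intersection test runs only when the cheap box test cannot rule a collision out, which is why decomposing a polygon into size-bounded pieces, and hence tighter boxes, pays off.

```python
def bbox(poly):
    """Axis-aligned bounding box of a polygon given as a list of (x, y) vertices."""
    xs, ys = zip(*poly)
    return min(xs), min(ys), max(xs), max(ys)

def boxes_overlap(a, b):
    ax0, ay0, ax1, ay1 = bbox(a)
    bx0, by0, bx1, by1 = bbox(b)
    return ax0 <= bx1 and bx0 <= ax1 and ay0 <= by1 and by0 <= ay1

def maybe_collide(a, b, exact_test):
    """Cheap reject: run the expensive exact intersection test only when the
    bounding boxes of the two polygons overlap."""
    return boxes_overlap(a, b) and exact_test(a, b)
```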
|