261

Plant traits as predictors of ecosystem change and function in a warming tundra biome

Thomas, Haydn John David January 2018 (has links)
The tundra is currently warming twice as rapidly as the rest of planet Earth, which is thought to be leading to widespread vegetation change. Understanding the drivers, patterns, and impacts of vegetation change will be critical to predicting the future state of tundra ecosystems and estimating potential feedbacks to the global climate system. In this thesis, I used plant traits - the characteristics of individuals and species - to investigate the fundamental structure of tundra plant communities and to link vegetation change to decomposition across the tundra biome. Plant traits are increasingly used to predict how communities will respond to environmental change. However, existing global trait relationships have largely been formulated using data from tropical and temperate environments. It is thus unknown whether these trait relationships extend to the cold extremes of the tundra biome. Furthermore, it is unclear whether approaches that simplify trait variation, such as the categorization of species into functional groups, capture variation across multiple traits. Using the Tundra Trait Team database - the largest tundra trait database ever compiled - I found that tundra plants showed remarkable consistency in the range of resource acquisition traits, but not size traits, compared to global trait distributions, and that global trait relationships were maintained in the tundra biome. However, trait variation was largely expressed at the level of individual species, and thus the use of functional groups to describe trait variation may obscure important patterns and mechanisms of vegetation change. Secondly, plant traits are related to several key ecosystem functions, and thus offer an approach to predicting the impacts of vegetation change. Notably, understanding the links between vegetation change and decomposition is a critical research priority as high latitude ecosystems contain more than 50% of global soil carbon, and have historically formed a long-term carbon sink due to low decomposition rates and frozen soils. However, it is unclear to what extent vegetation change, and thus changes to the quality and quantity of litter inputs, drives decomposition compared to environmental controls. I used two common substrates (tea), buried at 248 sites, to quantify the relative importance of temperature, moisture and litter quality on litter decomposition across the tundra biome. I found strong linear relationships between decomposition, soil temperature and soil moisture, but found that litter quality had the greatest effect on decomposition, outweighing the effects of environment across the tundra biome. Finally, I investigated whether tundra plant communities are undergoing directional shifts in litter quality as a result of climate warming. Given the importance of litter quality for decomposition, a shift towards more or less decomposable plant litter could act as a feedback to climate change by altering decomposition rates and litter carbon storage. I combined a litter decomposition experiment with tundra plant trait data and three decades of biome-wide vegetation monitoring to quantify change in community decomposability over space, over time and with warming. I found that community decomposability increased with temperature and soil moisture over biogeographic gradients. However, I found no significant change in decomposability over time, primarily due to low species turnover, which drives the majority of trait differences among sites.
Together, my thesis findings indicate that the incorporation of plant trait data into ecological analyses can improve our understanding of tundra vegetation change. Firstly, trait-based approaches capture variation in plant responses to environmental change, and enable prediction of vegetation change and ecosystem function at large scales and under future growing conditions. Secondly, my findings offer insight into the potential direction, rate and magnitude of vegetation change, indicating that despite rapid shifts in some traits, the majority of community-level trait change will be dependent upon the slower processes of migration and species turnover. Finally, my findings demonstrate that the impact of warming on both tundra vegetation change and ecosystem processes will be strongly mediated by soil moisture and trait differences among vegetation communities. Overall, my thesis demonstrates that the use of plant traits can improve climate change predictions for the tundra biome, and informs the fundamental rules that determine plant community structure and change at the global scale.
262

Soil-cadaver interactions in a burial environment

Stokes, Kathryn Lisa January 2009 (has links)
Forensic taphonomy is concerned with the investigation of graves and grave sites. Its primary aim is the development of accurate estimations of postmortem interval (PMI) and/or postburial interval (PBI). Soil has previously been largely ignored; this thesis was therefore designed to investigate how the soil influences decomposition. Furthermore, the impact of cadaver interment on the surrounding soil may offer prospects for the identification of clandestine graves. A series of laboratory-controlled decomposition experiments using cadavers (Mus musculus) and cadaver analogues (skeletal muscle tissue (SMT) from Sus scrofa, Homo sapiens, Ovis aries and Bos taurus) was designed to investigate decomposition in burial environments. Sequential destructive harvests were carried out to monitor temporal changes during decomposition. Analyses conducted included mass loss, microbial activity (CO2 respiration) and soil chemistry (pH, EC and extractable NH4+, NO3-, PO4^3- and K+). Several experimental variables were tested: frozen-thawed versus refrigerated SMT, different mammalian sources of SMT, different soil types, and the contribution of soil versus enteric microbial communities. Mass loss in the SMT experiments followed a sigmoidal pattern; larger cadavers (Mus musculus, 5 weeks) did not. The inhumation of SMT (frozen, unfrozen, different mammalian sources) or cadavers led to an increase in microbial activity (CO2 respiration) within 24 hours of burial. A peak of microbial activity was attained within a week, followed by a decrease and eventual plateau. The rapid increase in microbial activity was matched by corresponding increases in pH and NH4+ concentration. pH and NH4+ were strongly correlated in soils with an acidic basal pH; by comparison, the highly alkaline soil demonstrated no such relationship. NH4+ concentration also appeared to be directly related to NO3- concentration and to cadaver or SMT mass. A decrease in NH4+ corresponded with an increase in NO3-; however, nitrification was unpredictable. Rapid nitrification was observed in sand systems when SMT was interred, but not when cadavers were interred. By comparison, both sandy clay loam and loamy sand soils demonstrated rapid nitrification after inhumation of a cadaver. When cadaver or cadaver-analogue mass was larger, so were NH4+ and NO3- concentrations in systems that experienced nitrification.
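The sigmoidal mass-loss pattern reported for the SMT experiments can be summarised by fitting a logistic curve to harvest data. The sketch below is a minimal illustration only: the numbers are hypothetical and the three-parameter logistic model is a generic choice, not the author's data or fitting procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid_mass_loss(t, L_max, k, t_mid):
    """Logistic model of cumulative mass loss (%) against burial time t (days):
    L_max is the asymptotic loss, k the rate constant, t_mid the inflection point."""
    return L_max / (1.0 + np.exp(-k * (t - t_mid)))

# Hypothetical harvest data: days since interment vs. % mass lost from SMT
t_days    = np.array([0, 3, 7, 14, 21, 28, 42, 56], dtype=float)
mass_loss = np.array([0, 4, 15, 38, 55, 63, 68, 70], dtype=float)

params, _ = curve_fit(sigmoid_mass_loss, t_days, mass_loss, p0=[70.0, 0.2, 14.0])
L_max, k, t_mid = params
print(f"fitted: L_max = {L_max:.1f}%, k = {k:.2f}/day, t_mid = {t_mid:.1f} days")
```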
263

Shock Tube Experiments on Nitromethane and Promotion of Chemical Reactions by Non-Thermal Plasma

Seljeskog, Morten January 2002 (has links)
This dissertation was undertaken to study two different subjects, both related to molecular decomposition, by applying a shock tube and non-thermal plasma to decompose selected hydrocarbons. The first approach concerned thermal decomposition and oxidation of highly diluted nitromethane (NM) in a shock tube. Reflected shock tube experiments on NM decomposition, using mixtures of 0.2 to 1.5 vol% NM in nitrogen or argon, were performed over the temperature range 850-1550 K and pressure range 190-900 kPa, with 46 experiments diluted in nitrogen and 44 diluted in argon. Residual error analysis of the measured decomposition profiles showed that NM decomposition (CH3NO2 + M -> CH3 + NO2 + M, where M = N2/Ar) follows a first-order rate law. The Arrhenius expressions for NM diluted in N2 and in Ar were found to be k_N2 = 10^17.011 × exp(-182.6 kJ/mol / (R×T)) cm^3/(mol×s) and k_Ar = 10^17.574 × exp(-207 kJ/mol / (R×T)) cm^3/(mol×s), respectively. A new reaction mechanism was then proposed, based on the new experimental data for NM decomposition in both Ar and N2 and on three previously developed mechanisms. The new mechanism predicts the decomposition of NM diluted in both N2 and Ar well within the pressure and temperature range covered by the experiments.

In parallel to, and following, the decomposition experiments, the ignition delay times of NM/O2/Ar mixtures were investigated over high-temperature and low-to-high-pressure ranges. These experiments were carried out with eight different mixtures of gaseous NM and oxygen diluted in argon, at pressures of 44.3-600 kPa and temperatures of 842-1378 K.

The oxidation experiments were divided into categories according to the type of decomposition signal obtained. For signals with and without emission, the apparent quasi-constant activation energies found from the correlations were 64.574 kJ/mol and 113.544 kJ/mol, respectively. The ignition delay correlations for signals with and without emission were deduced as τ_emission = 0.3669×10^-2 × [NM]^-1.02 × [O2]^-1.08 × [Ar]^1.42 × exp(7767/T) and τ_no-emission = 0.3005×10^-2 × [NM]^-0.28 × [O2]^0.12 × [Ar]^-0.59 × exp(13657/T), respectively.

The second approach to molecular decomposition concerned the application of non-thermal plasma to initiate reactions and decompose/oxidize selected hydrocarbons, methane and propane, in air. Experiments with a gliding arc discharge device were performed at the University of Orléans on the decomposition/reforming of low-to-stoichiometric concentration air/CH4 mixtures. The results show that complete conversion of methane could be obtained if the residence time in the reactor was sufficiently long. The products of the methane decomposition were mainly CO2, CO and H2O. The CH4 conversion rate increased with increasing residence time, operating gas temperature, and initial methane concentration. To achieve complete decomposition of CH4 in 1 m^3 of a 2 vol% mixture, the energy cost was about 1.5 kWh. However, the formation of both CO and NOx in the gliding discharge system was found to be significant. The amounts of CO (0.4-1 vol%) and NOx (2000-3500 ppm) produced were high enough to constitute an important pollution threat if this process, as it stands today, were used for large-scale CH4 decomposition. Further experimental investigations were performed on self-built laboratory-scale single and double dielectric-barrier discharge devices as a means of removing CH4 and C3H8 from simulated reactive inlet mixtures. The discharge reactors were all powered by an arrangement of commercially available Tesla coil units capable of high-voltage, high-frequency output. The results from each of these experiments are limited and sometimes only qualitative, but show a tendency for both CH4 and C3H8 to be reduced within a retention time of 3-6 min. The most plausible mechanism for explaining these results is decomposition by direct electron impact.
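As a rough illustration of how the quoted Arrhenius fits can be used, the sketch below evaluates k(T) for both bath gases and converts it to a pseudo-first-order half-life by multiplying by the bath-gas concentration [M]. This is a minimal sketch, not taken from the dissertation: the function names, the k·[M] treatment of the pressure dependence and the chosen example conditions are illustrative assumptions.

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def k_arrhenius(T, diluent="N2"):
    """Second-order rate constant for CH3NO2 + M -> CH3 + NO2 + M, in cm^3/(mol*s),
    using the Arrhenius fits quoted in the abstract."""
    if diluent == "N2":
        A, Ea = 10**17.011, 182.6e3   # A in cm^3/(mol*s), Ea in J/mol
    else:                              # argon bath gas
        A, Ea = 10**17.574, 207.0e3
    return A * math.exp(-Ea / (R * T))

def pseudo_first_order_halflife(T, p, diluent="N2"):
    """Half-life (s) of NM at temperature T (K) and total pressure p (Pa),
    treating decomposition as first order in NM with k_eff = k * [M]."""
    M = p / (R * T) * 1e-6                    # bath-gas concentration, mol/cm^3
    k_eff = k_arrhenius(T, diluent) * M       # effective first-order rate, 1/s
    return math.log(2) / k_eff

# Example: conditions inside the experimental envelope (850-1550 K, 190-900 kPa)
for T in (1000.0, 1200.0, 1400.0):
    print(f"T = {T:.0f} K, t_half = {pseudo_first_order_halflife(T, 500e3):.3e} s")
```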
265

Quantification of volatile compounds in degraded engine oil

Sepcic, Kelly Hall 01 December 2003 (has links)
No description available.
266

Adaptive Third-Order Volterra Satellite Channel Equalizer

Lin, Wen-Hsin 17 July 2001 (has links)
Digital satellite communication systems are equipped with nonlinear amplifiers, such as travelling wave tube (TWT) amplifiers operated at or near saturation for better efficiency. The TWT exhibits nonlinear distortion in both amplitude and phase (AM/AM and AM/PM conversion, respectively). That is, in digital satellite communication the transmission is disturbed not only by the non-linearity of the transmitter amplifier, but also by inter-symbol interference (ISI) and additive white Gaussian noise. To compensate for the non-linearity of the transmitter amplifier and for the ISI, this thesis proposes a new nonlinear compensation scheme consisting of a predistorter placed before the nonlinear channel and an adaptive third-order Volterra-based equalizer, using the inverse QRD-RLS (IQRD-RLS) algorithm, placed after it. The third-order Volterra filter (TVF) equalizer based on the IQRD-RLS algorithm achieves superior performance in terms of convergence rate, steady-state mean-squared error (MSE), and numerical stability, and is highly amenable to parallel implementation using array architectures such as systolic arrays. Computer simulations using M-ary PSK modulation examine the signal constellation diagrams, the MSE learning curve and the bit error rate (BER), which are compared with the conventional least mean square (LMS), gradient adaptive lattice (GAL) and adaptive LMS with lattice pre-filter algorithms.
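For context, the structure being adapted is a third-order Volterra filter, whose output is a linear combination of first-, second- and third-order products of delayed input samples. The sketch below builds that regressor and filters a toy complex baseband signal; the memory length, term ordering and names are illustrative assumptions, and the IQRD-RLS coefficient update itself is not shown.

```python
import numpy as np
from itertools import combinations_with_replacement

def volterra3_regressor(x_window):
    """Build the third-order Volterra regressor from the M most recent input
    samples: all 1st-, 2nd- and 3rd-order products (symmetric kernels assumed)."""
    terms = list(x_window)                                                       # linear terms
    terms += [a * b for a, b in combinations_with_replacement(x_window, 2)]      # quadratic terms
    terms += [a * b * c for a, b, c in combinations_with_replacement(x_window, 3)]  # cubic terms
    return np.array(terms)

def volterra3_filter(x, w, M):
    """Filter signal x with a third-order Volterra filter of memory M.
    w is the stacked coefficient vector matching volterra3_regressor's layout."""
    y = np.zeros(len(x), dtype=complex)
    for n in range(M - 1, len(x)):
        phi = volterra3_regressor(x[n - M + 1:n + 1][::-1])   # most recent sample first
        y[n] = np.vdot(w, phi)                                # w^H * phi
    return y

# Toy usage: M = 3 gives 3 + 6 + 10 = 19 coefficients
M = 3
rng = np.random.default_rng(0)
x = rng.standard_normal(50) + 1j * rng.standard_normal(50)   # complex baseband samples
w = np.zeros(19, dtype=complex)
w[0] = 1.0                                                   # start as an identity (purely linear) filter
y = volterra3_filter(x, w, M)
```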
267

Polarimetric SAR decomposition of temperate Ice Cap Hofsjokull, central Iceland

Minchew, Brent Morton 17 December 2010 (has links)
Fully polarimetric UAVSAR data of Hofsjokull Ice Cap, central Iceland, acquired in June 2009, were decomposed using Pauli-based coherent decomposition as well as the Cloude and H/A/alpha eigenvector-based decomposition methods. The goals of this research were to evaluate the effect of the near-surface conditions of temperate glaciers on polarimetric SAR data and to investigate the potential of modeling the radar scattering mechanisms from the decomposed elements and local temperature. The results of this data analysis show a strong relationship between the Pauli and H/A/alpha decomposition elements and the near-surface conditions. Fitting curves to the normalized Pauli decomposition elements shows consistent trends across several spatially independent regions of the ice cap, suggesting that the Pauli elements might be useful for modeling the scattering mechanisms of temperate ice under various surface conditions.
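For reference, the two decompositions named above are standard: the Pauli vector is formed directly from the scattering matrix elements, and the Cloude-Pottier H/A/alpha parameters come from an eigen-decomposition of the coherency matrix T = <k k^H>. The sketch below implements these textbook definitions for a single pixel; it is not the thesis's UAVSAR processing chain, and in practice T would be multi-looked (spatially averaged) before the eigen-decomposition.

```python
import numpy as np

def pauli_vector(S_hh, S_hv, S_vv):
    """Pauli scattering vector for a reciprocal target (S_hv == S_vh).
    |k1|^2, |k2|^2, |k3|^2 are the odd-bounce, even-bounce and volume-like powers."""
    return np.array([S_hh + S_vv, S_hh - S_vv, 2.0 * S_hv]) / np.sqrt(2.0)

def h_a_alpha(T):
    """Cloude-Pottier H/A/alpha parameters from a 3x3 coherency matrix T = <k k^H>."""
    eigval, eigvec = np.linalg.eigh(T)
    eigval = np.clip(eigval[::-1].real, 1e-12, None)     # sort descending, avoid log(0)
    eigvec = eigvec[:, ::-1]
    p = eigval / eigval.sum()                            # pseudo-probabilities
    H = -np.sum(p * np.log(p)) / np.log(3.0)             # entropy, normalized to [0, 1]
    A = (eigval[1] - eigval[2]) / (eigval[1] + eigval[2])  # anisotropy
    alpha = np.degrees(np.sum(p * np.arccos(np.abs(eigvec[0, :]))))  # mean alpha angle
    return H, A, alpha

# Toy example: one pixel's scattering matrix elements (complex, arbitrary units)
k = pauli_vector(1.0 + 0.2j, 0.1 - 0.05j, 0.8 - 0.1j)
T = np.outer(k, k.conj())        # single-look coherency matrix (rank 1, so H is near 0)
print(h_a_alpha(T))
```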
268

On the QR Decomposition of H-Matrices

Benner, Peter, Mach, Thomas 28 August 2009 (has links)
The hierarchical (H-) matrix format allows storing a variety of dense matrices from certain applications in a special data-sparse way with linear-polylogarithmic complexity. Many operations from linear algebra like matrix-matrix and matrix-vector products, matrix inversion and LU decomposition can be implemented efficiently using the H-matrix format. Due to its importance in solving many problems in numerical linear algebra like least-squares problems, it is also desirable to have an efficient QR decomposition of H-matrices. In the past, two different approaches for this task have been suggested. We will review the resulting methods and suggest a new algorithm to compute the QR decomposition of an H-matrix. Like other H-arithmetic operations the H-QR decomposition is of linear-polylogarithmic complexity. We will compare our new algorithm with the older ones by using two series of test examples and discuss benefits and drawbacks of the new approach.
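For orientation, the sketch below is the classical dense Householder QR, i.e. the O(n^3) baseline that an H-matrix QR with linear-polylogarithmic complexity aims to undercut; it is not the algorithm proposed in the paper.

```python
import numpy as np

def householder_qr(A):
    """Dense Householder QR: returns Q (orthogonal) and R (upper triangular)
    with A = Q @ R. Cost is O(n^3) for an n x n matrix -- the dense baseline
    that hierarchical (H-matrix) QR algorithms try to reduce."""
    A = np.array(A, dtype=float)
    m, n = A.shape
    Q = np.eye(m)
    R = A.copy()
    for k in range(min(m, n)):
        x = R[k:, k]
        v = x.copy()
        v[0] += np.copysign(np.linalg.norm(x), x[0])   # Householder vector
        if np.linalg.norm(v) < 1e-15:
            continue
        v /= np.linalg.norm(v)
        R[k:, :] -= 2.0 * np.outer(v, v @ R[k:, :])    # apply reflector to R
        Q[:, k:] -= 2.0 * np.outer(Q[:, k:] @ v, v)    # accumulate reflector into Q
    return Q, R

# Check on a random matrix
A = np.random.default_rng(1).standard_normal((6, 6))
Q, R = householder_qr(A)
print(np.allclose(Q @ R, A), np.allclose(Q.T @ Q, np.eye(6)))
```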
269

Minimização ótima de classes especiais de funções booleanas / On the optimal minimization of special classes of Boolean functions

Callegaro, Vinicius January 2016 (has links)
The problem of factoring and decomposing Boolean functions is Σ2-complete for general functions. Efficient and exact algorithms can be created for existing classes of functions known as read-once, disjoint-support decomposable and read-polarity-once functions. A factored form is called read-once (RO) if each variable appears only once. A Boolean function is RO if it can be represented by an RO form. For example, the function represented by f = x1x2 + x1x3x4 + x1x3x5 is an RO function, since it can be factored into f = x1(x2 + x3(x4 + x5)). A Boolean function f(X) can be decomposed using simpler subfunctions g and h such that f(X) = h(g(X1), X2), with X1, X2 ≠ ∅ and X1 ∪ X2 = X. A disjoint-support decomposition (DSD) is a special case of functional decomposition where the input sets X1 and X2 do not share any element, i.e., X1 ∩ X2 = ∅. Roughly speaking, DSD functions can be represented by a read-once expression in which the exclusive-or operator (⊕) can also be used as a base operation. For example, f = x1x2'x3 + x1x2x3'x4' + x1x2'x4 is DSD, since it admits the decomposition f = x1(x2 ⊕ (x3 + x4)). A read-polarity-once (RPO) form is a factored form where each polarity (positive or negative) of a variable appears at most once. A Boolean function is RPO if it can be represented by an RPO factored form. For example, the function f = x1'x2x4 + x1x3 + x2x3 is RPO, since it can be factored into f = (x1'x4 + x3)(x1 + x2). This dissertation presents four new algorithms for the synthesis of Boolean functions. The first contribution is a synthesis method for read-once functions based on a divide-and-conquer strategy. The second and third contributions are two algorithms for the synthesis of DSD functions: a top-down approach that checks for an OR, AND or XOR decomposition based on sum-of-products, product-of-sums and exclusive-sum-of-products inputs, respectively; and a bottom-up method based on Boolean difference and cofactor analysis. The last contribution is a new method to synthesize RPO functions, based on the analysis of positive and negative transition sets. Results show the efficacy and efficiency of the four proposed methods.
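The factored forms quoted above can be checked mechanically. The sketch below verifies the RO, DSD and RPO examples against their sum-of-products expansions by truth-table enumeration; the variable names x1-x5 are restored from context, and the check itself is an illustration, not part of the dissertation.

```python
from itertools import product

def equivalent(f, g, n):
    """Check two n-variable Boolean functions for equivalence by exhaustive enumeration."""
    return all(f(*xs) == g(*xs) for xs in product((0, 1), repeat=n))

NOT = lambda b: 1 - b   # complement of a 0/1 value

# Read-once example: x1x2 + x1x3x4 + x1x3x5 == x1(x2 + x3(x4 + x5))
sop_ro      = lambda x1, x2, x3, x4, x5: x1 & x2 | x1 & x3 & x4 | x1 & x3 & x5
factored_ro = lambda x1, x2, x3, x4, x5: x1 & (x2 | x3 & (x4 | x5))
print(equivalent(sop_ro, factored_ro, 5))       # expect True

# DSD example: x1x2'x3 + x1x2x3'x4' + x1x2'x4 == x1(x2 XOR (x3 + x4))
sop_dsd      = lambda x1, x2, x3, x4: x1 & NOT(x2) & x3 | x1 & x2 & NOT(x3) & NOT(x4) | x1 & NOT(x2) & x4
factored_dsd = lambda x1, x2, x3, x4: x1 & (x2 ^ (x3 | x4))
print(equivalent(sop_dsd, factored_dsd, 4))     # expect True

# RPO example: x1'x2x4 + x1x3 + x2x3 == (x1'x4 + x3)(x1 + x2)
sop_rpo      = lambda x1, x2, x3, x4: NOT(x1) & x2 & x4 | x1 & x3 | x2 & x3
factored_rpo = lambda x1, x2, x3, x4: (NOT(x1) & x4 | x3) & (x1 | x2)
print(equivalent(sop_rpo, factored_rpo, 4))     # expect True
```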
