251. Low-Density Parity-Check Decoding Algorithms. Pirou, Florent. January 2004.
Recently, low-density parity-check (LDPC) codes have attracted much attention because of their excellent error-correcting performance and highly parallelizable decoding scheme. However, the effective VLSI implementation of an LDPC decoder remains a big challenge and is a crucial issue in determining how well we can exploit the benefits of LDPC codes in real applications. In this master's thesis report, following a background on error-control coding, we describe Low-Density Parity-Check codes and their decoding algorithms, as well as the requirements and architectures of LDPC decoder implementations.
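To make the decoding principle concrete, here is a minimal sketch (not from the thesis; the matrix and names are illustrative) of hard-decision bit-flipping decoding, the simplest relative of the iterative LDPC decoding algorithms the report surveys:

```python
# A toy sketch, assuming a tiny regular parity-check matrix H; real LDPC
# decoders operate on much larger, sparser matrices and use soft messages.
import numpy as np

H = np.array([[1, 1, 0, 1, 0, 0],   # each row is one parity check
              [0, 1, 1, 0, 1, 0],
              [1, 0, 0, 0, 1, 1],
              [0, 0, 1, 1, 0, 1]], dtype=int)

def bit_flip_decode(H, r, max_iter=50):
    """Iteratively flip the bit involved in the most failed checks."""
    c = r.copy()
    for _ in range(max_iter):
        syndrome = H @ c % 2            # which parity checks fail
        if not syndrome.any():          # all checks satisfied: done
            return c
        fails = H.T @ syndrome          # per bit: number of failed checks
        c[np.argmax(fails)] ^= 1        # flip the worst offender
    return c

codeword = np.zeros(6, dtype=int)       # the all-zero word is always valid
received = codeword.copy(); received[2] ^= 1   # inject one bit error
print(bit_flip_decode(H, received))     # -> [0 0 0 0 0 0]
```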
252. Efficient Message Passing Decoding Using Vector-based Messages. Grimnell, Mikael; Tjäder, Mats. January 2005.
The family of Low Density Parity Check (LDPC) codes is a strong candidate to be used as Forward Error Correction (FEC) in future communication systems due to its strong error correction capability. Most LDPC decoders use the Message Passing algorithm for decoding, an iterative algorithm that passes messages between its variable nodes and check nodes. It is not until recently that computation power has become strong enough to make Message Passing on LDPC codes feasible. Although locally simple, LDPC codes are usually large, which increases the required computation power. Earlier work on LDPC codes has concentrated on the binary Galois Field, GF(2), but it has been shown that codes from higher-order fields have better error correction capability. However, the complexity of the most efficient LDPC decoder, the Belief Propagation decoder, increases quadratically when moving to higher-order Galois Fields. Transmission over a channel with M-PSK signalling is a common technique to increase spectral efficiency; the information is transmitted as the phase angle of the signal. The focus in this Master's Thesis is on simplifying Message Passing decoding with inputs from M-PSK signals transmitted over an AWGN channel. Symbols from higher-order Galois Fields were mapped to M-PSK signals, since M-PSK is very bandwidth efficient and the information can be found in the angle of the signal. Several simplifications of Belief Propagation have been developed and tested. The most promising is the Table Vector Decoder, a Message Passing decoder that uses a table-lookup technique for check node operations and vector summation for variable node operations. The table lookup is used to approximate the check node operation of a Belief Propagation decoder, and vector summation is used as an equivalent of the variable node operation. Monte Carlo simulations have shown that the Table Vector Decoder can achieve a performance close to that of Belief Propagation. The capability of the Table Vector Decoder depends on the number of reconstruction points and on their placement. The main advantage of the Table Vector Decoder is that its complexity is unaffected by the Galois Field used; instead, there is a memory space requirement that depends on the desired number of reconstruction points.
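For intuition, here is a hedged sketch of the table-lookup idea in its binary form (the thesis's Table Vector Decoder operates on vector messages over GF(q); this GF(2) analogue is my own illustration): the exact check-node "box-plus" of two LLRs, versus a version whose nonlinear correction term log(1 + e^(-x)) is read from a small table of reconstruction points.

```python
# Sketch under stated assumptions: 64 uniformly spaced reconstruction
# points on [0, 8] for the correction term of the box-plus operation.
import numpy as np

xs = np.linspace(0.0, 8.0, 64)             # reconstruction points
table = np.log1p(np.exp(-xs))              # precomputed corrections

def corr(x):                               # table lookup with clipping
    i = min(int(x / 8.0 * 63), 63)
    return table[i]

def boxplus_exact(a, b):                   # exact check-node combine
    return 2.0 * np.arctanh(np.tanh(a / 2.0) * np.tanh(b / 2.0))

def boxplus_table(a, b):                   # min + table-lookup corrections
    s = np.sign(a) * np.sign(b) * min(abs(a), abs(b))
    return s + corr(abs(a + b)) - corr(abs(a - b))

a, b = 1.3, -2.1
print(boxplus_exact(a, b), boxplus_table(a, b))   # values should be close
```

As in the abstract, accuracy is governed by how many reconstruction points are stored and where they are placed, while the per-operation cost is a constant lookup.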
253. Coding for Cooperative Communications. Uppal, Momin Ayub. August 2010.
The area of cooperative communications has received tremendous research interest in recent years. This interest is not unwarranted, since cooperative communications promises the much sought-after diversity and multiplexing gains typically associated with multiple-input multiple-output (MIMO) communications, without actually employing multiple antennas. In this dissertation, we consider several cooperative communication channels, and for each one of them, we develop information-theoretic coding schemes and derive their corresponding performance limits. We next develop and design practical coding strategies which perform very close to the information-theoretic limits.
The cooperative communication channels we consider are: (a) the Gaussian relay channel, (b) the quasi-static fading relay channel, (c) the cooperative multiple-access channel (MAC), and (d) the cognitive radio channel (CRC). For the Gaussian relay channel, we propose a compress-forward (CF) coding strategy based on Wyner-Ziv coding, and derive the achievable rates specifically with BPSK modulation. The CF strategy is implemented with low-density parity-check (LDPC) and irregular repeat-accumulate codes and is found to operate within 0.34 dB of the theoretical limit. For the quasi-static fading relay channel, we assume that no channel state information (CSI) is available at the transmitters and propose a rateless coded protocol which uses rateless coded versions of the CF and decode-forward (DF) strategies. We implement the protocol with carefully designed Raptor codes and show that the implementation suffers a loss of less than 10 percent from the information-theoretic limit. For the MAC, we assume quasi-static fading, and consider cooperation in the low-power regime with the assumption that no CSI is available at the transmitters. We develop cooperation methods based on multiplexed coding in conjunction with rateless codes, and find the achievable rates and in particular the minimum energy per bit to achieve a certain outage probability. We then develop practical coding methods using Raptor codes, which perform within 1.1 dB of the performance limit. Finally, we consider a CRC and develop a practical multi-level dirty-paper coding strategy using LDPC codes for channel coding and trellis-coded quantization for source coding. The designed scheme is found to operate within 0.78 dB of the theoretical limit.
By developing practical coding strategies for several cooperative communication channels which exhibit performance close to the information-theoretic limits, we show that cooperative communications not only provides great benefits in theory, but can possibly promise the same benefits when put into practice. Thus, our work can be considered a useful and necessary step towards the commercial realization of cooperative communications.
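As an illustration of the kind of BPSK-constrained limit such schemes are measured against (my own sketch, not the dissertation's code), the following estimates the BPSK-input AWGN mutual information by Monte Carlo, using I(X;Y) = 1 - E[log2(1 + e^(-L))], where L is the channel LLR of the transmitted symbol:

```python
# Assumes unit symbol energy; for x = +1 sent over y = x + n with noise
# variance sigma^2, the LLR is L = 2*y/sigma^2.
import numpy as np

def bpsk_awgn_rate(snr_db, n=200_000, rng=np.random.default_rng(0)):
    sigma2 = 10 ** (-snr_db / 10.0)           # noise variance for Es = 1
    y = 1.0 + np.sqrt(sigma2) * rng.standard_normal(n)
    llr = 2.0 * y / sigma2
    # log2(1 + exp(-llr)) computed stably via logaddexp
    return 1.0 - np.mean(np.logaddexp(0.0, -llr)) / np.log(2.0)

for snr in (-5, 0, 5, 10):
    print(snr, round(bpsk_awgn_rate(snr), 3))  # approaches 1 bit/use at high SNR
```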
254. LDPC Coded OFDM-IDMA Systems. Lu, Kuo-sheng. 05 August 2009.
Recently, a novel technique for multi-user spread-spectrum mobile systems, the so-called interleave-division multiple-access (IDMA) scheme, was proposed by L. Ping et al. The advantage of IDMA is that it inherits many special features from code-division multiple-access (CDMA), such as diversity against fading and mitigation of other-cell user interference. Moreover, it is capable of employing a very simple chip-by-chip iterative multi-user detection strategy. In this thesis, we investigate the performance of combining IDMA with an orthogonal frequency-division multiplexing (OFDM) scheme. In order to improve the bit error rate performance, we apply low-density parity-check (LDPC) coding to the proposed scheme, termed the LDPC-coded OFDM-IDMA system. With the aid of the iterative multi-user detection algorithm, multiple-access interference (MAI) and inter-symbol interference (ISI) can be cancelled efficiently. In short, the proposed scheme provides an efficient solution to high-rate multiuser communications over multipath fading channels.
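A toy sketch of the core IDMA idea (my illustration, not the thesis's system): every user applies the same low-rate repetition "spreading" but a user-specific random chip interleaver, and the interleaver alone is what lets the receiver separate users.

```python
# Assumptions: 2 users, BPSK chips, repetition factor 8, noiseless channel.
import numpy as np

rng = np.random.default_rng(7)
REP = 8                                     # repetition "spreading" factor

def make_interleaver(n_chips, user_seed):
    return np.random.default_rng(user_seed).permutation(n_chips)

def idma_tx(bits, pi):
    chips = np.repeat(2 * bits - 1, REP)    # rate-1/REP repetition, BPSK
    return chips[pi]                        # user-specific chip interleaving

bits_u1 = rng.integers(0, 2, 4)
bits_u2 = rng.integers(0, 2, 4)
pi1 = make_interleaver(4 * REP, user_seed=1)
pi2 = make_interleaver(4 * REP, user_seed=2)
rx = idma_tx(bits_u1, pi1) + idma_tx(bits_u2, pi2)   # chips superimpose

# de-interleave with user 1's permutation, then despread by averaging
deint = np.empty_like(rx); deint[pi1] = rx
est_u1 = (deint.reshape(-1, REP).mean(axis=1) > 0).astype(int)
print(bits_u1, est_u1)   # with 2 users, plain despreading usually suffices
```

A real IDMA receiver replaces the final averaging with chip-by-chip iterative interference cancellation, which is what the thesis combines with OFDM and LDPC coding.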
255. Rab-domain dynamics in endocytic membrane trafficking. Rink, Jochen C. 26 April 2005.
Eukaryotic cells depend on cargo uptake into the endocytic membrane system, which comprises a functionally interconnected network of endosomal compartments. The establishment and maintenance of such diverse compartments, in the face of the high rates of exchange between them, poses a major challenge for obtaining a molecular understanding of the endocytic system. Rab-GTPases have emerged as a key architectural element thereof: individual family members localize selectively to endosomal compartments, where they recruit a multitude of cytoplasmic effector proteins and coordinate them into membrane sub-domains. Such "Rab-domains" constitute modules of molecular membrane identity, which pattern the endocytic membrane system into a mosaic of Rab-domains. The main objective of this thesis research was to link such a "static" mosaic view with the highly dynamic nature of the endosomal system. The following questions were addressed: How are neighbouring Rab-domains coordinated? Are Rab-domains stable, or can they undergo assembly and disassembly? Are the dynamics of Rab-domains utilized in cargo transport?

The first part of this thesis research focused on the organization of Rab-domains in the recycling pathway. Utilizing Total Internal Reflection Fluorescence (TIRF) microscopy, Rab11-positive, but neither Rab4- nor Rab5-positive, vesicles were observed to fuse with the plasma membrane. Rab4-positive membranes, however, could be induced to fuse in the presence of Brefeldin A. Thus, these experiments complete the view of the recycling pathway by the following steps: a) Rab11-carriers likely mediate the return of recycling cargo to the surface; b) such carriers are presumably generated in an Arf-dependent fission reaction from Rab4-positive compartments. Rab11-chromatography was subsequently carried out in the hope of identifying Rab11-effectors functioning at the Rab4-Rab11 domain interface. An as yet uncharacterized ubiquitin ligase was identified, which selectively interacts with both Rab4 and Rab11. Contrary to expectations, however, the protein (termed RUL, for Rab-interacting Ubiquitin Ligase) does not function in recycling, but appears to mediate trafficking between the Golgi/TGN and endosomes instead.

In order to address the dynamics of Rab-domains, fluorescently tagged Rab-GTPases were imaged during cargo transport reactions in living cells. For this purpose, high-speed/long-term imaging procedures and novel computational image analysis tools were developed. The application of this methodology to the analysis of Rab5-positive early endosomes showed that a) the amount of Rab5 associated with individual endosomes fluctuates strongly over time; b) such fluctuations can lead to the "catastrophic" loss of the Rab5-machinery from membranes; c) Rab5 catastrophe is part of a functional cycle of early endosomes, involving net centripetal motility, continuous growth and an increase in Rab5 density. Next, the relevance of Rab5 catastrophe to cargo transfer into either the recycling or the degradative pathway was examined. Recycling cargo (transferrin) was observed to exit Rab5-positive early endosomes via the frequent budding of tubular exit carriers. Exit of degradative cargo (LDL) from Rab5-positive endosomes did not involve budding, but rather the rapid loss of Rab5 from the limiting membrane. Rab5 loss was further coordinated with the concomitant acquisition of Rab7, suggesting "Rab conversion" as the mechanism of transport between early and late endosomes.

Altogether, this thesis research has shown, first, that Rab-machineries can be acquired and lost from membranes and, second, that such dynamics provide a molecular mechanism for cargo exchange between endosomal compartments. Jointly, these findings lead to the concept of modulation of Rab-domain dynamics in trans between neighbouring domains as the mechanistic principle behind the dynamic organization of membrane trafficking pathways.
256. Iterative decoding algorithms for LDPC codes and a study of the effect of quantization error on the performance of the Log Sum-Product algorithm. Kanistras, Nikolaos. 25 May 2009.
Low-Density Parity-Check (LDPC) codes belong to the category of linear block codes. They are error detection and correction codes. Although LDPC codes were proposed by R. Gallager as early as 1962, they were scarcely considered in the 35 years that followed. Only in the late 1990s were they rediscovered, owing to their decoding performance, which approaches the Shannon limit. As their name indicates, they are parity-check codes whose parity-check matrix is sparse. Since the encoding process is simple, it is the decoding procedure that largely determines the performance and the consumed power of the decoder. For this reason, several iterative decoding algorithms have been developed. However, theoretical determination of their performance has not yet been feasible.
This work presents the most important iterative decoding algorithms for LDPC codes that have been developed to date. These algorithms are implemented in MATLAB and their performance is studied through simulation. The most powerful among them, the log Sum-Product algorithm, relies heavily on a highly nonlinear function called Φ(x). Hardware implementation of this function enforces finite accuracy, due to finite word-length representation. The roundoff error that this imposes impacts the decoding performance by means of two mechanisms. Both mechanisms are analyzed, and a theoretical expression for the activation probability of each mechanism at the end of the first iteration of the algorithm is developed.
The impact of the roundoff error on the decisions taken by the log Sum-Product decoding algorithm at the end of each iteration is also studied. The mechanism by which roundoff alters the decisions of a finite word-length implementation of the algorithm, compared to the infinite-precision case, is analyzed and a corresponding theoretical model is developed. The proposed model computes the probability of changed decisions due to the finite word-length representation of Φ(x), but it is not yet complete, since the determination of the corresponding parameters relies on experimental results. Further research focuses on the completion of the theoretical model, since it can lead to a tool that computes the expected degradation of the decoding performance for a particular implementation of the decoder, without the need for time-consuming simulations.
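For reference, a small sketch (my illustration, not the thesis's model) of the nonlinearity in question, Φ(x) = -ln(tanh(x/2)), and the roundoff introduced by a uniform quantizer on its output:

```python
# Assumption for illustration: a uniform quantizer with 4 fractional bits
# applied to the output of Phi; the thesis analyzes specific quantization
# schemes and their effect on decoder decisions.
import numpy as np

def phi(x):
    # Phi is its own inverse and is steep near x = 0, which is why
    # roundoff in that region matters for log Sum-Product decoding.
    return -np.log(np.tanh(x / 2.0))

def phi_quantized(x, step=2.0 ** -4):
    return np.round(phi(x) / step) * step

x = np.linspace(0.1, 6.0, 12)
err = phi_quantized(x) - phi(x)
print(np.max(np.abs(err)))        # bounded by step/2 = 2**-5
```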
257. Advanced Coding Techniques for Fiber-Optic Communications and Quantum Key Distribution. Zhang, Yequn. January 2015.
Coding is an essential technology for efficient fiber-optic communications and secure quantum communications. In particular, low-density parity-check (LDPC) coding is favoured due to its strong error correction capability and high-throughput implementation feasibility. In fiber-optic communications, it has been realized that advanced high-order modulation formats and soft-decision forward error correction (FEC) such as LDPC codes are the key technologies for next-generation high-speed optical communications. Therefore, energy-efficient LDPC coding in combination with advanced modulation formats is an important topic that needs to be studied for fiber-optic communications. In secure quantum communications, large-alphabet quantum key distribution (QKD) has recently become attractive due to its potential to improve the efficiency of key exchange. To recover the carried information bits, efficient information reconciliation is desirable, for which the use of LDPC coding is essential. In this dissertation, we first explore different efficient LDPC coding schemes for optical transmission of polarization-division multiplexed quadrature-amplitude modulation (QAM) signals. We show that high energy efficiency can be achieved without incurring extra overhead and complexity. We then study the transmission performance of LDPC-coded turbo equalization for QAM signals in a realistic fiber link, as well as that of pragmatic turbo equalizers. Further, leveraging the polarization freedom of light, we expand the signal constellation into a four-dimensional (4D) space and evaluate the performance of LDPC-coded 4D signals in terms of transmission reach. Lastly, we study the security of a proposed weak-coherent-state large-alphabet QKD protocol and investigate the information reconciliation efficiency based on LDPC coding.
258. Meta-analysis and systematic review of the benefits expected when the glycaemic index is used in planning diets. Opperman, Anna Margaretha. January 2004.
Motivation: The prevalence of non-communicable diseases such as diabetes mellitus (DM) and cardiovascular disease (CVD) is rapidly increasing in industrialized societies. Experts believe that lifestyle, and in particular its nutritional aspects, plays a decisive role in increasing the burden of these chronic conditions. Dietary habits should, therefore, be modified to exert a positive impact on the prevention and treatment of chronic diseases of lifestyle. It is believed that the state of hyperglycaemia observed following food intake under certain dietary regimes contributes to the development of various metabolic conditions. This is not only true for individuals with poor glycaemic control, such as some diabetics, but could also be true for healthy individuals. It would, therefore, be helpful to be able to reduce the amplitude and duration of postprandial hyperglycaemia. Selecting the correct type of carbohydrate (CHO) foods may produce less postprandial hyperglycaemia, representing a possible strategy in the prevention and treatment of chronic metabolic diseases. At the same time, a key focus of sport nutrition is the optimal amount of CHO that an athlete should consume and the optimal timing of consumption. The most important nutritional goals of the athlete are to prepare body CHO stores pre-exercise, provide energy during prolonged exercise and restore glycogen stores during the recovery period. The ultimate aim of these strategies is to maintain CHO availability to the muscle and central nervous system during prolonged moderate- to high-intensity exercise, since these are important factors in exercise capacity and performance. However, the type of CHO has been studied less often, and with less attention to practical concerns, than the amount of CHO.

The glycaemic index (GI) refers to the blood glucose raising potential of CHO foods and, therefore, influences the secretion of insulin. In several metabolic disorders, secretion of insulin is inadequate or impossible, leading to poor glycaemic control. It has been suggested that low GI diets could potentially contribute to a significant improvement of the conditions associated with poor glycaemic control. Insulin secretion is also important to athletes, since the rate of glycogen synthesis depends on insulin due to its stimulatory effect on the activity of glycogen synthase.
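For concreteness, the GI of a food is conventionally computed as the incremental area under the 2-hour blood glucose response curve (iAUC, above the fasting baseline) expressed as a percentage of the iAUC after the same amount of available CHO from a reference food such as glucose. The sketch below is my own illustration with invented response values:

```python
# Worked illustration with invented glucose readings (mmol/L) at standard
# sampling times; real GI testing averages repeated trials in subjects.
import numpy as np

t = np.array([0, 15, 30, 45, 60, 90, 120])                     # minutes
glucose_ref = np.array([5.0, 7.8, 8.9, 8.2, 7.1, 6.0, 5.2])    # reference
test_food   = np.array([5.0, 6.2, 7.0, 7.1, 6.6, 5.9, 5.3])    # test food

def iauc(t, g):
    inc = np.clip(g - g[0], 0.0, None)     # only area above fasting baseline
    return float(np.sum((inc[1:] + inc[:-1]) * np.diff(t) / 2.0))  # trapezoids

gi = 100.0 * iauc(t, test_food) / iauc(t, glucose_ref)
print(round(gi, 1))   # values of 55 or below are classed as low GI, 70+ as high
```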
Objectives: Three main objectives were identified for this study. The first was to conduct a meta-analysis of the effects of the GI on markers for CHO and lipid metabolism, with the emphasis on randomised controlled trials (RCTs). Secondly, a systematic review was performed to determine the strength of the body of scientific evidence from epidemiological studies combined with RCTs to encourage dieticians to incorporate the GI concept in meal planning. Finally, a systematic review of the effect of the GI on sport performance was conducted on all available literature to date, to investigate whether the application of the GI in an athlete's diet can enhance physical performance.
Methodology: For the meta-analysis, the search was for randomised controlled trials with a cross-over or parallel design, published in English between 1981 and 2003, investigating the effect of low GI vs high GI diets on markers of carbohydrate and lipid metabolism. The main outcomes were serum fructosamine, glycosylated haemoglobin (HbA1c), high-density lipoprotein cholesterol (HDL-c), low-density lipoprotein cholesterol (LDL-c), total cholesterol (TC) and triacylglycerols (TG). For the systematic review, epidemiological studies as well as RCTs investigating the effect of low GI vs high GI diets on markers for carbohydrate and lipid metabolism were used. For the systematic review on the effect of the GI on sport performance, RCTs with either a cross-over or parallel design that were published in English between January 1981 and September 2004 were used. All relevant manuscripts for the systematic reviews as well as the meta-analysis were obtained through a literature search of relevant databases such as the Cochrane Central Register of Controlled Trials, MEDLINE (1981 to present), EMBASE, LILACS, SPORTDiscus, ScienceDirect and PubMed. This thesis is presented in the article format.
Results and conclusions of the individual manuscripts:
For the meta-analysis, literature searches identified 16 studies that met the strict inclusion criteria. Low GI diets significantly reduced fructosamine (p<0.05), HbA1c (p<0.03) and TC (p<0.0001), and tended to reduce LDL-c (p=0.06) compared to high GI diets. No changes were observed in HDL-c and TG concentrations. Results from this meta-analysis, therefore, support the use of the GI concept in choosing CHO-containing foods to reduce TC and improve blood glucose control in diabetics.

The systematic review combined the results of the preceding meta-analysis and results from epidemiological studies. Prospective epidemiological studies showed improvements in HDL-c concentrations over longer time periods with low GI vs high GI diets, while the RCTs failed to show an improvement in HDL-c over the short term. This could be attributed to the short intervention periods during which the RCTs were conducted. Furthermore, epidemiological studies failed to show positive relationships between LDL-c and TC and low GI diets, while RCTs reported positive results on both these lipids with low GI diets. However, the epidemiological studies as well as the RCTs showed positive results with low GI diets on markers of CHO metabolism. Taken together, convincing evidence from RCTs as well as epidemiological studies exists to recommend the use of low GI diets to improve markers of CHO as well as of lipid metabolism.

From the systematic review regarding the GI and sport performance, it does not seem that low GI pre-exercise meals provide any advantages over high GI pre-exercise meals. Although low GI pre-exercise meals may better maintain CHO availability during exercise, they offer no added advantage over high GI meals regarding performance. Furthermore, the exaggerated metabolic responses from high GI compared to low GI CHO seem not to be detrimental to exercise performance. However, athletes who experience hypoglycaemia when consuming CHO-rich feedings in the hour prior to exercise are advised to rather consume low GI pre-exercise meals. No studies have been reported on the GI during exercise. Current evidence suggests that a combination of CHOs with differing GIs, such as glucose (high GI), sucrose (moderate GI) and fructose (low GI), will deliver the best results in terms of exogenous CHO oxidation, due to different transport mechanisms. Although no studies have been conducted on the effect of the GI on short-term recovery, it is speculated that high GI CHO is most effective when the recovery period is between 0-8 hours; however, evidence suggests that when the recovery period is longer (20-24 hours), the total amount of CHO is more important than the type of CHO.
Conclusion: There is an important body of evidence in support of a therapeutic and preventative potential of low GI diets to improve markers for CHO and lipid metabolism. Substituting high GI CHO-rich foods with low GI CHO-rich foods improved overall metabolic control. In addition, these diets reduced TC, tended to improve LDL-c and might have a positive effect over the long term on HDL-c. This confirms the place for low GI diets in disease prevention and management, particularly in populations characterised by already high incidences of insulin resistance, glucose intolerance and abnormal lipid levels. For athletes, it seems that low GI pre-exercise meals do not provide any advantage regarding performance over high GI pre-exercise meals. However, low GI meals can be recommended to athletes who are prone to develop hypoglycaemia after a CHO-rich meal in the hour prior to exercise. No studies have been reported on the effect of the GI during exercise. However, it has been speculated that a combination of CHOs with varying GIs delivers the best results in terms of exogenous CHO oxidation. No studies exist investigating the effect of the GI on short-term recovery; however, it is speculated that high GI CHO-rich foods are suitable when the recovery period is short (0-8 h), while the total amount rather than the type of CHO is important when the recovery period is longer (20-24 h). Therefore, the GI is a scientifically based tool to enable the selection of CHO-containing foods to improve markers for CHO and lipid metabolism, as well as to help athletes to prepare optimally for competitions.
Recommendations: Although a step has been taken towards confirming a place for the GI in human health, additional randomised, controlled, medium- and long-term studies, as well as more epidemiological studies, are needed to investigate further the effect of low GI diets on LDL-c, HDL-c and TG. These studies are essential to investigate the effect of low GI diets on endpoints such as CVD and DM. This will also show whether low GI diets can reduce the risk of diabetic complications such as neuropathy and nephropathy. Furthermore, the public at large must be educated about the usefulness and application of the GI in meal planning. For sport nutrition, randomised controlled trials should be performed to investigate the role of the GI during exercise, as well as in sports of longer duration such as cricket and tennis. More studies are needed to elucidate the short-term effect of the GI post-exercise, as well as to determine the mechanism of lower glycogen storage with low GI meals post-exercise. / Thesis (Ph.D. (Dietetics))--North-West University, Potchefstroom Campus, 2005.
259. NMR and Biophysical Studies of Modular Protein Structure and Function. Chitayat, Seth. 28 September 2007.
Protein modularity enhances the multi-functionality and versatility of proteins by providing such properties as multiple and varied ligand-binding sites, increased ligand affinity through the avidity effect, and the juxtaposition of ligand-binding modules near catalytic domains. An NMR-based "dissect-and-build" approach to studying modular protein structure and function has proven very successful, whereby modules are initially characterized individually and then correlated with the overall function of the protein. We have used the dissect-and-build approach and NMR to study two modular protein systems.
In Chapter 2, the NMR solution structure of the weak-lysine-binding kringle IV type 8 (KIV8) module from the apolipoprotein(a) (apo(a)) component of lipoprotein(a) was determined, and its ligand-binding properties were assessed. In vitro studies have demonstrated the importance of the apo(a) KIV7 and KIV8 modules in mediating specific lysine-dependent interactions with the apolipoprotein B-100 (apoB-100) component of LDL in the initial non-covalent step of lipoprotein assembly. Notable differences identified in the lysine-binding site (LBS) of KIV8 were deemed responsible for the differential modes of apoB-100 recognition by KIV7 and KIV8. In addition, the KIV8 structure has brought to light the importance of an RGD sequence at the N-terminus of the apo(a) KIV8 module, which may mediate important apo(a)-integrin interactions.
In Chapters 3-6, structure-function studies of the CpGH84C X82 module and the CpGH84A dockerin-containing modular pair were conducted to understand how the varying modularity unique to the C-terminal regions of the secreted multi-modular family 84 glycoside hydrolases influences the spreading of Clostridium perfringens. The identification of a CpGH84C cohesin module (X82), and the structural characterization of a dockerin-containing modular pair, provide the first evidence for multi-enzyme complex formation mediated by non-cellulosomal cohesin-dockerin interactions. The formation of large hydrolytic enzyme complexes introduces a novel mechanism by which C. perfringens may enhance its role in pathogenesis. / Thesis (Ph.D., Biochemistry) -- Queen's University, 2007.
260. Iterative joint detection and decoding of LDPC-coded V-BLAST systems. Tsai, Meng-Ying (Brady). 10 July 2008.
Soft iterative detection and decoding techniques have been shown to be able to achieve near-capacity performance in multiple-antenna systems. Obtaining the optimal soft information by marginalization over the entire observation space is intractable, and the current literature is unable to guide us towards the best way to obtain suboptimal soft information. In this thesis, several existing soft-input soft-output (SISO) detectors, including minimum mean-square error successive interference cancellation (MMSE-SIC), list sphere decoding (LSD), and Fincke-Pohst maximum-a-posteriori (FPMAP), are examined. Prior research has demonstrated that LSD and FPMAP outperform soft-equalization methods (i.e., MMSE-SIC); however, it is unclear which of the two schemes is superior in terms of the performance-complexity trade-off. A comparison is conducted to resolve the matter. In addition, an improved scheme is proposed to modify LSD and FPMAP, providing an error-performance improvement and a reduction in computational complexity simultaneously. Although list-type detectors such as LSD and FPMAP provide outstanding error performance, issues such as the optimal initial sphere radius, the optimal radius-update strategy, and their highly variable computational complexity are still unresolved. A new detection scheme is proposed to address the above issues with fixed detection complexity, making the scheme suitable for practical implementation. / Thesis (Master, Electrical & Computer Engineering) -- Queen's University, 2008.
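As a point of reference for the detectors discussed (my illustration; the thesis's detectors are soft-input soft-output), the following minimal sketch shows plain hard-decision linear MMSE detection for a flat-fading V-BLAST system y = Hx + n:

```python
# Assumptions: 4x4 Rayleigh-fading channel, BPSK streams, 10 dB SNR
# (linear value 10); a SISO detector would emit LLRs instead of signs.
import numpy as np

rng = np.random.default_rng(3)
nt, nr, snr = 4, 4, 10.0

H = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
x = rng.choice([-1.0, 1.0], nt) + 0j                     # transmitted streams
n = (rng.standard_normal(nr) + 1j * rng.standard_normal(nr)) * np.sqrt(0.5 / snr)
y = H @ x + n                                            # received vector

# MMSE filter: W = (H^H H + (1/snr) I)^{-1} H^H
W = np.linalg.solve(H.conj().T @ H + np.eye(nt) / snr, H.conj().T)
x_hat = np.sign((W @ y).real)
print(x.real.astype(int), x_hat.astype(int))             # should usually agree
```

MMSE-SIC extends this by detecting the strongest stream, cancelling its contribution from y, and re-filtering, which is the successive-interference-cancellation step the thesis compares against list-type detectors.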