  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
361

Market Integration Analysis and Time-series Econometrics: Conceptual Insights from Markov-switching Models / Marktintegrationsanalyse und Zeitreihenökonometrie: Begriffseinblicke aus den Markov-Switching Modellen

Abunyuwah, Isaac 31 January 2008 (has links)
No description available.
362

Iterative joint detection and decoding of LDPC-Coded V-BLAST systems

Tsai, Meng-Ying (Brady) 10 July 2008 (has links)
Soft iterative detection and decoding techniques have been shown to achieve near-capacity performance in multiple-antenna systems. Obtaining the optimal soft information by marginalization over the entire observation space is intractable, and the current literature does not establish the best way to obtain suboptimal soft information. In this thesis, several existing soft-input soft-output (SISO) detectors are examined, including minimum mean-square error successive interference cancellation (MMSE-SIC), list sphere decoding (LSD), and Fincke-Pohst maximum-a-posteriori (FPMAP). Prior research has demonstrated that LSD and FPMAP outperform soft-equalization methods (i.e., MMSE-SIC); however, it is unclear which of the two schemes is superior in terms of the performance-complexity trade-off. A comparison is conducted to resolve the matter. In addition, an improved scheme is proposed that modifies LSD and FPMAP, simultaneously improving error performance and reducing computational complexity. Although list-type detectors such as LSD and FPMAP provide outstanding error performance, issues such as the optimal initial sphere radius, the optimal radius-update strategy, and their highly variable computational complexity remain unresolved. A new detection scheme is proposed to address these issues with fixed detection complexity, making it suitable for practical implementation. / Thesis (Master, Electrical & Computer Engineering) -- Queen's University, 2008-07-08
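The soft-equalization baseline mentioned above can be illustrated with the linear MMSE step at its core. This is a minimal sketch under assumed names (`H`, `y`, `sigma2`), not the thesis code, and the successive interference cancellation stage of MMSE-SIC is omitted:

```python
import numpy as np

def mmse_detect(H, y, sigma2):
    """Soft linear MMSE estimate of x from y = H x + n:
    x_hat = (H^H H + sigma2 I)^{-1} H^H y."""
    n = H.shape[1]
    G = H.conj().T @ H + sigma2 * np.eye(n)
    return np.linalg.solve(G, H.conj().T @ y)

# Toy check: with (near-)zero noise the estimate recovers the symbols.
H = np.array([[1.0, 0.5], [0.2, 1.0]])
x = np.array([1.0, -1.0])
x_hat = mmse_detect(H, H @ x, 1e-9)
```

In a full MMSE-SIC receiver this estimate would be computed per symbol, with already-detected symbols cancelled from `y` before each pass.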
363

Digit-Online LDPC Decoding

Marshall, Philip A. Unknown Date
No description available.
364

A Measurement of the Proton's Weak Charge Using an Integration Cerenkov Detector System

Wang, Peiqing 02 September 2011 (has links)
The Q-weak experiment at the Thomas Jefferson National Accelerator Facility (USA) will make a precision determination of the proton's weak charge, with approximately 4% combined statistical and systematic uncertainty, via a measurement of the parity-violating asymmetry in elastic electron-proton scattering at very low momentum transfer and forward angle. This will allow an extraction of the weak mixing angle at Q^2 = 0.026 (GeV/c)^2 to approximately 0.3%. The weak mixing angle is a fundamental parameter in the Standard Model of electroweak interactions. At the proposed accuracy, a measured deviation of this parameter from the predicted value would indicate new physics beyond what is currently described in the Standard Model. Without such a deviation, the measurement would place stringent limits on possible extensions to the Standard Model and constitute the most precise measurement of the proton's weak charge to date. The key experimental apparatus includes a liquid hydrogen target, a toroidal magnetic spectrometer, and a set of eight Cerenkov detectors. The Cerenkov detectors form the main detector system for the Q-weak experiment, are used to measure the parity-violating asymmetry during the primary Q-weak production runs, and are the main subject of this thesis. Following a brief introduction to the experiment, the design, development, construction, installation, and testing of this detector system are discussed in detail, followed by a discussion of detector diagnostic data analysis and the corresponding detector performance. The experiment has been successfully constructed and commissioned, and is currently taking data. The thesis concludes with a preliminary analysis of a small portion of the liquid hydrogen data.
365

The relationship between the forward- and the realized spot exchange rate in South Africa / Petrus Marthinus Stephanus van Heerden

Van Heerden, Petrus Marthinus Stephanus January 2010 (has links)
The inability to effectively hedge against unfavourable exchange rate movements, using the current forward exchange rate as the only guideline, is a key inhibiting factor of international trade. Market participants use the current forward exchange rate quoted in the market to make decisions regarding future exchange rate changes. However, the current forward exchange rate is not solely determined by the interaction of demand and supply; it is also a mechanistic estimation based on the current spot exchange rate and the carry cost of the transaction. Results of various studies, including this study, demonstrate that the current forward exchange rate differs substantially from the realized future spot exchange rate. This phenomenon is known as the exchange rate puzzle. This study contributes to the modelling of exchange rate theories by developing an exchange rate model that can explain both the realized future spot exchange rate and the exchange rate puzzle. The model is based only on current (time t) economic fundamentals and includes an alternative approach to incorporating the impact of the interaction of two international financial markets. This study derives a unique exchange rate model, which shows that the exchange rate puzzle is a pseudo problem. The pseudo problem rests on the generally accepted fallacy that current non-stationary, level time series data cannot be used to model exchange rate theories, owing to the incorrect assumption that all available econometric methods yield statistically insignificant results due to spurious regressions. Empirical evidence conclusively shows that non-stationary, level time series data on current economic fundamentals can explain the realized future spot exchange rate in a statistically significant way and, therefore, that the exchange rate puzzle can be solved.
This model will give market participants in the foreign exchange market a better indication of expected future exchange rates, which will considerably reduce the dependence on the mechanistically derived forward points. The newly derived exchange rate model will also have an influence on the demand and supply of forward exchange, resulting in forward points that are a more accurate prediction of the realized future exchange rate. / Thesis (Ph.D. (Risk management))--North-West University, Potchefstroom Campus, 2011.
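The "mechanistic estimation" described above, a forward rate derived from the current spot rate plus the carry cost of the transaction, is the standard covered-interest computation. A minimal sketch; the spot level, interest rates and day-count convention below are illustrative assumptions, not figures from the study:

```python
def forward_rate(spot, r_domestic, r_foreign, t_years):
    """Covered-interest forward: spot adjusted for the carry cost,
    spot * (1 + r_dom * t) / (1 + r_for * t), simple-interest convention,
    quoted as domestic currency per unit of foreign currency."""
    return spot * (1 + r_domestic * t_years) / (1 + r_foreign * t_years)

# Hypothetical ZAR/USD numbers: spot 7.50, ZAR rate 7%, USD rate 2%, 3 months.
f = forward_rate(7.50, 0.07, 0.02, 0.25)
forward_points = f - 7.50  # roughly 0.093
```

The thesis's point is precisely that this mechanical quantity, rather than expected future spot demand and supply, drives quoted forward points.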
368

Perspective of risk in childbirth, women’s expressed wishes for mode of delivery and how they actually give birth

Kringeland, Tone January 2009 (has links)
Aims: The main aim of this thesis was to study women's expressed wishes for mode of delivery and how they actually give birth. Additional aims were to examine the notion of risk as applied to childbirth, to examine what characterizes women who want to give birth as naturally as possible, without painkillers or intervention, and what characterizes women who would, if possible, choose a cesarean section. Material and methods: The thesis comprises four papers. The notion of risk was examined in an essay; the three empirical papers draw on the Norwegian Mother and Child Cohort Study (MoBa). Self-rating instruments were completed by 55,858 MoBa participants during week 30 of their pregnancy, with data available by April 2007. Individually reported information on socioeconomic factors, lifestyle factors, feelings related to childbirth, psychosocial health, physical, psychological and sexual harassment, and satisfaction with antenatal care services was collected from a MoBa questionnaire. Data on the mother's age, parity, physical health before and during the pregnancy, previous cesarean sections and actual mode of delivery were collected through linkage to the Medical Birth Registry of Norway. Findings: General perspectives on risk differ depending on both the person and the profession. More and more childbearing women are in danger of being considered deficient and in the danger zone. Figures on risk are not objective values, and the association between risk and security is socially and culturally determined. Personal symbols can be basic assumptions about the life one leads, and the childbearing woman has preferences of her own. Interest in natural childbirth was expressed by 72 percent of the women, and a wish for a caesarean section by ten percent.
Positive experience of previous childbirths, a first or third-or-later birth, no dread of giving birth, and reporting positive intra-psychic phenomena are significantly associated with the wish for a natural birth. Negative experiences of previous childbirths and fear of giving birth are two of the strongest factors associated with a wish for a caesarean section. Overall, 47 percent of the women who wanted "as natural a birth as possible" had their preference fulfilled. The figures differed largely between primiparas and multiparas; the risk of acute caesarean section was high among primiparas, and the effects of the predictors of natural birth were stronger for primiparas than for multiparas. Conclusions: The factors that influence the chance of having a natural birth are different for primiparas and multiparas. The high rate of non-natural births among first-time mothers who actually want a vaginal birth without interventions should call attention to the increasing incidence of cesarean section in Norway. The chance of actually having a natural birth, for women who prefer one, is much larger for multiparas. Negative experiences of previous childbirths and prior cesarean section are, however, important factors associated with non-natural birth and should be taken into consideration in public health.
369

On Non-Binary Constellations for Channel Encoded Physical Layer Network Coding

Faraji-Dana, Zahra 18 April 2012 (has links)
This thesis investigates channel-coded physical-layer network coding, in which the relay directly transforms the noisy superimposed channel-coded packets received from the two end nodes into the network-coded combination of the source packets. This is in contrast to the traditional multiple-access problem, in which the goal is to obtain each message explicitly at the relay. Here, the end nodes A and B choose their symbols, S_A and S_B, from a small non-binary field F and use a non-binary PSK constellation mapper during the transmission phase. The relay then directly decodes the network-coded combination aS_A + bS_B over F from the noisy superimposed channel-coded packets received from the two end nodes. Trying to obtain S_A and S_B explicitly at the relay is overly ambitious when the relay only needs aS_A + bS_B. For the binary case, the only possible network-coded combination, S_A + S_B over the binary field, does not offer the best performance under several channel conditions. The advantage of working over non-binary fields is the opportunity to decode according to multiple decoding coefficients (a, b). As only one of the network-coded combinations needs to be successfully decoded, a key advantage is a reduction in error probability obtained by attempting to decode against all choices of decoding coefficients. In this thesis, we compare different constellation mappers and prove that not all of them have distinct performance in terms of frame error rate. Moreover, we derive a lower bound on the frame error rate of decoding the network-coded combinations at the relay.
Simulation results show that if we adopt concatenated Reed-Solomon and convolutional coding or low density parity check codes at the two end nodes, our non-binary constellations can outperform the binary case significantly in the sense of minimizing the frame error rate and, in particular, the ternary constellation has the best frame error rate performance among all considered cases.
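The relay's target combination aS_A + bS_B over a small field can be sketched for the ternary field F_3, where the extra freedom over the binary case is the choice among several coefficient pairs (a, b). The symbol vectors below are toy values, not the thesis's simulation setup:

```python
def network_combo(a, b, sA, sB, q=3):
    """Network-coded combination a*S_A + b*S_B, elementwise over F_q."""
    return [(a * x + b * y) % q for x, y in zip(sA, sB)]

# Toy ternary (F_3) symbol packets from end nodes A and B.
sA = [0, 1, 2, 1]
sB = [2, 2, 0, 1]

# Every pair with a, b nonzero in F_3 is a valid decoding target; the relay
# succeeds if it manages to decode ANY one of them.
combos = {(a, b): network_combo(a, b, sA, sB) for a in (1, 2) for b in (1, 2)}
# e.g. combos[(1, 1)] == [2, 0, 2, 2]
```

Over the binary field this dictionary would contain only the single entry (1, 1), which is why no such diversity of decoding targets is available there.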
370

Efficient architectures for error control using low-density parity-check codes

Haley, David January 2004 (has links)
Recent designs for low-density parity-check (LDPC) codes have exhibited capacity approaching performance for large block length, overtaking the performance of turbo codes. While theoretically impressive, LDPC codes present some challenges for practical implementation. In general, LDPC codes have higher encoding complexity than turbo codes both in terms of computational latency and architecture size. Decoder circuits for LDPC codes have a high routing complexity and thus demand large amounts of circuit area. There has been recent interest in developing analog circuit architectures suitable for decoding. These circuits offer a fast, low-power alternative to the digital approach. Analog decoders also have the potential to be significantly smaller than digital decoders. In this thesis we present a novel and efficient approach to LDPC encoder / decoder (codec) design. We propose a new algorithm which allows the parallel decoder architecture to be reused for iterative encoding. We present a new class of LDPC codes which are iteratively encodable, exhibit good empirical performance, and provide a flexible choice of code length and rate. Combining the analog decoding approach with this new encoding technique, we design a novel time-multiplexed LDPC codec, which switches between analog decode and digital encode modes. In order to achieve this behaviour from a single circuit we have developed mode-switching gates. These logic gates are able to switch between analog (soft) and digital (hard) computation, and represent a fundamental circuit design contribution. Mode-switching gates may also be applied to built-in self-test circuits for analog decoders. Only a small overhead in circuit area is required to transform the analog decoder into a full codec. The encode operation can be performed two orders of magnitude faster than the decode operation, making the circuit suitable for full-duplex applications. 
Throughput of the codec scales linearly with block size, for both encode and decode operations. The low power and small area requirements of the circuit make it an attractive option for small portable devices.
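The parity-check structure that drives LDPC decoding can be illustrated with a minimal hard-decision bit-flipping decoder on a toy matrix. This is a sketch for intuition only: the analog decoders in the thesis perform soft, sum-product-style computation, which this hard-decision loop does not capture, and the matrix below is a small example rather than one of the iteratively encodable codes the thesis proposes.

```python
import numpy as np

# Toy parity-check matrix (a Hamming(7,4) code written LDPC-style).
H = np.array([
    [1, 1, 0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0],
    [0, 1, 1, 1, 0, 0, 1],
])

def bit_flip_decode(H, r, max_iters=10):
    """Hard-decision bit flipping: while any parity check fails, flip the
    bit that participates in the most unsatisfied checks."""
    r = r.copy()
    for _ in range(max_iters):
        syndrome = H @ r % 2
        if not syndrome.any():
            break  # all parity checks satisfied
        fails = H.T @ syndrome  # failed-check count per bit
        r[np.argmax(fails)] ^= 1
    return r

codeword = np.array([1, 0, 1, 0, 1, 0, 1])  # H @ codeword is 0 mod 2
received = codeword.copy()
received[2] ^= 1  # inject a single bit error
decoded = bit_flip_decode(H, received)
```

The high routing complexity mentioned above comes from wiring each such check to its bits in parallel hardware rather than looping as this sketch does.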
