21 |
Performance Evaluation of Turbo code in LTE system / Wu, Han-Ying, 25 July 2011 (has links)
As demand for high data-rate multimedia services in wireless broadband access increases, advanced wireless communication technologies have developed rapidly. Long-Term Evolution (LTE) is the new standard for wireless broadband access recently specified by the 3GPP (3rd Generation Partnership Project) on the way towards fourth-generation mobile systems. In this thesis, we are interested in 3GPP-LTE technology and focus on the turbo coding technique used therein. Using MATLAB/Simulink, we build a turbo codec simulation platform for the 3GPP-LTE system. The turbo encoder is implemented with two convolutional encoders, realizing the concept of parallel concatenated convolutional codes (PCCCs), and a quadratic permutation polynomial (QPP) interleaver. The a posteriori probability (APP) decoder built into Simulink is used to design the decoder, which performs the soft-output Viterbi algorithm (SOVA). A zero-order-hold block controls the number of decoding iterations in the iterative decoding process. We evaluate the performance of the 3GPP-LTE turbo codec over the AWGN channel on the developed platform, simulating various cases with different data lengths, numbers of decoding iterations, interleavers and decoding algorithms. The simulation results are compared with those of the Xilinx 3GPP-LTE turbo codec; the comparisons show that our turbo codec works properly and meets the LTE standard.
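The QPP interleaver mentioned above has a compact closed form: position x of the interleaved block is read from position (f1·x + f2·x²) mod K. A minimal sketch, assuming the coefficient pair for the shortest LTE block length (K = 40, f1 = 3, f2 = 10, the first row of the interleaver table in 3GPP TS 36.212):

```python
# Minimal sketch of the quadratic permutation polynomial (QPP) interleaver:
# output position x is read from input position (f1*x + f2*x^2) mod K.
# K=40, f1=3, f2=10 is the first entry of the LTE interleaver table in
# 3GPP TS 36.212; other block lengths use other (f1, f2) pairs.

def qpp_interleave(bits, f1, f2):
    k = len(bits)
    return [bits[(f1 * x + f2 * x * x) % k] for x in range(k)]

K, F1, F2 = 40, 3, 10
perm = qpp_interleave(list(range(K)), F1, F2)
# A valid QPP is a permutation: every input index appears exactly once.
assert sorted(perm) == list(range(K))
```

The quadratic form is what makes the interleaver both contention-free for parallel decoding and cheap to compute on the fly, which is why LTE adopted it over table-based interleavers.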
|
22 |
A Low Voltage Class AB Switched Current Sample and Hold Circuit / Hung, Ming-yang, 21 August 2009 (has links)
In this thesis, a switched-current (SI) sample-and-hold circuit is proposed. A feedback circuit is used to decrease the input impedance and reduce the transmission error of the SI cell. Furthermore, the entire memory cell is designed in a coupled differential replicate form to eliminate the clock-feedthrough (CFT) error.
The sample-and-hold circuit is simulated using the parameters of the TSMC 0.35 μm CMOS process. The simulation results show a spurious-free dynamic range (SFDR) of 55 dB, a sampling rate of 40 MHz, a power consumption of 0.38 mW, and a 1.5 V power supply. The circuit is further verified by Cadence HSPICE simulation.
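As a rough illustration of how a figure like the 55 dB SFDR above is obtained, the sketch below estimates SFDR from the FFT of a sampled, quantized tone: the ratio of the fundamental to the largest remaining spur. The quantizer depth, tone frequency and window are assumptions for the demonstration, not the thesis's test setup.

```python
import numpy as np

# Illustrative SFDR estimate from the spectrum of a coherently sampled tone.
# Quantizer depth, tone bin and window are assumptions, not the thesis setup.
fs, n = 40e6, 1024                       # 40 MHz sampling rate, 1024 points
fin = fs * 101 / n                       # coherent sampling: prime bin number
t = np.arange(n) / fs
x = np.sin(2 * np.pi * fin * t)
xq = np.round(x * 31) / 31               # crude 6-bit-style quantizer adds spurs
spec = np.abs(np.fft.rfft(xq * np.hanning(n)))
fund = int(np.argmax(spec))
mask = np.ones_like(spec, dtype=bool)
mask[max(fund - 3, 0):fund + 4] = False  # exclude window leakage around the tone
sfdr_db = 20 * np.log10(spec[fund] / spec[mask].max())
```

In a real measurement the tone comes from a low-distortion signal generator and the spectrum from the captured converter output, but the spur-search logic is the same.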
|
23 |
An analysis of the dynamics of two high-value export chains: smallholder participation, standards and trust / Romero Granja, Cristina Maria, 06 July 2015 (has links)
No description available.
|
24 |
Transaction cost and host country’s opportunistic behavior in oil E&P / Kim, Tae Eun, 13 July 2011 (has links)
The purpose of this paper is to understand why a host country (HC) shows ex post opportunistic behaviors in E&P projects and frequently forces international oil companies (IOCs) to renegotiate previously signed contracts. The research employs the concepts of asset specificity and the hold-up problem from transaction cost economics (TCE). It then examines the unique characteristics of E&P projects, HC opportunistic behaviors, and IOC safeguards. Kazakhstan is selected as a case study linking the economic theory to HC ex post opportunism in oil E&P projects. The analysis shows that HC ex post opportunism can be explained by a hold-up problem resulting from IOCs’ sunk investments and the unique characteristics of the oil E&P industry. When IOCs’ important capital assets become sunk investments and the price of oil increases rapidly, the HC has a strong incentive to appropriate IOCs’ profits through ex post opportunism. Yet at the same time, the HC must consider the damage to its reputation when deciding the extent and means of its ex post opportunistic behaviors in oil E&P projects.
|
25 |
Deception and Arousal in Texas Hold ‘em Poker / Lee, Jackey Ting Hin, January 2013 (has links)
In our pilot study investigating Texas Hold ‘em poker, we found that bluffing (betting with a losing hand) elicits a physiological arousal response (as measured by skin conductance levels) similar to that of players in a position of strength and poised to win. Since arousal has been suggested to be a reinforcing factor in problematic gambling behaviour, we sought to replicate the findings of our pilot study in the current investigation. We aimed to extend our previous findings by isolating truthful betting (strong betting) to disambiguate deception when players are in positions of strength (i.e. trapping), measuring subjective excitement levels and risk assessments, investigating the physiological arousal responses following wins versus losses, and, finally, exploring group differences (i.e. problem gambling status, experience levels). 71 participants played 20 naturalistic rounds of Texas Hold ‘em poker for monetary rewards. We replicated our previous finding that bluffing triggers physiological arousal (as measured by skin conductance responses) similar to truthful strong betting. Trapping was also found to elicit a skin conductance response similar to both bluffing and strong betting. Measures of subjective excitement revealed a pattern that converged with the physiological data. Furthermore, wins were found to be more arousing than losses. Finally, our exploratory analysis of group differences (i.e. problem gambling status, experience) revealed no significant effects on any measure. We conclude that the effect of bluffing on physiological arousal is robust enough to appear in all participants, which is problematic given its risky nature and potential to be self-triggered. With the game's ever-increasing popularity and availability, more research on Texas Hold ‘em poker is warranted for treatment implications.
|
26 |
Conception sur silicium de convertisseurs analogique-numérique haut débit pour le radiotélescope SKA / Silicon design of high-speed analog-to-digital converters for the SKA radio telescope / Da Silva, Bruno, 23 September 2010 (has links)
For applications in radio astronomy, the interface between the analog and digital domains is of primary concern. Analog-to-digital converters (ADCs) must achieve high resolution at ever higher sampling rates in order to digitise the largest possible bandwidth. For the future giant international radio telescope, the Square Kilometer Array (SKA), the required bandwidth extends from 100 to 1500 MHz. The subject of this thesis is to design and manufacture an ADC in the Qubic4X 0.25 µm SiGeC technology capable of exceeding one gigasample per second (GS/s), in order to digitise the entire passband for dense phased arrays. Two ADC designs are presented. For this project, we analysed the individual building blocks with the goal of minimising static and dynamic errors in a 6-bit parallel architecture. The first 6-bit ADC, designed, manufactured and tested in BiCMOS, operates at 1 GS/s. Post-layout simulations show an effective number of bits of 4.6 at a 400 MHz input frequency. The layout was designed so that the chip could be tested, allowing the design to be validated at the output. Tests show that the ADC operates up to a maximum rate of 850 MS/s with a 400 MHz passband; however, residual errors make the current circuit unusable for radio astronomy. This ADC consumes 2 W, a high power consumption due to the input and output interfaces. The second 6-bit ADC, entirely bipolar, operates at 3 GS/s. This parallel-architecture converter is designed throughout with bipolar differential topologies, and its digital part uses emitter-coupled logic (ECL), yielding a high conversion rate. Post-layout simulations show that this ADC can operate at 3 GS/s, giving a 1400 MHz passband; dynamic results indicate an effective number of bits of 5 for a power consumption of 3 W.
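The effective-number-of-bits figures quoted above follow the standard conversion between SINAD (signal-to-noise-and-distortion ratio, in dB) and ENOB used in ADC testing: ENOB = (SINAD - 1.76)/6.02. A minimal sketch; the dB values in the checks are derived from the formula, not reported in the thesis:

```python
# Standard ADC-testing relation between SINAD (dB) and effective number of
# bits (ENOB). The numeric SINAD values below are derived from the formula
# for the ENOB figures quoted in the abstract; they are not thesis data.

def enob(sinad_db):
    return (sinad_db - 1.76) / 6.02

def sinad(enob_bits):
    return 6.02 * enob_bits + 1.76

assert abs(sinad(4.6) - 29.45) < 0.01   # first ADC: 4.6 bits ~ 29.45 dB SINAD
assert abs(sinad(5.0) - 31.86) < 0.01   # second ADC: 5 bits ~ 31.86 dB SINAD
```

The 1.76 dB offset and 6.02 dB-per-bit slope come from the quantization-noise power of an ideal uniform quantizer driven by a full-scale sine wave.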
|
27 |
Métodos de validação tradicional e temporal aplicados à avaliação de classificadores de RNAs codificantes e não codificantes / Traditional and temporal validation methods applied to the evaluation of coding and non-coding RNA classifiers / Clebiano da Costa Sá, 23 March 2018 (has links)
Ribonucleic acids (RNAs) fall into two main classes: protein-coding and non-coding. Coding RNAs, represented by messenger RNAs (mRNAs), carry the information needed for protein synthesis. Non-coding RNAs (ncRNAs) are not translated into proteins but are involved in many distinct cellular activities and are associated with diseases such as heart disease, cancer and psychiatric disorders. The discovery of new ncRNAs and their molecular roles advances our knowledge of molecular biology and may also drive the development of new therapies. The identification of ncRNAs is an active research area, and one current approach is to classify transcribed sequences with pattern-recognition systems based on their features. Many classifiers have been developed for this purpose, especially in the last three years; an example is the Coding Potential Calculator (CPC), based on Support Vector Machines (SVMs). Other robust algorithms, such as Random Forest (RF), are also recognized for their potential in classification tasks. The most commonly used method for evaluating these tools has been k-fold cross-validation. An issue this form of validation overlooks is its assumption that the frequency distributions within the database, in terms of sequence classes and other variables, do not change over time. If this assumption does not hold, traditional methods such as cross-validation and hold-out may underestimate classification errors. A validation method that takes the constant evolution of databases into account is therefore needed to provide a more realistic performance analysis of these classifiers. 
In this work we compare two methods for evaluating classifiers: temporal hold-out and traditional (atemporal) hold-out. In addition, we test new classification models combining different induction algorithms with features from state-of-the-art classifiers and a new feature set. From the hypothesis tests, we observe that both traditional and temporal hold-out validation tend to underestimate classification errors, that evaluation by temporal validation is more reliable, that classifiers trained with parameters calibrated by temporal validation do not improve classification, and that our Random Forest-based model, trained with state-of-the-art classifier features plus a new feature set, significantly improves the discrimination of coding and non-coding RNAs. Finally, we highlight the potential of the Random Forest algorithm and of the features used for this classification problem, and we suggest the temporal hold-out validation method to obtain more reliable performance estimates for classifiers of protein-coding and non-coding RNAs.
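The two evaluation schemes compared in the thesis can be sketched minimally: a traditional hold-out ignores when each sequence entered the database, while a temporal hold-out trains only on records older than a cutoff and tests on newer ones. The record layout and names below are illustrative assumptions, not the thesis's actual data format.

```python
import random

def traditional_holdout(records, test_frac=0.3, seed=0):
    """Atemporal hold-out: random split, ignoring when records were added."""
    rng = random.Random(seed)
    shuffled = records[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_frac))
    return shuffled[:cut], shuffled[cut:]

def temporal_holdout(records, cutoff_year):
    """Temporal hold-out: train on older records, test only on newer ones."""
    train = [r for r in records if r[0] < cutoff_year]
    test = [r for r in records if r[0] >= cutoff_year]
    return train, test

# Toy records: (year added to database, sequence id, class label).
records = [(2014, "seqA", "coding"), (2015, "seqB", "noncoding"),
           (2016, "seqC", "coding"), (2017, "seqD", "noncoding")]
train, test = temporal_holdout(records, cutoff_year=2016)
assert all(r[0] < 2016 for r in train) and all(r[0] >= 2016 for r in test)
```

If the class or feature distribution drifts over time, only the temporal split exposes the classifier to that drift at evaluation time, which is why the random split tends to be optimistic.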
|
28 |
Conversations for Connection: An Outcome Assessment of the Hold Me Tight Relationship Education Program for Couples / Kennedy, Nikki, January 2017 (has links)
Hold Me Tight: Conversations for Connection (HMT) is a relationship education program based on Emotionally Focused Therapy (EFT; Johnson, 2004), an empirically supported model of couple therapy with roots in attachment theory. Currently, relationship education is mostly provided through skills-based programs focused on teaching communication, problem-solving and conflict-resolution skills from a social-learning perspective. The HMT program is different: it targets attachment and emotional connection, aspects the literature identifies as central to relationship functioning. The present study is the first outcome study of the HMT program. Its purpose was to examine the trajectory of change in relationship satisfaction, trust, attachment, intimacy, depressive symptoms and anxiety symptoms. Couples who participated were from several cities across Canada and the United States. Trajectories of the outcome variables were modeled across baseline, pre-program, post-program and follow-up in a sample of 95 couples participating in 16 HMT program groups. A four-level hierarchical linear modeling (HLM; Raudenbush & Bryk, 2002) analysis showed a significant cubic growth pattern for relationship satisfaction, trust, attachment avoidance, and depressive and anxiety symptoms: no change from baseline to pre-program, improvement from pre-program to post-program, and a return to pre-program levels at follow-up. Follow-up analyses showed that the pre- to post-program changes were significant with a large effect size. We also examined couples’ reported ability to engage in the conversations from the program and found that mean scores declined from post-program to follow-up. The results of this initial pilot study suggest that the HMT program is a promising alternative to existing relationship education programs, with results comparable to skills-based relationship education programs. 
The decrease in scores from post-program to follow-up suggests that booster sessions after the program may be needed to help couples maintain gains. Limitations and areas for further study are discussed.
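The shape the study calls a cubic growth pattern can be illustrated concretely: with four assessment points, flat from baseline to pre-program, a rise to post-program, and a fall back by follow-up, a degree-3 polynomial fits the means exactly. This is only a sketch of the trajectory shape, not the study's four-level HLM, and the score values are invented for illustration.

```python
import numpy as np

# Illustrative sketch (not the study's four-level HLM): a cubic trajectory
# over the four assessments. Four time points are fitted exactly by a
# degree-3 polynomial; the satisfaction scores below are invented.
t = np.array([0.0, 1.0, 2.0, 3.0])           # baseline, pre, post, follow-up
satisfaction = np.array([4.0, 4.0, 5.2, 4.0])  # flat, rise, fall back
coeffs = np.polyfit(t, satisfaction, 3)        # exact fit: 4 points, degree 3
fitted = np.polyval(coeffs, t)
assert np.allclose(fitted, satisfaction)
```

In the actual analysis the cubic term is estimated per couple within a multilevel model, so its significance tests whether this rise-and-return shape holds across the sample rather than in a single averaged curve.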
|
29 |
Reologie roztoků hyaluronanů / Rheology of hyaluronan solutions / Hlisnikovská, Kristýna, January 2008 (has links)
The aim of this study was to investigate the rheological behaviour of aqueous solutions of high-molecular-weight hyaluronan. The influence on the viscoelasticity and stability of these solutions of increasing biopolymer concentration, ranging from 1 to 3 percent by weight, and of increasing ionic strength of the solvent, produced by adding sodium chloride, was studied. For a fuller description of the viscoelastic properties of the solutions, creep tests were used alongside conventional oscillatory measurements; from these it was possible to determine important quantities such as the percentage ratio of the viscous and elastic components of the sample, the equilibrium compliance, the zero-shear viscosity and the retardation time. These were then compared with the outputs of other types of measurement, such as oscillatory and flow curves, or provided complementary information important for a more detailed description of the viscoelastic properties of these solutions. The stability of the samples under load was studied with the peak-hold method, which showed very good mechanical and temporal resistance of the hyaluronan solutions and indicated the limits beyond which permanent damage to the structure and degradation of the hyaluronan chains occur, limits that must therefore be taken into account when handling solutions of this biopolymer for further applications.
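The creep-test quantities listed above (equilibrium compliance, zero-shear viscosity, retardation time) appear as parameters of a Burgers-type compliance model, one common way such creep data are interpreted. A minimal sketch, with invented parameter values rather than measured data:

```python
import math

# Illustrative Burgers-type creep compliance
#     J(t) = J0 + Je * (1 - exp(-t / tau)) + t / eta0
# whose parameters are the quantities extracted from the creep tests:
# equilibrium compliance Je, retardation time tau, zero-shear viscosity
# eta0. All parameter values here are invented, not measured data.

def creep_compliance(t, j0, je, tau, eta0):
    elastic = j0                                  # instantaneous response
    retarded = je * (1.0 - math.exp(-t / tau))    # delayed elastic response
    viscous = t / eta0                            # steady viscous flow
    return elastic + retarded + viscous

# At times much longer than tau, the slope of J(t) tends to 1/eta0.
j1 = creep_compliance(1000.0, j0=1e-3, je=5e-3, tau=2.0, eta0=50.0)
j2 = creep_compliance(1001.0, j0=1e-3, je=5e-3, tau=2.0, eta0=50.0)
assert abs((j2 - j1) - 1.0 / 50.0) < 1e-9
```

Fitting this expression to a measured creep curve yields tau from the curvature of the early response and eta0 from the terminal slope, which is how the quantities named above are typically extracted.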
|
30 |
Towards optimizing particle deposition in bifurcating structures / Sonnenberg, Adam, 19 May 2020 (has links)
Particle deposition patterns formed in the lung upon inhalation are of interest to a wide spectrum of biomedical sciences, particularly for their influence on non-invasive therapies that deliver drugs to the respiratory tract. Before reaching the alveoli, particles, or collections of liquid droplets called aerosols, must traverse this bifurcating network. This dissertation proposes a multi-faceted strategy for optimizing current methods of drug delivery by analyzing particle deposition in a single bifurcation and in a complex three-dimensional tree as a model of the airways. In this thesis, previous probabilistic formulations of particle deposition in a single bifurcation were first examined, combined and verified by computational fluid dynamics modeling. The traditional single-bifurcation model was then extended to a multigenerational network as a Markov chain. The probabilistic approach, combined with detailed fluid mechanics in bifurcating structures, permits a more realistic treatment of particle deposition. The formulation enables rapid comparative analysis among different flow policies, i.e. how varying modes of inhalation affect local particle deposition and total particle escape rates. For example, this approach showed that body position has a minimal effect on deposition pattern, while a specific flow profile maximizes deposition into the periphery of the lung.
Also included are novel experimental results on particle deposition. Most experimental deposition studies are restricted to total deposition; regional deposition can only be estimated, not directly measured, without destroying the lung-like models, so the measurement requires multiple models, which adds to the variance. To this end, a standard physical model for investigating the effects of various ventilation strategies on regional particle deposition was developed. Results suggest that a brief pause in flow can increase deposition into regions of blocked airways where drugs would not otherwise enter. Experiments were also conducted to investigate the effects of inertia-dominated flow in symmetric and asymmetric structures, revealing novel features in 3D compared to 2D.
This dissertation combines experimental and computational results to propose a strategy for efficiently moving particles through symmetric and asymmetric bifurcating structures. It also introduces possible strategies for maximizing deposition to a desired region of a lung structure.
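The Markov-chain view described above can be sketched minimally: a particle entering airway generation i either deposits there or passes to generation i+1, and whatever clears the last generation escapes to the lung periphery. The per-generation probabilities below are invented for illustration, not fitted values from the dissertation.

```python
# Minimal sketch of the Markov-chain formulation: a particle entering airway
# generation i deposits there with probability p_dep[i], otherwise moves on;
# whatever clears the last generation escapes to the lung periphery.
# The per-generation probabilities are illustrative assumptions.

def escape_probability(p_dep_per_gen):
    p = 1.0
    for p_dep in p_dep_per_gen:
        p *= 1.0 - p_dep       # survive this generation without depositing
    return p

# Comparing two "flow policies": slower inhalation typically raises
# per-generation deposition, lowering the fraction that escapes distally.
fast_flow = [0.02] * 10        # 10 generations, low deposition per branch
slow_flow = [0.05] * 10
assert escape_probability(fast_flow) > escape_probability(slow_flow)
```

In the full formulation the per-generation probabilities are not constants but functions of local flow rate, particle size and geometry, which is what lets the chain compare inhalation policies quickly without rerunning full fluid simulations.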
|