61

Essays on International Reserve Accumulation and Cooperation in Latin America

Rosero, Luis Daniel 01 September 2011 (has links)
One of the defining trends in international finance over the last two decades has been the unprecedented growth in the levels of international reserves accumulated by emerging nations. In a global financial system characterized by market failures and sudden stops, many developing countries have opted for the protection provided by individual accumulation of reserves as a second-best outcome. However, as suggested by Rodrik (2006), among others, the accumulation of reserves comes at a hefty opportunity cost to the nations that hold them. It is this particular aspect that brings into question--or at least merits a re-examination of--the validity and efficiency of reserve accumulation as a stabilization and development strategy, particularly in the context of some cash-strapped developing nations. This dissertation takes an in-depth look at this trend in Latin America by investigating the extent of protection of these precautionary reserves, the role of contagion risk in the accumulation process, and the outlook of regional arrangements of cooperation, such as regional reserve pooling mechanisms.
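The diversification logic behind regional reserve pooling can be illustrated with a toy Monte Carlo: when countries' capital-outflow shocks are imperfectly correlated, a pooled fund needs fewer total reserves to cover a given share of bad outcomes than the sum of each country's individual buffers. The sketch below is purely illustrative, with invented shock sizes and correlations, and is not the dissertation's empirical framework.

```python
import numpy as np

rng = np.random.default_rng(0)
n_countries, n_sims = 4, 100_000

# Hypothetical annual capital-outflow shocks (billions of USD), imperfectly correlated.
corr = 0.3 * np.ones((n_countries, n_countries)) + 0.7 * np.eye(n_countries)
cov = 25.0 * corr                      # standard deviation of 5 per country
shocks = rng.multivariate_normal(np.full(n_countries, 10.0), cov, size=n_sims)
shocks = np.clip(shocks, 0.0, None)    # outflows cannot be negative

# Reserves needed to cover the shock in 99% of simulations:
self_insurance = np.quantile(shocks, 0.99, axis=0).sum()   # each country on its own
pooled_fund = np.quantile(shocks.sum(axis=1), 0.99)        # one shared reserve pool
print(f"self-insurance total: {self_insurance:.1f}  pooled fund: {pooled_fund:.1f}")
```

With imperfect correlation, the pooled 99th-percentile requirement comes out below the sum of the individual requirements, which is the basic case for cooperation examined in the dissertation.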
62

Optimal Risk-based Pooled Testing in Public Health Screening, with Equity and Robustness Considerations

Aprahamian, Hrayer Yaznek Berg 03 May 2018 (has links)
Group (pooled) testing, i.e., testing multiple subjects simultaneously with a single test, is essential for classifying a large population of subjects as positive or negative for a binary characteristic (e.g., presence of a disease, genetic disorder, or a product defect). While group testing is used in various contexts (e.g., screening donated blood or for sexually transmitted diseases), a lack of understanding of how an optimal grouping scheme should be designed to maximize classification accuracy under a budget constraint hampers screening efforts. We study Dorfman and Array group testing designs under subject-specific risk characteristics, operational constraints, and imperfect tests, considering classification accuracy-, efficiency-, robustness-, and equity-based objectives, and characterize important structural properties of optimal testing designs. These properties provide us with key insights and allow us to model the testing design problems as network flow problems, develop efficient algorithms, and derive insights on equity and robustness versus accuracy trade-off. One of our models reduces to a constrained shortest path problem, for a special case of which we develop a polynomial-time algorithm. We also show that determining an optimal risk-based Dorfman testing scheme that minimizes the expected number of tests is tractable, resolving an open conjecture. Our case studies, on chlamydia screening and screening of donated blood, demonstrate the value of optimal risk-based testing designs, which are shown to be less expensive, more accurate, more equitable, and more robust than current screening practices.
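The efficiency argument for Dorfman (two-stage) pooling reduces to a few lines of arithmetic. The sketch below, which is not the dissertation's optimization model, computes the expected number of tests for a single pool under imperfect sensitivity and specificity and compares two hypothetical risk-based groupings of the same subjects; all risk values and error rates are made up.

```python
from math import prod

def dorfman_expected_tests(risks, sensitivity=0.98, specificity=0.99):
    """Expected number of tests for one Dorfman pool.

    risks: per-subject infection probabilities. Stage 1 tests the pooled
    sample; stage 2 retests each subject individually only if the pool
    tests positive. Assumes independent subjects and no dilution effect.
    """
    p_all_negative = prod(1.0 - p for p in risks)
    # Probability the pooled test reads positive (true or false positive).
    p_pool_positive = sensitivity * (1.0 - p_all_negative) + (1.0 - specificity) * p_all_negative
    return 1.0 + len(risks) * p_pool_positive

def dorfman_expected_false_negatives(risks, sensitivity=0.98):
    """A positive subject is classified positive only if both the pool test
    and the follow-up individual test are positive (independent errors)."""
    return sum(p * (1.0 - sensitivity ** 2) for p in risks)

# Compare two hypothetical groupings of the same 8 subjects into pools of 4.
risks = [0.001, 0.002, 0.003, 0.004, 0.02, 0.03, 0.04, 0.05]
homogeneous = [risks[:4], risks[4:]]   # similar risks pooled together
mixed = [risks[0::2], risks[1::2]]     # risks interleaved across pools
for name, pools in [("homogeneous", homogeneous), ("mixed", mixed)]:
    total = sum(dorfman_expected_tests(pool) for pool in pools)
    print(name, "expected tests:", round(total, 3))
```

In this toy example the risk-homogeneous grouping needs slightly fewer expected tests than the mixed one, which is the intuition behind risk-based designs; the dissertation's contribution is choosing such groupings optimally under accuracy, equity, and robustness objectives.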
63

Mapping SH3 Domain Interactomes

Xin, Xiaofeng 21 April 2010 (has links)
Src homology 3 (SH3) domains are one family of peptide recognition modules (PRMs); they bind peptides rich in proline or positively charged residues in target proteins and play important assembly or regulatory roles in dynamic eukaryotic cellular processes, especially in signal transduction and endocytosis. SH3 domains are conserved from yeast to human, and improper SH3 domain-mediated protein-protein interaction (PPI) leads to defects in cellular function and may even result in disease states. Since commonly used large-scale PPI mapping strategies employed full-length proteins or random protein fragments as screening probes and did not identify the particular PPIs mediated by the SH3 domains, I employed a combined experimental and computational strategy to address this problem. I used yeast two-hybrid (Y2H) as my major experimental tool, with individual SH3 domains as baits, to map SH3 domain-mediated PPI networks, or “SH3 domain interactomes”. One of my important contributions has been the improvement of Y2H technology. First, I generated a pair of Y2H host strains that improved the efficiency of high-throughput Y2H screening and validated their use. These strains were employed in my own research and were also adopted by other researchers in their large-scale PPI network mapping projects. Second, in collaboration with Nicolas Thierry-Mieg, I developed a novel smart-pooling method, Shifted Transversal Design (STD) pooling, and validated its application in large-scale Y2H. STD pooling proved superior to other currently available methods for obtaining large-scale PPI maps, offering higher coverage, high sensitivity, and high specificity. I mapped the SH3 domain interactomes for both the budding yeast Saccharomyces cerevisiae and the nematode worm Caenorhabditis elegans, which contain 27 and 84 SH3 domains, respectively. Comparison of these two SH3 interactomes revealed that the role of the SH3 domain is conserved at a functional but not a structural level, playing a major role in the assembly of an endocytosis network from yeast to worm. Moreover, the worm SH3 domains are additionally involved in metazoan-specific functions such as neurogenesis and vulval development. These results provide valuable insights into two important evolutionary processes in the transition from single-celled eukaryotes to animals: the functional expansion of the SH3 domains into new cellular modules, and the conservation and evolution of some cellular modules at the molecular level, particularly the endocytosis module.
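Smart-pooling schemes such as STD gain their power from layered, overlapping pool assignments: each layer partitions the clones into q pools with a different hash, so that any two clones share a pool in only a few layers and a true positive can be identified from the pattern of positive pools. The sketch below is a simplified illustration of that layering idea, not the published STD construction; the prime q, the number of layers, and the polynomial hash are assumptions chosen for illustration.

```python
from itertools import combinations

def layered_pools(n_items, q, n_layers):
    """Assign items 0..n_items-1 to q pools in each of several layers.

    Each layer uses a different polynomial hash of the items' base-q
    digits (q prime), so any two items land in the same pool in only a
    small number of layers; a positive item can then be recovered from
    the intersection of the positive pools across layers.
    """
    gamma = 1                      # number of base-q digits per item index
    while q ** gamma < n_items:
        gamma += 1

    def digits(i):
        return [(i // q ** c) % q for c in range(gamma)]

    layers = []
    for j in range(n_layers):
        pools = [[] for _ in range(q)]
        for item in range(n_items):
            h = sum(d * j ** c for c, d in enumerate(digits(item))) % q
            pools[h].append(item)
        layers.append(pools)
    return layers

layers = layered_pools(n_items=96, q=7, n_layers=4)
# How often does the worst-case pair of items share a pool?
worst = max(
    sum(any(a in pool and b in pool for pool in layer) for layer in layers)
    for a, b in combinations(range(96), 2)
)
print(f"{len(layers)} layers of {len(layers[0])} pools; worst pair shares {worst} pools")
```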
65

The Impact of Information and Communication Technology (ICT) on Health: A Cross-Country Study

Liu, Ping-Yu 09 July 2012 (has links)
This paper examines the impact of Information and Communication Technology (ICT) on health using data for 61 countries between 2000 and 2009 from the World Bank. The ICT variables considered in this paper are the internet, fixed telephones, and mobile phones. Based on the Millennium Development Goals (MDGs) of the United Nations, we select several health variables and examine the impact of ICT on them: life expectancy at birth, the infant mortality rate, the under-five mortality rate, the maternal mortality ratio, and the prevalence of HIV. The estimation strategies are the pooled OLS model, the fixed-effects model, and the random-effects model. The empirical results suggest that ICT indeed plays a significant role in improving the health level of a country. ICT effectively decreases infant and child mortality rates and also increases life expectancy. This finding supports the view of the United Nations (UN), the World Health Organization (WHO), the World Bank, and the International Telecommunication Union (ITU) that ICT has great potential to improve a country's health. The finding also confirms the arguments of several studies, including McNamara (2007) and Lucas (2008), that ICT can lead to a more effective health system. In addition, we find that fixed and mobile phones, which allow more immediate communication and offer greater flexibility, help decrease deaths due to acute diseases or emergencies, while the internet has a more profound impact on health as its use accumulates over time. Our results suggest that adopting and promoting ICT is an effective way for developing and less-developed countries to enhance the health of their populations. We also expect that ICT can help these countries meet at least part of the Millennium Development Goals.
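As a sketch of the estimation strategies, the snippet below fits a pooled OLS model and a fixed-effects model (via country dummies) on a synthetic panel; the variable names and data are hypothetical stand-ins for the paper's World Bank indicators, and a random-effects model would typically be fit with a dedicated panel package such as linearmodels.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical balanced panel: 20 countries observed over 2000-2009.
rng = np.random.default_rng(0)
rows = []
for c in [f"c{i}" for i in range(20)]:
    base = rng.uniform(55, 75)              # country-specific baseline health
    for year in range(2000, 2010):
        internet = rng.uniform(0, 80)       # users per 100 people
        mobile = rng.uniform(0, 120)        # subscriptions per 100 people
        life_exp = base + 0.03 * internet + 0.01 * mobile + rng.normal(0, 1)
        rows.append((c, year, internet, mobile, life_exp))
df = pd.DataFrame(rows, columns=["country", "year", "internet", "mobile", "life_exp"])

# Pooled OLS: treats all country-years as independent observations.
pooled = smf.ols("life_exp ~ internet + mobile", data=df).fit()

# Fixed effects via country dummies (LSDV): absorbs time-invariant
# country characteristics such as the baseline above.
fe = smf.ols("life_exp ~ internet + mobile + C(country)", data=df).fit()

print("pooled internet coefficient:       ", round(pooled.params["internet"], 3))
print("fixed-effects internet coefficient:", round(fe.params["internet"], 3))
```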
66

3D Patch-Based Machine Learning Systems for Alzheimer’s Disease Classification via 18F-FDG PET Analysis

January 2017 (has links)
Alzheimer’s disease (AD) is a chronic neurodegenerative disease that usually starts slowly and gets worse over time. It is the cause of 60% to 70% of cases of dementia. There is growing interest in identifying brain image biomarkers that help evaluate AD risk pre-symptomatically. High-dimensional non-linear pattern classification methods have been applied to structural magnetic resonance images (MRIs) and used to discriminate between clinical groups in the progression of Alzheimer’s disease. Using Fluorodeoxyglucose (FDG) positron emission tomography (PET) as the preferred imaging modality, this thesis develops two independent machine-learning-based patch analysis methods and uses them to perform six binary classification experiments across different AD diagnostic categories. Specifically, features were extracted and learned using dimensionality reduction and dictionary learning with sparse coding, by taking overlapping patches in and around the cerebral cortex and using them as features. Using AdaBoost as the classifier, both methods aim to use 18F-FDG PET as a biological marker in the early diagnosis of Alzheimer’s disease. Additionally, we investigate the contribution of additional subject-level features (ApoE3, ApoE4, and the Functional Activities Questionnaire (FAQ)) to classification. The experimental results on the Alzheimer’s Disease Neuroimaging Initiative (ADNI) dataset demonstrate the effectiveness of both proposed systems. The use of 18F-FDG PET may offer a new sensitive biomarker and enrich the brain imaging analysis toolset for studying the diagnosis and prognosis of AD. / Dissertation/Thesis / Thesis Defense Presentation / Masters Thesis Computer Science 2017
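To make the pipeline concrete, the sketch below strings together patch extraction, dictionary learning with sparse coding, feature pooling, and AdaBoost on synthetic 2-D slices; it is a toy stand-in for the thesis's 3-D cortical-patch features and ADNI data, and every array shape, parameter, and label here is an assumption.

```python
import numpy as np
from sklearn.feature_extraction.image import extract_patches_2d
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(0)

def patch_features(image, dico, patch_size=(8, 8), n_patches=200):
    """Sparse-code random patches of one image and max-pool the codes
    into a single fixed-length feature vector."""
    patches = extract_patches_2d(image, patch_size, max_patches=n_patches, random_state=0)
    codes = dico.transform(patches.reshape(len(patches), -1))
    return codes.max(axis=0)

# Hypothetical data: 2-D slices standing in for 3-D 18F-FDG PET volumes.
images = rng.normal(size=(60, 64, 64))
labels = rng.integers(0, 2, size=60)        # e.g. AD vs. cognitively normal

# Learn a patch dictionary from patches drawn across the training images.
train_patches = np.vstack([
    extract_patches_2d(img, (8, 8), max_patches=50, random_state=0).reshape(50, -1)
    for img in images[:40]
])
dico = MiniBatchDictionaryLearning(
    n_components=32, transform_algorithm="omp",
    transform_n_nonzero_coefs=5, random_state=0,
).fit(train_patches)

X = np.array([patch_features(img, dico) for img in images])
clf = AdaBoostClassifier(n_estimators=100, random_state=0).fit(X[:40], labels[:40])
print("held-out accuracy on synthetic data:", clf.score(X[40:], labels[40:]))
```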
67

Automated phoneme mapping for cross-language speech recognition

Sooful, Jayren Jugpal 11 January 2005 (has links)
This dissertation explores a unique automated approach to map one phoneme set to another, based on the acoustic distances between the individual phonemes. Although the focus of this investigation is on cross-language applications, this automated approach can be extended to same-language but different-database applications as well. The main goal of this investigation is to be able to use the data of a source language to train the initial acoustic models of a target language for which very little speech data may be available. To do this, an automatic technique for mapping the phonemes of the two data sets must be found. Using this technique, it would be possible to accelerate the development of a speech recognition system for a new language. Current research in the cross-language speech recognition field has focused on manual methods to map phonemes. This investigation considers an English-to-Afrikaans phoneme mapping, as well as an Afrikaans-to-English phoneme mapping; mapping between these languages has been done before, but using manual methods. To determine the best phoneme mapping, different acoustic distance measures are compared. The distance measures considered are the Kullback-Leibler measure, the Bhattacharyya distance metric, the Mahalanobis measure, the Euclidean measure, the L2 metric and the Jeffreys-Matusita distance. The distance measures are tested by comparing the cross-database recognition results obtained on phoneme models created from the TIMIT speech corpus and a locally-compiled South African SUN Speech database. By selecting the most appropriate distance measure, phonemes can be mapped automatically from the source language to the target language. The best distance measure for the mapping gives recognition rates comparable to a manual mapping process undertaken by a phonetic expert. This study also investigates the effect of the number of Gaussian mixture components on the mapping and on the speech recognition system’s performance. The results indicate that the recogniser’s performance increases up to a limit as the number of mixtures increases. In addition, this study explores the effect of excluding the Mel Frequency delta and acceleration cepstral coefficients. It is found that the inclusion of these temporal features helps improve the mapping and the recognition system’s phoneme recognition rate. Experiments are also carried out to determine the impact of the number of HMM recogniser states. It is found that single-state HMMs deliver the optimum cross-language phoneme recognition results. After the mapping has been done, speaker adaptation strategies are applied to the recognisers to improve their target-language performance. The models of a fully trained speech recogniser in a source language are adapted to target-language models using Maximum Likelihood Linear Regression (MLLR) followed by Maximum A Posteriori (MAP) techniques. Embedded Baum-Welch re-estimation (EBWR) is used to further adapt the models to the target language. These techniques result in a considerable improvement in the phoneme recognition rate. Although a combination of MLLR and MAP techniques has been used previously in speech adaptation studies, the combination of MLLR, MAP and EBWR in cross-language speech recognition is a unique contribution of this study. Finally, a data pooling technique is applied to build a new recogniser using the automatically mapped phonemes from the target language as well as the source language phonemes. 
This new recogniser demonstrates moderate bilingual phoneme recognition capabilities. The bilingual recogniser is then further adapted to the target language using MAP and embedded Baum-Welch re-estimation techniques. This combination of adaptation techniques together with the data pooling strategy is uniquely applied in the field of cross-language recognition. The results obtained using this technique outperform all other techniques tested in terms of phoneme recognition rates, although it requires a considerably more time-consuming training process. It displays only slightly poorer phoneme recognition than recognisers trained and tested on the same-language database. / Dissertation (MEng (Computer Engineering))--University of Pretoria, 2006. / Electrical, Electronic and Computer Engineering / unrestricted
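As an illustration of how an acoustic distance drives the automated mapping, the sketch below computes the Bhattacharyya distance between single-Gaussian, diagonal-covariance phoneme models and maps each target phoneme to its nearest source phoneme. The feature values are invented, and the dissertation's models are Gaussian mixtures compared under several candidate measures, so treat this as a simplified sketch only.

```python
import numpy as np

def bhattacharyya_distance(mu1, var1, mu2, var2):
    """Bhattacharyya distance between two diagonal-covariance Gaussians,
    e.g. single-mixture phoneme models over cepstral features."""
    mu1, var1, mu2, var2 = map(np.asarray, (mu1, var1, mu2, var2))
    var_avg = 0.5 * (var1 + var2)
    term_mean = 0.125 * np.sum((mu1 - mu2) ** 2 / var_avg)
    term_cov = 0.5 * np.sum(np.log(var_avg / np.sqrt(var1 * var2)))
    return term_mean + term_cov

def map_phonemes(source_models, target_models):
    """Map each target-language phoneme to its acoustically closest
    source-language phoneme. Models are {phoneme: (mean, variance)}."""
    return {
        tgt: min(
            source_models,
            key=lambda src: bhattacharyya_distance(mu_t, var_t, *source_models[src]),
        )
        for tgt, (mu_t, var_t) in target_models.items()
    }

# Hypothetical 3-dimensional "MFCC" phoneme models.
source = {"eh": ([0.1, 0.3, -0.2], [1.0, 0.8, 1.2]),
          "ah": ([0.9, -0.1, 0.4], [1.1, 0.9, 1.0])}
target = {"e":  ([0.2, 0.25, -0.1], [0.9, 0.7, 1.3])}
print(map_phonemes(source, target))   # expected: {'e': 'eh'}
```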
68

Interpretable Fine-Grained Visual Categorization

Guo, Pei 16 June 2021 (has links)
Not all categories are created equal in object recognition. Fine-grained visual categorization (FGVC) is a branch of visual object recognition that aims to distinguish subordinate categories within a basic-level category. Examples include classifying an image of a bird into specific species like "Western Gull" or "California Gull". Such subordinate categories exhibit small inter-class variation and large intra-class variation, making them extremely difficult to distinguish. To address these challenges, an algorithm should be able to focus on object parts and be invariant to object pose. Like many other computer vision tasks, FGVC has witnessed phenomenal advancement following the resurgence of deep neural networks. However, the proposed deep models are usually treated as black boxes. Network interpretation and understanding aims to unveil the features learned by neural networks and explain the reasons behind network decisions. It is not only a necessary component for building trust between humans and algorithms, but also an essential step towards continuous improvement in this field. This dissertation is a collection of papers that contribute to FGVC and to neural network interpretation and understanding. Our first contribution is an algorithm named Pose and Appearance Integration for Recognizing Subcategories (PAIRS), which performs pose estimation and generates a unified object representation as the concatenation of pose-aligned region features. As the second contribution, we propose the task of semantic network interpretation. For filter interpretation, we represent the concepts a filter detects using an attribute probability density function. We propose the task of semantic attribution using textual summarization, which generates an explanatory sentence consisting of the most important visual attributes for decision-making, as found by a general Bayesian inference algorithm. Pooling has been a key component in convolutional neural networks and is of special interest in FGVC. Our third contribution is an empirical and experimental study towards a thorough yet intuitive understanding and an extensive benchmark of popular pooling approaches. Our fourth contribution is a novel LMPNet for weakly-supervised keypoint discovery. A novel leaky max pooling layer is proposed to explicitly encourage sparse feature maps to be learned, and a learnable clustering layer groups the keypoint proposals into final keypoint predictions. 2020 marks the 10th year since the beginning of fine-grained visual categorization, so it is of great importance to summarize the representative works in this domain. Our last contribution is a comprehensive survey of FGVC covering nearly 200 relevant papers across 7 common themes.
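Since pooling choices are central to the dissertation's benchmark, the sketch below shows how a backbone feature map can be collapsed with global average pooling, global max pooling, or a simple interpolation between the two. The "blend" variant is a toy illustration only and is not the LMPNet leaky max pooling layer, and all tensor shapes are hypothetical.

```python
import torch

def global_pool(feature_map, mode="avg", alpha=0.7):
    """Collapse a (batch, channels, H, W) feature map to (batch, channels).

    'avg' and 'max' are the two standard choices compared in FGVC pooling
    benchmarks; 'blend' interpolates between them and is illustrative only.
    """
    if mode == "avg":
        return feature_map.mean(dim=(2, 3))
    if mode == "max":
        return feature_map.amax(dim=(2, 3))
    if mode == "blend":
        return alpha * feature_map.amax(dim=(2, 3)) + (1 - alpha) * feature_map.mean(dim=(2, 3))
    raise ValueError(f"unknown pooling mode: {mode}")

# Hypothetical backbone output: 4 images, 512 channels, 14x14 spatial grid.
features = torch.randn(4, 512, 14, 14)
for mode in ("avg", "max", "blend"):
    pooled = global_pool(features, mode)
    print(mode, tuple(pooled.shape), round(float(pooled.mean()), 4))
```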
69

Národní park Šumava a zájmy municipalit na jeho území: případ postmateriálních cleavages? / Šumava National Park and Interests of its Municipalities: The Case of Postmaterial Cleavages?

Musilová, Karolína January 2014 (has links)
The diploma thesis Šumava National Park and Interests of its Municipalities: The Case of Postmaterial Cleavages? analyses whether a postmaterial cleavage is present in the case of Šumava National Park and how it is projected onto the national level. Since the park's very beginning, there have been disputes about its purpose, in which two different sets of ideas collide: on the one hand, local municipalities that depend on income from tourism; on the other, environmentalists who seek to expand the areas left without any human intervention. The theoretical background is based on the concept of cleavages by S. Rokkan and S. M. Lipset. Special attention is paid to the postmaterial dimension of the cleavage, which was developed by R. Inglehart. The analytical part consists of a case study examining the attitudes of the stakeholders, based on semi-structured interviews with mayors, official documents, and media coverage of the topic. According to the findings, there is a cleavage that is visible at all levels of the political system, including political parties and presidents.
70

Cash pooling jako nástroj efektivního řízení hotovosti podniku / Cash Pooling as an Effective Liquidity Management Tool

Polák, David January 2009 (has links)
No description available.
