61

Essays in computational economics

Pugh, David January 2014 (has links)
The focus of my PhD research has been on the acquisition of computational modeling and simulation methods used in both theoretical and applied economics. My first chapter provides an interactive review of finite-difference methods for solving systems of ordinary differential equations (ODEs) commonly encountered in economic applications, using Python. The methods surveyed in this chapter, together with the accompanying code and IPython lab notebooks, should be useful to any researcher applying finite-difference methods for ODEs to economic problems. My second chapter is an empirical analysis of the evolution of the distribution of bank size in the U.S. This paper assesses the statistical support for Zipf's Law (i.e., a power law, or Pareto, distribution with a scaling exponent of α = 2) as a model for the upper tail of the distribution of U.S. bank size. Using detailed balance sheet data for all FDIC-regulated banks for the years 1992 through 2011, I find significant departures from Zipf's Law for most measures of bank size in most years. Although Zipf's Law can be statistically rejected, a power law distribution with α of roughly 1.9 statistically outperforms other plausible heavy-tailed alternatives. In my final chapter, based on joint work with Dr. David Comerford, I apply computational methods to model the relationship between per capita income and city size. A well-known result from the urban economics literature is that a monopolistically competitive market structure combined with internal increasing returns to scale (IRS) can generate log-linear relations between income and population. I extend this theoretical framework to allow for a variable elasticity of substitution between factors of production, in a manner similar to Zhelobodko et al. (2012). Using data on Metropolitan Statistical Areas (MSAs) in the U.S., I find evidence supporting what Zhelobodko et al. (2012) call "increasing relative love for variety (RLV)." Increasing RLV generates pro-competitive effects as market size increases, which means that IRS, whilst important for small and medium-sized cities, are exhausted as cities become large. This has important policy implications: it suggests that focusing intervention on creating scale for small populations is potentially much more valuable than further investments to increase market size in the largest population centers.
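As a minimal sketch of the kind of finite-difference method the first chapter surveys, consider forward Euler applied to the Solow growth ODE. The model choice, parameter values, and function names below are illustrative assumptions, not code from the thesis:

```python
import numpy as np

# Forward-Euler sketch for an economic ODE: the Solow growth equation
# k'(t) = s*k**alpha - (n + g + delta)*k. The model, parameters, and
# function names are illustrative, not taken from the thesis.

def solow_rhs(k, s=0.15, alpha=0.33, n=0.01, g=0.02, delta=0.05):
    """Time derivative of capital per effective worker."""
    return s * k**alpha - (n + g + delta) * k

def forward_euler(f, k0, t0, t1, h):
    """Integrate k' = f(k) from t0 to t1 with fixed step size h."""
    ts = np.arange(t0, t1 + h, h)
    ks = np.empty_like(ts)
    ks[0] = k0
    for i in range(1, len(ts)):
        ks[i] = ks[i - 1] + h * f(ks[i - 1])  # k_{i+1} = k_i + h*f(k_i)
    return ts, ks

ts, ks = forward_euler(solow_rhs, k0=0.5, t0=0.0, t1=100.0, h=0.1)
print(f"k(100) ≈ {ks[-1]:.4f}")  # converges toward the steady state
```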
62

Analyzing multicellular interactions: A hybrid computational and biological pattern recognition approach

White, Douglas 27 May 2016 (has links)
Pluripotent embryonic stem cells (ESCs) can differentiate into all somatic cell types, making them a useful platform for studying a variety of cellular phenomena. Furthermore, ESCs can be induced to form aggregates called embryoid bodies (EBs), which recapitulate the dynamics of development and morphogenesis. However, many different factors, such as gradients of soluble morphogens, direct cell-to-cell signaling, and cell-matrix interactions, have all been implicated in directing ESC differentiation. Though the effects of individual factors have often been investigated independently, the inherent difficulty of assaying combinatorial effects has made it hard to ascertain the concerted effects of different environmental parameters, particularly given the spatial and temporal dynamics associated with such cues. Dynamic computational models of ESC differentiation can provide powerful insight into how different cues function in combination, both spatially and temporally. By combining particle-based diffusion models, cellular agent-based approaches, and physical models of morphogenesis, a multi-scale, rules-based modeling framework can provide insight into how each component contributes to differentiation. I propose to investigate the complex regulatory cues that govern morphogenic behavior in 3D ESC systems via a computational rules-based modeling approach. The objective of this study is to examine how spatial patterns of differentiation by ESCs arise as a function of the microenvironment. The central hypothesis is that spatial control of soluble morphogens and cell-cell signaling will allow enhanced control over the patterns and efficiency of stem cell differentiation in embryoid bodies.
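For intuition about what a rules-based, multi-scale model of this kind computes, here is a toy Python sketch in which grid cells differentiate in response to a diffusing morphogen and the states of their neighbors. The rules, thresholds, and parameters are invented for illustration and are not the thesis's actual framework:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy rules-based sketch of spatial patterning in a cell aggregate: grid
# cells differentiate in response to a diffusing morphogen and the states
# of their neighbors. Rules, thresholds, and parameters are invented for
# illustration and are not the thesis's actual framework.
N = 50
morphogen = np.zeros((N, N))
state = np.zeros((N, N), dtype=int)   # 0 = pluripotent, 1 = differentiated

for step in range(200):
    # crude diffusion: each site becomes the mean of its four neighbors
    morphogen = 0.25 * (np.roll(morphogen, 1, 0) + np.roll(morphogen, -1, 0)
                        + np.roll(morphogen, 1, 1) + np.roll(morphogen, -1, 1))
    morphogen[N // 2, N // 2] += 1.0  # a central source keeps secreting
    # cell-cell signaling: count differentiated nearest neighbors
    neighbors = sum(np.roll(state, s, a) for s in (1, -1) for a in (0, 1))
    # rule: differentiate with a probability set by local cues
    p = 0.1 * (morphogen > 0.05) + 0.2 * (neighbors >= 2)
    state |= (rng.random((N, N)) < p).astype(int)

print(f"differentiated fraction: {state.mean():.2f}")
```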
63

3D thermal-electrochemical lithium-ion battery computational modeling

Gerver, Rachel Ellen August 2009 (has links)
This thesis presents a modeling framework for simulating three-dimensional effects in lithium-ion batteries. Such effects are particularly important for understanding the performance of large-scale batteries used under high-power conditions, as in hybrid electric vehicle applications. While 1D approximations may be sufficient for the smaller batteries used in cell phones and laptops, they are severely limited when scaled up to larger batteries, where significant 3D gradients can develop in concentration, current, temperature, and voltage. Understanding these 3D effects is critical for designing lithium-ion batteries for improved safety and long-term durability, as well as for conducting effective design optimization studies. The model couples an electrochemical battery model with a thermal model to understand how thermal effects influence electrochemical behavior and to determine temperature distributions throughout the battery. Several example results are presented, including thermal influences on current distribution, design optimization of current collector thickness and current collector tab placement, and investigation of lithium plating risk in three dimensions.
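As a rough illustration of the thermal-electrochemical coupling described above, the following lumped (0D) Python sketch lets Joule heating raise cell temperature while an Arrhenius-type law feeds temperature back into internal resistance. All parameter values are illustrative assumptions, and the thesis resolves these effects in 3D rather than as a single lump:

```python
import numpy as np

# Lumped (0D) sketch of thermal-electrochemical coupling: Joule heating
# raises cell temperature, and an Arrhenius-type law feeds temperature back
# into internal resistance. All values are illustrative assumptions; the
# thesis resolves these effects in 3D rather than as a single lump.
R_ref, T_ref, Ea = 0.02, 298.15, 2.0e4   # ohm, K, J/mol
m_cp, h_A = 800.0, 0.5                   # heat capacity (J/K), loss coeff. (W/K)
R_gas = 8.314                            # J/(mol*K)
I, T_amb, dt = 50.0, 298.15, 1.0         # current (A), ambient (K), step (s)

T = T_amb
for step in range(3600):                 # one hour of constant-current draw
    R_int = R_ref * np.exp((Ea / R_gas) * (1.0 / T - 1.0 / T_ref))
    q_gen = I**2 * R_int                 # Joule heating (W)
    q_loss = h_A * (T - T_amb)           # convective loss (W)
    T += dt * (q_gen - q_loss) / m_cp    # explicit thermal update

print(f"temperature ≈ {T - 273.15:.1f} °C, R_int ≈ {R_int * 1e3:.2f} mΩ")
```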
64

Gradience in grammar : experimental and computational aspects of degrees of grammaticality

Keller, Frank January 2001 (has links)
This thesis deals with gradience in grammar, i.e., with the fact that some linguistic structures are not fully acceptable or unacceptable, but receive gradient linguistic judgments. The importance of gradient data for linguistic theory has been recognized at least since Chomsky's Logical Structure of Linguistic Theory. However, systematic empirical studies of gradience are largely absent, and none of the major theoretical frameworks is designed to account for gradient data. The present thesis addresses both issues. In the experimental part of the thesis (Chapters 3-5), we present a set of magnitude estimation experiments investigating gradience in grammar. The experiments deal with unaccusativity/unergativity, extraction, binding, word order, and gapping. They cover all major modules of syntactic theory, and draw on data from three languages (English, German, and Greek). In the theoretical part of the thesis (Chapters 6 and 7), we use these experimental results to motivate a model of gradience in grammar. This model is a variant of Optimality Theory, and explains gradience in terms of the competition of ranked, violable linguistic constraints. The experimental studies in this thesis deliver two main results. First, they demonstrate that an experimental investigation of gradient phenomena can advance linguistic theory by uncovering acceptability distinctions that have gone unnoticed in the theoretical literature. An experimental approach can also settle data disputes that result from the informal data collection techniques typically employed in theoretical linguistics, which are not well suited to investigating the behavior of gradient linguistic data. Second, we identify a set of general properties of gradient data that seem to be valid for a wide range of syntactic phenomena and across languages: (a) linguistic constraints are ranked, in the sense that some constraint violations lead to a greater degree of unacceptability than others; (b) constraint violations are cumulative, i.e., the degree of unacceptability of a structure increases with the number of constraints it violates; (c) two constraint types can be distinguished experimentally: soft constraints lead to mild unacceptability when violated, while hard constraint violations trigger serious unacceptability; (d) the hard/soft distinction can be diagnosed by testing for effects of linguistic context, since context effects occur only for soft constraints, while hard constraints are immune to contextual variation; (e) the soft/hard distinction is crosslinguistically stable. In the theoretical part of the thesis, we develop a model of gradient grammaticality that borrows central concepts from Optimality Theory, a competition-based grammatical framework. We propose an extension, Linear Optimality Theory, motivated by our experimental results on constraint ranking and the cumulativity of violations. The core assumption of our model is that the relative grammaticality of a structure is determined by the weighted sum of the violations it incurs. We show that the parameters of the model (the constraint weights) can be estimated using the least-squares method, a standard model-fitting technique. Furthermore, we prove that standard Optimality Theory is a special case of Linear Optimality Theory. To test the validity of Linear Optimality Theory, we use it to model data from the experimental part of the thesis, including data on extraction, gapping, and word order.
For all data sets, a high model fit is obtained and it is demonstrated that the model's predictions generalize to unseen data. On a theoretical level, our modeling results show that certain properties of gradient data (the hard/soft distinction, context effects, and crosslinguistic effects) do not have to be stipulated, but follow from core assumptions of Linear Optimality Theory.
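The core assumption of Linear Optimality Theory, that relative grammaticality is a weighted sum of constraint violations with weights fit by least squares, can be sketched in a few lines of Python. The violation counts and judgment scores below are made up for illustration:

```python
import numpy as np

# Sketch of Linear Optimality Theory's core claim: relative grammaticality
# is the weighted sum of constraint violations, with weights estimated by
# least squares. Violation counts and judgment scores are made up here.
# rows = candidate structures, columns = violation counts per constraint
V = np.array([[0, 1, 0],
              [1, 0, 0],
              [1, 1, 0],
              [0, 0, 2],
              [2, 0, 1]], dtype=float)
y = np.array([-1.2, -0.4, -1.7, -3.9, -2.8])  # observed acceptability scores

w, *_ = np.linalg.lstsq(V, y, rcond=None)      # one weight per constraint
print("estimated weights:", np.round(w, 2))
print("predicted scores: ", np.round(V @ w, 2))  # weighted violation sums
```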
65

The Design and Validation of a Novel Computational Simulation of the Leg for the Investigation of Injury, Disease, and Surgical Treatment

Iaquinto, Joseph 05 May 2010 (has links)
Computational modeling of joints and their function, a developing field, is becoming a significant health and wellness tool of our modern age. Building on prior research focused on the lower extremity, a 3D computational model of the foot and ankle was created to explore the potential of these computational methods. Patient-specific anatomy was isolated from CT scans and rendered in the digital domain using MIMICS™, SolidWorks™, and COSMOSMotion™, all commercially available packages. The kinematics of the joints are driven solely by anatomically modeled soft tissue applied to articulating joint geometry. Soft tissues are based on realistic measurements of anatomical dimension and behavior. By restricting all model constraints to true-to-life anatomical approximations and recreating their behavior, the model uses inverse kinematics to predict the motion of the foot under various loading conditions. Extensive validation of the model's function was performed, including stability of the arch (under ligament deficiency) and joint behavior (under disease and repair). These simulations were compared to a multitude of studies, which confirmed the accuracy of soft-tissue strain, joint alignment, joint contact force, and plantar load distribution. This demonstrated the capability of the simulation technique both to qualitatively recreate trends seen experimentally and clinically and to quantitatively predict a variety of tissue and joint measures. The modeling technique has the further strength of combining measurements that are typically performed separately (experimental vs. clinical) to build a more holistic model of foot behavior, which has the potential to allow additional conclusions to be drawn about complications associated with repair techniques. This model was built to provide an example of how patient-specific bony geometry can be used as a research or surgical tool when considering a disease state or repair technique. The technique also allows for the repeated use of the same anatomy, which is not possible experimentally or clinically. These qualities, along with the accuracy demonstrated in validation, confirm the integrity of the technique and demonstrate its strengths.
66

COMPUTATIONAL MODELING OF MULTISENSORY PROCESSING USING NETWORK OF SPIKING NEURONS

Lim, Hun Ki 04 May 2011 (has links)
Multisensory processing in the brain underlies a wide variety of perceptual phenomena, but little is known about the underlying mechanisms of how multisensory neurons are generated and how they integrate sensory information from environmental events. This lack of knowledge is due to the difficulty of manipulating and testing the characteristics of multisensory processing in biological experiments. By using a computational model of multisensory processing, this research seeks to provide insight into its mechanisms. From a computational perspective, modeling brain functions involves not only the computational model itself but also the conceptual definition of the brain functions, the analysis of correspondence between the model and the brain, and the generation of new biologically plausible insights and hypotheses. In this research, multisensory processing is conceptually defined as the effect of multisensory convergence on the generation of multisensory neurons and their integrated response products, i.e., multisensory integration. The computational model is thus the implementation of multisensory convergence and the simulation of the neural processing acting upon that convergence. The most important step in the modeling is the analysis of how well the model represents the target brain function, which is closely related to validating the model. One intuitive and powerful way of validating the model is to apply methods standard in neuroscience to the results obtained from the model. In addition, statistical and graph-theoretical analyses are used to confirm the similarity between the model and the brain. This research takes both approaches to provide analyses from many different perspectives. Finally, the model and its simulations provide insight into multisensory processing, generating plausible hypotheses that will need to be confirmed by real experimentation.
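As a toy illustration of multisensory convergence in a spiking model, the following Python sketch drives a single leaky integrate-and-fire neuron with two Poisson input streams; the combined input typically evokes more spikes than either stream alone. Parameters and structure are illustrative assumptions, not the network model developed in the thesis:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy leaky integrate-and-fire neuron receiving two converging sensory
# streams (Poisson spike trains). Firing to the combined input typically
# exceeds firing to either stream alone. All parameters are illustrative
# assumptions, not the network model developed in the thesis.
def lif_spike_count(rate_a, rate_b, T=1.0, dt=1e-3,
                    tau=0.02, v_th=1.0, w=0.3):
    v, spikes = 0.0, 0
    for _ in range(int(T / dt)):
        # synaptic drive from the two input streams
        drive = w * (rng.poisson(rate_a * dt) + rng.poisson(rate_b * dt))
        v += dt * (-v / tau) + drive   # leaky integration
        if v >= v_th:                  # threshold crossing -> spike, reset
            spikes += 1
            v = 0.0
    return spikes

print("A alone:", lif_spike_count(100.0, 0.0))
print("B alone:", lif_spike_count(0.0, 100.0))
print("A + B:  ", lif_spike_count(100.0, 100.0))  # usually the largest
```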
67

Patient-Specific Modeling Of Adult Acquired Flatfoot Deformity Before And After Surgery

Spratley, Edward Meade 05 December 2013 (has links)
The use of computational modeling is an increasingly commonplace technique for the investigation of biomechanics in intact and pathological musculoskeletal systems. Moreover, given the robust and repeatable nature of computer simulation and the prevalence of software techniques for accurate 3-D reconstruction of tissues, the predictive power of these models has increased dramatically. However, there are no patient-specific kinematic models whose function is dictated solely by physiologic soft-tissue constraints and articular shape and contact, without idealized joint approximations. Moreover, very few models have attempted to predict surgical effects combined with postoperative validation of those predictions. Given this, it is not surprising that foot/ankle modeling has been especially underserved. We therefore chose to investigate the pre- and postoperative kinematics of Adult Acquired Flatfoot Deformity (AAFD) across a cohort of clinically diagnosed sufferers. AAFD was chosen because it is a chronic and degenerative disease in which degradation of the soft-tissue supporters of the medial arch eventually causes gross malalignment in the mid- and hindfoot, along with significant pain and dysfunction. Also, while planar radiographs are still used to diagnose and stage the disease, it is widely acknowledged that these 2-D measures fail to fully describe the 3-D nature of AAFD. Thus, a population of six patient-specific rigid-body computational models was developed using the commercially available software packages Mimics® and SolidWorks® to investigate foot function in patients diagnosed with Stage IIb AAFD. Each model was created from patient-specific sub-millimeter MRI scans and loaded with body weight, individualized muscle forces, and ligament forces in single-leg stance. The predicted model kinematics were validated pre- and postoperatively using clinically utilized radiographic angle and distance measures as well as plantar force distributions. The models were then further exploited to predict additional biomechanical parameters, such as articular contact force and soft-tissue strain, as well as the effects of hypothetical surgical interventions. Kinematic simulations demonstrated that the models accurately predicted foot/ankle motion in agreement with their respective patients. Additionally, changes in joint contact force and ligament strain observed across surgical states further elucidated the complex biomechanical underpinnings of foot and ankle function.
68

Regression Wavelet Analysis for Progressive-Lossy-to-Lossless Coding of Remote-Sensing Data

Amrani, Naoufal, Serra-Sagrista, Joan, Hernandez-Cabronero, Miguel, Marcellin, Michael 03 1900 (has links)
Regression Wavelet Analysis (RWA) is a novel wavelet-based scheme for coding hyperspectral images that employs multiple regression analysis to exploit the relationships among spectrally wavelet-transformed components. The scheme is based on pyramidal prediction, using different regression models, to increase statistical independence in the wavelet domain. For lossless coding, RWA has proven superior to other spectral transforms such as PCA and to the best and most recent coding standard in remote sensing, CCSDS-123.0. In this paper we show that RWA also allows progressive lossy-to-lossless (PLL) coding and that it attains rate-distortion performance superior to that obtained with state-of-the-art schemes. To take into account the predictive significance of the spectral components, we propose a Prediction Weighting scheme for JPEG2000 that captures the contribution of each transformed component to the prediction process.
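The regression idea at the heart of RWA can be sketched as follows: after a wavelet transform along the spectral axis, each detail component is predicted from the approximation components by least-squares regression, and only the residual would be coded. In the Python sketch below, a single Haar level and synthetic data stand in for the real transform and imagery; this is an illustrative assumption, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Sketch of the regression idea behind RWA: after a wavelet transform along
# the spectral axis, each detail component is predicted from the
# approximation components by multiple regression, and only the residual
# would be coded. One Haar level and synthetic data stand in for the real
# transform and imagery; this is not the authors' implementation.
bands, pixels = 8, 1000
base = rng.normal(size=pixels)                  # shared spectral structure
X = (np.outer(np.linspace(1.0, 2.0, bands), base)
     + 0.1 * rng.normal(size=(bands, pixels)))  # correlated synthetic bands

# one Haar level along the spectral axis
approx = (X[0::2] + X[1::2]) / np.sqrt(2)
detail = (X[0::2] - X[1::2]) / np.sqrt(2)

# predict each detail component from all approximation components
A = np.vstack([approx, np.ones((1, pixels))]).T   # design matrix + intercept
residuals = np.empty_like(detail)
for j in range(detail.shape[0]):
    coef, *_ = np.linalg.lstsq(A, detail[j], rcond=None)
    residuals[j] = detail[j] - A @ coef           # what would actually be coded

print(f"detail variance:   {detail.var():.4f}")
print(f"residual variance: {residuals.var():.4f}")  # smaller -> cheaper to code
```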
69

Computational Models of Nuclear Proliferation

Frankenstein, William 01 May 2016 (has links)
This thesis utilizes social influence theory and computational tools to examine the disparate impact of positive and negative ties on nuclear weapons proliferation. The thesis broadly comprises two sections: a simulation section, which focuses on government stakeholders, and a large-scale data analysis section, which focuses on the public and domestic actor stakeholders. The simulation section demonstrates that the nonproliferation norm is an emergent behavior of political alliance and hostility networks, and that alliances play a role in present-day nuclear proliferation. The model is robust and captures second-order effects of extended hostility and alliance relations. In the large-scale data analysis section, the thesis demonstrates the role that context plays in sentiment evaluation and highlights how Twitter collection can provide useful input to policy processes. It first presents the results of an on-campus study in which users demonstrated that context plays a role in sentiment assessment. Then, in an analysis of a Twitter dataset of over 7.5 million messages, it assesses the role of 'noise' and biases in online data collection. In a deep dive analyzing the Iranian nuclear agreement, we demonstrate that the Middle East is not facing a nuclear arms race and show that there is a structural hole in online discussion surrounding nuclear proliferation. By combining both approaches, policy analysts gain a complete and generalizable set of computational tools to assess and analyze disparate stakeholder roles in nuclear proliferation.
70

Structural and computational studies of the protein tyrosine phosphatases A and B of Mycobacterium tuberculosis

Rodrigues, Vanessa Kiraly Thomaz 27 October 2016 (has links)
Tuberculosis (TB) is a serious public health problem and the second leading cause of death among infectious diseases. In 2014, 9.6 million cases and approximately 1.5 million deaths were reported. The National Program for Tuberculosis Control recommends the simultaneous administration of four drugs as treatment for the disease. However, inadequate treatment favors the emergence of multidrug-resistant and extensively drug-resistant strains. Therefore, new molecular targets and drugs are urgently needed for the treatment of the infection. The protein tyrosine phosphatases (PTPs) are a large family of enzymes responsible for the hydrolysis of phosphate groups bound to tyrosine residues in proteins. The importance of these molecules lies in their regulation of a number of cellular functions, including growth, intercellular interaction, metabolism, transcription, motility, and immune response. Based on analysis of the Mycobacterium tuberculosis genome, two protein tyrosine phosphatases (PtpA and PtpB) were identified as responsible for mycobacterial survival in host macrophages. Both enzymes have been explored as molecular targets for the development of new drugs for TB. In this dissertation, the gene sequences encoding the PtpA and PtpB enzymes of M. tuberculosis were successfully cloned into expression vectors. Soluble expression of the proteins allowed the establishment of a standardized purification protocol. Crystallization assays were conducted, protein crystals were obtained, and crystallographic data were collected. We determined the high-resolution crystallographic structure of PtpB in complex with a phosphate group in the catalytic site. This structure was then used in the subsequent step of discovering new inhibitor candidates. The computational studies combined strategies for identifying interaction points relevant to molecular recognition and binding with the construction of 3D pharmacophore models specific to each enzyme. These data were used to select a set of 8 PtpA and 5 PtpB inhibitor candidates. Structural molecular biology and medicinal chemistry studies were thus successfully employed to establish a production platform for the selected targets and to select new inhibitor candidates.
