  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world.
331

Protein Primary and Quaternary Structure Elucidation by Mass Spectrometry

Song, Yang 18 September 2015 (has links)
No description available.
332

Computational Studies on Multi-phasic Multi-component Complex Fluids

Boromand, Arman 07 February 2017 (has links)
No description available.
333

Spatial Distribution of Freshwater Mussels (Unionidae) in Ohio Brush Creek Watershed, Southern Ohio

Brown, Jason K. January 2010 (has links)
No description available.
334

Joint Automatic Gain Control and Receiver Design for Quantized Large-Scale MU-MIMO Systems

THIAGO ELIAS BITENCOURT CUNHA 27 September 2019 (has links)
The joint employment of Cloud Radio Access Networks (C-RANs) and large-scale multiple-input multiple-output (MIMO) systems is a key solution to fulfill the requirements of the fifth generation (5G) of wireless networks. However, several challenges remain, such as the high power consumption of large-scale MIMO systems, which employ a large number of analog-to-digital converters (ADCs), the capacity bottleneck of the fronthaul links, and the reduction of deployment and operating costs. Although it degrades system performance, low-resolution quantization is a possible solution to these problems, so techniques that improve the performance of coarsely quantized systems are needed. In mobile applications, the ADCs are usually preceded by an automatic gain control (AGC). The AGC shapes the received signal amplitude to fit the quantizer range so that the ADC resolution is used efficiently, which makes its optimization especially important. This thesis therefore presents a joint optimization of the AGC, which operates in the remote radio heads (RRHs), and a low-resolution-aware (LRA) linear receive filter based on the minimum mean square error (MMSE) criterion, which operates in the cloud unit (CU), for coarsely quantized large-scale MIMO systems with C-RAN. We develop linear and successive interference cancellation (SIC) receivers based on the proposed joint AGC and LRA-MMSE (AGC-LRA-MMSE) approach. An analysis of the achievable sum rates along with a computational complexity study is also carried out. Simulations show that the proposed design provides improved error rates and higher achievable rates than existing techniques.
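The AGC described in the abstract above shapes the received signal to fit the quantizer range before coarse analog-to-digital conversion. Below is a minimal, hedged sketch of that idea with an illustrative variance-based gain rule and a generic mid-rise quantizer; it is not the thesis's actual AGC-LRA-MMSE design, and all parameter choices are assumptions:

```python
import numpy as np

def uniform_quantizer(x, bits, v_max=1.0):
    """Mid-rise uniform quantizer with 2**bits levels over [-v_max, v_max]."""
    step = 2.0 * v_max / 2**bits
    clipped = np.clip(x, -v_max, v_max - 1e-12)
    return (np.floor(clipped / step) + 0.5) * step

def agc_gain(x, v_max=1.0, n_sigma=3.0):
    """Illustrative AGC rule: scale so n_sigma standard deviations span the range."""
    return v_max / (n_sigma * np.std(x))

rng = np.random.default_rng(0)
x = 5.0 * rng.standard_normal(10_000)     # received signal with arbitrary power

g = agc_gain(x)
xq = uniform_quantizer(g * x, bits=3)     # 3-bit "coarse" quantization

# Compare distortion with and without the AGC in front of the quantizer.
mse_agc = np.mean((g * x - xq) ** 2)
mse_raw = np.mean((x - uniform_quantizer(x, bits=3)) ** 2)
print(mse_agc, mse_raw)
```

Without the gain stage, the strong signal is clipped heavily at the quantizer limits; with it, most samples land inside the range and the distortion is dominated by the small granular error of the 3-bit grid.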
335

Nitrogen Cycling from Fall Applications of Biosolids to Winter Small Grains

Bamber, Kevin William 03 February 2015 (has links)
Environmental concerns about winter nitrogen (N) leaching loss limit the amount of biosolids applied to winter small grains in Virginia. Ten field studies were established from 2012 to 2014 in Virginia to determine the agronomic and environmental feasibility of fall biosolids applications to soft red winter wheat (Triticum aestivum L.). Eight studies were located in the Coastal Plain physiographic province and two in the Ridge and Valley physiographic province. The effects of eight biosolids and urea N treatments on 1) biomass production at Zadoks growth stage (GS) 25-30, 2) soil inorganic N at GS 25-30, 3) soil mineralizable N at GS 25-30, 4) N use efficiency (NUE) at GS 58, 5) grain yield, 6) end-of-season soil inorganic N, and 7) estimated N recovery were studied. Anaerobically digested (AD) and lime stabilized (LS) biosolids were fall applied at estimated plant available N (PAN) rates of 100 kg N ha⁻¹ and 50 kg N ha⁻¹. The 50 kg N ha⁻¹ biosolids treatments were supplemented with 50 kg N ha⁻¹ as urea in spring. Urea N was split applied at 0, 50, 100 and 150 kg N ha⁻¹, with 1/3 applied in fall and 2/3 in spring. Biomass at GS 25-30 increased with urea N rate, and biosolids always resulted in equal or greater biomass than urea. Soil mineralizable N at GS 25-30 rarely responded to fall urea or biosolids N rate, regardless of biosolids type. Biosolids and urea applied at the agronomic N rate resulted in equal grain yield and estimated N recovery in soils where N leaching loss risk was low, regardless of biosolids type or application strategy. Lime stabilized biosolids and biosolids/urea split N application increased grain yield and estimated N recovery in soils with high or moderate N leaching loss risk. Therefore, AD and LS biosolids can be fall-applied to winter wheat at the full agronomic N rate in soils with low N leaching loss risk, while LS biosolids could be applied to winter wheat at the full agronomic N rate in soils with moderate or high N leaching loss risk.
/ Master of Science
336

Geosynthetic Reinforced Soil: Numerical and Mathematical Analysis of Laboratory Triaxial Compression Tests

Santacruz Reyes, Karla 03 February 2017 (has links)
Geosynthetic reinforced soil (GRS) is a soil improvement technology in which closely spaced horizontal layers of geosynthetic are embedded in a soil mass to provide lateral support and increase strength. GRS has gained popularity through a relatively new application in bridge support, as well as its long-standing use in mechanically stabilized earth walls. Several different GRS design methods have been used, and some are application-specific and not based on fundamental principles of mechanics. Because consensus regarding the fundamental behavior of GRS is lacking, numerical and mathematical analyses were performed for laboratory tests, obtained from the published literature, of GRS under consolidated-drained triaxial compression. A three-dimensional numerical model was developed using FLAC3D. An existing constitutive model for the soil component was modified to incorporate confining pressure dependency of friction angle and dilation parameters, while retaining the constitutive model's ability to represent nonlinear stress-strain response and plastic yield. Procedures to obtain the parameter values from drained triaxial compression tests on soil specimens were developed. A method to estimate the parameter values from particle size distribution and relative compaction was also developed. The geosynthetic reinforcement was represented by two-dimensional orthotropic elements with soil-geosynthetic interfaces on each side. Comparisons between the numerical analyses and laboratory tests exhibited good agreement for strains from zero to 3% for tests with 1 to 3 layers of reinforcement. As failure is approached at larger strains, agreement was good for specimens that had 1 or 2 layers of reinforcement and soil friction angle less than 40 degrees.
For other conditions, the numerical model experienced convergence problems that could not be overcome by mesh refinement or reducing the applied loading rate; however, it appears that, if convergence problems can be solved, the numerical model may provide a mechanics-based representation of GRS behavior, at least for triaxial test conditions. Three mathematical theories of GRS failure available in published literature were applied to the laboratory triaxial tests. Comparisons between the theories and the tests results demonstrated that all three theories have important limitations. These numerical and mathematical evaluations of laboratory GRS tests provided a basis for recommending further research. / Ph. D.
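The constitutive modification described above makes the friction angle depend on confining pressure. A common form for that dependency in hyperbolic-type soil models is a logarithmic decrease with confining stress normalized by atmospheric pressure; the sketch below uses that form with illustrative parameter values, not the values fitted in the dissertation:

```python
import math

P_ATM = 101.325  # atmospheric pressure in kPa

def friction_angle(sigma3, phi0=42.0, delta_phi=6.0):
    """phi = phi0 - delta_phi * log10(sigma3 / p_atm), in degrees.

    phi0 is the friction angle at one atmosphere of confining pressure;
    delta_phi is the reduction per tenfold increase in confinement.
    Both values here are illustrative assumptions.
    """
    return phi0 - delta_phi * math.log10(sigma3 / P_ATM)

for sigma3 in (50.0, 100.0, 200.0, 400.0):
    print(f"sigma3 = {sigma3:5.0f} kPa -> phi = {friction_angle(sigma3):4.1f} deg")
```

The point of such a relation is that a single pair of parameters reproduces the observed strength across the range of confining pressures in a triaxial test series, rather than requiring a separate friction angle per test.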
337

Validation of Attitude Determination and Control System on Student CubeSat APTAS and Calibration of Coarse Sun Sensors

Jensen, Johannes January 2024 (has links)
In this thesis, a simulation harness is constructed in Simulink for the purpose of validating the Attitude Determination and Control System (ADCS) on the APTAS student CubeSat in support of the upcoming flight readiness review. The simulation results are used to verify the compliance of a subset of the requirements for the ADCS, detailed in Table 1. Calibration of the onboard sun sensor array, which is used to find the sun vector for attitude determination, is performed using a break-out board of sun sensors tested in a sun simulator. The data gathered from this test is used to model the sun sensor system in the simulation and in the flight software. The results show that the sun sensor system is able to find the sun vector with an average error angle of 5.4 degrees, though the error angle may spike up to 18 degrees in operation. It is found that the complete ADCS is able to guide the spacecraft toward the desired nadir-facing attitude, though not with the accuracy specified in the requirements. The spacecraft is able to detumble much better than required. All deficiencies found in the ADCS software have been corrected; these changes are listed in Appendix B. It is concluded that, despite its flaws, the ADCS software is flight ready. / Project APTAS
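The sun-vector determination described above rests on the cosine law commonly used for coarse sun sensors: each photodiode's output scales with the cosine of the angle between its face normal and the sun direction, so opposite-face differences recover the sun vector components. The following is a hypothetical sketch of that principle for a six-sensor orthogonal array, not the APTAS flight software; the noise level and gain are assumptions:

```python
import numpy as np

# Face normals of a six-sensor array on the +/- x, y, z faces.
normals = np.array([
    [ 1, 0, 0], [-1, 0, 0],
    [ 0, 1, 0], [ 0, -1, 0],
    [ 0, 0, 1], [ 0, 0, -1],
], dtype=float)

def measure(sun_vec, i_max=1.0, noise=0.0, rng=None):
    """Simulated currents: i = i_max * max(0, n . s), plus optional noise."""
    i = i_max * np.clip(normals @ sun_vec, 0.0, None)
    if noise and rng is not None:
        i = i + noise * rng.standard_normal(len(i))
    return i

def estimate_sun_vector(currents, i_max=1.0):
    """Opposite-face current differences give the sun vector components."""
    s = (currents[0::2] - currents[1::2]) / i_max
    return s / np.linalg.norm(s)

rng = np.random.default_rng(1)
s_true = np.array([0.6, 0.48, 0.64])          # already a unit vector

s_est = estimate_sun_vector(measure(s_true, noise=0.02, rng=rng))
err_deg = np.degrees(np.arccos(np.clip(s_est @ s_true, -1.0, 1.0)))
print(err_deg)
```

With a few percent of sensor noise the direction error lands in the single-digit-degree range, the same order as the average error angle reported above; albedo and field-of-view effects, which drive the larger in-orbit spikes, are not modeled here.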
338

Above and belowground biomass and net primary productivity in two subtropical mangrove forests in Japan

A., T. M. Zinnatul Bassar 25 March 2024 (has links)
Kyoto University / New-system doctoral program / Doctor of Agricultural Science / Kō No. 25328 / Nōhaku No. 2594 / 新制||農||1105 (University Library) / Division of Forest and Biomaterials Science, Graduate School of Agriculture, Kyoto University / (Chief Examiner) Associate Professor Masako Dannoura; Professor Kaoru Kitajima; Professor Daniel Epron / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Agricultural Science / Kyoto University / DFAM
339

Contribution to the development of methods for nuclear reactor core calculations with the APOLLO3 code: domain decomposition in transport theory for 2D and 3D geometries with nonlinear diffusion acceleration

Lenain, Roland 15 September 2015 (has links)
This thesis is devoted to the implementation of a domain decomposition method applied to the neutron transport equation. The objective of this work is to access high-fidelity deterministic solutions that properly handle the heterogeneities of nuclear reactor cores, for problem sizes ranging from colorsets of assemblies to large full-core configurations in 2D and 3D. The innovative algorithm developed during the thesis is designed to optimize the use of parallelism and memory, and to minimize the influence of the parallel implementation on performance. These goals match the needs of the APOLLO3 project, developed at CEA and supported by EDF and AREVA, which must be a portable code (with no optimization tied to a specific architecture) in order to achieve best-estimate modeling with resources ranging from personal computers to the compute clusters available for engineering analyses. The proposed algorithm is a Parallel Multigroup-Block Jacobi method. Each subdomain is treated as a multigroup fixed-source problem with volume sources (fission) and surface sources (the interface fluxes between subdomains). The multigroup problem is solved in each subdomain, and a single communication of the interface fluxes is required per power iteration.
The spectral radius of the resulting algorithm is made comparable to that of a classical solution algorithm by a well-known nonlinear diffusion acceleration method, Coarse Mesh Finite Difference (CMFD); in this way, ideal scalability is achievable when the calculation is parallelized. The memory organization, which takes advantage of shared-memory parallelism, optimizes resources by avoiding redundant copies of the data shared between subdomains. Distributed-memory architectures are made accessible by a hybrid parallel method that combines shared-memory and distributed-memory parallelism. For large problems, these architectures provide a greater number of processors and the amount of memory required for high-fidelity modeling. We have completed several modeling exercises to demonstrate the potential of the method: a 2D full-core calculation of a large pressurized water reactor and 3D colorsets of assemblies, taking into account the spatial and energy discretization constraints expected for high-fidelity modeling.
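The block-Jacobi pattern described above, independent subdomain solves with interface values frozen from the previous sweep and a single interface exchange per sweep, can be illustrated on a much simpler model problem. The sketch below applies it to a 1D Poisson equation; it is a generic illustration, not the APOLLO3 transport solver, and the CMFD acceleration is omitted:

```python
import numpy as np

n, n_sub = 60, 3                  # interior grid points, number of subdomains
h = 1.0 / (n + 1)
f = np.ones(n)                    # constant source for -u'' = f
u = np.zeros(n + 2)               # includes boundary values u[0] = u[-1] = 0
size = n // n_sub

def solve_local(rhs):
    """Direct solve of the local tridiagonal system A u = rhs (dense here)."""
    m = len(rhs)
    A = 2 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)
    return np.linalg.solve(A, rhs)

for sweep in range(800):          # Jacobi sweeps over the subdomains
    u_new = u.copy()
    for s in range(n_sub):
        lo, hi = 1 + s * size, 1 + (s + 1) * size
        rhs = h**2 * f[lo - 1:hi - 1]
        rhs[0] += u[lo - 1]       # left interface value from the old sweep
        rhs[-1] += u[hi]          # right interface value from the old sweep
        u_new[lo:hi] = solve_local(rhs)
    u = u_new                     # one "interface communication" per sweep

# Exact solution of -u'' = 1 with u(0) = u(1) = 0 is u(x) = x(1 - x)/2,
# and the second-order stencil is exact for quadratics.
x = np.linspace(0, 1, n + 2)
err = np.max(np.abs(u - x * (1 - x) / 2))
print(err)
```

The slow, geometric decay of the interface error across sweeps is exactly the degradation of the spectral radius that the abstract's CMFD acceleration exists to repair: a coarse correction restores a convergence rate comparable to the undecomposed solve.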
340

Connectionist modelling in cognitive science: an exposition and appraisal

Janeke, Hendrik Christiaan 28 February 2003 (has links)
This thesis explores the use of artificial neural networks for modelling cognitive processes. It presents an exposition of the neural network paradigm, and evaluates its viability in relation to the classical, symbolic approach in cognitive science. Classical researchers have approached the description of cognition by concentrating mainly on an abstract, algorithmic level of description in which the information processing properties of cognitive processes are emphasised. The approach is founded on seminal ideas about computation, and about algorithmic description emanating, amongst others, from the work of Alan Turing in mathematical logic. In contrast to the classical conception of cognition, neural network approaches are based on a form of neurocomputation in which the parallel distributed processing mechanisms of the brain are highlighted. Although neural networks are generally accepted to be more neurally plausible than their classical counterparts, some classical researchers have argued that these networks are best viewed as implementation models, and that they are therefore not of much relevance to cognitive researchers because information processing models of cognition can be developed independently of considerations about implementation in physical systems. In the thesis I argue that the descriptions of cognitive phenomena deriving from neural network modelling cannot simply be reduced to classical, symbolic theories. The distributed representational mechanisms underlying some neural network models have interesting properties such as similarity-based representation, content-based retrieval, and coarse coding which do not have straightforward equivalents in classical systems. Moreover, by placing emphasis on how cognitive processes are carried out by brain-like mechanisms, neural network research has not only yielded a new metaphor for conceptualising cognition, but also a new methodology for studying cognitive phenomena. 
Neural network simulations can be lesioned to study the effect of such damage on the behaviour of the system, and these systems can be used to study the adaptive mechanisms underlying learning processes. For these reasons, neural network modelling is best viewed as a significant theoretical orientation in the cognitive sciences, instead of just an implementational endeavour. / Psychology / D. Litt. et Phil. (Psychology)
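The lesioning methodology mentioned above can be illustrated with a toy network: train it, silence a subset of its hidden units, and measure how behaviour degrades. Everything below (task, architecture, sizes) is an arbitrary sketch of the procedure, not a model from the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((2000, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)     # simple separable task

# One hidden layer of tanh units with a logistic output, trained by
# batch gradient descent on the cross-entropy loss.
W1 = 0.5 * rng.standard_normal((2, 16))
b1 = np.zeros(16)
W2 = 0.5 * rng.standard_normal((16, 1))
b2 = np.zeros(1)

def forward(X, mask=None):
    h = np.tanh(X @ W1 + b1)
    if mask is not None:
        h = h * mask                           # "lesion": silence hidden units
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
    return p, h

for step in range(2000):
    p, h = forward(X)
    g = (p - y[:, None]) / len(X)              # d(loss)/d(logit)
    gh = (g @ W2.T) * (1.0 - h**2)             # backprop through tanh
    W2 -= h.T @ g
    b2 -= g.sum(axis=0)
    W1 -= X.T @ gh
    b1 -= gh.sum(axis=0)

def accuracy(mask=None):
    p, _ = forward(X, mask)
    return float(np.mean((p[:, 0] > 0.5) == (y > 0.5)))

mask = np.ones(16)
mask[rng.choice(16, 4, replace=False)] = 0.0   # lesion 4 of 16 hidden units

acc_full, acc_lesioned = accuracy(), accuracy(mask)
print(acc_full, acc_lesioned)
```

Because the task's solution is distributed over many hidden units, removing a quarter of them typically degrades performance gracefully rather than abolishing it, which is the property that makes lesion studies of such models informative.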
