  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
871

An Analysis of Clause Usage in Academic Texts Produced by African American, Haitian, and Hispanic Community College Students

Brooks, Wendy B. 24 June 2010 (has links)
The growth of multicultural and multilingual student populations in community colleges has presented difficulties for instructors who teach academic writing. This study was motivated by the desire to understand the challenges faced by novice writers from diverse ethnolinguistic backgrounds (African-American, Haitian, and Hispanic) in a South Florida community college as they grappled with the register features that define academic writing. One major challenge has been the tendency to transfer the clause structures typical of speech into academic texts. An analysis of clause structures in writing samples collected from 45 community-college students, 15 each from the African-American, Haitian, and Hispanic groups, showed the degree to which the students relied on speech-like patterns, using hypotactic and paratactic clauses instead of the main and embedded clauses characteristic of the written academic register. The study expands on previous research, which had focused on native versus nonnative English speakers (ESL) in English-language programs, by including African-American students who speak African American Vernacular English (AAVE) and therefore use English as a second dialect (ESD), as well as Generation 1.5 students (Haitian and Hispanic) who have command of conversational English, came to the U.S. as first- or second-generation immigrants, and graduated from U.S. high schools, but lack the written academic skills to perform at the college level. A challenge faced by African-American AAVE speakers is that the dialect occurs predominantly in spoken discourse, and students may arrive at school without any exposure to written discourse in their home language.
On the other hand, many Generation 1.5 students such as Haitians and Hispanics speak native languages with standardized orthographies, and these students may arrive at school having been exposed to register features of written discourse in Haitian Creole (or French) and Spanish.
872

Turbo Bayesian Compressed Sensing

Yang, Depeng 01 August 2011 (has links)
Compressed sensing (CS) theory specifies a new signal acquisition approach that potentially allows signals to be acquired at a much lower data rate than the Nyquist sampling rate. In CS, the signal is not acquired directly but reconstructed from a few measurements. One of the key problems in CS is how to recover the original signal from measurements in the presence of noise. This dissertation addresses signal reconstruction problems in CS. First, a feedback structure and signal recovery algorithm, orthogonal pruning pursuit (OPP), is proposed to exploit prior knowledge when reconstructing the signal in the noise-free setting. To handle noise, a noise-aware signal reconstruction algorithm based on Bayesian Compressed Sensing (BCS) is developed. Moreover, a novel Turbo Bayesian Compressed Sensing (TBCS) algorithm is developed for joint signal reconstruction that exploits both spatial and temporal redundancy. The TBCS algorithm is then applied to a UWB positioning system, achieving mm-level accuracy with low-sampling-rate ADCs. Finally, hardware implementation of BCS signal reconstruction on FPGAs and GPUs is investigated, focusing on parallel Cholesky decomposition, a key computational kernel of BCS. Simulation results in software and hardware demonstrate that OPP and TBCS outperform previous approaches, with UWB positioning accuracy improved by 12.8x. The accelerated computation helps enable real-time application of this work.
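The posterior-mean computation at the heart of BCS reduces to a symmetric positive-definite linear solve, which is why parallel Cholesky decomposition is the kernel worth accelerating. A minimal sketch of that step, with illustrative hyperparameter names, not the dissertation's TBCS implementation:

```python
import numpy as np

def bcs_style_solve(Phi, y, alpha=1.0, beta=1.0):
    """Core linear-algebra step of a Bayesian CS reconstruction: the
    Gaussian posterior mean mu solves (alpha*I + beta*Phi^T Phi) mu =
    beta*Phi^T y, computed here via a Cholesky factorization."""
    n = Phi.shape[1]
    A = alpha * np.eye(n) + beta * Phi.T @ Phi   # posterior precision (SPD)
    L = np.linalg.cholesky(A)                    # A = L L^T
    b = beta * Phi.T @ y
    z = np.linalg.solve(L, b)                    # forward substitution
    mu = np.linalg.solve(L.T, z)                 # back substitution
    return mu
```

The two triangular solves are what an FPGA or GPU implementation parallelises instead of forming `A^{-1}` explicitly.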
873

Design and Implementation of a Signal Conditioning Operational Amplifier for a Reflective Object Sensor

Master, Ankit 01 December 2010 (has links)
Industrial systems often require the acquisition of real-world analog signals. Physical phenomena such as displacement, pressure, temperature, and light intensity are measured by sensors, a type of transducer, and converted into corresponding electrical signals. The signal obtained from the sensor, usually a few tens of millivolts in magnitude, is subsequently conditioned by means of amplification, filtering, range matching, and isolation so that it can be used for further processing and data extraction. This thesis presents the design and implementation of a general-purpose op amp used to condition a reflective object sensor's output. The op amp is used in a non-inverting configuration as a current-to-voltage converter that transforms a phototransistor current into a usable voltage. The op amp has been implemented in CMOS and fabricated in the AMI 0.5-µm CMOS process available through MOSIS. The thesis begins with an overview of the various op-amp circuits used in signal conditioning. Owing to the vast number of applications for sensor signal conditioning circuits, a brief discussion of an industrial sensor circuit is also included. This is followed by the complete design of the op amp and its implementation in the data acquisition circuit. The op amp is then characterized using simulation results. Finally, the test setup and measurement results are presented. The thesis concludes with an overview of possible future work on the sensor/op-amp data acquisition circuit.
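As a rough illustration of the scales involved, a sensor signal of a few tens of millivolts must be brought into a usable range. The sketch below models a hypothetical non-inverting conditioning stage; the sense-resistor/gain model and all component values are assumptions for illustration, not the fabricated design:

```python
def sensor_output_voltage(i_photo, r_sense, r_f, r_g):
    """Hypothetical conditioning chain: the phototransistor current develops
    a voltage across a sense resistor, and a non-inverting op-amp stage then
    scales it by the ideal gain (1 + Rf/Rg)."""
    v_sense = i_photo * r_sense        # current-to-voltage conversion
    gain = 1.0 + r_f / r_g             # ideal non-inverting gain
    return v_sense * gain

# Illustrative numbers: 50 µA through 1 kΩ gives 50 mV, amplified 10x to 0.5 V.
v_out = sensor_output_voltage(50e-6, 1e3, 9e3, 1e3)  # → 0.5
```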
874

A Fully Integrated High-Temperature, High-Voltage, BCD-on-SOI Voltage Regulator

McCue, Benjamin Matthew 01 May 2010 (has links)
Developments in the automotive (particularly hybrid electric vehicle), aerospace, and energy production industries in recent years have led to expanding research interest in integrated circuit (IC) design for high-temperature applications. A high-voltage, high-temperature SOI process allows circuit design to expand into these extreme-environment applications. Nearly all electronic devices require a reliable supply voltage capable of operating under various input voltages and load currents, which can be either DC or time-varying. In this work, a stable supply voltage for embedded circuit functions is generated on chip via a voltage regulator producing a stable 5-V output. Although its applications are not limited to gate driver circuits, this regulator was developed to meet the demands of a gate driver IC. The voltage regulator must provide a reliable output over an input range from 10 V to 30 V, a temperature range of −50 °C to 200 °C, and output loads from 0 mA to 200 mA. Additionally, low-power stand-by operation is provided to help reduce heat generation and thus lower the operating junction temperature. The regulator is based on the LM723 Zener-reference voltage regulator, which allows stable performance over temperature, provided the temperature compensation scheme is properly designed. This circuit topology and the SOI process allow reliable operation under all application demands. The designed voltage regulator has been successfully tested from −50 °C to 200 °C while demonstrating an output voltage variation of less than 25 mV over the full range of input voltage. Line regulation tests from 10 V to 35 V show a 3.7-ppm/V supply sensitivity. With a high-temperature ceramic output capacitor, a 0 to 220 mA load current pulse with 5-ns edges and 1-µs width induced only a 55-mV drop in the regulator output voltage.
In the targeted application, load current pulse widths will be much shorter, further improving the load transient performance. Full temperature and input voltage range tests show that the no-load supply current stays within 330 µA while the regulator can still deliver more than 200 mA of load current on demand.
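The 3.7-ppm/V figure is a line-regulation metric: the fractional change in output voltage per volt of input-voltage change. A small helper showing how such a number is computed (the input values below are illustrative, not the measured data):

```python
def supply_sensitivity_ppm_per_v(v_out_nom, delta_v_out, v_in_low, v_in_high):
    """Line regulation expressed as supply sensitivity in ppm/V:
    fractional output-voltage change per volt of input-voltage change."""
    return (delta_v_out / v_out_nom) / (v_in_high - v_in_low) * 1e6

# Example: a 0.5 mV output shift on a 5 V output over a 10-35 V input sweep
# corresponds to 4 ppm/V, the same order as the reported 3.7 ppm/V.
s = supply_sensitivity_ppm_per_v(5.0, 0.5e-3, 10.0, 35.0)  # → 4.0
```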
875

An FPGA Based Implementation of the Exact Stochastic Simulation Algorithm

Vanguri, Phani Bharadwaj 01 December 2010 (has links)
Mathematical and statistical modeling of biological systems has been a goal for many years. Biochemical models are often evaluated using a deterministic approach, which uses differential equations to describe the chemical interactions. However, such an approach is inaccurate for small species populations, as it neglects the discrete nature of population values, admits the possibility of negative populations, and does not represent the stochastic nature of biochemical systems. The Stochastic Simulation Algorithm (SSA) developed by Gillespie properly accounts for these inherent noise fluctuations. Due to the stochastic nature of the Monte Carlo simulations, large numbers of runs are needed to obtain accurate statistics for the species populations and reactions. The algorithm, however, tends to be computationally heavy and leads to long runtimes for large systems. This thesis therefore explores implementing the SSA on a Field Programmable Gate Array (FPGA) to improve performance. An FPGA implementation exploits the parallelism present in the SSA, providing speedup over software implementations that execute sequentially. In contrast to prior work that requires re-construction and re-synthesis of the design to simulate a new biochemical system, this work explores the use of reconfigurable hardware to implement a generic biochemical simulator.
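The direct method of Gillespie's SSA, which the thesis parallelises in hardware, fits in a few lines of software; a generic toy simulator for context, not the FPGA design:

```python
import math
import random

def gillespie_ssa(x0, stoich, propensity, t_end, seed=0):
    """Gillespie direct-method SSA: draw an exponential waiting time from
    the total propensity, pick a reaction proportionally to its propensity,
    and apply its stoichiometric state change, until t_end or exhaustion."""
    rng = random.Random(seed)
    t, x = 0.0, list(x0)
    while t < t_end:
        a = propensity(x)
        a0 = sum(a)
        if a0 == 0.0:          # no reaction can fire: system is exhausted
            break
        t += -math.log(1.0 - rng.random()) / a0   # exponential waiting time
        u, r = rng.random() * a0, 0
        while u > a[r]:        # roulette-wheel selection of the reaction
            u -= a[r]
            r += 1
        for i, d in enumerate(stoich[r]):         # apply state change
            x[i] += d
    return t, x
```

For a single decay reaction A → ∅ with unit rate, `gillespie_ssa([10], [[-1]], lambda x: [float(x[0])], 1e9)` runs until the population reaches zero; the Monte Carlo nature of the trajectory is why many runs, and hence hardware acceleration, are needed for good statistics.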
876

Medium access control and networking protocols for the intra-body network /

Stucki, Eric Thomas, January 2006 (has links) (PDF)
Thesis (M.S.)--Brigham Young University. Dept. of Electrical and Computer Engineering, 2006. / Includes bibliographical references (p. 225-229).
877

A domain-specific embedded language for probabilistic programming

Kollmansberger, Steven 19 December 2005 (has links)
Graduation date: 2006 / Functional programming is concerned with referential transparency: given a certain function and its parameter, the result will always be the same. This property seems to be violated in applications involving uncertainty, such as rolling a die. This thesis lays out the background of probabilistic programming and domain-specific languages, and builds on these ideas to construct a domain-specific embedded language (DSEL) for probabilistic programming in a purely functional language. The DSEL is then applied in a real-world setting to develop an application in use by the Center for Gene Research at Oregon State University. The process and results of this development are discussed.
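The thesis builds its DSEL in a purely functional host language; the core idea, a distribution as a monad over (value, probability) pairs, can be sketched in Python for illustration (the names and representation here are assumptions, not the thesis's API):

```python
def unit(x):
    """Point-mass distribution: x with probability 1."""
    return [(x, 1.0)]

def bind(dist, f):
    """Monadic bind: run the probabilistic continuation f after dist,
    multiplying path probabilities and merging equal outcomes."""
    out = {}
    for v, p in dist:
        for w, q in f(v):
            out[w] = out.get(w, 0.0) + p * q
    return sorted(out.items())

def uniform(values):
    """Uniform distribution over a finite list of values."""
    p = 1.0 / len(values)
    return [(v, p) for v in values]

# Sum of two fair coin flips. Referential transparency is preserved because
# the "random" computation denotes a whole distribution, not one sample.
two_flips = bind(uniform([0, 1]),
                 lambda a: bind(uniform([0, 1]),
                                lambda b: unit(a + b)))
# → [(0, 0.25), (1, 0.5), (2, 0.25)]
```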
878

Integrated Layout Design of Multi-component Systems

Zhu, Jihong 09 December 2008 (has links)
A new integrated layout optimization method is proposed for the design of multi-component systems. By introducing movable components into the design domain, the component layout and the supporting structural topology are optimized simultaneously. The developed design procedure consists of three main parts: (i) Introduction of non-overlap constraints between components. The Finite Circle Method (FCM) is used to avoid overlaps between components and between components and the design domain boundaries. It proceeds by approximating the geometries of the components and the design domain with sets of circles; distance constraints between the circles of different components are then imposed as non-overlap constraints. (ii) Layout optimization of the components and supporting structure. The locations and orientations of the components serve as geometrical design variables for optimal placement, while topology design variables of the supporting structure are defined by density points. Embedded meshing techniques are developed to account for the finite element mesh changes caused by component movements. Moreover, to meet the complex requirements of aerospace structural system design, design-dependent loads related to the inertial load or structural self-weight, and a design constraint on the position of the system's gravity center, are included in the problem formulation. (iii) A consistent material interpolation scheme between element stiffness and inertial load. The common SIMP material interpolation model is improved to avoid the singularity of localized deformation that arises with design-dependent loading when both the element stiffness and the associated inertial load are weakened by element material removal.
Finally, to validate the proposed design procedure, a variety of multi-component system layout design problems are solved, taking into account inertial loads and the gravity-center position constraint.
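The Finite Circle Method's non-overlap condition reduces to pairwise distance checks between the approximating circles; a minimal geometry sketch of that constraint form, not the authors' implementation:

```python
import math

def fcm_constraints(circles_a, circles_b):
    """Finite-Circle-Method style constraints: for each pair of circles
    (x, y, r) approximating two components, g = (r_a + r_b) - dist <= 0
    means the pair does not overlap. Returns one value per circle pair."""
    g = []
    for xa, ya, ra in circles_a:
        for xb, yb, rb in circles_b:
            d = math.hypot(xb - xa, yb - ya)
            g.append((ra + rb) - d)
    return g

# Two unit circles whose centres are 3 apart: constraint value -1 (satisfied).
# Centres only 1 apart: constraint value +1 (the circles overlap).
ok = fcm_constraints([(0.0, 0.0, 1.0)], [(3.0, 0.0, 1.0)])   # → [-1.0]
bad = fcm_constraints([(0.0, 0.0, 1.0)], [(1.0, 0.0, 1.0)])  # → [1.0]
```

An optimizer treats every entry of `g` as an inequality constraint `g <= 0`, which is what makes the circle approximation attractive: the constraints are smooth and cheap regardless of the components' true shapes.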
879

Multi-scale modelling of shell failure for periodic quasi-brittle materials

Mercatoris, Benoît C.N. 04 January 2010 (has links)
<p align="justify">In the context of restoring historical masonry structures, it is crucial to properly estimate the residual strength and the potential structural failure modes in order to assess the safety of buildings. Due to its mesostructure and the quasi-brittle nature of its constituents, masonry presents preferential damage orientations, strongly localised failure modes, and damage-induced anisotropy, which are complex to incorporate in structural computations. Furthermore, masonry structures are generally subjected to complex loading processes, including both in-plane and out-of-plane loads, which considerably influence the potential failure mechanisms. As a consequence, both the membrane and the flexural behaviour of masonry walls have to be taken into account for a proper estimation of structural stability.</p> <p align="justify">Macroscopic models used in structural computations are based on phenomenological laws with a set of parameters that characterises the average behaviour of the material. These parameters need to be identified through experimental tests, which can become costly given the complexity of the behaviour, particularly once cracks appear. Existing macroscopic models are consequently restricted to particular assumptions. Other models, based on a detailed mesoscopic description, are used to estimate the strength of masonry and its behaviour up to failure. This is motivated by the fact that the behaviour of each constituent is a priori easier to identify than the global structural response.
These mesoscopic models can, however, rapidly become unaffordable in terms of computational cost for large-scale three-dimensional structures.</p> <p align="justify">In order to keep the accuracy of the mesoscopic modelling at a more affordable computational cost for large-scale structures, a multi-scale framework using computational homogenisation is developed to extract the macroscopic constitutive material response from computations performed on a sample of the mesostructure, thereby bridging the gap between macroscopic and mesoscopic representations. Coarse-graining methodologies for the failure of quasi-brittle heterogeneous materials have started to emerge for in-plane problems but remain largely unexplored for shell descriptions. The purpose of this study is to propose a new periodic homogenisation-based multi-scale approach for quasi-brittle thin shell failure.</p> <p align="justify">For the numerical treatment of damage localisation at the structural scale, an embedded strong discontinuity approach is used to represent the collective behaviour of fine-scale cracks by means of average cohesive zones that include mixed cracking modes and whose evolving orientation is related to fine-scale damage evolution.</p> <p align="justify">A first originality of this research work is the definition and analysis of a criterion, based on the homogenisation of a fine-scale model, to detect localisation in a shell description and determine its evolving orientation. Secondly, an enhanced continuous-discontinuous scale transition incorporating strong embedded discontinuities driven by the damaging mesostructure is proposed for in-plane loaded structures. Finally, this continuous-discontinuous homogenisation scheme is extended to a shell description in order to model the localised behaviour of out-of-plane loaded structures.
These multi-scale approaches for failure are applied on typical masonry wall tests and verified against three-dimensional full fine-scale computations in which all the bricks and the joints are discretised.</p>
880

Model extraction for system-on-chip design

Le Tallec, Jean-François 25 January 2012 (has links) (PDF)
System-on-chip design often relies on SystemC/C++, which allows architectural and behavioural descriptions at several levels of abstraction. Other approaches focus on automating the assembly of so-called virtual platforms (the IP-Xact format). Using model-driven engineering techniques is a more recent direction, with UML profiles such as MARTE. In this thesis, we study the modelling capabilities of these different approaches and the bridges available between them. Motivated by the availability of SystemC models and by the facilities offered by MARTE, we address the export of SystemC models. Beyond a simple format conversion, we describe the implementation of a bridge between the SystemC implementation of a design and its model counterpart in the IP-Xact format. The IP-Xact representation can then be further transformed into MARTE models by existing tools. We review related work before presenting our approach and its realization in the SCiPX (SystemC to IP-Xact) tool. In a second part, we present in more detail the capabilities offered by the UML-MARTE profile, its time model, and the temporal-constraint specification language CCSL. We address the problems raised by modelling protocols at different levels of abstraction, particularly those posed by refinement between the TLM and RTL levels. This study highlights shortcomings of CCSL in specifying priorities, and we propose an extension of CCSL that enables it to handle this notion of priority.
