581

The Influence of School Discipline Approaches on Suspension Rates

Christy, Donna 01 January 2018 (has links)
A free and appropriate public education is promised to every child in the United States. However, zero tolerance school discipline policies have broken that promise, pushing students out of the classroom and into the school-to-prison pipeline. Despite the growing body of research demonstrating negative social and economic impacts of exclusionary discipline, public school administrators have been slow to adopt innovative policies that provide rehabilitative alternatives. The purpose of this study was to compare, using the consequences-of-innovations application of Rogers's diffusion of innovations theory, the impact of various school district approaches to school discipline on suspension rates while controlling for race and socioeconomic status. The study used a quantitative, nonexperimental, nonequivalent-groups, posttest-only research design based on secondary analysis of data reported by 218 school districts in a New England state for the 2016-17 school year. Analysis of covariance indicated a significant relationship between approaches to school discipline and suspension rates when controlling for racial and socioeconomic composition (p < .05). Race and economic disadvantage significantly influenced suspension rates (p < .001), and districts implementing alternatives differed significantly in their racial and socioeconomic compositions (p < .001). Policy implications include the promotion of alternative approaches to school discipline. Implications for social change include evidence supporting the work of those who address the needs underlying student behavior, rather than crime-and-punishment models, to produce safe and supportive schools and dismantle the school-to-prison pipeline.
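The comparison described above is an analysis of covariance on district-level data. As a minimal illustration of that kind of model — not the author's actual analysis; the data file and column names below are hypothetical — an equivalent ANCOVA can be fit in Python with statsmodels:

```python
# Hedged sketch: ANCOVA of suspension rate by discipline approach,
# controlling for district racial and socioeconomic composition.
# The CSV file and column names are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

districts = pd.read_csv("district_discipline.csv")  # hypothetical data file

# 'approach' is categorical (e.g., zero tolerance vs. restorative alternatives);
# 'pct_minority' and 'pct_low_income' are the covariates being controlled for.
model = smf.ols(
    "suspension_rate ~ C(approach) + pct_minority + pct_low_income",
    data=districts,
).fit()

print(anova_lm(model, typ=2))  # F-tests for the approach effect and the covariates
```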
582

Heterogeneous multi-pipeline application specific instruction-set processor design and implementation

Radhakrishnan, Swarnalatha, Computer Science & Engineering, Faculty of Engineering, UNSW January 2006 (has links)
Embedded systems are becoming ubiquitous, primarily due to the fast evolution of digital electronic devices. The design of modern embedded systems requires systems to exhibit high performance and reliability, yet have short design time and low cost. Application Specific Instruction-set Processors (ASIPs) are widely used in embedded systems since they are economical, flexible, and reusable (thus saving design time). During the last decade, research on ASIPs has been carried out mainly for single-pipeline processors. Performance can be improved by exploiting the parallelism available in the program, but designing multiple parallel execution paths naturally incurs additional cost. The methodology presented in this dissertation addresses the problem of improving performance in ASIPs at minimal additional cost. The devised methodology explores the available parallelism of an application to generate a multi-pipeline heterogeneous ASIP. The processor design is application specific; no pre-defined IPs are used. The generated processor contains multiple standalone pipelined data paths, which are not necessarily identical, connected by the necessary bypass paths and control signals. The control unit is separate for each pipeline (though driven by the same clock), resulting in a simple and cost-effective design. By using separate instruction and data memories (Harvard architecture) and by allowing memory access from two separate pipes, the complexity of the controller and buses is reduced. The impact of higher memory latencies is hidden by utilizing parallel pipes during memory access. Efficient bypass-network selection and encoding techniques provide a better implementation. The initial design approach, with only two pipelines and no bypass paths, shows speed improvements of up to 36% and switching activity reductions of up to 11%, at an additional area cost of around 16%. An improved design with more than two pipelines, chosen per application, shows an average performance improvement of 77%, with overheads of 49% on area, 51% on leakage power, 17% on switching activity, and 69% on code size. The design was further trimmed with bypass-path selection and encoding techniques, which save up to 32% of area and 34% of leakage power, with a 6% performance improvement and a 69% code-size reduction compared to the multi-pipeline design without these techniques.
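The gain from additional pipelines comes from scheduling independent operations of the application's dependence graph side by side. A minimal sketch of that idea, assuming unit-latency operations and a toy dependence graph (not the thesis's design flow):

```python
# Hedged sketch: greedy list scheduling of a dependence graph onto k pipelines.
# Unit latencies and the example graph are illustrative assumptions only.
from collections import defaultdict

def schedule(deps, n_ops, k):
    """deps: list of (producer, consumer) edges; returns (cycle of each op, length)."""
    preds = defaultdict(set)
    for p, c in deps:
        preds[c].add(p)
    cycle_of, done, cycle = {}, set(), 0
    while len(done) < n_ops:
        ready = [op for op in range(n_ops)
                 if op not in done and preds[op] <= done]
        issued = ready[:k]                 # at most k operations issue per cycle
        for op in issued:
            cycle_of[op] = cycle
        done |= set(issued)
        cycle += 1
    return cycle_of, cycle

edges = [(0, 2), (1, 2), (1, 3), (2, 4), (3, 4)]   # toy dependence graph
for k in (1, 2):
    _, length = schedule(edges, 5, k)
    print(f"{k} pipeline(s): {length} cycles")
```

With the toy graph above, a second pipeline shortens the schedule from 5 cycles to 3, which is the kind of speedup the methodology seeks to expose per application.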
583

La consommation en registres en présence de parallélisme d'instructions / Register consumption in the presence of instruction-level parallelism

TOUATI, Sid-Ahmed-Ali 25 June 2002 (has links) (PDF)
Today, it is a truism that memory is a performance bottleneck for programs. Compilers must therefore optimize programs to avoid going to memory, making the best possible use of the registers available in an instruction-level parallel (ILP) processor.

This thesis revisits the concept of register pressure, giving it higher priority than instruction scheduling without depriving the scheduler of its ability to extract parallelism. We propose to handle the register problem before the scheduling phase. Two main strategies are studied in detail. The first analyzes and manipulates a data dependence graph (DDG) to guarantee the register constraints without lengthening its critical path (when possible). We introduce the notion of register saturation, the exact maximal bound on the register need of any valid schedule, independent of architectural constraints. The goal is to add arcs to the DDG so that the saturation falls below the number of available registers. Conversely, register sufficiency is the minimal number of registers needed to produce at least one valid schedule for the DDG; if this sufficiency exceeds the actual number of registers, memory accesses are unavoidable. Our second strategy builds a register allocation directly in the DDG while minimizing the loss of intrinsic parallelism.

This thesis considers basic blocks, acyclic control-flow graphs, and inner loops intended for software pipelining. Our experiments show that our heuristics are nearly optimal. The study proves that we can, and should, handle register constraints before the scheduling phase while preserving freedom for the extraction and exploitation of ILP.
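The register need of a single schedule — the quantity whose maximum over all valid schedules is the register saturation defined above — can be computed directly from value lifetimes. A minimal sketch on a toy DDG with unit latencies (illustrative only, not the thesis's algorithms):

```python
# Hedged sketch: register need (MAXLIVE) of one given schedule of a DDG.
# Register saturation is the maximum of this quantity over all valid schedules;
# here we only evaluate a single schedule of a toy graph.
def register_need(schedule, consumers):
    """schedule: op -> issue cycle; consumers: producer -> list of consumer ops.
    A value is live from its producer's cycle until its last consumer's cycle."""
    lifetimes = []
    for prod, uses in consumers.items():
        if uses:
            lifetimes.append((schedule[prod], max(schedule[u] for u in uses)))
    max_live = 0
    for cycle in range(max(schedule.values()) + 1):
        live = sum(1 for start, end in lifetimes if start <= cycle < end)
        max_live = max(max_live, live)
    return max_live

consumers = {0: [2], 1: [2, 3], 2: [4], 3: [4], 4: []}   # toy DDG edges
one_schedule = {0: 0, 1: 0, 2: 1, 3: 1, 4: 2}            # one valid schedule
print("register need of this schedule:", register_need(one_schedule, consumers))
```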
584

Modèles et méthodes d'évaluation et de gestion des risques appliqués aux systèmes de transport de marchandises dangereuses (TMD), reposant sur les nouvelles technologies de l'information et de la communication (NTIC) / Models and methods for risk assessment and management applied to dangerous goods transport (DGT) systems, based on new information and communication technologies (ICT)

Tomasoni, Angela Maria 21 April 2010 (has links) (PDF)
During my doctoral work, I developed several models and methods for risk assessment in dangerous goods transport systems. Because of the multiplicity of risk assessment approaches, all the models described, defined, and used are based on the classical definition of technological risk — linked to human activity, in the category of accidental risks — here, an accident involving a vehicle transporting dangerous goods. This definition of risk is the same for pipelines as for road transport, but different methodological approaches to transport risk assessment can be taken. Chapter 2 gives a general definition of dangerous goods and the different types of hazardous materials considered; the study then focuses on hydrocarbons and the regulations that apply to them. Chapter 3 addresses the definition of risk in dangerous goods transport, for pipelines and for road transport respectively. Chapter 4 gives a complete description of the pipeline risk assessment methodology. Chapter 5 uses an innovative, technology-based model to describe an LPG road accident scenario and evaluate its impact on the affected population. Chapter 6 presents innovative models and methods for risk assessment and control of road DGT, based on a risk-averse decision-making approach. Chapter 7 develops an optimal control law for DGT and applies it to a critical infrastructure, specifically tunnels. Finally, Chapter 8 summarizes the results obtained during my thesis.
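The classical definition of technological risk referred to throughout combines the frequency (or probability) of an accident scenario with the severity of its consequences. A generic formulation of that convention, given here only as an illustration and not as the thesis's own model:

```latex
% Generic risk formulation commonly used for dangerous goods transport:
% risk along a route is the sum over accident scenarios i of
% (frequency of the scenario) times (its consequence, e.g. exposed population).
R \;=\; \sum_{i} f_i \, C_i
```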
585

Conception, Réalisation et Caractérisation de l'Electronique Intégrée de Lecture et de Codage des Signaux des Détecteurs de Particules Chargées à Pixels Actifs en Technologie CMOS / Design, implementation and characterization of the integrated readout and signal-encoding electronics for CMOS active pixel charged-particle detectors

Dahoumane, Mokrane 03 November 2009 (has links) (PDF)
Future large experiments exploring the fundamental laws of Nature (e.g. the ILC) require vertex detectors with very high spatial resolution and granularity, very low material budget, and radiation tolerance beyond the reach of current detection technologies. This observation is at the origin of the development of CMOS Active Pixel Sensors. The spatial resolution of the sensor is a key performance figure. It results from the spreading of the charge released by a charged particle traversing, and ionizing, the sensitive volume. Encoding the charge collected by each pixel relies on an ADC (analog-to-digital converter) that can be integrated on the same substrate hosting the sensitive volume of the sensor. This ADC must be accurate, compact, fast, and low power. The objective of this thesis was therefore to design an ADC meeting these conflicting requirements. First, several architectures of a sample-and-hold amplifier were studied to condition the small pixel signal, and an original architecture for this stage was designed. A pipeline ADC architecture was chosen. The basic 1.5-bit/stage configuration was implemented to validate the concept, since it minimizes the constraints on each stage. The architecture was then optimized by introducing double sampling, first in a 2.5-bit/stage configuration, which reduced area and power. Combining double sampling with the 1.5-bit/stage resolution brought a second improvement. A new ADC architecture adapted to the pixel control sequence was also proposed.
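The 1.5-bit/stage pipeline mentioned above has a simple, well-known transfer function: each stage resolves a digit in {-1, 0, +1} against thresholds at ±Vref/4 and passes an amplified residue to the next stage, and the digit redundancy is what relaxes comparator accuracy. A behavioural sketch with ideal stages and illustrative parameters (not this thesis's circuit):

```python
# Hedged sketch: ideal behavioural model of a 1.5-bit/stage pipeline ADC.
# Thresholds at +/- Vref/4, residue gain of 2; the redundant digits are
# resolved by the usual shift-and-add combination at the end.
def convert(vin, vref=1.0, n_stages=10):
    digits = []
    residue = vin
    for _ in range(n_stages):
        if residue > vref / 4:
            d = 1
        elif residue < -vref / 4:
            d = -1
        else:
            d = 0
        digits.append(d)
        residue = 2 * residue - d * vref   # amplified residue passed to next stage
    # Each stage's digit carries weight 2^-(i+1) of full scale.
    code = sum(d * 2.0 ** -(i + 1) for i, d in enumerate(digits))
    return code * vref

for v in (-0.8, -0.3, 0.0, 0.5, 0.9):
    print(f"input {v:+.2f} V  ->  reconstructed {convert(v):+.4f} V")
```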
586

Direktsamplande digital transceiver / Direct sampling digital transceiver

Karlsson, Magnus January 2002 (has links)
Master's thesis work at ITN (Department of Science and Technology) in the areas of A/D converter construction and RF circuit design. The main goal of the project was to investigate suitable ways of implementing direct conversion in transceivers operating in the 160 MHz band: a theoretical study followed by the development of components in the Cadence design environment. A suitable A/D converter and other key parts were selected at the end of the theoretical study. Subsampling was applied to make the A/D sampling requirements more realistic to achieve. Besides relaxing the requirements on the A/D converter, it allows a simpler construction, which saves more components than subsampling adds. Subsampling adds extra noise, so an A/D converter based on the RSD algorithm was chosen to improve the error rate. To achieve a high bit-processing rate relative to the number of transistors used, a pipeline structure was selected as the conversion method. The receiver received the most attention because it is the part that is most interesting to optimize: A/D conversion is more difficult to construct than D/A conversion, and there is more to gain from eliminating mixers in the receiver than in the transmitter.
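Subsampling works because a band-limited RF signal folds down to a predictable alias frequency when sampled below the Nyquist rate of its carrier. A small sketch of that frequency planning — the 160 MHz band comes from the text, but the sample rate chosen here is an illustrative assumption, not the thesis's value:

```python
# Hedged sketch: where a 160 MHz carrier lands after subsampling.
# The sample rate fs below is an illustrative choice, not the thesis's rate.
def alias_frequency(f_signal, fs):
    """Frequency at which f_signal appears in the first Nyquist zone [0, fs/2]."""
    f = f_signal % fs
    return fs - f if f > fs / 2 else f

f_rf = 160e6          # carrier band from the thesis
fs = 70e6             # assumed subsampling rate (illustrative)
print(f"160 MHz sampled at {fs/1e6:.0f} MS/s aliases to "
      f"{alias_frequency(f_rf, fs)/1e6:.1f} MHz")
```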
587

Low-power high-linearity digital-to-analog converters

Kuo, Ming-Hung 09 March 2012 (has links)
In this thesis work, the design of a 14-bit, 20 MS/s segmented digital-to-analog converter (DAC) is presented. The segmented DAC uses a switched-capacitor configuration to implement an 8 (LSB) + 6 (MSB) segmented architecture, achieving high performance in minimum area. The implemented LSB DAC is based on a quasi-passive pipelined DAC, which has been proven to provide low-power, high-speed operation. Capacitor matching is typically the best among all integrated-circuit components, but mismatch among nominally equal capacitors still introduces nonlinear distortion. By using a dynamic element matching (DEM) technique in the MSB DAC, the nonlinearity caused by capacitor mismatch is greatly reduced. The output buffer employs a direct charge transfer (DCT) technique that minimizes kT/C noise without increasing power dissipation. This segmented DAC is designed and simulated in 0.18 μm CMOS technology, and the simulated core DAC block consumes only 403 μW. / Graduation date: 2012
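Dynamic element matching reduces the effect of capacitor mismatch by varying which unit elements represent a given code, so mismatch errors average out instead of producing a fixed nonlinearity. The abstract does not state which DEM scheme is used; data-weighted averaging (DWA), a common choice, is sketched below purely as an illustration:

```python
# Hedged sketch: data-weighted averaging (DWA), one common DEM scheme.
# A rotating pointer selects which unit capacitors realize each input code,
# so every element is used equally often and mismatch error is first-order shaped.
# The element count and mismatch values below are illustrative assumptions.
def dwa_select(codes, n_elements):
    pointer = 0
    for code in codes:
        chosen = [(pointer + i) % n_elements for i in range(code)]
        pointer = (pointer + code) % n_elements
        yield chosen

elements = [1.0, 1.02, 0.99, 1.01, 0.98, 1.0, 1.03, 0.97]  # mismatched unit values
codes = [3, 5, 2, 6]
for code, chosen in zip(codes, dwa_select(codes, len(elements))):
    analog = sum(elements[i] for i in chosen)
    print(f"code {code}: elements {chosen} -> analog value {analog:.2f}")
```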
588

Design of low OSR, high precision analog-to-digital converters

Rajaee, Omid 30 December 2010 (has links)
Advances in electronic systems have led to demand for high-resolution, high-bandwidth analog-to-digital converters (ADCs). Oversampled ADCs are well known for high-accuracy applications since they benefit from noise shaping and usually do not need highly accurate components. However, as a consequence of oversampling, they have limited signal bandwidth. The signal bandwidth (BW) of oversampled ADCs can be increased either by increasing the sampling rate or by reducing the oversampling ratio (OSR). Reducing the OSR is the more promising method for increasing BW, since the sampling speed is usually limited by the technology. The advantageous properties of oversampled ADCs (e.g., low in-band quantization noise and relaxed component accuracy requirements) are usually diminished at low OSRs, and preserving them requires complicated, power-hungry architectures. In this thesis, different combinations of delta-sigma and pipelined ADCs are explored and new techniques for designing oversampled ADCs are proposed. A Hybrid Delta-Sigma/Pipelined (HDSP) ADC is presented. This ADC uses a pipelined ADC as the quantizer of a single-loop delta-sigma modulator and benefits from the aggressive quantization of the pipelined quantizer at low OSRs. A Noise-Shaped Pipelined ADC is proposed, which exploits a delta-sigma modulator as the sub-ADC of a pipeline stage to reduce sensitivity to analog imperfections. Three prototype ADCs were fabricated in 0.18 μm CMOS technology to verify the effectiveness of the proposed techniques. The performance of these architectures is among the best reported for high-bandwidth oversampled ADCs. / Graduation date: 2011
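The bandwidth/OSR trade-off described above follows from noise shaping: a delta-sigma loop pushes quantization noise out of band, and the residual in-band noise grows as the OSR shrinks, which is why low-OSR designs lean on an aggressive (e.g. pipelined) quantizer. A behavioural sketch of a first-order modulator with illustrative parameters (not the prototype architectures):

```python
# Hedged sketch: first-order delta-sigma modulator with a 1-bit quantizer,
# showing how in-band error grows as the oversampling ratio (OSR) drops.
# The test signal, OSR values, and decimation filter are illustrative assumptions.
import numpy as np

def delta_sigma_1st(x):
    integ, out = 0.0, []
    for sample in x:
        integ += sample - (out[-1] if out else 0.0)   # integrate the loop error
        out.append(1.0 if integ >= 0 else -1.0)       # 1-bit quantizer
    return np.array(out)

n = 1 << 14
for osr in (128, 16):
    fsig = 1.0 / (2.0 * osr * 8)                      # keep the tone well in band
    t = np.arange(n)
    x = 0.5 * np.sin(2 * np.pi * fsig * t)
    y = delta_sigma_1st(x)
    # crude in-band reconstruction: moving average over one OSR window
    recon = np.convolve(y, np.ones(osr) / osr, mode="same")
    print(f"OSR {osr:4d}: rms in-band error ~ {np.std(recon - x):.3f}")
```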
589

Stress corrosion cracking of X65 pipeline steel in fuel grade ethanol environments

Goodman, Lindsey R. 20 August 2012 (has links)
In recent years, the demand for alternatives to fossil fuels has risen dramatically, and ethanol has become an important liquid fuel alternative globally. The most efficient mode of transporting petroleum-based fuel is via pipelines, and given the 300% increase in ethanol use in the U.S. over the past decade, a similar method of conveyance must be adopted for ethanol. Low-carbon, low-alloy pipeline steels such as X52, X60, and X65 make up the existing fuel transmission pipeline infrastructure. However, similar carbon steels used in the ethanol processing and production industry were found to exhibit stress corrosion cracking (SCC) in ethanol service. Prior work has shown that contaminants absorbed by the ethanol during distillation, processing, or transport could determine SCC susceptibility; 200 proof ethanol alone was shown not to cause SCC in laboratory studies. To ensure the safety and integrity of the pipeline system, a mechanistic study of SCC of pipeline steel in fuel grade ethanol (FGE) was necessary. The objective of this work was to determine the environmental factors relating to SCC of X65 steel in FGE environments. To accomplish this, a systematic study tested the effects of FGE feedstock and of common contaminants and constituents such as water, chloride, dissolved oxygen, and organic acids on the SCC behavior of an X65 pipeline steel. Slow strain rate tests (SSRT) were employed to evaluate and compare specific constituents' effects on crack density, morphology, and severity of SCC of X65 in FGE. SCC did not occur in commercial FGE environments, regardless of the ethanol feedstock. In both FGE and simulated fuel grade ethanol (SFGE), SCC of carbon steel occurred at low water contents (below 5 vol%) when chloride was present above a specific threshold quantity; the Cl- threshold for SCC varied from 10 ppm in FGE to approximately 1 ppm in SFGE. SCC of carbon steel was inhibited when oxygen was removed from solution via N2 purge or when pHe was increased by addition of NaOH. During SSRT, in-situ electrochemical measurements showed a significant role of film rupture in the SCC mechanism. Analysis of repassivation kinetics in mechanical scratch tests revealed a large initial anodic dissolution current spike in SCC-causing environments, followed by repassivation indicated by current transient decay. In the deaerated environments repassivation did not occur, while in alkaline SFGE repassivation was significantly more rapid than in SCC-inducing SFGE. The composition and morphology of the passive film on X65 during static exposure tests were studied using X-ray photoelectron spectroscopy (XPS) and atomic force microscopy (AFM). Results showed stability of an air-formed native oxide under static immersion in neutral (pHe = 5.4) SFGE, and dissolution of the film when pHe was decreased to 4.3. XPS spectra indicated changes in film composition at high pHe (near 13) and in environments lacking sufficient water. In light of all results, a film-rupture anodic-dissolution mechanism is proposed in which local plastic strain facilitates local breakdown of the air-formed oxide film, causing iron to dissolve anodically. During crack propagation, anodic dissolution occurs at the crack tip while the crack walls repassivate, preserving crack geometry and the local stress concentration at the tip. It is also proposed that SCC can be mitigated by the use of alkaline inhibitors that speed repassivation and promote formation of a more protective Fe(OH)3 film.
590

Analysis of Pipeline Systems Under Harmonic Forces

Salahifar, Raydin 10 March 2011 (has links)
Starting with tensor calculus and the variational form of the Hamiltonian functional, a generalized theory is formulated for doubly curved thin shells. The formulation avoids geometric approximations commonly adopted in other formulations. The theory is then specialized for cylindrical and toroidal shells as special cases, both of interest in modeling the straight and elbow segments of pipeline systems. Since the treatment avoids geometric approximations, the cylindrical shell theory is believed to be more accurate than others reported in the literature. By adopting a set of consistent geometric approximations, the present theory is shown to revert to the well-known Flügge shell theory. Another set of consistent geometric approximations is shown to lead to the Donnell-Mushtari-Vlasov (DMV) theory. A general closed-form solution of the theory is developed for cylinders under general harmonic loads. The solution is then used to formulate a family of exact shape functions, which are subsequently used to formulate a super-convergent finite element. The formulation efficiently and accurately captures ovalization, warping, radial expansion, and other shell behavioural modes under general static or harmonic forces, either in phase or out of phase. Comparisons with shell solutions available in Abaqus demonstrate the validity of the formulation and the accuracy of its predictions. The generalized thin shell theory is then specialized for toroidal shells. Consistent sets of approximations lead to simplified theories for toroidal shells: the first set of approximations leads to a theory comparable to that of Sanders, while the second leads to a theory nearly identical to the DMV theory for toroidal shells. A closed-form solution is then obtained for the governing equation. Exact shape functions are then developed and subsequently used to formulate a finite element. Comparisons with Abaqus solutions show the validity of the formulation for short elbow segments under a variety of loading conditions. Because of their efficiency, the finite elements developed are particularly suited for the analysis of long pipeline systems.
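For a cylindrical shell under harmonic excitation, closed-form solutions of the kind developed here are conventionally written as an expansion in circumferential Fourier harmonics with a time-harmonic factor. Schematically, for the radial displacement (the notation below is assumed for illustration, not the thesis's own):

```latex
% Schematic circumferential/time-harmonic expansion of a shell displacement field
% for a cylinder with axial coordinate x and circumferential angle theta:
w(x,\theta,t) \;=\; \sum_{n=0}^{\infty} \big[\, w_n^{c}(x)\cos n\theta + w_n^{s}(x)\sin n\theta \,\big]\, e^{i\omega t}
```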
