581 |
[en] GEOMETRIC MAGNETIC DISCRIMINATOR SENSOR FOR SMART PIGS / [pt] SENSOR GEOMÉTRICO MAGNÉTICO DISCRIMINADOR PARA PIGS INSTRUMENTADOS. Vinicius de Carvalho Lima, 05 January 2005 (has links)
[pt] Este trabalho apresenta o desenvolvimento de um sensor inovador combinando três técnicas de inspeção utilizadas nos Pigs Instrumentados Geométrico e Magnético, para a detecção e caracterização de defeitos na geometria de tubulações de aço. O sensor GMD, Geométrico Magnético Discriminador, faz a leitura magnética do duto através da técnica de campo de fuga magnético, com a adição da leitura geométrica e da discriminação entre defeitos internos e externos. A combinação dessas três tecnologias habilita a construção de uma ferramenta de inspeção de alta resolução, compacta e capaz de identificar e quantificar, com apenas uma coroa de sensores, amassamentos, perdas de espessura e a sua combinação. Este estudo se apresenta em um momento oportuno, já que a integração de dados é o ponto fundamental da recente norma de gerenciamento de riscos em dutos, API 1160, na qual, combinando os resultados das inspeções de geometria e corrosão, tem-se uma melhor avaliação de risco. Testes foram realizados utilizando um PIG Plano com corpos de prova contendo defeitos variados. Os resultados verificaram que o sensor GMD quantifica e discrimina amassamentos com perda de espessura. Aspectos técnicos do desenvolvimento, como os detalhes construtivos do sensor, testes de avaliação e resultados de laboratório, são apresentados. / [en] This thesis presents the development of an innovative sensor head for the detection and characterization of geometric defects in steel pipes that combines three inspection techniques usually employed separately in Caliper and Magnetic Flux Leakage (MFL) PIGs. The novel Geometric Magnetic Discriminator (GMD) sensor performs high-resolution magnetic pipeline readings using MFL, with the addition of internal pipe geometry evaluation and discrimination between internal and external defects. The combination of these technologies in a single sensor facilitates characterization of dents and corrosion, while at the same time optimizing the PIG set-up. According to the repair criteria in the standard API 1160, combined defects such as a dent with metal loss, which in the past could only be detected by combining data from two different runs (MFL + Caliper), must be repaired immediately. The GMD sensor was tested in a linear test rig, known as a Flat Pig, and data were taken from different defect sets. Evaluation tests demonstrated that the GMD sensor sizes and discriminates a dent with metal loss. Technical aspects of the development, e.g. the construction details of the sensor, evaluation tests and laboratory results, are presented.
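The defect taxonomy described above can be sketched as a simple fusion of the three channels. Everything below is a hypothetical illustration: the thresholds, parameter names, and decision logic are invented, not taken from the thesis.

```python
# Hypothetical sketch of how a combined caliper + MFL + discriminator reading
# might be classified. Thresholds, field names, and the decision logic are
# invented for illustration; they are not taken from the thesis.

def classify_reading(caliper_mm, mfl_amplitude, internal_flag,
                     dent_threshold_mm=2.0, mfl_threshold=0.15):
    """Classify one sensor-ring reading.

    caliper_mm    -- radial displacement seen by the geometric channel
    mfl_amplitude -- normalized magnetic flux leakage peak (0..1)
    internal_flag -- True if the discriminator senses an internal (ID) defect
    """
    dent = caliper_mm >= dent_threshold_mm       # geometric channel fires
    metal_loss = mfl_amplitude >= mfl_threshold  # magnetic channel fires
    if dent and metal_loss:
        kind = "dent with metal loss"            # combined defect: immediate repair per API 1160
    elif dent:
        kind = "dent"
    elif metal_loss:
        kind = "metal loss"
    else:
        kind = "no defect"
    surface = "internal" if internal_flag else "external"
    return kind, surface
```

A dent with metal loss on the outer wall would come back as `("dent with metal loss", "external")`, the combined classification that single-technique tools cannot report from one run.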
|
582 |
Multi-Fidelity Model Predictive Control of Upstream Energy Production Processes. Eaton, Ammon Nephi, 01 June 2017 (has links)
Increasing worldwide demand for petroleum motivates greater efficiency, safety, and environmental responsibility in upstream oil and gas processes. The objective of this research is to improve these areas with advanced control methods. This work develops the integration of optimal control methods including model predictive control, moving horizon estimation, high fidelity simulators, and switched control techniques applied to subsea riser slugging and managed pressure drilling. A subsea riser slugging model predictive controller eliminates persistent offset and decreases settling time by 5% compared to a traditional PID controller. A sensitivity analysis shows the effect of riser base pressure sensor location on controller response. A review of current crude oil pipeline wax deposition prevention, monitoring, and remediation techniques is given. Also, industrially relevant control model parameter estimation techniques are reviewed and heuristics are developed for gain and time constant estimates for single input/single output systems. The analysis indicates that overestimated controller gain and underestimated controller time constant leads to better controller performance under model parameter uncertainty. An online method for giving statistical significance to control model parameter estimates is presented. Additionally, basic and advanced switched model predictive control schemes are presented. Both algorithms use control models of varying fidelity: a high fidelity process model, a reduced order nonlinear model, and a linear empirical model. The basic switched structure introduces a method for bumpless switching between control models in a predetermined switching order. The advanced switched controller builds on the basic controller; however, instead of a predetermined switching sequence, the advanced algorithm uses the linear empirical controller when possible. 
When controller performance becomes unacceptable, the algorithm implements the low order model to control the process while the high fidelity model generates simulated data which is used to estimate the empirical model parameters. Once this online model identification process is complete, the controller reinstates the empirical model to control the process. This control framework allows the more accurate, yet computationally expensive, predictive capabilities of the high fidelity simulator to be incorporated into the locally accurate linear empirical model while still maintaining convergence guarantees.
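Tuning heuristics like the one above can be probed with a toy simulation. The sketch below tunes a PI controller from a deliberately misestimated first-order model and measures settling time on the "true" plant; the plant, the IMC-style tuning rule, and all numbers are illustrative assumptions, not the dissertation's models.

```python
# Toy experiment: PI controller tuned from a misestimated first-order model
# y' = (-y + K*u)/tau, then run against the true plant (K=2, tau=10).
# Tuning rule and all numbers are invented for illustration.

def settle_time(K_model, tau_model, K_true=2.0, tau_true=10.0,
                dt=0.01, t_end=200.0, tol=0.02):
    """Time at which the unit-step error enters and stays inside +/- tol."""
    lam = tau_model / 2.0                 # assumed closed-loop time constant
    kc = tau_model / (K_model * lam)      # IMC-style PI gain
    ti = tau_model                        # integral time = model time constant
    y, integ, sp = 0.0, 0.0, 1.0
    settled_since = None
    for i in range(int(t_end / dt)):
        e = sp - y
        integ += e * dt
        u = kc * (e + integ / ti)                  # PI control law
        y += dt * (-y + K_true * u) / tau_true     # Euler step of true plant
        if abs(e) <= tol:
            if settled_since is None:
                settled_since = i * dt
        else:
            settled_since = None                   # left the band; reset
    return settled_since

# Two opposite estimation errors of the model parameters:
t_a = settle_time(K_model=3.0, tau_model=5.0)    # gain high, time constant low
t_b = settle_time(K_model=1.0, tau_model=20.0)   # gain low, time constant high
```

Running both cases side by side is exactly the kind of comparison the heuristics summarize; here the first misestimation settles faster than the second under these invented numbers.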
|
583 |
Equivalence Checking for High-Assurance Behavioral Synthesis. Hao, Kecheng, 10 June 2013 (has links)
The rapidly increasing complexities of hardware designs are forcing design methodologies and tools to move to the Electronic System Level (ESL), a higher abstraction level with better productivity than the state-of-the-art Register Transfer Level (RTL). Behavioral synthesis, which automatically synthesizes ESL behavioral specifications to RTL implementations, plays a central role in this transition. However, since behavioral synthesis is a complex and error-prone translation process, the lack of designers' confidence in its correctness becomes a major barrier to its wide adoption. Therefore, techniques for establishing equivalence between an ESL specification and its synthesized RTL implementation are critical to bring behavioral synthesis into practice.
The major research challenge to equivalence checking for behavioral synthesis is the significant semantic gap between ESL and RTL. The semantics of ESL involve untimed, sequential execution; however, the semantics of RTL involve timed, concurrent execution. We propose a sequential equivalence checking (SEC) framework for certifying a behavioral synthesis flow, which exploits information on successive intermediate design representations produced by the synthesis flow to bridge the semantic gap. In particular, the intermediate design representation after scheduling and pipelining transformations permits effective correspondence of internal operations between this design representation and the synthesized RTL implementation, enabling scalable, compositional equivalence checking. Certifications of loop and function pipelining transformations are possible by a combination of theorem proving and SEC through exploiting pipeline generation information from the synthesis flow (e.g., the iteration interval of a generated pipeline). The complexity brought by bubbles in function pipelines is creatively reduced by symbolically encoding all possible bubble insertions in one pipelined design representation. The result of this dissertation is a robust, practical, and scalable framework for certifying RTL designs synthesized from ESL specifications. We have validated the robustness, practicality, and scalability of our approach on industrial-scale ESL designs that result in tens of thousands of lines of RTL implementations.
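The untimed-versus-timed gap can be made concrete with a toy model. This is not the dissertation's SEC framework, just a minimal illustration: an untimed spec function is compared against a two-stage pipelined, cycle-based model by offsetting outputs by the pipeline latency.

```python
# Toy illustration of the ESL/RTL semantic gap: an untimed spec is checked
# against a timed, 2-stage pipelined model by simulation. Not the
# dissertation's framework, which certifies this formally.

def spec(a, b):
    # Untimed, sequential ESL-style specification
    return (a + b) * 2

class PipelinedImpl:
    """Cycle-based RTL-style model: stage 1 adds, stage 2 doubles."""
    LATENCY = 2

    def __init__(self):
        self.s1 = None  # register after stage 1
        self.s2 = None  # register after stage 2

    def clock(self, a, b):
        out = self.s2                                   # value leaving the pipe
        self.s2 = None if self.s1 is None else self.s1 * 2
        self.s1 = a + b                                 # new input enters stage 1
        return out

def sequential_equivalence(inputs):
    """Drive both models with the same stream; compare after the latency."""
    impl = PipelinedImpl()
    outs = [impl.clock(a, b)
            for a, b in inputs + [(0, 0)] * PipelinedImpl.LATENCY]  # flush
    return all(spec(a, b) == o
               for (a, b), o in zip(inputs, outs[PipelinedImpl.LATENCY:]))

ok = sequential_equivalence([(1, 2), (3, 4), (0, 0), (7, -5)])
```

Simulation like this only samples behaviors; the point of the SEC framework above is to establish the correspondence for all inputs, compositionally.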
|
584 |
Capitalist Reproduction in Schooling: The social control of marginalized students through zero tolerance policies. Wickline, Mary K, 01 January 2019 (has links)
Due to increasing media focus, there has been growing concern that U.S. students and the school environment are increasingly violent, leading the public to believe that school discipline should become more strict and punitive (Giroux 2003; Schept, Wall, & Brisman 2014). However, scholars argue that there is little evidence that current practices of school discipline have made the school environment safer; instead, they have criminalized the school and disproportionately target students of color and disabled students (Beger 2002; Civil Rights Project 2000; Gregory, Skiba, & Noguera 2010; Hirschfield 2008; McNeal & Dunbar 2010; U.S. Government Accountability Office 2018). The expansion of zero-tolerance policies and the surveillance culture in schools have played a large role in the creation of the school-to-prison pipeline, in which students are increasingly suspended and expelled from school and brought into contact with the juvenile justice system. This research explores how zero-tolerance policies function as a neoliberal social control mechanism over students who are seen to have "no market value and [are] identified as flawed consumers because of their associations with crime and poverty, redundancy and expendability" (Sellers & Arrigo 2018, p. 66). Zero-tolerance policies are the latest manifestation of the capitalist reconstitution of educational institutions, through curricula, student conduct codes, disciplinary procedures, and the hidden curriculum, constructed in the language of capitalism and disproportionately targeting students of color (Bowles & Gintis 2011). A series of OLS regression analyses was conducted to analyze how community partners and school resource officer involvement affect the rates of suspension, expulsion, and combined school disciplinary measures, using data from the 2005-06 School Survey on Crime and Safety. The analyses found that community partners and school resource officers have both positive and negative relationships with disciplinary rates. This research further substantiates that racial and ethnic minority students receive disproportionate rates of discipline.
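The OLS machinery behind analyses like these can be sketched in a few lines. The one-predictor fit below uses the normal equations with invented numbers; it is not SSOCS data and omits the controls the study actually used.

```python
# Minimal one-predictor ordinary least squares via the normal equations.
# Data are invented for illustration; the study's analyses used SSOCS data
# with multiple predictors and controls.

def ols(xs, ys):
    """Return (intercept, slope) of the least-squares line y = b0 + b1*x."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)                    # variance term
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))  # covariance term
    slope = sxy / sxx
    return my - slope * mx, slope

# Invented example: suspension rate vs. number of school resource officers
sro = [0, 1, 2, 3, 4]
susp_rate = [4.0, 5.1, 6.2, 6.8, 8.1]
b0, b1 = ols(sro, susp_rate)
```

Here the fitted slope is about 0.99 suspensions per additional officer, the kind of coefficient (with standard errors and controls) the reported regressions estimate.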
|
585 |
The Influence of School Discipline Approaches on Suspension Rates. Christy, Donna, 01 January 2018 (has links)
A free and appropriate public education is promised to every child in the United States. However, zero tolerance school discipline policies have broken that promise, pushing students out of the classroom and into the school-to-prison pipeline. Despite the growing body of research demonstrating negative social and economic impacts of exclusionary discipline, public school administrators have been slow to adopt innovative policies that provide rehabilitative alternatives. The purpose of this study was to compare, applying the consequences-of-innovations component of Rogers's diffusion of innovations theory, the impact of various school district approaches to school discipline on suspension rates while controlling for race and socioeconomic status. This study used a quantitative, nonexperimental, nonequivalent-groups, posttest-only research design based on secondary analysis of data reported by 218 school districts in a New England state for the 2016-17 school year. Analysis of covariance indicated a significant relationship between approaches to school discipline and suspension rates when controlling for racial and socioeconomic composition (p < .05). Race and economic disadvantage significantly influenced suspension rates (p < .001), and districts implementing alternatives differed significantly in their racial and socioeconomic compositions (p < .001). Policy implications include the promotion of alternative approaches to school discipline. Implications for social change include evidence to support the work of those addressing the needs underlying student behavior, rather than crime-and-punishment models, to produce safe and supportive schools and dismantle the school-to-prison pipeline.
|
586 |
Heterogeneous multi-pipeline application specific instruction-set processor design and implementation. Radhakrishnan, Swarnalatha, Computer Science & Engineering, Faculty of Engineering, UNSW, January 2006 (has links)
Embedded systems are becoming ubiquitous, primarily due to the fast evolution of digital electronic devices. The design of modern embedded systems requires systems to exhibit high performance and reliability, yet have short design time and low cost. Application Specific Instruction-set Processors (ASIPs) are widely used in embedded systems since they are economical to use, flexible, and reusable (thus saving design time). During the last decade, research on ASIPs has been carried out mainly for single-pipeline processors. Improving processor performance is possible by exploiting the parallelism available in the program, but designing multiple parallel execution paths naturally incurs additional cost. The methodology presented in this dissertation addresses the problem of improving performance in ASIPs at minimal additional cost. The devised methodology explores the available parallelism of an application to generate a multi-pipeline heterogeneous ASIP. The processor design is application specific; no pre-defined IPs are used in the design. The generated processor contains multiple standalone pipelined data paths, which are not necessarily identical, connected by the necessary bypass paths and control signals. Control units are separate for each pipeline (though sharing the same clock), resulting in a simple and cost-effective design. By using separate instruction and data memories (Harvard architecture) and by allowing memory access by two separate pipes, the complexity of the controller and buses is reduced. The impact of higher memory latencies is nullified by utilizing parallel pipes during memory access. Efficient bypass network selection and encoding techniques provide a better implementation. The initial design approach, with only two pipelines and no bypass paths, shows speed improvements of up to 36% and switching activity reductions of up to 11%, at an additional area cost of around 16%. An improved design with more than two pipelines, chosen per application, shows an average performance improvement of 77%, with overheads of 49% on area, 51% on leakage power, 17% on switching activity, and 69% on code size. The design was further trimmed with bypass path selection and encoding techniques, which save up to 32% of area and 34% of leakage power, with a 6% performance improvement and a 69% code size reduction compared to the multi-pipeline design without these techniques.
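Where multi-pipeline speedup comes from can be shown with a toy scheduler. This is not the thesis tool chain: below, a greedy list scheduler issues operations of an invented data dependence graph on one versus two pipelines, with unit latencies and free bypassing assumed.

```python
# Greedy list scheduling of a small data dependence graph on n_pipes parallel
# pipelines. Unit latencies and free result bypassing are simplifying
# assumptions; the DAG is invented for illustration.

def list_schedule(deps, n_ops, n_pipes):
    """deps maps op -> set of predecessor ops. Returns makespan in cycles."""
    done, cycles = set(), 0
    while len(done) < n_ops:
        # ops whose predecessors have all completed are ready to issue
        ready = [op for op in range(n_ops)
                 if op not in done and deps.get(op, set()) <= done]
        done |= set(ready[:n_pipes])   # issue up to n_pipes ops this cycle
        cycles += 1
    return cycles

# 0, 1 independent; 2 uses 0; 3 uses 1; 4 uses 2 and 3
deps = {2: {0}, 3: {1}, 4: {2, 3}}
one_pipe = list_schedule(deps, 5, 1)    # 5 cycles: strictly sequential
two_pipes = list_schedule(deps, 5, 2)   # 3 cycles: (0,1), (2,3), (4)
```

The two-pipe schedule finishes in 3 cycles instead of 5, a 1.67x speedup on this DAG; the real gains and overheads reported above depend on the application's actual parallelism.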
|
587 |
La consommation en registres en présence de parallélisme d'instructions / Register consumption in the presence of instruction-level parallelism. TOUATI, Sid-Ahmed-Ali, 25 June 2002 (has links) (PDF)
Today it is a truism that memory is a performance bottleneck for programs. Compilers must therefore optimize programs to avoid resorting to memory, making the best use of the registers available in instruction-level parallel (ILP) processors. This thesis revisits the concept of register pressure, giving it higher priority than instruction scheduling without depriving the scheduler of its ability to extract parallelism. We propose to handle the register problem before the scheduling phase. Two main strategies are studied in detail. The first analyzes and manipulates a data dependence graph (DDG) to guarantee the register constraints without lengthening its critical path (when possible). We introduce the notion of register saturation, the exact maximal bound on the register need of any valid schedule, independent of architectural constraints. Its goal is to add arcs to the DDG so that the saturation falls below the number of available registers. Conversely, register sufficiency is the minimal number of registers needed to produce at least one valid schedule for the DDG. If this sufficiency exceeds the actual number of registers, memory accesses are unavoidable. Our second strategy builds a register allocation directly into the DDG while minimizing the loss of intrinsic parallelism. This thesis considers basic blocks, acyclic control-flow graphs, and inner loops intended for software pipelining. Our experiments show that our heuristics are nearly optimal. The study proves that we can and should handle register constraints before the scheduling phase while preserving freedom for ILP extraction and exploitation.
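The saturation and sufficiency notions defined above can be illustrated by brute force on a toy DAG. The thesis develops heuristics precisely because this exhaustive enumeration of schedules is exponential; the graph and the liveness model below (a value dies once all its consumers have executed) are simplifying assumptions.

```python
# Brute-force illustration of register saturation (max over all valid
# schedules of the peak register need) and sufficiency (min over all valid
# schedules) on a toy DAG. Exponential; for illustration only.
from itertools import permutations

def valid_orders(deps, n_ops):
    """All topological orders of the DAG (deps maps op -> set of predecessors)."""
    for order in permutations(range(n_ops)):
        pos = {op: i for i, op in enumerate(order)}
        if all(pos[p] < pos[o] for o, ps in deps.items() for p in ps):
            yield order

def peak_live(order, deps, n_ops):
    """Peak number of simultaneously live values under one schedule."""
    consumers = {p: {o for o, ps in deps.items() if p in ps}
                 for p in range(n_ops)}
    live, done, peak = set(), set(), 0
    for op in order:
        done.add(op)
        live.add(op)  # op's result becomes live
        # a value dies once all of its consumers have executed
        live -= {p for p in set(live) if consumers[p] and consumers[p] <= done}
        peak = max(peak, len(live))
    return peak

def saturation_sufficiency(deps, n_ops):
    peaks = [peak_live(o, deps, n_ops) for o in valid_orders(deps, n_ops)]
    return max(peaks), min(peaks)  # (saturation, sufficiency)

# Toy DAG: four loads 0..3, then 4 = f(0,1), 5 = g(2,3), 6 = h(4,5)
deps = {4: {0, 1}, 5: {2, 3}, 6: {4, 5}}
sat, suf = saturation_sufficiency(deps, 7)
```

On this graph saturation is 4 (loading everything first) while sufficiency is 3 (interleaving loads with their consumers): with at least 4 registers every valid schedule fits, and with fewer than 3 no schedule avoids spilling.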
|
588 |
Modèles et méthodes d'évaluation et de gestion des risques appliqués aux systèmes de transport de marchandises dangereuses (TMD), reposant sur les nouvelles technologies de l'information et de la communication (NTIC) / Models and methods for risk assessment and management applied to dangerous goods transportation (DGT) systems, based on new information and communication technologies (ICT). Tomasoni, Angela Maria, 21 April 2010 (has links) (PDF)
During my doctoral thesis, I developed several models and methods for risk assessment in dangerous goods transportation systems. Because of the multiplicity of risk assessment approaches, all the models described, defined and used are based on the classical definition of technological risk (linked to human activity, in the category of accidental risks) of an accident involving a vehicle carrying dangerous goods. This definition of risk is the same for pipelines as for road transport, but different methodological approaches to transport risk assessment can be taken. In Chapter 2, dangerous goods are defined in general, together with the different classes of hazardous materials considered; the study then focuses on hydrocarbons and the regulations that apply to them. Chapter 3 addresses the definition of risk in dangerous goods transportation, for pipelines and for road transport respectively. Chapter 4 gives a complete description of the pipeline risk assessment methodology. In Chapter 5, an innovative technological model is used to describe an LPG road accident scenario and to evaluate its impact on the affected population. Chapter 6 addresses innovative models and methods for the assessment and control of dangerous goods transport (DGT) by road, based on a "risk averse decision making" approach. In Chapter 7, an optimal control law for DGT is developed and applied to a critical infrastructure, specifically tunnels. Finally, Chapter 8 summarizes the work in terms of the results obtained during the thesis.
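The classical technological-risk definition used throughout can be sketched as frequency times consequence, accumulated per route segment. The segment data and rates below are invented for illustration; they are not the thesis's models or numbers.

```python
# Illustrative only: risk = accident frequency x consequence, computed per
# road segment with invented numbers (not from the thesis).

def segment_risk(length_km, accident_rate_per_km_yr, release_prob, exposed_people):
    """Expected societal impact of one segment (people affected per year)."""
    frequency = length_km * accident_rate_per_km_yr * release_prob
    return frequency * exposed_people

route = [
    # (km, accidents/km/yr, P(release | accident), people in impact radius)
    (12.0, 1e-6, 0.1, 500),   # urban segment: short, many people exposed
    (80.0, 5e-7, 0.1, 20),    # rural segment: long, few people exposed
    (1.5,  2e-6, 0.2, 800),   # tunnel, treated as critical infrastructure
]
risks = [segment_risk(*seg) for seg in route]
total_risk = sum(risks)
```

Ranking segments this way is what lets a routing or control policy (Chapters 6-7) trade off exposure between, say, an urban stretch and a tunnel.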
|
589 |
Conception, Réalisation et Caractérisation de l'Electronique Intégrée de Lecture et de Codage des Signaux des Détecteurs de Particules Chargées à Pixels Actifs en Technologie CMOS / Design, implementation and characterization of the integrated readout and signal-encoding electronics for CMOS active pixel charged particle detectors. Dahoumane, Mokrane, 03 November 2009 (has links) (PDF)
The future large experiments exploring the fundamental laws of Nature (e.g. the ILC) require vertex detectors with extreme spatial resolution and granularity, very thin and radiation tolerant, which are beyond the reach of current detection technologies. This observation is at the origin of the development of CMOS Active Pixel Sensors. The sensor's spatial resolution is a key performance figure. It results from the spread of the charge released by a charged particle crossing, and ionizing, the sensitive volume. Encoding the charge collected by each pixel relies on an ADC (Analog-to-Digital Converter) that can be integrated on the very substrate hosting the sensor's sensitive volume. This ADC must be accurate, compact, fast and low power. The objective of this thesis was therefore to design an ADC meeting these conflicting requirements. First, several architectures of a sample-and-hold amplifier were studied to condition the weak pixel signal, and an original architecture for this stage was designed. A pipeline ADC architecture was chosen. The basic 1.5-bit/stage configuration was implemented to validate the concept, since it minimizes the constraints on each stage. We then optimized the architecture by introducing double sampling, first on a 2.5-bit/stage configuration, which reduced area and power. Double sampling combined with the 1.5-bit/stage resolution constituted a second improvement. A new ADC architecture adapted to the pixel command sequence was proposed.
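The 1.5-bit/stage principle mentioned above can be sketched behaviorally: each stage makes a coarse three-level decision and passes an amplified residue, and the redundant digits are recombined digitally. This is textbook pipeline-ADC behavior for illustration, not the thesis's circuit (which adds double sampling and the pixel-specific front end).

```python
# Behavioral sketch of a 1.5-bit/stage pipeline ADC with redundancy-based
# digital recombination. Textbook behavior for illustration only.

def stage_15bit(v, vref=1.0):
    """One 1.5-bit stage: coarse digit d in {-1, 0, +1} and x2 residue."""
    if v > vref / 4:
        d = 1
    elif v < -vref / 4:
        d = -1
    else:
        d = 0
    return d, 2 * v - d * vref   # residue stays within +/- vref

def convert(v, n_stages=8, vref=1.0):
    """Signed output code; the overlapping +/- vref/4 decision bands are what
    give the stage comparators a large offset tolerance."""
    code = 0
    for _ in range(n_stages):
        d, v = stage_15bit(v, vref)
        code = 2 * code + d      # radix-2 accumulation of the redundant digits
    return code

code = convert(0.3)
approx = code / 2 ** 8           # reconstructed input, error < vref / 2**8
```

For an input of 0.3 V with vref = 1 V, eight stages reconstruct the input to within one LSB (1/256 of full scale), even though each stage only ever resolves three levels.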
|
590 |
Direktsamplande digital transciever / Direct sampling digital transceiver. Karlsson, Magnus, January 2002 (has links)
Master thesis work at ITN (Department of Science and Technology) in the areas of A/D converter construction and RF circuit design. The major goal of the project was to investigate suitable options for implementing direct conversion in transceivers operating in the 160 MHz band: a theoretical study followed by development of components in the Cadence design environment. A suitable A/D converter and other key parts were selected at the end of the theoretical study. Subsampling was applied to make the A/D sampling requirements more realistic to achieve. Besides lowering the requirements on the A/D converter, it allows a simpler construction, which saves more components than subsampling adds. Subsampling adds extra noise; for that reason an A/D converter based on the RSD algorithm was chosen to improve the error rate. To achieve a high bit-processing rate relative to the number of transistors used, a pipeline structure was selected as the conversion method. The receiver received the most attention because it is the part most worth optimizing: A/D conversion is harder to construct than D/A conversion, and there is more to gain from eliminating mixers in the receiver than in the transmitter.
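The subsampling idea above reduces to a back-of-the-envelope frequency-folding calculation: a carrier sampled below Nyquist still lands at a predictable digital IF. The sampling rate below is an assumed illustrative value, not one chosen in the thesis.

```python
# Frequency folding under subsampling: where a signal at f_signal appears
# after sampling at f_sample. The 61.44 MHz rate is an assumption for
# illustration, not the thesis's design value.

def alias_frequency(f_signal, f_sample):
    """Apparent (folded) frequency of f_signal after sampling at f_sample."""
    f = f_signal % f_sample
    return f if f <= f_sample / 2 else f_sample - f

f_carrier = 160e6   # RF carrier in the 160 MHz band
f_s = 61.44e6       # sub-Nyquist sampling rate (assumed)
f_if = alias_frequency(f_carrier, f_s)   # digital IF the A/D converter sees
```

Here the 160 MHz carrier folds down to about 24.32 MHz, inside the converter's first Nyquist zone; the price is folded-in wideband noise, which is the extra noise that motivated choosing the RSD-based converter.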
|