  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.

Visual Servo of Underwater Pipeline Following

Jiang, Bor-tung 14 July 2008 (has links)
This thesis describes a vision-based method for an ROV's underwater pipeline recognition task. The method aims to overcome the poor image quality of the underwater environment, including scenes where seaweed is present. It relies on edge information and the line features of the pipeline: an edge image is obtained through preprocessing, from which line features are extracted. The thesis focuses on pipeline recognition, aiming to provide useful navigation information for further development of the ROV's control system.
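The thesis does not reproduce its code, but the edge-plus-line-feature idea can be sketched with Hough-style voting: each edge pixel votes for all (theta, rho) line parameterizations passing through it, and the accumulator peak gives the dominant line. The minimal pure-Python sketch below is an illustration only — the function name, angular resolution, and synthetic edge points are assumptions, not the author's implementation.

```python
import math

def hough_peak(points, thetas=180, rho_res=1.0):
    """Vote each edge point (x, y) into a (theta_deg, rho) accumulator
    and return the parameters of the dominant line."""
    acc = {}
    for x, y in points:
        for t in range(thetas):
            th = math.radians(t)
            # Normal-form line equation: rho = x*cos(theta) + y*sin(theta)
            rho = round((x * math.cos(th) + y * math.sin(th)) / rho_res) * rho_res
            acc[(t, rho)] = acc.get((t, rho), 0) + 1
    return max(acc, key=acc.get)   # (theta_deg, rho) with the most votes
```

In a pipeline-following context, the recovered (theta, rho) of the pipeline's edges would feed the visual-servo controller as heading and offset information.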

A study of 10-bit, 100Msps pipeline ADC and the implementation of 1.5-bit stage

Bayoumy, Mostafa Elsayed 15 April 2014 (has links)
The demand for high-resolution, high-speed analog-to-digital converters (ADCs) has been growing in today's market. Pipeline ADCs offer advantages over flash or successive-approximation techniques: high-resolution, high-speed requirements can be met more easily with a pipelined architecture than with other ADC implementations of the same specifications. Because the stages work simultaneously, the number of stages needed to obtain a given resolution is not constrained by the required throughput rate. Latency is an inherent consequence of the multistage concurrent operation of any pipelined system, but latency is not considered a problem in many ADC applications. In this work, a 1.5-bit stage of a pipeline ADC is fully implemented, including its two voltage comparators, a sub-DAC with three possible output voltages, and a multiplying digital-to-analog converter (MDAC) block. Only ideal components were used for the clocking operation. The final design achieved a total harmonic distortion (THD) below -70 dB.
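As a rough illustration of what a 1.5-bit stage computes (a behavioral sketch only, not the thesis's transistor-level design; the function name and ideal-component assumptions are mine), the stage compares the input against ±Vref/4 with its two comparators, resolves a code in {0, 1, 2}, and the MDAC outputs twice the input minus the sub-DAC level:

```python
def stage_1p5bit(vin, vref):
    """Behavioral model of one 1.5-bit pipeline ADC stage.
    Two comparators at +/- vref/4 resolve a code d in {0, 1, 2};
    the MDAC amplifies by 2 and subtracts the sub-DAC output,
    which is -vref, 0, or +vref for d = 0, 1, 2 respectively."""
    if vin > vref / 4:
        d = 2
    elif vin < -vref / 4:
        d = 0
    else:
        d = 1
    residue = 2 * vin - (d - 1) * vref   # passed to the next stage
    return d, residue
```

The half-bit redundancy (three codes instead of four) keeps the residue inside ±Vref even with comparator offsets up to Vref/4, which is what lets later stages digitally correct comparator errors.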

Understanding Decisions Latino Students Make Regarding Persistence in the Science and Math Pipeline

Munro, Janet January 2009 (has links)
This qualitative study focused on the knowledge and perceptions of Latino high school students, as well as those of their parents and school personnel, at a southwestern suburban high school regarding persistence in the math/science pipeline. In the context of the unique school and community setting these students experience, the decision-making process was examined, with particular focus on characterizing the relationships that influence it. While the theoretical framework informing this study was social capital, its primary purpose was to inform the school's processes and policy in support of increased Latino participation in the math and science pipeline. Since course selection may be the most powerful factor affecting school achievement and college-preparedness, and since course selection is influenced by school policy, school personnel, students, parents, and teachers alike, it is important to understand the beliefs and perceptions that characterize the relationships among them. The qualitative research design involved a phenomenological study of nine Latino students, their parents, their teachers and counselors, and certain support personnel from the high school. The school's and community's environment in support of academic intensity served as context for the portrait that developed.
Given rapidly changing demographics that bring more and more Latino students to suburban high schools, the persistent achievement gap experienced by Latino students, and the growing dependence of the world economy on a citizenry versed in math- and science-related fields, a deeper understanding of the decision-making processes Latino students experience can inform school policy as educators struggle to influence those decisions. This study revealed a striking lack of knowledge concerning the college-entrance ramifications of continued course work in math and science beyond that required for graduation; relationships among peers, parents, and school personnel that were markedly lacking in influence over a student's decision to continue, or not, such course work; and a general dismissal of the value of math- and science-related careers. Also lacking was any evidence of social capital within parental networks reflecting intergenerational closure.

Localized Pipeline Encroachment Detector System Using Sensor Network

Ou, Xiaoxi 1986- 16 December 2013 (has links)
Detection of encroachment on pipeline right-of-way is important for pipeline safety. An effective system can provide timely warnings while reducing the probability of false alarms. A number of industry and academic developments tackle this problem. This thesis is the first to study the use of a wireless sensor network for pipeline right-of-way encroachment detection. In the proposed method, each sensor node in the network is responsible for detecting vibration signals caused by encroachment activities and transmitting them to a base station (computer center). The base station monitors and analyzes the signals and, if an encroachment activity is detected, sends a warning. We describe such a platform, including its hardware configuration and software controls, and preliminary experiments demonstrate that the platform can detect digging activities by a tiller in the presence of natural and automotive noise.
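The thesis's detection algorithm is not reproduced here, but a common baseline for this kind of vibration monitoring is a short-time energy detector with an adaptive noise floor. The sketch below is a hypothetical illustration — the frame size, threshold factor, and exponential noise-floor update are assumed parameters, not the platform's actual processing:

```python
def detect_events(samples, frame=64, threshold=4.0, alpha=0.05):
    """Return start indices of frames whose short-time energy exceeds
    `threshold` times a running noise-floor estimate."""
    noise, events = None, []
    for i in range(0, len(samples) - frame + 1, frame):
        energy = sum(s * s for s in samples[i:i + frame]) / frame
        if noise is None:
            noise = energy            # initialize the floor from the first frame
        if energy > threshold * noise:
            events.append(i)          # candidate encroachment activity
        else:
            noise = (1 - alpha) * noise + alpha * energy  # track slow noise drift
    return events
```

In the networked setting described above, each node would run something like this locally and forward only flagged frames to the base station, saving radio energy.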

Particle Contributions to Kinematic Friction in Slurry Pipeline Flow

Gillies, Daniel P Unknown Date
No description available.

Development and Design of Construction Logistics in Civil Engineering, Illustrated by the Example of Pipeline Construction [Entwicklung und Gestaltung der Baulogistik im Tiefbau: dargestellt am Beispiel des Pipelinebaus]

Deml, Alexander. January 2008 (has links)
Also published as: Regensburg, Universität, dissertation, 2008.

Environmental Impact Assessment in International Lending: Practice and Development Trends, Illustrated by Long-Distance Natural Gas and Crude Oil Pipelines [Die Umweltverträglichkeitsprüfung in der internationalen Kreditvergabe: Praxis und Entwicklungstendenzen am Beispiel von Erdgas- und Erdölfernleitungen]

Nickel, Elke. January 2004 (has links)
Also published as: Dortmund, Universität, dissertation, 2003.

Revisiting Wide Superscalar Microarchitecture / Révision de larges unités superscalaires

Mondelli, Andrea 12 September 2017 (has links)
For several decades, the clock frequency of general-purpose processors kept growing thanks to faster transistors and microarchitectures with deeper pipelines. However, about 10 years ago, technology hit leakage-power and temperature walls, and the clock frequency of high-end processors stopped increasing. Instead of raising the clock frequency, processor makers integrated more cores on a single chip, enlarged the cache hierarchy, and improved energy efficiency. Putting more cores on a single chip increases total chip throughput and benefits applications with thread-level parallelism, but most applications have low thread-level parallelism, so having more cores is not sufficient: it is also important to accelerate individual threads. Moreover, reducing energy consumption has become a major objective when designing a high-performance microarchitecture. Some microarchitecture features have been introduced in superscalar cores mainly to reduce energy. An example is the loop buffer, now implemented in several superscalar microarchitectures; its purpose is to save energy in the core's front-end (instruction cache, branch predictor, decoder, etc.) when executing a loop with a body small enough to fit in the buffer. If the clock frequency remains constant, the only possibility left for higher single-thread performance in future processors is to exploit more instruction-level parallelism (ILP). Certain microarchitecture improvements (e.g., a better branch predictor) simultaneously improve performance and energy efficiency; in general, however, exploiting more ILP has a cost in silicon area, energy consumption, design effort, etc.
Therefore, the microarchitecture is modified slowly and incrementally, taking advantage of technology scaling. Indeed, processor makers have made continuous efforts to exploit more ILP, with better branch predictors, better data prefetchers, larger instruction windows, more physical registers, and so forth. This thesis tries to depict what future superscalar cores may look like in 10 years and explores the possibility of exploiting loop behaviors to reduce energy consumption beyond the front-end. Some propositions have been published for loop accelerators or for unconventional superscalar core back-ends. It is argued that the instruction window and the issue width can be augmented by combining clustering and register write specialization. A major difference with past research on clustered microarchitectures is the assumption of wide-issue clusters, whereas past research mostly focused on narrow-issue clusters. Going from narrow-issue to wide-issue clusters is not just a quantitative change; it has a qualitative impact on the clustering problem, in particular on the steering policy. The second part of the thesis proposes two independent and orthogonal energy optimizations exploiting loops. The first detects redundant micro-ops that produce the same result on every iteration and removes these micro-ops completely. The second focuses on the energy consumed by load micro-ops, detecting situations where a load does not need to access the store queue or does not need to access the level-1 data cache.
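The first loop optimization — detecting micro-ops that produce the same result on every iteration — can be illustrated with a toy trace-level model. This is my own sketch of the detection idea, not the thesis's hardware mechanism; real hardware would track per-entry stability bits in a loop buffer rather than comparing whole traces:

```python
def redundant_uops(iterations):
    """Given one list of (uop, result) pairs per loop iteration,
    return the set of uops whose result was identical on every
    iteration -- candidates for removal from the execution stream."""
    first = dict(iterations[0])       # results observed on iteration 0
    stable = set(first)               # uops still considered invariant
    for it in iterations[1:]:
        for uop, res in it:
            if uop in stable and first[uop] != res:
                stable.discard(uop)   # result changed: not redundant
    return stable
```

In hardware terms, a stable micro-op's result stays pinned in its destination register, so subsequent iterations skip renaming, scheduling, and execution for it.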

Simulace řízení provozu teplovodu s dlouhým potrubím / Simulation of long heat pipeline operation control

Seriš, Richard January 2011 (has links)
The objective of this diploma thesis is to design a model of long heat pipeline operation control using Matlab software. The model roughly corresponds to the real Melnik–Praha heat pipeline system. Operation control of the sets of pumps is then simulated, in both usual and critical operating modes. After evaluating the results, the conditions for operation control of this system are optimized.
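The thesis's Matlab model is not reproduced here, but the characteristic behavior of a long heat pipeline — transport delay plus thermal smoothing — can be sketched in a few lines. This is a hypothetical first-order model; the delay, time constant, and function name are illustrative assumptions, not the Melnik–Praha model:

```python
def simulate_pipeline(inlet_temps, delay_steps, tau, dt=1.0):
    """Toy model of a long heat pipeline: outlet temperature equals the
    inlet temperature delayed by the transit time (delay_steps samples)
    and smoothed by a first-order lag with time constant tau."""
    out, t_out = [], inlet_temps[0]
    for k in range(len(inlet_temps)):
        t_in = inlet_temps[max(0, k - delay_steps)]   # delayed inlet sample
        t_out += dt / tau * (t_in - t_out)            # first-order thermal lag
        out.append(t_out)
    return out
```

The long dead time this model exhibits is exactly what makes pump operation control of such a system hard: a control action at the source is felt at the consumer only after the transit delay.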

DataOps : Towards Understanding and Defining Data Analytics Approach

Mainali, Kiran January 2020 (has links)
Data collection and analysis approaches have changed drastically in the past few years. The reasons behind adopting different approaches are improved data availability and continuously changing analysis requirements. Data have always been there, but data management is vital nowadays due to the rapid generation and availability of data in various formats. Big data has opened the possibility of dealing with potentially infinite amounts of data in numerous formats in a short time. Data analytics is becoming complex due to data characteristics, sophisticated tools and technologies, changing business needs, varied interests among stakeholders, and the lack of a standardized process. DataOps is an emerging approach advocated by data practitioners to address the challenges in data analytics projects, which differ from software engineering in many aspects. DevOps has proven to be an efficient and practical approach for delivering projects in the software industry, but DataOps is still in its infancy as an independent and essential discipline of data analytics. In this thesis, we examine DataOps as a methodology for implementing data pipelines by conducting a systematic search of research papers. As a result, we define DataOps and outline its ambiguities and challenges. We also explore how DataOps covers the different stages of the data lifecycle. We created comparison matrices of different tools and technologies, categorizing them into functional groups to demonstrate their usage in data lifecycle management. Following DataOps implementation guidelines, we implemented a data pipeline using Apache Airflow as workflow orchestrator inside Docker and compared it with simple manual execution of a data analytics project.
In the evaluation, the data pipeline with DataOps provided automation of task execution; orchestration of the execution environment; testing and monitoring; and communication and collaboration; and it reduced the end-to-end product delivery cycle time along with the pipeline execution time.
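The Airflow DAG from the thesis is not reproduced here, but the orchestration concept it relies on — running each task only after its declared upstream dependencies have completed — can be shown with a toy pure-Python scheduler. This is a deliberately minimal stand-in; the task names and API are invented for illustration, and a real deployment would use Airflow's own DAG and operator classes:

```python
def run_pipeline(tasks, deps):
    """Execute callables in `tasks` (name -> function) respecting the
    dependency map `deps` (name -> list of upstream names).
    Returns the execution order; raises on a cyclic dependency."""
    done, order = set(), []
    while len(done) < len(tasks):
        progressed = False
        for name, fn in tasks.items():
            if name not in done and all(d in done for d in deps.get(name, [])):
                fn()                       # run the task body
                done.add(name)
                order.append(name)
                progressed = True
        if not progressed:
            raise ValueError("cyclic dependency in pipeline")
    return order
```

On top of this ordering, an orchestrator such as Airflow adds the scheduling, retry, logging, and monitoring machinery that the evaluation above credits the DataOps pipeline with automating.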
