About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
111

Längdmätning i engreppsskördare (Length measurement in single-grip harvesters)

Hjerpe, Stefan January 2007 (has links)
This diploma thesis was commissioned by SP Maskiner AB and concerns the length-measuring unit on the SP451LF single-grip harvester head. The pulse encoder in the existing length-measuring unit does not reach an acceptable service life, so a sensor bearing is to be evaluated as a replacement. The task consists of finding a suitable sensor bearing, dimensioning a complete measuring-wheel bearing arrangement, and producing a design proposal for a length-measuring prototype. The work is constrained by the requirement that the redesigned unit must fit the SP451LF head: its mounting interface to the head may not be changed, and its outer dimensions must stay within the limits allowed by the head's chassis.
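The measuring principle in this record, a pulse encoder on a wheel that rolls against the stem, reduces to pulses times wheel circumference divided by encoder resolution. A minimal sketch of that conversion; the wheel diameter and resolution are hypothetical values, not taken from the SP451LF design:

```python
# Sketch: converting measuring-wheel encoder pulses into travelled length.
# WHEEL_DIAMETER_MM and PULSES_PER_REV are illustrative assumptions.

import math

WHEEL_DIAMETER_MM = 120.0   # assumed measuring-wheel diameter
PULSES_PER_REV = 360        # assumed encoder resolution

def length_from_pulses(pulse_count: int) -> float:
    """Length travelled by the stem surface, in millimetres."""
    circumference = math.pi * WHEEL_DIAMETER_MM
    return pulse_count * circumference / PULSES_PER_REV

# One full wheel revolution corresponds to one circumference:
assert abs(length_from_pulses(PULSES_PER_REV) - math.pi * WHEEL_DIAMETER_MM) < 1e-9
```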
112

Desenvolvimento e implementação de um sistema de controle de posição e velocidade de uma esteira transportadora usando inversor de frequência e microcontrolador /

Raniel, Thiago. January 2011 (has links)
Advisor: Jozué Vieira Filho / Committee: Carlos Antonio Alves / Committee: Tony Inácio da Silva / Abstract: Automated conveyor systems are common and important in industry, but practical problems still pose challenges. One of them is maintaining accuracy in systems that require systematic stops, since mechanical backlash tends to make the stop positions drift over time. Induction motors have become common in such applications, and efficient, low-cost solutions have been sought. In this work, a position and velocity control system for conveyor belts was developed and implemented using a frequency inverter, a microcontroller, an incremental optical encoder and an inductive sensor. The conveyor is driven by a three-phase induction motor, actuated by the microcontroller / frequency-inverter pair. This pair imposes a frequency on the motor's stator through messages exchanged between the microcontroller and the frequency inverter (master-slave configuration), using the USS® serial communication protocol (Universal Serial Interface Protocol) over the RS-485 standard. Position and velocity control of the motor shaft is based on the signal from the incremental optical encoder, which reports the shaft position along the trajectory, and on the inductive sensor, which provides an external reference for the conveyor. The software that automates the conveyor was written in C. The result is a low-cost position and velocity control system for the three-phase induction motor shaft with good performance. / Master's
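The sensor arrangement described here, an incremental encoder for relative position plus an inductive sensor for an absolute reference, can be sketched as a small bookkeeping class. All names and numbers below are illustrative, not from the thesis implementation (which runs in C on a microcontroller):

```python
# Sketch: conveyor position from an incremental encoder, with an inductive
# sensor supplying the external zero reference. Values are hypothetical.

class ConveyorPosition:
    def __init__(self, pulses_per_rev: int, mm_per_rev: float):
        self.pulses_per_rev = pulses_per_rev
        self.mm_per_rev = mm_per_rev
        self.count = 0

    def on_encoder_pulse(self, direction: int) -> None:
        # Incremental encoders only report relative motion.
        self.count += 1 if direction >= 0 else -1

    def on_inductive_sensor(self) -> None:
        # The inductive sensor marks the absolute reference point.
        self.count = 0

    @property
    def position_mm(self) -> float:
        return self.count * self.mm_per_rev / self.pulses_per_rev

belt = ConveyorPosition(pulses_per_rev=1000, mm_per_rev=200.0)
for _ in range(2500):
    belt.on_encoder_pulse(+1)
assert abs(belt.position_mm - 500.0) < 1e-9   # 2.5 revolutions, 500 mm
```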
113

A Framework for Generative Product Design Powered by Deep Learning and Artificial Intelligence : Applied on Everyday Products

Nilsson, Alexander, Thönners, Martin January 2018 (has links)
In this master's thesis we explore the idea of using artificial intelligence in the product design process and develop a conceptual framework for how it can be incorporated to make user-customized products more accessible and affordable. We show how generative deep learning models such as Variational Auto-Encoders and Generative Adversarial Networks can be implemented to generate design variations of windows, and we describe the general implementation process along with insights from recent research in the field. The proposed framework consists of three parts: (1) a morphological matrix connecting several identified possibilities of implementation to specific parts of the product design process; (2) a general step-by-step process for incorporating generative deep learning; (3) a description of common challenges, strategies and solutions related to the implementation process. Together with the framework we also provide a system for automatic gathering and cleaning of image data, as well as a dataset containing 4564 images of windows in a front-view perspective.
114

A study of CABAC hardware acceleration with configurability in multi-standard media processing / En studie i konfigurerbar hårdvaruaccelerering för CABAC i flerstandards mediabearbetning

Flordal, Oskar January 2005 (has links)
To achieve greater compression ratios, newer video and image CODECs such as H.264 and JPEG 2000 take advantage of context-adaptive binary arithmetic coding (CABAC). Because it contains computationally heavy algorithms, fast implementations are needed when large amounts of data are processed, for example when compressing high-resolution formats such as HDTV. This thesis describes how entropy coding works in general, with a focus on arithmetic coding and CABAC. Furthermore, it discusses the demands of the different CABAC variants and proposes options for hardware and instruction-level optimisation. Testing and benchmarking of these implementations are done to ease evaluation. The main contribution of the thesis is the parallelisation and unification of the CABAC variants, which is discussed and partly implemented. The result of the ILA is improved program flow through specialised branching operations. The result of the DHA is a two-bit-parallel accelerator with hardware sharing between the JPEG 2000 and H.264 encoders, with limited decoding support.
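The core idea behind the arithmetic coding that CABAC builds on is interval subdivision: each symbol narrows a working interval in proportion to its probability. A toy floating-point version for a binary source illustrates the principle; real CABAC uses renormalised integer arithmetic and adaptive context models, none of which is shown here:

```python
# Toy sketch of arithmetic coding's interval-subdivision principle.
# Each bit shrinks [low, high); any number inside the final interval
# identifies the whole sequence. p0 is the (fixed) probability of a 0-bit.

def encode(bits, p0=0.7):
    low, high = 0.0, 1.0
    for b in bits:
        mid = low + (high - low) * p0
        if b == 0:
            high = mid          # 0 takes the lower sub-interval
        else:
            low = mid           # 1 takes the upper sub-interval
    return (low + high) / 2

def decode(x, n, p0=0.7):
    low, high = 0.0, 1.0
    out = []
    for _ in range(n):
        mid = low + (high - low) * p0
        if x < mid:
            out.append(0)
            high = mid
        else:
            out.append(1)
            low = mid
    return out

msg = [0, 1, 1, 0, 0, 0, 1]
assert decode(encode(msg), len(msg)) == msg
```

A skewed source (p0 far from 0.5) shrinks the interval slowly for likely bits, which is exactly where the compression gain comes from.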
115

Scalable High Efficiency Video Coding : Cross-layer optimization

Hägg, Ragnar January 2015 (has links)
In July 2014, the second version of the HEVC/H.265 video coding standard was announced, including the Scalable High Efficiency Video Coding (SHVC) extension. SHVC codes a video stream together with subset streams of the same video at lower quality, and supports spatial, temporal and SNR scalability, among others. This enables easy adaptation of a video stream, by dropping or adding packets, to devices with different screen sizes, computing power and bandwidth. In this project, SHVC was implemented in Ericsson's research encoder C65. Some cross-layer optimizations were also implemented and evaluated. The main goal of these optimizations is to make better decisions when choosing the reference layer's motion parameters and QP, by doing multi-pass coding and using the coded enhancement-layer information from the first pass.
116

Classification du texte numérique et numérisé. Approche fondée sur les algorithmes d'apprentissage automatique / Text and Image based classification of documents using machine and representation learning

Sayadi, Karim 28 March 2017 (has links)
Different disciplines in the humanities, such as philology or palaeography, face complex and time-consuming tasks when examining their data sources. Introducing computational approaches in the humanities makes it possible to address problems such as reading, analysis and archiving in a systematic way: the conceptual models rest on algorithms, and these give rise to implementations that automate the tedious tasks. The first part of the thesis establishes the thematic structure of a corpus by building high-dimensional semantic spaces based on topic modeling, and addresses the dynamic tracking of topics over time, a real scientific challenge, notably because of the need to scale. The second part treats the page of a digitized document holistically, without any prior intervention. The goal is to automatically learn representations that distinguish the strokes of one script from those of another, taking into account the environment of the stroke: image artifacts and noise due to the deteriorated quality of the paper, among others. To classify historical documents according to their script, we propose a novel representation learning method based on stacked convolutional auto-encoders, which provide an alternative representation of the input data.
117

Dimensionsmätare : En PLC-baserad mätmetod för att bestämma en brädas dimension (Dimension gauge: a PLC-based measurement method for determining the dimensions of a board)

Svensson, Ludvig January 2017 (has links)
In this project, a measurement system for determining the dimensions of boards has been developed. It consists of a programmable logic controller (PLC), a human-machine interface (HMI), a pulse encoder for width measurement and two laser distance sensors for thickness measurement. The goals of the project were to compare and select sensors suitable for thickness measurement; to produce design documentation for a control cabinet and electrical drawings for connecting the sensors and the encoder; to develop a PLC program that collects and processes the measurement data and sends the results to a supervisory system; to develop an HMI for displaying results and changing settings; and to test the measurement system at a sawmill in order to verify its accuracy. A literature study was carried out to deepen the knowledge of different types of encoders and distance sensors. It showed that there are two main types of encoders, incremental and absolute: an incremental encoder generates a square wave of pulses, while an absolute encoder outputs a value corresponding to an absolute position. Laser sensors based on time-of-flight (TOF) or the triangulation principle proved to have high precision and fast response times. The TOF principle measures the time it takes for light to travel to an object and reflect back to a receiver in the sensor; knowing the speed of light, the distance can be calculated. The triangulation principle uses known distances and reflection angles to determine an unknown distance. Electrical drawings were made in ELPROCAD, building on previous knowledge of how the program is used and how the drawings should be constructed. A second literature study gathered information about different sensors, comparing their price and performance. A control system from Siemens was chosen, and the PLC program was developed in Simatic Step 7. The HMI chosen was the Beijer X2 Pro 4" from Beijer Electronics. With a confidence level of 95%, the measurement system measured boards with an accuracy of ± 0.3 mm for thickness and ± 0.6 mm for width. The project goals were met, but there is room for improvement: future work includes developing a filter that removes thin edges ("angel wings") on the board from the measurements, and adding a control that periodically blows the sensors clean of dust with compressed air.
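The two distance-measurement relations described above are simple enough to state directly. The TOF relation is distance = c * t / 2 (the pulse travels out and back); for triangulation, one common geometry derives the unknown distance from a known baseline and the measured reflection angle. Both functions below are illustrative sketches, not the sensors' actual firmware:

```python
# Sketch: the two laser ranging principles mentioned in the abstract.
# The triangulation geometry shown (distance = baseline * tan(angle)) is one
# common arrangement, assumed here for illustration.

import math

C = 299_792_458.0  # speed of light, m/s

def tof_distance_m(round_trip_time_s: float) -> float:
    # The pulse covers the distance twice: out to the target and back.
    return C * round_trip_time_s / 2.0

def triangulation_distance_m(baseline_m: float, angle_rad: float) -> float:
    # Known baseline and measured reflection angle give the unknown distance.
    return baseline_m * math.tan(angle_rad)

# A round trip of 2/c seconds corresponds to exactly 1 m:
assert abs(tof_distance_m(2.0 / C) - 1.0) < 1e-9
```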
118

Performance comparison of two implementations of TCM for QAM

Peh, Lin Kiat 12 1900 (has links)
Approved for public release; distribution is unlimited. / Trellis-coded modulation (TCM) is employed with quadrature amplitude modulation (QAM) to provide error-correction coding at no cost in bandwidth. There are two common implementations of TCM, namely pragmatic TCM and Ungerboeck TCM. Both schemes employ the Viterbi algorithm for decoding but differ in code construction. This thesis investigates and compares the performance of pragmatic and Ungerboeck TCM by implementing the Viterbi decoding algorithm for both schemes with 16-QAM and 64-QAM. Both pragmatic and Ungerboeck TCM with six memory elements are considered. Simulations were carried out in Matlab to evaluate their respective performance, assuming an additive white Gaussian noise channel. The objective was to ascertain whether pragmatic TCM, with its reduced-complexity decoding, is more suitable for adaptive modulation than Ungerboeck TCM. / Civilian
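Beneath both TCM schemes sits a QAM constellation mapper. The sketch below shows Gray-coded 16-QAM mapping and nearest-neighbour demapping only; the convolutional encoding and Viterbi decoding that distinguish pragmatic from Ungerboeck TCM are deliberately omitted, and the particular bit-to-level convention is an illustrative assumption, not taken from the thesis:

```python
# Sketch: Gray-coded 16-QAM mapping and nearest-neighbour (hard) demapping.
# Each axis carries 2 bits; adjacent levels differ in exactly one bit.

GRAY = {0b00: -3, 0b01: -1, 0b11: +1, 0b10: +3}  # 2 bits -> PAM level
INV_GRAY = {v: k for k, v in GRAY.items()}

def qam16_map(nibble: int) -> complex:
    i_bits, q_bits = (nibble >> 2) & 0b11, nibble & 0b11
    return complex(GRAY[i_bits], GRAY[q_bits])

def qam16_demap(sym: complex) -> int:
    nearest = lambda x: min((-3, -1, 1, 3), key=lambda lv: abs(x - lv))
    return (INV_GRAY[nearest(sym.real)] << 2) | INV_GRAY[nearest(sym.imag)]

for n in range(16):
    assert qam16_demap(qam16_map(n)) == n            # noiseless round trip
assert qam16_demap(qam16_map(0b0111) + 0.4 - 0.3j) == 0b0111  # small noise
```

Gray mapping matters here because a nearest-neighbour decision error then corrupts only a single bit, which is what makes the per-bit error analysis in TCM comparisons tractable.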
119

Techniques For Low Power Motion Estimation In Video Encoders

Gupte, Ajit D 06 1900 (has links) (PDF)
This thesis looks at hardware algorithms that help reduce dynamic power dissipation in video encoder applications. Computational complexity of motion estimation and the data traffic between external memory and the video processing engine are two main reasons for large power dissipation in video encoders. While motion estimation may consume 50% to 70% of total video encoder power, the power dissipated in external memory such as the DDR SDRAM can be of the order of 40% of the total system power. Reducing power dissipation in video encoders is important in order to improve battery life of mobile devices such as the smart phones and digital camcorders. We propose hardware algorithms which extract only the important features in the video data to reduce the complexity of computations, communications and storage, thereby reducing average power dissipation. We apply this concept to design hardware algorithms for optimizing motion estimation matching complexity, and reference frame storage and access from the external memory. In addition, we also develop techniques to reduce searching complexity of motion estimation. First, we explore a set of adaptive algorithms that reduce average power dissipated due to motion estimation. We propose that by taking into account the macro-block level features in the video data, the average matching complexity of motion estimation in terms of number of computations in real-time hardwired video encoders can be significantly reduced when compared against traditional hardwired implementations, that are designed to handle most demanding data sets. Current macro-block features such as pixel variance and Hadamard transform coefficients are analyzed, and are used to adapt the matching complexity. The macro-block is partitioned based on these features to obtain sub-block sums, which are used for matching operations. 
Thus, simple macro-blocks without many features can be matched with far fewer computations than macro-blocks with complex features, leading to a reduction in average power dissipation. Apart from optimizing the matching operation, optimizing the search operation is a powerful way to reduce motion estimation complexity. We propose novel search optimization techniques, including (1) a center-biased search order and (2) skipping unlikely search positions, both applied in the context of real-time hardware implementation. The proposed search optimization techniques take into account, and are compatible with, the reference data access pattern from memory as required by the hardware algorithm. We demonstrate that the matching and search optimization techniques together achieve nearly 65% reduction in power dissipation due to motion estimation, without any significant degradation in motion estimation quality. A key to low power dissipation in video encoders is minimizing the data traffic between external memory devices such as DDR SDRAM and the video processor. External memory power can be as high as 50% of the total power budget in a multimedia system. Besides power dissipation in external memory, the amount of data traffic is an important parameter with significant impact on system cost: large memory traffic necessitates high-speed external memories, a high-speed on-chip interconnect and more parallel I/Os to increase memory throughput, all of which raise system cost. We explore a lossy, scalar-quantization-based reference frame compression technique that significantly reduces the amount of reference data traffic from external memory devices. In this scheme, the quantization is adapted to the pixel range within each block being compressed. We show that the error introduced by the scalar quantization is bounded and can be represented by a smaller number of bits than the original pixel. The proposed reference frame compression scheme uses this property to minimize the motion-compensation-related traffic, thereby improving the efficiency of the compression scheme. The scheme maintains a fixed compression ratio, and the size of the quantization error is also kept constant, which enables easy storage and retrieval of the reference data. The impact of using a lossy reference on motion estimation quality is negligible. As a result of the reduction in DDR traffic, DDR power is reduced significantly, while the power dissipated by the additional reference-frame-compression hardware is very small in comparison. A 24% reduction in peak DDR bandwidth and a 23% net reduction in average DDR power are achieved. For video sequences with larger motion, the bandwidth reduction is even higher (close to 40%) and the power reduction is close to 30%.
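The range-adaptive scalar quantization idea in this record can be sketched compactly: pick the step size from each block's pixel range so that the compressed size is fixed and the reconstruction error is bounded by half a step. The block size and bit budget below are illustrative assumptions, not the thesis's actual parameters:

```python
# Sketch: range-adaptive scalar quantization of a pixel block, with a fixed
# per-block bit budget and an error bound of step/2.

def compress_block(pixels, bits=4):
    lo, hi = min(pixels), max(pixels)
    levels = (1 << bits) - 1
    step = (hi - lo) / levels if hi > lo else 1.0
    codes = [round((p - lo) / step) for p in pixels]
    return lo, step, codes          # fixed-size representation

def decompress_block(lo, step, codes):
    return [lo + c * step for c in codes]

block = [12, 40, 200, 180, 90, 55, 13, 250]
lo, step, codes = compress_block(block)
recon = decompress_block(lo, step, codes)
# Rounding to the nearest level bounds the error by half a quantization step:
assert all(abs(a - b) <= step / 2 + 1e-9 for a, b in zip(block, recon))
```

Because every block compresses to the same number of bits, any block of the reference frame can be fetched and decoded independently, which is what makes random access for motion compensation cheap.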
120

Drill wear monitoring using instantaneous angular speed : a comparison with conventional technologies used in drill monitoring systems

Sambayi, Patrick Mukenyi Kataku January 2012 (has links)
Most drill wear monitoring research in the literature is based on conventional vibration technologies. However, these approaches have not attracted real interest from manufacturers, for multiple reasons: some of the techniques are impractical and use complicated Tool Condition Monitoring (TCM) systems of limited value in industry, and they are prone to give spurious drill-deterioration warnings in industrial environments. Drills are therefore normally replaced at preset estimated intervals, sometimes long before they are worn, or based on expert judgment. Two of the main problems in implementing such systems in drilling are the poor signal-to-noise ratio and the lack of purpose-made sensors for drilling, in contrast to machining operations with straight-edge cutters. To overcome the noise problems, many researchers recommend advanced and sophisticated signal processing, while the work of Rehorn et al. (2005) suggests the following ways to deal with the lack of commercial purpose-made sensors: (1) some research should be directed towards developing a form of instrumented tool for drilling operations; (2) since the use of custom-made sensors is being ignored in drilling operations, effort should be focused on intelligent or innovative use of available sensor technology. It is expected that the latter could minimize implementation problems and allow an optimal drill utilization rate by means of modern, smart sensors. In addition to the accelerometer commonly used in conventional methods, this work considers two other sensor-based methods to monitor drill wear indirectly: an instrumented drill with strain gauges to measure torque, and an encoder to measure the Instantaneous Angular Speed (IAS).
The signals from these sensors were analyzed using signal processing techniques such as statistical parameters, the Fast Fourier Transform (FFT) and a preliminary time-frequency (TF) analysis. A preliminary investigation revealed that a regression analysis (RA) based on a higher-order polynomial function can follow, and give a prognosis of, the development of the monitored parameters. The experimental investigation revealed that all of the above monitoring systems are sensitive to the deterioration of the drill condition. This work is, however, particularly concerned with the use of IAS measured on the drill spindle, compared to conventional drill condition monitoring systems. The comparison reveals that the IAS approach can generate diagnostic information similar to vibration and torque measurements, without some of the instrumentation complications. This similarity seems logical: it is well known that increased friction between the drill and the workpiece due to wear increases the torque, and should consequently reduce, or at least affect, the spindle's rotational speed. However, a drill instrumented with a strain gauge is not practical, because of the inconvenience it causes on production machines. By contrast, the IAS can be measured quite easily by means of an encoder, a tachometer or other smart rotational speed sensors. One can thus take advantage of advanced digital time-interval analysis, applied to the carrier signal from a multiple-pulse-per-revolution encoder on the rotating shaft, to improve the analysis of the pulse train. As shown in this dissertation, the encoder resolution does not significantly affect the analysis, so encoders can easily be replaced by any of the smart transducers that have become more popular in rotating machinery.
Consequently, a non-contact transducer could effectively be used in on-line drill condition monitoring, for example lasers or time-passage encoder-based systems. This work builds on previous research in Tool Condition Monitoring (TCM) and presents a sensor that is already available in the arsenal of sensors and could open the door to a practical and reliable sensor for automated drilling. In conclusion, this dissertation strives to answer the following question: which of these methods could meet manufacturers' needs by monitoring and diagnosing drill condition in a practical and reliable manner? Past research has sufficiently demonstrated the weakness of conventional technologies in industry, despite good results in the laboratory. In addition, delayed diagnosis due to time-consuming data processing is not beneficial for automated drilling, especially when the drill wears rapidly at the end of its life. No advanced signal processing is required for the proposed technique, as satisfactory results are obtained using common time-domain signal processing methods. The recommended monitoring choice will ultimately depend on which sensor is practical and reliable in industry. / Dissertation (MEng)--University of Pretoria, 2012. / gm2013 / Mechanical and Aeronautical Engineering / MEng / Unrestricted
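The IAS estimate discussed in this record comes from timing successive encoder pulses: with N pulses per revolution, each pulse spans 2*pi/N radians, so the speed over one pulse interval is that angle divided by the interval. A minimal sketch; the encoder resolution and shaft speed below are illustrative, not from the experimental setup:

```python
# Sketch: Instantaneous Angular Speed (IAS) from encoder pulse timestamps.
# Each pulse covers 2*pi/pulses_per_rev radians, so IAS = angle / dt.

import math

def ias_rad_per_s(pulse_timestamps, pulses_per_rev=1024):
    angle_per_pulse = 2.0 * math.pi / pulses_per_rev
    return [angle_per_pulse / (t1 - t0)
            for t0, t1 in zip(pulse_timestamps, pulse_timestamps[1:])]

# A shaft at exactly 10 rev/s produces evenly spaced pulses:
dt = 1.0 / (10 * 1024)
stamps = [k * dt for k in range(5)]
assert all(abs(w - 2 * math.pi * 10) < 1e-6 for w in ias_rad_per_s(stamps))
```

Wear-induced friction shows up as small dips in this per-pulse speed estimate, which is why IAS can stand in for torque measurement without instrumenting the drill itself.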
