111

Sensor Fusion with Coordinated Mobile Robots / Sensorfusion med koordinerade mobila robotar

Holmberg, Per January 2003 (has links)
Robust localization is a prerequisite for mobile robot autonomy. In many situations the GPS signal is not available, so an additional localization system is required. A simple approach is localization based on dead reckoning with wheel encoders, but it results in large estimation errors. With exteroceptive sensors such as a laser range finder, natural landmarks in the robot's environment can be extracted from raw range data. Landmarks are extracted with the Hough transform and a recursive line-segment algorithm. By applying data association and Kalman filtering along with process models, the landmarks can be used in combination with wheel encoders to estimate the global position of the robot. If several robots cooperate, better position estimates can be expected, because robots can be seen as mobile landmarks and one robot can supervise the movement of another. The centralized Kalman filter presented in this master's thesis systematically treats robots and extracted landmarks so that the benefits of several robots are exploited. Experiments in different indoor environments with two different robots show that long distances can be traveled while the positional uncertainty is kept low. The benefit of cooperating robots, in the sense of reduced positional uncertainty, is also shown in an experiment. Besides the localization algorithms, a typical autonomous robot task in the form of change detection is solved. The change detection method, which requires robust localization, is intended for surveillance. The implemented algorithm accounts for measurement and positional uncertainty when determining whether something in the environment has changed. Consecutive true changes as well as sporadic false changes are detected in an illustrative experiment.
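A minimal sketch of how landmark observations can be fused with wheel odometry in an extended Kalman filter, assuming a unicycle motion model and range-bearing measurements of known landmarks (the state, noise covariances and landmark handling are illustrative, not the thesis's exact formulation):

```python
import numpy as np

def ekf_predict(x, P, v, w, dt, Q):
    """Propagate the pose (x, y, theta) with a unicycle model driven by wheel odometry."""
    theta = x[2]
    x_pred = x + np.array([v * dt * np.cos(theta), v * dt * np.sin(theta), w * dt])
    F = np.array([[1.0, 0.0, -v * dt * np.sin(theta)],
                  [0.0, 1.0,  v * dt * np.cos(theta)],
                  [0.0, 0.0,  1.0]])
    return x_pred, F @ P @ F.T + Q

def ekf_update(x, P, z, landmark, R):
    """Correct the pose with a range-bearing measurement of a known landmark."""
    dx, dy = landmark[0] - x[0], landmark[1] - x[1]
    q = dx**2 + dy**2
    z_hat = np.array([np.sqrt(q), np.arctan2(dy, dx) - x[2]])
    H = np.array([[-dx / np.sqrt(q), -dy / np.sqrt(q),  0.0],
                  [ dy / q,          -dx / q,          -1.0]])
    innovation = z - z_hat
    innovation[1] = (innovation[1] + np.pi) % (2 * np.pi) - np.pi  # wrap bearing
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    return x + K @ innovation, (np.eye(3) - K @ H) @ P
```

In a cooperative setting the same machinery applies with a stacked state containing all robot poses, so that one robot observing another tightens both pose estimates.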
112

Längdmätning i engreppsskördare / Length measurement in a single-grip harvester

Hjerpe, Stefan January 2007 (has links)
This diploma work was commissioned by SP Maskiner AB and concerns the length measurement unit on the single-grip harvester head SP451LF. The pulse encoder in the present length measurement unit does not reach an acceptable service life, so a sensor bearing will be tested as a replacement. The tasks are to find a suitable sensor-bearing unit, to dimension a complete measuring-wheel bearing arrangement, and to create a design proposal for a length measurement prototype. The work is constrained by the requirement that the redesigned length measurement unit must fit the SP451LF head: its mounting points against the head may not be altered, and its outer dimensions must stay within the limits allowed by the head's chassis.
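For orientation, length measurement with a measuring wheel and a pulse encoder reduces to counting pulses; a minimal sketch, assuming a hypothetical wheel diameter and encoder resolution (neither taken from the thesis):

```python
import math

WHEEL_DIAMETER_MM = 150.0  # assumed measuring-wheel diameter
PULSES_PER_REV = 500       # assumed encoder resolution

def fed_length_mm(pulse_count: int) -> float:
    """Stem length fed past the measuring wheel, from accumulated encoder pulses."""
    circumference = math.pi * WHEEL_DIAMETER_MM
    return pulse_count * circumference / PULSES_PER_REV

print(fed_length_mm(10_000) / 1000)  # 10 000 pulses ~ 9.42 m of stem
```

A sensor bearing integrates this pulse generation into the wheel bearing itself, which is what allows it to replace a separate encoder that wears out.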
113

Desenvolvimento e implementação de um sistema de controle de posição e velocidade de uma esteira transportadora usando inversor de frequência e microcontrolador / Development and implementation of a position and velocity control system for a conveyor belt using a frequency inverter and a microcontroller

Raniel, Thiago. January 2011 (has links)
Advisor: Jozué Vieira Filho / Committee: Carlos Antonio Alves / Committee: Tony Inácio da Silva / Abstract: Automated conveyor systems are widely used in industrial applications, but practical issues remain. One of them is maintaining accuracy in systems that require systematic stops, since mechanical play tends to make the stop positions drift over time. Induction motors are commonly applied in such systems, and efficient low-cost solutions have been sought. In this work a position and velocity control system for conveyors was developed and implemented, based on a frequency inverter, a microcontroller, an incremental optical encoder and an inductive sensor. The conveyor is driven by a three-phase induction motor, which is controlled by the microcontroller-frequency inverter pair. This pair imposes a frequency on the motor stator through an exchange of messages between the microcontroller and the frequency inverter (master-slave configuration), using the USS® (Universal Serial Interface) communication protocol over the RS-485 standard. Position and velocity control of the motor shaft is based on the signal from the incremental optical encoder, which reports the shaft position along the trajectory, and on the inductive sensor, which provides an important external reference for the conveyor. The control software was written in the C programming language. The result is a position and velocity control system for the three-phase induction motor shaft that shows good results at low cost. / Master's
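The control principle can be sketched as a simple closed loop: read the encoder, compare with the target position, and command a frequency setpoint to the inverter. A rough illustration, with hypothetical stand-ins for the encoder interface and the USS/RS-485 master driver (the real protocol framing and the thesis's control law are not reproduced here):

```python
PULSES_PER_MM = 4.0   # assumed encoder resolution along the belt
MAX_FREQ_HZ = 50.0    # assumed inverter frequency limit
KP = 0.05             # assumed proportional gain, Hz per pulse of error

def control_step(target_mm, read_encoder_pulses, send_frequency_setpoint):
    """One iteration of a proportional position loop for the conveyor.

    read_encoder_pulses and send_frequency_setpoint are hypothetical hooks
    standing in for the microcontroller's encoder input and the USS master
    routine that writes the inverter's frequency reference over RS-485.
    """
    error = target_mm * PULSES_PER_MM - read_encoder_pulses()
    freq = max(-MAX_FREQ_HZ, min(MAX_FREQ_HZ, KP * error))
    send_frequency_setpoint(freq)  # positive = forward, negative = reverse
    return abs(error) < 2          # done when within ~2 pulses of the target
```

The inductive sensor's role would be to re-zero the encoder count at a fixed point on the conveyor, so that accumulated drift does not corrupt the absolute position.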
114

A Framework for Generative Product Design Powered by Deep Learning and Artificial Intelligence : Applied on Everyday Products

Nilsson, Alexander, Thönners, Martin January 2018 (has links)
In this master's thesis we explore the idea of using artificial intelligence in the product design process and seek to develop a conceptual framework for how it can be incorporated to make user-customized products more accessible and affordable for everyone. We show how generative deep learning models such as Variational Auto-Encoders and Generative Adversarial Networks can be implemented to generate design variations of windows, and we clarify the general implementation process along with insights from recent research in the field. The proposed framework consists of three parts: (1) a morphological matrix connecting several identified possibilities of implementation to specific parts of the product design process; (2) a general step-by-step process for incorporating generative deep learning; (3) a description of common challenges, strategies and solutions related to the implementation process. Together with the framework we also provide a system for automatic gathering and cleaning of image data, as well as a dataset containing 4564 images of windows in a front-view perspective.
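As an illustration of one of the model families mentioned, a minimal convolutional Variational Auto-Encoder in Keras, assuming 64x64 grayscale images (the architecture and sizes are placeholders, not the thesis's models):

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

LATENT_DIM = 16  # assumed latent size

# Encoder: image -> (mean, log-variance) of the latent distribution
enc_in = layers.Input(shape=(64, 64, 1))
h = layers.Conv2D(32, 3, strides=2, padding="same", activation="relu")(enc_in)
h = layers.Conv2D(64, 3, strides=2, padding="same", activation="relu")(h)
h = layers.Flatten()(h)
z_mean = layers.Dense(LATENT_DIM)(h)
z_logvar = layers.Dense(LATENT_DIM)(h)

# Reparameterization trick: sample z = mean + sigma * epsilon
def sample(args):
    mean, logvar = args
    eps = tf.random.normal(tf.shape(mean))
    return mean + tf.exp(0.5 * logvar) * eps

z = layers.Lambda(sample)([z_mean, z_logvar])

# Decoder: latent vector -> reconstructed image
h = layers.Dense(16 * 16 * 64, activation="relu")(z)
h = layers.Reshape((16, 16, 64))(h)
h = layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu")(h)
dec_out = layers.Conv2DTranspose(1, 3, strides=2, padding="same", activation="sigmoid")(h)

vae = Model(enc_in, dec_out)

# Loss = reconstruction error + KL divergence to the unit Gaussian prior
rec = tf.reduce_mean(tf.reduce_sum(
    tf.keras.losses.binary_crossentropy(enc_in, dec_out), axis=(1, 2)))
kl = -0.5 * tf.reduce_mean(tf.reduce_sum(
    1 + z_logvar - tf.square(z_mean) - tf.exp(z_logvar), axis=1))
vae.add_loss(rec + kl)
vae.compile(optimizer="adam")
```

Sampling from the latent prior and running the decoder then yields new design variations, which is the property such a framework exploits.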
115

A study of CABAC hardware acceleration with configurability in multi-standard media processing / En studie i konfigurerbar hårdvaruaccelerering för CABAC i flerstandards mediabearbetning

Flordal, Oskar January 2005 (has links)
To achieve greater compression ratios, new video and image CODECs like H.264 and JPEG 2000 take advantage of context-adaptive binary arithmetic coding (CABAC). As it contains computationally heavy algorithms, fast implementations have to be made when it is applied to large amounts of data, such as when compressing high-resolution formats like HDTV. This document describes how entropy coding works in general, with a focus on arithmetic coding and CABAC. Furthermore, the document discusses the demands of the different CABAC variants and proposes different options for hardware and instruction-level optimisation. Testing and benchmarking of these implementations are done to ease evaluation. The main contribution of the thesis is parallelising and unifying the CABAC variants, which is discussed and partly implemented. The result of the instruction-level acceleration (ILA) is improved program flow through specialised branching operations. The result of the dedicated hardware acceleration (DHA) is a two-bit-parallel accelerator with hardware sharing between a JPEG 2000 and an H.264 encoder, with limited decoding support.
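To give a feel for the principle being accelerated, a toy (non-adaptive, floating-point) binary arithmetic encoder that narrows an interval according to each bit's probability; CABAC adds per-context adaptive probability models and integer-range renormalization on top of this idea, so the sketch is illustrative rather than standard-conformant:

```python
def arith_encode(bits, p0=0.5):
    """Encode a bit sequence by interval narrowing; returns a fraction in [0, 1).

    p0 is a fixed probability of a 0-bit. Real CABAC instead keeps per-context
    adaptive probability estimates and works on integer ranges with
    renormalization to avoid precision loss.
    """
    low, high = 0.0, 1.0
    for b in bits:
        split = low + (high - low) * p0
        if b == 0:
            high = split       # a 0 takes the lower part of the interval
        else:
            low = split        # a 1 takes the upper part
    return (low + high) / 2    # any number inside the final interval works

code = arith_encode([0, 1, 0, 0, 1], p0=0.7)
print(code)  # a short binary expansion of this value identifies the sequence
```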
116

Scalable High Efficiency Video Coding : Cross-layer optimization

Hägg, Ragnar January 2015 (has links)
In July 2014, the second version of the HEVC/H.265 video coding standard was announced, and it included the Scalable High Efficiency Video Coding (SHVC) extension. SHVC codes a video stream together with subset streams of the same video at lower quality, and it supports spatial, temporal and SNR scalability, among others. This enables easy adaptation of a video stream, by dropping or adding packets, to devices with different screen sizes, computing power and bandwidth. In this project SHVC has been implemented in Ericsson's research encoder C65. Some cross-layer optimizations have also been implemented and evaluated. The main goal of these optimizations is to make better decisions when choosing the reference layer's motion parameters and QP, by doing multi-pass coding and using the coded enhancement-layer information from the first pass.
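The multi-pass idea can be outlined abstractly: encode once, collect enhancement-layer statistics, then re-encode with better base-layer decisions. A schematic sketch with entirely hypothetical encoder hooks (not the C65 API):

```python
def two_pass_shvc(frames, encode_layer, collect_stats):
    """Schematic two-pass cross-layer optimization.

    encode_layer and collect_stats are hypothetical stand-ins for the
    encoder's layer-coding routine and its statistics collection.
    """
    # Pass 1: code base and enhancement layers with default decisions
    base1 = encode_layer(frames, layer=0, hints=None)
    enh1 = encode_layer(frames, layer=1, reference=base1, hints=None)

    # Derive hints (motion parameters, QP) from the coded enhancement layer
    hints = collect_stats(enh1)

    # Pass 2: re-encode with cross-layer-informed decisions
    base2 = encode_layer(frames, layer=0, hints=hints)
    enh2 = encode_layer(frames, layer=1, reference=base2, hints=hints)
    return base2, enh2
```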
117

Classification du texte numérique et numérisé. Approche fondée sur les algorithmes d'apprentissage automatique / Text and Image based classification of documents using machine and representation learning

Sayadi, Karim 28 March 2017 (has links)
Different disciplines in the humanities, such as philology or palaeography, face complex and time-consuming tasks when examining their data sources. The introduction of computational approaches in the humanities makes it possible to address problems such as reading, analysis and archiving in a systematic way. The conceptual models developed rest on algorithms, which in turn give rise to software implementations that automate these tedious tasks. The first part of the thesis aims, on the one hand, to establish the thematic structure of a corpus by building high-dimensional semantic spaces based on topic modeling, and, on the other hand, at the dynamic tracking of topics, a real scientific challenge, notably because of the need to scale. The second part of the thesis treats the page of a digitized document holistically, without any prior intervention. The goal is to automatically learn representations of the stroke of one script relative to the stroke of another, taking into account the environment of the stroke: image artifacts and noise due to the deteriorated quality of the paper, among others. Our approach proposes stacked convolutional auto-encoder networks to provide an alternative representation of the input data, which is then used to classify historical documents according to their script.
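A minimal sketch of one convolutional auto-encoder stage of the kind that can be stacked for representation learning, assuming small grayscale patches (layer sizes are placeholders, not the thesis's architecture):

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

# One stage: compress a 32x32 patch to a feature map, then reconstruct it.
inp = layers.Input(shape=(32, 32, 1))
code = layers.Conv2D(16, 3, strides=2, padding="same", activation="relu")(inp)
out = layers.Conv2DTranspose(1, 3, strides=2, padding="same", activation="sigmoid")(code)

stage = Model(inp, out)
stage.compile(optimizer="adam", loss="mse")

# Greedy stacking: after training this stage on patches, keep its encoder,
# feed its codes as inputs to the next (deeper) auto-encoder stage, and use
# the final codes as learned script representations for a classifier.
encoder = Model(inp, code)
```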
118

Dimensionsmätare : En PLC-baserad mätmetod för att bestämma en brädas dimension / Dimension gauge: A PLC-based measurement method for determining the dimensions of a board

Svensson, Ludvig January 2017 (has links)
In this project a measurement system for measuring the dimensions of boards has been developed. The measurement system consists of a programmable logic controller (PLC), a human-machine interface (HMI), a pulse encoder for width measurement and two laser distance sensors for thickness measurement. The goals of the project were to compare and select sensors suitable for thickness measurement, to produce design documentation for a control cabinet and electrical drawings for connecting the sensors and the encoder, to develop a PLC program that collects and processes the measurement data and sends the results to a higher-level system, to develop an HMI for displaying results and changing settings, and to test the measurement system in order to verify its accuracy. A literature study was carried out to build knowledge of different types of encoders and distance sensors. The study showed that there are two main types of encoders, incremental and absolute: the incremental encoder generates a square wave of pulses, while the absolute encoder generates a value corresponding to an absolute position. Laser sensors based on time-of-flight (TOF) or the triangulation principle proved to have high precision and fast response times. The TOF principle is based on measuring the time it takes for the light to travel to an object and bounce back to a receiver in the sensor; knowing this time difference and the speed of light makes it possible to calculate the distance to the object. The triangulation principle uses known distances and reflection angles to determine an unknown distance. The electrical drawings were made in ELPROCAD, with previous knowledge providing the basis for how the program was used and how the drawings were constructed. In the sensor comparison, price and performance were weighed against each other. A control system from Siemens was chosen, and the PLC program was developed in Simatic Step 7. The HMI chosen was the Beijer X2 Pro 4" from Beijer Electronics. With a confidence level of 95%, the measurement system measured boards with an accuracy of ±0.3 mm for thickness and ±0.6 mm for width. The goals of the project were fulfilled, but there is room for improvement: future work includes a filter that removes spurious readings caused by thin edges on the board and a control function that periodically blows the sensors clean of dust with compressed air.
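The TOF relation described above is simply distance = speed of light × round-trip time / 2; a small worked sketch:

```python
C = 299_792_458.0  # speed of light, m/s

def tof_distance_m(round_trip_seconds: float) -> float:
    """Distance to the target from the measured round-trip time of a laser pulse."""
    return C * round_trip_seconds / 2

print(tof_distance_m(2e-9))  # a 2 ns round trip ~ 0.2998 m
```

Triangulation instead infers the distance from the geometry of the emitter, the receiver optics and the angle of the reflected spot, which is why it needs no fast timing circuitry.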
119

Performance comparison of two implementations of TCM for QAM

Peh, Lin Kiat 12 1900 (has links)
Approved for public release; distribution is unlimited. / Trellis-coded modulation (TCM) is employed with quadrature amplitude modulation (QAM) to provide error-correction coding at no expense in bandwidth. There are two common implementations of TCM, namely pragmatic TCM and Ungerboeck TCM. Both schemes employ the Viterbi algorithm for decoding but differ in code construction. This thesis investigates and compares the performance of pragmatic TCM and Ungerboeck TCM by implementing the Viterbi decoding algorithm for both schemes with 16-QAM and 64-QAM. Both pragmatic and Ungerboeck TCM with six memory elements are considered. Simulations were carried out for both schemes to evaluate their respective performance. The simulations were done in Matlab, and an additive white Gaussian noise channel was assumed. The objective was to ascertain whether pragmatic TCM, with its reduced-complexity decoding, is more suitable for adaptive modulation than Ungerboeck TCM. / Civilian
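The Viterbi decoding common to both schemes can be illustrated on a toy code; the sketch below decodes a 4-state, rate-1/2 convolutional code with hard decisions, whereas the thesis's codes have six memory elements (64 states) and map coded bits onto QAM constellations:

```python
import itertools

G = (0b111, 0b101)  # generators 7 and 5 (octal); 2 memory elements, 4 states
N_STATES = 4

def step(state, bit):
    """Next state and the two coded output bits for one input bit."""
    reg = (bit << 2) | state              # [newest bit | 2 memory bits]
    out = tuple(bin(reg & g).count("1") % 2 for g in G)
    return reg >> 1, out

def viterbi_decode(received):
    """Hard-decision Viterbi decoding of a sequence of 2-bit code symbols."""
    INF = float("inf")
    metric = [0.0] + [INF] * (N_STATES - 1)      # encoder starts in state 0
    paths = [[] for _ in range(N_STATES)]
    for sym in received:
        new_metric = [INF] * N_STATES
        new_paths = [None] * N_STATES
        for state, bit in itertools.product(range(N_STATES), (0, 1)):
            if metric[state] == INF:
                continue
            nxt, out = step(state, bit)
            m = metric[state] + sum(a != b for a, b in zip(out, sym))
            if m < new_metric[nxt]:              # add-compare-select
                new_metric[nxt] = m
                new_paths[nxt] = paths[state] + [bit]
        metric, paths = new_metric, new_paths
    best = min(range(N_STATES), key=lambda s: metric[s])
    return paths[best]

print(viterbi_decode([(1, 1), (1, 0), (0, 0), (0, 1)]))  # -> [1, 0, 1, 1]
```

Pragmatic TCM reuses such an off-the-shelf convolutional decoder on a subset of the bits, which is where its reduced decoding complexity comes from.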
120

Techniques For Low Power Motion Estimation In Video Encoders

Gupte, Ajit D 06 1900 (has links) (PDF)
This thesis looks at hardware algorithms that help reduce dynamic power dissipation in video encoder applications. The computational complexity of motion estimation and the data traffic between external memory and the video processing engine are two main reasons for large power dissipation in video encoders. While motion estimation may consume 50% to 70% of total video encoder power, the power dissipated in external memory such as DDR SDRAM can be of the order of 40% of the total system power. Reducing power dissipation in video encoders is important in order to improve the battery life of mobile devices such as smartphones and digital camcorders. We propose hardware algorithms which extract only the important features in the video data to reduce the complexity of computation, communication and storage, thereby reducing average power dissipation. We apply this concept to design hardware algorithms for optimizing motion estimation matching complexity, and reference frame storage and access from the external memory. In addition, we develop techniques to reduce the search complexity of motion estimation. First, we explore a set of adaptive algorithms that reduce the average power dissipated by motion estimation. We propose that, by taking macro-block-level features of the video data into account, the average matching complexity of motion estimation, in terms of number of computations, in real-time hardwired video encoders can be significantly reduced compared to traditional hardwired implementations, which are designed to handle the most demanding data sets. Features of the current macro-block, such as pixel variance and Hadamard transform coefficients, are analyzed and used to adapt the matching complexity. The macro-block is partitioned based on these features to obtain sub-block sums, which are used for matching operations. Thus, simple macro-blocks without many features can be matched with far fewer computations than macro-blocks with complex features, leading to a reduction in average power dissipation. Apart from optimizing the matching operation, optimizing the search operation is a powerful way to reduce motion estimation complexity. We propose novel search optimization techniques including (1) a center-biased search order and (2) skipping unlikely search positions, both applied in the context of real-time hardware implementation. The proposed search optimization techniques take into account, and are compatible with, the reference data access pattern from memory as required by the hardware algorithm. We demonstrate that the matching and search optimization techniques together achieve nearly 65% reduction in power dissipation due to motion estimation, without any significant degradation in motion estimation quality. A key to low power dissipation in video encoders is minimizing the data traffic between external memory devices such as DDR SDRAM and the video processor. External memory power can be as high as 50% of the total power budget in a multimedia system. Besides power dissipation, the amount of external memory traffic has a significant impact on system cost: large memory traffic necessitates high-speed external memories, a high-speed on-chip interconnect, and more parallel I/Os to increase the memory throughput, all of which raise system cost.
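The sub-block-sum idea can be illustrated concretely: instead of a full per-pixel sum of absolute differences (SAD), a macro-block is reduced to a grid of block sums and candidates are matched on those sums. A rough numpy sketch with a fixed 16x16 macro-block and 4x4 sub-blocks (the thesis's feature-driven, adaptive partitioning is not reproduced):

```python
import numpy as np

def subblock_sums(block, s=4):
    """Reduce a 16x16 macro-block to a 4x4 grid of 4x4-pixel sums."""
    h, w = block.shape
    return block.reshape(h // s, s, w // s, s).sum(axis=(1, 3))

def coarse_sad(current, candidate, s=4):
    """Approximate SAD from sub-block sums: 16 absolute differences
    instead of 256 per candidate position here."""
    return np.abs(subblock_sums(current, s) - subblock_sums(candidate, s)).sum()

rng = np.random.default_rng(0)
cur = rng.integers(0, 256, (16, 16))
cand = rng.integers(0, 256, (16, 16))
print(coarse_sad(cur, cand))
```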
We explore a lossy, scalar-quantization-based reference frame compression technique that can significantly reduce the reference data traffic from external memory devices. In this scheme, the quantization is adapted to the pixel range within each block being compressed. We show that the error introduced by the scalar quantization is bounded and can be represented by fewer bits than the original pixel. The proposed reference frame compression scheme uses this property to minimize the motion-compensation-related traffic, thereby improving the efficiency of the compression scheme. The scheme maintains a fixed compression ratio, and the size of the quantization error is also kept constant, which enables easy storage and retrieval of reference data. The impact of using a lossy reference on motion estimation quality is negligible. As a result of the reduction in DDR traffic, DDR power is reduced significantly, while the power dissipated by the additional hardware required for reference frame compression is very small compared to that reduction. A 24% reduction in peak DDR bandwidth and a 23% net reduction in average DDR power are achieved. For video sequences with larger motion, the bandwidth reduction is even higher (close to 40%) and the power reduction is close to 30%.
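A minimal sketch of block-wise, range-adaptive scalar quantization with bounded error and a fixed compression ratio, assuming 4x4 blocks and a 4-bit budget per pixel (illustrative parameters, not the thesis's exact scheme):

```python
import numpy as np

BITS = 4  # assumed fixed bit budget per pixel after compression

def compress_block(block):
    """Quantize a block to BITS bits per pixel, adapting the step to its range."""
    lo, hi = int(block.min()), int(block.max())
    step = max(1, (hi - lo + (1 << BITS) - 1) >> BITS)  # ceil(range / 2^BITS)
    codes = (block - lo) // step                        # values in [0, 2^BITS)
    return lo, step, codes.astype(np.uint8)

def decompress_block(lo, step, codes):
    """Reconstruct pixels; the error is bounded by about step/2 per pixel."""
    return lo + codes.astype(np.int32) * step + step // 2

rng = np.random.default_rng(1)
blk = rng.integers(0, 256, (4, 4))
lo, step, codes = compress_block(blk)
rec = decompress_block(lo, step, codes)
assert np.abs(rec - blk).max() <= (step + 1) // 2 + 1  # bounded quantization error
```

Because the block minimum and the step are stored alongside the fixed-size codes, each compressed block occupies a constant size, which is what makes random access to reference data in DDR straightforward.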
