161

Demand Forecasting of Outbound Logistics Using Neural Networks

Otuodung, Enobong Paul, Gorhan, Gulten January 2023 (has links)
Long- and short-term volume forecasting is essential to companies' logistics service operations. It is crucial for logistics companies to predict the volumes of goods that will be delivered to their various centers on any given day, as this assists in managing the efficiency of their business operations. This research aims to create a forecasting model for outbound logistics volumes by applying design science research methodology to build three machine-learning models and evaluate their performance. The dataset was provided by Tetra Pak AB, the world's leading food processing and packaging solutions company. Research methods were mainly quantitative, based on statistical data and numerical calculations. Three algorithms were implemented: encoder–decoder networks based on Long Short-Term Memory (LSTM), Convolutional Long Short-Term Memory (ConvLSTM), and Convolutional Neural Network Long Short-Term Memory (CNN-LSTM). Comparisons are made using the average Root Mean Square Error (RMSE) across six distribution centers (DCs) of Tetra Pak: results obtained from the LSTM-based encoder–decoder networks are compared to those obtained from the ConvLSTM- and CNN-LSTM-based ones. All three algorithms performed very well in terms of training and test loss on our multivariate time-series dataset; however, based on the average RMSE score, there are slight differences between the algorithms across all DCs.
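As a concrete illustration of the kind of model this thesis compares, here is a minimal encoder–decoder LSTM sketch in Keras. The window lengths, feature count, and layer sizes are illustrative assumptions (the Tetra Pak data is proprietary), not the thesis's actual configuration; the ConvLSTM and CNN-LSTM variants follow the same encoder–decoder pattern with a different encoder block.

```python
# Minimal sketch of an encoder-decoder LSTM for multivariate volume
# forecasting. Window lengths, layer sizes, and feature counts are
# illustrative assumptions, not the thesis's actual configuration.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

N_IN, N_OUT, N_FEAT = 28, 7, 5   # 28 days in, 7 days out, 5 features (assumed)

model = keras.Sequential([
    layers.LSTM(64, input_shape=(N_IN, N_FEAT)),   # encoder: compress history
    layers.RepeatVector(N_OUT),                    # repeat context per output step
    layers.LSTM(64, return_sequences=True),        # decoder
    layers.TimeDistributed(layers.Dense(1)),       # one volume per future day
])
model.compile(optimizer="adam", loss="mse")

# Dummy data standing in for the (proprietary) distribution-center series.
X = np.random.rand(256, N_IN, N_FEAT)
y = np.random.rand(256, N_OUT, 1)
model.fit(X, y, epochs=2, batch_size=32, verbose=0)

# RMSE, the comparison metric used across the six DCs
pred = model.predict(X, verbose=0)
rmse = np.sqrt(np.mean((pred - y) ** 2))
print(f"RMSE: {rmse:.4f}")
```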
162

Multiple Constant Multiplication Optimization Using Common Subexpression Elimination and Redundant Numbers

Al-Hasani, Firas Ali Jawad January 2014 (has links)
The multiple constant multiplication (MCM) operation is a fundamental operation in digital signal processing (DSP) and digital image processing (DIP). Examples of the MCM appear in finite impulse response (FIR) and infinite impulse response (IIR) filters, matrix multiplication, and transforms. The aim of this work is to minimize the complexity of the MCM operation using the common subexpression elimination (CSE) technique and redundant number representations. The CSE technique searches for and eliminates common digit patterns (subexpressions) among MCM coefficients. More common subexpressions can be found by representing the MCM coefficients using redundant number representations. A CSE algorithm is proposed that works on a class of redundant numbers called the zero-dominant set (ZDS). The ZDS is an extension of the minimum Hamming weight (MHW) representations, i.e. those with the minimum number of non-zero digits. Using the ZDS improves the performance of CSE algorithms compared with using the MHW representations. The disadvantage of using the ZDS is that it increases the possibility of overlapping patterns (digit collisions), in which one or more digits are shared between a number of patterns: eliminating one pattern then destroys the others, because the common digits are removed. A pattern preservation algorithm (PPA) is developed to resolve such overlapping patterns in the representations. Tree and graph encoders are proposed to generate a larger space of number representations; these algorithms generate redundant representations of a value for a given digit set, radix, and wordlength. The tree encoder is modified to search for common subexpressions simultaneously with the generation of the representation tree. A complexity measure is proposed to compare the subexpressions at each node, and the algorithm stops generating the rest of the representation tree when it finds subexpressions with maximum sharing; this reduces the search space while minimizing the hardware complexity. A combinatoric model of the MCM problem is also proposed. The model is obtained by enumerating all the possible solutions of the MCM, which together form a graph called the demand graph; arc routing on this graph yields the solutions of the MCM problem. Similar arc routing is found in capacitated arc routing problems such as the winter salting problem. An ant colony optimization (ACO) meta-heuristic is proposed to traverse the demand graph. The ACO is simulated on a PC in the Python programming language to verify the correctness of the model and the operation of the ACO, and a parallel simulation of the ACO is carried out on a multi-core supercomputer using the C++ Boost Graph Library.
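To make the demand-graph traversal concrete, below is a minimal ant-colony sketch in Python, the language the thesis itself used for the PC simulation. The toy graph, cost function, and parameter values are illustrative assumptions; the actual demand graph encodes MCM solutions as arcs.

```python
# Minimal ant-colony sketch of the kind of demand-graph traversal described
# above. The graph, pheromone parameters, and cost function are illustrative
# assumptions; the thesis's actual demand graph encodes MCM solutions.
import random

graph = {           # hypothetical demand graph: node -> {neighbor: cost}
    "s": {"a": 2, "b": 1},
    "a": {"t": 2},
    "b": {"t": 4},
    "t": {},
}
pheromone = {(u, v): 1.0 for u in graph for v in graph[u]}
ALPHA, BETA, RHO, Q = 1.0, 2.0, 0.5, 1.0  # typical ACO parameters

def walk(start="s", goal="t"):
    """One ant walks from start to goal, choosing arcs stochastically."""
    path, node = [], start
    while node != goal:
        nbrs = list(graph[node])
        weights = [pheromone[(node, n)] ** ALPHA * (1.0 / graph[node][n]) ** BETA
                   for n in nbrs]
        nxt = random.choices(nbrs, weights=weights)[0]
        path.append((node, nxt))
        node = nxt
    return path, sum(graph[u][v] for u, v in path)

for _ in range(50):                       # iterations
    for edge in pheromone:                # evaporation
        pheromone[edge] *= (1 - RHO)
    path, cost = walk()
    for edge in path:                     # deposit, proportional to quality
        pheromone[edge] += Q / cost

print(max(pheromone, key=pheromone.get))  # arc with the strongest trail
```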
163

The modernization of a DOS-based time-critical solar cell LBIC measurement system.

Hjern, Gunnar January 2019 (has links)
LBIC is a technique for scanning the local quantum efficiency of solar cells. This kind of measurement needs highly specialized, time-critical controlling software. In 1996 the client, professor Markus Rinio, constructed an LBIC system and wrote the controlling software as a Turbo Pascal 7.0 application running under the MS-DOS 6.22 operating system. By now (2018), both the software and several hardware components are in dire need of modernization. This thesis thoroughly describes several important aspects of this work and the considerations needed for a successful result. This includes foundational choices about the software architecture, the choice of a suitable operating system, the threading model, and the adaptation to new hardware with vastly different behavior. The project also included a new hardware module for position reporting and instrument triggering, as well as several adaptations to transform the DOS-based LBIC software into a pleasant, modern GUI application.
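A common pattern for such time-critical measurement software is to decouple the hardware-facing loop from the GUI thread; the sketch below shows the idea in Python with a worker thread feeding a queue. This is an illustrative pattern only: the thesis's actual application, threading model, and APIs are not reproduced here, and all names are hypothetical stand-ins.

```python
# Illustrative producer/consumer split for a time-critical measurement loop:
# a worker thread talks to the instrument at its own pace while the GUI side
# drains a queue. All names here are hypothetical stand-ins.
import queue
import threading
import time

samples = queue.Queue()

def acquisition_loop(stop: threading.Event):
    """Time-critical loop: never blocked by GUI work."""
    position = 0
    while not stop.is_set():
        value = 0.42 * position      # stand-in for an LBIC photocurrent reading
        samples.put((position, value))
        position += 1
        time.sleep(0.001)            # stand-in for the instrument's cycle time

stop = threading.Event()
threading.Thread(target=acquisition_loop, args=(stop,), daemon=True).start()

time.sleep(0.05)                     # "GUI" side: drain whatever arrived
stop.set()
while not samples.empty():
    print(samples.get())
```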
164

Traitement des signaux et images en temps réel : "implantation de H.264 sur MPSoC" / Real-time signal and image processing: "implementation of H.264 on MPSoC"

Messaoudi, Kamel 19 December 2012 (has links)
This thesis was carried out under joint supervision between Badji Mokhtar University (LERICA Laboratory) and the University of Burgundy (LE2I Laboratory, UMR CNRS 5158). It is a contribution to the study and implementation of the H.264/AVC encoder. The evolution of video coding standards has historically demanded stringent performance of the compression process, which imposes the development of platforms that perform much better in terms of computing power, flexibility, and portability. Such demands are necessary to fulfill the requirements of the different treatments and to meet "real-time" processing constraints. In order to ensure real-time performance, a possible solution is to make use of systems on chip (SoC) or multiprocessor systems on chip (MPSoC) built on reconfigurable FPGA-based platforms. The objective of this thesis is the study and implementation of algorithms for signal and image processing (in particular the H.264/AVC standard), with special attention given to achieving real-time coding-decoding cycles. We use two Xilinx FPGA platforms (ML501 and XUPV5) to implement our architectures. In the literature, there are already several implementations of the decoder; for the encoder, despite the enormous efforts made, work remains to optimize the algorithms and extract the inherent parallelism, especially given the variety of profiles and levels of H.264/AVC. Initially, we propose a hardware implementation of a memory controller specifically targeted to the H.264/AVC encoder. This controller is obtained by adding, to the DDR2 memory controller of the two Xilinx platforms, an intelligent layer capable of calculating addresses and retrieving the data needed by the encoder's processing modules. Afterwards, we propose hardware (RTL) implementations of the processing modules of the H.264 encoder. In these implementations, we exploit the principles of parallelism and pipelining permitted by the encoder, taking into account the constraints imposed by the strong inter-block dependency. We propose several enhancements and new techniques in the Intra chain modules and the deblocking filter. Finally, we use the modules implemented in hardware for a hardware/software implementation of the H.264/AVC encoder. Synthesis and simulation results, using both Xilinx platforms, are shown and compared with other existing implementations.
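To illustrate the address-generation role of the proposed intelligent layer, here is a simplified sketch that computes the row addresses covering one 16x16 macroblock in a row-major frame buffer. The frame geometry, base address, and pixel width are assumptions for demonstration; the real controller works at the level of DDR2 bursts and encoder-specific access patterns.

```python
# Simplified sketch of the address-generation idea behind the proposed memory
# controller: given a frame stored row-major in DDR2, compute the start
# addresses covering one 16x16 macroblock. Frame geometry, base address, and
# pixel width are illustrative assumptions.
FRAME_W, BYTES_PER_PIXEL, BASE = 1280, 1, 0x1000_0000
MB = 16  # macroblock edge in pixels

def macroblock_addresses(mb_x, mb_y):
    """Return the start address of each of the 16 rows of macroblock (mb_x, mb_y)."""
    x0, y0 = mb_x * MB, mb_y * MB
    return [BASE + ((y0 + row) * FRAME_W + x0) * BYTES_PER_PIXEL
            for row in range(MB)]

for addr in macroblock_addresses(mb_x=3, mb_y=2):
    print(hex(addr))
```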
165

Analysis of 3D human gait reconstructed with a depth camera and mirrors

Nguyen, Trong Nguyen 08 1900 (has links)
The problem of assessing human gait has received great attention in the literature, since gait analysis is one of the key components of healthcare. Marker-based and multi-camera systems are widely employed to deal with this problem; however, such systems usually require specific equipment with a high price and/or high computational cost. In order to reduce the cost of devices, we focus on a gait analysis system that employs only one depth sensor. The principle of our work is similar to multi-camera systems, but the collection of cameras is replaced by one depth sensor and mirrors. Each mirror in our setup plays the role of a camera that captures the scene from a different viewpoint. Since we use only one camera, the synchronization step can be avoided and the cost of the devices is also reduced. Our studies can be separated into two parts: 3D reconstruction and gait analysis, where the result of the former is used as the input of the latter. Our system for 3D reconstruction is built with a depth camera and two mirrors. Two types of depth sensor, distinguished by their depth-estimation scheme, were employed in our work. With the structured light (SL) technique integrated into the Kinect 1, we perform the 3D reconstruction based on geometrical optics. In order to increase the level of detail of the reconstructed 3D model, the Kinect 2, with time-of-flight (ToF) depth measurement, is used for image acquisition instead of the previous generation. However, due to multiple reflections on the mirrors, depth distortion occurs in our setup. We therefore propose a simple approach for reducing this distortion before applying geometrical optics to reconstruct a point cloud of the 3D object. For the task of gait analysis, we propose various alternative approaches focusing on the problem of gait normality/symmetry measurement; these are expected to be useful for clinical applications such as monitoring a patient's recovery after surgery. The methods comprise model-free and model-based approaches with different pros and cons. In this dissertation, we present three methods that directly process the point clouds reconstructed in the previous part. The first uses cross-correlation of the left and right half-bodies to assess gait symmetry, while the other two employ deep auto-encoders to measure gait normality.
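The first, model-free method can be illustrated with a short sketch: a gait feature extracted from the left half-body is cross-correlated with the corresponding right half-body feature, and the peak correlation serves as a symmetry score. The synthetic sine signals below are stand-ins for features computed from the reconstructed point clouds.

```python
# Sketch of the cross-correlation idea used by the first (model-free) method:
# compare a gait signal from the left half-body with the same signal from the
# right half-body, which lags by roughly half a gait cycle. The 1-D signals
# here are synthetic stand-ins for point-cloud-derived features.
import numpy as np

t = np.linspace(0, 4 * np.pi, 400)
left = np.sin(t)                                            # left-leg feature over time
right = np.sin(t + np.pi) + 0.05 * np.random.randn(t.size)  # right, half-cycle shifted

def symmetry_score(a, b):
    """Peak normalized cross-correlation between the two half-body signals."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    xc = np.correlate(a, b, mode="full") / a.size
    return xc.max()                  # close to 1.0 for a symmetric gait

print(f"symmetry: {symmetry_score(left, right):.3f}")
```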
166

Programování mikrokontrolérů C2000 v programu MATLAB/Simulink / MATLAB/Simulink Model Based Design Using C2000 Microcontrollers

oupal, Ondřej January 2020 (has links)
The aim of this thesis is to explore the possibilities of rapid control prototyping, to describe the process of creating software applications in the MATLAB/Simulink environment for the Texas Instruments LaunchPad development kit, and to create applications for DC and induction motor control in this environment. The work describes an application for unipolar/bipolar control of the H-bridge power converter of a DC motor, with measurement of output currents and speed and their display in real time over a serial control interface. The thesis also describes scalar and vector control of an induction motor. All software applications, together with measurements, were created in MATLAB/Simulink and are attached to the thesis.
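As an illustration of the scalar control mentioned above, the sketch below implements the classic constant-V/f law, in which the stator voltage tracks the demanded frequency to keep the flux roughly constant. The rated values and low-speed boost are assumed for demonstration, not taken from the thesis.

```python
# Sketch of the scalar (V/f) control law: below rated speed, stator voltage
# is kept proportional to frequency so the stator flux stays roughly
# constant. Rated values are illustrative, not the thesis's motor parameters.
V_RATED, F_RATED, V_BOOST = 400.0, 50.0, 10.0  # volts, hertz, low-speed boost

def volts_for_frequency(f_cmd: float) -> float:
    """Return the stator voltage command for a demanded frequency."""
    if f_cmd >= F_RATED:
        return V_RATED               # field-weakening region: voltage saturates
    return V_BOOST + (V_RATED - V_BOOST) * (f_cmd / F_RATED)

for f in (5.0, 25.0, 50.0, 60.0):
    print(f"{f:5.1f} Hz -> {volts_for_frequency(f):6.1f} V")
```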
167

Segmentace lézí roztroušené sklerózy pomocí hlubokých neuronových sítí / Segmentation of multiple sclerosis lesions using deep neural networks

Sasko, Dominik January 2021 (has links)
The main goal of this master's thesis was the automatic segmentation of multiple sclerosis lesions in MRI scans. State-of-the-art segmentation methods based on deep neural networks were tested, and two weight-initialization approaches were compared: transfer learning and self-supervised learning. Automatic segmentation of multiple sclerosis lesions is inherently difficult, primarily because of the high imbalance of the dataset (brain scans usually contain only a small amount of damaged tissue). Another challenge is the manual annotation of these lesions: two different physicians may label different parts of the brain as damaged, and the Dice coefficient between their annotations is approximately 0.86. Simplifying the annotation process through automation could improve lesion-load estimation, which in turn could improve the diagnosis of individual patients. Our goal was to design two techniques that use transfer learning to pre-train weights that could later improve the results of current segmentation models. The theoretical part describes the taxonomy of artificial intelligence, machine learning, and deep neural networks, and their use in image segmentation. Multiple sclerosis, its types, symptoms, diagnosis, and treatment are then described. The practical part begins with data preprocessing. First, the brain scans were resampled to the same resolution with the same voxel size, because three different datasets acquired with different scanners from different manufacturers were used. One dataset also included the skull, which was removed using the FSL tool so that only the patient's brain remained. We used 3D scans (FLAIR, T1, and T2 modalities), which were split into individual 2D slices and fed to a neural network with an encoder-decoder architecture. The training dataset contained 6720 slices with a resolution of 192 x 192 pixels (after removing slices whose masks contained no positive values). The loss function was Combo loss (a combination of Dice loss and a modified cross-entropy). The first method used weights pre-trained on the ImageNet dataset for the encoder of a U-Net architecture, with the encoder weights either frozen or unfrozen, and compared both against random weight initialization; in this case only the FLAIR modality was used. Transfer learning increased the monitored metric from approximately 0.4 to 0.6, and the difference between frozen and unfrozen encoder weights was around 0.02. The second proposed technique used a self-supervised context encoder with Generative Adversarial Networks (GAN) to pre-train the weights. This network used all three modalities, including slices with empty masks (23040 images in total). The task of the GAN was to inpaint brain scans occluded by a black checkerboard mask. The learned weights were then loaded into the encoder and applied to our segmentation problem. This experiment did not yield better results, with DSC values of 0.29 and 0.09 (unfrozen and frozen encoder weights, respectively). The sharp drop in the metric may have been caused by using weights pre-trained on too-distant tasks (segmentation vs. self-supervised context encoding), as well as by the difficulty of the task given the imbalanced dataset.
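For reference, a sketch of the Combo loss described above: a weighted combination of soft Dice loss and a cross-entropy term modified to up-weight the rare lesion class, which is a common remedy for exactly the class imbalance this abstract describes. The weighting constants alpha and beta are illustrative assumptions, not the thesis's tuned values.

```python
# Sketch of the Combo loss used for training: a weighted sum of soft Dice
# loss and a cross-entropy term modified to up-weight the lesion class.
# The weighting constants are illustrative assumptions.
import tensorflow as tf

def combo_loss(y_true, y_pred, alpha=0.5, beta=0.7, eps=1e-7):
    """alpha balances Dice vs. cross-entropy; beta up-weights the lesion class."""
    y_pred = tf.clip_by_value(y_pred, eps, 1.0 - eps)
    # soft Dice over the whole batch
    inter = tf.reduce_sum(y_true * y_pred)
    dice = (2.0 * inter + eps) / (tf.reduce_sum(y_true) + tf.reduce_sum(y_pred) + eps)
    # cross-entropy modified to weight positives (lesions) more heavily
    ce = -tf.reduce_mean(beta * y_true * tf.math.log(y_pred)
                         + (1.0 - beta) * (1.0 - y_true) * tf.math.log(1.0 - y_pred))
    return alpha * ce + (1.0 - alpha) * (1.0 - dice)

# toy check on a 192x192 mask-shaped tensor with ~3% positive voxels
y_true = tf.cast(tf.random.uniform((1, 192, 192, 1)) > 0.97, tf.float32)
y_pred = tf.random.uniform((1, 192, 192, 1))
print(float(combo_loss(y_true, y_pred)))
```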
168

Systém pro zobrazování černobílých snímků v nepravých barvách (Pseudocolor) / System for Imaging of the Monochromatic Pictures in the Pseudo Color

Kaděrka, Petr January 2008 (has links)
This diploma thesis deals with the possibilities of displaying black-and-white images in pseudocolors (Pseudocolor). The individual methods, both software and hardware, are described, and a detailed block diagram of the Pseudocolor system is proposed on the basis of the theoretical knowledge gained. The individual functional blocks of the diagram are described and their circuit designs realized. Some of the functional blocks are simulated in PSpice and accompanied by the corresponding signal waveforms. A suitable choice of active and passive components is made, from which the overall Pseudocolor system is assembled. All source materials for the realization of the device are given: the double-sided printed circuit board drawing, the component layout, and the component specification.
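The software side of pseudocolor mapping reduces to a 256-entry lookup table from gray level to RGB, which is also how a hardware LUT placed in front of a video DAC behaves. The sketch below shows the idea with an illustrative color map of my own choosing, not the thesis's.

```python
# Sketch of pseudocolor mapping: each 8-bit gray level is looked up in a
# 256-entry RGB table. The color map itself is an illustrative choice.
import numpy as np

lut = np.zeros((256, 3), dtype=np.uint8)      # gray level -> (R, G, B)
levels = np.arange(256)
lut[:, 0] = np.clip(2 * levels - 255, 0, 255)     # red ramps up in the top half
lut[:, 1] = 255 - np.abs(2 * levels - 255)        # green peaks mid-scale
lut[:, 2] = np.clip(255 - 2 * levels, 0, 255)     # blue dominates dark tones

gray = np.random.randint(0, 256, (4, 4), dtype=np.uint8)  # stand-in monochrome image
color = lut[gray]                             # vectorized lookup -> (4, 4, 3) RGB
print(color.shape)
```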
169

Mikroprocesorový modul řízení SS motoru se zpětnou vazbou / DC motor controller with feedback

Dundáček, Martin January 2008 (has links)
This thesis deals with DC motor control. The main goal was the design and realization of a DC motor controller module with feedback. The first part covers methods for DC motor control and the hardware design of the control module; the second part describes the development of the module's software and its testing, and sums up the results.
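The abstract does not name the control law, so as a purely illustrative sketch of such a feedback loop, here is a PI speed controller acting on a crude first-order motor model; the gains and the model itself are assumptions for demonstration only.

```python
# Illustrative sketch of a feedback speed loop of the kind such a module
# implements; the PI law and first-order motor model are assumptions.
def run_speed_loop(setpoint=100.0, kp=0.8, ki=2.0, dt=0.01, steps=500):
    speed, integral = 0.0, 0.0
    for _ in range(steps):
        error = setpoint - speed          # feedback: measured vs. demanded speed
        integral += error * dt
        duty = kp * error + ki * integral # PI law -> PWM duty (unclamped here)
        # crude first-order DC motor response to the applied duty
        speed += (duty - speed) * dt * 5.0
    return speed

print(f"final speed: {run_speed_loop():.1f}")
```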
170

Modul digitálního signálového procesoru pro ruční RFID čtečku / Digital Signal Processor for Handheld RFID Reader

Benetka, Miroslav January 2008 (has links)
This diploma thesis deals with the design and realization of a digital signal processor module for a handheld RFID reader working in the UHF band. It utilises the special-purpose chip EM4298 for RFID signal processing. The module is controlled by an ATmega32L microcontroller, which communicates with a PC over the USB bus. The EM4298 is configured by a service program, which also processes the identification data received from tags. The source code for the microcontroller was created in AVR Studio 4.13, and the PC-side service program in C++ Builder 6.0. The thesis further covers the design and realization of an analog interface and a UHF transceiver for wireless communication with tags. The Webench program, freely available on the internet, was used for the analog interface design, and PSpice 10.0 was used to verify the parameters of the analog interface. The UHF transceiver is built around a MAX2903 chip (transmitter) and an AD8347 (receiver), together with transmitting and receiving antennas.
