About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
171

A Comparative Study of the Quality between Formality Style Transfer of Sentences in Swedish and English, leveraging the BERT model / En jämförande studie av kvaliteten mellan överföring av formalitetsstil på svenska och engelska meningar, med hjälp av BERT-modellen

Lindblad, Maria January 2021 (has links)
Formality Style Transfer (FST) is the task of automatically transforming a piece of text from one level of formality to another. Previous research has investigated different methods of performing FST on English text, but at the time of this project there were, to the author's knowledge, no previous studies analysing the quality of FST on Swedish text. The purpose of this thesis was to investigate how a model trained for FST in Swedish performs. This was done by comparing the quality of an FST model trained on Swedish text to that of an equivalent FST model trained on English text. Both models were implemented as encoder-decoder architectures, warm-started using two pre-existing Bidirectional Encoder Representations from Transformers (BERT) models, pre-trained on Swedish and English text respectively. The two FST models were fine-tuned for both the informal-to-formal and the formal-to-informal task, using Grammarly's Yahoo Answers Formality Corpus (GYAFC). The Swedish version of GYAFC was created through automatic machine translation of the original English version. The Swedish corpus was then evaluated on three criteria: meaning preservation, formality preservation and fluency preservation. The results of the study indicated that the Swedish model had the capacity to match the quality of the English model but was held back by the inferior quality of the Swedish corpus. The study also highlighted the need for task-specific corpora in Swedish.
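As an illustration of the warm-starting step described in this abstract, the sketch below builds a BERT-to-BERT encoder-decoder with the Hugging Face Transformers library. The checkpoint name and the example sentence pair are assumptions for illustration only; the thesis does not specify them here, and this is not the author's code.

```python
# Illustrative sketch (not the thesis code): warm-starting an encoder-decoder
# from a pre-trained BERT checkpoint, as described in the abstract, using the
# Hugging Face Transformers library. Checkpoint name is an assumption.
from transformers import BertTokenizerFast, EncoderDecoderModel

checkpoint = "KB/bert-base-swedish-cased"  # hypothetical choice for the Swedish model
tokenizer = BertTokenizerFast.from_pretrained(checkpoint)

# Both encoder and decoder weights are initialised from the same BERT checkpoint;
# the cross-attention layers are newly initialised and learned during fine-tuning.
model = EncoderDecoderModel.from_encoder_decoder_pretrained(checkpoint, checkpoint)
model.config.decoder_start_token_id = tokenizer.cls_token_id
model.config.pad_token_id = tokenizer.pad_token_id

# Fine-tuning pairs would come from a (machine-translated) GYAFC-style corpus;
# this informal/formal pair is invented for illustration.
informal = tokenizer("hej, fixar du det där asap?", return_tensors="pt")
formal = tokenizer("Hej, skulle du kunna ordna detta snarast?", return_tensors="pt")
outputs = model(
    input_ids=informal.input_ids,
    attention_mask=informal.attention_mask,
    labels=formal.input_ids,
)
loss = outputs.loss  # minimised over the corpus during fine-tuning
```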
172

Machine virtuelle universelle pour codage vidéo reconfigurable / A universal virtual machine for reconfigurable video coding

Gorin, Jérôme 22 November 2011 (has links)
This thesis proposes a new paradigm for representing applications in virtual machines, one that abstracts the architecture of the underlying computer system. Current virtual machines rely on a single application representation that abstracts the machine's instructions, together with an execution model that maps those instructions onto the target machine. While these two models make applications portable across a wide range of systems, they cannot express concurrency between instructions. Expressing concurrency is nevertheless essential for optimizing the processing of an application according to the resources available on the target platform. We first develop a "universal" application representation for virtual machines based on dataflow graph modeling: an application is modeled as a directed graph whose vertices are computation units (the actors) and whose edges represent the flow of data between them. Each computation unit can be processed independently of the others on separate resources, which makes the concurrency between instructions explicit. Exploiting this new description formalism requires a change in programming rules. To that end, we introduce and define a "Minimal and Canonical Representation" of actors, based both on the actor-oriented programming language CAL and on the instruction-abstraction models of existing virtual machines. Our major contribution, which integrates the two proposed representations, is the development of a "Universal Virtual Machine" (UVM) whose specificity is to manage adaptation, optimization and scheduling mechanisms on top of the Low-Level Virtual Machine (LLVM) compiler infrastructure. The relevance of the UVM is demonstrated in the normative context of MPEG Reconfigurable Video Coding (RVC): MPEG RVC provides reference decoder applications conforming to the MPEG-4 Part 2 Simple Profile in the form of dataflow graphs. One application of this thesis is a dataflow-graph model of a decoder conforming to the MPEG-4 Part 10 Constrained Baseline Profile, which is twice as complex as the MPEG RVC reference applications. Experimental results show close to a twofold execution performance gain on two-core platforms compared to single-core execution. The optimizations developed yield a further 25% gain on this performance while halving compilation times. This work demonstrates the operational and universal character of the standard, whose scope of use extends beyond video to other signal-processing domains (3D, audio, images, ...).
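The sketch below is a minimal, self-contained illustration of the dataflow representation the abstract describes: actors as graph vertices, FIFO token queues as edges, and an actor firing whenever its inputs hold tokens. It is illustrative only and does not reproduce the thesis's CAL/LLVM tool chain.

```python
# Minimal illustrative sketch of a dataflow program: actors connected by FIFO
# edges, each firing whenever enough input tokens are available. This mirrors
# the representation described above, not the thesis's CAL/LLVM tool chain.
from collections import deque

class Actor:
    def __init__(self, name, fn, inputs, outputs):
        self.name, self.fn = name, fn
        self.inputs, self.outputs = inputs, outputs   # lists of FIFO deques

    def can_fire(self):
        return all(len(q) > 0 for q in self.inputs)

    def fire(self):
        tokens = [q.popleft() for q in self.inputs]
        for q, value in zip(self.outputs, self.fn(*tokens)):
            q.append(value)

# Edges (FIFO queues) of the graph.
src_to_scale = deque([1, 2, 3, 4])      # pre-loaded input stream
scale_to_sink = deque()
results = deque()

scale = Actor("scale", lambda x: (10 * x,), [src_to_scale], [scale_to_sink])
sink = Actor("sink", lambda x: (x,), [scale_to_sink], [results])

# A trivial scheduler: repeatedly fire any actor that is ready. Because actors
# interact only through their edges, ready actors could run on separate cores.
actors = [scale, sink]
while any(a.can_fire() for a in actors):
    for a in actors:
        if a.can_fire():
            a.fire()

print(list(results))  # [10, 20, 30, 40]
```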
173

Návrh testeru paměti RAM ve VHDL / RAM-Tester Design in VHDL

Charvát, Jiří Unknown Date (has links)
This paper describes various approaches to hardware testing of semiconductor memories. It describes the principles of the basic memory types, the way each of them stores information, and their communication protocols. The following part deals with common faults that may occur in a memory. The paper also describes the implementation of a memory model and a tester designed in the VHDL language. Errors can be injected into the memory and are later detected by the tester. The final section shows the response of the tester to various error types according to the error-detection method used. The paper focuses in particular on fault detection using variants of the march test.
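For orientation, the sketch below runs one classic march algorithm (March C-) against a small simulated bit-oriented RAM with an injected stuck-at-0 fault. It illustrates the style of test the abstract refers to; it is not the author's VHDL design, and the fault model and memory size are assumptions.

```python
# Illustrative sketch (not the thesis's VHDL design): the March C- algorithm
# run against a small simulated bit-oriented RAM with an injected stuck-at-0
# fault, to show the style of test the abstract refers to.
class FaultyRAM:
    def __init__(self, size, stuck_at_zero=None):
        self.cells = [0] * size
        self.stuck = stuck_at_zero          # address of a stuck-at-0 cell, or None

    def write(self, addr, bit):
        self.cells[addr] = 0 if addr == self.stuck else bit

    def read(self, addr):
        return self.cells[addr]

def march_c_minus(ram, size):
    """Return the addresses where a read did not match the expected value."""
    errors = set()
    up, down = range(size), range(size - 1, -1, -1)
    elements = [                      # (address order, expected read, write value)
        (up,   None, 0),              # (w0)
        (up,   0,    1),              # up   (r0, w1)
        (up,   1,    0),              # up   (r1, w0)
        (down, 0,    1),              # down (r0, w1)
        (down, 1,    0),              # down (r1, w0)
        (down, 0,    None),           # (r0)
    ]
    for order, expect, write in elements:
        for addr in order:
            if expect is not None and ram.read(addr) != expect:
                errors.add(addr)
            if write is not None:
                ram.write(addr, write)
    return sorted(errors)

ram = FaultyRAM(size=16, stuck_at_zero=5)
print(march_c_minus(ram, 16))  # -> [5]: the stuck-at-0 cell is detected
```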
174

Residues in Succession U-Net for Fast and Efficient Segmentation

Sultana, Aqsa 11 August 2022 (has links)
No description available.
175

Разработка приемника-декодера сигналов стандарта ADS-B : магистерская диссертация / Development of the receiver-decoder for the ADS-B system

Чечеткин, В. А., Chechetkin, V. A. January 2014 (has links)
A prototype receiver-decoder for ADS-B signals was developed. During the development, a block diagram of the device was proposed and a comprehensive study of its components was carried out. Circuit schematics and printed-circuit-board layouts were proposed for devices such as the amplifier, the power injector, the low-noise amplifier and the logarithmic detector, and the topology of a filter with a double complementary spiral was also considered. Simulation results for the devices listed above, obtained in several software packages, are presented together with the results of their experimental study. Special software was created to simulate the standard's signals as well as to process them.
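The thesis is primarily about the receiver hardware, but as general background on the signal it handles, the sketch below shows the pulse-position demodulation used by 1090 MHz ADS-B extended-squitter messages: each bit lasts 1 microsecond, a pulse in the first half encodes '1' and a pulse in the second half encodes '0'. The 2 Msps magnitude stream and the test pattern are assumptions for illustration; this is not the thesis's processing software.

```python
# Illustrative sketch of ADS-B pulse-position demodulation: with two magnitude
# samples per bit (2 Msps), the bit decision is a comparison of the two half-bit
# samples. This is generic background, not the thesis's hardware or software.
import numpy as np

def demodulate_ppm(magnitude, start, n_bits=112):
    """Decode n_bits PPM bits from |signal| samples at 2 Msps, beginning at
    'start' (the first sample after the 8 microsecond preamble)."""
    bits = []
    for i in range(n_bits):
        first = magnitude[start + 2 * i]
        second = magnitude[start + 2 * i + 1]
        bits.append(1 if first > second else 0)
    return bits

# Tiny self-test: synthesise a noiseless PPM burst for a known bit pattern.
pattern = [1, 0, 1, 1, 0, 0, 1, 0]
samples = []
for b in pattern:
    samples += [1.0, 0.0] if b == 1 else [0.0, 1.0]
decoded = demodulate_ppm(np.array(samples), start=0, n_bits=len(pattern))
assert decoded == pattern
```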
176

Demand Forecasting of Outbound Logistics Using Neural Networks

Otuodung, Enobong Paul, Gorhan, Gulten January 2023 (has links)
Long- and short-term volume forecasting is essential to companies' logistics service operations. It is crucial for logistics companies to predict the volumes of goods that will be delivered to their various centers on any given day, as this helps them manage the efficiency of their business operations. This research aims to create a forecasting model for outbound logistics volumes by following a design science research methodology to build three machine-learning models and evaluate their performance. The dataset is provided by Tetra Pak AB, the world's leading food processing and packaging solutions company. The research methods were mainly quantitative, based on statistical data and numerical calculations. Three algorithms were implemented: encoder-decoder networks based on Long Short-Term Memory (LSTM), Convolutional Long Short-Term Memory (ConvLSTM), and Convolutional Neural Network Long Short-Term Memory (CNN-LSTM). Comparisons are made using the average Root Mean Square Error (RMSE) over six distribution centers (DCs) of Tetra Pak, with the results of the LSTM-based encoder-decoder networks compared to those of the ConvLSTM- and CNN-LSTM-based networks. All three algorithms performed very well in terms of training and test loss on our multivariate time series dataset; however, based on the average RMSE, there are slight differences between the algorithms for all DCs.
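The sketch below is a minimal Keras-style example of one of the model families named in this abstract, an LSTM-based encoder-decoder forecaster, together with the RMSE metric used for comparison. Window length, layer sizes, feature count and the random data are illustrative assumptions, not the thesis's configuration or dataset.

```python
# Minimal sketch of an LSTM encoder-decoder forecaster of the kind named in
# the abstract, plus the RMSE used for comparison. Window length, layer sizes
# and feature count are illustrative assumptions, not the thesis's settings.
import numpy as np
from tensorflow.keras import layers, models

n_in, n_out, n_features = 28, 7, 4   # 28 days of history -> 7-day volume forecast

model = models.Sequential([
    layers.Input(shape=(n_in, n_features)),
    layers.LSTM(64),                          # encoder: summarise the input window
    layers.RepeatVector(n_out),               # repeat context for each output step
    layers.LSTM(64, return_sequences=True),   # decoder: unroll the forecast horizon
    layers.TimeDistributed(layers.Dense(1)),  # one volume value per future day
])
model.compile(optimizer="adam", loss="mse")

# Dummy data standing in for one distribution center's multivariate history.
X = np.random.rand(256, n_in, n_features).astype("float32")
y = np.random.rand(256, n_out, 1).astype("float32")
model.fit(X, y, epochs=2, batch_size=32, verbose=0)

def rmse(y_true, y_pred):
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

print(rmse(y, model.predict(X, verbose=0)))  # average RMSE as used for comparisons
```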
177

300 MBPS CCSDS Processing Using FPGA's

Genrich, Thad J. 10 1900 (has links)
International Telemetering Conference Proceedings / October 28-31, 1996 / Town and Country Hotel and Convention Center, San Diego, California / This paper describes a 300 Mega Bit Per Second (MBPS) Front End Processor (FEP) prototype completed in early 1993. The FEP implements a patent-pending parallel frame synchronizer (frame sync) design in 12 Actel 1240 Field Programmable Gate Arrays (FPGA's). The FEP also provides (255,223) Reed-Solomon (RS) decoding and a High Performance Parallel Interface (HIPPI) output interface. The recent introduction of large RAM-based FPGA's allows greater high-speed data processing integration and flexibility to be achieved. A proposed FEP implementation based on Altera 10K50 FPGA's is described. This design can be implemented on a single-slot 6U VME module, and includes a PCI Mezzanine Card (PMC) for a commercial Fibre Channel or Asynchronous Transfer Mode (ATM) output interface module. Concepts for implementing (255,223) RS and Landsat 7 Bose-Chaudhuri-Hocquenghem (BCH) decoding in FPGA's are also presented. The paper concludes with a summary of the advantages of high-speed data processing in FPGA's over Application Specific Integrated Circuit (ASIC) based approaches. Other potential data processing applications are also discussed.
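For readers unfamiliar with frame synchronization, the sketch below shows the underlying idea: slide the 32-bit CCSDS attached sync marker (ASM, 0x1ACFFC1D) along a serial bit stream and flag offsets where the mismatch count is within a small tolerance. The paper's contribution is a parallel FPGA realisation of this kind of search; the toy stream and error tolerance below are illustrative assumptions.

```python
# Conceptual sketch of CCSDS frame synchronization: correlate the 32-bit
# attached sync marker (ASM, 0x1ACFFC1D) against a serial bit stream and report
# offsets whose mismatch count is within a tolerance. This shows only the idea,
# not the paper's patented parallel FPGA architecture.
import random

ASM = 0x1ACFFC1D
ASM_BITS = [(ASM >> (31 - i)) & 1 for i in range(32)]

def find_sync(bits, max_errors=0):
    """Return offsets where the ASM matches the bit stream within max_errors."""
    hits = []
    for offset in range(len(bits) - 32 + 1):
        errors = sum(b != a for b, a in zip(bits[offset:offset + 32], ASM_BITS))
        if errors <= max_errors:
            hits.append(offset)
    return hits

# Build a toy stream: random filler, then ASM + a dummy frame, repeated twice.
random.seed(0)
frame = [random.randint(0, 1) for _ in range(100)]   # stand-in for frame payload
stream = [random.randint(0, 1) for _ in range(17)] + ASM_BITS + frame + ASM_BITS + frame
print(find_sync(stream))  # expected ASM offsets: [17, 149]
```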
178

Analysis of the effects of phase noise and frequency offset in orthogonal frequency division multiplexing (OFDM) systems

Erdogan, Ahmet Yasin 03 1900 (has links)
Approved for public release, distribution is unlimited / Orthogonal frequency division multiplexing (OFDM) is being successfully used in numerous applications. It was chosen for the IEEE 802.11a wireless local area network (WLAN) standard, and it is being considered for fourth-generation mobile communication systems. Along with its many attractive features, OFDM has some principal drawbacks, of which sensitivity to frequency errors is the most dominant. In this thesis, the effects of frequency offset and phase noise on OFDM-based communication systems are investigated under a variety of channel conditions covering both indoor and outdoor environments. Simulation results for the performance of the OFDM system over these channels are presented. / Lieutenant Junior Grade, Turkish Navy
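The short numpy sketch below illustrates the sensitivity the abstract refers to: a carrier frequency offset of a fraction of the subcarrier spacing rotates and attenuates each subcarrier and leaks energy into the others (inter-carrier interference). The subcarrier count, constellation and offset values are assumptions for illustration, not the thesis's simulation parameters.

```python
# Illustrative numpy sketch of OFDM frequency-offset sensitivity: a carrier
# frequency offset of epsilon (in subcarrier spacings) degrades the demodulated
# constellation through attenuation, rotation and inter-carrier interference.
# Parameters are arbitrary, not those used in the thesis.
import numpy as np

N = 64                                   # subcarriers (e.g. IEEE 802.11a uses 64)
rng = np.random.default_rng(1)
constellation = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j])  # QPSK
symbols = rng.choice(constellation, size=N)

tx = np.fft.ifft(symbols) * np.sqrt(N)   # OFDM modulation (IFFT)

for epsilon in [0.01, 0.05, 0.2]:        # CFO as a fraction of subcarrier spacing
    n = np.arange(N)
    rx = tx * np.exp(2j * np.pi * epsilon * n / N)   # offset applied over one symbol
    est = np.fft.fft(rx) / np.sqrt(N)                # OFDM demodulation (FFT)
    error = est - symbols
    sdr_db = 10 * np.log10(np.sum(np.abs(symbols) ** 2) / np.sum(np.abs(error) ** 2))
    print(f"epsilon = {epsilon:4.2f}  ->  signal-to-distortion ratio {sdr_db:5.1f} dB")
# As epsilon grows, the ratio drops sharply: the ICI effect the thesis studies.
```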
179

Generalized belief propagation based TDMR detector and decoder

Matcha, Chaitanya Kumar, Bahrami, Mohsen, Roy, Shounak, Srinivasa, Shayan Garani, Vasic, Bane 07 1900 (has links)
Two-dimensional magnetic recording (TDMR) achieves high areal densities by reducing the size of a bit until it is comparable to the size of the magnetic grains, resulting in two-dimensional (2D) inter-symbol interference (ISI) and very high media noise. It is therefore critical to handle the media noise along with the 2D ISI detection. In this paper, we tune the generalized belief propagation (GBP) algorithm to handle the media noise seen in TDMR. We also provide an intuition into the nature of the hard decisions provided by the GBP algorithm. The performance of the GBP algorithm is evaluated over a Voronoi-based TDMR channel model, where the soft outputs from the GBP algorithm are used by a belief propagation (BP) algorithm to decode low-density parity-check (LDPC) codes.
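To make the 2D ISI concrete, the sketch below models a readback channel in which each sample mixes the written bit with its down-track and cross-track neighbours through a 2D read-head response plus noise, and shows why a symbol-by-symbol threshold is inadequate. The response coefficients, SNR and page size are assumptions; the paper's GBP detector and Voronoi media-noise model are not reproduced.

```python
# Minimal sketch of the 2D inter-symbol interference a TDMR detector faces:
# each readback sample mixes the written bit with its down-track and
# cross-track neighbours through a 2D read-head response, plus noise. Only the
# channel is modelled; the paper's GBP detector and Voronoi model are not.
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(7)
bits = rng.integers(0, 2, size=(32, 32))        # written bit array (one page)
x = 2 * bits - 1                                # bipolar magnetisation {-1, +1}

# Illustrative 3x3 read-head response: strongest at the centre, with leakage
# from cross-track and down-track neighbours (values are assumptions).
h = np.array([[0.05, 0.10, 0.05],
              [0.10, 0.40, 0.10],
              [0.05, 0.10, 0.05]])

snr_db = 15.0
signal = convolve2d(x, h, mode="same", boundary="fill")
noise_var = np.mean(signal ** 2) / (10 ** (snr_db / 10))
y = signal + rng.normal(scale=np.sqrt(noise_var), size=signal.shape)

# A naive symbol-by-symbol threshold ignores the 2D ISI; its error rate shows
# why a joint 2D detector such as GBP is needed.
naive = (y > 0).astype(int)
print("naive bit error rate:", np.mean(naive != bits))
```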
180

Near-capacity sphere decoder based detection schemes for MIMO wireless communication systems

Kapfunde, Goodwell January 2013 (has links)
The search for the closest lattice point arises in many communication problems and is known to be NP-hard. The Maximum Likelihood (ML) detector is the optimal detector, yielding an optimal solution to this problem, but at the expense of high computational complexity. Existing near-optimal methods used to solve the problem are based on the Sphere Decoder (SD), which searches for lattice points confined in a hyper-sphere around the received point. The SD has emerged as a powerful means of finding the solution to the ML detection problem for MIMO systems; however, the bottleneck lies in the determination of the initial radius. This thesis is concerned with the detection of transmitted wireless signals in Multiple-Input Multiple-Output (MIMO) digital communication systems as efficiently and effectively as possible. The main objective of this thesis is to design efficient ML detection algorithms for MIMO systems based on depth-first search (DFS) algorithms whilst taking into account the complexity and bit-error-rate performance requirements of advanced digital communication systems. The increased capacity and improved link reliability of MIMO systems, achieved without sacrificing bandwidth efficiency and transmit power, serve as the key motivation behind the study of MIMO detection schemes. The fundamental principles behind MIMO systems are explored in Chapter 2. A generic framework for linear and non-linear tree-search based detection schemes is then presented in Chapter 3. This paves the way for different methods of improving the achievable performance-complexity trade-off for all SD-based detection algorithms. Suboptimal detection schemes, in particular Minimum Mean Squared Error-Successive Interference Cancellation (MMSE-SIC), also serve as pre-processing as well as comparison techniques, whilst capacity-approaching Low-Density Parity-Check (LDPC) codes are employed to evaluate the performance of the proposed SD. Numerical and simulation results show that non-linear detection schemes yield better performance than linear detection schemes, however at the expense of a slight increase in complexity. The first contribution of this thesis, in Chapter 4, is the design of a near-ML-achieving SD algorithm for MIMO digital communication systems that reduces the number of search operations within the sphere-constrained search space at reduced detection complexity. In this design, the distance between the ML estimate and the received signal is used to control the lower- and upper-bound radii of the proposed SD to prevent NP-complete behaviour. The detection method is based on the DFS algorithm and Successive Interference Cancellation (SIC); the SIC ensures that the effects of dominant signals are effectively removed. Simulation results presented in this thesis show that by employing pre-processing detection schemes, the complexity of the proposed SD can be significantly reduced, though at a marginal performance penalty. The second contribution, in Chapter 5, is the determination of the initial sphere radius. The new initial radius proposed in this thesis is based on a variable parameter α, which is commonly chosen from experience to ensure that at least one lattice point exists inside the sphere with high probability. Using the variable parameter α, a new noise covariance matrix is defined which incorporates the number of transmit antennas, the energy of the transmitted symbols and the channel matrix. The new covariance matrix is then incorporated into the EMMSE model to generate an improved EMMSE estimate. The EMMSE radius is finally found by computing the distance between the sphere centre and the improved EMMSE estimate; this distance can be fine-tuned by varying the parameter α. The beauty of the proposed method is that it reduces the complexity of the pre-processing step of the EMMSE to that of the Zero-Forcing (ZF) detector without significant performance degradation of the SD, particularly at low Signal-to-Noise Ratios (SNR). More specifically, it will be shown through simulation results that using the EMMSE pre-processing step substantially improves performance whenever the complexity of the tree search is fixed or upper-bounded. The final contribution, in Chapter 6, is the design of the LRAD-MMSE-SIC based SD detection scheme, which introduces a trade-off between performance and increased computational complexity. The Lenstra-Lenstra-Lovász (LLL) algorithm will be utilised to orthogonalise the channel matrix H into a new, nearly orthogonal channel matrix H̄. The increased computational complexity introduced by the LLL algorithm will be significantly decreased by employing a sorted QR decomposition of the transformed channel H̄ into a unitary matrix and an upper triangular matrix that retains the properties of the channel matrix. The SIC algorithm will ensure that the interference due to dominant signals is minimised, while the LDPC code will effectively stop the propagation of errors within the entire system. Through simulations, it will be demonstrated that the proposed detector still approaches ML performance while requiring much lower complexity than the conventional SD.
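The sketch below is a compact illustration of the core depth-first sphere-decoder search this abstract builds on: QR-decompose the channel, enumerate symbol vectors layer by layer, prune any branch whose partial distance exceeds the current radius, and shrink the radius whenever a better leaf is found. The constellation, dimensions and initial radius are illustrative assumptions; this is not the thesis's EMMSE- and LLL-enhanced detector.

```python
# Compact sketch of a depth-first sphere decoder: after a QR decomposition of
# the channel, candidate symbol vectors are enumerated from the last layer
# upwards, branches whose partial distance exceeds the current radius are
# pruned, and the radius shrinks whenever a better leaf is found. Constellation,
# dimensions and initial radius are illustrative, not the thesis's scheme.
import numpy as np

def sphere_decode(H, y, alphabet, radius=np.inf):
    n = H.shape[1]
    Q, R = np.linalg.qr(H)
    z = Q.T @ y                                  # rotated receive vector
    best = {"x": None, "d2": radius ** 2}

    def search(level, x_partial, d2):
        if d2 >= best["d2"]:
            return                               # prune: outside current sphere
        if level < 0:
            best["x"], best["d2"] = x_partial.copy(), d2   # better leaf found
            return
        for s in alphabet:                       # try each constellation point
            x_partial[level] = s
            residual = z[level] - R[level, level:] @ x_partial[level:]
            search(level - 1, x_partial, d2 + residual ** 2)

    search(n - 1, np.zeros(n), 0.0)
    return best["x"]

# 4x4 real-valued MIMO example with BPSK symbols.
rng = np.random.default_rng(3)
H = rng.normal(size=(4, 4))
x_true = rng.choice([-1.0, 1.0], size=4)
y = H @ x_true + 0.1 * rng.normal(size=4)
print(sphere_decode(H, y, alphabet=(-1.0, 1.0)), x_true)
```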
