11

Konstrukce otočného lineárně přesuvného stolu s pevnou boční upínací deskou pro stroj FGU RT / Design of rotary-linear table with side clamping plate for machine FGU RT

Balák, Pavel January 2013 (has links)
The aim of this diploma thesis is the design of a rotary-linear table with a side clamping plate. The rotary table is mounted on a linear table that runs on profiled guideways. The work focuses on the design of the individual nodes and on their calculations.
12

Interpreting Shift Encoders as State Space models for Stationary Time Series

Donkoh, Patrick 01 May 2024 (has links) (PDF)
Time series analysis is a statistical technique used to analyze sequential data points collected or recorded over time. While traditional models such as autoregressive and moving-average models have performed adequately for time series analysis, the advent of artificial neural networks has produced models with improved performance. In this research, we propose a custom neural network, a shift encoder, that can capture the intricate temporal patterns of time series data. We then compare the sparse matrix of the shift encoder to the parameters of the autoregressive model and observe the similarities. We further explore how the state matrix in a state-space model can be replaced with the sparse matrix of the shift encoder.
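The link the abstract draws between autoregressive parameters and a state matrix can be made concrete with the standard companion-matrix construction. Below is a minimal sketch (not the thesis code; the AR(2) coefficients are invented for illustration) of an AR(2) process written as a linear state-space model whose state matrix holds the AR parameters in its first row and a pure shift in the second:

```python
# Minimal sketch: an AR(2) process as a state-space model with a
# companion ("shift") state matrix. Coefficients are assumed values.
import numpy as np

phi1, phi2 = 0.6, -0.2            # assumed AR(2) coefficients
A = np.array([[phi1, phi2],       # first row: the AR parameters
              [1.0,  0.0]])       # second row: shifts x_t into the x_{t-1} slot
C = np.array([1.0, 0.0])          # observation picks out the current value

rng = np.random.default_rng(0)
state = np.zeros(2)
series = []
for _ in range(200):
    state = A @ state + np.array([rng.normal(scale=0.5), 0.0])  # transition
    series.append(C @ state)                                    # observation
```

Replacing `A` with the learned sparse matrix of a shift encoder, as the abstract suggests, keeps the same state-space machinery while the transition weights come from the trained network.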
13

Transformer-Encoders for Mathematical Answer Retrieval

Reusch, Anja 27 May 2024 (has links)
Every day, an overwhelming volume of new information is produced. Information Retrieval systems play a crucial role in managing this deluge and help users find relevant information. At the same time, the volume of scientific literature is rapidly increasing, requiring powerful retrieval tools in this domain. Current Information Retrieval methods employ language models based on the Transformer architecture, referred to in this work as Transformer-Encoder models. These models are generally trained in two phases: first, pre-training on general natural language data is performed; then fine-tuning follows, which adapts the model to a specific task such as classification or retrieval. Since Transformer-Encoder models are pre-trained on general natural language corpora, they perform well on such documents. Scientific documents, however, exhibit different features. The language in these documents is characterized by mathematical notation, such as formulae. Applying Transformer-Encoder models to these documents results in low retrieval performance (effectiveness). A possible solution is to adapt the model to the new domain by further pre-training on a data set originating in that domain. This process is called Domain-Adaptive Pre-Training and has been successfully applied to other domains. Mathematical Answer Retrieval involves finding relevant answers from a large corpus for mathematical questions. Both the question and the answers can contain mathematical notation and natural language. To retrieve relevant answers, the model must 'understand' the problem specified in the question and the solution given in the answers. This property makes Mathematical Answer Retrieval well suited to evaluating whether Transformer-Encoder models can model mathematical and natural language in conjunction. Transformer-Encoder models showed low performance on this task compared to traditional retrieval approaches, which is surprising given their success in other domains. This thesis therefore deals with the domain adaptation of Transformer-Encoder models for the domain of mathematical documents and with the development of a retrieval approach using these models for Mathematical Answer Retrieval. We start by presenting a retrieval pipeline using the Cross-Encoder setup, a specific architecture for applying Transformer-Encoder models to retrieval. Then, we enhance the retrieval pipeline by adapting the pre-training schema of the Transformer-Encoder models to capture mathematical language better. Our evaluation demonstrates the strengths of the Cross-Encoder setup using our domain-adapted Transformer-Encoder models. In addition to these contributions, we also present an analysis framework to evaluate what knowledge of mathematics the models have learned. This framework is used to study Transformer-Encoder models before and after fine-tuning for mathematical retrieval. We show that Transformer-Encoder models learn structural features of mathematical formulae during pre-training but rely more on other, superficial information for Mathematical Answer Retrieval. These analyses also enable us to improve our fine-tuning setup further. In conclusion, our findings suggest that Transformer-Encoder models are a suitable and powerful approach for Mathematical Answer Retrieval.
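For readers unfamiliar with the Cross-Encoder setup mentioned above, here is a minimal sketch (assuming a generic Hugging Face checkpoint, not the thesis's domain-adapted models): the question and a candidate answer are encoded jointly, so attention can relate formula tokens and text tokens, and a single relevance score is produced for ranking.

```python
# Minimal Cross-Encoder sketch: question and answer scored jointly in
# one Transformer-Encoder pass. "bert-base-uncased" is a stand-in model.
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model_name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=1)

question = "How do I solve x^2 - 4 = 0?"
answer = "Factor as (x - 2)(x + 2) = 0, so x = 2 or x = -2."

# The pair is encoded together as a single input sequence.
inputs = tokenizer(question, answer, truncation=True, return_tensors="pt")
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()  # relevance score for ranking
print(score)
```

In practice each candidate answer from a first-stage retriever is scored this way and the candidates are re-ranked by score.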
14

Process monitoring with restricted Boltzmann machines

Moody, John Matali 04 1900 (has links)
Thesis (MScEng)--Stellenbosch University, 2014. / ENGLISH ABSTRACT: Process monitoring and fault diagnosis are used to detect abnormal events in processes. The early detection of such events or faults is crucial to continuous process improvement. Although principal component analysis and partial least squares are widely used for process monitoring and fault diagnosis in the metallurgical industries, these models are linear in principle; nonlinear approaches should provide more compact and informative models. The use of auto-associative neural networks, or autoencoders, provides a principled approach to process monitoring. However, until very recently, these multiple-layer neural networks have been difficult to train and have therefore not been used to any significant extent in process monitoring. With newly proposed algorithms based on the pre-training of the layers of the neural networks, it is now possible to train neural networks with very complex structures, i.e. deep neural networks. These neural networks can be used as autoencoders to extract features from high-dimensional data. In this study, the application of deep autoencoders in the form of restricted Boltzmann machines (RBMs) to the extraction of features from process data is considered. To date, these networks have mostly been used for data visualization and have not yet been applied in the context of fault diagnosis or process monitoring. The objective of this investigation is therefore to assess the feasibility of using restricted Boltzmann machines in various fault detection schemes. The use of RBMs in process monitoring schemes will be discussed, together with the application of these models in automated control frameworks.
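As a rough illustration of the idea (not the thesis code; scikit-learn's BernoulliRBM, the layer size, and the 3-sigma threshold below are all stand-in choices), an RBM can be fitted to normal operating data, its hidden-unit activations used as extracted features, and its per-sample pseudo-likelihood used as a simple monitoring statistic:

```python
# Sketch: RBM feature extraction and a crude fault-detection statistic.
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(1)
X_normal = rng.normal(size=(500, 10))   # stand-in for normal process data

scaler = MinMaxScaler()                 # BernoulliRBM expects inputs in [0, 1]
X = scaler.fit_transform(X_normal)

rbm = BernoulliRBM(n_components=4, learning_rate=0.05, n_iter=50, random_state=0)
features = rbm.fit_transform(X)         # hidden-unit activations as features

# Monitoring: samples scoring far below the training baseline are flagged.
baseline = rbm.score_samples(X)         # pseudo-likelihood per sample
threshold = baseline.mean() - 3 * baseline.std()
flags = rbm.score_samples(X[:5]) < threshold
```

A stacked (deep) variant would pre-train one RBM per layer on the previous layer's activations, in line with the layer-wise pre-training the abstract describes.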
15

Vers la segmentation automatique des organes à risque dans le contexte de la prise en charge des tumeurs cérébrales par l’application des technologies de classification de deep learning / Towards automatic segmentation of the organs at risk in brain cancer context via a deep learning classification scheme

Dolz, Jose 15 June 2016 (has links)
Brain cancer is a leading cause of death and disability worldwide, accounting for 14.1 million new cancer cases and 8.2 million deaths in 2012 alone. Radiotherapy and radiosurgery are among the arsenal of available techniques to treat it. Because both techniques involve the delivery of a very high dose of radiation, the tumor as well as the surrounding healthy tissues must be precisely delineated. In practice, delineation is performed manually by experts, with very little machine assistance. It is thus a highly time-consuming process with significant variation between the contours produced by different experts. Radiation oncologists, radiology technologists, and other medical specialists therefore spend a substantial portion of their time on medical image segmentation. If, by automating this process, it is possible to achieve a more repeatable set of contours that can be agreed upon by the majority of oncologists, this would improve the quality of treatment. Additionally, any method that can reduce the time taken to perform this step will increase patient throughput and make more effective use of the skills of the oncologist. Nowadays, automatic segmentation techniques are rarely employed in clinical routine. When they are, they typically rely on registration approaches. In these techniques, anatomical information is exploited by means of images already annotated by experts, referred to as atlases, which are deformed and matched to the patient under examination. The quality of the deformed contours depends directly on the quality of the deformation. Nevertheless, registration techniques encompass regularization models of the deformation field whose parameters are complex to adjust and whose quality is difficult to evaluate. The integration of tools that assist in the segmentation task is therefore highly desirable in clinical practice. The main objective of this thesis is to provide radio-oncology specialists with automatic tools to delineate the organs at risk of patients undergoing brain radiotherapy or stereotactic radiosurgery. To achieve this goal, the main contributions of this thesis are presented along two major axes. First, we consider the use of one of the latest hot topics in artificial intelligence to tackle the segmentation problem, namely deep learning. This set of techniques presents some advantages with respect to classical machine learning methods, which will be exploited throughout this thesis. The second axis is dedicated to proposed image features, mainly associated with the texture and contextual information of MR images. These features, which are not used in classical machine-learning-based methods for segmenting brain structures, lead to improvements in segmentation performance. We therefore propose the inclusion of these features in a deep network. We demonstrate in this work the feasibility of using such a deep-learning-based classification scheme for this particular problem. We show that the proposed method leads to high performance, both in accuracy and efficiency. We also show that the automatic segmentations provided by our method lie within the variability of the experts. The results demonstrate that our method not only outperforms a state-of-the-art classifier, but also provides results that would be usable in radiation treatment planning.
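A minimal sketch of the kind of patch-wise classification scheme described (the architecture below is a generic stand-in, not the network proposed in the thesis): each small MR patch, possibly with extra channels carrying texture or context features, is classified into an organ-at-risk label for its centre voxel.

```python
# Sketch: patch-wise classification for segmentation. Layer sizes,
# patch size, and channel layout are assumptions for illustration.
import torch
import torch.nn as nn

class PatchClassifier(nn.Module):
    """Classifies a small MR patch into one of n_organs labels; extra input
    channels can carry texture/context features alongside raw intensity."""
    def __init__(self, in_channels=3, n_organs=5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, n_organs),
        )

    def forward(self, x):
        return self.net(x)

# Batch of 27x27 patches: intensity plus two derived feature maps.
patches = torch.randn(8, 3, 27, 27)
logits = PatchClassifier()(patches)   # one label per patch centre voxel
```

Sliding this classifier over every voxel of the volume yields a dense label map, which is how patch-based classification schemes produce a segmentation.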
16

PeCro : A Pedestrian Crossing Robot / PeCro - roboten som hjälper människor med synnedsättning i trafiken

HEDBERG, EBBA, SUNDIN, LINNEA January 2020 (has links)
For people who suffer from visual impairment, getting around in traffic can be a great struggle. The robot PeCro, short for Pedestrian Crossing, was created as an aid for these people to use at pedestrian crossings equipped with traffic lights. The prototype was designed in Solid Edge ST9 as a three-wheeled mobile robot and consists of several components. The microcontroller, an Arduino Uno, was programmed in the Arduino IDE. The vision sensor used was a Pixy2 camera, which can detect and track selected colour codes. A steering model called differential drive is used, controlled through magnetic encoders mounted on the two motor shafts. PeCro scans the environment looking for a green light. If one is detected, PeCro searches for the blue box on the traffic-light pillar on the opposite side of the street. When that is detected, it crosses the street and turns 180 degrees so that it can cross back again. The study examined the performance of a vision sensor in different light environments, the efficiency of magnetic encoders for measuring travelled distance and regulating steering, and linear interpolation as a distance-calculation method. The results show that PeCro's detection performance is affected by the light environment; the maximum distances at which the two colour codes were detected were 163 cm and 150 cm, respectively. Another result shows that, when measuring distance with magnetic encoders, a constant deviation from the desired distance occurs. This method is nevertheless preferable to linear interpolation for measuring distance. In conclusion, further development is needed before PeCro can be implemented and used in real-life situations.
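The encoder-based distance measurement described above amounts to a simple ticks-to-distance conversion. A back-of-envelope sketch follows (all constants are assumptions, not PeCro's actual parameters):

```python
# Sketch: converting encoder ticks to travelled distance, and using the
# left/right difference for differential-drive steering correction.
import math

TICKS_PER_REV = 360            # assumed magnetic-encoder resolution
WHEEL_DIAMETER_CM = 6.5        # assumed wheel diameter

def distance_cm(ticks: int) -> float:
    """Convert encoder ticks on one motor shaft to travelled distance."""
    return (ticks / TICKS_PER_REV) * math.pi * WHEEL_DIAMETER_CM

# Differential drive: unequal left/right distances indicate a turn,
# which the steering regulator corrects.
left, right = distance_cm(1800), distance_cm(1750)
print(left, right, "veering" if abs(left - right) > 1.0 else "straight")
```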
17

Estudo de verbos codificadores de extensão ou escala no jogo da linguagem: uma perspectiva funcionalista / A study of verbs encoding extent or scale in the language game: a functionalist perspective

Cristóvão, Heloá Ferreira 05 August 2013 (has links)
Most grammarians treat verbs in two sections: one dealing with morphological aspects, followed by one that, from a syntactic-semantic perspective, addresses verbs in terms of predication. This constitutes a problem with that model of analysis, since treating verbs as discrete elements in decontextualized sentences ignores the morphological, syntactic, semantic, pragmatic and discursive relations that can only be observed in language in use, within the combinatorial game of language. From these considerations, the conception of language we adopt is consistent with that proposed by Functionalism, which advocates studying linguistic phenomena through the analysis of structures in actual use, prioritizing the relations established in the communicative context. Equally important was the study of the argument structure of the clause, formed by the verb and the obligatory elements (arguments) it selects. With respect to the verbs under study, we follow the classification by Azeredo (2004, p. 180), based on Cano Aguilar's (1981) proposal for Spanish, which lists the group of verbs encoding extent or scale in Portuguese, among them: atravessar (cross), percorrer (roam), subir (rise), abraçar (embrace), presidir (preside), contornar (contour), ocupar (occupy), preencher (fill), inundar (flood), medir 1 (he measured a piece of land), medir 2 (the land measures 160 m), valer (be worth) and durar (last; the trip lasted 80 days). In our research, we analyse the transitivity of a subset of this group, consisting of the verbs subir, ocupar, medir, durar and valer, and their use in Portuguese, which, together with the choice of theoretical framework, justifies the importance of this study, since the phenomenon is best observed under real conditions of communication. The corpus consists of texts from written journalism, and data collection was carried out with an online search tool over the digital archive of Revista Veja. We hope the results of this research will show that a study taking language in use as its starting point goes far beyond the propositions of grammars.
18

Error resilience for video coding services over packet-based networks

Zhang, Jian, Electrical Engineering, Australian Defence Force Academy, UNSW January 1999 (has links)
Error resilience is an important issue when coded video data is transmitted over wired and wireless networks. Errors can be introduced by network congestion, mis-routing and channel noise. These transmission errors can result in bit errors being introduced into the transmitted data or in packets of data being completely lost. Consequently, the quality of the decoded video is degraded significantly. This thesis describes new techniques for minimising this degradation. To verify video error resilience tools, it is first necessary to consider the methods used to carry out experimental measurements. For most audio-visual services, streams of both audio and video data need to be transmitted simultaneously on a single channel. The inclusion of the impact of multiplexing schemes, such as MPEG 2 Systems, is also an important consideration in error resilience studies. It is shown that error resilience measurements including the effect of the Systems Layer differ significantly from those based only on the Video Layer. Two major issues of error resilience are investigated in this thesis: resynchronisation after error detection, and error concealment. Results for resynchronisation using small slices, adaptive slice sizes and macroblock resynchronisation schemes are provided. These measurements show that the macroblock resynchronisation scheme achieves the best performance, although it is not included in the MPEG 2 standard. The performance of the adaptive slice size scheme, however, is similar to that of the macroblock resynchronisation scheme, and this approach is compatible with the MPEG 2 standard. The most important contribution of this thesis is a new concealment technique, namely Decoder Motion Vector Estimation (DMVE), which can improve the decoded video quality significantly. This technique exploits the temporal redundancy between the current and previous frames, and the correlation between lost macroblocks and their surrounding pixels. Motion estimation can therefore be applied again to search the previous picture for a match to the lost macroblocks. The process is similar to the one the encoder performs, but it takes place in the decoder. The integration of techniques such as DMVE with small slices, adaptive slice sizes or macroblock resynchronisation is also evaluated. This provides an overview of the performance produced by individual techniques compared to the combined techniques. Results show that high performance can be achieved by integrating DMVE with an effective resynchronisation scheme, even at high cell loss rates. The results of this thesis demonstrate clearly that the MPEG 2 standard is capable of providing a high level of error resilience, even in the presence of high loss. The key to this performance is the appropriate tuning of encoders and effective concealment in decoders.
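A simplified version of the DMVE idea can be sketched as follows (illustrative only; the thesis's matching criterion, search window and integration with resynchronisation are more elaborate): the one-pixel border of correctly received pixels around a lost macroblock is matched against candidate positions in the previous frame, and the best-matching block is copied in.

```python
# Simplified DMVE-style concealment sketch, not the thesis code.
# Assumes the lost macroblock lies away from the frame edges.
import numpy as np

def dmve_conceal(prev, curr, top, left, mb=16, search=8):
    """Fill a lost mb x mb macroblock in `curr` by matching the one-pixel
    border of received pixels around it against the previous frame."""
    border = curr[top-1:top+mb+1, left-1:left+mb+1].astype(int)
    ring = np.ones((mb + 2, mb + 2), dtype=bool)
    ring[1:-1, 1:-1] = False                      # compare only the border ring
    best_pos, best_cost = (top, left), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            cand = prev[y-1:y+mb+1, x-1:x+mb+1].astype(int)
            cost = np.abs(cand[ring] - border[ring]).sum()  # SAD on the ring
            if cost < best_cost:
                best_cost, best_pos = cost, (y, x)
    y, x = best_pos
    curr[top:top+mb, left:left+mb] = prev[y:y+mb, x:x+mb]   # copy best match
    return curr
```

This mirrors encoder-side motion estimation, except that the matching criterion uses only the received ring of pixels, since the block itself is lost.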
19

Αρχιτεκτονικές διόρθωσης λαθών βασισμένες σε κώδικες BCH / Error correction architectures based on BCH codes

Σπουρλής, Γεώργιος 19 July 2012 (has links)
In the modern era, the need for data reliability in new telecommunications applications has led to the development and optimization of so-called error-correcting codes. These are systems capable of detecting and correcting errors introduced into transmitted information, mainly over telecommunications networks, due to noise from the environment and, more specifically, from the transmission channel. There are several categories of such correcting codes, depending on the structure and nature of the algorithms they use. The two main categories are convolutional codes and linear block codes, with which we deal here. The two codes used in this work are LDPC and BCH codes; both belong to the linear block codes. The purpose of this diploma thesis is, first, the design and implementation of a parametric encoding and decoding system for binary BCH codes of various sizes. Besides parameterization, emphasis was placed on the low complexity of the system, on high processing throughput, and on the ability to use shortening. In a second phase, the above BCH code was connected with an existing LDPC code and an additive white Gaussian noise (AWGN) channel, both designed in the context of other diploma theses, with the final result being a study of the behaviour of the overall system with respect to error correction and, more specifically, the reduction of the error-floor phenomenon observed in the LDPC code. In addition, the system's resource requirements and the processing throughput achieved were studied. The main parameters of the BCH code that can be varied are the codeword size and the error-correction capability achieved.
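To illustrate the shortening mentioned above, here is a toy sketch using a Hamming(7,4) code as a stand-in (a full binary BCH encoder is beyond a short example, but shortened BCH codes work the same way): some leading message bits are fixed to zero, the full codeword is computed, and the known zeros are simply not transmitted.

```python
# Toy illustration of code shortening with a stand-in Hamming(7,4) code.
import numpy as np

G = np.array([[1, 0, 0, 0, 1, 1, 0],   # systematic generator matrix of
              [0, 1, 0, 0, 1, 0, 1],   # the Hamming(7,4) code
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def encode(msg):
    return (msg @ G) % 2                # codeword over GF(2)

# Shortened (6,3) code: fix the first message bit to 0, then do not send it.
msg3 = np.array([1, 0, 1])
full = encode(np.concatenate(([0], msg3)))  # encode with the known zero prefix
shortened = full[1:]                        # the decoder re-inserts the zero
print(full, shortened)
```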
20

Estimating the probability of a fleet vehicle accident : a deep learning approach using conditional variational auto-encoders

Malette-Campeau, Marie-Ève 08 1900 (has links)
Risk is the possibility of a negative or undesired outcome. In our work, we evaluate the risk of a fleet vehicle accident using 1998 and 1999 records from the files of the Société d'assurance automobiles du Québec (SAAQ), where each observation in the data set corresponds to a truck carrying merchandise, and for which the number of accidents it had during the following year is known. For each vehicle, we have useful information such as the number and type of violations it had, as well as some of its characteristics, like the number of axles or the number of cylinders. With our objective in mind, we propose a new approach using conditional variational auto-encoders (CVAE), considering two distributional assumptions, Negative Binomial and Poisson, to model the distribution of fleet vehicle accidents. Our main motivation for using a CVAE is to capture the joint distribution between the number of accidents of a fleet vehicle and the predictor variables of such accidents, and to extract latent features that help reconstruct the distribution of the number of fleet vehicle accidents. We compare the CVAE with other probabilistic methods, such as a simple MLP model that learns the distribution of the number of fleet vehicle accidents without extracting meaningful latent representations. We found that the CVAE marginally outperforms the MLP model, which suggests that a model able to learn latent features has added value over one that does not. We also compared the CVAE with another basic probabilistic model, the generalized linear model (GLM), as well as with classification models. We found that the CVAE and the GLM using the Negative Binomial distribution tend to show better results. Moreover, we provide a feature-engineering scheme that incorporates features related to the whole fleet in addition to individual features for each vehicle, which translates into improved performance for all the models implemented in our work to evaluate the probability of a fleet vehicle accident.
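A skeletal CVAE of the kind described can be sketched as follows (illustrative; the layer sizes are invented and only the Poisson decoder of the two distributional assumptions is shown): the encoder sees the accident count together with the vehicle features, and the decoder reconstructs a count distribution from the latent code and the features.

```python
# Skeletal CVAE with a Poisson decoder for accident counts (a sketch,
# not the thesis model; dimensions are assumed values).
import torch
import torch.nn as nn

class CVAE(nn.Module):
    def __init__(self, x_dim=20, z_dim=4, h=32):
        super().__init__()
        # Encoder sees the count y together with the vehicle features x.
        self.enc = nn.Sequential(nn.Linear(x_dim + 1, h), nn.ReLU())
        self.mu = nn.Linear(h, z_dim)
        self.logvar = nn.Linear(h, z_dim)
        # Decoder reconstructs the count distribution from (z, x).
        self.dec = nn.Sequential(nn.Linear(z_dim + x_dim, h), nn.ReLU(),
                                 nn.Linear(h, 1))

    def forward(self, x, y):
        hid = self.enc(torch.cat([x, y], dim=1))
        mu, logvar = self.mu(hid), self.logvar(hid)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        rate = self.dec(torch.cat([z, x], dim=1)).exp()       # Poisson rate > 0
        return rate, mu, logvar

def elbo_loss(rate, y, mu, logvar):
    nll = (rate - y * torch.log(rate)).sum()                  # Poisson NLL (+const)
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum() # KL to N(0, I)
    return nll + kl

x = torch.randn(16, 20)                    # dummy vehicle features
y = torch.randint(0, 4, (16, 1)).float()   # dummy accident counts
rate, mu, logvar = CVAE()(x, y)
loss = elbo_loss(rate, y, mu, logvar)
```

A Negative Binomial decoder would add a dispersion output alongside the rate, which is one way to handle the overdispersion common in accident-count data.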
