131

Estimation of LRD present in H.264 video traces using wavelet analysis and proving the paramount of H.264 using OPF technique in wi-fi environment.

Jayaseelan, John January 2012 (has links)
While there has always been a tremendous demand for streaming video over wireless networks, the nature of the application still presents some challenging issues. Applications that transmit coded video sequences over best-effort networks such as the Internet must cope with changing network behaviour; in particular, the source encoder rate should be controlled based on feedback from a channel estimator that probes the network intermittently. The arrival of powerful video compression techniques such as H.264, together with advances in networking and telecommunications, has opened up a whole new frontier for multimedia communications. The aim of this research is to transmit H.264-coded video frames over a wireless network with maximum reliability and efficiency. When H.264-encoded video sequences are transmitted through a wireless network, they face major difficulties in reaching the destination. The characteristics of H.264-coded sequences are studied in detail, their suitability for transmission over wireless networks is examined, a new approach called Optimal Packet Fragmentation (OPF) is proposed, and the H.264-coded sequences are tested in a simulated wireless environment. The research comprises three major studies. The first part concerns Long Range Dependence (LRD) and the ways in which self-similarity can be estimated. Several estimators are compared, and a wavelet-based estimator is selected because wavelets capture both time and frequency features in the data and generally provide a richer picture than classical Fourier analysis. The wavelet estimator quantifies self-similarity through the Hurst parameter, which describes how the traffic behaves inside the network and should be determined for reliable transmission over a wireless network. The second part compares the MPEG-4 and H.264 encoders to establish which provides better Quality of Service (QoS) and reliability; using the Hurst parameter, it shows that H.264 is superior to MPEG-4. The third part is the core of the research: the H.264 video frames are segmented into packets of optimal size at the MAC layer for efficient and more reliable transfer over the wireless network. Finally, the H.264-encoded video frames, combined with Optimal Packet Fragmentation, are tested in an NS-2 simulated wireless network. The results demonstrate the superiority of the H.264 video encoder and the effectiveness of OPF.
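The wavelet-based Hurst estimation described above can be sketched in a few lines; this is a minimal illustration (not the thesis's code), assuming a one-dimensional traffic or frame-size trace and using the common regression of log detail-coefficient energy against scale:

```python
import numpy as np
import pywt

def hurst_wavelet(trace, wavelet="db3", max_level=8):
    """Estimate the Hurst parameter from the scaling of wavelet
    detail-coefficient energy across octaves (Abry-Veitch style)."""
    trace = np.asarray(trace, dtype=float)
    level = min(max_level,
                pywt.dwt_max_level(len(trace), pywt.Wavelet(wavelet).dec_len))
    details = pywt.wavedec(trace, wavelet, level=level)[1:]  # drop the approximation
    octaves, log_energy = [], []
    for j, d in enumerate(reversed(details), start=1):       # j=1 is the finest scale
        octaves.append(j)
        log_energy.append(np.log2(np.mean(d ** 2)))
    slope, _ = np.polyfit(octaves, log_energy, 1)
    return (slope + 1.0) / 2.0   # for LRD traffic, E[d_j^2] ~ 2^{j(2H - 1)}

# Sanity check on white noise, which should give H close to 0.5; a real study
# would feed in H.264 frame-size traces instead.
rng = np.random.default_rng(0)
print(hurst_wavelet(rng.standard_normal(4096)))
```

A value of H between 0.5 and 1 indicates long-range dependence; the closer to 1, the burstier the trace behaves over long time scales.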
132

Semantically Aligned Sentence-Level Embeddings for Agent Autonomy and Natural Language Understanding

Fulda, Nancy Ellen 01 August 2019 (has links)
Many applications of neural linguistic models rely on their use as pre-trained features for downstream tasks such as dialog modeling, machine translation, and question answering. This work presents an alternate paradigm: rather than treating linguistic embeddings as input features, we treat them as common sense knowledge repositories that can be queried using simple mathematical operations within the embedding space, without the need for additional training. Because current state-of-the-art embedding models were not optimized for this purpose, this work presents a novel embedding model designed and trained specifically for the purpose of "reasoning in the linguistic domain". Our model jointly represents single words, multi-word phrases, and complex sentences in a unified embedding space. To facilitate common-sense reasoning beyond straightforward semantic associations, the embeddings produced by our model exhibit carefully curated properties including analogical coherence and polarity displacement. In other words, rather than training the model on a smorgasbord of tasks and hoping that the resulting embeddings will serve our purposes, we have instead crafted training tasks and placed constraints on the system that are explicitly designed to induce the properties we seek. The resulting embeddings perform competitively on the SemEval 2013 benchmark and outperform state-of-the-art models on two key semantic discernment tasks introduced in Chapter 8. The ultimate goal of this research is to empower agents to reason about low-level behaviors in order to fulfill abstract natural language instructions in an autonomous fashion. An agent equipped with an embedding space of sufficient caliber could potentially reason about new situations based on their similarity to past experience, facilitating knowledge transfer and one-shot learning. As our embedding model continues to improve, we hope to see these and other abilities become a reality.
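The embedding-space queries described here can be illustrated with a short sketch. The thesis trains its own embedding model; as a stand-in, the sketch below assumes an off-the-shelf encoder from the sentence-transformers library, so the model name and candidate list are illustrative only:

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# Stand-in encoder; the thesis's model is trained specifically for analogical
# coherence and polarity displacement, which this sketch does not reproduce.
model = SentenceTransformer("all-MiniLM-L6-v2")

def embed(text):
    return model.encode([text])[0]

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# An analogy-style query expressed as vector arithmetic in the embedding space:
# "king" - "man" + "woman" should land near "queen" if the space is
# analogically coherent.
query = embed("king") - embed("man") + embed("woman")
candidates = ["queen", "prince", "castle", "woman"]
print(max(candidates, key=lambda c: cosine(query, embed(c))))
```

The same pattern extends to phrases and whole sentences when the encoder maps them into the same space, which is exactly the property the unified embedding space above is designed to provide.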
133

Fine-tuning a BERT-based NER Model for Positive Energy Districts

Ortega, Karen, Sun, Fei January 2023 (has links)
This research presents an innovative approach to extracting information from Positive Energy Districts (PEDs), urban areas generating surplus energy. PEDs are integral to the European Commission's SET Plan, tackling housing challenges arising from population growth. The study refines BERT to categorize PED-related entities, producing a cutting-edge NER model and an integrated pipeline of diverse NER tools and data sources. The model achieves an accuracy of 0.81 and an F1 Score of 0.55 with notably high confidence scores through pipeline evaluations, confirming its practical applicability. While the F1 score falls short of expectations, this pioneering exploration in PED information extraction sets the stage for future refinements and studies, promising enhanced methodologies and impactful outcomes in this dynamic field. This research advances NER processes for Positive Energy Districts, supporting their development and implementation.
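A fine-tuning step of the kind described here can be sketched with the Hugging Face transformers library; the label set, example sentence and token alignment below are placeholders rather than the PED annotation scheme used in the study:

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

labels = ["O", "B-PED", "I-PED"]   # hypothetical BIO label set, not the study's
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-cased", num_labels=len(labels))

# One toy training example; a real run iterates over an annotated corpus.
enc = tokenizer("Positive Energy Districts generate surplus energy",
                return_tensors="pt")

# Crude, purely illustrative label alignment: mark a few leading word pieces as
# the entity span and ignore everything else (-100 is masked out by the loss).
label_ids = torch.full(enc["input_ids"].shape, -100, dtype=torch.long)
label_ids[0, 1] = labels.index("B-PED")
label_ids[0, 2:4] = labels.index("I-PED")

optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)
out = model(**enc, labels=label_ids)   # token-classification cross-entropy loss
out.loss.backward()
optimizer.step()
print(float(out.loss))
```

A full fine-tuning run wraps this step in an epoch loop (or the Trainer API) and evaluates with entity-level precision, recall and F1, which is where figures such as the 0.55 F1 score above come from.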
134

Design, Construction, Control, and Analysis of Linear Delta Robot

Oberhauser, Joseph Q. 19 July 2016 (has links)
No description available.
135

Development of an exercise machine for enhanced eccentric training of the muscles : A study of sensors and system performance / Utveckling av en träningsmaskin för förbättrad excentrisk muskelträning

Zivanovic, Natalija January 2020 (has links)
Currently, there are various training machines that can support training of the muscles while they are lengthened, also known as eccentric training. Machines widely used to train the muscles eccentrically employ a flywheel to generate the load on the user. When training eccentrically with such a machine, the goal is often to achieve eccentric overload, i.e. exposing the muscles under training to a very high load during the eccentric phase. To achieve this, the user needs to activate other muscles that are not the focus of the training or be assisted by another person. In this study, a novel, smart flywheel training machine was developed by adding an electric motor and sensors that can identify the exercise pattern of the user and help achieve the desired eccentric overload. This study focused on how the system performance of such a training machine, interacting with human beings, is affected by the grade of sensor feedback. With an increased sensor resolution and a shorter sample time, the cost of the system increases, and it was therefore of interest to study what grade of sensor feedback is actually required. More precisely, this study evaluated how the system performance improves as the sensor resolution is increased, what resolution and sample time are required for the system to perform correctly and safely, and lastly, how noise and disturbances affect the system. The study was conducted in a simulated environment in Matlab and Simulink, and some real tests and experiments were also performed on the existing flywheel training machine. An incremental encoder was implemented in the system, and the resolution of the encoder, as well as the sample time, were varied in the simulation to test different combinations. The results showed that both resolution and sample time affect the system performance. A higher resolution resulted in a smaller tracking error up to a point, but beyond a certain value the system became unstable if the sample time was not small enough. Noise and disturbances had only a minor impact on the system performance. It was concluded that the best choice of encoder resolution was 0.0314 radians with a sample time of 0.01 ms. Even lower resolutions such as 0.628 rad, 0.126 rad or 0.0571 rad with a sample time of 0.1 ms could be allowed and should be considered safe; however, the system might not perform as desired if these alternatives are chosen, although they might reduce the cost of the system.
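The resolution/sample-time trade-off can be illustrated with a toy calculation that isolates the sensing side only: a known exercise speed profile is measured by a quantized incremental encoder and the speed is recovered by finite differences. The speed profile, resolutions and sample times below are illustrative stand-ins, not the thesis's Simulink model of the closed-loop machine:

```python
import numpy as np

def speed_estimation_rms_error(resolution_rad, ts, t_end=5.0):
    """RMS error of a finite-difference speed estimate taken from an encoder
    that quantizes the angle to multiples of resolution_rad every ts seconds."""
    t = np.arange(0.0, t_end, ts)
    omega_true = 6.0 + 4.0 * np.sin(2 * np.pi * 0.5 * t)   # rad/s, ~0.5 Hz repetitions
    theta_true = np.cumsum(omega_true) * ts                # integrate angle
    theta_meas = np.floor(theta_true / resolution_rad) * resolution_rad
    omega_est = np.diff(theta_meas) / ts
    return float(np.sqrt(np.mean((omega_est - omega_true[1:]) ** 2)))

for res in (0.628, 0.126, 0.0571, 0.0314):   # encoder resolutions from the study [rad]
    for ts in (1e-3, 1e-4):                  # illustrative sample times [s]
        err = speed_estimation_rms_error(res, ts)
        print(f"res={res:.4f} rad  Ts={ts * 1e3:.1f} ms  RMS speed error={err:.2f} rad/s")
```

The quantization error of the differenced estimate grows with resolution_rad / ts, which is one reason resolution and sample time have to be chosen together; the thesis evaluates the full closed-loop behaviour, including controller stability, in simulation rather than this open-loop view.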
136

Electromechanical Design and Development of the Virginia Tech Roller Rig Testing Facility for Wheel-rail Contact Mechanics and Dynamics

Hosseinipour, Milad 28 September 2016 (has links)
The electromechanical design and development of a sophisticated roller rig testing facility at the Railway Technologies Laboratory (RTL) of Virginia Polytechnic Institute and State University (VT) is presented. The VT Roller Rig is intended for studying the complex dynamics and mechanics at the wheel-rail interface of railway vehicles in a controlled laboratory environment. Such measurements require excellent powering and driving architecture, high-performance motion control, accurate measurements, and relatively noise-free data acquisition systems. It is critical to accurately control the relative dynamics and positioning of the rotating bodies to emulate field conditions. To measure the contact forces and moments, special care must be taken to ensure that noise sources such as mechanical vibration, electrical crosstalk, and electromagnetic interference (EMI) are kept to a minimum. This document describes the steps towards the design and development of all electromechanical subsystems of the VT Roller Rig, including the powertrain, power electronics, motion control systems, sensors, data acquisition units, safety and monitoring circuits, and the general practices followed to satisfy local and international codes of practice. The VT Roller Rig comprises a wheel and a roller in a vertical configuration that simulate single-wheel/rail interaction at one-fourth scale. The roller is five times larger than the scaled wheel to keep the contact patch distortion that is inevitable with a roller rig to a minimum. This setup is driven by two independent AC servo motors that control the velocity of the wheel and roller using state-of-the-art motion control technologies. Six linear actuators allow adjustment of the simulated load, wheel angle of attack, rail cant, and lateral position of the wheel on the rail. All motion control is performed using digital servo drives manufactured by Kollmorgen, VA, USA. A number of sensors measure the contact patch parameters, including force, torque, displacement, rotation, speed, acceleration, and contact patch geometry. A unified communication protocol between the actuators and sensors minimizes data conversion time, which allows for servo update rates of up to 48 kHz. This provides an unmatched bandwidth for performing various dynamics, vibration, and transient tests, as well as static steady-state conditions. The VT Roller Rig has been debugged and commissioned successfully. The hardware and software components were tested both individually and within the system. The VT Roller Rig can control the creepage to within 0.3 RPM of the commanded value, while actively controlling the relative position of the rotating bodies with an unprecedented level of accuracy, to within 16 nm of the target location. The contact force measurement dynamometers can dynamically capture the contact forces to within 13.6 N accuracy, for loads up to 10 kN. The instantaneous torque in each driveline can be measured with better than 6.1 Nm resolution. The VT Roller Rig Motion Programming Interface (MPI) is highly flexible for both programmers and non-programmers. All common motion control algorithms in the servo motion industry have been successfully implemented on the Rig. The VT Roller Rig MPI accepts third-party motion algorithms in C, C++, and any .NET language. It successfully communicates with other design and analysis software such as Matlab, Simulink, and LabVIEW for performing custom-designed routines. 
It also provides the infrastructure for linking the Rig's hardware with commercial multibody dynamics software such as Simpack, NUCARS, and Vampire, which is a milestone for hardware-in-the-loop testing of railroad systems. / Ph. D.
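As a small illustration of the main quantity the rig regulates, longitudinal creepage can be computed directly from the commanded wheel and roller speeds and radii. The radii and speeds below are made-up example values; only the 1:5 wheel-to-roller ratio mirrors the description above:

```python
import math

def longitudinal_creepage(wheel_rpm, roller_rpm, wheel_radius_m, roller_radius_m):
    """(wheel surface speed - roller surface speed) / mean rolling speed."""
    v_wheel = wheel_rpm * 2 * math.pi / 60 * wheel_radius_m
    v_roller = roller_rpm * 2 * math.pi / 60 * roller_radius_m
    return (v_wheel - v_roller) / (0.5 * (v_wheel + v_roller))

# Illustrative quarter-scale wheel against a roller five times larger, with the
# roller commanded to run about 1% slower in surface speed.
wheel_r, roller_r = 0.115, 0.575
wheel_rpm = 100.0
roller_rpm = wheel_rpm * wheel_r / roller_r * (1 - 0.01)
print(f"creepage = {longitudinal_creepage(wheel_rpm, roller_rpm, wheel_r, roller_r):.4f}")
```

Holding such a creepage command to within 0.3 RPM, as reported above, is what requires the tightly synchronised servo control and high-rate feedback described in the abstract.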
137

Outlier detection with ensembled LSTM auto-encoders on PCA transformed financial data / Avvikelse-detektering med ensemble LSTM auto-encoders på PCA-transformerad finansiell data

Stark, Love January 2021 (has links)
Financial institutions today generate large amounts of data, data that can contain interesting information worth investigating to further the economic growth of the institution. There is an interest in analyzing these data points, especially those that are anomalous compared with the normal day-to-day work. However, finding these outliers is not an easy task and cannot be done manually due to the massive amounts of data generated daily. Previous work has explored the use of machine learning to find outliers in such financial datasets. Previous studies have also shown that the pre-processing of data usually accounts for a large part of the information loss. This work studies whether there is a proper balance in how the pre-processing is carried out, retaining as much information as possible while not letting the data remain too complex for the machine learning models. The dataset used consisted of foreign exchange transactions supplied by the host company and was pre-processed using Principal Component Analysis (PCA). The main purpose of this work is to test whether an ensemble of Long Short-Term Memory recurrent neural networks (LSTMs), configured as autoencoders, can be used to detect outliers in the data and whether the ensemble is more accurate than a single LSTM autoencoder. Previous studies have shown that ensembles of autoencoders can be more accurate than a single autoencoder, especially when SkipCells are implemented (a configuration that skips over LSTM cells to make the models perform with more variation). A data point is considered an outlier if the LSTM model has trouble reconstructing it properly, i.e. a pattern that is hard to model, making it available for further manual investigation. The results show that the ensembled LSTM model was more accurate than a single LSTM model at reconstructing the dataset and, by our definition of an outlier, more accurate at outlier detection. The results from the pre-processing experiments reveal different methods for choosing an appropriate number of components, one of which is to study the retained variance and accuracy of the PCA transformation against model performance for a given number of components. One conclusion of the work is that ensembled LSTM networks can prove very powerful, but that alternatives to the pre-processing, such as categorical embedding instead of PCA, should be explored.
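A single member of such an ensemble can be sketched as follows. The layer sizes, window length and threshold rule are illustrative assumptions, the synthetic data stands in for the PCA-transformed transaction features, and the SkipCell configuration from the thesis is not reproduced:

```python
import numpy as np
from sklearn.decomposition import PCA
from tensorflow import keras
from tensorflow.keras import layers

# Toy stand-in for the raw transaction features; a real run would use the FX data.
rng = np.random.default_rng(0)
raw = rng.standard_normal((5000, 40))
n_components, window = 8, 10
reduced = PCA(n_components=n_components).fit_transform(raw)
windows = np.stack([reduced[i:i + window] for i in range(len(reduced) - window)])

# LSTM autoencoder: compress each window to a vector, then reconstruct it.
model = keras.Sequential([
    keras.Input(shape=(window, n_components)),
    layers.LSTM(32),                          # encoder
    layers.RepeatVector(window),
    layers.LSTM(32, return_sequences=True),   # decoder
    layers.TimeDistributed(layers.Dense(n_components)),
])
model.compile(optimizer="adam", loss="mse")
model.fit(windows, windows, epochs=3, batch_size=64, verbose=0)

# Outliers are the windows the model reconstructs worst.
errors = np.mean((model.predict(windows, verbose=0) - windows) ** 2, axis=(1, 2))
threshold = np.quantile(errors, 0.99)          # illustrative cut-off
print("flagged windows:", int(np.sum(errors > threshold)))
```

An ensemble in the spirit of the thesis would train several such autoencoders with varied configurations and combine their reconstruction errors, for example by averaging, before thresholding.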
138

Evaluation of machine learning methods for anomaly detection in combined heat and power plant

Carls, Fredrik January 2019 (has links)
In the hope of increasing the detection rate of faults in combined heat and power plant boilers, and thus lowering unplanned maintenance, three machine learning models are constructed and evaluated. The algorithms, k-Nearest Neighbor, One-Class Support Vector Machine, and autoencoder, have a proven track record in anomaly detection research, but are relatively unexplored for industrial applications such as this one due to the difficulty of collecting non-artificial labeled data in the field. The baseline versions of the k-Nearest Neighbor and autoencoder performed very similarly. Nevertheless, the autoencoder was slightly better and reached an area under the precision-recall curve (AUPRC) of 0.966 and 0.615 on the training and test periods, respectively. However, no sufficiently good results were reached with the One-Class Support Vector Machine. The autoencoder was then made more sophisticated to see how much the performance could be increased, raising the AUPRC to 0.987 and 0.801 on the training and test periods, respectively. Additionally, the model was able to detect and generate one alarm for each incident period that occurred during the test period. The conclusion is that machine learning can successfully be utilized to detect faults at an earlier stage and potentially circumvent otherwise costly unplanned maintenance. Nevertheless, there is still plenty of room for improvement in the model and in the collection of the data.
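The evaluation style described here, scoring each point by an anomaly measure and summarising the result as area under the precision-recall curve, can be sketched with scikit-learn; the synthetic sensor readings and model settings are illustrative only:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.svm import OneClassSVM
from sklearn.metrics import average_precision_score

rng = np.random.default_rng(0)
# Toy stand-in for boiler sensor data: mostly normal operation plus a few faults.
normal = rng.normal(0.0, 1.0, size=(2000, 5))
faults = rng.normal(4.0, 1.0, size=(40, 5))
X = np.vstack([normal, faults])
y = np.concatenate([np.zeros(len(normal)), np.ones(len(faults))])  # 1 = fault

# k-Nearest-Neighbor score: distance to the k-th neighbour (larger = more anomalous).
dist, _ = NearestNeighbors(n_neighbors=6).fit(X).kneighbors(X)
knn_score = dist[:, -1]

# One-Class SVM score: negated decision function (larger = more anomalous).
svm_score = -OneClassSVM(nu=0.05, gamma="scale").fit(X).decision_function(X)

print("kNN   AUPRC:", round(average_precision_score(y, knn_score), 3))
print("OCSVM AUPRC:", round(average_precision_score(y, svm_score), 3))
```

Here average_precision_score serves as the usual stand-in for the area under the precision-recall curve; an autoencoder would slot into the same scheme with reconstruction error as its anomaly score.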
139

Augmenting High-Dimensional Data with Deep Generative Models / Högdimensionell dataaugmentering med djupa generativa modeller

Nilsson, Mårten January 2018 (has links)
Data augmentation is a technique that can be performed in various ways to improve the training of discriminative models. Recent developments in deep generative models offer new ways of augmenting existing data sets. In this thesis, a framework for augmenting annotated data sets with deep generative models is proposed, together with a method for quantitatively evaluating the quality of the generated data sets. Using this framework, two data sets for pupil localization were generated with different generative models, including both well-established models and a novel model proposed for this purpose. The novel model was shown both qualitatively and quantitatively to generate the best data sets. A set of smaller experiments on standard data sets also revealed cases where this generative model could improve the performance of an existing discriminative model. The results indicate that generative models can be used to augment or replace existing data sets when training discriminative models.
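The quantitative check described above, comparing a discriminative model trained with and without generated data, can be sketched as follows; a per-class Gaussian stands in for the deep generative model and the scikit-learn digits data for the annotated set, all purely illustrative:

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=200, random_state=0)

def accuracy(train_X, train_y):
    clf = LogisticRegression(max_iter=2000).fit(train_X, train_y)
    return clf.score(X_test, y_test)

# Stand-in "generative model": sample from a per-class Gaussian fitted to the
# real training data; a GAN or VAE would replace this step.
rng = np.random.default_rng(0)
gen_X, gen_y = [], []
for c in np.unique(y_train):
    Xc = X_train[y_train == c]
    gen_X.append(rng.normal(Xc.mean(axis=0), Xc.std(axis=0) + 1e-6,
                            size=(100, X.shape[1])))
    gen_y.append(np.full(100, c))
gen_X, gen_y = np.vstack(gen_X), np.concatenate(gen_y)

print("real only:       ", round(accuracy(X_train, y_train), 3))
print("real + generated:", round(accuracy(np.vstack([X_train, gen_X]),
                                           np.concatenate([y_train, gen_y])), 3))
```

Whether the augmented score improves on the real-only score is exactly the kind of quantitative signal the proposed evaluation framework is meant to capture.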
140

Electrical lithium-ion battery models based on recurrent neural networks: a holistic approach

Schmitt, Jakob, Horstkötter, Ivo, Bäker, Bernard 15 March 2024 (has links)
As an efficient energy storage technology, lithium-ion batteries (LIBs) play a key role in the ongoing electrification of the mobility sector. However, the required model-based design process, including hardware-in-the-loop solutions, demands precise battery models. In this work, an encoder-decoder model framework based on recurrent neural networks is developed and trained directly on unstructured battery data to replace time-consuming characterisation tests and thus simplify the modelling process. A varied pseudo-random bit stream dataset is used for model training and validation. A mean absolute percentage error (MAPE) of 0.30% on the test dataset attests to the proposed encoder-decoder model's excellent generalisation capabilities. Instead of the recursive one-step prediction prevalent in the literature, the stage-wise trained encoder-decoder framework can instantaneously predict the battery voltage response for 2000 time steps and proves to be 120 times more time-efficient on the test dataset. The accuracy, generalisation capability and time efficiency of the developed battery model enable potential online anomaly detection and power or range prediction. The fact that, apart from the initial voltage level, the battery model relies only on the current load as input, and thus requires no estimated variables such as the state of charge (SOC) to predict the voltage response, holds the potential for battery-ageing-independent LIB modelling based on raw battery management system (BMS) signals. The intrinsically ageing-independent battery model is thus suitable for use as a digital battery twin in virtual experiments to estimate the unknown battery state of health (SOH) purely on the basis of BMS data.
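An encoder-decoder arrangement of the kind described can be sketched as follows; the layer sizes, sequence lengths and synthetic current/voltage data are illustrative assumptions rather than the authors' architecture or dataset:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

enc_len, dec_len = 200, 2000   # history window and prediction horizon (illustrative)

# Encoder: summarise the recent current/voltage history into a state vector.
enc_in = keras.Input(shape=(enc_len, 2))          # columns: current, voltage
_, state_h, state_c = layers.LSTM(64, return_state=True)(enc_in)

# Decoder: map a future current profile to the voltage response in one pass.
dec_in = keras.Input(shape=(dec_len, 1))          # column: future current load
dec_seq = layers.LSTM(64, return_sequences=True)(
    dec_in, initial_state=[state_h, state_c])
voltage_out = layers.TimeDistributed(layers.Dense(1))(dec_seq)

model = keras.Model([enc_in, dec_in], voltage_out)
model.compile(optimizer="adam", loss="mae")

# Toy synthetic data standing in for pseudo-random bit-stream current profiles.
rng = np.random.default_rng(0)
n = 64
hist = rng.standard_normal((n, enc_len, 2)).astype("float32")
future_i = rng.standard_normal((n, dec_len, 1)).astype("float32")
future_v = np.cumsum(0.001 * future_i, axis=1).astype("float32")  # crude voltage proxy
model.fit([hist, future_i], future_v, epochs=1, batch_size=8, verbose=0)
print(model.predict([hist[:1], future_i[:1]], verbose=0).shape)   # (1, 2000, 1)
```

Because the decoder receives the whole future current profile at once, the 2000-step voltage trajectory is produced in a single forward pass rather than by the recursive one-step prediction mentioned above.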
