41

Étude et conception d'un système automatisé de contrôle d'aspect des pièces optiques basé sur des techniques connexionnistes / Investigation and design of an automatic system for optical devices' defects detection and diagnosis based on connexionist approach

Voiry, Matthieu 15 July 2008 (has links)
Dans différents domaines industriels, la problématique du diagnostic prend une place importante. Ainsi, le contrôle d’aspect des composants optiques est une étape incontournable pour garantir leurs performances opérationnelles. La méthode conventionnelle de contrôle par un opérateur humain souffre de limitations importantes qui deviennent insurmontables pour certaines optiques hautes performances. Dans ce contexte, cette thèse traite de la conception d’un système automatique capable d’assurer le contrôle d’aspect. Premièrement, une étude des capteurs pouvant être mis en oeuvre par ce système est menée. Afin de satisfaire à des contraintes de temps de contrôle, la solution proposée utilise deux capteurs travaillant à des échelles différentes. Un de ces capteurs est basé sur la microscopie Nomarski ; nous présentons ce capteur ainsi qu’un ensemble de méthodes de traitement de l’image qui permettent, à partir des données fournies par celui-ci, de détecter les défauts et de déterminer la rugosité, de manière robuste et répétable. L’élaboration d’un prototype opérationnel, capable de contrôler des pièces optiques de taille limitée, valide ces différentes techniques. Par ailleurs, le diagnostic des composants optiques nécessite une phase de classification. En effet, si les défauts permanents sont détectés, il en est de même pour de nombreux « faux » défauts (poussières, traces de nettoyage...). Ce problème complexe est traité par un réseau de neurones artificiels de type MLP tirant parti d’une description invariante des défauts. Cette description, issue de la transformée de Fourier-Mellin, est d’une dimension élevée qui peut poser des problèmes liés au « fléau de la dimension ». Afin de limiter ces effets néfastes, différentes techniques de réduction de dimension (Self Organizing Map, Curvilinear Component Analysis et Curvilinear Distance Analysis) sont étudiées.
On montre d’une part que les techniques CCA et CDA sont plus performantes que SOM en termes de qualité de projection, et d’autre part qu’elles permettent d’utiliser des classifieurs de taille plus modeste, à performances égales. Enfin, un réseau de neurones modulaire utilisant des modèles locaux est proposé. Nous développons une nouvelle approche de décomposition des problèmes de classification, fondée sur le concept de dimension intrinsèque. Les groupes de données de dimensionnalité homogène obtenus ont un sens physique et permettent de réduire considérablement la phase d’apprentissage du classifieur tout en améliorant ses performances en généralisation. / In various industrial fields, the problem of diagnosis is of great interest. For example, the check of surface imperfections on an optical device is necessary to guarantee its operational performances. The conventional control method, based on human expert visual inspection, suffers from limitations which become critical for some high-performance components. In this context, this thesis deals with the design of an automatic system able to carry out the diagnosis of appearance flaws. To fulfil the time constraints, the suggested solution uses two sensors working on different scales. We present one of them, based on Nomarski microscopy, and the image processing methods which allow, starting from the issued data, to detect the defects and to determine roughness in a reliable way. The development of an operational prototype, able to check small optical components, validates the proposed techniques. The final diagnosis also requires a classification phase. Indeed, if the permanent defects are detected, many “false” defects (dust, cleaning marks...) are emphasized as well. This complex problem is solved by an MLP Artificial Neural Network using an invariant description of the defects.
This representation, resulting from the Fourier-Mellin transform, is a high-dimensional vector, which implies some problems linked to the “curse of dimensionality”. In order to limit these harmful effects, various dimensionality reduction techniques (Self Organizing Map, Curvilinear Component Analysis and Curvilinear Distance Analysis) are investigated. On the one hand, we show that CCA and CDA are more powerful than SOM in terms of projection quality. On the other hand, these methods allow the use of simpler classifiers with equal performance. Finally, a modular neural network, which exploits local models, is developed. We propose a new classification problem decomposition scheme, based on the intrinsic dimension concept. The obtained data clusters of homogeneous dimensionality have a physical meaning and permit a significant reduction of the classifier's training phase, while improving its generalization performance.
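The invariant defect description above builds on a basic property exploited by the Fourier-Mellin pipeline: the magnitude of the Fourier spectrum is unaffected by translation (log-polar resampling then turns rotation and scale into translations of that spectrum). A minimal NumPy sketch of the translation-invariance step — image size and shift values are illustrative, not taken from the thesis:

```python
import numpy as np

def magnitude_spectrum(img):
    # A circular translation multiplies the 2-D DFT by a unit-modulus
    # phase factor only, so taking the magnitude discards exactly the
    # part that translation changes.
    return np.abs(np.fft.fft2(img))

rng = np.random.default_rng(0)
img = rng.random((16, 16))                     # toy "defect image"
shifted = np.roll(img, shift=(3, 5), axis=(0, 1))  # circular translation

# The images differ, but their magnitude spectra are identical.
assert np.allclose(magnitude_spectrum(img), magnitude_spectrum(shifted))
```

In a full Fourier-Mellin descriptor this spectrum would then be resampled on a log-polar grid and transformed again, so rotation and scale changes also reduce to translations.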
42

  Otimização de superfícies seletivas de frequência com elementos pré-fractais utilizando rede neural MLP e algoritmos de busca populacional

Silva, Marcelo Ribeiro da 27 January 2014 (has links)
Made available in DSpace on 2014-12-17T14:55:18Z (GMT). No. of bitstreams: 1 MarceloRS_TESE.pdf: 2113878 bytes, checksum: 1cc62a66f14cc48f2e97f986a4dbbb8d (MD5) Previous issue date: 2014-01-27 / Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / This thesis describes design methodologies for frequency selective surfaces (FSSs) composed of periodic arrays of pre-fractal metallic patches on single-layer dielectrics (FR4, RT/duroid). Shapes presented by Sierpinski island and T fractal geometries are exploited for the simple design of efficient band-stop spatial filters with applications in the microwave range. Initial results are discussed in terms of the electromagnetic effect resulting from the variation of parameters such as fractal iteration number (or fractal level), fractal iteration factor, and periodicity of the FSS, depending on the used pre-fractal element (Sierpinski island or T fractal). The transmission properties of these proposed periodic arrays are investigated through simulations performed by the commercial software packages Ansoft Designer™ and Ansoft HFSS™, which run full-wave methods. To validate the employed methodology, FSS prototypes are selected for fabrication and measurement. The obtained results point to interesting features of FSS spatial filters: compactness, with high values of the frequency compression factor, as well as stable frequency responses at oblique incidence of plane waves. This thesis also approaches, as its main focus, the application of an alternative electromagnetic (EM) optimization technique for analysis and synthesis of FSSs with fractal motifs. In application examples of this technique, Vicsek and Sierpinski pre-fractal elements are used in the optimal design of FSS structures. Based on computational intelligence tools, the proposed technique overcomes the high computational cost associated with the full-wave parametric analyses.
To this end, fast and accurate multilayer perceptron (MLP) neural network models are developed using different parameters as design input variables. These neural network models aim to calculate the cost function in the iterations of population-based search algorithms. The continuous genetic algorithm (GA), particle swarm optimization (PSO), and the bees algorithm (BA) are used for FSS optimization with specific resonant frequencies and bandwidths. The performance of these algorithms is compared in terms of computational cost and numerical convergence. Consistent results can be verified by the excellent agreement obtained between simulations and measurements related to FSS prototypes built with a given fractal iteration. / Esta tese descreve metodologias de projeto para superfícies seletivas de frequência (FSSs) compostas por arranjos periódicos de patches metálicos pré-fractais impressos em camadas dielétricas simples (FR4, RT/duroid). As formas apresentadas pelas geometrias correspondentes à ilha de Sierpinski e ao fractal T são exploradas para o projeto simples de filtros espaciais rejeita-faixa eficientes com aplicações na faixa de micro-ondas. Resultados iniciais são discutidos em termos do efeito eletromagnético decorrente da variação de parâmetros como número de iterações fractais (ou nível do fractal), fator de iteração fractal e periodicidade da FSS, dependendo do elemento pré-fractal utilizado (ilha de Sierpinski ou fractal T). As propriedades de transmissão destes arranjos periódicos propostos são investigadas através de simulações realizadas pelos programas comerciais Ansoft Designer™ e Ansoft HFSS™, que executam métodos de onda completa. Para validar a metodologia empregada, protótipos de FSS são selecionados para fabricação e medição. Os resultados obtidos apontam características interessantes para filtros espaciais de FSS, tais como: estrutura compacta, com maiores fatores de compressão de frequência, além de respostas estáveis em frequência com relação à incidência oblíqua de ondas planas. Esta tese aborda ainda, como enfoque principal, a aplicação de uma técnica alternativa de otimização eletromagnética (EM) para análise e síntese de FSSs com motivos fractais. Em exemplos de aplicação desta técnica, elementos pré-fractais de Vicsek e Sierpinski são usados no projeto ótimo das estruturas de FSS. Baseada em ferramentas de inteligência computacional, a técnica proposta supera o alto custo computacional proveniente das análises paramétricas de onda completa. Para este fim, são desenvolvidos modelos rápidos e precisos de rede neural do tipo perceptron de múltiplas camadas (MLP) utilizando diferentes parâmetros como variáveis de entrada do projeto. Estes modelos de rede neural têm como objetivo calcular a função custo nas iterações dos algoritmos de busca populacional. O algoritmo genético contínuo (GA), a otimização por enxame de partículas (PSO) e o algoritmo das abelhas (BA) são usados para a otimização das FSSs com valores específicos de frequência de ressonância e largura de banda. O desempenho destes algoritmos é comparado em termos do custo computacional e da convergência numérica. Resultados consistentes podem ser verificados através da excelente concordância obtida entre simulações e medições referentes aos protótipos de FSS construídos com uma dada iteração fractal.
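The surrogate-plus-search scheme described in this record — a trained MLP predicts the FSS response so that the population-based algorithm never has to call the expensive full-wave solver inside its loop — can be sketched as follows. This is a toy illustration only: `surrogate_resonance` is a made-up linear stand-in for a trained MLP model, and the PSO constants (inertia 0.7, acceleration coefficients 1.5) are common textbook defaults, not values from the thesis.

```python
import numpy as np

def surrogate_resonance(x):
    # Hypothetical stand-in for the trained MLP surrogate: maps two FSS
    # geometry parameters to a predicted resonant frequency in GHz.
    return 18.0 - 0.9 * x[..., 0] - 0.4 * x[..., 1]

def cost(x, target_ghz=10.0):
    # Fitness evaluated at every iteration: distance to the target
    # resonance, computed from the cheap surrogate instead of a solver.
    return np.abs(surrogate_resonance(x) - target_ghz)

def pso(cost_fn, lo, hi, n_particles=30, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.uniform(lo, hi, size=(n_particles, len(lo)))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), cost_fn(pos)
    gbest = pbest[np.argmin(pbest_val)]
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, 1))
        # Velocity pulls each particle toward its own best and the swarm best.
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        val = cost_fn(pos)
        better = val < pbest_val
        pbest[better], pbest_val[better] = pos[better], val[better]
        gbest = pbest[np.argmin(pbest_val)]
    return gbest

lo, hi = np.array([0.0, 0.0]), np.array([10.0, 10.0])
best = pso(cost, lo, hi)   # geometry whose predicted resonance ≈ 10 GHz
```

The GA and BA variants compared in the thesis would plug into the same structure: only the population-update rule changes, while the MLP surrogate keeps supplying the cost function.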
43

[en] MULTIPLE CLASSIFIER SYSTEM FOR MOTOR IMAGERY TASK CLASSIFICATION / [pt] SISTEMA DE MÚLTIPLOS CLASSIFICADORES PARA CLASSIFICAÇÃO DE TAREFAS DE IMAGINAÇÃO MOTORA

ALIMED CELECIA RAMOS 09 August 2017 (has links)
[pt] Interfaces Cérebro Computador (BCIs) são sistemas artificiais que permitem a interação entre a pessoa e seu ambiente empregando a tradução de sinais elétricos cerebrais como controle para qualquer dispositivo externo. Um Sistema de neuroreabilitação baseado em EEG pode combinar portabilidade e baixo custo com boa resolução temporal e nenhum risco para a vida do usuário. Este sistema pode estimular a plasticidade cerebral, desde que ofereça confiabilidade no reconhecimento das tarefas de imaginação motora realizadas pelo usuário. Portanto, o objetivo deste trabalho é o projeto de um sistema de aprendizado de máquinas que, baseado no sinal de EEG de somente dois eletrodos, C3 e C4, consiga classificar tarefas de imaginação motora com alta acurácia, robustez às variações do sinal entre experimentos e entre sujeitos, e tempo de processamento razoável. O sistema de aprendizado de máquina proposto é composto de quatro etapas principais: pré-processamento, extração de atributos, seleção de atributos, e classificação. O pré-processamento e extração de atributos são implementados mediante a extração de atributos estatísticos, de potência e de fase das sub-bandas de frequência obtidas utilizando a Wavelet Packet Decomposition. Já a seleção de atributos é efetuada por um Algoritmo Genético e o modelo de classificação é constituído por um Sistema de Múltiplos Classificadores, composto por diferentes classificadores, e combinados por uma rede neural Multi-Layer Perceptron. O sistema foi testado em seis sujeitos de bases de dados obtidas das Competições de BCIs e comparados com trabalhos benchmark da literatura, superando os resultados dos outros métodos. Adicionalmente, um sistema real de BCI para neurorehabilitação foi projetado, desenvolvido e testado, produzindo também bons resultados. 
/ [en] Brain Computer Interfaces (BCIs) are artificial systems that allow the interaction between a person and their environment by translating brain electrical signals into controls for an external device. An EEG neurorehabilitation system can combine portability and affordability with good temporal resolution and no health risks to the user. This system can stimulate brain plasticity, provided that it offers reliability in the recognition of the motor imagery (MI) tasks performed by the user. Therefore, the aim of this work is the design of a machine learning system that, based on the EEG signal from only the C3 and C4 electrodes, can classify MI tasks with high accuracy, robustness to trial and inter-subject signal variations, and reasonable processing time. The proposed machine learning system has four main stages: preprocessing, feature extraction, feature selection, and classification. The preprocessing and feature extraction are implemented by extracting statistical, power and phase features from the frequency sub-bands obtained by the Wavelet Packet Decomposition. The feature selection process is performed by a Genetic Algorithm, and the classifier model is constituted by a Multiple Classifier System composed of different classifiers and combined by a Multilayer Perceptron Neural Network as meta-classifier. The system is tested on six subjects from datasets offered by the BCI Competitions and compared with benchmark works found in the literature, outperforming the other methods. In addition, a real BCI system for neurorehabilitation is designed and tested, producing good results as well.
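The Wavelet Packet Decomposition stage described in this record splits an EEG epoch into equal-width frequency sub-bands from which per-band features are computed. A small NumPy sketch of that idea — the thesis does not state the mother wavelet, so a Haar wavelet is assumed here, and the epoch length, level count and log-energy feature are illustrative choices:

```python
import numpy as np

def haar_step(x):
    # One level of the Haar wavelet transform: split a signal into an
    # approximation (low-pass) and a detail (high-pass) half.
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def wpd(x, levels):
    # Wavelet *packet* decomposition: unlike the plain wavelet transform,
    # both the approximation and the detail branches are split further,
    # yielding 2**levels equal-width frequency sub-bands.
    nodes = [x]
    for _ in range(levels):
        nodes = [half for node in nodes for half in haar_step(node)]
    return nodes

def band_log_energy(nodes):
    # One possible per-sub-band feature vector for the classifier stage.
    return np.array([np.log(np.sum(n ** 2) + 1e-12) for n in nodes])

rng = np.random.default_rng(1)
x = rng.standard_normal(64)      # stand-in for a short C3/C4 EEG epoch
nodes = wpd(x, levels=3)         # 8 sub-bands of 8 coefficients each
features = band_log_energy(nodes)
```

Because the Haar transform is orthonormal, the total signal energy is preserved across the sub-bands, which makes energy-based features well behaved.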
44

Försäljningsprediktion : en jämförelse mellan regressionsmodeller / Sales prediction : a comparison between regression models

Fridh, Anton, Sandbecker, Erik January 2021 (has links)
Idag finns mängder av företag i olika branscher, stora som små, som vill förutsäga sin försäljning. Det kan bland annat bero på att de vill veta hur stort antal produkter de skall köpa in eller tillverka, och även vilka produkter som bör investeras i över andra. Vilka varor som är bra att investera i på kort sikt och vilka som är bra på lång sikt. Tidigare har detta gjorts med intuition och statistik; de flesta vet att skidjackor inte säljer så bra på sommaren, eller att strandprylar inte säljer bra under vintern. Det här är ett simpelt exempel, men hur blir det när komplexiteten ökar, och det finns ett stort antal produkter och butiker? Med hjälp av maskininlärning kan ett sådant problem hanteras. En maskininlärningsalgoritm appliceras på en tidsserie, som är en datamängd med ett antal ordnade observationer vid olika tidpunkter under en viss tidsperiod. I den här studiens fall är detta försäljning av olika produkter som säljs i olika butiker, och försäljningen ska prediceras på månadsbasis. Tidsserien som behandlas är ett dataset från Kaggle.com som kallas för “Predict Future Sales”. Algoritmerna som används i den här studien för att hantera detta tidsserieproblem är XGBoost, MLP och MLR. XGBoost, MLR och MLP har i tidigare forskning gett bra resultat på liknande problem, där bland annat bilförsäljning, tillgänglighet och efterfrågan på taxibilar och bitcoin-priser legat i fokus. Samtliga algoritmer presterade bra utifrån de evalueringsmått som användes för studierna, och den här studien använder samma evalueringsmått. Algoritmernas prestation beskrivs enligt så kallade evalueringsmått; dessa är R², MAE, RMSE och MSE. Det är dessa mått som används i resultat- och diskussionskapitlen för att beskriva hur väl algoritmerna presterar. Den huvudsakliga forskningsfrågan för studien lyder därför enligt följande: Vilken av algoritmerna MLP, XGBoost och MLR kommer att prestera bäst enligt R², MAE, RMSE och MSE på tidsserien “Predict Future Sales”?
Tidsserien behandlas med ett känt tillvägagångssätt inom området som kallas CRISP-DM, där metodens olika steg följs. Dessa steg innebär bland annat dataförståelse, dataförberedelse och modellering. Denna metod är vad som i slutändan leder till resultatet, där resultatet från de olika modellerna som skapats genom CRISP-DM presenteras. I slutändan var det MLP som fick bäst resultat enligt mätvärdena, följt av MLR och XGBoost. MLP fick en RMSE på 0.863, MLR på 1.233 och XGBoost på 1.262. / Today, there are a lot of companies in different industries, large and small, that want to predict their sales. This may be due, among other things, to the fact that they want to know how many products they should buy or manufacture, and also which products should be invested in over others. In the past, this has been done with intuition and statistics. Most people know that ski jackets do not sell so well in the summer, or that beach products do not sell well during the winter. This is a simple example, but what happens when complexity increases, and there are a large number of products and stores? With the help of machine learning, a problem like this can be managed more easily. A machine learning algorithm is applied to a time series, which is a set of data with several ordered observations at different times during a certain time period. In the case of this study, it is the sales of different products sold in different stores, and sales are to be predicted on a monthly basis. The time series in question is a dataset from Kaggle.com called "Predict Future Sales". The algorithms used in this study to handle this time series problem are XGBoost, MLP and MLR. These have performed well in previous research on similar problems, where, among other things, car sales, availability and demand for taxis, and bitcoin prices were in focus. All algorithms performed well based on the evaluation metrics used by those studies, and this study uses the same evaluation metrics.
The algorithms' performances are described according to so-called evaluation metrics; these are R², MAE, RMSE and MSE. These measures are used in the results and discussion chapters to describe how well the algorithms perform. The main research question for the study is therefore as follows: Which of the algorithms MLP, XGBoost and MLR will perform best according to R², MAE, RMSE and MSE on the time series "Predict Future Sales"? The time series is treated with a well-known approach called CRISP-DM, whose different steps are followed. These steps include, among other things, data understanding, data preparation and modeling. This method is what ultimately leads to the results, where the results from the various models created through CRISP-DM are presented. In the end, it was the MLP algorithm that got the best results according to the measured values, followed by MLR and XGBoost. MLP got an RMSE of 0.863, MLR of 1.233 and XGBoost of 1.262.
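The four evaluation metrics used in this record (R², MAE, RMSE, MSE) are straightforward to compute from a vector of true values and a vector of predictions. A small NumPy sketch with made-up numbers (not data from the thesis):

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    err = y_true - y_pred
    mse = np.mean(err ** 2)              # mean squared error
    rmse = np.sqrt(mse)                  # same units as the target
    mae = np.mean(np.abs(err))           # robust to a few large errors
    ss_res = np.sum(err ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot           # 1.0 = perfect, 0.0 = mean baseline
    return {"R2": r2, "MAE": mae, "RMSE": rmse, "MSE": mse}

y_true = np.array([3.0, 5.0, 2.0, 7.0])   # illustrative monthly sales
y_pred = np.array([2.5, 5.0, 2.0, 8.0])   # illustrative model output
m = regression_metrics(y_true, y_pred)
```

RMSE, the metric the record reports for the final ranking, penalizes large errors more than MAE does, which is why the two can rank models differently.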
45

Automatic Analysis of Peer Feedback using Machine Learning and Explainable Artificial Intelligence / Automatisk analys av Peer feedback med hjälp av maskininlärning och förklarig artificiell Intelligence

Huang, Kevin January 2023 (has links)
Peer assessment is a process where learners evaluate and provide feedback on one another’s performance, which is critical to the student learning process. Earlier research has shown that it can improve student learning outcomes in various settings, including engineering education, in which collaborative teaching and learning activities are common. Peer assessment activities in computer-supported collaborative learning (CSCL) settings are becoming more and more common. When digital technologies are used for performing these activities, much student data (e.g., peer feedback text entries) is generated automatically. These large data sets can be analyzed (through, e.g., computational methods) and further used to improve our understanding of how students regulate their learning in CSCL settings, in order to improve their conditions for learning by, for example, providing in-time feedback. Yet there is currently a need to automatise the coding process of these large volumes of student text data, since it is a very time- and resource-consuming task. In this regard, the recent development in machine learning could prove beneficial. To understand how we can harness the affordances of machine learning technologies to classify student text data, this thesis examines the application of five models on a data set containing peer feedback from 231 students in the setting of a large technical university course. The models evaluated on the dataset are the traditional models Multi-Layer Perceptron (MLP) and Decision Tree, and the transformer-based models BERT, RoBERTa and DistilBERT. To evaluate each model’s performance, Cohen’s κ, accuracy, and F1-score were used as metrics. Preprocessing of the data was done by removing stopwords; it was then examined whether removing them improved the performance of the models. The results showed that preprocessing of the dataset only made the Decision Tree increase in performance, while performance decreased for all other models.
RoBERTa was the model with the best performance on the dataset on all metrics used. Explainable artificial intelligence (XAI) was applied to RoBERTa, as it was the best-performing model, and it was found that the words considered as stopwords made a difference in the prediction. / Kamratbedömning är en process där eleverna utvärderar och ger feedback på varandras prestationer, vilket är avgörande för elevernas inlärningsprocess. Tidigare forskning har visat att den kan förbättra studenternas inlärningsresultat i olika sammanhang, däribland ingenjörsutbildningen, där samarbete vid undervisning och inlärning är vanligt förekommande. I dag blir det allt vanligare med kamratbedömning inom datorstödd inlärning i samarbete (CSCL). När man använder digital teknik för att utföra dessa aktiviteter skapas många studentdata (t.ex. textinlägg om kamratåterkoppling) automatiskt. Dessa stora datamängder kan analyseras (genom t.ex. beräkningsmetoder) och användas vidare för att förbättra våra kunskaper om hur studenterna reglerar sitt lärande i CSCL-miljöer för att förbättra deras förutsättningar för lärande. Men för närvarande finns det ett stort behov av att automatisera kodningen av dessa stora volymer av textdata från studenter. I detta avseende kan den senaste utvecklingen inom maskininlärning vara till nytta. För att förstå hur vi kan nyttja möjligheterna med maskininlärningsteknik för att klassificera textdata från studenter, undersöker vi i denna studie hur vi kan använda fem modeller på en datamängd som innehåller feedback från kamrater till 231 studenter. Modellerna som används för att utvärdera datasetet är de traditionella modellerna Multi-Layer Perceptron (MLP) och Decision Tree samt de transformer-baserade modellerna BERT, RoBERTa och DistilBERT. För att utvärdera varje modells effektivitet användes Cohen’s κ, noggrannhet och F1-poäng som mått.
Förbehandling av data gjordes genom att ta bort stoppord, därefter undersöktes om borttagandet av dem förbättrade modellernas effektivitet. Resultatet visade att förbehandlingen av datasetet endast fick Decision Tree att öka sin prestanda, medan den minskade för alla andra modeller. RoBERTa var den modell som presterade bäst på datasetet för alla mätvärden som användes. Förklarlig artificiell intelligens (XAI) användes på RoBERTa eftersom det var den modell som presterade bäst, och det visade sig att de ord som ansågs vara stoppord hade betydelse för prediktionen.
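Cohen's κ, one of the evaluation metrics named in this record, corrects raw accuracy for the agreement that would be expected by chance given each label's marginal frequency. A minimal sketch with hypothetical binary labels (not data from the thesis):

```python
import numpy as np

def cohens_kappa(y_true, y_pred):
    labels = np.unique(np.concatenate([y_true, y_pred]))
    po = np.mean(y_true == y_pred)  # observed agreement (plain accuracy)
    # Expected chance agreement: product of the two marginal label rates,
    # summed over all labels.
    pe = sum(np.mean(y_true == c) * np.mean(y_pred == c) for c in labels)
    return (po - pe) / (1.0 - pe)

y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])   # hypothetical gold codes
y_pred = np.array([0, 1, 1, 1, 0, 0, 1, 0])   # hypothetical model output
k = cohens_kappa(y_true, y_pred)
```

Here the accuracy is 0.75 but both marginals are balanced, so chance agreement is 0.5 and κ drops to 0.5 — the reason κ is preferred over accuracy when classes or annotator habits are skewed.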
46

Battery supported charging infrastructure for electric vehicles : And its impact on the overall electricity infrastructure / Laddinfrastruktur för elbilar kopplat till stationära batterilager : Och dess inverkan på elnätet

Svensson Dahlin, Marcus January 2019 (has links)
The Paris Agreement was formed in 2015 to reduce the environmental impact and limit the increase in temperature to 2°C compared to pre-industrial levels. It is believed that an electrification of the transport sector will reduce its negative environmental impact. To reach the goals set by the Paris Agreement we are in need of quick development towards an electrified fleet of vehicles. Despite this urgency, electric vehicles (EVs) have failed to reach the majority of the market; instead, they have been stuck in the chasm between the early adopters and the early majority of the market. This is due to three main challenges: EVs are relatively expensive compared to conventional petrol- and diesel-powered vehicles, EVs have an inadequate driving range, and access to a functional charging infrastructure is limited. This thesis focuses on the third challenge, regarding charging infrastructure. The charging infrastructure is dependent on the existing electricity distribution infrastructure, i.e. the grid. It is rather time-consuming and costly to strengthen the grid, which is deemed necessary for enabling a roll-out of a charging infrastructure that meets the needs of current and near-future EV operators. This research provides an alternative way of approaching these issues. Instead of strengthening the grid by digging up old cables, it looks into the opportunities of incorporating stationary battery storage as a buffer between the EV charging stations and the grid connection point. This battery solution can reduce the power outtake and smooth out the load from EV charging, thus limiting the impact of EV charging from a grid perspective. The research assesses what types of pathways this solution could follow to successfully drive the adoption of EVs. Furthermore, the study tries to understand how these solutions could be designed to deliver the necessary values regarding EV charging and reduce the overall power outtake from grid connection points.
The thesis is carried out by analyzing collected quantitative and qualitative data through the lens of three main theories: transition theory, theory on eco-innovations, and theory on the diffusion of innovations. The thesis finds that the two pathways for a battery-supported charging infrastructure that will be most efficient in speeding up the adoption rate of EVs are workplace charging and public charging in city and urban environments. For both pathways it is expected that a centralized concept, with one battery solution connected to several charging points, will be most feasible in the short term, which is important as the need for development is very urgent. The workplace charging will provide 3.6 kW AC charging while the public charging provides 150 kW DC charging. The solution is expected to be cost-efficient for specific locations, especially for public charging in city environments with strained grid infrastructure. The study provides an initial assessment for the city of Stockholm, which indicates that the power outtake can be reduced by 63.5–112.2 MW in 2030. This means that the current grid infrastructure could support a larger number of EVs, thus reducing the greenhouse-gas emissions from the transport sector and bringing us closer to reaching the goals set by the Paris Agreement. / Parisavtalet utformades år 2015 för att reducera vår klimatpåverkan och begränsa temperaturökningen till 2°C jämfört med nivåerna som rådde innan den industriella revolutionen. Förhoppningen är att en elektrifiering av transportsektorn kan reducera dess negativa klimatpåverkan. För att nå målen i Parisavtalet behövs en snabb omställning mot en elektrifiering av fordonsflottan. Trots situationens brådskande karaktär har elbilar fastnat i en klyfta mellan den begränsade tidiga marknaden och den sena marknaden, vilken utgör majoriteten av kunderna.
Det finns tre primära anledningar till detta: elbilar är dyra jämfört med bensin- och dieseldrivna bilar, räckvidden för elbilar är otillräcklig, och det råder begränsad tillgång till en funktionell laddinfrastruktur. Den här studien fokuserar på den tredje anledningen kring otillräcklig laddinfrastruktur. Laddinfrastrukturen är beroende av det existerande elnätet och dess distributionskapacitet. En förstärkning av elnätet är i många fall nödvändig för att möjliggöra en utrullning av en laddinfrastruktur som möter dagens och morgondagens behov. Istället för att förstärka elnätet genom att gräva ner tjockare kablar fokuserar denna studie på en alternativ lösning kring laddinfrastruktur sammankopplad med stationära batterilager. Batterilagret agerar som en buffert mellan anslutningspunkten till elnätet och laddningspunkten för elbilar. Genom att reducera effektuttaget och jämna ut lastkurvan för elbilsladdning kan en batterilösning begränsa den negativa påverkan den förväntas ha på elnätet. Studien undersöker vilka vägar denna batterilösning kan ta för att öka antalet elbilar i fordonsflottan. Efter att ha förstått vilka dessa lösningsvägar är analyserar studien hur dessa lösningar kan vara uppbyggda för att erbjuda de efterfrågade och nödvändiga värdena för elbilsladdning och elnätets fortsatta funktionalitet. Studien bygger på analys av kvalitativa och kvantitativa data. Analysen utförs genom att applicera koncept hämtade från teorier kring teknologiska övergångar, miljöinnovationer och spridning av innovationer. De två lösningsområden som förväntas vara mest effektiva i att driva en ökning av antalet elbilar i Sverige är arbetsplatsladdning samt offentlig laddning i stadsmiljöer. En lösning med ett centraliserat batterisystem, där en batterilösning är kopplad till flera laddstationer, antas vara mest genomförbar på kort sikt, vilket anses vara centralt på grund av utmaningarnas brådskande karaktär.
För arbetsplatsladdning tillhandahålls 3,6 kW AC-laddning och för offentlig laddning tillhandahålls 150 kW DC-laddning. Lösningarna förväntas vara kostnadseffektiva för specifika platser och användarprofiler, speciellt för offentlig laddning i stadsområden med ansträngda elnät. En initial uppskattning visar att en laddinfrastruktur kopplad till stationära batterilager inom de två lösningsområdena kan minska Stockholms effektuttag för elbilsladdning med 63,5–112,2 MW år 2030. Detta betyder att dagens elnät kan tillgodose ett ökat antal elbilar, vilka genererar färre utsläpp av växthusgaser och ger oss en bättre chans att nå Parisavtalets mål.
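The buffering principle described in this record — the battery discharges during charging peaks and recharges when the load leaves headroom, so the grid connection never sees the full peak — can be sketched as a greedy dispatch loop. All numbers below (the load profile, the 80 kW grid limit, the 200 kWh capacity, the 90% charge efficiency) are illustrative assumptions; only the 150 kW spike mirrors the public DC charging level the thesis mentions.

```python
import numpy as np

def peak_shave(load_kw, grid_limit_kw, capacity_kwh, dt_h=1.0, eff=0.9):
    # Greedy dispatch: discharge the battery to keep the grid draw at or
    # below grid_limit_kw, and recharge (up to the same limit) whenever
    # the load leaves headroom.
    soc = capacity_kwh / 2                      # start half full
    grid = np.empty_like(load_kw)
    for i, p in enumerate(load_kw):
        if p > grid_limit_kw:                   # peak: discharge the buffer
            discharge = min(p - grid_limit_kw, soc / dt_h)
            soc -= discharge * dt_h
            grid[i] = p - discharge
        else:                                   # headroom: recharge
            charge = min(grid_limit_kw - p, (capacity_kwh - soc) / (eff * dt_h))
            soc += charge * dt_h * eff
            grid[i] = p + charge
    return grid

load = np.array([20.0, 30.0, 150.0, 150.0, 40.0, 10.0])  # kW, hourly steps
grid = peak_shave(load, grid_limit_kw=80.0, capacity_kwh=200.0)
```

With these numbers the 150 kW charging peaks are served while the grid connection never exceeds 80 kW, which is the sense in which the battery lets an unstrengthened grid support more EV charging.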
47

A Study of Innovative Green Energy Technology Diffusion -- Taking the Evolution of Taiwan's Photovoltaic as Example

Chen, Jyung-Yau 01 February 2012 (has links)
Renewable energy can effectively decrease carbon-dioxide emissions and alleviate the greenhouse effect. As a large consumer of fossil fuels, Taiwan has an obligation to reduce carbon-dioxide emissions. Given the abundant sunshine across the whole island and its mature photovoltaic (PV) industry, Taiwan has the potential to develop PV. This paper, based on the Theory of Planned Behavior (TPB) and the Multi-Level Perspective (MLP) on technological transitions, focuses on the PV evolution of Taiwan. Through an empirical study, this paper developed a research framework and applied a questionnaire survey to verify it. Further, this paper also presents a longitudinal case study, using the historical research method to explore the evolution of Taiwan's PV policy. This paper found that attitude is the primary factor that affects the household's intention, and its antecedent factor, relative advantage, is the most important one. The second factor that affects the household's intention is perceived behavioral control, which has the antecedent factor complexity. Further, perceived behavioral control also has a direct effect on the action, which we must pay attention to. Subjective norm has a slight effect on the household's intention, and social obligation is the antecedent factor of the subjective norm. Moreover, an interference effect exists between intention and the household's real action. From the macro perspective, the MLP depicts the evolution of Taiwan's PV diffusion, and we found it resulted from the interaction of the socio-technical landscape, the socio-technical regime and niche-innovations. The processes continually developed and formed an innovative technology spiral.
48

Natural language processing techniques for the purpose of sentinel event information extraction

Barrett, Neil 23 November 2012 (has links)
An approach to biomedical language processing is to apply existing natural language processing (NLP) solutions to biomedical texts. Often, existing NLP solutions are less successful in the biomedical domain than in their original, non-biomedical domain (e.g., newspaper text). Biomedical NLP is likely best served by methods, information and tools that account for its particular challenges. In this thesis, I describe an NLP system specifically engineered for sentinel event extraction from clinical documents. The NLP system's design accounts for several biomedical NLP challenges. The specific contributions are as follows.
- Biomedical tokenizers differ, lack consensus over output tokens and are difficult to extend. I developed an extensible tokenizer, providing a tokenizer design pattern and implementation guidelines. In evaluation it performed equivalently to a leading biomedical tokenizer (MedPost).
- Biomedical part-of-speech (POS) taggers are often trained on non-biomedical corpora and applied to biomedical corpora, which lowers tagging accuracy. I built a token-centric POS tagger, TcT, that is more accurate than three existing POS taggers (mxpost, TnT and Brill) when trained on a non-biomedical corpus and evaluated on biomedical corpora. TcT achieves this increase in accuracy by ignoring previously assigned POS tags and restricting the tagger's scope to the current token, the previous token and the following token.
- Two parsers, MST and Malt, have previously been evaluated using perfect POS tag input. Given that perfect input is unlikely in biomedical NLP tasks, I evaluated these two parsers on imperfect POS tag input and compared their results. MST was more affected by imperfectly POS-tagged biomedical text. I attributed MST's drop in performance to verbs and adjectives, where MST had more potential for performance loss than Malt, and Malt's resilience to POS tagging errors to its use of a rich feature set and a local scope in decision making.
- Previous automated clinical coding (ACC) research focuses on mapping narrative phrases to terminological descriptions (e.g., concept descriptions). These methods make little or no use of the additional semantic information available through topology. I developed a token-based ACC approach that encodes tokens and manipulates token-level encodings by mapping linguistic structures to topological operations in SNOMED CT. My ACC method recalled most concepts given their descriptions and performed significantly better than MetaMap.
I extended these contributions for the purpose of sentinel event extraction from clinical letters. The extensions account for negation in text, use medication brand names during ACC and model (coarse) temporal information. My software system's performance is similar to state-of-the-art results. Given all of the above, my thesis is a blueprint for building a biomedical NLP system. Furthermore, my contributions likely apply to NLP systems in general. / Graduate
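The token-centric tagging idea described in the abstract — choosing a tag from the surrounding *tokens* while ignoring any previously assigned tags — can be sketched in a few lines. The class below is an illustrative reconstruction under stated assumptions, not TcT itself: the trigram/unigram/suffix back-off chain, the tag set and the toy corpus are all invented for the example.

```python
from collections import Counter, defaultdict

class TokenCentricTagger:
    """Sketch of a token-centric tagger: tags are chosen from statistics
    keyed on the (previous, current, next) tokens — never on previously
    assigned tags — backing off to the current token alone, then to its
    3-character suffix, then to a default tag."""

    def __init__(self, default_tag="NN"):
        self.trigrams = defaultdict(Counter)  # (prev, cur, next) -> tag counts
        self.unigrams = defaultdict(Counter)  # cur -> tag counts
        self.suffixes = defaultdict(Counter)  # last 3 chars of cur -> tag counts
        self.default_tag = default_tag

    def _contexts(self, tokens):
        # Pad so every token has a previous and a following neighbour.
        padded = ["<s>"] + [t.lower() for t in tokens] + ["</s>"]
        for i in range(1, len(padded) - 1):
            yield (padded[i - 1], padded[i], padded[i + 1]), padded[i]

    def train(self, tagged_sentences):
        for sentence in tagged_sentences:
            tokens = [tok for tok, _ in sentence]
            tags = [tag for _, tag in sentence]
            for (ctx, cur), tag in zip(self._contexts(tokens), tags):
                self.trigrams[ctx][tag] += 1
                self.unigrams[cur][tag] += 1
                self.suffixes[cur[-3:]][tag] += 1

    def tag(self, tokens):
        result = []
        for ctx, cur in self._contexts(tokens):
            # Back-off chain: token trigram -> token -> suffix -> default.
            for table, key in ((self.trigrams, ctx),
                               (self.unigrams, cur),
                               (self.suffixes, cur[-3:])):
                if key in table:
                    result.append(table[key].most_common(1)[0][0])
                    break
            else:
                result.append(self.default_tag)
        return list(zip(tokens, result))

# Tiny illustrative corpus (invented examples, not the thesis's data).
corpus = [
    [("the", "DT"), ("patient", "NN"), ("coughs", "VBZ")],
    [("the", "DT"), ("nurse", "NN"), ("records", "VBZ"), ("symptoms", "NNS")],
]
tagger = TokenCentricTagger()
tagger.train(corpus)
print(tagger.tag(["the", "patient", "records"]))
```

Note how "records", seen in training only as a verb, is tagged from its own token statistics rather than from a tag assigned to "patient" — the point of keeping the scope at the token level.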
49

Functional Characterization of the Evolutionarily Conserved Adenoviral Proteins L4-22K and L4-33K

Östberg, Sara January 2014 (has links)
Regulation of adenoviral gene expression is a complex process directed by viral proteins controlling a multitude of different activities at distinct phases of the virus life cycle. This thesis discusses adenoviral regulation of transcription and splicing by two proteins expressed at the late phase: L4-22K and L4-33K. These are closely related, sharing a common N-terminus but having unique C-terminal domains. The L4-33K protein is an alternative RNA splicing factor inducing L1-IIIa mRNA splicing, while L4-22K stimulates transcription from the major late promoter (MLP). The L4-33K protein contains a tiny RS-repeat in its unique C-terminal end that is essential for the splicing enhancer function of the protein. Here we demonstrate that the tiny RS-repeat is required for localization of the protein to the nucleus and to viral replication centers. Further, we describe an auto-regulatory loop in which L4-33K enhances splicing of its own intron. The preliminary characterization of the responsive RNA element suggests that it differs from the previously defined L4-33K-responsive element activating L1-IIIa mRNA splicing. L4-22K lacks the ability to enhance L1-IIIa splicing in vivo, and here we show that the protein is also defective in L1-IIIa and other late pre-mRNA splicing reactions in vitro. Interestingly, we found a novel function for the L4-22K and L4-33K proteins as regulators of E1A alternative splicing. Both proteins selectively upregulated E1A-10S mRNA accumulation in transfection experiments, by a mechanism independent of the tiny RS-repeat. Although L4-22K is reported to be an MLP transcriptional enhancer protein, here we show that L4-22K also functions as a repressor of MLP transcription. This novel activity depends on the integrity of the major late first leader 5' splice site. The model suggests that at low concentrations L4-22K activates MLP transcription, while at high concentrations it represses transcription.
So far, characterizations of the L4-22K and L4-33K proteins have been limited to human adenoviruses 2 or 5 (HAdV-2/5). We expanded our experiments to include HAdV-3, HAdV-4, HAdV-9, HAdV-11 and HAdV-41. The results demonstrated that the transcription- or splicing-enhancing properties of L4-22K and L4-33K, respectively, are evolutionarily conserved and non-overlapping. Thus, the sequence-based conservation is mirrored by the functions, as expected for functionally important proteins.
50

Comparação de modelos MLP/RNA e modelos Box-Jenkins em séries temporais não lineares / Comparison of MLP/ANN models and Box-Jenkins models for non-linear time series

Flores, João Henrique Ferreira January 2009 (has links)
The capacity to forecast future outcomes from time series analysis is an important tool for any business or industrial planning. However, the literature offers many statistical tools and models for obtaining these forecasts, each with its own features and recommendations. Among these, the Box-Jenkins models and Artificial Neural Network (ANN) models, notably the multilayer perceptron (MLP), stand out. This dissertation compares these two approaches with respect to their capacity to produce accurate forecasts for time series that are non-linear in the mean. The approaches were compared using the monthly industrial physical production index of the state of Rio Grande do Sul, as well as the annual sunspot series, the latter used as a control case because its properties have already been widely studied. On the monthly physical production index series, the Box-Jenkins models performed better; on the sunspot series, the MLP models stood out. Thus, it is not possible to affirm that either approach is superior for time series that are non-linear in the mean.
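The head-to-head setup described in the abstract — a linear Box-Jenkins-style model and an MLP fitted to the same lagged inputs, then compared on held-out forecast error — can be sketched as below. The synthetic quasi-periodic series (standing in for the sunspot data), the lag order and the MLP hyperparameters are illustrative assumptions, not the dissertation's actual data or model specifications; the AR(p) least-squares fit is only the autoregressive core of a full Box-Jenkins model.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def make_lagged(series, p):
    """Build a design matrix of p lagged values and the one-step-ahead target."""
    X = np.column_stack([series[i:len(series) - p + i] for i in range(p)])
    return X, series[p:]

# Synthetic series that is non-linear in the mean (a stand-in for sunspots).
rng = np.random.default_rng(0)
t = np.arange(300)
series = np.sin(2 * np.pi * t / 11) ** 2 + 0.1 * rng.standard_normal(300)

p = 4
X, y = make_lagged(series, p)
X_train, y_train = X[:250], y[:250]
X_test, y_test = X[250:], y[250:]

# Linear AR(p) fit by least squares (the autoregressive core of Box-Jenkins).
design = np.column_stack([np.ones(len(X_train)), X_train])
coefs, *_ = np.linalg.lstsq(design, y_train, rcond=None)
ar_pred = np.column_stack([np.ones(len(X_test)), X_test]) @ coefs

# MLP trained on exactly the same lagged inputs (hyperparameters are arbitrary).
mlp = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
mlp.fit(X_train, y_train)
mlp_pred = mlp.predict(X_test)

def mse(a, b):
    return float(np.mean((a - b) ** 2))

print(f"AR({p}) test MSE: {mse(y_test, ar_pred):.4f}")
print(f"MLP test MSE:    {mse(y_test, mlp_pred):.4f}")
```

Keeping the lag structure identical for both models isolates the comparison to the functional form (linear vs. non-linear), mirroring the dissertation's point that neither approach dominates across all non-linear series.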
