291 |
Détection robuste de jonctions et points d'intérêt dans les images et indexation rapide de caractéristiques dans un espace de grande dimension / Robust junction detection for line-drawing images and time-efficient feature indexing in feature vector space. Pham, The Anh, 27 November 2013 (has links)
Local features are essential in many areas of image analysis, such as object detection and recognition, image retrieval, and so on. In recent years, several so-called local detectors have been proposed to extract such features. These local detectors generally work well for some applications, but not for all of them. Consider, for example, a retrieval application over a large image database. In that case, a detector based on binary features may be preferred to one working with real-valued features: the retrieval precision may be somewhat lower while remaining reasonable, but the response time is likely to be much shorter. In general, local detectors are used in combination with an indexing method, which becomes necessary when the point sets being processed contain billions of points, each represented by a high-dimensional feature vector. / Local features are of central importance for many problems in image analysis and understanding, including image registration, object detection and recognition, image retrieval, etc. Over the years, many local detectors have been proposed to extract such features. A given local detector usually works well for some particular applications, but not for all of them. Taking image retrieval in a large database as an example, an efficient binary-feature detector should be preferred to real-valued feature detection methods: a reasonable retrieval precision is still expected, but the response time must be as fast as possible. Generally, local features are used in combination with an indexing scheme, which is essential when the dataset is composed of billions of data points, each lying in a high-dimensional feature vector space.
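The abstract above contrasts binary descriptors with real-valued ones mainly through their indexing cost. As a purely illustrative sketch (not the detector or the indexing structure developed in the thesis), the following Python snippet indexes random 256-bit binary descriptors with a toy LSH-style hash table and answers approximate nearest-neighbor queries by Hamming distance; all sizes, table counts and variable names are assumptions made for the example.

```python
# Toy LSH-style indexing of binary descriptors; illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n_points, n_bits = 20_000, 256                      # hypothetical database size
database = rng.integers(0, 2, size=(n_points, n_bits), dtype=np.uint8)

# Each hash table keys descriptors by the values of a random subset of bits.
n_tables, bits_per_key = 8, 16
tables = []
for _ in range(n_tables):
    positions = rng.choice(n_bits, size=bits_per_key, replace=False)
    buckets = {}
    for i, row in enumerate(database[:, positions]):
        buckets.setdefault(row.tobytes(), []).append(i)
    tables.append((positions, buckets))

def query(descriptor, k=5):
    """Collect candidates from all hash tables, then rank them by Hamming distance."""
    candidates = set()
    for positions, buckets in tables:
        candidates.update(buckets.get(descriptor[positions].tobytes(), []))
    if not candidates:
        return []
    cand = np.fromiter(candidates, dtype=np.int64)
    dists = np.count_nonzero(database[cand] != descriptor, axis=1)
    order = np.argsort(dists)[:k]
    return list(zip(cand[order].tolist(), dists[order].tolist()))

print(query(database[42]))   # the descriptor itself comes back at Hamming distance 0
```

Only the candidates that collide in at least one table are compared exhaustively, which is what makes binary features attractive for very large datasets.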
|
292 |
Contemporary electromagnetic spectrum reuse techniques: TV white spaces and D2D communications / Técnicas contemporâneas de reuso do espectro eletromagnético: TV de espaços brancos e comunicações D2D. Carlos Filipe Moreira e Silva, 15 December 2015 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / Over the last few years, wireless broadband access has achieved tremendous success.
With that, the telecommunications industry has faced very important changes in terms
of technology, heterogeneity, kinds of applications, and massive usage (a virtual data tsunami)
derived from the introduction of smartphones and tablets, and even in terms of market structure
and its main players/actors. Nonetheless, it is well known that the electromagnetic spectrum is
a scarce resource, being already fully occupied (or at least reserved for certain applications).
Traditional spectrum markets (where big monopolies dominate) and static spectrum management
have originated a paradoxical situation: the spectrum is occupied without actually being used!
On the one hand, with the global transition from analog to digital Television (TV), part of the
spectrum previously licensed for TV is freed and geographically interleaved, originating the
so-called Television White Spaces (TVWS); on the other hand, direct communications
between devices, commonly referred to as Device-to-Device (D2D) communications, are attracting
increasing attention from the scientific community and industry as a way to overcome the scarcity
problem and satisfy the growing demand for extra capacity. As such, this thesis is divided into
two main parts: (a) Spectrum market for TVWS: where a SWOT analysis for the use of TVWS
is performed, highlighting the directions/actions that should be followed for its
adoption to become effective; and a techno-economic evaluation study is carried out considering a
typical European city as a use case, showing the potential cost savings that operators may achieve
if they opt for the use of TVWS in a flexible market manner; (b) D2D communications: where
a neighbor discovery technique for D2D communications is proposed for the single-cell scenario
and further extended to the multi-cell case; and an interference mitigation algorithm is proposed,
based on the intelligent selection of the Downlink (DL) or Uplink (UL) band for D2D communications
underlaying cellular networks.
A summary of the principal conclusions is as follows: (a) TVWS advocates should
focus on the promotion of a real-time secondary spectrum market where, through the correct
implementation of policies for protection ratios in the spectrum broker and the geo-location
database, incumbents are protected against interference; (b) It became evident that an operator
would recover its investment around one year earlier if it chose to deploy the network
following a flexible spectrum market approach with an additional TVWS carrier, instead of
the traditional market; (c) With the proposed neighbor discovery technique, the time to detect
all neighbors per Mobile Station (MS) is significantly reduced, leaving more time for the actual
data transmission; the power consumed by the MS during the discovery process is also reduced,
because the main processing is done at the Base Station (BS) while the MS only needs to ensure that
D2D communication is possible just before session establishment; (d) Despite being a simple
concept, band selection improves the gains of cellular communications and limits the gains
of D2D communications, regardless of the position within the cell where the D2D communications
take place, providing a trade-off between system performance and interference mitigation. /
In recent years, broadband access has achieved great success. With that, the telecommunications
industry has gone through important transformations in terms of technology, heterogeneity,
type of applications and massive usage (a virtual data tsunami) as a consequence of the
introduction of smartphones and tablets, and even in the market structure and its main
players/actors. However, it is well known that the electromagnetic spectrum is a limited
resource, being already occupied (or at least reserved for some application). The traditional
spectrum market (where big monopolies dominate) and its static management have contributed to
this paradoxical situation: the spectrum is occupied but is not being used!
On the one hand, with the worldwide transition from analog to digital Television (TV), part of
the spectrum previously licensed for TV is released and geographically multiplexed to avoid
interference between signals from neighboring towers, giving rise to "white spaces" in the TV
frequencies, or Television White Spaces (TVWS); on the other hand, direct communications between
users, known as Device-to-Device (D2D) communications, are generating growing interest from the
scientific community and industry, with a view to overcoming the spectrum scarcity problem and
meeting the growing demand for extra capacity.
Thus, the thesis is divided into two main parts: (a) Electromagnetic spectrum market for TVWS:
where a SWOT analysis for the use of TVWS is carried out, giving directions/actions to be
followed so that its use becomes effective; and a techno-economic study considering a typical
European city as a scenario, showing the possible monetary savings that operators can obtain by
opting for the use of TVWS in a flexible market; (b) D2D communications: where a neighbor
discovery technique for D2D communications is proposed, first for a single cell and later
extended to the multi-cell scenario; and an interference mitigation algorithm based on the
intelligent selection of the Downlink (DL) or Uplink (UL) band to be reused by the D2D
communications taking place in the cellular network.
A summary of the main conclusions is as follows: (a) TVWS advocates should focus on promoting
a secondary spectrum market where, through the correct implementation of interference-protection
policies in the spectrum broker and in the geo-location database, the primary users are protected
against interference; (b) An operator can recover its investment approximately one year earlier
by opting to deploy the network under a secondary spectrum market with the additional TVWS band,
instead of the traditional market; (c) With the proposed neighbor discovery technique, the
discovery time per user is significantly reduced, and the power consumed in that process is also
reduced because most of the processing is done at the Base Station (BS), while the user only
needs to make sure that direct communication is possible; (d) Band selection, although a simple
concept, improves the gains of cellular communications and limits those of D2D communications,
providing a trade-off between system performance and interference mitigation.
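As a rough illustration of the kind of DL/UL decision discussed in part (b) and conclusion (d), the sketch below implements a toy distance-based band-selection heuristic for a single D2D pair: reuse the band whose victim receiver (the BS for UL reuse, the nearest cellular UE for DL reuse) would see the weaker D2D signal. The path-loss model, constants and decision rule are assumptions made for the sake of the example and are not the algorithm proposed in the thesis.

```python
# Illustrative only: toy DL/UL band-selection heuristic for a D2D pair underlaying a cell.
import math

def path_loss_db(distance_m: float, exponent: float = 3.5) -> float:
    """Simplified log-distance path loss (arbitrary 30 dB reference at 1 m)."""
    return 30.0 + 10.0 * exponent * math.log10(max(distance_m, 1.0))

def select_band(d2d_to_bs_m: float, d2d_to_nearest_ue_m: float) -> str:
    """Reuse the band whose victim receiver sees the weaker D2D signal.

    UL reuse -> the victim is the BS (it receives the cellular uplink);
    DL reuse -> the victim is the nearest cellular UE (it receives the downlink).
    """
    interference_at_bs = -path_loss_db(d2d_to_bs_m)        # relative dB
    interference_at_ue = -path_loss_db(d2d_to_nearest_ue_m)
    return "UL" if interference_at_bs < interference_at_ue else "DL"

# Example: a D2D pair 400 m from the BS but only 50 m from a cellular UE.
print(select_band(400.0, 50.0))   # -> "UL": the far-away BS is the lesser victim
```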
|
293 |
Detekce fibrilace síní v EKG / ECG-based atrial fibrillation detection. Prokopová, Ivona, January 2020 (has links)
Atrial fibrillation is one of the most common cardiac rhythm disorders, characterized by an ever-increasing prevalence and incidence in the Czech Republic and abroad. The prevalence of atrial fibrillation is reported at 2-4 % of the population, but due to the often asymptomatic course, the real prevalence is even higher. The aim of this work is to design an algorithm for the automatic detection of atrial fibrillation in ECG records. In the practical part of this work, such an algorithm is proposed. For the detection itself, the k-nearest neighbor method, the support vector machine method and a multilayer neural network were used to classify ECG signals, using features describing the variability of RR intervals and the presence of the P wave in the ECG recordings. The best detection performance was achieved by a multilayer neural network with two hidden layers, with the following results: sensitivity 91.23 %, specificity 99.20 %, PPV 91.23 %, F-measure 91.23 % and accuracy 98.53 %.
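To make the feature-plus-classifier pipeline concrete, here is a minimal Python sketch assuming that RR-interval series and per-beat P-wave flags have already been extracted from the ECG. The synthetic "AF-like" data, the feature names and the (16, 8) hidden-layer sizes are illustrative assumptions, not the thesis implementation or its data.

```python
# Minimal sketch: RR-variability + P-wave features fed to a two-hidden-layer MLP.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def segment_features(rr_intervals_s, p_wave_present):
    """Simple variability features over one ECG segment."""
    rr = np.asarray(rr_intervals_s, dtype=float)
    diff = np.diff(rr)
    return [
        rr.std(),                          # overall RR variability
        np.sqrt(np.mean(diff ** 2)),       # RMSSD
        np.mean(np.abs(diff) > 0.05),      # pNN50-like irregularity ratio
        float(np.mean(p_wave_present)),    # fraction of beats with a detected P wave
    ]

# Synthetic stand-in data: irregular RR intervals and missing P waves ~ "AF-like".
rng = np.random.default_rng(1)
X, y = [], []
for _ in range(400):
    af = rng.integers(0, 2)
    rr = rng.normal(0.8, 0.18 if af else 0.03, size=30)
    p = rng.random(30) > (0.8 if af else 0.05)
    X.append(segment_features(rr, p))
    y.append(af)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000,
                                  random_state=0))
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```

Swapping `MLPClassifier` for `KNeighborsClassifier` or `SVC` reproduces the kind of comparison described in the abstract.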
|
294 |
Neue Indexingverfahren für die Ähnlichkeitssuche in metrischen Räumen über großen Datenmengen / New indexing methods for similarity search in metric spaces over large data volumes. Guhlemann, Steffen, 08 April 2016 (has links)
A topic of growing importance in computer science is the handling of similarity in a large number of different domains. Currently, no universally usable infrastructure exists for similarity search in general metric spaces. The goal of this work is to lay the foundation for such an infrastructure, which could be integrated into classical database management systems.
In an analysis of the state of the art, the M-tree is identified as the most suitable base structure. It is then extended to the EM-tree while retaining structural compatibility with the M-tree. The query algorithms are optimized with respect to minimizing the number of necessary distance computations. Building on a mathematical analysis of the relationship between tree structure and query cost, degrees of freedom in the tree-modification algorithms are exploited to construct trees such that similarity queries can be answered with a minimal number of query operations. / A topic of growing importance in computer science is the handling of similarity across multiple heterogeneous domains. Currently there is no common infrastructure to support this for general metric spaces. The goal of this work is to lay the foundation for such an infrastructure, which could be integrated into classical database management systems.
After an analysis of the state of the art, the M-tree is identified as the most suitable base structure and enhanced in multiple ways into the EM-tree, retaining structural compatibility. The query algorithms are optimized to reduce the number of necessary distance calculations. On the basis of a mathematical analysis of the relation between tree structure and query performance, degrees of freedom in the tree-edit algorithms are used to build trees optimized for answering similarity queries with a minimal number of distance calculations.
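The central idea behind reducing distance computations in metric indexes such as the (E)M-tree is pruning via the triangle inequality. The sketch below shows this mechanism in its simplest form, with a single pivot and a flat table rather than a tree; the metric, data and class names are illustrative and do not reproduce the EM-tree itself.

```python
# Pivot-based range query in a metric space: skip objects that the triangle
# inequality proves to be outside the query radius, counting distance calls.
import math

def euclidean(a, b):
    return math.dist(a, b)

class PivotIndex:
    def __init__(self, points, pivot, metric=euclidean):
        self.points = points
        self.pivot = pivot
        self.metric = metric
        # Precompute every object's distance to the pivot once, at build time.
        self.pivot_dist = [metric(p, pivot) for p in points]

    def range_query(self, query, radius):
        d_qp = self.metric(query, self.pivot)   # one distance to the pivot
        results, computed = [], 1
        for p, d_pp in zip(self.points, self.pivot_dist):
            # Triangle inequality: |d(q, pivot) - d(p, pivot)| <= d(q, p).
            if abs(d_qp - d_pp) > radius:
                continue                         # pruned without computing d(q, p)
            computed += 1
            if self.metric(query, p) <= radius:
                results.append(p)
        return results, computed

points = [(x / 10.0, (x * 7 % 13) / 10.0) for x in range(1000)]
index = PivotIndex(points, pivot=(0.0, 0.0))
hits, used = index.range_query((5.0, 0.5), radius=0.3)
print(len(hits), "hits using", used, "of", len(points) + 1, "distance computations")
```

Tree-structured indexes apply the same inequality at every routing node, which is where the structural degrees of freedom mentioned in the abstract come into play.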
|
295 |
Matrix decompositions and algorithmic applications to (hyper)graphs / Décomposition de matrices et applications algorithmiques aux (hyper)graphes. Bergougnoux, Benjamin, 13 February 2019 (has links)
Over the last few decades, considerable effort (and a lot of coffee) has been spent on characterizing the easy instances of NP-hard problems. In this line of research, one approach has proved remarkably effective: the theory of parameterized complexity, introduced by Downey and Fellows in the nineties. In this theory, the complexity of a problem is no longer measured solely as a function of the instance size, but also as a function of a parameter. In this toolbox, tree-width is undoubtedly one of the most studied graph parameters. This parameter measures how close a graph is to the topological structure of a tree. Tree-width has numerous algorithmic and structural properties. Nevertheless, despite the immense interest it has generated, only sparse graph classes can have bounded tree-width, while many NP-hard problems turn out to be easy on dense graph classes. Most of the time, this can be explained by the ability of these graphs to decompose recursively along vertex bipartitions $(A,B)$ where the neighborhood between $A$ and $B$ has a simple structure. Many parameters -- called width measures -- have been introduced to characterize this ability; the most remarkable are certainly clique-width, rank-width, boolean-width and mim-width. In this thesis, we study the algorithmic properties of these width measures. We propose a method that generalizes and simplifies the tools developed for tree-width and for problems with an acyclicity or connectivity constraint, such as Connected Vertex Cover, Connected Dominating Set, Feedback Vertex Set, etc. For all these problems, we obtain algorithms running in time $2^{O(k)}\cdot n^{O(1)}$, $2^{O(k \log(k))}\cdot n^{O(1)}$, $2^{O(k^2)}\cdot n^{O(1)}$ and $n^{O(k)}$, where $k$ is, respectively, the clique-width, the Q-rank-width, the rank-width and the mim-width. We also prove that there is an algorithm for Hamiltonian Cycle running in time $n^{O(k)}$ when a clique-width decomposition of width $k$ is given as input. Finally, we prove that the minimal transversals of $\beta$-acyclic hypergraphs and the minimal dominating sets of strongly chordal graphs can be counted in polynomial time. All these results offer promising perspectives towards a generalization of width measures and their algorithmic applications. / In the last decades, considerable efforts have been spent to characterize what makes NP-hard problems tractable.
A successful approach in this line of research is the theory of parameterized complexity introduced by Downey and Fellows in the nineties. In this framework, the complexity of a problem is not measured only in terms of the input size, but also in terms of a parameter on the input. One of the most well-studied parameters is tree-width, a graph parameter which measures how close a graph is to the topological structure of a tree. It turns out that tree-width has numerous structural properties and algorithmic applications. However, only sparse graph classes can have bounded tree-width, while many NP-hard problems are tractable on dense graph classes. Most of the time, this tractability can be explained by the ability of these graphs to be recursively decomposed along vertex bipartitions $(A,B)$ where the adjacency between $A$ and $B$ is simple to describe. Many graph parameters -- called width measures -- have been defined to characterize this ability; the most remarkable ones are certainly clique-width, rank-width, and mim-width. In this thesis, we study the algorithmic properties of these width measures. We provide a framework that generalizes and simplifies the tools developed for tree-width and for problems with a constraint of acyclicity or connectivity, such as Connected Vertex Cover, Connected Dominating Set, Feedback Vertex Set, etc. For all these problems, we obtain $2^{O(k)}\cdot n^{O(1)}$, $2^{O(k \log(k))}\cdot n^{O(1)}$, $2^{O(k^2)}\cdot n^{O(1)}$ and $n^{O(k)}$ time algorithms parameterized respectively by clique-width, Q-rank-width, rank-width and mim-width. We also prove that there exists an algorithm solving Hamiltonian Cycle in time $n^{O(k)}$ when a clique-width decomposition of width $k$ is given. Finally, we prove that we can count in polynomial time the minimal transversals of $\beta$-acyclic hypergraphs and the minimal dominating sets of strongly chordal graphs. All these results offer promising perspectives towards a generalization of width measures and their algorithmic applications.
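For readers unfamiliar with the objects being counted in the last result, the following naive Python enumeration makes "minimal dominating sets" concrete on a tiny example. It runs in exponential time and is emphatically not the polynomial-time counting method of the thesis; the example graph is arbitrary.

```python
# Brute-force count of inclusion-minimal dominating sets (exponential; for intuition only).
from itertools import combinations

def is_dominating(graph, subset):
    """True if every vertex is in `subset` or adjacent to a vertex in it."""
    dominated = set(subset)
    for v in subset:
        dominated.update(graph[v])
    return dominated == set(graph)

def count_minimal_dominating_sets(graph):
    count = 0
    for size in range(1, len(graph) + 1):
        for subset in combinations(graph, size):
            if is_dominating(graph, subset) and all(
                not is_dominating(graph, set(subset) - {v}) for v in subset
            ):
                count += 1
    return count

# Tiny example: the path a-b-c-d has exactly 4 minimal dominating sets.
graph = {"a": {"b"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"c"}}
print(count_minimal_dominating_sets(graph))   # prints 4
```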
|
296 |
Arab-Israeli tensions and Kibbutz life in an early story by Amos Oz. Abramovich, Dvir, 13 August 2019 (has links)
No description available.
|
297 |
Spatial Pattern and Accessibility Analysis of Covid-19 Vaccine Centers in Michigan. Amin, Faria, January 2021 (has links)
No description available.
|
298 |
Topics in random matrices and statistical machine learning / ランダム行列と統計的機械学習について. Sushma, Kumari, 25 September 2018 (has links)
Kyoto University / 0048 / New degree system, doctorate by coursework / Doctor of Science / Kō No. 21327 / Science Doctorate No. 4423 / Shinsei||Sci||1635 (Main Library) / Graduate School of Science, Division of Mathematics and Mathematical Sciences, Kyoto University / (Chief examiner) Associate Professor COLLINS, Benoit Vincent Pierre; Professor 泉 正己; Professor 日野 正訓 / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Science / Kyoto University / DFAM
|
299 |
Classification of Radar Emitters Based on Pulse Repetition Interval using Machine Learning. Svensson, André, January 2022 (has links)
In electronic warfare, one of the key technologies is radar. Radar is used to detect and identify unknown aerial, nautical or land-based objects. An attribute of a pulsed radar signal is the Pulse Repetition Interval (PRI), the time interval between pulses in a pulse train. In a passive radar receiver system, the PRI can be used to recognize the emitter system. Correct classification of emitter systems is a crucial part of Electronic Support Measures (ESM) and Radar Warning Receivers (RWR), in order to deploy appropriate measures depending on the emitter system. Inaccurate predictions of emitter systems can have lethal consequences, and variables such as time and confidence in the predictions are essential for an effective predictive method. Due to the classified nature of military systems and techniques, there are no industry-standard systems or techniques that perform quick and accurate classification of emitter systems based on PRI. Methods that allow fast and accurate predictions based on PRI are therefore highly desirable and worthy of research. This thesis explores and compares the capabilities of two machine-learning methods for the task of classifying emitters based on received PRI. The first method is an attention-based model that performs well across all levels of realistic noise, is quick to train and even quicker to give accurate predictions. The second method is a K-Nearest Neighbor (KNN) implementation that, while performing well for noise-free PRI, sees its performance degrade as the amount of noise increases. An additional outcome of this thesis is the development of a system to generate samples in an automated fashion. The attention-based model performs well, achieving a macro-average F1-score of 63% on the 59-class recognition task, whereas the KNN performs worse, achieving a macro-average F1-score of 43%. Future research could aim at designing a better attention-based model that produces more accurate and more confident predictions, and at designing algorithms that reduce the time complexity of the KNN implementation. / One of the most important technologies in electronic warfare is radar. Radar is used to detect and identify unknown airborne, seagoing or land-based targets. One component of radar is the Pulse Repetition Interval (PRI), described as the time interval between two incoming pulses. In a Radar Warning Receiver (RWR) system, the PRI can be used to identify radar systems. Correct identification of radar systems is an important task for Electronic Support Measures (ESM), with the aim of deploying appropriate means depending on the radar system in question. Unreliable identification of radar systems can have lethal consequences, and variables such as time and confidence in the identification are decisive for an effective system. Since documentation and specifications of military systems are as a rule classified, it is difficult to discern any kind of industry standard for performing fast and reliable classification of radar systems based on PRI. It is therefore of great interest to explore this area and the possibilities for such solutions. This thesis explores and compares the abilities of two machine-learning methods with respect to correctly identifying radar emitters based on generated PRI. The first method is a deep neural network that uses the attention technique.
The deep network performs well at all noise levels, quickly learns to recognize the PRI attributes that characterize each radar emitter, and after training is moreover fast at correctly identifying PRI. The second method is a K-Nearest Neighbor implementation that admittedly performs well on noise-free data but whose performance deteriorates as the noise levels increase. A further outcome of the work is the development and implementation of a method for specifying PRI and then generating PRI according to the specification. The attention model produces good predictions for data consisting of 59 classes, with a macro-average F1-score of 63%, while the KNN implementation for the same task has lower accuracy, with a macro-average F1-score of 43%. Further research could include extended development of the deep neural network with the aim of improving its identification ability, and methods to minimize the time consumption of the KNN implementation.
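As a small illustration of the KNN side of the comparison, the sketch below classifies synthetic pulse trains from three hypothetical emitter classes (constant PRI and two staggered patterns) using simple summary features of the PRI sequence. The emitter definitions, jitter model and features are assumptions for the example and bear no relation to the 59-class data set used in the thesis.

```python
# Toy PRI-based emitter classification with k-nearest neighbors.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

EMITTERS = {                         # nominal PRI patterns in microseconds
    0: [1000.0],                     # constant PRI
    1: [800.0, 1200.0],              # 2-level stagger
    2: [500.0, 700.0, 900.0],        # 3-level stagger
}

def pulse_train(label, n_pulses=64, jitter=0.02):
    """Generate a jittered PRI sequence for one intercepted pulse train."""
    pattern = EMITTERS[label]
    pri = np.array([pattern[i % len(pattern)] for i in range(n_pulses)])
    return pri * (1.0 + jitter * rng.standard_normal(n_pulses))

def features(pri):
    """Order-free summary of a PRI sequence."""
    diffs = np.abs(np.diff(pri))
    return [pri.mean(), pri.std(), pri.min(), pri.max(), diffs.mean()]

X, y = [], []
for label in EMITTERS:
    for _ in range(200):
        X.append(features(pulse_train(label)))
        y.append(label)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
print("test accuracy:", knn.score(X_test, y_test))
```

Adding missing or spurious pulses to `pulse_train` is a simple way to reproduce the noise sensitivity of the KNN approach that the abstract reports.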
|
300 |
Predicting PV self-consumption in villas with machine learning. GALLI, FABIAN, January 2021 (has links)
In Sweden, there is a strong and growing interest in solar power. In recent years, photovoltaic (PV) system installations have increased dramatically, and a large part are distributed, grid-connected PV systems, i.e. rooftop installations. Currently the electricity export rate is significantly lower than the import rate, which has made the amount of self-consumed PV electricity a critical factor when assessing system profitability. Self-consumption (SC) is calculated using hourly or sub-hourly timesteps and is highly dependent on the solar irradiance patterns of the location of interest, the PV system configuration and the building load. As this varies across potential installations, it is difficult to make estimations without historical data on both load and local irradiance, which is often hard to acquire or simply not available. A method to predict SC using information commonly available at the planning phase is therefore preferred. There is a scarcity of documented SC data and only a few reports treating the subject of mapping or predicting SC. Therefore, this thesis investigates the possibility of utilizing machine learning to create models able to predict SC from the inputs: annual load, annual PV production, tilt and azimuth angles of the modules, and latitude. Using the programming language Python, seven models are created with different regression techniques, trained on real load data and simulated PV data from the south of Sweden, and evaluated using the coefficient of determination (R2) and the mean absolute error (MAE). The techniques are linear regression, polynomial regression, ridge regression, lasso regression, k-nearest neighbors (kNN), random forest and multi-layer perceptron (MLP); an eighth model reimplements the only other SC prediction model found in the literature. A parametric analysis of the models is conducted, removing one variable at a time to assess each model's dependence on that variable. The results are promising: five out of the eight models achieve an R2 value above 0.9 and can be considered good for predicting SC. The best-performing model, random forest, has an R2 of 0.985 and an MAE of 0.0148. The parametric analysis also shows that while more input data is helpful, using only annual load and annual PV production is sufficient to make good predictions. These findings, however, only hold for the southern region of Sweden and are not applicable to areas outside the latitudes or country tested. / In Sweden, there is a strong and growing interest in solar energy. In recent years the number of photovoltaic installations has increased dramatically, and a large share are distributed grid-connected PV systems, i.e. rooftop installations. At present the electricity export price is considerably lower than the import price, which has made the amount of self-consumed solar electricity a critical factor when assessing system profitability. Self-consumption (SC) is calculated with time steps of up to one hour and is highly dependent on the solar irradiance pattern of the location of interest, the PV system configuration and the building's energy demand. Since this varies for every potential installation, it is difficult to make estimates without historical data on both energy demand and local irradiance, which is often not available. A method for predicting SC using generally available information is therefore preferable. There is a lack of documented SC data and only a few reports dealing with the mapping and prediction of SC.
This thesis investigates the possibility of using machine learning to create models that can predict SC. The variables included are annual energy consumption, annual PV production, the tilt and azimuth angles of the modules, and latitude. Using the programming language Python, seven models are created with different regression techniques, based on energy-consumption data and simulated PV production data from southern Sweden. The models are evaluated using the coefficient of determination (R2) and the mean absolute error (MAE). The techniques used are linear regression, polynomial regression, ridge regression, lasso regression, k-nearest neighbor regression, random forest regression and multi-layer perceptron regression. An additional linear regression model is also created with the same methodology as used in a previously published report. A parametric analysis of the models is carried out, excluding one variable at a time to assess each model's dependence on that variable. The results are very promising, with five of the eight models examined achieving an R2 value above 0.9. The best model, random forest, has an R2 of 0.985 and an MAE of 0.0148. The parametric analysis also shows that although more input data helps, using annual energy consumption and annual PV production is sufficient to make good predictions. It must be pointed out, however, that the model performance is only reliable for southern Sweden, from where the underlying data was taken, and is not applicable to areas outside the chosen latitudes or country.
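A minimal sketch of the modelling setup described above is shown below, assuming a table of historical systems with the five inputs and a known self-consumption fraction. Because the real load and simulated PV data are not available here, the target is generated from an invented relationship purely to make the example runnable; it is not the relationship found in the thesis.

```python
# Random-forest regression of self-consumption from five planning-phase inputs.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "annual_load_kwh": rng.uniform(4000, 20000, n),
    "annual_pv_kwh":   rng.uniform(2000, 15000, n),
    "tilt_deg":        rng.uniform(10, 60, n),
    "azimuth_deg":     rng.uniform(-90, 90, n),   # 0 = due south
    "latitude_deg":    rng.uniform(55, 60, n),
})
# Invented target: SC drops as the PV-to-load ratio grows (plus noise) -- an assumption.
ratio = df["annual_pv_kwh"] / df["annual_load_kwh"]
df["self_consumption"] = np.clip(1.0 / (1.0 + ratio) + rng.normal(0, 0.03, n), 0, 1)

X = df.drop(columns="self_consumption")
y = df["self_consumption"]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_train, y_train)
pred = model.predict(X_test)
print("R2 :", round(r2_score(y_test, pred), 3))
print("MAE:", round(mean_absolute_error(y_test, pred), 4))
```

Dropping one column at a time before refitting mirrors the parametric analysis described in the abstract.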
|