801 |
Modeling and Phylodynamic Simulations of Avian Influenza
Mosley, Liam M. 03 May 2019 (has links)
No description available.
|
802 |
Drift Capacity of Reinforced Concrete Walls with Lap Splices
William G Pollalis (10709154) 27 April 2021 (has links)
<p>Twelve large-scale reinforced concrete (RC) specimens were tested at Purdue University’s Bowen Laboratory to evaluate the deformability of structural walls with longitudinal lap splices at their bases. Eight specimens were tested under four-point bending and four specimens were tested as cantilevers under constant axial force and cyclic reversals of lateral displacement. All specimens failed abruptly by disintegration of the lap splice, irrespective of the loading method or the splice details chosen. Previous work on lap splices has focused mainly on splice strength. However, for demands requiring structural toughness (e.g. blast, earthquake, differential settlement), deformability is arguably more important than strength.</p>
<p>Approximations of wall drift-strain relationships are presented in combination with estimates of splice strength and deformability to provide lower-bound drift capacity estimates for RC walls with lap splices at their bases. Deformations in slender structural walls (with aspect ratios larger than 3) are controlled by flexure. Shear deformations must be considered for walls with smaller aspect ratios. For slender walls with lap splices comparable to those tested, the observations collected suggest that drift capacities can be as low as 0.5%. That is, splices with minimum concrete cover, minimum transverse reinforcement (0.25% transverse reinforcement ratio) terminating in hooks, and lap splice lengths selected to reach yielding in the spliced bars (approximately 60 bar diameters for splices of Grade-60 reinforcement) can fail as yield is reached or soon after. For splices of the same length, doubling the amount of hooked transverse reinforcement increases deformation capacity by nearly 50%. By maintaining the same transverse reinforcement ratio but confining splices with closed hoops (instead of hooks), deformation capacity nearly doubles. Increasing splice length increases the expected splice strength but also increases the strain required to reach the same drift ratio.</p>
<p>Evidence from this and similar experimental programs suggests that lap splices with minimum cover and confined only by minimum transverse reinforcement terminating in hooks should not be used in critical sections of structural walls when toughness is required. To prevent abrupt failure during events that demand structural toughness, it is recommended that lap splices be shifted away from locations where yielding in structural walls is expected.</p>
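As a back-of-envelope illustration of the figures quoted above (not the thesis' estimation procedure, which combines drift-strain relationships with splice strength models), the 60-bar-diameter rule of thumb and the 0.5% lower-bound drift capacity can be sketched as:

```python
def splice_length_mm(bar_diameter_mm, multiplier=60.0):
    """Lap splice length as a multiple of bar diameter; ~60 d_b is the
    length quoted for Grade-60 bars to reach yield in the spliced bars."""
    return multiplier * bar_diameter_mm

def drift_capacity_ok(capacity_pct, demand_pct):
    """Check an estimated lower-bound drift capacity against a demand."""
    return capacity_pct >= demand_pct

# A 25 mm bar would need roughly a 1.5 m splice.
length = splice_length_mm(25.0)        # 1500.0 mm
adequate = drift_capacity_ok(0.5, 1.0) # False: a 1.0% demand exceeds 0.5%
```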
|
803 |
Dataset Drift in Radar Warning Receivers : Out-of-Distribution Detection for Radar Emitter Classification using an RNN-based Deep Ensemble
Coleman, Kevin January 2023 (has links)
Changes to the signal environment of a radar warning receiver (RWR) over time, i.e. dataset drift, can negatively affect a machine learning (ML) model deployed for radar emitter classification (REC). The training data comes from a simulator at Saab AB, in the form of pulsed radar time-series. To investigate this phenomenon on a neural network (NN), this study first implements an underlying classifier (UC) in the form of a deep ensemble (DE), where each ensemble member is an NN with two independently trained bidirectional LSTM channels for each of the signal features pulse repetition interval (PRI), pulse width (PW) and carrier frequency (CF). In tests, the UC performs best for REC when using all three features. Because dataset drift can be treated as the appearance of out-of-distribution (OOD) samples over time, the aim is to reduce NN overconfidence on data from unseen radar emitters in order to enable OOD detection. The method estimates uncertainty with predictive entropy and classifies samples whose entropy exceeds a threshold as OOD. In the first set of tests, OOD is defined by holding out one feature modulation from the training dataset and using it as the only modulation in the OOD dataset during testing. With this definition, Stagger and Jitter are the most difficult modulations to detect as OOD. Moreover, using DEs with 6 ensemble members and adding LogitNorm to the architecture improves OOD detection performance. Furthermore, the OOD detection method performs well for up to 300 emitter classes, and predictive entropy outperforms the baseline in almost all tests. Finally, the model performs worse when OOD is defined simply as signals from unseen emitters, because of a decrease in precision. In conclusion, the implemented changes reduced the overconfidence of this particular NN and improved OOD detection for REC.
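The predictive-entropy OOD rule described above can be sketched as follows. This is a minimal NumPy illustration; the thesis' ensemble members are bidirectional-LSTM classifiers, which are omitted here, and the threshold value is an assumption:

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def predictive_entropy(member_logits):
    """Entropy of the mean predictive distribution of a deep ensemble.
    `member_logits` has shape (n_members, n_classes)."""
    p = softmax(member_logits).mean(axis=0)  # average member probabilities
    return float(-np.sum(p * np.log(p + 1e-12)))

def is_ood(member_logits, threshold):
    """Classify a sample as OOD when its predictive entropy is large."""
    return predictive_entropy(member_logits) > threshold
```

When the members agree confidently the averaged distribution is peaked and the entropy is near zero; disagreement between members flattens the average and drives the entropy up, which is what the threshold detects.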
|
804 |
Quantification of Variability, Abundance, and Mortality of Maumee River Larval Walleye (Sander vitreus) Using Bayesian Hierarchical Models
DuFour, Mark R. January 2012 (has links)
No description available.
|
805 |
Comparison of Indirect Inference and the Two Stage Approach
Hernadi, Victor, Carocca Jeria, Leandro January 2022 (has links)
Parametric models are used to understand dynamical systems and predict their future behavior. It is difficult to estimate a model's parameter values since there are usually many parameters and they are highly correlated. The aim of this project is to apply the method of indirect inference and the two stage approach to estimate the drift and volatility parameters of a Geometric Brownian Motion. This was first done by estimating the parameters of a known Geometric Brownian process. Then, the Coca-Cola Company's stock was used for a five-year forecast to study the estimators' predictive power. The two stage approach struggles when the data does not truly follow a Geometric Brownian Motion, but when it does it produces highly efficient and accurate estimates. The method of indirect inference produces better estimates than the two stage approach for data that deviates from a Geometric Brownian Motion. Therefore, indirect inference is preferable to the two stage approach for stock price forecasting. / Parametriska modeller används för att förstå dynamiska system och förutspå deras framtida beteende. Det är utmanande att skatta modellens parametriska värden eftersom det vanligtvis finns många parametrar och de är ofta starkt korrelerade. Målet med detta projekt är att tillämpa metoderna indirect inference och two stage approach för att skatta drivnings- och volatilitetsparametrarna av en geometrisk Brownsk rörelse. Först skattades parametrarna av en känd geometrisk Brownsk rörelse. Sedan användes The Coca-Cola Companys aktie i syfte att studera estimatorernas förmåga att förutspå en femårig period. Two stage approach fungerar dåligt för data som inte helt följer en geometrisk Brownsk rörelse, men när datan gör det är skattningarna noggranna och effektiva. Indirect inference ger bättre skattningar än two stage approach när datan inte helt följer en geometrisk Brownsk rörelse. Därför är indirect inference att föredra för aktieprognoser.
/ Kandidatexjobb i elektroteknik 2022, KTH, Stockholm
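The two stage idea, first fitting an auxiliary statistical model to the observations and then mapping its estimates to the structural parameters, takes a particularly simple form for GBM, where log returns are i.i.d. normal. A minimal sketch, assuming equally spaced observations (the thesis' exact estimators may differ):

```python
import numpy as np

def simulate_gbm(s0, mu, sigma, dt, n_steps, rng):
    """Simulate a GBM path using its exact log-normal transition."""
    z = rng.standard_normal(n_steps)
    increments = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
    return s0 * np.exp(np.concatenate(([0.0], np.cumsum(increments))))

def estimate_gbm(prices, dt):
    """Stage-wise closed-form estimates of drift and volatility
    from the i.i.d. normal log returns of a GBM."""
    r = np.diff(np.log(prices))
    sigma_hat = r.std(ddof=1) / np.sqrt(dt)
    mu_hat = r.mean() / dt + 0.5 * sigma_hat**2
    return mu_hat, sigma_hat

# Recover known parameters from a simulated 20-year daily path.
rng = np.random.default_rng(0)
path = simulate_gbm(100.0, mu=0.08, sigma=0.2, dt=1 / 252,
                    n_steps=252 * 20, rng=rng)
mu_hat, sigma_hat = estimate_gbm(path, 1 / 252)
```

Indirect inference would instead simulate paths from candidate parameters and match auxiliary statistics against those of the data, which is what gives it robustness when the data deviates from a true GBM.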
|
806 |
An Efficient Feature Descriptor and Its Real-Time Applications
Desai, Alok 01 June 2015 (has links) (PDF)
Finding salient features in an image and matching them to their corresponding features in another image is an important step for many vision-based applications, and feature description plays an important role in the matching process. A robust feature descriptor must work under a number of image deformations and should be computationally efficient. For resource-limited systems, floating-point and complex operations such as multiplication and square root are not desirable. This research first introduces a robust and efficient feature descriptor called the PRObability (PRO) descriptor that meets these requirements without sacrificing matching accuracy. The PRO descriptor is further improved by incorporating only affine features for matching. While it performs well, the PRO descriptor still requires a larger descriptor size, higher offline computation time, and more memory than other binary feature descriptors. The SYnthetic BAsis (SYBA) descriptor is developed to overcome these drawbacks. SYBA is built on a new compressed sensing theory that uses synthetic basis functions to uniquely encode or reconstruct a signal. The SYBA descriptor is designed to provide accurate feature matching for real-time vision applications. To demonstrate its performance, we develop algorithms that use the SYBA descriptor to localize the soccer ball in broadcast soccer game video, track ground objects from an unmanned aerial vehicle, perform motion analysis, and improve visual odometry accuracy for advanced driver assistance systems. SYBA provides high feature matching accuracy with computational simplicity and requires minimal computational resources. It is a hardware-friendly feature description and matching algorithm suitable for embedded vision applications.
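SYBA's encoding is not reproduced here, but the binary-descriptor matching step it feeds into is typically a Hamming-distance nearest-neighbour search, which needs no floating-point arithmetic. A minimal sketch (the ratio test below is a common matching heuristic, not necessarily what the thesis uses):

```python
import numpy as np

def hamming_distance(d1, d2):
    """Number of differing bits between two binary descriptors."""
    return int(np.count_nonzero(d1 != d2))

def match_descriptors(query, candidates, ratio=0.8):
    """Nearest-neighbour matching over Hamming distance with a ratio
    test: accept the best candidate only if it is clearly better than
    the runner-up, otherwise return None (needs >= 2 candidates)."""
    order = sorted(range(len(candidates)),
                   key=lambda i: hamming_distance(query, candidates[i]))
    d_best = hamming_distance(query, candidates[order[0]])
    d_second = hamming_distance(query, candidates[order[1]])
    return order[0] if d_best < ratio * d_second else None
```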
|
807 |
Re-weighted softmax cross-entropy to control forgetting in federated learning
Legate, Gwendolyne 12 1900 (has links)
Dans l’apprentissage fédéré, un modèle global est appris en agrégeant les mises à jour du
modèle calculées à partir d’un ensemble de nœuds clients, un défi clé dans ce domaine est
l’hétérogénéité des données entre les clients qui dégrade les performances du modèle. Les
algorithmes d’apprentissage fédéré standard effectuent plusieurs étapes de gradient avant
de synchroniser le modèle, ce qui peut amener les clients à minimiser exagérément leur
propre objectif local et à s’écarter de la solution globale. Nous démontrons que dans un tel
contexte, les modèles de clients individuels subissent un oubli catastrophique par rapport
aux données d’autres clients et nous proposons une approche simple mais efficace qui
modifie l’objectif d’entropie croisée sur une base par client en repondérant le softmax des logits avant de calculer la perte. Cette approche protège les classes en dehors de l’ensemble
d’étiquettes d’un client d’un changement de représentation brutal. Grâce à une évaluation
empirique approfondie, nous démontrons que notre approche peut atténuer ce problème,
en apportant une amélioration continue aux algorithmes d’apprentissage fédéré standard.
Cette approche est particulièrement avantageuse dans les contextes d’apprentissage fédéré
difficiles les plus étroitement alignés sur les scénarios du monde réel où l’hétérogénéité des
données est élevée et la participation des clients à chaque cycle est faible. Nous étudions
également les effets de l’utilisation de la normalisation par lots et de la normalisation de
groupe avec notre méthode et constatons que la normalisation par lots, qui était auparavant
considérée comme préjudiciable à l’apprentissage fédéré, fonctionne exceptionnellement bien
avec notre softmax repondéré, remettant en question certaines hypothèses antérieures sur la
normalisation dans un système fédéré. / In Federated Learning, a global model is learned by aggregating model updates computed
from a set of client nodes, a key challenge in this domain is data heterogeneity across
clients which degrades model performance. Standard federated learning algorithms perform
multiple gradient steps before synchronizing the model which can lead to clients overly
minimizing their own local objective and diverging from the global solution. We demonstrate
that in such a setting, individual client models experience catastrophic forgetting with
respect to data from other clients and we propose a simple yet efficient approach that
modifies the cross-entropy objective on a per-client basis by re-weighting the softmax of
the logits prior to computing the loss. This approach shields classes outside a client’s
label set from abrupt representation change. Through extensive empirical evaluation, we
demonstrate our approach can alleviate this problem, providing consistent improvement to
standard federated learning algorithms. It is particularly beneficial under the challenging
federated learning settings most closely aligned with real world scenarios where data
heterogeneity is high and client participation in each round is low. We also investigate the
effects of using batch normalization and group normalization with our method and find that
batch normalization which has previously been considered detrimental to federated learning
performs particularly well with our re-weighted softmax, calling into question some prior
assumptions about normalization in a federated setting.
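The abstract does not give the exact weighting formula, so the sketch below is one plausible reading of the idea: classes outside the client's label set enter the softmax normalization with a reduced weight `alpha` (0 removes them entirely). The function name and default are illustrative assumptions, not the thesis' API:

```python
import numpy as np

def reweighted_softmax_ce(logits, label, client_classes, alpha=0.0):
    """Cross-entropy with a re-weighted softmax: classes outside
    `client_classes` enter the normalizing sum with weight `alpha`,
    shielding absent classes from abrupt representation change
    during local client updates."""
    w = np.full(logits.shape[-1], alpha)
    w[list(client_classes)] = 1.0
    z = np.exp(logits - logits.max())  # numerically stable exponentials
    p = w * z / np.sum(w * z)          # re-weighted softmax
    return float(-np.log(p[label] + 1e-12))
```

With all classes present the loss reduces to the standard softmax cross-entropy; with `alpha=0` and a single local class the re-weighted probability of that class is 1, so the loss (and its gradient pressure on absent classes) vanishes.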
|
808 |
Automatic wind turbine operation analysis through neural networks / Automatisk driftanalys av vindturbiner medels neurala nätverk
Boley, Alexander January 2017 (has links)
This master thesis covers the development of an automatic benchmarking program for wind turbines and serves as the theoretical basis for that program. The program was created at the request of the power company OX2, which wanted this potential to be investigated. The mission given by the company was: 1. to find a good key performance indicator for the efficiency of a wind turbine, 2. to find an efficient way to assess this, and 3. to write a program that does this automatically and continuously. Through a study of previous research, the thesis determines that the most promising method for this kind of continuous analysis is artificial neural networks, which can be trained on historical data and then assess whether the wind turbine is working better or worse than it should with regard to its history. This comparison between the operation predicted by the neural network and the actual operation serves as the measure of efficiency: the indicator of how the turbine performs compared to how it historically should. The program is based on this principle and is written entirely in MATLAB. Further testing of the program found that the best input variables for the neural network are wind speed and blade pitch angle, with active power as the target used to predict and assess the operation. The final program could be fully automated and integrated into the OX2 system thanks to the possibility of continuously importing wind turbine data through APIs. In the final tests, the program identified 75% of the anomalies found manually in the half year of data from the five turbines used for this thesis, excluding small anomalies that were not found manually but were identified by the program. / Den här masteruppsatsen hanterar utvecklandet av ett automatiskt driftanalyseringsprogram för vindkraftverk och fungerar som det teoretiska underlaget för detta program. 
Programmet utvecklades på uppdrag av kraftbolaget OX2 som ville undersöka potentialen för ett sådant analysprogram i deras verksamhet. Uppdraget givet var att: 1. ta fram en bra indikator när det gäller den faktiska effektiviteten av ett vindkraftverk, 2. att hitta ett effektivt sätt att använda detta mått i en analys där målet är att hitta avvikelser, och 3. skriva ett program som automatiskt kan använda måttet och metoden över tid. Rapporten kommer via litteraturstudie fram till att tidigare forskning visar på att neurala nätverk är den mest lovande metoden för att genomföra en sådan analys. Dessa nätverk kan träna sig själva på historiska data och sedan analysera om vindturbinen arbetar bättre eller sämre än historiskt. Den här jämförelsen mellan den historiskt grundade förutspådda kraften ut och den faktiska kraften ut fungerar som kvalitetsmåttet på hur bra turbinen fungerar. Programmet är baserat på den här principen och är helt skrivet i MATLAB. Vidare tester av programmet visar att de bästa variablerna att använda för att förutspå kraften ut är vindhastigheten och bladens vinkel mot vinden. Slutprogrammet kunde, fullt automatiskt och integrerat i OX2:s system, identifiera 75% av alla avvikelser som manuellt hittats i ett halvårs data på de fem turbinerna använda för rapporten, exklusive småfel hittade av programmet men inte manuellt.
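The benchmarking step, comparing the power predicted by the trained network with the measured power and flagging large deviations, can be sketched as below. The thesis program is written in MATLAB and uses a neural network to produce `predicted_power`; here a simple k-sigma residual rule stands in for whatever anomaly criterion the program actually applies:

```python
import numpy as np

def flag_anomalies(predicted_power, actual_power, k=3.0):
    """Flag samples whose residual (actual - predicted) deviates from
    the mean residual by more than k standard deviations."""
    residual = np.asarray(actual_power) - np.asarray(predicted_power)
    return np.abs(residual - residual.mean()) > k * residual.std()
```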
|
809 |
Evaluation of drift correction strategies for an inertial based dairy cow positioning system. : A study on tracking the position of dairy cows using a foot mounted IMU with drift correction from ZUPT or sparse RFID locations. / Utvärdering av strategier för driftkorrigering i ett tröghetsbaserat positioneringssystem för mjölkkor.
Markovska, Maria, Svensson, Ruben January 2019 (links)
This thesis investigates the feasibility and performance of an inertial based positioning system for dairy cows in a barn environment. The investigated positioning method is pedestrian dead reckoning using inertial navigation with MEMS sensors. While this method is well known from human positioning applications, there have been few studies of its use on terrestrial animals. Since inertial based positioning systems depend on drift correction, the focus of the research is drift correction methods. Two methods, zero velocity update (ZUPT) and sparse locations, are compared with regard to positioning accuracy, energy consumption and sensor placement. The best position estimates are achieved by using ZUPT corrections at a sample rate of 10 Hz, resulting in a mean position drift of 0.2145 m/m. Using a proposed equidistant, sample-time-based sleep mode scheme, this would require a theoretical supply current of 0.21 mA. Better position estimates are also obtained for sensors placed low and on the front legs. The sparse locations method suffers from severe position drift between the locations, resulting in unusable positioning data. A combination of ZUPT and sparse locations yields less accurate positioning than ZUPT alone. / Denna masteruppsats undersöker genomförbarhet och prestanda av ett tröghetsbaserat positioneringssystem för mjölkkor i en lada. Den undersökta metoden är död räkning för fotgängare mha. tröghetsnavigering med MEMS-sensorer. Denna metod är välkänd för positionering av människor, men få studier har gjorts kring dess användbarhet för djur. Eftersom tröghetsbaserad navigering är beroende av driftkorrigering är detta fokuset för forskningen. 
Två olika metoder utvärderas, zero velocity update (ZUPT) och sparse locations, och en jämförelse görs med avseende på positionsnoggrannhet, energiförbrukning och sensorplacering. Bäst positionering uppnås med ZUPT-korrigeringar vid en samplingsfrekvens på 10 Hz, vilket ger ett medelvärde av positionsdrift på 0.2145 m/m. Om ett föreslaget ekvidistant samplingstidsbaserat schema för viloläge används skulle 10 Hz kräva en teoretisk matningsström på 0.21 mA. Vidare fås bättre positioneringsresultat för sensorer som är placerade lågt och på frambenen. Korrektionsmetoden med sparse locations ger en svår positionsdrift mellan platserna, vilket resulterar i oanvändbar positionsdata. En kombination av ZUPT och sparse locations ger sämre precision än om endast ZUPT används, samt ökar energiförbrukningen på grund av behovet av ytterligare sensorer.
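A heavily simplified 1-D illustration of the ZUPT idea follows. A real foot-mounted system tracks orientation and fuses the zero-velocity pseudo-measurements in a Kalman filter; the stance detector below is a bare threshold on gravity-compensated acceleration, an assumption rather than the thesis' detector:

```python
import numpy as np

def zupt_velocity(acc, dt, stance_threshold=0.1):
    """Integrate gravity-compensated 1-D acceleration to velocity,
    resetting velocity to zero in detected stance phases (ZUPT).
    `acc` is assumed to already have gravity removed."""
    v = np.zeros(len(acc))
    for i in range(1, len(acc)):
        if abs(acc[i]) < stance_threshold:  # foot flat on the ground
            v[i] = 0.0                      # zero-velocity update
        else:
            v[i] = v[i - 1] + acc[i] * dt   # strapdown integration step
    return v
```

Without the reset, any accelerometer bias integrates into unbounded velocity (and doubly integrated position) drift; the periodic stance phases of a walking animal are what make this correction available.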
|
810 |
Kartläggning av problem vid projektering, installation och drift av värmepumpar / Mapping of problems with the design, installation and operation of heat pumps
Åhlander Pettersson, Victoria, Mattsson, Anton January 2018 (links)
För att kunna utföra energieffektiviserande åtgärder i en befintlig byggnad, och generera ett bra utfall, krävs goda kunskaper om projektet. Detta betyder att alla discipliner måste ha ett gott samarbete och att varje disciplin måste ta sitt ansvar för uppgiften. Det är inte alltid som utfallet blir det optimala. En sammanställning i tabellform kan tydligt påvisa de frekventa problemen. Detta har genom att belysa de problem som uppstår i samband med projektering, installation och drift av värmepumpar i tre olika fall genererat en sammanställd tabell. Resultatet visar att åtta olika problem förekommer och samtliga resultat förklaras och analyseras. Genom att påvisa svagheter, och ge förslag på möjliga åtgärder, kan problem med projektering, installation och drift av värmepumpar minska. / In order to perform energy efficiency measures in an existing building and generate a good outcome, good knowledge of the project is required. This means that all disciplines must cooperate well and that each discipline must take responsibility for its task. The outcome is not always optimal. A summary in tabular form can clearly identify the frequent problems. By highlighting the problems that arise in connection with the design, installation and operation of heat pumps in three different cases, a compiled table was generated. The result shows that eight different problems occur, and all results are explained and analyzed. By identifying weaknesses and suggesting possible measures, problems with the design, installation and operation of heat pumps may be reduced.
|