351 |
Validation of Black-and-White Topology Optimization Designs. Garla Venkatakrishnaiah, Sharath Chandra; Varadaraju, Harivinay. January 2021
Topology optimization has seen rapid development, with algorithms getting better and faster all the time. These new algorithms help reduce the lead time from concept development to a finished product. Simulation and post-processing of geometry are among the major development costs. Post-processing the geometry also takes a lot of time and depends on the quality of the geometry output by the solver before the product is ready for rapid prototyping or final production. The work in this thesis deals with the post-processing of results obtained from topology optimization algorithms that output the result as a 2D image. A suitable methodology is discussed in which this image is processed and converted into CAD geometry while minimizing deviation in geometry, compliance and volume fraction. Further on, a validation of the designs is performed to measure the extracted geometry's deviation from the post-processed result. The workflow is coded in MATLAB and uses an image-based post-processing approach. The proposed workflow is tested on several numerical examples to assess its performance, limitations and numerical instabilities. The code for the entire workflow is included as an appendix and can be downloaded from https://github.com/M87K452b/postprocessing-topopt.
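As a rough illustration of the image-based post-processing idea, here is a hypothetical Python sketch (not the thesis's MATLAB code; the density field, threshold level and smoothing parameter are assumptions):

```python
# Hypothetical sketch: smooth a 2D density field from a SIMP-style
# topology optimization, then extract boundary contours that could be
# spline-fitted and exported as CAD geometry.
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage import measure

def extract_boundary(density, sigma=1.0, level=0.5):
    """Smooth the density field and return iso-contours at `level`."""
    smoothed = gaussian_filter(density.astype(float), sigma=sigma)
    # volume fraction of the thresholded design, to monitor deviation
    vol_frac = (smoothed > level).mean()
    contours = measure.find_contours(smoothed, level)
    return contours, vol_frac

# Toy example: a solid bar inside a 60x120 design domain.
rho = np.zeros((60, 120))
rho[20:40, 10:110] = 1.0
contours, vf = extract_boundary(rho)
print(len(contours), "boundary polylines, volume fraction", round(vf, 3))
```

Each returned polyline approximates a black-and-white design boundary; fitting splines through these points is one common route to a CAD-ready geometry.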
|
352 |
Centralized versus Decentralized Inventory Control in Supply Chains and the Bullwhip Effect. Qu, Zhan; Raff, Horst. 20 October 2017
This paper constructs a model of a supply chain to examine how demand volatility is passed upstream through the chain. In particular, we seek to determine how likely it is that the chain experiences a bullwhip effect, where the variance of the upstream firm’s production exceeds the variance of the downstream firm’s sales. We show that the bullwhip effect is more likely to occur and is greater in size in supply chains in which inventory control is centralized rather than decentralized, that is, exercised by the downstream firm.
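A toy numerical illustration of the bullwhip effect follows (a standard order-up-to policy with exponential-smoothing forecasts, assumed here for demonstration; this is not the paper's analytical model):

```python
# Simulate a single downstream firm ordering from an upstream supplier
# and compare the variance of orders with the variance of sales.
import numpy as np

rng = np.random.default_rng(0)
T = 10_000
demand = 100 + rng.normal(0, 10, T)      # stationary downstream sales
alpha, lead_time = 0.3, 2                 # forecast weight, delivery lag

forecast = np.empty(T)
forecast[0] = demand[0]
for t in range(1, T):
    forecast[t] = alpha * demand[t] + (1 - alpha) * forecast[t - 1]

# Order-up-to level proportional to the forecast; the upstream order is
# sales plus the change in the order-up-to level.
S = (lead_time + 1) * forecast
orders = demand[1:] + np.diff(S)

print("Var(sales): ", demand.var())
print("Var(orders):", orders.var())       # exceeds Var(sales): bullwhip
```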
|
353 |
Earnings management i svenska kommuner : En kvantitativ studie. Johansson, Frida; Magnusson, Jonas. January 2022
Title: Earnings management in Swedish municipalities. Level: Student thesis, final assignment for a Bachelor's degree in Business Administration. Authors: Frida Johansson and Jonas Magnusson. Supervisor: Jan Svanberg. Date: January 2022. Aim: The aim of the study was to investigate and map whether earnings management in the form of discretionary accruals occurs in Swedish municipalities. Method: A quantitative research strategy with a longitudinal research design was applied. Financial information reported by the 289 Swedish municipalities was obtained from the Statistics Sweden database for the financial years 2017-2020. The collected data was tested using multiple regression analyses and analyzed in the statistical program SPSS. Results & conclusions: The results related to the study's first hypothesis showed a strongly significant negative relationship between the relative change in accrual costs and larger underlying deficits before these accrual costs. The negative relationship can be interpreted as Swedish municipalities not being inclined to burden larger deficits with additional accrual costs, as the theory of big bath accounting would predict. Based on the results, the study's first hypothesis is falsified. The results related to the study's second hypothesis indicated a weaker relationship, which provides some, albeit non-significant, support for the assumption that Swedish municipalities strive to report moderate surpluses. Based on these results, the study's second hypothesis is also falsified. Contribution of the thesis: The study contributes to filling the research gap concerning earnings management in Swedish municipalities through the occurrence of discretionary accruals. It also contributes a new theoretical perspective by introducing the profit equalization reserve (RUR) into the explanatory model of why the degree of earnings management in Swedish municipalities has decreased. Suggestions for future research: One suggestion for further research would be to investigate whether the occurrence of big bath accounting or income smoothing differs between municipalities in years when an election takes place or when an audit has occurred. Another angle would be to conduct the study with a qualitative focus, as only a few previous studies of earnings management in the municipal sector have used a qualitative method.
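A hypothetical sketch of the kind of pooled regression behind such a test (illustrative variable names and simulated data; the thesis's actual SPSS specification is not reproduced here):

```python
# Regress the relative change in accrual costs on the pre-accrual
# deficit plus a control, over municipality-year observations.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 289 * 4                               # 289 municipalities, 4 years
df = pd.DataFrame({
    "d_accruals": rng.normal(size=n),     # relative change in accruals
    "deficit":    rng.normal(size=n),     # deficit before accrual costs
    "log_pop":    rng.normal(size=n),     # illustrative size control
})
X = sm.add_constant(df[["deficit", "log_pop"]])
fit = sm.OLS(df["d_accruals"], X).fit()
print(fit.params, fit.pvalues, sep="\n")  # sign of 'deficit' is the test
```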
|
354 |
Développement d'une nouvelle algorithmie de localisation adaptée à l'ensemble des mobiles suivis par le système ARGOS / Improving ARGOS Doppler location using multiple-model filtering and smoothing. Lopez, Remy. 15 July 2013
The ARGOS service was launched in 1978 to serve environmental applications including oceanography, wildlife tracking and maritime safety. The system provides worldwide positioning and data collection for Platform Terminal Transmitters (PTTs). Positioning is achieved by exploiting the Doppler shift in the carrier frequency of the messages transmitted by the PTTs and recorded by dedicated satellite-borne receivers. Over the last twenty years, transmission power has decreased significantly and platforms have been used in increasingly harsh environments. This has led to a greater number of low-quality locations, while users have sought to identify ever finer platform behavior. This work first focuses on the implementation of a more efficient location processing to replace the historical real-time processing relying on a least-squares adjustment. Secondly, an offline service to infer locations with even higher accuracy is proposed. The location problem is formulated as the estimation of the state vector of a dynamical system, accounting for a set of admissible movement models of the platform. The exact determination of the posterior pdf of the state has a complexity growing exponentially with time. The Interacting Multiple Model (IMM) algorithm has become a standard online approach to derive an approximate solution with constant computational complexity. For offline applications, many sub-optimal multiple-model smoothing schemes have also been proposed. Our methodological contributions first extend the framework of the IMM filter to handle a bank of models with state vectors of heterogeneous size and meaning. Second, we investigate a new sub-optimal solution for multiple-model smoothing that is less computationally expensive and displays markedly better performance than equivalent algorithms. The ARGOS location processing was rewritten to include the IMM filter as the real-time processing and the IMM smoother as the offline service. We analyze their performance using a large dataset from over 200 mobiles carrying both an ARGOS transmitter and a GPS receiver used as ground truth. The results show that the new approaches significantly improve positioning accuracy, especially when few messages are received. Moreover, the algorithms deliver 30% more positions and give a systematic estimate of the location error.
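A minimal sketch of one IMM recursion, assuming two 1D constant-velocity Kalman models that differ only in process noise (the thesis's heterogeneous model bank and Doppler measurement model are beyond this toy example):

```python
import numpy as np

F = np.array([[1.0, 1.0], [0.0, 1.0]])   # constant-velocity transition
H = np.array([[1.0, 0.0]])                # position-only measurement
R = np.array([[1.0]])                     # measurement noise covariance
Qs = [np.diag([1e-4, 1e-4]), np.diag([1e-1, 1e-1])]   # quiet vs agile
PT = np.array([[0.95, 0.05], [0.05, 0.95]])           # mode transitions

def imm_step(xs, Ps, mu, z):
    # 1) interaction/mixing of the mode-conditioned estimates
    c = PT.T @ mu                          # predicted mode probabilities
    w = (PT * mu[:, None]) / c[None, :]    # mixing weights w[i, j]
    xm, Pm = [], []
    for j in range(2):
        x0 = sum(w[i, j] * xs[i] for i in range(2))
        P0 = sum(w[i, j] * (Ps[i] + np.outer(xs[i] - x0, xs[i] - x0))
                 for i in range(2))
        xm.append(x0); Pm.append(P0)
    # 2) mode-matched Kalman filter updates and likelihoods
    xn, Pn, lik = [], [], np.empty(2)
    for j in range(2):
        xp = F @ xm[j]
        Pp = F @ Pm[j] @ F.T + Qs[j]
        S = H @ Pp @ H.T + R
        v = z - H @ xp                     # innovation
        K = Pp @ H.T @ np.linalg.inv(S)
        xn.append(xp + K @ v)
        Pn.append((np.eye(2) - K @ H) @ Pp)
        lik[j] = (np.exp(-0.5 * v @ np.linalg.inv(S) @ v)
                  / np.sqrt(2 * np.pi * np.linalg.det(S)))
    # 3) mode-probability update and fused output estimate
    mu = lik * c
    mu = mu / mu.sum()
    x_out = sum(mu[j] * xn[j] for j in range(2))
    return xn, Pn, mu, x_out

xs, Ps = [np.zeros(2), np.zeros(2)], [np.eye(2), np.eye(2)]
mu = np.array([0.5, 0.5])
for z in [np.array([0.1]), np.array([0.35]), np.array([0.8])]:
    xs, Ps, mu, x_out = imm_step(xs, Ps, mu, z)
print("mode probabilities:", mu, "fused estimate:", x_out)
```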
|
355 |
Zpracování 3D modelů scény / Processing of 3D Scene Models. Zdráhal, Lukáš. January 2008
The purpose of this document is to acquaint the reader with the basic principles of 3D model digitization. The work gives a general overview of 3D scanning devices, their physical principles and measurement methods. The next part describes basic methods for polygonal mesh processing, such as smoothing and decimation, which are necessary for 3D model processing. The document also describes the implemented algorithms, the user interface and the publication of results through the WWW. The core of this diploma thesis is an introduction to the general principles of 3D scanning and to working with the Minolta VIVID-700 3D digitizer available at our faculty. The end presents an evaluation of results, demonstration examples and the expected future development of the project.
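A minimal sketch of the Laplacian smoothing step mentioned above (a generic scheme, assuming a triangle mesh given as vertex and face arrays; not the thesis implementation):

```python
import numpy as np

def laplacian_smooth(vertices, faces, iterations=10, lam=0.5):
    """Move each vertex toward the average of its neighbors.

    vertices : (V, 3) float array of positions
    faces    : (F, 3) int array of vertex indices
    lam      : step size in (0, 1]; larger values smooth faster but
               shrink the mesh more (a known drawback of this scheme).
    """
    V = len(vertices)
    # build vertex adjacency from the face edges
    neighbors = [set() for _ in range(V)]
    for a, b, c in faces:
        neighbors[a].update((b, c))
        neighbors[b].update((a, c))
        neighbors[c].update((a, b))
    verts = vertices.astype(float).copy()
    for _ in range(iterations):
        new = verts.copy()
        for i, nbrs in enumerate(neighbors):
            if nbrs:
                avg = verts[list(nbrs)].mean(axis=0)
                new[i] = verts[i] + lam * (avg - verts[i])
        verts = new
    return verts
```

Plain Laplacian smoothing shrinks the mesh; variants such as Taubin smoothing alternate positive and negative steps to counteract this.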
|
356 |
Goodness-Of-Fit Test for Hazard Rate. Vital, Ralph Antoine. 14 December 2018
In certain areas such as pharmacokinetics (PK) and pharmacodynamics (PD), the hazard rate function, denoted by λ, plays a central role in modeling the instantaneous risk of failure-time data. In the context of assessing the appropriateness of a given parametric hazard rate model, Huh and Hutmacher [22] showed that their hazard-based visual predictive check is as good as a visual predictive check based on the survival function. Even though Huh and Hutmacher's visual method is simple to implement and interpret, the final decision depends on the personal experience of the user. In this thesis, our primary aim is to develop nonparametric goodness-of-fit tests for hazard rate functions, to bring objectivity to hazard rate model selection or to augment subjective procedures like Huh and Hutmacher's visual predictive check. Toward that aim, two nonparametric goodness-of-fit test statistics are proposed, referred to as the chi-square goodness-of-fit test and the kernel-based nonparametric goodness-of-fit test for hazard rate functions, respectively. On one hand, the asymptotic distribution of the chi-square goodness-of-fit test is derived under the null hypothesis H0 : λ(x) = λ0(x) for all x ∈ R+ as well as under the fixed alternative hypothesis H1 : λ(x) = λ1(x) for all x ∈ R+. The results, as expected, are asymptotically similar to those of the usual Pearson chi-square test: under the null hypothesis the proposed test converges to a chi-square distribution, and under the fixed alternative hypothesis it converges to a non-central chi-square distribution. On the other hand, we show that the power properties of the kernel-based nonparametric goodness-of-fit test are equivalent to those of the Bickel and Rosenblatt test, meaning the proposed test can detect alternatives converging to the null at the rate n^(-δ), δ < 1/2, where n is the sample size. Unlike the latter, the convergence of the kernel-based nonparametric goodness-of-fit test is much faster; that is, one does not need a very large sample size to be able to use the asymptotic distribution of the test in practice.
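A simplified stand-in for the chi-square idea, testing a fully specified constant hazard λ0 by comparing binned failure counts with their expected values under H0 (the thesis's statistic for general hazard models is more involved):

```python
# Pearson chi-square check of H0: constant hazard lambda0, i.e.
# exponentially distributed failure times.
import numpy as np
from scipy import stats

def chi2_gof_constant_hazard(times, lam0, n_bins=10):
    n = len(times)
    edges = np.quantile(times, np.linspace(0, 1, n_bins + 1))
    edges[0], edges[-1] = 0.0, np.inf
    observed, _ = np.histogram(times, bins=edges)
    cdf = 1.0 - np.exp(-lam0 * edges)     # exponential CDF implied by H0
    cdf[-1] = 1.0
    expected = n * np.diff(cdf)
    chi2 = ((observed - expected) ** 2 / expected).sum()
    return chi2, stats.chi2.sf(chi2, n_bins - 1)   # statistic, p-value

rng = np.random.default_rng(1)
t = rng.exponential(scale=2.0, size=500)           # true hazard is 0.5
print(chi2_gof_constant_hazard(t, lam0=0.5))       # large p-value expected
```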
|
357 |
The effect of model calibration on noisy label detection / Effekten av modellkalibrering vid detektering av felmärkta bildetiketter. Joel Söderberg, Max. January 2023
The advances in deep neural networks in recent years have opened up the possibility of using image classification as a valuable tool in various areas, such as medical diagnosis from X-ray images. However, training deep neural networks requires large amounts of annotated data which has to be labelled manually, by a person. This process always involves a risk of data getting the wrong label, either by mistake or ill will, and training a machine learning model on mislabelled images has a negative impact on accuracy. Studies have shown that deep neural networks are so powerful at memorization that if they train on mislabelled data, they will eventually overfit it, learning a data representation that does not fully mirror real data. It is therefore vital to filter out these images. Area Under the Margin is a method that filters out mislabelled images by observing the changes in a network's predictions during training. This method does, however, not take into consideration the overconfidence of deep neural networks, and the uncertainty of a model can give indications of mislabelled images during training. Confidence can be calibrated through label smoothing, and this thesis investigates whether the performance of Area Under the Margin can be improved when combined with different smoothing techniques. The goal is to develop better insight into how different types of label noise affect models in terms of confidence and accuracy, and how the impact depends on the dataset itself. Three label smoothing techniques are applied to evaluate how well they mitigate overconfidence, whether they prevent the model from memorizing the mislabelled samples, and whether this improves the filtering process of the Area Under the Margin method. Results show that when training on data with noise present, adding label smoothing improves accuracy, an indication of noise robustness. Label noise is seen to decrease the model's confidence and at the same time reduce calibration. Adding label smoothing prevents this and makes the model more robust as the noise rate increases. In the filtering process, label smoothing prevented correctly labelled samples from being filtered out and achieved better accuracy at identifying the noise. This did not improve the classification results on the filtered data, indicating that it is more important to filter out as many mislabelled samples as possible, even if this means filtering out correctly labelled images as well. The label smoothing methods used in this work were set up to preserve calibration; a future topic of research could be to adjust the hyperparameters to increase confidence instead, focusing on removing as much noise as possible.
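A small numpy sketch of the two ingredients, label-smoothing cross-entropy and the per-sample margin whose running average over training gives the AUM score (a generic illustration, not the thesis code):

```python
import numpy as np

def smoothed_cross_entropy(logits, label, eps=0.1):
    """Cross-entropy against a smoothed target: 1-eps on the assigned
    class, eps/(k-1) spread over the others."""
    k = logits.shape[-1]
    target = np.full(k, eps / (k - 1))
    target[label] = 1.0 - eps
    z = logits - logits.max()                 # numerical stability
    logp = z - np.log(np.exp(z).sum())        # log-softmax
    return -(target * logp).sum()

def margin(logits, label):
    """Assigned-class logit minus the largest other logit; averaging
    this over epochs gives AUM, and consistently negative margins
    flag likely mislabelled samples."""
    others = np.delete(logits, label)
    return logits[label] - others.max()

z = np.array([2.0, 0.5, -1.0])
print(smoothed_cross_entropy(z, label=1))   # loss for (noisy?) label 1
print(margin(z, label=1))                   # negative: label 1 is suspect
```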
|
358 |
AI-Based Transport Mode Recognition for Transportation Planning Utilizing Smartphone Sensor Data From Crowdsensing Campaigns. Grubitzsch, Philipp; Werner, Elias; Matusek, Daniel; Stojanov, Viktor; Hähnel, Markus. 11 May 2023
Utilizing smartphone sensor data from crowdsensing (CS) campaigns for transportation planning (TP) requires highly reliable transport mode recognition. To address this, we present our RNN-based AI model MovDeep, which works on GPS, accelerometer, magnetometer and gyroscope data. It was trained on 92 hours of labeled data. MovDeep predicts six transportation modes (TM) on one-second time windows. A novel postprocessing step further improves the prediction results. We present a validation methodology (VM) which simulates unknown context, to get a more realistic estimate of real-world performance (RWP). We explain why existing work shows overestimated prediction quality when applied to CS data, and why published results are not comparable with each other. With the introduced VM, MovDeep still achieves a 99.3 % F1-score on six TM. We confirm the very good RWP of our model on unknown context with the Sussex-Huawei Locomotion dataset. For future model comparison, both publicly available datasets can be used with our VM. Finally, we compare MovDeep to a deterministic approach as a baseline for an average-performing model (82-88 % RWP recall) on a CS dataset of 540,000 tracks, to show the significant negative impact of even small prediction errors on TP.
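The paper's postprocessing is not specified here, but a sliding-window majority vote over the per-second predictions is one simple smoothing of this kind (shown as an assumed stand-in, not the authors' method):

```python
# Suppress one-second blips in a sequence of per-second transport-mode
# predictions with a centered majority vote.
from collections import Counter

def majority_smooth(modes, window=5):
    half = window // 2
    out = []
    for i in range(len(modes)):
        seg = modes[max(0, i - half): i + half + 1]
        out.append(Counter(seg).most_common(1)[0][0])
    return out

preds = ["walk", "walk", "bus", "walk", "walk", "bus", "bus", "bus"]
print(majority_smooth(preds))  # the isolated 'bus' blip is removed
```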
|
359 |
Numerical splitting methods for nonsmooth convex optimization problems. Bitterlich, Sandy. 11 December 2023
In this thesis, we develop and investigate numerical methods for solving nonsmooth convex optimization problems in real Hilbert spaces. We construct algorithms that handle the terms in the objective function and the constraints of the minimization problems separately, which makes these methods simpler to compute. In the first part of the thesis, we extend the well-known AMA method of Tseng to the Proximal AMA algorithm by introducing variable metrics in the subproblems of the primal-dual algorithm. For a special choice of metrics, the subproblems become proximal steps; thus, for objectives in many important applications, such as signal and image processing, machine learning or statistics, the iteration process consists of closed-form expressions that are easy to calculate. Later in the thesis, we deepen the investigation of this algorithm by studying a dynamical system whose explicit time discretization yields Proximal AMA. We show the existence and uniqueness of strong global solutions of the dynamical system and prove that its trajectories converge to the primal-dual solution of the considered optimization problem. In the last part of the thesis, we minimize a sum of finitely many nonsmooth convex functions (each possibly composed with a linear operator) over a nonempty, closed and convex set by smoothing these functions. We consider a stochastic algorithm in which we take gradient steps on the smoothed functions (which are proximal steps if we smooth by the Moreau envelope) and use a mirror map to 'mirror' the iterates onto the feasible set. In applications, we compare these algorithms to similar methods and discuss their advantages and practical usability.
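A toy illustration of smoothing by the Moreau envelope for f(x) = |x| (a scalar sketch; the thesis works with sums of functions in Hilbert spaces):

```python
import numpy as np

def prox_abs(x, gamma):
    """Proximal operator of gamma * |.| : soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - gamma, 0.0)

def moreau_env_abs(x, gamma):
    """Moreau envelope of |.| (the Huber function): f(prox) plus a
    quadratic penalty on the displacement."""
    p = prox_abs(x, gamma)
    return np.abs(p) + (x - p) ** 2 / (2 * gamma)

def moreau_grad_abs(x, gamma):
    """Gradient of the envelope: (x - prox(x)) / gamma, which is
    (1/gamma)-Lipschitz even though |.| itself is nonsmooth."""
    return (x - prox_abs(x, gamma)) / gamma

x = np.linspace(-2, 2, 5)
print(moreau_env_abs(x, 0.5))   # smooth approximation of |x|
print(moreau_grad_abs(x, 0.5))  # clipped to [-1, 1], a smoothed sign
```

The identity grad env(x) = (x - prox(x)) / gamma is why a gradient step on the smoothed function coincides with a proximal step on the original, as the abstract notes.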
|
360 |
INTELLIGENT MULTIPLE-OBJECTIVE PROACTIVE ROUTING IN MANET WITH PREDICTIONS ON DELAY, ENERGY, AND LINK LIFETIME. Guo, Zhihao. January 2008
No description available.
|