51

Biodiversity Monitoring Using Machine Learning for Animal Detection and Tracking / Övervakning av biologisk mångfald med hjälp av maskininlärning för upptäckt och spårning av djur

Zhou, Qian January 2023
As an important indicator of biodiversity and of the ecological state of a region, the number and distribution of animals have received growing attention from agencies such as nature reserves, wetland parks, and animal protection authorities. To protect biodiversity, we need to detect and track the movement of animals in order to understand which animals are visiting a space. This thesis uses an improved You Only Look Once version 5 (YOLOv5) detection algorithm together with Simple Online and Realtime Tracking with a deep association metric (DeepSORT) to provide technical support for bird monitoring, identification, and tracking. Specifically, the thesis evaluates several modifications of YOLOv5 aimed at the difficulty of detecting small targets in images. In the backbone, different attention modules are added to strengthen feature extraction; in the neck, the Bi-Directional Feature Pyramid Network (BiFPN) structure replaces the Path Aggregation Network (PAN) structure to make better use of low-level features; in the detection head, a high-resolution head is added to improve the detection of tiny targets. In addition, an improved loss function is used to raise performance on small birds. The modified algorithms are compared in experiments on the VisDrone data set and on a data set of bird flight images. Compared with the YOLOv5 baseline, the space-to-depth, stride-free convolution (SPD-Conv) variant achieves the highest training mean Average Precision (mAP) on VisDrone, improving it from 0.325 to 0.419; on the bird data set, the best result is obtained by adding a P2 detection layer, which raises training mAP from 0.701 to 0.724. When the improved YOLO detector is combined with DeepSORT to implement tracking, the final tracking performance also improves.
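The BiFPN substitution mentioned in this abstract rests on fusing feature maps with learnable, normalized weights ("fast normalized fusion"). The sketch below illustrates only that fusion idea under assumed channel sizes and layer names; it is not the thesis's actual implementation and the module is hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FastNormalizedFusion(nn.Module):
    """BiFPN-style fusion: learnable non-negative weights, normalized to sum to 1.

    Illustrative sketch only; layer names and sizes are hypothetical, not taken
    from the thesis code.
    """
    def __init__(self, num_inputs: int, channels: int, eps: float = 1e-4):
        super().__init__()
        self.weights = nn.Parameter(torch.ones(num_inputs))  # one weight per input map
        self.eps = eps
        # depthwise-separable convolution applied after fusion, as in BiFPN-style necks
        self.conv = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, groups=channels),
            nn.Conv2d(channels, channels, 1),
            nn.BatchNorm2d(channels),
            nn.SiLU(),
        )

    def forward(self, feats):
        # feats: list of tensors already resized to a common resolution and channel count
        w = F.relu(self.weights)              # keep fusion weights non-negative
        w = w / (w.sum() + self.eps)          # fast normalized fusion
        fused = sum(wi * f for wi, f in zip(w, feats))
        return self.conv(fused)

# Example: fuse two 256-channel feature maps of the same spatial size
if __name__ == "__main__":
    p4, p5_up = torch.randn(1, 256, 40, 40), torch.randn(1, 256, 40, 40)
    fusion = FastNormalizedFusion(num_inputs=2, channels=256)
    print(fusion([p4, p5_up]).shape)  # torch.Size([1, 256, 40, 40])
```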
52

THE INTERPRETATION OF ELECTRON ENERGY-LOSS SPECTROSCOPY IN COMPLEX SYSTEMS: A DFT BASED STUDY

Nichol, Robert M. 19 August 2015
No description available.
53

預期、資本移動與最適外匯管理政策 / Expectations, Capital Mobility, and the Optimal Foreign Exchange Management Policy

顧瑩華, GU, YING-HUA Unknown Date
The main purpose of this thesis is to build an open macroeconomic model that incorporates aggregate demand, aggregate supply, expectations, and uncertainty, and to examine the optimal capital mobility policy, the optimal foreign exchange policy, and the optimal fiscal and monetary policies under this model. In the existing literature, derivations of the optimal capital mobility policy have not incorporated expectations, the aggregate supply side, or uncertainty; the first part of the thesis examines the choice of the optimal capital mobility coefficient once these factors are included. Moreover, previous studies of the optimal foreign exchange policy assume perfect capital mobility; the second part relaxes this assumption and derives the optimal foreign exchange policy that minimizes the loss function, namely the variance of income, and shows that the perfect-capital-mobility result is merely a special case of the conclusions obtained here. The third part uses the model to examine the optimal fiscal and monetary policies under different exchange rate regimes (fixed, managed, and floating) and seeks the optimal policy mix.
54

Fonctions de perte en actuariat / Loss functions in actuarial science

Craciun, Geanina January 2009
Thesis digitized by the Division de la gestion de documents et des archives, Université de Montréal.
55

田口式品質工程方法在電子業應用之研究-華通電腦公司個案研究 / Application Of Taguchi's Quality Engineering Method In Electronic Industry--A Case Study of COMPEQ

張金生, Chang, Chin-Sheng Unknown Date
In recent years, the Taguchi quality engineering method has gradually come into widespread use across manufacturing industries in Taiwan. However, most previous applications of the Taguchi method used only one data-analysis approach, either the analysis of variance (ANOVA) of classical experimental design or the Taguchi signal-to-noise (S/N) ratio, and comparisons of the two methods and their similarities and differences have rarely been addressed. Moreover, the vast majority of studies consider only the optimization of a single quality characteristic and seldom touch on processes in which several quality characteristics must be optimized simultaneously. This study first applies both data-analysis methods to data from an experiment on improving the detection capability of automatic optical inspection (AOI), compares the two methods in theory and in application, and, following a conservative principle, makes a preliminary determination of the optimal factor-level combination for each quality characteristic (three quality characteristics are considered). Next, to resolve conflicts among factor levels across quality characteristics, three analysis methods are applied to the case and compared with respect to their strengths, weaknesses, and applicable situations, in order to identify a factor-level combination that optimizes all quality characteristics simultaneously: (1) compiling an overall table of each factor's effect on each quality characteristic and selecting the best combination by expert judgment; (2) analyzing a weighted sum of the S/N ratios of the individual quality characteristics; (3) analyzing standardized quality-characteristic values. The results show that each method has its own advantages and disadvantages; only by being familiar with all of them and applying them selectively, drawing on rich experience and professional knowledge and taking the experimental situation into account, can good experimental results be obtained.
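The S/N ratios referred to in this abstract follow Taguchi's standard definitions, and method (2) compares factor-level combinations by a weighted sum of them. A minimal sketch of both steps is shown below; the measurements, characteristic types, and weights are invented for illustration and are not data from the thesis.

```python
import numpy as np

def sn_smaller_the_better(y):
    """Taguchi S/N ratio when smaller responses are better: -10*log10(mean(y^2))."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

def sn_larger_the_better(y):
    """Taguchi S/N ratio when larger responses are better: -10*log10(mean(1/y^2))."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y ** 2))

def sn_nominal_the_best(y):
    """Taguchi S/N ratio for nominal-the-best: 10*log10(mean^2 / sample variance)."""
    y = np.asarray(y, dtype=float)
    return 10.0 * np.log10(np.mean(y) ** 2 / np.var(y, ddof=1))

# Hypothetical replicated measurements of three quality characteristics
# under one factor-level combination (illustrative values only).
defect_rate = [0.8, 1.1, 0.9]        # smaller-the-better
detection_rate = [96.0, 97.5, 95.8]  # larger-the-better
line_width = [5.1, 4.9, 5.0]         # nominal-the-best

sn = np.array([
    sn_smaller_the_better(defect_rate),
    sn_larger_the_better(detection_rate),
    sn_nominal_the_best(line_width),
])

# Method (2): compare factor-level combinations by a weighted sum of the
# individual S/N ratios; the weights reflect relative importance and are
# illustrative, not values from the case study.
weights = np.array([0.4, 0.4, 0.2])
print("individual S/N ratios:", sn.round(2))
print("weighted-sum S/N:", float(weights @ sn))
```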
56

考量指標衝突之物流業者評選模式 / Logistics service provider selection model based on conflict criteria

蔡雅慧, Tsai, Ya Hui Unknown Date
Under global competition, the division of labor among firms has become increasingly specialized. To develop their own core competencies, firms outsource non-core activities to specialized third parties in order to obtain better service quality and reduce costs. Logistics-related expenses account for roughly 5%-35% of a firm's operating costs, so logistics plays an important role in overall operations. With the rise of supply chain management in recent years, firms place greater emphasis on coordination with upstream and downstream partners, and logistics management has received growing attention because it affects the stability of the entire supply chain. When selecting a logistics outsourcing provider, a firm therefore needs to evaluate the candidates' capabilities carefully in order to choose the most suitable one.

In the past, logistics providers were mostly selected through subjective expert judgment. This study proposes a complete logistics service provider selection model that evaluates candidates objectively. Much of the earlier literature assumes that the evaluation criteria are independent, which rarely holds in practice: criteria are usually interrelated. In addition, firms often impose specific target requirements on particular criteria, so evaluating performance values alone does not meet their needs; this study incorporates such targets to build a more complete selection model. The model can also give candidate providers suggestions for improvement, helping both parties build a good relationship and create a win-win outcome.

The study uses the Decision-Making Trial and Evaluation Laboratory (DEMATEL) method to build the network of relationships among the criteria and the Analytic Network Process (ANP) to compute the criterion weights. For criteria with firm-specific targets, the Taguchi quality loss function converts performance into a loss relative to the target, and the VIKOR ranking method then selects the best logistics provider. Because of cost constraints, the selection model is validated with operational data generated from a system dynamics simulation of the transportation process.
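The two scoring steps named in this abstract — converting target-specific criteria into a Taguchi quality loss and then ranking candidates with VIKOR — can be sketched as below. The criteria, weights, and candidate scores are invented for illustration, and the formulas follow the usual textbook definitions of the quadratic loss and VIKOR rather than any code from the thesis.

```python
import numpy as np

def taguchi_loss(y, target, k=1.0):
    """Quadratic quality loss L(y) = k * (y - target)^2 for a target-is-best criterion."""
    return k * (np.asarray(y, dtype=float) - target) ** 2

def vikor(scores, weights, benefit, v=0.5):
    """Rank alternatives with VIKOR; lower Q is better.

    scores: (n_alternatives, n_criteria) matrix; weights sum to 1;
    benefit: True where larger is better, False where smaller is better.
    """
    scores = np.asarray(scores, dtype=float)
    best = np.where(benefit, scores.max(axis=0), scores.min(axis=0))
    worst = np.where(benefit, scores.min(axis=0), scores.max(axis=0))
    norm = (best - scores) / (best - worst)     # normalized distance from the best value
    S = (weights * norm).sum(axis=1)            # group utility
    R = (weights * norm).max(axis=1)            # individual regret
    Q = v * (S - S.min()) / (S.max() - S.min()) + \
        (1 - v) * (R - R.min()) / (R.max() - R.min())
    return Q

# Three hypothetical providers scored on cost, on-time rate, and lead time.
cost = np.array([120.0, 100.0, 135.0])          # smaller is better
on_time = np.array([0.95, 0.90, 0.97])          # larger is better
lead_time = np.array([4.8, 6.5, 5.2])           # firm-specified target: 5 days

# Convert the target-specific criterion into a Taguchi loss (smaller is better).
lead_loss = taguchi_loss(lead_time, target=5.0)

scores = np.column_stack([cost, on_time, lead_loss])
weights = np.array([0.4, 0.35, 0.25])           # e.g. from DEMATEL/ANP; illustrative here
benefit = np.array([False, True, False])

Q = vikor(scores, weights, benefit)
print("VIKOR Q values:", Q.round(3), "-> best provider index:", int(Q.argmin()))
```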
58

適應性累積和損失管制圖之研究 / The Study of Adaptive CUSUM Loss Control Charts

林政憲 Unknown Date
CUSUM control charts have been widely used for detecting small process shifts since they were first introduced by Page (1954), and recent studies have shown that adaptive charts can improve the efficiency and performance of traditional Shewhart charts. To monitor the process mean and variance with a single chart, this study uses the loss function as the monitoring statistic: the loss function measures the quality loss of the process when the mean and/or the variance has shifted. The study combines three features, adaptation, CUSUM, and the loss function, and proposes optimal VSSI, VSI, and FP CUSUM Loss charts. The performance of the proposed charts is measured by the Average Time to Signal (ATS) and the Average Number of Observations to Signal (ANOS), both computed with a Markov chain approach. Performance comparisons between the proposed charts and existing charts, such as the X-bar+S^2 charts and the CUSUM X-bar+S^2 charts, are illustrated by numerical analyses and examples. The results show that the optimal VSSI CUSUM Loss chart performs better than the optimal VSI CUSUM Loss chart, the optimal FP CUSUM Loss chart, the CUSUM X-bar+S^2 charts, and the X-bar+S^2 charts. Furthermore, using a single chart to monitor a process is not only easier but also more efficient than using two charts simultaneously. Hence, the adaptive CUSUM Loss charts are recommended for real processes.
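The loss statistic that lets a single chart react to both mean and variance shifts is, in essence, the squared deviation from the target, and a CUSUM then accumulates its excess over a reference value. The sketch below illustrates only that idea, under assumed values for the target, the reference value k, and the decision limit h; it does not reproduce the thesis's optimal VSSI/VSI designs or the Markov-chain ATS/ANOS calculations.

```python
import numpy as np

def cusum_loss_chart(x, target, k, h):
    """Upper CUSUM on the quality loss L_t = (x_t - target)^2.

    Returns the CUSUM path and the index of the first signal (or None).
    """
    c, path, signal = 0.0, [], None
    for t, xt in enumerate(x):
        loss = (xt - target) ** 2          # loss grows with mean and/or variance shifts
        c = max(0.0, c + loss - k)         # accumulate excess loss over the reference value
        path.append(c)
        if signal is None and c > h:
            signal = t
    return np.array(path), signal

rng = np.random.default_rng(0)
target, k, h = 0.0, 1.5, 8.0               # assumed design values, for illustration only

# In-control observations followed by a small mean shift with inflated variance.
x = np.concatenate([
    rng.normal(0.0, 1.0, 100),             # in control
    rng.normal(0.5, 1.3, 100),             # shifted mean and variance
])
path, signal = cusum_loss_chart(x, target, k, h)
print("first out-of-control signal at observation:", signal)
```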
59

適應性計數值損失函數管制圖之設計 / Design of the Adaptive Loss Function Control Chart for Binomial Data

李宜臻, Lee, I Chen Unknown Date
This article proposes a new control chart (the loss function control chart) based on the Taguchi loss function, with an adaptive scheme for binomial data. By building the loss function into its design, the chart can monitor cost variation in the process, addressing production cost from an economic perspective. The research provides designs of the loss function control chart with a specified VSI scheme, an optimal VSI scheme, a VSS scheme, and a VP scheme, respectively. Numerical analyses show that the specified VSI, optimal VSI, optimal VSS, and optimal VP loss function charts all significantly outperform the fixed-parameter (Fp) loss function chart and show that costs can be controlled systematically.
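A variable sampling interval (VSI) scheme of the kind referred to above keeps a long sampling interval while the charted statistic is comfortably inside the control region and switches to a short one inside a warning region. The sketch below shows only that decision rule; the warning/control limits and interval lengths are made-up values, whereas the thesis derives its limits from an optimization.

```python
def next_sampling_interval(statistic, warning_limit, control_limit,
                           long_interval=2.0, short_interval=0.25):
    """Return the time (e.g. hours) until the next sample under a simple VSI rule.

    Illustrative limits and intervals only; a value above the control limit
    returns 0.0, meaning the process is stopped and investigated immediately.
    """
    if statistic > control_limit:
        return 0.0                  # signal: stop and search for assignable causes
    if statistic > warning_limit:
        return short_interval       # near the limit: sample again soon
    return long_interval            # well in control: relax the sampling rate

# Example with hypothetical limits for a binomial loss statistic.
for s in (0.8, 3.2, 5.7):
    print(s, "->", next_sampling_interval(s, warning_limit=3.0, control_limit=5.0))
```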
60

Apprentissage basé sur le Qini pour la prédiction de l’effet causal conditionnel / Qini-based learning for predicting the conditional causal effect

Belbahri, Mouloud-Beallah 08 1900
Uplift models deal with cause-and-effect inference for a specific factor, such as a marketing intervention. In practice, these models are built on individual data from randomized experiments: a treatment group contains individuals who are subject to an action, while a control group serves for comparison. Uplift modeling is used to order individuals with respect to the value of a causal effect, e.g., positive, neutral, or negative. First, we propose a new way to perform model selection in uplift regression models. Our methodology is based on maximization of the Qini coefficient. Because model selection corresponds to variable selection, the task is daunting and intractable if done in a straightforward manner when the number of variables to consider is large. To search realistically for a good model, we devised a search method based on an efficient exploration of the regression coefficient space combined with a lasso penalization of the log-likelihood. There is no explicit analytical expression for the Qini surface, so unveiling it is not easy. Our idea is to gradually uncover the Qini surface, in a manner inspired by response-surface designs, with the goal of finding a reasonable local maximum of the Qini by exploring the surface near the optimal values of the penalized coefficients. We openly share our code through the R package tools4uplift. Although some computational methods are available for uplift modeling, most of them exclude statistical regression models; our package intends to fill this gap. It comprises tools for i) quantization, ii) visualization, iii) variable selection, iv) parameter estimation, and v) model validation, and it allows practitioners to use our methods with ease and to refer to the methodological papers for the details.

Uplift is a particular case of causal inference. Causal inference tries to answer questions such as "What would the result be if we gave this patient treatment A instead of treatment B?", and the answer is then used as a prediction for a new patient. The second part of the thesis places more emphasis on prediction. Most existing approaches are adaptations of random forests to the uplift case; several split criteria have been proposed in the literature, all relying on maximizing heterogeneity, but in practice these approaches are prone to overfitting. In this work, we bring a new vision to uplift modeling. We propose a new loss function defined by leveraging a connection with the Bayesian interpretation of the relative risk. Our solution is developed for a specific twin neural network architecture that jointly optimizes the marginal probabilities of success for treated and control individuals. We show that this model is a generalization of the uplift logistic interaction model. We also modify the stochastic gradient descent algorithm to allow for structured sparse solutions, which helps greatly in fitting our uplift models. We openly share our Python code for practitioners wishing to use our algorithms. We had the rare opportunity to collaborate with industry to get access to data from large-scale marketing campaigns favorable to the application of our methods, and we show empirically that our methods are competitive with the state of the art on real data and across several simulation scenarios.
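Since the model selection described above maximizes the Qini coefficient, a small sketch of how that coefficient can be computed from scored data may help. It follows one common definition (the area between the Qini curve and the random-targeting line) and is an illustrative approximation, not the tools4uplift implementation; the synthetic data are made up.

```python
import numpy as np

def qini_coefficient(uplift_score, treated, outcome, n_bins=10):
    """Approximate Qini coefficient from predicted uplift scores.

    uplift_score: predicted individual treatment effect (higher = target first)
    treated:      1 if the individual was in the treatment group, 0 if control
    outcome:      1 if the individual responded, 0 otherwise
    """
    order = np.argsort(-np.asarray(uplift_score))
    treated = np.asarray(treated)[order]
    outcome = np.asarray(outcome)[order]
    n = len(order)

    qini_curve = []
    for frac in np.linspace(0.0, 1.0, n_bins + 1):
        top = slice(0, int(round(frac * n)))
        t_top, y_top = treated[top], outcome[top]
        n_t, n_c = t_top.sum(), (1 - t_top).sum()
        r_t = (y_top * t_top).sum()             # treated responders among those targeted
        r_c = (y_top * (1 - t_top)).sum()       # control responders among those targeted
        if n_c == 0:
            qini_curve.append(float(r_t))       # no control correction possible yet
        else:
            qini_curve.append(float(r_t - r_c * n_t / n_c))
    qini_curve = np.array(qini_curve)

    random_line = qini_curve[-1] * np.linspace(0.0, 1.0, n_bins + 1)
    # Area between the Qini curve and the random-targeting line (trapezoidal rule).
    return np.trapz(qini_curve - random_line, dx=1.0 / n_bins)

# Tiny synthetic example (not data from the thesis).
rng = np.random.default_rng(1)
n = 2000
treated = rng.integers(0, 2, n)
score = rng.normal(size=n)
# Response probability rises with the score only for treated individuals.
p = 0.10 + 0.08 * treated * (score > 0)
outcome = rng.binomial(1, p)
print("Qini coefficient:", round(float(qini_coefficient(score, treated, outcome)), 4))
```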
