41

Banco Central e preferências assimétricas : uma aplicação de sieve estimators para os EUA e o Brasil

Silva, Rodrigo de Sá da January 2011 (has links)
An interesting question in monetary policy is whether central banks give equal weights to positive and negative deviations of inflation and the output gap from their targets. To answer this question, we estimated the monetary authority's loss function nonparametrically using the method of sieves, expanding it in a basis of orthogonal polynomials. The economy was modeled with forward-looking agents and with commitment on the part of the monetary authority. We applied the method to U.S. monetary policy since 1960 and to Brazil since 1999. For the U.S. economy, no evidence of asymmetry in the monetary authority's preferences was found. In Brazil, however, the Central Bank proved to have asymmetric preferences over inflation, incurring a greater loss for negative deviations from the target than for positive ones.
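As a rough illustration of the sieve idea in this abstract (not the thesis's actual model, which involves forward-looking agents and commitment), the Python sketch below fits a truncated orthogonal-polynomial expansion to noisy observations of a hypothetical asymmetric loss and compares the fitted loss at mirrored deviations:

```python
# Illustrative sketch only: approximate an unknown, possibly asymmetric loss
# with a truncated Legendre expansion, the core idea of a sieve estimator.
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(0)

# Hypothetical "true" loss: linex-style, penalizing negative deviations more.
def true_loss(x):
    return np.exp(-1.5 * x) + 1.5 * x - 1.0

# Noisy observations of the loss over a range of deviations from target.
x = rng.uniform(-1, 1, 500)
y = true_loss(x) + rng.normal(scale=0.05, size=x.size)

# Sieve step: fit a low-order Legendre expansion (in practice the sieve
# dimension K grows slowly with the sample size).
K = 5
coefs = legendre.legfit(x, y, deg=K)
loss_hat = lambda z: legendre.legval(z, coefs)

# Asymmetry check: compare the fitted loss at mirrored deviations.
d = 0.5
print(f"L(-{d}) = {loss_hat(-d):.3f}  vs  L(+{d}) = {loss_hat(d):.3f}")
```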
43

MULTI-SPECTRAL FUSION FOR SEMANTIC SEGMENTATION NETWORKS

Justin Cody Edwards (14700769) 31 May 2023 (has links)
Semantic segmentation is a machine learning task seeing increased use in multiple fields, from medical imagery to land demarcation and autonomous vehicles. Semantic segmentation performs pixel-wise classification of images, creating a new, segmented representation of the input that can be useful for detecting various terrain and objects within an image. Recently, convolutional neural networks have been heavily utilized in networks tackling the semantic segmentation task, particularly in the field of autonomous driving systems.

The requirements of automated driver assistance systems (ADAS) demand that semantic segmentation models targeted for deployment on ADAS be lightweight while maintaining accuracy. A commonly used method to increase accuracy in the autonomous vehicle field is to fuse multiple sensory modalities. This research focuses on fusing long wave infrared (LWIR) imagery with visual spectrum imagery to fill the inherent performance gaps of visual imagery alone. This brings a host of benefits, such as increased performance in various lighting conditions and adverse environmental conditions. Fusion is an effective way to increase the accuracy of a semantic segmentation model, while a lightweight architecture remains key for successful deployment on ADAS, since these systems often have resource constraints and need to operate in real time. The Multi-Spectral Fusion Network (MFNet) [1] meets these requirements by leveraging a sensory fusion approach, and as such was selected as the baseline architecture for this research.

Many improvements were made upon the baseline architecture. These include a novel loss function, categorical cross-entropy Dice loss; the introduction of squeeze-and-excitation (SE) blocks; the addition of pyramid pooling; a new fusion technique; and drop-input data augmentation. These improvements culminated in the Fast Thermal Fusion Network (FTFNet). Further improvements were made by introducing depthwise separable convolutional layers, leading to the lightweight variants FTFNet Lite 1 & 2.
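For readers unfamiliar with compound segmentation losses, here is a minimal PyTorch sketch of one plausible form of a combined cross-entropy and Dice loss; the exact FTFNet formulation may differ:

```python
# Sketch of a "cross-entropy + Dice" compound loss; weights are illustrative.
import torch
import torch.nn.functional as F

def ce_dice_loss(logits, target, eps=1e-6, ce_weight=1.0, dice_weight=1.0):
    """logits: (N, C, H, W) raw scores; target: (N, H, W) integer labels."""
    ce = F.cross_entropy(logits, target)

    probs = logits.softmax(dim=1)                  # (N, C, H, W)
    onehot = F.one_hot(target, logits.shape[1])    # (N, H, W, C)
    onehot = onehot.permute(0, 3, 1, 2).float()    # (N, C, H, W)

    inter = (probs * onehot).sum(dim=(0, 2, 3))
    union = probs.sum(dim=(0, 2, 3)) + onehot.sum(dim=(0, 2, 3))
    dice = (2 * inter + eps) / (union + eps)       # soft per-class Dice
    return ce_weight * ce + dice_weight * (1 - dice.mean())

# Example: 2 images, 5 classes, 32x32 masks.
logits = torch.randn(2, 5, 32, 32)
target = torch.randint(0, 5, (2, 32, 32))
print(ce_dice_loss(logits, target))
```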
44

Dynamic programming numerical solution: post-retirement asset allocation

蔡明諺, Tsai, Ming Yen Unknown Date (has links)
Dynamic programming problems do not always have a closed-form solution, and even when one exists, deriving it can be quite involved. This study builds on the model of Gerrard & Haberman (2004) and solves it with a numerical method that approximates the dynamic programming solution, following Huang (2009), who showed that this numerical method yields a good approximation whether or not a closed form exists. We extend the method to handle more varied types of constraints. Gerrard & Haberman (2004) derived the closed-form optimal investment strategy for a post-retirement plan invested in one risky asset and one riskless asset. We extend the model from two assets to three, high-risk, medium-risk, and riskless, and examine the effects of prohibiting short sales, as in actual markets, and of risk aversion restricting the asset allocation. We compare the two-asset and three-asset results and introduce different objective functions: a variance-control constraint to reduce the probability of ruin, and control of the gap between the target account and the actual account to make the investment more efficient. Although these constraints make the objective function considerably more complex, the numerical method still delivers an approximate solution.
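A toy Python sketch of this kind of numerical dynamic program, with made-up parameters rather than the thesis's calibration: three assets, no short positions, and a quadratic penalty on deviations from a target account path, solved by backward induction on a wealth grid:

```python
import numpy as np

T, n_w = 20, 101                        # periods, wealth-grid points
wealth = np.linspace(0.0, 2.0, n_w)     # discretized wealth levels
target = np.linspace(1.0, 1.5, T + 1)   # hypothetical target account path
mu = np.array([0.08, 0.05, 0.02])       # expected returns: high, mid, riskless
sigma = np.array([0.20, 0.10, 0.00])    # return volatilities

# Candidate allocations on a coarse simplex grid (no short positions).
grid = [np.array([a, b, 1 - a - b])
        for a in np.linspace(0, 1, 11)
        for b in np.linspace(0, 1, 11) if a + b <= 1 + 1e-9]

shocks = np.array([-1.0, 1.0])          # two-point return shock, prob 1/2 each
V = (wealth - target[T]) ** 2           # terminal cost
policy = np.zeros((T, n_w), dtype=int)

for t in range(T - 1, -1, -1):          # backward induction
    V_new = np.full(n_w, np.inf)
    for k, w in enumerate(grid):
        rets = mu @ w + shocks * (sigma @ w)   # portfolio return per shock
        cost = (wealth - target[t]) ** 2       # stage cost: deviation penalty
        for r in rets:                         # expectation over shocks
            next_w = np.clip(wealth * (1 + r), wealth[0], wealth[-1])
            cost = cost + 0.5 * np.interp(next_w, wealth, V)
        better = cost < V_new
        V_new[better] = cost[better]
        policy[t][better] = k
    V = V_new

print("t=0 optimal weights at wealth 1.0:", grid[policy[0, n_w // 2]])
```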
45

The design of specification and X-bar-S charts with minimal cost

沈依潔, Shen, I Chieh Unknown Date (has links)
The design of economic statistical control charts and of specifications are both crucial research areas in industry, and the determination of consumer and producer specifications is important to the producer. In this study, we consider eight cost models that combine the consumer loss function and/or the producer loss function with economic statistical X-bar and S charts or Shewhart-type economic X-bar and S charts. To determine the design parameters of the X-bar and S charts and the consumer and/or producer tolerances, we use a genetic algorithm to minimize the expected cost per unit time. From the examples and sensitivity analyses, we find that the optimal design parameters of the Shewhart-type economic X-bar and S charts are similar to those of the economic statistical X-bar and S charts, and that the expected cost per unit time may be lower than the actual cost per unit time when the cost model considers only consumer loss or only producer loss. When both consumer and producer tolerances enter the cost model, the design parameters of the economic X-bar and S charts are not sensitive to the cost models. If the producer tolerance is smaller than the consumer tolerance, and the producer loss is smaller than the consumer loss, the optimal producer tolerance should be small.
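A bare-bones genetic algorithm sketch of the kind of search described here; the cost function below is a stand-in, not the thesis's economic-statistical cost model:

```python
import numpy as np

rng = np.random.default_rng(1)

def expected_cost(n, h, k):
    # Stand-in cost: sampling effort vs. cost of slow detection. The real
    # model would embed the economic-statistical cost structure and any
    # statistical constraints (e.g., bounds on false-alarm rates).
    sampling = 5.0 * n / h
    detection_delay = h * np.exp(0.5 * k) / np.sqrt(n)
    return sampling + 2.0 * detection_delay

def random_design():
    # Design vector: [sample size n, sampling interval h, limit width k].
    return np.array([rng.integers(2, 26), rng.uniform(0.1, 8.0), rng.uniform(1.0, 4.0)])

pop = [random_design() for _ in range(40)]
for _ in range(200):
    pop.sort(key=lambda d: expected_cost(d[0], d[1], d[2]))
    survivors, children = pop[:20], []
    for _ in range(20):
        i, j = rng.choice(20, size=2, replace=False)
        child = np.where(rng.random(3) < 0.5, survivors[i], survivors[j])  # crossover
        child = child + rng.normal(0.0, [1.0, 0.2, 0.1])                   # mutation
        child[0] = np.clip(np.rint(child[0]), 2, 25)   # keep n integral, in range
        child[1] = np.clip(child[1], 0.1, 8.0)
        child[2] = np.clip(child[2], 1.0, 4.0)
        children.append(child)
    pop = survivors + children

pop.sort(key=lambda d: expected_cost(d[0], d[1], d[2]))
best = pop[0]
print(f"best design: n={int(best[0])}, h={best[1]:.2f}, k={best[2]:.2f}")
```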
46

Towards Single Molecule Imaging - Understanding Structural Transitions Using Ultrafast X-ray Sources and Computer Simulations

Caleman, Carl January 2007 (has links)
X-ray lasers bring us into a new world in photon science by delivering extraordinarily intense beams of x-rays in very short bursts that can be more than ten billion times brighter than pulses from other x-ray sources. These lasers find applications in sciences ranging from astrophysics to structural biology, and could allow us to obtain images of single macromolecules injected into the x-ray beam. A macromolecule injected into vacuum in a microdroplet will be affected by evaporation and by the dynamics of the carrier liquid before being hit by the x-ray pulse. Simulations of neutral and charged water droplets were performed to predict structural changes and changes of temperature due to evaporation, and the results are discussed in the context of single molecule imaging. Further studies address ionization caused by the intense x-ray radiation: the simulations reveal the development of secondary electron cascades in water, and also in KI and CsI, where experimental data exist. The results agree with observation and show the temporal, spatial and energetic evolution of secondary electron cascades in the sample. X-ray diffraction is sensitive to structural changes on the length scale of chemical bonds. Using a short infrared pump pulse to trigger structural changes and a short x-ray pulse to probe them, these changes can be studied with a temporal resolution similar to the pulse lengths. Time-resolved diffraction experiments were performed on a phase transition during resolidification of a non-thermally molten InSb crystal, revealing the dynamics of crystal regrowth. Computer simulations of infrared laser-induced melting of bulk ice provide insight into the dynamics and the wavelength dependence of melting. Together these studies form a basis for planning experiments with x-ray lasers.
47

The Study of Adaptive Weighted Loss Control Charts for Dependent Process Steps

林亮妤, Lin, Liang Yu Unknown Date (has links)
Recent research has shown that control charts with adaptive features detect process shifts faster than traditional Shewhart charts. In this article, we propose three kinds of adaptive weighted loss (WL) control charts, variable sampling interval (VSI) WL charts, variable sample size and sampling interval (VSSI) WL charts, and variable parameter (VP) WL charts, to monitor the target and variance of a single process step and of two dependent process steps simultaneously. These adaptive WL control charts can effectively distinguish which process step is out of control. We use the Markov chain approach to calculate the adjusted average time to signal (AATS) and the average number of observations to signal (ANOS) in order to measure the performance of the proposed charts. From the numerical examples and data analyses, we find that the adaptive WL control charts have better detection ability and performance than fixed-parameter (FP) WL control charts and FP Z(X-bar)-Z(Sx^2) and Z(e-bar)-Z(Se^2) control charts. We also propose optimal adaptive WL control charts, obtained with an optimization technique that minimizes the AATS when users cannot specify the values of the variable parameters. In addition, we discuss the impact of misusing the weighted loss of outgoing quality control chart. In conclusion, using a single chart to monitor a process is inherently easier than using two charts; the WL control charts are easy for users to understand and have better performance and detection ability than the other charts, so we recommend their use in real industrial processes.
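The Markov chain calculation referenced here can be illustrated with a small absorbing-chain toy example (transition probabilities invented for illustration): treating the chart's in-control sampling regimes as transient states and a signal as absorption, the expected time to signal follows from the fundamental matrix:

```python
import numpy as np

# Transient states of a VSI-style chart: (relaxed sampling, tightened
# sampling); the absorbing state is a signal. Q[i, j] is the probability of
# moving from transient state i to transient state j between samples.
Q = np.array([[0.90, 0.07],
              [0.60, 0.30]])
h = np.array([2.0, 0.5])   # sampling interval (hours) used in each state

# Fundamental matrix N = (I - Q)^(-1): N[i, j] is the expected number of
# visits to state j starting from i, so N @ h is the expected time until
# absorption (a signal), weighting each visit by its sampling interval.
N = np.linalg.inv(np.eye(2) - Q)
time_to_signal = N @ h
print("expected time to signal from each state:", time_to_signal)
```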
48

Image segmentation of unbalanced data using artificial intelligence

Polách, Michal January 2019 (has links)
This thesis focuses on the segmentation of unbalanced datasets using artificial intelligence. Numerous existing methods for dealing with unbalanced datasets are examined, and some of them are then applied to a real problem consisting of the segmentation of a dataset with a class ratio of more than 6000:1.
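One standard remedy from this literature, sketched here in PyTorch under assumed pixel counts, is inverse-frequency class weighting in the loss:

```python
import torch
import torch.nn.functional as F

# Hypothetical pixel counts per class from the training set (~6000:1 ratio).
counts = torch.tensor([6000.0, 1.0])
weights = counts.sum() / (len(counts) * counts)   # inverse-frequency weights

logits = torch.randn(4, 2, 64, 64)                # (N, C, H, W) model output
target = (torch.rand(4, 64, 64) < 1 / 6001).long()  # mostly-background mask

# Rare-class pixels now contribute far more to the loss than background.
loss = F.cross_entropy(logits, target, weight=weights)
print(loss)
```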
49

Improving the Robustness of Deep Neural Networks against Adversarial Examples via Adversarial Training with Maximal Coding Rate Reduction

Chu, Hsiang-Yu January 2022 (has links)
Deep learning is one of the hottest scientific topics at the moment. Deep convolutional networks can solve various complex tasks in the field of image processing. However, adversarial attacks have been shown to be able to fool deep learning models. An adversarial attack is accomplished by applying specially designed perturbations to the input image of a deep learning model. The perturbations are almost visually indistinguishable to human eyes, but can fool classifiers into making wrong predictions. In this thesis, adversarial attacks and methods to improve deep learning models' robustness against adversarial samples are studied. Five different adversarial attack algorithms were implemented, spanning white-box and black-box attacks, targeted and non-targeted attacks, and image-specific and universal attacks. These attacks generated adversarial examples that caused a significant drop in classification accuracy. Adversarial training is a commonly used strategy for improving the robustness of deep learning models against adversarial examples, and it has been shown to provide a regularization benefit beyond that of dropout. Adversarial training is performed by incorporating adversarial examples into the training process, traditionally with cross-entropy loss as the loss function. To improve robustness further, this thesis proposes two new methods of adversarial training that apply the principle of Maximal Coding Rate Reduction, whose loss function maximizes the coding rate difference between the whole data set and the sum of each individual class. We evaluated the different adversarial training methods by comparing clean accuracy, adversarial accuracy and local Lipschitzness, and showed that adversarial training with the Maximal Coding Rate Reduction loss function yields a more robust network than the traditional adversarial training method.
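A sketch of the Maximal Coding Rate Reduction objective, commonly attributed to Yu et al. (2020); the feature dimensions and the precision parameter eps below are illustrative:

```python
# MCR^2 sketch: coding rate of the whole feature set minus the class-weighted
# coding rates of each class's features; negated so it can be minimized.
import torch

def coding_rate(Z, eps=0.5):
    """Z: (n, d) feature matrix. Returns (1/2) logdet(I + d/(n*eps^2) Z^T Z)."""
    n, d = Z.shape
    return 0.5 * torch.logdet(torch.eye(d) + (d / (n * eps**2)) * Z.T @ Z)

def mcr2_loss(Z, labels, num_classes, eps=0.5):
    """Negative rate reduction: -[R(Z) - sum_j (n_j/n) R(Z_j)]."""
    n = Z.shape[0]
    rate_whole = coding_rate(Z, eps)
    rate_classes = sum(
        (mask.sum() / n) * coding_rate(Z[mask], eps)
        for j in range(num_classes)
        if (mask := labels == j).any()
    )
    return -(rate_whole - rate_classes)

# Example: 128 normalized 32-dim features across 10 classes.
Z = torch.nn.functional.normalize(torch.randn(128, 32), dim=1)
labels = torch.randint(0, 10, (128,))
print(mcr2_loss(Z, labels, num_classes=10))
```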
50

Generalized quantile regression

Guo, Mengmeng 22 August 2012 (has links)
Generalized quantile regressions, including conditional quantiles and expectiles as special cases, are useful alternatives to the conditional mean for characterizing a conditional distribution, especially when the interest lies in the tails. We denote by $v_n(x)$ the kernel smoothing estimator of the expectile curve and prove the strong uniform consistency rate of $v_n(x)$ under general conditions. Moreover, using strong approximations of the empirical process and extreme value theory, we consider the asymptotic maximal deviation $\sup_{0 \leqslant x \leqslant 1}|v_n(x)-v(x)|$. Following the asymptotic theory, we construct simultaneous confidence bands around the estimated expectile function. We also develop a functional data analysis approach to jointly estimate a family of generalized quantile regressions. Our approach assumes that the generalized quantiles share some common features that can be summarized by a small number of principal component functions. The principal components are modeled as spline functions and are estimated by minimizing a penalized asymmetric loss measure; an iteratively reweighted least squares algorithm is developed for the computation. While separate estimation of individual generalized quantile regressions usually suffers from large variability due to a lack of sufficient data, our joint estimation approach significantly improves the estimation efficiency by borrowing strength across data sets, as demonstrated in a simulation study. The proposed method is applied to data from 150 weather stations in China to obtain generalized quantile curves of the temperature volatility at these stations.
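A compact sketch of the asymmetric-loss idea behind expectiles: the tau-expectile minimizes an asymmetrically weighted squared error and can be computed by iteratively reweighted averaging. A single constant expectile is computed here; kernel-smoothed expectile curves like those above apply the same idea pointwise:

```python
import numpy as np

def expectile(y, tau=0.9, tol=1e-10, max_iter=1000):
    """Iteratively reweighted mean: fixed point of the tau-expectile equation."""
    e = y.mean()
    for _ in range(max_iter):
        w = np.where(y > e, tau, 1 - tau)    # asymmetric weights
        e_new = np.sum(w * y) / np.sum(w)    # weighted-mean update
        if abs(e_new - e) < tol:
            break
        e = e_new
    return e

rng = np.random.default_rng(2)
y = rng.standard_normal(10_000)
print("0.5-expectile (the mean):", expectile(y, 0.5))
print("0.9-expectile:", expectile(y, 0.9))
```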
