281

The Thermodynamic Interaction of Light with Matter

Alhanash, Mirna January 2019 (has links)
Light is electromagnetic radiation that can be represented as a spectrum with a wide range of wavelengths. Blackbody radiation is a type of thermal radiation and an important topic to explore: the blackbody is an ideal body against which the properties of real materials are often described, so it helps in understanding how materials behave at the quantum level. One must understand how a material interacts with the light spectrum and how electron excitation happens. Thus, concepts such as Planck’s law, energy quantization and band theory will be discussed to grasp how light interacts with materials.
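As a concrete illustration of the Planck's law mentioned in the abstract, the sketch below (plain Python, SI constants; the temperature and wavelength grid are arbitrary illustrative choices, not from the thesis) evaluates the blackbody spectral radiance and checks that its emission peak agrees with Wien's displacement law.

```python
import math

# Physical constants (SI units)
H = 6.62607015e-34   # Planck constant, J*s
C = 2.99792458e8     # speed of light, m/s
K_B = 1.380649e-23   # Boltzmann constant, J/K

def planck_radiance(wavelength_m, temperature_k):
    """Spectral radiance of a blackbody, B(lambda, T), in W * sr^-1 * m^-3."""
    a = 2.0 * H * C**2 / wavelength_m**5
    b = H * C / (wavelength_m * K_B * temperature_k)
    return a / math.expm1(b)

# Locate the emission peak for a sun-like temperature (5778 K) on a
# 1 nm wavelength grid and compare with Wien's displacement law.
T = 5778.0
grid = [i * 1e-9 for i in range(100, 3001)]   # 100 nm .. 3000 nm
peak = max(grid, key=lambda lam: planck_radiance(lam, T))
wien = 2.897771955e-3 / T                     # Wien's law prediction, in m
```

The grid argmax lands within one grid step of the Wien prediction (about 501.5 nm for this temperature), which is the kind of consistency check the quantization argument in the abstract rests on.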
282

Self-Organizing Neural Networks for Sequence Processing

Strickert, Marc 27 January 2005 (has links)
This work investigates the self-organizing representation of temporal data in prototype-based neural networks. Extensions of the supervised learning vector quantization (LVQ) and the unsupervised self-organizing map (SOM) are considered in detail. The principle of Hebbian learning through prototypes yields compact data models that can be easily interpreted by similarity reasoning. In order to obtain robust prototype dynamics, LVQ is extended by neighborhood cooperation between neurons to prevent a strong dependence on the initial prototype locations. Additionally, implementations of more general, adaptive metrics are studied with a particular focus on the built-in detection of the data attributes involved in a given classification task. For unsupervised sequence processing, two modifications of SOM are pursued: the SOM for structured data (SOMSD), realizing an efficient back-reference to the previous best matching neuron in a triangular low-dimensional neural lattice, and the merge SOM (MSOM), expressing the temporal context as a fractal combination of the previously most active neuron and its context. The first extension, SOMSD, tackles data dimension reduction and planar visualization; the second, MSOM, is designed for obtaining higher quantization accuracy. The supplied experiments underline the data modeling quality of the presented methods.
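The LVQ extensions described above build on the classical LVQ1 prototype update. As a minimal sketch of that base rule only (toy data; no neighborhood cooperation and no adaptive metric, which are the thesis's contributions):

```python
import math

def lvq1_step(prototypes, labels, x, y, lr=0.1):
    """One LVQ1 update: pull the nearest prototype toward the sample if the
    class labels match, push it away otherwise. Data here is hypothetical."""
    dists = [math.dist(p, x) for p in prototypes]
    win = dists.index(min(dists))                 # best matching prototype
    sign = 1.0 if labels[win] == y else -1.0      # attract or repel
    prototypes[win] = [w + sign * lr * (xi - w)
                       for w, xi in zip(prototypes[win], x)]
    return win

# Two prototypes for two classes; one labeled training sample of class 0.
protos = [[0.0, 0.0], [1.0, 1.0]]
proto_labels = [0, 1]
winner = lvq1_step(protos, proto_labels, x=[0.2, 0.1], y=0)
```

The winning prototype (index 0 here) moves a fraction `lr` of the way toward the sample; the losing prototype is untouched, which is exactly the initialization sensitivity the thesis's neighborhood cooperation is meant to mitigate.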
283

On quantization and sporadic measurements in control systems : stability, stabilization, and observer design / Sur la quantification et l’intermittence de mesures dans les systèmes de commande : stabilité, stabilisation et synthèse d’observateurs

Ferrante, Francesco 21 October 2015 (has links)
Dans cette thèse, nous aborderons deux aspects fondamentaux qui se posent dans les systèmes de commande modernes du fait de l'interaction entre des processus en temps continu et des dispositifs numériques: la synthèse de lois de commande en présence de quantificateurs et l'estimation d'état en présence de mesures sporadiques. Une des caractéristiques principales de cette thèse consiste également à proposer des méthodes constructives pour résoudre les problèmes envisagés. Plus précisément, pour répondre à cette exigence, nous allons nous tourner vers une approche basée sur les inégalités matricielles linéaires (LMI). Dans la première partie de la thèse, nous proposons un ensemble d'outils constructifs basés sur une approche LMI, pour l'analyse et la conception de systèmes de commande quantifiés impliquant des modèles et des correcteurs linéaires. L'approche est basée sur l'utilisation des inclusions différentielles qui permet de modéliser finement le comportement de la boucle fermée et ainsi d'obtenir des résultats intéressants. Dans la seconde partie de la thèse, inspirés par certains schémas d'observation classiques présentés dans la littérature, nous proposons deux observateurs pour l'estimation de l'état d'un système linéaire en présence de mesures sporadiques, c'est-à-dire prenant en compte la nature discrète des mesures disponibles. De plus, en se basant sur une des deux solutions présentées, une architecture de commande basée observateur est proposée afin de stabiliser asymptotiquement un système linéaire en présence à la fois de mesures sporadiques et d'un accès intermittent à l'entrée de commande du système. / In this dissertation, two fundamental aspects arising in modern engineered control systems will be addressed:On the one hand, the presence of quantization in standard control loops. On the other hand, the state estimation in the presence of sporadic available measurements. These two aspects are addressed in two different parts. 
One of the main features of this thesis is the effort to derive computer-aided tools for solving the considered problems. Specifically, to meet this requirement, we rely on a linear matrix inequality (LMI) approach. In the first part, we propose a set of LMI-based constructive Lyapunov tools for the analysis and the design of quantized control systems involving linear plants and linear controllers. The entire treatment revolves around the use of differential inclusions as modeling tools, and on the stabilization of compact sets as a stability notion. In the second part of the thesis, inspired by some of the classical observation schemes presented in the literature on sampled-data observers, we propose two observers to exponentially estimate the state of a linear system in the presence of sporadic measurements. In addition, building upon one of the two observers, an observer-based controller architecture is proposed to asymptotically stabilize a linear plant in the presence of sporadic measurements and intermittent input access.
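The observer scheme of the second part, open-loop prediction between samples with a correction jump whenever a measurement arrives, can be sketched on a scalar plant. All numbers below (pole, gain, sampling pattern) are illustrative choices, not values from the thesis:

```python
# Scalar plant x' = a*x observed through sporadic samples y = x.
# Between measurements the observer copies the plant dynamics (prediction);
# at each measurement instant it jumps toward the measured output.
A = 0.1              # mildly unstable plant pole (illustrative)
DT = 0.01            # forward-Euler integration step
SAMPLE_EVERY = 100   # a measurement arrives once per 100 steps
GAIN = 0.5           # correction gain applied at measurement instants

x, xhat = 1.0, 0.0   # true state and its estimate
for k in range(2000):
    if k % SAMPLE_EVERY == 0:        # sporadic measurement available
        xhat += GAIN * (x - xhat)    # jump: xhat+ = xhat + L*(y - xhat)
    x += DT * A * x                  # plant flow (forward Euler)
    xhat += DT * A * xhat            # observer copies the flow

error = abs(x - xhat)
```

Between samples the estimation error grows with the unstable pole, and each measurement jump contracts it; as long as the contraction outweighs the inter-sample growth, the error converges, which is the intuition behind the exponential estimates in the thesis.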
284

Inference in Generalized Linear Models with Applications

Byrne, Evan 29 August 2019 (has links)
No description available.
285

Direct dynamical tunneling in systems with a mixed phase space

Schilling, Lars 19 July 2007 (has links)
Tunneling in 1D describes the effect that quantum particles can penetrate a classically insurmountable potential energy barrier. The extension to classically forbidden transitions in phase space generalizes the tunneling concept. A typical 1D Hamiltonian system has a mixed phase space. It contains regions of regular and chaotic dynamics, the so-called regular islands and the chaotic sea. These different phase space components are classically separated by dynamically generated barriers. Quantum mechanically they are, however, connected by dynamical tunneling. We perform a semiclassical quantization of almost resonance-free regular islands and transporting island chains of quantum maps. This yields so-called quasimodes, which are used for the investigation of direct dynamical tunneling from an almost resonance-free regular island to the chaotic sea. We derive a formula which allows for the determination of dynamical tunneling rates. Good agreement between this analytical prediction and numerical results is found over several orders of magnitude for two example systems. / Der 1D Tunneleffekt bezeichnet das Durchdringen einer klassisch nicht überwindbaren potentiellen Energiebarriere durch Quantenteilchen. Eine Verallgemeinerung des Tunnelbegriffs ist die Erweiterung auf jegliche Art von klassisch verbotenen Übergangsprozessen im Phasenraum. Der Phasenraum eines typischen 1D Hamiltonschen Systems ist gemischt. Er besteht aus Bereichen regulärer und chaotischer Dynamik, den sogenannten regulären Inseln und der chaotischen See. Während diese verschiedenen Phasenraumbereiche klassisch durch dynamisch generierte Barrieren voneinander getrennt sind, existiert quantenmechanisch jedoch eine Verknüpfung durch den dynamischen Tunnelprozess. In dieser Arbeit wird eine semiklassische Quantisierung von praktisch resonanz-freien regulären Inseln und transportierenden Inselketten von Quantenabbildungen durchgeführt. 
Die daraus folgenden sogenannten Quasimoden werden für die Untersuchung des direkten dynamischen Tunnelns aus einer praktisch resonanz-freien regulären Insel in die chaotische See verwendet, was auf eine Tunnelraten vorhersagende Formel führt. Ihre anschließende Anwendung auf zwei Modellsysteme zeigt eine gute Übereinstimmung zwischen Numerik und analytischer Vorhersage über viele Größenordnungen.
286

Quantized Feedback for Slow Fading Channels

Kim, Thanh Tùng January 2006 (has links)
Two topics in fading channels with a strict delay constraint and a resolution-constrained feedback link are treated in this thesis. First, a multi-layer variable-rate single-antenna communication system with quantized feedback, where the expected rate is chosen as the performance measure, is studied under both short-term and long-term power constraints. Iterative algorithms exploiting results in the literature of parallel broadcast channels are developed to design the system parameters. A necessary and sufficient condition for single-layer coding to be optimal is derived. In contrast to the ergodic case, it is shown that a few bits of feedback information can improve the expected rate dramatically. The role of multi-layer coding, however, reduces quickly as the resolution of the feedback link increases. The other part of the thesis deals with partial power control systems utilizing quantized feedback to minimize outage probability, with an emphasis on the diversity-multiplexing tradeoff. An index mapping with circular structure is shown to be optimal and the design is facilitated with a justified Gaussian approximation. The diversity gain as a function of the feedback resolution is analyzed. The results are then extended to characterize the entire diversity-multiplexing tradeoff curve of multiple-antenna channels with resolution-constrained feedback. Adaptive-rate communication is also studied, where the concept of minimum multiplexing gain is introduced. It is shown that the diversity gain of a system increases significantly even with coarsely quantized feedback, especially at low multiplexing gains. / QC 20101117
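The partial power-control idea above, where a resolution-constrained index is fed back so the transmitter can adapt its power, can be illustrated with a toy B-bit scalar quantizer that inverts the worst-case gain of the reported region. The thresholds and rate target below are hypothetical, and the thesis's optimized circular index mapping is not reproduced here:

```python
# B-bit quantized channel-state feedback with per-region channel inversion.
# The receiver maps the fading gain g to one of 2^B index regions; the
# transmitter inverts the worst gain of the reported region so that the
# instantaneous rate target is met whenever g falls inside that region.
B = 2                              # feedback resolution in bits
thresholds = [0.1, 0.5, 1.0, 2.0]  # lower edges of the 2^B regions
RATE_TARGET = 1.0                  # target rate in bits/s/Hz

def feedback_index(g):
    """Receiver side: quantize the channel gain to a B-bit region index."""
    idx = 0
    for i, t in enumerate(thresholds):
        if g >= t:
            idx = i
    return idx

def tx_power(idx):
    """Transmitter side: invert the worst-case gain of the reported region."""
    g_min = thresholds[idx]
    return (2.0 ** RATE_TARGET - 1.0) / g_min

# A gain of 0.7 reports region 1 (edge 0.5); power inverts g = 0.5.
idx = feedback_index(0.7)
p = tx_power(idx)
```

Coarser feedback (smaller B) forces the transmitter to budget for a worse gain per region, spending more power or accepting outage below the lowest threshold, which is the trade-off the thesis quantifies through the diversity-multiplexing analysis.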
287

First-Order Algorithms for Communication Efficient Distributed Learning

Khirirat, Sarit January 2019 (has links)
Technological developments in devices and storage have made large volumes of data more accessible than ever. This transformation leads to optimization problems with massive data in both volume and dimension. In response to this trend, the popularity of optimization on high-performance computing architectures has increased unprecedentedly. These scalable optimization solvers can achieve high efficiency by splitting computational loads among multiple machines. However, these methods also incur large communication overhead. To solve optimization problems with millions of parameters, communication between machines has been reported to consume up to 80% of the training time. To alleviate this communication bottleneck, many optimization algorithms with data compression techniques have been studied. In practice, they have been reported to significantly reduce communication costs while exhibiting convergence almost comparable to that of the full-precision algorithms. To understand this behavior, we develop theory and techniques in this thesis to design communication-efficient optimization algorithms. In the first part, we analyze the convergence of optimization algorithms with direct compression. First, we outline definitions of compression techniques which cover many compressors of practical interest. Then, we provide a unified analysis framework for optimization algorithms with compressors, which can be either deterministic or randomized. In particular, we show how the tuning parameters of compressed optimization algorithms must be chosen to guarantee performance. Our results show an explicit dependency on compression accuracy and on delay effects due to the asynchrony of the algorithms. This allows us to characterize the trade-off between iteration and communication complexity under gradient compression. In the second part, we study how error compensation schemes can improve the performance of compressed optimization algorithms. 
Even though convergence guarantees of optimization algorithms with error compensation have been established, there is very limited theoretical support which guarantees improved solution accuracy. We therefore develop theoretical explanations, which show that error compensation guarantees arbitrarily high solution accuracy from compressed information. In particular, error compensation helps remove accumulated compression errors, thus improving solution accuracy especially for ill-conditioned problems. We also provide strong convergence analysis of error compensation on parallel stochastic gradient descent across multiple machines. In particular, the error-compensated algorithms, unlike direct compression, result in significant reduction in the compression error. Applications of the algorithms in this thesis to real-world problems with benchmark data sets validate our theoretical results. / Utvecklandet av kommunikationsteknologi och datalagring har gjort stora mängder av datasamlingar mer tillgängliga än någonsin. Denna förändring leder till numeriska optimeringsproblem med datamängder med stor skala i volym och dimension. Som svar på denna trend har populariteten för högpresterande beräkningsarkitekturer ökat mer än någonsin tidigare. Skalbara optimeringsverktyg kan uppnå hög effektivitet genom att fördela beräkningsbördan mellan ett flertal maskiner. De kommer dock i praktiken med ett pris som utgörs av betydande kommunikationsomkostnader. Detta orsakar ett skifte i flaskhalsen för prestandan från beräkningar till kommunikation. När lösning av verkliga optimeringsproblem sker med ett stort antal parametrar, dominerar kommunikationen mellan maskiner nästan 80% av träningstiden. För att minska kommunikationsbelastningen, har ett flertal kompressionstekniker föreslagits i litteraturen. 
Även om optimeringsalgoritmer som använder dessa kompressorer rapporteras vara lika konkurrenskraftiga som sina motsvarigheter med full precision, dras de med en förlust av noggrannhet. För att ge en uppfattning om detta, utvecklar vi i denna avhandling teori och tekniker för att designa kommunikations-effektiva optimeringsalgoritmer som endast använder information med låg precision. I den första delen analyserar vi konvergensen hos optimeringsalgoritmer med direkt kompression. Först ger vi en översikt av kompressionstekniker som täcker in många kompressorer av praktiskt intresse. Sedan presenterar vi ett enhetligt analysramverk för optimeringsalgoritmer med kompressorer, som kan vara antingen deterministiska eller randomiserade. I synnerhet visas val av parametrar i komprimerade optimeringsalgoritmer som avgörs av kompressorns parametrar som garanterar konvergens. Våra konvergensgarantier visar beroende av kompressorns noggrannhet och fördröjningseffekter på grund av asynkronicitet hos algoritmer. Detta låter oss karakterisera avvägningen mellan iterations- och kommunikations-komplexitet när kompression används. I den andra delen studerar vi hög prestanda hos felkompenseringsmetoder för komprimerade optimeringsalgoritmer. Även om konvergensgarantier med felkompensering har etablerats finns det väldigt begränsat teoretiskt stöd för konkurrenskraftiga konvergensgarantier med felkompensering. Vi utvecklar därför teoretiska förklaringar, som visar att användande av felkompensering garanterar godtyckligt hög lösningsnoggrannhet från komprimerad information. I synnerhet bidrar felkompensering till att ta bort ackumulerade kompressionsfel och förbättrar därmed lösningsnoggrannheten speciellt för illa konditionerade kvadratiska optimeringsproblem. Vi presenterar också stark konvergensanalys för felkompensering tillämpat på stokastiska gradientmetoder med ett kommunikationsnätverk innehållande ett flertal maskiner. 
De felkompenserade algoritmerna resulterar, i motsats till direkt kompression, i betydande reducering av kompressionsfelet. Simuleringar av algoritmer i denna avhandling på verkliga problem med referensdatamängder validerar våra teoretiska resultat. / QC 20191120
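The error-compensation mechanism analyzed in the abstract can be sketched on a toy ill-conditioned quadratic: the compressor transmits only the top-1 coordinate of the gradient, and the untransmitted residual is stored locally and added back at the next step. The objective, stepsize, and compressor choice below are illustrative, not the thesis's experimental setup:

```python
# Error-compensated gradient descent with a top-1 sparsifying compressor
# on the ill-conditioned quadratic f(x) = 0.5*x0^2 + 5*x1^2.

def grad(x):
    """Gradient of f(x) = 0.5*x0^2 + 5*x1^2."""
    return [x[0], 10.0 * x[1]]

def top1(v):
    """Compressor: keep only the largest-magnitude entry, zero the rest."""
    k = 0 if abs(v[0]) >= abs(v[1]) else 1
    out = [0.0, 0.0]
    out[k] = v[k]
    return out

x = [1.0, 1.0]
err = [0.0, 0.0]   # accumulated compression error (the "memory")
lr = 0.05
for _ in range(500):
    g = grad(x)
    corrected = [gi + ei for gi, ei in zip(g, err)]   # add back old residual
    sent = top1(corrected)                            # what gets communicated
    err = [c - s for c, s in zip(corrected, sent)]    # store new residual
    x = [xi - lr * si for xi, si in zip(x, sent)]

final_norm = abs(x[0]) + abs(x[1])
```

Because every dropped gradient component is eventually re-injected through `err`, the iterates still drive both coordinates to zero, illustrating the claim that error compensation removes accumulated compression error; with direct compression (setting `err` to zero each step) the small-gradient coordinate can stall.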
288

QPLaBSE: Quantized and Pruned Language-Agnostic BERT Sentence Embedding Model : Production-ready compression for multilingual transformers / QPLaBSE: Kvantiserad och prunerad LaBSE : Produktionsklar komprimering för flerspråkiga transformer-modeller

Langde, Sarthak January 2021 (has links)
Transformer models perform well on Natural Language Processing and Natural Language Understanding tasks. Training and fine-tuning of these models consume a large amount of data and computing resources. Fast inference also requires high-end hardware for user-facing products. While distillation, quantization, and head-pruning for transformer models are well-explored domains in academia, the practical application is not straightforward. Currently, for good accuracy of the optimized models, it is necessary to fine-tune them for a particular task. This makes the generalization of the model difficult. If the same model has to be used for multiple downstream tasks, then it would require applying the process of optimization with fine-tuning for each task. This thesis explores the techniques of quantization and pruning for optimization of the Language-Agnostic BERT Sentence Embedding (LaBSE) model without fine-tuning for a downstream task. This should enable the model to be generalized enough for any downstream task. The techniques explored in this thesis are dynamic quantization, static quantization, quantization-aware training, and head-pruning. The downstream performance is evaluated using sentiment classification, intent classification, and language-agnostic classification tasks. The results show that LaBSE can be accelerated on the CPU to 2.6x its original inference time without any loss of accuracy. Head-pruning 50% of the heads from each layer leads to a 1.2x speedup, while removing all heads but one leads to a 1.32x speedup. A speedup of almost 9x is achieved by combining quantization with head-pruning, with an average 8% drop in accuracy on downstream evaluation tasks. / Transformer-modeller ger bra resultat i uppgifter som rör behandling av och förståelse för naturligt språk. Träning och finjustering av dessa modeller kräver dock en stor mängd data och datorresurser. 
Snabb inferensförmåga kräver också högkvalitativ hårdvara för användarvänliga produkter och tjänster. Även om destillering, kvantisering och head-pruning för transformer-modeller är väl utforskade områden inom den akademiska världen är den praktiska tillämpningen inte okomplicerad. För närvarande är det nödvändigt att finjustera de optimerade modellerna för en viss uppgift för att uppnå god noggrannhet där. Detta gör det svårt att generalisera modellerna. Om samma modell skall användas för flera uppgifter i sekvens så måste man tillämpa optimeringsprocessen med finjustering för varje uppgift. I den här uppsatsen undersöks tekniker för kvantisering och prunering för optimering av LaBSE-modellen (Language-Agnostic BERT Sentence Embedding) utan finjustering för en downstream-uppgift. Detta bör göra det möjligt att generalisera modellen tillräckligt mycket för alla efterföljande uppgifter. De tekniker som undersöks är dynamisk kvantisering, statisk kvantisering, samt kvantisering för träning och head-pruning. Prestandan i efterföljande led utvärderas med hjälp av klassificering av känslor, avsiktsklassificering och språkagnostiska klassificeringsuppgifter. Resultaten visar att LaBSE kan öka effektiviteten hos CPU:n till 2,6 gånger sin ursprungliga inferenstid utan någon förlust av noggrannhet. Om 50% av huvudena från varje lager tas bort leder det till 1,2 gånger snabbare hastighet, medan det leder till 1,32 gånger snabbare hastighet om alla huvuden utom ett tas bort. Genom att kombinera kvantisering med head-pruning uppnås en ökning av hastigheten med nästan 9x, med en genomsnittlig minskning av noggrannheten med 8% i utvärderingsuppgifter nedströms.
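As a hedged illustration of what the quantization techniques above do to a weight tensor, the sketch below applies symmetric int8 quantization to a hand-picked weight vector in plain Python. The weights are hypothetical and the thesis works with framework quantizers on LaBSE, not this toy routine:

```python
# Symmetric int8 quantization of a weight vector: scale by the maximum
# magnitude, round to the signed 8-bit range, and dequantize on the fly.

def quantize_int8(weights):
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [qi * scale for qi in q]

w = [0.42, -1.27, 0.005, 0.8, -0.33]       # illustrative fp32 weights
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
max_err = max(abs(a - b) for a, b in zip(w, w_hat))
```

Each weight is stored in one byte instead of four, and the reconstruction error is bounded by half the scale step, which is why the accuracy loss stays small as long as the weight distribution has no extreme outliers.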
289

Evolutionary Methodology for Optimization of Image Transforms Subject to Quantization Noise

Peterson, Michael Ray 25 June 2008 (has links)
No description available.
290

A Radial Basis Function Approach to a Color Image Classification Problem in a Real Time Industrial Application

Sahin, Ferat 27 June 1997 (has links)
In this thesis, we introduce a radial basis function network approach to solve a color image classification problem in a real-time industrial application. Radial basis function networks are employed to classify the images of finished wooden parts in terms of their color and species. Other classification methods are also examined in this work. The minimum distance classifiers are presented since they were employed in previous research. We give brief definitions of color space, color texture, color quantization, and color classification methods. We also give an intensive review of radial basis functions, regularization theory, regularized radial basis function networks, and generalized radial basis function networks. The centers of the radial basis functions are calculated by the k-means clustering algorithm. We examine the k-means algorithm in terms of starting criteria, the movement rule, and the updating rule. The dilations of the radial basis functions are calculated using a statistical method. Learning classifier systems are also employed to solve the same classification problem. Learning classifier systems learn the training samples completely but are not successful at classifying the test samples. Finally, we present simulation results for both the radial basis function network method and the learning classifier system method. A comparison is given between the results of each method. The results show that the best classification method examined in this work is the radial basis function network method. / Master of Science
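The center-selection and feature-mapping steps described above can be sketched as follows. The data points, initial centers, iteration count, and Gaussian width are toy choices, and the thesis's statistical dilation estimate is replaced here by a fixed width:

```python
import math

def kmeans(points, centers, iters=10):
    """A few Lloyd iterations: assign points to nearest center, then move
    each center to the mean of its assigned points."""
    for _ in range(iters):
        assign = [min(range(len(centers)),
                      key=lambda j: math.dist(points[i], centers[j]))
                  for i in range(len(points))]
        for j in range(len(centers)):
            mine = [points[i] for i in range(len(points)) if assign[i] == j]
            if mine:
                centers[j] = [sum(c) / len(mine) for c in zip(*mine)]
    return centers

def rbf_features(x, centers, width=1.0):
    """Gaussian RBF activations of input x for the given centers."""
    return [math.exp(-math.dist(x, c) ** 2 / (2.0 * width ** 2))
            for c in centers]

pts = [[0.0, 0.0], [0.2, 0.0], [1.0, 1.0], [1.2, 1.0]]
centers = kmeans(pts, centers=[[0.0, 0.1], [1.0, 0.9]])
phi = rbf_features([0.1, 0.0], centers)   # hidden-layer activations
```

A linear output layer trained on such activations completes the radial basis function classifier; the sketch stops at the hidden layer because that is where the k-means centers and dilations discussed in the abstract enter.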
