431 |
Mathematical analysis of a dynamical system for sparse recovery. Balavoine, Aurele, 22 May 2014 (has links)
This thesis presents the mathematical analysis of a continuous-time system for sparse signal recovery. Sparse recovery arises in Compressed Sensing (CS), where signals of large dimension must be recovered from a small number of linear measurements, and can be accomplished by solving a complex optimization program. While many solvers have been proposed and analyzed for solving such programs digitally, their high complexity currently prevents their use in real-time applications. In contrast, a continuous-time neural network implemented in analog VLSI could lead to significant gains in both time and power consumption. The contributions of this thesis are threefold. First, convergence results are presented for neural networks that solve a large class of nonsmooth optimization programs. These results extend previous analyses by allowing the interconnection matrix to be singular and the activation function to have many constant regions and to grow unbounded. The exponential convergence rate of the networks is demonstrated and an analytic expression for the convergence speed is given. Second, these results are specialized to the L1-minimization problem, the best-known approach to the sparse recovery problem. The analysis relies on standard techniques in CS and proves that the network takes an efficient path toward the solution for parameters that match results obtained for digital solvers. Third, the convergence rate and accuracy of both the continuous-time system and its discrete-time equivalent are derived for the case where the underlying sparse signal is time-varying and the measurements are streaming. Such a study is of great interest for practical applications that need to operate in real time, when data are streaming at high rates or computational resources are limited. In conclusion, while existing analyses have concentrated on discrete-time algorithms for the recovery of static signals, this thesis provides convergence rate and accuracy results for the recovery of static signals using a continuous-time solver, and for the recovery of time-varying signals with both a discrete-time and a continuous-time solver.
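The kind of continuous-time dynamics analyzed here can be made concrete with a brief sketch. The snippet below simulates a generic locally-competitive-style sparse-recovery network with a forward-Euler discretization; the soft-threshold activation (which has a constant region and unbounded growth), the parameter values, and the synthetic test problem are illustrative assumptions, not the thesis's exact system or constants.

```python
# A hedged sketch of a continuous-time sparse-recovery network, simulated with
# forward Euler; parameters and the test problem are illustrative only.
import numpy as np

def soft_threshold(u, lam):
    """Activation with a constant (zero) region and unbounded growth."""
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

def run_network(A, y, lam=0.05, tau=1.0, dt=0.01, steps=5000):
    m, n = A.shape
    u = np.zeros(n)                          # internal node states
    b = A.T @ y                              # constant input drive
    G = A.T @ A - np.eye(n)                  # lateral inhibition between nodes
    for _ in range(steps):
        a = soft_threshold(u, lam)           # network output (sparse code)
        u += (dt / tau) * (b - u - G @ a)    # Euler step of the node ODE
    return soft_threshold(u, lam)

# Demo: recover a 4-sparse signal from 32 random measurements
rng = np.random.default_rng(1)
n, m = 64, 32
A = rng.standard_normal((m, n)) / np.sqrt(m)
x = np.zeros(n)
x[rng.choice(n, 4, replace=False)] = 1.0
x_hat = run_network(A, A @ x)
print(np.linalg.norm(x_hat - x))
```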
|
432 |
Graph-based variational optimization and applications in computer vision. Couprie, Camille, 10 October 2011 (has links) (PDF)
Many computer vision applications such as image filtering, segmentation and stereovision can be formulated as optimization problems. Recently, discrete, convex, globally optimal methods have received considerable attention. Many graph-based methods suffer from metrication artefacts: segmented contours are blocky in areas where contour information is lacking. In the first part of this work, we develop a discrete yet isotropic energy minimization formulation for the continuous maximum flow problem that prevents metrication errors. This new convex formulation leads to a provably globally optimal solution. The employed interior point method can optimize the problem faster than existing continuous methods. The energy formulation is then adapted and extended to multi-label problems, and shows improvements over existing methods. Fast parallel proximal optimization tools have been tested and adapted for the optimization of this problem. In the second part of this work, we introduce a framework that generalizes several state-of-the-art graph-based segmentation algorithms, namely graph cuts, random walker, shortest paths, and watershed. This generalization allowed us to exhibit a new case, for which we developed a globally optimal optimization method named "power watershed". The proposed power watershed algorithm computes a unique global solution to multi-label problems and is very fast. We further generalize and extend the framework to applications beyond image segmentation, for example image filtering optimizing an L0-norm energy, stereovision, and fast, smooth surface reconstruction from a noisy cloud of 3D points.
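As a concrete instance of one of the algorithms unified by this framework, the sketch below implements a minimal random walker segmentation on a 4-connected grid; the Gaussian edge weighting, the constant beta, and the dense linear solve (suitable only for small images) are illustrative assumptions rather than the formulation used in this work.

```python
# A hedged sketch of random-walker segmentation: edge weights from intensity
# contrast, then a combinatorial Dirichlet problem solved for the unseeded nodes.
import numpy as np

def random_walker(img, fg_seeds, bg_seeds, beta=100.0):
    """img: 2-D float array; seeds: lists of (row, col). Returns a foreground-probability map."""
    h, w = img.shape
    n = h * w
    idx = lambda r, c: r * w + c
    W = np.zeros((n, n))
    for r in range(h):                       # weight edges by intensity contrast
        for c in range(w):
            for dr, dc in ((0, 1), (1, 0)):
                rr, cc = r + dr, c + dc
                if rr < h and cc < w:
                    wgt = np.exp(-beta * (img[r, c] - img[rr, cc]) ** 2)
                    W[idx(r, c), idx(rr, cc)] = W[idx(rr, cc), idx(r, c)] = wgt
    L = np.diag(W.sum(axis=1)) - W           # combinatorial graph Laplacian
    seeds = [idx(r, c) for r, c in fg_seeds] + [idx(r, c) for r, c in bg_seeds]
    vals = np.array([1.0] * len(fg_seeds) + [0.0] * len(bg_seeds))
    free = [i for i in range(n) if i not in seeds]
    x = np.zeros(n)
    x[seeds] = vals
    # Dirichlet problem: L_UU x_U = -L_US x_S
    x[free] = np.linalg.solve(L[np.ix_(free, free)], -L[np.ix_(free, seeds)] @ vals)
    return x.reshape(h, w)
```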
|
433 |
A New Look Into Image Classification: Bootstrap Approach. Ochilov, Shuhratchon, January 2012 (has links)
Scene classification is performed on countless remote sensing images in support of operational activities. Automating this process is preferable, since manual pixel-level classification is not feasible for large scenes. However, developing such an algorithmic solution is a challenging task due to both scene complexity and sensor limitations. The objective is to develop efficient and accurate unsupervised methods for classification (i.e., assigning each pixel to an appropriate generic class) and for labeling (i.e., properly assigning true labels to each class). Unlike traditional approaches, the proposed bootstrap approach achieves classification and labeling without training data. Here, the full image is partitioned into subimages and the true classes found in each subimage are provided by the user. After these steps, the rest of the process is automatic. Each subimage is individually classified into regions, and then, using the joint information from all subimages and regions, the optimal configuration of labels is found by optimizing an objective function based on a Markov random field (MRF) model. The bootstrap approach has been successfully demonstrated with SAR sea-ice and lake-ice images, which represent challenging scenes used operationally for ship navigation, climate study, and ice fraction estimation. Accuracy assessment is based on evaluation conducted by third-party experts. The bootstrap method is also demonstrated using synthetic and natural images. The impact of this technique is a repeatable and accurate methodology that generates classified maps faster than the standard methodology.
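The per-subimage classification stage of such a workflow can be sketched as follows. This is a generic illustration that uses k-means clustering on pixel intensities, with the tile size and default class count as assumptions; the MRF-based matching of labels across subimages described above is not shown.

```python
# A hedged sketch of the first, per-subimage stage: partition the image into
# tiles and cluster each tile's pixels without training data.
import numpy as np
from sklearn.cluster import KMeans

def classify_subimages(image, tile=256, classes_per_tile=None):
    """Split a 2-D image into tiles and cluster each tile's pixels separately."""
    h, w = image.shape
    regions = {}
    for r0 in range(0, h, tile):
        for c0 in range(0, w, tile):
            sub = image[r0:r0 + tile, c0:c0 + tile]
            k = (classes_per_tile or {}).get((r0, c0), 3)   # user-supplied class count
            km = KMeans(n_clusters=k, n_init=10).fit(sub.reshape(-1, 1).astype(float))
            regions[(r0, c0)] = km.labels_.reshape(sub.shape)
    return regions
```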
|
434 |
Primal dual pursuit: a homotopy based algorithm for the Dantzig selector. Asif, Muhammad Salman, 10 July 2008 (has links)
Consider the following system model
y = Ax + e,
where x is an n-dimensional sparse signal, y is the measurement vector of much lower dimension m, A is the measurement matrix, and e is the measurement error. The Dantzig selector estimates x by solving the following optimization problem:
minimize ||x||₁ subject to ||A'(Ax - y)||∞ ≤ ε. (DS)
This is a convex program and can be recast as a linear program, so it can be solved with any modern optimization method, e.g., interior point methods. We propose a fast and efficient scheme for solving the Dantzig selector (DS), which we call "primal-dual pursuit". This algorithm can be thought of as a "primal-dual homotopy" approach to solving (DS). It computes the solution to (DS) for a range of successively relaxed problems, starting with a large artificial ε and moving towards the desired value. Our algorithm iteratively updates the primal and dual supports as ε is reduced to the desired value, which yields the final solution. The homotopy path that the solution of (DS) traces as ε varies is piecewise linear. At certain critical values of ε along this path, either new elements enter the support of the signal or existing elements leave it. We derive the optimality and feasibility conditions that are used to update the solution at these critical points. We also present a detailed analysis of primal-dual pursuit for sparse signals in the noiseless case. We show that if the signal is S-sparse, then we can find all of its S elements in exactly S steps using about "S² log n" random measurements, with very high probability.
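To make the linear-programming recast concrete, the sketch below solves (DS) with a generic off-the-shelf LP solver. It is an illustrative baseline rather than the primal-dual pursuit algorithm proposed here, and the function name, parameter values and synthetic test instance are assumptions for the example.

```python
# A hedged sketch: the Dantzig selector recast as a linear program over (x, u)
# with |x_i| <= u_i, solved with scipy's LP solver.
import numpy as np
from scipy.optimize import linprog

def dantzig_selector_lp(A, y, eps):
    """Solve  min ||x||_1  s.t.  ||A.T @ (A @ x - y)||_inf <= eps  via an LP."""
    m, n = A.shape
    M = A.T @ A
    Aty = A.T @ y
    I = np.eye(n)
    Z = np.zeros((n, n))
    # Inequalities:  x - u <= 0,  -x - u <= 0,  M x <= A'y + eps,  -M x <= eps - A'y
    A_ub = np.block([[ I, -I],
                     [-I, -I],
                     [ M,  Z],
                     [-M,  Z]])
    b_ub = np.concatenate([np.zeros(n), np.zeros(n), Aty + eps, eps - Aty])
    c = np.concatenate([np.zeros(n), np.ones(n)])      # minimize sum(u)
    bounds = [(None, None)] * n + [(0, None)] * n       # x free, u >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:n]

# Small demo on a synthetic sparse-recovery instance
rng = np.random.default_rng(0)
n, m, S = 64, 32, 4
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, S, replace=False)] = rng.standard_normal(S)
y = A @ x_true + 0.01 * rng.standard_normal(m)
x_hat = dantzig_selector_lp(A, y, eps=0.05)
print(np.linalg.norm(x_hat - x_true))
```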
|
435 |
Single and multi-frame video quality enhancement. Arici, Tarik, 04 May 2009 (has links)
With the advance of LCD technology, video quality is becoming increasingly important. In this thesis, we develop hardware-friendly, low-complexity enhancement algorithms. Video quality enhancement methods can be classified into two main categories. Single-frame methods form the first category and generally have low computational complexity. Multi-frame methods combine information from more than one frame and require the motion information of objects in the scene to do so.
We first concentrate on the contrast-enhancement problem by using both global (frame-wise) and local information derived from the image. We use the image histogram and present a regularization-based histogram modification method to avoid problems that are often created by histogram equalization.
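A minimal sketch of regularized histogram modification is given below. It simply blends the image histogram toward a uniform histogram before applying the equalization mapping, a simplified stand-in (with an assumed blending weight) for the regularization-based method presented here.

```python
# A hedged sketch: blend the histogram toward uniform before equalization,
# which tempers the over-enhancement plain histogram equalization can produce.
import numpy as np

def regularized_hist_eq(img, lam=0.5):
    """img: 2-D uint8 array; lam in [0, 1] controls the regularization strength."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    hist /= hist.sum()
    uniform = np.full(256, 1.0 / 256)
    h_mod = (1.0 - lam) * hist + lam * uniform     # regularized histogram
    cdf = np.cumsum(h_mod)
    lut = np.round(255.0 * cdf).astype(np.uint8)   # intensity mapping
    return lut[img]

# Usage: enhanced = regularized_hist_eq(frame, lam=0.7)
```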
Next, we design a compression-artifact reduction algorithm that reduces ringing artifacts, which are especially disturbing on large displays. Furthermore, to remove blurriness in the original video we present a non-iterative, diffusion-based sharpening algorithm that enhances edges in a ringing-aware fashion. The diffusion-based technique operates on individual gradient approximations within a neighborhood, which gives more freedom than modulating the output of the high-pass filter used to sharpen the edges.
Motion estimation enables applications such as motion-compensated noise reduction, frame-rate conversion, de-interlacing, compression, and super-resolution.
Motion estimation is an ill-posed problem and therefore requires prior knowledge about the motion of objects. Objects have inertia and are usually larger than a pixel or a block of pixels, which creates spatio-temporal correlation.
We design a method that uses temporal redundancy to improve the motion-vector search by choosing bias vectors from the previous frame and adaptively penalizing deviations from the bias vectors. This increases the robustness of the motion-vector search. Spatial correlation is more reliable, because temporal correlation is difficult to exploit when objects move fast, accelerate over time, or are small. Spatial smoothness, however, does not hold across motion boundaries. We investigate using energy minimization for motion estimation and incorporate the spatial smoothness prior into the energy. By formulating the energy minimization iterations for each motion vector as the primal problem, we show that the dual problem is motion segmentation for that specific motion vector.
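The bias-vector idea described above can be illustrated with a short sketch of block matching whose cost adds a penalty on deviation from a bias vector taken from the previous frame's motion field; the SAD matching cost, search radius and penalty weight are illustrative assumptions, not the exact cost function used in this work.

```python
# A hedged sketch of bias-penalized block matching.
import numpy as np

def block_match(prev, curr, top, left, bias, block=16, radius=8, lam=2.0):
    """Return the motion vector (dy, dx) for the block of `curr` at (top, left)."""
    ref = curr[top:top + block, left:left + block].astype(float)
    best_cost, best_mv = np.inf, (0, 0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + block > prev.shape[0] or x + block > prev.shape[1]:
                continue                                  # candidate falls outside the frame
            cand = prev[y:y + block, x:x + block].astype(float)
            sad = np.abs(ref - cand).sum()                # matching cost
            penalty = lam * (abs(dy - bias[0]) + abs(dx - bias[1]))
            if sad + penalty < best_cost:
                best_cost, best_mv = sad + penalty, (dy, dx)
    return best_mv

# `bias` would typically be the motion vector of the co-located block in the
# previous frame's motion field.
```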
|
436 |
Έλεγχος κινητήρα εναλλασσόμενου ρεύματος για εξοικονόμηση ενέργειας : εφαρμογή στα ηλεκτροκίνητα οχήματα / AC motor control for energy saving: application to electric vehicles. Λαμπρόπουλος, Λάμπρος, 29 July 2011 (has links)
This diploma thesis investigates a method for saving energy in an electric vehicle by controlling the motor so as to minimize the losses of the electric drive system. In this case, the drive system consists of a three-phase induction motor driven by a voltage-source inverter, which is fed from batteries. The work was carried out in the Laboratory of Electromechanical Energy Conversion of the Department of Electrical Engineering of the School of Engineering of the University of Patras. The aim of this diploma thesis is to develop the energy-saving method for an electric vehicle through minimization of the system losses. The loss minimization is implemented by varying the air-gap magnetic flux of the induction motor and the gear ratio of the gearbox. The ultimate goal is to extend the loss-minimization method for the electric drive system developed in the doctoral thesis of E. Rikos, "Methods of Energy Saving in Electric Vehicles", University of Patras, Department of Electrical and Computer Engineering, Patras 2005, to the case of a drive system with a three-phase induction motor and inverter, and to confirm its effectiveness at the theoretical, simulation and experimental levels.
First, the relations describing the power losses produced during operation of the electric vehicle are examined, together with their dependence on the air-gap magnetic flux and the gear ratio of the gearbox.
Next, the possibility of minimizing the system losses by varying the air-gap magnetic flux and the gear ratio is demonstrated graphically for given steady-state operating conditions (speed and force at the vehicle's wheels).
The next step is the confirmation of the theoretical study through simulation, carried out in the Matlab/Simulink environment.
Finally, an experimental setup is built in the laboratory and used to carry out measurements that confirm and evaluate the theoretical study. / This diploma thesis presents the analysis of an energy-saving method for an electrically powered vehicle through control of the electric motor for loss minimization of the electromotion system. In this case, the electromotion system consists of an induction motor driven by a voltage inverter which is fed by batteries. This project was carried out in the Laboratory of Electromechanical Energy Conversion of the School of Engineering of the University of Patras.
The objective of this project is the development of a method for energy saving in an electrically powered vehicle through minimization of the system losses. The loss minimization is carried out by controlling the motor air-gap magnetic flux and the gear ratio. The aim of the project is the extension of the electromotion-system loss minimization method that was developed in the doctoral thesis of Evangelos Rikos, "Methods of energy saving in electric vehicles", University of Patras, Department of Electrical and Computer Engineering, Patras 2005, to the case of a three-phase induction motor and inverter electric drive, as well as the confirmation of its effectiveness at the theoretical, simulation and experimental levels.
At first, an analysis of the equations that describe the losses of the electric vehicle is performed, as well as of their dependence on the air-gap flux and the gear ratio.
Next, the ability to minimize the losses of the electromotion system by controlling the air-gap flux and the gear ratio is demonstrated graphically for given steady states (values of force and velocity at the vehicle's wheels).
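The flux-based loss minimization can be illustrated with a hedged sketch that uses a simplified, textbook-style loss model rather than the detailed model developed in this work: copper losses are assumed to scale as (T/phi)^2 and iron losses as phi^2, so for each steady-state torque there is a flux level that minimizes the total losses. The coefficients and the saturation limit below are illustrative.

```python
# A hedged sketch of loss-optimal flux selection under a simplified loss model.
def optimal_flux(torque, k_cu=1.0, k_fe=0.5, phi_max=1.0):
    """Flux minimizing P(phi) = k_cu * (torque / phi)**2 + k_fe * phi**2."""
    phi_opt = (k_cu * torque**2 / k_fe) ** 0.25   # from dP/dphi = 0
    return min(phi_opt, phi_max)                  # respect the saturation limit

# Loss-optimal flux for a few steady-state torque levels (per-unit values)
for T in (0.2, 0.5, 1.0):
    print(T, optimal_flux(T))
```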
The next step is the confirmation of the theoretical analysis by simulation, which is carried out in the Matlab/Simulink environment.
Finally, a three-phase inverter is constructed, with the use of which the experiment is carried out in order to confirm the efficiency and evaluate the accuracy of the theoretical analysis.
|
437 |
An investigation into the applicability of lean thinking in an operational maintenance environment. Tendayi, Tinashe George, 12 1900 (has links)
Thesis (MScEng)-- Stellenbosch University, 2013. / ENGLISH ABSTRACT: It has been postulated that lean thinking principles can be successfully applied to any industry. Following on that postulation, there have been great advances in the area of lean thinking outside the "traditional" domain of manufacturing. One such advancement has been in the area of maintenance operations, where lean thinking has been applied through the concept of lean maintenance. However, the work done so far has been largely limited to the manufacturing environment, where lean maintenance is practised as a prerequisite for lean manufacturing. Little evidence exists of the use of frameworks or models that can test, let alone apply, lean thinking in operational maintenance environments outside of the manufacturing context.
The main objective of this research was to come up with a framework, based on lean thinking tools and relevant performance measures, that would prove the applicability, or otherwise, of lean thinking in an operational maintenance environment outside the traditional domain of manufacturing. A case study of the rolling stock section of the Salt River depot of PRASA, Metrorail, which is a typical non-traditional domain for lean thinking, was used to build and verify the framework. The Analytic Hierarchy Process (AHP), together with the Quality Function Deployment (QFD) process, is used to build and quantify the judgements made in developing elements of the framework. The Value Stream Management process is used to predict the possible outcomes of using the proposed framework in the case study. The study was based on the hypothesis that lean thinking can also be applicable to non-manufacturing-oriented maintenance organisations. The ensuing framework is used to make the argument for the use of the lean thinking approach in non-manufacturing-oriented maintenance environments and hence expand the body of knowledge in this subject area. It also provides a roadmap for PRASA, Metrorail and other similar maintenance organisations in the rail industry to streamline and improve current operations through value-addition and waste-elimination efforts. / AFRIKAANSE OPSOMMING: It is postulated that the principles of lean thinking can be applied successfully to any industry. As a result of this hypothesis, there has been great progress in the field of lean thinking outside the "traditional" domain of manufacturing. One such advance is in the area of maintenance operations, where lean thinking is known as the concept of lean maintenance. The problem is that progress to date has been limited to the manufacturing environment, where lean maintenance is regarded as a prerequisite for lean manufacturing. Little use is made of frameworks or models to test lean thinking, or to apply it, in maintenance environments outside the manufacturing context.
The main objective of this research was to produce a framework, grounded in lean thinking and relevant performance measures, that would prove the applicability, or otherwise, of lean thinking to maintenance operations outside the traditional domain of manufacturing. A case study of the Salt River depot of PRASA Metrorail, which is typical of a non-traditional domain for lean thinking, was used to build and verify the framework. The Analytic Hierarchy Process (AHP), together with the Quality Function Deployment (QFD) process, was used in choosing the elements for the framework. The Value Stream Management process was used to predict the possible outcomes of using the proposed framework. The study was based on the hypothesis that lean thinking can also be applied to non-manufacturing-oriented maintenance organisations. The framework is used to show that the lean thinking approach can be used in non-manufacturing-oriented maintenance environments and thus to expand knowledge in this area. The framework can also serve as a roadmap for Metrorail and other similar maintenance organisations in the rail industry to improve their current operations through value addition and to prevent waste.
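For readers unfamiliar with the AHP step used above, the sketch below shows how priority weights and a consistency ratio are typically derived from a pairwise comparison matrix; the comparison values in the example are invented for illustration and are not taken from this study.

```python
# A hedged sketch of the AHP priority calculation: weights are the principal
# eigenvector of the pairwise comparison matrix; the consistency ratio checks
# whether the judgements are coherent.
import numpy as np

def ahp_weights(C):
    """C: reciprocal pairwise comparison matrix with C[j, i] == 1 / C[i, j]."""
    eigvals, eigvecs = np.linalg.eig(C)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                                     # normalized priority vector
    n = C.shape[0]
    ci = (eigvals[k].real - n) / (n - 1)             # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}.get(n, 1.24)    # Saaty's random index
    return w, ci / ri                                # weights, consistency ratio

# Example: three criteria compared on Saaty's 1-9 scale (values are invented)
C = np.array([[1.0, 3.0, 5.0],
              [1 / 3, 1.0, 2.0],
              [1 / 5, 1 / 2, 1.0]])
weights, cr = ahp_weights(C)
print(weights, cr)   # a consistency ratio below about 0.1 is usually acceptable
```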
|
438 |
Verification of patient position for proton therapy using portal X-Rays and digitally reconstructed radiographs. Van der Bijl, Leendert, 12 1900 (has links)
Thesis (MScEng (Applied Mathematics))--University of Stellenbosch, 2006. / This thesis investigates the various components required for the development of a patient position verification system to replace the existing system used by the proton facilities of iThemba LABS. The existing system is based on the visual comparison of a portal radiograph (PR) of the patient in the current treatment position and a digitally reconstructed radiograph (DRR) of the patient in the correct treatment position. This system is not only of limited accuracy, but also labour-intensive and time-consuming. Inaccuracies in patient position are detrimental to the effectiveness of proton therapy, and elongated treatment times add to patient trauma. A new system is needed that is accurate, fast, robust and automatic.
Automatic verification is achieved by using image registration techniques to compare the PR and DRRs. The registration process finds a rigid-body transformation which estimates the difference between the current position and the correct position by minimizing the measure which compares the two images. The image registration process therefore consists of four main components: the DRR, the PR, the measure for comparing the two images, and the minimization method.
The ray-tracing algorithm by Jacobs was implemented to generate the DRRs, with the option to use X-ray attenuation calibration curves (XACC) and beam hardening correction curves (BHCC) to generate DRRs that better approximate the PRs acquired with iThemba LABS's digital portal radiographic system (DPRS).
Investigations were performed mostly on simulated PRs generated from DRRs, but also on real PRs acquired with iThemba LABS's DPRS. The use of the Correlation Coefficient (CC) and Mutual Information (MI) similarity measures to compare the two images was investigated.
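The mutual information measure referred to above can be sketched as follows; this is an illustrative implementation with assumed binning, not the estimator or parameter choices used in the thesis.

```python
# A hedged sketch of the mutual-information similarity measure between a
# portal radiograph and a DRR, based on a joint intensity histogram.
import numpy as np

def mutual_information(img_a, img_b, bins=64):
    """Estimate mutual information between two equally sized grayscale images."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()               # joint intensity distribution
    px = pxy.sum(axis=1, keepdims=True)     # marginal of img_a
    py = pxy.sum(axis=0, keepdims=True)     # marginal of img_b
    nz = pxy > 0                            # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# During registration, a rigid-body transform is applied to the DRR and its
# parameters adjusted so that this measure (computed against the PR) is maximized.
```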
Similarity curves were constructed using simulated PRs to investigate how the various components of the registration process influence the performance. These included the use of the appropriate XACC and BHCC, the sizes of the DRRs and the PRs, the slice thickness of the CT data, the amount of noise contained in the PR, and the focal spot size of the DPRS's X-ray tube.
It was found that the Mutual Information similarity measure used to compare 1024² pixel PRs with 256² pixel DRRs interpolated to 1024² pixels performed the best. It was also found that the CT data with the smallest slice thickness available should be used. If only CT data with thick slices is available, the CT data should be interpolated to have thinner slices.
Five minimization algorithms were implemented and investigated. It was found that the unit vector direction set minimization method can be used to register the simulated PRs robustly and very accurately in a respectable amount of time.
Investigations with a limited number of real PRs showed that the behaviour of the registration process does not differ significantly from that for simulated PRs.
|
439 |
Gestão de Produção mais Limpa em pequenas empresas: uma proposta metodológica desenvolvida por meio de pesquisa-ação / Cleaner Production management in small companies: a methodological proposal developed by means of action research. Nunes, José Roberto Rolim, 28 March 2017 (has links)
The main objective of this research is to propose a Cleaner Production (CP) management methodology for small companies. CP can be understood as the continual application of a preventive environmental strategy, integrated into services, products and processes, in order to reduce risks to people and the environment as well as to increase process efficiency. Many companies, in particular large ones, have adopted CP practices to reduce environmental impacts and have obtained, by extension, economic benefits from reduced waste generation and lower consumption of energy, water and materials. Despite these benefits, a low level of adherence to CP-based environmental management is observed among small companies, which is attributed to several barriers, such as scarce financial resources, lack of involvement of people and lack of leadership. This research analyzed how to structure CP actions in a small metallurgical company using action research over a period of five years. Data were collected and analyzed by means of participant observation. As its main result, this study proposes a CP management methodology for small companies composed of a cyclical phase with five steps and a meta-phase. The proposed methodology proved to be a feasible way to conduct environmental management in the research unit. It was observed that actions to monitor and promote CP improvements, the prioritization of preventive opportunities and the involvement of people facilitate the evolution of continuous improvement in the proposed methodology. / The main objective of this research is to propose a Cleaner Production (CP) management methodology for small companies. CP can be understood as the continuous application of a preventive environmental strategy, integrated into services, products and production processes, in order to reduce risks to people and the environment and also to increase process efficiency. Many companies, particularly large ones, have adopted CP practices to reduce environmental impacts and obtain, by extension, economic benefits through the reduction of waste generation and the use of smaller amounts of energy, water and materials. Despite its benefits, low adherence to CP-based environmental management is observed in the group of small companies, which is attributed to a set of barriers, such as scarce financial resources, lack of involvement of people and lack of leadership. This research analyzed how to structure CP actions in a small metallurgical company using the action-research method over a period of five years. Data were collected and analyzed through participant observation. The main result is the proposal of a methodology for CP management in small companies composed of five steps in a cyclical phase and a meta-phase. The proposed methodology proved viable for conducting environmental management in the research unit. It was found that actions to monitor and promote CP improvements, the prioritization of preventive opportunities and the involvement of people facilitate the evolution of the continuous improvement process in the proposed methodology.
|
440 |
Labeling Clinical Reports with Active Learning and Topic Modeling / Uppmärkning av kliniska rapporter med active learning och topic modeller. Lindblad, Simon, January 2018 (has links)
Supervised machine learning models require a labeled data set of high quality in order to perform well. Text data is often available in abundance, but it is usually not labeled. Labeling text data is a time-consuming process, especially when multiple labels can be assigned to a single document. The purpose of this thesis was to make the labeling process of clinical reports as effective and effortless as possible by evaluating different multi-label active learning strategies. The goal of the strategies was to reduce the number of labeled documents a model needs and to increase the quality of those documents. With the strategies, an accuracy of 89% was achieved with 2500 reports, compared to 85% with random sampling. In addition, 85% accuracy could be reached after labeling 975 reports, compared to 1700 reports with random sampling.
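One of the simplest multi-label active learning strategies, uncertainty sampling, can be sketched as follows. This is a generic illustration rather than necessarily one of the strategies evaluated in the thesis, and the classifier choice, batch size and helper names are assumptions.

```python
# A hedged sketch of multi-label uncertainty sampling: at each round, the
# documents whose predicted label probabilities lie closest to 0.5, averaged
# over labels, are sent to the annotator.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier

def select_queries(model, X_pool, batch_size=10):
    """Return indices of the unlabeled documents the model is least sure about."""
    proba = model.predict_proba(X_pool)        # shape: (n_docs, n_labels)
    margin = np.abs(proba - 0.5).mean(axis=1)  # small margin = high uncertainty
    return np.argsort(margin)[:batch_size]

# One active-learning round (X_lab, Y_lab, X_pool are assumed to already exist):
#   model = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X_lab, Y_lab)
#   to_annotate = select_queries(model, X_pool)
```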
|