About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
721

An adaptive feature-based tracking system

Pretorius, Eugene
Thesis (MSc (Mathematical Sciences. Applied Mathematics))--University of Stellenbosch, 2008. / In this paper, tracking tools are developed based on object features to robustly track the object using particle filtering. Automatic on-line initialisation techniques use motion detection and dynamic background modelling to extract features of moving objects. Automatically adapting the feature models during tracking is implemented and tested.
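As a rough sketch of the bootstrap particle filtering such trackers rest on (not the thesis code; the random-walk motion model, noise levels and all parameters below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def pf_step(particles, weights, measurement, motion_std=0.5, meas_std=1.0):
    """One predict/update/resample cycle of a bootstrap particle filter."""
    # Predict: propagate particles through a random-walk motion model.
    particles = particles + rng.normal(0.0, motion_std, particles.size)
    # Update: reweight by the Gaussian likelihood of the feature measurement.
    weights = weights * np.exp(-0.5 * ((measurement - particles) / meas_std) ** 2)
    weights = weights / weights.sum()
    # Resample when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < particles.size / 2:
        idx = rng.choice(particles.size, size=particles.size, p=weights)
        particles = particles[idx]
        weights = np.full(particles.size, 1.0 / particles.size)
    return particles, weights

particles = rng.normal(0.0, 1.0, 500)         # initial position hypotheses
weights = np.full(500, 1.0 / 500)
true_pos = 0.0
for _ in range(20):
    true_pos += 0.4                           # target drifts right
    z = true_pos + rng.normal(0.0, 1.0)       # noisy feature measurement
    particles, weights = pf_step(particles, weights, z)
print(f"estimate {np.sum(particles * weights):.2f}, truth {true_pos:.2f}")
```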
722

Real-time stereo reconstruction using hierarchical dynamic programming and LULU filtering

Singels, Francois
Thesis (MSc (Mathematics))--University of Stellenbosch, 2010. / In this thesis we consider the essential topics relating to stereo vision and the correspondence problem in general. The aim is to reconstruct a dense 3D scene from images captured by two spatially related cameras. Our main focus, however, is on speed and real-time implementation on a standard desktop PC. We wish to use the CPU to solve the correspondence problem and to reserve the GPU for model rendering. We discuss three fundamental types of algorithms and evaluate their suitability to this end. We eventually choose to implement a hierarchical version of the dynamic programming algorithm, because of the good balance between accuracy and speed. As we build our system from the ground up we gradually introduce the necessary concepts and established geometric principles, common to most stereo-vision systems, and discuss them as they become relevant. It becomes clear that the greatest weakness of the hierarchical dynamic programming algorithm is scanline inconsistency. We find that the one-dimensional LULU filter is computationally inexpensive and effective at removing outliers when applied across the scanlines. We take advantage of the hierarchical structure of our algorithm and sub-pixel refinement to produce results at video rates (roughly 20 frames per second). A 3D model is also constructed at video rates in an on-line system, with only a small delay between obtaining the input images and rendering the model. Not only is the quality of our results highly competitive with that of other state-of-the-art algorithms, but the achievable speed is also considerably faster.
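The LULU operators referred to above are compositions of running minima and maxima. A minimal one-dimensional sketch (window width, test scanline and noise are illustrative, not taken from the thesis):

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def L(x, n):
    """Rohwer's lower operator L_n: max, over the n+1 windows of length n+1
    covering each index, of the window minimum. Removes upward impulses."""
    pad = np.pad(x, n, mode='edge')
    mins = sliding_window_view(pad, n + 1).min(axis=1)
    return sliding_window_view(mins, n + 1).max(axis=1)

def U(x, n):
    """Rohwer's upper operator U_n, the dual of L_n. Removes downward impulses."""
    pad = np.pad(x, n, mode='edge')
    maxs = sliding_window_view(pad, n + 1).max(axis=1)
    return sliding_window_view(maxs, n + 1).min(axis=1)

rng = np.random.default_rng(1)
scanline = np.repeat([10.0, 14.0, 12.0], 20)       # piecewise-constant disparities
noisy = scanline.copy()
spikes = rng.choice(scanline.size, 6, replace=False)
noisy[spikes] += rng.choice([-8.0, 8.0], 6)        # impulse outliers
smoothed = U(L(noisy, 1), 1)                       # LULU: strips width-1 spikes, keeps edges
print(f"max error before {np.abs(noisy - scanline).max():.1f}, "
      f"after {np.abs(smoothed - scanline).max():.1f}")
```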
723

Effectiveness of user-curated filtering as coping strategy for information overload on microblogging services

De la Rouviere, Simon
Thesis (MA)--Stellenbosch University, 2014. / We are living in an increasingly global and connected society, with information creation increasing at exponential rates. The research sets out to help mitigate the effects of information overload in order to increase the novelty of our interactions in the digital age. Online social networks and microblogging services allow people across the world to take part in a public conversation. These tools have inherent constraints on how much communication can feasibly occur: a user who becomes too connected receives more information than can reasonably be processed. On Twitter (a microblogging service), lists are a tool for users to create separate feeds. The research determines whether lists are an effective tool for coping with information overload (an abundance of updates). Using models of sustainable online discourse and information overload on computer-mediated communication (CMC) tools, the research found that lists are an effective tool for coping with information overload on microblogging services. Quantitatively, individuals who make use of lists follow more users, and when they start using lists they increase their information resources (following other users) at a greater rate than those who do not use lists. Qualitatively, the research also provides insight into the reasons why people use lists. The research adds new academic relevance to 'information overload' and 'online sustainability' models previously not used in the context of feed-based online CMC tools, and deepens the understanding and importance of user-curated filtering as a way to reap the benefits of the increasing abundance of information in the digital age.
724

Matrix factorization in recommender systems : How sensitive are matrix factorization models to sparsity?

Strömqvist, Zakris, January 2018
One of the most popular methods in recommender systems is matrix factorization (MF). In this paper, the sensitivity of these models to sparsity is investigated in a simulation study. Using the MovieLens dataset as a base, several dense matrices are created. These dense matrices are then made sparse in two different ways to simulate different kinds of data. The accuracy of MF is then measured on each of the simulated sparse matrices. This shows that matrix factorization models are sensitive to the degree of information available: for high levels of sparsity MF performs badly, but as the information level increases the accuracy of the models improves, for both samples.
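A toy version of this simulation design, assuming a plain SGD factorizer and a synthetic low-rank matrix in place of the author's MovieLens setup (all hyperparameters illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def factorize(R, mask, k=5, epochs=30, lr=0.01, reg=0.1):
    """Fit R ~ P @ Q.T by stochastic gradient descent on observed entries only."""
    P = rng.normal(0, 0.1, (R.shape[0], k))
    Q = rng.normal(0, 0.1, (R.shape[1], k))
    rows, cols = np.nonzero(mask)
    for _ in range(epochs):
        for u, i in zip(rows, cols):
            err = R[u, i] - P[u] @ Q[i]
            pu = P[u].copy()                       # snapshot before updating
            P[u] += lr * (err * Q[i] - reg * P[u])
            Q[i] += lr * (err * pu - reg * Q[i])
    return P, Q

# A dense low-rank "ratings" matrix standing in for the densified data.
R = rng.normal(0, 1, (100, 5)) @ rng.normal(0, 1, (80, 5)).T

for density in (0.05, 0.2, 0.5):
    mask = rng.random(R.shape) < density           # keep only a fraction of entries
    P, Q = factorize(R, mask)
    rmse = np.sqrt(np.mean((R[~mask] - (P @ Q.T)[~mask]) ** 2))
    print(f"observed fraction {density:.2f}: held-out RMSE {rmse:.3f}")
```

Accuracy on the held-out entries degrades as the observed fraction shrinks, which is the sensitivity the thesis measures.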
725

Caring More About EQ Than IQ : Automatic Equalizing of Audio Signals

Axelson-Fisk, Magnus, January 2018
In this bachelor thesis, the possibility of correcting for room acoustics based on frequency analysis is studied. Software to calculate transfer functions online was constructed and tested, using a version of the Maximum Length Sequence method, which requires long sequences for rooms with long reverberation. During the project it was noted that zero-padding the sequences greatly improved the accuracy, and that the length of the zero pad affected the results. The software was tested both in computer simulations and in practice. While testing in practice, it was noted that the system has limitations on which rooms it works in. All test signals were recorded and afterwards compared to the original recording. The constructed software showed that it is possible, to some extent, to correct for unknown transfer functions using only frequency analysis. Further, it does correct for the room's transfer function, but it is difficult to say whether this is valid for all rooms and transfer functions.
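A minimal sketch of the Maximum Length Sequence idea, not the thesis software: excite with an LFSR-generated sequence, then circularly cross-correlate to recover the impulse response. The tap pair is assumed to give a primitive polynomial, the "room" is simulated, and this periodic toy case omits the zero padding the thesis found important for non-periodic capture:

```python
import numpy as np

def mls(n_bits=10, taps=(10, 7)):
    """One period of a maximum-length sequence from a Fibonacci LFSR
    (the tap pair is assumed to correspond to a primitive polynomial)."""
    state = np.ones(n_bits, dtype=int)
    seq = np.empty(2 ** n_bits - 1)
    for i in range(seq.size):
        seq[i] = 2.0 * state[-1] - 1.0                  # bits {0,1} -> {-1,+1}
        fb = state[taps[0] - 1] ^ state[taps[1] - 1]    # feedback bit
        state[1:] = state[:-1]
        state[0] = fb
    return seq

x = mls()                                   # excitation, length 1023
h_true = np.zeros(64)                       # toy room impulse response:
h_true[0], h_true[40] = 1.0, 0.35           # direct sound plus one reflection

# The steady-state room output for a periodically repeated MLS is the
# circular convolution of one period with the impulse response.
y = np.fft.irfft(np.fft.rfft(x) * np.fft.rfft(h_true, x.size), x.size)

# Circular cross-correlation with the MLS recovers h, because the MLS
# autocorrelation is nearly a delta function.
h_est = np.fft.irfft(np.fft.rfft(y) * np.conj(np.fft.rfft(x)), x.size) / x.size
print(f"max recovery error: {np.abs(h_est[:64] - h_true).max():.4f}")
```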
726

Introduction of the Debye media to the filtered finite-difference time-domain method with complex-frequency-shifted perfectly matched layer absorbing boundary conditions

Long, Zeyu, January 2017
The finite-difference time-domain (FDTD) method is one of the most widely used computational electromagnetics (CEM) methods for solving Maxwell's equations in modern engineering problems. In biomedical applications, such as microwave imaging for early disease detection and treatment, human tissues are treated as lossy and dispersive materials; the most popular model for describing the material properties of the human body is the Debye model. In order to simulate the computational domain as an open region for biomedical applications, complex-frequency-shifted perfectly matched layers (CFS-PML) are applied to absorb the outgoing waves. The CFS-PML is highly efficient at absorbing evanescent or very low frequency waves. This thesis investigates the stability of the CFS-PML and presents conditions for determining the parameters of the one-dimensional and two-dimensional CFS-PML.

The advantages of the FDTD method are its simplicity of implementation and its applicability to a wide range of problems. However, the Courant-Friedrichs-Lewy (CFL) condition limits the time-step size for stable FDTD computations. Because of the CFL condition, the computational efficiency of the FDTD method is constrained by fine spatial-temporal sampling, especially in simulations with electrically small objects or dispersive materials. Instead of modifying the explicit time-updating equations and the leapfrog integration of the conventional FDTD method, the spatially filtered FDTD method extends the CFL limit by filtering out the unstable components in the spatial frequency domain. This thesis implements the filtered FDTD method with CFS-PML and a one-pole Debye medium, then introduces guidance on optimizing the spatial filter to improve computational speed at a desired accuracy.
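The Debye and CFS-PML machinery is beyond a short example, but the CFL constraint that motivates the filtered method shows up already in a bare one-dimensional vacuum FDTD loop (a generic normalized Yee update; grid size, source and step values are illustrative, none of it from the thesis):

```python
import numpy as np

c0 = 299792458.0                    # speed of light, m/s
nz, dz = 400, 1e-3                  # grid cells and spatial step
dt = 0.99 * dz / c0                 # CFL condition in 1-D: c0 * dt / dz <= 1
sc = c0 * dt / dz                   # Courant number
imp0 = 376.730                      # free-space impedance

ez = np.zeros(nz)                   # electric field on integer grid points
hy = np.zeros(nz - 1)               # magnetic field, staggered half a cell

for t in range(600):                # leapfrog time stepping
    hy += sc * (ez[1:] - ez[:-1]) / imp0
    ez[1:-1] += sc * imp0 * (hy[1:] - hy[:-1])
    ez[nz // 4] += np.exp(-((t - 60) / 20.0) ** 2)   # soft Gaussian source
print(f"peak |Ez| after 600 steps: {np.abs(ez).max():.3f}")
```

Raising `dt` above `dz / c0` makes this loop diverge, which is exactly the limit the spatial-filtering approach relaxes.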
727

Modeling Supply Chain Dynamics with Calibrated Simulation Using Data Fusion

January 2010
abstract: In today's global market, companies are facing unprecedented levels of uncertainty in supply, demand and the economic environment. A critical issue for companies facing increasing competition is to monitor the changing business environment and manage disturbances and changes in real time. In this dissertation, an integrated framework is proposed using simulation and online calibration methods to enable the adaptive management of large-scale complex supply chain systems. The design, implementation and verification of the integrated approach are studied in this dissertation. The research contributions are two-fold. First, this work enriches symbiotic simulation methodology by proposing a framework of simulation and advanced data fusion methods to improve simulation accuracy. Data fusion techniques optimally calibrate the simulation state/parameters by considering errors in both the simulation models and in measurements of the real-world system. Data fusion methods - Kalman Filtering, Extended Kalman Filtering, and Ensemble Kalman Filtering - are examined and discussed under varied conditions of system chaotic level, data quality and data availability. Second, the proposed framework is developed, validated and demonstrated in 'proof-of-concept' case studies on representative supply chain problems. In the case study of a simplified supply chain system, Kalman Filtering is applied to fuse simulation data and emulation data to effectively improve the accuracy of the detection of abnormalities. In the case study of the 'beer game' supply chain model, the system's chaotic level is identified as a key factor influencing simulation performance and the choice of data fusion method. Ensemble Kalman Filtering is found more robust than Extended Kalman Filtering in a highly chaotic system. With appropriate tuning, the improvement of simulation accuracy is up to 80% in a chaotic system, and 60% in a stable system. In the last study, the integrated framework is applied to adaptive inventory control of a multi-echelon supply chain with non-stationary demand. It is worth pointing out that the framework proposed in this dissertation is not only useful in supply chain management, but is also suitable for modeling other complex dynamic systems, such as healthcare delivery systems and energy consumption networks. / Dissertation/Thesis / Ph.D. Engineering 2010
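A scalar toy version of the calibration loop described above, with a Kalman filter fusing a simulation forecast and noisy measurements of the real system (the inventory model and noise variances are invented for illustration, not from the dissertation):

```python
import numpy as np

rng = np.random.default_rng(0)

Q, R = 4.0, 25.0            # process (simulation) and measurement noise variances
x_hat, P = 100.0, 10.0      # initial inventory estimate and its variance
true_inv = 100.0

for day in range(30):
    true_inv += rng.normal(-1.0, 2.0)        # the real inventory drifts
    # Predict: step the "simulation model" forward (here a no-change forecast).
    P += Q
    # Update: fuse a noisy warehouse measurement into the simulated state.
    z = true_inv + rng.normal(0.0, np.sqrt(R))
    K = P / (P + R)                          # Kalman gain
    x_hat += K * (z - x_hat)
    P *= 1.0 - K
print(f"calibrated estimate {x_hat:.1f}, truth {true_inv:.1f}")
```

The Extended and Ensemble variants the dissertation compares replace this linear predict/update pair with linearized and sampled versions for nonlinear models.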
728

Application of vertical-flow settlers for the solid-liquid separation of process water in coal beneficiation plants in southern Santa Catarina

Smaniotto, André Luiz Amorim, January 2017
The thickening and clarification of effluents from coal beneficiation plants using settlers is a well-established practice throughout the world, since the process water must be reused for both economic and environmental reasons. The first settler, of the thickener type, to enter industrial operation in the Santa Catarina coal region was installed at the Barro Branco Mine of Carbonífera Rio Deserto in 2007. The equipment was fitted with PVC lamellae and was installed as an alternative to decantation basins, which have high construction and operating costs. However, the equipment proved ineffective owing to the deposition of solids on the lamellae. This difficulty led to the adoption of other models of vertical-flow settlers that do not use lamellae. This work presents the available data from the operation of the lamella equipment at the Barro Branco Mine and current results from the operation of the lamella-free settlers at the Esperança Mine. In the latter case, the flow rate and the concentrations of solids and of the metals iron, aluminum and manganese were measured in the inflow and outflow; the pH and turbidity of the clarified effluent were also measured. The data show satisfactory operation, with a significant impact on reducing the costs of transporting and depositing the fine tailings and of treating the clarified overflow, allowing disposal in accordance with environmental legislation.
729

Scene Depth Estimation Based on Odometry and Image Data

Zborovský, Peter, January 2018
In this work, we propose a depth estimation system based on an image sequence and odometry information. The key idea is that depth estimation is decoupled from pose estimation. This approach results in a multipurpose system applicable to different robot platforms and to different depth-estimation problems. Our implementation uses various filtering techniques, operates in real time and provides appropriate results. Although the system was aimed at and tested on a drone platform, it can equally well be used on any other type of autonomous vehicle that provides odometry information and video output.
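The core geometry is easy to sketch: with the inter-frame translation taken from odometry as a baseline, the depth of a tracked feature follows from triangulation. A toy example under a pure-sideways-motion assumption (the focal length, baseline and pixel coordinates below are invented for illustration and are not from the thesis):

```python
# Assumed values for illustration only: focal length in pixels, the
# odometry-reported sideways translation between frames, and the pixel
# column of one tracked feature in each frame.
f_px = 525.0                  # focal length, pixels
baseline = 0.10               # translation between frames from odometry, meters
u0, u1 = 320.0, 334.0         # feature column in frame 0 and frame 1

disparity = u1 - u0           # pixel shift induced by the translation
Z = f_px * baseline / disparity
print(f"estimated feature depth: {Z:.2f} m")   # 3.75 m
```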
730

Model Agnostic Extreme Sub-pixel Visual Measurement and Optimal Characterization

January 2012
abstract: It is possible in a properly controlled environment, such as industrial metrology, to make significant headway into the non-industrial constraints on image-based position measurement using the techniques of image registration, and to achieve repeatable feature measurements on the order of 0.3% of a pixel, about an order of magnitude better than conventional real-world performance. These measurements are then used as inputs for a model-optimal, model-agnostic smoothing for calibration of a laser scribe and online tracking of a velocimeter from video input. Using appropriate smooth interpolation to increase effective sample density can reduce uncertainty and improve estimates. Using the proper negative offset of the template function creates a convolution with higher local curvature than either the template or the target function, which allows improved center-finding. Using the Akaike Information Criterion with a smoothing spline function, it is possible to perform a model-optimal smooth on scalar measurements without knowing the underlying model, and to determine the function describing the uncertainty in that optimal smooth. An example of empirical derivation of the parameters for a rudimentary Kalman Filter from this is then provided and tested. Using the techniques of Exploratory Data Analysis and the "Formulize" genetic algorithm tool to convert the spline models into more accessible analytic forms resulted in a stable, properly generalized KF whose performance and simplicity exceed "textbook" implementations. Validation of the measurement includes that, in the analytic case, it led to arbitrary precision in feature measurement; in a reasonable test case using the proposed methods, a consistent maximum error of around 0.3% of the length of a pixel was achieved, and in practice, using pixels 700 nm in size, feature position was located to within ±2 nm. Robust applicability is demonstrated by measuring the indicator position of a King model 2-32-G-042 rotameter. / Dissertation/Thesis / Measurement Results (part 1) / Measurement Results (part 2) / General Presentation / M.S. Mechanical Engineering 2012
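As a small illustration of the sub-pixel registration step this pipeline starts from (the spline/AIC and Kalman machinery aside), here is a parabolic peak-refinement sketch on a one-dimensional cross-correlation; the Gaussian feature, noise level and shift are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

def subpixel_peak(corr):
    """Refine the integer argmax of a correlation by fitting a parabola
    through the peak sample and its two neighbours."""
    i = int(np.argmax(corr))
    y0, y1, y2 = corr[i - 1], corr[i], corr[i + 1]
    return i + 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)

# The same Gaussian feature, shifted by a sub-pixel amount, plus noise.
x = np.arange(64, dtype=float)
true_pos = 20.37
template = np.exp(-0.5 * ((x - 20.0) / 3.0) ** 2)
target = np.exp(-0.5 * ((x - true_pos) / 3.0) ** 2) + rng.normal(0, 0.01, x.size)

corr = np.correlate(target, template, mode='full')
lag = subpixel_peak(corr) - (template.size - 1)     # sub-pixel shift estimate
print(f"estimated position {20.0 + lag:.3f}, true {true_pos}")
```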
