591 |
A review of solid waste management practices in Polokwane City. Maluleke, Prudence Hlamarisa, 08 May 2014.
Bibliographical references appear at the end of each chapter. / This study reviews solid waste management practices in Polokwane City. The study area covered selected residential areas of Polokwane City, namely Ivy Park, Fauna Park, Welgelegen, Westernburg and the City Centre. Two main methods were used to collect data: qualitative and quantitative. A field survey was also conducted to validate the data obtained from the participants interviewed during the qualitative phase. After framing the problem, the objectives of the study of solid waste management practices in Polokwane City were briefly outlined as follows:
• Assess solid waste management practices in Polokwane City.
• Compare how households and the municipality take responsibility for storing, collecting, transporting, treating and disposing of solid waste.
• Investigate what problems the City encounters in managing solid waste.
• Make relevant recommendations aimed at improving solid waste management practices within the City.
Service management was shared between the municipality and the private sector. Of the five residential study areas, the municipality manages waste in the City Centre, while the private sector manages waste in the other residential areas. However, the City continues to play an administrative role over the contracted service provider.
Statistical results were presented in figures and tables. The results showed the storage habits, frequency of collection, mode of transport and methods of disposal for solid waste in Polokwane City.
The only method of disposal in the City was found to be landfilling. Activities that take place at the landfill site, such as reclaiming, were outlined together with the economic value these activities add to the City. The study also revealed that as the population increases, the amount of solid waste generated also increases. / Environmental Sciences / M. Sc. (Environmental Management)
|
592 |
Indexation et recherche de contenus par objet visuel / Object-based visual content indexing and retrieval. Bursuc, Andrei, 21 December 2012.
With the ever increasing amount of video content available in online repositories, content-based retrieval of video objects is growing in difficulty and becoming a mandatory feature for video search engines. This thesis advances a framework for retrieving user-defined video objects and brings two major contributions. The first, entitled DOOR (Dynamic Object Oriented Retrieval), is a methodological framework for retrieving instances of video objects selected by a user; the second concerns the support offered for video retrieval, namely the video navigation and retrieval system, its interface and the underlying architecture.
Under the DOOR framework, the user-defined video object is given a hybrid representation obtained by over-segmenting the frames, constructing region adjacency graphs and aggregating interest points. The identification of object instances across multiple videos is formulated as an energy optimization problem approximating an NP-hard problem. Object candidates are sub-graphs that yield an optimal energy with respect to the user-defined query. Four optimization strategies are proposed: Greedy, Relaxed Greedy, Simulated Annealing and GraphCut. The region-based object representation is further improved by aggregating interest points into the hybrid representation; the similarity between an object and a frame is computed with a spectral matching technique integrating both colorimetric and interest-point descriptors. The DOOR framework scales to large video archives through a Bag-of-Words representation enriched with a query definition and expansion mechanism based on a multi-modal text-image-video principle. The proposed techniques are evaluated on multiple TRECVID video datasets, proving their effectiveness.
The second contribution concerns user support for video retrieval (video navigation, video retrieval, the graphical interface) and consists of OVIDIUS (On-line VIDeo Indexing Universal System), an on-line video browsing and retrieval platform. The OVIDIUS platform features hierarchical video navigation functionalities that exploit the MPEG-7 standard for structural description of video content; the DOOR framework is integrated into the platform to provide its search functionality. The major advantage of the proposed system is its modular architecture, which makes it possible to deploy the system on various terminals (both fixed and mobile), independently of the operating systems involved. The choice of technologies for each module of the platform is justified in comparison with other technological options. Finally, different scenarios and use cases for the OVIDIUS platform are presented.
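The search over candidate sub-graphs can be illustrated with a small, generic simulated-annealing sketch (one of the four strategies named above). The graph, the node set and the energy function here are hypothetical stand-ins for illustration, not the actual DOOR formulation.

```python
import math
import random

def anneal(nodes, energy, steps=5000, t0=1.0, seed=0):
    """Simulated annealing over binary node selections (candidate sub-graphs).
    `energy` maps a frozenset of selected nodes to a real value; lower is better."""
    rng = random.Random(seed)
    state = frozenset()
    best, best_e = state, energy(state)
    for k in range(steps):
        t = t0 * (1 - k / steps) + 1e-9           # linear cooling schedule
        flip = rng.choice(nodes)                   # toggle one node in/out
        cand = state ^ {flip}
        delta = energy(cand) - energy(state)
        if delta < 0 or rng.random() < math.exp(-delta / t):
            state = cand
            if energy(state) < best_e:
                best, best_e = state, energy(state)
    # Final greedy sweep; for the unimodal toy energy below this
    # guarantees the global optimum is reached.
    improved = True
    while improved:
        improved = False
        for n in nodes:
            cand = best ^ {n}
            if energy(cand) < best_e:
                best, best_e = cand, energy(cand)
                improved = True
    return best, best_e

# Toy energy: distance between the selection and a hypothetical "query" set.
query = {1, 2, 3}
def toy_energy(sel):
    return len(query ^ sel)  # symmetric difference; 0 iff sel == query

best, e = anneal(list(range(8)), toy_energy)
print(best, e)  # frozenset({1, 2, 3}) 0
```

The same loop structure applies when `energy` scores a sub-graph against a user query; only the energy function changes.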
|
593 |
Wireless Sensor Networks: Bit Transport Maximization and Delay Efficient Function Computation. Shukla, Samta, January 2013.
We consider a wireless sensor network, in which end users are interested in maximizing the useful information supplied by the network till network partition due to inevitable node deaths. Neither throughput maximization nor network lifetime maximization achieves the objective: A network with high throughput provides information at a high rate, but can exhaust the nodes of their energies quickly; similarly, a network can achieve a long lifetime by remaining idle for most of the time.
We propose and seek to maximize a new metric: “Aggregate bit transported before network partition” (a product of throughput and lifetime), which precisely captures the usefulness of sensor networks. We model the links in the wireless sensor network as wired links with reduced equivalent capacities, formulate and solve the problem of maximizing bits transported before network partition on arbitrary networks.
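The product metric above can be illustrated with a toy calculation; the rates and lifetimes below are hypothetical operating points, chosen only to show why neither throughput nor lifetime alone captures usefulness.

```python
def bits_before_partition(rate_bps, lifetime_s):
    """Aggregate bits transported before network partition:
    throughput multiplied by the time until the network partitions."""
    return rate_bps * lifetime_s

# Hypothetical operating points for the same network:
aggressive = bits_before_partition(rate_bps=1000, lifetime_s=50)    # drains nodes fast
idle       = bits_before_partition(rate_bps=10,   lifetime_s=3000)  # lives long, sends little
balanced   = bits_before_partition(rate_bps=400,  lifetime_s=200)

print(aggressive, idle, balanced)  # 50000 30000 80000
```

The balanced point maximizes the product even though it wins on neither throughput nor lifetime individually.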
To assess the benefits that network coding can yield for the same objective, we study a scenario where coding-capable nodes are placed on a regular grid. We propose an optimal algorithm to choose the minimum number of coding points in the grid to ensure energy efficiency. Our results show that, even with simple XOR coding, the bits transported can increase by up to 83% relative to the case without coding.
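The gain from simple XOR coding comes from the classic relay exchange: when two flows cross at a coding point, the relay can broadcast one XOR-ed packet instead of forwarding two, and each endpoint decodes with the packet it already knows. A minimal sketch (packet contents are arbitrary):

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

# Nodes A and B exchange packets through relay R.
pkt_a = b"hello from A"
pkt_b = b"reply from B"
assert len(pkt_a) == len(pkt_b)  # XOR coding assumes equal-length packets

coded = xor_bytes(pkt_a, pkt_b)  # R broadcasts ONE packet instead of two

# Each endpoint decodes using the packet it already holds:
at_b = xor_bytes(coded, pkt_b)   # B recovers A's packet
at_a = xor_bytes(coded, pkt_a)   # A recovers B's packet
print(at_b == pkt_a, at_a == pkt_b)  # True True
```

One broadcast replacing two unicast transmissions is where the energy (and hence bits-transported) saving originates.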
Further, we study the problem of in-network data aggregation in a wireless sensor network to achieve minimum delay. The nodes in the network compute and forward data as per a query graph, which allows operations belonging to a general class of functions. We aim to extract the best sub-network that achieves the minimum delay, and design an algorithm to schedule the sub-network so that the computed data reaches the sink at the earliest time. We consider directed acyclic query graphs, in contrast to existing work, which considers only tree query graphs.
|
594 |
Coordinated beamforming in cellular and cognitive radio networks. Pennanen, H. (Harri), 08 September 2015.
Abstract
This thesis focuses on the design of coordinated downlink beamforming techniques for wireless multi-cell multi-user multi-antenna systems. In particular, cellular and cognitive radio networks are considered. In general, coordinated beamforming schemes aim to improve system performance, especially at the cell-edge area, by controlling inter-cell interference. In this work, special emphasis is put on practical coordinated beamforming designs that can be implemented in a decentralized manner by relying on local channel state information (CSI) and low-rate backhaul signaling. The network design objective is the sum power minimization (SPMin) of base stations (BSs) while providing the guaranteed minimum rate for each user.
Decentralized coordinated beamforming techniques are developed for cellular multi-user multiple-input single-output (MISO) systems. The proposed iterative algorithms are based on classical primal and dual decomposition methods. The SPMin problem is decomposed into two optimization levels, i.e., BS-specific subproblems for the beamforming design and a network-wide master problem for the inter-cell interference coordination. After acquiring local CSI, each BS can independently compute its transmit beamformers by solving its subproblem via standard convex optimization techniques. Interference coordination is managed by solving the master problem via a traditional subgradient method that requires only scalar information exchange between the BSs. The algorithms satisfy the user-specific rate constraints at every iteration; hence, delay and signaling overhead can be reduced by limiting the number of iterations performed. In this respect, the proposed algorithms are applicable to practical implementations, unlike most existing decentralized approaches. The numerical results demonstrate that the algorithms provide significant performance gains over zero-forcing beamforming strategies.
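The SPMin objective, minimum total transmit power subject to per-user rate (SINR) targets, has a classical scalar analogue: Foschini-Miljanic-style fixed-point power control, in which each transmitter iterates using only local measurements, mirroring the decentralized flavor of the algorithms above. This is a toy sketch with power levels only (no beamforming); the channel gains, noise level and targets are hypothetical.

```python
def power_control(G, noise, targets, iters=200):
    """Iterate p_i <- target_i * (noise + sum_{j != i} G[i][j] * p_j) / G[i][i].
    For feasible targets this converges to the minimum-power solution."""
    n = len(G)
    p = [0.0] * n
    for _ in range(iters):
        p = [targets[i] * (noise + sum(G[i][j] * p[j] for j in range(n) if j != i)) / G[i][i]
             for i in range(n)]
    return p

# Hypothetical 2-link system: strong direct gains, weak cross gains.
G = [[1.0, 0.1],
     [0.2, 1.0]]
p = power_control(G, noise=0.1, targets=[2.0, 2.0])

# Each link meets its SINR target exactly at the fixed point.
for i in range(2):
    interference = 0.1 + sum(G[i][j] * p[j] for j in range(2) if j != i)
    print(round(G[i][i] * p[i] / interference, 3))  # 2.0 for each link
```

The full MISO SPMin problem replaces the scalar update with a per-BS convex beamforming subproblem, but the structure (local update plus small inter-cell exchange) is the same.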
Coordinated beamforming is also studied in cellular multi-user multiple-input multiple-output (MIMO) systems. The corresponding non-convex SPMin problem is divided into transmit and receive beamforming optimization steps that are alternately solved via successive convex approximation method and the linear minimum mean square error criterion, respectively, until the desired level of convergence is attained. In addition to centralized design, two decentralized primal decomposition-based algorithms are proposed wherein the transmit and receive beamforming designs are facilitated by a combination of pilot and backhaul signaling. The results show that the proposed MIMO algorithms notably outperform the MISO ones.
Finally, cellular coordinated beamforming strategies are extended to multi-user MISO cognitive radio systems, where primary and secondary networks share the same spectrum. Here, network optimization is performed for the secondary system, with additional interference constraints imposed for the primary users. Decentralized algorithms are proposed based on primal decomposition and an alternating direction method of multipliers.
|
595 |
Impact économique d’un nouveau test diagnostique pour le cancer du poumon / Economic impact of a new diagnostic test for lung cancer. Gouault Laliberté, Avril, 05 1900.
In Canada, lung cancer is the leading cause of cancer death. Imaging technologies, such as computed tomography, allow the detection of potential lung cancers in the form of pulmonary nodules. The clinical pathway leading to the definitive diagnosis of a pulmonary nodule can be complex. Research in oncoproteomics has led to the development of novel non-invasive diagnostic tests in lung cancer. These tests aim to evaluate the risk of malignancy of a nodule in order to guide the clinical pathway leading to a diagnosis. However, the economic impact of such tests remains unknown.
The objective of this project was to measure, in a real-life setting, health care resource utilization for the investigation of pulmonary nodules, and then to develop a generic model to assess the economic impact, in the province of Quebec, of new proteomic tests for the investigation of these nodules.
First, a medical chart review was performed in three Quebec hospitals to measure health care resource utilization for the investigation of pulmonary nodules of 0.8 to 3.0 cm. A cost-minimization analysis was then performed using a generic model developed for this project. The model compared usual care with an approach integrating a hypothetical proteomic test, in order to identify the less expensive approach.
Based on the medical chart review, the average cost of investigating a pulmonary nodule was $7,354. According to the results of the analysis, if the cost of the test is below $3,228.70, the approach integrating a proteomic test would be less expensive than the current approach.
This study suggests that the use of a non-invasive proteomic diagnostic test at the beginning of the investigation of a pulmonary nodule of 0.8 to 3.0 cm could generate savings for the health care system in Quebec.
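The break-even logic of a cost-minimization analysis can be sketched with the figures reported above ($7,354 average usual-care cost and the $3,228.70 break-even test price). The residual work-up cost in the test pathway is a hypothetical value, chosen only so the arithmetic reproduces the reported threshold; the actual model structure in the study is more detailed.

```python
usual_care_cost = 7354.00        # reported average investigation cost (CAD)
downstream_after_test = 4125.30  # hypothetical residual work-up cost in the test pathway

def pathway_cost(test_price):
    """Total cost of the test-first pathway: test price plus remaining work-up."""
    return test_price + downstream_after_test

# Break-even test price: the point where both pathways cost the same.
break_even = usual_care_cost - downstream_after_test
print(round(break_even, 2))  # 3228.7, matching the reported threshold

assert pathway_cost(3000.00) < usual_care_cost  # cheaper below the threshold
assert pathway_cost(3500.00) > usual_care_cost  # more expensive above it
```

In a cost-minimization analysis the outcomes of the pathways are assumed equivalent, so only the cost comparison above is needed.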
|
596 |
Investigation into the technical feasibility of biological treatment of precious metal refining wastewater. Moore, Bronwyn Ann, January 2013.
The hydrometallurgical refining of platinum group metals results in large volumes of liquid waste that require suitable treatment before any disposal can be contemplated. The wastewater streams are characterized by extremes of pH, high inorganic ion content (such as chloride), significant residual metal loads and small amounts of entrained organic compounds. Historically these effluents were housed in evaporation reservoirs; however, a lack of space and growing water demands have led Anglo Platinum to consider treating them. The aim of this study was to investigate whether biological wastewater treatment could produce water suitable for on-site reuse. Bench-scale activated sludge and anaerobic digestion for co-treatment of an acidic refinery waste stream with domestic wastewater were used to give preliminary data. Activated sludge showed better water treatment at lab scale in terms of removal efficiencies of ammonia (approximately 25%, cf. 20% in anaerobic digestion) and COD (70% cf. 43% in digestion), and greater robustness when biomass health was compared. Activated sludge was consequently selected for a pilot plant trial. The pilot plant was operated on-site and performed comparably with the bench-scale system; however, shortcomings in the clarifier design led to losses of biomass and poor effluent quality (suspended solids washout). The pilot plant was unable to alter the pH of the feed, but a two-week maturation period raised the pH from 5.3 to 7.0. Tests on algal treatment as an alternative or follow-on unit operation to activated sludge showed it not to be a viable process. The activated sludge effluent was assessed for on-site reuse in flotation, and no significant difference was found between its flotation performance and that of the process water currently used, indicating that the effluent generated by the biological treatment system can be used successfully for flotation.
Flotation is the method by which mineral refining operations recover minerals of interest from ore through the addition of chemicals and aeration of the ore slurry. Target minerals adhere to the bubbles and can be removed from the process.
|
597 |
Transformátorová páječka 500W / Power soldering station 500W. Šelepa, Jan, January 2010.
This thesis contains a complete description of the design and implementation of a 500 W transformer soldering station. The soldering station includes a half-bridge DC/DC converter with a pulse transformer. The device works at very low voltage and extremely high output current, so some parts require a special design to ensure proper operation. One unusual feature is the coaxial transformer with very low leakage inductance (units of nH). Another is the synchronous rectifier, which handles the transformer's low output voltage and high output current. The finished functional prototype consists of the soldering station and a soldering adapter.
|
598 |
Návrh a výpočet sušicího zařízení pro dřevozpracující průmysl / Design of a drying device for the wood-processing industry. Vach, Tomáš, January 2008.
The thesis addresses stabilizing the temperature of wood fibre during the production of medium-density fibreboard (MDF) in the wood-processing industry. What began as an energy optimization of a kiln dryer turned out to be a more serious problem: the production line suffers large heat losses, dissipating much of the heat gained during kiln drying. The primary aim of this thesis is to examine the cooling effect of the environment on selected parts of the production line and to propose acceptable measures to minimize heat loss and raise the temperature of the wood fibre to the required level. The solution focuses on the section of the production line between the drying equipment and the press. The first part of the thesis introduces the technology of fibreboard production, the relevant heat transfer problems, and the basics of computer modelling of heat flow and transfer using CFD simulation. The next part evaluates the heat loss of selected parts of the production line under current working conditions and compares it with computer simulations. Verifying the accuracy of the results obtained by both approaches is the first step towards improving the current situation; the verified calculation model can then be used to study the effects of convective and radiative heating on the wood fibre layer. An important factor in the last stage of the calculation is the actual temperature of the fibre mat at the entrance to the press: the higher the entry temperature within the set range, the shorter the pressing time, and hence the shorter the whole industrial process. Solving the problem leads to energy savings and should become a solid basis for evaluating improvements to MDF board production.
The conclusion of the thesis analyses the findings on the heating of wood fibre and their effectiveness for practical use.
|
599 |
Vliv různých agrotechnologií na fyzikální kvalitu půdy ve vybraných lokalitách v Jihomoravském kraji / The effects of different agro-technologies on the soil physical quality in selected localities in the South Moravian Region. Bažantová, Adéla, January 2017.
Farmland is managed with various tillage methods, chiefly to support the correct course of soil processes, plant growth and development and, of course, profit. There are two basic types of tillage: conventional (classical) tillage, which involves ploughing, and minimization tillage, which excludes ploughing. Over the last few decades, the development of tillage has focused on minimization tillage, which includes shallow loosening, soil conservation tillage and direct seeding. This thesis examines the impact of minimization tillage on selected physical and hydraulic properties of the soil. Undisturbed soil samples were taken on the experimental plot Kozlany with Kopecky cylinders (V = 100 cm3) from a depth of 0–10 cm during 2016. On this plot, minimization tillage was used in the form of shallow loosening and direct seeding.
|
600 |
Methods for image restoration and segmentation by sparsity promoting energy minimization. Bajić Papuga, Buda, 16 September 2019.
The energy minimization approach is widely used in image processing: many image processing problems can be modelled as minimization problems. This thesis deals with two crucial tasks of image analysis workflows: restoration and segmentation of images corrupted by blur and noise. Both are modelled as energy minimization problems, where the energy function is composed of two parts: a data fidelity term and a regularization term. The main contribution of this thesis is the development of new data fidelity and regularization terms for both tasks.
The image restoration methods (non-blind and blind deconvolution and super-resolution reconstruction) developed within this thesis are suited to mixed Poisson-Gaussian noise, which is encountered under many realistic imaging conditions. We use the generalized Anscombe variance stabilization transformation to remove the signal dependency of the noise, and we propose a novel data fidelity term which takes the variance stabilization transformation into account. Turning to the regularization term for image restoration, we investigate how sparsity-promoting regularization in the gradient domain, formulated as Total Variation, can be improved in the presence of blur and mixed Poisson-Gaussian noise. We found that the Huber potential function leads to a significant improvement in restoration performance.
In this thesis we also propose a new segmentation method, so-called coverage segmentation, which estimates the relative coverage of each pixel in a sensed image by each image component. Its data fidelity term takes blurring and down-sampling into account, providing robust segmentation in the presence of blur while allowing segmentation at increased spatial resolution. In addition, new sparsity-promoting regularization terms are suggested: (i) Huberized Total Variation, which provides smooth object boundaries and noise removal, and (ii) non-edge image fuzziness, which responds to the assumption that imaged objects are crisp and that fuzziness is mainly due to the imaging and digitization process.
The applicability of the proposed restoration and coverage segmentation methods is demonstrated on Transmission Electron Microscopy image enhancement and on the segmentation of micro-computed tomography and hyperspectral images.
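Two of the building blocks named in the abstract have simple closed forms: the generalized Anscombe transform for stabilizing mixed Poisson-Gaussian noise, and the Huber potential used in the regularizer. A minimal sketch, assuming unit Poisson gain and a zero-mean Gaussian component (the thesis works with a more general parameterization):

```python
import math

def gen_anscombe(x, sigma):
    """Generalized Anscombe transform for mixed Poisson-Gaussian data
    (unit gain, zero-mean Gaussian part with standard deviation sigma):
    f(x) = 2 * sqrt(x + 3/8 + sigma^2), with the root argument clipped at 0."""
    return 2.0 * math.sqrt(max(x + 3.0 / 8.0 + sigma * sigma, 0.0))

def huber(t, delta):
    """Huber potential: quadratic near zero, linear in the tails; a smooth
    edge-preserving alternative to |t| in Total Variation regularization."""
    a = abs(t)
    return 0.5 * t * t if a <= delta else delta * (a - 0.5 * delta)

print(round(gen_anscombe(10.0, 0.0), 3))  # 2 * sqrt(10.375), about 6.442
print(huber(0.5, 1.0), huber(3.0, 1.0))   # 0.125 2.5
```

After the transform, the stabilized data can be treated as approximately Gaussian with unit variance, which is what makes a least-squares-style data fidelity term appropriate.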
|