  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
541

Efficient Hierarchical Clustering Techniques For Pattern Classification

Vijaya, P A 07 1900 (has links) (PDF)
No description available.
542

A Case for Protecting Huge Pages from the Kernel

Patel, Naman January 2016 (has links) (PDF)
Modern architectures support multiple page sizes to facilitate applications that use large chunks of contiguous memory, whether for buffer allocation, application-specific memory management, in-memory caching or garbage collection. Most general purpose processors support larger page sizes; e.g., the x86 architecture supports 2MB and 1GB pages, while the PowerPC architecture supports 64KB, 16MB and 16GB pages. Such larger pages are also known as superpages or huge pages. With huge pages, TLB reach can be increased significantly, and the Linux kernel can use them transparently to bring down the cost of TLB translations; with Transparent Huge Pages (THP) support in the Linux kernel, end users and application developers need not make any change to their applications. Memory fragmentation, one of the classical problems in computing systems for decades, is the key obstacle to allocating huge pages, and ubiquitous huge page support across architectures makes effective fragmentation management even more critical for modern systems. In the absence of huge pages, applications stress the system TLB for virtual-to-physical address translation, which adversely affects performance and energy characteristics in long-running systems. Since most kernel pages tend to be unmovable, fragmentation created by their misplacement is more problematic and nearly impossible to recover from with memory compaction. In this work, we explore the physical memory manager of Linux and the interaction of kernel page placement with fragmentation avoidance and recovery mechanisms. Our analysis reveals that a random kernel page layout not only thwarts the progress of memory compaction; it can actually induce more fragmentation in the system. To address this problem, we propose a new allocator that takes special care with the placement of kernel pages, introducing a new region type that represents memory areas holding kernel as well as user pages.
Using this new region we introduce a staged allocator that adapts and optimizes kernel page placement as the fragmentation level changes. We then introduce Illuminator, which with zero overhead outperforms the default kernel in huge page allocation success rate and in compaction overhead per huge page. We also show that huge page allocation is not a one-dimensional problem but a twofold concern: the fragmentation recovery mechanism may interfere with the allocator's page clustering policy and worsen fragmentation. Our results show that with effective kernel page placement the mixed page block count drops by up to 70%, which allows our system to allocate 3x-4x more huge pages than the default kernel. Using these additional huge pages we show up to 38% reduction in energy consumption and up to 39% reduction in execution time on standard benchmarks.
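The TLB-reach argument above can be sketched in a few lines. This is an illustrative calculation, not from the thesis; the TLB entry count of 1536 is an assumed, hypothetical figure.

```python
# Hypothetical illustration of the TLB-reach argument: with a fixed number
# of TLB entries, larger pages let the TLB cover far more memory.
def tlb_reach(entries: int, page_size: int) -> int:
    """Memory covered by a fully populated TLB, in bytes."""
    return entries * page_size

KB, MB, GB = 1024, 1024**2, 1024**3

base = tlb_reach(1536, 4 * KB)   # 4 KB base pages: 6 MB of reach
huge = tlb_reach(1536, 2 * MB)   # 2 MB huge pages: 3 GB of reach

print(base // MB, huge // GB)
```

With the same number of entries, moving from 4 KB to 2 MB pages multiplies the reach by 512, which is why fragmentation that blocks huge page allocation is so costly.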
543

Smart Meters Big Data: Behavioral Analytics via Incremental Data Mining and Visualization

Singh, Shailendra January 2016 (has links)
The big data framework applied to smart meters offers an exceptional platform for data-driven forecasting and decision making to achieve sustainable energy efficiency. Earning consumer confidence by respecting occupants' energy consumption behavior and preferences, so as to improve participation in various energy programs, is imperative but difficult to achieve. The key elements for understanding and predicting household energy consumption are the activities occupants perform, the appliances and the times at which they are used, and inter-appliance dependencies. This information can be extracted from the context-rich big data from smart meters, although this is challenging because: (1) it is not trivial to mine complex interdependencies between appliances from multiple concurrent data streams; (2) it is difficult to derive accurate relationships between interval-based events during which multiple appliances are in use; (3) the continuous generation of energy consumption data can change appliance-time and appliance-appliance associations over time. To overcome these challenges, we propose an unsupervised progressive incremental data mining technique using frequent pattern mining (appliance-appliance associations) and cluster analysis (appliance-time associations), coupled with a Bayesian network based prediction model. The proposed technique addresses the need to analyze temporal energy consumption patterns at the appliance level, which directly reflect consumers' behaviors and provide a basis for generalizing household energy models. Extensive experiments were performed with real-world datasets, and strong associations were discovered. The accuracy of the proposed model for predicting multiple-appliance usage outperformed a support vector machine at every stage, attaining accuracies of 81.65%, 85.90% and 89.58% for 25%, 50% and 75% of the training dataset size, respectively.
Moreover, accuracies of 81.89%, 75.88%, 79.23%, 74.74%, and 72.81% were obtained for short-term (hours) and long-term (day, week, month, and season) energy consumption forecasts, respectively.
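The appliance-appliance association step can be sketched in its simplest form. This is a hedged illustration, not the thesis' algorithm: it counts how often two appliances are active in the same time slot, the basic building block of frequent pattern mining; the appliance names and the support threshold are assumptions.

```python
from itertools import combinations
from collections import Counter

# Minimal sketch of appliance-appliance association mining: count how often
# two appliances run in the same interval and keep the frequent pairs.
def appliance_pairs(intervals, min_support=2):
    """intervals: list of sets of appliance names active in one time slot."""
    counts = Counter()
    for active in intervals:
        for pair in combinations(sorted(active), 2):
            counts[pair] += 1
    return {pair: n for pair, n in counts.items() if n >= min_support}

slots = [{"tv", "lights"}, {"tv", "lights", "oven"}, {"oven"}, {"tv", "lights"}]
print(appliance_pairs(slots))  # ('lights', 'tv') co-occurs in 3 slots
```

A real incremental miner would update these counts as new smart meter readings arrive rather than recomputing from scratch.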
544

3D urban cartography incorporating recognition and temporal integration / Cartographie urbaine 3D avec reconnaissance et intégration temporelle

Aijazi, Ahmad Kamal 15 December 2014 (has links)
Over the years, 3D urban cartography has gained widespread interest and importance in the scientific community due to an ever increasing demand for urban landscape analysis for different popular applications, coupled with advances in 3D data acquisition technology.
As a result, in the last few years, work on the 3D modeling and visualization of cities has intensified. Lately, applications have been very successful in delivering effective visualizations of large-scale models based on aerial and satellite imagery to a broad audience. This has created a demand for ground-based models as the next logical step to offer 3D visualizations of cities. Integrated in several geographical navigators, such as Google Street View, Microsoft Visual Earth or Geoportail, several such models are accessible to a large public, who enthusiastically view the realistic representation of the terrain created by mobile terrestrial image acquisition techniques. However, in urban environments, the quality of data acquired by these hybrid terrestrial vehicles is widely hampered by the presence of temporarily stationary and dynamic objects (pedestrians, cars, etc.) in the scene. Other associated problems include efficient updating of the urban cartography, effective change detection in the urban environment, processing noisy data in the cluttered urban environment, matching and registration of point clouds across successive passages, and wide variations in environmental conditions. Another aspect that has attracted a lot of attention recently is semantic analysis of the urban environment to semantically enrich 3D maps of cities, necessary for various perception tasks and modern applications. In this thesis, we present a scalable framework for automatic 3D urban cartography which incorporates recognition and temporal integration. We present in detail the current practices in the domain along with the different methods, applications, and recent data acquisition and mapping technologies, as well as the problems and challenges associated with them.
The work presented addresses many of these challenges, mainly pertaining to classification of the urban environment, automatic change detection, efficient updating of the 3D urban cartography, and semantic analysis of the urban environment. In the proposed method, we first classify the urban environment into permanent and temporary classes. The objects classified as temporary are then removed from the 3D point cloud, leaving behind a perforated 3D point cloud of the urban environment. These perforations, along with other imperfections, are then analyzed and progressively removed by incremental updating exploiting the concept of multiple passages. We also show that the proposed method of temporal integration helps to improve semantic analysis of the urban environment, especially building façades. The proposed methods ensure that the resulting 3D cartography contains only the exact, accurate and well-updated permanent features of the urban environment. These methods are validated on real data obtained from different sources in different environments. The results demonstrate not only the efficiency, scalability and technical strength of the method but also that it is ideally suited for applications pertaining to urban landscape modeling and cartography requiring frequent database updating.
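The multiple-passage idea above can be sketched as a simple voting scheme. This is a hedged illustration, not the thesis' method: a 3D point (here a voxel coordinate) is kept as permanent only if it is observed in enough successive passages; the voxel representation and vote threshold are assumptions.

```python
from collections import Counter

# Sketch of temporal integration over multiple passages: voxels seen in few
# passages (pedestrians, parked cars) are discarded as temporary objects.
def permanent_points(passages, min_votes):
    """passages: list of sets of voxel coordinates seen in each passage."""
    votes = Counter()
    for cloud in passages:
        votes.update(cloud)
    return {v for v, n in votes.items() if n >= min_votes}

p1 = {(0, 0, 0), (1, 0, 0), (5, 2, 0)}   # (5, 2, 0): a parked car, seen once
p2 = {(0, 0, 0), (1, 0, 0)}
p3 = {(0, 0, 0), (1, 0, 0), (9, 9, 1)}
print(permanent_points([p1, p2, p3], min_votes=3))  # the stable facade voxels
```

Each additional passage both fills perforations left by removed temporary objects and strengthens the evidence for genuinely permanent structure.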
545

Determinação das zonas de transição metabólica durante a corrida mediante os limiares de variabilidade da frequência cardíaca / Determination of metabolic transition zones during running using heart rate variability thresholds

Eduardo Marcel Fernandes Nascimento 17 January 2011 (has links)
The aim of the present study was to obtain evidence of the validity and reliability of heart rate variability (HRV) thresholds during running.
Nineteen healthy male runners (30.4 ± 4.1 years; 175.9 ± 6.4 cm; 74.3 ± 8.5 kg) performed a progressive maximal test on a treadmill with an initial velocity of 5 km.h-1 and increments of 1 km.h-1 every 3 minutes (constant 1% slope) until voluntary exhaustion. All subjects performed a retest within 48 hours to one week. Gas exchange, blood lactate and HRV (Poincaré plot) were measured. The aerobic (AT) and anaerobic (AnT) thresholds were determined from lactate, ventilatory and HRV thresholds. ANOVA for repeated measures followed by Bonferroni post-hoc tests was used to compare the methods. Reproducibility was analyzed with Bland-Altman plots and the intraclass correlation coefficient (ICC). The results show that the velocities at the second and third models employed to determine the AT by HRV were not significantly different (p > 0.05) from the first lactate and ventilatory thresholds. Similarly, there were no significant differences in the velocities corresponding to the AnT detected by the different methods (p > 0.05). ICC values were between 0.69 and 0.80 (p < 0.001). We conclude that the AT and AnT can be estimated by HRV analysis, provided that the procedures employed in this study are used.
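The Poincaré plot mentioned above is summarized by indices such as SD1. As a hedged illustration (the RR interval values are invented, not study data), the standard SD1 formula can be computed directly:

```python
import math

# Sketch of the SD1 index of a Poincaré plot of RR intervals, a standard
# short-term HRV measure: the (population) standard deviation of successive
# RR differences divided by sqrt(2).
def poincare_sd1(rr):
    """rr: sequence of RR intervals; returns SD1 in the same units."""
    diffs = [b - a for a, b in zip(rr, rr[1:])]
    mean = sum(diffs) / len(diffs)
    var = sum((d - mean) ** 2 for d in diffs) / len(diffs)
    return math.sqrt(var / 2)

rr_ms = [800, 810, 790, 805, 795]  # illustrative RR intervals in milliseconds
print(round(poincare_sd1(rr_ms), 2))
```

HRV threshold methods typically track how such indices change as exercise intensity increases.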
546

Extensão da transformada imagem-floresta diferencial para funções de conexidade com aumentos baseados na raiz e sua aplicação para geração de superpixels / Extending the differential image foresting transform to connectivity functions with root-based increases and its application to superpixel generation

Marcos Ademir Tejada Condori 11 December 2017 (has links)
Image segmentation is a problem of great relevance in computer vision, in which an image is divided into relevant regions, such as to isolate an object of interest for a given application. Segmentation methods based on the Image Foresting Transform (IFT) with monotonically incremental (MI) connectivity functions have achieved great success in several contexts. In interactive segmentation of images, in which the user is allowed to specify the desired object, new seeds can be added and/or removed to correct the labeling until the expected segmentation is achieved. This process generates a sequence of IFTs that can be computed more efficiently by the Differential Image Foresting Transform (DIFT). Recently, non-monotonically incremental (NMI) connectivity functions have been used successfully in the IFT framework in the context of image segmentation, allowing the incorporation of shape, boundary polarity, and connectivity constraints in order to customize the segmentation for a given target object. Non-monotonically incremental functions were also successfully exploited in the generation of superpixels, via sequences of IFT executions. In this work, we present a study of the Differential Image Foresting Transform in the case of NMI functions. Our research indicates that the original DIFT algorithm presents a series of inconsistencies for non-monotonically incremental functions. This work extends the DIFT algorithm to NMI functions in directed graphs, and shows its application in the context of superpixel generation. Another application presented to spread the relevance of NMI functions is the Bandeirantes algorithm for curve tracing and boundary tracking.
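The non-differential IFT with an MI connectivity function can be sketched as Dijkstra-like propagation from seeds. This is a hedged, simplified illustration, not the thesis' extension: a 1-D "image" with an additive path cost, where the cost along a path only grows (the MI property); the weights and seeds are invented.

```python
import heapq

# Sketch of the Image Foresting Transform with a monotonically incremental
# (MI) connectivity function: each pixel gets the cost of the cheapest path
# from a seed, propagated with a priority queue.
def ift(weights, seeds):
    """weights[i]: cost to step onto pixel i; seeds: starting pixels."""
    n = len(weights)
    cost = {s: 0 for s in seeds}
    heap = [(0, s) for s in seeds]
    heapq.heapify(heap)
    while heap:
        c, i = heapq.heappop(heap)
        if c > cost.get(i, float("inf")):
            continue  # stale queue entry
        for j in (i - 1, i + 1):  # 1-D neighbors
            if 0 <= j < n:
                nc = c + weights[j]  # MI: cost only grows along a path
                if nc < cost.get(j, float("inf")):
                    cost[j] = nc
                    heapq.heappush(heap, (nc, j))
    return [cost[i] for i in range(n)]

print(ift([1, 1, 9, 1, 1], seeds=[0]))
```

The DIFT reuses the forest from a previous run when seeds change; the inconsistencies studied in the thesis arise when the connectivity function is NMI, so this monotone-path argument no longer holds.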
547

Estimátor v systému regulace s proměnlivou strukturou / Estimator in control systems with variable structure

Dvořáček, Martin January 2008 (has links)
The thesis deals with linear discrete-time incremental estimators. These are used to select the best control law in systems with variable structure, and further for direct control with a state controller, with an application to a physical plant. PID controller variants are also discussed and optimized using the Nelder-Mead simplex method. Feedback control with the optimized PID is compared with control using linear discrete incremental estimators and a state regulator.
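The kind of closed loop whose gains would be tuned by Nelder-Mead can be sketched as a discrete PID controller driving a simple plant. This is a hedged illustration: the first-order plant model, gains, and time step are invented, not the thesis' system.

```python
# Sketch of a discrete PID loop on a first-order plant y' = -y + u,
# integrated with a forward-Euler step. Gains are illustrative; in the
# thesis-style workflow they would be chosen by Nelder-Mead optimization.
def simulate_pid(kp, ki, kd, setpoint=1.0, steps=200, dt=0.1):
    y, integral, prev_err = 0.0, 0.0, setpoint
    for _ in range(steps):
        err = setpoint - y
        integral += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integral + kd * deriv
        prev_err = err
        y += dt * (-y + u)  # forward-Euler step of the plant
    return y

final = simulate_pid(kp=2.0, ki=0.5, kd=0.1)
print(round(final, 3))  # close to the setpoint of 1.0
```

An optimizer would wrap `simulate_pid` in a cost function (e.g. accumulated squared error) and search over `(kp, ki, kd)` with the simplex method.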
548

Metodika návrhu optimálního způsobu zálohování velkých objemů dat / A guide to designing optimal method of backup for big volumes of data

Bartoňová, Veronika January 2012 (has links)
This diploma thesis deals with backing up large volumes of data. Data backup is an often overlooked area of information technology, yet data can be lost through a trivial user error or the failure of any component. The thesis discusses the theory of backup: the archive bit and its behavior under the various backup types (full, incremental, differential, or combinations thereof), the duration and frequency of backups, and the point of ultimate recovery. It also covers the Round-Robin, GFS and Tower of Hanoi rotation schemes, describing their principles with graphic rotation diagrams. The chapter on backup strategy describes how to choose the right strategy, taking into account technical and economic parameters; the impact analysis explained in the same chapter identifies the critical moments in data recovery. To select the optimal strategy it is necessary to consider not only the total capacity of the backed-up data, but also the size of the backup window and the storage location. The chapter on storage media reviews the backup media available on the market, with the technical parameters relevant to data backup. The section on the proposed methodology designs a backup plan with the inputs needed for an actual backup implementation, placing emphasis on regular backups and verification of their location. A practical demonstration shows that the Tower of Hanoi rotation scheme requires among the fewest backup media. The thesis also proposes a methodology for backing up small amounts of data.
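The Tower of Hanoi rotation scheme mentioned above has a compact formulation. As a hedged sketch (the four-set schedule is illustrative): media set k is reused every 2^(k+1) sessions, which is equivalent to counting the trailing zero bits of the session number.

```python
# Sketch of Tower of Hanoi media rotation: set 0 is used every other
# session, set 1 every 4th, set 2 every 8th, and so on, so older sets
# hold exponentially older restore points with few media overall.
def hanoi_media(session: int, num_sets: int) -> int:
    """Return the 0-based media-set index for a 1-based session number."""
    k = 0
    while session % 2 == 0:  # count trailing zero bits of the session number
        session //= 2
        k += 1
    return min(k, num_sets - 1)  # deepest set absorbs the overflow

schedule = [hanoi_media(n, 4) for n in range(1, 9)]
print(schedule)  # set 0 every other session, deeper sets ever more rarely
```

This spread of restore points across few media is why the thesis finds the scheme among the cheapest in backup media.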
549

Aplikace metody diskontovaného peněžního toku při hodnocení investičního projektu malého ruského podniku / Applying Discounted Cash Flow Valuation Method to Assess the Investment Project of a Small Russia-Based Company

Reznichenko, Nadezda January 2017 (has links)
The aim of the thesis is to determine the investment cash flows generated by the Finnish market development activities of a selected small Russia-based company, to perform an investment valuation using the discounted cash flow method, and to present improvements that can raise the company's attractiveness to potential investors. It includes a comprehensive investment valuation of the company at the seed stage of its development, covering an overview of the current financial situation and the use of a valuation model with stable-growth and terminal value determination. The compiled data serve as an example of a complete valuation model for capital injections into future projects of the company, allowing the author to draw particular conclusions on the company's funding perspectives. The results of the analysis are assessed critically and used as a basis for further suggestions for improvement.
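The discounted cash flow method with a stable-growth terminal value can be sketched in a few lines. This is a hedged, generic illustration: the cash flows, discount rate and growth rate are invented figures, not the company's data.

```python
# Sketch of a DCF valuation: discount the explicit-period cash flows, then
# add a Gordon-growth terminal value for the stable-growth phase.
def dcf_value(cash_flows, rate, terminal_growth):
    """Present value of forecast cash flows plus discounted terminal value."""
    pv = sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))
    last = cash_flows[-1]
    terminal = last * (1 + terminal_growth) / (rate - terminal_growth)
    pv_terminal = terminal / (1 + rate) ** len(cash_flows)
    return pv + pv_terminal

value = dcf_value([100.0, 120.0, 140.0], rate=0.12, terminal_growth=0.02)
print(round(value, 2))
```

Note how the terminal value dominates the result, a typical feature of seed-stage valuations, which is why the stable-growth assumptions deserve the critical assessment the thesis applies.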
550

Projektovanje kapacitivnog senzora ugla i ugaone brzine inkrementalnog tipa na fleksibilnim supstratima / Design of incremental capacitive angular position and speed sensor utilizing flexible substrates

Krklješ Damir 27 September 2016 (has links)
In this thesis, the application of flexible electronics to capacitive angular position and speed sensors, in the form of absolute and incremental encoders with a cylindrical structure, is investigated. Two structures, of the absolute and incremental encoder type, are considered, and an analysis of the influence of mechanical inaccuracies on the capacitance function is conducted. Two prototypes were developed and used for static and dynamic measurements of the capacitive sensor's characteristics. An electronics front-end for a capacitive two-channel incremental encoder with auto-calibration was developed.
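The two-channel incremental encoder output described above is decoded into position counts by tracking quadrature transitions. As a hedged sketch (the sample sequence is invented; a hardware front-end would do this with counters rather than in software), the standard transition table looks like this:

```python
# Sketch of quadrature decoding for a two-channel (A/B) incremental encoder:
# each valid transition between 2-bit AB states moves the count by +/-1.
STEP = {                       # (previous AB, current AB) -> count delta
    (0b00, 0b01): +1, (0b01, 0b11): +1, (0b11, 0b10): +1, (0b10, 0b00): +1,
    (0b00, 0b10): -1, (0b10, 0b11): -1, (0b11, 0b01): -1, (0b01, 0b00): -1,
}

def decode(samples):
    """samples: sequence of 2-bit AB states; returns the net position count."""
    position = 0
    for prev, cur in zip(samples, samples[1:]):
        position += STEP.get((prev, cur), 0)  # invalid jumps are ignored
    return position

forward = [0b00, 0b01, 0b11, 0b10, 0b00]  # one full quadrature cycle forward
print(decode(forward))
```

The 90° phase offset between the two channels is what makes the direction of rotation recoverable, and the same AB signals feed the auto-calibration in the front-end.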
