31

Μελέτη και συγκριτική αξιολόγηση μεθόδων δόμησης περιεχομένου ιστοτόπων : εφαρμογή σε ειδησεογραφικούς ιστοτόπους / Study and comparative evaluation of methods for structuring website content: application to news websites

Στογιάννος, Νικόλαος-Αλέξανδρος 20 April 2011 (has links)
Organizing a website's content so as to increase the findability of the information provided and ease the successful completion of typical user tasks is one of the primary goals of website designers. Existing techniques from the field of Human-Computer Interaction that support this goal are often neglected because of the time and cost they demand. This is especially true of news sites, where both sheer size and the daily addition and modification of content call for more efficient content-organization techniques. In this thesis we investigate the effectiveness of a method called AutoCardSorter, proposed in the literature for semi-automatically categorizing web pages based on the semantic similarity of their content, in the context of organizing the information of news sites. To this end, five studies were conducted in which the categorizations produced by participants in corresponding open and closed card-sorting studies were compared, both quantitatively and qualitatively, with the output of AutoCardSorter. The analysis showed that AutoCardSorter produced article groupings in close agreement with those of the study participants, but in a significantly more efficient way, confirming previous similar studies on websites of other thematic domains (e.g., travel, education). The studies also showed that a slightly modified version of AutoCardSorter places new articles into pre-existing categories with considerably lower agreement with the participants' choices. The thesis concludes with directions for improving the effectiveness of AutoCardSorter, both for organizing the content of news sites and in general.
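
The abstract does not reproduce AutoCardSorter's internals, but the semi-automatic categorization it describes — grouping pages by the semantic similarity of their content — can be illustrated with a short sketch. The snippet below is a hypothetical stand-in: the article texts are invented, and TF-IDF cosine similarity replaces the semantic-similarity engine of the published method; the hierarchical clustering cut into a fixed number of groups mirrors the general card-sorting workflow.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

# Invented article snippets standing in for news-site pages.
articles = {
    "eu-summit":     "EU leaders meet in Brussels to discuss the economic crisis.",
    "rate-decision": "The central bank raised rates to fight the economic crisis.",
    "cup-final":     "The cup final was decided by a dramatic penalty shootout.",
    "transfer-news": "The club signed the striker after the cup final ended.",
}

# Pairwise semantic similarity; TF-IDF cosine stands in for the LSA
# similarity engine used by the published method.
vectors = TfidfVectorizer(stop_words="english").fit_transform(articles.values())
similarity = cosine_similarity(vectors)

# Hierarchical clustering over the distances, cut into a fixed number
# of groups, analogous to producing card-sort categories.
condensed = squareform(1.0 - similarity, checks=False)
tree = linkage(condensed, method="average")
labels = fcluster(tree, t=2, criterion="maxclust")

for page, label in zip(articles, labels):
    print(f"category {label}: {page}")
```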
32

Alternative Approaches for the Registration of Terrestrial Laser Scanners Data using Linear/Planar Features

Dewen Shi (9731966) 15 December 2020 (has links)
Static terrestrial laser scanners have been increasingly used in three-dimensional data acquisition since they can rapidly provide accurate, high-resolution measurements. Several scans from multiple viewpoints are necessary to achieve complete coverage of the surveyed objects, owing to occlusions and large object size. Therefore, to reconstruct three-dimensional models of the objects, registration is required to transform the individual scans into a common reference frame. This thesis introduces three alternative approaches for the coarse registration of two adjacent scans: a feature-based approach, a pseudo-conjugate point-based method, and a closed-form solution. In the feature-based approach, linear and planar features in the overlapping area of adjacent scans are selected as registration primitives. The pseudo-conjugate point-based method utilizes non-corresponding points along common linear and planar features to estimate the transformation parameters; it is simpler than the feature-based approach since the partial derivatives are easier to compute. In the closed-form solution, the rotation matrix is first estimated using a unit quaternion, which is a concise description of the rotation; the translation parameters are then estimated from non-corresponding points along the linear or planar features using the pseudo-conjugate point-based method. Alternative approaches for fitting a line or plane to data with errors in three-dimensional space are also investigated.

Experiments were conducted using simulated and real datasets to verify the effectiveness of the introduced registration procedures and feature-fitting approaches. The two proposed line-fitting approaches were tested on simulated datasets; the results suggest that they produce identical line parameters and variance-covariance matrices. The three registration approaches were tested on both simulated and real datasets. On the simulated datasets, all three approaches produced equivalent transformation parameters using linear or planar features, and the comparison between simulated linear and planar features shows that both feature types yield equivalent registration results. On the real datasets, the three registration approaches using linear or planar features also produced equivalent results; in addition, the approaches using planar features produced better results than those using linear features. The experiments show that the pseudo-conjugate point-based approach is easier to implement than the feature-based approach. Both the pseudo-conjugate point-based method and the feature-based approach are nonlinear, so they require an initial guess of the transformation parameters. In contrast, the closed-form solution is linear and can therefore register two adjacent scans without any initial guess. Hence, the pseudo-conjugate point-based method and the closed-form solution are the preferred approaches for coarse registration using linear or planar features. In practice, planar features are preferable to linear features, since linear features are derived indirectly by intersecting neighboring planar features; to obtain enough lines with different orientations, planes that are far apart from each other have to be extrapolated to derive lines.
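
Of the three approaches, the closed-form rotation step lends itself to a compact illustration. The sketch below implements the classic unit-quaternion solution (Horn, 1987) for paired unit direction vectors — the kind of closed-form estimate the abstract describes — as a minimal sketch under that assumption, not the thesis's exact formulation; the pseudo-conjugate point translation step is omitted.

```python
import numpy as np

def rotation_from_quaternion_fit(a, b):
    """Closed-form estimate of R minimizing sum ||b_i - R a_i||^2
    over paired unit direction vectors, via Horn's unit quaternion."""
    M = sum(np.outer(ai, bi) for ai, bi in zip(a, b))
    Sxx, Sxy, Sxz = M[0]
    Syx, Syy, Syz = M[1]
    Szx, Szy, Szz = M[2]
    N = np.array([
        [Sxx + Syy + Szz, Syz - Szy,        Szx - Sxz,        Sxy - Syx],
        [Syz - Szy,       Sxx - Syy - Szz,  Sxy + Syx,        Szx + Sxz],
        [Szx - Sxz,       Sxy + Syx,       -Sxx + Syy - Szz,  Syz + Szy],
        [Sxy - Syx,       Szx + Sxz,        Syz + Szy,       -Sxx - Syy + Szz],
    ])
    # The optimal quaternion is the eigenvector of the largest eigenvalue.
    eigvals, eigvecs = np.linalg.eigh(N)
    w, x, y, z = eigvecs[:, -1]
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

# Quick check: recover an exact 90-degree rotation about the z-axis.
Rz = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
a = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0]), np.array([0.0, 0.0, 1.0])]
b = [Rz @ v for v in a]
print(np.allclose(rotation_from_quaternion_fit(a, b), Rz))  # True
```

Because the solution is linear in this sense, it needs no initial guess — which is exactly the advantage over the two nonlinear approaches noted above.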
33

Extraction automatique de caractéristiques malveillantes et méthode de détection de malware dans un environnement réel / Automatic extraction of malicious features and method for detecting malware in a real environment

Angoustures, Mark 14 December 2018 (has links)
To cope with the sheer volume of malware, security researchers have developed automatic dynamic malware-analysis tools such as the Cuckoo sandbox. These analyses are only partially automatic, as they require a human security expert to detect and extract suspicious behaviors. To avoid this tedious work, we propose a methodology for automatically extracting the dangerous behaviors reported by sandboxes. First, we generate activity reports for malware samples with the Cuckoo sandbox. We then group malware belonging to the same family using the Avclass algorithm, which aggregates the malware labels given by VirusTotal. Next, we weight the most distinctive behaviors of each malware family with the TF-IDF method, and finally we aggregate malware families with similar behaviors using LSA.

We further detail a method for detecting malware based on the same type of behaviors found above. Since this detection is performed in a real environment, we developed probes capable of continuously generating behavior traces of running programs. From these traces we build a graph representing the tree of running programs together with their behaviors; the graph is updated incrementally as new traces are generated. To measure the dangerousness of programs, we run the personalized (topic-sensitive) PageRank algorithm on this graph each time it is updated. The algorithm ranks processes by dangerousness according to their suspicious behaviors. These scores are then plotted as a time series to visualize the evolution of each program's dangerousness score. Finally, we developed several alert indicators for dangerous programs running on the system.
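
As a hypothetical illustration of the detection step, the sketch below scores a small process tree with personalized PageRank, biasing the ranking toward processes exhibiting suspicious behaviors. The process names, behavior labels, and edge structure are all invented; only the idea of a personalization vector derived from behavior observations follows the approach described above.

```python
import networkx as nx

# Invented process tree: edges point from parent to spawned child.
G = nx.DiGraph()
G.add_edges_from([
    ("explorer.exe", "winword.exe"),
    ("winword.exe", "cmd.exe"),
    ("cmd.exe", "powershell.exe"),
])

# Invented behavior observations attached to each process.
behaviors = {
    "explorer.exe":   [],
    "winword.exe":    ["spawns_shell"],
    "cmd.exe":        [],
    "powershell.exe": ["registry_persistence", "network_beacon"],
}
suspicious = {"spawns_shell", "registry_persistence", "network_beacon"}

# Personalization vector: bias the random walk toward processes that
# exhibit known-suspicious behaviors (values must stay positive).
weights = {proc: 1 + sum(b in suspicious for b in obs)
           for proc, obs in behaviors.items()}

scores = nx.pagerank(G, personalization=weights)
for proc, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{score:.3f}  {proc}")
```

Rerunning the scoring after each graph update and recording the scores per process yields the time series of dangerousness described in the abstract.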
34

Clustering and Summarization of Chat Dialogues : To understand a company’s customer base / Klustring och Summering av Chatt-Dialoger

Hidén, Oskar, Björelind, David January 2021 (has links)
The Customer Success department at Visma handles about 200,000 customer chats each year; the chat dialogues are stored and contain both questions and answers. To get an idea of what customers ask about, the Customer Success department has to read a random sample of the chat dialogues manually. This thesis develops and investigates an analysis tool for the chat data based on clustering and summarization, aiming to decrease the time spent on the analysis and to increase its quality. Models for clustering (K-means, DBSCAN and HDBSCAN) and extractive summarization (K-means, LSA and TextRank) are compared. Each algorithm is combined with three different text representations (TFIDF, S-BERT and FastText) to create models for evaluation. These models are evaluated against a test set created for the purpose of this thesis. Silhouette Index and Adjusted Rand Index are used to evaluate the clustering models, while the ROUGE measure together with a qualitative evaluation is used to evaluate the extractive summarization models. In addition, the best clustering model is further evaluated to understand how different data sizes impact performance. TFIDF unigrams together with HDBSCAN or K-means obtained the best results for clustering, whereas FastText together with TextRank obtained the best results for extractive summarization. This thesis applies known models to the textual domain of customer chat dialogues, something that, to our knowledge, has not previously been done in the literature.
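
A minimal sketch of the best-performing clustering configuration reported here (TF-IDF unigrams with K-means) is shown below on a few invented chat messages; the real pipeline would run on the stored dialogues and also compare against HDBSCAN.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Invented chat messages standing in for the stored dialogues.
chats = [
    "How do I export my invoices to PDF?",
    "Invoice export fails with an error message",
    "I forgot my password and cannot log in",
    "The password reset email never arrives",
]

# TF-IDF unigram representation, as in the best-performing setup.
X = TfidfVectorizer(ngram_range=(1, 1)).fit_transform(chats)

# K-means with a hand-picked number of clusters; the thesis also
# compares HDBSCAN, which chooses the number of clusters itself.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
for label, chat in zip(km.labels_, chats):
    print(label, chat)
```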
35

文字探勘在總體經濟上之應用- 以美國聯準會會議紀錄為例 / The application of text mining on macroeconomics : a case study of FOMC minutes

黃于珊, Huang, Yu Shan Unknown Date (has links)
This study uses the 193 FOMC minutes published between 1993 and March 2017 as research material. Following a supervised learning approach, latent semantic analysis (LSA) is first used to extract the latent semantics of samples in which interest rates were raised, lowered, or left unchanged, and linear discriminant analysis (LDA) is then applied for classification. In addition, the study applies exploratory data analysis (EDA), an unsupervised learning approach, to search for relevant variables in the FOMC minutes. The results show that LSA can broadly distinguish the characteristics of rate-increase, rate-decrease, and no-change samples, while EDA can identify important words across different periods or categories, reveal structural changes in the texts, and also cluster the documents.
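
A hypothetical sketch of the supervised pipeline described above — LSA via truncated SVD over TF-IDF features, followed by LDA classification — is shown below. The texts and labels are invented stand-ins for the 193 FOMC minutes.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline

# Invented stand-ins for the FOMC minutes and their rate outcomes.
minutes = [
    "inflation pressures warrant a firmer policy stance",
    "labor markets tightened further and prices accelerated",
    "weak demand argues for additional accommodation",
    "downside risks justify lowering the target rate",
    "conditions remain consistent with the current target range",
    "the committee judged it prudent to keep policy unchanged",
]
labels = ["hike", "hike", "cut", "cut", "hold", "hold"]

# LSA = truncated SVD over TF-IDF features; LDA then classifies the
# low-dimensional semantic representation.
model = make_pipeline(
    TfidfVectorizer(),
    TruncatedSVD(n_components=2, random_state=0),
    LinearDiscriminantAnalysis(),
)
model.fit(minutes, labels)
print(model.predict(["officials saw the need for further tightening"]))
```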
36

Résumé automatique de parole pour un accès efficace aux bases de données audio / Automatic speech summarization for efficient access to audio databases

Favre, Benoit 19 March 2007 (has links) (PDF)
The advent of digital storage makes it possible to store large quantities of speech at low cost. Despite recent advances in audio information retrieval, such documents remain hard to exploit because of the time needed to listen to them. We attempt to mitigate this drawback by producing an automatic spoken summary from the most important information. To do so, an extractive summarization method is applied to speech content that has been automatically transcribed and structured. The rich transcription is produced with the Speeral and Alize tools developed at LIA. We complete this structuring pipeline with sentence segmentation and named-entity detection, two features that are important for extractive summarization. The proposed summarization method takes into account the constraints imposed by audio data and by interactions with the user, and integrates a projection of sentences into a pseudo-semantic space. The modules developed come together in a complete demonstrator that facilitates the study of user interactions. In the absence of evaluation data for speech, the summarization method is evaluated on text in the DUC 2006 campaign, and we simulate the impact of spoken content by artificially degrading the data of that campaign. Finally, the whole processing chain is implemented in a demonstrator providing access to the radio broadcasts of the ESTER campaign, for which we propose an interactive timeline complementary to the spoken summary.
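
The extraction step can be illustrated with a generic, MMR-style greedy selector that balances a sentence's relevance against its redundancy with already-selected sentences. This is a hedged stand-in, not the thesis's method: the pseudo-semantic projection and the audio-specific constraints are not reproduced, and the transcript sentences are invented.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def mmr_summary(sentences, budget=2, lam=0.7):
    """Greedy MMR: trade off relevance to the document centroid
    against redundancy with sentences already selected."""
    X = TfidfVectorizer().fit_transform(sentences)
    centroid = np.asarray(X.mean(axis=0))
    relevance = cosine_similarity(X, centroid).ravel()
    pairwise = cosine_similarity(X)
    chosen = []
    while len(chosen) < budget:
        best, best_score = None, -np.inf
        for i in range(len(sentences)):
            if i in chosen:
                continue
            redundancy = max((pairwise[i][j] for j in chosen), default=0.0)
            score = lam * relevance[i] - (1 - lam) * redundancy
            if score > best_score:
                best, best_score = i, score
        chosen.append(best)
    return [sentences[i] for i in sorted(chosen)]

# Invented transcript sentences for a quick demonstration.
sentences = [
    "The committee approved the new budget for public transport.",
    "Funding for public transport will increase next year.",
    "A local team won the regional chess tournament.",
    "The budget vote passed after a short debate.",
]
print(mmr_summary(sentences, budget=2))
```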
37

Reconfiguration Algorithms For Distribution Automation

Rao, Kavalipati S Papa 08 1900 (has links) (PDF)
No description available.
38

Investigating the relationship between the business performance management framework and the Malcolm Baldrige National Quality Award framework.

Hossain, Muhammad Muazzem 08 1900 (has links)
The business performance management (BPM) framework helps an organization continuously adjust and successfully execute its strategies. BPM increases flexibility by giving managers early alerts about changes and, as a result, allows faster responses to such changes. The Malcolm Baldrige National Quality Award (MBNQA) framework provides a basis for self-assessment and a systems perspective for managing an organization's key processes for achieving business results. The MBNQA framework is the more comprehensive framework and encapsulates the underlying constructs of the BPM framework. The objectives of this dissertation are fourfold: (1) to validate the underlying relationships presented in the 2008 MBNQA framework; (2) to explore the MBNQA framework at the dimension level, and to develop and test constructs measured at that level in a causal model; (3) to validate and create a common general framework for the business performance model by integrating the practitioner literature with basic theory, including existing MBNQA theory; and (4) to integrate the BPM and MBNQA frameworks into a new framework (the BPM-MBNQA framework) that can guide organizations in their journey toward achieving and sustaining competitive and strategic advantages. The study pursues these objectives through a combination of methodologies, including literature reviews, expert opinions, interviews, presentation feedback, content analysis, and latent semantic analysis.

An initial BPM framework was developed based on literature reviews and expert opinions. Because there is a paucity of academic research on business performance management, this study reviewed the practitioner literature on BPM and, from the numerous organization-specific BPM models, developed a generic, conceptual BPM framework. To obtain valuable feedback, this initial framework was presented to Baldrige Award recipients (BARs) and selected academicians from across the United States who participated in the Fall Summit 2007, held at Caterpillar Financial Headquarters in Nashville, TN on October 1 and 2, 2007; their feedback allowed the proposed BPM framework to be refined and improved. This study also developed a variant of traditional latent semantic analysis (LSA), called causal latent semantic analysis (cLSA), that enables causal models to be tested using textual data. This method was used to validate the 2008 MBNQA framework based on article abstracts on the Baldrige Award and program published in both practitioner and academic journals from 1987 to 2009. The cLSA was likewise used to validate the BPM framework using the full body text of all articles published in the practitioner journal Business Performance Management Magazine since its inception in 2003. The results provide the first cLSA study of these frameworks, and this is also the first study to examine all the causal relationships within the MBNQA and BPM frameworks.
39

Rekonstrukce sportovního letounu M-2 Skaut - zástavba pohonné jednotky. / Reconstruction of Sport Aircraft M-2 Skaut - Mounting of Power Unit.

Zakopal, Libor January 2008 (has links)
This diploma thesis deals with the design of an engine mount for the M-2 Skaut aircraft on the basis of the LSA and CS-VLA specifications. It is divided into several sections: analysis of the input conditions, determination of the factors that influence the geometry, the main design, and its verification using the finite element method (FEM). The FEM results are compared with an analytical solution to determine their accuracy. The thesis also covers the selection of suitable propellers for this engine, along with a basic design of the oil and fuel systems.
40

Konstrukce vznětového leteckého jednoválcového motoru s protiběžnými písty / Design of Diesel Aircraft Engine One-cylinder Engine with Contra Rotating Pistons

Svoboda, Tomáš January 2013 (has links)
This diploma thesis deals with the design of a crankshaft for a two-stroke opposed-piston diesel engine. The theoretical part reviews the history of opposed-piston engines, their advantages, and a comparison with competing engines in today's light aircraft. In the practical part, a balancing scheme is chosen and a CAD model of the crankshaft is designed; the geometry of this model is then checked against fatigue damage. In the final part, a propeller and an appropriate reduction gearbox are selected.
