  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
81

Entwicklung und Implementierung einer Finite-Elemente-Software für mobile Endgeräte / Development and Implementation of a Finite-Elements-Software for Mobile Devices

Goller, Daniel, Glenk, Christian, Rieg, Frank 30 June 2015 (has links) (PDF)
The talk presents the development of a finite-element app for Android and discusses the advantages of gesture control for the post-processing of simple structures.
82

Kan själv!? : vad är verksamhetsnyttan för att själv skapa underlaget för en Chatbot? / Can do it myself!? : what is the business benefit of creating the training material for a Chatbot yourself?

Olausson, Erika January 2018 (has links)
This thesis is based on Design Science Research (DSR) and studies the business benefit of creating a custom-trained Chatbot. Within this study, DSR is the bridge between the creation and the theoretical studies. An artifact is created in the form of a conceptual model, to provide a future solution for a model that can be implemented within the organization. Teaching the Chatbot automatically from the organization's own website emerged as desirable. The artifact has been evaluated through an experiment with Natural Language Processing (NLP) packages and algorithms, as well as machine learning packages and methods. The authority studied in this thesis is the Riksantikvarieämbetet (RAÄ), whose data has been collected and processed with NLP and machine learning. The information collected in this study is approximately 70% of RAÄ's material from the website. What emerged from the evaluation is that a more domain-specific corpus is needed to create better clustering of the data. The artifact from the design proved to be appreciated, since the benefit of the training would not only result in a Chatbot that works for external users, but could also help internal users, as an internal tool for searching their own website.
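The evaluation step described here — vectorizing website text and clustering it — can be sketched along these lines (a minimal scikit-learn illustration; the documents and cluster count are invented placeholders, not RAÄ data):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Hypothetical stand-in documents; the thesis used ~70% of RAÄ's website text.
docs = [
    "ancient rune stones and burial mounds",
    "rune stones from the viking age",
    "chatbot answers questions about heritage",
    "a chatbot trained on website text",
]

# TF-IDF turns each document into a weighted term vector.
vectors = TfidfVectorizer().fit_transform(docs)

# Cluster the vectors; a richer, domain-specific corpus gives cleaner clusters.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)
print(labels.tolist())
```

With a real crawl, the cluster quality depends heavily on the corpus, which matches the thesis's finding that a more domain-specific corpus is needed.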
83

Método computacional automático para correção do efeito \"heel\" nas imagens radiográficas / An automatic computational method for correction of the heel effect in radiographic images

Marcelo Zanchetta do Nascimento 18 March 2005 (has links)
Radiographic diagnosis is based on the analysis of the film's optical density (OD) differences, which should be created only by the patient's anatomical structures. However, the intensity of the X-ray beam is not uniform, due to an effect intrinsic to the image acquisition equipment known as the heel effect. These variations harm both the visual analysis and the computer-aided (CAD) processing of small anatomical structures. The present work presents a computational method that corrects the optical density differences produced in the radiograph by the heel effect. The method was implemented using the Delphi programming environment, with routines in C and Matlab. It simulates the intensity distribution along the radiation field, determining the absorption path the photons undergo inside the target using the models of Kramers and of Fritz and Livingston. It computes the spatial correlation between the radiograph and the simulated image, locating the anode/cathode axis and the field center in both images using Pratt's statistical correlation function and the mapping function of Zitová and Flusser. It then computes, for each simulated point, the percentage of radiation received relative to the field center, as well as the gray-level percentage of each radiograph pixel, and corrects that value according to its counterpart in the simulation. The developed algorithm determined the position of the radiation field center with a precision of about 1% and eliminated approximately 90% of the heel effect in the radiograph, allowing objects to present optical densities consistent with their specific absorptions. A preliminary study showed that the method can be used as a preprocessing step for CAD systems.
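The core correction — rescaling each pixel by the simulated intensity at that point relative to the field center — can be sketched in numpy (a simplified illustration; the linear intensity ramp here is only a crude stand-in for the thesis's Kramers / Fritz–Livingston simulation):

```python
import numpy as np

h, w = 64, 64

# Hypothetical simulated field: intensity falls off along the anode-cathode
# axis, a crude stand-in for a physically modelled heel-effect profile.
ramp = np.linspace(0.7, 1.0, h).reshape(-1, 1)
simulated = np.tile(ramp, (1, w))

# Normalize the simulation to the field-center intensity.
center = simulated[h // 2, w // 2]
relative = simulated / center

# A uniform object imaged under this field shows a false gradient...
acquired = 1000.0 * simulated

# ...which dividing by the relative simulated intensity removes.
corrected = acquired / relative
print(float(corrected.std()))
```

For a uniform test object the corrected image becomes flat; on a real radiograph only the field non-uniformity is removed while anatomical contrast remains.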
84

Statistické metody ve stylometrii / Statistical methods in stylometry

Dupal, Pavel January 2017 (has links)
The aim of this thesis is to provide an overview of some commonly used methods in the area of authorship attribution (stylometry). The text begins with a recap of the field's history from the end of the 19th century to the present, and the required terminology from text mining is presented and explained. What follows is a selection of methods from multidimensional statistics (principal components analysis, cluster analysis) and machine learning (Support Vector Machines, Naive Bayes) and their application to stylometric problems, including several methods created specifically for this field (bootstrap consensus tree, contrast analysis). Finally, these methods are applied to a practical problem of authorship verification, based on a corpus built from the works of four internet writers.
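A common stylometric pipeline of the kind surveyed here — relative function-word frequencies projected with principal components analysis — can be sketched in numpy (a toy illustration with invented counts, not the thesis corpus):

```python
import numpy as np

# Invented counts of three function words in four short texts,
# two per hypothetical author.
counts = np.array([
    [12.0, 3.0, 7.0],   # author A, text 1
    [11.0, 4.0, 6.0],   # author A, text 2
    [3.0, 12.0, 2.0],   # author B, text 1
    [4.0, 11.0, 1.0],   # author B, text 2
])

# Normalize to relative frequencies so text length does not dominate.
freqs = counts / counts.sum(axis=1, keepdims=True)

# PCA via SVD of the centered matrix; the first principal component
# should separate the two writing styles.
centered = freqs - freqs.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
pc1 = centered @ vt[0]
print(pc1.round(3).tolist())
```

Texts by the same hypothetical author land on the same side of the first component, which is the usual visual evidence in stylometric PCA plots.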
85

Exploring NMF and LDA Topic Models of Swedish News Articles

Svensson, Karin, Blad, Johan January 2020 (has links)
The ability to automatically analyze and segment news articles by their content is a growing research field. This thesis explores topic modeling, an unsupervised machine learning method, applied to Swedish news articles to generate topics that describe and segment articles. Specifically, the algorithms non-negative matrix factorization (NMF) and latent Dirichlet allocation (LDA) are implemented and evaluated. Their usefulness in the news media industry is assessed by their ability to serve as a uniform categorization framework for news articles. The thesis fills a research gap by studying the application of topic modeling to Swedish news articles and shows that this can yield meaningful results. It is shown that Swedish text data requires extensive preparation for successful topic models and that nouns exclusively, and especially common nouns, are the most suitable words to use. Furthermore, the results show that both NMF and LDA are valuable as content analysis tools and categorization frameworks, but they have different characteristics and are hence optimal for different use cases. Lastly, the conclusion is that topic models have issues, since they can generate unreliable topics that could mislead news consumers, but that they can nonetheless be powerful methods for organizations to analyze and segment articles efficiently at scale. The thesis project is a collaboration with one of Sweden's largest media groups, and its results have led to a topic modeling implementation for large-scale content analysis to gain insight into readers' interests.
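The NMF side of this approach — factorizing an article-term matrix into article-topic and topic-word weights — can be sketched in plain numpy with multiplicative updates (a toy English illustration with invented counts; the thesis's Swedish corpus and noun-only preprocessing are not reproduced):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy term-count matrix: 4 "articles" x 6 "nouns"; the first two articles
# share politics nouns, the last two share sports nouns.
#          election  government  vote  goal  team  player
v = np.array([
    [4.0, 3.0, 2.0, 0.0, 0.0, 0.0],
    [3.0, 4.0, 3.0, 0.0, 0.0, 1.0],
    [0.0, 0.0, 0.0, 4.0, 3.0, 2.0],
    [0.0, 1.0, 0.0, 3.0, 4.0, 3.0],
])

# NMF by multiplicative updates (Lee & Seung): V ~ W @ H with W, H >= 0.
k = 2
w = rng.random((4, k)) + 0.1
h = rng.random((k, 6)) + 0.1
for _ in range(500):
    h *= (w.T @ v) / (w.T @ w @ h + 1e-9)
    w *= (v @ h.T) / (w @ h @ h.T + 1e-9)

# Each article's dominant topic comes from its row of W.
topics = w.argmax(axis=1)
print(topics.tolist())
```

On this block-structured toy matrix the two topics recover the politics/sports split; library implementations (e.g. scikit-learn's `NMF`) add regularization and smarter initialization.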
86

Detection of Human Emotion from Noise Speech

Nallamilli, Sai Chandra Sekhar Reddy, Kandi, Nihanth January 2020 (has links)
Detection of human emotion from speech is always a challenging task. Factors like intonation, pitch, and loudness vary across different human voices, so knowing the exact pitch, intonation, and loudness of a speech signal is essential, and this is what makes detection challenging. Some recordings exhibit high background noise, which affects the amplitude or pitch of the signal, so detailed knowledge of the speech signal's properties is mandatory for detecting emotion. Detection of emotion from speech signals is a recent research field; one scenario where it has been applied is in situations where human integrity and security are at risk. In this project we propose a set of features based on the decomposition signals from the discrete wavelet transform to characterize different types of negative emotions, such as anger, happiness, sadness, and desperation. The features are measured in three different conditions: (1) the original speech signals, (2) signals contaminated with noise or affected by the presence of a phone channel, and (3) signals obtained after processing with a speech enhancement algorithm. According to the results, when speech enhancement is applied, the detection of emotion in speech improves compared to the results obtained when the speech signal is highly contaminated with noise. Our objective is to use an artificial neural network, because the brain, itself built from neural networks, is the most efficient machine for recognizing speech. At the same time, artificial neural networks offer clear advantages, such as nonlinearity and high classification capability. We therefore use a feedforward neural network, which is suitable for the classification process, with the sigmoid function as the activation function.
The detection of human emotion from speech is achieved by training the neural network with features extracted from the speech. To obtain proper features, the background noise in the speech must first be removed. This can be done with filters: the wavelet transform is the filtering technique used here to remove the background noise and enhance the required features in the speech.
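A feedforward network with sigmoid activation, as described above, can be sketched in plain numpy (a toy binary classifier on invented 2-D "features", not the thesis's wavelet features):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Invented 2-D features for two well-separated "emotion" classes.
x = np.vstack([rng.normal(-1.0, 0.3, (50, 2)), rng.normal(1.0, 0.3, (50, 2))])
y = np.array([0.0] * 50 + [1.0] * 50).reshape(-1, 1)

# One hidden layer, sigmoid everywhere, trained by gradient descent.
w1, b1 = rng.normal(0, 0.5, (2, 4)), np.zeros(4)
w2, b2 = rng.normal(0, 0.5, (4, 1)), np.zeros(1)

for _ in range(2000):
    h = sigmoid(x @ w1 + b1)          # hidden layer
    p = sigmoid(h @ w2 + b2)          # output probability
    # Backpropagation of the cross-entropy loss gradient.
    d2 = (p - y) / len(x)
    d1 = (d2 @ w2.T) * h * (1 - h)
    w2 -= h.T @ d2; b2 -= d2.sum(axis=0)
    w1 -= x.T @ d1; b1 -= d1.sum(axis=0)

accuracy = float(((p > 0.5).astype(float) == y).mean())
print(accuracy)
```

On real data the inputs would be wavelet-derived features of denoised speech rather than synthetic points, and the output layer would have one unit per emotion class.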
87

Nástroje pro předzpracování rentgenových snímků / Radiography image preprocessing tools

Chmelař, Petr January 2018 (has links)
This thesis deals with the design and realization of methods for preprocessing and storing X-ray images. In the first part, methods for preprocessing series of X-ray images were designed and implemented, such as averaging after image registration and merging images into an HDR image using the Debevec method. In the following part, a survey of data formats was carried out, based on which a library for X-ray image storage was implemented. Both implemented methods reduce random noise by merging a series of images; applying the Debevec method additionally increases the dynamic range of the image.
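The noise-reduction effect of averaging a registered series can be sketched in numpy (a synthetic illustration: averaging N frames with independent noise cuts the noise standard deviation by roughly the square root of N):

```python
import numpy as np

rng = np.random.default_rng(1)

# A synthetic "scene" plus independent noise per exposure, standing in
# for an already-registered series of X-ray frames.
scene = rng.uniform(100.0, 200.0, (64, 64))
frames = [scene + rng.normal(0.0, 10.0, scene.shape) for _ in range(16)]

# Averaging the registered frames suppresses the random noise.
averaged = np.mean(frames, axis=0)

noise_single = float((frames[0] - scene).std())
noise_mean = float((averaged - scene).std())
print(round(noise_single / noise_mean, 1))  # roughly sqrt(16) = 4
```

With real radiographs the registration step must come first, since averaging misaligned frames blurs structure instead of suppressing noise.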
88

Rozpoznání paralingvistických signálů v řečovém projevu / Paralinguistic signals recognition in spoken dialogs

Mašek, Jan January 2010 (has links)
This document describes three methods for the detection and classification of paralinguistic expressions, such as laughing and crying, in ordinary speech by analysis of the audio signal. A database of recordings was designed specifically for this purpose. Since everyday dialogs may contain music, the database was extended by four new classes: speech, music, singing with music, and speech with background music. Feature extraction, feature reduction, and classification are steps common to all three methods; the methods differ in the details of the classification process. The first method, the straight approach, classifies all six classes at once. The second, the decision-tree-oriented approach, uses five intuitive sub-classifiers in a tree structure, and the final method classifies using an emotion-coupling approach. The best features were selected by feature evaluation using the F-ratio, and GMM classifiers were used for each classification step.
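Ranking features by F-ratio — between-class spread of the means over mean within-class variance — can be sketched like this (toy two-class data with invented features; a higher score marks a more discriminative feature):

```python
import numpy as np

rng = np.random.default_rng(2)

# Two classes, two invented features: the first separates the classes,
# the second is pure noise.
a = np.column_stack([rng.normal(0.0, 1.0, 100), rng.normal(0.0, 1.0, 100)])
b = np.column_stack([rng.normal(5.0, 1.0, 100), rng.normal(0.0, 1.0, 100)])

def f_ratio(a, b):
    # Spread of the class means around the overall mean, divided by
    # the average within-class variance, computed per feature.
    overall = np.vstack([a, b]).mean(axis=0)
    between = (a.mean(axis=0) - overall) ** 2 + (b.mean(axis=0) - overall) ** 2
    within = (a.var(axis=0) + b.var(axis=0)) / 2.0
    return between / within

scores = f_ratio(a, b)
print(scores.round(2).tolist())
```

Keeping only the highest-scoring features before GMM training is the reduction step the abstract describes.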
89

Metody stemmingu používané při dolování textu / Stemming Methods Used in Text Mining

Adámek, Tomáš January 2010 (has links)
The main theme of this master's thesis is text mining. The document focuses on English texts and their automatic preprocessing. The main part of the thesis analyses various stemming algorithms (Lovins, Porter, and Paice/Husk). Stemming is a procedure for automatically conflating semantically related terms via rule sets. The next part describes the design of an application for the various types of stemming algorithms, based on the Java platform, the Swing graphics library, and the MVC architecture. The next chapter describes the implementation of the application and of the stemming algorithms. The last part of the thesis describes experiments with the stemming algorithms and compares them with respect to text classification results.
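The rule-set idea behind these stemmers can be sketched with a toy suffix-stripping function (a deliberately simplified illustration, nowhere near the full Porter, Lovins, or Paice/Husk rule sets):

```python
# Ordered (suffix, replacement) rules, longest first, in the spirit of
# rule-based stemmers; the length check guards against over-stemming.
RULES = [("ational", "ate"), ("ization", "ize"),
         ("ness", ""), ("ing", ""), ("s", "")]

def toy_stem(word):
    for suffix, replacement in RULES:
        if word.endswith(suffix) and len(word) - len(suffix) >= 3:
            return word[: -len(suffix)] + replacement
    return word

words = ["relational", "normalization", "darkness", "connecting", "cats"]
print([toy_stem(w) for w in words])
# → ['relate', 'normalize', 'dark', 'connect', 'cat']
```

Real stemmers apply many such rule passes with measure conditions on the remaining stem, which is exactly the behavior the thesis's application lets users compare.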
90

Získávání znalostí z webových logů / Knowledge Discovery from Web Logs

Vlk, Vladimír January 2013 (has links)
This master's thesis deals with creating an application whose goal is to preprocess web logs and find association rules in them. The first part deals with the concept of Web mining. The second part is devoted to Web usage mining and the notions related to it. The third part deals with the design of the application, and the fourth with its implementation. The last section deals with experimentation with the application and the interpretation of the results.
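Finding association rules over preprocessed log sessions comes down to counting support and confidence, which can be sketched like this (toy page-visit sessions and invented thresholds, not the thesis's data):

```python
from itertools import combinations

# Toy preprocessed sessions: the set of pages each visitor requested.
sessions = [
    {"home", "products", "cart"},
    {"home", "products"},
    {"home", "blog"},
    {"home", "products", "cart"},
]

def support(itemset):
    # Fraction of sessions containing the whole itemset.
    return sum(itemset <= s for s in sessions) / len(sessions)

# Rules X -> Y over single pages, kept if support >= 0.5 and
# confidence = support(X,Y) / support(X) >= 0.9.
pages = sorted(set().union(*sessions))
rules = []
for a, b in combinations(pages, 2):
    for x, y in ((a, b), (b, a)):
        sup = support({x, y})
        if sup >= 0.5 and sup / support({x}) >= 0.9:
            rules.append((x, y, sup))

print(rules)
```

Here the rule "products -> home" holds with support 0.75 and confidence 1.0; real miners such as Apriori extend this counting to larger itemsets with pruning.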
