11

Analysis of the Trend of Historical Temperature and Historic CO2 Levels Over the Past 800,000 Years by Short Time Cross Correlation Technique

Patel, Tejashkumar January 2021 (has links)
Carbon dioxide concentration in Earth's atmosphere is currently at 417 parts per million (ppm) and keeps rising. Historic CO2 levels and historic temperature levels have been cycling over the past 800,000 years. To study the trend of CO2 and temperature over the past 800,000 years, one needs to find the relation between the historic CO2 and historic temperature records. In this project, we perform different tasks to identify which of CO2 and temperature drives the trend. The cross-correlation technique is used to find the relation between two random signals, and the temperature and CO2 data are treated as two such signals. Resampling by interpolation is applied to both the CO2 and temperature data to change their sampling rate. The short-time cross-correlation technique is then applied to the CO2 and temperature data over different time windows to find the time lag, i.e., how far the signals are offset from each other.
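As a rough illustration of the windowed cross-correlation step described above, the sketch below (our own illustration, not the author's code) slides a window along two uniformly sampled series and reports the lag at which their cross-correlation peaks in each window; resampling onto a common grid (e.g. with np.interp) would precede this for the ice-core records. The window length, step and synthetic test signals are assumptions.

```python
# Minimal sketch of short-time (windowed) cross-correlation lag estimation.
# Assumes both series are already interpolated onto the same uniform grid.
import numpy as np

def windowed_lag(x, y, win, step, dt):
    """Return, for each window, the lag (in units of dt) at which the
    cross-correlation of x and y is largest."""
    lags = []
    for start in range(0, len(x) - win, step):
        xs = x[start:start + win] - x[start:start + win].mean()
        ys = y[start:start + win] - y[start:start + win].mean()
        c = np.correlate(xs, ys, mode="full")          # all relative offsets
        lags.append((np.argmax(c) - (win - 1)) * dt)   # peak offset -> time lag
    return np.array(lags)

# Synthetic check: y is x delayed by 5 samples, so every window should
# recover a lag of about 5 samples (sign depends on the convention used).
dt = 1.0
t = np.arange(0.0, 1000.0, dt)
x = np.sin(2 * np.pi * t / 100.0)
y = np.roll(x, 5)
print(windowed_lag(x, y, win=200, step=100, dt=dt))
```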
12

Moderní rozpoznávače řečové aktivity / Modern Speech/pause Detectors

Adamec, Michal January 2008 (has links)
This master's thesis deals with standard speech/pause detection methods: voice activity detectors based on the principles of short-time energy, the real spectrum and short-time intensity, and on combinations of these three detectors. The following parts cover further voice activity detectors based on hidden Markov models and the detector described in the ITU-T G.729 standard. All of the detectors mentioned above were implemented in the MATLAB environment, and a user interface was created for testing the implemented detectors. Finally, the detectors were evaluated with ROC characteristics based on the test results.
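For orientation, the simplest of the detectors listed above (a short-time-energy detector) can be sketched as follows. This is not the thesis implementation, which was written in MATLAB; the frame length, hop and threshold factor are illustrative values.

```python
# Minimal short-time-energy voice activity detector (illustrative sketch).
import numpy as np

def energy_vad(signal, frame_len=256, hop=128, factor=2.0):
    """Mark a frame as speech when its short-time energy exceeds a
    threshold derived from the median energy (a crude noise-floor estimate)."""
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len, hop)]
    energy = np.array([np.sum(f ** 2) for f in frames])
    threshold = factor * np.median(energy)
    return energy > threshold          # True = speech frame

# Toy example: silence, then a burst of "speech"-like noise, then silence.
rng = np.random.default_rng(0)
sig = np.concatenate([0.01 * rng.standard_normal(4000),
                      0.5 * rng.standard_normal(4000),
                      0.01 * rng.standard_normal(4000)])
print(energy_vad(sig).astype(int))
```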
13

Deep Learning för klassificering av kundsupport-ärenden / Deep Learning for Classification of Customer Support Errands

Jonsson, Max January 2020 (has links)
Companies and organizations providing customer support via email will over time build up a large corpus of text documents. With the advances made in machine learning, the possibilities to use this data to improve customer support efficiency are steadily increasing. The aim of this study is to analyze and evaluate the use of Deep Learning methods for automating the process of classifying support errands. The study is based on a Swedish company's domain, where the classification is made within the company's predefined categories. A dataset was built by extracting email support errands (subject and body pairs) from the company's support database; every errand belonged to one of nine separate categories. The evaluation was done by analyzing the change in classification accuracy when different methods for data cleaning were used and when the neural networks were built with different architectures. The scope was limited to examining different combinations of Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN) in the form of both unidirectional and bidirectional Long Short-Term Memory (LSTM) cells. The results show no increase in classification accuracy for any of the examined data cleaning methods. However, the results also show that reducing the vocabulary has no negative impact on accuracy; such a reduction may still be useful to limit other side effects, such as the time required to train a network, and possibly to help prevent overfitting. Among the examined network architectures, CNN outperformed RNN on the dataset used. The most accurate architecture was a network with a single convolution per pipeline, which reached classification accuracies of 79.3 and 75.4 percent on two different test sets. The results also show that some categories are harder to classify than others, because they are not distinct enough from the remaining categories in the dataset.
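A minimal sketch of the kind of single-convolution classifier that performed best in the study is given below. It is not the author's model; the vocabulary size, sequence length, embedding width and other hyperparameters are assumptions.

```python
# Illustrative "one convolution per pipeline" text classifier:
# token embedding -> 1-D convolution -> global max pooling -> softmax over 9 classes.
import numpy as np
import tensorflow as tf

vocab_size, seq_len, n_classes = 20000, 200, 9     # assumed values

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(vocab_size, 128),
    tf.keras.layers.Conv1D(128, kernel_size=5, activation="relu"),
    tf.keras.layers.GlobalMaxPooling1D(),
    tf.keras.layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Shape check with random token ids; real input would be the integer-encoded
# (subject + body) sequences, with the category index as the label.
dummy = np.random.randint(0, vocab_size, size=(2, seq_len))
print(model(dummy).shape)                          # (2, 9): probability per category
# model.fit(x_train, y_train, epochs=5)            # training call, data not shown here
```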
14

BLIND SOURCE SEPARATION USING FREQUENCY DOMAIN INDEPENDENT COMPONENT ANALYSIS

E., Okwelume Gozie, Kingsley, Ezeude Anayo January 2007 (has links)
Our thesis work focuses on frequency-domain Blind Source Separation (BSS), in which the received mixed signals are converted into the frequency domain and Independent Component Analysis (ICA) is applied to the instantaneous mixtures at each frequency bin; this approach also reduces the computational complexity. We further investigate the well-known problems associated with frequency-domain BSS using ICA, the permutation and scaling ambiguities, using methods proposed by other researchers. This is the main target of the project: to solve the permutation and scaling ambiguities in real-time applications. / Gozie: modebelu2001@yahoo.com Anayo: ezeudea@yahoo.com
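The per-bin model behind these ambiguities can be written out as follows (our notation, not taken from the thesis). After an STFT, a convolutive time-domain mixture becomes approximately instantaneous in each frequency bin, ICA is run independently per bin, and the result is only determined up to a bin-dependent permutation and scaling:

```latex
% Instantaneous mixing model in frequency bin f (X: mixtures, S: sources):
\[
  \mathbf{X}(f,\tau) \approx \mathbf{A}(f)\,\mathbf{S}(f,\tau)
\]
% ICA applied independently in each bin yields
\[
  \mathbf{Y}(f,\tau) = \mathbf{W}(f)\,\mathbf{X}(f,\tau)
                     = \mathbf{P}(f)\,\mathbf{D}(f)\,\mathbf{S}(f,\tau),
\]
% where the permutation matrix P(f) and the diagonal scaling matrix D(f) are
% unknown and may differ from bin to bin. Aligning P(f) across bins and fixing
% D(f) are exactly the permutation and scaling problems that must be solved
% before the inverse STFT can reconstruct the separated sources.
```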
15

Entre flexibilité et sécurité : l'accompagnement des entreprises et des mobilités professionnelles. : essais empiriques de microéconométrie du marché du travail / Between flexibility and security : accompanying firms and professional mobilities. : a microeconometric analysis with French data

Calavrezo, Oana 30 November 2009 (has links)
This PhD dissertation provides empirical contributions on two research topics related to flexicurity: the use of the short-time compensation (STC) program by French establishments, and the role of the three “bases” of flexicurity (employment contract, skills and territory; Freyssinet, 2006) in making professional career paths secure. For the first topic, we develop a methodology to analyse the efficiency of STC and to check whether it can be considered a flexicurity tool. According to two efficiency criteria (avoiding redundancies and establishment exit), STC does not protect employment and is therefore not efficient: between 1995 and 2005 it was mainly a flexibility tool and cannot be seen as a tool that follows the principles of flexicurity. For the second topic, we analyse how the “bases” of flexicurity make professional career paths secure, focusing on three professional mobilities: first employment to employment, employment to employment, and unemployment to employment. We highlight the importance of temporary contracts, firm networks and the individual's place of residence in the process of making career paths secure. We show that: (i) fixed-term contracts secure professional trajectories provided the link between individuals and firms lasts long enough; (ii) firm networks support the acquisition of skills, making professional mobility easier; (iii) a “disadvantaged” place of residence appears to be an obstacle to making a career path secure.
16

Polpação kraft de cavacos de espessura reduzida / Kraft pulping of thin chips

Schmidt, Flavia 28 August 2014 (has links)
The objective of this work was to evaluate the performance of thin chips in cooking processes that use shorter times and higher temperatures, in order to provide a basis for establishing a new process and/or optimizing the systems currently used on an industrial scale. Samples of reference chips (3.6 mm, obtained by the conventional chipping process) and of thin chips of 0.5 mm, 1 mm and 2 mm (obtained with a particle generator), from a 7-year-old Eucalyptus urophylla x Eucalyptus grandis hybrid, were evaluated for basic density, chemical composition and fiber morphology. After characterization, the materials were submitted to the conventional kraft pulping process; three levels of H factor and four levels of active alkali were tested in order to establish equations representing the process that could be used for future cooks. From these equations, screened yield, residual active alkali, consumed active alkali, dry solids content, H factor and active alkali were calculated for a kappa number of 18. The results show that the basic density, chemical composition and fiber morphology of the wood were not affected by the chipping process. However, the bulk density was affected by chip thickness: 0.037, 0.081, 0.110 and 0.141 g·cm⁻³ for the 0.5 mm, 1 mm, 2 mm and 3.6 mm chips, respectively. In the pulping process the different thicknesses behaved similarly, but the 2 mm chips gave the best kappa number at the H factor of greatest interest (451), with the same yield as the other thicknesses. In the regression analysis, the 2 mm chips showed better yield, lower solids content and a lower H factor (461), consistent with pulping processes that use a reduced cooking time.
17

On the Short-Time Fourier Transform and Gabor Frames generated by B-splines

Fredriksson, Henrik January 2012 (has links)
In this thesis we study the short-time Fourier transform. The short-time Fourier transform of a function f(x) is obtained by restricting the function to a short time segment and taking the Fourier transform of this restriction; this gives local information about f in time and frequency simultaneously. To get a smooth frequency localization one wants to use a smooth window, which means that the windows will overlap. The continuous short-time Fourier transform is not appropriate for practical purposes, so we want a discrete representation of f. Using Gabor theory, we can write a function f as a linear combination of time and frequency shifts of a fixed window function g with lattice parameters a, b > 0. We show that if the window function g has compact support, then g generates a Gabor frame G(g; a; b). We also show that for such a g there exists a dual frame such that both G(g; a; b) and its dual frame have compact support and decay fast in the Fourier domain. Based on [2], we show that B-splines generate a pair of Gabor frames.
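For reference, the two central objects can be stated compactly as follows (standard definitions in our notation, not quoted from the thesis):

```latex
% Short-time Fourier transform of f with window g:
\[
  V_g f(t,\omega) = \int_{\mathbb{R}} f(x)\,\overline{g(x-t)}\,e^{-2\pi i \omega x}\,dx .
\]
% Gabor system with lattice parameters a, b > 0:
\[
  \mathcal{G}(g,a,b) = \{\, e^{2\pi i m b x}\, g(x - n a) \;:\; m,n \in \mathbb{Z} \,\},
\]
% which is a frame for L^2(R) when there are constants 0 < A <= B with
\[
  A\,\|f\|_2^2 \;\le\; \sum_{m,n\in\mathbb{Z}}
  \bigl|\langle f,\; e^{2\pi i m b\,\cdot}\, g(\cdot - n a)\rangle\bigr|^2
  \;\le\; B\,\|f\|_2^2
  \qquad \text{for all } f \in L^2(\mathbb{R}).
\]
```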
18

Short-Time Phase Spectrum in Human and Automatic Speech Recognition

Alsteris, Leigh January 2006 (has links)
Incorporating information from the short-time phase spectrum into a feature set for automatic speech recognition (ASR) may possibly serve to improve recognition accuracy. Currently, however, it is common practice to discard this information in favour of features that are derived purely from the short-time magnitude spectrum. There are two reasons for this: 1) the results of some well-known human listening experiments have indicated that the short-time phase spectrum conveys a negligible amount of intelligibility at the small window durations of 20-40 ms used for ASR spectral analysis, and 2) using the short-time phase spectrum directly for ASR has proven difficult from a signal processing viewpoint, due to phase-wrapping and other problems. In this thesis, we explore the possibility of using short-time phase spectrum information for ASR by considering the two points mentioned above. To address the first point, we conduct our own set of human listening experiments. Contrary to previous studies, our results indicate that the short-time phase spectrum can indeed contribute significantly to speech intelligibility over small window durations of 20-40 ms. Also, the results of these listening experiments, in addition to some ASR experiments, indicate that at least part of this intelligibility may be supplementary to that provided by the short-time magnitude spectrum. To address the second point (i.e., the signal processing difficulties), it may be necessary to transform the short-time phase spectrum into a more physically meaningful representation from which useful features could possibly be extracted. Specifically, we investigate the frequency-derivative (or group delay function, GDF) and the time-derivative (or instantaneous frequency distribution, IFD) as potential candidates for this intermediate representation. We have performed various experiments which show that the GDF and IFD may be useful for ASR. We conduct several ASR experiments to test a feature set derived from the GDF. We find that, in most cases, these features perform worse than the standard MFCC features. Therefore, we suggest that a short-time phase spectrum feature set may ultimately be derived from a concatenation of information from both the GDF and IFD representations. For best performance, the feature set may also need to be concatenated with short-time magnitude spectrum information. Further to addressing the two aforementioned points, we also discuss a number of other speech applications in which the short-time phase spectrum has proven to be very useful. We believe that an appreciation for how the short-time phase spectrum has been used for other tasks, in addition to the results of our research, will provoke fellow researchers to also investigate its potential for use in ASR.
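To give a concrete sense of the GDF representation mentioned above, here is a small sketch (not the thesis' feature-extraction pipeline) that computes the group delay function of a single analysis frame without explicit phase unwrapping, using the standard identity tau(w) = (X_R·Y_R + X_I·Y_I) / |X|^2, where Y is the DFT of n·x[n]; the sampling rate, frame length and test signal are illustrative.

```python
# Group delay function (negative frequency-derivative of the phase spectrum)
# of a short frame, computed without unwrapping the phase.
import numpy as np

def group_delay(frame):
    n = np.arange(len(frame))
    X = np.fft.rfft(frame)
    Y = np.fft.rfft(n * frame)             # DFT of n * x[n]
    eps = 1e-12                            # guard against division by zero
    return (X.real * Y.real + X.imag * Y.imag) / (np.abs(X) ** 2 + eps)

# 25 ms frame of a toy two-tone "speech-like" signal at 16 kHz.
fs = 16000
t = np.arange(int(0.025 * fs)) / fs
frame = np.sin(2 * np.pi * 500 * t) + 0.5 * np.sin(2 * np.pi * 1500 * t)
frame *= np.hamming(len(frame))            # analysis window
print(group_delay(frame)[:10])             # GDF values (in samples) for the lowest bins
```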
19

Novel Pulse Train Generation Method and Signal Analysis

Mao, Chia-Wei 30 August 2011 (has links)
In this thesis we use a pulse shaping system to generate pulse trains, and we use empirical mode decomposition (EMD) together with the short-time Fourier transform (STFT) to analyze terahertz radiation signals. The pulse shaping system modulates the amplitude and phase of the light, which provides the pulse train generation. Compared with other methods, our method first improves the stability of the time-delay control; second, it makes it easier to control the time delay and the number of pulses in the pulse train. In the past, the time at which the high-frequency content occurs was found by inspecting the terahertz waveform directly in the time domain, but if that time is close to the time of the peak power of the terahertz radiation, it cannot be identified this way. The STFT can reveal the relationship between intensity and time, but if the modes in the signal have different frequency widths, the STFT has to use different time windows to get the best frequency and time resolution. Windows of different widths give different frequency resolutions, and the intensity-time relationship changes with the frequency resolution, so different resolutions give different results; a new signal analysis method is therefore needed. To solve this problem, we use EMD to decompose the different modes of the terahertz signal into separate intrinsic mode functions (IMFs) and then analyze them with the STFT to find when the high-frequency content of the terahertz radiation occurs. Because the modes are separated into different IMFs, the STFT can use the same time window for all of them. We expect this method to be beneficial when applied to narrow-band, frequency-tunable THz wave generation.
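The decompose-then-transform idea can be sketched as follows with a synthetic signal; this is not the thesis code, and it assumes the PyEMD package (published on PyPI as EMD-signal) and SciPy. The sampling rate, window length and test signal are illustrative.

```python
# EMD followed by an STFT with one common window per IMF, so the time at which
# high-frequency content appears can be read off per mode.
import numpy as np
from PyEMD import EMD
from scipy.signal import stft

fs = 1000.0
t = np.arange(0.0, 1.0, 1.0 / fs)
# Toy signal: a slow oscillation plus a late high-frequency burst.
sig = np.sin(2 * np.pi * 5 * t)
sig[700:800] += 0.5 * np.sin(2 * np.pi * 120 * t[700:800])

imfs = EMD().emd(sig)                              # rows are IMFs, fastest first
for k, imf in enumerate(imfs):
    f, tau, Z = stft(imf, fs=fs, nperseg=128)      # same window for every IMF
    i, j = np.unravel_index(np.abs(Z).argmax(), Z.shape)
    print(f"IMF {k}: strongest component near {f[i]:.0f} Hz at t = {tau[j]:.2f} s")
```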
20

Acoustic Analysis of Nearshore Breaking Wave Bubbles Simulated by Piston-Type Wavemaker

Chan, Hsiang-Chih 30 July 2002 (has links)
This article studies ambient noise in the surf zone, simulated by a piston-type wavemaker in a tank. The experiment analyzed breaking-wave bubbles by using a hydrophone to record the acoustic signal, while the bubbles were filmed with a digital video camera to observe their distribution. The tank, located in the College of Marine Sciences, National Sun Yat-sen University, measures 35 m × 1 m × 1.2 m, and the slope of the simulated seabed is 1:5. The studied parameters of the ambient noise generated by breaking-wave bubbles were wave height, period, and water depth. The short-time Fourier transform was applied to obtain the acoustic spectrum of the bubbles, and MATLAB programs were used to calculate the mean sound pressure level and to determine the number of bubbles. Bubbles with resonant frequencies from 0.5 to 10 kHz were studied, counted from peaks in the spectrum. The number of bubbles generated by breaking waves could be estimated from the bubble energy distributions. The sound pressure level of the ambient noise was highly related to the wave height and period, with a correlation coefficient of 0.7. The results were compared with other studies of ambient noise in the surf zone.
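A rough sketch of the two quantities computed from the hydrophone record is given below; it is not the original MATLAB code, and the sampling rate, toy signal and peak-prominence threshold are assumptions.

```python
# Mean sound pressure level and a peak count of the time-averaged STFT
# spectrum between 0.5 and 10 kHz (used here as a stand-in for counting
# bubble resonances from spectral peaks).
import numpy as np
from scipy.signal import stft, find_peaks

def mean_spl(pressure_pa, p_ref=1e-6):
    """Mean sound pressure level in dB re 1 uPa (underwater reference)."""
    p_rms = np.sqrt(np.mean(pressure_pa ** 2))
    return 20 * np.log10(p_rms / p_ref)

def count_spectral_peaks(pressure_pa, fs, fmin=500.0, fmax=10000.0):
    """Count peaks of the time-averaged STFT magnitude within [fmin, fmax]."""
    f, _, Z = stft(pressure_pa, fs=fs, nperseg=4096)
    spec_db = 20 * np.log10(np.abs(Z).mean(axis=1) + 1e-20)
    band = (f >= fmin) & (f <= fmax)
    peaks, _ = find_peaks(spec_db[band], prominence=3.0)   # >= 3 dB above surroundings
    return len(peaks)

# Toy hydrophone record: weak broadband noise plus two bubble-like tones.
fs = 48000
t = np.arange(0.0, 1.0, 1.0 / fs)
rng = np.random.default_rng(1)
p = (0.01 * rng.standard_normal(t.size)
     + 0.05 * np.sin(2 * np.pi * 2000 * t)
     + 0.05 * np.sin(2 * np.pi * 6000 * t))
print(mean_spl(p), count_spectral_peaks(p, fs))
```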
